Google Cloud Certified - Professional Cloud DevOps Engineer Exam online practice
Last updated: April 21, 2026
These online practice questions let you gauge your knowledge of the Google Professional Cloud DevOps Engineer exam before deciding whether to register for it.
We hope you pass the exam with a 100% success rate and cut your preparation time by 35% by choosing the Professional Cloud DevOps Engineer dumps (latest real exam questions), which currently include the 50 most recent exam questions and answers.

Answer:
Explanation:
The best option for mitigating the incident impact on users and enabling the developers to troubleshoot the issue is to change the selector on the Service app-svc to app: my-app, version: blue. A Service is a resource that defines how to access a set of Pods. A selector is a field that specifies which Pods are selected by the Service. By changing the selector on the Service app-svc to app: my-app, version: blue, you can ensure that the Service only routes traffic to the Pods that have both labels app: my-app and version: blue. These Pods belong to the Deployment app-blue, which uses the previous version of the application. This way, you can mitigate the incident impact on users by switching back to the working version of the application. You can also enable the developers to troubleshoot the issue with the new version of the application in the Deployment app-green without affecting users.
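The selector change described above can be applied in place with a one-line `kubectl patch` (a sketch; the Service name app-svc and the labels app: my-app and version: blue are taken from the question, and the rollback assumes the blue Deployment's Pods are still running):

```shell
# Point the Service back at the blue (previous) Deployment's Pods.
# A Pod must carry BOTH labels to be selected by the Service.
kubectl patch service app-svc \
  -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'

# Verify which Pod IPs the Service now routes to
kubectl get endpoints app-svc
```

Because the green Deployment keeps running, developers can still reach its Pods directly (e.g., via port-forwarding) to troubleshoot without serving user traffic.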
Answer:
Explanation:
The best option for making the third-party application work while following Google-recommended security practices is to add a rule to set the iam.disableServiceAccountKeyCreation policy to off in your project and create a key. The iam.disableServiceAccountKeyCreation policy is an organization policy that controls whether service account keys can be created in a project or organization. By default, this policy is set to on, which means that service account keys cannot be created. However, you can override this policy at a lower level, such as a project, by adding a rule to set it to off. This way, you can create a service account key for your project without affecting other projects or organizations. You should also follow the best practices for managing service account keys, such as rotating them regularly, storing them securely, and deleting them when they are no longer needed.
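As a sketch, the project-level override and subsequent key creation might look like the following gcloud commands (PROJECT_ID and the service account email are placeholders; disabling enforcement requires appropriate Organization Policy Administrator permissions):

```shell
# Turn off enforcement of the key-creation constraint for this project only
gcloud resource-manager org-policies disable-enforce \
  iam.disableServiceAccountKeyCreation \
  --project=PROJECT_ID

# Create a key for the service account the third-party application will use
gcloud iam service-accounts keys create key.json \
  --iam-account=my-sa@PROJECT_ID.iam.gserviceaccount.com
```

The override is scoped to the single project, so the stricter default remains in force everywhere else in the organization.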
Answer:
Explanation:
The correct answer is A. Add the Logs Writer role to the service account.
To use Cloud Logging, the service account attached to the Compute Engine instance must have the necessary permissions to write log entries. The Logs Writer role (roles/logging.logWriter) provides this permission. You can grant this role to the user-managed service account at the project, folder, or organization level1.
Private Google Access is not required for Cloud Logging, as it allows instances without external IP addresses to access Google APIs and services2. The default Compute Engine service account already has the Logs Writer role, but it is not a recommended practice to use it for user applications3. Exporting the service account key and configuring the agents to use the key is not a secure way of authenticating the service account, as it exposes the key to potential compromise4.
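The role grant described above can be sketched as a single gcloud command (PROJECT_ID and the service-account email are placeholders):

```shell
# Grant the Logs Writer role to the user-managed service account
# attached to the Compute Engine instances
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/logging.logWriter"
```

After the binding propagates, the Ops Agent on the instances can write log entries without any key files.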
Reference:
1: Access control with IAM | Cloud Logging | Google Cloud
2: Private Google Access overview | VPC | Google Cloud
3: Service accounts | Compute Engine Documentation | Google Cloud
4: Best practices for securing service accounts | IAM Documentation | Google Cloud
Answer:
Explanation:
Comprehensive and Detailed Explanation From General Cloud Monitoring Knowledge:
The key requirement is to distinguish between failures in your own service and those caused by an underlying Google Cloud service.
A. Enable Personalized Service Health annotations on the dashboard: Google Cloud Personalized Service Health provides information about incidents affecting Google Cloud services that may impact your projects. When enabled and integrated with Monitoring, it can display these events as
annotations on your dashboards, overlaying them on your service's metrics charts. This allows you to correlate dips in your service's performance with known Google Cloud service issues, directly addressing the need to distinguish failure origins.
B. Create an alerting policy for the system error metrics: Alerting policies are for notifications when metrics cross thresholds. While useful for detecting issues in your own service, they don't inherently distinguish the cause between your service and a Google Cloud dependency without further context, which option A provides.
C. Create a log-based metric to track cloud service errors, and display the metric on the dashboard: You could try to create log-based metrics from logs that might indicate a cloud service error (e.g., specific API error codes from Google Cloud services). However, this is indirect, might require complex parsing, and Personalized Service Health is a more direct and authoritative source for Google Cloud service disruptions.
D. Create a logs widget to display system errors from Cloud Logging on the dashboard: Similar to C, displaying raw system error logs can be helpful for troubleshooting your own service, but it doesn't provide a clear, curated view of whether a Google Cloud service itself is having an issue. It would require manual interpretation to link these logs to a potential Google Cloud outage.
Personalized Service Health is specifically designed to provide visibility into Google Cloud service incidents relevant to your resources. Integrating this with Monitoring dashboards is the most direct way to achieve the stated goal.
Reference (Based on Cloud Monitoring and Personalized Service Health features):
Personalized Service Health Overview: https://cloud.google.com/service-health/docs/overview
Integrating with Cloud Monitoring: Documentation often shows how to enable annotations for Personalized Service Health events on Monitoring charts. This allows a visual correlation between your service metrics and Google Cloud service health events. "Personalized Service Health integrates with Cloud Monitoring so you can see service health events alongside your metrics."
"You can enable annotations on your metric charts to display relevant Personalized Service Health events."
This feature directly helps differentiate between issues in your application versus issues in the underlying Google Cloud services.
Answer:
Explanation:
https://cloud.google.com/logging/docs/routing/overview
Answer: A
Explanation:
The correct answer is A. Communicate your intent to the incident team. Perform a load analysis to
determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.
This answer follows the Google-recommended practices for incident management, as described in the Chapter 9 - Incident Response, Google SRE Book1.
According to this source, some of the best practices are:
Maintain a clear line of command. Designate clearly defined roles. Keep a working record of debugging and mitigation as you go. Declare incidents early and often.
Communicate your intent before taking any action that might affect the service or the incident response. This helps to avoid confusion, duplication of work, or unintended consequences.
Perform a load analysis before removing a node from the load balancer pool, as this might affect the capacity and performance of the service. Scale the pool as necessary to handle the expected load.
Drain traffic from the unhealthy node before removing it from service, as this helps to avoid dropping requests or causing errors for users.
Answer A follows these best practices by communicating the intent to the incident team, performing a load analysis and scaling the pool, and draining traffic from the unhealthy node before removing it.
Answer B does not follow the best practice of performing a load analysis before adding or removing nodes, as this might cause overloading or underutilization of resources.
Answer C does not follow the best practice of communicating the intent before taking any action, as this might cause confusion or conflict with other responders.
Answer D does not follow the best practice of draining traffic from the unhealthy node before removing it, as this might cause errors for users.
Reference: 1: Chapter 9 - Incident Response, Google SRE Book
Answer:
Explanation:
The best option for giving developers the ability to test the latest revisions of the service before the service is exposed to customers is to run the gcloud run deploy booking-engine --no-traffic --tag dev command and use the https://dev----booking-engine-abcdef.a.run.app URL for testing. The gcloud run deploy command deploys a new revision of your service or updates an existing service. By using the --no-traffic flag, you can prevent any traffic from being sent to the new revision. By using the --tag flag, you can assign a tag to the new revision, such as dev. This way, you can create a new revision of your service without affecting your customers. You can also use the tag-based URL (e.g., https://dev----booking-engine-abcdef.a.run.app) to access and test the new revision.
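The deployment described above can be sketched as follows (IMAGE_URL is a placeholder for the service's container image):

```shell
# Deploy a new revision that receives zero production traffic,
# tagged "dev" so it gets its own stable, tag-scoped URL
gcloud run deploy booking-engine \
  --image=IMAGE_URL \
  --no-traffic \
  --tag=dev

# Developers then test against the tag-scoped URL, e.g.:
#   https://dev----booking-engine-abcdef.a.run.app
# Once validated, shift traffic to the tagged revision:
gcloud run services update-traffic booking-engine --to-tags=dev=100
```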
Answer:
Explanation:
https://cloud.google.com/logging/docs/access-control
The logging.configWriter role grants permissions to create, update, and delete log exports. This is the correct role to give team members who need to export logs2.
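A sketch of the grant and a subsequent export, with PROJECT_ID, the user email, and the sink destination as placeholders:

```shell
# Allow a team member to create, update, and delete log sinks (exports)
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:analyst@example.com" \
  --role="roles/logging.configWriter"

# That member can then create an export, e.g. to a BigQuery dataset
gcloud logging sinks create my-export \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/my_dataset
```

Note that logging.configWriter grants export configuration rights without granting broader log-viewing or project-editing permissions, following least privilege.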
Answer:
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption
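As a minimal sketch, an HPA targeting CPU utilization can be created imperatively (the Deployment name my-app and the numeric targets are illustrative):

```shell
# Autoscale the Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across Pods
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=70

# Inspect current vs. target utilization and the replica count
kubectl get hpa my-app
```

The Pods must declare CPU resource requests for utilization-based scaling to work.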
Answer:
Answer:
Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
Answer:
Explanation:
The most probable cause of the database timeout is an increased number of connections to the Cloud SQL instance. This could happen if the application does not close connections properly or if it creates too many connections at once. You can check the number of connections to the Cloud SQL
instance using Cloud Monitoring or Cloud SQL Admin API.
Answer:
Explanation:
https://cloud.google.com/architecture/continuous-delivery-toolchain-spinnaker-cloud
https://spinnaker.io/guides/user/pipeline/triggers/pubsub/
The most efficient way to build an automated pipeline that deploys the application when the image is updated is to use Cloud Pub/Sub to trigger a Spinnaker pipeline. This way, you can leverage the built-in integration between GCR and Cloud Pub/Sub, and use Spinnaker as a continuous delivery platform for deploying your application.
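The Pub/Sub plumbing for this pipeline can be sketched as follows (Container Registry publishes push notifications to a per-project topic named gcr; the subscription name spinnaker-gcr-trigger is a hypothetical choice that the Spinnaker Pub/Sub trigger would then reference):

```shell
# Create the "gcr" topic if Container Registry has not already created it
gcloud pubsub topics create gcr

# Create a subscription for Spinnaker's Pub/Sub trigger to consume
gcloud pubsub subscriptions create spinnaker-gcr-trigger --topic=gcr
```

In Spinnaker, a pipeline is then configured with a Pub/Sub trigger pointing at this subscription, so every image push starts a deployment automatically.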
Answer:
Explanation:
https://sre.google/workbook/implementing-slos/
https://cloud.google.com/architecture/adopting-slos/
Latency is commonly measured as a distribution. Given a distribution, you can measure various percentiles. For example, you might measure the number of requests that are slower than the historical 99th percentile.
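The percentile measurement described above can be illustrated with a small shell sketch over a hypothetical latency sample; the 500 ms "historical p99" threshold is an assumed value for the example:

```shell
# Hypothetical sample of request latencies (ms), one per line
printf '%s\n' 120 95 110 3000 105 98 102 97 101 99 > latencies.txt

# Current p99: sort numerically and take the ceil(0.99 * N)-th value
N=$(wc -l < latencies.txt)
P99_LINE=$(( (99 * N + 99) / 100 ))   # integer ceiling of 0.99 * N
P99=$(sort -n latencies.txt | sed -n "${P99_LINE}p")
echo "current p99 latency: ${P99} ms"

# SLI example: count requests slower than a (hypothetical) historical p99 of 500 ms
SLOW=$(awk '$1 > 500 {c++} END {print c+0}' latencies.txt)
echo "${SLOW} requests slower than the historical p99"
```

In this sample the single 3000 ms outlier dominates the tail, which is exactly why SLOs are usually stated over percentiles of a distribution rather than averages.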
Answer:
Explanation:
Comprehensive and Detailed Explanation:
To ensure all logs (existing and future) are automatically processed by the SIEM system, the best approach is:
Use an organization-level aggregated sink → Captures logs from all existing and future projects automatically.
Send logs to a Pub/Sub topic → Since the SIEM ingests logs via Pub/Sub, this ensures logs are streamed in real-time.
Set an inclusion filter → To capture all logs needed by the security team.
Why not other options?
B (Project-level logging sink) ❌ → Requires manual setup per project, which doesn’t scale for new projects.
C (Log bucket instead of Pub/Sub) ❌ → SIEM is expecting real-time log ingestion via Pub/Sub, not a storage-based approach.
D (Folder-level logging sink) ❌ → Only applies to specific folders, not the entire organization.
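The aggregated-sink setup above can be sketched as follows (ORG_ID, PROJECT_ID, the topic name siem-logs, and the inclusion filter are placeholders):

```shell
# Organization-level aggregated sink: routes logs from ALL current and
# future projects under the organization to the SIEM's Pub/Sub topic
gcloud logging sinks create siem-sink \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/siem-logs \
  --organization=ORG_ID \
  --include-children \
  --log-filter='severity>=DEFAULT'

# Finally, grant the sink's writer identity (printed by the command above)
# the Pub/Sub Publisher role on the destination topic.
```

The --include-children flag is what makes the sink aggregated: without it, an organization-level sink would only export the organization's own audit logs, not its child projects' logs.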
Official Reference:
Aggregated Sinks for Cloud Logging
Exporting Logs to SIEM via Pub/Sub