
Google Professional Cloud DevOps Engineer 시험

Google Cloud Certified - Professional Cloud DevOps Engineer Exam 온라인 연습

Last updated: April 21, 2026

Through these online practice questions, you can gauge how well you know the material for the Google Professional Cloud DevOps Engineer exam before deciding whether to register for it.

To pass the exam and save up to 35% of your preparation time, use the Professional Cloud DevOps Engineer dumps (latest real exam questions), which currently include the 50 most recent exam questions and answers.


Question No : 1


You manage an application that runs in Google Kubernetes Engine (GKE) and uses the blue/green deployment methodology. Extracts of the Kubernetes manifests are shown below:



The Deployment app-green was updated to use the new version of the application. During post-deployment monitoring, you notice that the majority of user requests are failing. You did not observe this behavior in the testing environment. You need to mitigate the incident impact on users and enable the developers to troubleshoot the issue.
What should you do?

Answer:
Explanation:
The best option for mitigating the incident impact on users and enabling the developers to troubleshoot the issue is to change the selector on the Service app-svc to app: my-app, version: blue. A Service is a resource that defines how to access a set of Pods. A selector is a field that specifies which Pods are selected by the Service. By changing the selector on the Service app-svc to app: my-app, version: blue, you can ensure that the Service only routes traffic to the Pods that have both labels app: my-app and version: blue. These Pods belong to the Deployment app-blue, which uses the previous version of the application. This way, you can mitigate the incident impact on users by switching back to the working version of the application. You can also enable the developers to troubleshoot the issue with the new version of the application in the Deployment app-green without affecting users.
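Assuming the Service and label names shown in the question's manifests (app-svc, my-app, and the blue/green version labels), the selector change described above could be applied with a sketch like this:

```shell
# Repoint the Service's selector at the known-good blue Deployment.
# A merge patch replaces the listed selector keys in place.
kubectl patch service app-svc \
  --type merge \
  -p '{"spec":{"selector":{"app":"my-app","version":"blue"}}}'

# Verify that the Service now targets only the blue Pods.
kubectl get endpoints app-svc
```

Because only the Service object changes, the failing app-green Pods keep running and remain available to the developers for troubleshooting.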

Question No : 2


A third-party application needs a service account key to work properly. When you try to export the key from your cloud project, you receive the error "The organization policy constraint iam.disableServiceAccountKeyCreation is enforced." You need to make the third-party application work while following Google-recommended security practices.
What should you do?

Answer:
Explanation:
The best option for making the third-party application work while following Google-recommended security practices is to add a rule to set the iam.disableServiceAccountKeyCreation policy to off in your project and create a key. The iam.disableServiceAccountKeyCreation policy is an organization policy that controls whether service account keys can be created in a project or organization. By default, this policy is set to on, which means that service account keys cannot be created. However, you can override this policy at a lower level, such as a project, by adding a rule to set it to off. This way, you can create a service account key for your project without affecting other projects or organizations. You should also follow the best practices for managing service account keys, such as rotating them regularly, storing them securely, and deleting them when they are no longer needed.
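As a hedged sketch of the steps above (PROJECT_ID and the service account email are placeholders), the boolean constraint can be overridden at the project level and the key then created:

```shell
# Stop enforcing the constraint for this project only;
# the organization-level policy stays in effect elsewhere.
gcloud resource-manager org-policies disable-enforce \
    iam.disableServiceAccountKeyCreation \
    --project=PROJECT_ID

# Create the key for the third-party application's service account.
gcloud iam service-accounts keys create key.json \
    --iam-account=app-sa@PROJECT_ID.iam.gserviceaccount.com
```

Remember to rotate the key regularly, store it securely, and delete it when it is no longer needed.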

Question No : 3


You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance, but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices.
What should you do?

Answer:
Explanation:
The correct answer is A. Add the Logs Writer role to the service account.
To use Cloud Logging, the service account attached to the Compute Engine instance must have the necessary permissions to write log entries. The Logs Writer role (roles/logging.logWriter) provides this permission. You can grant this role to the user-managed service account at the project, folder, or organization level [1].
Private Google Access is not required for Cloud Logging, as it allows instances without external IP addresses to access Google APIs and services [2]. The default Compute Engine service account already has the Logs Writer role, but it is not a recommended practice to use it for user applications [3]. Exporting the service account key and configuring the agents to use the key is not a secure way of authenticating the service account, as it exposes the key to potential compromise [4].
Reference:
1: Access control with IAM | Cloud Logging | Google Cloud
2: Private Google Access overview | VPC | Google Cloud
3: Service accounts | Compute Engine Documentation | Google Cloud
4: Best practices for securing service accounts | IAM Documentation | Google Cloud
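Granting the role described above could look like the following sketch (PROJECT_ID and the service account email are placeholders):

```shell
# Grant the user-managed service account permission to write log entries.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:app-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/logging.logWriter"
```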

Question No : 4


Your company has recently experienced several production service issues. You need to create a Cloud Monitoring dashboard to troubleshoot the issues, and you want to use the dashboard to distinguish between failures in your own service and those caused by a Google Cloud service that you use.
What should you do?

Answer:
Explanation:
Comprehensive and Detailed Explanation From General Cloud Monitoring Knowledge:
The key requirement is to distinguish between failures in your own service and those caused by an underlying Google Cloud service.
A. Enable Personalized Service Health annotations on the dashboard: Google Cloud Personalized Service Health provides information about incidents affecting Google Cloud services that may impact your projects. When enabled and integrated with Monitoring, it can display these events as
annotations on your dashboards, overlaying them on your service's metrics charts. This allows you to correlate dips in your service's performance with known Google Cloud service issues, directly addressing the need to distinguish failure origins.
B. Create an alerting policy for the system error metrics: Alerting policies are for notifications when metrics cross thresholds. While useful for detecting issues in your own service, they don't inherently distinguish the cause between your service and a Google Cloud dependency without further context, which option A provides.
C. Create a log-based metric to track cloud service errors, and display the metric on the dashboard: You could try to create log-based metrics from logs that might indicate a cloud service error (e.g., specific API error codes from Google Cloud services). However, this is indirect, might require complex parsing, and Personalized Service Health is a more direct and authoritative source for Google Cloud service disruptions.
D. Create a logs widget to display system errors from Cloud Logging on the dashboard: Similar to C, displaying raw system error logs can be helpful for troubleshooting your own service, but it doesn't provide a clear, curated view of whether a Google Cloud service itself is having an issue. It would require manual interpretation to link these logs to a potential Google Cloud outage.
Personalized Service Health is specifically designed to provide visibility into Google Cloud service incidents relevant to your resources. Integrating this with Monitoring dashboards is the most direct way to achieve the stated goal.
Reference (Based on Cloud Monitoring and Personalized Service Health features):
Personalized Service Health Overview: https://cloud.google.com/service-health/docs/overview
Integrating with Cloud Monitoring: Documentation often shows how to enable annotations for Personalized Service Health events on Monitoring charts. This allows a visual correlation between your service metrics and Google Cloud service health events. "Personalized Service Health integrates with Cloud Monitoring so you can see service health events alongside your metrics."
"You can enable annotations on your metric charts to display relevant Personalized Service Health events."
This feature directly helps differentiate between issues in your application versus issues in the underlying Google Cloud services.

Question No : 5


You are working with a government agency that requires you to archive application logs for seven years. You need to configure Stackdriver to export and store the logs while minimizing costs of storage.
What should you do?

Answer:
Explanation:
https://cloud.google.com/logging/docs/routing/overview
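The answer choice itself is not reproduced here, but the low-cost archival pattern the routing documentation describes is a log sink that exports to a Cloud Storage bucket with a cold storage class. A hedged sketch (bucket name, sink name, and location are placeholders):

```shell
# Create the archive bucket with a cold storage class for long-term retention.
gsutil mb -c coldline -l us-central1 gs://my-log-archive-bucket

# Route logs to the bucket via a sink.
gcloud logging sinks create archive-sink \
    storage.googleapis.com/my-log-archive-bucket

# The sink writes as a generated service account; look up its identity
# so you can grant it object-creation access on the bucket.
gcloud logging sinks describe archive-sink --format='value(writerIdentity)'
```

Grant the printed writer identity the Storage Object Creator role on the bucket so exported entries can be written.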

Question No : 6


Communicate your actions to the incident team.

Answer: A
Explanation:
The correct answer is A. Communicate your intent to the incident team. Perform a load analysis to
determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.
This answer follows the Google-recommended practices for incident management, as described in Chapter 9 - Incident Response, Google SRE Book [1].
According to this source, some of the best practices are:
Maintain a clear line of command. Designate clearly defined roles. Keep a working record of debugging and mitigation as you go. Declare incidents early and often.
Communicate your intent before taking any action that might affect the service or the incident response. This helps to avoid confusion, duplication of work, or unintended consequences.
Perform a load analysis before removing a node from the load balancer pool, as this might affect the capacity and performance of the service. Scale the pool as necessary to handle the expected load.
Drain traffic from the unhealthy node before removing it from service, as this helps to avoid dropping requests or causing errors for users.
Answer A follows these best practices by communicating the intent to the incident team, performing a load analysis and scaling the pool, and draining traffic from the unhealthy node before removing it.
Answer B does not follow the best practice of performing a load analysis before adding or removing nodes, as this might cause overloading or underutilization of resources.
Answer C does not follow the best practice of communicating the intent before taking any action, as this might cause confusion or conflict with other responders.
Answer D does not follow the best practice of draining traffic from the unhealthy node before removing it, as this might cause errors for users.
Reference: 1: Chapter 9 - Incident Response, Google SRE Book

Question No : 7


You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization.
What should you do?

Answer:
Explanation:
The best option for giving developers the ability to test the latest revisions of the service before the service is exposed to customers is to run the gcloud run deploy booking-engine --no-traffic --tag dev command and use the https://dev----booking-engine-abcdef.a.run.app URL for testing. The gcloud run deploy command deploys a new revision of your service or updates an existing service. The --no-traffic flag prevents any traffic from being sent to the new revision, and the --tag flag assigns a tag, such as dev, to it. This way, you can create a new revision of your service without affecting your customers, and use the tag-based URL (e.g., https://dev----booking-engine-abcdef.a.run.app) to access and test the new revision.
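A sketch of the deployment and test steps the explanation describes (the image path is a placeholder; the tag URL is the one quoted in the explanation):

```shell
# Deploy a new revision without shifting any production traffic to it,
# and tag it "dev" so it gets its own tag-specific URL.
gcloud run deploy booking-engine \
    --image gcr.io/PROJECT_ID/booking-engine:latest \
    --no-traffic \
    --tag dev

# Exercise the tagged revision directly; customers are unaffected.
curl -s https://dev----booking-engine-abcdef.a.run.app/
```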

Question No : 8


You manage an application that is writing logs to Stackdriver Logging. You need to give some team members the ability to export logs.
What should you do?

Answer:
Explanation:
https://cloud.google.com/logging/docs/access-control
The logging.configWriter role grants permissions to create, update, and delete log exports. This is the correct role to give team members who need to export logs.
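Granting the role to the team could look like this sketch (PROJECT_ID and the group address are placeholders):

```shell
# Allow the team to create and manage log export sinks.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="group:log-exporters@example.com" \
    --role="roles/logging.configWriter"
```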

Question No : 9


You are performing a semiannual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month over the next six months. Your service is fully containerized and runs on Google Cloud Platform (GCP), using a Google Kubernetes Engine (GKE) Standard regional cluster across three zones with cluster autoscaler enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or of a zone failure, while avoiding unnecessary costs.
How should you prepare to handle the predicted growth?

Answer:
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption.
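A minimal sketch of configuring Pod-level autoscaling on CPU (the Deployment name and thresholds are hypothetical; the cluster autoscaler then adds nodes as Pods demand capacity):

```shell
# Scale the flagship Deployment between 3 and 30 replicas,
# targeting 60% average CPU utilization per Pod.
kubectl autoscale deployment flagship-service \
    --cpu-percent=60 --min=3 --max=30

# Inspect current vs. target utilization and the replica count.
kubectl get hpa flagship-service
```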

Question No : 10


You are running a real-time gaming application on Compute Engine that has a production and testing environment. Each environment has its own Virtual Private Cloud (VPC) network. The application frontend and backend servers are located on different subnets in the environment's VPC. You suspect there is a malicious process communicating intermittently in your production frontend servers. You want to ensure that network traffic is captured for analysis.
What should you do?

Answer:

Question No : 11


You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems.
What should you do?

Answer:
Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
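With billing export to BigQuery enabled (as the linked page describes), per-system costs can be broken out by labeling each system's instances and grouping on the label. A hedged sketch (instance, label, dataset, and export-table names are placeholders):

```shell
# Label instances so exported billing rows can be attributed to a system.
gcloud compute instances update system-a-vm-1 \
    --zone=us-central1-a \
    --update-labels=system=system-a

# Sum exported costs per system label in the billing export table.
bq query --use_legacy_sql=false '
SELECT (SELECT value FROM UNNEST(labels) WHERE key = "system") AS system,
       SUM(cost) AS total_cost
FROM `PROJECT_ID.billing_dataset.gcp_billing_export_v1_XXXXXX`
GROUP BY system'
```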

Question No : 12


You support an application deployed on Compute Engine. The application connects to a Cloud SQL instance to store and retrieve data. After an update to the application, users report errors showing database timeout messages. The number of concurrent active users remained stable. You need to find the most probable cause of the database timeout.
What should you do?

Answer:
Explanation:
The most probable cause of the database timeout is an increased number of connections to the Cloud SQL instance. This could happen if the application does not close connections properly or if it creates too many connections at once. You can check the number of connections to the Cloud SQL
instance using Cloud Monitoring or Cloud SQL Admin API.

Question No : 13


Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort.
What should you do?

Answer:
Explanation:
https://cloud.google.com/architecture/continuous-delivery-toolchain-spinnaker-cloud https://spinnaker.io/guides/user/pipeline/triggers/pubsub/
The most efficient way to build an automated pipeline that deploys the application when the image is updated is to use Cloud Pub/Sub to trigger a Spinnaker pipeline. This way, you can leverage the built-in integration between GCR and Cloud Pub/Sub, and use Spinnaker as a continuous delivery platform for deploying your application.
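Wiring up the trigger could look like the following sketch: Container Registry publishes push events to a per-project Pub/Sub topic named gcr, and Spinnaker subscribes to it (PROJECT_ID and the subscription name are placeholders):

```shell
# Create the "gcr" topic if GCR has not already created it in the project.
gcloud pubsub topics create gcr --project=PROJECT_ID

# Create the subscription Spinnaker's pub/sub trigger will consume.
gcloud pubsub subscriptions create spinnaker-gcr-trigger \
    --topic=gcr --project=PROJECT_ID
```

The subscription name is then referenced in Spinnaker's pub/sub trigger configuration, as described in the linked Spinnaker guide.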

Question No : 14


You are managing an application that exposes an HTTP endpoint without using a load balancer. The latency of the HTTP responses is important for the user experience. You want to understand what HTTP latencies all of your users are experiencing. You use Stackdriver Monitoring.
What should you do?

Answer:
Explanation:
https://sre.google/workbook/implementing-slos/
https://cloud.google.com/architecture/adopting-slos/
Latency is commonly measured as a distribution. Given a distribution, you can measure various percentiles. For example, you might measure the number of requests that are slower than the historical 99th percentile.

Question No : 15


Your company runs services on Google Cloud. Each team runs their applications in a dedicated project. New teams and projects are created regularly. Your security team requires that all logs are processed by a security information and event management (SIEM) system. The SIEM ingests logs by using Pub/Sub. You must ensure that all existing and future logs are scanned by the SIEM.
What should you do?

Answer:
Explanation:
Comprehensive and Detailed Explanation:
To ensure all logs (existing and future) are automatically processed by the SIEM system, the best approach is:
Use an organization-level aggregated sink → Captures logs from all existing and future projects automatically.
Send logs to a Pub/Sub topic → Since the SIEM ingests logs via Pub/Sub, this ensures logs are streamed in real-time.
Set an inclusion filter → To capture all logs needed by the security team.
Why not other options?
B (Project-level logging sink) ❌ → Requires manual setup per project, which doesn’t scale for new projects.
C (Log bucket instead of Pub/Sub) ❌ → SIEM is expecting real-time log ingestion via Pub/Sub, not a storage-based approach.
D (Folder-level logging sink) ❌ → Only applies to specific folders, not the entire organization.
Official Reference: Aggregated Sinks for Cloud Logging; Exporting Logs to SIEM via Pub/Sub
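The three steps above can be sketched as a single organization-level aggregated sink (ORG_ID, PROJECT_ID, and the topic name are placeholders):

```shell
# Route logs from all current and future projects in the organization
# to the SIEM's Pub/Sub topic; an empty filter includes all entries.
gcloud logging sinks create siem-sink \
    pubsub.googleapis.com/projects/PROJECT_ID/topics/siem-ingest \
    --organization=ORG_ID \
    --include-children \
    --log-filter=''

# Find the sink's writer identity so it can be granted
# roles/pubsub.publisher on the topic.
gcloud logging sinks describe siem-sink \
    --organization=ORG_ID --format='value(writerIdentity)'
```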
