Exam Dumps
Every month, we help more than 1,000 people prepare well for their exams and pass them.

Amazon SOA-C03 시험

AWS Certified CloudOps Engineer - Associate 온라인 연습

Last updated: February 14, 2026

You can use these online practice questions to gauge how well you know the Amazon SOA-C03 exam material before deciding whether to register for the exam.

If you want to pass the exam 100% and save 35% of your preparation time, choose the SOA-C03 dumps (latest real exam questions), which currently include 65 up-to-date exam questions and answers.


Question No : 1


A company deploys an application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The company wants to protect the application from SQL injection attacks.
Which solution will meet this requirement?

Answer:
Explanation:
The AWS Cloud Operations and Security documentation confirms that AWS WAF (Web Application Firewall) is designed to protect web applications from application-layer threats, including SQL injection, cross-site scripting (XSS), and other OWASP Top 10 vulnerabilities.
When integrated with an Application Load Balancer, AWS WAF inspects incoming traffic using rule groups. The AWS Managed Rules for SQL Injection Protection provide preconfigured, continuously updated filters that detect and block malicious SQL patterns.
AWS Shield (Standard or Advanced) defends against DDoS attacks, not application-layer SQL attacks,
and vulnerability scanners (Option C) only detect, not prevent, exploitation.
Thus, Option D provides the correct, managed, and automated protection aligned with AWS best practices.
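As an illustrative sketch of the configuration described above (the ACL and metric names are hypothetical; `AWSManagedRulesSQLiRuleSet` is AWS's published identifier for the SQL injection managed rule group):

```python
# Sketch of a WAF web ACL rule that attaches the AWS managed SQL injection
# rule group. Names other than the rule group identifier are illustrative.
sqli_rule = {
    "Name": "AWS-AWSManagedRulesSQLiRuleSet",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }
    },
    # Let the rule group's own block actions apply instead of overriding them.
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "SQLiProtection",
    },
}

web_acl = {
    "Name": "app-web-acl",            # illustrative
    "Scope": "REGIONAL",              # REGIONAL scope is required for ALB association
    "DefaultAction": {"Allow": {}},   # allow traffic that no rule blocks
    "Rules": [sqli_rule],
}
```

The web ACL is then associated with the ALB's ARN, after which WAF inspects every request before it reaches the targets.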
Reference: AWS Cloud Operations & Security Guide, Protecting Applications from SQL Injection with AWS WAF Managed Rules

Question No : 2


A company’s security policy prohibits connecting to Amazon EC2 instances through SSH and RDP. Instead, staff must use AWS Systems Manager Session Manager. Users report they cannot connect to one Ubuntu instance, even though they can connect to others.
What should a CloudOps engineer do to resolve this issue?

Answer:
Explanation:
According to AWS Cloud Operations and Systems Manager documentation, Session Manager requires that each managed instance be associated with an IAM instance profile that grants Systems Manager core permissions. The required permissions are provided by the AmazonSSMManagedInstanceCore AWS-managed policy.
If this policy is missing or misconfigured, the Systems Manager Agent (SSM Agent) cannot communicate with the Systems Manager service, causing connection failures even if the agent is installed and running. This explains why other instances work: those instances likely have the correct IAM role attached.
Enabling port 22 (Option A) violates the company’s security policy, while configuring user names (Option C) and key pairs (Option D) are irrelevant because Session Manager operates over secure API channels, not SSH keys.
Therefore, the correct resolution is to attach or update the instance profile with the AmazonSSMManagedInstanceCore policy, restoring Session Manager connectivity.
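A minimal sketch of the required IAM pieces (the role and profile names are illustrative; the policy ARN is the AWS-managed policy named above):

```python
# EC2 must be able to assume the role that the instance profile carries.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# AWS-managed policy that grants the SSM Agent its core permissions.
ssm_core_policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"

# Instance profile wrapping the role (names are illustrative); this profile
# is what gets attached to the Ubuntu instance.
instance_profile = {
    "InstanceProfileName": "ssm-managed-instance",
    "Roles": ["ssm-managed-instance-role"],  # role with the policy attached
}
```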
Reference: AWS Cloud Operations & Systems Manager Guide, Instance Profile Requirements for Session Manager Connectivity

Question No : 3


An application runs on Amazon EC2 instances that are in an Auto Scaling group. A CloudOps engineer needs to implement a solution that provides a central storage location for errors that the application logs to disk. The solution must also provide an alert when the application logs an error.
What should the CloudOps engineer do to meet these requirements?

Answer:
Explanation:
The AWS Cloud Operations and Monitoring documentation specifies that the Amazon CloudWatch Agent is the recommended tool for collecting system and application logs from EC2 instances. The agent pushes these logs into a centralized CloudWatch Logs group, providing durable storage and real-time monitoring.
Once the logs are centralized, a CloudWatch Metric Filter can be configured to search for specific error keywords (for example, “ERROR” or “FAILURE”). This filter transforms matching log entries into custom metrics. From there, a CloudWatch Alarm can monitor the metric threshold and publish notifications to an Amazon SNS topic, which can send email or SMS alerts to subscribed recipients.
This combination provides a fully automated, managed, and serverless solution for log aggregation and error alerting. It eliminates the need for manual cron jobs (Option B), custom scripts (Option D), or Lambda-based log streaming (Option C).
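The filter-and-alarm pair described above could be sketched as follows (the log group, namespace, and topic ARN are all illustrative):

```python
# Metric filter turning matching log lines into a custom metric.
metric_filter = {
    "logGroupName": "/app/web",            # group the CloudWatch Agent ships to
    "filterName": "application-errors",
    "filterPattern": "?ERROR ?FAILURE",    # match either keyword
    "metricTransformations": [{
        "metricName": "ApplicationErrors",
        "metricNamespace": "App/Monitoring",
        "metricValue": "1",                # count one per matching line
        "defaultValue": 0.0,
    }],
}

# Alarm that notifies an SNS topic as soon as any error is logged.
alarm = {
    "AlarmName": "app-error-alarm",
    "Namespace": "App/Monitoring",
    "MetricName": "ApplicationErrors",
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:app-alerts"],
}
```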
Reference: AWS Cloud Operations & Monitoring Guide, Collecting Application Logs and Creating Alarms Using CloudWatch Agent, Metric Filters, and SNS Notifications

Question No : 4


A company is using an Amazon Aurora MySQL DB cluster that has point-in-time recovery, backtracking, and automatic backup enabled. A CloudOps engineer needs to roll back the DB cluster to a specific recovery point within the previous 72 hours. Restores must be completed in the same production DB cluster.
Which solution will meet these requirements?

Answer:
Explanation:
As documented in AWS Cloud Operations and Database Recovery, Aurora Backtrack allows you to rewind the existing database cluster to a chosen point in time without creating a new cluster. This feature supports fine-grained rollback for accidental data changes, making it ideal for scenarios like table deletions or logical corruption.
Backtracking maintains continuous transaction logs and permits rewinding within a configurable window (up to 72 hours). It does not require creating a new cluster or endpoint, and it preserves the same production environment, fulfilling the operational requirement for in-place recovery.
In contrast, Point-in-Time Recovery (Option D) always creates a new cluster, while replica promotion (Option A) and Lambda restoration (Option B) are unrelated to immediate rollback operations.
Therefore, Option C, using Aurora Backtrack, best meets the requirement for same-cluster restoration and minimal downtime.
Reference: AWS Cloud Operations & Database Management Guide, Section: Using Aurora Backtrack for Fast In-Place Recovery

Question No : 5


A company runs an application on Amazon EC2 that connects to an Amazon Aurora PostgreSQL database. A developer accidentally drops a table from the database, causing application errors. Two hours later, a CloudOps engineer needs to recover the data and make the application functional again.
Which solution will meet this requirement?

Answer:
Explanation:
In the AWS Cloud Operations and Aurora documentation, when data loss occurs due to human error such as dropped tables, Point-in-Time Recovery (PITR) is the recommended method for restoration. PITR creates a new Aurora cluster restored to a specific time before the failure.
The restored cluster has a new endpoint that must be reconfigured in the application to resume normal operations. AWS does not support performing PITR directly on an existing production database because that would overwrite current data.
Aurora Backtrack (Option A) applies only to Aurora MySQL, not PostgreSQL.
Option B is incorrect because PITR cannot be executed in place.
Option D refers to an import process from S3, which is unrelated to time-based recovery.
Hence, Option C is correct and follows the AWS CloudOps standard recovery pattern for PostgreSQL workloads.
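The restore parameters for this recovery pattern could be sketched as follows (cluster identifiers and the timestamp are illustrative; note that PITR always produces a new cluster):

```python
from datetime import datetime, timezone

# PITR always restores into a NEW cluster; identifiers are illustrative.
restore_params = {
    "SourceDBClusterIdentifier": "prod-aurora-pg",
    "DBClusterIdentifier": "prod-aurora-pg-restored",
    # Restore to a moment just before the accidental DROP TABLE.
    "RestoreToTime": datetime(2026, 2, 14, 8, 55, tzinfo=timezone.utc),
    "RestoreType": "full-copy",
}

# The restored cluster exposes a new endpoint; the application's connection
# string (or a DNS alias pointing at it) must be updated to finish recovery.
```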
Reference: AWS Cloud Operations & Aurora Guide, Section: Performing Point-in-Time Recovery for Aurora PostgreSQL Clusters

Question No : 6


A company’s architecture team must receive immediate email notifications whenever new Amazon EC2 instances are launched in the company’s main AWS production account.
What should a CloudOps engineer do to meet this requirement?

Answer:
Explanation:
As per the AWS Cloud Operations and Event Monitoring documentation, the most efficient method for event-driven notification is to use Amazon EventBridge to detect specific EC2 API events and trigger a Simple Notification Service (SNS) alert.
EventBridge continuously monitors AWS service events, including RunInstances, which signals the creation of new EC2 instances. When such an event occurs, EventBridge sends it to an SNS topic, which then immediately emails subscribed recipients, in this case the architecture team.
This combination provides real-time, serverless notifications with minimal management. SQS (Option C) is designed for queue-based processing, not direct user alerts. User data scripts (Option A) and custom polling with Lambda (Option D) introduce unnecessary operational complexity and latency.
Hence, Option B is the correct and AWS-recommended CloudOps design for immediate launch notifications.
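The event pattern and target could be sketched as follows (the account ID and topic name are illustrative; matching "AWS API Call via CloudTrail" events assumes CloudTrail is recording management events in the account):

```python
# EventBridge pattern matching the CloudTrail record of a RunInstances call.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["RunInstances"],
    },
}

# Rule target: the SNS topic that emails the architecture team.
rule_target = {
    "Arn": "arn:aws:sns:us-east-1:111122223333:ec2-launch-alerts",
    "Id": "notify-architecture-team",
}
```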
Reference: AWS Cloud Operations & Monitoring Guide, Section: EventBridge and SNS Integration for EC2 Event Notifications

Question No : 7


A CloudOps engineer has created a VPC that contains a public subnet and a private subnet. Amazon EC2 instances that were launched in the private subnet cannot access the internet. The default network ACL is active on all subnets in the VPC, and all security groups allow outbound traffic.
Which solution will provide the EC2 instances in the private subnet with access to the internet?

Answer:
Explanation:
According to the AWS Cloud Operations and Networking documentation, instances in a private subnet do not have a direct route to the internet gateway and thus require a NAT gateway for outbound internet access.
The correct configuration is to create a NAT gateway in the public subnet, associate an Elastic IP address, and then update the private subnet’s route table to send all 0.0.0.0/0 traffic to the NAT gateway. This enables instances in the private subnet to initiate outbound connections while keeping inbound traffic blocked for security.
Placing the NAT gateway inside the private subnet (Options C or D) prevents connectivity because it would not have a route to the internet gateway. Configuring routes from the public subnet to the NAT gateway (Option B) does not serve private subnet traffic.
Hence, Option A follows AWS best practices for enabling secure, managed, outbound-only internet access from private resources.
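The resulting route tables could be sketched as follows (the VPC CIDR and gateway IDs are illustrative):

```python
# Public subnet: default route goes to the internet gateway.
public_route_table = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc12345def67890"},
]

# Private subnet: default route goes to the NAT gateway, which itself
# resides in the PUBLIC subnet and holds an Elastic IP address.
private_route_table = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0123456789abcdef0"},
]
```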
Reference: AWS Cloud Operations & Networking Guide, Section: Providing Internet Access to Private Subnets Using NAT Gateway

Question No : 8


A financial services company stores customer images in an Amazon S3 bucket in the us-east-1 Region. To comply with regulations, the company must ensure that all existing objects are replicated to an S3 bucket in a second AWS Region. If an object replication fails, the company must be able to retry replication for the object.
What solution will meet these requirements?

Answer:
Explanation:
Per the AWS Cloud Operations and S3 Data Management documentation, Cross-Region Replication (CRR) automatically replicates new objects between S3 buckets across Regions. However, CRR alone does not retroactively replicate existing objects created before replication configuration. To include such objects, AWS introduced S3 Batch Replication.
S3 Batch Replication scans the source bucket and replicates all existing objects that were not copied previously. Additionally, it can retry failed replication tasks automatically, ensuring regulatory compliance for complete dataset replication.
S3 Replication Time Control (S3 RTC) guarantees predictable replication times for new objects only; it does not cover previously stored data. S3 Lifecycle rules (Option D) move or transition objects between storage classes or buckets, but not in a replication context.
Therefore, the correct solution is to use S3 Cross-Region Replication (CRR) combined with S3 Batch Replication to ensure all current and future data is synchronized across Regions with retry capability.
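A sketch of the replication configuration on the source bucket (the role ARN and destination bucket are illustrative):

```python
# CRR configuration applied to the source bucket in us-east-1.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
    "Rules": [{
        "ID": "crr-customer-images",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},                                    # replicate the whole bucket
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            # Destination bucket in the second Region.
            "Bucket": "arn:aws:s3:::customer-images-replica",
        },
    }],
}

# A separate S3 Batch Replication job then back-fills objects that predate
# this rule; objects that fail appear in the job report and can be retried
# by submitting them as a new job.
```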
Reference: AWS Cloud Operations and S3 Guide, Section: Cross-Region Replication and Batch Replication for Existing Objects

Question No : 9


A company's website runs on an Amazon EC2 Linux instance. The website needs to serve PDF files from an Amazon S3 bucket. All public access to the S3 bucket is blocked at the account level. The company needs to allow website users to download the PDF files.
Which solution will meet these requirements with the LEAST administrative effort?

Answer:
Explanation:
Per the AWS Cloud Operations, Networking, and Security documentation, the best practice for serving private S3 content securely to end users is to use Amazon CloudFront with Origin Access Control (OAC).
OAC enables CloudFront to access S3 buckets privately, even when Block Public Access settings are enabled at the account level. This allows content to be delivered globally and securely without making the S3 bucket public. The bucket policy explicitly allows access only from the CloudFront distribution, ensuring that users can retrieve PDF files only via CloudFront URLs.
This configuration offers automatic scalability through CloudFront caching, improved security via private access control, and minimal administrative effort with fully managed services.
Other options require manual handling or make the bucket public, violating AWS security best practices.
Therefore, Option B, using CloudFront with Origin Access Control and a restrictive bucket policy, provides the most secure, efficient, and low-maintenance CloudOps solution.
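The restrictive bucket policy could be sketched as follows (the bucket name, account ID, and distribution ID are illustrative):

```python
# Bucket policy granting read access ONLY to the CloudFront service
# principal, and only for one specific distribution.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipalReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::pdf-downloads-bucket/*",
        "Condition": {
            "StringEquals": {
                "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/E2EXAMPLE123"
            }
        },
    }],
}
```

Because access is keyed to the distribution's ARN rather than to public principals, the account-level Block Public Access settings can stay enabled.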
Reference: AWS Cloud Operations and Content Delivery Guide, Section: Serving Private Content Securely from Amazon S3 via CloudFront Using Origin Access Control

Question No : 10


A company needs to enforce tagging requirements for Amazon DynamoDB tables in its AWS accounts. A CloudOps engineer must implement a solution to identify and remediate all DynamoDB tables that do not have the appropriate tags.
Which solution will meet these requirements with the LEAST operational overhead?

Answer:
Explanation:
According to the AWS Cloud Operations, Governance, and Compliance documentation, AWS Config provides managed rules that automatically evaluate resource configurations for compliance. The “required-tags” managed rule allows CloudOps teams to specify mandatory tags (e.g., Environment, Owner, CostCenter) and automatically detect non-compliant resources such as DynamoDB tables.
Furthermore, AWS Config supports automatic remediation through AWS Systems Manager Automation runbooks, enabling correction actions (for example, adding missing tags) without manual intervention. This automation minimizes operational overhead and ensures continuous compliance across multiple accounts.
Using a custom Lambda function (Options A or B) introduces unnecessary management complexity, while EventBridge rules alone (Option D) do not provide resource compliance tracking or historical visibility.
Therefore, Option C provides the most efficient, fully managed, and compliant CloudOps solution.
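A sketch of the rule configuration (the rule name and tag keys are illustrative; `REQUIRED_TAGS` is the managed rule's AWS identifier):

```python
import json

# AWS Config managed rule scoped to DynamoDB tables, flagging any table
# missing the mandatory tag keys.
config_rule = {
    "ConfigRuleName": "dynamodb-required-tags",
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    "Scope": {"ComplianceResourceTypes": ["AWS::DynamoDB::Table"]},
    "InputParameters": json.dumps({"tag1Key": "Environment", "tag2Key": "Owner"}),
}

# A remediation action can then link the rule to a Systems Manager
# Automation runbook that adds the missing tags automatically.
```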
Reference: AWS Cloud Operations & Governance Guide, Section: Compliance Automation Using AWS Config Managed Rules and Systems Manager Remediation

Question No : 11


A medical research company uses an Amazon Bedrock powered AI assistant with agents and knowledge bases to provide physicians quick access to medical study protocols. The company needs to generate audit reports that contain user identities, usage data for Bedrock agents, access data for knowledge bases, and interaction parameters.
Which solution will meet these requirements?

Answer:
Explanation:
As per AWS Cloud Operations, Bedrock, and Governance documentation, AWS CloudTrail is the authoritative service for capturing API activity and audit trails across AWS accounts. For Amazon Bedrock, CloudTrail records all user-initiated API calls, including interactions with agents, knowledge bases, and generative AI model parameters.
Using CloudTrail Lake, organizations can store, query, and analyze CloudTrail events directly without needing to export data. CloudTrail Lake supports SQL-like queries for generating audit and compliance reports, enabling the company to retrieve information such as user identity, API usage, timestamp, model or agent ID, and invocation parameters.
In contrast, CloudWatch focuses on operational metrics and log streaming, not API-level identity data. OpenSearch or Flink would add unnecessary complexity and cost for this use case.
Thus, the AWS-recommended CloudOps best practice is to leverage CloudTrail with CloudTrail Lake to maintain auditable, queryable API activity for Bedrock workloads, fulfilling governance and compliance requirements.
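An audit query of the kind described could be sketched as follows (the event data store ID is a placeholder for your own store, and the exact selectable fields depend on the store's configuration):

```python
# CloudTrail Lake query over Bedrock API activity, pulling user identity,
# the API called, the timestamp, and the request parameters.
audit_query = """
SELECT userIdentity.arn AS user_arn,
       eventName,
       eventTime,
       requestParameters
FROM a1b2c3d4-example-event-data-store
WHERE eventSource = 'bedrock.amazonaws.com'
  AND eventTime > '2026-01-01 00:00:00'
ORDER BY eventTime DESC
"""
```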
Reference: AWS Cloud Operations & Governance Guide, Section: Auditing and Governance for Generative AI Workloads Using AWS CloudTrail and CloudTrail Lake

Question No : 12


A company has an on-premises DNS solution and wants to resolve DNS records in an Amazon Route 53 private hosted zone for example.com. The company has set up an AWS Direct Connect connection for network connectivity between the on-premises network and the VPC. A CloudOps engineer must ensure that an on-premises server can query records in the example.com domain.
What should the CloudOps engineer do to meet these requirements?

Answer:
Explanation:
According to AWS Cloud Operations and Networking documentation, Route 53 Resolver inbound endpoints allow DNS queries to originate from on-premises DNS servers and resolve private hosted
zone records in AWS. The inbound endpoint provides DNS resolver IP addresses within the VPC, which the on-premises DNS servers can forward queries to over AWS Direct Connect or VPN connections.
The inbound endpoint must be associated with a security group that permits inbound traffic on TCP and UDP port 53 from the on-premises DNS server IP addresses. This ensures that DNS requests from the on-premises environment reach the VPC Resolver for resolution of private domains like example.com.
By contrast, outbound endpoints are used for the opposite direction: resolving external (on-premises or internet) DNS names from within AWS VPCs. Therefore, only an inbound endpoint correctly satisfies the direction of resolution in this scenario.
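The security group ingress for the inbound endpoint could be sketched as follows (the on-premises CIDR is illustrative):

```python
# DNS uses both TCP and UDP on port 53; allow each from the on-premises
# DNS server range (CIDR is illustrative).
ON_PREM_DNS_CIDR = "10.20.0.0/24"

inbound_rules = [
    {
        "IpProtocol": proto,
        "FromPort": 53,
        "ToPort": 53,
        "IpRanges": [{"CidrIp": ON_PREM_DNS_CIDR}],
    }
    for proto in ("tcp", "udp")
]
```

The on-premises DNS solution is then configured with a conditional forwarder for example.com pointing at the inbound endpoint's VPC IP addresses.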
Reference: AWS Cloud Operations & Route 53 Resolver Guide, Section: Inbound and Outbound Endpoints for Hybrid DNS Resolution

Question No : 13


A CloudOps engineer is configuring an Amazon CloudFront distribution to use an SSL/TLS certificate.
The CloudOps engineer must ensure automatic certificate renewal.
Which combination of steps will meet this requirement? (Select TWO.)

Answer:
Explanation:
The AWS Cloud Operations and Security documentation specifies that for Amazon CloudFront, automatic certificate renewal is only supported for certificates issued by AWS Certificate Manager (ACM). When a certificate is managed by ACM and validated through DNS validation, ACM automatically renews the certificate before expiration without requiring manual intervention.
Option A ensures that the certificate is issued and managed by ACM, enabling full integration with CloudFront.
Option E (DNS validation) is essential for automation; AWS performs revalidation automatically as long as the DNS validation record remains in place.
By contrast, email validation (Option D) requires manual user confirmation upon renewal, which prevents automatic renewals. Certificates issued by third-party certificate authorities (Option B) are manually managed and must be reimported into ACM after renewal. CloudFront does not have a direct feature (Option C) to renew certificates; it relies on ACM’s lifecycle management.
Thus, combining ACM-issued certificates (A) with DNS validation (E) ensures continuous, automated renewal with no downtime or human action required.
Reference: AWS Cloud Operations and Security Best Practices, Section: Using AWS Certificate Manager with CloudFront for Automatic Certificate Renewal

Question No : 14


A CloudOps engineer needs to set up alerting and remediation for a web application. The application consists of Amazon EC2 instances that have AWS Systems Manager Agent (SSM Agent) installed. Each EC2 instance runs a custom web server. The EC2 instances run behind a load balancer and write logs locally.
The CloudOps engineer must implement a solution that restarts the web server software automatically if specific web errors are detected in the logs.
Which combination of steps will meet these requirements? (Select THREE.)

Answer:
Explanation:
Per the AWS Cloud Operations, Monitoring, and Automation documentation, the correct workflow for automated operational remediation is:
Amazon CloudWatch Agent is installed on each EC2 instance (Option A) to collect local log data and push it to Amazon CloudWatch Logs.
A CloudWatch Metric Filter (Option C) is then defined to identify specific error strings or patterns within those logs (e.g., “HTTP 5xx” or “Service Unavailable”). When such an event occurs, CloudWatch Alarms are triggered.
Upon alarm activation, Amazon EventBridge rules (Option E) are configured to respond automatically by invoking an AWS Systems Manager Automation runbook, which executes an action to restart the web server process on the affected instance via SSM Agent.
This approach aligns directly with AWS’s recommended CloudOps remediation pattern, known as event-driven automation, which ensures minimal downtime and eliminates manual intervention.
Options involving CloudTrail (B) or SES notifications (D) are incorrect because they are unrelated to
log-based application monitoring and automated remediation workflows.
Reference: AWS Cloud Operations & Systems Manager Guide, Section: Automated Remediation Using CloudWatch, EventBridge, and Systems Manager Automation

Question No : 15


A CloudOps engineer needs to track the costs of data transfer between AWS Regions. The CloudOps engineer must implement a solution to send alerts to an email distribution list when transfer costs reach 75% of a specific threshold.
What should the CloudOps engineer do to meet these requirements?

Answer:
Explanation:
According to the AWS Cloud Operations and Cost Management documentation, AWS Budgets is the recommended service to track and alert on cost thresholds across all AWS accounts and resources. It allows users to define cost, usage, or reservation budgets, and configure notifications to trigger when usage or cost reaches defined percentages of the budgeted value (e.g., 75%, 90%, 100%).
The AWS Budgets system integrates natively with Amazon Simple Notification Service (SNS) to deliver alerts to an email distribution list or SNS topic subscribers. AWS Budgets supports granular cost filters, including specific service categories such as data transfer, regions, or linked accounts, ensuring precise visibility into inter-Region transfer costs.
By contrast, CloudWatch billing alarms (Option B) monitor total account charges only and do not allow detailed service-level filtering, such as data transfer between Regions. Cost and Usage Reports (Option A) are for detailed cost analysis, not real-time alerting, and VPC Flow Logs (Option D) capture traffic data, not billing or cost-based metrics.
Thus, using AWS Budgets with a 75% alert threshold best satisfies the operational and notification requirements.
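The budget and its 75% notification could be sketched as follows (the budget name, amount, and email address are illustrative):

```python
# Monthly cost budget; filters (not shown) can scope it to inter-Region
# data transfer usage types.
budget = {
    "BudgetName": "inter-region-data-transfer",
    "BudgetType": "COST",
    "TimeUnit": "MONTHLY",
    "BudgetLimit": {"Amount": "500", "Unit": "USD"},
}

# Fire when ACTUAL spend crosses 75% of the budgeted amount.
notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 75.0,
    "ThresholdType": "PERCENTAGE",
}

# Deliver the alert to the team's email distribution list.
subscriber = {"SubscriptionType": "EMAIL", "Address": "cloud-costs@example.com"}
```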
Reference: AWS CloudOps and Cost Management Guide, Section: AWS Budgets for Cost Monitoring and Alerts
