
WGU Managing Cloud Security Exam

WGU Managing Cloud Security (JY02) Online Practice

Last updated: December 9, 2025

You can use these online practice questions to gauge your knowledge of the WGU Managing Cloud Security exam material before deciding whether to register for the exam.

To pass the exam and cut your preparation time, choose the Managing Cloud Security practice set (based on the latest real exam questions), which currently includes 80 questions with answers.


Question No : 1


Which device is used to create and manage encryption keys used for data transmission in a cloud-based environment?

Answer: Hardware Security Module (HSM)
Explanation:
A Hardware Security Module (HSM) is a dedicated, tamper-resistant device designed for creating, managing, and storing encryption keys. In cloud environments, HSMs are essential for securing cryptographic operations, such as SSL/TLS key management, digital signatures, and secure data transmission.
TPMs are hardware chips used to secure local devices, such as laptops. Memory controllers and RAID controllers manage system performance and storage but are not cryptographic devices.
HSMs provide strong protection against key theft or misuse by isolating cryptographic functions from general-purpose computing resources. They are often certified under standards like FIPS 140-2, ensuring compliance with stringent security requirements. In cloud services, customers can use provider-managed HSMs or deploy dedicated virtual HSM instances for secure key management.
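The isolation described above can be sketched in code. The `ToyHSM` class below is a hypothetical stand-in, not a real HSM API: callers receive only an opaque key handle and request cryptographic operations by handle, so key material never leaves the module.

```python
import hashlib
import hmac
import secrets

class ToyHSM:
    """Illustrative stand-in for an HSM: keys are created and used
    internally, and only opaque handles are ever returned to callers."""

    def __init__(self):
        self._keys = {}  # handle -> secret key (never exposed)

    def create_key(self) -> str:
        handle = secrets.token_hex(8)
        self._keys[handle] = secrets.token_bytes(32)
        return handle  # the caller gets a handle, never the key itself

    def sign(self, handle: str, message: bytes) -> bytes:
        # the HMAC computation happens inside the module
        return hmac.new(self._keys[handle], message, hashlib.sha256).digest()

    def verify(self, handle: str, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(handle, message), tag)

hsm = ToyHSM()
kid = hsm.create_key()
tag = hsm.sign(kid, b"order:42")
print(hsm.verify(kid, b"order:42", tag))  # True
print(hsm.verify(kid, b"order:43", tag))  # False
```

A cloud KMS or HSM service follows the same pattern at scale: applications reference keys by identifier and the service performs the operation, which is what prevents key theft even if the application is compromised.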

Question No : 2


When should a cloud service provider delete customer data?

Answer: After the specified retention period has elapsed
Explanation:
The correct time for data deletion is after the specified retention period defined by contractual agreements, regulatory frameworks, or internal policies. Retention policies ensure that data is kept for as long as necessary for business, legal, or compliance reasons but not longer than required.
Oversubscription, inactivity, or review cycles are not valid triggers because they may conflict with compliance mandates such as GDPR, HIPAA, or PCI DSS. Deleting data prematurely could result in legal penalties or business risks, while keeping it longer than necessary could increase exposure.
By deleting data only after the retention period, providers demonstrate adherence to data governance principles and protect customer rights while minimizing storage costs and liability.
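A minimal sketch of a retention check, assuming illustrative data classes and retention periods (real values would come from contracts, regulations, and internal policy):

```python
from datetime import date, timedelta

# hypothetical retention periods in days, per data classification
RETENTION_DAYS = {"financial": 7 * 365, "logs": 90, "backups": 365}

def deletion_due(created: date, data_class: str, today: date) -> bool:
    """Data becomes eligible for deletion only after its retention
    period has fully elapsed -- never earlier, never on inactivity."""
    return today >= created + timedelta(days=RETENTION_DAYS[data_class])

print(deletion_due(date(2024, 1, 1), "logs", date(2024, 3, 1)))  # False (day 60)
print(deletion_due(date(2024, 1, 1), "logs", date(2024, 4, 1)))  # True (past day 90)
```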

Question No : 3


A customer requests that a cloud provider physically destroy any drives storing their personal data.
What must the provider do with the drives?

Answer: Physically destroy the drives only if the customer's contract requires it
Explanation:
Cloud providers typically manage multi-tenant infrastructure, where physical hardware is shared among customers. Therefore, drives are not destroyed for each customer unless explicitly required in the contract. If the customer’s agreement specifies dedicated hardware disposal, then the provider must comply by physically destroying the drives.
Cryptographic erasure and degaussing are valid sanitization methods, but they may not meet the specific contractual requirement of physical destruction. Insurance clauses are unrelated to disposal.
This question underscores the importance of negotiating contractual terms in cloud agreements. Customers handling highly sensitive or regulated data may require physical destruction, while others may accept logical erasure. Clear agreements ensure both compliance and alignment of security responsibilities.

Question No : 4


A cloud provider that processes third-party credit card payments is unable to encrypt its customers' cardholder data because of constraints on a legacy payment processing system.
What should it implement to maintain Payment Card Industry Data Security Standard (PCI DSS) compliance?

Answer: A compensating control
Explanation:
When a required PCI DSS control cannot be implemented due to technical limitations, the organization must apply a compensating control. A compensating control is an alternative safeguard that meets the intent and rigor of the original requirement.
Risk acceptance is insufficient under PCI DSS, as compliance demands enforceable safeguards. Privacy controls and protection levels may enhance data security but do not formally replace mandatory encryption requirements.
For example, a provider may use strict access controls, network segmentation, or monitoring to mitigate risks from unencrypted cardholder data. Documenting these compensating controls is essential during audits, ensuring compliance despite system limitations.
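One of the compensating controls named above, network segmentation, can be sketched as a simple source-allowlist check. The host addresses are hypothetical; in practice this policy would live in firewall rules or security groups rather than application code:

```python
# Hypothetical set of internal hosts permitted to reach the legacy
# payment system; everything else is denied by default.
ALLOWED_SOURCES = {"10.0.5.10", "10.0.5.11"}

def may_connect(source_ip: str) -> bool:
    """Segmentation check: only designated payment-app hosts may
    connect to the system holding unencrypted cardholder data."""
    return source_ip in ALLOWED_SOURCES

print(may_connect("10.0.5.10"))  # True
print(may_connect("10.9.9.9"))   # False
```

Combined with logging each decision, a control like this helps demonstrate to an auditor that the compensating safeguard meets the intent of the original encryption requirement.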

Question No : 5


Which concept focuses on operating highly available workloads in the cloud?

Answer: Reliability
Explanation:
Reliability in cloud design ensures workloads can recover quickly from disruptions and continue operating as expected. This concept focuses on high availability, fault tolerance, and disaster recovery. Reliability requires implementing redundancy, backup strategies, and robust monitoring.
Security ensures data protection, operational excellence covers continuous improvement, and resource hierarchy refers to organizational structures, but none focus specifically on availability and resilience.
By prioritizing reliability, organizations design cloud architectures capable of withstanding failures at multiple layers: compute, storage, networking, and even entire regions. This design principle ensures customer trust and compliance with service-level agreements.

Question No : 6


Which design pillar encompasses the ability to support development and run workloads effectively, gain insights into operations, and continuously improve supporting processes to deliver business value?

Answer: Operational excellence
Explanation:
The Operational Excellence pillar emphasizes practices that allow organizations to develop, deploy, and operate workloads effectively. It includes monitoring operations, responding to incidents, and continuously improving processes. By embedding feedback loops, organizations enhance agility and ensure that technology supports business value.
Performance efficiency deals with using computing resources efficiently, reliability ensures system availability, and sustainability focuses on environmental responsibility. While important, these do not encompass the process-driven improvements at the heart of operational excellence.
Operational excellence ensures that organizations can adapt quickly to changes, implement automation, and drive consistent improvements across cloud workloads. It is a key principle in cloud frameworks like AWS Well-Architected, Microsoft CAF, and Google’s Reliability Engineering practices.

Question No : 7


An organization creates a plan for long-term cloud storage of its backup data.
What should the organization address to avoid losing access to its data?

Answer: Key management
Explanation:
The most critical concern in long-term cloud storage is key management. If encryption keys are lost, corrupted, or improperly rotated, the organization will lose the ability to decrypt its own data, rendering backups unusable. This issue is particularly serious because cloud storage almost always relies on encryption to secure sensitive or regulated information.
While regulatory compliance, quantum threats, and change tracking are important, none directly prevent permanent data loss. The reliability of key management ensures that access to long-term archival data is preserved across changes in personnel, technology, and vendors.
Best practices include using centralized key management systems (such as Hardware Security Modules or cloud Key Management Services), applying role-based controls, and performing periodic key rotation and escrow. Addressing key management in the backup plan ensures that data will remain accessible for years or decades, regardless of technological shifts.
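The value of envelope encryption for key rotation can be sketched as follows. The stream cipher here is a toy built from SHA-256 for illustration only, not production cryptography; the point is that rotating the master key only re-wraps the small data key, while the bulk backup data stays untouched and remains decryptable.

```python
import hashlib
import secrets

def _stream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode) -- illustration only."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def wrap(master: bytes, data_key: bytes):
    """Encrypt (wrap) a data key under a master key."""
    nonce = secrets.token_bytes(16)
    ks = _stream(master, nonce, len(data_key))
    return nonce, bytes(a ^ b for a, b in zip(data_key, ks))

def unwrap(master: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    ks = _stream(master, nonce, len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, ks))

# Rotation: re-wrap the data key under a new master key.
old_master, new_master = secrets.token_bytes(32), secrets.token_bytes(32)
data_key = secrets.token_bytes(32)

n1, w1 = wrap(old_master, data_key)
n2, w2 = wrap(new_master, unwrap(old_master, n1, w1))

assert unwrap(new_master, n2, w2) == data_key  # backups stay decryptable
```

Real deployments use an HSM or a cloud Key Management Service for the wrap and unwrap operations, so the master key itself is never exposed to application code.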

Question No : 8


A user creates new financial documents that will be stored in the cloud.
Which action should the user take before uploading the documents to protect them against threats such as packet capture and on-path attacks?

Answer: Encrypt the documents
Explanation:
Before transmitting sensitive financial data to the cloud, the best defense against interception threats like packet capture and man-in-the-middle attacks is encryption. Encryption protects data in transit by converting plain text into cipher text, which can only be deciphered with the correct keys.
Hashing provides integrity verification but does not secure confidentiality. Change tracking monitors modifications but does not prevent interception. Metadata labeling adds context but does not protect against on-path attackers.
Using strong encryption protocols (e.g., TLS) ensures that even if traffic is intercepted, the attacker cannot read the data. Encryption also aligns with compliance requirements such as PCI DSS, which mandates encryption for financial data during transmission. By encrypting before upload, the user ensures end-to-end confidentiality across potentially insecure networks.
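In practice, protecting the upload against packet capture and on-path attacks means sending the documents only over a TLS channel (optionally encrypting the files themselves first). Python's standard `ssl` module can enforce modern TLS with certificate and hostname verification; the `open_tls` helper below is an illustrative sketch and is not tied to any particular cloud provider:

```python
import socket
import ssl

# Default context verifies certificates and hostnames; additionally
# require TLS 1.2 or newer before any data is transmitted.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a TCP socket in TLS so the upload is encrypted in transit
    and the server's identity is verified against its certificate."""
    raw = socket.create_connection((host, port), timeout=10)
    return ctx.wrap_socket(raw, server_hostname=host)

print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

With these settings, an on-path attacker who intercepts the traffic sees only ciphertext, and an attempt to impersonate the server fails certificate validation.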

Question No : 9


Which security concept requires continuous identity and authorization checks to allow access to data?

Answer: Zero Trust
Explanation:
The Zero Trust security model assumes that no user, device, or application should be trusted by default, whether inside or outside the network perimeter. Every access request must be continuously verified using strict identity, authorization, and context-based checks.
Unlike traditional perimeter security, Zero Trust emphasizes the principle of “never trust, always verify.” Traffic inspection looks at data packets, intrusion prevention identifies malicious activity, and secret management safeguards sensitive keys and credentials. None of these approaches enforce constant, adaptive identity verification the way Zero Trust does.
By adopting Zero Trust, organizations ensure that access is not granted simply because a user is “inside” the network. Instead, continuous checks evaluate credentials, device posture, location, and other risk factors. This significantly reduces the risk of insider threats, credential theft, and lateral movement within cloud environments.
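A per-request Zero Trust decision can be sketched as a policy function that re-evaluates several signals on every call. The fields and the freshness threshold below are illustrative assumptions, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool           # identity re-verified for this request
    device_compliant: bool     # device posture check passed
    token_age_minutes: int     # how stale the session credential is

def authorize(req: AccessRequest, max_token_age: int = 15) -> bool:
    """Zero Trust sketch: every request re-checks identity, device
    posture, and credential freshness -- network location grants nothing."""
    return (req.mfa_passed
            and req.device_compliant
            and req.token_age_minutes <= max_token_age)

print(authorize(AccessRequest("ana", True, True, 5)))    # True
print(authorize(AccessRequest("ana", True, False, 5)))   # False: bad posture
print(authorize(AccessRequest("ana", True, True, 60)))   # False: stale token
```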

Question No : 10


Which type of data sanitization should be used to destroy data on a USB thumb drive while keeping the drive intact?

Answer: Overwriting
Explanation:
The correct approach for sanitizing a USB thumb drive while preserving its usability is overwriting. Overwriting involves replacing the existing data on the device with random data or specific patterns to ensure that the original information cannot be recovered. This process leaves the physical device intact, allowing it to be reused securely.
Physical destruction, such as shredding, renders the device unusable. Degaussing only works on magnetic media like hard disks or tapes, not on solid-state or flash-based USB drives. Key revocation applies to cryptographic keys and not to physical devices.
By using overwriting, organizations comply with data sanitization standards while balancing operational efficiency. Many tools exist that perform multi-pass overwrites to meet regulatory requirements such as those from NIST or ISO. This ensures that sensitive data is removed while allowing the device to remain in circulation for continued use.
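A minimal sketch of overwriting at the file level, using only the standard library. Note the caveat in the comment: on flash media, wear-leveling can retain stale copies of data, which is why guidance such as NIST SP 800-88 prefers firmware-level secure erase or cryptographic erasure for solid-state devices.

```python
import os
import secrets

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place with random bytes, then
    remove it. Caveat: on SSD/flash media, wear-leveling may leave
    stale copies that file-level overwriting cannot reach."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())  # force the pass to physical storage
    os.remove(path)
```

Dedicated sanitization tools work the same way but operate on whole devices and produce verification reports for compliance records.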

Question No : 11


Which term refers to taking an accurate account of a system's desired standard state so changes can be quickly detected for approval or remediation?

Answer: Baselining
Explanation:
Baselining is the process of establishing a reference point for the standard configuration of systems, networks, or applications. This baseline represents the approved, secure state. By continuously comparing the current environment to the baseline, organizations can detect deviations, unauthorized changes, or misconfigurations.
Patch management involves updating systems, deployment refers to installing new systems, and capacity management focuses on resource planning. While important, these do not establish a standard state for comparison.
Baselining is essential for change management and security auditing. It supports configuration management databases (CMDBs), intrusion detection, and compliance requirements. When deviations are detected, they can be escalated for remediation or formally approved through change control processes.
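A simple file-integrity baseline can be sketched with standard-library hashing: record a SHA-256 snapshot of the approved state, then diff the current state against it to surface additions, removals, and modifications for approval or remediation.

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict:
    """Record a baseline: SHA-256 digest of every file under `root`."""
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(root).rglob("*")) if p.is_file()}

def drift(baseline: dict, current: dict) -> dict:
    """Compare the current state to the approved baseline."""
    return {
        "added":    sorted(current.keys() - baseline.keys()),
        "removed":  sorted(baseline.keys() - current.keys()),
        "modified": sorted(k for k in baseline.keys() & current.keys()
                           if baseline[k] != current[k]),
    }
```

Production tools (file-integrity monitoring, CMDB drift detection) extend this same compare-against-baseline idea to packages, configurations, and running services.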

Question No : 12


Which data destruction technique involves encrypting the data, followed by encrypting the resulting keys with a different engine, and then destroying the keys resulting from the second encryption round?

Answer: Cryptographic erasure
Explanation:
Cryptographic erasure is a secure data sanitization technique that relies on encryption. The process involves encrypting the data, encrypting the keys with a second layer, and then destroying the encryption keys. Without the keys, the encrypted data becomes unreadable and is effectively destroyed, even though the storage media remains intact.
One-way hashing is used for password storage, not full data destruction. Degaussing is for magnetic media, and overwriting involves physically writing new data over existing sectors.
Cryptographic erasure is widely used in cloud environments where physical media cannot be easily destroyed or reclaimed by customers. It ensures compliance with data retention and privacy regulations while maintaining environmental sustainability by allowing reuse of storage hardware.
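The two-round process the question describes can be sketched end to end. The XOR stream cipher below is a toy built from SHA-256 and is for illustration only; the structure, not the cipher, is the point: once the second-round key is destroyed, the stored ciphertext is permanently unreadable even though the media is untouched.

```python
import hashlib
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher (SHA-256 keystream) -- illustration only,
    not production cryptography."""
    ks, ctr = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, ks[: len(data)]))

# Round 1: encrypt the data with a data-encryption key (DEK).
dek = secrets.token_bytes(32)
stored = xor_cipher(dek, b"customer record")

# Round 2: encrypt (wrap) the DEK with a key-encryption key (KEK),
# typically held in a separate engine such as an HSM.
kek = secrets.token_bytes(32)
wrapped_dek = xor_cipher(kek, dek)

# Erasure: destroy the KEK. The wrapped DEK can no longer be unwrapped,
# so `stored` is unrecoverable even though it still exists on the media.
del kek
```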

Question No : 13


Which cloud computing service model allows customers to run their own application code without configuring the server environment?

Answer: Platform as a Service (PaaS)
Explanation:
Platform as a Service (PaaS) allows customers to focus on writing and deploying code without managing the underlying infrastructure. The provider manages the operating system, runtime, and middleware, enabling faster development cycles and reduced administrative overhead.
IaaS would require the customer to configure servers and operating systems, SaaS provides ready-to-use applications, and DSaaS is a specialized category for analytics.
By abstracting the infrastructure, PaaS accelerates innovation and reduces operational burden but also limits flexibility in some cases. Security responsibilities under PaaS focus on application-level controls, while the provider handles infrastructure-level protections.

Question No : 14


Which category of cloud service provides on-demand, self-service access to basic building blocks, such as virtualized servers, block storage, and networking capacity, that can be used to create custom IT solutions?

Answer: Infrastructure as a Service (IaaS)
Explanation:
Infrastructure as a Service (IaaS) delivers fundamental computing resources over the cloud. These include virtual machines, block storage, networking, and load balancers. Customers use these resources to build and manage custom IT solutions, while the provider manages the underlying hardware.
PaaS abstracts infrastructure further, providing a development environment for applications without requiring infrastructure management. SaaS delivers fully functional applications over the internet. NaaS is a narrower category focusing on network delivery.
IaaS is the correct answer because it gives maximum flexibility and control compared to the other models, allowing organizations to build tailored environments. It also requires customers to manage operating systems, middleware, and runtime security, making shared responsibility an essential part of the model.

Question No : 15


Which characteristic of cloud computing refers to sharing physical assets among multiple customers?

Answer: Resource pooling
Explanation:
Resource pooling is one of the core characteristics of cloud computing defined by NIST. It refers to the provider’s ability to serve multiple customers by dynamically allocating and reallocating computing resources such as storage, processing, memory, and network bandwidth. These resources are abstracted using virtualization, ensuring that customers remain isolated from one another even though they share the same physical assets.
Rapid scalability describes elasticity, on-demand self-service allows users to provision resources without provider intervention, and measured service refers to metering usage. None of these concepts directly describe the multi-tenant model of shared resources.
Resource pooling improves efficiency, reduces costs, and provides flexibility, but it also introduces new security considerations such as data isolation and hypervisor security. Customers must ensure that providers implement strong controls to prevent data leakage or cross-tenant compromise.
