Kubernetes and Cloud Native Associate (KCNA) Online Practice Questions
Last updated: February 14, 2026
You can use these online practice questions to gauge your knowledge of The Linux Foundation KCNA exam and then decide whether to register for it.
If you want to pass the exam 100% and save 35% of your preparation time, choose the KCNA dumps (latest real exam questions), which currently include 126 exam questions and answers.
Answer:
Explanation:
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations.
Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience.
Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
Answer:
Explanation:
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them.
Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas.
This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination of placement, elasticity, and self-healing is the core of container orchestration, matching option A precisely.
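As a minimal sketch of this declarative model (all names and the image are arbitrary examples), a Deployment manifest expresses placement, elasticity, and health intent rather than imperative commands:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # example name
spec:
  replicas: 3               # desired state: keep 3 Pod replicas running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # example image; Kubernetes pulls it from a registry, it does not build it
        readinessProbe:     # health management: only Ready Pods receive traffic
          httpGet:
            path: /
            port: 80
```

Once applied, the Deployment controller, kube-scheduler, and kubelet cooperate to keep three healthy replicas running without further manual intervention.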
Answer:
Explanation:
Dynamic provisioning is the Kubernetes mechanism where storage is created on-demand when a user creates a PersistentVolumeClaim (PVC) that references a StorageClass, so A is correct. In this model, the user does not need to pre-create a PersistentVolume (PV). Instead, the StorageClass points to a provisioner (typically a CSI driver) that knows how to create a volume in the underlying storage system (cloud disk, SAN, NAS, etc.). When the PVC is created with storageClassName: <class>, Kubernetes triggers the provisioner to create a new volume and then binds the resulting PV to that PVC.
This is why option B is incorrect: you do not put a StorageClass “in the Pod YAML” to request provisioning. Pods reference PVCs, not StorageClasses directly.
Option C is incorrect because the PVC does not need the Pod name; binding is done via the PVC itself.
Option D describes static provisioning: an admin pre-creates PVs and users claim them by creating PVCs that match the PV (capacity, access modes, selectors). Static provisioning can work, but it is not dynamic provisioning.
Under the hood, the StorageClass can define parameters like volume type, replication, encryption, and binding behavior (e.g., volumeBindingMode: WaitForFirstConsumer to delay provisioning until the Pod is scheduled, ensuring the volume is created in the correct zone). Reclaim policies (Delete/Retain) define what happens to the underlying volume after the PVC is deleted.
In cloud-native operations, dynamic provisioning is preferred because it improves developer self-service, reduces manual admin work, and makes scaling stateful workloads easier and faster. The essence is: PVC + StorageClass → automatic PV creation and binding.
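A minimal sketch of the PVC + StorageClass flow (the class name is arbitrary and the provisioner shown is the AWS EBS CSI driver, used here purely as an example):

```yaml
# Example StorageClass; the provisioner depends on your CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com             # example CSI provisioner
volumeBindingMode: WaitForFirstConsumer  # delay provisioning until the Pod is scheduled
reclaimPolicy: Delete                    # delete the backing volume when the PVC is deleted
---
# Creating this PVC triggers automatic PV creation and binding.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi
```

No PersistentVolume object is authored by hand; the provisioner creates one and the control plane binds it to the claim.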
Answer:
Explanation:
A headless Service is created by setting spec.clusterIP: None, so B is correct. Normally, a Service gets a ClusterIP, and kube-proxy (or an alternative dataplane) implements virtual-IP-based load balancing to route traffic from that ClusterIP to the backend Pods. A headless Service intentionally disables that virtual IP allocation. Instead of giving you a single stable VIP, Kubernetes publishes DNS records that resolve directly to the endpoints (the Pod IPs) behind the Service.
This is especially important for workloads that need direct endpoint discovery or stable per-Pod identities, such as StatefulSets. With a headless Service, clients can discover all Pod IPs (or individual Pod DNS names in StatefulSet patterns) and implement their own selection, quorum, or leader/follower logic. Kubernetes DNS (CoreDNS) responds differently for headless Services: rather than returning a single ClusterIP, it returns multiple A/AAAA records (one per endpoint) or SRV records for named ports, enabling richer service discovery behavior.
The other options are invalid. “headless” is not a magic value for clusterIP; the API expects either an actual IP address assigned by the cluster or the special literal None. 0.0.0.0 and localhost are not valid ways to request headless semantics. Kubernetes uses None specifically to signal “do not allocate a ClusterIP.”
Operationally, headless Services are used to: (1) expose each backend instance individually, (2) support stateful clustering and stable DNS names, and (3) avoid load balancing when the application or client library must choose endpoints itself. The key is that the Service still provides a stable DNS name, but the resolution yields endpoints, not a VIP.
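A minimal headless Service sketch (names and port are illustrative), showing the one field that makes it headless:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-headless   # example name
spec:
  clusterIP: None     # headless: no VIP is allocated; DNS resolves to the Pod IPs
  selector:
    app: db
  ports:
  - name: db
    port: 5432
```

A DNS lookup of `db-headless.<namespace>.svc.cluster.local` returns one A/AAAA record per ready endpoint instead of a single ClusterIP.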
Answer:
Explanation:
In Kubernetes, the standard way to execute a command inside a running container is kubectl exec, which is why A is correct. kubectl exec calls the Kubernetes API (API server), which then coordinates with the kubelet on the target node to run the requested command inside the container using the container runtime’s exec mechanism. The -- separator is important: it tells kubectl that everything after -- is the command to run in the container rather than flags for kubectl itself.
This is fundamentally different from docker exec. In Kubernetes, you don’t normally target containers through Docker/CRI tools directly because Kubernetes abstracts the runtime behind CRI. Also, “Docker” might not even be installed on nodes in modern clusters (containerd/CRI-O are common). So option B is not the Kubernetes-native approach and often won’t work.
kubectl run (option C) is for creating a new Pod (or generating workload resources), not for executing a command in an existing container. kubectl attach (option D) attaches your terminal to a running container’s process streams (stdin/stdout/stderr), which is useful for interactive sessions, but it does not execute an arbitrary new command like exec does.
In real usage, you often specify the container when a Pod has multiple containers: kubectl exec -it <pod> -c <container> -- /bin/sh. This is common for debugging, verifying config files mounted from ConfigMaps/Secrets, testing DNS resolution, or checking network connectivity from within the Pod network namespace. Because exec uses the API and kubelet, it respects Kubernetes access control (RBAC) and audit logging, another reason it’s the correct operational method.
Answer:
Explanation:
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <file> creates or updates the resources. kubectl apply is also designed to work well with iterative changes: you update the file, re-apply, and Kubernetes performs a controlled rollout based on controller logic.
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition.
Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety.
Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
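A minimal manifest skeleton illustrating the mandatory fields described above (the resource and names are arbitrary examples; apply with `kubectl apply -f service.yaml`):

```yaml
apiVersion: v1        # API group/version of the resource
kind: Service         # resource type
metadata:
  name: web           # object identity (plus optional labels/annotations)
spec:                 # desired state, reconciled by controllers
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Editing this file and re-applying it updates the object in place; Kubernetes reconciles the difference rather than requiring imperative commands.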
Answer:
Explanation:
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers, so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness, hence B.
Answer:
Explanation:
A Kubernetes NetworkPolicy defines which traffic is allowed to and from Pods by selecting Pods and specifying ingress/egress rules. A key conceptual effect is that it can make Pods “isolated” (default deny except what is allowed) versus “non-isolated” (default allow). This aligns best with option B, so B is correct.
By default, Kubernetes networking is permissive: Pods can typically talk to any other Pod. When you apply a NetworkPolicy that selects a set of Pods, those selected Pods become “isolated” for the direction(s) covered by the policy (ingress and/or egress). That means only traffic explicitly allowed by the policy is permitted; everything else is denied (again, for the selected Pods and direction). This classification concept (isolated vs. non-isolated) is a common way the Kubernetes documentation explains NetworkPolicy behavior.
Option A is incorrect: NetworkPolicy does not encrypt (“cryptic and obscure”) traffic. Encryption is typically handled by mTLS via a service mesh or application-layer TLS.
Option C is not the primary role; loopback and host traffic handling depend on the network plugin and node configuration, and NetworkPolicy is not a “prevent loopback” mechanism.
Option D is incorrect because NetworkPolicy is not a logging system; while some CNIs can produce logs about policy decisions, logging is not NetworkPolicy’s role in the API.
One critical Kubernetes detail: NetworkPolicy enforcement is performed by the CNI/network plugin. If your CNI doesn’t implement NetworkPolicy, creating these objects won’t change runtime traffic. In CNIs that do support it, NetworkPolicy becomes a foundational security primitive for segmentation and least privilege: restricting database access to app Pods only, isolating namespaces, and reducing lateral movement risk.
So, in the language of the provided answers, NetworkPolicy’s role is best captured as the ability to classify Pods into isolated/non-isolated by applying traffic-allow rules―option B.
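As an illustrative sketch of the isolation behavior described above (labels and the port are hypothetical), a policy that isolates database Pods for ingress and allows only app traffic might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
spec:
  podSelector:
    matchLabels:
      app: db          # the selected Pods become "isolated" for ingress
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web     # only Pods with this label may connect
    ports:
    - protocol: TCP
      port: 5432
```

With this applied (and a CNI that enforces NetworkPolicy), all other ingress traffic to `app: db` Pods is denied by default.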
Answer:
Explanation:
A governance board in an open source project typically defines how the community operates: its decision-making rules, roles, conflict resolution, and contribution expectations. So C (“Outline the project's terms of engagement”) is correct. In large cloud-native projects (Kubernetes being a prime example), clear governance is essential to coordinate many contributors, companies, and stakeholders. Governance establishes the “rules of the road” that keep collaboration productive and fair.
“Terms of engagement” commonly includes: how maintainers are selected, how proposals are reviewed (e.g., enhancement processes), how meetings and SIGs operate, what constitutes consensus, how voting works when consensus fails, and what code-of-conduct expectations apply. It also defines escalation and dispute resolution paths so technical disagreements don’t become community-breaking conflicts. In other words, governance is about ensuring the project has durable, transparent processes that outlive any individual contributor and support vendor-neutral decision making.
Option B (reviewing pull requests) is usually the responsibility of maintainers and SIG owners, not a governance board. The governance body may define the structure that empowers maintainers, but it generally does not do day-to-day code review.
Option A (marketing strategy) is often handled by foundations, steering committees, or separate outreach groups, not governance boards as their primary responsibility.
Option D (defining the license) is usually decided early and may be influenced by a foundation or legal process; while governance can shape legal/policy direction, the core governance responsibility is broader community operating rules rather than selecting a license.
In cloud-native ecosystems, strong governance supports sustainability: it encourages contributions, protects neutrality, and provides predictable processes for evolution. Therefore, the best verified answer is C.
Answer:
Explanation:
The correct command to get field-level schema information about spec.replicas in a Deployment is kubectl explain deployment.spec.replicas, so B is correct. kubectl explain is designed to retrieve documentation for resource fields directly from Kubernetes API discovery and OpenAPI schemas. When you use kubectl explain deployment.spec.replicas, kubectl shows what the field means, its type, and any relevant notes―exactly what “provides information about the field” implies.
This differs from kubectl get and kubectl describe. kubectl get is for retrieving actual objects or listing resources; it does not accept dot-paths like deployment.spec.replicas as a normal resource argument. You can use JSONPath/custom-columns with kubectl get deployment <name> -o jsonpath='{.spec.replicas}' to extract the live value, but that’s not what option A shows, and it’s not “documentation.” kubectl describe provides a human-friendly summary of a live resource instance (events, status, spec), but it doesn’t target schema field definitions via dot-path in the way shown in option C.
Option D is not valid syntax: kubectl explain deployment --spec.replicas is not how kubectl explain accepts nested field references. The correct pattern is positional dot notation: kubectl explain <resource>.<field>.<subfield>....
Understanding spec.replicas matters operationally: it defines the desired number of Pod replicas for a Deployment. The Deployment controller ensures that the corresponding ReplicaSet maintains that count, supporting self-healing if Pods fail. While autoscalers can adjust replicas automatically, the field remains the primary declarative knob. The question is specifically about finding information (schema docs) for that field, which is why kubectl explain deployment.spec.replicas is the verified correct answer.
Answer:
Explanation:
For Kubernetes Deployments, the default update strategy is RollingUpdate, which corresponds to “Rolling update” in option A. Rolling updates replace old Pods with new Pods gradually, aiming to maintain availability during the rollout. Kubernetes does this by creating a new ReplicaSet for the updated Pod template and then scaling the new ReplicaSet up while scaling the old one down.
The pace and safety of a rolling update are controlled by parameters like maxUnavailable and maxSurge. maxUnavailable limits how many replicas can be unavailable during the update, protecting availability. maxSurge controls how many extra replicas can be created temporarily above the desired count, helping speed up rollouts while maintaining capacity. If readiness probes fail, Kubernetes will pause progression because new Pods aren’t becoming Ready, helping prevent a bad version from fully replacing a good one.
Options B (Blue/Green) and C (Canary) are popular progressive delivery patterns, but they are not the default built-in Deployment strategy. They are typically implemented using additional tooling (service mesh routing, traffic splitting controllers, or specialized rollout controllers) or by operating multiple Deployments/Services.
Option D (Recreate) is a valid strategy but not the default; it terminates all old Pods before creating new ones, causing downtime unless you have external buffering or multi-tier redundancy.
From an application delivery perspective, RollingUpdate aligns with Kubernetes’ declarative model: you update the desired Pod template and let the controller converge safely. kubectl rollout status is commonly used to monitor progress. Rollbacks are also supported because the Deployment tracks history. Therefore, the verified correct answer is A: Rolling update.
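A sketch of the strategy fields discussed above, as a Deployment spec fragment (the 25% values shown are the Kubernetes defaults, written out explicitly for illustration):

```yaml
# Fragment of a Deployment spec (not a complete manifest)
spec:
  strategy:
    type: RollingUpdate      # the default strategy for Deployments
    rollingUpdate:
      maxUnavailable: 25%    # at most 25% of desired replicas may be unavailable during the rollout
      maxSurge: 25%          # up to 25% extra replicas may exist temporarily above the desired count
```

Tightening `maxUnavailable` (e.g., to 0) trades rollout speed for availability; raising `maxSurge` does the opposite, at the cost of temporary extra capacity.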
Answer:
Explanation:
The role most commonly responsible for defining, testing, and running an incident management process is Site Reliability Engineers (SREs), so A is correct. SRE is an operational engineering discipline focused on ensuring reliability, availability, and performance of services in production. Incident management is a core part of that mission: when outages or severe degradations occur, someone must coordinate response, restore service quickly, and then drive follow-up improvements to prevent recurrence.
In cloud native environments (including Kubernetes), incident response involves both technical and process elements. On the technical side, SREs ensure observability is in place (metrics, logs, traces, dashboards, and actionable alerts) so incidents can be detected and diagnosed quickly. They also validate operational readiness: runbooks, escalation paths, on-call rotations, and post-incident review practices. On the process side, SREs often establish severity classifications, response roles (incident commander, communications lead, subject matter experts), and “game day” exercises or simulated incidents to test preparedness.
Project managers may help coordinate schedules and communication for projects, but they are not typically the owners of operational incident response mechanics. Application developers are crucial participants during incidents, especially for debugging application-level failures, but they are not usually the primary maintainers of the incident management framework. Quality engineers focus on testing and quality assurance, and while they contribute to preventing defects, they are not usually the owners of real-time incident operations.
In Kubernetes specifically, incidents often span multiple layers: workload behavior, cluster resources, networking, storage, and platform dependencies. SREs are positioned to manage the cross-cutting operational view and to continuously improve reliability through error budgets, SLOs/SLIs, and iterative hardening. That’s why the correct persona is Site Reliability Engineers.
Answer:
Explanation:
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port should have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
The naming requirement becomes important because Kubernetes needs to disambiguate ports, especially when other resources refer to them. For example, an Ingress backend or some proxies/controllers can reference Service ports by name. Also, when multiple ports exist, a name helps humans and automation reliably select the correct port. The Kubernetes API in fact requires that every port be named (with a unique name) whenever a Service defines more than one port, precisely to avoid this ambiguity.
Option A is incorrect because multi-port Services are common and fully supported.
Option C is insufficient: while different port numbers are necessary, naming is the correct distinguishing rule emphasized by Kubernetes patterns and required by some integrations.
Option D is incorrect and nonsensical: Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is: multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
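A minimal multi-port Service sketch (names and ports are illustrative), showing the unique-name rule in practice:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: app
  ports:
  - name: http        # each port must have a unique name in a multi-port Service
    port: 80
    targetPort: 8080
  - name: metrics     # e.g., a scrape endpoint exposed under the same DNS name
    port: 9090
    targetPort: 9090
```

Clients reach both interfaces through one stable DNS name, differing only in the port (or port name, for consumers that resolve by name).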
Answer:
Explanation:
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
Option B (ContainerSet) is not a standard Kubernetes workload resource.
Option C (ReplicaSet) manages a set of Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself.
Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
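A sketch of the sidecar pattern described above (names, images, and paths are illustrative), with two containers sharing a volume in one Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}             # scratch space shared by both containers
  containers:
  - name: app
    image: nginx:1.25        # example main application container
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-tailer         # sidecar: reads the same files via the shared volume
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```

Both containers are scheduled to the same node, share the Pod's IP and localhost, and live and die as one unit.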
Answer:
Explanation:
Kubernetes’ networking model assumes that every Pod has its own IP address and that Pods can communicate with other Pods across nodes without requiring network address translation (NAT).
That makes B correct. This is one of Kubernetes’ core design assumptions and is typically implemented via CNI plugins that provide flat, routable Pod networking (or equivalent behavior using encapsulation/routing).
This model matters because scheduling is dynamic. The scheduler can place Pods anywhere in the cluster, and applications should not need to know whether a peer is on the same node or a different node. With the Kubernetes network model, Pod-to-Pod communication works uniformly: a Pod can reach any other Pod IP directly, and nodes can reach Pods as well. Services and DNS add stable naming and load balancing, but direct Pod connectivity is part of the baseline model.
Option A is incorrect because Pods can communicate directly using Pod IPs even without Services (subject to NetworkPolicies and routing). Services are abstractions for stable access and load balancing; they are not the only way Pods can communicate.
Option C is incorrect because Pod IPs are not limited to visibility “inside a Pod”; they are routable within the cluster network.
Option D is misleading: Services are often used by Pods (clients) to reach a set of Pods (backends). “Service IP used for communication between Services” is not the fundamental model; Services are virtual IPs for reaching workloads, and “Service-to-Service communication” usually means one workload calling another via the target Service name.
A useful way to remember the official model: (1) all Pods can communicate with all other Pods (no NAT), (2) all nodes can communicate with all Pods (no NAT), (3) Pod IPs are unique cluster-wide. This enables consistent microservice connectivity and supports higher-level traffic management layers like Ingress and service meshes.