Kubernetes and Cloud Native Associate (KCNA) Online Practice
Last updated: March 9, 2026
These online practice questions let you gauge your knowledge of The Linux Foundation KCNA exam before deciding whether to register for it.
The KCNA dumps (latest real exam questions) currently contain 126 questions and answers, aimed at helping you pass the exam and cut preparation time by about 35%.
Answer: C
Explanation:
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
This differs from horizontal scaling, which changes the number of instances (replicas).
Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA).
Option B describes scaling the infrastructure layer (nodes) which is cluster/node autoscaling (Cluster Autoscaler in cloud environments).
Option A is not a standard scaling definition.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound or CPU-bound behavior.
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
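As a concrete sketch, vertical scaling shows up in the manifest as the per-container requests and limits below; the Deployment name, image, and values are illustrative only:

```yaml
# Vertical scaling changes these per-container values, not the replica count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical name
spec:
  replicas: 3               # horizontal dimension (unchanged here)
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # example image
        resources:
          requests:
            cpu: "250m"     # e.g. scaled up from 100m
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
```

Applying a change to these fields triggers a rollout, since Pods must be restarted with the new resource settings.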
Answer: C
Explanation:
containerd is a widely adopted, industry-standard container runtime known for simplicity, robustness, and portability, so C is correct. containerd originated as a core component extracted from Docker and has become a common runtime across Kubernetes distributions and managed services. It implements container lifecycle management (image pull, unpack, container execution, snapshotting) and typically delegates low-level container execution to an OCI runtime like runc.
In Kubernetes, kubelet communicates with container runtimes through CRI. containerd provides a CRI plugin (or can be integrated via CRI implementations) that makes it a first-class choice for Kubernetes nodes. This aligns with the runtime landscape after dockershim removal: Kubernetes users commonly run containerd or CRI-O as the node runtime.
Option A (CRI-O) is also a CRI-focused runtime and is valid in Kubernetes contexts, but the phrasing “industry-standard … emphasis on simplicity, robustness, and portability” is strongly associated with containerd’s positioning and broad cross-platform adoption beyond Kubernetes.
Option B (LXD) is a system container manager (often associated with LXC) and not the standard Kubernetes runtime in mainstream CRI discussions.
Option D (kata-runtime) is associated with Kata Containers, which focuses on stronger isolation by running containers inside lightweight VMs; that is a security-oriented sandbox approach rather than a simplicity/portability “industry standard” baseline runtime.
From a cloud-native operations point of view, containerd’s popularity comes from its stable API, strong ecosystem support, and alignment with OCI standards. It integrates cleanly with image registries, supports modern snapshotters, and is heavily used in production by many Kubernetes providers. Therefore, the best verified answer is C: containerd.
Answer: C
Explanation:
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns (security, observability, traffic policy) across many workloads.
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle.
Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
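A minimal sketch of the pattern described above, using a hypothetical application image and a busybox log-tailing sidecar that shares an emptyDir volume:

```yaml
# Two containers in ONE Pod: shared network namespace and shared volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}               # shared scratch space for log files
  containers:
  - name: app                  # main application container
    image: example.com/my-app:1.0   # hypothetical image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder        # sidecar: reads what the app writes
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

Both containers are scheduled, scaled, and terminated together, which is exactly the shared-lifecycle property the explanation describes.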
Answer: C
Explanation:
A Deployment’s replica count is controlled by spec.replicas. You can scale a Deployment by changing that field, either by editing the object directly or by using kubectl’s scaling helper. Therefore C is correct: you can scale using kubectl scale and also via kubectl edit.
kubectl scale deployment <name> --replicas=<n> is the purpose-built command for scaling. It updates the Deployment’s desired replica count and lets the Deployment controller and ReplicaSet reconcile the change by creating or deleting Pods. This is the cleanest and most explicit operational approach, and it’s easy to automate in scripts and pipelines.
kubectl edit deployment <name> opens the live object in an editor, allowing you to modify fields such as spec.replicas manually. When you save and exit, kubectl submits the updated object back to the API server. This method is useful for quick interactive adjustments or when you’re already making other spec edits, but it’s less structured and more error-prone than kubectl scale for simple replica changes.
Option B is invalid because kubectl scale-up deployment is not a standard kubectl command.
Option A is incorrect because kubectl edit is not the only method; scaling is commonly done with kubectl scale.
Option D is also incorrect because while kubectl scale is a primary method, kubectl edit is also a valid method to change replicas.
In production, you often scale with autoscalers (HPA/VPA), but the question is asking about kubectl methods. The key Kubernetes concept is that scaling is achieved by updating desired state (spec.replicas), and controllers reconcile Pods to match.
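To tie the two kubectl methods to the underlying desired state, here is an illustrative Deployment (name and image are hypothetical) showing the spec.replicas field both methods modify; the equivalent commands appear as comments:

```yaml
# Equivalent ways to reach the same desired state:
#   kubectl scale deployment web --replicas=5
#   kubectl edit deployment web    # then change spec.replicas to 5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5        # the single field of desired state being scaled
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```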
Answer: D
Explanation:
Under the Kubernetes Pod Security Standards (PSS), the Restricted profile is the most locked-down baseline intended to reduce container privilege and host attack surface. In that profile, adding Linux capabilities is generally prohibited except for very limited cases. Among the listed capabilities, NET_BIND_SERVICE is the one commonly permitted in restricted-like policies, so D is correct.
NET_BIND_SERVICE allows a process to bind to “privileged” ports below 1024 (like 80/443) without running as root. This aligns with restricted security guidance: applications should run as non-root, but still sometimes need to listen on standard ports. Allowing NET_BIND_SERVICE enables that pattern without granting broad privileges.
The other capabilities listed are more sensitive and typically not allowed in a restricted profile: CHOWN can be used to change file ownership, SETUID relates to privilege changes and can be abused, and SYS_CHROOT is a broader system-level capability associated with filesystem root changes. In hardened Kubernetes environments, these are normally disallowed because they increase the risk of privilege escalation or container breakout paths, especially if combined with other misconfigurations.
A practical note: exact enforcement depends on the cluster’s admission configuration (e.g., the built-in Pod Security Admission controller) and any additional policy engines (OPA/Gatekeeper). But the security intent of “Restricted” is consistent: run as non-root, disallow privilege escalation, restrict capabilities, and lock down host access. NET_BIND_SERVICE is a well-known exception used to support common application networking needs while staying non-root.
So, the verified correct choice for an allowed capability in Restricted among these options is D: NET_BIND_SERVICE.
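A sketch of a Restricted-compliant container spec, assuming a hypothetical web server image that needs to bind port 80 while running as non-root:

```yaml
# Pod conforming to the Restricted profile's capability rules (illustrative).
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: web
    image: example.com/web:1.0    # hypothetical non-root web server image
    ports:
    - containerPort: 80           # privileged port, usable via the capability below
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"] # the one addition Restricted permits
```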
Answer: A
Explanation:
Kubernetes supports two primary built-in modes of Service discovery for workloads: environment variables and DNS, making A correct.
Environment variables: When a Pod is created, kubelet can inject environment variables for Services that exist in the same namespace at the time the Pod starts. These variables include the Service host and port (for example, MY_SERVICE_HOST and MY_SERVICE_PORT). This approach is simple but has limitations: values are captured at Pod creation time and don’t automatically update if Services change, and it can become cluttered in namespaces with many Services.
DNS-based discovery: This is the most common and flexible method. Kubernetes cluster DNS (usually CoreDNS) provides names like service-name.namespace.svc.cluster.local. Clients resolve the name and connect to the Service, which then routes to backend Pods. DNS scales better, is dynamic with endpoint updates, supports headless Services for per-Pod discovery, and is the default pattern for microservice communication.
The other options are not Kubernetes service discovery modes. Labels and selectors are used internally to relate Services to Pods, but they are not what application code uses for discovery (apps typically don’t query selectors; they call DNS names). LDAP and RADIUS are identity/authentication protocols, not service discovery. DHCP is for IP assignment on networks, not for Kubernetes Service discovery.
Operationally, DNS is central: many applications assume name-based connectivity. If CoreDNS is misconfigured or overloaded, service-to-service calls may fail even if Pods and Services are otherwise healthy. Environment-variable discovery can still work for some legacy apps, but modern cloud-native practice strongly prefers DNS (and sometimes service meshes on top of it). The key exam concept is: Kubernetes provides service discovery via env vars and DNS.
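To make the two discovery modes concrete, here is a hypothetical Service definition together with the DNS name and environment variables it yields:

```yaml
# A Service named "backend" in namespace "shop" (illustrative) is reachable as:
#   backend.shop.svc.cluster.local        (cluster DNS, usually CoreDNS)
# Pods started afterwards in the same namespace also receive env vars like:
#   BACKEND_SERVICE_HOST / BACKEND_SERVICE_PORT
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: shop
spec:
  selector:
    app: backend      # labels relate the Service to its Pods internally
  ports:
  - port: 80          # port the Service exposes
    targetPort: 8080  # port the backend Pods listen on
```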
Answer: C
Explanation:
To retrieve CPU and memory consumption for nodes or Pods, you use kubectl top, so C is correct. kubectl top nodes shows per-node resource usage, and kubectl top pods shows per-Pod (and optionally per-container) usage. This data comes from the Kubernetes resource metrics pipeline, most commonly metrics-server, which scrapes kubelet/cAdvisor stats and exposes them via the metrics.k8s.io API.
It’s important to recognize that kubectl top provides current resource usage snapshots, not long-term historical trending. For long-term metrics and alerting, clusters typically use Prometheus and related tooling. But for quick operational checks (“Is this Pod CPU-bound?”, “Are nodes near memory saturation?”), kubectl top is the built-in day-to-day tool.
Option A (kubectl cluster-info) shows general cluster endpoints and info about control plane services, not resource usage.
Option B (kubectl version) prints client/server version info.
Option D (kubectl api-resources) lists resource types available in the cluster. None of those report CPU/memory usage.
In observability practice, kubectl top is often used during incidents to correlate symptoms with resource pressure. For example, if a node is high on memory, you might see Pods being OOMKilled or the kubelet evicting Pods under pressure. Similarly, sustained high CPU utilization might explain latency spikes or throttling if limits are set. Note that kubectl top requires metrics-server (or an equivalent provider) to be installed and functioning; otherwise it may return errors like “metrics not available.”
So, the correct command for retrieving node/Pod CPU and memory usage is kubectl top.
Answer: C
Explanation:
In Kubernetes, the Pod is the smallest deployable and schedulable unit, making C correct. Kubernetes does not schedule individual containers directly; instead, it schedules Pods, each of which encapsulates one or more containers that must run together on the same node. This design supports both single-container Pods (the most common) and multi-container Pods (for sidecars, adapters, and co-located helper processes).
Pods provide shared context: containers in a Pod share the same network namespace (one IP address and port space) and can share storage volumes. This enables tight coupling where needed: for example, a service mesh proxy sidecar and the application container communicate via localhost, or a log-forwarding sidecar reads logs from a shared volume. Kubernetes manages lifecycle at the Pod level: kubelet ensures the containers defined in the PodSpec are running and uses probes to determine readiness and liveness.
StatefulSet and Deployment are controllers that manage sets of Pods. A Deployment manages ReplicaSets for stateless workloads and provides rollout/rollback features; a StatefulSet provides stable identities, ordered operations, and stable storage for stateful replicas. These are higher-level constructs, not the smallest units.
Option D (“Container”) is smaller in an abstract sense, but it is not the smallest Kubernetes deployable unit because Kubernetes APIs and scheduling work at the Pod boundary. You don’t “kubectl apply” a container; you apply a Pod template within a Pod object (often via controllers).
Understanding Pods as the atomic unit is crucial: Services select Pods, autoscalers scale Pods (replica counts), and scheduling decisions are made per Pod. That’s why Kubernetes documentation consistently refers to Pods as the fundamental building block for running workloads.
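A minimal Pod manifest illustrating the point that you apply Pods, not bare containers (all names here are illustrative):

```yaml
# The smallest deployable unit Kubernetes schedules is the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello             # one or more containers live inside the Pod
    image: busybox:1.36
    command: ["sh", "-c", "echo hello; sleep 3600"]
```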
Answer: B
Explanation:
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations.
Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches (containers, declarative APIs, automation, and microservices-friendly patterns) reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn’t explain why it matters; it lists ingredients rather than the benefit.
Option C is vague (“modern”) and again doesn’t capture the core value proposition.
Option D is incorrect because cloud native is not primarily about being “bleeding edge”; it is about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability, which is exactly why cloud-native architecture is emphasized across the Kubernetes ecosystem.
Answer: D
Explanation:
A Kubernetes Service routes traffic to a dynamic set of backends (usually Pods). The set of backend IPs and ports is represented by endpoint-tracking resources. Historically this was the Endpoints object; today Kubernetes commonly uses EndpointSlice for scalability, but the concept remains the same: endpoints represent the concrete network destinations behind a Service. That’s why D is correct: a Service endpoint is an object that contains the IP addresses (and ports) of the individual Pods (or other backends) associated with that Service.
When a Service has a selector, Kubernetes automatically maintains endpoints by watching which Pods match the selector and are Ready, then publishing those Pod IPs into Endpoints/EndpointSlices. Consumers don’t usually use endpoints directly; instead they call the Service DNS name, and kube-proxy (or an alternate dataplane) forwards traffic to one of the endpoints. Still, endpoints are critical because they are what make Service routing accurate and up to date during scaling events, rolling updates, and failures.
Option A confuses this with the Kubernetes API server endpoint (the cluster API URL).
Option B is incorrect; there’s no special “Service Endpoint Pod.” Option C describes an external/public IP concept, which may exist for LoadBalancer Services, but “Service endpoint” in Kubernetes vocabulary is about the backend destinations, not the public entrypoint.
Operationally, endpoints are useful for debugging: if a Service isn’t routing traffic, checking Endpoints/EndpointSlices shows whether the Service actually has backends and whether readiness is excluding Pods. This ties directly into Kubernetes service discovery and load balancing: the Service is the stable front door; endpoints are the actual backends.
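For orientation, this is roughly the shape of an EndpointSlice the control plane maintains for a Service; all names and IPs are illustrative, and you would normally inspect these rather than author them:

```yaml
# Inspect with (illustrative Service name):
#   kubectl get endpointslices -l kubernetes.io/service-name=backend
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: backend-abc12              # generated name (hypothetical)
  labels:
    kubernetes.io/service-name: backend
addressType: IPv4
ports:
- name: http
  port: 8080
endpoints:
- addresses: ["10.244.1.17"]       # a backend Pod IP (illustrative)
  conditions:
    ready: true                    # only Ready Pods receive traffic
```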
Answer: C
Explanation:
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations.
Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience.
Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
Answer: A
Explanation:
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them.
Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas.
This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination (placement + elasticity + self-healing) is the core of container orchestration, matching option A precisely.
Answer: A
Explanation:
Dynamic provisioning is the Kubernetes mechanism where storage is created on-demand when a user creates a PersistentVolumeClaim (PVC) that references a StorageClass, so A is correct. In this model, the user does not need to pre-create a PersistentVolume (PV). Instead, the StorageClass points to a provisioner (typically a CSI driver) that knows how to create a volume in the underlying storage system (cloud disk, SAN, NAS, etc.). When the PVC is created with storageClassName: <class>, Kubernetes triggers the provisioner to create a new volume and then binds the resulting PV to that PVC.
This is why option B is incorrect: you do not put a StorageClass “in the Pod YAML” to request provisioning. Pods reference PVCs, not StorageClasses directly.
Option C is incorrect because the PVC does not need the Pod name; binding is done via the PVC itself.
Option D describes static provisioning: an admin pre-creates PVs and users claim them by creating PVCs that match the PV (capacity, access modes, selectors). Static provisioning can work, but it is not dynamic provisioning.
Under the hood, the StorageClass can define parameters like volume type, replication, encryption, and binding behavior (e.g., volumeBindingMode: WaitForFirstConsumer to delay provisioning until the Pod is scheduled, ensuring the volume is created in the correct zone). Reclaim policies (Delete/Retain) define what happens to the underlying volume after the PVC is deleted.
In cloud-native operations, dynamic provisioning is preferred because it improves developer self-service, reduces manual admin work, and makes scaling stateful workloads easier and faster. The essence is: PVC + StorageClass → automatic PV creation and binding.
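A sketch of the PVC + StorageClass flow; the StorageClass name and the CSI provisioner string are hypothetical:

```yaml
# Dynamic provisioning: the PVC references the StorageClass, which points
# to a provisioner that creates the PV on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: example.csi.vendor.com   # hypothetical CSI driver
reclaimPolicy: Delete                 # delete the backing volume with the PVC
volumeBindingMode: WaitForFirstConsumer   # provision in the Pod's zone
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast              # triggers on-demand PV creation
  resources:
    requests:
      storage: 10Gi
```

A Pod then mounts the PVC by name; at no point does the user create a PV or reference the StorageClass from the Pod spec.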
Answer: B
Explanation:
A headless Service is created by setting spec.clusterIP: None, so B is correct. Normally, a Service gets a ClusterIP, and kube-proxy (or an alternative dataplane) implements virtual-IP-based load balancing to route traffic from that ClusterIP to the backend Pods. A headless Service intentionally disables that virtual IP allocation. Instead of giving you a single stable VIP, Kubernetes publishes DNS records that resolve directly to the endpoints (the Pod IPs) behind the Service.
This is especially important for workloads that need direct endpoint discovery or stable per-Pod identities, such as StatefulSets. With a headless Service, clients can discover all Pod IPs (or individual Pod DNS names in StatefulSet patterns) and implement their own selection, quorum, or leader/follower logic. Kubernetes DNS (CoreDNS) responds differently for headless Services: rather than returning a single ClusterIP, it returns multiple A/AAAA records (one per endpoint) or SRV records for named ports, enabling richer service discovery behavior.
The other options are invalid. “headless” is not a magic value for clusterIP; the API expects either an actual IP address assigned by the cluster or the special literal None. 0.0.0.0 and localhost are not valid ways to request headless semantics. Kubernetes uses None specifically to signal “do not allocate a ClusterIP.”
Operationally, headless Services are used to: (1) expose each backend instance individually, (2) support stateful clustering and stable DNS names, and (3) avoid load balancing when the application or client library must choose endpoints itself. The key is that the Service still provides a stable DNS name, but the resolution yields endpoints, not a VIP.
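A minimal headless Service sketch (name, selector, and port are illustrative):

```yaml
# Headless Service: clusterIP: None disables the VIP; cluster DNS returns
# the Pod IPs directly instead of a single virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None      # the literal None, not "headless", 0.0.0.0, or localhost
  selector:
    app: db
  ports:
  - port: 5432
```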
Answer: A
Explanation:
In Kubernetes, the standard way to execute a command inside a running container is kubectl exec, which is why A is correct. kubectl exec calls the Kubernetes API (API server), which then coordinates with the kubelet on the target node to run the requested command inside the container using the container runtime’s exec mechanism. The -- separator is important: it tells kubectl that everything after -- is the command to run in the container rather than flags for kubectl itself.
This is fundamentally different from docker exec. In Kubernetes, you don’t normally target containers through Docker/CRI tools directly because Kubernetes abstracts the runtime behind CRI. Also, “Docker” might not even be installed on nodes in modern clusters (containerd/CRI-O are common).
So option B is not the Kubernetes-native approach and often won’t work.
kubectl run (option C) is for creating a new Pod (or generating workload resources), not for executing a command in an existing container. kubectl attach (option D) attaches your terminal to a running container’s process streams (stdin/stdout/stderr), which is useful for interactive sessions, but it does not execute an arbitrary new command like exec does.
In real usage, you often specify the container when a Pod has multiple containers: kubectl exec -it <pod> -c <container> -- /bin/sh. This is common for debugging, verifying config files mounted from ConfigMaps/Secrets, testing DNS resolution, or checking network connectivity from within the Pod network namespace. Because exec uses the API and kubelet, it respects Kubernetes access control (RBAC) and audit logging, which is another reason it’s the correct operational method.