Data Center, Associate (JNCIA-DC) Online Practice
Last updated: March 9, 2026
You can use these online practice questions to gauge how well you know the Juniper JN0-281 exam material before deciding whether to register for the exam.
If you hope to pass the exam with a 100% success rate and save 35% of your preparation time, choose the JN0-281 dumps (latest real exam questions), which currently include 65 exam questions and answers.
Answer:
Explanation:
In a leaf spine IP fabric, spines form the high-speed transit layer and leaves provide the attachment points for servers, services, and edge connectivity. The defining physical topology rule is that every spine connects directly to every leaf. This design creates consistent one-hop transit through a spine for traffic between any two leaves, which keeps latency predictable and simplifies capacity planning. It also enables equal-cost multipath routing across all available spine links, allowing the fabric to use bandwidth efficiently and recover quickly from failures by shifting traffic to remaining paths.
Spine-to-spine connections are not required in a classic two-tier leaf spine fabric. Adding spine-to-spine links can create unnecessary complexity and does not improve the standard forwarding model, because spines are intended to provide transit between leaves, not to act as an additional meshed layer. Likewise, there is no inherent requirement that each spine must have two or more physical links to each leaf. Many fabrics start with one link per spine-leaf pair and scale capacity by adding more spines or adding additional parallel links as demand grows. The redundancy objective is achieved primarily by multiple spines and multiple available routed paths, not by mandating multiple links between every spine and every leaf from the outset.
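As a rough illustration, ECMP across the spines is typically enabled on a Junos leaf by combining BGP multipath with a per-flow load-balancing export policy. The group and policy names below are hypothetical, and the lines are annotated set commands, not literal CLI output:

```
# Accept equal-cost BGP paths learned from spines in different ASNs (group name hypothetical)
set protocols bgp group UNDERLAY multipath multiple-as
# Install all equal-cost next hops in the forwarding table ("per-packet" here hashes per flow)
set policy-options policy-statement LOAD-BALANCE then load-balance per-packet
set routing-options forwarding-table export LOAD-BALANCE
```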

Answer:
Explanation:
The commit error indicates that the interface is being treated as an access port while the configuration attempts to associate it with more than one VLAN. In Junos Ethernet switching, an access mode interface represents a single untagged VLAN membership. Because access ports accept and transmit frames without 802.1Q tags, the switch must map all ingress untagged traffic to exactly one VLAN. For that reason, Junos enforces the rule that an access interface can be part of only one VLAN, and it rejects configurations that try to add multiple VLAN members under access mode.
There are two valid ways to resolve this, depending on the intended design. First, if the port truly connects to a single endpoint that should live in only one broadcast domain, configure the interface as a member of only one VLAN. This aligns with access port semantics and eliminates the conflict that causes the commit to fail.
Second, if the endpoint or downstream device needs to carry multiple VLANs over the same physical link, change the interface to trunk mode. A trunk port is designed to transport multiple VLANs using 802.1Q tagging, so multiple VLAN members are valid and expected. In data center environments, trunking is common for server virtualization hosts, appliance uplinks, and switch-to-switch links.
Connecting the interface to the network does not affect configuration validation, and logical unit numbering is unrelated to VLAN membership rules for access ports.
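The two fixes could look like the following ELS-style set commands; the interface and VLAN names are hypothetical:

```
# Fix 1: keep access mode, but with exactly one VLAN member
set interfaces ge-0/0/10 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members servers

# Fix 2: switch to trunk mode so multiple VLAN members are valid
set interfaces ge-0/0/10 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members [ servers storage ]
```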
Answer:
Explanation:
An access port is a Layer 2 switch interface mode intended for an endpoint that belongs to a single VLAN. Traffic on an access port is associated with exactly one VLAN, and frames are typically transmitted and received untagged on the wire. The switch internally maps that untagged traffic into the configured access VLAN, placing the endpoint into the correct broadcast domain. This behavior is widely used for server access, management ports, out of band devices, and any endpoint that does not tag VLANs.
By contrast, trunk ports are designed to carry multiple VLANs simultaneously, usually with 802.1Q tagging, and are typically used between switches, to routers, to virtualization hosts, or to appliances that handle multiple VLANs. That is why assigning multiple VLANs to an access port is not the standard access mode behavior.
An access port does not have to connect to a router or a firewall. It can connect directly to any Ethernet endpoint. Routing between VLANs is provided by a Layer 3 interface such as an IRB interface on the switch or an external routed device, but that is independent of whether the endpoint connects on an access port.
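A minimal sketch of inter-VLAN routing with an IRB interface on the switch, assuming ELS syntax; the VLAN name, VLAN ID, and address are hypothetical:

```
# Bind the VLAN to a routed IRB unit so hosts in the VLAN can reach other subnets
set vlans servers vlan-id 100
set vlans servers l3-interface irb.100
set interfaces irb unit 100 family inet address 10.0.100.1/24
```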
Verification sources from Juniper documentation
https://www.juniper.net/documentation/us/en/software/junos/multicast-l2/topics/topic-map/bridging-and-vlans.html
https://www.juniper.net/documentation/us/en/software/junos/multicast-l2/topics/task/interfaces-configuring-ethernet-switching-access.html
Answer:
Explanation:
In modern data center architectures, an overlay network is a logical or virtual network built on top of an underlay, where the underlay is the physical IP fabric that provides basic Layer 3 transport and reachability between fabric nodes. The overlay abstracts the physical topology and delivers tenant connectivity and segmentation services. This makes the statement that an overlay is a virtual network running on top of a physical network correct.
In EVPN VXLAN based data centers, overlays commonly carry both Layer 2 and Layer 3 services. Layer 2 extension is achieved by encapsulating Ethernet frames inside VXLAN so that a VLAN like segment can span multiple leaf switches across a routed underlay. Layer 3 services are delivered either through symmetric routing at the VTEPs or by advertising IP prefixes and host routes through the overlay control plane. As a result, overlays are not limited to Layer 3 traffic only, and statement D is correct.
The overlay is not defined by being public versus private. It can be built for private multi-tenant segmentation inside a single data center, across multiple data centers, or to connect to external services, but being a public network is not a defining attribute.
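A sketch of how a leaf maps a local VLAN into the overlay, assuming an EVPN-VXLAN design; the names and IDs are hypothetical, and the BGP EVPN peering, route-distinguisher, and route-target configuration are omitted:

```
# Map a local VLAN to a VXLAN network identifier
set vlans tenant-a vlan-id 100
set vlans tenant-a vxlan vni 10100
# Use EVPN as the overlay control plane, sourcing VXLAN tunnels from the loopback
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list 10100
set switch-options vtep-source-interface lo0.0
```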
Verification sources from Juniper documentation
https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/topic-map/evpn-vxlan-overview.html
https://www.juniper.net/documentation/us/en/software/junos/vxlan/topics/topic-map/vxlan-overview.html
Answer:
Explanation:
Link aggregation groups, implemented on Junos as aggregated Ethernet interfaces, allow multiple physical Ethernet links to operate as one logical Layer 2 interface. This increases available bandwidth and provides link level resiliency because member links can fail without taking down the logical interface, as long as at least one member remains operational. In data center leaf spine designs, link aggregation is commonly used for server dual homing, uplinks to appliances, or inter switch connectivity where parallel links are desired with a single logical adjacency.
From a forwarding perspective, the device distributes traffic across member links using a hashing algorithm based on packet header fields so that individual flows remain in order while the aggregate uses multiple links. Control plane operation can be static or negotiated with LACP. With LACP, both sides exchange protocol information to ensure consistent bundling and to remove failed or miswired members automatically. This makes link aggregation a core high availability building block at the link layer, independent of Routing Engine redundancy features.
Graceful Routing Engine switchover, nonstop bridging, and nonstop active routing are control plane redundancy features. They are designed to minimize disruption during Routing Engine failover or preserve protocol state, but they do not combine multiple physical Ethernet interfaces into one logical link layer interface.
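A minimal aggregated Ethernet sketch with LACP; the member interfaces and bundle number are hypothetical:

```
# Reserve aggregated Ethernet interfaces on the chassis
set chassis aggregated-devices ethernet device-count 2
# Assign two physical members to the ae0 bundle
set interfaces ge-0/0/0 ether-options 802.3ad ae0
set interfaces ge-0/0/1 ether-options 802.3ad ae0
# Negotiate bundling with LACP and use the logical interface for switching
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
```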
Verification sources from Juniper documentation
https://www.juniper.net/documentation/us/en/software/junos/interfaces-ethernet/topics/topic-map/understanding-lacp.html
https://www.juniper.net/documentation/us/en/software/junos/interfaces-ethernet/topics/topic-map/aggregated-ethernet-interfaces-overview.html
Answer:
Explanation:
EBGP is defined as BGP peering between different autonomous systems, which makes statement A correct. In data center IP fabrics, it is common to assign different private AS numbers to leaves and spines or to use a structured AS design so that every leaf forms EBGP sessions to each spine. This provides clear policy boundaries, straightforward troubleshooting, and predictable route propagation without needing an additional interior gateway protocol to carry underlay reachability.
Statement C is also correct because EBGP can be deployed without a supporting IGP. BGP itself can distribute the underlay routes needed for fabric reachability, such as loopback addresses and point to point link prefixes. This is a widely used approach for leaf spine fabrics because it reduces protocol complexity and avoids running multiple control planes for the underlay. Convergence can be improved using multipath, rapid failure detection mechanisms, and consistent routing policy.
Statement B is incorrect because BGP within a single AS is IBGP, not EBGP. Statement D is incorrect because while some designs may choose to run an IGP and use BGP only for certain functions, EBGP does not inherently require an IGP to operate or to provide underlay connectivity in a fabric design.
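One common sketch of an EBGP underlay on a leaf, assuming a design where each leaf has its own private AS and the spines share one; the AS numbers, neighbor addresses, and export policy name are hypothetical, and the export policy itself (advertising loopback and link prefixes) is not shown:

```
# Hypothetical leaf with local AS 65001 peering to two spines in AS 65100
set routing-options autonomous-system 65001
set protocols bgp group UNDERLAY type external
set protocols bgp group UNDERLAY export ADVERTISE-LOOPBACK
set protocols bgp group UNDERLAY neighbor 172.16.0.0 peer-as 65100
set protocols bgp group UNDERLAY neighbor 172.16.0.2 peer-as 65100
```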
Answer:
Explanation:
A modern IP fabric in a leaf spine topology is built to deliver predictable latency, high bandwidth, and horizontal scale. The defining characteristic is that every leaf switch connects upstream to every spine switch. This full mesh between leaf and spine layers creates multiple equal cost paths between any two endpoints connected anywhere in the fabric. With this design, east west traffic between servers attached to different leaves can traverse the fabric using one spine hop, keeping path length consistent and enabling efficient load sharing across links.
Leaf-to-leaf direct connections are not used in a standard leaf spine IP fabric because they create irregular topologies, complicate troubleshooting, and undermine the uniform multipath behavior that makes fabrics scalable. Similarly, a leaf does not inherently require two or more physical connections to each spine. Many designs start with a single link per leaf spine pair and increase capacity by adding additional parallel links as needed. Redundancy is achieved by having multiple spines and multiple equal cost paths, not by mandating multiple links to every spine from day one.
Server connectivity requirements also vary. Some servers use single uplinks, others use dual homing for redundancy. That decision is independent of the fundamental requirement that each leaf should connect to every spine.
Answer:
Explanation:
Generated routes in Junos are used to conditionally create a route in the routing table when specific criteria are satisfied. They are policy-driven and allow operators to inject a prefix only when the device has the necessary supporting reachability or other configured conditions. This is useful in data center environments to ensure that certain aggregate or service prefixes are advertised only when the infrastructure can truly forward traffic toward the intended destinations. For example, a generated route can be used to advertise a summary prefix into BGP only if one or more contributing routes exist, preventing blackholing that could occur if a summary is advertised while downstream reachability is missing.
Generated routes do not create a separate routing table; routing tables are created by routing instances and related configuration constructs. They also are not intended simply to increase the number of advertisements. Their value is correctness and control, not volume. Finally, expanding a single advertisement into multiple routes is not the purpose of generated routes; that behavior is more aligned with route policies that manipulate announcements, prefix lists, or mechanisms that originate multiple prefixes by configuration.
In short, generated routes provide conditional route origination based on defined match conditions, enabling safer summarization and controlled advertisement patterns that are common in scalable data center routing designs.
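A sketch of a generated route that originates a summary only while contributing routes exist; the prefix, policy name, and term name are hypothetical:

```
# Originate 10.0.0.0/16 only while at least one contributing route is accepted by the policy
set routing-options generate route 10.0.0.0/16 policy CONTRIBUTORS
set policy-options policy-statement CONTRIBUTORS term more-specifics from route-filter 10.0.0.0/16 longer
set policy-options policy-statement CONTRIBUTORS term more-specifics then accept
```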
Answer:
Explanation:
Nonstop bridging is a high availability capability focused on maintaining Layer 2 switching continuity during a Routing Engine switchover on platforms that support redundant control planes. The intent is to keep Layer 2 forwarding operational and minimize disruption to bridged traffic when the system transitions from a primary to a backup Routing Engine. Achieving this requires Graceful Routing Engine switchover, because GRES is the mechanism that enables a control plane switchover while keeping forwarding and interface state stable. With GRES in place, the forwarding plane can continue switching frames while the backup Routing Engine assumes control, reducing or eliminating traffic loss for Layer 2 domains.
Nonstop bridging is not the feature that preserves Layer 3 protocol sessions and routing information end-to-end. That function is associated with nonstop routing capabilities, which focus on maintaining routing protocol state across Routing Engine events. Therefore, stating that nonstop bridging preserves Layer 3 information and protocol sessions is incorrect. Likewise, nonstop active routing is not a requirement for nonstop bridging; it is a separate feature aimed at routing stability. The flow-control setting under gigether-options is unrelated to Routing Engine redundancy and does not determine whether nonstop bridging operates.
In data center access and aggregation environments where VLANs must remain stable for servers and appliances, nonstop bridging paired with GRES helps protect Layer 2 service continuity during control plane events.
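On a platform with redundant Routing Engines, enabling this combination could look like the following sketch:

```
# Enable GRES so a Routing Engine switchover keeps forwarding state stable
set chassis redundancy graceful-switchover
# Keep both Routing Engines synchronized at commit time
set system commit synchronize
# Enable nonstop bridging to preserve Layer 2 protocol state across the switchover
set protocols layer2-control nonstop-bridging
```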
Answer:
Explanation:
An IP fabric underlay is the routed foundation of a modern leaf-spine data center. Its purpose is to provide scalable, deterministic Layer 3 reachability between all fabric nodes, typically using point-to-point routed links between leaves and spines. In this design, EBGP is commonly used as an underlay routing protocol because it scales well, supports clear policy boundaries, and enables fast convergence and operational simplicity. Each leaf forms EBGP sessions to each spine, advertising loopback addresses and link subnets so that overlay endpoints and control plane services can reach one another reliably.
RSTP is a Layer 2 spanning tree mechanism and is not the standard protocol for a routed underlay. EVPN is an overlay control plane used to distribute tenant reachability and multihoming information; it is not the underlay routing protocol itself. VXLAN is a data plane encapsulation used by the overlay to transport Layer 2 segments across a Layer 3 fabric; it also is not the underlay routing protocol.
In Juniper data center architectures, the underlay is intentionally kept simple and purely routed, while overlays such as EVPN VXLAN deliver multi-tenant Layer 2 and Layer 3 services on top of that underlay. EBGP fits the underlay requirement among the provided options.
Answer:
Explanation:
BGP uses a small set of well-defined message types to form and maintain peerings and to exchange routing information. The Open message is used during session establishment after the TCP connection is up. It communicates the parameters required to form the BGP session, such as the BGP version, the autonomous system number, the negotiated hold time, the BGP identifier, and optional capabilities. Capabilities are especially important in data center designs because they enable features such as 4 byte ASNs, route refresh, and EVPN signaling when applicable.
The Update message is the core mechanism BGP uses to advertise reachability and to withdraw routes that are no longer valid. In a data center underlay using EBGP, Update messages carry the prefixes that represent loopbacks and point-to-point links, enabling leaf and spine reachability. In an EVPN control plane, Update messages carry EVPN Network Layer Reachability Information to distribute MAC and IP reachability and multihoming information across the fabric.
Hello is not a BGP message type. Hello is commonly associated with protocols like OSPF, IS-IS, and some discovery mechanisms. LSA is not a BGP message type either; Link State Advertisements are specific to OSPF.

Answer:
Explanation:
The exhibit shows BFD liveness detection configured under a BGP group with minimum-interval set to 1000 milliseconds. In Junos, BFD provides rapid failure detection by sending periodic BFD control packets between neighbors. The minimum-interval value is the negotiated minimum transmit and receive interval used for BFD control packets for that session. A neighbor is declared down when the local system fails to receive a certain number of consecutive BFD packets within the expected time window.
That time window is determined by the BFD detection time, which is calculated as the minimum-interval multiplied by the BFD multiplier. The multiplier represents how many BFD control packets can be missed before the session is considered failed. If the multiplier is not explicitly configured, Junos uses the default multiplier value of 3. Therefore, with minimum-interval set to 1000 ms and the default multiplier of 3, the detection time becomes 3000 ms. After approximately 3 seconds without receiving the expected BFD control packets, the BFD session transitions to down and BGP can react by treating the associated peer as unreachable for fast convergence.
This behavior is commonly used in data center underlays and EVPN fabrics to reduce convergence time compared to relying only on BGP hold timers.
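The configuration described above would look roughly like this under a BGP group (the group name is hypothetical); note that the second line only makes the default explicit:

```
# BFD for all peers in the group: negotiated interval of 1000 ms
set protocols bgp group UNDERLAY bfd-liveness-detection minimum-interval 1000
# Multiplier defaults to 3; detection time = 1000 ms x 3 = 3000 ms
set protocols bgp group UNDERLAY bfd-liveness-detection multiplier 3
```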

Answer:
Explanation:
In the BGP finite state machine, OpenConfirm indicates that the TCP session is already established and the BGP OPEN message exchange has successfully completed. At this point, both peers have agreed on critical session parameters such as BGP version, autonomous system numbers, hold time, and the BGP identifier. Because the OPEN messages have been processed, BGP transitions from OpenSent to OpenConfirm and then waits for the next control message that validates the session is operational.
Specifically, in OpenConfirm the local BGP process is waiting to receive a KEEPALIVE message from the neighbor, which confirms that the neighbor accepted the OPEN and is ready to bring the session to the Established state. If instead a NOTIFICATION message is received, it indicates an error condition and the session will be torn down. This is why option D is correct.
Option A describes OpenSent, where the router is waiting to receive an OPEN after sending its own OPEN.
Option B aligns more closely with Connect, where BGP is still trying to complete the transport connection.
Option C relates to Idle, where BGP waits for a start event or configuration to initiate the session. In data center BGP underlays and EVPN control planes, being stuck in OpenConfirm commonly points to policy mismatch, capability negotiation issues, or keepalive handling problems, rather than basic IP reachability.
Answer:
Explanation:
In Junos, a static route can be configured with multiple next hops. When you want a “primary and backup” behavior, the correct mechanism is a qualified next hop. A qualified next hop allows you to assign a different preference to each next hop for the same destination prefix. The static route with the best preference is selected as active, while the higher-preference value next hop remains available as a standby option. If the primary next hop becomes unusable, Junos can install the backup qualified next hop so forwarding continues with minimal operational change, which is a common requirement for data center edge or services routing where predictable failover is needed.
The key point in the question is “secondary next hop with a unique route preference value.” That wording maps directly to qualified next-hop behavior, because qualification is how Junos differentiates multiple next hops under the same route using distinct preference values. The retain option relates to keeping routes in the routing table under certain conditions and does not specifically create primary/backup next-hop selection based on preference. The resolve option concerns how the system resolves a next hop through another route and is not the feature that creates a preference-ranked secondary next hop. The install option is not the mechanism used to define backup next-hop preference for a static route.
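A minimal sketch of primary/backup static routing with a qualified next hop; the prefix and next-hop addresses are hypothetical:

```
# Primary next hop uses the default static route preference of 5
set routing-options static route 0.0.0.0/0 next-hop 10.0.1.1
# Backup next hop with a worse (higher) preference takes over if the primary becomes unusable
set routing-options static route 0.0.0.0/0 qualified-next-hop 10.0.2.1 preference 7
```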
Answer:
Explanation:
In Junos Ethernet switching, a trunk interface is intended to carry multiple VLANs using 802.1Q tags. The native VLAN ID feature defines how the trunk handles frames that arrive without an 802.1Q tag and how it can transmit untagged frames when required. When a native VLAN is configured on a trunk, untagged inbound frames are mapped into the native VLAN's Layer 2 broadcast domain. Likewise, traffic belonging to the native VLAN can be sent untagged on that trunk, depending on how the receiving device expects to process untagged frames. This is commonly used in data center environments to interoperate with devices that require one VLAN to be carried untagged, or for specific control-plane or legacy connectivity requirements.
Option A is incorrect because native VLAN does not restrict the trunk to tagged-only traffic; it explicitly provides a mechanism to accept or emit untagged frames on a trunk.
Option B is incorrect because access ports are designed for a single VLAN and normally treat traffic as untagged by default; they do not become “tagged access” by using native VLAN.
Option C is incorrect because native VLAN does not change the VLAN ID range; VLAN ID ranges are determined by the 802.1Q standard and platform support, not by the native VLAN feature.
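A native VLAN sketch on a trunk, assuming ELS syntax; the interface and VLAN IDs are hypothetical, and the native VLAN is also listed as a member so the trunk carries it:

```
# Untagged frames arriving on this trunk are mapped to VLAN 100
set interfaces ge-0/0/20 native-vlan-id 100
set interfaces ge-0/0/20 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/20 unit 0 family ethernet-switching vlan members [ 100 200 300 ]
```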