BIG-IP Administration Data Plane Concepts (F5CAB2) exam online practice
Last updated: February 14, 2026
You can use these online practice questions to gauge how well you know the F5 F5CAB2 exam material before deciding whether to register for the exam.
To pass the exam and cut your preparation time by 35%, consider the F5CAB2 dumps (latest real exam questions), which currently include 40 exam questions and answers.
Answer:
Explanation:
Comprehensive and Detailed Explanation From BIG-IP Administration Data Plane Concepts documents:
In BIG-IP LTM, pool member availability is determined by health monitors, which continuously test application responsiveness and correctness.
For the default HTTP monitor, the behavior is defined as follows:
BIG-IP sends an HTTP request (by default, GET /)
The monitor expects a response with HTTP status 200 OK
Any HTTP response code other than 200 is considered a monitor failure
A failed monitor causes the associated pool member to be marked offline (down)
Applying this to the scenario:
Two pool members return 404 Not Found
A 404 response indicates the requested object is missing
This response does not satisfy the success criteria of the default HTTP monitor
BIG-IP marks these two members as offline
One pool member returns 200 OK
This matches the expected response code
BIG-IP marks this member as online
Resulting Pool Status:
2 members: Offline
1 member: Online
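The pass/fail rule above can be sketched as a simple predicate. This is an illustrative model only, not F5 code; the function names are invented for this example:

```python
# Minimal sketch of the default HTTP monitor's success test, as described
# above: only a 200 OK marks the member up; any other status marks it down.
# (Illustrative model; not actual BIG-IP code.)

def monitor_marks_member_up(http_status: int) -> bool:
    """Default HTTP monitor succeeds only on HTTP 200."""
    return http_status == 200

def pool_member_state(http_status: int) -> str:
    return "online" if monitor_marks_member_up(http_status) else "offline"

# Scenario from the question: two members answer 404, one answers 200.
states = [pool_member_state(s) for s in (404, 404, 200)]
print(states)  # ['offline', 'offline', 'online']
```

Note that 404 fails the monitor even though the server is reachable at Layer 4; the monitor tests application correctness, not just connectivity.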
Why the Other Options Are Incorrect:
B. Members returning 404 responses cannot be considered healthy
C. At least one member responds with 200 OK, so the entire pool is not offline
D. Not all members meet the monitor success criteria
Key Data Plane Concept Reinforced:
BIG-IP health monitors validate not just reachability, but application correctness. For HTTP monitors, the response code is critical: 404 is treated as a failure, even though the service is reachable.
Answer:
Explanation:
Comprehensive and Detailed Explanation From BIG-IP Administration Data Plane Concepts documents:
In BIG-IP LTM, health monitors are used to determine the availability of pool members and directly influence traffic flow decisions in the data plane.
Key characteristics of the default HTTP monitor according to BIG-IP Administration Data Plane Concepts:
Sends an HTTP request (typically GET /)
Expects an HTTP response code of 200 OK
Any response other than 200 is treated as a monitor failure
A failed monitor causes the pool member to be marked offline (down)
In this scenario:
Two pool members return 404 Not Found
A 404 response indicates that the requested object was not found
This does not meet the success criteria of the default HTTP monitor
These two members are therefore marked offline
One pool member returns 200 OK
This matches the expected response
The member is marked online
Resulting Pool Member Availability:
2 members: Offline
1 member: Online
Why the Other Options Are Incorrect:
B. 404 responses are not considered healthy by the default HTTP monitor
C. At least one member responds with the expected 200 OK
D. Members returning 404 responses fail the monitor and cannot be marked online
Key Data Plane Concept Reinforced:
BIG-IP health monitors make binary availability decisions based strictly on configured success criteria. For HTTP monitors, response codes matter: 404 is a failure, even if the service is technically reachable.
Answer:
Explanation:
Comprehensive and Detailed Explanation From BIG-IP Administration Data Plane Concepts documents:
In BIG-IP LTM, a pool member state directly affects how traffic is handled at the data plane level. When a pool member is manually disabled, BIG-IP changes the member’s availability state to disabled, which has specific and predictable traffic-handling consequences.
According to BIG-IP Administration Data Plane Concepts:
A disabled pool member:
Does not accept new connections
Continues to process existing non-persistent connections until they naturally close
Is removed from load-balancing decisions, including persistence lookups
Most importantly for this question:
Persistent connections (such as those created using source-address persistence, cookie persistence, or SSL persistence) are not honored for a disabled pool member
BIG-IP will not send new persistent traffic to a disabled member, even if persistence records exist
Therefore, when a pool member is manually disabled, it stops processing persistent connections, while allowing existing non-persistent flows to drain gracefully.
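The traffic-handling rules described above can be sketched as a small decision function. This is an illustrative model of the behavior this explanation describes, not F5 code, and the names are invented:

```python
# Sketch of how a manually disabled pool member treats traffic, per the
# behavior described above: existing non-persistent connections drain,
# while new connections and persistence records are refused.
# (Illustrative model; not actual BIG-IP code.)

def disabled_member_accepts(connection: str) -> bool:
    """connection is one of: 'new', 'existing', 'persistent'."""
    if connection == "existing":
        return True   # existing non-persistent flows drain gracefully
    return False      # new and persistent traffic is not honored

print(disabled_member_accepts("existing"))    # True
print(disabled_member_accepts("new"))         # False
print(disabled_member_accepts("persistent"))  # False
```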
Why the Other Options Are Incorrect:
B. Persistent connections are not honored for a disabled pool member
C. Existing connections are not immediately terminated when a pool member is disabled
D. Only the disabled pool member stops accepting new connections, not all pool members
Key Data Plane Concept Reinforced:
Manually disabling a pool member is a graceful administrative action that prevents new and persistent traffic from reaching the member while allowing existing connections to complete, which is critical for maintenance and troubleshooting scenarios.
Answer:
Explanation:
Comprehensive and Detailed Explanation From BIG-IP Administration Data Plane Concepts documents:
In a BIG-IP high availability (HA) configuration, Auto-Sync is a device trust feature that automatically synchronizes configuration changes from the Active device to the Standby device within a Sync-Failover device group.
Key principles from BIG-IP Administration Data Plane Concepts:
The Active device is always the authoritative source of configuration
Configuration changes are intended to be made only on the Active device
With Auto-Sync enabled, any time the Active device configuration changes, the system automatically pushes the configuration to all Standby members of the device group
Configuration changes made directly on a Standby device are not preserved
In this scenario:
The administrator modifies a Virtual Server on the Standby device
That change is local only and does not alter the device group’s synchronized configuration
When Auto-Sync next runs (triggered by a change on the Active device or an internal sync event), the Active device configuration overwrites the Standby configuration
As a result, the configuration change made on the Standby device is undone.
Why the Other Options Are Incorrect:
A. The change is not undone only when another change is made; it is undone during the next Auto-Sync operation
B. Changes made on the Standby device are never propagated to the Active device
D. Auto-Sync does not merge or promote Standby changes into the HA pair configuration
Best Practice Reinforced:
Always perform configuration changes on the Active BIG-IP device when Auto-Sync is enabled to ensure consistent and predictable HA behavior.
Answer:
Explanation:
Comprehensive and Detailed Explanation From BIG-IP Administration Data Plane Concepts documents:
In BIG-IP high availability (HA) configurations, MAC Masquerade is used to speed up failover by allowing traffic-group-associated Self IPs to retain the same MAC address when moving between devices. This prevents upstream switches and routers from having to relearn ARP entries during a failover event, resulting in near-instant traffic recovery.
By default, MAC masquerade applies one MAC address per traffic group, regardless of how many VLANs the traffic group spans. This can create problems in some network designs because the same
MAC address appearing on multiple VLANs may violate network policies or confuse switching infrastructure.
To address this, BIG-IP provides Per-VLAN MAC Masquerade, enabled by the database variable:
`tm.macmasqaddr_per_vlan = true`
When this feature is enabled:
BIG-IP derives a unique MAC address per VLAN
The base MAC address configured on the traffic group remains the first four octets
The last two octets are replaced with the VLAN ID expressed in hexadecimal
The VLAN ID is encoded in network byte order (high byte first, low byte second)
### VLAN ID Conversion:
VLAN ID: 1501 (decimal)
Convert to hexadecimal:
1501 (decimal) = 0x05DD
High byte: 05
Low byte: DD
### Resulting MAC Address:
Base MAC: `02:12:34:56:00:00`
Per-VLAN substitution → last two bytes = `05:DD`
Final MAC address:
`02:12:34:56:05:dd`
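The conversion above can be reproduced with a short computation. This is a worked illustration of the substitution rule described in this explanation; the helper name is invented:

```python
# Worked computation of the per-VLAN MAC masquerade address described above:
# keep the first four octets of the base MAC and replace the last two octets
# with the VLAN ID in hexadecimal, high byte first (network byte order).
# (Illustrative helper; not actual BIG-IP code.)

def per_vlan_masquerade_mac(base_mac: str, vlan_id: int) -> str:
    octets = base_mac.split(":")[:4]     # first four octets are preserved
    high, low = divmod(vlan_id, 256)     # network byte order: high byte first
    return ":".join(octets + [f"{high:02x}", f"{low:02x}"])

print(hex(1501))  # 0x5dd -> high byte 05, low byte dd
print(per_vlan_masquerade_mac("02:12:34:56:00:00", 1501))
# 02:12:34:56:05:dd
```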
### Why the Other Options Are Incorrect:
A (01:15): Incorrect hexadecimal conversion of 1501
B (dd:05): Byte order reversed (little-endian, not used by BIG-IP)
D (15:01): Uses decimal values instead of hexadecimal
### Key BIG-IP HA Concept Reinforced:
Per-VLAN MAC Masquerade ensures Layer 2 uniqueness per VLAN while preserving the fast failover benefits of traffic groups, making it the recommended best practice in multi-VLAN HA deployments.
Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
This scenario combines session continuity, multiple protocols (HTTP and FTP), and HA failover behavior, which directly implicates persistence handling across devices and services.
Key Requirements Breakdown
Same pool member for entire session
Session must survive failover
Session must span multiple services (HTTP and FTP)
Why Persistence Mirroring + Match Across Services Is Required
Persistence Mirroring
Ensures persistence records are synchronized from the active BIG-IP to the standby BIG-IP.
Without mirroring:
After failover, the standby device has no persistence table
Clients are load-balanced again
Sessions break, forcing users to restart
Persistence mirroring is essential for session continuity during failover
Match Across Services
Allows a single persistence record to be shared across multiple virtual servers / protocols
Required when:
HTTP and FTP must use the same pool member
Multiple services are part of a single application session
Together, these settings ensure:
Persistence survives device failover
Persistence is honored across HTTP and FTP
Why the Other Options Are Incorrect
A. Cookie persistence and session timeout
Cookie persistence only applies to HTTP and does not address FTP or failover synchronization.
B. Stateful failover and Network Failover detection
Stateful failover applies to connection state, not persistence records, and does not link HTTP and FTP sessions.
D. SYN-cookie insertion threshold and connection low-water mark
These are DoS / SYN flood protection settings, unrelated to persistence or HA behavior.
Answer:
Explanation:
Comprehensive and Detailed Explanation From BIG-IP Administration Data Plane Concepts documents:
To select the correct virtual server type, an administrator must balance the need for L7 intelligence versus raw throughput and hardware offloading:
Performance (Layer 4) Virtual Server: This type is designed for maximum speed. It uses the fastL4 profile, which allows the BIG-IP system to leverage the ePVA (Embedded Packet Velocity Accelerator) hardware chip. When a Performance (L4) virtual server is used, the system processes packets at the network layer (L4) without looking into the application payload (L7). This fulfills the requirement for hardware acceleration and avoids the overhead of HTTP optimization features, which are not needed in this scenario.
Performance (HTTP) Virtual Server: While fast, this type uses the fasthttp profile to provide some L7 awareness and optimization (like header insertion or small-scale multiplexing). Since the requirement specifically states HTTP optimization is not required, the L4 variant is more efficient.
Standard Virtual Server: This is a full-proxy type. While it offers the most features (SSL offload, iRules, Compression), it processes traffic primarily in the TMOS software layer (or via high-level hardware assistance), which is "slower" than the pure hardware switching path of the Performance (L4) type.
Stateless Virtual Server: This is typically used for specific UDP/ICMP traffic where the system does not need to maintain a connection table. It is not appropriate for standard HTTP (TCP) applications requiring persistent sessions or stateful load balancing.
By choosing Performance (Layer 4) with the fastL4 profile, the organization ensures that the traffic is handled by the hardware acceleration chips, providing the lowest latency and highest throughput possible for their HTTP application.
Answer:
Explanation:
The BIG-IP system uses a specific precedence algorithm to determine which virtual server (listener) should process an incoming packet when multiple virtual servers might match the criteria. Since BIG-IP version 11.3.0, the system evaluates three primary factors in a fixed order of importance:
Destination Address: The system first looks for the most specific destination match. A "Host" address (mask /32) is preferred over a "Network" address (mask /24, /16, etc.), which is preferred over a "Wildcard" (0.0.0.0/0).
Source Address: If multiple virtual servers have identical destination masks, the system then evaluates the source address criteria. Again, a specific source host match is preferred over a source network or a wildcard source.
Service Port: Finally, if both destination and source specifications are equal, the system checks the port. A specific port match (e.g., 80) is preferred over a wildcard port (0, displayed as *).
Following this logic, a virtual server configured with a specific destination host, a specific source host, and a specific service port represents the highest level of specificity and thus the highest preference.
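The three-level precedence order above can be modeled as a sort key, with more specific listeners ranking first. This is an illustrative sketch of the algorithm as described, not F5 code; the names and tuple encoding are invented:

```python
# Sketch of the listener-precedence order described above: destination
# specificity first, then source specificity, then service port. A lower
# key tuple means higher preference. (Illustrative model; not F5 code.)

def specificity(mask_bits: int) -> int:
    # /32 host (rank 0) beats /24 network (rank 8) beats /0 wildcard (rank 32)
    return 32 - mask_bits

def precedence_key(dest_mask: int, src_mask: int, port: int):
    port_rank = 0 if port != 0 else 1   # specific port beats wildcard (0)
    return (specificity(dest_mask), specificity(src_mask), port_rank)

listeners = {
    "host/host/port": (32, 32, 80),   # most specific on all three factors
    "host/any/port":  (32, 0, 80),
    "net/any/any":    (24, 0, 0),
    "any/any/any":    (0, 0, 0),      # wildcard listener, last resort
}
best = min(listeners, key=lambda name: precedence_key(*listeners[name]))
print(best)  # host/host/port
```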
Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
In a BIG-IP device service cluster, configuration objects such as virtual servers, pools, profiles, and iRules are maintained through configuration synchronization (config-sync).
Key BIG-IP concepts involved:
Device Service Cluster (DSC)
A cluster is a group of BIG-IP devices that share configuration data. One device is typically used to make changes, which are then synchronized to the rest of the group.
Config-Sync Direction Matters
Changes are made on a local device
Those changes must be pushed to the group
The correct operation is “Sync Device to Group”
Why C is correct:
The virtual server was created only on device 1
Other devices in the cluster do not yet have this object
To propagate the new virtual server to all cluster members, the administrator must synchronize device 1 to the group
Why the other options are incorrect:
A. Synchronize the settings of the group to device 1
This would overwrite device 1’s configuration with the group’s existing configuration and may remove the newly created virtual server.
B. Create a new cluster on device 1
The cluster already exists. Creating a new cluster is unnecessary and disruptive.
D. Create a new virtual server on device 2
This defeats the purpose of centralized configuration management and risks configuration drift.
Conclusion:
After creating a new virtual server on a BIG-IP device that is part of a cluster, the administrator must synchronize the configuration from that device to the group so all devices share the same ADC application objects.

Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
For BIG-IP to send or receive traffic on a VLAN, that VLAN must be bound to a physical interface or a trunk. Creating a VLAN object and a Self IP alone is not sufficient to establish data-plane connectivity.
From the exhibit:
The VLAN (vlan_1033) exists and has a tag defined.
A Self IP is configured and associated with the VLAN.
However, traffic cannot reach servers on that VLAN.
This indicates a Layer 2 connectivity issue, not a Layer 3 or HA issue.
Why assigning a physical interface fixes the problem:
BIG-IP VLANs do not carry traffic unless they are explicitly attached to:
A physical interface (e.g., 1.1), or
A trunk
Without an interface assignment, the VLAN is effectively isolated and cannot transmit or receive frames, making servers unreachable regardless of correct IP addressing.
Why the other options are incorrect:
A. Set Port Lockdown to Allow All
Port Lockdown controls which services can be accessed on the Self IP (management-plane access), not whether BIG-IP can reach servers on that VLAN.
B. Change Auto Last Hop to enabled
Auto Last Hop affects return traffic routing for asymmetric paths. It does not fix missing Layer 2 connectivity.
D. Create a Floating Self IP address
Floating Self IPs are used for HA failover. They do not resolve reachability issues on a single device when the VLAN itself is not connected to an interface.
Conclusion:
The servers are unreachable because the VLAN has no physical interface assigned. To restore connectivity, the BIG-IP Administrator must assign a physical interface (or trunk) to the VLAN, enabling Layer 2 traffic flow.
Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
On BIG-IP systems, physical interface bandwidth is fixed by the link speed (for example, 1GbE or 10GbE). When traffic demand exceeds the capacity of a single interface, BIG-IP provides link aggregation through trunks.
Key concepts involved:
Interfaces
A single physical interface (such as 1.1) is limited to its negotiated link speed. You cannot exceed this capacity through software tuning alone.
Trunks (Link Aggregation)
A trunk combines multiple physical interfaces into a single logical interface.
BIG-IP supports LACP and static trunks.
Traffic is distributed across member interfaces, increasing aggregate bandwidth and providing redundancy.
VLANs are then assigned to the trunk, not directly to individual interfaces.
Why option B is correct:
Creating a trunk with two interfaces allows BIG-IP to use both physical links simultaneously.
This increases total available bandwidth (for example, two 10Gb interfaces → up to 20Gb aggregate capacity).
This is the documented and supported method for scaling bandwidth on BIG-IP.
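The bandwidth gain from a trunk comes from distributing flows across member links. The sketch below is a simplified model of per-flow distribution (real trunk hashing on BIG-IP is implementation-specific and configurable; the function name and hash choice here are invented):

```python
# Simplified model of why a trunk raises aggregate capacity: each flow is
# hashed to one member link, so different flows use different physical
# interfaces in parallel, while any single flow stays on one link (no
# packet reordering). (Illustrative only; not BIG-IP's actual hash.)

def trunk_member_for_flow(src_ip: str, dst_ip: str, members: list) -> str:
    # deterministic per-flow hash keeps a given flow on one link
    return members[hash((src_ip, dst_ip)) % len(members)]

members = ["1.1", "1.2"]
a = trunk_member_for_flow("10.0.0.5", "10.0.1.9", members)
b = trunk_member_for_flow("10.0.0.5", "10.0.1.9", members)
print(a == b)  # True: the same flow always maps to the same member link
```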
Why the other options are incorrect:
A. Increase the MTU
MTU changes affect packet size and efficiency, not total bandwidth capacity.
C. Assign two interfaces to the VLAN
BIG-IP does not support assigning a VLAN to multiple interfaces directly. VLANs must be associated with one interface or one trunk.
D. Set the media speed manually
Media speed can only be set up to the physical capability of the interface and connected switch port.
It cannot exceed the hardware limit.
Conclusion:
To increase total available bandwidth on BIG-IP when a single interface is insufficient, the administrator must create a trunk object with multiple interfaces and move the VLAN onto the trunk. This aligns directly with BIG-IP data plane design and best practices.
Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
With Least Connections (member), BIG-IP attempts to send new connections to the pool member with the fewest current connections. In a perfectly “stateless” scenario (no affinity), this often trends toward a fairly even distribution over time.
However, persistence overrides load balancing:
When a persistence profile is applied, BIG-IP will continue sending a client (or client group) to the same pool member based on the persistence record (cookie / source address / SSL session ID, etc.).
This means even if another pool member has fewer connections, BIG-IP may still select the persisted member to honor session affinity.
The result can be uneven active connection counts, even though the configured load balancing method is Least Connections.
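The interaction described above can be sketched as a selection function in which a persistence lookup short-circuits the Least Connections decision. This is an illustrative model, not F5 code; the names are invented:

```python
# Sketch of persistence overriding Least Connections, as described above:
# if a persistence record exists for the client, the persisted member is
# chosen even when another member has fewer connections.
# (Illustrative model; not actual BIG-IP code.)

def select_member(conn_counts: dict, persistence: dict, client: str) -> str:
    if client in persistence:                      # persistence record wins
        return persistence[client]
    return min(conn_counts, key=conn_counts.get)   # else: least connections

conns = {"memberA": 90, "memberB": 10}
persist = {"client1": "memberA"}   # client1 has an existing record

print(select_member(conns, persist, "client1"))  # memberA, despite 90 conns
print(select_member(conns, persist, "client2"))  # memberB (fewest connections)
```

Over many persisted clients, this override is what skews active connection counts away from the even distribution Least Connections would otherwise produce.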
Why the other options are not the best cause:
A. Priority Group Activation is disabled
Priority Group Activation only affects selection when priority groups are configured; disabling it does not inherently create uneven distribution under Least Connections.
B. SSL Profile Server is applied
A server-side SSL profile affects encryption to pool members, but it does not by itself cause skewed selection across pool members. (Skew could happen indirectly if members have different performance/latency, but that’s not the primary, expected exam answer.)
D. Incorrect load balancing method
Least Connections is a valid method and does not itself explain unevenness unless something is overriding it (like persistence) or pool members are not all eligible.
Conclusion:
A persistence profile is the most common and expected reason that active connections become unevenly distributed, because persistence takes precedence over the Least Connections load-balancing decision.

Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
In an Active/Standby BIG-IP design, application availability during failover depends on both units having equivalent data-plane connectivity for the networks that carry application traffic. Specifically:
VLANs are bound to specific interfaces (and optionally VLAN tags).
Floating self IPs / traffic groups move to the new Active device during failover.
For traffic to continue flowing after failover, the new Active device must have the same VLANs available on the correct interfaces that connect to the upstream/downstream networks.
What the symptom tells you:
Traffic works when Device A is Active
Traffic fails when Device B becomes Active
Failback immediately restores traffic
This pattern strongly indicates the Standby unit does not have the VLAN connected the same way (wrong physical interface assignment), so when it becomes Active, it owns the floating addresses but cannot actually pass traffic on the correct network segment.
Why Interface mismatch is the best match:
If the Active unit is already working, its interface mapping is correct.
The fix is to make the Standby unit’s VLAN/interface assignment match the Active unit.
That corresponds to changing the Standby device interface to 1.1.
Why the Tag options are less likely here (given the choices and the exhibit intent):
Tag issues can also break failover traffic, but the question/options are clearly driving toward the classic HA requirement: consistent VLAN-to-interface mapping on both devices so the data plane remains functional after the traffic group moves.
Conclusion: To avoid an outage on the next failover, the BIG-IP Administrator must ensure the Standby device uses the same interface (1.1) for the relevant VLAN(s) that carry the application traffic, so when it becomes Active it can forward/receive traffic normally.
Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
In BIG-IP traffic management, persistence profiles cause existing client connections (and subsequent
requests) to be repeatedly sent to the same pool member. When persistence is enabled, simply preventing new connections is not sufficient if the requirement is to immediately remove all existing connections.
Key behavior of pool member states:
Forced Offline
Immediately removes the pool member from load balancing.
Terminates all existing connections, regardless of persistence.
Prevents new connections from being established.
This is the correct state when urgent maintenance or troubleshooting is required.
Disabled
Prevents new connections from being sent to the pool member.
Allows existing connections to continue, which is not acceptable when persistence is configured and connections must be cleared immediately.
Offline (non-forced)
Similar to Disabled behavior depending on context.
Does not guarantee immediate termination of existing connections.
Manually deleting connections via the command line
Is unnecessary and operationally inefficient.
BIG-IP already provides a supported mechanism (Forced Offline) to cleanly and immediately remove traffic.
Conclusion:
To immediately remove all existing connections, including those maintained by persistence, the BIG-IP Administrator must set the pool member to a Forced Offline state. This directly satisfies the requirement without additional manual steps.
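The contrast between the two states, as described in this explanation, can be sketched as a small decision function (an illustrative model only; the names are invented, and this is not F5 code):

```python
# Sketch contrasting the member states described above: Forced Offline
# immediately removes all connections (including persistent ones), while
# Disabled lets existing connections continue to completion.
# (Illustrative model of this explanation's description; not F5 code.)

def connection_survives(state: str, connection: str) -> bool:
    """state: 'forced_offline' or 'disabled';
    connection: 'existing', 'new', or 'persistent'."""
    if state == "forced_offline":
        return False                        # everything removed immediately
    if state == "disabled":
        return connection == "existing"     # only existing flows continue
    return True                             # enabled member: all allowed

print(connection_survives("forced_offline", "persistent"))  # False
print(connection_survives("disabled", "existing"))          # True
```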
Answer:
Explanation:
Comprehensive and Detailed Explanation (BIG-IP Administration Data Plane Concepts):
The configuration shown matches a Performance Layer 4 virtual server because it is explicitly using a FastL4 profile:
The screenshot shows Protocol: TCP and Protocol Profile (Client): fastL4.
In BIG-IP data plane terms, FastL4 is the hallmark of a Performance (Layer 4) virtual server, designed to process connections at Layer 4 with minimal overhead (high throughput/low latency) compared to full proxy L7 processing.
The screenshot also shows HTTP Profile (Client): None (and HTTP server profile effectively not in use).
A Standard virtual server commonly uses full-proxy features and frequently includes L7 profiles (like
HTTP) when doing HTTP-aware load balancing, header manipulation, cookie persistence, etc. In contrast, a Performance L4 virtual server typically does not use an HTTP profile because it is not doing HTTP-aware (Layer 7) processing.
It is not a Forwarding IP virtual server:
A Forwarding (IP) virtual server is used to route/forward packets (often without load balancing to pool members in the same way as Standard/Performance VS) and is selected by choosing a forwarding type. The presence of a TCP protocol with a FastL4 client profile aligns with a Layer 4 load-balancing style virtual server, not a packet-forwarding virtual server type.
Conclusion: Because the configuration is TCP-based and explicitly uses fastL4 with no HTTP profile, the expected BIG-IP virtual server type is Performance Layer 4 (Option C)