Exam Dumps
Every month, we help more than 1,000 people prepare for and pass their exams.

Pure Storage FlashArray Implementation Specialist Exam

Pure Storage Certified FlashArray Implementation Specialist (FAIS) Online Practice

Last updated: April 21, 2026

These online practice questions let you gauge how well you know the Pure Storage FlashArray Implementation Specialist exam material before deciding whether to register for the exam.

To pass the exam and save 35% of your preparation time, choose the FlashArray Implementation Specialist dumps (latest real exam questions), which currently include the 75 most recent exam questions and answers.


Question No : 1


A customer has a FlashArray//X20R3 with 22/22 data packs in the chassis at 80% space usage and wishes to swap one of the data packs for a 45TB pack.
What step needs to be taken to facilitate this consolidation?

Answer:
Explanation:
To facilitate a Capacity Consolidation (CapCon) in a chassis that is physically full (both data pack slots occupied) and whose logical utilization is too high to allow an internal evacuation, the standard procedure requires temporary external capacity, often referred to as "swing gear."
In this scenario, the FlashArray//X20 chassis is fully populated with two 22TB data packs. Since there are no empty slots to insert the new 45TB pack, one of the existing packs must be removed first. However, with the array at 80% usage, the data residing on the pack designated for removal (roughly half the raw capacity) cannot fit into the remaining free space of the other pack.
Therefore, Pure Storage Support will provision a DirectFlash Shelf (DFS) containing temporary capacity (e.g., a 63TB pack as mentioned in Option B). The Implementation Engineer attaches this shelf, adds it to the system, and utilizes it as a destination to evacuate data from the old 22TB pack. Once the old pack is empty and removed, the new 45TB pack is installed into the chassis. Finally, the data is moved from the temporary shelf back into the new chassis capacity, and the temporary shelf is disconnected and returned.
Option A is incorrect because an empty shelf provides no storage to offload data, and Option C is operationally infeasible for most customers.
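To make the "data cannot fit" claim concrete, here is a minimal sketch of the space arithmetic. The pack sizes and the 80% usage figure come from the question; this is raw-capacity math only, ignoring RAID and metadata overhead, and the 63TB swing-pack size is taken from the explanation above.

```python
# Raw-capacity sketch: two 22 TB data packs at 80% overall usage.
PACK_TB = 22
NUM_PACKS = 2
USAGE = 0.80

total_raw = PACK_TB * NUM_PACKS          # 44 TB raw in the chassis
used = total_raw * USAGE                 # 35.2 TB of data on the array

# If one 22 TB pack is removed, can its data fit on the other pack?
remaining_capacity = PACK_TB * (NUM_PACKS - 1)   # 22 TB left
internal_evac_possible = used <= remaining_capacity
print(internal_evac_possible)            # False: 35.2 TB > 22 TB

# With a temporary 63 TB swing pack attached, the evacuation fits easily.
SWING_TB = 63
with_swing = used <= remaining_capacity + SWING_TB
print(with_swing)                        # True
```

This is why the internal swap fails at 80% usage: the surviving pack alone cannot hold the array's data, so external swing gear is mandatory.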

Question No : 2


Prior to running the puresetup newarray command, which command should an Installation Engineer run on a new install of an //XR4 array?

Answer:
Explanation:
On a new FlashArray//XR4 installation, the Implementation Engineer must run the cobalt_check.py script before executing puresetup newarray.
The FlashArray//XR4 represents a significant hardware architectural shift (internally codenamed or associated with the "Cobalt" platform generation). This platform introduces new PCIe layouts, NVMe backplanes, and controller components that require specific validation beyond the standard legacy checks. The cobalt_check.py script is a specialized hardware diagnostic tool pre-loaded on the manufacturing image of //XR4 controllers.
Its purpose is to verify that the specific hardware components of the R4 platform, such as the internal NVMe interconnects, the correct population of the chassis, and the health of the new controller mainboard, are functioning within strict tolerances. Running this check ensures that the physical layer is fully healthy before the Purity operating system attempts to initialize the database and claim the storage media.
Option A (pureboot list) checks boot versions but not hardware health, and Option C (purehw list) is a general command that might not catch the specific low-level architectural issues the cobalt_check.py script is designed to identify on this specific generation.

Question No : 3


Before leaving the site after an install, who is required to provide completion sign-off?

Answer:
Explanation:
The Customer is the mandatory party required to provide completion sign-off before the Implementation Engineer leaves the site.
According to the FlashArray Implementation Service Brief and standard operating procedures, the final step of the deployment phase is the "Project Sign-off." After the engineer has racked, cabled, initialized, and verified the array's health (often demonstrating connectivity and the Purity GUI), the customer must formally acknowledge that the installation meets the agreed-upon scope of work. This sign-off serves as the legal and operational acceptance of the hardware and software implementation.
While the Account Team manages the commercial relationship and Support may assist with technical hurdles, neither can validate that the physical implementation meets the customer's specific on-site requirements or that the "handover" of administration has occurred. The customer's signature (physical or digital) on the implementation checklist or service completion form signals the transition from the "Deployment" phase to the "Production/Support" phase.

Question No : 4


Support requests a power cycle on a newly installed FlashArray.
What is the correct way to perform this action?

Answer:
Explanation:
To perform a "power cycle" or restart of the FlashArray system as typically requested by Support for diagnostic or initialization purposes, the correct and safe method is to run the purearray reboot command.
While the term "power cycle" often implies a physical interruption of power in general IT terminology, Pure Storage FlashArrays are designed as "Always-On" systems and do not have standard physical power buttons on the controllers for this purpose (Option A is incorrect as no such button exists for system-wide power cycling). Physically unplugging the power cables (Option B) constitutes a "hard" power cycle or "cold boot," which is generally reserved for specific hardware replacements or emergency procedures, not standard support requests on a healthy or newly installed system.
The purearray reboot command gracefully shuts down the Purity operating system services, flushes data from the volatile memory to the non-volatile storage, and restarts both controllers sequentially or simultaneously depending on the arguments. This ensures that the system returns to a clean state without the risk of data inconsistency or "dirty" shutdowns that physical cable removal might cause. In the context of a new installation or support troubleshooting, this software-driven restart is the standard procedure.
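The difference between the two restart styles can be illustrated with a toy model. This is not Purity internals, only a sketch of the general write-back principle the explanation describes: a graceful reboot flushes volatile buffers to persistent storage first, while a hard power cut simply loses them.

```python
# Toy model (illustrative only, not Purity internals) of graceful reboot
# versus hard power cut on a system with a volatile write buffer.

class ToyController:
    def __init__(self):
        self.volatile_buffer = []    # acknowledged writes not yet persisted
        self.persistent_store = []

    def write(self, data):
        self.volatile_buffer.append(data)

    def graceful_reboot(self):
        # In spirit like `purearray reboot`: flush volatile state, then restart.
        self.persistent_store.extend(self.volatile_buffer)
        self.volatile_buffer.clear()

    def hard_power_cut(self):
        # Pulling the power cables: volatile contents are simply gone.
        self.volatile_buffer.clear()

a = ToyController(); a.write("io-1"); a.graceful_reboot()
b = ToyController(); b.write("io-1"); b.hard_power_cut()
print(a.persistent_store)  # ['io-1'] -> the write survives
print(b.persistent_store)  # []       -> the write is lost
```

Real arrays mitigate the hard-cut case with NVRAM, but the model shows why a software-driven restart is the standard, risk-free procedure.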

Question No : 5


A FlashArray//XR2/3 has an Ethernet mezzanine (EMEZZ). What are the interface names of the ports on the mezzanine?

Answer:
Explanation:
On a FlashArray//XR2 or //XR3 controller equipped with an optional Ethernet Mezzanine (EMEZZ) card, the additional ports are enumerated as ETH6, ETH7, ETH8, and ETH9.
The port numbering logic on these controllers follows a strict sequence:
ETH0 and ETH1: Fixed onboard 1GbE Management ports.
ETH2 and ETH3: Fixed onboard 10/25GbE ports (often used for replication or iSCSI).
ETH4 and ETH5: Reserved or used by the first PCIe slot expansion (if populated with Ethernet).
ETH6 through ETH9: Assigned to the Ethernet Mezzanine (EMEZZ) slot.
The EMEZZ is a specific internal slot distinct from the standard PCIe risers. When populated, the Purity operating system reserves the eth6-eth9 block for these interfaces.
Options B and C are incorrect because they describe port ranges that would typically be assigned to PCIe expansion cards in Slot 1 or Slot 2 (e.g., eth10+), or they simply do not align with the hardcoded enumeration logic of the R2/R3 controller architecture. Correct identification of these interfaces is critical for configuring link aggregation (LACP) or assigning iSCSI IPs during the initial setup.
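The enumeration sequence above can be captured as a simple lookup table. The mapping is taken directly from the list in this explanation; the key names are illustrative labels, not Purity identifiers.

```python
# Interface-name enumeration for //XR2/R3 controllers, per the list above.
PORT_MAP = {
    "onboard_mgmt": ["eth0", "eth1"],   # fixed 1GbE management ports
    "onboard_25g":  ["eth2", "eth3"],   # fixed 10/25GbE ports
    "pcie_slot_1":  ["eth4", "eth5"],   # first PCIe expansion (if Ethernet)
    "emezz":        ["eth6", "eth7", "eth8", "eth9"],  # Ethernet mezzanine
}

def interfaces_for(component: str) -> list[str]:
    """Return the interface names enumerated for a given hardware location."""
    return PORT_MAP[component]

print(interfaces_for("emezz"))  # ['eth6', 'eth7', 'eth8', 'eth9']
```

Knowing this fixed enumeration lets an engineer cable and configure LACP or iSCSI IPs on the correct physical ports during setup.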

Question No : 6


Which PCIe slot supports 4-port FC cards on FlashArray//XL?

Answer:
Explanation:
On the FlashArray//XL, the 4-port Fibre Channel (FC) cards are specifically supported in Slot 8 (and typically Slot 4 depending on configuration depth). The FlashArray//XL chassis (5U) utilizes a significantly different PCIe bus layout compared to the standard 3U FlashArray//X series. While the //X series typically uses Slot 0 or Slot 2 for host connectivity, the //XL architecture reserves the lower-numbered slots (0-3) primarily for backend connectivity (SAS/NVMe-oF) or specific NVRAM modules in certain configurations.
To support the high bandwidth requirements of the 4-port 32Gb FC cards, the //XL chassis provides specific x16 electrical slots. Slot 8 is a designated high-performance slot located in the upper riser section of the controller, making it the correct placement for these dense host I/O cards. Using an incorrect slot such as Slot 2 (often x8 or reserved for other functions in XL) would likely result in the card not being recognized or operating at reduced performance. Therefore, Implementation Engineers must verify the specific "slot map" for the //XL model in the FlashArray//XL Hardware Guide before seating cards, with Slot 8 being the standard supported location for the 4-port FC option among the choices provided.

Question No : 7


What is the redundancy of the FlashArray//XL PSUs?

Answer:
Explanation:
The FlashArray//XL features a robust 2+2 power supply unit (PSU) redundancy configuration. Unlike the smaller FlashArray//X chassis (3U), which typically utilizes two power supplies in a 1+1 redundancy setup, the FlashArray//XL utilizes a larger 5U chassis designed for higher performance and density, requiring a more substantial power infrastructure.
The //XL chassis is equipped with four physical Power Supply Units. These are configured to operate in an N+2 mode (effectively 2+2), meaning the system requires two PSUs to support the full electrical load of the chassis, while the other two provide redundancy. This architecture allows the array to survive the simultaneous failure or loss of input power to up to two power supplies without any interruption to service.
This design ensures maximum availability even in scenarios where an entire power grid feed (A-side or B-side) fails, or if multiple hardware components malfunction simultaneously.
Options A (1+1) applies to the standard //X series, and Option C (3+1) is not a supported configuration for the FlashArray//XL. The 2+2 design is critical for maintaining the "Always-On" reliability standards required for the mission-critical workloads that the //XL platform supports.
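The N+2 arithmetic described above is easy to verify. This sketch uses the figures from the explanation: four PSUs total, two required to carry the full load.

```python
# N+2 (2+2) PSU redundancy check for the FlashArray//XL, per the explanation.
def survives(total_psus: int, required_psus: int, failed: int) -> bool:
    """True if enough healthy PSUs remain to carry the full chassis load."""
    return (total_psus - failed) >= required_psus

XL_TOTAL, XL_REQUIRED = 4, 2   # 2+2 configuration on the //XL

print(survives(XL_TOTAL, XL_REQUIRED, failed=2))  # True: two healthy PSUs remain
print(survives(XL_TOTAL, XL_REQUIRED, failed=3))  # False: only one PSU left
```

Losing an entire A-side or B-side power feed takes out two of the four supplies, which is exactly the failure mode the 2+2 design tolerates.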

Question No : 8


A data pack has been installed and the puredrive list command shows the drives in “unadmitted” status.
Which command should be used to complete the admission?

Answer:

Question No : 9


On a FlashArray//X50 R2/R3 Fibre Channel (FC) array, what is the default type and placement of the PCIe FC card?

Answer:
Explanation:
For the FlashArray//X50 R2 and R3 models, the default Fibre Channel (FC) configuration utilizes a 4-port PCIe Fibre Channel card installed in Slot 0.
The hardware architecture of the FlashArray//X series differentiates slot usage based on the controller chassis model.
FlashArray//X10 and //X20: These lower-end models utilize Slot 2 for host connectivity (Fibre Channel or iSCSI) because Slots 0 and 1 are typically reserved or occupied by onboard controllers/mezzanine cards. The standard card for these models is often a 2-port card.
FlashArray//X50, //X70, and //X90: These mid-to-high-range models feature a different PCIe bus layout. Slot 0 is the primary designated slot for Host I/O connectivity. To support the higher performance capabilities and port density requirements of the X50, Pure Storage defaults to 4-port FC cards (typically 16Gb or 32Gb).
Therefore, identifying the model number is crucial. Since the question specifies the X50 (R2/R3), the correct placement is Slot 0, and the correct card type is the 4-port model.
Option A is incorrect because the 2-port card is not the standard default for the performance-tier X50.
Option B describes the configuration for an X20, not an X50.
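The model-to-default mapping described above can be summarized in a table. The X50 entry is stated directly in this explanation; the X70/X90 entries follow the same pattern the explanation groups them under, so treat them as inferred rather than authoritative.

```python
# Default FC card placement per //X R2/R3 model, per the explanation above.
# X70/X90 values are inferred from the same grouping; verify against the
# official hardware guide before relying on them.
DEFAULT_FC_CONFIG = {
    "X10": {"slot": 2, "ports": 2},
    "X20": {"slot": 2, "ports": 2},
    "X50": {"slot": 0, "ports": 4},
    "X70": {"slot": 0, "ports": 4},
    "X90": {"slot": 0, "ports": 4},
}

print(DEFAULT_FC_CONFIG["X50"])  # {'slot': 0, 'ports': 4}
```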

Question No : 10


An Implementation Engineer is performing a capacity consolidation on an X50R3.
Which command is needed for the engineer to determine whether the customer’s FlashArray supports SAS Flash Modules?

Answer:
Explanation:
To determine whether a specific FlashArray configuration, such as the FlashArray//X50R3, supports or is currently equipped to handle SAS Flash Modules, the command purehw list --type drive --all is the correct diagnostic tool. While the FlashArray//X series is architected as a 100% NVMe platform using DirectFlash Modules (DFMs) in the base chassis, it retains the capability to support legacy SAS Flash Modules (SSDs) through the attachment of external SAS expansion shelves.
The purehw list --type drive --all command provides a comprehensive enumeration of the physical drive hardware components recognized by the system. Unlike the puredrive list command, which focuses on the logical status, capacity, and admission state of the drives, the purehw command with the --type drive flag exposes the hardware-level details, including the interface protocol (SAS vs. NVMe) and the physical location of the drive bays.
By running this command, an Implementation Engineer can verify if the array is communicating with any SAS-based drive hardware. If SAS modules or SAS-compatible drive bays in an external shelf are listed, support is confirmed. Conversely, Option A is syntactically incorrect for this purpose, and Option B (purehw list --type controller) focuses on the compute node specifications rather than the storage media interfaces. Therefore, listing the drive hardware components is the definitive method to validate the presence and support of SAS Flash media within the specific array configuration during a capacity consolidation.
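The check described above amounts to scanning the drive listing for a SAS protocol entry. The sketch below uses a hypothetical, simplified column layout for the `purehw list --type drive --all` output; the real Purity output format differs, so treat the sample text as an assumption for illustration only.

```python
# Hypothetical, simplified drive listing (NOT the exact Purity output format)
# illustrating how SAS media would be spotted in `purehw list --type drive --all`.
SAMPLE_OUTPUT = """\
Name      Type   Protocol  Slot
CH0.BAY0  drive  NVMe      0
SH0.BAY0  drive  SAS       0
SH0.BAY1  drive  SAS       1
"""

def has_sas_drives(cli_output: str) -> bool:
    """Return True if any listed drive reports the SAS protocol."""
    rows = cli_output.strip().splitlines()[1:]   # skip the header row
    return any(row.split()[2] == "SAS" for row in rows)

print(has_sas_drives(SAMPLE_OUTPUT))  # True: SAS shelf drives are present
```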

Question No : 11


Once the new Purity firmware has been installed using the pureinstall command, what step is required to commit the new version?

Answer:
Explanation:
The Purity upgrade process involves two main phases: installing the software image (placing the bits on the boot drive) and then activating it (booting into the new kernel).
After the pureinstall command has successfully unpacked and staged the new Purity version, the changes are not live until the system reboots. In a Non-Disruptive Upgrade (NDU), this is done one controller at a time.
The required step to commit and run the new version is to Reboot the controller.
The pureinstall workflow typically prompts for or automatically initiates this reboot.
The secondary controller reboots first, loads the new Purity version, and rejoins the cluster.
Then, the primary controller fails over services to the updated secondary, reboots, and updates itself.
A full "Reboot the array" (simultaneous reboot) would be disruptive and is not the standard procedure for an NDU. "Logging out" has no effect on the system state.
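The one-controller-at-a-time ordering described above can be sketched as a simple sequence. The controller names CT0/CT1 are the conventional Pure controller labels used here for illustration; the step wording is a paraphrase of this explanation, not Purity log output.

```python
# Sketch of the non-disruptive upgrade (NDU) ordering described above:
# secondary reboots into the new version first, then the primary follows.
def ndu_order(primary: str, secondary: str) -> list[str]:
    """Return the step sequence for a one-controller-at-a-time NDU."""
    return [
        f"reboot {secondary} into new Purity",
        f"{secondary} rejoins cluster on new version",
        f"fail over services from {primary} to {secondary}",
        f"reboot {primary} into new Purity",
    ]

for step in ndu_order("CT0", "CT1"):
    print(step)
```

At every point in this sequence one controller is serving I/O, which is what makes the upgrade non-disruptive; a simultaneous array-wide reboot would break that invariant.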

Question No : 12


FlashArray//C and //E models use which flash storage architecture, identifiable by gray tabs on the DirectFlash Module carriers?

Answer:
Explanation:
Pure Storage differentiates its product lines based on the type of NAND flash used, optimizing for either performance or capacity/cost.
FlashArray//X uses TLC (Triple-Level Cell) flash for high performance and endurance. These modules typically have orange tabs.
FlashArray//C and FlashArray//E are designed for high-capacity, cost-optimized workloads. They utilize QLC (Quad-Level Cell) flash.
QLC flash stores 4 bits per cell, offering higher density at a lower cost per terabyte, but with different endurance characteristics managed by the DirectFlash software. To help engineers and customers physically distinguish these modules, QLC DirectFlash Modules feature gray release tabs on the carrier. Identifying these tabs confirms that the correct media type is being installed into the capacity-oriented //C or //E chassis.
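The density difference follows directly from the bits-per-cell figures above: QLC stores one more bit per cell than TLC, giving 4/3 the raw capacity from the same number of NAND cells (endurance trade-offs aside).

```python
# Bits-per-cell comparison behind the //C and //E capacity positioning.
BITS_PER_CELL = {"TLC": 3, "QLC": 4}

def relative_density(media_a: str, media_b: str) -> float:
    """Raw-density ratio of media_a to media_b for equal cell counts."""
    return BITS_PER_CELL[media_a] / BITS_PER_CELL[media_b]

print(round(relative_density("QLC", "TLC"), 2))  # 1.33
```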

Question No : 13


Which ports are used by default for replication on a FlashArray//XR4?

Answer:
Explanation:
The FlashArray//XR4 maintains the standard port assignment convention established in previous generations for its onboard Ethernet interfaces.
ETH2 and ETH3 are the default ports designated for Replication traffic.
ETH0 and ETH1 are reserved for Management.
ETH2 and ETH3 are configured by Purity defaults to handle the heavy bandwidth of asynchronous or synchronous replication.
While these assignments can be modified in software, the physical ports labeled eth2 and eth3 on the rear of the chassis are the intended primary interfaces for this function. Implementation Engineers should cable these ports to the replication network switches during the initial install.

Question No : 14


An Implementation Engineer is replacing a failing component in a FlashArray//XR4 and must ensure proper service clearance at the front and rear of the array.
What is the required service depth clearance for the front and rear panels of the FlashArray//XR4?

Answer:
Explanation:
Service clearance is a critical physical installation requirement to ensure that components (like controllers, fans, and drives) can be safely removed and replaced without obstruction.
For the FlashArray//XR4, the specific service depth requirements are:
Front Panel: 216 mm (8.5 in.). This allows sufficient room to open the bezel, unlatch drives, and pull them out of the chassis.
Rear Panel: 530 mm (20.9 in.). This larger clearance is necessary because the controllers themselves (which are long, heavy components) slide out from the rear of the chassis. The engineer needs enough space to fully extract the controller without hitting a wall or another rack door.
Adhering to these clearances prevents situations where a failed controller cannot be replaced because the rack is too close to a wall.
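As a quick consistency check on the figures above, converting the metric clearances to inches reproduces the quoted imperial values:

```python
# Unit-conversion check on the //XR4 service clearances quoted above.
MM_PER_INCH = 25.4

def mm_to_in(mm: float) -> float:
    """Convert millimetres to inches, rounded to one decimal place."""
    return round(mm / MM_PER_INCH, 1)

print(mm_to_in(216))  # 8.5  -> matches the front-panel figure
print(mm_to_in(530))  # 20.9 -> matches the rear-panel figure
```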

Question No : 15


An Implementation Engineer is performing a hardware NDU from a fully populated FlashArray//X90R3 to a FlashArray//XL that already contains 20 DFM-Ds. The drive transfer is complete, and the //XL is still in SPM (Shelf Mode).
What is the next step the Implementation Engineer should take to continue the upgrade?

Answer:
Explanation:
When upgrading from a FlashArray//X to a FlashArray//XL, the hardware migration often involves handling DirectMemory Modules (DMMs) differently than standard data drives. DMMs are high-performance cache modules that reside in the controller chassis.
In this specific scenario, where a fully populated //X90R3 is migrated to an //XL that already has its own set of DirectFlash Modules (DFM-Ds), the DMMs from the old chassis need to be moved to the new chassis to preserve their cache function or be properly decommissioned.
The correct procedure is to Evacuate DMMs as a datapack then install into any slot in //XL.
DMMs cannot be simply "hot-swapped" randomly while the system is live in a way that disrupts the cache map.
They must be logically grouped and evacuated (conceptually "vacated" of active data) so they can be safely removed from the old chassis.
Once removed, they can be installed into the new //XL chassis. The //XL architecture supports DMMs in specific slots, and Purity will detect and re-incorporate them into the cache pool.
This ensures that the expensive SCM (Storage Class Memory) hardware is reused in the new system without causing data integrity issues during the transition.
