Pure Certified FlashArray Storage Professional Online Practice
Last updated: March 30, 2026
These online practice questions let you gauge how well you know the Pure Storage FlashArray Storage Professional exam material before deciding whether to register for the exam.
If you want to pass the exam outright and save roughly 35% of your preparation time, use the FlashArray Storage Professional dumps (latest real exam questions), which currently include 75 exam questions and answers.
Answer:
Explanation:
Requirement for Lossless Ethernet: NVMe over RoCE (RDMA over Converged Ethernet) requires a lossless fabric to function correctly. Unlike standard iSCSI, which relies on TCP for error recovery, RoCE assumes the network will not drop packets. If the network is "lossy," performance degrades significantly.
The Role of PFC: Priority Flow Control (PFC) (IEEE 802.1Qbb) is the specific mechanism used in Data Center Bridging (DCB) to provide flow control on a per-priority basis. It allows the switch to send a "pause" frame to the sender when buffers are full, preventing packet drops.
Symptom Analysis: In the scenario provided, the array itself is not overloaded ("within the performance envelope"). However, the addition of a new workload increased traffic to the point where buffer congestion occurred. Because PFC was likely misconfigured (either on the FlashArray ports, the network switches, or the host NICs), the network dropped packets instead of pausing traffic. This leads to "go-back-N" retransmissions and massive latency spikes that affect all workloads sharing that fabric.
Pure Storage Best Practices: Pure Storage documentation for NVMe-RoCE emphasizes that PFC must be enabled and consistent across the entire path. If there is a mismatch in PFC configuration, the resulting packet loss will cause the symptoms described: extreme latency and potential service outages.
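The cost of a lossy fabric can be sketched with a toy calculation (illustrative only; the frame counts, window size, and drop interval below are assumptions, not RoCE internals): pausing the sender adds a little latency but no retransmissions, while silently dropping frames forces go-back-N recovery to replay a full window per loss.

```python
# Toy model (not real RoCE/PFC behavior): count frames placed on the wire
# when congestion is handled by a PFC pause versus by silent packet drops
# followed by go-back-N retransmission. All numbers are illustrative.

def frames_on_wire(n_frames, window, drop_every):
    """Total frames transmitted to deliver n_frames.

    drop_every=None models a lossless (PFC-paused) fabric: nothing is lost,
    so nothing is retransmitted. Otherwise every drop_every-th frame is lost
    and go-back-N replays a full window of frames for each loss.
    """
    if drop_every is None:
        return n_frames
    drops = n_frames // drop_every
    return n_frames + drops * window

lossless = frames_on_wire(10_000, window=64, drop_every=None)
lossy = frames_on_wire(10_000, window=64, drop_every=100)
print(lossless, lossy)  # 10000 16400 -> ~64% extra traffic from retransmits
```

Even a 1% drop rate inflates wire traffic by well over half in this model, which is why a PFC mismatch anywhere on the path hurts every workload sharing the fabric.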
Answer:
Explanation:
As Pure Storage has iterated through FlashArray generations (moving from //X R2 to R3, R4, and beyond), the power density and performance capabilities of the controllers have increased significantly. Modern high-performance controllers, such as those found in the //X R4 or //XL series, have strict power requirements that often exceed what a standard 110V/120V (Low-Line) circuit can provide.
To support the higher wattage and current draw of modern CPUs and NVRAM modules, Pure Storage requires 200-240V (High-Line) power for its latest generation controllers. If an existing array is currently running on 110V power, it must be migrated to 220V power outlets before the upgrade can proceed. Attempting to run newer, high-spec controllers on 110V power could lead to power supply instability, insufficient cooling performance, or the controllers failing to boot entirely.
Here is why the other options are incorrect:
Call support to schedule a power supply replacement (A): The issue is not a faulty power supply; it is the external electrical infrastructure's inability to provide the necessary voltage/wattage for the new hardware. Replacing the power supply with the same model would not solve the voltage limitation.
Transition the array to DC power (B): While Pure Storage does offer DC power options for specific telco environments, this is not a standard requirement for a typical controller upgrade. Moving to standard high-line AC power (220V) is the standard prerequisite for data center environments.
Answer:
Explanation:
Pure Storage FlashArrays utilize Thin Provisioning as a core, always-on architectural principle. When a volume is created, the "size" assigned to it is merely a logical limit (a quota) presented to the host; no physical back-end flash capacity is allocated or "pinned" at the time of creation.
Because of this architecture, Purity allows administrators to create volumes that are significantly larger than the actual physical capacity of the array (this is known as over-provisioning). If an administrator
accidentally selects PB (Petabytes) instead of GB, the Purity GUI will allow the volume to be created because it is a logical operation that doesn't immediately consume 1PB of physical flash. However, Purity includes a built-in safety check: if the requested logical size is exceptionally large or exceeds the current physical capacity of the array, the GUI will present a warning or confirmation prompt to ensure the administrator is aware of the massive logical size being provisioned before finalizing the change.
Here is why the other options are incorrect:
The volume will be created and space will immediately be used (A): This describes "Thick Provisioning," which Pure Storage does not use. Space is only consumed on a FlashArray when unique data is actually written by the host and processed by the deduplication and compression engines.
The volume will not be created and a warning will be displayed (C): Purity does not strictly forbid over-provisioning. While it warns the user to prevent human error, it does not block the creation of the volume, as over-provisioning is a standard practice in thin-provisioned environments.
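The behavior described above can be sketched as a small accounting model (illustrative only; `ThinArray`, its capacity figures, and the warning threshold are assumptions, not Purity internals):

```python
# Toy model of thin provisioning: creating a volume only records a logical
# size; physical flash is consumed when data is actually written. A logical
# total beyond physical capacity triggers a warning, not a failure.

class ThinArray:
    def __init__(self, physical_bytes):
        self.physical_bytes = physical_bytes   # usable flash on the array
        self.volumes = {}                      # volume name -> logical size
        self.physical_used = 0                 # bytes actually written

    def create_volume(self, name, logical_bytes):
        """Create a volume; return True if the GUI would show a warning."""
        self.volumes[name] = logical_bytes
        total_logical = sum(self.volumes.values())
        return total_logical > self.physical_bytes   # over-provisioned?

    def write(self, nbytes):
        self.physical_used += nbytes           # space consumed only on write

array = ThinArray(physical_bytes=100 * 2**40)     # 100 TiB of flash
warned = array.create_volume("db01", 2**50)       # 1 PiB selected by mistake
print(warned, array.physical_used)  # True 0 -> warned, but no flash consumed
```

The 1 PiB volume is accepted with a warning and consumes zero physical space until the host writes unique data.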
Answer:
Explanation:
Within the Pure Storage ecosystem, the absolute best method to troubleshoot and pinpoint the exact source of VMware latency is to use VM Analytics (VM Topology) in Pure1.
VM Analytics is a feature built directly into Pure1 that maps the entire data path from the virtual machine all the way down to the physical FlashArray. It provides a visual topology map detailing the VM, Virtual Disk, ESXi Host, Datastore, and FlashArray Volume. By analyzing performance across this topology, an administrator can instantly identify exactly where the latency is being introduced. For example, you can clearly see if the latency spikes at the ESXi host layer (indicating compute contention) or the network layer, even if the FlashArray volume itself is reporting sub-millisecond latency at the storage level.
Here is why the other options are incorrect:
Analyze load metrics in Pure1 for each volume in the user's data path (C): Looking exclusively at volume-level metrics on the FlashArray will only tell you the latency from the array's perspective. If the latency is being caused by an overloaded ESXi host CPU or a saturated SAN fabric, the FlashArray metrics will look perfectly healthy, and you will fail to identify the source of the problem.
Analyze performance charts in vSphere for CPU, Memory, Network, and Storage Path for the user's data path (B): While vCenter performance charts are useful, they often lack deep storage-array-level context. Pure1's VM Topology is the "best" method because it correlates the vSphere stack data with the native FlashArray telemetry data in a single, unified view, making full-stack root cause analysis much faster.
Answer:
Explanation:
In the Pure Storage Purity operating environment, a volume can only be a member of one Volume Group at a time.
When an administrator navigates to a Volume Group in the GUI and clicks to add members, the system filters the inventory and only displays volumes that are currently "unassigned" (not belonging to any Volume Group). If a volume is already residing inside another Volume Group, Purity intentionally hides it from this available list to prevent conflicting overlapping memberships. To resolve this, the administrator must first navigate to the volume's current Volume Group, remove the volume from that group, and then it will become available to add to the new one.
Here is why the other options are incorrect:
It is already part of a Protection Group (B): Protection Groups (pgroups) manage snapshot and replication schedules. A volume can absolutely be a standalone member of a Protection Group while simultaneously being added to a Volume Group. Being in a pgroup does not hide it from the vgroup selection list.
It is protected by SafeMode (C): SafeMode is a ransomware protection feature that prevents the manual eradication of destroyed volumes and snapshots before their retention timer expires. It does not dictate or restrict logical organizational containers like Volume Groups.
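The filtering behavior can be sketched in a few lines (a toy model; the function and data shapes are assumptions, not Purity's implementation):

```python
# Toy model of the single-membership rule for volume groups: only volumes
# with no current volume group appear in the "available to add" list.

def available_for_vgroup(all_volumes, membership):
    """membership maps volume name -> its volume group, or None if unassigned."""
    return [v for v in all_volumes if membership.get(v) is None]

volumes = ["db-data", "db-log", "scratch"]
membership = {"db-data": "vg-prod", "db-log": "vg-prod", "scratch": None}
print(available_for_vgroup(volumes, membership))
# ['scratch'] -> db-data/db-log are hidden while they belong to vg-prod

# Removing a volume from its current group makes it selectable again.
membership["db-log"] = None
print(available_for_vgroup(volumes, membership))
# ['db-log', 'scratch']
```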
Answer:
Explanation:
For FlashArray File Services (FA File), user authentication and mapping depend on the storage protocol being used. In an NFS-only environment, remote user authentication (resolving UNIX UIDs and GIDs to actual usernames and managing access) is typically handled via LDAP or NIS.
To configure this integration in the Purity GUI, the storage administrator must navigate to Settings > Access > Directory Services. This specific section allows the FlashArray to connect to a centralized directory server (such as OpenLDAP or even Active Directory providing LDAP services) to pull the necessary UNIX user and group attributes required for NFS file permissions to function properly.
Here is why the other options are incorrect:
Settings > Access > Create Active Directory Account (A): This specific menu path is used strictly for configuring native Active Directory (AD) computer accounts and joining the domain to support the SMB (Server Message Block) protocol. Since the scenario explicitly states the company is an "NFS only shop," configuring an SMB AD account is not the correct step.
Settings > Access > File System (C): While you manage file-level exports and policies within the Purity file interface, the global configuration for remote user authentication and directory server integration lives under the dedicated Directory Services pane.
Answer:
Explanation:
In Pure Storage Purity OS, the best practice and proper configuration method for sharing a single volume across multiple hosts, such as a VMware ESXi cluster or a Microsoft Windows Server Failover Cluster (WSFC), is to connect the volume to a Host Group.
When you create a Host Group, you add the individual Host objects (which contain the WWPNs, IQNs, or NQNs) into that group. When a volume is then connected to the Host Group, Purity automatically ensures that the volume is presented to every host in that group using the exact same LUN ID. Consistent LUN IDs across all nodes in a cluster are a strict requirement for clustered file systems like VMFS and Cluster Shared Volumes (CSV) to function correctly and prevent data corruption.
Here is why the other options are incorrect:
Connect the volume to each individual host (C): This is known as creating "private connections." If you manually connect a shared volume to multiple hosts individually, Purity might assign a different LUN ID to the volume on each host. Inconsistent LUN IDs will cause clustered operating systems to fail to recognize the disk as a shared resource. Private connections should only be used for boot LUNs or standalone servers.
Connect a volume group to the host (B): In Purity, a "Volume Group" is a logical container used for applying consistent snapshot policies, replication schedules, or ActiveCluster configurations to a set of related volumes (like a database and its log files). Volume groups are not used for host presentation or access control.
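A toy model (not Purity code; host names and LUN numbers are illustrative) shows why per-host private connections can diverge while a host-group connection cannot:

```python
# Toy model: private connections pick the next free LUN per host, so hosts
# with different existing connections get different LUN IDs for the same
# volume. A host-group connection assigns one LUN ID to every member.

class Array:
    def __init__(self):
        self.lun_map = {}          # host -> {volume: lun}

    def connect_private(self, host, volume):
        luns = self.lun_map.setdefault(host, {})
        luns[volume] = max(luns.values(), default=0) + 1   # next free LUN

    def connect_hgroup(self, hosts, volume, lun):
        for host in hosts:
            self.lun_map.setdefault(host, {})[volume] = lun  # same LUN for all

a = Array()
a.connect_private("esx-01", "boot-esx-01")   # esx-01 already uses LUN 1
a.connect_private("esx-01", "shared-vmfs")   # -> LUN 2 on esx-01
a.connect_private("esx-02", "shared-vmfs")   # -> LUN 1 on esx-02 (mismatch!)
print(a.lun_map["esx-01"]["shared-vmfs"], a.lun_map["esx-02"]["shared-vmfs"])  # 2 1

b = Array()
b.connect_hgroup(["esx-01", "esx-02"], "shared-vmfs", lun=254)
print(b.lun_map["esx-01"]["shared-vmfs"], b.lun_map["esx-02"]["shared-vmfs"])  # 254 254
```

The mismatch in the first case is exactly what breaks VMFS and CSV shared-disk detection.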
Answer:
Explanation:
On a Pure Storage FlashArray, Ethernet ports operate at both a physical hardware layer and a logical network configuration layer. If you need to verify the actual physical negotiated port speed of an Ethernet port (for example, verifying if a 25GbE port negotiated down to 10GbE due to switch configurations or cable limitations), you must query the hardware layer directly.
The command purehw list --all --type eth interacts directly with the physical NIC hardware components to report their true link status, health, and dynamically negotiated hardware link speed.
Here is why the other options are incorrect:
purenetwork eth list --all (B): The purenetwork command suite is primarily focused on the logical Layer 2/Layer 3 networking stack. It is used to configure and list IP addresses, subnet masks, MTU sizes (Jumbo Frames), and routing, rather than the physical hardware negotiation details of the NIC itself.
pureport list (A): The pureport command suite is specifically used for managing and viewing storage protocol target ports. An administrator would use this to list the array's Fibre Channel WWNs or iSCSI IQNs to configure host zoning or initiator connections, not to verify Ethernet link negotiation speeds.
Answer:
Explanation:
When an NFS client successfully mounts an export using the target's IP address, it proves that the fundamental network connectivity (routing, firewalls) and the storage protocol layer (NFS export policies, host access permissions) are functioning correctly.
However, if the exact same mount attempt fails when using the Fully Qualified Domain Name (FQDN) of the FlashArray file service, the issue lies entirely with name resolution. The Domain Name System (DNS) is responsible for translating human-readable FQDNs into the IP addresses required for network communication. If the client cannot reach the DNS server, or if the DNS server lacks the correct A or AAAA records for the FlashArray's file Virtual IP (VIP) addresses, the client won't be able to resolve the name to the IP, causing the mount command to fail.
Here is why the other options are incorrect:
Issue with the Active Directory (AD) controller (A): Active Directory is primarily used for directory services, user authentication, and authorization (such as mapping permissions for SMB or NFSv4). While AD environments usually include DNS, an "AD controller issue" in the context of storage protocols usually points to permission denials, not host name resolution failures. Furthermore, since the mount works via IP, basic access is already validated.
Issue with the OpenLDAP (C): Similar to AD, OpenLDAP provides directory services for user mappings (UID/GID) and authentication. It does not perform FQDN-to-IP resolution.
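A quick way to confirm the diagnosis from the client side is to test name resolution independently of NFS (a generic sketch, not a Pure tool; the FQDN in the comment is hypothetical):

```python
# Generic client-side check: if the IP mount works but the FQDN mount fails,
# verify that the FQDN resolves at all before touching NFS configuration.
import socket

def resolves(name):
    """Return True if this client can resolve `name` to an IP address."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))   # True on any correctly configured client
# resolves("filearray.example.com")  # hypothetical FA File VIP FQDN:
# False here, while the IP mount works, confirms a DNS problem.
```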
Answer:
Explanation:
The ActiveCluster Mediator (whether it is the Pure1 Cloud Mediator or the On-Premises VM) is a
lightweight tie-breaker that communicates continuously with the management interfaces of both FlashArrays. If it was previously online and suddenly reports as "unreachable" from both arrays simultaneously, the issue is almost always caused by a network interruption or firewall rule change blocking the required communication ports between the arrays' management IP addresses and the Mediator VM.
If a network firewall is suddenly configured to drop or deny outbound TCP traffic (such as port 80/443 depending on the specific HTTP/HTTPS discovery and heartbeat configuration) from the FlashArrays to the ESXi-hosted Mediator, the arrays will fail to send their heartbeats, causing the mediator status to drop to "unreachable."
Here is why the other options are incorrect:
Fibre Channel (FC) zoning or network access has not been created properly for the host (A): The Mediator is completely independent of the front-end host storage fabric (Fibre Channel or iSCSI). Host zoning issues would prevent the ESXi server from seeing its volumes, but it would not cause the FlashArrays to lose management network connectivity to the Mediator.
The mediator does not reside within a Pure datastore (B): This is actually a strict best practice and requirement. Pure Storage explicitly states that the On-Premises Mediator VM must be deployed in a separate (third) failure domain. It should not reside on the ActiveCluster mirrored datastore, because a site-wide SAN failure would take the mediator offline exactly when it is needed most. Therefore, not residing on a Pure datastore is the correct setup, not a cause for an outage.
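The diagnosis can be confirmed with a simple outbound TCP probe from the management network (a generic sketch, not a Pure utility; the mediator hostname and port in the comment are assumptions):

```python
# Generic reachability probe: if outbound TCP to the mediator's port fails
# from both arrays' management networks at the same time, suspect a firewall
# or routing change rather than the mediator VM itself.
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical on-prem mediator address and HTTPS port:
# tcp_reachable("mediator.example.com", 443)
```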
Answer:
Explanation:
In a VMware vSphere environment utilizing Virtual Volumes (vVols), a Protocol Endpoint (PE) acts as a crucial logical proxy or I/O access point between the ESXi hosts and the storage array.
Unlike traditional VMFS datastores where the host mounts a massive LUN and places all VM files inside it, vVols map individual virtual machine disks directly to native volumes on the FlashArray. Because a single ESXi host could potentially need to communicate with thousands of individual vVol volumes, it would be extremely inefficient to map every single one directly to the host. Instead, the ESXi host mounts the Protocol Endpoint, and the storage array uses this PE to dynamically route the I/O to the correct underlying vVol. On a Pure Storage FlashArray, creating and connecting a PE volume to your ESXi host groups is a mandatory prerequisite for setting up a vVol datastore.
Here is why the other options are incorrect:
It allows for volumes of the same name within host groups (A): Purity OS requires all volume names across the entire FlashArray to be completely unique, regardless of which host group they are connected to or whether a Protocol Endpoint is in use.
It is required to set Host Protocol (C): The host communication protocol (such as iSCSI, Fibre Channel, or NVMe-oF) is determined by the physical host bus adapters (HBAs), network interface cards (NICs), and the configuration of the Host object in Purity, not by the creation of a volume type like a PE.
Answer:
Explanation:
In Pure Storage FlashArray File Services (Purity//FA), administrators can apply Quota Policies to managed directories to control and monitor capacity consumption. When configuring the rules for these quotas, the limits are categorized into two specific types: Enforced and Unenforced.
Enforced Quotas (Hard Limits): When a quota rule is set with the --enforced flag set to True, it acts as a hard boundary. If the users or applications writing to that managed directory hit the specified capacity limit, the FlashArray will actively block any further write operations, ensuring the directory cannot exceed its allocated space.
Unenforced Quotas (Soft Limits): When a quota rule is unenforced (the flag is set to False), it acts purely as a monitoring and alerting threshold. Users can continue to write data and organically grow the directory past the specified limit without application disruption, but the system will track the overage and trigger administrative notifications.
Here is why the other options are incorrect:
File and Block (A): This describes the two underlying storage protocols/architectures the unified FlashArray serves, not the types of capacity quota limits for directories.
Limited and Unlimited (B): While you can theoretically leave a file system to grow "unlimited" up to the size of the array, the specific technical parameters in the Purity quota policy engine are defined as enforced vs. unenforced.
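The enforced/unenforced distinction can be sketched as a small model (illustrative only; the class and field names are assumptions, not Purity's quota engine):

```python
# Toy model of the two quota types: an enforced (hard) rule blocks writes at
# the limit; an unenforced (soft) rule allows the write but records an alert.

class QuotaRule:
    def __init__(self, limit_bytes, enforced):
        self.limit = limit_bytes
        self.enforced = enforced
        self.used = 0
        self.alerts = []

    def write(self, nbytes):
        """Apply a write against the quota; return True if it was accepted."""
        if self.enforced and self.used + nbytes > self.limit:
            return False                       # hard limit: write is blocked
        self.used += nbytes
        if self.used > self.limit:
            self.alerts.append(self.used)      # soft limit: alert, don't block
        return True

hard = QuotaRule(limit_bytes=100 * 2**30, enforced=True)    # 100 GiB
soft = QuotaRule(limit_bytes=100 * 2**30, enforced=False)
print(hard.write(150 * 2**30))                 # False -> blocked
print(soft.write(150 * 2**30), len(soft.alerts))  # True 1 -> allowed + alert
```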
Answer:
Explanation:
According to the official Pure Storage FlashArray Asynchronous Replication Configuration and Best Practices Guide, the proper and immediate method to halt an active, in-progress asynchronous replication transfer is by disallowing the protection group at the target.
When you navigate to the target FlashArray and disallow the specific Protection Group, Purity immediately breaks the replication authorization for that group. If there is an in-progress snapshot transfer occurring at that exact moment, the transfer is immediately stopped, and the partially transferred snapshot data is discarded on the target side.
Here is why the other options are incorrect:
Disabling the replication schedule (B): Toggling the replication schedule to "Disabled" only prevents future scheduled snapshots from being created and sent. It does not kill or interrupt a replication transfer that is already currently in progress.
Removing the volume member from a protection group (A): Modifying the members of a protection group updates the configuration for the next snapshot cycle. It does not actively abort the transmission of the current point-in-time snapshot that the array is already busy sending over the WAN.
Answer:
Explanation:
On a Pure Storage FlashArray, volume snapshots are immutable, read-only, point-in-time representations of your data. Because they cannot be attached directly to a host to be read or modified, you must use the Copy function to make the data usable.
The Purity operating environment allows you to copy a snapshot to two specific destinations:
A new volume: This effectively creates a clone. It provisions a brand-new, writable volume using the exact data footprint of the snapshot. This is incredibly useful for test/dev environments, offline reporting, or granular file recovery where you don't want to disrupt the original production volume.
An existing volume: This takes the data from the snapshot and completely overwrites the target volume.
This is the standard procedure when you need to perform a full rollback of a corrupted volume, or when you want to quickly refresh a lower-level environment (like refreshing a QA database with yesterday's Production snapshot).
Here is why the other options are incorrect:
A new Snapshot (B & C): You cannot directly "copy" a snapshot to create another standalone snapshot. Snapshots are uniquely generated from active volumes. If you wanted to duplicate a snapshot's exact state, you would first copy it to a volume, and then take a new snapshot of that resulting volume.
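The two copy destinations can be modeled in a few lines (a toy model; `Volume`, `take_snapshot`, and `copy_snapshot` are illustrative names, not Purity APIs):

```python
# Toy model: a snapshot is an immutable point-in-time copy; "copying" it
# either clones it into a new writable volume or overwrites an existing one.

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)

def take_snapshot(vol):
    return tuple(vol.blocks)          # immutable, read-only point in time

def copy_snapshot(snap, target=None):
    """Copy to a new volume (clone) when target is None, else overwrite."""
    if target is None:
        return Volume(snap)           # new writable volume (clone)
    target.blocks = list(snap)        # full rollback/refresh of the target
    return target

prod = Volume(["a", "b", "c"])
snap = take_snapshot(prod)            # yesterday's state
prod.blocks = ["a", "X", "X"]         # corruption after the snapshot

qa = copy_snapshot(snap)              # clone for test/dev
copy_snapshot(snap, target=prod)      # roll production back
print(qa.blocks, prod.blocks)  # ['a', 'b', 'c'] ['a', 'b', 'c']
```

Note that the snapshot itself is never writable; only the volumes produced from it are.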
Answer:
Explanation:
In the Pure Storage FlashArray GUI, granular performance metrics (Latency, IOPS, Bandwidth) are located under the Analysis > Performance tabs. When you navigate to the Volumes sub-tab and select a specific volume, Purity displays a unified line graph tracking the performance of that volume over time.
By default, the Latency graph simultaneously plots Read, Write, and Mirrored Write (for volumes participating in an ActiveCluster or synchronous replication pod) latencies. Because these lines can overlap or compress the Y-axis (especially if one metric spikes), isolating a specific metric requires interacting with the graph's legend.
To view the exact, un-obscured latency for standard write requests to that volume, the administrator should click on "Read" and "Mirrored Write" in the chart's legend. This deselects those metrics, effectively hiding their lines from the graph and automatically rescaling the view to exclusively display the host write latency.
Here is why the other options are incorrect:
Health > Network (A): The Health tab is used to check the hardware status of the physical controller ports, including link state and errors. While you might see port-level throughput or queue depth here, it does not provide volume-specific application latency.
Storage > Volumes > Details (B): The Storage tab is primarily used for provisioning and configuration management. Clicking on a volume here will show its size, data reduction ratio, snapshot policies, and connected hosts, but it does not provide detailed interactive performance graphs.