Administering Windows Server Hybrid Core Infrastructure Online Practice
Last updated: February 14, 2026
You can work through these online practice questions to gauge how well you know the Microsoft AZ-800 exam material before deciding whether to register for the exam.
If you want to pass the exam with a 100% success rate and save 35% of your preparation time, choose the AZ-800 dumps (latest real exam questions), which currently include 54 up-to-date exam questions and answers.
Answer:
Explanation:
The Windows Server Data Deduplication section of the Administering Windows Server Hybrid Core Infrastructure course explains that deduplicated volumes require the deduplication components on any system that needs to read them: “Data Deduplication is implemented as a filter driver. To access a deduplicated volume on another server, you must install the Data Deduplication role service; otherwise files may appear as reparse points and cannot be opened.” It also clarifies that “when the role is present, the system transparently rehydrates data on access without requiring a restore.” In your scenario, Disk2 was deduplicated on Server1 and is now attached to Server2. To regain immediate access to all files as quickly as possible, install the Data Deduplication role service on Server2 so the filter can interpret the chunk store and metadata and make the files readable instantly. Creating a storage pool is irrelevant because the disk already exists and is readable; FSRM does not interpret dedup data; and restoring from Azure Backup would be slower and unnecessary since the original deduplicated volume is intact. Hence, the fastest, correct action is to install Data Deduplication on Server2. https://docs.microsoft.com/en-us/windows-server/storage/data-deduplication/overview
Answer:
Explanation:
In the Administering Windows Server Hybrid Core Infrastructure materials for file services, Microsoft specifies that FSRM is the component designed to enforce space usage limits on shared data: “File Server Resource Manager provides quota management, file screening, and storage reports. You can assign quotas to volumes or specific folders and use templates to standardize limits and notifications.” The guide further notes that you can “configure hard or soft quotas per folder, generate warning e-mails or events as thresholds are reached, and apply automatic quota templates for user home directories.” Because your requirement is to limit the amount of storage space that each user can consume in the UserData share, the supported and least-privilege approach is to apply an FSRM quota on the UserData folder (or on each user’s home subfolder via auto-apply templates). Other options listed do not meet the goal: Storage Spaces manages physical capacity and resiliency, not per-user limits; Work Folders synchronizes user data but does not enforce quotas on a share; DFS Namespaces provides a unified namespace and referrals, not storage consumption control. Therefore, FSRM is the correct tool to enforce per-user or per-folder storage limits on UserData.


Answer:
Explanation:
You can create a file named File2.docx in D:\Folder1 on Server1. = No
You can create a file named File1.docx in D:\Data1 on Server2. = No
File3.docx will sync to Server1. = Yes
In the Administering Windows Server Hybrid Core Infrastructure content for Azure File Sync, Microsoft describes Azure File Sync as a multi-master synchronization service where a sync group contains one cloud endpoint (the Azure file share) and one or more server endpoints (paths on Windows Server). The documentation explains that “the namespace is kept consistent across all endpoints in the sync group; files and folders created on any endpoint are synced to the Azure file share and then to all other server endpoints.” It also states that “Azure File Sync is bidirectional and uses a last-writer-wins conflict model; the cloud endpoint is the hub that fans out changes to all registered server endpoints,” and that “existing items in the cloud share will be projected to each server endpoint (with cloud tiering optionally stubbing files) so the same names and paths appear on every endpoint.”
Applying this:
Because share1 already contains File2.docx, that file will be synced down to D:\Folder1 on Server1, so you cannot create another file named File2.docx there without a conflict: No.
File1.docx exists on Server1 and will be uploaded to share1 and then projected to Server2 at D:\Data1, so creating a brand-new File1.docx there would conflict: No.
File3.docx exists in share1 (and also locally on Server2) and will be synchronized to Server1 at D:\Folder1: Yes.
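The merge behavior described above can be sketched as a simple set union across endpoints. This is purely an illustration of the namespace-convergence rule quoted earlier, not the actual sync engine; file names stand in for the full sync metadata the service tracks:

```python
# Minimal sketch of Azure File Sync namespace merging: every endpoint in a
# sync group converges on the union of all endpoints' file names.
# (Illustrative only -- the real service also tracks versions and conflicts.)

def merged_namespace(*endpoints: set[str]) -> set[str]:
    """Return the namespace every endpoint converges to after sync."""
    merged: set[str] = set()
    for endpoint in endpoints:
        merged |= endpoint
    return merged

server1 = {"File1.docx"}                  # D:\Folder1 on Server1
server2 = {"File3.docx"}                  # D:\Data1 on Server2
share1 = {"File2.docx", "File3.docx"}     # cloud endpoint

converged = merged_namespace(server1, server2, share1)
print(sorted(converged))  # every endpoint ends up with File1, File2, File3
```

Because all three names end up on every endpoint, creating a new File2.docx on Server1 or a new File1.docx on Server2 would collide with a synced copy, while File3.docx does appear on Server1.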


Answer:
Explanation:
File1: Server1 and the Azure file share
File3: The Azure file share only
Azure File Sync cloud tiering manages local storage by balancing two specific policies: the Volume Free Space Policy and the Date Policy. According to the official documentation for Administering Windows Server Hybrid Core Infrastructure, the Volume Free Space policy is the primary governing factor for tiering decisions.
Storage Calculations: Volume E is 500 GB. With a 30 percent free space requirement, the server must maintain 150 GB of free space (500 × 0.30 = 150). This means only 350 GB (500 − 150 = 350) of file content can be cached locally on Server1.
Date Policy Application: The date policy is set to 70 days. Any file not accessed within this window is automatically tiered regardless of free space.
File4 (Last accessed 100 days ago) is older than 70 days, so it is tiered to The Azure file share only.
Volume Free Space Application: The remaining files (File1, File2, and File3) were all accessed within the last 70 days. Their total size is 500 GB (200 + 100 + 200 = 500). Since the maximum allowed local storage is 350 GB, Azure File Sync must tier additional files to satisfy the 30% free space requirement.
Tiering Priority: Files are tiered based on their "coldness" (last access time). File3 (60 days ago) is significantly older than File1 (2 days) and File2 (10 days). By tiering File3, the remaining local data (File1 + File2) totals 300 GB, which fits within the 350 GB limit.
Conclusion: Consequently, File1 remains cached on Server1 and the Azure file share, while File3 is tiered to The Azure file share only.
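The two-step reasoning above (date policy first, then free-space policy against the coldest files) can be sketched as follows. File4's size is not given in the scenario, so 100 GB is assumed purely for illustration; the sizes and ages of File1 through File3 come from the calculation above:

```python
# Sketch of the cloud-tiering logic described above: apply the date policy
# first, then tier the coldest remaining files until the volume free-space
# policy (30% of 500 GB) is satisfied. Illustrative only.

VOLUME_GB = 500
FREE_SPACE_POLICY = 0.30
DATE_POLICY_DAYS = 70

# (name, size in GB, days since last access); File4's size is assumed.
files = [
    ("File1", 200, 2),
    ("File2", 100, 10),
    ("File3", 200, 60),
    ("File4", 100, 100),
]

max_cached_gb = VOLUME_GB * (1 - FREE_SPACE_POLICY)   # 350 GB may stay local

# Step 1: date policy -- anything colder than 70 days is tiered outright.
tiered = {name for name, _, age in files if age > DATE_POLICY_DAYS}

# Step 2: free-space policy -- tier the coldest remaining files until the
# locally cached data fits under the 350 GB ceiling.
remaining = sorted(
    (f for f in files if f[0] not in tiered),
    key=lambda f: f[2], reverse=True)                 # coldest first
cached_gb = sum(size for _, size, _ in remaining)
for name, size, _ in remaining:
    if cached_gb <= max_cached_gb:
        break
    tiered.add(name)
    cached_gb -= size

print(sorted(tiered))  # ['File3', 'File4'] -> tiered to the Azure file share only
```

File4 is tiered by the date policy, File3 by the free-space policy, and the 300 GB left local (File1 + File2) fits under the 350 GB ceiling, matching the conclusion above.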



Answer:
Explanation:
The Azure File Sync design guidance in Administering Windows Server Hybrid Core Infrastructure explains that a sync group defines one sync topology and is anchored by exactly one cloud endpoint (an Azure file share). The material states: “Each sync group contains a single cloud endpoint… Additional Azure file shares must be placed in separate sync groups.” Consequently, after Group1 uses share1 as its cloud endpoint, share2 cannot be added to the same sync group, which makes the first statement No.
For server endpoints, the guide notes: “A server endpoint represents a specific path on a registered Windows Server… A server can host multiple server endpoints, including in the same sync group, provided paths don’t overlap (no parent/child or identical paths).” Because D:\Folder1 is already in Group1, adding E:\Folder2 from the same Server1 is supported and non-overlapping, so the second statement is Yes.
Finally, multi-server synchronization is a core capability: “Multiple servers can be added as server endpoints to the same sync group to enable branch-to-branch and server-to-cloud sync; contents are merged in the Azure file share namespace.” Therefore, adding D:\Data from Server2 as another server endpoint in Group1 is fully supported, making the third statement Yes.
These rules satisfy the scenario: one cloud endpoint per sync group, multiple non-overlapping server endpoints (including multiple from the same server), and multi-server participation in the same group.

Answer:
Explanation:
The exam materials describe several patterns for resolving split internal namespaces distributed across multiple DNS servers: zone delegation from a parent zone, and conditional forwarders between peer authoritative servers. Delegation enables the parent (contoso.local on Server1) to refer queries for child zones to their authorities. However, child zone servers (Server2 for east.contoso.local and Server3 for west.contoso.local) don’t automatically resolve names in the parent or sibling zones. The recommended approach is: “configure conditional forwarders on each child’s DNS server to the authoritative servers for the other internal namespaces while keeping Internet resolution via root hints or upstream forwarders.”
Implementing conditional forwarders on Server2 for contoso.local (to Server1) and west.contoso.local (to Server3), and on Server3 for contoso.local (to Server1) and east.contoso.local (to Server2), enables full internal resolution. All three servers already use root hints for Internet hosts, satisfying the external resolution requirement without additional changes. This exactly meets the stated goal.
Answer:
Explanation:
The hybrid core curriculum explains that DNSSEC validation by Windows clients is controlled through the Name Resolution Policy Table (NRPT), deployable via Group Policy. The NRPT lets administrators “require DNSSEC for specific namespaces and configure how clients validate responses.” While trust anchors (Add-DnsServerTrustAnchor) are used by DNS servers performing validation, member servers acting as DNS clients rely on NRPT rules to demand DNSSEC-validated answers from their resolvers for named namespaces (e.g., fabrikam.com). The guidance emphasizes: to “enforce DNSSEC validation on domain-joined clients for a given suffix, create a GPO-based NRPT rule that requires DNSSEC,” ensuring unsigned or invalid answers are rejected. Therefore, to make all member servers validate DNSSEC for fabrikam.com, deploy a GPO NRPT rule targeting that namespace. Adding trust anchors on Server1 or on each member server is unnecessary (and in the latter case, inapplicable unless they run the DNS Server role).

Answer:
Explanation:
Infrastructure documents: =
In the Windows Server DNS guidance for hybrid core infrastructure, Microsoft specifies that forwarders send “all queries for names that the DNS server cannot resolve locally to a designated upstream server,” while conditional forwarders target “queries for a specific DNS domain to one or more authoritative DNS servers for that domain.” When a branch office DNS server must resolve a partner/remote domain hosted elsewhere in the organization and you cannot change the authoritative server, the recommended pattern is to configure a conditional forwarder on the branch server pointing to the remote authoritative server. For Internet name resolution, you configure a standard forwarder to the required recursive resolver IP.
Applied here: New York clients use Server2. To resolve contoso.com (hosted on Server1), create a conditional forwarder on Server2 for contoso.com that points to 10.1.1.1 (Server1). To meet the requirement to forward all other external lookups, configure a forwarder on Server2 to 131.107.100.200. This design avoids zone transfers or changes on Server1 and fulfills both name-resolution requirements with minimal administration.
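The resolution path on Server2 described above amounts to a suffix match against the conditional-forwarder table, falling back to the general-purpose forwarder. A minimal sketch of that decision (function name and structure are illustrative, not a real DNS server API; real servers also consult local zones and cache first):

```python
# Sketch of the forwarding decision on Server2: a conditional forwarder
# matches queries by domain suffix; everything else goes to the default
# forwarder. Illustrative only.

CONDITIONAL_FORWARDERS = {"contoso.com": "10.1.1.1"}   # points to Server1
DEFAULT_FORWARDER = "131.107.100.200"                  # all other external lookups

def choose_forwarder(qname: str) -> str:
    """Pick the upstream server for a query this server cannot answer locally."""
    labels = qname.lower().rstrip(".").split(".")
    # Try the longest matching suffix first (host.contoso.com -> contoso.com).
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in CONDITIONAL_FORWARDERS:
            return CONDITIONAL_FORWARDERS[suffix]
    return DEFAULT_FORWARDER

print(choose_forwarder("app.contoso.com"))   # 10.1.1.1 (Server1)
print(choose_forwarder("www.example.com"))   # 131.107.100.200
```

Queries under contoso.com are sent to Server1, and everything else goes to 131.107.100.200, matching the two requirements without any change on Server1.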

Answer:
Explanation:
The DNS chapters in Administering Windows Server Hybrid Core Infrastructure describe conditional forwarders as a way to direct queries for specific namespaces to authoritative DNS servers. The text notes: “A conditional forwarder forwards queries for a designated DNS domain to specified DNS servers,” which is used to “integrate split or private namespaces across sites or forests.” In this design, Server1 hosts contoso.local and delegates east and west to Server2 and Server3. By configuring Server2 and Server3 with a conditional forwarder for contoso.local pointing to Server1, any query for contoso.local (including child names like east.contoso.local or west.contoso.local when not answered locally) is sent to Server1. Server1, being authoritative for the parent, uses the existing delegations to return referrals/answers from the proper child zones. For Internet hosts, all three servers already use root hints, which the course material confirms remains valid alongside conditional forwarding. The documentation also stresses that “authoritative data is answered locally first; forwarding applies only to names the server is not authoritative for,” so Server2 continues to answer east locally while leveraging Server1 to reach parent and sibling zones. This configuration ensures that all servers can resolve all internal namespaces and Internet hosts.
Answer:
Explanation:
The AZ-800 materials discuss options for migrating on-premises workloads to Azure when IP address preservation is required and subnet overlaps exist across a Site-to-Site VPN. The guidance explains that Azure Extended Network allows you to “stretch an on-premises subnet into Azure so a VM can retain its original IP address after migration” and that it is specifically intended to “avoid renumbering during lift-and-shift scenarios where overlapping address space or application dependencies prevent changing IPs.” In contrast, services like VNet peering or Application Gateway do not solve overlapping address space or IP preservation for a VM, and NAT in a VNet translates traffic but does not allow the VM to keep its original IP address. The hybrid curriculum highlights that Extended Network “minimizes administrative effort by maintaining existing addressing and DNS records during migration,” which matches the requirement to migrate the on-premises VM to Azure without modifying the IP address and with minimal changes to the environment. Therefore, implementing Azure Extended Network before the migration is the correct and least-effort solution.

Answer:
Explanation:
In the Windows Server DNS planning guidance from Administering Windows Server Hybrid Core Infrastructure, forwarders can be used to centralize name resolution through a designated DNS server. The guide states that a DNS server can be configured to “forward queries it cannot resolve to one or more upstream DNS servers” and that this is often used to “centralize Internet and internal namespace resolution through an authoritative hub server.” In this scenario, Server1 hosts the contoso.local parent zone and already contains delegations to east.contoso.local and west.contoso.local. If Server2 and Server3 are set to forward unresolved queries to 10.0.1.10 (Server1), they will resolve:
• Internal names: Server1 is authoritative for the parent and, through delegations, can refer requests to the appropriate child-zone servers.
• Internet names: Server1 uses root hints and can resolve external hosts, with Server2/Server3 receiving the answers via forwarding.
The study materials emphasize: “Delegations enable a parent zone to direct queries to child zones,” and “forwarders do not break authority―authoritative data is answered locally; only unresolved names are forwarded.” Thus, configuring Server2 and Server3 to forward to Server1 satisfies the requirement that all DNS servers resolve all internal namespaces and Internet hosts, while keeping the design simple and consistent with DNS best practices.

Answer:
Explanation:
The Administering Windows Server Hybrid Core Infrastructure (AZ-800) study content for Hyper-V and Discrete Device Assignment (DDA) explains that devices passed through to a guest are first dismounted from the host and then attached to the VM. To return the hardware to the host you must reverse those steps in the proper order. The guide states that when reclaiming a DDA device, the VM must be powered off before removal, and you must use Remove-VMAssignableDevice to detach the device from the VM. After removal, the device remains in a host-dismounted state until you explicitly mount it back to the host with Mount-VMHostAssignableDevice. Finally, to make the hardware usable by the host OS, re-enable the device in Device Manager. This sequence is summarized in the course materials as: stop (turn off) the VM that owns the device → remove the VM assignment (Remove-VMAssignableDevice) → return it to the host (Mount-VMHostAssignableDevice) → enable the device for host use (Device Manager). This ensures the NVMe device is properly detached from VM1, made available to Server1 again, and recognized/initialized by Windows on the host for normal operation.


Answer:
Explanation:
The Administering Windows Server Hybrid Core Infrastructure guidance explains two Windows container isolation modes: process isolation (Windows Server containers) and Hyper-V isolation. With process isolation, containers share the host’s kernel, so they run as ordinary processes on the host. With Hyper-V isolation, each container runs inside a lightweight VM with its own kernel; this provides a hard isolation boundary and prevents kernel sharing with other containers. The curriculum further notes that Linux containers on Windows require Hyper-V isolation because they cannot share the Windows kernel.
Applying these rules:
Container1 “must NOT share a kernel with other containers.” The only mode that prevents kernel sharing is Hyper-V isolation, so Container1 must use Hyper-V isolation.
Container2 is a Linux container. On a Windows Server container host, Linux containers are run via a utility VM and therefore require Hyper-V isolation.
Container3 is a Windows container that needs a static IP address. The networking modules describe that Windows containers can obtain static IPs when you use supported networks (for example, l2bridge/transparent) regardless of isolation mode; therefore a Windows workload like a database can run in either process or Hyper-V isolation, depending on the isolation/security you want.
Thus, the correct selections are: Hyper-V isolation only for Container1 and Container2, and Hyper-V isolation or process isolation for Container3.
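The three container rules above form a small decision table, sketched below. The function name and inputs are illustrative, not a real container-runtime API; the rules themselves come from the curriculum text quoted above:

```python
# Decision-table sketch of the Windows container isolation rules stated above.
# Illustrative only.

def allowed_isolation(container_os: str, must_not_share_kernel: bool) -> set[str]:
    """Return the isolation modes that satisfy the stated constraints."""
    if container_os == "linux":
        # Linux containers on a Windows host need their own kernel (utility VM).
        return {"hyperv"}
    if must_not_share_kernel:
        return {"hyperv"}            # only Hyper-V isolation avoids kernel sharing
    return {"hyperv", "process"}     # Windows containers may use either mode

print(allowed_isolation("windows", True))    # Container1 -> {'hyperv'}
print(allowed_isolation("linux", False))     # Container2 -> {'hyperv'}
print(allowed_isolation("windows", False))   # Container3 -> both modes
```

Container1 and Container2 each resolve to Hyper-V isolation only, while Container3 (a Windows workload with no kernel-sharing restriction) can use either mode, matching the selections above.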
Answer:
Explanation:
Within the Manage and maintain Windows Server IaaS virtual machines section, the guide notes operational impacts of common VM changes. It states: “Resizing a virtual machine changes its compute allocation; the VM is restarted and may require stop/deallocate if the target size isn’t available on the current host. Expect downtime during a size change.” In contrast, the storage guidance says: “Managed data disks can be attached or removed from a running VM; these operations do not require a VM redeploy and typically do not incur downtime.” Adding a new standard SSD (data disk) is supported online, and detaching a data disk can also be performed without shutting down the VM (Windows may require a rescan and clean removal in the OS, but the VM remains running). Therefore, among the proposed changes, the one that requires downtime is changing the VM size to D4s_v4, because the platform must restart (and sometimes deallocate) the VM to re-provision compute resources, whereas attaching/detaching data disks does not inherently require downtime.
Answer:
Explanation:
Reference: The Administering Windows Server Hybrid Core Infrastructure materials covering Microsoft Defender for Cloud and Azure Policy guest configuration explain that guest configuration policies use a guest configuration extension and a managed identity on the VM to retrieve policy artifacts and report compliance. The text emphasizes: “When using Azure Policy guest configuration (audit or deployIfNotExists/modify), the virtual machine must have a managed identity enabled. The platform uses the VM’s managed identity to securely access content and to send compliance data.” It further clarifies that installing DSC or Custom Script extensions is not required to enable the Azure Policy guest configuration feature; the policy assignment deploys the needed guest configuration extension automatically when the VM has an identity. A system-assigned managed identity is the simplest least-privilege option because its lifecycle is tied to the VM and it requires no separate credential management. Hence, enabling a system-assigned managed identity on VM1 fulfills the prerequisite for Azure Policy guest configuration to manage the server.