AI Transformation Leader (beta) Online Practice
Last updated: March 30, 2026
You can work through these online practice questions to gauge how well you know the material for the Microsoft AB-731 exam before deciding whether to register for it.
If you want to pass the exam on your first attempt and save about 35% of your preparation time, consider the AB-731 dumps (latest real exam questions), which currently include 52 exam questions and answers.

Correct Answer:
Explanation:
Answer Area
Microsoft Copilot provides a single AI app that has identical features and experiences across all Microsoft products. Answer: No
Microsoft 365 Copilot delivers AI capabilities for business users that use Microsoft 365 apps. Answer: Yes
Microsoft Security Copilot helps companies understand risks and the organizational security posture. Answer: Yes
No ― “Copilot” is an umbrella brand across Microsoft, but the experiences are not identical. Different Copilots target different workloads (productivity, security, development, business apps) and therefore expose different capabilities, connectors, permissions models, and admin controls. For example, Microsoft 365 Copilot is embedded in Word/Excel/PowerPoint/Outlook/Teams, while Security Copilot is built for SOC workflows and integrates with security tooling; they are intentionally not the same app with the same features.
Yes ― Microsoft 365 Copilot is specifically designed to deliver generative AI assistance for users working in Microsoft 365 applications. It enhances common business tasks such as drafting, summarizing, meeting recap, creating presentations, and working with documents and communications―directly inside the Microsoft 365 productivity suite.
Yes ― Microsoft Security Copilot is focused on security operations and helps analysts understand threats, investigate incidents, and improve visibility into security posture. Its purpose is aligned with helping organizations interpret security signals and risk context more efficiently, which supports understanding organizational risk and posture.
Correct Answer: B
Explanation:
The problem is harmful output language (inappropriate or exclusionary/ableist content). The requirement says you must prevent those responses while minimizing costs. The most cost-effective and direct control is to add a content-moderation filter (B) to screen and block (or rewrite/escalate) responses that violate your safety or inclusion standards. Moderation can be applied at the output stage (and often also at input) without retraining the model, which keeps costs and delivery time low. It also provides an immediate safety layer even if the underlying model occasionally produces biased or exclusionary phrasing.
Option A is not reliable: a newer model version might reduce issues but does not guarantee elimination of ableist language, and you still need policy enforcement.
Option C (retraining on only inclusive content) can help, but it is typically expensive (data curation, retraining, re-evaluation, regression testing, redeployment) and is not the "minimize costs" path; it can also reduce coverage and utility if overly restrictive.
Option D is clearly wrong because it would amplify the harmful behavior.
In practice, the lowest-cost, high-impact approach is to implement moderation thresholds and handling actions (block, warn, regenerate with constraints, human review) and then, if needed, follow up later with deeper mitigations like prompt constraints, targeted fine-tuning, red-teaming, and continuous evaluation.
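As a rough illustration of that moderation layer, here is a minimal Python sketch of output-stage filtering with a block threshold and a human-review threshold. The `safety_classifier` function and its keyword check are hypothetical stand-ins for a real moderation model or content-safety API, and the threshold values are illustrative, not recommended settings.

```python
# Minimal sketch of an output-moderation layer (option B).
# `safety_classifier` is a placeholder for any real moderation model or
# hosted content-safety API; the handling actions mirror the
# block / escalate / pass-through pattern described above.

from dataclasses import dataclass

@dataclass
class ModerationResult:
    severity: float   # 0.0 (safe) .. 1.0 (severe)
    category: str     # e.g. "exclusionary_language"

def safety_classifier(text: str) -> ModerationResult:
    # Placeholder: a real deployment would call a moderation service here.
    flagged = any(term in text.lower() for term in ("crazy", "insane"))
    return ModerationResult(
        severity=0.8 if flagged else 0.0,
        category="exclusionary_language" if flagged else "none",
    )

BLOCK_THRESHOLD = 0.7   # illustrative values, not recommendations
REVIEW_THRESHOLD = 0.4

def queue_for_human_review(text: str, result: ModerationResult) -> None:
    # Stand-in for an escalation queue or ticketing integration.
    print(f"[review queue] {result.category} ({result.severity:.2f}): {text[:60]}")

def moderate_response(candidate: str) -> str:
    result = safety_classifier(candidate)
    if result.severity >= BLOCK_THRESHOLD:
        return "This response was withheld by our content policy."  # block
    if result.severity >= REVIEW_THRESHOLD:
        queue_for_human_review(candidate, result)                    # escalate
    return candidate                                                 # pass through
```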

Correct Answer:
Explanation:
uses structured data and provides insights by using text, charts, tables, and other visuals.
The Analyst agent in Microsoft 365 Copilot is positioned as a “data analysis” reasoning agent that helps users work through structured information (for example, tables, spreadsheets, and other dataset-like inputs) and then produces analytical outputs. The best completion is the option stating it “uses structured data and provides insights by using text, charts, tables, and other visuals,” because that describes the hallmark outcome of analyst-style work: summarizing patterns, highlighting key drivers, and presenting results in formats that business users can act on. Analyst-style assistance typically includes exploring the data, identifying trends and anomalies, comparing segments, and explaining findings clearly―often accompanied by tables and visual representations that make the insights easier to consume.
The other dropdown options align to different use cases: “compiles background research for a new market or initiative” describes a research-oriented agent, “generate audio summaries” is a media summarization function, and “answer employee FAQs” describes a conversational knowledge assistant. Analyst is the one most directly associated with structured-data interpretation and producing a mix of narrative plus analytical artifacts (tables/charts) to communicate conclusions.

Correct Answer:
Explanation:
Answer Area
Microsoft 365 Copilot can amplify existing data governance challenges.
Answer: Yes
Implementing Microsoft 365 Copilot reduces data management costs.
Answer: No
Microsoft 365 Copilot can help IT teams manage data risks.
Answer: Yes
Yes ― Copilot relies on the permissions, sharing links, and content exposure that already exist in Microsoft 365. If an organization has oversharing (for example, broadly accessible SharePoint sites, poorly scoped Teams, unmanaged external sharing, or excessive access rights), Copilot can surface that content more easily through natural-language querying. In other words, Copilot doesn’t create new permissions, but it can increase visibility of governance gaps and make the impact of weak information architecture more apparent.
No ― It is not accurate to claim that implementing Copilot inherently reduces data management costs. Adoption often requires up-front investment in data hygiene, sensitivity labeling, retention, permission cleanup, DLP, and change management. Some organizations may realize productivity gains or reduced effort over time, but “reduces costs” is not a guaranteed outcome and depends heavily on the current state of governance, the scale of remediation needed, and how Copilot is rolled out.
Yes ― Copilot can support IT risk management when deployed with the right controls: identity and access governance, sensitivity labels, DLP policies, retention, auditing, and compliance tooling. Because Copilot operates within the Microsoft 365 security/compliance boundary and honors existing access controls, IT can apply centralized policies to reduce leakage risk and improve overall control of how organizational data is accessed and used.
Correct Answer: Azure AI Search
Explanation:
The requirement has two key phrases: indexing information and knowledge mining by extracting insights from documents. The Microsoft service purpose-built for this is Azure AI Search (formerly Azure Cognitive Search), which provides a search index over your content and supports “AI enrichment” workflows to extract and structure insights from documents during indexing.
Azure AI Search can ingest content from common enterprise sources (files, blobs, databases), build searchable indexes, and enrich the indexed content using built-in skills or integrated AI capabilities, such as entity recognition, key phrase extraction, language detection, and OCR (depending on the pipeline). This is exactly what "knowledge mining" refers to: turning large volumes of unstructured documents into structured, searchable knowledge that applications and users can query.
The other choices are partial fits: Azure Vision focuses on image/video analysis, not general document indexing. Azure Document Intelligence is excellent for extracting fields/tables from forms and documents, but on its own it does not provide the full indexing/search and knowledge mining layer across a corpus. Microsoft Foundry is an overarching platform for building AI apps/agents; it can incorporate search, but the specific service that directly delivers indexing + knowledge mining is Azure AI Search.
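As a sketch of what querying such an index might look like, the snippet below uses the azure-search-documents Python SDK to run a full-text query and read back an enriched field. The endpoint, key, index name, and the `keyPhrases` field are placeholders; the actual fields depend on how your indexing and enrichment pipeline is configured.

```python
# Minimal sketch: query an Azure AI Search index with the
# azure-search-documents SDK. All connection details are placeholders.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder
    index_name="enterprise-docs",                                 # placeholder
    credential=AzureKeyCredential("<query-key>"),                 # placeholder
)

# Full-text query over the index; select fields populated at indexing
# time (e.g., key phrases extracted by a built-in enrichment skill).
results = client.search(
    search_text="quarterly compliance obligations",
    select=["title", "keyPhrases"],
    top=5,
)

for doc in results:
    print(doc["title"], "->", doc.get("keyPhrases"))
```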

Correct Answer:
Explanation:
Answer Area
Allowing AI models to make autonomous decisions supports the Microsoft responsible AI principle of accountability.
Answer: No
Regularly testing AI models for fairness and inclusiveness helps ensure they align with Microsoft's Responsible AI principles.
Answer: Yes
Protecting user data and limiting access to personal information supports the Microsoft responsible AI principles of privacy and security.
Answer: Yes
Microsoft’s Responsible AI principles emphasize that people and organizations must remain accountable for AI systems and their outcomes. Accountability is strengthened by governance, human oversight, clear ownership, auditability, and processes to review and address issues―not by letting models make unchecked autonomous decisions. Therefore, statement 1 is No: increasing autonomy can actually increase risk unless paired with human-in-the-loop controls and clear escalation paths, because accountability requires clear responsibility for decisions and impacts.
Statement 2 is Yes because fairness and inclusiveness are explicitly supported through ongoing evaluation. Regular testing helps detect disparate impact, performance gaps across user groups, and unintended bias introduced by data drift or changes in usage patterns. It’s not a one-time activity; it’s continuous assurance that the system behaves appropriately as conditions change.
Statement 3 is Yes because privacy and security are directly supported by protecting personal/sensitive data, enforcing least privilege access, and implementing controls such as data loss prevention, encryption, access logging, and strong identity governance. Limiting access to personal information reduces exposure and supports compliance obligations while aligning with privacy-by-design and secure-by-design expectations for AI-enabled solutions.

Correct Answer:
Explanation:
Answer Area
Retrieval Augmented Generation (RAG) requires model fine-tuning.
Answer: No
Retrieval Augmented Generation (RAG) is helpful when you need a generative AI solution that can access current, verifiable information.
Answer: Yes
Retrieval Augmented Generation (RAG) enables you to get more relevant responses based on your organization's documents without retraining the base model.
Answer: Yes
RAG is an architecture pattern that improves generative AI responses by retrieving relevant information from external knowledge sources (for example, a document index, database, or knowledge base) and injecting that information into the model’s prompt/context at runtime.
No ― RAG does not inherently require fine-tuning. Fine-tuning changes the model weights. RAG, instead, keeps the base model as-is and augments it with retrieved context. Fine-tuning can be complementary (for style, domain tone, or specialized tasks), but it is not required for RAG to work.
Yes ― RAG is especially valuable when you need current and verifiable information because the retrieval layer can pull the latest approved content (updated policies, product specs, incident runbooks) and provide it to the model. This reduces hallucinations and makes answers traceable to known sources.
Yes ― A major benefit of RAG is improved relevance to organizational documents without retraining. Instead of rebuilding the model whenever documents change, you update the underlying content store/index; the model then generates responses grounded in the retrieved passages, producing answers that align with your organization’s latest information and terminology.
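A minimal sketch of the pattern follows, with a toy in-memory retriever standing in for a real index; the `search_index` and `build_grounded_prompt` names and the sample corpus are invented for illustration, and a production system would query a vector or keyword index (for example, Azure AI Search) instead.

```python
# Minimal RAG sketch: retrieve relevant passages at runtime, then inject
# them into the prompt. The base model's weights are never changed.

def search_index(query: str, top_k: int = 3) -> list[str]:
    # Toy retriever: a real system would call a search service and
    # return passage text ranked by relevance.
    corpus = {
        "expense policy": "Expenses over $500 require manager approval.",
        "travel policy": "Book travel through the approved portal only.",
    }
    words = query.lower().split()
    return [text for key, text in corpus.items()
            if any(w in key for w in words)][:top_k]

def build_grounded_prompt(question: str) -> str:
    passages = search_index(question)
    context = "\n".join(f"- {p}" for p in passages)
    # Retrieved context is supplied in the prompt, so updating the content
    # store immediately changes what the model is grounded on.
    return ("Answer using ONLY the sources below. Cite which source you used.\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

print(build_grounded_prompt("What is the expense policy?"))
```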
Correct Answer: D
Explanation:
The core business concern in the scenario is data leakage―employees using consumer tools where corporate data could be pasted, stored, or processed outside the organization’s governance boundary. The key differentiator of Microsoft 365 Copilot is that it’s designed to work inside your Microsoft 365 tenant and to respect the organization’s existing security, compliance, identity, and data access controls. Therefore, D is the best answer: Copilot accesses internal work data (Microsoft Graph-connected content such as mail, files, chats, meetings) in accordance with existing Microsoft 365 policies and permissions―meaning it can only surface content the user is already allowed to access, and it operates under enterprise-grade controls (authentication, auditing, compliance boundaries, and admin governance).
Options B and C describe general generative AI capabilities that personal ChatGPT can also provide (brainstorming, drafting, rewriting). A can be done in multiple tools as well, and it is not the primary “enterprise value” difference tied to the stated risk. The scenario’s driver is governance: reducing the likelihood of proprietary data leaving controlled systems while still enabling productivity. Rolling out Copilot addresses that by providing “work-safe” AI anchored to organizational content and managed through the same tenant controls your company already uses.

Correct Answer:
Explanation:
Answer Area
A generative AI solution is well-suited to predict next-quarter sales trends. Answer: No
A generative AI solution can summarize lengthy policy documents. Answer: Yes
A generative AI solution can create product descriptions from product specifications. Answer: Yes
No ― Predicting next-quarter sales trends is primarily a forecasting/predictive analytics problem. Microsoft differentiates predictive AI (forecasting outcomes from historical patterns) from generative AI (creating content like text, images, or code). While you can use LLMs to assist analysts (explain trends, draft narratives), the core forecasting model is typically traditional ML/time-series methods rather than generative AI as the main engine.
Yes ― Summarization is a classic, high-value generative AI capability. Given a long policy, an LLM can compress it into executive summaries, key obligations, risks, and action items, often with formatting constraints (bullets, sections, “do/don’t” lists). Microsoft highlights summarization and analysis as common generative AI use cases in business contexts.
Yes ― Generative AI is well-suited to transform structured inputs (features/specs) into natural-language outputs (product descriptions). This is straightforward “content generation,” where you control tone, length, and required fields (benefits, differentiators, disclaimers). Microsoft also points to generating product descriptions and similar marketing/customer-facing text as a practical generative AI scenario.
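For the summarization case, a minimal sketch using the openai Python SDK against an Azure OpenAI deployment might look like the following; the endpoint, key, API version, deployment name, and file path are all placeholders.

```python
# Minimal sketch: summarize a lengthy policy document with formatting
# constraints, assuming an Azure OpenAI deployment. Connection details
# are placeholders.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-key>",                                       # placeholder
    api_version="2024-06-01",                                   # placeholder
)

policy_text = open("leave_policy.txt").read()  # the lengthy policy document

response = client.chat.completions.create(
    model="<your-deployment-name>",  # your deployment, not a literal model id
    messages=[
        {"role": "system",
         "content": "Summarize policies as: 1) executive summary, "
                    "2) key obligations and risks, 3) a do/don't bullet list."},
        {"role": "user", "content": policy_text},
    ],
)
print(response.choices[0].message.content)
```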
Correct Answer: Standard On-Demand pricing
Explanation:
For a proof of concept, the key requirements are low commitment, quick start, and the ability to scale up or down as you learn what real usage looks like. Azure OpenAI Standard On-Demand pricing is designed for exactly that: you pay per token consumed (input and output) on a pay-as-you-go basis, which makes it ideal when demand is uncertain or variable―typical in early pilots and PoCs.
By contrast, Provisioned (PTUs) is best when you have well-defined, predictable throughput and latency requirements―usually a more mature, production workload. PTUs involve reserving model processing capacity to achieve consistent performance and more predictable costs, which is usually premature for a PoC where actual traffic patterns are not yet known.
Batch API is optimized for asynchronous high-volume jobs with a target turnaround (for example, up to 24 hours) and discounted pricing. That’s great for offline processing, but it does not match an interactive “agent” PoC that typically needs near-real-time responses and iterative testing.
Microsoft 365 Copilot is a separate SaaS licensing model and is not the Azure OpenAI pricing model for building your own agent solution.
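To make the on-demand trade-off concrete, here is a back-of-the-envelope cost sketch; the per-token rates are placeholders rather than published prices, so substitute the current rates for your model and region.

```python
# Pay-as-you-go cost model: spend scales with tokens actually consumed,
# which suits a PoC with uncertain traffic. Rates below are PLACEHOLDERS.

INPUT_RATE_PER_1K = 0.005    # placeholder: $ per 1,000 input tokens
OUTPUT_RATE_PER_1K = 0.015   # placeholder: $ per 1,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return ((input_tokens / 1000) * INPUT_RATE_PER_1K
            + (output_tokens / 1000) * OUTPUT_RATE_PER_1K)

# Example: a pilot handling 200 requests/day, each averaging
# ~1,500 input tokens and ~500 output tokens.
daily = 200 * estimate_cost(1500, 500)
print(f"Estimated daily spend: ${daily:.2f}")  # drops to zero on idle days
```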

Correct Answer:
Explanation:
uses reasoning capabilities to generate deep insights based on organizational data and the web.
The sentence is best completed by the option describing Researcher as a research-oriented reasoning agent that combines information from the web and your work data to produce deeper insights. Microsoft describes Researcher as an agent built into Microsoft 365 Copilot to tackle complex, multistep research and to help users gather, analyze, and summarize information from “the web, your work documents, or both,” producing a structured output that supports decision-making. That is exactly what the completion “uses reasoning capabilities to generate deep insights based on organizational data and the web” captures.
The other dropdown options are better matches for different tools/agents: “creates visual dashboards from structured data in Excel and Power BI” is more aligned to BI/reporting workflows; “generates a pivot table and performs time series forecasting” is spreadsheet/analytics functionality; and “performs complex, multi-step, data analysis and code execution tasks over arbitrary datasets” is the hallmark positioning of the Analyst agent (“virtual data scientist”) rather than Researcher. Researcher’s differentiator is deep research across both organizational context and the open web, while Analyst’s differentiator is data analysis and computation.
Correct Answer: D
Explanation:
An AI governance council (often called an “AI Council”) exists primarily to set direction and provide cross-functional oversight so AI adoption stays aligned to the organization’s values, risk posture, and Responsible AI commitments. That maps most directly to D. Microsoft’s guidance on creating an AI Council describes leadership responsibilities such as defining and communicating the organization’s AI vision, values, and policies, reviewing and approving AI use cases/projects, and coordinating with enablement and technical readiness teams to understand risks, issues, and opportunities. It also emphasizes representation across distinct functions (for example: senior leadership, legal, compliance, risk, ethics, data, technology, business, HR) to ensure governance decisions reflect a broad, accountable perspective.
The other options describe activities that may be supporting outcomes of governance, but they are not the council’s primary purpose.
A is narrow (IT policy enforcement/user monitoring) and is typically handled by security/compliance operations rather than the top-level governance body.
B is user enablement/training (commonly owned by adoption/change management teams).
C focuses on technical delivery and performance management (often owned by engineering/MLOps/service owners). The governance council’s central value is strategic guidance + oversight + cross-functional alignment to ensure Responsible AI adoption is consistent, accountable, and sustainable across the business.
Correct Answer: A
Explanation:
The requirement is specific: users must build and use declarative agents that can access work data (tenant data / organizational context). Microsoft’s licensing guidance for Copilot extensibility ties use of declarative agents to having the appropriate Copilot entitlement that enables tenant grounding and organizational data access. In Microsoft’s cost and licensing considerations for declarative agents, Microsoft states that to use a declarative agent, users must have a Microsoft 365 Copilot add-on license (or an equivalent Copilot Chat add-on path tied to eligible licensing). Therefore, among the provided options, the best recommendation is A.
Option B (Copilot Studio user license) is primarily about authoring/building agents in Copilot Studio, but it is not, by itself, the licensing prerequisite that grants end users the right to use those agents with full Microsoft 365 Copilot capabilities and work-data grounding inside the Microsoft 365 Copilot environment. Publishing/building can be separate from the end-user entitlement to use the agent with organizational context.
Option C (Copilot Chat pay-as-you-go) can enable usage-based access to declarative agents in some configurations, but the question asks for the best license recommendation for users who need work-data access through declarative agents. The Microsoft 365 Copilot add-on is the straightforward, fully supported entitlement for that scenario.
Correct Answer: A
Explanation:
Even when training data is already consistent and uniform, the first step in building a custom Azure Machine Learning model is still to prepare the training data. “Consistent” data reduces the amount of cleaning you may need, but preparation is broader than cleaning: you still must confirm the schema, validate data types, handle missing values (if any), ensure label quality (for supervised learning), select/engineer features, and split data into training/validation/test sets. Those actions determine whether training will be stable and whether evaluation metrics will be meaningful.
If you skip preparation and go directly to training (C), the model might learn from the wrong columns, inconsistent labels, or poorly partitioned data, producing misleading results. Evaluation (B) comes after training because you need a trained model to score and measure. Hyperparameter tuning (D) is an optimization activity that presupposes you already have a working training pipeline and a baseline model to improve. Deployment (E) is last, after you have validated performance and selected the model candidate.
Azure Machine Learning commonly operationalizes these steps through pipelines, where data preparation is a foundational stage that precedes training and evaluation (and can also be iterated as you refine features and quality).
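A minimal sketch of that preparation stage (schema and type checks, simple missing-value handling, and a stratified train/validation/test split) might look like this; the column names and CSV path are illustrative.

```python
# Minimal data-preparation sketch: validate schema, handle gaps, enforce
# dtypes, then split so later evaluation metrics are meaningful.

import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")  # placeholder path

# Validate the schema before anything else.
expected = {"feature_a": "float64", "feature_b": "float64", "label": "int64"}
assert set(expected) <= set(df.columns), "missing expected columns"

# Handle missing values (even 'consistent' data can have gaps).
df = df.dropna(subset=["label"])                   # every row needs a label
df = df.fillna(df.median(numeric_only=True))       # simple numeric imputation

# Enforce expected dtypes once gaps are handled.
df = df[list(expected)].astype(expected)

# Stratified train/validation/test split (70/15/15).
train_df, holdout = train_test_split(
    df, test_size=0.3, stratify=df["label"], random_state=42)
val_df, test_df = train_test_split(
    holdout, test_size=0.5, stratify=holdout["label"], random_state=42)
print(len(train_df), len(val_df), len(test_df))
```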

Correct Answer:
Explanation:
Answer Area
A generative AI model guarantees factually accurate responses if the model is trained on a large dataset.
Answer: No
Content filtering and responsible AI safeguards help a generative AI model generate safe and inoffensive content.
Answer: Yes
A generative AI model always produces fair and unbiased results when the training data has been properly prepared and reviewed for fairness.
Answer: No
No ― A larger training dataset can improve coverage and fluency, but it does not guarantee factual accuracy. Generative models can still hallucinate, mix concepts, or produce plausible-but-incorrect statements because they generate likely text rather than verifying truth. This is why solution designs commonly add grounding/retrieval, validation, and human review for high-stakes outputs.
Yes ― Content filtering and Responsible AI controls are specifically used to reduce harmful, unsafe, or policy-violating outputs. In practice, safeguards include input/output filters, safety classifiers, and governance controls that help enforce safety policies and minimize offensive content. These controls don’t make outputs “perfect,” but they materially reduce risk and are a standard part of production AI deployments.
No ― Even with careful data preparation and fairness reviews, models can still produce biased outcomes due to residual bias in data, label/measurement issues, deployment context, and shifting real-world distributions. “Always fair and unbiased” is an absolute claim that is not achievable in real systems; fairness is managed through continuous evaluation, monitoring, and mitigations―not assumed as guaranteed.