Exam Dumps
Every month, we help more than 1,000 people prepare for and pass their exams.

Microsoft AB-730 Exam

AI Business Professional Online Practice

Last updated: March 30, 2026

You can use these online practice questions to assess your knowledge of the Microsoft AB-730 exam topics before deciding whether to register for the exam.

If you want to pass the exam and save 35% of your preparation time, choose the AB-730 dumps (latest real exam questions), which currently include 47 exam questions and answers.


Question No : 1


You receive the following response to a prompt: "Sorry, it looks like I can't respond to this. Let's try a different topic."
What is a possible cause of the response?

Answer:
Explanation:
Microsoft 365 Copilot follows Microsoft’s Responsible AI principles and enforces strict content safety policies. When a prompt violates safety guidelines―such as containing harmful, abusive, illegal, or restricted content―the system may refuse to generate a response. The refusal message shown is consistent with safety filtering behavior.
Generative AI systems include moderation layers that evaluate prompts before generating output. If the prompt is classified as unsafe or non-compliant with policy, Copilot blocks the request and encourages the user to try a different topic.
A vague prompt typically results in a generic or clarifying response rather than a refusal. There is no fixed limit of five requests per prompt. Exceeding the context window usually results in truncation or processing errors, not a safety-based refusal message.
Therefore, the most likely cause of the response is that the prompt contains language that violates safety guidelines.
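The pre-generation moderation layer described above can be sketched as follows. This is a minimal illustration only: the refusal string matches the question, but the blocked terms and keyword matching are hypothetical stand-ins for the trained safety classifiers that real systems such as Copilot actually use.

```python
# Minimal sketch of a pre-generation moderation layer. The blocked terms and
# keyword matching are illustrative only; production systems use trained
# safety classifiers, not keyword lists.
REFUSAL = "Sorry, it looks like I can't respond to this. Let's try a different topic."

def classify_prompt(prompt: str) -> str:
    """Return 'unsafe' if the prompt matches a blocked category, else 'safe'."""
    blocked_terms = {"build a weapon", "bypass security"}  # hypothetical rules
    lowered = prompt.lower()
    return "unsafe" if any(term in lowered for term in blocked_terms) else "safe"

def respond(prompt: str) -> str:
    # The moderation check runs BEFORE any text is generated, which is why a
    # policy violation produces a refusal rather than a partial answer.
    if classify_prompt(prompt) == "unsafe":
        return REFUSAL
    return f"[generated answer for: {prompt}]"
```

Note that a vague prompt passes this check and still reaches generation, which is why vagueness yields a generic answer rather than this refusal message.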

Question No : 2


You sign in to the Microsoft 365 Copilot app by using your work account as shown in the exhibit. A colleague tells you that when they open the Microsoft 365 Copilot app, they have access to the Researcher agent. You need to access the Researcher agent.
What should you do?

Answer:
Explanation:
In Microsoft 365 Copilot, agents such as Researcher are accessed through the Agents experience within the Copilot app. If the user interface does not immediately display a specific agent, the correct action is to browse or search the available agents catalog. The exhibit shows the left navigation pane with an Explore agents option.
According to Microsoft AI Business Professional guidance, built-in and custom agents can be discovered and enabled through the Explore agents section. If the user already has the appropriate Copilot license and is signed in with their work account, there is no need to switch accounts or request another license.
Signing in through a browser does not change feature availability, and using a personal account would remove access to organizational features. Therefore, to access the Researcher agent, you should select Explore agents and search for Researcher.

Question No : 3


You sign in to the Microsoft 365 Copilot app by using your work account as shown in the following exhibit.



A colleague tells you that when they open the Microsoft 365 Copilot app, they have access to the Researcher agent. You need to access the Researcher agent.
What should you do?

Answer:
Explanation:
In Microsoft 365 Copilot, agents such as Researcher are accessed through the Agents experience within the Copilot app. If the user interface does not immediately display a specific agent, the correct action is to browse or search the available agents catalog. The exhibit shows the left navigation pane with an Explore agents option.
According to Microsoft AI Business Professional guidance, built-in and custom agents can be discovered and enabled through the Explore agents section. If the user already has the appropriate Copilot license and is signed in with their work account, there is no need to switch accounts or request another license.
Signing in through a browser does not change feature availability, and using a personal account would remove access to organizational features. Therefore, to access the Researcher agent, you should select Explore agents and search for Researcher.

Question No : 4


HOTSPOT
While you are using Microsoft 365 Copilot, content is returned from a website.
You need to verify the exact search query used to find the website.
What should you use? To answer, select the appropriate options in the answer area.



Answer:

Question No : 5


HOTSPOT
Select the answer that correctly completes the sentence.



Answer:


Explanation:
The Describe tab is where you leverage generative AI to help build an agent by describing what you want the agent to do in natural language. Instead of manually configuring everything first, you provide the agent’s purpose, target users, tone, and the tasks it should perform (for example: “Create an onboarding agent that answers HR policy questions in a friendly tone and references SharePoint policies”). Copilot then uses that description to propose an initial agent setup―such as suggested behaviors, starter prompts, and structure―accelerating agent creation and improving consistency. By contrast, the Configure tab is typically used to fine-tune and finalize settings (such as knowledge sources, actions, and other parameters) after the agent concept is established. A Copilot notebook or Copilot page can help organize content and outputs, but they are not the primary place where generative AI “builds” the agent definition. Using the Describe tab reflects best practice: start with clear intent and constraints, then refine configuration for governance, accuracy, and usability.

Question No : 6


HOTSPOT
Select the answer that correctly completes the sentence.



Answer:


Explanation:
Enterprise data protection (EDP) (often referred to as commercial data protection) is the capability that ensures Microsoft 365 Copilot handles organizational data under enterprise security and privacy guarantees. With EDP, Copilot processes prompts and responses within the customer’s Microsoft 365 boundary, respecting tenant isolation and identity-based access controls. This means Copilot only retrieves and uses data that the signed-in user is permitted to access, and the data remains governed by Microsoft 365 compliance features (such as retention, eDiscovery, audit, and Purview controls). Critically, EDP ensures that your organization’s prompts and the retrieved business content are not used to train the underlying AI foundation models. This is essential for protecting confidential business information and meeting regulatory and contractual requirements. The other options (Common Data Model, sensitivity labels, and Zero Trust) support data structure, classification, and security posture, but they do not specifically represent the Copilot guarantee that tenant data stays protected and is not used for model training.

Question No : 7


You are a project coordinator for a small consulting firm.
You are responsible for tracking client communications, managing project timelines, and preparing weekly status updates for internal stakeholders.
You have a Microsoft 365 Copilot license.
You create an agent to help you monitor project milestones, follow up on client emails, and generate weekly summary reports.
With whom can you share the agent?

Answer:
Explanation:
Microsoft 365 Copilot agents operate within the security, compliance, and identity boundaries of a Microsoft 365 tenant. Custom agents created in the Microsoft 365 Copilot app are governed by organizational policies, role-based access control, and Microsoft Entra ID authentication.
According to Microsoft AI Business Professional guidance, Copilot agents are designed for enterprise use and are shared within the organization unless administrators configure broader sharing capabilities. External sharing with personal Microsoft accounts or arbitrary email addresses is not supported by default due to security, data protection, and compliance requirements.
Because the agent in this scenario interacts with organizational data such as client emails, project milestones, and internal reports, access must remain restricted to authenticated users within the same tenant. This ensures that sensitive business information remains protected and that data access respects existing permissions.
Therefore, the agent can be shared only with people in your organization, making option D the correct answer.

Question No : 8


HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.



Answer:


Explanation:
All three statements are false because generative AI responses are not guaranteed to be identical even when the prompt is the same. First, when grounded in the web, results can vary due to changing web content, different retrieved sources, or differences in how information is summarized at run time. Second, when grounded in your organization’s data, responses can change based on updates to files, emails, meetings, permissions, or which specific items Copilot retrieves as the most relevant context at that moment. Third, even when relying only on the model’s general knowledge, large language models are probabilistic: they may choose different wording, structure, examples, or emphasis across runs, especially when temperature/decoding settings and internal routing differ. In business scenarios, this means Copilot outputs should be treated as drafts that may require validation, and repeatability should be improved by adding precise constraints (cite specific sources, use fixed formats, specify exact sections, and request verbatim quotes where appropriate).
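The third point, that identical prompts can still produce different outputs, follows from how decoding works: the model samples from a probability distribution over next tokens. A toy sampler, with a made-up two-token distribution rather than a real LLM, illustrates both the randomness and the effect of temperature:

```python
# Toy next-token sampler showing why identical prompts can yield different
# outputs. The two-token distribution is invented for illustration; a real
# LLM samples over tens of thousands of tokens.
import random

def sample_next_token(probs: dict[str, float], temperature: float,
                      rng: random.Random) -> str:
    # Temperature reshapes the distribution: high values keep randomness,
    # values near zero approach greedy (near-deterministic) decoding.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return tok
    return tok  # floating-point guard: fall back to the last token
```

With a low temperature the most probable token dominates almost every run, which is why tightening constraints (fixed formats, cited sources) improves repeatability even though it cannot guarantee it.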

Question No : 9


You ask Microsoft 365 Copilot to create a report based on information from the web. You verify the response and discover that some information is fictional.
What is this an example of?

Answer:
Explanation:
This scenario is an example of fabrication, which is commonly referred to in generative AI contexts as a hallucination. Fabrication occurs when an AI system generates information that appears credible but is factually incorrect, invented, or unsupported by verifiable sources.
According to Microsoft AI Business Professional guidance, large language models predict text based on patterns learned during training. They do not “know” facts in the human sense. As a result, when asked to generate reports using web-based information, the model may produce plausible-sounding but fictional details if sufficient grounding or reliable sources are not provided.
Deepfake refers specifically to synthetic media such as manipulated images, audio, or video. Overreliance describes a human behavior risk where users trust AI outputs without verification. Prompt injection is a malicious technique designed to manipulate model behavior. Bias refers to systematic unfairness in outputs.
In this case, the presence of fictional information in the generated report directly aligns with fabrication, making option B the correct answer.

Question No : 10


HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.



Answer:


Explanation:
Microsoft 365 Copilot is designed to be helpful by using work context―for example, the files you have access to, recent activity, meetings, emails, and SharePoint/OneDrive content―to suggest relevant prompts and help you start tasks faster. It also uses this context to augment your prompt before it is sent to the LLM. This is the grounding approach (often described as retrieval-augmented generation): Copilot retrieves relevant organizational content you’re permitted to access and adds it as supporting context so responses are accurate and business-relevant. However, Microsoft 365 Copilot does not use your organization’s contextual data to train the underlying foundation model. That separation is critical for enterprise privacy and compliance: your prompts, responses, and tenant data are used to generate the answer for your session and permissions, but are not used to improve or retrain the base LLM. This approach supports responsible AI, protects confidential business information, and ensures outputs respect access controls.
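The grounding flow described above can be sketched in a few lines. The function names, document shape, and permission model here are hypothetical simplifications, not the actual Microsoft Graph retrieval API; the point is only the sequence: retrieve permitted content, then augment the prompt before it reaches the LLM.

```python
# Sketch of retrieval-augmented generation (grounding). Document shape and
# permission model are hypothetical; real Copilot retrieval goes through
# Microsoft Graph with the signed-in user's identity.
def retrieve(query: str, documents: list[dict], user_permissions: set[str]) -> list[str]:
    """Return snippets from documents the user is permitted to access."""
    hits = []
    for doc in documents:
        # Permission trimming happens at retrieval time: content the user
        # cannot access is never added to the prompt.
        if doc["acl"] & user_permissions and query.lower() in doc["text"].lower():
            hits.append(doc["text"])
    return hits

def build_grounded_prompt(query: str, snippets: list[str]) -> str:
    """Augment the user's question with retrieved organizational context."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context from your organization:\n{context}\n\nQuestion: {query}"
```

Nothing in this flow trains the model: the retrieved text lives only in the prompt for that session, which is the separation the explanation above emphasizes.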

Question No : 11


HOTSPOT
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.



Answer:


Explanation:
Prompt injection is a generative AI security risk where an attacker inserts instructions (often hidden in text, documents, webpages, or user inputs) to override or manipulate the assistant’s intended behavior. This can lead to unintended actions such as ignoring policy controls, producing unsafe outputs, or attempting to reveal sensitive information. Because generative AI systems follow natural-language instructions, they can be socially engineered to prioritize malicious content unless safeguards are in place. This is why prompt injection can cause data exposure (for example, attempting to extract confidential content from grounded sources) and can also embed harmful instructions that redirect the model’s behavior. In enterprise settings like Microsoft 365 Copilot, mitigations include grounding boundaries, permission trimming, content filtering, and instruction hierarchy (system policies over user instructions). From a business governance perspective, users should treat untrusted inputs (emails, documents, web text) as potentially hostile and apply least-privilege access and validation when using AI outputs in decision-making.
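The instruction-hierarchy mitigation mentioned above can be sketched as follows. The marker list and message roles are illustrative assumptions, not a real API or a complete defense; the idea is that system policy outranks user input, and untrusted retrieved text is wrapped as data rather than executed as instructions.

```python
# Sketch of an instruction hierarchy with quarantine of suspicious input.
# Marker strings and message roles are illustrative only; keyword matching
# is not a complete prompt-injection defense.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")

def flag_untrusted(text: str) -> bool:
    """Heuristically flag text that tries to override the assistant's rules."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def assemble_messages(system_policy: str, user_prompt: str, retrieved: str) -> list[dict]:
    # Untrusted content is labeled as data; a flagged snippet is quarantined
    # so its embedded instructions never reach the model as commands.
    data_block = "[quarantined]" if flag_untrusted(retrieved) else retrieved
    return [
        {"role": "system", "content": system_policy},   # highest priority
        {"role": "user", "content": user_prompt},       # lower priority
        {"role": "context", "content": f"DATA ONLY:\n{data_block}"},
    ]
```

Treating emails, documents, and web text as potentially hostile, as the explanation advises, corresponds to always routing them through the data channel rather than the instruction channel.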

Question No : 12


You are discussing Microsoft 365 Copilot with a colleague. The colleague asks which data Copilot uses to answer questions when using the Work scope.
What should you tell your colleague?

Answer:
Explanation:
Microsoft 365 Copilot operates within two primary knowledge boundaries: its foundational large language model training data and the organizational data available within the Microsoft 365 tenant. However, Copilot strictly enforces Microsoft’s security and compliance model, meaning it only retrieves and uses data that the signed-in user is authorized to access.
When using the Work scope, Copilot combines the general knowledge it was trained on with organizational data such as documents, emails, chats, calendars, and files stored in Microsoft 365. Importantly, Copilot respects role-based access control and existing permissions. It does not surface information from content the user does not have access to.
Option A is incomplete because Work scope includes organizational data.
Option B is incorrect because Copilot does not access all tenant data indiscriminately; it is permission-scoped.
Option D is incomplete because Copilot also leverages its general training knowledge.
Therefore, the correct explanation is that Copilot provides responses based only on data the user can access, combined with its general training knowledge.

Question No : 13


You use Microsoft 365 Copilot to generate a training plan.
You need to check if there are any existing training plans in your organization that are similar to the new training plan.
What should you use in Copilot?

Answer:
Explanation:
Microsoft 365 Copilot integrates with Microsoft Search to help users discover relevant content across their organization’s Microsoft 365 data estate, including SharePoint, OneDrive, Teams, and Exchange. When the objective is to determine whether similar training plans already exist, the appropriate action is to perform a search across organizational content.
Using Search allows Copilot to query indexed enterprise documents and return files, plans, or related materials that the user has permission to access. This supports content reuse, avoids duplication of work, and aligns with Microsoft’s guidance on leveraging organizational knowledge efficiently.
Designer is focused on visual content creation, Apps provides access to Microsoft 365 applications, and Pages is used for creating and organizing content within Copilot. None of these options are intended for discovering existing documents across the tenant.
Therefore, to identify similar existing training plans within your organization, the correct tool to use in Copilot is Search.

Question No : 14


HOTSPOT
You have a Microsoft 365 subscription.
You do NOT have a Microsoft 365 Copilot license.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.



Answer:


Explanation:
Microsoft 365 Copilot Chat is included at no additional cost for users with qualifying Microsoft 365 subscriptions such as Business Basic, Business Standard, Business Premium, E3, and E5. Therefore, users without a Microsoft 365 Copilot add-on license can still access Copilot Chat.
Copilot Chat allows users to upload documents from their local device within a session for summarization, analysis, and drafting assistance. However, advanced enterprise-integrated features―such as creating agents that access organizational SharePoint folders―require a Microsoft 365 Copilot license.
Agent creation and enterprise data grounding leverage Microsoft Graph integration and are considered premium Copilot capabilities tied to licensed Copilot services.
This distinction reflects Microsoft’s licensing model: baseline AI chat access is included with qualifying subscriptions, while deep Microsoft 365 app integration and enterprise data agents require the additional Copilot license.

Question No : 15


You are creating a custom analytics agent in the Microsoft 365 Copilot app. The agent will use Microsoft Excel files that contain sales data as knowledge.
You need to ensure that the agent can create visualizations, perform mathematical operations, create aggregations, and analyze the data in the files.
What should you add to the agent?

Answer:
Explanation:
When building a custom analytics agent in Microsoft 365 Copilot that must process structured data from Excel files, advanced analytical capabilities are required. According to Microsoft AI Business Professional guidance, tasks such as performing mathematical calculations, generating aggregations, creating charts, and conducting structured data analysis require programmatic execution capabilities rather than simple text generation.
A code interpreter enables the agent to run Python-based analytical operations in a secure execution environment. This allows the agent to manipulate datasets, compute totals and averages, perform grouping and filtering, and generate visualizations such as bar charts or line graphs based on the Excel data. The interpreter bridges the gap between natural language instructions and executable analytical logic.
An image generator is designed for creative visual content and is unrelated to structured data analytics. Suggested prompts and templates improve usability and consistency but do not provide computational or visualization capabilities.
Therefore, to enable mathematical operations, aggregation, data analysis, and visualization of Excel sales data, the correct component to add to the agent is a code interpreter.
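The kind of analytical operation a code interpreter runs against the Excel data can be sketched as below. The column names and sample rows are hypothetical; in the real sandbox the interpreter would typically load the workbook with a library such as pandas (for example `pd.read_excel`), shown here with the standard library only.

```python
# Sketch of a code-interpreter style aggregation over sales rows. Column
# names and values are hypothetical; a real session would load them from
# the uploaded Excel workbook.
from collections import defaultdict

def aggregate_revenue(rows: list[dict]) -> dict[str, float]:
    """Total revenue per region -- a typical aggregation the agent performs."""
    totals: dict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["revenue"]
    return dict(totals)

# Rows as they might be read from the uploaded workbook (hypothetical data).
sales = [
    {"region": "East", "revenue": 100.0},
    {"region": "West", "revenue": 200.0},
    {"region": "East", "revenue": 150.0},
]
totals = aggregate_revenue(sales)
# A visualization step (e.g. a bar chart of the totals) would follow in the
# interpreter's sandbox.
```

This is exactly the gap the explanation describes: the aggregation is executable logic, not text generation, so it needs the code interpreter rather than an image generator or prompt template.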
