Exam Dumps
Every month, we help more than 1,000 people prepare well for their exams and pass them.

IAPP AIGP Exam

Artificial Intelligence Governance Professional Online Practice

Last updated: April 21, 2026

You can work through these online practice questions to gauge your knowledge of the IAPP AIGP exam before deciding whether to register for it.

If you want to pass the exam on the first try and save 35% of your preparation time, choose the AIGP dumps (latest real exam questions), which include the 100 most recent exam questions and answers.


Question No : 1


Scenario:
A financial services company is planning a new AI project to assess creditworthiness. The AI team is mapping out what tasks should be completed during the planning phase of the AI lifecycle.
The planning phase of the AI lifecycle includes all of the following EXCEPT:

Answer:
Explanation:
The correct answer is C. The choice of architecture (e.g., neural networks vs. decision trees) is typically part of the design and development phase, not the initial planning.
From the AIGP Body of Knowledge, AI Lifecycle Module:
“Planning involves scoping, context definition, stakeholder identification, governance planning, and assumptions―not yet model selection.”
Confirmed in the ILT Participant Guide:
“Design decisions such as architecture or algorithm type come after planning―usually during development based on technical feasibility and data availability.”

Question No : 2


Scenario:
An enterprise is evaluating multiple third-party generative AI tools to integrate into its platform. As part of its AI governance policy, it is assessing the most effective methods to reduce risks related to bias, data misuse, and liability when using third-party solutions.
All of the following are commonly adopted processes and policies in reducing potential risks introduced by third-party AI tools or applications EXCEPT:

Answer:
Explanation:
The correct answer is B. Allowing PII to be freely entered into prompts without safeguards is considered a major privacy and security risk and is not a responsible governance practice.
From the AIGP ILT Guide, Generative AI & Third-Party Risk Management:
“Use of personal or sensitive information in AI prompts can result in unintended exposure, regulatory breaches, and downstream liability.”
The AI Governance in Practice Report 2025 highlights:
“PII should be minimized or protected by design. Prompt engineering should prevent entry of personally identifiable data unless legally and technically safeguarded.”
A, C, and D are established best practices under responsible AI procurement and use.

Question No : 3


CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing company has:
• Entered into a contract with the technology company with suitable representations and warranties.
• Completed an impact assessment on the LLM for this intended use.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Followed applicable regulatory requirements.
• Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.
The technology company has:
• Provided guidance and resources to developers to address environmental concerns.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Provided tools and resources to measure bias specific to the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Mapped and mitigated potential societal harms and large-scale impacts.
• Followed applicable regulatory requirements and industry standards.
• Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
The marketing company and its tech provider have taken reasonable steps to govern the AI’s use, including legal disclosures, impact assessments, and bias mitigation. However, the company wants to take one more step to improve governance and reduce risks related to ongoing oversight and accountability.
While the marketing agency took steps to mitigate its risks, the best additional step would be to:

Answer:
Explanation:
The correct answer is D. Forming a dedicated governance committee ensures continuous oversight, role clarity, and accountability throughout the AI lifecycle.
From the AIGP ILT Guide, Governance Structures:
“Organizations using AI in high-impact scenarios should establish a governance body responsible for oversight of risk, compliance, and ethical alignment.”
Also reflected in AI Governance in Practice Report 2025:
“Committees support cross-functional decision-making, provide guidance for updates, and maintain accountability. This is especially critical for high-stakes applications like marketing to diverse audiences.”
Options A, B, and C are valid supplementary actions, but D offers a long-term and systematic governance mechanism.

Question No : 4


CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing company has:
• Entered into a contract with the technology company with suitable representations and warranties.
• Completed an impact assessment on the LLM for this intended use.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Followed applicable regulatory requirements.
• Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.
The technology company has:
• Provided guidance and resources to developers to address environmental concerns.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Provided tools and resources to measure bias specific to the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Mapped and mitigated potential societal harms and large-scale impacts.
• Followed applicable regulatory requirements and industry standards.
• Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
The agency has taken governance actions such as:
• Conducting an impact assessment
• Providing legal disclosures
• Enabling bias mitigation and explainability
• Complying with regulatory requirements
All of the following should be included in the marketing company’s disclosures about the use of the LLM EXCEPT:

Answer:
Explanation:
The correct answer is B (proprietary methods). While transparency is important, organizations are not obligated to disclose proprietary algorithms, methods, or trade secrets in public disclosures.
From the AIGP Body of Knowledge, Transparency & Disclosures:
“AI system users should disclose the purpose, capabilities, limitations, and applicable legal context― but not sensitive IP.”
AI Governance in Practice Report 2025 (Transparency Section) states:
“Disclosure requirements balance public understanding with the need to protect proprietary business interests. Proprietary training methods are not expected to be disclosed.”
Thus, while it’s best practice to disclose the intended purpose, legal compliance, and system limitations, internal proprietary techniques are usually excluded.

Question No : 5


CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing company has:
• Entered into a contract with the technology company with suitable representations and warranties.
• Completed an impact assessment on the LLM for this intended use.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Followed applicable regulatory requirements.
• Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.
The technology company has:
• Provided guidance and resources to developers to address environmental concerns.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Provided tools and resources to measure bias specific to the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Mapped and mitigated potential societal harms and large-scale impacts.
• Followed applicable regulatory requirements and industry standards.
• Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
The technology company has also addressed environmental concerns and societal harms.
All of the following results would be considered biased outputs from this AI system EXCEPT:

Answer:
Explanation:
The correct answer is A. Sending ads to construction companies (business entities) rather than individual workers is a business targeting decision, not inherently a biased AI output.
From the AIGP ILT Participant Guide, Bias & Fairness Module:
“Biased outputs often include stereotyping, exclusion of underrepresented groups, or reinforcing harmful societal assumptions.”
Examples like insufficient representation of minority groups or gender-stereotyping in visuals or language are typical manifestations of bias.
AI Governance in Practice Report 2025 also notes:
“Bias in generative models may manifest in representation gaps, stereotyping, or unequal performance across demographic groups.”
Option A, by contrast, describes a distribution strategy, not a bias generated by the AI model.

Question No : 6


CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing company has:
• Entered into a contract with the technology company with suitable representations and warranties.
• Completed an impact assessment on the LLM for this intended use.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Followed applicable regulatory requirements.
• Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.
The technology company has:
• Provided guidance and resources to developers to address environmental concerns.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Provided tools and resources to measure bias specific to the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Mapped and mitigated potential societal harms and large-scale impacts.
• Followed applicable regulatory requirements and industry standards.
• Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
Which stakeholder is responsible for the lawful collection of data used to train the foundational AI model?

Answer:
Explanation:
The correct answer is B (the tech company). The party that develops and trains the foundational model is responsible for ensuring the lawful collection of training data.
From the AIGP ILT Guide, Foundational Models & Data Governance:
“Responsibility for the lawfulness of data collection typically lies with the party that trains the model―usually the provider or developer of the foundational model.”
AI Governance in Practice Report 2025 confirms:
“General Purpose AI providers are required to ensure that training data is lawfully acquired, including compliance with intellectual property and privacy requirements.”
The marketing agency is only a user or downstream integrator, not responsible for original data collection.

Question No : 7


Scenario:
A mid-sized tech firm is building its AI governance program and is exploring ISO/IEC standards that could support consistency in terminology and risk assessment processes across teams.
ISO/IEC 22989 and ISO/IEC 42001 can be valuable resources for AI governance professionals in all of the following ways EXCEPT:

Answer:
Explanation:
The correct answer is C. ISO/IEC 22989 and 42001 focus on terminology, risk, and management systems, but do not specifically address procurement-related concerns with third-party vendors.
From the AIGP Body of Knowledge, Standards Section:
“ISO/IEC 22989 defines terminology and foundational concepts. ISO/IEC 42001 provides a management system standard for AI. They are not procurement-focused documents.”
Also confirmed in the AI Governance in Practice Report 2025:
“These standards help establish common language and risk governance procedures. Procurement governance typically falls under separate frameworks or sector-specific guidance.”
Thus, procurement governance (Option C) is not a central use case for these standards.

Question No : 8


Scenario:
A global organization wants to align with international frameworks on AI governance. They are reviewing guidance from the OECD on how to incorporate broader governance tools into their AI program.
Codes of conduct and collective agreements are what type of assessment tools as defined by the Organization for Economic Cooperation and Development (OECD)?

Answer:
Explanation:
The correct answer is B (procedural). The OECD Framework for Classifying AI Systems categorizes codes of conduct and collective agreements as procedural tools because they guide internal governance and decision-making processes.
From the AIGP ILT Participant Guide, Global Governance Models:
“Procedural tools include internal codes of conduct, collective agreements, and procedural audits that guide governance without necessarily involving technical measurement.”
AI Governance in Practice Report 2025 elaborates:
“These procedural tools support internal accountability mechanisms and ethics compliance frameworks… they are part of soft governance.”
These tools do not measure or analyze technical performance, hence they are not technical or analytic.

Question No : 9


Scenario:
A U.S.-based AI governance professional is evaluating resources from the National Institute of Standards and Technology (NIST) to guide the organization’s AI risk assessment strategy. They are particularly interested in programs focused on assessing AI-specific impacts.
The main purpose of NIST’s Assessing Risks and Impacts of AI (ARIA) program is to:

Answer:
Explanation:
The correct answer is A. The ARIA program by NIST is explicitly designed to support stakeholders in understanding and managing the risks and impacts of AI systems.
From the AIGP ILT Guide, U.S. Risk Frameworks Module:
“NIST’s ARIA program develops and pilots assessment tools for AI risks and impacts, aimed at improving organizational capacity for responsible AI use.”
Also cited in the AI Governance in Practice Report 2025 (Frameworks Section):
“ARIA supports and aligns with the AI Risk Management Framework by helping organizations assess AI harms, safety concerns, and societal implications.”
ARIA is not a red-teaming or sandbox program―it’s an assessment and governance resource.

Question No : 10


Scenario:
A distributor operating in the EU is responsible for selling imported high-risk AI systems to businesses. The distributor wants to ensure they fulfill all applicable obligations under the EU AI Act.
All of the following are obligations of a distributor of high-risk AI systems under the EU AI Act EXCEPT?

Answer:
Explanation:
The correct answer is C. Registration in the EU database is an obligation of providers of high-risk AI systems―not distributors.
From the AIGP ILT Guide, Roles & Obligations Module:
“Distributors must verify CE marking, ensure instructions for use are provided, inform authorities of risks, and take corrective action when necessary. However, registration duties in the EU database lie with the provider.”
Also from the AI Governance in Practice Report 2025:
“The AI Act differentiates responsibilities for developers, providers, importers, and distributors. Only providers of high-risk systems are obligated to register their systems in the EU AI Database.”
Distributors focus on verification and communication, not formal registration.

Question No : 11


All of the following may be permissible uses of an AI system under the EU AI Act EXCEPT:

Answer:
Explanation:
The correct answer is D. Emotion recognition in the workplace is flagged as unacceptable or highly restricted under the EU AI Act due to its intrusive nature and potential for misuse.
From the AIGP ILT Guide, EU AI Act Training Module:
“AI systems that monitor individuals’ emotions in the workplace or educational settings are listed among prohibited or strictly limited practices under Article 5.”
AI Governance in Practice Report 2025 supports this interpretation:
“Emotion recognition systems, especially in sensitive contexts such as employment or education, raise significant concerns under EU fundamental rights law and are likely to be restricted.”
Other uses listed―such as emergency response or emotion detection in healthcare―may fall under lawful and beneficial uses, especially when justified by public interest.

Question No : 12


Scenario:
An organization is developing a powerful general-purpose AI (GPAI) model that has systemic impact.
The compliance team is assessing what legal obligations apply under the EU AI Act.
Under the EU AI Act, which of the following compliance actions applies only to General Purpose AI models with systemic risk?

Answer:
Explanation:
The correct answer is A. Only GPAI models with systemic risk must publish a detailed summary of training data to meet transparency and accountability standards under the EU AI Act.
From the AI Governance in Practice Report 2025 (EU AI Act Section):
“For GPAI systems with systemic risk, providers must publish sufficiently detailed summaries of the content used to train the model.”
Also, the AIGP ILT Guide confirms:
“The obligation to disclose summaries of training data applies only to systemic-risk GPAI models, not all general-purpose models or high-risk systems.”
This unique requirement is part of the Act’s effort to increase transparency and auditability for powerful foundational models.

Question No : 13


Scenario:
A European AI technology company was found to be non-compliant with certain provisions of the EU AI Act. The regulator is considering penalties under the enforcement provisions of the regulation.
According to the EU AI Act, which of the following non-compliance examples could lead to fines of up to €15 million or 3% of annual worldwide turnover (whichever is higher)?

Answer:
Explanation:
The correct answer is B. The EU AI Act assigns a tiered penalty system based on the severity of the violation. A breach of obligations related to high-risk AI systems falls into the mid-tier category, triggering fines of up to €15 million or 3% of annual global turnover.
From the AIGP ILT Guide, EU AI Act Module:
“Providers of high-risk AI systems must comply with strict documentation, testing, monitoring, and registration obligations. Breaches of these result in significant fines of up to €15 million or 3% of turnover.”
AI Governance in Practice Report 2025 supports this:
“Non-compliance with obligations under Title III (high-risk systems) leads to financial penalties under Article 71(3) of the EU AI Act.”
Note: The highest penalty (€35 million or 7%) applies to prohibited AI uses, not to obligations for high-risk systems.

Question No : 14


Scenario:
A company is using different types of AI systems to enhance consumer engagement. These include chatbots, recommendation engines, and automated content generation tools.
Which of the following situations would be least likely to raise concerns under existing consumer protection laws?

Answer:
Explanation:
The correct answer is D. Personalized content and advertisements, as long as properly disclosed and non-deceptive, are not generally a consumer protection issue under current legal regimes.
From the AI Governance in Practice Report 2025 (Consumer Protection Section):
“Standard practices like targeted advertising and recommendations are widely accepted provided they comply with transparency and consent requirements.”
Meanwhile, credit decision-making and misleading AI performance claims (Answers A and B) have already led to regulatory enforcement.
The AIGP ILT Guide highlights:
“Deceptive claims, biased financial decisions, and unauthorized data use may violate consumer protection and privacy laws. Advertising personalization is routine but must be disclosed appropriately.”

Question No : 15


Which of the following is not considered biometric data under U.S. privacy laws?

Answer:
Explanation:
The correct answer is D. GPS location data is not biometric data―it is considered geolocation data, which is personal data but not biometric under most U.S. laws.
From the AIGP ILT Guide (Data Privacy Module):
“Biometric data includes measurable biological or behavioral characteristics such as iris scans, facial recognition, voice prints, and keystroke patterns when used for identification.”
AI Governance in Practice Report 2025 (Privacy and Data Protection section):
“Location data, while sensitive, is not considered biometric unless it’s tied to a uniquely identifying biological trait.”
Thus, GPS location data, while potentially sensitive, is not classified as biometric.
