Artificial Intelligence Governance Professional Online Practice
Last updated: April 21, 2026
You can use these online practice questions to gauge your knowledge of the IAPP AIGP exam material before deciding whether to register for the exam.
If you want to pass the exam and save 35% of your preparation time, choose the AIGP dumps (latest real exam questions), which currently include the 100 most recent exam questions and answers.
Answer:
Explanation:
The correct answer is C. The choice of architecture (e.g., neural networks vs. decision trees) is typically part of the design and development phase, not the initial planning.
From the AIGP Body of Knowledge - AI Lifecycle Module:
“Planning involves scoping, context definition, stakeholder identification, governance planning, and assumptions―not yet model selection.”
Confirmed in the ILT Participant Guide:
“Design decisions such as architecture or algorithm type come after planning―usually during development based on technical feasibility and data availability.”
Answer:
Explanation:
The correct answer is B. Allowing PII to be freely entered into prompts without safeguards is considered a major privacy and security risk and is not a responsible governance practice.
From the AIGP ILT Guide - Generative AI & Third-Party Risk Management:
“Use of personal or sensitive information in AI prompts can result in unintended exposure, regulatory breaches, and downstream liability.”
The AI Governance in Practice Report 2025 highlights:
“PII should be minimized or protected by design. Prompt engineering should prevent entry of personally identifiable data unless legally and technically safeguarded.”
A, C, and D are established best practices under responsible AI procurement and use.
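As a purely illustrative sketch of how the "prevent entry of personally identifiable data" guidance quoted above can be operationalized, the Python snippet below screens a prompt for likely PII before it is sent to a generative AI service. The patterns and the screen_prompt helper are assumptions made for this example, not an AIGP-prescribed control; a production guardrail would rely on a vetted PII-detection or redaction service.

import re

# Hypothetical pre-submission PII screen (illustrative only).
# A real deployment would use a dedicated PII-detection/redaction service.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if likely PII is found."""
    findings = [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    return (len(findings) == 0, findings)

allowed, findings = screen_prompt("Summarize the complaint from jane.doe@example.com")
if not allowed:
    print(f"Prompt blocked; possible PII detected: {findings}")

Blocking rather than silently redacting keeps a human in the loop, which aligns with the minimization-by-design guidance cited above.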
Answer:
Explanation:
The correct answer is D. Forming a dedicated governance committee ensures continuous oversight, role clarity, and accountability throughout the AI lifecycle.
From the AIGP ILT Guide - Governance Structures:
“Organizations using AI in high-impact scenarios should establish a governance body responsible for oversight of risk, compliance, and ethical alignment.”
Also reflected in AI Governance in Practice Report 2025:
“Committees support cross-functional decision-making, provide guidance for updates, and maintain accountability. This is especially critical for high-stakes applications like marketing to diverse audiences.”
Options A, B, and C are valid supplementary actions, but D offers a long-term and systematic governance mechanism.
Answer:
Explanation:
The correct answer is B - Proprietary methods. While transparency is important, organizations are not obligated to disclose proprietary algorithms, methods, or trade secrets in public disclosures.
From the AIGP Body of Knowledge - Transparency & Disclosures:
“AI system users should disclose the purpose, capabilities, limitations, and applicable legal context― but not sensitive IP.”
AI Governance in Practice Report 2025 (Transparency Section) states:
“Disclosure requirements balance public understanding with the need to protect proprietary business interests. Proprietary training methods are not expected to be disclosed.”
Thus, while it’s best practice to disclose the intended purpose, legal compliance, and system limitations, internal proprietary techniques are usually excluded.
Answer:
Explanation:
The correct answer is A. Sending ads to construction companies (business entities) rather than individual workers is a business targeting decision, not inherently a biased AI output.
From the AIGP ILT Participant Guide - Bias & Fairness Module:
“Biased outputs often include stereotyping, exclusion of underrepresented groups, or reinforcing harmful societal assumptions.”
Examples like insufficient representation of minority groups or gender-stereotyping in visuals or language are typical manifestations of bias.
AI Governance in Practice Report 2025 also notes:
“Bias in generative models may manifest in representation gaps, stereotyping, or unequal performance across demographic groups.”
Option A, by contrast, describes a distribution strategy, not a bias generated by the AI model.
Answer:
Explanation:
The correct answer is B - The tech company. The party that develops and trains the foundational model is responsible for ensuring the lawful collection of training data.
From the AIGP ILT Guide - Foundational Models & Data Governance:
“Responsibility for the lawfulness of data collection typically lies with the party that trains the model―usually the provider or developer of the foundational model.”
AI Governance in Practice Report 2025 confirms:
“General Purpose AI providers are required to ensure that training data is lawfully acquired, including compliance with intellectual property and privacy requirements.”
The marketing agency is only a user or downstream integrator, not responsible for original data collection.
Answer:
Explanation:
The correct answer is C. ISO/IEC 22989 and 42001 focus on terminology, risk, and management systems, but do not specifically address procurement-related concerns with third-party vendors.
From the AIGP Body of Knowledge - Standards Section:
“ISO/IEC 22989 defines terminology and foundational concepts. ISO/IEC 42001 provides a management system standard for AI. They are not procurement-focused documents.”
Also confirmed in the AI Governance in Practice Report 2025:
“These standards help establish common language and risk governance procedures. Procurement governance typically falls under separate frameworks or sector-specific guidance.”
Thus, procurement governance (Option C) is not a central use case for these standards.
Answer:
Explanation:
The correct answer is B - Procedural. The OECD Framework for Classifying AI Systems categorizes codes of conduct and collective agreements as procedural tools because they guide internal governance and decision-making processes.
From the AIGP ILT Participant Guide - Global Governance Models:
“Procedural tools include internal codes of conduct, collective agreements, and procedural audits that guide governance without necessarily involving technical measurement.”
AI Governance in Practice Report 2025 elaborates:
“These procedural tools support internal accountability mechanisms and ethics compliance frameworks… they are part of soft governance.”
These tools do not measure or analyze technical performance, hence they are not technical or analytic.
Answer:
Explanation:
The correct answer is A. The ARIA program by NIST is explicitly designed to support stakeholders in understanding and managing the risks and impacts of AI systems.
From the AIGP ILT Guide - U.S. Risk Frameworks Module:
“NIST’s ARIA program develops and pilots assessment tools for AI risks and impacts, aimed at improving organizational capacity for responsible AI use.”
Also cited in the AI Governance in Practice Report 2025 (Frameworks Section):
“ARIA supports and aligns with the AI Risk Management Framework by helping organizations assess AI harms, safety concerns, and societal implications.”
ARIA is not a red-teaming or sandbox program―it’s an assessment and governance resource.
Answer:
Explanation:
The correct answer is C. Registration in the EU database is an obligation of providers of high-risk AI systems―not distributors.
From the AIGP ILT Guide - Roles & Obligations Module:
“Distributors must verify CE marking, ensure instructions for use are provided, inform authorities of risks, and take corrective action when necessary. However, registration duties in the EU database lie with the provider.”
Also from the AI Governance in Practice Report 2025:
“The AI Act differentiates responsibilities for developers, providers, importers, and distributors. Only providers of high-risk systems are obligated to register their systems in the EU AI Database.”
Distributors focus on verification and communication, not formal registration.
Answer:
Explanation:
The correct answer is D. Emotion recognition in the workplace is flagged as unacceptable or highly restricted under the EU AI Act due to its intrusive nature and potential for misuse.
From the AIGP ILT Guide - EU AI Act Training Module:
“AI systems that monitor individuals’ emotions in the workplace or educational settings are listed among prohibited or strictly limited practices under Article 5.”
AI Governance in Practice Report 2025 supports this interpretation:
“Emotion recognition systems, especially in sensitive contexts such as employment or education, raise significant concerns under EU fundamental rights law and are likely to be restricted.”
Other uses listed―such as emergency response or emotion detection in healthcare―may fall under lawful and beneficial uses, especially when justified by public interest.
Answer:
Explanation:
The correct answer is A. Only GPAI models with systemic risk must publish a detailed summary of training data to meet transparency and accountability standards under the EU AI Act.
From the AI Governance in Practice Report 2025 (EU AI Act Section):
“For GPAI systems with systemic risk, providers must publish sufficiently detailed summaries of the content used to train the model.”
Also, the AIGP ILT Guide confirms:
“The obligation to disclose summaries of training data applies only to systemic-risk GPAI models, not all general-purpose models or high-risk systems.”
This unique requirement is part of the Act’s effort to increase transparency and auditability for powerful foundational models.
Answer:
Explanation:
The correct answer is B. The EU AI Act assigns a tiered penalty system based on the severity of the violation. A breach of obligations related to high-risk AI systems falls into the mid-tier category, triggering fines of up to €15 million or 3% of annual global turnover.
From the AIGP ILT Guide - EU AI Act Module:
“Providers of high-risk AI systems must comply with strict documentation, testing, monitoring, and registration obligations. Breaches of these result in significant fines of up to €15 million or 3% of turnover.”
AI Governance in Practice Report 2025 supports this:
“Non-compliance with obligations under Title III (high-risk systems) leads to financial penalties under Article 71(3) of the EU AI Act.”
Note: The highest penalty (€35 million or 7%) applies to prohibited AI uses, not to obligations for high-risk systems.
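To make the tiering concrete, here is a minimal arithmetic sketch comparing the two caps mentioned above. It assumes the applicable cap is the higher of the fixed amount and the percentage of worldwide annual turnover; the tier labels, helper function, and turnover figure are hypothetical illustrations, not legal guidance.

# Illustrative comparison of the EU AI Act penalty tiers discussed above.
# Assumption: the applicable cap is the higher of the fixed amount and the
# percentage of worldwide annual turnover (hypothetical helper, not legal advice).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # top tier
    "high_risk_obligations": (15_000_000, 0.03),  # mid tier
}

def max_fine(tier: str, annual_global_turnover_eur: float) -> float:
    fixed_cap, pct_cap = TIERS[tier]
    return max(fixed_cap, pct_cap * annual_global_turnover_eur)

# Example: a provider with EUR 2 billion in global turnover breaching high-risk
# obligations: 3% of turnover (EUR 60 million) exceeds the EUR 15 million fixed amount.
print(f"{max_fine('high_risk_obligations', 2_000_000_000):,.0f}")  # 60,000,000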
Answer:
Explanation:
The correct answer is D. Personalized content and advertisements, as long as properly disclosed and non-deceptive, are not generally a consumer protection issue under current legal regimes.
From the AI Governance in Practice Report 2025 (Consumer Protection Section):
“Standard practices like targeted advertising and recommendations are widely accepted provided they comply with transparency and consent requirements.”
Meanwhile, credit decision-making and misleading AI performance claims (Answers A and B) have already led to regulatory enforcement.
The AIGP ILT Guide highlights:
“Deceptive claims, biased financial decisions, and unauthorized data use may violate consumer protection and privacy laws. Advertising personalization is routine but must be disclosed appropriately.”
Answer:
Explanation:
The correct answer is D. GPS location data is not biometric data―it is considered geolocation data, which is personal data but not biometric under most U.S. laws.
From the AIGP ILT Guide (Data Privacy Module):
“Biometric data includes measurable biological or behavioral characteristics such as iris scans, facial recognition, voice prints, and keystroke patterns when used for identification.”
AI Governance in Practice Report 2025 (Privacy and Data Protection section):
“Location data, while sensitive, is not considered biometric unless it’s tied to a uniquely identifying biological trait.”
Thus, GPS location data, while potentially sensitive, is not classified as biometric.