ISACA Advanced in AI Audit (AAIA) Online Practice
Last updated: February 14, 2026
These online practice questions let you gauge your knowledge of the ISACA AAIA exam before deciding whether to register for it.
If you want to pass the exam with a 100% success rate and cut your preparation time by 35%, choose the AAIA dumps (latest real exam questions), which currently include 90 exam questions and answers.
Answer:
Explanation:
Human-centered design (HCD) focuses on integrating stakeholder needs, ethical implications, and social impact into AI system development. The AAIA™ Study Guide emphasizes the role of HCD in minimizing negative workforce impacts and ensuring inclusive design.
“Auditors should review whether the AI system design process included human-centered practices, particularly for applications affecting jobs or critical human roles. HCD ensures responsible adoption.”
While feedback collection (C) and impact assessments (D) provide useful context, A directly addresses ethical impact mitigation at the design level.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “Ethical and Legal Considerations in AI,” Subsection: “Human-Centered AI and Workforce Impacts”
Answer:
Explanation:
Generative AI tools such as large language models often have token limitations that restrict how much input they can process in a single prompt. According to the AAIA™ Study Guide, when input exceeds token limits, the model processes only the initial portion, leading to biased outputs based on early entries.
“Token limits in generative AI systems may lead to partial input processing, skewing model outputs toward the beginning of the data and introducing interpretive bias. Auditors must assess whether output reflects the full dataset.”
Options A and C describe side effects, but D identifies the most significant risk: output distortion due to truncated input.
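The truncation risk above can be sketched in a few lines. This is an illustrative simplification: the whitespace "tokenizer" and the 4-token limit are stand-ins for a real model's tokenizer and context window, not any actual model's behavior.

```python
# Illustrative sketch of token-limit truncation. A model whose context
# window is exceeded effectively sees only the first `limit` tokens.
TOKEN_LIMIT = 4

def truncate_to_limit(text, limit=TOKEN_LIMIT):
    """Keep only the first `limit` whitespace tokens."""
    return text.split()[:limit]

records = "rec1 rec2 rec3 rec4 rec5 rec6"
processed = truncate_to_limit(records)
dropped = records.split()[len(processed):]

print(processed)  # ['rec1', 'rec2', 'rec3', 'rec4']
print(dropped)    # ['rec5', 'rec6'] -- silently excluded from the output
```

An auditor comparing `processed` against the full record set would detect that later entries never reached the model, which is exactly the skew the guide warns about.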
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Operations and Performance,” Subsection: “Limitations of Generative AI Models”
Answer:
Explanation:
Allowing AI to autonomously generate code without human review introduces significant risks, including security vulnerabilities, logic errors, and noncompliance with organizational development standards. The AAIA™ Study Guide strongly advocates for human-in-the-loop oversight, particularly in automated development contexts.
“AI-assisted development must include manual code reviews to ensure functionality, compliance, and security. Autonomous code generation without validation increases the risk of introducing undetected flaws.”
While A, B, and C involve operational risks or inefficiencies, only D constitutes a direct breach of secure development life cycle principles.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Fundamentals and Technologies,” Subsection: “AI in Software Development and Associated Risks”
Answer:
Explanation:
The AAIA™ Study Guide defines the primary objective of AI governance as establishing structure and accountability for AI initiatives. This includes clearly assigning responsibilities across development, deployment, risk management, and auditing roles to ensure that AI is used responsibly and transparently.
“AI governance establishes the policies, roles, and oversight structures that guide the ethical and secure deployment of AI. Clear accountability helps prevent unauthorized use and ensures strategic alignment.”
Options A and C are essential components of governance but are not its core definition. Option D is a business outcome, not a governance goal. Thus, B is the most comprehensive and accurate objective.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Governance and Risk Management,” Subsection: “Governance Objectives and Structures”
Answer:
Explanation:
The effectiveness of an AI-driven business process, such as categorizing customers for promotional campaigns, depends on how well it supports defined business objectives. The AAIA™ Study Guide recommends validating that AI methodology aligns with intended outcomes as part of performance auditing.
“Effectiveness is best measured by assessing whether the AI logic contributes meaningfully to business goals. Output alignment with organizational KPIs or campaign strategies provides clear evidence of functional success.”
Options A and D support operational resilience and quality assurance. Option C is a privacy technique, not directly tied to effectiveness validation. Thus, B is correct.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI in Audit Processes,” Subsection: “Evaluating AI Alignment with Business Objectives”
Answer:
Explanation:
Sudden and unexplained changes in AI-generated credit scores may result from data drift, model overfitting, or lack of recalibration. According to the AAIA™ Study Guide, regular expert review and calibration help maintain model reliability and transparency, particularly in high-stakes decisions like credit scoring.
“Ongoing human oversight ensures that predictive models remain stable and justifiable. In high-impact environments, such as banking, experts must review and recalibrate AI systems to prevent opaque or unexpected behavior.”
Option B may exclude relevant long-term patterns, C increases risk by removing oversight, and D is a validation strategy rather than a stability control. Therefore, A is the best option.
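One common way to operationalize the recalibration trigger described above is the Population Stability Index (PSI), which compares the current score distribution against a baseline. This sketch uses made-up bin counts and the conventional 0.2 rule-of-thumb threshold; neither the metric nor the threshold is prescribed by the study guide.

```python
import math

def psi(baseline_counts, current_counts):
    """Population Stability Index between two binned score distributions.
    PSI > 0.2 is a common rule-of-thumb trigger for expert review and
    recalibration (a convention, not an AAIA requirement)."""
    b_total = sum(baseline_counts)
    c_total = sum(current_counts)
    value = 0.0
    for b, c in zip(baseline_counts, current_counts):
        b_pct = max(b / b_total, 1e-6)  # floor to avoid log(0)
        c_pct = max(c / c_total, 1e-6)
        value += (c_pct - b_pct) * math.log(c_pct / b_pct)
    return value

# Hypothetical credit-score bin counts at model launch vs. today:
baseline = [100, 300, 400, 200]
current = [50, 150, 400, 400]
print(round(psi(baseline, current), 3))  # 0.277 -> flag for recalibration
```

A PSI this far above 0.2 would give the auditor objective evidence that the score distribution has shifted and that expert review is overdue.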
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Operations and Performance,” Subsection: “Model Monitoring and Recalibration Strategies”
Answer:
Explanation:
An AI usage policy sets the foundation for safe, ethical, and effective AI deployment. According to the AAIA™ Study Guide, having an AI policy in place ensures that users understand acceptable behaviors, limitations, and responsibilities when interacting with AI tools.
“AI acceptable use policies promote governance by clearly outlining the dos and don’ts of AI interaction, preventing misuse and aligning user activity with organizational values and compliance standards.”
Other actions (B, C, D) are important in operations and risk management but should follow the establishment of governance protocols through a usage policy. Hence, A is the highest-priority prerequisite.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Governance and Risk Management,” Subsection: “Policy Frameworks for End-User AI Interaction”
Answer:
Explanation:
Generative AI systems, particularly those based on transformer models, produce outputs using probabilistic computations. As a result, even when given the same input data, these models may generate different outputs depending on sampling strategies (e.g., temperature, top-k sampling).
“Generative AI operates probabilistically, meaning that outputs can vary with each run based on stochastic sampling techniques. This variability is expected and must be accounted for in risk-sensitive environments like finance.”
While A and B refer to limitations and architecture, and D is unrelated to logic, C directly explains the output inconsistency.
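The sampling strategies mentioned above can be sketched as follows. The logits, token names, and `sample_next_token` helper are hypothetical illustrations of temperature and top-k sampling, not any particular model's API.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=2, rng=None):
    """Temperature + top-k sampling over a {token: logit} dict.
    Lower temperature -> more deterministic; top_k limits candidates."""
    rng = rng or random.Random()
    # Keep only the k highest-scoring candidate tokens.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Softmax with temperature scaling (subtract max for stability).
    scaled = [(tok, logit / temperature) for tok, logit in top]
    peak = max(s for _, s in scaled)
    weights = [math.exp(s - peak) for _, s in scaled]
    tokens = [tok for tok, _ in scaled]
    return rng.choices(tokens, weights=weights)[0]

logits = {"approve": 2.0, "review": 1.5, "reject": 0.5}
# Same input, repeated runs: the sampled token can differ run to run.
print({sample_next_token(logits) for _ in range(20)})
```

Because the draw is weighted but random, repeated runs over identical logits can yield different tokens, which is precisely the output inconsistency option C describes.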
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Fundamentals and Technologies,” Subsection: “Stochastic Behavior in Generative Models”
Answer:
Explanation:
Imputing missing values using the mean, median, or mode is a common technique to fill blanks in datasets. However, according to the AAIA™ Study Guide, this method introduces a significant risk of information distortion, especially if the data is not normally distributed or if imputed values disproportionately impact underrepresented groups.
“Simple imputation can reduce data variability and reinforce existing biases. It may also misrepresent the true distribution of the data, leading to skewed model outputs.”
Other options (B, C, D) may involve transformations, but they do not inherently cause as much bias or data misrepresentation as A.
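A minimal sketch of the distortion risk, using made-up right-skewed income data: the mean is pulled upward by one outlier, so imputing it both misrepresents the typical record and shrinks measured variability.

```python
import statistics

# Hypothetical skewed income data with missing values (None).
incomes = [30_000, 32_000, 35_000, 38_000, 250_000, None, None]

observed = [x for x in incomes if x is not None]
mean_value = statistics.mean(observed)
imputed = [x if x is not None else mean_value for x in incomes]

print(mean_value)                   # far above the typical record
print(statistics.median(observed))  # the typical record
print(statistics.stdev(imputed) < statistics.stdev(observed))  # True
```

Here the imputed value (77,000) is more than double the median (35,000), and the filled-in dataset reports lower variability than the data actually observed, which is the bias the quoted passage warns about.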
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Fundamentals and Technologies,” Subsection: “Data Imputation and Transformation Risks”
Answer:
Explanation:
Periodic testing and monitoring for bias is a sustainable, proactive strategy aligned with best practices outlined in the AAIA™ Study Guide. This approach ensures that the AI system remains compliant over time, even as data and hiring conditions change.
“Ongoing fairness assessments help detect emerging biases and ensure that the AI model maintains equitable decision-making standards. Periodic testing also allows organizations to take corrective action before regulatory or reputational damage occurs.”
Suspending the system (B) or relying solely on external datasets (C) are temporary or limited in scope. Manual reviews (D) are effective but do not solve the root issue. Therefore, A provides a comprehensive, audit-aligned solution.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “Ethical and Legal Considerations in AI,” Subsection: “Bias Mitigation and Monitoring”
Answer:
Explanation:
Representativeness ensures that the training data reflects the full spectrum of conditions the AI model will encounter in production. According to the AAIA™ Study Guide, models trained on non-representative data are prone to bias, poor generalization, and underperformance in real-world applications.
“Ensuring that training data accurately represents the operational environment is critical for model reliability, fairness, and scalability. Without it, the model may perform well in testing but fail in actual usage.”
Timeliness (A) and understandability (D) support performance and usability, but they are secondary to ensuring data coverage. Predictability (B) may not be desirable in dynamic modeling.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Fundamentals and Technologies,” Subsection: “Training Data Characteristics and Model Validity”
Answer:
Explanation:
Computer vision uses machine learning techniques to identify and classify visual data such as images or videos. In inventory audits, it can be used to recognize product types, scan barcodes, or evaluate storage conditions without human assistance.
“Computer vision is particularly effective in automated environments like manufacturing, where visual data from cameras or sensors can be processed to verify product identification and placement.”
NLP (A) and speech modeling (B) are not suitable for image-based tasks. RPA (C) automates tasks but cannot visually interpret products. Therefore, D is the correct tool.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI in Audit Processes,” Subsection: “AI Tools in Operational and Inventory Audits”
Answer:
Explanation:
Testing the AI model with a curated and representative sample data set allows auditors to directly evaluate the fairness and bias of model decisions. This approach is aligned with best practices outlined in the AAIA™ Study Guide, as it enables quantifiable analysis of model behavior across different demographics or input scenarios.
“To assess fairness, auditors should use controlled data sets to evaluate whether model outputs disproportionately impact specific groups. This empirical testing provides stronger evidence than qualitative methods.”
While metadata (A) and developer interviews (C) can supplement findings, only B provides objective, reproducible evidence. Option D may reflect real-world interactions but lacks the control and consistency required in an audit.
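As a sketch of this empirical approach, the comparison below applies the "four-fifths rule" heuristic to hypothetical model decisions on a curated sample. The groups, decision vectors, and 0.8 threshold are illustrative assumptions, not metrics mandated by the study guide.

```python
# Curated-sample fairness test: compare favorable-decision rates
# across groups and flag large disparities.

def selection_rate(decisions):
    """Fraction of positive (e.g., 'hire') decisions."""
    return sum(decisions) / len(decisions)

# Model decisions (1 = favorable) on a balanced, curated test set:
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # rate 0.3

ratio = selection_rate(group_b) / selection_rate(group_a)
print(round(ratio, 3))  # 0.375
print(ratio < 0.8)      # True -> potential adverse impact, investigate
```

Because the test set is fixed, this result is reproducible on every run, giving the audit the controlled, quantifiable evidence that interviews or live-query sampling cannot.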
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “Ethical and Legal Considerations in AI,” Subsection: “Fairness and Bias Testing in AI Systems”
Answer:
Explanation:
According to the AAIA™ Study Guide, a robust data governance framework ensures that AI systems are compliant with data protection laws, ethical standards, and internal policies. It provides controls over data quality, access, retention, and processing, all of which are essential to avoid breaches and maintain trust.
“A strong data governance structure is foundational for regulatory compliance and ethical AI practices. It ensures that data privacy, integrity, and usage rights are maintained across the AI lifecycle.”
While option A is an outcome of good data governance, and automation (B) may improve efficiency, the most fundamental benefit is risk reduction and compliance (C). Option D reflects a misunderstanding of governance, which requires human oversight.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Governance and Risk Management,” Subsection: “Data Governance Frameworks and Compliance”
Answer:
Explanation:
The AAIA™ Study Guide advises that if an AI model presents a risk that exceeds the organization's predefined risk tolerance, especially in cases of ethical harm or bias, deployment should be delayed until proper safeguards are in place. This approach prevents legal exposure and preserves stakeholder trust.
“When AI risks exceed acceptable thresholds, organizations must suspend implementation until corrective action reduces the risk to within tolerance levels. Proceeding without mitigation violates sound governance principles.”
While improving data (B) may help, it does not address the immediate governance concern. Risk tolerance (D) should not be adjusted to fit flawed systems. Thus, A is the correct course.
Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Governance and Risk Management,” Subsection: “Risk Evaluation and Implementation Decision-Making”