ISACA Advanced in AI Security Management Exam Online Practice
Last updated: December 9, 2025
You can use these online practice questions to assess how well you know the ISACA AAISM exam material before deciding whether to register for the exam.
If you want a 100% pass rate and a 35% saving in preparation time, choose the AAISM dumps (latest real exam questions), which include the current set of 90 exam questions and answers.
Answer:
Explanation:
The most material contractual control for reducing security and privacy risk in outsourced AI services is a data-use restriction that prohibits the provider from using customer data for model training (and from derivative model improvements) unless explicitly authorized. This prevents unintended secondary processing, model inversion exposure of proprietary data, unauthorized profiling, and downstream data proliferation across multi-tenant systems. AAISM positions third-party risk controls to prioritize data minimization, purpose limitation, confidentiality, and downstream controls; among common MSA provisions, data-use limitations directly constrain the provider’s technical and organizational handling of sensitive inputs, making it the highest-impact risk-reducing clause. Query throttling (B) and logging (C) are useful operational controls but are secondary to legal/processing authority. Unlimited retraining (D) increases attack surface and cost without addressing the core risk of misuse of customer data.
Reference: AI Security Management™ (AAISM) Body of Knowledge ― Third-Party & Supply-Chain Governance; Contractual Controls for AI Services; Data Minimization and Purpose Limitation. AAISM Study Guide ― Procurement & MSA/DPA Clauses for AI; Provider Model Training and Data-Use Restrictions; Privacy & Confidentiality Safeguards in Outsourced AI.
Answer:
Explanation:
AAISM requires that risk thresholds/tolerances be set by aligning threat likelihood and impact with the organization’s business context and risk appetite. Determining “acceptable” risk starts with assessing business impact of credible threats (e.g., prompt injection leading to data exfiltration, policy evasion, or harmful actions), then translating this into control intensity and thresholds. Hard input restrictions (A) and static output caps (C) are blunt measures that may degrade utility without ensuring alignment to risk appetite. Monitoring (B) is essential for detection, but it does not, by itself, define what level of risk is acceptable.
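As a rough illustration of translating risk appetite into acceptance criteria (a minimal sketch, not taken from the AAISM materials; the 1-5 scales, scenario scores, and appetite value are illustrative assumptions):
```python
# Minimal sketch: mapping likelihood x impact onto a risk-acceptance threshold.
# The 1-5 scales, scenario scores, and appetite value are illustrative
# assumptions, not AAISM-prescribed figures.

RISK_APPETITE = 9  # hypothetical: scores above this exceed tolerance

def risk_score(likelihood: int, impact: int) -> int:
    """Score a threat scenario on 1-5 likelihood and 1-5 impact scales."""
    return likelihood * impact

scenarios = {
    "prompt injection -> data exfiltration": (4, 5),
    "policy evasion via jailbreak": (3, 3),
}

for name, (likelihood, impact) in scenarios.items():
    score = risk_score(likelihood, impact)
    verdict = "treat/mitigate" if score > RISK_APPETITE else "accept"
    print(f"{name}: score={score} -> {verdict}")
```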
Reference: AI Security Management™ (AAISM) Body of Knowledge ― Risk Appetite and Tolerance for AI; Threat Modeling for LLMs; Business Impact Analysis and Risk Acceptance Criteria.
Answer:
Explanation:
Per AAISM, an AI policy is a governance instrument that defines objectives, principles, roles, responsibilities, accountability, and control requirements for AI systems across their lifecycle. It establishes the framework within which performance, compliance, ethics, risk appetite, security, privacy, and sustainability objectives are set and operationalized. Environmental considerations (A), accuracy optimization (B), and regulatory compliance (D) are important outcomes addressed under the policy, but the primary purpose is to provide the overarching framework for objectives and controls.
Reference: AI Security Management™ (AAISM) Body of Knowledge ― AI Governance Frameworks; Policies, Standards, and Procedures; Roles and Accountability in AI Programs.
Answer:
Explanation:
AAISM directs that when harmful or biased behavior is observed in a production AI system, the organization should enter a formal incident/variance handling workflow that begins with root cause analysis (RCA) to identify the source of deviation (data drift, concept drift, feature leakage, pipeline changes, control failures) and determine proportionate risk treatments. Immediate retraining (Option A) without RCA risks reinforcing the same bias; audits (Option C) are key activities within RCA rather than the action that frames the response; a kill switch (Option D) is reserved for conditions where risk exceeds the defined tolerances and immediate harm prevention is required.
Reference: AI Security Management™ (AAISM) Body of Knowledge ― Incident Response & Post-Incident Improvement; Model Risk Treatment & Drift Management; Bias Detection and Remediation Governance.
Answer:
Explanation:
Per AAISM’s ML lifecycle controls, hyperparameter tuning is performed on the validation set, reserving the test set strictly for final, unbiased performance estimation. The training set is used to fit parameters; the validation set guides model selection and hyperparameter optimization; the test set is untouched until the end to prevent leakage and optimistic bias. “Configuration” is not a dataset type in the lifecycle split.
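A minimal scikit-learn sketch of this split discipline, tuning on the validation set and touching the test set exactly once at the end (the dataset, split ratios, and hyperparameter grid are illustrative assumptions):
```python
# Minimal sketch: 60/20/20 train/validation/test split with validation-only
# hyperparameter tuning. Dataset and C-grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# 60/20/20 split; the test set stays untouched until the final evaluation.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Hyperparameter tuning is performed against the validation set only.
best_C, best_val = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    val_acc = model.score(X_val, y_val)
    if val_acc > best_val:
        best_C, best_val = C, val_acc

# Final, unbiased estimate: a single evaluation on the held-out test set.
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print(f"chosen C={best_C}, test accuracy={final.score(X_test, y_test):.3f}")
```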
Reference:
• AI Security Management™ (AAISM) Body of Knowledge: Model Development Controls―Data Splitting and Evaluation Integrity
• AAISM Study Guide: Overfitting Avoidance; Validation vs. Test Separation; Leakage Prevention
• AAISM Mapping to Standards: Evaluation Integrity―Hold-out Protocols and Tuning Practices
Answer:
Explanation:
AAISM prescribes Preparation as the foundational phase of AI incident response. The first priority is to form and empower a cross-functional incident response (IR) team with AI/ML expertise (security, data science, product, legal/compliance). Only once the accountable team exists can you define playbooks, communications, containment/eradication steps, recovery processes, and escalation paths. Without a designated team, procedures and channels lack ownership and effectiveness.
Reference:
• AI Security Management™ (AAISM) Body of Knowledge: Incident Management―Preparation; Roles & Responsibilities; Cross-functional Coordination
• AAISM Study Guide: AI IR Operating Model; Stakeholder Mapping; Authority & Escalation
• AAISM Mapping to Standards: Security Operations―Preparation Before Procedures (people and roles precede playbooks)
Answer:
Explanation:
AAISM directs organizations to embed security, safety, and compliance controls at design time (“secure-by-design” and “shift-left”), ensuring requirements for robustness, privacy, and governance are defined as non-functional constraints on architecture, data sourcing, model choices, and evaluation criteria before any model is trained. Deferring these requirements to training, testing, or deployment increases residual risk and rework, and weakens traceability of control coverage.
Reference:
• AI Security Management™ (AAISM) Body of Knowledge: Governance―Secure-by-Design; Policy-to-Control Traceability; Requirements Management
• AAISM Study Guide: AI Program Lifecycle―Planning & Design Controls; Design-time Threat Modeling and Control Selection
• AAISM Mapping to Standards: Design-phase Risk Identification and Requirements Engineering for AI
Answer:
Explanation:
Model drift occurs when the statistical properties of input data and/or the relationship between features and outcomes change over time, causing degraded model performance. The AAISM guidance classifies data-centric causes (distribution shift, concept drift, and contamination) as the primary drivers and highlights that malicious contamination of training or incremental learning data (data poisoning) is a direct, high-likelihood driver of observable drift in production because it changes the effective data-generating process the model learns from. In contrast:
• Perfect knowledge is an attacker capability descriptor, not a drift cause.
• Membership inference targets privacy of the training set and does not inherently shift data distributions.
• Model stealing targets IP/confidentiality; it does not change the victim model’s data distribution or decision boundary in situ.
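To make the monitoring implication concrete, here is a minimal drift-check sketch using a two-sample Kolmogorov-Smirnov test to flag distribution shift between a training baseline and a production stream; poisoned or shifted inputs surface as a significant distributional change (the synthetic data and significance threshold are illustrative assumptions):
```python
# Minimal sketch: flag distribution shift between the training baseline and
# production inputs. Synthetic data and p-value cutoff are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)    # training-time feature
production = rng.normal(loc=0.6, scale=1.0, size=5000)  # contaminated stream

stat, p_value = ks_2samp(baseline, production)
if p_value < 0.01:  # illustrative significance threshold
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2e}): trigger RCA")
else:
    print("no significant distribution shift")
```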
Reference:
• AI Security Management™ (AAISM) Body of Knowledge: Model Risk & Drift; Data Integrity Risks; Adversarial ML―Poisoning vs. Evasion
• AAISM Study Guide: Production Monitoring & Drift Management; Risk Scenarios―Data Poisoning Impacts and Controls
• AAISM Mapping to Standards: Lifecycle Risk Treatment―Robustness to Data Contamination; Continuous Monitoring and Feedback
Answer:
Explanation:
The most critical risk when deriving statistical insights from AI-generated data is systemic bias in data. According to the AI Security Management™ (AAISM) framework, systemic bias directly undermines the fairness, reliability, and validity of analytical results derived from AI systems. If the input data or learned model patterns are biased―reflecting skewed representation, sampling imbalance, or embedded prejudice―the statistical outputs will propagate and amplify these biases, leading to misinformed decisions and compliance violations.
Why Option A is Correct:
Systemic bias affects the integrity and trustworthiness of AI-generated statistical information.
It can introduce discriminatory outcomes, ethical breaches, and regulatory non-compliance―key concerns in AAISM’s AI Risk Management and Governance principles.
Mitigating systemic bias requires data quality assessments, fairness audits, bias detection tools, and model interpretability measures to ensure the derived insights are accurate and ethically sound.
Why Other Options Are Incorrect:
Option B: Incomplete outputs can affect accuracy, but they are typically handled through process monitoring or retraining; they are not the primary threat to statistical validity.
Option C: Lack of data normalization is a technical preprocessing issue, not a governance-level risk impacting statistical trustworthiness.
Option D: Hallucinations occur mainly in generative models (e.g., LLMs) and affect content generation, not statistical computation pipelines.
Exact Extract from Official AAISM Study Guide:
“Systemic bias in AI training and inference data represents the most material statistical risk. Bias propagates through derived metrics, predictive models, and decision outputs, compromising fairness, accuracy, and compliance. AI Security Management requires implementing bias detection, fairness testing, and governance mechanisms to identify and mitigate such systemic bias before using AI-generated analytics for organizational or regulatory reporting.”
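As a minimal illustration of the bias-detection tooling the extract calls for, the sketch below computes a demographic parity gap between two groups' positive-outcome rates (the synthetic data and the 0.1 tolerance are illustrative assumptions, not AAISM-mandated values):
```python
# Minimal fairness-audit sketch: demographic parity difference between two
# groups' positive-outcome rates. Data and tolerance are illustrative.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1000)  # protected attribute
pred = np.where(group == "A",              # deliberately biased predictions
                rng.random(1000) < 0.60,
                rng.random(1000) < 0.40)

rate_a = pred[group == "A"].mean()
rate_b = pred[group == "B"].mean()
parity_gap = abs(rate_a - rate_b)
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
if parity_gap > 0.1:  # illustrative fairness tolerance
    print("systemic bias suspected: escalate for fairness review")
```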
Reference: AI Security Management™ (AAISM) Body of Knowledge: AI Risk Identification and Evaluation, Bias and Fairness Management in AI Systems.
AI Security Management™ Study Guide: Systemic Bias Mitigation Techniques, Fairness Assurance in AI Analytics.
ISO/IEC 23894:2023 ― Clause 7.2: Bias identification and treatment within AI risk frameworks.
Answer:
Explanation:
When an AI system experiences an attack after being in production for an extended period, the most effective mitigation strategy is to update the deployed training data with new adversarial data. This process strengthens the model’s resilience by retraining it to recognize and resist attack vectors that were previously unknown or unaccounted for. According to the AI Security Management™ (AAISM) framework, risk mitigation for AI systems must address model robustness through adversarial retraining, data quality improvement, and model lifecycle hardening rather than relying solely on reactive measures.
Why Option B is Correct:
Incorporating adversarial examples into the training set enhances the system’s ability to correctly classify and withstand malicious inputs.
This approach directly mitigates the vulnerability exploited in the attack and supports a proactive, continuous risk management cycle.
Why Other Options Are Incorrect:
Option A: Monitoring helps detect suspicious activity but does not resolve the underlying vulnerability.
Option C: Concealing confidence scores may reduce model transparency but does not address the attack mechanism or its root cause.
Option D: Implementing access controls protects the model’s architecture but does not improve model robustness against input manipulation attacks.
Exact Extract from Official AAISM Study Guide:
“AI risk management requires continuous improvement following incidents. After an adversarial or data poisoning event, the preferred risk treatment involves retraining the model using adversarial data and updated datasets to enhance robustness. This ensures the AI model adapts to evolving threat landscapes rather than merely restricting access or obscuring outputs.”
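A minimal sketch of this treatment on a linear model, crafting FGSM-style perturbations and folding them back into the training set (the epsilon budget and dataset are illustrative assumptions; production pipelines typically use dedicated adversarial-robustness tooling):
```python
# Minimal adversarial-retraining sketch on a linear model. For logistic
# regression, the FGSM direction is sign(w) against the true class.
# Epsilon and dataset are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

eps = 0.5  # illustrative perturbation budget
w = model.coef_[0]
# Push class-1 samples down the decision function and class-0 samples up.
X_adv = X + eps * np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
print("accuracy on adversarial inputs:", model.score(X_adv, y))

# Risk treatment: retrain on clean + adversarial data to harden the model.
model_hardened = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_adv]), np.concatenate([y, y])
)
print("hardened accuracy on adversarial inputs:", model_hardened.score(X_adv, y))
```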
Reference: AI Security Management™ (AAISM) Body of Knowledge: AI Risk Treatment and Mitigation Strategies, Adversarial Robustness and Resilience Engineering.
AI Security Management™ Study Guide: Model Lifecycle Security, Continuous Risk Treatment through Adversarial Retraining.
ISO/IEC 23894:2023, Clause 8.3.2 ― Risk treatment through robustness improvement and adversarial data inclusion.
Answer:
Explanation:
AAISM risk guidance notes that the most stringent recovery objectives apply to industrial control systems, as downtime can directly disrupt critical infrastructure, manufacturing, or safety operations. Health support systems also require high availability, but industrial control often underpins safety-critical and real-time environments where delays can result in catastrophic outcomes. Credit risk models and navigation systems are important but less critical in terms of immediate physical and operational impact. Thus, industrial control systems require the tightest RTO.
Reference: AAISM Study Guide ― AI Risk Management (Business Continuity in AI)
ISACA AI Security Management ― RTO Priorities for AI Systems
Answer:
Explanation:
AAISM defines automation bias as the tendency of individuals to over-rely on AI-generated outputs even when contradictory real-world evidence is available. In this scenario, the driver ignores traffic signs and follows the AI’s instructions, showing blind reliance on automation. Selection bias relates to data sampling, reporting bias refers to misrepresentation of results, and confirmation bias involves interpreting information to fit pre-existing beliefs. The most accurate description is automation bias.
Reference: AAISM Exam Content Outline ― AI Risk Management (Bias Types in AI)
AI Security Management Study Guide ― Automation Bias in AI Use
Answer:
Explanation:
AAISM highlights that the core ethical risk in AI is the perpetuation of bias that results in unfair or discriminatory outcomes. Therefore, the most important validation step is ensuring that outputs of AI systems are free from adverse biases. A responsible development policy, stakeholder approvals, and privacy reviews all contribute to governance, but they do not directly ensure ethical outcomes. Validation of output fairness is the critical safeguard for ensuring AI does not violate ethical principles.
Reference: AAISM Study Guide ― AI Risk Management (Bias and Ethics Validation)
ISACA AI Security Management ― Ethical AI Practices
Answer:
Explanation:
According to AAISM lifecycle management guidance, the best justification for disabling an AI system immediately is the detection of excessive model drift. Drift results in outputs that are no longer reliable, accurate, or aligned with intended purpose, creating significant risks. Performance slowness and overly detailed outputs are operational inefficiencies but not critical shutdown triggers. Insufficient training should be addressed before deployment rather than after. The trigger for immediate deactivation in production is excessive drift compromising reliability.
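A minimal sketch of such a drift-based deactivation gate (the metric values and the 0.25 tolerance are illustrative assumptions):
```python
# Minimal sketch: fail closed when a monitored drift metric breaches its
# tolerance. Metric values and threshold are illustrative assumptions.

DRIFT_TOLERANCE = 0.25  # hypothetical maximum acceptable drift score

def gate_inference(drift_score: float, enabled: bool = True) -> bool:
    """Return False (system disabled) when drift exceeds tolerance."""
    if drift_score > DRIFT_TOLERANCE:
        print(f"drift {drift_score:.2f} > {DRIFT_TOLERANCE}: disabling AI system")
        return False
    return enabled

print(gate_inference(0.12))  # within tolerance -> stays in service
print(gate_inference(0.41))  # excessive drift -> immediate deactivation
```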
Reference: AAISM Exam Content Outline ― AI Governance and Program Management (Model Drift Management)
AI Security Management Study Guide ― Disabling AI Systems
Answer:
Explanation:
AAISM materials emphasize that in operational AI systems, key risk indicators (KRIs) must reflect risks to performance and reliability rather than technical design factors alone. In the case of threat detection, the most relevant KRI is the frequency of system overrides by human analysts, as this indicates a lack of trust, frequent false positives, or poor detection accuracy. Training epochs, model depth, and training time are technical metrics but do not directly measure operational risk. Analyst overrides represent a practical measure of system effectiveness and risk.
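A minimal sketch of this KRI as an analyst-override rate tracked over time (the counts and the 0.2 alert threshold are illustrative assumptions):
```python
# Minimal KRI sketch: analyst-override rate per reporting window.
# Counts and alert threshold are illustrative assumptions.

def override_rate(overrides: int, total_alerts: int) -> float:
    """Fraction of AI detections overridden by human analysts."""
    return overrides / total_alerts if total_alerts else 0.0

weekly = [(12, 200), (25, 210), (58, 190)]  # (overrides, alerts) per week
for week, (ovr, alerts) in enumerate(weekly, start=1):
    rate = override_rate(ovr, alerts)
    flag = " <- KRI breach, investigate model trust/accuracy" if rate > 0.2 else ""
    print(f"week {week}: override rate {rate:.2%}{flag}")
```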
Reference: AAISM Study Guide ― AI Risk Management (Operational KRIs for AI Systems)
ISACA AI Security Management ― Monitoring AI Effectiveness