PMI Certified Professional in Managing AI Online Practice
Last updated: March 9, 2026
These online practice questions let you gauge how well you know the PMI PMI-CPMAI exam material before deciding whether to register for the exam.
If you want to pass the exam with a 100% success rate and save 35% of your preparation time, choose the PMI-CPMAI dumps (latest real exam questions), which currently include 102 up-to-date exam questions and answers.
Answer:
Explanation:
The PMI-CPMAI framework places strong emphasis on traceability, accountability, and documentation across the entire AI lifecycle, covering both cognitive components (ML models, data pipelines) and non-cognitive components (traditional automation, rule engines, integration services). It explains that AI projects typically involve cross-functional roles (data scientists, ML engineers, domain experts, security, compliance, and operations) and that “clear accountability requires that decisions, changes, and artifacts be documented in a way that is shared, searchable, and version-controlled across the team.”
To achieve this, PMI-CPMAI recommends centralized documentation repositories (for example, a single documentation platform or system-of-record) where all contributors can log design decisions, assumptions, model versions, data lineage, approvals, and test results. Centralization reduces fragmentation, ensures a “single source of truth,” and supports audits, governance reviews, and handovers. Periodic reviews by the project manager improve quality but do not, by themselves, create systematic accountability. Splitting protocols for cognitive vs. non-cognitive parts can introduce silos and inconsistencies, and a separate documentation team may distance those doing the work from owning the records.
By contrast, using a centralized documentation system accessible to all team members aligns directly with PMI-CPMAI’s call for integrated, lifecycle-wide documentation: every role remains responsible for its own artifacts, but all content lives in a shared, governed environment, enabling accurate, up-to-date accountability documentation.
Answer:
Explanation:
PMI-CPMAI guidance on evaluating operational AI systems, especially in risk-sensitive domains like fraud detection, stresses that project managers must link model performance to business KPIs using multiple complementary evaluation methods, not a single metric. The material explains that fraud models have asymmetric costs (false positives vs. false negatives), evolving fraud patterns, and complex business impacts, so “no single measure is sufficient to characterize business value or risk.” Instead, teams are encouraged to use a diverse set of validation techniques, such as holdout and cross-validation, backtesting on historical periods, confusion matrices, cost/benefit-weighted metrics, and A/B or champion/challenger tests in production-like environments.
PMI-CPMAI also notes that evaluation should combine technical metrics (precision, recall, ROC/AUC, F1, lift) with business-oriented indicators (fraud losses avoided, investigation workload, customer friction, and regulatory or compliance thresholds). Using multiple techniques allows the project manager to check consistency across views and avoid being misled by a single “good-looking” number that hides harmful side effects. Relying on quarterly financial reports or external experts alone does not provide the granular, model-specific insight required, and a single comprehensive metric contradicts PMI’s emphasis on multidimensional evaluation. Therefore, to ensure an accurate and reliable assessment of the AI fraud system against business KPIs, the most effective method is utilizing a diverse set of validation techniques.
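The asymmetric-cost point can be made concrete with a small sketch. The labels, predictions, and cost figures below are invented for illustration: two models with identical accuracy can differ sharply once a cost-weighted view is applied.

```python
# Sketch: comparing a raw accuracy view with a cost-weighted view of the same
# fraud predictions. The labels, predictions, and cost figures are illustrative.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels where 1 = fraud."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def business_cost(y_true, y_pred, fp_cost=50.0, fn_cost=5000.0):
    """Asymmetric cost: a missed fraud (fn) is far more expensive than a
    false alarm (fp). The cost values here are assumed placeholders."""
    _, fp, fn, _ = confusion_counts(y_true, y_pred)
    return fp * fp_cost + fn * fn_cost

# Two models with identical accuracy but very different business impact.
y_true  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
model_a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # misses every fraud case
model_b = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # catches fraud, two false alarms

acc_a = sum(t == p for t, p in zip(y_true, model_a)) / len(y_true)
acc_b = sum(t == p for t, p in zip(y_true, model_b)) / len(y_true)
print(acc_a, acc_b)                          # both 0.8
print(business_cost(y_true, model_a))        # 10000.0 (two missed frauds)
print(business_cost(y_true, model_b))        # 100.0 (two false alarms)
```

A single “good-looking” accuracy number hides that model A catches no fraud at all, which is exactly why multiple complementary views are required.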
Answer:
Explanation:
In the PMI-CPMAI body of knowledge, healthcare AI initiatives are repeatedly framed as data-intensive efforts that must integrate heterogeneous sources such as EHRs, patient-reported outcomes, and unstructured clinical narratives. The guidance stresses that “unstructured sources, including physician notes and narrative reports, often contain critical clinical context that will not appear in structured fields,” and that project teams must use techniques that can reliably extract this information into analysis-ready form to achieve completeness and reliability of the dataset. This is where natural language processing (NLP) is highlighted as a key enabler: by systematically parsing and extracting diagnoses, treatments, comorbidities, timelines, and outcomes from free-text clinical notes, NLP makes these rich but messy data usable alongside structured EHR fields and survey data.
PMI-CPMAI also emphasizes that simply adding more data or distributing training (such as data augmentation or federated learning) does not guarantee that the underlying data are comprehensive; what matters is that all relevant signals are captured and normalized across modalities. NLP directly supports this by converting unstructured text into standardized features, reducing omissions and manual abstraction errors. Real-time EHR integration improves freshness, but not necessarily coverage across all sources. Therefore, to ensure the data is comprehensive and reliable for a readmission prediction model, employing NLP to extract relevant data from clinical notes is the most effective technique among the options.
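A minimal sketch of the idea, assuming an invented note and term list. A real system would use a clinical NLP pipeline rather than keyword matching, but the goal is the same: turn free-text narratives into structured features that sit alongside EHR fields.

```python
import re

# Minimal sketch: converting a free-text clinical note into structured flags.
# The note text and condition term lists below are invented for illustration.

CONDITION_TERMS = {
    "diabetes": ["diabetes", "diabetic"],
    "hypertension": ["hypertension", "high blood pressure"],
    "chf": ["congestive heart failure", "chf"],
}

def extract_conditions(note: str) -> dict:
    """Return a {condition: bool} feature row from one free-text note."""
    text = note.lower()
    return {
        cond: any(re.search(r"\b" + re.escape(term) + r"\b", text)
                  for term in terms)
        for cond, terms in CONDITION_TERMS.items()
    }

note = ("Patient is a 67-year-old diabetic male with a history of "
        "high blood pressure, admitted for shortness of breath.")
print(extract_conditions(note))
# {'diabetes': True, 'hypertension': True, 'chf': False}
```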
Answer:
Explanation:
In PMI-CPMAI-aligned practice, a go/no-go assessment is a formal checkpoint where technology, data, governance, risk, and stakeholder factors are evaluated against predefined criteria. If this assessment uncovers that multiple technology and data factors are insufficient, the appropriate response is not to proceed, but to pause and address those deficiencies. The project manager’s role is to coordinate further analysis of data readiness (availability, quality, completeness, relevance) and verify that stakeholder expectations and commitments are still aligned with the AI initiative’s constraints and risks.
Option A (verify data quality and stakeholder alignment) captures this corrective step. It reflects the PMI principle that AI projects must be based on trustworthy data and shared understanding; otherwise, model outcomes may be unreliable, non-compliant, or misaligned with business value. Options B, C, and D effectively ignore or downplay the red flags discovered in the assessment, which violates disciplined, risk-aware AI governance. Proceeding despite known gaps, focusing only on technology while neglecting data, or launching without further assessment directly contradicts structured go/no-go decision logic and could expose the organization to operational, ethical, or regulatory failure.
Therefore, the appropriate action after an unfavorable go/no-go outcome is to re-verify and remediate data quality issues and ensure stakeholder alignment (option A).
Answer:
Explanation:
For an AI-based predictive maintenance system, PMI-style AI lifecycle guidance emphasizes that the first critical step is defining a comprehensive data collection strategy aligned with the business objective and risk profile. Predictive maintenance models require a blend of historical failure records, maintenance logs, operational sensor readings (e.g., temperature, vibration, pressure), usage patterns, and contextual data such as environment and flight profile. The project manager is expected to ensure clarity on what data is needed, from which sources, at what frequency, and under what quality standards, before investing in pipelines, cleaning routines, or pilots.
Option A (setting up real-time streaming) and B (data cleaning and preprocessing) are important implementation tasks, but they come after the fundamental question of “which data and why?” has been answered.
Option D (pilot with a small dataset) is a useful validation step, but it still depends on having the right data identified and collected in the first place. PMI-oriented AI governance stresses making data requirements explicit and traceable to model objectives, performance metrics, and regulatory constraints.
Thus, the project manager should develop a comprehensive data collection strategy (option C) to define and structure all required data for training the predictive maintenance model.
Answer:
Explanation:
In PMI-aligned AI data management practices, handling missing data is approached from a risk, quality, and fitness-for-use perspective. Before model development, the project manager must ensure that the dataset is not only complete enough, but also representative and unbiased for the intended AI use case. When the portion of missing data is minimal and not systematically biased, a common, acceptable mitigation is to remove those records so that the remaining dataset maintains integrity and consistency while avoiding the introduction of artificial or misleading values.
Options B and C (duplicating data or blindly filling zeros) can create serious distortions in the underlying data distribution, leading to biased model behavior, degraded performance, and weaker generalization, which contradicts responsible AI practices highlighted in PMI-style guidance. Simply ignoring missing data (option A) without a structured strategy or analysis is also discouraged, as it hides potential data quality issues and can propagate errors downstream.
Therefore, in line with good AI data preparation practice, when missingness is genuinely limited and not concentrated in critical attributes, removing records with missing values if minimal (option D) is the most effective and responsible approach among the given choices.
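A minimal sketch of this decision rule, with an assumed 5% tolerance and invented records: drop incomplete rows only when missingness is below the threshold, otherwise fall back to a deliberate imputation strategy.

```python
# Sketch: drop records with missing values only when missingness is minimal.
# The threshold and records are invented; real projects set the tolerance
# from data volume, criticality of fields, and bias analysis.

MAX_MISSING_RATE = 0.05  # assumed project-specific tolerance

records = [{"age": 30 + i, "income": 40000 + 500 * i} for i in range(19)]
records.append({"age": 50, "income": None})   # 1 incomplete record out of 20

def missing_rate(rows):
    """Fraction of records with at least one missing field."""
    incomplete = sum(any(v is None for v in r.values()) for r in rows)
    return incomplete / len(rows)

def drop_if_minimal(rows, threshold=MAX_MISSING_RATE):
    rate = missing_rate(rows)
    if rate <= threshold:
        return [r for r in rows if all(v is not None for v in r.values())]
    # Above threshold, deletion would bias the dataset; a considered
    # imputation strategy is needed instead (not shown here).
    raise ValueError(f"missing rate {rate:.0%} exceeds threshold; do not drop")

clean = drop_if_minimal(records)
print(len(records), len(clean))   # 20 19
```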
Answer:
Explanation:
CPMAI’s Phase I (Business Understanding) focuses on clearly defining the business problem, aligning AI efforts with organizational goals, and establishing measurable success criteria, including ROI expectations. PMI’s own overview of CPMAI notes that in this phase, teams should “set success criteria” and define both KPIs and ROI expectations so that everyone understands what success and failure look like before moving on. Other CPMAI-oriented resources describe Phase I artifacts such as a problem statement, AI pattern fit, stakeholder analysis, and a preliminary ROI sheet that quantifies expected benefits and costs. In the scenario, the hospital has already identified where the cognitive solution will be applied, quantified business objectives, and defined KPIs.
What is still missing from the core Phase I deliverables is a clear view of the project’s expected ROI, linking reduced paper records and process improvements to financial and operational value.
Beginning prototype development (B) belongs to later modeling phases, exploring external data sources (D) is part of Data Understanding, and interdepartmental strategies (C) are broader organizational actions rather than a specific Phase I gating item. To progress to the next CPMAI phase in a way that matches the methodology, the team must determine the project ROI, making option A the correct answer.
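A preliminary ROI sheet of the kind described above boils down to simple arithmetic. The sketch below uses invented placeholder figures, not numbers from the exam material:

```python
# Sketch of a preliminary ROI estimate for the Phase I gate.
# All figures (costs, savings, horizon) are assumed placeholders.

build_cost      = 250_000.0   # one-time development and integration
annual_run_cost =  40_000.0   # hosting, monitoring, maintenance
annual_benefit  = 180_000.0   # e.g. reduced paper handling and rework
horizon_years   = 3

total_cost    = build_cost + annual_run_cost * horizon_years
total_benefit = annual_benefit * horizon_years
roi           = (total_benefit - total_cost) / total_cost

print(f"ROI over {horizon_years} years: {roi:.1%}")
```

Even a rough sheet like this forces the team to state assumptions explicitly, which is the Phase I deliverable the scenario is missing.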
Answer:
Explanation:
CPMAI’s Data Understanding and Data Preparation phases stress that AI success in domains like healthcare depends on robust data pipelines that ensure consistency, quality, and accessibility before modeling begins. Guidance describes these phases as profiling and assessing data, then performing cleaning, transformation, and structuring so that data are reliable and usable by downstream models.
A data quality assessment combined with ETL (extraction, transformation, loading) processes directly supports these objectives. ETL pipelines standardize formats across disparate systems, enforce validation rules, manage missing values, harmonize coding schemes (for example, diagnosis codes), and centralize data into accessible stores. This is exactly the kind of foundational work CPMAI describes as a prerequisite to effective model development, particularly in regulated sectors such as healthcare where inconsistent or inaccessible data can have clinical and regulatory consequences.
By contrast, using NLP to standardize records (B) is a specialized technique that may help later but does not replace a systematic quality and ETL process. Integrating EHR with ML algorithms (C) and designing hybrid cloud storage (D) are more about later technical integration and infrastructure than about defining and ensuring initial data consistency and accessibility. Thus, in line with CPMAI’s data-centric guidance, performing a data quality assessment with ETL processes is the correct method, making option A the best answer.
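The assess-then-ETL pattern can be sketched as follows. The records, the legacy `icd:` prefix, and the in-memory “warehouse” are invented for illustration; real pipelines target an actual data store and a full set of validation rules.

```python
# Sketch of the assess-then-ETL pattern: profile raw records for quality
# issues, then extract, transform (harmonize codes, cast types), and load.

RAW = [
    {"patient_id": "p1", "dx": "E11",     "age": "64"},
    {"patient_id": "p2", "dx": "icd:E11", "age": "58"},
    {"patient_id": "p3", "dx": None,      "age": "71"},
]

def profile(rows):
    """Data quality assessment: count missing values per field."""
    fields = rows[0].keys()
    return {f: sum(1 for r in rows if r[f] in (None, "")) for f in fields}

def transform(row):
    """Harmonize the diagnosis coding scheme and cast types."""
    dx = row["dx"]
    if dx is not None and dx.startswith("icd:"):
        dx = dx[len("icd:"):]              # strip an assumed legacy prefix
    return {"patient_id": row["patient_id"], "dx": dx, "age": int(row["age"])}

warehouse = []                              # stand-in for the target store
issues = profile(RAW)                       # assess before loading
loaded = [transform(r) for r in RAW if r["dx"] is not None]
warehouse.extend(loaded)

print(issues)        # {'patient_id': 0, 'dx': 1, 'age': 0}
print(warehouse[1])  # {'patient_id': 'p2', 'dx': 'E11', 'age': 58}
```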
Answer:
Explanation:
In CPMAI’s Data Understanding phase, the methodology emphasizes identifying data sources, ownership, quality, and the people who truly understand those data assets. Data subject matter experts (SMEs) are not defined purely by generic analytics skills or by having worked on AI before; they are defined by deep familiarity with the specific datasets and domain context that drive the AI solution.
For predictive policing, the key datasets are historical crime data, socioeconomic data, and real-time incident reports. CPMAI guidance stresses that teams must understand how these datasets are generated, what biases they may contain, their limitations, and how they relate to the real-world processes they represent. Therefore, the best way to identify appropriate data SMEs is to evaluate who on the team (or in the wider organization) already has strong familiarity with these concrete data sources, their structures, and usage history.
Options focusing on prior AI tools, workshops on a single data stream, or generic analytics certifications do not guarantee deep, source-specific knowledge. Aligning with CPMAI’s data-centric approach, evaluating the team’s familiarity with historical crime and socioeconomic data is the most appropriate method, making option C correct.
Answer:
Explanation:
Within CPMAI, model evaluation is never framed as a single-number decision. The methodology stresses that AI performance must be assessed using multiple technical and business metrics, not just error rate. In the Model Evaluation phase, guidance explains that model success “goes beyond raw accuracy” and must be aligned with ROI and cost-benefit criteria defined earlier in the project. This explicitly means that a team focusing only on error rate can easily miss critical aspects such as precision/recall trade-offs, class imbalance, latency, robustness, explainability, fairness, and business impact.
CPMAI materials also highlight that evaluation should answer whether the model is fit for purpose in the real context, which requires comparing different models across a balanced scorecard of metrics, including technical quality and business KPIs. Selecting a model based solely on error rate risks deploying a solution that looks good statistically but performs poorly in production, causes unintended bias, or fails to meet stakeholder expectations. Therefore, according to CPMAI-aligned evaluation practices, the outcome of using only error rate as the selection criterion is a potential to overlook other critical performance metrics, making option A the correct answer.
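A short sketch of why error rate alone misleads on imbalanced data (the labels and predictions are illustrative): a model that never predicts the minority class can still post a low error rate.

```python
# Sketch: scoring a candidate model on several metrics instead of error rate
# alone. On imbalanced data, a low error rate can coexist with zero recall.

def scorecard(y_true, y_pred):
    """Return a small balanced scorecard for binary predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    errors = sum(t != p for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "error_rate": errors / len(y_true),
        "precision": precision,
        "recall": recall,
    }

# 1 positive case in 20; a model that always predicts 0 looks "accurate".
y_true = [1] + [0] * 19
always_negative = [0] * 20

print(scorecard(y_true, always_negative))
# {'error_rate': 0.05, 'precision': 0.0, 'recall': 0.0}
```

The 5% error rate looks excellent in isolation; only the other columns of the scorecard reveal that the model never detects the class of interest.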
Answer:
Explanation:
According to PMI-CPMAI’s view of AI lifecycle and value realization, data and knowledge currency are essential to maintaining accuracy, usefulness, and user trust in AI-driven customer support systems. For a telecommunications company, customer queries, products, plans, and policies change frequently. If the AI system relies on outdated or incomplete information, its responses will quickly become inaccurate or unhelpful, even if the underlying model is technically sound.
PMI-CPMAI emphasizes continuous feedback loops and iterative improvement: real-world interactions should be monitored, and insights from those interactions must feed back into updating training data, rules, and knowledge artifacts. Regularly updating the AI system’s knowledge base with the latest information and feedback from customer interactions directly supports these principles. It ensures that the AI reflects current offerings, known issues, resolved cases, and emerging customer needs. Customer satisfaction surveys and staff training are supportive measures but are too infrequent and indirect to guarantee response quality. A parallel static rule-based system does not address the need for current knowledge and can create inconsistency. Thus, the most effective method to ensure accurate and helpful responses is ongoing updates of the AI knowledge base informed by real customer feedback and new information.
Answer:
Explanation:
PMI-CPMAI’s guidance on AI operationalization and MLOps highlights the importance of consistency and reliability across deployment environments, especially in distributed or multi-site organizations. In this aerospace predictive maintenance scenario, each manufacturing site has different computational capacity and network characteristics, which can lead to inconsistent model performance and latency if models are hosted and executed locally. To mitigate this, PMI-aligned practices emphasize standardizing the runtime environment and centralizing critical AI services wherever feasible.
By utilizing cloud-based AI services uniformly, the organization can ensure that all sites call the same models, same versioning, same configuration, and same infrastructure stack, regardless of local hardware constraints. This reduces variability in inference behavior, simplifies monitoring, and supports unified logging, performance tracking, and governance enforcement across sites. A centralized model repository alone does not standardize execution; it only manages artifacts. Decentralized architectures and extensive site-specific tuning tend to increase divergence and complexity, making performance less consistent. Therefore, the most effective method to help ensure consistent AI performance across sites with different local capabilities is to utilize cloud-based AI services uniformly as the operational backbone.
Answer:
Explanation:
Within PMI-CPMAI’s treatment of AI business cases, the core expectation is that the project manager demonstrates clear, quantifiable value aligned with organizational goals. For a capital markets firm whose objectives are improved trading accuracy and profitability, the most suitable method is to develop a financial impact assessment that translates AI benefits into measurable financial terms. This assessment typically compares the current trading performance (baseline) with projected AI-enhanced performance, estimating impacts on revenues, margins, risk-adjusted returns, and operational costs.
PMI’s AI-oriented business case guidance emphasizes that decision makers need a structured view of costs, benefits, risks, and assumptions, expressed in financial metrics such as net benefit, payback period, ROI, or expected value under uncertainty. Market trend analyses and vendor consultations can inform context and options but do not directly quantify how the AI solution improves trading results. Scenario analysis can support stress testing and complement the financial view, yet the central artifact that “meets the firm’s goals and objectives” for funding decisions is a financial impact assessment tied to accuracy and profitability. Thus, the method that best satisfies the firm’s needs is developing a financial impact assessment.
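Such an assessment reduces to comparing baseline and projected figures. The sketch below uses invented numbers and a simple net-benefit and payback calculation; a real assessment would also stress the uplift assumption across scenarios.

```python
# Sketch of a financial impact assessment: compare baseline trading results
# with projected AI-enhanced results and derive net benefit and payback.
# Every figure below is an assumed placeholder, not data from the material.

baseline_annual_pnl   = 4_000_000.0
projected_uplift_rate = 0.10            # assumed accuracy-driven improvement
ai_build_cost         = 900_000.0
ai_annual_run_cost    = 150_000.0

annual_gain   = baseline_annual_pnl * projected_uplift_rate
net_annual    = annual_gain - ai_annual_run_cost
payback_years = ai_build_cost / net_annual

print(f"net annual benefit: {net_annual:,.0f}")
print(f"payback period: {payback_years:.1f} years")
```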
Answer:
Explanation:
In the PMI-CPMAI perspective on responsible AI and data governance, regulatory compliance starts with knowing exactly what data you have and how sensitive it is. Before you can design controls, encryption schemes, or risk plans, you must first perform a data audit and classification to identify personal, sensitive, and regulated data elements, as well as their sources, flows, and storage locations. This aligns with the guidance that early in the AI lifecycle, project teams should create a clear data inventory and mapping to understand which datasets fall under privacy regulations (such as health, financial, or personally identifiable information).
By conducting a thorough data audit to identify sensitive information, the project team can determine which regulations apply, what consent or legal basis is required, and where to apply specific safeguards (access controls, anonymization, retention limits, etc.). Encryption and broader risk management plans are important, but they are secondary steps that rely on the foundational insight gained from the audit. Verbal commitments from stakeholders have no formal regulatory standing. Therefore, in the initial stages of data collection and aggregation, the task that most directly supports regulatory compliance is a thorough data audit to identify sensitive information.
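A first-pass audit can be sketched as a pattern scan over sample values. The regex patterns and sample rows below are illustrative only; a real audit also covers data lineage, storage locations, and the legal basis for processing.

```python
import re

# Sketch of a first-pass data audit: scan sample values for patterns that
# suggest regulated data. Patterns and sample rows are invented placeholders.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_columns(rows):
    """Return {column: set of detected PII kinds} across sample rows."""
    findings = {}
    for row in rows:
        for col, value in row.items():
            for kind, pattern in PII_PATTERNS.items():
                if pattern.search(str(value)):
                    findings.setdefault(col, set()).add(kind)
    return findings

sample = [
    {"name": "A. Smith", "contact": "a.smith@example.com", "note": "ok"},
    {"name": "B. Jones", "contact": "555-01-2345", "note": "follow up"},
]
print(classify_columns(sample))
# the 'contact' column is flagged for both an email and an SSN-like value
```

Output like this tells the team which columns need access controls, anonymization, or consent review before any model training begins.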
Answer:
Explanation:
In PMI-CPMAI, selecting the appropriate machine learning approach starts with clarifying the type of question being asked of the data. When upper management wants to “see if there are any patterns and insights that can be discovered from customer data” without predefined labels or outcomes, this maps directly to unsupervised learning.
Unsupervised learning techniques, such as clustering, dimensionality reduction, and association rule mining, are used to uncover hidden structure, segments, or relationships in data where no target variable is specified. PMI-CPMAI training descriptions highlight using such approaches in discovery phases to identify segments, behavioral groupings, or natural patterns that can later inform strategy, product design, or subsequent supervised models.
Reinforcement learning (option C) focuses on agents learning via rewards and penalties through interaction with an environment, which does not fit this “exploratory pattern discovery” objective. Saying “all would work equally well” (option A) contradicts PMI-style guidance, which requires fit-for-purpose selection of AI techniques based on problem framing and data characteristics. Therefore, for discovering patterns and structure in customer data without pre-labeled outcomes, Unsupervised Learning (option B) is the correct choice in line with PMI-CPMAI principles.
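The discovery idea can be illustrated with a minimal pure-Python k-means on invented customer points (for example, spend vs. visits): no labels are supplied, yet the algorithm recovers the natural segments.

```python
# Minimal k-means sketch (pure Python) to illustrate unsupervised pattern
# discovery. The customer points, initial centroids, and k=2 are illustrative.

def kmeans(points, centroids, iters=10):
    """Alternate assignment and update steps for a fixed number of iterations."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in centroids]
        for x, y in points:
            d = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centroids]
            clusters[d.index(min(d))].append((x, y))
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

customers = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),     # low spend / low visits
             (8.0, 7.5), (7.8, 8.2), (8.3, 7.9)]     # high spend / high visits
centroids, clusters = kmeans(customers, centroids=[(0.0, 0.0), (10.0, 10.0)])
print([len(c) for c in clusters])   # two natural customer segments emerge
```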