
PMI PMI-CPMAI Exam

PMI Certified Professional in Managing AI Online Practice

Last updated: February 14, 2026

You can use these online practice questions to gauge how well you know the PMI PMI-CPMAI exam material before deciding whether to register for the exam.

If you want to pass the exam and cut your preparation time by 35%, choose the PMI-CPMAI dumps (latest real exam questions), which currently include 102 exam questions and answers.


Question No : 1


A project manager is overseeing the quality assurance and quality control of an AI/machine learning (ML) model. The model has been trained and initial tests have shown promising results. However, the project manager is concerned about the long-term performance and reliability of the model in real-world scenarios.
What should the project manager do?

Answer:
Explanation:
PMI-CPMAI stresses that AI/ML models are not “one-and-done” artifacts; they must be managed across an operational lifecycle, including continuous monitoring, feedback, and improvement. The exam outline for CPMAI/PMI-CPMAI explicitly includes tasks such as monitoring deployed AI systems, detecting performance drift, and adapting models to changing data and business conditions.
Initial promising test results only indicate that the model works under current test conditions. In real-world environments, data distributions, usage patterns, and operating contexts evolve. Without ongoing monitoring and feedback loops, the project manager cannot reliably detect degradation (e.g., accuracy drop, bias drift, latency issues) or emerging risks. PMI-aligned AI lifecycle practices emphasize setting up metrics, alerts, logging, human-in-the-loop review where appropriate, and structured mechanisms to feed production insights back into retraining or re-engineering efforts.
Options A, C, and D (hyperparameter tuning, larger cross-validation, data augmentation) are valuable development-phase techniques, but they do not address long-term, in-production reliability. Because PMI-CPMAI focuses on operationalization and value realization, establishing continuous monitoring and feedback loops (option B) is the correct action to protect long-term performance and trustworthiness.
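The monitoring-and-feedback idea above can be sketched in a few lines. The following is an illustrative toy, not anything from PMI-CPMAI materials: it compares a rolling window of live prediction outcomes against the accuracy recorded at validation time and flags drift when the gap exceeds a tolerance. The window size and tolerance are placeholder values a real team would set deliberately.

```python
from collections import deque

class DriftMonitor:
    """Minimal sketch of a production accuracy-drift check."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy          # accuracy at validation time
        self.tolerance = tolerance                 # allowed degradation
        self.outcomes = deque(maxlen=window)       # 1 = correct, 0 = incorrect

    def record(self, prediction, actual):
        # Each labeled production outcome feeds the rolling window.
        self.outcomes.append(1 if prediction == actual else 0)

    def drift_detected(self):
        if not self.outcomes:
            return False
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        # Alert when live accuracy falls too far below the baseline.
        return (self.baseline - live_accuracy) > self.tolerance
```

In practice an alert from `drift_detected()` would trigger investigation and possibly retraining; the sketch only shows the detection half of the loop.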

Question No : 2


An aerospace company is integrating AI into their manufacturing process to enhance safety and efficiency. The project team needs to evaluate potential security threats to prevent unauthorized access to sensitive data.
What is the highest risk?

Answer:
Explanation:
PMI-CPMAI treats data privacy, governance, and security as central pillars of responsible AI, highlighting that AI projects often deal with sensitive and regulated information. When evaluating threats that could lead to unauthorized access to sensitive aerospace manufacturing data, the framework encourages looking at the attack surface, the distribution of data, and control complexity.
A decentralized data storage system (option C) significantly increases the potential risk: data is distributed across multiple locations or nodes, making consistent access control, identity management, logging, and incident response more challenging. Misconfigurations or weak endpoints in such an environment can create numerous entry points for attackers, magnifying exposure of proprietary designs, safety-critical parameters, or personal data. PMI-CPMAI’s guidance on data governance stresses centralized policies, clear stewardship, and controlled data flows precisely to reduce this risk.
By contrast, proprietary software with no open-source review (A) may present transparency concerns but does not inherently imply broader data exposure. Lack of regular data updates (B) is more a model performance and drift issue than a direct security threat.
Option D describes a mitigation (securing APIs and enforcing governance), not a risk. Therefore, the highest security risk for unauthorized access in this scenario is operating a decentralized data storage system.

Question No : 3


A company's leadership team has requested insights into the AI model's ability to support decision-making processes without requiring them to understand complex technical details.
Which step should the project manager take?

Answer:
Explanation:
In PMI-CPMAI, a key responsibility of the AI project manager is to translate technical capabilities into business-usable decision support, especially for senior leaders who do not need (or want) deep technical model detail. The PMI-CPMAI exam content emphasizes aligning AI outputs with business processes and decision workflows across the full lifecycle, from defining the business need to operationalizing the solution in real environments. Rather than explaining the mathematics of neural networks, gradient descent, or ensemble methods (options A, B, and C), the guidance stresses demonstrating how the AI system's outputs appear in familiar tools (dashboards, reports, workflow systems) and how decision-makers can act on them. This includes clarifying inputs, key indicators, thresholds, confidence levels, exception handling, and what actions users should take based on different system recommendations.
PMI-CPMAI also links this to value realization: leaders need to see how the model's outputs are embedded in end-user systems to drive measurable outcomes, not how the algorithm is implemented. Demonstrating integration into end-user systems (option D) directly addresses that need, supports adoption, and satisfies the framework's focus on practical, lifecycle-oriented AI delivery.

Question No : 4


A transportation company is preparing data for an AI model to optimize fleet management. The project team is working with large amounts of structured and unstructured data.
If the project manager avoids addressing the variety of data during preparation, what will be the result?

Answer:
Explanation:
PMI-CPMAI explains that modern AI projects often work with high-volume, high-variety data, including both structured (tables, logs, telemetry) and unstructured formats (text, documents, images). A core principle in the data preparation and pipeline design stages is that "variety must be explicitly addressed through normalization, harmonization, and feature extraction so that models receive coherent, compatible inputs." If the project manager ignores the variety dimension and treats all data as if it were homogeneous, this typically leads to misaligned schemas, inconsistent encodings, missing modalities, and improperly handled unstructured content.
The guidance notes that such issues “manifest as degraded model performance, instability, and reduced generalizability, even when volume and velocity are adequately managed.” In a fleet management context, failing to harmonize telematics, maintenance records, driver logs, and external data (e.g., traffic or weather) means the model cannot fully capture relevant patterns, and some signals may be effectively unusable or misleading. Rather than improving accuracy or consistency, skipping this work undermines the quality of features, increases noise, and introduces hidden biases.
As a result, PMI-CPMAI indicates that not addressing data variety during preparation will most directly lead to reduced model performance, because the model is trained and evaluated on incomplete, inconsistent, or poorly integrated representations of the underlying operational reality.
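As an illustration of what "addressing variety" means in practice, the sketch below merges two hypothetical fleet data sources with different schemas and units into one coherent feature record. All field names and the mile-to-kilometer conversion are assumptions for the example, not from any real fleet system.

```python
def harmonize(telematics_row, maintenance_row):
    """Map heterogeneous source records onto one agreed feature schema.

    Illustrative only: assumes telematics reports distance in miles and
    maintenance logs carry free-text defect notes.
    """
    return {
        "vehicle_id": telematics_row["vid"],
        # Unit harmonization: miles -> kilometers.
        "distance_km": telematics_row["miles"] * 1.609344,
        "engine_hours": telematics_row["engine_hrs"],
        "last_service_days": maintenance_row["days_since_service"],
        # Unstructured notes reduced to a simple structured feature.
        "open_defects": len(maintenance_row.get("defect_notes", [])),
    }
```

The point of the sketch is that every downstream model sees one schema with consistent units, regardless of which source a value came from.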

Question No : 5


A healthcare organization plans to use an AI solution to predict patient readmissions. The data science team needs to identify data sources and ensure data quality.
Which method will meet the project team's objectives?

Answer:
Explanation:
In PMI-CPMAI’s treatment of data for AI, especially in sensitive domains like healthcare, the first responsibility of the project and data science teams is to understand and assess data quality and suitability before model development. The guidance states that AI teams should “systematically profile candidate data sources to evaluate completeness, consistency, validity, and coverage of key populations and variables relevant to the use case.” Data profiling tools are highlighted as a practical means to inspect distributions, missing values, outliers, and anomalies across structured clinical, administrative, and claims data.
For a patient readmission prediction use case, PMI-CPMAI stresses that teams must identify which sources (EHR, discharge summaries, lab results, prior admissions, demographics, social determinants, etc.) are available and then “quantify data quality metrics such as completeness and timeliness to determine whether the dataset is fit for training and deployment.” While techniques such as augmentation or real-time validation might be valuable later, they build upon an initial understanding obtained via profiling. Operationalizing a catalog supports governance and discovery but does not directly satisfy the immediate need to measure data quality.
Therefore, the method that best meets the objective of identifying data sources and ensuring data quality is to use data profiling tools to assess data completeness and other quality dimensions, providing an evidence-based foundation for subsequent preprocessing, feature engineering, and model training.
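The profiling step described above can be illustrated with a small sketch that computes per-field completeness and a simple range-validity check over candidate records. The field names and valid ranges are invented for the example; a real project would use dedicated profiling tooling over EHR and claims data.

```python
def profile(records, numeric_ranges):
    """Toy data profiler: completeness per field, plus range validity
    for fields with an expected (low, high) bound."""
    fields = {k for r in records for k in r}
    report = {}
    for f in sorted(fields):
        values = [r.get(f) for r in records]
        present = [v for v in values if v is not None]
        entry = {"completeness": len(present) / len(records)}
        if f in numeric_ranges:
            lo, hi = numeric_ranges[f]
            entry["in_range"] = (
                sum(lo <= v <= hi for v in present) / max(len(present), 1)
            )
        report[f] = entry
    return report
```

A report like this gives the evidence base the explanation calls for: which sources are complete enough, and which fields hold implausible values, before any model is trained.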

Question No : 6


During the transition to an AI solution, the project manager discovers that certain tasks may not require cognitive AI capabilities and can be handled through traditional automation methods. As a result, the project team starts segregating tasks based on their cognitive requirements.
What should the team consider?

Answer:
Explanation:
PMI-CPMAI clearly distinguishes between cognitive AI capabilities and traditional automation or noncognitive solutions. The guidance stresses that not every task in a workflow benefits from AI and that “project leaders should deliberately match solution complexity to problem complexity, reserving cognitive AI for tasks that truly require perception, learning, or sophisticated decision support.” For deterministic, rule-based, repetitive tasks, the recommended approach is to use conventional automation technologies (scripts, RPA, rule engines, workflow systems) rather than machine learning models.
When a project team discovers that certain tasks do not require cognition (e.g., simple routing, format conversion, deterministic validations), PMI-CPMAI recommends “segregating cognitive from noncognitive tasks and applying the simplest effective technology to each.” This reduces cost, operational risk, and technical debt, while focusing AI engineering effort where it provides differentiated value. Applying AI to noncognitive tasks can introduce unnecessary complexity, additional monitoring and governance overhead, and avoidable model risk. Proceeding only with intelligent functionalities or overanalyzing traditional tasks without acting on the insight misses this key optimization.
Therefore, once tasks have been segregated by cognitive requirements, the team should utilize traditional automation solutions for noncognitive tasks and focus AI design, data, and model work only where cognitive capabilities are justified. This aligns with PMI-CPMAI’s principle of “fit-for-purpose” technology selection and responsible, efficient AI adoption.

Question No : 7


A project manager is overseeing the transition of a company's legacy system to a new AI-driven solution. The team has identified multiple cognitive patterns required for different aspects of the system. However, the project manager is concerned about overcomplicating the transition.
Which activity should be performed first?

Answer:
Explanation:
In the PMI-CPMAI guidance on transitioning from legacy systems to AI-enabled solutions, the project manager is encouraged to control complexity and risk through incremental, phased adoption rather than attempting to introduce multiple cognitive capabilities at once. The material emphasizes that when several cognitive patterns (e.g., classification, prediction, recommendation, NLP) have been identified, “the implementation roadmap should prioritize a limited set of use cases and patterns in early iterations, validating value and technical feasibility before expanding scope.” This staged approach allows the team to learn from each iteration, refine data pipelines and integration, and adjust governance and risk controls before adding more advanced or additional cognitive components.
PMI-CPMAI also highlights that overcomplication at the outset increases the chance of cost overruns, resistance to change, and technical failure, recommending that teams “sequence AI capabilities into manageable releases that deliver value quickly while minimizing disruption to existing operations.” Establishing a phased approach targeting one pattern at a time directly addresses the project manager’s concern: it avoids “big bang” AI deployment and enables structured change management, training, and stakeholder alignment with each step. Activities such as consolidating all patterns into a single iteration or training employees on everything at once contradict this incremental, value-focused evolution of AI capabilities. Therefore, the first activity should be to establish a phased approach focusing on one cognitive pattern at a time.

Question No : 8


A hospital system has been using a chatbot and has received complaints from end users. The end users believe they are speaking to a person but are frustrated when answers do not make sense.
To help ensure end users know that they are engaging with an AI chatbot, what should be considered to support transparency?

Answer:
Explanation:
Responsible, transparent AI, a key theme in PMI-CPMAI, requires that end users understand when they are interacting with an AI system rather than a human. In this scenario, end users mistakenly believe they are chatting with a person and become frustrated when responses are nonsensical. PMI-style responsible AI and ethics guidance emphasizes clear disclosure, user awareness, and expectation management as essential controls to protect trust and reduce harm.
The most direct way to support transparency here is a disclosure notice with each use (option C), for example a visible label or brief statement indicating “You are interacting with an AI-powered chatbot.” This can appear at session start, in the chat header, or near the input box and may be reinforced periodically.
Inclusion of diverse datasets (option A) and interpretable models (option D) are important for fairness and explainability but do not solve the misunderstanding about the chatbot’s identity. Operationalizing advanced algorithms (option B) might improve answer quality, but again, it does not address the core transparency issue. Therefore, to ensure users know they are engaging with an AI chatbot, the system should present a clear disclosure notice with each use.

Question No : 9


An AI project team has prepared the data and is ready to proceed with model development.
Which action should the project manager perform next?

Answer:
Explanation:
Once data preparation is complete and the team is ready for model development, PMI-aligned AI lifecycle guidance calls for clear definition and documentation of performance metrics and success criteria before training models. The project manager should ensure that everyone agrees on which metrics will be used (e.g., accuracy, precision, recall, F1, AUC, business KPIs) and what thresholds will be considered acceptable. This supports traceability, objective evaluation, and transparent go/no-go decisions in later stages.
Because the question states that the data is already prepared and the team is ready to proceed, it implies that initial data quality activities have already occurred. Repeating a “final assessment of data quality” (option A) is less critical at this specific point than locking in evaluation metrics. Go/no-go questions (option C) and scalability reporting (option D) depend on having those metrics explicitly defined; they are downstream decisions and artifacts. PMI-style AI guidance stresses that model development should be driven by pre-defined, documented performance metrics that connect technical outputs to business value and risk tolerances. Therefore, the next action for the project manager is to document the performance metrics for the model.
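As a sketch of what "documenting the performance metrics" might produce, the example below computes standard classification metrics from confusion-matrix counts and checks them against pre-agreed acceptance thresholds. The threshold values are placeholders a real team would negotiate, not PMI-prescribed numbers.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard metrics from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Example documented acceptance criteria (illustrative thresholds).
ACCEPTANCE = {"precision": 0.80, "recall": 0.75}

def meets_criteria(metrics, criteria=ACCEPTANCE):
    """Objective go/no-go check against the documented thresholds."""
    return all(metrics[k] >= v for k, v in criteria.items())
```

Writing the criteria down before training, as in `ACCEPTANCE`, is what makes later go/no-go decisions traceable rather than ad hoc.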

Question No : 10


An IT services company is developing an AI system to automate network security monitoring. The project manager needs to consider various factors to mitigate risks associated with false positives and false negatives.
Which action should the project manager implement?

Answer:
Explanation:
In AI-enabled security monitoring, PMI-style AI risk management highlights false positives and false negatives as key operational risks: false positives overwhelm analysts and create alert fatigue, while false negatives hide real threats. To mitigate these, guidance stresses continuous monitoring, feedback, and human-AI collaboration, not just algorithm choice. Establishing a continuous feedback loop with security teams (option D) means that security analysts review alerts, label them as true/false, and feed those labels back into the AI pipeline. This enables threshold tuning, recalibration, and retraining, incrementally reducing misclassification rates over time.
Option B (model combinations and trade-offs) can help at design time, but it does not by itself guarantee ongoing control of false positives/negatives once the system is deployed.
Option A is too narrow and algorithm-specific and ignores the governance and lifecycle aspects.
Option C addresses data security, which is important but unrelated to classification error rates. PMI-style AI operations (akin to MLOps) underline that closed-loop learning with real-world feedback is critical for safety, resilience, and performance. Hence, the action that directly addresses the risk of false positives and false negatives is to establish a continuous feedback loop with security.
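The closed feedback loop described above can be illustrated with a toy threshold tuner: analyst labels on individual alerts nudge the alerting threshold up after false positives and down after false negatives. The starting threshold and step size are arbitrary illustration values, not a recommended calibration method.

```python
class AlertThresholdTuner:
    """Toy feedback loop: analyst verdicts adjust the alert threshold."""

    def __init__(self, threshold=0.5, step=0.01):
        self.threshold = threshold   # scores >= threshold raise an alert
        self.step = step             # size of each corrective nudge

    def feedback(self, score, is_real_threat):
        alerted = score >= self.threshold
        if alerted and not is_real_threat:
            # False positive: raise the bar to reduce analyst noise.
            self.threshold = min(0.99, self.threshold + self.step)
        elif not alerted and is_real_threat:
            # False negative: lower the bar to catch more real threats.
            self.threshold = max(0.01, self.threshold - self.step)
```

In a real deployment the labels would also feed retraining; the sketch only shows the lightweight threshold-recalibration part of the loop.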

Question No : 11


In the finance sector, a company is implementing an AI system for credit risk assessment. The project manager needs to identify the data subject matter experts (SMEs) who can help to ensure the accuracy and reliability of the model.
What is an effective method to achieve this objective?

Answer:
Explanation:
For an AI credit risk assessment system, PMI-style AI governance and lifecycle guidance consistently emphasizes that domain and data expertise must be combined to ensure model accuracy, relevance, and reliability. In the finance context, this means involving: (1) data analysts / data scientists who understand data structures, data quality, feature engineering, and model behavior, and (2) financial / credit risk experts who understand regulatory constraints, lending policies, risk appetite, and real-world meaning of variables and outputs. Together, they validate that input data correctly represents customer risk profiles, that derived features reflect sound credit risk logic, and that model outputs are interpretable and aligned with institutional policies.
Options B, C, and D conflict with good AI practice described in PMI-style guidance. Focusing on SMEs “with experience in noncognitive solutions” is irrelevant to credit risk modeling. Relying on general IT staff ignores the need for specialized financial and data expertise. Selecting SMEs based on availability rather than expertise directly undermines model quality and risk control. Therefore, the effective and expected method in an AI credit risk initiative is to engage internal data analysts and financial experts as data SMEs to support model design, validation, and ongoing monitoring.

Question No : 12


A financial services firm is building an AI model to detect fraudulent transactions. Identifying and validating data sources is critical to the model's success.
What is an effective method that helps to ensure data accuracy?

Answer:
Explanation:
For a financial services firm building an AI model for fraud detection, the accuracy and trustworthiness of transaction data is critical. PMI-CPMAI’s guidance on AI data governance stresses the need to understand where data comes from, how it flows, and what transformations it undergoes before being used for model training or inference. This is precisely what data lineage tools are designed to support.
Data lineage enables teams to trace data back to its original source, see each processing step (cleansing, aggregation, enrichment), and verify that transformations conform to defined business and regulatory rules. In regulated sectors like finance, this traceability is essential for audits, model validation, and demonstrating that AI decisions (such as fraud flags) are based on accurate, well-governed data. While technologies like blockchain (option C) or batch cleansing (option D) may have roles in specific architectures, PMI-style AI governance places primary emphasis on visibility, traceability, and control over the data lifecycle.
A federated database system (option B) addresses access architecture, not inherently accuracy. By contrast, utilizing data lineage tools directly supports identifying and validating data sources and understanding whether the data remains accurate after multiple hops. Therefore, in line with PMI-CPMAI data governance practices, option A is the most effective method listed to help ensure data accuracy.
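To make the lineage idea concrete, here is a toy sketch in which every transformation appends a step to a record's lineage trail, so any value can be traced back to its original source. The record structure and step names are assumptions for illustration, not how commercial lineage tools store metadata.

```python
def with_lineage(value, source):
    """Wrap an ingested value with an initial lineage entry."""
    return {"value": value, "lineage": [{"step": "ingest", "source": source}]}

def transform(record, step_name, fn):
    """Apply a transformation and record it, leaving the input untouched."""
    return {
        "value": fn(record["value"]),
        "lineage": record["lineage"] + [{"step": step_name}],
    }
```

Even this toy shows the property the explanation relies on: after several hops, the trail still identifies the original source and every processing step applied to the value.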

Question No : 13


An AI project team at a manufacturing company needs to ensure data integrity before moving to model development. The team discovered some data inconsistencies caused by manual entry errors.
What is an effective method that helps to ensure data integrity?

Answer:
Explanation:
In AI data management, PMI-CPMAI highlights data integrity as the property that data remains accurate, consistent, and reliable over its lifecycle. When the team discovers inconsistencies due to manual entry errors, the most direct and effective control is to prevent bad data at the point of capture. This is achieved by implementing real-time data validation rules: for example, enforcing allowed ranges, formats, mandatory fields, cross-field consistency checks, and lookup constraints before a record is accepted.
PMI’s AI data practices emphasize that “controls at data entry” are preferable to downstream correction because they reduce rework, lower the risk of propagating errors into models, and create cleaner training datasets from the outset. Although automating data entry (option B) can also reduce manual errors, it does not, by itself, guarantee integrity if upstream systems or processes are flawed. Regular audits (option C) are useful as a monitoring mechanism, but they are periodic and reactive rather than preventive. Using ML algorithms to detect and correct errors (option D) adds complexity and itself relies on having sufficiently good data.
Thus, in alignment with PMI-style AI governance and quality management, real-time data validation rules are the most effective method named here to ensure data integrity before moving to model development.
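A point-of-entry validation rule set of the kind described above might look like the following sketch; the field names, ranges, and allowed values are invented for a manufacturing example.

```python
# Illustrative entry rules: each field maps to a predicate it must satisfy.
RULES = {
    "machine_id": lambda v: isinstance(v, str) and v.strip() != "",
    "temperature_c": lambda v: isinstance(v, (int, float)) and -40 <= v <= 200,
    "shift": lambda v: v in {"day", "night"},
}

def validate(record):
    """Reject a record before it enters the dataset if any rule fails.

    Returns (ok, failing_fields) so the entry form can show the operator
    exactly which fields to correct.
    """
    errors = [field for field, rule in RULES.items()
              if field not in record or not rule(record[field])]
    return (len(errors) == 0, errors)
```

Because rejection happens at capture time, errors are corrected by the person entering the data instead of propagating into training sets, which is exactly the preventive posture the explanation argues for.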

Question No : 14


A telecommunications company is considering an AI solution to improve customer service through automated chatbots. The project team is assessing the feasibility of the AI solution by examining its potential scalability and effectiveness.
What will present the highest risk to the company?

Answer:
Explanation:
In PMI’s treatment of AI in customer-facing environments, responsible AI, privacy, and regulatory compliance are consistently framed as high-impact risk areas. For a telecommunications company using AI chatbots for customer service, any breach of customer data privacy is not just a technical issue but a legal, regulatory, and reputational threat. It may trigger regulatory investigations, fines, lawsuits, and loss of customer trust.
While scalability risks (such as the chatbot not handling volume) and integration risks (such as poor connection with existing platforms) may harm service quality, they are usually remediable through technical improvements, capacity upgrades, or refactoring. Conversely, PMI’s AI governance perspective emphasizes that violations of data protection laws can incur “non-recoverable” damage: sanctions, forced shutdown of systems, and long-term brand erosion. Therefore, the potential that “the solution might breach customer data privacy regulations, leading to legal consequences” is typically assessed as a higher-order risk than operational challenges.
PMI-CPMAI content stresses implementing privacy-by-design, strict access controls, encryption, and compliance checks early in the solution lifecycle. This means that, in a feasibility and risk assessment, data privacy and regulatory compliance represent the highest risk category, and thus option D is the most appropriate answer.

Question No : 15


An aerospace company is integrating AI for predictive maintenance. The project manager is concerned about potential delays due to external dependencies.
Which initial step should the project manager take?

Answer:
Explanation:
Within the PMI Certified Professional in Managing AI (PMI-CPMAI) framework, managing external dependencies is a core component of AI project risk management, especially for industries such as aerospace, where supply chains and component availability can significantly affect timelines. PMI emphasizes that external dependency risks, such as reliance on specialized hardware, sensors, cloud services, or third-party data streams, must be addressed proactively to ensure uninterrupted AI system development and deployment.
The PMI-CPMAI Risk and Dependency Management section states that AI project managers should “identify and stabilize critical external inputs early in the lifecycle, particularly when those dependencies are single-source or highly specialized.” It further highlights that mitigation begins with “diversifying suppliers or service providers to reduce the probability of bottlenecks or delays caused by external parties.” This approach not only reduces vulnerability but also improves resilience and reduces procurement-related schedule risks.
Although increasing internal resources (A) or implementing just-in-time inventory (B) may optimize internal operations, they do not mitigate dependency on external providers. Establishing contingency plans (C) is important but is not the initial action; PMI guidance is clear that risk avoidance and reduction take precedence over contingency responses. The most appropriate first step, according to PMI-CPMAI, is to “engage with multiple suppliers to ensure redundancy and reduce exposure to single-point external failures.”
