Browse Certification Practice Tests by Exam Family

PMI-CPMAI Practice Test

Practice PMI-CPMAI with free sample questions, timed mock exams, and detailed explanations in PM Mastery.

PMI-CPMAI is PMI’s managing-AI certification for practitioners who need to frame AI work clearly, judge data readiness, guide model decisions, and operationalize responsibly. If you are searching for PMI-CPMAI sample exam questions, a practice test, mock exam, or exam simulator, this is the main PM Mastery page: start on the web, then continue on iOS or Android with the same PM Mastery account.

Choose PMI-CPMAI when you need an AI initiative management exam rather than a general PM exam. This route is strongest when you own the AI business case, data readiness, model evaluation, governance, rollout, and monitoring. If you mainly need broad project-leadership prep with some AI context, compare PMP 2026. If your role is specifically Scrum Master or Product Owner, compare PSM-AI and PSPO-AI.

Interactive Practice Center

Start a practice session for PMI Certified Professional in Managing AI (PMI-CPMAI) below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same PM Mastery account they use on web and mobile.

Use it on iPhone or Android too: get PM Mastery on the App Store or PM Mastery on Google Play and sign in with the same PM Mastery account you use on the web. One PM Mastery subscription works across web and mobile.

Free diagnostic: try the 120-question PMI-CPMAI full-length practice exam before subscribing. Treat the result as an AI-delivery diagnostic: sort your misses by cause, whether business framing, data readiness, evaluation design, governance, or operational rollout.

What this PMI-CPMAI practice page gives you

  • A direct route into PM Mastery practice for PMI-CPMAI.
  • Topic drills and mixed sets across responsible AI, business needs, data needs, model evaluation, and operationalization.
  • Detailed explanations that show why the best AI-delivery answer is right under real constraints.
  • 24 on-page sample questions plus access to a larger PM Mastery library with 2,700+ PMI-CPMAI practice questions.
  • A clear free-preview path before you subscribe.
  • One PM Mastery account that works across web and mobile.

PMI-CPMAI exam snapshot

  • Vendor: PMI
  • Official exam name: PMI Certified Professional in Managing AI (PMI-CPMAI)
  • Exam code: PMI-CPMAI
  • Items: 120 total
  • Exam time: 160 minutes
  • Assessment style: scenario-based AI project delivery, governance, data, and operational decisions

PMI-CPMAI questions usually reward the option that balances business value with governance, data realism, validation discipline, and safe operational rollout.

If your role is closest to… | Best page | Why
End-to-end AI initiative leadership | PMI-CPMAI | Strongest fit for business framing, data readiness, model evaluation, governance, rollout, and monitoring.
Mainstream PMP credentials with AI context | PMP 2026 | Best if your target is still PMP and your exam date is July 9, 2026 or later.
Scrum Master or agile coach work | PSM-AI | Better fit for facilitation, team support, and AI inside Scrum events.
Product Owner work | PSPO-AI | Better fit for discovery, backlog quality, prioritization, and value decisions.
Broader AI-enabled project delivery | AIPM | Better fit if you want a wider AI project-delivery route beyond PMI’s AI-management framing.

AI delivery loop you should recognize

Diagram showing the PMI-CPMAI delivery loop: business need, data readiness, model evaluation, governance and release, then operations and monitoring with feedback into the next cycle.

The exam keeps circling through the same logic: frame the business problem correctly, confirm the data is usable, evaluate the model with the right success measures, release under governance controls, then monitor and improve in production.

Topic coverage for PMI-CPMAI practice

Domain | Weight
Support Responsible and Trustworthy AI Efforts | 15%
Identify Business Needs and Solutions | 26%
Identify Data Needs | 26%
Manage AI Model Development and Evaluation | 16%
Operationalize AI Solution | 17%

PMI-CPMAI decision filters for AI scenarios

AI exam scenarios often include tempting technical answers. Use these filters to keep the decision tied to value, evidence, governance, and safe operation.

Scenario signal | First check | Strong answer usually… | Weak answer usually…
Leaders request an AI solution before defining the problem | Business need and measurable outcome | Clarifies the decision, value measure, constraints, and success criteria before choosing a model | Starts tool selection or model development because AI has executive attention
The model performs well in a lab but adoption is weak | Workflow, change impact, and stakeholder readiness | Addresses process fit, user trust, auditability, training, and accountability before scaling | Tunes accuracy only and treats adoption as a post-launch communication issue
Data quality issues appear during preparation | Data suitability and traceability | Stops or gates progress until requirements, lineage, privacy, and quality checks are satisfied | Proceeds to training because the team can compensate during modeling
Accuracy metrics look promising but harm is possible | Responsible AI controls | Adds risk review, bias testing, explainability, human oversight, and approval gates appropriate to impact | Uses one aggregate metric as proof the solution is ready
A pilot is ready for production | Operational readiness | Confirms SLOs, monitoring, rollback, support ownership, model drift checks, and incident response | Moves to production because the pilot met functional acceptance criteria
Performance degrades after launch | Monitoring and continuous improvement | Investigates drift, data changes, feedback loops, and retraining triggers under governance | Retrains immediately without diagnosing the cause or approval path

PMI-CPMAI readiness map

Use this map after each timed set to classify the miss before you do more questions.

Domain | What the exam tests | What PM Mastery practice should force | Common trap
Responsible and Trustworthy AI | Whether governance, risk, transparency, fairness, privacy, and oversight match the solution impact | Choose controls proportionate to stakeholder harm, data sensitivity, and decision criticality | Treating responsible AI as a checklist after model selection
Business Needs and Solutions | Whether the AI initiative is solving the right problem with measurable value | Translate vague AI interest into outcomes, success measures, constraints, and route-fit decisions | Optimizing for technical novelty instead of business value
Data Needs | Whether data is fit for purpose, legal, representative, traceable, and operationally available | Spot gaps in lineage, consent, quality, bias, feature readiness, and governance | Assuming more data is automatically better
Model Development and Evaluation | Whether evaluation design matches the use case and risk profile | Compare metrics, validation methods, test data, human review, and go/no-go evidence | Choosing the highest metric without checking failure cost
Operationalize AI Solution | Whether the solution can run safely in production | Connect deployment, monitoring, support, drift, rollback, feedback, and retraining decisions | Treating launch as the finish line

How to use the PMI-CPMAI simulator efficiently

  1. Start with focused drills on business framing, data readiness, and responsible AI before mixing in later lifecycle decisions.
  2. Review every miss until you can explain the trade-off between feasibility, governance, value, and operational reliability.
  3. Move into mixed sets once you can connect business need, data quality, model evaluation, and deployment planning in one scenario.
  4. Finish with timed runs so you can keep sound judgment under pressure instead of chasing technically impressive but risky answers.

Final 7-day PMI-CPMAI practice sequence

Use the final week to rehearse AI-delivery judgment, not to memorize model terminology.

Timing | Practice focus | What to review after the set
Days 7-5 | One full-length diagnostic plus targeted drills in the weakest lifecycle domains | Whether misses came from business framing, data readiness, evaluation criteria, responsible AI, or operationalization
Days 4-3 | Mixed AI lifecycle sets with exhibits, constraints, and stakeholder decisions | Whether you can explain why the safest valuable next step is better than the most technical answer
Days 2-1 | Light review of governance gates, data checks, evaluation choices, monitoring, and rollback language | Only recurring traps; do not introduce unfamiliar AI frameworks late
Exam day | Warm up with a few scenario items if useful | Read for the lifecycle stage first, then choose the answer that improves evidence, value, and control

When PMI-CPMAI practice is enough

If you can score above 75% on several mixed or timed attempts and explain each miss in lifecycle terms without recognizing the exact question, you are likely ready for the exam. Continuing to repeat the same large bank can become overtraining: you may remember item patterns while losing the habit of reasoning from the business problem, data evidence, model risk, and production constraints.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the question style and explanation depth.
  • Premium: the full PMI-CPMAI practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.

24 PMI-CPMAI sample questions with detailed explanations

These sample questions are original PM Mastery practice items aligned to PMI-CPMAI-style AI initiative-management decisions. They are not PMI exam questions and are not copied from any exam sponsor. Use them to check your readiness here, then continue in PM Mastery with mixed sets, topic drills, and timed mocks.

Question 1

Topic: Domain V: Operationalize AI Solution

You are planning deployment for a customer-facing ML scoring API that will support both a mobile app and a call center. Leadership asks you to “size the infrastructure and on-call support” for launch, but the request contains no operational targets or usage estimates.

What should you ask for first before selecting compute, scaling, and support resources?

  • A. Procurement’s preferred vendors and discount thresholds
  • B. A finalized end-user training and change management plan
  • C. Expected traffic profile and SLOs (latency, availability, peak load)
  • D. The model’s algorithm choice and training hyperparameters

Best answer: C

Explanation: Infrastructure and resource planning depends primarily on the workload and the required service levels. Without request volume patterns and targets like latency and availability, you cannot defensibly choose an architecture, scale strategy, or on-call staffing. Establish these operational requirements first, then evaluate options that meet them within constraints.
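
To make the dependency concrete, here is a minimal Python sketch (all figures, names, and the sizing rule of thumb are illustrative assumptions, not PMI content) showing how a traffic profile and latency SLO feed a first capacity estimate via Little's law.

    # Hypothetical inputs; real values come from the business and operations teams.
    peak_rps = 300               # assumed peak requests per second (app + call center)
    p95_latency_s = 0.25         # assumed p95 latency target, in seconds
    capacity_per_instance = 40   # assumed concurrent requests one instance can handle

    # Little's law: requests in flight ~= arrival rate x time in system.
    inflight_at_peak = peak_rps * p95_latency_s
    instances_needed = -(-int(inflight_at_peak) // capacity_per_instance)  # ceiling division

    print(f"In-flight requests at peak: {inflight_at_peak:.0f}")
    print(f"Baseline instances before redundancy and headroom: {instances_needed}")

None of these inputs exist until the traffic profile and SLOs in option C are pinned down, which is why that question comes before any compute or on-call sizing.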


Question 2

Topic: Domain II: Identify Business Needs and Solutions

A team is piloting an AI assistant that suggests next-best actions to call center agents. Early results show acceptable model accuracy, but adoption is low: supervisors report agents bypass recommendations and the quality team is unsure how to audit AI-influenced calls. The product owner wants to expand to three more call centers in six weeks. What is the best next step?

  • A. Run a change impact assessment and map impacted stakeholders
  • B. Expand deployment now and collect stakeholder feedback post-rollout
  • C. Tune the model to increase accuracy before addressing adoption
  • D. Move the solution to production operations for ongoing ownership

Best answer: A

Explanation: Low adoption and unclear audit responsibilities indicate a change-management gap, not a model-performance gap. The next step is to assess change impacts and identify all stakeholder groups affected by AI-assisted work so required process updates, training, communications, and accountability can be planned before scaling.


Question 3

Topic: Domain IV: Manage AI Model Development and Evaluation

You are preparing a go/no-go recommendation to start data preparation for a lead-scoring model. The team provides the following artifact.

Data prep check (excerpt)
Source dataset: CRM_Leads v3.2
Requirements: R1 Exclude opted-out leads; R2 Impute income only for verified income; R3 Store reproducible lineage for derived features
Findings: R1 check NOT RUN (opt_out flag mapping missing)
Findings: R2 18% of income values imputed using median (verification not used)
Findings: R3 Lineage recorded as "analyst spreadsheet notes" (not in repo)
Row count change after transforms: -2.1%

What is the best next action based on this exhibit?

  • A. Pause go/no-go and require fixes plus traceable reruns
  • B. Proceed to data preparation because row loss is small
  • C. Continue to model training and address data issues after evaluation
  • D. Request additional features from the business to offset imputation

Best answer: A

Explanation: The exhibit shows preprocessing results that do not align with stated requirements (opt-out exclusion not validated, income imputed without using verification) and are not traceable (lineage kept outside the repository). The appropriate response is to stop progression and correct the transformation logic and documentation, then rerun checks in a reproducible, auditable way before making a go/no-go decision.
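
As a hedged illustration of what "traceable reruns" can mean in practice, the sketch below re-expresses requirements R1 and R2 from the exhibit as repeatable, code-based checks. The pandas approach and the column names (opt_out, income_imputed, income_verified) are assumptions for illustration, not the actual CRM_Leads schema.

    import pandas as pd

    def run_data_prep_checks(df: pd.DataFrame) -> dict:
        """Repeatable checks mirroring requirements R1 and R2 from the exhibit."""
        return {
            # R1: no opted-out leads may remain in the prepared dataset.
            "R1_no_opted_out_leads": not bool(df["opt_out"].any()),
            # R2: income may be imputed only where income was verified.
            "R2_impute_only_if_verified": bool(
                df.loc[df["income_imputed"], "income_verified"].all()
            ),
        }

For R3, the key point is that results like these are produced by pipeline code and committed to the repository, rather than noted in an analyst spreadsheet.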


Question 4

Topic: Domain II: Identify Business Needs and Solutions

A retailer is considering an AI-driven email personalization model to improve online sales. Executives want a business case for a go/no-go decision and expect benefits to be tied to measurable value (financial or strategic), not just technical performance.

Which approach should the AI project manager NOT use when building the business case?

  • A. Translate expected conversion lift into incremental profit using baseline volume
  • B. Justify the initiative mainly with model metrics (e.g., AUC/accuracy)
  • C. Define strategic outcomes with measurable proxies (e.g., retention) and targets
  • D. Include total cost of ownership and run best/base/worst benefit scenarios

Best answer: B

Explanation: A strong AI business case connects model outputs to business KPIs and then to financial or strategic value, using transparent assumptions and costs. Relying primarily on technical metrics (like accuracy or AUC) does not show how the organization will realize value or whether the investment is justified. Decision-makers need quantified impact pathways and measurable targets to compare benefits against total costs and risk.


Question 5

Topic: Domain III: Identify Data Needs

An AI team is starting model development for a customer service triage solution. They plan to use chat transcripts that include PII and access them from a shared network folder, then train models on individual laptops.

An internal audit flags that the team’s workspace is not aligned with the organization’s AI governance requirements for least-privilege access, segregation of environments, and auditability. The sponsor wants progress to continue without violating controls.

What is the best next step?

  • A. Proceed with training on laptops using de-identified extracts while security reviews the workspace
  • B. Build the model now and address environment controls during production deployment planning
  • C. Ask the vendor to host the data and experimentation environment to accelerate delivery
  • D. Stand up an approved, segregated dev/test workspace with role-based access, logging, and controlled data access before continuing model work

Best answer: D

Explanation: Before further development, the team must move experimentation into a governed workspace that enforces access controls and environment segregation for sensitive data. Establishing role-based access, auditable logging, and controlled dataset provisioning enables progress while meeting governance requirements. This sequencing prevents rework and reduces the risk of noncompliant data handling during development and testing.


Question 6

Topic: Domain II: Identify Business Needs and Solutions

A health insurer is proposing an AI-assisted claims “fast-track” triage to reduce adjuster effort and overpayments. Constraints: finance requires an 18-month payback, compliance requires documented ROI assumptions, the go/no-go decision is needed in 3 weeks, and PHI cannot leave the internal environment.

Exhibit: ROI worksheet (draft, annualized)

Volume: 200,000 claims/year
Time saved when used: 4 minutes/claim
Labor cost: $45/hour
Assumptions: 60% eligible claims, 75% adjuster adoption
Estimated labor savings: $270,000/year
Estimated overpayment reduction: $500,000/year (assumes 10% reduction)
One-time build+integration: $650,000
Annual run/monitoring: $180,000

What is the BEST next action to support a defensible ROI decision?

  • A. Remove monitoring and governance costs to meet the 18-month payback
  • B. Begin model development to replace assumptions with measured performance
  • C. Proceed using the draft ROI because data cannot leave the environment
  • D. Run sensitivity analysis and validate adoption and leakage assumptions quickly

Best answer: D

Explanation: The draft ROI hinges on a few high-uncertainty assumptions that drive most of the benefits, especially adjuster adoption and the claimed overpayment reduction. The best next step is to quantify how ROI changes when those assumptions vary and rapidly validate them with stakeholders and available internal evidence. This creates a defensible, decision-ready business case within the 3-week window.
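
The worksheet already contains enough to show why option D matters. The sketch below (figures taken from the exhibit; the downside scenario is an illustrative assumption) recomputes payback as the adoption and overpayment-reduction assumptions vary.

    def payback_months(adoption: float, leakage_reduction: float) -> float:
        """Months to recover the build cost under the draft worksheet's structure."""
        eligible = 200_000 * 0.60                                   # eligible claims per year
        labor_savings = eligible * adoption * (4 / 60) * 45         # minutes saved -> hours x $45
        overpayment_savings = (500_000 / 0.10) * leakage_reduction  # implied $5M leakage base
        net_annual = labor_savings + overpayment_savings - 180_000  # less annual run/monitoring
        return 650_000 / net_annual * 12                            # one-time build vs net benefit

    print(round(payback_months(0.75, 0.10), 1))  # base case: ~13.2 months, inside 18 months
    print(round(payback_months(0.50, 0.05), 1))  # plausible downside: ~31.2 months, outside

Because a realistic downside breaches the 18-month payback requirement, validating the adoption and leakage assumptions is exactly what makes the business case defensible.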


Question 7

Topic: Domain III: Identify Data Needs

While identifying project resources for a regulated AI initiative, the project sponsor says the team must be able to trace the model’s training data from original sources through each transformation and handoff to support auditability and reproducibility. What AI governance term describes this capability?

  • A. Data lineage
  • B. Model card
  • C. Data dictionary
  • D. Data drift

Best answer: A

Explanation: The described need is end-to-end traceability of where data came from and how it changed before use in the model. That capability is called data lineage and is commonly supported through data engineering and data governance practices to enable audits and reproducibility.


Question 8

Topic: Domain V: Operationalize AI Solution

An AI team is closing a project after piloting a loan pre-screening model. The pilot improved approval-cycle time, but the evaluation also found weaker performance for “thin-file” applicants and that training data only covered the last 18 months, so results may not generalize during economic shifts. The sponsor asks for a single “success story” slide for leadership.

Which CPMAI-aligned practice best matches how the team should present the final results?

  • A. Complete a data lineage and access-control audit for all training and scoring datasets
  • B. Provide a model card-style final report summarizing performance, limitations, intended use, and residual risks
  • C. Publish an operations runbook focused on drift detection thresholds and alert escalation
  • D. Deliver end-user training and communications to drive adoption of the new workflow

Best answer: B

Explanation: The situation requires transparent communication of both what the pilot achieved and where the model should not be over-claimed. A model card-style final report (or AI fact sheet) is designed to disclose performance, known limitations, intended use, and residual risks in stakeholder-friendly language, supporting a responsible handover and closure.


Question 9

Topic: Domain V: Operationalize AI Solution

A health insurer completed a 6-week pilot of an AI model that prioritizes inbound care-management calls. Results show a 12% reduction in average call handling time, but the operations team reports alert fatigue, and a fairness check found higher false negatives for Spanish-speaking members. Member data is highly restricted (no broad re-sharing), and the steering committee has low risk tolerance and requires a final report with lessons learned in 5 business days to decide whether to scale.

What is the BEST next action?

  • A. Adjust alert thresholds immediately to reduce fatigue before reporting
  • B. Run a cross-functional retrospective and document actionable improvements
  • C. Defer lessons learned until after the solution is scaled enterprise-wide
  • D. Deliver a final report focused only on model accuracy and latency

Best answer: B

Explanation: The final report should synthesize what worked and what to improve across business outcomes, data constraints, model performance (including fairness), and operational readiness. The fastest, lowest-risk way to do this within 5 days is an evidence-based, cross-functional lessons-learned session that results in prioritized actions with owners. This directly supports the steering committee’s scale decision without making ungoverned changes.


Question 10

Topic: Domain V: Operationalize AI Solution

A bank is transitioning a fraud-detection model from the delivery team to a 24/7 production support team. A key constraint is that support staff cannot access raw customer transactions due to privacy controls, but they must still troubleshoot alerts, diagnose drift, and execute rollback steps during incidents.

Which approach best coordinates knowledge transfer and training for the support team under this constraint?

  • A. One-time walkthrough of the model by data scientists
  • B. Grant repository access and require self-paced learning
  • C. Hands-on training in a de-identified sandbox with runbooks
  • D. Rely on vendor SLAs for issues and minimize internal training

Best answer: C

Explanation: Because support cannot view raw transactions, training must simulate real operational tasks without using sensitive data. A de-identified sandbox plus operational runbooks lets the team practice monitoring, triage, escalation, rollback, and drift-response procedures safely and repeatably. This is the most reliable way to achieve readiness for 24/7 production support under strict privacy constraints.


Question 11

Topic: Domain III: Identify Data Needs

A retail bank launched a churn prediction model. Two months after release, monitoring shows the score distribution has shifted and performance dropped versus validation. Adoption is also low because different teams report they cannot reproduce the same customer lists from week to week.

A privacy review then finds a spreadsheet with full customer PII on a shared drive used for “temporary analysis.” In interviews, analysts say access to the curated feature tables in the data platform takes weeks to get approved, so they request one-off extracts from whoever already has access.

What is the most likely underlying cause?

  • A. True concept drift from changing customer behavior
  • B. Low adoption due to inadequate end-user training
  • C. Poorly designed data access controls causing uncontrolled data copies
  • D. Model overfitting due to insufficient regularization

Best answer: C

Explanation: The clues point to access governance failures: slow or unclear approvals for the right datasets pushed teams to use ad-hoc extracts and shared files. That creates uncontrolled PII exposure and inconsistent data versions, which can also manifest as drift and performance drops. Implementing role-based, least-privilege access with an approved analysis path enables work without encouraging shadow data.


Question 12

Topic: Domain V: Operationalize AI Solution

You are transitioning a demand-forecasting model from the data science team to operations for production support. The sponsor wants a go-live date, but the ops lead says they cannot commit because “support expectations aren’t defined yet.”

What is the first question you should ask to establish ongoing maintenance and support procedures that include monitoring and incident response?

  • A. What minimum offline accuracy metric is required to approve go-live?
  • B. Which modeling approach and features were used in the final version?
  • C. What is the projected infrastructure cost for the next fiscal year?
  • D. What incident severity levels and response targets will ops support?

Best answer: D

Explanation: Before you can define monitoring and incident response, you need clear operational expectations: what constitutes an incident, how urgent each type is, and the required response/restoration targets. These details drive alert thresholds, on-call coverage, escalation paths, and runbook content. Without them, an ops team cannot responsibly accept the solution into production support.
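
As a hedged sketch of why option D unlocks the rest (the severity names, triggers, and targets below are hypothetical examples, not PMI or organizational standards), agreed severity levels and response targets translate directly into alerting and on-call decisions.

    # Hypothetical severity matrix agreed with the ops lead; values are illustrative only.
    SEVERITY_TARGETS = {
        # level:  (example trigger,                        respond within,      restore within)
        "SEV1": ("forecast service unavailable",           "15 minutes",        "2 hours"),
        "SEV2": ("forecast error beyond agreed threshold", "1 hour",            "1 business day"),
        "SEV3": ("data feed delayed, no user impact",      "next business day", "best effort"),
    }

    def alerting_action(severity: str) -> str:
        """Alert routing follows from the agreed targets, not the other way around."""
        respond_within = SEVERITY_TARGETS[severity][1]
        if severity == "SEV1":
            return "page on-call immediately"
        return f"open a ticket and respond within {respond_within}"

Until a table like this is agreed, ops cannot size on-call coverage or accept the handover.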


Question 13

Topic: Domain V: Operationalize AI Solution

A customer-facing AI virtual assistant for a health insurer begins returning snippets of other members’ claim notes in its responses. The on-call team confirms the issue is reproducible and could expose personal data to any user.

Which contingency/incident response procedure is the BEST immediate action to follow, given this situation?

  • A. Activate IR runbook: isolate, preserve logs, escalate privacy/legal
  • B. Add enhanced monitoring and wait for more incident data
  • C. Continue service while tuning prompts to reduce leakage
  • D. Schedule retraining and redeploy after the next sprint review

Best answer: A

Explanation: Because the assistant is exposing member information, the decisive factor is a potential privacy/security incident. The procedure should prioritize rapid containment to stop harm, preserve evidence for investigation, and route escalation through predefined privacy/legal and executive channels. Model improvements can follow only after the incident is controlled.


Question 14

Topic: Domain V: Operationalize AI Solution

An AI-driven customer support triage model is in production. You send a biweekly performance report to executives, operations, and the model owners. Recent monitoring shows a small overall accuracy improvement, but performance dropped for one high-value customer segment and the data pipeline has a known 48-hour lag.

Which reporting approach is INCORRECT and should be avoided?

  • A. Annotate the report with recent changes and incidents affecting metrics
  • B. Include segment-level results and flag the high-value segment regression
  • C. Publish only the overall accuracy trend to reduce confusion
  • D. Add a data-freshness note and label metrics as delayed by 48 hours

Best answer: C

Explanation: Performance reporting for operational AI must be decision-useful, which means pairing metrics with the key caveats and limitations that affect interpretation. When monitoring reveals segment regressions and known data latency, reporting only an overall metric can mislead stakeholders into thinking the system is improving everywhere and in real time. Transparent context builds trust and supports timely corrective action.
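
A minimal sketch of the reporting idea (the library choice and column names are assumptions): compute the overall metric and the per-segment breakdown from the same monitoring extract, and attach the freshness caveat instead of omitting it.

    import pandas as pd

    def build_biweekly_report(preds: pd.DataFrame, lag_hours: int = 48) -> dict:
        """Expects hypothetical columns: 'segment' and boolean 'correct'."""
        overall = float(preds["correct"].mean())
        by_segment = preds.groupby("segment")["correct"].mean()
        return {
            "overall_accuracy": round(overall, 3),
            "segment_accuracy": by_segment.round(3).to_dict(),
            # Surface regressions instead of averaging them away.
            "flagged_segments": by_segment[by_segment < overall - 0.05].index.tolist(),
            "caveat": f"Metrics reflect pipeline data delayed by ~{lag_hours} hours.",
        }

The high-value segment regression and the 48-hour lag show up in this output by construction, which is the opposite of option C.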


Question 15

Topic: Domain III: Identify Data Needs

An AI team is building a churn prediction model. In early testing, the model looks strong, but stakeholders disagree on whether results are improving the business because “churn” is defined as (1) account cancellation in billing, (2) no purchases in 90 days in CRM, and (3) loss of contract in the data warehouse. What CPMAI-aligned data principle/governance approach best addresses this situation to prevent misalignment during model development?

  • A. Perform data profiling to quantify missing values and outliers
  • B. Run a bias assessment across protected groups before training
  • C. Standardize and get sign-off on metric and label definitions with data SMEs
  • D. Implement drift monitoring to detect changes in churn rates over time

Best answer: C

Explanation: The core issue is inconsistent business and data definitions of the target label and success metrics. The right principle is to validate and standardize those definitions with the appropriate data SMEs, document them in a shared glossary/data dictionary, and obtain stakeholder sign-off. This aligns model development, evaluation, and reporting to the same measurable outcomes.


Question 16

Topic: Domain V: Operationalize AI Solution

You are overseeing a phased (canary) deployment of a credit decisioning model. Two hours after release, monitoring shows an increase in average decision latency and a spike in declined applications. An engineer proposes an immediate configuration change to the feature pipeline to “stabilize things,” but you are concerned about bypassing governance.

What should you verify or ask FIRST before deciding on any implementation change?

  • A. Which new features could be added in the next iteration to improve model accuracy
  • B. Which go/no-go thresholds and rollback criteria were approved for this release, and which monitored metrics have actually breached them
  • C. Whether the engineering team can refactor the feature pipeline to reduce technical debt
  • D. Whether stakeholders prefer a different model type for future versions

Best answer: B

Explanation: In a canary deployment, the first step is to anchor decisions to the approved release guardrails: success metrics, alert thresholds, and rollback triggers. Verifying what was agreed and whether the telemetry truly violates those thresholds enables rapid issue resolution while staying within governance. Only then should you select an action such as rollback, pause, or a controlled change.
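
A hedged sketch of "anchor to the approved guardrails" (the metric names and threshold values are illustrative assumptions): compare live canary telemetry with the release criteria that were actually approved before anyone touches the pipeline.

    # Hypothetical guardrails approved at the release gate.
    APPROVED_GUARDRAILS = {
        "p95_latency_ms": 800,   # rollback trigger if exceeded
        "decline_rate": 0.32,    # rollback trigger if exceeded
        "error_rate": 0.01,
    }

    def canary_decision(telemetry: dict) -> str:
        """Return the governed action implied by the approved thresholds."""
        breaches = [
            metric for metric, limit in APPROVED_GUARDRAILS.items()
            if telemetry.get(metric, 0) > limit
        ]
        if breaches:
            return "execute the approved rollback; breached: " + ", ".join(breaches)
        return "hold the canary and keep monitoring; no ungoverned configuration changes"

If the agreed thresholds have not actually been breached, an ad-hoc pipeline change is a governance bypass rather than a fix.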


Question 17

Topic: Domain IV: Manage AI Model Development and Evaluation

A retail bank wants an AI solution to “identify suspicious card transactions and new fraud patterns.” The team is debating whether to use a supervised classifier or an unsupervised approach.

Before choosing a learning approach, what should the project manager verify/ask for FIRST?

  • A. What the maximum allowable inference latency is in production
  • B. Which specific algorithm the data science team prefers to implement
  • C. Which data ingestion tool will be used to stream transactions into the model
  • D. Whether there are enough confirmed fraud labels and a clear target outcome definition

Best answer: D

Explanation: Selecting supervised vs. unsupervised learning hinges on the availability and quality of labeled outcomes and what “success” means. If the bank has sufficient, trustworthy confirmed-fraud labels aligned to the desired decision, supervised classification is feasible; if not, unsupervised methods (e.g., anomaly detection) may be more appropriate. Clarifying the target and labels is the earliest gating question.


Question 18

Topic: Domain V: Operationalize AI Solution

An AI team has completed a 12-week pilot that uses machine learning to prioritize customer support tickets. The solution is transitioning to operations, and another business unit plans a similar initiative next quarter. As the AI project manager, you want to capture lessons learned and best practices in a reusable format.

Which action should you NOT take?

  • A. Update reusable assets (checklists, runbooks, model documentation) based on what worked and failed
  • B. Store lessons learned in a standardized template within a searchable, shared repository
  • C. Run a cross-functional retrospective and record outcomes, root causes, and recommendations
  • D. Rely on informal knowledge sharing and skip documenting lessons until issues reoccur

Best answer: D

Explanation: Lessons learned must be captured in a durable, reusable format so future AI teams can find, apply, and audit what happened and why. That typically means a structured after-action review plus storing outcomes in a shared repository. Relying on informal sharing causes knowledge loss and prevents consistent reuse across initiatives.


Question 19

Topic: Domain V: Operationalize AI Solution

Your team’s fraud-detection model meets agreed evaluation metrics in the lab, and the sponsor asks for a go-live decision next sprint. You have not yet met with IT operations or information security, and the target production environment is unclear (cloud/on-prem, network zones, and upstream/downstream system interfaces).

What should you verify or obtain first to assess deployment readiness?

  • A. Additional hyperparameter tuning to improve model accuracy
  • B. A user training curriculum for fraud analysts
  • C. Production infrastructure, security, and integration requirements sign-off
  • D. A communications plan to announce the new capability

Best answer: C

Explanation: Before a deployment go/no-go, you must confirm the solution can run safely and reliably in the intended production context. The most immediate gap is the lack of validated infrastructure capacity, security controls (e.g., access, data handling), and system integration needs. Getting these requirements and approvals first prevents committing to a deployment that cannot be hosted, secured, or connected.


Question 20

Topic: Domain II: Identify Business Needs and Solutions

A retail bank has deployed an AI-assisted agent tool to draft responses and recommend next-best actions in the contact center. The sponsor wants an ROI measurement plan for the next 6 months. Constraints: customer PII cannot be exported outside approved analytics storage, benefits must be attributable (not “feelings”), and the plan must include adoption because agents can ignore recommendations. Which approach best optimizes credible ROI measurement while meeting the constraints?

  • A. Use monthly supervisor surveys to estimate time savings and customer impact
  • B. Set a baseline and run an A/B rollout with KPI, cost, and adoption tracking
  • C. Measure only model quality (accuracy/latency) and infer ROI from it
  • D. Capture and export full call transcripts to maximize benefit quantification

Best answer: B

Explanation: A strong ROI plan specifies measurable value outcomes, establishes a pre-deployment baseline, and uses an attribution method (such as A/B or phased rollout) to isolate the AI’s impact. It also tracks adoption/usage so benefits are not overstated when users bypass recommendations. Keeping measurement within approved storage satisfies the PII constraint while enabling repeatable reporting.
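
A minimal sketch of the attribution logic behind option B (every number and parameter name is hypothetical): compare treatment against the baseline, credit benefit only where agents actually used the tool, and net out run costs, all inside approved analytics storage.

    def attributable_monthly_value(baseline_aht_min: float, treated_aht_min: float,
                                   calls_per_month: int, adoption_rate: float,
                                   cost_per_minute: float, monthly_run_cost: float) -> float:
        """Benefit credited to the AI assistant only for calls where it was used."""
        minutes_saved_per_call = baseline_aht_min - treated_aht_min
        credited_calls = calls_per_month * adoption_rate
        gross_benefit = minutes_saved_per_call * credited_calls * cost_per_minute
        return gross_benefit - monthly_run_cost

    # Illustrative inputs only: 8.0 vs 7.2 minutes AHT, 100k calls/month, 60% adoption.
    print(round(attributable_monthly_value(8.0, 7.2, 100_000, 0.60, 0.90, 25_000), 2))  # 18200.0

Tracking adoption explicitly is what keeps the reported benefit from being overstated when agents bypass recommendations.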


Question 21

Topic: Domain V: Operationalize AI Solution

An AI team is deploying a complaint-triage model into a regulated contact center. In staging, accuracy meets the agreed KPI and a 2-week canary shows stable results (no drift signals). However, the production rollout is repeatedly delayed when the privacy office and security team say the required DPIA and penetration test were never scheduled; a rushed workaround led to an incident where PII was written to an analytics log and the release was rolled back, driving low user adoption.

What is the most likely underlying cause?

  • A. The deployment strategy and timeline did not include governance gates and ops constraints
  • B. End users are resisting adoption because change management was not performed
  • C. The model is overfitting and cannot generalize to production traffic
  • D. Concept drift is the primary driver of the poor rollout outcomes

Best answer: A

Explanation: The clues point away from a model-quality issue (KPIs met and canary is stable) and toward a planning failure. When privacy/security approvals and operational prerequisites aren’t built into the deployment strategy and timeline, teams either slip the schedule or bypass controls, increasing the likelihood of incidents and rollbacks. Those disruptions commonly reduce trust and adoption even if the model performs well.


Question 22

Topic: Domain II: Identify Business Needs and Solutions

A claims organization piloted an AI model to recommend which claims to fast-track. Offline metrics looked strong, but in the pilot adjusters rarely used the recommendations, a bias review flagged higher denial rates for one protected group, and a privacy incident occurred when the team began collecting extra customer attributes “to improve accuracy.” The drift dashboard shows stable input distributions versus training, and data quality checks passed. What is the most likely underlying cause?

  • A. Unclear problem statement and testable success criteria
  • B. Severe concept drift in production claim patterns
  • C. Inadequate end-user training on the new workflow
  • D. Insufficient access controls for sensitive customer data

Best answer: A

Explanation: The symptoms point to misalignment: strong offline results but low adoption, a bias signal, and reactive data collection that caused a privacy incident. This most often occurs when the problem statement, desired outcome, and success criteria (including guardrails) were not defined and agreed upfront, so the team cannot validate “success” in a consistent, testable way.


Question 23

Topic: Domain III: Identify Data Needs

A bank is preparing training data for a credit-risk model. Today, analysts apply transformations in notebooks against mutable tables. In 12 months, internal audit must be able to reproduce the exact training dataset and show what source data and transforms created it. Data contains PII and must stay within the secure analytics environment. The team also wants minimal rework for future iterations.

Which approach best optimizes reproducibility and data version tracking while meeting these constraints?

  • A. Save weekly CSV extracts with date-stamped filenames in a shared folder
  • B. Create a parameterized pipeline with version-controlled code and immutable dataset snapshots plus run metadata
  • C. Export transformed data to analysts’ local machines and document steps in a wiki
  • D. Freeze the current notebook as “final” and continue querying the same source tables

Best answer: B

Explanation: Use an automated, parameterized transformation pipeline where the code is version controlled and every run produces an immutable, time-stamped dataset snapshot with captured metadata. This creates a repeatable process and a defensible audit trail tying exact inputs and transform versions to the training dataset, without moving PII outside the secure environment.
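
One hedged way to realize option B (the paths, metadata fields, git usage, and parquet format are assumptions, not a prescribed implementation): each pipeline run writes an immutable snapshot plus run metadata that ties the exact inputs and code version to the training dataset, without data leaving the secure environment.

    import hashlib, json, subprocess
    from datetime import datetime, timezone
    from pathlib import Path

    def snapshot_training_data(df, source_tables: list, params: dict,
                               out_root: str = "/secure/analytics/snapshots") -> Path:
        """Write an immutable dataset snapshot and run metadata inside the secure zone.
        Assumes df is a pandas DataFrame and a parquet engine is installed."""
        run_id = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        run_dir = Path(out_root) / run_id
        run_dir.mkdir(parents=True, exist_ok=False)   # never overwrite an existing snapshot

        data_path = run_dir / "training_data.parquet"
        df.to_parquet(data_path, index=False)

        metadata = {
            "run_id": run_id,
            "code_version": subprocess.run(["git", "rev-parse", "HEAD"],
                                           capture_output=True, text=True).stdout.strip(),
            "source_tables": source_tables,
            "transform_params": params,
            "row_count": int(len(df)),
            "data_sha256": hashlib.sha256(data_path.read_bytes()).hexdigest(),
        }
        (run_dir / "run_metadata.json").write_text(json.dumps(metadata, indent=2))
        return run_dir

Twelve months later, audit can point at any snapshot directory and see which sources, parameters, and code commit produced that training dataset.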


Question 24

Topic: Domain V: Operationalize AI Solution

Which term describes a pre-defined and tested set of steps to revert an AI deployment to a previously stable version when the new release causes service failures or unacceptable model behavior?

  • A. Incident response plan
  • B. Model card
  • C. Drift monitoring plan
  • D. Rollback procedure

Best answer: D

Explanation: A rollback procedure is the deployment contingency that specifies exactly how to revert to a prior stable version when a release misbehaves in production. It is typically defined and tested before launch so teams can restore service and performance quickly with minimal impact.

PMI-CPMAI AI project map

Use this map after the sample questions to connect individual items to AI project methodology, data readiness, model lifecycle, governance, risk, stakeholder adoption, and responsible-AI decisions.

    flowchart LR
      S1["AI project lifecycle scenario"] --> S2
      S2["Define business problem and data context"] --> S3
      S3["Assess model risk governance and feasibility"] --> S4
      S4["Choose iteration experiment or control step"] --> S5
      S5["Validate outcome adoption and ethics"] --> S6
      S6["Monitor model and business performance"]

Quick Cheat Sheet

Cue | What to remember
Problem framing | Do not start with a model; define the business decision and success criteria first.
Data work | Data availability, quality, labeling, privacy, and drift can dominate project risk.
Lifecycle | AI projects need experimentation, validation, deployment, monitoring, and retraining logic.
Governance | Human oversight, transparency, security, and ethical controls must be explicit.
Adoption | A technically working model still needs users, process fit, and measurable value.

Mini Glossary

  • AI governance: Policies, controls, accountability, data practices, and human oversight for AI-enabled work.
  • Prompt risk: Risk that AI output is unreliable, biased, incomplete, insecure, or unsuitable for the decision context.
  • Risk: Uncertain event or condition that can affect objectives positively or negatively.
  • Stakeholder engagement: Identifying, analyzing, communicating with, and involving people affected by the work.
  • Value delivery: Creating outcomes that matter to customers, users, sponsors, and the organization.

Focused sample questions

Use these child pages when you want focused PM Mastery practice before returning to mixed sets and timed mocks.


Revised on Friday, May 15, 2026