
PMI-CPMAI: Support Responsible and Trustworthy AI Efforts

Try 10 focused PMI-CPMAI questions on Support Responsible and Trustworthy AI Efforts, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

  • Exam route: PMI-CPMAI
  • Topic area: Support Responsible and Trustworthy AI Efforts
  • Blueprint weight: 15%
  • Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Support Responsible and Trustworthy AI Efforts for PMI-CPMAI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 15% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Support Responsible and Trustworthy AI Efforts

You are leading a customer-support AI initiative that will use historical chat transcripts containing PII. The organization expects an internal audit within 6 months and wants evidence that key stakeholders approved each go/no-go decision (problem framing, data readiness, model evaluation, and production release) with clear criteria. The team is under pressure to ship an MVP in 8 weeks.

Which approach best optimizes auditability and accountability without creating unnecessary delays or privacy risk?

  • A. Store all approval artifacts and raw transcript samples in an open shared folder for transparency
  • B. Rely on sprint review notes and email threads as the approval record
  • C. Create a lightweight decision register with stage gates, criteria, evidence links, and role-based sign-offs
  • D. Collect a single executive approval at production launch with a summary slide deck

Best answer: C

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: A defensible audit trail requires recorded go/no-go checkpoints with pre-defined criteria, named approvers, decision dates, and linked evidence. A lightweight decision register (with controlled access and versioning) meets accountability needs while supporting an 8-week MVP. It also avoids introducing new privacy exposure when handling PII-containing artifacts.

The core need is an accountable, reviewable trail that proves who approved what decision, when, and based on which criteria and evidence. For AI initiatives, the highest-value approach is to formalize a small set of lifecycle stage gates (e.g., problem framing, data readiness, model evaluation, deployment readiness) and record each as a go/no-go entry with consistent fields.

A practical minimum set for each checkpoint is:

  • Decision name and date, with explicit go/no-go criteria
  • Required approver roles and captured sign-offs
  • Evidence references (links to controlled artifacts such as data quality results, bias/fairness tests, security/privacy review, evaluation summary)
  • Storage in a versioned repository with role-based access controls (especially for PII)
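The checkpoint fields above can be sketched as a small data structure. This is a minimal illustration only; the class, field names, and example values are hypothetical, not a prescribed PMI-CPMAI schema.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of one go/no-go entry in a lightweight decision register.
# All field names and values are illustrative examples.
@dataclass
class GateDecision:
    gate: str                  # e.g., "data readiness"
    decision_date: date
    criteria: list[str]        # explicit go/no-go criteria agreed in advance
    decision: str              # "go" or "no-go"
    approvers: dict[str, str]  # role -> named approver (captured sign-off)
    evidence_links: list[str]  # references to controlled, access-managed artifacts

entry = GateDecision(
    gate="data readiness",
    decision_date=date(2026, 3, 2),
    criteria=["PII masked in training extracts", "data quality score >= 95%"],
    decision="go",
    approvers={"data owner": "A. Rivera", "privacy officer": "M. Chen"},
    evidence_links=["dq-report-v3", "privacy-review-2026-02"],
)
```

Stored in a versioned repository with role-based access, one such entry per stage gate is usually enough evidence for an internal audit without slowing an 8-week MVP.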

This optimizes audit readiness and responsible AI accountability without slowing delivery through excessive bureaucracy or creating new privacy risks.

It captures explicit go/no-go criteria, approver identity, and supporting evidence in a controlled, access-managed audit trail while remaining lightweight.


Question 2

Topic: Support Responsible and Trustworthy AI Efforts

A bank is building an ML model to detect fraudulent card transactions. Training data includes customer PII, transaction details, and investigator notes. The solution will ingest data from the data warehouse, train in a controlled analytics environment, serve real-time inference via an API, and store logs for monitoring and audits.

Which end-to-end data handling practice should the team AVOID?

  • A. Pseudonymizing customer identifiers in training datasets and keeping the re-identification key in a separate, access-restricted system
  • B. Using encryption in transit and at rest, with secrets managed in a controlled vault
  • C. Allowing analysts to download raw training data to personal laptops for faster experimentation
  • D. Defining retention periods for raw data, features, and inference logs, and implementing secure deletion aligned to audit needs

Best answer: C

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: Secure end-to-end data handling minimizes unnecessary replication of sensitive data and keeps it within controlled, monitored environments. Allowing raw datasets to be downloaded to personal devices increases exposure, weakens access controls, and complicates traceability and deletion. The other practices strengthen confidentiality and lifecycle governance across collection, training, inference, and retention.

Designing secure data handling across the AI lifecycle means controlling where sensitive data can exist, who can access it, and how it is protected from ingestion through disposal. In this scenario, the highest-risk anti-pattern is creating unmanaged copies of raw PII on personal devices, which undermines least privilege, monitoring, incident response, and consistent retention/deletion.

A practical secure procedure typically includes:

  • Controlled environments for training and analysis (no local raw PII copies)
  • Encryption in transit/at rest and strong secret management
  • Data minimization and pseudonymization/tokenization where feasible
  • Logged, access-controlled inference and monitoring data with defined retention and secure deletion

The key takeaway is to prevent sensitive data from escaping governed systems while still enabling reproducible, auditable ML work.

This breaks controlled handling by expanding uncontrolled copies of sensitive data beyond approved environments and access controls.


Question 3

Topic: Support Responsible and Trustworthy AI Efforts

An AI team deployed a credit pre-approval model. Over three months, approval rates fluctuate, a fairness check shows a new bias signal against one segment, and a privacy review finds that some training records contained unmasked PII. When asked to investigate, the team cannot identify which dataset snapshot or feature set was used for the latest retrain or who approved the change, and it cannot roll back confidently; only informal chat messages exist.

What is the most likely underlying cause?

  • A. Concept drift caused by a major change in applicant behavior
  • B. Low user adoption due to insufficient training and communications
  • C. Missing auditable change/decision tracking for data, model, and approvals
  • D. Poor model performance due to an overly simple algorithm choice

Best answer: C

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: The strongest clue is the inability to trace which data, features, model version, and approvals correspond to the deployed behavior. That points to a breakdown in auditability across iterations (decision logs, versioning, and governance checkpoints). Without an audit trail, investigations, accountability, and rollback become unreliable, increasing the chance that privacy and bias issues slip into production.

An audit trail for AI initiatives is the practical mechanism that makes transparency and accountability possible across iterations. In this scenario, multiple symptoms (performance fluctuation, bias signal, and a privacy finding) become hard to diagnose because the team cannot answer basic traceability questions: what data snapshot was used, what changed, who approved it, and what is currently running.

A fit-for-purpose audit trail typically records, per iteration:

  • Dataset identifiers/lineage and preprocessing decisions
  • Model version, feature set, and evaluation results
  • Approval evidence (go/no-go, risk sign-offs) and release notes
  • Rollback path to a known-good baseline

The key takeaway is that uncontrolled, undocumented changes are a common root cause when teams cannot reproduce or explain model behavior in production.

Without versioned datasets/models and recorded approvals, the team cannot reproduce, explain, or safely roll back iterations, leading to uncontrolled drift, bias, and privacy failures.


Question 4

Topic: Support Responsible and Trustworthy AI Efforts

A multinational company is piloting an ML system to rank and shortlist job applicants for roles in Germany and France. The system uses applicants’ resumes, assessment scores, and transcripts from video interviews, and recruiters typically follow the recommended shortlist.

Which set of internal policies and external regulations should the project team prioritize for the compliance checklist?

  • A. GDPR, internal personal-data handling policy, and applicable employment/anti-discrimination rules
  • B. HIPAA and internal protected health information policy
  • C. SOX and internal financial reporting controls
  • D. PCI DSS and internal payment card data policy

Best answer: A

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: Because the system influences hiring decisions about individuals in the EU using personal data, the primary compliance drivers are privacy/data protection and employment-related requirements. That points to GDPR and internal policies governing personal data processing, retention, access, and security, plus applicable anti-discrimination and employment rules for the jurisdictions in scope.

Start by identifying what the AI is doing (employment decision support), who it affects (individual applicants), and what data it processes (EU personal data, including potentially sensitive inferences from interviews). In this scenario, the decisive factor is that the use case directly impacts individuals’ employment opportunities in the EU, making privacy and employment/fairness obligations central.

A practical compliance checklist should be anchored on:

  • GDPR obligations for lawful basis, transparency, data minimization, retention, security, and data subject rights.
  • Internal personal-data governance policies (classification, access control, retention, vendor/processor controls, auditability).
  • Applicable employment and anti-discrimination requirements in the jurisdictions where hiring occurs.

Other regulatory regimes (payment, health, financial reporting) are not triggered by the described data and decision context.

This is an EU employment decision use case using EU personal data, so GDPR plus internal data governance and employment/anti-discrimination requirements are directly applicable.


Question 5

Topic: Support Responsible and Trustworthy AI Efforts

A customer support organization has deployed an NLP model that auto-routes incoming tickets. The business objective is to reduce average handling time, but the organization has low risk tolerance for misrouted safety-related tickets. Only a small SRE team is available, and the system must remain auditable because ticket text contains PII and decisions are reviewed during internal audits.

In the first two weeks, operations reports intermittent spikes in misrouted safety tickets. The product owner, data science lead, and IT operations each assume another group “owns” incident response and change approvals.

What is the BEST next action?

  • A. Immediately disable auto-routing and revert to manual triage until retraining is complete
  • B. Ask the model vendor to take end-to-end responsibility for incident response and approvals
  • C. Schedule a cross-functional meeting to brainstorm likely root causes and remediation ideas
  • D. Define and publish a RACI plus an escalation/runbook for AI outcomes, changes, and incidents

Best answer: D

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: The immediate gap is unclear accountability in operations, which increases risk for safety-related misroutes and undermines auditability. Creating a documented RACI and escalation/runbook clarifies who is accountable for outcomes, who can approve changes, and how incidents are handled and recorded. This enables consistent response, governance, and an auditable trail while the team investigates and improves the model.

In operational AI, “who owns what” must be explicit for outcomes (business impact), changes (model/data/config updates), and incidents (misroutes, drift, outages). With low risk tolerance and audit needs, the best next step is to produce and socialize accountability documentation that operations can execute and auditors can trace.

At minimum, document and version-control:

  • Accountable owners (business outcome owner, model owner, operations/on-call owner)
  • Incident severity levels, escalation matrix, and response SLAs
  • Change authority (who can approve rollback, thresholds, and releases)
  • Required evidence for the audit trail (alerts, approvals, post-incident reviews)
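The severity and escalation items above can be expressed as a simple lookup that an on-call team executes consistently. This is a hedged sketch; the severity names, roles, and SLAs are invented examples, not a standard.

```python
# Illustrative severity-to-escalation mapping for an AI incident runbook.
# Severity levels, roles, and response SLAs are hypothetical examples.
ESCALATION = {
    "sev1": {  # safety-related misroute in production
        "notify": ["operations on-call", "model owner", "business outcome owner"],
        "response_minutes": 15,
        "rollback_authority": "model owner",
    },
    "sev2": {  # elevated misroute rate, no safety impact
        "notify": ["operations on-call", "model owner"],
        "response_minutes": 60,
        "rollback_authority": "model owner",
    },
    "sev3": {  # minor anomaly, monitor only
        "notify": ["operations on-call"],
        "response_minutes": 240,
        "rollback_authority": None,
    },
}

def escalation_for(severity: str) -> dict:
    """Return the runbook entry for a severity level, failing loudly if unknown."""
    if severity not in ESCALATION:
        raise ValueError(f"Unknown severity: {severity}")
    return ESCALATION[severity]
```

Keeping the mapping version-controlled alongside the RACI means every incident response and rollback decision is traceable to a published rule rather than to ad-hoc judgment.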

Once ownership and escalation paths are in place, investigation, retraining, and process changes can proceed under controlled, auditable governance.

It establishes accountable owners and documented escalation/change paths needed for safe, auditable operations before further tuning or scaling.


Question 6

Topic: Support Responsible and Trustworthy AI Efforts

A retail bank is preparing to deploy an AI-based credit decisioning model. To meet its model risk management policy, the team must keep audit-ready evidence of data lineage, control checks (e.g., access reviews), and approval sign-offs. The product owner asks to “document later” to hit the launch date, and these artifacts are not created. An internal audit is scheduled for next week as part of regulatory exam readiness.

What is the most likely near-term impact of this omission?

  • A. End users lose trust after months of inconsistent decisions
  • B. Inference latency increases due to missing governance artifacts
  • C. Deployment is paused because controls cannot be evidenced
  • D. Model performance drops because assumptions are undocumented

Best answer: C

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: Audit readiness depends on being able to prove what data and controls were used, who approved key decisions, and that required checks occurred. With an audit next week, missing traceability and approvals most directly creates an immediate compliance verification failure. The near-term consequence is typically a governance hold or delayed release until evidence is assembled.

The core concept is audit-ready documentation: traceability (data sources, transformations, versions), documented control operation (access reviews, segregation of duties, testing/validation evidence), and recorded approvals at required governance checkpoints. In the scenario, the omission happens right before an internal audit tied to regulatory exam readiness, so the most immediate impact is not technical degradation but an inability to demonstrate compliance on demand.

Near-term outcomes commonly include:

  • An audit finding or failed gate review due to insufficient evidence
  • A deployment pause while artifacts are recreated and approvals are obtained
  • Increased compliance and operational risk exposure until controls are evidenced

Technical impacts like drift, accuracy issues, or user sentiment are possible later, but they are not the most direct consequence of missing audit artifacts in the next week.

Without traceability and approval evidence, the solution is unlikely to pass the immediate audit/compliance gate, triggering a hold until documentation is produced.


Question 7

Topic: Support Responsible and Trustworthy AI Efforts

A health insurer is deploying an AI model that automatically approves or denies certain medical claims. During a pilot, a model update caused a spike in incorrect denials, and operations teams were unsure who could authorize a rollback, who owned customer-impact decisions, and when to escalate to compliance. Because the system makes high-impact decisions, what is the best way to clarify ownership and escalation paths for AI outcomes, changes, and incidents in production?

  • A. Implement an AI RACI and incident/change runbook tied to the ticketing system
  • B. Route monitoring alerts to the data science on-call engineer for rapid fixes
  • C. Rely on the vendor’s SLA and escalate issues only through account management
  • D. Publish a detailed model card with performance, bias, and data lineage

Best answer: A

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: High-impact AI decisions require explicit decision rights, not just technical documentation or monitoring. A documented accountability structure that names who is accountable for business outcomes and who is responsible for technical actions—paired with a defined escalation and rollback path—prevents confusion during incidents. Tying it to formal incident/change processes also preserves the audit trail needed for review.

For AI in operations—especially when it can directly affect customers—accountability must be operationalized: who is accountable for the decision outcomes, who can approve changes, and who can trigger/authorize rollback or suspension. The most effective approach is to document and socialize a clear ownership and escalation structure (often a RACI) and connect it to existing incident and change management so every event is traceable.

A practical setup includes:

  • Named roles for accountable business owner, model owner, operations, risk/compliance
  • Severity-based escalation and response timelines (including stop/rollback authority)
  • Change approval gates and required evidence (testing, sign-offs)
  • Ticket-based records for incidents, decisions, and actions to support audits

Model documentation and monitoring support governance, but they do not, by themselves, establish decision rights and escalation paths.

A named accountability matrix plus a documented incident/change escalation runbook creates clear decision rights (including rollback) and an auditable trail for high-impact AI operations.


Question 8

Topic: Support Responsible and Trustworthy AI Efforts

A team is ready to launch a machine-learning model that recommends credit-limit increases. In the project charter, stakeholders agreed on a go/no-go criterion: the model must achieve a disparate impact ratio of at least 0.80 for a protected group compared to the reference group, measured on the latest validation set.

During final evaluation, the disparate impact ratio is 0.72, while overall accuracy meets target. The business sponsor is pushing to deploy this week.

As the AI project manager, which action should you AVOID when making the go/no-go decision?

  • A. Deploy only as decision support with controls and disclosure
  • B. Reclassify the metric as “informational” to proceed
  • C. Delay release and implement bias mitigation with re-testing
  • D. Escalate to governance for a formal go/no-go decision

Best answer: B

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: Because the fairness outcome misses the pre-agreed go/no-go threshold, the team should follow governance and either mitigate and re-test or constrain use until criteria are met. Changing the status of the metric to force a “go” undermines the agreed acceptance criteria and removes accountability. Go/no-go decisions should be made transparently against documented thresholds.

Go/no-go criteria for fairness are acceptance criteria: if the measured fairness metric fails the agreed threshold, the solution is not ready for the originally intended production use. In this scenario, the disparate impact ratio (0.72) is below the pre-set minimum (0.80), so proceeding requires a controlled decision through the responsible AI governance process.

Appropriate responses typically include:

  • Triggering a governance checkpoint (risk/ethics/compliance) for a documented decision
  • Delaying release to apply mitigation (data, features, thresholds, post-processing) and re-evaluating
  • If business needs demand near-term value, narrowing deployment (e.g., decision support) with human oversight, disclosures, and monitoring until criteria are met

The key is not to redefine success criteria after seeing results without stakeholder approval and documented rationale.
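The fairness check in this scenario is simple arithmetic: the protected group's favorable-outcome rate divided by the reference group's. The sketch below assumes underlying rates (0.36 and 0.50) invented to reproduce the scenario's 0.72 figure.

```python
def disparate_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's favorable-outcome rate to the reference group's."""
    if reference_rate == 0:
        raise ValueError("Reference group rate must be non-zero")
    return protected_rate / reference_rate

# Hypothetical rates chosen to match the scenario's measured ratio of 0.72.
ratio = disparate_impact_ratio(protected_rate=0.36, reference_rate=0.50)

# The pre-agreed go/no-go threshold from the project charter.
passes_gate = ratio >= 0.80  # 0.72 < 0.80, so the gate fails
```

Because the computed ratio falls below the documented threshold, the only defensible paths run through governance: mitigate and re-test, or constrain the deployment, never a quiet reclassification of the metric.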

Unilaterally downgrading an agreed fairness go/no-go criterion bypasses governance and enables deploying a known unfair outcome.


Question 9

Topic: Support Responsible and Trustworthy AI Efforts

You are the AI project manager for a credit-risk model approaching pilot deployment. An internal audit requires end-to-end traceability of the training data.

Exhibit: Model card (excerpt)

Model: Credit Risk v1
Training data: CRM_export_v3.csv; collections_notes.xlsx
Time window: [not recorded]
Prep steps: "standard cleaning, one-hot encoding"
Label: 60+ days past due within 90 days
Data versioning/lineage: [TBD]

Based on the exhibit, what is the best next action to meet transparency and traceability expectations?

  • A. Create a data lineage record with source, extract logic, time window, and pipeline version
  • B. Collect additional training data to improve model accuracy before documenting
  • C. Restrict access to the training files so only the data science team can view them
  • D. Add model explainability visuals to the model card to increase transparency

Best answer: A

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: The exhibit shows the model card lacks key traceability elements: the time window, specific preparation steps, and any dataset/pipeline versioning or lineage. The best action is to document data sources and the end-to-end preparation workflow in a reproducible way (what was extracted, from where, when, and how it was transformed). This enables auditability and reliable handover.

Transparency for AI systems includes being able to trace training data back to its origin and reproduce how it became the model-ready dataset. In the exhibit, references to file names and vague “standard cleaning” are not sufficient for audit or operational support because they don’t establish provenance, scope, or repeatable transformation logic.

The most appropriate next action is to complete a data lineage and preparation record that captures, at minimum:

  • Authoritative source systems and owners (not just file names)
  • Extraction method (queries/filters), time window, and refresh/cutoff date
  • Join logic and label construction notes tied to source fields
  • Versioning of raw data, prepared dataset, and preprocessing pipeline (e.g., scripts/config hash)
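The minimum lineage fields above can be captured as a structured record rather than free text in a model card. The sketch below is illustrative; the keys, systems, and versions are hypothetical examples, not a mandated schema.

```python
# Hypothetical lineage record for one prepared training dataset.
# All keys and values are illustrative, not a prescribed format.
lineage_record = {
    "dataset_id": "credit-train-2026-02-v1",
    "sources": [
        {"system": "CRM", "owner": "Sales Ops", "extract": "CRM_export_v3 query, EU region filter"},
        {"system": "Collections", "owner": "Finance", "extract": "monthly notes export"},
    ],
    "time_window": {"start": "2024-01-01", "cutoff": "2025-12-31"},
    "label_definition": "60+ days past due within 90 days",
    "prep_pipeline": {"repo": "ml-pipelines", "version": "v2.4.1", "config_hash": "a1b2c3"},
}

def is_audit_ready(record: dict) -> bool:
    """Check that the minimum traceability fields are present and non-empty."""
    required = ["dataset_id", "sources", "time_window", "label_definition", "prep_pipeline"]
    return all(record.get(k) for k in required)
```

A check like `is_audit_ready` makes the gap in the exhibit concrete: a card listing only file names and "standard cleaning" would fail, because the time window and pipeline version fields are empty.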

This focuses on traceability and reproducibility rather than model performance or access control alone.

It directly fills the missing data provenance and preparation details needed to reproduce and audit the dataset.


Question 10

Topic: Support Responsible and Trustworthy AI Efforts

A team has built a promising prototype model to prioritize insurance claim investigations. During a review, stakeholders disagree on “what explainability means”: compliance wants auditable rationale for adverse actions, investigators want case-level reason codes they can act on, and executives want a simple view of key drivers.

What is the best next step?

  • A. Deploy the prototype and add explanations iteratively based on feedback
  • B. Elicit and document audience-specific explainability requirements and acceptance criteria
  • C. Pause work and require full model interpretability for all stakeholders
  • D. Select a specific explainability technique and retrofit it to the model

Best answer: B

What this tests: Support Responsible and Trustworthy AI Efforts

Explanation: Explainability is requirement-driven and depends on who is making which decision with the model output. The immediate need is to align stakeholders on the decisions being supported and to define the minimum explanation content, format, and evidence required for each audience. Those requirements then drive the choice of techniques, documentation, and evaluation gates.

The core transparency step here is to translate a vague request for “explainability” into concrete, audience-specific requirements tied to real decisions. In this scenario, compliance, investigators, and executives have different decisions and risks, so they will need different explanation types (e.g., audit-ready rationale vs. case-level reason codes vs. aggregated driver summaries) and different evidence/controls.

A good next step is to facilitate a short working session and produce a documented explainability plan that specifies:

  • Stakeholder groups and decisions they make
  • Explanation level (global vs. local), format, and delivery point
  • Acceptance criteria and validation approach (what “good enough” means)

Only after this should the team select explanation methods and incorporate them into evaluation and release gates.

You must first define what explanations each decision-maker needs and how they will be judged before choosing explanation techniques or proceeding.

Continue with full practice

Use the PMI-CPMAI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the PMI-CPMAI guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026