Try 10 focused PMI-CPMAI questions on Support Responsible and Trustworthy AI Efforts, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PMI-CPMAI |
| Topic area | Support Responsible and Trustworthy AI Efforts |
| Blueprint weight | 15% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Support Responsible and Trustworthy AI Efforts for PMI-CPMAI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 15% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Support Responsible and Trustworthy AI Efforts
You are leading a customer-support AI initiative that will use historical chat transcripts containing PII. The organization expects an internal audit within 6 months and wants evidence that key stakeholders approved each go/no-go decision (problem framing, data readiness, model evaluation, and production release) with clear criteria. The team is under pressure to ship an MVP in 8 weeks.
Which approach best optimizes auditability and accountability without creating unnecessary delays or privacy risk?
Best answer: C
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: A defensible audit trail requires recorded go/no-go checkpoints with pre-defined criteria, named approvers, decision dates, and linked evidence. A lightweight decision register (with controlled access and versioning) meets accountability needs while supporting an 8-week MVP. It also avoids introducing new privacy exposure when handling PII-containing artifacts.
The core need is an accountable, reviewable trail that proves who approved what decision, when, and based on which criteria and evidence. For AI initiatives, the highest-value approach is to formalize a small set of lifecycle stage gates (e.g., problem framing, data readiness, model evaluation, deployment readiness) and record each as a go/no-go entry with consistent fields.
A practical minimum set for each checkpoint includes:

- the lifecycle stage and the pre-defined go/no-go criteria it is judged against
- the decision (go, no-go, or conditional go) and the date it was made
- the named approver(s) accountable for the decision
- links to supporting evidence, rather than copies of PII-containing artifacts
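As a concrete illustration, here is a minimal sketch of one register entry, assuming a simple JSON-lines file as the access-controlled store. The field names, values, and file path are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch of a go/no-go decision register entry.
# Field names and the file path are assumptions, not a required schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class GateDecision:
    checkpoint: str          # e.g., "data readiness"
    criteria: list           # pre-defined go/no-go criteria
    decision: str            # "go", "no-go", or "conditional go"
    approver: str            # named accountable approver
    decided_on: str          # ISO date of the decision
    evidence_links: list = field(default_factory=list)  # links, not copies, of PII artifacts

entry = GateDecision(
    checkpoint="data readiness",
    criteria=["PII masked in training extracts", "source tables documented"],
    decision="go",
    approver="jane.doe@example.com",
    decided_on="2024-03-15",
    evidence_links=["https://wiki.example.com/ai/data-readiness-review"],
)

# Append to a version-controlled, access-managed register file.
with open("decision_register.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```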
This optimizes audit readiness and responsible AI accountability without slowing delivery through excessive bureaucracy or creating new privacy risks.
It captures explicit go/no-go criteria, approver identity, and supporting evidence in a controlled, access-managed audit trail while remaining lightweight.
Topic: Support Responsible and Trustworthy AI Efforts
A bank is building an ML model to detect fraudulent card transactions. Training data includes customer PII, transaction details, and investigator notes. The solution will ingest data from the data warehouse, train in a controlled analytics environment, serve real-time inference via an API, and store logs for monitoring and audits.
Which end-to-end data handling practice should the team AVOID?
Best answer: C
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: Secure end-to-end data handling minimizes unnecessary replication of sensitive data and keeps it within controlled, monitored environments. Allowing raw datasets to be downloaded to personal devices increases exposure, weakens access controls, and complicates traceability and deletion. The other practices strengthen confidentiality and lifecycle governance across collection, training, inference, and retention.
Designing secure data handling across the AI lifecycle means controlling where sensitive data can exist, who can access it, and how it is protected from ingestion through disposal. In this scenario, the highest-risk anti-pattern is creating unmanaged copies of raw PII on personal devices, which undermines least privilege, monitoring, incident response, and consistent retention/deletion.
A practical secure procedure typically includes:

- ingesting from the governed data warehouse under least-privilege, role-based access
- masking or pseudonymizing PII before data enters the analytics environment wherever the use case allows
- training only in the controlled analytics environment, with no raw extracts on personal devices
- encrypting data in transit and at rest, including inference logs
- retention and deletion schedules that cover training data, logs, and derived artifacts
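As one sketch of the pseudonymization step, direct identifiers can be mapped to stable pseudonyms with a keyed hash before records leave the governed zone. The key name and the record fields below are assumptions; in practice the key would come from a managed secrets store.

```python
# Minimal sketch: keyed pseudonymization of a direct identifier before
# a record enters the analytics environment. The key literal is a
# placeholder; a real key would live in a secrets manager.
import hmac
import hashlib

PSEUDONYM_KEY = b"replace-with-key-from-your-secrets-manager"  # assumption

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to a stable pseudonym.

    Deterministic so joins still work across tables; keyed so the mapping
    cannot be rebuilt without access to the key.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"card_number": "4111111111111111", "amount": 42.50}
record["card_number"] = pseudonymize(record["card_number"])
print(record)  # PII replaced before the record leaves the governed zone
```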
The key takeaway is to prevent sensitive data from escaping governed systems while still enabling reproducible, auditable ML work.
This breaks controlled handling by expanding uncontrolled copies of sensitive data beyond approved environments and access controls.
Topic: Support Responsible and Trustworthy AI Efforts
An AI team deployed a credit pre-approval model. Over three months, approval rates fluctuate, a fairness check shows a new bias signal against one segment, and a privacy review finds some training records contained unmasked PII. When asked to investigate, the team cannot identify which dataset snapshot or feature set was used for the latest retrain or who approved the change, and cannot roll back confidently; only informal chat messages exist.
What is the most likely underlying cause?
Best answer: C
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: The strongest clue is the inability to trace which data, features, model version, and approvals correspond to the deployed behavior. That points to a breakdown in auditability across iterations (decision logs, versioning, and governance checkpoints). Without an audit trail, investigations, accountability, and rollback become unreliable, increasing the chance that privacy and bias issues slip into production.
An audit trail for AI initiatives is the practical mechanism that makes transparency and accountability possible across iterations. In this scenario, multiple symptoms (performance fluctuation, bias signal, and a privacy finding) become hard to diagnose because the team cannot answer basic traceability questions: what data snapshot was used, what changed, who approved it, and what is currently running.
A fit-for-purpose audit trail typically records, per iteration:

- the dataset snapshot and feature set versions used for training
- the model version produced and the evaluation results that justified promoting it
- who approved the change, when, and under which criteria
- what is currently deployed, so rollback targets are unambiguous
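For illustration, one lightweight way to capture this is an append-only log with one record per iteration. Every field name and value below is invented; the point is the shape of the record, not a required format.

```python
# Sketch of an append-only, per-iteration audit record. All fields and
# the log path are illustrative assumptions.
import json
from datetime import datetime, timezone

iteration_record = {
    "model_version": "credit-preapproval-1.4.2",       # assumed naming scheme
    "dataset_snapshot": "warehouse.loans@2024-06-01",  # pinned snapshot id
    "feature_set_version": "features-v12",
    "eval_metrics": {"auc": 0.81, "disparate_impact_ratio": 0.84},
    "approved_by": "model.risk@bank.example",
    "approved_at": datetime.now(timezone.utc).isoformat(),
    "change_summary": "retrained on Q2 snapshot; no feature changes",
}

with open("model_audit_trail.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(iteration_record) + "\n")
```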
The key takeaway is that uncontrolled, undocumented changes are a common root cause when teams cannot reproduce or explain model behavior in production.
Without versioned datasets/models and recorded approvals, the team cannot reproduce, explain, or safely roll back iterations, leading to uncontrolled drift, bias, and privacy failures.
Topic: Support Responsible and Trustworthy AI Efforts
A multinational company is piloting an ML system to rank and shortlist job applicants for roles in Germany and France. The system uses applicants’ resumes, assessment scores, and transcripts from video interviews, and recruiters typically follow the recommended shortlist.
Which set of internal policies and external regulations should the project team prioritize for the compliance checklist?
Best answer: A
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: Because the system influences hiring decisions about individuals in the EU using personal data, the primary compliance drivers are privacy/data protection and employment-related requirements. That points to GDPR and internal policies governing personal data processing, retention, access, and security, plus applicable anti-discrimination and employment rules for the jurisdictions in scope.
Start by identifying what the AI is doing (employment decision support), who it affects (individual applicants), and what data it processes (EU personal data, including potentially sensitive inferences from interviews). In this scenario, the decisive factor is that the use case directly impacts individuals’ employment opportunities in the EU, making privacy and employment/fairness obligations central.
A practical compliance checklist should be anchored on:

- GDPR obligations for processing EU applicants' personal data: lawful basis, transparency, retention, access, and security
- employment and anti-discrimination rules applicable in Germany and France
- internal policies governing personal data handling, data governance, and HR decision-making
Other regulatory regimes (payment, health, financial reporting) are not triggered by the described data and decision context.
This is an EU employment decision use case using EU personal data, so GDPR plus internal data governance and employment/anti-discrimination requirements are directly applicable.
Topic: Support Responsible and Trustworthy AI Efforts
A customer support organization has deployed an NLP model that auto-routes incoming tickets. The business objective is to reduce average handling time, but the organization has low risk tolerance for misrouted safety-related tickets. Only a small SRE team is available, and the system must remain auditable because ticket text contains PII and decisions are reviewed during internal audits.
In the first two weeks, operations reports intermittent spikes in misrouted safety tickets. The product owner, data science lead, and IT operations each assume another group “owns” incident response and change approvals.
What is the BEST next action?
Best answer: D
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: The immediate gap is unclear accountability in operations, which increases risk for safety-related misroutes and undermines auditability. Creating a documented RACI and escalation/runbook clarifies who is accountable for outcomes, who can approve changes, and how incidents are handled and recorded. This enables consistent response, governance, and an auditable trail while the team investigates and improves the model.
In operational AI, “who owns what” must be explicit for outcomes (business impact), changes (model/data/config updates), and incidents (misroutes, drift, outages). With low risk tolerance and audit needs, the best next step is to produce and socialize accountability documentation that operations can execute and auditors can trace.
At minimum, document and version-control:

- a RACI (or equivalent) naming who is accountable for routing outcomes, who approves model/data/config changes, and who responds to incidents
- an escalation path for safety-related misroutes, with thresholds and response times
- an incident runbook and change-approval procedure that produce auditable records
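For illustration only, an excerpt of such a RACI for this scenario might look like the following. The role assignments are assumptions, and a real matrix should name individuals, not just teams.

| Activity | Product owner | Data science lead | IT operations |
|---|---|---|---|
| Business outcome of routing decisions | A | C | I |
| Model/config change approval | A | R | C |
| Safety-ticket misroute incident response | C | C | R/A |
| Audit evidence for changes and incidents | A | R | R |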
Once ownership and escalation paths are in place, investigation, retraining, and process changes can proceed under controlled, auditable governance.
It establishes accountable owners and documented escalation/change paths needed for safe, auditable operations before further tuning or scaling.
Topic: Support Responsible and Trustworthy AI Efforts
A retail bank is preparing to deploy an AI-based credit decisioning model. To meet its model risk management policy, the team must keep audit-ready evidence of data lineage, control checks (e.g., access reviews), and approval sign-offs. The product owner asks to “document later” to hit the launch date, and these artifacts are not created. An internal audit is scheduled for next week as part of regulatory exam readiness.
What is the most likely near-term impact of this omission?
Best answer: C
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: Audit readiness depends on being able to prove what data and controls were used, who approved key decisions, and that required checks occurred. With an audit next week, missing traceability and approvals most directly creates an immediate compliance verification failure. The near-term consequence is typically a governance hold or delayed release until evidence is assembled.
The core concept is audit-ready documentation: traceability (data sources, transformations, versions), documented control operation (access reviews, segregation of duties, testing/validation evidence), and recorded approvals at required governance checkpoints. In the scenario, the omission happens right before an internal audit tied to regulatory exam readiness, so the most immediate impact is not technical degradation but an inability to demonstrate compliance on demand.
Near-term outcomes commonly include:

- an audit finding that required evidence of data lineage, control operation, and approvals cannot be produced
- a governance hold or delayed release until the documentation is assembled
- rework to reconstruct lineage and approvals retroactively, which is slower and less reliable than capturing them in-flight
Technical impacts like drift, accuracy issues, or user sentiment are possible later, but they are not the most direct consequence of missing audit artifacts in the next week.
Without traceability and approval evidence, the solution is unlikely to pass the immediate audit/compliance gate, triggering a hold until documentation is produced.
Topic: Support Responsible and Trustworthy AI Efforts
A health insurer is deploying an AI model that automatically approves or denies certain medical claims. During a pilot, a model update caused a spike in incorrect denials, and operations teams were unsure who could authorize a rollback, who owned customer-impact decisions, and when to escalate to compliance. Because the system makes high-impact decisions, what is the best way to clarify ownership and escalation paths for AI outcomes, changes, and incidents in production?
Best answer: A
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: High-impact AI decisions require explicit decision rights, not just technical documentation or monitoring. A documented accountability structure that names who is accountable for business outcomes and who is responsible for technical actions—paired with a defined escalation and rollback path—prevents confusion during incidents. Tying it to formal incident/change processes also preserves the audit trail needed for review.
For AI in operations—especially when it can directly affect customers—accountability must be operationalized: who is accountable for the decision outcomes, who can approve changes, and who can trigger/authorize rollback or suspension. The most effective approach is to document and socialize a clear ownership and escalation structure (often a RACI) and connect it to existing incident and change management so every event is traceable.
A practical setup includes:

- a named business owner accountable for decision outcomes and customer impact
- named technical owners authorized to approve changes and to trigger rollback or suspension
- defined escalation thresholds, including when to involve compliance
- integration with existing incident and change management so every event leaves an auditable record
Model documentation and monitoring support governance, but they do not, by themselves, establish decision rights and escalation paths.
A named accountability matrix plus a documented incident/change escalation runbook creates clear decision rights (including rollback) and an auditable trail for high-impact AI operations.
Topic: Support Responsible and Trustworthy AI Efforts
A team is ready to launch a machine-learning model that recommends credit-limit increases. In the project charter, stakeholders agreed on a go/no-go criterion: the model must achieve a disparate impact ratio of at least 0.80 for a protected group compared to the reference group, measured on the latest validation set.
During final evaluation, the disparate impact ratio is 0.72, while overall accuracy meets target. The business sponsor is pushing to deploy this week.
As the AI project manager, which action should you AVOID when making the go/no-go decision?
Best answer: B
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: Because the fairness outcome misses the pre-agreed go/no-go threshold, the team should follow governance and either mitigate and re-test or constrain use until criteria are met. Changing the status of the metric to force a “go” undermines the agreed acceptance criteria and removes accountability. Go/no-go decisions should be made transparently against documented thresholds.
Go/no-go criteria for fairness are acceptance criteria: if the measured fairness metric fails the agreed threshold, the solution is not ready for the originally intended production use. In this scenario, the disparate impact ratio (0.72) is below the pre-set minimum (0.80), so proceeding requires a controlled decision through the responsible AI governance process.
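For reference, the metric itself is simple: it is the protected group's selection (approval) rate divided by the reference group's rate. The sketch below uses invented counts chosen to reproduce the scenario's 0.72.

```python
# Sketch: disparate impact ratio = selection rate of the protected group
# divided by selection rate of the reference group. Counts are made up
# to mirror the 0.72-vs-0.80 scenario.
protected_approved, protected_total = 180, 500   # 36.0% approval rate
reference_approved, reference_total = 250, 500   # 50.0% approval rate

protected_rate = protected_approved / protected_total
reference_rate = reference_approved / reference_total
dir_ratio = protected_rate / reference_rate      # 0.36 / 0.50 = 0.72

GO_THRESHOLD = 0.80  # pre-agreed go/no-go criterion from the charter
print(f"DIR = {dir_ratio:.2f}; gate {'passes' if dir_ratio >= GO_THRESHOLD else 'fails'}")
```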
Appropriate responses typically include:

- mitigating (e.g., re-weighting, threshold adjustment, feature review) and re-testing against the 0.80 threshold
- constraining the deployment scope or adding human review until the criterion is met
- escalating through the responsible AI governance process, with the trade-off and rationale documented for stakeholder sign-off
The key is not to redefine success criteria after seeing results without stakeholder approval and documented rationale.
Unilaterally downgrading an agreed fairness go/no-go criterion bypasses governance and enables deploying a known unfair outcome.
Topic: Support Responsible and Trustworthy AI Efforts
You are the AI project manager for a credit-risk model approaching pilot deployment. An internal audit requires end-to-end traceability of the training data.
Exhibit: Model card (excerpt)

```
Model: Claims Risk v1
Training data: CRM_export_v3.csv; collections_notes.xlsx
Time window: [not recorded]
Prep steps: "standard cleaning, one-hot encoding"
Label: 60+ days past due within 90 days
Data versioning/lineage: [TBD]
```
Based on the exhibit, what is the best next action to meet transparency and traceability expectations?
Best answer: A
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: The exhibit shows the model card lacks key traceability elements: the time window, specific preparation steps, and any dataset/pipeline versioning or lineage. The best action is to document data sources and the end-to-end preparation workflow in a reproducible way (what was extracted, from where, when, and how it was transformed). This enables auditability and reliable handover.
Transparency for AI systems includes being able to trace training data back to its origin and reproduce how it became the model-ready dataset. In the exhibit, references to file names and vague “standard cleaning” are not sufficient for audit or operational support because they don’t establish provenance, scope, or repeatable transformation logic.
The most appropriate next action is to complete a data lineage and preparation record that captures, at minimum:

- each source system and the extraction query or export used, with the extraction date
- the time window and record scope the data covers
- every preparation step, specific enough to re-run (not "standard cleaning")
- version identifiers or content hashes for each dataset artifact and for the preparation code
- the owner responsible for each source and transformation
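As one illustration, the loose file references in the exhibit could be pinned to content hashes and the missing scope fields recorded. The file names come from the exhibit; the hashing approach, time window, step list, and code reference are all assumptions.

```python
# Sketch: pin the exhibit's file references to content hashes and record
# the missing scope fields. File names come from the exhibit; all other
# values are illustrative.
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute a content hash so the exact dataset artifact is identifiable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

lineage = {
    "sources": [
        {"file": "CRM_export_v3.csv", "sha256": sha256_of("CRM_export_v3.csv")},
        {"file": "collections_notes.xlsx", "sha256": sha256_of("collections_notes.xlsx")},
    ],
    "time_window": "2022-01-01/2023-12-31",            # assumption: must be recorded
    "prep_steps": [                                     # replaces "standard cleaning"
        "drop rows with null account_id",
        "one-hot encode region and product_type",
    ],
    "prep_code_version": "git:prep-pipeline@a1b2c3d",   # illustrative reference
}
print(json.dumps(lineage, indent=2))
```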
This focuses on traceability and reproducibility rather than model performance or access control alone.
It directly fills the missing data provenance and preparation details needed to reproduce and audit the dataset.
Topic: Support Responsible and Trustworthy AI Efforts
A team has built a promising prototype model to prioritize insurance claim investigations. During a review, stakeholders disagree on “what explainability means”: compliance wants auditable rationale for adverse actions, investigators want case-level reason codes they can act on, and executives want a simple view of key drivers.
What is the best next step?
Best answer: B
What this tests: Support Responsible and Trustworthy AI Efforts
Explanation: Explainability is requirement-driven and depends on who is making which decision with the model output. The immediate need is to align stakeholders on the decisions being supported and to define the minimum explanation content, format, and evidence required for each audience. Those requirements then drive the choice of techniques, documentation, and evaluation gates.
The core transparency step here is to translate a vague request for “explainability” into concrete, audience-specific requirements tied to real decisions. In this scenario, compliance, investigators, and executives have different decisions and risks, so they will need different explanation types (e.g., audit-ready rationale vs. case-level reason codes vs. aggregated driver summaries) and different evidence/controls.
A good next step is to facilitate a short working session and produce a documented explainability plan that specifies, per audience:

- the decision the explanation supports (adverse-action review, case triage, portfolio oversight)
- the minimum explanation content and format (auditable rationale, case-level reason codes, aggregated driver summaries)
- the evidence, controls, and acceptance criteria used to judge whether an explanation is adequate
Only after this should the team select explanation methods and incorporate them into evaluation and release gates.
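To make the idea of case-level reason codes concrete (the investigators' need here), the toy sketch below ranks feature contributions for a linear scoring model. The features, weights, and case values are invented, and a real project would choose its explanation technique only after the requirements step above.

```python
# Toy sketch of case-level reason codes for a linear scoring model:
# rank each feature's contribution (weight * standardized value) for one case.
# Feature names, weights, and case values are invented for illustration.
import numpy as np

features = ["claim_amount", "prior_claims", "provider_flag", "days_to_report"]
weights = np.array([0.8, 1.2, 2.1, -0.4])   # assumed model coefficients
case = np.array([1.5, 0.2, 1.0, -0.9])      # one claim, standardized features

contributions = weights * case
top = np.argsort(-np.abs(contributions))[:2]  # two strongest drivers
for i in top:
    direction = "raised" if contributions[i] > 0 else "lowered"
    print(f"{features[i]} {direction} the priority score ({contributions[i]:+.2f})")
```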
You must first define what explanations each decision-maker needs and how they will be judged before choosing explanation techniques or proceeding.
Use the PMI-CPMAI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PMI-CPMAI guide on PMExams.com, then return to PM Mastery for timed practice.