Browse Certification Practice Tests by Exam Family

PMI-CPMAI: Identify Business Needs and Solutions

Try 10 focused PMI-CPMAI questions on Identify Business Needs and Solutions, with answers and explanations, then continue with PM Mastery.

On this page

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

FieldDetail
Exam routePMI-CPMAI
Topic areaIdentify Business Needs and Solutions
Blueprint weight26%
Page purposeFocused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Identify Business Needs and Solutions for PMI-CPMAI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

PassWhat to doWhat to record
First attemptAnswer without checking the explanation first.The fact, rule, calculation, or judgment point that controlled your answer.
ReviewRead the explanation even when you were correct.Why the best answer is stronger than the closest distractor.
RepairRepeat only missed or uncertain items after a short break.The pattern behind misses, not the answer letter.
TransferReturn to mixed practice once the topic feels stable.Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 26% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Identify Business Needs and Solutions

A retail bank wants to implement a generative AI assistant to help contact-center agents draft email responses. The sponsor proposes a 6-week timeline and cites a goal of reducing average email handling time by 15%, but has not described how AI risks will be approved or who will run the solution after launch.

As the AI project manager evaluating initial feasibility, what should you ask to verify FIRST?

  • A. A detailed breakdown of customer email intent categories for the last year
  • B. Which generative model family is most accurate on industry benchmarks
  • C. Whether an AI operating model exists with named owners, decision rights, and change capacity to govern and support the solution
  • D. A draft training curriculum for all contact-center agents

Best answer: C

What this tests: Identify Business Needs and Solutions

Explanation: Before committing to delivery and feasibility, you must confirm the organization can govern, adopt, and operate an AI solution. Verifying named accountability, decision rights, and change capacity ensures there is a realistic path to approvals, rollout, and ongoing monitoring/support. Without this readiness, data and model choices may be moot or the deployment may stall.

Initial AI feasibility is not only about technical possibility; it also requires organizational readiness to implement and sustain the solution responsibly. In this scenario, the sponsor has a KPI and an aggressive timeline, but key readiness elements are missing: who approves AI risk decisions, who owns the product post-launch, and whether there is capacity for change management and ongoing operations.

Verify readiness by clarifying:

  • Accountable roles (business owner, risk/compliance, operations)
  • Decision rights and governance checkpoints for approvals
  • Skills and resourcing for deployment, monitoring, and incident response
  • Change capacity for rollout, training, and process updates

Once readiness is confirmed, you can refine data needs, evaluation approach, and the schedule with realistic constraints.

Organizational readiness hinges on having accountable governance, skills, and capacity to adopt and operate the AI safely and sustainably.


Question 2

Topic: Identify Business Needs and Solutions

In an AI solution design, which term refers to an operational document that specifies how the deployed model will be monitored, how changes are approved and released, and how production incidents (including rollback) are handled?

  • A. Data lineage
  • B. Model card
  • C. Experiment tracking log
  • D. MLOps runbook

Best answer: D

What this tests: Identify Business Needs and Solutions

Explanation: An MLOps runbook is the operations-focused playbook for running a model in production. It captures what to monitor and how to respond, along with release and change-control steps, so support teams can operate the solution reliably. This is a key artifact to outline early so operational needs shape the design.

An MLOps runbook is a practical, operations-ready guide for keeping an AI solution stable after deployment. It typically describes the monitoring approach (what metrics are tracked and how alerts are handled), the change-control/release process (approvals, promotion steps, and rollback triggers), and the incident response process (triage, escalation, containment, and post-incident review). Outlining the runbook early helps ensure the solution design includes the necessary telemetry, access controls, and operational roles before go-live. In contrast, several other common AI artifacts document the model or data but don’t prescribe how to operate and support it in production.

It defines procedures for monitoring, change control, incident triage, and rollback for a production model.


Question 3

Topic: Identify Business Needs and Solutions

You are drafting an AI solution for a hospital network to predict outpatient appointment no-shows and trigger outreach actions. The sponsor says, “Use EHR and scheduling data to reduce no-shows,” but has not specified how the prediction will be consumed operationally.

Which question should you ask FIRST to define the data flow and processing requirements?

  • A. Where will predictions be used, and required latency/refresh?
  • B. Which modeling approach will be easiest to explain to clinicians?
  • C. What is the minimum acceptable model AUC for go-live?
  • D. How many data scientists can be dedicated to this effort?

Best answer: A

What this tests: Identify Business Needs and Solutions

Explanation: Data flow and processing requirements depend first on how and when the organization will use predictions in the workflow. Clarifying the consuming system, triggering action, and latency/refresh needs determines whether the solution requires real-time integration, scheduled batch scoring, or another pattern. Once that is known, you can specify upstream feeds, transformations, and operational SLAs consistently with the use case.

The core concept is to anchor data flow and processing design to the intended operational use. In this scenario, “predict no-shows” is not enough to define requirements such as ingestion frequency, processing windows, integration interfaces, or infrastructure placement.

Ask first about the consumption pattern because it drives the rest of the data flow:

  • Where the score will be consumed (scheduler, outreach platform, clinician queue)
  • When it must be available (real time vs nightly vs several days ahead)
  • How often it must refresh and expected volume (SLA/throughput)

With those constraints, you can then map source systems, required data elements, permissible movement, and processing steps to meet the workflow timing. Performance targets or staffing can be decided after the operational data-flow needs are clear.

The consumption point and timing constraints determine whether data must flow in real time or batch and what processing SLAs are needed.


Question 4

Topic: Identify Business Needs and Solutions

A product team proposes an AI-based ticket triage solution to reduce customer support backlog. Stakeholders disagree on what the “real problem” is (slow routing vs. unclear categories vs. duplicate tickets), and there is no documented baseline for cycle time or rework. The sponsor asks you to move the initiative forward this week.

What is the best next step?

  • A. Run a rapid model proof of concept to estimate achievable accuracy
  • B. Draft an operational rollout and monitoring plan to accelerate adoption
  • C. Start a data readiness assessment of historical tickets and labels
  • D. Conduct stakeholder interviews and workflow observation, then document pain points and baseline metrics

Best answer: D

What this tests: Identify Business Needs and Solutions

Explanation: Before committing to an AI solution, you must first elicit and document the business pain points and the current-state workflow. Interviews and observation reconcile conflicting stakeholder narratives, surface root causes, and establish a measurable baseline. That evidence becomes the foundation for a clear problem statement and AI-appropriate success criteria.

The core step at this point is business need discovery: elicit and document pain points using stakeholder interviews and direct observation of the current workflow. In the scenario, stakeholders disagree on the problem and there is no baseline, so jumping to data, modeling, or rollout would lock in assumptions and can optimize the wrong outcome.

A practical next step is to:

  • Interview key roles (agents, team leads, QA, ops) to capture pain points and desired outcomes
  • Observe/shadow real ticket intake and routing to validate where delays and rework occur
  • Document the current-state process, pain points, and baseline KPIs (e.g., cycle time, rework rate)
  • Convert findings into a problem statement and measurable success criteria

Only after this is documented should you assess data readiness and consider model options and operational plans.

You need validated, documented pain points (and a baseline) before selecting an AI approach or defining success criteria.


Question 5

Topic: Identify Business Needs and Solutions

An organization wants to assess its readiness to scale AI beyond pilots. Leaders propose creating a cross-functional group that sets common AI standards, provides reusable templates and training, and coordinates governance checkpoints across business, data, risk, and IT.

Which term best describes this capability?

  • A. Model registry
  • B. AI Center of Excellence (CoE)
  • C. Data stewardship
  • D. MLOps pipeline

Best answer: B

What this tests: Identify Business Needs and Solutions

Explanation: An AI Center of Excellence is an organizational capability that improves AI readiness by centralizing standards, enabling skills development, and coordinating governance across functions. This directly addresses the need to scale AI consistently and safely beyond isolated pilots.

Organizational readiness for AI implementation includes having the skills, governance structure, and change capacity to repeatably deliver AI solutions. An AI Center of Excellence (CoE) is a common pattern used to build that readiness: it convenes cross-functional stakeholders (business, data, engineering, security, risk/compliance) to define standards, provide training and reusable assets, and establish consistent governance checkpoints for AI initiatives.

This is different from tooling or single-function roles. A model registry is primarily a repository for versioning and managing models; an MLOps pipeline is the automation and processes for build/test/deploy/monitor; and data stewardship focuses on data definitions, quality, and access policies. The key idea is an operating model capability that enables scale, not a single artifact or technical pipeline.

A CoE is a cross-functional capability that standardizes practices, builds skills, and coordinates governance to enable scalable AI delivery.


Question 6

Topic: Identify Business Needs and Solutions

You are scoping an AI-driven prior-authorization triage solution for a health insurer. Constraints: only de-identified historical claims can be used for the 10-week pilot, compliance requires conservative automation (human-in-the-loop), and the CFO will fund work only with a clear ROI case. Stakeholders expect fewer manual touches, faster turnaround time, and fewer regulatory audit findings.

What is the BEST next action to support ROI determination?

  • A. Build the ROI using labor cost savings only, since de-identified data limits financial attribution
  • B. Request access to identified claim records so you can quantify fraud recovery revenue in the ROI model
  • C. Run a benefits workshop to classify expected benefits into cost, revenue, risk, and quality, then define measurable KPIs and owners
  • D. Select a model approach and evaluation metrics to ensure the pilot meets accuracy targets

Best answer: C

What this tests: Identify Business Needs and Solutions

Explanation: Before calculating ROI, the team must clarify and categorize the initiative’s benefit types and make them measurable. In this scenario, expected outcomes map to cost reduction (manual touches), quality improvements (turnaround time), and risk reduction (audit findings), with any revenue effects requiring careful justification. A structured benefits-and-KPI step enables a credible ROI case without violating privacy or governance constraints.

ROI determination depends on identifying the full set of benefit types and translating them into measurable, attributable indicators. Here, fewer manual touches is primarily cost reduction; faster turnaround time is a quality improvement that may also influence customer retention; and fewer audit findings is risk reduction (avoided losses, penalties, and remediation effort). The best next step is to align stakeholders on which benefit types will be claimed, how each will be measured in the pilot (KPIs, baselines, targets), and who owns each metric—while staying within de-identified data use and human-in-the-loop governance.

A practical output is a benefits map that:

  • Classifies benefits (cost/revenue/risk/quality)
  • Defines KPI, baseline, target, and measurement method
  • States assumptions and attribution limits for the pilot

This creates auditable inputs for ROI rather than jumping to modeling or narrowing benefits prematurely.

It explicitly identifies and structures benefit types (cost, revenue, risk, quality) into measurable inputs needed for an ROI case under the stated constraints.


Question 7

Topic: Identify Business Needs and Solutions

A bank is preparing to pilot an internal generative-AI assistant that uses retrieval-augmented generation (RAG) over sensitive policy and pricing documents. The risk assessment identified likely threats: prompt-injection attempts to bypass guardrails, unintended disclosure of confidential data, and automated misuse (high-volume probing) to extract internal content.

Which evidence best validates the solution is cyber-risk ready for the pilot go/no-go decision?

  • A. Training completion records showing all team members finished secure development and privacy modules
  • B. Network and host penetration test results for the infrastructure running the assistant
  • C. Adversarial security test report covering prompt injection, data exfiltration, and model/content extraction, with fixes verified and residual risk explicitly accepted
  • D. Model evaluation results showing high answer quality and user satisfaction in a controlled UAT

Best answer: C

What this tests: Identify Business Needs and Solutions

Explanation: The most defensible readiness evidence is proof that the system withstands the specific adversarial behaviors identified in the risk assessment. An adversarial security testing report demonstrates whether prompt-injection, data leakage/exfiltration, and extraction attempts are prevented or detected, and documents residual risk and acceptance for governance. This is stronger than activity outputs or non-AI performance testing.

Cybersecurity risk readiness for AI systems is best validated by evidence that controls work against realistic AI attack paths tied to the identified threats. In this scenario, the key risks are prompt injection (bypassing instructions/guardrails), data exposure through RAG (retrieval and disclosure of sensitive content), and automated misuse to extract internal information at scale.

The most decision-useful artifact is an adversarial testing (red-team) report that:

  • Executes abuse cases aligned to the threat model (injection, exfiltration, extraction, high-volume probing)
  • Verifies mitigations (e.g., access controls, retrieval authorization, output filtering, rate limits, monitoring/alerts)
  • Records remaining gaps and an explicit residual-risk decision for pilot scope

Other security evidence can be helpful, but it does not directly validate these AI-specific misuse and adversarial failure modes.

This directly validates AI-specific threat mitigations and remaining cyber risk using attack-focused evidence, not indirect activity or performance measures.


Question 8

Topic: Identify Business Needs and Solutions

A retailer plans to scale an ML-based fraud model that automatically cancels online orders. In a recent pilot, a brief data pipeline issue degraded model performance and caused a spike in legitimate order cancellations, leading to customer complaints and a call-center backlog. Executives are concerned about reputational harm and business continuity during the upcoming holiday peak.

Which metric/evidence/artifact best validates that the solution is ready to manage the reputational and continuity impacts of AI errors or degradation?

  • A. A completed model card describing intended use, limitations, and ethical considerations
  • B. Operational readiness report showing alert thresholds, rollback steps, and a tested incident + customer-communication drill meeting the RTO
  • C. Offline validation metrics (e.g., AUC/F1) on a held-out test set
  • D. A stakeholder demo summary showing high satisfaction with user experience

Best answer: B

What this tests: Identify Business Needs and Solutions

Explanation: Because the key risk is reputational damage and operational disruption from model or pipeline degradation, readiness is best validated by evidence of detection, response, and recovery capability. A validated incident/rollback process tied to business targets (like RTO) shows the team can limit customer impact when errors occur. This is stronger than static quality or documentation artifacts that don’t prove continuity performance under failure conditions.

When AI errors or degradation can directly affect customers, reputational and business continuity risk is driven by how quickly you can detect issues, reduce harm, and recover operations—not just how good the model looks offline. The most decision-relevant validation is operational evidence that failure modes have been anticipated and the organization can execute a response within agreed continuity targets.

The strongest readiness evidence typically shows:

  • Defined monitoring signals and alert thresholds linked to customer harm (e.g., false cancellations, complaint rate)
  • A proven mitigation path (rollback, safe mode, manual review) with owners and steps
  • A tested incident-response and customer-communication drill that meets the stated recovery objectives

Offline model scores, documentation, and positive demos can be necessary, but they do not validate real-world resilience to degradation and the ability to protect brand trust during an incident.

It demonstrates the organization can detect degradation quickly and restore service while managing external communications to limit reputational and continuity impact.


Question 9

Topic: Identify Business Needs and Solutions

You are preparing a business case package for an AI-powered churn reduction initiative that must pass an internal audit. During review, you find the following excerpt.

Business case v0.3 (excerpt)
Problem: "Too many customers leave" (no baseline cited)
Proposed solution: Predict churn risk; trigger outreach
Success metric: "Reduce churn" (target TBD)
Data: CRM + contact-center notes (access pending)
Benefits: "Improved retention" (no financial estimate)
Owner: (blank)   Approvals: "Discussed with Sales" (no record)
Assumptions/constraints: (blank)

What is the best next action to improve clarity, traceability, and audit readiness of this business case?

  • A. Finalize the model selection and evaluation plan first
  • B. Add baselines, defined KPIs, assumptions, ownership, and signed approvals
  • C. Focus the document on qualitative benefits to gain buy-in
  • D. Proceed to a pilot to collect metrics later

Best answer: B

What this tests: Identify Business Needs and Solutions

Explanation: The exhibit shows multiple audit-blocking gaps: no baseline, no defined target metric, no documented assumptions/constraints, and no accountable owner or recorded approvals. Refining the business case to specify measurable success criteria and decision accountability creates clear traceability from problem to expected outcomes and provides evidence for later governance reviews.

Audit-ready business cases must be clear (unambiguous problem and success criteria), traceable (who decided what, when, and why), and complete enough to support go/no-go decisions. In the excerpt, the problem lacks a baseline, the success metric has no definition or target, data access is uncertain, benefits are not quantified, and ownership/approvals are undocumented.

A practical refinement is to:

  • Add baseline and target KPIs with precise definitions and measurement method
  • Document assumptions, constraints, and dependencies (including data access)
  • Assign accountable owner(s) and capture formal approvals/version history

This strengthens decision quality now and supports future reviews and audits without relying on verbal context.

These missing elements make outcomes measurable, decisions traceable, and the artifact auditable.


Question 10

Topic: Identify Business Needs and Solutions

A manufacturing firm wants to reduce unplanned downtime by detecting early signs of pump failure from vibration and temperature telemetry. Available data includes 18 months of high-frequency sensor readings for 220 pumps, but only 12 confirmed failure events and inconsistent maintenance notes. The solution must run on-premises near the equipment (low latency), be maintainable by a small team, and produce explanations technicians can act on.

Which AI approach best optimizes time-to-market and risk reduction while fitting the data constraints?

  • A. Use a large language model to analyze notes and predict impending failures
  • B. Use reinforcement learning to optimize a maintenance policy from downtime rewards
  • C. Train a deep supervised classifier to predict failure within 7 days
  • D. Train an anomaly detector on normal telemetry and alert on deviations

Best answer: D

What this tests: Identify Business Needs and Solutions

Explanation: Because confirmed failures are rare and labels are sparse, a supervised failure-prediction model is unlikely to generalize well. A one-class/semi-supervised anomaly detection approach can be trained primarily on abundant normal operating data, deployed on-prem for low latency, and paired with clear deviation drivers that technicians can interpret and validate.

The core selection principle is matching the AI approach to the learning signal available. Here, the business goal is early warning, but the dataset has abundant “healthy” telemetry and very few confirmed failure examples, making supervised failure classification high risk for overfitting and missed detection.

A better fit is anomaly detection (one-class or semi-supervised) trained on normal operating patterns, then flagging statistically or model-defined deviations. This optimizes time-to-market and maintainability because it can start with existing telemetry and straightforward features (rolling aggregates, frequency-domain summaries) and produce actionable explanations such as “vibration in band X increased 3× above baseline.” The key takeaway is to leverage plentiful normal data when failure labels are scarce, rather than forcing a label-hungry approach.

With very few labeled failures, modeling “normal” behavior enables earlier detection using available data and supports simpler, explainable deployment.

Continue with full practice

Use the PMI-CPMAI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Free review resource

Read the PMI-CPMAI guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026