APMG AIPM Practice Test

Practice APMG AIPM with free sample questions, timed mock exams, and detailed explanations for governance, delivery, and change decisions.

AIPM is APMG International’s AI-Driven Project Manager certification for professionals who need practical AI fluency inside project planning, delivery, and organizational adoption. If you are searching for AIPM sample exam questions, a practice test, or an exam simulator, this is the main PM Mastery page: start on the web and continue on iOS or Android with the same account.

Choose AIPM when you want a broad AI-driven project management route rather than a single Scrum-role or PMI-only lens. It fits learners who need AI lifecycle awareness, tool-fit decisions, delivery use cases, adoption risks, and practical action planning. If you need a stronger governance-and-operations route, compare PMI-CPMAI. If you need a mainstream PM credential with AI context, compare PMP 2026.

Interactive Practice Center

Start a practice session for APMG AI-Driven Project Manager (AIPM) below, or open the full app in a new tab. For the best experience, open it in a new tab and navigate with swipes/gestures or the mouse wheel, just like on your phone or tablet.

Open Full App in a New Tab

A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same account they use on web and mobile.

Use it on iPhone or Android too: get PM Mastery on the App Store or on Google Play, signing in with the same account you use on web. The same subscription works across web and mobile.

What this AIPM practice page gives you

  • A fast route into the PM Mastery simulator for AIPM.
  • Short drills, mixed sets, and timed practice for AI-driven project management decisions.
  • Detailed explanations that connect the right answer to the lifecycle step, tool choice, or governance response.
  • 24 on-page sample questions plus access to a larger PM Mastery library with 2,500+ AIPM practice questions.
  • A clear free-preview path before you subscribe.
  • The same account across web and mobile.

AIPM exam snapshot

  • Vendor: APMG International
  • Official exam name: APMG AI-Driven Project Manager (AIPM)
  • Exam code: AIPM
  • Questions: 40
  • Time limit: 40 minutes
  • Format: closed-book and proctored
  • Pass mark: 60%

Because the exam is short, the fastest gains usually come from removing hesitation around lifecycle stages, AI tool fit, organizational adoption risks, and action planning.

  • AIPM : broader AI-driven project management across lifecycle, tool fit, case studies, and adoption choices.
  • PMI-CPMAI : deeper AI initiative management across business case, data, evaluation, governance, and operations.
  • PMP 2026 : broad mainstream project leadership, with AI and sustainability appearing as part of the refreshed PMP blueprint.
  • PSM-AI / PSPO-AI : Scrum-role-specific AI routes rather than general project-delivery coverage.

Topic coverage for AIPM practice

| Topic | Weight | Estimated questions |
| --- | --- | --- |
| 1. Embracing AI in Project Management and Basic Concepts | 17% | 7 |
| 2. The AI Project Life Cycle: Navigating from Problem Scoping to Evaluation | 17% | 7 |
| 3. Optimizing Project Outcomes with AI: AI Tools and Techniques | 17% | 7 |
| 4. Challenges of Bringing AI into the Organization | 17% | 7 |
| 5. Case Studies and Real-World Applications of AI in Project Management | 16% | 6 |
| 6. Harnessing the Future: Action Plan for AI-Driven Project Management | 16% | 6 |

How to use the AIPM simulator efficiently

  1. Start with one topic and run a short drill immediately after review.
  2. Review every miss until you can explain the AI project-management logic behind the best answer.
  3. Move into mixed sets once you can switch comfortably between lifecycle, tooling, organizational, and case-based decisions.
  4. Finish with full timed runs to rehearse pace and judgment under pressure.

Free preview vs premium

  • Free preview: a smaller web set so you can validate the question style and explanation depth.
  • Premium: the full AIPM practice bank, focused drills, mixed sets, timed mock exams, detailed explanations, and progress tracking across web and mobile.

Need deeper concept review first?

If you want concept-first reading before heavier simulator work, use the companion guide at PMExams.com.

24 AIPM sample questions with detailed explanations

These sample questions cover multiple blueprint areas for AIPM. Use them to check your readiness here, then move into the full PM Mastery question bank for broader timed coverage.

Question 1

Topic: 2. The AI Project Life Cycle: Navigating from Problem Scoping to Evaluation

A project team is developing an AI model to forecast customer churn for a subscription business. After the first pilot release, the dashboard shows “low churn risk” for many accounts that the customer success leads consider urgent save-cases. In review, the data scientist explains the model is performing well against the training label they were given, but business SMEs argue the output is “not actionable” and refuse to use it for planning.

The project manager finds no documented agreement on what counts as “churn,” who owned label definitions, or who would sign off on model usefulness before building the dashboard.

What is the most likely underlying cause of the failure?

  • A. Overfitting caused by an overly complex algorithm
  • B. Unclear ownership of label and success-criteria definition
  • C. Stakeholders resisting AI due to change fatigue
  • D. Data drift from recent customer behavior changes

Best answer: B

Explanation: The symptoms point to a model that is internally consistent with its training target but misaligned with business reality. That typically happens when the project manager, technical team, and business SMEs did not coordinate responsibilities for defining the target/labels, acceptance criteria, and business validation checkpoints during model development.


Question 2

Topic: 4. Challenges of Bringing AI into the Organization

Midway through a customer-facing software project, the PM starts using a public generative AI tool to draft weekly status updates by pasting in unredacted defect summaries and support tickets. The organization’s stated value is “customer data stays in approved internal systems,” and the customer sponsor has not been told this AI is being used. What is the most likely near-term impact?

  • A. The organization incurs regulatory penalties and lawsuits as the primary immediate consequence
  • B. Work is paused for a data-use review, reducing stakeholder trust and creating schedule risk
  • C. The project avoids major costs because automated reporting permanently reduces staffing needs
  • D. Delivery quality declines over the next releases due to model drift in the AI tool

Best answer: B

Explanation: This AI usage conflicts directly with the organization’s stated data-handling values and with the customer sponsor’s expectations of transparency. The most likely immediate outcome is a governance/privacy escalation that forces the team to stop, assess impact, and remediate, which quickly harms stakeholder trust and introduces schedule risk.


Question 3

Topic: 1. Embracing AI in Project Management and Basic Concepts

A project team is building a predictive model to flag likely schedule slippage from two years of completed project data. Executives will decide whether to embed the model in the PMO dashboard, and they want an unbiased performance estimate to trust the results. Which description best explains training, validation, and testing in simple project terms for this situation?

  • A. Train on all historical data, then test by scoring the same projects to confirm it works
  • B. Use the validation portion for the final performance report because it is closest to real use
  • C. Train on one portion, tune on a separate validation portion, and report final performance on an untouched test portion
  • D. Train on most data and use the test portion repeatedly to tune settings until accuracy is highest

Best answer: C

Explanation: Training data is what the team uses to build the model, validation data is a separate slice used to choose/tune model settings, and test data is held back to provide the final unbiased performance estimate. Because executives need trustworthy evidence before embedding the model, the key is to keep the test set untouched until the end.
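For intuition, here is a minimal sketch of the three-way split the explanation describes, in plain Python. The function name, fractions, and the 100 stand-in "projects" are illustrative, not part of any AIPM syllabus; the point is simply that the test portion stays untouched until the final report.

```python
import random

def three_way_split(records, train_frac=0.6, val_frac=0.2, seed=42):
    """Shuffle once, then partition into train/validation/test sets."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]   # held back for the final, unbiased estimate
    return train, val, test

projects = list(range(100))             # stand-in for 100 historical projects
train, val, test = three_way_split(projects)
```

The team builds on `train`, tunes settings against `val`, and scores `test` exactly once for the executive report.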


Question 4

Topic: 1. Embracing AI in Project Management and Basic Concepts

A project manager plans to introduce an AI assistant that analyzes the delivery team’s internal chat and email to flag emerging schedule risks. The sponsor supports the pilot, and IT security has confirmed the solution meets technical controls. Which stakeholder group is most directly impacted, and what concern is most likely to be raised?

  • A. Delivery team members; privacy and perceived workplace surveillance
  • B. Procurement; lack of clarity on vendor licensing terms
  • C. IT security; insufficient compute capacity to run the model
  • D. Executive sponsor; inability to explain model decisions to the board

Best answer: A

Explanation: When AI is introduced into a workflow, the most impacted stakeholders are those whose data and daily work are being analyzed or altered. In this scenario, the AI processes employees’ communications, making the delivery team (and often employee representatives) the closest stakeholder group. Their most common concern is privacy and perceived surveillance, which can quickly undermine adoption if not addressed upfront.


Question 5

Topic: 3. Optimizing Project Outcomes with AI: AI Tools and Techniques

Your project uses an AI risk assistant that flags: “High risk of supplier API delay in the next 3 weeks (0.72 probability). Top drivers: recent slip in supplier sprint burndown; increase in open defects; unresolved integration dependencies.” The supplier disputes the alert.

Before you decide to accept the risk, fund mitigation, or escalate to the steering committee, which evidence should you gather first to avoid unnecessary near-term schedule/cost disruption and loss of stakeholder trust?

  • A. Retrain the model using last year’s project outcomes
  • B. Verify current supplier data and critical-path schedule impact
  • C. Survey stakeholders about perceived delivery risk
  • D. Wait for the next monthly status report to confirm

Best answer: B

Explanation: The fastest defensible way to act on an AI-identified risk is to validate the signal against current, decision-relevant evidence: the underlying supplier performance inputs and the resulting impact on the critical path. That evidence supports an immediate choice to accept, mitigate, or escalate without overreacting to a potentially noisy prediction. It also protects stakeholder trust by showing a transparent basis for action.


Question 6

Topic: 1. Embracing AI in Project Management and Basic Concepts

A project team uses an ML model to forecast whether the next release will miss its date. For the current plan, the model outputs a 0.72 probability of missing the milestone based on recent throughput and defect trends. Which statement about using this output for decision-making is INCORRECT?

  • A. Report it as a probability with assumptions and confidence
  • B. Choose mitigation using a risk threshold and impact trade-offs
  • C. Monitor prediction accuracy over time and adjust the model
  • D. Treat 0.72 as certain and rebaseline schedule immediately

Best answer: D

Explanation: Many AI outputs are probabilistic, so they describe likelihood, not certainty. Decisions should account for uncertainty by using thresholds, trade-offs, and transparent communication of assumptions rather than treating a single predicted probability as a guaranteed outcome. Ongoing monitoring helps ensure the probability remains decision-useful as conditions change.
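A tiny sketch of what "using a risk threshold" can look like in practice. The threshold values and response wording below are hypothetical examples, not prescribed figures; a real team would agree on them with stakeholders.

```python
def plan_response(p_miss, escalate_at=0.7, mitigate_at=0.4):
    """Map a predicted miss probability to a graded response,
    instead of treating the number as a certainty."""
    if p_miss >= escalate_at:
        return "escalate: fund mitigation and brief the sponsor"
    if p_miss >= mitigate_at:
        return "mitigate: add buffer and monitor weekly"
    return "accept: keep monitoring the prediction"

decision = plan_response(0.72)   # the scenario's predicted probability
```

With the scenario's 0.72, this sketch would escalate, but the schedule is not rebaselined on the prediction alone.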


Question 7

Topic: 2. The AI Project Life Cycle: Navigating from Problem Scoping to Evaluation

A team deployed an ML model to forecast weekly call volume for workforce planning. For the first 6 months it met the acceptance criterion (MAE ≤ 10%), but in the last 4 weeks MAE rose to 18% and the forecast bias shifted strongly upward.

Evaluation notes:

  • A new customer segment launched 2 months ago; monitoring shows clear input data drift versus the training set.
  • The business still needs the forecasts.
  • Ground-truth labels are available weekly and you have enough new labeled data from the last 8 weeks.
  • No changes were made to the scoring code or infrastructure.

Which action best fits these evaluation findings?

  • A. Retrain the model using recent labeled data and revalidate performance
  • B. Decommission the model and revert to manual forecasting
  • C. Replace the model with a completely different modeling approach immediately
  • D. Tune hyperparameters without changing the training data

Best answer: A

Explanation: The evaluation indicates performance decay driven by a changed data distribution, not a deployment defect. With sufficient recent labeled data available and the use case still valuable, retraining on up-to-date data (then revalidating against the acceptance criterion) is the appropriate next step to restore accuracy and reduce bias.
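To make "input data drift" concrete, here is a deliberately crude drift check: flag an alert when the recent mean of an input sits far outside the training-era distribution. The weekly volume numbers and z-score threshold are invented for illustration; production monitoring would typically use a proper statistic such as PSI or a KS test.

```python
from statistics import mean, stdev

def mean_shift_alert(train_values, recent_values, z_threshold=3.0):
    """Flag drift when the recent mean is many training-era
    standard deviations away from the training mean."""
    mu, sigma = mean(train_values), stdev(train_values)
    z = abs(mean(recent_values) - mu) / sigma
    return z > z_threshold, round(z, 2)

# Hypothetical weekly call volumes: training era vs. the recent weeks
train_vol = [100, 98, 103, 97, 102, 99, 101, 100]
recent_vol = [130, 128, 135, 131, 129, 133, 132, 134]
drifted, z_score = mean_shift_alert(train_vol, recent_vol)
```

An alert like this, combined with available fresh labels, is what justifies retraining rather than touching the scoring code.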


Question 8

Topic: 5. Case Studies and Real-World Applications of AI in Project Management

You are evaluating an AI-based “schedule slippage predictor” to embed into weekly portfolio reporting. Sponsors want it rolled out company-wide next month.

Exhibit: Readiness & evidence (excerpt)

Data coverage: 2 of 5 business units; ~40% projects missing weekly updates
Offline validation (held-out): AUC 0.82 (Units A/B only)
Error analysis: false positives higher on small projects
Integration: API to PPM tool not yet tested in production
Security/privacy review: pending
Ops plan: monitoring + rollback drafted; not rehearsed
Change impact: PMs request guidance for acting on alerts

Based on the exhibit, what is the best next action?

  • A. Roll out only the dashboard now and add predictions in a later release
  • B. Run a timeboxed pilot with clear success criteria before scaling
  • C. Proceed with full rollout and address gaps during hypercare
  • D. Cancel the rollout until a new model eliminates false positives

Best answer: B

Explanation: The exhibit indicates the solution is not ready for a full rollout: training evidence covers only 2 of 5 units, data completeness is weak, and key go-live dependencies (security review, production integration, operational rehearsal) are unfinished. A pilot is the appropriate step to validate real-world performance and workflow adoption with defined success measures before scaling.


Question 9

Topic: 6. Harnessing the Future: Action Plan for AI-Driven Project Management

A PMO wants to launch its first AI-driven project management use case within 8 weeks using a small team. Constraints: no new data collection during the pilot, outputs must be reviewable by humans before sharing, and the organization is risk-averse (privacy/legal concerns). The goal is a visible “quick win” that demonstrates value.

Exhibit: Candidate use cases (1=low, 5=high)

| Use case | Expected value | Feasibility now | Delivery/ethics risk |
| --- | --- | --- | --- |
| Auto-generate weekly status summaries from approved project documents | 4 | 5 | 2 |
| Predict schedule slippage across the portfolio using historical project data | 5 | 2 | 3 |
| Generate draft vendor contract clauses for procurement | 3 | 3 | 5 |
| Detect on-site safety noncompliance using camera feeds | 4 | 2 | 4 |


Which is the best starting use case to select based on value, feasibility, and risk?

  • A. Auto-generate weekly status summaries from approved project documents
  • B. Predict schedule slippage across the portfolio using historical project data
  • C. Generate draft vendor contract clauses for procurement
  • D. Detect on-site safety noncompliance using camera feeds

Best answer: A

Explanation: A strong first AI use case is typically a low-risk, high-feasibility “quick win” that uses existing data and keeps humans accountable for final outputs. The status-summary automation fits the 8-week constraint, can be validated through reviewer acceptance and time saved, and avoids high-stakes decisions or sensitive data exposure compared with the other options.


Question 10

Topic: 1. Embracing AI in Project Management and Basic Concepts

You are managing a project deploying a neural-network model to forecast weekly contact-center volume for staffing. The model shows 96% training accuracy and did well in a pilot for Region A, but the rollout will include new regions where a key input field is often missing and a product launch next month is expected to change customer behavior.

Which evidence best validates the decision to proceed with rollout while accounting for real-world model performance risks?

  • A. Model size and training time meet the technical targets
  • B. Out-of-time, multi-region holdout MAE plus drift report
  • C. Stakeholder survey shows strong confidence in the forecasts
  • D. High training accuracy on the full historical dataset

Best answer: B

Explanation: The most credible validation for real-world performance is evaluation on unseen data that matches expected deployment conditions. Using a recent, multi-region holdout set tests representativeness and changing conditions, while drift evidence addresses whether patterns have shifted since training. This directly targets the main reasons models fail after going live: poor data quality/coverage and non-stationary environments.


Question 11

Topic: 1. Embracing AI in Project Management and Basic Concepts

You manage a software migration project. The steering committee requests an updated schedule/cost forecast and the top 5 risks within 24 hours. You have 4 hours today to prepare the update.

A team member proposes pasting a full export of tickets and vendor invoices (includes customer PII and negotiated rates) into a consumer generative AI site that is not on the company’s approved tool list to draft the forecast narrative and risk list. Leaders want a credible update with a clear audit trail of assumptions.

What is the BEST next action?

  • A. Avoid AI entirely and create a fresh forecast without referencing historical tickets or invoices
  • B. Summarize and de-identify inputs, use an approved environment, and have SMEs validate AI outputs before sharing
  • C. Use the consumer AI to generate a single-point forecast and adopt it as the new baseline immediately
  • D. Use the consumer AI with the full export, then send the output to the steering committee to meet the deadline

Best answer: B

Explanation: The immediate risks are data exposure from using an unapproved external tool and incorrect outputs if leaders rely on AI-generated forecasts without validation. The best next action is to minimize and sanitize data, keep processing in an approved environment, and add human-in-the-loop review so the update is credible and traceable.


Question 12

Topic: 1. Embracing AI in Project Management and Basic Concepts

A logistics PMO wants an AI capability to predict whether a shipment will be late at the time it is booked, so dispatch can intervene. The team proposes supervised learning (late vs on-time) using two years of shipment history, but an executive asks how you will validate that supervised learning is the right learning approach before committing budget.

Which artifact/metric is the best evidence to validate that decision?

  • A. Label-quality and coverage report plus time-split validation (e.g., AUC)
  • B. High training-set accuracy from an initial prototype model
  • C. Total number of data sources integrated and records collected
  • D. Planner survey results indicating enthusiasm to use AI recommendations

Best answer: A

Explanation: To validate choosing supervised learning, you need evidence that a trustworthy target label exists at sufficient scale and that a model generalizes to future periods. A label-quality/coverage assessment combined with time-split validation directly tests whether supervised prediction is feasible and credible for the intended decision point.


Question 13

Topic: 5. Case Studies and Real-World Applications of AI in Project Management

You are reviewing a one-page case study claiming an AI-driven forecasting approach reduced average milestone slippage by 15% across three software projects. The write-up says it “used historical task updates and risk logs” and that PMs received “weekly reforecast recommendations,” but provides few details.

Which information request is NOT necessary to properly evaluate the credibility and transferability of this case study?

  • A. Validation method and reported performance metrics
  • B. Data sources, time period, and data-quality handling
  • C. Pre-AI baseline and comparable KPI definitions
  • D. Exact AI tool/vendor product name used

Best answer: D

Explanation: To evaluate an AI case study, you need the context that makes the result interpretable: what “improvement” was measured against, what data fueled the model, and how performance was validated. These details determine credibility and whether the results could generalize to your environment. The specific vendor/tool name is not required to judge methodological soundness.


Question 14

Topic: 5. Case Studies and Real-World Applications of AI in Project Management

In AI-driven project management, what term describes the warning sign where the team routinely accepts an AI forecast or recommendation without challenge, even when their domain knowledge or new evidence suggests it may be wrong?

  • A. Data leakage
  • B. Model drift
  • C. Automation bias
  • D. Overfitting

Best answer: C

Explanation: This behavior is best described as automation bias: people defer to an AI system’s output as the “default truth” and stop applying critical thinking. In projects, it shows up as unchallenged AI schedules, risk scores, or resource recommendations despite credible contrary signals.


Question 15

Topic: 3. Optimizing Project Outcomes with AI: AI Tools and Techniques

You are building an integrated schedule for a product launch with three teams. The draft plan has missing predecessor links because teams planned in separate backlogs, and the launch date is fixed. You have an AI assistant that can analyze the WBS, ticket links, and similar past projects to suggest dependencies and highlight potential critical path risks.

Which action should you NOT take when using AI for this purpose?

  • A. Ask AI to flag highly connected activities for focused critical-path risk discussions
  • B. Use AI to propose missing cross-team dependencies for a joint review workshop
  • C. Baseline the schedule using AI-suggested links without team validation
  • D. Run AI-supported what-if scenarios to see which dependency delays threaten the launch milestone

Best answer: C

Explanation: AI can accelerate dependency discovery and critical-path risk identification, but its outputs remain hypotheses. The key control is validating inferred predecessor relationships with the teams and data owners before committing them to the network. Skipping validation can misidentify the critical path and drive incorrect mitigation decisions.


Question 16

Topic: 1. Embracing AI in Project Management and Basic Concepts

A project team built an AI model to forecast delivery dates from sprint metrics. In testing it achieved training \(R^2=0.98\) but only \(R^2=0.55\) on a holdout set. The sponsor wants the model rolled out next week for portfolio reporting.

The PM proposes delaying rollout by one sprint to use cross-validation, simplify the model, and add more representative data from two other teams. What is the most likely near-term impact?

  • A. Immediate rollout cuts cost because the model is already trained.
  • B. Using a simpler model will immediately reduce compute cost drastically.
  • C. One-sprint delay, but fewer forecast surprises and higher trust.
  • D. Collecting more data increases scope, causing major budget overrun now.

Best answer: C

Explanation: The gap between training and holdout performance is a classic overfitting signal, so rolling out immediately would likely create unreliable forecasts. Adding proper validation, simplifying the model, and improving data representativeness reduces near-term risk of bad portfolio decisions. The most immediate trade-off is a short schedule delay to stabilize performance and credibility.


Question 17

Topic: 1. Embracing AI in Project Management and Basic Concepts

You are managing a 6-month digital transformation project. In 24 hours you must brief the steering committee on schedule risk and propose recovery actions. You have thousands of Jira comments and weekly status notes, some containing customer PII, and team members disagree on whether a key vendor delay is “manageable.”

Which approach best balances speed, quality, cost, and risk when using AI on this work?

  • A. Use AI to summarize work logs and flag schedule-risk drivers, then have the PM validate sources, sanitize PII, and make the final recovery recommendation
  • B. Feed all raw notes to AI and accept its recommended recovery plan without changes to save time
  • C. Use AI only to schedule meetings and format slides; keep all analysis and drafting manual
  • D. Avoid AI entirely and have the PM manually read all notes to ensure accuracy and confidentiality

Best answer: A

Explanation: AI is well-suited to rapidly summarizing large volumes of project text and highlighting potential risk drivers, which meets the 24-hour constraint. Human judgment is still needed to validate evidence, handle PII safely, and choose recovery actions that reflect stakeholder priorities and real-world constraints. This balances speed with governance and decision accountability.


Question 18

Topic: 1. Embracing AI in Project Management and Basic Concepts

A sponsor tells your project team, “We need to add deep learning to our process so it becomes AI-driven,” but provides no further details about the problem.

What should you ask/verify FIRST to determine whether the solution really needs AI, ML, or deep learning?

  • A. Whether the task can be handled with explicit rules or must learn from data (and what data types/labels exist)
  • B. Run a quick neural-network prototype to see what accuracy you get
  • C. Which vendor AI platform the organization prefers
  • D. How many GPUs and how much compute budget are available

Best answer: A

Explanation: Start by clarifying the problem mechanism and inputs: is the solution mainly rule-based, or does it require learning patterns from data, and what kind of data is available. AI is the broad umbrella, ML is AI that learns from data, and deep learning is a subset of ML typically used for complex/unstructured inputs and larger datasets. This first question prevents prematurely committing to deep learning just because it was named.


Question 19

Topic: 2. The AI Project Life Cycle: Navigating from Problem Scoping to Evaluation

You are piloting an AI model that auto-routes IT service desk tickets. The sponsor wants to claim “33% time saved” and approve a full rollout based on the pilot.

Exhibit: Pilot summary (4 weeks)

| Metric | Baseline (all teams) | Pilot team |
| --- | --- | --- |
| Avg handle time | 18 min | 12 min |
| First-time-right routing | 82% | 80% |
| Password reset tickets | 10% | 35% |

Which approach best evaluates the AI’s impact on project outcomes using evidence before deciding to scale?

  • A. Roll out now and rely on production monitoring dashboards
  • B. Use the vendor’s offline accuracy report as rollout evidence
  • C. Retrain the model until routing accuracy exceeds baseline
  • D. Run stratified A/B trial and compare agreed KPIs

Best answer: D

Explanation: The pilot’s faster handle time is not yet credible evidence of AI impact because the pilot processed a very different ticket mix. A stratified A/B test (or equivalent controlled comparison) isolates the AI effect and measures outcomes with agreed KPIs such as handle time, first-time-right routing, and escalation/rework rates. That produces defensible evidence for time savings, quality change, and risk reduction before scaling.
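One way to see why ticket mix matters: reweight each group's per-category handle times onto the same reference mix before comparing. The per-category times below are invented (the exhibit only reports overall averages), so treat this purely as a sketch of mix adjustment, not a recalculation of the pilot.

```python
def mix_adjusted_time(per_category_time, reference_mix):
    """Reweight per-category handle times by a shared reference ticket mix,
    so two groups are compared on the same workload composition."""
    return sum(per_category_time[c] * share for c, share in reference_mix.items())

# Hypothetical per-category average handle times (minutes)
pilot_times = {"password_reset": 4, "other": 16}
baseline_times = {"password_reset": 5, "other": 19}

# Compare both groups on the baseline ticket mix (10% resets, 90% other)
reference_mix = {"password_reset": 0.10, "other": 0.90}
pilot_adj = mix_adjusted_time(pilot_times, reference_mix)
baseline_adj = mix_adjusted_time(baseline_times, reference_mix)
```

Under these made-up inputs the mix-adjusted gap is much smaller than the raw 18-vs-12-minute headline, which is exactly the distortion a stratified A/B trial is designed to remove.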


Question 20

Topic: 3. Optimizing Project Outcomes with AI: AI Tools and Techniques

You are planning a 6-month CRM migration. An AI estimator trained on your company’s past projects forecasts “12 weeks to complete” with “85% confidence,” but you know the training set mostly contains smaller, non-regulated migrations and your project has a new compliance workstream.

Which action should you AVOID when using the AI output to make planning decisions?

  • A. Validate the forecast assumptions with SMEs and adjust inputs for the compliance workstream
  • B. Translate the forecast into a range with contingency and reforecast after early actuals are available
  • C. Baseline 12 weeks and treat the AI forecast as the committed date
  • D. Run what-if scenarios on scope and resourcing and communicate uncertainty to stakeholders

Best answer: C

Explanation: AI-assisted estimates are decision inputs that must be interpreted in context, not accepted as commitments. Here, known mismatch between training data and the project’s regulatory complexity means the point forecast should be tested, adjusted, and expressed with uncertainty before baselining. Planning should incorporate validation, scenarios, and reforecasting based on actual performance.


Question 21

Topic: 1. Embracing AI in Project Management and Basic Concepts

You are managing an 8-week pilot to improve construction-site safety monitoring. The team captures ~5,000 drone photos per week and currently reviews them manually.

A data scientist proposes using a neural network to detect and flag safety hazards in the images. However, only 200 photos are already labeled “hazard / no hazard,” and the rest are unlabeled.

If you approve the neural-network approach, what is the most likely near-term impact on the project?

  • A. Safety incidents decrease immediately because detection is automated
  • B. No added work is needed because neural networks do not require training data
  • C. Stakeholder trust drops immediately because neural networks are black boxes
  • D. Schedule pressure increases due to image labeling and data preparation

Best answer: D

Explanation: Neural networks are commonly used for image recognition and other complex pattern detection, but they usually need substantial labeled training data. With only 200 labeled images, the team will spend time collecting, labeling, and curating examples before a useful model is available. That adds near-term effort and increases schedule and cost risk during an 8-week pilot.


Question 22

Topic: 4. Challenges of Bringing AI into the Organization

A PMO is rolling out an AI assistant that drafts weekly status updates and risk summaries from project notes and tool exports. The PMO wants outputs that are “trustworthy and auditable,” but teams currently paste the text into emails with no standard record of what the AI used or how the draft was produced.

As the AI-driven project manager, what should you ask for FIRST to establish documentation and auditability expectations for these AI-assisted outputs?

  • A. A list of preferred prompt templates for each project type
  • B. A target reduction in reporting effort for executives
  • C. A pilot group of volunteers to test the assistant for two sprints
  • D. An agreed audit record standard for each draft

Best answer: D

Explanation: To make AI-generated status content auditable, you first need a clear definition of what evidence must be captured for every output. That standard drives process design (who reviews/approves), tool configuration (logging), and retention so an auditor can reconstruct how a specific statement was produced and validated.


Question 23

Topic: 1. Embracing AI in Project Management and Basic Concepts

A project analytics team built an ML model to forecast task effort hours from historical project data. The model achieves \(R^2=0.95\) on the training set but only \(R^2=0.55\) on a large, representative validation set covering all product lines. Which action is the most appropriate mitigation before deployment?

  • A. Change the evaluation metric from \(R^2\) to MAE only
  • B. Add more features to capture hidden relationships
  • C. Simplify the model and reduce feature complexity
  • D. Replace it with unsupervised clustering instead

Best answer: C

Explanation: The strong performance on training data combined with a much weaker result on representative validation data is a classic sign of overfitting. The most direct mitigation is to reduce model capacity so it fits signal rather than noise, which typically narrows the train–validation gap and improves out-of-sample performance.
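The train-versus-validation gap in this scenario can be checked mechanically. The sketch below uses the scenario's \(R^2\) values; the 0.1 tolerance is an arbitrary illustrative cutoff, not a standard from the AIPM syllabus.

```python
def overfit_gap(train_score, holdout_score, max_gap=0.1):
    """Return the train-holdout generalization gap and whether it
    exceeds a tolerance that should block deployment."""
    gap = round(train_score - holdout_score, 2)
    return gap, gap > max_gap

gap, blocked = overfit_gap(0.95, 0.55)   # the R^2 values from the scenario
```

A 0.40 gap far exceeds any reasonable tolerance, so the deployment gate trips and the team simplifies the model before trying again.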


Question 24

Topic: 3. Optimizing Project Outcomes with AI: AI Tools and Techniques

You are leading a 12-week enterprise rollout. In 10 business days you must show an AI-driven schedule and risk forecasting dashboard using data from the existing PPM tool, time-tracking, and the risk register. Security has stated that project data cannot leave the company’s tenant and the solution must use existing SSO.

Exhibit: Tool shortlist (summary)

| Option | Data access | Integration effort | Usability | Cost |
| --- | --- | --- | --- | --- |
| Configurable AI add-on inside current PPM | Uses in-tenant data/connectors | 3–5 days config | PMs stay in current UI | Medium subscription |
| External AI forecasting SaaS | Requires weekly CSV exports to vendor cloud | 1–2 days setup | New UI + training | Low subscription |
| Custom ML pipeline | Can be in-tenant | 6–8 weeks build | Tailored | High build, low run |
| Stand-alone desktop analytics | Manual import only | 0–1 day | Single-user, limited sharing | One-time low |

Which option is most suitable?

  • A. Stand-alone desktop analytics
  • B. External AI forecasting SaaS
  • C. Custom ML pipeline
  • D. Configurable AI add-on inside current PPM

Best answer: D

Explanation: The dominant discriminator is the integration and compliance constraint: you must deliver within 10 business days without moving data outside the tenant and while using existing SSO. The in-platform add-on satisfies data access/security and has a realistic configuration effort within the time window, making it the most suitable choice despite higher subscription cost.

Revised on Sunday, April 26, 2026