PSM-AI: AI Security and Ethics

Try 10 focused PSM-AI questions on AI Security and Ethics, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Field | Detail
Exam route | PSM-AI
Topic area | AI Security and Ethics
Blueprint weight | 25%
Page purpose | Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate AI Security and Ethics for PSM-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

Pass | What to do | What to record
First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer.
Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor.
Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter.
Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: AI Security and Ethics

A Scrum Master asks an AI assistant to draft a Sprint Retrospective summary from the team’s notes and is about to post it in the team channel.

Exhibit: AI output (excerpt)

Retro summary:
- Root cause: Alex and Priya lack ownership; they “blocked others”.
- Action: Require Alex to give daily status updates to the PO.
- Action: Identify “low performers” and escalate to managers.
- Note: People seemed defensive; push harder on accountability.

What is the best next action?

  • A. Ask the AI to rewrite it and post the new version immediately.
  • B. Forward it to managers to address the accountability issue.
  • C. Do not share it; rewrite with the team using neutral, non-personal language.
  • D. Post it with a disclaimer that the AI may be wrong.

Best answer: C

What this tests: AI Security and Ethics

Explanation: The AI output assigns blame to named individuals and recommends escalation and pressure tactics, which can reduce openness in future Retrospectives. A Scrum Master remains accountable for protecting psychological safety and should not broadcast harmful, unvalidated content. The corrective action is to pause sharing and co-create a neutral, team-owned summary focused on improvements, not people.

Psychological safety is critical for an effective Sprint Retrospective; team members must feel safe to speak candidly without fear of blame or punishment. The exhibit includes personal attributions (“lack ownership”), labeling (“low performers”), and escalation to managers—content that can create fear and shut down transparency.

A responsible corrective action is to:

  • Stop distribution of the AI draft.
  • Remove names and judgmental language; focus on observable issues and experiments.
  • Validate the summary with the Scrum Team and align on how AI may be used for retro artifacts.

A disclaimer or “AI rewrite” does not replace human accountability for tone, safety, and accuracy.

The draft contains blaming and escalation suggestions that can damage psychological safety, so the Scrum Master should pause, remove personal attributions, and validate a safer summary with the team.


Question 2

Topic: AI Security and Ethics

A Scrum Team wants to use a generative AI assistant to summarize recent customer support tickets into Product Backlog items. Some tickets include medical symptoms and occasional payment card details.

Exhibit: Internal AI use policy (excerpt)

Do not input: PHI, PCI, or personal data unless the use case is approved.
Approved requires: documented purpose, data minimization, and an approved AI environment.
When unsure whether data is regulated, consult Privacy/Compliance before use.

Based on the exhibit, what is the best next action for the Scrum Master?

  • A. Ask Privacy/Compliance to confirm requirements and use only an approved AI environment
  • B. Use the AI assistant only for estimating effort, not for summarizing tickets
  • C. Proceed using the AI assistant after removing customer names
  • D. Continue without AI until the next Sprint Retrospective decides on tooling

Best answer: A

What this tests: AI Security and Ethics

Explanation: The exhibit explicitly flags PHI, PCI, and personal data as requiring an approved use case and an approved AI environment. Because the team is unsure whether the ticket content is regulated (and it likely is), the policy instructs consulting Privacy/Compliance before proceeding. This supports responsible AI use while keeping human accountability for compliance decisions.

The core issue is governance and compliance: certain data types (health information, payment card information, and personal data) can trigger regulatory and organizational requirements. The policy excerpt gives a clear decision rule: if regulated data might be involved or if there is uncertainty, the team must consult Privacy/Compliance and only proceed under an approved use case with data minimization and an approved AI environment. As Scrum Master, the best next action is to guide the team to follow that policy rather than relying on informal “anonymization” or deferring the decision to a later event. The closest misconception is assuming that removing obvious identifiers is sufficient; regulated data can remain even after names are removed.
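To make the last point concrete, here is a minimal sketch of why name removal is not anonymization. The regexes and sample ticket below are illustrative only, not a compliance tool: a simple pattern scan can still find payment card numbers (validated with the Luhn checksum) and email addresses after names are gone, while health details such as symptom text cannot be reliably detected by patterns at all, which is exactly why the policy routes uncertainty to Privacy/Compliance.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: True for plausible payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def flag_regulated(text: str) -> list[str]:
    """Flag likely PCI/contact data that survives naive name removal.

    Note: PHI such as 'chest pain' below is NOT caught by any regex here;
    pattern matching cannot classify health data reliably.
    """
    findings = []
    # Candidate card numbers: 13-16 digits, optionally space/dash separated.
    for m in re.finditer(r"\b(?:\d[ -]?){13,16}\b", text):
        if luhn_valid(re.sub(r"[ -]", "", m.group())):
            findings.append(f"possible card number: {m.group()}")
    findings += [f"email: {e}" for e in re.findall(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", text)]
    return findings

# Names already removed, yet regulated data remains in the ticket text:
ticket = ("Customer reported chest pain; card 4111 1111 1111 1111 was "
          "charged twice. Contact: pat@example.com")
print(flag_regulated(ticket))
```

Running this flags the card number and the email address but stays silent on the medical symptom, illustrating the gap between "removed obvious identifiers" and "safe to share."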

The tickets may contain regulated data (PHI/PCI/personal data), so the policy requires consultation and approved-use controls before using AI.


Question 3

Topic: AI Security and Ethics

A Scrum Master is preparing a public blog post about Scrum events and uses a generative AI assistant to draft an introduction. The Scrum Master receives the following output.

AI output:
“Scrum is a lightweight framework that helps people, teams, and organizations
create value through adaptive solutions for complex problems.”
AI note: “This paragraph is ready to paste as-is; similar wording appears in
common references I was trained on.”

What is the best next action?

  • A. Run it through translation to avoid any copyright concerns
  • B. Ask the AI to make it longer so it becomes sufficiently different
  • C. Treat it as potentially memorized text: find the original source, then either quote with attribution/permission or rewrite in your own words and verify originality
  • D. Paste it as-is because AI-generated text is original by default

Best answer: C

What this tests: AI Security and Ethics

Explanation: The exhibit explicitly warns that the wording may come from training data and is “ready to paste,” which is a red flag for verbatim reproduction. Responsible use means assuming it could be copyrighted or otherwise owned, then verifying the source and using attribution/permission when quoting—or rewriting from your own understanding and checking for unintended copying.

Generative AI can reproduce memorized passages that closely match existing copyrighted or licensed material, especially when it outputs polished “paste-ready” text and hints it has seen similar wording before. In this situation, you should avoid copying verbatim into a public deliverable.

A responsible approach is to:

  • Locate the likely original source(s) and check licensing/usage terms.
  • If you must use the exact wording, quote it and provide proper attribution (and permission if required).
  • Prefer rewriting in your own words based on your understanding, and validate originality (e.g., with a plagiarism check or web search for close matches).

The key takeaway is that AI assistance does not remove your accountability for copyright, attribution, and originality.

The exhibit signals likely verbatim reproduction, so you should not paste it without source verification and appropriate attribution or original rewriting.


Question 4

Topic: AI Security and Ethics

A Developer tells you they pasted a snippet of production error logs into a public generative AI chat to “quickly find the root cause.” The logs may contain customer email addresses (confidential data). Your organization has an AI usage policy, but the team is unsure what it allows, and the Sprint Review is tomorrow (time pressure). As Scrum Master, what is the BEST next action?

  • A. Proceed as planned and mention at the Sprint Review that AI was used to save time.
  • B. Escalate to the Developers’ line manager to decide whether AI use is acceptable.
  • C. Ask the Developer to stop further sharing and escalate via the security/privacy incident path and the AI policy owner.
  • D. Handle it within the Scrum Team by agreeing not to use AI for the rest of the Sprint.

Best answer: C

What this tests: AI Security and Ethics

Explanation: This is a potential confidentiality and policy violation, so the priority is to contain the risk and follow the organization’s established escalation channels. The appropriate escalation is to security/privacy for possible data exposure and to the AI policy owner for guidance and compliance. Time pressure does not justify bypassing these paths.

When AI use raises ethical or security concerns (like possible PII leakage), the Scrum Master helps the Scrum Team act responsibly without taking over accountability for decisions. The best next step is to stop further unsafe use and escalate through the right channels:

  • Use the security/privacy incident process to assess and contain potential data exposure.
  • Involve the AI policy owner to interpret or clarify what is permitted and what remediation is required.

A people manager may need to be informed, but they are not the primary escalation path for handling a potential security/privacy incident. The key takeaway is to route issues to the function that can investigate and mitigate harm, while keeping transparency and accountability clear.

Possible exposure of confidential data requires immediate escalation to security/privacy and the policy owner so risks are contained and handled transparently.


Question 5

Topic: AI Security and Ethics

A Scrum Team asks the Scrum Master to use a general-purpose AI assistant (hosted outside the organization) to group comments from the last Sprint Retrospective. The notes include people’s names and a few performance-related remarks.

Exhibit: Policy excerpt

  • Personal data: sharing externally requires explicit consent and an approved processor
  • Anonymized, non-personal data: may be used for experimentation

Which action should you AVOID?

  • A. Paste the full notes, names included, into the external AI service.
  • B. Use synthetic examples instead of real comments in prompts.
  • C. Anonymize notes, then use AI; disclose AI use.
  • D. Get team consent; otherwise analyze retro notes manually.

Best answer: A

What this tests: AI Security and Ethics

Explanation: Because the retrospective notes contain personal data, using them in an external AI prompt requires explicit consent and an approved processing arrangement. Responsible use applies data minimization and anonymization, and makes AI assistance transparent to affected people. The unsafe choice is the one that exports identifiable notes as-is to the external service.

The core concept is that consent (and organizational approval) is needed when you plan to use personal data in AI prompts, especially with external services. In this scenario, names and performance-related remarks are personal data, and the policy explicitly requires explicit consent and an approved processor before sharing it externally.

Appropriate approaches focus on:

  • Data minimization: only share what is necessary.
  • De-identification: anonymize or aggregate so individuals are not identifiable.
  • Substitution: use synthetic examples when feasible.
  • Transparency and accountability: disclose AI use and keep humans responsible for decisions.
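As a hedged sketch of the minimization and de-identification bullets above (the roster, names, and function are hypothetical), a pre-processing step could replace known names with neutral tokens before any text reaches an external service. Note that pseudonymization of this kind is weaker than the anonymization the policy describes: indirect identifiers such as roles, dates, or distinctive phrasing can still re-identify people.

```python
import re

# Hypothetical roster; in practice it would come from the team itself,
# with consent, and the output would still be reviewed by a human.
ROSTER = {"Alex": "Person-1", "Priya": "Person-2"}

def pseudonymize(note: str, roster: dict[str, str]) -> str:
    """Replace known names with neutral tokens before external AI use."""
    for name, token in roster.items():
        note = re.sub(rf"\b{re.escape(name)}\b", token, note)
    return note

note = "Alex felt the handoff from Priya was unclear."
print(pseudonymize(note, ROSTER))
# prints: Person-1 felt the handoff from Person-2 was unclear.
```

Even with such a step in place, the policy excerpt still requires explicit consent and an approved processor before personal data leaves the organization.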

Exporting identifiable retrospective notes to an external AI service without consent/approval is an avoidable privacy and ethics breach.

It shares personal data externally without explicit consent or an approved processor, violating the policy and privacy principles.


Question 6

Topic: AI Security and Ethics

A Scrum Master suggests using a generative AI assistant to quickly summarize a set of recent customer support tickets into themes for Product Backlog refinement. The tickets were exported from the support system and may include customer contact details and troubleshooting notes.

What is the most important question to ask first before anyone pastes the ticket text into an AI prompt?

  • A. Which prompting template will produce the most accurate themes and summaries?
  • B. How many themes should be produced so refinement fits in one hour?
  • C. What sensitive data types are in the tickets, and does policy allow sharing them with an AI service?
  • D. Should the summaries be presented in the next Sprint Review or shared asynchronously?

Best answer: C

What this tests: AI Security and Ethics

Explanation: Before using AI with real ticket content, the team must classify what data is present and confirm what sharing is permitted. Support tickets commonly contain PII, account identifiers, and sometimes credentials or health-related details, which can make copying raw text into an AI prompt inappropriate. Verifying policy constraints and safe handling comes before optimizing output format or meeting logistics.

Responsible AI use starts with data classification and allowed data handling. Customer support tickets often contain sensitive data such as PII (names, emails, phone numbers), customer/account data, and occasionally credentials; in some domains they may include PHI. Before any prompt is created, confirm what data types are present and what your organization permits (e.g., approved AI environments, redaction/anonymization requirements, retention/logging constraints, and who may access outputs). Only after those constraints are clear should the team decide how to structure the prompt, what “good” looks like, and how to share results.

The key takeaway: treat “paste the data into AI” as a disclosure decision that requires classification and policy alignment first.

You must first identify whether the text includes PII/PHI/credentials/customer data and confirm approved handling before sending it to any AI system.


Question 7

Topic: AI Security and Ethics

A Scrum Team is building a telehealth product. To speed up refinement, a Developer proposes pasting raw customer support chat transcripts (names, dates of birth, symptoms, appointment details) into a generative AI system to summarize issues and suggest Product Backlog Items.

The Scrum Master is unsure whether this is allowed and the organization has an AI usage policy and privacy/compliance guidance.

What is the best response?

  • A. Proceed if the team removes obvious identifiers like names and email addresses.
  • B. Use a public AI tool only for speed, then delete the chat logs immediately after.
  • C. Pause the plan and consult the organization’s AI/privacy policy (and appropriate compliance experts) before using any regulated health data with AI.
  • D. Ask the Product Owner to decide whether the business value justifies the risk.

Best answer: C

What this tests: AI Security and Ethics

Explanation: The decisive factor is that the transcripts contain regulated health-related personal data. When AI use involves privacy/health/financial regulatory obligations and you are uncertain, the responsible action is to consult and follow organizational policy and compliance guidance before processing the data. This preserves accountability and reduces the chance of improper disclosure or non-compliant processing.

Using AI with personal data can trigger regulatory and contractual obligations, especially when the data is health-related, financial, or otherwise sensitive. In this scenario, the transcripts include identifying information and medical details, so the team should not assume that “quick anonymization” or deletion makes the use acceptable. The Scrum Master should guide the team to follow established governance: consult the organization’s AI usage policy and privacy/compliance guidance, use only approved processing methods/tools, and ensure appropriate safeguards (such as documented purpose, access controls, and validated de-identification) before any AI processing occurs.

The key takeaway is: when regulated data is involved and permission/controls are unclear, consult policy and compliance first rather than optimizing for speed.

Health-related personal data is regulated, so policy and compliance guidance must be followed before processing it with AI.


Question 8

Topic: AI Security and Ethics

During a Sprint, a Scrum Master receives a message that looks like a polished internal memo and includes an “AI-generated summary” of a supposed leadership request:

Request: “Please send the full customer contact list and renewal dates today.
We need it for an executive review. Keep this confidential.”
Signed: “CFO Office”

The message arrived via a chat channel where anyone in the company can post, and there is no linked ticket or prior context. What is the best thing to verify or ask FIRST before taking any action?

  • A. Which team member can export the data fastest to meet the deadline?
  • B. How can we independently verify the requester’s identity and authorization before sharing any data?
  • C. Can the AI rephrase the request to sound more urgent so it gets handled quickly?
  • D. What specific format and columns should the customer list include?

Best answer: B

What this tests: AI Security and Ethics

Explanation: This is a classic social-engineering pattern: an urgent, confidential request for sensitive data presented with authoritative language. When AI outputs increase perceived legitimacy, the first step is to verify the requester and their authorization through an independent, trusted channel. Only after that should you determine data classification, sharing constraints, and the appropriate process for fulfilling the request.

AI-generated or AI-assisted messages can be used to make phishing and business email compromise attempts look credible and “executive.” In this scenario, the channel is untrusted (anyone can post) and there is no supporting context, so the primary risk is unauthorized disclosure driven by perceived authority and urgency.

A responsible first step is to authenticate the requester and confirm they are authorized to ask for that information, using an independent mechanism (e.g., known contact method, established request workflow). Once identity and authorization are confirmed, you can then apply data handling rules (classification, minimum necessary sharing, approved recipients, audit trail) and decide how to proceed.

Focusing on formatting or speed assumes the request is legitimate and bypasses basic verification.

Authoritative-looking AI text can enable impersonation, so identity/authorization must be confirmed via a trusted channel before any disclosure.


Question 9

Topic: AI Security and Ethics

A Scrum Master wants to use generative AI to create a diagram and a simple framework slide for an internal onboarding page about the Scrum Team’s current workflow. Constraints: the page is on the company wiki but may be forwarded outside the team, the workflow details include some client-specific terms that must not be shared with AI, and the content must be accurate because new hires will follow it. There is only 60 minutes before the page is published.

What is the BEST next action?

  • A. Generate a generic draft, label AI-assisted, and get team verification
  • B. Paste the real workflow, including client terms, to get accuracy
  • C. Remove attribution so the AI-generated framework looks original
  • D. Publish the AI diagram as final since it is internal content

Best answer: A

What this tests: AI Security and Ethics

Explanation: Use AI to accelerate a first draft without exposing confidential information, then keep humans accountable for correctness. Clearly label AI-assisted content as a draft and have the Scrum Team verify it before it becomes guidance for others. If the output resembles a known third-party framework, include appropriate attribution rather than implying it is original.

When adding AI-generated diagrams or frameworks to internal materials, the safe pattern is: protect confidential inputs, maintain transparency, and validate accuracy before others rely on it. In this scenario, the onboarding page can influence behavior and may be forwarded, so the Scrum Master should avoid sharing client-specific terms with AI, treat the output as a draft, and have the Product Owner and Developers confirm it matches the real workflow and the Definition of Done expectations. If the diagram/framework appears derived from a recognizable third-party source, add attribution (or replace it with an original representation) to reduce intellectual property and ethical risks. The key is that AI can assist creation, but the Scrum Team remains accountable for what is published.

It keeps sensitive details out of prompts while ensuring transparent labeling and human review before publishing.


Question 10

Topic: AI Security and Ethics

A Scrum Master wants to “save time” before annual reviews. They paste their private 1:1 notes (including names, personal circumstances, and peer feedback) into a generative AI tool and ask it to draft individual performance reviews and rank the Developers.

When the Developers learn this, trust drops and several refuse to speak candidly in future 1:1s, even though the AI text itself looked coherent.

What is the most likely underlying cause of the problem?

  • A. Prompt injection changed the model’s behavior through hidden instructions
  • B. Missing context in the prompt led to generic, low-value output
  • C. Automation bias caused the Scrum Master to over-trust the AI’s rankings
  • D. Sensitive data misuse and a conflict-of-interest boundary violation

Best answer: D

What this tests: AI Security and Ethics

Explanation: The key issue is not output quality but inappropriate use: the Scrum Master used sensitive, identifiable personal information and asked the AI to produce individual performance evaluations and rankings. That creates a conflict of interest, breaks confidentiality expectations around 1:1s, and predictably damages psychological safety and trust.

Responsible AI use requires clear ethical boundaries, especially around people data and power dynamics. Private 1:1 notes often contain sensitive personal information and are shared under an expectation of confidentiality; repurposing them for automated performance reviews (and ranking) changes the purpose and increases harm. A Scrum Master also has a facilitation/accountability role, not a people-manager role, so using AI to evaluate individuals can amplify bias, reduce transparency, and erode psychological safety.

A better approach is to avoid individual performance judgments with AI, keep personal data out of AI systems unless explicitly permitted, and focus AI use on team-level, non-identifying support (e.g., summarizing anonymous themes with consent and strong safeguards). The “coherent output” clue points away from prompt quality issues and toward an ethical misuse root cause.

Using identifiable, personal 1:1 data to generate and rank performance reviews is an unethical use that undermines trust and confidentiality.

Continue with full practice

Use the PSM-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the PSM-AI guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026