Try 10 focused PSM-AI questions on AI Security and Ethics, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PSM-AI |
| Topic area | AI Security and Ethics |
| Blueprint weight | 25% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate AI Security and Ethics for PSM-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: AI Security and Ethics
A Scrum Master asks an AI assistant to draft a Sprint Retrospective summary from the team’s notes and is about to post it in the team channel.
Exhibit: AI output (excerpt)
Retro summary:
- Root cause: Alex and Priya lack ownership; they “blocked others”.
- Action: Require Alex to give daily status updates to the PO.
- Action: Identify “low performers” and escalate to managers.
- Note: People seemed defensive; push harder on accountability.
What is the best next action?
Best answer: C
What this tests: AI Security and Ethics
Explanation: The AI output assigns blame to named individuals and recommends escalation and pressure tactics, which can reduce openness in future Retrospectives. A Scrum Master remains accountable for protecting psychological safety and should not broadcast harmful, unvalidated content. The corrective action is to pause sharing and co-create a neutral, team-owned summary focused on improvements, not people.
Psychological safety is critical for an effective Sprint Retrospective; team members must feel safe to speak candidly without fear of blame or punishment. The exhibit includes personal attributions (“lack ownership”), labeling (“low performers”), and escalation to managers—content that can create fear and shut down transparency.
A responsible corrective action is to pause before posting, remove the personal attributions and labels, reframe the summary around process improvements rather than people, and validate the revised version with the team. A disclaimer or an "AI rewrite" does not replace human accountability for tone, safety, and accuracy.
It contains blaming and escalation suggestions that can damage psychological safety, so the Scrum Master should pause, remove personal attributions, and validate a safer summary with the team.
Topic: AI Security and Ethics
A Scrum Team wants to use a generative AI assistant to summarize recent customer support tickets into Product Backlog items. Some tickets include medical symptoms and occasional payment card details.
Exhibit: Internal AI use policy (excerpt)
Do not input: PHI, PCI, or personal data unless the use case is approved.
Approved requires: documented purpose, data minimization, and an approved AI environment.
When unsure whether data is regulated, consult Privacy/Compliance before use.
Based on the exhibit, what is the best next action for the Scrum Master?
Best answer: A
What this tests: AI Security and Ethics
Explanation: The exhibit explicitly flags PHI, PCI, and personal data as requiring an approved use case and an approved AI environment. Because the team is unsure whether the ticket content is regulated (and it likely is), the policy instructs consulting Privacy/Compliance before proceeding. This supports responsible AI use while keeping human accountability for compliance decisions.
The core issue is governance and compliance: certain data types (health information, payment card information, and personal data) can trigger regulatory and organizational requirements. The policy excerpt gives a clear decision rule: if regulated data might be involved or if there is uncertainty, the team must consult Privacy/Compliance and only proceed under an approved use case with data minimization and an approved AI environment. As Scrum Master, the best next action is to guide the team to follow that policy rather than relying on informal “anonymization” or deferring the decision to a later event. The closest misconception is assuming that removing obvious identifiers is sufficient; regulated data can remain even after names are removed.
The tickets may contain regulated data (PHI/PCI/personal data), so the policy requires consultation and approved-use controls before using AI.
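To make the exhibit's decision rule concrete, here is a minimal sketch of how it could be encoded as a pre-flight check. It is illustrative only: the names (`may_use_ai`, `REGULATED_TYPES`) are hypothetical and do not come from any real policy tooling.

```python
# Minimal sketch of the policy excerpt as a pre-flight check.
# All names here are hypothetical; the actual decision belongs to
# Privacy/Compliance, not to a script.

REGULATED_TYPES = {"PHI", "PCI", "personal data"}

def may_use_ai(data_types: set, approved_use_case: bool,
               approved_environment: bool, unsure: bool) -> str:
    """Apply the excerpt's rule: regulated or uncertain data needs approval."""
    if unsure:
        return "Consult Privacy/Compliance before use"
    if data_types & REGULATED_TYPES:
        if approved_use_case and approved_environment:
            return "Proceed with documented purpose and data minimization"
        return "Consult Privacy/Compliance before use"
    return "Proceed under normal AI usage guidance"

# The scenario: medical symptoms (PHI) and card details (PCI),
# no approval yet, and the team is unsure what is regulated.
print(may_use_ai({"PHI", "PCI"}, approved_use_case=False,
                 approved_environment=False, unsure=True))
# -> Consult Privacy/Compliance before use
```

The point of the sketch is that "unsure" routes straight to Privacy/Compliance, mirroring the third line of the exhibit.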
Topic: AI Security and Ethics
A Scrum Master is preparing a public blog post about Scrum events and uses a generative AI assistant to draft an introduction. The Scrum Master receives the following output.
AI output:
“Scrum is a lightweight framework that helps people, teams, and organizations
create value through adaptive solutions for complex problems.”
AI note: “This paragraph is ready to paste as-is; similar wording appears in
common references I was trained on.”
What is the best next action?
Best answer: C
What this tests: AI Security and Ethics
Explanation: The exhibit explicitly warns that the wording may come from training data and is “ready to paste,” which is a red flag for verbatim reproduction. Responsible use means assuming it could be copyrighted or otherwise owned, then verifying the source and using attribution/permission when quoting—or rewriting from your own understanding and checking for unintended copying.
Generative AI can reproduce memorized passages that closely match existing copyrighted or licensed material, especially when it outputs polished “paste-ready” text and hints it has seen similar wording before. In this situation, you should avoid copying verbatim into a public deliverable.
A responsible approach is to verify whether the wording matches an existing source, quote with attribution or permission if it does, or rewrite the passage from your own understanding and then check the result for unintended copying.
The key takeaway is that AI assistance does not remove your accountability for copyright, attribution, and originality.
The exhibit signals likely verbatim reproduction, so you should not paste it without source verification and appropriate attribution or original rewriting.
Topic: AI Security and Ethics
A Developer tells you they pasted a snippet of production error logs into a public generative AI chat to “quickly find the root cause.” The logs may contain customer email addresses (confidential data). Your organization has an AI usage policy, but the team is unsure what it allows, and the Sprint Review is tomorrow (time pressure). As Scrum Master, what is the BEST next action?
Best answer: C
What this tests: AI Security and Ethics
Explanation: This is a potential confidentiality and policy violation, so the priority is to contain the risk and follow the organization’s established escalation channels. The appropriate escalation is to security/privacy for possible data exposure and to the AI policy owner for guidance and compliance. Time pressure does not justify bypassing these paths.
When AI use raises ethical or security concerns (like possible PII leakage), the Scrum Master helps the Scrum Team act responsibly without taking over accountability for decisions. The best next step is to stop further unsafe use and escalate through the right channels: the security/privacy function, which can assess and contain the possible data exposure, and the AI policy owner, who can clarify what the policy allows and what must happen next.
A people manager may need to be informed, but they are not the primary escalation path for handling a potential security/privacy incident. The key takeaway is to route issues to the function that can investigate and mitigate harm, while keeping transparency and accountability clear.
Possible exposure of confidential data requires immediate escalation to security/privacy and the policy owner so risks are contained and handled transparently.
Topic: AI Security and Ethics
A Scrum Team asks the Scrum Master to use a general-purpose AI assistant (hosted outside the organization) to group comments from the last Sprint Retrospective. The notes include people’s names and a few performance-related remarks.
Exhibit: Policy excerpt
Personal data must not be shared with external AI services without the individuals' explicit consent and an approved data processor.
Apply data minimization and anonymization where possible, and be transparent with affected people about AI assistance.
Which action should you AVOID?
Best answer: A
What this tests: AI Security and Ethics
Explanation: Because the retrospective notes contain personal data, using them in an external AI prompt requires explicit consent and an approved processing arrangement. Responsible use applies data minimization and anonymization, and makes AI assistance transparent to affected people. The unsafe choice is the one that exports identifiable notes as-is to the external service.
The core concept is that consent (and organizational approval) is needed when you plan to use personal data in AI prompts, especially with external services. In this scenario, names and performance-related remarks are personal data, and the policy explicitly requires explicit consent and an approved processor before sharing it externally.
Appropriate approaches focus on data minimization (share only what is needed), anonymizing or removing names before any processing, using only approved processors with explicit consent when personal data is involved, and being transparent with the people whose notes are used.
Exporting identifiable retrospective notes to an external AI service without consent/approval is an avoidable privacy and ethics breach.
It shares personal data externally without explicit consent or an approved processor, violating the policy and privacy principles.
Topic: AI Security and Ethics
A Scrum Master suggests using a generative AI assistant to quickly summarize a set of recent customer support tickets into themes for Product Backlog refinement. The tickets were exported from the support system and may include customer contact details and troubleshooting notes.
What is the most important question to ask first before anyone pastes the ticket text into an AI prompt?
Best answer: C
What this tests: AI Security and Ethics
Explanation: Before using AI with real ticket content, the team must classify what data is present and confirm what sharing is permitted. Support tickets commonly contain PII, account identifiers, and sometimes credentials or health-related details, which can make copying raw text into an AI prompt inappropriate. Verifying policy constraints and safe handling comes before optimizing output format or meeting logistics.
Responsible AI use starts with data classification and allowed data handling. Customer support tickets often contain sensitive data such as PII (names, emails, phone numbers), customer/account data, and occasionally credentials; in some domains they may include PHI. Before any prompt is created, confirm what data types are present and what your organization permits (e.g., approved AI environments, redaction/anonymization requirements, retention/logging constraints, and who may access outputs). Only after those constraints are clear should the team decide how to structure the prompt, what “good” looks like, and how to share results.
The key takeaway: treat “paste the data into AI” as a disclosure decision that requires classification and policy alignment first.
You must first identify whether the text includes PII/PHI/credentials/customer data and confirm approved handling before sending it to any AI system.
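To illustrate what "classify before you disclose" can look like in practice, here is a minimal Python sketch that scans text for two common identifier types and redacts them. The patterns are deliberately simplistic, hypothetical examples, not a substitute for approved redaction tooling or a Privacy/Compliance review.

```python
import re

# Illustrative only: two simplistic patterns for common identifiers.
# Real support tickets can contain far more (account IDs, credentials,
# health details) that regexes like these will miss.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def classify_and_redact(text: str):
    """Return the identifier types found and a redacted copy of the text."""
    found = set()
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.add(label)
            text = pattern.sub(f"[{label} REDACTED]", text)
    return found, text

ticket = "jane.doe@example.com called from +1 555 123 4567 about login errors."
types_found, redacted = classify_and_redact(ticket)
print(types_found)  # {'EMAIL', 'PHONE'}
print(redacted)     # [EMAIL REDACTED] called from [PHONE REDACTED] about ...
```

Even a pass like this only supports the conversation; what may actually be shared is still a classification and policy decision, as the explanation above stresses.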
Topic: AI Security and Ethics
A Scrum Team is building a telehealth product. To speed up refinement, a Developer proposes pasting raw customer support chat transcripts (names, dates of birth, symptoms, appointment details) into a generative AI system to summarize issues and suggest Product Backlog Items.
The Scrum Master is unsure whether this is allowed; the organization has an AI usage policy and privacy/compliance guidance.
What is the best response?
Best answer: C
What this tests: AI Security and Ethics
Explanation: The decisive factor is that the transcripts contain regulated health-related personal data. When AI use involves privacy/health/financial regulatory obligations and you are uncertain, the responsible action is to consult and follow organizational policy and compliance guidance before processing the data. This preserves accountability and reduces the chance of improper disclosure or non-compliant processing.
Using AI with personal data can trigger regulatory and contractual obligations, especially when the data is health-related, financial, or otherwise sensitive. In this scenario, the transcripts include identifying information and medical details, so the team should not assume that “quick anonymization” or deletion makes the use acceptable. The Scrum Master should guide the team to follow established governance: consult the organization’s AI usage policy and privacy/compliance guidance, use only approved processing methods/tools, and ensure appropriate safeguards (such as documented purpose, access controls, and validated de-identification) before any AI processing occurs.
The key takeaway is: when regulated data is involved and permission/controls are unclear, consult policy and compliance first rather than optimizing for speed.
Health-related personal data is regulated, so policy and compliance guidance must be followed before processing it with AI.
Topic: AI Security and Ethics
During a Sprint, a Scrum Master receives a message that looks like a polished internal memo and includes an “AI-generated summary” of a supposed leadership request:
Request: “Please send the full customer contact list and renewal dates today.
We need it for an executive review. Keep this confidential.”
Signed: “CFO Office”
The message arrived via a chat channel where anyone in the company can post, and there is no linked ticket or prior context. What is the best thing to verify or ask FIRST before taking any action?
Best answer: B
What this tests: AI Security and Ethics
Explanation: This is a classic social-engineering pattern: an urgent, confidential request for sensitive data presented with authoritative language. When AI outputs increase perceived legitimacy, the first step is to verify the requester and their authorization through an independent, trusted channel. Only after that should you determine data classification, sharing constraints, and the appropriate process for fulfilling the request.
AI-generated or AI-assisted messages can be used to make phishing and business email compromise attempts look credible and “executive.” In this scenario, the channel is untrusted (anyone can post) and there is no supporting context, so the primary risk is unauthorized disclosure driven by perceived authority and urgency.
A responsible first step is to authenticate the requester and confirm they are authorized to ask for that information, using an independent mechanism (e.g., known contact method, established request workflow). Once identity and authorization are confirmed, you can then apply data handling rules (classification, minimum necessary sharing, approved recipients, audit trail) and decide how to proceed.
Focusing on formatting or speed assumes the request is legitimate and bypasses basic verification.
Authoritative-looking AI text can enable impersonation, so identity/authorization must be confirmed via a trusted channel before any disclosure.
Topic: AI Security and Ethics
A Scrum Master wants to use generative AI to create a diagram and a simple framework slide for an internal onboarding page about the Scrum Team’s current workflow. Constraints: the page is on the company wiki but may be forwarded outside the team, the workflow details include some client-specific terms that must not be shared with AI, and the content must be accurate because new hires will follow it. There is only 60 minutes before the page is published.
What is the BEST next action?
Best answer: A
What this tests: AI Security and Ethics
Explanation: Use AI to accelerate a first draft without exposing confidential information, then keep humans accountable for correctness. Clearly label AI-assisted content as a draft and have the Scrum Team verify it before it becomes guidance for others. If the output resembles a known third-party framework, include appropriate attribution rather than implying it is original.
When adding AI-generated diagrams or frameworks to internal materials, the safe pattern is: protect confidential inputs, maintain transparency, and validate accuracy before others rely on it. In this scenario, the onboarding page can influence behavior and may be forwarded, so the Scrum Master should avoid sharing client-specific terms with AI, treat the output as a draft, and have the Product Owner and Developers confirm it matches the real workflow and the Definition of Done expectations. If the diagram/framework appears derived from a recognizable third-party source, add attribution (or replace it with an original representation) to reduce intellectual property and ethical risks. The key is that AI can assist creation, but the Scrum Team remains accountable for what is published.
It keeps sensitive details out of prompts while ensuring transparent labeling and human review before publishing.
Topic: AI Security and Ethics
A Scrum Master wants to “save time” before annual reviews. They paste their private 1:1 notes (including names, personal circumstances, and peer feedback) into a generative AI tool and ask it to draft individual performance reviews and rank the Developers.
When the Developers learn this, trust drops and several refuse to speak candidly in future 1:1s, even though the AI text itself looked coherent.
What is the most likely underlying cause of the problem?
Best answer: D
What this tests: AI Security and Ethics
Explanation: The key issue is not output quality but inappropriate use: the Scrum Master used sensitive, identifiable personal information and asked the AI to produce individual performance evaluations and rankings. That creates a conflict of interest, breaks confidentiality expectations around 1:1s, and predictably damages psychological safety and trust.
Responsible AI use requires clear ethical boundaries, especially around people data and power dynamics. Private 1:1 notes often contain sensitive personal information and are shared under an expectation of confidentiality; repurposing them for automated performance reviews (and ranking) changes the purpose and increases harm. A Scrum Master also has a facilitation/accountability role, not a people-manager role, so using AI to evaluate individuals can amplify bias, reduce transparency, and erode psychological safety.
A better approach is to avoid individual performance judgments with AI, keep personal data out of AI systems unless explicitly permitted, and focus AI use on team-level, non-identifying support (e.g., summarizing anonymous themes with consent and strong safeguards). The “coherent output” clue points away from prompt quality issues and toward an ethical misuse root cause.
Using identifiable, personal 1:1 data to generate and rank performance reviews is an unethical use that undermines trust and confidentiality.
Use the PSM-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PSM-AI guide on PMExams.com, then return to PM Mastery for timed practice.