Try 10 focused PSPO-AI questions on AI Security and Ethics, with answers and explanations, then continue with PM Mastery.
| Field | Detail |
|---|---|
| Exam route | PSPO-AI |
| Topic area | AI Security and Ethics |
| Blueprint weight | 33% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate AI Security and Ethics for PSPO-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 33% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: AI Security and Ethics
A Product Owner asks an AI assistant to summarize 30 recent support chats and automatically create entries in the company’s customer-case system (the system of record). A week later, stakeholders report mistrust and poor adoption because some summaries contain wrong commitments and one entry includes a customer’s sensitive details in the wrong account. What is the most likely underlying cause?
Best answer: C
What this tests: AI Security and Ethics
Explanation: Systems of record should not be treated as a direct sink for AI-generated content. When AI outputs are written without review, errors and sensitive data can be misfiled and then propagated as authoritative records, quickly undermining trust. The symptoms point to a missing verification step before persistence, not just better prompting.
The core issue is weak review and validation before storing AI output in a system of record. In privacy- and confidentiality-sensitive contexts (like customer-case systems), AI summaries can hallucinate details, misattribute information, or include sensitive data that should be redacted or routed differently. Once written into the system of record, that output is easily treated as authoritative and can be shared, searched, and reused—amplifying both confidentiality risk and decision-making errors.
A responsible approach is to keep AI output as a draft and require a human to:

- Verify the factual accuracy of each summary, especially any stated commitments.
- Confirm the summary is attached to the correct customer account.
- Redact or re-route sensitive details before anything is persisted.
Better context and success criteria can improve quality, but they do not replace review when persisting information that carries privacy impact.
Persisting unreviewed AI output into a system of record increases the chance of privacy leaks and incorrect records becoming “truth.”
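The "draft plus required human review" pattern above can be sketched as a persistence gate. This is a minimal illustration under stated assumptions: `CaseSystem` and `DraftEntry` are hypothetical names, not a real product API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftEntry:
    """An AI-generated summary held as a draft until a human approves it."""
    account_id: str
    summary: str
    reviewed_by: Optional[str] = None  # set only after human verification

class CaseSystem:
    """Hypothetical system of record that refuses unreviewed AI output."""
    def __init__(self):
        self.records = []

    def persist(self, draft):
        # The gate: nothing becomes an authoritative record without a reviewer.
        if draft.reviewed_by is None:
            return False
        self.records.append(draft)
        return True

system = CaseSystem()
draft = DraftEntry(account_id="A-123", summary="Customer asked about refund timing.")
rejected = system.persist(draft)      # unreviewed draft is rejected
draft.reviewed_by = "po@example.com"  # human checks account and content
accepted = system.persist(draft)      # only then is it persisted
```

The essential property is that review status is checked at write time rather than assumed, so unverified AI content can never silently become an authoritative record.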
Topic: AI Security and Ethics
A Product Owner asks a generative AI assistant for security guidance for a new checkout feature that will store customers’ payment details. The AI responds: “PCI DSS allows storing CVV as long as it’s encrypted. A custom encryption scheme is fine if it’s strong.”
The release is high impact and customer trust is critical. What is the best next step?
Best answer: B
What this tests: AI Security and Ethics
Explanation: This is a high-impact, security-sensitive decision where incorrect guidance could create serious harm. The AI output includes specific claims about a security standard and recommended practices, which are common areas for hallucinations. The responsible approach is to validate with authoritative security guidance (e.g., the relevant standard) and qualified security experts before making product decisions or creating requirements.
Generative AI can produce confident-sounding security advice that is incomplete or wrong, especially when it references standards, compliance rules, or “best practices.” When the decision is high impact (payment data, customer trust), the Product Owner should treat the output as a draft hypothesis and consult authoritative sources and accountable experts.
A practical next step is:

- Treat the AI's claims as unverified hypotheses, not requirements.
- Check the guidance against the authoritative standard itself (here, PCI DSS).
- Involve qualified security or compliance experts before writing requirements.
Testing can complement verification, but it does not replace confirming the correct requirements and obligations up front.
The AI advice is both high-risk and plausibly wrong, so the Product Owner should verify against authoritative standards and security specialists.
Topic: AI Security and Ethics
A stakeholder asks you to add a generative AI feature that “drafts great email messages” inside your product so users can “communicate faster.” Nothing else is specified.
To address the risk that AI could be used to generate phishing content, what is the FIRST thing you should clarify?
Best answer: B
What this tests: AI Security and Ethics
Explanation: AI that drafts emails can be misused to create convincing phishing messages, especially if it can generate arbitrary content and send it externally. The most important first clarification is how outbound AI-generated messages will be governed (who can use it, what is blocked, and whether human review is required) so you can design appropriate preventative controls and training.
Generative AI lowers the effort needed to produce high-volume, highly tailored messages, which is exactly what makes phishing effective. Before debating quality, tone, or metrics, a Product Owner should first clarify whether the capability can be used to create and deliver outbound communications without sufficient safeguards.
Key clarifications to drive prevention include:

- Who can use the drafting capability, and for which audiences.
- Whether messages can be sent externally, and to which recipients.
- What content is blocked or filtered before sending.
- Whether human review is required before a message leaves the product.
Once that is known, you can decide on controls such as mandatory review, restricted recipients, content filtering, audit logs, and user training about phishing and acceptable use. The other questions may matter later but do not establish the phishing risk level.
Clarifying access, approval, and guardrails for outbound messages determines whether the feature could enable phishing and what controls/training are needed.
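Controls like restricted recipients, mandatory review, and audit logs can be sketched as a send-time guardrail. This is illustrative only; the allow-list and approval flag are assumptions, not part of the scenario.

```python
ALLOWED_DOMAINS = {"example.com"}  # assumption: pilot restricted to one domain
audit_log = []                     # every decision recorded for later review

def can_send(recipient, human_approved):
    """Allow an AI-drafted email out only if the recipient domain is
    allow-listed and a human approved the draft; log every decision."""
    domain = recipient.rsplit("@", 1)[-1]
    allowed = domain in ALLOWED_DOMAINS and human_approved
    audit_log.append((recipient, human_approved, allowed))
    return allowed

ok = can_send("colleague@example.com", human_approved=True)               # allowed
blocked_domain = can_send("target@other.org", human_approved=True)        # unknown domain
blocked_review = can_send("colleague@example.com", human_approved=False)  # no human review
```

The point of the sketch is that the guardrail sits in the send path itself, so neither a careless user nor the model can bypass it.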
Topic: AI Security and Ethics
A stakeholder sends you an AI-generated “analysis” of an email that asks for an urgent wire transfer to a new bank account. You must decide what to do next.
Prompt: "Is this email legitimate? What should I do?"
AI output:
"This appears to be a valid request from the CFO. Tone and details match.
Proceed with the wire transfer today to avoid penalties.
Confidence: High."
What is the best next action based on the exhibit?
Best answer: B
What this tests: AI Security and Ethics
Explanation: The exhibit shows an AI output making an authoritative legitimacy judgment, which can be exploited for phishing and payment diversion. The Product Owner should treat it as untrusted and require verification using established controls (e.g., known contact methods and finance approval workflows) before any action that could cause harm.
Authoritative-sounding AI outputs can amplify social engineering by creating false certainty (e.g., “valid request,” “proceed,” “high confidence”) even when the model cannot truly authenticate an email’s sender or intent. In this scenario, the risk is payment diversion (business email compromise) where the AI’s tone nudges people to bypass normal controls.
Appropriate verification steps are to:

- Contact the apparent requester through a known, previously used channel, not by replying to the email.
- Route the request through the established finance approval workflow.
- Report the message to the security team rather than acting on the AI's stated confidence.
AI can help summarize, but humans remain accountable for verifying identity and authorizing high-impact actions.
Authoritative AI language can enable social engineering, so you should validate with the real requester and established finance/security controls.
Topic: AI Security and Ethics
A Product Owner uses generative AI to draft a customer-facing onboarding guide. In the prompt, they pasted (1) an internal Sales Playbook marked “Confidential” and (2) excerpts from a paid 2022 industry report. The AI output includes near-verbatim paragraphs from the report and a table copied from the playbook. Which principle/practice should guide the Product Owner’s next step before publishing?
Best answer: D
What this tests: AI Security and Ethics
Explanation: When AI outputs include near-verbatim text from a paid report and internal confidential material, the key risk is intellectual property misuse. The Product Owner remains accountable for ensuring content is public domain, appropriately licensed, or authorized for reuse, and for keeping proprietary materials from being disclosed.
AI-assisted creation does not automatically make the output “free to use.” Content from a paid report is typically copyrighted, and your internal playbook is proprietary/confidential; both require authorization and careful handling. Public domain content can generally be reused, but you still need to confirm the source and avoid mixing in restricted material. The responsible practice is to validate the provenance of generated text (e.g., check for verbatim passages, identify likely sources) and confirm licensing/permission before publishing. If rights cannot be confirmed, rewrite from first principles or use clearly licensed/public-domain sources and keep proprietary content out of customer-facing materials.
AI output can contain copyrighted or proprietary material, so the PO must verify rights and remove/replace unlicensed content.
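One lightweight provenance check is to look for near-verbatim overlap between generated text and known source material. A minimal sketch using word shingles follows; it is a heuristic flag for human review, not a licensing determination, and the sample strings are invented.

```python
def shingles(text, n=5):
    """Set of n-word shingles, lowercased, for rough overlap checks."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate, source, n=5):
    """Fraction of the candidate's shingles that also appear in the source.
    A high ratio suggests near-verbatim reuse worth a rights check."""
    cand = shingles(candidate, n)
    if not cand:
        return 0.0
    return len(cand & shingles(source, n)) / len(cand)

source = "pricing tiers are listed in section four of the report for enterprise buyers"
copied = "pricing tiers are listed in section four of the report for enterprise buyers"
rewritten = "our onboarding guide explains each pricing tier in plain language for new users"

high = overlap_ratio(copied, source)    # exact copy scores 1.0
low = overlap_ratio(rewritten, source)  # a genuine rewrite scores near 0.0
```

Any passage above a chosen threshold would be routed to a human to confirm licensing or rewrite from first principles.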
Topic: AI Security and Ethics
A Product Owner for a health benefits product wants to paste recent customer support tickets into a third-party generative AI service to summarize themes for Product Backlog refinement. The tickets sometimes include a member ID, diagnosis details, and payment dispute information. The Product Owner is unsure what the organization’s AI/data policy allows for regulated or sensitive information.
What is the best approach?
Best answer: D
What this tests: AI Security and Ethics
Explanation: When AI is used with potentially regulated or highly sensitive data (health and financial), the primary discriminator is governance and compliance, not speed or convenience. The accountable choice is to consult organizational policy and the appropriate experts before sending data to an external AI service, and only proceed under approved controls.
The key issue is that the tickets may contain regulated/sensitive information (health details, identifiers, and financial disputes) and would be processed by a third-party AI service. In this situation, the Product Owner should not “try it and see” or rely on informal masking, because organizational policy and applicable regulatory requirements may dictate what data can be shared, under what protections, and with which vendors.
A responsible path is:

- Consult the organization's AI and data-handling policy before sharing any ticket content.
- Engage privacy, compliance, or security experts to confirm what the regulations allow.
- Proceed only with approved tools and controls (for example, policy-sanctioned de-identification).
The takeaway is that uncertainty plus sensitive/regulatory exposure should trigger policy consultation before AI use.
Because the data may be regulated and sensitive, policy-guided review is required before processing it with an external AI service.
Topic: AI Security and Ethics
A Product Owner used generative AI to summarize support tickets and draft a prioritization memo shared with stakeholders.
The memo footer currently says: “Priorities were determined by AI from support tickets; questions should be directed to the model output.” Stakeholders now distrust the memo and are reluctant to use it.
Which replacement footer is the most appropriate human accountability statement for this AI-assisted output?
Best answer: D
What this tests: AI Security and Ethics
Explanation: A good accountability statement makes AI assistance visible but keeps responsibility with a human decision-maker. It should also indicate that the output was reviewed and validated, rather than treating the model as the accountable party. This helps rebuild stakeholder trust without overstating AI reliability or hiding behind the tool.
In governance and ethical use of AI, accountability cannot be delegated to a model. When sharing AI-assisted analyses (like summaries or prioritization inputs), the Product Owner should be transparent that AI was used, state that humans reviewed and validated the output, and make it clear that product decisions and communication remain human-owned.
A strong statement typically:

- Discloses that AI assisted in producing the analysis.
- States that a human reviewed and validated the output.
- Names the Product Owner (or another person) as accountable for the content and decisions.
This reduces automation bias and addresses stakeholder concerns by clarifying who stands behind the content and how it was checked.
It is transparent about AI use while clearly keeping accountability and final judgment with the Product Owner after human review.
Topic: AI Security and Ethics
During a Sprint, the Developers use a generative AI assistant to draft code for a new login API. To save time, they copy the snippet into the product with only a quick “does it compile” check and no peer review, security scanning, or negative-path tests. The Sprint Goal depends on releasing this login API to a small customer segment.
What is the most likely near-term impact of this action?
Best answer: B
What this tests: AI Security and Ethics
Explanation: AI-generated code can contain subtle security flaws or unsafe defaults, and copying it in without proper review/testing increases the chance those flaws ship. The most immediate consequence is a vulnerable Increment reaching users, forcing urgent remediation. That directly impacts value delivery and customer trust in the product.
AI-generated snippets can introduce vulnerabilities such as missing input validation, unsafe authentication/authorization logic, insecure defaults, or accidentally embedded secrets. When a team bypasses normal engineering controls (peer review, static analysis/secret scanning, and tests that cover failure/abuse cases), the near-term risk is that these issues make it into the Increment and are released.
Practical review and testing steps include:

- Peer review of the AI-generated code, as for any other change.
- Static analysis and secret scanning before merge.
- Tests that cover failure and abuse cases, not just the happy path.
The key takeaway is that the immediate impact is increased likelihood of a security incident and emergency rework, not slower, indirect effects.
Skipping review and security testing makes it likely that insecure patterns (e.g., weak validation or hardcoded secrets) ship and must be rapidly fixed, harming trust and delivery.
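The failure- and abuse-case testing the team skipped can be as small as a handful of negative-path assertions. A hedged sketch follows; the validation rules are illustrative, not a complete authentication design.

```python
import re

# Illustrative username policy: 3-32 characters from a safe character set.
USERNAME_RE = re.compile(r"[A-Za-z0-9_.-]{3,32}")

def validate_login_input(username, password):
    """Reject malformed input before it reaches authentication logic."""
    if not USERNAME_RE.fullmatch(username or ""):
        return False
    if not (8 <= len(password or "") <= 128):
        return False
    return True

# Negative paths a "does it compile" check never exercises:
assert validate_login_input("alice_01", "correct horse battery") is True
assert validate_login_input("", "any-password-here") is False             # empty username
assert validate_login_input("a' OR '1'='1", "any-password-here") is False # injection-style input
assert validate_login_input("alice_01", "short") is False                 # password too short
```

Checks like these belong in the Definition of Done alongside peer review and scanning, so AI-generated code clears the same bar as any other code.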
Topic: AI Security and Ethics
A Product Owner notices a pull request includes an AI-generated code snippet for a new “filter orders” feature. The team plans to release the Increment at the end of the Sprint.
Exhibit: AI-generated snippet (excerpt)
```python
def get_orders(user_id, status):
    query = "SELECT * FROM orders WHERE user_id=" + user_id \
            + " AND status='" + status + "'"
    return db.execute(query)
```
What is the best next action supported by the exhibit?
Best answer: D
What this tests: AI Security and Ethics
Explanation: The exhibit shows SQL being built via string concatenation with user-provided values, which is a common SQL injection vulnerability. AI-generated code should be reviewed like any other code, with security-focused review and tests that validate the behavior against malicious inputs before it becomes part of the Increment.
The core issue in the exhibit is that user_id and status are directly concatenated into a SQL statement. That pattern can allow an attacker (or unexpected input) to change the meaning of the query, making it a clear candidate for SQL injection. Because AI-generated snippets can be plausible but unsafe, the right response is to treat them as untrusted and require the normal engineering safeguards before accepting the change.
Practical steps include:

- Treat the snippet as untrusted and require normal peer review.
- Replace string concatenation with parameterized queries or an equivalent safe API.
- Add security-focused tests that exercise malicious input before the change is accepted.
Regenerating code or relying on monitoring does not substitute for eliminating the vulnerability and proving it with tests.
The string-concatenated SQL indicates SQL injection risk, so the snippet should be treated as untrusted and verified with secure coding changes plus security-focused testing.
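The standard fix is parameterization, which the team can then verify with an injection-style test. A minimal sketch using Python's built-in `sqlite3` (the schema and data are invented for illustration):

```python
import sqlite3

def get_orders(db, user_id, status):
    """Parameterized version of the exhibit: placeholders keep user input
    as data, so it cannot change the meaning of the SQL statement."""
    query = "SELECT * FROM orders WHERE user_id = ? AND status = ?"
    return db.execute(query, (user_id, status)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (user_id TEXT, status TEXT)")
db.execute("INSERT INTO orders VALUES ('u1', 'open')")
db.execute("INSERT INTO orders VALUES ('u2', 'open')")

# Normal input returns the expected row:
assert get_orders(db, "u1", "open") == [("u1", "open")]
# An injection-style payload is treated as a literal value, not as SQL:
assert get_orders(db, "u1' OR '1'='1", "open") == []
```

The second assertion is exactly the kind of security-focused test the explanation calls for: it proves the malicious input no longer alters the query.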
Topic: AI Security and Ethics
A Product Owner wants to use a generative AI tool to draft marketing copy and UI icons for the next Increment. Your organization’s policy requires you to avoid unclear ownership or licensing of content used in the product.
Which approach is the safest, policy-friendly way to reduce IP ownership ambiguity when using AI-generated content?
Best answer: C
What this tests: AI Security and Ethics
Explanation: IP ambiguity is highest when you cannot prove who owns the rights to generated content or what it may be derived from. A safe approach is to treat AI outputs as potentially third-party material and only use them after verifying provenance/licensing and documenting approval. This keeps humans accountable and creates an audit trail aligned with policy.
Generative AI can produce text, images, or code that resembles copyrighted or licensed material, and the ownership or permitted use may be unclear without controls. For a Product Owner, a policy-friendly stance is to assume AI outputs are not automatically “yours” and require human accountability plus evidence before incorporating them into an Increment.
Practical controls include:

- Check generated output for verbatim or near-verbatim matches to existing works.
- Verify licensing or ownership before the content enters the Increment.
- Document the human review and approval so there is an audit trail.
Techniques that improve accuracy or privacy are still useful, but they do not, by themselves, resolve IP ownership or licensing uncertainty.
This reduces IP ambiguity by ensuring licensing/ownership is verified and auditable before the content is incorporated.
Use the PSPO-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the PSPO-AI guide on PMExams.com, then return to PM Mastery for timed practice.