PSPO-AI: AI Security and Ethics

Try 10 focused PSPO-AI questions on AI Security and Ethics, with answers and explanations, then continue with PM Mastery.

Open the matching PM Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Topic snapshot

Exam route: PSPO-AI
Topic area: AI Security and Ethics
Blueprint weight: 33%
Page purpose: Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate AI Security and Ethics for PSPO-AI. Work through the 10 questions first, then review the explanations and return to mixed practice in PM Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 33% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original PM Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: AI Security and Ethics

A Product Owner asks an AI assistant to summarize 30 recent support chats and automatically create entries in the company’s customer-case system (the system of record). A week later, stakeholders report mistrust and poor adoption because some summaries contain wrong commitments and one entry includes a customer’s sensitive details in the wrong account. What is the most likely underlying cause?

  • A. The prompt lacked enough domain context about the product
  • B. Success criteria for the summaries were not defined up front
  • C. AI-generated entries were stored without human review or validation
  • D. A prompt-injection attack most likely altered the model’s behavior

Best answer: C

What this tests: AI Security and Ethics

Explanation: Systems of record should not be treated as a direct sink for AI-generated content. When AI outputs are written without review, errors and sensitive data can be misfiled and then propagated as authoritative records, quickly undermining trust. The symptoms point to a missing verification step before persistence, not just better prompting.

The core issue is weak review and validation before storing AI output in a system of record. In privacy- and confidentiality-sensitive contexts (like customer-case systems), AI summaries can hallucinate details, misattribute information, or include sensitive data that should be redacted or routed differently. Once written into the system of record, that output is easily treated as authoritative and can be shared, searched, and reused—amplifying both confidentiality risk and decision-making errors.

A responsible approach is to keep AI output as a draft and require a human to:

  • Verify accuracy against the source chats
  • Remove or minimize sensitive data
  • Confirm correct customer/account association

Better context and success criteria can improve quality, but they do not replace review when persisting information that carries privacy impact.
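As a minimal sketch of that review gate, AI summaries could be held as drafts until a reviewer signs off; the DraftSummary structure and case_system client below are illustrative assumptions, not a specific product's API.

from dataclasses import dataclass

@dataclass
class DraftSummary:
    chat_id: str
    account_id: str
    text: str
    approved: bool = False  # set True only after human review

def persist_summary(draft: DraftSummary, case_system) -> None:
    # Hypothetical gate: AI output stays a draft until a human has verified
    # accuracy, redacted sensitive data, and confirmed the account mapping.
    if not draft.approved:
        raise PermissionError("Human review required before writing to the system of record")
    case_system.create_entry(draft.account_id, draft.text)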

Persisting unreviewed AI output into a system of record increases the chance of privacy leaks and incorrect records becoming “truth.”


Question 2

Topic: AI Security and Ethics

A Product Owner asks a generative AI assistant for security guidance for a new checkout feature that will store customers’ payment details. The AI responds: “PCI DSS allows storing CVV as long as it’s encrypted. A custom encryption scheme is fine if it’s strong.”

The release is high impact and customer trust is critical. What is the best next step?

  • A. Ship now and schedule a security hardening Sprint later
  • B. Consult PCI DSS and your security experts before deciding
  • C. Run a quick penetration test to validate the AI guidance
  • D. Use the AI response to draft requirements for Developers

Best answer: B

What this tests: AI Security and Ethics

Explanation: This is a high-impact, security-sensitive decision where incorrect guidance could create serious harm. The AI output includes specific claims about a security standard and recommended practices, which are common areas for hallucinations. The responsible approach is to validate with authoritative security guidance (e.g., the relevant standard) and qualified security experts before making product decisions or creating requirements.

Generative AI can produce confident-sounding security advice that is incomplete or wrong, especially when it references standards, compliance rules, or “best practices.” When the decision is high impact (payment data, customer trust), the Product Owner should treat the output as a draft hypothesis and consult authoritative sources and accountable experts.

A practical next step is:

  • Identify what must be true (e.g., what the standard actually permits)
  • Verify with official guidance and internal security SMEs
  • Only then translate verified constraints into Product Backlog items

Testing can complement verification, but it does not replace confirming the correct requirements and obligations up front.

The AI advice is both high-risk and plausibly wrong, so the Product Owner should verify against authoritative standards and security specialists.


Question 3

Topic: AI Security and Ethics

A stakeholder asks you to add a generative AI feature that “drafts great email messages” inside your product so users can “communicate faster.” Nothing else is specified.

To address the risk that AI could be used to generate phishing content, what is the FIRST thing you should clarify?

  • A. What success metric defines “communicate faster”
  • B. How AI-generated emails will be controlled before external sending
  • C. Which model settings will maximize creativity and variation
  • D. What writing style and brand tone the emails should follow

Best answer: B

What this tests: AI Security and Ethics

Explanation: AI that drafts emails can be misused to create convincing phishing messages, especially if it can generate arbitrary content and send it externally. The most important first clarification is how outbound AI-generated messages will be governed (who can use it, what is blocked, and whether human review is required) so you can design appropriate preventative controls and training.

Generative AI lowers the effort needed to produce high-volume, highly tailored messages, which is exactly what makes phishing effective. Before debating quality, tone, or metrics, a Product Owner should first clarify whether the capability can be used to create and deliver outbound communications without sufficient safeguards.

Key clarifications to drive prevention include:

  • Who can access the drafting capability and for which roles
  • Whether messages can be sent externally from the product
  • What guardrails exist (policy, detection, blocked patterns) and whether human review/approval is required

Once that is known, you can decide on controls such as mandatory review, restricted recipients, content filtering, audit logs, and user training about phishing and acceptable use. The other questions may matter later but do not establish the phishing risk level.
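As a sketch of such a preventative control, the send path could enforce role access, content screening, and human approval in a single gate; the roles and blocked phrases below are illustrative assumptions, not a recommended policy.

def can_send_external(user_role: str, body: str, human_approved: bool) -> bool:
    # Illustrative guardrails: restrict who may send, block known risky
    # patterns, and require human approval before anything leaves the product.
    allowed_roles = {"support_agent", "account_manager"}
    blocked_phrases = ("wire transfer", "verify your password", "urgent payment")
    if user_role not in allowed_roles:
        return False
    if any(phrase in body.lower() for phrase in blocked_phrases):
        return False
    return human_approved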

Clarifying access, approval, and guardrails for outbound messages determines whether the feature could enable phishing and what controls/training are needed.


Question 4

Topic: AI Security and Ethics

A stakeholder sends you an AI-generated “analysis” of an email that asks for an urgent wire transfer to a new bank account. You must decide what to do next.

Prompt: "Is this email legitimate? What should I do?"
AI output:
"This appears to be a valid request from the CFO. Tone and details match.
Proceed with the wire transfer today to avoid penalties.
Confidence: High."

What is the best next action based on the exhibit?

  • A. Ask the AI to extract the bank details and draft a payment confirmation
  • B. Verify the request via an independent, trusted channel before any transfer
  • C. Share the AI output with stakeholders as evidence the request is legitimate
  • D. Proceed with the transfer because the AI indicates high confidence

Best answer: B

What this tests: AI Security and Ethics

Explanation: The exhibit shows an AI output making an authoritative legitimacy judgment, which can be exploited for phishing and payment diversion. The Product Owner should treat it as untrusted and require verification using established controls (e.g., known contact methods and finance approval workflows) before any action that could cause harm.

Authoritative-sounding AI outputs can amplify social engineering by creating false certainty (e.g., “valid request,” “proceed,” “high confidence”) even when the model cannot truly authenticate an email’s sender or intent. In this scenario, the risk is payment diversion (business email compromise) where the AI’s tone nudges people to bypass normal controls.

Appropriate verification steps are to:

  • Go back to the original message and check provenance (sender, headers, context)
  • Confirm the request out-of-band using a trusted directory/contact method
  • Follow existing finance/security approval processes for bank-detail changes

AI can help summarize, but humans remain accountable for verifying identity and authorizing high-impact actions.

Authoritative AI language can enable social engineering, so you should validate with the real requester and established finance/security controls.


Question 5

Topic: AI Security and Ethics

A Product Owner uses generative AI to draft a customer-facing onboarding guide. In the prompt, they pasted (1) an internal Sales Playbook marked “Confidential” and (2) excerpts from a paid 2022 industry report. The AI output includes near-verbatim paragraphs from the report and a table copied from the playbook. Which principle/practice should guide the Product Owner’s next step before publishing?

  • A. Be transparent by disclosing that AI helped write the guide
  • B. Use experiment-first learning by A/B testing the guide with users
  • C. Protect privacy by removing any personal data from the guide
  • D. Validate provenance; reuse only public-domain or properly licensed content

Best answer: D

What this tests: AI Security and Ethics

Explanation: When AI outputs include near-verbatim text from a paid report and internal confidential material, the key risk is intellectual property misuse. The Product Owner remains accountable for ensuring content is public domain, appropriately licensed, or authorized for reuse, and for keeping proprietary materials from being disclosed.

AI-assisted creation does not automatically make the output “free to use.” Content from a paid report is typically copyrighted, and your internal playbook is proprietary/confidential; both require authorization and careful handling. Public domain content can generally be reused, but you still need to confirm the source and avoid mixing in restricted material.

The responsible practice is to validate the provenance of generated text (e.g., check for verbatim passages, identify likely sources) and confirm licensing/permission before publishing. If rights cannot be confirmed, rewrite from first principles or use clearly licensed/public-domain sources and keep proprietary content out of customer-facing materials.
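One rough way to screen for near-verbatim passages is word n-gram overlap between the draft and a restricted source; this sketch is a first-pass filter only, not a substitute for a licensing or legal review.

def shared_ngrams(draft: str, source: str, n: int = 8) -> set:
    # Returns word n-grams appearing in both texts; long shared runs
    # suggest copied passages that need a rights check before publishing.
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(draft) & ngrams(source)

If shared_ngrams(guide_draft, paid_report_text) is non-empty, treat those passages as potentially unlicensed and rewrite or remove them.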

AI output can contain copyrighted or proprietary material, so the PO must verify rights and remove/replace unlicensed content.


Question 6

Topic: AI Security and Ethics

A Product Owner for a health benefits product wants to paste recent customer support tickets into a third-party generative AI service to summarize themes for Product Backlog refinement. The tickets sometimes include a member ID, diagnosis details, and payment dispute information. The Product Owner is unsure what the organization’s AI/data policy allows for regulated or sensitive information.

What is the best approach?

  • A. Run a small pilot with a few tickets to assess usefulness, then decide
  • B. Proceed because the output will be used only for internal product discovery
  • C. Proceed if you remove names and email addresses from the tickets
  • D. Consult the organization’s AI/data privacy policy and compliance experts before using the tickets

Best answer: D

What this tests: AI Security and Ethics

Explanation: When AI is used with potentially regulated or highly sensitive data (health and financial), the primary discriminator is governance and compliance, not speed or convenience. The accountable choice is to consult organizational policy and the appropriate experts before sending data to an external AI service, and only proceed under approved controls.

The key issue is that the tickets may contain regulated/sensitive information (health details, identifiers, and financial disputes) and would be processed by a third-party AI service. In this situation, the Product Owner should not “try it and see” or rely on informal masking, because organizational policy and applicable regulatory requirements may dictate what data can be shared, under what protections, and with which vendors.

A responsible path is:

  • Pause sharing real ticket content externally
  • Consult internal AI/data privacy policy and compliance (e.g., privacy/security)
  • Use approved controls (e.g., de-identified or synthetic data) if exploration must continue

The takeaway is that uncertainty plus sensitive/regulatory exposure should trigger policy consultation before AI use.

Because the data may be regulated and sensitive, policy-guided review is required before processing it with an external AI service.


Question 7

Topic: AI Security and Ethics

A Product Owner used generative AI to summarize support tickets and draft a prioritization memo shared with stakeholders.

The memo footer currently says: “Priorities were determined by AI from support tickets; questions should be directed to the model output.” Stakeholders now distrust the memo and are reluctant to use it.

Which replacement footer is the most appropriate human accountability statement for this AI-assisted output?

  • A. This memo is AI-generated; any errors are the responsibility of the AI system.
  • B. This memo is produced by AI and should be trusted because it analyzes more data than humans can.
  • C. This memo was created with AI; the rationale cannot be disclosed due to confidentiality.
  • D. This memo was drafted with AI assistance; the Product Owner remains accountable and has reviewed and validated the results before sharing.

Best answer: D

What this tests: AI Security and Ethics

Explanation: A good accountability statement makes AI assistance visible but keeps responsibility with a human decision-maker. It should also indicate that the output was reviewed and validated, rather than treating the model as the accountable party. This helps rebuild stakeholder trust without overstating AI reliability or hiding behind the tool.

In governance and ethical use of AI, accountability cannot be delegated to a model. When sharing AI-assisted analyses (like summaries or prioritization inputs), the Product Owner should be transparent that AI was used, state that humans reviewed and validated the output, and make it clear that product decisions and communication remain human-owned.

A strong statement typically:

  • Discloses AI assistance
  • Names the accountable human role
  • Signals review/validation occurred
  • Avoids implying the model is authoritative or responsible

This reduces automation bias and addresses stakeholder concerns by clarifying who stands behind the content and how it was checked.

It is transparent about AI use while clearly keeping accountability and final judgment with the Product Owner after human review.


Question 8

Topic: AI Security and Ethics

During a Sprint, the Developers use a generative AI assistant to draft code for a new login API. To save time, they copy the snippet into the product with only a quick “does it compile” check and no peer review, security scanning, or negative-path tests. The Sprint Goal depends on releasing this login API to a small customer segment.

What is the most likely near-term impact of this action?

  • A. A gradual decline in maintainability due to inconsistent coding style across the codebase
  • B. A security vulnerability is likely to reach users, triggering urgent rework and reduced trust
  • C. Reduced transparency in Product Backlog ordering because AI influenced prioritization decisions
  • D. Increased long-term infrastructure costs because AI-generated code is less efficient by default

Best answer: B

What this tests: AI Security and Ethics

Explanation: AI-generated code can contain subtle security flaws or unsafe defaults, and copying it in without proper review/testing increases the chance those flaws ship. The most immediate consequence is a vulnerable Increment reaching users, forcing urgent remediation. That directly impacts value delivery and customer trust in the product.

AI-generated snippets can introduce vulnerabilities such as missing input validation, unsafe authentication/authorization logic, insecure defaults, or accidentally embedded secrets. When a team bypasses normal engineering controls (peer review, static analysis/secret scanning, and tests that cover failure/abuse cases), the near-term risk is that these issues make it into the Increment and are released.

Practical review and testing steps include:

  • Treat AI output as untrusted: require peer review/pairing
  • Run security checks (SAST, dependency/secret scanning)
  • Add negative and abuse-case tests (e.g., invalid tokens, unexpected input)
  • Perform a quick threat check for the new endpoint
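For example, a negative-path test for the login API might look like the sketch below; the /login route and client test fixture are assumptions for illustration.

import pytest

@pytest.mark.parametrize("token", ["", "not-a-jwt", "expired.token.here", None])
def test_login_rejects_invalid_tokens(client, token):
    # Abuse-case check: malformed or missing credentials must be rejected,
    # never treated as authenticated.
    response = client.post("/login", json={"token": token})
    assert response.status_code in (400, 401)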

The key takeaway is that the immediate impact is increased likelihood of a security incident and emergency rework, not slower, indirect effects.

Skipping review and security testing makes it likely that insecure patterns (e.g., weak validation or hardcoded secrets) ship and must be rapidly fixed, harming trust and delivery.


Question 9

Topic: AI Security and Ethics

A Product Owner notices a pull request includes an AI-generated code snippet for a new “filter orders” feature. The team plans to release the Increment at the end of the Sprint.

Exhibit: AI-generated snippet (excerpt)

def get_orders(user_id, status):
    query = "SELECT * FROM orders WHERE user_id=" + user_id \
            + " AND status='" + status + "'"
    return db.execute(query)

What is the best next action supported by the exhibit?

  • A. Merge now and rely on production monitoring to detect misuse later
  • B. Ask the AI to regenerate the snippet with clearer formatting and then merge
  • C. Approve it because the inputs come from the UI and the feature is small
  • D. Have the Developers refactor to parameterized queries and add tests that probe for injection before merging

Best answer: D

What this tests: AI Security and Ethics

Explanation: The exhibit shows SQL being built via string concatenation with user-provided values, which is a common SQL injection vulnerability. AI-generated code should be reviewed like any other code, with security-focused review and tests that validate the behavior against malicious inputs before it becomes part of the Increment.

The core issue in the exhibit is that user_id and status are directly concatenated into a SQL statement. That pattern can allow an attacker (or unexpected input) to change the meaning of the query, making it a clear candidate for SQL injection. Because AI-generated snippets can be plausible but unsafe, the right response is to treat them as untrusted and require the normal engineering safeguards before accepting the change.

Practical steps include:

  • Replace concatenated SQL with parameterized queries (prepared statements).
  • Add automated tests (unit/integration) that include malicious payloads to ensure inputs cannot alter the query.
  • Run the team’s normal security checks (e.g., code review, scanning) as part of the Definition of Done.
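As a minimal sketch, the exhibit could be rewritten with placeholders so user input is passed as data rather than SQL text; this assumes a DB-API-style connection (as in sqlite3) with ? placeholders.

def get_orders(user_id, status):
    # Parameterized query: the driver binds the values, so input
    # cannot change the structure of the SQL statement.
    query = "SELECT * FROM orders WHERE user_id = ? AND status = ?"
    return db.execute(query, (user_id, status))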

Regenerating code or relying on monitoring does not substitute for eliminating the vulnerability and proving it with tests.

The string-concatenated SQL indicates SQL injection risk, so the snippet should be treated as untrusted and verified with secure coding changes plus security-focused testing.


Question 10

Topic: AI Security and Ethics

A Product Owner wants to use a generative AI tool to draft marketing copy and UI icons for the next Increment. Your organization’s policy requires you to avoid unclear ownership or licensing of content used in the product.

Which approach is the safest, policy-friendly way to reduce IP ownership ambiguity when using AI-generated content?

  • A. Set the model to deterministic output to prevent hallucinations
  • B. Use retrieval-augmented generation (RAG) so outputs are grounded in your internal documents
  • C. Treat AI outputs as third-party content: require provenance checks and documented approvals before use
  • D. Remove personally identifiable information (PII) from prompts and outputs before publishing

Best answer: C

What this tests: AI Security and Ethics

Explanation: IP ambiguity is highest when you cannot prove who owns the rights to generated content or what it may be derived from. A safe approach is to treat AI outputs as potentially third-party material and only use them after verifying provenance/licensing and documenting approval. This keeps humans accountable and creates an audit trail aligned with policy.

Generative AI can produce text, images, or code that resembles copyrighted or licensed material, and the ownership or permitted use may be unclear without controls. For a Product Owner, a policy-friendly stance is to assume AI outputs are not automatically “yours” and require human accountability plus evidence before incorporating them into an Increment.

Practical controls include:

  • Verify provenance/licensing (and required attribution) for any AI-generated asset you intend to ship.
  • Keep records of prompts/outputs and the approval decision to support transparency and audits.
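One lightweight way to create that audit trail is to log each approval decision alongside the prompt, output location, and licensing evidence; the fields below are an illustrative sketch, not a mandated schema.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AssetApproval:
    # Audit record tying an AI-generated asset to its provenance check
    # and the accountable human who approved its use.
    asset_id: str
    prompt: str
    output_ref: str      # where the generated asset is stored
    license_notes: str   # provenance/licensing evidence reviewed
    approved_by: str     # accountable human, not the model
    approved_at: datetime

record = AssetApproval(
    asset_id="icon-042",
    prompt="Minimal cart icon, flat style",
    output_ref="assets/icons/cart.svg",
    license_notes="Reverse-image search clear; license review completed",
    approved_by="product.owner@example.com",
    approved_at=datetime.now(timezone.utc),
)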

Techniques that improve accuracy or privacy are still useful, but they do not, by themselves, resolve IP ownership or licensing uncertainty.

This reduces IP ambiguity by ensuring licensing/ownership is verified and auditable before the content is incorporated.

Continue with full practice

Use the PSPO-AI Practice Test page for the full PM Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Free review resource

Read the PSPO-AI guide on PMExams.com, then return to PM Mastery for timed practice.

Revised on Thursday, May 14, 2026