AWS AIF-C01: AI Security and Governance

Try 10 focused AWS AIF-C01 questions on AI Security and Governance, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

  • Try AWS AIF-C01 on Web
  • View full AWS AIF-C01 practice page

Topic snapshot

| Field | Detail |
| --- | --- |
| Exam route | AWS AIF-C01 |
| Topic area | Security, Compliance, and Governance for AI Solutions |
| Blueprint weight | 14% |
| Page purpose | Focused sample questions before returning to mixed practice |

How to use this topic drill

Use this page to isolate Security, Compliance, and Governance for AI Solutions for AWS AIF-C01. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

| Pass | What to do | What to record |
| --- | --- | --- |
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |

Blueprint context: 14% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Security, Compliance, and Governance for AI Solutions

An organization is establishing governance requirements for AI systems (including generative AI applications on AWS). Which TWO statements reflect common expectations for transparency standards and team training? (Select TWO.)

Options:

  • A. Provide role-based responsible AI training with periodic refreshers for relevant staff

  • B. Maintain documentation that explains intended use, data sources, and known limitations

  • C. Only ML engineers need training because they are the only ones affecting AI risk

  • D. If using a managed service, the cloud provider fully covers governance so internal training is optional

  • E. Once deployed, AI documentation can remain unchanged unless the model is retrained

  • F. Transparency requires open-sourcing the foundation model weights and training code

Correct answers: A and B

Explanation: AI governance commonly includes transparency requirements (clear documentation and disclosures about what the system does and where it can fail) and team enablement requirements (role-based training for people who build, operate, or use AI). These controls help stakeholders understand system behavior and help staff apply policies consistently across the AI lifecycle.

At a high level, governance for AI systems typically expects two things relevant here: transparency and preparedness. Transparency means keeping artifacts that let stakeholders understand and audit the system, such as documentation of intended purpose, data/input sources, key assumptions, known limitations, and how to escalate issues. Team training expectations commonly include role-based education (developers, reviewers, operators, and business users) on responsible AI, data handling, and operational procedures, with refreshers as the system and policies evolve. These practices apply whether you build models yourself or consume them through managed services; using managed services can reduce operational burden, but it does not remove organizational accountability for how the system is used.

  • Documenting purpose and limits aligns with typical transparency and audit expectations.
  • Role-based training is a common governance control to reduce misuse and improve compliance.
  • An open-source requirement is not a general transparency standard; transparency focuses on explainability and disclosures, not publishing weights.
  • Provider-only governance is incorrect because organizations still own internal policies, training, and oversight for their use case.

Question 2

Topic: Security, Compliance, and Governance for AI Solutions

A company hosts an internal chatbot in private subnets in a VPC. The chatbot calls Amazon Bedrock to generate responses. A security review requires that AI service traffic must not traverse the public internet.

Exhibit: Security review notes

1| Workload subnet: private (no IGW)
2| Current egress: NAT Gateway
3| Destination: bedrock-runtime.us-east-1.amazonaws.com:443
4| VPC Flow Logs: ACCEPT 10.0.2.15 -> 54.x.x.x:443
5| Finding: "AI calls use public AWS service endpoint"
6| Requirement: "Use private connectivity where available"

Which action is the best next step to meet the requirement and reduce exposure?

Options:

  • A. Add AWS WAF rules to filter the Bedrock Runtime API traffic

  • B. Enable SSE-KMS encryption for all chatbot prompts and responses

  • C. Use an Amazon S3 gateway endpoint for Bedrock Runtime requests

  • D. Create an interface VPC endpoint (AWS PrivateLink) for Bedrock Runtime

Best answer: D

Explanation: The exhibit indicates the chatbot reaches bedrock-runtime through a NAT Gateway and a public IP destination (lines 2–4), which is flagged as using a public AWS service endpoint (line 5). Using AWS PrivateLink through an interface VPC endpoint routes Bedrock Runtime API calls privately within the AWS network, reducing internet exposure while meeting the “private connectivity” requirement (line 6).

AWS PrivateLink provides private connectivity from a VPC to supported AWS services by creating an interface VPC endpoint (powered by PrivateLink). In the exhibit, the chatbot’s Bedrock calls currently egress through a NAT Gateway (line 2) to bedrock-runtime...:443 (line 3) and resolve to a public IP in Flow Logs (line 4), leading to the finding that AI calls use a public service endpoint (line 5).

The best next step is to create the interface VPC endpoint for Amazon Bedrock Runtime and route the chatbot’s traffic to it, so:

  • traffic stays on the AWS network instead of using NAT/public IP paths (lines 2–4)
  • the “Use private connectivity where available” requirement is satisfied (line 6)

This addresses network exposure; encryption and web filtering controls do not change the public-endpoint egress shown in the exhibit.

  • WAF mismatch doesn’t apply because the exhibit shows egress to an AWS managed service endpoint, not an internet-facing app endpoint (lines 2–4).
  • Wrong endpoint type fails because S3 gateway endpoints are only for Amazon S3, not bedrock-runtime (line 3).
  • Encryption vs connectivity helps protect data at rest but does not change the public endpoint path flagged in the finding (lines 2–5).
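
The fix can be sketched with boto3-style parameters. This is an illustrative sketch only: the helper function name and every ID are placeholders, not values from the scenario. The resulting request is what you would pass to the EC2 client's create_vpc_endpoint call.

```python
# Illustrative sketch: build the request for an interface VPC endpoint
# (AWS PrivateLink) to Amazon Bedrock Runtime. Helper name and IDs are placeholders.
def build_bedrock_endpoint_request(vpc_id, subnet_ids, security_group_ids,
                                   region="us-east-1"):
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.bedrock-runtime",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": security_group_ids,
        # Private DNS lets the chatbot keep calling
        # bedrock-runtime.<region>.amazonaws.com, now resolving to private IPs.
        "PrivateDnsEnabled": True,
    }

request = build_bedrock_endpoint_request(
    "vpc-0123456789abcdef0",       # placeholder VPC
    ["subnet-0aaa0aaa0aaa0aaa0"],  # placeholder private subnet
    ["sg-0bbb0bbb0bbb0bbb0"],      # placeholder security group
)
# With credentials configured, you would run:
#   boto3.client("ec2").create_vpc_endpoint(**request)
```

Once the endpoint is in service, Flow Logs should show the Bedrock traffic terminating at a private IP inside the VPC instead of a public address.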

Question 3

Topic: Security, Compliance, and Governance for AI Solutions

A company is preparing evidence for an internal audit of an AWS-based generative AI workload. The auditor requests automated assessment of software vulnerabilities and exposure for compute and container artifacts.

Exhibit: Audit evidence request (excerpt)

Control: Vulnerability management
Scope: EC2 instances and ECR container images
Evidence needed: CVE findings with severity and affected resources
Frequency: Continuous

Which AWS service should the company use to meet this audit request?

Options:

  • A. Amazon Inspector

  • B. Amazon GuardDuty

  • C. Amazon Macie

  • D. AWS Config

Best answer: A

Explanation: The exhibit asks for continuous CVE vulnerability findings, including severity and affected resources, specifically for EC2 instances and ECR images. Amazon Inspector is the AWS service designed to automatically assess vulnerabilities and exposure for these resource types and produce findings suitable as audit evidence.

This request is for vulnerability management evidence: continuous detection of CVEs, severities, and the impacted resources. In the exhibit, the scope explicitly includes “EC2 instances and ECR container images” and the evidence requires “CVE findings with severity,” which matches Amazon Inspector’s purpose as a managed vulnerability assessment service for AWS workloads.

Inspector produces vulnerability findings (including CVE IDs and severity) tied to affected resources, which can be used directly as audit evidence for the stated control. The key cue is that the audit is not asking for data classification, threat detection, or configuration drift, but for vulnerability assessment results.

  • Data discovery (Macie) focuses on sensitive data identification in S3, not CVE scanning.
  • Threat detection (GuardDuty) detects suspicious activity and compromise signals, not software vulnerability inventories.
  • Config compliance (AWS Config) tracks resource configuration changes and rule compliance, not CVE severity findings.
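
Collecting the requested evidence can be sketched as a findings filter. This is a hedged sketch, not a definitive implementation: the helper name is ours, and the filter shape follows the Inspector v2 ListFindings filterCriteria format (resourceType and severity criteria).

```python
# Hedged sketch: build an Amazon Inspector (Inspector v2) findings filter
# matching the audit scope (EC2 instances and ECR images, by severity).
def build_inspector_filter(resource_types, severities):
    return {
        "resourceType": [
            {"comparison": "EQUALS", "value": rt} for rt in resource_types
        ],
        "severity": [
            {"comparison": "EQUALS", "value": s} for s in severities
        ],
    }

audit_filter = build_inspector_filter(
    ["AWS_EC2_INSTANCE", "AWS_ECR_CONTAINER_IMAGE"],
    ["CRITICAL", "HIGH"],
)
# With credentials configured, you would run:
#   boto3.client("inspector2").list_findings(filterCriteria=audit_filter)
```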

Question 4

Topic: Security, Compliance, and Governance for AI Solutions

A team is deploying a generative AI application on AWS and wants a managed service that runs automated best-practice checks across their AWS account and provides recommendations that include both cost optimization and security improvements. Which AWS service meets this requirement?

Options:

  • A. AWS Trusted Advisor

  • B. AWS Security Hub

  • C. AWS Config

  • D. AWS Audit Manager

Best answer: A

Explanation: AWS Trusted Advisor is designed to evaluate an AWS environment against AWS best practices and return actionable recommendations. It uniquely fits the requirement because it includes checks across multiple categories, including both cost optimization and security, in a single service view.

The deciding attribute is “automated best-practice checks that include both cost and security recommendations.” AWS Trusted Advisor continuously evaluates your AWS account and surfaces recommendations across several best-practice categories, including cost optimization and security, which matches the team’s stated goal.

Services like AWS Security Hub, AWS Config, and AWS Audit Manager can support governance and compliance, but they do not provide the same broad, multi-category best-practice check set that includes cost optimization. Key takeaway: use Trusted Advisor when you need AWS best-practice checks and recommendations spanning cost and security.

  • Security-only focus fits AWS Security Hub, but it doesn’t provide cost optimization best-practice checks.
  • Config rule compliance helps track configuration and evaluate resource compliance, not broad cost-and-security best-practice checks.
  • Audit evidence collection is what AWS Audit Manager provides; it doesn’t generate cost optimization recommendations.

Question 5

Topic: Security, Compliance, and Governance for AI Solutions

A company is building a RAG-based GenAI assistant using Amazon Bedrock. Conversation transcripts and model outputs are stored in Amazon S3, and embeddings are stored in an Amazon OpenSearch Service domain.

A new requirement states that all stored data must be encrypted at rest with customer managed keys, keys must be centrally auditable, and the team wants the lowest ongoing operational effort for key lifecycle management (for example, rotation and access revocation).

Which change best meets these requirements?

Options:

  • A. Store an AES key in Secrets Manager and encrypt in the app

  • B. Keep S3 SSE-S3 and rely on default OpenSearch encryption

  • C. Use AWS KMS CMKs for S3 SSE-KMS and OpenSearch encryption

  • D. Use AWS CloudHSM keys and implement custom rotation processes

Best answer: C

Explanation: AWS KMS is the AWS managed service for creating and controlling encryption keys and using them with AWS services. Using KMS customer managed keys with S3 SSE-KMS and OpenSearch encryption meets the “customer managed keys” requirement while keeping operations low through native integrations, centralized auditing (for example, AWS CloudTrail), and managed key lifecycle features like rotation.

AWS KMS lets you create, manage, and control access to encryption keys (KMS keys) that AWS services can use to encrypt data at rest. In this scenario, configuring S3 to use SSE-KMS and configuring OpenSearch Service to use a customer managed KMS key satisfies the requirement for customer managed keys without the team having to build and operate custom encryption, storage, rotation, or audit tooling.

KMS also supports operational needs that commonly show up in AI solutions storing prompts, outputs, and embeddings:

  • Centralized access control through IAM and KMS key policies
  • Auditing of key usage with AWS CloudTrail
  • Key lifecycle management such as enabling rotation (where supported) and quickly disabling a key to revoke future decrypt access

Compared with application-managed encryption or HSM-managed designs, KMS integrations reduce operational effort while still giving the customer control of the keys.

  • Default encryption only doesn’t meet the stated customer managed key requirement.
  • App-side encryption with Secrets Manager increases latency and operational burden, and Secrets Manager is not a key management and usage-auditing service like KMS.
  • CloudHSM for all keys can satisfy customer control but is operationally heavier and costlier than KMS for this use case.
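
The S3 side of the chosen approach can be sketched as SSE-KMS parameters for an object write. A minimal sketch under stated assumptions: the bucket, object key, and KMS key ARN are placeholders, and the helper name is ours.

```python
# Hedged sketch: put_object parameters that encrypt a stored transcript with a
# customer managed KMS key (SSE-KMS). All names and ARNs are placeholders.
def sse_kms_put_params(bucket, key, body, kms_key_arn):
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # SSE-KMS rather than SSE-S3
        "SSEKMSKeyId": kms_key_arn,         # the customer managed key
    }

params = sse_kms_put_params(
    "example-transcripts-bucket",                       # placeholder
    "chats/session-001.json",                           # placeholder
    b'{"prompt": "...", "response": "..."}',
    "arn:aws:kms:us-east-1:111122223333:key/example",   # placeholder
)
# With credentials configured: boto3.client("s3").put_object(**params)
```

Disabling the referenced KMS key later revokes future decryption of these objects, which is the low-effort revocation path the requirement asks for.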

Question 6

Topic: Security, Compliance, and Governance for AI Solutions

Which statement is INCORRECT about encryption requirements for AI workloads on AWS?

Options:

  • A. Encrypt data at rest in Amazon S3 with SSE-KMS for prompts and training data

  • B. Using AWS PrivateLink removes the need for TLS because traffic stays on the AWS network

  • C. Use AWS KMS keys to control, audit, and manage encryption keys used for data at rest

  • D. Use TLS/HTTPS to encrypt data in transit when calling managed AI services

Best answer: B

Explanation: Private connectivity (such as AWS PrivateLink) does not replace encryption in transit. AI workloads should generally encrypt sensitive data at rest (for example, with SSE-KMS) and encrypt data in transit (for example, with TLS/HTTPS) to protect data confidentiality as it moves between components and services.

The core requirement is to protect sensitive AI data both at rest and in transit. Encrypting at rest (for example, storing prompts, embeddings, documents, and logs in Amazon S3 using SSE-KMS) protects data on disk/backing storage. Encrypting in transit (typically TLS/HTTPS) protects requests and responses as they traverse networks, including calls to AWS managed AI services.

AWS PrivateLink helps keep traffic on private IP space and reduces exposure to the public internet, but it does not inherently encrypt application payloads; TLS is still the standard control for in-transit encryption and end-to-end protection. AWS KMS is commonly used to manage and audit the keys used for at-rest encryption across AWS services.

  • Private networking vs encryption: The claim that PrivateLink removes the need for TLS is unsafe; private routing is not a substitute for in-transit encryption.
  • S3 at-rest protection: Using SSE-KMS is a common way to encrypt AI artifacts stored in S3.
  • TLS for service calls: Using HTTPS/TLS is the standard approach to encrypt data in transit to managed AI services.
  • Key management and auditability: AWS KMS helps control, rotate (as needed), and audit use of encryption keys for data at rest.
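
One common way to make the in-transit requirement enforceable for S3 is a bucket policy that denies requests made without TLS, using the aws:SecureTransport condition key. A sketch building such a policy as a Python dict; the bucket name is a placeholder.

```python
# Hedged sketch: an S3 bucket policy that denies any request not made over
# HTTPS. The aws:SecureTransport condition key is false for plain-HTTP requests.
def deny_insecure_transport_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # the bucket itself
                    f"arn:aws:s3:::{bucket}/*",     # all objects in it
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

policy = deny_insecure_transport_policy("example-ai-artifacts")  # placeholder name
```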

Question 7

Topic: Security, Compliance, and Governance for AI Solutions

A company is adopting generative AI on AWS to create an internal support assistant using Amazon Bedrock. The assistant will summarize employee tickets and sometimes reference internal knowledge articles. The security team is defining AI governance so the solution meets compliance expectations and can be audited.

Which action is INCORRECT as part of an AI governance process for this system?

Options:

  • A. Maintain documentation of data sources and evaluation results for audits

  • B. Enable monitoring and periodic review of outputs for risk and policy compliance

  • C. Let developers connect production ticket data to the model without review or audit logging

  • D. Define roles, approvals, and change control for model and prompt updates

Best answer: C

Explanation: AI governance provides oversight, accountability, and evidence that an AI system is controlled across its lifecycle. Allowing direct use of production data with no review or audit trail removes key governance safeguards such as access control, traceability, and risk management. Governance processes are required to meet compliance obligations and to reliably detect and correct issues over time.

AI governance is the set of lifecycle processes that ensure an AI system is used safely, compliantly, and consistently with business and security requirements. In this scenario, governance is required because the assistant touches enterprise data and must be auditable; you need clear accountability, controlled changes, and evidence (logs and documentation) that policies are followed.

A governance process commonly includes:

  • Defined roles and approvals for changes (who can modify prompts/models and how)
  • Documentation of data sources, intended use, and evaluation/validation evidence
  • Ongoing monitoring and review to detect drift, policy violations, and emerging risks

Bypassing review and audit logging while connecting production data undermines oversight and traceability, which are core governance principles.

  • Change management is a core governance control to prevent unreviewed updates from introducing new risk.
  • Audit evidence (data lineage and evaluation records) supports compliance reviews and incident investigation.
  • Ongoing monitoring helps detect harmful or noncompliant outputs after deployment.

Question 8

Topic: Security, Compliance, and Governance for AI Solutions

Which TWO statements about the AWS shared responsibility model for securing AI solutions are INCORRECT? (Select TWO.)

Options:

  • A. Customers should enable logging and auditability (for example, AWS CloudTrail) to meet governance and compliance requirements.

  • B. AWS secures the underlying infrastructure for Amazon Bedrock and SageMaker AI, including physical facilities and the virtualization layer.

  • C. AWS automatically encrypts customer AI datasets in Amazon S3 using customer-managed AWS KMS keys without any customer configuration.

  • D. Customers must mitigate risks from prompts and model outputs (for example, PII leakage) using controls such as Bedrock Guardrails and application logic.

  • E. Customers are responsible for configuring IAM permissions that control who can invoke models and access data.

  • F. Because Amazon Bedrock is fully managed, AWS is responsible for defining customers’ data classification and retention policies for prompts and responses.

Correct answers: C and F

Explanation: Under the AWS shared responsibility model, AWS is responsible for security of the cloud, while customers are responsible for security in the cloud. For managed AI services, AWS secures the service infrastructure, but customers still configure identity, data protection choices, and governance controls for their own data and usage.

The shared responsibility model splits duties between AWS and the customer. For AI services such as Amazon Bedrock and SageMaker AI, AWS handles security of the underlying cloud and managed service components (facilities, hardware, and core service operation). Customers are still responsible for how they use the service: controlling access with IAM, choosing and configuring data protection (including whether to use SSE-KMS and how KMS keys are administered), setting governance policies like data classification and retention, and implementing safety controls for prompts and outputs (for example, Bedrock Guardrails) along with logging for audit.

Key takeaway: “Fully managed” reduces infrastructure operations for the customer, but it does not transfer the customer’s identity, data governance, and usage responsibilities to AWS.

  • SSE-KMS is not automatic: Using customer-managed KMS keys requires customer configuration and key policy administration.
  • Governance stays with the customer: Classification and retention for prompts/responses are customer decisions, even when the model hosting is managed.
  • AWS secures the platform: AWS remains responsible for the physical and virtualization layers that run the managed AI services.
  • Customers secure usage: IAM access control, prompt/output risk controls, and audit logging are part of security in the cloud.
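
The customer-side IAM responsibility can be illustrated with a least-privilege policy that allows only model invocation on one specific model. A hedged sketch: the helper name and model ARN are illustrative, not from the question.

```python
# Hedged sketch: least-privilege IAM policy allowing only bedrock:InvokeModel
# on a single model. The model ARN is a placeholder.
def invoke_only_policy(model_arn):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["bedrock:InvokeModel"],  # no management or data actions
                "Resource": [model_arn],
            }
        ],
    }

policy = invoke_only_policy(
    "arn:aws:bedrock:us-east-1::foundation-model/example-model"  # placeholder
)
```

Scoping the Resource to one model ARN, rather than "*", is the kind of "security in the cloud" decision that stays with the customer even on a fully managed service.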

Question 9

Topic: Security, Compliance, and Governance for AI Solutions

A company is preparing a compliance audit for an AI-enabled application running on Amazon EC2 and AWS Lambda, with container images stored in Amazon ECR. The company wants to reduce security exposure and keep defensible evidence of controls.

Which TWO practices are INCORRECT for this goal? (Select TWO)

Options:

  • A. Use IAM roles with least-privilege access for Bedrock and S3

  • B. Encrypt S3 data with AWS KMS and restrict bucket access

  • C. Rely on quarterly manual scans instead of Amazon Inspector

  • D. Retain CloudTrail logs in a protected, access-controlled location

  • E. Disable AWS CloudTrail to reduce log storage costs

  • F. Enable Amazon Inspector scans for EC2, ECR, and Lambda

Correct answers: C and E

Explanation: Amazon Inspector is the AWS managed service used to assess vulnerability and exposure for supported resources such as EC2 instances, ECR container images, and Lambda functions. For audit readiness, you also need governance evidence through centralized API activity logging. Practices that remove continuous vulnerability assessment or eliminate audit logs undermine both security posture and compliance evidence.

For vulnerability and exposure assessment on AWS, Amazon Inspector provides managed, continuous scanning for common resource types used in AI workloads (for example, EC2 instances, ECR images, and Lambda functions). Skipping Inspector in favor of infrequent, manual checks increases the window where known CVEs and unintended exposures can go undetected.

Compliance and AI governance also depend on auditability. AWS CloudTrail records API activity across the account, which helps demonstrate control operation and supports investigations. Turning off CloudTrail to save cost removes a primary source of evidence and weakens monitoring, even if other controls (encryption, IAM least privilege) are in place. The key takeaway is to combine continuous vulnerability assessment (Inspector) with durable audit logging (CloudTrail) and standard preventive controls.

  • Use managed vulnerability assessment: Amazon Inspector is an appropriate AWS-native service to assess vulnerabilities and exposure for EC2/ECR/Lambda.
  • Maintain audit evidence: CloudTrail log retention supports compliance and incident forensics when stored with strong access controls.
  • Apply least-privilege access: IAM roles scoped to required actions reduce blast radius and support governance expectations.
  • Encrypt sensitive data: KMS-backed encryption and restrictive S3 policies are standard controls for protecting prompts and outputs at rest.
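
The audit-logging half of the answer can be sketched as CloudTrail trail parameters with tamper evidence and encryption enabled. A hedged sketch only: the trail name, bucket, and KMS ARN are placeholders.

```python
# Hedged sketch: parameters for a multi-Region CloudTrail trail with log file
# validation (tamper evidence) and KMS-encrypted log delivery. Names are placeholders.
def audit_trail_params(trail_name, log_bucket, kms_key_arn):
    return {
        "Name": trail_name,
        "S3BucketName": log_bucket,        # protected, access-controlled bucket
        "IsMultiRegionTrail": True,        # capture API activity in all Regions
        "EnableLogFileValidation": True,   # digest files prove logs are unaltered
        "KmsKeyId": kms_key_arn,           # encrypt delivered log files
    }

params = audit_trail_params(
    "ai-governance-trail",                              # placeholder
    "example-audit-logs-bucket",                        # placeholder
    "arn:aws:kms:us-east-1:111122223333:key/example",   # placeholder
)
# With credentials configured: boto3.client("cloudtrail").create_trail(**params)
```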

Question 10

Topic: Security, Compliance, and Governance for AI Solutions

A company is preparing a governance review for a new GenAI assistant on AWS.

Exhibit: Workload summary (from the intake form)

Use case: Draft customer support emails
Model access: Third-party foundation model via managed API
Input data: Restricted (includes customer PII)
Output: Customer-facing content
Human review: Not required before sending

Based on the exhibit, which governance framework is the best fit to scope GenAI security risks and required controls at a high level for this workload?

Options:

  • A. Generative AI Security Scoping Matrix

  • B. AWS Well-Architected Framework cost optimization pillar

  • C. NIST Cybersecurity Framework (CSF)

  • D. ROUGE/BLEU evaluation report

Best answer: A

Explanation: The exhibit highlights GenAI-specific risk drivers: Restricted/PII input, use of a third-party FM via API, customer-facing outputs, and no human review. A GenAI-focused scoping framework is used to quickly categorize these dimensions and determine which security controls and governance checks are needed. The Generative AI Security Scoping Matrix is intended for this purpose.

The key decision is choosing a framework that scopes GenAI security risk based on the workload’s characteristics shown in the exhibit. The Generative AI Security Scoping Matrix is purpose-built for GenAI and helps teams map core factors—such as input data sensitivity (“Restricted (includes customer PII)”), model access/hosting (“Third-party foundation model via managed API”), and degree of autonomy/impact (“Customer-facing content” with “Human review: Not required”)—to an appropriate risk posture and control areas to validate.

General security frameworks are useful later, but they don’t directly structure GenAI-specific scoping around prompts/outputs, external model access, and autonomy in the same way. The exhibit’s fields align directly to GenAI scoping dimensions, making the matrix the best fit for the governance intake step.

  • Model quality metrics like ROUGE/BLEU evaluate text similarity, not governance scoping for PII, customer-facing outputs, and no human review.
  • Cost optimization does not address the exhibit’s primary risk signals (PII input and unsupervised customer-facing output).
  • General cybersecurity framework can support overall security, but it’s not a GenAI-specific scoping tool tied to model access patterns and prompt/output risks shown in the exhibit.

Continue with full practice

Use the AWS AIF-C01 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

  • Try AWS AIF-C01 on Web
  • View AWS AIF-C01 Practice Test

Free review resource

Read the AWS AIF-C01 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026