Try 10 focused AWS AIF-C01 questions on AI Security and Governance, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | AWS AIF-C01 |
| Topic area | Security, Compliance, and Governance for AI Solutions |
| Blueprint weight | 14% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Security, Compliance, and Governance for AI Solutions for AWS AIF-C01. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 14% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Security, Compliance, and Governance for AI Solutions
An organization is establishing governance requirements for AI systems (including generative AI applications on AWS). Which TWO statements reflect common expectations for transparency standards and team training? (Select TWO.)
Options:
A. Provide role-based responsible AI training with periodic refreshers for relevant staff
B. Maintain documentation that explains intended use, data sources, and known limitations
C. Only ML engineers need training because they are the only ones affecting AI risk
D. If using a managed service, the cloud provider fully covers governance so internal training is optional
E. Once deployed, AI documentation can remain unchanged unless the model is retrained
F. Transparency requires open-sourcing the foundation model weights and training code
Correct answers: A and B
Explanation: AI governance commonly includes transparency requirements (clear documentation and disclosures about what the system does and where it can fail) and team enablement requirements (role-based training for people who build, operate, or use AI). These controls help stakeholders understand system behavior and help staff apply policies consistently across the AI lifecycle.
At a high level, governance for AI systems typically expects two things relevant here: transparency and preparedness. Transparency means keeping artifacts that let stakeholders understand and audit the system, such as documentation of intended purpose, data/input sources, key assumptions, known limitations, and how to escalate issues. Team training expectations commonly include role-based education (developers, reviewers, operators, and business users) on responsible AI, data handling, and operational procedures, with refreshers as the system and policies evolve. These practices apply whether you build models yourself or consume them through managed services; using managed services can reduce operational burden, but it does not remove organizational accountability for how the system is used.
Topic: Security, Compliance, and Governance for AI Solutions
A company hosts an internal chatbot in private subnets in a VPC. The chatbot calls Amazon Bedrock to generate responses. A security review requires that AI service traffic must not traverse the public internet.
Exhibit: Security review notes
1| Workload subnet: private (no IGW)
2| Current egress: NAT Gateway
3| Destination: bedrock-runtime.us-east-1.amazonaws.com:443
4| VPC Flow Logs: ACCEPT 10.0.2.15 -> 54.x.x.x:443
5| Finding: "AI calls use public AWS service endpoint"
6| Requirement: "Use private connectivity where available"
Which action is the best next step to meet the requirement and reduce exposure?
Options:
A. Add AWS WAF rules to filter the Bedrock Runtime API traffic
B. Enable SSE-KMS encryption for all chatbot prompts and responses
C. Use an Amazon S3 gateway endpoint for Bedrock Runtime requests
D. Create an interface VPC endpoint (AWS PrivateLink) for Bedrock Runtime
Best answer: D
Explanation: The exhibit indicates the chatbot reaches bedrock-runtime through a NAT Gateway and a public IP destination (lines 2–4), which is flagged as using a public AWS service endpoint (line 5). Using AWS PrivateLink through an interface VPC endpoint routes Bedrock Runtime API calls privately within the AWS network, reducing internet exposure while meeting the “private connectivity” requirement (line 6).
AWS PrivateLink provides private connectivity from a VPC to supported AWS services by creating an interface VPC endpoint (powered by PrivateLink). In the exhibit, the chatbot’s Bedrock calls currently egress through a NAT Gateway (line 2) to bedrock-runtime...:443 (line 3) and resolve to a public IP in Flow Logs (line 4), leading to the finding that AI calls use a public service endpoint (line 5).
The best next step is to create an interface VPC endpoint for Amazon Bedrock Runtime and route the chatbot’s traffic to it, so that calls to bedrock-runtime (line 3) resolve to private IP addresses inside the VPC instead of egressing through the NAT Gateway.
This addresses network exposure; encryption and web filtering controls do not change the public-endpoint egress shown in the exhibit.
Topic: Security, Compliance, and Governance for AI Solutions
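At a high level, creating the interface endpoint comes down to a handful of parameters. The sketch below shows what that request might look like, assuming boto3; the VPC, subnet, and security group IDs are hypothetical placeholders, and the live call is shown only as a comment.

```python
# Sketch: parameters for an interface VPC endpoint (AWS PrivateLink) for
# Amazon Bedrock Runtime in us-east-1. IDs below are hypothetical placeholders.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0example",                   # hypothetical: the chatbot's VPC
    "ServiceName": "com.amazonaws.us-east-1.bedrock-runtime",
    "SubnetIds": ["subnet-0example"],          # hypothetical: the private subnets
    "SecurityGroupIds": ["sg-0example"],       # must allow TCP 443 from the chatbot
    # Private DNS lets the existing bedrock-runtime.us-east-1.amazonaws.com
    # hostname resolve to the endpoint's private IPs, so no code change is needed.
    "PrivateDnsEnabled": True,
}
# With appropriate permissions this would be passed to:
#   boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

Because private DNS is enabled, the chatbot keeps calling the same hostname from the exhibit (line 3), but Flow Logs would then show a private destination IP instead of the public 54.x.x.x address.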
A company is preparing evidence for an internal audit of an AWS-based generative AI workload. The auditor requests automated assessment of software vulnerabilities and exposure for compute and container artifacts.
Exhibit: Audit evidence request (excerpt)
Control: Vulnerability management
Scope: EC2 instances and ECR container images
Evidence needed: CVE findings with severity and affected resources
Frequency: Continuous
Which AWS service should the company use to meet this audit request?
Options:
A. Amazon Inspector
B. Amazon GuardDuty
C. Amazon Macie
D. AWS Config
Best answer: A
Explanation: The exhibit asks for continuous CVE vulnerability findings, including severity and affected resources, specifically for EC2 instances and ECR images. Amazon Inspector is the AWS service designed to automatically assess vulnerabilities and exposure for these resource types and produce findings suitable as audit evidence.
This request is for vulnerability management evidence: continuous detection of CVEs, severities, and the impacted resources. In the exhibit, the scope explicitly includes “EC2 instances and ECR container images” and the evidence requires “CVE findings with severity,” which matches Amazon Inspector’s purpose as a managed vulnerability assessment service for AWS workloads.
Inspector produces vulnerability findings (including CVE IDs and severity) tied to affected resources, which can be used directly as audit evidence for the stated control. The key cue is that the audit is not asking for data classification, threat detection, or configuration drift, but for vulnerability assessment results.
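To make the evidence request concrete, the sketch below shows a findings filter matching the audit scope, assuming the Amazon Inspector v2 API (boto3 client `inspector2`) and its `ListFindings` filter-criteria shape; the live call appears only as a comment.

```python
# Sketch: an Inspector v2 findings filter scoped to the audit request:
# EC2 instances and ECR container images, high-impact CVEs only.
filter_criteria = {
    "resourceType": [
        {"comparison": "EQUALS", "value": "AWS_EC2_INSTANCE"},
        {"comparison": "EQUALS", "value": "AWS_ECR_CONTAINER_IMAGE"},
    ],
    "severity": [
        {"comparison": "EQUALS", "value": "CRITICAL"},
        {"comparison": "EQUALS", "value": "HIGH"},
    ],
}
# With permissions in place, this would drive a call such as:
#   boto3.client("inspector2").list_findings(filterCriteria=filter_criteria)
# Each returned finding carries the CVE ID, severity, and affected resource,
# which maps directly onto the "evidence needed" row in the exhibit.
```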
Topic: Security, Compliance, and Governance for AI Solutions
A team is deploying a generative AI application on AWS and wants a managed service that runs automated best-practice checks across their AWS account and provides recommendations that include both cost optimization and security improvements. Which AWS service meets this requirement?
Options:
A. AWS Trusted Advisor
B. AWS Security Hub
C. AWS Config
D. AWS Audit Manager
Best answer: A
Explanation: AWS Trusted Advisor is designed to evaluate an AWS environment against AWS best practices and return actionable recommendations. It uniquely fits the requirement because it includes checks across multiple categories, including both cost optimization and security, in a single service view.
The deciding attribute is “automated best-practice checks that include both cost and security recommendations.” AWS Trusted Advisor continuously evaluates your AWS account and surfaces recommendations across several best-practice categories, including cost optimization and security, which matches the team’s stated goal.
Services like AWS Security Hub, AWS Config, and AWS Audit Manager can support governance and compliance, but they do not provide the same broad, multi-category best-practice check set that includes cost optimization. Key takeaway: use Trusted Advisor when you need AWS best-practice checks and recommendations spanning cost and security.
Topic: Security, Compliance, and Governance for AI Solutions
A company is building a RAG-based GenAI assistant using Amazon Bedrock. Conversation transcripts and model outputs are stored in Amazon S3, and embeddings are stored in an Amazon OpenSearch Service domain.
A new requirement states that all stored data must be encrypted at rest with customer managed keys, keys must be centrally auditable, and the team wants the lowest ongoing operational effort for key lifecycle management (for example, rotation and access revocation).
Which change best meets these requirements?
Options:
A. Store an AES key in Secrets Manager and encrypt in the app
B. Keep S3 SSE-S3 and rely on default OpenSearch encryption
C. Use AWS KMS CMKs for S3 SSE-KMS and OpenSearch encryption
D. Use AWS CloudHSM keys and implement custom rotation processes
Best answer: C
Explanation: AWS KMS is the AWS managed service for creating and controlling encryption keys and using them with AWS services. Using KMS customer managed keys with S3 SSE-KMS and OpenSearch encryption meets the “customer managed keys” requirement while keeping operations low through native integrations, centralized auditing (for example, AWS CloudTrail), and managed key lifecycle features like rotation.
AWS KMS lets you create, manage, and control access to encryption keys (KMS keys) that AWS services can use to encrypt data at rest. In this scenario, configuring S3 to use SSE-KMS and configuring OpenSearch Service to use a customer managed KMS key satisfies the requirement for customer managed keys without the team having to build and operate custom encryption, storage, rotation, or audit tooling.
KMS also supports operational needs that commonly show up in AI solutions storing prompts, outputs, and embeddings: automatic key rotation, centralized access control and revocation through key policies and grants, and an audit trail of key usage in AWS CloudTrail.
Compared with application-managed encryption or HSM-managed designs, KMS integrations reduce operational effort while still giving the customer control of the keys.
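The S3 side of this change is a small default-encryption configuration. The sketch below shows its shape, assuming the `PutBucketEncryption` API; the key ARN and bucket name are hypothetical placeholders, and the live boto3 call is shown only as a comment.

```python
# Sketch: default bucket encryption pointing at a customer managed KMS key.
# The key ARN and bucket name are hypothetical placeholders.
sse_kms_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
            },
            # S3 Bucket Keys reduce per-object KMS requests, which matters for
            # high-volume writes of transcripts and model outputs.
            "BucketKeyEnabled": True,
        }
    ]
}
# boto3.client("s3").put_bucket_encryption(
#     Bucket="genai-transcripts",
#     ServerSideEncryptionConfiguration=sse_kms_config,
# )
```

OpenSearch Service takes the analogous setting at domain creation (encryption at rest with the same customer managed KMS key), so both stores end up auditable through one key.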
Topic: Security, Compliance, and Governance for AI Solutions
Which statement is INCORRECT about encryption requirements for AI workloads on AWS?
Options:
A. Encrypt data at rest in Amazon S3 with SSE-KMS for prompts and training data
B. Using AWS PrivateLink removes the need for TLS because traffic stays on the AWS network
C. Use AWS KMS keys to control, audit, and manage encryption keys used for data at rest
D. Use TLS/HTTPS to encrypt data in transit when calling managed AI services
Best answer: B
Explanation: Private connectivity (such as AWS PrivateLink) does not replace encryption in transit. AI workloads should generally encrypt sensitive data at rest (for example, with SSE-KMS) and encrypt data in transit (for example, with TLS/HTTPS) to protect data confidentiality as it moves between components and services.
The core requirement is to protect sensitive AI data both at rest and in transit. Encrypting at rest (for example, storing prompts, embeddings, documents, and logs in Amazon S3 using SSE-KMS) protects data on disk/backing storage. Encrypting in transit (typically TLS/HTTPS) protects requests and responses as they traverse networks, including calls to AWS managed AI services.
AWS PrivateLink helps keep traffic on private IP space and reduces exposure to the public internet, but it does not inherently encrypt application payloads; TLS is still the standard control for in-transit encryption and end-to-end protection. AWS KMS is commonly used to manage and audit the keys used for at-rest encryption across AWS services.
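A common way to make the in-transit requirement enforceable, rather than just a convention, is a deny statement keyed on `aws:SecureTransport`. The sketch below shows that standard pattern for an S3 bucket; the bucket name is a hypothetical placeholder.

```python
import json

# Sketch: a bucket policy statement that denies any request arriving without
# TLS, complementing (not replacing) private connectivity. Bucket name is
# a hypothetical placeholder.
deny_insecure_transport = {
    "Sid": "DenyInsecureTransport",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::genai-data",
        "arn:aws:s3:::genai-data/*",
    ],
    # aws:SecureTransport is false when the request is made over plain HTTP.
    "Condition": {"Bool": {"aws:SecureTransport": "false"}},
}
policy = {"Version": "2012-10-17", "Statement": [deny_insecure_transport]}
policy_json = json.dumps(policy)  # ready for put_bucket_policy
```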
Topic: Security, Compliance, and Governance for AI Solutions
A company is adopting generative AI on AWS to create an internal support assistant using Amazon Bedrock. The assistant will summarize employee tickets and sometimes reference internal knowledge articles. The security team is defining AI governance so the solution meets compliance expectations and can be audited.
Which action is INCORRECT as part of an AI governance process for this system?
Options:
A. Maintain documentation of data sources and evaluation results for audits
B. Enable monitoring and periodic review of outputs for risk and policy compliance
C. Let developers connect production ticket data to the model without review or audit logging
D. Define roles, approvals, and change control for model and prompt updates
Best answer: C
Explanation: AI governance provides oversight, accountability, and evidence that an AI system is controlled across its lifecycle. Allowing direct use of production data with no review or audit trail removes key governance safeguards such as access control, traceability, and risk management. Governance processes are required to meet compliance obligations and to reliably detect and correct issues over time.
AI governance is the set of lifecycle processes that ensure an AI system is used safely, compliantly, and consistently with business and security requirements. In this scenario, governance is required because the assistant touches enterprise data and must be auditable; you need clear accountability, controlled changes, and evidence (logs and documentation) that policies are followed.
A governance process commonly includes documented data sources and evaluation results, monitoring and periodic review of outputs against risk and policy requirements, and defined roles, approvals, and change control for model and prompt updates.
Bypassing review and audit logging while connecting production data undermines oversight and traceability, which are core governance principles.
Topic: Security, Compliance, and Governance for AI Solutions
Which TWO statements about the AWS shared responsibility model for securing AI solutions are INCORRECT? (Select TWO.)
Options:
A. Customers should enable logging and auditability (for example, AWS CloudTrail) to meet governance and compliance requirements.
B. AWS secures the underlying infrastructure for Amazon Bedrock and SageMaker AI, including physical facilities and the virtualization layer.
C. AWS automatically encrypts customer AI datasets in Amazon S3 using customer-managed AWS KMS keys without any customer configuration.
D. Customers must mitigate risks from prompts and model outputs (for example, PII leakage) using controls such as Bedrock Guardrails and application logic.
E. Customers are responsible for configuring IAM permissions that control who can invoke models and access data.
F. Because Amazon Bedrock is fully managed, AWS is responsible for defining customers’ data classification and retention policies for prompts and responses.
Correct answers: C and F
Explanation: Under the AWS shared responsibility model, AWS is responsible for security of the cloud, while customers are responsible for security in the cloud. For managed AI services, AWS secures the service infrastructure, but customers still configure identity, data protection choices, and governance controls for their own data and usage.
The shared responsibility model splits duties between AWS and the customer. For AI services such as Amazon Bedrock and SageMaker AI, AWS handles security of the underlying cloud and managed service components (facilities, hardware, and core service operation). Customers are still responsible for how they use the service: controlling access with IAM, choosing and configuring data protection (including whether to use SSE-KMS and how KMS keys are administered), setting governance policies like data classification and retention, and implementing safety controls for prompts and outputs (for example, Bedrock Guardrails) along with logging for audit.
Key takeaway: “Fully managed” reduces infrastructure operations for the customer, but it does not transfer the customer’s identity, data governance, and usage responsibilities to AWS.
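The customer-side IAM responsibility in this question can be made concrete with a least-privilege identity policy. The sketch below shows one possible shape; the model ARN is a hypothetical placeholder, and scoping `Resource` to a specific approved model (rather than `*`) is the point being illustrated.

```python
# Sketch: a least-privilege identity policy for the customer side of the
# shared responsibility model - only invoking one approved foundation model.
# The model ARN is a hypothetical placeholder.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeApprovedModelOnly",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/example-model-id",
        }
    ],
}
# Attached to a role, this grants model invocation but nothing else:
# no model management, no access to other models, no data-store permissions.
```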
Topic: Security, Compliance, and Governance for AI Solutions
A company is preparing a compliance audit for an AI-enabled application running on Amazon EC2 and AWS Lambda, with container images stored in Amazon ECR. The company wants to reduce security exposure and keep defensible evidence of controls.
Which TWO practices are INCORRECT for this goal? (Select TWO.)
Options:
A. Use IAM roles with least-privilege access for Bedrock and S3
B. Encrypt S3 data with AWS KMS and restrict bucket access
C. Rely on quarterly manual scans instead of Amazon Inspector
D. Retain CloudTrail logs in a protected, access-controlled location
E. Disable AWS CloudTrail to reduce log storage costs
F. Enable Amazon Inspector scans for EC2, ECR, and Lambda
Correct answers: C and E
Explanation: Amazon Inspector is the AWS managed service used to assess vulnerability and exposure for supported resources such as EC2 instances, ECR container images, and Lambda functions. For audit readiness, you also need governance evidence through centralized API activity logging. Practices that remove continuous vulnerability assessment or eliminate audit logs undermine both security posture and compliance evidence.
For vulnerability and exposure assessment on AWS, Amazon Inspector provides managed, continuous scanning for common resource types used in AI workloads (for example, EC2 instances, ECR images, and Lambda functions). Skipping Inspector in favor of infrequent, manual checks increases the window where known CVEs and unintended exposures can go undetected.
Compliance and AI governance also depend on auditability. AWS CloudTrail records API activity across the account, which helps demonstrate control operation and supports investigations. Turning off CloudTrail to save cost removes a primary source of evidence and weakens monitoring, even if other controls (encryption, IAM least privilege) are in place. The key takeaway is to combine continuous vulnerability assessment (Inspector) with durable audit logging (CloudTrail) and standard preventive controls.
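The audit-logging side of this pairing is a short trail configuration. The sketch below shows what those parameters might look like, assuming the CloudTrail `CreateTrail` API; the trail name, bucket, and key ARN are hypothetical placeholders, and the live call appears only as a comment.

```python
# Sketch: a multi-Region CloudTrail trail whose logs land in a protected,
# access-controlled bucket. Names and the key ARN are hypothetical placeholders.
trail_params = {
    "Name": "org-audit-trail",
    "S3BucketName": "audit-logs-protected",  # restricted, access-controlled bucket
    "IsMultiRegionTrail": True,              # capture API activity in every Region
    "EnableLogFileValidation": True,         # digest files make tampering detectable
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}
# boto3.client("cloudtrail").create_trail(**trail_params)
```

Log file validation and KMS encryption are what turn the logs into defensible evidence: an auditor can verify the files were neither altered nor readable outside the controlled key policy.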
Topic: Security, Compliance, and Governance for AI Solutions
A company is preparing a governance review for a new GenAI assistant on AWS.
Exhibit: Workload summary (from the intake form)
Use case: Draft customer support emails
Model access: Third-party foundation model via managed API
Input data: Restricted (includes customer PII)
Output: Customer-facing content
Human review: Not required before sending
Based on the exhibit, which governance framework is the best fit to scope GenAI security risks and required controls at a high level for this workload?
Options:
A. Generative AI Security Scoping Matrix
B. AWS Well-Architected Framework cost optimization pillar
C. NIST Cybersecurity Framework (CSF)
D. ROUGE/BLEU evaluation report
Best answer: A
Explanation: The exhibit highlights GenAI-specific risk drivers: Restricted/PII input, use of a third-party FM via API, customer-facing outputs, and no human review. A GenAI-focused scoping framework is used to quickly categorize these dimensions and determine which security controls and governance checks are needed. The Generative AI Security Scoping Matrix is intended for this purpose.
The key decision is choosing a framework that scopes GenAI security risk based on the workload’s characteristics shown in the exhibit. The Generative AI Security Scoping Matrix is purpose-built for GenAI and helps teams map core factors—such as input data sensitivity (“Restricted (includes customer PII)”), model access/hosting (“Third-party foundation model via managed API”), and degree of autonomy/impact (“Customer-facing content” with “Human review: Not required”)—to an appropriate risk posture and control areas to validate.
General security frameworks are useful later, but they don’t directly structure GenAI-specific scoping around prompts/outputs, external model access, and autonomy in the same way. The exhibit’s fields align directly to GenAI scoping dimensions, making the matrix the best fit for the governance intake step.
Use the AWS AIF-C01 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try AWS AIF-C01 on Web, or view the AWS AIF-C01 Practice Test.
Read the AWS AIF-C01 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.