AWS AIF-C01: Fundamentals of GenAI

Try 10 focused AWS AIF-C01 questions on Fundamentals of GenAI, with explanations, then continue with IT Mastery.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try AWS AIF-C01 on Web · View full AWS AIF-C01 practice page

Topic snapshot

Field | Detail
Exam route | AWS AIF-C01
Topic area | Fundamentals of GenAI
Blueprint weight | 24%
Page purpose | Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Fundamentals of GenAI for AWS AIF-C01. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

Pass | What to do | What to record
First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer.
Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor.
Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter.
Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 24% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Fundamentals of GenAI

A company is choosing a foundation model for a returns assistant. Users will upload a photo of a damaged item and add a short text description. The assistant must respond with text-only return instructions.

Exhibit: Model capability summary

Model | Supported inputs | Supported outputs | Primary use
TextGen-A | text | text | chat/summarize
VisionChat-B | image, text | text | image Q&A
MediaGen-C | text | image | image generation

Based only on the exhibit, which model should the company choose to meet the requirements?

Options:

  • A. VisionChat-B

  • B. MediaGen-C

  • C. Any model works because all are generative

  • D. TextGen-A

Best answer: A

Explanation: The assistant needs multimodal input (an image plus text) and a text-only response. In the exhibit, only VisionChat-B lists supported inputs as “image, text” while keeping the supported output as “text,” which directly matches the requirement.

A multimodal model can take in more than one data modality (for example, text plus images) and/or produce outputs in different modalities. The requirement is multimodal input (photo + text) with a text-only output.

From the exhibit:

  • VisionChat-B shows supported inputs as “image, text” and supported outputs as “text.”
  • TextGen-A accepts only “text” input.
  • MediaGen-C outputs “image,” which does not meet the text-only response requirement.

The deciding evidence is the VisionChat-B row cells under “Supported inputs” and “Supported outputs.”

  • “Text-only model” fails because “Supported inputs” is only “text” for TextGen-A.
  • “Wrong output modality” fails because MediaGen-C lists “Supported outputs” as “image.”
  • “Generative is enough” fails because the exhibit differentiates models by supported input/output modalities.
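The elimination logic above can be sketched as a small filter over the exhibit's capability table. This is an illustrative sketch only: the model names come from the exhibit, while the dictionary shape and the helper function are hypothetical.

```python
# Capability table mirroring the exhibit; sets make the subset checks simple.
models = {
    "TextGen-A":    {"inputs": {"text"},          "outputs": {"text"}},
    "VisionChat-B": {"inputs": {"image", "text"}, "outputs": {"text"}},
    "MediaGen-C":   {"inputs": {"text"},          "outputs": {"image"}},
}

def pick_model(required_inputs, required_outputs):
    """Return models whose supported modalities cover every requirement."""
    return [
        name for name, caps in models.items()
        if required_inputs <= caps["inputs"] and required_outputs <= caps["outputs"]
    ]

# Photo + text in, text-only out: only VisionChat-B qualifies.
print(pick_model({"image", "text"}, {"text"}))
```

The subset comparison (`required <= supported`) is what rules out TextGen-A (no image input) and MediaGen-C (wrong output modality) in one pass.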

Question 2

Topic: Fundamentals of GenAI

A healthcare company in Germany is building a GenAI assistant to summarize clinician notes that contain PHI. The company must meet GDPR requirements and keep all patient data processing and storage within the EU.

Which action should the company NOT take when selecting a foundation model and supporting services on AWS?

Options:

  • A. Use an EU Region Bedrock model that meets residency needs.

  • B. Send raw PHI to a us-east-1 model for accuracy.

  • C. Redact PII/PHI with Bedrock Guardrails before prompting.

  • D. Keep RAG data in EU S3 encrypted with KMS.

Best answer: B

Explanation: Model selection must account for compliance, data residency, and privacy constraints, not only model quality. Sending raw PHI to a model endpoint in a different Region can violate GDPR-related residency requirements and increase exposure of sensitive data. Keeping processing in the required geography and minimizing sensitive data shared with the model are key selection factors.

The key selection factor here is meeting compliance and data residency requirements while protecting sensitive data. If patient data must remain in the EU, the model and all supporting components that process or store that data should run in an EU Region to avoid cross-Region transfer. Privacy constraints also favor minimizing sensitive content in prompts by applying controls such as PII/PHI detection and redaction before invocation, and encrypting any stored knowledge base or retrieved context (for example, RAG documents and embeddings) with AWS KMS. Performance and accuracy matter, but they cannot override explicit regulatory and data boundary requirements; choose a model that satisfies the constraints first, then optimize within that allowed set.

  • Cross-Region PHI transfer is an anti-pattern because it can violate EU residency requirements.
  • EU Region model choice supports data residency by keeping processing in-region.
  • Guardrails redaction reduces sensitive data exposure in prompts and responses.
  • KMS-encrypted EU storage helps meet privacy and protection requirements for RAG data.
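The redact-before-prompting idea can be sketched as follows. This is a deliberately simplified client-side stand-in for Bedrock Guardrails' managed PII/PHI filtering: the regex patterns and the `PID-` identifier format are made-up assumptions, and the Region-pinned Bedrock call is shown only as commented boto3 code with a placeholder model ID and guardrail identifier.

```python
import re

# Illustrative patterns only; real deployments should rely on managed
# PII/PHI detection (e.g. Bedrock Guardrails), not hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PATIENT_ID": re.compile(r"\bPID-\d{6}\b"),  # hypothetical ID format
}

def redact(text):
    """Replace each matched sensitive value with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = redact("Summarize notes for PID-123456 (contact: a.kramer@example.de)")
print(prompt)

# Region-pinned invocation sketch (placeholder IDs, not real values):
# import boto3
# bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")  # EU Region
# bedrock.converse(
#     modelId="example.model-id",
#     messages=[{"role": "user", "content": [{"text": prompt}]}],
#     guardrailConfig={"guardrailIdentifier": "gr-example", "guardrailVersion": "1"},
# )
```

The key point is ordering: sensitive values are minimized before anything leaves the application, and the client is explicitly constructed against an EU Region so no cross-Region transfer happens by default.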

Question 3

Topic: Fundamentals of GenAI

A company wants to add a GenAI assistant for its internal help desk using Amazon Bedrock. The team wants fast updates as policies change and a simple natural-language experience for employees.

Which TWO proposed decisions are NOT appropriate because they are unsafe governance/security practices? (Select TWO)

Options:

  • A. Store prompts containing employee PII unencrypted in a public S3 bucket

  • B. Use a knowledge base (RAG) to reflect latest IT docs

  • C. Encrypt stored prompts/outputs with KMS and enable CloudTrail

  • D. Use a natural-language chat interface for employees

  • E. Iterate with prompt changes instead of retraining a model

  • F. Use Bedrock Guardrails to help block PII in prompts

  • G. Grant all employees bedrock:* permissions on all models

Correct answers: A and G

Explanation: GenAI is valuable here because it can respond quickly to changing policies, adapt behavior through prompts and retrieved context, and provide a simple natural-language interface. Those benefits still require basic security and governance, such as least-privilege access and protecting sensitive data (like PII) with access controls and encryption.

GenAI solutions are often chosen because they can be updated and improved quickly (for example, by changing prompts or augmenting responses with retrieved content) and because users can interact through natural language without building rigid intent flows. In AWS, you can preserve these advantages while applying standard governance controls.

Unsafe practices are the ones that break core security principles:

  • Least privilege: only the roles/users that need model invocation should have narrowly scoped permissions.
  • Data protection: prompts and outputs can contain sensitive data, so they must be protected with strong access controls and encryption (and should not be publicly accessible).

A natural-language UI, prompt iteration, and RAG improve responsiveness and adaptability without requiring risky permission or data-handling shortcuts.

  • Natural language interface supports the simplicity advantage of GenAI for end users.
  • RAG/knowledge base helps keep answers current as documents change, improving adaptability.
  • Guardrails and logging/encryption are standard controls that reduce risk without removing GenAI benefits.
  • Broad permissions or public unencrypted storage are governance anti-patterns that increase blast radius and data-leak risk.
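The least-privilege principle above can be made concrete with a policy sketch. The Region, model ID, and the choice of a single statement are illustrative assumptions; the contrast with `bedrock:*` on all resources is the point.

```python
import json

# Sketch: a narrowly scoped policy allowing invocation of one specific
# foundation model, instead of bedrock:* across all models.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel"],  # one action, no wildcards
        "Resource": [
            # Foundation-model ARNs have an empty account field; the Region
            # and model ID here are placeholders.
            "arn:aws:bedrock:eu-west-1::foundation-model/example.model-id"
        ],
    }],
}
print(json.dumps(policy, indent=2))
```

Attached to only the role that the help-desk application assumes, a policy like this keeps the blast radius small even if the application is compromised.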

Question 4

Topic: Fundamentals of GenAI

Which statement best differentiates Amazon Bedrock and Amazon SageMaker AI at a high level?

Options:

  • A. Bedrock provides managed foundation model APIs; SageMaker AI is a broader ML platform to build, train, and deploy models

  • B. Bedrock is used to train custom ML models; SageMaker AI only provides access to third-party FMs

  • C. Bedrock is an MLOps platform; SageMaker AI is a managed FM inference API service

  • D. Bedrock is for data labeling and feature engineering; SageMaker AI is for prompt engineering

Best answer: A

Explanation: Amazon Bedrock is primarily for integrating and operating foundation models through managed APIs for GenAI use cases. Amazon SageMaker AI is a more comprehensive ML platform that supports building, training, tuning, and deploying ML models (including custom models) along with related MLOps capabilities.

The core distinction is scope: Amazon Bedrock is a managed service for invoking and customizing foundation models via APIs (and adding GenAI application features like guardrails and knowledge-base integrations) without managing model infrastructure. Amazon SageMaker AI is a broader ML platform used to develop, train, tune, and deploy machine learning models and manage the ML lifecycle.

A common confusion is equating “access to models” with “ML platform.” While SageMaker AI can help you discover and deploy pre-built models, its defining purpose is end-to-end ML development and MLOps, whereas Bedrock’s defining purpose is managed FM consumption for GenAI applications.

  • Reverses roles: Claiming Bedrock is for training custom models and SageMaker AI is only FM access swaps each service’s primary purpose.
  • MLOps confusion: Calling Bedrock an MLOps platform mislabels it; SageMaker AI is the service typically associated with broader MLOps and model lifecycle tooling.
  • Wrong feature focus: Data labeling/feature engineering are not Bedrock’s core use case, and prompt engineering is not the defining purpose of SageMaker AI.

Question 5

Topic: Fundamentals of GenAI

A company is building a customer-support chat application using Amazon Bedrock. The app needs low, consistent latency during business hours, but traffic is bursty during marketing campaigns. The team wants to balance responsiveness and cost and must follow governance controls (auditability and protection of customer PII).

Which TWO actions are NOT appropriate (anti-patterns) for this solution? (Select TWO)

Options:

  • A. Route simple queries to a smaller model; escalate as needed.

  • B. Log Bedrock invocations with CloudTrail for auditability.

  • C. Archive raw prompts/responses with PII for offline analysis.

  • D. Provision baseline throughput; use on-demand for burst traffic.

  • E. Run peak Provisioned Throughput for largest model 24/7.

  • F. Use Bedrock Guardrails for PII redaction and policy checks.

Correct answers: C and E

Explanation: Provisioned Throughput can improve consistency and reduce latency variability, but it introduces fixed cost, so it should be sized to predictable demand rather than peak 24/7. Governance also requires protecting sensitive data and maintaining auditability of model usage. The anti-patterns are the choices that either waste spend by overprovisioning or expose customer PII.

For Amazon Bedrock, you typically balance responsiveness and cost by matching model size and throughput to demand patterns. On-demand usage is flexible and cost-efficient for bursty or uncertain traffic, while Provisioned Throughput can provide more predictable performance/availability at a fixed cost, making it best for steady baseline traffic.

Governance should be built in alongside these performance decisions:

  • Protect sensitive inputs/outputs (for example, use policy controls and redaction for PII).
  • Maintain auditability of model access and invocations (for example, log API activity).

Overprovisioning peak dedicated capacity 24/7 is a cost anti-pattern, and retaining raw customer PII in prompts/responses is a privacy and least-privilege anti-pattern.

  • Model routing is a common way to reduce cost while keeping good responsiveness for most requests.
  • Baseline provision + bursts on-demand matches fixed capacity to predictable load and avoids paying for peak all day.
  • Audit logging supports governance by providing traceability of model API usage.
  • Guardrails help enforce policy (including PII handling) at the application boundary.
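The model-routing idea above can be sketched as a tiny dispatcher. The model IDs are placeholders and the length heuristic is purely illustrative; production routers typically use classifiers or confidence scores rather than string length.

```python
# Placeholder model IDs; not real Bedrock model identifiers.
SMALL_MODEL = "example.small-model-v1"
LARGE_MODEL = "example.large-model-v1"

def choose_model(query: str, complexity_threshold: int = 120) -> str:
    """Route short, simple queries to the smaller (cheaper) model."""
    if len(query) <= complexity_threshold and "escalate" not in query.lower():
        return SMALL_MODEL
    return LARGE_MODEL

print(choose_model("How do I reset my password?"))
```

Routing most traffic to a smaller model keeps latency and per-token cost down, while the escalation path preserves answer quality for the minority of hard queries.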

Question 6

Topic: Fundamentals of GenAI

A company piloted a GenAI solution to generate claim denial letters for an insurance business unit.

Exhibit: Pilot evaluation summary

Pilot: Auto-draft claim denial letters (1,000 cases)
Acceptance criteria: critical errors <=0.1%; compliance pass >=99.5%
Results:
- Compliance pass rate: 98.7%
- Critical factual errors (wrong policy/amount): 0.8%
- Minor tone/grammar issues: 6.2%
- Avg handling time saved when used as draft: 45%

Based on the exhibit, what is the best next step for this use case?

Options:

  • A. Fully automate generating and sending denial letters end-to-end

  • B. Use GenAI to draft letters, with mandatory human review and approval

  • C. Stop using GenAI because it produces tone and grammar issues

  • D. Add prompt guardrails and then deploy end-to-end automation

Best answer: B

Explanation: The pilot misses its own acceptance criteria: compliance pass rate is 98.7% versus a 99.5% target, and critical factual errors are 0.8% versus a 0.1% limit. Those results indicate the output is not reliable enough for unsupervised decisions. The best fit is to use GenAI for drafting while keeping humans accountable for final approval.

This decision is about whether GenAI should automate an entire workflow or augment people with a human-in-the-loop. In the exhibit, the acceptance criteria require critical errors <=0.1% and compliance pass >=99.5%, but the measured results are 0.8% critical factual errors and a 98.7% compliance pass rate. Because the system fails the stated quality and compliance thresholds, sending letters without review would create business and regulatory risk.

A suitable approach is to:

  • Use the model to generate a first draft (leveraging the 45% time savings).
  • Require a human to validate facts/policy references and approve before sending.

Automation is appropriate only when evaluation results meet the acceptance criteria consistently; here they do not.

  • End-to-end automation is not justified because the exhibit shows 0.8% critical errors and 98.7% compliance vs the stated targets.
  • Abandoning GenAI ignores that it provides 45% handling-time savings when used as a draft.
  • Guardrails-only fix does not address the exhibit’s core gap: critical factual errors exceeding the allowed rate.
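The decision rule above can be encoded as a simple automation gate. The thresholds and results come from the exhibit; the gate function and mode names are an illustrative policy, not an AWS feature.

```python
# Acceptance criteria and pilot results, taken from the exhibit (as rates).
CRITERIA = {"max_critical_error_rate": 0.001, "min_compliance_pass_rate": 0.995}
results = {"critical_error_rate": 0.008, "compliance_pass_rate": 0.987}

def deployment_mode(results, criteria):
    """Allow full automation only when every acceptance criterion is met."""
    meets = (results["critical_error_rate"] <= criteria["max_critical_error_rate"]
             and results["compliance_pass_rate"] >= criteria["min_compliance_pass_rate"])
    return "full-automation" if meets else "draft-with-human-review"

print(deployment_mode(results, CRITERIA))
```

With 0.8% critical errors against a 0.1% limit, the gate resolves to draft-with-human-review, which is exactly the best-answer reasoning.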

Question 7

Topic: Fundamentals of GenAI

A marketing team uses Amazon Bedrock to generate draft subject lines and body copy for a weekly email campaign. Humans still approve the final text. Leadership will expand the solution only if it improves the primary business outcome: increasing paid subscriptions from free trials. The team can use existing email and web analytics events (opens, clicks, and subscription purchases) but cannot add new survey-based measurements.

Which metric should the team use as the primary KPI to evaluate whether the GenAI content generation is successful?

Options:

  • A. Increase in free-trial-to-paid subscription conversion rate for recipients

  • B. Increase in email open rate and click-through rate

  • C. Average model inference cost per generated email

  • D. Average time saved per marketer to produce each email

Best answer: A

Explanation: Because the stated goal is more paid subscriptions, the primary KPI should measure downstream conversion, not a proxy. Free-trial-to-paid conversion rate can be calculated from existing click and purchase events and best reflects business impact of the generated content.

Pick metrics that match the business outcome the content is supposed to drive. For marketing content generation, engagement (opens/clicks) is often only a leading indicator, while conversion ties directly to revenue or customer acquisition outcomes.

In this scenario, the constraint is clear: leadership cares about increasing paid subscriptions from free trials, and the team already has analytics for purchases. Therefore, measuring lift in free-trial-to-paid conversion rate for users who received GenAI-assisted emails is the most decision-useful KPI; it captures whether the generated content actually improves the funnel outcome, not just attention or internal efficiency.

Engagement, time saved, and cost are valuable secondary metrics, but they should not replace the outcome metric when the decision is about expanding the program for business impact.

  • Engagement as a proxy: Opens and clicks indicate interest but do not prove improved paid subscription outcomes.
  • Internal efficiency only: Time saved helps measure productivity benefits, but it doesn’t validate the stated subscription-growth objective.
  • Cost without value: Inference cost tracks spend, but it cannot show whether the content improved conversions.
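The KPI arithmetic is worth making explicit. The event counts below are made-up; only the formula (paid conversions over trial starts, compared across cohorts) matters.

```python
def conversion_rate(paid: int, trials: int) -> float:
    """Free-trial-to-paid conversion rate from analytics event counts."""
    return paid / trials if trials else 0.0

# Hypothetical cohort counts for illustration only.
control = conversion_rate(paid=180, trials=3000)   # hand-written emails
variant = conversion_rate(paid=222, trials=3000)   # GenAI-assisted emails
lift_pct = (variant - control) / control * 100

print(f"control={control:.1%} variant={variant:.1%} lift={lift_pct:.1f}%")
```

Because both cohorts share the denominator (trial starts) and the numerator is an existing purchase event, this KPI is computable from the analytics the team already has, with no new surveys.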

Question 8

Topic: Fundamentals of GenAI

A company built an internal chatbot using Amazon Bedrock, a custom RAG pipeline, and Amazon OpenSearch Serverless. The chatbot answers questions from documents in Amazon S3 and Confluence, but the team spends significant effort maintaining connectors, enforcing per-user permissions, and adding citations and safe-response controls.

The company wants to reduce operational effort and improve safety while keeping these requirements:

  • Use the organization’s existing SSO and honor document permissions
  • Provide grounded answers with citations from approved sources
  • Support common employee workflows (for example, opening an IT ticket)

Which change best meets these requirements?

Options:

  • A. Replace the chatbot with Amazon Q Developer

  • B. Use Bedrock Knowledge Bases and keep a custom chatbot UI

  • C. Use Amazon Q Business with built-in connectors and SSO

  • D. Fine-tune a foundation model on company documents

Best answer: C

Explanation: Amazon Q Business is designed as a managed AI assistant for organizational knowledge and common business workflows. It can connect to enterprise content repositories, use existing identity for per-user access control, and return grounded answers with citations. This reduces the need to build and maintain custom connectors and permission logic while lowering operational overhead and strengthening safety controls.

The core optimization is to replace a custom enterprise RAG chatbot with a purpose-built managed service for organizational knowledge and workflows. Amazon Q Business is an AI assistant that indexes and retrieves information from connected enterprise data sources (such as S3 and Confluence) and uses your organization’s identity/SSO to enforce each user’s permissions at query time. It also supports grounded responses (often with citations) and can be used to help employees complete common tasks and workflows, reducing the amount of custom code and ongoing maintenance.

The key tradeoff is less low-level customization of the RAG pipeline in exchange for significantly lower operational effort and faster delivery with enterprise access control built in.

  • Fine-tuning misuse: increases cost/effort and does not inherently enforce per-user document permissions or provide citations.
  • Still custom ops: using Knowledge Bases can help RAG, but you still own the app, identity integration, and workflow orchestration.
  • Wrong Q product: Amazon Q Developer focuses on software development assistance, not enterprise knowledge/workflows for employees.

Question 9

Topic: Fundamentals of GenAI

A company is building an internal GenAI assistant that summarizes customer case notes. The solution will run on AWS and must use a managed service to access foundation models while meeting basic security and governance expectations.

Which approach is INCORRECT and should be avoided?

Options:

  • A. Enable AWS CloudTrail logging for Bedrock API activity and review access

  • B. Invoke a foundation model through Amazon Bedrock using IAM authentication

  • C. Use Bedrock Guardrails to help filter prompts and responses for PII

  • D. Embed long-term AWS access keys in a web app to call the model directly

Best answer: D

Explanation: Amazon Bedrock provides managed access to foundation models through AWS APIs, so access should be controlled with IAM and monitored with AWS audit logs. Putting long-term AWS credentials into client-side code removes effective access control and greatly increases the risk of credential leakage and unauthorized model use.

The core principle violated is secure access control and credential management (least privilege). Amazon Bedrock is the AWS managed service used to access multiple foundation models through a consistent API, where you can apply IAM permissions and auditing.

A secure pattern in this scenario is:

  • Call Bedrock from trusted server-side components using IAM roles
  • Add safeguards like Bedrock Guardrails for PII-related requirements
  • Log and audit usage with AWS CloudTrail

Exposing long-term AWS access keys in a client (such as a browser app) is an anti-pattern because anyone who obtains the keys can invoke models and access other allowed AWS resources.

  • Managed FM access: invoking models via Amazon Bedrock with IAM is an appropriate managed-service approach.
  • Safety controls: using Bedrock Guardrails supports responsible handling of PII and unsafe content.
  • Auditability: CloudTrail logging helps with governance by recording Bedrock API calls for review.
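One practical defense against the anti-pattern in option D is scanning source code for embedded long-term keys before it ships. The sketch below assumes the documented `AKIA` prefix for long-term AWS access key IDs; the sample snippet uses AWS's published example key, and the scan itself is a hypothetical pre-commit check, not an AWS service.

```python
import re

# Long-term AWS access key IDs start with "AKIA" followed by 16
# uppercase alphanumeric characters.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def contains_long_term_key(source: str) -> bool:
    """Flag source text that appears to embed a long-term access key."""
    return bool(ACCESS_KEY_RE.search(source))

# AWS's documented example key, embedded the way a careless client app might.
bad_snippet = 'const creds = { accessKeyId: "AKIAIOSFODNN7EXAMPLE" };'
print(contains_long_term_key(bad_snippet))
```

A scan like this complements, but does not replace, the correct pattern: server-side calls under an IAM role, so no long-term credential exists to leak in the first place.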

Question 10

Topic: Fundamentals of GenAI

A company is building a GenAI customer support assistant on AWS. The baseline solution sends the entire policy manual and troubleshooting guides in every prompt to a foundation model, then a Lambda function uses simple pattern matching to decide whether to call internal REST APIs (for example, create a support case or reset a user’s MFA).

The company needs lower cost and latency, must provide answers grounded in the latest internal documents with citations, and must prevent the assistant from taking any action outside a fixed set of approved APIs. The company does not want to fine-tune a model.

Which change best meets these requirements?

Options:

  • A. Fine-tune a foundation model on the policy manual and API usage examples using Amazon SageMaker AI

  • B. Switch to a larger foundation model and enable provisioned throughput to reduce latency spikes

  • C. Increase max_tokens and temperature so the model includes more detail and reduces back-and-forth requests

  • D. Use Knowledge Bases for Amazon Bedrock for RAG with citations, and use Agents for Amazon Bedrock action groups plus Bedrock Guardrails for approved tool use

Best answer: D

Explanation: This workload is a GenAI assistant/agent use case: answer questions from enterprise content and help users complete tasks through approved actions. Using a Bedrock knowledge base for RAG grounds answers in current documents and avoids sending large documents in every prompt, improving cost and latency. Using Bedrock Agents with Guardrails reduces operational effort and improves safety by restricting actions to explicitly allowed tools.

Assistants handle Q&A and help users complete tasks; agents add controlled tool use (calling APIs) to complete those tasks. In the baseline, stuffing full documents into each prompt increases token cost and latency, and custom parsing/orchestration makes it hard to reliably constrain actions.

A better pattern is:

  • Use Knowledge Bases for Amazon Bedrock to retrieve only relevant passages at runtime (RAG) and return citations from your documents.
  • Use Agents for Amazon Bedrock with action groups to invoke only the approved internal REST APIs.
  • Apply Bedrock Guardrails to enforce safety and reduce the chance of unauthorized or unsafe outputs.

This meets the grounding/citation and safety constraints while reducing prompt size and operational complexity without fine-tuning.

  • Fine-tuning violates the constraint to avoid fine-tuning and does not inherently provide citations from the latest documents.
  • Raising temperature/max tokens typically increases token spend and variability, and does not constrain API actions.
  • Larger model/provisioned throughput can reduce latency variability but usually increases cost and still doesn’t add grounding with citations or strict tool constraints.
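The "fixed set of approved APIs" constraint can be sketched as an allowlisted dispatcher, analogous in spirit to what Bedrock Agents action groups provide in managed form. The action names and request shape are illustrative assumptions drawn from the scenario.

```python
# Only actions in this set may ever reach the internal REST APIs,
# regardless of what the model proposes.
APPROVED_ACTIONS = {"create_support_case", "reset_mfa"}

def dispatch(action: str, payload: dict) -> str:
    """Execute a model-proposed action only if it is explicitly approved."""
    if action not in APPROVED_ACTIONS:
        raise PermissionError(f"action '{action}' is not in the approved set")
    # In a real system this would call the matching internal REST API.
    return f"called internal API for {action}"

print(dispatch("reset_mfa", {"user": "jdoe"}))
```

The enforcement lives outside the model: even a manipulated or hallucinated tool call cannot reach an unapproved API, which is the same guarantee action groups plus Guardrails aim to provide without custom orchestration code.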

Continue with full practice

Use the AWS AIF-C01 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try AWS AIF-C01 on Web · View AWS AIF-C01 Practice Test

Free review resource

Read the AWS AIF-C01 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026