A practical AIF-C01 study plan you can follow: 30-day intensive, 60-day balanced, and 90-day part-time schedules with weekly focus by domain, suggested hours/week, and tips for using the Mastery Cloud practice app.
This page answers the question most candidates actually have: “How do I structure my AIF‑C01 prep?”
Below are three realistic schedules (30/60/90 days) based on the official domain weights and the way AIF‑C01 questions are written (definitions + best-fit design choices + responsible use).
Use the plan that matches your available time, then follow the loop: Syllabus → drills → review misses → mixed sets → timed runs.
Total study time depends on your background; most candidates land in one of these ranges:
| Your starting point | Typical total study time | Best-fit timeline |
|---|---|---|
| You already work with AWS and have AI/GenAI basics | 25–40 hours | 30–60 days |
| You know AWS basics but are new to GenAI terms | 40–60 hours | 60 days |
| You’re new to both AWS and AI concepts | 60–80+ hours | 90 days |
Choose a plan based on hours per week:
| Time you can commit | Recommended plan | What it feels like |
|---|---|---|
| 8–12 hrs/week | 30‑day intensive | Fast learning + lots of practice |
| 4–7 hrs/week | 60‑day balanced | Steady progress + room for review |
| 3–4 hrs/week | 90‑day part‑time | Slow-and-solid with repetition |
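If you're not sure which row fits, multiply a plan's length in weeks by its weekly hours and compare the result to your estimate from the first table. A minimal sketch of that check, assuming a 50-hour target as an example:

```python
# Back-of-the-envelope check: does a plan's weekly pace cover your estimated total hours?
# Plan lengths and weekly ranges come from the tables above; the target is just an example.

plans = {
    "30-day intensive": {"days": 30, "hrs_per_week": (8, 12)},
    "60-day balanced": {"days": 60, "hrs_per_week": (4, 7)},
    "90-day part-time": {"days": 90, "hrs_per_week": (3, 4)},
}

target_hours = 50  # example: mid-range estimate from the first table

for name, plan in plans.items():
    weeks = plan["days"] / 7
    low = weeks * plan["hrs_per_week"][0]
    high = weeks * plan["hrs_per_week"][1]
    verdict = "covers it" if high >= target_hours else "too tight"
    print(f"{name}: ~{low:.0f}-{high:.0f} total hours -> {verdict} for a {target_hours}h target")
```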
AIF‑C01 domain weights:
| Domain | Weight | What you should be good at |
|---|---|---|
| Domain 1: Fundamentals of AI and ML | 20% | Core terminology, metrics, lifecycle, when ML is (and isn’t) a fit |
| Domain 2: Fundamentals of Generative AI | 24% | Tokens/embeddings/RAG basics, capabilities vs limitations, cost/latency trade-offs |
| Domain 3: Applications of Foundation Models | 28% | Prompting patterns, RAG design, evaluation, customization basics |
| Domain 4: Guidelines for Responsible AI | 14% | Fairness, transparency, safety, human oversight, documentation |
| Domain 5: Security, Compliance, and Governance for AI Solutions | 14% | Privacy, access control, auditability, governance basics |
If you want one rule: spend ~60% of your time learning and ~40% practicing early on, then shift to ~30% learning and ~70% practice in the final 1–2 weeks.
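To make the weights and that rule concrete, here is a rough hours-allocation sketch. The 40-hour budget and the size of the final phase are assumptions for illustration; the weights and split ratios come from the table and rule above.

```python
# Rough hour targets from the domain weights and the 60/40 -> 30/70 rule.
# The 40-hour budget and the ~12-hour final phase are assumptions for illustration.

total_hours = 40

domain_weights = {
    "Domain 1: Fundamentals of AI and ML": 0.20,
    "Domain 2: Fundamentals of Generative AI": 0.24,
    "Domain 3: Applications of Foundation Models": 0.28,
    "Domain 4: Guidelines for Responsible AI": 0.14,
    "Domain 5: Security, Compliance, and Governance": 0.14,
}

# Study hours per domain, proportional to exam weight
for domain, weight in domain_weights.items():
    print(f"{domain}: ~{total_hours * weight:.0f} hours")

# Learn vs. practice split: ~60/40 early, shifting to ~30/70 in the final 1-2 weeks
final_phase_hours = 12  # assumption: the last ~2 weeks of the budget
early_phase_hours = total_hours - final_phase_hours
print(f"Early: ~{early_phase_hours * 0.6:.0f}h learning / ~{early_phase_hours * 0.4:.0f}h practice")
print(f"Final: ~{final_phase_hours * 0.3:.0f}h learning / ~{final_phase_hours * 0.7:.0f}h practice")
```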
30‑day intensive plan: target pace ~8–12 hours/week.
Goal: learn the vocabulary fast, then harden instincts through drills and mixed sets.
| Week | Focus (domains/tasks) | What to do | Links |
|---|---|---|---|
| 1 | Domain 1 fundamentals + start Domain 2 • Task 1.1 • Task 1.2 • Task 2.1 | Build core vocabulary; make a one-page “terms” sheet. Do 2–3 focused drills and start a miss log. | Syllabus • Cheatsheet • Practice |
| 2 | Domain 1 lifecycle + Domain 2 limits + AWS services • Task 1.3 • Task 2.2 • Task 2.3 | Learn “when gen AI is risky” + service pickers (Bedrock vs SageMaker vs pre-built AI services). End the week with a 30–40Q mixed set. | Cheatsheet • Practice |
| 3 | Domain 3 foundation model apps • Task 3.1 • Task 3.2 | Build RAG + prompt instincts. Drill daily on prompt patterns, grounding, and safe tool use. | Syllabus • Practice |
| 4 | Domain 3 evaluation/customization + Domains 4–5 + review • Task 3.3 • Task 3.4 • Task 4.1 • Task 4.2 • Task 5.1 • Task 5.2 | Do 2 mixed sets + 1 timed run (65Q/90m; see the pacing sketch below). Review every miss and re-drill weak tasks until repeat misses drop off. | Practice • FAQ |
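For the timed runs (65 questions in 90 minutes), it helps to walk in with checkpoint times in mind rather than watching the clock on every question. A small pacing sketch; the 10-minute review buffer and checkpoint positions are just one reasonable way to slice the run:

```python
# Pacing for a 65-question / 90-minute timed run.
# The 65Q/90m format comes from the plan above; the 10-minute review buffer
# and the checkpoint positions are assumptions, not a prescribed strategy.

questions, minutes = 65, 90
print(f"Raw pace: ~{minutes * 60 / questions:.0f} seconds per question")

review_buffer = 10  # minutes held back at the end for flagged questions
answer_pace = (minutes - review_buffer) / questions  # minutes per question while answering

for checkpoint in (20, 40, 65):
    print(f"By question {checkpoint}: ~{checkpoint * answer_pace:.0f} minutes elapsed")
```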
60‑day balanced plan: target pace ~4–7 hours/week.
Goal: more repetition and spaced review while steadily building practice volume.
| Weeks | Focus | What to do |
|---|---|---|
| 1–2 | Domain 1 + Task 2.1 | Build fundamentals; do 2 drills per week and keep a miss log. |
| 3–4 | Domain 2 (Tasks 2.2–2.3) | Focus on limitations, safety, and AWS service selection; end week 4 with a mixed set. |
| 5–7 | Domain 3 (Tasks 3.1–3.4) | RAG, prompting, evaluation, and customization basics; do weekly mixed sets. |
| 8 | Domains 4–5 + final review | 2 mixed sets + 2 timed runs; revisit weak tasks. |
Use task links from the Syllabus to drill each area as you go.
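The miss log both plans lean on doesn't need to be fancy: one entry per missed question, keyed by syllabus task so you know what to re-drill. A minimal sketch with made-up fields and entries:

```python
# A minimal miss log: one entry per missed question, keyed by syllabus task.
# Field names and example entries are made up for illustration.
from collections import Counter

miss_log = [
    {"task": "2.1", "topic": "tokens vs embeddings", "why_missed": "confused the two terms"},
    {"task": "3.2", "topic": "prompt injection", "why_missed": "forgot about guardrails"},
    {"task": "2.1", "topic": "vector search", "why_missed": "guessed"},
]

# Re-drill whichever task shows up most often
for task, count in Counter(entry["task"] for entry in miss_log).most_common():
    print(f"Task {task}: {count} misses -> re-drill first")
```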
90‑day part‑time plan: target pace ~3–4 hours/week.
Goal: slow repetition with consistent drills and periodic mixed sets.
| Week | Focus (tasks) | What to do |
|---|---|---|
| 1 | Task 1.1 | Learn core terms; do one short drill set. |
| 2 | Task 1.2 | Service pickers by use case; write one-liner rules. |
| 3 | Task 1.3 | Lifecycle + MLOps vocabulary; do 1–2 drills. |
| 4 | Task 2.1 | Tokens/embeddings/RAG basics; do 1–2 drills. |
| 5 | Task 2.2 | Limits + risks; add to miss log. |
| 6 | Task 2.3 | Bedrock/SageMaker/service selection; do a mixed set. |
| 7 | Task 3.1 | RAG architecture + grounding; drill. |
| 8 | Task 3.2 | Prompt patterns + safety; drill. |
| 9 | Task 3.3 | Prompt vs RAG vs fine-tune; do 1–2 drills. |
| 10 | Task 3.4 | Evaluation rubric + safety checks; do a mixed set. |
| 11 | Task 4.1 + Task 4.2 | Responsible AI + explainability; drill. |
| 12 | Task 5.1 + Task 5.2 + final review | 1–2 mixed sets + 2 timed runs; re-drill weak tasks. |
Use the Mastery Cloud practice app to turn the syllabus into a repeatable loop: drill by task, review your misses, then move to mixed sets and timed runs.
Direct practice link: /app/cloud/#/topic-selection/aws_aif-c01