Last-mile 1Z0-1122-25 review: the AI/ML lifecycle, evaluation-metric pickers, leakage/overfitting rules, GenAI grounding intuition, and responsible AI checklists.
Use this for last‑mile review. Pair it with the Syllabus.
```mermaid
flowchart LR
    P["Problem framing"] --> D["Data + labels"]
    D --> F["Features"]
    F --> T["Train"]
    T --> E["Evaluate"]
    E --> DEP["Deploy"]
    DEP --> MON["Monitor + iterate"]
```
Exam cue: an answer option that skips evaluation or monitoring is usually the incomplete one.
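A minimal train → evaluate sketch of that gate (scikit-learn on a synthetic, imbalanced dataset; illustrative only, not exam material): hold out a test split before training, evaluate on it before any deploy decision, and re-run the same check on live data when monitoring.

```python
# Minimal train -> evaluate gate using scikit-learn (illustrative sketch only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Synthetic, imbalanced binary dataset (90% / 10%).
X, y = make_classification(n_samples=1_000, weights=[0.9, 0.1], random_state=0)

# Hold out a test split *before* training so evaluation reflects unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Evaluate before deploying; monitoring would repeat this check on live data over time.
print("test F1:", f1_score(y_test, model.predict(X_test)))
```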
| Task | Good default | When to change |
|---|---|---|
| Classification | F1 / AUC | Switch to precision/recall when FP and FN costs differ |
| Regression | MAE / RMSE | Prefer RMSE when large errors are costlier; MAE when outliers should not dominate |
Rule: if the prompt mentions class imbalance, accuracy is rarely the best answer.
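A small metric-picker sketch in scikit-learn with tiny made-up arrays: on the imbalanced classification example, accuracy looks fine (0.8) while precision/recall/F1 show the model misses half the positives; on the regression example, one large error moves RMSE far more than MAE.

```python
# Metric picker sketch (scikit-learn): classification vs regression defaults.
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, roc_auc_score, precision_score, recall_score,
    mean_absolute_error, mean_squared_error,
)

# Classification: imbalanced labels, so accuracy alone is misleading.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0])
y_score = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.6, 0.9, 0.4])

print("Accuracy: ", accuracy_score(y_true, y_pred))      # looks good despite the misses
print("F1:       ", f1_score(y_true, y_pred))
print("AUC:      ", roc_auc_score(y_true, y_score))
print("Precision:", precision_score(y_true, y_pred))     # high when false positives are rare
print("Recall:   ", recall_score(y_true, y_pred))         # high when false negatives are rare

# Regression: the single large error dominates RMSE but not MAE.
y_true_r = np.array([10.0, 12.0, 11.0, 50.0])
y_pred_r = np.array([11.0, 12.0, 10.0, 20.0])
print("MAE: ", mean_absolute_error(y_true_r, y_pred_r))
print("RMSE:", np.sqrt(mean_squared_error(y_true_r, y_pred_r)))
```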
| Concept | What it means | Practical implication |
|---|---|---|
| Tokens | text pieces | cost and latency scale with tokens |
| Context window | max prompt + docs | long docs require chunking |
| Hallucination | plausible but wrong | add grounding + citations |
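A rough chunking sketch for documents that exceed the context window. It counts whitespace-separated words as a stand-in for real tokens (an assumption; production code would use the model's own tokenizer) and keeps a small overlap so context is not cut mid-thought.

```python
# Token-budget chunking sketch: split a long document so each chunk fits the context window.
# Whitespace "tokens" are a simplification; swap in the model's tokenizer for real counts.
def chunk_text(text: str, max_tokens: int = 512, overlap: int = 50) -> list[str]:
    words = text.split()
    step = max_tokens - overlap  # advance less than a full chunk to keep overlap
    return [
        " ".join(words[start:start + max_tokens])
        for start in range(0, len(words), step)
    ]

doc = "word " * 2_000
print(len(chunk_text(doc)))  # several overlapping chunks instead of one over-long prompt
```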
```mermaid
flowchart LR
    Q["Question"] --> RET["Retrieve relevant docs"]
    RET --> PROMPT["Prompt with context"]
    PROMPT --> LLM["LLM"]
    LLM --> A["Answer + citations"]
```
Rule: grounded answers come from good retrieval + clean data, not clever prompts.
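A minimal RAG-shaped sketch in plain Python. The keyword-overlap retriever and the hard-coded documents are assumptions for illustration (a real stack would use embeddings and a vector store), and the LLM call itself is omitted; the point is that answer quality hinges on what retrieval puts into the context.

```python
# RAG-shaped sketch: retrieve -> build a grounded prompt with citations (LLM call omitted).
def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    # Toy retriever: rank documents by word overlap with the question.
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, hits: list[tuple[str, str]]) -> str:
    # Put retrieved passages in the prompt and ask for cited, context-only answers.
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return (
        "Answer using only the context below and cite the [doc id].\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

docs = {
    "doc1": "The context window limits how many tokens fit in a prompt.",
    "doc2": "Hallucinations are plausible but wrong answers.",
}
question = "What limits prompt length?"
print(build_prompt(question, retrieve(question, docs)))
```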