Try 10 focused AI-103 questions on Implement Generative AI and Agentic Solutions, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | AI-103 |
| Topic area | Implement Generative AI and Agentic Solutions |
| Blueprint weight | 33% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Implement Generative AI and Agentic Solutions for AI-103. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 33% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Implement Generative AI and Agentic Solutions
A Microsoft Foundry agent uses a getShippingQuote tool. Users report that the agent repeatedly says it is “checking shipping” but then asks for the same details again.
Trace excerpt:
Tool call: getShippingQuote
Arguments: {"orderId":"SO-1842","deliveryType":"fast"}
API response: 400 Bad Request
Reason: postalCode is required; deliveryType must be one of standard, express, overnight
Tool schema: orderId string required; deliveryType string optional; postalCode string optional
What is the best next fix?
Options:
A. Add required fields and enums to the tool schema.
B. Lower the agent model temperature.
C. Add Azure AI Search grounding data.
D. Increase the API retry count.
Best answer: A
Explanation: The failure is caused by a mismatch between the tool schema and the API contract. The agent is allowed to omit postalCode and invent an unsupported deliveryType, so updating the schema is the most direct fix.
Tool schemas should describe the function contract clearly enough for the agent to supply valid arguments. In this trace, the API rejects the call because a required argument is not enforced and an unconstrained string allows an invalid value. The schema should mark postalCode as required and define deliveryType with an enum such as standard, express, and overnight.
This enforces valid arguments at the schema level, before the call is ever made. Retrying or changing model behavior does not correct an invalid function contract.
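As a minimal sketch, a corrected parameter schema might look like the following JSON-schema-style definition. Field names come from the trace; how the tool is registered depends on your SDK.

```python
# Hypothetical corrected parameter schema for the getShippingQuote tool.
# Field names come from the trace; registration details depend on your SDK.
get_shipping_quote_schema = {
    "name": "getShippingQuote",
    "description": "Return a shipping quote for an existing sales order.",
    "parameters": {
        "type": "object",
        "properties": {
            "orderId": {
                "type": "string",
                "description": "Sales order ID, for example SO-1842.",
            },
            "deliveryType": {
                "type": "string",
                "enum": ["standard", "express", "overnight"],  # blocks invented values such as "fast"
                "description": "Delivery speed accepted by the shipping API.",
            },
            "postalCode": {
                "type": "string",
                "description": "Destination postal code.",
            },
        },
        "required": ["orderId", "postalCode"],  # the API rejects calls without postalCode
    },
}
```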
Topic: Implement Generative AI and Agentic Solutions
A company is building a Foundry-based assistant for internal policy questions. The source documents change daily and contain confidential data. Security requires keyless private access, least-privilege document visibility, and auditable source citations. The assistant must answer from current enterprise knowledge. Which implementation should you recommend?
Options:
A. Disable document grounding and answer only from the base model.
B. Place the latest documents in the system prompt for every request.
C. Fine-tune an LLM on the documents and update the model monthly.
D. Use RAG with Azure AI Search, managed identity, role filters, and provenance metadata.
Best answer: D
Explanation: The scenario needs current enterprise knowledge, not model training. A RAG design can retrieve fresh content at query time and enforce security controls such as managed identity, private access, role-based filtering, and citations without blocking valid user questions.
RAG is the right pattern when an app must use enterprise content that changes frequently. In this case, the assistant should retrieve from an Azure AI Search index or connected knowledge source at runtime, using managed identity for keyless access and private networking where required. Role-based filters preserve each user’s document visibility, and provenance metadata enables source citations and audit review. This keeps confidential knowledge outside model weights while still grounding answers in approved content. Fine-tuning is not a substitute for secure retrieval when facts change daily.
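A minimal sketch of one retrieval turn is shown below, assuming an index that carries a group_ids field for document visibility and a sourceUrl field for citations; the field names, index name, and endpoint are placeholders for illustration.

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

# Keyless access: use a managed identity / Entra ID credential instead of an API key.
credential = DefaultAzureCredential()
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",  # placeholder endpoint
    index_name="policy-docs",                                     # assumed index name
    credential=credential,
)

def retrieve_policy_passages(question: str, user_group_ids: list[str]) -> list[dict]:
    """Retrieve current policy passages the caller is allowed to see, with provenance."""
    # Security trimming: only return documents tagged with one of the user's groups.
    # Assumes the index has a filterable collection field named group_ids.
    group_filter = "group_ids/any(g: search.in(g, '{}'))".format(",".join(user_group_ids))
    results = search_client.search(search_text=question, filter=group_filter, top=5)
    return [
        {"content": doc["content"], "source": doc["sourceUrl"]}  # provenance for citations
        for doc in results
    ]
```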
Topic: Implement Generative AI and Agentic Solutions
An insurance team is building a Foundry agent that reviews claim notes and can submit payout requests through a tool. Policy allows automatic payout submission up to 1,000 USD, but higher payouts require an adjuster’s approval with logged approver identity and cited claim evidence. A prototype uses one system prompt: “Ask for approval before submitting high-value payouts.” Which design should you implement?
Options:
A. Strengthen the system prompt and lower model temperature before permitting payout tool calls.
B. Use managed identity and private networking while keeping prompt-based approval logic.
C. Use a Foundry workflow gate that records approval and provenance before high-value tool calls.
D. Send every payout request to a manual queue outside the agent workflow.
Best answer: C
Explanation: Critical business actions should be enforced by workflow state and tool-access controls, not hidden inside a prompt. A Foundry workflow gate can require approval, provenance, and logging before high-value payout tool calls while allowing legitimate low-value automation.
Prompt instructions are not a reliable or auditable control for a high-impact business action such as submitting a payout. The agent can still analyze the claim and produce cited evidence, but the workflow should check the payout amount and require a separate approval state for high-value claims. Only after the approval step records the approver identity and provenance should the payout tool become available. This preserves oversight without forcing every low-value claim into a manual process. Security controls such as managed identity and private networking protect access paths, but they do not replace business approval logic.
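A minimal sketch of that gate follows; ApprovalRecord, submit_payout, and the threshold constant are illustrative names rather than Foundry APIs, and in practice the check lives in the orchestration layer, not the prompt.

```python
from dataclasses import dataclass

AUTO_PAYOUT_LIMIT_USD = 1_000  # policy threshold from the scenario

@dataclass
class ApprovalRecord:
    approver_id: str           # logged approver identity
    claim_id: str
    cited_evidence: list[str]  # provenance: claim evidence the adjuster reviewed

def gate_payout(claim_id: str, amount_usd: float,
                approval: ApprovalRecord | None,
                submit_payout) -> object:
    """Allow the payout tool only when policy conditions are met."""
    if amount_usd <= AUTO_PAYOUT_LIMIT_USD:
        return submit_payout(claim_id, amount_usd)           # low-value: autonomous
    if approval is None or approval.claim_id != claim_id or not approval.cited_evidence:
        return "blocked: adjuster approval with cited evidence required"
    # High-value path: approver identity and evidence are recorded before the tool runs.
    return submit_payout(claim_id, amount_usd)
```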
Topic: Implement Generative AI and Agentic Solutions
You are building a procurement assistant in a Microsoft Foundry project. A user can make a single request such as: “Can I buy 40 tablets from this supplier and create the draft purchase request?” The solution must use policy documents, check live supplier status through a private ERP connector, create the draft only after policy and schema validation pass, and provide audit traces. Which implementation should you choose?
Options:
A. Use one LLM call that includes policy excerpts and ERP credentials in the prompt.
B. Let the agent call ERP tools first, then retrieve policy content to justify the action.
C. Index ERP exports in Azure AI Search and have the model generate purchase requests from retrieved text.
D. Build a Foundry workflow that retrieves policy context, reasons, calls ERP tools by managed identity, validates, then responds.
Best answer: D
Explanation: The best implementation uses a stepwise Foundry workflow instead of a single opaque model call. It grounds the decision with retrieval, uses controlled tool execution for live ERP actions, validates before side effects, and leaves auditable traces.
For high-risk agent tasks, use a decomposed workflow or tool-augmented reasoning pipeline. The assistant should first retrieve the relevant procurement policy from Azure AI Search, then use a model step to reason over the request, then call the ERP connector through managed identity only when needed. Before creating the draft purchase request, add validation gates for policy compliance and tool input/output schema checks. The final response can summarize the decision and include provenance from the retrieved policy and tool results. This design makes each step observable and controllable, unlike a single model call that hides retrieval, reasoning, and action decisions inside one prompt.
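One way to picture the ordering is the hedged skeleton below, where each step is injected as a callable; the retrieval step, model reasoning step, ERP connector, validation gates, and action tool are all placeholders.

```python
from typing import Callable

def run_procurement_flow(
    request: str,
    retrieve_policy: Callable[[str], list[dict]],        # Azure AI Search retrieval step
    reason: Callable[[str, list[dict]], dict],            # model reasoning step
    check_supplier: Callable[[str], dict],                # ERP connector (managed identity handles auth)
    validate: Callable[[dict, list[dict], dict], bool],   # policy + tool schema gates
    create_draft: Callable[[dict], str],                  # side-effecting action tool
) -> dict:
    """Ordered, traceable steps: ground, reason, act, validate, then respond with provenance."""
    policy = retrieve_policy(request)
    plan = reason(request, policy)
    supplier = check_supplier(plan["supplier_id"])
    if not validate(plan, policy, supplier):
        return {"status": "rejected", "sources": [p.get("source") for p in policy]}
    return {
        "status": "draft_created",
        "draft_id": create_draft(plan),
        "sources": [p.get("source") for p in policy],  # provenance cited in the final response
    }
```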
Topic: Implement Generative AI and Agentic Solutions
You are configuring instructions for a Microsoft Foundry agent that serves as an internal expense assistant. The agent can answer policy questions by using Azure AI Search and can call only a createExpenseTicket tool. It must not approve expenses, access payroll data, or respond to requests that violate the organization’s content safety policy. Which instruction should you add?
Options:
A. Escalate all prohibited requests with the full conversation transcript.
B. Use model reasoning for unauthorized actions, then log the result.
C. Refuse every request that current tools cannot complete.
D. Answer from Search; escalate role or tool-limit requests; refuse safety-policy violations.
Best answer: D
Explanation: Agent instructions should clearly define boundaries. The agent can answer grounded policy questions, escalate requests that exceed its role or available tools, and refuse requests that violate safety constraints without attempting tool calls.
For a Foundry agent, role, tool, and safety boundaries should be explicit in the system or agent instructions. A request to approve an expense or access payroll exceeds the agent’s role and tool permissions, so the safe behavior is to explain the limitation and offer escalation through the approved ticket tool. A request that violates content safety policy should be refused rather than escalated or attempted with another tool. The key is to prevent the model from improvising authority it does not have.
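A hedged example of how those boundaries might be written into the agent instructions follows; the wording is illustrative only.

```python
# Illustrative agent instructions; adapt the wording to your organization's policies.
EXPENSE_AGENT_INSTRUCTIONS = """
You are an internal expense assistant.
- Answer expense-policy questions using only content returned by the policy search tool,
  and cite the retrieved source.
- You may call createExpenseTicket to open a ticket; you have no other tools.
- You cannot approve expenses or access payroll data. If asked, explain the limitation
  and offer to open a ticket so the request reaches the right team.
- Refuse requests that violate the organization's content safety policy; do not attempt
  tool calls for them.
"""
```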
Topic: Implement Generative AI and Agentic Solutions
A Foundry agent for HR benefits support uses Azure AI Search to ground answers in current policy documents. Users ask follow-up questions such as, “Does that apply to my spouse?” The app must remember facts provided earlier in the same chat, but must not use facts from other chats or store chat transcripts in the search index. What should you implement?
Options:
A. Disable state and rely on semantic ranking
B. Per-thread short-term memory with Search retrieval each turn
C. Long-term user memory shared across all sessions
D. Index every chat transcript in Azure AI Search
Best answer: B
Explanation: The requirement is conversation continuity within a single chat, not cross-session personalization. Per-thread short-term memory or state can retain relevant user facts and conversation summary, while Azure AI Search remains the grounding source for current HR policy content.
Agent memory and retrieval serve different purposes. For follow-up questions, the agent needs per-conversation state to resolve references such as “that” or “my spouse” using facts already provided in the same thread. Azure AI Search should still be called on each relevant turn to ground the answer in authoritative, current policy documents. This avoids mixing user facts across sessions and avoids polluting the retrieval index with private conversation transcripts. The key design is to keep ephemeral or thread-scoped memory for dialogue continuity and use the retrieval tool for external knowledge grounding.
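A minimal sketch of thread-scoped memory alongside per-turn retrieval is shown below; the in-memory store and the injected search callable are illustrative stand-ins, and nothing here writes transcripts into the index.

```python
from collections import defaultdict
from typing import Callable

# Thread-scoped memory: facts live only for the duration of one chat thread.
_thread_facts: dict[str, dict[str, str]] = defaultdict(dict)

def remember(thread_id: str, key: str, value: str) -> None:
    _thread_facts[thread_id][key] = value          # e.g. remember("t1", "dependent", "spouse")

def answer_turn(thread_id: str, question: str,
                search_policy: Callable[[str], list[dict]]) -> dict:
    """Resolve follow-ups from same-thread facts, but ground every turn in policy retrieval."""
    facts = _thread_facts[thread_id]                       # same-thread facts only
    # Crude illustration: fold thread facts into the retrieval query for reference resolution.
    grounded_query = f"{question} (context: {facts})" if facts else question
    passages = search_policy(grounded_query)               # Azure AI Search stays the knowledge source
    return {"facts_used": dict(facts), "passages": passages}

def end_thread(thread_id: str) -> None:
    _thread_facts.pop(thread_id, None)                     # memory is not persisted across chats
```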
Topic: Implement Generative AI and Agentic Solutions
A Microsoft Foundry support agent can summarize cases, look up orders, issue refunds, and close customer accounts. Company policy allows autonomous lookup and response drafting, but refunds and account closures require supervisor approval before execution. You need to enforce the policy without blocking normal support work. What should you configure?
Options:
A. Full autonomous tool access with trace logging enabled
B. Disable all tools and allow draft responses only
C. Private endpoints and managed identity for every tool
D. Approval workflow for the refund and closure tools only
Best answer: D
Explanation: The core control is human-in-the-loop approval for high-impact agent actions. Applying approval only to refund and account-closure tools satisfies the policy while still allowing the agent to perform allowed autonomous tasks such as lookup and drafting.
In safeguarded autonomous workflows, tool-access controls should match the risk of the action. Read-only or low-impact tools can remain available to the agent, while tools that change customer state or spend money should require an approval step before invocation. This is different from merely logging activity or securing network access; those controls help with auditing and access protection but do not stop an unapproved sensitive action. The key takeaway is to constrain autonomy at the tool boundary where the risky action occurs.
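A minimal sketch of risk-based tool gating follows, with tool names taken from the scenario and the approval callback left as a placeholder.

```python
from typing import Callable

# Risk-based tool policy: only state-changing tools require a supervisor approval step.
TOOL_REQUIRES_APPROVAL = {
    "summarize_case": False,
    "lookup_order": False,
    "issue_refund": True,
    "close_account": True,
}

def invoke_tool(name: str, args: dict, tools: dict[str, Callable],
                request_approval: Callable[[str, dict], bool]) -> object:
    """Run low-impact tools directly; gate high-impact tools behind supervisor approval."""
    if TOOL_REQUIRES_APPROVAL.get(name, True):             # default to safe: unknown tools need approval
        if not request_approval(name, args):
            return {"status": "pending_approval", "tool": name}
    return tools[name](**args)
```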
Topic: Implement Generative AI and Agentic Solutions
You are building a Microsoft Foundry workflow for a benefits claims assistant. It must retrieve current policy evidence from Azure AI Search, draft a response, check groundedness and safety, and create an HR case through a Foundry tool only if the checks pass. Auditors require separate trace records for the evidence, draft, evaluation result, and HR action. Which implementation should you use?
Options:
A. Use separate retrieval, generation, evaluator gate, and action steps.
B. Run the HR action before evaluating the generated response.
C. Use one agent turn with search and HR tools enabled.
D. Embed policies in the prompt and parse generated JSON.
Best answer: A
Explanation: The workflow should split retrieval, generation, evaluation, and action because each stage has a different responsibility and audit artifact. A separate evaluator gate can prevent the HR tool from running unless the generated draft is grounded and safe.
In Foundry workflows and tool-augmented flows, action steps should be isolated when they have business impact. Retrieval should collect evidence from Azure AI Search, generation should create a draft grounded in that evidence, evaluation should score or validate the draft, and the action tool should run only after the gate passes. This structure supports independent tracing, retries, and human review paths if an evaluation fails. Combining these responsibilities into a single model turn makes it harder to prove what evidence was used and whether the action was blocked by a reliable gate.
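A minimal sketch of the four separated stages, each emitting its own trace record, is shown below; the step functions are injected placeholders, and the evaluator gate blocks the HR action when checks fail.

```python
from typing import Callable

def run_claims_flow(question: str,
                    retrieve: Callable[[str], list[dict]],
                    generate: Callable[[str, list[dict]], str],
                    evaluate: Callable[[str, list[dict]], dict],
                    create_hr_case: Callable[[str], str]) -> dict:
    """Retrieval, generation, evaluation, and action as separate, individually traced steps."""
    traces: list[dict] = []

    evidence = retrieve(question)
    traces.append({"step": "retrieval", "evidence": evidence})

    draft = generate(question, evidence)
    traces.append({"step": "generation", "draft": draft})

    result = evaluate(draft, evidence)                      # groundedness + safety checks
    traces.append({"step": "evaluation", "result": result})

    if result.get("grounded") and result.get("safe"):
        case_id = create_hr_case(draft)
        traces.append({"step": "action", "case_id": case_id})
    else:
        traces.append({"step": "action", "status": "blocked_by_evaluator"})

    return {"traces": traces}
```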
Topic: Implement Generative AI and Agentic Solutions
A Foundry project includes a customer support agent that retrieves refund policy from Azure AI Search and can call issue_refund(customerId, amount). A safety evaluation fails for refunds over USD 500.
Trace excerpt:
Retrieved policy: "Refunds over $500 require manager approval."
Assistant: "This refund requires manager approval before processing."
Tool call: issue_refund({"customerId":"C37","amount":850})
Tool result: refund issued
Which optimization should you implement?
Options:
A. Add an orchestration approval gate before the refund tool runs.
B. Revise the prompt to restate the refund rule.
C. Switch the agent to a larger language model.
D. Increase the Azure AI Search top-k value.
Best answer: A
Explanation: The failure is not caused by missing retrieval or weak reasoning. The trace shows that the policy was retrieved and the assistant stated the approval requirement, but the tool was still executed. A deterministic orchestration gate is needed to enforce approval before tool invocation.
Optimization should target the failing component shown in traces and evaluations. Here, retrieval succeeded and the model produced the correct natural-language policy response. The failure happened at the workflow boundary: the agent could still call issue_refund before a manager approval event existed. Approval requirements should be enforced by orchestration or tool-access controls, not only by model instructions. This prevents unsafe tool execution even if the model output is inconsistent in a later turn.
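As a small illustration of such a deterministic gate, the sketch below uses the threshold and tool name from the trace; the wrapper itself is an assumption, not a Foundry API.

```python
REFUND_APPROVAL_THRESHOLD_USD = 500  # from the retrieved policy

def guarded_issue_refund(customer_id: str, amount: float,
                         has_manager_approval: bool,
                         issue_refund) -> dict:
    """Deterministic orchestration gate: runs whether or not the model restated the rule."""
    if amount > REFUND_APPROVAL_THRESHOLD_USD and not has_manager_approval:
        return {"status": "blocked", "reason": "manager approval required over $500"}
    return issue_refund({"customerId": customer_id, "amount": amount})
```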
Topic: Implement Generative AI and Agentic Solutions
A company is building a Microsoft Foundry support-agent workflow. The workflow retrieves customer records from Azure AI Search, calls an order-refund tool, and drafts a response. Security policy requires keyless access over a private network, least-privilege tool access, human approval for refunds above USD 500, safety screening, and auditable traces with cited sources. The solution must not block routine refunds or normal support questions. Which implementation should you use?
Options:
A. Disable the refund tool, route refunds to email, and evaluate safety only before deployment.
B. Use managed identity, private endpoints, role-scoped tools, conditional approval, safety filters, evaluators, traces, and provenance.
C. Use public endpoints, prompt-only safety instructions, and logs of final responses only.
D. Use stored service keys, approve every refund tool call, and disable tracing for customer privacy.
Best answer: B
Explanation: The workflow needs layered controls, not a single prompt or blanket block. Managed identity and private networking address secure access, while role-scoped tools, conditional approval, safety filters, evaluators, trace logging, and provenance preserve safety and observability without stopping legitimate routine actions.
For a tool-augmented Foundry workflow, the safest design combines identity, network, policy, oversight, and telemetry controls. Managed identity with role assignments avoids embedded secrets, and private endpoints reduce network exposure. Tool-access policies limit who or what can invoke the refund tool. A conditional human approval step for refunds above the stated threshold targets high-risk actions while allowing normal refunds to proceed. Safety filters and evaluators detect harmful or policy-violating content, while trace logging and provenance metadata make each retrieval, tool call, and generated answer auditable. Blanket approval or tool removal would reduce risk, but it would also block legitimate use.
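As a rough sketch of how those layers could be declared side by side, the structure below is invented for illustration and is not a real Foundry configuration format.

```python
# Illustrative layered-control configuration; keys and structure are invented for this sketch.
workflow_controls = {
    "identity": {"auth": "managed_identity", "stored_keys": False},
    "network": {"private_endpoints": True, "public_access": False},
    "tools": {
        "search_customer_records": {"roles": ["support_agent"]},
        "order_refund": {
            "roles": ["support_agent"],
            "approval_required_over_usd": 500,   # conditional human approval, not a blanket block
        },
    },
    "safety": {"content_filters": True, "evaluators": ["groundedness", "harm"]},
    "observability": {"tracing": True, "provenance_metadata": True},
}
```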
Use the AI-103 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the AI-103 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.