Try 10 focused GitHub Copilot GH-300 questions on Prompt Engineering, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try GitHub Copilot GH-300 on the web, or view the full GitHub Copilot GH-300 practice page.
| Field | Detail |
|---|---|
| Exam route | GitHub Copilot GH-300 |
| Topic area | Apply Prompt Engineering and Context Crafting |
| Blueprint weight | 13% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Apply Prompt Engineering and Context Crafting for GitHub Copilot GH-300. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 13% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Apply Prompt Engineering and Context Crafting
A developer is fixing a flaky test in their IDE. They want to ask Copilot for likely causes, add a failure log in a follow-up prompt, then ask Copilot to revise the recommendation based on that new information. They want each turn to use the prior conversation context before making any edits. Which GitHub Copilot feature best fits this need?
Options:
A. Prompt files
B. Inline suggestions
C. Instruction files
D. Copilot Chat
Best answer: D
Explanation: Copilot Chat is the best fit for an iterative prompting workflow because it supports back-and-forth conversation with retained context. That lets the developer refine the request over several turns as new details are added.
When a task depends on iterative prompting, the key need is a surface that preserves conversational context from earlier turns. Copilot Chat is built for that flow: the user asks an initial question, reviews the response, adds new evidence or constraints, and then refines the request in follow-up turns. Chat history helps Copilot interpret those later prompts in the context of the ongoing discussion.
This is different from features that mainly provide one-shot completion or reusable guidance: inline suggestions produce immediate completions without a conversational thread, while prompt files and instruction files supply saved or standing guidance rather than turn-by-turn context.
For a debugging conversation that evolves step by step, Copilot Chat is the right surface. Any suggested fix should still be reviewed and validated.
Topic: Apply Prompt Engineering and Context Crafting
A developer is using GitHub Copilot Chat in an IDE to create tests for PaymentService.
Exhibit:
Turn 1: Generate smoke tests for the old synchronous PaymentService API.
Turn 5: The code changes to async, and `payment_service.py` is selected.
Turn 6: Use pytest and add edge cases for timeouts and retries.
Turn 7: After two follow-up corrections, Copilot still suggests sync tests and old helper names.
The developer wants accurate pytest tests for the current async code. What is the best next step?
Options:
A. Start a new chat and restate the async API, selected file, pytest style, and target edge cases in one prompt.
B. Switch to Agent Mode so Copilot can infer the updated API from the repository.
C. Accept the draft tests and depend on CI failures to show what needs changing.
D. Send a shorter follow-up such as "fix the tests" and rely on the existing chat history.
Best answer: A
Explanation: Copilot Chat uses earlier turns as part of the context for later replies. When a conversation is anchored to outdated assumptions, the best recovery is to start fresh and provide the current goal and constraints explicitly.
In an iterative Copilot Chat conversation, previous turns influence later responses. If early prompts describe an outdated API or old requirements, later requests can continue to inherit that stale context, even when the selected file has changed. In this scenario, repeated follow-ups have not corrected the drift, so the best next step is to start a new chat and provide one specific prompt with the current async API, the selected file, the pytest convention, and the desired edge cases.
This recovery workflow matters because a clean conversation reduces anchoring from old chat history and makes the intended context explicit. Vague follow-ups, unnecessary feature changes, or relying on CI alone do not fix the underlying context problem.
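A consolidated restart prompt for this scenario might look like the following. The wording and the pytest-asyncio convention are illustrative, not part of the original exhibit:

```text
New chat: payment_service.py is selected. The PaymentService API is now
async; ignore any earlier synchronous helpers. Write pytest tests for the
current async functions (pytest-asyncio style) and include edge cases for
timeouts and retries.
```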
Topic: Apply Prompt Engineering and Context Crafting
A developer uses a reusable few-shot prompt file with Copilot Chat to generate error-mapping functions. After adding one example, the output looks more uniform, but a new result repeats the example’s log message and error code even though the business case is different.
Exhibit:
Example in prompt:
logger.warning("inventory timeout")
return {"code": "INV_TIMEOUT", "retry": True}
Generated for payment decline:
logger.warning("inventory timeout")
return {"code": "INV_TIMEOUT", "retry": True}
The developer needs to determine whether the few-shot prompt improved consistency or is copying the example too closely. What is the best next step?
Options:
A. Switch to Agent Mode for broader repository context.
B. Add more examples with the same literals to reinforce the pattern.
C. Keep the prompt if the next file compiles and passes tests.
D. Run varied unseen cases, abstract the examples, and compare consistency.
Best answer: D
Explanation: The best action is to validate the prompt on new cases and make the examples less literal. Few-shot prompting works when Copilot keeps the intended structure while adapting case-specific details, not when it repeats example text unchanged.
Few-shot prompting should teach a reusable pattern, not cause literal reuse from one example. In this scenario, copied strings such as "inventory timeout" and "INV_TIMEOUT" suggest the example is too concrete, so you need to test for generalization.
This is the best way to tell whether consistency improved or the model is over-copying. Adding more similar examples usually strengthens mimicry, and changing to Agent Mode does not fix a prompt-design problem.
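A generalization check can be made concrete. The sketch below assumes a hypothetical `map_error` function of the kind the few-shot prompt would generate; the error kinds and codes are illustrative, not from the exam scenario:

```python
# Hypothetical error-mapping function, standing in for Copilot's output.
def map_error(kind: str) -> dict:
    mapping = {
        "inventory_timeout": {"code": "INV_TIMEOUT", "retry": True},
        "payment_declined": {"code": "PAY_DECLINED", "retry": False},
        "auth_expired": {"code": "AUTH_EXPIRED", "retry": True},
    }
    return mapping.get(kind, {"code": "UNKNOWN", "retry": False})

def check_generalization(cases: list[str]) -> bool:
    """Flag over-copying: every distinct input should map to a distinct code.

    If the generated function simply repeated the INV_TIMEOUT example
    for every case, the codes would collide and this check would fail.
    """
    codes = [map_error(kind)["code"] for kind in cases]
    return len(set(codes)) == len(codes)

# Varied, unseen cases exercise the pattern rather than the literal example.
assert check_generalization(["inventory_timeout", "payment_declined", "auth_expired"])
```

Running this kind of harness on cases the prompt's example never mentioned is what distinguishes "the prompt improved consistency" from "the model is copying text."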
Topic: Apply Prompt Engineering and Context Crafting
A developer has been using Copilot Chat to discuss a Python migration. In the same IDE session, they switch to src/api/refund.ts in a monorepo. The repository already has an instruction file for TypeScript API conventions, and the team maintains a prompt file for endpoint hardening. The developer wants Copilot to update only the selected refund() handler and reuse the existing validator in validation.ts. Which workflow is most likely to give Copilot the right context?
Options:
A. Open only refund.ts and ask Copilot to improve the file without extra context.
B. Paste the full api folder into chat and request a comprehensive rewrite.
C. Start a new chat, select refund(), open validation.ts, and run the endpoint-hardening prompt file.
D. Continue the current chat and ask Copilot to apply the earlier migration approach here.
Best answer: C
Explanation: Copilot context comes from more than the current prompt. Starting a fresh chat avoids stale Python-focused chat history, while selecting the handler, opening the validator file, and using the prompt file gives Copilot focused local and repository-level context for the requested change.
Copilot can draw context from selected text, open files, repository context, chat history, prompt files, and instruction files. In this scenario, the earlier Python conversation can bias the next response, so starting a new chat is useful when the old thread is no longer relevant. Selecting refund() narrows the task to the exact code to change, and keeping validation.ts open helps surface the existing validator pattern. The repository instruction file reinforces team conventions, and the prompt file adds reusable task framing for endpoint hardening. Together, these signals give Copilot targeted context without asking it to infer too much or process unnecessary content. A broader file-level or folder-level request is less precise than shaping the context around the specific handler.
Opening only refund.ts is broader than needed and does not explicitly surface the validator as nearby context.
Topic: Apply Prompt Engineering and Context Crafting
A developer has been using GitHub Copilot to discuss a service refactor. In the conversation, Copilot assumed the service would stay synchronous. The requirement changes: the service must now call an async queue, but the earlier design discussion is still useful. The developer wants to keep relevant context from the prior discussion while correcting the outdated assumption before asking for revised code and tests. Which Copilot feature is the best fit?
Options:
A. Inline suggestions
B. Edit Mode
C. Agent Mode
D. Copilot Chat
Best answer: D
Explanation: Copilot Chat is the best choice because the developer’s need is conversational: keep useful prior context, correct an earlier assumption, and continue with refined follow-up prompts. That maps directly to chat history usage, not simple completion or autonomous editing.
When the goal is to preserve useful context across multiple prompts, Copilot Chat is the right surface because it carries conversation history forward. In this scenario, the developer should continue in chat but explicitly state that the previous synchronous assumption is no longer valid and that the new async queue requirement now applies. That keeps relevant background while reducing the risk that Copilot keeps reasoning from stale context.
Inline suggestions are optimized for immediate code completion, not for managing changing assumptions across a conversation. Edit Mode focuses on applying edits, and Agent Mode is better for broader multi-step work; neither is the best fit when the main need is to refine context through follow-up discussion. The key takeaway is to use chat history deliberately: preserve what is still true, and clearly overwrite what changed.
Topic: Apply Prompt Engineering and Context Crafting
A team uses GitHub Copilot Chat to draft API deprecation notices across many services. Each time, they want the same prompt structure: summarize breaking changes, list migration steps, and create customer-facing release notes from the selected files. They need this to be reusable and versioned, but they do not want it applied automatically to every chat. Which approach is best?
Options:
A. Create a prompt file for the deprecation-notice workflow
B. Rely on chat history to preserve the template between sessions
C. Store the text in a repository instruction file
D. Use Agent Mode so Copilot infers the format each time
Best answer: A
Explanation: Prompt files fit recurring tasks that need a reusable prompt structure without becoming always-on guidance. The scenario explicitly separates reusable task prompts from default chat behavior, which is why a prompt file is the best match.
The key concept is choosing the right reusable context mechanism. Prompt files are for repeatable, task-specific prompts that teams want to save, share, and run on demand. In this scenario, the team repeatedly performs the same deprecation-notice task and wants a consistent structure, but only when they choose that workflow.
Instruction files serve a different purpose: they provide standing guidance Copilot should generally follow for a repository, project, or organization. Chat history is session context, not a durable reusable template. Agent Mode can help execute broader tasks, but it does not replace the need for a saved, repeatable prompt structure. The best fit is the feature built to reuse prompts for recurring work.
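A minimal sketch of what such a prompt file might look like follows. The filename, front-matter fields, and wording are assumptions; the exact format depends on your IDE's Copilot prompt-file support:

```markdown
---
description: Draft an API deprecation notice from the selected files
---
Summarize the breaking changes in the selected files.
Then list concrete migration steps for consumers.
Finally, write customer-facing release notes in our standard format:
short summary, impact, action required, timeline.
```

Because the file lives in the repository, it is versioned and shareable, yet it only shapes Copilot's output when a developer deliberately runs it.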
Topic: Apply Prompt Engineering and Context Crafting
A team gets inconsistent Copilot results from broad requests like "Write unit tests for this handler." They want a reusable repository artifact that turns this into a few-shot prompt by showing two example requests and the corresponding preferred outputs. Developers should use it when needed, but it should not become always-on guidance for every Copilot interaction. Which GitHub Copilot feature best fits this need?
Options:
A. Use Copilot Chat for the request each time
B. Store the examples in an instruction file
C. Store the examples in a prompt file
D. Use Agent Mode to generate the tests
Best answer: C
Explanation: Prompt files are the best fit when a team wants to turn a vague request into a reusable few-shot prompt. They let the team capture example inputs and outputs for a specific task without making that guidance persist across all Copilot interactions.
Few-shot prompting works by giving Copilot examples that demonstrate the pattern you want, such as sample requests and preferred responses. When that pattern should be reusable for a specific task, the right Copilot feature is a prompt file. It packages the examples as a repeatable prompt that developers can invoke when needed.
Instruction files serve a different purpose: they provide persistent guidance such as coding standards or repository conventions across interactions. Copilot Chat is useful for iterating on a prompt in the moment, but it is not the best feature for sharing a reusable few-shot prompt artifact. Agent Mode is better for carrying out multi-step implementation work than for storing prompt examples. The key distinction is task-specific reusable prompting versus always-on behavioral guidance.
Topic: Apply Prompt Engineering and Context Crafting
A developer is using GitHub Copilot Chat in VS Code to fix a VAT rounding bug in the selected calculateShippingTotal() function in checkout/pricing.ts. The function is already highlighted, and the developer wants a focused answer for that file only. Which prompt is the best choice to avoid irrelevant or excessive context that could reduce response quality?
Options:
A. Rewrite pricing across the repository and optimize shipping, payments, and email templates together.
B. Review the whole checkout system, including sprint goals and release notes, then fix the rounding bug.
C. Use the selected calculateShippingTotal() code to explain the rounding bug, suggest a minimal TypeScript fix, and add three unit tests.
D. Use the selected code plus payment-service logs and team meeting notes to infer the best fix.
Best answer: C
Explanation: The best prompt gives Copilot only the context needed for the task: the selected function, the bug to investigate, and the expected outputs. Adding unrelated documents, logs, or broader rewrite goals increases noise and can make the response less focused and less useful.
When Copilot already has the relevant code in context, the strongest prompt adds only task-specific guidance, not extra material that does not help solve the problem. Here, the developer needs help with one selected function in one file, so the prompt should name that scope, describe the bug, and request concrete deliverables such as a minimal fix and unit tests.
Useful prompt elements are:
- The selected calculateShippingTotal() function as the explicit scope.
- A clear description of the VAT rounding bug to investigate.
- Concrete deliverables: a minimal TypeScript fix and three unit tests.

Broader repository goals, release notes, meeting notes, or unrelated logs dilute the signal and can push Copilot toward vague or overly broad answers. The key takeaway is to include relevant context, not maximum context.
Topic: Apply Prompt Engineering and Context Crafting
A developer is using Copilot Chat in an IDE to choose how to handle concurrent updates in a Node.js order service backed by PostgreSQL. The current prompt is too generic, so Copilot returns textbook pros and cons instead of reasoning tied to the codebase.
Open file: src/orders/OrderService.ts
Selected method: updateOrderStatus(orderId, status)
Current prompt:
"Which implementation approach should I use for concurrent updates?"
Goal:
Get a recommendation that explains tradeoffs for this code path.
Which prompt/context change would best improve the result?
Options:
A. Ask, "Which approach is best here? Be detailed."
B. Select updateOrderStatus and ask Copilot to compare optimistic concurrency and row locking for this method, using workload constraints, then recommend one and list tests.
C. Start a new chat and repeat the current prompt without selected code.
D. Switch to Agent Mode and let Copilot edit the service before explaining the tradeoffs.
Best answer: B
Explanation: The best improvement is to make the prompt specific, contextual, and constrained. Naming the code path, the two approaches, and the workload priorities gives Copilot enough context to produce a grounded tradeoff analysis instead of a generic answer.
When you want Copilot to explain reasoning between implementation approaches, the strongest prompt is explicit about three things: the code in scope, the options being compared, and the constraints that matter. Selecting updateOrderStatus gives Copilot the exact code path. Naming optimistic concurrency and row locking prevents a vague answer. Adding requirements such as expected write conflicts, correctness priority, and minimal schema change lets Copilot evaluate tradeoffs against real project needs.
Asking for a recommendation plus tests also improves the result because Copilot should provide a draft decision that you can review and validate. A broader prompt usually produces generic pros and cons, and removing context makes that worse. Using Agent Mode is unnecessary here because the task is analysis first, not autonomous code changes.
The key takeaway is that Copilot gives better tradeoff reasoning when you provide the context and constraints it cannot safely infer.
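The tradeoff being compared can be illustrated with a small sketch. This is a conceptual, in-memory model of optimistic concurrency, not the Node.js/PostgreSQL service from the scenario; the names and structure are assumptions:

```python
class ConflictError(Exception):
    """Raised when a concurrent writer changed the row first."""

# In-memory stand-in for an orders table with a version column.
orders = {"o1": {"status": "pending", "version": 1}}

def update_order_status(order_id: str, status: str, expected_version: int) -> int:
    """Optimistic concurrency: the update applies only if the version still
    matches what this writer read; otherwise the caller must re-read and retry."""
    row = orders[order_id]
    if row["version"] != expected_version:
        raise ConflictError(f"{order_id} changed concurrently")
    row["status"] = status
    row["version"] += 1
    return row["version"]

# First writer succeeds; a second writer holding the stale version conflicts.
new_version = update_order_status("o1", "paid", expected_version=1)
try:
    update_order_status("o1", "cancelled", expected_version=1)
except ConflictError:
    pass  # expected: the second writer must re-read and retry
```

In PostgreSQL, the optimistic variant typically becomes `UPDATE ... WHERE id = $1 AND version = $2`, while row locking would instead use `SELECT ... FOR UPDATE` to serialize writers. Naming these options and the workload constraints in the prompt is exactly what lets Copilot reason about which fits `updateOrderStatus`.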
Topic: Apply Prompt Engineering and Context Crafting
A developer uses GitHub Copilot Chat in an IDE to fix a checkout rounding bug, but the response is generic and misses the defect.
Exhibit:
Open files:
- checkout/price_calculator.py
- checkout/tests/test_totals.py
- docs/oncall-runbook.md
- marketing/spring_campaign.md
- infra/docker-compose.yml
Failing test:
- total 19.995 returns 19.99; expected 20.00
Prompt:
"Analyze the whole repo and fix the checkout problem.
Use our sprint notes, runbook, marketing copy, and any open files.
The bug is somewhere in totals. Also explain Python best practices."
Which prompt/context change would most likely improve Copilot’s response quality?
Options:
A. Add architecture docs and more repository files for broader context.
B. Keep the same context but ask for a longer explanation.
C. Include unrelated chat history about a recent logging refactor.
D. Focus on price_calculator.py, the failing test, and expected rounding output.
Best answer: D
Explanation: The best improvement is to narrow the context to the code and test that directly describe the bug, plus the expected result. Irrelevant files and broad requests can dilute the signal Copilot uses, making its response less specific and less useful.
When using Copilot Chat, better results usually come from high-signal context, not maximum context. In this scenario, the failing test and the calculator file are directly relevant because they show where the defect appears and what correct behavior should be. The runbook, marketing content, and a request for general Python advice add noise that can pull the response away from the checkout bug.
A stronger prompt should include:
- The price_calculator.py file that contains the suspect logic.
- The failing test and its observed result (a total of 19.995 returning 19.99).
- The expected rounding output (20.00) that defines correct behavior.

Extra repository files or unrelated history are only helpful if they materially affect the task. The key takeaway is to trim context to what the model actually needs to solve the current problem.
Use the GitHub Copilot GH-300 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the GitHub Copilot GH-300 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.