Prepare for GitHub Copilot (GH-300) with free sample questions, a full-length diagnostic, topic drills, timed practice, prompt and context choices, Copilot Chat, responsible AI, privacy safeguards, organization policy scenarios, and detailed explanations in IT Mastery.
The GitHub Copilot (GH-300) certification validates skill in using GitHub Copilot to improve software-development productivity, quality, and security across programming languages while respecting responsible AI, privacy, and organizational safeguards.
IT Mastery practice for GitHub Copilot (GH-300) is live now. Use this page to start the web simulator, review the exam snapshot, work through 24 public sample questions, and continue into full IT Mastery practice with the same IT Mastery account on web, iOS, iPadOS, macOS, or Android.
Start a practice session for GitHub Copilot (GH-300) below. For the best experience, open the full app in a new tab and navigate with swipes/gestures or the mouse wheel—just like on your phone or tablet.
A small set of questions is available for free preview. Subscribers can unlock full access by signing in with the same app-family account they use on web and mobile.
Prefer to practice on your phone or tablet? Download the IT Mastery – AWS, Azure, GCP & CompTIA exam prep app for iOS, or the IT Mastery app on Google Play (Android), and use the same IT Mastery account across web and mobile.
Free diagnostic: Try the GitHub Copilot GH-300 full-length practice exam before subscribing. Use it as a baseline, then return to IT Mastery for timed mocks, topic drills, explanations, and the full GH-300 question bank.
| Area | What to practice |
|---|---|
| Responsible AI | safe use, limitations, review discipline, attribution awareness, and human oversight |
| Copilot features | chat, completions, explanations, refactoring, tests, pull requests, and plan-specific capabilities |
| Data and architecture | how context is used, privacy boundaries, content exclusions, and organizational safeguards |
| Prompt and context crafting | clear intent, constraints, examples, repository context, and iterative refinement |
| Developer productivity | quality improvement, testing, documentation, debugging, and workflow acceleration |
Copilot questions usually reward the workflow where the developer frames the task, gives relevant context, reviews output critically, tests it, and keeps sensitive data out of prompts.
flowchart LR
Intent["State intent"] --> Context["Add relevant context"]
Context --> Prompt["Ask with constraints"]
Prompt --> Review["Review generated output"]
Review --> Test["Test and validate"]
Test --> Refine["Refine prompt or code"]
Refine --> Review
Review --> Secure["Check security, privacy, and licensing risk"]
Secure --> Commit["Commit only reviewed work"]
GH-300-style questions can include small exhibits. Use them to decide what context Copilot has, what policy applies, and what a responsible next step looks like. The best exhibit is usually a short prompt, chat exchange, policy table, audit line, or selected-code description rather than a large diagram.
| Exhibit type | What to look for |
|---|---|
| Prompt snippet | Missing constraints, unclear goal, sensitive data, or absent repository context |
| Copilot Chat exchange | Whether the developer should refine, verify, test, or reject the answer |
| Selected-code context | Whether Copilot has enough local context to make a useful suggestion |
| Organization policy excerpt | Content exclusions, allowed features, review requirements, and data handling |
| Audit log entry | Which user, repository, policy, or subscription event needs review |
| Pull request summary | Whether Copilot is helping review focus without replacing required reviewers |
Example prompt exhibit:
Refactor this payment helper to make it easier to test.
Keep the public function name unchanged.
Do not change currency rounding behavior.
Use pytest examples for the boundary cases.
Example chat exhibit:
Developer: Explain why this authentication middleware rejects valid tokens.
Copilot: The token may be expired or the signing key may be different.
Developer: The token is not expired. The failing test uses the staging JWKS URL.
Best next step: refine the prompt with the environment, failing test, and selected middleware code.
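A refined follow-up could read like this (the configuration detail to check is illustrative):
Using the selected middleware code, explain why token validation fails only in staging. The token is not expired, and the failing test points at the staging JWKS URL. List what to compare between the staging and production JWKS configuration.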
Example policy exhibit:
| Policy setting | Meaning for the question |
|---|---|
| Content exclusions enabled for /contracts/ | Do not rely on excluded files for suggestions |
| Public code suggestions restricted | Review generated snippets for licensing and source-risk concerns |
| Copilot Chat allowed in private repositories | Still avoid secrets, customer data, and confidential business data in prompts |
| Required pull request review remains enabled | Copilot summaries do not replace human approval |
Example audit exhibit:
2026-04-24T15:12:04Z copilot.policy.update org=acme-inc actor=repo-admin
setting=content_exclusions path=/contracts/** repositories=engineering-platform
Use this one-week countdown to structure your final practice:
| Day | Practice focus |
|---|---|
| 7 | Take the free full-length diagnostic and tag misses by prompt, feature, policy, privacy, or productivity. |
| 6 | Drill responsible AI, output review, testing discipline, limitations, and safe developer behavior. |
| 5 | Drill Copilot features, chat workflows, code completion, pull request support, and plan-specific capability choices. |
| 4 | Drill context crafting, selected code, prompt constraints, examples, and iterative refinement. |
| 3 | Drill privacy, content exclusions, organizational policies, audit signals, and safeguard settings. |
| 2 | Complete a timed mixed set and explain whether each miss was a context, feature, policy, or review issue. |
| 1 | Review weak exhibit types; avoid treating Copilot as an autopilot or approval system. |
If several unseen mixed attempts are above roughly 75% and you can explain the responsible-use, context, privacy, or feature-selection reason behind your answers, you are likely ready. Additional drilling should improve Copilot judgment, not make you memorize policy wording.
Use these child pages when you want focused IT Mastery practice before returning to mixed sets and timed mocks.
Need concept review first? Read the GitHub Copilot GH-300 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks, topic drills, and full IT Mastery practice.
These are original IT Mastery practice questions aligned to the live GitHub Copilot (GH-300) route and the main blueprint areas shown above. Use them to test prompt, context, privacy, safeguard, and Copilot feature judgment here, then continue in IT Mastery with mixed sets, topic drills, and timed mocks.
Topic: Configure Privacy, Content Exclusions, and Safeguards
A fintech organization uses GitHub Copilot in a monorepo. The payments/ folder contains regulated code and must be excluded from Copilot context. The security team also wants duplication detection and security warnings applied consistently, without relying on each developer to configure local editor settings. Which Copilot capability should the admin use?
Best answer: A
Explanation: For higher-risk workflows, the best fit is centralized Copilot governance. Organization-wide policy lets administrators set a managed baseline for how Copilot is used across developers and repositories: content exclusions keep sensitive paths like payments/ out of Copilot context, while safeguard settings such as duplication detection and security warnings are applied centrally, so protection does not depend on each developer’s local setup.
Relying on individual editor settings is weaker because users may configure them differently. Prompt files and instruction files help shape prompts and responses, but they do not enforce privacy boundaries or safeguard controls. Pull request review is useful after code is written, but it does not prevent sensitive content from being used during suggestion generation. For higher-risk workflows, govern first with policy, then review generated output as needed.
Topic: Use GitHub Copilot Responsibly
Your team uses GitHub Copilot in a private monorepo.
Requirements:
- Team coding standards must be applied consistently across Copilot-assisted work.
- Files under customer-data/ contain regulated data and must not be used as Copilot context.

Which workflow best meets these requirements?
- Use customer-data/ as context because the repository is private.
- Add instruction files for team standards, configure content exclusions for customer-data/, and keep human review and tests required.

Best answer: C
Explanation: The best workflow combines reusable guidance, privacy safeguards, and human validation. Instruction files help Copilot follow team standards, content exclusions keep regulated folders out of Copilot context, and normal review plus testing remain necessary because Copilot output is only a draft. Responsible Copilot use requires different controls for different needs. In this scenario, the team needs a persistent way to express coding conventions, a privacy safeguard for sensitive repository content, and a review process that does not treat AI output as authoritative. Instruction files are the best fit for reusable team standards because they provide consistent guidance across Copilot-assisted work. Content exclusions are the right control for preventing a path such as customer-data/ from being used as Copilot context. Those exclusions are different from security warnings or duplication detection, which may flag risky output but do not control what repository content Copilot can use. Even with good instructions and exclusions, developers must still validate Copilot-generated changes through tests and normal pull request review. Manual prompting can help for one-off tasks, but it is less consistent than repository-level guidance.
Topic: Understand GitHub Copilot Data and Architecture
Developers in a monorepo often ask Copilot Chat to “add tests for this service” or “refactor this handler.” Copilot responses are generic and sometimes use the wrong libraries because repository conventions such as pytest, factory fixtures, and an approved logging wrapper are missing from the prompt context. The team wants that guidance applied across future assistance in the repository without repeating it in every prompt. Which feature should they use?
Best answer: A
Explanation: Missing repository context often causes Copilot to fall back to generic patterns. Instruction files are the best fit because they provide persistent project guidance during prompt building, so future responses better reflect the repo’s standards without repeating the same details in each prompt. Copilot builds its response from the context it has available, such as the user prompt, selected code, open files, and any repository guidance. When important project conventions are missing, the model may still produce plausible output, but it is more likely to be generic or use the wrong framework, library, or coding pattern.
In this scenario, the team needs durable, repository-wide context that Copilot can reuse automatically. Instruction files are designed for that purpose: they capture conventions and preferences so Copilot can include them when forming responses. A prompt file can help start a common task, but it is not the same as always-on repository guidance. Agent Mode and inline suggestions can act on available context, but they do not solve the root problem of missing persistent context.
The key idea is to supply stable project context, not just switch interaction surfaces.
Topic: Apply Prompt Engineering and Context Crafting
You are using Copilot Chat in your IDE. The current chat already includes discussion about API docs, CI failures, and a Dockerfile issue. You now want help optimizing one function in pricing.py, but Copilot keeps suggesting unrelated repository changes.
Current prompt:
Review our checkout system, use all available context, and improve performance.
What is the best next step?
Best answer: C
Explanation: The issue is not missing context; it is noisy context. Starting a new chat and grounding the request in the selected function with a narrow goal improves relevance and reduces the chance that unrelated chat history will steer the response. To improve Copilot prompt performance, reduce unnecessary context and make the task specific. In this scenario, the existing chat history contains unrelated topics, so keeping that history active makes it more likely that Copilot will produce broad or off-target suggestions.
A better prompt process is:
- Start a new chat so the unrelated history about API docs, CI failures, and the Dockerfile does not steer the answer.
- Select the target function in pricing.py.
- Ask a narrow, specific question about optimizing that function, with any constraints that matter.

This gives Copilot the minimum useful context for the current task. Expanding to service-wide or repo-wide analysis is broader than needed and usually reduces focus.
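A refined prompt could look like this (the function name is hypothetical):
Optimize the selected calculate_discounted_total function in pricing.py for speed. Keep the signature and observable behavior unchanged, and explain the tradeoff behind each suggested change.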
Topic: Improve Developer Productivity with GitHub Copilot
A developer is onboarding to an unfamiliar private repository in an IDE with GitHub Copilot Chat enabled. They highlight a processRefund method and want a quick explanation of only that method’s logic before making changes. They want to reduce context switching and avoid unnecessary detail. What is the best next step?
Best answer: D
Explanation: The best action is to use Copilot Chat with the code selected and ask for a short, specific explanation. Scoping the prompt to the selected method and naming the focus areas reduces noise, speeds understanding, and still requires the developer to verify the explanation against the code. When the goal is learning acceleration, the best Copilot prompt is narrow, explicit, and tied to the current task. Here, the developer already knows which method they need to understand, so Copilot Chat should be asked to explain the selected code only, with a concise output format and specific focus areas such as inputs, side effects, and failure paths. That keeps the response relevant and reduces context switching.
A broader architecture request may help later, but it is not the best next step when the immediate need is understanding one selected method.
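A scoped prompt could look like this:
Explain only the selected processRefund method: its inputs, side effects, and failure paths. Keep the answer brief and do not suggest changes yet.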
Topic: Use GitHub Copilot Features
A developer uses GitHub Copilot CLI to draft a Bash cleanup script for a Linux build server. The draft is:
find /builds -type f -name '*.tmp' -mtime +7 -delete
rm -rf /builds/cache/*
They want to improve productivity without risking accidental data loss. What is the best next step?
Best answer: B
Explanation: Copilot CLI is useful for drafting shell commands, but generated scripts still require human review. When commands can delete files or change a system, the safest productive step is to inspect them, preview the affected targets, and validate behavior in a safe location before execution. The core concept is responsible use of Copilot CLI for shell automation. In this scenario, the draft includes -delete and rm -rf, so the correct workflow is to review the generated commands, change the script so it shows what would be affected first, and test it against a nonproduction location before using the real server path.
Copilot CLI helps you create the draft quickly, but it does not replace your responsibility to verify file and system effects before running potentially destructive commands.
Topic: Configure Privacy, Content Exclusions, and Safeguards
A company uses GitHub Copilot in a monorepo. The billing/ and customer-export/ paths contain proprietary payment logic and regulated customer code, but developers still want Copilot for the rest of the repository. What is the best next step to reduce the chance that these paths are used as Copilot context?
Best answer: A
Explanation: Use content exclusions when specific repositories or paths contain confidential, regulated, or security-sensitive code. The safeguard is policy-based and targeted: it reduces the likelihood that Copilot uses excluded files as context while preserving productivity in the rest of the monorepo. By contrast, instruction files shape behavior but do not enforce privacy boundaries, and user reminders depend on perfect human compliance. Pull request review is still important for validating AI-generated output, but it happens after suggestions are produced and does not prevent sensitive files from being used as context. For this scenario, the strongest next step is to apply exclusions to the sensitive paths.
Topic: Use GitHub Copilot Responsibly
A developer asks Copilot Chat:
Does the payment provider's refund API support retries without an idempotency key?
Does our company policy allow masked card tokens in debug logs?
Copilot replies "yes" to both questions. The change is planned for production. What is the best next step?
Best answer: D
Explanation: When Copilot answers questions about API behavior or internal policy, treat the response as a draft. The responsible next step is to verify those claims in authoritative sources before implementing or approving the change. Copilot can help surface likely answers, but it is not the authority for API contracts or organization policy. In this scenario, the answer affects production payment behavior and logging of sensitive data, so the claims need source verification. The correct practice is to check the payment provider’s official API documentation for retry and idempotency requirements, and to check the organization’s approved security or compliance policy for logging rules.
Tests, code review, and follow-up prompts are still useful, but they validate implementation quality more than they validate whether an external API guarantee or an internal policy statement is actually correct. The key takeaway is that when Copilot answers questions about APIs, licensing-sensitive code, security controls, or policy, you should verify the source rather than rely on the model’s confidence.
Topic: Understand GitHub Copilot Data and Architecture
A developer is writing calculateRefund() in refunds.ts. GitHub Copilot keeps showing inline suggestions that use inventory terms from unrelated open tabs. Based on how a Copilot code suggestion is produced, what is the best next action?
Best answer: B
Explanation: The best action is to adjust the context Copilot is using before asking for another suggestion. Inline suggestions start with context collection from the current editing session, so irrelevant open files can push the model toward the wrong code pattern. A Copilot code suggestion is not created from the whole repository by default or automatically validated before you see it. The lifecycle starts when Copilot collects available context such as the active file, cursor location, nearby code and comments, selected text, and other relevant editor or repository context. That context is assembled into a prompt, processed through Copilot services, sent to the model, and the returned candidate is post-processed before it is displayed in the IDE. If unrelated tabs are influencing the result, the best first step is to narrow the context and regenerate. Using a different surface such as CLI does not fix an inline-context problem, and displayed output still requires human review.
Topic: Apply Prompt Engineering and Context Crafting
A developer is using Copilot Chat in an IDE to choose how to handle concurrent updates in a Node.js order service backed by PostgreSQL. The current prompt is too generic, so Copilot returns textbook pros and cons instead of reasoning tied to the codebase.
Open file: src/orders/OrderService.ts
Selected method: updateOrderStatus(orderId, status)
Current prompt:
"Which implementation approach should I use for concurrent updates?"
Goal:
Get a recommendation that explains tradeoffs for this code path.
Which prompt/context change would best improve the result?
- Ask: "Which approach is best here? Be detailed."
- Select updateOrderStatus and ask Copilot to compare optimistic concurrency and row locking for this method, using workload constraints, then recommend one and list tests.

Best answer: B
Explanation: The best improvement is to make the prompt specific, contextual, and constrained. Naming the code path, the two approaches, and the workload priorities gives Copilot enough context to produce a grounded tradeoff analysis instead of a generic answer. When you want Copilot to explain reasoning between implementation approaches, the strongest prompt is explicit about three things: the code in scope, the options being compared, and the constraints that matter. Selecting updateOrderStatus gives Copilot the exact code path. Naming optimistic concurrency and row locking prevents a vague answer. Adding requirements such as expected write conflicts, correctness priority, and minimal schema change lets Copilot evaluate tradeoffs against real project needs.
Asking for a recommendation plus tests also improves the result because Copilot should provide a draft decision that you can review and validate. A broader prompt usually produces generic pros and cons, and removing context makes that worse. Using Agent Mode is unnecessary here because the task is analysis first, not autonomous code changes.
The key takeaway is that Copilot gives better tradeoff reasoning when you provide the context and constraints it cannot safely infer.
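To make the compared options concrete, here is a minimal optimistic-concurrency sketch for updateOrderStatus. The Db interface, table, and column names are assumptions for illustration, not details from the exam scenario.

```typescript
// Hypothetical sketch: optimistic concurrency via a version column.
interface Db {
  query(sql: string, params: unknown[]): Promise<{ rowCount: number }>;
}

async function updateOrderStatus(
  db: Db,
  orderId: string,
  status: string,
  expectedVersion: number,
): Promise<void> {
  // The WHERE clause only matches if no concurrent writer bumped the
  // version, so a lost update becomes rowCount === 0 instead of a
  // silent overwrite.
  const result = await db.query(
    `UPDATE orders
        SET status = $1, version = version + 1
      WHERE id = $2 AND version = $3`,
    [status, orderId, expectedVersion],
  );
  if (result.rowCount === 0) {
    // Caller decides whether to reload and retry or surface a conflict.
    throw new Error(`Order ${orderId} was modified concurrently`);
  }
}
```

Row locking would instead wrap a SELECT ... FOR UPDATE and the UPDATE in one transaction; the strong prompt asks Copilot to weigh exactly this tradeoff for the selected method.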
Topic: Improve Developer Productivity with GitHub Copilot
A developer is updating a service that logs client IP addresses. To reduce context switching, they ask Copilot Chat whether keeping raw IPs for 30 days meets the company’s privacy and security requirements. The company already has legal-approved data-retention standards and security policies. Which workflow is best?
Best answer: C
Explanation: Copilot can speed up code discovery and understanding, but its explanations are not authoritative for policy, legal, or security decisions. The best workflow uses Copilot to reduce context switching, then confirms the requirement in the organization’s approved guidance. The key concept is separating productivity help from authoritative guidance. In this scenario, Copilot Chat is useful for locating where IP addresses are logged, identifying affected files, and summarizing likely implementation changes. But whether retaining raw IP addresses is allowed is a policy and legal/security question, so the decision must come from the company’s approved retention standard or designated policy owners.
A sound workflow is:
- Use Copilot Chat to locate where IP addresses are logged and to summarize the affected code paths.
- Confirm whether 30-day raw IP retention is allowed by checking the legal-approved retention standard or asking the designated policy owners.
- Implement and review the change against that confirmed requirement.

Using Copilot for acceleration is appropriate; using its explanation as compliance approval is not.
Topic: Use GitHub Copilot Features
A new developer wants to use GitHub Copilot inline suggestions in Visual Studio Code, and Visual Studio Code is supported by their organization. Which setup satisfies the prerequisites to enable that feature?
Best answer: B
Explanation: To use inline suggestions in a supported IDE such as VS Code, the developer needs Copilot access on their GitHub account, the GitHub Copilot extension installed, and GitHub sign-in from the editor. Those are the core prerequisites for enabling Copilot in the IDE. For Copilot enablement in an IDE, check four basics: the editor must be supported, the user must have a GitHub account with Copilot entitlement, the GitHub Copilot extension must be installed for that editor, and the user must be signed in to GitHub from the editor. In this scenario, VS Code support is already established, so the deciding prerequisites are the license, extension, and sign-in. After those are in place, inline suggestions can be enabled and used. Prompt files, instruction files, and Copilot CLI can help with other workflows, but they do not replace the required IDE setup. The closest distractor is the setup with extension and sign-in but no license, because entitlement is still required.
Topic: Configure Privacy, Content Exclusions, and Safeguards
Context: In VS Code, a developer is editing auth/reset.py. The selected file already contains ResetTokenStore.save(user_id, token_hash, expires_at) and send_reset_email(user, url).
Chat prompt:
Implement password reset tokens.
Copilot suggests using random.randint() to create the token and logs the raw token value. The editor raises a security warning. The developer wants to retry with a better prompt, keep using Copilot Chat, and avoid sharing secrets. Which prompt/context change best improves the result?
- Keep the selected file context, name the existing ResetTokenStore.save and send_reset_email helpers, and require secrets, hashed storage, expiry, no token logging, and tests.

Best answer: D
Explanation: The best mitigation is to refine the prompt with relevant file context and explicit security requirements. Naming the existing helpers and safe constraints steers Copilot toward a safer draft without exposing sensitive data. When Copilot raises a security warning, the best prompt-level mitigation is to narrow the task and state the safe implementation requirements. In this case, the improved prompt uses the relevant file context and tells Copilot to use secrets, hash tokens before storage, set an expiration, avoid logging raw tokens, and add tests. That directly addresses the risky pattern in the original suggestion while keeping the request specific and verifiable.
This approach is better than asking for generic examples, widening context unnecessarily, or sharing credentials. After Copilot generates a new draft, the developer should still review the code, run tests, and confirm the design meets the team’s security standards.
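The exhibit file is Python, where the secrets module supplies the randomness; as an illustration of the pattern the refined prompt asks for, here is a hedged TypeScript/Node sketch. The ResetTokenStore interface mirrors the exhibit's helper, but its exact shape is an assumption.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Assumed interface modeled on the exhibit's ResetTokenStore.save helper.
interface ResetTokenStore {
  save(userId: string, tokenHash: string, expiresAt: Date): Promise<void>;
}

async function issueResetToken(
  store: ResetTokenStore,
  userId: string,
): Promise<string> {
  const rawToken = randomBytes(32).toString("hex"); // CSPRNG, not random.randint()
  const tokenHash = createHash("sha256").update(rawToken).digest("hex");
  const expiresAt = new Date(Date.now() + 15 * 60 * 1000); // assumed 15-minute expiry
  await store.save(userId, tokenHash, expiresAt); // store only the hash
  return rawToken; // emailed once to the user; never logged
}
```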
Topic: Use GitHub Copilot Responsibly
A developer uses GitHub Copilot in the IDE to generate a helper function for a private service. The suggestion looks almost identical to code the developer remembers from a public repository, and the organization’s policy requires review before any code with possible public-source or license conflicts is merged. What is the most responsible next step?
Best answer: D
Explanation: When Copilot output appears similar to public code, the key issue is compliance, not just correctness. The responsible response is to avoid merging it as-is, review policy and possible license obligations, and replace it if the risk cannot be cleared. The core concept is responsible human review of Copilot output when origin or licensing is uncertain. Copilot suggestions are useful drafts, but they are not automatic proof that code is safe to adopt under your organization’s policy. In this scenario, the developer should pause before merging, evaluate whether the similarity could create license or policy conflicts, and use a rewritten or alternative implementation if uncertainty remains.
Tests, builds, and peer review still matter, but they validate behavior and quality, not whether the code may resemble public source in a way that conflicts with policy. Cosmetic edits such as renaming variables also do not resolve licensing uncertainty. The right decision is policy and license review first, then validated replacement if needed.
Topic: Understand GitHub Copilot Data and Architecture
A platform team is documenting GitHub Copilot’s response flow for developers. They note that the underlying model can produce a candidate completion, but the developer may see a shortened version or no suggestion at all. The team lead asks what Copilot post-processing is for at that point. Which explanation is best?
Best answer: D
Explanation: Post-processing happens after the model returns a candidate response. In GitHub Copilot, that stage is used to shape what is actually shown to the user by applying final filtering, formatting, ranking, or suppression rather than treating the raw model output as the final result. The core idea is that a model’s candidate output is not automatically the final Copilot response. After generation, Copilot can apply product-level checks and transformations so the displayed suggestion fits the chat or IDE surface and follows safeguards. That can include filtering out unsuitable output, reformatting it, or deciding not to show it at all.
This step improves usability and safety, but it does not certify correctness; developers still need to review, test, and validate the output.
Topic: Apply Prompt Engineering and Context Crafting
A developer is using Copilot Chat in VS Code with priceCalculator.ts selected. The selected file contains calculateDiscountedTotal(items, customerType, couponCode?). The repo uses Jest, test names begin with should, and coupons must not stack with VIP discounts.
Current prompt:
Write tests for this function.
Copilot returned only a few happy-path tests and missed the repo conventions. Which revised prompt would best improve the result?
- Using the selected priceCalculator.ts, write Jest tests for calculateDiscountedTotal. Follow the repo’s should ... naming, avoid external mocks, cover empty cart, invalid coupon, VIP with coupon, and null couponCode. If behavior is unclear, state assumptions first, then return only the test file.

Best answer: A
Explanation: The strongest prompt is the one that reduces ambiguity. It tells Copilot exactly which function to test, which framework and naming convention to use, which edge cases matter, how to handle uncertainty, and what output to return. Good Copilot prompts are specific, contextual, constrained, and easy to verify. Here, the original request was too vague, so Copilot had to guess the test framework, repository style, and important business-rule coverage. The best revision uses the selected-file context, names the exact function, requires Jest, preserves the repo naming pattern, lists concrete edge cases, and asks Copilot to state assumptions if behavior is unclear.
That makes the result testable in a simple way:
- Are the tests written in Jest?
- Does every test follow the repo’s should ... naming?
- Are empty cart, invalid coupon, VIP with coupon, and null couponCode all covered?

Broader prompts leave too many decisions to the model, which increases the chance of incomplete or mismatched output. The key takeaway is to ask for a specific, reviewable draft instead of a generic test request.
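A first draft in the spirit of that prompt might look like the sketch below. The item shape and the expected totals are assumptions, so real tests must be checked against the function's actual behavior.

```typescript
import { calculateDiscountedTotal } from "./priceCalculator";

// Hypothetical item shape; the real type lives in priceCalculator.ts.
const item = (price: number, qty = 1) => ({ price, qty });

describe("calculateDiscountedTotal", () => {
  it("should return 0 for an empty cart", () => {
    expect(calculateDiscountedTotal([], "standard", undefined)).toBe(0);
  });

  it("should ignore an invalid coupon code", () => {
    // Assumption: an unknown coupon leaves the total unchanged.
    expect(calculateDiscountedTotal([item(100)], "standard", "BOGUS")).toBe(100);
  });

  it("should not stack a coupon with the VIP discount", () => {
    // The repo rule: coupons must not stack with VIP discounts.
    expect(calculateDiscountedTotal([item(100)], "vip", "SAVE10")).toBe(
      calculateDiscountedTotal([item(100)], "vip", undefined),
    );
  });

  it("should treat a null couponCode like no coupon", () => {
    // Assumption: the signature allows couponCode to be null.
    expect(calculateDiscountedTotal([item(100)], "standard", null)).toBe(
      calculateDiscountedTotal([item(100)], "standard", undefined),
    );
  });
});
```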
Topic: Improve Developer Productivity with GitHub Copilot
A developer uses Copilot Chat while tuning a high-traffic API. Copilot suggests replacing a readable LINQ-based filter with a manual loop and buffer reuse, saying it should reduce allocations and improve throughput. The repository already has passing unit tests for expected behavior and edge cases, but no one has measured performance on representative payload sizes. Before the team accepts the change, what is the best next Copilot request?
Best answer: D
Explanation: When Copilot proposes a performance improvement, the best follow-up is benchmark evidence. Explanation helps understanding and extra tests help correctness, but neither confirms that the new code is actually faster on realistic workloads. Choose the follow-up based on the kind of claim being made. Here, Copilot is proposing a more complex implementation specifically to improve throughput and reduce allocations, while the stem already says functional and edge-case behavior is covered by existing tests. That means the highest-value next request is a benchmark harness or benchmark plan that compares the current and proposed implementations with representative payload sizes, then validates the results by running it.
A reviewer summary may help communication, but it does not validate the optimization claim.
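The scenario's code is C# LINQ, but the benchmarking idea is language-agnostic. Here is a hedged TypeScript sketch of the kind of harness to request, with stand-in implementations and an invented payload; a real harness should also vary payload sizes and repeat runs to reduce noise.

```typescript
import { performance } from "node:perf_hooks";

type Order = { amount: number };

// Stand-ins for the readable filter and the hand-optimized loop.
const declarativeFilter = (xs: Order[]) => xs.filter((o) => o.amount > 100);

function manualFilter(xs: Order[]): Order[] {
  const out: Order[] = [];
  for (let i = 0; i < xs.length; i++) {
    if (xs[i].amount > 100) out.push(xs[i]);
  }
  return out;
}

function bench(label: string, fn: (xs: Order[]) => Order[], payload: Order[]) {
  fn(payload); // warm-up pass before timing
  const start = performance.now();
  for (let i = 0; i < 200; i++) fn(payload);
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
}

// Representative payload size matters more than the iteration count.
const payload: Order[] = Array.from({ length: 100_000 }, (_, i) => ({
  amount: i % 250,
}));
bench("declarative", declarativeFilter, payload);
bench("manual loop", manualFilter, payload);
```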
Topic: Use GitHub Copilot Features
A developer asks GitHub Copilot CLI how to restart a workload in a managed Kubernetes cluster. Copilot returns several kubectl commands. The repository includes approved operational scripts, and the cloud provider documents version-specific cluster restrictions. What is the best next step?
Best answer: A
Explanation: GitHub Copilot CLI can draft useful shell guidance, but operational commands still require validation. Because the repo defines approved tooling and the managed platform has documented restrictions, the best workflow is to compare the suggested commands with those authoritative sources before running them. The core concept is that Copilot CLI helps generate or explain commands, but it does not replace project tooling guidance or platform documentation. In this scenario, the repository already provides approved scripts, which may wrap commands, enforce naming conventions, or account for team-specific safeguards. The cloud provider’s documentation is also authoritative for version-specific Kubernetes behavior and restrictions.
Before executing the suggested kubectl sequence, the developer should confirm that it matches the project’s operational process and the platform’s supported behavior. That review helps catch unsupported flags, incorrect command order, or a bypass of required team tooling. A syntax check or another AI explanation may improve readability, but neither proves the commands are the right ones for this project and platform.
The key takeaway is to treat Copilot CLI output as draft guidance and validate it against authoritative documentation when tooling or platform details matter.
Topic: Configure Privacy, Content Exclusions, and Safeguards
Your organization excludes apps/claims/** from GitHub Copilot context because the files can contain regulated customer data. You need help refactoring null handling in apps/claims/formatter.ts, but Copilot Chat says nearby files are unavailable as context and its suggestions are too generic. The bug fix is due today. What is the best next step?
Best answer: C
Explanation: Keep the content exclusion in place and work with safe context instead. A sanitized minimal example lets Copilot help with the coding pattern without exposing regulated data or weakening the organization’s privacy safeguard. Content exclusions are meant to prevent sensitive repository content from being used as Copilot context. If suggestions become weaker because excluded files are unavailable, the right response is to reduce and sanitize the problem, not relax the exclusion for convenience.
This approach preserves privacy controls while still improving productivity. Temporarily removing the exclusion or pasting regulated content may improve one response, but it defeats the safeguard the policy was created to enforce.
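A sanitized minimal example could look like the sketch below: the field names and logic are invented so nothing from apps/claims/ leaves the excluded path, while the null-handling question is preserved. Pasting a shape like this into Copilot Chat gets pattern-level help without weakening the exclusion.

```typescript
// Invented stand-in for the real formatter; no regulated data included.
type ClaimLike = { amount?: number | null; note?: string | null };

export function formatClaim(input: ClaimLike): string {
  // The null-handling question under discussion: `??` keeps a
  // legitimate 0, whereas `|| 0` would also replace it.
  const amount = input.amount ?? 0;
  const note = input.note?.trim() ?? "n/a";
  return `${amount.toFixed(2)} (${note})`;
}
```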
Topic: Use GitHub Copilot Responsibly
A developer is using Copilot Chat to draft a password reset endpoint.
Context:
Open file: api/resetPassword.ts
Current prompt: "Create a password reset endpoint for this app."
Copilot draft:
- hardcodes `JWT_SECRET = "dev-secret"`
- accepts any email string
- returns "user not found" when the account does not exist
Which prompt/context change would best improve the next result?
Best answer: A
Explanation: The best improvement is to give Copilot the approved repository context and explicit security requirements. For authentication-related code, a specific prompt that forbids hardcoded secrets, requires input validation, and requests tests reduces common harmful output patterns. When generated code affects authentication or secret handling, the safest prompt is specific, contextual, and constrained. Opening the repository’s validation and secret-management files helps Copilot reuse existing approved patterns instead of inventing insecure ones. Explicitly requiring input validation, no hardcoded secrets, and generic reset responses also mitigates common risks such as weak validation, secret exposure, and user enumeration.
A stronger follow-up prompt should:
- point Copilot at the repository’s approved validation and secret-management files,
- require input validation and forbid hardcoded secrets,
- require a generic response when the account does not exist, to avoid user enumeration, and
- ask for tests that cover the failure paths.

A vague request for “production-ready” code still leaves the important security requirements unstated.
Topic: Understand GitHub Copilot Data and Architecture
A developer needs GitHub Copilot to update a single configuration-parser file to support a new apiKey field. The repository is enabled for Copilot, but organization policy says production credentials must not be entered into prompts unless explicitly approved and required. The real secret values are not needed to make the code change. Which Copilot feature should the developer use?
- Copilot Chat with the full .env file pasted into the prompt
- Copilot CLI with the real apiKey included in the command prompt
- Edit Mode with the selected parser code and redacted placeholder values

Best answer: C
Explanation: Use Edit Mode with the selected code and redacted placeholders. The change only requires local code context, so pasting real credentials would add sensitive data to the prompt unnecessarily and against policy. Any text a user adds to a Copilot prompt becomes part of the input context, so sensitive data should be included only when it is explicitly permitted and truly necessary. Here, the developer only needs help changing one parser file, not analyzing real credentials.
A narrow feature choice is best:
- Use Edit Mode scoped to the selected parser code.
- Replace any real secret values with placeholders such as REDACTED_API_KEY.

Chat, CLI, and especially Agent Mode would either encourage unnecessary secret sharing or broaden the context beyond what this task requires. The key takeaway is to minimize prompt data and avoid exposing real secrets when placeholders are enough.
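As an illustration of the placeholder idea, here is a hypothetical parser change: the config shape, field names, and the REDACTED_API_KEY value are invented for the sketch, and no real secret ever enters the prompt.

```typescript
// Hypothetical config shape; only the structure matters for the edit.
interface ServiceConfig {
  endpoint: string;
  apiKey?: string; // the new field being added
}

export function parseConfig(raw: string): ServiceConfig {
  const data = JSON.parse(raw) as Partial<ServiceConfig>;
  if (typeof data.endpoint !== "string") {
    throw new Error("config: endpoint is required");
  }
  return { endpoint: data.endpoint, apiKey: data.apiKey };
}

// Prompt-safe sample input: the structure is real, the secret is not.
const sample = `{"endpoint": "https://example.test", "apiKey": "REDACTED_API_KEY"}`;
console.log(parseConfig(sample));
```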
Topic: Apply Prompt Engineering and Context Crafting
A developer is using Copilot Chat in VS Code with the fetchInvoice(id) function selected in paymentClient.ts. The project already uses axios, logger, and Jest. She wants Copilot to help add retries only for transient 429 and 503 responses, keep the current function signature, and suggest tests. Which prompt is most likely to produce a useful first draft?
- For the selected fetchInvoice(id) TypeScript function, use the existing axios client to retry only HTTP 429 and 503 up to 3 times with exponential backoff, preserve the function signature and current logger calls, and suggest Jest unit tests.

Best answer: B
Explanation: The best prompt is the one that is specific about the programming goal, the code scope, and the constraints. By naming the selected function, expected retry behavior, existing project tools, and test expectations, it gives Copilot enough context to generate a focused draft instead of a generic response. Useful Copilot prompts usually answer three questions: what should change, where should it change, and what must stay the same. In this scenario, the strongest prompt identifies the exact target (fetchInvoice(id)), the desired behavior (retry only 429 and 503), the limit and approach (3 retries with exponential backoff), the project conventions (axios, existing logger calls), and the expected follow-up output (Jest tests). That combination reduces ambiguity and guides Copilot toward code that fits the real task.
Broader prompts can still return suggestions, but they are more likely to be incomplete, generic, or aimed at explanation instead of implementation. Even with a strong prompt, the generated result should still be reviewed and tested before use.
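A first draft in the spirit of the strong prompt might look like the sketch below. The base URL and Invoice shape are assumptions, and the existing logger calls the prompt says to preserve are omitted here.

```typescript
import axios, { AxiosInstance } from "axios";

interface Invoice {
  id: string;
  total: number; // assumed shape for the sketch
}

const client: AxiosInstance = axios.create({ baseURL: "https://example.test" });

export async function fetchInvoice(id: string): Promise<Invoice> {
  const maxRetries = 3;
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await client.get<Invoice>(`/invoices/${id}`);
      return res.data;
    } catch (err) {
      const status = axios.isAxiosError(err) ? err.response?.status : undefined;
      const transient = status === 429 || status === 503; // only these retry
      if (!transient || attempt >= maxRetries) throw err;
      // Exponential backoff: 200 ms, 400 ms, 800 ms.
      await new Promise((resolve) => setTimeout(resolve, 200 * 2 ** attempt));
    }
  }
}
```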
Topic: Improve Developer Productivity with GitHub Copilot
A developer inherits a 10-year-old order-pricing module with little documentation. They must update one method to support a new discount rule, but they cannot change existing tax and shipping behavior. The method is open in the IDE, and GitHub Copilot Chat is available. Which workflow is the best first step?
Best answer: C
Explanation: For legacy modernization, the best Copilot workflow starts with comprehension. Asking Copilot Chat to explain the selected code helps surface current behavior, assumptions, and side effects before requesting any change. When working with legacy code, Copilot is most useful first as a code-understanding assistant. Using Copilot Chat on the selected method to explain its current logic, inputs, outputs, dependencies, and side effects gives the developer the context needed to write a precise follow-up prompt for the discount change. That reduces the risk of breaking hidden business rules in tax or shipping behavior. After the explanation, the developer can request a targeted update and then validate the result with tests and review. Jumping straight to generation or broad rewrites skips the key step of understanding what the old code already does.
Topic: Use GitHub Copilot Features
An enterprise administrator must send a weekly report of who currently has GitHub Copilot access and automatically reclaim seats from contractors whose engagement has ended. The team wants a repeatable solution that plugs into existing admin automation instead of a manual review. Which approach is best?
Best answer: B
Explanation: When Copilot subscription work must be repeated or automated, the GitHub REST API is the best fit. It supports scripted reporting and administrative actions, which matches a weekly seat report and offboarding workflow better than logs, prompts, or manual UI steps. The core concept is matching the work to the right Copilot management surface. For recurring seat reports and automated access changes, the GitHub REST API is the correct choice because it can be called from scheduled scripts or existing admin automation to read current Copilot assignments and manage subscription access consistently. That makes it suitable for weekly reporting, contractor offboarding, and other repeatable administrative tasks.
Audit logs are useful for investigating events and compliance evidence, but they are not the best primary mechanism for ongoing seat inventory and subscription changes. Instruction files influence Copilot behavior in chats and edits; they do not administer licenses. Manual UI review may work for one-off tasks, but it does not meet the requirement for repeatable, low-effort administration at scale.
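As a sketch of the scripted approach: GitHub's REST API documents an endpoint that lists Copilot seat assignments for an organization. The TypeScript below assumes that endpoint, Node 18+ global fetch, a token with Copilot billing permissions, and an invented organization name, and it omits pagination.

```typescript
const ORG = "acme-inc"; // hypothetical organization slug

async function listCopilotSeats(token: string): Promise<void> {
  // Documented GitHub endpoint for Copilot seat assignments (paginated).
  const res = await fetch(
    `https://api.github.com/orgs/${ORG}/copilot/billing/seats`,
    {
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${token}`,
      },
    },
  );
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const body = (await res.json()) as {
    total_seats: number;
    seats: { assignee: { login: string }; last_activity_at: string | null }[];
  };
  console.log(`Total seats: ${body.total_seats}`);
  for (const seat of body.seats) {
    console.log(`${seat.assignee.login}\t${seat.last_activity_at ?? "no activity"}`);
  }
}
```

Reclaiming contractor seats would pair a report like this with the documented seat-management endpoints, still gated by the team's offboarding checks.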
| Need | Best practice |
|---|---|
| Better answer quality | Provide goal, context, constraints, and examples |
| Safer output | Review, test, and scan like human-written code |
| Protect privacy | Use minimal sanitized examples |
| Generate tests | Name framework, boundaries, and expected behavior |
| Debug a problem | Provide error, relevant code, environment, and expected result |
| Learn legacy code | Ask for explanation, then verify against behavior |
| Avoid policy drift | Follow organization Copilot settings and content exclusions |
| Improve a weak answer | Iterate with corrections and constraints |