
Free GitHub GH-500 Full-Length Practice Exam: 50 Questions

Try 50 free GitHub Advanced Security GH-500 questions across the exam domains, with explanations, then continue with full IT Mastery practice.

This free full-length GitHub Advanced Security GH-500 practice exam includes 50 original IT Mastery questions across the exam domains.

These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.

Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.

Need concept review first? Read the GitHub Advanced Security GH-500 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.

Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try GitHub Advanced Security GH-500 on Web
View the full GitHub Advanced Security GH-500 practice page

Exam snapshot

  • Exam route: GitHub Advanced Security GH-500
  • Practice-set question count: 50
  • Time limit: 100 minutes
  • Practice style: mixed-domain diagnostic run with answer explanations

Full-length exam mix

  • Describe the GHAS Security Features and Functionality: 15%
  • Configure and Use Secret Scanning: 15%
  • Configure and Use Dependabot and Dependency Review: 35%
  • Configure and Use Code Scanning with CodeQL: 25%
  • Describe GHAS Best Practices, Results, and Corrective Measures: 10%

Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.

Practice questions

Questions 1-25

Question 1

Topic: Describe the GHAS Security Features and Functionality

A pull request to a private repository is blocked by push protection after GitHub detects a cloud provider token in a committed .env file. During triage, the team asks whether the same GHAS feature should also be used to find SQL injection flaws and vulnerable third-party libraries. What is the best response?

Options:

  • A. Treat this as secret exposure; use code scanning for insecure code patterns and dependency management for vulnerable packages.

  • B. Treat this as a code scanning issue; use secret scanning for insecure code patterns and dependency management for vulnerable packages.

  • C. Treat all three risks with code scanning, because it covers secrets, dependencies, and custom code.

  • D. Treat this as a dependency management issue; use code scanning for vulnerable packages and secret scanning for insecure code patterns.

Best answer: A

Explanation: The blocked pull request is a secret exposure event, so secret scanning and push protection are the relevant GHAS capabilities. Code scanning is for weaknesses in the repository’s code, while dependency management covers known vulnerabilities in external packages and libraries.

This scenario is about correctly mapping GHAS features to the risk they detect. A token found in a committed .env file is a leaked secret, so secret scanning is the right feature, and push protection is the prevention control that blocked it before merge. SQL injection is a code flaw in application logic, which belongs to code scanning. Vulnerable third-party libraries are tracked through dependency management features such as the dependency graph and Dependabot alerts.

A good way to separate them is:

  • Secret scanning: exposed credentials and tokens
  • Code scanning: insecure code patterns and data-flow issues
  • Dependency management: vulnerable packages and versions

The closest distractors swap these responsibilities, but GHAS keeps them as distinct detection areas.

  • Code flaw confusion: secret scanning does not detect SQL injection or other source-code vulnerability patterns.
  • Dependency mix-up: dependency management tracks vulnerable packages, not leaked credentials in files.
  • One tool for everything: code scanning does not replace secret scanning or Dependabot-style dependency alerts.

Question 2

Topic: Configure and Use Secret Scanning

A company uses GitHub Advanced Security on a private repository. During pull request review, the AppSec team investigates any new secret scanning alerts before approving. The repo contains partner-issued cloud API keys and a company-only deployment token format. The team wants built-in coverage where possible and only wants to define patterns when necessary. Which statement is correct?

Options:

  • A. Provider-supported patterns are the right choice for proprietary internal token formats.

  • B. Custom patterns are required for all secrets in private repositories.

  • C. Custom patterns include the same provider validation and partner-response support as provider-supported patterns.

  • D. Provider-supported patterns fit known partner-issued secrets; custom patterns fit proprietary internal formats.

Best answer: D

Explanation: Provider-supported secret patterns cover known formats from integrated secret providers, so they fit common cloud API keys. A company-only token format is not provider-known, so it must be detected with a custom pattern.

The key distinction is who defines and maintains the secret format. Provider-supported patterns are built into GitHub secret scanning for known partner or provider credential formats, such as common cloud or SaaS tokens. In a pull request workflow, they can surface alerts for those known secrets without the organization writing its own pattern.

Custom patterns are for secrets unique to your organization, such as an internal deployment token format. They extend secret scanning coverage to proprietary formats that GitHub does not already recognize. Using built-in provider coverage where available reduces maintenance, and adding a custom pattern fills the gap for internal-only tokens.

A custom pattern can detect an internal token, but it does not turn that token into a provider-supported secret.

  • The option claiming all private-repository secrets need custom patterns ignores that GHAS includes built-in provider-supported detections.
  • The option assigning provider-supported patterns to proprietary internal tokens reverses the intended use of built-in versus organization-defined patterns.
  • The option claiming custom patterns get the same provider validation or partner response overstates what custom secret detection provides.

Question 3

Topic: Configure and Use Dependabot and Dependency Review

A private repository already has dependency graph and Dependabot alerts enabled. The team now wants Dependabot version update pull requests for the npm project in /web to run weekly, group related updates, and ignore one package until a later release. The organization requires this behavior to be stored in the repository and reviewed through pull requests. Which option is the best choice?

Options:

  • A. Create a repository ruleset that requires weekly dependency updates before merge.

  • B. Add a .github/dependabot.yml file with the schedule, directory, grouping, and ignore rules.

  • C. Enable dependency review so pull request checks control when Dependabot opens updates.

  • D. Publish an SBOM on each build so GitHub can generate grouped version update pull requests.

Best answer: B

Explanation: A Dependabot configuration file is used to control version update behavior such as ecosystem, directory, schedule, grouping, and ignore rules. The other options improve visibility or enforcement, but they do not tell Dependabot how to create update pull requests.

For Dependabot version updates, the repository-level .github/dependabot.yml file is the control point for update behavior. It lets you define which package ecosystem and directory to monitor, how often to check, and rules such as grouping or ignoring selected dependencies. That matches the scenario because the team wants update behavior customized and stored as code in the repository.

Dependency review evaluates dependency changes in pull requests, but it does not schedule or shape Dependabot version update PRs. An SBOM documents components for inventory and sharing, not update behavior. A ruleset can enforce merge requirements, but it cannot configure how Dependabot generates update PRs in the first place.

Use the configuration file when the goal is to control Dependabot’s update behavior, not just review or govern the results.

  • Dependency review helps assess dependency changes in pull requests, but it does not define update cadence, grouping, or ignore rules.
  • SBOM publishing improves component visibility, but it does not instruct Dependabot to open version update pull requests.
  • Repository rulesets enforce policy on branches and merges, but they do not configure Dependabot’s update generation behavior.
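
As a sketch, a .github/dependabot.yml for this scenario might look like the following; the group name and the ignored package are hypothetical placeholders, not values from the scenario.

```yaml
# Hypothetical sketch of .github/dependabot.yml for the scenario:
# weekly npm version updates in /web, grouped, with one package held back.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/web"            # the npm project described in the scenario
    schedule:
      interval: "weekly"
    groups:
      web-dependencies:          # hypothetical group name; bundles related updates into one PR
        patterns:
          - "*"
    ignore:
      - dependency-name: "example-widget"   # hypothetical package to skip until a later release
        versions: ["2.x"]
```

Because the file lives in the repository, changes to it flow through the same pull request review the organization requires.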

Question 4

Topic: Describe the GHAS Security Features and Functionality

A security manager with organization-level visibility is triaging a new critical Dependabot alert in the private repository payments-api. Leadership asks whether this is an isolated issue or part of a broader backlog of open critical dependency risk across the organization’s customer-facing repositories. What is the best next action?

Options:

  • A. Dismiss the alert until each repository owner confirms their own exposure

  • B. Generate an SBOM for payments-api and manually compare other repositories later

  • C. Review only the payments-api Security tab for related dependency alerts

  • D. Use Security Overview to filter open Dependabot alerts across repositories

Best answer: D

Explanation: Security Overview is the right choice when the question spans multiple repositories. In this case, leadership wants an organization-wide view of critical dependency risk, so checking one repository’s Security tab is too narrow.

Security Overview is designed for centralized visibility across repositories, which makes it the better starting point when you need to assess scope, trends, or prioritization across an organization. Here, the trigger is a single critical Dependabot alert, but the task is to determine whether similar open critical dependency risks exist elsewhere in customer-facing repositories. That requires an organization-level view, not a single repository view.

  • Use Security Overview for cross-repository triage and prioritization.
  • Use a repository Security tab for investigating and remediating alerts within one repository.

An SBOM or manual follow-up may help later, but they are slower and do not answer the immediate organization-wide question as directly.

  • Reviewing only one repository misses the stated need to measure exposure across multiple repositories.
  • Generating an SBOM for one repository does not directly show open alerts across the organization.
  • Dismissing the alert delays triage and provides no evidence about broader impact.

Question 5

Topic: Configure and Use Secret Scanning

An enterprise already has GitHub Advanced Security available. A private repository uses a ruleset that requires pull requests, approvals, dependency review, and a passing code scanning check before merge. The AppSec team also wants GitHub to generate an alert if a developer commits TOKEN_PLACEHOLDER to a branch and opens a pull request. Which feature must be enabled for this private repository?

Options:

  • A. Enable push protection.

  • B. Enable code scanning.

  • C. Enable secret scanning.

  • D. Enable dependency review.

Best answer: C

Explanation: For a private repository, secret scanning is not automatically active just because other secure workflow controls are in place. To have GitHub detect committed secrets and create secret scanning alerts, secret scanning must be enabled for that repository or inherited through policy.

Secret scanning is the GitHub Advanced Security feature that detects exposed credentials in repository content and creates secret scanning alerts. In this scenario, the repository already uses pull requests, approvals, dependency review, and code scanning as part of its secure development workflow, but those controls do not enable secret detection.

For a private repository, secret scanning must be turned on at the repository level or through organization or enterprise policy before GitHub will generate alerts for committed secrets. Push protection is related but different: it helps block supported secrets before a push is accepted, while secret scanning alerts identify secrets that are already present in commits or pull requests.

The workflow controls improve review and remediation, but secret scanning is the required feature for this specific goal.

  • Push protection timing: push protection helps prevent supported secrets from being pushed, but it is not the feature that creates alerts for secrets already committed in a private repository.
  • Code scanning mismatch: code scanning analyzes code vulnerabilities and insecure patterns, not leaked credentials such as tokens.
  • Dependency review scope: dependency review evaluates dependency changes and risk from packages, not hardcoded secrets in repository content.
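
If the team prefers scripting over the settings UI, a sketch of enabling the feature through the REST API, assuming the GitHub CLI and a placeholder OWNER/REPO:

```shell
# Sketch: enable secret scanning on a private repository via the GitHub CLI.
# OWNER/REPO is a placeholder; GHAS must be available to the repository.
gh api -X PATCH /repos/OWNER/REPO \
  -f 'security_and_analysis[secret_scanning][status]=enabled'
```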

Question 6

Topic: Configure and Use Dependabot and Dependency Review

An organization has GitHub Advanced Security enabled for a private npm repository. A Dependabot alert shows that lodash in package-lock.json is vulnerable, and the alert lists a patched version. The team wants GitHub to propose the fix automatically while still using normal pull request review and CI checks before merge. What is the best security action?

Options:

  • A. Require dependency review on all pull requests.

  • B. Upload SARIF results from a third-party scanner.

  • C. Enable Dependabot security updates for the repository.

  • D. Export an SBOM and verify package versions manually.

Best answer: C

Explanation: Dependabot security updates are designed for this exact case: an existing vulnerable dependency with an available patched version. GitHub can create an automated pull request for the version update, and the team can still review, test, and merge it through the standard workflow.

The key concept is that Dependabot security updates remediate eligible Dependabot alerts by creating automated pull requests. In this scenario, the repository already has a Dependabot alert and the alert shows a patched version is available, so the best action is to enable Dependabot security updates. That lets GitHub prepare the dependency change in a pull request instead of requiring a manual edit first.

Dependabot security updates do not bypass normal engineering controls. The pull request can still go through CI, code review, branch protection, and merge approval just like any other change. This is different from features that only detect risk or review incoming changes; here the goal is automated remediation when possible.

The closest distractor is dependency review, which helps detect risky dependency changes in pull requests but does not automatically fix an existing vulnerable dependency alert.

  • Dependency review helps evaluate dependency changes in pull requests, but it does not open a remediation pull request for an existing vulnerable package.
  • SBOM export improves inventory and analysis, but it is not an automated update mechanism.
  • SARIF upload is for code scanning results, not Dependabot-based dependency remediation.

Question 7

Topic: Describe the GHAS Security Features and Functionality

A GHAS-enabled repository generated a code scanning alert for SQL injection on the default branch after a pull request was merged. The team wants similar vulnerabilities found before merge in future pull requests. Which action is the best fit?

Options:

  • A. Enable secret scanning push protection for contributors.

  • B. Add dependency review to the pull request workflow.

  • C. Run code scanning on pull requests and require its status check.

  • D. Run code scanning only on a nightly scan of the default branch.

Best answer: C

Explanation: Code scanning is the GHAS feature for detecting insecure code patterns such as injection flaws. To find them before merge, it should run on pull requests and be enforced with a required check.

Code scanning fits in the pull request stage when the goal is to catch vulnerabilities in application code before they reach the default branch. In this scenario, the problem was found only after merge, so the best improvement is to run code scanning during pull requests and enforce the result with a required status check or ruleset. That moves security findings into the normal review workflow, where developers can remediate issues before the change is merged.

Other GHAS features help at different points in the lifecycle, but they do not analyze custom code for flaws like SQL injection. Scanning only the default branch still provides coverage, but it does not satisfy the stated before-merge requirement.

  • Secret scanning scope targets exposed credentials, not insecure code patterns such as SQL injection.
  • Dependency review scope evaluates dependency changes, not vulnerabilities in custom application logic.
  • Default-branch timing still finds issues after merge, which is later than the team’s goal.

Question 8

Topic: Configure and Use Dependabot and Dependency Review

Your organization is standardizing security defaults for new private repositories. AppSec wants vulnerable dependencies to appear in the Security tab for triage, but repositories must not receive automated pull requests that change dependency versions until each repo’s test pipeline is approved. Which configuration best meets this requirement?

Options:

  • A. Enable code scanning default setup instead of Dependabot alerts.

  • B. Enable Dependabot security updates for all repositories and require PR approval before merge.

  • C. Enable dependency graph and Dependabot alerts; keep Dependabot security updates disabled.

  • D. Enable dependency review on pull requests and disable Dependabot alerts.

Best answer: C

Explanation: Dependabot alerts and Dependabot security update pull requests are separate features. Alerts identify vulnerable dependencies for triage, while security updates create automated remediation PRs, so only alerts should be enabled here.

The key distinction is notification versus automated remediation. Dependabot alerts use dependency data and advisory matching to flag vulnerable packages in the Security tab so teams can assess and prioritize risk. Dependabot security updates are a separate capability that opens pull requests to update affected dependencies when a safe version is available.

In this scenario, the organization wants visibility into vulnerable dependencies but does not want automated version-change PRs yet. That means enabling dependency graph and Dependabot alerts, while leaving Dependabot security updates off. Dependency review is useful for evaluating dependency changes already proposed in pull requests, and code scanning is for code findings, not package vulnerability alerting.

Requiring approval on Dependabot PRs would still allow those PRs to be created, which violates the stated governance constraint.

  • Security updates enabled: this still creates automated remediation pull requests, even if merge approval is required.
  • Dependency review only: dependency review evaluates dependency changes in pull requests rather than creating vulnerability alerts for existing dependencies.
  • Code scanning instead: code scanning identifies code issues, not vulnerable package versions from dependency advisories.
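
One way to script this split, assuming the GitHub CLI and a placeholder OWNER/REPO, is to enable the alerts endpoint while keeping automated security fixes off:

```shell
# Sketch: visibility without automated remediation (OWNER/REPO is a placeholder).
gh api -X PUT    /repos/OWNER/REPO/vulnerability-alerts       # enable Dependabot alerts
gh api -X DELETE /repos/OWNER/REPO/automated-security-fixes   # keep security update PRs off
```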

Question 9

Topic: Describe GHAS Best Practices, Results, and Corrective Measures

A team uses Jenkins for a private repository with GitHub Advanced Security enabled. A repository ruleset requires code scanning results before pull requests can merge. Jenkins runs the CodeQL CLI, creates a database, and generates results.sarif, but no code scanning alerts or pull request annotations appear in GitHub.

What should the team do next to make the externally generated CodeQL findings available in GitHub?

Options:

  • A. Enable dependency review on pull requests

  • B. Upload results.sarif to GitHub code scanning

  • C. Commit the CodeQL database to the default branch

  • D. Move the analysis to GitHub Actions because external results are unsupported

Best answer: B

Explanation: Generating SARIF in an external pipeline is not enough by itself. To surface CodeQL findings in GitHub as code scanning alerts and pull request annotations, the SARIF file must be uploaded to GitHub code scanning.

The key concept is the CodeQL CLI to SARIF to GitHub code scanning flow. When CodeQL runs outside GitHub, such as in Jenkins or another CI system, the analysis results are produced as a SARIF file. GitHub does not read the raw CodeQL database or a stored build artifact to create alerts. It needs the SARIF results to be uploaded to the repository’s code scanning system.

After the SARIF upload, GitHub can show alerts in the Security tab, annotate pull requests, and use those results with merge controls that depend on code scanning. External CI is fully supported for this pattern, so replacing the pipeline is unnecessary.

The main takeaway is that externally generated CodeQL results reach GitHub through SARIF upload.

  • Dependency review mismatch: dependency review checks dependency changes in a pull request, not CodeQL analysis findings.
  • Database confusion: GitHub ingests SARIF results for code scanning, not a committed CodeQL database.
  • External CI claim: moving the analysis to GitHub Actions is unnecessary because Jenkins and other CI systems can publish CodeQL findings by uploading SARIF.
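
A hedged sketch of the missing Jenkins step, using the CodeQL CLI's upload command; the repository, ref, and commit values are placeholders supplied by the CI environment:

```shell
# Sketch: publish externally generated CodeQL results to GitHub code scanning.
# GITHUB_TOKEN must be set in the environment with security-events write access.
codeql github upload-results \
  --repository=OWNER/REPO \
  --ref=refs/heads/main \
  --commit="$GIT_COMMIT" \
  --sarif=results.sarif
```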

Question 10

Topic: Configure and Use Code Scanning with CodeQL

A private repository uses GitHub Advanced Security with a CodeQL workflow. Branch protection on main requires a code scanning result before merge. Developers open pull requests from feature branches into main, but no code scanning results appear in the pull requests. After merge, scans do run and alerts appear on main.

Exhibit:

on:
  push:
    branches: [main]
  pull_request:
    branches: [release/**]

What is the best workflow adjustment to make code scanning results appear on pull requests into main without adding unnecessary branch scans?

Options:

  • A. Add feature/** to push.branches

  • B. Add main to pull_request.branches

  • C. Re-enable code scanning in repository settings

  • D. Add a schedule trigger for daily analysis

Best answer: B

Explanation: The workflow already proves code scanning is enabled because scans run after merge on main. The missing pull request results come from the pull_request branch filter, which does not include the PR target branch main.

For code scanning to appear in a pull request, the workflow must run for the PR event itself. In this scenario, push to main works, so the repository and workflow are already configured to scan after merge. The problem is that the pull_request trigger is limited to release/**.

For pull_request events, the branch filter applies to the base branch of the PR, not the feature branch that opened it. Because these PRs target main, the workflow is skipped and no PR result is produced. Adding main to pull_request.branches is the narrowest fix because it enables PR-time code scanning for the protected branch without expanding scans to every feature-branch push.

The key takeaway is to check PR trigger filters first when pull request results are missing but post-merge scans still run.

  • Extra push scans: adding feature/** to push.branches would broaden analysis to feature-branch pushes, but the stem asks for PR results without unnecessary expansion.
  • Scheduled analysis: a schedule trigger helps periodic coverage, not pull request-time feedback before merge.
  • Re-enabling code scanning: unnecessary, because successful scans already occur on main after merge.
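
Applying the chosen fix, the adjusted trigger block might read as follows, keeping the existing release/** filter in place:

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: [main, release/**]   # base-branch filter now covers PRs into main
```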

Question 11

Topic: Configure and Use Dependabot and Dependency Review

An organization has a private Node.js repository on GitHub Enterprise Cloud. The repository contains package.json and package-lock.json. The security team wants GitHub to build and maintain its own view of the repository’s dependencies directly from those files, with no extra CI-generated metadata. Which configuration should the repository administrator enable?

Options:

  • A. Upload an SBOM from the build pipeline.

  • B. Turn on Dependabot security updates.

  • C. Enable the dependency graph for the repository.

  • D. Require dependency review on pull requests.

Best answer: C

Explanation: The dependency graph is GitHub’s view of repository dependencies detected from supported manifest and lock files. Because the team wants that view directly from package.json and package-lock.json with no extra CI step, enabling dependency graph is the best fit.

The core concept is that the dependency graph is GitHub’s native inventory of dependencies detected from supported manifest and lock files in a repository. In this scenario, the repository already has package.json and package-lock.json, so enabling the dependency graph lets GitHub read those files and build its own dependency view directly.

That view is foundational for other dependency features, because GitHub first needs to know what dependencies exist in the repo. The requirement explicitly rules out extra CI-generated metadata, so the best choice is the built-in file-based detection mechanism rather than a workflow-produced artifact.

The closest distractors are useful dependency features, but they depend on or extend GitHub’s dependency knowledge rather than defining that native dependency view.

  • Dependency review helps evaluate dependency changes in pull requests, but it is not the feature that defines GitHub’s baseline dependency inventory from repository files.
  • Dependabot security updates opens remediation pull requests for vulnerable packages, but it does not itself represent GitHub’s file-detected dependency view.
  • SBOM upload provides an external artifact, while the requirement is for GitHub to detect dependencies directly from manifest and lock files with no extra CI metadata.

Question 12

Topic: Configure and Use Dependabot and Dependency Review

A private repository has this Dependabot alert, and team policy requires all dependency changes to go through a reviewed pull request with CI tests.

Package: `acme-web`
Manifest: `package.json`
Dependency type: direct, production
Current version: 3.9.0
Vulnerable versions: < 4.2.1
First patched version: 4.2.1
Dependabot security update: unavailable
Reason: fix requires a major version upgrade

What is the safest remediation path?

Options:

  • A. Keep the version and rely on dependency review controls.

  • B. Open a PR to upgrade to 4.2.1+, test, and review before merge.

  • C. Wait for Dependabot to propose the upgrade automatically later.

  • D. Dismiss the alert because GitHub could not open a fix PR.

Best answer: B

Explanation: The alert already identifies a patched version, but Dependabot cannot automate the change because it requires a major upgrade. The safest response is to manually update the direct production dependency in a pull request and validate compatibility through normal review and testing.

Dependabot alert details show both the vulnerable range and the first patched version. Here, the dependency is a direct production dependency, and a fix exists at 4.2.1 or later. Because the remediation requires a major version upgrade, Dependabot cannot safely create an automated security update PR, so the team should perform the upgrade manually.

A safe remediation path is:

  • update the dependency to the first patched version or later
  • use a pull request so CI, tests, and code review run
  • confirm the major-version change does not break the application
  • merge only after validation succeeds

The key point is that lack of an automated PR does not mean lack of a fix; it means manual review is required for the upgrade.

  • Auto-fix confusion: dismissing the alert because no security update PR exists mistakes an automation limit for a valid dismissal reason.
  • Wait and see: waiting for a later Dependabot run is unsafe because the alert already shows the fix requires manual action.
  • Wrong control: dependency review helps assess new dependency changes in pull requests, but it does not remediate the currently vulnerable version.
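
A sketch of the manual path, assuming an npm project; the branch name is hypothetical, and CI plus review then run on the pull request as usual:

```shell
# Sketch: manual major-version upgrade delivered through a reviewed pull request.
git checkout -b fix/acme-web-4.2.1          # hypothetical branch name
npm install acme-web@4.2.1                  # move to the first patched version
npm test                                    # validate the major-version change locally
git commit -am "Upgrade acme-web to 4.2.1"  # then push and open a PR for review and CI
```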

Question 13

Topic: Configure and Use Code Scanning with CodeQL

A private repository on GitHub Enterprise Cloud hosts a web API. The security team must detect source-code weaknesses such as SQL injection and unsafe data flow during pull requests, and merges must be blocked until those findings are reviewed. The requirement does not concern leaked credentials or outdated packages. Which GHAS configuration best meets this need?

Options:

  • A. Enable dependency review on pull requests with a severity threshold.

  • B. Enable Dependabot alerts and Dependabot security updates.

  • C. Enable secret scanning with push protection for the repository.

  • D. Enable code scanning with a CodeQL workflow on pull requests and require its check before merge.

Best answer: D

Explanation: The requirement is about vulnerabilities in the repository’s own source code, so the correct GHAS feature is code scanning. Running CodeQL on pull requests and requiring its check before merge satisfies both the detection and governance goals.

Source-code risks such as injection flaws, insecure data handling, and other coding weaknesses map to code scanning. In GHAS, enabling code scanning with a CodeQL workflow analyzes the application’s code during pull requests and surfaces code scanning alerts for review. If merges must be blocked until findings are addressed, the code scanning result can be enforced as a required check through branch protection or a ruleset.

Secret scanning is for exposed credentials, not insecure coding patterns. Dependabot alerts and Dependabot security updates focus on known vulnerable dependencies, and dependency review evaluates dependency changes in pull requests. Those features are useful, but they do not replace code scanning for risks in custom source code.

  • Secret scanning with push protection helps prevent committed secrets, but it does not inspect application logic for coding vulnerabilities.
  • Dependabot alerts and security updates target vulnerable packages and remediation PRs, not flaws in the repository’s own code.
  • Dependency review checks dependency changes in pull requests, but it does not perform static analysis of source code paths.
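
A minimal sketch of such a workflow, assuming a JavaScript codebase; the language value and file name are placeholders:

```yaml
# Sketch: .github/workflows/codeql.yml running CodeQL on pull requests into main.
name: codeql
on:
  pull_request:
    branches: [main]
permissions:
  contents: read
  security-events: write   # required to upload code scanning results
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript   # hypothetical; match the repository's languages
      - uses: github/codeql-action/analyze@v3
```

The resulting check can then be marked as required in branch protection or a ruleset so merges block on unreviewed findings.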

Question 14

Topic: Configure and Use Dependabot and Dependency Review

A private GitHub Enterprise Cloud repository uses pull requests to merge into main. The security team wants every PR that changes manifest or lock files to be evaluated before merge so reviewers can see newly introduced vulnerable dependencies and the merge can be blocked when policy fails. Dependabot alerts are already enabled.

What is the best security action?

Options:

  • A. Enable secret scanning push protection for the repository.

  • B. Enable Dependabot security updates to automatically remediate vulnerable packages.

  • C. Upload SARIF results from a third-party scanner for each pull request.

  • D. Add a Dependency Review workflow on pull_request and require its check for main.

Best answer: D

Explanation: The requirement is to inspect dependency changes during the pull request process and stop merges when dependency risk violates policy. A Dependency Review workflow triggered on pull_request, combined with a required status check or ruleset, is the GHAS control built for that pre-merge dependency gate.

Dependency Review is the GitHub feature that compares dependency changes in a pull request and surfaces newly introduced vulnerable dependencies before code is merged. In this scenario, the key need is pre-merge enforcement, not just post-merge alerting or automated remediation. The best design is a workflow that runs on pull_request for manifest or lockfile changes, with that check then made required on main through branch protection or a repository ruleset.

This gives the team two controls:

  • visibility into what dependency risk the PR introduces
  • merge blocking when the dependency review check fails policy

Dependabot alerts help identify vulnerable dependencies already present, and Dependabot security updates help propose fixes, but neither is the primary pre-merge review control for dependency changes in a PR.

  • Dependabot updates help remediate known vulnerabilities but do not replace a PR-time dependency change gate.
  • SARIF upload is for code scanning results, not the built-in dependency diff review for package changes.
  • Secret scanning push protection prevents secrets from being pushed, which is unrelated to dependency vulnerability review.
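A minimal sketch of such a pre-merge gate, assuming the maintained actions/dependency-review-action is used; the severity threshold shown is illustrative, not part of the question:

```yaml
# Sketch: dependency review gate on pull requests.
# fail-on-severity is an input of actions/dependency-review-action;
# "high" is an illustrative policy threshold.
name: dependency-review
on: pull_request
permissions:
  contents: read
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high
```

Making the `dependency-review` job a required status check on main, via branch protection or a repository ruleset, completes the merge-blocking half of the control.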

Question 15

Topic: Configure and Use Secret Scanning

A private repository already has secret scanning enabled. During the last sprint, GitHub created several alerts after developers pushed supported cloud tokens to feature branches and then removed them in later commits. The security lead wants the next accidental token to be stopped before it reaches any remote branch on GitHub. What should the team do?

Options:

  • A. Enable secret scanning push protection for the repository.

  • B. Require pull request approvals before merging to main.

  • C. Run dependency review on all pull requests.

  • D. Enable validity checks and triage only active alerts.

Best answer: A

Explanation: Push protection is the preventive secret-scanning control. It blocks supported secrets during a push, while the other choices either act only after exposure or address different risks entirely.

Push protection is the best choice when the goal is to prevent a new secret from ever reaching GitHub. In this scenario, secret scanning is already finding the tokens, but only after developers have pushed them. That means exposure has already occurred, even if the secret is removed later.

With push protection enabled, GitHub inspects pushes for supported secrets and blocks the push when a match is found. This shortens the exposure window to before the secret is stored in the remote repository. Validity checks can help prioritize alerts, but they do not stop the initial push. Branch protection applies to merge controls on protected branches, and dependency review is for dependency risk, not leaked credentials.

The key distinction is prevention before push versus detection after push.

  • Validity checks help assess whether a detected secret may still be active, but they occur after the secret has already been pushed.
  • Branch protection on main does not stop secrets from being pushed to feature branches or other remote refs first.
  • Dependency review evaluates dependency changes in pull requests, not accidental credential exposure.

Question 16

Topic: Configure and Use Dependabot and Dependency Review

Your team uses GitHub Advanced Security on a private Node.js repository. A pull request adds a new library for image processing. The dependency review check on the pull request reports:

Dependency review summary:
- Added dependency: contoso-image-lib 3.2.0
- Scope: production
- Vulnerabilities introduced by this change: 1 high
- Advisory note: patched version available in 3.2.4

The feature does not require version 3.2.0 specifically. What is the best security action?

Options:

  • A. Merge now and wait for Dependabot to remediate on the default branch.

  • B. Dismiss the finding because the package exists only in the pull request.

  • C. Request changes and upgrade or replace the vulnerable package before merge.

  • D. Run CodeQL analysis to decide whether the dependency version is acceptable.

Best answer: C

Explanation: Dependency review is a pre-merge control for spotting vulnerable dependencies introduced by a pull request. Because the report shows a new high-severity production dependency and a patched version is available, the safest action is to block the change until the dependency is updated or removed.

Dependency review compares the dependencies in a pull request with the base branch and highlights vulnerable packages introduced by that change. Here, the evidence is explicit: the pull request adds contoso-image-lib 3.2.0 in production scope, and that version introduces a high-severity vulnerability. Since a patched version is available and the feature does not depend on the vulnerable version, the correct response is to remediate the dependency in the pull request before merge.

This uses dependency review as intended: prevent known vulnerable dependencies from reaching the default branch. Waiting until after merge shifts the problem into remediation mode instead of stopping it earlier.

  • Merge first delays remediation until the vulnerable dependency is already on the default branch.
  • Dismiss the pull request finding ignores direct dependency review evidence that the change introduces a known vulnerable production package.
  • Use CodeQL instead applies the wrong control, because code scanning does not determine whether a dependency version is already known to be vulnerable.

Question 17

Topic: Configure and Use Dependabot and Dependency Review

Your organization uses GitHub Advanced Security on a private repository. Before each release, a regulated customer requires a machine-readable artifact that lists the software components and versions included in the shipped application, including transitive dependencies, so they can compare the release against newly disclosed advisories. What is the best security action?

Options:

  • A. Export current Dependabot alerts for the repository.

  • B. Enable dependency review on pull requests.

  • C. Configure Dependabot security updates for vulnerable packages.

  • D. Generate and provide an SBOM for each release.

Best answer: D

Explanation: An SBOM is a machine-readable inventory of the components included in software, usually covering direct and transitive dependencies. Because the customer needs a release artifact they can compare to future advisories, generating and sharing an SBOM is the best action.

A Software Bill of Materials (SBOM) represents the component inventory for a piece of software: the packages, libraries, and versions that make up the application, often including transitive dependencies. That makes it the right artifact when a customer, auditor, or downstream team needs to understand what is in a shipped release and assess exposure when new vulnerabilities are disclosed.

In GitHub dependency management, these features serve different purposes:

  • SBOM: documents what the software contains
  • Dependency review: shows dependency changes before merge
  • Dependabot alerts: identifies known vulnerable dependencies
  • Dependabot security updates: proposes remediation PRs

The closest distractor is dependency review, but it helps evaluate pull request changes rather than representing the release contents.

  • Dependency review helps inspect package changes in pull requests, but it is not the release inventory artifact the customer asked for.
  • Dependabot alerts report known vulnerable dependencies only, not the full set of components and versions in the software.
  • Dependabot security updates help remediate vulnerable packages, but they do not describe everything included in the released application.
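As one hedged illustration, a release workflow step could export the SPDX SBOM from the dependency graph REST endpoint (the step name, artifact name, and file path below are placeholders; the endpoint requires the dependency graph to be enabled):

```yaml
# Hypothetical release step: export the repository SBOM (SPDX JSON)
# from the dependency graph REST API and keep it as a release artifact.
- name: Export SBOM
  run: |
    curl -sS \
      -H "Authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
      -H "Accept: application/vnd.github+json" \
      "https://api.github.com/repos/${{ github.repository }}/dependency-graph/sbom" \
      -o sbom.spdx.json
- uses: actions/upload-artifact@v4
  with:
    name: release-sbom
    path: sbom.spdx.json
```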

Question 18

Topic: Configure and Use Code Scanning with CodeQL

A private production repository uses a ruleset that requires code scanning results before merge to main. Most changes arrive through pull requests. The CodeQL workflow was simplified to:

on:
  push:
    branches: [main]
  schedule:
    - cron: '15 4 * * 1'

Which edit best fits this repository?

Options:

  • A. Add pull_request for main and keep push and schedule.

  • B. Remove schedule and run CodeQL manually before each release.

  • C. Use dependency review on pull requests and keep CodeQL on push only.

  • D. Replace push with a daily schedule so scans run off-hours.

Best answer: A

Explanation: An active production repository that merges through pull requests should run code scanning on those pull requests, not only after code reaches main. Keeping push and schedule also preserves coverage for the default branch and periodic reanalysis.

For CodeQL in a production repository, the key workflow edit is adding a pull_request trigger for the protected target branch. That lets GitHub produce code scanning results during review, which supports repository rulesets or branch protection that expect security feedback before merge.

Keeping the existing triggers is also important:

  • push analyzes the code that actually lands on main
  • schedule provides periodic rescans over time
  • pull_request shifts detection earlier in the developer workflow

Dependency review is useful for dependency changes, but it does not replace CodeQL analysis of code vulnerabilities. Manual or schedule-only approaches delay feedback and weaken pre-merge protection.

  • Schedule only fails because off-hours scans still do not create CodeQL results during pull request review.
  • Dependency review instead is tempting, but it checks dependency risk rather than replacing code scanning for application code issues.
  • Manual release scans are too late and too inconsistent for an active repository with protected merges.
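Applying option A, the edited trigger block keeps the existing push and schedule entries and adds the pull_request trigger for the protected branch:

```yaml
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '15 4 * * 1'
```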

Question 19

Topic: Configure and Use Code Scanning with CodeQL

A private monorepo has GHAS enabled. CodeQL default setup is generating alerts normally. The team also uploads SARIF from two third-party scanners in the same workflow, and both upload steps succeed, but only the last third-party result set remains visible for each commit.

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: iac.sarif

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: containers.sarif

What is the best security action?

Options:

  • A. Add a distinct category to each SARIF upload.

  • B. Split CodeQL into separate language-specific jobs.

  • C. Expand the CodeQL query suite for the repository.

  • D. Add a CodeQL autobuild step before the uploads.

Best answer: A

Explanation: This is a SARIF upload categorization issue, not a CodeQL workflow issue. Because CodeQL is already producing alerts, the symptom of one third-party result set replacing another points to missing distinct category values on the SARIF uploads.

The core concept is that SARIF upload troubleshooting is different from CodeQL analysis troubleshooting. Here, CodeQL default setup is already working, so changing CodeQL queries, build steps, or job structure targets the wrong component. The problem is with multiple third-party SARIF uploads for the same commit in the same workflow.

  • Each uploaded SARIF analysis should have its own category when you want GitHub to keep them as separate code scanning results.
  • Without distinct categories, one upload can replace another for the same commit and ref.
  • upload-sarif is for publishing third-party results; it does not mean the issue is with CodeQL analysis itself.

The key takeaway is to troubleshoot missing third-party findings at the SARIF upload layer before modifying a healthy CodeQL workflow.

  • Query suite mismatch is not the issue because CodeQL findings are already being created successfully.
  • Autobuild confusion fails because autobuild helps CodeQL prepare compiled code, not preserve separate third-party SARIF uploads.
  • Language job split can help CodeQL coverage in a monorepo, but it does not stop one SARIF upload from overwriting another.
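A corrected version of the uploads gives each result set its own category so GitHub stores them as separate analyses for the same commit (the category values here are illustrative):

```yaml
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: iac.sarif
    category: iac

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: containers.sarif
    category: containers
```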

Question 20

Topic: Configure and Use Code Scanning with CodeQL

A private repository uses GitHub Advanced Security with CodeQL enabled. The repository ruleset already requires the CodeQL check before merging into main. Recent findings have been in application code, not dependency manifests. The current workflow is:

on:
  push:
    branches: [main]
  schedule:
    - cron: '15 3 * * 1'

Developers want vulnerabilities in a pull request to be found before the code is merged. What is the best security action?

Options:

  • A. Enable secret scanning push protection for the repository.

  • B. Run the existing main scan more frequently.

  • C. Add a pull_request trigger for PRs targeting main.

  • D. Require dependency review on every pull request.

Best answer: C

Explanation: Pull request scanning works only when the code scanning workflow runs in the pull request context. Here, the workflow scans main after merge or on a schedule, so it cannot identify new code vulnerabilities during review.

To catch code vulnerabilities before merge, the code scanning workflow must run on pull requests. In this scenario, CodeQL runs only on push to main and on a scheduled job, which means alerts appear only after the code is already merged or at some later time. Adding a pull_request trigger for PRs targeting main makes GitHub Advanced Security analyze the proposed changes while the pull request is open. That allows reviewers to see code scanning results before approving or merging, and the existing ruleset can use that check in the pre-merge workflow. Scanning main more often may shorten detection time, but it still does not provide reliable pre-merge detection.

  • Frequent main scans reduce delay, but they still detect issues only after code reaches main.
  • Dependency review is useful for vulnerable package changes, not for flaws introduced in custom application code.
  • Secret scanning push protection helps block leaked credentials, not code vulnerabilities found by code scanning.

Question 21

Topic: Describe GHAS Best Practices, Results, and Corrective Measures

An AppSec team manages 60 private repositories with GitHub Advanced Security and CodeQL enabled. They created custom CodeQL queries to detect misuse of an internal authentication library. The team wants every repository to run the same custom queries, keep updates centralized, and let other teams reuse the queries without copying files into each repository. What is the best security action?

Options:

  • A. Package the queries in a QL pack and reference the pack in CodeQL analysis.

  • B. Put the queries in a reusable GitHub Actions workflow and call it from each repository.

  • C. Create a query suite in each repository and copy the queries into each repo.

  • D. Export the custom queries as SARIF and upload them during each analysis run.

Best answer: A

Explanation: A QL pack is designed to package and share reusable CodeQL queries. That fits the requirement to distribute centrally maintained custom queries across many repositories without copying the query files into each repo.

A QL pack is the reusable packaging unit for CodeQL queries. In this scenario, the team needs one centrally maintained source of custom queries that many repositories can run, so packaging the queries in a QL pack and referencing that pack from CodeQL analysis is the right approach.

  • Use a QL pack to package and version custom query logic.
  • Reference the pack from CodeQL so repositories run the same shared content.
  • Update the pack centrally instead of maintaining copied query files in many repos.

The closest distractor is a query suite, which helps select queries to run but does not itself solve packaging and sharing.

  • Per-repo query suites still create drift because a query suite selects queries but does not package them for centralized sharing.
  • Reusable workflow can standardize automation, but it is not the CodeQL-native way to package reusable query content.
  • SARIF upload is for analysis results, not for distributing query definitions.
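A hedged sketch of how an advanced-setup workflow could consume a shared pack via the packs input of the CodeQL init action; my-org/auth-queries, the version range, and the language are placeholder values:

```yaml
# Hypothetical: pull a centrally published QL pack into CodeQL analysis.
# "my-org/auth-queries" stands in for the team's published pack name.
- uses: github/codeql-action/init@v3
  with:
    languages: javascript
    packs: my-org/auth-queries@~1.0.0
```

Because every repository references the same published pack, updating the pack version centrally updates the queries everywhere without copying files.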

Question 22

Topic: Configure and Use Dependabot and Dependency Review

A private repository uses GitHub Advanced Security. A ruleset blocks merges until required checks pass, and the team’s security policy rejects pull requests that introduce vulnerable dependencies.

Exhibit:

Dependency review for PR #214
- Added braces 3.0.2 — 1 high vulnerability
- Added fast-xml-parser 4.3.2 — no known vulnerabilities
- Updated axios 1.6.1 -> 1.6.3 — fixes 1 moderate vulnerability
- Existing on base branch: minimist 0.0.8 — 1 critical vulnerability

Which dependency review finding shows the pull request introduced the vulnerable dependency that should block the merge?

Options:

  • A. minimist 0.0.8

  • B. braces 3.0.2

  • C. axios 1.6.3

  • D. fast-xml-parser 4.3.2

Best answer: B

Explanation: Dependency review compares the base branch and the pull request to show which dependency changes introduce risk. The only package that is both newly added and marked with a vulnerability is braces 3.0.2, so it is the merge-blocking finding under the stated policy.

Dependency review is used in pull requests to identify dependency changes and whether those changes introduce known vulnerabilities from GitHub’s advisory data. In this scenario, the key test is not just whether a dependency is vulnerable, but whether the pull request introduced that vulnerable dependency.

braces 3.0.2 meets both conditions: it is added by the PR and it has a high vulnerability. A clean new dependency does not trigger the policy, and an update that fixes a vulnerability improves security rather than worsening it. A dependency that is already vulnerable on the base branch still needs remediation, but it was not introduced by this pull request, so it is not the dependency review finding that should block this merge.

The important distinction is between existing risk in the repository and new risk introduced by the PR.

  • New but clean: The fast-xml-parser change is an addition, but the evidence says it has no known vulnerabilities.
  • Remediating update: The axios change fixes a moderate vulnerability, so it reduces risk instead of introducing it.
  • New vulnerable package: The braces change is both newly added and flagged with a high vulnerability.
  • Pre-existing issue: The minimist entry is vulnerable, but the review states it already exists on the base branch.

Question 23

Topic: Configure and Use Code Scanning with CodeQL

A team uses a third-party SAST tool in GitHub Actions for a private GitHub Enterprise Cloud repository with GitHub Advanced Security enabled. The tool reports 12 findings for a pull request, but no new code scanning alerts appear in the repository.

permissions:
  contents: read

steps:
  - run: vendor-sast --output results.sarif
  - uses: github/codeql-action/upload-sarif@v3
    with:
      sarif_file: results.sarif
      category: vendor-sast
Upload SARIF
HTTP 403: Resource not accessible by integration

What is the best next action?

Options:

  • A. Rerun only vendor-sast; GitHub will ingest the job log output.

  • B. Wait for Security Overview to sync the external findings automatically.

  • C. Add security-events: write and rerun the SARIF upload.

  • D. Change category to codeql and rerun the workflow.

Best answer: C

Explanation: Third-party findings become code scanning alerts only after GitHub successfully accepts the SARIF file. Here, the analyzer produced results, but the upload failed with a 403, so fixing the upload permission and rerunning it is the right remediation.

A third-party scanner running successfully is not enough by itself to create GitHub code scanning alerts. GitHub must receive and process a SARIF upload for those findings to appear in code scanning. In the scenario, results.sarif was generated, but the upload step failed with HTTP 403, which indicates the workflow token does not have the required permission to write security events.

The correct fix is to grant the workflow security-events: write and rerun the upload or the workflow. The category value only groups or distinguishes result sets; it does not resolve an authorization failure. Likewise, GitHub does not create alerts from job logs or from a later background sync when no SARIF upload succeeded.

The key takeaway is that a failed SARIF upload blocks ingestion, so no third-party code scanning alerts can be created.

  • Re-running only the scanner fails because GitHub does not turn console output into code scanning alerts.
  • Changing the SARIF category fails because categories organize uploaded results; they do not fix a 403 permission error.
  • Waiting for Security Overview fails because there are no uploaded SARIF results for GitHub to ingest.
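Concretely, the fix from option C is a one-line change to the permissions block shown in the exhibit:

```yaml
permissions:
  contents: read
  security-events: write  # required for the SARIF upload to succeed
```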

Question 24

Topic: Configure and Use Dependabot and Dependency Review

An organization has GitHub Advanced Security enabled for a private monorepo, and the dependency graph feature is already enabled. Security Overview still shows no dependencies for two services.

CI workflow excerpt
- run: npm install express axios
- run: pip install flask requests
Committed dependency files: none

The AppSec team wants GitHub to generate dependency data for Dependabot alerts and SBOM export. What is the best security action?

Options:

  • A. Commit supported manifest and lock files to the default branch.

  • B. Upload the build package list as SARIF after each run.

  • C. Enable code scanning default setup for the repository.

  • D. Turn on Dependabot security updates before dependency collection.

Best answer: A

Explanation: GitHub dependency information comes from supported package manifests committed to the repository, with lock files improving accuracy where supported. CI commands that install packages directly do not populate the dependency graph for Dependabot alerts or SBOM export.

GitHub dependency management features rely on dependency data that GitHub can read from repository contents. For supported ecosystems such as npm and pip, GitHub builds the dependency graph by parsing committed package manifests and, when available, lock files on the default branch. In this scenario, the workflow installs packages directly with command lines, but no supported dependency files are committed, so GitHub has nothing to parse.

The best action is to store dependencies in supported manifest files and commit those files, along with lock files where applicable. That enables GitHub to generate dependency information used by Dependabot alerts, security updates, and SBOM export. Workflow logs, SARIF uploads, and code scanning do not substitute for repository dependency manifests.

  • The SARIF option fails because SARIF uploads are for code scanning results, not dependency graph data.
  • The code scanning option fails because code scanning analyzes code issues, not package inventory from direct install commands.
  • The Dependabot security updates option fails because those updates depend on existing dependency data rather than discovering dependencies first.
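As a minimal illustration for the pip half of the CI excerpt, a committed manifest can be as small as the fragment below; an equivalent package.json (plus its lock file) would cover the npm installs:

```text
# requirements.txt — committed to the default branch so GitHub can
# parse it into the dependency graph
flask
requests
```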

Question 25

Topic: Configure and Use Dependabot and Dependency Review

A private GitHub repository uses GHAS with this setup:

Current settings:
- `dependabot.yml`: weekly grouped npm version updates
- Pull requests require the `dependency-review` check

Blocked PR:
- Change source: manual developer PR
- Added package: `pkg-alpha` 3.1.0
- Result: introduces a high-severity advisory
- Patched version: 3.1.2

The team wants to keep grouped Dependabot updates but continue blocking any pull request that introduces vulnerable dependencies. What is the best action?

Options:

  • A. Disable dependency review and wait for a Dependabot security update pull request after merge.

  • B. Move the blocking rule into dependabot.yml so manual pull requests are rejected there.

  • C. Keep update grouping in dependabot.yml; enforce the block with dependency review and update pkg-alpha to 3.1.2 or later.

  • D. Replace dependency review with CodeQL so package advisories are enforced by code scanning.

Best answer: C

Explanation: The blocked pull request is being enforced by dependency review, not by Dependabot’s update configuration. dependabot.yml controls how Dependabot creates update pull requests, while dependency review controls whether dependency changes in any pull request can merge.

In GitHub Advanced Security, these features solve different problems. dependabot.yml defines how Dependabot behaves, such as schedule, package ecosystems, grouping, and ignore or allow rules for the update pull requests Dependabot opens. Dependency review is the pull request gate: it compares dependency changes, identifies newly introduced vulnerabilities, and can be required by branch protection or repository rulesets.

Here, the risky change came from a manual developer pull request, so changing Dependabot grouping would not change the enforcement result. The correct response is to keep the desired Dependabot update configuration, remediate the dependency to a patched version, and continue using dependency review as the merge-blocking policy.

  • Blocking in config fails because dependabot.yml does not act as a merge policy for manual pull requests.
  • Fix after merge fails because dependency review is intended to stop vulnerable dependency changes before they are merged.
  • Wrong GHAS feature fails because CodeQL code scanning analyzes code, not dependency advisory policy for pull request package changes.
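A hedged sketch of the dependabot.yml described in the current settings; the group name and the catch-all pattern are placeholders:

```yaml
# Sketch: weekly grouped npm version updates, as stated in the scenario.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    groups:
      npm-dependencies:
        patterns: ["*"]
```

Nothing in this file blocks merges; the blocking behavior stays with the required dependency-review check.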

Questions 26-50

Question 26

Topic: Configure and Use Secret Scanning

An organization enabled secret scanning for a private repository. A developer pushed a commit yesterday, and today the repository’s Security tab shows a secret scanning alert for a placeholder API token in that commit. The team wants to know whether this is the same as push protection. What is the best response?

Options:

  • A. Enable code scanning so future leaked secrets are blocked before push.

  • B. Dismiss the alert unless validity checks confirm the token is active.

  • C. Revoke or rotate the token and remove it from the repository, because the alert means the secret is already in GitHub and push protection is a pre-push control.

  • D. Wait for push protection to block the commit retroactively before taking action.

Best answer: C

Explanation: A secret scanning alert means GitHub detected a secret after it was already committed to the repository. Push protection is different: it tries to stop supported secrets when a user pushes them, so it does not replace alert triage and remediation.

The key distinction is timing. Secret scanning alerts are post-commit findings: the secret is already present in repository content, so you should assume exposure and start remediation such as revoking or rotating the credential and removing it from the codebase. Push protection is preventative and runs when a user attempts to push supported secrets to GitHub; it does not retroactively block a commit that already exists in the repository.

A good response flow is:

  • Treat the alert as an active exposure.
  • Revoke or rotate the secret.
  • Remove the secret from the repository and review any needed history cleanup.

Validity checks can help prioritize, but they are not a reason to ignore a leaked secret. The closest distractor is waiting for push protection, but push protection is not a post-commit remediation control.

  • Retroactive block fails because push protection does not go back and stop a commit that was already accepted by GitHub.
  • Validity-only dismissal fails because validity checks are a prioritization signal, not permission to ignore an exposed secret.
  • Wrong feature fails because code scanning finds code issues, not leaked credentials at push time.

Question 27

Topic: Configure and Use Code Scanning with CodeQL

A repository ruleset on main requires CodeQL results before merge and applies this policy:

Block a pull request only if it introduces an OPEN code scanning alert
with security severity High or Critical.
Medium or Low alerts require triage but do not block.
Dismissed alerts do not block.

PR results:
- SQL injection | Open | High | introduced by this PR
- Path traversal | Open | High | already exists on `main`
- Reflected XSS | Open | Medium | introduced by this PR
- Unsafe deserialization | Dismissed | High | introduced by this PR

Which result should block the pull request?

Options:

  • A. The existing path traversal alert

  • B. The new SQL injection alert

  • C. The dismissed unsafe deserialization alert

  • D. The new reflected XSS alert

Best answer: B

Explanation: Apply the stated merge policy literally: the alert must be open, introduced by the current pull request, and High or Critical. Only the new SQL injection alert meets all three conditions.

This question is about code scanning merge protection based on a defined threshold. Under the ruleset, a pull request is blocked only when a code scanning alert is both open and introduced by that pull request, and the alert’s security severity is High or Critical.

  • The SQL injection result qualifies because it is new in the PR, still open, and High.
  • The path traversal result is High but already exists on main, so it does not block under this policy.
  • The reflected XSS result is introduced by the PR but is only Medium, so it needs triage rather than blocking.
  • The unsafe deserialization result was dismissed, so it is not an open blocker.

A common mistake is assuming any High alert anywhere should block, even when the policy is limited to open High-or-Critical alerts introduced by the PR.

  • Existing on base: The path traversal alert is High, but the policy blocks only alerts introduced by the current pull request.
  • Below threshold: The reflected XSS alert is new and open, yet Medium does not meet the High-or-Critical blocking threshold.
  • Already dismissed: The unsafe deserialization alert is not open anymore, so it does not block merge under the stated policy.

Question 28

Topic: Configure and Use Dependabot and Dependency Review

A team uses GitHub Advanced Security on a private repository. A repository ruleset requires pull requests and passing checks before merging to main, including dependency review and code scanning. A PR that changed only application code merged successfully. The next day, after a new advisory is published, the Security tab shows a Dependabot alert for a transitive package already present on main.

Which statement best explains what that alert represents?

Options:

  • A. Branch protection blocked main until Dependabot automatically merged a security fix.

  • B. Dependency review found that the merged pull request introduced the vulnerable package.

  • C. GitHub matched the repository’s dependency data to a published advisory and raised a vulnerability notification.

  • D. Code scanning confirmed that application code reaches the vulnerable package at runtime.

Best answer: C

Explanation: A Dependabot alert is a notification that GitHub detected a vulnerable dependency from the repository’s dependency data. It is not the same as a dependency review result in a pull request, a code scanning finding, or an automatically merged remediation.

Dependabot alerts come from dependency management data, such as manifests, lockfiles, and the dependency graph, matched against known advisories. In this scenario, the package was already on main, and the alert appeared after a new advisory was published, so the alert represents newly recognized risk in existing dependency data rather than a failed pull request check.

Dependency review is a pull request control that evaluates dependency changes before merge. Code scanning analyzes source code for security issues. Dependabot security updates may open pull requests when a fix is available, but that remediation workflow is separate from the alert itself.

The key takeaway is that a Dependabot alert is the vulnerability notification layer for dependencies, not the enforcement or code-analysis layer.

  • PR-only finding fails because the stem says the PR changed only application code, and dependency review is specific to pull request dependency changes.
  • Code analysis confusion fails because code scanning detects code issues, not advisory matches for package versions.
  • Automatic fix confusion fails because branch protection does not create or merge a remediation; alerts and security update pull requests are separate features.

Question 29

Topic: Describe the GHAS Security Features and Functionality

Your AppSec team has access to Security Overview for an organization that uses GitHub Advanced Security across 240 private repositories. You have time to focus on only a small set of repositories this sprint and want the most risk-based starting point.

Exhibit:

Security Overview summary
- Repositories with open alerts: 38
- Repositories with code scanning disabled: 27
- Repositories with secret scanning disabled: 19
- Repositories with open alerts and at least one disabled feature: 11

What is the best security action?

Options:

  • A. Review each repository Security tab manually before setting priorities.

  • B. Use audit log activity to prioritize the repositories with the most pushes.

  • C. Use Security Overview to prioritize the 11 overlapping repositories first.

  • D. Enable Dependabot security updates on all repositories before triaging alerts.

Best answer: C

Explanation: Security Overview is the right tool for comparing repositories by both unresolved alerts and feature coverage. The best first step is to target repositories that have known findings and missing protections, because they represent immediate risk plus weak visibility.

Security Overview gives organization-level visibility into both security findings and GHAS feature adoption, so it is the best place to prioritize work across many repositories. In this scenario, the most important set is the overlap between repositories with open alerts and repositories missing code scanning or secret scanning. Those repositories already have unresolved risk and also lack full security coverage, which makes them the highest-value starting point.

A sound prioritization approach is:

  • identify repositories with open alerts
  • narrow to those missing key GHAS features
  • work the overlap first

Manual per-repository review is slower and less consistent, while activity data or a single broad enablement action does not directly target the repositories with the clearest combined risk signal.

  • Manual review of every repository does not scale when Security Overview already aggregates alert and coverage data.
  • Enabling Dependabot security updates everywhere may help later, but it does not prioritize repositories already showing both unresolved alerts and weak coverage.
  • Audit log push activity shows usage patterns, not which repositories currently have the strongest GHAS risk signals.

Question 30

Topic: Configure and Use Dependabot and Dependency Review

A private repository has GitHub Advanced Security enabled. An external vulnerability finding reports that acme-lib 1.2.0 is in use, but GitHub shows no dependency graph entry and no Dependabot alert for that package.

Exhibit:

Default branch files:
- package.json   # acme-lib is not listed
- deploy.sh      # runs: npm install acme-lib@1.2.0

What is the best remediation so GitHub can generate dependency information for this package?

Options:

  • A. Enable CodeQL so JavaScript imports populate dependencies.

  • B. Declare acme-lib in package.json and commit the lockfile.

  • C. Export an SBOM to populate the dependency graph.

  • D. Run dependency review against deploy.sh changes.

Best answer: B

Explanation: GitHub dependency graph and Dependabot alerts are based on dependency information parsed from supported package manifests and related lockfiles in the repository. Because acme-lib is installed only by a script, GitHub has no manifest-based record to analyze.

GitHub generates repository dependency information by parsing supported package manifests and, when applicable, lockfiles committed to the repo. For an npm project, that normally means declaring the dependency in package.json and recording the resolved version in a lockfile. A shell script that runs npm install during build is an execution step, not a supported manifest source of truth for dependency graph generation.

When the dependency is moved into the committed manifest, GitHub can include it in the dependency graph, which then supports features such as Dependabot alerts, dependency review, and SBOM export. The key point is that these features rely on known dependency metadata; they do not discover undeclared packages from ad hoc install commands.
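For illustration, a minimal manifest sketch showing the fix (the package name and version come from the scenario; the project name and other fields are placeholders):

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "dependencies": {
    "acme-lib": "1.2.0"
  }
}
```

After declaring the dependency, run npm install and commit the generated package-lock.json so the dependency graph can record the resolved version.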

  • Code scanning mix-up: CodeQL analyzes code for vulnerabilities and does not create dependency graph entries from import statements.
  • Dependency review scope: dependency review compares dependency changes derived from supported manifests, not shell commands in deploy.sh.
  • SBOM direction: SBOM export reflects dependencies GitHub already knows about; it does not discover missing ones first.

Question 31

Topic: Configure and Use Dependabot and Dependency Review

An organization enables Dependabot security updates for a private GitHub repository. They want GitHub to open remediation pull requests as soon as a vulnerable dependency has a patched version. Their security policy requires that every dependency change to main pass CI tests and be approved by a maintainer before merge.

Which configuration best meets these requirements?

Options:

  • A. Allow dependabot[bot] to bypass review requirements so update pull requests merge immediately.

  • B. Exclude Dependabot pull requests from required CI checks to speed remediation.

  • C. Enable Dependabot security updates and enforce review plus passing checks before merge to main.

  • D. Automatically dismiss a Dependabot alert when its update pull request opens.

Best answer: C

Explanation: Dependabot security updates automate pull request creation, not final approval. The best setup is to let Dependabot open the remediation PR while branch governance still requires testing and maintainer review before anyone merges it.

Dependabot security updates help remediate vulnerable dependencies faster by opening a pull request when GitHub knows a safer version is available. But that pull request is still a code change, so it should follow the repository’s normal merge controls. In this scenario, the correct configuration is to keep Dependabot security updates enabled and enforce required CI checks plus required pull request review on main through branch protection or a ruleset. That preserves automated remediation proposal, testing, human validation, and an explicit merge decision. Creating the PR alone does not mean the repository is fixed, and bypassing review or checks would violate the stated governance requirement.

  • Allowing the bot to bypass reviews fails because the policy explicitly requires maintainer approval before merge.
  • Excluding Dependabot pull requests from CI checks fails because dependency updates still need testing.
  • Dismissing the alert when a pull request opens is premature because proposed remediation is not the same as validated, merged remediation.

Question 32

Topic: Describe the GHAS Security Features and Functionality

You manage security for a GitHub Enterprise Cloud organization with 250 private repositories. Leadership wants a weekly prioritized list of active repositories that need attention because they have unresolved GHAS alerts or incomplete security feature enablement. The team wants centralized visibility first and does not want to enforce new merge gates or broaden alert access yet. Which choice is best?

Options:

  • A. Enable Dependabot security updates on all repositories and use incoming pull requests as the priority signal.

  • B. Use organization Security Overview to filter active private repositories for open alerts and disabled GHAS features.

  • C. Grant all repository admins broader security alert access and ask each team for weekly reports.

  • D. Enforce an organization ruleset that blocks merges until every security alert is resolved.

Best answer: B

Explanation: The best choice is to use Security Overview at the organization scope with repository-level filters. It directly supports centralized prioritization by combining alert status and feature enablement coverage, which matches the goal without adding enforcement or wider permissions.

Security Overview is designed to aggregate GHAS posture across repositories so security teams can identify where attention is most needed. In this scenario, the requirement is not to remediate immediately or enforce policy, but to prioritize active private repositories that either have unresolved alerts or weak enablement coverage. Filtering the organization-level repositories view for open alerts and disabled GHAS features gives a single backlog for triage.

This approach works because it:

  • uses centralized visibility across repositories
  • highlights both alert volume and missing coverage
  • avoids changing branch controls or developer workflows
  • avoids unnecessary expansion of alert permissions

A ruleset or feature rollout may be appropriate later, but Security Overview is the right first step for prioritization.

  • Merge blocking too early fails because it enforces policy before the team has identified and ranked the highest-risk repositories.
  • Dependabot PRs as a signal fails because security updates help remediate dependencies, but they do not provide broad repository coverage prioritization.
  • Broader alert access fails because it expands permissions and relies on manual reporting instead of a centralized GHAS view.

Question 33

Topic: Describe the GHAS Security Features and Functionality

An organization uses GitHub Advanced Security on organization-owned private repositories. Repository maintainers with write access already review code scanning and Dependabot alerts. The central AppSec lead must also investigate secret scanning alerts across all repositories, but should not become an admin on each repo. What is the best security action?

Options:

  • A. Enable CodeQL default setup on each repository.

  • B. Require Dependabot security updates on default branches.

  • C. Give the AppSec lead write access to each repository.

  • D. Assign the AppSec lead the organization security manager role.

Best answer: D

Explanation: Use the organization security manager role. Secret scanning alert access is more restricted than code scanning or Dependabot alert access, so broad write permissions are not the right control for this requirement.

Different GitHub security alert types do not share the same access model. Code scanning and Dependabot alerts are commonly reviewed by repository collaborators with development responsibility, but secret scanning alerts are intentionally more restricted because they can expose sensitive credential material. In this scenario, the AppSec lead needs organization-wide visibility without becoming a repository admin on every repository. The organization security manager role is the best fit because it gives a security-focused user access to investigate and manage security findings across the organization.

The key takeaway is to use a security-specific role for sensitive alerts instead of widening normal repository permissions.

  • Write access is not the right answer because secret scanning alerts use a more restricted visibility model than standard development-focused alert review.
  • CodeQL setup affects code scanning analysis, not who can view secret scanning findings.
  • Dependabot security updates help remediate vulnerable dependencies, not grant access to secret scanning alerts.

Question 34

Topic: Configure and Use Code Scanning with Codeql

A private repository uses CodeQL with these triggers:

on:
  push:
    branches: [main]
  schedule:
    - cron: '20 3 * * 1'

A high-severity code scanning alert was opened only after a pull request merged into main. Developers use short-lived feature branches and want future findings to appear in pull requests before merge, without scanning every feature-branch push. What is the best workflow change?

Options:

  • A. Add a pull_request trigger for pull requests targeting main.

  • B. Change the push trigger to run on all branches.

  • C. Increase the scheduled scan from weekly to daily.

  • D. Require developers to start scans with workflow_dispatch before merging.

Best answer: A

Explanation: The best fix is to add a pull_request trigger. The current workflow scans main after code is already merged, and the team explicitly wants results during PR review without scanning every feature-branch push.

This scenario is about matching the code scanning trigger to the development pattern. A push trigger on main detects issues only after merge, while a pull_request trigger analyzes changes during the PR process so reviewers can see code scanning results before the branch is merged.

In this case:

  • push on main is too late for pre-merge feedback.
  • pull_request gives PR-time scanning for the branches that matter.
  • schedule is useful for periodic coverage, not for merge-time prevention.
  • workflow_dispatch depends on people remembering to run it.

The closest distractor is scanning all branch pushes, but that conflicts with the stated goal of avoiding scans on every feature-branch push.
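One way to sketch that change, keeping the existing triggers and adding PR-time analysis only for pull requests that target main:

```yaml
on:
  pull_request:
    branches: [main]   # analyze changes during PR review, before merge
  push:
    branches: [main]   # keep post-merge coverage of the default branch
  schedule:
    - cron: '20 3 * * 1'
```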

  • All branch pushes is broader than needed and violates the requirement to avoid scanning every feature branch push.
  • A more frequent schedule still finds problems after the fact rather than during PR review.
  • Manual dispatch is possible, but it is not the best automated trigger for a consistent PR workflow.

Question 35

Topic: Describe GHAS Best Practices, Results, and Corrective Measures

A private monorepo on GitHub Enterprise Cloud uses GitHub code scanning with CodeQL for a Java/Kotlin service. Pull request analysis began failing after the team changed the build to run custom Gradle tasks that generate source files before compilation. The workflow still uses autobuild.

- uses: github/codeql-action/init@v3
  with:
    languages: java-kotlin
- uses: github/codeql-action/autobuild@v3
- uses: github/codeql-action/analyze@v3

Latest log excerpt: "Autobuild failed" and "No source code was seen during the build."

What is the best security action to restore reliable CodeQL scanning?

Options:

  • A. Upload third-party SARIF results instead of fixing the CodeQL build.

  • B. Replace autobuild with explicit Gradle tasks that generate and build the service.

  • C. Exclude generated source directories from code scanning results.

  • D. Restrict CodeQL runs to the default branch until autobuild succeeds.

Best answer: B

Explanation: For compiled languages, CodeQL must observe a successful build before it can analyze the code. Because autobuild is failing on a Java/Kotlin project that now depends on custom Gradle tasks, the right fix is to add manual build steps that mirror the real build.

CodeQL analysis for compiled languages depends on extraction during a successful build. autobuild is a best-effort helper, but it often fails when a repository uses custom tasks, generated source, multi-module sequencing, or other nonstandard build behavior. The log shows that CodeQL never saw a usable build, so the workflow needs custom build configuration rather than a change in scan timing or a different alert source.

  • Keep the CodeQL workflow in advanced setup.
  • Run the same Gradle commands your CI uses to generate sources and compile the project.
  • Then run analyze so CodeQL can create and assess the database correctly.

Changing scope or substituting unrelated results does not restore actual CodeQL coverage for this compiled project.
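A sketch of the adjusted workflow, replacing the autobuild step with explicit build commands (the Gradle task name `generateSources` is illustrative; use the same commands your CI runs):

```yaml
- uses: github/codeql-action/init@v3
  with:
    languages: java-kotlin
    build-mode: manual
# Run the repository's real build so CodeQL observes compilation
- name: Generate sources and build
  run: ./gradlew generateSources build
- uses: github/codeql-action/analyze@v3
```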

  • Branch-only scanning changes when scans run, but it does not make the Java/Kotlin project build successfully for CodeQL extraction.
  • Path exclusions can reduce analyzed scope, but they do not run the custom tasks required to generate and compile the code.
  • Third-party SARIF can add other code scanning results, but it does not fix this repository’s broken CodeQL build process.

Question 36

Topic: Configure and Use Dependabot and Dependency Review

Your organization uses a Dependabot auto-triage rule that dismisses moderate npm alerts for manifests under /apps/** to reduce noise from grouped dependency updates.

In the payments-service repository, this alert matches the rule:

Package: axios
Manifest: /apps/payments/package.json
Scope: production
Dependency: direct
First patched version: 1.8.2
Dependabot security update PR: open (CI passed)

Which action is best?

Options:

  • A. Dismiss the alert as not used because npm dependencies are centrally managed.

  • B. Close the security update PR and wait for the next grouped update.

  • C. Keep the rule because moderate alerts in grouped updates are low priority.

  • D. Exclude this path from the auto-triage dismissal rule and review the security update PR.

Best answer: D

Explanation: Auto-dismissal is unsafe here because the alert has a clear remediation path: a first patched version is available and Dependabot already opened a security update PR. Since the dependency is direct and in production scope, the safer action is to narrow the rule and process the fix through normal review and testing.

Dependabot auto-triage dismissal rules are useful for reducing noise, but they should not hide fixable risk. In this scenario, the alert is for a direct production dependency, GitHub identifies a patched version, and a Dependabot security update PR is already open. Those facts are strong evidence that the alert is actionable rather than a candidate for dismissal.

The best response is to remove this path or condition from the dismissal rule, then review, test, and merge the security update through the normal workflow. Grouped dependency updates do not justify dismissing an actionable security alert, especially when remediation is already prepared. The closest distractor treats moderate severity as a reason to ignore the alert, but fix availability and production impact matter more here.

  • Severity alone is not enough to justify dismissal when a direct production dependency has a known patched version.
  • Not used fails because the alert explicitly shows a direct production dependency, which contradicts that dismissal reason.
  • Wait for grouping is weaker than using the existing security update PR, because it delays a fix that is already available and tested by CI.

Question 37

Topic: Describe GHAS Best Practices, Results, and Corrective Measures

A pull request to a private Java repository is blocked because the required CodeQL code scanning check failed. The workflow currently uses autobuild, but this project only compiles in CI when it runs ./gradlew generateSources build -p services/payments.

CodeQL autobuild failed
No build command was detected.
Java analysis requires a successful build.

What is the best remediation?

Options:

  • A. Re-run the same workflow on the default branch so autobuild can infer the build.

  • B. Use a manual CodeQL build and run the repository’s Gradle commands explicitly.

  • C. Dismiss the failed result because no vulnerability alert was produced.

  • D. Change the CodeQL query suite so analysis can run without compiling.

Best answer: B

Explanation: For compiled languages such as Java, CodeQL needs a successful build to analyze the code accurately. When autobuild fails and the repository already depends on custom build commands, the right fix is to configure CodeQL to use those explicit build steps.

This scenario points to a custom build requirement, not a query or alert-triage issue. CodeQL can analyze interpreted languages more directly, but compiled languages such as Java rely on a successful build so CodeQL can resolve the project structure, dependencies, and generated sources. Here, the log says autobuild could not detect a build command, and the stem already tells you the repository needs a specific Gradle sequence to compile.

  • Keep CodeQL analysis enabled.
  • Replace reliance on autobuild with the repository’s actual build commands.
  • Let analysis run after that successful build.

Re-running the same failing setup will not teach CodeQL the custom build, and changing query suites does not remove the need to compile the project first.

  • Re-run only is insufficient because the underlying problem is build configuration, not a transient failure.
  • Change query suite fails because query selection does not replace the build step required for compiled-language analysis.
  • Dismiss the result is inappropriate because the scan did not complete, so code scanning coverage is still missing.

Question 38

Topic: Configure and Use Code Scanning with Codeql

A team uses GitHub Enterprise Cloud, and GitHub Advanced Security is already enabled for a private repository. A repository ruleset requires a passing code scanning result on pull requests to main, but the repository has no code scanning results because no analysis tool is configured. The team wants GitHub to analyze the code during pull requests by using GitHub Actions. Which change should the repository administrator make?

Options:

  • A. Require approving reviews and signed commits in the ruleset.

  • B. Turn on secret scanning push protection for all contributors.

  • C. Enable Dependabot security updates for the repository.

  • D. Add a CodeQL workflow that runs on pull_request and push and uploads code scanning results.

Best answer: D

Explanation: Code scanning is enabled when a workflow runs a code analysis tool and sends results to GitHub. In this scenario, the repository has GHAS and a ruleset, but it still needs a GitHub Actions code scanning workflow such as CodeQL to produce the results checked on pull requests.

Repository rulesets and branch protection can require a code scanning check, but they do not generate analysis results by themselves. To enable code scanning through GitHub Actions, the repository needs a workflow that runs a supported analyzer and uploads findings to GitHub. When the team is not using an external SAST tool, the standard GitHub-native choice is a CodeQL workflow triggered on pull requests and pushes to the protected branch.

Once that workflow runs, GitHub can create code scanning alerts in Security and surface the related check in the pull request. The closest distractor is dependency management, which helps with vulnerable packages but does not analyze the repository’s source code for code scanning findings.
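A minimal CodeQL workflow sketch for this setup (the language and branch name are assumptions; adjust them to match the repository):

```yaml
name: CodeQL
on:
  pull_request:
    branches: [main]
  push:
    branches: [main]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload code scanning results
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v3
```

With this in place, the code scanning check appears on pull requests and can be selected as the required check in the ruleset.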

  • Dependabot security updates focus on vulnerable dependencies and patch PRs, not source-code analysis for code scanning alerts.
  • Secret scanning push protection helps stop leaked credentials before commit, but it is a different GHAS feature.
  • Review and signing requirements can enforce merge policy, but they do not create the code scanning results the ruleset expects.

Question 39

Topic: Describe the GHAS Security Features and Functionality

Your company uses GitHub Enterprise Cloud with GitHub Advanced Security. A central AppSec team needs a single place to review code scanning, secret scanning, and Dependabot alert trends across 12 organizations and hundreds of repositories. Repository teams will still investigate and fix alerts in their own repos. What is the best security action?

Options:

  • A. Use organization-level Security Overview separately in each organization and compare results manually.

  • B. Use each repository’s Security tab to review alerts one repository at a time.

  • C. Use the enterprise-level Security Overview for aggregated visibility across organizations.

  • D. Enable repository rulesets so pull requests cannot merge until all alerts are fixed.

Best answer: C

Explanation: The scenario requires one view across many organizations, so enterprise-level Security Overview is the right fit. Repository views are for detailed investigation within a single repository, and organization-level overview stops at one organization.

In GitHub Advanced Security, the key distinction is scope. Repository-level alert views help a team inspect and triage alerts for one repository. Organization-level Security Overview aggregates security posture across repositories in a single organization. Enterprise-level Security Overview extends that aggregation across multiple organizations.

Here, the AppSec team needs centralized visibility across 12 organizations and hundreds of repositories, so the enterprise-level Security Overview is the best choice. It supports broad posture review and prioritization without replacing repository teams’ local remediation work. The closest distractor is reviewing each organization separately, but that still fragments the view and adds manual comparison effort.

  • The option about each repository’s Security tab fails because repository views do not provide centralized cross-organization visibility.
  • The option about using organization-level Security Overview in each organization is possible but not the best fit when one enterprise-wide view is required.
  • The option about repository rulesets addresses merge governance, not the need to compare GHAS alert trends across organizations.

Question 40

Topic: Configure and Use Secret Scanning

A GitHub Enterprise Cloud organization uses GHAS on a private repository. Pull requests to main must pass dependency review and code scanning before a ruleset allows merge.

Policy excerpt

- Developers handle code scanning and Dependabot findings in their services.
- Exposed credentials must be revoked or rotated immediately by the Cloud Identity team.
- Secret details must be limited to users with a need to know.

Repository admins already manage workflow and ruleset settings. The organization wants to grant one additional team access to secret scanning alerts.

Which team should receive that access?

Options:

  • A. The Cloud Identity team responsible for credential rotation

  • B. All developers with write access to the repository

  • C. The team that reviews Dependabot dependency alerts

  • D. The CODEOWNERS reviewers for the protected branch

Best answer: A

Explanation: Secret scanning alerts should be visible to the people who can act on leaked credentials, not simply to everyone involved in pull requests. Because the policy assigns revocation and rotation to the Cloud Identity team and limits secret details to need-to-know users, that team is the right additional recipient.

Secret scanning alert access should be granted based on operational responsibility and least privilege. In this scenario, the policy clearly separates normal development remediation from secret exposure response: developers handle code scanning and Dependabot findings, while the Cloud Identity team must revoke or rotate leaked credentials. That makes the Cloud Identity team the correct group to receive secret scanning alerts.

Secret scanning is different from PR review controls such as branch protection, rulesets, dependency review, and code scanning. Those controls help prevent risky changes from merging, but they do not determine who should see sensitive secret alert details. Alert visibility should align with the team that can immediately contain the leak and has a need to know.

The key point is to give secret scanning access to credential responders, not to every contributor or reviewer.

  • Broad developer access fails because write access and PR participation do not justify viewing sensitive secret details.
  • CODEOWNERS reviewers are responsible for merge approval, but that does not make them the team that revokes or rotates leaked credentials.
  • Dependabot triage is for vulnerable dependencies, not exposed secrets, so that workflow owner is the wrong recipient here.

Question 41

Topic: Configure and Use Dependabot and Dependency Review

A GitHub Enterprise Cloud organization uses GHAS for a private Node.js monorepo.

Current state:
- Dependency graph: enabled
- Dependabot alerts: enabled
- Ruleset on `main`: can require status checks
Goal:
- Block a pull request before merge if it introduces a known vulnerable dependency

Last month, a vulnerable transitive package was detected only after it reached the default branch. What is the best security action?

Options:

  • A. Add a Dependency Review check to pull requests and require it before merge.

  • B. Review exported SBOM files during the monthly release process.

  • C. Rely on Dependabot alerts to notify maintainers after the default branch changes.

  • D. Enable Dependabot security updates to automatically open remediation pull requests.

Best answer: A

Explanation: The requirement is to stop vulnerable dependency changes during the pull request workflow, before merge. Dependency Review is designed for that PR-time comparison, while Dependabot alerts mainly identify vulnerable dependencies already present in the repository’s dependency graph.

The key distinction is timing and purpose. Dependabot alerts tell you that a repository dependency has a known vulnerability, typically after the vulnerable package is part of the repository context being monitored. Dependency Review compares the base and head of a pull request to identify dependency changes and can surface newly introduced vulnerabilities before merge.

In this scenario, the team already has Dependabot alerts enabled, but that only helped after the vulnerable transitive dependency reached the default branch. To enforce prevention, they should add a Dependency Review workflow for pull requests and make that check required through the ruleset on main.

Dependabot security updates help remediate vulnerable dependencies, but they do not replace PR-time gating of incoming dependency changes.
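A sketch of the PR-time gate (the `fail-on-severity` threshold shown here is illustrative):

```yaml
name: Dependency review
on: [pull_request]
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: low   # fail the check if the PR introduces vulnerable dependencies
```

Making this workflow's check required in the ruleset on main turns the review into an enforced merge gate.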

  • Post-merge detection: relying on Dependabot alerts is too late for a requirement to block vulnerable changes before merge.
  • Remediation path: enabling Dependabot security updates helps create update PRs, but it does not evaluate every incoming PR for newly introduced vulnerable dependencies.
  • Inventory, not enforcement: reviewing SBOM output supports visibility and audit needs, but a monthly process does not provide pull request merge control.

Question 42

Topic: Configure and Use Code Scanning with Codeql

A monorepo has a repository ruleset that blocks pull request merges until required code scanning completes. The same third-party scanner runs twice on each PR commit: once for services/api and once for web. Both jobs upload SARIF, but after the workflow finishes, only the second result set is shown and the first appears overwritten.

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: api.sarif

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: web.sarif

Which change best prevents this?

Options:

  • A. Use dependency review instead of SARIF for the pull request.

  • B. Upload SARIF only from the default branch workflow.

  • C. Give each SARIF file a different filename on every run.

  • D. Add unique category values to both upload-sarif steps.

Best answer: D

Explanation: Use a unique SARIF category when the same tool uploads multiple analyses for the same commit. In this pull request workflow, separate categories keep the API and web scans distinct so both result sets remain available for code scanning and merge enforcement.

GitHub code scanning needs a way to distinguish multiple SARIF uploads from the same tool on the same commit. The category value provides that identity. In this scenario, the API and web jobs both upload results for the same pull request commit, so leaving category unspecified can cause one upload to overwrite the other instead of preserving both analyses.

  • Assign one stable category for the API scan.
  • Assign a different stable category for the web scan.
  • Keep those values consistent across runs.

That allows both result sets to appear separately in code scanning, which is important when pull request rules depend on complete scan results. Changing filenames or switching to another GHAS feature does not solve SARIF analysis identity.
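Applied to the workflow in the question, the fix is a sketch like this (the category names are illustrative; any stable, distinct values work):

```yaml
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: api.sarif
    category: services-api   # stable, unique identity for the API analysis

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: web.sarif
    category: web            # distinct identity for the web analysis
```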

  • Different filenames change the uploaded file name, not the analysis identity GitHub uses to separate SARIF results.
  • Default branch only reduces pull request visibility and still does not distinguish two analyses of the same commit.
  • Dependency review evaluates dependency changes in a PR, not SARIF-based code scanning results.
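The fix described above can be sketched as two upload steps with distinct category values; the category names here are illustrative, not required values:

```yaml
# Each upload gets a stable, unique category so GitHub treats
# the API and web scans as separate analyses of the same commit.
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: api.sarif
    category: services-api

- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: web.sarif
    category: web
```

Keeping the same category strings across runs lets code scanning track each analysis over time instead of treating every run as a new tool.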

Question 43

Topic: Describe GHAS Best Practices, Results, and Corrective Measures

Your organization uses a repository ruleset that blocks merges to main unless the CodeQL code scanning check succeeds on each pull request. The AppSec team wants to detect a company-specific insecure logging pattern in Java code. Which GitHub Advanced Security item contains the logic that identifies that pattern in the code?

Options:

  • A. A repository ruleset that requires code scanning

  • B. A CodeQL query suite that groups queries to run

  • C. A SARIF upload category that labels findings

  • D. A CodeQL query that defines the detection logic

Best answer: D

Explanation: A CodeQL query contains the actual logic used to identify a code pattern or vulnerability. In this pull request workflow, the ruleset enforces the check, but the query is what tells CodeQL what to look for.

In CodeQL, a query is the artifact that expresses detection logic against a CodeQL database built from the repository’s code. That logic can look for specific API usage, insecure patterns, or tainted data flow that may indicate a vulnerability. A query suite does not define new detection logic; it only groups queries so they run together. A repository ruleset governs whether a pull request can merge, and SARIF is a results format used to publish findings. The key distinction is that enforcement and reporting are separate from the logic that performs the analysis.

  • The option describing a query suite is tempting because suites are used in CodeQL workflows, but they bundle queries rather than define the pattern itself.
  • The option describing a repository ruleset fits the pull request scenario, but rulesets enforce checks and do not detect vulnerable code.
  • The option describing a SARIF category relates to reporting results, but it does not contain the analysis logic that finds issues.

Question 44

Topic: Configure and Use Secret Scanning

A GitHub Enterprise Cloud organization uses GHAS on a private payments monorepo. Hundreds of developers have read access, but only the incident-response team and repository admins can revoke production credentials and coordinate cleanup. Secret scanning detects a leaked cloud API token in commit history. To reduce unnecessary exposure while still enabling fast remediation, what is the best security action?

Options:

  • A. Grant all repository readers access to the alert so more developers can help inspect the leak.

  • B. Create a team-wide engineering ticket that includes the alert details for all contributors to follow.

  • C. Restrict secret scanning alert visibility and notifications to designated responders who can revoke, rotate, and remove the secret.

  • D. Use the token’s validity status to decide visibility, and broaden access when validity is unknown.

Best answer: C

Explanation: Secret scanning alerts should be visible only to appropriate responders. They contain sensitive exposure information, so limiting access to people who can revoke or rotate the secret follows least privilege and reduces the chance of further misuse.

The core concept is least-privilege handling of secret exposure. A secret scanning alert is itself sensitive because it identifies a leaked credential, where it appeared, and other details useful for remediation. In this scenario, the people who can actually respond are the incident-response team and repository admins, so they are the right audience for alert visibility and notifications.

Broadening access to all repository readers or all contributors increases unnecessary disclosure without improving response, because those users cannot revoke the credential or own remediation. Secret validity checks can help prioritize triage, but they do not justify wider sharing of alert details. If broader communication is needed, share a sanitized status update rather than the alert contents. The key takeaway is to keep secret-scanning alerts with responders who can take action.

  • Broad repo access fails because most readers cannot rotate the credential, so wider visibility adds exposure without adding response capability.
  • Team-wide ticket fails because copying alert details into a broad workflow spreads sensitive information beyond the people who need it.
  • Validity-based sharing fails because verification status helps prioritize the alert, not decide who should see the alert details.

Question 45

Topic: Configure and Use Dependabot and Dependency Review

A company uses a monorepo with this layout:

/web  -> npm  -> Frontend team
/api  -> pip  -> Platform team

They want fewer Dependabot security update pull requests, but each PR must stay within one team’s review scope and avoid bundling unrelated rollback risk. Which dependabot.yml strategy is best?

Options:

  • A. Define security update groups by severity so critical and high alerts are grouped together across ecosystems.

  • B. Define one repository-wide group for all security updates so every alert is fixed in a single pull request.

  • C. Do not group security updates; instead raise Dependabot’s open pull request limit.

  • D. Define separate groups for security-updates in each ecosystem/directory entry, and group only related packages owned by the same team.

Best answer: D

Explanation: The best approach is to group Dependabot security updates within the same package ecosystem and directory, aligned to team ownership. That reduces PR noise without combining unrelated services or ecosystems into one hard-to-review pull request.

A good grouping strategy for Dependabot security updates should match a natural review and rollback boundary. In a monorepo, package-ecosystem and directory usually separate manifests, build paths, and owning teams. Creating grouped security updates inside each ecosystem/directory entry lets Dependabot combine related vulnerable dependency fixes while keeping each pull request narrow enough for the correct team to test and approve.

Repository-wide grouping is usually too broad because it mixes unrelated changes and expands blast radius if a rollback is needed. Leaving updates ungrouped preserves isolation but does not meet the goal of reducing PR volume. The key takeaway is to group by technical ownership boundaries, not by the entire repository.

  • Single mega-PR reduces PR count, but it mixes unrelated ecosystems and increases review and rollback risk.
  • No grouping keeps changes isolated, but it does not create a valid grouping strategy for related security updates.
  • Severity buckets are not the best boundary here because review ownership and manifest boundaries are the real governance constraints.
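A minimal dependabot.yml sketch of that strategy, assuming weekly schedules and illustrative group names, might look like this:

```yaml
# One update entry per ecosystem/directory, each with its own
# security-update group, so every grouped PR stays within one
# team's review and rollback boundary.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/web"
    schedule:
      interval: "weekly"
    groups:
      web-security:
        applies-to: security-updates
        patterns:
          - "*"

  - package-ecosystem: "pip"
    directory: "/api"
    schedule:
      interval: "weekly"
    groups:
      api-security:
        applies-to: security-updates
        patterns:
          - "*"
```

The broad `"*"` patterns are a simplifying assumption; in practice a team might narrow them to related package families it owns.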

Question 46

Topic: Configure and Use Secret Scanning

A developer tries to push a commit to a private repository with secret scanning push protection enabled. The push is blocked:

File: config.example.env
Detected: generic API token
Value: PLACEHOLDER_TOKEN_12345
Validity check: unavailable

The developer confirms the value is only a dummy placeholder for documentation and the organization uses delegated bypass review. What is the best next action?

Options:

  • A. Open a pull request with the same commit because push protection only blocks direct pushes.

  • B. Disable push protection for the repository, then push the commit.

  • C. Submit a push protection bypass request and mark it as a placeholder or false positive.

  • D. Change the placeholder format until the detector no longer matches it.

Best answer: C

Explanation: Push protection is meant to stop potential secrets before they enter the repository. When the blocked value is confirmed to be a dummy placeholder and delegated bypass is enabled, the right response is to use the bypass request flow with the appropriate reason.

The key concept is that push protection blocks suspicious secrets pre-push, but it also provides a governed exception path for non-sensitive placeholders and false positives. In this scenario, the developer already verified that the value is not a real credential, and the organization requires delegated review rather than turning the control off.

A good response is to:

  • confirm the string is not usable as a real secret
  • submit the bypass request with the correct reason
  • keep push protection enabled for future pushes

The "Validity check: unavailable" detail does not justify disabling or evading the control; it just means validity was not confirmed automatically. The safest and most auditable action is the approved bypass path.

  • Disabling push protection weakens the repository control for everyone and is unnecessary when a bypass workflow exists.
  • Opening a pull request does not solve the issue because the blocked content still has to be pushed to GitHub first.
  • Altering the placeholder to avoid detection bypasses governance and removes the review trail the organization expects.

Question 47

Topic: Configure and Use Code Scanning with CodeQL

A private repository uses GitHub code scanning. A Java alert reports java/sql-injection in src/test/resources/insecure-samples/DemoQuery.java. The AppSec team confirms the data flow is real, but the file is an intentionally vulnerable training sample used only by automated tests and excluded from release artifacts. The organization requires dismissed alerts to be explainable during later reviews. Which action is best?

Options:

  • A. Leave the alert open to avoid a dismissal comment.

  • B. Dismiss as won't fix without further explanation.

  • C. Dismiss as used in tests with release-exclusion rationale.

  • D. Dismiss as false positive because it is non-production.

Best answer: C

Explanation: The alert is valid, but its scope is limited to intentionally vulnerable test code that is not shipped. Using the specific dismissal reason used in tests and adding supporting context preserves an accurate audit trail for future reviewers.

Code scanning dismissals should reflect why an alert is being closed. In this case, the result is not incorrect (the unsafe flow exists), so false positive would misclassify the finding. The strongest action is to dismiss it as used in tests and document that the file is an intentional training sample excluded from release artifacts.

That combination matters because it:

  • records an accurate reason for closure
  • gives future reviewers evidence for the decision
  • avoids hiding a real finding behind a vague or incorrect label

A good dismissal reduces alert noise while keeping the security record understandable if the repository contents or usage change later.

  • False positive fails because the alert is technically correct; non-production scope does not make the result incorrect.
  • Vague acceptance fails because won't fix without rationale does not clearly explain why this real finding is acceptable.
  • Leave it open fails because known acceptable test-only findings should be triaged with a documented reason rather than left as unresolved noise.

Question 48

Topic: Configure and Use Secret Scanning

An organization uses GitHub Advanced Security in private repositories. Secret scanning is enabled, but committed internal deployment tokens are not generating alerts.

Exhibit:

Token facts
- Issuer: internal deployment service
- Format: ACMEDEPLOY_[A-Z0-9]{20}
- Covered by GitHub partner patterns: No
- Goal: generate secret scanning alerts after commit
- Governance: apply one reusable setting across the org
- Not required now: block pushes

Which configuration should you choose?

Options:

  • A. Create a repository ruleset that rejects the token prefix.

  • B. Add a CodeQL workflow that flags the token prefix.

  • C. Enable push protection and rely on partner validity checks.

  • D. Create an organization-level custom secret scanning pattern for the token.

Best answer: D

Explanation: GitHub secret scanning generates alerts from supported provider patterns or from custom patterns you define. Because this token comes from an internal service and is not covered by a partner pattern, the right configuration is an organization-level custom secret scanning pattern.

The deciding concept is how secret scanning knows what to detect. Built-in secret scanning alerts are generated for supported secret types that GitHub already recognizes, often tied to partner providers. In this scenario, the token issuer is internal, so GitHub has no built-in provider pattern to match.

To generate native secret scanning alerts, the organization should define a custom secret scanning pattern for the token format and apply it at the organization level so the same detection logic covers all target repositories consistently. That matches both the technical requirement to generate alerts after commit and the governance requirement for reusable centralized configuration.

Push protection is a separate preventive control, while CodeQL and rulesets do not create secret scanning alerts.

  • Push protection mismatch is tempting, but push protection is a pre-push control and partner validity checks do not solve detection for an unsupported internal token type.
  • Code scanning confusion fails because CodeQL findings become code scanning alerts, not secret scanning alerts.
  • Ruleset overreach may block content, but it does not create native secret scanning alerts and exceeds the stated need to alert after commit.
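As a sketch, the organization-level custom pattern would be defined in the org's security settings using the token's documented format; the pattern name below is illustrative:

```
Pattern name:   ACME deployment token
Secret format:  ACMEDEPLOY_[A-Z0-9]{20}
```

Once published at the organization level, the same pattern applies across target repositories, satisfying the reusable-configuration requirement without enabling push protection.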

Question 49

Topic: Describe the GHAS Security Features and Functionality

A private monorepo already uses GitHub Advanced Security. During weekly triage, AppSec finds:

  • a secret scanning alert for a supported cloud token that was committed to main
  • a Dependabot alert for a vulnerable package version introduced through a merged pull request

The team wants GHAS controls that shift both checks earlier in the SDLC, before changes reach the default branch whenever possible. Which feature combination best fits that goal?

Options:

  • A. Enable CodeQL code scanning and third-party SARIF uploads

  • B. Enable custom secret scanning patterns and Dependabot security updates

  • C. Enable Dependabot alerts and SBOM export

  • D. Enable secret scanning push protection and dependency review

Best answer: D

Explanation: The gap is that both issues were discovered only after they reached the repository. Secret scanning push protection moves secret detection to push time, and dependency review evaluates dependency changes in pull requests so risk is surfaced before merge.

This scenario is about choosing GHAS features that move security checks earlier in the secure SDLC. A secret scanning alert on main means the secret was detected after it entered the repository; push protection is the paired control that helps stop supported secrets before they are pushed. A Dependabot alert after a merge means the vulnerable dependency is already in the codebase; dependency review is the feature that analyzes dependency changes in pull requests so reviewers can catch risky additions or version changes before merge.

Use the feature by problem type:

  • Secret exposure before commit reaches the repo: secret scanning push protection
  • Dependency risk during pull request review: dependency review
  • Existing vulnerable dependencies already present: Dependabot alerts and security updates

The key distinction is prevention and early review versus after-the-fact detection or remediation.

  • Custom patterns + updates can help with unsupported secret formats and later remediation, but they do not provide the best pre-push and pre-merge coverage requested here.
  • Code scanning + SARIF targets code vulnerability findings, not leaked credentials or dependency version changes in pull requests.
  • Alerts + SBOM improve visibility and inventory, but they do not shift these checks earlier in the developer workflow.
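For the dependency half of that combination, a minimal pull request workflow using the dependency-review-action might look like this sketch; the severity threshold is an illustrative choice, not a requirement:

```yaml
# Runs on every pull request and fails the check when the PR
# introduces dependencies with known vulnerabilities at or
# above the chosen severity.
name: Dependency review
on: pull_request
permissions:
  contents: read
jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: high
```

Pairing this check with secret scanning push protection covers both gaps before changes reach the default branch.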

Question 50

Topic: Configure and Use Code Scanning with CodeQL

A team maintains a private payments-api repository in GitHub Enterprise Cloud. AppSec needs GitHub to identify risky first-party code patterns such as SQL injection and command injection in pull requests. Secret exposure and vulnerable package management are already handled separately. What is the best security action?

Options:

  • A. Enable Dependabot alerts and Dependabot security updates.

  • B. Enable secret scanning push protection with custom patterns.

  • C. Enable code scanning with CodeQL on pull requests.

  • D. Require dependency review on every pull request.

Best answer: C

Explanation: The requirement is to detect vulnerabilities in the team’s own source code, so code scanning is the correct GHAS feature. Secret scanning is for exposed credentials, and Dependabot features focus on dependency risk rather than first-party code flaws.

Code scanning is the GitHub Advanced Security capability used to analyze first-party source code for security weaknesses such as injection, unsafe data flow, and other CWE-aligned findings. In this scenario, the team wants detection of risky application code during pull requests, so enabling code scanning with CodeQL is the best match. Secret scanning looks for leaked secrets, while Dependabot alerts, security updates, and dependency review focus on vulnerable or changed dependencies. Use code scanning when the risk comes from how the code is written, not from credentials or package versions.

  • The secret scanning option fails because push protection and custom patterns target secrets, not insecure query handling or command execution paths.
  • The Dependabot option fails because it addresses known vulnerable dependencies and update PRs, not flaws in the repository’s own code.
  • The dependency review option fails because it evaluates dependency changes in pull requests rather than analyzing source code behavior.
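A minimal advanced-setup workflow that runs CodeQL analysis on pull requests might look like the following sketch; the language and branch name are assumptions for this payments-api scenario:

```yaml
# Analyzes first-party Java code on each pull request and
# uploads results so code scanning alerts appear in the PR.
name: CodeQL
on:
  pull_request:
    branches: [ main ]
permissions:
  contents: read
  security-events: write
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: java
      - uses: github/codeql-action/analyze@v3
```

Default setup in the repository's security settings achieves the same outcome without a committed workflow file, which may be simpler when no customization is needed.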

Continue with full practice

Use the GitHub Advanced Security GH-500 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try GitHub Advanced Security GH-500 on Web View GitHub Advanced Security GH-500 Practice Test


Free review resource

Read the GitHub Advanced Security GH-500 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.

Revised on Thursday, May 14, 2026