Try 50 free GitHub Actions GH-200 questions across the exam domains, with explanations, then continue with full IT Mastery practice.
This free full-length GitHub Actions GH-200 practice exam includes 50 original IT Mastery questions across the exam domains.
These questions are for self-assessment. They are not official exam questions and do not imply affiliation with the exam sponsor.
Count note: this page uses the full-length practice count maintained in the Mastery exam catalog. Some certification vendors publish total questions, scored questions, duration, or unscored/pretest-item rules differently; always confirm exam-day rules with the sponsor.
Need concept review first? Read the GitHub Actions GH-200 Cheat Sheet on Tech Exam Lexicon, then return here for timed mocks and full IT Mastery practice.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
Try GitHub Actions GH-200 on Web, or view the full GitHub Actions GH-200 practice page.
| Domain | Weight |
|---|---|
| Author and Manage Workflows | 24% |
| Consume and Troubleshoot Workflows | 18% |
| Author and Maintain Actions | 18% |
| Manage GitHub Actions for the Enterprise | 25% |
| Secure and Optimize Automation | 15% |
Use this as one diagnostic run. IT Mastery gives you timed mocks, topic drills, analytics, code-reading practice where relevant, and full practice.
Topic: Author and Maintain Actions
A repository uses a local composite action. A workflow run fails on the Add summary step when no with: values are supplied.
```yaml
# workflow step
- name: Add summary
  uses: ./.github/actions/publish-summary
```

```yaml
# .github/actions/publish-summary/action.yml
name: Publish summary
description: Adds a one-line summary
inputs:
  title:
    description: Summary title
    required: false
    default: Build completed
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        if [ -z "${{ inputs.heading }}" ]; then
          echo "title is required"
          exit 1
        fi
        echo "### ${{ inputs.heading }}" >> "$GITHUB_STEP_SUMMARY"
```

```
title is required
Error: Process completed with exit code 1.
```
What is the most likely cause of the failure?
Options:
A. The action defines title, but its steps read inputs.heading.
B. The workflow must pass title explicitly because composite actions do not support input defaults.
C. The action must copy title into GITHUB_ENV before a run step can use it.
D. An optional input cannot also define a default value in action.yml.
Best answer: A
Explanation: The action metadata and the action logic are out of sync. The only defined input is title, so omitting with: should still use Build completed; the failure happens because the composite step reads inputs.heading instead.
For custom actions, the input ID in action.yml is the contract between the caller and the action implementation. The default, description, and required settings apply to that exact input name. Here, the action defines title, but the composite step checks and writes inputs.heading, which is undefined and evaluates as empty.
To fix the problem, make the metadata and implementation use the same input ID:
Either keep the input title and change the step to read inputs.title, or rename the declared input to heading if that is the intended contract. The key takeaway is that input metadata only helps when the action code references the same input names.
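As a sketch, the first fix keeps title as the declared input and makes the composite step read it, so the default applies whenever the caller omits with::

```yaml
# .github/actions/publish-summary/action.yml (sketch of the first fix)
name: Publish summary
description: Adds a one-line summary
inputs:
  title:
    description: Summary title
    required: false
    default: Build completed
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        # Reads the declared input ID, so the default "Build completed"
        # is used when the workflow supplies no with: values.
        echo "### ${{ inputs.title }}" >> "$GITHUB_STEP_SUMMARY"
```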
Copying the value into GITHUB_ENV is unnecessary because ${{ inputs.<id> }} can be used directly in a composite action run step, and required: false combined with default is a perfectly valid pairing in action.yml.

Topic: Author and Manage Workflows
A private repository uses this workflow. The workflow sets permissions: {} at the top level, so any job permission not explicitly granted is none. org/publish-check@v1 creates a check run in the current repository by using ${{ github.token }}.
```yaml
jobs:
  verify:
    runs-on: ubuntu-latest
    permissions: <varies by run>
    steps:
      - uses: actions/checkout@v4
      - run: echo "<testsuite/>" > results.xml
      - uses: org/publish-check@v1
        with:
          token: ${{ github.token }}
          path: results.xml
      - uses: actions/upload-artifact@v4
        with:
          name: junit
          path: results.xml
```
Three test runs use these job permissions:
Run A: {}
Run B: contents: read
Run C: contents: read, checks: write
Which execution trace is correct?
Options:
A. Run A fails at checkout; Run B succeeds because contents: read also allows creating check runs; Run C is broader than necessary.
B. Run A fails at checkout; Run B fails when creating the check run; Run C succeeds, including artifact upload.
C. Run A reaches the check step; Run B fails at artifact upload because actions: write is missing; Run C still needs actions: write.
D. Run A succeeds because the workflow is in the same repository; Run B fails at artifact upload because artifacts need checks: write; Run C fails without statuses: write.
Best answer: B
Explanation: With workflow-level permissions: {}, the job token starts with no repository scopes. In a private repo, actions/checkout needs contents: read, creating a check run needs checks: write, and actions/upload-artifact does not require adding actions: write to GITHUB_TOKEN for this same-run upload.
The key concept is least-privilege GITHUB_TOKEN scoping. Because the workflow sets permissions: {}, each job gets only the permissions explicitly listed for that job.
In this scenario:
- actions/checkout@v4 must read repository contents, so Run A fails and Run B can fetch the repo.
- org/publish-check@v1 creates a check run through the Checks API, which requires checks: write, so Run B fails there.
- actions/upload-artifact@v4 uploads an artifact for the current run by using the Actions runtime service; it does not require adding actions: write to GITHUB_TOKEN here.

The minimum working job permission block is:
```yaml
permissions:
  contents: read
  checks: write
```
Anything broader may work, but it is not the minimum required set.
The option that has Run A reaching the check step fails because checkout cannot succeed without the contents permission. The claim that contents: read is enough fails because reading the repository does not grant permission to create check runs. The claim that artifact upload needs actions: write fails because same-run artifact upload is not unlocked by expanding GITHUB_TOKEN with that scope. The claim that statuses: write is required fails because the action is creating a check run, which uses the Checks API, not commit statuses.

Topic: Author and Maintain Actions
Your team maintains a composite action. One step generates Markdown release notes, and a later step in the same job reads them from the RELEASE_NOTES environment variable. The notes contain multiple lines.
Which configuration is the best choice for the generating step?
```yaml
- id: build_notes
  shell: bash
  run: |
    notes="$(./generate-notes.sh)"
    # write RELEASE_NOTES here
```
Options:
A. Write to GITHUB_ENV with multiline environment-file syntax:

```bash
{
  echo 'RELEASE_NOTES<<EOF'
  echo "$notes"
  echo 'EOF'
} >> "$GITHUB_ENV"
```

B. Export the variable in the step's shell:

```bash
export RELEASE_NOTES="$notes"
```

C. Append a single NAME=value line to GITHUB_ENV:

```bash
echo "RELEASE_NOTES=$notes" >> "$GITHUB_ENV"
```

D. Write to GITHUB_OUTPUT with multiline syntax:

```bash
{
  echo 'RELEASE_NOTES<<EOF'
  echo "$notes"
  echo 'EOF'
} >> "$GITHUB_OUTPUT"
```
Best answer: A
Explanation: For a multiline value that later steps must read as an environment variable, write it to GITHUB_ENV using the multiline delimiter format. That preserves line breaks and persists the value across subsequent steps in the job.
GITHUB_ENV is the environment file for variables that later steps in the same job should receive. When the value spans multiple lines, you must use the multiline form so GitHub Actions can parse the boundaries correctly.
```bash
{
  echo 'RELEASE_NOTES<<EOF'
  echo "$notes"
  echo 'EOF'
} >> "$GITHUB_ENV"
```
A plain NAME=value append is only safe for single-line values. export changes only the current step’s shell process, so the next step will not inherit it. GITHUB_OUTPUT is for step outputs accessed through steps.<id>.outputs, not for creating an environment variable such as RELEASE_NOTES.
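A quick local sketch shows why the delimiter form matters. Here GITHUB_ENV is simulated with a temporary file; on a real runner the variable already points at the runner-managed environment file, which is parsed with the same syntax:

```shell
# Stand-in for the runner's environment file (assumption: on a real
# runner, GITHUB_ENV is already set and this mktemp line is not needed).
GITHUB_ENV="$(mktemp)"
notes="$(printf 'first line\nsecond line')"

# Multiline delimiter form: the value's boundaries are explicit,
# so embedded newlines survive intact.
{
  echo 'RELEASE_NOTES<<EOF'
  echo "$notes"
  echo 'EOF'
} >> "$GITHUB_ENV"

# The file now contains the name, both lines of the value, and the
# closing delimiter, each on its own line.
cat "$GITHUB_ENV"
```

A single `RELEASE_NOTES=$notes` append would instead leave the second line stranded outside any assignment, which is exactly the parsing failure the delimiter form avoids.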
Appending RELEASE_NOTES=$notes directly to GITHUB_ENV breaks when the value contains embedded newlines. export affects only the current shell session for that step, not later steps. GITHUB_OUTPUT creates a step output, not an environment variable for subsequent steps.

Topic: Manage GitHub Actions for the Enterprise
An enterprise team standardizes secret and variable names like AZURE_PROD_* so their purpose is clear across repositories. The api-service repository still has some legacy names.
A workflow in api-service calls this reusable workflow:
```yaml
# caller workflow
jobs:
  deploy:
    uses: octo-org/platform/.github/workflows/deploy.yml@v2
    secrets:
      cloud_client_id: ${{ secrets.AZURE_PROD_CLIENT_ID }}
```

```yaml
# deploy.yml
on:
  workflow_call:
    secrets:
      cloud_client_id:
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - id: inspect
        run: |
          echo "client_set=${{ secrets.cloud_client_id != '' }}" >> "$GITHUB_OUTPUT"
          echo "sub=${{ vars.AZURE_PROD_SUBSCRIPTION }}" >> "$GITHUB_OUTPUT"
      - run: echo "${{ steps.inspect.outputs.client_set }}|${{ steps.inspect.outputs.sub }}"
```
Available to api-service:
| Scope | Name | Value |
|---|---|---|
| Organization secret | AZURE_PROD_CLIENT_ID | org-client-id |
| Repository secret | PROD_CLIENT_ID | legacy-client-id |
| Organization variable | AZURE_PROD_SUBSCRIPTION | Org-Sub |
| Repository variable | AZURE_PROD_SUBSCRIPTION | Repo-Sub |
| Repository variable | SUBSCRIPTION | Legacy-Sub |
The organization secret is shared with api-service.
Which output appears in the final step?
Options:
A. true|Repo-Sub
B. false|Repo-Sub
C. true|Legacy-Sub
D. true|Org-Sub
Best answer: A
Explanation: The reusable workflow receives cloud_client_id because the caller explicitly maps secrets.AZURE_PROD_CLIENT_ID into that secret name. For vars.AZURE_PROD_SUBSCRIPTION, GitHub resolves the exact variable name and uses the repository-scoped value over the organization-scoped value.
This run depends on two GitHub Actions rules: explicit secret mapping for reusable workflows and exact-name variable lookup with scope precedence. The caller passes secrets.AZURE_PROD_CLIENT_ID into the reusable workflow as cloud_client_id, so secrets.cloud_client_id != '' evaluates to true.
For vars.AZURE_PROD_SUBSCRIPTION, GitHub looks for that exact variable name. Because both an organization variable and a repository variable exist with the same name, the repository-scoped value is used. The legacy names PROD_CLIENT_ID and SUBSCRIPTION are not considered, because GitHub does not infer meaning from similar names.
Clear naming conventions such as AZURE_PROD_* make reusable workflows predictable across repositories and reduce mistakes caused by ambiguous legacy names.
The incorrect options either overlook the explicit secrets: mapping from the caller into cloud_client_id, or misread how the exact variable name AZURE_PROD_SUBSCRIPTION is resolved across scopes.

Topic: Consume and Troubleshoot Workflows
You maintain a repository where all pushes go directly to main. The workflow starts as:
```yaml
name: CI
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - run: echo ok
```
Timeline:
1. Push a commit to main.
2. Push a commit that renames .github/workflows/ci.yml to .github/workflows/pipeline.yml; the file contents stay the same.
3. Disable the workflow CI in the Actions tab, then push another commit.
4. Push a commit that deletes .github/workflows/pipeline.yml.

Which trace matches GitHub Actions behavior?
Options:
A. Pushes 1 and 2 create runs; Pushes 3 and 4 do not; earlier completed runs stay in history.
B. Push 1 creates a run; Push 2 does not because a renamed workflow takes effect on the next push; earlier completed runs stay in history.
C. Pushes 1, 2, and 3 create runs; disabling blocks only manual runs; deleting the file removes prior history.
D. Pushes 1 and 2 create runs; Push 3 does not; Push 4 still creates one final run from the previous commit’s workflow definition.
Best answer: A
Explanation: GitHub Actions evaluates workflow files that exist in .github/workflows for the pushed commit. Renaming the file within that directory does not stop the next eligible push from running, but disabling the workflow stops later runs, and deleting the file leaves no workflow to trigger. Completed runs remain in history.
The core concept is workflow availability at the time each event is processed. A workflow is eligible only if the workflow file exists in .github/workflows and the workflow is enabled.
The workflow file must exist in .github/workflows for the pushed commit, and the workflow must be enabled. Disabling or deleting a workflow affects future runs, not completed ones. Previous runs remain in workflow history until they expire by retention policy or are manually deleted. The closest misconception is treating a rename within .github/workflows as if the workflow had been removed: renaming a file inside .github/workflows is not the same as moving it out of the workflows directory.

Topic: Author and Maintain Actions
Your team is publishing a composite action to GitHub Marketplace. It accepts an artifact name and region, returns a preview URL, and comments that URL on pull requests by using GITHUB_TOKEN.
```yaml
inputs:
  artifact-name: required
  region: required
outputs:
  preview-url
```
Consumers often miss the required permission and do not know how to read the output. Which README configuration is best before publication?
Options:
A. A README with input/output tables, a least-privilege permissions: section, and workflow examples that pass inputs and read preview-url
B. A README focused on installation and runner labels, with outputs documented only through sample logs
C. A brief README that points users to action.yml for inputs and lets them discover permissions during setup
D. A README that recommends a PAT and broad write permissions so consumers avoid comment failures
Best answer: A
Explanation: For a published action, the README should clearly describe the public contract: inputs, outputs, required permissions, and working usage examples. Including a least-privilege permissions: section and showing how to consume preview-url directly addresses the stated support problems.
When an action is published, action.yml defines metadata, but the README is where consumers learn how to use it safely and correctly. Good action documentation should explain required and optional inputs, documented outputs, any workflow permissions: the caller must grant, and at least one realistic usage example.
In this scenario, the action comments on a pull request by using GITHUB_TOKEN, so the documentation should explicitly call out the needed permission instead of making users infer it from failures. The example should also show how callers pass artifact-name and region, then read preview-url from the action step output. That combination reduces setup errors and avoids over-privileged workarounds.
The key takeaway is that action documentation should describe both the interface and the safe usage pattern.
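For instance, the README's usage section might show a minimal caller like the one below. The action reference, artifact name, and region values are illustrative assumptions; the permission shown assumes the comment is posted through the pull request API with GITHUB_TOKEN:

```yaml
# Hypothetical consumer workflow excerpt from the README
permissions:
  pull-requests: write   # assumption: lets the action comment on the PR

steps:
  - id: preview
    uses: acme/preview-deploy-action@v1   # hypothetical published action
    with:
      artifact-name: web-bundle
      region: eu-west-1
  - run: echo "Deployed to ${{ steps.preview.outputs.preview-url }}"
```

An example like this covers all three support problems at once: the required permission, how inputs are passed, and how preview-url is read from the step output.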
Relying on action.yml alone is incomplete because required permissions are easy to miss during setup.

Topic: Author and Manage Workflows
On a push to branch release, the team wants the uploaded artifact to be named web-release. Which replacement for the name line uses the value at the correct evaluation stage?
```yaml
name: Package
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Compute artifact name
        id: meta
        run: echo "artifact_name=web-${GITHUB_REF_NAME}" >> "$GITHUB_OUTPUT"
      - name: Upload bundle
        uses: actions/upload-artifact@v4
        with:
          name: $artifact_name
          path: dist/
```
Options:
A. name: ${{ env.artifact_name }}
B. name: ${{ steps.meta.outputs.artifact_name }}
C. name: ${{ github.ref_name }}
D. name: ${artifact_name}
Best answer: B
Explanation: with: values are not expanded by the shell, so $artifact_name is treated as literal text. Because the first step writes a step output to GITHUB_OUTPUT, the later step must reference that output with ${{ steps.meta.outputs.artifact_name }}.
GitHub Actions has two different evaluation styles here: shell expansion inside a run: script, and expression evaluation in workflow fields such as with:. The first step runs in a shell, so ${GITHUB_REF_NAME} becomes release, and the step writes artifact_name=web-release to GITHUB_OUTPUT.
The upload step is different: its with: block is not executed by a shell. To consume a value produced by an earlier step, you must use the step output context:
1. Give the producing step an id.
2. Write name=value to GITHUB_OUTPUT.
3. Reference steps.<id>.outputs.<name> inside ${{ }}.

That is why the correct replacement is the expression that references steps.meta.outputs.artifact_name, not shell-style interpolation or a different context.
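Applied to the snippet above, the upload step becomes:

```yaml
- name: Upload bundle
  uses: actions/upload-artifact@v4
  with:
    name: ${{ steps.meta.outputs.artifact_name }}   # evaluates to web-release on branch release
    path: dist/
```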
env.artifact_name would require an environment variable, but this workflow stores the value as a step output. ${artifact_name} is shell-style interpolation, not a GitHub Actions expression for a with: input. github.ref_name would resolve to release, but it skips the web- prefix created by the previous step.

Topic: Consume and Troubleshoot Workflows
A workflow run failed for one matrix variant, but the run history shows six entries all labeled test. The team wants future failures to be identifiable from the run history without opening each job log.
```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        node: [18, 20]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm test
```
Which change is the best next step?
Options:
A. Set strategy.fail-fast: false so failed matrix variants get unique labels.
B. Move each OS and Node combination into a separate workflow file.
C. Add a job-level name that includes ${{ matrix.os }} and ${{ matrix.node }}.
D. Add name values to the steps so each matrix variant shows different step titles.
Best answer: C
Explanation: Matrix jobs create multiple job variants, but the run history identifies them by job name. If every expansion keeps the same name, the fix is to set a job-level name that includes matrix values such as OS and Node version.
In a matrix strategy, GitHub Actions expands one job definition into multiple job runs. If you do not give that job a distinct name, the run history can show the same label for every matrix variant, which makes troubleshooting slower. The best fix is to add a job-level name that uses matrix context values, for example:
```yaml
name: test (${{ matrix.os }}, Node ${{ matrix.node }})
```
That makes entries like test (windows-latest, Node 20) appear directly in run history, so the failing variant is obvious before you open logs. Step names only help after you open a specific job, and fail-fast changes cancellation behavior, not labeling. Splitting the matrix into separate workflows is unnecessary duplication when the matrix context already solves the visibility problem.
fail-fast fails because it controls whether other matrix jobs stop after a failure, not how jobs are named.

Topic: Secure and Optimize Automation
A service repository deploys to production from main with this workflow:
```yaml
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: ./build.sh
  deploy-prod:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy-prod.sh
```
Builds must continue automatically on every push. The deploy-prod job must pause until a member of the ops-reviewers team approves the production deployment. Which configuration best meets this requirement?
Options:
A. Protect main with required pull request reviews, and keep deploy-prod running automatically after merges to main.
B. Add an if condition so only specific usernames can run deploy-prod.
C. Replace push with workflow_dispatch, and allow only ops engineers to start the workflow.
D. Create a production environment with ops-reviewers as required reviewers, and set environment: production on deploy-prod.
Best answer: D
Explanation: Use a GitHub Actions environment for production and add required reviewers to that environment. When the deploy job references that environment, GitHub pauses the job until an authorized reviewer approves it, which directly satisfies the requirement.
Environment protection rules are the built-in way to require human approval before a sensitive job deploys. Configure a production environment in the repository, add the ops-reviewers team as required reviewers, and reference that environment from the deploy-prod job.
```yaml
deploy-prod:
  needs: build
  runs-on: ubuntu-latest
  environment: production
  steps:
    - run: ./deploy-prod.sh
```
When the workflow reaches that job, GitHub holds the deployment until an allowed reviewer approves the environment. This preserves automatic build execution on every push while adding a deployment-specific approval gate. Branch protection, manual triggers, and username checks can restrict code changes or who starts a run, but they do not provide the same built-in pre-deployment reviewer control.
Required pull request reviews gate merges to main, but they do not require approval for each production deployment job. workflow_dispatch changes how runs start and removes the automatic push flow without adding an environment approval gate. An if condition only controls job logic based on identity and does not create an auditable reviewer approval step.

Topic: Author and Maintain Actions
Review the workflow snippet:
```yaml
name: metadata
on: pull_request
jobs:
  check:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - id: meta
        uses: acme/release-meta-action@v1
```
acme/release-meta-action@v1 is currently a Docker action, but its logic is only small Node-based parsing plus one npm library. The team wants the same action to run in all matrix jobs, start faster than a container action, and keep dependencies packaged with the action instead of depending on runner tools. Which edit best satisfies this?
Options:
A. Rewrite it as a composite action that runs bash and jq.
B. Rewrite it as a JavaScript action and bundle the npm dependency.
C. Move the logic to a service container used by each matrix job.
D. Keep it as a Docker action and pin it to a full commit SHA.
Best answer: B
Explanation: Because the workflow runs on an Ubuntu, Windows, and macOS matrix, a Docker action is a poor fit for this requirement set. A JavaScript action is portable across those runners, starts faster than a container action, and can bundle its npm dependency so the action remains self-contained.
Docker actions are useful when you need strong dependency isolation, but they add container startup overhead and are not the best fit when the same action must run unchanged across Ubuntu, Windows, and macOS matrix jobs. A JavaScript action runs directly on the runner, so it is typically faster to start and works across the hosted runner operating systems.
It also gives better dependency control than a composite action in this scenario because you can package the required Node libraries with the action instead of assuming tools such as bash or jq exist on every runner. Composite actions are often easier to maintain when you are only stitching together existing steps, but they depend more on the runner environment unless you add extra install logic.
The key tradeoff here is that JavaScript actions best balance portability, startup time, and self-contained dependencies.
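A minimal JavaScript-action layout for this kind of rewrite might look like the following sketch. The file names, output name, and the note about bundling are illustrative assumptions, not details from the scenario:

```yaml
# action.yml for a JavaScript action (sketch)
name: release-meta
description: Parses release metadata
outputs:
  version:
    description: Parsed version string   # hypothetical output
runs:
  using: node20
  main: dist/index.js   # npm dependencies are bundled into this file
```

Because the npm library is compiled into dist/index.js ahead of time, the action runs the same way on Ubuntu, Windows, and macOS runners with no container start-up and no assumptions about preinstalled runner tools.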
Topic: Manage GitHub Actions for the Enterprise
You manage GitHub Actions for an organization. The general runner group has six self-hosted runners labeled linux-x64. The ui runner group has two self-hosted runners labeled browser. Queue time spikes only when UI tests fan out, but build jobs must stay fast and the team wants queue/failure trends to show whether the ui pool needs more capacity.
```yaml
jobs:
  build:
    runs-on:
      group: general
      labels: linux-x64
    steps:
      - run: ./build.sh
  ui-test:
    strategy:
      matrix:
        shard: [1, 2, 3, 4]
    runs-on:
      group: general
      labels: linux-x64
    steps:
      - run: ./run-ui-tests.sh --shard ${{ matrix.shard }}
```
Which edit best meets this goal?
Options:
A. Add max-parallel: 2 to the ui-test matrix.
B. Set continue-on-error: true on ui-test.
C. Change ui-test to group: ui with labels: browser.
D. Add needs: build to ui-test.
Best answer: C
Explanation: Routing ui-test to a dedicated runner group is the best way to separate browser-heavy work from general build capacity. That keeps build jobs responsive and makes queue times and failures on the ui group a clear signal for scale-out decisions.
Runner groups and labels are the right mechanism for workload separation in GitHub Actions. In this workflow, the matrix creates four ui-test jobs, and all of them currently compete with build in the general pool. Moving ui-test to the ui group with the browser label isolates that workload, so queue buildup and runner-specific failures are attributable to the UI runner class instead of being mixed with normal build demand.
After the change:

- build stays on the general runners.
- ui-test uses only the browser runners in the ui group.
- Queue and failure trends for ui now show whether that pool needs more runners.

Throttling or delaying jobs on the same pool can reduce pressure, but it does not give the same clean capacity signal for targeted scale-out.
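Concretely, the edited ui-test job from the snippet above would target the dedicated group:

```yaml
ui-test:
  strategy:
    matrix:
      shard: [1, 2, 3, 4]
  runs-on:
    group: ui        # dedicated runner group for browser-heavy work
    labels: browser  # matches the two UI runners
  steps:
    - run: ./run-ui-tests.sh --shard ${{ matrix.shard }}
```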
max-parallel throttles UI jobs on the same pool, but it does not separate UI demand from build demand. needs: build serializes the workflow more, but both jobs still target general runners. continue-on-error weakens failure visibility instead of improving runner-capacity monitoring.

Topic: Consume and Troubleshoot Workflows
During a pull request run, this workflow and run summary are shown. Which statement correctly identifies the actual failure and the downstream results?
```yaml
name: ci
on: pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: npm run lint
  test:
    needs: lint
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: [18, 20]
      fail-fast: true
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - name: Restore cache
        uses: actions/cache@v4
        with:
          path: .npm
          key: ${{ runner.os }}-npm-${{ matrix.node }}
      - name: Run tests
        run: npm test
      - name: Upload test logs
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: logs-${{ matrix.os }}-${{ matrix.node }}
          path: test-output.log
  package:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - run: npm pack
```
Run summary:

```
- lint .................................... Success
- test (ubuntu-latest, 18) ................ Success
- test (ubuntu-latest, 20) ................ Success
- test (windows-latest, 18) ............... Cancelled
- test (windows-latest, 20) ............... Failed
    Checkout .............................. Success
    Setup Node ............................ Success
    Restore cache ......................... Success
    Run tests ............................. Failed
    Upload test logs ...................... Success
- package ................................. Skipped
```
Options:
A. The test (windows-latest, 18) job was skipped by the if: failure() condition, so only one Windows matrix job actually ran.
B. The package job failed before the Windows tests completed, which caused test (windows-latest, 18) to be canceled.
C. The Restore cache step failed in test (windows-latest, 20), and the artifact upload changed the job result to failed.
D. The Run tests step failed in test (windows-latest, 20); the log upload still ran there, test (windows-latest, 18) was canceled by fail-fast, and package was skipped because needs: test was not successful.
Best answer: D
Explanation: The root failure is the Run tests step inside test (windows-latest, 20). After that failure, the failure() condition allows the log-upload step to run, fail-fast: true cancels the remaining matrix job, and package is skipped because its needs dependency did not succeed.
This matrix expands test into four jobs: two operating systems and two Node versions. In the run summary, the only actual failed step is Run tests inside test (windows-latest, 20). The Upload test logs step succeeds afterward because if: failure() evaluates to true once an earlier step in that same job has failed.
Two more rules explain the rest of the summary:

- fail-fast: true cancels other queued or in-progress matrix copies after a failure, which explains test (windows-latest, 18) being canceled.
- package has needs: test, so it waits for the matrix result and is skipped when that required job set is not fully successful.

The canceled and skipped jobs are downstream effects of the failed matrix copy, not separate root failures.
Restore cache is explicitly marked Success, so it is not the failing step. package is marked Skipped and depends on test, so it cannot fail first or cancel matrix jobs. if: failure() applies only to Upload test logs, not to the entire Windows matrix job.

Topic: Manage GitHub Actions for the Enterprise
An enterprise platform team publishes this reusable workflow. Deployments require a Linux self-hosted runner labeled deploy. Repository teams often misconfigure callers because the workflow contract is unclear.
```yaml
name: shared-deploy
on:
  workflow_call:
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh "${{ inputs.environment }}"
        env:
          DEPLOY_TOKEN: ${{ secrets.deploy_token }}
      - run: gh release create "${{ github.ref_name }}"
        env:
          GH_TOKEN: ${{ github.token }}
```
Which edit best documents the required input, secret, token scope, and runner assumption for consumers?
Options:
A. Keep workflow_call empty, move values to env, and rely on default permissions.
B. Declare required workflow_call input and secret, explicit contents: write, and [self-hosted, linux, deploy].
C. Use secrets: inherit and keep the generic self-hosted runner label.
D. Replace workflow_call with workflow_dispatch and document requirements only in a README.
Best answer: B
Explanation: The best edit is to make the reusable workflow declare its own contract. Required workflow_call inputs and secrets tell callers exactly what must be passed, while explicit permissions and specific runner labels make token scope and runner expectations visible in the workflow itself.
Reusable workflows are easiest to consume when their contract is expressed in the workflow file, not implied by step usage. Under on.workflow_call, declare typed inputs and named secrets with required: true so GitHub can validate caller configuration. Add an explicit permissions block for the GITHUB_TOKEN scope the workflow needs; gh release create requires repository content write access. Use specific runner labels such as [self-hosted, linux, deploy] instead of only self-hosted so consumers can see the workflow depends on a particular runner class.
A README can add context, but the key contract should be encoded in the reusable workflow so teams discover requirements early and misconfigurations fail fast.
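As a sketch, option B applied to shared-deploy could look like this. The input type and exact permission block follow the scenario's requirements; treat the details as an assumption about how this particular team would encode them:

```yaml
name: shared-deploy
on:
  workflow_call:
    inputs:
      environment:
        type: string
        required: true        # callers must say which environment to deploy
    secrets:
      deploy_token:
        required: true        # named secret the caller must pass
permissions:
  contents: write             # gh release create needs repo content write
jobs:
  deploy:
    runs-on: [self-hosted, linux, deploy]   # specific runner class, not just self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh "${{ inputs.environment }}"
        env:
          DEPLOY_TOKEN: ${{ secrets.deploy_token }}
      - run: gh release create "${{ github.ref_name }}"
        env:
          GH_TOKEN: ${{ github.token }}
```

With this contract in the file, a caller that forgets the input or the secret fails validation immediately instead of failing midway through a deployment.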
Moving values to env hides the contract because env does not declare required caller inputs, named secrets, or token scopes. secrets: inherit with a generic self-hosted label does not tell consumers which secret name or runner labels are actually required. workflow_dispatch removes reusable-workflow semantics and leaves the contract outside GitHub's validation model.

Topic: Secure and Optimize Automation
A team uses a reusable workflow to build dist/app.tgz and create an artifact attestation. All third-party actions inside package.yml are already pinned to full commit SHAs, and the job uses GitHub-hosted runners. The team wants consumers to rely on the attestation as evidence of a reviewed, immutable build process.
```yaml
name: release
on:
  push:
    tags: ['v*']
permissions:
  contents: read
  attestations: write
  id-token: write
jobs:
  package:
    uses: octo-org/build-templates/.github/workflows/package.yml@main
```
Which edit best addresses the remaining trust limitation?
Options:
A. Add workflow_dispatch as an additional trigger
B. Grant contents: write to the workflow
C. Pin the reusable workflow reference to a full commit SHA
D. Pin the reusable workflow reference to a major version tag
Best answer: C
Explanation: Artifact attestations prove provenance for the run that produced an artifact, but they do not make a floating workflow reference trustworthy. Because the reusable workflow is called with @main, the build definition can change over time. Pinning it to a full commit SHA makes the attested process immutable and reviewable.
The core issue is workflow trust. An artifact attestation can accurately state which workflow run produced an artifact, but it does not guarantee that the workflow definition itself was fixed or trusted. In this snippet, the reusable workflow is referenced by @main, which is a moving branch reference. That means future runs can produce valid attestations while using different workflow logic.
attestations: write and id-token: write to generate the attestation.The key takeaway is that provenance is only as trustworthy as the workflow, dependencies, and runner environment that produced it.
A major version tag such as v1 can still be moved to different commits, so it does not make the reference immutable.

Topic: Secure and Optimize Automation
An organization wants teams to keep using currently approved GitHub Actions. If a pull request adds a new third-party or unverified action, or changes an existing uses: reference, the security team must review it before the workflow can be used on the protected main branch. Which configuration best meets this requirement?
Options:
A. Use environment required reviewers for all jobs and allow third-party actions pinned to tags
B. Use an org Actions allowlist with full-SHA pinning and CODEOWNERS review on workflow files
C. Use first-time contributor workflow approvals and keep the repository’s default action policy
D. Use restricted self-hosted runners for all workflows and set GITHUB_TOKEN permissions to read-only
Best answer: B
Explanation: The best fit is to combine governance at the policy layer with review at the pull request layer. An Actions allowlist limits which actions can run, and required CODEOWNERS review on workflow files makes the security team approve any newly introduced or changed action reference before it reaches main.
GitHub Actions does not give trusted maintainers a general runtime prompt to approve each newly added marketplace action. The durable approach is to control both what can run and who must review the change that introduces it. Use an organization Actions policy to allow only approved actions or reusable workflows, and require full commit SHA pinning for integrity. Then protect workflow files with a ruleset or branch protection that requires CODEOWNERS review by the security team. That way, a pull request that adds or changes a uses: reference cannot land on main without review, and even after approval only allowlisted, immutable action versions can run. Environment approvals, fork-run approvals, and runner isolation address different risks but do not create an approval flow for newly introduced actions.
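As a sketch of the review half of this setup, a CODEOWNERS entry can route every workflow-file change to the security team (the team name here is hypothetical):

```
# CODEOWNERS
# Any PR touching workflow files requires approval from the security team
# before it can merge to a protected branch with CODEOWNERS review enabled.
/.github/workflows/ @octo-org/security-team
```

Combined with a ruleset or branch protection rule on main that requires CODEOWNERS review, this makes any added or changed uses: reference pass through the security team before it can run on main.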
Topic: Consume and Troubleshoot Workflows
An operations team wants to pause .github/workflows/deploy.yml during a release freeze. They must stop all future runs now, keep prior run history available, and later continue using the same workflow entry in the Actions tab. Which configuration best meets this requirement?
Options:
A. Rename deploy.yml to deploy.yml.disabled
B. Delete deploy.yml and restore it later
C. Move deploy.yml to an archive folder
D. Disable the workflow in GitHub Actions
Best answer: D
Explanation: Disabling is the best fit because it pauses the workflow without removing or relocating its definition. Renaming, moving, or deleting the file can also stop future runs, but those are file lifecycle changes rather than a reversible disabled state tied to the same workflow history.
GitHub Actions recognizes workflow definitions from YAML files stored in .github/workflows. When you disable a workflow, GitHub stops triggering future runs, but the workflow remains intact and its historical runs stay associated with that same workflow.
If you rename the file so it is no longer a recognized workflow file, move it out of .github/workflows, or delete it, future runs stop because GitHub no longer has an active workflow definition in the expected location. Past runs can remain in history, but you have changed or removed the file itself instead of putting the workflow into a reversible paused state. For a temporary freeze where continuity matters, disabling is the cleanest choice.
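A workflow can be disabled from the Actions tab UI, or, as a sketch, with the GitHub CLI (repository name hypothetical; requires an authenticated gh session):

```shell
# Pause the workflow without touching the file
gh workflow disable deploy.yml --repo octo-org/api-service

# After the release freeze, resume the same workflow entry
gh workflow enable deploy.yml --repo octo-org/api-service
```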
Moving the file out of .github/workflows prevents future triggers, but the workflow stays unavailable until the file is returned.

Topic: Author and Manage Workflows
A public repository runs the following workflow. The pull request comes from a fork. The GHCR package is private and requires authentication. REGISTRY_USER and REGISTRY_PASSWORD are repository secrets in the base repository.
on: pull_request
jobs:
test:
runs-on: ubuntu-latest
services:
db:
image: ghcr.io/acme/private-postgres:14
credentials:
username: ${{ secrets.REGISTRY_USER }}
password: ${{ secrets.REGISTRY_PASSWORD }}
ports:
- 5432:5432
options: >-
--health-cmd "pg_isready -U postgres"
--health-interval 5s
--health-timeout 3s
--health-retries 5
steps:
- uses: actions/checkout@v4
- run: npm test
Which statement matches GitHub Actions behavior for this run?
Options:
A. The job completes actions/checkout first, then fails when the service health check times out.
B. The job starts without the service container, and only npm test fails when localhost:5432 is unreachable.
C. The job pulls the service image with GITHUB_TOKEN automatically, so the missing secrets do not matter.
D. The job fails before actions/checkout because the private service image cannot authenticate on a forked pull_request run.
Best answer: D
Explanation: On a pull_request run from a fork, repository secrets from the base repository are not exposed to the workflow. Because GitHub Actions must pull and start service containers before any step runs, the job fails during service setup and never reaches actions/checkout.
The key concept is service container startup order combined with secret availability on fork-originated pull requests. GitHub Actions provisions required service containers before executing any job steps. In this workflow, the db service uses credentials from repository secrets to pull a private GHCR image.
Because the event is pull_request from a fork, those repository secrets are not available to the run. That means the registry credentials cannot be supplied, so the private image pull fails during job initialization. Since the container never starts, GitHub Actions does not move on to step execution.
Health checks and port mapping matter only after the image is successfully pulled and the container is created. The closest wrong idea is that checkout runs first, but step execution begins only after required services are up.
GitHub Actions does not automatically populate services.<name>.credentials with GITHUB_TOKEN for a private registry pull.

Topic: Author and Manage Workflows
All pull requests in repo api-service target the default branch main. The workflow below already exists on main:
name: PR check
on:
pull_request_target:
types: [opened, synchronize]
jobs:
verify:
runs-on: ubuntu-latest
steps:
- run: exit 1
A maintainer changes the step to run: echo ok in branch fix-pr-check and opens a PR to main. The run triggered for that PR still fails with Process completed with exit code 1. A fork-originated PR to main fails with the same log line. What is the best explanation?
Options:
A. pull_request_target loads the workflow from the base branch main.
B. Workflow changes in PR branches apply only after the branch is rebased.
C. Both runs fail because secrets are unavailable in pull request workflows.
D. The hosted runner cached the previous workflow definition.
Best answer: A
Explanation: The key clue is the pull_request_target trigger. For PRs targeting main, GitHub uses the workflow definition from the base branch, so edits in a feature branch or a fork do not change that run until the workflow change is merged.
pull_request_target behaves differently from pull_request. It runs in the context of the target repository and reads the workflow file from the PR’s base branch. In this scenario, both the maintainer’s feature-branch PR and the fork-originated PR target the default branch main, so GitHub executes the workflow version already committed on main, not the edited version in fix-pr-check or in the fork. That is why both runs show the same exit 1 failure.
When troubleshooting pull_request_target, inspect the workflow file on the target branch first. If you want the PR branch’s workflow changes to affect the run immediately, you would typically use a different event such as pull_request instead.
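The difference between the two triggers can be sketched as two alternative on: blocks:

```yaml
# Runs the workflow version from the PR branch; on fork PRs,
# base-repository secrets are withheld and GITHUB_TOKEN is restricted.
on: pull_request

# Runs the workflow version from the PR's base branch with the base
# repository's permissions and secrets; edits to the workflow in the
# PR branch or fork do not affect the run until they are merged.
# on: pull_request_target
```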
Runner caching of workflow definitions is not a real behavior; the base branch version is simply what pull_request_target loads.

Topic: Manage GitHub Actions for the Enterprise
A private repository Octo/api belongs to the Octo organization in the Acme enterprise. Octo/shared-build is an action stored in another private repository in the same organization.
Actions access policies
| Layer | Policy |
|---|---|
| Enterprise | Allow GitHub-authored actions and actions from repositories in the enterprise |
| Organization | Allow GitHub-authored actions only |
| Repository | Inherit organization policy |
A push to main triggers this workflow. Assume GitHub validates all uses: references before any job steps start.
name: ci
on: push
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: Octo/shared-build@3f2c9e7b4a6d1c8e5f0a2b7c9d1e3f4a5b6c7d8e
- run: echo "build complete"
Which outcome matches GitHub Actions behavior?
Options:
A. actions/checkout runs first, and the job fails only when it reaches Octo/shared-build.
B. The workflow is blocked before any step runs because the repository inherits the organization restriction, so Octo/shared-build is not allowed.
C. Only the run step executes because action access policy applies to uses steps, not shell steps.
D. The job succeeds because the enterprise policy already allows actions from repositories in the enterprise.
Best answer: B
Explanation: For allowed actions, the effective repository behavior comes from the most restrictive applicable policy layer. Here, the repository inherits the organization’s GitHub-authored-only rule, so the org-owned Octo/shared-build action is blocked before the job starts.
GitHub Actions access controls stack across enterprise, organization, and repository scopes. A lower scope can inherit or further restrict what a higher scope allows, but it does not widen the effective policy when a parent is stricter. In this scenario, the enterprise would allow both GitHub-authored actions and actions from repositories in the enterprise, but the organization narrows that to GitHub-authored actions only, and the repository inherits that narrower rule.
actions/checkout@v4 is GitHub-authored, so it is allowed. Octo/shared-build@... is owned by the organization, not by GitHub, so it is disallowed under the inherited organization policy. Because uses: references are validated before job steps begin, the build job does not start and the later shell step never runs.
The key takeaway is that the effective policy is the restrictive result of the layered settings, not the broadest one.
The option claiming actions/checkout runs first fails because the scenario states that uses: references are validated before any job steps start.

Topic: Author and Maintain Actions
You are adding a custom composite action named lint-config to a repository. Several workflows in the same repository must reuse it by local path, and each workflow already runs actions/checkout@v4 first. The action should stay versioned with the repository rather than published separately. Which configuration is best?
Options:
A. Place action.yml in .github/actions/lint-config/ and reference uses: ./.github/actions/lint-config/action.yml
B. Place lint-config.yml in .github/workflows/ and reference uses: ./.github/workflows/lint-config.yml
C. Place action.yml and helper files in .github/actions/lint-config/ and reference uses: ./.github/actions/lint-config
D. Place run.sh in .github/actions/lint-config/ and reference uses: ./.github/actions/lint-config/run.sh
Best answer: C
Explanation: Local actions are referenced by the folder path, not by the metadata file or a script file. The folder must contain action.yml plus any supporting files, so pointing uses at .github/actions/lint-config is the correct configuration.
For a local custom action, GitHub Actions resolves uses: ./relative-path to a directory in the checked-out repository. That directory must contain the action metadata file, such as action.yml, and any scripts or other files the action needs. A reusable workflow under .github/workflows is a different component type, even though it can also be called with uses. If the requirement is specifically a composite action, the workflow must point to the action’s directory, not to the workflow file, the action.yml file itself, or a raw script. The key takeaway is that local action references target the folder that defines the action.
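As a sketch under this layout (the lint step body is a placeholder), the directory and the reference look like this:

```yaml
# .github/actions/lint-config/action.yml
name: lint-config
description: Lints repository configuration files
runs:
  using: composite
  steps:
    - shell: bash
      run: echo "linting config files"  # placeholder for the real lint logic

# In any workflow in the same repository, after actions/checkout@v4:
#   - uses: ./.github/actions/lint-config
```

Note that uses points at the directory; GitHub finds action.yml inside it.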
- uses should target the action directory, not action.yml directly.
- uses cannot point straight to run.sh; the script must be wrapped by action metadata.

Topic: Author and Manage Workflows
A repository normally deploys when a GitHub release is published, but operators also need to start the workflow manually and choose a target environment without creating a release. They add this workflow:
on:
release:
types: [published]
workflow_dispatch:
inputs:
target:
type: choice
options: [staging, production]
required: true
jobs:
plan:
runs-on: ubuntu-latest
outputs:
target: ${{ steps.pick.outputs.target }}
steps:
- id: pick
run: echo "target=${{ github.event_name == 'workflow_dispatch' && inputs.target || 'production' }}" >> "$GITHUB_OUTPUT"
deploy:
needs: plan
if: needs.plan.outputs.target == 'production'
runs-on: ubuntu-latest
steps:
- run: echo "Deploying to ${{ needs.plan.outputs.target }}"
An operator clicks Run workflow and selects staging. No release event occurs. Which outcome matches GitHub Actions behavior?
Options:
A. plan runs, sets target to staging, and deploy is skipped.
B. The workflow starts, but plan fails because inputs.target is unavailable here.
C. plan runs, and deploy also runs because manual runs inherit release behavior.
D. Nothing runs because a workflow cannot combine workflow_dispatch with a repository event.
Best answer: A
Explanation: workflow_dispatch is the correct trigger for a human-started run with operator-provided input. In this case, the run uses the selected staging value, so plan succeeds and deploy is skipped by its if condition.
Repository events such as release run only when that repository activity actually happens. If a person needs to start a workflow on demand and optionally provide values, workflow_dispatch is the right trigger.
Here, the operator manually starts the workflow, so github.event_name is workflow_dispatch. That makes the expression in plan write staging to GITHUB_OUTPUT, and the job exposes that value as needs.plan.outputs.target. The deploy job then evaluates needs.plan.outputs.target == 'production', which is false, so deploy is skipped.
The key takeaway is that a manual run does not simulate a repository event; it runs as its own event type with its own inputs.
- The option that manual runs inherit release behavior is incorrect because no release was published, so the event is still workflow_dispatch.
- The option that inputs.target is unavailable is incorrect because the inputs context is valid for workflow_dispatch runs.

Topic: Secure and Optimize Automation
A repository uses deployment environments with these settings:
| Environment | Required reviewers | Secret DEPLOY_URL |
|---|---|---|
| staging | none | staging.example |
| production | 1 reviewer | prod.example |
There is no repository-level secret named DEPLOY_URL.
A push to main triggers this workflow:
name: release
on:
push:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
steps:
- run: echo "package" > app.txt
- uses: actions/upload-artifact@v4
with:
name: app
path: app.txt
deploy-staging:
needs: build
runs-on: ubuntu-latest
environment: staging
steps:
- uses: actions/download-artifact@v4
with:
name: app
- run: echo "Deploying to ${{ secrets.DEPLOY_URL }}"
deploy-prod:
needs: deploy-staging
runs-on: ubuntu-latest
environment: production
steps:
- run: echo "Deploying to ${{ secrets.DEPLOY_URL }}"
build and deploy-staging both complete successfully. No one has approved the production environment yet. Which statement matches GitHub Actions behavior?
Options:
A. deploy-prod is skipped because the uploaded artifact can be downloaded only once.
B. deploy-prod starts immediately, but its secret is blank until approval is granted.
C. Both deployment jobs would have required approval because both jobs declare environment:.
D. deploy-prod waits for approval; none of its steps run; deploy-staging used the staging environment secret.
Best answer: D
Explanation: Environment protections apply to the specific environment a job targets. Here, deploy-staging can run normally and use staging secrets, while deploy-prod pauses before any steps start because production requires reviewer approval.
In GitHub Actions, needs controls job order, and environment protection rules control whether a specific deployment job can start. After build succeeds, deploy-staging runs because the staging environment has no required reviewers. Its DEPLOY_URL value comes from the staging environment secret, since no repository-level secret with that name exists.
When deploy-prod becomes eligible, GitHub checks the production environment rules first. Because approval is required, the job waits and is not dispatched to a runner yet. That means no deploy-prod steps execute and its environment secret is not exposed before approval. Artifacts uploaded in one job can be downloaded by later jobs in the same workflow run, so the earlier artifact step does not block this behavior.
The key point is that protection is per environment, not for every job that uses environment:.
Topic: Manage GitHub Actions for the Enterprise
A platform team is publishing a reusable deployment workflow for all repositories in an organization. Consumers must know the exact contract before calling it.
The contract consumers must see:

- Required inputs: environment, artifact-name
- Required secret: registry-token
- GITHUB_TOKEN permissions: contents: read, id-token: write
- Self-hosted runner labels, including deploy

Which configuration is best?
Options:
A. Create a reusable workflow with required workflow_call inputs and secret, explicit minimal permissions, and documented self-hosted, linux, deploy labels.
B. Publish a workflow template with commented placeholders for secrets and permissions, and let each repo adapt runners.
C. Publish a composite action and ask callers to pass secrets as environment variables and choose permissions themselves.
D. Create a reusable workflow with secrets: inherit, default permissions, and runner details only in sample caller files.
Best answer: A
Explanation: The best choice is a reusable workflow that declares its required workflow_call inputs and secret, sets the minimum permissions, and clearly states the self-hosted runner labels it needs. That creates a clear, enforceable contract for consumers instead of relying on hidden defaults or tribal knowledge.
Shared workflows should expose a clear contract. In GitHub Actions, reusable workflows can declare required inputs and secrets under on.workflow_call, so callers know exactly what must be passed and GitHub can validate the call. The workflow should also define the minimum permissions: it needs so GITHUB_TOKEN access is explicit and least privilege is preserved. Runner expectations, especially self-hosted labels, should be documented with the workflow so consumers know whether the required runner is available to their repositories. Templates, examples, or inherited secrets can help convenience, but they do not replace an explicit reusable workflow interface. Hiding requirements behind defaults makes enterprise reuse, governance, and troubleshooting harder.
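A minimal sketch of such a contract, using the input, secret, and label names from this question:

```yaml
# Reusable workflow in the platform team's repository
on:
  workflow_call:
    inputs:
      environment:
        type: string
        required: true       # callers must pass this; GitHub validates the call
      artifact-name:
        type: string
        required: true
    secrets:
      registry-token:
        required: true       # named secret, not secrets: inherit

# Explicit least-privilege token scopes
permissions:
  contents: read
  id-token: write

jobs:
  deploy:
    # Documented runner requirement: consumers need a runner with these labels
    runs-on: [self-hosted, linux, deploy]
    steps:
      - run: echo "Deploying ${{ inputs.artifact-name }} to ${{ inputs.environment }}"
```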
secrets: inherit hides the actual secret contract, and default token permissions are not a good substitute for documented least-privilege scopes.

Topic: Author and Manage Workflows
A team policy forbids writing secret values or secret-derived strings to logs or job summaries, even if GitHub masks them. Review this workflow step:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Login and report
run: |
echo "Using token ${{ secrets.REGISTRY_TOKEN }}"
echo "Registry token: ${{ secrets.REGISTRY_TOKEN }}" >> $GITHUB_STEP_SUMMARY
echo "${{ secrets.REGISTRY_TOKEN }}" | docker login registry.example.com -u ci-bot --password-stdin
Which edit best satisfies the policy while preserving authentication?
Options:
A. Remove the echo lines, use step env, and --password-stdin.
B. Store the secret in GITHUB_ENV and print it from there.
C. Replace the secret with a repository variable and keep logging.
D. Keep the output, but add ::add-mask:: before using the secret.
Best answer: A
Explanation: The best fix is to stop rendering the secret into any echoed text and pass it only to the step that needs it. Using step-level env plus docker login --password-stdin preserves authentication while avoiding secret interpolation in logs or summaries.
In GitHub Actions, ${{ secrets.NAME }} is resolved before the shell runs. If you place that expression inside echo statements or append it to GITHUB_STEP_SUMMARY, you are explicitly sending secret material to output surfaces that users can read. The safer pattern is to keep the secret out of log and summary content entirely and provide it only to the consuming command.
A good edit is:
- name: Login
env:
REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
run: echo "$REGISTRY_TOKEN" | docker login registry.example.com -u ci-bot --password-stdin
This limits scope to one step and avoids exposing the secret in command arguments or visible output. The key takeaway is to use secrets as inputs to the required command, not as text to be displayed.
- Repository variables (vars) are not secret storage, and that option still leaves the value intentionally logged.
- The GITHUB_ENV option broadens the secret to later steps and still violates the policy by printing it.
- The ::add-mask:: option is defense-in-depth, but it does not justify writing secret content to logs or summaries in the first place.

Topic: Manage GitHub Actions for the Enterprise
A repository can use the organization runner group ci-linux, and both runners below are online.
| Runner | Labels |
|---|---|
| runner-a | self-hosted, linux, x64 |
| runner-b | self-hosted, linux, x64, gpu |
A workflow change was meant to let the build job run on either a self-hosted Linux runner or a GitHub-hosted Ubuntu runner:
jobs:
build:
runs-on: [self-hosted, linux, ubuntu-latest]
steps:
- run: echo Build
The run stays queued with Waiting for a runner to pick up this job.... What is the best cause?
Options:
A. The workflow needs permissions: actions: write before a self-hosted runner can accept the job.
B. GitHub Actions should automatically fall back from self-hosted labels to ubuntu-latest when no self-hosted runner is free.
C. runs-on arrays require one runner to match all labels, and no runner has both self-hosted and ubuntu-latest.
D. The job must specify only the runner group name in runs-on, not labels.
Best answer: C
Explanation: The job is queued because runs-on label arrays are conjunctive, not fallback choices. GitHub looks for one runner with every listed label, and ubuntu-latest is a GitHub-hosted label that these self-hosted runners do not have.
Runner selection depends on how runs-on is evaluated. When you provide an array, GitHub Actions searches for a single runner that matches every value in that array. In this case, both available runners are self-hosted Linux machines, but neither has the ubuntu-latest label.
So the job cannot land on:

- the GitHub-hosted ubuntu-latest pool, because those runners lack the self-hosted label
- either self-hosted runner, because neither carries the ubuntu-latest label

There is no built-in “either self-hosted or GitHub-hosted” fallback in one runs-on array. To route correctly, use labels for only the intended runner type, or split the logic into separate jobs or conditions.
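One way to split the logic is a separate job per runner type; each job's runs-on then matches an available runner:

```yaml
jobs:
  build-self-hosted:
    # Matched only by a runner carrying every listed label
    runs-on: [self-hosted, linux, x64]
    steps:
      - run: echo Build

  build-hosted:
    # GitHub-hosted Ubuntu runner as a separate path
    runs-on: ubuntu-latest
    steps:
      - run: echo Build
```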
- runs-on does not interpret array entries as alternatives.
- permissions: controls GITHUB_TOKEN access, not runner matching.

Topic: Manage GitHub Actions for the Enterprise
A production deployment workflow started and failed at the Authenticate step with HTTP 401 Unauthorized. The same log shows X-Region: us-east, and the team says both DEPLOY_TOKEN and DEPLOY_REGION were added in the production environment settings.
jobs:
deploy:
runs-on: ubuntu-latest
environment: production
steps:
- name: Authenticate
run: |
curl -H "Authorization: Bearer ${{ secrets.DEPLOY_TOKEN }}" \
-H "X-Region: ${{ vars.DEPLOY_REGION }}" \
https://deploy.internal/start
What is the best next diagnostic action?
Options:
A. Move DEPLOY_REGION to a secret because environment-scoped values should not use vars.
B. Grant deployments: write to GITHUB_TOKEN so the curl request can authenticate.
C. Switch to a self-hosted runner because GitHub-hosted runners do not receive environment values.
D. Verify whether DEPLOY_TOKEN was created as a production environment variable instead of a secret.
Best answer: D
Explanation: The workflow reads the credential from secrets.DEPLOY_TOKEN and the region from vars.DEPLOY_REGION. Because the region appears in the request but authentication fails, the best check is whether the token was stored as a variable instead of a secret. Credentials belong in secrets; non-sensitive configuration belongs in variables.
GitHub Actions separates sensitive and non-sensitive values by both storage type and context. Secrets are encrypted, masked in logs, and referenced with secrets.NAME. Variables are for non-sensitive configuration and are referenced with vars.NAME.
In this run, X-Region: us-east shows that the environment variable is resolving correctly. The bearer token is read from secrets.DEPLOY_TOKEN; if the team accidentally created DEPLOY_TOKEN under environment variables, the secrets lookup will be empty and the external service can return 401 Unauthorized.
The right fix is to store the credential as a secret at the needed scope, here the production environment, and keep the region as a variable. Permission changes or runner changes would not correct a missing secret value.
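With the values stored at the right scopes, the step reads the credential from the secrets context and the configuration from the vars context:

```yaml
- name: Authenticate
  env:
    # Resolved from the production environment secret; masked in logs
    DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}
  run: |
    curl -H "Authorization: Bearer $DEPLOY_TOKEN" \
         -H "X-Region: ${{ vars.DEPLOY_REGION }}" \
         https://deploy.internal/start
```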
Changing GITHUB_TOKEN permissions does not affect a separate bearer token sent to an external deployment API.

Topic: Author and Manage Workflows
A developer uses VS Code with GitHub Actions schema validation enabled. The referenced reusable workflow exists, and publish.yml declares a required artifact_name input. The workflow file is pushed unchanged.
name: release
on:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
outputs:
artifact_name: ${{ steps.pack.outputs.name }}
steps:
- id: pack
run: echo "name=web-${{ github.run_number }}" >> "$GITHUB_OUTPUT"
publish:
needs: build
if: ${{ steps.pack.outputs.name != '' }}
uses: ./.github/workflows/publish.yml
with:
artifact_name: ${{ needs.build.outputs.artifact_name }}
Which outcome matches GitHub Actions behavior?
Options:
A. build runs, and publish calls publish.yml with the generated artifact name.
B. build runs, then publish is skipped because steps.pack.outputs.name resolves to empty.
C. publish evaluates first, then waits for build before receiving the output.
D. Schema validation flags publish.if; the workflow is invalid, so no jobs start.
Best answer: D
Explanation: The build job correctly creates a job output, but publish uses the wrong context in its job-level if. steps is only valid inside steps of the same job, so editor/schema validation can catch this before runtime and the workflow will not start.
GitHub Actions validates expressions against the contexts allowed in that location. Here, build correctly maps steps.pack.outputs.name to the job output artifact_name, and a downstream job could consume that value through needs.build.outputs.artifact_name. However, the publish job condition is evaluated at job scope, where the steps context is not available.
- steps.* is for later steps in the same job.
- needs.* is for outputs from completed dependency jobs.
- with: for a reusable workflow can use needs.build.outputs.artifact_name.

Because the invalid context appears in publish.if, tooling can flag it before runtime, and GitHub rejects the workflow instead of treating the condition as simply false.
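A corrected publish job would gate on the needs context, which is valid at job scope:

```yaml
publish:
  needs: build
  # needs.* is available in a job-level if; steps.* is not
  if: ${{ needs.build.outputs.artifact_name != '' }}
  uses: ./.github/workflows/publish.yml
  with:
    artifact_name: ${{ needs.build.outputs.artifact_name }}
```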
- The with: mapping is valid, but the job never reaches the reusable workflow call.
- needs: build enforces dependency order, but the workflow is invalid before scheduling either job.

Topic: Consume and Troubleshoot Workflows
A repository uses this workflow:
name: CI
on:
push:
branches: [main, 'release/*']
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- run: echo "build"
The support team wants the run history list to show each run’s triggering event, branch, actor, commit, and exact workflow revision used, without opening logs or job details. Which edit best meets this requirement?
Options:
A. Add a step that exports the values with GITHUB_ENV and echoes them.
B. Add name: under jobs.build using the same context values.
C. Add a step that writes the values to GITHUB_STEP_SUMMARY.
D. Add top-level run-name using github.event_name, github.ref_name, github.actor, github.sha, and github.workflow_sha.
Best answer: D
Explanation: The best edit is a top-level run-name expression. It can include GitHub context values such as event, branch, actor, commit SHA, and github.workflow_sha, which makes the exact workflow version visible from the run entry itself.
This requirement is about run history visibility, not job output. In GitHub Actions, run-name is the workflow-level field that controls the label shown for each workflow run in the run history list. Because run-name supports GitHub context expressions, you can surface values like github.event_name, github.ref_name, github.actor, github.sha, and github.workflow_sha directly on the run entry.
github.sha identifies the commit the run used, while github.workflow_sha identifies the exact revision of the workflow file used for that run. That makes it the most direct way to help operators identify the trigger, branch, actor, commit, and workflow version from run history.
Job names, step summaries, and log output can still expose this information, but only after opening the run, so they do not satisfy the stated requirement as well.
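A top-level run-name carrying all five values could be sketched as:

```yaml
# Shown as the label for each run in the Actions run history list
run-name: >-
  ${{ github.event_name }} on ${{ github.ref_name }} by ${{ github.actor }}
  (commit ${{ github.sha }}, workflow ${{ github.workflow_sha }})
```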
GITHUB_ENV and log output affect later steps or logs, not the run history list display.

Topic: Manage GitHub Actions for the Enterprise
An enterprise repo runs GPU benchmarks in GitHub Actions. Python dependencies are already cached, but the last step must use an NVIDIA GPU and is too slow on ubuntu-latest. The organization provides a GitHub-hosted larger runner labeled ubuntu-16core and a self-hosted GPU runner with labels self-hosted, linux, x64, and gpu. Which edit best meets the requirement?
jobs:
benchmark:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.12'
- run: pip install -r requirements.txt
- run: pytest tests/gpu_benchmarks -m cuda
Options:
A. Replace runs-on with ubuntu-16core.
B. Replace runs-on with [self-hosted, linux, x64, gpu].
C. Convert the job to a four-way matrix with max-parallel: 4.
D. Add a pip cache step before pip install.
Best answer: B
Explanation: The job should target the specialized GPU runner because the requirement is specific hardware support for CUDA. Larger runners help CPU-, memory-, or disk-bound jobs, but they do not satisfy a workload that explicitly needs an NVIDIA GPU.
In GitHub Actions, runs-on determines which runner can execute a job. Here, the deciding requirement is not just faster execution; the test step uses pytest ... -m cuda, so the runner must have GPU hardware available. Using runs-on: [self-hosted, linux, x64, gpu] makes the job eligible only for a runner that matches all of those labels.
Larger runners are appropriate when the workload is still compatible with standard hosted capabilities but needs more CPU, RAM, or disk for better performance. Specialized runners are the better choice when the workload depends on unique hardware or environment characteristics, such as GPUs. Caching and matrix parallelism can reduce overhead or split work, but they cannot add missing GPU capability.
Topic: Manage GitHub Actions for the Enterprise
A nightly workflow on GitHub-hosted runners started failing even though no commits or workflow changes were made since the last successful run.
strategy:
matrix:
os: [ubuntu-latest, ubuntu-22.04]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- run: cmake --version
- run: ./build.sh
Today’s run results:
- ubuntu-latest: /bin/bash: cmake: command not found
- ubuntu-22.04: job succeeds

What is the best next diagnostic action?
Options:
A. Pin actions/checkout to a full commit SHA
B. Check organization policy for blocked ubuntu-latest use
C. Review ubuntu-latest image docs and release notes
D. Delete the Actions cache and rerun
Best answer: C
Explanation: This pattern strongly suggests a GitHub-hosted runner image change. When ubuntu-22.04 still works but ubuntu-latest suddenly loses a preinstalled tool, the highest-value next step is to check the runner image documentation and release notes.
GitHub-hosted runner images are updated regularly, and moving labels such as ubuntu-latest can point to a different underlying image over time. In this scenario, the failure happens before the build at cmake --version, there were no repository changes, and the pinned ubuntu-22.04 matrix variant still succeeds. That makes a preinstalled software change on the ubuntu-latest image the most likely explanation.
ubuntu-latest is an alias, not a fixed OS version. If the workflow depends on a specific tool, the durable fix is to pin the runner image or install the required version explicitly.
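Either durable fix can be sketched in a few lines (the apt package name is an assumption based on the missing tool in the scenario):

```yaml
# Fix 1: pin the image so preinstalled tooling cannot drift with the alias
runs-on: ubuntu-22.04

# Fix 2: stay on ubuntu-latest but install the needed tool explicitly
steps:
  - run: sudo apt-get update && sudo apt-get install -y cmake
  - run: cmake --version
```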
The ubuntu-latest job already started and reached a shell step, so it was not blocked from runner use.
Topic: Secure and Optimize Automation
A trusted push to main triggers this workflow. deploy (staging) completes, but deploy (production) stays blocked with no step logs.
| Environment | Protection |
|---|---|
| staging | none |
| production | required reviewers: release-managers |
jobs:
deploy:
strategy:
matrix:
target: [staging, production]
runs-on: ubuntu-latest
environment: ${{ matrix.target }}
steps:
- run: ./deploy.sh ${{ matrix.target }}
Run history for deploy (production) shows: This job is awaiting approval for environment 'production'.
What is the best explanation for the blocked job?
Options:
A. production is paused until a required reviewer approves deployment.
B. Missing deployments: write permission prevents the job from being assigned.
C. environment: cannot use a matrix value such as ${{ matrix.target }}.
D. Matrix deployments run one at a time unless max-parallel is set.
Best answer: A
Explanation: The run history message is the key evidence. In GitHub Actions, a job that targets an environment with required reviewers pauses before any steps run until an authorized reviewer approves the deployment.
GitHub evaluates environment protection rules before a job that references that environment starts executing. Here, only the production matrix variant targets an environment that requires reviewers, and the run history explicitly says the job is awaiting approval for that environment. That is why there are no step logs yet: the job is blocked at the environment gate, not failing during execution.
- The staging variant completing shows the workflow syntax and runner selection are fine.
- Required reviewers are configured only on the production environment.
- After approval, the production job can start and access that environment’s protected resources.

The key takeaway is that required reviewers create a pre-execution deployment gate for sensitive environments.
- Missing deployments: write would affect deployment operations during execution, not create an approval hold before steps start.
- Using ${{ matrix.target }} for environment: is valid here.

Topic: Author and Maintain Actions
A team created a composite action to calculate a version string. The workflow should print Version=1.2.3, but the final step prints an empty value. Which edit best fixes this so the caller workflow can read the action output?
# .github/actions/get-version/action.yml
name: get-version
runs:
using: composite
steps:
- id: calc
shell: bash
run: echo "version=1.2.3" >> "$GITHUB_OUTPUT"
# .github/workflows/build.yml
jobs:
build:
runs-on: ubuntu-latest
steps:
- id: ver
uses: ./.github/actions/get-version
- run: echo "Version=${{ steps.ver.outputs.version }}"
Options:
A. Add outputs.version.value: ${{ steps.calc.outputs.version }} to action.yml.
B. Write version to GITHUB_ENV instead of GITHUB_OUTPUT.
C. Give the final workflow step an id of version.
D. Change the workflow to steps.calc.outputs.version.
Best answer: A
Explanation: Inside a composite action, GITHUB_OUTPUT sets an output for that internal step, not automatically for the action as seen by the caller. To make steps.ver.outputs.version work, action.yml must declare an action output and map it to steps.calc.outputs.version.
The key scope boundary is between the composite action’s internal steps and the workflow step that calls the action. echo "version=1.2.3" >> "$GITHUB_OUTPUT" creates version as an output of the internal step calc. The caller workflow does not see calc directly.
To expose that value to the workflow, the action metadata must publish an action-level output:
outputs:
version:
value: ${{ steps.calc.outputs.version }}
After that mapping exists, the caller can read it with steps.ver.outputs.version. Changing the workflow reference alone does not bypass the action boundary.
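Putting the pieces together, a corrected action.yml for this scenario might look like the following sketch:

```yaml
# .github/actions/get-version/action.yml (corrected sketch)
name: get-version
outputs:
  version:
    # Map the internal step output to an action-level output the caller can read.
    value: ${{ steps.calc.outputs.version }}
runs:
  using: composite
  steps:
    - id: calc
      shell: bash
      run: echo "version=1.2.3" >> "$GITHUB_OUTPUT"
```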
- Changing the workflow to read steps.calc.outputs.version fails because calc exists only inside the composite action, not in the caller workflow.
- Writing to GITHUB_ENV fails because environment variables are for later steps in the same job, not for steps.<id>.outputs.
- Giving an id to the final workflow step fails because naming that step does not publish outputs from the action call.

Topic: Author and Maintain Actions
A repository uses a custom JavaScript action, and step debug logging is enabled. Review the metadata and run excerpt. Which statement best matches GitHub Actions behavior?
runs:
using: node20
pre: dist/setup.js
main: dist/index.js
post: dist/cleanup.js
Run octo-org/tool-cache-action@3c4d5e6
::debug::pre: creating /home/runner/work/_temp/tool-cache
::debug::main: tool restored from cache
Run npm test
Tests passed
Run actions/upload-artifact@v4
Artifact upload completed
Post Run octo-org/tool-cache-action@3c4d5e6
::debug::post: removing /home/runner/work/_temp/tool-cache
::error::EACCES: permission denied, unlink '/home/runner/work/_temp/tool-cache/lock'
Error: Process completed with exit code 1.
Options:
A. The custom action’s main script failed, but the runner reported it only during job cleanup.
B. The custom action’s pre script failed, but the workflow continued because debug logging was enabled.
C. The custom action’s post script failed during cleanup after the regular steps finished.
D. actions/upload-artifact failed, because post-job errors are logged under the most recent completed step.
Best answer: C
Explanation: The log clearly shows pre and main activity succeeded, then normal workflow steps completed, and only afterward GitHub ran Post Run for the custom action. That means the failing phase is the action’s post cleanup logic.
JavaScript actions can define three execution hooks: pre, main, and post. In GitHub Actions logs, a failure in cleanup is identified by a Post Run <action> entry, followed by any debug or error annotations from that cleanup code.
Here, the trace shows the action starting, successful pre and main debug output, then successful npm test and artifact upload steps. Only after those regular steps finish does GitHub execute Post Run octo-org/tool-cache-action@3c4d5e6, where the permission error appears. That pinpoints the failure to dist/cleanup.js, not setup, main execution, or the artifact action.
The key takeaway is that Post Run in the logs means the action’s cleanup phase is running, so errors there belong to the post entrypoint.
- The runner does not report a main failure as Post Run; the log explicitly shows cleanup execution.
- The post-job error is attributed to octo-org/tool-cache-action, not to actions/upload-artifact.
- A pre failure would occur before the later test and artifact steps completed.

Topic: Secure and Optimize Automation
Your team maintains a Node.js library. Pull requests need fast feedback on lint and unit tests. Before a version tag publishes the package, the library must still be validated on Ubuntu, Windows, and macOS with Node 20 and 22. Most PR failures are OS-independent, developers use Node 22 day-to-day, and the team wants lower runner cost without adding self-hosted infrastructure. Which workflow configuration is the best fit?
Options:
A. Run pull_request checks on ubuntu-latest with Node 22 only; run the full Ubuntu/Windows/macOS × Node 20/22 matrix on version-tag pushes with standard GitHub-hosted runners and max-parallel: 2.
B. Replace the matrix with one macos-latest job that runs Node 20 and 22 sequentially for both PRs and releases.
C. Keep the full matrix on every pull_request, but move it to self-hosted runners so all jobs can start immediately.
D. Run the full Ubuntu/Windows/macOS × Node 20/22 matrix on every pull_request and push, and raise max-parallel to 6.
Best answer: A
Explanation: The best optimization is to reduce matrix breadth where it adds little signal and keep full coverage only when it is required. Using ubuntu-latest with the team’s daily Node version on pull requests gives quick feedback, while the capped full matrix on version tags preserves required cross-platform validation before publishing.
Matrix optimization in GitHub Actions should match the decision point. For pull requests, the goal is fast feedback, so the best configuration uses the smallest high-signal matrix that catches most regressions: here, ubuntu-latest with Node 22. For publishing, the goal changes to compatibility assurance, so that is the right time to run the full Ubuntu, Windows, and macOS matrix across Node 20 and 22.
Limiting max-parallel on the larger release matrix controls spend and reduces runner contention without removing required coverage. By contrast, running the full matrix on every pull request increases queue time and cost, and moving the same workload to self-hosted runners changes infrastructure rather than fixing matrix scope. A single macOS job also misses required Windows and Ubuntu validation.
The key idea is to size matrix breadth and concurrency to the value of the feedback at each trigger.
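One way to express option A (a sketch; job names and the npm commands are illustrative) is a single workflow whose jobs are gated by event:

```yaml
on:
  pull_request:
  push:
    tags: ['v*']

jobs:
  pr-checks:
    # Fast feedback: one OS, the team's daily Node version.
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci && npm test
  release-matrix:
    # Full compatibility coverage only when a version tag publishes.
    if: startsWith(github.ref, 'refs/tags/v')
    strategy:
      max-parallel: 2      # caps concurrent jobs to control runner spend
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        node: [20, 22]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm ci && npm test
```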
Topic: Author and Manage Workflows
A team expanded an integration-test workflow to run on both Linux and Windows. The next run produced these results.
jobs:
integration:
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest]
node: [20]
runs-on: ${{ matrix.os }}
services:
postgres:
image: postgres:16
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node }}
- run: npm test
Run results:
integration (ubuntu-latest, 20) Succeeded
integration (windows-latest, 20) Failed
Failed job log:
Error: Service containers are only supported on Linux runners
What is the best explanation for the failed matrix run?
Options:
A. The failure is caused by actions/setup-node@v4, which does not support Windows runners.
B. The Windows variant fails because the job uses a service container, which is supported only on Linux runners.
C. The fail-fast: false setting prevents PostgreSQL from starting in the Windows job.
D. The Windows matrix job is missing a secret required by npm test.
Best answer: B
Explanation: This is a matrix-specific runner-image issue. The Ubuntu combination succeeds, but the Windows combination fails before tests run because the job defines a service container, and service containers are supported only on Linux runners.
A matrix creates separate jobs for each combination, so the expanded job names show exactly which variant failed. Here, only integration (windows-latest, 20) fails, which points to a problem tied to that matrix value rather than to the workflow as a whole.
The log is the key evidence: the failure happens during job setup, before npm test runs. Because runs-on resolves from matrix.os, the failing combination is assigned to a Windows runner. That same job also declares a postgres service container, and service containers require Linux runners in GitHub Actions.
A practical fix is to keep this integration job on Linux only or split Windows testing into a separate job that does not use service containers. The claim about actions/setup-node is the closest distractor, but that action supports Windows runners.
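A split along those lines might look like this sketch (job names are illustrative):

```yaml
jobs:
  integration-linux:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16   # service containers require a Linux runner
    steps:
      - uses: actions/checkout@v4
      - run: npm test
  integration-windows:
    runs-on: windows-latest  # no service container here
    steps:
      - uses: actions/checkout@v4
      - run: npm test
```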
- The claim about actions/setup-node@v4 fails because that action supports GitHub-hosted Windows runners.
- The fail-fast: false setting only controls whether other matrix jobs are canceled after a failure; it does not affect service startup.

Topic: Manage GitHub Actions for the Enterprise
An organization applies these GitHub Actions runner settings to the payments-service repository.
| Setting | Value |
|---|---|
| Standard GitHub-hosted runners | Disabled |
| Larger runner group large-ci | Allowed; one idle runner with label ubuntu-8core |
| Self-hosted runner group deploy-linux | Allowed; one idle runner with label linux-x64 |
A maintainer manually triggers this workflow:
on: workflow_dispatch
jobs:
unit:
runs-on: ubuntu-latest
load:
runs-on:
group: large-ci
labels: ubuntu-8core
deploy:
needs: [unit, load]
runs-on:
group: deploy-linux
labels: linux-x64
Which execution trace matches GitHub Actions behavior?
Options:
A. unit cannot start, load runs, and deploy is skipped.
B. unit is skipped, load is also blocked because larger runners are disabled with standard GitHub-hosted runners, and deploy runs on self-hosted.
C. unit runs on the self-hosted runner, load runs on the larger runner, and deploy waits.
D. unit and load both run, then deploy runs after both finish.
Best answer: A
Explanation: Runner type access is enforced per repository. In this scenario, ubuntu-latest needs a standard GitHub-hosted runner, but only larger runners and the named self-hosted group are allowed, so unit cannot start. Because deploy depends on both upstream jobs, it does not run when unit fails to get a runner.
GitHub Actions does not automatically reroute a job to another runner type. A job targeting ubuntu-latest specifically requires a standard GitHub-hosted runner. Since that runner type is disabled for this repository, the unit job cannot be assigned a runner.
The load job can start because its larger runner group is explicitly allowed and an idle matching runner exists. The deploy job uses an allowed self-hosted runner group, but needs: [unit, load] means it runs only if both required jobs complete successfully. Because unit does not, deploy is skipped.
The key takeaway is that runner policies control eligibility by runner type or runner group; there is no fallback from standard GitHub-hosted labels to larger or self-hosted runners.
- A job targeting ubuntu-latest does not switch to self-hosted just because a Linux self-hosted runner is available.
- The needs list matters: a downstream job does not run just because its own runner is available; all required upstream jobs must succeed first.

Topic: Consume and Troubleshoot Workflows
A workflow run shows three completed matrix jobs and one job still queued.
jobs:
test:
runs-on: [self-hosted, ${{ matrix.os }}]
strategy:
matrix:
os: [linux, windows]
node: [18, 20]
exclude:
- os: windows
node: 18
include:
- os: macos
node: 20
Run history:
- Completed: test (linux, 18)
- Completed: test (linux, 20)
- Completed: test (windows, 20)
- Queued: 1 job
Online self-hosted runner labels:
- runner-1: self-hosted, linux
- runner-2: self-hosted, windows
Which option best explains the queued job?
Options:
A. The queued job is test (windows, 20) because the runner must also match the Node 20 version.
B. The queued job is test (linux, 20) because include duplicates the highest Node version.
C. The queued job is test (windows, 18) because exclude removes it only after the job is scheduled.
D. The queued job is test (macos, 20) because include adds it and no online runner has the macos label.
Best answer: D
Explanation: First expand the matrix: four base combinations, remove windows/18, then add macos/20. Because runs-on requires both self-hosted and the OS label, the only variant that can remain queued is macos/20 when no online runner has macos.
This is a matrix-expansion troubleshooting problem. Start with the Cartesian product of os and node: linux/18, linux/20, windows/18, and windows/20. Then apply the matrix rules:
- exclude removes windows/18
- include adds a new macos/20 variant

That leaves four actual jobs: linux/18, linux/20, windows/20, and macos/20.
The runs-on expression uses only self-hosted and matrix.os, so runner selection depends on the OS label, not on the Node version. Since the only online runners are labeled linux and windows, the macos/20 job has no matching runner and stays queued. The closest distractor is the option claiming Node 20 affects runner matching, but matrix.node is not part of runs-on.
- The claim that windows/18 is queued fails because exclude prevents that matrix job from being created.
- The claim that windows/20 needs a Node 20 runner fails because node is not used in runs-on.
- The claim that linux/20 is duplicated fails because this include entry creates macos/20, not another Linux variant.

Topic: Author and Maintain Actions
A team is publishing a reusable action named octo-org/cache-report for hundreds of workflow authors. They want users to copy a working README example, reference a stable version that can receive backward-compatible fixes, and avoid editing every workflow for each patch release. The organization’s policy does not require full commit SHA pinning for this action. Which publication configuration is best?
Options:
A. Create release v1.0.0, but show uses: octo-org/cache-report@main in README so workflows always run the newest code.
B. Create release v1.0.0 and show uses: octo-org/cache-report@v1.0.0 in README; do not maintain a major tag.
C. Skip numbered releases, create only tag latest, and show uses: octo-org/cache-report@latest in README.
D. Create release v1.0.0, add tags v1.0.0 and v1, and show uses: octo-org/cache-report@v1 in README with documented inputs and outputs.
Best answer: D
Explanation: The best pattern is to publish a semantic versioned release, keep both the exact version tag and a moving major tag, and document a copyable README example. That lets workflow authors use @v1 for stable adoption while still receiving compatible fixes.
To make an action consumable, maintainers should publish it the way workflow authors expect to reference it: create a versioned release, keep an exact tag such as v1.0.0, maintain a major tag such as v1 that points to the latest compatible release, and include a working uses: example in the README. This combination improves discoverability, reduces adoption friction, and avoids requiring every consumer to edit workflow files for each patch update.
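As one common way to maintain the moving major tag (a sketch; the version numbers are illustrative, and the throwaway demo repository exists only to make the commands runnable standalone):

```shell
set -e
# Throwaway demo repo so the tagging commands can run standalone.
dir=$(mktemp -d); cd "$dir"; git init -q
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "release"
# Cut the exact release tag, then force-move the floating major tag to it.
git tag v1.0.1
git tag -f v1 v1.0.1
# Both tags now point at the same commit:
git rev-parse v1 v1.0.1
# In a real repository you would then publish both:
# git push origin v1.0.1 && git push -f origin v1
```

Consumers referencing @v1 pick up the new release on their next run, with no workflow edits.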
- v1.0.0 identifies the specific release.
- v1 provides a stable major-version reference for compatible updates.
- The README shows a copyable uses: line and required inputs or outputs.

By contrast, floating branch or non-semantic tags are less predictable, and patch-only references create unnecessary maintenance for workflow authors.
- @main is too unstable for published consumption because new commits can change behavior unexpectedly.
- @v1.0.0 is valid for exact pinning, but it does not meet the goal of receiving compatible fixes without workflow edits.
- A latest tag is not a clear semantic versioning strategy and makes compatibility expectations unclear for consumers.

Topic: Consume and Troubleshoot Workflows
A repository contains several pull request workflows, and the Actions tab shows multiple entries named CI. The team wants this workflow to appear as a distinct item in the workflow list and as a distinct value in the workflow run filters, without changing when it runs or what it executes.
name: CI
on:
pull_request:
paths:
- 'services/api/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: npm test
Which edit should you make?
Options:
A. Change the top-level name to API CI.
B. Rename the test job to API CI.
C. Add run-name: API CI at the top level.
D. Rename the workflow file to api-ci.yml.
Best answer: A
Explanation: To make a workflow easy to locate in the Actions workflow list and workflow filters, give it a unique top-level workflow name. In this file, name: CI is the displayed workflow identity, so changing that value solves the problem without changing behavior.
GitHub Actions identifies a workflow in the Actions sidebar and workflow run filters by the workflow’s top-level name: value. Because this file already sets name: CI, that is the label users see when browsing or filtering workflows.
Changing the top-level name to something unique like API CI updates how the workflow is listed, while leaving its trigger, jobs, and steps unchanged. By contrast, run-name: affects only individual run titles, job names affect only job labels inside a run, and the file name matters mainly when no top-level workflow name is defined.
The key takeaway is that workflow discovery in the UI depends on the workflow name, not the job name or per-run title.
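The one-line edit can be sketched as:

```yaml
name: API CI   # unique top-level name shown in the Actions list and filters
on:
  pull_request:
    paths:
      - 'services/api/**'
# jobs unchanged
```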
- The run-name idea fails because it labels individual runs, not the workflow entry used in the workflow list or filter.
- Renaming the file fails because a top-level name is already present, so the UI continues to show CI.

Topic: Author and Manage Workflows
A repository uses a reusable workflow to decide whether to deploy. In the latest run, prepare succeeded, but deploy was skipped. The prepare job log shows echo "release_ready=true" >> "$GITHUB_ENV".
Caller workflow:
jobs:
prepare:
uses: org/platform/.github/workflows/release-gate.yml@v1
deploy:
needs: prepare
if: needs.prepare.outputs.release_ready == 'true'
runs-on: ubuntu-latest
steps:
- run: echo "Deploying"
Reusable workflow:
on:
workflow_call:
outputs:
release_ready:
value: ${{ jobs.check.outputs.release_ready }}
jobs:
check:
runs-on: ubuntu-latest
outputs:
release_ready: ${{ steps.gate.outputs.release_ready }}
steps:
- id: gate
run: echo "release_ready=true" >> "$GITHUB_ENV"
What is the best explanation for the skipped deploy job?
Options:
A. The reusable workflow must expose steps.gate.outputs.release_ready directly from workflow_call.outputs.
B. The reusable workflow used GITHUB_ENV, so no step output existed to flow into the job and workflow outputs.
C. The caller must read jobs.prepare.outputs.release_ready instead of needs.prepare.outputs.release_ready.
D. The gate step needs additional token permissions before its value can be passed to the caller.
Best answer: B
Explanation: The reusable workflow is trying to return a value through step, job, and workflow outputs, but the step only writes to GITHUB_ENV. That makes release_ready available only inside later steps of check, so the caller sees an empty output and skips deploy.
GITHUB_ENV and outputs have different scopes. Writing release_ready=true to GITHUB_ENV creates an environment variable for later steps in the same job, but it does not populate steps.gate.outputs.release_ready.
To pass a value from a reusable workflow back to the caller, the value must move through these layers:
1. A step output written to GITHUB_OUTPUT in a step with an id
2. jobs.check.outputs
3. on.workflow_call.outputs
4. needs.prepare.outputs.release_ready

Because the gate step never created a step output, the reusable workflow output was empty. The if expression therefore evaluated to false, so deploy was skipped.
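The corrected gate step, with the rest of the plumbing from the scenario unchanged, is a one-line fix:

```yaml
# reusable workflow: only the gate step changes
steps:
  - id: gate
    run: echo "release_ready=true" >> "$GITHUB_OUTPUT"   # was: >> "$GITHUB_ENV"
```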
- Reading jobs.prepare.outputs fails because downstream jobs consume prior job outputs through needs, not jobs.
- Exposing steps.gate.outputs.release_ready directly from workflow_call.outputs fails because reusable workflow outputs are mapped from job outputs.

Topic: Manage GitHub Actions for the Enterprise
A company uses GitHub Enterprise Cloud. The Retail organization contains only the web-store repository.
Exhibit: Current GitHub Actions policy
| Scope | Policy |
|---|---|
| Enterprise | Allow only actions and reusable workflows from repositories in this enterprise |
| Organization Retail | Allow all actions and reusable workflows |
| Repository web-store | Allow only the internal deployment reusable workflow |
The web-store repository must use the official GitHub-authored actions actions/checkout and actions/setup-node along with the internal deployment workflow. Other organizations should keep their current restrictions. Which configuration is best?
Options:
A. Allow GitHub-authored actions at enterprise, then allow only the required actions in web-store.
B. Add actions/checkout and actions/setup-node only to web-store.
C. Allow GitHub-authored actions only in Retail.
D. Copy the official actions into web-store as local actions.
Best answer: A
Explanation: GitHub Actions policies are inherited top-down, and lower scopes cannot grant more access than a parent scope allows. Because the enterprise currently allows only internal enterprise actions, web-store cannot use public GitHub-authored actions until the enterprise policy is broadened first.
The core concept is policy inheritance. In GitHub Actions, the enterprise setting defines the maximum allowance, the organization can be equally or more restrictive, and the repository can be more restrictive still. Here, actions/checkout and actions/setup-node are GitHub-authored public actions, so the enterprise policy currently blocks them everywhere.
The repository allowlist keeps web-store tightly controlled by allowing only the required actions plus the internal reusable workflow. Changing only the organization or repository would not override the inherited enterprise restriction.
Topic: Secure and Optimize Automation
You add artifact attestations to a release workflow. The build job succeeds, but the production deploy job is blocked by an organization policy. The repository uses a shared self-hosted runner group managed outside the release team.
jobs:
build:
runs-on: [self-hosted, linux, shared]
permissions:
contents: read
id-token: write
attestations: write
steps:
- uses: actions/checkout@v4
- run: ./scripts/build.sh
- uses: actions/attest-build-provenance@v3
with:
subject-path: dist/app.tgz
Verification: attestation valid
Policy: blocked
Reason: builder environment not trusted
What is the best explanation for the block?
Options:
A. actions/checkout must be pinned to a full SHA.
B. subject-path must reference an uploaded workflow artifact.
C. The build job also needs contents: write permission.
D. The attestation is valid, but the shared self-hosted runner is not trusted.
Best answer: D
Explanation: The attestation is valid, so the artifact can be tied to the recorded workflow run. But provenance is only as trustworthy as the builder, and a shared self-hosted runner outside the team’s control can still undermine the build, so the policy blocks deployment.
A GitHub artifact attestation can prove that dist/app.tgz came from the recorded workflow run, and the log confirms that provenance verification passed. But that assurance depends on trusting the workflow, its dependencies, and the runner environment. Because this job ran on a shared self-hosted runner managed outside the release team, the attestation cannot by itself prove the build environment was safe from tampering.
Pinning actions is also a good hardening step, but the immediate reason for the block is the untrusted runner environment named in the policy result.
- Missing contents: write is not the issue here; the attestation already verified, and the policy message points to builder trust.
- Pinning actions/checkout to a full SHA improves supply-chain integrity, but it is not required for attestation verification and does not match the stated block reason.
- subject-path can point to a file created in the workspace; it does not have to be an uploaded workflow artifact.

Topic: Consume and Troubleshoot Workflows
A workflow run creates four jobs, including windows-latest with Python 3.11 and experimental: true. The team expected only three jobs because that combination was excluded. Why does the workflow still run that job?
name: test
on: push
jobs:
unit-tests:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
python: ['3.10', '3.11']
exclude:
- os: windows-latest
python: '3.11'
include:
- os: windows-latest
python: '3.11'
experimental: true
steps:
- run: echo "${{ matrix.os }} ${{ matrix.python }}"
Options:
A. Repeating os and python in include always duplicates a matching job.
B. exclude must also specify experimental: true to remove the included job.
C. Quoted version values prevent exclude from matching python: '3.11'.
D. include is processed after exclude, so it adds that combination back as a new job.
Best answer: D
Explanation: GitHub Actions first removes windows-latest/3.11 from the base matrix, leaving three jobs. It then processes include, and because no matching job remains, that entry is added as a new standalone combination.
This behavior comes from how matrix expansion is ordered. The base Cartesian product here is four combinations: two operating systems times two Python versions. The exclude block removes windows-latest with 3.11, so three jobs remain.
After that, include is evaluated. An include item can merge into an existing matrix job only if a matching combination still exists and the added values do not change the original matrix values. In this case, the matching Windows 3.11 combination was already removed, so the include entry cannot merge into anything. GitHub Actions therefore creates a new matrix job for it.
The key takeaway is that include can reintroduce a combination that was previously excluded.
- Adding experimental: true to exclude would not fix this, because the extra job is created later by include.
- include does not always duplicate a job; it merges when the matching combination still exists.
- Quoting '3.11' as a string is normal YAML and does not stop matrix matching.

Topic: Author and Manage Workflows
A repository uses this workflow:
on:
push:
branches:
- main
paths:
- 'src/**'
pull_request:
branches:
- main
paths:
- 'src/**'
jobs:
test:
runs-on: ubuntu-latest
steps:
- run: echo "run"
A developer pushes one commit that changes src/app.js to branch feature/api, and then opens a pull request from feature/api into main. No other commits are pushed after the pull request is opened.
How many workflow runs are created?
Options:
A. No runs, because all events and filters under on must match together before the workflow starts.
B. Two runs, because listing both push and pull_request causes both to run for the same commit.
C. One run, created by push, because the branch will later merge into main.
D. One run, created by pull_request, because that event matches main as the base branch and src/** as a changed path.
Best answer: D
Explanation: The workflow runs once, on the pull_request event. In GitHub Actions, top-level events under on are alternatives, but each event applies its own filters independently.
GitHub Actions treats multiple events under on as OR conditions: any listed event can start the workflow. For each individual event, its filters must match together.
Here, the push event requires both:
- a push to the main branch
- a changed path matching src/**

The commit changed src/app.js, but it was pushed to feature/api, so the push trigger does not match. When the pull request is opened, the pull_request event checks the PR’s base branch, which is main, and the changed files, which include src/app.js. That event matches, so one run is created.
The key takeaway is that pull_request.branches filters the target branch, not the source branch.
- push and pull_request are separate triggers, and the push branch filter does not match feature/api.
- push.branches evaluates the branch receiving the commit at push time, not where it might be merged later.
- The events under on are not cumulative requirements; each event is evaluated on its own.

Topic: Author and Manage Workflows
A team stores setup commands inside ./ci/build.sh and wants a strict sequence: build, then test, then package, then deploy. Which single edit best enforces that order?
on: push
jobs:
build:
runs-on: ubuntu-latest
steps:
- run: ./ci/build.sh
test:
runs-on: ubuntu-latest
needs: build
steps:
- run: ./ci/test.sh
package:
runs-on: ubuntu-latest
needs: build
steps:
- run: ./ci/package.sh
deploy:
runs-on: ubuntu-latest
needs: package
steps:
- run: ./ci/deploy.sh
Options:
A. Change deploy to needs: [package, test]
B. Add if: success() to deploy
C. Change test to needs: [build, package]
D. Change package to needs: test
Best answer: D
Explanation: GitHub Actions controls job order with needs. In the current workflow, test and package both depend only on build, so they can run in parallel; making package depend on test creates the required sequence.
needs defines the workflow’s job dependency graph. Here, both test and package wait for build, but there is no dependency between test and package, so GitHub Actions can start them at the same time after build finishes.
Changing package to needs: test creates a strict chain:
- build runs first
- test runs after build
- package runs after test
- deploy already runs after package

That produces the intended end-to-end order without changing the job steps themselves. Adding more conditions to deploy can delay deployment, but it does not stop packaging from starting too early.
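The single edit, in place, looks like this:

```yaml
package:
  runs-on: ubuntu-latest
  needs: test            # was: needs: build — now waits for tests
  steps:
    - run: ./ci/package.sh
```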
- Making deploy depend on both package and test still allows package to run before tests finish.
- Making test depend on package changes the sequence to build, package, then test.
- Adding if: success() does not create a new job ordering rule.

Topic: Consume and Troubleshoot Workflows
A repository uses this workflow:
jobs:
test:
strategy:
matrix:
os: [ubuntu-latest, windows-latest]
node: [18, 20]
runs-on: ${{ matrix.os }}
steps:
- run: npm test
- if: failure()
uses: actions/upload-artifact@v4
with:
name: test-results-${{ matrix.os }}-${{ matrix.node }}
path: test-results/
A run failed only for windows-latest with node: 20. The maintainer wants to change the workflow to add debug steps and then delete the failed run. What is the best next diagnostic action?
Options:
A. Rerun the full matrix to refresh runner state.
B. Delete the failed run after noting the exit code.
C. Commit debug steps, then rerun only the failed job.
D. Download the failed job logs and test-results-windows-latest-20 artifact first.
Best answer: D
Explanation: Before changing workflow files or deleting runs, preserve the evidence from the original failure. Downloading the failing job logs and its failure artifact keeps the exact matrix-specific context available for comparison after any later changes or reruns.
The key troubleshooting practice is to preserve the original diagnostic evidence before you modify the workflow or remove the run. In this case, the failure is limited to one matrix variant, so the most valuable evidence is the logs and uploaded artifact from windows-latest with Node 20.
If you change the workflow first, later runs may behave differently because the workflow, logging, or runner conditions have changed. If you delete the run, you lose the original context entirely. Reruns are useful later, but only after the initial evidence is preserved.
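One way to capture that evidence first, assuming the GitHub CLI is available and `<run-id>` stands in for the failed run's ID (a placeholder, not a value from the question):

```shell
# Save the failed job's logs before editing the workflow or deleting the run
gh run view <run-id> --log-failed > windows-node20-failure.log

# Download the artifact the failure() step uploaded for this matrix variant
gh run download <run-id> -n test-results-windows-latest-20
```

Only after the logs and artifact are saved locally is it safe to commit debug steps, rerun, or delete the original run.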
Topic: Author and Maintain Actions
A repository contains a local action intended to run from this workflow, but the job fails with: Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under '.github/actions/standards-check'.
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run standards check
uses: ./.github/actions/standards-check
Current repository layout:
.github/
workflows/
ci.yml
actions/
standards-check/
action-files/
action.yml
check.sh
Which edit best fixes the workflow without changing the uses path?
Options:
A. Rename action.yml to standards-check.yml in action-files/.
B. Move the action files up so .github/actions/standards-check/action.yml exists.
C. Add working-directory: ./.github/actions/standards-check/action-files to the step.
D. Move the action under .github/workflows/standards-check/.
Best answer: B
Explanation: For a local action, the uses path must resolve to a directory that directly contains the action metadata file. Here, uses: ./.github/actions/standards-check points to the parent folder, but action.yml is nested one level deeper.
GitHub Actions resolves a local uses path from the checked-out repository workspace. After actions/checkout, the runner can access repository files, but the directory named in uses must itself contain action.yml, action.yaml, or Dockerfile.
In this case, the workflow references .github/actions/standards-check, but that folder does not directly contain the metadata file. The actual action is nested under action-files/, so the runner cannot load it. To keep the existing uses line, restructure the repository so the referenced folder is the action root and directly contains action.yml and its companion files.
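After the fix, the repository layout has the metadata file directly in the referenced directory:

```
.github/
  workflows/
    ci.yml
  actions/
    standards-check/
      action.yml
      check.sh
```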
Changing the step working directory does not affect how uses resolves a local action path.
- working-directory affects shell commands in run, not how a uses action path is located.
- .github/workflows is for workflow files, not the target directory of this local action reference.
- A local action's metadata file must be named action.yml, action.yaml, or Dockerfile, not an arbitrary YAML filename.

Topic: Manage GitHub Actions for the Enterprise
A repository can use only the shared-linux runner group. The only self-hosted runners available to this repository are shown below, and GitHub-hosted runners are enabled.
| Runner | Group | Repo can use? | Labels |
|---|---|---|---|
| build-01 | shared-linux | Yes | self-hosted, linux, x64 |
| build-arm | shared-linux | Yes | self-hosted, linux, ARM64 |
| gpu-01 | research-gpu | No | self-hosted, linux, x64, gpu |
jobs:
compile:
runs-on: ubuntu-latest
steps:
- run: echo compile
package:
needs: compile
runs-on: [self-hosted, linux, x64]
steps:
- run: echo package
perf:
needs: package
runs-on: [self-hosted, linux, x64, gpu]
steps:
- run: echo perf
release:
needs: [package, perf]
runs-on: ubuntu-latest
steps:
- run: echo release
Which execution trace matches GitHub Actions behavior?
Options:
A. compile and package both run on build-01, perf runs on gpu-01, and release then runs.
B. compile runs on GitHub-hosted Ubuntu, package runs on either build-01 or build-arm, and perf runs on gpu-01.
C. compile runs on GitHub-hosted Ubuntu, package runs on build-01, perf falls back to GitHub-hosted Ubuntu, and release then runs.
D. compile runs on GitHub-hosted Ubuntu, package runs on build-01, perf stays queued, and release never starts.
Best answer: D
Explanation: runs-on matches eligible runners exactly. package can use build-01, but perf requires a runner with all four labels and accessible group membership, so it waits indefinitely and blocks release through needs.
In GitHub Actions, runs-on selects a runner only if that runner is eligible and satisfies the full label set. For [self-hosted, linux, x64], one accessible self-hosted runner must have all three labels, so package goes to build-01. build-arm does not qualify because ARM64 is not x64.
The perf job requires [self-hosted, linux, x64, gpu]. Although gpu-01 has those labels, the repository cannot use its runner group, so that runner is not eligible. The job remains queued rather than switching to a different runner type. ubuntu-latest is a GitHub-hosted label, so compile and release target GitHub-hosted Ubuntu runners. Because release depends on both package and perf, it cannot start while perf has no eligible runner.
The key takeaway is that label matching is effectively an AND check, and runner-group access also affects routing.
- Running compile on build-01 is incorrect because ubuntu-latest targets GitHub-hosted runners, not self-hosted Linux machines.
- Running package on build-arm fails because one runner must match every listed label, including x64.
- Routing perf to gpu-01 overlooks that the repository cannot use the research-gpu runner group.
- Saying perf falls back to GitHub-hosted Ubuntu is wrong because GitHub Actions does not automatically switch runner types when self-hosted labels do not match.

Topic: Author and Manage Workflows
An organization standardizes version calculation in a reusable workflow named version.yml. Application repositories call it and then run a downstream deploy job that needs the computed release_version.
The caller workflow already contains:
jobs:
prepare:
uses: octo-org/platform/.github/workflows/version.yml@v1
deploy:
needs: prepare
runs-on: ubuntu-latest
steps:
- run: echo "Deploying version ${{ needs.prepare.outputs.release_version }}"
Which configuration for version.yml is the best way to return release_version to the downstream deploy job?
Options:
A. Define on.workflow_call.outputs.release_version, map it to a job output, and set that job output from a step output written to $GITHUB_OUTPUT.
B. Reference the reusable workflow’s internal step output directly from the caller by using ${{ jobs.prepare.steps.calc.outputs.release_version }}.
C. Upload release_version as an artifact in the reusable workflow and download it in deploy.
D. Write release_version to $GITHUB_ENV in the reusable workflow so the caller can read ${{ env.release_version }} in deploy.
Best answer: A
Explanation: Reusable workflow data must be explicitly published through on.workflow_call.outputs. The value flows from a step output written to GITHUB_OUTPUT, to a job output in the called workflow, and then to the caller through needs.prepare.outputs.release_version.
The key concept is output chaining across workflow boundaries: step output -> job output -> reusable workflow output -> caller job output. A downstream job in the caller workflow cannot read internal steps from the called workflow, and environment files such as GITHUB_ENV do not cross that boundary.
To expose release_version, the reusable workflow must publish it under on.workflow_call.outputs and map that value from a job output:
on:
workflow_call:
outputs:
release_version:
value: ${{ jobs.version.outputs.release_version }}
jobs:
version:
outputs:
release_version: ${{ steps.calc.outputs.release_version }}
steps:
- id: calc
run: echo "release_version=1.2.3" >> $GITHUB_OUTPUT
Then the caller can use needs.prepare.outputs.release_version in downstream jobs. Using artifacts can move data, but it is not the intended mechanism for a simple reusable-workflow output.
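The step-level distinction between the two environment files can be sketched in one job (values here are illustrative):

```yaml
steps:
  - id: calc
    run: |
      # GITHUB_OUTPUT: readable by later steps and jobs via steps.calc.outputs.*,
      # and publishable across the workflow_call boundary via job/workflow outputs
      echo "release_version=1.2.3" >> "$GITHUB_OUTPUT"
      # GITHUB_ENV: becomes an environment variable for later steps in THIS job only
      echo "RELEASE_VERSION=1.2.3" >> "$GITHUB_ENV"
  - run: echo "$RELEASE_VERSION and ${{ steps.calc.outputs.release_version }}"
```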
GITHUB_ENV only affects later steps in the same job, so it does not publish a value from a called workflow to caller jobs.

Topic: Manage GitHub Actions for the Enterprise
An enterprise uses self-hosted runners for a code-signing workflow. Every job currently spends about 15 minutes installing an 8 GB vendor SDK and an HSM client. Security requires the exact approved versions to be present on every run, and the team does not want workflows to download these tools dynamically. Which configuration is the best fit?
Options:
A. Switch to ubuntu-latest and install the SDK inside a job container at startup.
B. Move the installation steps into a reusable workflow that each repository calls before signing.
C. Create a custom VM image for the self-hosted runners with the approved SDK and HSM client preinstalled.
D. Keep generic self-hosted runners and restore the SDK with actions/cache when possible.
Best answer: C
Explanation: The best fit is a custom self-hosted runner image with the required tools already installed. That provides a prebuilt, controlled environment and removes the 15-minute setup step from every job.
When setup time and software control are both critical, GitHub Actions jobs should run on a prebuilt environment instead of assembling the toolchain during each workflow run. A custom self-hosted runner image lets administrators install and validate the exact SDK and HSM client versions once, then reuse that image for every signing job. The workflow simply targets those runners, so jobs start with the required software already available.
Reusable workflows, caches, and startup installation scripts can centralize or speed setup, but they still rely on runtime provisioning rather than a fixed image.
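A signing workflow then simply targets those runners; a minimal sketch, assuming the image's runners carry a hypothetical signing label (the label name and script path are assumptions, not from the question):

```yaml
jobs:
  sign:
    runs-on: [self-hosted, linux, signing]   # "signing" is a hypothetical label for runners built from the custom image
    steps:
      - uses: actions/checkout@v4
      - run: ./sign.sh   # approved SDK and HSM client are preinstalled on the image, so no setup step
```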
The ubuntu-latest container option still builds the environment at run time and gives less control over the host toolchain.

Use the GitHub Actions GH-200 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Try GitHub Actions GH-200 on Web View GitHub Actions GH-200 Practice Test
Read the GitHub Actions GH-200 Cheat Sheet on Tech Exam Lexicon for concept review before another timed run.