Try 10 focused GitHub Actions GH-200 questions on Enterprise Actions, with explanations, then continue with IT Mastery.
Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.
| Field | Detail |
|---|---|
| Exam route | GitHub Actions GH-200 |
| Topic area | Manage GitHub Actions for the Enterprise |
| Blueprint weight | 25% |
| Page purpose | Focused sample questions before returning to mixed practice |
Use this page to isolate Manage GitHub Actions for the Enterprise for GitHub Actions GH-200. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.
| Pass | What to do | What to record |
|---|---|---|
| First attempt | Answer without checking the explanation first. | The fact, rule, calculation, or judgment point that controlled your answer. |
| Review | Read the explanation even when you were correct. | Why the best answer is stronger than the closest distractor. |
| Repair | Repeat only missed or uncertain items after a short break. | The pattern behind misses, not the answer letter. |
| Transfer | Return to mixed practice once the topic feels stable. | Whether the same skill holds up when the topic is no longer obvious. |
Blueprint context: 25% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.
These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.
Topic: Manage GitHub Actions for the Enterprise
A platform team is publishing a reusable deployment workflow for all repositories in an organization. Consumers must know the exact contract before calling it.
The contract includes:

- Inputs: `environment`, `artifact-name`
- Secret: `registry-token`
- `GITHUB_TOKEN` permissions: `contents: read`, `id-token: write`
- Self-hosted runner labels including `deploy`

Which configuration is best?
Options:
A. Publish a workflow template with commented placeholders for secrets and permissions, and let each repo adapt runners.
B. Publish a composite action and ask callers to pass secrets as environment variables and choose permissions themselves.
C. Create a reusable workflow with required workflow_call inputs and secret, explicit minimal permissions, and documented self-hosted, linux, deploy labels.
D. Create a reusable workflow with secrets: inherit, default permissions, and runner details only in sample caller files.
Best answer: C
Explanation: The best choice is a reusable workflow that declares its required workflow_call inputs and secret, sets the minimum permissions, and clearly states the self-hosted runner labels it needs. That creates a clear, enforceable contract for consumers instead of relying on hidden defaults or tribal knowledge.
Shared workflows should expose a clear contract. In GitHub Actions, reusable workflows can declare required inputs and secrets under on.workflow_call, so callers know exactly what must be passed and GitHub can validate the call. The workflow should also declare the minimal `permissions` it needs so that GITHUB_TOKEN access is explicit and least privilege is preserved. Runner expectations, especially self-hosted labels, should be documented with the workflow so consumers know whether the required runner is available to their repositories. Templates, examples, or inherited secrets add convenience, but they do not replace an explicit reusable workflow interface. Hiding requirements behind defaults makes enterprise reuse, governance, and troubleshooting harder.
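A minimal sketch of such a contract, using the input and secret names from the scenario (the job body and runner labels are illustrative):

```yaml
# shared deployment workflow in the platform repository (sketch)
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      artifact-name:
        required: true
        type: string
    secrets:
      registry-token:
        required: true

# explicit least-privilege GITHUB_TOKEN scopes
permissions:
  contents: read
  id-token: write

jobs:
  deploy:
    # documented runner requirement for consumers
    runs-on: [self-hosted, linux, deploy]
    steps:
      - run: echo "Deploying ${{ inputs.artifact-name }} to ${{ inputs.environment }}"
```

Because the inputs and secret are marked `required: true`, a caller that omits any of them fails validation instead of failing mid-deployment.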
`secrets: inherit` hides the actual secret contract, and default token permissions are not a good substitute for documented least-privilege scopes.

Topic: Manage GitHub Actions for the Enterprise
An organization workflow template adds this caller workflow to service repositories, and it invokes a reusable workflow stored in org/.github.
```yaml
# caller workflow
on: workflow_dispatch
jobs:
  scan:
    strategy:
      matrix:
        service: [api, web]
    uses: org/.github/.github/workflows/scan.yml@v1
    with:
      service: ${{ matrix.service }}
  report:
    needs: scan
    runs-on: ubuntu-latest
    steps:
      - run: echo "final=${{ needs.scan.outputs.service_name }}"
```
```yaml
# reusable workflow: scan.yml
on:
  workflow_call:
    inputs:
      service:
        required: true
        type: string
    outputs:
      service_name:
        value: ${{ jobs.scan.outputs.service_name }}
jobs:
  scan:
    runs-on: ubuntu-latest
    outputs:
      service_name: ${{ steps.out.outputs.service_name }}
    steps:
      - id: out
        run: echo "service_name=${{ inputs.service }}" >> $GITHUB_OUTPUT
```
A manual run starts both matrix copies at the same time. scan (web) finishes successfully after 1 minute, and scan (api) finishes successfully after 3 minutes.
Which statement matches GitHub Actions behavior?
Options:
A. report runs twice, once for each matrix value.
B. report is skipped because matrix calls to reusable workflows do not expose outputs.
C. report runs once and prints final=web.
D. report runs once and prints final=api.
Best answer: D
Explanation: A reusable workflow can expose outputs to the caller through workflow_call and job outputs. When the caller uses a matrix with that reusable workflow, the downstream job sees the output from the last successful matrix copy that completed and set a value, so this run prints api.
The reusable workflow correctly maps a step output to a job output, then maps that job output to a workflow_call output. In the caller, report waits for all scan matrix copies because it uses needs: scan.
For matrix executions of a reusable workflow, the workflow_call output reflects the value written by the last matrix copy that completed successfully and set a value. Because scan (web) finishes first and scan (api) finishes later, needs.scan.outputs.service_name resolves to api. The report job then runs one time after both matrix copies finish.
The closest wrong idea is assuming the first completion or matrix order determines the final output; the result is based on the last successful completion that set a value.
The `scan` job is expanded by the matrix; `report` is a single job.

Topic: Manage GitHub Actions for the Enterprise
A repository has one online self-hosted runner labeled [self-hosted, linux, corp-net]. That runner is inside the corporate network and is the only runner that can resolve and reach registry.corp.local. GitHub-hosted runners cannot reach that host.
```yaml
name: release
on:
  workflow_dispatch:
jobs:
  prepare:
    runs-on: ubuntu-latest
    outputs:
      publish: ${{ steps.flag.outputs.publish }}
    steps:
      - id: flag
        run: echo "publish=true" >> "$GITHUB_OUTPUT"
  publish:
    needs: prepare
    if: needs.prepare.outputs.publish == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: curl -fsS https://registry.corp.local/upload
  deploy:
    needs: publish
    runs-on: [self-hosted, linux, corp-net]
    steps:
      - run: echo "Deploying"
```
What happens when the workflow is manually triggered?
Options:
A. publish is skipped because downstream jobs cannot use needs.prepare.outputs.publish in if:.
B. publish succeeds on ubuntu-latest, and deploy then runs on corp-net.
C. prepare succeeds, publish runs and fails reaching the private host, and deploy is skipped.
D. prepare succeeds, publish is routed to corp-net, and deploy runs.
Best answer: C
Explanation: Runner selection is per job, based on that job’s own runs-on value. The output from prepare makes publish eligible to run, but publish still uses a GitHub-hosted runner that cannot reach the private endpoint, so the next job is skipped.
The core concept is that private-network reachability does not carry across jobs; each job runs where its own runs-on setting places it. In this workflow, prepare runs on ubuntu-latest and writes publish=true to GITHUB_OUTPUT, which is then exposed as a job output. That makes the publish job condition evaluate to true, so publish starts.
- `publish` runs on `ubuntu-latest`, not on the self-hosted runner.
- The `curl` command fails because only the `corp-net` self-hosted runner can reach `registry.corp.local`.
- `deploy` depends on `publish`, so it is skipped after `publish` fails.

The key takeaway is that the job requiring stable private-network access must itself target the self-hosted runner labels.
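Following that takeaway, one possible fix is to point the network-dependent job itself at the corporate runner; a sketch using the labels from the scenario:

```yaml
  publish:
    needs: prepare
    if: needs.prepare.outputs.publish == 'true'
    # target the runner that can actually reach registry.corp.local
    runs-on: [self-hosted, linux, corp-net]
    steps:
      - run: curl -fsS https://registry.corp.local/upload
```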
A self-hosted runner merely being online does not cause an `ubuntu-latest` job to move to that runner. Values written to `GITHUB_OUTPUT` are available through `needs.<job>.outputs` in downstream job conditions.

Topic: Manage GitHub Actions for the Enterprise
A new service repository was created from the platform team’s template repository. Its first deployment workflow fails in this step:
```yaml
- name: Validate config
  run: |
    test -n "${{ vars.REGISTRY_HOST }}" || (echo "REGISTRY_HOST missing" && exit 1)
    test -n "${{ secrets.REGISTRY_TOKEN }}" || (echo "REGISTRY_TOKEN missing" && exit 1)
```
The team confirms that REGISTRY_HOST and REGISTRY_TOKEN were created only as repository-scoped values in the template repository. The same values must be reused by many service repositories in this organization.
What is the best explanation for the failed run?
Options:
A. permissions: is too restrictive, so vars and secrets are not injected.
B. Template-repository values appear in new repos only after the first successful run.
C. Deployment jobs can use only environment-scoped values.
D. Repository-scoped values stay in the template repo; shared values should be organization-scoped with repository access control.
Best answer: D
Explanation: The workflow is running in the new repository, not in the template repository. Repository-scoped secrets and variables do not follow the workflow file into other repositories, so values reused across many repositories belong at organization scope, optionally limited to selected repositories.
GitHub Actions resolves repository-scoped secrets and variables only from the repository where the workflow run occurs. Creating a new repository from a template copies files, but it does not copy or share the template repository’s secrets or variables. In this scenario, the failed validation step is strong evidence that the new repository has no REGISTRY_HOST or REGISTRY_TOKEN defined for its own scope.
For scope selection:

- Organization scope with selected-repository access fits values reused across many repositories.
- Environment scope fits values that differ between deployment targets such as `staging` or `production` within a repository.

The closest distractor is environment scope, but that solves per-environment control, not cross-repository reuse.
A restrictive `permissions:` block controls what the `GITHUB_TOKEN` can do, not whether configured secrets and variables exist.

Topic: Manage GitHub Actions for the Enterprise
The repository octo/payments can use both runner groups below. Each group has exactly one idle self-hosted runner.
- `build-large`: labels `self-hosted`, `linux`, `large`; no internal network access
- `prod-net`: labels `self-hosted`, `linux`, `prodnet`; can reach internal deployment endpoints

```yaml
on:
  push:
    branches: [main]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - run: echo lint
  package:
    needs: lint
    strategy:
      matrix:
        component: [api, web]
    runs-on:
      group: build-large
      labels: linux
    steps:
      - run: echo "packaging ${{ matrix.component }}"
  deploy-check:
    needs: package
    runs-on:
      group: prod-net
      labels: linux
    steps:
      - run: curl -I http://internal-deploy.local/health
```
A push to main triggers the workflow. Which execution trace matches GitHub Actions behavior?
Options:
A. lint runs first, the two package matrix jobs run one after the other on build-large, and deploy-check then runs on prod-net.
B. lint runs first, one package matrix job runs on build-large, the other runs on prod-net, and deploy-check starts after the first package job finishes.
C. lint runs first, both package matrix jobs finish, and deploy-check falls back to ubuntu-latest because prod-net is reserved for deployment traffic.
D. lint runs first, the two package matrix jobs run at the same time on build-large, and deploy-check may also use build-large because it matches linux.
Best answer: A
Explanation: The package job expands into two matrix jobs, but only one eligible runner exists in build-large, so they run serially. After both finish, deploy-check runs only on prod-net because group restricts job placement to that runner group.
This trace depends on three GitHub Actions behaviors: needs, matrix expansion, and runner-group routing. lint runs first because package depends on nothing but deploy-check depends on package. The package job expands into two separate jobs, one for api and one for web. Because only one runner in the build-large group matches the job, those two matrix jobs cannot run simultaneously; one waits until the runner is free.
deploy-check does not start until all package matrix jobs succeed, because needs: package waits for the whole matrix job set. When it becomes eligible, runs-on targets the prod-net group with the linux label, so GitHub Actions routes it only to a runner in that group. It does not use another self-hosted runner just because the label matches, and it does not fall back to GitHub-hosted runners.
- `group: build-large` or `group: prod-net` limits routing to that specific runner group.
- `needs: package` waits for all matrix copies of `package`, not just the first one.
- A job that targets a runner group does not fall back to `ubuntu-latest`.

Topic: Manage GitHub Actions for the Enterprise
An enterprise centralizes reusable workflows in octo-org/.github. In the payments-service repository, the organization variable DEPLOY_REGION is eastus, and the organization secret DEPLOY_TOKEN is shared with the repository.
The following workflow runs in payments-service:
```yaml
name: Release
on: workflow_dispatch
env:
  RUNTIME_REGION: ${{ vars.DEPLOY_REGION }}
jobs:
  deploy:
    uses: octo-org/.github/.github/workflows/reuse-deploy.yml@v1
    with:
      service: payments
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
```
The called reusable workflow is:
```yaml
on:
  workflow_call:
    inputs:
      service:
        required: true
        type: string
    secrets:
      deploy_token:
        required: true
jobs:
  trace:
    runs-on: ubuntu-latest
    steps:
      - run: echo "vars=${{ vars.DEPLOY_REGION }}"
      - run: echo "env=$RUNTIME_REGION"
      - run: echo "token-set=${{ secrets.deploy_token != '' }}"
```
When Release is manually started, which trace matches GitHub Actions behavior?
Options:
A. vars=eastus, env= is empty, token-set=false
B. vars=eastus, env=eastus, token-set=true
C. vars=eastus, env= is empty, token-set=true
D. vars= is empty, env=eastus, token-set=true
Best answer: C
Explanation: Reusable workflows can read organization or repository configuration variables through the vars context of the caller repository. But the caller workflow’s env block is not inherited by the called workflow, and the explicitly mapped secret is available there.
This scenario tests how reusable workflows receive configuration. In a called workflow, vars.DEPLOY_REGION resolves because organization variables shared with the caller repository are available through the vars context. By contrast, env values defined in the caller workflow are not automatically passed into the reusable workflow, so $RUNTIME_REGION expands to an empty value unless you pass it explicitly as an input or define it again in the called workflow.
The secret behaves differently from env: reusable workflows must receive secrets through explicit mapping or secrets: inherit. Here, deploy_token is mapped from secrets.DEPLOY_TOKEN, so secrets.deploy_token is set and the expression evaluates to true.
The common mistake is assuming caller env behaves like shared configuration variables.
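If the called workflow does need the region, one possible fix is to declare it as an explicit `workflow_call` input in the reusable workflow and pass it from the caller. A sketch (the `region` input name is illustrative, not from the original workflow):

```yaml
# caller: pass the configuration value explicitly instead of relying on env
jobs:
  deploy:
    uses: octo-org/.github/.github/workflows/reuse-deploy.yml@v1
    with:
      service: payments
      region: ${{ vars.DEPLOY_REGION }}
    secrets:
      deploy_token: ${{ secrets.DEPLOY_TOKEN }}
```

The reusable workflow would then declare `region` under `on.workflow_call.inputs` and read it as `${{ inputs.region }}`.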
- The option expecting `env` propagation fails because workflow-level `env` in the caller does not flow into the called reusable workflow.
- The option expecting an empty `vars` value fails because organization variables shared with the caller repository are available through the `vars` context.
- `token-set` is `true` because the caller explicitly maps `DEPLOY_TOKEN` to the reusable workflow secret `deploy_token`.

Topic: Manage GitHub Actions for the Enterprise
A repository deploys from main with the workflow below. The repository has an environment named production that contains the secret DEPLOY_API_KEY and the variable APP_NAME. Required reviewers are configured on that environment, and there are no repository- or organization-scoped entries with those names. During the deploy job, both values are empty.
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: ./deploy.sh
        env:
          DEPLOY_API_KEY: ${{ secrets.DEPLOY_API_KEY }}
          APP_NAME: ${{ vars.APP_NAME }}
```
You need to fix the missing values while preserving the existing production approval and narrow scope controls. Which configuration is best?
Options:
A. Convert both entries to organization-scoped values for all repositories.
B. Copy DEPLOY_API_KEY and APP_NAME to repository scope.
C. Set top-level env from the same secrets and vars contexts.
D. Add environment: production to the deploy job.
Best answer: D
Explanation: Environment-scoped secrets and variables are not available unless the job explicitly targets that environment. Adding environment: production makes both values available and preserves the environment’s required-reviewer protection.
In GitHub Actions, environment secrets and environment variables are attached to a job only when that job declares the environment with environment:. Because this workflow’s deploy job does not reference production, both secrets.DEPLOY_API_KEY and vars.APP_NAME resolve empty even though they exist in the environment.
A minimal fix is:
```yaml
jobs:
  deploy:
    environment: production
```
That job-to-environment binding also activates the environment’s protection rules, such as required reviewers. Moving the values to broader scopes would make them available, but it would weaken the scoped governance the scenario says to keep.
Setting top-level `env` does not make environment-scoped secrets or vars appear; the job still has to declare the environment.

Topic: Manage GitHub Actions for the Enterprise
A nightly workflow on GitHub-hosted runners started failing even though no commits or workflow changes were made since the last successful run.
```yaml
strategy:
  matrix:
    os: [ubuntu-latest, ubuntu-22.04]
runs-on: ${{ matrix.os }}
steps:
  - uses: actions/checkout@v4
  - run: cmake --version
  - run: ./build.sh
```
Today’s run results:
- `ubuntu-latest`: `/bin/bash: cmake: command not found`
- `ubuntu-22.04`: job succeeds

What is the best next diagnostic action?
Options:
A. Pin actions/checkout to a full commit SHA
B. Check organization policy for blocked ubuntu-latest use
C. Review ubuntu-latest image docs and release notes
D. Delete the Actions cache and rerun
Best answer: C
Explanation: This pattern strongly suggests a GitHub-hosted runner image change. When ubuntu-22.04 still works but ubuntu-latest suddenly loses a preinstalled tool, the highest-value next step is to check the runner image documentation and release notes.
GitHub-hosted runner images are updated regularly, and moving labels such as ubuntu-latest can point to a different underlying image over time. In this scenario, the failure happens before the build at cmake --version, there were no repository changes, and the pinned ubuntu-22.04 matrix variant still succeeds. That makes a preinstalled software change on the ubuntu-latest image the most likely explanation.
`ubuntu-latest` is an alias, not a fixed OS version. If the workflow depends on a specific tool, the durable fix is to pin the runner image or install the required version explicitly.
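Both durable fixes can be sketched against the scenario's job (illustrative; pin whichever image version you have actually validated):

```yaml
strategy:
  matrix:
    # pin a dated image instead of the moving ubuntu-latest alias
    os: [ubuntu-22.04]
runs-on: ${{ matrix.os }}
steps:
  - uses: actions/checkout@v4
  # or install the tool explicitly so the job does not depend on
  # whatever happens to be preinstalled on the image
  - run: sudo apt-get update && sudo apt-get install -y cmake
  - run: cmake --version
  - run: ./build.sh
```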
The `ubuntu-latest` job already started and reached a shell step, so it was not blocked from runner use.

Topic: Manage GitHub Actions for the Enterprise
An organization moved JavaScript builds to an org runner group backed by plain self-hosted Linux VMs. The workflow below now fails, although the same commit passes in another repository that still uses ubuntu-latest.
```yaml
jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: node -v
      - run: npm ci
      - run: npm test
```
```text
v16.20.2
npm ERR! code EBADENGINE
npm ERR! notsup Required: {"node":">=20"}
```
What is the best explanation for the failure?
Options:
A. The job needs packages: read permission so npm ci can evaluate the engine requirement.
B. The Node runtime should be restored from actions/cache instead of installed during the job.
C. The job assumes the runner already has Node 20; add actions/setup-node to install the required version.
D. The workflow must pin ubuntu-latest to a specific image to stop hosted image drift.
Best answer: C
Explanation: The log already shows the decisive evidence: the job is running Node 16 while the project requires Node 20 or later. On self-hosted runners, labels route the job to a machine, but they do not guarantee the language runtime unless you install or preconfigure it.
This failure is about runtime version management on self-hosted runners. The step output shows node -v returning v16.20.2, and npm ci stops because the package declares node >=20. That means the workflow is relying on whatever Node version happens to be present on the runner image.
For portable, repeatable builds across repositories and runner types, explicitly install the required tool version in the workflow, such as with actions/setup-node, or use a container image that already contains the required runtime. Runner labels like self-hosted, linux, and x64 only select an eligible runner; they do not enforce a specific Node version.
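A minimal sketch of that fix, assuming `actions/setup-node` is permitted by the organization's Actions policy:

```yaml
jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      # install the version the project declares instead of relying
      # on whatever Node happens to be present on the VM
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: node -v
      - run: npm ci
      - run: npm test
```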
The closest distractor is the hosted image pinning idea, but this job is no longer running on a GitHub-hosted image, so pinning a hosted label such as `ubuntu-latest` does not apply. The `actions/cache` option fails because dependency caching does not install or switch the Node runtime itself.

Topic: Manage GitHub Actions for the Enterprise
A GitHub organization has 40 repositories. Four private deployment repositories need the same REGISTRY_HOST value and the same REGISTRY_TOKEN secret. The other 20 private repositories and all 16 public repositories must not be able to access either value, and the platform team wants to manage both values centrally. Which configuration is best?
Options:
A. Use repository-level secret and variable in each deployment repository.
B. Use organization-level secret and variable scoped to private repositories.
C. Use organization-level secret and variable scoped to selected repositories.
D. Use an organization secret scoped to selected repositories and an organization variable scoped to all repositories.
Best answer: C
Explanation: Organization secrets and organization variables can be centrally managed and scoped by repository visibility. Because only four repositories should access either value, the best fit is to scope both to selected repositories rather than all private repositories or all repositories.
The key concept is that organization-level secrets and variables support centralized management, but their access can still be limited by visibility policy. When only a specific subset of repositories needs a value, selected repositories is the most restrictive organization-wide scope that still avoids duplicating configuration.
In this scenario, only the four deployment repositories may read the values, the values must be identical everywhere they are used, and the platform team must manage them in one place. Using organization-level entries with selected repositories satisfies all of those requirements at once. Scoping to all private repositories is broader than required, and using repository-level entries increases maintenance overhead. Even a non-sensitive variable should not be exposed more widely when the requirement says only four repositories may access it.
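Once the values are defined at organization scope and shared with only the four deployment repositories, each consuming workflow references them exactly as it would repository-scoped values; nothing in the workflow file changes. A sketch (the job and step are illustrative):

```yaml
jobs:
  push-image:
    runs-on: ubuntu-latest
    steps:
      # org-scoped entries resolve through the same vars/secrets contexts
      # as repository-scoped ones
      - run: echo "Pushing to ${{ vars.REGISTRY_HOST }}"
        env:
          REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
```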
Use the GitHub Actions GH-200 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.
Read the GitHub Actions GH-200 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.