
GitHub Actions GH-200: Author and Manage Workflows

Try 10 focused GitHub Actions GH-200 questions on Author and Manage Workflows, with explanations, then continue with IT Mastery.


Open the matching IT Mastery practice page for timed mocks, topic drills, progress tracking, explanations, and full practice.

Try GitHub Actions GH-200 on Web · View full GitHub Actions GH-200 practice page

Topic snapshot

Field              Detail
Exam route         GitHub Actions GH-200
Topic area         Author and Manage Workflows
Blueprint weight   24%
Page purpose       Focused sample questions before returning to mixed practice

How to use this topic drill

Use this page to isolate Author and Manage Workflows for GitHub Actions GH-200. Work through the 10 questions first, then review the explanations and return to mixed practice in IT Mastery.

  • First attempt: Answer without checking the explanation first. Record the fact, rule, calculation, or judgment point that controlled your answer.
  • Review: Read the explanation even when you were correct. Record why the best answer is stronger than the closest distractor.
  • Repair: Repeat only missed or uncertain items after a short break. Record the pattern behind misses, not the answer letter.
  • Transfer: Return to mixed practice once the topic feels stable. Record whether the same skill holds up when the topic is no longer obvious.

Blueprint context: 24% of the practice outline. A focused topic score can overstate readiness if you recognize the pattern too quickly, so use it as repair work before timed mixed sets.

Sample questions

These questions are original IT Mastery practice items aligned to this topic area. They are designed for self-assessment and are not official exam questions.

Question 1

Topic: Author and Manage Workflows

Your team wants editor schema validation to catch reusable workflow input mistakes before runtime.

# .github/workflows/deploy.yml
on:
  workflow_call:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh "${{ inputs.environment }}"

# .github/workflows/release.yml
jobs:
  deploy-prod:
    uses: ./.github/workflows/deploy.yml
    with:
      envrionment: prod

Which edit best enables that validation and fixes the workflow contract?

Options:

  • A. Set env.environment: prod in the caller and keep inputs.environment

  • B. Read vars.environment in the called workflow instead of inputs.environment

  • C. Add permissions: contents: read to both workflows

  • D. Declare workflow_call.inputs.environment with type: string, then pass environment: prod

Best answer: D

Explanation: Reusable workflows expose a contract through on.workflow_call.inputs. Declaring environment there gives schema-aware tooling enough metadata to validate inputs.environment in the called workflow and the caller’s with key before the workflow runs.

In GitHub Actions, the inputs context inside a reusable workflow is defined by its on.workflow_call.inputs metadata. That metadata acts like the workflow’s interface: it tells tooling which inputs are valid, their types, and what names callers may pass in with. In the snippet, the caller uses a misspelled key, envrionment, and the called workflow has not declared any inputs, so there is no clear contract to validate against. Declaring an environment input with a type and then using that exact name in the caller fixes both the context definition and the typo detection path. Using env, vars, or extra permissions does not define reusable workflow input metadata or improve caller validation.

  • Using caller env does not create a reusable workflow input contract for with values.
  • Switching to vars.environment changes the data source instead of validating a per-call input name.
  • Adding contents: read affects token scope, not YAML schema or context validation.
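A minimal corrected pair might look like the sketch below. The input declaration and the fixed `environment: prod` key come straight from the answer; the `push` tag trigger on release.yml is an assumption added only so the snippet is a complete workflow, since the original exhibit omits the caller's trigger.

```yaml
# .github/workflows/deploy.yml
on:
  workflow_call:
    inputs:
      environment:       # declared contract: tooling can now validate callers
        required: true
        type: string

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: ./deploy.sh "${{ inputs.environment }}"

# .github/workflows/release.yml
on:
  push:                  # assumed trigger, not shown in the exhibit
    tags: ['v*']

jobs:
  deploy-prod:
    uses: ./.github/workflows/deploy.yml
    with:
      environment: prod  # typo fixed; name now matches the declared input
```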

Question 2

Topic: Author and Manage Workflows

Your team wants a single job to compute an image tag once and reuse it as IMAGE_TAG in later shell steps. The value does not need to be shared with other jobs.

Exhibit:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Compute tag
        run: |
          # add configuration here
      - name: Build image
        run: docker build -t app:$IMAGE_TAG .
      - name: Publish note
        run: echo "Built $IMAGE_TAG"

Which configuration is best?

Options:

  • A. In Compute tag, export IMAGE_TAG in the shell and rely on later steps to inherit it.

  • B. In Compute tag, write the computed tag to GITHUB_OUTPUT and treat it as a step output.

  • C. In Compute tag, append the computed IMAGE_TAG to GITHUB_ENV so later steps inherit it.

  • D. In Compute tag, write the computed IMAGE_TAG to GITHUB_STEP_SUMMARY for later steps.

Best answer: C

Explanation: GITHUB_ENV is the right choice when later steps in the same job need a value as a normal environment variable. Here, both later run steps already expect IMAGE_TAG, and the value does not need to cross job boundaries.

GITHUB_ENV is an environment file that GitHub Actions reads after a step finishes. Each NAME=value line written to it becomes an environment variable for subsequent steps in the same job, which matches this requirement exactly.

- name: Compute tag
  run: echo "IMAGE_TAG=$(git rev-parse --short HEAD)" >> "$GITHUB_ENV"

Use GITHUB_OUTPUT when you need a formal step output consumed through expressions, such as steps.<id>.outputs.<name>. A shell export only affects the current step process, and GITHUB_STEP_SUMMARY is only for human-readable job summaries.

  • Step outputs can pass data, but they do not directly create the IMAGE_TAG environment variable for later shell steps.
  • Shell export only lasts for the current step, so the next step will not automatically inherit the value.
  • Job summary writes run notes for people to read; it does not set variables for workflow execution.
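For contrast, a sketch of the step-output route the distractor describes; the step id `tag` is a hypothetical name added for illustration. Note the extra id and expression reference it requires compared with the GITHUB_ENV approach:

```yaml
- id: tag                # id is required so later steps can reference the output
  name: Compute tag
  run: echo "IMAGE_TAG=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
- name: Build image
  # output must be referenced through an expression, not a shell variable
  run: docker build -t "app:${{ steps.tag.outputs.IMAGE_TAG }}" .
```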

Question 3

Topic: Author and Manage Workflows

An engineer is diagnosing a matrix-specific failure in this workflow:

jobs:
  verify:
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: ['18', '20']
        exclude:
          - os: windows-latest
            node: '18'
    runs-on: ${{ matrix.os }}
    name: verify (${{ matrix.os }}, node${{ matrix.node }})
    steps:
      - name: Show matrix
        run: echo "os=${{ matrix.os }} node=${{ matrix.node }}"
      - name: Unit tests
        run: npm test
      - name: Upload coverage
        if: ${{ matrix.os == 'ubuntu-latest' && matrix.node == '20' }}
        run: npm run coverage

Run summary:

verify (ubuntu-latest, node18)   success
verify (ubuntu-latest, node20)   failure
verify (windows-latest, node20)  success

Failed job log excerpt:

Show matrix
os=ubuntu-latest node=20

Unit tests
passed

Upload coverage
Error: Missing script: "coverage"

Which statement correctly explains the run?

Options:

  • A. The failure in verify (ubuntu-latest, node20) should cancel verify (windows-latest, node20) despite fail-fast: false.

  • B. Only verify (ubuntu-latest, node20) runs Upload coverage, and fail-fast: false lets the other matrix jobs complete.

  • C. Both Ubuntu combinations run Upload coverage, so verify (ubuntu-latest, node18) should also fail.

  • D. The excluded windows-latest, node18 combination still ran and failed, but it is hidden from the run summary.

Best answer: B

Explanation: The matrix expands to three jobs because windows-latest with Node 18 is excluded. Only the Ubuntu and Node 20 job satisfies the step condition for Upload coverage, and its log shows that step failed while the other matrix jobs still finished because fail-fast is disabled.

To diagnose this run, first trace the matrix expansion, then match the failing log to the step condition. The exclude entry removes the windows-latest and Node 18 combination, so only three jobs are created. The Upload coverage step runs only when matrix.os is ubuntu-latest and matrix.node is 20, so verify (ubuntu-latest, node20) is the only job that can execute it. The log confirms that exact matrix context and shows Unit tests passed before npm run coverage failed. Because fail-fast: false is set, a failure in one matrix job does not cancel the remaining matrix jobs.

The key takeaway is that expanded job names and per-combination logs tell you exactly which matrix entry failed and why.

  • The option about the excluded Windows and Node 18 combination is wrong because excluded matrix entries are not created or run.
  • The option claiming both Ubuntu jobs run coverage is wrong because the condition also requires matrix.node == '20'.
  • The option claiming the Windows and Node 20 job should be canceled is wrong because fail-fast: false disables matrix-wide cancellation on failure.

Question 4

Topic: Author and Manage Workflows

A team wants one job to fan out across operating systems, Node.js versions, dependency modes, and two test partitions. In this workflow, shard represents a test partition. What matrix will GitHub Actions create?

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        node: [18, 20]
        deps: [locked, latest]
        shard: [1, 2]
        exclude:
          - os: windows-latest
            deps: latest
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
      - run: npm test -- --shard=${{ matrix.shard }}/2

Options:

  • A. The workflow is invalid because exclude must also specify node and shard.

  • B. 8 jobs: both operating systems run only locked, because exclude removes every latest combination.

  • C. 12 jobs: Ubuntu runs both dependency modes on both Node versions and both shards; Windows runs only locked on both Node versions and both shards.

  • D. 14 jobs: only one windows-latest + latest combination is removed, so the other Node versions and shards still run.

Best answer: C

Explanation: GitHub Actions expands all matrix axes as a Cartesian product, then applies exclude. Here that starts with 16 combinations, and the partial exclude entry removes every combination where os is windows-latest and deps is latest, leaving 12 jobs.

A matrix job starts by combining every value from every axis:

  • os: 2 values
  • node: 2 values
  • deps: 2 values
  • shard: 2 values

That gives 2 × 2 × 2 × 2 = 16 possible jobs. The exclude section does not need to name every axis. It removes any matrix entry that matches the listed key-value pairs. So os: windows-latest plus deps: latest removes all matching Windows/latest jobs, across both Node versions and both shards. That removes 2 × 2 = 4 jobs, leaving 12.

The key point is that partial exclude entries filter all matching combinations, not just one row.

  • Too broad exclude: The option claiming latest is removed everywhere is wrong because the filter only matches windows-latest with latest.
  • Only one combo removed: The 14-job option misses that omitted axes still vary, so both Node versions and both shards are excluded for that match.
  • Invalid syntax claim: The workflow is valid because exclude entries can specify only the axes needed for matching.

Question 5

Topic: Author and Manage Workflows

A team wants a manually triggered caller workflow to invoke a reusable publishing workflow. The reusable workflow must receive only one repository secret from the caller, NPM_TOKEN; do not use broad secret inheritance.

Exhibit:

# .github/workflows/publish.yml
on:
  workflow_call:
    secrets:
      registry_token:
        required: true

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.registry_token }}

# .github/workflows/release.yml
on:
  workflow_dispatch:

jobs:
  publish:
    uses: ./.github/workflows/publish.yml

Which edit to release.yml best satisfies the requirement?

Options:

  • A. Change publish.yml to use secrets.NPM_TOKEN directly.

  • B. Add with: { registry_token: ${{ secrets.NPM_TOKEN }} } under the calling job.

  • C. Add env: { registry_token: ${{ secrets.NPM_TOKEN }} } under the calling job.

  • D. Add secrets: { registry_token: ${{ secrets.NPM_TOKEN }} } under the calling job.

Best answer: D

Explanation: Reusable workflows do not automatically receive repository secrets from the caller. Because the called workflow declares registry_token under workflow_call.secrets, the caller must pass a value with a secrets: mapping on the job that uses the reusable workflow.

Reusable workflows have a boundary between the caller and the called file. Declaring registry_token under on.workflow_call.secrets means the called workflow expects a secret with that name, but it still has no value until the caller maps one. In the calling job, a secrets: mapping with registry_token: ${{ secrets.NPM_TOKEN }} explicitly passes the caller's repository secret into the reusable workflow.

  • Use secrets: on the job that contains uses:.
  • Match the left side to the callee’s expected secret name.
  • Reference the caller’s stored secret on the right side.
  • Use with: only for inputs, not secrets.

Setting env on the caller job does not automatically cross into the reusable workflow, and changing the callee to reference secrets.NPM_TOKEN would still assume access that was never passed.

  • Using with: passes an input, but the callee declared registry_token as a secret, not an input.
  • Using env: sets an environment variable for the caller job context; it does not map a secret into the reusable workflow.
  • Referencing the caller secret directly assumes implicit access, which reusable workflows do not have unless the caller passes or inherits secrets.
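Applied to the exhibit, the corrected caller is short; everything here (the workflow_dispatch trigger, the callee path, and the NPM_TOKEN secret name) comes from the scenario itself:

```yaml
# .github/workflows/release.yml
on:
  workflow_dispatch:

jobs:
  publish:
    uses: ./.github/workflows/publish.yml
    secrets:
      # left side: name the callee declared; right side: caller's stored secret
      registry_token: ${{ secrets.NPM_TOKEN }}
```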

Question 6

Topic: Author and Manage Workflows

A team wants to keep this job running directly on the GitHub-hosted runner. The PostgreSQL container starts successfully, but the test step cannot connect to localhost:5432. Which edit best fixes the workflow?

jobs:
  integration:
    runs-on: ubuntu-latest
    services:
      db:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: npm test
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/postgres

Options:

  • A. Add permissions: contents: read to the job.

  • B. Add container: node:20 to the job and keep localhost:5432.

  • C. Add ports: - 5432:5432 under services.db.

  • D. Change the host in DATABASE_URL from localhost to db.

Best answer: C

Explanation: Because the job runs directly on the runner, the PostgreSQL service is not reachable on localhost unless its port is published to the host. Adding ports: - 5432:5432 exposes the service correctly for this workflow design.

GitHub Actions service containers behave differently depending on where the job runs. When a job runs directly on the runner VM, steps access a service container through the runner host, so you must publish the container port with services.<name>.ports and connect through localhost:<published-port>. In this case, PostgreSQL is running, but nothing maps container port 5432 to the runner, so localhost:5432 is unavailable.

If the job ran inside a job container, the job container and service containers would share a Docker network. Then the service would usually be reached by its label, such as db:5432, without relying on localhost or host port publishing.

So the minimal correct fix for this runner-based job is to add the port mapping under the service definition.

  • Using the service name works for jobs running in a job container, not for steps running directly on the runner host.
  • Adding a job container changes the execution model, and localhost inside that container still would not point to the PostgreSQL service.
  • Adding permissions affects token access to GitHub resources, not network access to a service container.
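Sketched against the exhibit, the minimal fix is one added mapping under the service definition:

```yaml
services:
  db:
    image: postgres:16
    env:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432   # publish container port 5432 to the runner host
```

With that mapping in place, the unchanged DATABASE_URL pointing at localhost:5432 resolves correctly from steps running on the runner VM.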

Question 7

Topic: Author and Manage Workflows

A workflow job runs integration tests against PostgreSQL on localhost:5432. The first test step intermittently fails because the database container is still starting. The team wants GitHub Actions to wait for actual readiness, not a fixed delay.

jobs:
  integration:
    runs-on: ubuntu-latest
    services:
      db:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - run: npm test

Which edit best prevents npm test from starting before PostgreSQL is ready?

Options:

  • A. Insert a sleep 30 step before npm test.

  • B. Set timeout-minutes: 10 on the job.

  • C. Add options to db with a pg_isready health check and retry settings.

  • D. Change ports to expose only 5432 without the host mapping.

Best answer: C

Explanation: The reliable fix is to add a Docker health check to the PostgreSQL service by using the service container’s options. Without a health check, GitHub Actions can start steps after the container launches even if PostgreSQL is not yet accepting connections.

Service containers start before job steps, but readiness is only enforced when Docker has a health check to evaluate. In this workflow, PostgreSQL may be running as a container while still finishing initialization, so the test step can hit startup failures. Adding a health check with options lets the runner wait until pg_isready succeeds and Docker marks the service healthy.

services:
  db:
    image: postgres:16
    options: >-
      --health-cmd "pg_isready -U postgres"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5

This uses actual service readiness instead of guessing with a delay.

  • Fixed delay: A sleep step is only a guess, so it can still be too short or waste time when the database starts faster.
  • Job timeout: timeout-minutes limits total job duration; it does not pause steps until a service accepts connections.
  • Port mapping: Changing ports affects network exposure, not whether PostgreSQL has finished starting.

Question 8

Topic: Author and Manage Workflows

An Actions workflow must read the app version once and reuse it in a later step of the same job.

jobs:
  package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - id: read_version
        name: Read version
        run: |
          VERSION=$(node -p "require('./package.json').version")
          echo "Found &#36;VERSION"

      - name: Package
        run: tar -czf "app-&#36;VERSION.tgz" dist/

The Package step currently creates app-.tgz because VERSION is empty. Which edit best makes VERSION available to later steps in the same job?

Options:

  • A. In Read version, add echo "VERSION=$VERSION" >> "$GITHUB_ENV".

  • B. In Read version, add echo "VERSION=$VERSION" >> "$GITHUB_OUTPUT".

  • C. In Read version, replace the assignment with export VERSION=$(node -p "require('./package.json').version").

  • D. In Package, add env: VERSION: ${{ steps.read_version.outputs.VERSION }}.

Best answer: A

Explanation: GITHUB_ENV is the environment file used to pass environment variables to later steps in the same job. A shell variable or export only affects the current step process, and step outputs require explicit output creation and referencing.

GitHub Actions runs each run step in its own process, so variables created inside one shell step do not automatically exist in the next step. To make an environment value available later in the same job, write NAME=value to the GITHUB_ENV file during the first step.

A working edit is:

- id: read_version
  run: |
    VERSION=$(node -p "require('./package.json').version")
    echo "VERSION=&#36;VERSION" >> "&#36;GITHUB_ENV"

After that, later steps in the same job can use &#36;VERSION in their shell commands. The closest distractor is the step output approach, but that requires writing to GITHUB_OUTPUT and then referencing steps.<id>.outputs.<name> explicitly.

  • Writing to GITHUB_OUTPUT creates a step output, not a normal shell environment variable for later steps.
  • Using export only affects the current step’s shell process and does not persist to the next step.
  • Referencing steps.read_version.outputs.VERSION fails because the workflow never creates an output named VERSION.
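If a team preferred the step-output route that options B and D gesture at, both halves would be needed together. A sketch, combining the GITHUB_OUTPUT write with the explicit env mapping:

```yaml
- id: read_version
  name: Read version
  run: |
    VERSION=$(node -p "require('./package.json').version")
    echo "VERSION=$VERSION" >> "$GITHUB_OUTPUT"

- name: Package
  env:
    # map the step output back into a shell variable for this step
    VERSION: ${{ steps.read_version.outputs.VERSION }}
  run: tar -czf "app-$VERSION.tgz" dist/
```

Neither option alone is sufficient, which is why GITHUB_ENV is the best single edit here.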

Question 9

Topic: Author and Manage Workflows

A team wants to cap matrix concurrency at two jobs but still collect results from every runner image even if one combination fails. They use this workflow:

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      max-parallel: 2
      matrix:
        include:
          - name: ubuntu18
            os: ubuntu-latest
          - name: windows18
            os: windows-latest
          - name: macos20
            os: macos-latest
          - name: ubuntu20
            os: ubuntu-latest

For this run, matrix entries are queued in the listed order, and each open slot takes the next queued entry. Enough GitHub-hosted runners are available. The test step outcomes are: ubuntu18 succeeds in 3 minutes, windows18 fails in 1 minute, macos20 succeeds in 4 minutes, and ubuntu20 succeeds in 2 minutes.

Which execution trace matches GitHub Actions behavior?

Options:

  • A. All four matrix jobs start immediately because fail-fast: false disables the matrix queue.

  • B. ubuntu18 starts first, then windows18; macos20 waits until both finish because max-parallel is enforced separately for each runner image.

  • C. ubuntu18 and windows18 start first; when windows18 fails, all queued matrix jobs are canceled, so only ubuntu18 finishes.

  • D. ubuntu18 and windows18 start first; when windows18 fails at 1 minute, macos20 starts; when ubuntu18 finishes at 3 minutes, ubuntu20 starts; all four matrix jobs run.

Best answer: D

Explanation: max-parallel: 2 allows only two matrix jobs to run at the same time. Because fail-fast is false, the Windows failure does not cancel the queued macOS and Ubuntu entries, so the next queued job starts whenever a slot opens.

max-parallel controls how many matrix jobs can run concurrently, while fail-fast controls whether one failing matrix job cancels the others. In this workflow, only two jobs can run at once, and failures do not stop the rest of the matrix.

  • At time 0, ubuntu18 and windows18 start.
  • At 1 minute, windows18 fails, freeing one slot.
  • macos20 starts next because it is next in the queue.
  • At 3 minutes, ubuntu18 finishes, freeing the second slot for ubuntu20.

The run ends after all four matrix jobs have executed, with one failure and three successes. The closest wrong idea is treating fail-fast: false as a way to increase concurrency; it only affects cancellation behavior.

  • Cancellation confusion fails because matrix cancellation on first failure happens when fail-fast is enabled, not when it is false.
  • Unlimited concurrency fails because fail-fast does not override max-parallel; the cap of two still applies.
  • Per-image limit fails because max-parallel applies to the whole matrix job, not separately to each runner image.

Question 10

Topic: Author and Manage Workflows

A workflow is triggered manually. The repository has a repository variable named REGISTRY and a secret named NPM_TOKEN. An engineer wants the Report step to expose the repository identifier, runner OS, REGISTRY, whether NPM_TOKEN is set, the channel input, the current matrix value, the version output from prep, and the digest output from the earlier pack step. Which env: block should replace # TODO?

on:
  workflow_dispatch:
    inputs:
      channel:
        required: true
        type: string

jobs:
  prep:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.meta.outputs.version }}
    steps:
      - id: meta
        run: echo "version=1.2.3" >> "$GITHUB_OUTPUT"

  build:
    needs: prep
    strategy:
      matrix:
        node: [18, 20]
    runs-on: ubuntu-latest
    steps:
      - id: pack
        run: echo "digest=sha256:abc123" >> "$GITHUB_OUTPUT"
      - name: Report
        env:
          # TODO
        run: echo "$REPO $RUNNER_OS $REGISTRY $HAS_TOKEN $CHANNEL $NODE $VERSION $DIGEST"

Options:

  • A.

    REPO: ${{ github.repository }}
    RUNNER_OS: ${{ runner.os }}
    REGISTRY: ${{ variables.REGISTRY }}
    HAS_TOKEN: ${{ secrets.NPM_TOKEN != '' }}
    CHANNEL: ${{ github.inputs.channel }}
    NODE: ${{ matrix.node }}
    VERSION: ${{ jobs.prep.outputs.version }}
    DIGEST: ${{ steps.pack.output.digest }}

  • B.

    REPO: ${{ github.repository }}
    RUNNER_OS: ${{ github.runner_os }}
    REGISTRY: ${{ env.REGISTRY }}
    HAS_TOKEN: ${{ vars.NPM_TOKEN != '' }}
    CHANNEL: ${{ inputs.channel }}
    NODE: ${{ matrix.node_version }}
    VERSION: ${{ needs.build.outputs.version }}
    DIGEST: ${{ steps.pack.outputs.digest }}

  • C.

    REPO: ${{ repository.name }}
    RUNNER_OS: ${{ runner.os }}
    REGISTRY: ${{ vars.REGISTRY }}
    HAS_TOKEN: ${{ secrets.NPM_TOKEN != '' }}
    CHANNEL: ${{ inputs.channel }}
    NODE: ${{ strategy.matrix.node }}
    VERSION: ${{ needs.prep.outputs.version }}
    DIGEST: ${{ job.steps.pack.outputs.digest }}

  • D.

    REPO: ${{ github.repository }}
    RUNNER_OS: ${{ runner.os }}
    REGISTRY: ${{ vars.REGISTRY }}
    HAS_TOKEN: ${{ secrets.NPM_TOKEN != '' }}
    CHANNEL: ${{ inputs.channel }}
    NODE: ${{ matrix.node }}
    VERSION: ${{ needs.prep.outputs.version }}
    DIGEST: ${{ steps.pack.outputs.digest }}

Best answer: D

Explanation: The valid block is the one that uses github.repository, runner.os, vars, secrets, inputs, matrix, needs, and steps in their proper scopes. The key scope rule is that outputs from another job come from needs, while outputs from an earlier step in the same job come from steps.

GitHub Actions contexts are scope-specific. In a step, repository metadata is read from github, runner metadata from runner, repository or organization variables from vars, and secrets from secrets. Because this workflow uses workflow_dispatch, the manual value is available as inputs.channel. Matrix values use matrix.<name>. Cross-job outputs are exposed only through needs.<job_id>.outputs.<name>, because build depends on prep. Same-job step outputs use steps.<step_id>.outputs.<name> after that step has run, so steps.pack.outputs.digest is valid in Report. The common mistake is using lookalike names such as variables, jobs, or strategy.matrix, which are not the correct runtime contexts here.

  • The block using variables.REGISTRY, github.inputs.channel, jobs.prep.outputs.version, and steps.pack.output.digest fails because those are not the correct context names or property paths here.
  • The block using repository.name, strategy.matrix.node, and job.steps.pack.outputs.digest fails because repository, matrix, and step-output data are accessed through github, matrix, and steps.
  • The block using github.runner_os, env.REGISTRY, vars.NPM_TOKEN, matrix.node_version, and needs.build.outputs.version fails because it mixes undefined properties with the wrong scopes for variables, secrets, matrix keys, and dependency outputs.

Continue with full practice

Use the GitHub Actions GH-200 Practice Test page for the full IT Mastery route, mixed-topic practice, timed mock exams, explanations, and web/mobile app access.

Try GitHub Actions GH-200 on Web · View GitHub Actions GH-200 Practice Test

Free review resource

Read the GitHub Actions GH-200 Cheat Sheet on Tech Exam Lexicon, then return to IT Mastery for timed practice.

Revised on Thursday, May 14, 2026