CI/CD

Pipeline Security

How to protect the automation that builds and releases software from credential abuse, runner compromise, and tampering.

The CI/CD pipeline is one of the most valuable targets in a software supply chain attack. A compromised pipeline can inject malicious code into every artifact it builds, steal production credentials, or substitute a legitimate release with a backdoored one, all without touching the application source code. High-profile incidents like the SolarWinds breach (malicious code injected into the build system) and the Codecov breach (malicious script inserted into a CI tool) demonstrate that attackers understand the leverage a pipeline offers. Pipeline security means treating the automation layer as a high-value target with its own threat model.

Learning objectives

What you should be able to do after reading.
  • Identify the main trust boundaries inside a delivery pipeline.
  • Explain how secrets, permissions, and runner isolation reduce risk.
  • Recognize the common ways pipelines are abused or tampered with.

At a glance

Fast mental model before you dive in.
Trust boundaries
  • Secrets
  • Runners
  • Permissions
Change control
  • Branch protection
  • PR review
  • Approvals
Safer patterns
  • OIDC
  • Least privilege
  • Build isolation

Core idea

A CI/CD pipeline typically holds the keys to the kingdom: credentials for cloud environments, registry push access, deployment permissions, and signing keys. If an attacker can influence what the pipeline executes or what it does with its credentials, they can affect everything downstream. This is a more attractive target than compromising individual application instances, because a single pipeline compromise can poison every release for weeks before being detected.

The supply chain attack surface extends beyond the pipeline itself. Third-party GitHub Actions, npm packages, pip packages, and base container images are all inputs to the build. Malicious or compromised third-party inputs can insert code into artifacts without the pipeline operator's knowledge. This is why dependency pinning, artifact verification, and careful review of new external dependencies are security controls, not just operational hygiene.

The right mental model is to treat every external input to the pipeline as untrusted until it has been verified. Code from contributors outside the organization, third-party actions pinned to mutable tags, base images without digest pinning, and unverified build tools are all potential injection points. Defense in depth means that even if one input is compromised, the overall pipeline still limits what the attacker can do with the resulting artifact.

Separation of concerns is the organizing principle for pipeline security. Untrusted contributor-driven steps (building and testing a PR from a fork) should never have access to production secrets, signing keys, or deployment permissions. Only trusted, reviewed code on protected branches should be able to trigger the steps that access sensitive credentials or publish official artifacts.

Trust model

  • Scope credentials to the smallest possible job, environment, and time window; a job that builds code should not have the permissions of a job that deploys to production.
  • Separate the build and test steps (which run on potentially untrusted code) from the sign, publish, and deploy steps (which require trusted credentials).
  • Treat branch protection on release and main branches as a security control, not just a collaboration workflow. It determines which code can trigger privileged pipeline steps.
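The separation above can be sketched as a GitHub Actions workflow in which the build job runs with no secrets and a read-only token, while the deploy job is gated behind a protected environment. The job names, commands, and environment name here are illustrative, not a prescribed layout.

```yaml
# Sketch: untrusted build step separated from the privileged deploy step.
name: ci
on:
  push:
    branches: [main]

permissions:
  contents: read              # workflow-wide default: read-only token

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build test  # runs potentially untrusted code; no secrets here

  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production   # environment-scoped secrets, optional reviewers
    permissions:
      contents: read
      id-token: write         # only this job can request an OIDC token
    steps:
      - run: ./deploy.sh      # illustrative deploy step
```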

Baseline

  • Review changes to pipeline configuration with the same rigor as changes to application code; a single malicious line in a workflow file can exfiltrate all pipeline secrets.
  • Pin all third-party actions, base images, and external tools to specific versions or digests, not to mutable tags like 'latest' or 'v2'.
  • Prefer short-lived, scoped credentials (OIDC-based tokens) over long-lived static secrets stored in pipeline configuration.
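Pinning in a GitHub Actions workflow means referencing a third-party action by its full commit SHA rather than a tag that can be moved. The SHA below is a placeholder; in practice you resolve and review the commit for the release you intend to trust.

```yaml
steps:
  # Pinned to a full commit SHA, not a mutable tag like @v4 or @latest.
  # The SHA here is a placeholder; substitute the reviewed commit.
  - uses: actions/checkout@0000000000000000000000000000000000000000  # tag: v4.x.y
```

The trailing comment recording the human-readable tag is a common convention; tools can then check that the comment and the pinned SHA stay in sync.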

Signals to watch for

Patterns worth investigating further.
  • Build logs reveal secrets, tokens, or internal endpoints.
  • A pull request can influence privileged jobs without review.
  • A runner has broader access than the job actually needs.

Deep dive

Secrets in pipelines

Pipeline secrets are high-value, high-risk credentials. They typically include cloud provider credentials, container registry push tokens, signing keys, deployment tokens, and API keys for external services. Because pipelines are automated and run many times per day, these secrets are used frequently and must be available to the automation, which creates exposure risk. The goal is to make secrets available to the specific jobs that need them while preventing any job that should not have them from seeing them.

In GitHub Actions, secrets can be scoped at the repository, environment, or organization level. Environment-scoped secrets are only available to jobs that specify that environment, and environments can require approval before being accessed, meaning a deployment secret is only released after a human has reviewed and approved the deployment. This is a useful pattern for separating development secrets (available to all branches) from production secrets (available only to protected branches with approval).
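A minimal sketch of an environment-scoped secret in GitHub Actions; the environment name and secret name are illustrative. The secret only exists for jobs that declare the environment, and any required reviewers configured on that environment must approve before the job starts.

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Declaring the environment makes its secrets available to this job only,
    # after any reviewers configured on 'production' have approved the run.
    environment: production
    steps:
      - run: ./deploy.sh
        env:
          DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}  # environment-scoped secret
```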

Log masking is an important but insufficient defense. Most CI systems automatically mask known secret values in log output, replacing them with asterisks. But masking only works for exact matches: a secret that is base64-encoded, URL-encoded, or embedded in a longer string may not be masked. Pipelines should never explicitly print secrets (e.g. 'echo $SECRET') and should be reviewed for accidentally verbose error messages that might include secret values in exception traces.
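GitHub Actions provides the '::add-mask::' workflow command to register additional values for masking. One use, sketched below with an illustrative secret name, is masking a derived form of a secret that the automatic matcher would otherwise miss.

```yaml
steps:
  - run: |
      # Derived values (encodings, substrings) are not masked automatically;
      # register them explicitly before they can appear in any log line.
      ENCODED=$(printf '%s' "$API_KEY" | base64)
      echo "::add-mask::$ENCODED"
    env:
      API_KEY: ${{ secrets.API_KEY }}
```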

After a secret is suspected to be exposed, whether through a log, an artifact, a compromised pipeline, or a vulnerability report, the correct response is immediate rotation, not monitoring to see if it was misused. By the time misuse is detected, the damage is done. Rotation invalidates the exposed credential and replaces it with a new one, limiting the window of exposure.

Permissions

Pipeline permissions define what the automated job is allowed to do, independent of what a human user can do in the UI. In GitHub Actions, workflow permissions are declared as a top-level permissions block and can include write access to repository contents, issues, pull requests, packages, security events, and other resources. The principle of least privilege means setting only the permissions each job actually needs: a job that only reads code and runs tests does not need write access to packages or deployments.

The GITHUB_TOKEN is an automatically generated token scoped to the repository for each workflow run. Its default permissions vary by organization setting (permissive or restricted), which is why explicitly declaring the permissions block in workflow files is important: it makes the actual permissions visible and auditable regardless of the organization's default setting. Workflows should never rely on the default permissions being permissive; they should declare exactly what they need.
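An explicit least-privilege permissions declaration might look like the following sketch, with a restrictive workflow-level default and a narrow escalation on the one job that needs it. Job names and commands are illustrative.

```yaml
# Workflow-level default: every job's GITHUB_TOKEN is read-only
# unless the job widens it explicitly.
permissions:
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test          # needs nothing beyond read access

  release:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write           # only this job may push to the package registry
    steps:
      - run: make publish
```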

The distinction between what a contributor can do in the GitHub UI and what a workflow they trigger can do with the GITHUB_TOKEN is subtle but important. A contributor with 'read' access to a repository can still trigger a workflow on their fork. If that workflow has write permissions or access to environment secrets, it can perform actions that the contributor could not perform directly. This is the core of the pull_request_target risk.

Least-privilege pipeline permissions are especially important for release and deployment steps. A job that publishes a container image to a registry or deploys to a cloud environment should have exactly the permissions required for that operation. Nothing broader. Time-bound credentials (credentials that expire after a short period, like OIDC-issued tokens) reduce the window during which a leaked credential can be misused.

Build isolation

Build isolation ensures that what happens in one build job does not contaminate another. Without isolation, a malicious build step could leave behind modified files, environment variables, or cached data that affects subsequent jobs. Isolation is achieved through disposable environments. Ideally, each job runs in a fresh container or VM that is discarded after the job completes and contains no state from previous runs.

Caching is a performance optimization that directly conflicts with isolation. A cache shared between jobs reduces build times but also creates a channel for one job to affect another. Caches should be scoped to specific keys (including the hash of the files that determine the cached content) and should be treated as untrusted inputs. Code that restores a cache should not assume the cache contents are safe, since a previous malicious run could have poisoned the cache.
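Scoping a cache to the hash of its defining inputs can be sketched with actions/cache; the path and key shown assume an npm project and are illustrative. A change to the lockfile produces a new cache key instead of silently reusing an old entry.

```yaml
steps:
  - uses: actions/cache@v4
    with:
      path: ~/.npm
      # The key embeds the lockfile hash, so a dependency change creates a
      # fresh cache entry rather than restoring a stale (or poisoned) one.
      key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
```

Even with tight keys, restored cache contents should still be treated as untrusted input, per the paragraph above.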

Docker-in-Docker (DinD), running Docker commands inside a Docker container as is common in CI pipelines, has two main variants with different security profiles. The privileged DinD approach runs the inner Docker daemon as a privileged container, which grants near-host-level access and largely defeats container isolation. The Docker socket mount approach mounts the host's Docker socket into the CI container, which allows the CI job to control the host's Docker daemon and all containers on that host. Both are significant security risks in shared CI environments; using dedicated container-native image build tools or isolated VMs for jobs that need to build container images is safer.

Runner images used by CI jobs should be treated like production infrastructure. They should be minimal, kept up to date, and scanned for vulnerabilities. A CI runner image that contains outdated software or unnecessary tools is both a vulnerability and a capability available to malicious code running in the job. Using versioned, scanned, organization-controlled runner images rather than the latest public ones reduces the trusted computing base.

Runner trust

A CI runner is the machine or container that executes pipeline jobs. It has access to the job's source code, its environment variables (including secrets), its artifacts, and potentially other jobs that run on the same hardware. The security of the runner is therefore part of the security boundary for everything the pipeline does. A compromised runner can see and modify everything that runs through it.

Self-hosted runners provide more control over the runner environment (custom hardware, specific software, private network access) but also carry more risk. A self-hosted runner running on-premises or in a cloud VM has persistent state across jobs unless explicitly cleaned between runs. A malicious job that runs on a self-hosted runner can potentially leave behind files, modify system configuration, or install persistent tooling that affects future jobs on the same runner.

Shared runners (provided by GitHub, GitLab, or other CI services) are isolated per job in containerized or VM-based environments that are discarded after each job. This eliminates persistence across jobs and prevents most cross-job contamination. The trade-off is less control over the environment and shared compute infrastructure with other organizations. For highly sensitive workloads, dedicated runner pools that only handle trusted code are the right choice.

The most dangerous configuration is a self-hosted runner that handles both untrusted fork PR builds and privileged release jobs. A malicious fork PR can run code on the runner, which may have access to the organization's production credentials, internal network, or other jobs' data. Maintaining separate runner pools for untrusted (fork PRs, external contributor builds) and trusted (main branch, protected environments) workflows is a critical security control in organizations that accept external contributions.
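Separate pools can be expressed through runner labels in the 'runs-on' field; the label names below are illustrative, and the labels themselves are assigned when the self-hosted runners are registered.

```yaml
jobs:
  pr-build:
    # Untrusted fork PRs land on an isolated, ideally ephemeral pool.
    runs-on: [self-hosted, untrusted-pool]   # labels are illustrative
    steps:
      - uses: actions/checkout@v4
      - run: make test

  release:
    if: github.ref == 'refs/heads/main'
    # Privileged jobs run only on a pool that never executes fork code.
    runs-on: [self-hosted, trusted-pool]
    steps:
      - run: make release
```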

Branch protection

Branch protection rules enforce conditions that must be met before code can be merged or pushed to a protected branch. For the main branch and release branches in a CI/CD context, the most important protections are:
  • Require pull request reviews before merging: at least one person other than the author must approve.
  • Require status checks to pass: the CI pipeline must succeed.
  • Require linear history: force rebasing or squashing to prevent ambiguous merges.
  • Restrict who can push directly: only specific users or roles can bypass the PR process.

Branch protection matters for pipeline security because protected branches are typically the triggers for privileged pipeline steps. A push to main might trigger a deployment to staging, a tag on a release branch might trigger a deployment to production. If anyone can push to those branches without review, they can trigger those privileged steps with arbitrary code. Branch protection makes the review step mandatory and auditable.

Signed commits add a cryptographic layer to branch protection. When signed commits are required on a protected branch, each commit must be signed by a key associated with the committing author, and the signature is verified before the merge is allowed. This prevents impersonation (a fake commit claiming to be from a trusted author) and provides non-repudiation (the author cannot later deny having made a change).

Branch protection rules also interact with required deployment environments. In GitHub Actions, a deployment environment can require specific reviewers before it is activated. This means that even if a pipeline job requests access to production secrets, a human must approve that request before the secrets are made available. This creates a human-in-the-loop checkpoint at the moment of production access, separate from the code review checkpoint at the moment of merge.

Pull request risks

Pull requests from external contributors or untrusted forks represent a particularly high-risk scenario in CI/CD pipelines. When a PR triggers a CI job, that job runs the code from the PR, which could be arbitrary code written by an untrusted author. If that job has access to organization secrets or can trigger deployments, the attacker has achieved code execution in a privileged context by simply submitting a PR.

In GitHub Actions, the pull_request event triggers workflows with limited permissions (no access to secrets, read-only GITHUB_TOKEN) when the PR comes from a fork. The pull_request_target event triggers workflows with full repository permissions, including access to secrets, even for PRs from forks. The pull_request_target event exists for legitimate use cases (posting comments on PRs, updating PR statuses) but is frequently misused. Any workflow that uses pull_request_target and also checks out or runs code from the PR is creating a critical security vulnerability.

First-time contributor approval gates add a manual checkpoint before CI runs on a PR from a first-time contributor. GitHub Actions has a 'Require approval for first-time contributors' setting that prevents automated workflows from running until a maintainer explicitly approves. This gate is specifically designed to prevent the scenario where a malicious PR triggers privileged CI workflows before any review has occurred.

The safe pattern for handling PRs from forks is to run untrusted code in a restricted context (pull_request, not pull_request_target; no secrets, no deployment permissions), collect artifacts and test results, and then, in a separate workflow triggered by a reviewer's approval, run any privileged steps using those artifacts. The privileged workflow runs on trusted infrastructure with access to secrets but never runs untrusted code. It only uses pre-built artifacts that were produced in the restricted context.
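One common way to wire up the privileged half of this pattern is a workflow_run trigger, sketched below. The workflow names and script are illustrative; the key property is that this workflow runs in the base repository's context with secrets, yet only downloads artifacts from the restricted run and never checks out the fork's code.

```yaml
# Privileged follow-up workflow: consumes artifacts from the secret-free
# PR workflow instead of executing the PR's code.
name: pr-report
on:
  workflow_run:
    workflows: ["pr-build"]   # the restricted, secret-free workflow
    types: [completed]

jobs:
  report:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - run: ./post-results.sh   # operates on artifacts only, not PR code
```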

OIDC

OpenID Connect (OIDC) is a trust exchange protocol that allows a CI/CD pipeline to obtain short-lived credentials from a cloud provider without storing long-lived static secrets in the pipeline configuration. Instead of an IAM access key stored in a repository secret, the pipeline requests a JWT token from the CI provider (GitHub, GitLab, CircleCI) and exchanges it for temporary credentials from the cloud provider (AWS, GCP, Azure) using a pre-configured trust relationship.

The OIDC flow works as follows: the CI job requests a signed JWT from the CI provider's token endpoint, containing claims about the workflow's identity (repository name, branch, environment, actor). The cloud provider's identity system validates the JWT's signature and checks the claims against a configured trust policy. If the claims match (for example, 'repository is org/myapp AND branch is main AND environment is production'), the cloud provider issues temporary credentials with the permissions defined in the corresponding IAM role.

OIDC eliminates several entire classes of secret management problems. There are no long-lived credentials to rotate, store securely, or accidentally expose in a commit. A stolen OIDC token is useless outside the context of the original workflow run, because the token's claims are specific to that run and the token expires quickly. Even if an attacker extracts the token from a log, they cannot use it to assume the IAM role outside of the expected CI environment.

Claims-based access control with OIDC can be very granular. A trust policy can require that the workflow runs on a specific branch, from a specific repository, in a specific environment, triggered by a specific event. This means that fork PR workflows, even if they can obtain an OIDC token, cannot assume the production IAM role, because their claims don't match the policy that requires the branch to be main and the environment to be production.
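With AWS as the cloud provider, the job side of this exchange can be sketched with the aws-actions/configure-aws-credentials action. The role ARN and region are illustrative; the role's trust policy on the AWS side is what matches the token's repository, branch, and environment claims.

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    permissions:
      id-token: write    # allows this job to request an OIDC JWT
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          # Illustrative role; its trust policy pins the allowed claims.
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
          aws-region: us-east-1
      # Subsequent steps use short-lived credentials; no static key exists.
      - run: aws s3 cp app.tar.gz s3://example-releases/
```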

Artifact tampering

Artifact tampering is the modification of a build output after it leaves the trusted build environment. An attacker who can replace a container image in a registry with a malicious one, or intercept an artifact upload and substitute it, can cause a downstream deployment system to run attacker-controlled code. This attack is specifically designed to bypass code review and testing, because the tampered artifact never went through those controls.

The defense against artifact tampering is integrity verification at every point where an artifact is transferred or used. For container images, this means signing with Cosign and verifying the signature before deployment. For other artifact types (binaries, JAR files, npm packages), this means generating a cryptographic checksum at build time, storing it securely, and verifying the checksum before using the artifact. SLSA provenance attestations go further. They record not just that the artifact has a specific checksum but who built it, from what source, using what build system.

The deployment step is the final verification point. Before a deployment system pulls an image and starts containers, it should verify that the image's signature matches the expected signing key and that the provenance attestation records the expected build origin. This verification should happen at deploy time, not only at build time: a clean image at build time can be replaced in the registry before deployment.
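A deploy-time check with Cosign's keyless verification might look like the sketch below. The image reference, repository, and workflow path are illustrative placeholders; the identity flags pin the signature to the exact workflow that is allowed to publish releases.

```yaml
steps:
  # Refuse to deploy unless the image was signed by the expected release
  # workflow. Image digest and identity values are placeholders.
  - run: |
      cosign verify \
        --certificate-identity "https://github.com/org/myapp/.github/workflows/release.yml@refs/heads/main" \
        --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
        registry.example.com/myapp@sha256:<digest>
```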

Artifact tampering risks are higher in environments where the artifact lifecycle is not well controlled: images pushed to shared namespaces in registries with weak access control, artifacts stored in S3 buckets with public-write permissions, or npm packages published from compromised maintainer accounts. The combination of registry access control (only the build pipeline can push), image signing (only signed images can be deployed), and provenance attestations (the deployment can verify the build origin) creates a supply chain integrity model that is robust against most tampering scenarios.