Supply Chain

Artifact Integrity

How to prove that a build output is genuine, traceable, and unchanged before it is consumed.

Artifact integrity is the discipline of proving that a build output is genuine, unmodified, and produced by the expected process before it is deployed or consumed. A successful build is not sufficient evidence that the artifact is trustworthy: the artifact could be modified after it leaves the build environment, it could be sourced from a different pipeline than intended, or a compromised dependency could have altered the output without any visible sign in the build logs. Integrity means establishing a verifiable chain of evidence from source code to deployed artifact.

Learning objectives

What you should be able to do after reading.
  • Explain the difference between creating an artifact and proving that it is trustworthy.
  • Describe how signing, verification, provenance, and attestations work together.
  • Recognize why integrity checks belong at the point of use, not only at the point of build.

At a glance

Fast mental model before you dive in.
Evidence
  • Signatures
  • SBOM
  • Provenance
Verification
  • Digest checks
  • Policy checks
  • Admission decisions
Trust chain
  • Attestations
  • Cosign
  • Trusted roots

Core idea

Artifact integrity is the discipline of proving that the thing you deploy or consume is the thing that was actually built and approved. Without that proof, a successful build does not guarantee a trustworthy release. The pipeline can complete without errors, all tests can pass, and the deployed artifact can still be a different binary from the one that was tested if something substituted it between build and deploy.

The important move is to bind the artifact to evidence: a cryptographic signature, provenance metadata, and a verifiable digest that travel with the artifact far enough for a consumer to verify the claim at deployment time. Building integrity into the pipeline is not just a compliance checkbox; it is the technical foundation for trusting automated deployments.

Evidence model

  • Record a stable digest for the artifact and sign it before distribution so any modification is detectable.
  • Attach metadata such as SBOMs, provenance, and attestations in a form that consumers and deployment systems can verify.
  • Make verification part of deployment, admission control, or consumption instead of relying on trust by convention.
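
The evidence model above can be sketched end to end in a few lines. This toy uses an HMAC over the artifact's SHA-256 digest in place of a real asymmetric signature (which a tool like Cosign would provide), and the key and artifact bytes are invented for illustration.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stands in for a real, protected signing identity

def publish(artifact: bytes) -> dict:
    """Build side: record a stable digest and sign it before distribution."""
    digest = hashlib.sha256(artifact).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": signature}  # evidence travels with the artifact

def verify(artifact: bytes, evidence: dict) -> bool:
    """Consumer side: recompute the digest and check the signature at the point of use."""
    digest = hashlib.sha256(artifact).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == evidence["digest"] and hmac.compare_digest(expected, evidence["signature"])

evidence = publish(b"artifact-bytes")
assert verify(b"artifact-bytes", evidence)       # untouched artifact verifies
assert not verify(b"tampered-bytes", evidence)   # any modification is detectable
```

The point of the sketch is the shape of the flow, not the cryptography: evidence is produced once at build time and checked at every consumption point.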

Baseline

  • Prefer immutable references such as digest-pinned image addresses over mutable tags or filenames when integrity matters.
  • Protect signing identities and policy enforcement points with the same access controls as the build infrastructure itself.
  • Fail closed when expected evidence is missing rather than silently accepting the artifact and continuing.

Signals to watch for

Patterns worth investigating further.
  • Unsigned artifacts are accepted into a trusted environment.
  • Consumers pull mutable tags without checking what digest they resolved to.
  • Integrity metadata exists, but no deployment step actually verifies it.

DEEP DIVE

Artifacts

An artifact is the packaged output of a build: a container image, a compiled binary, a JAR file, a Helm chart, a signed APK, or any other distributable unit that the build system produces. In supply chain security, the critical question about an artifact is not only what it contains but whether it can be traced back to a specific, trusted build event. An artifact without that traceability is opaque. There is no way to determine, after the fact, where it came from, what source code produced it, or whether it has been modified since it was built.

Artifacts should be immutable once built. An artifact identified by version 2.3.1 should always refer to the exact same bytes. Mutable artifact identifiers, where a tag like latest or v2 can point to different content over time, undermine every downstream integrity check. Content-addressable storage, where an artifact is identified by the cryptographic hash of its content (a digest), provides immutability by construction: if the content changes, the hash changes, and any reference using the old hash no longer matches.
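
Content addressing is easy to make concrete: the digest below is the kind of value that appears after @sha256: in a pinned image reference. The image name and byte strings are illustrative.

```python
import hashlib

original = b"layer-1 layer-2 config"
modified = b"layer-1 layer-2 config + backdoor"

# The artifact's identity is the hash of its content.
ref = "registry.example/myapp@sha256:" + hashlib.sha256(original).hexdigest()

# Any change to the content produces a different digest, so the old
# reference can never silently resolve to the modified bytes.
assert hashlib.sha256(original).hexdigest() in ref
assert hashlib.sha256(modified).hexdigest() not in ref
```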

Versioning strategy communicates intent and enables traceability. Using the git commit SHA as part of the artifact version (myapp:a3f2c9b) allows any running instance to be traced back to the exact source code commit that produced it. Semantic versions communicate stability contracts for libraries and APIs. Build numbers are simple but lose the connection to source code without additional metadata. The right choice depends on the context, but all choices should support answering the question: given this running artifact, what source code produced it, and when?

The artifact lifecycle has four key integrity moments: creation (the build must be trusted and the inputs controlled), storage (the registry or artifact store must protect the artifact from modification), distribution (the transfer from storage to deployment must be integrity-verified), and consumption (the deployment system must verify the artifact before using it). A supply chain attack can target any of these moments, which is why integrity controls at a single point are insufficient.

Signing

Signing binds a cryptographic proof to an artifact or to metadata about that artifact, allowing any consumer to verify that the artifact was produced by a specific party and has not been modified since it was signed. For container images, Cosign (from the Sigstore project) is the current standard. Cosign signs images by creating a signature that is stored in the same OCI registry as the image, using a key pair or a keyless identity from an OIDC provider such as GitHub Actions. The signature is attached to the image digest, not to a mutable tag, so signing is stable and tamper-evident.

The value of signing depends entirely on the quality of the signing identity. A signature from a well-managed key pair or a verifiable OIDC identity proves who built the image. A signature from a key that is widely shared, stored insecurely, or generated by an untrusted pipeline provides little assurance. Key management for long-lived signing keys is a serious operational responsibility. The key must be protected against theft, rotated on a schedule, and revoked if compromised. Keyless signing, where a short-lived identity credential is obtained from an OIDC provider at build time, eliminates long-lived key management at the cost of depending on the identity provider and the transparency log.

Signing is not limited to container images. Binary artifacts, release tarballs, npm packages, PyPI packages, and Helm charts can all be signed using appropriate tooling. Sigstore's Cosign supports non-container artifacts. GPG signing is the traditional mechanism for Linux package signing and git commit signing. The specific tool is less important than the discipline: every artifact published from a trusted build pipeline should be signed, and consumers should be configured to verify signatures before using the artifact.

A common mistake is signing artifacts but never verifying the signatures at deployment time. Signing that is not enforced at consumption provides an audit trail but not a security control. The combination of signing at build time and mandatory verification at deploy time is what prevents tampered artifacts from being deployed. Admission controllers in Kubernetes (such as Sigstore's policy-controller or Connaisseur) can enforce signature verification on every image before a Pod is allowed to start.

Verification

Verification is the consumer-side check that confirms an artifact's signature, identity, and policy are acceptable before the artifact is used. Verification is where integrity becomes an operational reality rather than a theoretical property. An artifact that is signed but never verified provides a false sense of security. The signing ceremony happened, but nothing actually depends on the result being correct.

Verification should happen at the point of consumption, not only at the point of build or storage. An image that was verified clean when it was built and placed in a registry may have been modified in the registry, or the tag it is referenced by may have been updated to point to a different image. Verifying the signature at deploy time, using the digest of the image actually being pulled, provides assurance that what is being deployed is what was signed, regardless of what may have changed in the registry in the interim.
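The tag-versus-digest gap can be simulated directly: the consumer pins the digest it verified at build time and refuses whatever the mutable tag currently resolves to if the two disagree. The registry contents below are invented.

```python
import hashlib

def digest_of(content: bytes) -> str:
    return "sha256:" + hashlib.sha256(content).hexdigest()

good = b"image built and signed by the pipeline"
evil = b"image swapped into the registry later"

# The mutable tag now points at different content than what was verified.
registry = {"myapp:latest": evil}
pinned_digest = digest_of(good)  # recorded when the image was signed

def pull_verified(tag: str, expected_digest: str) -> bytes:
    """Pull by tag, but refuse the bytes unless they match the pinned digest."""
    content = registry[tag]
    if digest_of(content) != expected_digest:
        raise RuntimeError("digest mismatch: tag no longer points at the verified image")
    return content

try:
    pull_verified("myapp:latest", pinned_digest)
    substituted = False
except RuntimeError:
    substituted = True
assert substituted  # the swap is caught at the point of consumption
```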

Policy-based verification allows deployment systems to enforce requirements beyond just the presence of a valid signature. A policy might require that the signature comes from a specific OIDC identity (the production build pipeline), that the image was built from a specific repository, that a specific set of tests passed before the artifact was signed, or that the artifact is not older than a maximum age. These richer checks are expressed through attestation verification, where signed statements about the artifact are checked against policy rules at deploy time.

The failure mode that undermines verification is treating it as optional or as a warning rather than a hard gate. If a deployment system logs a verification failure but continues to deploy the artifact, the verification provides no protection. If verification is enabled only in production but not in staging or development, misconfigurations in the verification policy may not be discovered until a production deployment fails. Verification should be a hard gate in every environment where integrity matters.

SBOM

A software bill of materials (SBOM) is a machine-readable inventory of every component inside a software artifact. For a container image, this means all operating system packages, language libraries, and their exact versions. For a compiled binary, this means all statically linked libraries and their versions. SBOM formats include SPDX (a Linux Foundation standard) and CycloneDX (widely used in security tooling). Tools like Syft can generate an SBOM from any container image by inspecting its layers and detecting package manifests.

SBOMs enable rapid response when a new vulnerability is disclosed. When Log4Shell was announced in December 2021, organisations with current SBOMs for all their artifacts could immediately query which artifacts contained log4j and at which version. They could prioritise remediation based on actual exposure rather than having to scan everything from scratch under time pressure. Organisations without SBOMs had to reactively scan, often with incomplete results, while the vulnerability was actively being exploited.

SBOMs support use cases beyond incident response. License compliance review uses the SBOM to verify that every included component has an acceptable license. Supply chain transparency requirements, such as those in the US Executive Order 14028, require SBOMs for software sold to the federal government. Customer security questionnaires increasingly ask for SBOMs as evidence of supply chain awareness. Regulatory requirements in critical infrastructure sectors are beginning to mandate SBOMs as part of secure software delivery.

The operational value of an SBOM depends on it being accurate, current, and accessible at the time it is needed. An SBOM generated at build time and stored alongside the artifact in the registry is immediately available to any system that needs it. An SBOM stored separately from the artifact it describes, not tied to the specific artifact digest, or generated only on request rather than as a standard build output, loses most of its operational value. The SBOM should be a first-class output of the build pipeline, generated automatically and attached to every published artifact.
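
The Log4Shell-style query is straightforward to sketch against stored SBOMs. The fragment below uses a heavily simplified CycloneDX-shaped structure (real SBOMs from tools like Syft carry far more detail); the digests, component names, and versions are illustrative.

```python
# Minimal CycloneDX-like SBOMs, keyed by the digest of the artifact they describe.
sboms = {
    "sha256:aaa111": {"components": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "spring-core", "version": "5.3.9"},
    ]},
    "sha256:bbb222": {"components": [
        {"name": "flask", "version": "2.0.1"},
    ]},
}

def artifacts_containing(component: str) -> list[tuple[str, str]]:
    """Answer: which artifacts ship this component, and at what version?"""
    hits = []
    for digest, sbom in sboms.items():
        for c in sbom["components"]:
            if c["name"] == component:
                hits.append((digest, c["version"]))
    return hits

assert artifacts_containing("log4j-core") == [("sha256:aaa111", "2.14.1")]
assert artifacts_containing("openssl") == []
```

This is the query an organisation with current, digest-linked SBOMs can answer in seconds; without them, the same question requires rescanning every artifact under incident pressure.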

Provenance

Provenance is the documented record of where an artifact came from and how it was produced. A provenance record for a container image might state: built from commit abc123 in repository github.com/myorg/myapp, using the production-build GitHub Actions workflow, at timestamp 2024-03-15T10:30:00Z, with inputs including the base image sha256:deadbeef. This record allows a consumer to trace the artifact back to its origin and assess whether the origin meets the consumer's trust requirements.

SLSA (Supply-chain Levels for Software Artifacts) is the framework that formalises provenance requirements into a set of maturity levels. SLSA Level 1 requires basic provenance documentation. SLSA Level 2 requires that provenance is generated by the build service itself, not by the build author. SLSA Level 3 requires that the build runs in a hardened, isolated environment with a tamper-resistant provenance record. SLSA Level 4 adds two-party review requirements. Each level makes it progressively harder for an attacker to substitute a malicious artifact while maintaining a plausible provenance record.

Provenance is generated as part of the build process and signed by the build system, not by the developer. This separation is important: a developer who can modify their own provenance record could create a false provenance for a malicious artifact. Provenance generated and signed by the CI platform (such as GitHub Actions' artifact attestation feature or SLSA provenance generators) is trusted because it comes from the build infrastructure, which the developer does not directly control.

Provenance verification at deploy time allows deployment systems to enforce that every artifact they use was built by a trusted process. A policy that says "only accept images built by the production-build workflow on the main branch, with provenance signed by the GitHub Actions identity" prevents deployment of images built outside the trusted pipeline, even if those images are signed with a valid key. This is the most powerful use of provenance: it proves not just that the artifact is unmodified but that it came from a specific, trusted build path.
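
A policy of that shape reduces to a handful of field checks over the provenance record, once the record's own signature has been verified against the build platform's identity. The field names below loosely follow the ideas in SLSA provenance but are deliberately simplified and invented for illustration.

```python
# The trusted build path, expressed as required provenance fields.
TRUSTED = {
    "repository": "github.com/myorg/myapp",
    "workflow": "production-build",
    "branch": "main",
}

def provenance_allows_deploy(prov: dict) -> bool:
    """Accept only artifacts whose provenance matches the trusted build path.
    Assumes the provenance signature was already verified separately."""
    return all(prov.get(key) == value for key, value in TRUSTED.items())

good = {"repository": "github.com/myorg/myapp", "workflow": "production-build", "branch": "main"}
rogue = {"repository": "github.com/myorg/myapp", "workflow": "dev-build", "branch": "main"}

assert provenance_allows_deploy(good)
assert not provenance_allows_deploy(rogue)  # validly signed, but wrong build path
```

Note that the rogue record fails even though nothing about it is cryptographically invalid; the policy rejects the build path, not the signature.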

Attestations

Attestations are signed statements about an artifact that go beyond the binary claim of a signature. While a signature says this artifact was signed by this key, an attestation says this artifact passed the following security scan, or this artifact was built from this source commit, or this artifact's dependencies have no critical CVEs as of this timestamp. Attestations are expressed as structured, signed documents (using formats like in-toto) and stored alongside the artifact in the registry.

Attestations allow deployment policy to be expressed in terms of what evidence the artifact carries, not just who signed it. A deployment policy might require a valid signature from the production pipeline (authentication), a passing SAST scan attestation (code quality evidence), a passing SCA scan attestation with no critical CVEs (dependency health evidence), and a provenance attestation showing the build came from the main branch (source control evidence). The deployment system checks all of these attestations before allowing the artifact to be deployed.

The value of attestations is that they bring multiple security check results together into a single, verifiable package that travels with the artifact. Instead of requiring the deployment system to independently verify that a specific pipeline ran and all its checks passed, the pipeline attaches attestations to the artifact at build time, and the deployment system verifies the attestations at deploy time. This makes security evidence portable, verifiable offline, and independent of the availability of the CI system at deploy time.

A common mistake is generating attestations without a policy that requires them. Attestations that are created but never checked at deployment provide no security value. The deployment policy must explicitly require specific attestations, and the enforcement must be a hard gate rather than a warning. Building this policy requires advance planning: deciding which checks are required, which attestation format they produce, and how the deployment system verifies them, before the first artifact with attestations is deployed.
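
A minimal version of such a hard-gate policy: the deployment decision requires every named attestation to be present and passing, and missing evidence fails closed rather than producing a warning. The attestation type names here are illustrative.

```python
# The attestations this environment requires before anything is deployed.
REQUIRED = {"sast-scan", "sca-scan", "provenance"}

def admit(attestations: dict[str, bool]) -> bool:
    """Hard gate: every required attestation must exist and report success.
    A missing attestation is treated exactly like a failing one (fail closed)."""
    return all(attestations.get(name) is True for name in REQUIRED)

assert admit({"sast-scan": True, "sca-scan": True, "provenance": True})
assert not admit({"sast-scan": True, "provenance": True})                      # sca-scan missing
assert not admit({"sast-scan": True, "sca-scan": False, "provenance": True})   # failed scan
```

The fail-closed behaviour falls out of the lookup default: an absent key is never equal to True, so missing evidence and failing evidence are rejected identically.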

Cosign

Cosign is an open-source tool from the Sigstore project that provides signing, verification, and attestation capabilities for container images and other OCI artifacts. It supports both traditional key pair signing and keyless signing using OIDC identities, and it stores signatures and attestations in OCI registries alongside the images they describe. Cosign has become the de-facto standard for container image signing in the cloud-native ecosystem.

Keyless signing with Cosign works by obtaining a short-lived signing certificate from Sigstore's Fulcio certificate authority, using an OIDC token from the CI provider (GitHub Actions, GitLab, or others) as proof of identity. The certificate is tied to the OIDC identity (for example, the GitHub Actions workflow URL), and the signing event is recorded in Sigstore's Rekor transparency log. This means there is a permanent, publicly auditable record of every signing event, without requiring teams to manage long-lived signing keys.

Cosign integrates with Kubernetes admission controllers through tools like Sigstore's policy-controller and Connaisseur. These controllers intercept every Pod creation request, extract the image references, fetch the signatures and attestations from the registry, verify them against a configured policy, and allow or deny the Pod based on the result. This makes signature verification automatic and mandatory for every workload in the cluster, without requiring any changes to how workloads are deployed.

The Sigstore ecosystem that Cosign is part of includes Rekor (the transparency log), Fulcio (the certificate authority), and various integrations with CI/CD systems and package registries. Understanding Cosign in the context of this ecosystem explains why keyless signing is more trustworthy than raw key-pair signing. The combination of a short-lived certificate from a trusted CA and an append-only transparency log makes it very difficult to sign an artifact in secret or to deny that a signing event occurred.

Trust chain

The trust chain is the full set of identities, certificates, policies, and verifiers that connect a signed artifact to a deployment decision. A trust chain for a container image deployed in Kubernetes might include the GitHub Actions OIDC identity that obtained the signing certificate from Fulcio, the Cosign signature stored in the OCI registry, the Rekor transparency log entry that records the signing event, the Kubernetes admission controller that fetches and verifies the signature, and the policy that specifies which identities are trusted signers for which repositories. Each element of this chain must be correctly configured for the trust to be meaningful.

Trust chains must be explicit and reviewed. An implicit trust chain, where signing is technically happening but no one has reviewed whether the signing identity is correct, the verification policy matches the intended signers, or the admission controller is correctly configured, provides a false sense of security. The security of the trust chain is only as strong as its weakest element, and weaknesses are often in configuration and policy rather than in the cryptography itself.

Rotating elements of the trust chain requires coordination. If the signing key is rotated, or the OIDC identity changes (for example, because the repository is renamed or the workflow file is moved), existing signatures become unverifiable unless the new identity is added to the trust policy before the old one is removed. Trust chain rotation should be treated as a deployment event: planned, tested in a non-production environment, and executed with rollback capability.
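
The overlap requirement can be sketched as a trust policy that temporarily accepts both identities; removing the old identity before existing artifacts are re-signed or retired would make their signatures unverifiable. The identity strings are invented for illustration.

```python
# Before rotation: only the old workflow identity is trusted.
trusted_identities = {"github.com/myorg/myapp/.github/workflows/build.yml"}

def signature_verifiable(signer: str) -> bool:
    """A signature is only as good as the trust policy's view of its signer."""
    return signer in trusted_identities

old_signer = "github.com/myorg/myapp/.github/workflows/build.yml"
new_signer = "github.com/myorg/myapp/.github/workflows/release.yml"

# Step 1: add the new identity BEFORE anything depends on it,
# so old and new signatures verify during the transition window.
trusted_identities.add(new_signer)
assert signature_verifiable(old_signer) and signature_verifiable(new_signer)

# Step 2: only after artifacts signed by the old identity are re-signed
# or retired, remove the old identity from the trust policy.
trusted_identities.discard(old_signer)
assert not signature_verifiable(old_signer) and signature_verifiable(new_signer)
```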

Supply chain attacks that target trust chains look for weak links: signing keys with broad access stored insecurely, verification policies that are too permissive, admission controllers configured in warning mode rather than enforcement mode, or trust roots that include more identities than intended. Regular review of the trust chain configuration, asking who can sign, under what conditions, and where those conditions are enforced, is the operational practice that keeps the chain tight over time.