Supply Chain

Dependency Security

How to control third-party packages with inventory, lockfiles, version discipline, and review of upstream risk.

Dependency security is the practice of understanding, controlling, and continuously reviewing the third-party packages that a software project relies on. Modern applications commonly have hundreds of direct dependencies and thousands of transitive ones, each of which executes with the same level of trust as the application's own code. Good dependency security means maintaining an accurate inventory of what is in use, constraining how versions are resolved, reviewing upstream risk before new packages enter the build, and keeping dependencies updated so that security fixes are applied promptly.

Learning objectives

What you should be able to do after reading.
  • Explain how direct and transitive dependencies change the application's trust boundary.
  • Describe how lockfiles, pinning, and update workflows reduce supply-chain surprise.
  • Recognize common package-introduction attacks such as dependency confusion and typosquatting.

At a glance

Fast mental model before you dive in.
Inventory
  • Direct packages
  • Transitive packages
  • Lockfiles
Change control
  • Pinning
  • Updates
  • Pull-request review
Supply risk
  • Dependency confusion
  • Typosquatting
  • Third-party risk

Core idea

Every dependency is borrowed code that executes with some level of trust in your environment. That means dependency management is not just a maintenance task; it is a security control over what external code is allowed into the product. A single compromised or vulnerable package can expose the entire application to attack, regardless of how carefully the application's own code was written.

The goal is not to avoid all dependencies. The goal is to know what you rely on, constrain how versions are resolved, make risky changes visible before they spread across builds, and have a process for acting on new vulnerability disclosures without turning every update into a crisis.

Control points

  • Keep an accurate inventory of direct and transitive dependencies so new additions are visible and removals are tracked.
  • Use lockfiles and version pinning to reduce unexpected resolution changes and make builds reproducible.
  • Review where packages come from, who maintains them, and whether private names can be shadowed by public packages with the same name.

Baseline

  • Remove libraries that are no longer used instead of carrying quiet risk forward indefinitely.
  • Update dependencies on a regular cadence so security fixes do not pile up into larger, riskier changes.
  • Treat package source configuration and registry settings as part of the release surface, not just a developer convenience.

Signals to watch for

Patterns worth investigating further.
  • A new package is added without a clear functional reason or owner.
  • Builds resolve different versions even though the application code did not change.
  • Package names, sources, or maintainers change without review.

DEEP DIVE

Dependencies

Dependencies include both the packages a developer explicitly installs (direct dependencies) and all the packages that those packages depend on in turn (transitive dependencies). The transitive dependency graph is usually much larger than the direct one. A Node.js application with fifty direct dependencies may have two thousand transitive ones. Most of those transitive packages are never directly called by application code, but they are present in the build, they are executed at install time, and vulnerabilities in them can still be exploited.

Treating dependency selection as an architectural decision, not just a coding convenience, changes the standard of care applied. When a developer adds a package to solve a ten-line problem, the right questions are: Does this package have a healthy maintenance history? Does it have a reasonable number of transitive dependencies? Does it come from a trustworthy publisher? Is the functionality available through a package the project already uses? These questions are not gatekeeping; they are risk management applied before the package is part of the product.

The attack surface of a dependency is its entire codebase, not just the functions the application calls. Malicious code in an npm package that runs during installation (via install scripts) executes regardless of whether the application ever calls that package's public API. This is why dependency confusion and typosquatting attacks are so effective. The attacker does not need to compromise a function that the application uses; they only need to get their package installed.

Dependency health indicators that deserve attention include time since last release (abandoned packages accumulate unpatched vulnerabilities), number of open security issues, whether the package is maintained by a single anonymous author (higher risk of account takeover), whether the package is pinned to or audited against a known-good version, and whether the package is a deep transitive dependency brought in by something the team has no direct control over.
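The health indicators above can be turned into a simple triage heuristic. The sketch below is illustrative only: the field names, thresholds, and flag wording are assumptions, not a standard scoring scheme, and a real audit would pull this metadata from registry APIs.

```python
from datetime import datetime, timezone

# Illustrative heuristic only: field names and thresholds are assumptions,
# not an established standard; tune them to your own risk appetite.
def health_flags(pkg: dict, now: datetime) -> list[str]:
    """Return a list of risk flags for one package metadata record."""
    flags = []
    last = datetime.fromisoformat(pkg["last_release"]).replace(tzinfo=timezone.utc)
    if (now - last).days > 365:
        flags.append("stale: no release in over a year")
    if pkg.get("maintainers", 0) <= 1:
        flags.append("bus-factor: single maintainer")
    if pkg.get("open_security_issues", 0) > 0:
        flags.append("unresolved security issues")
    if pkg.get("transitive_deps", 0) > 50:
        flags.append("heavy: large transitive footprint")
    return flags
```

A package flagged "stale" and "bus-factor" at the same time is a strong candidate for replacement review even when it carries no current CVE.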

Lockfiles

A lockfile records the complete, resolved dependency graph that a build used at a specific point in time. It specifies not just the direct dependencies and their version constraints but the exact resolved versions of every transitive dependency. npm produces package-lock.json, Yarn produces yarn.lock, pip installations are commonly pinned through a fully resolved requirements.txt (often generated by a tool such as pip-tools), and Bundler produces Gemfile.lock. When a lockfile is committed to version control and used consistently, every developer and every CI run installs exactly the same packages.

Without a lockfile, package managers resolve the dependency graph fresh on each install. If the constraints allow ranges (for example, express: >=4.0.0 or requests: ~=2.28), the resolver may select different patch or minor versions on different days as new versions are published. A build that passes tests today may pick up a different transitive dependency version tomorrow and behave differently. The lockfile eliminates this non-determinism by fixing the full graph until the lockfile is deliberately updated.

Lockfiles are a security control because they prevent silent substitution of dependency versions. If a malicious version of a package is published between two builds, and neither build has a lockfile, the second build may silently pick up the malicious version. With a lockfile, the second build uses the same exact version as the first until someone explicitly runs the update command. Combined with integrity hashes in the lockfile, any tampering with the published package content is detectable.
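The integrity-hash property can be audited mechanically. The sketch below assumes npm's lockfileVersion 2/3 layout, where resolved packages live under a top-level "packages" map with an "integrity" field; it flags entries that would install without tamper detection.

```python
import json

# Sketch, assuming npm lockfileVersion 2/3: resolved packages appear under
# the top-level "packages" map, each with an "integrity" hash.
def packages_missing_integrity(lockfile_text: str) -> list[str]:
    """List resolved package paths that lack an integrity hash."""
    lock = json.loads(lockfile_text)
    missing = []
    for path, entry in lock.get("packages", {}).items():
        if path == "":           # the root project entry carries no integrity
            continue
        if entry.get("link"):    # workspace symlinks resolve locally
            continue
        if "integrity" not in entry:
            missing.append(path)
    return missing
```

Running a check like this in CI turns "the lockfile exists" into "the lockfile actually protects every resolved package".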

Lockfiles must be committed to version control and kept up to date. A lockfile that is excluded from version control provides no protection because each developer and each CI runner resolves the graph independently. A lockfile that is months or years out of date may contain resolved versions with known vulnerabilities that have since been patched in newer releases. The lockfile update process should be automated and regular, with dependency update pull requests reviewed and merged on a predictable cadence.

Pinning

Pinning specifies an exact version of a dependency rather than a version range. Instead of specifying requests>=2.28.0, a pinned dependency specifies requests==2.28.2. Pinning prevents the package manager from ever resolving to a different version without an explicit change to the dependency specification. It is a stronger guarantee than a lockfile for direct dependencies, because the lockfile can be regenerated to pick up new versions while the pinned version requires an intentional edit.
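Whether a dependency file actually pins can be checked with a few lines. The sketch below is a minimal linter for a requirements.txt-style file; real requirement syntax (extras, environment markers, URLs) is richer than this regex, so treat it as a starting point, not a complete parser.

```python
import re

# Minimal sketch: flags requirement lines that use ranges instead of exact
# pins. Real requirement syntax (extras, markers, URLs) is richer than this.
PIN_RE = re.compile(r"^\s*[A-Za-z0-9_.\-]+\s*==\s*[\w.]+")

def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned with '=='."""
    bad = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue
        if not PIN_RE.match(stripped):
            bad.append(stripped)
    return bad
```

A CI job that fails when this list is non-empty makes "everything is pinned" an enforced invariant rather than a convention.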

Pinning is especially important for security-sensitive dependencies, for dependencies at the top of the build graph (where a change propagates to everything below), and in environments where reproducibility is critical, such as in compliance-regulated software or in the build tooling itself. Pinning base images in Dockerfiles (using digest references rather than floating tags) and pinning GitHub Actions to a specific commit SHA rather than a mutable version tag are applications of the same principle beyond just language packages.

The tradeoff of pinning is that pinned versions do not automatically receive security fixes. A package pinned to a specific version stays at that version indefinitely until someone explicitly updates it. This means teams must have a process for regularly reviewing and updating pinned versions, rather than relying on version ranges to automatically pull in fixes. Automation such as Dependabot or Renovate can open pull requests when a pinned version has a newer release available, making the update process manageable without sacrificing control.

A common mistake is confusing pinning with security. Pinning to a version that has a known vulnerability provides reproducibility but not safety. The value of pinning is that it makes the dependency graph stable and controlled, which is a precondition for other security practices such as SCA scanning and vulnerability triage. Pinning without SCA scanning means you know exactly which vulnerable version you are running but have no automated process for detecting it.

Updates

Regular dependency updates are the operational counterbalance to pinning and lockfiles. Pinning and lockfiles create a stable, controlled dependency graph; updates keep that graph from accumulating known vulnerabilities and compatibility debt over time. The worst update situation is one where dependencies are never updated until a critical vulnerability forces an emergency response, at which point the update may touch many packages simultaneously and the risk of a breaking change is highest.

Automated update tooling such as Dependabot (GitHub), Renovate Bot, or PyUP (Python) creates pull requests when new versions of pinned dependencies are available. The PR includes the version change, the changelog for the new release, and any security advisories. Reviewing and merging these PRs regularly keeps the dependency graph current in small, low-risk increments. The investment in reviewing a single-package minor version update is far smaller than the investment in responding to a critical CVE in a package that has not been updated in two years.

Update strategies should distinguish between security updates and non-security updates. Security updates for critical or high severity CVEs should be treated as high-priority work with a short response time, typically measured in days rather than weeks. Non-security maintenance updates (new features, deprecation fixes, performance improvements) can be deferred to a regular update cadence. Some teams use separate Dependabot configurations for security-only updates (which merge automatically if CI passes) and non-security updates (which require human review).
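The security/non-security split can be expressed as a small triage policy. In the sketch below the severity labels and response windows are example values, not a standard; adjust them to your own SLAs.

```python
# Illustrative policy sketch: the severity labels and response windows are
# example values, not a standard; set them to match your own SLAs.
RESPONSE_DAYS = {"critical": 2, "high": 7}

def triage(update: dict) -> str:
    """Classify a dependency update as fast-track or regular cadence."""
    severity = (update.get("advisory_severity") or "").lower()
    if severity in RESPONSE_DAYS:
        return f"fast-track: respond within {RESPONSE_DAYS[severity]} days"
    return "cadence: batch into the next scheduled update review"
```

The same rule can drive labels on automated update pull requests so that reviewers see the expected response window immediately.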

Breaking changes in dependency updates are the main reason teams defer updates for extended periods. A major version update that requires code changes to the application is legitimate work that competes with feature development. The practical solution is to invest in good test coverage so that the effect of a dependency update is visible in CI results, and to prefer smaller, more frequent updates over large, infrequent ones. An application with comprehensive tests can upgrade a major dependency version with confidence; one with thin coverage treats every major update as a risky unknown.

Dependency confusion

Dependency confusion is an attack where a malicious package with the same name as an internal private package is published to a public registry. When the build system's package resolution logic checks public registries in addition to the private one, it may resolve the public package instead of the private one, especially if the public version has a higher version number. The attacker publishes the public package with a high version number and malicious code in an install script, and the build fetches and executes it automatically.

This attack was demonstrated publicly in 2021 by researcher Alex Birsan, who used it to compromise the build systems of several major technology companies. The affected companies had private packages with names like mycompany-utils that resolved from internal registries. By publishing packages with the same names to npm, PyPI, or RubyGems at higher version numbers, Birsan caused those companies' build systems to fetch and execute his code during builds, proving the vulnerability was real and widespread.

The defenses against dependency confusion work at the registry configuration level. The most reliable defense is to scope all private packages under a namespace that cannot exist in public registries (for npm, this means using scoped packages like @mycompany/utils, because the @mycompany scope is controlled by the organisation). For ecosystems that do not support namespacing, the defenses include configuring the package manager to use only the internal registry for specific package names, or using an internal registry proxy that blocks public packages with the same names as private ones.

Regularly auditing the names of internal packages against public registries is a useful detective control. If an internal package name exists in a public registry without the organisation's knowledge, it may have been registered by an attacker in preparation for a dependency confusion attack, or it may be a legitimate coincidence that still creates a resolution ambiguity risk. Either way, the conflict needs to be resolved by renaming the internal package or registering the public name defensively.
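The audit described above reduces to a set comparison once the name lists are in hand. The sketch below is a pure function over those lists; in practice the public-name set would come from querying each registry (npm, PyPI, RubyGems) for every internal name, and the owned-scope convention shown is an assumption for npm-style scoped packages.

```python
# Sketch of the name audit; the public-name set would in practice come from
# querying each registry, and "@mycompany" is a hypothetical owned scope.
def confusion_risks(internal: set, public: set, owned_scopes=()) -> set:
    """Internal package names that also exist publicly outside owned scopes."""
    risky = set()
    for name in internal & public:
        scope = name.split("/")[0] if name.startswith("@") else None
        if scope not in owned_scopes:
            risky.add(name)
    return risky
```

Names under an organisation-controlled scope are safe to collide because the registry enforces scope ownership; everything else in the result needs renaming or defensive registration.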

Typosquatting

Typosquatting is the registration of package names that are visually similar to popular or frequently used packages, with the intent of being installed when a developer makes a typographical error. The typosquatted package may appear functional on first use but contain malicious code that runs at install time, exfiltrates environment variables (which often contain credentials and API keys), or establishes a backdoor. Because install scripts run automatically during package installation, the attack executes before the developer can review the code.

The most effective defense against typosquatting is strict dependency review before new packages are added to a project. A developer who types reqeusts instead of requests in a requirements file should be stopped by a PR review process, not by discovering that the installed package was malicious. Automated checks that compare new dependency names against a list of known packages and flag names that are close but not exact matches (edit distance checks) can surface potential typosquats before they are committed.
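The edit-distance check mentioned above can be sketched with a classic Levenshtein implementation. The threshold and the "known packages" list here are illustrative; production tools weight popularity and common keyboard slips as well.

```python
# Sketch of the edit-distance check; the threshold and the known-package
# list are illustrative, not a vetted typosquat detector.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def possible_typosquats(new_name: str, known: set, max_dist: int = 2) -> set:
    """Known package names suspiciously close to, but not equal to, new_name."""
    return {k for k in known
            if k != new_name and levenshtein(new_name, k) <= max_dist}
```

Run against each newly added dependency name in a PR, a non-empty result is a prompt for the reviewer to confirm the name was intended, not a hard block.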

Package managers and registries have implemented some protections against typosquatting, including algorithms that block registration of names very similar to popular packages and takedown procedures for reported malicious packages. However, these controls are reactive and incomplete. The ecosystem's sheer size means that some typosquatted packages will exist in public registries at any given time. Development environments that allow arbitrary package installation without review are exposed to this risk on every developer's machine.

In CI/CD environments, allowlisting packages is a strong defense that eliminates both typosquatting and dependency confusion risk. An allowlist specifies the exact packages that are permitted in a build; any package not on the list fails the build. This is practical for stable codebases with known, infrequently changing dependency sets. For rapidly evolving projects where new dependencies are added frequently, strict PR review of dependency changes provides similar protection with less operational overhead.

Third-party risk

Third-party risk encompasses the full range of ways that external packages can introduce risk beyond known CVEs. This includes maintainer compromise, where an attacker takes over a legitimate package account and publishes a malicious version; abandoned projects, where the maintainer stops responding to security reports and the package accumulates unpatched vulnerabilities; sudden ownership changes, where a popular package is sold or transferred to a new owner whose intentions are unknown; and insecure defaults, where the package ships with configuration that is functional but insecure.

Maintainer account compromise is a particularly dangerous risk because it replaces a trusted package with a malicious one in a way that looks legitimate to automated systems. If the compromised version passes checksum verification (because the attacker published through the legitimate account), there is no automated signal that anything has changed. The defense is vigilance: monitoring for unexpected new releases from packages in active use, subscribing to the package ecosystem's security advisories, and treating any unexpected behaviour in a build after a dependency update as a signal to investigate.

Evaluating the ongoing health of a dependency requires looking beyond the CVE database. Metrics such as time since last commit, number of open unaddressed security issues, response time to security reports, number of active maintainers, and the dependency's own dependency count are all signals of future risk. A package that is actively maintained, responsive to security reports, and minimally dependent on other packages is lower risk than one that is effectively abandoned, even if it has no current CVEs.

The decision of whether to use or continue using a third-party package should be a documented, revisitable decision. Teams that adopt packages without considering long-term maintenance, and then continue using them indefinitely without review, accumulate risk that is not visible in any vulnerability scan. A simple policy that requires periodic review of high-use dependencies for continued maintenance health, and that designates an owner for responding to security advisories for each critical dependency, converts implicit risk into managed risk.