Hardening

Linux

Linux hardening is about minimalism, strong authentication, and predictable configuration. You reduce the number of running services, enforce least privilege, and use built-in security features such as mandatory access control (MAC), auditing, and firewalls to make compromise both harder to achieve and easier to detect.

Learning objectives

What you should be able to do after reading.
  • Reduce attack surface by disabling what you do not need
  • Harden authentication and privilege escalation paths
  • Apply kernel and package updates safely and consistently
  • Validate hardening with simple checks and meaningful logging

At a glance

Fast mental model before you dive in.
Main goals
  • Minimal exposed services and clear ownership of changes
  • Strong SSH and sudo configuration
  • Defense in depth with MAC, firewalling, and auditing
  • Fast recovery with backups and known-good configs
High impact controls
  • Automatic security updates where appropriate
  • SSH key auth and restricted login policy
  • Firewall default deny inbound with explicit allowed ports
  • SELinux or AppArmor enforcing on supported systems
Practical workflow
  • Baseline build, then harden in small steps with verification
  • Use configuration management to avoid snowflake servers
  • Log and monitor auth, privilege, and network changes
  • Review installed packages and running services regularly

Overview

Linux hardening starts with reducing complexity. Every enabled service, open port, and installed package is another place for a vulnerability or misconfiguration to exist.

Most compromises succeed because of weak authentication, exposed management services, or unpatched software. Your first priority is to make remote access predictable and to keep updates flowing.

For stricter environments, use mandatory access control, auditing, and file integrity checks to add layers that still matter after a user account is compromised.

  • Remove or disable unused services and listening daemons
  • Prefer SSH keys and restrict who can log in and from where
  • Use sudo intentionally and avoid direct root login
  • Keep time synchronized and logs retained for investigations
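The SSH points above fit in a small sshd_config drop-in. This is a sketch: the sshusers group name is an assumption for illustration, and the drop-in path assumes an OpenSSH build that includes /etc/ssh/sshd_config.d/*.

```
# /etc/ssh/sshd_config.d/50-hardening.conf
PermitRootLogin no              # no direct root login
PasswordAuthentication no       # keys only
KbdInteractiveAuthentication no
AllowGroups sshusers            # hypothetical group: create it and add named admins
MaxAuthTries 3
```

Validate with sshd -t before reloading, and keep an existing session open while you test so a typo cannot lock you out.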
Tip: hardening is easier before production
Build a secure image first, then clone it. Retrofitting hardening on many unique systems is slow and error-prone.

Hardening actions

Two baselines follow: a low friction home baseline and a stricter security baseline.

Home baseline

Keep packages up to date
  What you do: Enable regular updates for the OS and key packages. Reboot when required for kernel or libc updates.
  Why you do it: Many exploits target known vulnerabilities that already have patches.
  Security effect: Reduces exposure to commodity exploitation.

Disable unused services
  What you do: List enabled services and disable anything you do not use. Close unused ports.
  Why you do it: A service you do not need is pure risk.
  Security effect: Smaller attack surface and fewer remote entry points.

Use a host firewall
  What you do: Enable a firewall tool (ufw, firewalld, or an nftables wrapper) and default deny inbound. Allow only required ports.
  Why you do it: Local firewalls contain mistakes and reduce lateral movement.
  Security effect: Fewer reachable services and a clearer network posture.

Harden SSH basics
  What you do: Use SSH keys, disable direct root login, and limit which users can SSH in.
  Why you do it: SSH is a common brute-force and credential-stuffing target.
  Security effect: Stronger auth and reduced exposure to password attacks.

Use sudo, avoid shared accounts
  What you do: Give named users sudo where needed and avoid logging in as root.
  Why you do it: Shared accounts destroy accountability and increase blast radius.
  Security effect: Better traceability and reduced privilege misuse.

Basic logging and log rotation
  What you do: Ensure journald or syslog is running and logs are rotated and retained long enough for troubleshooting.
  Why you do it: Without logs you cannot explain incidents or failures.
  Security effect: Improves detection and forensics readiness.

Backups and recovery plan
  What you do: Back up critical configs and data. Test restores, not just backup jobs.
  Why you do it: Hardening does not prevent every incident. Recovery matters.
  Security effect: Limits downtime and reduces ransom leverage.
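On a Debian or Ubuntu style host, the home baseline can be sketched as a handful of commands. Package names, service names, and the ufw tool are distribution assumptions, and everything here needs root, so treat it as a starting checklist rather than a script to paste.

```
# Updates: enable unattended security upgrades.
apt-get install -y unattended-upgrades
dpkg-reconfigure -f noninteractive unattended-upgrades

# Attack surface: review what is enabled and listening, then disable the unused.
systemctl list-unit-files --state=enabled --type=service
ss -tlnp                                  # listening TCP sockets and their processes
systemctl disable --now cups.service      # example: a print service a server rarely needs

# Firewall: default deny inbound, allow only what is required.
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp                          # SSH; restrict the source range where possible
ufw enable
```

Disable one service at a time and re-check that what you rely on still works; bulk changes are how hardening gets rolled back wholesale.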
Security baseline

Apply a documented baseline
  What you do: Use a CIS-aligned baseline or vendor hardening guide and manage it as code.
  Why you do it: A baseline reduces drift and makes audits possible.
  Security effect: Repeatable posture across fleets and faster investigation.

Enforce MAC (SELinux or AppArmor)
  What you do: Keep SELinux or AppArmor in enforcing mode where supported and fix denials rather than disabling it.
  Why you do it: MAC limits what compromised processes can access even with user-level execution.
  Security effect: Contains exploitation and reduces privilege escalation paths.

Centralize authentication and restrict SSH
  What you do: Disable password auth where possible, restrict users and source networks, and consider MFA for privileged access.
  Why you do it: Passwords are easy to brute force and easy to phish.
  Security effect: Reduces credential abuse and limits remote entry.

Harden sudo and privilege escalation
  What you do: Limit sudoers entries to specific commands, require re-authentication, and monitor sudo usage.
  Why you do it: Overly broad sudo turns a user compromise into full root quickly.
  Security effect: Reduces lateral movement and persistence options.

Enable and tune auditd
  What you do: Enable auditd and deploy rules focused on auth, privilege, and critical files. Forward audit logs centrally.
  Why you do it: Audit trails are valuable when you need to answer who did what and when.
  Security effect: Improves detection and supports incident response.

Kernel and network hardening
  What you do: Apply safe sysctl settings for networking and disable unsafe legacy behaviors when not needed.
  Why you do it: Many attacks rely on weak defaults and legacy compatibility.
  Security effect: Reduces exploitation surface and improves resilience.

File integrity monitoring
  What you do: Use a file integrity tool (for example AIDE) on critical paths and alert on unexpected changes.
  Why you do it: Attackers often modify binaries, configs, and cron jobs for persistence.
  Security effect: Detects tampering and supports containment.

Logging pipeline and alerting
  What you do: Ship auth and audit logs to a central system and alert on suspicious patterns such as repeated failures or new privileged groups.
  Why you do it: Local logs can be deleted after compromise.
  Security effect: Higher chance of detecting incidents early.
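A starting rule set for the auditd item might look like the following sketch. The -k keys are arbitrary labels of our choosing, and the execve rule assumes a 64-bit system.

```
# /etc/audit/rules.d/50-hardening.rules  (load with: augenrules --load)
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/sudoers -p wa -k privilege
-w /etc/sudoers.d/ -p wa -k privilege
# Record commands executed as root by real, logged-in users.
-a always,exit -F arch=b64 -S execve -F euid=0 -F auid>=1000 -F auid!=unset -k root-cmds
```

Start narrow: broad execve logging on a busy host can flood storage before it helps anyone.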
Watch out: do not disable MAC to fix an app
If SELinux or AppArmor blocks an action, treat it as a signal. Adjust the policy or the app, not the security layer.

Signals to watch for

Patterns worth investigating further.
  • New listening ports or services enabled outside of change windows
  • Repeated SSH authentication failures or logins from unusual locations
  • Unexpected sudo usage or changes to sudoers files
  • Changes to critical configs, cron jobs, or system binaries

DEEP DIVE

Mental model: reduce what is reachable and what is allowed

Linux hardening is easiest when you treat the system as a collection of small, composable permissions. The core idea is simple: reduce what is reachable, reduce what is allowed, and reduce what is trusted.

A useful mental model is exposure times permission equals risk. Exposure is open services and reachable interfaces. Permission is what code can do after it starts. Trust, the third lever, is what packages and configurations you accept as valid.

Linux gives you many controls, but the hard part is consistency. Small differences between hosts, distributions, and teams can create gaps that attackers exploit because the environment becomes unpredictable.

• Minimize services and listening sockets because they define remote input.

• Minimize privileges and capabilities because they define local impact.

• Minimize trust boundaries by separating roles, users, and workloads.

Baseline priorities for Linux systems

Real-world Linux incidents often start with one of three patterns: an exposed service, a weak credential path, or a supply chain surprise. A good baseline prioritizes controls that block these patterns across many workloads.

Service exposure is not only about ports. It is also about who can reach them, which interface they bind to, and whether they run with unnecessary permissions. Default to bind only to what you need.
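Binding is usually a single configuration line. PostgreSQL is one concrete example of the pattern:

```
# postgresql.conf
listen_addresses = 'localhost'   # local clients only; '*' would listen on every interface
```

Most daemons have an equivalent knob: prefer loopback or a specific internal address over a wildcard bind, then confirm the result with a socket listing such as ss -tln, where 127.0.0.1 or ::1 means local-only and 0.0.0.0 or [::] means every interface.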

Credential paths include SSH, local accounts, API tokens in files, and application secrets. Hardening is partly about reducing where secrets live and partly about limiting what an attacker can do if one leaks.

Supply chain risk is operational. If you install from many sources, run unknown scripts as root, or skip updates, your baseline is fragile. Prefer fewer repositories, signed packages, and a repeatable install story.

Tradeoff to expect: hardening can cost convenience. Tight permissions and strict services can break automation and developer tools. The safe approach is to separate environments or roles instead of weakening everything for everyone.

When you need compatibility, aim for narrower changes, such as allowing one service feature, rather than broad changes like disabling a control globally.

Common traps on Linux hardening

A frequent trap is hardening the host but forgetting the app. If the application runs as root, writes secrets to world-readable files, or has no patch cadence, host hardening is forced to compensate and often fails.

Another trap is trusting network boundaries too much. Many Linux servers are reachable from places people did not intend because of VPNs, cloud security groups, container networks, or flat internal networks.

Privilege creep is also common. Over time, more users get sudo access, more services run as privileged, and more exceptions are added. The baseline slowly becomes a suggestion rather than a rule.

Over-hardening can be a trap too. If you make changes that are hard to troubleshoot, operators will bypass them under pressure. That creates a split-brain environment with undocumented behavior.

• Silent failures: a control is configured but not enforced, for example a policy in permissive mode or a service started with a different unit file.

• Inconsistent configuration: the same role behaves differently across hosts, which makes both monitoring and incident response harder.

Containment: MAC, isolation, and safer defaults

Containment is where Linux shines when used deliberately. Think in terms of reducing what a process can see, touch, and call. Even if code execution happens, containment can prevent privilege escalation and data theft.

Use mandatory access control when appropriate. SELinux or AppArmor can be frustrating at first, but they provide a strong boundary that is hard to replicate with file permissions alone. The tradeoff is policy maintenance and troubleshooting skill.

Isolation tools matter too. Namespaces, cgroups, and container runtimes can be part of a hardening story, but only if you treat them as security boundaries with clear assumptions.

Systemd can also reduce risk by limiting a service at runtime, such as restricting file system access and system calls. This is powerful because it hardens the service without requiring code changes.
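As a sketch, a drop-in for a hypothetical myapp.service could apply several of these runtime limits without touching the application; the writable path is an assumption for illustration.

```
# /etc/systemd/system/myapp.service.d/hardening.conf
[Service]
NoNewPrivileges=yes              # block setuid-based privilege gains
ProtectSystem=strict             # mount most of the filesystem read-only for this service
ProtectHome=yes
PrivateTmp=yes                   # private /tmp, invisible to other services
ReadWritePaths=/var/lib/myapp    # the one path the service may write (assumed)
CapabilityBoundingSet=           # drop all capabilities
SystemCallFilter=@system-service
```

Running systemd-analyze security against a unit scores its exposure and is a quick way to find which of these limits a given service can tolerate.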

When to relax containment: often for legacy software, vendor agents, or debugging. The safe pattern is to relax only for the specific service and keep the rest strict, then write down the reason and a plan to revisit.

Document exceptions with a test: what will you check later to confirm you can tighten the control again without breaking production?

Verification and evidence: logs, audits, and integrity checks

Verification on Linux should focus on effective state, not intended state. Many systems look hardened on paper but run with different runtime settings, custom unit overrides, or outdated packages.

Start with exposure checks: list listening ports, services, and interfaces. Good looks like this: only expected services are listening, and management services are reachable only from controlled networks.

Then verify privilege and integrity: check which accounts have sudo, which services run as root, and whether critical files are protected from modification. If you use integrity tooling, confirm it is actually reporting changes.

How to verify, and what good looks like:

• Exposure: no unexpected listening sockets, and firewalls or security groups match documented intent

• Accounts: minimal sudo users, no stale accounts, and clear ownership for service accounts

• Policy: SELinux is enforcing or AppArmor profiles are applied where intended, not quietly disabled

• Drift: configuration management reports converge, and manual edits are rare and reviewed
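A read-only script along these lines can spot-check effective state. Tool availability varies by distribution, so each check degrades to a note rather than failing.

```shell
#!/bin/sh
# Spot-check effective state: exposure, accounts, and MAC status (read-only).

echo "== Listening TCP sockets =="
ss -tln 2>/dev/null || netstat -tln 2>/dev/null || echo "no socket tool found"

echo "== Members of admin groups =="
getent group sudo wheel 2>/dev/null || echo "no sudo/wheel group found"

echo "== Mandatory access control =="
if command -v getenforce >/dev/null 2>&1; then
    getenforce                              # SELinux: you want Enforcing
elif command -v aa-status >/dev/null 2>&1; then
    aa-status --enabled 2>/dev/null && echo "AppArmor enabled"
else
    echo "no SELinux/AppArmor tooling found"
fi

CHECKS_DONE=yes
```

Compare the output against documented intent; the point is catching the gap between what the baseline says and what the host actually runs.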

When to relax this: for short-lived lab hosts or isolated build machines. Even then, write down what is relaxed, and keep at least basic patching and account hygiene because those failures travel easily between environments.