Detection and Logging

Log Types Overview

Different log types answer different questions. Strong detection comes from collecting the right sources, keeping clocks consistent, and making events searchable with clear, normalized fields.

Learning objectives

What you should be able to do after reading.
  • Explain what system, authentication, application, and network logs are used for
  • Pick a practical baseline of log sources that supports common investigations
  • Understand why normalization, time sync, retention, and integrity controls matter

At a glance

Fast mental model before you dive in.
🧠
What logs answer
  • What happened, when it happened, and what changed
  • Who did it and which identity was used
  • Where it originated and what it reached
  • Whether it succeeded and what followed
Core log families
  • System and service logs for stability and service activity
  • Authentication and audit logs for identity and privileged actions
  • Network logs for reachability and policy decisions
  • Application logs for business actions and API behavior
Common mistakes
  • Keeping logs local only, making deletion and tampering easier
  • Time drift and mixed time zones that break timelines
  • Unparsed data and inconsistent field names
  • Collecting noise without a clear investigation use case

Core log categories

System logs describe what the operating system and core services are doing. Use them to confirm reboots, service restarts, update activity, and signs of instability such as disk, memory, or driver errors.

Authentication and audit logs describe who accessed what and what privileged actions were taken. Use them to investigate logins, failed attempts, privilege elevation, account changes, and security relevant configuration changes.

Application logs describe what a specific application is doing. Use them to trace user actions, API calls, validation failures, and business events, and to connect an incident to an app feature or transaction.

Network logs describe connectivity and policy decisions. Use them to understand who talked to what, through which ports, and whether traffic was allowed, denied, redirected, or resolved through DNS.

Normalization and time

  • Normalize key fields such as user, host, src_ip, dst_ip, action, outcome, and reason so searches work across tools
  • Store timestamps in UTC for correlation, and keep the original timezone only for display when needed
  • Verify time sync and log pipelines continuously so you notice drift, gaps, or parsing failures early
Watch out
Why time sync matters
⚠️
Incident timelines depend on event order. If clocks drift, a single login and its follow-on action can look reversed, and correlations across hosts become unreliable.

Retention and integrity

  • Set retention based on investigation needs, then budget storage for the log sources that provide the highest security value
  • Protect high value logs with central storage, least privilege access, and backups or write-once controls where possible
  • Alert on missing hosts, sudden volume drops, or disabled logging so you catch blind spots fast

Signals to watch for

Patterns worth investigating further.
📡
  • A host that stops sending logs or has repeated gaps during business hours
  • A sharp drop in security or audit events while other logs continue normally
  • Unexpected clock changes, repeated NTP failures, or sudden timezone shifts
  • Error and restart spikes that coincide with authentication anomalies

DEEP DIVE

Think in questions, not sources

A good logging plan starts with the questions you expect to answer during incidents. For example: who authenticated, what changed, which process ran, where it connected, and whether it succeeded.

Once you know the questions, map them to sources. System logs cover stability and service activity. Auth and audit logs cover identity and privileged actions. Network logs cover reachability and policy decisions. Application logs capture business and API behavior.

A practical technique: write a one-page incident question list for your environment, then add a log source for each unanswered question.

• Identity questions: who logged in, from where, and with what method

• Change questions: what was installed, modified, or started

• Network questions: what talked to what, and what was blocked

• Data questions: what was accessed, exported, or deleted

Normalization and field mapping

Normalization is how you make different sources searchable in one consistent way. It lets you write one query that works across many log types.

Choose consistent field names for the same idea across sources, such as user, host, process, src_ip, dst_ip, action, and result. Keep the raw event too so you can revisit parsing later.

Treat parsing quality as a security concern. If a key field stops extracting correctly, detections can silently degrade.

• Keep a short list of critical fields per log family and monitor them

• Prefer structured logs when available, but ingest unstructured logs too

Time, integrity, and retention

Time synchronization is a security control. If clocks drift, timelines break and correlations fail. Use a common time source across servers, endpoints, and network devices, and watch for sudden time jumps.

Protect log integrity with access control and immutability where possible. Logs are evidence, so you want visibility into changes, deletions, and collection failures.

Retention is a tradeoff between cost and investigation depth. A common approach is fast searchable storage for recent data, and cheaper archival storage for older data you may still need during a longer incident.

Centralize logs so you can search consistently and reduce the risk of local log deletion on a compromised host.

Operational checks that prevent surprises

Validate the full pipeline: generation, collection, transport, parsing, storage, and search. Many failures happen in the middle due to rate limits, queue drops, or parser errors.

Track simple health signals: log volume trends per source, ingestion delay, parser error counts, and missing critical fields.

Create a small set of synthetic events you can generate on purpose, so you can confirm end-to-end visibility any time you suspect a gap.

Picking sources by investigation use case

Different investigations need different log types. Authentication questions need identity logs. Lateral movement often needs endpoint and remote management logs. Data access questions need application and database logs.

Start with a minimum set that covers identity, endpoint, network perimeter, DNS, and critical applications. Expand based on what you actually run and what threats you care about.

Common pitfalls and how to avoid them

Too much noise is a common failure mode. Collecting everything is not a strategy if no one can search it. Prefer high value events first, then expand with measurement.

Do not rely on a single source for critical conclusions. Use at least two perspectives when possible, such as endpoint plus network, or application plus database.

Standardize on UTC for storage and always preserve the original timestamp, since time zones and daylight changes create confusion during response.

Starter checklist

• Define key incident questions and map each to sources

• Synchronize clocks and store timestamps consistently

• Centralize logs and validate parsing of critical fields

• Protect log access and integrity, then set retention expectations

• Build repeatable searches for common incidents and validate regularly