Cost Optimization Guide
How to reduce Datadog log ingestion costs with filtering, sampling, and routing
Log reduction works when you separate critical events from repetitive noise and enforce that policy consistently.
Why this problem exists
Most organizations lack a formal logging policy, so every team logs differently.
Without centralized controls, temporary debug logs become permanent ingest overhead.
Real cost and impact
Uncontrolled log growth can outpace infrastructure growth and create recurring budget pressure.
High-volume services pay the largest penalty for noisy defaults.
Solutions (including alternatives)
- Step 1: classify logs into keep, sample, and drop categories by operational value.
- Step 2: apply route- and status-based filtering before logs reach Datadog ingestion.
- Step 3: route full raw logs to S3 and keep only high-signal subsets in Datadog.
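The classify/sample/drop steps above can be sketched in a few lines of Python. This is a minimal illustration, not LogTrim's actual implementation: the rule table, service names, and event fields are assumptions, and sampling is done deterministically by hashing a correlation ID so all events for one request are kept or dropped together.

```python
import hashlib

# Hypothetical rule table keyed by (service, status). Entries map to an
# action ("keep", "sample", "drop") and a sample rate. Names are illustrative.
RULES = {
    ("checkout", "ERROR"):   ("keep", 1.0),
    ("checkout", "INFO"):    ("sample", 0.10),  # keep roughly 10% of INFO logs
    ("healthcheck", "INFO"): ("drop", 0.0),
}
DEFAULT = ("sample", 0.25)  # assumed fallback policy for unclassified logs


def classify(service: str, status: str) -> tuple[str, float]:
    """Return the (action, sample_rate) policy for a log event."""
    return RULES.get((service, status), DEFAULT)


def should_forward(event: dict) -> bool:
    """Decide whether an event is forwarded to Datadog.

    Sampling is deterministic per trace_id: the ID is hashed into [0, 1)
    and compared to the rate, so a given request is sampled consistently.
    """
    action, rate = classify(event["service"], event["status"])
    if action == "keep":
        return True
    if action == "drop":
        return False
    digest = hashlib.sha256(event["trace_id"].encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < rate
```

In the routing pattern from Step 3, every raw event would still be written to S3 unconditionally; `should_forward` gates only the Datadog branch, so the high-signal subset stays queryable while the full record survives cheaply.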
How LogTrim solves it
LogTrim lets teams enforce these rules centrally and iterate safely.
Because filtering happens pre-ingestion, savings appear as soon as noise is removed.
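Because savings track removed volume directly, the impact of a rule set can be estimated before rollout from per-category daily volumes. The figures below are illustrative assumptions, not benchmarks:

```python
# Assumed daily log volume in GB per policy bucket (illustrative numbers).
daily_gb = {"keep": 40.0, "sample": 150.0, "drop": 60.0}
sample_rate = 0.10  # keep 10% of the "sample" bucket

before = sum(daily_gb.values())
after = daily_gb["keep"] + daily_gb["sample"] * sample_rate  # "drop" bucket -> 0
reduction = 1 - after / before

print(f"ingest: {before:.0f} GB/day -> {after:.0f} GB/day ({reduction:.0%} less)")
# prints: ingest: 250 GB/day -> 55 GB/day (78% less)
```

Running this kind of estimate per service makes it easy to prioritize the noisiest services first, which is where pre-ingestion filtering pays off fastest.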
Example scenario
A platform team rolled out keep/sample/drop rules service by service over two weeks.
They reduced ingest while preserving alerts, dashboards, and on-call workflows.