Cost Optimization Guide
Datadog log filtering: post-ingestion vs pre-ingestion
Where you filter determines both observability quality and monthly cost; placement matters more than rule syntax.
Why this problem exists
Teams often rely solely on downstream pipeline rules, but those rules run only after logs have already been ingested, so the ingestion cost is already incurred.
Noise classification is also applied inconsistently across services, producing uneven log quality and unpredictable bills.
Real cost and impact
Post-ingestion filtering can clean up dashboards, but because part of the bill is driven by ingested volume, filtering after ingestion cannot remove that portion of the cost.
A single noisy API edge service can generate enormous billable volume even when most of its events are discarded later.
Solutions (including alternatives)
- Use Datadog pipelines for enrichment and parsing of logs you intentionally keep.
- Use pre-ingestion controls for high-volume noise that should never be indexed.
- Adopt a shared rule library for status codes, health checks, and bot traffic.
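A shared rule library like the one above can be sketched as a set of small predicates. This is an illustrative sketch only; the field names (`path`, `status`, `user_agent`) and function names are assumptions, not a Datadog API.

```python
import re

# Hypothetical shared noise rules: each predicate flags droppable events
# before they are ever sent for ingestion.
STATIC_ASSET_RE = re.compile(r"\.(css|js|png|jpg|svg|woff2?)$")
BOT_UA_RE = re.compile(r"(bot|crawler|spider)", re.IGNORECASE)

def is_health_check(event: dict) -> bool:
    return event.get("path") in ("/healthz", "/livez", "/readyz")

def is_bot_traffic(event: dict) -> bool:
    return bool(BOT_UA_RE.search(event.get("user_agent", "")))

def is_noisy_success(event: dict) -> bool:
    # Repetitive 200s for static assets carry little operational signal.
    return event.get("status") == 200 and bool(
        STATIC_ASSET_RE.search(event.get("path", ""))
    )

NOISE_RULES = (is_health_check, is_bot_traffic, is_noisy_success)

def should_drop(event: dict) -> bool:
    return any(rule(event) for rule in NOISE_RULES)
```

Keeping rules as data (a tuple of predicates) lets every service import the same list, which addresses the inconsistency problem directly.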
How LogTrim solves it
LogTrim filters high-volume noise before Datadog, lowering ingestion and preserving high-signal events.
Teams can keep operationally useful logs in Datadog and archive full streams to S3.
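The dual-route idea can be sketched in a few lines. This is a minimal illustration of the pattern, not LogTrim's actual API; `is_high_signal`, `route_event`, and the sink lists are hypothetical stand-ins.

```python
def is_high_signal(event: dict) -> bool:
    # Assumed signal definition: server errors or slow requests.
    return event.get("status", 0) >= 500 or event.get("duration_ms", 0) > 1000

def route_event(event: dict, datadog_sink: list, s3_sink: list) -> None:
    s3_sink.append(event)           # full stream is always archived
    if is_high_signal(event):
        datadog_sink.append(event)  # only high-signal events become billable

# Usage: three events, two of which are high-signal.
dd, s3 = [], []
for ev in [{"status": 200}, {"status": 503}, {"status": 200, "duration_ms": 2500}]:
    route_event(ev, dd, s3)
```

The key property is that archiving and indexing are independent decisions: the S3 write happens unconditionally, so dropping an event from Datadog never loses data.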
Example scenario
An ecommerce backend kept all 5xx responses and latency anomalies while dropping repetitive 200s from static asset paths.
The signal-to-noise ratio in dashboards improved immediately.
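The scenario's keep/drop policy can be expressed as a single decision function. The latency threshold, field names, and `/static/` prefix are assumptions made for illustration.

```python
LATENCY_THRESHOLD_MS = 1000  # assumed anomaly cutoff

def should_keep(event: dict) -> bool:
    if event.get("status", 0) >= 500:
        return True   # all 5xx responses retained
    if event.get("duration_ms", 0) > LATENCY_THRESHOLD_MS:
        return True   # latency anomalies retained
    if event.get("status") == 200 and event.get("path", "").startswith("/static/"):
        return False  # repetitive 200s from static asset paths dropped
    return True       # default: keep anything unclassified
```

Note the conservative default: only the explicitly classified noise is dropped, so an unrecognized event shape is still kept.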
Comparison
| Dimension | Datadog Pipelines | LogTrim Pre-ingestion |
|---|---|---|
| Cost control | Limited for ingestion-driven billing | Reduces billable volume before ingestion |
| Rule placement | Inside Datadog | Before Datadog and before indexing |
| Full retention strategy | Requires separate archival plan | Dual-route high signal to Datadog and full stream to S3 |