Cost Optimization Guide

Datadog log filtering: post-ingestion vs pre-ingestion

Filtering strategy determines both observability quality and monthly cost. Placement matters more than syntax.

Why this problem exists

Teams often rely only on downstream pipeline rules, but those rules run after logs arrive, so every event is still ingested and billed.

Noise classification is usually applied inconsistently across services, leading to uneven log quality.

Real cost and impact

Post-ingestion filtering can clean up dashboards, but every filtered event has already been ingested and billed, so ingestion-driven cost remains.

A noisy API edge can generate huge billable volume even if most events are discarded later.
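To make the cost pressure concrete, here is a minimal back-of-the-envelope sketch. All figures are illustrative assumptions, not Datadog list prices:

```python
# Illustrative cost sketch: every figure below is an assumption, not real pricing.
GB_PER_DAY = 500      # hypothetical volume from a noisy API edge
NOISE_FRACTION = 0.9  # share of events that are repetitive noise
PRICE_PER_GB = 0.10   # hypothetical ingestion price, USD per GB
DAYS = 30

# Post-ingestion filtering pays for the full stream; pre-ingestion pays only
# for what survives the filter.
unfiltered = GB_PER_DAY * DAYS * PRICE_PER_GB
pre_filtered = GB_PER_DAY * (1 - NOISE_FRACTION) * DAYS * PRICE_PER_GB

print(f"unfiltered monthly ingestion:   ${unfiltered:,.2f}")
print(f"pre-filtered monthly ingestion: ${pre_filtered:,.2f}")
```

Under these assumed numbers, dropping 90% of the noise before ingestion cuts the monthly ingestion bill by the same 90%.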

Solutions (including alternatives)

  • Use Datadog pipelines for enrichment and parsing of logs you intentionally keep.
  • Use pre-ingestion controls for high-volume noise that should never be indexed.
  • Adopt a shared rule library for status codes, health checks, and bot traffic.
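One pre-ingestion control available today is the Datadog Agent's log processing rules, which drop matching lines on the host before they are shipped. A minimal sketch, assuming an nginx-style access log; the file path, service name, and pattern are placeholders:

```yaml
# conf.d/nginx.d/conf.yaml (path and pattern are illustrative)
logs:
  - type: file
    path: /var/log/nginx/access.log
    service: api-edge
    source: nginx
    log_processing_rules:
      - type: exclude_at_match
        name: drop_health_checks
        # Drops health-check requests before the Agent ships them
        pattern: 'GET /healthz'
```

Rules like this pair well with a shared library: the same patterns for health checks and bot traffic can be reused across every service's Agent configuration.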

How LogTrim solves it

LogTrim filters high-volume noise before Datadog, lowering ingestion and preserving high-signal events.

Teams can keep operationally useful logs in Datadog and archive full streams to S3.

Example scenario

An ecommerce backend kept all 5xx and latency anomalies while dropping repetitive 200s from static asset paths.
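The keep/drop rule in this scenario can be sketched as a small predicate. The field names (`status`, `latency_ms`, `path`), the path prefixes, and the 500 ms threshold are assumptions for illustration, not LogTrim's actual API:

```python
STATIC_PREFIXES = ("/static/", "/assets/")  # hypothetical static-asset paths
LATENCY_THRESHOLD_MS = 500                  # hypothetical anomaly threshold

def keep(event: dict) -> bool:
    """Return True if the log event should be forwarded to Datadog."""
    status = event.get("status", 0)
    if status >= 500:
        return True                         # always keep 5xx errors
    if event.get("latency_ms", 0) >= LATENCY_THRESHOLD_MS:
        return True                         # keep latency anomalies
    if status == 200 and event.get("path", "").startswith(STATIC_PREFIXES):
        return False                        # drop repetitive static-asset 200s
    return True                             # default: keep everything else
```

For example, `keep({"status": 200, "path": "/static/app.js", "latency_ms": 12})` returns `False`, while a 503 on any path or a slow 200 on an API path returns `True`.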

The signal-to-noise ratio in dashboards improved immediately.

Comparison

Side-by-side view of the trade-offs for this use case.
Dimension               | Datadog Pipelines                    | LogTrim Pre-ingestion
Cost control            | Limited for ingestion-driven billing | Reduces billable volume before ingestion
Rule placement          | Inside Datadog                       | Before Datadog and before indexing
Full retention strategy | Requires separate archival plan      | Dual-routes high-signal logs to Datadog and the full stream to S3

Reduce your costs with LogTrim

Start with high-noise categories, keep high-signal logs in Datadog, and archive full retention in S3.