Cost Optimization Guide

Log sampling vs filtering: what to use and when

Sampling and filtering solve different problems. Treating them as interchangeable leads to blind spots or excess cost.

Why this problem exists

Some teams sample everything to cut volume and lose rare but important edge-case events.

Other teams filter too aggressively and remove useful operational context.

Real cost and impact

Bad sampling choices can hide incidents and increase MTTR.

Bad filtering choices cut both ways: rules that are too loose preserve noise and keep ingestion expensive, while rules that are too aggressive drop context you need during incidents.

Solutions (including alternatives)

  • Use filtering for deterministic drops such as known-noise health checks or static asset requests.
  • Use sampling for high-volume success traffic where trends matter more than every event.
  • Use both together: filter obvious noise first, then sample remaining high-frequency low-risk events.
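The combined approach above can be sketched as a single pre-ingestion decision function. This is an illustrative sketch, not LogTrim's API; the path list, field names, and 10% rate are assumptions chosen for the example:

```python
import random

# Hypothetical known-noise endpoints; in practice these come from
# analyzing your own traffic.
NOISE_PATHS = {"/healthz", "/favicon.ico"}
SAMPLE_RATE = 0.10  # keep 10% of high-volume, low-risk events

def should_ingest(log: dict) -> bool:
    """Decide whether a log event is ingested.

    Order matters: deterministic filters run first, so sampling
    only applies to traffic that survived filtering.
    """
    # 1. Filter: deterministic drop of known noise.
    if log.get("path") in NOISE_PATHS:
        return False
    # 2. Errors are high-signal; never sample them away.
    if log.get("status", 0) >= 400:
        return True
    # 3. Sample: probabilistic keep for remaining success traffic.
    return random.random() < SAMPLE_RATE
```

Running filters before sampling is the important design choice: it keeps the sample rate meaningful, because the 10% applies only to traffic you have already decided is worth counting.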

How LogTrim solves it

LogTrim supports rule-based filtering plus sampling policies in the same pre-ingestion path.

Teams can preserve required visibility while controlling ingest growth.

Example scenario

A team filtered known bot noise and sampled the remaining 200 responses at 10%, while keeping all 4xx and 5xx logs.

They kept trend fidelity and reduced ingestion volume.

Reduce your costs with LogTrim

Start with high-noise categories, keep high-signal logs in Datadog, and archive the full log stream to S3 for long-term retention.