Cost Optimization Guide
Why Datadog can feel expensive at scale
Datadog Log Management is powerful, but pricing is tied to how much data you ingest and index, so unfiltered traffic drives costs up quickly.
Why this problem exists
Most teams ingest logs first and optimize later, so low-value events are billed like high-value events.
Microservices and background jobs create many repetitive logs that are rarely queried.
Real cost and impact
When log volume doubles, ingestion cost often doubles as well unless filtering is already in place.
Cost spikes are common during release incidents because retry storms generate high-volume noise.
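The linear scaling above can be sketched with illustrative numbers. The per-GB rate here is a made-up placeholder for the arithmetic, not Datadog's actual price list:

```python
# Illustrative ingestion-cost arithmetic; $0.10/GB is a placeholder
# rate, not a real Datadog price.
RATE_PER_GB = 0.10

def monthly_cost(gb_ingested: float, rate: float = RATE_PER_GB) -> float:
    """Without filtering, billable cost scales linearly with volume."""
    return gb_ingested * rate

baseline = monthly_cost(10_000)  # 10 TB/month of ingested logs
doubled = monthly_cost(20_000)   # a retry storm doubles the volume
print(baseline, doubled)         # prints: 1000.0 2000.0
```

Because the relationship is linear, any volume you drop before ingestion translates one-for-one into cost you do not pay.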
Solutions (including alternatives)
- Treat log volume as a budgeted resource with explicit keep/drop policy ownership.
- Use pre-ingestion filtering to reduce billable bytes before logs reach Datadog.
- Keep complete retention in S3 so audits and forensics still have full coverage.
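A minimal sketch of a pre-ingestion keep/drop policy, assuming a hypothetical rule set keyed on log level with a sampling rate for repetitive events. The rule names, levels, and thresholds are illustrative, not any specific product's API:

```python
import random

# Hypothetical policy: drop DEBUG outright, sample INFO at 5%,
# always ship WARN and above. Thresholds are illustrative.
SAMPLE_RATES = {"DEBUG": 0.0, "INFO": 0.05}

def should_ship(log: dict, rng=random) -> bool:
    """Decide whether a log event becomes billable Datadog traffic."""
    level = log.get("level", "INFO").upper()
    rate = SAMPLE_RATES.get(level)
    if rate is None:               # WARN, ERROR, etc.: always keep
        return True
    return rng.random() < rate     # probabilistic sampling

logs = [{"level": "DEBUG"}, {"level": "ERROR"}, {"level": "INFO"}]
shipped = [entry for entry in logs if should_ship(entry)]
```

Events that fail the check would still be written to S3 in full, so the keep/drop decision affects only the billable stream, not retention.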
How LogTrim solves it
LogTrim sits in front of Datadog and enforces filtering, sampling, and masking rules in real time.
This keeps Datadog focused on high-value events while S3 stores full history cheaply.
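The masking step can be sketched as below. The regex and the redaction token are assumptions for illustration, not LogTrim's actual rule syntax:

```python
import re

# Illustrative PII rule: redact email addresses before logs
# leave the pipeline. Pattern and token are assumptions.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_message(message: str) -> str:
    """Replace email addresses with a redaction token."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", message)

masked = mask_message("login failed for alice@example.com")
# -> "login failed for [REDACTED_EMAIL]"
```

Masking at this stage means sensitive values never reach the downstream index at all, which is simpler than scrubbing them after ingestion.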
Example scenario
A B2B API platform cut its Datadog indexed volume after moving low-value request logs to S3.
The team maintained alert quality while reducing monthly observability spend.