Docker Compose-based local observability stack for metrics, logs, and dashboards.
- `node-exporter`: host CPU/memory/disk/network metrics exporter
- `prometheus`: scrapes and stores metrics (`prometheus`, `node-exporter`)
- `fluent-bit`: tails app logs from `sample-logs/*.log` and sends them to Elasticsearch
- `elasticsearch`: stores and searches ingested logs
- `grafana`: visualizes metrics/logs with provisioned datasources and a dashboard
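A minimal sketch of how these services could be wired together in `docker-compose.yml` (image tags, port mappings, and mount paths below are illustrative assumptions; the repository's actual compose file is authoritative):

```yaml
# Sketch only: image versions, ports, and host paths are assumptions.
services:
  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "9090:9090"

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      # How ES_HEAP_SIZE is applied is an assumption.
      - "ES_JAVA_OPTS=-Xms${ES_HEAP_SIZE:-512m} -Xmx${ES_HEAP_SIZE:-512m}"
    ports:
      - "9200:9200"

  fluent-bit:
    image: fluent/fluent-bit:latest
    volumes:
      - ./fluent-bit/:/fluent-bit/etc/:ro       # Fluent Bit config
      - ./sample-logs/:/var/log/app/:ro         # tailed log files
      - fluentbit_state:/fluent-bit/state       # tail DB (see notes below)
    depends_on:
      - elasticsearch

  grafana:
    image: grafana/grafana:latest
    env_file: .env                              # admin credentials from .env
    ports:
      - "3300:3000"                             # exposed on localhost:3300

volumes:
  fluentbit_state:
```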
- Metrics:
  - source: `node-exporter` and Prometheus self-metrics
  - storage/query: Prometheus
  - visualization: Grafana (Prometheus datasource)
- Logs:
  - source: `sample-logs/app.log` (JSON lines)
  - pipeline: Fluent Bit -> Elasticsearch index `logs-local-sample-app-*`
  - visualization: Grafana Explore (Elasticsearch datasource)
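As a sketch of the metrics path, Prometheus might scrape itself and `node-exporter` with a config along these lines (job names and the scrape interval are assumptions; the provisioned `prometheus.yml` in the repository is authoritative):

```yaml
# prometheus.yml sketch; targets assume the Compose service names above.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus        # Prometheus self-metrics
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: node-exporter     # host CPU/memory/disk/network metrics
    static_configs:
      - targets: ["node-exporter:9100"]
```

The provisioned Grafana datasources could look roughly like the following (field values are assumptions; depending on the Grafana version, the Elasticsearch index is set via `jsonData.index` in newer releases or the older `database` field):

```yaml
# grafana datasource provisioning sketch
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
  - name: Elasticsearch
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200
    jsonData:
      index: logs-local-sample-app-*
      timeField: "@timestamp"
```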
- Create `.env` from `.env.sample` and set Grafana admin credentials:

  ```
  GF_SECURITY_ADMIN_USER=<your-admin-user>
  GF_SECURITY_ADMIN_PASSWORD=<your-strong-password>
  ```

  Optional:

  ```
  ES_HEAP_SIZE=512m
  ```

  If Grafana is behind a reverse proxy subpath (for example `/grafana`), also set:

  ```
  GRAFANA_ROOT_URL=https://<host>/grafana/
  GRAFANA_SERVE_FROM_SUB_PATH=true
  ```

- Start services:

  ```
  docker compose up -d
  ```

- Verify access (a quick curl check is sketched after these steps):
  - Grafana: http://localhost:3300
  - Prometheus: http://localhost:9090

- Check log visibility in Grafana: open Grafana and sign in with the `.env` credentials. Go to Explore -> datasource `Elasticsearch` -> query type Logs. Run the Lucene query `*` with a time range that includes the sample timestamps.
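If you prefer to sanity-check the endpoints from a shell before opening the UI, something like the following should work (the Grafana and Prometheus ports come from the URLs above; the Elasticsearch port 9200 is an assumption based on its default):

```sh
# Prometheus readiness probe
curl -s http://localhost:9090/-/ready

# Grafana health (returns JSON with "database": "ok" when up)
curl -s http://localhost:3300/api/health

# Elasticsearch: list the log indices created by Fluent Bit
curl -s 'http://localhost:9200/_cat/indices/logs-local-sample-app-*?v'
```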
Sample log timestamps in `sample-logs/app.log`:

- `2026-02-12T10:00:00.000Z`
- `2026-02-12T10:00:10.000Z`
- Fluent Bit tails `sample-logs/*.log` via the container path `/var/log/app/*.log`
- The parser expects JSON logs with `@timestamp` in `%Y-%m-%dT%H:%M:%S.%LZ` format
- Logs are written to the Elasticsearch index pattern `logs-local-sample-app-*`
- Tail state DB path: `/fluent-bit/state/tail.db` (persisted in the `fluentbit_state` volume to reduce duplicate ingestion after restarts)
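The notes above correspond roughly to a Fluent Bit configuration of the following shape (a sketch only; the tag, parser name, and exact output options are assumptions, and the repository's Fluent Bit config files are authoritative):

```ini
# fluent-bit.conf sketch
[INPUT]
    Name    tail
    Path    /var/log/app/*.log
    DB      /fluent-bit/state/tail.db
    Parser  app_json
    Tag     sample-app

[OUTPUT]
    Name               es
    Match              sample-app
    Host               elasticsearch
    Port               9200
    Logstash_Format    On
    Logstash_Prefix    logs-local-sample-app
    Suppress_Type_Name On

# parsers.conf sketch
[PARSER]
    Name        app_json
    Format      json
    Time_Key    @timestamp
    Time_Format %Y-%m-%dT%H:%M:%S.%LZ
```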
Required JSON keys: `@timestamp`, `level`, `message`
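A minimal log line satisfying these keys might look like the following (the `level` and `message` values are illustrative):

```json
{"@timestamp": "2026-02-12T10:00:00.000Z", "level": "info", "message": "sample log line"}
```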
Stop the stack:

```
docker compose down
```

Remove persisted data volumes:

```
docker compose down -v
```