ptrxwsmitt/techstack-bench
Web Framework Benchmark

Note: These implementations are not fully optimized for the constrained environment (1 CPU, 512 MB memory). Results may differ significantly with more resources or further tuning of runtime parameters, connection pools, garbage collectors, etc.

Benchmark comparison of 9 web framework implementations of an identical Shopping List REST API, all backed by the same PostgreSQL database. The benchmark measures latency, throughput, cold-start time, and memory and CPU usage, with every service running under identical conditions.

| Service | Port | Stack |
|---|---|---|
| rust-axum | 8081 | Rust — Axum 0.8 + sqlx (raw SQL) |
| spring-graal-jpa | 8082 | Kotlin — Spring Boot 3.5 + JPA, GraalVM native |
| spring-graal-jdbc | 8084 | Kotlin — Spring Boot 3.5 + JdbcTemplate, GraalVM native |
| spring-jpa | 8085 | Kotlin — Spring Boot 3.5 + JPA, standard JVM (baseline) |
| spring-graal-reactive | 8086 | Kotlin — Spring Boot 3.5 + WebFlux/R2DBC + coroutines, GraalVM native |
| python-fastapi | 8083 | Python 3.13 — FastAPI + asyncpg + orjson |
| go-fiber | 8087 | Go 1.24 — Fiber v3 + pgx v5 |
| dotnet-mvc | 8088 | C# — ASP.NET Core 9 MVC + Npgsql, JIT |
| dotnet-aot | 8089 | C# — ASP.NET Core 9 Minimal API + Npgsql, NativeAOT |

All services expose the same API (health check, CRUD for lists/members/items) and run under identical Docker resource limits: 1 CPU, 512 MB memory, connection pool of 10.
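In Docker Compose, CPU and memory caps like these can be declared per service. The fragment below is an illustrative sketch only: the service name follows the table above, but the exact keys in the repository's docker-compose.yml may differ, and `DB_POOL_SIZE` is a hypothetical variable name (the pool of 10 is actually configured per application).

```yaml
services:
  rust-axum:                # one of the 9 services; same resource limits for all
    deploy:
      resources:
        limits:
          cpus: "1.0"       # 1 CPU
          memory: 512M      # 512 MB memory cap
    environment:
      DB_POOL_SIZE: "10"    # hypothetical name; pool size lives in each app's config
```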

Quick Start

Prerequisites

  • Docker + Docker Compose
  • Rust toolchain (for the benchmark client)

Build

./build.sh                          # build everything (DB, bench client, all 9 service images)
./build.sh rust-axum go-fiber       # build only specific services

This starts Postgres, runs dbmate migrations, builds the bench client (cargo build --release), and builds the selected Docker images.

Run Benchmarks

./run.sh                            # benchmark all 9 services
./run.sh rust-axum go-fiber         # benchmark only specific services
./run.sh --merge                    # merge existing per-target results without re-running

Each target runs in full isolation: all other app containers are stopped before the target starts. The flow per target is:

  1. Stop all 9 app services
  2. Reset the database (truncate + re-seed)
  3. Cold-start the target container
  4. Run the benchmark workload
  5. Collect CPU/memory metrics
  6. Write a per-target result to bench/results/

After all targets complete, results are merged into docs/report.json and docs/index.html.

Environment Variables

| Variable | Default | Description |
|---|---|---|
| NUM_TASKS | 5000 | Total number of task sequences to run |
| MAX_CONCURRENCY | 100 | Max parallel tasks at any given time |

Example:

NUM_TASKS=1000 MAX_CONCURRENCY=50 ./run.sh rust-axum

Benchmark Workload

Each task executes a 9-step sequence exercising every endpoint:

| Step | Method | Action |
|---|---|---|
| 1 | POST | Create shopping list |
| 2 | POST | Add member "Alice" |
| 3 | POST | Add member "Bob" |
| 4 | POST | Add item "Milk" (qty=2) |
| 5 | POST | Add item "Bread" (qty=1) |
| 6 | PATCH | Update Milk position |
| 7 | PATCH | Mark Milk as "bought" by Alice |
| 8 | DELETE | Delete Bread |
| 9 | GET | Fetch final list state |

With 5,000 tasks at 100 concurrency, this produces 45,000 HTTP requests per target. Every response is validated for correctness (status codes, UUIDs, field values, array cardinalities).
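The task/concurrency model can be sketched with a semaphore that caps in-flight tasks while each task runs its fixed 9-step sequence. This is an illustrative Python sketch, not the actual Rust bench client; the HTTP calls are replaced by a counter.

```python
import asyncio

STEPS_PER_TASK = 9  # the 9-step sequence from the table above

async def run_task(task_id: int, sem: asyncio.Semaphore) -> int:
    """Run one task sequence under the concurrency cap; return requests issued."""
    async with sem:
        # In the real client each step is an HTTP call; here we only count them.
        for _step in range(STEPS_PER_TASK):
            await asyncio.sleep(0)  # stand-in for an HTTP request
        return STEPS_PER_TASK

async def run_workload(num_tasks: int, max_concurrency: int) -> int:
    """Launch all tasks; the semaphore keeps at most max_concurrency running."""
    sem = asyncio.Semaphore(max_concurrency)
    results = await asyncio.gather(*(run_task(i, sem) for i in range(num_tasks)))
    return sum(results)

total = asyncio.run(run_workload(num_tasks=5000, max_concurrency=100))
print(total)  # 5000 tasks x 9 steps = 45000 requests
```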

What Gets Measured

  • Cold-start time — from docker compose start to first healthy response
  • Throughput — requests/second over the full workload
  • Latency percentiles — p50, p95, p99, max
  • CPU usage — average and peak % (sampled via docker stats)
  • Memory usage — average and peak MB

Reports

After a run, reports are written to:

  • bench/results/<service>_<timestamp>.json — per-target result (one file per run)
  • docs/report.json — combined report (all targets)
  • docs/index.html — interactive HTML dashboard with charts (served via GitHub Pages)

You can re-merge at any time without re-running benchmarks:

./run.sh --merge

This picks the latest result per service from bench/results/ and produces the combined report.
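Given the `<service>_<timestamp>.json` naming above, picking the latest result per service reduces to grouping by service and keeping the greatest timestamp. The sketch below assumes timestamps that sort lexicographically (e.g. `20240102-0900`); the actual merge step in run.sh may work differently.

```python
def latest_per_service(filenames: list[str]) -> dict[str, str]:
    """Map each service to its most recent result file, by filename timestamp."""
    latest: dict[str, tuple[str, str]] = {}  # service -> (timestamp, filename)
    for name in filenames:
        # split "<service>_<timestamp>.json" at the last underscore
        service, _, timestamp = name.removesuffix(".json").rpartition("_")
        if not service:
            continue  # skip files that don't match the naming scheme
        if service not in latest or timestamp > latest[service][0]:
            latest[service] = (timestamp, name)
    return {svc: fname for svc, (_ts, fname) in latest.items()}

files = [
    "rust-axum_20240101-1200.json",
    "rust-axum_20240102-0900.json",
    "go-fiber_20240101-1300.json",
]
print(latest_per_service(files))
# {'rust-axum': 'rust-axum_20240102-0900.json', 'go-fiber': 'go-fiber_20240101-1300.json'}
```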

Project Structure

.
├── build.sh                  # Build script (DB + bench client + Docker images)
├── run.sh                    # Benchmark runner (isolated per-target execution)
├── docker-compose.yml        # All services + Postgres + monitoring stack
├── db/migrations/            # dbmate SQL migrations
├── bench/
│   ├── client/               # Rust async benchmark client
│   └── results/              # Per-target JSON results (gitignored)
├── docs/
│   ├── index.html            # HTML dashboard (GitHub Pages)
│   └── report.json           # Combined report
├── rust-axum/
├── spring-graal-jpa/
├── spring-graal-jdbc/
├── spring-jpa/
├── spring-graal-reactive/
├── python-fastapi/
├── go-fiber/
├── dotnet-mvc/
└── dotnet-aot/

Monitoring (Optional)

The Docker Compose file includes an optional monitoring stack:

  • cAdvisor on port 8888
  • Prometheus on port 9090 (scrapes every 1s)
  • Grafana on port 3000 (anonymous admin access)

Start it with:

docker compose up -d cadvisor prometheus grafana
