Ultra-low latency Go HTTP engine with a protocol-aware dual-architecture (io_uring & epoll) designed for high-throughput infrastructure and zero-allocation microservices. It provides a familiar route-group and middleware API similar to Gin and Echo, so teams can adopt it without learning a new programming model.
- 3.3M+ HTTP/2 requests/sec on a single 8-vCPU machine (arm64 Graviton3)
- 590K+ HTTP/1.1 requests/sec — 81% syscall-bound, zero allocations on the hot path
- io_uring and epoll at parity — both engines hit the same throughput
- H2 is 5.7x faster than H1 thanks to stream multiplexing and inline handler execution
- Zero hot-path allocations for both H1 and H2
- Tiered io_uring — auto-selects the best io_uring feature set (multishot accept/recv, provided buffers, SQ poll, fixed files) for your kernel
- Edge-triggered epoll — per-core event loops with CPU pinning
- Adaptive meta-engine — dynamically switches between io_uring and epoll based on runtime telemetry
- SIMD HTTP parser — SSE2 (amd64) and NEON (arm64) with generic SWAR fallback
- HTTP/2 cleartext (h2c) — full stream multiplexing, flow control, HPACK, inline handler execution, zero-alloc HEADERS fast path
- Auto-detect — protocol negotiation from the first bytes on the wire
- Error-returning handlers — `HandlerFunc` returns `error`; structured `HTTPError` for status codes
- Serialization — JSON and XML response methods (`JSON`, `XML`); Protocol Buffers available via `github.com/goceleris/middlewares`; `Bind` auto-detects request format from Content-Type
- net/http compatibility — wrap existing `http.Handler` via `celeris.Adapt()`
- Built-in metrics collector — atomic counters, always-on `Server.Collector().Snapshot()`
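The generic SWAR fallback mentioned above scans several bytes per step using plain integer arithmetic. A minimal sketch of the classic "has-zero" trick, here used to locate the first `\r` in a request line — illustrative only, not celeris's actual parser:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// indexCR returns the index of the first '\r' in b, or -1.
// It scans 8 bytes per iteration with the SWAR has-zero trick.
func indexCR(b []byte) int {
	const (
		lo   = 0x0101010101010101 // 0x01 in every byte lane
		hi   = 0x8080808080808080 // 0x80 in every byte lane
		mask = 0x0d0d0d0d0d0d0d0d // '\r' replicated across lanes
	)
	i := 0
	for ; i+8 <= len(b); i += 8 {
		w := binary.LittleEndian.Uint64(b[i:]) ^ mask // lane becomes 0 where byte == '\r'
		// (w - lo) & ^w & hi sets a lane's high bit iff that lane is zero.
		if m := (w - lo) & ^w & hi; m != 0 {
			return i + bits.TrailingZeros64(m)/8 // lowest lane = earliest byte
		}
	}
	for ; i < len(b); i++ { // scalar tail for the last <8 bytes
		if b[i] == '\r' {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(indexCR([]byte("GET /hello HTTP/1.1\r\nHost: x\r\n"))) // 19
}
```

The same pattern generalizes to any delimiter byte by changing `mask`, which is why a SWAR loop makes a reasonable portable fallback when SSE2/NEON are unavailable.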
```sh
go get github.com/goceleris/celeris@latest
```
Requires Go 1.26+. Linux for io_uring/epoll engines; any OS for the std engine.
```go
package main

import (
	"log"

	"github.com/goceleris/celeris"
)

func main() {
	s := celeris.New(celeris.Config{Addr: ":8080"})
	s.GET("/hello", func(c *celeris.Context) error {
		return c.String(200, "Hello, World!")
	})
	log.Fatal(s.Start())
}
```

```go
s := celeris.New(celeris.Config{Addr: ":8080"})

// Static routes
s.GET("/health", healthHandler)

// Named parameters
s.GET("/users/:id", func(c *celeris.Context) error {
	id := c.Param("id")
	return c.JSON(200, map[string]string{"id": id})
})

// Catch-all wildcards
s.GET("/files/*path", staticFileHandler)

// Route groups
api := s.Group("/api")
api.GET("/items", listItems)
api.POST("/items", createItem)

// Nested groups
v2 := api.Group("/v2")
v2.GET("/items", listItemsV2)
```

Middleware is provided by the goceleris/middlewares module — one subpackage per middleware, individually importable.
```go
import (
	"github.com/goceleris/middlewares/logger"
	"github.com/goceleris/middlewares/recovery"
	"github.com/goceleris/middlewares/cors"
	"github.com/goceleris/middlewares/ratelimit"
)

s := celeris.New(celeris.Config{Addr: ":8080"})
s.Use(recovery.New())
s.Use(logger.New(slog.Default()))

api := s.Group("/api")
api.Use(ratelimit.New(1000))
api.Use(cors.New(cors.Config{
	AllowOrigins: []string{"https://example.com"},
}))
```

See the middlewares repo for the full list: Logger, Recovery, CORS, RateLimit, RequestID, Timeout, BodyLimit, BasicAuth, JWT, CSRF, Session, Metrics, Debug, Compress, and more.
Middleware is just a HandlerFunc that calls c.Next():
```go
func Timing() celeris.HandlerFunc {
	return func(c *celeris.Context) error {
		start := time.Now()
		err := c.Next()
		dur := time.Since(start)
		slog.Info("request", "path", c.Path(), "duration", dur, "error", err)
		return err
	}
}

s.Use(Timing())
```

The error returned by c.Next() is the first non-nil error from any downstream handler. Middleware can inspect, wrap, or swallow the error before returning.
HandlerFunc has the signature func(*Context) error. Returning a non-nil error propagates it up through the middleware chain. If no middleware handles the error, the router's safety net converts it to an HTTP response:
- `*HTTPError` — responds with `Code` and `Message` from the error.
- Any other `error` — responds with `500 Internal Server Error`.
```go
// Return a structured HTTP error
s.GET("/item/:id", func(c *celeris.Context) error {
	item, err := store.Find(c.Param("id"))
	if err != nil {
		return celeris.NewHTTPError(404, "item not found").WithError(err)
	}
	return c.JSON(200, item)
})

// Middleware can intercept errors from downstream handlers
func ErrorLogger() celeris.HandlerFunc {
	return func(c *celeris.Context) error {
		err := c.Next()
		if err != nil {
			slog.Error("handler error", "path", c.Path(), "error", err)
		}
		return err
	}
}
```

```go
s := celeris.New(celeris.Config{
	Addr:            ":8080",
	Protocol:        celeris.Auto,     // HTTP1, H2C, or Auto
	Engine:          celeris.Adaptive, // IOUring, Epoll, Adaptive, or Std
	Workers:         8,
	Objective:       celeris.Latency,  // Latency, Throughput, or Balanced
	ReadTimeout:     30 * time.Second,
	WriteTimeout:    30 * time.Second,
	IdleTimeout:     120 * time.Second,
	ShutdownTimeout: 10 * time.Second, // max wait for in-flight requests (default 30s)
	Logger:          slog.Default(),
})
```

Wrap existing net/http handlers and middleware:
```go
// Wrap http.Handler
s.GET("/legacy", celeris.Adapt(legacyHandler))

// Wrap http.HandlerFunc
s.GET("/func", celeris.AdaptFunc(func(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte("from stdlib"))
}))
```

The bridge buffers the adapted handler's response in memory, capped at 100 MB. Responses exceeding this limit return an error.
| Engine | Platform | Use Case |
|---|---|---|
| `IOUring` | Linux 5.10+ | Lowest latency, highest throughput |
| `Epoll` | Linux | Broad kernel support, proven stability |
| `Adaptive` | Linux | Auto-switch based on telemetry |
| `Std` | Any OS | Development, compatibility, non-Linux deploys |
Use Adaptive (the default on Linux) unless you have a specific reason to pin an engine. On non-Linux platforms, only Std is available.
| Profile | Optimizes For | Key Tuning |
|---|---|---|
| `celeris.Latency` | P99 tail latency | TCP_NODELAY, small batches, SO_BUSY_POLL |
| `celeris.Throughput` | Max RPS | Large CQ batches, write batching |
| `celeris.Balanced` | Mixed workloads | Default settings |
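For context, the Latency profile's key knobs correspond to standard socket options. TCP_NODELAY disables Nagle's algorithm so small writes are flushed immediately; a self-contained illustration using only the standard library (SO_BUSY_POLL needs golang.org/x/sys and a raw fd, omitted here) — this is background on the option itself, not celeris internals:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// demo spins up a loopback listener, enables TCP_NODELAY on the
// accepted connection, and returns the two bytes the server wrote.
func demo() string {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	go func() {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		defer c.Close()
		// Disable Nagle's algorithm: small writes go out immediately,
		// trading bandwidth efficiency for lower tail latency.
		c.(*net.TCPConn).SetNoDelay(true)
		c.Write([]byte("ok"))
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 2)
	if _, err := io.ReadFull(conn, buf); err != nil {
		panic(err)
	}
	return string(buf)
}

func main() {
	fmt.Println(demo()) // ok
}
```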
Use StartWithContext for production deployments. When the context is canceled, the server drains in-flight requests up to ShutdownTimeout (default 30s).
```go
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()

s := celeris.New(celeris.Config{
	Addr:            ":8080",
	ShutdownTimeout: 15 * time.Second,
})
s.GET("/hello", helloHandler)

if err := s.StartWithContext(ctx); err != nil {
	log.Fatal(err)
}
```

The core provides a lightweight metrics collector accessible via Server.Collector():
```go
snap := server.Collector().Snapshot()
fmt.Println(snap.RequestsTotal, snap.ErrorsTotal, snap.ActiveConns)
```

For Prometheus exposition and debug endpoints, use the middlewares/metrics and middlewares/debug packages.
| Feature | io_uring | epoll | std |
|---|---|---|---|
| HTTP/1.1 | yes | yes | yes |
| H2C | yes | yes | yes |
| Auto-detect | yes | yes | yes |
| CPU pinning | yes | yes | no |
| Provided buffers | yes (5.19+) | no | no |
| Multishot accept | yes (5.19+) | no | no |
| Multishot recv | yes (6.0+) | no | no |
| Zero-alloc HEADERS | yes | yes | no |
| Inline H2 handlers | yes | yes | no |
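Auto-detect works by sniffing the first bytes a client sends: an HTTP/2 cleartext connection always begins with the fixed connection preface from RFC 9113, while an HTTP/1.x request begins with a method token. A simplified sketch of that classification — celeris's real detector lives in protocol/detect; this is illustrative only:

```go
package main

import (
	"bytes"
	"fmt"
)

// http2Preface is the fixed client connection preface defined by RFC 9113.
var http2Preface = []byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")

// sniffProtocol classifies a connection from its initial bytes.
// It returns "h2c", "http/1.1", or "unknown" when more bytes are needed.
func sniffProtocol(initial []byte) string {
	if bytes.HasPrefix(initial, http2Preface) {
		return "h2c"
	}
	// A short read that is still a prefix of the preface is ambiguous:
	// the detector should read more before deciding.
	if len(initial) < len(http2Preface) && bytes.HasPrefix(http2Preface, initial) {
		return "unknown"
	}
	return "http/1.1"
}

func main() {
	fmt.Println(sniffProtocol([]byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"))) // h2c
	fmt.Println(sniffProtocol([]byte("GET /hello HTTP/1.1\r\n")))         // http/1.1
	fmt.Println(sniffProtocol([]byte("PRI * HT")))                        // unknown
}
```

Because the preface is fixed and never a valid HTTP/1.x request line, the decision is unambiguous once enough bytes have arrived.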
Cloud benchmarks on arm64 c7g.2xlarge (8 vCPU Graviton3), separate server and client machines:
| Protocol | Engine | Throughput |
|---|---|---|
| HTTP/2 | epoll | 3.33M rps |
| HTTP/2 | io_uring | 3.30M rps |
| HTTP/1.1 | epoll | 590K rps |
| HTTP/1.1 | io_uring | 590K rps |
- io_uring and epoll within 1% of each other on both protocols
- H2 is 5.7x faster than H1 (stream multiplexing + inline handlers)
- Zero allocations on the hot path for both H1 and H2
- All 3 engines within 0.3% of each other in adaptive mode
Methodology: 14 server configurations (io_uring/epoll/std x latency/throughput/balanced x H1/H2) tested with wrk (H1, 16384 connections) and h2load (H2, 128 connections x 128 streams) in 9-pass interleaved runs. Full results and reproduction scripts are in the benchmarks repo.
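Zero-allocation claims like the ones above are verifiable with the standard library's testing.AllocsPerRun. A generic sketch of the technique — the hot path here is a stand-in that reuses a preallocated buffer, not celeris code:

```go
package main

import (
	"fmt"
	"testing"
)

// buf is reused across calls so the hot path performs no allocations.
var buf = make([]byte, 0, 512)

// respond simulates a zero-allocation response write: it appends into
// the preallocated buffer instead of allocating per request.
func respond(body string) []byte {
	buf = buf[:0]
	buf = append(buf, "HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\n"...)
	buf = append(buf, body...)
	return buf
}

func main() {
	// Runs the closure 1000 times and reports the average number of
	// heap allocations per run; a true zero-alloc path reports 0.
	allocs := testing.AllocsPerRun(1000, func() {
		respond("Hello, World!")
	})
	fmt.Println(allocs) // 0
}
```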
| Type | Package | Description |
|---|---|---|
| `Server` | `celeris` | Top-level entry point; owns config, router, engine |
| `Config` | `celeris` | Server configuration (addr, engine, protocol, timeouts) |
| `Context` | `celeris` | Per-request context with params, headers, body, response methods |
| `HandlerFunc` | `celeris` | `func(*Context) error` — handler/middleware signature |
| `HTTPError` | `celeris` | Structured error carrying HTTP status code and message |
| `RouteGroup` | `celeris` | Group of routes sharing a prefix and middleware |
| `Route` | `celeris` | Opaque handle to a registered route |
| `Collector` | `observe` | Lock-free request metrics aggregator |
| `Snapshot` | `observe` | Point-in-time copy of all collected metrics |
```mermaid
block-beta
columns 3
A["celeris (public API)"]:3
B["adaptive"]:1 C["observe"]:2
E["engine/iouring"]:1 F["engine/epoll"]:1 G["engine/std"]:1
H["protocol/h1"]:1 I["protocol/h2"]:1 J["protocol/detect"]:1
K["probe"]:1 L["resource"]:1 M["internal"]:1
```
- Go 1.26+
- Linux for io_uring and epoll engines
- Any OS for the std engine
- Dependencies: `golang.org/x/sys`, `golang.org/x/net`
```
adaptive/   Adaptive meta-engine (Linux)
engine/     Engine interface + implementations (iouring, epoll, std)
internal/   Shared internals (conn, cpumon, platform, sockopts)
observe/    Lightweight metrics collector (atomic counters, Snapshot)
probe/      System capability detection
protocol/   Protocol parsers (h1, h2, detect)
resource/   Configuration, presets, objectives
test/       Conformance, spec compliance, integration, benchmarks
```
```sh
go install github.com/magefile/mage@latest  # one-time setup
mage build   # build all targets
mage test    # run tests with race detector
mage lint    # run linters
mage check   # full verification: lint + test + spec + build
```

Pull requests should target main.