33 changes: 33 additions & 0 deletions website/docs/About/Changelog.md
@@ -16,6 +16,39 @@
> On 6/21/2025, Eth Docker's repository name changed. Everything should work as it did.
> If you do wish to manually update your local reference, run `git remote set-url origin https://github.com/ethstaker/eth-docker.git`

## v26.3.0 2026-03-02

*This is an optional release*

**Breaking changes**

- Requires Geth `v1.17.0` or later when using the built-in Grafana via `grafana.yml` or `grafana-rootless.yml`

**Changes**
- `./ethd prune-history` now offers a menu of expiry options, depending on which options the chosen client supports. These range from "pre-merge"
  to "pre-cancun", "rolling", and even "aggressive".
- Support `MAX_BLOBS` with Geth and Nimbus EL. `./ethd config` will attempt to compute `MAX_BLOBS` based on upload bandwidth. This
  can allow bandwidth-constrained nodes to still build blocks locally.
- Initial support for Nimbus Verified Proxy. This is useful when expiring more than pre-merge history while also needing receipts,
e.g. when running RocketPool, Nodeset or SSV. Note this client is still in alpha.
- Geth sends traces to Tempo by default, when using `grafana.yml` or `grafana-rootless.yml`
- Support EraE file import with Geth. Note that there aren't many EraE files yet; this functionality requires more testing.
- Remove Era/Era1 import from Nimbus EL. Only support EraE going forward.
- Support Reth `v1.11.0` and later
- Support optional pre- and post-update hooks when running `./ethd update`. The optional files `pre-ethd-update.sh` and/or `post-ethd-update.sh`
  are executed just before and just after `./ethd update`, and should be bash scripts. Thanks @erl-100!
- Remove deprecated `--in-process-validators=false` from Nimbus
- All Dockerfiles explicitly add `adduser` and `bash`, even when these already ship with the client image
- From this release, Eth Docker uses a calendar versioning scheme
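
As a minimal sketch of the new update hooks, a `post-ethd-update.sh` placed in the Eth Docker directory could look like this; the logging action is just an illustrative example, not something Eth Docker requires:

```shell
#!/usr/bin/env bash
# Hypothetical post-update hook, illustrative only: Eth Docker runs a file named
# post-ethd-update.sh, if present, right after `./ethd update` completes.
set -euo pipefail

# Example action: keep a simple log of when the stack was last updated.
echo "Eth Docker updated at $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> update-history.log
```

A `pre-ethd-update.sh` works the same way, running just before the update.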

**Bug fixes**
- Fixed a bug that kept Lighthouse from starting when using the new graffiti append option. Thanks @victorelec14!
- Fixed a bug in the Lighthouse jwtsecret ownership check. Thanks @Olexandr88!
- Ensure `ping` utility is installed before testing IPv6 connectivity
- Fixed a bug that broke Geth telemetry. Thanks @marcovc!
- Remove a duplicate `gosu` install from the Nimbus-EL Dockerfile


## v2.19.1.0 2026-02-10

*This is an optional release. It is required when using Lodestar `v1.39.0` or later*
48 changes: 48 additions & 0 deletions website/docs/Usage/Advanced/RPCProxy.md
@@ -0,0 +1,48 @@
---
title: "RPC Proxy"
sidebar_position: 10
sidebar_label: RPC Proxy
---

## Verified RPC Proxy

When expiring more than pre-merge history and running protocols such as RocketPool, Nodeset or SSV, the receipts
needed for these protocols are missing from the local execution layer. The `eth_getLogs` calls for these receipts
will fail. This issue will become even more pronounced when validators can run without an execution layer at all,
and just consume proofs instead.

The obvious solution is to use a third-party RPC provider for these queries, but that requires trusting the provider.

A "verified RPC proxy" can solve this, by checking against a trusted block root.

### Setup

- Make an account with Alchemy or any other provider that supports `eth_getProof`; Infura does not as of March 2026.
  A free account should be fine for occasional receipts queries.
- Add `nimbus-vp.yml` to `COMPOSE_FILE` in `.env` via `nano .env`
- While in `.env`, set `RPC_URL` to your RPC provider's `wss://` endpoint, including API key / auth
- Run `./ethd update` and `./ethd up`
- RocketPool, **only** if using hybrid mode with execution layer client in Eth Docker: `rocketpool service config` and set
"Execution Client" `HTTP URL` to `http://rpc-proxy:48545` and `Websocket URL` to `ws://rpc-proxy:48546`
- Nodeset, **only** if using hybrid mode with execution layer client in Eth Docker: `hyperdrive service config` and set
"Execution Client" `HTTP URL` to `http://rpc-proxy:48545` and `Websocket URL` to `ws://rpc-proxy:48546`
- SSV Node, `nano ssv-config/config.yaml` and change the `ETH1Addr` to be `ws://rpc-proxy:48546`, then `./ethd restart ssv-node`
- SSV Anchor, `nano .env` and change `EL_RPC_NODE` to `http://rpc-proxy:48545` and `EL_WS_NODE` to `ws://rpc-proxy:48546`, then
`./ethd restart anchor`
- Lido SimpleDVT with Obol, `nano .env` and change `OBOL_EL_NODE` to `http://rpc-proxy:48545`, then `./ethd restart validator-ejector`
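
The `.env` edits in the first three steps can also be scripted. This sketch runs against a scratch copy so it is safe to try anywhere; the `COMPOSE_FILE` contents and the endpoint URL below are placeholders, not real values:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Build a scratch .env standing in for your real one (placeholder contents).
printf 'COMPOSE_FILE=lodestar.yml:besu.yml\nRPC_URL=\n' > demo.env

# Append nimbus-vp.yml to COMPOSE_FILE, as the setup steps describe.
sed -i 's/^COMPOSE_FILE=.*/&:nimbus-vp.yml/' demo.env

# Set RPC_URL to the provider's wss:// endpoint (placeholder URL and key).
sed -i 's|^RPC_URL=.*|RPC_URL=wss://eth-mainnet.example.com/v2/YOUR-API-KEY|' demo.env

cat demo.env
```

In a real Eth Docker directory you would make the same two changes to `.env` itself, e.g. with `nano .env`, then run `./ethd update` and `./ethd up`.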

### Adjusting defaults

Nimbus Verified Proxy on startup gets a trusted root from `CL_NODE`, connects to `RPC_URL`, and proxies all
RPC requests while verifying them against the trusted root. This works for `http://` and `ws://` queries.
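
For instance, an `eth_getLogs` query of the kind these protocols issue is sent to the proxy like any other JSON-RPC call. The `rpc-proxy` host and port `48545` are the defaults from this page; the block range and address filter below are placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# A sample eth_getLogs JSON-RPC request body; the address is a placeholder.
payload='{"jsonrpc":"2.0","id":1,"method":"eth_getLogs","params":[{"fromBlock":"latest","toBlock":"latest","address":"0x0000000000000000000000000000000000000000"}]}'

# From a container on the same Docker network, this would be sent as (not executed here):
#   curl -s -X POST http://rpc-proxy:48545 -H 'Content-Type: application/json' -d "$payload"
echo "$payload"
```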

You can change the ports the proxy listens on with `PROXY_RPC_PORT` and `PROXY_WS_PORT`.

These ports can be exposed to the host via `proxy-shared.yml`, or encrypted to `https://` and `wss://`
via `proxy-traefik.yml`, in which case you also want `DOMAIN`, `PROXY_RPC_HOST`, `PROXY_WS_HOST`, and either
`traefik-cf.yml` or `traefik-aws.yml` with their attendant parameters.
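
Put together, the relevant `.env` entries might look like this; the port values are the defaults described above, while the domain and host names are placeholders:

```shell
# Proxy listen ports (defaults)
PROXY_RPC_PORT=48545
PROXY_WS_PORT=48546

# Only relevant with proxy-traefik.yml; placeholder values
DOMAIN=example.com
PROXY_RPC_HOST=rpc      # would serve https://rpc.example.com
PROXY_WS_HOST=rpcws     # would serve wss://rpcws.example.com
```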

If running multiple Eth Docker stacks on one host, each with an RPC proxy and using the same Docker bridge network via
`ext-network.yml`, you can use `RPC_PROXY_ALIAS` and `WS_PROXY_ALIAS` to give the proxies distinctive names. In
that case, use the alias names when configuring other protocols to connect to the proxy; do **not** use
`rpc-proxy`, as it would round-robin between multiple instances of the proxy on the bridge network.
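
For example, one of two stacks sharing a bridge network could set the following in its `.env`; the alias names are placeholders. Protocols on that stack would then connect to `http://rpc-proxy-stack1:48545` and `ws://ws-proxy-stack1:48546`:

```shell
# Distinctive proxy names for this stack (placeholder aliases)
RPC_PROXY_ALIAS=rpc-proxy-stack1
WS_PROXY_ALIAS=ws-proxy-stack1
```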
34 changes: 17 additions & 17 deletions website/docs/Usage/ResourceUsage.md
@@ -36,11 +36,11 @@
|--------|---------|------|---------|---------------|----------------|------------|---------------|-----|-------|
| Geth | 1.15.11 | May 2025 | ~1.2 TiB | ~830 GiB | n/a | n/a | n/a | ~ 8 GiB | |
| Nethermind | 1.36.0 | February 2026 | ~1.1 TiB | ~740 GiB | ~600 GiB | ~240 GiB | n/a | ~ 7 GiB | With HalfPath, can prune automatically online at ~350 GiB free |
| Besu | v26.1.0 | February 2026 | ~1.35 TiB | ~850 GiB | n/a | ~560 GiB | ~290 GiB | ~ 10 GiB | |
| Reth | 1.11.1 | February 2026 | tbd | tbd | tbd | tbd | tbd | ~ 9 GiB | Storage v2 |
| Erigon | 3.3.8 | February 2026 | ~1.0 TiB | ~650 GiB | n/a | ~640 GiB | ~355 GiB | See comment | Erigon will have the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with 32 GiB or less |
| Nimbus | 0.1.0-alpha | May 2025 | tbd | 755 GiB | n/a | n/a | n/a | | With Era1 import |
| Ethrex | 4.0.0 | October 2025 | n/a | 450 GiB | n/a | n/a | n/a | | |

Notes on disk usage
- Reth, Besu, Geth, Erigon, Ethrex and Nimbus continuously prune
@@ -65,11 +65,10 @@ Cache size default in all tests.
| Client | Version | Date | Node Type | Test System | Time Taken | Notes |
|--------|---------|------|-----------|-------------|------------|--------|
| Geth | 1.15.10 | April 2025 | Full | OVH Baremetal NVMe | ~ 5 hours | |
| Nethermind | 1.36.0| February 2026 | post-Cancun | Netcup RS G11 | ~ 2 hours | Ready to attest after ~ 1 hour |
| Besu | v26.1.0 | February 2026 | rolling | Netcup RS G11 | ~ 13 hours | |
| Erigon | 3.3.8 | February 2026 | rolling | Netcup RS G11 | ~ 12 hours | |
| Reth | 1.11.1 | February 2026 | Full | Legacy miniPC | ~ 5 days | |
| Nimbus | 0.1.0-alpha | May 2025 | Full | OVH Baremetal NVMe | ~ 5 1/2 days | With Era1 import |
| Ethrex | 4.0.0 | October 2025 | post-merge | OVH Baremetal NVMe | ~ 2 hours | |

@@ -88,24 +87,25 @@ Specifically `fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --`
Servers have been configured with [noatime](https://www.howtoforge.com/reducing-disk-io-by-mounting-partitions-with-noatime) and [no swap](https://www.geeksforgeeks.org/how-to-permanently-disable-swap-in-linux/) to improve latency.


| Name | RAM | SSD Size | CPU | r/w latency | Notes |
|----------------------|--------|----------|------------|-------------|-------|
| [OVH](https://ovhcloud.com/) Baremetal NVMe | 32 GiB | 1.9 TB | Intel Hexa | 150us max | Datacenter-class NVMe drive |
| [Netcup](https://netcup.eu) RS G11 | 96 GiB | 3 TB | 20 vCPU on an AMD 84-core | 400us avg / 1.1ms max | Storage is fast enough to attest, but too slow to get best rewards |
| Legacy miniPC | 32 GiB | 2 TB | Intel Quad 6th gen | 230 us avg / 320 us max | Home staker setup with PCIe 3 NVMe and older CPU |

## Getting better latency

Ethereum execution layer clients need decently low latency. Measure latency with `ioping` when the system is under load. NVMe SSD is highly recommended; HDD will not be sufficient.

For cloud providers, here are some results for syncing Geth. In a nutshell, use baremetal instead.
- AWS, gp2 or gp3 with provisioned IOPS delivered sub-par performance during sync committees.
- Linode block storage, make sure to get NVMe-backed storage.
- Netcup RS G11 works, but rewards are not optimal.
- There are reports that Digital Ocean block storage is too slow, as of late 2021.
- Strato V-Server is too slow as of late 2021.

Dedicated servers with NVMe SSD will always have sufficiently low latency. Do avoid hardware RAID, though; see below.
OVH Advance line is a well-liked dedicated option; latitude.sh, Linode, Vultr, Strato, or any other baremetal provider will work as well.

For your own hardware, we've seen three causes of high latency:
- DRAMless or QLC SSD. Choose a ["mainstream" SSD](https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038)