feat: add signet-rpc-storage crate with ETH RPC endpoints #75
Conversation
@@ -0,0 +1,578 @@
//! Integration tests for the `signet-rpc-storage` ETH RPC endpoints.
[Claude Code]
Test coverage gap: eth_call and eth_estimateGas are the only two supported endpoints without integration tests. They require full EVM state setup (deployed contract bytecode, proper storage) to exercise meaningfully. Worth adding as a follow-up.
The current tests cover all 22 other supported endpoints plus error paths.
[Claude Code] Review Summary

What's here

Issues flagged (inline comments)

Codebase health
Add the foundational scaffolding for the signet-rpc-storage crate, which provides an Ethereum JSON-RPC server backed by signet-storage's unified storage backend, independent of reth's FullNodeComponents. This includes:

- Workspace dependency additions (signet-storage, signet-cold, signet-hot, signet-storage-types)
- StorageRpcCtx context struct with Arc<Inner> pattern
- BlockTags atomic block tag tracker for Latest/Safe/Finalized
- Block ID and block tag resolution utilities
- Stub eth module (endpoints to be added in follow-up)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implement all ETH namespace JSON-RPC endpoints backed by cold/hot storage instead of reth. Converts the eth.rs placeholder into an eth/ directory module with error types, helpers, and 24 supported endpoint handlers:

- Simple queries: blockNumber, chainId
- Block queries: getBlockByHash/Number, getBlockReceipts, headers
- Transaction queries: getTransactionByHash, getTransactionReceipt, raw txs
- Account state (hot storage): getBalance, getStorageAt, getCode, getTransactionCount
- EVM execution: call, estimateGas (via signet-evm/trevm)
- Transaction submission: sendRawTransaction (via TxCache)
- Logs: getLogs with bloom filter matching

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add 23 integration tests covering all endpoint categories: simple queries, block/transaction lookups, account state, logs, and error cases. Tests exercise the router through the axum service layer using tower's oneshot().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
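For readers unfamiliar with the tower `oneshot()` test transport mentioned above, it drives the router through a single in-memory request/response cycle with no socket. A self-contained sketch assuming axum 0.7-style APIs; the route, JSON-RPC body, and helper name are illustrative, not taken from the PR's test code:

```rust
use axum::{body::Body, http::Request, Router};
use tower::ServiceExt; // brings `oneshot` into scope

/// Fire one JSON-RPC request at the router and return the HTTP status.
async fn call_block_number(app: Router) -> axum::http::StatusCode {
    let req = Request::builder()
        .method("POST")
        .uri("/")
        .header("content-type", "application/json")
        .body(Body::from(
            r#"{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}"#,
        ))
        .expect("valid request");

    // `oneshot` consumes the service for a single request; the router's
    // error type is `Infallible`, so unwrapping is fine in a test.
    app.oneshot(req).await.expect("infallible service").status()
}
```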
- send_raw_transaction: log warning on forwarding failure instead of silently discarding the error
- get_logs: reject reversed block ranges (from > to) with an explicit error instead of silently returning empty results
- build_receipt_envelope: remove catch-all arm so new TxType variants from alloy produce a compile error instead of a runtime panic

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Regular txs execute before system txs, not the other way around. Drive-by from #74 (comment)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from 4341320 to 601c156
- Extract `resolve_evm_block` method on `StorageRpcCtx` to deduplicate the block resolution + header fetch + revm db creation shared by `call()` and `estimate_gas()`. Resolves headers directly (by hash or by tag→number) to avoid redundant cold storage lookups.
- Replace glob import `use endpoints::*` with explicit imports.
- Remove unused `revm_state()` method from `StorageRpcCtx`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ounds

- Move `resolve_block_id` and `resolve_block_number_or_tag` from free functions in resolve.rs to `resolve_block_id` and `resolve_block_tag` methods on `StorageRpcCtx`. This eliminates repeated `ctx.tags()` and `ctx.cold()` threading at every call site.
- `resolve_block_tag` returns `u64` directly (infallible) instead of `Result`, simplifying callers like `get_logs`.
- Remove `H::RoTx: Send + Sync + 'static` bounds from all endpoint functions, router, and ctx methods; the trait already provides these.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the bare `rpc_gas_cap` constructor parameter with a `StorageRpcConfig` struct that bundles all RPC configuration. This moves `max_blocks_per_filter` from a hard-coded constant to a configurable value, adds `max_logs_per_response` enforcement in `eth_getLogs`, and pre-creates a tracing semaphore for future debug endpoint concurrency limiting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
let mut all_logs = Vec::new();

for (chunk_start, chunk_end) in
    BlockRangeInclusiveIter::new(from..=to, MAX_HEADERS_RANGE)
I'm guessing the intent here is to return batches of 1000, but the way `BlockRangeInclusiveIter` is implemented, we'd need to pass `MAX_HEADERS_RANGE - 1` as the step value.

I think it might be worth changing the `BlockRangeInclusiveIter` to treat the step arg the same way as the std lib, i.e. to make it mean "batch size" and not increment it by 1 internally. That would mean we'd also need to update `BlockRangeInclusiveIter::next()` to have `let end = (start + self.step - 1).min(self.end);`.
this is basically ported from reth, so i'll take a deeper look here
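For illustration, here is a minimal standalone sketch of the std-lib-style semantics the comment above proposes, where `step` means "batch size" and `next()` clamps with `start + step - 1`. This is not the reth/signet implementation, just the suggested behavior:

```rust
/// `new(0..=2999, 1000)` yields exactly (0, 999), (1000, 1999), (2000, 2999).
struct BlockRangeInclusiveIter {
    current: u64,
    end: u64,
    step: u64,
    done: bool,
}

impl BlockRangeInclusiveIter {
    fn new(range: std::ops::RangeInclusive<u64>, step: u64) -> Self {
        Self {
            current: *range.start(),
            end: *range.end(),
            step,
            // An empty range or a zero step yields nothing (and avoids underflow below).
            done: range.is_empty() || step == 0,
        }
    }
}

impl Iterator for BlockRangeInclusiveIter {
    type Item = (u64, u64);

    fn next(&mut self) -> Option<Self::Item> {
        if self.done {
            return None;
        }
        let start = self.current;
        // Each batch covers `step` blocks, inclusive of both ends.
        let end = start.saturating_add(self.step - 1).min(self.end);
        if end == self.end {
            self.done = true;
        } else {
            self.current = end + 1;
        }
        Some((start, end))
    }
}
```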
// ---------------------------------------------------------------------------

pub(crate) async fn not_supported() -> ResponsePayload<(), ()> {
    ResponsePayload::internal_error_message(Cow::Borrowed("Method not supported."))
This should probably be method_not_found() rather than an internal error, so callers don't retry.
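A later commit in this PR makes exactly this change (JSON-RPC -32601 instead of -32603). As a sketch, assuming ajj exposes a `method_not_found` constructor alongside `internal_error_message`:

```rust
// Sketch of the suggested fix: report JSON-RPC "method not found" (-32601)
// so well-behaved clients treat the method as unsupported rather than
// retrying what looks like a transient internal error. The constructor name
// is taken from the follow-up commit message; the exact ajj API may differ.
pub(crate) async fn not_supported() -> ResponsePayload<(), ()> {
    ResponsePayload::method_not_found()
}
```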
Add gas oracle, filters, subscriptions, debug tracing, and signet namespaces to signet-rpc-storage. Port 15 endpoints from the old reth-backed signet-rpc crate to the storage-backed architecture.

New modules:

- gas_oracle: cold-storage gas price oracle (suggest_tip_cap)
- interest/: filter manager, subscription manager, block notifications
- debug/: traceBlockByNumber, traceBlockByHash, traceTransaction
- signet/: sendOrder, callBundle

Wired eth endpoints: gasPrice, maxPriorityFeePerGas, feeHistory, newFilter, newBlockFilter, uninstallFilter, getFilterChanges, getFilterLogs, subscribe, unsubscribe.

Integration tests cover gas/fee queries, filter lifecycle, and debug tracing with noop tracer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use `SealedHeader` with `.hash()` / `.into_inner()` instead of `header.hash_slow()`
- Use `RecoveredTx` (pre-recovered sender) instead of manual `recover_sender` calls
- Use `ColdReceipt` with per-tx `gas_used` instead of computing deltas from cumulative gas
- Delegate `get_logs` to cold storage instead of manual bloom filtering and block iteration
- Remove `BlockRangeInclusiveIter`, `collect_matching_logs`, `build_receipt_from_parts`, and `recover_sender` helpers
- Simplify `build_rpc_transaction` and `build_receipt` to be infallible

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Addresses PR review feedback:

- resolve_block_id now uses hot HotDbRead::get_header_number instead of cold storage, making it synchronous and avoiding async overhead
- Add resolve_header for direct header fetches from hot storage, eliminating the double header lookup in header_by
- Change not_supported() to return method_not_found() (JSON-RPC -32601) instead of internal_error (-32603)
- Update ResolveError to use Storage/Db variants instead of Cold
- Update tests to write headers to both hot and cold storage

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Swap return types to use RpcBlock/RpcReceipt/RpcTransaction/RpcHeader type aliases, rename tx_by_block_and_index to match rpc naming, fix not_supported error message, and split call into run_call + call. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Align rpc-storage behavioral semantics with the rpc crate:

- subscribe: require filter for Logs, reject filter for NewHeads via TryFrom impl
- send_raw_transaction: return Result from spawned task instead of swallowing errors
- uninstall_filter/unsubscribe: add HandlerCtx and wrap in spawn_blocking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…to 0.5

resolve_evm_block previously mapped Pending to Latest without modifying the header, causing eth_estimateGas (which defaults to Pending) and eth_call with explicit Pending to see wrong block.number, timestamp, and base_fee. Now synthesizes a next-block header matching signet-rpc's block_cfg() behavior.

Also refactors callBundle to use resolve_evm_block instead of duplicating the pending header logic inline, and passes max_logs to cold.get_logs() for early termination (signet-cold 0.5 API).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
}

/// Get the finalized block number.
pub fn finalized(&self) -> u64 {
is there a way to set all of these atomically at once
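A later commit adds a `BlockTags::update_all` for this. One way to make the update all-or-nothing is to keep the tags in a single snapshot behind one lock (or a swappable `Arc`) instead of three separate atomics; a minimal standalone sketch, where the names and locking choice are illustrative rather than the PR's implementation:

```rust
use std::sync::{Arc, RwLock};

/// Hypothetical snapshot of the three tracked tags.
#[derive(Debug, Clone, Copy, Default)]
struct TagSnapshot {
    latest: u64,
    safe: u64,
    finalized: u64,
}

/// Tag tracker that updates all tags under one lock, so readers never
/// observe a partially-updated set (e.g. new latest with stale finalized).
#[derive(Debug, Clone, Default)]
struct AtomicTags(Arc<RwLock<TagSnapshot>>);

impl AtomicTags {
    /// Replace all three tags in a single critical section.
    fn update_all(&self, latest: u64, safe: u64, finalized: u64) {
        *self.0.write().unwrap() = TagSnapshot { latest, safe, finalized };
    }

    /// Read a consistent snapshot of all three tags.
    fn snapshot(&self) -> TagSnapshot {
        *self.0.read().unwrap()
    }
}
```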
prestwich left a comment
are there ways that we can DRY endpoint logic or definitions?
/// assert_eq!(tags.latest(), 101);
/// ```
#[derive(Debug, Clone)]
pub struct BlockTags {
can we add syncing status information to this struct
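A later commit adds a `SyncStatus` struct for this. One plausible shape, loosely following the `eth_syncing` response; the field names here are illustrative, not the PR's exact definition:

```rust
/// Illustrative sync-status type: either fully synced (so `eth_syncing`
/// can answer `false`) or actively syncing with progress block heights.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SyncStatus {
    /// Node is caught up with the chain tip.
    Synced,
    /// Node is still catching up between the given heights.
    Syncing {
        starting_block: u64,
        current_block: u64,
        highest_block: u64,
    },
}

impl SyncStatus {
    /// Convenience check mirroring the boolean `eth_syncing` answer.
    pub fn is_syncing(&self) -> bool {
        matches!(self, Self::Syncing { .. })
    }
}
```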
crates/rpc-storage/src/lib.rs (outdated)
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]

mod config;
pub use config::StorageRpcConfig;
add newlines after each pub use
crates/rpc-storage/src/lib.rs (outdated)
H: signet_hot::HotKv + Send + Sync + 'static,
<H::RoTx as signet_hot::model::HotKvRead>::Error: trevm::revm::database::DBErrorMarker,
{
    ajj::Router::new().merge(eth::eth()).merge(debug::debug()).merge(signet::signet())
these are not appropriately namespaced. we need to nest them with a prefix, as done in signet-rpc
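For reference, the nested form being asked for would look roughly like the fragment below, mirroring signet-rpc. A later commit in this PR switches to `.nest()`; this sketch assumes ajj's `nest` takes a namespace prefix plus a sub-router, and it reuses the crate-internal sub-routers from the quoted line above rather than being standalone:

```rust
// Sketch only: same sub-routers as the merged line above, but nested under
// their JSON-RPC namespace prefixes so methods are exposed as `eth_*`,
// `debug_*`, and `signet_*`. The exact `nest` signature is an assumption.
ajj::Router::new()
    .nest("eth", eth::eth())
    .nest("debug", debug::debug())
    .nest("signet", signet::signet())
```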
crates/rpc-storage/src/lib.rs (outdated)
}

/// Instantiate the `signet` API router.
pub fn signet<H>() -> ajj::Router<StorageRpcCtx<H>>
these functions do not need to exist. we only need the main fully built router at crate root
@@ -0,0 +1,85 @@
//! Configuration for the storage-backed RPC server.
make a `config/` module folder containing the contents of `ctx.rs`, `gas_oracle.rs`, `config.rs`, and `resolve.rs`
H: HotKv + Send + Sync + 'static,
<H::RoTx as HotKvRead>::Error: DBErrorMarker,
{
    let timeout = bundle.bundle.timeout.unwrap_or(1000);
the 1000 should be a config parameter
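A later commit adds a `default_bundle_timeout_ms` config value for this. A rough sketch of the shape: the field name follows the commit message, while the trimmed-down struct and the call-site wiring are illustrative, not the crate's actual config:

```rust
/// Hypothetical, trimmed-down config showing only the new knob; the real
/// `StorageRpcConfig` in this PR carries more fields.
#[derive(Debug, Clone)]
pub struct StorageRpcConfig {
    /// Default bundle timeout (milliseconds) when the request omits one.
    pub default_bundle_timeout_ms: u64,
}

impl Default for StorageRpcConfig {
    fn default() -> Self {
        Self { default_bundle_timeout_ms: 1000 }
    }
}

// At the call site, the handler would then fall back to the configured value:
// let timeout = bundle.bundle.timeout.unwrap_or(cfg.default_bundle_timeout_ms);
```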
};

let task = |hctx: HandlerCtx| async move {
    hctx.spawn(async move { tx_cache.forward_order(order).await.map_err(|e| e.to_string()) });
is this error swallowed?
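Later commits in this PR indicate it was, and add fire-and-forget error logging. The pattern looks roughly like the following standalone sketch, with `forward` standing in for `tx_cache.forward_order(order)` and the tracing fields chosen for illustration:

```rust
/// Stand-in for the fallible forwarding call in the handler.
async fn forward() -> Result<(), String> {
    Ok(())
}

fn spawn_forward() {
    tokio::spawn(async move {
        if let Err(err) = forward().await {
            // Surface the failure in logs rather than silently dropping the Result.
            tracing::warn!(%err, "failed to forward order to tx-cache");
        }
    });
}
```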
/// The caller constructs and sends these via a
/// [`tokio::sync::broadcast::Sender`].
#[derive(Debug, Clone)]
pub struct NewBlockNotification {
how and where are resources managed for this? are these ever cloned?
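For context on the clone question: `tokio::sync::broadcast` hands each receiver its own clone of the value on delivery, so any heavy payload inside `NewBlockNotification` is cheapest to hold behind an `Arc`. A small standalone illustration; the `block` field is a stand-in, not the PR's actual layout:

```rust
use std::sync::Arc;
use tokio::sync::broadcast;

/// Stand-in notification type: the payload is Arc'd so the per-receiver
/// clone performed by the broadcast channel is just a reference-count bump.
#[derive(Debug, Clone)]
struct NewBlockNotification {
    block: Arc<Vec<u8>>,
}

fn demo() {
    let (tx, mut rx) = broadcast::channel::<NewBlockNotification>(16);
    let note = NewBlockNotification { block: Arc::new(vec![0u8; 1024]) };
    // Each subscriber gets its own clone when it calls `recv`/`try_recv`.
    let _ = tx.send(note);
    let _ = rx.try_recv();
}
```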
@@ -0,0 +1,22 @@
//! Filter and subscription management for block/log notifications.
make comprehensive documentation for this module
include details about resource usage expectations, etc
crates/rpc-storage/Cargo.toml (outdated)
thiserror.workspace = true
serde.workspace = true
dashmap = "6.1.0"
reth-rpc-eth-api.workspace = true
where is this used? how can we remove it
{
    let opts = response_tri!(opts.ok_or(DebugError::InvalidTracerConfig));

    let _permit = ctx.acquire_tracing_permit().await;
where is this permit dropped?
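Nothing drops it explicitly: the permit is an RAII guard released when `_permit` goes out of scope at the end of the handler, including on early returns. A standalone sketch of the same pattern with a tokio `Semaphore`; the names are illustrative:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Hold a concurrency permit for the duration of the tracing work.
async fn traced_section(tracing_sem: Arc<Semaphore>) {
    // Acquired here; the guard lives until the end of this scope.
    let _permit = tracing_sem
        .acquire_owned()
        .await
        .expect("semaphore is never closed");

    // ... run the (potentially expensive) tracing work while holding the permit ...
} // `_permit` dropped here, releasing the slot for the next tracer.
```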
let t = trevm.fill_tx(tx);
let frame;
(frame, trevm) = response_tri!(crate::debug::tracer::trace(t, &opts, tx_info));
frames.push(TraceResult::Success { result: frame, tx_hash: Some(*tx.tx_hash()) });
we are collecting these frames only to immediately serialize them. is there a way we could feed these to the serializer via an iterator instead?
the same question applies to other instances where we return vecs
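One way to do that with serde is a wrapper type whose `Serialize` impl feeds `collect_seq` from an iterator, converting each raw item on the fly; a later commit in this PR adds lazy serializers along these lines. A self-contained sketch with illustrative stand-in types (`RawFrame`, `RpcFrame`, `to_rpc_frame` are not the PR's names):

```rust
use serde::{Serialize, Serializer};

/// Stand-in for the raw per-transaction data the handler already has.
struct RawFrame(u64);

/// Stand-in for the wire-format type that would otherwise be collected.
#[derive(Serialize)]
struct RpcFrame {
    gas_used: u64,
}

fn to_rpc_frame(raw: &RawFrame) -> RpcFrame {
    RpcFrame { gas_used: raw.0 }
}

/// Wrapper that converts and serializes frames lazily, so the handler can
/// return this instead of building a `Vec<RpcFrame>` first.
struct LazyFrames<'a>(&'a [RawFrame]);

impl Serialize for LazyFrames<'_> {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // `collect_seq` drives the conversion one element at a time,
        // with no intermediate Vec of converted frames.
        serializer.collect_seq(self.0.iter().map(to_rpc_frame))
    }
}
```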
#[derive(Debug, Clone, thiserror::Error)]
pub enum DebugError {
    /// Cold storage error.
    #[error("cold storage: {0}")]
can we avoid string-typing these?
we should also avoid leaking the full error into the API return. ensure that there is a log of the full error, and the API returns a simple description
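One hedged sketch of that shape: keep the typed source error (no pre-formatted `String`), log the full chain where the error occurs, and hand RPC callers only a short, stable description. The variant, message, and `ColdError` stand-in are illustrative; the PR's follow-up commit sanitizes `DebugError` in its own way:

```rust
/// Stand-in for the real cold-storage error type, only here to make the
/// sketch self-contained.
#[derive(Debug, thiserror::Error)]
#[error("backend failure")]
pub struct ColdError;

/// Typed error: the source is preserved for logging, while Display stays
/// deliberately terse for the API response.
#[derive(Debug, thiserror::Error)]
pub enum DebugError {
    /// Cold storage error.
    #[error("cold storage unavailable")]
    Cold(#[from] ColdError),
}

impl DebugError {
    /// Message that is safe to return over the API; details stay in the logs
    /// (e.g. via `tracing::warn!(err = ?self, ...)` at the call site).
    pub fn api_message(&self) -> &'static str {
        match self {
            Self::Cold(_) => "cold storage unavailable",
        }
    }
}
```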
prestwich left a comment
add many more comments throughout the endpoint code, explaining what is happening and why
- Restructure into config/ module folder (ctx, resolve, gas_oracle, rpc_config)
- Remove standalone pub fn wrappers, use .nest() for namespace prefixing
- Remove reth-rpc-eth-api dependency, use alloy Network trait type aliases
- Change not_supported to method_not_found, add concrete SignetError variants
- Sanitize DebugError Display output, add tracing::warn at error call sites
- Add fire-and-forget error logging for send_raw_transaction and send_order
- Add default_bundle_timeout_ms config, SyncStatus struct, BlockTags::update_all
- Add comprehensive interest/ module docs, endpoint doc comments, permit docs
- Extract hot_reader_at_block helper for account state endpoints
- Add FUTURE-EVALUATION.md documenting Vec collection constraint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bump ajj from 0.3.4 to 0.5.0 and adapt all call sites to the new API:

- ResponsePayload::Success/Failure replaced with ResponsePayload(Ok/Err)
- Subscription task rewritten to use ctx.notify() instead of raw channel access via the removed notifications() method

Leverage relaxed RpcSend bounds for lazy serialization:

- LazyReceipts: serializes receipts inline from raw ColdReceipt + RecoveredTx data without intermediate Vec<RpcReceipt>
- In-housed BlockTransactions and RpcBlock: serialize full transactions or hashes lazily from Vec<RecoveredTx>, hardcode empty uncles

Refactor build_receipt and build_rpc_transaction to take references, cloning only where needed (logs Vec, RecoveredTx for into_inner).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Apply the same ajj 0.5.0 migration to the canonical rpc crate:

- ResponsePayload::Success/Failure replaced with ResponsePayload(Ok/Err)
- Subscription task rewritten to use ctx.notify() instead of the removed notifications() channel accessor

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ruct

Add SubscriptionNotification and SubscriptionParams structs with derived Serialize impls, replacing hand-crafted json! macro usage in both rpc and rpc-storage subscription tasks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
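A rough sketch of the derived-`Serialize` shape that replaces the hand-built `json!` payloads. The struct names follow the commit message above; the field types and generics here are assumptions:

```rust
use serde::Serialize;

/// JSON-RPC subscription push:
/// {"jsonrpc":"2.0","method":"eth_subscription","params":{...}}.
#[derive(Debug, Serialize)]
struct SubscriptionNotification<T> {
    jsonrpc: &'static str,
    method: &'static str,
    params: SubscriptionParams<T>,
}

/// Inner params object carrying the subscription id and the payload.
#[derive(Debug, Serialize)]
struct SubscriptionParams<T> {
    subscription: String,
    result: T,
}

/// Build a notification for a given subscription id and payload.
fn subscription_notification<T: Serialize>(id: String, result: T) -> SubscriptionNotification<T> {
    SubscriptionNotification {
        jsonrpc: "2.0",
        method: "eth_subscription",
        params: SubscriptionParams { subscription: id, result },
    }
}
```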
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- `signet-rpc-storage` crate providing Ethereum JSON-RPC endpoints backed by `signet-storage` (hot + cold), independent of reth's `FullNodeComponents`
- `StorageRpcCtx` wraps `UnifiedStorage`, `BlockTags`, system constants, optional `TxCache`, and RPC gas cap
- `BlockTags` provides atomic latest/safe/finalized block tracking with sync/async resolution
- `blockNumber`, `chainId`
- `getBlockByHash/Number`, `getBlockTransactionCount*`, `getBlockReceipts`, `getBlockHeader*`
- `getTransactionByHash`, `getRawTransactionByHash`, `*ByBlockAndIndex`, `getTransactionReceipt`
- `getBalance`, `getStorageAt`, `getTransactionCount`, `getCode`
- `call`, `estimateGas` via `signet-evm`/trevm
- `sendRawTransaction` via `TxCache`
- `getLogs` with bloom filter matching
- `signet-storage` 0.3.0 from crates.io

Test plan
- `cargo clippy -p signet-rpc-storage --all-features --all-targets`: clean
- `cargo clippy -p signet-rpc-storage --no-default-features --all-targets`: clean
- `cargo +nightly fmt`: clean
- `cargo t -p signet-rpc-storage`: 33 tests pass
- `cargo doc -p signet-rpc-storage`: docs build successfully

🤖 Generated with Claude Code