Fix: step 1 cli correctness mining hygiene #59
SIDDHANTCOOKIE wants to merge 7 commits into StabilityNexus:main from
Conversation
Walkthrough
Refactors canonical hashing/serialization for transactions and blocks, restructures the mempool into per-sender nonce queues, adds P2P message validation and deduplication, limits the number of transactions per mined block, updates mining/CLI nonce sourcing and reward handling, and adds tests covering these protocol-hardening changes.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CLI as CLI
    participant Mempool as Mempool
    participant Miner as Miner/Chain
    participant Network as P2PNetwork
    CLI->>Mempool: submit signed tx
    Mempool-->>CLI: enqueue (per-sender nonce)
    CLI->>Miner: request mine (uses chain.state nonce)
    Miner->>Mempool: get_transactions_for_block(max=MAX_TRANSACTIONS_PER_BLOCK)
    Mempool-->>Miner: ready transactions (nonce-ordered, timestamp-sorted)
    Miner->>Miner: construct Block(header includes miner)
    Miner->>Chain: add_block(block)
    alt add success
        Chain->>Mempool: remove confirmed transactions (only mined txs)
        Chain->>Network: broadcast block payload
        Network-->>Peers: validation + dedup marking
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed
❌ Failed checks (1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 11
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main.py`:
- Around line 132-133: The miner reward is being taken from an unhashed
out-of-band field payload_data["miner"] (falling back to BURN_ADDRESS) and
passed to chain.state.credit_mining_reward, which allows replay or forgery;
instead embed/derive the reward recipient from authenticated block data (the
hashed block or a coinbase transaction) before crediting. Update code paths that
use payload_data.get("miner", BURN_ADDRESS) and chain.state.credit_mining_reward
to read the miner from Block (e.g., a coinbase Tx or a Block.miner property that
is included in Block.compute_hash) or validate that payload_data["miner"]
matches the value inside the hashed block, and refuse to credit if they differ;
ensure broadcast_block no longer supplies an unauthenticated miner field or that
it is ignored in favor of the miner inside the block.
In `@minichain/block.py`:
- Around line 15-19: The change uses Transaction.tx_id as Merkle leaves
(tx_hashes) which will change merkle_root for all existing blocks; restore
compatibility by reintroducing the original per-transaction digest as the
primary leaf or by versioning the leaf format: update the code that builds
tx_hashes to call a deterministic helper (e.g., get_transaction_leaf(tx)) which
returns the old per-transaction digest if present (e.g., tx.digest or
tx.get_leaf_digest()) and falls back to tx.tx_id only for new-format
transactions, and add a block/chain field (e.g., merkle_version or
leaf_format_version on Block) so nodes can detect which leaf format was used
when validating merkle_root.
In `@minichain/contract.py`:
- Around line 21-22: The except block catching (OSError, ValueError) after
calling setrlimit currently just logs a warning and continues; change this to
fail closed by aborting execution (e.g., raise a RuntimeError or call
sys.exit(1)) so untrusted contract code cannot run without address-space limits,
unless there is an explicit configurable escape hatch (e.g., an "unsafe" or
"allow_unrestricted" flag) that must be checked first; update the code around
setrlimit and logger.warning to perform the flag check and either raise/exit on
error or proceed only when the explicit unsafe mode is enabled.
In `@minichain/mempool.py`:
- Around line 73-88: get_transactions_for_block currently removes the chosen
transactions from the mempool inside get_transactions_for_block (via
_remove_transactions_unlocked), causing lost txs if block is not accepted;
change get_transactions_for_block to be read-only by removing the call to
self._remove_transactions_unlocked(selected) and simply return the selected
list, and then update the caller (e.g., where chain.add_block(mined_block) is
invoked in main.py) to call mempool.remove_transactions(pending_txs) only after
chain.add_block(mined_block) returns True so removal happens only on successful
block acceptance.
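The suggested flow can be sketched with a deliberately simplified pool (the real `Mempool` uses per-sender nonce queues; this sketch only models the select/commit split):

```python
class MiniMempool:
    """Simplified pool: selection is read-only; removal is a separate step."""

    def __init__(self):
        self.pending = []

    def get_transactions_for_block(self, max_count):
        # No side effects: the pool is untouched until the block is accepted
        return self.pending[:max_count]

    def remove_transactions(self, txs):
        self.pending = [t for t in self.pending if t not in txs]


def mine_and_commit(mempool, add_block, max_count=2):
    txs = mempool.get_transactions_for_block(max_count)
    if add_block(txs):                 # prune only on successful acceptance
        mempool.remove_transactions(txs)
    return txs
```

If `add_block` fails, the transactions stay pending and can be retried in the next block.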
In `@minichain/p2p.py`:
- Around line 262-267: The code currently calls _mark_seen(msg_type, payload)
before invoking the handler, which suppresses messages even if the handler later
rejects them; change the handler invocation (the function referenced as
handler_callback) to return a boolean success flag and only call
_mark_seen(msg_type, payload) when the handler_callback returns True; update the
call site that currently invokes handler_callback to capture its return value
and conditionally call _mark_seen, and update any handlers to return True on
successful accept/application and False on rejection so duplicates are only
recorded for accepted messages.
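The accept-then-mark ordering can be sketched as below; the class and method names are illustrative, not the actual `p2p.py` API.

```python
import json


class DedupingDispatcher:
    """Record a message as seen only after its handler accepts it."""

    def __init__(self):
        self._seen = set()

    def _key(self, msg_type, payload):
        return (msg_type, json.dumps(payload, sort_keys=True))

    def dispatch(self, msg_type, payload, handler) -> bool:
        if self._key(msg_type, payload) in self._seen:
            return False
        accepted = handler(payload)    # handler returns True on accept
        if accepted:                   # mark seen only for accepted messages
            self._seen.add(self._key(msg_type, payload))
        return accepted
```

A message rejected once (e.g. because a parent block had not arrived yet) can then succeed on a later retry instead of being permanently suppressed.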
- Around line 112-136: In _validate_transaction_payload(), beyond current type
checks, reject any payload where payload["amount"] <= 0 (ensure amount is an int
> 0) and enforce receiver format when payload.get("receiver") is not None by
requiring isinstance(receiver, str) and that it matches your expected address
validation (call an existing validate_address()/is_valid_address() helper or add
a simple non-empty/pattern check); update the function to return False for
invalid amount or receiver format so network-submitted transactions can't
contain negative/zero amounts or malformed receiver addresses.
In `@tests/test_protocol_hardening.py`:
- Around line 78-87: Update the test_remove_transactions_keeps_other_pending to
assert the actual surviving transaction rather than just the length: after
calling mempool.remove_transactions([tx0]) verify that the remaining entry is
tx1 (e.g., by checking membership/peeking the queue or retrieving via the
mempool API) and then advance the state past nonce 0 and assert tx1 becomes
selectable (use the mempool selection method or is_selectable helper) to ensure
tx1 is truly kept pending and becomes available when nonce 0 is consumed;
reference Mempool, add_transaction, remove_transactions and the
test_remove_transactions_keeps_other_pending test to locate where to add these
checks.
- Line 34: Annotate the helper function _signed_tx with a return type by
changing its signature to include -> Transaction so it satisfies ANN202; ensure
the symbol Transaction is available in the module (import it or reference it
fully-qualified) and update any test stubs or type-only imports if needed so the
test file compiles type-checker/ruff-wise.
- Around line 117-148: The test currently reuses the same dict objects for
duplicate checks, which can hide bugs that rely on object identity or in-place
mutation; modify the test so after calling network._mark_seen("tx",
tx_message["data"]) and network._mark_seen("block", block_message["data"]) you
call network._is_duplicate with newly constructed but equivalent payload dicts
(e.g., rebuild the tx and block data dicts rather than reusing
tx_message["data"] / block_message["data"]) to ensure _is_duplicate and
_mark_seen behavior is based on content, not object identity or mutation.
- Around line 45-77: Add a test that covers the new per-block transaction cap by
creating a Mempool, inserting more ready transactions than the cap, calling
Mempool.get_transactions_for_block(self.state) and asserting it returns exactly
the cap number of transactions while leaving the rest in the mempool; use the
same helpers as existing tests (_signed_tx, self.state.apply_transaction) and
reference the Mempool class and get_transactions_for_block method to locate
where to add the test, and if a named cap constant exists (e.g.,
TRANSACTIONS_PER_BLOCK, MAX_TX_PER_BLOCK) use it to drive the expected count
instead of hardcoding a value.
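A self-contained version of such a cap test could look like the following; the cap constant and the simplified pool are stand-ins for the real `MAX_TRANSACTIONS_PER_BLOCK` and `Mempool`.

```python
MAX_TRANSACTIONS_PER_BLOCK = 3  # assumed cap constant


class CappedPool:
    def __init__(self, pending):
        self.pending = list(pending)

    def get_transactions_for_block(self):
        # Return at most the cap, leaving the rest pending
        return self.pending[:MAX_TRANSACTIONS_PER_BLOCK]


def test_block_cap():
    pool = CappedPool(range(MAX_TRANSACTIONS_PER_BLOCK + 2))
    picked = pool.get_transactions_for_block()
    assert len(picked) == MAX_TRANSACTIONS_PER_BLOCK
    # selection is read-only: the overflow stays in the pool
    assert len(pool.pending) == MAX_TRANSACTIONS_PER_BLOCK + 2
```

Driving the expected count from the constant keeps the test valid if the cap is later tuned.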
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: cd0e9e3e-4cc8-444f-b316-8bc57158a0a3
📒 Files selected for processing (9)
main.py, minichain/block.py, minichain/contract.py, minichain/mempool.py, minichain/p2p.py, minichain/pow.py, minichain/serialization.py, minichain/transaction.py, tests/test_protocol_hardening.py
Actionable comments posted: 5
♻️ Duplicate comments (1)
minichain/contract.py (1)
10-12: 🧹 Nitpick | 🔵 Trivial: Consider caching the environment variable read.
The environment variable is read on every contract execution. For consistency within a node's lifetime and a slight performance improvement, consider caching the result at module load time:

```diff
-def _allow_unrestricted_contracts() -> bool:
-    value = os.getenv("MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS", "")
-    return value.lower() in {"1", "true", "yes", "on"}
+_ALLOW_UNRESTRICTED_CONTRACTS: bool = os.getenv(
+    "MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS", ""
+).lower() in {"1", "true", "yes", "on"}
+
+def _allow_unrestricted_contracts() -> bool:
+    return _ALLOW_UNRESTRICTED_CONTRACTS
```

This also prevents mid-execution behavior changes if someone modifies the environment variable while the node is running.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@minichain/contract.py` around lines 10 - 12, The helper _allow_unrestricted_contracts currently reads MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS on every call; compute and cache the boolean at module import instead by reading os.getenv("MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS", "") once into a module-level constant (e.g., _CACHED_ALLOW_UNRESTRICTED) and then have _allow_unrestricted_contracts() simply return that constant; this ensures consistent behavior for the node lifetime and avoids repeated env access.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@minichain/block.py`:
- Around line 34-52: The linter warns about using getattr with constant
attribute names in _transaction_leaf; update the checks to use direct attribute
access after confirming existence with hasattr—e.g., replace
callable(getattr(tx, "get_leaf_digest")) and getattr(tx, "get_leaf_digest") with
callable(tx.get_leaf_digest) and tx.get_leaf_digest(), similarly use tx.digest
(not getattr) and tx.to_dict() when hasattr confirms presence, keeping the same
behavior that falls back to _sha256(json.dumps(tx.to_dict(), sort_keys=True))
and finally tx.tx_id.
In `@minichain/contract.py`:
- Around line 28-37: The current conditional in the contract resource limit
handler uses _allow_unrestricted_contracts() to skip failing on setrlimit
errors, which can cause consensus divergence; change this by (1) emitting a
conspicuous startup warning when _allow_unrestricted_contracts() is true (so
operators know it’s unsafe), and (2) enforce fail-closed behavior for networked
validation paths by treating setrlimit failures as hard errors when processing
remote blocks/transactions (the same path that uses setrlimit for contract
execution and the validation logic in chain.py); update the code that checks
MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS to only permit bypass for
local/mining-only execution (and not for block validation) and add a clear
startup log message referencing MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS and
_allow_unrestricted_contracts().
In `@minichain/mempool.py`:
- Around line 83-86: The current branch in mempool.py silently returns an empty
list when max_count is invalid (non-int or <= 0); instead, validate max_count at
the start of the selection logic (the block that slices `selected` using
`selected[:max_count]`) and raise a ValueError with a clear message like
"max_count must be a positive integer" when it's not an int or is <= 0, or
optionally log a warning before raising; update the code around the `if
max_count is not None:` check to perform this validation and then proceed to
slice `selected` only for valid values.
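The validate-then-slice shape could be sketched like this (function name is illustrative; the `bool` exclusion guards against `True` sneaking through the `int` check):

```python
def take_for_block(selected: list, max_count=None) -> list:
    """Slice the selection, rejecting invalid caps instead of returning []."""
    if max_count is not None:
        if not isinstance(max_count, int) or isinstance(max_count, bool) or max_count <= 0:
            raise ValueError("max_count must be a positive integer")
        return selected[:max_count]
    return list(selected)
```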
- Around line 21-28: The fallback branch in _expected_nonce_for_sender is dead
(state is always provided); change the function to require state (remove default
None) and update all callers (e.g., get_transactions_for_block and any tests
that call _expected_nonce_for_sender) to pass the state explicitly; ensure
signature of _expected_nonce_for_sender no longer accepts None and add a brief
docstring/typing that state is required so callers are updated accordingly.
In `@minichain/p2p.py`:
- Around line 327-329: The except block catching (TypeError, ValueError) named
exc should log the full traceback instead of dropping it; replace the
logger.error("Network: Failed to serialize tx: %s", exc) call in the except
(TypeError, ValueError) as exc block with a call that preserves the stack (e.g.,
logger.exception("Network: Failed to serialize tx") or logger.error("Network:
Failed to serialize tx", exc_info=True)); keep the same message/context and
return behavior so serialization failures still exit early.
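The traceback-preserving form can be sketched as below; `serialize_tx` and the logger name are placeholders for the real broadcast path.

```python
import json
import logging

logger = logging.getLogger("minichain.p2p")


def serialize_tx(tx):
    try:
        return json.dumps(tx, sort_keys=True)
    except (TypeError, ValueError):
        # logger.exception records the full traceback, unlike "%s"-formatting
        # the exception; the early-exit behavior is unchanged
        logger.exception("Network: Failed to serialize tx")
        return None
```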
---
Duplicate comments:
In `@minichain/contract.py`:
- Around line 10-12: The helper _allow_unrestricted_contracts currently reads
MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS on every call; compute and cache the
boolean at module import instead by reading
os.getenv("MINICHAIN_ALLOW_UNRESTRICTED_CONTRACTS", "") once into a module-level
constant (e.g., _CACHED_ALLOW_UNRESTRICTED) and then have
_allow_unrestricted_contracts() simply return that constant; this ensures
consistent behavior for the node lifetime and avoids repeated env access.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: c2ad5f87-5710-43e3-b004-496a1a38c746
📒 Files selected for processing (6)
main.py, minichain/block.py, minichain/contract.py, minichain/mempool.py, minichain/p2p.py, tests/test_protocol_hardening.py
Addressed Issues:
Screenshots/Recordings:
Additional Notes:
AI Usage Disclosure:
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact. AI slop is strongly discouraged and may lead to banning and blocking. Do not spam our repos with AI slop.
Check one of the checkboxes below:
I have used the following AI models and tools: NA
Checklist
Summary by CodeRabbit
New Features
Bug Fixes
Tests