## Issue Addressed
Automates a build and push to Antithesis servers on merges to `unstable`. They run tests against Lighthouse daily and have requested more frequent pushes. Currently we just manually push stable images when we have a new release.
## Proposed Changes
- Add a `Dockerfile.libvoidstar`
- Add the `libvoidstar.so` binary
- Add a new workflow to automatically build and push on merges to `unstable`
## Additional Info
Requires adding the following secrets:
- `ANTITHESIS_USERNAME`
- `ANTITHESIS_PASSWORD`
- `ANTITHESIS_REPOSITORY`
- `ANTITHESIS_SERVER`
Tested here: https://github.com/realbigsean/lighthouse/actions/runs/1612821446
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
## Issue Addressed
NA
## Proposed Changes
We have observed occasions where under-resourced nodes will receive messages that were valid *at the time*, but later become invalidated due to long waits for a `BeaconProcessor` worker.
In this PR, we will check to see if the message was valid *at the time of receipt*. If it was initially valid but invalid now, we just ignore the message without penalizing the peer.
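For illustration, a minimal sketch of the intended behaviour, using simplified stand-in types rather than Lighthouse's actual gossip verification API:
```rust
/// Simplified stand-in for the real gossip verification outcome.
enum GossipOutcome {
    Accept,
    /// The message only became invalid while queued; the peer did nothing wrong.
    IgnoreWithoutPenalty,
    RejectAndPenalize,
}

/// Classify a queued message using its validity both now and at receipt time.
fn classify(valid_at_receipt: bool, valid_now: bool) -> GossipOutcome {
    match (valid_at_receipt, valid_now) {
        (_, true) => GossipOutcome::Accept,
        (true, false) => GossipOutcome::IgnoreWithoutPenalty,
        (false, false) => GossipOutcome::RejectAndPenalize,
    }
}

fn main() {
    // A message that was valid on arrival but expired while waiting for a worker:
    assert!(matches!(classify(true, false), GossipOutcome::IgnoreWithoutPenalty));
}
```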
## Additional Info
NA
## Issue Addressed
Restore compatibility between Lighthouse v2.0.1 VC and `unstable` BN in preparation for the next release.
## Proposed Changes
* Don't serialize the `PROPOSER_SCORE_BOOST` as `null`, because it breaks the `extra_fields: HashMap<String, String>` used by the v2.0.1 VC (see the sketch below).
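For reference, a toy serde sketch of the general technique (assuming `serde` and `serde_json` as dependencies; the struct is illustrative, not Lighthouse's actual config type): skipping `None` fields keeps them out of the JSON entirely instead of emitting `null`.
```rust
use serde::Serialize;

#[derive(Serialize)]
struct ConfigSketch {
    seconds_per_slot: u64,
    // Omit the field entirely when unset, rather than serializing `null`.
    #[serde(skip_serializing_if = "Option::is_none")]
    proposer_score_boost: Option<u64>,
}

fn main() {
    let spec = ConfigSketch { seconds_per_slot: 12, proposer_score_boost: None };
    // Prints `{"seconds_per_slot":12}` -- no `"proposer_score_boost":null`.
    println!("{}", serde_json::to_string(&spec).unwrap());
}
```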
## Issue Addressed
NA
## Proposed Changes
I've observed some Prater nodes (and potentially some mainnet nodes) banning peers due to validator pubkey cache lock timeouts. These `BeaconChainError`-type errors are caused by internal faults, so we can't necessarily tell whether the peer is bad or not. I think this is causing us to ban peers unnecessarily when running on under-resourced machines.
## Additional Info
NA
## Issue Addressed
NA
## Proposed Changes
Introduces a cache to allow attestations to be produced atop blocks which will become the head, but are not yet fully imported (e.g., not inserted into the database).
Whilst attesting to a block before it's imported is rather easy, if we're going to produce that attestation then we also need to be able to:
1. Verify that attestation.
1. Respond to RPC requests for the `beacon_block_root`.
Attestation verification (1) is *partially* covered. Since we prime the shuffling cache before we insert the block into the early attester cache, we should be fine for all typical use-cases. However, it is possible that the cache is washed out before we've managed to insert the state into the database, in which case attestation verification will fail with a "missing beacon state"-type error.
Providing the block via RPC (2) is also partially covered, since we'll check the database *and* the early attester cache when responding to a blocks-by-root request. However, we'll still omit the block from blocks-by-range requests (until the block lands in the DB). I *think* this is fine, since there's no guarantee that we return all blocks for those responses.
Another important consideration is whether or not the *parent* of the early attester block is available in the database. If it were not, we might fail to respond to blocks-by-root requests that are iterating backwards to collect a chain of blocks. I argue that *we will always have the parent of the early attester block in the database.* This is because we are holding the fork-choice write-lock when inserting the block into the early attester cache and we do not drop that lock until the block is in the database.
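For clarity, a toy sketch of the blocks-by-root lookup order described above, with illustrative types rather than Lighthouse's actual API:
```rust
use std::collections::HashMap;

type Root = [u8; 32];
type Block = Vec<u8>; // stand-in for a signed beacon block

struct Chain {
    database: HashMap<Root, Block>,
    early_attester_cache: HashMap<Root, Block>,
}

impl Chain {
    /// Serve a blocks-by-root lookup: check the database first, then fall
    /// back to the early attester cache for blocks not yet fully imported.
    fn block_by_root(&self, root: &Root) -> Option<&Block> {
        self.database
            .get(root)
            .or_else(|| self.early_attester_cache.get(root))
    }
}

fn main() {
    let chain = Chain {
        database: HashMap::new(),
        early_attester_cache: HashMap::new(),
    };
    assert!(chain.block_by_root(&[0u8; 32]).is_none());
}
```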
## Issue Addressed
The current `lcli` block packing code has an off-by-one where it would include an extra slot (the oldest slot) of attestations as "available" (this means there would be 33 slots of "available" attestations instead of 32).
There are typically only a single-digit number of attestations remaining from that slot, so this does not cause a significant change to the results, although every efficiency will have been very slightly under-reported.
## Proposed Changes
Prune the `available_attestation_set` before writing out the data instead of after.
## Additional Info
This `lcli` code will soon be deprecated by a Lighthouse API (#2879) which will run significantly faster and will be used to hook into our upcoming monitoring platform #2873.
## Issue Addressed
The `--fee-recipient` argument of the beacon node does not allow a value to be specified:
> $ lighthouse beacon_node --merge --fee-recipient "0x332E43696A505EF45b9319973785F837ce5267b9"
> error: Found argument '0x332E43696A505EF45b9319973785F837ce5267b9' which wasn't expected, or isn't valid in this context
>
> USAGE:
> lighthouse beacon_node --fee-recipient --merge
>
> For more information try --help
## Proposed Changes
Allow specifying a value for the `--fee-recipient` argument in `beacon_node/src/cli.rs`.
## Additional Info
I've added `.takes_value(true)` and successfully proposed a block on the Kintsugi testnet with my own fee-recipient address instead of the hardcoded default. I think that was just missed, as the argument does not make sense without a value :)
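For context, a minimal clap sketch of the fix (assuming clap 2.x as a dependency; the `--merge` flag and surrounding setup are illustrative, not the exact Lighthouse CLI definition):
```rust
use clap::{App, Arg};

fn main() {
    let matches = App::new("beacon_node")
        .arg(Arg::with_name("merge").long("merge"))
        .arg(
            Arg::with_name("fee-recipient")
                .long("fee-recipient")
                .value_name("ADDRESS")
                // Without `.takes_value(true)`, clap treats the flag as a
                // boolean and rejects the address supplied after it.
                .takes_value(true),
        )
        .get_matches();

    if let Some(addr) = matches.value_of("fee-recipient") {
        println!("fee recipient: {}", addr);
    }
}
```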
Co-authored-by: pk910 <philipp@pk910.de>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
## Issue Addressed
NA
## Proposed Changes
Ensures full roots are printed, rather than shortened versions like `0x935b…d376`.
For example, it would be nice if we could do API queries based upon the roots shown in the `Beacon chain re-org` event:
```
Jan 05 12:36:52.224 WARN Beacon chain re-org reorg_distance: 2, new_slot: 2073184, new_head: 0x8a97…2dec, new_head_parent: 0xa985…7688, previous_slot: 2073183, previous_head: 0x935b…d376, service: beacon
Jan 05 13:35:05.832 WARN Beacon chain re-org reorg_distance: 1, new_slot: 2073475, new_head: 0x9207…c6b9, new_head_parent: 0xb2ce…839b, previous_slot: 2073474, previous_head: 0x8066…92f7, service: beacon
```
## Additional Info
We should eventually fix this project-wide, however this is a short-term patch.
## Proposed Changes
Update `superstruct` to bring in @realbigsean's fixes necessary for MEV-compatible private beacon block types (a la #2795).
The refactoring is due to another change in superstruct that allows partial getters to be auto-generated.
## Issue Addressed
There was an overeager assert in the import of slashing protection data here:
fff01b24dd/validator_client/slashing_protection/src/slashing_database.rs (L939)
We were asserting that if the import contained any blocks for a validator, then the database should contain only a single block for that validator due to pruning/consolidation. However, we would only prune if the import contained _relevant blocks_ (that would actually change the maximum slot):
fff01b24dd/validator_client/slashing_protection/src/slashing_database.rs (L629-L633)
This led to spurious failures (in the form of `ConsistencyError`s) when importing an interchange containing no new blocks for any of the validators. This wasn't hard to trigger, e.g. exporting and then immediately re-importing the same file.
## Proposed Changes
This PR fixes the issue by simplifying the import so that it's more like the import for attestations. I.e. we make the assert true by always pruning when the imported file contains blocks.
In practice this doesn't have any downsides: if we import a new block then the behaviour is as before, except that we drop the `signing_root`. If we import an existing block or an old block then we prune the database to a single block. The only time this would be relevant is during extreme clock drift locally _plus_ import of a non-drifted interchange, which should occur infrequently.
## Additional Info
I've also added `Arbitrary` implementations to the slashing protection types so that we can fuzz them. I have a fuzzer sitting in a separate directory which I may or may not commit in a subsequent PR.
There's a new test in the standard interchange tests v5.2.1 that checks for this issue: https://github.com/eth-clients/slashing-protection-interchange-tests/pull/12
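As a sketch of how the `Arbitrary` derive enables fuzzing (the struct below is a toy stand-in, not the real slashing protection types, and assumes the `arbitrary` crate with its derive feature):
```rust
use arbitrary::{Arbitrary, Unstructured};

#[derive(Debug, Arbitrary)]
struct InterchangeBlockSketch {
    slot: u64,
    signing_root: Option<[u8; 32]>,
}

fn main() {
    // A fuzzer would supply these bytes; `Arbitrary` turns them into a
    // structured value we can feed to the import logic under test.
    let raw = [7u8; 64];
    let mut u = Unstructured::new(&raw);
    let block = InterchangeBlockSketch::arbitrary(&mut u).unwrap();
    println!("{:?}", block);
}
```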
## Issue Addressed
N/A
## Proposed Changes
We are currently treating errors from the EL on `engine_executePayload` as `PayloadVerificationStatus::NotVerified`. This adds the block as a candidate head block in fork choice even if the EL explicitly rejected the block as invalid.
`PayloadVerificationStatus::NotVerified` should only be returned when the EL explicitly returns "syncing", imo. This PR propagates an error instead of returning `NotVerified` on all EL errors.
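A simplified sketch of the intended mapping, with stand-in enums rather than the real engine-API and fork choice types:
```rust
enum ElResponse {
    Valid,
    Invalid,
    Syncing,
}

enum PayloadVerificationStatus {
    Verified,
    NotVerified,
}

/// Map the EL's response: only an explicit "syncing" becomes `NotVerified`;
/// explicit rejections and transport/EL errors are propagated as errors.
fn verify_payload(
    response: Result<ElResponse, String>,
) -> Result<PayloadVerificationStatus, String> {
    match response {
        Ok(ElResponse::Valid) => Ok(PayloadVerificationStatus::Verified),
        Ok(ElResponse::Syncing) => Ok(PayloadVerificationStatus::NotVerified),
        // Do not downgrade an explicit rejection to `NotVerified`.
        Ok(ElResponse::Invalid) => Err("execution layer rejected the payload".to_string()),
        Err(e) => Err(e),
    }
}

fn main() {
    assert!(verify_payload(Err("connection refused".to_string())).is_err());
}
```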
## Issue Addressed
Resolves #2854
## Proposed Changes
If a validator was first imported without entering a password and is then imported again with a valid password, update the password in `validator_definitions.yml`.
## Additional Info
There may be other cases where an existing validator should be updated during import; they are not covered here.
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
Closes #2286, Closes #2538, Closes #2342
## Proposed Changes
Part II of major slasher optimisations after #2767
These changes will be backwards-incompatible due to the move to MDBX (and the schema change) 😱
* [x] Shrink attester keys from 16 bytes to 7 bytes.
* [x] Shrink attester records from 64 bytes to 6 bytes.
* [x] Separate `DiskConfig` from regular `Config`.
* [x] Add configuration for the LRU cache size.
* [x] Add a "migration" that deletes any legacy LMDB database.
## Issue Addressed
Successor to #2431
## Proposed Changes
* Add a `BlockReplayer` struct to abstract over the intricacies of calling `per_slot_processing` and `per_block_processing` while avoiding unnecessary tree hashing.
* Add a variant of the forwards state root iterator that does not require an `end_state`.
* Use the `BlockReplayer` when reconstructing states in the database. Use the efficient forwards iterator for frozen states.
* Refactor the iterators to remove `Arc<HotColdDB>` (this seems to be neater than making _everything_ an `Arc<HotColdDB>` as I did in #2431).
Supplying the state roots allows us to avoid building a tree hash cache at all when reconstructing historic states, which saves around 1 second flat (regardless of `slots-per-restore-point`). This is a small percentage of worst-case state load times with 200K validators and SPRP=2048 (~15s vs ~16s), but a significant speed-up for more frequent restore points: state loads with SPRP=32 should now be consistently <500ms instead of 1.5s (a ~3x speedup).
## Additional Info
Required by https://github.com/sigp/lighthouse/pull/2628
## Issue Addressed
#2834
## Proposed Changes
Change log message severity from error to debug in attestation verification when attestation state is finalized.
## Proposed Changes
Add `mallinfo2` behind a feature flag so that we can get accurate memory metrics during debugging. It can be enabled when building Lighthouse like so (so long as the platform supports it):
```
cargo install --path lighthouse --features "malloc_utils/mallinfo2"
```
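For reference, a minimal sketch (not Lighthouse's `malloc_utils` implementation) of reading allocator statistics via `mallinfo2`, assuming Linux with glibc >= 2.33 and a `libc` crate version that exposes it:
```rust
#[cfg(all(target_os = "linux", target_env = "gnu"))]
fn main() {
    // SAFETY: `mallinfo2` takes no arguments and returns a plain struct by value.
    let info = unsafe { libc::mallinfo2() };
    println!(
        "allocated: {} bytes, free: {} bytes, mmapped: {} bytes",
        info.uordblks, info.fordblks, info.hblkhd
    );
}

#[cfg(not(all(target_os = "linux", target_env = "gnu")))]
fn main() {
    eprintln!("mallinfo2 is only available on Linux with glibc");
}
```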
## Issue Addressed
- Resolves #2778
## Proposed Changes
Updates docker images from Buster (10) to Bullseye (11), since Bullseye is [listed](https://www.debian.org/releases/) as the "current stable release".
## Additional Info
NA
## Issue Addressed
#2841
## Proposed Changes
Don't count dialing peers when deciding whether we have reached the target number of outbound peers.
## Additional Info
Checked this running on nodes and bandwidth looks normal; peer count looks normal too.
## Proposed Changes
Remove the `is_first_block_in_epoch` logic from the balances cache update logic, as it was incorrect in the case of skipped slots. The updated code is simpler because, regardless of whether the block is the first in the epoch, we can check whether an entry for the epoch boundary root already exists in the cache and update the cache accordingly.
Additionally, to assist with flip-flopping justified epochs, move to cloning the balances cache rather than moving it. This should still be very fast in practice because the balances cache is a ~1.6 MB `Vec`, and this operation is expected to occur only infrequently.
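A toy sketch of the simplified update path, using illustrative types rather than the real `BalancesCache`: keying the entry on the epoch boundary root means it is inserted at most once per epoch, with no special-casing of the first block in the epoch.
```rust
use std::collections::HashMap;

type Hash256 = [u8; 32];

// `Clone` matches the move to cloning the cache rather than moving it.
#[derive(Clone, Default)]
struct BalancesCacheSketch {
    entries: HashMap<Hash256, Vec<u64>>,
}

impl BalancesCacheSketch {
    fn process_state(&mut self, epoch_boundary_root: Hash256, balances: &[u64]) {
        // Insert only if no entry exists for this epoch boundary root; this is
        // correct even across skipped slots.
        self.entries
            .entry(epoch_boundary_root)
            .or_insert_with(|| balances.to_vec());
    }
}

fn main() {
    let mut cache = BalancesCacheSketch::default();
    cache.process_state([1u8; 32], &[32, 32, 31]);
    cache.process_state([1u8; 32], &[0, 0, 0]); // ignored: entry already present
    assert_eq!(cache.entries[&[1u8; 32]], vec![32, 32, 31]);
}
```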
## Issue Addressed
Resolves: https://github.com/sigp/lighthouse/issues/2741
Includes: https://github.com/sigp/lighthouse/pull/2853 so that we can get SSZ static tests passing here on v1.1.6. If we want to merge that first, we can make this diff slightly smaller.
## Proposed Changes
- Changes the `justified_epoch` and `finalized_epoch` in the `ProtoArrayNode` each to an `Option<Checkpoint>`. The `Option` is only necessary for the migration, so it's not ideal, but it does allow us to default these fields to `None` during the database migration.
- Adds a database migration from a legacy fork choice struct to the new one, searching for all necessary block roots in fork choice by iterating through the blocks in the DB.
- Updates related to https://github.com/ethereum/consensus-specs/pull/2727
- We will have to update the persisted forkchoice to make sure the justified checkpoint stored is correct according to the updated fork choice logic. This boils down to setting the forkchoice store's justified checkpoint to the justified checkpoint of the block that advanced the finalized checkpoint to the current one.
- AFAICT there are no migration steps necessary for the update to allow applying attestations from prior blocks, but I would appreciate confirmation on that
- I updated the consensus spec tests to v1.1.6 here, but they will fail until we also implement the proposer score boost updates. I confirmed that the previously failing scenario `new_finalized_slot_is_justified_checkpoint_ancestor` will now pass after the boost updates, but haven't confirmed _all_ tests will pass because I just quickly stubbed out the proposer boost test scenario formatting.
- This PR now also includes proposer boosting https://github.com/ethereum/consensus-specs/pull/2730
## Additional Info
I realized checking justified and finalized roots in fork choice makes it more likely that we trigger this bug: https://github.com/ethereum/consensus-specs/pull/2727
It's possible the combination of justified checkpoint and finalized checkpoint in the forkchoice store is different from that in any block in fork choice. So, on startup, our store's justified checkpoint seems invalid to the rest of fork choice (but it should be valid). When this happens we get an `InvalidBestNode` error and fail to start up. So I'm including that bugfix in this branch.
Todo:
- [x] Fix fork choice tests
- [x] Self review
- [x] Add fix for https://github.com/ethereum/consensus-specs/pull/2727
- [x] Rebase onto Kintsugi
- [x] Fix `num_active_validators` calculation as @michaelsproul pointed out
- [x] Clean up db migrations
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
N/A
## Proposed Changes
Changes required for `merge-devnet-3`. Added some more non-substantive renames on top of @realbigsean's commit.
Note: this doesn't include the proposer boosting changes in Kintsugi v3.
This devnet isn't running with the proposer boosting fork choice changes, so if we are looking to merge https://github.com/sigp/lighthouse/pull/2822 into `unstable`, then I think we should just maintain this branch for the devnet temporarily.
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
## Issue Addressed
The version of `rusqlite` that we were depending on has been yanked due to a vulnerability. The vulnerability only affects `update_hook`, which we don't use in Lighthouse.
There is no need to push a release -- users are safe to ignore this warning.
## Additional Info
Incoming advisory: https://github.com/rustsec/advisory-db/pull/1117
## Proposed Changes
In the event of a late block, keep the block in the snapshot cache by cloning it. This helps us process new blocks quickly in the event the late block was re-org'd.
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
## Issue Addressed
We were calculating justified balances incorrectly on cache misses in `set_justified_checkpoint`
## Proposed Changes
Use the `get_effective_balances` method as opposed to `state.balances`, which returns exact balances.
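For context, a toy sketch of the distinction (not Lighthouse's actual types): justified balances should use the *effective* balances of active validators, whereas `state.balances` holds exact balances.
```rust
struct ValidatorSketch {
    effective_balance: u64,
    is_active: bool,
}

/// Effective balances of active validators (inactive validators count as 0),
/// as opposed to the exact balances stored in `state.balances`.
fn effective_balances(validators: &[ValidatorSketch]) -> Vec<u64> {
    validators
        .iter()
        .map(|v| if v.is_active { v.effective_balance } else { 0 })
        .collect()
}

fn main() {
    let validators = [
        ValidatorSketch { effective_balance: 32_000_000_000, is_active: true },
        ValidatorSketch { effective_balance: 32_000_000_000, is_active: false },
    ];
    assert_eq!(effective_balances(&validators), vec![32_000_000_000, 0]);
}
```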
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
New rust lints
## Proposed Changes
- Boxing some enum variants
- Removing some unused fields (is the validator lockfile unused? seemed so to me)
## Additional Info
- Some error fields were marked as dead code but are logged in some places
- Left some dead fields in our EF test code because I assume they are useful for debugging?
Co-authored-by: realbigsean <seananderson33@gmail.com>
Fix max packet sizes
Fix max_payload_size function
Add merge block test
Fix max size calculation; fix up test
Clear comments
Add a payload_size_function
Use safe arith for payload calculation
Return an error if block too big in block production
Separate test to check if block is over limit
* Fix fork choice after rebase
* Remove paulhauner warp dep
* Fix fork choice test compile errors
* Assume fork choice payloads are valid
* Add comment
* Ignore new tests
* Fix error in test skipping