## Issue Addressed
NA
## Proposed Changes
Adds metrics to track validators that are submitting equivocating (but not slashable) sync messages. This follows on from some research we've been doing in a separate fork of LH.
## Additional Info
@jimmygchen and @michaelsproul have already run their eyes over this so it should be easy to get into v4.2.0, IMO.
## Issue Addressed
#4281
## Proposed Changes
- Change the `ShufflingCache` implementation from an `LruCache` to a custom cache that evicts the entry with the lowest epoch instead of the oldest insertion time.
- Protect the "enshrined" head shufflings when inserting new committee cache entries. The shuffling ids matching the head's previous, current, and future epochs will never be ejected from the cache during `Self::insert_cache_item`.
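A minimal sketch of the new eviction rule, using simplified stand-in types rather than the real Lighthouse ones (the actual cache is more involved):
```rust
use std::collections::HashMap;

/// Simplified stand-ins for the real types; epoch and decision-block fields only.
#[derive(Clone, PartialEq, Eq, Hash)]
struct AttestationShufflingId {
    shuffling_epoch: u64,
    shuffling_decision_block: [u8; 32],
}
struct CacheItem; // committee cache entry (elided)

struct ShufflingCache {
    cache: HashMap<AttestationShufflingId, CacheItem>,
    cache_size: usize,
    /// Shuffling ids for the head's previous, current and next epochs: never ejected.
    head_shuffling_ids: Vec<AttestationShufflingId>,
}

impl ShufflingCache {
    fn insert_cache_item(&mut self, id: AttestationShufflingId, item: CacheItem) {
        if self.cache.len() >= self.cache_size {
            // Evict the entry with the lowest epoch, skipping the protected head shufflings.
            let to_evict = self
                .cache
                .keys()
                .filter(|key| !self.head_shuffling_ids.contains(key))
                .min_by_key(|key| key.shuffling_epoch)
                .cloned();
            if let Some(key) = to_evict {
                self.cache.remove(&key);
            }
        }
        self.cache.insert(id, item);
    }
}
```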
## Additional Info
There is a bonus point on shuffling preferences in the issue description that hasn't been implemented yet, as I haven't figured out a good way to do this:
> However I'm not convinced since there are some complexities around tie-breaking when two entries have the same epoch. Perhaps preferring entries in the canonical chain is best?
We should be able to check if a block is on the canonical chain by:
```rust
canonical_head
.fork_choice_read_lock()
.contains_block(root)
```
However we need to interleave the shuffling and fork choice locks, which may cause deadlocks if we're not careful (mentioned by @paulhauner). Alternatively, we could use the `state.block_roots` field of the `chain.canonical_head.snapshot.beacon_state`, which avoids deadlock but requires more work.
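For reference, a rough sketch of that alternative, assuming we already hold a reference to the head state (e.g. via `chain.canonical_head.snapshot.beacon_state`) and using the `BeaconState::get_block_root` accessor; note that `state.block_roots` is a circular buffer, so this only covers the most recent `SLOTS_PER_HISTORICAL_ROOT` slots:
```rust
use types::{BeaconState, EthSpec, Hash256, Slot};

/// Returns `true` if `block_root` is the canonical block at `block_slot`,
/// judged against the head state's recent block roots. Blocks older than the
/// `block_roots` buffer (or on a different branch) return `false`.
fn is_in_head_chain<E: EthSpec>(
    head_state: &BeaconState<E>,
    block_root: Hash256,
    block_slot: Slot,
) -> bool {
    head_state
        .get_block_root(block_slot)
        .map_or(false, |root| *root == block_root)
}
```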
I'd like to get some review feedback and testing before I dig deeper into the preference logic, as protecting the head shufflings may already go a long way towards preventing the issue raised.
Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
## Issue Addressed
Closes https://github.com/sigp/lighthouse/issues/4291, part of #3613.
## Proposed Changes
- Implement the `el_offline` field on `/eth/v1/node/syncing`. We set `el_offline=true` if:
- The EL's internal status is `Offline` or `AuthFailed`, _or_
- The most recent call to `newPayload` resulted in an error (more on this in a moment).
- Use the `el_offline` field in the VC to mark nodes with offline ELs as _unsynced_. These nodes will still be used, but only after synced nodes.
- Overhaul the usage of `RequireSynced` so that `::No` is used almost everywhere. The `--allow-unsynced` flag was broken and had the opposite of its intended effect, so it has been deprecated.
- Add tests for the EL being offline on the upcheck call, and being offline due to the newPayload check.
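As a rough sketch, the `el_offline` derivation described in the first point boils down to something like this (the `EngineState` variants and the error flag are illustrative names, not necessarily the exact ones in the code):
```rust
/// Illustrative engine status, not the exact upstream enum.
#[derive(Clone, Copy, PartialEq)]
enum EngineState {
    Synced,
    Syncing,
    Offline,
    AuthFailed,
}

/// `el_offline` is true if the engine's cached state is bad, or if the most
/// recent `newPayload` call returned an error.
fn el_offline(state: EngineState, last_new_payload_errored: bool) -> bool {
    matches!(state, EngineState::Offline | EngineState::AuthFailed) || last_new_payload_errored
}
```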
## Why track `newPayload` errors?
Tracking the EL's online/offline status is too coarse-grained to be useful in practice, because:
- If the EL is timing out on some calls, it's unlikely to time out on the `upcheck` call, which is _just_ `eth_syncing`. Every failed call is followed by an upcheck [here](693886b941/beacon_node/execution_layer/src/engines.rs (L372-L380)), which would have the effect of masking the failure and keeping the status _online_.
- The `newPayload` call is the most likely to time out. It's the call in which ELs tend to do most of their work (often 1-2 seconds), with `forkchoiceUpdated` usually returning much faster (<50ms).
- If `newPayload` is failing consistently (e.g. timing out) then this is a good indication that either the node's EL is in trouble, or the network as a whole is. In the first case validator clients _should_ prefer other BNs if they have one available. In the second case, all of their BNs will likely report `el_offline` and they'll just have to proceed with trying to use them.
## Additional Changes
- Add utility method `ForkName::latest` which is quite convenient for test writing, but probably other things too.
- Delete some stale comments from when we used to support multiple execution nodes.
## Issue Addressed
N/A
## Proposed Changes
Modifies the local testnet scripts to start a network with genesis validators embedded into the genesis state. This allows us to start a local testnet without the need for deploying a deposit contract or depositing validators pre-genesis.
This also enables us to start a local test network at any fork we want without going through fork transitions. It additionally adds scripts that start multiple geth clients, peer them with each other, and peer them with the beacon nodes to form a post-merge local testnet.
## Additional info
Adds a new lcli command `mnemonics-validators` that generates validator directories derived from a given mnemonic.
Adds a new `derived-genesis-state` option to the `lcli new-testnet` command to generate a genesis state populated with validators derived from a mnemonic.
## Issue Addressed
#2335
## Proposed Changes
- Remove the `lighthouse-network::tests::gossipsub_tests` module
- Remove dead code from the `lighthouse-network::tests::common` helper module (`build_full_mesh`)
## Additional Info
After discussion with both @divagant-martian and @AgeManning, these tests seem to have two main issues in that they are:
- Redundant, in that they don't test anything meaningful (due to our handling of duplicate messages)
- Out-of-place, in that they don't really test Lighthouse-specific functionality (rather, libp2p functionality)
As such, this PR supersedes #4286.
## Issue Addressed
NA
## Proposed Changes
Adds an additional check to a feature introduced in #4179 to prevent us from re-queuing already-known blocks that could be rejected immediately.
## Additional Info
Ideally this would have been included in v4.1.0, however we came across it too late to release it safely. We decided that the safest path forward is to release *without* this check and then patch it in the next version. The lack of this check should only result in a very minor performance impact (the impact is totally negligible in my assessment).
## Issue Addressed
NA
## Proposed Changes
Adds a flag to store invalid blocks on disk for teh debugz. Only *some* invalid blocks are stored, those which:
- Were received via gossip (rather than RPC, for instance)
- This keeps things simple to start with and should capture most blocks.
- Passed gossip verification
- This reduces the ability for random people to fill up our disk. A proposer signature is required to write something to disk.
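A rough sketch of the write path (flag handling and helper names are illustrative): the write only happens for blocks that arrived over gossip, passed gossip verification and were subsequently found invalid.
```rust
use std::fs;
use std::path::Path;

/// Hypothetical helper: persist the SSZ bytes of a gossip-verified block that
/// later failed full verification. `invalid_block_dir` is `Some` only when the
/// new CLI flag is set.
fn store_invalid_block(
    invalid_block_dir: Option<&Path>,
    block_root: &str,
    block_ssz: &[u8],
) -> std::io::Result<()> {
    if let Some(dir) = invalid_block_dir {
        fs::create_dir_all(dir)?;
        fs::write(dir.join(format!("{block_root}.ssz")), block_ssz)?;
    }
    Ok(())
}
```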
## Additional Info
It's possible that we'll store blocks that aren't necessarily invalid but for which we had an internal error during verification. Those blocks seem like they might be useful sometimes.
## Issue Addressed
N/A
## Proposed Changes
Replace ganache-cli with anvil https://github.com/foundry-rs/foundry/blob/master/anvil/README.md
We can drop all JS dependencies in CI as a consequence.
## Additional info
Also changes the ethers-rs version used in the execution layer (for the transaction reconstruction) to a newer one. This was necessary to use the ethers utils for anvil. The fixed execution engine integration tests should catch any potential issues with the payload reconstruction after #3592.
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
## Issue Addressed
#4233
## Proposed Changes
Remove the `best_justified_checkpoint` from the `PersistedForkChoiceStore` type as it is now unused.
Additionally, remove the `Option`s wrapping the `justified_checkpoint` and `finalized_checkpoint` fields on `ProtoNode`, which were only present to facilitate a previous migration.
Include the necessary code to facilitate the migration to a new DB schema.
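A rough sketch of the per-node part of that migration, with heavily simplified stand-in types (the real schema change touches more fields):
```rust
#[derive(Clone, Copy)]
struct Checkpoint; // stand-in for the real `types::Checkpoint`

/// Legacy on-disk node: checkpoints were optional to support an older migration.
struct LegacyProtoNode {
    justified_checkpoint: Option<Checkpoint>,
    finalized_checkpoint: Option<Checkpoint>,
    // ...other fields elided...
}

/// New on-disk node: the checkpoints are always present.
struct ProtoNode {
    justified_checkpoint: Checkpoint,
    finalized_checkpoint: Checkpoint,
    // ...other fields elided...
}

/// The legacy `Option`s should always be `Some` on any database that completed
/// the earlier migration, so the upgrade fails loudly if they are not.
fn upgrade_node(legacy: LegacyProtoNode) -> Result<ProtoNode, String> {
    Ok(ProtoNode {
        justified_checkpoint: legacy
            .justified_checkpoint
            .ok_or("missing justified checkpoint")?,
        finalized_checkpoint: legacy
            .finalized_checkpoint
            .ok_or("missing finalized checkpoint")?,
    })
}
```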
## Issue Addressed
Addresses #4238
## Proposed Changes
- [x] Add tests for the scenarios
- [x] Use the fork of the attestation slot for signature verification.
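A rough sketch of the second item, assuming helpers along the lines of `ChainSpec::fork_at_epoch` and `Slot::epoch`: the fork is derived from the attestation's own slot rather than the current wall-clock epoch, so attestations straddling a fork boundary verify against the correct fork version.
```rust
use types::{AttestationData, ChainSpec, EthSpec, Fork};

/// Select the fork to use when verifying an attestation's signature, based on
/// the epoch of the attestation's slot rather than the chain's current epoch.
fn attestation_fork<E: EthSpec>(spec: &ChainSpec, data: &AttestationData) -> Fork {
    spec.fork_at_epoch(data.slot.epoch(E::slots_per_epoch()))
}
```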
## Issue Addressed
Update some broken links in Lighthouse Book
## Proposed Changes
The updated links now point to the correct sections of the book.
## Issue Addressed
Addresses #4234
## Proposed Changes
- Skip withdrawals processing in an inconsistent state replay.
- Repurpose `StateRootStrategy`: rename to `StateProcessingStrategy` and always skip withdrawals if using `StateProcessingStrategy::Inconsistent`
- Add a test to reproduce the scenario
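A rough sketch of the renamed enum and the withdrawals gate described above (variant names as I understand them; the surrounding plumbing is illustrative):
```rust
/// Renamed from `StateRootStrategy`.
#[derive(Clone, Copy, PartialEq)]
enum StateProcessingStrategy {
    /// Full processing with accurate state roots.
    Accurate,
    /// Replaying with inconsistent/unknown state roots:
    /// withdrawals processing is always skipped.
    Inconsistent,
}

/// Illustrative gate used during block replay.
fn should_process_withdrawals(strategy: StateProcessingStrategy) -> bool {
    strategy != StateProcessingStrategy::Inconsistent
}
```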
Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
## Issue Addressed
#4266
## Proposed Changes
- Log `Using external block builder` instead of `Connected to external block builder` on initialization, to avoid confusion (no actual connection is made at that point)
## Additional Info
The log is mentioned in the builder docs, so it has been updated there too.
This commit adds a check to the networking service when handling core gossipsub topic subscription requests. If the BN is already subscribed to the core topics, we won't attempt to resubscribe.
## Issue Addressed
#4258
## Proposed Changes
- In the networking service, check if we're already subscribed to all of the core gossipsub topics and, if so, do nothing
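A rough sketch of the guard with illustrative types (the real code operates on the network service's gossipsub topics):
```rust
use std::collections::HashSet;

/// Returns `true` if every core topic is already in the subscribed set, in
/// which case the subscription request is a no-op.
fn already_subscribed(core_topics: &[String], subscribed: &HashSet<String>) -> bool {
    core_topics.iter().all(|topic| subscribed.contains(topic))
}
```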
## Additional Info
N/A
## Limit Backfill Sync
This PR transitions Lighthouse from syncing all the way back to genesis to only syncing back to the weak subjectivity point (~ 5 months) when syncing via a checkpoint sync.
There are a number of important points to note with this PR:
- Firstly and most importantly, this PR fundamentally shifts the default security guarantees of checkpoint syncing in Lighthouse. Prior to this PR, Lighthouse could verify the checkpoint of any given chain by ensuring the chain eventually terminates at the corresponding genesis. This guarantee can still be obtained via the new CLI flag `--genesis-backfill`, which prompts Lighthouse to revert to the old behaviour of downloading all blocks back to genesis. The new behaviour only checks the proposer signatures for the last 5 months of blocks and cannot guarantee that the chain matches the genesis chain.
- I have not modified any of the peer scoring or RPC responses. Clients syncing from genesis will downscore new Lighthouse peers that do not possess blocks prior to the WSP. This is by design, as such Lighthouse nodes need a mechanism to sort through peers and find ones useful for completing their genesis sync. We therefore do not discriminate between empty/error responses for blocks prior to or after the local WSP. If we request a block that a peer does not possess, then fundamentally that peer is less useful to us than other peers.
- This is a radical shift in that the majority of nodes will no longer store the full history of the chain. In the future we could also add a pruning mechanism to remove old blocks from the DB.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
## Issue Addressed
#3873
## Proposed Changes
Add a cache to optimise historical state lookups.
## Additional Info
N/A
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
Update the Lighthouse Book to include the latest information, especially after the Capella upgrade.
## Proposed Changes
Notable changes:
- Combine Sec 4.1 & 6.1 into Sec 4, because Sec 6.1 covers importing validator keys, which is a required step when running a validator
- Combine Sec 5.1 & 5.2 with Sec 5, and move Sec 5 to under Sec 9
- Added partial withdrawals in Sec 6
Co-authored-by: chonghe <tanck2005@gmail.com>
## Issue Addressed
This PR un-deprecates some commonly used test util functions, e.g. `extend_chain`. Most of these were deprecated in 2020 but some of us still found them quite convenient and they're still being used a lot. If there's no issue with using them, I think we should remove the "Deprecated" comment to avoid confusion.
## Proposed Changes
- Allow Docker images to be built with different profiles via e.g. `--build-arg PROFILE=maxperf`.
- Include the build profile in `lighthouse --version`.
## Additional Info
This only affects Docker images built from source. Our published Docker images use `cross`-compiled binaries that get copied into place.
## Issue Addressed
#4150
## Proposed Changes
Maintain trusted peers in the pruning logic. ~~In principle the changes here are not necessary as a trusted peer has a max score (100) and all other peers can have at most 0 (because we don't implement positive scores). This means that we should never prune trusted peers unless we have more trusted peers than the target peer count.~~
This change shifts the logic to explicitly never prune trusted peers, which I expect is the intuitive behaviour.
~~I suspect the issue in #4150 arises when a trusted peer disconnects from us for one reason or another and then we remove that peer from our peerdb as it becomes stale. When it re-connects at some large time later, it is no longer a trusted peer.~~
Currently we do disconnect trusted peers, and this PR corrects this to maintain trusted peers in the pruning logic.
As suggested in #4150 we maintain trusted peers in the db and thus we remember them even if they disconnect from us.
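A rough sketch of the pruning filter with illustrative types: trusted peers are simply never considered as eviction candidates, regardless of score.
```rust
/// Illustrative peer record; the real `PeerInfo` carries much more.
struct PeerInfo {
    trusted: bool,
    score: f64,
}

/// Return peers eligible for pruning, worst-scoring first; trusted peers are
/// excluded outright.
fn prune_candidates<'a>(peers: &'a [(String, PeerInfo)]) -> Vec<&'a String> {
    let mut candidates: Vec<&(String, PeerInfo)> =
        peers.iter().filter(|(_, info)| !info.trusted).collect();
    candidates.sort_by(|a, b| a.1.score.partial_cmp(&b.1.score).unwrap());
    candidates.into_iter().map(|(id, _)| id).collect()
}
```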
## Issue Addressed
[#4162](https://github.com/sigp/lighthouse/issues/4162)
## Proposed Changes
Update the `--logfile-no-restricted-perms` flag help text to indicate that, for Windows users, the file permissions are inherited from the parent folder
## Additional Info
N/A
## Proposed Changes
Added a page explaining authentication to the Siren UI book.
* Update Engine API to Latest
* Get Mock EE Working
* Fix Mock EE
* Update Engine API Again
* Rip out get_blobs_bundle Stuff
* Fix Test Harness
* Fix Clippy Complaints
* Fix Beacon Chain Tests
* Temp hack to compile
* Fix doppelganger tests
* Kill in groups instead of storing pid
* Install geth in CI
* Install geth first
* Fix eth1_block_hash
* Fix directory paths and block hash
* Fix workflow for local testnets; reset genesis.json after running script
* Disable capella and deneb forks for doppelganger tests
* oops not capella
* Spin up a spare bn for the doppelganger validator
* testing
* Revert "testing"
This reverts commit 14eb178bca5b7d27b9cd9b665b5cd2c916f50901.
* Modify beacon_node script to take trusted peers
* Set doppelganger bn as a trusted peer
* Update var
* update another
* Fix port
* Add a flag to disable peer scoring
* Disable peer scoring in local testnet bn script
* Revert trusted peers hack
* fmt
* Fix proposer boost score
It is a well-known fact that IP addresses for beacon nodes used by specific validators can be de-anonymized. There is an assumed risk that a malicious user may attempt to DOS validators when producing blocks to prevent chain growth/liveness.
Although there are a number of ideas put forward to address this, there are a few simple approaches we can take to mitigate this risk.
Currently, a Lighthouse user is able to set a number of beacon-nodes that their validator client can connect to. If one beacon node is taken offline, it can fallback to another. Different beacon nodes can use VPNs or rotate IPs in order to mask their IPs.
This PR provides an additional setup option which further mitigates attacks of this kind.
This PR introduces a CLI flag `--proposer-only` to the beacon node. Setting this flag will configure the beacon node to run with minimal peers and, crucially, it will not subscribe to subnets or sync committees. Therefore nodes of this kind should not be identified as nodes connected to validators of any kind.
It also introduces a CLI flag `--proposer-nodes` to the validator client. Users can then provide a number of beacon nodes (which may or may not run with the `--proposer-only` flag) that the validator client will use for block production and propagation only. If these nodes fail, the validator client will fall back to the default list of beacon nodes.
Users are then able to set up a number of beacon nodes dedicated to block proposals (which are unlikely to be identified as validator nodes) and point their validator clients to produce blocks on these nodes and attest on other beacon nodes. An attack attempting to prevent liveness on the eth2 network would then need to preemptively find and attack the proposer nodes which is significantly more difficult than the default setup.
This is a follow on from: #3328
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
## Issue Addressed
The latest stable version (1.69.0) of Rust was released on 20 April and contains this change:
- [Update the minimum external LLVM to 14.](https://github.com/rust-lang/rust/pull/107573/)
This impacts some of our CI workflows (build and release-test-windows) that use LLVM 13.0. This PR updates the workflows to install LLVM 15.0.
**UPDATE**: Also updated `h2` to address [this issue](https://github.com/advisories/GHSA-f8vr-r385-rh5r)