## Issue Addressed
Closes #4332
## Proposed Changes
Remove the `CountUnrealized` type, defaulting unrealized justification to _on_. This fixes #4332 by ensuring that importing the same block to fork choice always results in the same outcome.
Finalized sync speed may be slightly impacted by this change, but that is deemed an acceptable trade-off until the optimisation from #4118 is implemented.
TODO:
- [x] Also check that the block isn't a duplicate before importing
* some blob reprocessing work
* remove ForceBlockLookup
* reorder enum match arms in sync manager
* a lot more reprocessing work
* impl logic for triggering blob lookups along with block lookups
* deal with rpc blobs in groups per block in the da checker. don't cache missing blob ids in the da checker.
* make single block lookup generic
* more work
* add delayed processing logic and combine some requests
* start fixing some compile errors
* fix compilation in main block lookup mod
* much work
* get things compiling
* parent blob lookups
* fix compile
* revert red/stevie changes
* fix up sync manager delay message logic
* add peer usefulness enum
* should remove lookup refactor
* consolidate retry error handling
* improve peer scoring during certain failures in parent lookups
* improve retry code
* drop parent lookup if either req has a peer disconnect during download
* refactor single block processed method
* processing peer refactor
* smol bugfix
* fix some todos
* fix lints
* fix lints
* fix compile in lookup tests
* fix lints
* fix lints
* fix existing block lookup tests
* renamings
* fix after merge
* cargo fmt
* compilation fix in beacon chain tests
* fix
* refactor lookup tests to work with multiple forks and response types
* make tests into macros
* wrap availability check error
* fix compile after merge
* add random blobs
* start fixing up lookup verify error handling
* some bug fixes and the start of deneb only tests
* make tests work for all forks
* track information about peer source
* error refactoring
* improve peer scoring
* fix test compilation
* make sure blobs are sent for processing after stream termination, delete copied tests
* add some tests and fix a bug
* smol bugfixes and moar tests
* add tests and fix some things
* compile after merge
* lots of refactoring
* retry on invalid block/blob
* merge unknown parent messages before current slot lookup
* get tests compiling
* penalize blob peer on invalid blobs
* Check disk on in-memory cache miss
* Update beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs
* Update beacon_node/network/src/sync/network_context.rs
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
* fix bug in matching blocks and blobs in range sync
* pr feedback
* fix conflicts
* upgrade logs from warn to crit when we receive incorrect response in range
* synced_and_connected_within_tolerance -> should_search_for_block
* remove todo
* Fix Broken Overflow Tests
* fix merge conflicts
* checkpoint sync without alignment
* add import
* query for checkpoint state by slot rather than state root (teku doesn't serve by state root)
* get state first and query by most recent block root
* simplify delay logic
* rename unknown parent sync message variants
* rename parameter, block_slot -> slot
* add some docs to the lookup module
* use interval instead of sleep
* drop request if blocks and blobs requests both return `None` for `Id`
* clean up `find_single_lookup` logic
* add lookup source enum
* clean up `find_single_lookup` logic
* add docs to find_single_lookup_request
* move LookupSource out of param where unnecessary
* remove unnecessary todo
* query for block by `state.latest_block_header.slot`
* fix lint
* fix test
* fix test
* fix observed blob sidecars test
* PR updates
* use optional params instead of a closure
* create lookup and trigger request in separate method calls
* remove `LookupSource`
* make sure duplicate lookups are not dropped
---------
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-authored-by: Mark Mackey <mark@sigmaprime.io>
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
Checkpointz now supports deposit snapshots, but [they only support returning them in JSON](https://github.com/ethpandaops/checkpointz/issues/74), so I've modified Lighthouse to request them in JSON by default.
There are also the `get_opt` and `get_opt_with_timeout` methods, which seem to expect responses in JSON but were not adding `Accept: application/json` to the request headers, so I fixed that as well.
The beacon API also puts quantities in quotes, so I fixed that in the snapshot JSON serialization.
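For illustration, here is a minimal sketch of serializing a quantity as a quoted decimal string, the way the beacon API expects; the `DepositSnapshot` struct and `quoted_u64` module below are simplified stand-ins, not Lighthouse's actual types:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical snapshot type used only for this sketch.
#[derive(Serialize, Deserialize)]
struct DepositSnapshot {
    #[serde(
        serialize_with = "quoted_u64::serialize",
        deserialize_with = "quoted_u64::deserialize"
    )]
    deposit_count: u64,
}

mod quoted_u64 {
    use serde::{Deserialize, Deserializer, Serializer};

    // Serialize a u64 as a decimal string, matching the beacon API convention.
    pub fn serialize<S: Serializer>(value: &u64, s: S) -> Result<S::Ok, S::Error> {
        s.serialize_str(&value.to_string())
    }

    // Accept a quoted decimal string and parse it back into a u64.
    pub fn deserialize<'de, D: Deserializer<'de>>(d: D) -> Result<u64, D::Error> {
        let s = String::deserialize(d)?;
        s.parse().map_err(serde::de::Error::custom)
    }
}

fn main() {
    let snapshot = DepositSnapshot { deposit_count: 1024 };
    // Prints {"deposit_count":"1024"} with the quantity quoted.
    println!("{}", serde_json::to_string(&snapshot).unwrap());
}
```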
## Issue Addressed
Resolves #3980. Builds on work by @GeemoCandama in #4084
## Proposed Changes
Extends the `SupportedProtocol` abstraction added in Geemo's PR and attempts to fix the internal versioning of requests mentioned in this comment: https://github.com/sigp/lighthouse/pull/4084#issuecomment-1496380033
Co-authored-by: geemo <geemo@tutanota.com>
## Issue Addressed
Closes #3964
## Proposed Changes
Use the `maxperf` profile to build Windows binaries, now that Rust 1.70.0 fixed the underlying issue.
## Proposed Changes
Correct some typos in the book, and update the information about withdrawals, since mainnet will have 700K validators in about a month.
## Issue Addressed
#3510
## Proposed Changes
- Replace `mime` with `MediaTypeList`
- Remove the `parse_accept` fn, as `MediaTypeList` handles this built-in
- Get the supported media type with the highest q-factor in a single list iteration, without sorting (see the sketch below)
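As an illustration of the single-pass selection, here is a minimal sketch; the hand-rolled `Accept`-header parsing below is an assumption made for brevity and is not the actual `MediaTypeList` API used in the PR:

```rust
// Pick the media type with the highest q-factor in one pass, without sorting.
fn best_media_type(accept: &str) -> Option<(String, f32)> {
    accept
        .split(',')
        .filter_map(|part| {
            let mut pieces = part.trim().split(';');
            let mime = pieces.next()?.trim().to_string();
            // The default q-factor is 1.0 when no `q=` parameter is present.
            let q = pieces
                .filter_map(|p| p.trim().strip_prefix("q="))
                .next()
                .and_then(|q| q.parse::<f32>().ok())
                .unwrap_or(1.0);
            Some((mime, q))
        })
        // Single iteration: keep the entry with the largest q-factor.
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal))
}

fn main() {
    let accept = "application/json;q=0.9, application/octet-stream";
    // Picks application/octet-stream (implicit q=1.0).
    println!("{:?}", best_media_type(accept));
}
```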
## Additional Info
I have addressed the changes suggested in the previous [PR](https://github.com/sigp/lighthouse/pull/3520#discussion_r959048633).
Done in different PRs so that they can be reviewed independently, as it's likely this won't be merged before I leave
Includes resolution for #4080
- [ ] #4299
- [ ] #4318
- [ ] #4320
Co-authored-by: Diva M <divma@protonmail.com>
Co-authored-by: Age Manning <Age@AgeManning.com>
## Issue Addressed
This PR addresses issue https://github.com/sigp/lighthouse/issues/4350
## Proposed Changes
This change will enable slasher broadcast in the following cases:
- No flag is passed,
- `--slasher-broadcast` is passed, and
- `--slasher-broadcast=true` is passed.

The slasher does not broadcast only when an explicit false value is passed (`--slasher-broadcast=false`). A sketch of this behaviour is shown below.
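A minimal sketch of the intended semantics (not the actual CLI parsing code, which uses the existing argument framework) could look like this:

```rust
// `flag` models the three shapes the argument can take:
// None                  -> flag not passed at all
// Some(None)            -> bare `--slasher-broadcast`
// Some(Some(value))     -> `--slasher-broadcast=<value>`
fn slasher_broadcast(flag: Option<Option<&str>>) -> bool {
    match flag {
        // No flag, or a bare `--slasher-broadcast`: broadcasting is enabled.
        None | Some(None) => true,
        // Explicit value: only "false" disables broadcasting.
        Some(Some(value)) => value != "false",
    }
}

fn main() {
    assert!(slasher_broadcast(None));                 // no flag
    assert!(slasher_broadcast(Some(None)));           // --slasher-broadcast
    assert!(slasher_broadcast(Some(Some("true"))));   // --slasher-broadcast=true
    assert!(!slasher_broadcast(Some(Some("false")))); // --slasher-broadcast=false
}
```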
## Additional Info
TODO
- [x] Modify CLI parsing logic
- [x] Write test
Refer to #4353
Co-authored-by: Rahul Dogra <rahulcooldogra@gmail.com>
Co-authored-by: Gua00va <105484243+Gua00va@users.noreply.github.com>
## Issue Addressed
Workaround for https://github.com/foundry-rs/foundry/issues/5115.
## Proposed Changes
Allow Anvil to be installed on Windows without errors by enabling the IPC features (which we don't use, but Anvil expects to exist).
## Issue Addressed
NA
## Proposed Changes
Downgrade a `CRIT` to an `ERRO` when there's an `Irrecoverable` error whilst publishing a blinded block.
It's quite common for builders to successfully broadcast a block to the network whilst failing to respond to the BN when it publishes a signed, blinded block. The VC is currently raising a `CRIT` when this happens and I think that's excessive.
These changes have the same intent as #4073. In that PR I only managed to remove the `CRIT`s in the BN but missed this one in the VC.
I've also tidied the log messages to:
- Give them all the same title (*"Error whilst producing block"*) to help with grepping.
- Include the `block_slot` so it's easy to look up the slot in an explorer and see if it was actually skipped.
## Additional Info
This PR should not change any logic beyond logging.
## Issue Addressed
Closes #4354, closes #3987
Replaces #4305, #4283
## Proposed Changes
This switches the default slasher backend _back_ to LMDB.
If an MDBX database exists and the MDBX backend is enabled then MDBX will continue to be used. Our release binaries and Docker images will continue to include MDBX for as long as it is practical, so users of these should not notice any difference.
The main benefit is to users compiling from source and devs running tests. These users no longer have to struggle to compile MDBX and deal with the compatibility issues that arise. Similarly, devs don't need to worry about toggling feature flags in tests or risk forgetting to run the slasher tests due to backend issues.
## Issue Addressed
N/A
## Proposed Changes
This change will log the value of the relay block and the local block when the relay block is more profitable.
## Additional Info
This change will help validators understand the block selection (as it sometimes looks like the execution reward is higher than the MEV reward).
The rationale for this change is to help operators better understand why a relay block was chosen over a local block.
Looking at produced blocks (on beaconcha.in, for example), it sometimes looks like the builder is making a profit just from the execution reward vs the MEV reward, which creates the nagging question: "Could I have built this block and made that extra profit?". The answer is probably "No, not without the extra transactions included by the relay", but by logging the value of the local block candidate, this will no longer be an open question.
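As a rough illustration (the function, field names and use of plain integers below are assumptions, not the beacon node's actual logging code), the idea is simply to log both values and their difference whenever the relay block wins:

```rust
// Log both candidate values when the relay payload is more profitable,
// so the operator can see the margin at a glance.
fn log_block_choice(relay_value_wei: u128, local_value_wei: u128) {
    if relay_value_wei > local_value_wei {
        println!(
            "Relay block is more profitable than local block: \
             relay_value_wei={relay_value_wei}, local_value_wei={local_value_wei}, \
             difference_wei={}",
            relay_value_wei - local_value_wei
        );
    }
}

fn main() {
    // Illustrative values only: 0.2 ETH relay payload vs 0.15 ETH local payload.
    log_block_choice(200_000_000_000_000_000, 150_000_000_000_000_000);
}
```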
### Example (Mainnet)
https://beaconcha.in/block/17370329
MEV Block Reward: 0.17122 Ether to 0xE35bBaFa0266089f95d745d348b468622805D82B
Execution Reward: 0.17528 Ether to 0x1f9090aaE28b8a3dCeaDf281B0F12828e676c326
Difference: 0.00406 Ether
### Examples (Goerli)
https://goerli.beaconcha.in/block/9040065
MEV Block Reward: 0.56423 Ether to 0xF5794543CF6055Ae710E9c8E99E31343Cea004a8
Execution Reward: 0.56488 Ether to 0xfC0157aA4F5DB7177830ACddB3D5a9BB5BE9cc5e
Difference: 0.00065 Ether
https://goerli.beaconcha.in/block/9019921
MEV Block Reward: 1.39440 Ether to 0xF5794543CF6055Ae710E9c8E99E31343Cea004a8
Execution Reward: 1.39469 Ether to 0xfC0157aA4F5DB7177830ACddB3D5a9BB5BE9cc5e
Difference: 0.00029 Ether
https://goerli.beaconcha.in/block/9015583
MEV Block Reward: 1.04356 Ether to 0xF5794543CF6055Ae710E9c8E99E31343Cea004a8
Execution Reward: 1.04896 Ether to 0xfC0157aA4F5DB7177830ACddB3D5a9BB5BE9cc5e
Difference: 0.0054 Ether
## Issue Addressed
On deneb devnetv5, Lighthouse keeps rate-limiting peers, which makes it harder to bootstrap new nodes as there are very few peers in the network. This PR adds an option to disable the inbound rate limiter for testnets.
Added an option to configure inbound rate limits as well.
Co-authored-by: Diva M <divma@protonmail.com>
## Proposed Changes
This is a light refactor of the execution layer's block hash calculation logic, making it easier to use externally, e.g. in `eleel` (https://github.com/sigp/eleel/pull/18).
A static method is preferable to an instance method because the calculation doesn't actually need any data from `self`, and callers may want to compute block hashes without constructing an `ExecutionLayer` (`eleel` only constructs a simpler `Engine` struct).
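To illustrate the design choice, here is a minimal sketch with illustrative names (not the actual `ExecutionLayer` API): an associated function can be called without constructing an `ExecutionLayer`, which is what external callers like `eleel` need:

```rust
struct ExecutionLayer;

// Simplified stand-in for the real payload type; most fields are omitted.
struct ExecutionPayload {
    block_number: u64,
}

impl ExecutionLayer {
    // Associated ("static") function: callable as
    // `ExecutionLayer::calculate_block_hash(&payload)` with no instance needed.
    fn calculate_block_hash(payload: &ExecutionPayload) -> [u8; 32] {
        // Placeholder for hashing the RLP-encoded header; the real logic lives
        // in the execution layer crate.
        let mut hash = [0u8; 32];
        hash[..8].copy_from_slice(&payload.block_number.to_be_bytes());
        hash
    }
}

fn main() {
    let payload = ExecutionPayload { block_number: 17_370_329 };
    println!("{:?}", ExecutionLayer::calculate_block_hash(&payload));
}
```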
## Issue Addressed
Update Information in Lighthouse Book
## Proposed Changes
- move Validator Graffiti from Advanced Usage to Validator Management
- update API response and command
- some items I wasn't too sure about are left as comments, which can be seen in the raw/review format but not in the live book
Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
## Issue Addressed
Added some frequently asked questions in the Lighthouse Book
## Proposed Changes
Created another file, faqV2.md, which categorises the FAQs into different sections for better organisation. For review purposes, a review of faq.md will suffice. Then, if faqV2.md looks better, faq.md can be deleted; otherwise, if the changes in faqV2.md are too extensive, faq.md can be kept and faqV2.md deleted.
Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
## Issue Addressed
-
## Proposed Changes
Add information to use SSH to connect to Siren beyond the local computer
Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
This PR addresses the following spec change: https://github.com/ethereum/consensus-specs/pull/3312
Instead of subscribing to a long-lived subnet for every validator attached to a beacon node, all beacon nodes will subscribe to `SUBNETS_PER_NODE` long-lived subnets. This is currently set to 2 for mainnet.
This PR does not include any scoring or advanced discovery mechanisms. A future PR will improve discovery and we can implement scoring after the next hard fork when we expect all client teams and all implementations to respect this spec change.
This will be a significant change in the subnet network structure for consensus clients and we will likely have to monitor and tweak our peer management logic.
## Issue Addressed
Realized this was missing while discussing #4280
## Proposed Changes
Add an item to the list of additional requirements for developers.
## Issue Addressed
N/A
## Proposed Changes
Geth's latest release breaks our CI with the following message:
```
Fatal: Failed to register the Ethereum service: ethash is only supported as a historical component of already merged networks
Shutting down
```
The latest geth version has removed support for PoW networks. Hence, we need to add an extra `terminalTotalDifficultyPassed` parameter to the genesis config to start from a merged network.
## Issue Addressed
Add a new `--output values` option to `db inspect` for dumping raw database values to SSZ files.
This could be useful for inspecting the database when we're unable to start the beacon node.
Example usage:
```
# Output the `ForkChoice` column to an SSZ file
lighthouse db inspect --column frk --output values
```
By default, it stores the output files in the current directory; this can be overridden with `--output-dir`.
The list of columns can be found here:
c547a11b0d/beacon_node/store/src/lib.rs (L169-L216)
## Issue Addressed
None
## Proposed Changes
Tiny change to the documentation: Bellatrix has happened on Gnosis, so it also needs a merge-ready config.
## Additional Info
None