Commit Graph

4640 Commits

Author SHA1 Message Date
Mac L
d316305411 Add is_optimistic to eth/v1/node/syncing response (#3374)
## Issue Addressed

As specified in the [Beacon Chain API specs](https://github.com/ethereum/beacon-APIs/blob/master/apis/node/syncing.yaml#L32-L35) we should return `is_optimistic` as part of the response to a query for the `eth/v1/node/syncing` endpoint.

## Proposed Changes

Compute the optimistic status of the head and add it to the `SyncingData` response.
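A minimal sketch of the extended response shape (illustrative only, not Lighthouse's actual `SyncingData` definition; numeric serialization details are omitted):

```rust
// Sketch of a syncing response carrying the new `is_optimistic` field.
use serde::Serialize;

#[derive(Serialize)]
struct SyncingData {
    head_slot: u64,
    sync_distance: u64,
    is_syncing: bool,
    // Newly added field: true while the head block is only optimistically imported.
    is_optimistic: bool,
}

fn main() {
    let data = SyncingData {
        head_slot: 4_000_000,
        sync_distance: 0,
        is_syncing: false,
        is_optimistic: false,
    };
    println!("{}", serde_json::to_string(&data).unwrap());
}
```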
2022-07-26 08:50:16 +00:00
realbigsean
904dd62524 Strict fee recipient (#3363)
## Issue Addressed

Resolves #3267
Resolves #3156 

## Proposed Changes

- Move the log for fee recipient checks from proposer cache insertion into block proposal so we are directly checking what we get from the EE
- Only log when there is a discrepancy with the local EE, not when using the builder API. In the `builder-api` branch there is an `info` log when there is a discrepancy; a difference in fee recipient is more likely with the builder API because proposer payments might be made via a transaction in the block. It's not yet clear what patterns will become common.
- Upgrade the log from a `warn` to an `error`. It's debatable which we want, but this seems worth an error because a local EE with default transaction ordering should pretty much always use the provided fee recipient.
- Add a `strict-fee-recipient` flag to the VC so we only sign blocks with matching fee recipients. Falls back from the builder API to the local API if there is a discrepancy (see the sketch below).
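A hypothetical sketch of the check (the types, names and `strict` parameter here are illustrative, not the actual Lighthouse implementation):

```rust
// Refuse to sign a block whose payload fee recipient differs from the configured one
// when strict checking is enabled; otherwise only log the discrepancy.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Address([u8; 20]);

struct BlockProposal {
    fee_recipient: Address,
}

fn check_fee_recipient(
    proposal: &BlockProposal,
    expected: Address,
    strict: bool, // stands in for the `--strict-fee-recipient` behaviour
) -> Result<(), String> {
    if proposal.fee_recipient == expected {
        return Ok(());
    }
    if strict {
        // With the flag enabled, a mismatch means we do not sign the block.
        Err(format!(
            "refusing to sign: payload fee recipient {:?} != configured {:?}",
            proposal.fee_recipient, expected
        ))
    } else {
        // Without the flag we only log the discrepancy and carry on.
        eprintln!(
            "fee recipient mismatch: payload {:?}, configured {:?}",
            proposal.fee_recipient, expected
        );
        Ok(())
    }
}

fn main() {
    let expected = Address([0xaa; 20]);
    let proposal = BlockProposal { fee_recipient: Address([0xbb; 20]) };
    assert!(check_fee_recipient(&proposal, expected, true).is_err());
}
```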




Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-26 02:17:24 +00:00
Paul Hauner
b82e2dfc51 Add merge transition docs (#3361)
## Issue Addressed

NA

## Proposed Changes

Add some documentation about migrating pre-merge Lighthouse to post-merge Lighthouse.

## Additional Info

NA
2022-07-26 02:17:22 +00:00
ethDreamer
f7354abe0f Fix Block Cache Range Math for Faster Syncing (#3358)
## Issue Addressed

While messing with the deposit snapshot stuff, I had my proxy running and noticed the beacon node wasn't syncing the block cache continuously. There were long periods where it did nothing. I believe this was caused by a logical error introduced in #3234 that dealt with an issue that arose while syncing the block cache on Ropsten.

The problem is that when the block cache is initially syncing, it will trigger the logic that detects the cache is far behind the execution chain in time. This will trigger a batch syncing mechanism which is intended to sync further ahead than the chain would normally. But the batch syncing is actually slower than the range this function usually estimates (in this scenario).

## Proposed Changes

I believe I've fixed this function by taking the end of the range to be the maximum of (batch syncing range, usual range).
I've also renamed and restructured some things a bit. The logic is equivalent, but I think it's clearer what's going on.
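As a toy illustration of the fix (stand-in names, not the actual cache code):

```rust
// The end of the sync range is the larger of the batch-sync target and the usual
// follow-distance estimate, so batch syncing can never shrink the requested range
// below what the normal path would have asked for.
fn range_end(batch_sync_end: u64, usual_end: u64) -> u64 {
    std::cmp::max(batch_sync_end, usual_end)
}

fn main() {
    // Previously the batch target alone could be used, stalling the cache whenever it
    // lagged behind the usual estimate.
    assert_eq!(range_end(15_000_000, 15_004_096), 15_004_096);
}
```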
2022-07-26 02:17:21 +00:00
realbigsean
20ebf1f3c1 Realized unrealized experimentation (#3322)
## Issue Addressed

Add a flag that optionally enables unrealized vote tracking. We would like to test it out on testnets and benchmark the differences between vote-tracking methods. This PR includes a DB schema upgrade to enable the new vote tracking style.


Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: sean <seananderson33@gmail.com>
Co-authored-by: Mac L <mjladson@pm.me>
2022-07-25 23:53:26 +00:00
Mac L
bb5a6d2cca Add execution_optimistic flag to HTTP responses (#3070)
## Issue Addressed

#3031 

## Proposed Changes

Updates the following API endpoints to conform with https://github.com/ethereum/beacon-APIs/pull/190 and https://github.com/ethereum/beacon-APIs/pull/196
- [x] `beacon/states/{state_id}/root` 
- [x] `beacon/states/{state_id}/fork`
- [x] `beacon/states/{state_id}/finality_checkpoints`
- [x] `beacon/states/{state_id}/validators`
- [x] `beacon/states/{state_id}/validators/{validator_id}`
- [x] `beacon/states/{state_id}/validator_balances`
- [x] `beacon/states/{state_id}/committees`
- [x] `beacon/states/{state_id}/sync_committees`
- [x] `beacon/headers`
- [x] `beacon/headers/{block_id}`
- [x] `beacon/blocks/{block_id}`
- [x] `beacon/blocks/{block_id}/root`
- [x] `beacon/blocks/{block_id}/attestations`
- [x] `debug/beacon/states/{state_id}`
- [x] `debug/beacon/heads`
- [x] `validator/duties/attester/{epoch}`
- [x] `validator/duties/proposer/{epoch}`
- [x] `validator/duties/sync/{epoch}`

Updates the following Server-Sent Events:
- [x]  `events?topics=head`
- [x]  `events?topics=block`
- [x]  `events?topics=finalized_checkpoint`
- [x]  `events?topics=chain_reorg`

## Backwards Incompatible
There is a very minor breaking change with the way the API now handles requests to `beacon/blocks/{block_id}/root` and `beacon/states/{state_id}/root` when `block_id` or `state_id` is the `Root` variant of `BlockId` and `StateId` respectively.

Previously a request to a non-existent root would simply echo the root back to the requester:
```
curl "http://localhost:5052/eth/v1/beacon/states/0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/root"
{"data":{"root":"0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"}}
```
Now it will return a `404`:
```
curl "http://localhost:5052/eth/v1/beacon/blocks/0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/root"
{"code":404,"message":"NOT_FOUND: beacon block with root 0xaaaa…aaaa","stacktraces":[]}
```

In addition, the block root `0x0000000000000000000000000000000000000000000000000000000000000000` previously returned the genesis block. It will now return a `404`:
```
curl "http://localhost:5052/eth/v1/beacon/blocks/0x0000000000000000000000000000000000000000000000000000000000000000"
{"code":404,"message":"NOT_FOUND: beacon block with root 0x0000…0000","stacktraces":[]}
```

## Additional Info
- `execution_optimistic` is always set, and will return `false` pre-Bellatrix. I am also open to the idea of doing something like `#[serde(skip_serializing_if = "Option::is_none")]`.
- The value of `execution_optimistic` is set to `false` where possible. Any computation that is reliant on the `head` will simply use the `ExecutionStatus` of the head (unless the head block is pre-Bellatrix).
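For illustration only, a wrapper of the rough shape described above (not the actual Lighthouse types; the exact layout follows the linked beacon-API changes):

```rust
// Generic response envelope carrying the new flag alongside the payload.
use serde::Serialize;

#[derive(Serialize)]
struct ExecutionOptimisticResponse<T: Serialize> {
    execution_optimistic: bool,
    data: T,
}

#[derive(Serialize)]
struct FinalityCheckpoints {
    // Fields elided; this stands in for any of the listed endpoint payloads.
    finalized_epoch: u64,
}

fn main() {
    let resp = ExecutionOptimisticResponse {
        // Always present; `false` pre-Bellatrix or when the head is fully verified.
        execution_optimistic: false,
        data: FinalityCheckpoints { finalized_epoch: 0 },
    };
    println!("{}", serde_json::to_string(&resp).unwrap());
}
```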

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-07-25 08:23:00 +00:00
Paul Hauner
21dec6f603 v2.4.0 (#3360)
## Issue Addressed

NA

## Proposed Changes

Bump versions to v2.4.0

## Additional Info

Blocked on:

- ~~#3349~~
- ~~#3347~~
2022-07-21 22:02:36 +00:00
Pawan Dhananjay
612cdb7092 Merge readiness endpoint (#3349)
## Issue Addressed

Resolves final task in https://github.com/sigp/lighthouse/issues/3260

## Proposed Changes

Adds a lighthouse http endpoint to indicate merge readiness.

Blocked on #3339
2022-07-21 05:45:39 +00:00
Michael Sproul
e32868458f Set safe block hash to justified (#3347)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3189.

## Proposed Changes

- Always supply the justified block hash as the `safe_block_hash` when calling `forkchoiceUpdated` on the execution engine.
- Refactor the `get_payload` routine to use the new `ForkchoiceUpdateParameters` struct rather than just the `finalized_block_hash`. I think this is a nice simplification and that the old way of computing the `finalized_block_hash` was unnecessary, but if anyone sees reason to keep that approach LMK.
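A simplified sketch of the mapping described above (the structs mirror the names in the PR text but are stand-ins, not the real Lighthouse or engine-API types):

```rust
// After this change the `safe_block_hash` sent in forkchoiceUpdated is simply the
// justified block hash.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
struct Hash256([u8; 32]);

struct ForkchoiceUpdateParameters {
    head_block_hash: Hash256,
    justified_block_hash: Hash256,
    finalized_block_hash: Hash256,
}

struct ForkchoiceState {
    head_block_hash: Hash256,
    safe_block_hash: Hash256,
    finalized_block_hash: Hash256,
}

fn forkchoice_state(params: &ForkchoiceUpdateParameters) -> ForkchoiceState {
    ForkchoiceState {
        head_block_hash: params.head_block_hash,
        // Safe head == justified head.
        safe_block_hash: params.justified_block_hash,
        finalized_block_hash: params.finalized_block_hash,
    }
}

fn main() {
    let p = ForkchoiceUpdateParameters {
        head_block_hash: Hash256([1; 32]),
        justified_block_hash: Hash256([2; 32]),
        finalized_block_hash: Hash256([3; 32]),
    };
    assert_eq!(forkchoice_state(&p).safe_block_hash, p.justified_block_hash);
}
```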
2022-07-21 05:45:37 +00:00
Paul Hauner
6a0e9d4353 Add Goerli --network flag as duplicate of Prater: Option A (#3346)
## Issue Addressed

- Resolves #3338

## Proposed Changes

This PR adds a new `--network goerli` flag that reuses the [Prater network configs](https://github.com/sigp/lighthouse/tree/stable/common/eth2_network_config/built_in_network_configs/prater).

As you'll see in #3338, there are several approaches to the problem of the Goerli/Prater alias. This approach achieves:

1. No duplication of the genesis state between Goerli and Prater.
    - Upside: the genesis state for Prater is ~17mb, duplication would increase the size of the binary by that much.
2. When the user supplies `--network goerli`, they will get a datadir in `~/.lighthouse/goerli`.
    - Upside: our docs stay correct when they declare a datadir is located at `~/.lighthouse/{network}`
    - Downside: switching from `--network prater` to `--network goerli` will require some manual migration. 
3. When using `--network goerli`, the [`config/spec`](https://ethereum.github.io/beacon-APIs/#/Config/getSpec) endpoint will return a [`CONFIG_NAME`](02a2b71d64/configs/mainnet.yaml (L11)) of "prater".
    - Upside: VC running `--network prater` will still think it's on the same network as one using `--network goerli`.
    - Downside: potentially confusing.
    
#3348 achieves the same goal as this PR with a different approach and set of trade-offs.

## Additional Info

### Notes for reviewers:

In e4896c2682 you'll see that I remove the `$name_str` by just using `stringify!($name_ident)` instead. This is a simplification that should have been there in the first place.

Then, in 90b5e22fca I reclaim that second parameter with a new purpose; to specify the directory from which to load configs.
2022-07-20 23:16:56 +00:00
Pawan Dhananjay
5b5cf9cfaa Log ttd (#3339)
## Issue Addressed

Resolves #3249 

## Proposed Changes

Log merge related parameters and EE status in the beacon notifier before the merge.


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-07-20 23:16:54 +00:00
ethDreamer
7c3ff903ca Fix Gossip Penalties During Optimistic Sync Window (#3350)
## Issue Addressed
* #3344 

## Proposed Changes

There are a number of cases during block processing where we might get an `ExecutionPayloadError` but we shouldn't penalize peers. We were forgetting to enumerate all of the non-penalizing errors in every single match statement where we are making that decision. I created a function to make it explicit when we should and should not penalize peers and I used that function in all places where this logic is needed. This way we won't make the same mistake if we add another variant of `ExecutionPayloadError` in the future.
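A hypothetical sketch of that helper's shape (the variant names are illustrative; the real `ExecutionPayloadError` has more variants):

```rust
// One place decides which payload errors are the sending peer's fault, so any new
// variant must be classified explicitly before the code compiles.
enum ExecutionPayloadError {
    RejectedByExecutionEngine,
    NoExecutionConnection,
    RequestFailed,
}

fn penalize_peer_for(error: &ExecutionPayloadError) -> bool {
    match error {
        // The payload itself is bad: the sending peer is at fault.
        ExecutionPayloadError::RejectedByExecutionEngine => true,
        // Our execution engine is unreachable or erroring: not the peer's fault.
        ExecutionPayloadError::NoExecutionConnection
        | ExecutionPayloadError::RequestFailed => false,
    }
}

fn main() {
    assert!(penalize_peer_for(&ExecutionPayloadError::RejectedByExecutionEngine));
    assert!(!penalize_peer_for(&ExecutionPayloadError::RequestFailed));
}
```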
2022-07-20 20:59:38 +00:00
Paul Hauner
6d8dfc9eee Add TTD and Bellatrix epoch for Prater (#3345)
## Issue Addressed

NA

## Proposed Changes

Adds the TTD and Bellatrix values for Prater, as per https://github.com/eth-clients/eth2-networks/pull/77.

## Additional Info

- ~~Blocked on https://github.com/eth-clients/eth2-networks/pull/77~~
2022-07-20 20:59:36 +00:00
realbigsean
fabe50abe7 debug tests rust version (#3354)
## Issue Addressed

Which issue # does this PR address?

## Proposed Changes

Please list or describe the changes introduced by this PR.

## Additional Info

Please provide any additional information. For example, future considerations
or information useful for reviewers.


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-20 18:18:26 +00:00
realbigsean
822c30da66 docker rust version update (#3353)
## Issue Addressed

The lcli and antithesis docker builds are failing in `unstable`, so this bumps all the versions here.

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-20 18:18:25 +00:00
Mac L
7dbc59efeb Share reqwest::Client between validators when using Web3Signer (#3335)
## Issue Addressed

#3302

## Proposed Changes

Move the `reqwest::Client` from being initialized per-validator, to being initialized per distinct Web3Signer. 
This is done by placing the `Client` into a `HashMap` keyed by the definition of the Web3Signer as specified by the `ValidatorDefinition`. This allows multiple Web3Signers to be used with a single VC and also maintains backwards compatibility.
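A rough sketch of the sharing scheme, assuming a simplified key type (the real key is derived from the validator definition and includes more fields):

```rust
// One `reqwest::Client` per distinct Web3Signer definition, handed out to every
// validator that uses that signer.
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct Web3SignerDefinition {
    url: String,
    root_certificate_path: Option<String>,
}

#[derive(Default)]
struct SignerClients {
    clients: HashMap<Web3SignerDefinition, reqwest::Client>,
}

impl SignerClients {
    // `reqwest::Client` is internally reference-counted, so cloning a handle is cheap
    // and the underlying connection pool stays shared.
    fn client_for(&mut self, def: &Web3SignerDefinition) -> reqwest::Client {
        self.clients
            .entry(def.clone())
            .or_insert_with(reqwest::Client::new)
            .clone()
    }
}

fn main() {
    let mut signers = SignerClients::default();
    let def = Web3SignerDefinition {
        url: "https://signer.example:9000".into(),
        root_certificate_path: None,
    };
    let a = signers.client_for(&def);
    let b = signers.client_for(&def);
    // Both handles point at the same pooled client.
    let _ = (a, b);
    assert_eq!(signers.clients.len(), 1);
}
```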

## Additional Info

This was done to reduce the memory used by the VC when connecting to a Web3Signer.

I set up a local testnet using [a custom script](https://github.com/macladson/lighthouse/tree/web3signer-local-test/scripts/local_testnet_web3signer) and ran a VC with 200 validator keys:


VC with Web3Signer:
- `unstable`: ~200MB
- With fix: ~50MB



VC with Local Signer:
- `unstable`: ~35MB
- With fix: ~35MB 


> I'm seeing some fragmentation with the VC using the Web3Signer, but not when using a local signer (this is most likely due to making lots of http requests and dealing with lots of JSON objects). I tested the above using `MALLOC_ARENA_MAX=1` to try to reduce the fragmentation. Without it, the values are around +50MB for both `unstable` and the fix.
2022-07-19 05:48:05 +00:00
Pawan Dhananjay
e5e4e62758 Don't create an execution payload with same timestamp as terminal block (#3331)
## Issue Addressed

Resolves #3316 

## Proposed Changes

This PR fixes an issue where lighthouse created a transition block with `block.execution_payload().timestamp == terminal_block.timestamp` if the terminal block was created at the slot boundary.
2022-07-18 23:15:41 +00:00
Pawan Dhananjay
f9b9658711 Add merge support to simulator (#3292)
## Issue Addressed

N/A

## Proposed Changes

Make simulator merge compatible. Adds a `--post_merge` flag to the eth1 simulator that enables a ttd and simulates the merge transition. Uses the `MockServer` in the execution layer test utils to simulate a dummy execution node.

Adds the merge transition simulation to CI.
2022-07-18 23:15:40 +00:00
Pawan Dhananjay
da7b7a0f60 Make transactions in execution layer integration tests (#3320)
## Issue Addressed

Resolves #3159 

## Proposed Changes

Sends transactions to the EE before requesting for a payload in the `execution_integration_tests`. Made some changes to the integration tests in order to be able to sign and publish transactions to the EE:

1. `genesis.json` for both geth and nethermind was modified to include pre-funded accounts that we know private keys for 
2. Using the unauthenticated port again in order to make `eth_sendTransaction` and calls from the `personal` namespace to import keys

Also added a `fcu` call with `PayloadAttributes` before calling `getPayload` in order to give EEs sufficient time to pack transactions into the payload.
2022-07-18 01:51:36 +00:00
Age Manning
2ed51c364d Improve block-lookup functionality (#3287)
Improves some of the functionality around single and parent block lookup. 

Gives extra information about whether failures for lookups are related to processing or downloading.

This is entirely untested.


Co-authored-by: Diva M <divma@protonmail.com>
2022-07-17 23:26:58 +00:00
Peter Davies
4f58c555a9 Add Merge support to web3signer validators (#3318)
## Issue Addressed

Web3signer validators can't produce post-Bellatrix blocks.

## Proposed Changes

Add support for Bellatrix to web3signer validators.

## Additional Info

I am running validators with this code on Ropsten, but it may be a while for them to get a proposal.
2022-07-15 14:16:00 +00:00
Mac L
2940783a9c Upstream local testnet improvements (#3336)
## Proposed Changes

Adds some improvements I found when playing around with local testnets in #3335:
- When trying to kill processes, do not exit on a failure. (If a node fails to start due to a bug, the PID associated with it no longer exists. When trying to tear down the testnets, an error will be raised when it tries that PID and then will not try any PIDs following it. This change means it will continue and tear down the rest of the network.)
- When starting the testnet, set `ulimit` to a high number. This allows the VCs to import 1000s of validators without running into limitations.
2022-07-15 07:31:22 +00:00
Pawan Dhananjay
5243cc6c30 Add a u256_hex_be module to encode/decode U256 types (#3321)
## Issue Addressed

Resolves #3314 

## Proposed Changes

Add a module to encode/decode u256 types according to the execution layer encoding/decoding standards
https://github.com/ethereum/execution-apis/blob/main/src/engine/specification.md#structures

Updates `JsonExecutionPayloadV1.base_fee_per_gas`, `JsonExecutionPayloadHeaderV1.base_fee_per_gas`  and `TransitionConfigurationV1.terminal_total_difficulty` to encode/decode according to standards
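A toy version of the encoding rule (0x-prefixed, big-endian, no leading zeroes), using `u128` as a stand-in for `U256` to keep the sketch dependency-free:

```rust
// Encode/decode engine-API "quantity" values as minimal big-endian hex.
fn encode_quantity(value: u128) -> String {
    // `{:#x}` produces "0x0" for zero and no leading zeroes otherwise.
    format!("{:#x}", value)
}

fn decode_quantity(s: &str) -> Result<u128, String> {
    let hex = s
        .strip_prefix("0x")
        .ok_or_else(|| format!("missing 0x prefix: {s}"))?;
    u128::from_str_radix(hex, 16).map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(encode_quantity(0), "0x0");
    let encoded = encode_quantity(1_000_000_000);
    assert_eq!(encoded, "0x3b9aca00");
    assert_eq!(decode_quantity(&encoded).unwrap(), 1_000_000_000);
}
```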

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-07-15 07:31:21 +00:00
Pawan Dhananjay
28b0ff27ff Ignored sync jobs 2 (#3317)
## Issue Addressed

Duplicate of #3269. Making this since @divagant-martian opened the previous PR and she can't approve her own PR 😄 


Co-authored-by: Diva M <divma@protonmail.com>
2022-07-15 07:31:20 +00:00
Akihito Nakano
98a9626ef5 Bump the MSRV to 1.62 and using #[derive(Default)] on enums (#3304)
## Issue Addressed

N/A

## Proposed Changes

Since Rust 1.62, we can use `#[derive(Default)]` on enums.  

https://blog.rust-lang.org/2022/06/30/Rust-1.62.0.html#default-enum-variants

There are no changes to functionality in this PR, just replaced the `Default` trait implementation with `#[derive(Default)]`.
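For example (an illustrative enum, not one from the Lighthouse codebase):

```rust
// Rust 1.62 lets us derive `Default` for an enum by marking one variant `#[default]`.
#[derive(Debug, Default, PartialEq)]
enum SyncState {
    #[default]
    Stalled,
    Syncing,
    Synced,
}

fn main() {
    assert_eq!(SyncState::default(), SyncState::Stalled);
}
```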
2022-07-15 07:31:19 +00:00
Paul Hauner
1f54e10b7b Do not interpret "latest valid hash" as identifying a valid hash (#3327)
## Issue Addressed

NA

## Proposed Changes

After some discussion in Discord with @mkalinin it was raised that it was not the intention of the engine API to have CLs validate the `latest_valid_hash` (LVH) and all ancestors.

Whilst I believe the engine API is being updated such that the LVH *must* identify a valid hash or be set to some junk value, I'm not confident that we can rely upon the LVH as being valid (at least for now) due to the confusion surrounding it.

Being able to validate blocks via the LVH is a relatively minor optimisation; if the LVH value ends up becoming our head we'll send an fcU and get the VALID status there.

Falsely marking a block as valid has serious consequences, and since using the LVH is only a minor optimisation, I think we shouldn't take the risk.

For clarity, we will still *invalidate* the *descendants* of the LVH; we just won't *validate* the *ancestors*.

## Additional Info

NA
2022-07-13 23:07:49 +00:00
Paul Hauner
7a6e6928a3 Further remove EE redundancy (#3324)
## Issue Addressed

Resolves #3176

## Proposed Changes

Continues from PRs by @divagant-martian to gradually remove EL redundancy (see #3284, #3257).

This PR achieves:

- Removes the `broadcast` and `first_success` methods. The functional impact is that every request to the EE will always be tried immediately, regardless of the cached `EngineState` (this resolves #3176). Previously we would check the engine state before issuing requests, but this doesn't make sense in a single-EE world; there's only one EE, so we might as well try it for every request.
- Runs the upcheck/watchdog routine once per slot rather than three times. When we had multiple EEs, frequent polling was useful to detect when the primary EE had come back online so we could switch to it. That's not as relevant now.
- Always creates logs in the `Engines::upcheck` function. Previously we would mute some logs since they could get really noisy when one EE was down but others were functioning fine. Now that we only have one EE and are upchecking it less often, it makes sense to always produce logs.

This PR purposefully does not achieve:

- Updating all occurrences of "engines" to "engine". I'm trying to keep the diff small and manageable. We can come back for this.

## Additional Info

NA
2022-07-13 20:31:39 +00:00
Paul Hauner
a390695e0f Add --release to disallowed-from-async lint (#3325)
## Issue Addressed

- #3251

## Proposed Changes

Adds the release tag to the `disallowed_from_async` lint.

## Additional Info

~~I haven't run this locally yet due to (minor) complexity of running the lint, I'm seeing if it will work via Github.~~
2022-07-12 15:54:17 +00:00
sragss
4212f22ddb add sync committee contribution timeout (#3291)
## Issue Addressed

Resolves #3276. 

## Proposed Changes

Add a timeout for the sync committee contributions at 1/4 the slot length such that we may be able to try backup beacon nodes in the case of contribution post failure.
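The timeout arithmetic is trivial; a sketch under the assumption that the slot duration is available as a `Duration`:

```rust
// A quarter of the slot duration, after which the request can be retried against a
// fallback beacon node.
use std::time::Duration;

fn contribution_timeout(slot_duration: Duration) -> Duration {
    slot_duration / 4
}

fn main() {
    // Mainnet slots are 12 seconds, so the contribution timeout would be 3 seconds.
    assert_eq!(contribution_timeout(Duration::from_secs(12)), Duration::from_secs(3));
}
```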

## Additional Info

1/4 slot length seemed standard for the timeouts, but may want to decrease this to 1/2.

I did not find any timeout related / sync committee related tests, so there are no tests. Happy to write some with a bit of guidance.
2022-07-11 01:44:42 +00:00
Divma
6d42a09ff8 Merge Engines and Engine struct in one in the execution_layer crate (#3284)
## Issue Addressed

Part of https://github.com/sigp/lighthouse/issues/3118, continuation of https://github.com/sigp/lighthouse/pull/3257 and https://github.com/sigp/lighthouse/pull/3283

## Proposed Changes
- Merge the [`Engines`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L161-L165)) struct and [`Engine` ](9c429d0764/beacon_node/execution_layer/src/engines.rs (L62-L67))
- Remove unnecessary generics 

## Additional Info
There is more cleanup to do that will come in subsequent PRs
2022-07-11 01:44:41 +00:00
Kirill
5dbfb37d74 eth2_hashing: make cpufeatures dep optional (#3309)
## Issue Addressed

#3308 

## Proposed Changes

* Add a `cpufeatures` feature.
* Make `cpufeatures` a default feature to preserve compatibility.
* Hide all `cpufeatures`-related code behind the `cpufeatures` feature.

Co-authored-by: Kirill <kirill@aurora.dev>
2022-07-06 22:00:58 +00:00
ethDreamer
d5e2d98970 Implement feerecipient API for keymanager (#3213)
## Issue Addressed

* #3173 

## Proposed Changes

Moved all `fee_recipient_file` related logic inside the `ValidatorStore` as it makes more sense to have this all together there. I tested this with the validators I have on `mainnet-shadow-fork-5` and everything appeared to work well. The only technicality is that I can't get the method to return `401` when the authorization header is not specified (it returns `400` instead). Fixing this is probably quite difficult given that none of `warp`'s rejections have code `401`. I don't really think this matters too much, though, as long as it fails.
2022-07-06 03:51:08 +00:00
Divma
3dc323b035 Fix RUSTSEC-2022-0032 (#3311)
## Issue Addressed
Failure of cargo audit for [RUSTSEC-2022-0032](https://rustsec.org/advisories/RUSTSEC-2022-0032)

## Proposed Changes
Cargo update does the trick again

## Additional Info
na
2022-07-05 23:36:42 +00:00
Michael Sproul
aed764c4d8 Document min CMake version (#3310)
## Proposed Changes

Add a tip about the minimum CMake version to make it more obvious what it takes to compile on Ubuntu 18.04.
2022-07-05 23:36:36 +00:00
Michael Sproul
748475be1d Ensure caches are built for block_rewards POST API (#3305)
## Issue Addressed

Follow up to https://github.com/sigp/lighthouse/pull/3290 that fixes a caching bug

## Proposed Changes

Build the committee cache for the new `POST /lighthouse/analysis/block_rewards` API. Due to an unusual quirk of the total active balance cache the API endpoint would sometimes fail after loading a state from disk which had a current epoch cache _but not_  a total active balance cache. This PR adds calls to build the caches immediately before they're required, and has been running smoothly with `blockdreamer` the last few days.
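A hypothetical sketch of the fix's shape (stand-in types; the real code builds the `BeaconState` committee and total-active-balance caches):

```rust
// Build the caches the computation needs right before using them, instead of assuming
// the state loaded from disk already has them all.
struct State {
    committee_cache_built: bool,
    total_active_balance: Option<u64>,
}

impl State {
    fn build_committee_cache(&mut self) {
        self.committee_cache_built = true;
    }

    fn build_total_active_balance_cache(&mut self) {
        if self.total_active_balance.is_none() {
            // A state loaded from disk can have the committee cache but not this one.
            self.total_active_balance = Some(1_000_000_000);
        }
    }
}

fn block_rewards(state: &mut State) -> u64 {
    // Build both caches up front so the calculation can't hit a half-cached state.
    state.build_committee_cache();
    state.build_total_active_balance_cache();
    state.total_active_balance.expect("cache was just built")
}

fn main() {
    let mut state = State { committee_cache_built: true, total_active_balance: None };
    assert_eq!(block_rewards(&mut state), 1_000_000_000);
}
```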
2022-07-04 02:56:15 +00:00
Akihito Nakano
1cc8a97d4e Remove unused method in HandlerNetworkContext (#3299)
## Issue Addressed

N/A

## Proposed Changes

Removed unused method in `HandlerNetworkContext`.
2022-07-04 02:56:14 +00:00
Divma
1219da9a45 Simplify error handling after engines fallback removal (#3283)
## Issue Addressed
Part of #3118, continuation of #3257

## Proposed Changes
- the [ `first_success_without_retry` ](9c429d0764/beacon_node/execution_layer/src/engines.rs (L348-L351)) function returns a single error.
- the [`first_success`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L324)) function returns a single error.
- [ `EngineErrors` ](9c429d0764/beacon_node/execution_layer/src/lib.rs (L69)) carries a single error.
- [`EngineError`](9c429d0764/beacon_node/execution_layer/src/engines.rs (L173-L177)) now does not need to carry an Id
- [`process_multiple_payload_statuses`](9c429d0764/beacon_node/execution_layer/src/payload_status.rs (L46-L50)) now doesn't need to receive an iterator of statuses and weight in different errors

## Additional Info
This is built on top of #3294
2022-07-04 02:56:13 +00:00
Michael Sproul
61ed5f0ec6 Optimize historic committee calculation for the HTTP API (#3272)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3270

## Proposed Changes

Optimize the calculation of historic beacon committees in the HTTP API.

This is achieved by allowing committee caches to be constructed for historic epochs, and constructing these committee caches on the fly in the API. This is much faster than reconstructing the state at the requested epoch, which usually takes upwards of 20s, and sometimes minutes with SPRP=8192. The depth of the `randao_mixes` array allows us to look back 64K epochs/0.8 years from a single state, which is pretty awesome!

We always use the `state_id` provided by the caller, but will return a nice 400 error if the epoch requested is out of range for the state requested, e.g.

```bash
# Prater
curl "http://localhost:5052/eth/v1/beacon/states/3170304/committees?epoch=33538"
```

```json
{"code":400,"message":"BAD_REQUEST: epoch out of bounds, try state at slot 1081344","stacktraces":[]}
```

Queries will be fastest when aligned to `slot % SPRP == 0`, so the hint suggests a slot that is 0 mod 8192.
2022-07-04 02:56:11 +00:00
Divma
66bb5c716c Use latest tags for nethermind and geth in the execution engine integration test (#3303)
## Issue Addressed

Currently the execution-engine-integration test uses latest master for nethermind and geth, and right now the test fails using the latest unreleased commits.

## Proposed Changes
Pin the nethermind and geth revisions the test uses to the latest tag in each repo. This way we are not continuously testing unreleased code, which might even get reverted, and failures are limited to released code in each one.
Also improve error handling of the commands used to manage the git repos.

## Additional Info
na

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-07-03 05:36:51 +00:00
Paul Hauner
be4e261e74 Use async code when interacting with EL (#3244)
## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than that of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least twice during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - Avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
  cached_head: RwLock<Arc<Snapshot>>,
  fork_choice: RwLock<BeaconForkChoice>
} 
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
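A simplified illustration of the pattern (stand-in types; the real `CanonicalHead` holds more state):

```rust
// A reader clones the cheap `Arc` under a short-lived lock, then drops the lock before
// doing any real work, so it can never hold `cached_head` and `fork_choice` at once.
use std::sync::{Arc, RwLock};

struct Snapshot {
    head_slot: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
}

impl CanonicalHead {
    fn cached_head(&self) -> Arc<Snapshot> {
        // Lock, clone the `Arc`, release the lock immediately.
        self.cached_head.read().unwrap().clone()
    }
}

fn main() {
    let head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_slot: 4_000_000 })),
    };
    let snapshot = head.cached_head();
    // The read lock is already released here; use the snapshot freely.
    assert_eq!(snapshot.head_slot, 4_000_000);
}
```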

## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might use in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at slot `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.

## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like it would be a faithful representation of the slot when we saved it.

I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue, if anything I think it'll be better since the genesis-alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the `network` expected the 0x00..00 alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In the `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test, not so efficient but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.

Co-authored-by: Mac L <mjladson@pm.me>
2022-07-03 05:36:50 +00:00
Paul Hauner
e5212f1320 Avoid growing Vec for sync committee indices (#3301)
## Issue Addressed

NA

## Proposed Changes

This is a fairly simple micro-optimization to avoid using `Vec::grow`. I don't believe this will have a substantial effect on block processing times, however it was showing up in flamegraphs. I think it's worth making this change for general memory-hygiene.
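As a generic illustration of the flavour of change (not the PR's actual diff): pre-allocating the full capacity avoids repeated reallocation as elements are pushed.

```rust
// One allocation up front; the pushes below never trigger a regrow.
fn collect_indices(bits: &[bool]) -> Vec<usize> {
    let mut indices = Vec::with_capacity(bits.len());
    for (i, bit) in bits.iter().enumerate() {
        if *bit {
            indices.push(i);
        }
    }
    indices
}

fn main() {
    assert_eq!(collect_indices(&[true, false, true]), vec![0, 2]);
}
```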

## Additional Info

NA
2022-07-01 03:44:37 +00:00
realbigsean
a7da0677d5 Remove builder redundancy (#3294)
## Issue Addressed

This PR is a subset of the changes in #3134. Unstable will still not function correctly with the new builder spec once this is merged; #3134 should be used on testnets.

## Proposed Changes

- Removes redundancy in "builders" (servers implementing the builder spec)
- Renames `payload-builder` flag to `builder`
- Moves from old builder RPC API to new HTTP API, but does not implement the validator registration API (implemented in https://github.com/sigp/lighthouse/pull/3194)



Co-authored-by: sean <seananderson33@gmail.com>
Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-07-01 01:15:19 +00:00
Divma
d40c76e667 Fix clippy lints for rust 1.62 (#3300)
## Issue Addressed

Fixes some new clippy lints after the last rust release
### Lints fixed for the curious:
- [cast_abs_to_unsigned](https://rust-lang.github.io/rust-clippy/master/index.html#cast_abs_to_unsigned)
- [map_identity](https://rust-lang.github.io/rust-clippy/master/index.html#map_identity) 
- [let_unit_value](https://rust-lang.github.io/rust-clippy/master/index.html#let_unit_value)
- [crate_in_macro_def](https://rust-lang.github.io/rust-clippy/master/index.html#crate_in_macro_def) 
- [extra_unused_lifetimes](https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_lifetimes)
- [format_push_string](https://rust-lang.github.io/rust-clippy/master/index.html#format_push_string)
2022-06-30 22:51:49 +00:00
realbigsean
f6ec44f0dd Register validator api (#3194)
## Issue Addressed

Lays the groundwork for builder API changes by implementing the beacon-API's new `register_validator` endpoint

## Proposed Changes

- Add a routine in the VC that runs on startup (re-try until success), once per epoch or whenever `suggested_fee_recipient` is updated, signing `ValidatorRegistrationData` and sending it to the BN.
  -  TODO: `gas_limit` config options https://github.com/ethereum/builder-specs/issues/17
-  BN only sends VC registration data to builders on demand, but VC registration data *does update* the BN's prepare proposer cache and send an updated fcU to  a local EE. This is necessary for fee recipient consistency between the blinded and full block flow in the event of fallback.  Having the BN only send registration data to builders on demand gives feedback directly to the VC about relay status. Also, since the BN has no ability to sign these messages anyways (so couldn't refresh them if it wanted), and validator registration is independent of the BN head, I think this approach makes sense. 
- Adds upcoming consensus spec changes for this PR https://github.com/ethereum/consensus-specs/pull/2884
  -  I initially applied the bit mask based on a configured application domain, but I ended up just hard-coding it here instead because that's how it's spec'd in the builder repo.
  -  Should the application mask appear in the API?
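For reference, a sketch of the registration message shape (field names follow the builder/beacon-API specs; the byte-array types here are illustrative stand-ins):

```rust
// The VC signs one of these per validator and posts the batch to the BN, which caches
// it for proposer preparation and forwards it to builders on demand.
struct ValidatorRegistrationData {
    fee_recipient: [u8; 20],
    gas_limit: u64,
    timestamp: u64,
    pubkey: [u8; 48],
}

struct SignedValidatorRegistrationData {
    message: ValidatorRegistrationData,
    signature: [u8; 96],
}

fn main() {
    let registration = SignedValidatorRegistrationData {
        message: ValidatorRegistrationData {
            fee_recipient: [0u8; 20],
            gas_limit: 30_000_000,
            timestamp: 1_656_000_000,
            pubkey: [0u8; 48],
        },
        signature: [0u8; 96],
    };
    let _ = registration;
}
```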



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-06-30 00:49:21 +00:00
Pawan Dhananjay
5de00b7ee8 Unify execution layer endpoints (#3214)
## Issue Addressed

Resolves #3069 

## Proposed Changes

Unify the `eth1-endpoints` and `execution-endpoints` flags in a backwards compatible way as described in https://github.com/sigp/lighthouse/issues/3069#issuecomment-1134219221

Users have 2 options:
1. Use multiple non auth execution endpoints for deposit processing pre-merge
2. Use a single jwt authenticated execution endpoint for both execution layer and deposit processing post merge

Related https://github.com/sigp/lighthouse/issues/3118

To enable jwt authenticated deposit processing, this PR removes the calls to `net_version` as the `net` namespace is not exposed in the auth server in execution clients. 
Moving away from using `networkId` is a good step in my opinion as it doesn't provide us with any added guarantees over `chainId`. See https://github.com/ethereum/consensus-specs/issues/2163 and https://github.com/sigp/lighthouse/issues/2115


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-06-29 09:07:09 +00:00
Michael Sproul
53b2b500db Extend block reward APIs (#3290)
## Proposed Changes

Add a new HTTP endpoint `POST /lighthouse/analysis/block_rewards` which takes a vec of `BeaconBlock`s as input and outputs the `BlockReward`s for them.

Augment the `BlockReward` struct with the attestation data for attestations in the block, which simplifies access to this information from blockprint. Using attestation data I've been able to make blockprint up to 95% accurate across Prysm/Lighthouse/Teku/Nimbus. I hope to go even higher using a bunch of synthetic blocks produced for Prysm/Nimbus/Lodestar, which are underrepresented in the current training data.
2022-06-29 04:50:37 +00:00
Michael Sproul
36453929d5 Update Cross config for v0.2.2 (#3286)
## Proposed Changes

Update `Cross.toml` for the recently released Cross v0.2.2. This allows us to remove the dependency on my fork of the Cross Docker image, which was a maintenance burden and prone to bit-rot. This PR puts us back in sync with upstream Cross.

## Additional Info

Due to some bindgen errors on the default Cross images we seemingly need a full `clang-3.9` install. The `libclang-3.9-dev` package was found to be insufficient due to `stdarg.h` being missing.

In order to continue building locally all Lighthouse devs should update their local cross version with `cargo install cross`.
2022-06-29 04:50:36 +00:00
Paul Hauner
45b2eb18bc v2.3.2-rc.0 (#3289)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2022-06-28 03:03:30 +00:00
Paul Hauner
f3a1b5da31 Update Sepolia TTD (#3288)
## Issue Addressed

NA

## Proposed Changes

Update Sepolia TTD as per https://github.com/eth-clients/merge-testnets/pull/21

## Additional Info

NA
2022-06-27 22:50:27 +00:00
Pawan Dhananjay
7acfbd89ee Recover from NonConsecutive eth1 errors (#3273)
## Issue Addressed

Fixes #1864 and a bunch of other closed but unresolved issues.

## Proposed Changes

Allows the deposit cache to recover from `NonConsecutive` deposit errors by resetting the last processed block to the last valid deposit's block number. Still not sure of the underlying cause of this error, but this should recover the cache so we don't need `--eth1-purge-cache` anymore 🎉
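A hypothetical sketch of the recovery step (stand-in types, not the actual deposit cache code):

```rust
// On a `NonConsecutive` error, rewind the cache's last-processed block to the block of
// the last deposit it actually holds, then let normal syncing re-fetch from there.
struct DepositCache {
    last_processed_block: u64,
    last_valid_deposit_block: u64,
}

enum DepositError {
    NonConsecutive { expected: u64, got: u64 },
}

fn handle_deposit_error(cache: &mut DepositCache, error: DepositError) {
    match error {
        DepositError::NonConsecutive { .. } => {
            // Rewind instead of requiring the operator to run --eth1-purge-cache.
            cache.last_processed_block = cache.last_valid_deposit_block;
        }
    }
}

fn main() {
    let mut cache = DepositCache {
        last_processed_block: 15_000_100,
        last_valid_deposit_block: 15_000_000,
    };
    handle_deposit_error(&mut cache, DepositError::NonConsecutive { expected: 7, got: 9 });
    assert_eq!(cache.last_processed_block, 15_000_000);
}
```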

A huge thanks to @one-three-three-seven for reproducing the error and providing the data that helped testing out the fix 🙌 

Still needs a few more tests.
2022-06-26 23:10:58 +00:00