## Issue Addressed
NA
## Proposed Changes
- Uses a `Vec` in `SingleEpochParticipationCache` rather than `HashMap` to speed up processing times at the cost of memory usage.
- Cache the result of `integer_sqrt` rather than recomputing for each validator.
- Cache `state.previous_epoch` rather than recomputing it for each validator.
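For illustration, a minimal sketch of the "hoist loop invariants" pattern described above, using simplified names and a phase0-style base reward formula rather than Lighthouse's actual epoch-processing code:

```rust
const BASE_REWARD_FACTOR: u64 = 64;

/// Integer square root via Newton's method (a stand-in for the `integer_sqrt` crate).
fn integer_sqrt(n: u64) -> u64 {
    if n == 0 {
        return 0;
    }
    let mut x = n;
    let mut y = (x + 1) / 2;
    while y < x {
        x = y;
        y = (x + n / x) / 2;
    }
    x
}

/// Compute per-validator base rewards, evaluating the square root once per epoch
/// transition rather than once per validator.
fn base_rewards(effective_balances: &[u64], total_active_balance: u64) -> Vec<u64> {
    let sqrt_total = integer_sqrt(total_active_balance).max(1);
    effective_balances
        .iter()
        .map(|balance| balance * BASE_REWARD_FACTOR / sqrt_total)
        .collect()
}
```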
### Benchmarks
Benchmarks on a recent mainnet state using #3252 to get timing.
#### Without this PR
```
lcli skip-slots --state-path /tmp/state-0x3cdc.ssz --partial-state-advance --slots 32 --state-root 0x3cdc33cd02713d8d6cc33a6dbe2d3a5bf9af1d357de0d175a403496486ff845e --runs 10
[2022-06-09T08:21:02Z INFO lcli::skip_slots] Using mainnet spec
[2022-06-09T08:21:02Z INFO lcli::skip_slots] Advancing 32 slots
[2022-06-09T08:21:02Z INFO lcli::skip_slots] Doing 10 runs
[2022-06-09T08:21:02Z INFO lcli::skip_slots] State path: "/tmp/state-0x3cdc.ssz"
SSZ decoding /tmp/state-0x3cdc.ssz: 43ms
[2022-06-09T08:21:03Z INFO lcli::skip_slots] Run 0: 245.718794ms
[2022-06-09T08:21:03Z INFO lcli::skip_slots] Run 1: 245.364782ms
[2022-06-09T08:21:03Z INFO lcli::skip_slots] Run 2: 255.866179ms
[2022-06-09T08:21:04Z INFO lcli::skip_slots] Run 3: 243.838909ms
[2022-06-09T08:21:04Z INFO lcli::skip_slots] Run 4: 250.431425ms
[2022-06-09T08:21:04Z INFO lcli::skip_slots] Run 5: 248.68765ms
[2022-06-09T08:21:04Z INFO lcli::skip_slots] Run 6: 262.051113ms
[2022-06-09T08:21:05Z INFO lcli::skip_slots] Run 7: 264.293967ms
[2022-06-09T08:21:05Z INFO lcli::skip_slots] Run 8: 293.202007ms
[2022-06-09T08:21:05Z INFO lcli::skip_slots] Run 9: 264.552017ms
```
#### With this PR
```
lcli skip-slots --state-path /tmp/state-0x3cdc.ssz --partial-state-advance --slots 32 --state-root 0x3cdc33cd02713d8d6cc33a6dbe2d3a5bf9af1d357de0d175a403496486ff845e --runs 10
[2022-06-09T08:57:59Z INFO lcli::skip_slots] Run 0: 73.898678ms
[2022-06-09T08:57:59Z INFO lcli::skip_slots] Run 1: 75.536978ms
[2022-06-09T08:57:59Z INFO lcli::skip_slots] Run 2: 75.176104ms
[2022-06-09T08:57:59Z INFO lcli::skip_slots] Run 3: 76.460828ms
[2022-06-09T08:57:59Z INFO lcli::skip_slots] Run 4: 75.904195ms
[2022-06-09T08:58:00Z INFO lcli::skip_slots] Run 5: 75.53077ms
[2022-06-09T08:58:00Z INFO lcli::skip_slots] Run 6: 74.745572ms
[2022-06-09T08:58:00Z INFO lcli::skip_slots] Run 7: 75.823489ms
[2022-06-09T08:58:00Z INFO lcli::skip_slots] Run 8: 74.892055ms
[2022-06-09T08:58:00Z INFO lcli::skip_slots] Run 9: 76.333569ms
```
## Additional Info
NA
## Description
Add a new lint to CI that attempts to detect calls to functions like `block_on` from async execution contexts. The lint was written from scratch specifically for this purpose, on my fork of Clippy: https://github.com/michaelsproul/rust-clippy/tree/disallow-from-async
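For context, this is the kind of pattern the lint is intended to flag (a hedged illustration; the lint's exact scope is defined in the fork linked above):

```rust
// Calling `block_on` from inside an async task panics at runtime
// ("cannot block the current thread from within a runtime"); the lint
// aims to catch this statically instead.
async fn flagged_by_the_lint() {
    let handle = tokio::runtime::Handle::current();
    handle.block_on(async { /* some nested future */ });
}
```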
## Additional Info
- I've successfully detected the previous two issues we had with `block_on` by running the linter on the commits prior to each of these PRs: https://github.com/sigp/lighthouse/pull/3165, https://github.com/sigp/lighthouse/pull/3199.
- The lint runs on CI with `continue-on-error: true` so that if it fails spuriously it won't block CI.
- I think it would be good to merge this PR before https://github.com/sigp/lighthouse/pull/3244 so that we can lint the extensive executor-related changes in that PR.
- I aim to upstream the lint to Clippy, at which point building a custom version of Clippy from my fork will no longer be necessary. I imagine this will take several weeks or months though, because the code is currently a bit hacky and will need some renovations to pass review.
## Issue Addressed
Newer versions of MDBX have removed Windows and macOS support, so this PR pins MDBX at the last working version to prevent an accidental regression via `cargo update`.
## Additional Info
This is a short-term solution: if our pinned version of MDBX turns out to be buggy, we will need to consider backporting patches from upstream to our own fork.
## Issue Addressed
Failures in our CI integration tests for Geth.
## Proposed Changes
Only connect to the authenticated execution endpoints during execution tests.
This is necessary now that it is impossible to connect to the `engine` API on an unauthenticated endpoint.
See https://github.com/ethereum/go-ethereum/pull/24997
## Additional Info
As these tests break semi-regularly, I have kept logs enabled to ease future debugging.
I've also updated the Nethermind tests, although these weren't broken. This should future-proof us if Nethermind decides to follow suit with Geth.
## Issue Addressed
Currently we count a failed attempt for a syncing chain even if the peer is not at fault. This makes us do more work if the chain fails and heavily penalizes peers when we could simply retry. Inspired by a proposal I made in #3094.
## Proposed Changes
If a batch fails but the peer is not at fault, do not count the attempt.
This also removes some annoying logs.
## Additional Info
We still keep a counter of ignored attempts, just in case.
## Issue Addressed
NA
## Proposed Changes
Updates libp2p to https://github.com/libp2p/rust-libp2p/pull/2662
## Additional Info
From comments on the relevant PRs listed, we should pay attention to peer management consistency, but I don't think anything weird will happen.
This is running on Prater (tok and sin).
## Issue Addressed
Fixes a timing issue that results in spurious fork choice notifier failures:
```
WARN Error signalling fork choice waiter slot: 3962270, error: ForkChoiceSignalOutOfOrder { current: Slot(3962271), latest: Slot(3962270) }, service: beacon
```
There’s a fork choice run that is scheduled to run at the start of every slot by the `timer`, which creates a 12s interval timer when the beacon node starts up. The problem is that if there’s a bit of clock drift that gets corrected via NTP (or a leap second for that matter) then these 12s intervals will cease to line up with the start of the slot. This then creates the mismatch in slot number that we see above.
Lighthouse also runs fork choice 500ms before the slot begins, and these runs are what is conflicting with the start-of-slot runs. This means that the warning in current versions of Lighthouse is mostly cosmetic because fork choice is up to date with all but the most recent 500ms of attestations (which usually isn’t many).
## Proposed Changes
Fix the per-slot timer so that it continually re-calculates the duration to the start of the next slot and waits for that.
A side-effect of this change is that we may skip slots if the per-slot task takes >12s to run, but I think this is an unlikely scenario and an acceptable compromise.
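A minimal sketch of the re-calculating timer, assuming a hard-coded 12s slot and illustrative names (not the actual Lighthouse timer code):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

const SECONDS_PER_SLOT: u64 = 12;

/// Time remaining until the next slot boundary, derived from the genesis time.
fn duration_to_next_slot(genesis_secs: u64) -> Duration {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock after epoch")
        .as_secs();
    let since_genesis = now.saturating_sub(genesis_secs);
    Duration::from_secs(SECONDS_PER_SLOT - since_genesis % SECONDS_PER_SLOT)
}

async fn per_slot_task_loop(genesis_secs: u64) {
    loop {
        // Re-compute the wait on every iteration so NTP adjustments or a slow
        // task run can't cause the timer to drift off the slot boundaries.
        tokio::time::sleep(duration_to_next_slot(genesis_secs)).await;
        run_per_slot_task().await;
    }
}

async fn run_per_slot_task() { /* fork choice, notifier, etc. */ }
```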
## Issue Addressed
Reduces the effect of late blocks on overall node buildup
## Proposed Changes
Change the capacity of the channels used in the beacon processor to send work for reprocessing, and to send it back to the main processor task, to 75% of the capacity of the channel for receiving new events.
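In other words (placeholder constant names; the real queue lengths live in the beacon processor):

```rust
// Hypothetical constants illustrating the new relationship between the queues.
const MAX_WORK_EVENT_QUEUE_LEN: usize = 16_384;
const MAX_SCHEDULED_WORK_QUEUE_LEN: usize = 3 * MAX_WORK_EVENT_QUEUE_LEN / 4;
```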
## Additional Info
The issues we've seen suggest we should still evaluate node performance under stress, with late blocks being a big factor.
Other changes that could help:
1. Right now we have a cap on queued attestations for reprocessing that applies to the sum of aggregated and unaggregated attestations. We could consider adding a separate cap that favors aggregated ones.
2. Solving #2848.
## Issue Addressed
Fix for the eth1 cache sync issue observed on Ropsten.
## Proposed Changes
Ropsten blocks are so infrequent that they broke our algorithm for downloading eth1 blocks. We currently try to download forwards from the last block in our cache to the block with block number [`remote_highest_block - FOLLOW_DISTANCE + FOLLOW_DISTANCE / ETH1_BLOCK_TIME_TOLERANCE_FACTOR`](6f732986f1/beacon_node/eth1/src/service.rs (L489-L492)). With the tolerance set to 4 this is insufficient because we lag by 1536 blocks, which is more like ~14 hours on Ropsten. This results in us having an incomplete eth1 cache, because we should cache all blocks between -16h and -8h. Even if we were to set the tolerance to 2 for the largest allowance, we would only look back 1024 blocks which is still more than 8 hours.
For example consider this block https://ropsten.etherscan.io/block/12321390. The block from 1536 blocks earlier is 14 hours and 20 minutes before it: https://ropsten.etherscan.io/block/12319854. The block from 1024 blocks earlier is https://ropsten.etherscan.io/block/12320366, 8 hours and 48 minutes before.
- This PR introduces a new CLI flag called `--eth1-cache-follow-distance` which can be used to set the distance manually.
- A new dynamic catchup mechanism is added which detects when the cache is lagging the true eth1 chain and tries to download more blocks within the follow distance in order to catch up.
## Issue Addressed
N/A
## Proposed Changes
Preemptively switch Nethermind integration tests to use the `master` branch along with the baked in `kiln` config.
## Additional Info
There have been some spurious timeouts across CI so this also increases the timeout to 20s.
## Issue Addressed
#3156
## Proposed Changes
Emit a `WARN` log whenever the value of `fee_recipient` as returned from the EE is different from the value of `suggested_fee_recipient` as set on the BN, for example by the `--suggested-fee-recipient` CLI flag.
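A hedged sketch of the check (the `Address` alias and `warn!` macro here are stand-ins; Lighthouse logs via `slog` and uses its own types):

```rust
use log::warn;

type Address = [u8; 20];

fn check_fee_recipient(suggested_fee_recipient: Address, payload_fee_recipient: Address) {
    if payload_fee_recipient != suggested_fee_recipient {
        warn!(
            "Fee recipient returned by the EE ({:?}) does not match the suggested \
             fee recipient ({:?})",
            payload_fee_recipient, suggested_fee_recipient
        );
    }
}
```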
## Additional Info
I have set the log level to `WARN` since it is legal behaviour (meaning it isn't really an error but is important to know when it is occurring).
If we feel like this behaviour is almost always undesired (caused by a misconfiguration or malicious EE) then an `ERRO` log would be more appropriate. Happy to change it in that case.
## Issue Addressed
N/A
## Proposed Changes
Use a stable version of the Ubuntu base image in the Dockerfile instead of `latest`. This will help in narrowing down issues with Docker images.
## Proposed Changes
Speed up epoch processing by around 10% by inlining methods from the `safe_arith` crate.
The Rust standard library uses `#[inline]` for the `checked_` functions that we're wrapping, so it makes sense for us to inline them too.
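Roughly, the change amounts to the following (a simplified `SafeArith`-style trait; the real crate's API may differ in detail):

```rust
#[derive(Debug, PartialEq)]
pub struct ArithError;

pub trait SafeArith: Sized {
    fn safe_add(self, other: Self) -> Result<Self, ArithError>;
}

impl SafeArith for u64 {
    // `u64::checked_add` is `#[inline]` in std, so inlining the wrapper lets the
    // whole call collapse into a single overflow-checked add at the call site.
    #[inline]
    fn safe_add(self, other: Self) -> Result<Self, ArithError> {
        self.checked_add(other).ok_or(ArithError)
    }
}
```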
## Additional Info
I conducted a brief statistical test on the block at slot [3858336](https://beaconcha.in/block/3858336) applied to the state at slot 3858335, which requires an epoch transition. The command used for testing was:
```
lcli transition-blocks --testnet-dir ./common/eth2_network_config/built_in_network_configs/mainnet --no-signature-verification state.ssz block.ssz output.ssz
```
The testing found that inlining reduced the epoch transition time from 398ms to 359ms, a reduction of 9.77%, which was found to be statistically significant with a two-tailed t-test (p < 0.01). Data and intermediate calculations can be found here: https://docs.google.com/spreadsheets/d/1tlf3eFjz3dcXeb9XVOn21953uYpc9RdQapPtcHGH1PY
## Issue Addressed
We were logging `out_finalized_epoch` instead of `our_finalized_epoch`. I noticed this ages ago but only just got around to fixing it.
## Additional Info
I also reformatted the log line to respect the line length limit (`rustfmt` won't do it because it gets confused by the `;` in slog's log macros).
## Proposed Changes
It's reasonably often that we want to manually convert an attestation to indexed form. This PR adds an `lcli` command for doing this, using an SSZ state and a list of JSON attestations (as extracted from a JSON block) as input.
## Issue Addressed
NA
## Proposed Changes
## Additional Info
- Pending testing on our infra. **Please do not merge**
## Issue Addressed
#3212
## Proposed Changes
Move chain segments coming from back-fill syncing from highest priority to lowest.
## Additional Info
If this does not solve the issue, the next steps would be lowering the batch size for back-fill sync and, as a last resort, throttling the processing of these chain segments.
## Issue Addressed
#3154
## Proposed Changes
Add three new metrics for the VC:
1. `vc_beacon_nodes_synced_count`
2. `vc_beacon_nodes_available_count`
3. `vc_beacon_nodes_total_count`
Their values mirror the values present in the following log line:
```
Apr 08 17:25:17.000 INFO Connected to beacon node(s) synced: 4, available: 4, total: 4, service: notifier
```
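A sketch of how such gauges could be registered with the plain `prometheus` crate (Lighthouse goes through its own `lighthouse_metrics` wrapper, so take the exact calls as illustrative):

```rust
use prometheus::{IntGauge, Registry};

fn register_bn_gauges(registry: &Registry) -> prometheus::Result<(IntGauge, IntGauge, IntGauge)> {
    let synced = IntGauge::new("vc_beacon_nodes_synced_count", "Synced beacon nodes")?;
    let available = IntGauge::new("vc_beacon_nodes_available_count", "Available beacon nodes")?;
    let total = IntGauge::new("vc_beacon_nodes_total_count", "Configured beacon nodes")?;
    registry.register(Box::new(synced.clone()))?;
    registry.register(Box::new(available.clone()))?;
    registry.register(Box::new(total.clone()))?;
    Ok((synced, available, total))
}
```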
## Issue Addressed
Fixes an issue that @paulhauner found with the v2.3.0 release candidate whereby the fork choice runs introduced by #3168 tripped over each other during sync:
```
May 24 23:06:40.542 WARN Error signalling fork choice waiter slot: 3884129, error: ForkChoiceSignalOutOfOrder { current: Slot(3884131), latest: Slot(3884129) }, service: beacon
```
This can occur because fork choice is called from the state advance _and_ the per-slot task. When one of these runs takes a long time it can end up finishing after a run from a later slot, tripping the error above. The problem is resolved by not running either of these fork choice calls during sync.
Additionally, these parallel fork choice runs were causing issues in the database:
```
May 24 07:49:05.098 WARN Found a chain that should already have been pruned, head_slot: 92925, head_block_root: 0xa76c7bf1b98e54ed4b0d8686efcfdf853484e6c2a4c67e91cbf19e5ad1f96b17, service: beacon
May 24 07:49:05.101 WARN Database migration failed error: HotColdDBError(FreezeSlotError { current_split_slot: Slot(92608), proposed_split_slot: Slot(92576) }), service: beacon
```
In this case, two fork choice calls triggering the finalization processing were being processed out of order due to differences in their processing time, causing the background migrator to try to advance finalization _backwards_ 😳. Removing the parallel fork choice runs from sync effectively addresses the issue, because these runs are most likely to have different finalized checkpoints (because of the speed at which fork choice advances during sync). In theory it's still possible to process updates out of order if any other fork choice runs end up completing out of order, but this should be much less common. Fixing out of order fork choice runs in general is difficult as it requires architectural changes like serialising fork choice updates through a single thread, or locking fork choice along with the head when it is mutated (https://github.com/sigp/lighthouse/pull/3175).
## Proposed Changes
* Don't run per-slot fork choice during sync (if head is older than 4 slots)
* Don't run state-advance fork choice during sync (if head is older than 4 slots)
* Check for monotonic finalization updates in the background migrator. This is a good defensive check to have, and I'm not sure why we didn't have it before (we may have had it and wrongly removed it).
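The guard is roughly of this shape (the 4-slot tolerance mirrors the description above; names are illustrative, not the exact Lighthouse code):

```rust
const SYNC_TOLERANCE_SLOTS: u64 = 4;

/// Only run the per-slot / state-advance fork choice calls when the node is
/// near the head of the chain; during sync they are skipped entirely.
fn should_run_fork_choice(head_slot: u64, current_slot: u64) -> bool {
    head_slot + SYNC_TOLERANCE_SLOTS >= current_slot
}
```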
## Proposed Changes
Add documentation for the `lighthouse db migrate` command, which users will be able to use to downgrade from Lighthouse v2.3.0 on non-merge networks (mainnet & Prater).
I think it's important to get this into the live instance of the book so we can link to it from the v2.3.0 release notes.
*This PR was adapted from @pawanjay176's work in #3197.*
## Issue Addressed
Fixes a regression in https://github.com/sigp/lighthouse/pull/3168
## Proposed Changes
https://github.com/sigp/lighthouse/pull/3168 added calls to `fork_choice` in the `BeaconChain::per_slot_task` function. This leads to a panic because `per_slot_task` is called from an async context, and fork choice in turn calls `block_on`.
This PR changes the timer to call the `per_slot_task` function in a blocking thread.
Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
## Issue Addressed
Removes the build status badge from the main README.md. I don't think it actually serves a purpose and it also has the downside that a spurious failure gives us a red badge. For example, v2.2.1 failed with a [spurious failure](https://github.com/sigp/lighthouse/runs/5984392665?check_suite_focus=true) and I can't see a way to re-trigger that run. It will be red until our next release.
The same test suite runs when we merge into `unstable`, so those tests must have already passed in order for the commits to get onto `stable` (assuming our workflow is followed). Github will send notifications on failed CI, so we'll still be alerted to a failure without checking this badge.
## Issue Addressed
This fixes the low-hanging Clippy lints introduced in Rust 1.61 (due any hour now). It _ignores_ one lint, because fixing it requires a structural refactor of the validator client that needs to be done delicately. I've started on that refactor and will create another PR that can be reviewed in more depth in the coming days. I think we should merge this PR in the meantime to unblock CI.
## Issue Addressed
NA
## Proposed Changes
Adds the configuration for the upcoming merge of the Ropsten network, as per:
https://github.com/eth-clients/merge-testnets/pull/9
Use the Ropsten network with: `lighthouse --network ropsten`
## Additional Info
This is still a work-in-progress. We should wait for the eth-clients/merge-testnets PR to be approved before merging this into our `unstable`.
## Issue Addressed
Upcoming spec change https://github.com/ethereum/consensus-specs/pull/2878
## Proposed Changes
1. Run fork choice at the start of every slot, and wait for this run to complete before proposing a block.
2. As an optimisation, also run fork choice 3/4 of the way through the slot (at 9s), _dequeueing attestations for the next slot_.
3. Remove the fork choice run from the state advance timer that occurred before advancing the state.
## Additional Info
### Block Proposal Accuracy
This change makes us more likely to propose on top of the correct head in the presence of re-orgs with proposer boost in play. The main scenario that this change is designed to address is described in the linked spec issue.
### Attestation Accuracy
This change _also_ makes us more likely to attest to the correct head. Currently in the case of a skipped slot at `slot` we only run fork choice 9s into `slot - 1`. This means the attestations from `slot - 1` aren't taken into consideration, and any boost applied to the block from `slot - 1` is not removed (it should be). In the language of the linked spec issue, this means we are liable to attest to C, even when the majority voting weight has already caused a re-org to B.
### Why remove the call before the state advance?
If we've run fork choice at the start of the slot then it has already dequeued all the attestations from the previous slot, which are the only ones eligible to influence the head in the current slot. Running fork choice again is unnecessary (unless we run it for the next slot and try to pre-empt a re-org, but I don't currently think this is a great idea).
### Performance
Based on Prater testing this adds about 5-25ms of runtime to block proposal times, which are 500-1000ms on average (and spike to 5s+ sometimes due to state handling issues 😢 ). I believe this is a small enough penalty to enable it by default, with the option to disable it via the new flag `--fork-choice-before-proposal-timeout 0`. Upcoming work on block packing and state representation will also reduce block production times in general, while removing the spikes.
### Implementation
Fork choice gets invoked at the start of the slot via the `per_slot_task` function called from the slot timer. It then uses a condition variable to signal to block production that fork choice has been updated. This is a bit funky, but it seems to work. One downside of the timer-based approach is that it doesn't happen automatically in most of the tests. The test added by this PR has to trigger the run manually.
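A minimal sketch of that condition-variable handshake, using illustrative types (the real signalling lives in the beacon chain crate):

```rust
use std::sync::{Condvar, Mutex};
use std::time::Duration;

struct ForkChoiceSignal {
    latest_slot: Mutex<u64>,
    condvar: Condvar,
}

impl ForkChoiceSignal {
    /// Called from the per-slot timer after fork choice has run for `slot`.
    fn notify(&self, slot: u64) {
        *self.latest_slot.lock().unwrap() = slot;
        self.condvar.notify_all();
    }

    /// Called from block production: block until fork choice has caught up to
    /// the proposal slot, or the timeout expires.
    fn wait_for(&self, slot: u64, timeout: Duration) -> bool {
        let guard = self.latest_slot.lock().unwrap();
        let (latest, _timed_out) = self
            .condvar
            .wait_timeout_while(guard, timeout, |latest| *latest < slot)
            .unwrap();
        *latest >= slot
    }
}
```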
## Issue Addressed
@z3n-chada is currently getting a `PayloadIdUnavailable` error when connecting Lighthouse to Erigon, and it's difficult to discern why, so this PR logs the response status from the EE when we hit a `PayloadIdUnavailable` error.
Co-authored-by: realbigsean <sean@sigmaprime.io>
## Issue Addressed
Web3Signer validators do not support client authentication. This means the `--tls-known-clients-file` option on Web3Signer can't be used with Lighthouse.
## Proposed Changes
Add two new fields to Web3Signer validators, `client_identity_path` and `client_identity_password`, which specify the path and password for a PKCS12 file containing a certificate and private key. If `client_identity_path` is present, use the certificate for SSL client authentication.
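A hedged sketch of how a PKCS12 identity can be attached to an HTTP client using `reqwest`'s `native-tls` backend (the surrounding Lighthouse plumbing is omitted):

```rust
use reqwest::{Client, Identity};

fn build_web3signer_client(pkcs12_bytes: &[u8], password: &str) -> reqwest::Result<Client> {
    // `client_identity_path` is read into `pkcs12_bytes`; `client_identity_password`
    // is the `password` used to decrypt the PKCS12 archive.
    let identity = Identity::from_pkcs12_der(pkcs12_bytes, password)?;
    Client::builder().identity(identity).build()
}
```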
## Additional Info
I am successfully validating on Prater using Web3Signer with client authentication.
## Issue Addressed
#3141
## Proposed Changes
Changes the algorithm for proposing blocks from
```
For each BN (first success):
- Produce a block
- Sign the block and store its root in the slashing protection DB
- Publish the block
```
to
```
For each BN (first success):
- Produce a block
Sign the block and store its root in the slashing protection DB
For each BN (first success):
- Publish the block
```
Separating production from publishing ensures that a signed block is only added to the slashing DB once.
## Issue Addressed
#3114
## Proposed Changes
1. Introduce the `mime` package.
2. Parse the `Accept` field in the header with `mime`.
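For example, a rough sketch of picking the first parsable media type out of an `Accept` header with the `mime` crate (real handling also needs to respect q-values):

```rust
use mime::Mime;

fn first_accepted(header: &str) -> Option<Mime> {
    header
        .split(',')
        .filter_map(|entry| entry.trim().parse::<Mime>().ok())
        .next()
}

// first_accepted("application/octet-stream, application/json")
//     yields Some(mime::APPLICATION_OCTET_STREAM).
```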
## Proposed Changes
Remove support for DB migrations that upgrade from schemas below version 5. This is mostly for cosmetic/code-quality reasons, as upgrading from versions of Lighthouse this old will almost always require a re-sync anyway.
## Additional Info
The minimum supported database schema is now version 5.
## Issue Addressed
N/A
## Proposed Changes
Prevents the early attester cache from producing attestations to future blocks. This bug could result in a missed head vote if the BN was requested to produce an attestation for an earlier slot than the head block during the (usually) short window of time between verifying a block and setting it as the head.
This bug was noticed in an [Antithesis](https://andreagrieser.com/) test and diagnosed by @realbigsean.
## Additional Info
NA
## Description
Since the `TaskExecutor` currently requires a `Weak<Runtime>`, it's impossible to use it in an async test where the `Runtime` is created outside our scope. Whilst we *could* create a new `Runtime` instance inside the async test, dropping that `Runtime` would cause a panic (you can't drop a `Runtime` in an async context).
To address this issue, this PR creates the `enum Handle`, which supports either:
- A `Weak<Runtime>` (for use in our production code)
- A `Handle` to a runtime (for use in testing)
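A rough sketch of the resulting enum, using illustrative names (the real type lives in the `task_executor` crate):

```rust
use std::sync::Weak;
use tokio::runtime::{Handle, Runtime};

#[derive(Clone)]
pub enum HandleProvider {
    /// Production: hold the runtime weakly so shutdown can drop it.
    Runtime(Weak<Runtime>),
    /// Tests: borrow a handle to a runtime owned by the test framework.
    Handle(Handle),
}

impl HandleProvider {
    /// Returns a usable handle, or `None` if the runtime has already shut down.
    pub fn handle(&self) -> Option<Handle> {
        match self {
            HandleProvider::Runtime(weak) => weak.upgrade().map(|rt| rt.handle().clone()),
            HandleProvider::Handle(handle) => Some(handle.clone()),
        }
    }
}
```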
In theory, there should be no change to the behaviour of our production code (beyond some slightly different descriptions in HTTP 500 errors), or even our tests. If there is no change, you might ask *"why bother?"*. There are two PRs (#3070 and #3175) that are waiting on these fixes to introduce some new tests. Since we've added the EL to the `BeaconChain` (for the merge), we are now doing more async stuff in tests.
I've also added a `RuntimeExecutor` to the `BeaconChainTestHarness`. Whilst that's not immediately useful, it will become useful in the near future with all the new async testing.
Code simplifications using `Option`/`Result` combinators to make pattern-matches a tad simpler.
Opinions on these are loosely held; happy to adjust in review.
Tool-aided by [comby-rust](https://github.com/huitseeker/comby-rust).
## Proposed Changes
Remove the `lcli` code which performs block packing analysis.
The `lcli` code has been deprecated by a more performant version available in the HTTP API added in #2879.
## Additional Info
Any applications depending on the `lcli` code should migrate to the version in the HTTP API.
The only feature which is unavailable in the API version is an estimate of live/dead validators. This was originally used to determine a closer approximation of block packing efficiencies, since offline validators have a disproportionate impact on efficiencies. However, the implementation in `lcli` is a poor approximation which cannot account for a multitude of factors. It is recommended to simply calculate relative efficiencies instead, or to use a more advanced method of determining live/dead validators.
## Proposed Changes
Reduce post-merge disk usage by not storing finalized execution payloads in Lighthouse's database.
⚠️ **This is achieved in a backwards-incompatible way for networks that have already merged** ⚠️. Kiln users and shadow fork enjoyers will be unable to downgrade after running the code from this PR. The upgrade migration may take several minutes to run, and can't be aborted after it begins.
The main changes are:
- New column in the database called `ExecPayload`, keyed by beacon block root.
- The `BeaconBlock` column now stores blinded blocks only.
- Lots of places that previously used full blocks now use blinded blocks, e.g. analytics APIs, block replay in the DB, etc.
- On finalization:
  - `prune_abandoned_forks` deletes non-canonical payloads whilst deleting non-canonical blocks.
- `migrate_db` deletes finalized canonical payloads whilst deleting finalized states.
- Conversions between blinded and full blocks are implemented in a compositional way, duplicating some work from Sean's PR #3134.
- The execution layer has a new `get_payload_by_block_hash` method that reconstructs a payload using the EE's `eth_getBlockByHash` call.
- I've tested manually that it works on Kiln, using Geth and Nethermind.
- This isn't necessarily the most efficient method, and new engine APIs are being discussed to improve this: https://github.com/ethereum/execution-apis/pull/146.
- We're depending on the `ethers` master branch, due to lots of recent changes. We're also using a workaround for https://github.com/gakonst/ethers-rs/issues/1134.
- Payload reconstruction is used in the HTTP API via `BeaconChain::get_block`, which is now `async`. Due to the `async` fn, the `blocking_json` wrapper has been removed.
- Payload reconstruction is used in network RPC to serve blocks-by-{root,range} responses. Here the `async` adjustment is messier, although I think I've managed to come up with a reasonable compromise: the handlers take the `SendOnDrop` by value so that they can drop it on _task completion_ (after the `fn` returns). Still, this is introducing disk reads onto core executor threads, which may have a negative performance impact (thoughts appreciated).
## Additional Info
- [x] For performance it would be great to remove the cloning of full blocks when converting them to blinded blocks to write to disk. I'm going to experiment with a `put_block` API that takes the block by value, breaks it into a blinded block and a payload, stores the blinded block, and then re-assembles the full block for the caller.
- [x] We should measure the latency of blocks-by-root and blocks-by-range responses.
- [x] We should add integration tests that stress the payload reconstruction (basic tests done, issue for more extensive tests: https://github.com/sigp/lighthouse/issues/3159)
- [x] We should (manually) test the schema v9 migration from several prior versions, particularly as blocks have changed on disk and some migrations rely on being able to load blocks.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
## Proposed Changes
Remove the bulky part of the EE integration test directory from the Docker build context so that we don't have to send 10GB of junk to the Docker daemon when building an image.
## Issue Addressed
#3068
## Proposed Changes
Adds support for the remote key API.
## Additional Info
Needed to add an `is_local_keystore` argument to `delete_definition_and_keystore` so we know whether to delete a local or a remote key. Previously this wasn't necessary because remotekeys (web3signers) couldn't be deleted.
## Issue Addressed
Resolves #3102
## Proposed Changes
- https://github.com/sigp/lighthouse/issues/3102#issuecomment-1114835063
- This is not an ideal solution, since the commit hash is missing from the version number, but I think it is sufficient.
## Additional Info
I've tested ... :
- `fallback` is updated via `change_version.sh`.
```shell
$ cd scripts/
$ ./change_version.sh 2.2.1 2.2.2
$ git diff ../common/lighthouse_version/src/lib.rs
```
```diff
@ common/lighthouse_version/src/lib.rs:20 @ pub const VERSION: &str = git_version!(
// NOTE: using --match instead of --exclude for compatibility with old Git
"--match=thiswillnevermatchlol"
],
- prefix = "Lighthouse/v2.2.1-",
- fallback = "Lighthouse/v2.2.1"
+ prefix = "Lighthouse/v2.2.2-",
+ fallback = "Lighthouse/v2.2.2"
);
```
- A package built without git info prints the expected version number (v2.2.1).
```shell
$ git archive HEAD --output=/tmp/lighthouse.zip
$ cd /tmp
$ unzip lighthouse.zip
$ cd lighthouse
$ cargo build --release
$ target/release/lighthouse --version
Lighthouse v2.2.1
BLS library: blst
SHA256 hardware acceleration: false
Specs: mainnet (true), minimal (false), gnosis (false)
```
## Issue Addressed
Fix a panic due to misuse of the Tokio executor when processing a forkchoiceUpdated response. We were previously calling `process_invalid_execution_payload` from the async function `update_execution_engine_forkchoice_async`, which resulted in a panic because `process_invalid_execution_payload` contains a call to fork choice, which ultimately calls `block_on`.
An example backtrace can be found here: https://gist.github.com/michaelsproul/ac5da03e203d6ffac672423eaf52fb20
## Proposed Changes
Wrap the call to `process_invalid_execution_payload` in a `spawn_blocking` so that `block_on` is no longer called from an async context.
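The shape of the fix, sketched with plain `tokio` (Lighthouse routes this through its `TaskExecutor`, and the `Chain` type here is a placeholder):

```rust
use std::sync::Arc;

struct Chain;

impl Chain {
    fn process_invalid_execution_payload(&self) {
        // ...calls into fork choice, which eventually calls `block_on`.
    }
}

async fn handle_invalid_payload(chain: Arc<Chain>) {
    // Move the `block_on`-containing call onto a blocking thread so it is
    // never executed from an async worker thread.
    tokio::task::spawn_blocking(move || chain.process_invalid_execution_payload())
        .await
        .expect("blocking task panicked");
}
```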
## Additional Info
- I've been thinking about how to catch bugs like this with static analysis (a new Clippy lint).
- The payload validation tests have been re-worked to support distinct responses from the mock EE for newPayload and forkchoiceUpdated. Three new tests have been added covering the `Invalid`, `InvalidBlockHash` and `InvalidTerminalBlock` cases.
- I think we need a bunch more tests of different legal and illegal variations.