Commit Graph

180 Commits

Author SHA1 Message Date
Jimmy Chen
70c4ae35ab
Merge branch 'unstable' into deneb-free-blobs
# Conflicts:
#	.github/workflows/docker.yml
#	.github/workflows/local-testnet.yml
#	.github/workflows/test-suite.yml
#	Cargo.lock
#	Cargo.toml
#	beacon_node/beacon_chain/src/beacon_chain.rs
#	beacon_node/beacon_chain/src/builder.rs
#	beacon_node/beacon_chain/src/test_utils.rs
#	beacon_node/execution_layer/src/engine_api/json_structures.rs
#	beacon_node/network/src/beacon_processor/mod.rs
#	beacon_node/network/src/beacon_processor/worker/gossip_methods.rs
#	beacon_node/network/src/sync/backfill_sync/mod.rs
#	beacon_node/store/src/config.rs
#	beacon_node/store/src/hot_cold_store.rs
#	common/eth2_network_config/Cargo.toml
#	consensus/ssz/src/decode/impls.rs
#	consensus/ssz_derive/src/lib.rs
#	consensus/ssz_derive/tests/tests.rs
#	consensus/ssz_types/src/serde_utils/mod.rs
#	consensus/tree_hash/src/impls.rs
#	consensus/tree_hash/src/lib.rs
#	consensus/types/Cargo.toml
#	consensus/types/src/beacon_state.rs
#	consensus/types/src/chain_spec.rs
#	consensus/types/src/eth_spec.rs
#	consensus/types/src/fork_name.rs
#	lcli/Cargo.toml
#	lcli/src/main.rs
#	lcli/src/new_testnet.rs
#	scripts/local_testnet/el_bootnode.sh
#	scripts/local_testnet/genesis.json
#	scripts/local_testnet/geth.sh
#	scripts/local_testnet/setup.sh
#	scripts/local_testnet/start_local_testnet.sh
#	scripts/local_testnet/vars.env
#	scripts/tests/doppelganger_protection.sh
#	scripts/tests/genesis.json
#	scripts/tests/vars.env
#	testing/ef_tests/Cargo.toml
#	validator_client/src/block_service.rs
2023-05-30 22:44:05 +10:00
Paul Hauner
01ae37ac37 Add more metrics for tracking sync messages (#4308)
## Issue Addressed

NA

## Proposed Changes

Adds metrics to track validators that are submitting equivocating (but not slashable) sync messages. This follows on from some research we've been doing in a separate fork of LH.
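
As an illustration, here is a minimal sketch of the kind of counter such a change might add, using the `prometheus` crate; the metric name and the `SyncMessageOutcome` type are assumptions, not Lighthouse's actual identifiers.

```rust
// Sketch only: illustrative names, not Lighthouse's metrics module.
use prometheus::{register_int_counter, IntCounter};
use std::sync::LazyLock;

static SYNC_MESSAGE_EQUIVOCATIONS: LazyLock<IntCounter> = LazyLock::new(|| {
    register_int_counter!(
        "sync_message_equivocations_total",
        "Sync messages conflicting with an earlier message from the same validator"
    )
    .expect("metric registers")
});

enum SyncMessageOutcome {
    Accepted,
    // Same validator and slot, different beacon block root: not slashable,
    // so the most we can do is count it.
    Equivocation,
}

fn observe_sync_message(outcome: &SyncMessageOutcome) {
    if matches!(outcome, SyncMessageOutcome::Equivocation) {
        SYNC_MESSAGE_EQUIVOCATIONS.inc();
    }
}
```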

## Additional Info

@jimmygchen and @michaelsproul have already run their eyes over this so it should be easy to get into v4.2.0, IMO.
2023-05-19 05:13:07 +00:00
Michael Sproul
3052db29fe Implement el_offline and use it in the VC (#4295)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/4291, part of #3613.

## Proposed Changes

- Implement the `el_offline` field on `/eth/v1/node/syncing` (see the sketch after this list). We set `el_offline=true` if:
  - The EL's internal status is `Offline` or `AuthFailed`, _or_
  - The most recent call to `newPayload` resulted in an error (more on this in a moment).

- Use the `el_offline` field in the VC to mark nodes with offline ELs as _unsynced_. These nodes will still be used, but only after synced nodes.
- Overhaul the usage of `RequireSynced` so that `::No` is used almost everywhere. The `--allow-unsynced` flag was broken and had the opposite of its intended effect, so it has been deprecated.
- Add tests for the EL being offline on the upcheck call, and being offline due to the newPayload check.
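
A minimal sketch of the `el_offline` rule described above, under assumed type names (the real implementation spans the execution layer and HTTP API crates):

```rust
// Sketch only: type and field names are assumptions, not Lighthouse's API.
enum EngineState {
    Synced,
    Syncing,
    Offline,
    AuthFailed,
}

struct ExecutionLayerStatus {
    state: EngineState,
    // Updated after every engine_newPayload call; see the section below on
    // why newPayload errors are tracked.
    last_new_payload_errored: bool,
}

impl ExecutionLayerStatus {
    fn el_offline(&self) -> bool {
        matches!(self.state, EngineState::Offline | EngineState::AuthFailed)
            || self.last_new_payload_errored
    }
}
```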


## Why track `newPayload` errors?

Tracking the EL's online/offline status is too coarse-grained to be useful in practice, because:

- If the EL is timing out on some calls, it's unlikely to time out on the `upcheck` call, which is _just_ `eth_syncing`. Every failed call is followed by an upcheck [here](693886b941/beacon_node/execution_layer/src/engines.rs (L372-L380)), which would have the effect of masking the failure and keeping the status _online_.
- The `newPayload` call is the most likely to time out. It's the call in which ELs tend to do most of their work (often 1-2 seconds), with `forkchoiceUpdated` usually returning much faster (<50ms).
- If `newPayload` is failing consistently (e.g. timing out) then this is a good indication that either the node's EL is in trouble, or the network as a whole is. In the first case validator clients _should_ prefer other BNs if they have one available. In the second case, all of their BNs will likely report `el_offline` and they'll just have to proceed with trying to use them.

## Additional Changes

- Add the utility method `ForkName::latest`, which is convenient for test writing and probably other things too.
- Delete some stale comments from when we used to support multiple execution nodes.
2023-05-17 05:51:56 +00:00
Jimmy Chen
40abaefffb Attestation verification uses head state fork (#4263)
## Issue Addressed

Addresses #4238 

## Proposed Changes

- [x] Add tests for the scenarios
- [x] Use the fork of the attestation slot for signature verification (sketched below).
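
A minimal sketch of the fix, with stand-in types (Lighthouse's real `ForkName` and `ChainSpec` are richer): derive the fork from the attestation's own slot, since near a fork boundary it can differ from the head state's fork and produce the wrong signing domain.

```rust
// Sketch only: simplified types; mainnet's 32 slots per epoch assumed.
const SLOTS_PER_EPOCH: u64 = 32;

enum ForkName {
    Base,
    Altair,
    Bellatrix,
}

struct ChainSpec {
    altair_fork_epoch: u64,
    bellatrix_fork_epoch: u64,
}

// Pick the fork in force at the attestation's slot, not the head's.
fn fork_at_slot(spec: &ChainSpec, attestation_slot: u64) -> ForkName {
    let epoch = attestation_slot / SLOTS_PER_EPOCH;
    if epoch >= spec.bellatrix_fork_epoch {
        ForkName::Bellatrix
    } else if epoch >= spec.altair_fork_epoch {
        ForkName::Altair
    } else {
        ForkName::Base
    }
}
```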
2023-05-15 02:10:41 +00:00
Jimmy Chen
8d9c748025 Fix attestation withdrawals root mismatch (#4249)
## Issue Addressed

Addresses #4234 

## Proposed Changes

- Skip withdrawals processing in an inconsistent state replay. 
- Repurpose `StateRootStrategy`: rename it to `StateProcessingStrategy` and always skip withdrawals when using `StateProcessingStrategy::Inconsistent` (see the sketch after this list).
- Add a test to reproduce the scenario
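
A minimal sketch, with assumed names, of the renamed strategy: replaying with `Inconsistent` state roots also skips withdrawals processing, since a state replayed without accurate roots can't be trusted for it.

```rust
// Sketch only: mirrors the rename described above, not the real module.
enum StateProcessingStrategy {
    Accurate,
    Inconsistent,
}

fn should_process_withdrawals(strategy: &StateProcessingStrategy) -> bool {
    // Inconsistent replay always skips withdrawals processing.
    matches!(strategy, StateProcessingStrategy::Accurate)
}
```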


Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-09 10:48:15 +00:00
ethDreamer
c1d47da02d
Update engine_api to latest version (#4223)
* Update Engine API to Latest

* Get Mock EE Working

* Fix Mock EE

* Update Engine API Again

* Rip out get_blobs_bundle Stuff

* Fix Test Harness

* Fix Clippy Complaints

* Fix Beacon Chain Tests
2023-04-27 14:18:21 -04:00
Pawan Dhananjay
b276af98b7
Rework block processing (#4092)
* introduce availability pending block

* add intoavailableblock trait

* small fixes

* add 'gossip blob cache' and start to clean up processing and transition types

* shard memory blob cache

* Initial commit

* Fix after rebase

* Add gossip verification conditions

* cache cleanup

* general chaos

* extended chaos

* cargo fmt

* more progress

* more progress

* tons of changes, just tryna compile

* everything, everywhere, all at once

* Reprocess an ExecutedBlock on unavailable blobs

* Add sus gossip verification for blobs

* Merge stuff

* Remove reprocessing cache stuff

* lint

* Add a wrapper to allow construction of only valid `AvailableBlock`s

* rename blob arc list to blob list

* merge cleanuo

* Revert "merge cleanuo"

This reverts commit 5e98326878c77528d0c4668c5a4db4a4b0fbaeaa.

* Revert "Revert "merge cleanuo""

This reverts commit 3a4009443a5812b3028abe855079307436dc5419.

* fix rpc methods

* move beacon block and blob to eth2/types

* rename gossip blob cache to data availability checker

* lots of changes

* fix some compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* cargo fmt

* use a common data structure for block import types

* fix availability check on proposal import

* refactor the blob cache and split the block wrapper into two types

* add type conversion for signed block and block wrapper

* fix beacon chain tests and do some renaming, add some comments

* Partial processing (#4)

* move beacon block and blob to eth2/types

* rename gossip blob cache to data availability checker

* lots of changes

* fix some compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* fix compilation issues

* cargo fmt

* use a common data structure for block import types

* fix availability check on proposal import

* refactor the blob cache and split the block wrapper into two types

* add type conversion for signed block and block wrapper

* fix beacon chain tests and do some renaming, add some comments

* cargo update (#6)

---------

Co-authored-by: realbigsean <sean@sigmaprime.io>
Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-03-24 17:30:41 -04:00
Diva M
1b9cfcc11b
Merge branch 'unstable' into eip4844 2023-03-24 13:32:50 -05:00
Paul Hauner
1f8c17b530 Fork choice modifications and cleanup (#3962)
## Issue Addressed

NA

## Proposed Changes

- Implements https://github.com/ethereum/consensus-specs/pull/3290/
- Bumps `ef-tests` to [v1.3.0-rc.4](https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.3.0-rc.4).

The `CountRealizedFull` concept has been removed and the `--count-unrealized-full` and `--count-unrealized` BN flags now do nothing but log a `WARN` when used.

## Database Migration Debt

This PR removes the `best_justified_checkpoint` from fork choice. This field is persisted on-disk and the correct way to go about this would be to make a DB migration to remove the field. However, in this PR I've simply stubbed out the value with a junk value. I've taken this approach because if we're going to do a DB migration I'd love to remove the `Option`s around the justified and finalized checkpoints on `ProtoNode` whilst we're at it. Those options were added in #2822 which was included in Lighthouse v2.1.0. The options were only put there to handle the migration and they've been set to `Some` ever since v2.1.0. There's no reason to keep them as options anymore.

I started adding the DB migration to this branch but I started to feel like I was bloating this rather critical PR with nice-to-haves. I've kept the partially-complete migration [over in my repo](https://github.com/paulhauner/lighthouse/tree/fc-pr-18-migration) so we can pick it up after this PR is merged.
2023-03-21 07:34:41 +00:00
Diva M
d93753cc88
Merge branch 'unstable' into off-4844 2023-03-02 15:38:00 -05:00
Michael Sproul
0b6850221e
Fix Capella schema downgrades (#4004) 2023-02-20 17:50:42 +11:00
Emilia Hane
13efd47238
fixup! Disable use of system time in tests 2023-02-15 09:20:30 +01:00
Emilia Hane
73c7ad73b8
Disable use of system time in tests 2023-02-14 13:33:38 +01:00
realbigsean
ad9af6d8b1
complete match for has_context_bytes 2023-02-13 16:44:54 -05:00
realbigsean
28702c9d5d
merge upstream, add back get_blobs logic 2023-02-13 16:29:21 -05:00
Emilia Hane
2653f88b5f
Fix conflicts rebasing eip4844 2023-02-10 15:35:00 +01:00
Emilia Hane
4d3ff347a3
Fixes after rebasing eip4844 2023-02-10 15:34:58 +01:00
Emilia Hane
e9e198a2b6
Fix conflicts rebasing eip4844 2023-02-10 09:41:23 +01:00
Emilia Hane
d292a3a6a8
Fix conflicts rebasing eip4844 2023-02-10 09:41:23 +01:00
Emilia Hane
994990063a
Fix weak_subjectivity_sync test 2023-02-10 09:41:23 +01:00
Emilia Hane
16cb9cfca2
fixup! Debug tests 2023-02-10 09:39:22 +01:00
Emilia Hane
7220f35ff6
Debug tests 2023-02-10 09:39:21 +01:00
naviechan
9b5c2eefd5 Implement sync_committee_rewards API (per-validator reward) (#3903)
[#3661](https://github.com/sigp/lighthouse/issues/3661)

`/eth/v1/beacon/rewards/sync_committee/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "validator_index": "0",
      "reward": "2000"
    }
  ]
}
```
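
A hypothetical way to query this endpoint with `reqwest` (blocking feature enabled); the node address is an assumption, and the empty JSON array body (meaning "all sync-committee validators") follows the standard beacon-API rewards convention of POSTing a list of validator indices or pubkeys.

```rust
// Sketch only: assumes a beacon node listening on localhost:5052.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let block_id = "head"; // also accepts a slot, a block root, or "finalized"
    let url =
        format!("http://localhost:5052/eth/v1/beacon/rewards/sync_committee/{block_id}");
    let response = reqwest::blocking::Client::new()
        .post(url)
        .header("content-type", "application/json")
        .body("[]") // empty list requests rewards for every participant
        .send()?
        .text()?;
    println!("{response}");
    Ok(())
}
```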

The issue tracks the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements `sync_committee_rewards`.

The endpoints can be viewed in the Ethereum beacon-APIs browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: navie <naviechan@gmail.com>
Co-authored-by: kevinbogner <kevbogner@gmail.com>
2023-01-25 14:22:15 +01:00
ethDreamer
eb9da6c837 Use eth1_withdrawal_credentials in Test States (#3898)
* Use eth1_withdrawal_credential in Some Test States

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Increase validator sizes

* Pick next sync committee message

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-25 14:21:54 +01:00
Michael Sproul
c76a1971cc
Merge remote-tracking branch 'origin/unstable' into capella 2023-01-25 14:20:16 +11:00
ethDreamer
3d4dd6af75
Use eth1_withdrawal_credentials in Test States (#3898)
* Use eth1_withdrawal_credential in Some Test States

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Increase validator sizes

* Pick next sync committee message

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-24 16:22:51 +01:00
naviechan
2802bc9a9c Implement sync_committee_rewards API (per-validator reward) (#3903)
## Issue Addressed

[#3661](https://github.com/sigp/lighthouse/issues/3661)

## Proposed Changes

`/eth/v1/beacon/rewards/sync_committee/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "validator_index": "0",
      "reward": "2000"
    }
  ]
}
```

The issue tracks the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements `sync_committee_rewards`.

The endpoints can be viewed in the Ethereum beacon-APIs browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)

## Additional Info

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: navie <naviechan@gmail.com>
Co-authored-by: kevinbogner <kevbogner@gmail.com>
2023-01-24 02:06:42 +00:00
realbigsean
8f8b7211d0
execution API related fixes 2023-01-19 09:32:08 -05:00
ethDreamer
26787412cd
Update engine_api to Latest spec (#3893)
* Update engine_api to Latest spec

* Small Test Fix

* Fix Test Deserialization Issue
2023-01-19 22:42:17 +11:00
realbigsean
06f71e8cce
merge capella 2023-01-12 12:51:09 -05:00
Michael Sproul
aa896decc1 Fix some beacon_chain tests 2023-01-12 19:13:01 +11:00
Michael Sproul
2af8110529
Merge remote-tracking branch 'origin/unstable' into capella
Fixing the conflicts involved patching up some of the `block_hash` verification;
the rest will be done as part of https://github.com/sigp/lighthouse/issues/3870
2023-01-12 16:22:00 +11:00
realbigsean
438126f19a
merge upstream, fix compile errors 2023-01-11 13:52:58 -05:00
Paul Hauner
830efdb5c2 Improve validator monitor experience for high validator counts (#3728)
## Issue Addressed

NA

## Proposed Changes

Others and I (#3678) have observed that when running with lots of validators (e.g., 1000s) the cardinality is too much for Prometheus. I've seen Prometheus instances just grind to a halt when we turn the validator monitor on for our testnet validators (we have 10,000s of Goerli validators). Additionally, the debug log volume can get very high with one log per validator, per attestation.

To address this, the `bn --validator-monitor-individual-tracking-threshold <INTEGER>` flag has been added to *disable* per-validator (i.e., non-aggregated) metrics/logging once the validator monitor exceeds the threshold of validators. The default value is `64`, which is a finger-to-the-wind value. I don't actually know the value at which Prometheus starts to become overwhelmed, but I've seen it work with ~64 validators and I've seen it *not* work with 1000s of validators. A default of `64` seems like it will result in a breaking change to users who are running millions of dollars' worth of validators whilst resulting in a no-op for low-validator-count users. I'm open to changing this number, though.

Additionally, this PR starts collecting aggregated Prometheus metrics (e.g., total count of head hits across all validators), so that high-validator-count users still have some interesting metrics. We already had logging for aggregated values, so nothing has been added there.

I've opted to make this a breaking change since it can be rather damaging to your Prometheus instance to accidentally enable the validator monitor with large numbers of validators. I've crashed a Prometheus instance myself and had a report from another user who's done the same thing.

## Additional Info

NA

## Breaking Changes Note

A new label has been added to the validator monitor Prometheus metrics: `total`. This label tracks the aggregated metrics of all validators in the validator monitor (as opposed to each validator being tracked individually using its pubkey as the label).

Additionally, a new flag has been added to the Beacon Node: `--validator-monitor-individual-tracking-threshold`. The default value is `64`, which means that when the validator monitor is tracking more than 64 validators then it will stop tracking per-validator metrics and only track the `all_validators` metric. It will also stop logging per-validator logs and only emit aggregated logs (the exception being that exit and slashing logs are always emitted).

These changes were introduced in #3728 to address issues with untenable Prometheus cardinality and log volume when using the validator monitor with high validator counts (e.g., 1000s of validators). Users with fewer than 65 validators will see no change in behavior (apart from the added `all_validators` metric). Users with more than 65 validators who wish to maintain the previous behavior can set something like `--validator-monitor-individual-tracking-threshold 999999`.
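
A minimal sketch, with hypothetical names, of the thresholding rule described above: per-validator metrics and logs are only emitted while the monitored set stays at or below the threshold.

```rust
// Sketch only: names are illustrative, not the validator monitor's API.
struct ValidatorMonitor {
    monitored_validators: usize,
    // Set by --validator-monitor-individual-tracking-threshold (default 64).
    individual_tracking_threshold: usize,
}

impl ValidatorMonitor {
    // When false, only aggregated (`total`) metrics and logs are produced.
    fn individual_tracking(&self) -> bool {
        self.monitored_validators <= self.individual_tracking_threshold
    }
}
```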
2023-01-09 08:18:55 +00:00
Mark Mackey
be232c4587
Update Execution Layer Tests for Capella 2023-01-03 16:58:15 -06:00
realbigsean
d893706e0e
merge with capella 2022-12-15 09:33:18 -05:00
Michael Sproul
991e4094f8
Merge remote-tracking branch 'origin/unstable' into capella-update 2022-12-14 13:00:41 +11:00
Michael Sproul
775d222299 Enable proposer boost re-orging (#2860)
## Proposed Changes

With proposer boosting implemented (#2822) we have an opportunity to re-org out late blocks.

This PR adds three flags to the BN to control this behaviour:

* `--disable-proposer-reorgs`: turn aggressive re-orging off (it's on by default).
* `--proposer-reorg-threshold N`: attempt to orphan blocks with less than N% of the committee vote. If this parameter isn't set then N defaults to 20% when the feature is enabled.
* `--proposer-reorg-epochs-since-finalization N`: only attempt to re-org late blocks when the number of epochs since finalization is less than or equal to N. The default is 2 epochs, meaning re-orgs will only be attempted when the chain is finalizing optimally.

For safety, Lighthouse will only attempt a re-org under very specific conditions (sketched after this list):

1. The block being proposed is 1 slot after the canonical head, and the canonical head is 1 slot after its parent. i.e. at slot `n + 1` rather than building on the block from slot `n` we build on the block from slot `n - 1`.
2. The current canonical head received less than N% of the committee vote. N should be set depending on the proposer boost fraction itself, the fraction of the network that is believed to be applying it, and the size of the largest entity that could be hoarding votes.
3. The current canonical head arrived after the attestation deadline from our perspective. This condition was only added to support suppression of `forkchoiceUpdated` messages, but makes intuitive sense.
4. The block is being proposed in the first 2 seconds of the slot. This gives it time to propagate and receive the proposer boost.
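
A minimal sketch of those preconditions, under assumed names and units; the real checks involve fork choice and slot-clock internals, but the shape of the decision is roughly:

```rust
// Sketch only: all field names and units are assumptions for illustration.
struct ReorgCandidate {
    proposal_slot: u64,
    head_slot: u64,
    parent_slot: u64,
    head_weight_percent: u64, // share of the committee vote on the head
    head_late: bool,          // head arrived after the attestation deadline
    seconds_into_slot: u64,
    epochs_since_finalization: u64,
}

fn should_attempt_reorg(c: &ReorgCandidate, threshold_percent: u64, max_epochs: u64) -> bool {
    c.proposal_slot == c.head_slot + 1 // 1. proposing one slot after the head...
        && c.head_slot == c.parent_slot + 1 // ...which is one slot after its parent
        && c.head_weight_percent < threshold_percent // 2. weak head (< N% of vote)
        && c.head_late // 3. head was late from our perspective
        && c.seconds_into_slot < 2 // 4. early enough to earn proposer boost
        && c.epochs_since_finalization <= max_epochs // chain finalizing optimally
}
```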


## Additional Info

For the initial idea and background, see: https://github.com/ethereum/consensus-specs/pull/2353#issuecomment-950238004

There is also a specification for this feature here: https://github.com/ethereum/consensus-specs/pull/3034

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
2022-12-13 09:57:26 +00:00
realbigsean
8102a01085
merge with upstream 2022-12-01 11:13:07 -05:00
Mark Mackey
8a04c3428e Merged with unstable 2022-11-30 17:29:10 -06:00
Mark Mackey
f5e6a54f05 Refactored Execution Layer & Fixed Some Tests 2022-11-29 18:18:33 -06:00
Mark Mackey
36170ec428 Fixed some BeaconChain Tests 2022-11-29 18:18:18 -06:00
GeemoCandama
3534c85e30 Optimize finalized chain sync by skipping newPayload messages (#3738)
## Issue Addressed

#3704 

## Proposed Changes
Adds an `is_syncing_finalized: bool` parameter to the block verification functions. Sets `payload_verification_status` to `Optimistic` if `is_syncing_finalized` is true. Uses the `SyncState` in `NetworkGlobals` in the `BeaconProcessor` to retrieve the syncing status.
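
A minimal sketch, with assumed names, of the rule this adds: blocks imported while syncing the finalized portion of the chain are marked `Optimistic` instead of being sent to the EL via `newPayload`.

```rust
// Sketch only: simplified from the real payload verification plumbing.
enum PayloadVerificationStatus {
    Verified,
    Optimistic,
}

fn payload_verification_status(is_syncing_finalized: bool) -> Option<PayloadVerificationStatus> {
    if is_syncing_finalized {
        // Blocks below the finalized checkpoint can be imported optimistically,
        // skipping an engine_newPayload round trip per block.
        Some(PayloadVerificationStatus::Optimistic)
    } else {
        None // fall through to normal execution-layer verification
    }
}
```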

## Additional Info
I could implement `FinalizedSignatureVerifiedBlock` if you think it would be nicer.
2022-11-29 08:19:27 +00:00
realbigsean
48b2efce9f
merge with upstream 2022-11-22 18:38:30 -05:00
ethDreamer
24e5252a55
Massive Update to Engine API (#3740)
* Massive Update to Engine API

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/execution_payload.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Update beacon_node/execution_layer/src/engine_api.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2022-11-22 13:27:48 -05:00
realbigsean
dc87156641
block and blob handling progress 2022-11-19 16:53:34 -05:00
Michael Sproul
edf23bb40e Fix attestation shuffling filter (#3629)
## Issue Addressed

Fix a bug in block production that results in blocks with 0 attestations during the first slot of an epoch.

The bug is marked by debug logs of the form:

> DEBG Discarding attestation because of missing ancestor, block_root: 0x3cc00d9c9e0883b2d0db8606278f2b8423d4902f9a1ee619258b5b60590e64f8, pivot_slot: 4042591

It occurs when trying to look up the shuffling decision root for an attestation from a slot which is prior to fork choice's finalized block. This happens frequently when proposing in the first slot of the epoch where we have:

- `current_epoch == n`
- `attestation.data.target.epoch == n - 1`
- attestation shuffling epoch `== n - 3` (decision block being the last block of `n - 3`)
- `state.finalized_checkpoint.epoch == n - 2` (first block of `n - 2` is finalized)

Hence the shuffling decision slot is out of range of the fork choice backwards iterator _by a single slot_.

Unfortunately this bug was hidden when we weren't pruning fork choice, and then reintroduced in v2.5.1 when we fixed the pruning (https://github.com/sigp/lighthouse/releases/tag/v2.5.1). There's no way to turn that off or disable the filtering in our current release, so we need a new release to fix this issue.

Fortunately, it also does not occur on every epoch boundary because of the gradual pruning of fork choice every 256 blocks (~8 epochs):

01e84b71f5/consensus/proto_array/src/proto_array_fork_choice.rs (L16)

01e84b71f5/consensus/proto_array/src/proto_array.rs (L713-L716)

So the probability of proposing a 0-attestation block given a proposal assignment is approximately `1/32 * 1/8 = 1/256 ≈ 0.39%`.

## Proposed Changes

- Load the block's shuffling ID from fork choice and verify it against the expected shuffling ID of the head state (see the sketch below). This code was initially written before we had settled on a representation of shuffling IDs, so I think it's a nice simplification to make use of them here rather than more ad-hoc logic that fundamentally does the same thing.
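
A minimal sketch, with hypothetical types, of that check: compare the shuffling ID fork choice recorded for the attestation's block with the one the head state expects, rather than iterating backwards over blocks.

```rust
// Sketch only: simplified from the real shuffling ID comparison.
#[derive(PartialEq, Eq)]
struct AttestationShufflingId {
    shuffling_epoch: u64,
    shuffling_decision_block: [u8; 32],
}

fn shuffling_is_compatible(
    block_shuffling_id: &AttestationShufflingId, // stored in fork choice
    head_shuffling_id: &AttestationShufflingId,  // expected by the head state
) -> bool {
    // No backwards iteration, so a decision slot just below the finalized
    // block can no longer cause a spurious mismatch.
    block_shuffling_id == head_shuffling_id
}
```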

## Additional Info

Thanks to @moshe-blox for noticing this issue and bringing it to our attention.
2022-10-18 04:02:06 +00:00
Michael Sproul
59ec6b71b8 Consensus context with proposer index caching (#3604)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2371

## Proposed Changes

Backport some changes from `tree-states` that remove duplicated calculations of the `proposer_index`.

With this change the proposer index should be calculated only once for each block, and then plumbed through to every place it is required.
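
A minimal sketch, with assumed names, of the caching pattern: the first caller pays for the shuffling computation and everyone after reads the cached value.

```rust
// Sketch only: the real consensus context carries more than this.
struct ConsensusContext {
    proposer_index: Option<u64>,
}

impl ConsensusContext {
    fn get_proposer_index(&mut self, compute: impl FnOnce() -> u64) -> u64 {
        // Compute once, then reuse for every later check on the same block.
        *self.proposer_index.get_or_insert_with(compute)
    }
}
```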

## Additional Info

In future I hope to add more data to the consensus context that is cached on a per-epoch basis, like the effective balances of validators and the base rewards.

There are some other changes to remove indexing in tests that were also useful for `tree-states` (the `tree-states` types don't implement `Index`).
2022-10-15 22:25:54 +00:00
Paul Hauner
9246a92d76 Make garbage collection test less failure prone (#3599)
## Issue Addressed

NA

## Proposed Changes

This PR attempts to fix the following spurious CI failure:

```
---- store_tests::garbage_collect_temp_states_from_failed_block stdout ----
thread 'store_tests::garbage_collect_temp_states_from_failed_block' panicked at 'disk store should initialize: DBError { message: "Error { message: \"IO error: lock /tmp/.tmp6DcBQ9/cold_db/LOCK: already held by process\" }" }', beacon_node/beacon_chain/tests/store_tests.rs:59:10
```

I believe that some async task is taking a clone of the store and holding it in some other thread for a short time. This creates a race condition when we try to open a new instance of the store.

## Additional Info

NA
2022-09-23 03:52:44 +00:00
Paul Hauner
fa6ad1a11a Deduplicate block root computation (#3590)
## Issue Addressed

NA

## Proposed Changes

This PR removes duplicated block root computation.

Computing `SignedBeaconBlock::canonical_root` has become more expensive since the merge, as we need to compute the Merkle root of each transaction inside an `ExecutionPayload`.

Computing the root for [a mainnet block](https://beaconcha.in/slot/4704236) takes ~10ms on my i7-8700K CPU @ 3.70GHz (no sha extensions). Given that our median seen-to-imported time for blocks is presently 300-400ms, removing a few duplicated block root computations (~30ms) could represent an easy 10% improvement. When we consider that the seen-to-imported times include operations *after* the block has been placed in the early attester cache, we could expect the 30ms to be more significant WRT our seen-to-attestable times.
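
A minimal sketch, with stand-in types, of the deduplication: compute the root once at the edge and pass it alongside the block so later stages reuse it.

```rust
// Sketch only: stand-in types; the real root is a tree-hash, expensive
// post-merge because every ExecutionPayload transaction is Merkle-ized.
type Hash256 = [u8; 32];

struct SignedBeaconBlock;

impl SignedBeaconBlock {
    fn canonical_root(&self) -> Hash256 {
        [0u8; 32] // placeholder for the real ~10ms tree-hash
    }
}

fn verify_gossip(_block: &SignedBeaconBlock, _root: Hash256) {}
fn import_block(_block: &SignedBeaconBlock, _root: Hash256) {}

fn process_block(block: SignedBeaconBlock) {
    let root = block.canonical_root(); // computed exactly once
    verify_gossip(&block, root);
    import_block(&block, root); // no re-hash here
}
```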

## Additional Info

NA
2022-09-23 03:52:42 +00:00