Commit Graph

4775 Commits

Author SHA1 Message Date
Divma
9bd384a573 send attnet unsubscription event on random subnet expiry (#3600)
## Issue Addressed
A 🐞 in which we don't actually unsubscribe from a random long-lived subnet when it expires.

## Proposed Changes

Remove code addressing a specific case in which we are subscribed to all subnets, and handle the removal of the long-lived subnet. I don't think the special-case code is particularly important: anyone running enough validators to be subscribed to all subnets should use `--subscribe-all-subnets` instead.

## Additional Info

Noticed bandwidth usage climbing periodically on some test nodes (around every 27 hours, the time of subnet expirations). I'm running this code to confirm the problem no longer occurs, but I think it should be good now.
2022-09-23 03:52:45 +00:00
Paul Hauner
9246a92d76 Make garbage collection test less failure prone (#3599)
## Issue Addressed

NA

## Proposed Changes

This PR attempts to fix the following spurious CI failure:

```
---- store_tests::garbage_collect_temp_states_from_failed_block stdout ----
thread 'store_tests::garbage_collect_temp_states_from_failed_block' panicked at 'disk store should initialize: DBError { message: "Error { message: \"IO error: lock /tmp/.tmp6DcBQ9/cold_db/LOCK: already held by process\" }" }', beacon_node/beacon_chain/tests/store_tests.rs:59:10
```

I believe that some async task is taking a clone of the store and holding it in another thread for a short time. This creates a race condition when we try to open a new instance of the store.

## Additional Info

NA
2022-09-23 03:52:44 +00:00
Pawan Dhananjay
3a3dddc5fb Fix ee integration tests (#3592)
## Issue Addressed

Resolves #3573 

## Proposed Changes

Fix the bytecode for the deposit contract deployment transaction and the value for the deposit transaction in the execution engine integration tests. Also verify that all published transactions make it into the execution payload and have a valid status.
2022-09-23 03:52:43 +00:00
Paul Hauner
fa6ad1a11a Deduplicate block root computation (#3590)
## Issue Addressed

NA

## Proposed Changes

This PR removes duplicated block root computation.

Computing the `SignedBeaconBlock::canonical_root` has become more expensive since the merge as we need to compute the merkle root of each transaction inside an `ExecutionPayload`.

Computing the root for [a mainnet block](https://beaconcha.in/slot/4704236) is taking ~10ms on my i7-8700K CPU @ 3.70GHz (no sha extensions). Given that our median seen-to-imported time for blocks is presently 300-400ms, removing a few duplicated block roots (~30ms) could represent an easy 10% improvement. When we consider that the seen-to-imported times include operations *after* the block has been placed in the early attester cache, we could expect the 30ms to be more significant WRT our seen-to-attestable times.
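
The gist of the change, as a minimal Rust sketch with hypothetical types and function names (Lighthouse's real call sites differ): compute the root once and hand the cached value to every downstream stage.

```
// Illustrative only: stand-in types, not Lighthouse's actual code paths.
struct SignedBeaconBlock;

impl SignedBeaconBlock {
    fn canonical_root(&self) -> [u8; 32] {
        // Expensive: hashes every transaction in the ExecutionPayload.
        [0u8; 32]
    }
}

fn gossip_verify(_block: &SignedBeaconBlock, _root: [u8; 32]) {}
fn import_block(_block: SignedBeaconBlock, _root: [u8; 32]) {}

fn process_block(block: SignedBeaconBlock) {
    // Compute the root once...
    let block_root = block.canonical_root();
    // ...and reuse the cached value downstream instead of re-computing it.
    gossip_verify(&block, block_root);
    import_block(block, block_root);
}
```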

## Additional Info

NA
2022-09-23 03:52:42 +00:00
Ramana Kumar
76ba0a1aaf Add disable-log-timestamp flag (#3101) (#3586)
## Issues Addressed

Closes https://github.com/sigp/lighthouse/issues/3101

## Proposed Changes

Add global flag to suppress timestamps in the terminal logger.
2022-09-23 03:52:41 +00:00
Paul Hauner
3128b5b430 v3.1.1 (#3585)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

- ~~Requires additional testing~~
- ~~Blocked on:~~
    - ~~#3589~~
    - ~~#3540~~
    - ~~#3587~~
2022-09-22 06:08:52 +00:00
Paul Hauner
dadbd69eec Fix concurrency issue with oneshot_broadcast (#3596)
## Issue Addressed

NA

## Proposed Changes

Fixes an issue found during testing with #3595.

## Additional Info

NA
2022-09-21 10:52:14 +00:00
Paul Hauner
96692b8e43 Impl oneshot_broadcast for committee promises (#3595)
## Issue Addressed

NA

## Proposed Changes

Fixes an issue introduced in #3574 where I erroneously assumed that a `crossbeam_channel` multiple-receiver queue was a *broadcast* queue. This is incorrect: each message will be received by *only one* receiver. The effect of this mistake is these logs:

```
Sep 20 06:56:17.001 INFO Synced                                  slot: 4736079, block: 0xaa8a…180d, epoch: 148002, finalized_epoch: 148000, finalized_root: 0x2775…47f2, exec_hash: 0x2ca5…ffde (verified), peers: 6, service: slot_notifier
Sep 20 06:56:23.237 ERRO Unable to validate attestation          error: CommitteeCacheWait(RecvError), peer_id: 16Uiu2HAm2Jnnj8868tb7hCta1rmkXUf5YjqUH1YPj35DCwNyeEzs, type: "aggregated", slot: Slot(4736047), beacon_block_root: 0x88d318534b1010e0ebd79aed60b6b6da1d70357d72b271c01adf55c2b46206c1
```
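
For anyone unfamiliar with the semantics, here's a hedged Rust sketch (assuming the `crossbeam-channel` crate) of the pitfall: a cloned `Receiver` is a second consumer of the same queue, not a broadcast subscriber.

```
use crossbeam_channel::unbounded;

fn main() {
    let (tx, rx_a) = unbounded();
    let rx_b = rx_a.clone();

    tx.send("committee").unwrap();

    // Exactly one receiver gets the message; the other sees an empty queue.
    // A waiter relying on "broadcast" semantics would therefore hang, which
    // is why a dedicated oneshot_broadcast primitive is needed.
    assert!(rx_a.try_recv().is_ok() ^ rx_b.try_recv().is_ok());
}
```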

## Additional Info

NA
2022-09-21 01:01:50 +00:00
Paul Hauner
a95bcba2ab Avoid holding write-lock whilst waiting on shuffling cache promise (#3589)
## Issue Addressed

NA

## Proposed Changes

Fixes a bug which hogged the write-lock for the `shuffling_cache`.
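
The general pattern the fix relies on, shown as an illustrative Rust sketch only (using `std::sync::RwLock` and stand-in types rather than Lighthouse's actual cache): take the write-lock just long enough to inspect or insert, and drop it before any long-running wait.

```
use std::collections::HashMap;
use std::sync::RwLock;

fn claim_or_get(cache: &RwLock<HashMap<u64, String>>, epoch: u64) -> Option<String> {
    // Hold the write-lock only long enough to check the entry and, if it is
    // missing, record that we're computing it.
    let existing = {
        let mut guard = cache.write().unwrap();
        match guard.get(&epoch) {
            Some(value) => Some(value.clone()),
            None => {
                guard.insert(epoch, "in progress".to_string());
                None
            }
        }
    }; // <- write-lock dropped here
    // Any slow work (loading a state, waiting on a promise) happens after this
    // point, with no lock held.
    existing
}
```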

## Additional Info

NA
2022-09-19 07:58:50 +00:00
Michael Sproul
507bb9dad4 Refined payload pruning (#3587)
## Proposed Changes

Improve the payload pruning feature in several ways:

- Payload pruning is now entirely optional. It is enabled by default but can be disabled with `--prune-payloads false`. The previous `--prune-payloads-on-startup` flag from #3565 is removed.
- Initial payload pruning on startup now runs in a background thread. This thread will always load the split state, which is a small fraction of its total work (up to ~300ms), and then backtrack from that state. This pruning process ran in 2m5s on one Prater node with good I/O and 16m on a node with slower I/O.
- To work with the optional payload pruning, the database function `try_load_full_block` will now attempt to load execution payloads for finalized slots _if_ pruning is currently disabled. This gives users an opt-out for the extensive traffic between the CL and EL for reconstructing payloads (sketched below).
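
A rough Rust sketch of the `try_load_full_block` fallback described in the last point (hypothetical signature and stand-in values, not the real store code):

```
fn try_load_payload(prune_payloads_enabled: bool, slot: u64, split_slot: u64) -> Option<&'static str> {
    if slot >= split_slot {
        // Unfinalized blocks keep their payloads in the hot database.
        Some("payload from hot DB")
    } else if !prune_payloads_enabled {
        // Finalized payloads are only expected on disk when pruning is off.
        Some("payload from freezer DB")
    } else {
        // Pruned: the caller must reconstruct the payload via the EL.
        None
    }
}
```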

## Additional Info

If the `prune-payloads` flag is toggled on and off then the on-startup check may not see any payloads to delete and fail to clean them up. In this case the `lighthouse db prune_payloads` command should be used to force a manual sweep of the database.
2022-09-19 07:58:49 +00:00
Michael Sproul
f2ac0738d8 Implement skip_randao_verification and blinded block rewards API (#3540)
## Issue Addressed

https://github.com/ethereum/beacon-APIs/pull/222

## Proposed Changes

Update Lighthouse's randao verification API to match the `beacon-APIs` spec. We implemented the API before spec stabilisation, and it changed slightly in the course of review.

Rather than a flag `verify_randao` taking a boolean value, the new API uses a `skip_randao_verification` flag which takes no argument. The new spec also requires the randao reveal to be present and equal to the point-at-infinity when `skip_randao_verification` is set.
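
To make the rule concrete, here's a tiny hypothetical helper (not the actual Lighthouse code) capturing the check:

```
fn check_randao_reveal(
    skip_randao_verification: bool,
    reveal_is_point_at_infinity: bool,
) -> Result<(), &'static str> {
    if skip_randao_verification && !reveal_is_point_at_infinity {
        // The spec requires the reveal to be the point-at-infinity signature
        // whenever verification is skipped.
        return Err("randao reveal must be point-at-infinity");
    }
    Ok(())
}
```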

I've also updated the `POST /lighthouse/analysis/block_rewards` API to take blinded blocks as input, as the execution payload is irrelevant and we may want to assess blocks produced by builders.

## Additional Info

This is technically a breaking change, but seeing as I suspect I'm the only one using these parameters/APIs, I think we're OK to include this in a patch release.
2022-09-19 07:58:48 +00:00
Marius van der Wijden
6f7d21c542 enable 4844 at epoch 3 2022-09-18 12:13:03 +02:00
Marius van der Wijden
257087b010 correct fork version 2022-09-18 11:43:53 +02:00
Marius van der Wijden
285dbf43ed hacky hacks 2022-09-18 11:34:46 +02:00
Marius van der Wijden
14aa4957b9 correct fork version 2022-09-18 10:46:01 +02:00
Marius van der Wijden
aa000fd119 more enr 2022-09-18 10:23:53 +02:00
Marius van der Wijden
8b71b978e0 new round of hacks (config etc) 2022-09-17 23:42:49 +02:00
Daniel Knopik
750c594f5f forgor something 2022-09-17 21:38:57 +02:00
Daniel Knopik
eab1fce0e5 Merge branch 'eip4844' of github.com:dknopik/lighthouse into eip4844 2022-09-17 20:55:36 +02:00
Daniel Knopik
76572db9d5 add network config 2022-09-17 20:55:21 +02:00
Marius van der Wijden
f43532d3de implement handle blobs by range req 2022-09-17 20:05:51 +02:00
Marius van der Wijden
f9209e2d08 more network stuff 2022-09-17 16:39:40 +02:00
Marius van der Wijden
aeb52ff186 network stuff 2022-09-17 16:10:42 +02:00
Daniel Knopik
d4d40be870 storable blobs 2022-09-17 15:58:52 +02:00
Marius van der Wijden
36a0add0cd network stuff 2022-09-17 15:23:28 +02:00
Daniel Knopik
0518665949 Merge remote-tracking branch 'fork/eip4844' into eip4844 2022-09-17 14:58:33 +02:00
Daniel Knopik
292a16a6eb gossip boilerplate 2022-09-17 14:58:27 +02:00
Marius van der Wijden
acace8ab31 network: blobs by range message 2022-09-17 14:55:18 +02:00
Daniel Knopik
bcc738cb9d progress on gossip stuff 2022-09-17 14:31:57 +02:00
Marius van der Wijden
8473f08d10 beacon: consensus: implement engine api getBlobs 2022-09-17 14:10:15 +02:00
Daniel Knopik
dcfae6c5cf implement From<FullPayload> for Payload 2022-09-17 13:29:20 +02:00
Marius van der Wijden
fe6be28e6b beacon: consensus: implement engine api getBlobs 2022-09-17 13:20:18 +02:00
Daniel Knopik
ca1e17b386 it compiles! 2022-09-17 12:23:03 +02:00
Daniel Knopik
95203c51d4 fix some bugx, adjust stucts 2022-09-17 11:26:18 +02:00
Daniel Knopik
8e4e499b51 start adding types 2022-09-17 09:49:12 +02:00
Michael Sproul
ca42ef2e5a Prune finalized execution payloads (#3565)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/3556

## Proposed Changes

Delete finalized execution payloads from the database in two places:

1. When running the finalization migration in `migrate_database`. We delete the finalized payloads between the last split point and the new updated split point. _If_ payloads are already pruned prior to this then this is sufficient to prune _all_ payloads as non-canonical payloads are already deleted by the head pruner, and all canonical payloads prior to the previous split will already have been pruned.
2. To address the fact that users will update to this code _after_ the merge on mainnet (and testnets), we need a one-off scan to delete the finalized payloads from the canonical chain. This is implemented in `try_prune_execution_payloads`, which runs on startup and scans the chain back to the Bellatrix fork or the anchor slot (if checkpoint synced after Bellatrix). In the case where payloads are already pruned, this check only imposes a single state load for the split state, which shouldn't be _too slow_. Even so, a flag `--prune-payloads-on-startup=false` is provided to turn this off after it has run the first time, which provides faster start-up times. A rough sketch of this scan follows the list.
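
A rough Rust sketch of the shape of this scan (hypothetical signature and stand-in types):

```
fn prune_payloads_on_startup(
    canonical_blocks_descending: impl Iterator<Item = (u64, [u8; 32])>, // (slot, block_root)
    lower_limit_slot: u64, // Bellatrix fork slot, or the anchor slot if checkpoint synced
    mut delete_payload: impl FnMut([u8; 32]),
) {
    for (slot, block_root) in canonical_blocks_descending {
        if slot < lower_limit_slot {
            // Everything earlier is pre-Bellatrix (no payloads) or unavailable.
            break;
        }
        delete_payload(block_root);
    }
}
```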

There is also a new `lighthouse db prune_payloads` subcommand for users who prefer to run the pruning manually.

## Additional Info

The tests have been updated to not rely on finalized payloads in the database, instead using the `MockExecutionLayer` to reconstruct them. Additionally a check was added to `check_chain_dump` which asserts the non-existence or existence of payloads on disk depending on their slot.
2022-09-17 02:27:01 +00:00
Michael Sproul
5b2843c2cd Pre-allocate vectors in SSZ decoding (#3417)
## Issue Addressed

Fixes a potential regression in memory fragmentation identified by @paulhauner here: https://github.com/sigp/lighthouse/pull/3371#discussion_r931770045.

## Proposed Changes

Immediately allocate a vector with sufficient size to hold all decoded elements in SSZ decoding. The `size_hint` is derived from the range iterator here:

2983235650/consensus/ssz/src/decode/impls.rs (L489)
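
As a rough illustration of the idea (not the actual `ssz` crate code): take the lower bound of the iterator's `size_hint` and allocate the output vector once, instead of letting it grow and re-allocate during decoding.

```
fn decode_items<I, T>(ranges: I, decode_one: impl Fn(std::ops::Range<usize>) -> T) -> Vec<T>
where
    I: Iterator<Item = std::ops::Range<usize>>,
{
    // Pre-allocate using the iterator's size hint.
    let (lower, _) = ranges.size_hint();
    let mut values = Vec::with_capacity(lower);
    for range in ranges {
        values.push(decode_one(range));
    }
    values
}
```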

## Additional Info

I'd like to test this out on some infra for a substantial duration to see if it affects total fragmentation.
2022-09-16 11:54:17 +00:00
Paul Hauner
b0b606dabe Use SmallVec for TreeHash packed encoding (#3581)
## Issue Addressed

NA

## Proposed Changes

I've noticed that our block hashing times increase significantly after the merge. I did some flamegraph-ing and noticed that we're allocating a `Vec` for each byte of each execution payload transaction. This seems like unnecessary work and a bit of a fragmentation risk.

This PR switches to `SmallVec<[u8; 32]>` for the packed encoding of `TreeHash`. I believe this is a nice simple optimisation with no downside.
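
As a sketch of why this helps (assumes the `smallvec` crate; not the actual `tree_hash` code): values that pack into 32 bytes or fewer stay inline on the stack, so no heap allocation is needed per packed leaf.

```
use smallvec::{smallvec, SmallVec};

fn packed_encoding(value: u64) -> SmallVec<[u8; 32]> {
    let mut bytes: SmallVec<[u8; 32]> = smallvec![];
    // 8 bytes: well within the 32-byte inline capacity, so no heap allocation.
    bytes.extend_from_slice(&value.to_le_bytes());
    bytes
}
```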

### Benchmarking

These numbers were computed using #3580 on my desktop (i7 hex-core). You can see a bit of noise in the numbers; that's probably just my computer doing other things. Generally I found this change takes the time from 10-11ms to 8-9ms. I can also see all the allocations disappear from the flamegraph.

This is the block being benchmarked: https://beaconcha.in/slot/4704236

#### Before

```
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 980: 10.553003ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 981: 10.563737ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 982: 10.646352ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 983: 10.628532ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 984: 10.552112ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 985: 10.587778ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 986: 10.640526ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 987: 10.587243ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 988: 10.554748ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 989: 10.551111ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 990: 11.559031ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 991: 11.944827ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 992: 10.554308ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 993: 11.043397ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 994: 11.043315ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 995: 11.207711ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 996: 11.056246ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 997: 11.049706ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 998: 11.432449ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 999: 11.149617ms
```

#### After

```
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 980: 14.011653ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 981: 8.925314ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 982: 8.849563ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 983: 8.893689ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 984: 8.902964ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 985: 8.942067ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 986: 8.907088ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 987: 9.346101ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 988: 8.96142ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 989: 9.366437ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 990: 9.809334ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 991: 9.541561ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 992: 11.143518ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 993: 10.821181ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 994: 9.855973ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 995: 10.941006ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 996: 9.596155ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 997: 9.121739ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 998: 9.090019ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 999: 9.071885ms
```

## Additional Info

NA
2022-09-16 08:54:06 +00:00
Paul Hauner
bde3c168e2 Add lcli block-root tool (#3580)
## Issue Addressed

NA

## Proposed Changes

Adds a simple tool for computing the block root of some block from a beacon-API or a file. This is useful for benchmarking.

## Additional Info

NA
2022-09-16 08:54:04 +00:00
Paul Hauner
2cd3e3a768 Avoid duplicate committee cache loads (#3574)
## Issue Addressed

NA

## Proposed Changes

I have observed scenarios on Goerli where Lighthouse was receiving attestations which reference the same un-cached shuffling on multiple threads at the same time. Lighthouse was then loading the same state from the database and determining the same shuffling on each of those threads simultaneously. This is unnecessary load on the disk and RAM.

This PR modifies the shuffling cache so that each entry can be either:

- A committee
- A promise for a committee (i.e., a `crossbeam_channel::Receiver`)

Now, in the scenario where we have thread A and thread B simultaneously requesting the same un-cached shuffling, we will have the following:

1. Thread A will take the write-lock on the shuffling cache, find that there's no cached committee and then create a "promise" (a `crossbeam_channel::Sender`) for a committee before dropping the write-lock.
2. Thread B will then be allowed to take the write-lock for the shuffling cache and find the promise created by thread A. It will block the current thread waiting for thread A to fulfill that promise.
3. Thread A will load the state from disk, obtain the shuffling, send it down the channel, insert the entry into the cache and then continue to verify the attestation.
4. Thread B will then receive the shuffling from the receiver, be un-blocked and then continue to verify the attestation.

In the case where thread A fails to generate the shuffling and drops the sender, the next time that specific shuffling is requested we will detect that the channel is disconnected and return a `None` entry for that shuffling. This will cause the shuffling to be re-calculated.
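
Roughly, the cache entry and waiting logic have the shape of this sketch (hypothetical names, assuming the `crossbeam-channel` crate; the real implementation differs in detail):

```
use crossbeam_channel::Receiver;

enum CacheItem<C> {
    /// The committee is ready to use.
    Committee(C),
    /// Another thread is computing the committee; wait on this channel.
    Promise(Receiver<C>),
}

fn wait_or_compute<C>(item: Option<CacheItem<C>>, compute: impl FnOnce() -> C) -> C {
    match item {
        // Cached value: use it directly.
        Some(CacheItem::Committee(committee)) => committee,
        // Someone else is computing it: block until they send the result. If
        // the sender was dropped (computation failed), recompute ourselves.
        Some(CacheItem::Promise(receiver)) => receiver.recv().unwrap_or_else(|_| compute()),
        // Nothing cached: compute it (after installing a promise, not shown).
        None => compute(),
    }
}
```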

## Additional Info

NA
2022-09-16 08:54:03 +00:00
Paul Hauner
7d3948c8fe Add metric for re-org distance (#3566)
## Issue Addressed

NA

## Proposed Changes

Add a metric to track the re-org distance.

## Additional Info

NA
2022-09-13 17:19:27 +00:00
Michael Sproul
cd31e54b99 Bump axum deps (#3570)
## Issue Addressed

Fix a `cargo-audit` failure. We don't use `axum` for anything besides tests, but `cargo-audit` is failing due to this vulnerability in `axum-core`: https://rustsec.org/advisories/RUSTSEC-2022-0055
2022-09-13 01:57:47 +00:00
realbigsean
614d74a6d4 Fix builder gas limit docs (#3569)
## Issue Addressed

Make sure gas limit examples in our docs represent sane values.

Thanks @dankrad for raising this in discord.

## Additional Info

We could also consider logging warnings about whether the gas limits configured are sane. Prysm has an open issue for this: https://github.com/prysmaticlabs/prysm/issues/10810


Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-13 01:57:46 +00:00
Alessandro Tagliapietra
88a7e5a2ca Fix ganache test endpoint for ipv6 machines (#3563)
## Issue Addressed

#3562

## Proposed Changes

Change the fork endpoint from `localhost` to `127.0.0.1` to match the ganache default listening host.
This way it doesn't try (and fail) to connect to `::1` on IPv6 machines.

## Additional Info

First PR
2022-09-13 01:57:45 +00:00
tim gretler
98815516a1 Support histogram buckets (#3391)
## Issue Addressed

#3285

## Proposed Changes

Adds support for specifying histograms with buckets and adds new buckets for the metrics mentioned in the issue.
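
For illustration, a hedged sketch using the `prometheus` crate directly (Lighthouse wraps this in its own metrics helpers; the metric name and bucket values here are made up):

```
use prometheus::{Histogram, HistogramOpts};

fn example_histogram() -> prometheus::Result<Histogram> {
    let opts = HistogramOpts::new(
        "example_block_processing_seconds", // hypothetical metric name
        "Time taken to process a block",
    )
    .buckets(vec![0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 2.5, 5.0]);
    Histogram::with_opts(opts)
}
```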

## Additional Info

Need some help with the buckets.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-09-13 01:57:44 +00:00
Rémy Roy
cfa518ab41 Use generic domain for community checkpoint sync example (#3560)
## Proposed Changes

Use a generic domain for community checkpoint sync example to meet the concern raised in https://github.com/sigp/lighthouse/pull/3558#discussion_r966720171
2022-09-10 01:35:11 +00:00
Nils Effinghausen
f682df51a1 fix description for BALANCES_CACHE_MISSES metric (#3545)
## Issue Addressed

fixes metric description


Co-authored-by: Nils Effinghausen <nils.effinghausen@t-systems.com>
2022-09-10 01:35:10 +00:00
Rémy Roy
60e9777db8 Add community checkpoint sync endpoints to book (#3558)
## Proposed Changes

Add a section on the new community checkpoint sync endpoints in the book. This should help stakers sync faster even without having to create an account.
2022-09-09 02:52:35 +00:00
realbigsean
d1a8d6cf91 Pin mev rs deps (#3557)
## Issue Addressed

We were unable to update Lighthouse by running `cargo update` because some of the `mev-build-rs` deps weren't pinned. `mev-build-rs` is now pinned here and includes its own pinned commits for `ssz-rs` and `ethereum-consensus`.



Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-08 23:46:03 +00:00
realbigsean
a9f075c3c0 Remove strict fee recipient (#3552)
## Issue Addressed

Resolves: #3550

Remove the `--strict-fee-recipient` flag. It will cause missed proposals prior to the Bellatrix transition.

Co-authored-by: realbigsean <sean@sigmaprime.io>
2022-09-08 23:46:02 +00:00