Commit Graph

5110 Commits

Author SHA1 Message Date
int88
5691123153 update README of local_testnet (#4114)
## Issue Addressed

NA

## Proposed Changes

update the descriptions of README in `scripts/local_testnet`.

## Additional Info

NA
2023-03-30 10:14:07 +00:00
Jimmy Chen
d351cc8d8d Test failing CI tests due to port conflicts (#4134)
## Issue Addressed

#4127. PR to test port conflicts in CI tests.

## Proposed Changes

See the issue for more details; a potential solution could be adding a time-bound cache to the `unused_port` function.
2023-03-30 06:08:38 +00:00
Daniel Ramirez Chiquillo
036b797b2c Add finalized to HTTP API responses (#3753)
## Issue Addressed

#3708 

## Proposed Changes
- Add `is_finalized_block` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `is_finalized_state` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `fork_and_execution_optimistic_and_finalized` in `beacon_node/http_api/src/state_id.rs`.
- Add `ExecutionOptimisticFinalizedForkVersionedResponse` type in `consensus/types/src/fork_versioned_response.rs`.
- Add `execution_optimistic_finalized_fork_versioned_response` function in `beacon_node/http_api/src/version.rs`.
- Add `ExecutionOptimisticFinalizedResponse` type in `common/eth2/src/types.rs` (a rough sketch of such a wrapper follows this list).
- Add `add_execution_optimistic_finalized` method in `common/eth2/src/types.rs`.
- Update API response methods to include `finalized`.
- Remove `execution_optimistic_fork_versioned_response`.
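
For illustration only, a minimal sketch of such a response wrapper is below. The field layout and the constructor are assumptions for the sketch, not the exact definitions in `common/eth2/src/types.rs`:

```rust
/// Hypothetical sketch of a response wrapper that carries the payload
/// together with the `execution_optimistic` and `finalized` flags.
#[derive(Debug)]
pub struct ExecutionOptimisticFinalizedResponse<T> {
    /// `true` if the response is influenced by an optimistically imported block.
    pub execution_optimistic: Option<bool>,
    /// `true` if the block/state the request refers to is finalized.
    pub finalized: Option<bool>,
    /// The actual response body.
    pub data: T,
}

impl<T> ExecutionOptimisticFinalizedResponse<T> {
    /// Attach the two flags to an existing payload.
    pub fn new(data: T, execution_optimistic: bool, finalized: bool) -> Self {
        Self {
            execution_optimistic: Some(execution_optimistic),
            finalized: Some(finalized),
            data,
        }
    }
}

fn main() {
    let response = ExecutionOptimisticFinalizedResponse::new("head", false, true);
    println!("{:?}", response);
}
```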

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-30 06:08:37 +00:00
Age Manning
12205a8811 Correct log for ENR (#4133)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/4080

Fixes a log when displaying the initial ENR.
2023-03-29 23:55:55 +00:00
Maksim Shcherbo
788a4b718f Optimise update_validators by decrypting key cache only when necessary (#4126)
## Title

Optimise `update_validators` by decrypting key cache only when necessary

## Issue Addressed

Resolves [#3968: Slow performance of validator client PATCH API with hundreds of keys](https://github.com/sigp/lighthouse/issues/3968)

## Proposed Changes

1. Add a check to determine if there is at least one local definition before decrypting the key cache.
2. Assign an empty `KeyCache` when all definitions are of the `Web3Signer` type.
3. Perform cache-related operations (e.g., saving the modified key cache) only if there are local definitions.

## Additional Info

This PR addresses the excessive CPU usage and slow performance experienced when using the `PATCH lighthouse/validators/{pubkey}` request with a large number of keys. The issue was caused by the key cache using cryptography to decipher and cipher the cache entities every time the request was made. This operation called `scrypt`, which was very slow and required a lot of memory when there were many concurrent requests.

These changes have no impact on the overall functionality but can lead to significant performance improvements when working with remote signers. Importantly, the key cache is never used when there are only `Web3Signer` definitions, avoiding the expensive operation of decrypting the key cache in such cases.
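
A minimal sketch of the idea, assuming illustrative type and function names (the real validator client types differ):

```rust
/// Illustrative stand-ins for the validator definition kinds.
enum SigningDefinition {
    LocalKeystore { path: String },
    Web3Signer { url: String },
}

/// Illustrative stand-in for the on-disk key cache.
#[derive(Default)]
struct KeyCache {
    decrypted: bool,
}

impl KeyCache {
    /// Pretend to run the expensive scrypt-based decryption.
    fn decrypt(mut self) -> Self {
        self.decrypted = true;
        self
    }
}

/// Only pay the scrypt cost if at least one definition is a local keystore;
/// otherwise an empty cache is sufficient because Web3Signer keys never
/// touch the key cache.
fn load_key_cache(definitions: &[SigningDefinition]) -> KeyCache {
    let has_local = definitions
        .iter()
        .any(|def| matches!(def, SigningDefinition::LocalKeystore { .. }));

    if has_local {
        KeyCache::default().decrypt()
    } else {
        KeyCache::default()
    }
}

fn main() {
    let defs = vec![SigningDefinition::Web3Signer {
        url: "https://remote-signer.example".into(),
    }];
    let cache = load_key_cache(&defs);
    assert!(!cache.decrypted, "no local keys, so no decryption needed");
}
```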

Co-authored-by: Maksim Shcherbo <max.shcherbo@consensys.net>
2023-03-29 02:56:39 +00:00
Christopher Chong
6bb28bc806 Add debug fork choice api (#4003)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3669

## Proposed Changes

- A new API to fetch fork choice data, as specified [here](https://github.com/ethereum/beacon-APIs/pull/232)
- A new integration test to test the new API

## Additional Info

- The `extra_data` field specified in the beacon-API spec is not implemented; please let me know if I should implement it.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-03-29 02:56:37 +00:00
int88
d3c20ffa9d improve error message (#4141)
## Issue Addressed

NA

## Proposed Changes

Avoid using a magic number directly in the error message.

## Additional Info

NA
2023-03-28 22:07:05 +00:00
Jimmy Chen
65d0f63639 Update Rust version in lcli Dockerfile (#4121)
## Issue Addressed

The minimum supported Rust version has been set to 1.66 as of Lighthouse v4.0.0. This PR updates Rust to 1.66 in the lcli Dockerfile.

Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-03-28 22:07:03 +00:00
Michael Sproul
f4d13f9149 Update arbitrary (#4139)
## Proposed Changes

To prevent breakages from `cargo update`, this updates the `arbitrary` crate to a new commit from my fork. Unfortunately we still need to use my fork (even though my `bound` change was merged) because of this issue: https://github.com/rust-lang/rust-clippy/issues/10185.

In a couple of Rust versions it should be resolved upstream.
2023-03-28 16:49:21 +00:00
Paul Hauner
a53830fd60 Release v4.0.1 (#4125)
## Issue Addressed

NA

## Proposed Changes

- Bump versions.
- Bump openssl version to resolve various `cargo audit` notices.

## Additional Info

- Requires further testing
2023-03-26 22:39:28 +00:00
Paul Hauner
b2525d6ebd Release Candidate v4.0.1-rc.0 (#4123)
2023-03-23 21:16:14 +11:00
Paul Hauner
23ea1481e0 Fix fork choice error message (#4122)
## Issue Addressed

NA

## Proposed Changes

Ensures that we log the values of the *head* block rather than the *justified* block.

## Additional Info

NA
2023-03-23 07:16:49 +00:00
Paul Hauner
0616e01202 Release v4.0.0 (#4112)
## Issue Addressed

NA

## Proposed Changes

Bump versions to `v4.0.0`

## Additional Info

NA
2023-03-22 04:06:02 +00:00
Paul Hauner
1f8c17b530 Fork choice modifications and cleanup (#3962)
## Issue Addressed

NA

## Proposed Changes

- Implements https://github.com/ethereum/consensus-specs/pull/3290/
- Bumps `ef-tests` to [v1.3.0-rc.4](https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.3.0-rc.4).

The `CountRealizedFull` concept has been removed and the `--count-unrealized-full` and `--count-unrealized` BN flags now do nothing but log a `WARN` when used.

## Database Migration Debt

This PR removes the `best_justified_checkpoint` from fork choice. This field is persisted on-disk and the correct way to go about this would be to make a DB migration to remove the field. However, in this PR I've simply stubbed out the value with a junk value. I've taken this approach because if we're going to do a DB migration I'd love to remove the `Option`s around the justified and finalized checkpoints on `ProtoNode` whilst we're at it. Those options were added in #2822 which was included in Lighthouse v2.1.0. The options were only put there to handle the migration and they've been set to `Some` ever since v2.1.0. There's no reason to keep them as options anymore.

I started adding the DB migration to this branch but I started to feel like I was bloating this rather critical PR with nice-to-haves. I've kept the partially-complete migration [over in my repo](https://github.com/paulhauner/lighthouse/tree/fc-pr-18-migration) so we can pick it up after this PR is merged.
2023-03-21 07:34:41 +00:00
Paul Hauner
3ac5583cf9 Set Capella fork epoch for Mainnet (#4111)
## Issue Addressed

NA

## Proposed Changes

Sets the mainnet Capella fork epoch as per https://github.com/ethereum/consensus-specs/pull/3300

## Additional Info

I expect the `ef_tests` to fail until we get a compatible consensus spec tests release.
2023-03-21 05:15:01 +00:00
Paul Hauner
59e45fe349 Reduce verbosity of reprocess queue logs (#4101)
## Issue Addressed

NA

## Proposed Changes

Replaces #4058 to attempt to reduce `ERRO Failed to send scheduled attestation` spam and provide more information for diagnosis. With this PR we achieve:

- When dequeuing attestations after a block is received, send only one log which reports `n` failures (rather than `n` logs reporting `n` failures).
- Make a distinction in logs between two separate attestation dequeuing events.
- Add more information to both log events to help assist with troubleshooting.

## Additional Info

NA
2023-03-21 05:15:00 +00:00
Age Manning
785a9171e6 Customisable shuffling cache size (#4081)
This PR enables the user to adjust the shuffling cache size.

This is useful for some HTTP API requests which require re-computing old shufflings. This PR currently optimizes the `beacon/states/{state_id}/committees` HTTP API by first checking the cache before re-building the shuffling.

If the shuffling cache size is set to a non-default value, the HTTP API request will also fill the cache as it constructs new shufflings.

If the CLI flag is not present, or the value is set to the default of 16, the default behaviour is observed.
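
As a rough sketch of the check-the-cache-before-rebuilding pattern (the types and key layout here are illustrative, not the actual `ShufflingCache`):

```rust
use std::collections::HashMap;

/// Illustrative cache keyed by (shuffling epoch, decision block root).
struct ShufflingCache {
    capacity: usize,
    entries: HashMap<(u64, [u8; 32]), Vec<usize>>,
}

impl ShufflingCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, entries: HashMap::new() }
    }

    /// Return the cached committee shuffling if present, otherwise build it
    /// with the (expensive) closure and fill the cache for later requests.
    fn get_or_build<F>(&mut self, key: (u64, [u8; 32]), build: F) -> Vec<usize>
    where
        F: FnOnce() -> Vec<usize>,
    {
        if let Some(shuffling) = self.entries.get(&key) {
            return shuffling.clone();
        }
        let shuffling = build();
        if self.entries.len() < self.capacity {
            self.entries.insert(key, shuffling.clone());
        }
        shuffling
    }
}

fn main() {
    // Default cache size of 16, matching the flag's default described above.
    let mut cache = ShufflingCache::new(16);
    let key = (42, [0u8; 32]);
    let first = cache.get_or_build(key, || (0..64).collect());
    let second = cache.get_or_build(key, || unreachable!("should hit the cache"));
    assert_eq!(first, second);
}
```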


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-21 05:14:59 +00:00
Age Manning
76a2007b64 Improve Lighthouse Connectivity Via ENR TCP Update (#4057)
Currently Lighthouse will remain uncontactable if users port forward a port that is not the same as the one they are listening on. 

For example, if Lighthouse runs with port 9000 TCP/UDP locally but a router is configured to pass 9010 externally to the lighthouse node on 9000, other nodes on the network will not be able to reach the lighthouse node. 

This occurs because Lighthouse does not update its ENR TCP port on external socket discovery. The intention was always that users should use `--enr-tcp-port` to customise this, but this is non-intuitive. 

The difficulty arises because we have no discovery mechanism to find our external TCP port. If we discover a new external UDP port, we must guess what our external TCP port might be. This PR assumes the external TCP port is the same as the external UDP port (which may not be the case) and thus updates the TCP port along with the UDP port if the `--enr-tcp-port` flag is not set.
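
A minimal sketch of that update rule, with illustrative stand-in types:

```rust
/// Illustrative slice of the ENR-related state.
struct EnrPorts {
    udp: u16,
    tcp: u16,
}

/// When discovery reports a new external UDP socket, update the ENR UDP port
/// and, unless the user pinned it with `--enr-tcp-port`, assume the external
/// TCP port matches the external UDP port and update it too.
fn on_external_socket_discovered(
    enr: &mut EnrPorts,
    discovered_udp_port: u16,
    enr_tcp_port_flag: Option<u16>,
) {
    enr.udp = discovered_udp_port;
    if enr_tcp_port_flag.is_none() {
        enr.tcp = discovered_udp_port;
    }
}

fn main() {
    let mut enr = EnrPorts { udp: 9000, tcp: 9000 };
    // A router forwards 9010 externally to our local 9000.
    on_external_socket_discovered(&mut enr, 9010, None);
    assert_eq!((enr.udp, enr.tcp), (9010, 9010));
}
```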

Along with this PR, documentation will be added to the Lighthouse book so users can correctly understand and configure their ENR to maximize Lighthouse's connectivity.

This relies on https://github.com/sigp/discv5/pull/166 and we should wait for a new release in discv5 before adding this PR.
2023-03-21 05:14:57 +00:00
Age Manning
17d56b06f6 Ignore self as a bootnode (#4110)
If a node is also a bootnode, it can try to add itself to its own local routing table, which will emit an error.

The error is entirely harmless but we would prefer to avoid emitting the error. 

With this PR, we do not attempt to add a bootnode ENR if that ENR corresponds to our local peer-id/node-id.
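
A minimal sketch of the self-check, using illustrative stand-in types rather than the real ENR/discv5 types:

```rust
/// Illustrative stand-in for an ENR's node identity.
#[derive(PartialEq, Eq)]
struct NodeId([u8; 32]);

struct Enr {
    node_id: NodeId,
}

/// Skip bootnode ENRs that refer to ourselves so we never try to insert our
/// own record into our own routing table.
fn usable_bootnodes(local_node_id: &NodeId, bootnodes: Vec<Enr>) -> Vec<Enr> {
    bootnodes
        .into_iter()
        .filter(|enr| &enr.node_id != local_node_id)
        .collect()
}

fn main() {
    let local = NodeId([1; 32]);
    let bootnodes = vec![Enr { node_id: NodeId([1; 32]) }, Enr { node_id: NodeId([2; 32]) }];
    // The entry matching our own node id is dropped silently, with no error.
    assert_eq!(usable_bootnodes(&local, bootnodes).len(), 1);
}
```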
2023-03-20 21:50:37 +00:00
ethDreamer
65a5eb8292 Reconstruct Payloads using Payload Bodies Methods (#4028)
## Issue Addressed

* #3895 

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-19 23:15:59 +00:00
Paul Hauner
020fb483fe Clarify "Ready for Capella" (#4095)
## Issue Addressed

Resolves #4061

## Proposed Changes

Adds a message to tell users to check their EE.

## Additional Info

I really struggled to come up with something succinct and complete, so I'm totally open to feedback.
2023-03-17 00:44:04 +00:00
Paul Hauner
c3e5053612 Reduce false positive logging for late builder blocks (#4073)
## Issue Addressed

NA

## Proposed Changes

When producing a block from a builder, there are two points where we could consider the block "broadcast":

1. When the blinded block is published to the builder.
2. When the un-blinded block is published to the P2P network (this is always *after* the previous step).

Our logging for late block broadcasts was using (2) for builder-blocks, which was creating a lot of false-positive logs. This is because the builder publishes the block on the P2P network themselves before returning it to us and we perform (2). For clarity, the logs were false-positives because we claim that the block was published late by us when it was actually published earlier by the builder.

This PR changes our logging behavior so we do our logging at (1) instead. It also updates our metrics for block broadcast to distinguish between local and builder blocks. I believe the metrics change will be natively compatible with existing Grafana dashboards.

## Additional Info

One could argue that the builder *should* return the block to us faster, however that's not the case. I think it's more important that we don't desensitize users with false-positives.
2023-03-17 00:44:03 +00:00
Michael Sproul
4c2d4af6cd Make more noise when the EL is broken (#3986)
## Issue Addressed

Closes #3814, replaces #3818.

## Proposed Changes

* Add a WARN log for the case where we are attempting to sync chain segments but can't process them because they're building on an invalid parent. The most common case where we see this is when the execution node database is corrupt, causing sync to stall mysteriously (because we're currently logging the failure only at debug level).
* Additionally I've bumped up the logging for invalid execution payloads to `WARN`. This may result in some duplicate logs as we log errors from the `beacon_chain` and then again from the beacon processor. Invalid payloads and corrupt DBs _should_ be rare enough that this doesn't produce overwhelming log volume.
2023-03-17 00:44:02 +00:00
Ricki Moore
974b7e9f58 Siren Ui Lighthouse Version Requirments (#4093)
## Issue Addressed

Added a note to the Lighthouse book instructing users of the minimum Lighthouse version required to run the Siren UI.

2023-03-16 08:03:43 +00:00
Age Manning
3d99ce25f8 Correct a race condition when dialing peers (#4056)
There is a race condition which occurs when multiple discovery queries return at almost the exact same time and they independently contain a useful peer we would like to connect to.

In that case, the same peer can be added to the dial queue more than once before we get a chance to process the queue.
This ends up displaying an error to the user:
```
ERRO Dialing an already dialing peer
```
Although this error is harmless it's not ideal. 

There are two solutions to resolving this:
1. As we decide to dial the peer, we change the state in the peer-db to dialing (before we add it to the queue) which would prevent other requests from adding to the queue. 
2. We prevent duplicates in the dial queue

This PR has opted for 2. because 1. would complicate the code in that we are changing states in non-intuitive places. Although this technically adds a very slight performance cost, it's probably a cleaner solution as we can keep the state-changing logic in one place.
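
A minimal sketch of a de-duplicating dial queue (the real peer manager uses libp2p `PeerId`s and different structures; this only illustrates the idea):

```rust
use std::collections::{HashSet, VecDeque};

/// Illustrative dial queue that refuses duplicates, mirroring option 2 above.
#[derive(Default)]
struct DialQueue {
    queue: VecDeque<u64>, // peer ids (stand-in type) waiting to be dialed
    queued: HashSet<u64>, // fast membership check for de-duplication
}

impl DialQueue {
    /// Returns `false` (and does nothing) if the peer is already queued.
    fn push(&mut self, peer_id: u64) -> bool {
        if self.queued.insert(peer_id) {
            self.queue.push_back(peer_id);
            true
        } else {
            false
        }
    }

    fn pop(&mut self) -> Option<u64> {
        let peer_id = self.queue.pop_front()?;
        self.queued.remove(&peer_id);
        Some(peer_id)
    }
}

fn main() {
    let mut dial_queue = DialQueue::default();
    // Two discovery queries return the same useful peer at almost the same time.
    assert!(dial_queue.push(7));
    assert!(!dial_queue.push(7)); // the duplicate is ignored, no error emitted
    assert_eq!(dial_queue.pop(), Some(7));
    assert_eq!(dial_queue.pop(), None);
}
```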
2023-03-16 05:44:54 +00:00
Daniel Ramirez Chiquillo
1ec3041673 Remove Router/Processor Code (#4002)
## Issue Addressed

#3938 

## Proposed Changes

- `network::Processor` is deleted and all its logic is moved to `network::Router`.
- The `network::Router` module is moved to a single file.
- The following functions are deleted: `on_disconnect` `send_status` `on_status_response` `on_blocks_by_root_request` `on_lightclient_bootstrap` `on_blocks_by_range_request` `on_block_gossip` `on_unaggregated_attestation_gossip` `on_aggregated_attestation_gossip` `on_voluntary_exit_gossip` `on_proposer_slashing_gossip` `on_attester_slashing_gossip` `on_sync_committee_signature_gossip` `on_sync_committee_contribution_gossip` `on_light_client_finality_update_gossip` `on_light_client_optimistic_update_gossip`. These deletions are possible because the updated `Router` allows the underlying methods to be called directly.
2023-03-15 01:27:47 +00:00
realbigsean
ac0eb39ced Complete match for has_context_bytes (#3972)
## Issue Addressed

- Add a complete match for `Protocol` here. 
- The incomplete match was causing us not to append context bytes to the light client protocols
- This is the relevant part of the spec, where context bytes appear to be defined: https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/p2p-interface.md#getlightclientbootstrap

Disclaimer: I have no idea if people are using it, but it shouldn't have been working, so I'm not sure why it wasn't caught.

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-03-15 01:27:46 +00:00
Michael Sproul
36e163c042 Add parent_block_number to payload SSE (#4053)
## Issue Addressed

In #4027 I forgot to add the `parent_block_number` to the payload attributes SSE.

## Proposed Changes

Compute the parent block number while computing the pre-payload attributes. Pass it on to the SSE stream.

## Additional Info

Not essential for v3.5.1 as I suspect most builders don't need the `parent_block_root`. I would like to use it for my dummy no-op builder however.
2023-03-14 06:26:37 +00:00
Divma
e190ebb8a0 Support for Ipv6 (#4046)
## Issue Addressed
Add support for ipv6 and dual stack in lighthouse. 

## Proposed Changes
From a user perspective, setting an ipv6 address and optionally configuring the ports should now feel exactly the same as using an ipv4 address. If listening over both ipv4 and ipv6, the user needs to:
- use the `--listen-address` two times (ipv4 and ipv6 addresses)
- `--port6` becomes then required
- `--discovery-port6` can now be used to additionally configure the ipv6 udp port

### Rough list of code changes
- Discovery:
  - Table filter and ip mode set to match the listening config. 
  - Ipv6 address, tcp port and udp port set in the ENR builder
  - Reported addresses now check which tcp port to give to libp2p
- LH Network Service:
  - Can listen over Ipv6, Ipv4, or both. Listening over both uses two sockets, since using mapped addresses is disabled in libp2p; this is the most compatible option.
- NetworkGlobals:
  - No longer stores the udp port since it was not used at all. Instead, stores the Ipv4 and Ipv6 TCP ports.
- NetworkConfig:
  - Update names to make it clear that previous udp and tcp ports in ENR were Ipv4
  - Add fields to configure Ipv6 udp and tcp ports in the ENR
  - Include advertised enr Ipv6 address.
  - Add a type to model the listening address, which is either Ipv4, Ipv6, or both. A listening address includes the ip, udp port and tcp port.
- UPnP:
  - Kept only for ipv4
- Cli flags:
  - `--listen-addresses` can now take up to two values
  - `--port` will apply to ipv4 or ipv6 if only one listening address is given. If two listening addresses are given it will apply only to Ipv4.
  - `--port6` New flag required when listening over ipv4 and ipv6 that applies exclusively to Ipv6.
  - `--discovery-port` will now apply to ipv4 and ipv6 if only one listening address is given.
  - `--discovery-port6` New flag to configure the individual udp port of ipv6 if listening over both ipv4 and ipv6.
  - `--enr-udp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour.
  - `--enr-udp6-port` Added to configure the enr udp6 field.
  - `--enr-tcp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour.
  - `--enr-tcp6-port` Added to configure the enr tcp6 field.
  - `--enr-addresses` can now take two values.
  - `--enr-match` updated behaviour.
- Common:
  - rename `unused_port` functions to specify that they are over ipv4.
  - add functions to get unused ports over ipv6.
- Testing binaries
  - Updated code to reflect network config changes and unused_port changes.

## Additional Info

TODOs:
- use two sockets in discovery. I'll get back to this; it's tracked in https://github.com/sigp/discv5/pull/160
- allow lcli to listen over two sockets in generate_bootnodes_enr
- add at least one smoke flag for ipv6 (I have tested this and it works for me)
- update the book
2023-03-14 01:13:34 +00:00
Michael Sproul
06af31a66a Correct /lighthouse/nat implementation (#4069)
## Proposed Changes

The current `/lighthouse/nat` implementation checks for _zero_ address updated messages, when it should check for a _non-zero_ number. This was spotted while debugging an issue on Discord where a user's ports weren't forwarded but `/lighthouse/nat` was still returning `true`.
2023-03-13 04:08:15 +00:00
Sebastian Richel
373beaf913 Added warning when new jwt is generated (#4000)
## Issue Addressed
#3435

## Proposed Changes
Fire a warning with the path of the JWT to be created when the path given by `--execution-jwt` is not found.
Currently, the same error is logged whether the JWT is found but doesn't match the execution client's JWT, or no JWT is found at the given path. This makes it very hard to tell if you accidentally typed the wrong path, as a new JWT is created silently that won't match the execution client's JWT. So instead, it will now fire a warning stating that a JWT is being generated at the given path.
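
A rough sketch of the warn-then-generate behaviour, using only the standard library (the path, log format and secret generation are placeholders, not the actual Lighthouse code):

```rust
use std::path::Path;

/// Illustrative sketch: if the secret at `--execution-jwt` is missing, warn
/// loudly that a fresh secret is being generated (which will not match the
/// execution client's), instead of failing later with a generic auth error.
fn load_or_generate_jwt(path: &Path) -> std::io::Result<Vec<u8>> {
    if path.exists() {
        std::fs::read(path)
    } else {
        eprintln!(
            "WARN JWT secret not found, generating a new one at {}; \
             make sure your execution client uses the same file",
            path.display()
        );
        let secret = vec![0u8; 32]; // placeholder; a real secret must be random
        std::fs::write(path, &secret)?;
        Ok(secret)
    }
}

fn main() -> std::io::Result<()> {
    let secret = load_or_generate_jwt(Path::new("/tmp/example-jwt.hex"))?;
    println!("loaded {} byte secret", secret.len());
    Ok(())
}
```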

## Additional Info
In the future, it may be smarter to handle this case by adding an InvalidJWTPath member to the Error enum in lib.rs or auth.rs
that can be handled during upcheck()

This is my first PR and first project with Rust, so thanks to anyone who looks at this for their patience and help!

Co-authored-by: Sebastian Richel <47844429+sebastianrich18@users.noreply.github.com>
2023-03-13 04:08:14 +00:00
Michael Sproul
90cef1db86 Appease Clippy 1.68 and refactor http_api (#4068)
## Proposed Changes

Two tiny updates to satisfy Clippy 1.68

Plus refactoring of the `http_api` into less complex types so the compiler can chew and digest them more easily.

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-13 01:40:03 +00:00
Alex Wied
4a1c0c96be Fix order of arguments to log_count (#4060)
See: https://github.com/sigp/lighthouse/pull/4027

## Proposed Changes

The order of the arguments to `log_count` is swapped in `beacon_node/beacon_chain/src/events.rs`.
2023-03-13 01:40:01 +00:00
Paul Hauner
319cc61afe Release v3.5.1 (#4049)
## Issue Addressed

NA

## Proposed Changes

Bumps versions to v3.5.1.

## Additional Info

- [x] Requires further testing
2023-03-07 07:57:39 +00:00
Daniel Ramirez Chiquillo
4c109115ca Add a flag to always use payloads from builders (#4052)
## Issue Addressed

#4040 

## Proposed Changes

- Add the `always_prefer_builder_payload`  field to `Config` in `beacon_node/client/src/config.rs`.
- Add that same field to `Inner` in `beacon_node/execution_layer/src/lib.rs`
- Modify the logic for picking the payload in `beacon_node/execution_layer/src/lib.rs`
- Add the `always-prefer-builder-payload` flag to the beacon node CLI
- Test the new flags in `lighthouse/tests/beacon_node.rs`

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-07 05:37:28 +00:00
Paul Hauner
5bb635d17f Set Capella fork epoch for Goerli (#4044)
## Issue Addressed

NA

## Proposed Changes

Sets the Capella fork epoch as per https://github.com/eth-clients/goerli/pull/160.

The fork will occur at:

- Epoch: 162304
- Slot: 5193728
- UTC: 14/03/2023, 10:25:36 pm

## Additional Info

- [x] Blocked on https://github.com/eth-clients/goerli/pull/160 being merged
2023-03-07 03:06:52 +00:00
Paul Hauner
3ad77fb52c Add VC metric for primary BN latency (#4051)
## Issue Addressed

NA

## Proposed Changes

In #4024 we added metrics to expose the latency measurements from a VC to each BN. Whilst playing with these new metrics on our infra I realised it would be great to have a single metric to make sure that the primary BN for each VC has a reasonable latency. With the current "metrics for all BNs" it's hard to tell which is the primary.

## Additional Info

NA
2023-03-06 04:08:49 +00:00
Paul Hauner
f775404c10 Log a WARN in the VC for a mismatched Capella fork epoch (#4050)
## Issue Addressed

NA

## Proposed Changes

- Adds a `WARN` statement for Capella, just like the previous forks.
- Adds a hint message to all those WARNs to suggest the user update the BN or VC.

## Additional Info

NA
2023-03-06 04:08:48 +00:00
Michael Sproul
4dc0c4c5b7 Update dependencies incl tempfile (#4048)
## Proposed Changes

Fix the cargo audit failure caused by [RUSTSEC-2023-0018](https://rustsec.org/advisories/RUSTSEC-2023-0018) which we were exposed to via `tempfile`.

## Additional Info

I've held back the libp2p crate for now because it seemed to introduce another duplicate dependency on libp2p-core, for a total of 3 copies. Maybe that's fine, but we can sort it out later.
2023-03-05 23:43:32 +00:00
Michael Sproul
57dfcfd83a Optimise attestation selection proof signing (#4033)
## Issue Addressed

Closes #3963 (hopefully)

## Proposed Changes

Compute attestation selection proofs gradually each slot rather than in a single `join_all` at the start of each epoch. On a machine with 5k validators this replaces 5k tasks signing 5k proofs with 1 task that signs 5k/32 ~= 160 proofs each slot.

Based on testing with Goerli validators this seems to reduce the average time to produce a signature by preventing Tokio and the OS from falling over each other trying to run hundreds of threads. My testing so far has been with local keystores, which run on a dynamic pool of up to 512 OS threads because they use [`spawn_blocking`](https://docs.rs/tokio/1.11.0/tokio/task/fn.spawn_blocking.html) (and we haven't changed the default).

An earlier version of this PR hyper-optimised the time-per-signature metric to the detriment of the entire system's performance (see the reverted commits). The current PR is conservative in that it avoids touching the attestation service at all. I think there's more optimising to do here, but we can come back for that in a future PR rather than expanding the scope of this one.

The new algorithm for attestation selection proofs is:

- We sign a small batch of selection proofs each slot, for slots up to 8 slots in the future. On average we'll sign one slot's worth of proofs per slot, with an 8 slot lookahead.
- The batch is signed halfway through the slot when there is unlikely to be contention for signature production (blocks are <4s, attestations are ~4-6 seconds, aggregates are 8s+).
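
A rough sketch of the slot-by-slot batching described above, with illustrative types (the real implementation signs actual proofs and tracks duties differently):

```rust
use std::collections::HashSet;

/// Illustrative duty: a (validator, slot) pair needing a selection proof.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct Duty {
    validator_index: u64,
    slot: u64,
}

/// Each slot, sign only the not-yet-signed selection proofs whose duty slot
/// falls within a small lookahead window, instead of signing the whole epoch
/// in one burst at the epoch boundary.
fn sign_batch_for_slot(
    current_slot: u64,
    duties: &[Duty],
    already_signed: &mut HashSet<Duty>,
) -> Vec<Duty> {
    const LOOKAHEAD_SLOTS: u64 = 8;
    let batch: Vec<Duty> = duties
        .iter()
        .copied()
        .filter(|duty| {
            duty.slot >= current_slot
                && duty.slot < current_slot + LOOKAHEAD_SLOTS
                && !already_signed.contains(duty)
        })
        .collect();
    for duty in &batch {
        already_signed.insert(*duty); // a real implementation would sign here
    }
    batch
}

fn main() {
    // One duty per slot for a 32-slot epoch starting at slot 100.
    let duties: Vec<Duty> = (0..32)
        .map(|i| Duty { validator_index: i, slot: 100 + i })
        .collect();
    let mut signed = HashSet::new();
    // The first slot signs the 8-slot lookahead...
    assert_eq!(sign_batch_for_slot(100, &duties, &mut signed).len(), 8);
    // ...and subsequent slots only pick up the newly visible slot.
    assert_eq!(sign_batch_for_slot(101, &duties, &mut signed).len(), 1);
}
```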

## Performance Data

_See first comment for updated graphs_.

Graph of median signing times before this PR:

![signing_times_median](https://user-images.githubusercontent.com/4452260/221495627-3ab3c105-319f-406e-b99d-b5913e0ded9c.png)

Graph of update attesters metric (includes selection proof signing) before this PR:

![update_attesters_store](https://user-images.githubusercontent.com/4452260/221497057-01ba40e4-8148-45f6-9e21-36a9567a631a.png)

Median signing time after this PR (prototype from 12:00, updated version from 13:30):

![signing_times_median_updated](https://user-images.githubusercontent.com/4452260/221771578-47a040cc-b832-482f-9a1a-d1bd9854e00e.png)

99th percentile on signing times (bounded attestation signing from 16:55, now removed):

![signing_times_99pc](https://user-images.githubusercontent.com/4452260/221772055-e64081a8-2220-45ba-ba6d-9d7e344a5bde.png)

Attester map update timing after this PR:

![update_attesters_store_updated](https://user-images.githubusercontent.com/4452260/221771757-c8558a48-7f4e-4bb5-9929-dee177a66c1e.png)

Selection proof signings per second change:

![signing_attempts](https://user-images.githubusercontent.com/4452260/221771855-64f5da22-1655-478d-926b-810be8a3650c.png)

## Link to late blocks

I believe this is related to the slow block signings because logs from Stakely in #3963 show these two logs almost 5 seconds apart:

> Feb 23 18:56:23.978 INFO Received unsigned block, slot: 5862880, service: block, module: validator_client::block_service:393
> Feb 23 18:56:28.552 INFO Publishing signed block, slot: 5862880, service: block, module: validator_client::block_service:416

The only thing that happens between those two logs is the signing of the block:

See `validator_client/src/block_service.rs` (L410-L414) at commit `0fb58a680d`.

Helpfully, Stakely noticed this issue without any Lighthouse BNs in the mix, which pointed to a clear issue in the VC.

## TODO

- [x] Further testing on testnet infrastructure.
- [x] Make the attestation signing parallelism configurable.
2023-03-05 23:43:31 +00:00
Michael Sproul
01556f6f01 Optimise payload attributes calculation and add SSE (#4027)
## Issue Addressed

Closes #3896
Closes #3998
Closes #3700

## Proposed Changes

- Optimise the calculation of withdrawals for payload attributes by avoiding state clones, avoiding unnecessary state advances and reading from the snapshot cache if possible.
- Use the execution layer's payload attributes cache to avoid re-calculating payload attributes. I actually implemented a new LRU cache just for withdrawals but it had the exact same key and most of the same data as the existing payload attributes cache, so I deleted it.
- Add a new SSE event that fires when payloadAttributes are calculated. This is useful for block builders, a la https://github.com/ethereum/beacon-APIs/issues/244.
- Add a new CLI flag `--always-prepare-payload` which forces payload attributes to be sent with every fcU regardless of connected proposers. This is intended for use by builders/relays.

For maximum effect, the flags I've been using to run Lighthouse in "payload builder mode" are:

```
--always-prepare-payload \
--prepare-payload-lookahead 12000 \
--suggested-fee-recipient 0x0000000000000000000000000000000000000000
```

The fee recipient is required so Lighthouse has something to pack in the payload attributes (it can be ignored by the builder). The lookahead causes fcU to be sent at the start of every slot rather than at 8s. As usual, fcU will also be sent after each change of head block. I think this combination is sufficient for builders to build on all viable heads. Often there will be two fcU (and two payload attributes) sent for the same slot: one sent at the start of the slot with the head from `n - 1` as the parent, and one sent after the block arrives with `n` as the parent.

Example usage of the new event stream:

```bash
curl -N "http://localhost:5052/eth/v1/events?topics=payload_attributes"
```

## Additional Info

- [x] Tests added by updating the proposer re-org tests. This has the benefit of testing the proposer re-org code paths with withdrawals too, confirming that the new changes don't interact poorly.
- [ ] Benchmarking with `blockdreamer` on devnet-7 showed promising results but I'm yet to do a comparison to `unstable`.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-03-05 23:43:30 +00:00
Paul Hauner
6e15533b54 Add latency measurement service to VC (#4024)
## Issue Addressed

NA

## Proposed Changes

Adds a service which periodically polls (11s into each mainnet slot) the `node/version` endpoint on each BN and roughly measures the round-trip latency. The latency is exposed as a `DEBG` log and a Prometheus metric.

The `--latency-measurement-service` has been added to the VC, with the following options:

- `--latency-measurement-service true`: enable the service (default).
    - `--latency-measurement-service`: (without a value) has the same effect.
- `--latency-measurement-service false`: disable the service.
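
A rough sketch of the round-trip measurement itself, with the HTTP request to `node/version` replaced by a placeholder closure:

```rust
use std::time::{Duration, Instant};

/// Roughly measure the round-trip latency of a request to a beacon node by
/// timing a cheap endpoint such as `node/version`.
fn measure_latency<F>(request: F) -> Duration
where
    F: FnOnce() -> Result<(), String>,
{
    let start = Instant::now();
    let _ = request(); // errors still yield a timing; a real service would log them
    start.elapsed()
}

fn main() {
    // Stand-in for an HTTP GET of `node/version` against the primary BN.
    let latency = measure_latency(|| {
        std::thread::sleep(Duration::from_millis(25));
        Ok(())
    });
    println!("primary beacon node latency: {:?}", latency);
}
```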

## Additional Info

Whilst looking at our staking setup, I think the BN+VC latency is contributing to late blocks. Now that we have to wait for the builders to respond it's nice to try and do everything we can to reduce that latency. Having visibility is the first step.
2023-03-05 23:43:29 +00:00
Paul Hauner
cac3a66be4 Permit a null LVH from an INVALID response to newPayload (#4037)
## Issue Addressed

NA

## Proposed Changes

As discovered in #4034, Lighthouse is not accepting `latest_valid_hash == None` in an `INVALID` response to `newPayload`. The `null`/`None` response *was* illegal at one point, however it was added in https://github.com/ethereum/execution-apis/pull/254.

This PR brings Lighthouse in line with the standard and should fix the root cause of what #4034 patched around.
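
As a rough sketch of the accepted response shapes (illustrative types, not the actual engine API bindings):

```rust
/// Illustrative subset of the `newPayload` status response.
enum PayloadStatus {
    Valid,
    Syncing,
    Invalid { latest_valid_hash: Option<[u8; 32]> },
}

/// How the response might be interpreted: an INVALID status with a `null`
/// latest_valid_hash is legal per the execution-apis spec and simply means
/// the EL could not identify the latest valid ancestor.
fn describe(status: &PayloadStatus) -> &'static str {
    match status {
        PayloadStatus::Valid => "payload valid",
        PayloadStatus::Syncing => "EL still syncing",
        PayloadStatus::Invalid { latest_valid_hash: Some(_) } => {
            "payload invalid; invalidate descendants of the latest valid hash"
        }
        PayloadStatus::Invalid { latest_valid_hash: None } => {
            "payload invalid; latest valid ancestor unknown (null LVH is allowed)"
        }
    }
}

fn main() {
    let status = PayloadStatus::Invalid { latest_valid_hash: None };
    println!("{}", describe(&status));
}
```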

## Additional Info

NA
2023-03-03 04:12:50 +00:00
Atanas Minkov
2b6348781a Log a debug message when a request fails for a beacon node candidate (#4036)
## Issue Addressed
#3985

## Proposed Changes

Log a debug message when a BN candidate returns an error.

`Mar 01 16:40:24.011 DEBG Request to beacon node failed           error: ServerMessage(ErrorMessage { code: 503, message: "SERVICE_UNAVAILABLE: beacon node is syncing: head slot is 8416, current slot is 5098402", stacktraces: [] }), node: http://localhost:5052/`
2023-03-02 05:26:14 +00:00
Pawan Dhananjay
5b18fd92cb Cleaner logic for gossip subscriptions for new forks (#4030)
## Issue Addressed

Cleaner resolution for #4006 

## Proposed Changes

We are currently subscribing to core topics of new forks way before the actual fork since we had just a single `CORE_TOPICS` array. This PR separates the core topics for every fork and subscribes only to the required topics based on the current fork.
Also adds logic for subscribing to the core topics of a new fork only 2 slots before the fork happens.

2 slots is to give enough time for the gossip meshes to form. 

Currently doesn't add logic to remove topics from older forks in new forks. For example, in the coupled 4844 world, we had to remove the `BeaconBlock` topic in favour of `BeaconBlocksAndBlobsSidecar` at the 4844 fork. It should be easy enough to add though. Not adding it because I'm assuming that #4019 will get merged before this PR and we won't require any deletion logic. Happy to add it regardless though.
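
A rough sketch of per-fork core topics and the 2-slot early subscription (the fork and topic names here are illustrative, not the exact gossip topic strings):

```rust
/// Illustrative fork names.
#[derive(Clone, Copy, PartialEq)]
enum ForkName {
    Base,
    Altair,
    Capella,
}

/// Each fork gets its own core topic list instead of one global CORE_TOPICS.
fn core_topics_for_fork(fork: ForkName) -> Vec<&'static str> {
    // Every fork carries the baseline topics...
    let mut topics = vec!["beacon_block", "beacon_aggregate_and_proof", "voluntary_exit"];
    // ...and later forks add their own.
    if fork != ForkName::Base {
        topics.push("sync_committee_contribution_and_proof");
    }
    if fork == ForkName::Capella {
        topics.push("bls_to_execution_change");
    }
    topics
}

/// Subscribe to the next fork's topics only shortly before it activates,
/// leaving enough time for the gossip meshes to form.
fn should_subscribe_to_next_fork(current_slot: u64, fork_slot: u64) -> bool {
    const SUBSCRIBE_DELAY_SLOTS: u64 = 2;
    current_slot + SUBSCRIBE_DELAY_SLOTS >= fork_slot
}

fn main() {
    assert!(core_topics_for_fork(ForkName::Capella).contains(&"bls_to_execution_change"));
    // Two slots before the fork we begin subscribing.
    assert!(should_subscribe_to_next_fork(98, 100));
    assert!(!should_subscribe_to_next_fork(97, 100));
}
```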
2023-03-01 09:22:48 +00:00
Michael Sproul
ca1ce381a9 Delete Kiln and Ropsten configs (#4038)
## Proposed Changes

Remove built-in support for Ropsten and Kiln via the `--network` flag. Both testnets are long dead and deprecated.

This shaves about 30MiB off the binary size, from 135MiB to 103MiB (maxperf), or 165MiB to 135MiB (release).
2023-03-01 06:16:14 +00:00
Divma
047c7544e3 Clean capella (#4019)
## Issue Addressed

Cleans up all the remnants of 4844 in capella. This makes sure that when 4844 is reviewed there is nothing we are missing because it got included here.

## Proposed Changes

drop a bomb on every 4844 thing 

## Additional Info

The merge process I did (locally) is as follows:
- squash merge to produce one commit
- in new branch off unstable with the squashed commit create a `git revert HEAD` commit
- merge that new branch onto 4844 with `--strategy ours`
- compare local 4844 to remote 4844 and make sure the diff is empty
- enjoy

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-01 03:19:02 +00:00
Paul Hauner
17d9a620cf Add more logs in the BN HTTP API during block production (#4025)
## Issue Addressed

NA

## Proposed Changes

Adds two new `DEBG` logs to the HTTP API:

1. As soon as we are requested to produce a block.
2. As soon as a signed block is received.

In #3858 we added some very helpful logs to the VC so we could see when things are happening with block proposals in the VC. After doing some more debugging, I found that I can tell when the VC is sending a block but I *can't* tell the time that the BN receives it (I can only get the time after the BN has started doing some work with the block). Knowing when the VC publishes and when the BN receives is useful for determining the delays introduced by network latency (and some other things like JSON decoding, etc).

## Additional Info

NA
2023-02-28 02:20:53 +00:00
Age Manning
0155455990 Docs for Siren (#4023)
This adds some documentation for the Siren app into the Lighthouse book.

Co-authored-by: Mavrik <mrricki.m.usmc@gmail.com>
2023-02-28 02:20:52 +00:00
Paul Hauner
caa6190d4a Use consensus-spec-tests v1.3.0-rc.3 (#4021)
## Issue Addressed

NA

## Proposed Changes

Updates our `ef_tests` to use: https://github.com/ethereum/consensus-specs/releases/tag/v1.3.0-rc.3

This required:

- Skipping a `merkle_proof_validity` test (see #4022)
- Account for the `eip4844` tests changing name to `deneb`
    - My IDE did some Python linting during this change. It seemed simple and nice so I left it there.

## Additional Info

NA
2023-02-28 02:20:51 +00:00