Commit Graph

2283 Commits

Author SHA1 Message Date
Jimmy Chen
2aef2db66f Un-deprecate test utils functions such as extend_chain (#4255)
## Issue Addressed

This PR un-deprecates some commonly used test util functions, e.g. `extend_chain`. Most of these were deprecated in 2020 but some of us still found them quite convenient and they're still being used a lot. If there's no issue with using them, I think we should remove the "Deprecated" comment to avoid confusion.
2023-05-03 04:12:12 +00:00
Age Manning
616bee6757 Maintain trusted peers (#4159)
## Issue Addressed
#4150 

## Proposed Changes

Maintain trusted peers in the pruning logic. ~~In principle the changes here are not necessary as a trusted peer has a max score (100) and all other peers can have at most 0 (because we don't implement positive scores). This means that we should never prune trusted peers unless we have more trusted peers than the target peer count.~~

This change shifts this logic to explicitly never prune trusted peers which I expect is the intuitive behaviour. 

~~I suspect the issue in #4150 arises when a trusted peer disconnects from us for one reason or another and then we remove that peer from our peerdb as it becomes stale. When it re-connects at some large time later, it is no longer a trusted peer.~~

Currently we do disconnect trusted peers, and this PR corrects this to maintain trusted peers in the pruning logic.

As suggested in #4150 we maintain trusted peers in the db and thus we remember them even if they disconnect from us.
2023-05-03 04:12:10 +00:00
Michael Sproul
c11638c36c Split common crates out into their own repos (#3890)
## Proposed Changes

Split out several crates which now exist in separate repos under `sigp`.

- [`ssz` and `ssz_derive`](https://github.com/sigp/ethereum_ssz)
- [`tree_hash` and `tree_hash_derive`](https://github.com/sigp/tree_hash)
- [`ethereum_hashing`](https://github.com/sigp/ethereum_hashing)
- [`ethereum_serde_utils`](https://github.com/sigp/ethereum_serde_utils)
- [`ssz_types`](https://github.com/sigp/ssz_types)

For the published crates see: https://crates.io/teams/github:sigp:crates-io?sort=recent-updates.

## Additional Info

- [x] Need to work out how to handle versioning. I was hoping to do 1.0 versions of several crates, but if they depend on `ethereum-types 0.x` that is not going to work. EDIT: decided to go with 0.5.x versions.
- [x] Need to port several changes from `tree-states`, `capella`, `eip4844` branches to the external repos.
2023-04-28 01:15:40 +00:00
Age Manning
7456e1e8fa Separate BN for block proposals (#4182)
It is a well-known fact that the IP addresses of beacon nodes used by specific validators can be de-anonymized. There is an assumed risk that a malicious user may attempt to DOS validators when producing blocks to prevent chain growth/liveness.

Although a number of ideas have been put forward to address this, there are a few simple approaches we can take to mitigate this risk.

Currently, a Lighthouse user is able to set a number of beacon-nodes that their validator client can connect to. If one beacon node is taken offline, it can fall back to another. Different beacon nodes can use VPNs or rotate IPs in order to mask their IPs.

This PR provides an additional setup option which further mitigates attacks of this kind.

This PR introduces a CLI flag `--proposer-only` to the beacon node. Setting this flag will configure the beacon node to run with minimal peers and crucially will not subscribe to subnets or sync committees. Therefore nodes of this kind should not be identified as nodes connected to validators of any kind.

It also introduces a CLI flag `--proposer-nodes` to the validator client. Users can then provide a number of beacon nodes (which may or may not run the `--proposer-only` flag) that the validator client will use for block production and propagation only. If these nodes fail, the validator client will fall back to the default list of beacon nodes.

Users are then able to set up a number of beacon nodes dedicated to block proposals (which are unlikely to be identified as validator nodes) and point their validator clients to produce blocks on these nodes and attest on other beacon nodes. An attack attempting to prevent liveness on the eth2 network would then need to preemptively find and attack the proposer nodes which is significantly more difficult than the default setup.
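
For example, a sketch of such a setup (hostnames and endpoints are illustrative, not from this PR):

```bash
# Beacon node dedicated to proposals: minimal peers, no subnet subscriptions.
lighthouse bn --proposer-only

# Validator client: attest via the usual beacon nodes, but produce and publish
# blocks via the proposer-only node, falling back to --beacon-nodes if it fails.
lighthouse vc \
  --beacon-nodes http://attesting-bn:5052 \
  --proposer-nodes http://proposer-bn:5052
```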

This is a follow on from: #3328 

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-04-26 01:12:36 +00:00
int88
90d562b3d4 add attestation inclusion distance in http api (#4148)
## Issue Addressed

#4097

## Proposed Changes

Add attestation inclusion distance in http api, extend `/lighthouse/ui/validator_metrics` to include it.

## Usage
```
curl -X POST "http://localhost:8001/lighthouse/ui/validator_metrics" -d '{"indices": [1]}' -H "Content-Type: application/json" | jq
```

```
{
  "data": {
    "validators": {
      "1": {
        "attestation_hits": 3,
        "attestation_misses": 1,
        "attestation_hit_percentage": 75,
        "attestation_head_hits": 3,
        "attestation_head_misses": 0,
        "attestation_head_hit_percentage": 100,
        "attestation_target_hits": 3,
        "attestation_target_misses": 0,
        "attestation_target_hit_percentage": 100,
        "attestation_inclusion_distance": 1
      }
    }
  }
}
```

## Additional Info

NA
2023-04-26 01:12:35 +00:00
Pawan Dhananjay
a78285db5e Fix Rust 1.69 lints (#4222)
## Issue Addressed

N/A

## Proposed Changes

Fixes lints, mostly `extra-unused-type-parameters`: https://rust-lang.github.io/rust-clippy/master/index.html#extra_unused_type_parameters
2023-04-21 18:29:28 +00:00
Paul Hauner
693886b941 Release v4.1.0 (#4191)
## Issue Addressed

NA

## Proposed Changes

Bump versions.

## Additional Info

NA
2023-04-20 00:51:38 +00:00
Paul Hauner
48843ba198 Check lateness of block before requeuing it (#4208)
## Issue Addressed

NA

## Proposed Changes

Avoids reprocessing loops introduced in #4179. (Also somewhat related to #4192).

Breaks the re-queue loop by only re-queuing when an RPC block is received before the attestation creation deadline.

I've put `proposal_is_known` behind a closure to avoid interacting with the `observed_proposers` lock unnecessarily. 
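
A minimal sketch of that pattern, with hypothetical names and signature (not the actual Lighthouse code):

```rust
/// The potentially lock-taking check is passed as a closure so it only runs
/// when the cheap timing check passes first.
fn should_requeue(before_attestation_deadline: bool, proposal_is_known: impl FnOnce() -> bool) -> bool {
    // `&&` short-circuits: a late block never touches the observed-proposers lock.
    before_attestation_deadline && proposal_is_known()
}

fn main() {
    // Late block: requeuing is refused without ever invoking the closure.
    assert!(!should_requeue(false, || true));
}
```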

## Additional Info

NA
2023-04-19 04:23:20 +00:00
Michael Sproul
e9a7316f1d Set user agent on requests to builder (#4199)
## Issue Addressed

Closes #4185

## Proposed Changes

- Set user agent to `Lighthouse/vX.Y.Z-<commit hash>` by default
- Allow tweaking user agent via `--builder-user-agent "agent"`
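
For example (the builder URL is illustrative):

```bash
lighthouse bn \
  --builder http://localhost:18550 \
  --builder-user-agent "my-agent/0.1.0"
```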
2023-04-18 02:47:36 +00:00
Michael Sproul
1d92e3f77c Use efficient payload reconstruction for HTTP API (#4102)
## Proposed Changes

Builds on #4028 to use the new payload bodies methods in the HTTP API as well.

## Caveats

The payloads by range method only works for the finalized chain, so it can't be used in the execution engine integration tests because we try to reconstruct unfinalized payloads there.
2023-04-18 02:47:35 +00:00
Paul Hauner
dd124b2d68 Address observed proposers behaviour (#4192)
## Issue Addressed

NA

## Proposed Changes

Apply two changes to code introduced in #4179:

1. Remove the `ERRO` log for when we error on `proposer_has_been_observed()`. We were seeing a lot of this in our logs for finalized blocks and it's a bit noisy.
1. Use `false` rather than `true` for `proposal_already_known` when there is an error. If a block raises an error in `proposer_has_been_observed()` then the block must be invalid, so we should process (and reject) it now rather than queuing it.

For reference, here is one of the offending `ERRO` logs:

```
ERRO Failed to check observed proposers block_root: 0x5845…878e, source: rpc, error: FinalizedBlock { slot: Slot(5410983), finalized_slot: Slot(5411232) }
```

## Additional Info

NA
2023-04-14 06:37:16 +00:00
Paul Hauner
2b3084f578 Use head state for exit verification (#4183)
## Issue Addressed

NA

## Proposed Changes

Similar to #4181 but without the version bump and a more nuanced fix.

Patches the high CPU usage seen after the Capella fork which was caused by processing exits when there are skip slots.

## Additional Info

~~This is an imperfect solution that will cause us to drop some exits at the fork boundary. This is tracked at #4184.~~
2023-04-14 01:11:46 +00:00
Michael Sproul
a3669abac5 Avoid processing redundant RPC blocks (#4179)
## Proposed Changes

We already make some attempts to avoid processing RPC blocks when a block from the same proposer is already being processed through gossip. This PR strengthens that guarantee by using the existing cache for `observed_block_producers` to inform whether an RPC block's processing should be delayed.
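
A minimal sketch of the idea (the real cache lives in the beacon chain; these types are simplified stand-ins):

```rust
use std::collections::HashSet;

/// Stand-in for the `observed_block_producers` cache: (slot, proposer_index)
/// pairs for which a block has already been seen (e.g. via gossip).
#[derive(Default)]
struct ObservedBlockProducers {
    seen: HashSet<(u64, u64)>,
}

impl ObservedBlockProducers {
    fn observe(&mut self, slot: u64, proposer_index: u64) {
        self.seen.insert((slot, proposer_index));
    }

    fn proposer_has_been_observed(&self, slot: u64, proposer_index: u64) -> bool {
        self.seen.contains(&(slot, proposer_index))
    }
}

fn main() {
    let mut cache = ObservedBlockProducers::default();
    cache.observe(100, 42); // gossip block arrives first
    // The same (slot, proposer) arriving over RPC can be delayed instead of
    // being processed redundantly.
    assert!(cache.proposer_has_been_observed(100, 42));
}
```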
2023-04-13 07:05:02 +00:00
Michael Sproul
b90c0c3fb1 Make re-org strat more cautious and add more config (#4151)
## Proposed Changes

This change attempts to prevent failed re-orgs by:

1. Lowering the re-org cutoff from 2s to 1s. This is informed by a failed re-org attempted by @yorickdowne's node. The failed block was requested in the 1.5-2s window due to a Vouch failure, and failed to propagate to the majority of the network before the attestation deadline at 4s.
2. Allow users to adjust their re-org cutoff depending on observed network conditions and their risk profile. The static 2 second cutoff was too rigid.
3. Add a `--proposer-reorg-disallowed-offsets` flag which can be used to prohibit reorgs at certain slots. This is intended to help workaround an issue whereby reorging blocks at slot 1 are currently taking ~1.6s to propagate on gossip rather than ~500ms. This is suspected to be due to a cache miss in current versions of Prysm, which should be fixed in their next release.
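
As a rough example of the new flag (the value format shown is an assumption, not taken from this PR):

```bash
# Never attempt a proposer re-org of a block proposed at slot offset 1 within an epoch.
lighthouse bn --proposer-reorg-disallowed-offsets 1
```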

## Additional Info

I'm of two minds about removing the `shuffling_stable` check which checks for blocks at slot 0 in the epoch. If we removed it users would be able to configure Lighthouse to try reorging at slot 0, which likely wouldn't work very well due to interactions with the proposer index cache. I think we could leave it for now and revisit it later.
2023-04-13 07:05:01 +00:00
ethDreamer
00cf5fc184 Remove Redundant Trait Bound (#4169)
I realized this is redundant while reasoning about how the `store` is implemented given the [definition of `ItemStore`](https://github.com/sigp/lighthouse/blob/v4.0.1/beacon_node/store/src/lib.rs#L107)
```rust
pub trait ItemStore<E: EthSpec>: KeyValueStore<E> + Sync + Send + Sized + 'static {
    ...
}
```
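
A self-contained illustration of why repeating those bounds elsewhere is redundant (simplified traits, not the Lighthouse definitions):

```rust
trait KeyValueStore: Sync + Send + Sized + 'static {}
trait ItemStore: KeyValueStore + Sync + Send + Sized + 'static {}

// Redundant: `ItemStore` already implies all of these bounds...
fn open_redundant<S: ItemStore + Sync + Send + 'static>(_store: &S) {}

// ...so this is equivalent and simpler.
fn open<S: ItemStore>(_store: &S) {}

fn main() {
    struct MemStore;
    impl KeyValueStore for MemStore {}
    impl ItemStore for MemStore {}
    open_redundant(&MemStore);
    open(&MemStore);
}
```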
2023-04-12 01:48:22 +00:00
Mac L
0e2e23e088 Remove the unused ExecutionOptimisticForkVersionedResponse type (#4160)
## Issue Addressed

#4146 

## Proposed Changes

Removes the `ExecutionOptimisticForkVersionedResponse` type and the associated Beacon API endpoint which is now deprecated. Also removes the test associated with the endpoint.
2023-04-12 01:48:21 +00:00
Pawan Dhananjay
3b117f4bf6 Add a flag to disable peer scoring (#4135)
## Issue Addressed

N/A

## Proposed Changes

Adds a flag for disabling peer scoring. This is useful for local testing and testing small networks for new features.
2023-04-12 01:48:19 +00:00
Mac L
8630ddfec4 Add beacon.watch (#3362)
> This is currently a WIP and all features are subject to alteration or removal at any time.

## Overview

The successor to #2873.

Contains the backbone of `beacon.watch` including syncing code, the initial API, and several core database tables.

See `watch/README.md` for more information, requirements and usage.
2023-04-03 05:35:11 +00:00
int88
1e029ce538 remove dup log (#4155)
## Issue Addressed

NA

## Proposed Changes

remove duplicate log message.

## Additional Info

NA
2023-04-03 03:02:58 +00:00
Age Manning
311e69db65 Ban peer race condition (#4140)
It is possible that when we go to ban a peer, there is already an unban message for that peer in the queue. This could lead to the case that we ban and immediately unban a peer, leaving us in a state where a should-be-banned peer is unbanned.

If this banned peer connects to us in this faulty state, we currently do not attempt to re-ban it. This PR also corrects this, so if we do see this error, it will now self-correct (although we shouldn't see the error in the first place).

I have also incremented the severity of not supporting protocols as I see peers ultimately get banned in a few steps and it seems to make sense to just ban them outright, rather than have them linger.
2023-04-03 03:02:57 +00:00
Jimmy Chen
2de3451011 Rate limiting backfill sync (#3936)
## Issue Addressed

#3212 

## Proposed Changes

- Introduce a new `rate_limiting_backfill_queue` - any new inbound backfill work events gets immediately sent to this FIFO queue **without any processing**
- Spawn a `backfill_scheduler` routine that pops a backfill event from the FIFO queue at specified intervals (currently halfway through a slot, or at 6s after slot start for 12s slots) and sends the event to `BeaconProcessor` via a `scheduled_backfill_work_tx` channel
- This channel gets polled last in `InboundEvents`, and the received work event is wrapped in an `InboundEvent::ScheduledBackfillWork` enum variant, which gets processed immediately or queued by the `BeaconProcessor` (existing logic applies from here)
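
A rough tokio sketch of that scheduling idea (channel types, names and signature are illustrative, not the actual `BeaconProcessor` plumbing):

```rust
use std::collections::VecDeque;
use std::time::Duration;
use tokio::sync::mpsc;

/// Buffer inbound backfill work and release at most one item per tick.
async fn backfill_scheduler<T>(
    mut inbound_rx: mpsc::UnboundedReceiver<T>,
    scheduled_tx: mpsc::UnboundedSender<T>,
    tick: Duration,
) {
    let mut queue: VecDeque<T> = VecDeque::new();
    let mut interval = tokio::time::interval(tick);
    loop {
        tokio::select! {
            Some(work) = inbound_rx.recv() => queue.push_back(work),
            _ = interval.tick() => {
                if let Some(work) = queue.pop_front() {
                    // In Lighthouse this would be handed back to the BeaconProcessor.
                    if scheduled_tx.send(work).is_err() {
                        return;
                    }
                }
            }
        }
    }
}
```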

Diagram comparing backfill processing with / without rate-limiting: 
https://github.com/sigp/lighthouse/issues/3212#issuecomment-1386249922

See this comment for @paulhauner's  explanation and solution: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956

## Additional Info

I've compared this branch (with backfill processing rate limited to 1 and 3 batches per slot) against the latest stable version. The CPU usage during backfill sync is reduced by ~5% - 20%, more details on this page:

https://hackmd.io/@jimmygchen/SJuVpJL3j

The above testing is done on Goerli (as I don't currently have hardware for Mainnet), I'm guessing the differences are likely to be bigger on mainnet due to block size.

### TODOs

- [x] Experiment with processing multiple batches per slot. (need to think about how to do this for different slot durations)
- [x] Add an option to disable rate-limiting; it is enabled by default.
- [x] (No longer required now we're reusing the reprocessing queue) Complete the `backfill_scheduler` task when backfill sync is completed or not required
2023-04-03 03:02:55 +00:00
Daniel Ramirez Chiquillo
036b797b2c Add finalized to HTTP API responses (#3753)
## Issue Addressed

#3708 

## Proposed Changes
- Add `is_finalized_block` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `is_finalized_state` method to `BeaconChain` in `beacon_node/beacon_chain/src/beacon_chain.rs`.
- Add `fork_and_execution_optimistic_and_finalized` in `beacon_node/http_api/src/state_id.rs`.
- Add `ExecutionOptimisticFinalizedForkVersionedResponse` type in `consensus/types/src/fork_versioned_response.rs`.
- Add `execution_optimistic_finalized_fork_versioned_response`function in  `beacon_node/http_api/src/version.rs`.
- Add `ExecutionOptimisticFinalizedResponse` type in `common/eth2/src/types.rs`.
- Add `add_execution_optimistic_finalized` method in  `common/eth2/src/types.rs`.
- Update API response methods to include finalized.
- Remove `execution_optimistic_fork_versioned_response`
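
As a hedged illustration (the endpoint, port and values are examples, not from this PR), affected responses now carry a top-level `finalized` field alongside `execution_optimistic`:

```
curl -s "http://localhost:5052/eth/v1/beacon/states/head/root" | jq
```

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": {
    "root": "0x…"
  }
}
```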

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-30 06:08:37 +00:00
Age Manning
12205a8811 Correct log for ENR (#4133)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/4080

Fixes a log when displaying the initial ENR.
2023-03-29 23:55:55 +00:00
Christopher Chong
6bb28bc806 Add debug fork choice api (#4003)
## Issue Addressed

https://github.com/sigp/lighthouse/issues/3669

## Proposed Changes

- A new API to fetch fork choice data, as specified [here](https://github.com/ethereum/beacon-APIs/pull/232)
- A new integration test to test the new API

## Additional Info


- The `extra_data` field specified in the beacon-API spec is not implemented; please let me know if I should implement it.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-03-29 02:56:37 +00:00
int88
d3c20ffa9d improve error message (#4141)
## Issue Addressed

NA

## Proposed Changes

Avoid using a magic number directly in the error message.

## Additional Info

NA
2023-03-28 22:07:05 +00:00
Paul Hauner
a53830fd60 Release v4.0.1 (#4125)
## Issue Addressed

NA

## Proposed Changes

- Bump versions.
- Bump openssl version to resolve various `cargo audit` notices.

## Additional Info

- Requires further testing
2023-03-26 22:39:28 +00:00
Paul Hauner
b2525d6ebd
Release Candidate v4.0.1-rc.0 (#4123) 2023-03-23 21:16:14 +11:00
Paul Hauner
0616e01202 Release v4.0.0 (#4112)
## Issue Addressed

NA

## Proposed Changes

Bump versions to `v4.0.0`

## Additional Info

NA
2023-03-22 04:06:02 +00:00
Paul Hauner
1f8c17b530 Fork choice modifications and cleanup (#3962)
## Issue Addressed

NA

## Proposed Changes

- Implements https://github.com/ethereum/consensus-specs/pull/3290/
- Bumps `ef-tests` to [v1.3.0-rc.4](https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.3.0-rc.4).

The `CountRealizedFull` concept has been removed and the `--count-unrealized-full` and `--count-unrealized` BN flags now do nothing but log a `WARN` when used.

## Database Migration Debt

This PR removes the `best_justified_checkpoint` from fork choice. This field is persisted on-disk and the correct way to go about this would be to make a DB migration to remove the field. However, in this PR I've simply stubbed out the value with a junk value. I've taken this approach because if we're going to do a DB migration I'd love to remove the `Option`s around the justified and finalized checkpoints on `ProtoNode` whilst we're at it. Those options were added in #2822 which was included in Lighthouse v2.1.0. The options were only put there to handle the migration and they've been set to `Some` ever since v2.1.0. There's no reason to keep them as options anymore.

I started adding the DB migration to this branch but I started to feel like I was bloating this rather critical PR with nice-to-haves. I've kept the partially-complete migration [over in my repo](https://github.com/paulhauner/lighthouse/tree/fc-pr-18-migration) so we can pick it up after this PR is merged.
2023-03-21 07:34:41 +00:00
Paul Hauner
59e45fe349 Reduce verbosity of reprocess queue logs (#4101)
## Issue Addressed

NA

## Proposed Changes

Replaces #4058 to attempt to reduce `ERRO Failed to send scheduled attestation` spam and provide more information for diagnosis. With this PR we achieve:

- When dequeuing attestations after a block is received, send only one log which reports `n` failures (rather than `n` logs reporting `n` failures).
- Make a distinction in logs between two separate attestation dequeuing events.
- Add more information to both log events to help assist with troubleshooting.

## Additional Info

NA
2023-03-21 05:15:00 +00:00
Age Manning
785a9171e6 Customisable shuffling cache size (#4081)
This PR enables the user to adjust the shuffling cache size.

This is useful for some HTTP API requests which require re-computing old shufflings. This PR currently optimizes the
`beacon/states/{state_id}/committees` HTTP API by first checking the cache before re-building the shuffling.

If the shuffling cache size is set to a non-default value, then the HTTP API request will also fill the cache as it constructs new shufflings.

If the CLI flag is not present or the value is set to the default of 16, the default behaviour is observed.
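
For reference, a request to the endpoint this optimizes (port, state id and epoch are illustrative):

```
curl -s "http://localhost:5052/eth/v1/beacon/states/finalized/committees?epoch=100000" | jq
```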


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-21 05:14:59 +00:00
Age Manning
76a2007b64 Improve Lighthouse Connectivity Via ENR TCP Update (#4057)
Currently Lighthouse will remain uncontactable if users port forward a port that is not the same as the one they are listening on. 

For example, if Lighthouse runs with port 9000 TCP/UDP locally but a router is configured to pass 9010 externally to the lighthouse node on 9000, other nodes on the network will not be able to reach the lighthouse node. 

This occurs because Lighthouse does not update its ENR TCP port on external socket discovery. The intention was always that users should use `--enr-tcp-port` to customise this, but this is non-intuitive. 

The difficulty arises because we have no discovery mechanism to find our external TCP port. If we discover a new external UDP port, we must guess what our external TCP port might be. This PR assumes the external TCP port is the same as the external UDP port (which may not be the case) and thus updates the TCP port along with the UDP port if the `--enr-tcp-port` flag is not set. 

Along with this PR, will be added documentation to the Lighthouse book so users can correctly understand and configure their ENR to maximize Lighthouse's connectivity. 

This relies on https://github.com/sigp/discv5/pull/166 and we should wait for a new release in discv5 before adding this PR.
2023-03-21 05:14:57 +00:00
Age Manning
17d56b06f6 Ignore self as a bootnode (#4110)
If a node is also a bootnode it can try to add itself to its own local routing table which will emit an error. 

The error is entirely harmless but we would prefer to avoid emitting the error. 

With this PR, we do not attempt to add a boot node ENR if that ENR corresponds to our local peer-id/node-id.
2023-03-20 21:50:37 +00:00
ethDreamer
65a5eb8292 Reconstruct Payloads using Payload Bodies Methods (#4028)
## Issue Addressed

* #3895 

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-19 23:15:59 +00:00
Paul Hauner
020fb483fe Clarify "Ready for Capella" (#4095)
## Issue Addressed

Resolves #4061

## Proposed Changes

Adds a message to tell users to check their EE.

## Additional Info

I really struggled to come up with something succinct and complete, so I'm totally open to feedback.
2023-03-17 00:44:04 +00:00
Paul Hauner
c3e5053612 Reduce false positive logging for late builder blocks (#4073)
## Issue Addressed

NA

## Proposed Changes

When producing a block from a builder, there are two points where we could consider the block "broadcast":

1. When the blinded block is published to the builder.
2. When the un-blinded block is published to the P2P network (this is always *after* the previous step).

Our logging for late block broadcasts was using (2) for builder-blocks, which was creating a lot of false-positive logs. This is because the builder publishes the block on the P2P network themselves before returning it to us and we perform (2). For clarity, the logs were false-positives because we claim that the block was published late by us when it was actually published earlier by the builder.

This PR changes our logging behavior so we do our logging at (1) instead. It also updates our metrics for block broadcast to distinguish between local and builder blocks. I believe the metrics change will be natively compatible with existing Grafana dashboards.

## Additional Info

One could argue that the builder *should* return the block to us faster, however that's not the case. I think it's more important that we don't desensitize users with false-positives.
2023-03-17 00:44:03 +00:00
Michael Sproul
4c2d4af6cd Make more noise when the EL is broken (#3986)
## Issue Addressed

Closes #3814, replaces #3818.

## Proposed Changes

* Add a WARN log for the case where we are attempting to sync chain segments but can't process them because they're building on an invalid parent. The most common case where we see this is when the execution node database is corrupt, causing sync to stall mysteriously (because we're currently logging the failure only at debug level).
* Additionally I've bumped up the logging for invalid execution payloads to `WARN`. This may result in some duplicate logs as we log errors from the `beacon_chain` and then again from the beacon processor. Invalid payloads and corrupt DBs _should_ be rare enough that this doesn't produce overwhelming log volume.
2023-03-17 00:44:02 +00:00
Age Manning
3d99ce25f8 Correct a race condition when dialing peers (#4056)
There is a race condition which occurs when multiple discovery queries return at almost the exact same time and they independently contain a useful peer we would like to connect to.

The result is that we can add the same peer to the dial queue more than once before we get a chance to process the queue. 
This ends up displaying an error to the user: 
```
ERRO Dialing an already dialing peer
```
Although this error is harmless it's not ideal. 

There are two solutions to resolving this:
1. As we decide to dial the peer, we change the state in the peer-db to dialing (before we add it to the queue) which would prevent other requests from adding to the queue. 
2. We prevent duplicates in the dial queue

This PR opts for 2, because 1 would complicate the code by changing states in non-intuitive places. Although this technically adds a very slight performance cost, it's probably a cleaner solution as we can keep the state-changing logic in one place.
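
A simplified sketch of option 2 (peer IDs stand in as `String`s; not the actual Lighthouse types):

```rust
use std::collections::{HashSet, VecDeque};

/// A dial queue that silently ignores peers which are already queued.
#[derive(Default)]
struct DialQueue {
    queue: VecDeque<String>,
    queued: HashSet<String>,
}

impl DialQueue {
    /// Returns false if the peer was already waiting to be dialed.
    fn push(&mut self, peer_id: String) -> bool {
        if self.queued.insert(peer_id.clone()) {
            self.queue.push_back(peer_id);
            true
        } else {
            false
        }
    }

    fn pop(&mut self) -> Option<String> {
        let peer = self.queue.pop_front()?;
        self.queued.remove(&peer);
        Some(peer)
    }
}

fn main() {
    let mut dials = DialQueue::default();
    assert!(dials.push("peer-a".into()));
    // A second discovery query returning the same peer is ignored.
    assert!(!dials.push("peer-a".into()));
    assert_eq!(dials.pop(), Some("peer-a".to_string()));
}
```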
2023-03-16 05:44:54 +00:00
Daniel Ramirez Chiquillo
1ec3041673 Remove Router/Processor Code (#4002)
## Issue Addressed

#3938 

## Proposed Changes

- `network::Processor` is deleted and all its logic is moved to `network::Router`.
- The `network::Router` module is moved to a single file.
- The following functions are deleted: `on_disconnect` `send_status` `on_status_response` `on_blocks_by_root_request` `on_lightclient_bootstrap` `on_blocks_by_range_request` `on_block_gossip` `on_unaggregated_attestation_gossip` `on_aggregated_attestation_gossip` `on_voluntary_exit_gossip` `on_proposer_slashing_gossip` `on_attester_slashing_gossip` `on_sync_committee_signature_gossip` `on_sync_committee_contribution_gossip` `on_light_client_finality_update_gossip` `on_light_client_optimistic_update_gossip`. These deletions are possible because the updated `Router` allows the underlying methods to be called directly.
2023-03-15 01:27:47 +00:00
realbigsean
ac0eb39ced Complete match for has_context_bytes (#3972)
## Issue Addressed

- Add a complete match for `Protocol` here. 
- The incomplete match was causing us not to append context bytes to the light client protocols
- This is the relevant part of the spec and it looks like context bytes are defined https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/p2p-interface.md#getlightclientbootstrap

Disclaimer: I have no idea if people are using it, but it shouldn't have been working, so I'm not sure why it wasn't caught.

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-03-15 01:27:46 +00:00
Michael Sproul
36e163c042 Add parent_block_number to payload SSE (#4053)
## Issue Addressed

In #4027 I forgot to add the `parent_block_number` to the payload attributes SSE.

## Proposed Changes

Compute the parent block number while computing the pre-payload attributes. Pass it on to the SSE stream.

## Additional Info

Not essential for v3.5.1 as I suspect most builders don't need the `parent_block_root`. I would like to use it for my dummy no-op builder however.
2023-03-14 06:26:37 +00:00
Divma
e190ebb8a0 Support for Ipv6 (#4046)
## Issue Addressed
Add support for ipv6 and dual stack in lighthouse. 

## Proposed Changes
From a user perspective, setting an ipv6 address and optionally configuring the ports should now feel exactly the same as using an ipv4 address. If listening over both ipv4 and ipv6, the user needs to:
- use the `--listen-address` two times (ipv4 and ipv6 addresses)
- `--port6` becomes then required
- `--discovery-port6` can now be used to additionally configure the ipv6 udp port
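
For example, a hypothetical dual-stack invocation using the flags above (addresses and port numbers are illustrative):

```bash
lighthouse bn \
  --listen-address 0.0.0.0 \
  --listen-address :: \
  --port 9000 \
  --port6 9090 \
  --discovery-port6 9091
```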

### Rough list of code changes
- Discovery:
  - Table filter and ip mode set to match the listening config. 
  - Ipv6 address, tcp port and udp port set in the ENR builder
  - Reported addresses now check which tcp port to give to libp2p
- LH Network Service:
  - Can listen over Ipv6, Ipv4, or both. This uses two sockets, since mapped addresses are disabled in libp2p and using two sockets is the most compatible option.
- NetworkGlobals:
  - No longer stores udp port since it was not used at all. Instead, stores the Ipv4 and Ipv6 TCP ports.
- NetworkConfig:
  - Update names to make it clear that previous udp and tcp ports in ENR were Ipv4
  - Add fields to configure Ipv6 udp and tcp ports in the ENR
  - Include advertised enr Ipv6 address.
  - Add type to model Listening address that's either Ipv4, Ipv6 or both. A listening address includes the ip, udp port and tcp port.
- UPnP:
  - Kept only for ipv4
- Cli flags:
  - `--listen-addresses` now can take up to two values
  - `--port` will apply to ipv4 or ipv6 if only one listening address is given. If two listening addresses are given it will apply only to Ipv4.
  - `--port6` New flag required when listening over ipv4 and ipv6 that applies exclusively to Ipv6.
  - `--discovery-port` will now apply to ipv4 and ipv6 if only one listening address is given.
  - `--discovery-port6` New flag to configure the individual udp port of ipv6 if listening over both ipv4 and ipv6.
  - `--enr-udp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour.
  - `--enr-udp6-port` Added to configure the enr udp6 field.
  - `--enr-tcp-port` Updated docs to specify that it only applies to ipv4. This is an old behaviour.
  - `--enr-tcp6-port` Added to configure the enr tcp6 field.
  - `--enr-addresses` now can take two values.
  - `--enr-match` updated behaviour.
- Common:
  - rename `unused_port` functions to specify that they are over ipv4.
  - add functions to get unused ports over ipv6.
- Testing binaries
  - Updated code to reflect network config changes and unused_port changes.

## Additional Info

TODOs:
- use two sockets in discovery. I'll get back to this and it's on https://github.com/sigp/discv5/pull/160
- lcli allow listening over two sockets in generate_bootnodes_enr
- add at least one smoke flag for ipv6 (I have tested this and it works for me)
- update the book
2023-03-14 01:13:34 +00:00
Michael Sproul
06af31a66a Correct /lighthouse/nat implementation (#4069)
## Proposed Changes

The current `/lighthouse/nat` implementation checks for _zero_ address updated messages, when it should check for a _non-zero_ number. This was spotted while debugging an issue on Discord where a user's ports weren't forwarded but `/lighthouse/nat` was still returning `true`.
2023-03-13 04:08:15 +00:00
Sebastian Richel
373beaf913 Added warning when new jwt is generated (#4000)
## Issue Addressed
#3435

## Proposed Changes
Fire a warning with the path of the JWT to be created when the path given by `--execution-jwt` is not found.
Currently, the same error is logged whether the jwt is found but doesn't match the execution client's jwt, or no jwt was found at the given path. This makes it very hard to tell if you accidentally typed the wrong path, as a new jwt that won't match the execution client's jwt is silently created. So instead, it will now fire a warning stating that a jwt is being generated at the given path.

## Additional Info
In the future, it may be smarter to handle this case by adding an InvalidJWTPath member to the Error enum in lib.rs or auth.rs
that can be handled during upcheck()

This is my first PR and first project with Rust, so thanks to anyone who looks at this for their patience and help!

Co-authored-by: Sebastian Richel <47844429+sebastianrich18@users.noreply.github.com>
2023-03-13 04:08:14 +00:00
Michael Sproul
90cef1db86 Appease Clippy 1.68 and refactor http_api (#4068)
## Proposed Changes

Two tiny updates to satisfy Clippy 1.68

Plus refactoring of the `http_api` into less complex types so the compiler can chew and digest them more easily.

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-03-13 01:40:03 +00:00
Alex Wied
4a1c0c96be Fix order of arguments to log_count (#4060)
See: https://github.com/sigp/lighthouse/pull/4027

## Proposed Changes

The order of the arguments to `log_count` is swapped in `beacon_node/beacon_chain/src/events.rs`.
2023-03-13 01:40:01 +00:00
Paul Hauner
319cc61afe Release v3.5.1 (#4049)
## Issue Addressed

NA

## Proposed Changes

Bumps versions to v3.5.1.

## Additional Info

- [x] Requires further testing
2023-03-07 07:57:39 +00:00
Daniel Ramirez Chiquillo
4c109115ca Add a flag to always use payloads from builders (#4052)
## Issue Addressed

#4040 

## Proposed Changes

- Add the `always_prefer_builder_payload`  field to `Config` in `beacon_node/client/src/config.rs`.
- Add that same field to `Inner` in `beacon_node/execution_layer/src/lib.rs`
- Modify the logic for picking the payload in `beacon_node/execution_layer/src/lib.rs`
- Add the `always-prefer-builder-payload` flag to the beacon node CLI
- Test the new flags in `lighthouse/tests/beacon_node.rs`

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-07 05:37:28 +00:00
Michael Sproul
01556f6f01 Optimise payload attributes calculation and add SSE (#4027)
## Issue Addressed

Closes #3896
Closes #3998
Closes #3700

## Proposed Changes

- Optimise the calculation of withdrawals for payload attributes by avoiding state clones, avoiding unnecessary state advances and reading from the snapshot cache if possible.
- Use the execution layer's payload attributes cache to avoid re-calculating payload attributes. I actually implemented a new LRU cache just for withdrawals but it had the exact same key and most of the same data as the existing payload attributes cache, so I deleted it.
- Add a new SSE event that fires when payloadAttributes are calculated. This is useful for block builders, a la https://github.com/ethereum/beacon-APIs/issues/244.
- Add a new CLI flag `--always-prepare-payload` which forces payload attributes to be sent with every fcU regardless of connected proposers. This is intended for use by builders/relays.

For maximum effect, the flags I've been using to run Lighthouse in "payload builder mode" are:

```
--always-prepare-payload \
--prepare-payload-lookahead 12000 \
--suggested-fee-recipient 0x0000000000000000000000000000000000000000
```

The fee recipient is required so Lighthouse has something to pack in the payload attributes (it can be ignored by the builder). The lookahead causes fcU to be sent at the start of every slot rather than at 8s. As usual, fcU will also be sent after each change of head block. I think this combination is sufficient for builders to build on all viable heads. Often there will be two fcU (and two payload attributes) sent for the same slot: one sent at the start of the slot with the head from `n - 1` as the parent, and one sent after the block arrives with `n` as the parent.

Example usage of the new event stream:

```bash
curl -N "http://localhost:5052/eth/v1/events?topics=payload_attributes"
```

## Additional Info

- [x] Tests added by updating the proposer re-org tests. This has the benefit of testing the proposer re-org code paths with withdrawals too, confirming that the new changes don't interact poorly.
- [ ] Benchmarking with `blockdreamer` on devnet-7 showed promising results but I'm yet to do a comparison to `unstable`.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-03-05 23:43:30 +00:00
Paul Hauner
cac3a66be4 Permit a null LVH from an INVALID response to newPayload (#4037)
## Issue Addressed

NA

## Proposed Changes

As discovered in #4034, Lighthouse is not accepting `latest_valid_hash == None` in an `INVALID` response to `newPayload`. The `null`/`None` response *was* illegal at one point, however it was added in https://github.com/ethereum/execution-apis/pull/254.

This PR brings Lighthouse in line with the standard and should fix the root cause of what #4034 patched around.

## Additional Info

NA
2023-03-03 04:12:50 +00:00
Pawan Dhananjay
5b18fd92cb Cleaner logic for gossip subscriptions for new forks (#4030)
## Issue Addressed

Cleaner resolution for #4006 

## Proposed Changes

We are currently subscribing to core topics of new forks way before the actual fork since we had just a single `CORE_TOPICS` array. This PR separates the core topics for every fork and subscribes to only required topics based on the current fork.
Also adds logic for subscribing to the core topics of a new fork only 2 slots before the fork happens.

2 slots is to give enough time for the gossip meshes to form. 

Currently this doesn't add logic to remove topics from older forks in new forks. For example, in the coupled 4844 world, we had to remove the `BeaconBlock` topic in favour of `BeaconBlocksAndBlobsSidecar` at the 4844 fork. It should be easy enough to add though. Not adding it because I'm assuming that #4019 will get merged before this PR and we won't require any deletion logic. Happy to add it regardless though.
2023-03-01 09:22:48 +00:00
Divma
047c7544e3 Clean capella (#4019)
## Issue Addressed

Cleans up all the remnants of 4844 in capella. This makes sure that when 4844 is reviewed there is nothing we are missing because it got included here.

## Proposed Changes

drop a bomb on every 4844 thing 

## Additional Info

Merge process I did (locally) is as follows:
- squash merge to produce one commit
- in new branch off unstable with the squashed commit create a `git revert HEAD` commit
- merge that new branch onto 4844 with `--strategy ours`
- compare local 4844 to remote 4844 and make sure the diff is empty
- enjoy

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-01 03:19:02 +00:00
Paul Hauner
17d9a620cf Add more logs in the BN HTTP API during block production (#4025)
## Issue Addressed

NA

## Proposed Changes

Adds two new `DEBG` logs to the HTTP API:

1. As soon as we are requested to produce a block.
2. As soon as a signed block is received.

In #3858 we added some very helpful logs to the VC so we could see when things are happening with block proposals in the VC. After doing some more debugging, I found that I can tell when the VC is sending a block but I *can't* tell the time that the BN receives it (I can only get the time after the BN has started doing some work with the block). Knowing when the VC publishes and when the BN first receives the block is useful for determining the delays introduced by network latency (and some other things like JSON decoding, etc).

## Additional Info

NA
2023-02-28 02:20:53 +00:00
Alan Höng
cc4fc422b2 Add content-type header to metrics server response (#3970)
This fixes issues with certain metrics scrapers, which might error if the content-type is not correctly set.

## Issue Addressed

Fixes https://github.com/sigp/lighthouse/issues/3437

## Proposed Changes

Simply set header: `Content-Type: text/plain` on metrics server response. Seems like the errored branch does this correctly already.

## Additional Info

This is also needed to enable influx-db metric scraping, which works very nicely with Geth.
2023-02-28 02:20:50 +00:00
Michael Sproul
47b22d5256 Allow compilation with no slasher backend (#3888)
## Proposed Changes

Allow compiling without MDBX by running:

```bash
CARGO_INSTALL_EXTRA_FLAGS="--no-default-features" make
```

The reasons to do this are several:

- Save compilation time if the slasher won't be used
- Work around compilation errors in slasher backend dependencies (our pinned version of MDBX is currently not compiling on FreeBSD with certain compiler versions).

## Additional Info

When I opened this PR we were using resolver v1 which [doesn't disable default features in dependencies](https://doc.rust-lang.org/cargo/reference/features.html#resolver-version-2-command-line-flags), and `mdbx` is default for the `slasher` crate. Even after the resolver got changed to v2 in #3697 compiling with `--no-default-features` _still_ wasn't turning off the slasher crate's default features, so I added `default-features = false` in all the places we depend on it.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-28 02:20:49 +00:00
Paul Hauner
0fb58a680d v3.5.0 (#3996)
## Issue Addressed

NA

## Proposed Changes

- Bump versions

## Sepolia Capella Upgrade

This release will enable the Capella fork on Sepolia. We are planning to publish this release on the 23rd of Feb 2023.

Users who can build from source and wish to do pre-release testing can use this branch.

## Additional Info

- [ ] Requires further testing
2023-02-22 06:00:49 +00:00
Age Manning
5c63d8758e Register disconnected peers when temporarily banned (#4001)
This is a correction to #3757. 

The correction registers a peer that is being disconnected in the local peer manager db to ensure we are tracking the correct state.
2023-02-21 23:45:44 +00:00
Michael Sproul
fa8b920dd8
Merge branch 'capella' into unstable 2023-02-22 10:25:45 +11:00
Paul Hauner
9c81be8ac4
Fix metric (#4020) 2023-02-22 09:46:45 +11:00
Pawan Dhananjay
bb5285ac6d
Remove BeaconBlockAndBlobsSidecar from core topics (#4016) 2023-02-22 09:45:38 +11:00
Mac L
3642efe76a Cache validator balances and allow them to be served over the HTTP API (#3863)
## Issue Addressed

#3804

## Proposed Changes

- Add `total_balance` to the validator monitor and adjust the number of historical epochs which are cached. 
- Allow certain values in the cache to be served out via the HTTP API without requiring a state read.

## Usage
```
curl -X POST "http://localhost:5052/lighthouse/ui/validator_info" -d '{"indices": [0]}' -H "Content-Type: application/json" | jq
```

```
{
  "data": {
    "validators": {
      "0": {
        "info": [
          {
            "epoch": 172981,
            "total_balance": 36566388519
          },
         ...
          {
            "epoch": 172990,
            "total_balance": 36566496513
          }
        ]
      },
      "1": {
        "info": [
          {
            "epoch": 172981,
            "total_balance": 36355797968
          },
          ...
          {
            "epoch": 172990,
            "total_balance": 36355905962
          }
        ]
      }
    }
  }
}
```

## Additional Info

This requires no historical states to operate, which means it will still function on a freshly checkpoint-synced node; however, because of this, the values will populate each epoch (up to a maximum of 10 entries).

Another benefit of this method is that we can easily cache any other values which would normally require a state read and serve them via the same endpoint. However, we would need to be cautious about not overly increasing block processing time by caching values from complex computations.

This also caches some of the validator metrics directly, rather than pulling them from the Prometheus metrics when the API is called. This means when the validator count exceeds the individual monitor threshold, the cached values will still be available.

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-02-21 20:54:55 +00:00
Paul Hauner
eed7d65ce7
Allow for withdrawals in max block size (#4011)
* Allow for withdrawals in max block size

* Ensure payload size is counted
2023-02-21 18:03:10 +11:00
Paul Hauner
d53d43844c
Suggestions for Capella beacon_chain (#3999)
* Remove CapellaReadiness::NotSynced

Some EEs have a habit of flipping between synced/not-synced, which causes some
spurious "Not ready for the merge" messages back before the merge. For the
merge, if the EE wasn't synced the CE simply wouldn't go through the transition
(due to optimistic sync stuff). However, we don't have that hard requirement
for Capella; the CE will go through the fork and just wait for the EE to catch
up. I think that removing `NotSynced` here will avoid false-positives on the
"Not ready" logs. We'll be creating other WARN/ERRO logs if the EE isn't
synced, anyway.

* Change some Capella readiness logging

There's two changes here:

1. Shorten the log messages, for readability.
2. Change the hints.

Connecting a Capella-ready LH to a non-Capella-ready EE gives this log:

```
WARN Not ready for Capella                   info: The execution endpoint does not appear to support the required engine api methods for Capella: Required Methods Unsupported: engine_getPayloadV2 engine_forkchoiceUpdatedV2 engine_newPayloadV2, service: slot_notifier
```

This variant of error doesn't get a "try updating" style hint when it's the
one that needs it. This is because we detect the method-not-found response from
the EE and return default capabilities, rather than indicating that the request
fails. I think it's fair to say that an EE upgrade is required whenever it
doesn't provide the required methods.

I changed the `ExchangeCapabilitiesFailed` message since that can only happen
when the EE fails to respond with anything other than success or not-found.
2023-02-21 11:05:36 +11:00
Michael Sproul
0b6850221e
Fix Capella schema downgrades (#4004) 2023-02-20 17:50:42 +11:00
Michael Sproul
066c27750a
Merge remote-tracking branch 'origin/staging' into capella-update 2023-02-17 12:05:36 +11:00
Paul Hauner
4aa8a2ab12
Suggestions for Capella execution_layer (#3983)
* Restrict Engine::request to FnOnce

* Use `Into::into`

* Impl IntoIterator for VariableList

* Use Instant rather than SystemTime
2023-02-17 11:58:33 +11:00
Divma
ffeb8b6e05 blacklist tests in windows (#3961)
## Issue Addressed
Windows tests for subscriptions and unsubscriptions fail sporadically in CI. We usually ignore these failures, so this PR aims to help reduce the failure noise. Associated issue is https://github.com/sigp/lighthouse/issues/3960
2023-02-16 23:34:30 +00:00
Michael Sproul
461bda6e85
Execution engine suggestions from code review
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-02-16 16:54:05 +11:00
Michael Sproul
918b688f72
Simplify payload traits and reduce cloning (#3976)
* Simplify payload traits and reduce cloning

* Fix self limiter
2023-02-15 14:17:56 +11:00
Age Manning
8dd9249177 Enforce a timeout on peer disconnect (#3757)
On heavily crowded networks, we are seeing many attempted connections to our node every second. 

Often these connections come from peers that have just been disconnected. This can be for a number of reasons including: 
- We have deemed them to be not as useful as other peers
- They have performed poorly
- They have dropped the connection with us
- The connection was spontaneously lost
- They were randomly removed because we have too many peers

In all of these cases, if we have reached or exceeded our target peer limit, there is no desire to accept new connections immediately after the disconnect from these peers. In fact, it often costs us resources to handle the established connections and defeats some of the logic of dropping them in the first place. 

This PR adds a timeout, that prevents recently disconnected peers from reconnecting to us.

Technically we implement a ban at the swarm layer to prevent immediate re-connections for at least 10 minutes. I decided to keep this light and use a time-based LRUCache which only gets updated during the peer manager heartbeat, to prevent the added stress of polling a delay map for what could be a large number of peers.

This cache is bounded in time. An extra space bound could be added should people consider this a risk.
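
A simplified sketch of the time-bounded cache idea (not the actual peer manager types; pruning is shown in a heartbeat-style method, as described above):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Remember recently disconnected peers and reject re-connections until a
/// timeout has elapsed.
struct RecentlyDisconnected {
    timeout: Duration,
    peers: HashMap<String, Instant>,
}

impl RecentlyDisconnected {
    fn new(timeout: Duration) -> Self {
        Self { timeout, peers: HashMap::new() }
    }

    /// Record a disconnect.
    fn on_disconnect(&mut self, peer_id: String) {
        self.peers.insert(peer_id, Instant::now());
    }

    /// Called on an inbound connection attempt.
    fn should_reject(&self, peer_id: &str) -> bool {
        self.peers.contains_key(peer_id)
    }

    /// Expired entries are only pruned here, on the peer manager heartbeat,
    /// rather than by polling a delay map per peer.
    fn heartbeat(&mut self) {
        let timeout = self.timeout;
        self.peers.retain(|_, since| since.elapsed() < timeout);
    }
}

fn main() {
    let mut cache = RecentlyDisconnected::new(Duration::from_secs(600));
    cache.on_disconnect("peer-a".into());
    assert!(cache.should_reject("peer-a"));
}
```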

Co-authored-by: Diva M <divma@protonmail.com>
2023-02-14 03:25:42 +00:00
Michael Sproul
f7bd4bf06e
Update block rewards API for Capella 2023-02-14 12:09:40 +11:00
Michael Sproul
d53ccf8fc7
Placeholder for BlobsByRange outbound rate limit 2023-02-14 12:08:14 +11:00
Michael Sproul
18c8cab4da
Merge remote-tracking branch 'origin/unstable' into capella-merge 2023-02-14 12:07:27 +11:00
Michael Sproul
2f456ff9eb Fix regression in DB write atomicity (#3931)
## Issue Addressed

Fix a bug introduced by #3696. The bug is not expected to occur frequently, so releasing this PR is non-urgent.

## Proposed Changes

* Add a variant to `StoreOp` that allows a raw KV operation to be passed around.
* Return to using `self.store.do_atomically` rather than `self.store.hot_db.do_atomically`. This streamlines the write back into a single call and makes our auto-revert work again.
* Prevent `import_block_update_shuffling_cache` from failing block import. This is an outstanding bug from before v3.4.0 which may have contributed to some random unexplained database corruption.
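
A rough sketch of the shape of the first change (simplified types; the real `StoreOp` carries blocks, states, etc.):

```rust
/// Low-level database operation.
enum KeyValueStoreOp {
    PutKeyValue(Vec<u8>, Vec<u8>),
    DeleteKey(Vec<u8>),
}

/// High-level store operation. The new variant lets a raw KV op ride along so
/// the whole batch can be converted and committed in one atomic call.
enum StoreOp {
    DeleteBlock([u8; 32]),
    DeleteState([u8; 32]),
    KeyValueOp(KeyValueStoreOp),
}

fn convert(op: StoreOp) -> KeyValueStoreOp {
    match op {
        StoreOp::DeleteBlock(root) => KeyValueStoreOp::DeleteKey(root.to_vec()),
        StoreOp::DeleteState(root) => KeyValueStoreOp::DeleteKey(root.to_vec()),
        // Raw ops pass through untouched.
        StoreOp::KeyValueOp(kv_op) => kv_op,
    }
}

fn main() {
    let _ = convert(StoreOp::DeleteBlock([0; 32]));
}
```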

## Additional Info

In #3696 I split the database write into two calls, one to convert the `StoreOp`s to `KeyValueStoreOp`s and one to write them. This had the unfortunate side-effect of damaging our atomicity guarantees in case of a write error. If the first call failed, we would be left with the block in fork choice but not on-disk (or the snapshot cache), which would prevent us from processing any descendant blocks. On `unstable` the first call is very unlikely to fail unless the disk is full, but on `tree-states` the conversion is more involved and a user reported database corruption after it failed in a way that should have been recoverable.

Additionally, as @emhane observed, #3696 also inadvertently removed the import of the new block into the block cache. Although this seems like it could have negatively impacted performance, there are several mitigating factors:

- For regular block processing we should almost always load the parent block (and state) from the snapshot cache.
- We often load blinded blocks, which bypass the block cache anyway.
- Metrics show no noticeable increase in the block cache miss rate with v3.4.0.

However, I expect the block cache _will_ be useful again in `tree-states`, so it is restored to use by this PR.
2023-02-13 03:32:01 +00:00
Paul Hauner
84843d67d7 Reduce some EE and builder related ERRO logs to WARN (#3966)
## Issue Addressed

NA

## Proposed Changes

Our `ERRO` stream has been rather noisy since the merge due to some unexpected behaviours of builders and EEs. Now that we've been running post-merge for a while, I think we can drop some of these `ERRO` to `WARN` so we're not "crying wolf".

The modified logs are:

#### `ERRO Execution engine call failed`

I'm seeing this quite frequently on Geth nodes. They seem to timeout when they're busy and it rarely indicates a serious issue. We also have logging across block import, fork choice updating and payload production that raise `ERRO` or `CRIT` when the EE times out, so I think we're not at risk of silencing actual issues.

#### `ERRO "Builder failed to reveal payload"`

In #3775 we reduced this log from `CRIT` to `ERRO` since it's common for builders to fail to reveal the block to the producer directly whilst still broadcasting it to the network. I think it's worth dropping this to `WARN` since it's rarely interesting.

I elected to stay with `WARN` since I really do wish builders would fulfill their API promises by returning the block to us. Perhaps I'm just being pedantic here, I could be convinced otherwise.

#### `ERRO "Relay error when registering validator(s)"`

It seems like builders and/or mev-boost struggle to handle heavy loads of validator registrations. I haven't observed issues with validators not actually being registered, but I see timeouts on these endpoints many times a day. It doesn't seem like this `ERRO` is worth it.

#### `ERRO Error fetching block for peer     ExecutionLayerErrorPayloadReconstruction`

This means we failed to respond to a peer on the P2P network with a block they requested because of an error in the `execution_layer`. It's very common to see timeouts or incomplete responses on this endpoint whilst the EE is busy and I don't think it's important enough for an `ERRO`. As long as the peer count stays high, I don't think the user needs to be actively concerned about how we're responding to peers.

## Additional Info

NA
2023-02-12 23:14:08 +00:00
ethDreamer
e743d75c9b
Update Mock Builder for Post-Capella Tests (#3958)
* Update Mock Builder for Post-Capella Tests

* Add _mut Suffix to BidStuff Functions

* Fix Setting Gas Limit
2023-02-10 13:30:14 -06:00
ethDreamer
39f8327f73
Properly Deserialize ForkVersionedResponses (#3944)
* Move ForkVersionedResponse to consensus/types

* Properly Deserialize ForkVersionedResponses

* Elide Types in from_value Calls

* Added Tests for ForkVersionedResponse Deserialize

* Address Sean's Comments & Make Less Restrictive

* Utilize `map_fork_name!`
2023-02-10 08:49:25 -06:00
Michael Sproul
c9354a9d25 Tweaks to reward APIs (#3957)
## Proposed Changes

* Return the effective balance in gwei to align with the spec ([ideal attestation rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards/getAttestationsRewards)).
* Use quoted `i64`s for attestation and sync committee rewards.
2023-02-10 06:19:42 +00:00
Paul Hauner
5276dd0cb0 Fix edge-case when finding the finalized descendant (#3924)
## Issue Addressed

NA

## Description

We were missing an edge case when checking to see if a block is a descendant of the finalized checkpoint. This edge case is described for one of the tests in this PR:

a119edc739/consensus/proto_array/src/proto_array_fork_choice.rs (L1018-L1047)

This bug presented itself in the following mainnet log:

```
Jan 26 15:12:42.841 ERRO Unable to validate attestation error: MissingBeaconState(0x7c30cb80ec3d4ec624133abfa70e4c6cfecfca456bfbbbff3393e14e5b20bf25), peer_id: 16Uiu2HAm8RPRciXJYtYc5c3qtCRdrZwkHn2BXN3XP1nSi1gxHYit, type: "unaggregated", slot: Slot(5660161), beacon_block_root: 0x4a45e59da7cb9487f4836c83bdd1b741b4f31c67010c7ae343fa6771b3330489
```

Here the BN is rejecting an attestation because of a "missing beacon state". Whilst it was correct to reject the attestation, it should have rejected it because it attests to a block that conflicts with finality rather than claiming that the database is inconsistent.

The block that this attestation points to (`0x4a45`) is block `C` in the above diagram. It is a non-canonical block in the first slot of an epoch that conflicts with the finalized checkpoint. Due to our lazy pruning of proto array, `0x4a45` was still present in proto-array. Our missed edge-case in [`ForkChoice::is_descendant_of_finalized`](38514c07f2/consensus/fork_choice/src/fork_choice.rs (L1375-L1379)) would have indicated to us that the block is a descendant of the finalized block. Therefore, we would have accepted the attestation thinking that it attests to a descendant of the finalized *checkpoint*.

Since we didn't have the shuffling for this erroneously processed block, we attempted to read its state from the database. This failed because we prune states from the database by keeping track of the tips of the chain and iterating back until we find a finalized block. This would have deleted `C` from the database, hence the `MissingBeaconState` error.
2023-02-09 23:51:18 +00:00
Divma
ceb986549d Self rate limiting dev flag (#3928)
## Issue Addressed
Adds self rate limiting options, mainly with the idea of complying with peers' rate limits in small testnets

## Proposed Changes
Add a hidden flag `self-limiter`. This can take no value, or custom values to configure quotas per protocol

## Additional Info
### How to use
`--self-limiter` will turn on the self rate limiter applying the same params we apply to inbound requests (requests from other peers)
`--self-limiter "beacon_blocks_by_range:64/1"` will turn on the self rate limiter for ALL protocols, but change the quota for bbrange to 64 requested blocks per 1 second.
`--self-limiter "beacon_blocks_by_range:64/1;ping:1/10"` same as previous one, changing the quota for ping as well.

### Caveats
- The rate limiter is either on or off for all protocols. I added the custom values to be able to change the quotas per protocol so that some protocols can be given extremely loose or tight quotas. I think this should satisfy every need even if we can't technically turn off rate limits per protocol.
- This reuses the rate limiter struct for the inbound requests, so there is this ugly part of the code in which we need to deal with the inbound-only protocols (light client stuff). If this becomes too ugly as we add lc protocols, we might want to split the rate limiters. I've checked this and it looks doable with const generics to avoid so much code duplication.

### Knowing if this is on
```
Feb 06 21:12:05.493 DEBG Using self rate limiting params         config: OutboundRateLimiterConfig { ping: 2/10s, metadata: 1/15s, status: 5/15s, goodbye: 1/10s, blocks_by_range: 1024/10s, blocks_by_root: 128/10s }, service: libp2p_rpc, service: libp2p
```
2023-02-08 02:18:53 +00:00
naviechan
9547ac069c Implement block_rewards API (per-validator reward) (#3907)
## Issue Addressed

[#3661](https://github.com/sigp/lighthouse/issues/3661)

## Proposed Changes

`/eth/v1/beacon/rewards/blocks/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": {
    "proposer_index": "123",
    "total": "123",
    "attestations": "123",
    "sync_aggregate": "123",
    "proposer_slashings": "123",
    "attester_slashings": "123"
  }
}
```
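
As a rough illustration of consuming this endpoint, the response could be deserialized along these lines (the struct and field names are hypothetical, not Lighthouse types; note that the beacon API encodes integers as decimal strings):

```rust
use serde::Deserialize;

// Hypothetical client-side types for the response above; quoted numbers are
// kept as strings, matching the beacon API's decimal-string encoding.
#[derive(Debug, Deserialize)]
struct BlockRewards {
    proposer_index: String,
    total: String,
    attestations: String,
    sync_aggregate: String,
    proposer_slashings: String,
    attester_slashings: String,
}

#[derive(Debug, Deserialize)]
struct BlockRewardsResponse {
    execution_optimistic: bool,
    finalized: bool,
    data: BlockRewards,
}

fn main() -> Result<(), serde_json::Error> {
    let json = r#"{
        "execution_optimistic": false,
        "finalized": false,
        "data": {
            "proposer_index": "123", "total": "123", "attestations": "123",
            "sync_aggregate": "123", "proposer_slashings": "123",
            "attester_slashings": "123"
        }
    }"#;
    let resp: BlockRewardsResponse = serde_json::from_str(json)?;
    println!("proposer {}: total reward {}", resp.data.proposer_index, resp.data.total);
    Ok(())
}
```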

The issue contains the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements the `block_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)

## Additional Info

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: kevinbogner <kevbogner@gmail.com>
Co-authored-by: navie <naviechan@gmail.com>
2023-02-07 08:33:23 +00:00
Paul Hauner
e062a7cf76
Broadcast address changes at Capella (#3919)
* Add first efforts at broadcast

* Tidy

* Move broadcast code to client

* Progress with broadcast impl

* Rename to address change

* Fix compile errors

* Use `while` loop

* Tidy

* Flip broadcast condition

* Switch to forgetting individual indices

* Always broadcast when the node starts

* Refactor into two functions

* Add testing

* Add another test

* Tidy, add more testing

* Tidy

* Add test, rename enum

* Rename enum again

* Tidy

* Break loop early

* Add V15 schema migration

* Bump schema version

* Progress with migration

* Update beacon_node/client/src/address_change_broadcast.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Fix typo in function name

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-02-07 17:13:49 +11:00
kevinbogner
4d07e40501 Implement attestation_rewards API (per-validator reward) (#3822)
## Issue Addressed

#3661 

## Proposed Changes
`/eth/v1/beacon/rewards/attestations/{epoch}`

```json
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "ideal_rewards": [
        {
          "effective_balance": "1000000000",
          "head": "2500",
          "target": "5000",
          "source": "5000"
        }
      ],
      "total_rewards": [
        {
          "validator_index": "0",
          "head": "2000",
          "target": "2000",
          "source": "4000",
          "inclusion_delay": "2000"
        }
      ]
    }
  ]
}
```

The issue contains the implementation of three per-validator reward APIs:
- [`sync_committee_rewards`](https://github.com/sigp/lighthouse/pull/3790)
- `attestation_rewards`
- `block_rewards`.

This PR *only* implements the `attestation_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards

## Additional Info
The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

---
- [x] `get_state`
- [x] Calculate *ideal rewards* with some logic from  `get_flag_index_deltas`
- [x] Calculate *actual rewards*  with some logic from `get_flag_index_deltas`
- [x] Code cleanup
- [x] Testing
2023-02-07 00:00:19 +00:00
ethDreamer
5b398b1990
Don't Reject all Builder Bids After Capella (#3940)
* Fix bug in Builder API Post-Capella

* Fix Clippy Complaints
2023-02-06 13:09:13 +11:00
ethDreamer
90b6ae62e6
Use Local Payload if More Profitable than Builder (#3934)
* Use Local Payload if More Profitable than Builder

* Rename clone -> clone_from_ref

* Minimize Clones of GetPayloadResponse

* Cleanup & Fix Tests

* Added Tests for Payload Choice by Profit

* Fix Outdated Comments
2023-02-01 19:37:46 -06:00
Mark Mackey
d842215a44
Merge branch 'upstream/unstable' into capella 2023-01-31 12:16:26 -06:00
ethDreamer
7b7595347d
exchangeCapabilities & Capella Readiness Logging (#3918)
* Undo Passing Spec to Engine API

* Utilize engine_exchangeCapabilities

* Add Logging to Indicate Capella Readiness

* Add exchangeCapabilities to mock_execution_layer

* Send Nested Array for engine_exchangeCapabilities

* Use Mutex Instead of RwLock for EngineCapabilities

* Improve Locking to Avoid Deadlock

* Prettier logic for get_engine_capabilities

* Improve Comments

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/capella_readiness.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/client/src/notifier.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/execution_layer/src/engine_api/http.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Addressed Michael's Comments

---------

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-01-31 18:26:23 +01:00
Michael Sproul
0866b739d0 Clippy 1.67 (#3916)
## Proposed Changes

Clippy 1.67.0 put us on blast for the size of some of our errors, most of them written by me ( 👀 ). This PR shrinks the size of `BeaconChainError` by dropping some extraneous info and boxing an inner error which should only occur infrequently anyway.
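
For illustration, the boxing trick looks roughly like this (a generic sketch, not the actual `BeaconChainError` definition):

```rust
// Generic sketch: boxing a rarely-constructed, heavy variant keeps the whole
// error enum (and any Result wrapping it) small.
#[allow(dead_code)]
struct BigStateInfo {
    roots: [u8; 256],
}

#[allow(dead_code)]
enum ChainError {
    // Before: `StateMismatch(BigStateInfo)` would force every `ChainError`
    // (and every `Result<_, ChainError>`) to reserve 256+ bytes inline.
    // After: only a pointer is stored inline; the payload lives on the heap.
    StateMismatch(Box<BigStateInfo>),
    Timeout,
}

fn main() {
    println!("BigStateInfo: {} bytes", std::mem::size_of::<BigStateInfo>());
    println!("ChainError:   {} bytes", std::mem::size_of::<ChainError>());
}
```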

For the `AttestationSlashInfo` and `BlockSlashInfo` I opted to ignore the lint as they are always used in a `Result<A, Info>` where `A` is a similar size. This means they don't bloat the size of the `Result`, so it's a bit annoying for Clippy to report this as an issue.

I also chose to ignore `clippy::uninlined-format-args` because I think the benefit-to-churn ratio is too low. E.g. sometimes we have long identifiers in `format!` args and IMO the non-inlined form is easier to read:

```rust
// I prefer this...
format!(
    "{} did {} to {}",
    REALLY_LONG_CONSTANT_NAME,
    ANOTHER_REALLY_LONG_CONSTANT_NAME,
    regular_long_identifier_name
);
  
// To this
format!("{REALLY_LONG_CONSTANT_NAME} did {ANOTHER_REALLY_LONG_CONSTANT_NAME} to {regular_long_identifier_name}");
```

I tried generating an automatic diff with `cargo clippy --fix` but it came out at:

```
250 files changed, 1209 insertions(+), 1469 deletions(-)
```

Which seems like a bad idea when we'd have to back-merge it to `capella` and `eip4844` 😱
2023-01-27 09:48:42 +00:00
Michael Sproul
16bdb2771b
Update another test broken by the shuffling change 2023-01-25 16:18:00 +11:00
Michael Sproul
e48487db01
Fix the new BLS to execution change test 2023-01-25 15:47:07 +11:00
Michael Sproul
79a20e8a5f
Update sync rewards API for abstract exec payload 2023-01-25 15:46:47 +11:00
Michael Sproul
c76a1971cc
Merge remote-tracking branch 'origin/unstable' into capella 2023-01-25 14:20:16 +11:00
GeemoCandama
a7351c00c0 light client optimistic update reprocessing (#3799)
## Issue Addressed
Currently there is a race between receiving blocks and receiving light client optimistic updates (in unstable), which results in processing errors. This is a continuation of PR #3693 and seeks to progress on issue #3651

## Proposed Changes

Add the `parent_root` to `ReprocessQueueMessage::BlockImported` so we can remove queued optimistic updates when a block arrives that has the same parent root. We use the parent root, as opposed to the block root, because `LightClientOptimisticUpdate` does not contain the block root.

If `light_client_optimistic_update.attested_header.canonical_root() != head_block.message().parent_root()` then we queue the update; otherwise we process it immediately.
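
In outline, the decision is something like the following sketch (simplified stand-in types, not the actual implementation):

```rust
// Simplified sketch of the queue-or-process decision for a light client
// optimistic update. `Hash256` stands in for the real root type.
type Hash256 = [u8; 32];

enum Action {
    /// The attested header does not match the current head's parent, so the
    /// relevant block may still be in flight: park the update for reprocessing.
    Queue,
    /// The head block already matches: verify and gossip the update now.
    ProcessImmediately,
}

fn handle_optimistic_update(attested_header_root: Hash256, head_parent_root: Hash256) -> Action {
    if attested_header_root != head_parent_root {
        Action::Queue
    } else {
        Action::ProcessImmediately
    }
}
```
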
## Additional Info
michaelsproul came up with this idea.
The code is heavily based on the attestation reprocessing logic.
I have not properly tested this to see if it works as intended.
2023-01-24 22:17:50 +00:00
ethDreamer
3d4dd6af75
Use eth1_withdrawal_credentials in Test States (#3898)
* Use eth1_withdrawal_credential in Some Test States

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/genesis/src/interop.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Increase validator sizes

* Pick next sync committee message

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-24 16:22:51 +01:00
naviechan
2802bc9a9c Implement sync_committee_rewards API (per-validator reward) (#3903)
## Issue Addressed

[#3661](https://github.com/sigp/lighthouse/issues/3661)

## Proposed Changes

`/eth/v1/beacon/rewards/sync_committee/{block_id}`

```
{
  "execution_optimistic": false,
  "finalized": false,
  "data": [
    {
      "validator_index": "0",
      "reward": "2000"
    }
  ]
}
```
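
As a rough illustration of consuming this endpoint (a hypothetical helper using `serde_json`, not part of Lighthouse):

```rust
use serde_json::Value;

// Hypothetical helper: sum the per-validator rewards (decimal strings, possibly
// negative) from a response shaped like the example above.
fn total_sync_committee_reward(response: &Value) -> i64 {
    response["data"]
        .as_array()
        .map(|entries| {
            entries
                .iter()
                .filter_map(|e| e["reward"].as_str()?.parse::<i64>().ok())
                .sum()
        })
        .unwrap_or(0)
}

fn main() {
    let resp = serde_json::json!({
        "execution_optimistic": false,
        "finalized": false,
        "data": [
            { "validator_index": "0", "reward": "2000" }
        ]
    });
    println!("total sync committee reward: {}", total_sync_committee_reward(&resp));
}
```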

The issue contains the implementation of three per-validator reward APIs:
* `sync_committee_rewards`
* [`attestation_rewards`](https://github.com/sigp/lighthouse/pull/3822)
* `block_rewards`

This PR only implements `sync_committee_rewards`.

The endpoints can be viewed in the Ethereum Beacon nodes API browser: [https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Rewards)

## Additional Info

The implementation of [consensus client reward APIs](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/project-ideas.md#consensus-client-reward-apis) is part of the [EPF](https://github.com/eth-protocol-fellows/cohort-three).

Co-authored-by: navie <naviechan@gmail.com>
Co-authored-by: kevinbogner <kevbogner@gmail.com>
2023-01-24 02:06:42 +00:00
Michael Sproul
d8abf2fc41
Import BLS to execution changes before Capella (#3892)
* Import BLS to execution changes before Capella

* Test for BLS to execution change HTTP API

* Pack BLS to execution changes in LIFO order

* Remove unused var

* Clippy
2023-01-21 10:39:59 +11:00
Michael Sproul
bb0e99c097 Merge remote-tracking branch 'origin/unstable' into capella 2023-01-21 10:37:26 +11:00
Age Manning
f8a3b3b95a Improve block delay metrics (#3894)
We recently ran a large-block experiment on the testnet and plan to do a further experiment on mainnet.

Although the metrics recovered from lighthouse nodes were quite useful, I think we could do with greater resolution in the block delay metrics and get some specific values for each block (currently these can be lost to large exponential histogram buckets). 

This PR increases the resolution of the block delay histogram buckets and also introduces a new metric which records the delay of the most recent block. Depending on the polling resolution of the metrics server we may still miss some blocks, but the new metric always reports a specific value, so we no longer lose exact data to coarse histogram buckets.
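
As a sketch of the pattern using the generic `prometheus` crate (the metric names and bucket values below are made up, not Lighthouse's actual metrics):

```rust
use prometheus::{Gauge, Histogram, HistogramOpts, Registry};

fn main() -> Result<(), prometheus::Error> {
    let registry = Registry::new();

    // Higher-resolution buckets (in seconds) than coarse exponential ones, so
    // small differences in block arrival time are not lumped together.
    let delay_hist = Histogram::with_opts(
        HistogramOpts::new("beacon_block_delay_seconds", "Delay of gossip blocks")
            .buckets(vec![0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0]),
    )?;

    // A gauge recording the delay of the most recent block: whatever the
    // scrape interval, the scraped value is an exact per-block observation.
    let last_delay = Gauge::new("beacon_block_last_delay_seconds", "Delay of latest block")?;

    registry.register(Box::new(delay_hist.clone()))?;
    registry.register(Box::new(last_delay.clone()))?;

    // On each block import:
    let delay_seconds = 1.37;
    delay_hist.observe(delay_seconds);
    last_delay.set(delay_seconds);
    Ok(())
}
```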
2023-01-20 00:46:56 +00:00
ethDreamer
26787412cd
Update engine_api to Latest spec (#3893)
* Update engine_api to Latest spec

* Small Test Fix

* Fix Test Deserialization Issue
2023-01-19 22:42:17 +11:00
Santiago Medina
912ea2a5ca Return HTTP 404 rather than 405 (#3836)
## Issue Addressed

Issue #3112

## Proposed Changes

Add `Filter::recover` to the GET chain to handle rejections specifically as 404 NOT FOUND
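
A minimal sketch of the `warp` pattern involved (a generic example with assumed route names and body, not the Lighthouse HTTP API code):

```rust
use std::convert::Infallible;
use warp::{http::StatusCode, Filter, Rejection, Reply};

// Convert rejections (e.g. unmatched routes) into a 404 JSON body instead of
// letting warp fall through to responses like 405.
async fn handle_rejection(_err: Rejection) -> Result<impl Reply, Infallible> {
    let body = warp::reply::json(&serde_json::json!({
        "code": 404,
        "message": "NOT_FOUND",
        "stacktraces": []
    }));
    Ok(warp::reply::with_status(body, StatusCode::NOT_FOUND))
}

#[tokio::main]
async fn main() {
    let routes = warp::path("eth").map(|| "ok").recover(handle_rejection);
    warp::serve(routes).run(([127, 0, 0, 1], 5052)).await;
}
```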

## Additional Info

Making a request to `http://localhost:5052/not_real` now returns the following:

```
{
    "code": 404,
    "message": "NOT_FOUND",
    "stacktraces": []
}
```


Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-01-16 03:42:09 +00:00