Commit Graph

5418 Commits

Pawan Dhananjay
bb5285ac6d
Remove BeaconBlockAndBlobsSidecar from core topics (#4016) 2023-02-22 09:45:38 +11:00
Michael Sproul
b7d7addd4a Disable debug info on CI (#4018)
## Issue Addressed

Closes #4005

Alternative to #4017

## Proposed Changes

Disable debug info on CI to save RAM and disk space.
2023-02-21 20:54:57 +00:00
Mac L
3642efe76a Cache validator balances and allow them to be served over the HTTP API (#3863)
## Issue Addressed

#3804

## Proposed Changes

- Add `total_balance` to the validator monitor and adjust the number of historical epochs which are cached. 
- Allow certain values in the cache to be served out via the HTTP API without requiring a state read.

## Usage
```
curl -X POST "http://localhost:5052/lighthouse/ui/validator_info" -d '{"indices": [0]}' -H "Content-Type: application/json" | jq
```

```
{
  "data": {
    "validators": {
      "0": {
        "info": [
          {
            "epoch": 172981,
            "total_balance": 36566388519
          },
          ...
          {
            "epoch": 172990,
            "total_balance": 36566496513
          }
        ]
      },
      "1": {
        "info": [
          {
            "epoch": 172981,
            "total_balance": 36355797968
          },
          ...
          {
            "epoch": 172990,
            "total_balance": 36355905962
          }
        ]
      }
    }
  }
}
```

## Additional Info

This requires no historical states to operate, which means it will still function on a freshly checkpoint-synced node. However, because of this, the values will only populate one epoch at a time (up to a maximum of 10 entries).

Another benefit of this method is that we can easily cache any other values which would normally require a state read and serve them via the same endpoint. However, we would need to be cautious about not overly increasing block processing time by caching values from complex computations.

This also caches some of the validator metrics directly, rather than pulling them from the Prometheus metrics when the API is called. This means that when the validator count exceeds the individual monitor threshold, the cached values will still be available.
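
As a rough illustration of how such a cache could be bounded (hypothetical names and types, not the actual Lighthouse implementation):

```rust
use std::collections::BTreeMap;

/// Maximum number of epochs retained, matching the cap mentioned above.
const MAX_CACHED_EPOCHS: usize = 10;

/// Hypothetical per-validator cache of total balances by epoch.
#[derive(Default)]
struct BalanceCache {
    /// Epoch -> total balance in Gwei, kept in epoch order.
    balances: BTreeMap<u64, u64>,
}

impl BalanceCache {
    /// Record a new epoch's balance, evicting the oldest entry
    /// once the cache exceeds its bound.
    fn record(&mut self, epoch: u64, total_balance: u64) {
        self.balances.insert(epoch, total_balance);
        while self.balances.len() > MAX_CACHED_EPOCHS {
            let oldest = *self.balances.keys().next().unwrap();
            self.balances.remove(&oldest);
        }
    }

    /// Serve the cached values without reading a beacon state.
    fn info(&self) -> Vec<(u64, u64)> {
        self.balances.iter().map(|(e, b)| (*e, *b)).collect()
    }
}
```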

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-02-21 20:54:55 +00:00
Paul Hauner
40669da486
Modify some Capella comments (#4015)
* Modify comment to only include 4844

Capella only modifies per epoch processing by adding
`process_historical_summaries_update`, which does not change the realization of
justification or finality.

Whilst 4844 does not currently modify realization, the spec is not yet final
enough to say that it never will.

* Clarify address change verification comment

The verification of the address change doesn't really have anything to do with
the current epoch. I think this was just a copy-paste from a function like
`verify_exit`.
2023-02-21 18:03:42 +11:00
Paul Hauner
1bce7a02c8
Fix post-Bellatrix checkpoint sync (#4014)
* Recognise execution in post-merge blocks

* Remove `.body()`

* Fix typo

* Use `is_default_with_empty_roots`.
2023-02-21 18:03:24 +11:00
Paul Hauner
eed7d65ce7
Allow for withdrawals in max block size (#4011)
* Allow for withdrawals in max block size

* Ensure payload size is counted
2023-02-21 18:03:10 +11:00
Paul Hauner
729c178020
Revert Sepolia genesis change (#4013) 2023-02-21 17:18:28 +11:00
Paul Hauner
b72f273e47
Capella consensus review (#4012)
* Add extra encoding/decoding tests

* Remove TODO

The method LGTM

* Remove `FreeAttestation`

This is an ancient relic, I'm surprised it still existed!

* Add paranoid check for eip4844 code

This is not technically necessary, but I think it's nice to be explicit about
EIP4844 consensus code for the time being.

* Reduce big-O complexity of address change pruning

I'm not sure this is *actually* useful, but it might come in handy if we see a
ton of address changes at the fork boundary. I know the devops team have been
testing with ~100k changes, so maybe this will help in that case.

* Revert "Reduce big-O complexity of address change pruning"

This reverts commit e7d93e6cc7cf1b92dd5a9e1966ce47d4078121eb.
2023-02-21 15:33:27 +11:00
Paul Hauner
d53d43844c
Suggestions for Capella beacon_chain (#3999)
* Remove CapellaReadiness::NotSynced

Some EEs have a habit of flipping between synced/not-synced, which caused some
spurious "Not ready for the merge" messages back before the merge. For the
merge, if the EE wasn't synced the CE simply wouldn't go through the transition
(due to optimistic sync stuff). However, we don't have that hard requirement
for Capella; the CE will go through the fork and just wait for the EE to catch
up. I think that removing `NotSynced` here will avoid false positives in the
"Not ready" logs. We'll be creating other WARN/ERRO logs if the EE isn't
synced, anyway.

* Change some Capella readiness logging

There are two changes here:

1. Shorten the log messages, for readability.
2. Change the hints.

Connecting a Capella-ready LH to a non-Capella-ready EE gives this log:

```
WARN Not ready for Capella                   info: The execution endpoint does not appear to support the required engine api methods for Capella: Required Methods Unsupported: engine_getPayloadV2 engine_forkchoiceUpdatedV2 engine_newPayloadV2, service: slot_notifier
```

This variant of error doesn't get a "try updating" style hint, even though it's
the one that needs it. This is because we detect the method-not-found response
from the EE and return default capabilities, rather than indicating that the
request failed. I think it's fair to say that an EE upgrade is required whenever
it doesn't provide the required methods.

I changed the `ExchangeCapabilitiesFailed` message since it can only occur
when the EE responds with something other than success or not-found.
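
For illustration, detecting the method-not-found response and falling back to default capabilities might look something like this sketch (types and names are hypothetical, not the actual Lighthouse code):

```rust
/// Standard JSON-RPC error code for "method not found".
const METHOD_NOT_FOUND: i64 = -32601;

/// Hypothetical capability flags for the Capella engine methods.
#[derive(Clone, Default)]
struct EngineCapabilities {
    get_payload_v2: bool,
    forkchoice_updated_v2: bool,
    new_payload_v2: bool,
}

enum RpcOutcome {
    Success(EngineCapabilities),
    Error { code: i64, message: String },
}

/// Sketch: treat a method-not-found response as "old EE, no Capella
/// methods" rather than as a failed request.
fn exchange_capabilities(outcome: RpcOutcome) -> Result<EngineCapabilities, String> {
    match outcome {
        RpcOutcome::Success(caps) => Ok(caps),
        // The EE predates the exchange-capabilities method: assume defaults.
        RpcOutcome::Error { code, .. } if code == METHOD_NOT_FOUND => {
            Ok(EngineCapabilities::default())
        }
        // Anything else is a genuine exchange failure.
        RpcOutcome::Error { code, message } => {
            Err(format!("ExchangeCapabilitiesFailed: {code}: {message}"))
        }
    }
}
```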
2023-02-21 11:05:36 +11:00
Paul Hauner
c3c181aa03
Remove "eip4844" network (#4008) 2023-02-21 11:01:22 +11:00
Michael Sproul
0b6850221e
Fix Capella schema downgrades (#4004) 2023-02-20 17:50:42 +11:00
Paul Hauner
9a41f65b89
Add capella fork epoch (#3997) 2023-02-17 16:25:20 +11:00
Michael Sproul
1f419f4653
Merge pull request #3981 from michaelsproul/capella-update
Update `capella` to `unstable`
2023-02-17 12:47:34 +11:00
Michael Sproul
066c27750a
Merge remote-tracking branch 'origin/staging' into capella-update 2023-02-17 12:05:36 +11:00
Paul Hauner
4aa8a2ab12
Suggestions for Capella execution_layer (#3983)
* Restrict Engine::request to FnOnce

* Use `Into::into`

* Impl IntoIterator for VariableList

* Use Instant rather than SystemTime
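
As an aside on the last point, `Instant` is monotonic, so unlike `SystemTime` it can never appear to go backwards when the wall clock is adjusted, which makes it the safer choice for measuring elapsed time. A minimal illustration (not the Lighthouse code itself):

```rust
use std::time::{Duration, Instant};

fn main() {
    // `Instant` is a monotonic clock: unlike `SystemTime`, it can never
    // appear to go backwards, so it's the right tool for measuring
    // durations (e.g. how long an engine API call took).
    let started = Instant::now();
    std::thread::sleep(Duration::from_millis(50));
    let elapsed = started.elapsed();
    assert!(elapsed >= Duration::from_millis(50));
    println!("call took {elapsed:?}");
}
```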
2023-02-17 11:58:33 +11:00
Michael Sproul
ebf2fec5d0 Fix exec integration tests for Geth v1.11.0 (#3982)
## Proposed Changes

* Bump Go from 1.17 to 1.20. The latest Geth release v1.11.0 requires 1.18 minimum.
* Prevent a cache miss during payload building by using the right fee recipient. This prevents Geth v1.11.0 from building a block with 0 transactions. The payload building mechanism was overhauled in the new Geth to improve the payload every 2s, and the tests were failing because we were falling back on a `getPayload` call with no lookahead, due to a `get_payload_id` cache miss caused by the mismatched fee recipient. Alternatively we could hack the tests to send `proposer_preparation_data`, but I think the static fee recipient is simpler for now.
* Add support for optionally enabling Lighthouse logs in the integration tests. Enable using `cargo run --release --features logging/test_logger`. This was very useful for debugging.
2023-02-16 23:34:33 +00:00
Jimmy Chen
245e922c7b Improve testing slot clock to allow manipulation of time in tests (#3974)
## Issue Addressed

I discovered this issue while implementing [this test](https://github.com/jimmygchen/lighthouse/blob/test-example/beacon_node/network/src/beacon_processor/tests.rs#L895), where I tried to manipulate the slot clock with: 

`rig.chain.slot_clock.set_current_time(duration);`

However, the change doesn't get reflected in the `slot_clock` in `ReprocessQueue`. I realised `slot_clock` was cloned a few times in the code, so changing the time in `rig.chain.slot_clock` doesn't have any effect in `ReprocessQueue`.

I've incorporated the suggestions from @paulhauner and @michaelsproul, wrapping `ManualSlotClock.current_time` (an `RwLock<Duration>`) in an `Arc`, and the above test now passes.

Let's see if this breaks any existing tests :)
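
A minimal sketch of the idea (simplified, not the real `ManualSlotClock`): once the inner time lives behind an `Arc`, every clone of the clock shares the same underlying value:

```rust
use std::sync::{Arc, RwLock};
use std::time::Duration;

/// Sketch of a manual clock whose clones share one underlying time.
#[derive(Clone)]
struct ManualClock {
    // The `Arc` is the fix: before, cloning the clock copied the
    // `RwLock<Duration>` itself, so clones diverged.
    current_time: Arc<RwLock<Duration>>,
}

impl ManualClock {
    fn new(start: Duration) -> Self {
        Self { current_time: Arc::new(RwLock::new(start)) }
    }

    fn set_current_time(&self, time: Duration) {
        *self.current_time.write().unwrap() = time;
    }

    fn now(&self) -> Duration {
        *self.current_time.read().unwrap()
    }
}

fn main() {
    let clock = ManualClock::new(Duration::from_secs(0));
    let clone = clock.clone(); // e.g. the clone held by `ReprocessQueue`
    clock.set_current_time(Duration::from_secs(12));
    // Both handles observe the update.
    assert_eq!(clone.now(), Duration::from_secs(12));
}
```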
2023-02-16 23:34:32 +00:00
Divma
ffeb8b6e05 blacklist tests in windows (#3961)
## Issue Addressed
Windows tests for subscriptions and unsubscriptions fail sporadically in CI. We usually ignore these failures, so this PR aims to reduce the failure noise. The associated issue is https://github.com/sigp/lighthouse/issues/3960
2023-02-16 23:34:30 +00:00
Michael Sproul
461bda6e85
Execution engine suggestions from code review
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-02-16 16:54:05 +11:00
realbigsean
55753f8bc8
bump recursion limit 2023-02-15 16:32:50 -05:00
realbigsean
ca8e341649
fix compilation after merge 2023-02-15 14:30:39 -05:00
realbigsean
8320b918ae
merge self limiter 2023-02-15 14:26:18 -05:00
realbigsean
4d0b0f681d
merge self limiter 2023-02-15 14:25:58 -05:00
realbigsean
b805fa6279
merge with upstream 2023-02-15 14:20:12 -05:00
realbigsean
87d1fbeb21
Merge pull request #3905 from emhane/beacon_chain_tests
Debug CI
2023-02-15 11:53:41 -05:00
Emilia Hane
aaf6404d4f
Remove unused generic 2023-02-15 17:45:22 +01:00
Michael Sproul
2fcfdf1a01 Fix docker and deps (#3978)
## Proposed Changes

- Fix this cargo-audit failure for `sqlite3-sys`: https://github.com/sigp/lighthouse/actions/runs/4179008889/jobs/7238473962
- Prevent the Docker builds from running out of RAM on CI by removing `gnosis` and LMDB support from the `-dev` images (see: https://github.com/sigp/lighthouse/pull/3959#issuecomment-1430531155, successful run on my fork: https://github.com/michaelsproul/lighthouse/actions/runs/4179162480/jobs/7239537947).
2023-02-15 11:51:46 +00:00
Emilia Hane
2672cf40bb
Better fix for debug tests 2023-02-15 11:47:56 +01:00
Emilia Hane
9fea440ae6
Fix lint 2023-02-15 09:54:36 +01:00
Michael Sproul
fd379ae2e2
Upgrade sqlite3 2023-02-15 09:44:37 +01:00
realbigsean
44dbccfeae
add v3 to capabilities 2023-02-15 09:23:59 +01:00
Emilia Hane
13efd47238
fixup! Disable use of system time in tests 2023-02-15 09:20:30 +01:00
Michael Sproul
918b688f72
Simplify payload traits and reduce cloning (#3976)
* Simplify payload traits and reduce cloning

* Fix self limiter
2023-02-15 14:17:56 +11:00
Emilia Hane
9e4abc79fb
Comment out tests that use system time 2023-02-14 14:12:50 +01:00
Emilia Hane
73c7ad73b8
Disable use of system time in tests 2023-02-14 13:33:38 +01:00
Emilia Hane
148385eb70
Remove unused error 2023-02-14 12:43:13 +01:00
Emilia Hane
810d875b02
Fix merge conflicts with eip4844 2023-02-14 12:42:41 +01:00
Emilia Hane
8200d37045
Merge pull request #2 from realbigsean/sean-debug-ci
Debug CI Sean's PR feedback
2023-02-14 12:24:15 +01:00
Michael Sproul
10d32ee04c
Quote Capella BeaconState fields (#3967) 2023-02-14 14:41:28 +11:00
Michael Sproul
5fc798dd1b
Merge pull request #3973 from michaelsproul/capella-merge
Update Capella to latest `unstable`
2023-02-14 14:41:14 +11:00
Age Manning
8dd9249177 Enforce a timeout on peer disconnect (#3757)
On heavily crowded networks, we are seeing many attempted connections to our node every second. 

Often these connections come from peers that have just been disconnected. This can be for a number of reasons including: 
- We have deemed them to be not as useful as other peers
- They have performed poorly
- They have dropped the connection with us
- The connection was spontaneously lost
- They were randomly removed because we have too many peers

In all of these cases, if we have reached or exceeded our target peer limit, there is no desire to accept new connections immediately after the disconnect from these peers. In fact, it often costs us resources to handle the established connections and defeats some of the logic of dropping them in the first place. 

This PR adds a timeout that prevents recently disconnected peers from reconnecting to us.

Technically, we implement a ban at the swarm layer to prevent immediate reconnections for at least 10 minutes. I decided to keep this light and use a time-based LRUCache, which only gets updated during the peer manager heartbeat, to avoid the added stress of polling a delay map for what could be a large number of peers.

This cache is bounded in time. An extra space bound could be added should people consider this a risk.
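
A rough sketch of the approach (names and types are hypothetical, not Lighthouse's actual peer manager): a time-bounded cache that is pruned on each heartbeat and consulted before accepting an inbound connection:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// How long a recently disconnected peer is refused reconnection.
const RECONNECT_TIMEOUT: Duration = Duration::from_secs(10 * 60);

/// Hypothetical time-bounded "recently disconnected" cache.
#[derive(Default)]
struct RecentlyDisconnected {
    peers: HashMap<u64, Instant>, // peer id -> time of disconnect
}

impl RecentlyDisconnected {
    /// Record a disconnect.
    fn on_disconnect(&mut self, peer_id: u64) {
        self.peers.insert(peer_id, Instant::now());
    }

    /// Should an inbound connection from this peer be refused?
    fn is_banned(&self, peer_id: u64) -> bool {
        self.peers
            .get(&peer_id)
            .map_or(false, |t| t.elapsed() < RECONNECT_TIMEOUT)
    }

    /// Prune expired entries; called from the peer manager heartbeat
    /// rather than a per-peer timer, to keep overhead low.
    fn heartbeat(&mut self) {
        self.peers.retain(|_, t| t.elapsed() < RECONNECT_TIMEOUT);
    }
}
```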

Co-authored-by: Diva M <divma@protonmail.com>
2023-02-14 03:25:42 +00:00
Michael Sproul
f7bd4bf06e
Update block rewards API for Capella 2023-02-14 12:09:40 +11:00
Michael Sproul
d53ccf8fc7
Placeholder for BlobsByRange outbound rate limit 2023-02-14 12:08:14 +11:00
Michael Sproul
18c8cab4da
Merge remote-tracking branch 'origin/unstable' into capella-merge 2023-02-14 12:07:27 +11:00
realbigsean
d2ecbd942e
fix a couple new lints 2023-02-13 17:13:47 -05:00
realbigsean
cd8757de1c
Revert "make batch size check compile time panic"
This reverts commit 68f2484efc.
2023-02-13 16:51:55 -05:00
realbigsean
68f2484efc
make batch size check compile time panic 2023-02-13 16:51:46 -05:00
realbigsean
4c3561dcaf
make batch size check compile time panic 2023-02-13 16:50:33 -05:00
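
For context, a compile-time panic in Rust is typically expressed as a `const` assertion, which fails the build instead of panicking at runtime. An illustrative sketch with made-up values (the real constants live in the networking code):

```rust
/// Hypothetical limits; illustrative values only.
const MAX_RPC_SIZE: usize = 10 * 1_048_576;
const BATCH_SIZE: usize = 64;
const MAX_BLOCK_SIZE: usize = 131_072;

// Evaluated at compile time: if a full batch could exceed the RPC
// limit, the build fails instead of panicking at runtime.
const _: () = assert!(
    BATCH_SIZE * MAX_BLOCK_SIZE <= MAX_RPC_SIZE,
    "batch size exceeds maximum RPC size"
);
```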
realbigsean
8f9c5cfca9
remove unused structs 2023-02-13 16:47:36 -05:00
realbigsean
ad9af6d8b1
complete match for has_context_bytes 2023-02-13 16:44:54 -05:00