Commit Graph

340 Commits

Author SHA1 Message Date
Alexander Uvizhev
b4556a3d62
Broadcast various requests as per #4684 (#4920)
* multiple broadcast flags

* rewrite with single --broadcast option

* satisfy cargo fmt

* shorten sync-committee-messages

* fix a doc comment and a test

* use strum

* Add broadcast test to simulator

* bring --disable-run-on-all flag back with deprecation notice
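A minimal sketch of the `strum`-based parsing that a single `--broadcast` flag implies (variant names are illustrative, not the PR's exact topic list; assumes the `strum` crate with its `derive` feature):

```rust
use strum::EnumString;

// Each broadcast topic parses from its kebab-case CLI name via strum's
// derived `FromStr` (illustrative variants, not the PR's exact list).
#[derive(Debug, PartialEq, Eq, EnumString)]
enum ApiTopic {
    #[strum(serialize = "attestations")]
    Attestations,
    #[strum(serialize = "blocks")]
    Blocks,
    #[strum(serialize = "sync-committee")]
    SyncCommittee,
}

// Split a comma-separated `--broadcast` value into typed topics.
fn parse_broadcast_flag(raw: &str) -> Result<Vec<ApiTopic>, strum::ParseError> {
    raw.split(',').map(|s| s.trim().parse()).collect()
}

fn main() {
    let topics = parse_broadcast_flag("blocks,sync-committee").unwrap();
    assert_eq!(topics, vec![ApiTopic::Blocks, ApiTopic::SyncCommittee]);
}
```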
2023-11-27 15:39:37 +11:00
Jimmy Chen
6b63d18420
Fix Rust beta compiler warnings (rustc 1.75.0-beta.1 (782883f60 2023-11-12)) (#4932) 2023-11-18 03:55:11 +11:00
Eitan Seri-Levi
07f53b18fc Block v3 endpoint (#4629)
## Issue Addressed

#4582

## Proposed Changes

Add a new v3 block fetching flow that can decide to return a Full OR Blinded payload
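A hedged sketch of how a client might call such a v3 endpoint and branch on the payload type; the `/eth/v3/validator/blocks/{slot}` path and the `Eth-Execution-Payload-Blinded` response header are taken from the beacon-API spec proposal rather than this PR, and `reqwest` (with the `blocking` feature) is assumed:

```rust
// Query the (assumed) v3 block production endpoint and report whether the
// returned payload is blinded or full. `randao_reveal` must be a valid
// hex-encoded BLS signature for the request to succeed.
fn produce_block_v3(base: &str, slot: u64, randao_reveal: &str) -> Result<(), Box<dyn std::error::Error>> {
    let url = format!("{base}/eth/v3/validator/blocks/{slot}?randao_reveal={randao_reveal}");
    let resp = reqwest::blocking::get(&url)?;
    let blinded = resp
        .headers()
        .get("Eth-Execution-Payload-Blinded")
        .and_then(|v| v.to_str().ok())
        .map(|v| v == "true")
        .unwrap_or(false);
    if blinded {
        println!("received a blinded payload (builder flow)");
    } else {
        println!("received a full payload (local execution)");
    }
    Ok(())
}
```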

## Additional Info



Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-11-03 00:12:18 +00:00
Eitan Seri-Levi
4ce01ddd11 Activate clippy::manual_let_else lint (#4889)
## Issue Addressed

#4888

## Proposed Changes

Enabled `clippy::manual_let_else` lint and resolved the warning messages.
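For context, a minimal before/after of what this lint flags (placeholder types, not code from the PR):

```rust
#[derive(Default)]
struct Root([u8; 32]);
struct Block { parent_root: Root }

// Before: clippy::manual_let_else fires on this match-and-bind pattern.
fn parent_root_before(block: Option<Block>) -> Root {
    let block = match block {
        Some(b) => b,
        None => return Root::default(),
    };
    block.parent_root
}

// After: the same logic with `let ... else`, which the lint prefers.
fn parent_root_after(block: Option<Block>) -> Root {
    let Some(block) = block else {
        return Root::default();
    };
    block.parent_root
}
```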
2023-10-31 10:31:02 +00:00
Pawan Dhananjay
6315a81260 Upgrade to v1.4.0-beta.3 (#4862)
## Issue Addressed

Makes lighthouse compliant with new kzg changes in https://github.com/ethereum/consensus-specs/releases/tag/v1.4.0-beta.3

## Proposed Changes

1. Adds new official trusted setup
2. Refactors kzg to match upstream changes in https://github.com/ethereum/c-kzg-4844/pull/377
3. Updates pre-generated `BlobBundle` to work with official trusted setup. ~~Using json here instead of ssz to account for different value of `MaxBlobCommitmentsPerBlock` in minimal and mainnet. By using json, we can just use one pre generated bundle for both minimal and mainnet. Size of 2 separate ssz bundles is approximately equal to one json bundle cc @jimmygchen~~ 
Dunno what I was doing, ssz works without any issues  
4. Stores trusted_setup as raw bytes in eth2_network_config so that we don't have a kzg dependency in that lib or in lcli.


Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2023-10-21 13:49:27 +00:00
Jimmy Chen
e8fba8d3a7 Enable BLS portable feature on all CI tests (#4868)
## Issue Addressed

Addresses the recent CI failures caused by caching `blst` for the wrong CPU type. 

## Proposed Changes

- Use `FEATURES: jemalloc,portable` when building Lighthouse & `lcli` in tests
- Add a new `TEST_FEATURES` environment variable and set it to `portable` for all CI test jobs.
- Update Makefiles to read the `TEST_FEATURES` environment variable, defaulting to none.
2023-10-20 07:30:27 +00:00
Jimmy Chen
1b4545cd9d Remove blob clones in KZG verification (#4852)
## Issue Addressed

This PR removes two instances of blob clones during blob verification that may not be necessary.
2023-10-18 06:52:54 +00:00
Mac L
369b624b19 Fix broken Nethermind integration tests (#4836)
## Issue Addressed

CI is currently blocked by persistently failing integration tests.

## Proposed Changes

Use latest Nethermind release and apply the appropriate fixes as there have been breaking changes.
Also increase the timeout since I had some local timeouts.


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: antondlr <anton@delaruelle.net>
Co-authored-by: Jimmy Chen <jchen.tc@gmail.com>
2023-10-18 04:08:55 +00:00
realbigsean
4555e33048
Remove serde derive references (#4830)
* remove remaining uses of serde_derive

* fix lockfile

---------

Co-authored-by: João Oliveira <hello@jxs.pt>
2023-10-11 13:01:30 -04:00
Jimmy Chen
a96963fd5f
Re-commit corrupted key files 2023-10-06 00:24:09 +11:00
Jimmy Chen
c5c84f1213
Merge branch 'unstable' into merge-unstable-to-deneb-20231005
# Conflicts:
#	.github/workflows/test-suite.yml
#	Cargo.lock
#	beacon_node/execution_layer/Cargo.toml
#	beacon_node/execution_layer/src/test_utils/mock_builder.rs
#	beacon_node/execution_layer/src/test_utils/mod.rs
#	beacon_node/network/src/service/tests.rs
#	consensus/types/src/builder_bid.rs
2023-10-05 15:54:44 +11:00
Divma
f11884ccdb enforce non zero enr ports (#4776)
## Issue Addressed

Right now Lighthouse accepts zero as an ENR port. Since ENR ports should be reachable, zero ports should be rejected here.

## Proposed Changes

- update the config to use `NonZeroU16` as the type for all ENR port fields (see the sketch after this list).
- the ENR builder now sets the ENR port to the listening port only if the ENR port is not already set (previous behaviour) and the listening port is not zero (new behaviour)
- reject zero listening ports when used with `enr-match`.
- the boot node now rejects a zero listening port, since those ports are advertised.
- `generate-bootnode-enr` also rejects zero listening ports for the same reason.
- update local network scripts
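A minimal sketch of the parse-time rejection this enables (illustrative code, not Lighthouse's actual config types):

```rust
use std::num::NonZeroU16;

// Storing ENR ports as `NonZeroU16` makes a zero port unrepresentable, so the
// check happens once when the CLI value is parsed.
struct EnrPorts {
    udp: Option<NonZeroU16>,
    tcp: Option<NonZeroU16>,
}

fn parse_enr_port(raw: &str) -> Result<NonZeroU16, String> {
    let port: u16 = raw
        .parse()
        .map_err(|e| format!("invalid port {raw}: {e}"))?;
    NonZeroU16::new(port)
        .ok_or_else(|| "ENR ports must be non-zero because they are advertised to peers".to_string())
}

fn main() {
    assert!(parse_enr_port("9000").is_ok());
    assert!(parse_enr_port("0").is_err());
    let _ports = EnrPorts { udp: parse_enr_port("9000").ok(), tcp: None };
}
```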

## Additional Info

Unrelated, but why do we overwrite `enr-x-port` values with listening ports if `enr-match` is present? We should probably only do this for ENR values that are not already set.
2023-10-03 23:59:34 +00:00
realbigsean
c7ddf1f0b1
add processing and processed caching to the DA checker (#4732)
* add processing and processed caching to the DA checker

* move processing cache out of critical cache

* get it compiling

* fix lints

* add docs to `AvailabilityView`

* some self review

* fix lints

* fix beacon chain tests

* cargo fmt

* make availability view easier to implement, start on testing

* move child component cache and finish test

* cargo fix

* cargo fix

* cargo fix

* fmt and lint

* make blob commitments not optional, rename some caches, add missing blobs struct

* Update beacon_node/beacon_chain/src/data_availability_checker/processing_cache.rs

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>

* marks review feedback and other general cleanup

* cargo fix

* improve availability view docs

* some renames

* some renames and docs

* fix should delay lookup logic

* get rid of some wrapper methods

* fix up single lookup changes

* add a couple docs

* add single blob merge method and improve process_... docs

* update some names

* lints

* fix merge

* remove blob indices from lookup creation log

* remove blob indices from lookup creation log

* delayed lookup logging improvement

* check fork choice before doing any blob processing

* remove unused dep

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/data_availability_checker/availability_view.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/network/src/sync/block_lookups/delayed_lookup.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* remove duplicate deps

* use gen range in random blobs generator

* rename processing cache fields

* require block root in rpc block construction and check block root consistency

* send peers as vec in single message

* spawn delayed lookup service from network beacon processor

* fix tests

---------

Co-authored-by: ethDreamer <37123614+ethDreamer@users.noreply.github.com>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-10-03 09:59:33 -04:00
Age Manning
8a1b77bf89 Ultra Fast Super Slick CI (#4755)
Attempting to improve our CI speeds, as it's recently been a pain point.

Major changes:

 - Use a github action to pull stable/nightly rust rather than building it each run
 - Shift the test suite to `nextest` (https://github.com/nextest-rs/nextest) for CI
 
 UPDATE:

So I've iterated on some changes, and although I think it's still not optimal, I think this is a good base to start from. Some extra things in this PR:
- Shifted where we pull rust from. We're now using this thing: https://github.com/moonrepo/setup-rust . It's got some interesting caches built in, but I was not seeing the gains that Jimmy managed to get. In either case though, it can pull rust, cargo fmt, clippy and cargo nextest all in < 5s. So I think it's worthwhile.
- I've grouped a few of the check-like tests into a single test called `code-test`. Although we were using github runners in parallel, which may be faster, it just seems wasteful. There were 4-5 tests where we would pull lighthouse, compile it, then run an action like clippy, cargo-audit or fmt. I've grouped these into a single action, so we only compile lighthouse once, then run each check as a step. This avoids compiling lighthouse around 5 times.
- I've made the doppelganger tests run on our local machines to avoid pulling foundry, building it and making lcli, which are all now baked into the images.
- We have sccache and do not incrementally compile lighthouse

Misc bonus things:
- Cargo update
- Fix web3 signer openssl keys which is required after a cargo update
- Use mock_instant in an LRU cache test to avoid non-deterministic test
- Remove race condition in building web3signer tests

There are still some things we could improve on, such as downloading the EF tests and the web3-signer binary every run, but I've left these out of scope for this PR. I think the above are meaningful improvements.



Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: realbigsean <seananderson33@gmail.com>
Co-authored-by: antondlr <anton@delaruelle.net>
2023-10-03 06:33:15 +00:00
Jimmy Chen
c0b6b92f27
Merge unstable 20230925 into deneb-free-blobs. 2023-09-26 10:32:18 +10:00
Lion - dapplion
5c5afafc0d
Update Deneb to 1.4.0-beta.2 (devnet-9) (#4735)
* Add MAX_PER_EPOCH_ACTIVATION_CHURN_LIMIT

* Update tests to 1.4.0-beta.2

* Implement equivocation check for proposer boost

* Use hotfix tests and fix minimal config

* Start updating fork choice tests for Deneb

* Finish implementing fork choice blob handling

---------

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-09-25 15:05:31 +10:00
João Oliveira
dcd69dfc62 Move dependencies to workspace (#4650)
## Issue Addressed

Synchronize dependencies and edition on the workspace `Cargo.toml`

## Proposed Changes

With https://github.com/rust-lang/cargo/issues/8415 merged, it's now possible to synchronize details on the workspace `Cargo.toml`, such as the metadata and dependencies.
By aligning dependencies that are shared between multiple crates in the workspace `Cargo.toml`, it's easier to avoid duplicate versions of the same dependency, which also eases compile times.

## Additional Info
This PR also removes the no-longer-required direct dependency on the `serde_derive` crate.

It should be reviewed after https://github.com/sigp/lighthouse/pull/4639 gets merged.
closes https://github.com/sigp/lighthouse/issues/4651


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
2023-09-22 04:30:56 +00:00
João Oliveira
d386a07b0c validator client: start http api before genesis (#4714)
## Issue Addressed

On a new network, a user might need to import validators before genesis has occurred.

## Proposed Changes

Starts the validator client HTTP API before waiting for genesis.

## Additional Info

cc @antondlr
2023-09-15 10:08:30 +00:00
Jimmy Chen
b88e57c989 Update Java runtime requirement to 17 for Web3Signer tests (#4681)
Web3Signer now requires Java runtime v17, see [v23.8.0 release](https://github.com/Consensys/web3signer/releases/tag/23.8.0).

We have some Web3Signer tests that require a compatible Java runtime to be installed on dev machines. This PR updates the `setup` documentation in the Lighthouse book, and also fixes a small typo.
2023-09-15 08:49:14 +00:00
Age Manning
e4ed317b76 Add Experimental QUIC support (#4577)
## Issue Addressed

#4402 

## Proposed Changes

This PR adds QUIC support to Lighthouse. As this is not officially spec'd, it will only work for lighthouse <-> lighthouse connections. We attempt a QUIC connection (if the node advertises it) and, if it fails, we fall back to TCP.

This should be a backwards compatible modification. We want to test this functionality on live networks to observe any improvements in bandwidth/latency.

NOTE: This also removes the websockets transport as I believe no one is really using it. It should be mentioned in our release however.
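Purely as an illustration of the dial strategy described above (hypothetical helpers, not Lighthouse or libp2p APIs):

```rust
struct Peer { advertises_quic: bool }

#[derive(Debug, PartialEq)]
enum Transport { Quic, Tcp }

// Prefer QUIC when the peer advertises it; fall back to TCP when the QUIC
// dial is unavailable or fails. The dial closures stand in for real dialers.
fn connect(peer: &Peer, dial_quic: impl Fn() -> bool, dial_tcp: impl Fn() -> bool) -> Option<Transport> {
    if peer.advertises_quic && dial_quic() {
        return Some(Transport::Quic);
    }
    dial_tcp().then_some(Transport::Tcp)
}

fn main() {
    let peer = Peer { advertises_quic: true };
    // The QUIC dial fails here, so we expect the TCP fallback.
    assert_eq!(connect(&peer, || false, || true), Some(Transport::Tcp));
}
```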


Co-authored-by: João Oliveira <hello@jxs.pt>
2023-09-15 03:07:24 +00:00
Jimmy Chen
6771954c5f
Merge unstable 20230911 into deneb-free-blobs. 2023-09-11 12:09:58 +10:00
Paul Hauner
2550170337
Deneb review suggestions (3) (#4694)
* Fix typos

* Avoid consuming/cloning blob

* Tidy comments
2023-09-05 15:50:57 -04:00
Paul Hauner
0bfc933c50
Deneb review suggestions (2) (#4680)
* Serde deny unknown fields

* Require 0x prefix
2023-09-05 15:49:57 -04:00
Mac L
e056c279aa Increase web3signer_tests timeouts (#4662)
## Issue Addressed

`web3signer_tests` can sometimes time out.

## Proposed Changes

Increase the `web3signer_tests` timeout from 20s to 30s

## Additional Info
Previously I believed the consistent CI failures were due to this, but it ended up being something different. See below:

---

The timing of this makes it very likely it is related to the [latest release of `web3-signer`](https://github.com/Consensys/web3signer/releases/tag/23.8.1).

I now believe this is due to an out of date Java runtime on our runners. A newer version of Java became a requirement with the new `web3-signer` release.

However, I was getting timeouts locally, which implies that the margin before timeout is quite small at 20s, so bumping it up to 30s could be a good idea regardless.
2023-08-28 00:55:34 +00:00
João Oliveira
c258270d6a update dependencies (#4639)
## Issue Addressed

Updates underlying dependencies and removes the ignored `RUSTSEC` advisories for `cargo audit`.

Also switches from `procinfo` to `procfs` in `eth2` to remove the `nom` warning; `procinfo` is unmaintained, see [here](https://github.com/danburkert/procinfo-rs/issues/46).
2023-08-28 00:55:28 +00:00
Jimmy Chen
8a6f171b2a
Merge branch 'unstable' into merge-unstable-to-deneb-20230822
# Conflicts:
#	beacon_node/beacon_chain/src/builder.rs
#	beacon_node/beacon_chain/tests/store_tests.rs
#	beacon_node/client/src/builder.rs
#	beacon_node/src/config.rs
#	beacon_node/store/src/hot_cold_store.rs
#	lighthouse/tests/beacon_node.rs
2023-08-22 21:20:47 +10:00
Jimmy Chen
f92b856cd1 Remove Antithesis docker build workflow (#4642)
## Issue Addressed

This PR removes the Antithesis docker build CI workflow. Confirmed with the Antithesis team that this is no longer used.
2023-08-21 05:02:36 +00:00
Michael Sproul
20067b9465 Remove checkpoint alignment requirements and enable historic state pruning (#4610)
## Issue Addressed

Closes #3210
Closes #3211

## Proposed Changes

- Checkpoint sync from the latest finalized state regardless of its alignment.
- Add the `block_root` to the database's split point. This is _only_ added to the in-memory split in order to avoid a schema migration. See `load_split`.
- Add a new method to the DB called `get_advanced_state`, which looks up a state _by block root_, with a `state_root` as fallback. Using this method prevents accidental accesses of the split's unadvanced state, which does not exist in the hot DB and is not guaranteed to exist in the freezer DB at all. Previously Lighthouse would look up this state _from the freezer DB_, even if it was required for block/attestation processing, which was suboptimal.
- Replace several state look-ups in block and attestation processing with `get_advanced_state` so that they can't hit the split block's unadvanced state.
- Do not store any states in the freezer database by default. All states will be deleted upon being evicted from the hot database unless `--reconstruct-historic-states` is set. The anchor info which was previously used for checkpoint sync is used to implement this, including when syncing from genesis.

## Additional Info

Needs further testing. I want to stress-test the pruned database under Hydra.

The `get_advanced_state` method is intended to become more relevant over time: `tree-states` includes an identically named method that returns advanced states from its in-memory cache.
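A hedged sketch of the lookup order `get_advanced_state` implies (hypothetical store and state types, not the real database code):

```rust
type Hash256 = [u8; 32];
struct BeaconState;
struct Store;

impl Store {
    fn state_by_block_root(&self, _block_root: Hash256) -> Option<BeaconState> {
        None // placeholder: primary lookup keyed by block root
    }
    fn state_by_state_root(&self, _state_root: Hash256) -> Option<BeaconState> {
        None // placeholder: fallback lookup keyed by state root
    }

    /// Look a state up by block root first, falling back to the state root,
    /// so callers never accidentally load the split block's unadvanced state.
    fn get_advanced_state(&self, block_root: Hash256, state_root: Hash256) -> Option<BeaconState> {
        self.state_by_block_root(block_root)
            .or_else(|| self.state_by_state_root(state_root))
    }
}
```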

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-08-21 05:02:32 +00:00
realbigsean
7d468cb487
More deneb cleanup (#4640)
* remove protoc and token from network tests github action

* delete unused beacon chain methods

* downgrade writing blobs to store log

* reduce diff in block import logic

* remove some todo's and deneb built in network

* remove unnecessary error, actually use some added metrics

* remove some metrics, fix missing components on publish functionality

* fix status tests

* rename sidecar by root to blobs by root

* clean up some metrics

* remove unnecessary feature gate from attestation subnet tests, clean up blobs by range response code

* pawan's suggestion in `protocol_info`, peer score in matching up batch sync block and blobs

* fix range tests for deneb

* pub block and blob db cache behind the same mutex

* remove unused errs and an empty file

* move sidecar trait to new file

* move types from payload to eth2 crate

* update comment and add flag value name

* make function private again, remove allow unused

* use reth rlp for tx decoding

* fix compile after merge

* rename kzg commitments

* cargo fmt

* remove unused dep

* Update beacon_node/execution_layer/src/lib.rs

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>

* Update beacon_node/beacon_processor/src/lib.rs

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>

* pawan's suggestion for vec capacity

* cargo fmt

* Revert "use reth rlp for tx decoding"

This reverts commit 5181837d81c66dcca4c960a85989ac30c7f806e2.

* remove reth rlp

---------

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
2023-08-20 21:17:17 -04:00
realbigsean
fddd4e4c87
Merge pull request #4591 from realbigsean/merge-unstable-deneb-aug-9
Merge unstable deneb aug 9
2023-08-09 16:24:18 -04:00
ethDreamer
2b5385fb46
Changes for devnet-8 (#4518)
* Addressed #4487

Add override threshold flag
Added tests for Override Threshold Flag
Override default shown in decimal

* Addressed #4445

Addressed Jimmy's Comments
No need for matches
Fix Mock Execution Engine Tests
Fix clippy
fix fcuv3 bug

* Fix Block Root Calculation post-Deneb

* Addressed #4444

Attestation Verification Post-Deneb
Fix Gossip Attestation Verification Test

* Addressed #4443

Fix Exit Signing for EIP-7044
Fix cross exit test
Move 7044 Logic to signing_context()

* Update EF Tests

* Addressed #4560

* Added Comments around EIP7045

* Combine Altair Deneb to Eliminate Duplicated Code
2023-08-09 15:44:47 -04:00
realbigsean
12b5e9ad3d
Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-aug-9 2023-08-09 10:42:51 -04:00
Paul Hauner
b60304b19f Use BeaconProcessor for API requests (#4462)
## Issue Addressed

NA

## Proposed Changes

Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:

1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
1. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
1. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).

Presently there are two levels of priorities:

- `Priority::P0`
    - The beacon processor will prioritise these above everything other than importing new blocks.
    - Roughly all validator-sensitive endpoints.
- `Priority::P1`
    - The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill things.
    - Everything that's not `Priority::P0`
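Roughly, the two-level idea looks like this (illustrative queue, not the real `BeaconProcessor` scheduling code):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// P0 orders before P1 because it is declared first in the enum.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Priority { P0, P1 }

#[derive(Default)]
struct WorkQueue { heap: BinaryHeap<Reverse<(Priority, u64)>> }

impl WorkQueue {
    fn push(&mut self, priority: Priority, task_id: u64) {
        self.heap.push(Reverse((priority, task_id)));
    }
    // `Reverse` turns the max-heap into a min-heap, so P0 work pops first.
    fn pop(&mut self) -> Option<u64> {
        self.heap.pop().map(|Reverse((_, id))| id)
    }
}

fn main() {
    let mut queue = WorkQueue::default();
    queue.push(Priority::P1, 1); // e.g. a historical query
    queue.push(Priority::P0, 2); // e.g. a validator-sensitive request
    assert_eq!(queue.pop(), Some(2)); // P0 is served before P1
}
```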
    
The `--http-enable-beacon-processor false` flag can be supplied to revert back to the old behaviour of spawning new `tokio` tasks for each request:

```
        --http-enable-beacon-processor <BOOLEAN>
            The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
            "true", HTTP API requests will queued and scheduled alongside other tasks. When set to "false", HTTP API
            responses will be executed immediately. [default: true]
```
    
## New CLI Flags

I added some other new CLI flags:

```
        --beacon-processor-aggregate-batch-size <INTEGER>
            Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
            reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
            network. [default: 64]
        --beacon-processor-attestation-batch-size <INTEGER>
            Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
            usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
            [default: 64]
        --beacon-processor-max-workers <INTEGER>
            Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
            consumption. Reducing the value may result in decreased resource usage and diminished performance. The
            default value is the number of logical CPU cores on the host.
        --beacon-processor-reprocess-queue-len <INTEGER>
            Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
            messages from being dropped while lower values may help protect the node from becoming overwhelmed.
            [default: 12288]
```


I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the Github runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.

## Additional Info

I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses, I guess this is what we signed up for 🤷 

The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for a builder response.

I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](efbabe3252). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
2023-08-08 23:30:15 +00:00
Jimmy Chen
ec416df061
Merge branch 'unstable' into merge-unstable-to-deneb-20230808
# Conflicts:
#	Cargo.lock
#	beacon_node/beacon_chain/src/lib.rs
#	beacon_node/execution_layer/src/engine_api.rs
#	beacon_node/execution_layer/src/engine_api/http.rs
#	beacon_node/execution_layer/src/test_utils/mod.rs
#	beacon_node/lighthouse_network/src/rpc/codec/ssz_snappy.rs
#	beacon_node/lighthouse_network/src/rpc/handler.rs
#	beacon_node/lighthouse_network/src/rpc/protocol.rs
#	beacon_node/lighthouse_network/src/service/utils.rs
#	beacon_node/lighthouse_network/tests/rpc_tests.rs
#	beacon_node/network/Cargo.toml
#	beacon_node/network/src/network_beacon_processor/tests.rs
#	lcli/src/parse_ssz.rs
#	scripts/cross/Dockerfile
#	validator_client/src/block_service.rs
#	validator_client/src/validator_store.rs
2023-08-08 17:02:51 +10:00
Divma
ff9b09d964 upgrade to libp2p 0.52 (#4431)
## Issue Addressed

Upgrade libp2p to v0.52

## Proposed Changes
- **Workflows**: remove installation of `protoc`
- **Book**: remove installation of `protoc`
- **`Dockerfile`s and `cross`**: remove custom base `Dockerfile` for cross since it's no longer needed. Remove `protoc` from remaining `Dockerfiles`s
- **Upgrade `discv5` to `v0.3.1`:** we have some cool stuff in there: no longer needs `protoc` and faster ip updates on cold start
- **Upgrade `prometheus` to `0.21.0`**, now it no longer needs encoding checks
- **things that look like refactors:** a bunch of API types were renamed and need to be accessed in a different (clearer) way
- **Lighthouse network**
	- connection limits is now a behaviour
	- banned peers no longer exist on the swarm level, but at the behaviour level
	- `connection_event_buffer_size` now is handled per connection with a buffer size of 4
	- `mplex` is deprecated and was removed
	- rpc handler now logs the peer to which it belongs

## Additional Info

Tried to keep as much behaviour unchanged as possible. However, there is a great deal of improvements we can do _after_ this upgrade:
- Smart connection limits: Connection limits have been checked only based on numbers; we can now use information about the incoming peer to decide if we want it
- More powerful peer management: Dial attempts from other behaviours can be rejected early
- Incoming connections can be rejected early
- Banning can be returned exclusively to peer management: making use of this, we should no longer get connections to banned peers
- TCP Nat updates: We might be able to take advantage of confirmed external addresses to check out tcp ports/ips


Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: Akihito Nakano <sora.akatsuki@gmail.com>
2023-08-02 00:59:34 +00:00
Gua00va
73764d0dd2 Deprecate exchangeTransitionConfiguration functionality (#4517)
## Issue Addressed

Solves #4442 
## Proposed Changes

EL clients log errors if we don't query this endpoint, but they are making releases that remove this error logging. After those are out we can stop calling it, after which point EL teams will remove the endpoint entirely. 
Refer https://hackmd.io/@n0ble/deprecate-exchgTC
2023-07-31 23:51:39 +00:00
Jimmy Chen
4ca101e085
Merge branch 'unstable' into deneb-free-blobs 2023-07-24 20:55:27 +10:00
Jimmy Chen
fc7f1ba6b9 Phase 0 attestation rewards via Beacon API (#4474)
## Issue Addressed

Addresses #4026.

Beacon-API spec [here](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Beacon/getAttestationsRewards).

Endpoint: `POST /eth/v1/beacon/rewards/attestations/{epoch}`

This endpoint already supports post-Altair epochs. This PR adds support for phase 0 rewards calculation.

## Proposed Changes

- [x] Attestation rewards API to support phase 0 rewards calculation, re-using logic from `state_processing`. Refactored `get_attestation_deltas` slightly to support computing deltas for a subset of validators.
- [x] Add `inclusion_delay` to `ideal_rewards` (`beacon-API` spec update to follow)
- [x] Add `inactivity` penalties to both `ideal_rewards` and `total_rewards` (`beacon-API` spec update to follow)
- [x] Add tests to compute attestation rewards and compare results with beacon states 

## Additional Notes

- The extra penalty for missing attestations or being slashed during an inactivity leak is currently not included in the API response (for both phase 0 and Altair) in the spec. 
- I went with adding `inactivity` as a separate component rather than combining them with the 4 rewards, because this is how it was grouped in [the phase 0 spec](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#get_attestation_deltas). During inactivity leak, all rewards include the optimal reward, and inactivity penalties are calculated separately (see below code snippet from the spec), so it would be quite confusing if we merge them. This would also work better with Altair, because there's no "cancelling" of rewards and inactivity penalties are more separate.
- Altair calculation logic (to include inactivity penalties) to be updated in a follow-up PR.

```python
def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
    """
    Return attestation reward/penalty deltas for each validator.
    """
    source_rewards, source_penalties = get_source_deltas(state)
    target_rewards, target_penalties = get_target_deltas(state)
    head_rewards, head_penalties = get_head_deltas(state)
    inclusion_delay_rewards, _ = get_inclusion_delay_deltas(state)
    _, inactivity_penalties = get_inactivity_penalty_deltas(state)

    rewards = [
        source_rewards[i] + target_rewards[i] + head_rewards[i] + inclusion_delay_rewards[i]
        for i in range(len(state.validators))
    ]

    penalties = [
        source_penalties[i] + target_penalties[i] + head_penalties[i] + inactivity_penalties[i]
        for i in range(len(state.validators))
    ]

    return rewards, penalties
```

## Example API Response

<details>
  <summary>Click me</summary>
  
```json
{
  "ideal_rewards": [
    {
      "effective_balance": "1000000000",
      "head": "6638",
      "target": "6638",
      "source": "6638",
      "inclusion_delay": "9783",
      "inactivity": "0"
    },
    {
      "effective_balance": "2000000000",
      "head": "13276",
      "target": "13276",
      "source": "13276",
      "inclusion_delay": "19565",
      "inactivity": "0"
    },
    {
      "effective_balance": "3000000000",
      "head": "19914",
      "target": "19914",
      "source": "19914",
      "inclusion_delay": "29349",
      "inactivity": "0"
    },
    {
      "effective_balance": "4000000000",
      "head": "26553",
      "target": "26553",
      "source": "26553",
      "inclusion_delay": "39131",
      "inactivity": "0"
    },
    {
      "effective_balance": "5000000000",
      "head": "33191",
      "target": "33191",
      "source": "33191",
      "inclusion_delay": "48914",
      "inactivity": "0"
    },
    {
      "effective_balance": "6000000000",
      "head": "39829",
      "target": "39829",
      "source": "39829",
      "inclusion_delay": "58697",
      "inactivity": "0"
    },
    {
      "effective_balance": "7000000000",
      "head": "46468",
      "target": "46468",
      "source": "46468",
      "inclusion_delay": "68480",
      "inactivity": "0"
    },
    {
      "effective_balance": "8000000000",
      "head": "53106",
      "target": "53106",
      "source": "53106",
      "inclusion_delay": "78262",
      "inactivity": "0"
    },
    {
      "effective_balance": "9000000000",
      "head": "59744",
      "target": "59744",
      "source": "59744",
      "inclusion_delay": "88046",
      "inactivity": "0"
    },
    {
      "effective_balance": "10000000000",
      "head": "66383",
      "target": "66383",
      "source": "66383",
      "inclusion_delay": "97828",
      "inactivity": "0"
    },
    {
      "effective_balance": "11000000000",
      "head": "73021",
      "target": "73021",
      "source": "73021",
      "inclusion_delay": "107611",
      "inactivity": "0"
    },
    {
      "effective_balance": "12000000000",
      "head": "79659",
      "target": "79659",
      "source": "79659",
      "inclusion_delay": "117394",
      "inactivity": "0"
    },
    {
      "effective_balance": "13000000000",
      "head": "86298",
      "target": "86298",
      "source": "86298",
      "inclusion_delay": "127176",
      "inactivity": "0"
    },
    {
      "effective_balance": "14000000000",
      "head": "92936",
      "target": "92936",
      "source": "92936",
      "inclusion_delay": "136959",
      "inactivity": "0"
    },
    {
      "effective_balance": "15000000000",
      "head": "99574",
      "target": "99574",
      "source": "99574",
      "inclusion_delay": "146742",
      "inactivity": "0"
    },
    {
      "effective_balance": "16000000000",
      "head": "106212",
      "target": "106212",
      "source": "106212",
      "inclusion_delay": "156525",
      "inactivity": "0"
    },
    {
      "effective_balance": "17000000000",
      "head": "112851",
      "target": "112851",
      "source": "112851",
      "inclusion_delay": "166307",
      "inactivity": "0"
    },
    {
      "effective_balance": "18000000000",
      "head": "119489",
      "target": "119489",
      "source": "119489",
      "inclusion_delay": "176091",
      "inactivity": "0"
    },
    {
      "effective_balance": "19000000000",
      "head": "126127",
      "target": "126127",
      "source": "126127",
      "inclusion_delay": "185873",
      "inactivity": "0"
    },
    {
      "effective_balance": "20000000000",
      "head": "132766",
      "target": "132766",
      "source": "132766",
      "inclusion_delay": "195656",
      "inactivity": "0"
    },
    {
      "effective_balance": "21000000000",
      "head": "139404",
      "target": "139404",
      "source": "139404",
      "inclusion_delay": "205439",
      "inactivity": "0"
    },
    {
      "effective_balance": "22000000000",
      "head": "146042",
      "target": "146042",
      "source": "146042",
      "inclusion_delay": "215222",
      "inactivity": "0"
    },
    {
      "effective_balance": "23000000000",
      "head": "152681",
      "target": "152681",
      "source": "152681",
      "inclusion_delay": "225004",
      "inactivity": "0"
    },
    {
      "effective_balance": "24000000000",
      "head": "159319",
      "target": "159319",
      "source": "159319",
      "inclusion_delay": "234787",
      "inactivity": "0"
    },
    {
      "effective_balance": "25000000000",
      "head": "165957",
      "target": "165957",
      "source": "165957",
      "inclusion_delay": "244570",
      "inactivity": "0"
    },
    {
      "effective_balance": "26000000000",
      "head": "172596",
      "target": "172596",
      "source": "172596",
      "inclusion_delay": "254352",
      "inactivity": "0"
    },
    {
      "effective_balance": "27000000000",
      "head": "179234",
      "target": "179234",
      "source": "179234",
      "inclusion_delay": "264136",
      "inactivity": "0"
    },
    {
      "effective_balance": "28000000000",
      "head": "185872",
      "target": "185872",
      "source": "185872",
      "inclusion_delay": "273918",
      "inactivity": "0"
    },
    {
      "effective_balance": "29000000000",
      "head": "192510",
      "target": "192510",
      "source": "192510",
      "inclusion_delay": "283701",
      "inactivity": "0"
    },
    {
      "effective_balance": "30000000000",
      "head": "199149",
      "target": "199149",
      "source": "199149",
      "inclusion_delay": "293484",
      "inactivity": "0"
    },
    {
      "effective_balance": "31000000000",
      "head": "205787",
      "target": "205787",
      "source": "205787",
      "inclusion_delay": "303267",
      "inactivity": "0"
    },
    {
      "effective_balance": "32000000000",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    }
  ],
  "total_rewards": [
    {
      "validator_index": "0",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "32",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "63",
      "head": "-357771",
      "target": "-357771",
      "source": "-357771",
      "inclusion_delay": "0",
      "inactivity": "0"
    }
  ]
}
```
</details>
2023-07-18 01:48:40 +00:00
realbigsean
b96db45090
Merge branch 'unstable' of https://github.com/sigp/lighthouse into merge-unstable-deneb-jul-14 2023-07-17 09:33:37 -04:00
Jimmy Chen
d4a61756ca CI fix: add retries to eth1 sim tests (#4501)
## Issue Addressed

This PR attempts to work around the recent frequent eth1 simulator failures caused by missing eth logs from Anvil.

> FailedToInsertDeposit(NonConsecutive { log_index: 1, expected: 0 })

This usually occurs at the beginning of the tests and guarantees a timeout after a few hours if this log shows up, which is currently causing our CI to fail quite frequently.

Example failure here: https://github.com/sigp/lighthouse/actions/runs/5525760195/jobs/10079736914

## Proposed Changes

The quick fix applied here adds a timeout to node startup and restarts the node when startup fails or times out.

- Add a 60-second timeout to beacon node startup in eth1 simulator tests. It takes ~10 seconds on my machine, but could take longer on CI runners.
- Wrap the startup code in a retry function that allows for 3 retries before returning an error (sketched below).
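A minimal sketch of that timeout-plus-retry wrapper (hypothetical startup future, assuming `tokio`; not the simulator's actual code):

```rust
use std::time::Duration;
use tokio::time::timeout;

// Retry an async startup step up to 3 times, giving each attempt 60 seconds.
async fn start_with_retries<F, Fut, T, E>(mut start_node: F) -> Result<T, String>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Result<T, E>>,
    E: std::fmt::Debug,
{
    const ATTEMPTS: usize = 3;
    for attempt in 1..=ATTEMPTS {
        match timeout(Duration::from_secs(60), start_node()).await {
            Ok(Ok(node)) => return Ok(node),
            Ok(Err(e)) => eprintln!("startup attempt {attempt} failed: {e:?}"),
            Err(_) => eprintln!("startup attempt {attempt} timed out"),
        }
    }
    Err(format!("node failed to start after {ATTEMPTS} attempts"))
}

#[tokio::main]
async fn main() {
    // Trivial example: "startup" succeeds immediately.
    let result = start_with_retries(|| async { Ok::<_, std::io::Error>(42u32) }).await;
    assert_eq!(result.unwrap(), 42);
}
```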

## Additional Info

We should probably raise an issue on the Anvil GitHub repo so this can be further investigated.
2023-07-17 00:14:18 +00:00
Michael Sproul
5569d97a07 Remove wget dependency (#4497)
## Proposed Changes

Replace `wget` in the EF-tests makefile with `curl`.

On macOS `curl` is pre-installed, and I found myself making this change to avoid installing `wget`.

The `-L` flag is used to follow redirects which is useful if a repo gets renamed, and more similar to `wget`'s default behaviour.
2023-07-17 00:14:16 +00:00
Jimmy Chen
5cd738c882 Use unique arg names for eth1-sim (#4463)
## Issue Addressed

When trying to run `eth1-sim` locally, the simulator doesn't start for me and panics due to duplicate arg names for `proposer-nodes` (it uses the same arg names as `nodes`). Not sure why this isn't failing on CI but fails on my machine 🤔

```
thread 'main' panicked at 'Argument short must be unique
thread 'main' panicked at 'Argument long must be unique
```
2023-07-17 00:14:13 +00:00
realbigsean
782a53ad9d
fix compile 2023-07-11 16:05:05 -04:00
realbigsean
c4da1ba450
resolve merge issues 2023-07-07 10:17:04 -04:00
realbigsean
cfe2452533
Merge branch 'remove-into-gossip-verified-block' of https://github.com/realbigsean/lighthouse into merge-unstable-deneb-june-6th 2023-07-06 16:51:35 -04:00
realbigsean
ba65812972
remove patched dependencies (#4470) 2023-07-05 15:53:35 -04:00
realbigsean
af4a66846e
remove clang from Dockerfile (#4471) 2023-07-05 15:53:22 -04:00
Jimmy Chen
46be05f728 Cache target attester balances for unrealized FFG progression calculation (#4362)
## Issue Addressed

#4118 

## Proposed Changes

This PR introduces a "progressive balances" cache on the `BeaconState`, which keeps track of the accumulated target attestation balance for the current & previous epochs. The cached values are utilised by fork choice to calculate unrealized justification and finalization (instead of converting epoch participation arrays to balances for each block we receive).

This optimization will be rolled out gradually to allow for more testing. A new `--progressive-balances disabled|checked|strict|fast` flag is introduced to support this:
- `checked`: enabled with checks against participation cache, and falls back to the existing epoch processing calculation if there is a total target attester balance mismatch. There is no performance gain from this as the participation cache still needs to be computed. **This is the default mode for now.**
- `strict`: enabled with checks against participation cache, returns error if there is a mismatch. **Used for testing only**.
- `fast`: enabled with no comparative checks and without computing the participation cache. This mode gives us the performance gains from the optimization. This is still experimental and not currently recommended for production usage, but will become the default mode in a future release.
- `disabled`: disable the usage of progressive cache, and use the existing method for FFG progression calculation. This mode may be useful if we find a bug and want to stop the frequent error logs.
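For illustration, a mode flag like the one above might parse along these lines (sketch only; not Lighthouse's actual CLI type):

```rust
use std::str::FromStr;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ProgressiveBalancesMode {
    Disabled, // use the existing epoch-processing calculation only
    Checked,  // use the cache, cross-check against the participation cache
    Strict,   // error on any mismatch (testing only)
    Fast,     // trust the cache, skip the participation cache entirely
}

impl FromStr for ProgressiveBalancesMode {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "disabled" => Ok(Self::Disabled),
            "checked" => Ok(Self::Checked),
            "strict" => Ok(Self::Strict),
            "fast" => Ok(Self::Fast),
            other => Err(format!("unknown progressive-balances mode: {other}")),
        }
    }
}

fn main() {
    assert_eq!("fast".parse(), Ok(ProgressiveBalancesMode::Fast));
    assert!("sometimes".parse::<ProgressiveBalancesMode>().is_err());
}
```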

### Tasks

- [x] Initial cache implementation in `BeaconState`
- [x] Perform checks in fork choice to compare the progressive balances cache against results from `ParticipationCache`
- [x] Add CLI flag, and disable the optimization by default
- [x] Testing on Goerli & Benchmarking
- [x]  Move caching logic from state processing to the `ProgressiveBalancesCache` (see [this comment](https://github.com/sigp/lighthouse/pull/4362#discussion_r1230877001))
- [x] Add attesting balance metrics



Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-06-30 01:13:06 +00:00
realbigsean
adbb62f7f3
Devnet6 (#4404)
* some blob reprocessing work

* remove ForceBlockLookup

* reorder enum match arms in sync manager

* a lot more reprocessing work

* impl logic for triggerng blob lookups along with block lookups

* deal with rpc blobs in groups per block in the da checker. don't cache missing blob ids in the da checker.

* make single block lookup generic

* more work

* add delayed processing logic and combine some requests

* start fixing some compile errors

* fix compilation in main block lookup mod

* much work

* get things compiling

* parent blob lookups

* fix compile

* revert red/stevie changes

* fix up sync manager delay message logic

* add peer usefulness enum

* should remove lookup refactor

* consolidate retry error handling

* improve peer scoring during certain failures in parent lookups

* improve retry code

* drop parent lookup if either req has a peer disconnect during download

* refactor single block processed method

* processing peer refactor

* smol bugfix

* fix some todos

* fix lints

* fix lints

* fix compile in lookup tests

* fix lints

* fix lints

* fix existing block lookup tests

* renamings

* fix after merge

* cargo fmt

* compilation fix in beacon chain tests

* fix

* refactor lookup tests to work with multiple forks and response types

* make tests into macros

* wrap availability check error

* fix compile after merge

* add random blobs

* start fixing up lookup verify error handling

* some bug fixes and the start of deneb only tests

* make tests work for all forks

* track information about peer source

* error refactoring

* improve peer scoring

* fix test compilation

* make sure blobs are sent for processing after stream termination, delete copied tests

* add some tests and fix a bug

* smol bugfixes and moar tests

* add tests and fix some things

* compile after merge

* lots of refactoring

* retry on invalid block/blob

* merge unknown parent messages before current slot lookup

* get tests compiling

* penalize blob peer on invalid blobs

* Check disk on in-memory cache miss

* Update beacon_node/beacon_chain/src/data_availability_checker/overflow_lru_cache.rs

* Update beacon_node/network/src/sync/network_context.rs

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>

* fix bug in matching blocks and blobs in range sync

* pr feedback

* fix conflicts

* upgrade logs from warn to crit when we receive incorrect response in range

* synced_and_connected_within_tolerance -> should_search_for_block

* remove todo

* add data gas used and update excess data gas to u64

* Fix Broken Overflow Tests

* payload verification with commitments

* fix merge conflicts

* restore payload file

* Restore payload file

* remove todo

* add max blob commitments per block

* c-kzg lib update

* Fix ef tests

* Abstract over minimal/mainnet spec in kzg crate

* Start integrating new KZG

* checkpoint sync without alignment

* checkpoint sync without alignment

* add import

* add import

* query for checkpoint state by slot rather than state root (teku doesn't serve by state root)

* query for checkpoint state by slot rather than state root (teku doesn't serve by state root)

* loosen check

* get state first and query by most recent block root

* Revert "loosen check"

This reverts commit 069d13dd63aa794a3505db9f17bd1a6b73f0be81.

* get state first and query by most recent block root

* merge max blobs change

* simplify delay logic

* rename unknown parent sync message variants

* rename parameter, block_slot -> slot

* add some docs to the lookup module

* use interval instead of sleep

* drop request if blocks and blobs requests both return `None` for `Id`

* clean up `find_single_lookup` logic

* add lookup source enum

* clean up `find_single_lookup` logic

* add docs to find_single_lookup_request

* move LookupSource our of param where unnecessary

* remove unnecessary todo

* query for block by `state.latest_block_header.slot`

* fix lint

* fix merge transition ef tests

* fix test

* fix test

* fix observed  blob sidecars test

* Add some metrics (#33)

* fix protocol limits for blobs by root

* Update Engine API for 1:1 Structure Method

* make beacon chain tests to fix devnet 6 changes

* get ckzg working and fix some tests

* fix remaining tests

* fix lints

* Fix KZG linking issues

* remove unused dep

* lockfile

* test fixes

* remove dbgs

* remove unwrap

* cleanup tx generator

* small fixes

* fixing fixes

* more self review

* more self review

* refactor genesis header initialization

* refactor mock el instantiations

* fix compile

* fix network test, make sure they run for each fork

* pr feedback

* fix last test (hopefully)

---------

Co-authored-by: Pawan Dhananjay <pawandhananjay@gmail.com>
Co-authored-by: Mark Mackey <mark@sigmaprime.io>
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-06-29 15:35:43 -04:00
Jack McPherson
1aff082eea Add broadcast validation routes to Beacon Node HTTP API (#4316)
## Issue Addressed

 - #4293 
 - #4264 

## Proposed Changes

*Changes largely follow those suggested in the main issue*.

 - Add new routes to HTTP API
   - `post_beacon_blocks_v2`
   - `post_blinded_beacon_blocks_v2`
 - Add new routes to `BeaconNodeHttpClient`
   - `post_beacon_blocks_v2`
   - `post_blinded_beacon_blocks_v2`
 - Define new Eth2 common types
   - `BroadcastValidation`, enum representing the level of validation to apply to blocks prior to broadcast
   - `BroadcastValidationQuery`, the corresponding HTTP query string type for the above type
 - ~~Define `_checked` variants of both `publish_block` and `publish_blinded_block` that enforce a validation level at a type level~~
 - Add interactive tests to the `bn_http_api_tests` test target covering each validation level (to their own test module, `broadcast_validation_tests`)
   - `beacon/blocks`
       - `broadcast_validation=gossip`
         - Invalid (400)
         - Full Pass (200)
         - Partial Pass (202)
        - `broadcast_validation=consensus`
          - Invalid (400)
          - Only gossip (400)
          - Only consensus pass (i.e., equivocates) (200)
          - Full pass (200)
        - `broadcast_validation=consensus_and_equivocation`
          - Invalid (400)
          - Invalid due to early equivocation (400)
          - Only gossip (400)
          - Only consensus (400)
          - Pass (200)
   - `beacon/blinded_blocks`
       - `broadcast_validation=gossip`
         - Invalid (400)
         - Full Pass (200)
         - Partial Pass (202)
        - `broadcast_validation=consensus`
          - Invalid (400)
          - Only gossip (400)
          - ~~Only consensus pass (i.e., equivocates) (200)~~
          - Full pass (200)
        - `broadcast_validation=consensus_and_equivocation`
          - Invalid (400)
          - Invalid due to early equivocation (400)
          - Only gossip (400)
          - Only consensus (400)
          - Pass (200)
 - Add a new trait, `IntoGossipVerifiedBlock`, which allows type-level guarantees to be made as to gossip validity
 - Modify the structure of the `ObservedBlockProducers` cache from a `(slot, validator_index)` mapping to a `((slot, validator_index), block_root)` mapping
 - Modify `ObservedBlockProducers::proposer_has_been_observed` to return a `SeenBlock` rather than a boolean on success
 - Punish gossip peer (low) for submitting equivocating blocks
 - Rename `BlockError::SlashablePublish` to `BlockError::SlashableProposal`

## Additional Info

This PR contains changes that directly modify how blocks are verified within the client. For more context, consult [comments in-thread](https://github.com/sigp/lighthouse/pull/4316#discussion_r1234724202).
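As a rough illustration of how the three `broadcast_validation` levels gate publication (assumed logic shape, not the actual handler code):

```rust
// Each level requires strictly more checks to pass before the block is broadcast.
#[derive(Debug, Clone, Copy)]
enum BroadcastValidation {
    Gossip,
    Consensus,
    ConsensusAndEquivocation,
}

struct BlockChecks {
    gossip_valid: bool,
    consensus_valid: bool,
    equivocates: bool,
}

fn may_broadcast(level: BroadcastValidation, checks: &BlockChecks) -> bool {
    if !checks.gossip_valid {
        return false; // every level requires gossip validity
    }
    match level {
        BroadcastValidation::Gossip => true,
        BroadcastValidation::Consensus => checks.consensus_valid,
        BroadcastValidation::ConsensusAndEquivocation => {
            checks.consensus_valid && !checks.equivocates
        }
    }
}

fn main() {
    let checks = BlockChecks { gossip_valid: true, consensus_valid: true, equivocates: true };
    // Matches the test matrix above: an equivocating block passes `consensus`
    // but is rejected at `consensus_and_equivocation`.
    assert!(may_broadcast(BroadcastValidation::Consensus, &checks));
    assert!(!may_broadcast(BroadcastValidation::ConsensusAndEquivocation, &checks));
}
```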


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-06-29 12:02:38 +00:00