Rename eth2_libp2p to lighthouse_network (#2702)
## Description
The `eth2_libp2p` crate was originally named and designed to incorporate a simple libp2p integration into Lighthouse. Since its origins, the crate's purpose has expanded dramatically. It now houses a lot more sophistication that is specific to Lighthouse and is no longer just a libp2p integration.
As of this writing it houses the following high-level, Lighthouse-specific logic:
- Lighthouse's implementation of the eth2 RPC protocol and its specific encodings/decodings
- Integration and handling of ENRs with respect to libp2p and eth2
- Lighthouse's discovery logic, its integration with discv5, and logic for searching for and handling peers
- Lighthouse's peer manager - This is a large module handling various aspects of Lighthouse's network, such as peer scoring, handling pings and metadata, connection maintenance and recording, etc.
- Lighthouse's peer database - This is a collection of information stored for each individual peer which is specific to Lighthouse. We store connection state, sync state, last-seen IPs, scores, etc. The data stored for each peer is designed for various elements of the Lighthouse code base, such as syncing and the HTTP API.
- Gossipsub scoring - This stores a collection of gossipsub 1.1 scoring mechanisms that are continuously analysed and updated based on the Ethereum 2 networks and how Lighthouse performs on them.
- Lighthouse-specific types for managing gossipsub topics, sync status and ENR fields
- Lighthouse's network HTTP API metrics - A collection of metrics for Lighthouse network monitoring
- Lighthouse's custom configuration of all networking protocols: RPC, gossipsub, discovery, identify and libp2p
Therefore it makes sense to rename the crate to reflect its current purpose: it manages the majority of Lighthouse's network stack. This PR renames the crate to `lighthouse_network`.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
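For downstream code the rename is purely a path change. A minimal sketch of what an updated import might look like (the `PeerId` re-export used here is only an illustration, not something introduced by this PR):

```rust
// Previously: use eth2_libp2p::PeerId;
// After the rename the same item comes from the new crate name
// (assuming the crate's re-exports are otherwise unchanged).
use lighthouse_network::PeerId;

fn log_connected(peer_id: &PeerId) {
    println!("connected to peer {}", peer_id);
}
```

Workspace members that depended on `eth2_libp2p` would likewise point their `[dependencies]` entry at the renamed `lighthouse_network` path.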

[package]
name = "lighthouse_network"
version = "0.2.0"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2021"

[dependencies]
discv5 = { version = "0.1.0", features = ["libp2p"] }
unsigned-varint = { version = "0.6.0", features = ["codec"] }
types = { path = "../../consensus/types" }
eth2_ssz_types = "0.2.2"
serde = { version = "1.0.116", features = ["derive"] }
serde_derive = "1.0.116"
eth2_ssz = "0.4.1"

Implement SSZ union type (#2579)
## Issue Addressed
NA
## Proposed Changes
Implements the "union" type from the SSZ spec for `ssz`, `ssz_derive`, `tree_hash` and `tree_hash_derive` so it may be derived for `enums`:
https://github.com/ethereum/consensus-specs/blob/v1.1.0-beta.3/ssz/simple-serialize.md#union
The union type is required for the merge, since the `Transaction` type is defined as a single-variant union `Union[OpaqueTransaction]`.
### Crate Updates
This PR will (hopefully) cause CI to publish new versions for the following crates:
- `eth2_ssz_derive`: `0.2.1` -> `0.3.0`
- `eth2_ssz`: `0.3.0` -> `0.4.0`
- `eth2_ssz_types`: `0.2.0` -> `0.2.1`
- `tree_hash`: `0.3.0` -> `0.4.0`
- `tree_hash_derive`: `0.3.0` -> `0.4.0`
Since these crates depend on each other, I've had to add a workspace-level `[patch]` for them. A follow-up PR will need to remove this patch once the new versions are published.
### Union Behaviors
We already had SSZ `Encode` and `TreeHash` derives for enums; however, they just did a "transparent" pass-through of the inner value. Since the "union" decoding from the spec is in conflict with the transparent method, I've required that every `enum` has exactly one of the following enum-level attributes:
#### SSZ
- `#[ssz(enum_behaviour = "union")]`
- matches the spec used for the merge
- `#[ssz(enum_behaviour = "transparent")]`
- maintains existing functionality
- not supported for `Decode` (never was)
#### TreeHash
- `#[tree_hash(enum_behaviour = "union")]`
- matches the spec used for the merge
- `#[tree_hash(enum_behaviour = "transparent")]`
- maintains existing functionality
This means that we can maintain the existing transparent behaviour, but all existing users will get a compile-time error until they explicitly opt-in to being transparent.
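For illustration, a minimal sketch of opting in to the union behaviour; the enum below is made up for this example rather than taken from the PR:

```rust
use ssz_derive::{Decode, Encode};

// Hypothetical single-variant union, shaped like the spec's
// `Union[OpaqueTransaction]`.
#[derive(Debug, PartialEq, Encode, Decode)]
#[ssz(enum_behaviour = "union")]
pub enum Transaction {
    Opaque(Vec<u8>),
}

fn main() {
    // Bring the trait methods (`as_ssz_bytes`, `from_ssz_bytes`) into scope.
    use ssz::{Decode, Encode};

    let tx = Transaction::Opaque(vec![1, 2, 3]);
    let bytes = tx.as_ssz_bytes();
    // A 1-byte union selector (0 for the first variant) precedes the
    // SSZ encoding of the inner value.
    assert_eq!(bytes[0], 0);
    assert_eq!(Transaction::from_ssz_bytes(&bytes).unwrap(), tx);
}
```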
### Legacy Option Encoding
Before this PR, we already had a union-esque encoding for `Option<T>`. However, this followed the *old* SSZ spec, where the union selector was 4 bytes. During the merge specification work, the spec was changed to use 1 byte for the selector.
Whilst the 4-byte `Option` encoding was never used in the spec, we used it in our database. Writing a migration script for all occurrences of `Option` in the database would be painful, especially since it's used in the `CommitteeCache`. To avoid the migration script, I added a serde-esque `#[ssz(with = "module")]` field-level attribute to `ssz_derive` so that we can opt into the 4-byte encoding on a field-by-field basis.
The `ssz::legacy::four_byte_impl!` macro allows a one-liner to define the module required for the `#[ssz(with = "module")]` for some `Option<T> where T: Encode + Decode`.
Notably, **I have removed `Encode` and `Decode` impls for `Option`**. I've done this to force a break on downstream users. Like I mentioned, `Option` isn't used in the spec so I don't think it'll be *that* annoying. I think it's nicer than quietly having two different union implementations or quietly breaking the existing `Option` impl.
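A rough sketch of the field-level opt-in; the struct, its fields and the module name are hypothetical, and the macro invocation form is inferred from the description above rather than copied from the crate:

```rust
use ssz_derive::{Decode, Encode};

// Assumed to define a `four_byte_option_u64` module for `Option<u64>`;
// the exact macro path and argument form may differ in the real crate.
ssz::legacy::four_byte_impl!(four_byte_option_u64, u64);

#[derive(Encode, Decode)]
pub struct CacheEntry {
    pub epoch: u64,
    // Keeps the old 4-byte selector encoding for this field only, so data
    // already stored in the database continues to decode without a migration.
    #[ssz(with = "four_byte_option_u64")]
    pub shuffling_position: Option<u64>,
}
```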
### Crate Publish Ordering
I've modified the order in which CI publishes crates to ensure that we don't publish a crate before the crates it depends upon have been published.
## TODO
- [ ] Queue a follow-up `[patch]`-removing PR.

eth2_ssz_derive = "0.3.0"
tree_hash = "0.4.1"
tree_hash_derive = "0.4.0"
slog = { version = "2.5.2", features = ["max_level_trace"] }
lighthouse_version = { path = "../../common/lighthouse_version" }
tokio = { version = "1.14.0", features = ["time", "macros"] }
futures = "0.3.7"
error-chain = "0.12.4"
dirs = "3.0.1"
fnv = "1.0.7"
lazy_static = "1.4.0"
lighthouse_metrics = { path = "../../common/lighthouse_metrics" }
smallvec = "1.6.1"
tokio-io-timeout = "1.1.1"
lru = "0.7.1"
parking_lot = "0.12.0"
sha2 = "0.10"
snap = "1.0.1"
hex = "0.4.2"
tokio-util = { version = "0.6.2", features = ["codec", "compat", "time"] }
tiny-keccak = "2.0.2"
task_executor = { path = "../../common/task_executor" }
rand = "0.8.5"
directory = { path = "../../common/directory" }
regex = "1.5.5"
strum = { version = "0.24.0", features = ["derive"] }

Separate execution payloads in the DB (#3157)
## Proposed Changes
Reduce post-merge disk usage by not storing finalized execution payloads in Lighthouse's database.
:warning: **This is achieved in a backwards-incompatible way for networks that have already merged** :warning:. Kiln users and shadow fork enjoyers will be unable to downgrade after running the code from this PR. The upgrade migration may take several minutes to run, and can't be aborted after it begins.
The main changes are:
- New column in the database called `ExecPayload`, keyed by beacon block root.
- The `BeaconBlock` column now stores blinded blocks only.
- Lots of places that previously used full blocks now use blinded blocks, e.g. analytics APIs, block replay in the DB, etc.
- On finalization:
  - `prune_abandoned_forks` deletes non-canonical payloads whilst deleting non-canonical blocks.
  - `migrate_db` deletes finalized canonical payloads whilst deleting finalized states.
- Conversions between blinded and full blocks are implemented in a compositional way, duplicating some work from Sean's PR #3134.
- The execution layer has a new `get_payload_by_block_hash` method that reconstructs a payload using the EE's `eth_getBlockByHash` call.
  - I've tested manually that it works on Kiln, using Geth and Nethermind.
  - This isn't necessarily the most efficient method, and new engine APIs are being discussed to improve this: https://github.com/ethereum/execution-apis/pull/146.
- We're depending on the `ethers` master branch, due to lots of recent changes. We're also using a workaround for https://github.com/gakonst/ethers-rs/issues/1134.
- Payload reconstruction is used in the HTTP API via `BeaconChain::get_block`, which is now `async`. Due to the `async` fn, the `blocking_json` wrapper has been removed.
- Payload reconstruction is used in network RPC to serve blocks-by-{root,range} responses. Here the `async` adjustment is messier, although I think I've managed to come up with a reasonable compromise: the handlers take the `SendOnDrop` by value so that they can drop it on _task completion_ (after the `fn` returns). Still, this is introducing disk reads onto core executor threads, which may have a negative performance impact (thoughts appreciated).
## Additional Info
- [x] For performance it would be great to remove the cloning of full blocks when converting them to blinded blocks to write to disk. I'm going to experiment with a `put_block` API that takes the block by value, breaks it into a blinded block and a payload, stores the blinded block, and then re-assembles the full block for the caller.
- [x] We should measure the latency of blocks-by-root and blocks-by-range responses.
- [x] We should add integration tests that stress the payload reconstruction (basic tests done, issue for more extensive tests: https://github.com/sigp/lighthouse/issues/3159)
- [x] We should (manually) test the schema v9 migration from several prior versions, particularly as blocks have changed on disk and some migrations rely on being able to load blocks.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
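As a very rough sketch of the reconstruction idea, with all types and trait names below invented for this example rather than taken from Lighthouse's actual API: the stored blinded block supplies everything except the payload, the payload is fetched from the execution engine by block hash (via `eth_getBlockByHash`), and the two are stitched back together before the full block is served.

```rust
// Hypothetical, heavily simplified types; Lighthouse's real blinded/full
// block and payload types carry much more data than this.
pub struct BlindedBlock {
    pub slot: u64,
    pub payload_block_hash: [u8; 32],
}

pub struct ExecutionPayload {
    pub block_hash: [u8; 32],
    pub transactions: Vec<Vec<u8>>,
}

pub struct FullBlock {
    pub slot: u64,
    pub payload: ExecutionPayload,
}

// Stand-in for the execution engine client; in the PR the payload is
// recovered through the EE's `eth_getBlockByHash` call.
pub trait ExecutionEngine {
    fn get_payload_by_block_hash(&self, hash: [u8; 32]) -> Option<ExecutionPayload>;
}

pub fn reconstruct_full_block<E: ExecutionEngine>(
    ee: &E,
    blinded: BlindedBlock,
) -> Option<FullBlock> {
    // Fetch the payload that the blinded block commits to.
    let payload = ee.get_payload_by_block_hash(blinded.payload_block_hash)?;
    // A real implementation would verify the payload against the blinded
    // block's payload header before serving it; here we just check the hash.
    if payload.block_hash != blinded.payload_block_hash {
        return None;
    }
    Some(FullBlock { slot: blinded.slot, payload })
}
```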

superstruct = "0.5.0"
prometheus-client = "0.18.0"
unused_port = { path = "../../common/unused_port" }
delay_map = "0.1.1"
void = "1"

[dependencies.libp2p]
version = "0.50.0"
default-features = false
features = ["websocket", "identify", "mplex", "yamux", "noise", "gossipsub", "dns", "tcp", "tokio", "plaintext", "secp256k1", "macros", "ecdsa"]

[dev-dependencies]
slog-term = "2.6.0"
slog-async = "2.5.0"
tempfile = "3.1.0"
exit-future = "0.2.0"
void = "1"
quickcheck = "0.9.2"
quickcheck_macros = "0.9.1"

[features]
libp2p-websocket = []