//! This crate contains an HTTP server which serves the endpoints listed here:
//!
//! https://github.com/ethereum/beacon-APIs
//!
//! There are also some additional, non-standard endpoints behind the `/lighthouse/` path which are
//! used for development.
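//!
//! For example (illustrative request; the address and port depend on your configuration, the
//! defaults being `127.0.0.1:5052`), the standard API can be queried with a plain HTTP client:
//!
//! ```text
//! curl http://localhost:5052/eth/v1/beacon/genesis
//! ```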

mod attestation_performance;
mod attester_duties;
mod block_id;
mod block_packing_efficiency;
mod block_rewards;
mod database;
mod metrics;
mod proposer_duties;
mod state_id;
mod sync_committees;
mod validator_inclusion;
mod version;

use beacon_chain::{
    attestation_verification::VerifiedAttestation,
    observed_operations::ObservationOutcome,
    validator_monitor::{get_block_delay_ms, timestamp_now},
    AttestationError as AttnError, BeaconChain, BeaconChainError, BeaconChainTypes,
    HeadSafetyStatus, ProduceBlockVerification, WhenSlotSkipped,
};
use block_id::BlockId;
use eth2::types::{self as api_types, EndpointVersion, ValidatorId};
use lighthouse_network::{types::SyncState, EnrExt, NetworkGlobals, PeerId, PubsubMessage};
use lighthouse_version::version_with_platform;
use network::NetworkMessage;
use serde::{Deserialize, Serialize};
use slog::{crit, debug, error, info, warn, Logger};
use slot_clock::SlotClock;
use ssz::Encode;
use state_id::StateId;
use std::borrow::Cow;
use std::convert::TryInto;
use std::future::Future;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::path::PathBuf;
use std::pin::Pin;
use std::sync::Arc;
use tokio::sync::mpsc::UnboundedSender;
use tokio_stream::{wrappers::BroadcastStream, StreamExt};
use types::{
    Attestation, AttesterSlashing, BeaconBlockBodyMerge, BeaconBlockMerge, BeaconStateError,
    BlindedPayload, CommitteeCache, ConfigAndPreset, Epoch, EthSpec, ForkName, FullPayload,
    ProposerPreparationData, ProposerSlashing, RelativeEpoch, Signature, SignedAggregateAndProof,
    SignedBeaconBlock, SignedBeaconBlockMerge, SignedBlindedBeaconBlock,
    SignedContributionAndProof, SignedVoluntaryExit, Slot, SyncCommitteeMessage,
    SyncContributionData,
};
use version::{
    add_consensus_version_header, fork_versioned_response, inconsistent_fork_rejection,
    unsupported_version_rejection, V1,
};
use warp::http::StatusCode;
use warp::sse::Event;
use warp::Reply;
use warp::{http::Response, Filter};
use warp_utils::{
    query::multi_key_query,
    task::{blocking_json_task, blocking_task},
};

const API_PREFIX: &str = "eth";

/// If the node is within this many epochs from the head, we declare it to be synced regardless of
/// the network sync state.
///
/// This helps prevent attacks where nodes can convince us that we're syncing some non-existent
/// finalized head.
const SYNC_TOLERANCE_EPOCHS: u64 = 8;
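
// For example, with a mainnet-style spec of 32 slots per epoch this tolerance is 8 * 32 = 256
// slots (roughly 51 minutes at 12 seconds per slot). These figures are illustrative; the code
// below always derives the slot count from `T::EthSpec::slots_per_epoch()`.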

/// A custom type which allows for both unsecured and TLS-enabled HTTP servers.
type HttpServer = (SocketAddr, Pin<Box<dyn Future<Output = ()> + Send>>);

/// Configuration used when serving the HTTP server over TLS.
#[derive(PartialEq, Debug, Clone, Serialize, Deserialize)]
pub struct TlsConfig {
    pub cert: PathBuf,
    pub key: PathBuf,
}
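
// A minimal sketch of constructing a `TlsConfig`; the certificate and key paths are illustrative
// placeholders chosen by the operator, not defaults:
//
// let tls = TlsConfig {
//     cert: PathBuf::from("/path/to/cert.pem"),
//     key: PathBuf::from("/path/to/key.pem"),
// };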

/// A wrapper around all the items required to spawn the HTTP server.
///
/// The server will gracefully handle the case where any fields are `None`.
pub struct Context<T: BeaconChainTypes> {
    pub config: Config,
    pub chain: Option<Arc<BeaconChain<T>>>,
    pub network_tx: Option<UnboundedSender<NetworkMessage<T::EthSpec>>>,
    pub network_globals: Option<Arc<NetworkGlobals<T::EthSpec>>>,
    pub eth1_service: Option<eth1::Service>,
    pub log: Logger,
}

/// Configuration for the HTTP server.
#[derive(PartialEq, Debug, Clone, Serialize, Deserialize)]
pub struct Config {
    pub enabled: bool,
    pub listen_addr: IpAddr,
    pub listen_port: u16,
    pub allow_origin: Option<String>,
    pub serve_legacy_spec: bool,
    pub tls_config: Option<TlsConfig>,
    pub allow_sync_stalled: bool,
}

impl Default for Config {
    fn default() -> Self {
        Self {
            enabled: false,
            listen_addr: IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)),
            listen_port: 5052,
            allow_origin: None,
            serve_legacy_spec: true,
            tls_config: None,
            allow_sync_stalled: false,
        }
    }
}
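
// A minimal sketch of enabling the server on its default 127.0.0.1:5052 address (all other
// fields keep the defaults defined above):
//
// let config = Config {
//     enabled: true,
//     ..Config::default()
// };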

#[derive(Debug)]
pub enum Error {
    Warp(warp::Error),
    Other(String),
}

impl From<warp::Error> for Error {
    fn from(e: warp::Error) -> Self {
        Error::Warp(e)
    }
}

impl From<String> for Error {
    fn from(e: String) -> Self {
        Error::Other(e)
    }
}
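
// These `From` impls let the `?` operator lift both `warp::Error` and plain `String` messages
// into `Error` automatically, e.g. (illustrative) `Err("bind failed".to_string())?` inside a
// function returning `Result<_, Error>`.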

/// Creates a `warp` logging wrapper which we use to create `slog` logs.
pub fn slog_logging(
    log: Logger,
) -> warp::filters::log::Log<impl Fn(warp::filters::log::Info) + Clone> {
    warp::log::custom(move |info| {
        match info.status() {
            status
                if status == StatusCode::OK
                    || status == StatusCode::NOT_FOUND
                    || status == StatusCode::PARTIAL_CONTENT =>
            {
                debug!(
                    log,
                    "Processed HTTP API request";
                    "elapsed" => format!("{:?}", info.elapsed()),
                    "status" => status.to_string(),
                    "path" => info.path(),
                    "method" => info.method().to_string(),
                );
            }
            status => {
                warn!(
                    log,
                    "Error processing HTTP API request";
                    "elapsed" => format!("{:?}", info.elapsed()),
                    "status" => status.to_string(),
                    "path" => info.path(),
                    "method" => info.method().to_string(),
                );
            }
        };
    })
}

/// Creates a `warp` logging wrapper which we use for Prometheus metrics (not necessarily logging,
/// per se).
pub fn prometheus_metrics() -> warp::filters::log::Log<impl Fn(warp::filters::log::Info) + Clone> {
    warp::log::custom(move |info| {
        // Here we restrict the `info.path()` value to some predefined values. Without this, we end
        // up with a new metric type each time someone includes something unique in the path (e.g.,
        // a block hash).
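        // For example (illustrative path), `/eth/v1/beacon/blocks/0xdead...beef` would otherwise
        // mint a fresh label value for every block root, growing the metric families without bound.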
        let path = {
            let equals = |s: &'static str| -> Option<&'static str> {
                if info.path() == format!("/{}/{}", API_PREFIX, s) {
                    Some(s)
                } else {
                    None
                }
            };

            let starts_with = |s: &'static str| -> Option<&'static str> {
                if info.path().starts_with(&format!("/{}/{}", API_PREFIX, s)) {
                    Some(s)
                } else {
                    None
                }
            };

            // First line covers `POST /v1/beacon/blocks` only
            equals("v1/beacon/blocks")
                .or_else(|| starts_with("v1/validator/duties/attester"))
                .or_else(|| starts_with("v1/validator/duties/proposer"))
                .or_else(|| starts_with("v1/validator/attestation_data"))
                .or_else(|| starts_with("v1/validator/blocks"))
                .or_else(|| starts_with("v2/validator/blocks"))
                .or_else(|| starts_with("v1/validator/aggregate_attestation"))
                .or_else(|| starts_with("v1/validator/aggregate_and_proofs"))
                .or_else(|| starts_with("v1/validator/beacon_committee_subscriptions"))
                .or_else(|| starts_with("v1/beacon/"))
                .or_else(|| starts_with("v2/beacon/"))
                .or_else(|| starts_with("v1/config/"))
                .or_else(|| starts_with("v1/debug/"))
                .or_else(|| starts_with("v1/events/"))
                .or_else(|| starts_with("v1/node/"))
                .or_else(|| starts_with("v1/validator/"))
                .unwrap_or("other")
        };

        metrics::inc_counter_vec(&metrics::HTTP_API_PATHS_TOTAL, &[path]);
        metrics::inc_counter_vec(
            &metrics::HTTP_API_STATUS_CODES_TOTAL,
            &[&info.status().to_string()],
        );
        metrics::observe_timer_vec(&metrics::HTTP_API_PATHS_TIMES, &[path], info.elapsed());
    })
}

/// Creates a server that will serve requests using information from `ctx`.
///
/// The server will shut down gracefully when the `shutdown` future resolves.
///
/// ## Returns
///
/// This function will bind the server to the provided address and then return a tuple of:
///
/// - `SocketAddr`: the address that the HTTP server will listen on.
/// - `Future`: the actual server future that will need to be awaited.
///
/// ## Errors
///
/// Returns an error if the server is unable to bind or there is another error during
/// configuration.
pub fn serve<T: BeaconChainTypes>(
    ctx: Arc<Context<T>>,
    shutdown: impl Future<Output = ()> + Send + Sync + 'static,
) -> Result<HttpServer, Error> {
    let config = ctx.config.clone();
    let allow_sync_stalled = config.allow_sync_stalled;
    let log = ctx.log.clone();

    // Configure CORS.
    let cors_builder = {
        let builder = warp::cors()
            .allow_methods(vec!["GET", "POST"])
            .allow_headers(vec!["Content-Type"]);

        warp_utils::cors::set_builder_origins(
            builder,
            config.allow_origin.as_deref(),
            (config.listen_addr, config.listen_port),
        )?
    };

    // Sanity check.
    if !config.enabled {
        crit!(log, "Cannot start disabled HTTP server");
        return Err(Error::Other(
            "A disabled server should not be started".to_string(),
        ));
    }

    // Create a filter that extracts the endpoint version.
    let any_version = warp::path(API_PREFIX).and(warp::path::param::<EndpointVersion>().or_else(
        |_| async move {
            Err(warp_utils::reject::custom_bad_request(
                "Invalid version identifier".to_string(),
            ))
        },
    ));

    // Filter that enforces a single endpoint version and then discards the `EndpointVersion`.
    let single_version = |reqd: EndpointVersion| {
        any_version
            .and_then(move |version| async move {
                if version == reqd {
                    Ok(())
                } else {
                    Err(unsupported_version_rejection(version))
                }
            })
            .untuple_one()
    };

    let eth1_v1 = single_version(V1);
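    // e.g. routes built from `eth1_v1` only match paths under `/eth/v1/...`; a `/eth/v2/...`
    // request reaching one of them yields an unsupported-version rejection (paths illustrative).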

    // Create a `warp` filter that provides access to the network globals.
    let inner_network_globals = ctx.network_globals.clone();
    let network_globals = warp::any()
        .map(move || inner_network_globals.clone())
        .and_then(|network_globals| async move {
            match network_globals {
                Some(globals) => Ok(globals),
                None => Err(warp_utils::reject::custom_not_found(
                    "network globals are not initialized.".to_string(),
                )),
            }
        });

    // Create a `warp` filter that provides access to the beacon chain.
    let inner_ctx = ctx.clone();
    let chain_filter =
        warp::any()
            .map(move || inner_ctx.chain.clone())
            .and_then(|chain| async move {
                match chain {
                    Some(chain) => Ok(chain),
                    None => Err(warp_utils::reject::custom_not_found(
                        "Beacon chain genesis has not yet been observed.".to_string(),
                    )),
                }
            });

    // Create a `warp` filter that provides access to the network sender channel.
    let inner_ctx = ctx.clone();
    let network_tx_filter = warp::any()
        .map(move || inner_ctx.network_tx.clone())
        .and_then(|network_tx| async move {
            match network_tx {
                Some(network_tx) => Ok(network_tx),
                None => Err(warp_utils::reject::custom_not_found(
                    "The networking stack has not yet started.".to_string(),
                )),
            }
        });

    // Create a `warp` filter that provides access to the Eth1 service.
    let inner_ctx = ctx.clone();
    let eth1_service_filter = warp::any()
        .map(move || inner_ctx.eth1_service.clone())
        .and_then(|eth1_service| async move {
            match eth1_service {
                Some(eth1_service) => Ok(eth1_service),
                None => Err(warp_utils::reject::custom_not_found(
                    "The Eth1 service is not started. Use --eth1 on the CLI.".to_string(),
                )),
            }
        });

    // Create a `warp` filter that rejects requests whilst the node is syncing.
    let not_while_syncing_filter =
        warp::any()
            .and(network_globals.clone())
            .and(chain_filter.clone())
            .and_then(
                move |network_globals: Arc<NetworkGlobals<T::EthSpec>>,
                      chain: Arc<BeaconChain<T>>| async move {
                    match *network_globals.sync_state.read() {
                        SyncState::SyncingFinalized { .. } => {
                            let head_slot = chain
                                .best_slot()
                                .map_err(warp_utils::reject::beacon_chain_error)?;

                            let current_slot =
                                chain.slot_clock.now_or_genesis().ok_or_else(|| {
                                    warp_utils::reject::custom_server_error(
                                        "unable to read slot clock".to_string(),
                                    )
                                })?;

                            let tolerance = SYNC_TOLERANCE_EPOCHS * T::EthSpec::slots_per_epoch();

                            if head_slot + tolerance >= current_slot {
                                Ok(())
                            } else {
                                Err(warp_utils::reject::not_synced(format!(
                                    "head slot is {}, current slot is {}",
                                    head_slot, current_slot
                                )))
                            }
                        }
                        SyncState::SyncingHead { .. }
                        | SyncState::SyncTransition
                        | SyncState::BackFillSyncing { .. } => Ok(()),
                        SyncState::Synced => Ok(()),
                        SyncState::Stalled if allow_sync_stalled => Ok(()),
                        SyncState::Stalled => Err(warp_utils::reject::not_synced(
                            "sync is stalled".to_string(),
                        )),
                    }
                },
            )
            .untuple_one();
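    // Worked example (illustrative, mainnet-style 32 slots/epoch => 256-slot tolerance): a node
    // whose head is at slot 1_000 still serves requests while the clock reads slot 1_200
    // (1_000 + 256 >= 1_200), but is rejected as not-synced once the clock reaches slot 1_300.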

    // Create a `warp` filter that rejects requests unless the head has been verified by the
    // execution layer.
    let only_with_safe_head = warp::any()
        .and(chain_filter.clone())
        .and_then(move |chain: Arc<BeaconChain<T>>| async move {
            let status = chain.head_safety_status().map_err(|e| {
                warp_utils::reject::custom_server_error(format!(
                    "failed to read head safety status: {:?}",
                    e
                ))
            })?;
            match status {
                HeadSafetyStatus::Safe(_) => Ok(()),
                HeadSafetyStatus::Unsafe(hash) => {
                    Err(warp_utils::reject::custom_server_error(format!(
                        "optimistic head hash {:?} has not been verified by the execution layer",
                        hash
                    )))
                }
                HeadSafetyStatus::Invalid(hash) => {
                    Err(warp_utils::reject::custom_server_error(format!(
                        "the head block has an invalid payload {:?}, this may be unrecoverable",
                        hash
                    )))
                }
            }
        })
        .untuple_one();

    // Create a `warp` filter that provides access to the logger.
    let inner_ctx = ctx.clone();
    let log_filter = warp::any().map(move || inner_ctx.log.clone());

    /*
     *
     * Start of HTTP method definitions.
     *
     */

    // GET beacon/genesis
    let get_beacon_genesis = eth1_v1
        .and(warp::path("beacon"))
        .and(warp::path("genesis"))
        .and(warp::path::end())
        .and(chain_filter.clone())
        .and_then(|chain: Arc<BeaconChain<T>>| {
            blocking_json_task(move || {
                chain
                    .head_info()
                    .map_err(warp_utils::reject::beacon_chain_error)
                    .map(|head| api_types::GenesisData {
                        genesis_time: head.genesis_time,
                        genesis_validators_root: head.genesis_validators_root,
                        genesis_fork_version: chain.spec.genesis_fork_version,
                    })
                    .map(api_types::GenericResponse::from)
            })
        });
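    // The handler returns the usual `{"data": ...}` envelope, e.g. (illustrative values):
    // {"data": {"genesis_time": "1606824023", "genesis_validators_root": "0x...",
    // "genesis_fork_version": "0x00000000"}}.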

    /*
     * beacon/states/{state_id}
     */

    let beacon_states_path = eth1_v1
        .and(warp::path("beacon"))
        .and(warp::path("states"))
        .and(warp::path::param::<StateId>().or_else(|_| async {
            Err(warp_utils::reject::custom_bad_request(
                "Invalid state ID".to_string(),
            ))
        }))
        .and(chain_filter.clone());

    // GET beacon/states/{state_id}/root
    let get_beacon_state_root = beacon_states_path
        .clone()
        .and(warp::path("root"))
        .and(warp::path::end())
        .and_then(|state_id: StateId, chain: Arc<BeaconChain<T>>| {
            blocking_json_task(move || {
                state_id
                    .root(&chain)
                    .map(api_types::RootData::from)
                    .map(api_types::GenericResponse::from)
            })
        });

    // GET beacon/states/{state_id}/fork
    let get_beacon_state_fork = beacon_states_path
        .clone()
        .and(warp::path("fork"))
        .and(warp::path::end())
        .and_then(|state_id: StateId, chain: Arc<BeaconChain<T>>| {
            blocking_json_task(move || state_id.fork(&chain).map(api_types::GenericResponse::from))
        });

    // GET beacon/states/{state_id}/finality_checkpoints
    let get_beacon_state_finality_checkpoints = beacon_states_path
        .clone()
        .and(warp::path("finality_checkpoints"))
        .and(warp::path::end())
        .and_then(|state_id: StateId, chain: Arc<BeaconChain<T>>| {
            blocking_json_task(move || {
                state_id
                    .map_state(&chain, |state| {
                        Ok(api_types::FinalityCheckpointsData {
                            previous_justified: state.previous_justified_checkpoint(),
                            current_justified: state.current_justified_checkpoint(),
                            finalized: state.finalized_checkpoint(),
                        })
                    })
                    .map(api_types::GenericResponse::from)
            })
        });

    // GET beacon/states/{state_id}/validator_balances?id
    let get_beacon_state_validator_balances = beacon_states_path
        .clone()
        .and(warp::path("validator_balances"))
        .and(warp::path::end())
        .and(multi_key_query::<api_types::ValidatorBalancesQuery>())
        .and_then(
            |state_id: StateId,
             chain: Arc<BeaconChain<T>>,
             query_res: Result<api_types::ValidatorBalancesQuery, warp::Rejection>| {
                blocking_json_task(move || {
                    let query = query_res?;
                    state_id
                        .map_state(&chain, |state| {
                            Ok(state
                                .validators()
                                .iter()
                                .zip(state.balances().iter())
                                .enumerate()
                                // filter by validator id(s) if provided
                                .filter(|(index, (validator, _))| {
                                    query.id.as_ref().map_or(true, |ids| {
                                        ids.iter().any(|id| match id {
                                            ValidatorId::PublicKey(pubkey) => {
                                                &validator.pubkey == pubkey
                                            }
                                            ValidatorId::Index(param_index) => {
                                                *param_index == *index as u64
                                            }
                                        })
                                    })
                                })
                                .map(|(index, (_, balance))| {
                                    Some(api_types::ValidatorBalanceData {
                                        index: index as u64,
                                        balance: *balance,
                                    })
                                })
                                .collect::<Vec<_>>())
                        })
                        .map(api_types::GenericResponse::from)
                })
            },
        );
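    // The `id` filter accepts validator indices and/or public keys, e.g. (illustrative)
    // `GET /eth/v1/beacon/states/head/validator_balances?id=1,2` or `?id=0x93a2...`; exactly how
    // repeated or comma-separated keys are split is up to `multi_key_query`.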

    // GET beacon/states/{state_id}/validators?id,status
    let get_beacon_state_validators = beacon_states_path
        .clone()
        .and(warp::path("validators"))
        .and(warp::path::end())
        .and(multi_key_query::<api_types::ValidatorsQuery>())
        .and_then(
            |state_id: StateId,
             chain: Arc<BeaconChain<T>>,
             query_res: Result<api_types::ValidatorsQuery, warp::Rejection>| {
                blocking_json_task(move || {
                    let query = query_res?;
                    state_id
                        .map_state(&chain, |state| {
                            let epoch = state.current_epoch();
                            let far_future_epoch = chain.spec.far_future_epoch;

                            Ok(state
                                .validators()
                                .iter()
                                .zip(state.balances().iter())
                                .enumerate()
                                // filter by validator id(s) if provided
                                .filter(|(index, (validator, _))| {
                                    query.id.as_ref().map_or(true, |ids| {
                                        ids.iter().any(|id| match id {
                                            ValidatorId::PublicKey(pubkey) => {
                                                &validator.pubkey == pubkey
                                            }
                                            ValidatorId::Index(param_index) => {
                                                *param_index == *index as u64
                                            }
                                        })
                                    })
                                })
                                // filter by status(es) if provided and map the result
                                .filter_map(|(index, (validator, balance))| {
                                    let status = api_types::ValidatorStatus::from_validator(
                                        validator,
                                        epoch,
                                        far_future_epoch,
                                    );

                                    let status_matches =
                                        query.status.as_ref().map_or(true, |statuses| {
                                            statuses.contains(&status)
                                                || statuses.contains(&status.superstatus())
                                        });

                                    if status_matches {
                                        Some(api_types::ValidatorData {
                                            index: index as u64,
                                            balance: *balance,
                                            status,
                                            validator: validator.clone(),
                                        })
                                    } else {
                                        None
                                    }
                                })
                                .collect::<Vec<_>>())
                        })
                        .map(api_types::GenericResponse::from)
                })
            },
        );
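    // e.g. (illustrative) `?status=active` also matches validators whose fine-grained status is
    // `active_ongoing` or `active_exiting`, because the filter above checks the superstatus too.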

    // GET beacon/states/{state_id}/validators/{validator_id}
    let get_beacon_state_validators_id = beacon_states_path
        .clone()
        .and(warp::path("validators"))
        .and(warp::path::param::<ValidatorId>().or_else(|_| async {
            Err(warp_utils::reject::custom_bad_request(
                "Invalid validator ID".to_string(),
            ))
        }))
        .and(warp::path::end())
        .and_then(
            |state_id: StateId, chain: Arc<BeaconChain<T>>, validator_id: ValidatorId| {
                blocking_json_task(move || {
                    state_id
                        .map_state(&chain, |state| {
                            let index_opt = match &validator_id {
                                ValidatorId::PublicKey(pubkey) => {
                                    state.validators().iter().position(|v| v.pubkey == *pubkey)
                                }
                                ValidatorId::Index(index) => Some(*index as usize),
                            };

                            index_opt
                                .and_then(|index| {
                                    let validator = state.validators().get(index)?;
                                    let balance = *state.balances().get(index)?;
                                    let epoch = state.current_epoch();
                                    let far_future_epoch = chain.spec.far_future_epoch;

                                    Some(api_types::ValidatorData {
                                        index: index as u64,
                                        balance,
                                        status: api_types::ValidatorStatus::from_validator(
                                            validator,
                                            epoch,
                                            far_future_epoch,
                                        ),
                                        validator: validator.clone(),
                                    })
                                })
                                .ok_or_else(|| {
                                    warp_utils::reject::custom_not_found(format!(
                                        "unknown validator: {}",
                                        validator_id
                                    ))
                                })
                        })
                        .map(api_types::GenericResponse::from)
                })
            },
        );

    // GET beacon/states/{state_id}/committees?slot,index,epoch
    let get_beacon_state_committees = beacon_states_path
        .clone()
        .and(warp::path("committees"))
        .and(warp::query::<api_types::CommitteesQuery>())
        .and(warp::path::end())
        .and_then(
            |state_id: StateId, chain: Arc<BeaconChain<T>>, query: api_types::CommitteesQuery| {
                // the api spec says if the epoch is not present then the epoch of the state should be used
                let query_state_id = query.epoch.map_or(state_id, |epoch| {
                    StateId::slot(epoch.start_slot(T::EthSpec::slots_per_epoch()))
                });

                blocking_json_task(move || {
                    query_state_id.map_state(&chain, |state| {
                        let epoch = state.slot().epoch(T::EthSpec::slots_per_epoch());

                        let committee_cache = if state
                            .committee_cache_is_initialized(RelativeEpoch::Current)
                        {
                            state
                                .committee_cache(RelativeEpoch::Current)
                                .map(Cow::Borrowed)
                        } else {
                            CommitteeCache::initialized(state, epoch, &chain.spec).map(Cow::Owned)
                        }
                        .map_err(BeaconChainError::BeaconStateError)
                        .map_err(warp_utils::reject::beacon_chain_error)?;

                        // Use either the supplied slot or all slots in the epoch.
                        let slots = query.slot.map(|slot| vec![slot]).unwrap_or_else(|| {
                            epoch.slot_iter(T::EthSpec::slots_per_epoch()).collect()
                        });

                        // Use either the supplied committee index or all available indices.
                        let indices = query.index.map(|index| vec![index]).unwrap_or_else(|| {
                            (0..committee_cache.committees_per_slot()).collect()
                        });

                        let mut response = Vec::with_capacity(slots.len() * indices.len());

                        for slot in slots {
                            // It is not acceptable to query with a slot that is not within the
                            // specified epoch.
                            if slot.epoch(T::EthSpec::slots_per_epoch()) != epoch {
                                return Err(warp_utils::reject::custom_bad_request(format!(
                                    "{} is not in epoch {}",
                                    slot, epoch
                                )));
                            }

                            for &index in &indices {
                                let committee = committee_cache
                                    .get_beacon_committee(slot, index)
                                    .ok_or_else(|| {
                                        warp_utils::reject::custom_bad_request(format!(
                                            "committee index {} does not exist in epoch {}",
                                            index, epoch
                                        ))
                                    })?;

                                response.push(api_types::CommitteeData {
                                    index,
                                    slot,
                                    validators: committee
                                        .committee
                                        .iter()
                                        .map(|i| *i as u64)
                                        .collect(),
                                });
                            }
                        }

                        Ok(api_types::GenericResponse::from(response))
                    })
                })
            },
        );
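    // With neither `slot` nor `index` supplied this enumerates every committee in the epoch,
    // e.g. 32 slots x 64 committees per slot = 2048 entries on a large mainnet-style state
    // (figures illustrative; the real counts come from the spec and the committee cache).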

    // GET beacon/states/{state_id}/sync_committees?epoch
    let get_beacon_state_sync_committees = beacon_states_path
        .clone()
        .and(warp::path("sync_committees"))
        .and(warp::query::<api_types::SyncCommitteesQuery>())
        .and(warp::path::end())
        .and_then(
            |state_id: StateId,
             chain: Arc<BeaconChain<T>>,
             query: api_types::SyncCommitteesQuery| {
                blocking_json_task(move || {
                    let sync_committee = state_id.map_state(&chain, |state| {
                        let current_epoch = state.current_epoch();
                        let epoch = query.epoch.unwrap_or(current_epoch);
                        state
                            .get_built_sync_committee(epoch, &chain.spec)
                            .map(|committee| committee.clone())
                            .map_err(|e| match e {
                                BeaconStateError::SyncCommitteeNotKnown { .. } => {
                                    warp_utils::reject::custom_bad_request(format!(
                                        "state at epoch {} has no sync committee for epoch {}",
                                        current_epoch, epoch
                                    ))
                                }
                                BeaconStateError::IncorrectStateVariant => {
                                    warp_utils::reject::custom_bad_request(format!(
                                        "state at epoch {} is not activated for Altair",
                                        current_epoch,
                                    ))
                                }
                                e => warp_utils::reject::beacon_state_error(e),
                            })
                    })?;

                    let validators = chain
                        .validator_indices(sync_committee.pubkeys.iter())
                        .map_err(warp_utils::reject::beacon_chain_error)?;

                    let validator_aggregates = validators
                        .chunks_exact(T::EthSpec::sync_subcommittee_size())
                        .map(|indices| api_types::SyncSubcommittee {
                            indices: indices.to_vec(),
                        })
                        .collect();

                    let response = api_types::SyncCommitteeByValidatorIndices {
                        validators,
                        validator_aggregates,
                    };

                    Ok(api_types::GenericResponse::from(response))
                })
            },
        );
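    // e.g. with mainnet-style parameters the 512 sync-committee members are chunked into 4
    // subcommittees of 128 indices each (illustrative figures; the real sizes come from
    // `T::EthSpec::sync_subcommittee_size()`).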

    // GET beacon/headers
    //
    // Note: this endpoint only returns information about blocks in the canonical chain. Given that
    // there's a `canonical` flag on the response, I assume it should also return non-canonical
    // things. Returning non-canonical things is hard for us since we don't already have a
    // mechanism for arbitrary forwards block iteration, we only support iterating forwards along
    // the canonical chain.
    let get_beacon_headers = eth1_v1
        .and(warp::path("beacon"))
        .and(warp::path("headers"))
        .and(warp::query::<api_types::HeadersQuery>())
        .and(warp::path::end())
        .and(chain_filter.clone())
        .and_then(
            |query: api_types::HeadersQuery, chain: Arc<BeaconChain<T>>| {
                blocking_json_task(move || {
                    let (root, block) = match (query.slot, query.parent_root) {
                        // No query parameters, return the canonical head block.
                        (None, None) => chain
                            .head_beacon_block()
                            .map_err(warp_utils::reject::beacon_chain_error)
                            .map(|block| (block.canonical_root(), block.into()))?,
                        // Only the parent root parameter, do a forwards-iterator lookup.
                        (None, Some(parent_root)) => {
                            let parent = BlockId::from_root(parent_root).blinded_block(&chain)?;
                            let (root, _slot) = chain
                                .forwards_iter_block_roots(parent.slot())
                                .map_err(warp_utils::reject::beacon_chain_error)?
                                // Ignore any skip-slots immediately following the parent.
                                .find(|res| {
                                    res.as_ref().map_or(false, |(root, _)| *root != parent_root)
                                })
                                .transpose()
                                .map_err(warp_utils::reject::beacon_chain_error)?
                                .ok_or_else(|| {
                                    warp_utils::reject::custom_not_found(format!(
                                        "child of block with root {}",
                                        parent_root
                                    ))
                                })?;

                            BlockId::from_root(root)
                                .blinded_block(&chain)
                                .map(|block| (root, block))?
                        }
                        // Slot is supplied, search by slot and optionally filter by
                        // parent root.
                        (Some(slot), parent_root_opt) => {
                            let root = BlockId::from_slot(slot).root(&chain)?;
Separate execution payloads in the DB (#3157)
## Proposed Changes
Reduce post-merge disk usage by not storing finalized execution payloads in Lighthouse's database.
:warning: **This is achieved in a backwards-incompatible way for networks that have already merged** :warning:. Kiln users and shadow fork enjoyers will be unable to downgrade after running the code from this PR. The upgrade migration may take several minutes to run, and can't be aborted after it begins.
The main changes are:
- New column in the database called `ExecPayload`, keyed by beacon block root.
- The `BeaconBlock` column now stores blinded blocks only.
- Lots of places that previously used full blocks now use blinded blocks, e.g. analytics APIs, block replay in the DB, etc.
- On finalization:
- `prune_abanonded_forks` deletes non-canonical payloads whilst deleting non-canonical blocks.
- `migrate_db` deletes finalized canonical payloads whilst deleting finalized states.
- Conversions between blinded and full blocks are implemented in a compositional way, duplicating some work from Sean's PR #3134.
- The execution layer has a new `get_payload_by_block_hash` method that reconstructs a payload using the EE's `eth_getBlockByHash` call.
- I've tested manually that it works on Kiln, using Geth and Nethermind.
- This isn't necessarily the most efficient method, and new engine APIs are being discussed to improve this: https://github.com/ethereum/execution-apis/pull/146.
- We're depending on the `ethers` master branch, due to lots of recent changes. We're also using a workaround for https://github.com/gakonst/ethers-rs/issues/1134.
- Payload reconstruction is used in the HTTP API via `BeaconChain::get_block`, which is now `async`. Due to the `async` fn, the `blocking_json` wrapper has been removed.
- Payload reconstruction is used in network RPC to serve blocks-by-{root,range} responses. Here the `async` adjustment is messier, although I think I've managed to come up with a reasonable compromise: the handlers take the `SendOnDrop` by value so that they can drop it on _task completion_ (after the `fn` returns). Still, this is introducing disk reads onto core executor threads, which may have a negative performance impact (thoughts appreciated).
## Additional Info
- [x] For performance it would be great to remove the cloning of full blocks when converting them to blinded blocks to write to disk. I'm going to experiment with a `put_block` API that takes the block by value, breaks it into a blinded block and a payload, stores the blinded block, and then re-assembles the full block for the caller.
- [x] We should measure the latency of blocks-by-root and blocks-by-range responses.
- [x] We should add integration tests that stress the payload reconstruction (basic tests done, issue for more extensive tests: https://github.com/sigp/lighthouse/issues/3159)
- [x] We should (manually) test the schema v9 migration from several prior versions, particularly as blocks have changed on disk and some migrations rely on being able to load blocks.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-05-12 00:42:17 +00:00
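To make the reconstruction flow above concrete, here is a minimal, self-contained Rust sketch of rebuilding a full block from a blinded block plus a payload fetched from the execution engine. All types and the `ExecutionEngine` trait below are simplified stand-ins for illustration, not Lighthouse's real APIs.
```rust
// Simplified stand-ins; not Lighthouse's real types.
#[derive(Clone, Debug, PartialEq)]
struct ExecutionPayload {
    block_hash: [u8; 32],
    transactions: Vec<Vec<u8>>,
}

#[derive(Clone, Debug)]
struct PayloadHeader {
    block_hash: [u8; 32],
}

#[derive(Clone, Debug)]
struct BlindedBlock {
    slot: u64,
    payload_header: PayloadHeader,
}

#[derive(Clone, Debug)]
struct FullBlock {
    slot: u64,
    payload: ExecutionPayload,
}

/// Stand-in for the EE lookup backed by `eth_getBlockByHash`.
trait ExecutionEngine {
    fn get_payload_by_block_hash(&self, block_hash: [u8; 32]) -> Option<ExecutionPayload>;
}

/// Rebuild a full block by fetching the payload committed to by the blinded block.
fn reconstruct_full_block<E: ExecutionEngine>(
    blinded: &BlindedBlock,
    engine: &E,
) -> Option<FullBlock> {
    let payload = engine.get_payload_by_block_hash(blinded.payload_header.block_hash)?;
    // The reconstructed payload must match the hash committed to by the blinded block.
    if payload.block_hash != blinded.payload_header.block_hash {
        return None;
    }
    Some(FullBlock {
        slot: blinded.slot,
        payload,
    })
}
```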
|
|
|
let block = BlockId::from_root(root).blinded_block(&chain)?;
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
// If the parent root was supplied, check that it matches the block
|
|
|
|
// obtained via a slot lookup.
|
|
|
|
if let Some(parent_root) = parent_root_opt {
|
|
|
|
if block.parent_root() != parent_root {
|
|
|
|
return Err(warp_utils::reject::custom_not_found(format!(
|
|
|
|
"no canonical block at slot {} with parent root {}",
|
|
|
|
slot, parent_root
|
|
|
|
)));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
(root, block)
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
let data = api_types::BlockHeaderData {
|
|
|
|
root,
|
|
|
|
canonical: true,
|
|
|
|
header: api_types::BlockHeaderAndSignature {
|
2021-07-09 06:15:32 +00:00
|
|
|
message: block.message().block_header(),
|
|
|
|
signature: block.signature().clone().into(),
|
2020-09-29 03:46:54 +00:00
|
|
|
},
|
|
|
|
};
|
|
|
|
|
|
|
|
Ok(api_types::GenericResponse::from(vec![data]))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET beacon/headers/{block_id}
|
|
|
|
let get_beacon_headers_block_id = eth1_v1
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("headers"))
|
2020-12-03 23:10:08 +00:00
|
|
|
.and(warp::path::param::<BlockId>().or_else(|_| async {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"Invalid block ID".to_string(),
|
|
|
|
))
|
|
|
|
}))
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|block_id: BlockId, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let root = block_id.root(&chain)?;
|
Separate execution payloads in the DB (#3157)
2022-05-12 00:42:17 +00:00
|
|
|
let block = BlockId::from_root(root).blinded_block(&chain)?;
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
let canonical = chain
|
Use the forwards iterator more often (#2376)
## Issue Addressed
NA
## Primary Change
When investigating memory usage, I noticed that retrieving a block from an early slot (e.g., slot 900) would cause a sharp increase in the memory footprint (from 400mb to 800mb+) which seemed to be ever-lasting.
After some investigation, I found that the reverse iteration from the head back to that slot was the likely culprit. To counter this, I've switched the `BeaconChain::block_root_at_slot` to use the forwards iterator, instead of the reverse one.
I also noticed that the networking stack is using `BeaconChain::root_at_slot` to check if a peer is relevant (`check_peer_relevance`). Perhaps the steep, seemingly-random-but-consistent increases in memory usage are caused by the use of this function.
Using the forwards iterator with the HTTP API alleviated the sharp increases in memory usage. It also made the response much faster (before it felt like it took 1-2s, now it feels instant).
## Additional Changes
In the process I also noticed that we have two functions for getting block roots:
- `BeaconChain::block_root_at_slot`: returns `None` for a skip slot.
- `BeaconChain::root_at_slot`: returns the previous root for a skip slot.
I unified these two functions into `block_root_at_slot` and added the `WhenSlotSkipped` enum. Now, the caller must be explicit about the skip-slot behaviour when requesting a root.
Additionally, I replaced `vec![]` with `Vec::with_capacity` in `store::chunked_vector::range_query`. I stumbled across this whilst debugging and made this modification to see what effect it would have (not much). It seems like a decent change to keep around, but I'm not concerned either way.
Also, `BeaconChain::get_ancestor_block_root` is unused, so I got rid of it :wastebasket:.
## Additional Info
I haven't done the same for state roots here. Whilst it's possible and a good idea, it's more work since the forwards iterators are presently block-roots-specific.
Whilst there are a few places a reverse iteration of state roots could be triggered (e.g., attestation production, HTTP API), they're nowhere near as common as the `check_peer_relevance` call. As such, I think we should get this PR merged first, then come back for the state root iters. I made an issue here https://github.com/sigp/lighthouse/issues/2377.
2021-05-31 04:18:20 +00:00
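As a rough illustration of the `WhenSlotSkipped` semantics described above, the sketch below walks a forwards (ascending-slot) iteration of `(block_root, slot)` pairs and either returns `None` for a skipped slot or falls back to the previous root. The types and function are simplified assumptions, not the real `BeaconChain` API.
```rust
/// Simplified version of the skip-slot behaviour selector described above.
#[derive(Clone, Copy)]
enum WhenSlotSkipped {
    /// Return `None` if the requested slot was skipped.
    None,
    /// Return the root of the most recent block at or before the slot.
    Prev,
}

/// `roots` is a forwards (ascending-slot) iteration of `(block_root, slot)` pairs.
fn block_root_at_slot(
    roots: &[([u8; 32], u64)],
    target_slot: u64,
    skips: WhenSlotSkipped,
) -> Option<[u8; 32]> {
    let mut prev = None;
    for &(root, slot) in roots {
        if slot == target_slot {
            return Some(root);
        }
        if slot > target_slot {
            break;
        }
        prev = Some(root);
    }
    match skips {
        WhenSlotSkipped::None => None,
        WhenSlotSkipped::Prev => prev,
    }
}
```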
|
|
|
.block_root_at_slot(block.slot(), WhenSlotSkipped::None)
|
2020-09-29 03:46:54 +00:00
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?
|
|
|
|
.map_or(false, |canonical| root == canonical);
|
|
|
|
|
|
|
|
let data = api_types::BlockHeaderData {
|
|
|
|
root,
|
|
|
|
canonical,
|
|
|
|
header: api_types::BlockHeaderAndSignature {
|
2021-07-09 06:15:32 +00:00
|
|
|
message: block.message().block_header(),
|
|
|
|
signature: block.signature().clone().into(),
|
2020-09-29 03:46:54 +00:00
|
|
|
},
|
|
|
|
};
|
|
|
|
|
|
|
|
Ok(api_types::GenericResponse::from(data))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
/*
|
|
|
|
* beacon/blocks
|
|
|
|
*/
|
|
|
|
|
2020-11-09 23:13:56 +00:00
|
|
|
// POST beacon/blocks
|
2020-09-29 03:46:54 +00:00
|
|
|
let post_beacon_blocks = eth1_v1
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("blocks"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|block: SignedBeaconBlock<T::EthSpec>,
|
|
|
|
chain: Arc<BeaconChain<T>>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
log: Logger| {
|
|
|
|
blocking_json_task(move || {
|
2021-01-20 19:19:38 +00:00
|
|
|
let seen_timestamp = timestamp_now();
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// Send the block, regardless of whether or not it is valid. The API
|
|
|
|
// specification is very clear that this is the desired behaviour.
|
|
|
|
publish_pubsub_message(
|
|
|
|
&network_tx,
|
|
|
|
PubsubMessage::BeaconBlock(Box::new(block.clone())),
|
|
|
|
)?;
|
|
|
|
|
2021-03-04 04:43:31 +00:00
|
|
|
// Determine the delay after the start of the slot, register it with metrics.
|
|
|
|
let delay =
|
2021-07-09 06:15:32 +00:00
|
|
|
get_block_delay_ms(seen_timestamp, block.message(), &chain.slot_clock);
|
2021-03-04 04:43:31 +00:00
|
|
|
metrics::observe_duration(
|
|
|
|
&metrics::HTTP_API_BLOCK_BROADCAST_DELAY_TIMES,
|
|
|
|
delay,
|
|
|
|
);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
match chain.process_block(block.clone()) {
|
|
|
|
Ok(root) => {
|
|
|
|
info!(
|
|
|
|
log,
|
|
|
|
"Valid block from HTTP API";
|
2021-08-23 00:59:14 +00:00
|
|
|
"block_delay" => ?delay,
|
|
|
|
"root" => format!("{}", root),
|
|
|
|
"proposer_index" => block.message().proposer_index(),
|
|
|
|
"slot" => block.slot(),
|
2020-09-29 03:46:54 +00:00
|
|
|
);
|
|
|
|
|
2021-01-20 19:19:38 +00:00
|
|
|
// Notify the validator monitor.
|
|
|
|
chain.validator_monitor.read().register_api_block(
|
|
|
|
seen_timestamp,
|
2021-07-09 06:15:32 +00:00
|
|
|
block.message(),
|
2021-01-20 19:19:38 +00:00
|
|
|
root,
|
|
|
|
&chain.slot_clock,
|
|
|
|
);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// Update the head since it's likely this block will become the new
|
|
|
|
// head.
|
|
|
|
chain
|
|
|
|
.fork_choice()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
|
2021-03-04 04:43:31 +00:00
|
|
|
// Perform some logging to inform users if their blocks are being produced
|
|
|
|
// late.
|
|
|
|
//
|
|
|
|
// Check that the thresholds are non-zero to avoid logging errors with small
|
|
|
|
// slot times (e.g., during testing)
|
2021-08-23 00:59:14 +00:00
|
|
|
let crit_threshold = chain.slot_clock.unagg_attestation_production_delay();
|
|
|
|
let error_threshold = crit_threshold / 2;
|
|
|
|
if delay >= crit_threshold {
|
2021-03-04 04:43:31 +00:00
|
|
|
crit!(
|
|
|
|
log,
|
|
|
|
"Block was broadcast too late";
|
|
|
|
"msg" => "system may be overloaded, block likely to be orphaned",
|
2021-08-23 00:59:14 +00:00
|
|
|
"delay_ms" => delay.as_millis(),
|
|
|
|
"slot" => block.slot(),
|
|
|
|
"root" => ?root,
|
2021-03-04 04:43:31 +00:00
|
|
|
)
|
2021-08-23 00:59:14 +00:00
|
|
|
} else if delay >= error_threshold {
|
|
|
|
error!(
|
2021-03-04 04:43:31 +00:00
|
|
|
log,
|
|
|
|
"Block broadcast was delayed";
|
|
|
|
"msg" => "system may be overloaded, block may be orphaned",
|
2021-08-23 00:59:14 +00:00
|
|
|
"delay_ms" => delay.as_millis(),
|
|
|
|
"slot" => block.slot(),
|
|
|
|
"root" => ?root,
|
2021-03-04 04:43:31 +00:00
|
|
|
)
|
|
|
|
}
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
Ok(())
|
|
|
|
}
|
|
|
|
Err(e) => {
|
|
|
|
let msg = format!("{:?}", e);
|
|
|
|
error!(
|
|
|
|
log,
|
|
|
|
"Invalid block provided to HTTP API";
|
|
|
|
"reason" => &msg
|
|
|
|
);
|
|
|
|
Err(warp_utils::reject::broadcast_without_import(msg))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2022-03-31 07:52:23 +00:00
|
|
|
/*
|
|
|
|
* beacon/blocks
|
|
|
|
*/
|
|
|
|
|
|
|
|
// POST beacon/blocks
|
|
|
|
let post_beacon_blinded_blocks = eth1_v1
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("blinded_blocks"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|block: SignedBeaconBlock<T::EthSpec, BlindedPayload<_>>,
|
|
|
|
chain: Arc<BeaconChain<T>>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
_log: Logger| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
if let Some(el) = chain.execution_layer.as_ref() {
|
|
|
|
//FIXME(sean): we may not always receive the payload in this response because it
|
|
|
|
// should be the relay's job to propagate the block. However, since this block is
|
|
|
|
// already signed and sent this might be ok (so long as the relay validates
|
|
|
|
// the block before revealing the payload).
|
|
|
|
|
|
|
|
//FIXME(sean) additionally, this endpoint should serve blocks prior to Bellatrix, and should
|
|
|
|
// be able to support the normal block proposal flow, because at some point full block endpoints
|
|
|
|
// will be deprecated from the beacon API. This will entail creating full blocks in
|
|
|
|
// `validator/blinded_blocks`, caching their payloads, and transforming them into blinded
|
|
|
|
// blocks. We will access the payload of those blocks here. This flow should happen if the
|
|
|
|
// execution layer has no payload builders or if we have not yet finalized the post-merge transition.
|
|
|
|
let payload = el
|
|
|
|
.block_on(|el| el.propose_blinded_beacon_block(&block))
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::custom_server_error(format!(
|
|
|
|
"proposal failed: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?;
|
|
|
|
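// Re-assemble a full `SignedBeaconBlock` by combining the signed blinded block's
// fields with the execution payload returned by the execution layer.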
let new_block = SignedBeaconBlock::Merge(SignedBeaconBlockMerge {
|
|
|
|
message: BeaconBlockMerge {
|
|
|
|
slot: block.message().slot(),
|
|
|
|
proposer_index: block.message().proposer_index(),
|
|
|
|
parent_root: block.message().parent_root(),
|
|
|
|
state_root: block.message().state_root(),
|
|
|
|
body: BeaconBlockBodyMerge {
|
|
|
|
randao_reveal: block.message().body().randao_reveal().clone(),
|
|
|
|
eth1_data: block.message().body().eth1_data().clone(),
|
|
|
|
graffiti: *block.message().body().graffiti(),
|
|
|
|
proposer_slashings: block
|
|
|
|
.message()
|
|
|
|
.body()
|
|
|
|
.proposer_slashings()
|
|
|
|
.clone(),
|
|
|
|
attester_slashings: block
|
|
|
|
.message()
|
|
|
|
.body()
|
|
|
|
.attester_slashings()
|
|
|
|
.clone(),
|
|
|
|
attestations: block.message().body().attestations().clone(),
|
|
|
|
deposits: block.message().body().deposits().clone(),
|
|
|
|
voluntary_exits: block
|
|
|
|
.message()
|
|
|
|
.body()
|
|
|
|
.voluntary_exits()
|
|
|
|
.clone(),
|
|
|
|
sync_aggregate: block
|
|
|
|
.message()
|
|
|
|
.body()
|
|
|
|
.sync_aggregate()
|
|
|
|
.unwrap()
|
|
|
|
.clone(),
|
|
|
|
execution_payload: payload.into(),
|
|
|
|
},
|
|
|
|
},
|
|
|
|
signature: block.signature().clone(),
|
|
|
|
});
|
|
|
|
|
|
|
|
// Send the block, regardless of whether or not it is valid. The API
|
|
|
|
// specification is very clear that this is the desired behaviour.
|
|
|
|
publish_pubsub_message(
|
|
|
|
&network_tx,
|
|
|
|
PubsubMessage::BeaconBlock(Box::new(new_block.clone())),
|
|
|
|
)?;
|
|
|
|
|
|
|
|
match chain.process_block(new_block) {
|
|
|
|
Ok(_) => {
|
|
|
|
// Update the head since it's likely this block will become the new
|
|
|
|
// head.
|
|
|
|
chain
|
|
|
|
.fork_choice()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
|
|
|
|
Ok(())
|
|
|
|
}
|
|
|
|
Err(e) => {
|
|
|
|
let msg = format!("{:?}", e);
|
|
|
|
|
|
|
|
Err(warp_utils::reject::broadcast_without_import(msg))
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
Err(warp_utils::reject::custom_server_error(
|
|
|
|
"no execution layer found".to_string(),
|
|
|
|
))
|
|
|
|
}
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2021-08-06 00:47:31 +00:00
|
|
|
let block_id_or_err = warp::path::param::<BlockId>().or_else(|_| async {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"Invalid block ID".to_string(),
|
|
|
|
))
|
|
|
|
});
|
|
|
|
|
|
|
|
let beacon_blocks_path_v1 = eth1_v1
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("blocks"))
|
2021-08-06 00:47:31 +00:00
|
|
|
.and(block_id_or_err)
|
|
|
|
.and(chain_filter.clone());
|
|
|
|
|
|
|
|
let beacon_blocks_path_any = any_version
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("blocks"))
|
|
|
|
.and(block_id_or_err)
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(chain_filter.clone());
|
|
|
|
|
|
|
|
// GET beacon/blocks/{block_id}
|
2021-08-06 00:47:31 +00:00
|
|
|
let get_beacon_block = beacon_blocks_path_any
|
2021-02-24 04:15:14 +00:00
|
|
|
.clone()
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::header::optional::<api_types::Accept>("accept"))
|
|
|
|
.and_then(
|
2021-08-06 00:47:31 +00:00
|
|
|
|endpoint_version: EndpointVersion,
|
|
|
|
block_id: BlockId,
|
2021-02-24 04:15:14 +00:00
|
|
|
chain: Arc<BeaconChain<T>>,
|
|
|
|
accept_header: Option<api_types::Accept>| {
|
Separate execution payloads in the DB (#3157)
2022-05-12 00:42:17 +00:00
|
|
|
async move {
|
|
|
|
let block = block_id.full_block(&chain).await?;
|
2021-10-28 01:18:04 +00:00
|
|
|
let fork_name = block
|
|
|
|
.fork_name(&chain.spec)
|
|
|
|
.map_err(inconsistent_fork_rejection)?;
|
2021-02-24 04:15:14 +00:00
|
|
|
match accept_header {
|
|
|
|
Some(api_types::Accept::Ssz) => Response::builder()
|
|
|
|
.status(200)
|
|
|
|
.header("Content-Type", "application/octet-stream")
|
|
|
|
.body(block.as_ssz_bytes().into())
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::custom_server_error(format!(
|
|
|
|
"failed to create response: {}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
}),
|
2021-10-28 01:18:04 +00:00
|
|
|
_ => fork_versioned_response(endpoint_version, fork_name, block)
|
|
|
|
.map(|res| warp::reply::json(&res).into_response()),
|
2021-02-24 04:15:14 +00:00
|
|
|
}
|
2021-10-28 01:18:04 +00:00
|
|
|
.map(|resp| add_consensus_version_header(resp, fork_name))
|
Separate execution payloads in the DB (#3157)
2022-05-12 00:42:17 +00:00
|
|
|
}
|
2021-02-24 04:15:14 +00:00
|
|
|
},
|
|
|
|
);
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
// GET beacon/blocks/{block_id}/root
|
2021-08-06 00:47:31 +00:00
|
|
|
let get_beacon_block_root = beacon_blocks_path_v1
|
2020-09-29 03:46:54 +00:00
|
|
|
.clone()
|
|
|
|
.and(warp::path("root"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|block_id: BlockId, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
block_id
|
|
|
|
.root(&chain)
|
|
|
|
.map(api_types::RootData::from)
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET beacon/blocks/{block_id}/attestations
|
2021-08-06 00:47:31 +00:00
|
|
|
let get_beacon_block_attestations = beacon_blocks_path_v1
|
2020-09-29 03:46:54 +00:00
|
|
|
.clone()
|
|
|
|
.and(warp::path("attestations"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|block_id: BlockId, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
block_id
|
Separate execution payloads in the DB (#3157)
2022-05-12 00:42:17 +00:00
|
|
|
.blinded_block(&chain)
|
2021-07-09 06:15:32 +00:00
|
|
|
.map(|block| block.message().body().attestations().clone())
|
2020-09-29 03:46:54 +00:00
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
/*
|
|
|
|
* beacon/pool
|
|
|
|
*/
|
|
|
|
|
|
|
|
let beacon_pool_path = eth1_v1
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("pool"))
|
|
|
|
.and(chain_filter.clone());
|
|
|
|
|
|
|
|
// POST beacon/pool/attestations
|
|
|
|
let post_beacon_pool_attestations = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("attestations"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
2020-11-18 23:31:39 +00:00
|
|
|
.and(log_filter.clone())
|
2020-09-29 03:46:54 +00:00
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
2020-11-18 23:31:39 +00:00
|
|
|
attestations: Vec<Attestation<T::EthSpec>>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
log: Logger| {
|
2020-09-29 03:46:54 +00:00
|
|
|
blocking_json_task(move || {
|
2021-01-20 19:19:38 +00:00
|
|
|
let seen_timestamp = timestamp_now();
|
2020-11-18 23:31:39 +00:00
|
|
|
let mut failures = Vec::new();
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2020-11-18 23:31:39 +00:00
|
|
|
for (index, attestation) in attestations.as_slice().iter().enumerate() {
|
|
|
|
let attestation = match chain
|
Batch BLS verification for attestations (#2399)
## Issue Addressed
NA
## Proposed Changes
Adds the ability to verify batches of aggregated/unaggregated attestations from the network.
When the `BeaconProcessor` finds there are messages in the aggregated or unaggregated attestation queues, it will first check the length of the queue:
- `== 1` verify the attestation individually.
- `>= 2` take up to 64 of those attestations and verify them in a batch.
Notably, we only perform batch verification if the queue has a backlog. We don't apply any artificial delays to attestations to try and force them into batches.
### Batching Details
To assist with implementing batches we modify `beacon_chain::attestation_verification` to have two distinct categories for attestations:
- *Indexed* attestations: those which have passed initial validation and were valid enough for us to derive an `IndexedAttestation`.
- *Verified* attestations: those attestations which were indexed *and also* passed signature verification. These are well-formed, interesting messages which were signed by validators.
The batching functions accept `n` attestations and then return `n` attestation verification `Result`s, where those `Result`s can be any combination of `Ok` or `Err`. In other words, we attempt to verify as many attestations as possible and return specific per-attestation results so peer scores can be updated, if required.
When we batch verify attestations, we first try to map all those attestations to *indexed* attestations. If any of those attestations were able to be indexed, we then perform batch BLS verification on those indexed attestations. If the batch verification succeeds, we convert them into *verified* attestations, disabling individual signature checking. If the batch fails, we convert to verified attestations with individual signature checking enabled.
Ultimately, we optimistically try to do a batch verification of attestation signatures and fall back to individual verification if it fails. This opens an attack vector for "poisoning" the attestations and causing us to waste a batch verification. I argue that peer scoring should do a good-enough job of defending against this and the typical-case gains massively outweigh the worst-case losses.
## Additional Info
Before this PR, attestation verification took the attestations by value (instead of by reference). It turns out that this was unnecessary and, in my opinion, resulted in some undesirable ergonomics (e.g., we had to pass the attestation back in the `Err` variant to avoid clones). In this PR I've modified attestation verification so that it now takes a reference.
I refactored the `beacon_chain/tests/attestation_verification.rs` tests so they use a builder-esque "tester" struct instead of a weird macro. It made it easier for me to test individual/batch with the same set of tests and I think it was a nice tidy-up. Notably, I did this last to try and make sure my new refactors to *actual* production code would pass under the existing test suite.
2021-09-22 08:49:41 +00:00
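A condensed sketch of the batching policy described above: a lone queued attestation is verified individually, otherwise up to 64 are drained and checked with one batch BLS verification, falling back to per-attestation checks if the batch fails. The `verify_batch`/`verify_individually` helpers and the `Attestation` type are hypothetical placeholders, not the real `beacon_chain::attestation_verification` API.
```rust
// Hypothetical placeholders for illustration only.
struct Attestation;

const MAX_BATCH: usize = 64;

fn verify_individually(_att: &Attestation) -> Result<(), String> {
    // Per-attestation indexing + signature check.
    Ok(())
}

fn verify_batch(_atts: &[Attestation]) -> bool {
    // Single batched BLS verification over all indexed attestations.
    true
}

fn process_queue(queue: &mut Vec<Attestation>) -> Vec<Result<(), String>> {
    match queue.len() {
        0 => Vec::new(),
        // No backlog: verify the lone attestation individually.
        1 => vec![verify_individually(&queue.remove(0))],
        // Backlog: take up to MAX_BATCH and try one batch verification first.
        _ => {
            let take = queue.len().min(MAX_BATCH);
            let batch: Vec<Attestation> = queue.drain(..take).collect();
            if verify_batch(&batch) {
                batch.iter().map(|_| Ok(())).collect()
            } else {
                // Batch failed: fall back to individual checks for precise per-item results.
                batch.iter().map(verify_individually).collect()
            }
        }
    }
}
```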
|
|
|
.verify_unaggregated_attestation_for_gossip(attestation, None)
|
2020-11-18 23:31:39 +00:00
|
|
|
{
|
|
|
|
Ok(attestation) => attestation,
|
|
|
|
Err(e) => {
|
|
|
|
error!(log,
|
|
|
|
"Failure verifying attestation for gossip";
|
|
|
|
"error" => ?e,
|
|
|
|
"request_index" => index,
|
|
|
|
"committee_index" => attestation.data.index,
|
|
|
|
"attestation_slot" => attestation.data.slot,
|
|
|
|
);
|
|
|
|
failures.push(api_types::Failure::new(
|
|
|
|
index,
|
|
|
|
format!("Verification: {:?}", e),
|
|
|
|
));
|
|
|
|
// skip to the next attestation so we do not publish this one to gossip
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
2021-01-20 19:19:38 +00:00
|
|
|
// Notify the validator monitor.
|
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.read()
|
|
|
|
.register_api_unaggregated_attestation(
|
|
|
|
seen_timestamp,
|
|
|
|
attestation.indexed_attestation(),
|
|
|
|
&chain.slot_clock,
|
|
|
|
);
|
|
|
|
|
2020-11-18 23:31:39 +00:00
|
|
|
publish_pubsub_message(
|
|
|
|
&network_tx,
|
|
|
|
PubsubMessage::Attestation(Box::new((
|
|
|
|
attestation.subnet_id(),
|
|
|
|
attestation.attestation().clone(),
|
|
|
|
))),
|
|
|
|
)?;
|
|
|
|
|
|
|
|
let committee_index = attestation.attestation().data.index;
|
|
|
|
let slot = attestation.attestation().data.slot;
|
|
|
|
|
|
|
|
if let Err(e) = chain.apply_attestation_to_fork_choice(&attestation) {
|
|
|
|
error!(log,
|
|
|
|
"Failure applying verified attestation to fork choice";
|
|
|
|
"error" => ?e,
|
|
|
|
"request_index" => index,
|
|
|
|
"committee_index" => committee_index,
|
|
|
|
"slot" => slot,
|
|
|
|
);
|
|
|
|
failures.push(api_types::Failure::new(
|
|
|
|
index,
|
|
|
|
format!("Fork choice: {:?}", e),
|
|
|
|
));
|
|
|
|
};
|
|
|
|
|
Batch BLS verification for attestations (#2399)
2021-09-22 08:49:41 +00:00
|
|
|
if let Err(e) = chain.add_to_naive_aggregation_pool(&attestation) {
|
2020-11-18 23:31:39 +00:00
|
|
|
error!(log,
|
|
|
|
"Failure adding verified attestation to the naive aggregation pool";
|
|
|
|
"error" => ?e,
|
|
|
|
"request_index" => index,
|
|
|
|
"committee_index" => committee_index,
|
|
|
|
"slot" => slot,
|
|
|
|
);
|
|
|
|
failures.push(api_types::Failure::new(
|
|
|
|
index,
|
|
|
|
format!("Naive aggregation pool: {:?}", e),
|
|
|
|
));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if failures.is_empty() {
|
|
|
|
Ok(())
|
|
|
|
} else {
|
|
|
|
Err(warp_utils::reject::indexed_bad_request(
|
|
|
|
"error processing attestations".to_string(),
|
|
|
|
failures,
|
|
|
|
))
|
|
|
|
}
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2020-11-18 23:31:39 +00:00
|
|
|
// GET beacon/pool/attestations?committee_index,slot
|
2020-09-29 03:46:54 +00:00
|
|
|
let get_beacon_pool_attestations = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("attestations"))
|
|
|
|
.and(warp::path::end())
|
2020-11-18 23:31:39 +00:00
|
|
|
.and(warp::query::<api_types::AttestationPoolQuery>())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>, query: api_types::AttestationPoolQuery| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let query_filter = |attestation: &Attestation<T::EthSpec>| {
|
|
|
|
query
|
|
|
|
.slot
|
|
|
|
.map_or(true, |slot| slot == attestation.data.slot)
|
|
|
|
&& query
|
|
|
|
.committee_index
|
|
|
|
.map_or(true, |index| index == attestation.data.index)
|
|
|
|
};
|
|
|
|
|
|
|
|
let mut attestations = chain.op_pool.get_filtered_attestations(query_filter);
|
|
|
|
attestations.extend(
|
|
|
|
chain
|
|
|
|
.naive_aggregation_pool
|
|
|
|
.read()
|
|
|
|
.iter()
|
|
|
|
.cloned()
|
|
|
|
.filter(query_filter),
|
|
|
|
);
|
|
|
|
Ok(api_types::GenericResponse::from(attestations))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
// POST beacon/pool/attester_slashings
|
|
|
|
let post_beacon_pool_attester_slashings = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("attester_slashings"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
|
|
|
slashing: AttesterSlashing<T::EthSpec>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let outcome = chain
|
|
|
|
.verify_attester_slashing_for_gossip(slashing.clone())
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::object_invalid(format!(
|
|
|
|
"gossip verification failed: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?;
|
|
|
|
|
2021-01-20 19:19:38 +00:00
|
|
|
// Notify the validator monitor.
|
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.read()
|
|
|
|
.register_api_attester_slashing(&slashing);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
if let ObservationOutcome::New(slashing) = outcome {
|
|
|
|
publish_pubsub_message(
|
|
|
|
&network_tx,
|
|
|
|
PubsubMessage::AttesterSlashing(Box::new(
|
|
|
|
slashing.clone().into_inner(),
|
|
|
|
)),
|
|
|
|
)?;
|
|
|
|
|
|
|
|
chain
|
|
|
|
.import_attester_slashing(slashing)
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
}
|
|
|
|
|
|
|
|
Ok(())
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET beacon/pool/attester_slashings
|
|
|
|
let get_beacon_pool_attester_slashings = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("attester_slashings"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let attestations = chain.op_pool.get_all_attester_slashings();
|
|
|
|
Ok(api_types::GenericResponse::from(attestations))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// POST beacon/pool/proposer_slashings
|
|
|
|
let post_beacon_pool_proposer_slashings = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("proposer_slashings"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
|
|
|
slashing: ProposerSlashing,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let outcome = chain
|
|
|
|
.verify_proposer_slashing_for_gossip(slashing.clone())
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::object_invalid(format!(
|
|
|
|
"gossip verification failed: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?;
|
|
|
|
|
2021-01-20 19:19:38 +00:00
|
|
|
// Notify the validator monitor.
|
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.read()
|
|
|
|
.register_api_proposer_slashing(&slashing);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
if let ObservationOutcome::New(slashing) = outcome {
|
|
|
|
publish_pubsub_message(
|
|
|
|
&network_tx,
|
|
|
|
PubsubMessage::ProposerSlashing(Box::new(
|
|
|
|
slashing.clone().into_inner(),
|
|
|
|
)),
|
|
|
|
)?;
|
|
|
|
|
|
|
|
chain.import_proposer_slashing(slashing);
|
|
|
|
}
|
|
|
|
|
|
|
|
Ok(())
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET beacon/pool/proposer_slashings
|
|
|
|
let get_beacon_pool_proposer_slashings = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("proposer_slashings"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let attestations = chain.op_pool.get_all_proposer_slashings();
|
|
|
|
Ok(api_types::GenericResponse::from(attestations))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// POST beacon/pool/voluntary_exits
|
|
|
|
let post_beacon_pool_voluntary_exits = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("voluntary_exits"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
|
|
|
exit: SignedVoluntaryExit,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let outcome = chain
|
|
|
|
.verify_voluntary_exit_for_gossip(exit.clone())
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::object_invalid(format!(
|
|
|
|
"gossip verification failed: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?;
|
|
|
|
|
2021-01-20 19:19:38 +00:00
|
|
|
// Notify the validator monitor.
|
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.read()
|
|
|
|
.register_api_voluntary_exit(&exit.message);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
if let ObservationOutcome::New(exit) = outcome {
|
|
|
|
publish_pubsub_message(
|
|
|
|
&network_tx,
|
|
|
|
PubsubMessage::VoluntaryExit(Box::new(exit.clone().into_inner())),
|
|
|
|
)?;
|
|
|
|
|
|
|
|
chain.import_voluntary_exit(exit);
|
|
|
|
}
|
|
|
|
|
|
|
|
Ok(())
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET beacon/pool/voluntary_exits
|
|
|
|
let get_beacon_pool_voluntary_exits = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("voluntary_exits"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let attestations = chain.op_pool.get_all_voluntary_exits();
|
|
|
|
Ok(api_types::GenericResponse::from(attestations))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2021-08-06 00:47:31 +00:00
|
|
|
// POST beacon/pool/sync_committees
|
|
|
|
let post_beacon_pool_sync_committees = beacon_pool_path
|
|
|
|
.clone()
|
|
|
|
.and(warp::path("sync_committees"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
|
|
|
signatures: Vec<SyncCommitteeMessage>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
log: Logger| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
sync_committees::process_sync_committee_signatures(
|
|
|
|
signatures, network_tx, &chain, log,
|
|
|
|
)?;
|
|
|
|
Ok(api_types::GenericResponse::from(()))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
/*
|
2021-08-24 01:36:27 +00:00
|
|
|
* config
|
2020-09-29 03:46:54 +00:00
|
|
|
*/
|
|
|
|
|
|
|
|
let config_path = eth1_v1.and(warp::path("config"));
|
|
|
|
|
|
|
|
// GET config/fork_schedule
|
|
|
|
let get_config_fork_schedule = config_path
|
|
|
|
.and(warp::path("fork_schedule"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
2021-08-24 01:36:27 +00:00
|
|
|
let forks = ForkName::list_all()
|
|
|
|
.into_iter()
|
|
|
|
.filter_map(|fork_name| chain.spec.fork_for_name(fork_name))
|
|
|
|
.collect::<Vec<_>>();
|
|
|
|
Ok(api_types::GenericResponse::from(forks))
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET config/spec
|
2021-07-09 06:15:32 +00:00
|
|
|
let serve_legacy_spec = ctx.config.serve_legacy_spec;
|
2020-09-29 03:46:54 +00:00
|
|
|
let get_config_spec = config_path
|
|
|
|
.and(warp::path("spec"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
2021-07-09 06:15:32 +00:00
|
|
|
.and_then(move |chain: Arc<BeaconChain<T>>| {
|
2020-09-29 03:46:54 +00:00
|
|
|
blocking_json_task(move || {
|
2021-07-09 06:15:32 +00:00
|
|
|
let mut config_and_preset =
|
|
|
|
ConfigAndPreset::from_chain_spec::<T::EthSpec>(&chain.spec);
|
|
|
|
if serve_legacy_spec {
|
|
|
|
config_and_preset.make_backwards_compat(&chain.spec);
|
|
|
|
}
|
|
|
|
Ok(api_types::GenericResponse::from(config_and_preset))
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET config/deposit_contract
|
|
|
|
let get_config_deposit_contract = config_path
|
|
|
|
.and(warp::path("deposit_contract"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(api_types::GenericResponse::from(
|
|
|
|
api_types::DepositContractData {
|
|
|
|
address: chain.spec.deposit_contract_address,
|
2021-10-01 06:32:38 +00:00
|
|
|
chain_id: chain.spec.deposit_chain_id,
|
2020-09-29 03:46:54 +00:00
|
|
|
},
|
|
|
|
))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
/*
|
|
|
|
* debug
|
|
|
|
*/
|
|
|
|
|
|
|
|
// GET debug/beacon/states/{state_id}
|
2021-08-06 00:47:31 +00:00
|
|
|
let get_debug_beacon_states = any_version
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(warp::path("debug"))
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("states"))
|
2020-12-03 23:10:08 +00:00
|
|
|
.and(warp::path::param::<StateId>().or_else(|_| async {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"Invalid state ID".to_string(),
|
|
|
|
))
|
|
|
|
}))
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(warp::path::end())
|
2021-01-06 03:01:46 +00:00
|
|
|
.and(warp::header::optional::<api_types::Accept>("accept"))
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(chain_filter.clone())
|
2021-01-06 03:01:46 +00:00
|
|
|
.and_then(
|
2021-08-06 00:47:31 +00:00
|
|
|
|endpoint_version: EndpointVersion,
|
|
|
|
state_id: StateId,
|
2021-01-06 03:01:46 +00:00
|
|
|
accept_header: Option<api_types::Accept>,
|
|
|
|
chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_task(move || match accept_header {
|
|
|
|
Some(api_types::Accept::Ssz) => {
|
|
|
|
let state = state_id.state(&chain)?;
|
2021-10-28 01:18:04 +00:00
|
|
|
let fork_name = state
|
|
|
|
.fork_name(&chain.spec)
|
|
|
|
.map_err(inconsistent_fork_rejection)?;
|
2021-01-06 03:01:46 +00:00
|
|
|
Response::builder()
|
|
|
|
.status(200)
|
|
|
|
.header("Content-Type", "application/octet-stream")
|
|
|
|
.body(state.as_ssz_bytes().into())
|
2021-10-28 01:18:04 +00:00
|
|
|
.map(|resp| add_consensus_version_header(resp, fork_name))
|
2021-01-06 03:01:46 +00:00
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::custom_server_error(format!(
|
|
|
|
"failed to create response: {}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})
|
|
|
|
}
|
|
|
|
_ => state_id.map_state(&chain, |state| {
|
2021-10-28 01:18:04 +00:00
|
|
|
let fork_name = state
|
|
|
|
.fork_name(&chain.spec)
|
|
|
|
.map_err(inconsistent_fork_rejection)?;
|
2021-08-06 00:47:31 +00:00
|
|
|
let res = fork_versioned_response(endpoint_version, fork_name, &state)?;
|
2021-10-28 01:18:04 +00:00
|
|
|
Ok(add_consensus_version_header(
|
|
|
|
warp::reply::json(&res).into_response(),
|
|
|
|
fork_name,
|
|
|
|
))
|
2021-01-06 03:01:46 +00:00
|
|
|
}),
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
2021-01-06 03:01:46 +00:00
|
|
|
},
|
|
|
|
);
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
// GET debug/beacon/heads
|
|
|
|
let get_debug_beacon_heads = eth1_v1
|
|
|
|
.and(warp::path("debug"))
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("heads"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let heads = chain
|
|
|
|
.heads()
|
|
|
|
.into_iter()
|
2021-05-10 00:53:09 +00:00
|
|
|
.map(|(root, slot)| api_types::ChainHeadData { slot, root })
|
2020-09-29 03:46:54 +00:00
|
|
|
.collect::<Vec<_>>();
|
|
|
|
Ok(api_types::GenericResponse::from(heads))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
/*
|
|
|
|
* node
|
|
|
|
*/
|
|
|
|
|
|
|
|
// GET node/identity
|
|
|
|
let get_node_identity = eth1_v1
|
|
|
|
.and(warp::path("node"))
|
|
|
|
.and(warp::path("identity"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals.clone())
|
|
|
|
.and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
2020-10-22 02:59:42 +00:00
|
|
|
let enr = network_globals.local_enr();
|
|
|
|
let p2p_addresses = enr.multiaddr_p2p_tcp();
|
|
|
|
let discovery_addresses = enr.multiaddr_p2p_udp();
|
2021-08-04 01:44:57 +00:00
|
|
|
let meta_data = network_globals.local_metadata.read();
|
2020-09-29 03:46:54 +00:00
|
|
|
Ok(api_types::GenericResponse::from(api_types::IdentityData {
|
|
|
|
peer_id: network_globals.local_peer_id().to_base58(),
|
2020-10-22 02:59:42 +00:00
|
|
|
enr,
|
|
|
|
p2p_addresses,
|
|
|
|
discovery_addresses,
|
|
|
|
metadata: api_types::MetaData {
|
2021-08-04 01:44:57 +00:00
|
|
|
seq_number: *meta_data.seq_number(),
|
2020-10-22 02:59:42 +00:00
|
|
|
attnets: format!(
|
2021-08-04 01:44:57 +00:00
|
|
|
"0x{}",
|
|
|
|
hex::encode(meta_data.attnets().clone().into_bytes()),
|
|
|
|
),
|
|
|
|
syncnets: format!(
|
2020-10-22 02:59:42 +00:00
|
|
|
"0x{}",
|
|
|
|
hex::encode(
|
2021-08-04 01:44:57 +00:00
|
|
|
meta_data
|
|
|
|
.syncnets()
|
|
|
|
.map(|x| x.clone())
|
|
|
|
.unwrap_or_default()
|
2020-10-22 02:59:42 +00:00
|
|
|
.into_bytes()
|
2021-08-04 01:44:57 +00:00
|
|
|
)
|
2020-10-22 02:59:42 +00:00
|
|
|
),
|
|
|
|
},
|
2020-09-29 03:46:54 +00:00
|
|
|
}))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET node/version
|
|
|
|
let get_node_version = eth1_v1
|
|
|
|
.and(warp::path("node"))
|
|
|
|
.and(warp::path("version"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(api_types::GenericResponse::from(api_types::VersionData {
|
|
|
|
version: version_with_platform(),
|
|
|
|
}))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET node/syncing
|
|
|
|
let get_node_syncing = eth1_v1
|
|
|
|
.and(warp::path("node"))
|
|
|
|
.and(warp::path("syncing"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals.clone())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|network_globals: Arc<NetworkGlobals<T::EthSpec>>, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let head_slot = chain
|
|
|
|
.head_info()
|
|
|
|
.map(|info| info.slot)
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
let current_slot = chain
|
|
|
|
.slot()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
|
|
|
|
// Taking advantage of saturating subtraction on slot.
|
|
|
|
let sync_distance = current_slot - head_slot;
|
|
|
|
|
|
|
|
let syncing_data = api_types::SyncingData {
|
|
|
|
is_syncing: network_globals.sync_state.read().is_syncing(),
|
|
|
|
head_slot,
|
|
|
|
sync_distance,
|
|
|
|
};
|
|
|
|
|
|
|
|
Ok(api_types::GenericResponse::from(syncing_data))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2020-10-22 02:59:42 +00:00
|
|
|
// GET node/health
|
|
|
|
let get_node_health = eth1_v1
|
|
|
|
.and(warp::path("node"))
|
|
|
|
.and(warp::path("health"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals.clone())
|
|
|
|
.and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
|
|
|
|
blocking_task(move || match *network_globals.sync_state.read() {
|
2020-11-01 23:37:39 +00:00
|
|
|
SyncState::SyncingFinalized { .. }
|
|
|
|
| SyncState::SyncingHead { .. }
|
2021-09-22 00:37:28 +00:00
|
|
|
| SyncState::SyncTransition
|
|
|
|
| SyncState::BackFillSyncing { .. } => Ok(warp::reply::with_status(
|
2020-11-01 23:37:39 +00:00
|
|
|
warp::reply(),
|
|
|
|
warp::http::StatusCode::PARTIAL_CONTENT,
|
|
|
|
)),
|
2020-10-22 02:59:42 +00:00
|
|
|
SyncState::Synced => Ok(warp::reply::with_status(
|
|
|
|
warp::reply(),
|
|
|
|
warp::http::StatusCode::OK,
|
|
|
|
)),
|
|
|
|
SyncState::Stalled => Err(warp_utils::reject::not_synced(
|
|
|
|
"sync stalled, beacon chain may not yet be initialized.".to_string(),
|
|
|
|
)),
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET node/peers/{peer_id}
|
|
|
|
let get_node_peers_by_id = eth1_v1
|
|
|
|
.and(warp::path("node"))
|
|
|
|
.and(warp::path("peers"))
|
|
|
|
.and(warp::path::param::<String>())
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals.clone())
|
|
|
|
.and_then(
|
|
|
|
|requested_peer_id: String, network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let peer_id = PeerId::from_bytes(
|
2020-12-23 07:53:36 +00:00
|
|
|
&bs58::decode(requested_peer_id.as_str())
|
2020-10-22 02:59:42 +00:00
|
|
|
.into_vec()
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::custom_bad_request(format!(
|
|
|
|
"invalid peer id: {}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?,
|
|
|
|
)
|
|
|
|
.map_err(|_| {
|
|
|
|
warp_utils::reject::custom_bad_request("invalid peer id.".to_string())
|
|
|
|
})?;
|
|
|
|
|
2021-11-25 03:45:52 +00:00
|
|
|
if let Some(peer_info) = network_globals.peers.read().peer_info(&peer_id) {
|
2021-10-11 02:45:06 +00:00
|
|
|
let address = if let Some(socket_addr) = peer_info.seen_addresses().next() {
|
Rename eth2_libp2p to lighthouse_network (#2702)
## Description
The `eth2_libp2p` crate was originally named and designed to incorporate a simple libp2p integration into lighthouse. Since its origins, the crate's purpose has expanded dramatically. It now houses a lot more sophistication that is specific to lighthouse and no longer just a libp2p integration.
As of this writing it currently houses the following high-level lighthouse-specific logic:
- Lighthouse's implementation of the eth2 RPC protocol and specific encodings/decodings
- Integration and handling of ENRs with respect to libp2p and eth2
- Lighthouse's discovery logic, its integration with discv5 and logic about searching and handling peers.
- Lighthouse's peer manager - This is a large module handling various aspects of Lighthouse's network, such as peer scoring, handling pings and metadata, connection maintenance and recording, etc.
- Lighthouse's peer database - This is a collection of information stored for each individual peer which is specific to lighthouse. We store connection state, sync state, last seen ips and scores etc. The data stored for each peer is designed for various elements of the lighthouse code base such as syncing and the http api.
- Gossipsub scoring - This stores a collection of gossipsub 1.1 scoring mechanisms that are continuously analyssed and updated based on the ethereum 2 networks and how Lighthouse performs on these networks.
- Lighthouse specific types for managing gossipsub topics, sync status and ENR fields
- Lighthouse's network HTTP API metrics - A collection of metrics for lighthouse network monitoring
- Lighthouse's custom configuration of all networking protocols, RPC, gossipsub, discovery, identify and libp2p.
Therefore it makes sense to rename the crate to be more akin to its current purposes, simply that it manages the majority of Lighthouse's network stack. This PR renames this crate to `lighthouse_network`
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2021-10-19 00:30:39 +00:00
|
|
|
let mut addr = lighthouse_network::Multiaddr::from(socket_addr.ip());
|
|
|
|
addr.push(lighthouse_network::multiaddr::Protocol::Tcp(
|
|
|
|
socket_addr.port(),
|
|
|
|
));
|
2020-11-09 04:01:03 +00:00
|
|
|
addr.to_string()
|
2021-10-11 02:45:06 +00:00
|
|
|
} else if let Some(addr) = peer_info.listening_addresses().first() {
|
2020-11-09 04:01:03 +00:00
|
|
|
addr.to_string()
|
|
|
|
} else {
|
|
|
|
String::new()
|
2020-10-22 02:59:42 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
// the eth2 API spec implies only peers we have been connected to at some point should be included.
|
2021-10-11 02:45:06 +00:00
|
|
|
if let Some(dir) = peer_info.connection_direction().as_ref() {
|
2020-10-22 02:59:42 +00:00
|
|
|
return Ok(api_types::GenericResponse::from(api_types::PeerData {
|
|
|
|
peer_id: peer_id.to_string(),
|
2021-10-11 02:45:06 +00:00
|
|
|
enr: peer_info.enr().map(|enr| enr.to_base64()),
|
2020-11-13 02:02:41 +00:00
|
|
|
last_seen_p2p_address: address,
|
2021-07-30 01:11:47 +00:00
|
|
|
direction: api_types::PeerDirection::from_connection_direction(dir),
|
2020-10-22 02:59:42 +00:00
|
|
|
state: api_types::PeerState::from_peer_connection_status(
|
2021-07-30 01:11:47 +00:00
|
|
|
peer_info.connection_status(),
|
2020-10-22 02:59:42 +00:00
|
|
|
),
|
|
|
|
}));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
Err(warp_utils::reject::custom_not_found(
|
|
|
|
"peer not found.".to_string(),
|
|
|
|
))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
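
    // Example (a sketch; the peer ID is a hypothetical base58 value, default API address assumed):
    //
    //   curl "http://localhost:5052/eth/v1/node/peers/16Uiu2HAm..."
    //
    // Unknown peers, or peers we have never connected to, are reported as not found.
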
    // GET node/peers
    let get_node_peers = eth1_v1
        .and(warp::path("node"))
        .and(warp::path("peers"))
        .and(warp::path::end())
        .and(multi_key_query::<api_types::PeersQuery>())
        .and(network_globals.clone())
        .and_then(
            |query_res: Result<api_types::PeersQuery, warp::Rejection>,
             network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
                blocking_json_task(move || {
                    let query = query_res?;
                    let mut peers: Vec<api_types::PeerData> = Vec::new();
                    network_globals
                        .peers
                        .read()
                        .peers()
                        .for_each(|(peer_id, peer_info)| {
                            let address =
                                if let Some(socket_addr) = peer_info.seen_addresses().next() {
                                    let mut addr =
                                        lighthouse_network::Multiaddr::from(socket_addr.ip());
                                    addr.push(lighthouse_network::multiaddr::Protocol::Tcp(
                                        socket_addr.port(),
                                    ));
                                    addr.to_string()
                                } else if let Some(addr) = peer_info.listening_addresses().first() {
                                    addr.to_string()
                                } else {
                                    String::new()
                                };

                            // the eth2 API spec implies only peers we have been connected to at some point should be included.
                            if let Some(dir) = peer_info.connection_direction() {
                                let direction =
                                    api_types::PeerDirection::from_connection_direction(dir);
                                let state = api_types::PeerState::from_peer_connection_status(
                                    peer_info.connection_status(),
                                );

                                let state_matches = query.state.as_ref().map_or(true, |states| {
                                    states.iter().any(|state_param| *state_param == state)
                                });
                                let direction_matches =
                                    query.direction.as_ref().map_or(true, |directions| {
                                        directions.iter().any(|dir_param| *dir_param == direction)
                                    });

                                if state_matches && direction_matches {
                                    peers.push(api_types::PeerData {
                                        peer_id: peer_id.to_string(),
                                        enr: peer_info.enr().map(|enr| enr.to_base64()),
                                        last_seen_p2p_address: address,
                                        direction,
                                        state,
                                    });
                                }
                            }
                        });
                    Ok(api_types::PeersData {
                        meta: api_types::PeersMetaData {
                            count: peers.len() as u64,
                        },
                        data: peers,
                    })
                })
            },
        );
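
    // Example, filtering by connection state and direction (a sketch; filter values are
    // hypothetical and the default API address is assumed):
    //
    //   curl "http://localhost:5052/eth/v1/node/peers?state=connected&direction=inbound"
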
    // GET node/peer_count
    let get_node_peer_count = eth1_v1
        .and(warp::path("node"))
        .and(warp::path("peer_count"))
        .and(warp::path::end())
        .and(network_globals.clone())
        .and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
            blocking_json_task(move || {
                let mut connected: u64 = 0;
                let mut connecting: u64 = 0;
                let mut disconnected: u64 = 0;
                let mut disconnecting: u64 = 0;

                network_globals
                    .peers
                    .read()
                    .peers()
                    .for_each(|(_, peer_info)| {
                        let state = api_types::PeerState::from_peer_connection_status(
                            peer_info.connection_status(),
                        );
                        match state {
                            api_types::PeerState::Connected => connected += 1,
                            api_types::PeerState::Connecting => connecting += 1,
                            api_types::PeerState::Disconnected => disconnected += 1,
                            api_types::PeerState::Disconnecting => disconnecting += 1,
                        }
                    });

                Ok(api_types::GenericResponse::from(api_types::PeerCount {
                    connected,
                    connecting,
                    disconnected,
                    disconnecting,
                }))
            })
        });

    /*
     * validator
     */

    // GET validator/duties/proposer/{epoch}
    let get_validator_duties_proposer = eth1_v1
        .and(warp::path("validator"))
        .and(warp::path("duties"))
        .and(warp::path("proposer"))
        .and(warp::path::param::<Epoch>().or_else(|_| async {
            Err(warp_utils::reject::custom_bad_request(
                "Invalid epoch".to_string(),
            ))
        }))
        .and(warp::path::end())
        .and(not_while_syncing_filter.clone())
        .and(chain_filter.clone())
        .and(log_filter.clone())
        .and_then(|epoch: Epoch, chain: Arc<BeaconChain<T>>, log: Logger| {
            blocking_json_task(move || proposer_duties::proposer_duties(epoch, &chain, &log))
        });
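
    // Example (a sketch; epoch 100 is an arbitrary placeholder, default API address assumed):
    //
    //   curl "http://localhost:5052/eth/v1/validator/duties/proposer/100"
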
    // GET validator/blocks/{slot}
    let get_validator_blocks = any_version
        .and(warp::path("validator"))
        .and(warp::path("blocks"))
        .and(warp::path::param::<Slot>().or_else(|_| async {
            Err(warp_utils::reject::custom_bad_request(
                "Invalid slot".to_string(),
            ))
        }))
        .and(warp::path::end())
        .and(not_while_syncing_filter.clone())
        .and(warp::query::<api_types::ValidatorBlocksQuery>())
        .and(chain_filter.clone())
        .and_then(
            |endpoint_version: EndpointVersion,
             slot: Slot,
             query: api_types::ValidatorBlocksQuery,
             chain: Arc<BeaconChain<T>>| {
                blocking_json_task(move || {
                    let randao_reveal = query.randao_reveal.as_ref().map_or_else(
                        || {
                            if query.verify_randao {
                                Err(warp_utils::reject::custom_bad_request(
                                    "randao_reveal is mandatory unless verify_randao=false".into(),
                                ))
                            } else {
                                Ok(Signature::empty())
                            }
                        },
                        |sig_bytes| {
                            sig_bytes.try_into().map_err(|e| {
                                warp_utils::reject::custom_bad_request(format!(
                                    "randao reveal is not a valid BLS signature: {:?}",
                                    e
                                ))
                            })
                        },
                    )?;

                    let randao_verification = if query.verify_randao {
                        ProduceBlockVerification::VerifyRandao
                    } else {
                        ProduceBlockVerification::NoVerification
                    };

                    let (block, _) = chain
                        .produce_block_with_verification::<FullPayload<T::EthSpec>>(
                            randao_reveal,
                            slot,
                            query.graffiti.map(Into::into),
                            randao_verification,
                        )
                        .map_err(warp_utils::reject::block_production_error)?;
                    let fork_name = block
                        .to_ref()
                        .fork_name(&chain.spec)
                        .map_err(inconsistent_fork_rejection)?;
                    fork_versioned_response(endpoint_version, fork_name, block)
                })
            },
        );
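
    // Example (a sketch; the slot and randao_reveal values are placeholders, default API
    // address assumed; the route accepts any API version matched by `any_version`):
    //
    //   curl "http://localhost:5052/eth/v2/validator/blocks/12345?randao_reveal=0x..."
    //
    // Passing verify_randao=false allows omitting randao_reveal, in which case an empty
    // signature is substituted and RANDAO verification is skipped during block production.
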
// GET validator/blinded_blocks/{slot}
|
|
|
|
let get_validator_blinded_blocks = any_version
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("blinded_blocks"))
|
|
|
|
.and(warp::path::param::<Slot>().or_else(|_| async {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"Invalid slot".to_string(),
|
|
|
|
))
|
|
|
|
}))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
|
|
|
.and(warp::query::<api_types::ValidatorBlocksQuery>())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|endpoint_version: EndpointVersion,
|
|
|
|
slot: Slot,
|
|
|
|
query: api_types::ValidatorBlocksQuery,
|
|
|
|
chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let randao_reveal = query.randao_reveal.as_ref().map_or_else(
|
|
|
|
|| {
|
|
|
|
if query.verify_randao {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"randao_reveal is mandatory unless verify_randao=false".into(),
|
|
|
|
))
|
|
|
|
} else {
|
|
|
|
Ok(Signature::empty())
|
|
|
|
}
|
|
|
|
},
|
|
|
|
|sig_bytes| {
|
|
|
|
sig_bytes.try_into().map_err(|e| {
|
|
|
|
warp_utils::reject::custom_bad_request(format!(
|
|
|
|
"randao reveal is not a valid BLS signature: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
)?;
|
|
|
|
|
|
|
|
let randao_verification = if query.verify_randao {
|
|
|
|
ProduceBlockVerification::VerifyRandao
|
|
|
|
} else {
|
|
|
|
ProduceBlockVerification::NoVerification
|
|
|
|
};
|
|
|
|
|
|
|
|
let (block, _) = chain
|
|
|
|
.produce_block_with_verification::<BlindedPayload<T::EthSpec>>(
|
2022-03-28 07:14:13 +00:00
|
|
|
randao_reveal,
|
|
|
|
slot,
|
|
|
|
query.graffiti.map(Into::into),
|
|
|
|
randao_verification,
|
|
|
|
)
|
2021-08-06 00:47:31 +00:00
|
|
|
.map_err(warp_utils::reject::block_production_error)?;
|
2021-10-28 01:18:04 +00:00
|
|
|
let fork_name = block
|
|
|
|
.to_ref()
|
|
|
|
.fork_name(&chain.spec)
|
|
|
|
.map_err(inconsistent_fork_rejection)?;
|
2021-08-06 00:47:31 +00:00
|
|
|
fork_versioned_response(endpoint_version, fork_name, block)
|
2020-11-09 23:13:56 +00:00
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET validator/attestation_data?slot,committee_index
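    // Example (a sketch; slot and committee_index values are placeholders, default API
    // address assumed). Requests more than one slot ahead of the current slot are rejected.
    //
    //   curl "http://localhost:5052/eth/v1/validator/attestation_data?slot=12345&committee_index=0"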
|
|
|
|
let get_validator_attestation_data = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("attestation_data"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::query::<api_types::ValidatorAttestationDataQuery>())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
2021-10-07 11:24:57 +00:00
|
|
|
.and(only_with_safe_head.clone())
|
2020-11-09 23:13:56 +00:00
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|query: api_types::ValidatorAttestationDataQuery, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
2020-11-16 02:59:35 +00:00
|
|
|
let current_slot = chain
|
|
|
|
.slot()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
|
|
|
|
// allow a tolerance of one slot to account for clock skew
|
|
|
|
if query.slot > current_slot + 1 {
|
|
|
|
return Err(warp_utils::reject::custom_bad_request(format!(
|
|
|
|
"request slot {} is more than one slot past the current slot {}",
|
|
|
|
query.slot, current_slot
|
|
|
|
)));
|
|
|
|
}
|
|
|
|
|
2020-11-09 23:13:56 +00:00
|
|
|
chain
|
|
|
|
.produce_unaggregated_attestation(query.slot, query.committee_index)
|
|
|
|
.map(|attestation| attestation.data)
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET validator/aggregate_attestation?attestation_data_root,slot
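    // Example (a sketch; the attestation data root and slot are placeholders, default API
    // address assumed):
    //
    //   curl "http://localhost:5052/eth/v1/validator/aggregate_attestation?attestation_data_root=0x...&slot=12345"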
|
|
|
|
let get_validator_aggregate_attestation = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("aggregate_attestation"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::query::<api_types::ValidatorAggregateAttestationQuery>())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
2021-10-07 11:24:57 +00:00
|
|
|
.and(only_with_safe_head.clone())
|
2020-11-09 23:13:56 +00:00
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|query: api_types::ValidatorAggregateAttestationQuery, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
chain
|
|
|
|
.get_aggregated_attestation_by_slot_and_root(
|
|
|
|
query.slot,
|
|
|
|
&query.attestation_data_root,
|
|
|
|
)
|
2022-04-13 03:54:42 +00:00
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::custom_bad_request(format!(
|
|
|
|
"unable to fetch aggregate: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?
|
2020-11-09 23:13:56 +00:00
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
.ok_or_else(|| {
|
|
|
|
warp_utils::reject::custom_not_found(
|
|
|
|
"no matching aggregate found".to_string(),
|
|
|
|
)
|
|
|
|
})
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// POST validator/duties/attester/{epoch}
|
|
|
|
let post_validator_duties_attester = eth1_v1
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("duties"))
|
|
|
|
.and(warp::path("attester"))
|
2020-12-03 23:10:08 +00:00
|
|
|
.and(warp::path::param::<Epoch>().or_else(|_| async {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"Invalid epoch".to_string(),
|
|
|
|
))
|
|
|
|
}))
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(warp::path::end())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
2020-11-09 23:13:56 +00:00
|
|
|
.and(warp::body::json())
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
2020-11-09 23:13:56 +00:00
|
|
|
|epoch: Epoch, indices: api_types::ValidatorIndexData, chain: Arc<BeaconChain<T>>| {
|
2020-09-29 03:46:54 +00:00
|
|
|
blocking_json_task(move || {
|
2021-03-17 05:09:57 +00:00
|
|
|
attester_duties::attester_duties(epoch, &indices.0, &chain)
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2021-08-06 00:47:31 +00:00
|
|
|
// POST validator/duties/sync
|
|
|
|
let post_validator_duties_sync = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("duties"))
|
|
|
|
.and(warp::path("sync"))
|
|
|
|
.and(warp::path::param::<Epoch>().or_else(|_| async {
|
|
|
|
Err(warp_utils::reject::custom_bad_request(
|
|
|
|
"Invalid epoch".to_string(),
|
|
|
|
))
|
|
|
|
}))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|epoch: Epoch, indices: api_types::ValidatorIndexData, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
sync_committees::sync_committee_duties(epoch, &indices.0, &chain)
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET validator/sync_committee_contribution
|
|
|
|
let get_validator_sync_committee_contribution = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("sync_committee_contribution"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::query::<SyncContributionData>())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
2021-10-07 11:24:57 +00:00
|
|
|
.and(only_with_safe_head)
|
2021-08-06 00:47:31 +00:00
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|sync_committee_data: SyncContributionData, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
chain
|
|
|
|
.get_aggregated_sync_committee_contribution(&sync_committee_data)
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
.ok_or_else(|| {
|
|
|
|
warp_utils::reject::custom_not_found(
|
|
|
|
"no matching sync contribution found".to_string(),
|
|
|
|
)
|
|
|
|
})
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// POST validator/aggregate_and_proofs
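    // Accepts a JSON array of `SignedAggregateAndProof` objects. Each aggregate is verified for
    // gossip, published to the libp2p network if valid, then applied to fork choice and added to
    // the block inclusion pool; per-item failures are returned as an indexed bad request.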
|
|
|
|
let post_validator_aggregate_and_proofs = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("aggregate_and_proofs"))
|
|
|
|
.and(warp::path::end())
|
2021-08-06 00:47:31 +00:00
|
|
|
.and(not_while_syncing_filter.clone())
|
2020-09-29 03:46:54 +00:00
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
2020-11-09 23:13:56 +00:00
|
|
|
.and(log_filter.clone())
|
2020-09-29 03:46:54 +00:00
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
2020-11-09 23:13:56 +00:00
|
|
|
aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>, log: Logger| {
|
2020-09-29 03:46:54 +00:00
|
|
|
blocking_json_task(move || {
|
2021-01-20 19:19:38 +00:00
|
|
|
let seen_timestamp = timestamp_now();
|
2020-11-09 23:13:56 +00:00
|
|
|
let mut verified_aggregates = Vec::with_capacity(aggregates.len());
|
|
|
|
let mut messages = Vec::with_capacity(aggregates.len());
|
|
|
|
let mut failures = Vec::new();
|
|
|
|
|
|
|
|
// Verify that all messages in the post are valid before processing further
|
|
|
|
for (index, aggregate) in aggregates.iter().enumerate() {
|
2021-07-14 05:24:08 +00:00
|
|
|
match chain.verify_aggregated_attestation_for_gossip(aggregate) {
|
2020-11-09 23:13:56 +00:00
|
|
|
Ok(verified_aggregate) => {
|
|
|
|
messages.push(PubsubMessage::AggregateAndProofAttestation(Box::new(
|
|
|
|
verified_aggregate.aggregate().clone(),
|
|
|
|
)));
|
2021-01-20 19:19:38 +00:00
|
|
|
|
|
|
|
// Notify the validator monitor.
|
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.read()
|
|
|
|
.register_api_aggregated_attestation(
|
|
|
|
seen_timestamp,
|
|
|
|
verified_aggregate.aggregate(),
|
|
|
|
verified_aggregate.indexed_attestation(),
|
|
|
|
&chain.slot_clock,
|
|
|
|
);
|
|
|
|
|
2020-11-09 23:13:56 +00:00
|
|
|
verified_aggregates.push((index, verified_aggregate));
|
|
|
|
}
|
2020-09-29 03:46:54 +00:00
|
|
|
// If we already know the attestation, don't broadcast it or attempt to
|
|
|
|
// further verify it. Return success.
|
|
|
|
//
|
|
|
|
// It's reasonably likely that two different validators produce
|
|
|
|
// identical aggregates, especially if they're using the same beacon
|
|
|
|
// node.
|
|
|
|
Err(AttnError::AttestationAlreadyKnown(_)) => continue,
|
|
|
|
Err(e) => {
|
2020-11-09 23:13:56 +00:00
|
|
|
error!(log,
|
|
|
|
"Failure verifying aggregate and proofs";
|
|
|
|
"error" => format!("{:?}", e),
|
|
|
|
"request_index" => index,
|
|
|
|
"aggregator_index" => aggregate.message.aggregator_index,
|
|
|
|
"attestation_index" => aggregate.message.aggregate.data.index,
|
|
|
|
"attestation_slot" => aggregate.message.aggregate.data.slot,
|
|
|
|
);
|
|
|
|
failures.push(api_types::Failure::new(index, format!("Verification: {:?}", e)));
|
2021-02-10 23:29:49 +00:00
|
|
|
}
|
2020-11-09 23:13:56 +00:00
|
|
|
}
|
|
|
|
}
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2020-11-09 23:13:56 +00:00
|
|
|
// Publish aggregate attestations to the libp2p network
|
|
|
|
if !messages.is_empty() {
|
|
|
|
publish_network_message(&network_tx, NetworkMessage::Publish { messages })?;
|
|
|
|
}
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2020-11-09 23:13:56 +00:00
|
|
|
// Import aggregate attestations
|
|
|
|
for (index, verified_aggregate) in verified_aggregates {
|
|
|
|
if let Err(e) = chain.apply_attestation_to_fork_choice(&verified_aggregate) {
|
|
|
|
error!(log,
|
|
|
|
"Failure applying verified aggregate attestation to fork choice";
|
|
|
|
"error" => format!("{:?}", e),
|
|
|
|
"request_index" => index,
|
|
|
|
"aggregator_index" => verified_aggregate.aggregate().message.aggregator_index,
|
|
|
|
"attestation_index" => verified_aggregate.attestation().data.index,
|
|
|
|
"attestation_slot" => verified_aggregate.attestation().data.slot,
|
|
|
|
);
|
|
|
|
failures.push(api_types::Failure::new(index, format!("Fork choice: {:?}", e)));
|
|
|
|
}
|
|
|
|
if let Err(e) = chain.add_to_block_inclusion_pool(&verified_aggregate) {
|
2020-11-09 23:13:56 +00:00
|
|
|
warn!(log,
|
|
|
|
"Could not add verified aggregate attestation to the inclusion pool";
|
|
|
|
"error" => format!("{:?}", e),
|
|
|
|
"request_index" => index,
|
|
|
|
);
|
|
|
|
failures.push(api_types::Failure::new(index, format!("Op pool: {:?}", e)));
|
|
|
|
}
|
|
|
|
}
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2020-11-09 23:13:56 +00:00
|
|
|
if !failures.is_empty() {
|
|
|
|
Err(warp_utils::reject::indexed_bad_request("error processing aggregate and proofs".to_string(),
|
2021-02-10 23:29:49 +00:00
|
|
|
failures,
|
2020-09-29 03:46:54 +00:00
|
|
|
))
|
2020-11-09 23:13:56 +00:00
|
|
|
} else {
|
|
|
|
Ok(())
|
|
|
|
}
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2021-08-06 00:47:31 +00:00
|
|
|
let post_validator_contribution_and_proofs = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("contribution_and_proofs"))
|
|
|
|
.and(warp::path::end())
|
2021-09-22 00:37:28 +00:00
|
|
|
.and(not_while_syncing_filter.clone())
|
2021-08-06 00:47:31 +00:00
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter.clone())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
|
|
|
contributions: Vec<SignedContributionAndProof<T::EthSpec>>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
log: Logger| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
sync_committees::process_signed_contribution_and_proofs(
|
|
|
|
contributions,
|
|
|
|
network_tx,
|
|
|
|
&chain,
|
|
|
|
log,
|
|
|
|
)?;
|
|
|
|
Ok(api_types::GenericResponse::from(()))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// POST validator/beacon_committee_subscriptions
|
|
|
|
let post_validator_beacon_committee_subscriptions = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("beacon_committee_subscriptions"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
2021-08-06 00:47:31 +00:00
|
|
|
.and(network_tx_filter.clone())
|
2021-01-20 19:19:38 +00:00
|
|
|
.and(chain_filter.clone())
|
2020-09-29 03:46:54 +00:00
|
|
|
.and_then(
|
|
|
|
|subscriptions: Vec<api_types::BeaconCommitteeSubscription>,
|
2021-01-20 19:19:38 +00:00
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
chain: Arc<BeaconChain<T>>| {
|
2020-09-29 03:46:54 +00:00
|
|
|
blocking_json_task(move || {
|
|
|
|
for subscription in &subscriptions {
|
2021-01-20 19:19:38 +00:00
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.write()
|
|
|
|
.auto_register_local_validator(subscription.validator_index);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
let subscription = api_types::ValidatorSubscription {
|
|
|
|
validator_index: subscription.validator_index,
|
|
|
|
attestation_committee_index: subscription.committee_index,
|
|
|
|
slot: subscription.slot,
|
|
|
|
committee_count_at_slot: subscription.committees_at_slot,
|
|
|
|
is_aggregator: subscription.is_aggregator,
|
|
|
|
};
|
|
|
|
|
|
|
|
publish_network_message(
|
|
|
|
&network_tx,
|
2021-08-04 01:44:57 +00:00
|
|
|
NetworkMessage::AttestationSubscribe {
|
2020-09-29 03:46:54 +00:00
|
|
|
subscriptions: vec![subscription],
|
|
|
|
},
|
|
|
|
)?;
|
|
|
|
}
|
|
|
|
|
|
|
|
Ok(())
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2022-02-08 19:52:20 +00:00
|
|
|
// POST validator/prepare_beacon_proposer
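    // Accepts a JSON array of `ProposerPreparationData` (validator index plus fee recipient),
    // forwards it to the execution layer for the current epoch, then triggers proposer
    // preparation on the beacon chain.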
|
|
|
|
let post_validator_prepare_beacon_proposer = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("prepare_beacon_proposer"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(not_while_syncing_filter.clone())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(warp::addr::remote())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and_then(
|
|
|
|
|chain: Arc<BeaconChain<T>>,
|
|
|
|
client_addr: Option<SocketAddr>,
|
|
|
|
log: Logger,
|
|
|
|
preparation_data: Vec<ProposerPreparationData>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let execution_layer = chain
|
|
|
|
.execution_layer
|
|
|
|
.as_ref()
|
|
|
|
.ok_or(BeaconChainError::ExecutionLayerMissing)
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
let current_epoch = chain
|
|
|
|
.epoch()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
|
|
|
|
debug!(
|
|
|
|
log,
|
|
|
|
"Received proposer preparation data";
|
|
|
|
"count" => preparation_data.len(),
|
|
|
|
"client" => client_addr
|
|
|
|
.map(|a| a.to_string())
|
|
|
|
.unwrap_or_else(|| "unknown".to_string()),
|
|
|
|
);
|
|
|
|
|
|
|
|
execution_layer
|
|
|
|
.update_proposer_preparation_blocking(current_epoch, &preparation_data)
|
|
|
|
.map_err(|_e| {
|
|
|
|
warp_utils::reject::custom_bad_request(
|
|
|
|
"error processing proposer preparations".to_string(),
|
|
|
|
)
|
|
|
|
})?;
|
|
|
|
|
2022-03-09 00:42:05 +00:00
|
|
|
chain.prepare_beacon_proposer_blocking().map_err(|e| {
|
|
|
|
warp_utils::reject::custom_bad_request(format!(
|
|
|
|
"error updating proposer preparations: {:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})?;
|
|
|
|
|
2022-02-08 19:52:20 +00:00
|
|
|
Ok(())
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2021-08-06 00:47:31 +00:00
|
|
|
// POST validator/sync_committee_subscriptions
|
|
|
|
let post_validator_sync_committee_subscriptions = eth1_v1
|
|
|
|
.and(warp::path("validator"))
|
|
|
|
.and(warp::path("sync_committee_subscriptions"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(network_tx_filter)
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|subscriptions: Vec<types::SyncCommitteeSubscription>,
|
|
|
|
network_tx: UnboundedSender<NetworkMessage<T::EthSpec>>,
|
|
|
|
chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
for subscription in subscriptions {
|
|
|
|
chain
|
|
|
|
.validator_monitor
|
|
|
|
.write()
|
|
|
|
.auto_register_local_validator(subscription.validator_index);
|
|
|
|
|
|
|
|
publish_network_message(
|
|
|
|
&network_tx,
|
|
|
|
NetworkMessage::SyncCommitteeSubscribe {
|
|
|
|
subscriptions: vec![subscription],
|
|
|
|
},
|
|
|
|
)?;
|
|
|
|
}
|
|
|
|
|
|
|
|
Ok(())
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2021-07-31 03:50:52 +00:00
|
|
|
// POST lighthouse/liveness
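    // The request body is a `LivenessRequestData`: a list of validator `indices` plus an
    // `epoch`, which must be within one epoch of the node's current epoch. The response
    // reports, per index, whether the validator has been seen at that epoch.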
|
|
|
|
let post_lighthouse_liveness = warp::path("lighthouse")
|
|
|
|
.and(warp::path("liveness"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|request_data: api_types::LivenessRequestData, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
// Ensure the request is for either the current, previous or next epoch.
|
|
|
|
let current_epoch = chain
|
|
|
|
.epoch()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
|
|
|
let prev_epoch = current_epoch.saturating_sub(Epoch::new(1));
|
|
|
|
let next_epoch = current_epoch.saturating_add(Epoch::new(1));
|
|
|
|
|
|
|
|
if request_data.epoch < prev_epoch || request_data.epoch > next_epoch {
|
|
|
|
return Err(warp_utils::reject::custom_bad_request(format!(
|
|
|
|
"request epoch {} is more than one epoch from the current epoch {}",
|
|
|
|
request_data.epoch, current_epoch
|
|
|
|
)));
|
|
|
|
}
|
|
|
|
|
|
|
|
let liveness: Vec<api_types::LivenessResponseData> = request_data
|
|
|
|
.indices
|
|
|
|
.iter()
|
|
|
|
.cloned()
|
|
|
|
.map(|index| {
|
|
|
|
let is_live =
|
|
|
|
chain.validator_seen_at_epoch(index as usize, request_data.epoch);
|
|
|
|
api_types::LivenessResponseData {
|
|
|
|
index: index as u64,
|
|
|
|
epoch: request_data.epoch,
|
|
|
|
is_live,
|
|
|
|
}
|
|
|
|
})
|
|
|
|
.collect();
|
|
|
|
|
|
|
|
Ok(api_types::GenericResponse::from(liveness))
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// GET lighthouse/health
|
|
|
|
let get_lighthouse_health = warp::path("lighthouse")
|
|
|
|
.and(warp::path("health"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
eth2::lighthouse::Health::observe()
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
.map_err(warp_utils::reject::custom_bad_request)
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET lighthouse/syncing
|
|
|
|
let get_lighthouse_syncing = warp::path("lighthouse")
|
|
|
|
.and(warp::path("syncing"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals.clone())
|
|
|
|
.and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(api_types::GenericResponse::from(
|
|
|
|
network_globals.sync_state(),
|
|
|
|
))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2021-12-22 06:17:14 +00:00
|
|
|
// GET lighthouse/nat
|
|
|
|
let get_lighthouse_nat = warp::path("lighthouse")
|
|
|
|
.and(warp::path("nat"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and_then(|| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(api_types::GenericResponse::from(
|
|
|
|
lighthouse_network::metrics::NAT_OPEN
|
|
|
|
.as_ref()
|
|
|
|
.map(|v| v.get())
|
|
|
|
.unwrap_or(0)
|
|
|
|
!= 0,
|
|
|
|
))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// GET lighthouse/peers
|
|
|
|
let get_lighthouse_peers = warp::path("lighthouse")
|
|
|
|
.and(warp::path("peers"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals.clone())
|
|
|
|
.and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(network_globals
|
2021-11-25 03:45:52 +00:00
|
|
|
.peers
|
|
|
|
.read()
|
2020-09-29 03:46:54 +00:00
|
|
|
.peers()
|
|
|
|
.map(|(peer_id, peer_info)| eth2::lighthouse::Peer {
|
|
|
|
peer_id: peer_id.to_string(),
|
|
|
|
peer_info: peer_info.clone(),
|
|
|
|
})
|
|
|
|
.collect::<Vec<_>>())
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET lighthouse/peers/connected
|
|
|
|
let get_lighthouse_peers_connected = warp::path("lighthouse")
|
|
|
|
.and(warp::path("peers"))
|
|
|
|
.and(warp::path("connected"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(network_globals)
|
|
|
|
.and_then(|network_globals: Arc<NetworkGlobals<T::EthSpec>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(network_globals
|
2021-11-25 03:45:52 +00:00
|
|
|
.peers
|
|
|
|
.read()
|
2020-09-29 03:46:54 +00:00
|
|
|
.connected_peers()
|
|
|
|
.map(|(peer_id, peer_info)| eth2::lighthouse::Peer {
|
|
|
|
peer_id: peer_id.to_string(),
|
|
|
|
peer_info: peer_info.clone(),
|
|
|
|
})
|
|
|
|
.collect::<Vec<_>>())
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET lighthouse/proto_array
|
|
|
|
let get_lighthouse_proto_array = warp::path("lighthouse")
|
|
|
|
.and(warp::path("proto_array"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_task(move || {
|
|
|
|
Ok::<_, warp::Rejection>(warp::reply::json(&api_types::GenericResponseRef::from(
|
|
|
|
chain.fork_choice.read().proto_array().core_proto_array(),
|
|
|
|
)))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET lighthouse/validator_inclusion/{epoch}/{validator_id}
|
|
|
|
let get_lighthouse_validator_inclusion_global = warp::path("lighthouse")
|
|
|
|
.and(warp::path("validator_inclusion"))
|
|
|
|
.and(warp::path::param::<Epoch>())
|
|
|
|
.and(warp::path::param::<ValidatorId>())
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(
|
|
|
|
|epoch: Epoch, validator_id: ValidatorId, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
validator_inclusion::validator_inclusion_data(epoch, &validator_id, &chain)
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
})
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
|
|
|
// GET lighthouse/validator_inclusion/{epoch}/global
|
|
|
|
let get_lighthouse_validator_inclusion = warp::path("lighthouse")
|
|
|
|
.and(warp::path("validator_inclusion"))
|
|
|
|
.and(warp::path::param::<Epoch>())
|
|
|
|
.and(warp::path("global"))
|
|
|
|
.and(warp::path::end())
|
2020-10-22 06:05:49 +00:00
|
|
|
.and(chain_filter.clone())
|
2020-09-29 03:46:54 +00:00
|
|
|
.and_then(|epoch: Epoch, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
validator_inclusion::global_validator_inclusion_data(epoch, &chain)
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2020-11-02 00:37:30 +00:00
|
|
|
// GET lighthouse/eth1/syncing
|
|
|
|
let get_lighthouse_eth1_syncing = warp::path("lighthouse")
|
|
|
|
.and(warp::path("eth1"))
|
|
|
|
.and(warp::path("syncing"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
let head_info = chain
|
|
|
|
.head_info()
|
|
|
|
.map_err(warp_utils::reject::beacon_chain_error)?;
|
2020-11-30 20:29:17 +00:00
|
|
|
let current_slot_opt = chain.slot().ok();
|
2020-11-02 00:37:30 +00:00
|
|
|
|
|
|
|
chain
|
|
|
|
.eth1_chain
|
|
|
|
.as_ref()
|
|
|
|
.ok_or_else(|| {
|
|
|
|
warp_utils::reject::custom_not_found(
|
|
|
|
"Eth1 sync is disabled. See the --eth1 CLI flag.".to_string(),
|
|
|
|
)
|
|
|
|
})
|
|
|
|
.and_then(|eth1| {
|
2020-11-30 20:29:17 +00:00
|
|
|
eth1.sync_status(head_info.genesis_time, current_slot_opt, &chain.spec)
|
2020-11-02 00:37:30 +00:00
|
|
|
.ok_or_else(|| {
|
|
|
|
warp_utils::reject::custom_server_error(
|
|
|
|
"Unable to determine Eth1 sync status".to_string(),
|
|
|
|
)
|
|
|
|
})
|
|
|
|
})
|
|
|
|
.map(api_types::GenericResponse::from)
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET lighthouse/eth1/block_cache
|
|
|
|
let get_lighthouse_eth1_block_cache = warp::path("lighthouse")
|
|
|
|
.and(warp::path("eth1"))
|
|
|
|
.and(warp::path("block_cache"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(eth1_service_filter.clone())
|
|
|
|
.and_then(|eth1_service: eth1::Service| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(api_types::GenericResponse::from(
|
|
|
|
eth1_service
|
|
|
|
.blocks()
|
|
|
|
.read()
|
|
|
|
.iter()
|
|
|
|
.cloned()
|
|
|
|
.collect::<Vec<_>>(),
|
|
|
|
))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// GET lighthouse/eth1/deposit_cache
|
|
|
|
let get_lighthouse_eth1_deposit_cache = warp::path("lighthouse")
|
|
|
|
.and(warp::path("eth1"))
|
|
|
|
.and(warp::path("deposit_cache"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(eth1_service_filter)
|
|
|
|
.and_then(|eth1_service: eth1::Service| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
Ok(api_types::GenericResponse::from(
|
|
|
|
eth1_service
|
|
|
|
.deposits()
|
|
|
|
.read()
|
|
|
|
.cache
|
|
|
|
.iter()
|
|
|
|
.cloned()
|
|
|
|
.collect::<Vec<_>>(),
|
|
|
|
))
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2020-10-22 06:05:49 +00:00
|
|
|
// GET lighthouse/beacon/states/{state_id}/ssz
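    // Returns the full BeaconState at the given state ID as raw SSZ bytes rather than JSON.
    // Example (a sketch, default API address assumed):
    //
    //   curl -o state.ssz "http://localhost:5052/lighthouse/beacon/states/head/ssz"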
|
|
|
|
let get_lighthouse_beacon_states_ssz = warp::path("lighthouse")
|
|
|
|
.and(warp::path("beacon"))
|
|
|
|
.and(warp::path("states"))
|
|
|
|
.and(warp::path::param::<StateId>())
|
|
|
|
.and(warp::path("ssz"))
|
|
|
|
.and(warp::path::end())
|
2020-11-23 01:00:22 +00:00
|
|
|
.and(chain_filter.clone())
|
2020-10-22 06:05:49 +00:00
|
|
|
.and_then(|state_id: StateId, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_task(move || {
|
|
|
|
let state = state_id.state(&chain)?;
|
|
|
|
Response::builder()
|
|
|
|
.status(200)
|
|
|
|
.header("Content-Type", "application/ssz")
|
|
|
|
.body(state.as_ssz_bytes())
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::custom_server_error(format!(
|
|
|
|
"failed to create response: {}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2020-11-23 01:00:22 +00:00
|
|
|
// GET lighthouse/staking
|
|
|
|
let get_lighthouse_staking = warp::path("lighthouse")
|
|
|
|
.and(warp::path("staking"))
|
|
|
|
.and(warp::path::end())
|
2020-12-04 00:18:58 +00:00
|
|
|
.and(chain_filter.clone())
|
2020-11-23 01:00:22 +00:00
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
if chain.eth1_chain.is_some() {
|
|
|
|
Ok(())
|
|
|
|
} else {
|
|
|
|
Err(warp_utils::reject::custom_not_found(
|
|
|
|
"staking is not enabled, \
|
|
|
|
see the --staking CLI flag"
|
|
|
|
.to_string(),
|
|
|
|
))
|
|
|
|
}
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
2021-09-22 00:37:28 +00:00
|
|
|
let database_path = warp::path("lighthouse").and(warp::path("database"));
|
|
|
|
|
|
|
|
// GET lighthouse/database/info
|
|
|
|
let get_lighthouse_database_info = database_path
|
|
|
|
.and(warp::path("info"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| blocking_json_task(move || database::info(chain)));
|
|
|
|
|
|
|
|
// POST lighthouse/database/reconstruct
|
|
|
|
let post_lighthouse_database_reconstruct = database_path
|
|
|
|
.and(warp::path("reconstruct"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(not_while_syncing_filter)
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
chain.store_migrator.process_reconstruction();
|
|
|
|
Ok("success")
|
|
|
|
})
|
|
|
|
});
|
|
|
|
|
|
|
|
// POST lighthouse/database/historical_blocks
|
|
|
|
let post_lighthouse_database_historical_blocks = database_path
|
|
|
|
.and(warp::path("historical_blocks"))
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(warp::body::json())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and_then(
|
Separate execution payloads in the DB (#3157)
## Proposed Changes
Reduce post-merge disk usage by not storing finalized execution payloads in Lighthouse's database.
:warning: **This is achieved in a backwards-incompatible way for networks that have already merged** :warning:. Kiln users and shadow fork enjoyers will be unable to downgrade after running the code from this PR. The upgrade migration may take several minutes to run, and can't be aborted after it begins.
The main changes are:
- New column in the database called `ExecPayload`, keyed by beacon block root.
- The `BeaconBlock` column now stores blinded blocks only.
- Lots of places that previously used full blocks now use blinded blocks, e.g. analytics APIs, block replay in the DB, etc.
- On finalization:
- `prune_abanonded_forks` deletes non-canonical payloads whilst deleting non-canonical blocks.
- `migrate_db` deletes finalized canonical payloads whilst deleting finalized states.
- Conversions between blinded and full blocks are implemented in a compositional way, duplicating some work from Sean's PR #3134.
- The execution layer has a new `get_payload_by_block_hash` method that reconstructs a payload using the EE's `eth_getBlockByHash` call.
- I've tested manually that it works on Kiln, using Geth and Nethermind.
- This isn't necessarily the most efficient method, and new engine APIs are being discussed to improve this: https://github.com/ethereum/execution-apis/pull/146.
- We're depending on the `ethers` master branch, due to lots of recent changes. We're also using a workaround for https://github.com/gakonst/ethers-rs/issues/1134.
- Payload reconstruction is used in the HTTP API via `BeaconChain::get_block`, which is now `async`. Due to the `async` fn, the `blocking_json` wrapper has been removed.
- Payload reconstruction is used in network RPC to serve blocks-by-{root,range} responses. Here the `async` adjustment is messier, although I think I've managed to come up with a reasonable compromise: the handlers take the `SendOnDrop` by value so that they can drop it on _task completion_ (after the `fn` returns). Still, this is introducing disk reads onto core executor threads, which may have a negative performance impact (thoughts appreciated).
## Additional Info
- [x] For performance it would be great to remove the cloning of full blocks when converting them to blinded blocks to write to disk. I'm going to experiment with a `put_block` API that takes the block by value, breaks it into a blinded block and a payload, stores the blinded block, and then re-assembles the full block for the caller.
- [x] We should measure the latency of blocks-by-root and blocks-by-range responses.
- [x] We should add integration tests that stress the payload reconstruction (basic tests done, issue for more extensive tests: https://github.com/sigp/lighthouse/issues/3159)
- [x] We should (manually) test the schema v9 migration from several prior versions, particularly as blocks have changed on disk and some migrations rely on being able to load blocks.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-05-12 00:42:17 +00:00
|
|
|
|blocks: Vec<SignedBlindedBeaconBlock<T::EthSpec>>,
|
2021-09-22 00:37:28 +00:00
|
|
|
chain: Arc<BeaconChain<T>>,
|
|
|
|
log: Logger| {
|
|
|
|
info!(
|
|
|
|
log,
|
|
|
|
"Importing historical blocks";
|
|
|
|
"count" => blocks.len(),
|
|
|
|
"source" => "http_api"
|
|
|
|
);
|
|
|
|
blocking_json_task(move || database::historical_blocks(chain, blocks))
|
|
|
|
},
|
|
|
|
);
|
|
|
|
|
Add API to compute discrete validator attestation performance (#2874)
## Issue Addressed
N/A
## Proposed Changes
Add a HTTP API which can be used to compute the attestation performances of a validator (or all validators) over a discrete range of epochs.
Performances can be computed for a single validator, or for the global validator set.
## Usage
### Request
The API can be used as follows:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/{validator_index}?start_epoch=57730&end_epoch=57732"
```
Alternatively, to compute performances for the global validator set:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/global?start_epoch=57730&end_epoch=57732"
```
### Response
The response is JSON formatted as follows:
```
[
{
"index": 72,
"epochs": {
"57730": {
"active": true,
"head": false,
"target": false,
"source": false
},
"57731": {
"active": true,
"head": true,
"target": true,
"source": true,
"delay": 1
},
"57732": {
"active": true,
"head": true,
"target": true,
"source": true,
"delay": 1
},
}
}
]
```
> Note that the `"epochs"` are not guaranteed to be in ascending order.
## Additional Info
- This API is intended to be used in our upcoming validator analysis tooling (#2873) and will likely not be very useful for regular users. Some advanced users or block explorers may find this API useful however.
- The request range is limited to 100 epochs (since the range is inclusive and it also computes the `end_epoch` it's actually 101 epochs) to prevent Lighthouse using exceptionally large amounts of memory.
2022-01-27 22:58:31 +00:00
|
|
|
// GET lighthouse/analysis/block_rewards
|
2022-01-27 01:06:02 +00:00
|
|
|
let get_lighthouse_block_rewards = warp::path("lighthouse")
|
Add API to compute discrete validator attestation performance (#2874)
## Issue Addressed
N/A
## Proposed Changes
Add a HTTP API which can be used to compute the attestation performances of a validator (or all validators) over a discrete range of epochs.
Performances can be computed for a single validator, or for the global validator set.
## Usage
### Request
The API can be used as follows:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/{validator_index}?start_epoch=57730&end_epoch=57732"
```
Alternatively, to compute performances for the global validator set:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/global?start_epoch=57730&end_epoch=57732"
```
### Response
The response is JSON formatted as follows:
```
[
{
"index": 72,
"epochs": {
"57730": {
"active": true,
"head": false,
"target": false,
"source": false
},
"57731": {
"active": true,
"head": true,
"target": true,
"source": true,
"delay": 1
},
"57732": {
"active": true,
"head": true,
"target": true,
"source": true,
"delay": 1
},
}
}
]
```
> Note that the `"epochs"` are not guaranteed to be in ascending order.
## Additional Info
- This API is intended to be used in our upcoming validator analysis tooling (#2873) and will likely not be very useful for regular users. Some advanced users or block explorers may find this API useful however.
- The request range is limited to 100 epochs (since the range is inclusive and it also computes the `end_epoch` it's actually 101 epochs) to prevent Lighthouse using exceptionally large amounts of memory.
2022-01-27 22:58:31 +00:00
|
|
|
.and(warp::path("analysis"))
|
2022-01-27 01:06:02 +00:00
|
|
|
.and(warp::path("block_rewards"))
|
|
|
|
.and(warp::query::<eth2::lighthouse::BlockRewardsQuery>())
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and(log_filter.clone())
|
|
|
|
.and_then(|query, chain, log| {
|
|
|
|
blocking_json_task(move || block_rewards::get_block_rewards(query, chain, log))
|
|
|
|
});
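// Illustrative usage (a sketch, not part of the route wiring above): assuming
// `BlockRewardsQuery` exposes inclusive `start_slot` and `end_slot` fields, the
// endpoint could be exercised with something like:
//
//     curl "http://localhost:5052/lighthouse/analysis/block_rewards?start_slot=1&end_slot=32"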
|
|
|
|
|
2022-01-27 22:58:31 +00:00
|
|
|
// GET lighthouse/analysis/attestation_performance/{index}
|
|
|
|
let get_lighthouse_attestation_performance = warp::path("lighthouse")
|
|
|
|
.and(warp::path("analysis"))
|
|
|
|
.and(warp::path("attestation_performance"))
|
|
|
|
.and(warp::path::param::<String>())
|
|
|
|
.and(warp::query::<eth2::lighthouse::AttestationPerformanceQuery>())
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|target, query, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
attestation_performance::get_attestation_performance(target, query, chain)
|
|
|
|
})
|
|
|
|
});
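// Illustrative usage (a sketch): the `{target}` path segment is parsed as a plain
// `String`, so it can carry either a single validator index or "global" for the
// whole validator set, with `start_epoch`/`end_epoch` as the assumed query fields
// of `AttestationPerformanceQuery`:
//
//     curl "http://localhost:5052/lighthouse/analysis/attestation_performance/global?start_epoch=10&end_epoch=12"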
|
|
|
|
|
2022-02-21 23:21:02 +00:00
|
|
|
// GET lighthouse/analysis/block_packing_efficiency
|
|
|
|
let get_lighthouse_block_packing_efficiency = warp::path("lighthouse")
|
|
|
|
.and(warp::path("analysis"))
|
|
|
|
.and(warp::path("block_packing_efficiency"))
|
|
|
|
.and(warp::query::<eth2::lighthouse::BlockPackingEfficiencyQuery>())
|
|
|
|
.and(warp::path::end())
|
|
|
|
.and(chain_filter.clone())
|
|
|
|
.and_then(|query, chain: Arc<BeaconChain<T>>| {
|
|
|
|
blocking_json_task(move || {
|
|
|
|
block_packing_efficiency::get_block_packing_efficiency(query, chain)
|
|
|
|
})
|
|
|
|
});
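// Illustrative usage (a sketch, assuming `BlockPackingEfficiencyQuery` mirrors the
// other analysis endpoints with inclusive `start_epoch`/`end_epoch` fields):
//
//     curl "http://localhost:5052/lighthouse/analysis/block_packing_efficiency?start_epoch=100&end_epoch=108"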
|
|
|
|
|
2020-12-04 00:18:58 +00:00
|
|
|
let get_events = eth1_v1
|
|
|
|
.and(warp::path("events"))
|
|
|
|
.and(warp::path::end())
|
2022-01-20 09:14:19 +00:00
|
|
|
.and(multi_key_query::<api_types::EventQuery>())
|
2020-12-04 00:18:58 +00:00
|
|
|
.and(chain_filter)
|
|
|
|
.and_then(
|
2022-01-20 09:14:19 +00:00
|
|
|
|topics_res: Result<api_types::EventQuery, warp::Rejection>,
|
|
|
|
chain: Arc<BeaconChain<T>>| {
|
2020-12-04 00:18:58 +00:00
|
|
|
blocking_task(move || {
|
2022-01-20 09:14:19 +00:00
|
|
|
let topics = topics_res?;
|
2020-12-04 00:18:58 +00:00
|
|
|
// For each subscribed topic, spawn a new subscription.
|
2022-01-20 09:14:19 +00:00
|
|
|
let mut receivers = Vec::with_capacity(topics.topics.len());
|
2020-12-04 00:18:58 +00:00
|
|
|
|
|
|
|
if let Some(event_handler) = chain.event_handler.as_ref() {
|
2022-01-20 09:14:19 +00:00
|
|
|
for topic in topics.topics {
|
2020-12-04 00:18:58 +00:00
|
|
|
let receiver = match topic {
|
|
|
|
api_types::EventTopic::Head => event_handler.subscribe_head(),
|
|
|
|
api_types::EventTopic::Block => event_handler.subscribe_block(),
|
|
|
|
api_types::EventTopic::Attestation => {
|
|
|
|
event_handler.subscribe_attestation()
|
|
|
|
}
|
|
|
|
api_types::EventTopic::VoluntaryExit => {
|
|
|
|
event_handler.subscribe_exit()
|
|
|
|
}
|
|
|
|
api_types::EventTopic::FinalizedCheckpoint => {
|
|
|
|
event_handler.subscribe_finalized()
|
|
|
|
}
|
2021-06-17 02:10:46 +00:00
|
|
|
api_types::EventTopic::ChainReorg => {
|
|
|
|
event_handler.subscribe_reorgs()
|
|
|
|
}
|
2021-09-25 07:53:58 +00:00
|
|
|
api_types::EventTopic::ContributionAndProof => {
|
|
|
|
event_handler.subscribe_contributions()
|
|
|
|
}
|
2021-09-30 04:31:41 +00:00
|
|
|
api_types::EventTopic::LateHead => {
|
|
|
|
event_handler.subscribe_late_head()
|
|
|
|
}
|
2022-01-27 01:06:02 +00:00
|
|
|
api_types::EventTopic::BlockReward => {
|
|
|
|
event_handler.subscribe_block_reward()
|
|
|
|
}
|
2020-12-04 00:18:58 +00:00
|
|
|
};
|
2021-02-10 23:29:49 +00:00
|
|
|
|
2021-03-01 01:58:05 +00:00
|
|
|
receivers.push(BroadcastStream::new(receiver).map(|msg| {
|
|
|
|
match msg {
|
|
|
|
Ok(data) => Event::default()
|
|
|
|
.event(data.topic_name())
|
|
|
|
.json_data(data)
|
|
|
|
.map_err(|e| {
|
|
|
|
warp_utils::reject::server_sent_event_error(format!(
|
|
|
|
"{:?}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
}),
|
|
|
|
Err(e) => Err(warp_utils::reject::server_sent_event_error(
|
|
|
|
format!("{:?}", e),
|
|
|
|
)),
|
|
|
|
}
|
|
|
|
}));
|
2020-12-04 00:18:58 +00:00
|
|
|
}
|
|
|
|
} else {
|
|
|
|
return Err(warp_utils::reject::custom_server_error(
|
|
|
|
"event handler was not initialized".to_string(),
|
|
|
|
));
|
|
|
|
}
|
|
|
|
|
2021-02-10 23:29:49 +00:00
|
|
|
let s = futures::stream::select_all(receivers);
|
2020-12-04 00:18:58 +00:00
|
|
|
|
2021-02-10 23:29:49 +00:00
|
|
|
Ok::<_, warp::Rejection>(warp::sse::reply(warp::sse::keep_alive().stream(s)))
|
2020-12-04 00:18:58 +00:00
|
|
|
})
|
|
|
|
},
|
|
|
|
);
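// Illustrative usage (a sketch, assuming the `eth1_v1` base filter maps to the
// standard `/eth/v1` prefix): the reply is a server-sent-event stream, so a client
// can hold the connection open and filter on topics, e.g.:
//
//     curl -N "http://localhost:5052/eth/v1/events?topics=head,block"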
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
// Define the ultimate set of routes that will be provided to the server.
|
|
|
|
let routes = warp::get()
|
|
|
|
.and(
|
|
|
|
get_beacon_genesis
|
2020-11-09 23:13:56 +00:00
|
|
|
.boxed()
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_beacon_state_root.boxed())
|
|
|
|
.or(get_beacon_state_fork.boxed())
|
|
|
|
.or(get_beacon_state_finality_checkpoints.boxed())
|
2020-11-09 23:13:56 +00:00
|
|
|
.or(get_beacon_state_validator_balances.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_beacon_state_validators_id.boxed())
|
2022-01-20 09:14:19 +00:00
|
|
|
.or(get_beacon_state_validators.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_beacon_state_committees.boxed())
|
2021-08-06 00:47:31 +00:00
|
|
|
.or(get_beacon_state_sync_committees.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_beacon_headers.boxed())
|
|
|
|
.or(get_beacon_headers_block_id.boxed())
|
|
|
|
.or(get_beacon_block.boxed())
|
|
|
|
.or(get_beacon_block_attestations.boxed())
|
|
|
|
.or(get_beacon_block_root.boxed())
|
|
|
|
.or(get_beacon_pool_attestations.boxed())
|
|
|
|
.or(get_beacon_pool_attester_slashings.boxed())
|
|
|
|
.or(get_beacon_pool_proposer_slashings.boxed())
|
|
|
|
.or(get_beacon_pool_voluntary_exits.boxed())
|
|
|
|
.or(get_config_fork_schedule.boxed())
|
|
|
|
.or(get_config_spec.boxed())
|
|
|
|
.or(get_config_deposit_contract.boxed())
|
|
|
|
.or(get_debug_beacon_states.boxed())
|
|
|
|
.or(get_debug_beacon_heads.boxed())
|
|
|
|
.or(get_node_identity.boxed())
|
|
|
|
.or(get_node_version.boxed())
|
|
|
|
.or(get_node_syncing.boxed())
|
2020-10-22 02:59:42 +00:00
|
|
|
.or(get_node_health.boxed())
|
|
|
|
.or(get_node_peers_by_id.boxed())
|
|
|
|
.or(get_node_peers.boxed())
|
2020-11-13 02:02:41 +00:00
|
|
|
.or(get_node_peer_count.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_validator_duties_proposer.boxed())
|
|
|
|
.or(get_validator_blocks.boxed())
|
2022-03-31 07:52:23 +00:00
|
|
|
.or(get_validator_blinded_blocks.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_validator_attestation_data.boxed())
|
|
|
|
.or(get_validator_aggregate_attestation.boxed())
|
2021-08-06 00:47:31 +00:00
|
|
|
.or(get_validator_sync_committee_contribution.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_lighthouse_health.boxed())
|
|
|
|
.or(get_lighthouse_syncing.boxed())
|
2021-12-22 06:17:14 +00:00
|
|
|
.or(get_lighthouse_nat.boxed())
|
2020-09-29 03:46:54 +00:00
|
|
|
.or(get_lighthouse_peers.boxed())
|
|
|
|
.or(get_lighthouse_peers_connected.boxed())
|
|
|
|
.or(get_lighthouse_proto_array.boxed())
|
|
|
|
.or(get_lighthouse_validator_inclusion_global.boxed())
|
|
|
|
.or(get_lighthouse_validator_inclusion.boxed())
|
2020-11-02 00:37:30 +00:00
|
|
|
.or(get_lighthouse_eth1_syncing.boxed())
|
|
|
|
.or(get_lighthouse_eth1_block_cache.boxed())
|
|
|
|
.or(get_lighthouse_eth1_deposit_cache.boxed())
|
2020-11-23 01:00:22 +00:00
|
|
|
.or(get_lighthouse_beacon_states_ssz.boxed())
|
2020-12-04 00:18:58 +00:00
|
|
|
.or(get_lighthouse_staking.boxed())
|
2021-09-22 00:37:28 +00:00
|
|
|
.or(get_lighthouse_database_info.boxed())
|
2022-01-27 01:06:02 +00:00
|
|
|
.or(get_lighthouse_block_rewards.boxed())
|
2022-01-27 22:58:31 +00:00
|
|
|
.or(get_lighthouse_attestation_performance.boxed())
|
2022-02-21 23:21:02 +00:00
|
|
|
.or(get_lighthouse_block_packing_efficiency.boxed())
|
2020-12-04 00:18:58 +00:00
|
|
|
.or(get_events.boxed()),
|
2020-09-29 03:46:54 +00:00
|
|
|
)
|
2020-11-09 23:13:56 +00:00
|
|
|
.or(warp::post().and(
|
|
|
|
post_beacon_blocks
|
|
|
|
.boxed()
|
2022-03-31 07:52:23 +00:00
|
|
|
.or(post_beacon_blinded_blocks.boxed())
|
2020-11-09 23:13:56 +00:00
|
|
|
.or(post_beacon_pool_attestations.boxed())
|
|
|
|
.or(post_beacon_pool_attester_slashings.boxed())
|
|
|
|
.or(post_beacon_pool_proposer_slashings.boxed())
|
|
|
|
.or(post_beacon_pool_voluntary_exits.boxed())
|
2021-08-06 00:47:31 +00:00
|
|
|
.or(post_beacon_pool_sync_committees.boxed())
|
2020-11-09 23:13:56 +00:00
|
|
|
.or(post_validator_duties_attester.boxed())
|
2021-08-06 00:47:31 +00:00
|
|
|
.or(post_validator_duties_sync.boxed())
|
2020-11-09 23:13:56 +00:00
|
|
|
.or(post_validator_aggregate_and_proofs.boxed())
|
2021-08-06 00:47:31 +00:00
|
|
|
.or(post_validator_contribution_and_proofs.boxed())
|
|
|
|
.or(post_validator_beacon_committee_subscriptions.boxed())
|
|
|
|
.or(post_validator_sync_committee_subscriptions.boxed())
|
2022-02-08 19:52:20 +00:00
|
|
|
.or(post_validator_prepare_beacon_proposer.boxed())
|
2021-09-22 00:37:28 +00:00
|
|
|
.or(post_lighthouse_liveness.boxed())
|
|
|
|
.or(post_lighthouse_database_reconstruct.boxed())
|
|
|
|
.or(post_lighthouse_database_historical_blocks.boxed()),
|
2020-11-09 23:13:56 +00:00
|
|
|
))
|
2020-09-29 03:46:54 +00:00
|
|
|
.recover(warp_utils::reject::handle_rejection)
|
|
|
|
.with(slog_logging(log.clone()))
|
|
|
|
.with(prometheus_metrics())
|
|
|
|
// Add a `Server` header.
|
|
|
|
.map(|reply| warp::reply::with_header(reply, "Server", &version_with_platform()))
|
2020-10-22 04:47:27 +00:00
|
|
|
.with(cors_builder.build());
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2022-03-24 00:04:49 +00:00
|
|
|
let http_socket: SocketAddr = SocketAddr::new(config.listen_addr, config.listen_port);
|
2021-10-12 03:35:49 +00:00
|
|
|
let http_server: HttpServer = match config.tls_config {
|
|
|
|
Some(tls_config) => {
|
|
|
|
let (socket, server) = warp::serve(routes)
|
|
|
|
.tls()
|
|
|
|
.cert_path(tls_config.cert)
|
|
|
|
.key_path(tls_config.key)
|
|
|
|
.try_bind_with_graceful_shutdown(http_socket, async {
|
|
|
|
shutdown.await;
|
|
|
|
})?;
|
|
|
|
|
|
|
|
info!(log, "HTTP API is being served over TLS";);
|
|
|
|
|
|
|
|
(socket, Box::pin(server))
|
|
|
|
}
|
|
|
|
None => {
|
|
|
|
let (socket, server) =
|
|
|
|
warp::serve(routes).try_bind_with_graceful_shutdown(http_socket, async {
|
|
|
|
shutdown.await;
|
|
|
|
})?;
|
|
|
|
(socket, Box::pin(server))
|
|
|
|
}
|
2020-11-28 05:30:57 +00:00
|
|
|
};
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
info!(
|
|
|
|
log,
|
|
|
|
"HTTP API started";
|
2021-10-12 03:35:49 +00:00
|
|
|
"listen_address" => %http_server.0,
|
2020-09-29 03:46:54 +00:00
|
|
|
);
|
|
|
|
|
2021-10-12 03:35:49 +00:00
|
|
|
Ok(http_server)
|
2020-09-29 03:46:54 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/// Publish a message to the libp2p pubsub network.
|
|
|
|
fn publish_pubsub_message<T: EthSpec>(
|
|
|
|
network_tx: &UnboundedSender<NetworkMessage<T>>,
|
|
|
|
message: PubsubMessage<T>,
|
|
|
|
) -> Result<(), warp::Rejection> {
|
|
|
|
publish_network_message(
|
|
|
|
network_tx,
|
|
|
|
NetworkMessage::Publish {
|
|
|
|
messages: vec![message],
|
|
|
|
},
|
|
|
|
)
|
|
|
|
}
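// Illustrative call-site (a sketch, not taken from this file): a handler holding a
// `network_tx` sender and a freshly verified block could gossip it with something
// like `publish_pubsub_message(&network_tx, PubsubMessage::BeaconBlock(block))?;`,
// assuming a `PubsubMessage::BeaconBlock` variant that wraps the signed block.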
|
|
|
|
|
|
|
|
/// Publish a message to the libp2p network.
|
|
|
|
fn publish_network_message<T: EthSpec>(
|
|
|
|
network_tx: &UnboundedSender<NetworkMessage<T>>,
|
|
|
|
message: NetworkMessage<T>,
|
|
|
|
) -> Result<(), warp::Rejection> {
|
|
|
|
network_tx.send(message).map_err(|e| {
|
|
|
|
warp_utils::reject::custom_server_error(format!(
|
|
|
|
"unable to publish to network channel: {}",
|
|
|
|
e
|
|
|
|
))
|
|
|
|
})
|
|
|
|
}
|