use crate::{Config, Context};
use beacon_chain::{
    test_utils::{
        BeaconChainHarness, BoxedMutator, Builder as HarnessBuilder, EphemeralHarnessType,
    },
    BeaconChain, BeaconChainTypes,
};
Use `BeaconProcessor` for API requests (#4462)
## Issue Addressed
NA
## Proposed Changes
Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:
1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
1. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
1. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).
Presently there are two priority levels (a rough sketch of the queueing idea follows this list):
- `Priority::P0`
  - The beacon processor will prioritise these above everything other than importing new blocks.
  - Roughly all validator-sensitive endpoints.
- `Priority::P1`
  - The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill work.
  - Everything that's not `Priority::P0`.
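As a rough illustration of the queueing model (this is not the actual `BeaconProcessor` internals — `ApiWork`, `Scheduler`, and `worker` are made-up names, and the real processor handles many more event kinds), bounded `tokio` channels plus a biased `select!` give all three properties: a cap on concurrent work, a cap on queued work, and P0-before-P1 ordering:
```rust
use tokio::sync::{mpsc, oneshot};

/// A unit of HTTP API work plus a channel for sending the response back (illustrative).
struct ApiWork {
    respond_to: oneshot::Sender<String>,
}

/// Two bounded queues, one per priority. `try_send` fails when a queue is full,
/// which is what lets us drop requests instead of buffering them forever.
struct Scheduler {
    p0_tx: mpsc::Sender<ApiWork>,
    p1_tx: mpsc::Sender<ApiWork>,
}

impl Scheduler {
    fn submit_p0(&self, work: ApiWork) -> bool {
        self.p0_tx.try_send(work).is_ok()
    }

    fn submit_p1(&self, work: ApiWork) -> bool {
        self.p1_tx.try_send(work).is_ok()
    }
}

/// One worker loop; the number of loops spawned bounds concurrent work. The
/// `biased` select drains P0 work before ever looking at P1 work.
async fn worker(mut p0_rx: mpsc::Receiver<ApiWork>, mut p1_rx: mpsc::Receiver<ApiWork>) {
    loop {
        let work = tokio::select! {
            biased;
            Some(w) = p0_rx.recv() => w,
            Some(w) = p1_rx.recv() => w,
            else => break,
        };
        // Compute the response here; sending fails harmlessly if the requester gave up.
        let _ = work.respond_to.send("response".into());
    }
}
```
In Lighthouse the scheduling is done by the existing `BeaconProcessor` work queues rather than a bespoke scheduler; the sketch is only meant to show why the bounds and the priority ordering fall out of the design.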
The `--http-enable-beacon-processor false` flag can be supplied to revert to the old behaviour of spawning new `tokio` tasks for each request:
```
--http-enable-beacon-processor <BOOLEAN>
The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
"true", HTTP API requests will be queued and scheduled alongside other tasks. When set to "false", HTTP API
responses will be executed immediately. [default: true]
```
## New CLI Flags
I added some other new CLI flags:
```
--beacon-processor-aggregate-batch-size <INTEGER>
Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
network. [default: 64]
--beacon-processor-attestation-batch-size <INTEGER>
Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
[default: 64]
--beacon-processor-max-workers <INTEGER>
Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
consumption. Reducing the value may result in decreased resource usage and diminished performance. The
default value is the number of logical CPU cores on the host.
--beacon-processor-reprocess-queue-len <INTEGER>
Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
messages from being dropped while lower values may help protect the node from becoming overwhelmed.
[default: 12288]
```
I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the GitHub runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.
## Additional Info
I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses; I guess this is what we signed up for 🤷
The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to sit idle waiting for the builder API HTTP response, so we spawn a new `tokio` task to wait for the builder response (see the sketch below).
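A hedged sketch of that hand-off (the `call_builder` future and the function name are placeholders, not the real builder client API): the worker that dequeued the request only pays the cost of a `tokio::spawn`, and the potentially slow relay round-trip happens on its own task:
```rust
use std::future::Future;

/// Illustrative only: move the slow builder/relay wait onto its own tokio task so the
/// scheduler worker that dequeued the `validator/register` request is freed immediately.
fn wait_for_builder_off_worker<F>(call_builder: F)
where
    F: Future<Output = Result<(), String>> + Send + 'static,
{
    tokio::spawn(async move {
        // Only this task blocks on the (possibly very slow) builder response.
        if let Err(e) = call_builder.await {
            eprintln!("builder registration failed: {e}");
        }
    });
}
```
The point is only that a `BeaconProcessor` worker never sits idle waiting on the relay; the real code routes the result back through the usual HTTP response path.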
I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](https://github.com/sigp/lighthouse/pull/4462/commits/efbabe32521ed6eb3564764da4e507d26a1c4bd0). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
use beacon_processor::{BeaconProcessor, BeaconProcessorChannels, BeaconProcessorConfig};
use directory::DEFAULT_ROOT_DIR;
use eth2::{BeaconNodeHttpClient, Timeouts};
Rename eth2_libp2p to lighthouse_network (#2702)
## Description
The `eth2_libp2p` crate was originally named and designed to incorporate a simple libp2p integration into Lighthouse. Since its origins, the crate's purpose has expanded dramatically. It now houses a lot of sophistication that is specific to Lighthouse and is no longer just a libp2p integration.
As of this writing, it houses the following high-level Lighthouse-specific logic:
- Lighthouse's implementation of the eth2 RPC protocol and specific encodings/decodings
- Integration and handling of ENRs with respect to libp2p and eth2
- Lighthouse's discovery logic, its integration with discv5, and logic for searching for and handling peers.
- Lighthouse's peer manager - This is a large module handling various aspects of Lighthouse's network, such as peer scoring, handling pings and metadata, connection maintenance and recording, etc.
- Lighthouse's peer database - This is a collection of information stored for each individual peer which is specific to Lighthouse. We store connection state, sync state, last-seen IPs, scores, etc. The data stored for each peer is designed for various elements of the Lighthouse code base, such as syncing and the HTTP API.
- Gossipsub scoring - This stores a collection of gossipsub 1.1 scoring mechanisms that are continuously analysed and updated based on the Ethereum 2 networks and how Lighthouse performs on these networks.
- Lighthouse-specific types for managing gossipsub topics, sync status, and ENR fields
- Lighthouse's network HTTP API metrics - A collection of metrics for Lighthouse network monitoring
- Lighthouse's custom configuration of all networking protocols: RPC, gossipsub, discovery, identify, and libp2p.
Therefore it makes sense to rename the crate to reflect its current purpose: managing the majority of Lighthouse's network stack. This PR renames the crate to `lighthouse_network`.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
use lighthouse_network::{
    discv5::enr::{CombinedKey, EnrBuilder},
    libp2p::swarm::{
        behaviour::{ConnectionEstablished, FromSwarm},
        ConnectionId, NetworkBehaviour,
    },
    rpc::methods::{MetaData, MetaDataV2},
    types::{EnrAttestationBitfield, EnrSyncCommitteeBitfield, SyncState},
    ConnectedPoint, Enr, NetworkGlobals, PeerId, PeerManager,
};
use logging::test_logger;
use network::{NetworkReceivers, NetworkSenders};
use sensitive_url::SensitiveUrl;
use slog::Logger;
use std::future::Future;
use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use std::time::Duration;
use store::MemoryStore;
use task_executor::test_utils::TestRuntime;
use types::{ChainSpec, EthSpec};

pub const TCP_PORT: u16 = 42;
pub const UDP_PORT: u16 = 42;
pub const SEQ_NUMBER: u64 = 0;
pub const EXTERNAL_ADDR: &str = "/ip4/0.0.0.0/tcp/9000";

/// HTTP API tester that allows interaction with the underlying beacon chain harness.
pub struct InteractiveTester<E: EthSpec> {
    pub harness: BeaconChainHarness<EphemeralHarnessType<E>>,
    pub client: BeaconNodeHttpClient,
    pub network_rx: NetworkReceivers<E>,
}

/// The result of calling `create_api_server`.
///
/// Glue-type between `tests::ApiTester` and `InteractiveTester`.
pub struct ApiServer<E: EthSpec, SFut: Future<Output = ()>> {
    pub server: SFut,
    pub listening_socket: SocketAddr,
    pub network_rx: NetworkReceivers<E>,
    pub local_enr: Enr,
    pub external_peer_id: PeerId,
}

type Initializer<E> = Box<
    dyn FnOnce(HarnessBuilder<EphemeralHarnessType<E>>) -> HarnessBuilder<EphemeralHarnessType<E>>,
>;
type Mutator<E> = BoxedMutator<E, MemoryStore<E>, MemoryStore<E>>;

impl<E: EthSpec> InteractiveTester<E> {
    pub async fn new(spec: Option<ChainSpec>, validator_count: usize) -> Self {
        Self::new_with_initializer_and_mutator(spec, validator_count, None, None).await
    }

    pub async fn new_with_initializer_and_mutator(
        spec: Option<ChainSpec>,
        validator_count: usize,
        initializer: Option<Initializer<E>>,
        mutator: Option<Mutator<E>>,
    ) -> Self {
        let mut harness_builder = BeaconChainHarness::builder(E::default())
            .spec_or_default(spec)
            .logger(test_logger())
            .mock_execution_layer();

        harness_builder = if let Some(initializer) = initializer {
            // Apply custom initialization provided by the caller.
            initializer(harness_builder)
        } else {
            // Apply default initial configuration.
            harness_builder
                .deterministic_keypairs(validator_count)
                .fresh_ephemeral_store()
        };

        // Add a mutator for the beacon chain builder which will be called in
        // `HarnessBuilder::build`.
        if let Some(mutator) = mutator {
            harness_builder = harness_builder.initial_mutator(mutator);
        }

        let harness = harness_builder.build();

        let ApiServer {
            server,
            listening_socket,
            network_rx,
            ..
        } = create_api_server(
            harness.chain.clone(),
            &harness.runtime,
            harness.logger().clone(),
        )
        .await;

        tokio::spawn(server);

        let client = BeaconNodeHttpClient::new(
            SensitiveUrl::parse(&format!(
                "http://{}:{}",
                listening_socket.ip(),
                listening_socket.port()
            ))
            .unwrap(),
            Timeouts::set_all(Duration::from_secs(1)),
        );

        Self {
            harness,
            client,
            network_rx,
        }
    }
}

pub async fn create_api_server<T: BeaconChainTypes>(
    chain: Arc<BeaconChain<T>>,
    test_runtime: &TestRuntime,
    log: Logger,
) -> ApiServer<T::EthSpec, impl Future<Output = ()>> {
    // Get a random unused port.
    let port = unused_port::unused_tcp4_port().unwrap();
    create_api_server_on_port(chain, test_runtime, log, port).await
}

pub async fn create_api_server_on_port<T: BeaconChainTypes>(
    chain: Arc<BeaconChain<T>>,
    test_runtime: &TestRuntime,
    log: Logger,
    port: u16,
) -> ApiServer<T::EthSpec, impl Future<Output = ()>> {
    let (network_senders, network_receivers) = NetworkSenders::new();

    // Default metadata
    let meta_data = MetaData::V2(MetaDataV2 {
        seq_number: SEQ_NUMBER,
        attnets: EnrAttestationBitfield::<T::EthSpec>::default(),
        syncnets: EnrSyncCommitteeBitfield::<T::EthSpec>::default(),
    });
    let enr_key = CombinedKey::generate_secp256k1();
    let enr = EnrBuilder::new("v4").build(&enr_key).unwrap();
    let network_globals = Arc::new(NetworkGlobals::new(
        enr.clone(),
        Some(TCP_PORT),
        None,
        meta_data,
        vec![],
        false,
        &log,
    ));

    // Only a peer manager can add peers, so we create a dummy manager.
    let config = lighthouse_network::peer_manager::config::Config::default();
    let mut pm = PeerManager::new(config, network_globals.clone(), &log).unwrap();

    // Add a peer.
    let peer_id = PeerId::random();

    let endpoint = &ConnectedPoint::Listener {
        local_addr: EXTERNAL_ADDR.parse().unwrap(),
        send_back_addr: EXTERNAL_ADDR.parse().unwrap(),
    };
    let connection_id = ConnectionId::new_unchecked(1);
    pm.on_swarm_event(FromSwarm::ConnectionEstablished(ConnectionEstablished {
        peer_id,
        connection_id,
        endpoint,
        failed_addresses: &[],
        other_established: 0,
    }));
    *network_globals.sync_state.write() = SyncState::Synced;

    let eth1_service =
        eth1::Service::new(eth1::Config::default(), log.clone(), chain.spec.clone()).unwrap();

    let beacon_processor_config = BeaconProcessorConfig {
        // The number of workers must be greater than one. Tests which use the
        // builder workflow sometimes require an internal HTTP request in order
        // to fulfill an already in-flight HTTP request, therefore having only
        // one worker will result in a deadlock.
        max_workers: 2,
        ..BeaconProcessorConfig::default()
    };
    let BeaconProcessorChannels {
        beacon_processor_tx,
        beacon_processor_rx,
        work_reprocessing_tx,
        work_reprocessing_rx,
    } = BeaconProcessorChannels::new(&beacon_processor_config);

    let beacon_processor_send = beacon_processor_tx;
    BeaconProcessor {
        network_globals: network_globals.clone(),
        executor: test_runtime.task_executor.clone(),
        current_workers: 0,
        config: beacon_processor_config,
        log: log.clone(),
    }
    .spawn_manager(
        beacon_processor_rx,
        work_reprocessing_tx,
        work_reprocessing_rx,
        None,
        chain.slot_clock.clone(),
        chain.spec.maximum_gossip_clock_disparity(),
    )
    .unwrap();

    let ctx = Arc::new(Context {
        config: Config {
            enabled: true,
            listen_addr: IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)),
            listen_port: port,
            allow_origin: None,
            tls_config: None,
            allow_sync_stalled: false,
            data_dir: std::path::PathBuf::from(DEFAULT_ROOT_DIR),
            spec_fork_name: None,
            sse_capacity_multiplier: 1,
            enable_beacon_processor: true,
        },
        chain: Some(chain),
        network_senders: Some(network_senders),
        network_globals: Some(network_globals),
        beacon_processor_send: Some(beacon_processor_send),
        eth1_service: Some(eth1_service),
        sse_logging_components: None,
        log,
    });

    let (listening_socket, server) = crate::serve(ctx, test_runtime.task_executor.exit()).unwrap();

    ApiServer {
        server,
        listening_socket,
        network_rx: network_receivers,
        local_enr: enr,
        external_peer_id: peer_id,
    }
}