Use BeaconProcessor for API requests (#4462)

## Issue Addressed

NA

## Proposed Changes

Rather than spawning a new task on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This change:

1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
1. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued; see the sketch after this list).
1. Allows the BN to prioritise HTTP requests relative to messages coming from the P2P network (e.g., prioritise importing gossip blocks over serving API requests).
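
As a rough, self-contained sketch of the backpressure this buys (modelled on the `send_to_beacon_processor` helper added in this PR, with names and the `String` payload simplified): a request is rejected immediately when the bounded queue is full, rather than accumulating unbounded tasks on the executor.

```rust
use tokio::sync::mpsc::error::TrySendError;
use tokio::sync::{mpsc, oneshot};

/// A queued unit of work: run the closure, send the result back.
struct ApiTask {
    process_fn: Box<dyn FnOnce() -> String + Send>,
    result_tx: oneshot::Sender<String>,
}

/// Try to enqueue a request; fail fast if the processor is saturated.
async fn serve_request(
    queue: &mpsc::Sender<ApiTask>,
    process_fn: Box<dyn FnOnce() -> String + Send>,
) -> Result<String, &'static str> {
    let (result_tx, result_rx) = oneshot::channel();
    match queue.try_send(ApiTask { process_fn, result_tx }) {
        // Queued; wait for a worker to execute it (or drop it).
        Ok(()) => result_rx
            .await
            .map_err(|_| "The task did not execute. The server is overloaded or shutting down."),
        Err(TrySendError::Full(_)) => Err("The task was dropped. The server is overloaded."),
        Err(TrySendError::Closed(_)) => Err("The task was dropped. The server is shutting down."),
    }
}

#[tokio::main]
async fn main() {
    // A tiny bound for demonstration; the real API queues hold 1_024 events each.
    let (tx, mut rx) = mpsc::channel::<ApiTask>(2);

    // A single "worker" draining the queue.
    tokio::spawn(async move {
        while let Some(task) = rx.recv().await {
            let _ = task.result_tx.send((task.process_fn)());
        }
    });

    let reply = serve_request(&tx, Box::new(|| "ok".to_string())).await;
    println!("{reply:?}");
}
```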

Presently there are two priority levels:

- `Priority::P0`
    - The beacon processor will prioritise these above everything other than importing new blocks.
    - Roughly all validator-sensitive endpoints.
- `Priority::P1`
    - The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill work (a toy model of the resulting pop order follows this list).
    - Everything that's not `Priority::P0`.
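
Here's a toy model of the resulting pop order (the real loop in `beacon_processor` interleaves many more queues; the queue names and task strings below are illustrative):

```rust
use std::collections::VecDeque;

/// Minimal model of how an idle worker picks its next task.
struct Queues {
    gossip_blocks: VecDeque<&'static str>,
    api_p0: VecDeque<&'static str>,
    attestations: VecDeque<&'static str>,
    api_p1: VecDeque<&'static str>,
}

impl Queues {
    fn next_task(&mut self) -> Option<&'static str> {
        // Blocks first, then P0 API requests, then gossip attestations...
        self.gossip_blocks
            .pop_front()
            .or_else(|| self.api_p0.pop_front())
            .or_else(|| self.attestations.pop_front())
            // ...and P1 API requests only once the P2P work is done.
            .or_else(|| self.api_p1.pop_front())
    }
}

fn main() {
    let mut queues = Queues {
        gossip_blocks: VecDeque::from(["import block"]),
        api_p0: VecDeque::from(["GET validator/duties/attester"]),
        attestations: VecDeque::from(["verify attestation"]),
        api_p1: VecDeque::from(["GET beacon/states/head/root"]),
    };
    while let Some(task) = queues.next_task() {
        println!("{task}");
    }
}
```
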
The `--http-enable-beacon-processor false` flag can be supplied to revert to the old behaviour of spawning a new `tokio` task for each request:

```
        --http-enable-beacon-processor <BOOLEAN>
            The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
            "true", HTTP API requests will queued and scheduled alongside other tasks. When set to "false", HTTP API
            responses will be executed immediately. [default: true]
```
    
## New CLI Flags

I added some other new CLI flags (a toy sketch of the batch-size trade-off follows the listing):

```
        --beacon-processor-aggregate-batch-size <INTEGER>
            Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
            reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
            network. [default: 64]
        --beacon-processor-attestation-batch-size <INTEGER>
            Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
            usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
            [default: 64]
        --beacon-processor-max-workers <INTEGER>
            Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
            consumption. Reducing the value may result in decreased resource usage and diminished performance. The
            default value is the number of logical CPU cores on the host.
        --beacon-processor-reprocess-queue-len <INTEGER>
            Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
            messages from being dropped while lower values may help protect the node from becoming overwhelmed.
            [default: 12288]
```
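
To illustrate the trade-off the two batch-size flags control, here is a toy sketch of batched verification with the "poisoning" fall-back described in the `beacon_processor` comments (the `bool`-valued signatures are a stand-in for real BLS verification):

```rust
use std::cmp;

/// Toy batch verifier: one invalid signature "poisons" the whole batch.
fn verify_batch(batch: &[bool]) -> bool {
    batch.iter().all(|&sig_ok| sig_ok)
}

/// Drain the queue in batches of at most `max_batch_size`, falling back to
/// individual verification when a batch fails.
fn process_attestations(queue: &mut Vec<bool>, max_batch_size: usize) -> usize {
    let mut verified = 0;
    while !queue.is_empty() {
        let batch_size = cmp::min(queue.len(), max_batch_size);
        let batch: Vec<bool> = queue.drain(..batch_size).collect();
        if verify_batch(&batch) {
            // One (cheap) batched verification covered every signature.
            verified += batch.len();
        } else {
            // Poisoned batch: verify each signature individually.
            verified += batch.iter().filter(|&&sig_ok| sig_ok).count();
        }
    }
    verified
}

fn main() {
    // Larger batches amortise verification cost in a healthy network, but a
    // single hostile signature wastes a whole batched verification, which is
    // why a smaller batch size may suit an unhealthy or hostile network.
    let mut queue = vec![true; 63];
    queue.push(false);
    assert_eq!(process_attestations(&mut queue, 64), 63);
}
```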


I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the GitHub runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.

## Additional Info

I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses; I guess this is what we signed up for 🤷

The `validator/register_validator` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for the builder response.
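
A minimal sketch of that hand-off (`register_with_builder` is a stand-in for the real builder API call):

```rust
use std::time::Duration;
use tokio::time::sleep;

/// Stand-in for the (potentially very slow) builder/relay HTTP call.
async fn register_with_builder() -> Result<(), String> {
    sleep(Duration::from_millis(10)).await; // relays may take far longer
    Ok(())
}

#[tokio::main]
async fn main() {
    // Inside the handler: do the cheap local work on the beacon processor
    // worker, then hand the slow builder wait to a fresh tokio task so the
    // worker is freed immediately.
    let builder_task = tokio::task::spawn(register_with_builder());

    // ... the beacon processor worker returns here and serves other work ...

    // Whoever needs the builder result awaits the join handle.
    let result = builder_task.await.expect("builder task panicked");
    println!("builder registration: {result:?}");
}
```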

I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in efbabe3252. That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.

This commit is contained in:
Paul Hauner, 2023-08-08 23:30:15 +00:00
parent 1373dcf076
commit b60304b19f
24 changed files with 1864 additions and 962 deletions

Cargo.lock (generated)

@ -601,7 +601,9 @@ dependencies = [
"lighthouse_metrics",
"lighthouse_network",
"logging",
"num_cpus",
"parking_lot 0.12.1",
"serde",
"slog",
"slot_clock",
"strum",
@ -3266,6 +3268,7 @@ name = "http_api"
version = "0.1.0"
dependencies = [
"beacon_chain",
"beacon_processor",
"bs58 0.4.0",
"bytes",
"directory",
@ -4320,6 +4323,7 @@ dependencies = [
"account_manager",
"account_utils",
"beacon_node",
"beacon_processor",
"bls",
"boot_node",
"clap",
@ -8893,6 +8897,7 @@ dependencies = [
"serde",
"serde_json",
"serde_yaml",
"task_executor",
"testcontainers",
"tokio",
"tokio-postgres",


@ -79,8 +79,6 @@ pub struct ChainConfig {
///
/// This is useful for block builders and testing.
pub always_prepare_payload: bool,
/// Whether backfill sync processing should be rate-limited.
pub enable_backfill_rate_limiting: bool,
/// Whether to use `ProgressiveBalancesCache` in unrealized FFG progression calculation.
pub progressive_balances_mode: ProgressiveBalancesMode,
/// Number of epochs between each migration of data from the hot database to the freezer.
@ -114,7 +112,6 @@ impl Default for ChainConfig {
shuffling_cache_size: crate::shuffling_cache::DEFAULT_CACHE_SIZE,
genesis_backfill: false,
always_prepare_payload: false,
enable_backfill_rate_limiting: true,
progressive_balances_mode: ProgressiveBalancesMode::Checked,
epochs_per_migration: crate::migrate::DEFAULT_EPOCHS_PER_MIGRATION,
}


@ -21,4 +21,6 @@ types = { path = "../../consensus/types" }
ethereum_ssz = "0.5.0"
lazy_static = "1.4.0"
lighthouse_metrics = { path = "../../common/lighthouse_metrics" }
parking_lot = "0.12.0"
parking_lot = "0.12.0"
num_cpus = "1.13.0"
serde = { version = "1.0.116", features = ["derive"] }


@ -48,6 +48,7 @@ use lighthouse_network::NetworkGlobals;
use lighthouse_network::{MessageId, PeerId};
use logging::TimeLatch;
use parking_lot::Mutex;
use serde::{Deserialize, Serialize};
use slog::{crit, debug, error, trace, warn, Logger};
use slot_clock::SlotClock;
use std::cmp;
@ -70,7 +71,7 @@ pub mod work_reprocessing_queue;
/// The maximum size of the channel for work events to the `BeaconProcessor`.
///
/// Setting this too low will cause consensus messages to be dropped.
pub const MAX_WORK_EVENT_QUEUE_LEN: usize = 16_384;
const DEFAULT_MAX_WORK_EVENT_QUEUE_LEN: usize = 16_384;
/// The maximum size of the channel for idle events to the `BeaconProcessor`.
///
@ -79,7 +80,7 @@ pub const MAX_WORK_EVENT_QUEUE_LEN: usize = 16_384;
const MAX_IDLE_QUEUE_LEN: usize = 16_384;
/// The maximum size of the channel for re-processing work events.
pub const MAX_SCHEDULED_WORK_QUEUE_LEN: usize = 3 * MAX_WORK_EVENT_QUEUE_LEN / 4;
const DEFAULT_MAX_SCHEDULED_WORK_QUEUE_LEN: usize = 3 * DEFAULT_MAX_WORK_EVENT_QUEUE_LEN / 4;
/// The maximum number of queued `Attestation` objects that will be stored before we start dropping
/// them.
@ -167,6 +168,14 @@ const MAX_BLS_TO_EXECUTION_CHANGE_QUEUE_LEN: usize = 16_384;
/// will be stored before we start dropping them.
const MAX_LIGHT_CLIENT_BOOTSTRAP_QUEUE_LEN: usize = 1_024;
/// The maximum number of priority-0 (highest priority) messages that will be queued before
/// they begin to be dropped.
const MAX_API_REQUEST_P0_QUEUE_LEN: usize = 1_024;
/// The maximum number of priority-1 (second-highest priority) messages that will be queued before
/// they begin to be dropped.
const MAX_API_REQUEST_P1_QUEUE_LEN: usize = 1_024;
/// The name of the manager tokio task.
const MANAGER_TASK_NAME: &str = "beacon_processor_manager";
@ -184,8 +193,8 @@ const WORKER_TASK_NAME: &str = "beacon_processor_worker";
/// Poisoning occurs when an invalid signature is included in a batch of attestations. A single
/// invalid signature causes the entire batch to fail. When a batch fails, we fall-back to
/// individually verifying each attestation signature.
const MAX_GOSSIP_ATTESTATION_BATCH_SIZE: usize = 64;
const MAX_GOSSIP_AGGREGATE_BATCH_SIZE: usize = 64;
const DEFAULT_MAX_GOSSIP_ATTESTATION_BATCH_SIZE: usize = 64;
const DEFAULT_MAX_GOSSIP_AGGREGATE_BATCH_SIZE: usize = 64;
/// Unique IDs used for metrics and testing.
pub const WORKER_FREED: &str = "worker_freed";
@ -215,6 +224,61 @@ pub const UNKNOWN_BLOCK_ATTESTATION: &str = "unknown_block_attestation";
pub const UNKNOWN_BLOCK_AGGREGATE: &str = "unknown_block_aggregate";
pub const UNKNOWN_LIGHT_CLIENT_UPDATE: &str = "unknown_light_client_update";
pub const GOSSIP_BLS_TO_EXECUTION_CHANGE: &str = "gossip_bls_to_execution_change";
pub const API_REQUEST_P0: &str = "api_request_p0";
pub const API_REQUEST_P1: &str = "api_request_p1";
#[derive(Clone, PartialEq, Debug, Serialize, Deserialize)]
pub struct BeaconProcessorConfig {
pub max_workers: usize,
pub max_work_event_queue_len: usize,
pub max_scheduled_work_queue_len: usize,
pub max_gossip_attestation_batch_size: usize,
pub max_gossip_aggregate_batch_size: usize,
pub enable_backfill_rate_limiting: bool,
}
impl Default for BeaconProcessorConfig {
fn default() -> Self {
Self {
max_workers: cmp::max(1, num_cpus::get()),
max_work_event_queue_len: DEFAULT_MAX_WORK_EVENT_QUEUE_LEN,
max_scheduled_work_queue_len: DEFAULT_MAX_SCHEDULED_WORK_QUEUE_LEN,
max_gossip_attestation_batch_size: DEFAULT_MAX_GOSSIP_ATTESTATION_BATCH_SIZE,
max_gossip_aggregate_batch_size: DEFAULT_MAX_GOSSIP_AGGREGATE_BATCH_SIZE,
enable_backfill_rate_limiting: true,
}
}
}
// The channels necessary to instantiate a `BeaconProcessor`.
pub struct BeaconProcessorChannels<E: EthSpec> {
pub beacon_processor_tx: BeaconProcessorSend<E>,
pub beacon_processor_rx: mpsc::Receiver<WorkEvent<E>>,
pub work_reprocessing_tx: mpsc::Sender<ReprocessQueueMessage>,
pub work_reprocessing_rx: mpsc::Receiver<ReprocessQueueMessage>,
}
impl<E: EthSpec> BeaconProcessorChannels<E> {
pub fn new(config: &BeaconProcessorConfig) -> Self {
let (beacon_processor_tx, beacon_processor_rx) =
mpsc::channel(config.max_scheduled_work_queue_len);
let (work_reprocessing_tx, work_reprocessing_rx) =
mpsc::channel(config.max_scheduled_work_queue_len);
Self {
beacon_processor_tx: BeaconProcessorSend(beacon_processor_tx),
beacon_processor_rx,
work_reprocessing_rx,
work_reprocessing_tx,
}
}
}
impl<E: EthSpec> Default for BeaconProcessorChannels<E> {
fn default() -> Self {
Self::new(&BeaconProcessorConfig::default())
}
}
/// A simple first-in-first-out queue with a maximum length.
struct FifoQueue<T> {
@ -363,7 +427,7 @@ impl<E: EthSpec> WorkEvent<E> {
}
}
impl<E: EthSpec> std::convert::From<ReadyWork> for WorkEvent<E> {
impl<E: EthSpec> From<ReadyWork> for WorkEvent<E> {
fn from(ready_work: ReadyWork) -> Self {
match ready_work {
ReadyWork::Block(QueuedGossipBlock {
@ -465,6 +529,10 @@ impl<E: EthSpec> BeaconProcessorSend<E> {
pub type AsyncFn = Pin<Box<dyn Future<Output = ()> + Send + Sync>>;
pub type BlockingFn = Box<dyn FnOnce() + Send + Sync>;
pub type BlockingFnWithManualSendOnIdle = Box<dyn FnOnce(SendOnDrop) + Send + Sync>;
pub enum BlockingOrAsync {
Blocking(BlockingFn),
Async(AsyncFn),
}
/// Indicates the type of work to be performed and therefore its priority and
/// queuing specifics.
@ -523,6 +591,8 @@ pub enum Work<E: EthSpec> {
BlocksByRootsRequest(BlockingFnWithManualSendOnIdle),
GossipBlsToExecutionChange(BlockingFn),
LightClientBootstrapRequest(BlockingFn),
ApiRequestP0(BlockingOrAsync),
ApiRequestP1(BlockingOrAsync),
}
impl<E: EthSpec> fmt::Debug for Work<E> {
@ -560,6 +630,8 @@ impl<E: EthSpec> Work<E> {
Work::UnknownBlockAggregate { .. } => UNKNOWN_BLOCK_AGGREGATE,
Work::GossipBlsToExecutionChange(_) => GOSSIP_BLS_TO_EXECUTION_CHANGE,
Work::UnknownLightClientOptimisticUpdate { .. } => UNKNOWN_LIGHT_CLIENT_UPDATE,
Work::ApiRequestP0 { .. } => API_REQUEST_P0,
Work::ApiRequestP1 { .. } => API_REQUEST_P1,
}
}
}
@ -638,7 +710,7 @@ pub struct BeaconProcessor<E: EthSpec> {
pub executor: TaskExecutor,
pub max_workers: usize,
pub current_workers: usize,
pub enable_backfill_rate_limiting: bool,
pub config: BeaconProcessorConfig,
pub log: Logger,
}
@ -714,11 +786,13 @@ impl<E: EthSpec> BeaconProcessor<E> {
let mut lcbootstrap_queue = FifoQueue::new(MAX_LIGHT_CLIENT_BOOTSTRAP_QUEUE_LEN);
let mut api_request_p0_queue = FifoQueue::new(MAX_API_REQUEST_P0_QUEUE_LEN);
let mut api_request_p1_queue = FifoQueue::new(MAX_API_REQUEST_P1_QUEUE_LEN);
// Channels for sending work to the re-process scheduler (`work_reprocessing_tx`) and to
// receive them back once they are ready (`ready_work_rx`).
let (ready_work_tx, ready_work_rx) =
mpsc::channel::<ReadyWork>(MAX_SCHEDULED_WORK_QUEUE_LEN);
mpsc::channel::<ReadyWork>(self.config.max_scheduled_work_queue_len);
spawn_reprocess_scheduler(
ready_work_tx,
work_reprocessing_rx,
@ -739,7 +813,7 @@ impl<E: EthSpec> BeaconProcessor<E> {
reprocess_work_rx: ready_work_rx,
};
let enable_backfill_rate_limiting = self.enable_backfill_rate_limiting;
let enable_backfill_rate_limiting = self.config.enable_backfill_rate_limiting;
loop {
let work_event = match inbound_events.next().await {
@ -850,12 +924,17 @@ impl<E: EthSpec> BeaconProcessor<E> {
// required to verify some attestations.
} else if let Some(item) = gossip_block_queue.pop() {
self.spawn_worker(item, idle_tx);
// Check the priority 0 API requests after blocks, but before attestations.
} else if let Some(item) = api_request_p0_queue.pop() {
self.spawn_worker(item, idle_tx);
// Check the aggregates, *then* the unaggregates since we assume that
// aggregates are more valuable to local validators and effectively give us
// more information with less signature verification time.
} else if aggregate_queue.len() > 0 {
let batch_size =
cmp::min(aggregate_queue.len(), MAX_GOSSIP_AGGREGATE_BATCH_SIZE);
let batch_size = cmp::min(
aggregate_queue.len(),
self.config.max_gossip_aggregate_batch_size,
);
if batch_size < 2 {
// One single aggregate is in the queue, process it individually.
@ -914,7 +993,7 @@ impl<E: EthSpec> BeaconProcessor<E> {
} else if attestation_queue.len() > 0 {
let batch_size = cmp::min(
attestation_queue.len(),
MAX_GOSSIP_ATTESTATION_BATCH_SIZE,
self.config.max_gossip_attestation_batch_size,
);
if batch_size < 2 {
@ -1005,6 +1084,12 @@ impl<E: EthSpec> BeaconProcessor<E> {
self.spawn_worker(item, idle_tx);
} else if let Some(item) = gossip_bls_to_execution_change_queue.pop() {
self.spawn_worker(item, idle_tx);
// Check the priority 1 API requests after we've
// processed all the interesting things from the network
// and things required for us to stay in good repute
// with our P2P peers.
} else if let Some(item) = api_request_p1_queue.pop() {
self.spawn_worker(item, idle_tx);
// Handle backfill sync chain segments.
} else if let Some(item) = backfill_chain_segment.pop() {
self.spawn_worker(item, idle_tx);
@ -1127,6 +1212,12 @@ impl<E: EthSpec> BeaconProcessor<E> {
Work::UnknownLightClientOptimisticUpdate { .. } => {
unknown_light_client_update_queue.push(work, work_id, &self.log)
}
Work::ApiRequestP0 { .. } => {
api_request_p0_queue.push(work, work_id, &self.log)
}
Work::ApiRequestP1 { .. } => {
api_request_p1_queue.push(work, work_id, &self.log)
}
}
}
}
@ -1183,6 +1274,14 @@ impl<E: EthSpec> BeaconProcessor<E> {
&metrics::BEACON_PROCESSOR_BLS_TO_EXECUTION_CHANGE_QUEUE_TOTAL,
gossip_bls_to_execution_change_queue.len() as i64,
);
metrics::set_gauge(
&metrics::BEACON_PROCESSOR_API_REQUEST_P0_QUEUE_TOTAL,
api_request_p0_queue.len() as i64,
);
metrics::set_gauge(
&metrics::BEACON_PROCESSOR_API_REQUEST_P1_QUEUE_TOTAL,
api_request_p1_queue.len() as i64,
);
if aggregate_queue.is_full() && aggregate_debounce.elapsed() {
error!(
@ -1299,6 +1398,10 @@ impl<E: EthSpec> BeaconProcessor<E> {
task_spawner.spawn_blocking_with_manual_send_idle(work)
}
Work::ChainSegmentBackfill(process_fn) => task_spawner.spawn_async(process_fn),
Work::ApiRequestP0(process_fn) | Work::ApiRequestP1(process_fn) => match process_fn {
BlockingOrAsync::Blocking(process_fn) => task_spawner.spawn_blocking(process_fn),
BlockingOrAsync::Async(process_fn) => task_spawner.spawn_async(process_fn),
},
Work::GossipVoluntaryExit(process_fn)
| Work::GossipProposerSlashing(process_fn)
| Work::GossipAttesterSlashing(process_fn)


@ -100,6 +100,15 @@ lazy_static::lazy_static! {
"beacon_processor_sync_contribution_queue_total",
"Count of sync committee contributions waiting to be processed."
);
// HTTP API requests.
pub static ref BEACON_PROCESSOR_API_REQUEST_P0_QUEUE_TOTAL: Result<IntGauge> = try_create_int_gauge(
"beacon_processor_api_request_p0_queue_total",
"Count of P0 HTTP requesets waiting to be processed."
);
pub static ref BEACON_PROCESSOR_API_REQUEST_P1_QUEUE_TOTAL: Result<IntGauge> = try_create_int_gauge(
"beacon_processor_api_request_p1_queue_total",
"Count of P1 HTTP requesets waiting to be processed."
);
/*
* Attestation reprocessing queue metrics.


@ -13,10 +13,8 @@ use beacon_chain::{
store::{HotColdDB, ItemStore, LevelDB, StoreConfig},
BeaconChain, BeaconChainTypes, Eth1ChainBackend, MigratorConfig, ServerSentEventHandler,
};
use beacon_processor::{
work_reprocessing_queue::ReprocessQueueMessage, BeaconProcessor, BeaconProcessorSend,
WorkEvent, MAX_SCHEDULED_WORK_QUEUE_LEN, MAX_WORK_EVENT_QUEUE_LEN,
};
use beacon_processor::BeaconProcessorConfig;
use beacon_processor::{BeaconProcessor, BeaconProcessorChannels};
use environment::RuntimeContext;
use eth1::{Config as Eth1Config, Service as Eth1Service};
use eth2::{
@ -37,7 +35,7 @@ use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
use timer::spawn_timer;
use tokio::sync::{mpsc, oneshot};
use tokio::sync::oneshot;
use types::{
test_utils::generate_deterministic_keypairs, BeaconState, ChainSpec, EthSpec,
ExecutionBlockHash, Hash256, SignedBeaconBlock,
@ -76,11 +74,9 @@ pub struct ClientBuilder<T: BeaconChainTypes> {
http_api_config: http_api::Config,
http_metrics_config: http_metrics::Config,
slasher: Option<Arc<Slasher<T::EthSpec>>>,
beacon_processor_config: Option<BeaconProcessorConfig>,
beacon_processor_channels: Option<BeaconProcessorChannels<T::EthSpec>>,
eth_spec_instance: T::EthSpec,
beacon_processor_send: BeaconProcessorSend<T::EthSpec>,
beacon_processor_receive: mpsc::Receiver<WorkEvent<T::EthSpec>>,
work_reprocessing_tx: mpsc::Sender<ReprocessQueueMessage>,
work_reprocessing_rx: mpsc::Receiver<ReprocessQueueMessage>,
}
impl<TSlotClock, TEth1Backend, TEthSpec, THotStore, TColdStore>
@ -96,10 +92,6 @@ where
///
/// The `eth_spec_instance` parameter is used to concretize `TEthSpec`.
pub fn new(eth_spec_instance: TEthSpec) -> Self {
let (beacon_processor_send, beacon_processor_receive) =
mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN);
let (work_reprocessing_tx, work_reprocessing_rx) =
mpsc::channel(MAX_SCHEDULED_WORK_QUEUE_LEN);
Self {
slot_clock: None,
store: None,
@ -117,10 +109,8 @@ where
http_metrics_config: <_>::default(),
slasher: None,
eth_spec_instance,
beacon_processor_send: BeaconProcessorSend(beacon_processor_send),
beacon_processor_receive,
work_reprocessing_tx,
work_reprocessing_rx,
beacon_processor_config: None,
beacon_processor_channels: None,
}
}
@ -136,6 +126,12 @@ where
self
}
pub fn beacon_processor(mut self, config: BeaconProcessorConfig) -> Self {
self.beacon_processor_channels = Some(BeaconProcessorChannels::new(&config));
self.beacon_processor_config = Some(config);
self
}
pub fn slasher(mut self, slasher: Arc<Slasher<TEthSpec>>) -> Self {
self.slasher = Some(slasher);
self
@ -496,6 +492,7 @@ where
chain: None,
network_senders: None,
network_globals: None,
beacon_processor_send: None,
eth1_service: Some(genesis_service.eth1_service.clone()),
log: context.log().clone(),
sse_logging_components: runtime_context.sse_logging_components.clone(),
@ -573,6 +570,10 @@ where
.as_ref()
.ok_or("network requires a runtime_context")?
.clone();
let beacon_processor_channels = self
.beacon_processor_channels
.as_ref()
.ok_or("network requires beacon_processor_channels")?;
// If gossipsub metrics are required we build a registry to record them
let mut gossipsub_registry = if config.metrics_enabled {
@ -588,8 +589,8 @@ where
gossipsub_registry
.as_mut()
.map(|registry| registry.sub_registry_with_prefix("gossipsub")),
self.beacon_processor_send.clone(),
self.work_reprocessing_tx.clone(),
beacon_processor_channels.beacon_processor_tx.clone(),
beacon_processor_channels.work_reprocessing_tx.clone(),
)
.await
.map_err(|e| format!("Failed to start network: {:?}", e))?;
@ -712,6 +713,14 @@ where
.runtime_context
.as_ref()
.ok_or("build requires a runtime context")?;
let beacon_processor_channels = self
.beacon_processor_channels
.take()
.ok_or("build requires beacon_processor_channels")?;
let beacon_processor_config = self
.beacon_processor_config
.take()
.ok_or("build requires a beacon_processor_config")?;
let log = runtime_context.log().clone();
let http_api_listen_addr = if self.http_api_config.enabled {
@ -721,6 +730,7 @@ where
network_senders: self.network_senders.clone(),
network_globals: self.network_globals.clone(),
eth1_service: self.eth1_service.clone(),
beacon_processor_send: Some(beacon_processor_channels.beacon_processor_tx.clone()),
sse_logging_components: runtime_context.sse_logging_components.clone(),
log: log.clone(),
});
@ -784,15 +794,13 @@ where
executor: beacon_processor_context.executor.clone(),
max_workers: cmp::max(1, num_cpus::get()),
current_workers: 0,
enable_backfill_rate_limiting: beacon_chain
.config
.enable_backfill_rate_limiting,
config: beacon_processor_config,
log: beacon_processor_context.log().clone(),
}
.spawn_manager(
self.beacon_processor_receive,
self.work_reprocessing_tx,
self.work_reprocessing_rx,
beacon_processor_channels.beacon_processor_rx,
beacon_processor_channels.work_reprocessing_tx,
beacon_processor_channels.work_reprocessing_rx,
None,
beacon_chain.slot_clock.clone(),
beacon_chain.spec.maximum_gossip_clock_disparity(),


@ -1,4 +1,5 @@
use beacon_chain::validator_monitor::DEFAULT_INDIVIDUAL_TRACKING_THRESHOLD;
use beacon_processor::BeaconProcessorConfig;
use directory::DEFAULT_ROOT_DIR;
use environment::LoggerConfig;
use network::NetworkConfig;
@ -80,6 +81,7 @@ pub struct Config {
pub slasher: Option<slasher::Config>,
pub logger_config: LoggerConfig,
pub always_prefer_builder_payload: bool,
pub beacon_processor: BeaconProcessorConfig,
}
impl Default for Config {
@ -107,6 +109,7 @@ impl Default for Config {
validator_monitor_individual_tracking_threshold: DEFAULT_INDIVIDUAL_TRACKING_THRESHOLD,
logger_config: LoggerConfig::default(),
always_prefer_builder_payload: false,
beacon_processor: <_>::default(),
}
}
}


@ -3,12 +3,12 @@ name = "http_api"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2021"
autotests = false # using a single test binary compiles faster
autotests = false # using a single test binary compiles faster
[dependencies]
warp = { version = "0.3.2", features = ["tls"] }
serde = { version = "1.0.116", features = ["derive"] }
tokio = { version = "1.14.0", features = ["macros","sync"] }
tokio = { version = "1.14.0", features = ["macros", "sync"] }
tokio-stream = { version = "0.1.3", features = ["sync"] }
types = { path = "../../consensus/types" }
hex = "0.4.2"
@ -27,9 +27,9 @@ slot_clock = { path = "../../common/slot_clock" }
ethereum_ssz = "0.5.0"
bs58 = "0.4.0"
futures = "0.3.8"
execution_layer = {path = "../execution_layer"}
execution_layer = { path = "../execution_layer" }
parking_lot = "0.12.0"
safe_arith = {path = "../../consensus/safe_arith"}
safe_arith = { path = "../../consensus/safe_arith" }
task_executor = { path = "../../common/task_executor" }
lru = "0.7.7"
tree_hash = "0.5.0"
@ -40,9 +40,10 @@ logging = { path = "../../common/logging" }
ethereum_serde_utils = "0.5.0"
operation_pool = { path = "../operation_pool" }
sensitive_url = { path = "../../common/sensitive_url" }
unused_port = {path = "../../common/unused_port"}
unused_port = { path = "../../common/unused_port" }
store = { path = "../store" }
bytes = "1.1.0"
beacon_processor = { path = "../beacon_processor" }
[dev-dependencies]
environment = { path = "../../lighthouse/environment" }
@ -52,4 +53,4 @@ genesis = { path = "../genesis" }
[[test]]
name = "bn_http_api_tests"
path = "tests/main.rs"
path = "tests/main.rs"

File diff suppressed because it is too large.


@ -0,0 +1,214 @@
use beacon_processor::{BeaconProcessorSend, BlockingOrAsync, Work, WorkEvent};
use serde::Serialize;
use std::future::Future;
use tokio::sync::{mpsc::error::TrySendError, oneshot};
use types::EthSpec;
use warp::reply::{Reply, Response};
/// Maps a request to a queue in the `BeaconProcessor`.
#[derive(Clone, Copy)]
pub enum Priority {
/// The highest priority.
P0,
/// The lowest priority.
P1,
}
impl Priority {
/// Wrap `self` in a `WorkEvent` with an appropriate priority.
fn work_event<E: EthSpec>(&self, process_fn: BlockingOrAsync) -> WorkEvent<E> {
let work = match self {
Priority::P0 => Work::ApiRequestP0(process_fn),
Priority::P1 => Work::ApiRequestP1(process_fn),
};
WorkEvent {
drop_during_sync: false,
work,
}
}
}
/// Spawns tasks on the `BeaconProcessor` or directly on the tokio executor.
pub struct TaskSpawner<E: EthSpec> {
/// Used to send tasks to the `BeaconProcessor`. The tokio executor will be
/// used if this is `None`.
beacon_processor_send: Option<BeaconProcessorSend<E>>,
}
impl<E: EthSpec> TaskSpawner<E> {
pub fn new(beacon_processor_send: Option<BeaconProcessorSend<E>>) -> Self {
Self {
beacon_processor_send,
}
}
/// Executes a "blocking" (non-async) task which returns a `Response`.
pub async fn blocking_response_task<F, T>(
self,
priority: Priority,
func: F,
) -> Result<Response, warp::Rejection>
where
F: FnOnce() -> Result<T, warp::Rejection> + Send + Sync + 'static,
T: Reply + Send + 'static,
{
if let Some(beacon_processor_send) = &self.beacon_processor_send {
// Create a closure that will execute `func` and send the result to
// a channel held by this thread.
let (tx, rx) = oneshot::channel();
let process_fn = move || {
// Execute the function, collect the return value.
let func_result = func();
// Send the result down the channel. Ignore any failures; the
// send can only fail if the receiver is dropped.
let _ = tx.send(func_result);
};
// Send the function to the beacon processor for execution at some arbitrary time.
match send_to_beacon_processor(
beacon_processor_send,
priority,
BlockingOrAsync::Blocking(Box::new(process_fn)),
rx,
)
.await
{
Ok(result) => result.map(Reply::into_response),
Err(error_response) => Ok(error_response),
}
} else {
// There is no beacon processor so spawn a task directly on the
// tokio executor.
warp_utils::task::blocking_response_task(func).await
}
}
/// Executes a "blocking" (non-async) task which returns a JSON-serializable
/// object.
pub async fn blocking_json_task<F, T>(
self,
priority: Priority,
func: F,
) -> Result<Response, warp::Rejection>
where
F: FnOnce() -> Result<T, warp::Rejection> + Send + Sync + 'static,
T: Serialize + Send + 'static,
{
let func = || func().map(|t| warp::reply::json(&t).into_response());
self.blocking_response_task(priority, func).await
}
/// Executes an async task which may return a `warp::Rejection`.
pub async fn spawn_async_with_rejection(
self,
priority: Priority,
func: impl Future<Output = Result<Response, warp::Rejection>> + Send + Sync + 'static,
) -> Result<Response, warp::Rejection> {
if let Some(beacon_processor_send) = &self.beacon_processor_send {
// Create a wrapper future that will execute `func` and send the
// result to a channel held by this thread.
let (tx, rx) = oneshot::channel();
let process_fn = async move {
// Await the future, collect the return value.
let func_result = func.await;
// Send the result down the channel. Ignore any failures; the
// send can only fail if the receiver is dropped.
let _ = tx.send(func_result);
};
// Send the function to the beacon processor for execution at some arbitrary time.
send_to_beacon_processor(
beacon_processor_send,
priority,
BlockingOrAsync::Async(Box::pin(process_fn)),
rx,
)
.await
.unwrap_or_else(Result::Ok)
} else {
// There is no beacon processor so spawn a task directly on the
// tokio executor.
tokio::task::spawn(func).await.unwrap_or_else(|e| {
let response = warp::reply::with_status(
warp::reply::json(&format!("Tokio did not execute task: {e:?}")),
eth2::StatusCode::INTERNAL_SERVER_ERROR,
)
.into_response();
Ok(response)
})
}
}
/// Executes an async task which always returns a `Response`.
pub async fn spawn_async(
self,
priority: Priority,
func: impl Future<Output = Response> + Send + Sync + 'static,
) -> Response {
if let Some(beacon_processor_send) = &self.beacon_processor_send {
// Create a wrapper future that will execute `func` and send the
// result to a channel held by this thread.
let (tx, rx) = oneshot::channel();
let process_fn = async move {
// Await the future, collect the return value.
let func_result = func.await;
// Send the result down the channel. Ignore any failures; the
// send can only fail if the receiver is dropped.
let _ = tx.send(func_result);
};
// Send the function to the beacon processor for execution at some arbitrary time.
send_to_beacon_processor(
beacon_processor_send,
priority,
BlockingOrAsync::Async(Box::pin(process_fn)),
rx,
)
.await
.unwrap_or_else(|error_response| error_response)
} else {
// There is no beacon processor so spawn a task directly on the
// tokio executor.
tokio::task::spawn(func).await.unwrap_or_else(|e| {
warp::reply::with_status(
warp::reply::json(&format!("Tokio did not execute task: {e:?}")),
eth2::StatusCode::INTERNAL_SERVER_ERROR,
)
.into_response()
})
}
}
}
/// Send a task to the beacon processor and await execution.
///
/// If the task is not executed, return an `Err(response)` with an error message
/// for the API consumer.
async fn send_to_beacon_processor<E: EthSpec, T>(
beacon_processor_send: &BeaconProcessorSend<E>,
priority: Priority,
process_fn: BlockingOrAsync,
rx: oneshot::Receiver<T>,
) -> Result<T, Response> {
let error_message = match beacon_processor_send.try_send(priority.work_event(process_fn)) {
Ok(()) => {
match rx.await {
// The beacon processor executed the task and sent a result.
Ok(func_result) => return Ok(func_result),
// The beacon processor dropped the channel without sending a
// result. The beacon processor dropped this task because its
// queues are full or it's shutting down.
Err(_) => "The task did not execute. The server is overloaded or shutting down.",
}
}
Err(TrySendError::Full(_)) => "The task was dropped. The server is overloaded.",
Err(TrySendError::Closed(_)) => "The task was dropped. The server is shutting down.",
};
let error_response = warp::reply::with_status(
warp::reply::json(&error_message),
eth2::StatusCode::INTERNAL_SERVER_ERROR,
)
.into_response();
Err(error_response)
}


@ -5,6 +5,7 @@ use beacon_chain::{
},
BeaconChain, BeaconChainTypes,
};
use beacon_processor::{BeaconProcessor, BeaconProcessorChannels, BeaconProcessorConfig};
use directory::DEFAULT_ROOT_DIR;
use eth2::{BeaconNodeHttpClient, Timeouts};
use lighthouse_network::{
@ -26,7 +27,7 @@ use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use std::time::Duration;
use store::MemoryStore;
use tokio::sync::oneshot;
use task_executor::test_utils::TestRuntime;
use types::{ChainSpec, EthSpec};
pub const TCP_PORT: u16 = 42;
@ -39,7 +40,6 @@ pub struct InteractiveTester<E: EthSpec> {
pub harness: BeaconChainHarness<EphemeralHarnessType<E>>,
pub client: BeaconNodeHttpClient,
pub network_rx: NetworkReceivers<E>,
_server_shutdown: oneshot::Sender<()>,
}
/// The result of calling `create_api_server`.
@ -48,7 +48,6 @@ pub struct InteractiveTester<E: EthSpec> {
pub struct ApiServer<E: EthSpec, SFut: Future<Output = ()>> {
pub server: SFut,
pub listening_socket: SocketAddr,
pub shutdown_tx: oneshot::Sender<()>,
pub network_rx: NetworkReceivers<E>,
pub local_enr: Enr,
pub external_peer_id: PeerId,
@ -96,10 +95,14 @@ impl<E: EthSpec> InteractiveTester<E> {
let ApiServer {
server,
listening_socket,
shutdown_tx: _server_shutdown,
network_rx,
..
} = create_api_server(harness.chain.clone(), harness.logger().clone()).await;
} = create_api_server(
harness.chain.clone(),
&harness.runtime,
harness.logger().clone(),
)
.await;
tokio::spawn(server);
@ -117,22 +120,23 @@ impl<E: EthSpec> InteractiveTester<E> {
harness,
client,
network_rx,
_server_shutdown,
}
}
}
pub async fn create_api_server<T: BeaconChainTypes>(
chain: Arc<BeaconChain<T>>,
test_runtime: &TestRuntime,
log: Logger,
) -> ApiServer<T::EthSpec, impl Future<Output = ()>> {
// Get a random unused port.
let port = unused_port::unused_tcp4_port().unwrap();
create_api_server_on_port(chain, log, port).await
create_api_server_on_port(chain, test_runtime, log, port).await
}
pub async fn create_api_server_on_port<T: BeaconChainTypes>(
chain: Arc<BeaconChain<T>>,
test_runtime: &TestRuntime,
log: Logger,
port: u16,
) -> ApiServer<T::EthSpec, impl Future<Output = ()>> {
@ -180,6 +184,37 @@ pub async fn create_api_server_on_port<T: BeaconChainTypes>(
let eth1_service =
eth1::Service::new(eth1::Config::default(), log.clone(), chain.spec.clone()).unwrap();
let beacon_processor_config = BeaconProcessorConfig::default();
let BeaconProcessorChannels {
beacon_processor_tx,
beacon_processor_rx,
work_reprocessing_tx,
work_reprocessing_rx,
} = BeaconProcessorChannels::new(&beacon_processor_config);
let beacon_processor_send = beacon_processor_tx;
BeaconProcessor {
network_globals: network_globals.clone(),
executor: test_runtime.task_executor.clone(),
// The number of workers must be greater than one. Tests which use the
// builder workflow sometimes require an internal HTTP request in order
// to fulfill an already in-flight HTTP request, therefore having only
// one worker will result in a deadlock.
max_workers: 2,
current_workers: 0,
config: beacon_processor_config,
log: log.clone(),
}
.spawn_manager(
beacon_processor_rx,
work_reprocessing_tx,
work_reprocessing_rx,
None,
chain.slot_clock.clone(),
chain.spec.maximum_gossip_clock_disparity(),
)
.unwrap();
let ctx = Arc::new(Context {
config: Config {
enabled: true,
@ -190,26 +225,22 @@ pub async fn create_api_server_on_port<T: BeaconChainTypes>(
allow_sync_stalled: false,
data_dir: std::path::PathBuf::from(DEFAULT_ROOT_DIR),
spec_fork_name: None,
enable_beacon_processor: true,
},
chain: Some(chain),
network_senders: Some(network_senders),
network_globals: Some(network_globals),
beacon_processor_send: Some(beacon_processor_send),
eth1_service: Some(eth1_service),
sse_logging_components: None,
log,
});
let (shutdown_tx, shutdown_rx) = oneshot::channel();
let server_shutdown = async {
// It's not really interesting why this triggered, just that it happened.
let _ = shutdown_rx.await;
};
let (listening_socket, server) = crate::serve(ctx, server_shutdown).unwrap();
let (listening_socket, server) = crate::serve(ctx, test_runtime.task_executor.exit()).unwrap();
ApiServer {
server,
listening_socket,
shutdown_tx,
network_rx: network_receivers,
local_enr: enr,
external_peer_id: peer_id,


@ -0,0 +1,21 @@
use beacon_chain::{BeaconChain, BeaconChainError, BeaconChainTypes};
use types::*;
/// Uses the `chain.validator_pubkey_cache` to resolve a pubkey to a validator
/// index and then ensures that the validator exists in the given `state`.
pub fn pubkey_to_validator_index<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
state: &BeaconState<T::EthSpec>,
pubkey: &PublicKeyBytes,
) -> Result<Option<usize>, BeaconChainError> {
chain
.validator_index(pubkey)?
.filter(|&index| {
state
.validators()
.get(index)
.map_or(false, |v| v.pubkey == *pubkey)
})
.map(Result::Ok)
.transpose()
}


@ -30,7 +30,6 @@ use state_processing::per_block_processing::get_expected_withdrawals;
use state_processing::per_slot_processing;
use std::convert::TryInto;
use std::sync::Arc;
use tokio::sync::oneshot;
use tokio::time::Duration;
use tree_hash::TreeHash;
use types::application_domain::ApplicationDomain;
@ -70,7 +69,6 @@ struct ApiTester {
attester_slashing: AttesterSlashing<E>,
proposer_slashing: ProposerSlashing,
voluntary_exit: SignedVoluntaryExit,
_server_shutdown: oneshot::Sender<()>,
network_rx: NetworkReceivers<E>,
local_enr: Enr,
external_peer_id: PeerId,
@ -234,11 +232,10 @@ impl ApiTester {
let ApiServer {
server,
listening_socket: _,
shutdown_tx,
network_rx,
local_enr,
external_peer_id,
} = create_api_server_on_port(chain.clone(), log, port).await;
} = create_api_server_on_port(chain.clone(), &harness.runtime, log, port).await;
harness.runtime.task_executor.spawn(server, "api_server");
@ -266,7 +263,6 @@ impl ApiTester {
attester_slashing,
proposer_slashing,
voluntary_exit,
_server_shutdown: shutdown_tx,
network_rx,
local_enr,
external_peer_id,
@ -320,11 +316,10 @@ impl ApiTester {
let ApiServer {
server,
listening_socket,
shutdown_tx,
network_rx,
local_enr,
external_peer_id,
} = create_api_server(chain.clone(), log).await;
} = create_api_server(chain.clone(), &harness.runtime, log).await;
harness.runtime.task_executor.spawn(server, "api_server");
@ -349,7 +344,6 @@ impl ApiTester {
attester_slashing,
proposer_slashing,
voluntary_exit,
_server_shutdown: shutdown_tx,
network_rx,
local_enr,
external_peer_id,


@ -7,9 +7,9 @@ use beacon_chain::{
};
use beacon_chain::{BeaconChainTypes, NotifyExecutionLayer};
use beacon_processor::{
work_reprocessing_queue::ReprocessQueueMessage, BeaconProcessorSend, DuplicateCache,
GossipAggregatePackage, GossipAttestationPackage, Work, WorkEvent as BeaconWorkEvent,
MAX_SCHEDULED_WORK_QUEUE_LEN, MAX_WORK_EVENT_QUEUE_LEN,
work_reprocessing_queue::ReprocessQueueMessage, BeaconProcessorChannels, BeaconProcessorSend,
DuplicateCache, GossipAggregatePackage, GossipAttestationPackage, Work,
WorkEvent as BeaconWorkEvent,
};
use environment::null_logger;
use lighthouse_network::{
@ -545,11 +545,15 @@ impl<E: EthSpec> NetworkBeaconProcessor<TestBeaconChainType<E>> {
pub fn null_for_testing(
network_globals: Arc<NetworkGlobals<E>>,
) -> (Self, mpsc::Receiver<BeaconWorkEvent<E>>) {
let (beacon_processor_send, beacon_processor_receive) =
mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN);
let BeaconProcessorChannels {
beacon_processor_tx,
beacon_processor_rx,
work_reprocessing_tx,
work_reprocessing_rx: _work_reprocessing_rx,
} = <_>::default();
let (network_tx, _network_rx) = mpsc::unbounded_channel();
let (sync_tx, _sync_rx) = mpsc::unbounded_channel();
let (reprocess_tx, _reprocess_rx) = mpsc::channel(MAX_SCHEDULED_WORK_QUEUE_LEN);
let log = null_logger().unwrap();
let harness: BeaconChainHarness<TestBeaconChainType<E>> =
BeaconChainHarness::builder(E::default())
@ -562,18 +566,18 @@ impl<E: EthSpec> NetworkBeaconProcessor<TestBeaconChainType<E>> {
let runtime = TestRuntime::default();
let network_beacon_processor = Self {
beacon_processor_send: BeaconProcessorSend(beacon_processor_send),
beacon_processor_send: beacon_processor_tx,
duplicate_cache: DuplicateCache::default(),
chain: harness.chain,
network_tx,
sync_tx,
reprocess_tx,
reprocess_tx: work_reprocessing_tx,
network_globals,
invalid_block_storage: InvalidBlockStorage::Disabled,
executor: runtime.task_executor.clone(),
log,
};
(network_beacon_processor, beacon_processor_receive)
(network_beacon_processor, beacon_processor_rx)
}
}


@ -11,7 +11,7 @@ use crate::{
use beacon_chain::test_utils::{
AttestationStrategy, BeaconChainHarness, BlockStrategy, EphemeralHarnessType,
};
use beacon_chain::{BeaconChain, ChainConfig};
use beacon_chain::BeaconChain;
use beacon_processor::{work_reprocessing_queue::*, *};
use lighthouse_network::{
discv5::enr::{CombinedKey, EnrBuilder},
@ -68,16 +68,21 @@ struct TestRig {
impl Drop for TestRig {
fn drop(&mut self) {
// Causes the beacon processor to shutdown.
self.beacon_processor_tx = BeaconProcessorSend(mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN).0);
let len = BeaconProcessorConfig::default().max_work_event_queue_len;
self.beacon_processor_tx = BeaconProcessorSend(mpsc::channel(len).0);
}
}
impl TestRig {
pub async fn new(chain_length: u64) -> Self {
Self::new_with_chain_config(chain_length, ChainConfig::default()).await
Self::new_parametric(
chain_length,
BeaconProcessorConfig::default().enable_backfill_rate_limiting,
)
.await
}
pub async fn new_with_chain_config(chain_length: u64, chain_config: ChainConfig) -> Self {
pub async fn new_parametric(chain_length: u64, enable_backfill_rate_limiting: bool) -> Self {
// This allows for testing voluntary exits without building out a massive chain.
let mut spec = E::default_spec();
spec.shard_committee_period = 2;
@ -86,7 +91,7 @@ impl TestRig {
.spec(spec)
.deterministic_keypairs(VALIDATOR_COUNT)
.fresh_ephemeral_store()
.chain_config(chain_config)
.chain_config(<_>::default())
.build();
harness.advance_slot();
@ -172,8 +177,15 @@ impl TestRig {
let log = harness.logger().clone();
let (beacon_processor_tx, beacon_processor_rx) = mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN);
let beacon_processor_tx = BeaconProcessorSend(beacon_processor_tx);
let mut beacon_processor_config = BeaconProcessorConfig::default();
beacon_processor_config.enable_backfill_rate_limiting = enable_backfill_rate_limiting;
let BeaconProcessorChannels {
beacon_processor_tx,
beacon_processor_rx,
work_reprocessing_tx,
work_reprocessing_rx,
} = BeaconProcessorChannels::new(&beacon_processor_config);
let (sync_tx, _sync_rx) = mpsc::unbounded_channel();
// Default metadata
@ -196,8 +208,6 @@ impl TestRig {
let executor = harness.runtime.task_executor.clone();
let (work_reprocessing_tx, work_reprocessing_rx) =
mpsc::channel(MAX_SCHEDULED_WORK_QUEUE_LEN);
let (work_journal_tx, work_journal_rx) = mpsc::channel(16_364);
let duplicate_cache = DuplicateCache::default();
@ -220,7 +230,7 @@ impl TestRig {
executor,
max_workers: cmp::max(1, num_cpus::get()),
current_workers: 0,
enable_backfill_rate_limiting: harness.chain.config.enable_backfill_rate_limiting,
config: beacon_processor_config,
log: log.clone(),
}
.spawn_manager(
@ -943,11 +953,8 @@ async fn test_backfill_sync_processing() {
/// Ensure that backfill batches get processed as fast as they can when rate-limiting is disabled.
#[tokio::test]
async fn test_backfill_sync_processing_rate_limiting_disabled() {
let chain_config = ChainConfig {
enable_backfill_rate_limiting: false,
..Default::default()
};
let mut rig = TestRig::new_with_chain_config(SMALL_CHAIN, chain_config).await;
let enable_backfill_rate_limiting = false;
let mut rig = TestRig::new_parametric(SMALL_CHAIN, enable_backfill_rate_limiting).await;
for _ in 0..3 {
rig.enqueue_backfill_batch();


@ -4,15 +4,13 @@ mod tests {
use crate::persisted_dht::load_dht;
use crate::{NetworkConfig, NetworkService};
use beacon_chain::test_utils::BeaconChainHarness;
use beacon_processor::{
BeaconProcessorSend, MAX_SCHEDULED_WORK_QUEUE_LEN, MAX_WORK_EVENT_QUEUE_LEN,
};
use beacon_processor::BeaconProcessorChannels;
use lighthouse_network::Enr;
use slog::{o, Drain, Level, Logger};
use sloggers::{null::NullLoggerBuilder, Build};
use std::str::FromStr;
use std::sync::Arc;
use tokio::{runtime::Runtime, sync::mpsc};
use tokio::runtime::Runtime;
use types::MinimalEthSpec;
fn get_logger(actual_log: bool) -> Logger {
@ -70,17 +68,20 @@ mod tests {
// Create a new network service which implicitly gets dropped at the
// end of the block.
let (beacon_processor_send, _beacon_processor_receive) =
mpsc::channel(MAX_WORK_EVENT_QUEUE_LEN);
let (beacon_processor_reprocess_tx, _beacon_processor_reprocess_rx) =
mpsc::channel(MAX_SCHEDULED_WORK_QUEUE_LEN);
let BeaconProcessorChannels {
beacon_processor_tx,
beacon_processor_rx: _beacon_processor_rx,
work_reprocessing_tx,
work_reprocessing_rx: _work_reprocessing_rx,
} = <_>::default();
let _network_service = NetworkService::start(
beacon_chain.clone(),
&config,
executor,
None,
BeaconProcessorSend(beacon_processor_send),
beacon_processor_reprocess_tx,
beacon_processor_tx,
work_reprocessing_tx,
)
.await
.unwrap();


@ -382,6 +382,17 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
stalled. This is useful for very small testnets. TESTING ONLY. DO NOT USE ON \
MAINNET.")
)
.arg(
Arg::with_name("http-enable-beacon-processor")
.long("http-enable-beacon-processor")
.value_name("BOOLEAN")
.help("The beacon processor is a scheduler which provides quality-of-service and \
DoS protection. When set to \"true\", HTTP API requests will be queued and scheduled \
alongside other tasks. When set to \"false\", HTTP API responses will be executed \
immediately.")
.takes_value(true)
.default_value("true")
)
/* Prometheus metrics HTTP server related arguments */
.arg(
Arg::with_name("metrics")
@ -1141,4 +1152,55 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.takes_value(true)
.possible_values(ProgressiveBalancesMode::VARIANTS)
)
.arg(
Arg::with_name("beacon-processor-max-workers")
.long("beacon-processor-max-workers")
.value_name("INTEGER")
.help("Specifies the maximum concurrent tasks for the task scheduler. Increasing \
this value may increase resource consumption. Reducing the value \
may result in decreased resource usage and diminished performance. The \
default value is the number of logical CPU cores on the host.")
.takes_value(true)
)
.arg(
Arg::with_name("beacon-processor-work-queue-len")
.long("beacon-processor-work-queue-len")
.value_name("INTEGER")
.help("Specifies the length of the inbound event queue. \
Higher values may prevent messages from being dropped while lower values \
may help protect the node from becoming overwhelmed.")
.default_value("16384")
.takes_value(true)
)
.arg(
Arg::with_name("beacon-processor-reprocess-queue-len")
.long("beacon-processor-reprocess-queue-len")
.value_name("INTEGER")
.help("Specifies the length of the queue for messages requiring delayed processing. \
Higher values may prevent messages from being dropped while lower values \
may help protect the node from becoming overwhelmed.")
.default_value("12288")
.takes_value(true)
)
.arg(
Arg::with_name("beacon-processor-attestation-batch-size")
.long("beacon-processor-attestation-batch-size")
.value_name("INTEGER")
.help("Specifies the number of gossip attestations in a signature verification batch. \
Higher values may reduce CPU usage in a healthy network whilst lower values may \
increase CPU usage in an unhealthy or hostile network.")
.default_value("64")
.takes_value(true)
)
.arg(
Arg::with_name("beacon-processor-aggregate-batch-size")
.long("beacon-processor-aggregate-batch-size")
.value_name("INTEGER")
.help("Specifies the number of gossip aggregate attestations in a signature \
verification batch. \
Higher values may reduce CPU usage in a healthy network while lower values may \
increase CPU usage in an unhealthy or hostile network.")
.default_value("64")
.takes_value(true)
)
}


@ -4,6 +4,7 @@ use beacon_chain::chain_config::{
};
use clap::ArgMatches;
use clap_utils::flags::DISABLE_MALLOC_TUNING_FLAG;
use clap_utils::parse_required;
use client::{ClientConfig, ClientGenesis};
use directory::{DEFAULT_BEACON_NODE_DIR, DEFAULT_NETWORK_DIR, DEFAULT_ROOT_DIR};
use environment::RuntimeContext;
@ -148,6 +149,9 @@ pub fn get_config<E: EthSpec>(
client_config.http_api.allow_sync_stalled = true;
}
client_config.http_api.enable_beacon_processor =
parse_required(cli_args, "http-enable-beacon-processor")?;
if let Some(cache_size) = clap_utils::parse_optional(cli_args, "shuffling-cache-size")? {
client_config.chain.shuffling_cache_size = cache_size;
}
@ -800,7 +804,7 @@ pub fn get_config<E: EthSpec>(
}
// Backfill sync rate-limiting
client_config.chain.enable_backfill_rate_limiting =
client_config.beacon_processor.enable_backfill_rate_limiting =
!cli_args.is_present("disable-backfill-rate-limiting");
if let Some(path) = clap_utils::parse_optional(cli_args, "invalid-gossip-verified-blocks-path")?
@ -814,6 +818,28 @@ pub fn get_config<E: EthSpec>(
client_config.chain.progressive_balances_mode = progressive_balances_mode;
}
if let Some(max_workers) = clap_utils::parse_optional(cli_args, "beacon-processor-max-workers")?
{
client_config.beacon_processor.max_workers = max_workers;
}
if client_config.beacon_processor.max_workers == 0 {
return Err("--beacon-processor-max-workers must be a non-zero value".to_string());
}
client_config.beacon_processor.max_work_event_queue_len =
clap_utils::parse_required(cli_args, "beacon-processor-work-queue-len")?;
client_config.beacon_processor.max_scheduled_work_queue_len =
clap_utils::parse_required(cli_args, "beacon-processor-reprocess-queue-len")?;
client_config
.beacon_processor
.max_gossip_attestation_batch_size =
clap_utils::parse_required(cli_args, "beacon-processor-attestation-batch-size")?;
client_config
.beacon_processor
.max_gossip_aggregate_batch_size =
clap_utils::parse_required(cli_args, "beacon-processor-aggregate-batch-size")?;
Ok(client_config)
}


@ -83,6 +83,7 @@ impl<E: EthSpec> ProductionBeaconNode<E> {
let builder = ClientBuilder::new(context.eth_spec_instance.clone())
.runtime_context(context)
.chain_spec(spec)
.beacon_processor(client_config.beacon_processor.clone())
.http_api_config(client_config.http_api.clone())
.disk_store(&db_path, &freezer_db_path, store_config, log.clone())?;


@ -66,6 +66,7 @@ lighthouse_network = { path = "../beacon_node/lighthouse_network" }
sensitive_url = { path = "../common/sensitive_url" }
eth1 = { path = "../beacon_node/eth1" }
eth2 = { path = "../common/eth2" }
beacon_processor = { path = "../beacon_node/beacon_processor" }
[[test]]
name = "lighthouse_tests"


@ -5,6 +5,7 @@ use beacon_node::beacon_chain::chain_config::{
DisallowedReOrgOffsets, DEFAULT_RE_ORG_CUTOFF_DENOMINATOR,
DEFAULT_RE_ORG_MAX_EPOCHS_SINCE_FINALIZATION, DEFAULT_RE_ORG_THRESHOLD,
};
use beacon_processor::BeaconProcessorConfig;
use eth1::Eth1Endpoint;
use lighthouse_network::PeerId;
use std::fs::File;
@ -1118,13 +1119,13 @@ fn disable_backfill_rate_limiting_flag() {
CommandLineTest::new()
.flag("disable-backfill-rate-limiting", None)
.run_with_zero_port()
.with_config(|config| assert!(!config.chain.enable_backfill_rate_limiting));
.with_config(|config| assert!(!config.beacon_processor.enable_backfill_rate_limiting));
}
#[test]
fn default_backfill_rate_limiting_flag() {
CommandLineTest::new()
.run_with_zero_port()
.with_config(|config| assert!(config.chain.enable_backfill_rate_limiting));
.with_config(|config| assert!(config.beacon_processor.enable_backfill_rate_limiting));
}
#[test]
fn default_boot_nodes() {
@ -1463,6 +1464,22 @@ fn http_allow_sync_stalled_flag() {
.with_config(|config| assert_eq!(config.http_api.allow_sync_stalled, true));
}
#[test]
fn http_enable_beacon_processor() {
CommandLineTest::new()
.run_with_zero_port()
.with_config(|config| assert_eq!(config.http_api.enable_beacon_processor, true));
CommandLineTest::new()
.flag("http-enable-beacon-processor", Some("true"))
.run_with_zero_port()
.with_config(|config| assert_eq!(config.http_api.enable_beacon_processor, true));
CommandLineTest::new()
.flag("http-enable-beacon-processor", Some("false"))
.run_with_zero_port()
.with_config(|config| assert_eq!(config.http_api.enable_beacon_processor, false));
}
#[test]
fn http_tls_flags() {
let dir = TempDir::new().expect("Unable to create temporary directory");
CommandLineTest::new()
@ -2295,3 +2312,40 @@ fn progressive_balances_fast() {
)
});
}
#[test]
fn beacon_processor() {
CommandLineTest::new()
.run_with_zero_port()
.with_config(|config| assert_eq!(config.beacon_processor, <_>::default()));
CommandLineTest::new()
.flag("beacon-processor-max-workers", Some("1"))
.flag("beacon-processor-work-queue-len", Some("2"))
.flag("beacon-processor-reprocess-queue-len", Some("3"))
.flag("beacon-processor-attestation-batch-size", Some("4"))
.flag("beacon-processor-aggregate-batch-size", Some("5"))
.flag("disable-backfill-rate-limiting", None)
.run_with_zero_port()
.with_config(|config| {
assert_eq!(
config.beacon_processor,
BeaconProcessorConfig {
max_workers: 1,
max_work_event_queue_len: 2,
max_scheduled_work_queue_len: 3,
max_gossip_attestation_batch_size: 4,
max_gossip_aggregate_batch_size: 5,
enable_backfill_rate_limiting: false
}
)
});
}
#[test]
#[should_panic]
fn beacon_processor_zero_workers() {
CommandLineTest::new()
.flag("beacon-processor-max-workers", Some("0"))
.run_with_zero_port();
}


@ -24,7 +24,7 @@ pub use execution_layer::test_utils::{
pub use validator_client::Config as ValidatorConfig;
/// The global timeout for HTTP requests to the beacon node.
const HTTP_TIMEOUT: Duration = Duration::from_secs(4);
const HTTP_TIMEOUT: Duration = Duration::from_secs(8);
/// The timeout for a beacon node to start up.
const STARTUP_TIMEOUT: Duration = Duration::from_secs(60);
@ -115,6 +115,11 @@ pub fn testing_client_config() -> ClientConfig {
genesis_time: now,
};
// Specify a constant count of beacon processor workers. Having this number
// too low can cause annoying HTTP timeouts, especially on Github runners
// with 2 logical CPUs.
client_config.beacon_processor.max_workers = 4;
client_config
}


@ -43,3 +43,4 @@ beacon_chain = { path = "../beacon_node/beacon_chain" }
network = { path = "../beacon_node/network" }
testcontainers = "0.14.0"
unused_port = { path = "../common/unused_port" }
task_executor = { path = "../common/task_executor" }


@ -85,7 +85,6 @@ struct TesterBuilder {
pub harness: BeaconChainHarness<EphemeralHarnessType<E>>,
pub config: Config,
_bn_network_rx: NetworkReceivers<E>,
_bn_api_shutdown_tx: oneshot::Sender<()>,
}
impl TesterBuilder {
@ -102,10 +101,14 @@ impl TesterBuilder {
let ApiServer {
server,
listening_socket: bn_api_listening_socket,
shutdown_tx: _bn_api_shutdown_tx,
network_rx: _bn_network_rx,
..
} = create_api_server(harness.chain.clone(), harness.logger().clone()).await;
} = create_api_server(
harness.chain.clone(),
&harness.runtime,
harness.logger().clone(),
)
.await;
tokio::spawn(server);
/*
@ -139,7 +142,6 @@ impl TesterBuilder {
harness,
config,
_bn_network_rx,
_bn_api_shutdown_tx,
}
}
pub async fn build(self, pool: PgPool) -> Tester {
@ -186,7 +188,6 @@ impl TesterBuilder {
config: self.config,
updater,
_bn_network_rx: self._bn_network_rx,
_bn_api_shutdown_tx: self._bn_api_shutdown_tx,
_watch_shutdown_tx,
}
}
@ -204,7 +205,6 @@ struct Tester {
pub config: Config,
pub updater: UpdateHandler<E>,
_bn_network_rx: NetworkReceivers<E>,
_bn_api_shutdown_tx: oneshot::Sender<()>,
_watch_shutdown_tx: oneshot::Sender<()>,
}