Stable futures (#879)

* Port eth1 lib to use stable futures

* Port eth1_test_rig to stable futures

* Port eth1 tests to stable futures

* Port genesis service to stable futures

* Port genesis tests to stable futures

* Port beacon_chain to stable futures

* Port lcli to stable futures

* Fix eth1_test_rig (#1014)

* Fix lcli

* Port timer to stable futures

* Fix timer

* Port websocket_server to stable futures

* Port notifier to stable futures

* Add TODOs

* Update hashmap hashset to stable futures

* Adds panic test to hashset delay

* Port remote_beacon_node to stable futures

* Fix lcli merge conflicts

* Non-RPC code compiles

* protocol.rs compiles

* Port websockets, timer and notifier to stable futures (#1035)

* Fix lcli

* Port timer to stable futures

* Fix timer

* Port websocket_server to stable futures

* Port notifier to stable futures

* Add TODOs

* Port remote_beacon_node to stable futures

* Partial eth2-libp2p stable future upgrade

* Finished first round of fighting RPC types

* Further progress towards porting eth2-libp2p adds caching to discovery

* Update behaviour

* RPC handler to stable futures

* Update RPC to master libp2p

* Network service additions

* Fix the fallback transport construction (#1102)

* Correct warning

* Remove hashmap delay

* Compiling version of eth2-libp2p

* Update all crates versions

* Fix conversion function and add tests (#1113)

* Port validator_client to stable futures (#1114)

* Add PH & MS slot clock changes

* Account for genesis time

* Add progress on duties refactor

* Add simple is_aggregator bool to val subscription

* Start work on attestation_verification.rs

* Add progress on ObservedAttestations

* Progress with ObservedAttestations

* Fix tests

* Add observed attestations to the beacon chain

* Add attestation observation to processing code

* Add progress on attestation verification

* Add first draft of ObservedAttesters

* Add more tests

* Add observed attesters to beacon chain

* Add observers to attestation processing

* Add more attestation verification

* Create ObservedAggregators map

* Remove commented-out code

* Add observed aggregators into chain

* Add progress

* Finish adding features to attestation verification

* Ensure beacon chain compiles

* Link attn verification into chain

* Integrate new attn verification in chain

* Remove old attestation processing code

* Start trying to fix beacon_chain tests

* Split adding into pools into two functions

* Add aggregation to harness

* Get test harness working again

* Adjust the number of aggregators for test harness

* Fix edge-case in harness

* Integrate new attn processing in network

* Fix compile bug in validator_client

* Update validator API endpoints

* Fix aggregation in test harness

* Fix enum thing

* Fix attestation observation bug

* Patch failing API tests

* Start adding comments to attestation verification

* Remove unused attestation field

* Unify "is block known" logic

* Update comments

* Suppress fork choice errors for network processing

* Add todos

* Tidy

* Add gossip attn tests

* Disallow test harness from producing old attns

* Comment out in-progress tests

* Partially address pruning tests

* Fix failing store test

* Add aggregate tests

* Add comments about which spec conditions we check

* Don't re-aggregate

* Split apart test harness attn production

* Fix compile error in network

* Make progress on commented-out test

* Fix skipping attestation test

* Add fork choice verification tests

* Tidy attn tests, remove dead code

* Remove some accidentally added code

* Fix clippy lint

* Rename test file

* Add block tests, add cheap block proposer check

* Rename block testing file

* Add observed_block_producers

* Tidy

* Switch around block signature verification

* Finish block testing

* Remove gossip from signature tests

* First pass of self review

* Fix deviation in spec

* Update test spec tags

* Start moving over to hashset

* Finish moving observed attesters to hashmap

* Move aggregation pool over to hashmap

* Make fc attn borrow again

* Fix rest_api compile error

* Fix missing comments

* Fix monster test

* Uncomment increasing slots test

* Address remaining comments

* Remove unsafe, use cfg test

* Remove cfg test flag

* Fix dodgy comment

* Revert "Update hashmap hashset to stable futures"

This reverts commit d432378a3cc5cd67fc29c0b15b96b886c1323554.

* Revert "Adds panic test to hashset delay"

This reverts commit 281502396fc5b90d9c421a309c2c056982c9525b.

* Ported attestation_service

* Ported duties_service

* Ported fork_service

* More ports

* Port block_service

* Minor fixes

* VC compiles

* Update TODOs

* Borrow self where possible

* Ignore aggregates that are already known.

* Unify aggregator modulo logic

* Fix typo in logs

* Refactor validator subscription logic

* Avoid reproducing selection proof

* Skip HTTP call if no subscriptions

* Rename DutyAndState -> DutyAndProof

* Tidy logs

* Print root as dbg

* Fix compile errors in tests

* Fix compile error in test

* Re-Fix attestation and duties service

* Minor fixes

Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Network crate update to stable futures

* Port account_manager to stable futures (#1121)

* Port account_manager to stable futures

* Run async fns in tokio environment

* Port rest_api crate to stable futures (#1118)

* Port rest_api lib to stable futures

* Reduce tokio features

* Update notifier to stable futures

* Builder update

* Further updates

* Convert self referential async functions

* stable futures fixes (#1124)

* Fix eth1 update functions

* Fix genesis and client

* Fix beacon node lib

* Return appropriate runtimes from environment

* Fix test rig

* Refactor eth1 service update

* Upgrade simulator to stable futures

* Lighthouse compiles on stable futures

* Remove println debugging statement

* Update libp2p service, start rpc test upgrade

* Update network crate for new libp2p

* Update tokio::codec to futures_codec (#1128)

* Further work towards RPC corrections

* Correct http timeout and network service select

* Use tokio runtime for libp2p

* Revert "Update tokio::codec to futures_codec (#1128)"

This reverts commit e57aea924acf5cbabdcea18895ac07e38a425ed7.

* Upgrade RPC libp2p tests

* Upgrade secio fallback test

* Upgrade gossipsub examples

* Clean up RPC protocol

* Test fixes (#1133)

* Correct websocket timeout and run on os thread

* Fix network test

* Clean up PR

* Correct tokio tcp, move attestation service tests

* Upgrade attestation service tests

* Correct network test

* Correct genesis test

* Test corrections

* Log info when block is received

* Modify logs and update attester service events

* Stable futures: fixes to vc, eth1 and account manager (#1142)

* Add local testnet scripts

* Remove whiteblock script

* Rename local testnet script

* Move spawns onto handle

* Fix VC panic

* Initial fix to block production issue

* Tidy block producer fix

* Tidy further

* Add local testnet clean script

* Run cargo fmt

* Tidy duties service

* Tidy fork service

* Tidy ForkService

* Tidy AttestationService

* Tidy notifier

* Ensure await is not suppressed in eth1

* Ensure await is not suppressed in account_manager

* Use .ok() instead of .unwrap_or(())

* RPC decoding test for proto

* Update discv5 and eth2-libp2p deps

* Fix lcli double runtime issue (#1144)

* Handle stream termination and dialing peer errors

* Correct peer_info variant types

* Remove unnecessary warnings

* Handle subnet unsubscription removal and improve logging

* Add logs around ping

* Upgrade discv5 and improve logging

* Handle peer connection status for multiple connections

* Improve network service logging

* Improve logging around peer manager

* Upgrade swarm poll, centralise peer management

* Identify clients on error

* Fix `remove_peer` in sync (#1150)

* remove_peer removes from all chains

* Remove logs

* Fix early return from loop

* Improved logging, fix panic

* Partially correct tests

* Stable futures: Vc sync (#1149)

* Improve syncing heuristic

* Add comments

* Use safer method for tolerance

* Fix tests

* Stable futures: Fix VC bug, update agg pool, add more metrics (#1151)

* Expose epoch processing summary

* Expose participation metrics to prometheus

* Switch to f64

* Reduce precision

* Change precision

* Expose observed attesters metrics

* Add metrics for agg/unagg attn counts

* Add metrics for gossip rx

* Add metrics for gossip tx

* Adds ignored attns to prom

* Add attestation timing

* Add timer for aggregation pool sig agg

* Add write lock timer for agg pool

* Add more metrics to agg pool

* Change map lock code

* Add extra metric to agg pool

* Change lock handling in agg pool

* Change .write() to .read()

* Add another agg pool timer

* Fix for is_aggregator

* Fix pruning bug

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Commit b6408805a2 (parent 21901b1615), authored by Age Manning on 2020-05-17 21:16:48 +10:00 and committed via GitHub.
165 changed files with 7924 additions and 7733 deletions
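
The diffs below repeat one core pattern: futures 0.1 combinator chains (`and_then`, `map_err`, `loop_fn`, boxed trait objects) become `async fn`s on futures 0.3, and calls into crates that still speak futures 0.1 (notably `web3`) are bridged via the `compat` feature. A minimal, self-contained sketch of that pattern (illustrative only, not Lighthouse code; `fetch_value` is a hypothetical stand-in for a 0.1-returning library call):

// Assumed deps: futures = { version = "0.3", features = ["compat"] },
// futures01 = "0.1", tokio = { version = "0.2", features = ["macros", "rt-core"] }.
use futures::compat::Future01CompatExt;

// Stand-in for a library (e.g. web3) call that returns a futures 0.1 future.
fn fetch_value() -> impl futures01::Future<Item = u64, Error = String> {
    futures01::future::ok(41)
}

// The old style would chain `.map_err(..).and_then(..)` combinators on the
// 0.1 future; the new style awaits it through the compat bridge and uses `?`.
async fn new_style() -> Result<u64, String> {
    let v = fetch_value().compat().await?;
    Ok(v + 1)
}

#[tokio::main]
async fn main() {
    assert_eq!(new_style().await, Ok(42));
}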

Cargo.lock (generated): 2165 changed lines; diff suppressed because it is too large.

@@ -17,7 +17,7 @@ members = [
     "eth2/utils/eth2_wallet",
     "eth2/utils/logging",
     "eth2/utils/eth2_hashing",
-    "eth2/utils/hashmap_delay",
+    "eth2/utils/hashset_delay",
     "eth2/utils/lighthouse_metrics",
     "eth2/utils/merkle_proof",
     "eth2/utils/int_to_bytes",

@@ -5,26 +5,27 @@ authors = ["Paul Hauner <paul@paulhauner.com>", "Luke Anderson <luke@sigmaprime.
 edition = "2018"

 [dev-dependencies]
-tempdir = "0.3"
+tempdir = "0.3.7"

 [dependencies]
 bls = { path = "../eth2/utils/bls" }
 clap = "2.33.0"
 slog = "2.5.2"
-slog-term = "2.4.2"
-slog-async = "2.3.0"
+slog-term = "2.5.0"
+slog-async = "2.5.0"
 types = { path = "../eth2/types" }
 dirs = "2.0.2"
 environment = { path = "../lighthouse/environment" }
 deposit_contract = { path = "../eth2/utils/deposit_contract" }
 libc = "0.2.65"
-eth2_ssz = { path = "../eth2/utils/ssz" }
-eth2_ssz_derive = { path = "../eth2/utils/ssz_derive" }
-hex = "0.3"
+eth2_ssz = "0.1.2"
+eth2_ssz_derive = "0.1.0"
+hex = "0.4.2"
 validator_client = { path = "../validator_client" }
-rayon = "1.2.0"
+rayon = "1.3.0"
 eth2_testnet_config = { path = "../eth2/utils/eth2_testnet_config" }
 web3 = "0.10.0"
-futures = "0.1.25"
+futures = { version = "0.3.5", features = ["compat"] }
 clap_utils = { path = "../eth2/utils/clap_utils" }
-tokio = "0.1.22"
+# reduce feature set
+tokio = {version = "0.2.20", features = ["full"]}

@@ -1,15 +1,11 @@
 use clap::{App, Arg, ArgMatches};
 use clap_utils;
 use environment::Environment;
-use futures::{
-    future::{self, loop_fn, Loop},
-    Future,
-};
+use futures::compat::Future01CompatExt;
 use slog::{info, Logger};
 use std::fs;
 use std::path::PathBuf;
-use std::time::{Duration, Instant};
-use tokio::timer::Delay;
+use tokio::time::{delay_until, Duration, Instant};
 use types::EthSpec;
 use validator_client::validator_directory::ValidatorDirectoryBuilder;
 use web3::{

@@ -80,7 +76,10 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
     )
 }

-pub fn cli_run<T: EthSpec>(matches: &ArgMatches, mut env: Environment<T>) -> Result<(), String> {
+pub fn cli_run<T: EthSpec>(
+    matches: &ArgMatches<'_>,
+    mut env: Environment<T>,
+) -> Result<(), String> {
     let spec = env.core_context().eth2_config.spec;
     let log = env.core_context().log;

@@ -138,12 +137,13 @@ pub fn cli_run<T: EthSpec>(matches: &ArgMatches, mut env: Environment<T>) -> Res
     let tx_hash_log = log.clone();

     env.runtime()
-        .block_on(
+        .block_on(async {
             ValidatorDirectoryBuilder::default()
                 .spec(spec.clone())
                 .custom_deposit_amount(deposit_gwei)
                 .thread_random_keypairs()
                 .submit_eth1_deposit(web3.clone(), from_address, deposit_contract)
+                .await
                 .map(move |(builder, tx_hash)| {
                     info!(
                         tx_hash_log,

@@ -152,8 +152,8 @@ pub fn cli_run<T: EthSpec>(matches: &ArgMatches, mut env: Environment<T>) -> Res
                         "index" => format!("{}/{}", i + 1, n),
                     );
                     builder
-                }),
-        )?
+                })
+        })?
         .create_directory(validator_dir.clone())?
         .write_keypair_files()?
         .write_eth1_data_file()?

@@ -183,73 +183,59 @@ fn existing_validator_count(validator_dir: &PathBuf) -> Result<usize, String> {
 }

 /// Run a poll on the `eth_syncing` endpoint, blocking until the node is synced.
-fn poll_until_synced<T>(web3: Web3<T>, log: Logger) -> impl Future<Item = (), Error = String> + Send
+async fn poll_until_synced<T>(web3: Web3<T>, log: Logger) -> Result<(), String>
 where
     T: Transport + Send + 'static,
     <T as Transport>::Out: Send,
 {
-    loop_fn((web3.clone(), log.clone()), move |(web3, log)| {
-        web3.clone()
-            .eth()
-            .syncing()
-            .map_err(|e| format!("Unable to read syncing state from eth1 node: {:?}", e))
-            .and_then::<_, Box<dyn Future<Item = _, Error = _> + Send>>(move |sync_state| {
-                match sync_state {
-                    SyncState::Syncing(SyncInfo {
-                        current_block,
-                        highest_block,
-                        ..
-                    }) => {
-                        info!(
-                            log,
-                            "Waiting for eth1 node to sync";
-                            "est_highest_block" => format!("{}", highest_block),
-                            "current_block" => format!("{}", current_block),
-                        );
-
-                        Box::new(
-                            Delay::new(Instant::now() + SYNCING_STATE_RETRY_DELAY)
-                                .map_err(|e| format!("Failed to trigger delay: {:?}", e))
-                                .and_then(|_| future::ok(Loop::Continue((web3, log)))),
-                        )
-                    }
-                    SyncState::NotSyncing => Box::new(
-                        web3.clone()
-                            .eth()
-                            .block_number()
-                            .map_err(|e| {
-                                format!("Unable to read block number from eth1 node: {:?}", e)
-                            })
-                            .and_then::<_, Box<dyn Future<Item = _, Error = _> + Send>>(
-                                |block_number| {
-                                    if block_number > 0.into() {
-                                        info!(
-                                            log,
-                                            "Eth1 node is synced";
-                                            "head_block" => format!("{}", block_number),
-                                        );
-                                        Box::new(future::ok(Loop::Break((web3, log))))
-                                    } else {
-                                        Box::new(
-                                            Delay::new(Instant::now() + SYNCING_STATE_RETRY_DELAY)
-                                                .map_err(|e| {
-                                                    format!("Failed to trigger delay: {:?}", e)
-                                                })
-                                                .and_then(|_| {
-                                                    info!(
-                                                        log,
-                                                        "Waiting for eth1 node to sync";
-                                                        "current_block" => 0,
-                                                    );
-                                                    future::ok(Loop::Continue((web3, log)))
-                                                }),
-                                        )
-                                    }
-                                },
-                            ),
-                    ),
-                }
-            })
-    })
-    .map(|_| ())
+    loop {
+        let sync_state = web3
+            .clone()
+            .eth()
+            .syncing()
+            .compat()
+            .await
+            .map_err(|e| format!("Unable to read syncing state from eth1 node: {:?}", e))?;
+        match sync_state {
+            SyncState::Syncing(SyncInfo {
+                current_block,
+                highest_block,
+                ..
+            }) => {
+                info!(
+                    log,
+                    "Waiting for eth1 node to sync";
+                    "est_highest_block" => format!("{}", highest_block),
+                    "current_block" => format!("{}", current_block),
+                );
+                delay_until(Instant::now() + SYNCING_STATE_RETRY_DELAY).await;
+            }
+            SyncState::NotSyncing => {
+                let block_number = web3
+                    .clone()
+                    .eth()
+                    .block_number()
+                    .compat()
+                    .await
+                    .map_err(|e| format!("Unable to read block number from eth1 node: {:?}", e))?;
+                if block_number > 0.into() {
+                    info!(
+                        log,
+                        "Eth1 node is synced";
+                        "head_block" => format!("{}", block_number),
+                    );
+                    break;
+                } else {
+                    delay_until(Instant::now() + SYNCING_STATE_RETRY_DELAY).await;
+                    info!(
+                        log,
+                        "Waiting for eth1 node to sync";
+                        "current_block" => 0,
+                    );
+                }
+            }
+        }
+    }
+    Ok(())
 }
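
The `poll_until_synced` rewrite above also shows the control-flow half of the migration: futures 0.1 expressed loops with `loop_fn` and `Loop::Continue`/`Loop::Break`, while async/await uses a native `loop` with `break`/`continue` plus `tokio::time::delay_until` for the retry delay. Reduced to a toy sketch (the `is_done` predicate is a hypothetical stand-in for the eth1 sync check):

// Assumes tokio = { version = "0.2", features = ["time"] }.
use std::time::Duration;
use tokio::time::{delay_until, Instant};

async fn poll_until(mut is_done: impl FnMut() -> bool) {
    loop {
        if is_done() {
            break; // the old code returned `Loop::Break(..)` here
        }
        // Replaces `Delay::new(..).and_then(|_| future::ok(Loop::Continue(..)))`.
        // Note the old `map_err` for the delay disappears: tokio 0.2 delays
        // cannot fail.
        delay_until(Instant::now() + Duration::from_secs(1)).await;
    }
}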

@@ -5,7 +5,8 @@ use clap::ArgMatches;
 use deposit_contract::DEPOSIT_GAS;
 use environment::{Environment, RuntimeContext};
 use eth2_testnet_config::Eth2TestnetConfig;
-use futures::{future, Future, IntoFuture, Stream};
+use futures::compat::Future01CompatExt;
+use futures::{FutureExt, StreamExt};
 use rayon::prelude::*;
 use slog::{error, info, Logger};
 use std::fs;

@@ -23,7 +24,7 @@ use web3::{
 pub use cli::cli_app;

 /// Run the account manager, returning an error if the operation did not succeed.
-pub fn run<T: EthSpec>(matches: &ArgMatches, mut env: Environment<T>) -> Result<(), String> {
+pub fn run<T: EthSpec>(matches: &ArgMatches<'_>, mut env: Environment<T>) -> Result<(), String> {
     let context = env.core_context();
     let log = context.log.clone();

@@ -292,7 +293,7 @@ fn make_validators(
 ///
 /// Returns success as soon as the eth1 endpoint accepts the transaction (i.e., does not wait for
 /// transaction success/revert).
-fn deposit_validators<E: EthSpec>(
+async fn deposit_validators<E: EthSpec>(
     context: RuntimeContext<E>,
     eth1_endpoint: String,
     deposit_contract: Address,

@@ -300,156 +301,154 @@
     account_index: usize,
     deposit_value: u64,
     password: Option<String>,
-) -> impl Future<Item = (), Error = ()> {
+) -> Result<(), ()> {
     let log_1 = context.log.clone();
     let log_2 = context.log.clone();

-    Http::new(&eth1_endpoint)
-        .map_err(move |e| {
-            error!(
-                log_1,
-                "Failed to start web3 HTTP transport";
-                "error" => format!("{:?}", e)
-            )
-        })
-        .into_future()
-        /*
-         * Loop through the validator directories and submit the deposits.
-         */
-        .and_then(move |(event_loop, transport)| {
-            let web3 = Web3::new(transport);
-
-            futures::stream::iter_ok(validators)
-                .for_each(move |validator| {
-                    let web3 = web3.clone();
-                    let log = log_2.clone();
-                    let password = password.clone();
-
-                    deposit_validator(
-                        web3,
-                        deposit_contract,
-                        &validator,
-                        deposit_value,
-                        account_index,
-                        password,
-                        log,
-                    )
-                })
-                .map(|_| event_loop)
-        })
-        // Web3 gives errors if the event loop is dropped whilst performing requests.
-        .map(drop)
+    let (event_loop, transport) = Http::new(&eth1_endpoint).map_err(move |e| {
+        error!(
+            log_1,
+            "Failed to start web3 HTTP transport";
+            "error" => format!("{:?}", e)
+        )
+    })?;
+
+    /*
+     * Loop through the validator directories and submit the deposits.
+     */
+    let web3 = Web3::new(transport);
+    futures::stream::iter(validators)
+        .for_each(|validator| async {
+            let web3 = web3.clone();
+            let log = log_2.clone();
+            let password = password.clone();
+            let _ = deposit_validator(
+                web3,
+                deposit_contract,
+                validator,
+                deposit_value,
+                account_index,
+                password,
+                log,
+            )
+            .await;
+        })
+        .map(|_| event_loop)
+        // // Web3 gives errors if the event loop is dropped whilst performing requests.
+        .map(drop)
+        .await;
+
+    Ok(())
 }

 /// For the given `ValidatorDirectory`, submit a deposit transaction to the `web3` node.
 ///
 /// Returns success as soon as the eth1 endpoint accepts the transaction (i.e., does not wait for
 /// transaction success/revert).
-fn deposit_validator(
+async fn deposit_validator(
     web3: Web3<Http>,
     deposit_contract: Address,
-    validator: &ValidatorDirectory,
+    validator: ValidatorDirectory,
     deposit_amount: u64,
     account_index: usize,
     password_opt: Option<String>,
     log: Logger,
-) -> impl Future<Item = (), Error = ()> {
-    validator
-        .voting_keypair
-        .clone()
-        .ok_or_else(|| error!(log, "Validator does not have voting keypair"))
-        .and_then(|voting_keypair| {
-            validator
-                .deposit_data
-                .clone()
-                .ok_or_else(|| error!(log, "Validator does not have deposit data"))
-                .map(|deposit_data| (voting_keypair, deposit_data))
-        })
-        .into_future()
-        .and_then(move |(voting_keypair, deposit_data)| {
-            let pubkey_1 = voting_keypair.pk.clone();
-            let pubkey_2 = voting_keypair.pk;
-
-            let web3_1 = web3.clone();
-            let web3_2 = web3.clone();
-
-            let log_1 = log.clone();
-            let log_2 = log.clone();
-
-            web3.eth()
-                .accounts()
-                .map_err(|e| format!("Failed to get accounts: {:?}", e))
-                .and_then(move |accounts| {
-                    accounts
-                        .get(account_index)
-                        .cloned()
-                        .ok_or_else(|| "Insufficient accounts for deposit".to_string())
-                })
-                /*
-                 * If a password was supplied, unlock the account.
-                 */
-                .and_then(move |from_address| {
-                    let future: Box<dyn Future<Item = Address, Error = String> + Send> =
-                        if let Some(password) = password_opt {
-                            // Unlock for only a single transaction.
-                            let duration = None;
-
-                            let future = web3_1
-                                .personal()
-                                .unlock_account(from_address, &password, duration)
-                                .then(move |result| match result {
-                                    Ok(true) => Ok(from_address),
-                                    Ok(false) => {
-                                        Err("Eth1 node refused to unlock account. Check password."
-                                            .to_string())
-                                    }
-                                    Err(e) => Err(format!("Eth1 unlock request failed: {:?}", e)),
-                                });
-
-                            Box::new(future)
-                        } else {
-                            Box::new(future::ok(from_address))
-                        };
-
-                    future
-                })
-                /*
-                 * Submit the deposit transaction.
-                 */
-                .and_then(move |from| {
-                    let tx_request = TransactionRequest {
-                        from,
-                        to: Some(deposit_contract),
-                        gas: Some(U256::from(DEPOSIT_GAS)),
-                        gas_price: None,
-                        value: Some(from_gwei(deposit_amount)),
-                        data: Some(deposit_data.into()),
-                        nonce: None,
-                        condition: None,
-                    };
-
-                    web3_2
-                        .eth()
-                        .send_transaction(tx_request)
-                        .map_err(|e| format!("Failed to call deposit fn: {:?}", e))
-                })
-                .map(move |tx| {
-                    info!(
-                        log_1,
-                        "Validator deposit successful";
-                        "eth1_tx_hash" => format!("{:?}", tx),
-                        "validator_voting_pubkey" => format!("{:?}", pubkey_1)
-                    )
-                })
-                .map_err(move |e| {
-                    error!(
-                        log_2,
-                        "Validator deposit_failed";
-                        "error" => e,
-                        "validator_voting_pubkey" => format!("{:?}", pubkey_2)
-                    )
-                })
-        })
+) -> Result<(), ()> {
+    let voting_keypair = validator
+        .voting_keypair
+        .clone()
+        .ok_or_else(|| error!(log, "Validator does not have voting keypair"))?;
+
+    let deposit_data = validator
+        .deposit_data
+        .clone()
+        .ok_or_else(|| error!(log, "Validator does not have deposit data"))?;
+
+    let pubkey_1 = voting_keypair.pk.clone();
+    let pubkey_2 = voting_keypair.pk;
+
+    let log_1 = log.clone();
+    let log_2 = log.clone();
+
+    // TODO: creating a future to extract the Error type
+    // check if there's a better way
+    let future = async move {
+        let accounts = web3
+            .eth()
+            .accounts()
+            .compat()
+            .await
+            .map_err(|e| format!("Failed to get accounts: {:?}", e))?;
+
+        let from_address = accounts
+            .get(account_index)
+            .cloned()
+            .ok_or_else(|| "Insufficient accounts for deposit".to_string())?;
+
+        /*
+         * If a password was supplied, unlock the account.
+         */
+        let from = if let Some(password) = password_opt {
+            // Unlock for only a single transaction.
+            let duration = None;
+
+            let result = web3
+                .personal()
+                .unlock_account(from_address, &password, duration)
+                .compat()
+                .await;
+            match result {
+                Ok(true) => from_address,
+                Ok(false) => {
+                    return Err::<(), String>(
+                        "Eth1 node refused to unlock account. Check password.".to_string(),
+                    )
+                }
+                Err(e) => return Err::<(), String>(format!("Eth1 unlock request failed: {:?}", e)),
+            }
+        } else {
+            from_address
+        };
+
+        /*
+         * Submit the deposit transaction.
+         */
+        let tx_request = TransactionRequest {
+            from,
+            to: Some(deposit_contract),
+            gas: Some(U256::from(DEPOSIT_GAS)),
+            gas_price: None,
+            value: Some(from_gwei(deposit_amount)),
+            data: Some(deposit_data.into()),
+            nonce: None,
+            condition: None,
+        };
+
+        let tx = web3
+            .eth()
+            .send_transaction(tx_request)
+            .compat()
+            .await
+            .map_err(|e| format!("Failed to call deposit fn: {:?}", e))?;
+
+        info!(
+            log_1,
+            "Validator deposit successful";
+            "eth1_tx_hash" => format!("{:?}", tx),
+            "validator_voting_pubkey" => format!("{:?}", pubkey_1)
+        );
+
+        Ok(())
+    };
+
+    future.await.map_err(move |e| {
+        error!(
+            log_2,
+            "Validator deposit_failed";
+            "error" => e,
+            "validator_voting_pubkey" => format!("{:?}", pubkey_2)
+        );
+    })?;
+
+    Ok(())
 }

 /// Converts gwei to wei.
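
One wrinkle in `deposit_validator` above is worth calling out: the function's public error type is `()` (errors are logged, not returned), but `?` needs a concrete error type to work with. The commit solves this by doing the fallible work inside an inner `async` block with `Result<_, String>`, then awaiting it once and logging in a single `map_err`. A stripped-down sketch of that shape (hypothetical names):

async fn checked_work(input: u64) -> Result<(), ()> {
    // The inner block gives `?` a concrete `Result<_, String>` to operate on.
    let future = async move {
        let doubled = input.checked_mul(2).ok_or_else(|| "overflow".to_string())?;
        if doubled == 0 {
            return Err("zero input".to_string());
        }
        Ok::<(), String>(())
    };

    // A single exit point converts the typed error into a log line and `()`.
    future.await.map_err(|e| {
        eprintln!("work failed: {}", e); // the real code logs via slog's `error!`
    })?;

    Ok(())
}

fn main() {
    futures::executor::block_on(async {
        assert!(checked_work(21).await.is_ok());
        assert!(checked_work(0).await.is_err());
    });
}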

@@ -22,23 +22,22 @@ store = { path = "./store" }
 client = { path = "client" }
 version = { path = "version" }
 clap = "2.33.0"
-rand = "0.7.2"
+rand = "0.7.3"
 slog = { version = "2.5.2", features = ["max_level_trace", "release_max_level_trace"] }
-slog-term = "2.4.2"
-slog-async = "2.3.0"
-ctrlc = { version = "3.1.3", features = ["termination"] }
-tokio = "0.1.22"
-tokio-timer = "0.2.12"
-exit-future = "0.1.4"
+slog-term = "2.5.0"
+slog-async = "2.5.0"
+ctrlc = { version = "3.1.4", features = ["termination"] }
+tokio = {version = "0.2.20", features = ["time"] }
+exit-future = "0.2.0"
 env_logger = "0.7.1"
 dirs = "2.0.2"
 logging = { path = "../eth2/utils/logging" }
-futures = "0.1.29"
+futures = "0.3.5"
 environment = { path = "../lighthouse/environment" }
 genesis = { path = "genesis" }
 eth2_testnet_config = { path = "../eth2/utils/eth2_testnet_config" }
 eth2-libp2p = { path = "./eth2-libp2p" }
-eth2_ssz = { path = "../eth2/utils/ssz" }
-toml = "0.5.4"
-serde = "1.0.102"
+eth2_ssz = "0.1.2"
+toml = "0.5.6"
+serde = "1.0.110"
 clap_utils = { path = "../eth2/utils/clap_utils" }

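The swap above from `tokio = "0.1.22"` plus `tokio-timer` to `tokio = {version = "0.2.20", features = ["time"]}` reflects tokio 0.2 folding timers into the main crate behind a feature flag. For reference, a minimal tokio 0.2 runtime with timers enabled looks like the following (a generic sketch, not the `environment` crate's actual builder):

use std::time::Duration;

fn main() {
    let mut runtime = tokio::runtime::Builder::new()
        .threaded_scheduler()
        .enable_time() // required for tokio::time::* when not using features = ["full"]
        .build()
        .expect("should build tokio 0.2 runtime");

    runtime.block_on(async {
        // `delay_for` lives in tokio::time in 0.2 (tokio-timer's successor).
        tokio::time::delay_for(Duration::from_millis(10)).await;
    });
}
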
@@ -5,24 +5,26 @@ authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com
 edition = "2018"

 [features]
+default = ["participation_metrics"]
 write_ssz_files = [] # Writes debugging .ssz files to /tmp during block processing.
+participation_metrics = [] # Exposes validator participation metrics to Prometheus.

 [dependencies]
 eth2_config = { path = "../../eth2/utils/eth2_config" }
 merkle_proof = { path = "../../eth2/utils/merkle_proof" }
 store = { path = "../store" }
-parking_lot = "0.9.0"
+parking_lot = "0.10.2"
 lazy_static = "1.4.0"
 lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }
 log = "0.4.8"
 operation_pool = { path = "../../eth2/operation_pool" }
-rayon = "1.2.0"
-serde = "1.0.102"
-serde_derive = "1.0.102"
+rayon = "1.3.0"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 serde_yaml = "0.8.11"
-serde_json = "1.0.41"
+serde_json = "1.0.52"
 slog = { version = "2.5.2", features = ["max_level_trace"] }
-sloggers = "0.3.4"
+sloggers = "1.0.0"
 slot_clock = { path = "../../eth2/utils/slot_clock" }
 eth2_hashing = "0.1.0"
 eth2_ssz = "0.1.2"

@@ -31,13 +33,13 @@ eth2_ssz_derive = "0.1.0"
 state_processing = { path = "../../eth2/state_processing" }
 tree_hash = "0.1.0"
 types = { path = "../../eth2/types" }
-tokio = "0.1.22"
+tokio = "0.2.20"
 eth1 = { path = "../eth1" }
 websocket_server = { path = "../websocket_server" }
-futures = "0.1.25"
+futures = "0.3.5"
 genesis = { path = "../genesis" }
-integer-sqrt = "0.1"
-rand = "0.7.2"
+integer-sqrt = "0.1.3"
+rand = "0.7.3"
 proto_array_fork_choice = { path = "../../eth2/proto_array_fork_choice" }
 lru = "0.4.3"
 tempfile = "3.1.0"

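The new `participation_metrics` feature above is enabled by default and, as the `block_verification` diff further down shows, is checked with `cfg!` rather than `#[cfg(...)]`. A tiny sketch of the difference:

fn main() {
    // `cfg!(feature = "...")` compiles to a constant bool, so the guarded code
    // still type-checks when the feature is off; `#[cfg(feature = "...")]`
    // would remove it from compilation entirely.
    if cfg!(feature = "participation_metrics") {
        println!("participation metrics enabled");
    }
}
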
@@ -847,7 +847,7 @@ where
             // The state roots are not useful for the shuffling, so there's no need to
             // compute them.
             per_slot_processing(&mut state, Some(Hash256::zero()), &chain.spec)
-                .map_err(|e| BeaconChainError::from(e))?
+                .map_err(|e| BeaconChainError::from(e))?;
         }

         metrics::stop_timer(state_skip_timer);

@@ -553,7 +553,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         // Note: supplying some `state_root` when it is known would be a cheap and easy
         // optimization.
         match per_slot_processing(&mut state, skip_state_root, &self.spec) {
-            Ok(()) => (),
+            Ok(_) => (),
             Err(e) => {
                 warn!(
                     self.log,

@@ -863,7 +863,14 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         &self,
         attestation: Attestation<T::EthSpec>,
     ) -> Result<VerifiedUnaggregatedAttestation<T>, AttestationError> {
-        VerifiedUnaggregatedAttestation::verify(attestation, self)
+        metrics::inc_counter(&metrics::UNAGGREGATED_ATTESTATION_PROCESSING_REQUESTS);
+        let _timer =
+            metrics::start_timer(&metrics::UNAGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES);
+
+        VerifiedUnaggregatedAttestation::verify(attestation, self).map(|v| {
+            metrics::inc_counter(&metrics::UNAGGREGATED_ATTESTATION_PROCESSING_SUCCESSES);
+            v
+        })
     }

     /// Accepts some `SignedAggregateAndProof` from the network and attempts to verify it,

@@ -872,7 +879,14 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         &self,
         signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
     ) -> Result<VerifiedAggregatedAttestation<T>, AttestationError> {
-        VerifiedAggregatedAttestation::verify(signed_aggregate, self)
+        metrics::inc_counter(&metrics::AGGREGATED_ATTESTATION_PROCESSING_REQUESTS);
+        let _timer =
+            metrics::start_timer(&metrics::AGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES);
+
+        VerifiedAggregatedAttestation::verify(signed_aggregate, self).map(|v| {
+            metrics::inc_counter(&metrics::AGGREGATED_ATTESTATION_PROCESSING_SUCCESSES);
+            v
+        })
     }

     /// Accepts some attestation-type object and attempts to verify it in the context of fork

@@ -887,6 +901,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         &self,
         unverified_attestation: &'a impl IntoForkChoiceVerifiedAttestation<'a, T>,
     ) -> Result<ForkChoiceVerifiedAttestation<'a, T>, AttestationError> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_APPLY_TO_FORK_CHOICE);
+
         let verified = unverified_attestation.into_fork_choice_verified_attestation(self)?;
         let indexed_attestation = verified.indexed_attestation();
         self.fork_choice

@@ -907,6 +923,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         &self,
         unaggregated_attestation: VerifiedUnaggregatedAttestation<T>,
     ) -> Result<VerifiedUnaggregatedAttestation<T>, AttestationError> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_APPLY_TO_AGG_POOL);
+
         let attestation = unaggregated_attestation.attestation();

         match self.naive_aggregation_pool.insert(attestation) {

@@ -950,6 +968,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         &self,
         signed_aggregate: VerifiedAggregatedAttestation<T>,
     ) -> Result<VerifiedAggregatedAttestation<T>, AttestationError> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_APPLY_TO_OP_POOL);
+
         // If there's no eth1 chain then it's impossible to produce blocks and therefore
         // useless to put things in the op pool.
         if self.eth1_chain.is_some() {

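The shape added to both gossip-verification entry points above is: increment a request counter, start an RAII histogram timer that covers success and failure alike, and bump the success counter only on the `Ok` path via `map`. A self-contained equivalent using the `prometheus` crate directly (which `lighthouse_metrics` wraps); the names here are illustrative:

use prometheus::{Histogram, HistogramOpts, IntCounter};

fn verify(
    requests: &IntCounter,
    successes: &IntCounter,
    times: &Histogram,
    input: u64,
) -> Result<u64, String> {
    requests.inc();
    // Observes elapsed seconds into the histogram when dropped at scope exit,
    // so failures are timed too.
    let _timer = times.start_timer();

    if input % 2 == 0 {
        successes.inc();
        Ok(input)
    } else {
        Err("odd input rejected".to_string())
    }
}

fn main() {
    let requests = IntCounter::new("processing_requests_total", "all requests").unwrap();
    let successes = IntCounter::new("processing_successes_total", "verified ok").unwrap();
    let times = Histogram::with_opts(HistogramOpts::new(
        "verification_seconds",
        "full runtime of verification",
    ))
    .unwrap();

    assert!(verify(&requests, &successes, &times, 4).is_ok());
    assert_eq!(requests.get(), 1);
    assert_eq!(successes.get(), 1);
}
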
@@ -54,10 +54,12 @@ use slot_clock::SlotClock;
 use ssz::Encode;
 use state_processing::{
     block_signature_verifier::{BlockSignatureVerifier, Error as BlockSignatureVerifierError},
-    per_block_processing, per_slot_processing, BlockProcessingError, BlockSignatureStrategy,
-    SlotProcessingError,
+    per_block_processing,
+    per_epoch_processing::EpochProcessingSummary,
+    per_slot_processing, BlockProcessingError, BlockSignatureStrategy, SlotProcessingError,
 };
 use std::borrow::Cow;
+use std::convert::TryFrom;
 use std::fs;
 use std::io::Write;
 use store::{Error as DBError, StateBatch};

@@ -238,7 +240,7 @@ pub fn signature_verify_chain_segment<T: BeaconChainTypes>(
 /// the p2p network.
 pub struct GossipVerifiedBlock<T: BeaconChainTypes> {
     pub block: SignedBeaconBlock<T::EthSpec>,
-    block_root: Hash256,
+    pub block_root: Hash256,
     parent: BeaconSnapshot<T::EthSpec>,
 }

@@ -556,6 +558,8 @@
             });
         }

+        let mut summaries = vec![];
+
         // Transition the parent state to the block slot.
         let mut state = parent.beacon_state;
         let distance = block.slot().as_u64().saturating_sub(state.slot.as_u64());

@@ -571,9 +575,12 @@
                 state_root
             };

-            per_slot_processing(&mut state, Some(state_root), &chain.spec)?;
+            per_slot_processing(&mut state, Some(state_root), &chain.spec)?
+                .map(|summary| summaries.push(summary));
         }

+        expose_participation_metrics(&summaries);
+
         metrics::stop_timer(catchup_timer);

         /*

@@ -891,6 +898,45 @@ fn get_signature_verifier<'a, E: EthSpec>(
     )
 }

+fn expose_participation_metrics(summaries: &[EpochProcessingSummary]) {
+    if !cfg!(feature = "participation_metrics") {
+        return;
+    }
+
+    for summary in summaries {
+        let b = &summary.total_balances;
+
+        metrics::maybe_set_float_gauge(
+            &metrics::PARTICIPATION_PREV_EPOCH_ATTESTER,
+            participation_ratio(b.previous_epoch_attesters(), b.previous_epoch()),
+        );
+
+        metrics::maybe_set_float_gauge(
+            &metrics::PARTICIPATION_PREV_EPOCH_TARGET_ATTESTER,
+            participation_ratio(b.previous_epoch_target_attesters(), b.previous_epoch()),
+        );
+
+        metrics::maybe_set_float_gauge(
+            &metrics::PARTICIPATION_PREV_EPOCH_HEAD_ATTESTER,
+            participation_ratio(b.previous_epoch_head_attesters(), b.previous_epoch()),
+        );
+    }
+}
+
+fn participation_ratio(section: u64, total: u64) -> Option<f64> {
+    // Reduce the precision to help ensure we fit inside a u32.
+    const PRECISION: u64 = 100_000_000;
+
+    let section: f64 = u32::try_from(section / PRECISION).ok()?.into();
+    let total: f64 = u32::try_from(total / PRECISION).ok()?.into();
+
+    if total > 0_f64 {
+        Some(section / total)
+    } else {
+        None
+    }
+}
+
 fn write_state<T: EthSpec>(prefix: &str, state: &BeaconState<T>, log: &Logger) {
     if WRITE_BLOCK_PROCESSING_SSZ {
         let root = state.tree_hash_root();

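To make the precision trick in `participation_ratio` above concrete, here is the same helper restated with a usage example. Dividing both balances (in gwei) by `PRECISION = 100_000_000` first keeps them within `u32` before the lossless `u32 -> f64` conversion; the balance figures below are made up:

use std::convert::TryFrom;

fn participation_ratio(section: u64, total: u64) -> Option<f64> {
    // Reduce the precision to help ensure we fit inside a u32.
    const PRECISION: u64 = 100_000_000;

    let section: f64 = u32::try_from(section / PRECISION).ok()?.into();
    let total: f64 = u32::try_from(total / PRECISION).ok()?.into();

    if total > 0_f64 {
        Some(section / total)
    } else {
        None
    }
}

fn main() {
    // 2.4e15 gwei attesting out of 3.2e15 gwei total -> 0.75.
    let attesting = 2_400_000_000_000_000;
    let total = 3_200_000_000_000_000;
    assert_eq!(participation_ratio(attesting, total), Some(0.75));

    // A zero denominator yields None instead of dividing by zero.
    assert_eq!(participation_ratio(0, 0), None);
}
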
@@ -1,7 +1,6 @@
 use crate::metrics;
 use eth1::{Config as Eth1Config, Eth1Block, Service as HttpService};
 use eth2_hashing::hash;
-use futures::Future;
 use slog::{debug, error, trace, Logger};
 use ssz::{Decode, Encode};
 use ssz_derive::{Decode, Encode};

@@ -286,11 +285,10 @@ impl<T: EthSpec, S: Store<T>> CachingEth1Backend<T, S> {
     }

     /// Starts the routine which connects to the external eth1 node and updates the caches.
-    pub fn start(
-        &self,
-        exit: tokio::sync::oneshot::Receiver<()>,
-    ) -> impl Future<Item = (), Error = ()> {
-        self.core.auto_update(exit)
+    pub fn start(&self, exit: tokio::sync::oneshot::Receiver<()>) {
+        // don't need to spawn as a task is being spawned in auto_update
+        // TODO: check if this is correct
+        HttpService::auto_update(self.core.clone(), exit);
     }

     /// Instantiates `self` from an existing service.

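`start` above stops returning a future because `HttpService::auto_update` now spawns its own task. The general shape of such a self-spawning update loop, racing each delay against a oneshot exit signal, looks roughly like this (a sketch under those assumptions, not the eth1 service's actual code):

use futures::future::{self, Either};
use std::time::Duration;
use tokio::sync::oneshot;
use tokio::time::delay_for;

fn spawn_auto_update(mut exit: oneshot::Receiver<()>) {
    tokio::spawn(async move {
        loop {
            // ... run one cache update here ...
            let delay = delay_for(Duration::from_secs(1));
            futures::pin_mut!(delay);
            match future::select(&mut exit, delay).await {
                Either::Left(_) => break,     // exit fired (or sender dropped)
                Either::Right(_) => continue, // interval elapsed; update again
            }
        }
    });
}

#[tokio::main]
async fn main() {
    let (tx, rx) = oneshot::channel();
    spawn_auto_update(rx);
    delay_for(Duration::from_millis(20)).await;
    let _ = tx.send(()); // ask the update loop to shut down
}
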
@@ -1,6 +1,7 @@
 use crate::{BeaconChain, BeaconChainTypes};
 pub use lighthouse_metrics::*;
-use types::{BeaconState, Epoch, Hash256, Slot};
+use slot_clock::SlotClock;
+use types::{BeaconState, Epoch, EthSpec, Hash256, Slot};

 lazy_static! {
     /*

@@ -79,25 +80,81 @@ lazy_static! {
         "Number of attestations in a block"
     );

+    /*
+     * Unaggregated Attestation Verification
+     */
+    pub static ref UNAGGREGATED_ATTESTATION_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
+        "beacon_unaggregated_attestation_processing_requests_total",
+        "Count of all unaggregated attestations submitted for processing"
+    );
+    pub static ref UNAGGREGATED_ATTESTATION_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
+        "beacon_unaggregated_attestation_processing_successes_total",
+        "Number of unaggregated attestations verified for gossip"
+    );
+    pub static ref UNAGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES: Result<Histogram> = try_create_histogram(
+        "beacon_unaggregated_attestation_gossip_verification_seconds",
+        "Full runtime of aggregated attestation gossip verification"
+    );
+
+    /*
+     * Aggregated Attestation Verification
+     */
+    pub static ref AGGREGATED_ATTESTATION_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
+        "beacon_aggregated_attestation_processing_requests_total",
+        "Count of all aggregated attestations submitted for processing"
+    );
+    pub static ref AGGREGATED_ATTESTATION_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
+        "beacon_aggregated_attestation_processing_successes_total",
+        "Number of aggregated attestations verified for gossip"
+    );
+    pub static ref AGGREGATED_ATTESTATION_GOSSIP_VERIFICATION_TIMES: Result<Histogram> = try_create_histogram(
+        "beacon_aggregated_attestation_gossip_verification_seconds",
+        "Full runtime of aggregated attestation gossip verification"
+    );
+
+    /*
+     * General Attestation Processing
+     */
+    pub static ref ATTESTATION_PROCESSING_APPLY_TO_FORK_CHOICE: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_apply_to_fork_choice",
+        "Time spent applying an attestation to fork choice"
+    );
+    pub static ref ATTESTATION_PROCESSING_APPLY_TO_AGG_POOL: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_apply_to_agg_pool",
+        "Time spent applying an attestation to the naive aggregation pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_MAPS_WRITE_LOCK: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_maps_write_lock",
+        "Time spent waiting for the maps write lock when adding to the agg poll"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_PRUNE: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_prune",
+        "Time spent for the agg pool to prune"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_INSERT: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_insert",
+        "Time spent for the outer pool.insert() function of agg pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_CORE_INSERT: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_core_insert",
+        "Time spent for the core map.insert() function of agg pool"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_AGGREGATION: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_aggregation",
+        "Time spent doing signature aggregation when adding to the agg poll"
+    );
+    pub static ref ATTESTATION_PROCESSING_AGG_POOL_CREATE_MAP: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_agg_pool_create_map",
+        "Time spent for creating a map for a new slot"
+    );
+    pub static ref ATTESTATION_PROCESSING_APPLY_TO_OP_POOL: Result<Histogram> = try_create_histogram(
+        "beacon_attestation_processing_apply_to_op_pool",
+        "Time spent applying an attestation to the block inclusion pool"
+    );
+
     /*
      * Attestation Processing
      */
-    pub static ref ATTESTATION_PROCESSING_REQUESTS: Result<IntCounter> = try_create_int_counter(
-        "beacon_attestation_processing_requests_total",
-        "Count of all attestations submitted for processing"
-    );
-    pub static ref ATTESTATION_PROCESSING_SUCCESSES: Result<IntCounter> = try_create_int_counter(
-        "beacon_attestation_processing_successes_total",
-        "total_attestation_processing_successes"
-    );
-    pub static ref ATTESTATION_PROCESSING_TIMES: Result<Histogram> = try_create_histogram(
-        "beacon_attestation_processing_seconds",
-        "Full runtime of attestation processing"
-    );
-    pub static ref ATTESTATION_PROCESSING_INITIAL_VALIDATION_TIMES: Result<Histogram> = try_create_histogram(
-        "beacon_attestation_processing_initial_validation_seconds",
-        "Time spent on the initial_validation of attestation processing"
-    );
     pub static ref ATTESTATION_PROCESSING_SHUFFLING_CACHE_WAIT_TIMES: Result<Histogram> = try_create_histogram(
         "beacon_attestation_processing_shuffling_cache_wait_seconds",
         "Time spent on waiting for the shuffling cache lock during attestation processing"

@@ -251,6 +308,34 @@ lazy_static! {
         try_create_int_gauge("beacon_op_pool_proposer_slashings_total", "Count of proposer slashings in the op pool");
     pub static ref OP_POOL_NUM_VOLUNTARY_EXITS: Result<IntGauge> =
         try_create_int_gauge("beacon_op_pool_voluntary_exits_total", "Count of voluntary exits in the op pool");
+
+    /*
+     * Participation Metrics
+     */
+    pub static ref PARTICIPATION_PREV_EPOCH_ATTESTER: Result<Gauge> = try_create_float_gauge(
+        "beacon_participation_prev_epoch_attester",
+        "Ratio of attesting balances to total balances"
+    );
+    pub static ref PARTICIPATION_PREV_EPOCH_TARGET_ATTESTER: Result<Gauge> = try_create_float_gauge(
+        "beacon_participation_prev_epoch_target_attester",
+        "Ratio of target-attesting balances to total balances"
+    );
+    pub static ref PARTICIPATION_PREV_EPOCH_HEAD_ATTESTER: Result<Gauge> = try_create_float_gauge(
+        "beacon_participation_prev_epoch_head_attester",
+        "Ratio of head-attesting balances to total balances"
+    );
+
+    /*
+     * Attestation Observation Metrics
+     */
+    pub static ref ATTN_OBSERVATION_PREV_EPOCH_ATTESTERS: Result<IntGauge> = try_create_int_gauge(
+        "beacon_attn_observation_epoch_attesters",
+        "Count of attesters that have been seen by the beacon chain in the previous epoch"
+    );
+    pub static ref ATTN_OBSERVATION_PREV_EPOCH_AGGREGATORS: Result<IntGauge> = try_create_int_gauge(
+        "beacon_attn_observation_epoch_aggregators",
+        "Count of aggregators that have been seen by the beacon chain in the previous epoch"
+    );
 }

@@ -260,6 +345,10 @@ pub fn scrape_for_metrics<T: BeaconChainTypes>(beacon_chain: &BeaconChain<T>) {
         scrape_head_state::<T>(&head.beacon_state, head.beacon_state_root)
     }

+    if let Some(slot) = beacon_chain.slot_clock.now() {
+        scrape_attestation_observation(slot, beacon_chain);
+    }
+
     set_gauge_by_usize(
         &OP_POOL_NUM_ATTESTATIONS,
         beacon_chain.op_pool.num_attestations(),

@@ -332,6 +421,24 @@ fn scrape_head_state<T: BeaconChainTypes>(state: &BeaconState<T::EthSpec>, state
     set_gauge_by_u64(&HEAD_STATE_ETH1_DEPOSIT_INDEX, state.eth1_deposit_index);
 }

+fn scrape_attestation_observation<T: BeaconChainTypes>(slot_now: Slot, chain: &BeaconChain<T>) {
+    let prev_epoch = slot_now.epoch(T::EthSpec::slots_per_epoch()) - 1;
+
+    if let Some(count) = chain
+        .observed_attesters
+        .observed_validator_count(prev_epoch)
+    {
+        set_gauge_by_usize(&ATTN_OBSERVATION_PREV_EPOCH_ATTESTERS, count);
+    }
+
+    if let Some(count) = chain
+        .observed_aggregators
+        .observed_validator_count(prev_epoch)
+    {
+        set_gauge_by_usize(&ATTN_OBSERVATION_PREV_EPOCH_AGGREGATORS, count);
+    }
+}
+
 fn set_gauge_by_slot(gauge: &Result<IntGauge>, value: Slot) {
     set_gauge(gauge, value.as_u64() as i64);
 }

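The participation gauges above are float `Gauge`s rather than the `IntGauge`s used elsewhere because they hold ratios in [0, 1], and `maybe_set_float_gauge` takes an `Option` so that an unavailable ratio (the `None` from `participation_ratio`) leaves the gauge untouched. A self-contained sketch of that helper against the `prometheus` crate, which `lighthouse_metrics` wraps:

use prometheus::{Gauge, Opts};

fn maybe_set_float_gauge(gauge: &Gauge, value: Option<f64>) {
    if let Some(v) = value {
        gauge.set(v);
    }
}

fn main() {
    let gauge = Gauge::with_opts(Opts::new(
        "beacon_participation_prev_epoch_attester",
        "Ratio of attesting balances to total balances",
    ))
    .expect("valid gauge options");

    maybe_set_float_gauge(&gauge, Some(0.75));
    maybe_set_float_gauge(&gauge, None); // ratio unavailable: keep the last value
    assert_eq!(gauge.get(), 0.75);
}
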
@@ -1,3 +1,4 @@
+use crate::metrics;
 use parking_lot::RwLock;
 use std::collections::HashMap;
 use types::{Attestation, AttestationData, EthSpec, Slot};

@@ -68,6 +69,8 @@ impl<E: EthSpec> AggregatedAttestationMap<E> {
     ///
     /// The given attestation (`a`) must only have one signature.
     pub fn insert(&mut self, a: &Attestation<E>) -> Result<InsertOutcome, Error> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_CORE_INSERT);
+
         let set_bits = a
             .aggregation_bits
             .iter()

@@ -93,6 +96,8 @@ impl<E: EthSpec> AggregatedAttestationMap<E> {
         {
             Ok(InsertOutcome::SignatureAlreadyKnown { committee_index })
         } else {
+            let _timer =
+                metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_AGGREGATION);
             existing_attestation.aggregate(a);
             Ok(InsertOutcome::SignatureAggregated { committee_index })
         }

@@ -164,8 +169,9 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
     /// The pool may be pruned if the given `attestation.data` has a slot higher than any
     /// previously seen.
     pub fn insert(&self, attestation: &Attestation<E>) -> Result<InsertOutcome, Error> {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_INSERT);
         let slot = attestation.data.slot;
-        let lowest_permissible_slot = *self.lowest_permissible_slot.read();
+        let lowest_permissible_slot: Slot = *self.lowest_permissible_slot.read();

         // Reject any attestations that are too old.
         if slot < lowest_permissible_slot {

@@ -175,11 +181,15 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
             });
         }

+        let lock_timer =
+            metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_MAPS_WRITE_LOCK);
         let mut maps = self.maps.write();
+        drop(lock_timer);

         let outcome = if let Some(map) = maps.get_mut(&slot) {
             map.insert(attestation)
         } else {
+            let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_CREATE_MAP);
             // To avoid re-allocations, try and determine a rough initial capacity for the new item
             // by obtaining the mean size of all items in earlier epoch.
             let (count, sum) = maps

@@ -219,8 +229,19 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
     /// Removes any attestations with a slot lower than `current_slot` and bars any future
     /// attestations with a slot lower than `current_slot - SLOTS_RETAINED`.
     pub fn prune(&self, current_slot: Slot) {
+        let _timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_AGG_POOL_PRUNE);
         // Taking advantage of saturating subtraction on `Slot`.
         let lowest_permissible_slot = current_slot - Slot::from(SLOTS_RETAINED);

+        // No need to prune if the lowest permissible slot has not changed and the queue length is
+        // less than the maximum
+        if *self.lowest_permissible_slot.read() == lowest_permissible_slot
+            && self.maps.read().len() <= SLOTS_RETAINED
+        {
+            return;
+        }
+
         *self.lowest_permissible_slot.write() = lowest_permissible_slot;

         let mut maps = self.maps.write();

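Two details in the pool changes above deserve a note. `prune` now returns early when the lowest permissible slot is unchanged and the map count is within bounds, making the common per-attestation call cheap. And the write-lock metric times only the wait: the timer starts before acquisition and is dropped immediately after, as in this sketch (using std's `RwLock` for self-containment; the pool itself uses `parking_lot`):

use prometheus::{Histogram, HistogramOpts};
use std::sync::RwLock;

fn timed_write(lock: &RwLock<Vec<u64>>, wait_hist: &Histogram) {
    let lock_timer = wait_hist.start_timer();
    let mut guard = lock.write().expect("lock not poisoned");
    // Dropping here records only the time spent waiting for the lock,
    // not the time spent holding it.
    drop(lock_timer);

    guard.push(42);
}

fn main() {
    let lock = RwLock::new(Vec::new());
    let wait_hist = Histogram::with_opts(HistogramOpts::new(
        "maps_write_lock_seconds",
        "Time spent waiting for the maps write lock",
    ))
    .expect("valid histogram options");

    timed_write(&lock, &wait_hist);
    assert_eq!(lock.read().expect("lock not poisoned").len(), 1);
}
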
@@ -36,9 +36,12 @@ pub trait Item {
     /// The default capacity for self. Used when we can't guess a reasonable size.
     fn default_capacity() -> usize;

-    /// Returns the number of validator indices stored in `self`.
+    /// Returns the allocated size of `self`, measured by validator indices.
     fn len(&self) -> usize;

+    /// Returns the number of validators that have been observed by `self`.
+    fn validator_count(&self) -> usize;
+
     /// Store `validator_index` in `self`.
     fn insert(&mut self, validator_index: usize) -> bool;

@@ -67,6 +70,10 @@ impl Item for EpochBitfield {
         self.bitfield.len()
     }

+    fn validator_count(&self) -> usize {
+        self.bitfield.iter().filter(|bit| **bit).count()
+    }
+
     fn insert(&mut self, validator_index: usize) -> bool {
         self.bitfield
             .get_mut(validator_index)

@@ -116,6 +123,10 @@ impl Item for EpochHashSet {
         self.set.len()
     }

+    fn validator_count(&self) -> usize {
+        self.set.len()
+    }
+
     /// Inserts the `validator_index` in the set. Returns `true` if the `validator_index` was
     /// already in the set.
     fn insert(&mut self, validator_index: usize) -> bool {

@@ -219,6 +230,15 @@ impl<T: Item, E: EthSpec> AutoPruningContainer<T, E> {
         Ok(exists)
     }

+    /// Returns the number of validators that have been observed at the given `epoch`. Returns
+    /// `None` if `self` does not have a cache for that epoch.
+    pub fn observed_validator_count(&self, epoch: Epoch) -> Option<usize> {
+        self.items
+            .read()
+            .get(&epoch)
+            .map(|item| item.validator_count())
+    }
+
     fn sanitize_request(&self, a: &Attestation<E>, validator_index: usize) -> Result<(), Error> {
         if validator_index > E::ValidatorRegistryLimit::to_usize() {
             return Err(Error::ValidatorIndexTooHigh(validator_index));

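The `len` vs `validator_count` split in the `Item` trait above matters for the bitfield-backed cache: `len` is the allocated bit length, while the observed count is the number of set bits (for the hashset variant the two coincide). A self-contained illustration with a plain `Vec<bool>` in place of the real bitfield type:

fn main() {
    let mut bitfield = vec![false; 8]; // allocated for 8 validator indices
    bitfield[1] = true; // validators 1 and 5 observed
    bitfield[5] = true;

    let len = bitfield.len();
    let validator_count = bitfield.iter().filter(|bit| **bit).count();

    assert_eq!(len, 8); // allocation size, not observations
    assert_eq!(validator_count, 2); // what the new metric reports
}
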
@@ -5,8 +5,8 @@ authors = ["Age Manning <Age@AgeManning.com>"]
 edition = "2018"

 [dev-dependencies]
-sloggers = "0.3.4"
-toml = "^0.5"
+sloggers = "1.0.0"
+toml = "0.5.6"

 [dependencies]
 beacon_chain = { path = "../beacon_chain" }

@@ -15,27 +15,27 @@ network = { path = "../network" }
 timer = { path = "../timer" }
 eth2-libp2p = { path = "../eth2-libp2p" }
 rest_api = { path = "../rest_api" }
-parking_lot = "0.9.0"
+parking_lot = "0.10.2"
 websocket_server = { path = "../websocket_server" }
-prometheus = "0.7.0"
+prometheus = "0.8.0"
 types = { path = "../../eth2/types" }
 tree_hash = "0.1.0"
 eth2_config = { path = "../../eth2/utils/eth2_config" }
 slot_clock = { path = "../../eth2/utils/slot_clock" }
-serde = "1.0.102"
-serde_derive = "1.0.102"
-error-chain = "0.12.1"
+serde = "1.0.110"
+serde_derive = "1.0.110"
+error-chain = "0.12.2"
 serde_yaml = "0.8.11"
 slog = { version = "2.5.2", features = ["max_level_trace"] }
-slog-async = "2.3.0"
-tokio = "0.1.22"
+slog-async = "2.5.0"
+tokio = "0.2.20"
 dirs = "2.0.2"
-futures = "0.1.29"
-reqwest = "0.9.22"
-url = "2.1.0"
+futures = "0.3.5"
+reqwest = "0.10.4"
+url = "2.1.1"
 eth1 = { path = "../eth1" }
 genesis = { path = "../genesis" }
 environment = { path = "../../lighthouse/environment" }
-eth2_ssz = { path = "../../eth2/utils/ssz" }
+eth2_ssz = "0.1.2"
 lazy_static = "1.4.0"
 lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }

View File

@@ -13,7 +13,6 @@ use environment::RuntimeContext;
 use eth1::{Config as Eth1Config, Service as Eth1Service};
 use eth2_config::Eth2Config;
 use eth2_libp2p::NetworkGlobals;
-use futures::{future, Future, IntoFuture};
 use genesis::{interop_genesis_state, Eth1GenesisService};
 use network::{NetworkConfig, NetworkMessage, NetworkService};
 use slog::info;
@@ -109,11 +108,11 @@ where
     /// Initializes the `BeaconChainBuilder`. The `build_beacon_chain` method will need to be
     /// called later in order to actually instantiate the `BeaconChain`.
-    pub fn beacon_chain_builder(
+    pub async fn beacon_chain_builder(
         mut self,
         client_genesis: ClientGenesis,
         config: ClientConfig,
-    ) -> impl Future<Item = Self, Error = String> {
+    ) -> Result<Self, String> {
         let store = self.store.clone();
         let store_migrator = self.store_migrator.take();
         let chain_spec = self.chain_spec.clone();
@@ -122,123 +121,94 @@ where
         let data_dir = config.data_dir.clone();
         let disabled_forks = config.disabled_forks.clone();

-        future::ok(())
-            .and_then(move |()| {
-                let store = store
-                    .ok_or_else(|| "beacon_chain_start_method requires a store".to_string())?;
-                let store_migrator = store_migrator.ok_or_else(|| {
-                    "beacon_chain_start_method requires a store migrator".to_string()
-                })?;
-                let context = runtime_context
-                    .ok_or_else(|| {
-                        "beacon_chain_start_method requires a runtime context".to_string()
-                    })?
-                    .service_context("beacon".into());
-                let spec = chain_spec
-                    .ok_or_else(|| "beacon_chain_start_method requires a chain spec".to_string())?;
+        let store =
+            store.ok_or_else(|| "beacon_chain_start_method requires a store".to_string())?;
+        let store_migrator = store_migrator
+            .ok_or_else(|| "beacon_chain_start_method requires a store migrator".to_string())?;
+        let context = runtime_context
+            .ok_or_else(|| "beacon_chain_start_method requires a runtime context".to_string())?
+            .service_context("beacon".into());
+        let spec = chain_spec
+            .ok_or_else(|| "beacon_chain_start_method requires a chain spec".to_string())?;

         let builder = BeaconChainBuilder::new(eth_spec_instance)
             .logger(context.log.clone())
             .store(store)
             .store_migrator(store_migrator)
             .data_dir(data_dir)
             .custom_spec(spec.clone())
             .disabled_forks(disabled_forks);

-                Ok((builder, spec, context))
-            })
-            .and_then(move |(builder, spec, context)| {
-                let chain_exists = builder
-                    .store_contains_beacon_chain()
-                    .unwrap_or_else(|_| false);
+        let chain_exists = builder
+            .store_contains_beacon_chain()
+            .unwrap_or_else(|_| false);

         // If the client is expect to resume but there's no beacon chain in the database,
         // use the `DepositContract` method. This scenario is quite common when the client
         // is shutdown before finding genesis via eth1.
         //
         // Alternatively, if there's a beacon chain in the database then always resume
         // using it.
-            let client_genesis = if client_genesis == ClientGenesis::FromStore && !chain_exists
-            {
-                info!(context.log, "Defaulting to deposit contract genesis");
+        let client_genesis = if client_genesis == ClientGenesis::FromStore && !chain_exists {
+            info!(context.log, "Defaulting to deposit contract genesis");

             ClientGenesis::DepositContract
         } else if chain_exists {
             ClientGenesis::FromStore
         } else {
             client_genesis
         };

-            let genesis_state_future: Box<dyn Future<Item = _, Error = _> + Send> =
-                match client_genesis {
-                    ClientGenesis::Interop {
-                        validator_count,
-                        genesis_time,
-                    } => {
-                        let keypairs = generate_deterministic_keypairs(validator_count);
-                        let result = interop_genesis_state(&keypairs, genesis_time, &spec);
-
-                        let future = result
-                            .and_then(move |genesis_state| builder.genesis_state(genesis_state))
-                            .into_future()
-                            .map(|v| (v, None));
-
-                        Box::new(future)
-                    }
-                    ClientGenesis::SszBytes {
-                        genesis_state_bytes,
-                    } => {
-                        info!(
-                            context.log,
-                            "Starting from known genesis state";
-                        );
-
-                        let result = BeaconState::from_ssz_bytes(&genesis_state_bytes)
-                            .map_err(|e| format!("Unable to parse genesis state SSZ: {:?}", e));
-
-                        let future = result
-                            .and_then(move |genesis_state| builder.genesis_state(genesis_state))
-                            .into_future()
-                            .map(|v| (v, None));
-
-                        Box::new(future)
-                    }
-                    ClientGenesis::DepositContract => {
-                        info!(
-                            context.log,
-                            "Waiting for eth2 genesis from eth1";
-                            "eth1_endpoint" => &config.eth1.endpoint,
-                            "contract_deploy_block" => config.eth1.deposit_contract_deploy_block,
-                            "deposit_contract" => &config.eth1.deposit_contract_address
-                        );
-
-                        let genesis_service =
-                            Eth1GenesisService::new(config.eth1, context.log.clone());
-
-                        let future = genesis_service
-                            .wait_for_genesis_state(
-                                Duration::from_millis(ETH1_GENESIS_UPDATE_INTERVAL_MILLIS),
-                                context.eth2_config().spec.clone(),
-                            )
-                            .and_then(move |genesis_state| builder.genesis_state(genesis_state))
-                            .map(|v| (v, Some(genesis_service.into_core_service())));
-
-                        Box::new(future)
-                    }
-                    ClientGenesis::FromStore => {
-                        let future = builder.resume_from_db().into_future().map(|v| (v, None));
-                        Box::new(future)
-                    }
-                };
-
-            genesis_state_future
-        })
-        .map(move |(beacon_chain_builder, eth1_service_option)| {
-            self.eth1_service = eth1_service_option;
-            self.beacon_chain_builder = Some(beacon_chain_builder);
-            self
-        })
+        let (beacon_chain_builder, eth1_service_option) = match client_genesis {
+            ClientGenesis::Interop {
+                validator_count,
+                genesis_time,
+            } => {
+                let keypairs = generate_deterministic_keypairs(validator_count);
+                let genesis_state = interop_genesis_state(&keypairs, genesis_time, &spec)?;
+                builder.genesis_state(genesis_state).map(|v| (v, None))?
+            }
+            ClientGenesis::SszBytes {
+                genesis_state_bytes,
+            } => {
+                info!(
+                    context.log,
+                    "Starting from known genesis state";
+                );
+
+                let genesis_state = BeaconState::from_ssz_bytes(&genesis_state_bytes)
+                    .map_err(|e| format!("Unable to parse genesis state SSZ: {:?}", e))?;
+
+                builder.genesis_state(genesis_state).map(|v| (v, None))?
+            }
+            ClientGenesis::DepositContract => {
+                info!(
+                    context.log,
+                    "Waiting for eth2 genesis from eth1";
+                    "eth1_endpoint" => &config.eth1.endpoint,
+                    "contract_deploy_block" => config.eth1.deposit_contract_deploy_block,
+                    "deposit_contract" => &config.eth1.deposit_contract_address
+                );
+
+                let genesis_service = Eth1GenesisService::new(config.eth1, context.log.clone());
+
+                let genesis_state = genesis_service
+                    .wait_for_genesis_state(
+                        Duration::from_millis(ETH1_GENESIS_UPDATE_INTERVAL_MILLIS),
+                        context.eth2_config().spec.clone(),
+                    )
+                    .await?;
+
+                builder
+                    .genesis_state(genesis_state)
+                    .map(|v| (v, Some(genesis_service.into_core_service())))?
+            }
+            ClientGenesis::FromStore => builder.resume_from_db().map(|v| (v, None))?,
+        };
+
+        self.eth1_service = eth1_service_option;
+        self.beacon_chain_builder = Some(beacon_chain_builder);
+        Ok(self)
     }

     /// Immediately starts the networking stack.
@@ -251,10 +221,10 @@
             .runtime_context
             .as_ref()
             .ok_or_else(|| "network requires a runtime_context")?
-            .service_context("network".into());
+            .clone();

         let (network_globals, network_send, network_exit) =
-            NetworkService::start(beacon_chain, config, &context.executor, context.log)
+            NetworkService::start(beacon_chain, config, &context.runtime_handle, context.log)
                 .map_err(|e| format!("Failed to start network: {:?}", e))?;

         self.network_globals = Some(network_globals);
@@ -281,13 +251,10 @@
             .ok_or_else(|| "node timer requires a chain spec".to_string())?
             .milliseconds_per_slot;

-        let timer_exit = timer::spawn(
-            &context.executor,
-            beacon_chain,
-            milliseconds_per_slot,
-            context.log,
-        )
-        .map_err(|e| format!("Unable to start node timer: {}", e))?;
+        let timer_exit = context
+            .runtime_handle
+            .enter(|| timer::spawn(beacon_chain, milliseconds_per_slot))
+            .map_err(|e| format!("Unable to start node timer: {}", e))?;

         self.exit_channels.push(timer_exit);
@@ -323,21 +290,23 @@
             network_chan: network_send,
         };

-        let (exit_channel, listening_addr) = rest_api::start_server(
-            &client_config.rest_api,
-            &context.executor,
-            beacon_chain,
-            network_info,
-            client_config
-                .create_db_path()
-                .map_err(|_| "unable to read data dir")?,
-            client_config
-                .create_freezer_db_path()
-                .map_err(|_| "unable to read freezer DB dir")?,
-            eth2_config.clone(),
-            context.log,
-        )
-        .map_err(|e| format!("Failed to start HTTP API: {:?}", e))?;
+        let log = context.log.clone();
+        let (exit_channel, listening_addr) = context.runtime_handle.enter(|| {
+            rest_api::start_server(
+                &client_config.rest_api,
+                beacon_chain,
+                network_info,
+                client_config
+                    .create_db_path()
+                    .map_err(|_| "unable to read data dir")?,
+                client_config
+                    .create_freezer_db_path()
+                    .map_err(|_| "unable to read freezer DB dir")?,
+                eth2_config.clone(),
+                log,
+            )
+            .map_err(|e| format!("Failed to start HTTP API: {:?}", e))
+        })?;

         self.exit_channels.push(exit_channel);
         self.http_listen_addr = Some(listening_addr);
@@ -366,13 +335,17 @@
             .ok_or_else(|| "slot_notifier requires a chain spec".to_string())?
             .milliseconds_per_slot;

-        let exit_channel = spawn_notifier(
-            context,
-            beacon_chain,
-            network_globals,
-            milliseconds_per_slot,
-        )
-        .map_err(|e| format!("Unable to start slot notifier: {}", e))?;
+        let exit_channel = context
+            .runtime_handle
+            .enter(|| {
+                spawn_notifier(
+                    beacon_chain,
+                    network_globals,
+                    milliseconds_per_slot,
+                    context.log.clone(),
+                )
+            })
+            .map_err(|e| format!("Unable to start slot notifier: {}", e))?;

         self.exit_channels.push(exit_channel);
@@ -468,8 +441,9 @@
             Option<_>,
             Option<_>,
         ) = if config.enabled {
-            let (sender, exit, listening_addr) =
-                websocket_server::start_server(&config, &context.executor, &context.log)?;
+            let (sender, exit, listening_addr) = context
+                .runtime_handle
+                .enter(|| websocket_server::start_server(&config, &context.log))?;
             (sender, Some(exit), Some(listening_addr))
         } else {
             (WebSocketSender::dummy(), None, None)
@@ -688,7 +662,7 @@
         };

         // Starts the service that connects to an eth1 node and periodically updates caches.
-        context.executor.spawn(backend.start(exit));
+        context.runtime_handle.enter(|| backend.start(exit));

         self.beacon_chain_builder = Some(beacon_chain_builder.eth1_backend(Some(backend)));
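The builder changes above all follow one mechanical pattern: a `future::ok(()).and_then(...)` chain returning `impl Future<Item = _, Error = String>` becomes an `async fn` returning `Result<_, String>`, with `?` replacing the nested `ok_or_else` closures. A hedged before/after sketch of that pattern, using hypothetical `fetch`/`parse` helpers rather than the real builder types:

```rust
// After the port, an `async fn` reads as straight-line code: `.await`
// replaces `and_then`/`into_future` chaining and `?` replaces error closures.
async fn load() -> Result<(u64, Option<()>), String> {
    let body = fetch().await?;
    let value = parse(&body)?;
    Ok((value, None))
}

// Hypothetical async I/O step.
async fn fetch() -> Result<String, String> {
    Ok("42".to_string())
}

// Hypothetical synchronous fallible step.
fn parse(body: &str) -> Result<u64, String> {
    body.trim()
        .parse()
        .map_err(|e| format!("unable to parse: {:?}", e))
}
```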


@@ -1,14 +1,12 @@
 use crate::metrics;
 use beacon_chain::{BeaconChain, BeaconChainTypes};
-use environment::RuntimeContext;
 use eth2_libp2p::NetworkGlobals;
-use futures::{Future, Stream};
+use futures::prelude::*;
 use parking_lot::Mutex;
 use slog::{debug, error, info, warn};
 use slot_clock::SlotClock;
 use std::sync::Arc;
 use std::time::{Duration, Instant};
-use tokio::timer::Interval;
 use types::{EthSpec, Slot};

 /// Create a warning log whenever the peer count is at or below this value.
@@ -27,15 +25,11 @@ const SPEEDO_OBSERVATIONS: usize = 4;
 /// Spawns a notifier service which periodically logs information about the node.
 pub fn spawn_notifier<T: BeaconChainTypes>(
-    context: RuntimeContext<T::EthSpec>,
     beacon_chain: Arc<BeaconChain<T>>,
     network: Arc<NetworkGlobals<T::EthSpec>>,
     milliseconds_per_slot: u64,
+    log: slog::Logger,
 ) -> Result<tokio::sync::oneshot::Sender<()>, String> {
-    let log_1 = context.log.clone();
-    let log_2 = context.log.clone();
-    let log_3 = context.log.clone();
     let slot_duration = Duration::from_millis(milliseconds_per_slot);
     let duration_to_next_slot = beacon_chain
         .slot_clock
@@ -43,29 +37,26 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
         .ok_or_else(|| "slot_notifier unable to determine time to next slot")?;

     // Run this half way through each slot.
-    let start_instant = Instant::now() + duration_to_next_slot + (slot_duration / 2);
+    let start_instant = tokio::time::Instant::now() + duration_to_next_slot + (slot_duration / 2);

     // Run this each slot.
     let interval_duration = slot_duration;

     let speedo = Mutex::new(Speedo::default());
+    let mut interval = tokio::time::interval_at(start_instant, interval_duration);

-    let interval_future = Interval::new(start_instant, interval_duration)
-        .map_err(
-            move |e| error!(log_1, "Slot notifier timer failed"; "error" => format!("{:?}", e)),
-        )
-        .for_each(move |_| {
-            let log = log_2.clone();
+    let interval_future = async move {
+        while let Some(_) = interval.next().await {
             let connected_peer_count = network.connected_peers();
             let sync_state = network.sync_state();

-            let head_info = beacon_chain.head_info()
-                .map_err(|e| error!(
+            let head_info = beacon_chain.head_info().map_err(|e| {
+                error!(
                     log,
                     "Failed to get beacon chain head info";
                     "error" => format!("{:?}", e)
-                ))?;
+                )
+            })?;

             let head_slot = head_info.slot;
             let current_slot = beacon_chain.slot().map_err(|e| {
@@ -83,7 +74,10 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
             let mut speedo = speedo.lock();
             speedo.observe(head_slot, Instant::now());

-            metrics::set_gauge(&metrics::SYNC_SLOTS_PER_SECOND, speedo.slots_per_second().unwrap_or_else(|| 0_f64) as i64);
+            metrics::set_gauge(
+                &metrics::SYNC_SLOTS_PER_SECOND,
+                speedo.slots_per_second().unwrap_or_else(|| 0_f64) as i64,
+            );

             // The next two lines take advantage of saturating subtraction on `Slot`.
             let head_distance = current_slot - head_slot;
@@ -104,7 +98,6 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
                 "sync_state" =>format!("{}", sync_state)
             );

-            // Log if we are syncing
             if sync_state.is_syncing() {
                 let distance = format!(
@@ -122,9 +115,13 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
                 );
             } else {
                 if sync_state.is_synced() {
-                    let block_info = if current_slot > head_slot { format!(" … empty") } else { format!("{}", head_root) };
+                    let block_info = if current_slot > head_slot {
+                        format!(" … empty")
+                    } else {
+                        format!("{}", head_root)
+                    };
                     info!(
-                        log_2,
+                        log,
                         "Synced";
                         "peers" => peer_count_pretty(connected_peer_count),
                         "finalized_root" => format!("{}", finalized_root),
@@ -135,7 +132,7 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
                     );
                 } else {
                     info!(
-                        log_2,
+                        log,
                         "Searching for peers";
                         "peers" => peer_count_pretty(connected_peer_count),
                         "finalized_root" => format!("{}", finalized_root),
@@ -145,25 +142,14 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
                     );
                 }
             }
-            Ok(())
-        })
-        .then(move |result| {
-            match result {
-                Ok(()) => Ok(()),
-                Err(e) => {
-                    error!(
-                        log_3,
-                        "Notifier failed to notify";
-                        "error" => format!("{:?}", e)
-                    );
-                    Ok(())
-                } } });
+        }
+        Ok::<(), ()>(())
+    };

     let (exit_signal, exit) = tokio::sync::oneshot::channel();

-    context
-        .executor
-        .spawn(interval_future.select(exit).map(|_| ()).map_err(|_| ()));
+    // run the notifier on the current executor
+    tokio::spawn(futures::future::select(Box::pin(interval_future), exit));

     Ok(exit_signal)
 }
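The ported notifier drives itself from a tokio 0.2 interval inside an `async` block and is shut down by racing that block against a oneshot receiver via `futures::future::select`. A minimal standalone sketch of the same shutdown pattern (names are illustrative; this uses `Interval::tick` where the notifier uses the `Stream` adapter):

```rust
use std::time::Duration;

// Spawns a ticking task and returns the handle that cancels it.
fn spawn_ticker() -> tokio::sync::oneshot::Sender<()> {
    let mut interval = tokio::time::interval_at(
        tokio::time::Instant::now() + Duration::from_secs(1),
        Duration::from_secs(1),
    );

    let tick_future = async move {
        loop {
            interval.tick().await;
            println!("tick");
        }
    };

    let (exit_signal, exit) = tokio::sync::oneshot::channel::<()>();

    // `select` completes when either side does; when `exit` fires first, the
    // ticker future is dropped, which stops it.
    tokio::spawn(futures::future::select(Box::pin(tick_future), exit));

    exit_signal
}
```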


@@ -7,25 +7,26 @@ edition = "2018"
 [dev-dependencies]
 eth1_test_rig = { path = "../../tests/eth1_test_rig" }
 environment = { path = "../../lighthouse/environment" }
-toml = "^0.5"
+toml = "0.5.6"
 web3 = "0.10.0"
+sloggers = "1.0.0"

 [dependencies]
-reqwest = "0.9"
-futures = "0.1.25"
-serde_json = "1.0"
-serde = { version = "1.0", features = ["derive"] }
-hex = "0.3"
+reqwest = "0.10.4"
+futures = { version = "0.3.5", features = ["compat"] }
+serde_json = "1.0.52"
+serde = { version = "1.0.110", features = ["derive"] }
+hex = "0.4.2"
 types = { path = "../../eth2/types"}
 merkle_proof = { path = "../../eth2/utils/merkle_proof"}
-eth2_ssz = { path = "../../eth2/utils/ssz"}
+eth2_ssz = "0.1.2"
 eth2_ssz_derive = "0.1.0"
-tree_hash = { path = "../../eth2/utils/tree_hash"}
-eth2_hashing = { path = "../../eth2/utils/eth2_hashing"}
-parking_lot = "0.7"
-slog = "^2.2.3"
-tokio = "0.1.22"
+tree_hash = "0.1.0"
+eth2_hashing = "0.1.0"
+parking_lot = "0.10.2"
+slog = "2.5.2"
+tokio = { version = "0.2.20", features = ["full"] }
 state_processing = { path = "../../eth2/state_processing" }
-libflate = "0.1"
+libflate = "1.0.0"
 lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics"}
 lazy_static = "1.4.0"


@@ -10,8 +10,8 @@
 //!
 //! There is no ABI parsing here, all function signatures and topics are hard-coded as constants.

-use futures::{Future, Stream};
-use reqwest::{header::CONTENT_TYPE, r#async::ClientBuilder, StatusCode};
+use futures::future::TryFutureExt;
+use reqwest::{header::CONTENT_TYPE, ClientBuilder, StatusCode};
 use serde_json::{json, Value};
 use std::ops::Range;
 use std::time::Duration;
@@ -40,80 +40,73 @@ pub struct Block {
 /// Returns the current block number.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_block_number(
-    endpoint: &str,
-    timeout: Duration,
-) -> impl Future<Item = u64, Error = String> {
-    send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout)
-        .and_then(|response_body| {
-            hex_to_u64_be(
-                response_result(&response_body)?
-                    .ok_or_else(|| "No result field was returned for block number".to_string())?
-                    .as_str()
-                    .ok_or_else(|| "Data was not string")?,
-            )
-        })
-        .map_err(|e| format!("Failed to get block number: {}", e))
+pub async fn get_block_number(endpoint: &str, timeout: Duration) -> Result<u64, String> {
+    let response_body = send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout).await?;
+    hex_to_u64_be(
+        response_result(&response_body)?
+            .ok_or_else(|| "No result field was returned for block number".to_string())?
+            .as_str()
+            .ok_or_else(|| "Data was not string")?,
+    )
+    .map_err(|e| format!("Failed to get block number: {}", e))
 }

 /// Gets a block hash by block number.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_block(
+pub async fn get_block(
     endpoint: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Block, Error = String> {
+) -> Result<Block, String> {
     let params = json!([
         format!("0x{:x}", block_number),
         false // do not return full tx objects.
     ]);

-    send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout)
-        .and_then(|response_body| {
-            let hash = hex_to_bytes(
-                response_result(&response_body)?
-                    .ok_or_else(|| "No result field was returned for block".to_string())?
-                    .get("hash")
-                    .ok_or_else(|| "No hash for block")?
-                    .as_str()
-                    .ok_or_else(|| "Block hash was not string")?,
-            )?;
-            let hash = if hash.len() == 32 {
-                Ok(Hash256::from_slice(&hash))
-            } else {
-                Err(format!("Block has was not 32 bytes: {:?}", hash))
-            }?;
+    let response_body = send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout).await?;
+    let hash = hex_to_bytes(
+        response_result(&response_body)?
+            .ok_or_else(|| "No result field was returned for block".to_string())?
+            .get("hash")
+            .ok_or_else(|| "No hash for block")?
+            .as_str()
+            .ok_or_else(|| "Block hash was not string")?,
+    )?;
+    let hash = if hash.len() == 32 {
+        Ok(Hash256::from_slice(&hash))
+    } else {
+        Err(format!("Block has was not 32 bytes: {:?}", hash))
+    }?;

     let timestamp = hex_to_u64_be(
         response_result(&response_body)?
             .ok_or_else(|| "No result field was returned for timestamp".to_string())?
             .get("timestamp")
             .ok_or_else(|| "No timestamp for block")?
             .as_str()
             .ok_or_else(|| "Block timestamp was not string")?,
     )?;

     let number = hex_to_u64_be(
         response_result(&response_body)?
             .ok_or_else(|| "No result field was returned for number".to_string())?
             .get("number")
             .ok_or_else(|| "No number for block")?
             .as_str()
             .ok_or_else(|| "Block number was not string")?,
     )?;

     if number <= usize::max_value() as u64 {
         Ok(Block {
             hash,
             timestamp,
             number,
-            })
-        } else {
-            Err(format!("Block number {} is larger than a usize", number))
-        }
-    })
-    .map_err(|e| format!("Failed to get block number: {}", e))
+        })
+    } else {
+        Err(format!("Block number {} is larger than a usize", number))
+    }
+    .map_err(|e| format!("Failed to get block number: {}", e))
 }

 /// Returns the value of the `get_deposit_count()` call at the given `address` for the given
@@ -122,20 +115,21 @@ pub fn get_block(
 /// Assumes that the `address` has the same ABI as the eth2 deposit contract.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_deposit_count(
+pub async fn get_deposit_count(
     endpoint: &str,
     address: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Option<u64>, Error = String> {
-    call(
+) -> Result<Option<u64>, String> {
+    let result = call(
         endpoint,
         address,
         DEPOSIT_COUNT_FN_SIGNATURE,
         block_number,
         timeout,
     )
-    .and_then(|result| match result {
+    .await?;
+    match result {
         None => Err("Deposit root response was none".to_string()),
         Some(bytes) => {
             if bytes.is_empty() {
@@ -151,7 +145,7 @@ pub fn get_deposit_count(
                 ))
             }
         }
-    })
+    }
 }

 /// Returns the value of the `get_hash_tree_root()` call at the given `block_number`.
@@ -159,20 +153,21 @@ pub fn get_deposit_count(
 /// Assumes that the `address` has the same ABI as the eth2 deposit contract.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_deposit_root(
+pub async fn get_deposit_root(
     endpoint: &str,
     address: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Option<Hash256>, Error = String> {
-    call(
+) -> Result<Option<Hash256>, String> {
+    let result = call(
         endpoint,
         address,
         DEPOSIT_ROOT_FN_SIGNATURE,
         block_number,
         timeout,
     )
-    .and_then(|result| match result {
+    .await?;
+    match result {
         None => Err("Deposit root response was none".to_string()),
         Some(bytes) => {
             if bytes.is_empty() {
@@ -186,7 +181,7 @@ pub fn get_deposit_root(
                 ))
             }
         }
-    })
+    }
 }

 /// Performs a instant, no-transaction call to the contract `address` with the given `0x`-prefixed
@@ -195,13 +190,13 @@ pub fn get_deposit_root(
 /// Returns bytes, if any.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-fn call(
+async fn call(
     endpoint: &str,
     address: &str,
     hex_data: &str,
     block_number: u64,
     timeout: Duration,
-) -> impl Future<Item = Option<Vec<u8>>, Error = String> {
+) -> Result<Option<Vec<u8>>, String> {
     let params = json! ([
         {
             "to": address,
@@ -210,19 +205,18 @@ fn call(
         format!("0x{:x}", block_number)
     ]);

-    send_rpc_request(endpoint, "eth_call", params, timeout).and_then(|response_body| {
-        match response_result(&response_body)? {
-            None => Ok(None),
-            Some(result) => {
-                let hex = result
-                    .as_str()
-                    .map(|s| s.to_string())
-                    .ok_or_else(|| "'result' value was not a string".to_string())?;
-
-                Ok(Some(hex_to_bytes(&hex)?))
-            }
-        }
-    })
+    let response_body = send_rpc_request(endpoint, "eth_call", params, timeout).await?;
+    match response_result(&response_body)? {
+        None => Ok(None),
+        Some(result) => {
+            let hex = result
+                .as_str()
+                .map(|s| s.to_string())
+                .ok_or_else(|| "'result' value was not a string".to_string())?;

+            Ok(Some(hex_to_bytes(&hex)?))
+        }
+    }
 }

 /// A reduced set of fields from an Eth1 contract log.
@@ -238,12 +232,12 @@ pub struct Log {
 /// It's not clear from the Ethereum JSON-RPC docs if this range is inclusive or not.
 ///
 /// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
-pub fn get_deposit_logs_in_range(
+pub async fn get_deposit_logs_in_range(
     endpoint: &str,
     address: &str,
     block_height_range: Range<u64>,
     timeout: Duration,
-) -> impl Future<Item = Vec<Log>, Error = String> {
+) -> Result<Vec<Log>, String> {
     let params = json! ([{
         "address": address,
         "topics": [DEPOSIT_EVENT_TOPIC],
@@ -251,46 +245,44 @@ pub fn get_deposit_logs_in_range(
         "toBlock": format!("0x{:x}", block_height_range.end),
     }]);

-    send_rpc_request(endpoint, "eth_getLogs", params, timeout)
-        .and_then(|response_body| {
-            response_result(&response_body)?
-                .ok_or_else(|| "No result field was returned for deposit logs".to_string())?
-                .as_array()
-                .cloned()
-                .ok_or_else(|| "'result' value was not an array".to_string())?
-                .into_iter()
-                .map(|value| {
-                    let block_number = value
-                        .get("blockNumber")
-                        .ok_or_else(|| "No block number field in log")?
-                        .as_str()
-                        .ok_or_else(|| "Block number was not string")?;
-
-                    let data = value
-                        .get("data")
-                        .ok_or_else(|| "No block number field in log")?
-                        .as_str()
-                        .ok_or_else(|| "Data was not string")?;
-
-                    Ok(Log {
-                        block_number: hex_to_u64_be(&block_number)?,
-                        data: hex_to_bytes(data)?,
-                    })
-                })
-                .collect::<Result<Vec<Log>, String>>()
-        })
-        .map_err(|e| format!("Failed to get logs in range: {}", e))
+    let response_body = send_rpc_request(endpoint, "eth_getLogs", params, timeout).await?;
+    response_result(&response_body)?
+        .ok_or_else(|| "No result field was returned for deposit logs".to_string())?
+        .as_array()
+        .cloned()
+        .ok_or_else(|| "'result' value was not an array".to_string())?
+        .into_iter()
+        .map(|value| {
+            let block_number = value
+                .get("blockNumber")
+                .ok_or_else(|| "No block number field in log")?
+                .as_str()
+                .ok_or_else(|| "Block number was not string")?;
+
+            let data = value
+                .get("data")
+                .ok_or_else(|| "No block number field in log")?
+                .as_str()
+                .ok_or_else(|| "Data was not string")?;
+
+            Ok(Log {
+                block_number: hex_to_u64_be(&block_number)?,
+                data: hex_to_bytes(data)?,
+            })
+        })
+        .collect::<Result<Vec<Log>, String>>()
+        .map_err(|e| format!("Failed to get logs in range: {}", e))
 }

 /// Sends an RPC request to `endpoint`, using a POST with the given `body`.
 ///
 /// Tries to receive the response and parse the body as a `String`.
-pub fn send_rpc_request(
+pub async fn send_rpc_request(
     endpoint: &str,
     method: &str,
     params: Value,
     timeout: Duration,
-) -> impl Future<Item = String, Error = String> {
+) -> Result<String, String> {
     let body = json! ({
         "jsonrpc": "2.0",
         "method": method,
@@ -303,7 +295,7 @@ pub fn send_rpc_request(
     //
     // A better solution would be to create some struct that contains a built client and pass it
     // around (similar to the `web3` crate's `Transport` structs).
-    ClientBuilder::new()
+    let response = ClientBuilder::new()
         .timeout(timeout)
         .build()
         .expect("The builder should always build a client")
@@ -312,43 +304,32 @@ pub fn send_rpc_request(
         .body(body)
         .send()
         .map_err(|e| format!("Request failed: {:?}", e))
-        .and_then(|response| {
-            if response.status() != StatusCode::OK {
-                Err(format!(
-                    "Response HTTP status was not 200 OK: {}.",
-                    response.status()
-                ))
-            } else {
-                Ok(response)
-            }
-        })
-        .and_then(|response| {
-            response
-                .headers()
-                .get(CONTENT_TYPE)
-                .ok_or_else(|| "No content-type header in response".to_string())
-                .and_then(|encoding| {
-                    encoding
-                        .to_str()
-                        .map(|s| s.to_string())
-                        .map_err(|e| format!("Failed to parse content-type header: {}", e))
-                })
-                .map(|encoding| (response, encoding))
-        })
-        .and_then(|(response, encoding)| {
-            response
-                .into_body()
-                .concat2()
-                .map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
-                .map_err(|e| format!("Failed to receive body: {:?}", e))
-                .and_then(move |bytes| match encoding.as_str() {
-                    "application/json" => Ok(bytes),
-                    "application/json; charset=utf-8" => Ok(bytes),
-                    other => Err(format!("Unsupported encoding: {}", other)),
-                })
-                .map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
-                .map_err(|e| format!("Failed to receive body: {:?}", e))
-        })
+        .await?;
+
+    if response.status() != StatusCode::OK {
+        return Err(format!(
+            "Response HTTP status was not 200 OK: {}.",
+            response.status()
+        ));
+    };
+
+    let encoding = response
+        .headers()
+        .get(CONTENT_TYPE)
+        .ok_or_else(|| "No content-type header in response".to_string())?
+        .to_str()
+        .map(|s| s.to_string())
+        .map_err(|e| format!("Failed to parse content-type header: {}", e))?;
+
+    response
+        .bytes()
+        .map_err(|e| format!("Failed to receive body: {:?}", e))
+        .await
+        .and_then(move |bytes| match encoding.as_str() {
+            "application/json" => Ok(bytes),
+            "application/json; charset=utf-8" => Ok(bytes),
+            other => Err(format!("Unsupported encoding: {}", other)),
+        })
+        .map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
+        .map_err(|e| format!("Failed to receive body: {:?}", e))
 }

 /// Accepts an entire HTTP body (as a string) and returns the `result` field, as a serde `Value`.
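After the port, every helper awaits `send_rpc_request` directly instead of boxing combinator chains. For reference, a self-contained sketch of the same POST-and-read pattern against reqwest 0.10's async client (the method and error strings are illustrative, not the crate's exact code):

```rust
use std::time::Duration;

// Sends a JSON-RPC `eth_blockNumber` request and returns the raw body.
async fn rpc_block_number(endpoint: &str, timeout: Duration) -> Result<String, String> {
    let body = serde_json::json!({
        "jsonrpc": "2.0",
        "method": "eth_blockNumber",
        "params": [],
        "id": 1
    })
    .to_string();

    let response = reqwest::ClientBuilder::new()
        .timeout(timeout)
        .build()
        .map_err(|e| format!("Failed to build client: {:?}", e))?
        .post(endpoint)
        .header(reqwest::header::CONTENT_TYPE, "application/json")
        .body(body)
        .send()
        .await
        .map_err(|e| format!("Request failed: {:?}", e))?;

    // reqwest 0.10's `text()` is itself async, so it is awaited too.
    response
        .text()
        .await
        .map_err(|e| format!("Failed to read body: {:?}", e))
}
```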


@ -2,21 +2,18 @@ use crate::metrics;
use crate::{ use crate::{
block_cache::{BlockCache, Error as BlockCacheError, Eth1Block}, block_cache::{BlockCache, Error as BlockCacheError, Eth1Block},
deposit_cache::Error as DepositCacheError, deposit_cache::Error as DepositCacheError,
http::{get_block, get_block_number, get_deposit_logs_in_range}, http::{get_block, get_block_number, get_deposit_logs_in_range, Log},
inner::{DepositUpdater, Inner}, inner::{DepositUpdater, Inner},
DepositLog, DepositLog,
}; };
use futures::{ use futures::{future::TryFutureExt, stream, stream::TryStreamExt, StreamExt};
future::{loop_fn, Loop},
stream, Future, Stream,
};
use parking_lot::{RwLock, RwLockReadGuard}; use parking_lot::{RwLock, RwLockReadGuard};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use slog::{debug, error, info, trace, Logger}; use slog::{debug, error, info, trace, Logger};
use std::ops::{Range, RangeInclusive}; use std::ops::{Range, RangeInclusive};
use std::sync::Arc; use std::sync::Arc;
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH}; use std::time::{SystemTime, UNIX_EPOCH};
use tokio::timer::Delay; use tokio::time::{interval_at, Duration, Instant};
const STANDARD_TIMEOUT_MILLIS: u64 = 15_000; const STANDARD_TIMEOUT_MILLIS: u64 = 15_000;
@ -241,63 +238,40 @@ impl Service {
/// - Err(_) if there is an error. /// - Err(_) if there is an error.
/// ///
/// Emits logs for debugging and errors. /// Emits logs for debugging and errors.
pub fn update( pub async fn update(
&self, service: Self,
) -> impl Future<Item = (DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), Error = String> ) -> Result<(DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), String> {
{ let update_deposit_cache = async {
let log_a = self.log.clone(); let outcome = Service::update_deposit_cache(service.clone())
let log_b = self.log.clone(); .await
let inner_1 = self.inner.clone(); .map_err(|e| format!("Failed to update eth1 cache: {:?}", e))?;
let inner_2 = self.inner.clone();
let deposit_future = self trace!(
.update_deposit_cache() service.log,
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e)) "Updated eth1 deposit cache";
.then(move |result| { "cached_deposits" => service.inner.deposit_cache.read().cache.len(),
match &result { "logs_imported" => outcome.logs_imported,
Ok(DepositCacheUpdateOutcome { logs_imported }) => trace!( "last_processed_eth1_block" => service.inner.deposit_cache.read().last_processed_block,
log_a, );
"Updated eth1 deposit cache"; Ok(outcome)
"cached_deposits" => inner_1.deposit_cache.read().cache.len(), };
"logs_imported" => logs_imported,
"last_processed_eth1_block" => inner_1.deposit_cache.read().last_processed_block,
),
Err(e) => error!(
log_a,
"Failed to update eth1 deposit cache";
"error" => e
),
};
result let update_block_cache = async {
}); let outcome = Service::update_block_cache(service.clone())
.await
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e))?;
let block_future = self trace!(
.update_block_cache() service.log,
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e)) "Updated eth1 block cache";
.then(move |result| { "cached_blocks" => service.inner.block_cache.read().len(),
match &result { "blocks_imported" => outcome.blocks_imported,
Ok(BlockCacheUpdateOutcome { "head_block" => outcome.head_block_number,
blocks_imported, );
head_block_number, Ok(outcome)
}) => trace!( };
log_b,
"Updated eth1 block cache";
"cached_blocks" => inner_2.block_cache.read().len(),
"blocks_imported" => blocks_imported,
"head_block" => head_block_number,
),
Err(e) => error!(
log_b,
"Failed to update eth1 block cache";
"error" => e
),
};
result futures::try_join!(update_deposit_cache, update_block_cache)
});
deposit_future.join(block_future)
} }
/// A looping future that updates the cache, then waits `config.auto_update_interval` before /// A looping future that updates the cache, then waits `config.auto_update_interval` before
@ -309,56 +283,42 @@ impl Service {
/// - Err(_) if there is an error. /// - Err(_) if there is an error.
/// ///
/// Emits logs for debugging and errors. /// Emits logs for debugging and errors.
pub fn auto_update( pub fn auto_update(service: Self, exit: tokio::sync::oneshot::Receiver<()>) {
&self, let update_interval = Duration::from_millis(service.config().auto_update_interval_millis);
exit: tokio::sync::oneshot::Receiver<()>,
) -> impl Future<Item = (), Error = ()> {
let service = self.clone();
let log = self.log.clone();
let update_interval = Duration::from_millis(self.config().auto_update_interval_millis);
let loop_future = loop_fn((), move |()| { let mut interval = interval_at(Instant::now(), update_interval);
let service = service.clone();
let log_a = log.clone();
let log_b = log.clone();
service let update_future = async move {
.update() while interval.next().await.is_some() {
.then(move |update_result| { Service::do_update(service.clone(), update_interval)
match update_result { .await
Err(e) => error!( .ok();
log_a, }
"Failed to update eth1 cache"; };
"retry_millis" => update_interval.as_millis(),
"error" => e,
),
Ok((deposit, block)) => debug!(
log_a,
"Updated eth1 cache";
"retry_millis" => update_interval.as_millis(),
"blocks" => format!("{:?}", block),
"deposits" => format!("{:?}", deposit),
),
};
// Do not break the loop if there is an update failure. let future = futures::future::select(Box::pin(update_future), exit);
Ok(())
})
.and_then(move |_| Delay::new(Instant::now() + update_interval))
.then(move |timer_result| {
if let Err(e) = timer_result {
error!(
log_b,
"Failed to trigger eth1 cache update delay";
"error" => format!("{:?}", e),
);
}
// Do not break the loop if there is an timer failure.
Ok(Loop::Continue(()))
})
});
loop_future.select(exit).map(|_| ()).map_err(|_| ()) tokio::task::spawn(future);
}
async fn do_update(service: Self, update_interval: Duration) -> Result<(), ()> {
let update_result = Service::update(service.clone()).await;
match update_result {
Err(e) => error!(
service.log,
"Failed to update eth1 cache";
"retry_millis" => update_interval.as_millis(),
"error" => e,
),
Ok((deposit, block)) => debug!(
service.log,
"Updated eth1 cache";
"retry_millis" => update_interval.as_millis(),
"blocks" => format!("{:?}", block),
"deposits" => format!("{:?}", deposit),
),
};
Ok(())
} }
/// Contacts the remote eth1 node and attempts to import deposit logs up to the configured /// Contacts the remote eth1 node and attempts to import deposit logs up to the configured
@ -373,135 +333,126 @@ impl Service {
/// - Err(_) if there is an error. /// - Err(_) if there is an error.
/// ///
/// Emits logs for debugging and errors. /// Emits logs for debugging and errors.
pub fn update_deposit_cache( pub async fn update_deposit_cache(service: Self) -> Result<DepositCacheUpdateOutcome, Error> {
&self, let endpoint = service.config().endpoint.clone();
) -> impl Future<Item = DepositCacheUpdateOutcome, Error = Error> { let follow_distance = service.config().follow_distance;
let service_1 = self.clone(); let deposit_contract_address = service.config().deposit_contract_address.clone();
let service_2 = self.clone();
let service_3 = self.clone(); let blocks_per_log_query = service.config().blocks_per_log_query;
let blocks_per_log_query = self.config().blocks_per_log_query; let max_log_requests_per_update = service
let max_log_requests_per_update = self
.config() .config()
.max_log_requests_per_update .max_log_requests_per_update
.unwrap_or_else(usize::max_value); .unwrap_or_else(usize::max_value);
let next_required_block = self let next_required_block = service
.deposits() .deposits()
.read() .read()
.last_processed_block .last_processed_block
.map(|n| n + 1) .map(|n| n + 1)
.unwrap_or_else(|| self.config().deposit_contract_deploy_block); .unwrap_or_else(|| service.config().deposit_contract_deploy_block);
get_new_block_numbers( let range = get_new_block_numbers(&endpoint, next_required_block, follow_distance).await?;
&self.config().endpoint,
next_required_block, let block_number_chunks = if let Some(range) = range {
self.config().follow_distance,
)
.map(move |range| {
range range
.map(|range| { .collect::<Vec<u64>>()
range .chunks(blocks_per_log_query)
.collect::<Vec<u64>>() .take(max_log_requests_per_update)
.chunks(blocks_per_log_query) .map(|vec| {
.take(max_log_requests_per_update) let first = vec.first().cloned().unwrap_or_else(|| 0);
.map(|vec| { let last = vec.last().map(|n| n + 1).unwrap_or_else(|| 0);
let first = vec.first().cloned().unwrap_or_else(|| 0); first..last
let last = vec.last().map(|n| n + 1).unwrap_or_else(|| 0);
first..last
})
.collect::<Vec<Range<u64>>>()
}) })
.unwrap_or_else(|| vec![]) .collect::<Vec<Range<u64>>>()
}) } else {
.and_then(move |block_number_chunks| { Vec::new()
stream::unfold( };
block_number_chunks.into_iter(),
move |mut chunks| match chunks.next() { let logs: Vec<(Range<u64>, Vec<Log>)> =
stream::try_unfold(block_number_chunks.into_iter(), |mut chunks| async {
match chunks.next() {
Some(chunk) => { Some(chunk) => {
let chunk_1 = chunk.clone(); let chunk_1 = chunk.clone();
Some( match get_deposit_logs_in_range(
get_deposit_logs_in_range( &endpoint,
&service_1.config().endpoint, &deposit_contract_address,
&service_1.config().deposit_contract_address, chunk,
chunk, Duration::from_millis(GET_DEPOSIT_LOG_TIMEOUT_MILLIS),
Duration::from_millis(GET_DEPOSIT_LOG_TIMEOUT_MILLIS),
)
.map_err(Error::GetDepositLogsFailed)
.map(|logs| (chunk_1, logs))
.map(|logs| (logs, chunks)),
) )
.await
{
Ok(logs) => Ok(Some(((chunk_1, logs), chunks))),
Err(e) => Err(Error::GetDepositLogsFailed(e)),
}
} }
None => None, None => Ok(None),
},
)
.fold(0, move |mut sum, (block_range, log_chunk)| {
let mut cache = service_2.deposits().write();
log_chunk
.into_iter()
.map(|raw_log| {
DepositLog::from_log(&raw_log).map_err(|error| {
Error::FailedToParseDepositLog {
block_range: block_range.clone(),
error,
}
})
})
// Return early if any of the logs cannot be parsed.
//
// This costs an additional `collect`, however it enforces that no logs are
// imported if any one of them cannot be parsed.
.collect::<Result<Vec<_>, _>>()?
.into_iter()
.map(|deposit_log| {
cache
.cache
.insert_log(deposit_log)
.map_err(Error::FailedToInsertDeposit)?;
sum += 1;
Ok(())
})
// Returns if a deposit is unable to be added to the cache.
//
// If this error occurs, the cache will no longer be guaranteed to hold either
// none or all of the logs for each block (i.e., they may exist _some_ logs for
// a block, but not _all_ logs for that block). This scenario can cause the
// node to choose an invalid genesis state or propose an invalid block.
.collect::<Result<_, _>>()?;
cache.last_processed_block = Some(block_range.end.saturating_sub(1));
metrics::set_gauge(&metrics::DEPOSIT_CACHE_LEN, cache.cache.len() as i64);
metrics::set_gauge(
&metrics::HIGHEST_PROCESSED_DEPOSIT_BLOCK,
cache.last_processed_block.unwrap_or_else(|| 0) as i64,
);
Ok(sum)
})
.map(move |logs_imported| {
if logs_imported > 0 {
info!(
service_3.log,
"Imported deposit log(s)";
"latest_block" => service_3.inner.deposit_cache.read().cache.latest_block_number(),
"total" => service_3.deposit_cache_len(),
"new" => logs_imported
);
} else {
debug!(
service_3.log,
"No new deposits found";
"latest_block" => service_3.inner.deposit_cache.read().cache.latest_block_number(),
"total_deposits" => service_3.deposit_cache_len(),
);
} }
DepositCacheUpdateOutcome { logs_imported }
}) })
}) .try_collect()
.await?;
let mut logs_imported = 0;
for (block_range, log_chunk) in logs.iter() {
let mut cache = service.deposits().write();
log_chunk
.into_iter()
.map(|raw_log| {
DepositLog::from_log(&raw_log).map_err(|error| Error::FailedToParseDepositLog {
block_range: block_range.clone(),
error,
})
})
// Return early if any of the logs cannot be parsed.
//
// This costs an additional `collect`, however it enforces that no logs are
// imported if any one of them cannot be parsed.
.collect::<Result<Vec<_>, _>>()?
.into_iter()
.map(|deposit_log| {
cache
.cache
.insert_log(deposit_log)
.map_err(Error::FailedToInsertDeposit)?;
logs_imported += 1;
Ok(())
})
// Returns if a deposit is unable to be added to the cache.
//
// If this error occurs, the cache will no longer be guaranteed to hold either
// none or all of the logs for each block (i.e., they may exist _some_ logs for
// a block, but not _all_ logs for that block). This scenario can cause the
// node to choose an invalid genesis state or propose an invalid block.
.collect::<Result<_, _>>()?;
cache.last_processed_block = Some(block_range.end.saturating_sub(1));
metrics::set_gauge(&metrics::DEPOSIT_CACHE_LEN, cache.cache.len() as i64);
metrics::set_gauge(
&metrics::HIGHEST_PROCESSED_DEPOSIT_BLOCK,
cache.last_processed_block.unwrap_or_else(|| 0) as i64,
);
}
if logs_imported > 0 {
info!(
service.log,
"Imported deposit log(s)";
"latest_block" => service.inner.deposit_cache.read().cache.latest_block_number(),
"total" => service.deposit_cache_len(),
"new" => logs_imported
);
} else {
debug!(
service.log,
"No new deposits found";
"latest_block" => service.inner.deposit_cache.read().cache.latest_block_number(),
"total_deposits" => service.deposit_cache_len(),
);
}
Ok(DepositCacheUpdateOutcome { logs_imported })
} }
/// Contacts the remote eth1 node and attempts to import all blocks up to the configured /// Contacts the remote eth1 node and attempts to import all blocks up to the configured
@ -515,218 +466,249 @@ impl Service {
/// - Err(_) if there is an error. /// - Err(_) if there is an error.
/// ///
/// Emits logs for debugging and errors. /// Emits logs for debugging and errors.
pub fn update_block_cache(&self) -> impl Future<Item = BlockCacheUpdateOutcome, Error = Error> { pub async fn update_block_cache(service: Self) -> Result<BlockCacheUpdateOutcome, Error> {
let cache_1 = self.inner.clone(); let block_cache_truncation = service.config().block_cache_truncation;
let cache_2 = self.inner.clone(); let max_blocks_per_update = service
let cache_3 = self.inner.clone();
let cache_4 = self.inner.clone();
let cache_5 = self.inner.clone();
let cache_6 = self.inner.clone();
let service_1 = self.clone();
let block_cache_truncation = self.config().block_cache_truncation;
let max_blocks_per_update = self
.config() .config()
.max_blocks_per_update .max_blocks_per_update
.unwrap_or_else(usize::max_value); .unwrap_or_else(usize::max_value);
let next_required_block = cache_1 let next_required_block = service
.inner
.block_cache .block_cache
.read() .read()
.highest_block_number() .highest_block_number()
.map(|n| n + 1) .map(|n| n + 1)
.unwrap_or_else(|| self.config().lowest_cached_block_number); .unwrap_or_else(|| service.config().lowest_cached_block_number);
get_new_block_numbers( let endpoint = service.config().endpoint.clone();
&self.config().endpoint, let follow_distance = service.config().follow_distance;
next_required_block,
self.config().follow_distance, let range = get_new_block_numbers(&endpoint, next_required_block, follow_distance).await?;
)
// Map the range of required blocks into a Vec. // Map the range of required blocks into a Vec.
// //
// If the required range is larger than the size of the cache, drop the exiting cache // If the required range is larger than the size of the cache, drop the exiting cache
// because it's exipred and just download enough blocks to fill the cache. // because it's exipred and just download enough blocks to fill the cache.
.and_then(move |range| { let required_block_numbers = if let Some(range) = range {
range if range.start() > range.end() {
.map(|range| { // Note: this check is not strictly necessary, however it remains to safe
if range.start() > range.end() { // guard against any regression which may cause an underflow in a following
// Note: this check is not strictly necessary, however it remains to safe // subtraction operation.
// guard against any regression which may cause an underflow in a following return Err(Error::Internal("Range was not increasing".into()));
// subtraction operation. } else {
Err(Error::Internal("Range was not increasing".into())) let range_size = range.end() - range.start();
} else { let max_size = block_cache_truncation
let range_size = range.end() - range.start(); .map(|n| n as u64)
let max_size = block_cache_truncation .unwrap_or_else(u64::max_value);
.map(|n| n as u64) if range_size > max_size {
.unwrap_or_else(u64::max_value); // If the range of required blocks is larger than `max_size`, drop all
if range_size > max_size { // existing blocks and download `max_size` count of blocks.
// If the range of required blocks is larger than `max_size`, drop all let first_block = range.end() - max_size;
// existing blocks and download `max_size` count of blocks. (*service.inner.block_cache.write()) = BlockCache::default();
let first_block = range.end() - max_size; (first_block..=*range.end()).collect::<Vec<u64>>()
(*cache_5.block_cache.write()) = BlockCache::default(); } else {
Ok((first_block..=*range.end()).collect::<Vec<u64>>()) range.collect::<Vec<u64>>()
} else { }
Ok(range.collect::<Vec<u64>>()) }
} else {
Vec::new()
};
// Download the range of blocks and sequentially import them into the cache.
// Last processed block in deposit cache
let latest_in_cache = service
.inner
.deposit_cache
.read()
.last_processed_block
.unwrap_or(0);
let required_block_numbers = required_block_numbers
.into_iter()
.filter(|x| *x <= latest_in_cache)
.take(max_blocks_per_update)
.collect::<Vec<_>>();
// Produce a stream from the list of required block numbers and return a future that
// consumes the it.
let eth1_blocks: Vec<Eth1Block> = stream::try_unfold(
required_block_numbers.into_iter(),
|mut block_numbers| async {
match block_numbers.next() {
Some(block_number) => {
match download_eth1_block(service.inner.clone(), block_number).await {
Ok(eth1_block) => Ok(Some((eth1_block, block_numbers))),
Err(e) => Err(e),
} }
} }
}) None => Ok(None),
.unwrap_or_else(|| Ok(vec![])) }
}) },
// Download the range of blocks and sequentially import them into the cache. )
.and_then(move |required_block_numbers| { .try_collect()
// Last processed block in deposit cache .await?;
let latest_in_cache = cache_6
.deposit_cache
.read()
.last_processed_block
.unwrap_or(0);
let required_block_numbers = required_block_numbers let mut blocks_imported = 0;
.into_iter() for eth1_block in eth1_blocks {
.filter(|x| *x <= latest_in_cache) service
.take(max_blocks_per_update) .inner
.collect::<Vec<_>>(); .block_cache
// Produce a stream from the list of required block numbers and return a future that .write()
// consumes the it. .insert_root_or_child(eth1_block)
stream::unfold( .map_err(Error::FailedToInsertEth1Block)?;
required_block_numbers.into_iter(),
move |mut block_numbers| match block_numbers.next() {
Some(block_number) => Some(
download_eth1_block(cache_2.clone(), block_number)
.map(|v| (v, block_numbers)),
),
None => None,
},
)
.fold(0, move |sum, eth1_block| {
cache_3
.block_cache
.write()
.insert_root_or_child(eth1_block)
.map_err(Error::FailedToInsertEth1Block)?;
metrics::set_gauge(
&metrics::BLOCK_CACHE_LEN,
cache_3.block_cache.read().len() as i64,
);
metrics::set_gauge(
&metrics::LATEST_CACHED_BLOCK_TIMESTAMP,
cache_3
.block_cache
.read()
.latest_block_timestamp()
.unwrap_or_else(|| 0) as i64,
);
Ok(sum + 1)
})
})
.and_then(move |blocks_imported| {
// Prune the block cache, preventing it from growing too large.
cache_4.prune_blocks();
metrics::set_gauge( metrics::set_gauge(
&metrics::BLOCK_CACHE_LEN, &metrics::BLOCK_CACHE_LEN,
cache_4.block_cache.read().len() as i64, service.inner.block_cache.read().len() as i64,
);
metrics::set_gauge(
&metrics::LATEST_CACHED_BLOCK_TIMESTAMP,
service
.inner
.block_cache
.read()
.latest_block_timestamp()
.unwrap_or_else(|| 0) as i64,
); );
let block_cache = service_1.inner.block_cache.read(); blocks_imported += 1;
let latest_block_mins = block_cache }
.latest_block_timestamp()
.and_then(|timestamp| {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.ok()
.and_then(|now| now.checked_sub(Duration::from_secs(timestamp)))
})
.map(|duration| format!("{} mins", duration.as_secs() / 60))
.unwrap_or_else(|| "n/a".into());
if blocks_imported > 0 { // Prune the block cache, preventing it from growing too large.
debug!( service.inner.prune_blocks();
service_1.log,
"Imported eth1 block(s)";
"latest_block_age" => latest_block_mins,
"latest_block" => block_cache.highest_block_number(),
"total_cached_blocks" => block_cache.len(),
"new" => blocks_imported
);
} else {
debug!(
service_1.log,
"No new eth1 blocks imported";
"latest_block" => block_cache.highest_block_number(),
"cached_blocks" => block_cache.len(),
);
}
Ok(BlockCacheUpdateOutcome { metrics::set_gauge(
blocks_imported, &metrics::BLOCK_CACHE_LEN,
head_block_number: cache_4.block_cache.read().highest_block_number(), service.inner.block_cache.read().len() as i64,
);
let block_cache = service.inner.block_cache.read();
let latest_block_mins = block_cache
.latest_block_timestamp()
.and_then(|timestamp| {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.ok()
.and_then(|now| now.checked_sub(Duration::from_secs(timestamp)))
}) })
.map(|duration| format!("{} mins", duration.as_secs() / 60))
.unwrap_or_else(|| "n/a".into());
if blocks_imported > 0 {
info!(
service.log,
"Imported eth1 block(s)";
"latest_block_age" => latest_block_mins,
"latest_block" => block_cache.highest_block_number(),
"total_cached_blocks" => block_cache.len(),
"new" => blocks_imported
);
} else {
debug!(
service.log,
"No new eth1 blocks imported";
"latest_block" => block_cache.highest_block_number(),
"cached_blocks" => block_cache.len(),
);
}
let block_cache = service.inner.block_cache.read();
let latest_block_mins = block_cache
.latest_block_timestamp()
.and_then(|timestamp| {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.ok()
.and_then(|now| now.checked_sub(Duration::from_secs(timestamp)))
})
.map(|duration| format!("{} mins", duration.as_secs() / 60))
.unwrap_or_else(|| "n/a".into());
if blocks_imported > 0 {
debug!(
service.log,
"Imported eth1 block(s)";
"latest_block_age" => latest_block_mins,
"latest_block" => block_cache.highest_block_number(),
"total_cached_blocks" => block_cache.len(),
"new" => blocks_imported
);
} else {
debug!(
service.log,
"No new eth1 blocks imported";
"latest_block" => block_cache.highest_block_number(),
"cached_blocks" => block_cache.len(),
);
}
Ok(BlockCacheUpdateOutcome {
blocks_imported,
head_block_number: service.inner.block_cache.read().highest_block_number(),
}) })
} }
} }
/// Determine the range of blocks that need to be downloaded, given the remote's best block and
/// the locally stored best block.
async fn get_new_block_numbers<'a>(
    endpoint: &str,
    next_required_block: u64,
    follow_distance: u64,
) -> Result<Option<RangeInclusive<u64>>, Error> {
    let remote_highest_block =
        get_block_number(endpoint, Duration::from_millis(BLOCK_NUMBER_TIMEOUT_MILLIS))
            .map_err(Error::GetBlockNumberFailed)
            .await?;
    let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);

    if next_required_block <= remote_follow_block {
        Ok(Some(next_required_block..=remote_follow_block))
    } else if next_required_block > remote_highest_block + 1 {
        // If this is the case, the node must have gone "backwards" in terms of its sync
        // (i.e., its head block is lower than it was before).
        //
        // We assume that the `follow_distance` should be sufficient to ensure this never
        // happens, otherwise it is an error.
        Err(Error::RemoteNotSynced {
            next_required_block,
            remote_highest_block,
            follow_distance,
        })
    } else {
        // Return an empty range.
        Ok(None)
    }
}
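To make the branch conditions above concrete, here is the same range arithmetic extracted into a standalone helper (the function and test names are invented for illustration): with a remote head of 100 and a follow distance of 5, only blocks up to 95 are eligible.

use std::ops::RangeInclusive;

// Hypothetical extraction of the arithmetic in `get_new_block_numbers`.
fn new_block_range(
    next_required_block: u64,
    remote_highest_block: u64,
    follow_distance: u64,
) -> Option<RangeInclusive<u64>> {
    let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
    if next_required_block <= remote_follow_block {
        Some(next_required_block..=remote_follow_block)
    } else {
        // Empty range; the "remote went backwards" error case is elided here.
        None
    }
}

#[test]
fn follow_distance_trims_the_head() {
    assert_eq!(new_block_range(10, 100, 5), Some(10..=95));
    assert_eq!(new_block_range(96, 100, 5), None);
}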
/// Downloads the `(block, deposit_root, deposit_count)` tuple from an eth1 node for the given
/// `block_number`.
///
/// Performs three async calls to an Eth1 HTTP JSON RPC endpoint.
async fn download_eth1_block(cache: Arc<Inner>, block_number: u64) -> Result<Eth1Block, Error> {
    let endpoint = cache.config.read().endpoint.clone();

    let deposit_root = cache
        .deposit_cache
        .read()
        .cache
        .get_deposit_root_from_cache(block_number);

    let deposit_count = cache
        .deposit_cache
        .read()
        .cache
        .get_deposit_count_from_cache(block_number);

    // Performs a `get_blockByNumber` call to an eth1 node.
    let http_block = get_block(
        &endpoint,
        block_number,
        Duration::from_millis(GET_BLOCK_TIMEOUT_MILLIS),
    )
    .map_err(Error::BlockDownloadFailed)
    .await?;

    Ok(Eth1Block {
        hash: http_block.hash,
        number: http_block.number,
        timestamp: http_block.timestamp,

View File

@@ -4,17 +4,23 @@ use eth1::http::{get_deposit_count, get_deposit_logs_in_range, get_deposit_root,
use eth1::{Config, Service};
use eth1::{DepositCache, DepositLog};
use eth1_test_rig::GanacheEth1Instance;
use futures::compat::Future01CompatExt;
use merkle_proof::verify_merkle_proof;
use slog::Logger;
use sloggers::{null::NullLoggerBuilder, Build};
use std::ops::Range;
use std::time::Duration;
use tree_hash::TreeHash;
use types::{DepositData, EthSpec, Hash256, Keypair, MainnetEthSpec, MinimalEthSpec, Signature};
use web3::{transports::Http, Web3};

const DEPOSIT_CONTRACT_TREE_DEPTH: usize = 32;

pub fn null_logger() -> Logger {
    let log_builder = NullLoggerBuilder;
    log_builder.build().expect("should build logger")
}
pub fn new_env() -> Environment<MinimalEthSpec> {
    EnvironmentBuilder::minimal()
        // Use a single thread, so that when all tests are run in parallel they don't have so many

@@ -47,76 +53,65 @@ fn random_deposit_data() -> DepositData {
}
/// Blocking operation to get the deposit logs from the `deposit_contract`.
async fn blocking_deposit_logs(eth1: &GanacheEth1Instance, range: Range<u64>) -> Vec<Log> {
    get_deposit_logs_in_range(
        &eth1.endpoint(),
        &eth1.deposit_contract.address(),
        range,
        timeout(),
    )
    .await
    .expect("should get logs")
}

/// Blocking operation to get the deposit root from the `deposit_contract`.
async fn blocking_deposit_root(eth1: &GanacheEth1Instance, block_number: u64) -> Option<Hash256> {
    get_deposit_root(
        &eth1.endpoint(),
        &eth1.deposit_contract.address(),
        block_number,
        timeout(),
    )
    .await
    .expect("should get deposit root")
}

/// Blocking operation to get the deposit count from the `deposit_contract`.
async fn blocking_deposit_count(eth1: &GanacheEth1Instance, block_number: u64) -> Option<u64> {
    get_deposit_count(
        &eth1.endpoint(),
        &eth1.deposit_contract.address(),
        block_number,
        timeout(),
    )
    .await
    .expect("should get deposit count")
}

async fn get_block_number(web3: &Web3<Http>) -> u64 {
    web3.eth()
        .block_number()
        .compat()
        .await
        .map(|v| v.as_u64())
        .expect("should get block number")
}
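`get_block_number` shows the bridging pattern used wherever `web3` still returns futures-0.1 values: `Future01CompatExt::compat` wraps the old future so it can be awaited. A minimal sketch of that adapter, assuming `futures = { version = "0.3", features = ["compat"] }` and the 0.1 crate renamed in Cargo.toml as `futures01 = { package = "futures", version = "0.1" }`:

use futures::compat::Future01CompatExt;

// Hypothetical futures-0.1 source, standing in for a `web3` call.
fn legacy_block_number() -> impl futures01::Future<Item = u64, Error = ()> {
    futures01::future::ok(42)
}

#[tokio::main]
async fn main() {
    // `.compat()` adapts the 0.1 future to `std::future::Future`,
    // resolving to `Result<Item, Error>` when awaited.
    let number = legacy_block_number().compat().await.expect("should resolve");
    assert_eq!(number, 42);
}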
mod eth1_cache {
    use super::*;

    #[tokio::test]
    async fn simple_scenario() {
        let log = null_logger();

        for follow_distance in 0..2 {
            let eth1 = GanacheEth1Instance::new()
                .await
                .expect("should start eth1 environment");
            let deposit_contract = &eth1.deposit_contract;
            let web3 = eth1.web3();

            let initial_block_number = get_block_number(&web3).await;

            let service = Service::new(
                Config {

@@ -145,20 +140,18 @@
            };
            for _ in 0..blocks {
                eth1.ganache.evm_mine().await.expect("should mine block");
            }

            Service::update_deposit_cache(service.clone())
                .await
                .expect("should update deposit cache");

            Service::update_block_cache(service.clone())
                .await
                .expect("should update block cache");

            Service::update_block_cache(service.clone())
                .await
                .expect("should update cache when nothing has changed");

            assert_eq!(
@@ -178,14 +171,13 @@
        }
    }

    /// Tests the case where we attempt to download more blocks than will fit in the cache.
    #[tokio::test]
    async fn big_skip() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

@@ -196,7 +188,7 @@
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                lowest_cached_block_number: get_block_number(&web3).await,
                follow_distance: 0,
                block_cache_truncation: Some(cache_len),
                ..Config::default()
@@ -207,16 +199,14 @@
        let blocks = cache_len * 2;

        for _ in 0..blocks {
            eth1.ganache.evm_mine().await.expect("should mine block")
        }

        Service::update_deposit_cache(service.clone())
            .await
            .expect("should update deposit cache");

        Service::update_block_cache(service.clone())
            .await
            .expect("should update block cache");

        assert_eq!(
@@ -228,14 +218,12 @@
    /// Tests to ensure that the cache gets pruned when doing multiple downloads smaller than the
    /// cache size.
    #[tokio::test]
    async fn pruning() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();
@@ -246,7 +234,7 @@
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                lowest_cached_block_number: get_block_number(&web3).await,
                follow_distance: 0,
                block_cache_truncation: Some(cache_len),
                ..Config::default()
@@ -254,17 +242,15 @@
            log,
        );

        for _ in 0..4u8 {
            for _ in 0..cache_len / 2 {
                eth1.ganache.evm_mine().await.expect("should mine block")
            }
            Service::update_deposit_cache(service.clone())
                .await
                .expect("should update deposit cache");
            Service::update_block_cache(service.clone())
                .await
                .expect("should update block cache");
        }
@@ -275,16 +261,14 @@
        );
    }

    #[tokio::test]
    async fn double_update() {
        let log = null_logger();

        let n = 16;

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

@@ -293,7 +277,7 @@
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                lowest_cached_block_number: get_block_number(&web3).await,
                follow_distance: 0,
                ..Config::default()
            },
@@ -301,24 +285,18 @@
        );

        for _ in 0..n {
            eth1.ganache.evm_mine().await.expect("should mine block")
        }

        futures::try_join!(
            Service::update_deposit_cache(service.clone()),
            Service::update_deposit_cache(service.clone())
        )
        .expect("should perform two simultaneous updates of deposit cache");

        futures::try_join!(
            Service::update_block_cache(service.clone()),
            Service::update_block_cache(service.clone())
        )
        .expect("should perform two simultaneous updates of block cache");

        assert!(service.block_cache_len() >= n, "should grow the cache");
    }
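`futures::try_join!` is the stable-futures replacement for the old `Future::join` used in these double-update tests: both futures are polled concurrently and the first `Err` short-circuits. A small illustration with hypothetical tasks:

async fn ok_task(v: u32) -> Result<u32, String> {
    Ok(v)
}

async fn failing_task() -> Result<u32, String> {
    Err("boom".into())
}

#[tokio::main]
async fn main() {
    // Both succeed: the tuple of Ok values is returned.
    assert_eq!(futures::try_join!(ok_task(1), ok_task(2)), Ok((1, 2)));
    // One fails: the error is returned instead of a tuple.
    assert!(futures::try_join!(ok_task(1), failing_task()).is_err());
}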
@@ -327,21 +305,19 @@
mod deposit_tree {
    use super::*;

    #[tokio::test]
    async fn updating() {
        let log = null_logger();

        let n = 4;

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let start_block = get_block_number(&web3).await;

        let service = Service::new(
            Config {
@@ -359,16 +335,17 @@
            for deposit in &deposits {
                deposit_contract
                    .deposit(deposit.clone())
                    .await
                    .expect("should perform a deposit");
            }

            Service::update_deposit_cache(service.clone())
                .await
                .expect("should perform update");

            Service::update_deposit_cache(service.clone())
                .await
                .expect("should perform update when nothing has changed");

            let first = n * round;
@@ -400,21 +377,19 @@
        }
    }

    #[tokio::test]
    async fn double_update() {
        let log = null_logger();

        let n = 8;

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let start_block = get_block_number(&web3).await;

        let service = Service::new(
            Config {
@@ -432,32 +407,28 @@
        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
        }

        futures::try_join!(
            Service::update_deposit_cache(service.clone()),
            Service::update_deposit_cache(service.clone())
        )
        .expect("should perform two updates concurrently");

        assert_eq!(service.deposit_cache_len(), n);
    }
    #[tokio::test]
    async fn cache_consistency() {
        let n = 8;

        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();
@@ -468,15 +439,18 @@
        // Perform deposits to the smart contract, recording its state along the way.
        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");

            let block_number = get_block_number(&web3).await;
            deposit_roots.push(
                blocking_deposit_root(&eth1, block_number)
                    .await
                    .expect("should get root if contract exists"),
            );
            deposit_counts.push(
                blocking_deposit_count(&eth1, block_number)
                    .await
                    .expect("should get count if contract exists"),
            );
        }
@@ -484,8 +458,9 @@
        let mut tree = DepositCache::default();

        // Pull all the deposit logs from the contract.
        let block_number = get_block_number(&web3).await;
        let logs: Vec<_> = blocking_deposit_logs(&eth1, 0..block_number)
            .await
            .iter()
            .map(|raw| DepositLog::from_log(raw).expect("should parse deposit log"))
            .inspect(|log| {
@@ -546,64 +521,59 @@
mod http {
    use super::*;

    async fn get_block(eth1: &GanacheEth1Instance, block_number: u64) -> Block {
        eth1::http::get_block(&eth1.endpoint(), block_number, timeout())
            .await
            .expect("should get block number")
    }
    #[tokio::test]
    async fn incrementing_deposits() {
        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let block_number = get_block_number(&web3).await;
        let logs = blocking_deposit_logs(&eth1, 0..block_number).await;
        assert_eq!(logs.len(), 0);

        let mut old_root = blocking_deposit_root(&eth1, block_number).await;
        let mut old_block = get_block(&eth1, block_number).await;
        let mut old_block_number = block_number;

        assert_eq!(
            blocking_deposit_count(&eth1, block_number).await,
            Some(0),
            "should have deposit count zero"
        );
        for i in 1..=8 {
            eth1.ganache
                .increase_time(1)
                .await
                .expect("should be able to increase time on ganache");

            deposit_contract
                .deposit(random_deposit_data())
                .await
                .expect("should perform a deposit");

            // Check the logs.
            let block_number = get_block_number(&web3).await;
            let logs = blocking_deposit_logs(&eth1, 0..block_number).await;
            assert_eq!(logs.len(), i, "the number of logs should be as expected");

            // Check the deposit count.
            assert_eq!(
                blocking_deposit_count(&eth1, block_number).await,
                Some(i as u64),
                "should have a correct deposit count"
            );

            // Check the deposit root.
            let new_root = blocking_deposit_root(&eth1, block_number).await;
            assert_ne!(
                new_root, old_root,
                "deposit root should change with each deposit"
@@ -611,7 +581,7 @@
            old_root = new_root;

            // Check the block hash.
            let new_block = get_block(&eth1, block_number).await;
            assert_ne!(
                new_block.hash, old_block.hash,
                "block hash should change with each deposit"
@@ -647,19 +617,17 @@ mod fast {
    // Adds deposits into deposit cache and matches deposit_count and deposit_root
    // with the deposit count and root computed from the deposit cache.
    #[tokio::test]
    async fn deposit_cache_query() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let now = get_block_number(&web3).await;
        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
@@ -676,16 +644,15 @@
        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();

        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
            // Mine an extra block between deposits to test for corner cases
            eth1.ganache.evm_mine().await.expect("should mine block");
        }

        Service::update_deposit_cache(service.clone())
            .await
            .expect("should perform update");

        assert!(
@@ -693,9 +660,9 @@
            "should have imported n deposits"
        );

        for block_num in 0..=get_block_number(&web3).await {
            let expected_deposit_count = blocking_deposit_count(&eth1, block_num).await;
            let expected_deposit_root = blocking_deposit_root(&eth1, block_num).await;

            let deposit_count = service
                .deposits()
@@ -721,19 +688,17 @@
mod persist {
    use super::*;

    #[tokio::test]
    async fn test_persist_caches() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let now = get_block_number(&web3).await;
        let config = Config {
            endpoint: eth1.endpoint(),
            deposit_contract_address: deposit_contract.address(),
@@ -748,12 +713,13 @@
        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();

        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
        }

        Service::update_deposit_cache(service.clone())
            .await
            .expect("should perform update");

        assert!(

@@ -763,8 +729,8 @@
        let deposit_count = service.deposit_cache_len();

        Service::update_block_cache(service.clone())
            .await
            .expect("should perform update");

        assert!(

View File

@@ -5,38 +5,47 @@ authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"

[dependencies]
hex = "0.4.2"
types = { path = "../../eth2/types" }
hashset_delay = { path = "../../eth2/utils/hashset_delay" }
eth2_ssz_types = { path = "../../eth2/utils/ssz_types" }
serde = { version = "1.0.110", features = ["derive"] }
serde_derive = "1.0.110"
eth2_ssz = "0.1.2"
eth2_ssz_derive = "0.1.0"
slog = { version = "2.5.2", features = ["max_level_trace"] }
version = { path = "../version" }
tokio = { version = "0.2.20", features = ["time"] }
futures = "0.3.5"
error-chain = "0.12.2"
dirs = "2.0.2"
fnv = "1.0.6"
unsigned-varint = { git = "https://github.com/sigp/unsigned-varint", branch = "latest-codecs", features = ["codec"] }
lazy_static = "1.4.0"
lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }
smallvec = "1.4.0"
lru = "0.4.3"
parking_lot = "0.10.2"
sha2 = "0.8.1"
base64 = "0.12.1"
snap = "1.0.0"
void = "1.0.2"
tokio-io-timeout = "0.4.0"
tokio-util = { version = "0.3.1", features = ["codec", "compat"] }
# Patched for quick updates
discv5 = { git = "https://github.com/sigp/discv5", rev = "7b3bd40591b62b8c002ffdb85de008aa9f82e2e5" }
tiny-keccak = "2.0.2"
libp2p-tcp = { version = "0.18.0", default-features = false, features = ["tokio"] }

[dependencies.libp2p]
version = "0.18.1"
default-features = false
features = ["websocket", "identify", "mplex", "yamux", "noise", "secio", "gossipsub", "dns"]

[dev-dependencies]
tokio = { version = "0.2.20", features = ["full"] }
slog-stdlog = "4.0.0"
slog-term = "2.5.0"
slog-async = "2.5.0"
tempdir = "0.3.7"
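A note on the split tokio requirement above: the library itself only needs the `time` feature, while the dev-dependency enables `full` so the `#[tokio::test]` macro (which needs a runtime plus the `macros` feature) works in the test suite. A hedged sketch of what that unlocks, not taken from this crate's tests:

// Relies on dev-dependency `tokio = { version = "0.2.20", features = ["full"] }`.
#[tokio::test]
async fn delay_resolves() {
    let start = std::time::Instant::now();
    tokio::time::delay_for(std::time::Duration::from_millis(10)).await;
    assert!(start.elapsed() >= std::time::Duration::from_millis(10));
}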

View File

@@ -3,20 +3,22 @@ use crate::peer_manager::{PeerManager, PeerManagerEvent};
use crate::rpc::*;
use crate::types::{GossipEncoding, GossipKind, GossipTopic};
use crate::{error, Enr, NetworkConfig, NetworkGlobals, PubsubMessage, TopicHash};
use discv5::Discv5Event;
use futures::prelude::*;
use libp2p::{
    core::identity::Keypair,
    gossipsub::{Gossipsub, GossipsubEvent, MessageId},
    identify::{Identify, IdentifyEvent},
    swarm::{NetworkBehaviourAction, NetworkBehaviourEventProcess, PollParameters},
    NetworkBehaviour, PeerId,
};
use lru::LruCache;
use slog::{crit, debug, o};
use std::{
    marker::PhantomData,
    sync::Arc,
    task::{Context, Poll},
};
use types::{EnrForkId, EthSpec, SubnetId};

const MAX_IDENTIFY_ADDRESSES: usize = 10;
@@ -26,17 +28,17 @@
/// behaviours.
#[derive(NetworkBehaviour)]
#[behaviour(out_event = "BehaviourEvent<TSpec>", poll_method = "poll")]
pub struct Behaviour<TSpec: EthSpec> {
    /// The routing pub-sub mechanism for eth2.
    gossipsub: Gossipsub,
    /// The Eth2 RPC specified in the wire-0 protocol.
    eth2_rpc: RPC<TSpec>,
    /// Keep regular connection to peers and disconnect if absent.
    // TODO: Using id for initial interop. This will be removed by mainnet.
    /// Provides IP addresses and peer information.
    identify: Identify,
    /// Discovery behaviour.
    discovery: Discovery<TSpec>,
    /// The peer manager that keeps track of peer's reputation and status.
    #[behaviour(ignore)]
    peer_manager: PeerManager<TSpec>,
@@ -65,7 +67,7 @@
}

/// Implements the combined behaviour for the libp2p service.
impl<TSpec: EthSpec> Behaviour<TSpec> {
    pub fn new(
        local_key: &Keypair,
        net_conf: &NetworkConfig,
@@ -114,12 +116,12 @@
    }

    /// Obtain a reference to the discovery protocol.
    pub fn discovery(&self) -> &Discovery<TSpec> {
        &self.discovery
    }

    /// Obtain a reference to the gossipsub protocol.
    pub fn gs(&self) -> &Gossipsub {
        &self.gossipsub
    }
@@ -304,8 +306,10 @@
        };

        let event = if is_request {
            debug!(self.log, "Sending Ping"; "request_id" => id, "peer_id" => peer_id.to_string());
            RPCEvent::Request(id, RPCRequest::Ping(ping))
        } else {
            debug!(self.log, "Sending Pong"; "request_id" => id, "peer_id" => peer_id.to_string());
            RPCEvent::Response(id, RPCCodedResponse::Success(RPCResponse::Pong(ping)))
        };

        self.send_rpc(peer_id, event);
@@ -326,12 +330,50 @@
        );
        self.send_rpc(peer_id, metadata_response);
    }

    /// Returns a reference to the peer manager to allow the swarm to notify the manager of peer
    /// status.
    pub fn peer_manager(&mut self) -> &mut PeerManager<TSpec> {
        &mut self.peer_manager
    }

    /* Addressed in the new behaviour. Connections are now maintained at the swarm level.
    /// Notifies the behaviour that a peer has connected.
    pub fn notify_peer_connect(&mut self, peer_id: PeerId, endpoint: ConnectedPoint) {
        match endpoint {
            ConnectedPoint::Dialer { .. } => self.peer_manager.connect_outgoing(&peer_id),
            ConnectedPoint::Listener { .. } => self.peer_manager.connect_ingoing(&peer_id),
        };

        // Find ENR info about a peer if possible.
        if let Some(enr) = self.discovery.enr_of_peer(&peer_id) {
            let bitfield = match enr.bitfield::<TSpec>() {
                Ok(v) => v,
                Err(e) => {
                    warn!(self.log, "Peer has invalid ENR bitfield";
                        "peer_id" => format!("{}", peer_id),
                        "error" => format!("{:?}", e));
                    return;
                }
            };

            // Use this as a baseline, until we get the actual meta-data.
            let meta_data = MetaData {
                seq_number: 0,
                attnets: bitfield,
            };
            // TODO: Shift to the peer manager
            self.network_globals
                .peers
                .write()
                .add_metadata(&peer_id, meta_data);
        }
    }
    */
}
// Implement the NetworkBehaviourEventProcess trait so that we can derive NetworkBehaviour for Behaviour
impl<TSpec: EthSpec> NetworkBehaviourEventProcess<GossipsubEvent> for Behaviour<TSpec> {
    fn inject_event(&mut self, event: GossipsubEvent) {
        match event {
            GossipsubEvent::Message(propagation_source, id, gs_msg) => {

@@ -358,7 +400,7 @@
                        debug!(self.log, "Could not decode gossipsub message"; "error" => format!("{}", e))
                    }
                    Ok(msg) => {
                        debug!(self.log, "A duplicate gossipsub message was received"; "message_source" => format!("{}", gs_msg.source), "propagated_peer" => format!("{}", propagation_source), "message" => format!("{}", msg));
                    }
                }
            }

@@ -372,112 +414,66 @@
        }
    }
}
impl<TSpec: EthSpec> NetworkBehaviourEventProcess<RPCMessage<TSpec>> for Behaviour<TSpec> {
    fn inject_event(&mut self, message: RPCMessage<TSpec>) {
        let peer_id = message.peer_id;
        // The METADATA and PING RPC responses are handled within the behaviour and not
        // propagated.
        // TODO: Improve the RPC types to better handle this logic discrepancy
        match message.event {
            RPCEvent::Request(id, RPCRequest::Ping(ping)) => {
                // inform the peer manager and send the response
                self.peer_manager.ping_request(&peer_id, ping.data);
                // send a ping response
                self.send_ping(id, peer_id, false);
            }
            RPCEvent::Request(id, RPCRequest::MetaData(_)) => {
                // send the requested meta-data
                self.send_meta_data_response(id, peer_id);
            }
            RPCEvent::Response(_, RPCCodedResponse::Success(RPCResponse::Pong(ping))) => {
                self.peer_manager.pong_response(&peer_id, ping.data);
            }
            RPCEvent::Response(_, RPCCodedResponse::Success(RPCResponse::MetaData(meta_data))) => {
                self.peer_manager.meta_data_response(&peer_id, meta_data);
            }
            RPCEvent::Request(_, RPCRequest::Status(_))
            | RPCEvent::Response(_, RPCCodedResponse::Success(RPCResponse::Status(_))) => {
                // inform the peer manager that we have received a status from a peer
                self.peer_manager.peer_statusd(&peer_id);
                // propagate the STATUS message upwards
                self.events.push(BehaviourEvent::RPC(peer_id, message.event));
            }
            RPCEvent::Error(_, protocol, ref err) => {
                self.peer_manager.handle_rpc_error(&peer_id, protocol, err);
                self.events.push(BehaviourEvent::RPC(peer_id, message.event));
            }
            _ => {
                // propagate all other RPC messages upwards
                self.events.push(BehaviourEvent::RPC(peer_id, message.event))
            }
        }
    }
}
impl<TSpec: EthSpec> Behaviour<TSpec> {
    /// Consumes the events list when polled.
    fn poll<TBehaviourIn>(
        &mut self,
        cx: &mut Context,
        _: &mut impl PollParameters,
    ) -> Poll<NetworkBehaviourAction<TBehaviourIn, BehaviourEvent<TSpec>>> {
        // check the peer manager for events
        loop {
            match self.peer_manager.poll_next_unpin(cx) {
                Poll::Ready(Some(event)) => match event {
                    PeerManagerEvent::Status(peer_id) => {
                        // it's time to status. We don't keep a beacon chain reference here, so we inform
                        // the network to send a status to this peer
                        return Poll::Ready(NetworkBehaviourAction::GenerateEvent(
                            BehaviourEvent::StatusPeer(peer_id),
                        ));
                    }

@@ -495,25 +491,20 @@
                        //TODO: Implement
                    }
                },
                Poll::Pending => break,
                Poll::Ready(None) => break, // peer manager ended
            }
        }

        if !self.events.is_empty() {
            return Poll::Ready(NetworkBehaviourAction::GenerateEvent(self.events.remove(0)));
        }

        Poll::Pending
    }
}
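The rewritten `poll` above is the standard futures-0.3 shape: drain an inner stream with `poll_next_unpin(cx)` until it is pending (which registers the waker), then surface any queued events. A stripped-down sketch of the same loop, with a hypothetical event source standing in for the peer manager:

use futures::stream::{Stream, StreamExt};
use std::collections::VecDeque;
use std::task::{Context, Poll};

// Hypothetical stand-ins for the peer manager and behaviour events.
struct EventQueue<S> {
    source: S,
    events: VecDeque<u32>,
}

impl<S: Stream<Item = u32> + Unpin> EventQueue<S> {
    fn poll_events(&mut self, cx: &mut Context) -> Poll<u32> {
        // Drain the inner stream first; `Pending` registers the waker for us.
        while let Poll::Ready(Some(event)) = self.source.poll_next_unpin(cx) {
            self.events.push_back(event);
        }
        // Then hand out queued events one at a time.
        if let Some(event) = self.events.pop_front() {
            return Poll::Ready(event);
        }
        Poll::Pending
    }
}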
impl<TSpec: EthSpec> NetworkBehaviourEventProcess<IdentifyEvent> for Behaviour<TSpec> {
    fn inject_event(&mut self, event: IdentifyEvent) {
        match event {
            IdentifyEvent::Received {
@@ -545,9 +536,7 @@
    }
}

impl<TSpec: EthSpec> NetworkBehaviourEventProcess<Discv5Event> for Behaviour<TSpec> {
    fn inject_event(&mut self, _event: Discv5Event) {
        // discv5 has no events to inject
    }
}

@@ -558,11 +547,6 @@
pub enum BehaviourEvent<TSpec: EthSpec> {
    /// A received RPC event and the peer that it was received from.
    RPC(PeerId, RPCEvent<TSpec>),
    PubsubMessage {
        /// The gossipsub message id. Used when propagating blocks after validation.
        id: MessageId,

View File

@@ -1,6 +1,6 @@
use crate::types::GossipKind;
use crate::Enr;
use discv5::{Discv5Config, Discv5ConfigBuilder};
use libp2p::gossipsub::{GossipsubConfig, GossipsubConfigBuilder, GossipsubMessage, MessageId};
use libp2p::Multiaddr;
use serde_derive::{Deserialize, Serialize};

View File

@@ -1,15 +1,15 @@
//! Helper functions and an extension trait for Ethereum 2 ENRs.

pub use discv5::enr::{self, CombinedKey, EnrBuilder};
pub use libp2p::core::identity::Keypair;

use super::ENR_FILENAME;
use crate::types::{Enr, EnrBitfield};
use crate::CombinedKeyExt;
use crate::NetworkConfig;
use slog::{debug, warn};
use ssz::{Decode, Encode};
use ssz_types::BitVector;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;

@@ -62,10 +62,7 @@ pub fn build_or_load_enr<T: EthSpec>(
    // Build the local ENR.
    // Note: Discovery should update the ENR record's IP to the external IP as seen by the
    // majority of our peers, if the CLI doesn't expressly forbid it.
    let enr_key = CombinedKey::from_libp2p(&local_key)?;
    let mut local_enr = build_enr::<T>(&enr_key, config, enr_fork_id)?;

    let enr_f = config.network_dir.join(ENR_FILENAME);

View File

@ -0,0 +1,190 @@
//! ENR extension trait to support libp2p integration.
use crate::{Enr, Multiaddr, PeerId};
use discv5::enr::{CombinedKey, CombinedPublicKey};
use libp2p::core::{identity::Keypair, identity::PublicKey, multiaddr::Protocol};
use tiny_keccak::{Hasher, Keccak};
/// Extend ENR for libp2p types.
pub trait EnrExt {
/// The libp2p `PeerId` for the record.
fn peer_id(&self) -> PeerId;
/// Returns a list of multiaddrs if the ENR has an `ip` and either a `tcp` or `udp` key **or** an `ip6` and either a `tcp6` or `udp6`.
/// The vector remains empty if these fields are not defined.
fn multiaddr(&self) -> Vec<Multiaddr>;
}
/// Extend ENR CombinedPublicKey for libp2p types.
pub trait CombinedKeyPublicExt {
/// Converts the public key into a peer id, without consuming the key.
fn into_peer_id(&self) -> PeerId;
}
/// Extend ENR CombinedKey for conversion to libp2p keys.
pub trait CombinedKeyExt {
/// Converts a libp2p key into an ENR combined key.
fn from_libp2p(key: &libp2p::core::identity::Keypair) -> Result<CombinedKey, &'static str>;
}
impl EnrExt for Enr {
/// The libp2p `PeerId` for the record.
fn peer_id(&self) -> PeerId {
self.public_key().into_peer_id()
}
/// Returns a list of multiaddrs if the ENR has an `ip` and either a `tcp` or `udp` key **or** an `ip6` and either a `tcp6` or `udp6`.
/// The vector remains empty if these fields are not defined.
///
/// Note: Only available with the `libp2p` feature flag.
fn multiaddr(&self) -> Vec<Multiaddr> {
let mut multiaddrs: Vec<Multiaddr> = Vec::new();
if let Some(ip) = self.ip() {
if let Some(udp) = self.udp() {
let mut multiaddr: Multiaddr = ip.into();
multiaddr.push(Protocol::Udp(udp));
multiaddrs.push(multiaddr);
}
if let Some(tcp) = self.tcp() {
let mut multiaddr: Multiaddr = ip.into();
multiaddr.push(Protocol::Tcp(tcp));
multiaddrs.push(multiaddr);
}
}
if let Some(ip6) = self.ip6() {
if let Some(udp6) = self.udp6() {
let mut multiaddr: Multiaddr = ip6.into();
multiaddr.push(Protocol::Udp(udp6));
multiaddrs.push(multiaddr);
}
if let Some(tcp6) = self.tcp6() {
let mut multiaddr: Multiaddr = ip6.into();
multiaddr.push(Protocol::Tcp(tcp6));
multiaddrs.push(multiaddr);
}
}
multiaddrs
}
}
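A hedged usage sketch of `EnrExt::multiaddr` (the key bytes and addresses are invented; assumes the `enr` builder's `ip`/`tcp`/`udp` setters are available at this discv5 revision):

#[test]
fn multiaddr_from_enr() {
    // Build a record with one IPv4 address and both transports, then read it back.
    let secret = discv5::enr::secp256k1::SecretKey::parse_slice(&[1u8; 32]).unwrap();
    let key = CombinedKey::Secp256k1(secret);
    let enr: Enr = discv5::enr::EnrBuilder::new("v4")
        .ip("10.0.0.1".parse().unwrap())
        .tcp(9000)
        .udp(9000)
        .build(&key)
        .unwrap();
    // One multiaddr per transport:
    // "/ip4/10.0.0.1/udp/9000" and "/ip4/10.0.0.1/tcp/9000".
    assert_eq!(enr.multiaddr().len(), 2);
}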
impl CombinedKeyPublicExt for CombinedPublicKey {
/// Converts the public key into a peer id, without consuming the key.
///
/// This is only available with the `libp2p` feature flag.
fn into_peer_id(&self) -> PeerId {
match self {
Self::Secp256k1(pk) => {
let pk_bytes = pk.serialize_compressed();
let libp2p_pk = libp2p::core::PublicKey::Secp256k1(
libp2p::core::identity::secp256k1::PublicKey::decode(&pk_bytes)
.expect("valid public key"),
);
PeerId::from_public_key(libp2p_pk)
}
Self::Ed25519(pk) => {
let pk_bytes = pk.to_bytes();
let libp2p_pk = libp2p::core::PublicKey::Ed25519(
libp2p::core::identity::ed25519::PublicKey::decode(&pk_bytes)
.expect("valid public key"),
);
PeerId::from_public_key(libp2p_pk)
}
}
}
}
impl CombinedKeyExt for CombinedKey {
fn from_libp2p(key: &libp2p::core::identity::Keypair) -> Result<CombinedKey, &'static str> {
match key {
Keypair::Secp256k1(key) => {
let secret = discv5::enr::secp256k1::SecretKey::parse(&key.secret().to_bytes())
.expect("libp2p key must be valid");
Ok(CombinedKey::Secp256k1(secret))
}
Keypair::Ed25519(key) => {
let ed_keypair =
discv5::enr::ed25519_dalek::SecretKey::from_bytes(&key.encode()[..32])
.expect("libp2p key must be valid");
Ok(CombinedKey::from(ed_keypair))
}
_ => Err("ENR: Unsupported libp2p key type"),
}
}
}
// helper function to convert a peer_id to a node_id. This is only possible for secp256k1/ed25519 libp2p
// peer_ids
pub fn peer_id_to_node_id(peer_id: &PeerId) -> Result<discv5::enr::NodeId, String> {
// A libp2p peer id byte representation should be 2 length bytes + 4 protobuf bytes + compressed pk bytes
// if generated from a PublicKey with Identity multihash.
let pk_bytes = &peer_id.as_bytes()[2..];
match PublicKey::from_protobuf_encoding(pk_bytes).map_err(|e| {
format!("Cannot parse libp2p public key from peer id: {}", e)
})? {
PublicKey::Secp256k1(pk) => {
let uncompressed_key_bytes = &pk.encode_uncompressed()[1..];
let mut output = [0_u8; 32];
let mut hasher = Keccak::v256();
hasher.update(&uncompressed_key_bytes);
hasher.finalize(&mut output);
return Ok(discv5::enr::NodeId::parse(&output).expect("Must be correct length"));
}
PublicKey::Ed25519(pk) => {
let uncompressed_key_bytes = pk.encode();
let mut output = [0_u8; 32];
let mut hasher = Keccak::v256();
hasher.update(&uncompressed_key_bytes);
hasher.finalize(&mut output);
return Ok(discv5::enr::NodeId::parse(&output).expect("Must be correct length"));
}
_ => return Err("Unsupported public key".into()),
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_secp256k1_peer_id_conversion() {
let sk_hex = "df94a73d528434ce2309abb19c16aedb535322797dbd59c157b1e04095900f48";
let sk_bytes = hex::decode(sk_hex).unwrap();
let secret_key = discv5::enr::secp256k1::SecretKey::parse_slice(&sk_bytes).unwrap();
let libp2p_sk = libp2p::identity::secp256k1::SecretKey::from_bytes(sk_bytes).unwrap();
let secp256k1_kp: libp2p::identity::secp256k1::Keypair = libp2p_sk.into();
let libp2p_kp = Keypair::Secp256k1(secp256k1_kp);
let peer_id = libp2p_kp.public().into_peer_id();
let enr = discv5::enr::EnrBuilder::new("v4")
.build(&secret_key)
.unwrap();
let node_id = peer_id_to_node_id(&peer_id).unwrap();
assert_eq!(enr.node_id(), node_id);
}
#[test]
fn test_ed25519_peer_conversion() {
let sk_hex = "4dea8a5072119927e9d243a7d953f2f4bc95b70f110978e2f9bc7a9000e4b261";
let sk_bytes = hex::decode(sk_hex).unwrap();
let secret = discv5::enr::ed25519_dalek::SecretKey::from_bytes(&sk_bytes).unwrap();
let public = discv5::enr::ed25519_dalek::PublicKey::from(&secret);
let keypair = discv5::enr::ed25519_dalek::Keypair { public, secret };
let libp2p_sk = libp2p::identity::ed25519::SecretKey::from_bytes(sk_bytes).unwrap();
let ed25519_kp: libp2p::identity::ed25519::Keypair = libp2p_sk.into();
let libp2p_kp = Keypair::Ed25519(ed25519_kp);
let peer_id = libp2p_kp.public().into_peer_id();
let enr = discv5::enr::EnrBuilder::new("v4").build(&keypair).unwrap();
let node_id = peer_id_to_node_id(&peer_id).unwrap();
assert_eq!(enr.node_id(), node_id);
}
}

View File

@@ -1,28 +1,35 @@
//! This manages the discovery and management of peers.

pub(crate) mod enr;
pub mod enr_ext;

// Allow external use of the lighthouse ENR builder
pub use enr::{build_enr, CombinedKey, Keypair};
pub use enr_ext::{CombinedKeyExt, EnrExt};

use crate::metrics;
use crate::{error, Enr, NetworkConfig, NetworkGlobals};
use discv5::{enr::NodeId, Discv5, Discv5Event};
use enr::{Eth2Enr, BITFIELD_ENR_KEY, ETH2_ENR_KEY};
use futures::prelude::*;
use libp2p::core::{connection::ConnectionId, Multiaddr, PeerId};
use libp2p::multiaddr::Protocol;
use libp2p::swarm::{
    protocols_handler::DummyProtocolsHandler, DialPeerCondition, NetworkBehaviour,
    NetworkBehaviourAction, PollParameters, ProtocolsHandler,
};
use lru::LruCache;
use slog::{crit, debug, info, warn};
use ssz::{Decode, Encode};
use ssz_types::BitVector;
use std::{
    collections::{HashSet, VecDeque},
    net::SocketAddr,
    path::Path,
    sync::Arc,
    task::{Context, Poll},
    time::Duration,
};
use tokio::time::{delay_until, Delay, Instant};
use types::{EnrForkId, EthSpec, SubnetId};

/// Maximum seconds before searching for extra peers.
@@ -36,10 +43,13 @@ const TARGET_SUBNET_PEERS: u64 = 3;
/// Lighthouse discovery behaviour. This provides peer management and discovery using the Discv5
/// libp2p protocol.
pub struct Discovery<TSpec: EthSpec> {
    /// Events to be processed by the behaviour.
    events: VecDeque<NetworkBehaviourAction<void::Void, Discv5Event>>,

    /// A collection of seen live ENRs for quick lookup and to map peer-ids to ENRs.
    cached_enrs: LruCache<PeerId, Enr>,

    /// The currently banned peers.
    banned_peers: HashSet<PeerId>,
@ -62,7 +72,7 @@ pub struct Discovery<TSubstream, TSpec: EthSpec> {
tcp_port: u16, tcp_port: u16,
/// The discovery behaviour used to discover new peers. /// The discovery behaviour used to discover new peers.
discovery: Discv5<TSubstream>, discovery: Discv5,
/// A collection of network constants that can be read from other threads. /// A collection of network constants that can be read from other threads.
network_globals: Arc<NetworkGlobals<TSpec>>, network_globals: Arc<NetworkGlobals<TSpec>>,
@ -71,7 +81,7 @@ pub struct Discovery<TSubstream, TSpec: EthSpec> {
log: slog::Logger, log: slog::Logger,
} }
impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> { impl<TSpec: EthSpec> Discovery<TSpec> {
pub fn new( pub fn new(
local_key: &Keypair, local_key: &Keypair,
config: &NetworkConfig, config: &NetworkConfig,
@ -91,9 +101,12 @@ impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> {
let listen_socket = SocketAddr::new(config.listen_address, config.discovery_port); let listen_socket = SocketAddr::new(config.listen_address, config.discovery_port);
// convert the keypair into an ENR key
let enr_key: CombinedKey = CombinedKey::from_libp2p(&local_key)?;
let mut discovery = Discv5::new( let mut discovery = Discv5::new(
local_enr, local_enr,
local_key.clone(), enr_key,
config.discv5_config.clone(), config.discv5_config.clone(),
listen_socket, listen_socket,
) )
@ -121,9 +134,10 @@ impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> {
Ok(Self { Ok(Self {
events: VecDeque::with_capacity(16), events: VecDeque::with_capacity(16),
cached_enrs: LruCache::new(50),
banned_peers: HashSet::new(), banned_peers: HashSet::new(),
max_peers: config.max_peers, max_peers: config.max_peers,
peer_discovery_delay: Delay::new(Instant::now()), peer_discovery_delay: delay_until(Instant::now()),
past_discovery_delay: INITIAL_SEARCH_DELAY, past_discovery_delay: INITIAL_SEARCH_DELAY,
tcp_port: config.libp2p_port, tcp_port: config.libp2p_port,
discovery, discovery,
@ -147,6 +161,9 @@ impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> {
/// Add an ENR to the routing table of the discovery mechanism. /// Add an ENR to the routing table of the discovery mechanism.
pub fn add_enr(&mut self, enr: Enr) { pub fn add_enr(&mut self, enr: Enr) {
// add the enr to seen caches
self.cached_enrs.put(enr.peer_id(), enr.clone());
let _ = self.discovery.add_enr(enr).map_err(|e| { let _ = self.discovery.add_enr(enr).map_err(|e| {
warn!( warn!(
self.log, self.log,
@ -174,7 +191,18 @@ impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> {
/// Returns the ENR of a known peer if it exists. /// Returns the ENR of a known peer if it exists.
pub fn enr_of_peer(&mut self, peer_id: &PeerId) -> Option<Enr> { pub fn enr_of_peer(&mut self, peer_id: &PeerId) -> Option<Enr> {
self.discovery.enr_of_peer(peer_id) // first search the local cache
if let Some(enr) = self.cached_enrs.get(peer_id) {
return Some(enr.clone());
}
// not in the local cache, look in the routing table
if let Ok(_node_id) = enr_ext::peer_id_to_node_id(peer_id) {
// TODO: Need to update discv5
// self.discovery.find_enr(&node_id)
return None;
} else {
return None;
}
} }
/// Adds/Removes a subnet from the ENR Bitfield /// Adds/Removes a subnet from the ENR Bitfield
@ -342,48 +370,58 @@ impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> {
} }
} }
// Redirect all behaviour events to underlying discovery behaviour. // Build a dummy Network behaviour around the discv5 server
impl<TSubstream, TSpec: EthSpec> NetworkBehaviour for Discovery<TSubstream, TSpec> impl<TSpec: EthSpec> NetworkBehaviour for Discovery<TSpec> {
where type ProtocolsHandler = DummyProtocolsHandler;
TSubstream: AsyncRead + AsyncWrite, type OutEvent = Discv5Event;
{
type ProtocolsHandler = <Discv5<TSubstream> as NetworkBehaviour>::ProtocolsHandler;
type OutEvent = <Discv5<TSubstream> as NetworkBehaviour>::OutEvent;
fn new_handler(&mut self) -> Self::ProtocolsHandler { fn new_handler(&mut self) -> Self::ProtocolsHandler {
NetworkBehaviour::new_handler(&mut self.discovery) DummyProtocolsHandler::default()
} }
fn addresses_of_peer(&mut self, peer_id: &PeerId) -> Vec<Multiaddr> { fn addresses_of_peer(&mut self, peer_id: &PeerId) -> Vec<Multiaddr> {
// Let discovery track possible known peers. if let Some(enr) = self.enr_of_peer(peer_id) {
self.discovery.addresses_of_peer(peer_id) // ENR's may have multiple Multiaddrs. The multi-addr associated with the UDP
// port is removed, which is assumed to be associated with the discv5 protocol (and
// therefore irrelevant for other libp2p components).
let mut out_list = enr.multiaddr();
out_list.retain(|addr| {
addr.iter()
.find(|v| match v {
Protocol::Udp(_) => true,
_ => false,
})
.is_none()
});
out_list
} else {
// PeerId is not known
Vec::new()
}
} }
fn inject_connected(&mut self, _peer_id: PeerId, _endpoint: ConnectedPoint) {} // ignore libp2p connections/streams
fn inject_connected(&mut self, _: &PeerId) {}
fn inject_disconnected(&mut self, _peer_id: &PeerId, _endpoint: ConnectedPoint) {} // ignore libp2p connections/streams
fn inject_disconnected(&mut self, _: &PeerId) {}
fn inject_replaced( // no libp2p discv5 events - event originate from the session_service.
fn inject_event(
&mut self, &mut self,
_peer_id: PeerId, _: PeerId,
_closed: ConnectedPoint, _: ConnectionId,
_opened: ConnectedPoint,
) {
// discv5 doesn't implement
}
fn inject_node_event(
&mut self,
_peer_id: PeerId,
_event: <Self::ProtocolsHandler as ProtocolsHandler>::OutEvent, _event: <Self::ProtocolsHandler as ProtocolsHandler>::OutEvent,
) { ) {
// discv5 doesn't implement void::unreachable(_event)
} }
fn poll( fn poll(
&mut self, &mut self,
params: &mut impl PollParameters, cx: &mut Context,
) -> Async< _: &mut impl PollParameters,
) -> Poll<
NetworkBehaviourAction< NetworkBehaviourAction<
<Self::ProtocolsHandler as ProtocolsHandler>::InEvent, <Self::ProtocolsHandler as ProtocolsHandler>::InEvent,
Self::OutEvent, Self::OutEvent,
@ -391,8 +429,8 @@ where
> { > {
// search for peers if it is time // search for peers if it is time
loop { loop {
match self.peer_discovery_delay.poll() { match self.peer_discovery_delay.poll_unpin(cx) {
Ok(Async::Ready(_)) => { Poll::Ready(_) => {
if self.network_globals.connected_peers() < self.max_peers { if self.network_globals.connected_peers() < self.max_peers {
self.find_peers(); self.find_peers();
} }
@ -401,17 +439,14 @@ where
Instant::now() + Duration::from_secs(MAX_TIME_BETWEEN_PEER_SEARCHES), Instant::now() + Duration::from_secs(MAX_TIME_BETWEEN_PEER_SEARCHES),
); );
} }
Ok(Async::NotReady) => break, Poll::Pending => break,
Err(e) => {
warn!(self.log, "Discovery peer search failed"; "error" => format!("{:?}", e));
}
} }
} }
// Poll discovery // Poll discovery
loop { loop {
match self.discovery.poll(params) { match self.discovery.poll_next_unpin(cx) {
Async::Ready(NetworkBehaviourAction::GenerateEvent(event)) => { Poll::Ready(Some(event)) => {
match event { match event {
Discv5Event::Discovered(_enr) => { Discv5Event::Discovered(_enr) => {
// peers that get discovered during a query but are not contactable or // peers that get discovered during a query but are not contactable or
@ -434,7 +469,7 @@ where
let enr = self.discovery.local_enr(); let enr = self.discovery.local_enr();
enr::save_enr_to_disk(Path::new(&self.enr_dir), enr, &self.log); enr::save_enr_to_disk(Path::new(&self.enr_dir), enr, &self.log);
return Async::Ready(NetworkBehaviourAction::ReportObservedAddr { return Poll::Ready(NetworkBehaviourAction::ReportObservedAddr {
address, address,
}); });
} }
@ -451,9 +486,12 @@ where
self.peer_discovery_delay self.peer_discovery_delay
.reset(Instant::now() + Duration::from_secs(delay)); .reset(Instant::now() + Duration::from_secs(delay));
for peer_id in closer_peers { for enr in closer_peers {
// if we need more peers, attempt a connection // cache known peers
let peer_id = enr.peer_id();
self.cached_enrs.put(enr.peer_id(), enr);
// if we need more peers, attempt a connection
if self.network_globals.connected_or_dialing_peers() if self.network_globals.connected_or_dialing_peers()
< self.max_peers < self.max_peers
&& !self && !self
@ -463,10 +501,18 @@ where
.is_connected_or_dialing(&peer_id) .is_connected_or_dialing(&peer_id)
&& !self.banned_peers.contains(&peer_id) && !self.banned_peers.contains(&peer_id)
{ {
debug!(self.log, "Connecting to discovered peer"; "peer_id"=> format!("{:?}", peer_id)); // TODO: Debugging only
self.network_globals.peers.write().dialing_peer(&peer_id); // NOTE: The peer manager will get updated by the global swarm.
self.events let connection_status = self
.push_back(NetworkBehaviourAction::DialPeer { peer_id }); .network_globals
.peers
.read()
.connection_status(&peer_id);
debug!(self.log, "Connecting to discovered peer"; "peer_id"=> peer_id.to_string(), "status" => format!("{:?}", connection_status));
self.events.push_back(NetworkBehaviourAction::DialPeer {
peer_id,
condition: DialPeerCondition::Disconnected,
});
} }
} }
} }
@ -474,16 +520,16 @@ where
} }
} }
// discv5 does not output any other NetworkBehaviourAction // discv5 does not output any other NetworkBehaviourAction
Async::Ready(_) => {} Poll::Ready(_) => {}
Async::NotReady => break, Poll::Pending => break,
} }
} }
// process any queued events // process any queued events
if let Some(event) = self.events.pop_front() { if let Some(event) = self.events.pop_front() {
return Async::Ready(event); return Poll::Ready(event);
} }
Async::NotReady Poll::Pending
} }
} }
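The discovery rewrite shows the shape of the whole futures-0.1 to futures-0.3 migration: `Async::Ready`/`Async::NotReady` become `Poll::Ready`/`Poll::Pending`, the timer's error arm disappears, and every manual `poll` now receives an explicit `Context` that it threads into inner futures via `poll_unpin`. Below is a minimal, self-contained sketch of the same timer-polling pattern, assuming the tokio 0.2 / futures 0.3 stack this PR targets; `Ticker` and its fields are illustrative, not Lighthouse types.

use futures::future::FutureExt;
use futures::stream::Stream;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::Duration;
use tokio::time::{delay_until, Delay, Instant};

/// A periodic "tick" stream in the futures-0.3 style: the timer is polled
/// with an explicit `Context` and readiness is reported via `Poll`.
struct Ticker {
    delay: Delay,
    period: Duration,
}

impl Ticker {
    fn new(period: Duration) -> Self {
        Ticker {
            delay: delay_until(Instant::now() + period),
            period,
        }
    }
}

impl Stream for Ticker {
    type Item = ();

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // `poll_unpin(cx)` replaces the old `delay.poll()`: there is no
        // implicit task context, and the timer API no longer returns errors.
        match self.delay.poll_unpin(cx) {
            Poll::Ready(_) => {
                // reset the delay, as `peer_discovery_delay` does above
                let next = Instant::now() + self.period;
                self.delay.reset(next);
                Poll::Ready(Some(()))
            }
            Poll::Pending => Poll::Pending,
        }
    }
}

#[tokio::main]
async fn main() {
    use futures::stream::StreamExt;
    let mut ticker = Ticker::new(Duration::from_millis(100));
    for _ in 0..3 {
        ticker.next().await;
        println!("tick");
    }
}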

View File

@@ -17,9 +17,10 @@ pub mod types;
 pub use crate::types::{error, Enr, GossipTopic, NetworkGlobals, PubsubMessage};
 pub use behaviour::BehaviourEvent;
 pub use config::Config as NetworkConfig;
+pub use discovery::enr_ext::{CombinedKeyExt, EnrExt};
 pub use libp2p::gossipsub::{MessageId, Topic, TopicHash};
+pub use libp2p::{core::ConnectedPoint, PeerId, Swarm};
 pub use libp2p::{multiaddr, Multiaddr};
-pub use libp2p::{PeerId, Swarm};
-pub use peer_manager::{PeerDB, PeerInfo, PeerSyncStatus, SyncInfo};
+pub use peer_manager::{client::Client, PeerDB, PeerInfo, PeerSyncStatus, SyncInfo};
 pub use rpc::RPCEvent;
-pub use service::{Service, NETWORK_KEY_FILENAME};
+pub use service::{Libp2pEvent, Service, NETWORK_KEY_FILENAME};

View File

@@ -131,6 +131,18 @@ fn client_from_agent_version(agent_version: &str) -> (ClientKind, String, String
         let unknown = String::from("unknown");
         (kind, unknown.clone(), unknown)
     }
+    Some("nim-libp2p") => {
+        let kind = ClientKind::Nimbus;
+        let mut version = String::from("unknown");
+        let mut os_version = version.clone();
+        if let Some(agent_version) = agent_split.next() {
+            version = agent_version.into();
+            if let Some(agent_os_version) = agent_split.next() {
+                os_version = agent_os_version.into();
+            }
+        }
+        (kind, version, os_version)
+    }
     _ => {
         let unknown = String::from("unknown");
         (ClientKind::Unknown, unknown.clone(), unknown)
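The new arm follows the convention the other arms use: the identify `agent_version` string has the shape `name/version/os-version`, split on `/`, with missing fields falling back to "unknown". A standalone sketch of that parse (the `parse_agent` function is illustrative, not the crate's API):

/// Split an identify agent string into (name, version, os-version),
/// defaulting missing fields to "unknown" as the match arm above does.
fn parse_agent(agent_version: &str) -> (&str, String, String) {
    let mut parts = agent_version.split('/');
    let name = parts.next().unwrap_or("unknown");
    let version = parts.next().unwrap_or("unknown").to_string();
    let os_version = parts.next().unwrap_or("unknown").to_string();
    (name, version, os_version)
}

fn main() {
    // e.g. a Nimbus node identifying itself over libp2p identify
    assert_eq!(
        parse_agent("nim-libp2p/0.1.0/linux-amd64"),
        ("nim-libp2p", "0.1.0".to_string(), "linux-amd64".to_string())
    );
    // missing fields fall back to "unknown"
    assert_eq!(parse_agent("nim-libp2p").1, "unknown");
}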

View File

@@ -6,16 +6,18 @@ use crate::rpc::{MetaData, Protocol, RPCError, RPCResponseErrorCode};
 use crate::{NetworkGlobals, PeerId};
 use futures::prelude::*;
 use futures::Stream;
-use hashmap_delay::HashSetDelay;
+use hashset_delay::HashSetDelay;
 use libp2p::identify::IdentifyInfo;
 use slog::{crit, debug, error, warn};
 use smallvec::SmallVec;
 use std::convert::TryInto;
+use std::pin::Pin;
 use std::sync::Arc;
+use std::task::{Context, Poll};
 use std::time::{Duration, Instant};
 use types::EthSpec;

-mod client;
+pub mod client;
 mod peer_info;
 mod peer_sync_status;
 mod peerdb;
@@ -24,7 +26,7 @@ pub use peer_info::{PeerConnectionStatus::*, PeerInfo};
 pub use peer_sync_status::{PeerSyncStatus, SyncInfo};

 /// The minimum reputation before a peer is disconnected.
 // Most likely this needs tweaking.
-const MIN_REP_BEFORE_BAN: Rep = 10;
+const _MIN_REP_BEFORE_BAN: Rep = 10;
 /// The time in seconds between re-statusing peers.
 const STATUS_INTERVAL: u64 = 300;
 /// The time in seconds between PING events. We do not send a ping if the other peer has PING'd us within
@@ -42,7 +44,7 @@ pub struct PeerManager<TSpec: EthSpec> {
     /// A collection of peers waiting to be Status'd.
     status_peers: HashSetDelay<PeerId>,
     /// Last updated moment.
-    last_updated: Instant,
+    _last_updated: Instant,
     /// The logger associated with the `PeerManager`.
     log: slog::Logger,
 }
@@ -104,7 +106,7 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
         PeerManager {
             network_globals,
             events: SmallVec::new(),
-            last_updated: Instant::now(),
+            _last_updated: Instant::now(),
             ping_peers: HashSetDelay::new(Duration::from_secs(PING_INTERVAL)),
             status_peers: HashSetDelay::new(Duration::from_secs(STATUS_INTERVAL)),
             log: log.clone(),
@@ -123,7 +125,7 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
         debug!(self.log, "Received a ping request"; "peer_id" => peer_id.to_string(), "seq_no" => seq);
         self.ping_peers.insert(peer_id.clone());

-        // if the sequence number is unknown send update the meta data of the peer.
+        // if the sequence number is unknown, request an update of the peer's meta data.
         if let Some(meta_data) = &peer_info.meta_data {
             if meta_data.seq_number < seq {
                 debug!(self.log, "Requesting new metadata from peer";
@@ -180,9 +182,7 @@ impl<TSpec: EthSpec> PeerManager<TSpec> {
                     "peer_id" => peer_id.to_string(), "known_seq_no" => known_meta_data.seq_number, "new_seq_no" => meta_data.seq_number);
                 peer_info.meta_data = Some(meta_data);
             } else {
-                // TODO: isn't this malicious/random behaviour? What happens if the seq_number
-                // is the same but the contents differ?
-                warn!(self.log, "Received old metadata";
+                debug!(self.log, "Received old metadata";
                     "peer_id" => peer_id.to_string(), "known_seq_no" => known_meta_data.seq_number, "new_seq_no" => meta_data.seq_number);
             }
         } else {
@@ -204,11 +204,8 @@
     /// Updates the state of the peer as disconnected.
     pub fn notify_disconnect(&mut self, peer_id: &PeerId) {
-        self.update_reputations();
-        {
-            let mut peerdb = self.network_globals.peers.write();
-            peerdb.disconnect(peer_id);
-        }
+        //self.update_reputations();
+        self.network_globals.peers.write().disconnect(peer_id);

         // remove the ping and status timer for the peer
         self.ping_peers.remove(peer_id);
@@ -223,25 +220,31 @@
     /// Sets a peer as connected as long as their reputation allows it.
     /// Informs if the peer was accepted.
     pub fn connect_ingoing(&mut self, peer_id: &PeerId) -> bool {
-        self.connect_peer(peer_id, false)
+        self.connect_peer(peer_id, ConnectingType::IngoingConnected)
     }

     /// Sets a peer as connected as long as their reputation allows it.
     /// Informs if the peer was accepted.
     pub fn connect_outgoing(&mut self, peer_id: &PeerId) -> bool {
-        self.connect_peer(peer_id, true)
+        self.connect_peer(peer_id, ConnectingType::OutgoingConnected)
+    }
+
+    /// Updates the database informing that a peer is being dialed.
+    pub fn dialing_peer(&mut self, peer_id: &PeerId) -> bool {
+        self.connect_peer(peer_id, ConnectingType::Dialing)
     }

     /// Reports a peer for some action.
     ///
     /// If the peer doesn't exist, log a warning and insert defaults.
     pub fn report_peer(&mut self, peer_id: &PeerId, action: PeerAction) {
-        self.update_reputations();
+        //TODO: Check these. There are double disconnects for example
+        // self.update_reputations();
         self.network_globals
             .peers
             .write()
             .add_reputation(peer_id, action.rep_change());
-        self.update_reputations();
+        // self.update_reputations();
     }

     /// Updates `PeerInfo` with `identify` information.
@@ -255,7 +258,14 @@
     }

     pub fn handle_rpc_error(&mut self, peer_id: &PeerId, protocol: Protocol, err: &RPCError) {
-        debug!(self.log, "RPCError"; "protocol" => protocol.to_string(), "err" => err.to_string());
+        let client = self
+            .network_globals
+            .peers
+            .read()
+            .peer_info(peer_id)
+            .map(|info| info.client.clone())
+            .unwrap_or_default();
+        debug!(self.log, "RPCError"; "protocol" => protocol.to_string(), "err" => err.to_string(), "client" => client.to_string());

         // Map this error to a `PeerAction` (if any)
         let peer_action = match err {
@@ -321,21 +331,23 @@
     ///
     /// This informs if the peer was accepted into the db or not.
     // TODO: Drop peers if over max_peer limit
-    fn connect_peer(&mut self, peer_id: &PeerId, outgoing: bool) -> bool {
+    fn connect_peer(&mut self, peer_id: &PeerId, connection: ConnectingType) -> bool {
         // TODO: remove after timed updates
-        self.update_reputations();
+        //self.update_reputations();

         {
             let mut peerdb = self.network_globals.peers.write();
             if peerdb.connection_status(peer_id).map(|c| c.is_banned()) == Some(true) {
                 // don't connect if the peer is banned
-                return false;
+                // TODO: Handle this case. If peer is banned this shouldn't be reached. It will put
+                // our connection/disconnection out of sync with libp2p
+                // return false;
             }

-            if outgoing {
-                peerdb.connect_outgoing(peer_id);
-            } else {
-                peerdb.connect_ingoing(peer_id);
-            }
+            match connection {
+                ConnectingType::Dialing => peerdb.dialing_peer(peer_id),
+                ConnectingType::IngoingConnected => peerdb.connect_outgoing(peer_id),
+                ConnectingType::OutgoingConnected => peerdb.connect_ingoing(peer_id),
+            }
         }
@@ -366,10 +378,10 @@
     ///
     /// A banned(disconnected) peer that gets its rep above(below) MIN_REP_BEFORE_BAN is
     /// now considered a disconnected(banned) peer.
-    fn update_reputations(&mut self) {
+    fn _update_reputations(&mut self) {
         // avoid locking the peerdb too often
         // TODO: call this on a timer
-        if self.last_updated.elapsed().as_secs() < 30 {
+        if self._last_updated.elapsed().as_secs() < 30 {
             return;
         }
@@ -382,7 +394,7 @@
         /* Check how long peers have been in this state and update their reputations if needed */

         let mut pdb = self.network_globals.peers.write();
-        for (id, info) in pdb.peers_mut() {
+        for (id, info) in pdb._peers_mut() {
             // Update reputations
             match info.connection_status {
                 Connected { .. } => {
@@ -398,7 +410,7 @@
                         .as_secs()
                         / 3600;
                     let last_dc_hours = self
-                        .last_updated
+                        ._last_updated
                         .checked_duration_since(since)
                         .unwrap_or_else(|| Duration::from_secs(0))
                         .as_secs()
@@ -423,12 +435,13 @@
                         // TODO: decide how to handle this
                     }
                 }
+                Unknown => {} //TODO: Handle this case
             }
             // Check if the peer gets banned or unbanned and if it should be disconnected
-            if info.reputation < MIN_REP_BEFORE_BAN && !info.connection_status.is_banned() {
+            if info.reputation < _MIN_REP_BEFORE_BAN && !info.connection_status.is_banned() {
                 // This peer gets banned. Check if we should request disconnection
                 ban_queue.push(id.clone());
-            } else if info.reputation >= MIN_REP_BEFORE_BAN && info.connection_status.is_banned() {
+            } else if info.reputation >= _MIN_REP_BEFORE_BAN && info.connection_status.is_banned() {
                 // This peer gets unbanned
                 unban_queue.push(id.clone());
             }
@@ -444,57 +457,56 @@
             pdb.disconnect(&id);
         }

-        self.last_updated = Instant::now();
+        self._last_updated = Instant::now();
     }
 }

 impl<TSpec: EthSpec> Stream for PeerManager<TSpec> {
     type Item = PeerManagerEvent;
-    type Error = ();

-    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
+    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
         // poll the timeouts for pings and status'
-        // TODO: Remove task notifies and temporary vecs for stable futures
-        // These exist to handle a bug in delayqueue
-        let mut peers_to_add = Vec::new();
-        while let Async::Ready(Some(peer_id)) = self.ping_peers.poll().map_err(|e| {
-            error!(self.log, "Failed to check for peers to ping"; "error" => e.to_string());
-        })? {
-            debug!(self.log, "Pinging peer"; "peer_id" => peer_id.to_string());
-            // add the ping timer back
-            peers_to_add.push(peer_id.clone());
-            self.events.push(PeerManagerEvent::Ping(peer_id));
-        }
-        if !peers_to_add.is_empty() {
-            futures::task::current().notify();
-        }
-        while let Some(peer) = peers_to_add.pop() {
-            self.ping_peers.insert(peer);
-        }
+        loop {
+            match self.ping_peers.poll_next_unpin(cx) {
+                Poll::Ready(Some(Ok(peer_id))) => {
+                    self.ping_peers.insert(peer_id.clone());
+                    self.events.push(PeerManagerEvent::Ping(peer_id));
+                }
+                Poll::Ready(Some(Err(e))) => {
+                    error!(self.log, "Failed to check for peers to ping"; "error" => format!("{}",e))
+                }
+                Poll::Ready(None) | Poll::Pending => break,
+            }
+        }

-        while let Async::Ready(Some(peer_id)) = self.status_peers.poll().map_err(|e| {
-            error!(self.log, "Failed to check for peers to status"; "error" => e.to_string());
-        })? {
-            debug!(self.log, "Sending Status to peer"; "peer_id" => peer_id.to_string());
-            // add the status timer back
-            peers_to_add.push(peer_id.clone());
-            self.events.push(PeerManagerEvent::Status(peer_id));
-        }
-        if !peers_to_add.is_empty() {
-            futures::task::current().notify();
-        }
-        while let Some(peer) = peers_to_add.pop() {
-            self.status_peers.insert(peer);
-        }
+        loop {
+            match self.status_peers.poll_next_unpin(cx) {
+                Poll::Ready(Some(Ok(peer_id))) => {
+                    self.status_peers.insert(peer_id.clone());
+                    self.events.push(PeerManagerEvent::Status(peer_id))
+                }
+                Poll::Ready(Some(Err(e))) => {
+                    error!(self.log, "Failed to check for peers to status"; "error" => format!("{}",e))
+                }
+                Poll::Ready(None) | Poll::Pending => break,
+            }
+        }

         if !self.events.is_empty() {
-            return Ok(Async::Ready(Some(self.events.remove(0))));
+            return Poll::Ready(Some(self.events.remove(0)));
         } else {
             self.events.shrink_to_fit();
         }
-        Ok(Async::NotReady)
+        Poll::Pending
     }
 }

+enum ConnectingType {
+    /// We are in the process of dialing this peer.
+    Dialing,
+    /// A peer has dialed us.
+    IngoingConnected,
+    /// We have successfully dialed a peer.
+    OutgoingConnected,
+}
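The `Stream` rewrite above is the other recurring migration pattern in this PR: `type Error` is gone, `poll` becomes `poll_next(self: Pin<&mut Self>, cx)`, and the `futures::task::current().notify()` workarounds disappear because the inner `HashSetDelay` streams register the waker themselves when polled with `cx`. A stripped-down sketch of the same event-queue shape (the types here are illustrative, not the real `PeerManager`):

use futures::stream::Stream;
use std::collections::VecDeque;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Internal sources push into `events`; `poll_next` drains them one at a
/// time and reports Pending when the queue is empty.
struct EventQueue<E> {
    events: VecDeque<E>,
}

impl<E: Unpin> Stream for EventQueue<E> {
    type Item = E;

    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        if let Some(event) = self.events.pop_front() {
            return Poll::Ready(Some(event));
        }
        // In the real implementation the inner delay streams are polled with
        // `poll_next_unpin(cx)` first, which registers the waker before we
        // return Pending.
        Poll::Pending
    }
}

fn main() {
    use futures::executor::block_on;
    use futures::stream::StreamExt;

    let mut queue = EventQueue { events: VecDeque::new() };
    queue.events.push_back("ping");
    queue.events.push_back("status");
    assert_eq!(block_on(queue.next()), Some("ping"));
    assert_eq!(block_on(queue.next()), Some("status"));
}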

View File

@@ -100,6 +100,8 @@ pub enum PeerConnectionStatus {
         /// time since we last communicated with the peer.
         since: Instant,
     },
+    /// The connection status has not been specified.
+    Unknown,
 }

 /// Serialization for http requests.
@@ -127,15 +129,14 @@ impl Serialize for PeerConnectionStatus {
                 s.serialize_field("since", &since.elapsed().as_secs())?;
                 s.end()
             }
+            Unknown => serializer.serialize_unit_variant("", 4, "Unknown"),
         }
     }
 }

 impl Default for PeerConnectionStatus {
     fn default() -> Self {
-        PeerConnectionStatus::Dialing {
-            since: Instant::now(),
-        }
+        PeerConnectionStatus::Unknown
     }
 }
@@ -177,7 +178,7 @@ impl PeerConnectionStatus {
     pub fn connect_ingoing(&mut self) {
         match self {
             Connected { n_in, .. } => *n_in += 1,
-            Disconnected { .. } | Banned { .. } | Dialing { .. } => {
+            Disconnected { .. } | Banned { .. } | Dialing { .. } | Unknown => {
                 *self = Connected { n_in: 1, n_out: 0 }
             }
         }
@@ -188,7 +189,7 @@
     pub fn connect_outgoing(&mut self) {
         match self {
             Connected { n_out, .. } => *n_out += 1,
-            Disconnected { .. } | Banned { .. } | Dialing { .. } => {
+            Disconnected { .. } | Banned { .. } | Dialing { .. } | Unknown => {
                 *self = Connected { n_in: 0, n_out: 1 }
             }
         }
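A small sketch of what the new `Unknown` variant buys: a default-constructed status no longer fabricates a `Dialing { since: Instant::now() }` timestamp, and the first observed connection overwrites it like any other non-connected state. The miniature `Status` enum below is illustrative, not the real `PeerConnectionStatus`:

use std::time::Instant;

#[derive(Debug)]
enum Status {
    Connected { n_in: usize, n_out: usize },
    Dialing { since: Instant },
    Unknown,
}

impl Default for Status {
    fn default() -> Self {
        // no bogus "dialing since now" timestamp for peers we never dialed
        Status::Unknown
    }
}

impl Status {
    fn connect_ingoing(&mut self) {
        match self {
            Status::Connected { n_in, .. } => *n_in += 1,
            // Unknown transitions like any other non-connected state
            Status::Dialing { .. } | Status::Unknown => {
                *self = Status::Connected { n_in: 1, n_out: 0 }
            }
        }
    }
}

fn main() {
    let mut status = Status::default();
    status.connect_ingoing();
    match status {
        Status::Connected { n_in: 1, n_out: 0 } => (),
        other => panic!("unexpected status: {:?}", other),
    }
}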

View File

@@ -2,8 +2,8 @@ use super::peer_info::{PeerConnectionStatus, PeerInfo};
 use super::peer_sync_status::PeerSyncStatus;
 use crate::rpc::methods::MetaData;
 use crate::PeerId;
-use slog::{crit, warn};
-use std::collections::HashMap;
+use slog::{crit, debug, warn};
+use std::collections::{hash_map::Entry, HashMap};
 use std::time::Instant;
 use types::{EthSpec, SubnetId};
@@ -77,7 +77,7 @@
     }

     /// Returns an iterator over all peers in the db.
-    pub(super) fn peers_mut(&mut self) -> impl Iterator<Item = (&PeerId, &mut PeerInfo<TSpec>)> {
+    pub(super) fn _peers_mut(&mut self) -> impl Iterator<Item = (&PeerId, &mut PeerInfo<TSpec>)> {
         self.peers.iter_mut()
     }
@@ -228,11 +228,12 @@
         let info = self.peers.entry(peer_id.clone()).or_default();
         if info.connection_status.is_disconnected() {
-            self.n_dc -= 1;
+            self.n_dc = self.n_dc.saturating_sub(1);
         }
         info.connection_status = PeerConnectionStatus::Dialing {
             since: Instant::now(),
         };
+        debug!(self.log, "Peer dialing in db"; "peer_id" => peer_id.to_string(), "n_dc" => self.n_dc);
     }

     /// Sets a peer as connected with an ingoing connection.
@@ -240,9 +241,10 @@
         let info = self.peers.entry(peer_id.clone()).or_default();
         if info.connection_status.is_disconnected() {
-            self.n_dc -= 1;
+            self.n_dc = self.n_dc.saturating_sub(1);
         }
         info.connection_status.connect_ingoing();
+        debug!(self.log, "Peer connected to db"; "peer_id" => peer_id.to_string(), "n_dc" => self.n_dc);
     }

     /// Sets a peer as connected with an outgoing connection.
@@ -250,9 +252,10 @@
         let info = self.peers.entry(peer_id.clone()).or_default();
         if info.connection_status.is_disconnected() {
-            self.n_dc -= 1;
+            self.n_dc = self.n_dc.saturating_sub(1);
         }
         info.connection_status.connect_outgoing();
+        debug!(self.log, "Peer connected to db"; "peer_id" => peer_id.to_string(), "n_dc" => self.n_dc);
     }

     /// Sets the peer as disconnected. A banned peer remains banned
@@ -263,11 +266,11 @@
                 "peer_id" => peer_id.to_string());
             PeerInfo::default()
         });
         if !info.connection_status.is_disconnected() && !info.connection_status.is_banned() {
             info.connection_status.disconnect();
             self.n_dc += 1;
         }
+        debug!(self.log, "Peer disconnected from db"; "peer_id" => peer_id.to_string(), "n_dc" => self.n_dc);
         self.shrink_to_fit();
     }
@@ -284,7 +287,7 @@
             .map(|(id, _)| id.clone())
             .unwrap(); // should be safe since n_dc > MAX_DC_PEERS > 0
         self.peers.remove(&to_drop);
-        self.n_dc -= 1;
+        self.n_dc = self.n_dc.saturating_sub(1);
     }
 }
@@ -297,8 +300,9 @@
             PeerInfo::default()
         });
         if info.connection_status.is_disconnected() {
-            self.n_dc -= 1;
+            self.n_dc = self.n_dc.saturating_sub(1);
         }
+        debug!(self.log, "Peer banned"; "peer_id" => peer_id.to_string(), "n_dc" => self.n_dc);
         info.connection_status.ban();
     }
@@ -334,11 +338,14 @@
     /// upper (lower) bounds, it stays at the maximum (minimum) value.
     pub(super) fn add_reputation(&mut self, peer_id: &PeerId, change: RepChange) {
         let log_ref = &self.log;
-        let info = self.peers.entry(peer_id.clone()).or_insert_with(|| {
-            warn!(log_ref, "Adding to the reputation of an unknown peer";
-                "peer_id" => peer_id.to_string());
-            PeerInfo::default()
-        });
+        let info = match self.peers.entry(peer_id.clone()) {
+            Entry::Vacant(_) => {
+                warn!(log_ref, "Peer is unknown, no reputation change made";
+                    "peer_id" => peer_id.to_string());
+                return;
+            }
+            Entry::Occupied(e) => e.into_mut(),
+        };

         info.reputation = if change.is_good {
             info.reputation.saturating_add(change.diff)
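The `Entry`-based rewrite of `add_reputation` changes behaviour, not just style: `or_insert_with` silently minted a default `PeerInfo` for unknown peers, while matching on the entry lets the function warn and bail out instead. A self-contained sketch of the same guard (a toy score map, not the real `PeerDB`):

use std::collections::hash_map::Entry;
use std::collections::HashMap;

/// Mutate a score only if the key already exists; never insert a default
/// record for an unknown peer.
fn bump_score(scores: &mut HashMap<String, i32>, peer: &str, delta: i32) {
    let score = match scores.entry(peer.to_string()) {
        Entry::Vacant(_) => {
            eprintln!("peer {} is unknown, no change made", peer);
            return;
        }
        Entry::Occupied(e) => e.into_mut(),
    };
    *score = score.saturating_add(delta);
}

fn main() {
    let mut scores = HashMap::new();
    scores.insert("known".to_string(), 0);
    bump_score(&mut scores, "known", 5);
    bump_score(&mut scores, "unknown", 5); // logs and returns; no insertion
    assert_eq!(scores.get("known"), Some(&5));
    assert_eq!(scores.get("unknown"), None);
}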

View File

@@ -4,10 +4,10 @@ use crate::rpc::{ErrorMessage, RPCCodedResponse, RPCRequest, RPCResponse};
 use libp2p::bytes::BufMut;
 use libp2p::bytes::BytesMut;
 use std::marker::PhantomData;
-use tokio::codec::{Decoder, Encoder};
+use tokio_util::codec::{Decoder, Encoder};
 use types::EthSpec;

-pub trait OutboundCodec: Encoder + Decoder {
+pub trait OutboundCodec<TItem>: Encoder<TItem> + Decoder {
     type ErrorType;

     fn decode_error(
@@ -21,7 +21,7 @@ pub trait OutboundCodec: Encoder + Decoder {
 pub struct BaseInboundCodec<TCodec, TSpec>
 where
-    TCodec: Encoder + Decoder,
+    TCodec: Encoder<RPCCodedResponse<TSpec>> + Decoder,
     TSpec: EthSpec,
 {
     /// Inner codec for handling various encodings
@@ -31,7 +31,7 @@ where
 impl<TCodec, TSpec> BaseInboundCodec<TCodec, TSpec>
 where
-    TCodec: Encoder + Decoder,
+    TCodec: Encoder<RPCCodedResponse<TSpec>> + Decoder,
     TSpec: EthSpec,
 {
     pub fn new(codec: TCodec) -> Self {
@@ -46,7 +46,7 @@ where
 // This deals with Decoding RPC Responses from other peers and encoding our requests
 pub struct BaseOutboundCodec<TOutboundCodec, TSpec>
 where
-    TOutboundCodec: OutboundCodec,
+    TOutboundCodec: OutboundCodec<RPCRequest<TSpec>>,
     TSpec: EthSpec,
 {
     /// Inner codec for handling various encodings.
@@ -59,7 +59,7 @@ where
 impl<TOutboundCodec, TSpec> BaseOutboundCodec<TOutboundCodec, TSpec>
 where
     TSpec: EthSpec,
-    TOutboundCodec: OutboundCodec,
+    TOutboundCodec: OutboundCodec<RPCRequest<TSpec>>,
 {
     pub fn new(codec: TOutboundCodec) -> Self {
         BaseOutboundCodec {
@@ -75,15 +75,18 @@ where
 /* Base Inbound Codec */

 // This Encodes RPC Responses sent to external peers
-impl<TCodec, TSpec> Encoder for BaseInboundCodec<TCodec, TSpec>
+impl<TCodec, TSpec> Encoder<RPCCodedResponse<TSpec>> for BaseInboundCodec<TCodec, TSpec>
 where
     TSpec: EthSpec,
-    TCodec: Decoder + Encoder<Item = RPCCodedResponse<TSpec>>,
+    TCodec: Decoder + Encoder<RPCCodedResponse<TSpec>>,
 {
-    type Item = RPCCodedResponse<TSpec>;
-    type Error = <TCodec as Encoder>::Error;
+    type Error = <TCodec as Encoder<RPCCodedResponse<TSpec>>>::Error;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(
+        &mut self,
+        item: RPCCodedResponse<TSpec>,
+        dst: &mut BytesMut,
+    ) -> Result<(), Self::Error> {
         dst.clear();
         dst.reserve(1);
         dst.put_u8(
@@ -98,7 +101,7 @@ where
 impl<TCodec, TSpec> Decoder for BaseInboundCodec<TCodec, TSpec>
 where
     TSpec: EthSpec,
-    TCodec: Encoder + Decoder<Item = RPCRequest<TSpec>>,
+    TCodec: Encoder<RPCCodedResponse<TSpec>> + Decoder<Item = RPCRequest<TSpec>>,
 {
     type Item = RPCRequest<TSpec>;
     type Error = <TCodec as Decoder>::Error;
@@ -111,15 +114,14 @@ where
 /* Base Outbound Codec */

 // This Encodes RPC Requests sent to external peers
-impl<TCodec, TSpec> Encoder for BaseOutboundCodec<TCodec, TSpec>
+impl<TCodec, TSpec> Encoder<RPCRequest<TSpec>> for BaseOutboundCodec<TCodec, TSpec>
 where
     TSpec: EthSpec,
-    TCodec: OutboundCodec + Encoder<Item = RPCRequest<TSpec>>,
+    TCodec: OutboundCodec<RPCRequest<TSpec>> + Encoder<RPCRequest<TSpec>>,
 {
-    type Item = RPCRequest<TSpec>;
-    type Error = <TCodec as Encoder>::Error;
+    type Error = <TCodec as Encoder<RPCRequest<TSpec>>>::Error;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(&mut self, item: RPCRequest<TSpec>, dst: &mut BytesMut) -> Result<(), Self::Error> {
         self.inner.encode(item, dst)
     }
 }
@@ -128,7 +130,8 @@ where
 impl<TCodec, TSpec> Decoder for BaseOutboundCodec<TCodec, TSpec>
 where
     TSpec: EthSpec,
-    TCodec: OutboundCodec<ErrorType = ErrorMessage> + Decoder<Item = RPCResponse<TSpec>>,
+    TCodec: OutboundCodec<RPCRequest<TSpec>, ErrorType = ErrorMessage>
+        + Decoder<Item = RPCResponse<TSpec>>,
 {
     type Item = RPCCodedResponse<TSpec>;
     type Error = <TCodec as Decoder>::Error;
@@ -168,3 +171,47 @@ where
         inner_result
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::super::ssz::*;
+    use super::super::ssz_snappy::*;
+    use super::*;
+    use crate::rpc::protocol::*;
+
+    #[test]
+    fn test_decode_status_message() {
+        let message = hex::decode("ff060000734e615070590032000006e71e7b54989925efd6c9cbcb8ceb9b5f71216f5137282bf6a1e3b50f64e42d6c7fb347abe07eb0db8200000005029e2800").unwrap();
+        let mut buf = BytesMut::new();
+        buf.extend_from_slice(&message);
+
+        type Spec = types::MainnetEthSpec;
+
+        let snappy_protocol_id =
+            ProtocolId::new(Protocol::Status, Version::V1, Encoding::SSZSnappy);
+        let ssz_protocol_id = ProtocolId::new(Protocol::Status, Version::V1, Encoding::SSZ);
+
+        let mut snappy_outbound_codec =
+            SSZSnappyOutboundCodec::<Spec>::new(snappy_protocol_id, 1_048_576);
+        let mut ssz_outbound_codec = SSZOutboundCodec::<Spec>::new(ssz_protocol_id, 1_048_576);
+
+        // decode the message as just a snappy message
+        let snappy_decoded_message = snappy_outbound_codec.decode(&mut buf.clone());
+        // decode the message as just an ssz message
+        let ssz_decoded_message = ssz_outbound_codec.decode(&mut buf.clone());
+
+        // build codecs for the entire chunk
+        let mut snappy_base_outbound_codec = BaseOutboundCodec::new(snappy_outbound_codec);
+        let mut ssz_base_outbound_codec = BaseOutboundCodec::new(ssz_outbound_codec);
+
+        // decode the message as an ssz snappy chunk
+        let snappy_decoded_chunk = snappy_base_outbound_codec.decode(&mut buf.clone());
+        // decode the message as just an ssz chunk
+        let ssz_decoded_chunk = ssz_base_outbound_codec.decode(&mut buf.clone());
+
+        let _ = dbg!(snappy_decoded_message);
+        let _ = dbg!(ssz_decoded_message);
+        let _ = dbg!(snappy_decoded_chunk);
+        let _ = dbg!(ssz_decoded_chunk);
+    }
+}
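The mechanical change running through all of these codec diffs is the tokio-util upgrade: `Encoder` lost its `Item` associated type and instead takes the item as a type parameter, which lets trait bounds such as `OutboundCodec<RPCRequest<TSpec>>` name the encoded item explicitly. A minimal line codec in the new style, a sketch assuming tokio-util 0.3 and bytes 0.5 (`LineCodec` is illustrative):

use bytes::{BufMut, BytesMut};
use tokio_util::codec::{Decoder, Encoder};

struct LineCodec;

// The item type is now a generic parameter on `Encoder`, not `type Item`.
impl Encoder<String> for LineCodec {
    type Error = std::io::Error;

    fn encode(&mut self, item: String, dst: &mut BytesMut) -> Result<(), Self::Error> {
        dst.reserve(item.len() + 1);
        dst.put_slice(item.as_bytes());
        dst.put_u8(b'\n');
        Ok(())
    }
}

// `Decoder` is unchanged: the decoded item stays an associated type.
impl Decoder for LineCodec {
    type Item = String;
    type Error = std::io::Error;

    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
        if let Some(pos) = src.iter().position(|b| *b == b'\n') {
            // `split_to` consumes the frame (and the delimiter) from the buffer
            let line = src.split_to(pos + 1);
            Ok(Some(String::from_utf8_lossy(&line[..pos]).into_owned()))
        } else {
            Ok(None)
        }
    }
}

fn main() {
    let mut codec = LineCodec;
    let mut buf = BytesMut::new();
    codec.encode("hello".to_string(), &mut buf).unwrap();
    assert_eq!(codec.decode(&mut buf).unwrap(), Some("hello".to_string()));
}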

View File

@@ -8,7 +8,7 @@ use self::ssz_snappy::{SSZSnappyInboundCodec, SSZSnappyOutboundCodec};
 use crate::rpc::protocol::RPCError;
 use crate::rpc::{RPCCodedResponse, RPCRequest};
 use libp2p::bytes::BytesMut;
-use tokio::codec::{Decoder, Encoder};
+use tokio_util::codec::{Decoder, Encoder};
 use types::EthSpec;

 // Known types of codecs
@@ -22,11 +22,10 @@ pub enum OutboundCodec<TSpec: EthSpec> {
     SSZ(BaseOutboundCodec<SSZOutboundCodec<TSpec>, TSpec>),
 }

-impl<T: EthSpec> Encoder for InboundCodec<T> {
-    type Item = RPCCodedResponse<T>;
+impl<T: EthSpec> Encoder<RPCCodedResponse<T>> for InboundCodec<T> {
     type Error = RPCError;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(&mut self, item: RPCCodedResponse<T>, dst: &mut BytesMut) -> Result<(), Self::Error> {
         match self {
             InboundCodec::SSZ(codec) => codec.encode(item, dst),
             InboundCodec::SSZSnappy(codec) => codec.encode(item, dst),
@@ -46,11 +45,10 @@ impl<TSpec: EthSpec> Decoder for InboundCodec<TSpec> {
     }
 }

-impl<TSpec: EthSpec> Encoder for OutboundCodec<TSpec> {
-    type Item = RPCRequest<TSpec>;
+impl<TSpec: EthSpec> Encoder<RPCRequest<TSpec>> for OutboundCodec<TSpec> {
     type Error = RPCError;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(&mut self, item: RPCRequest<TSpec>, dst: &mut BytesMut) -> Result<(), Self::Error> {
         match self {
             OutboundCodec::SSZ(codec) => codec.encode(item, dst),
             OutboundCodec::SSZSnappy(codec) => codec.encode(item, dst),

View File

@@ -7,7 +7,7 @@ use crate::rpc::{ErrorMessage, RPCCodedResponse, RPCRequest, RPCResponse};
 use libp2p::bytes::{BufMut, Bytes, BytesMut};
 use ssz::{Decode, Encode};
 use std::marker::PhantomData;
-use tokio::codec::{Decoder, Encoder};
+use tokio_util::codec::{Decoder, Encoder};
 use types::{EthSpec, SignedBeaconBlock};
 use unsigned_varint::codec::UviBytes;
@@ -19,7 +19,7 @@ pub struct SSZInboundCodec<TSpec: EthSpec> {
     phantom: PhantomData<TSpec>,
 }

-impl<T: EthSpec> SSZInboundCodec<T> {
+impl<TSpec: EthSpec> SSZInboundCodec<TSpec> {
     pub fn new(protocol: ProtocolId, max_packet_size: usize) -> Self {
         let mut uvi_codec = UviBytes::default();
         uvi_codec.set_max_len(max_packet_size);
@@ -36,11 +36,14 @@ impl<T: EthSpec> SSZInboundCodec<T> {
 }

 // Encoder for inbound streams: Encodes RPC Responses sent to peers.
-impl<TSpec: EthSpec> Encoder for SSZInboundCodec<TSpec> {
-    type Item = RPCCodedResponse<TSpec>;
+impl<TSpec: EthSpec> Encoder<RPCCodedResponse<TSpec>> for SSZInboundCodec<TSpec> {
     type Error = RPCError;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(
+        &mut self,
+        item: RPCCodedResponse<TSpec>,
+        dst: &mut BytesMut,
+    ) -> Result<(), Self::Error> {
         let bytes = match item {
             RPCCodedResponse::Success(resp) => match resp {
                 RPCResponse::Status(res) => res.as_ssz_bytes(),
@@ -145,11 +148,10 @@ impl<TSpec: EthSpec> SSZOutboundCodec<TSpec> {
 }

 // Encoder for outbound streams: Encodes RPC Requests to peers
-impl<TSpec: EthSpec> Encoder for SSZOutboundCodec<TSpec> {
-    type Item = RPCRequest<TSpec>;
+impl<TSpec: EthSpec> Encoder<RPCRequest<TSpec>> for SSZOutboundCodec<TSpec> {
     type Error = RPCError;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(&mut self, item: RPCRequest<TSpec>, dst: &mut BytesMut) -> Result<(), Self::Error> {
         let bytes = match item {
             RPCRequest::Status(req) => req.as_ssz_bytes(),
             RPCRequest::Goodbye(req) => req.as_ssz_bytes(),
@@ -201,7 +203,7 @@ impl<TSpec: EthSpec> Decoder for SSZOutboundCodec<TSpec> {
         match self.inner.decode(src).map_err(RPCError::from) {
             Ok(Some(mut packet)) => {
                 // take the bytes from the buffer
-                let raw_bytes = packet.take();
+                let raw_bytes = packet.split();

                 match self.protocol.message_name {
                     Protocol::Status => match self.protocol.version {
@@ -239,7 +241,7 @@
     }
 }

-impl<TSpec: EthSpec> OutboundCodec for SSZOutboundCodec<TSpec> {
+impl<TSpec: EthSpec> OutboundCodec<RPCRequest<TSpec>> for SSZOutboundCodec<TSpec> {
     type ErrorType = ErrorMessage;

     fn decode_error(&mut self, src: &mut BytesMut) -> Result<Option<Self::ErrorType>, RPCError> {
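The `packet.take()` to `packet.split()` change is a bytes 0.4 to 0.5 rename with the same semantics: move the entire filled portion out of the buffer. A short demonstration, assuming bytes 0.5:

use bytes::{BufMut, BytesMut};

fn main() {
    let mut packet = BytesMut::with_capacity(16);
    packet.put_slice(b"ssz-bytes");

    // `split` moves out everything written so far, leaving `packet` empty
    // but still owning its capacity (was `packet.take()` in bytes 0.4).
    let raw_bytes = packet.split();
    assert_eq!(&raw_bytes[..], b"ssz-bytes");
    assert!(packet.is_empty());
}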

View File

@@ -12,7 +12,7 @@ use std::io::Cursor;
 use std::io::ErrorKind;
 use std::io::{Read, Write};
 use std::marker::PhantomData;
-use tokio::codec::{Decoder, Encoder};
+use tokio_util::codec::{Decoder, Encoder};
 use types::{EthSpec, SignedBeaconBlock};
 use unsigned_varint::codec::Uvi;
@@ -44,11 +44,14 @@ impl<T: EthSpec> SSZSnappyInboundCodec<T> {
 }

 // Encoder for inbound streams: Encodes RPC Responses sent to peers.
-impl<TSpec: EthSpec> Encoder for SSZSnappyInboundCodec<TSpec> {
-    type Item = RPCCodedResponse<TSpec>;
+impl<TSpec: EthSpec> Encoder<RPCCodedResponse<TSpec>> for SSZSnappyInboundCodec<TSpec> {
     type Error = RPCError;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(
+        &mut self,
+        item: RPCCodedResponse<TSpec>,
+        dst: &mut BytesMut,
+    ) -> Result<(), Self::Error> {
         let bytes = match item {
             RPCCodedResponse::Success(resp) => match resp {
                 RPCResponse::Status(res) => res.as_ssz_bytes(),
@@ -116,7 +119,7 @@ impl<TSpec: EthSpec> Decoder for SSZSnappyInboundCodec<TSpec> {
             // `n` is how many bytes the reader read in the compressed stream
             let n = reader.get_ref().position();
             self.len = None;
-            src.split_to(n as usize);
+            let _read_bytes = src.split_to(n as usize);
             match self.protocol.message_name {
                 Protocol::Status => match self.protocol.version {
                     Version::V1 => Ok(Some(RPCRequest::Status(StatusMessage::from_ssz_bytes(
@@ -193,11 +196,10 @@ impl<TSpec: EthSpec> SSZSnappyOutboundCodec<TSpec> {
 }

 // Encoder for outbound streams: Encodes RPC Requests to peers
-impl<TSpec: EthSpec> Encoder for SSZSnappyOutboundCodec<TSpec> {
-    type Item = RPCRequest<TSpec>;
+impl<TSpec: EthSpec> Encoder<RPCRequest<TSpec>> for SSZSnappyOutboundCodec<TSpec> {
     type Error = RPCError;

-    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
+    fn encode(&mut self, item: RPCRequest<TSpec>, dst: &mut BytesMut) -> Result<(), Self::Error> {
         let bytes = match item {
             RPCRequest::Status(req) => req.as_ssz_bytes(),
             RPCRequest::Goodbye(req) => req.as_ssz_bytes(),
@@ -262,7 +264,7 @@ impl<TSpec: EthSpec> Decoder for SSZSnappyOutboundCodec<TSpec> {
             // `n` is how many bytes the reader read in the compressed stream
             let n = reader.get_ref().position();
             self.len = None;
-            src.split_to(n as usize);
+            let _read_bytes = src.split_to(n as usize);
             match self.protocol.message_name {
                 Protocol::Status => match self.protocol.version {
                     Version::V1 => Ok(Some(RPCResponse::Status(
@@ -307,7 +309,7 @@
     }
 }

-impl<TSpec: EthSpec> OutboundCodec for SSZSnappyOutboundCodec<TSpec> {
+impl<TSpec: EthSpec> OutboundCodec<RPCRequest<TSpec>> for SSZSnappyOutboundCodec<TSpec> {
     type ErrorType = ErrorMessage;

     fn decode_error(&mut self, src: &mut BytesMut) -> Result<Option<Self::ErrorType>, RPCError> {
@@ -334,7 +336,7 @@ impl<TSpec: EthSpec> OutboundCodec for SSZSnappyOutboundCodec<TSpec> {
             // `n` is how many bytes the reader read in the compressed stream
             let n = reader.get_ref().position();
             self.len = None;
-            src.split_to(n as usize);
+            let _read_bytes = src.split_to(n as usize);
             Ok(Some(ErrorMessage::from_ssz_bytes(&decoded_buffer)?))
         }
         Err(e) => match e.kind() {
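The `let _read_bytes = src.split_to(n as usize)` pattern above is cursor bookkeeping: the decoder reads through a `Cursor` so it can learn how many source bytes were actually consumed, then drops exactly that prefix from `src`. The sketch below reproduces the bookkeeping with a plain `Read` standing in for the snappy frame decoder:

use bytes::{BufMut, BytesMut};
use std::io::{Cursor, Read};

fn main() {
    let mut src = BytesMut::new();
    src.put_slice(b"frame-1frame-2");

    // Wrap the buffer in a cursor so the read position is observable.
    let mut reader = Cursor::new(&src[..]);
    let mut decoded = [0u8; 7];
    reader.read_exact(&mut decoded).unwrap();

    // `n` is how many bytes the reader consumed from the source stream;
    // drop exactly that prefix so the next decode starts at the right spot.
    let n = reader.position();
    let _read_bytes = src.split_to(n as usize);

    assert_eq!(&decoded, b"frame-1");
    assert_eq!(&src[..], b"frame-2");
}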

View File

@ -5,7 +5,6 @@ use super::methods::{ErrorMessage, RPCCodedResponse, RequestId, ResponseTerminat
use super::protocol::{Protocol, RPCError, RPCProtocol, RPCRequest}; use super::protocol::{Protocol, RPCError, RPCProtocol, RPCRequest};
use super::RPCEvent; use super::RPCEvent;
use crate::rpc::protocol::{InboundFramed, OutboundFramed}; use crate::rpc::protocol::{InboundFramed, OutboundFramed};
use core::marker::PhantomData;
use fnv::FnvHashMap; use fnv::FnvHashMap;
use futures::prelude::*; use futures::prelude::*;
use libp2p::core::upgrade::{ use libp2p::core::upgrade::{
@ -14,15 +13,18 @@ use libp2p::core::upgrade::{
use libp2p::swarm::protocols_handler::{ use libp2p::swarm::protocols_handler::{
KeepAlive, ProtocolsHandler, ProtocolsHandlerEvent, ProtocolsHandlerUpgrErr, SubstreamProtocol, KeepAlive, ProtocolsHandler, ProtocolsHandlerEvent, ProtocolsHandlerUpgrErr, SubstreamProtocol,
}; };
use libp2p::swarm::NegotiatedSubstream;
use slog::{crit, debug, error, trace, warn}; use slog::{crit, debug, error, trace, warn};
use smallvec::SmallVec; use smallvec::SmallVec;
use std::collections::hash_map::Entry; use std::{
use std::time::{Duration, Instant}; collections::hash_map::Entry,
use tokio::io::{AsyncRead, AsyncWrite}; pin::Pin,
use tokio::timer::{delay_queue, DelayQueue}; task::{Context, Poll},
time::{Duration, Instant},
};
use tokio::time::{delay_queue, DelayQueue};
use types::EthSpec; use types::EthSpec;
//TODO: Implement close() on the substream types to improve the poll code.
//TODO: Implement check_timeout() on the substream types //TODO: Implement check_timeout() on the substream types
/// The time (in seconds) before a substream that is awaiting a response from the user times out. /// The time (in seconds) before a substream that is awaiting a response from the user times out.
@ -39,9 +41,8 @@ type InboundRequestId = RequestId;
type OutboundRequestId = RequestId; type OutboundRequestId = RequestId;
/// Implementation of `ProtocolsHandler` for the RPC protocol. /// Implementation of `ProtocolsHandler` for the RPC protocol.
pub struct RPCHandler<TSubstream, TSpec> pub struct RPCHandler<TSpec>
where where
TSubstream: AsyncRead + AsyncWrite,
TSpec: EthSpec, TSpec: EthSpec,
{ {
/// The upgrade for inbound substreams. /// The upgrade for inbound substreams.
@ -63,7 +64,7 @@ where
inbound_substreams: FnvHashMap< inbound_substreams: FnvHashMap<
InboundRequestId, InboundRequestId,
( (
InboundSubstreamState<TSubstream, TSpec>, InboundSubstreamState<TSpec>,
Option<delay_queue::Key>, Option<delay_queue::Key>,
Protocol, Protocol,
), ),
@ -74,14 +75,8 @@ where
/// Map of outbound substreams that need to be driven to completion. The `RequestId` is /// Map of outbound substreams that need to be driven to completion. The `RequestId` is
/// maintained by the application sending the request. /// maintained by the application sending the request.
outbound_substreams: FnvHashMap< outbound_substreams:
OutboundRequestId, FnvHashMap<OutboundRequestId, (OutboundSubstreamState<TSpec>, delay_queue::Key, Protocol)>,
(
OutboundSubstreamState<TSubstream, TSpec>,
delay_queue::Key,
Protocol,
),
>,
/// Inbound substream `DelayQueue` which keeps track of when an inbound substream will timeout. /// Inbound substream `DelayQueue` which keeps track of when an inbound substream will timeout.
outbound_substreams_delay: DelayQueue<OutboundRequestId>, outbound_substreams_delay: DelayQueue<OutboundRequestId>,
@ -107,21 +102,27 @@ where
/// Logger for handling RPC streams /// Logger for handling RPC streams
log: slog::Logger, log: slog::Logger,
/// Marker to pin the generic stream.
_phantom: PhantomData<TSubstream>,
} }
/// State of an outbound substream. Either waiting for a response, or in the process of sending. pub enum InboundSubstreamState<TSpec>
pub enum InboundSubstreamState<TSubstream, TSpec>
where where
TSubstream: AsyncRead + AsyncWrite,
TSpec: EthSpec, TSpec: EthSpec,
{ {
/// A response has been sent, pending writing and flush. /// A response has been sent, pending writing.
ResponsePendingSend { ResponsePendingSend {
/// The substream used to send the response /// The substream used to send the response
substream: futures::sink::Send<InboundFramed<TSubstream, TSpec>>, substream: InboundFramed<NegotiatedSubstream, TSpec>,
/// The message that is attempting to be sent.
message: RPCCodedResponse<TSpec>,
/// Whether a stream termination is requested. If true the stream will be closed after
/// this send. Otherwise it will transition to an idle state until a stream termination is
/// requested or a timeout is reached.
closing: bool,
},
/// A response has been sent, pending flush.
ResponsePendingFlush {
/// The substream used to send the response
substream: InboundFramed<NegotiatedSubstream, TSpec>,
/// Whether a stream termination is requested. If true the stream will be closed after /// Whether a stream termination is requested. If true the stream will be closed after
/// this send. Otherwise it will transition to an idle state until a stream termination is /// this send. Otherwise it will transition to an idle state until a stream termination is
/// requested or a timeout is reached. /// requested or a timeout is reached.
@ -129,31 +130,31 @@ where
}, },
/// The response stream is idle and awaiting input from the application to send more chunked /// The response stream is idle and awaiting input from the application to send more chunked
/// responses. /// responses.
ResponseIdle(InboundFramed<TSubstream, TSpec>), ResponseIdle(InboundFramed<NegotiatedSubstream, TSpec>),
/// The substream is attempting to shutdown. /// The substream is attempting to shutdown.
Closing(InboundFramed<TSubstream, TSpec>), Closing(InboundFramed<NegotiatedSubstream, TSpec>),
/// Temporary state during processing /// Temporary state during processing
Poisoned, Poisoned,
} }
/// State of an outbound substream. Either waiting for a response, or in the process of sending.
pub enum OutboundSubstreamState<TSpec: EthSpec> {
    /// A request has been sent, and we are awaiting a response. This future is driven in the
    /// handler because GOODBYE requests can be handled and responses dropped instantly.
    RequestPendingResponse {
        /// The framed negotiated substream.
        substream: OutboundFramed<NegotiatedSubstream, TSpec>,
        /// Keeps track of the actual request sent.
        request: RPCRequest<TSpec>,
    },
    /// Closing an outbound substream.
    Closing(OutboundFramed<NegotiatedSubstream, TSpec>),
    /// Temporary state during processing
    Poisoned,
}
impl<TSpec> InboundSubstreamState<TSpec>
where
    TSpec: EthSpec,
{
    /// Moves the substream state to closing and informs the connected peer. The
@@ -172,18 +173,37 @@ where
            RPCCodedResponse::StreamTermination(ResponseTermination::BlocksByRange);
        match std::mem::replace(self, InboundSubstreamState::Poisoned) {
            // if we are busy awaiting a send/flush add the termination to the queue
            InboundSubstreamState::ResponsePendingSend {
                substream,
                message,
                closing,
            } => {
                if !closing {
                    outbound_queue.push(error);
                    outbound_queue.push(stream_termination);
                }
                // if the stream is closing after the send, allow it to finish
                *self = InboundSubstreamState::ResponsePendingSend {
                    substream,
                    message,
                    closing,
                }
            }
            // if we are busy awaiting a send/flush add the termination to the queue
            InboundSubstreamState::ResponsePendingFlush { substream, closing } => {
                if !closing {
                    outbound_queue.push(error);
                    outbound_queue.push(stream_termination);
                }
                // if the stream is closing after the send, allow it to finish
                *self = InboundSubstreamState::ResponsePendingFlush { substream, closing }
            }
            InboundSubstreamState::ResponseIdle(substream) => {
                *self = InboundSubstreamState::ResponsePendingSend {
                    substream,
                    message: error,
                    closing: true,
                };
            }
@@ -198,9 +218,8 @@ where
            }
        }
    }
}

impl<TSpec> RPCHandler<TSpec>
where
    TSpec: EthSpec,
{
    pub fn new(
@@ -225,7 +244,6 @@ where
            inactive_timeout,
            outbound_io_error_retries: 0,
            log: log.clone(),
        }
    }
@@ -258,15 +276,13 @@ where
        }
    }
}

impl<TSpec> ProtocolsHandler for RPCHandler<TSpec>
where
    TSpec: EthSpec,
{
    type InEvent = RPCEvent<TSpec>;
    type OutEvent = RPCEvent<TSpec>;
    type Error = RPCError;
    type InboundProtocol = RPCProtocol<TSpec>;
    type OutboundProtocol = RPCRequest<TSpec>;
    type OutboundOpenInfo = (RequestId, RPCRequest<TSpec>); // Keep track of the id and the request
@@ -277,14 +293,14 @@ where
    fn inject_fully_negotiated_inbound(
        &mut self,
        substream: <Self::InboundProtocol as InboundUpgrade<NegotiatedSubstream>>::Output,
    ) {
        // update the keep alive timeout if there are no more remaining outbound streams
        if let KeepAlive::Until(_) = self.keep_alive {
            self.keep_alive = KeepAlive::Until(Instant::now() + self.inactive_timeout);
        }

        let (req, substream) = substream;
        // drop the stream and return a 0 id for goodbye "requests"
        if let r @ RPCRequest::Goodbye(_) = req {
            self.events_out.push(RPCEvent::Request(0, r));
@@ -309,7 +325,7 @@ where
    fn inject_fully_negotiated_outbound(
        &mut self,
        out: <Self::OutboundProtocol as OutboundUpgrade<NegotiatedSubstream>>::Output,
        request_info: Self::OutboundOpenInfo,
    ) {
        self.dial_negotiated -= 1;
@@ -394,15 +410,18 @@ where
                                // if it's a single rpc request or an error, close the stream after
                                *substream_state =
                                    InboundSubstreamState::ResponsePendingSend {
                                        substream,
                                        message: response,
                                        closing: !res_is_multiple | res_is_error, // close if an error or we are not expecting more responses
                                    };
                            }
                        }
                    }
                    InboundSubstreamState::ResponsePendingSend {
                        substream,
                        message,
                        closing,
                    } if res_is_multiple => {
                        // the stream is in use, add the request to a pending queue
                        self.queued_outbound_items
                            .entry(rpc_id)
@@ -411,6 +430,22 @@ where
                        // return the state
                        *substream_state = InboundSubstreamState::ResponsePendingSend {
                            substream,
                            message,
                            closing,
                        };
                    }
                    InboundSubstreamState::ResponsePendingFlush { substream, closing }
                        if res_is_multiple =>
                    {
                        // the stream is in use, add the request to a pending queue
                        self.queued_outbound_items
                            .entry(rpc_id)
                            .or_insert_with(Vec::new)
                            .push(response);
                        // return the state
                        *substream_state = InboundSubstreamState::ResponsePendingFlush {
                            substream,
                            closing,
                        };
@@ -419,8 +454,20 @@ where
                        *substream_state = InboundSubstreamState::Closing(substream);
                        debug!(self.log, "Response not sent. Stream is closing"; "response" => format!("{}", response));
                    }
                    InboundSubstreamState::ResponsePendingSend {
                        substream,
                        message,
                        ..
                    } => {
                        *substream_state = InboundSubstreamState::ResponsePendingSend {
                            substream,
                            message,
                            closing: true,
                        };
                        error!(self.log, "Attempted sending multiple responses to a single response request");
                    }
                    InboundSubstreamState::ResponsePendingFlush { substream, .. } => {
                        *substream_state = InboundSubstreamState::ResponsePendingFlush {
                            substream,
                            closing: true,
                        };
@@ -433,7 +480,7 @@ where
                    }
                }
            }
            None => {
                warn!(self.log, "Stream has expired. Response not sent"; "response" => response.to_string(), "id" => rpc_id);
            }
        };
    }
@@ -446,7 +493,7 @@ where
        &mut self,
        request_info: Self::OutboundOpenInfo,
        error: ProtocolsHandlerUpgrErr<
            <Self::OutboundProtocol as OutboundUpgrade<NegotiatedSubstream>>::Error,
        >,
    ) {
        let (id, req) = request_info;
@@ -470,7 +517,7 @@ where
            ProtocolsHandlerUpgrErr::Upgrade(UpgradeError::Select(
                NegotiationError::ProtocolError(e),
            )) => match e {
                ProtocolError::IoError(io_err) => RPCError::IoError(io_err.to_string()),
                ProtocolError::InvalidProtocol => {
                    RPCError::InternalError("Protocol was deemed invalid")
                }
@@ -490,64 +537,82 @@ where
    fn poll(
        &mut self,
        cx: &mut Context<'_>,
    ) -> Poll<
        ProtocolsHandlerEvent<
            Self::OutboundProtocol,
            Self::OutboundOpenInfo,
            Self::OutEvent,
            Self::Error,
        >,
    > {
        if !self.pending_error.is_empty() {
            let (id, protocol, err) = self.pending_error.remove(0);
            return Poll::Ready(ProtocolsHandlerEvent::Custom(RPCEvent::Error(
                id, protocol, err,
            )));
        }

        // return any events that need to be reported
        if !self.events_out.is_empty() {
            return Poll::Ready(ProtocolsHandlerEvent::Custom(self.events_out.remove(0)));
        } else {
            self.events_out.shrink_to_fit();
        }

        // purge expired inbound substreams and send an error
        loop {
            match self.inbound_substreams_delay.poll_next_unpin(cx) {
                Poll::Ready(Some(Ok(stream_id))) => {
                    // handle a stream timeout for various states
                    if let Some((substream_state, delay_key, _)) =
                        self.inbound_substreams.get_mut(stream_id.get_ref())
                    {
                        // the delay has been removed
                        *delay_key = None;

                        let outbound_queue = self
                            .queued_outbound_items
                            .entry(stream_id.into_inner())
                            .or_insert_with(Vec::new);
                        substream_state.close(outbound_queue);
                    }
                }
                Poll::Ready(Some(Err(e))) => {
                    warn!(self.log, "Inbound substream poll failed"; "error" => format!("{:?}", e));
                    // drops the peer if we cannot read the delay queue
                    return Poll::Ready(ProtocolsHandlerEvent::Close(RPCError::InternalError(
                        "Could not poll inbound stream timer",
                    )));
                }
                Poll::Pending | Poll::Ready(None) => break,
            }
        }

        // purge expired outbound substreams
        loop {
            match self.outbound_substreams_delay.poll_next_unpin(cx) {
                Poll::Ready(Some(Ok(stream_id))) => {
                    if let Some((_id, _stream, protocol)) =
                        self.outbound_substreams.remove(stream_id.get_ref())
                    {
                        // notify the user
                        return Poll::Ready(ProtocolsHandlerEvent::Custom(RPCEvent::Error(
                            *stream_id.get_ref(),
                            protocol,
                            RPCError::StreamTimeout,
                        )));
                    } else {
                        crit!(self.log, "timed out substream not in the books"; "stream_id" => stream_id.get_ref());
                    }
                }
                Poll::Ready(Some(Err(e))) => {
                    warn!(self.log, "Outbound substream poll failed"; "error" => format!("{:?}", e));
                    return Poll::Ready(ProtocolsHandlerEvent::Close(RPCError::InternalError(
                        "Could not poll outbound stream timer",
                    )));
                }
                Poll::Pending | Poll::Ready(None) => break,
            }
        }
@@ -566,20 +631,75 @@ where
            ) {
                InboundSubstreamState::ResponsePendingSend {
                    mut substream,
                    message,
                    closing,
                } => {
                    match Sink::poll_ready(Pin::new(&mut substream), cx) {
                        Poll::Ready(Ok(())) => {
                            // stream is ready to send data
                            match Sink::start_send(Pin::new(&mut substream), message) {
                                Ok(()) => {
                                    // await flush
                                    entry.get_mut().0 =
                                        InboundSubstreamState::ResponsePendingFlush {
                                            substream,
                                            closing,
                                        }
                                }
                                Err(e) => {
                                    // error with sending in the codec
                                    warn!(self.log, "Error sending RPC message"; "error" => e.to_string());
                                    // keep the connection with the peer and return the
                                    // stream to awaiting response if this message
                                    // wasn't closing the stream
                                    // TODO: Duplicate code
                                    if closing {
                                        entry.get_mut().0 =
                                            InboundSubstreamState::Closing(substream)
                                    } else {
                                        // check for queued chunks and update the stream
                                        entry.get_mut().0 = apply_queued_responses(
                                            substream,
                                            &mut self
                                                .queued_outbound_items
                                                .get_mut(&request_id),
                                            &mut new_items_to_send,
                                        );
                                    }
                                }
                            }
                        }
                        Poll::Ready(Err(e)) => {
                            error!(self.log, "Inbound substream error while sending RPC message: {:?}", e);
                            entry.remove();
                            return Poll::Ready(ProtocolsHandlerEvent::Close(e));
                        }
                        Poll::Pending => {
                            // the stream is not yet ready, continue waiting
                            entry.get_mut().0 =
                                InboundSubstreamState::ResponsePendingSend {
                                    substream,
                                    message,
                                    closing,
                                };
                        }
                    }
                }
                InboundSubstreamState::ResponsePendingFlush {
                    mut substream,
                    closing,
                } => {
                    match Sink::poll_flush(Pin::new(&mut substream), cx) {
                        Poll::Ready(Ok(())) => {
                            // finished flushing
                            // TODO: Duplicate code
                            if closing {
                                entry.get_mut().0 =
                                    InboundSubstreamState::Closing(substream)
                            } else {
                                // check for queued chunks and update the stream
                                entry.get_mut().0 = apply_queued_responses(
                                    substream,
                                    &mut self
                                        .queued_outbound_items
                                        .get_mut(&request_id),
                                    &mut new_items_to_send,
                                );
                            }
                        }
                        Poll::Ready(Err(e)) => {
                            // error during flush
                            trace!(self.log, "Error flushing RPC message"; "error" => e.to_string());
                            // we drop the stream on error and inform the user, remove
                            // any pending requests
                            // TODO: Duplicate code
                            if let Some(delay_key) = &entry.get().1 {
                                self.inbound_substreams_delay.remove(delay_key);
                            }
                            self.queued_outbound_items.remove(&request_id);
                            entry.remove();

                            if self.outbound_substreams.is_empty()
                                && self.inbound_substreams.is_empty()
                            {
                                self.keep_alive = KeepAlive::Until(
                                    Instant::now() + self.inactive_timeout,
                                );
                            }
                        }
                        Poll::Pending => {
                            entry.get_mut().0 =
                                InboundSubstreamState::ResponsePendingFlush {
                                    substream,
                                    closing,
                                };
                        }
                    }
                }
                InboundSubstreamState::ResponseIdle(substream) => {
                    entry.get_mut().0 = apply_queued_responses(
@@ -614,9 +744,8 @@ where
                    );
                }
                InboundSubstreamState::Closing(mut substream) => {
                    match Sink::poll_close(Pin::new(&mut substream), cx) {
                        Poll::Ready(Ok(())) => {
                            if let Some(delay_key) = &entry.get().1 {
                                self.inbound_substreams_delay.remove(delay_key);
                            }
@@ -631,7 +760,25 @@ where
                                );
                            }
                        } // drop the stream
                        Poll::Ready(Err(e)) => {
                            error!(self.log, "Error closing inbound stream"; "error" => e.to_string());
                            // drop the stream anyway
                            // TODO: Duplicate code
                            if let Some(delay_key) = &entry.get().1 {
                                self.inbound_substreams_delay.remove(delay_key);
                            }
                            self.queued_outbound_items.remove(&request_id);
                            entry.remove();

                            if self.outbound_substreams.is_empty()
                                && self.inbound_substreams.is_empty()
                            {
                                self.keep_alive = KeepAlive::Until(
                                    Instant::now() + self.inactive_timeout,
                                );
                            }
                        }
                        Poll::Pending => {
                            entry.get_mut().0 =
                                InboundSubstreamState::Closing(substream);
                        }
@@ -641,7 +788,7 @@ where
                InboundSubstreamState::Poisoned => {
                    crit!(self.log, "Poisoned inbound substream");
                    unreachable!("Coding Error: Inbound Substream is poisoned");
                }
            }
            }
            Entry::Vacant(_) => unreachable!(),
        }
@@ -659,8 +806,8 @@ where
                OutboundSubstreamState::RequestPendingResponse {
                    mut substream,
                    request,
                } => match substream.poll_next_unpin(cx) {
                    Poll::Ready(Some(Ok(response))) => {
                        if request.multiple_responses() && !response.is_error() {
                            entry.get_mut().0 =
                                OutboundSubstreamState::RequestPendingResponse {
@@ -678,11 +825,11 @@ where
                            entry.get_mut().0 = OutboundSubstreamState::Closing(substream);
                        }

                        return Poll::Ready(ProtocolsHandlerEvent::Custom(
                            RPCEvent::Response(request_id, response),
                        ));
                    }
                    Poll::Ready(None) => {
                        // stream closed
                        // if we expected multiple streams send a stream termination,
                        // else report the stream terminating only.
@@ -694,59 +841,62 @@ where
                        // notify the application error
                        if request.multiple_responses() {
                            // return an end of stream result
                            return Poll::Ready(ProtocolsHandlerEvent::Custom(
                                RPCEvent::Response(
                                    request_id,
                                    RPCCodedResponse::StreamTermination(
                                        request.stream_termination(),
                                    ),
                                ),
                            ));
                        } // else we return an error, stream should not have closed early.
                        return Poll::Ready(ProtocolsHandlerEvent::Custom(
                            RPCEvent::Error(
                                request_id,
                                request.protocol(),
                                RPCError::IncompleteStream,
                            ),
                        ));
                    }
                    Poll::Pending => {
                        entry.get_mut().0 = OutboundSubstreamState::RequestPendingResponse {
                            substream,
                            request,
                        }
                    }
                    Poll::Ready(Some(Err(e))) => {
                        // drop the stream
                        let delay_key = &entry.get().1;
                        self.outbound_substreams_delay.remove(delay_key);
                        let protocol = entry.get().2;
                        entry.remove_entry();
                        return Poll::Ready(ProtocolsHandlerEvent::Custom(
                            RPCEvent::Error(request_id, protocol, e),
                        ));
                    }
                },
                OutboundSubstreamState::Closing(mut substream) => {
                    match Sink::poll_close(Pin::new(&mut substream), cx) {
                        // TODO: check if this is supposed to be a stream
                        Poll::Ready(_) => {
                            // drop the stream - including if there is an error
                            let delay_key = &entry.get().1;
                            self.outbound_substreams_delay.remove(delay_key);
                            entry.remove_entry();

                            if self.outbound_substreams.is_empty()
                                && self.inbound_substreams.is_empty()
                            {
                                self.keep_alive = KeepAlive::Until(
                                    Instant::now() + self.inactive_timeout,
                                );
                            }
                        }
                        Poll::Pending => {
                            entry.get_mut().0 = OutboundSubstreamState::Closing(substream);
                        }
                    }
                }
                OutboundSubstreamState::Poisoned => {
                    crit!(self.log, "Poisoned outbound substream");
                    unreachable!("Coding Error: Outbound substream is poisoned")
@@ -762,23 +912,21 @@ where
            self.dial_negotiated += 1;
            let (id, req) = self.dial_queue.remove(0);
            self.dial_queue.shrink_to_fit();
            return Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
                protocol: SubstreamProtocol::new(req.clone()),
                info: (id, req),
            });
        }
        Poll::Pending
    }
}
// Check for new items to send to the peer and update the underlying stream
fn apply_queued_responses<TSpec: EthSpec>(
    substream: InboundFramed<NegotiatedSubstream, TSpec>,
    queued_outbound_items: &mut Option<&mut Vec<RPCCodedResponse<TSpec>>>,
    new_items_to_send: &mut bool,
) -> InboundSubstreamState<TSpec> {
    match queued_outbound_items {
        Some(ref mut queue) if !queue.is_empty() => {
            *new_items_to_send = true;
@@ -786,17 +934,18 @@ fn apply_queued_responses
            match queue.remove(0) {
                RPCCodedResponse::StreamTermination(_) => {
                    // close the stream if this is a stream termination
                    InboundSubstreamState::Closing(substream)
                }
                chunk => InboundSubstreamState::ResponsePendingSend {
                    substream,
                    message: chunk,
                    closing: false,
                },
            }
        }
        _ => {
            // no items queued set to idle
            InboundSubstreamState::ResponseIdle(substream)
        }
    }
}
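Note: the send path above follows the three-step futures 0.3 `Sink` contract - `poll_ready`, then `start_send`, then `poll_flush` - which is why the port splits the old single `ResponsePendingSend` state (which stored a `futures::sink::Send` future that owned the stream) into separate send and flush states holding the stream and the pending message. A minimal sketch of that contract for a generic sink (illustrative only; the helper `poll_send_item` is hypothetical and not part of this PR):

use futures::Sink;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Drive one item through a `Sink` using the futures 0.3 protocol:
/// `poll_ready` -> `start_send` -> `poll_flush`. Resolves once the item
/// has been accepted by the sink and flushed to the underlying IO.
fn poll_send_item<S, T>(
    sink: &mut S,
    item: &mut Option<T>,
    cx: &mut Context<'_>,
) -> Poll<Result<(), S::Error>>
where
    S: Sink<T> + Unpin,
{
    if item.is_some() {
        // wait until the sink can accept a new item
        match Pin::new(&mut *sink).poll_ready(cx) {
            Poll::Ready(Ok(())) => {
                // hand the item to the sink; this only buffers it
                let value = item.take().expect("checked is_some above");
                if let Err(e) = Pin::new(&mut *sink).start_send(value) {
                    return Poll::Ready(Err(e));
                }
            }
            Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
            Poll::Pending => return Poll::Pending,
        }
    }
    // drive the buffered item onto the wire; Ready(Ok(())) means fully flushed
    Pin::new(sink).poll_flush(cx)
}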

View File

@@ -164,7 +164,7 @@ pub enum RPCResponse<T: EthSpec> {
}

/// Indicates which response is being terminated by a stream termination response.
#[derive(Debug, Clone)]
pub enum ResponseTermination {
    /// Blocks by range stream termination.
    BlocksByRange,
@@ -175,7 +175,7 @@ pub enum ResponseTermination {
/// The structured response containing a result/code indicating success or failure
/// and the contents of the response
#[derive(Debug, Clone)]
pub enum RPCCodedResponse<T: EthSpec> {
    /// The response is successful.
    Success(RPCResponse<T>),
@@ -194,7 +194,7 @@ pub enum RPCCodedResponse<T: EthSpec> {
}

/// The code assigned to an erroneous `RPCResponse`.
#[derive(Debug, Clone)]
pub enum RPCResponseErrorCode {
    InvalidRequest,
    ServerError,
@@ -268,14 +268,14 @@ impl<T: EthSpec> RPCCodedResponse<T> {
    }
}

#[derive(Encode, Decode, Debug, Clone)]
pub struct ErrorMessage {
    /// The UTF-8 encoded Error message string.
    pub error_message: Vec<u8>,
}

impl std::string::ToString for ErrorMessage {
    fn to_string(&self) -> String {
        String::from_utf8(self.error_message.clone()).unwrap_or_else(|_| "".into())
    }
}
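Note: implementing `std::string::ToString` directly compiles, but the conventional route is to implement `fmt::Display`, which yields `to_string` through the blanket `impl<T: fmt::Display> ToString for T`. A hedged alternative sketch (not what this PR does; it also renders invalid UTF-8 lossily rather than as an empty string):

use std::fmt;

pub struct ErrorMessage {
    /// The UTF-8 encoded Error message string.
    pub error_message: Vec<u8>,
}

impl fmt::Display for ErrorMessage {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // lossy conversion avoids the clone + unwrap_or_else of the manual impl
        write!(f, "{}", String::from_utf8_lossy(&self.error_message))
    }
}
// `ErrorMessage::to_string()` is now provided by the blanket impl.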

View File

@@ -4,12 +4,11 @@
//! direct peer-to-peer communication primarily for sending/receiving chain information for
//! syncing.

use handler::RPCHandler;
use libp2p::core::{connection::ConnectionId, ConnectedPoint};
use libp2p::swarm::{
    protocols_handler::ProtocolsHandler, NetworkBehaviour, NetworkBehaviourAction, NotifyHandler,
    PollParameters, SubstreamProtocol,
};
use libp2p::{Multiaddr, PeerId};
pub use methods::{
@@ -19,8 +18,8 @@ pub use methods::{
pub use protocol::{Protocol, RPCError, RPCProtocol, RPCRequest};
use slog::{debug, o};
use std::marker::PhantomData;
use std::task::{Context, Poll};
use std::time::Duration;
use types::EthSpec;

pub(crate) mod codec;
@@ -29,7 +28,7 @@ pub mod methods;
mod protocol;

/// The return type used in the behaviour and the resultant event from the protocols handler.
#[derive(Debug, Clone)]
pub enum RPCEvent<T: EthSpec> {
    /// An inbound/outbound request for RPC protocol. The first parameter is a sequential
    /// id which tracks an awaiting substream for the response.
@@ -42,6 +41,14 @@ pub enum RPCEvent<T: EthSpec> {
    Error(RequestId, Protocol, RPCError),
}

/// Messages sent to the user from the RPC protocol.
pub struct RPCMessage<TSpec: EthSpec> {
    /// The peer that sent the message.
    pub peer_id: PeerId,
    /// The message that was sent.
    pub event: RPCEvent<TSpec>,
}
impl<T: EthSpec> RPCEvent<T> {
    pub fn id(&self) -> usize {
        match *self {
@@ -68,21 +75,18 @@ impl<T: EthSpec> std::fmt::Display for RPCEvent<T> {
/// Implements the libp2p `NetworkBehaviour` trait and therefore manages network-level
/// logic.
pub struct RPC<TSpec: EthSpec> {
    /// Queue of events to be processed.
    events: Vec<NetworkBehaviourAction<RPCEvent<TSpec>, RPCMessage<TSpec>>>,
    /// Slog logger for RPC behaviour.
    log: slog::Logger,
}

impl<TSpec: EthSpec> RPC<TSpec> {
    pub fn new(log: slog::Logger) -> Self {
        let log = log.new(o!("service" => "libp2p_rpc"));
        RPC {
            events: Vec::new(),
            log,
        }
    }
@@ -91,19 +95,19 @@ impl<TSpec: EthSpec> RPC<TSpec> {
    ///
    /// The peer must be connected for this to succeed.
    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent<TSpec>) {
        self.events.push(NetworkBehaviourAction::NotifyHandler {
            peer_id,
            handler: NotifyHandler::Any,
            event: rpc_event,
        });
    }
}
impl<TSpec> NetworkBehaviour for RPC<TSpec>
where
    TSpec: EthSpec,
{
    type ProtocolsHandler = RPCHandler<TSpec>;
    type OutEvent = RPCMessage<TSpec>;

    fn new_handler(&mut self) -> Self::ProtocolsHandler {
@@ -121,75 +125,64 @@ where
        Vec::new()
    }

    // Use connection established/closed instead of these currently
    fn inject_connected(&mut self, peer_id: &PeerId) {
        // find the peer's meta-data
        debug!(self.log, "Requesting new peer's metadata"; "peer_id" => format!("{}", peer_id));
        let rpc_event =
            RPCEvent::Request(RequestId::from(0usize), RPCRequest::MetaData(PhantomData));
        self.events.push(NetworkBehaviourAction::NotifyHandler {
            peer_id: peer_id.clone(),
            handler: NotifyHandler::Any,
            event: rpc_event,
        });
    }
    fn inject_disconnected(&mut self, _peer_id: &PeerId) {}

    fn inject_connection_established(
        &mut self,
        _peer_id: &PeerId,
        _: &ConnectionId,
        _connected_point: &ConnectedPoint,
    ) {
    }

    fn inject_connection_closed(
        &mut self,
        _peer_id: &PeerId,
        _: &ConnectionId,
        _connected_point: &ConnectedPoint,
    ) {
    }

    fn inject_event(
        &mut self,
        source: PeerId,
        _: ConnectionId,
        event: <Self::ProtocolsHandler as ProtocolsHandler>::OutEvent,
    ) {
        // send the event to the user
        self.events
            .push(NetworkBehaviourAction::GenerateEvent(RPCMessage {
                peer_id: source,
                event,
            }));
    }
    fn poll(
        &mut self,
        _cx: &mut Context,
        _: &mut impl PollParameters,
    ) -> Poll<
        NetworkBehaviourAction<
            <Self::ProtocolsHandler as ProtocolsHandler>::InEvent,
            Self::OutEvent,
        >,
    > {
        if !self.events.is_empty() {
            return Poll::Ready(self.events.remove(0));
        }
        Poll::Pending
    }
}
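Note: the `poll` rewrite above is the pattern repeated throughout this PR: futures 0.1's `Async<T>` and `Ok(Async::NotReady)` (with errors threaded through the poll result) become futures 0.3's `Poll<T>` and `Poll::Pending`, and the caller now supplies a `Context` carrying the waker. An illustrative before/after on a toy stream (assumption: not code from this PR):

use std::pin::Pin;
use std::task::{Context, Poll};

/// A counter stream that yields 0..max, futures 0.3 style.
struct Counter {
    current: u64,
    max: u64,
}

impl futures::Stream for Counter {
    type Item = u64;

    // The futures 0.1 equivalent was roughly:
    //   fn poll(&mut self) -> Poll<Option<u64>, Error>  // Poll = Result<Async<T>, E>
    // returning Ok(Async::Ready(Some(x))), Ok(Async::NotReady) or Err(e).
    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<u64>> {
        if self.current < self.max {
            self.current += 1;
            Poll::Ready(Some(self.current - 1))
        } else {
            // errors are no longer part of `Stream`; fallible streams yield Result items
            Poll::Ready(None)
        }
    }
}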

View File

@@ -10,17 +10,19 @@ use crate::rpc::{
    },
    methods::ResponseTermination,
};
use futures::future::Ready;
use futures::prelude::*;
use futures::prelude::{AsyncRead, AsyncWrite};
use libp2p::core::{InboundUpgrade, OutboundUpgrade, ProtocolName, UpgradeInfo};
use std::io;
use std::marker::PhantomData;
use std::pin::Pin;
use std::time::Duration;
use tokio_io_timeout::TimeoutStream;
use tokio_util::{
    codec::Framed,
    compat::{Compat, FuturesAsyncReadCompatExt},
};
use types::EthSpec;

/// The maximum bytes that can be sent across the RPC.
@@ -171,45 +173,28 @@ impl ProtocolName for ProtocolId {
pub type InboundOutput<TSocket, TSpec> = (RPCRequest<TSpec>, InboundFramed<TSocket, TSpec>);
pub type InboundFramed<TSocket, TSpec> =
    Framed<TimeoutStream<Compat<TSocket>>, InboundCodec<TSpec>>;

type FnAndThen<TSocket, TSpec> = fn(
    (
        Option<Result<RPCRequest<TSpec>, RPCError>>,
        InboundFramed<TSocket, TSpec>,
    ),
) -> Ready<Result<InboundOutput<TSocket, TSpec>, RPCError>>;
type FnMapErr = fn(tokio::time::Elapsed) -> RPCError;
impl<TSocket, TSpec> InboundUpgrade<TSocket> for RPCProtocol<TSpec>
where
    TSocket: AsyncRead + AsyncWrite + Unpin + Send + 'static,
    TSpec: EthSpec,
{
    type Output = InboundOutput<TSocket, TSpec>;
    type Error = RPCError;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Output, Self::Error>> + Send>>;

    fn upgrade_inbound(self, socket: TSocket, protocol: ProtocolId) -> Self::Future {
        let protocol_name = protocol.message_name;
        // convert the socket to a tokio compatible socket
        let socket = socket.compat();
        let codec = match protocol.encoding {
            Encoding::SSZSnappy => {
                let ssz_snappy_codec =
@@ -226,32 +211,23 @@ where
        let socket = Framed::new(timed_socket, codec);

        // MetaData requests should be empty, return the stream
        Box::pin(match protocol_name {
            Protocol::MetaData => {
                future::Either::Left(future::ok((RPCRequest::MetaData(PhantomData), socket)))
            }
            _ => future::Either::Right(
                tokio::time::timeout(Duration::from_secs(REQUEST_TIMEOUT), socket.into_future())
                    .map_err(RPCError::from as FnMapErr)
                    .and_then({
                        |(req, stream)| match req {
                            Some(Ok(request)) => future::ok((request, stream)),
                            Some(Err(_)) | None => future::err(RPCError::IncompleteStream),
                        }
                    } as FnAndThen<TSocket, TSpec>),
            ),
        })
    }
}
@@ -371,23 +347,20 @@ impl<TSpec: EthSpec> RPCRequest<TSpec> {
/* Outbound upgrades */

pub type OutboundFramed<TSocket, TSpec> = Framed<Compat<TSocket>, OutboundCodec<TSpec>>;

impl<TSocket, TSpec> OutboundUpgrade<TSocket> for RPCRequest<TSpec>
where
    TSpec: EthSpec + Send + 'static,
    TSocket: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
    type Output = OutboundFramed<TSocket, TSpec>;
    type Error = RPCError;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Output, Self::Error>> + Send>>;

    fn upgrade_outbound(self, socket: TSocket, protocol: Self::Info) -> Self::Future {
        // convert to a tokio compatible socket
        let socket = socket.compat();
        let codec = match protocol.encoding {
            Encoding::SSZSnappy => {
                let ssz_snappy_codec =
@@ -400,18 +373,22 @@ where
                OutboundCodec::SSZ(ssz_codec)
            }
        };

        let mut socket = Framed::new(socket, codec);
        let future = async { socket.send(self).await.map(|_| socket) };
        Box::pin(future)
    }
}

/// Error in RPC Encoding/Decoding.
#[derive(Debug, Clone)]
pub enum RPCError {
    /// Error when decoding the raw buffer from ssz.
    // NOTE: in the future a ssz::DecodeError should map to an InvalidData error
    SSZDecodeError(ssz::DecodeError),
    /// IO Error.
    IoError(String),
    /// The peer returned a valid response but the response indicated an error.
    ErrorResponse(RPCResponseErrorCode),
    /// Timed out waiting for a response.
@@ -434,10 +411,15 @@ impl From<ssz::DecodeError> for RPCError {
        RPCError::SSZDecodeError(err)
    }
}

impl From<tokio::time::Elapsed> for RPCError {
    fn from(_: tokio::time::Elapsed) -> Self {
        RPCError::StreamTimeout
    }
}

impl From<io::Error> for RPCError {
    fn from(err: io::Error) -> Self {
        RPCError::IoError(err.to_string())
    }
}
@@ -463,7 +445,7 @@ impl std::error::Error for RPCError {
        match *self {
            // NOTE: this does have a source
            RPCError::SSZDecodeError(_) => None,
            RPCError::IoError(_) => None,
            RPCError::StreamTimeout => None,
            RPCError::UnsupportedProtocol => None,
            RPCError::IncompleteStream => None,
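Note: the `.compat()` calls above bridge the two async-IO ecosystems in play: libp2p substreams implement `futures::io::AsyncRead`/`AsyncWrite`, while `tokio_util::codec::Framed` and `TimeoutStream` want tokio's traits. A self-contained sketch of the same bridging, with an in-memory cursor standing in for a network substream (assumes the tokio-util `compat` and `codec` features used by this PR):

use futures::io::Cursor; // a futures-rs AsyncRead + AsyncWrite
use futures::StreamExt;
use tokio_util::codec::{Framed, LinesCodec};
use tokio_util::compat::FuturesAsyncReadCompatExt;

async fn demo() -> Result<(), Box<dyn std::error::Error>> {
    // stand-in for a libp2p substream
    let io = Cursor::new(b"hello\nworld\n".to_vec());

    // `.compat()` adapts futures-rs IO to tokio IO so tokio-based codecs can frame it
    let mut framed = Framed::new(io.compat(), LinesCodec::new());

    while let Some(line) = framed.next().await {
        println!("decoded: {}", line?);
    }
    Ok(())
}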

View File

@@ -2,45 +2,76 @@ use crate::behaviour::{Behaviour, BehaviourEvent};
use crate::discovery::enr;
use crate::multiaddr::Protocol;
use crate::types::{error, GossipKind};
use crate::EnrExt;
use crate::{NetworkConfig, NetworkGlobals};
use futures::prelude::*;
use libp2p::core::{
    identity::Keypair,
    multiaddr::Multiaddr,
    muxing::StreamMuxerBox,
    transport::boxed::Boxed,
    upgrade::{InboundUpgradeExt, OutboundUpgradeExt},
    ConnectedPoint,
};
use libp2p::{
    core, noise, secio,
    swarm::{NetworkBehaviour, SwarmBuilder, SwarmEvent},
    PeerId, Swarm, Transport,
};
use slog::{crit, debug, error, info, o, trace, warn};
use std::fs::File;
use std::io::prelude::*;
use std::io::{Error, ErrorKind};
use std::pin::Pin;
use std::sync::Arc;
use std::time::Duration;
use tokio::time::DelayQueue;
use types::{EnrForkId, EthSpec};

pub const NETWORK_KEY_FILENAME: &str = "key";
/// The time in milliseconds to wait before banning a peer. This allows for any Goodbye messages to be
/// flushed and protocols to be negotiated.
const BAN_PEER_WAIT_TIMEOUT: u64 = 200;
/// The maximum simultaneous libp2p connections per peer.
const MAX_CONNECTIONS_PER_PEER: usize = 1;
/// The types of events that can be obtained from polling the libp2p service.
///
/// This is a subset of the events that a libp2p swarm emits.
#[derive(Debug)]
pub enum Libp2pEvent<TSpec: EthSpec> {
    /// A behaviour event.
    Behaviour(BehaviourEvent<TSpec>),
    /// A new listening address has been established.
    NewListenAddr(Multiaddr),
    /// A peer has established at least one connection.
    PeerConnected {
        /// The peer that connected.
        peer_id: PeerId,
        /// Whether the peer was a dialer or listener.
        endpoint: ConnectedPoint,
    },
    /// A peer no longer has any connections, i.e. is disconnected.
    PeerDisconnected {
        /// The peer that disconnected.
        peer_id: PeerId,
        /// Whether the peer was a dialer or a listener.
        endpoint: ConnectedPoint,
    },
}
/// The configuration and state of the libp2p components for the beacon node.
pub struct Service<TSpec: EthSpec> {
    /// The libp2p Swarm handler.
    //TODO: Make this private
    pub swarm: Swarm<Behaviour<TSpec>>,

    /// This node's PeerId.
    pub local_peer_id: PeerId,

    /// Used for managing the state of peers.
    network_globals: Arc<NetworkGlobals<TSpec>>,

    /// A current list of peers to ban after a given timeout.
    peers_to_ban: DelayQueue<PeerId>,
@@ -55,8 +86,9 @@ impl<TSpec: EthSpec> Service<TSpec> {
    pub fn new(
        config: &NetworkConfig,
        enr_fork_id: EnrForkId,
        log: &slog::Logger,
    ) -> error::Result<(Arc<NetworkGlobals<TSpec>>, Self)> {
        let log = log.new(o!("service" => "libp2p"));
        trace!(log, "Libp2p Service starting");

        // initialise the node's ID
@@ -84,10 +116,22 @@ impl<TSpec: EthSpec> Service<TSpec> {
        let mut swarm = {
            // Set up the transport - tcp/ws with noise/secio and mplex/yamux
            let transport = build_transport(local_keypair.clone())
                .map_err(|e| format!("Failed to build transport: {:?}", e))?;
            // Lighthouse network behaviour
            let behaviour = Behaviour::new(&local_keypair, config, network_globals.clone(), &log)?;

            // use the executor for libp2p
            struct Executor(tokio::runtime::Handle);
            impl libp2p::core::Executor for Executor {
                fn exec(&self, f: Pin<Box<dyn Future<Output = ()> + Send>>) {
                    self.0.spawn(f);
                }
            }
            SwarmBuilder::new(transport, behaviour, local_peer_id.clone())
                .peer_connection_limit(MAX_CONNECTIONS_PER_PEER)
                .executor(Box::new(Executor(tokio::runtime::Handle::current())))
                .build()
        };

        // listen on the specified address
@@ -131,19 +175,24 @@ impl<TSpec: EthSpec> Service<TSpec> {
        }

        // attempt to connect to any specified boot-nodes
        let mut boot_nodes = config.boot_nodes.clone();
        boot_nodes.dedup();

        for bootnode_enr in boot_nodes {
            for multiaddr in &bootnode_enr.multiaddr() {
                // ignore udp multiaddr if it exists
                let components = multiaddr.iter().collect::<Vec<_>>();
                if let Protocol::Udp(_) = components[1] {
                    continue;
                }

                if !network_globals
                    .peers
                    .read()
                    .is_connected_or_dialing(&bootnode_enr.peer_id())
                {
                    dial_addr(multiaddr);
                }
            }
        }
@@ -160,6 +209,7 @@ impl<TSpec: EthSpec> Service<TSpec> {
        let service = Service {
            local_peer_id,
            swarm,
            network_globals: network_globals.clone(),
            peers_to_ban: DelayQueue::new(),
            peer_ban_timeout: DelayQueue::new(),
            log,
@@ -177,76 +227,132 @@ impl<TSpec: EthSpec> Service<TSpec> {
        );
        self.peer_ban_timeout.insert(peer_id, timeout);
    }

    pub async fn next_event(&mut self) -> Libp2pEvent<TSpec> {
        loop {
            tokio::select! {
                event = self.swarm.next_event() => {
                    match event {
                        SwarmEvent::Behaviour(behaviour) => {
                            return Libp2pEvent::Behaviour(behaviour)
                        }
                        SwarmEvent::ConnectionEstablished {
                            peer_id,
                            endpoint,
                            num_established,
                        } => {
                            debug!(self.log, "Connection established"; "peer_id" => peer_id.to_string(), "connections" => num_established.get());
                            // if this is the first connection inform the network layer a new connection
                            // has been established and update the db
                            if num_established.get() == 1 {
                                // update the peerdb
                                match endpoint {
                                    ConnectedPoint::Listener { .. } => {
                                        self.swarm.peer_manager().connect_ingoing(&peer_id);
                                    }
                                    ConnectedPoint::Dialer { .. } => self
                                        .network_globals
                                        .peers
                                        .write()
                                        .connect_outgoing(&peer_id),
                                }
                                return Libp2pEvent::PeerConnected { peer_id, endpoint };
                            }
                        }
                        SwarmEvent::ConnectionClosed {
                            peer_id,
                            cause,
                            endpoint,
                            num_established,
                        } => {
                            debug!(self.log, "Connection closed"; "peer_id" => peer_id.to_string(), "cause" => cause.to_string(), "connections" => num_established);
                            if num_established == 0 {
                                // update the peer_db
                                self.swarm.peer_manager().notify_disconnect(&peer_id);
                                // the peer has disconnected
                                return Libp2pEvent::PeerDisconnected {
                                    peer_id,
                                    endpoint,
                                };
                            }
                        }
                        SwarmEvent::NewListenAddr(multiaddr) => {
                            return Libp2pEvent::NewListenAddr(multiaddr)
                        }
                        SwarmEvent::IncomingConnection {
                            local_addr,
                            send_back_addr,
                        } => {
                            debug!(self.log, "Incoming connection"; "our_addr" => local_addr.to_string(), "from" => send_back_addr.to_string())
                        }
                        SwarmEvent::IncomingConnectionError {
                            local_addr,
                            send_back_addr,
                            error,
                        } => {
                            debug!(self.log, "Failed incoming connection"; "our_addr" => local_addr.to_string(), "from" => send_back_addr.to_string(), "error" => error.to_string())
                        }
                        SwarmEvent::BannedPeer {
                            peer_id,
                            endpoint: _,
                        } => {
                            debug!(self.log, "Attempted to dial a banned peer"; "peer_id" => peer_id.to_string())
                        }
                        SwarmEvent::UnreachableAddr {
                            peer_id,
                            address,
                            error,
                            attempts_remaining,
                        } => {
                            debug!(self.log, "Failed to dial address"; "peer_id" => peer_id.to_string(), "address" => address.to_string(), "error" => error.to_string(), "attempts_remaining" => attempts_remaining);
                            self.swarm.peer_manager().notify_disconnect(&peer_id);
                        }
                        SwarmEvent::UnknownPeerUnreachableAddr { address, error } => {
                            debug!(self.log, "Peer not known at dialed address"; "address" => address.to_string(), "error" => error.to_string());
                        }
                        SwarmEvent::ExpiredListenAddr(multiaddr) => {
                            debug!(self.log, "Listen address expired"; "multiaddr" => multiaddr.to_string())
                        }
                        SwarmEvent::ListenerClosed { addresses, reason } => {
                            debug!(self.log, "Listener closed"; "addresses" => format!("{:?}", addresses), "reason" => format!("{:?}", reason))
                        }
                        SwarmEvent::ListenerError { error } => {
                            debug!(self.log, "Listener error"; "error" => format!("{:?}", error.to_string()))
                        }
                        SwarmEvent::Dialing(peer_id) => {
                            debug!(self.log, "Dialing peer"; "peer" => peer_id.to_string());
                            self.swarm.peer_manager().dialing_peer(&peer_id);
                        }
                    }
                }
                Some(Ok(peer_to_ban)) = self.peers_to_ban.next() => {
                    let peer_id = peer_to_ban.into_inner();
                    Swarm::ban_peer_id(&mut self.swarm, peer_id.clone());
                    // TODO: Correctly notify protocols of the disconnect
                    // TODO: Also remove peer from the DHT: https://github.com/sigp/lighthouse/issues/629
                    self.swarm.inject_disconnected(&peer_id);
                    // inform the behaviour that the peer has been banned
                    self.swarm.peer_banned(peer_id);
                }
                Some(Ok(peer_to_unban)) = self.peer_ban_timeout.next() => {
                    debug!(self.log, "Peer has been unbanned"; "peer" => format!("{:?}", peer_to_unban));
                    let unban_peer = peer_to_unban.into_inner();
                    self.swarm.peer_unbanned(&unban_peer);
                    Swarm::unban_peer_id(&mut self.swarm, unban_peer);
                }
            }
        }
    }
}
/// The implementation supports TCP/IP, WebSockets over TCP/IP, noise/secio as the encryption layer, and
/// mplex or yamux as the multiplexing layer.
fn build_transport(
    local_private_key: Keypair,
) -> Result<Boxed<(PeerId, StreamMuxerBox), Error>, Error> {
    let transport = libp2p_tcp::TokioTcpConfig::new().nodelay(true);
    let transport = libp2p::dns::DnsConfig::new(transport)?;
    #[cfg(feature = "libp2p-websocket")]
    let transport = {
        let trans_clone = transport.clone();
@@ -260,7 +366,7 @@ fn build_transport(
            secio::SecioConfig::new(local_private_key),
        );
        core::upgrade::apply(stream, upgrade, endpoint, core::upgrade::Version::V1).and_then(
            |out| async move {
                match out {
                    // Noise was negotiated
                    core::either::EitherOutput::First((remote_id, out)) => {
@ -288,12 +394,12 @@ fn build_transport(local_private_key: Keypair) -> Boxed<(PeerId, StreamMuxerBox)
.map_outbound(move |muxer| (peer_id2, muxer)); .map_outbound(move |muxer| (peer_id2, muxer));
core::upgrade::apply(stream, upgrade, endpoint, core::upgrade::Version::V1) core::upgrade::apply(stream, upgrade, endpoint, core::upgrade::Version::V1)
.map(|(id, muxer)| (id, core::muxing::StreamMuxerBox::new(muxer))) .map_ok(|(id, muxer)| (id, core::muxing::StreamMuxerBox::new(muxer)))
}) })
.timeout(Duration::from_secs(20)) .timeout(Duration::from_secs(20))
.map_err(|err| Error::new(ErrorKind::Other, err)) .map_err(|err| Error::new(ErrorKind::Other, err))
.boxed(); .boxed();
transport Ok(transport)
} }
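Note the signature change: `build_transport` is now fallible because the Tokio-backed `DnsConfig::new` returns a `Result`. A hedged sketch of what a call site looks like after the change; the `new_service` wrapper here is hypothetical, standing in for the real constructor:

```rust
use std::io::Error;

// Hypothetical caller, for illustration only: the `?` operator now surfaces
// transport-construction failures (e.g. DNS resolver setup) to the service
// constructor instead of panicking inside the builder.
fn new_service(local_private_key: Keypair) -> Result<(), Error> {
    let _transport = build_transport(local_private_key)?;
    // ... hand the transport to the SwarmBuilder here ...
    Ok(())
}
```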
fn keypair_from_hex(hex_bytes: &str) -> error::Result<Keypair> {


@@ -2,6 +2,7 @@
use crate::peer_manager::PeerDB;
use crate::rpc::methods::MetaData;
use crate::types::SyncState;
use crate::EnrExt;
use crate::{discovery::enr::Eth2Enr, Enr, GossipTopic, Multiaddr, PeerId};
use parking_lot::RwLock;
use std::collections::HashSet;


@@ -9,7 +9,7 @@ use types::{BitVector, EthSpec};
#[allow(type_alias_bounds)]
pub type EnrBitfield<T: EthSpec> = BitVector<T::SubnetBitfieldLength>;

pub type Enr = discv5::enr::Enr<discv5::enr::CombinedKey>;

pub use globals::NetworkGlobals;
pub use pubsub::PubsubMessage;


@@ -1,8 +1,9 @@
#![cfg(test)]
use eth2_libp2p::Enr;
use eth2_libp2p::EnrExt;
use eth2_libp2p::Multiaddr;
use eth2_libp2p::Service as LibP2PService;
use eth2_libp2p::{Libp2pEvent, NetworkConfig};
use slog::{debug, error, o, Drain};
use std::net::{TcpListener, UdpSocket};
use std::time::Duration;
@@ -85,7 +86,7 @@ pub fn build_libp2p_instance(
    let port = unused_port("tcp").unwrap();
    let config = build_config(port, boot_nodes, secret_key);
    // launch libp2p service
    LibP2PService::new(&config, EnrForkId::default(), &log)
        .expect("should build libp2p instance")
        .1
}
@@ -93,7 +94,6 @@ pub fn build_libp2p_instance(
#[allow(dead_code)]
pub fn get_enr(node: &LibP2PService<E>) -> Enr {
    let enr = node.swarm.discovery().local_enr().clone();
    enr
}
@@ -121,19 +121,46 @@ pub fn build_full_mesh(log: slog::Logger, n: usize) -> Vec<LibP2PService<E>> {
    nodes
}

// Constructs a pair of nodes with separate loggers. The sender dials the receiver.
// This returns a (sender, receiver) pair.
#[allow(dead_code)]
pub async fn build_node_pair(log: &slog::Logger) -> (LibP2PService<E>, LibP2PService<E>) {
    let sender_log = log.new(o!("who" => "sender"));
    let receiver_log = log.new(o!("who" => "receiver"));
    let mut sender = build_libp2p_instance(vec![], None, sender_log);
    let mut receiver = build_libp2p_instance(vec![], None, receiver_log);
    let receiver_multiaddr = receiver.swarm.discovery().local_enr().clone().multiaddr()[1].clone();

    // let the two nodes set up listeners
    let sender_fut = async {
        loop {
            if let Libp2pEvent::NewListenAddr(_) = sender.next_event().await {
                return;
            }
        }
    };
    let receiver_fut = async {
        loop {
            if let Libp2pEvent::NewListenAddr(_) = receiver.next_event().await {
                return;
            }
        }
    };

    let joined = futures::future::join(sender_fut, receiver_fut);

    // wait for either both nodes to listen or a timeout
    tokio::select! {
        _ = tokio::time::delay_for(Duration::from_millis(500)) => {}
        _ = joined => {}
    }

    match libp2p::Swarm::dial_addr(&mut sender.swarm, receiver_multiaddr.clone()) {
        Ok(()) => {
            debug!(log, "Sender dialed receiver"; "address" => format!("{:?}", receiver_multiaddr))
        }
        Err(_) => error!(log, "Dialing failed"),
    };
    (sender, receiver)
}
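The listen-then-dial dance above uses an idiom that recurs throughout these ported tests: race the interesting future against a `delay_for` timer with `tokio::select!`, and let whichever branch finishes first win. A self-contained sketch of the pattern (assuming tokio 0.2 with the `full` feature set, as in the updated manifests):

```rust
use std::time::Duration;

// Self-contained sketch of the select-with-timeout idiom: whichever branch
// completes first wins; the other future is simply dropped.
#[tokio::main]
async fn main() {
    let work = async {
        // stand-in for "wait for both nodes to emit NewListenAddr"
        tokio::time::delay_for(Duration::from_millis(10)).await;
    };
    tokio::select! {
        _ = work => println!("listeners ready"),
        _ = tokio::time::delay_for(Duration::from_millis(500)) => {
            println!("timed out waiting; dialing anyway");
        }
    }
}
```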


@@ -2,7 +2,6 @@
use crate::types::GossipEncoding;
use ::types::{BeaconBlock, EthSpec, MinimalEthSpec, Signature, SignedBeaconBlock};
use eth2_libp2p::*;
use slog::{debug, Level};

type E = MinimalEthSpec;
@@ -19,8 +18,8 @@ mod common;
//
// node1 <-> node2 <-> node3 ..... <-> node(n-1) <-> node(n)
#[tokio::test]
async fn test_gossipsub_forward() {
    // set up the logging. The level and enabled or not
    let log = common::build_log(Level::Info, false);
@@ -41,55 +40,64 @@ fn test_gossipsub_forward() {
        .clone()
        .into();
    let mut subscribed_count = 0;
    let fut = async move {
        for node in nodes.iter_mut() {
            loop {
                match node.next_event().await {
                    Libp2pEvent::Behaviour(b) => match b {
                        BehaviourEvent::PubsubMessage {
                            topics,
                            message,
                            source,
                            id,
                        } => {
                            assert_eq!(topics.len(), 1);
                            // Assert topic is the published topic
                            assert_eq!(
                                topics.first().unwrap(),
                                &TopicHash::from_raw(publishing_topic.clone())
                            );
                            // Assert message received is the correct one
                            assert_eq!(message, pubsub_message.clone());
                            received_count += 1;
                            // Since `propagate_message` is false, need to propagate manually
                            node.swarm.propagate_message(&source, id);
                            // Test should succeed if all nodes except the publisher receive the message
                            if received_count == num_nodes - 1 {
                                debug!(log.clone(), "Received message at {} nodes", num_nodes - 1);
                                return;
                            }
                        }
                        BehaviourEvent::PeerSubscribed(_, topic) => {
                            // Publish on beacon block topic
                            if topic == TopicHash::from_raw(publishing_topic.clone()) {
                                subscribed_count += 1;
                                // Every node except the corner nodes is connected to 2 nodes.
                                if subscribed_count == (num_nodes * 2) - 2 {
                                    node.swarm.publish(vec![pubsub_message.clone()]);
                                }
                            }
                        }
                        _ => break,
                    },
                    _ => break,
                }
            }
        }
    };

    tokio::select! {
        _ = fut => {}
        _ = tokio::time::delay_for(tokio::time::Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
// Test publishing of a message with a full mesh for the topic
// Not very useful but this is the bare minimum functionality.
#[tokio::test]
async fn test_gossipsub_full_mesh_publish() {
    // set up the logging. The level and enabled or not
    let log = common::build_log(Level::Debug, false);
@@ -115,11 +123,13 @@ fn test_gossipsub_full_mesh_publish() {
        .into();
    let mut subscribed_count = 0;
    let mut received_count = 0;
    let fut = async move {
        for node in nodes.iter_mut() {
            while let Libp2pEvent::Behaviour(BehaviourEvent::PubsubMessage {
                topics,
                message,
                ..
            }) = node.next_event().await
            {
                assert_eq!(topics.len(), 1);
                // Assert topic is the published topic
@@ -131,12 +141,12 @@ fn test_gossipsub_full_mesh_publish() {
                assert_eq!(message, pubsub_message.clone());
                received_count += 1;
                if received_count == num_nodes - 1 {
                    return;
                }
            }
        }
        while let Libp2pEvent::Behaviour(BehaviourEvent::PeerSubscribed(_, topic)) =
            publishing_node.next_event().await
        {
            // Publish on beacon block topic
            if topic == TopicHash::from_raw(publishing_topic.clone()) {
@@ -146,6 +156,11 @@ fn test_gossipsub_full_mesh_publish() {
            }
        }
    };
    tokio::select! {
        _ = fut => {}
        _ = tokio::time::delay_for(tokio::time::Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}


@@ -1,39 +1,39 @@
#![cfg(test)]
use crate::behaviour::Behaviour;
use crate::multiaddr::Protocol;
use ::types::{EnrForkId, MinimalEthSpec};
use eth2_libp2p::discovery::{build_enr, CombinedKey, CombinedKeyExt};
use eth2_libp2p::*;
use futures::prelude::*;
use libp2p::core::identity::Keypair;
use libp2p::{
    core,
    core::{muxing::StreamMuxerBox, transport::boxed::Boxed},
    secio,
    swarm::{SwarmBuilder, SwarmEvent},
    PeerId, Swarm, Transport,
};
use slog::{crit, debug, info, Level};
use std::io::{Error, ErrorKind};
use std::pin::Pin;
use std::sync::Arc;
use std::time::Duration;

type TSpec = MinimalEthSpec;

mod common;

type Libp2pBehaviour = Behaviour<TSpec>;

/// Build and return an eth2_libp2p Swarm with only secio support.
fn build_secio_swarm(
    config: &NetworkConfig,
    log: slog::Logger,
) -> error::Result<Swarm<Libp2pBehaviour>> {
    let local_keypair = Keypair::generate_secp256k1();
    let local_peer_id = PeerId::from(local_keypair.public());
    let enr_key = CombinedKey::from_libp2p(&local_keypair).unwrap();
    let enr = build_enr::<TSpec>(&enr_key, config, EnrForkId::default()).unwrap();
    let network_globals = Arc::new(NetworkGlobals::new(
        enr,
@@ -47,7 +47,16 @@ fn build_secio_swarm(
        let transport = build_secio_transport(local_keypair.clone());
        // Lighthouse network behaviour
        let behaviour = Behaviour::new(&local_keypair, config, network_globals.clone(), &log)?;
        // requires a tokio runtime
        struct Executor(tokio::runtime::Handle);
        impl libp2p::core::Executor for Executor {
            fn exec(&self, f: Pin<Box<dyn Future<Output = ()> + Send>>) {
                self.0.spawn(f);
            }
        }
        SwarmBuilder::new(transport, behaviour, local_peer_id.clone())
            .executor(Box::new(Executor(tokio::runtime::Handle::current())))
            .build()
    };
    // listen on the specified address
@@ -101,7 +110,7 @@ fn build_secio_swarm(
/// Build a simple TCP transport with secio, mplex/yamux.
fn build_secio_transport(local_private_key: Keypair) -> Boxed<(PeerId, StreamMuxerBox), Error> {
    let transport = libp2p_tcp::TokioTcpConfig::new().nodelay(true);
    transport
        .upgrade(core::upgrade::Version::V1)
        .authenticate(secio::SecioConfig::new(local_private_key))
@@ -117,8 +126,8 @@ fn build_secio_transport(local_private_key: Keypair)
}
/// Test if the encryption falls back to secio if noise isn't available
#[tokio::test]
async fn test_secio_noise_fallback() {
    // set up the logging. The level and enabled logging or not
    let log_level = Level::Trace;
    let enable_logging = false;
@@ -127,7 +136,7 @@ fn test_secio_noise_fallback() {
    let port = common::unused_port("tcp").unwrap();
    let noisy_config = common::build_config(port, vec![], None);
    let mut noisy_node = Service::new(&noisy_config, EnrForkId::default(), &log)
        .expect("should build a libp2p instance")
        .1;
@@ -142,40 +151,31 @@ fn test_secio_noise_fallback() {
    let secio_log = log.clone();

    let noisy_future = async {
        loop {
            noisy_node.next_event().await;
        }
    };

    let secio_future = async {
        loop {
            match secio_swarm.next_event().await {
                SwarmEvent::ConnectionEstablished { peer_id, .. } => {
                    // secio node negotiated a secio transport with
                    // the noise compatible node
                    info!(secio_log, "Connected to peer {}", peer_id);
                    return;
                }
                _ => {} // Ignore all other events
            }
        }
    };

    tokio::select! {
        _ = noisy_future => {}
        _ = secio_future => {}
        _ = tokio::time::delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
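The `Executor` shim inside `build_secio_swarm` above is worth calling out: `SwarmBuilder` accepts an executor so that per-connection tasks are spawned on the surrounding tokio runtime, which the Tokio-backed TCP transport presumably requires. As a standalone sketch (same trait as in the diff, tokio 0.2 assumed; the type name is illustrative):

```rust
use libp2p::core::Executor;
use std::future::Future;
use std::pin::Pin;

// Minimal sketch: an adapter that lets the Swarm spawn its per-connection
// tasks onto an existing tokio 0.2 runtime via a cloned Handle.
struct TokioExecutor(tokio::runtime::Handle);

impl Executor for TokioExecutor {
    fn exec(&self, future: Pin<Box<dyn Future<Output = ()> + Send>>) {
        // Fire-and-forget: the Swarm does not await the spawned task.
        self.0.spawn(future);
    }
}
```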


@@ -1,12 +1,10 @@
#![cfg(test)]
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::*;
use eth2_libp2p::{BehaviourEvent, Libp2pEvent, RPCEvent};
use slog::{debug, warn, Level};
use std::time::Duration;
use tokio::time::delay_for;
use types::{
    BeaconBlock, Epoch, EthSpec, Hash256, MinimalEthSpec, Signature, SignedBeaconBlock, Slot,
};
@@ -15,17 +13,17 @@ mod common;
type E = MinimalEthSpec;

#[tokio::test]
// Tests the STATUS RPC message
async fn test_status_rpc() {
    // set up the logging. The level and enabled logging or not
    let log_level = Level::Debug;
    let enable_logging = false;
    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // Dummy STATUS RPC message
    let rpc_request = RPCRequest::Status(StatusMessage {
@@ -45,92 +43,80 @@ fn test_status_rpc() {
        head_slot: Slot::new(1),
    });

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a STATUS message
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response @ RPCCodedResponse::Success(_)) => {
                        if id == 10 {
                            debug!(log, "Sender Received");
                            let response = {
                                match response {
                                    RPCCodedResponse::Success(r) => r,
                                    _ => unreachable!(),
                                }
                            };
                            assert_eq!(response, rpc_response.clone());
                            debug!(log, "Sender Completed");
                            return;
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {}
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                debug!(log, "Receiver Received");
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::Success(rpc_response.clone()),
                                    ),
                                );
                            }
                        }
                        _ => {} // Ignore other RPC requests
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
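The STATUS test above establishes the template the remaining RPC tests reuse: a sender loop and a receiver loop raced against a timeout, with the sender finishing once its request id (10) comes back carrying the expected payload. Stripped of the libp2p plumbing, the control flow reduces to roughly the following self-contained sketch (tokio 0.2 channels stand in for the swarm; all names are illustrative):

```rust
use std::time::Duration;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // request channel: sender -> receiver; response channel: receiver -> sender
    let (mut req_tx, mut req_rx) = mpsc::channel::<u64>(1);
    let (mut resp_tx, mut resp_rx) = mpsc::channel::<u64>(1);

    let sender = async move {
        req_tx.send(10).await.unwrap(); // like RPCEvent::Request(10, ..)
        // complete once the matching response id arrives
        assert_eq!(resp_rx.recv().await, Some(10));
    };
    let receiver = async move {
        let id = req_rx.recv().await.unwrap();
        resp_tx.send(id).await.unwrap(); // echo the id, like send_rpc(Response(id, ..))
    };

    tokio::select! {
        _ = futures::future::join(sender, receiver) => {}
        _ = tokio::time::delay_for(Duration::from_millis(800)) => panic!("Future timed out"),
    }
}
```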
#[tokio::test]
// Tests a streamed BlocksByRange RPC Message
async fn test_blocks_by_range_chunked_rpc() {
    // set up the logging. The level and enabled logging or not
    let log_level = Level::Trace;
    let enable_logging = false;
@@ -140,7 +126,7 @@ fn test_blocks_by_range_chunked_rpc() {
    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // BlocksByRange Request
    let rpc_request = RPCRequest::BlocksByRange(BlocksByRangeRequest {
@@ -158,116 +144,100 @@ fn test_blocks_by_range_chunked_rpc() {
    };
    let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));

    // keep count of the number of messages received
    let mut messages_received = 0;

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a BlocksByRange request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        if id == 10 {
                            warn!(log, "Sender received a response");
                            match response {
                                RPCCodedResponse::Success(res) => {
                                    assert_eq!(res, rpc_response.clone());
                                    messages_received += 1;
                                    warn!(log, "Chunk received");
                                }
                                RPCCodedResponse::StreamTermination(_) => {
                                    // should be exactly `messages_to_send` chunks before terminating
                                    assert_eq!(messages_received, messages_to_send);
                                    // end the test
                                    return;
                                }
                                _ => panic!("Invalid RPC received"),
                            }
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other behaviour events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                warn!(log, "Receiver got request");

                                for _ in 1..=messages_to_send {
                                    receiver.swarm.send_rpc(
                                        peer_id.clone(),
                                        RPCEvent::Response(
                                            id,
                                            RPCCodedResponse::Success(rpc_response.clone()),
                                        ),
                                    );
                                }
                                // send the stream termination
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::StreamTermination(
                                            ResponseTermination::BlocksByRange,
                                        ),
                                    ),
                                );
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
#[tokio::test]
// Tests an empty response to a BlocksByRange RPC Message
async fn test_blocks_by_range_single_empty_rpc() {
    // set up the logging. The level and enabled logging or not
    let log_level = Level::Trace;
    let enable_logging = false;
@@ -275,7 +245,7 @@ fn test_blocks_by_range_single_empty_rpc() {
    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // BlocksByRange Request
    let rpc_request = RPCRequest::BlocksByRange(BlocksByRangeRequest {
@@ -293,116 +263,106 @@ fn test_blocks_by_range_single_empty_rpc() {
    };
    let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));

    let messages_to_send = 1;

    // keep count of the number of messages received
    let mut messages_received = 0;

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a BlocksByRange request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        if id == 10 {
                            warn!(log, "Sender received a response");
                            match response {
                                RPCCodedResponse::Success(res) => {
                                    assert_eq!(res, rpc_response.clone());
                                    messages_received += 1;
                                    warn!(log, "Chunk received");
                                }
                                RPCCodedResponse::StreamTermination(_) => {
                                    // should be exactly `messages_to_send` messages before terminating
                                    assert_eq!(messages_received, messages_to_send);
                                    // end the test
                                    return;
                                }
                                _ => panic!("Invalid RPC received"),
                            }
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other behaviour events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                warn!(log, "Receiver got request");

                                for _ in 1..=messages_to_send {
                                    receiver.swarm.send_rpc(
                                        peer_id.clone(),
                                        RPCEvent::Response(
                                            id,
                                            RPCCodedResponse::Success(rpc_response.clone()),
                                        ),
                                    );
                                }
                                // send the stream termination
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::StreamTermination(
                                            ResponseTermination::BlocksByRange,
                                        ),
                                    ),
                                );
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
#[tokio::test]
// Tests a streamed, chunked BlocksByRoot RPC Message
// The size of the response is a full `BeaconBlock`
// which is greater than the Snappy frame size. Hence, this test
// serves to test the snappy framing format as well.
async fn test_blocks_by_root_chunked_rpc() {
    // set up the logging. The level and enabled logging or not
    let log_level = Level::Debug;
    let enable_logging = false;
    let messages_to_send = 3;
@@ -411,7 +371,7 @@ fn test_blocks_by_root_chunked_rpc() {
    let spec = E::default_spec();

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // BlocksByRoot Request
    let rpc_request = RPCRequest::BlocksByRoot(BlocksByRootRequest {
@@ -426,112 +386,101 @@ fn test_blocks_by_root_chunked_rpc() {
    };
    let rpc_response = RPCResponse::BlocksByRoot(Box::new(signed_full_block));

    // keep count of the number of messages received
    let mut messages_received = 0;

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a BlocksByRoot request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        if id == 10 {
                            debug!(log, "Sender received a response");
                            match response {
                                RPCCodedResponse::Success(res) => {
                                    assert_eq!(res, rpc_response.clone());
                                    messages_received += 1;
                                    debug!(log, "Chunk received");
                                }
                                RPCCodedResponse::StreamTermination(_) => {
                                    // should be exactly `messages_to_send` chunks before terminating
                                    assert_eq!(messages_received, messages_to_send);
                                    // end the test
                                    return;
                                }
                                _ => {} // Ignore other RPC messages
                            }
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other behaviour events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                debug!(log, "Receiver got request");

                                for _ in 1..=messages_to_send {
                                    receiver.swarm.send_rpc(
                                        peer_id.clone(),
                                        RPCEvent::Response(
                                            id,
                                            RPCCodedResponse::Success(rpc_response.clone()),
                                        ),
                                    );
                                    debug!(log, "Sending message");
                                }
                                // send the stream termination
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::StreamTermination(
                                            ResponseTermination::BlocksByRange,
                                        ),
                                    ),
                                );
                                debug!(log, "Send stream term");
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(1000)) => {
            panic!("Future timed out");
        }
    }
}
#[tokio::test]
// Tests a Goodbye RPC message
async fn test_goodbye_rpc() {
    // set up the logging. The level and enabled logging or not
    let log_level = Level::Trace;
    let enable_logging = false;
@@ -539,65 +488,54 @@ fn test_goodbye_rpc() {
    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // Goodbye Request
    let rpc_request = RPCRequest::Goodbye(GoodbyeReason::ClientShutdown);

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a Goodbye request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                _ => {} // Ignore other events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                assert_eq!(id, 0);
                                assert_eq!(rpc_request.clone(), request);
                                // receives the goodbye. Nothing left to do
                                return;
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(1000)) => {
            panic!("Future timed out");
        }
    }
}


@@ -6,23 +6,22 @@ edition = "2018"
[dev-dependencies]
eth1_test_rig = { path = "../../tests/eth1_test_rig" }

[dependencies]
futures = "0.3.5"
types = { path = "../../eth2/types"}
environment = { path = "../../lighthouse/environment"}
eth1 = { path = "../eth1"}
rayon = "1.3.0"
state_processing = { path = "../../eth2/state_processing" }
merkle_proof = { path = "../../eth2/utils/merkle_proof" }
eth2_ssz = "0.1.2"
eth2_hashing = "0.1.0"
tree_hash = "0.1.0"
tokio = { version = "0.2.20", features = ["full"] }
parking_lot = "0.10.2"
slog = "2.5.2"
exit-future = "0.2.0"
serde = "1.0.110"
serde_derive = "1.0.110"
int_to_bytes = { path = "../../eth2/utils/int_to_bytes" }


@@ -2,11 +2,6 @@ pub use crate::{common::genesis_deposits, interop::interop_genesis_state};
pub use eth1::Config as Eth1Config;

use eth1::{DepositLog, Eth1Block, Service};
use parking_lot::Mutex;
use slog::{debug, error, info, trace, Logger};
use state_processing::{
@@ -14,8 +9,8 @@ use state_processing::{
    per_block_processing::process_deposit, process_activations,
};
use std::sync::Arc;
use std::time::Duration;
use tokio::time::delay_for;
use types::{BeaconState, ChainSpec, Deposit, Eth1Data, EthSpec, Hash256};

/// Provides a service that connects to some Eth1 HTTP JSON-RPC endpoint and maintains a cache of eth1
@@ -87,117 +82,83 @@ impl Eth1GenesisService {
    ///
    /// - `Ok(state)` once the canonical eth2 genesis state has been discovered.
    /// - `Err(e)` if there is some internal error during updates.
    pub async fn wait_for_genesis_state<E: EthSpec>(
        &self,
        update_interval: Duration,
        spec: ChainSpec,
    ) -> Result<BeaconState<E>, String> {
        let service = self.clone();
        let log = service.core.log.clone();
        let min_genesis_active_validator_count = spec.min_genesis_active_validator_count;
        let min_genesis_time = spec.min_genesis_time;

        loop {
            // **WARNING** `delay_for` panics on error
            delay_for(update_interval).await;

            let update_result = Service::update_deposit_cache(self.core.clone())
                .await
                .map_err(|e| format!("{:?}", e));

            // Do not exit the loop if there is an error whilst updating.
            if let Err(e) = update_result {
                error!(
                    log,
                    "Failed to update eth1 deposit cache";
                    "error" => e
                )
            }

            // Only enable the `sync_blocks` flag if there are enough deposits to feasibly
            // trigger genesis.
            //
            // Note: genesis is triggered by the _active_ validator count, not just the
            // deposit count, so it's possible that block downloads are started too early.
            // This is just wasteful, not erroneous.
            let mut sync_blocks = self.sync_blocks.lock();

            if !(*sync_blocks) {
                if let Some(viable_eth1_block) =
                    self.first_viable_eth1_block(min_genesis_active_validator_count as usize)
                {
                    info!(
                        log,
                        "Minimum genesis deposit count met";
                        "deposit_count" => min_genesis_active_validator_count,
                        "block_number" => viable_eth1_block,
                    );
                    self.core.set_lowest_cached_block(viable_eth1_block);
                    *sync_blocks = true
                }
            }

            let should_update_block_cache = *sync_blocks;
            if should_update_block_cache {
                let update_result = Service::update_block_cache(self.core.clone()).await;
                if let Err(e) = update_result {
                    error!(
                        log,
                        "Failed to update eth1 block cache";
                        "error" => format!("{:?}", e)
                    );
                }
            };

            if let Some(genesis_state) = self
                .scan_new_blocks::<E>(&spec)
                .map_err(|e| format!("Failed to scan for new blocks: {}", e))?
            {
                break Ok(genesis_state);
            } else {
                debug!(
                    log,
                    "No eth1 genesis block found";
                    "latest_block_timestamp" => self.core.latest_block_timestamp(),
                    "min_genesis_time" => min_genesis_time,
                    "min_validator_count" => min_genesis_active_validator_count,
                    "cached_blocks" => self.core.block_cache_len(),
                    "cached_deposits" => self.core.deposit_cache_len(),
                    "cache_head" => self.highest_known_block(),
                );
            }
        }
    }
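The conversion above is an instance of the general pattern this port applies: a `loop_fn` state machine becomes a plain `loop` inside an `async fn`, with `delay_for` supplying the update interval and `break value` replacing `Loop::Break`. A reduced sketch of that shape (names are illustrative, not from the diff):

```rust
use std::time::Duration;

// Reduced sketch of the loop_fn -> async/await conversion: sleep, run the
// fallible update step, and break out of the loop once a result appears.
async fn poll_until<T, F>(update_interval: Duration, mut check: F) -> Result<T, String>
where
    F: FnMut() -> Result<Option<T>, String>,
{
    loop {
        tokio::time::delay_for(update_interval).await;
        if let Some(value) = check()? {
            break Ok(value);
        }
    }
}
```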
    /// Processes any new blocks that have appeared since this function was last run.


@@ -5,7 +5,7 @@
#![cfg(test)]
use environment::{Environment, EnvironmentBuilder};
use eth1_test_rig::{DelayThenDeposit, GanacheEth1Instance};
use futures::compat::Future01CompatExt;
use genesis::{Eth1Config, Eth1GenesisService};
use state_processing::is_valid_genesis_state;
use std::time::Duration;
@@ -24,81 +24,85 @@ pub fn new_env() -> Environment<MinimalEthSpec> {
#[test]
fn basic() {
    let mut env = new_env();
    let log = env.core_context().log.clone();
    let mut spec = env.eth2_config().spec.clone();

    env.runtime().block_on(async {
        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let now = web3
            .eth()
            .block_number()
            .compat()
            .await
            .map(|v| v.as_u64())
            .expect("should get block number");

        let service = Eth1GenesisService::new(
            Eth1Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                deposit_contract_deploy_block: now,
                lowest_cached_block_number: now,
                follow_distance: 0,
                block_cache_truncation: None,
                ..Eth1Config::default()
            },
            log,
        );

        // NOTE: this test is sensitive to the response speed of the external web3 server. If
        // you're experiencing failures, try increasing the update_interval.
        let update_interval = Duration::from_millis(500);

        spec.min_genesis_time = 0;
        spec.min_genesis_active_validator_count = 8;

        let deposits = (0..spec.min_genesis_active_validator_count + 2)
            .map(|i| {
                deposit_contract.deposit_helper::<MinimalEthSpec>(
                    generate_deterministic_keypair(i as usize),
                    Hash256::from_low_u64_le(i),
                    32_000_000_000,
                )
            })
            .map(|deposit| DelayThenDeposit {
                delay: Duration::from_secs(0),
                deposit,
            })
            .collect::<Vec<_>>();

        let deposit_future = deposit_contract.deposit_multiple(deposits);

        let wait_future =
            service.wait_for_genesis_state::<MinimalEthSpec>(update_interval, spec.clone());

        let state = futures::try_join!(deposit_future, wait_future)
            .map(|(_, state)| state)
            .expect("should finish waiting for genesis");

        // Note: using ganache these deposits are 1-per-block, therefore we know there should only be
        // the minimum number of validators.
        assert_eq!(
            state.validators.len(),
            spec.min_genesis_active_validator_count as usize,
            "should have expected validator count"
        );

        assert!(state.genesis_time > 0, "should have some genesis time");

        assert!(
            is_valid_genesis_state(&state, &spec),
            "should be valid genesis state"
        );
    });
}
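The `.compat()` call above comes from `futures::compat::Future01CompatExt` (behind the `compat` feature of futures 0.3) and is how the test awaits the futures-0.1 values that web3 still returns. A generic sketch of the bridge, assuming the futures-0.1 crate is in scope under the conventional rename `futures01 = { package = "futures", version = "0.1" }`:

```rust
use futures::compat::Future01CompatExt;

// Bridge any futures-0.1 future into async/await: `.compat()` wraps it in an
// adapter implementing std::future::Future that yields Result<Item, Error>.
async fn await_01<F>(old: F) -> Result<F::Item, F::Error>
where
    F: futures01::Future,
{
    old.compat().await
}
```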


@@ -5,32 +5,32 @@ authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"

[dev-dependencies]
-sloggers = "0.3.4"
+sloggers = "1.0.0"
genesis = { path = "../genesis" }
-tempdir = "0.3"
lazy_static = "1.4.0"
+matches = "0.1.8"
+tempfile = "3.1.0"

[dependencies]
beacon_chain = { path = "../beacon_chain" }
store = { path = "../store" }
eth2-libp2p = { path = "../eth2-libp2p" }
-hashmap_delay = { path = "../../eth2/utils/hashmap_delay" }
+hashset_delay = { path = "../../eth2/utils/hashset_delay" }
rest_types = { path = "../../eth2/utils/rest_types" }
types = { path = "../../eth2/types" }
slot_clock = { path = "../../eth2/utils/slot_clock" }
slog = { version = "2.5.2", features = ["max_level_trace"] }
-hex = "0.3"
+hex = "0.4.2"
eth2_ssz = "0.1.2"
tree_hash = "0.1.0"
-futures = "0.1.29"
+futures = "0.3.5"
-error-chain = "0.12.1"
+error-chain = "0.12.2"
-tokio = "0.1.22"
+tokio = { version = "0.2.20", features = ["full"] }
-parking_lot = "0.9.0"
+parking_lot = "0.10.2"
-smallvec = "1.0.0"
+smallvec = "1.4.0"
# TODO: Remove rand crate for mainnet
-rand = "0.7.2"
+rand = "0.7.3"
fnv = "1.0.6"
-rlp = "0.4.3"
+rlp = "0.4.5"
-tokio-timer = "0.2.12"
-matches = "0.1.8"
-tempfile = "3.1.0"
+lazy_static = "1.4.0"
+lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }
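A side note on the manifest: tokio 0.1 shipped its timer as the separate tokio-timer crate, whereas tokio 0.2 feature-gates its macros, timers, and runtimes, which is why the port drops tokio-timer and enables the catch-all "full" feature. A hedged sketch of what that feature set provides:

    // Assumes tokio = { version = "0.2", features = ["full"] }; a narrower list
    // would need at least "macros", "rt-threaded" and "time" for this to compile.
    #[tokio::main] // provided by the "macros" feature
    async fn main() {
        // tokio::time replaces the old standalone tokio-timer crate.
        tokio::time::delay_for(std::time::Duration::from_millis(10)).await;
    }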

View File

@@ -5,16 +5,20 @@
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::GossipKind, MessageId, NetworkGlobals, PeerId};
use futures::prelude::*;
-use hashmap_delay::HashSetDelay;
+use hashset_delay::HashSetDelay;
use rand::seq::SliceRandom;
use rest_types::ValidatorSubscription;
use slog::{crit, debug, error, o, warn};
use slot_clock::SlotClock;
use std::collections::VecDeque;
+use std::pin::Pin;
use std::sync::Arc;
+use std::task::{Context, Poll};
use std::time::{Duration, Instant};
use types::{Attestation, EthSpec, Slot, SubnetId};

+mod tests;

/// The minimum number of slots ahead that we attempt to discover peers for a subscription. If the
/// slot is less than this number, skip the peer discovery process.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 1;
@@ -564,7 +568,7 @@ impl<T: BeaconChainTypes> AttestationService<T> {
            return Ok(());
        }
-       let subscribed_subnets = self.random_subnets.keys_vec();
+       let subscribed_subnets = self.random_subnets.keys().cloned().collect::<Vec<_>>();
        let to_remove_subnets = subscribed_subnets.choose_multiple(
            &mut rand::thread_rng(),
            random_subnets_per_validator as usize,
@@ -576,10 +580,10 @@ impl<T: BeaconChainTypes> AttestationService<T> {
        for subnet_id in to_remove_subnets {
            // If a subscription is queued for two slots in the future, its associated unsubscription
            // will unsubscribe from the expired subnet.
-           // If there is no subscription for this subnet,slot it is safe to add one, without
+           // If there is no unsubscription for this (subnet, slot) it is safe to add one, without
            // unsubscribing early from a required subnet
            let subnet = ExactSubnet {
-               subnet_id: **subnet_id,
+               subnet_id: *subnet_id,
                slot: current_slot + 2,
            };
            if self.subscriptions.get(&subnet).is_none() {
@@ -597,11 +601,11 @@ impl<T: BeaconChainTypes> AttestationService<T> {
                self.unsubscriptions
                    .insert_at(subnet, unsubscription_duration);
            }
            // as the long-lasting subnet subscription is being removed, remove the subnet_id from
            // the ENR bitfield
            self.events
-               .push_back(AttServiceMessage::EnrRemove(**subnet_id));
+               .push_back(AttServiceMessage::EnrRemove(*subnet_id));
+           self.random_subnets.remove(subnet_id);
        }
        Ok(())
    }
@@ -609,648 +613,64 @@
impl<T: BeaconChainTypes> Stream for AttestationService<T> {
    type Item = AttServiceMessage;
-   type Error = ();
-   fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
+   fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // process any peer discovery events
-       while let Async::Ready(Some(exact_subnet)) =
-           self.discover_peers.poll().map_err(|e| {
-               error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
-           })?
-       {
-           self.handle_discover_peers(exact_subnet);
-       }
+       match self.discover_peers.poll_next_unpin(cx) {
+           Poll::Ready(Some(Ok(exact_subnet))) => self.handle_discover_peers(exact_subnet),
+           Poll::Ready(Some(Err(e))) => {
+               error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
+           }
+           Poll::Ready(None) | Poll::Pending => {}
+       }
        // process any subscription events
-       while let Async::Ready(Some(exact_subnet)) = self.subscriptions.poll().map_err(|e| {
-           error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
-       })?
-       {
-           self.handle_subscriptions(exact_subnet);
-       }
+       match self.subscriptions.poll_next_unpin(cx) {
+           Poll::Ready(Some(Ok(exact_subnet))) => self.handle_subscriptions(exact_subnet),
+           Poll::Ready(Some(Err(e))) => {
+               error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
+           }
+           Poll::Ready(None) | Poll::Pending => {}
+       }
        // process any un-subscription events
-       while let Async::Ready(Some(exact_subnet)) = self.unsubscriptions.poll().map_err(|e| {
-           error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
-       })?
-       {
-           self.handle_unsubscriptions(exact_subnet);
-       }
+       match self.unsubscriptions.poll_next_unpin(cx) {
+           Poll::Ready(Some(Ok(exact_subnet))) => self.handle_unsubscriptions(exact_subnet),
+           Poll::Ready(Some(Err(e))) => {
+               error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
+           }
+           Poll::Ready(None) | Poll::Pending => {}
+       }
        // process any random subnet expiries
-       while let Async::Ready(Some(subnet)) = self.random_subnets.poll().map_err(|e| {
-           error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
-       })?
-       {
-           self.handle_random_subnet_expiry(subnet);
-       }
+       match self.random_subnets.poll_next_unpin(cx) {
+           Poll::Ready(Some(Ok(subnet))) => self.handle_random_subnet_expiry(subnet),
+           Poll::Ready(Some(Err(e))) => {
+               error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
+           }
+           Poll::Ready(None) | Poll::Pending => {}
+       }
        // process any known validator expiries
-       while let Async::Ready(Some(_validator_index)) = self.known_validators.poll().map_err(|e| {
-           error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
-       })?
-       {
-           let _ = self.handle_known_validator_expiry();
-       }
+       match self.known_validators.poll_next_unpin(cx) {
+           Poll::Ready(Some(Ok(_validator_index))) => {
+               let _ = self.handle_known_validator_expiry();
+           }
+           Poll::Ready(Some(Err(e))) => {
+               error!(self.log, "Failed to check for known validator expiries"; "error"=> format!("{}", e));
+           }
+           Poll::Ready(None) | Poll::Pending => {}
+       }
        // poll to remove entries on expiration, no need to act on expiration events
-       let _ = self.aggregate_validators_on_subnet.poll().map_err(|e| { error!(self.log, "Failed to check for aggregate validator on subnet expirations"; "error"=> format!("{}", e)); });
+       if let Poll::Ready(Some(Err(e))) = self.aggregate_validators_on_subnet.poll_next_unpin(cx) {
+           error!(self.log, "Failed to check for aggregate validator on subnet expirations"; "error"=> format!("{}", e));
+       }
        // process any generated events
        if let Some(event) = self.events.pop_front() {
-           return Ok(Async::Ready(Some(event)));
+           return Poll::Ready(Some(event));
        }
-       Ok(Async::NotReady)
+       Poll::Pending
    }
}
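This impl is the canonical shape of the 0.1 to 0.3 `Stream` port: the `Error` associated type disappears, `poll` becomes `poll_next(self: Pin<&mut Self>, cx: &mut Context)`, and fallible inner streams yield `Result` items that are handled inline rather than aborting the outer poll. A stripped-down sketch of the same structure; `EventQueue` is a hypothetical stand-in for the service:

    // Assumes futures = "0.3". In the real service the inner delay streams
    // register the task's waker before returning Pending; this standalone queue
    // never does, so it must only be polled while items remain.
    use futures::stream::{Stream, StreamExt};
    use std::collections::VecDeque;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    struct EventQueue {
        events: VecDeque<u32>,
    }

    impl Stream for EventQueue {
        type Item = u32;

        fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
            // Mirrors the tail of the poll_next above: drain queued events first,
            // otherwise report Pending (no Error type, no Ok wrapping).
            if let Some(event) = self.events.pop_front() {
                return Poll::Ready(Some(event));
            }
            Poll::Pending
        }
    }

    #[tokio::main]
    async fn main() {
        let mut queue = EventQueue { events: vec![1, 2].into() };
        assert_eq!(queue.next().await, Some(1));
        assert_eq!(queue.next().await, Some(2));
    }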
#[cfg(test)]
mod tests {
use super::*;
use beacon_chain::builder::{BeaconChainBuilder, Witness};
use beacon_chain::eth1_chain::CachingEth1Backend;
use beacon_chain::events::NullEventHandler;
use beacon_chain::migrate::NullMigrator;
use eth2_libp2p::discovery::{build_enr, Keypair};
use eth2_libp2p::{discovery::CombinedKey, NetworkConfig, NetworkGlobals};
use futures::Stream;
use genesis::{generate_deterministic_keypairs, interop_genesis_state};
use lazy_static::lazy_static;
use matches::assert_matches;
use slog::Logger;
use sloggers::{null::NullLoggerBuilder, Build};
use slot_clock::{SlotClock, SystemTimeSlotClock};
use std::convert::TryInto;
use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
use std::time::SystemTime;
use store::MemoryStore;
use tempfile::tempdir;
use tokio::prelude::*;
use types::{CommitteeIndex, EnrForkId, EthSpec, MinimalEthSpec};
const SLOT_DURATION_MILLIS: u64 = 200;
type TestBeaconChainType = Witness<
MemoryStore<MinimalEthSpec>,
NullMigrator,
SystemTimeSlotClock,
CachingEth1Backend<MinimalEthSpec, MemoryStore<MinimalEthSpec>>,
MinimalEthSpec,
NullEventHandler<MinimalEthSpec>,
>;
pub struct TestBeaconChain {
chain: Arc<BeaconChain<TestBeaconChainType>>,
}
impl TestBeaconChain {
pub fn new_with_system_clock() -> Self {
let data_dir = tempdir().expect("should create temporary data_dir");
let spec = MinimalEthSpec::default_spec();
let keypairs = generate_deterministic_keypairs(1);
let log = get_logger();
let chain = Arc::new(
BeaconChainBuilder::new(MinimalEthSpec)
.logger(log.clone())
.custom_spec(spec.clone())
.store(Arc::new(MemoryStore::open()))
.store_migrator(NullMigrator)
.data_dir(data_dir.path().to_path_buf())
.genesis_state(
interop_genesis_state::<MinimalEthSpec>(&keypairs, 0, &spec)
.expect("should generate interop state"),
)
.expect("should build state using recent genesis")
.dummy_eth1_backend()
.expect("should build dummy backend")
.null_event_handler()
.slot_clock(SystemTimeSlotClock::new(
Slot::new(0),
Duration::from_secs(recent_genesis_time()),
Duration::from_millis(SLOT_DURATION_MILLIS),
))
.reduced_tree_fork_choice()
.expect("should add fork choice to builder")
.build()
.expect("should build"),
);
Self { chain }
}
}
pub fn recent_genesis_time() -> u64 {
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs()
}
fn get_logger() -> Logger {
NullLoggerBuilder.build().expect("logger should build")
}
lazy_static! {
static ref CHAIN: TestBeaconChain = { TestBeaconChain::new_with_system_clock() };
}
fn get_attestation_service() -> AttestationService<TestBeaconChainType> {
let log = get_logger();
let beacon_chain = CHAIN.chain.clone();
let config = NetworkConfig::default();
let enr_key: CombinedKey = Keypair::generate_secp256k1().try_into().unwrap();
let enr = build_enr::<MinimalEthSpec>(&enr_key, &config, EnrForkId::default()).unwrap();
let network_globals: NetworkGlobals<MinimalEthSpec> = NetworkGlobals::new(enr, 0, 0, &log);
AttestationService::new(beacon_chain, Arc::new(network_globals), &log)
}
fn get_subscription(
validator_index: u64,
attestation_committee_index: CommitteeIndex,
slot: Slot,
) -> ValidatorSubscription {
let is_aggregator = true;
ValidatorSubscription {
validator_index,
attestation_committee_index,
slot,
is_aggregator,
}
}
fn _get_subscriptions(validator_count: u64, slot: Slot) -> Vec<ValidatorSubscription> {
let mut subscriptions: Vec<ValidatorSubscription> = Vec::new();
for validator_index in 0..validator_count {
let is_aggregator = true;
subscriptions.push(ValidatorSubscription {
validator_index,
attestation_committee_index: validator_index,
slot,
is_aggregator,
});
}
subscriptions
}
// gets a number of events from the subscription service, or returns none if it times out after a number
// of slots
fn get_events<S: Stream<Item = AttServiceMessage, Error = ()>>(
stream: S,
no_events: u64,
no_slots_before_timeout: u32,
) -> impl Future<Item = Vec<AttServiceMessage>, Error = ()> {
stream
.take(no_events)
.collect()
.timeout(Duration::from_millis(SLOT_DURATION_MILLIS) * no_slots_before_timeout)
.map_err(|_| ())
}
#[test]
fn subscribe_current_slot() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 0;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// not enough time for peer discovery, just subscribe
let expected = vec![AttServiceMessage::Subscribe(SubnetId::new(validator_index))];
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 4, 1)
.map(move |events| {
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any2),
AttServiceMessage::Subscribe(_any1),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_current_slot_wait_for_unsubscribe() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 0;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// not enough time for peer discovery, just subscribe, unsubscribe
let expected = vec![
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
AttServiceMessage::Unsubscribe(SubnetId::new(validator_index)),
];
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 5, 2)
.map(move |events| {
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any2),
AttServiceMessage::Subscribe(_any1),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_five_slots_ahead() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 5;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// just discover peers, don't subscribe yet
let expected = vec![AttServiceMessage::DiscoverPeers(SubnetId::new(
validator_index,
))];
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 4, 1)
.map(move |events| {
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_five_slots_ahead_wait_five_slots() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 5;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// we should discover peers, wait, then subscribe
let expected = vec![
AttServiceMessage::DiscoverPeers(SubnetId::new(validator_index)),
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
];
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 5, 5)
.map(move |events| {
//dbg!(&events);
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_ten_slots_ahead() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 10;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// ten slots ahead is before our target peer discover time, so expect no messages
let expected: Vec<AttServiceMessage> = vec![];
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 3, 1)
.map(move |events| {
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_ten_slots_ahead_wait_five_slots() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 10;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// expect discover peers because we will enter TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD range
let expected: Vec<AttServiceMessage> = vec![AttServiceMessage::DiscoverPeers(
SubnetId::new(validator_index),
)];
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 4, 5)
.map(move |events| {
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_all_random_subnets() {
// subscribe 10 slots ahead so we do not produce any exact subnet messages
let subscription_slot = 10;
let subscription_count = 64;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions =
_get_subscriptions(subscription_count, current_slot + subscription_slot);
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 192, 3)
.map(move |events| {
let mut discover_peer_count = 0;
let mut subscribe_count = 0;
let mut enr_add_count = 0;
let mut unexpected_msg_count = 0;
for event in events {
match event {
AttServiceMessage::DiscoverPeers(_any_subnet) => {
discover_peer_count = discover_peer_count + 1
}
AttServiceMessage::Subscribe(_any_subnet) => {
subscribe_count = subscribe_count + 1
}
AttServiceMessage::EnrAdd(_any_subnet) => {
enr_add_count = enr_add_count + 1
}
_ => unexpected_msg_count = unexpected_msg_count + 1,
}
}
assert_eq!(discover_peer_count, 64);
assert_eq!(subscribe_count, 64);
assert_eq!(enr_add_count, 64);
assert_eq!(unexpected_msg_count, 0);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
}
#[test]
fn subscribe_all_random_subnets_plus_one() {
// subscribe 10 slots ahead so we do not produce any exact subnet messages
let subscription_slot = 10;
// the 65th subscription should result in no more messages than the previous scenario
let subscription_count = 65;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions =
_get_subscriptions(subscription_count, current_slot + subscription_slot);
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
let test_result = Arc::new(AtomicBool::new(false));
let thread_result = test_result.clone();
tokio::run(
get_events(attestation_service, 192, 3)
.map(move |events| {
let mut discover_peer_count = 0;
let mut subscribe_count = 0;
let mut enr_add_count = 0;
let mut unexpected_msg_count = 0;
for event in events {
match event {
AttServiceMessage::DiscoverPeers(_any_subnet) => {
discover_peer_count = discover_peer_count + 1
}
AttServiceMessage::Subscribe(_any_subnet) => {
subscribe_count = subscribe_count + 1
}
AttServiceMessage::EnrAdd(_any_subnet) => {
enr_add_count = enr_add_count + 1
}
_ => unexpected_msg_count = unexpected_msg_count + 1,
}
}
assert_eq!(discover_peer_count, 64);
assert_eq!(subscribe_count, 64);
assert_eq!(enr_add_count, 64);
assert_eq!(unexpected_msg_count, 0);
// test completed successfully
thread_result.store(true, Relaxed);
})
// this doesn't need to be here, but helps with debugging
.map_err(|_| panic!("Did not receive desired events in the given time frame")),
);
assert!(test_result.load(Relaxed))
    }
}

View File

@@ -0,0 +1,508 @@
#[cfg(test)]
mod tests {
use super::super::*;
use beacon_chain::{
builder::{BeaconChainBuilder, Witness},
eth1_chain::CachingEth1Backend,
events::NullEventHandler,
migrate::NullMigrator,
};
use eth2_libp2p::discovery::{build_enr, Keypair};
use eth2_libp2p::{discovery::CombinedKey, CombinedKeyExt, NetworkConfig, NetworkGlobals};
use futures::Stream;
use genesis::{generate_deterministic_keypairs, interop_genesis_state};
use lazy_static::lazy_static;
use matches::assert_matches;
use slog::Logger;
use sloggers::{null::NullLoggerBuilder, Build};
use slot_clock::{SlotClock, SystemTimeSlotClock};
use std::time::SystemTime;
use store::MemoryStore;
use tempfile::tempdir;
use tokio::time::Duration;
use types::{CommitteeIndex, EnrForkId, EthSpec, MinimalEthSpec};
const SLOT_DURATION_MILLIS: u64 = 2000;
type TestBeaconChainType = Witness<
MemoryStore<MinimalEthSpec>,
NullMigrator,
SystemTimeSlotClock,
CachingEth1Backend<MinimalEthSpec, MemoryStore<MinimalEthSpec>>,
MinimalEthSpec,
NullEventHandler<MinimalEthSpec>,
>;
pub struct TestBeaconChain {
chain: Arc<BeaconChain<TestBeaconChainType>>,
}
impl TestBeaconChain {
pub fn new_with_system_clock() -> Self {
let data_dir = tempdir().expect("should create temporary data_dir");
let spec = MinimalEthSpec::default_spec();
let keypairs = generate_deterministic_keypairs(1);
let log = get_logger();
let chain = Arc::new(
BeaconChainBuilder::new(MinimalEthSpec)
.logger(log.clone())
.custom_spec(spec.clone())
.store(Arc::new(MemoryStore::open()))
.store_migrator(NullMigrator)
.data_dir(data_dir.path().to_path_buf())
.genesis_state(
interop_genesis_state::<MinimalEthSpec>(&keypairs, 0, &spec)
.expect("should generate interop state"),
)
.expect("should build state using recent genesis")
.dummy_eth1_backend()
.expect("should build dummy backend")
.null_event_handler()
.slot_clock(SystemTimeSlotClock::new(
Slot::new(0),
Duration::from_secs(recent_genesis_time()),
Duration::from_millis(SLOT_DURATION_MILLIS),
))
.reduced_tree_fork_choice()
.expect("should add fork choice to builder")
.build()
.expect("should build"),
);
Self { chain }
}
}
pub fn recent_genesis_time() -> u64 {
SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs()
}
fn get_logger() -> Logger {
NullLoggerBuilder.build().expect("logger should build")
}
lazy_static! {
static ref CHAIN: TestBeaconChain = { TestBeaconChain::new_with_system_clock() };
}
fn get_attestation_service() -> AttestationService<TestBeaconChainType> {
let log = get_logger();
let beacon_chain = CHAIN.chain.clone();
let config = NetworkConfig::default();
let enr_key = CombinedKey::from_libp2p(&Keypair::generate_secp256k1()).unwrap();
let enr = build_enr::<MinimalEthSpec>(&enr_key, &config, EnrForkId::default()).unwrap();
let network_globals: NetworkGlobals<MinimalEthSpec> = NetworkGlobals::new(enr, 0, 0, &log);
AttestationService::new(beacon_chain, Arc::new(network_globals), &log)
}
fn get_subscription(
validator_index: u64,
attestation_committee_index: CommitteeIndex,
slot: Slot,
) -> ValidatorSubscription {
let is_aggregator = true;
ValidatorSubscription {
validator_index,
attestation_committee_index,
slot,
is_aggregator,
}
}
fn _get_subscriptions(validator_count: u64, slot: Slot) -> Vec<ValidatorSubscription> {
let mut subscriptions: Vec<ValidatorSubscription> = Vec::new();
for validator_index in 0..validator_count {
let is_aggregator = true;
subscriptions.push(ValidatorSubscription {
validator_index,
attestation_committee_index: validator_index,
slot,
is_aggregator,
});
}
subscriptions
}
// gets a number of events from the subscription service, or returns whatever has been
// collected so far if it times out after a number of slots
async fn get_events<S: Stream<Item = AttServiceMessage> + Unpin>(
mut stream: S,
no_events: usize,
no_slots_before_timeout: u32,
) -> Vec<AttServiceMessage> {
let mut events = Vec::new();
let collect_stream_fut = async {
loop {
if let Some(result) = stream.next().await {
events.push(result);
if events.len() == no_events {
return;
}
}
}
};
tokio::select! {
_ = collect_stream_fut => {return events}
_ = tokio::time::delay_for(
Duration::from_millis(SLOT_DURATION_MILLIS) * no_slots_before_timeout,
) => { return events; }
}
}
#[tokio::test]
async fn subscribe_current_slot() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 0;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// not enough time for peer discovery, just subscribe
let expected = vec![AttServiceMessage::Subscribe(SubnetId::new(validator_index))];
let events = get_events(attestation_service, 4, 1).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any2),
AttServiceMessage::Subscribe(_any1),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
}
#[tokio::test]
async fn subscribe_current_slot_wait_for_unsubscribe() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 0;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// not enough time for peer discovery, just subscribe, unsubscribe
let expected = vec![
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
AttServiceMessage::Unsubscribe(SubnetId::new(validator_index)),
];
let events = get_events(attestation_service, 5, 2).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any2),
AttServiceMessage::Subscribe(_any1),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
}
#[tokio::test]
async fn subscribe_five_slots_ahead() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 5;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// just discover peers, don't subscribe yet
let expected = vec![AttServiceMessage::DiscoverPeers(SubnetId::new(
validator_index,
))];
let events = get_events(attestation_service, 4, 1).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
}
#[tokio::test]
async fn subscribe_five_slots_ahead_wait_five_slots() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 5;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// we should discover peers, wait, then subscribe
let expected = vec![
AttServiceMessage::DiscoverPeers(SubnetId::new(validator_index)),
AttServiceMessage::Subscribe(SubnetId::new(validator_index)),
];
let events = get_events(attestation_service, 5, 5).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
}
#[tokio::test]
async fn subscribe_7_slots_ahead() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 7;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// seven slots ahead is still before our target peer discovery time, so expect no exact-subnet messages
let expected: Vec<AttServiceMessage> = vec![];
let events = get_events(attestation_service, 3, 1).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
}
#[tokio::test]
async fn subscribe_ten_slots_ahead_wait_five_slots() {
// subscription config
let validator_index = 1;
let committee_index = 1;
let subscription_slot = 10;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions = vec![get_subscription(
validator_index,
committee_index,
current_slot + Slot::new(subscription_slot),
)];
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
// expect discover peers because we will enter TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD range
let expected: Vec<AttServiceMessage> = vec![AttServiceMessage::DiscoverPeers(
SubnetId::new(validator_index),
)];
let events = get_events(attestation_service, 4, 5).await;
assert_matches!(
events[..3],
[
AttServiceMessage::DiscoverPeers(_any1),
AttServiceMessage::Subscribe(_any2),
AttServiceMessage::EnrAdd(_any3)
]
);
assert_eq!(expected[..], events[3..]);
}
#[tokio::test]
async fn subscribe_all_random_subnets() {
// subscribe 10 slots ahead so we do not produce any exact subnet messages
let subscription_slot = 10;
let subscription_count = 64;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions =
_get_subscriptions(subscription_count, current_slot + subscription_slot);
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
let events = get_events(attestation_service, 192, 3).await;
let mut discover_peer_count = 0;
let mut subscribe_count = 0;
let mut enr_add_count = 0;
let mut unexpected_msg_count = 0;
for event in events {
match event {
AttServiceMessage::DiscoverPeers(_any_subnet) => discover_peer_count += 1,
AttServiceMessage::Subscribe(_any_subnet) => subscribe_count += 1,
AttServiceMessage::EnrAdd(_any_subnet) => enr_add_count += 1,
_ => unexpected_msg_count += 1,
}
}
assert_eq!(discover_peer_count, 64);
assert_eq!(subscribe_count, 64);
assert_eq!(enr_add_count, 64);
assert_eq!(unexpected_msg_count, 0);
}
#[tokio::test]
async fn subscribe_all_random_subnets_plus_one() {
// subscribe 10 slots ahead so we do not produce any exact subnet messages
let subscription_slot = 10;
// the 65th subscription should result in no more messages than the previous scenario
let subscription_count = 65;
// create the attestation service and subscriptions
let mut attestation_service = get_attestation_service();
let current_slot = attestation_service
.beacon_chain
.slot_clock
.now()
.expect("Could not get current slot");
let subscriptions =
_get_subscriptions(subscription_count, current_slot + subscription_slot);
// submit the subscriptions
attestation_service
.validator_subscriptions(subscriptions)
.unwrap();
let events = get_events(attestation_service, 192, 3).await;
let mut discover_peer_count = 0;
let mut subscribe_count = 0;
let mut enr_add_count = 0;
let mut unexpected_msg_count = 0;
for event in events {
match event {
AttServiceMessage::DiscoverPeers(_any_subnet) => discover_peer_count += 1,
AttServiceMessage::Subscribe(_any_subnet) => subscribe_count += 1,
AttServiceMessage::EnrAdd(_any_subnet) => enr_add_count += 1,
_ => unexpected_msg_count += 1,
}
}
assert_eq!(discover_peer_count, 64);
assert_eq!(subscribe_count, 64);
assert_eq!(enr_add_count, 64);
assert_eq!(unexpected_msg_count, 0);
}
}
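The `get_events` helper above is what replaces the old `tokio::run` plus `Timeout` plumbing: `tokio::select!` races the collection future against a delay and returns whatever was gathered. A self-contained sketch of that timeout idiom under the same assumptions (tokio 0.2 with the "macros" and "time" features enabled):

    use std::time::Duration;

    #[tokio::main]
    async fn main() {
        let work = async {
            tokio::time::delay_for(Duration::from_millis(10)).await;
            "done"
        };
        // select! polls both branches and drops whichever loses the race.
        let result = tokio::select! {
            r = work => r,
            _ = tokio::time::delay_for(Duration::from_millis(100)) => "timed out",
        };
        assert_eq!(result, "done");
    }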

View File

@@ -1,8 +1,12 @@
+#[macro_use]
+extern crate lazy_static;
+
/// This crate provides the network server for Lighthouse.
pub mod error;
pub mod service;
mod attestation_service;
+mod metrics;
mod persisted_dht;
mod router;
mod sync;
View File

@@ -0,0 +1,39 @@
pub use lighthouse_metrics::*;
lazy_static! {
/*
* Gossip Rx
*/
pub static ref GOSSIP_BLOCKS_RX: Result<IntCounter> = try_create_int_counter(
"network_gossip_blocks_rx_total",
"Count of gossip blocks received"
);
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_RX: Result<IntCounter> = try_create_int_counter(
"network_gossip_unaggregated_attestations_rx_total",
"Count of gossip unaggregated attestations received"
);
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_IGNORED: Result<IntCounter> = try_create_int_counter(
"network_gossip_unaggregated_attestations_ignored_total",
"Count of gossip unaggregated attestations ignored by attestation service"
);
pub static ref GOSSIP_AGGREGATED_ATTESTATIONS_RX: Result<IntCounter> = try_create_int_counter(
"network_gossip_aggregated_attestations_rx_total",
"Count of gossip aggregated attestations received"
);
/*
* Gossip Tx
*/
pub static ref GOSSIP_BLOCKS_TX: Result<IntCounter> = try_create_int_counter(
"network_gossip_blocks_tx_total",
"Count of gossip blocks transmitted"
);
pub static ref GOSSIP_UNAGGREGATED_ATTESTATIONS_TX: Result<IntCounter> = try_create_int_counter(
"network_gossip_unaggregated_attestations_tx_total",
"Count of gossip unaggregated attestations transmitted"
);
pub static ref GOSSIP_AGGREGATED_ATTESTATIONS_TX: Result<IntCounter> = try_create_int_counter(
"network_gossip_aggregated_attestations_tx_total",
"Count of gossip aggregated attestations transmitted"
);
}
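These registrations only create the counters; the gossip handling paths are expected to bump them. A hedged sketch of the usage side, assuming lighthouse_metrics' convention of an `inc_counter` helper that takes the `Result`-wrapped counter so a failed registration is skipped rather than unwrapped:

    use lighthouse_metrics::*;

    lazy_static! {
        // Hypothetical counter, registered the same way as the ones above.
        pub static ref EXAMPLE_RX: Result<IntCounter> = try_create_int_counter(
            "network_example_rx_total",
            "Count of example messages received"
        );
    }

    fn on_example_message() {
        // A no-op if registration failed, so metrics can never take down the
        // message-handling path.
        inc_counter(&EXAMPLE_RX);
    }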

View File

@@ -16,10 +16,9 @@ use eth2_libp2p::{
    },
    MessageId, NetworkGlobals, PeerId, PubsubMessage, RPCEvent,
};
-use futures::future::Future;
-use futures::stream::Stream;
+use futures::prelude::*;
use processor::Processor;
-use slog::{debug, o, trace, warn};
+use slog::{debug, info, o, trace, warn};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::EthSpec;
@@ -60,7 +59,7 @@ impl<T: BeaconChainTypes> Router<T> {
        beacon_chain: Arc<BeaconChain<T>>,
        network_globals: Arc<NetworkGlobals<T::EthSpec>>,
        network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
-       executor: &tokio::runtime::TaskExecutor,
+       runtime_handle: &tokio::runtime::Handle,
        log: slog::Logger,
    ) -> error::Result<mpsc::UnboundedSender<RouterMessage<T::EthSpec>>> {
        let message_handler_log = log.new(o!("service"=> "router"));
@@ -70,7 +69,7 @@ impl<T: BeaconChainTypes> Router<T> {
        // Initialise a message instance, which itself spawns the syncing thread.
        let processor = Processor::new(
-           executor,
+           runtime_handle,
            beacon_chain,
            network_globals,
            network_send.clone(),
@@ -85,13 +84,12 @@ impl<T: BeaconChainTypes> Router<T> {
        };
        // spawn handler task and move the message handler instance into the spawned thread
-       executor.spawn(
-           handler_recv
-               .for_each(move |msg| Ok(handler.handle_message(msg)))
-               .map_err(move |_| {
-                   debug!(log, "Network message handler terminated.");
-               }),
-       );
+       runtime_handle.spawn(async move {
+           handler_recv
+               .for_each(move |msg| future::ready(handler.handle_message(msg)))
+               .await;
+           debug!(log, "Network message handler terminated.");
+       });
        Ok(handler_send)
    }
@@ -172,7 +170,7 @@ impl<T: BeaconChainTypes> Router<T> {
                // an error could have occurred.
                match error_response {
                    RPCCodedResponse::InvalidRequest(error) => {
-                       warn!(self.log, "Peer indicated invalid request"; "peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
+                       warn!(self.log, "RPC Invalid Request"; "peer_id" => peer_id.to_string(), "request_id" => request_id, "error" => error.to_string());
                        self.handle_rpc_error(
                            peer_id,
                            request_id,
@@ -180,7 +178,7 @@ impl<T: BeaconChainTypes> Router<T> {
                        );
                    }
                    RPCCodedResponse::ServerError(error) => {
-                       warn!(self.log, "Peer internal server error"; "peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
+                       warn!(self.log, "RPC Server Error"; "peer_id" => peer_id.to_string(), "request_id" => request_id, "error" => error.to_string());
                        self.handle_rpc_error(
                            peer_id,
                            request_id,
@@ -188,7 +186,7 @@ impl<T: BeaconChainTypes> Router<T> {
                        );
                    }
                    RPCCodedResponse::Unknown(error) => {
-                       warn!(self.log, "Unknown peer error"; "peer" => format!("{:?}", peer_id), "error" => error.as_string());
+                       warn!(self.log, "RPC Unknown Error"; "peer_id" => peer_id.to_string(), "request_id" => request_id, "error" => error.to_string());
                        self.handle_rpc_error(
                            peer_id,
                            request_id,
@@ -278,6 +276,7 @@ impl<T: BeaconChainTypes> Router<T> {
            PubsubMessage::BeaconBlock(block) => {
                match self.processor.should_forward_block(&peer_id, block) {
                    Ok(verified_block) => {
+                       info!(self.log, "New block received"; "slot" => verified_block.block.slot(), "hash" => verified_block.block_root.to_string());
                        self.propagate_message(id, peer_id.clone());
                        self.processor.on_block_gossip(peer_id, verified_block);
                    }
@@ -313,7 +312,7 @@ impl<T: BeaconChainTypes> Router<T> {
    /// Informs the network service that the message should be forwarded to other peers.
    fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
        self.network_send
-           .try_send(NetworkMessage::Propagate {
+           .send(NetworkMessage::Propagate {
                propagation_source,
                message_id,
            })
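The spawn above shows the other recurring migration move: `Handle::spawn` takes a plain future instead of a 0.1-style task, and stream combinators now expect the closure to return a future, hence `future::ready(..)` where futures 0.1 took `Ok(..)`. A self-contained sketch with a made-up message type:

    // Assumes tokio = { version = "0.2", features = ["full"] } and futures = "0.3";
    // tokio 0.2's UnboundedReceiver implements Stream directly.
    use futures::{future, StreamExt};
    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (handler_send, handler_recv) = mpsc::unbounded_channel::<u32>();
        let handler = tokio::spawn(async move {
            handler_recv
                .for_each(|msg| future::ready(println!("handling message {}", msg)))
                .await;
            println!("handler terminated");
        });
        handler_send.send(1).unwrap();
        handler_send.send(2).unwrap();
        drop(handler_send); // closing the channel ends the for_each loop
        handler.await.unwrap();
    }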

View File

@@ -44,7 +44,7 @@ pub struct Processor<T: BeaconChainTypes> {
impl<T: BeaconChainTypes> Processor<T> {
    /// Instantiate a `Processor` instance
    pub fn new(
-       executor: &tokio::runtime::TaskExecutor,
+       runtime_handle: &tokio::runtime::Handle,
        beacon_chain: Arc<BeaconChain<T>>,
        network_globals: Arc<NetworkGlobals<T::EthSpec>>,
        network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
@@ -54,7 +54,7 @@ impl<T: BeaconChainTypes> Processor<T> {
        // spawn the sync thread
        let (sync_send, _sync_exit) = crate::sync::manager::spawn(
-           executor,
+           runtime_handle,
            beacon_chain.clone(),
            network_globals,
            network_send.clone(),
@@ -71,7 +71,7 @@ impl<T: BeaconChainTypes> Processor<T> {
    }
    fn send_to_sync(&mut self, message: SyncMessage<T::EthSpec>) {
-       self.sync_send.try_send(message).unwrap_or_else(|_| {
+       self.sync_send.send(message).unwrap_or_else(|_| {
            warn!(
                self.log,
                "Could not send message to the sync service";
@@ -485,10 +485,9 @@ impl<T: BeaconChainTypes> Processor<T> {
    ) -> Result<GossipVerifiedBlock<T>, BlockError> {
        let result = self.chain.verify_block_for_gossip(*block.clone());
-       if let Err(BlockError::ParentUnknown(block_hash)) = result {
+       if let Err(BlockError::ParentUnknown(_)) = result {
            // if we don't know the parent, start a parent lookup
            // TODO: Modify the return to avoid the block clone.
-           debug!(self.log, "Unknown block received. Starting a parent lookup"; "block_slot" => block.message.slot, "block_hash" => format!("{}", block_hash));
            self.send_to_sync(SyncMessage::UnknownBlock(peer_id.clone(), block));
        }
        result
@@ -929,7 +928,7 @@ impl<T: EthSpec> HandlerNetworkContext<T> {
        );
        self.send_rpc_request(peer_id.clone(), RPCRequest::Goodbye(reason));
        self.network_send
-           .try_send(NetworkMessage::Disconnect { peer_id })
+           .send(NetworkMessage::Disconnect { peer_id })
            .unwrap_or_else(|_| {
                warn!(
                    self.log,
@@ -970,7 +969,7 @@ impl<T: EthSpec> HandlerNetworkContext<T> {
    fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent<T>) {
        self.network_send
-           .try_send(NetworkMessage::RPC(peer_id, rpc_event))
+           .send(NetworkMessage::RPC(peer_id, rpc_event))
            .unwrap_or_else(|_| {
                warn!(
                    self.log,

View File

@@ -1,23 +1,22 @@
-use crate::error;
use crate::persisted_dht::{load_dht, persist_dht};
use crate::router::{Router, RouterMessage};
use crate::{
    attestation_service::{AttServiceMessage, AttestationService},
    NetworkConfig,
};
+use crate::{error, metrics};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::Service as LibP2PService;
-use eth2_libp2p::{rpc::RPCRequest, BehaviourEvent, Enr, MessageId, NetworkGlobals, PeerId, Swarm};
+use eth2_libp2p::{rpc::RPCRequest, BehaviourEvent, Enr, MessageId, NetworkGlobals, PeerId};
-use eth2_libp2p::{PubsubMessage, RPCEvent};
+use eth2_libp2p::{Libp2pEvent, PubsubMessage, RPCEvent};
use futures::prelude::*;
-use futures::Stream;
use rest_types::ValidatorSubscription;
-use slog::{debug, error, info, trace};
+use slog::{debug, error, info, o, trace};
use std::sync::Arc;
-use std::time::{Duration, Instant};
+use std::time::Duration;
-use tokio::runtime::TaskExecutor;
+use tokio::runtime::Handle;
use tokio::sync::{mpsc, oneshot};
-use tokio::timer::Delay;
+use tokio::time::Delay;
use types::EthSpec;

mod tests;
@@ -42,8 +41,6 @@ pub struct NetworkService<T: BeaconChainTypes> {
    store: Arc<T::Store>,
    /// A collection of global variables, accessible outside of the network service.
    network_globals: Arc<NetworkGlobals<T::EthSpec>>,
-   /// An initial delay to update variables after the libp2p service has started.
-   initial_delay: Delay,
    /// A delay that expires when a new fork takes place.
    next_fork_update: Option<Delay>,
    /// The logger for the network service.
@@ -56,7 +53,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
    pub fn start(
        beacon_chain: Arc<BeaconChain<T>>,
        config: &NetworkConfig,
-       executor: &TaskExecutor,
+       runtime_handle: &Handle,
        network_log: slog::Logger,
    ) -> error::Result<(
        Arc<NetworkGlobals<T::EthSpec>>,
@@ -78,16 +75,12 @@ impl<T: BeaconChainTypes> NetworkService<T> {
        // launch libp2p service
        let (network_globals, mut libp2p) =
-           LibP2PService::new(config, enr_fork_id, network_log.clone())?;
+           runtime_handle.enter(|| LibP2PService::new(config, enr_fork_id, &network_log))?;
        for enr in load_dht::<T::Store, T::EthSpec>(store.clone()) {
            libp2p.swarm.add_enr(enr);
        }
-       // A delay used to initialise code after the network has started
-       // This is currently used to obtain the listening addresses from the libp2p service.
-       let initial_delay = Delay::new(Instant::now() + Duration::from_secs(1));
        // launch derived network services
        // router task
@@ -95,7 +88,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
            beacon_chain.clone(),
            network_globals.clone(),
            network_send.clone(),
-           executor,
+           runtime_handle,
            network_log.clone(),
        )?;
@@ -104,6 +97,7 @@ impl<T: BeaconChainTypes> NetworkService<T> {
            AttestationService::new(beacon_chain.clone(), network_globals.clone(), &network_log);
        // create the network service and spawn the task
+       let network_log = network_log.new(o!("service"=> "network"));
        let network_service = NetworkService {
            beacon_chain,
            libp2p,
@@ -112,13 +106,12 @@ impl<T: BeaconChainTypes> NetworkService<T> {
            router_send,
            store,
            network_globals: network_globals.clone(),
-           initial_delay,
            next_fork_update,
            log: network_log,
            propagation_percentage,
        };
-       let network_exit = spawn_service(network_service, &executor)?;
+       let network_exit = runtime_handle.enter(|| spawn_service(network_service))?;
        Ok((network_globals, network_send, network_exit))
    }
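`runtime_handle.enter` appears twice in this file because tokio 0.2 constructs such as delays and listeners must be created inside a runtime context, and the closure form of `Handle::enter` provides that context from synchronous code. A minimal sketch under that assumption:

    use std::time::Duration;
    use tokio::runtime::Runtime;

    fn main() {
        let mut rt = Runtime::new().expect("should build runtime");
        let handle = rt.handle().clone();
        // Creating a Delay needs the runtime's timer driver; enter() makes it
        // available without being inside an async task.
        let delay = handle.enter(|| tokio::time::delay_for(Duration::from_millis(5)));
        rt.block_on(delay);
    }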
@@ -126,248 +119,249 @@ impl<T: BeaconChainTypes> NetworkService<T> {
fn spawn_service<T: BeaconChainTypes>(
    mut service: NetworkService<T>,
-   executor: &TaskExecutor,
) -> error::Result<tokio::sync::oneshot::Sender<()>> {
    let (network_exit, mut exit_rx) = tokio::sync::oneshot::channel();
    // spawn on the current executor
-   executor.spawn(
-       futures::future::poll_fn(move || -> Result<_, ()> {
-           let log = &service.log;
-           // handles any logic which requires an initial delay
-           if !service.initial_delay.is_elapsed() {
-               if let Ok(Async::Ready(_)) = service.initial_delay.poll() {
-                   let multi_addrs = Swarm::listeners(&service.libp2p.swarm).cloned().collect();
-                   *service.network_globals.listen_multiaddrs.write() = multi_addrs;
-               }
-           }
-           // perform termination tasks when the network is being shutdown
-           if let Ok(Async::Ready(_)) | Err(_) = exit_rx.poll() {
+   tokio::spawn(async move {
+       loop {
+           // build the futures to check simultaneously
+           tokio::select! {
+               // handle network shutdown
+               _ = (&mut exit_rx) => {
                    // network thread is terminating
                    let enrs: Vec<Enr> = service.libp2p.swarm.enr_entries().cloned().collect();
                    debug!(
-                       log,
+                       service.log,
                        "Persisting DHT to store";
                        "Number of peers" => format!("{}", enrs.len()),
                    );
                    match persist_dht::<T::Store, T::EthSpec>(service.store.clone(), enrs) {
                        Err(e) => error!(
-                           log,
+                           service.log,
                            "Failed to persist DHT on drop";
                            "error" => format!("{:?}", e)
                        ),
                        Ok(_) => info!(
-                           log,
+                           service.log,
                            "Saved DHT state";
                        ),
                    }
-                   info!(log.clone(), "Network service shutdown");
-                   return Ok(Async::Ready(()));
+                   info!(service.log, "Network service shutdown");
+                   return;
                }
-           // processes the network channel before processing the libp2p swarm
-           loop {
-               // poll the network channel
-               match service.network_recv.poll() {
-                   Ok(Async::Ready(Some(message))) => match message {
+               // handle a message sent to the network
+               Some(message) = service.network_recv.recv() => {
+                   match message {
                        NetworkMessage::RPC(peer_id, rpc_event) => {
-                           trace!(log, "Sending RPC"; "rpc" => format!("{}", rpc_event));
+                           trace!(service.log, "Sending RPC"; "rpc" => format!("{}", rpc_event));
                            service.libp2p.swarm.send_rpc(peer_id, rpc_event);
                        }
                        NetworkMessage::Propagate {
                            propagation_source,
                            message_id,
                        } => {
                            // TODO: Remove this for mainnet
                            // randomly prevents propagation
                            let mut should_send = true;
                            if let Some(percentage) = service.propagation_percentage {
                                // not exact percentage but close enough
                                let rand = rand::random::<u8>() % 100;
                                if rand > percentage {
                                    // don't propagate
                                    should_send = false;
                                }
                            }
                            if !should_send {
-                               info!(log, "Random filter did not propagate message");
+                               info!(service.log, "Random filter did not propagate message");
                            } else {
-                               trace!(log, "Propagating gossipsub message";
+                               trace!(service.log, "Propagating gossipsub message";
                                    "propagation_peer" => format!("{:?}", propagation_source),
                                    "message_id" => message_id.to_string(),
                                );
-                               service.libp2p
+                               service
+                                   .libp2p
                                    .swarm
                                    .propagate_message(&propagation_source, message_id);
                            }
                        }
                        NetworkMessage::Publish { messages } => {
                            // TODO: Remove this for mainnet
                            // randomly prevents propagation
                            let mut should_send = true;
                            if let Some(percentage) = service.propagation_percentage {
                                // not exact percentage but close enough
                                let rand = rand::random::<u8>() % 100;
                                if rand > percentage {
                                    // don't propagate
                                    should_send = false;
                                }
                            }
                            if !should_send {
-                               info!(log, "Random filter did not publish messages");
+                               info!(service.log, "Random filter did not publish messages");
                            } else {
                                let mut topic_kinds = Vec::new();
                                for message in &messages {
                                    if !topic_kinds.contains(&message.kind()) {
                                        topic_kinds.push(message.kind());
                                    }
                                }
-                               debug!(log, "Sending pubsub messages"; "count" => messages.len(), "topics" => format!("{:?}", topic_kinds));
+                               debug!(
+                                   service.log,
+                                   "Sending pubsub messages";
+                                   "count" => messages.len(),
+                                   "topics" => format!("{:?}", topic_kinds)
+                               );
+                               expose_publish_metrics(&messages);
                                service.libp2p.swarm.publish(messages);
                            }
                        }
-                       NetworkMessage::Disconnect { peer_id } => {
-                           service.libp2p.disconnect_and_ban_peer(
-                               peer_id,
-                               std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
-                           );
-                       }
-                       NetworkMessage::Subscribe { subscriptions } =>
-                       {
-                           // the result is dropped as it used solely for ergonomics
-                           let _ = service.attestation_service.validator_subscriptions(subscriptions);
-                       }
-                   },
-                   Ok(Async::NotReady) => break,
-                   Ok(Async::Ready(None)) => {
-                       debug!(log, "Network channel closed");
-                       return Err(());
-                   }
-                   Err(e) => {
-                       debug!(log, "Network channel error"; "error" => format!("{}", e));
-                       return Err(());
-                   }
-               }
-           }
-           // process any attestation service events
-           // NOTE: This must come after the network message processing as that may trigger events in
-           // the attestation service.
-           while let Ok(Async::Ready(Some(attestation_service_message))) = service.attestation_service.poll() {
-               match attestation_service_message {
-                   // TODO: Implement
-                   AttServiceMessage::Subscribe(subnet_id) => {
-                       service.libp2p.swarm.subscribe_to_subnet(subnet_id);
-                   },
-                   AttServiceMessage::Unsubscribe(subnet_id) => {
-                       service.libp2p.swarm.subscribe_to_subnet(subnet_id);
-                   },
-                   AttServiceMessage::EnrAdd(subnet_id) => {
-                       service.libp2p.swarm.update_enr_subnet(subnet_id, true);
-                   },
-                   AttServiceMessage::EnrRemove(subnet_id) => {
-                       service.libp2p.swarm.update_enr_subnet(subnet_id, false);
-                   },
-                   AttServiceMessage::DiscoverPeers(subnet_id) => {
-                       service.libp2p.swarm.peers_request(subnet_id);
-                   },
-               }
-           }
-           let mut peers_to_ban = Vec::new();
-           // poll the swarm
-           loop {
-               match service.libp2p.poll() {
-                   Ok(Async::Ready(Some(event))) => match event {
-                       BehaviourEvent::RPC(peer_id, rpc_event) => {
-                           // if we received a Goodbye message, drop and ban the peer
-                           if let RPCEvent::Request(_, RPCRequest::Goodbye(_)) = rpc_event {
-                               peers_to_ban.push(peer_id.clone());
-                           };
-                           service.router_send
-                               .try_send(RouterMessage::RPC(peer_id, rpc_event))
-                               .map_err(|_| { debug!(log, "Failed to send RPC to router");} )?;
-                       }
-                       BehaviourEvent::PeerDialed(peer_id) => {
-                           debug!(log, "Peer Dialed"; "peer_id" => format!("{}", peer_id));
service.router_send
.try_send(RouterMessage::PeerDialed(peer_id))
.map_err(|_| { debug!(log, "Failed to send peer dialed to router");})?;
}
BehaviourEvent::PeerDisconnected(peer_id) => {
debug!(log, "Peer Disconnected"; "peer_id" => format!("{}", peer_id));
service.router_send
.try_send(RouterMessage::PeerDisconnected(peer_id))
.map_err(|_| { debug!(log, "Failed to send peer disconnect to router");})?;
}
BehaviourEvent::StatusPeer(peer_id) => {
service.router_send
.try_send(RouterMessage::StatusPeer(peer_id))
.map_err(|_| { debug!(log, "Failed to send re-status peer to router");})?;
}
BehaviourEvent::PubsubMessage {
id,
source,
message,
..
} => {
match message {
// attestation information gets processed in the attestation service
PubsubMessage::Attestation(ref subnet_and_attestation) => {
let subnet = &subnet_and_attestation.0;
let attestation = &subnet_and_attestation.1;
// checks if we have an aggregator for the slot. If so, we process
// the attestation
if service.attestation_service.should_process_attestation(&id, &source, subnet, attestation) {
service.router_send
.try_send(RouterMessage::PubsubMessage(id, source, message))
.map_err(|_| { debug!(log, "Failed to send pubsub message to router");})?;
}
}
_ => {
// all else is sent to the router
service.router_send
.try_send(RouterMessage::PubsubMessage(id, source, message))
.map_err(|_| { debug!(log, "Failed to send pubsub message to router");})?;
} }
} }
} NetworkMessage::Disconnect { peer_id } => {
BehaviourEvent::PeerSubscribed(_, _) => {} service.libp2p.disconnect_and_ban_peer(
}, peer_id,
Ok(Async::Ready(None)) => unreachable!("Stream never ends"), std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
Ok(Async::NotReady) => break, );
Err(_) => break, }
NetworkMessage::Subscribe { subscriptions } => {
// the result is dropped as it used solely for ergonomics
let _ = service
.attestation_service
.validator_subscriptions(subscriptions);
}
}
}
// process any attestation service events
Some(attestation_service_message) = service.attestation_service.next() => {
match attestation_service_message {
// TODO: Implement
AttServiceMessage::Subscribe(subnet_id) => {
service.libp2p.swarm.subscribe_to_subnet(subnet_id);
}
AttServiceMessage::Unsubscribe(subnet_id) => {
service.libp2p.swarm.subscribe_to_subnet(subnet_id);
}
AttServiceMessage::EnrAdd(subnet_id) => {
service.libp2p.swarm.update_enr_subnet(subnet_id, true);
}
AttServiceMessage::EnrRemove(subnet_id) => {
service.libp2p.swarm.update_enr_subnet(subnet_id, false);
}
AttServiceMessage::DiscoverPeers(subnet_id) => {
service.libp2p.swarm.peers_request(subnet_id);
}
}
}
libp2p_event = service.libp2p.next_event() => {
// poll the swarm
match libp2p_event {
Libp2pEvent::Behaviour(event) => match event {
BehaviourEvent::RPC(peer_id, rpc_event) => {
// if we received a Goodbye message, drop and ban the peer
if let RPCEvent::Request(_, RPCRequest::Goodbye(_)) = rpc_event {
//peers_to_ban.push(peer_id.clone());
service.libp2p.disconnect_and_ban_peer(
peer_id.clone(),
std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
);
};
let _ = service
.router_send
.send(RouterMessage::RPC(peer_id, rpc_event))
.map_err(|_| {
debug!(service.log, "Failed to send RPC to router");
});
}
BehaviourEvent::StatusPeer(peer_id) => {
let _ = service
.router_send
.send(RouterMessage::StatusPeer(peer_id))
.map_err(|_| {
debug!(service.log, "Failed to send re-status peer to router");
});
}
BehaviourEvent::PubsubMessage {
id,
source,
message,
..
} => {
// Update prometheus metrics.
expose_receive_metrics(&message);
match message {
// attestation information gets processed in the attestation service
PubsubMessage::Attestation(ref subnet_and_attestation) => {
let subnet = &subnet_and_attestation.0;
let attestation = &subnet_and_attestation.1;
// checks if we have an aggregator for the slot. If so, we process
// the attestation
if service.attestation_service.should_process_attestation(
&id,
&source,
subnet,
attestation,
) {
let _ = service
.router_send
.send(RouterMessage::PubsubMessage(id, source, message))
.map_err(|_| {
debug!(service.log, "Failed to send pubsub message to router");
});
} else {
metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_IGNORED)
}
}
_ => {
// all else is sent to the router
let _ = service
.router_send
.send(RouterMessage::PubsubMessage(id, source, message))
.map_err(|_| {
debug!(service.log, "Failed to send pubsub message to router");
});
}
}
}
BehaviourEvent::PeerSubscribed(_, _) => {},
}
Libp2pEvent::NewListenAddr(multiaddr) => {
service.network_globals.listen_multiaddrs.write().push(multiaddr);
}
Libp2pEvent::PeerConnected{ peer_id, endpoint,} => {
debug!(service.log, "Peer Connected"; "peer_id" => peer_id.to_string(), "endpoint" => format!("{:?}", endpoint));
if let eth2_libp2p::ConnectedPoint::Dialer { .. } = endpoint {
let _ = service
.router_send
.send(RouterMessage::PeerDialed(peer_id))
.map_err(|_| {
debug!(service.log, "Failed to send peer dialed to router"); });
}
}
Libp2pEvent::PeerDisconnected{ peer_id, endpoint,} => {
debug!(service.log, "Peer Disconnected"; "peer_id" => peer_id.to_string(), "endpoint" => format!("{:?}", endpoint));
let _ = service
.router_send
.send(RouterMessage::PeerDisconnected(peer_id))
.map_err(|_| {
debug!(service.log, "Failed to send peer disconnect to router");
});
}
}
}
} }
}
// ban and disconnect any peers that sent Goodbye requests if let Some(delay) = &service.next_fork_update {
while let Some(peer_id) = peers_to_ban.pop() { if delay.is_elapsed() {
service.libp2p.disconnect_and_ban_peer( service
peer_id.clone(), .libp2p
std::time::Duration::from_secs(BAN_PEER_TIMEOUT), .swarm
); .update_fork_version(service.beacon_chain.enr_fork_id());
} service.next_fork_update = next_fork_delay(&service.beacon_chain);
// if we have just forked, update inform the libp2p layer
if let Some(mut update_fork_delay) = service.next_fork_update.take() {
if !update_fork_delay.is_elapsed() {
if let Ok(Async::Ready(_)) = update_fork_delay.poll() {
service.libp2p.swarm.update_fork_version(service.beacon_chain.enr_fork_id());
service.next_fork_update = next_fork_delay(&service.beacon_chain);
} }
} }
} }
});
Ok(Async::NotReady)
})
);
Ok(network_exit) Ok(network_exit)
} }
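For reference, a minimal self-contained sketch of the select-driven event loop this hunk introduces, written against the tokio 0.2 APIs this PR targets (the channel names and messages here are hypothetical, not the service's types):

use tokio::sync::{mpsc, oneshot};

#[tokio::main]
async fn main() {
    let (msg_tx, mut msg_rx) = mpsc::unbounded_channel::<&'static str>();
    let (exit_tx, mut exit_rx) = oneshot::channel::<()>();

    let handle = tokio::spawn(async move {
        loop {
            tokio::select! {
                // shutdown branch: resolves once the sender fires (or is dropped)
                _ = (&mut exit_rx) => {
                    println!("service shutdown");
                    return;
                }
                // message branch: disabled automatically when the channel closes
                Some(message) = msg_rx.recv() => {
                    println!("handling {}", message);
                }
            }
        }
    });

    let _ = msg_tx.send("hello");
    let _ = exit_tx.send(());
    let _ = handle.await;
}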
@@ -376,11 +370,11 @@ fn spawn_service<T: BeaconChainTypes>(
/// If there is no scheduled fork, `None` is returned.
fn next_fork_delay<T: BeaconChainTypes>(
    beacon_chain: &BeaconChain<T>,
) -> Option<tokio::time::Delay> {
    beacon_chain.duration_to_next_fork().map(|until_fork| {
        // Add a short time-out to start within the new fork period.
        let delay = Duration::from_millis(200);
        tokio::time::delay_until(tokio::time::Instant::now() + until_fork + delay)
    })
}
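For context, tokio 0.1's `tokio::timer::Delay::new(instant)` maps onto `tokio::time::delay_until(instant)` in tokio 0.2. A minimal sketch of the same deadline pattern (the durations here are placeholders standing in for `duration_to_next_fork`):

use tokio::time::{delay_until, Duration, Instant};

#[tokio::main]
async fn main() {
    // stand-in for the beacon chain's time until the next fork
    let until_fork = Duration::from_millis(50);
    // short buffer so we wake up inside the new fork period
    let delay = Duration::from_millis(200);

    delay_until(Instant::now() + until_fork + delay).await;
    println!("fork period reached");
}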
@@ -403,3 +397,33 @@ pub enum NetworkMessage<T: EthSpec> {
    /// Disconnects and bans a peer id.
    Disconnect { peer_id: PeerId },
}

/// Inspects the `messages` that were being sent to the network and updates Prometheus metrics.
fn expose_publish_metrics<T: EthSpec>(messages: &[PubsubMessage<T>]) {
    for message in messages {
        match message {
            PubsubMessage::BeaconBlock(_) => metrics::inc_counter(&metrics::GOSSIP_BLOCKS_TX),
            PubsubMessage::Attestation(_) => {
                metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_TX)
            }
            PubsubMessage::AggregateAndProofAttestation(_) => {
                metrics::inc_counter(&metrics::GOSSIP_AGGREGATED_ATTESTATIONS_TX)
            }
            _ => {}
        }
    }
}

/// Inspects a `message` received from the network and updates Prometheus metrics.
fn expose_receive_metrics<T: EthSpec>(message: &PubsubMessage<T>) {
    match message {
        PubsubMessage::BeaconBlock(_) => metrics::inc_counter(&metrics::GOSSIP_BLOCKS_RX),
        PubsubMessage::Attestation(_) => {
            metrics::inc_counter(&metrics::GOSSIP_UNAGGREGATED_ATTESTATIONS_RX)
        }
        PubsubMessage::AggregateAndProofAttestation(_) => {
            metrics::inc_counter(&metrics::GOSSIP_AGGREGATED_ATTESTATIONS_RX)
        }
        _ => {}
    }
}
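These helpers only bump counters that are registered elsewhere in the `metrics` module. As a standalone illustration of the underlying pattern, a sketch using the plain `prometheus` crate directly (the counter name is hypothetical, not the one registered by the crate's `metrics` module):

use prometheus::{IntCounter, Registry};

fn main() {
    // hypothetical counter mirroring e.g. GOSSIP_BLOCKS_TX
    let blocks_tx =
        IntCounter::new("gossipsub_blocks_tx", "Count of gossip blocks sent").unwrap();
    let registry = Registry::new();
    registry.register(Box::new(blocks_tx.clone())).unwrap();

    blocks_tx.inc();
    assert_eq!(blocks_tx.get(), 1);
}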


@@ -5,7 +5,6 @@ mod tests {
    use crate::{NetworkConfig, NetworkService};
    use beacon_chain::test_utils::BeaconChainHarness;
    use eth2_libp2p::Enr;
    use slog::Logger;
    use sloggers::{null::NullLoggerBuilder, Build};
    use std::str::FromStr;
@@ -33,21 +32,20 @@ mod tests {
        let enrs = vec![enr1, enr2];

        let runtime = Runtime::new().unwrap();
        let handle = runtime.handle().clone();

        let mut config = NetworkConfig::default();
        config.libp2p_port = 21212;
        config.discovery_port = 21212;
        config.boot_nodes = enrs.clone();
        runtime.spawn(async move {
            // Create a new network service which implicitly gets dropped at the
            // end of the block.
            let _ =
                NetworkService::start(beacon_chain.clone(), &config, &handle, log.clone()).unwrap();
        });
        runtime.shutdown_timeout(tokio::time::Duration::from_millis(300));

        // Load the persisted dht from the store
        let persisted_enrs = load_dht(store);
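The test now drives the service with a tokio 0.2 `Runtime` instead of `block_on_all`. A minimal sketch of the spawn-then-`shutdown_timeout` pattern used here, under tokio 0.2 semantics:

use std::time::Duration;
use tokio::runtime::Runtime;

fn main() {
    let runtime = Runtime::new().unwrap();
    runtime.spawn(async {
        // background work that may outlive the test body
        tokio::time::delay_for(Duration::from_millis(100)).await;
    });
    // give spawned tasks a bounded window to finish, then drop the runtime
    runtime.shutdown_timeout(Duration::from_millis(300));
}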


@@ -34,26 +34,38 @@ pub fn spawn_block_processor<T: BeaconChainTypes>(
    chain: Weak<BeaconChain<T>>,
    process_id: ProcessId,
    downloaded_blocks: Vec<SignedBeaconBlock<T::EthSpec>>,
    sync_send: mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
    log: slog::Logger,
) {
    std::thread::spawn(move || {
        match process_id {
            // this is a request from the range sync
            ProcessId::RangeBatchId(chain_id, batch_id) => {
                let len = downloaded_blocks.len();
                let start_slot = if len > 0 {
                    downloaded_blocks[0].message.slot.as_u64()
                } else {
                    0
                };
                let end_slot = if len > 0 {
                    downloaded_blocks[len - 1].message.slot.as_u64()
                } else {
                    0
                };

                debug!(log, "Processing batch"; "id" => *batch_id, "blocks" => downloaded_blocks.len(), "start_slot" => start_slot, "end_slot" => end_slot);
                let result = match process_blocks(chain, downloaded_blocks.iter(), &log) {
                    (_, Ok(_)) => {
                        debug!(log, "Batch processed"; "id" => *batch_id, "start_slot" => start_slot, "end_slot" => end_slot);
                        BatchProcessResult::Success
                    }
                    (imported_blocks, Err(e)) if imported_blocks > 0 => {
                        warn!(log, "Batch processing failed but imported some blocks";
                            "id" => *batch_id, "error" => e, "imported_blocks" => imported_blocks);
                        BatchProcessResult::Partial
                    }
                    (_, Err(e)) => {
                        warn!(log, "Batch processing failed"; "id" => *batch_id, "error" => e);
                        BatchProcessResult::Failed
                    }
                };
@@ -64,7 +76,7 @@ pub fn spawn_block_processor<T: BeaconChainTypes>(
                    downloaded_blocks,
                    result,
                };
                sync_send.send(msg).unwrap_or_else(|_| {
                    debug!(
                        log,
                        "Block processor could not inform range sync result. Likely shutting down."
@@ -84,7 +96,7 @@ pub fn spawn_block_processor<T: BeaconChainTypes>(
            (_, Err(e)) => {
                warn!(log, "Parent lookup failed"; "last_peer_id" => format!("{}", peer_id), "error" => e);
                sync_send
                    .send(SyncMessage::ParentLookupFailed(peer_id))
                    .unwrap_or_else(|_| {
                        // on failure, inform to downvote the peer
                        debug!(
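The start/end slot derivation above guards the empty-batch case by hand. An equivalent standalone sketch using `slice::first`/`slice::last` (with a stand-in `Block` type, not the real `SignedBeaconBlock`):

// stand-in block type with just a slot, mirroring SignedBeaconBlock.message.slot
struct Block {
    slot: u64,
}

fn batch_bounds(blocks: &[Block]) -> (u64, u64) {
    // first()/last() make the empty-batch default explicit
    let start_slot = blocks.first().map(|b| b.slot).unwrap_or(0);
    let end_slot = blocks.last().map(|b| b.slot).unwrap_or(0);
    (start_slot, end_slot)
}

fn main() {
    let blocks = vec![Block { slot: 32 }, Block { slot: 33 }, Block { slot: 34 }];
    assert_eq!(batch_bounds(&blocks), (32, 34));
    assert_eq!(batch_bounds(&[]), (0, 0));
}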


@@ -43,7 +43,6 @@ use eth2_libp2p::rpc::{methods::*, RequestId};
use eth2_libp2p::types::NetworkGlobals;
use eth2_libp2p::PeerId;
use fnv::FnvHashMap;
use slog::{crit, debug, error, info, trace, warn, Logger};
use smallvec::SmallVec;
use std::boxed::Box;
@@ -182,7 +181,7 @@ impl SingleBlockRequest {
/// chain. This allows the chain to be
/// dropped during the syncing process which will gracefully end the `SyncManager`.
pub fn spawn<T: BeaconChainTypes>(
    runtime_handle: &tokio::runtime::Handle,
    beacon_chain: Arc<BeaconChain<T>>,
    network_globals: Arc<NetworkGlobals<T::EthSpec>>,
    network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
@@ -197,14 +196,14 @@ pub fn spawn<T: BeaconChainTypes>(
    let (sync_send, sync_recv) = mpsc::unbounded_channel::<SyncMessage<T::EthSpec>>();

    // create an instance of the SyncManager
    let mut sync_manager = SyncManager {
        range_sync: RangeSync::new(
            beacon_chain.clone(),
            network_globals.clone(),
            sync_send.clone(),
            log.clone(),
        ),
        network: SyncNetworkContext::new(network_send, network_globals.clone(), log.clone()),
        chain: beacon_chain,
        network_globals,
        input_channel: sync_recv,
@@ -216,14 +215,10 @@ pub fn spawn<T: BeaconChainTypes>(
    // spawn the sync manager thread
    debug!(log, "Sync Manager started");
    runtime_handle.spawn(async move {
        futures::future::select(Box::pin(sync_manager.main()), exit_rx).await;
        info!(log.clone(), "Sync Manager shutdown");
    });

    (sync_send, sync_exit)
}
@@ -470,6 +465,8 @@ impl<T: BeaconChainTypes> SyncManager<T> {
            }
        }

        debug!(self.log, "Unknown block received. Starting a parent lookup"; "block_slot" => block.message.slot, "block_hash" => format!("{}", block.canonical_root()));

        let parent_request = ParentRequests {
            downloaded_blocks: vec![block],
            failed_attempts: 0,
@@ -730,17 +727,13 @@ impl<T: BeaconChainTypes> SyncManager<T> {
            self.parent_queue.push(parent_request);
        }
    }

    /// The main driving future for the sync manager.
    async fn main(&mut self) {
        // process any inbound messages
        loop {
            if let Some(sync_message) = self.input_channel.recv().await {
                match sync_message {
                    SyncMessage::AddPeer(peer_id, info) => {
                        self.add_peer(peer_id, info);
                    }
@@ -792,17 +785,8 @@ impl<T: BeaconChainTypes> Future for SyncManager<T> {
                    SyncMessage::ParentLookupFailed(peer_id) => {
                        self.network.downvote_peer(peer_id);
                    }
                }
            }
        }
    }
}
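`futures::future::select` resolves as soon as either future completes, which is how the sync manager's `main` loop is raced against the exit channel above. A minimal sketch of the pattern under tokio 0.2 / futures 0.3 (the loop body is a placeholder for `sync_manager.main()`):

use futures::future;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let (exit_tx, exit_rx) = tokio::sync::oneshot::channel::<()>();

    // stand-in for sync_manager.main(): boxed so it is Unpin for `select`
    let main_loop = Box::pin(async {
        loop {
            tokio::time::delay_for(Duration::from_millis(10)).await;
        }
    });

    // firing (or dropping) the sender resolves the right-hand future,
    // ending the race and dropping the still-pending main loop
    let _ = exit_tx.send(());
    future::select(main_loop, exit_rx).await;
    println!("sync manager shutdown");
}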


@@ -6,7 +6,7 @@ use crate::service::NetworkMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::{RPCEvent, RPCRequest, RequestId};
use eth2_libp2p::{Client, NetworkGlobals, PeerId};
use slog::{debug, trace, warn};
use std::sync::Arc;
use tokio::sync::mpsc;
@@ -18,20 +18,39 @@ pub struct SyncNetworkContext<T: EthSpec> {
    /// The network channel to relay messages to the Network service.
    network_send: mpsc::UnboundedSender<NetworkMessage<T>>,

    /// Access to the network global vars.
    network_globals: Arc<NetworkGlobals<T>>,

    /// A sequential ID for all RPC requests.
    request_id: RequestId,

    /// Logger for the `SyncNetworkContext`.
    log: slog::Logger,
}

impl<T: EthSpec> SyncNetworkContext<T> {
    pub fn new(
        network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
        network_globals: Arc<NetworkGlobals<T>>,
        log: slog::Logger,
    ) -> Self {
        Self {
            network_send,
            network_globals,
            request_id: 1,
            log,
        }
    }

    /// Returns the `Client` type of the peer if known.
    pub fn client_type(&self, peer_id: &PeerId) -> Client {
        self.network_globals
            .peers
            .read()
            .peer_info(peer_id)
            .map(|info| info.client.clone())
            .unwrap_or_default()
    }

    pub fn status_peer<U: BeaconChainTypes>(
        &mut self,
        chain: Arc<BeaconChain<U>>,
@@ -104,7 +123,7 @@ impl<T: EthSpec> SyncNetworkContext<T> {
        // ignore the error if the channel send fails
        let _ = self.send_rpc_request(peer_id.clone(), RPCRequest::Goodbye(reason));
        self.network_send
            .send(NetworkMessage::Disconnect { peer_id })
            .unwrap_or_else(|_| {
                warn!(
                    self.log,
@@ -130,7 +149,7 @@ impl<T: EthSpec> SyncNetworkContext<T> {
        rpc_event: RPCEvent<T>,
    ) -> Result<(), &'static str> {
        self.network_send
            .send(NetworkMessage::RPC(peer_id, rpc_event))
            .map_err(|_| {
                debug!(
                    self.log,


@@ -31,6 +31,7 @@ const BATCH_BUFFER_SIZE: u8 = 5;
/// be downvoted.
const INVALID_BATCH_LOOKUP_ATTEMPTS: u8 = 3;

#[derive(PartialEq)]
/// A return type for functions that act on a `Chain` which informs the caller whether the chain
/// has been completed and should be removed or to be kept if further processing is
/// required.
@@ -380,8 +381,8 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
                }
            }
            BatchProcessResult::Failed => {
                debug!(self.log, "Batch processing failed";
                    "chain_id" => self.id, "id" => *batch.id, "peer" => batch.current_peer.to_string(), "client" => network.client_type(&batch.current_peer).to_string());
                // The batch processing failed
                // This could be because this batch is invalid, or a previous invalidated batch
                // is invalid. We need to find out which and downvote the peer that has sent us


@@ -369,7 +369,19 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
            .find_map(|(index, chain)| Some((index, func(chain)?)))
    }

    /// Given a chain iterator, runs a given function on each chain and returns all `Some` results.
    fn request_function_all<'a, F, I, U>(chain: I, mut func: F) -> Vec<(usize, U)>
    where
        I: Iterator<Item = &'a mut SyncingChain<T>>,
        F: FnMut(&'a mut SyncingChain<T>) -> Option<U>,
    {
        chain
            .enumerate()
            .filter_map(|(index, chain)| Some((index, func(chain)?)))
            .collect()
    }

    /// Runs a function on finalized chains until we get the first `Some` result from `F`.
    pub fn finalized_request<F, U>(&mut self, func: F) -> Option<(usize, U)>
    where
        F: FnMut(&mut SyncingChain<T>) -> Option<U>,
@@ -377,7 +389,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
        ChainCollection::request_function(self.finalized_chains.iter_mut(), func)
    }

    /// Runs a function on head chains until we get the first `Some` result from `F`.
    pub fn head_request<F, U>(&mut self, func: F) -> Option<(usize, U)>
    where
        F: FnMut(&mut SyncingChain<T>) -> Option<U>,
@@ -385,7 +397,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
        ChainCollection::request_function(self.head_chains.iter_mut(), func)
    }

    /// Runs a function on finalized and head chains until we get the first `Some` result from `F`.
    pub fn head_finalized_request<F, U>(&mut self, func: F) -> Option<(usize, U)>
    where
        F: FnMut(&mut SyncingChain<T>) -> Option<U>,
@@ -398,6 +410,19 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
        )
    }

    /// Runs a function on all finalized and head chains and collects all `Some` results from `F`.
    pub fn head_finalized_request_all<F, U>(&mut self, func: F) -> Vec<(usize, U)>
    where
        F: FnMut(&mut SyncingChain<T>) -> Option<U>,
    {
        ChainCollection::request_function_all(
            self.finalized_chains
                .iter_mut()
                .chain(self.head_chains.iter_mut()),
            func,
        )
    }

    /// Removes any outdated finalized or head chains.
    ///
    /// This removes chains with no peers, or chains whose start block slot is less than our current
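A standalone sketch of the `request_function_all` pattern introduced above: enumerate the items and keep each `Some` result with its index (generic stand-in types, not the `SyncingChain` API):

fn request_all<T, U>(
    items: impl Iterator<Item = T>,
    mut func: impl FnMut(T) -> Option<U>,
) -> Vec<(usize, U)> {
    items
        .enumerate()
        // the `?` inside `Some(...)` drops items where func returns None
        .filter_map(|(index, item)| Some((index, func(item)?)))
        .collect()
}

fn main() {
    let results = request_all([1, 2, 3, 4].iter(), |n| {
        if n % 2 == 0 {
            Some(n * 10)
        } else {
            None
        }
    });
    assert_eq!(results, vec![(1, 20), (3, 40)]);
}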


@@ -355,7 +355,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
        peer_id: &PeerId,
    ) {
        // if the peer is in the awaiting head mapping, remove it
        self.awaiting_head_peers.remove(peer_id);

        // remove the peer from any peer pool
        self.remove_peer(network, peer_id);
@@ -370,26 +370,26 @@ impl<T: BeaconChainTypes> RangeSync<T> {
    /// for this peer. If so we mark the batch as failed. The batch may then hit its maximum
    /// retries. In this case, we need to remove the chain and re-status all the peers.
    fn remove_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: &PeerId) {
        for (index, result) in self.chains.head_finalized_request_all(|chain| {
            if chain.peer_pool.remove(peer_id) {
                // this chain contained the peer
                while let Some(batch) = chain.pending_batches.remove_batch_by_peer(peer_id) {
                    if let ProcessingResult::RemoveChain = chain.failed_batch(network, batch) {
                        // a single batch failed, remove the chain
                        return Some(ProcessingResult::RemoveChain);
                    }
                }
                // peer removed from chain, no batch failed
                Some(ProcessingResult::KeepChain)
            } else {
                None
            }
        }) {
            if result == ProcessingResult::RemoveChain {
                // the chain needed to be removed
                debug!(self.log, "Chain being removed due to failed batch");
                self.chains.remove_chain(network, index);
            }
        }
    }


@@ -13,31 +13,31 @@ network = { path = "../network" }
eth2-libp2p = { path = "../eth2-libp2p" }
store = { path = "../store" }
version = { path = "../version" }
serde = { version = "1.0.110", features = ["derive"] }
serde_json = "1.0.52"
serde_yaml = "0.8.11"
slog = "2.5.2"
slog-term = "2.5.0"
slog-async = "2.5.0"
eth2_ssz = "0.1.2"
eth2_ssz_derive = "0.1.0"
state_processing = { path = "../../eth2/state_processing" }
types = { path = "../../eth2/types" }
http = "0.2.1"
hyper = "0.13.5"
tokio = { version = "0.2", features = ["sync"] }
url = "2.1.1"
lazy_static = "1.4.0"
eth2_config = { path = "../../eth2/utils/eth2_config" }
lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }
slot_clock = { path = "../../eth2/utils/slot_clock" }
hex = "0.4.2"
parking_lot = "0.10.2"
futures = "0.3.5"
operation_pool = { path = "../../eth2/operation_pool" }
rayon = "1.3.0"

[dev-dependencies]
remote_beacon_node = { path = "../../eth2/utils/remote_beacon_node" }
node_test_rig = { path = "../../tests/node_test_rig" }
tree_hash = "0.1.0"


@@ -1,9 +1,8 @@
use crate::helpers::*;
use crate::response_builder::ResponseBuilder;
use crate::validator::get_state_for_epoch;
use crate::{ApiError, ApiResult, UrlQuery};
use beacon_chain::{BeaconChain, BeaconChainTypes, StateSkipConfig};
use hyper::{Body, Request};
use rest_types::{
    BlockResponse, CanonicalHeadResponse, Committee, HeadBeaconBlock, StateResponse,
@@ -216,23 +215,22 @@ pub fn get_active_validators<T: BeaconChainTypes>(
///
/// This method allows for a basically unbounded list of `pubkeys`, whereas the `get_validators`
/// request is limited by the max number of pubkeys you can fit in a URL.
pub async fn post_validators<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice::<ValidatorRequest>(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to parse JSON into ValidatorRequest: {:?}",
                e
            ))
        })
        .and_then(|bulk_request| {
            validator_responses_by_pubkey(
@@ -241,9 +239,7 @@ pub fn post_validators<T: BeaconChainTypes>(
                bulk_request.pubkeys,
            )
        })
        .and_then(|validators| response_builder?.body(&validators))
}

/// Returns either the state given by `state_root_opt`, or the canonical head state if it is
@@ -449,23 +445,23 @@ pub fn get_genesis_validators_root<T: BeaconChainTypes>(
    ResponseBuilder::new(&req)?.body(&beacon_chain.head_info()?.genesis_validators_root)
}

pub async fn proposer_slashing<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice::<ProposerSlashing>(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to parse JSON into ProposerSlashing: {:?}",
                e
            ))
        })
        .and_then(move |proposer_slashing| {
            let spec = &beacon_chain.spec;
@@ -481,33 +477,31 @@ pub fn proposer_slashing<T: BeaconChainTypes>(
                ))
            })
        } else {
            return Err(ApiError::BadRequest(
                "Cannot insert proposer slashing on node without Eth1 connection.".to_string(),
            ));
        }
    })
    .and_then(|_| response_builder?.body(&true))
}

pub async fn attester_slashing<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice::<AttesterSlashing<T::EthSpec>>(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to parse JSON into AttesterSlashing: {:?}",
                e
            ))
        })
        .and_then(move |attester_slashing| {
            let spec = &beacon_chain.spec;
@@ -528,7 +522,5 @@ pub fn attester_slashing<T: BeaconChainTypes>(
            ))
        }
    })
    .and_then(|_| response_builder?.body(&true))
}
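The rewritten handlers all follow the same hyper 0.13 shape: buffer the body with `hyper::body::to_bytes`, then parse it. A minimal sketch of that step in isolation (returning an untyped `serde_json::Value` rather than one of the crate's request types):

use hyper::{Body, Request};

async fn parse_body(req: Request<Body>) -> Result<serde_json::Value, String> {
    // buffer the full request body into a single Bytes value
    let chunks = hyper::body::to_bytes(req.into_body())
        .await
        .map_err(|e| format!("Unable to get request body: {:?}", e))?;
    serde_json::from_slice(&chunks).map_err(|e| format!("Invalid JSON: {:?}", e))
}

#[tokio::main]
async fn main() {
    let req = Request::new(Body::from(r#"{"epoch": 3}"#));
    println!("{:?}", parse_body(req).await);
}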


@@ -1,8 +1,7 @@
use crate::helpers::*;
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult, UrlQuery};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use hyper::{Body, Request};
use rest_types::{IndividualVotesRequest, IndividualVotesResponse};
use serde::{Deserialize, Serialize};
@@ -71,23 +70,23 @@ pub fn get_vote_count<T: BeaconChainTypes>(
    ResponseBuilder::new(&req)?.body(&report)
}

pub async fn post_individual_votes<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice::<IndividualVotesRequest>(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to parse JSON into ValidatorDutiesRequest: {:?}",
                e
            ))
        })
        .and_then(move |body| {
            let epoch = body.epoch;
@@ -136,7 +135,5 @@ pub fn post_individual_votes<T: BeaconChainTypes>(
        })
        .collect::<Result<Vec<_>, _>>()
    })
    .and_then(|votes| response_builder?.body_no_ssz(&votes))
}


@@ -1,4 +1,3 @@
use hyper::{Body, Response, StatusCode};
use std::error::Error as StdError;
@@ -42,12 +41,6 @@ impl Into<Response<Body>> for ApiError {
    }
}

impl From<store::Error> for ApiError {
    fn from(e: store::Error) -> ApiError {
        ApiError::ServerError(format!("Database error: {:?}", e))


@@ -229,14 +229,14 @@ pub fn implementation_pending_response(_req: Request<Body>) -> ApiResult {
}

pub fn publish_beacon_block_to_network<T: BeaconChainTypes + 'static>(
    chan: NetworkChannel<T::EthSpec>,
    block: SignedBeaconBlock<T::EthSpec>,
) -> Result<(), ApiError> {
    // send the block via SSZ encoding
    let messages = vec![PubsubMessage::BeaconBlock(Box::new(block))];

    // Publish the block to the p2p network via gossipsub.
    if let Err(e) = chan.send(NetworkMessage::Publish { messages }) {
        return Err(ApiError::ServerError(format!(
            "Unable to send new block to network: {:?}",
            e


@@ -26,23 +26,21 @@ pub use config::ApiEncodingFormat;
use error::{ApiError, ApiResult};
use eth2_config::Eth2Config;
use eth2_libp2p::NetworkGlobals;
use futures::future::TryFutureExt;
use hyper::server::conn::AddrStream;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Server};
use slog::{info, warn};
use std::net::SocketAddr;
use std::ops::Deref;
use std::path::PathBuf;
use std::sync::Arc;
use tokio::sync::{mpsc, oneshot};
use url_query::UrlQuery;

pub use crate::helpers::parse_pubkey_bytes;
pub use config::Config;

pub type NetworkChannel<T> = mpsc::UnboundedSender<NetworkMessage<T>>;

pub struct NetworkInfo<T: BeaconChainTypes> {
@@ -54,7 +52,6 @@ pub struct NetworkInfo<T: BeaconChainTypes> {
#[allow(clippy::too_many_arguments)]
pub fn start_server<T: BeaconChainTypes>(
    config: &Config,
    beacon_chain: Arc<BeaconChain<T>>,
    network_info: NetworkInfo<T>,
    db_path: PathBuf,
@@ -75,18 +72,20 @@ pub fn start_server<T: BeaconChainTypes>(
        let db_path = db_path.clone();
        let freezer_db_path = freezer_db_path.clone();

        async move {
            Ok::<_, hyper::Error>(service_fn(move |req: Request<Body>| {
                router::route(
                    req,
                    beacon_chain.clone(),
                    network_globals.clone(),
                    network_channel.clone(),
                    eth2_config.clone(),
                    log.clone(),
                    db_path.clone(),
                    freezer_db_path.clone(),
                )
            }))
        }
    });

    let bind_addr = (config.listen_address, config.port).into();
@@ -99,16 +98,19 @@ pub fn start_server<T: BeaconChainTypes>(
    let actual_listen_addr = server.local_addr();

    // Build a channel to kill the HTTP server.
    let (exit_signal, exit) = oneshot::channel::<()>();
    let inner_log = log.clone();
    let server_exit = async move {
        let _ = exit.await;
        info!(inner_log, "HTTP service shutdown");
    };

    // Configure the `hyper` server to gracefully shutdown when the shutdown channel is triggered.
    let inner_log = log.clone();
    let server_future = server
        .with_graceful_shutdown(async {
            server_exit.await;
        })
        .map_err(move |e| {
            warn!(
                inner_log,
@@ -123,7 +125,7 @@ pub fn start_server<T: BeaconChainTypes>(
        "port" => actual_listen_addr.port(),
    );

    tokio::spawn(server_future);

    Ok((exit_signal, actual_listen_addr))
}
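For reference, a minimal self-contained sketch of the oneshot-driven graceful shutdown wired up above (hyper 0.13 / tokio 0.2; the handler body and immediate shutdown are placeholders for the sake of the example):

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Response, Server};
use std::convert::Infallible;

#[tokio::main]
async fn main() {
    let (exit_signal, exit) = tokio::sync::oneshot::channel::<()>();

    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(|_req| async {
            Ok::<_, Infallible>(Response::new(Body::from("ok")))
        }))
    });

    // the server future completes once the shutdown future resolves
    let server = Server::bind(&([127, 0, 0, 1], 0).into())
        .serve(make_svc)
        .with_graceful_shutdown(async {
            let _ = exit.await;
        });

    // trigger shutdown immediately for the sake of the example
    let _ = exit_signal.send(());
    let _ = server.await;
}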


@@ -2,9 +2,7 @@ macro_rules! try_future {
    ($expr:expr) => {
        match $expr {
            core::result::Result::Ok(val) => val,
            core::result::Result::Err(err) => return Err(std::convert::From::from(err)),
        }
    };
    ($expr:expr,) => {


@@ -1,6 +1,6 @@
use super::{ApiError, ApiResult};
use crate::config::ApiEncodingFormat;
use hyper::header;
use hyper::{Body, Request, Response, StatusCode};
use serde::Serialize;
use ssz::Encode;


@@ -1,11 +1,10 @@
use crate::{
    advanced, beacon, consensus, error::ApiError, helpers, lighthouse, metrics, network, node,
    spec, validator, NetworkChannel,
};
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_config::Eth2Config;
use eth2_libp2p::NetworkGlobals;
use hyper::{Body, Error, Method, Request, Response};
use slog::debug;
use std::path::PathBuf;
@@ -13,17 +12,9 @@ use std::sync::Arc;
use std::time::Instant;
use types::Slot;

// Allowing more than 7 arguments.
#[allow(clippy::too_many_arguments)]
pub async fn route<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
    network_globals: Arc<NetworkGlobals<T::EthSpec>>,
@@ -32,7 +23,7 @@ pub fn route<T: BeaconChainTypes>(
    local_log: slog::Logger,
    db_path: PathBuf,
    freezer_db_path: PathBuf,
) -> Result<Response<Body>, Error> {
    metrics::inc_counter(&metrics::REQUEST_COUNT);
    let timer = metrics::start_timer(&metrics::REQUEST_RESPONSE_TIME);
    let received_instant = Instant::now();
@@ -40,222 +31,179 @@ pub fn route<T: BeaconChainTypes>(
    let path = req.uri().path().to_string();

    let log = local_log.clone();
    let request_result = match (req.method(), path.as_ref()) {
        // Methods for Client
        (&Method::GET, "/node/version") => node::get_version(req),
        (&Method::GET, "/node/syncing") => {
            // inform the current slot, or set to 0
            let current_slot = beacon_chain
                .head_info()
                .map(|info| info.slot)
                .unwrap_or_else(|_| Slot::from(0u64));

            node::syncing::<T::EthSpec>(req, network_globals, current_slot)
        }

        // Methods for Network
        (&Method::GET, "/network/enr") => network::get_enr::<T>(req, network_globals),
        (&Method::GET, "/network/peer_count") => network::get_peer_count::<T>(req, network_globals),
        (&Method::GET, "/network/peer_id") => network::get_peer_id::<T>(req, network_globals),
        (&Method::GET, "/network/peers") => network::get_peer_list::<T>(req, network_globals),
        (&Method::GET, "/network/listen_port") => {
            network::get_listen_port::<T>(req, network_globals)
        }
        (&Method::GET, "/network/listen_addresses") => {
            network::get_listen_addresses::<T>(req, network_globals)
        }

        // Methods for Beacon Node
        (&Method::GET, "/beacon/head") => beacon::get_head::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/heads") => beacon::get_heads::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/block") => beacon::get_block::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/block_root") => beacon::get_block_root::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/fork") => beacon::get_fork::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/genesis_time") => beacon::get_genesis_time::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/genesis_validators_root") => {
            beacon::get_genesis_validators_root::<T>(req, beacon_chain)
        }
        (&Method::GET, "/beacon/validators") => beacon::get_validators::<T>(req, beacon_chain),
        (&Method::POST, "/beacon/validators") => {
            beacon::post_validators::<T>(req, beacon_chain).await
        }
        (&Method::GET, "/beacon/validators/all") => {
            beacon::get_all_validators::<T>(req, beacon_chain)
        }
        (&Method::GET, "/beacon/validators/active") => {
            beacon::get_active_validators::<T>(req, beacon_chain)
        }
        (&Method::GET, "/beacon/state") => beacon::get_state::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/state_root") => beacon::get_state_root::<T>(req, beacon_chain),
        (&Method::GET, "/beacon/state/genesis") => {
            beacon::get_genesis_state::<T>(req, beacon_chain)
        }
        (&Method::GET, "/beacon/committees") => beacon::get_committees::<T>(req, beacon_chain),
        (&Method::POST, "/beacon/proposer_slashing") => {
            beacon::proposer_slashing::<T>(req, beacon_chain).await
        }
        (&Method::POST, "/beacon/attester_slashing") => {
            beacon::attester_slashing::<T>(req, beacon_chain).await
        }

        // Methods for Validator
        (&Method::POST, "/validator/duties") => {
            let timer = metrics::start_timer(&metrics::VALIDATOR_GET_DUTIES_REQUEST_RESPONSE_TIME);
            let response = validator::post_validator_duties::<T>(req, beacon_chain);
            drop(timer);
            response.await
        }
        (&Method::POST, "/validator/subscribe") => {
            validator::post_validator_subscriptions::<T>(req, network_channel).await
        }
        (&Method::GET, "/validator/duties/all") => {
            validator::get_all_validator_duties::<T>(req, beacon_chain)
        }
        (&Method::GET, "/validator/duties/active") => {
            validator::get_active_validator_duties::<T>(req, beacon_chain)
        }
        (&Method::GET, "/validator/block") => {
            let timer = metrics::start_timer(&metrics::VALIDATOR_GET_BLOCK_REQUEST_RESPONSE_TIME);
            let response = validator::get_new_beacon_block::<T>(req, beacon_chain, log);
            drop(timer);
            response
        }
        (&Method::POST, "/validator/block") => {
            validator::publish_beacon_block::<T>(req, beacon_chain, network_channel, log).await
        }
        (&Method::GET, "/validator/attestation") => {
            let timer =
                metrics::start_timer(&metrics::VALIDATOR_GET_ATTESTATION_REQUEST_RESPONSE_TIME);
            let response = validator::get_new_attestation::<T>(req, beacon_chain);
            drop(timer);
            response
        }
        (&Method::GET, "/validator/aggregate_attestation") => {
            validator::get_aggregate_attestation::<T>(req, beacon_chain)
        }
        (&Method::POST, "/validator/attestations") => {
            validator::publish_attestations::<T>(req, beacon_chain, network_channel, log).await
        }
        (&Method::POST, "/validator/aggregate_and_proofs") => {
            validator::publish_aggregate_and_proofs::<T>(req, beacon_chain, network_channel, log)
                .await
        }

        // Methods for consensus
        (&Method::GET, "/consensus/global_votes") => {
            consensus::get_vote_count::<T>(req, beacon_chain)
        }
        (&Method::POST, "/consensus/individual_votes") => {
            consensus::post_individual_votes::<T>(req, beacon_chain).await
        }

        // Methods for bootstrap and checking configuration
        (&Method::GET, "/spec") => spec::get_spec::<T>(req, beacon_chain),
        (&Method::GET, "/spec/slots_per_epoch") => spec::get_slots_per_epoch::<T>(req),
        (&Method::GET, "/spec/deposit_contract") => helpers::implementation_pending_response(req),
        (&Method::GET, "/spec/eth2_config") => spec::get_eth2_config::<T>(req, eth2_config),

        // Methods for advanced parameters
        (&Method::GET, "/advanced/fork_choice") => {
            advanced::get_fork_choice::<T>(req, beacon_chain)
        }
        (&Method::GET, "/advanced/operation_pool") => {
            advanced::get_operation_pool::<T>(req, beacon_chain)
        }

        (&Method::GET, "/metrics") => {
            metrics::get_prometheus::<T>(req, beacon_chain, db_path, freezer_db_path)
        }

        // Lighthouse specific
        (&Method::GET, "/lighthouse/syncing") => {
} lighthouse::syncing::<T::EthSpec>(req, network_globals)
(&Method::GET, "/lighthouse/connected_peers") => into_boxfut( }
lighthouse::connected_peers::<T::EthSpec>(req, network_globals),
), (&Method::GET, "/lighthouse/peers") => {
_ => Box::new(futures::future::err(ApiError::NotFound( lighthouse::peers::<T::EthSpec>(req, network_globals)
"Request path and/or method not found.".to_owned(), }
))),
}; (&Method::GET, "/lighthouse/connected_peers") => {
lighthouse::connected_peers::<T::EthSpec>(req, network_globals)
}
_ => Err(ApiError::NotFound(
"Request path and/or method not found.".to_owned(),
)),
};
// Map the Rust-friendly `Result` in to a http-friendly response. In effect, this ensures that // Map the Rust-friendly `Result` in to a http-friendly response. In effect, this ensures that
// any `Err` returned from our response handlers becomes a valid http response to the client // any `Err` returned from our response handlers becomes a valid http response to the client
// (e.g., a response with a 404 or 500 status). // (e.g., a response with a 404 or 500 status).
request_result.then(move |result| { let duration = Instant::now().duration_since(received_instant);
let duration = Instant::now().duration_since(received_instant); match request_result {
match result { Ok(response) => {
Ok(response) => { debug!(
debug!( local_log,
local_log, "HTTP API request successful";
"HTTP API request successful"; "path" => path,
"path" => path, "duration_ms" => duration.as_millis()
"duration_ms" => duration.as_millis() );
); metrics::inc_counter(&metrics::SUCCESS_COUNT);
metrics::inc_counter(&metrics::SUCCESS_COUNT); metrics::stop_timer(timer);
metrics::stop_timer(timer);
Ok(response) Ok(response)
}
Err(e) => {
let error_response = e.into();
debug!(
local_log,
"HTTP API request failure";
"path" => path,
"duration_ms" => duration.as_millis()
);
metrics::stop_timer(timer);
Ok(error_response)
}
} }
}) Err(e) => {
let error_response = e.into();
debug!(
local_log,
"HTTP API request failure";
"path" => path,
"duration_ms" => duration.as_millis()
);
metrics::stop_timer(timer);
Ok(error_response)
}
}
} }
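The shape of this router change, in isolation: handlers that used to return a boxed futures-0.1 future (`BoxFut`) become `async fn`s returning a plain `Result`, and the router `.await`s them inside one `match`. A minimal sketch of that pattern, assuming only `futures = "0.3"`; the handler and type names here are illustrative stand-ins, not Lighthouse's actual API:

use futures::executor::block_on;

type ApiResult = Result<String, String>;

// Old style: fn get_state() -> Box<dyn Future<Item = String, Error = String>>.
// New style: a plain async fn returning a Result.
async fn get_state() -> ApiResult {
    Ok("state".to_string())
}

async fn route(method: &str, path: &str) -> ApiResult {
    match (method, path) {
        ("GET", "/beacon/state") => get_state().await,
        _ => Err("Request path and/or method not found.".to_owned()),
    }
}

fn main() {
    // block_on stands in for the hyper/tokio runtime that drives the real router.
    assert_eq!(block_on(route("GET", "/beacon/state")), Ok("state".to_string()));
}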

View File

@@ -1,13 +1,12 @@
 use crate::helpers::{check_content_type_for_json, publish_beacon_block_to_network};
 use crate::response_builder::ResponseBuilder;
-use crate::{ApiError, ApiResult, BoxFut, NetworkChannel, UrlQuery};
+use crate::{ApiError, ApiResult, NetworkChannel, UrlQuery};
 use beacon_chain::{
     attestation_verification::Error as AttnError, BeaconChain, BeaconChainTypes, BlockError,
     StateSkipConfig,
 };
 use bls::PublicKeyBytes;
 use eth2_libp2p::PubsubMessage;
-use futures::{Future, Stream};
 use hyper::{Body, Request};
 use network::NetworkMessage;
 use rayon::prelude::*;
@@ -23,23 +22,23 @@ use types::{
 /// HTTP Handler to retrieve the duties for a set of validators during a particular epoch. This
 /// method allows for collecting bulk sets of validator duties without risking exceeding the max
 /// URL length with query pairs.
-pub fn post_validator_duties<T: BeaconChainTypes>(
+pub async fn post_validator_duties<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
-) -> BoxFut {
+) -> ApiResult {
     let response_builder = ResponseBuilder::new(&req);
-    let future = req
-        .into_body()
-        .concat2()
-        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
-        .and_then(|chunks| {
-            serde_json::from_slice::<ValidatorDutiesRequest>(&chunks).map_err(|e| {
-                ApiError::BadRequest(format!(
-                    "Unable to parse JSON into ValidatorDutiesRequest: {:?}",
-                    e
-                ))
-            })
-        })
+    let body = req.into_body();
+    let chunks = hyper::body::to_bytes(body)
+        .await
+        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
+    serde_json::from_slice::<ValidatorDutiesRequest>(&chunks)
+        .map_err(|e| {
+            ApiError::BadRequest(format!(
+                "Unable to parse JSON into ValidatorDutiesRequest: {:?}",
+                e
+            ))
+        })
         .and_then(|bulk_request| {
             return_validator_duties(
@@ -48,45 +47,42 @@ pub fn post_validator_duties<T: BeaconChainTypes>(
                 bulk_request.pubkeys.into_iter().map(Into::into).collect(),
             )
         })
-        .and_then(|duties| response_builder?.body_no_ssz(&duties));
-    Box::new(future)
+        .and_then(|duties| response_builder?.body_no_ssz(&duties))
 }

 /// HTTP Handler to retrieve subscriptions for a set of validators. This allows the node to
 /// organise peer discovery and topic subscription for known validators.
-pub fn post_validator_subscriptions<T: BeaconChainTypes>(
+pub async fn post_validator_subscriptions<T: BeaconChainTypes>(
     req: Request<Body>,
-    mut network_chan: NetworkChannel<T::EthSpec>,
-) -> BoxFut {
+    network_chan: NetworkChannel<T::EthSpec>,
+) -> ApiResult {
     try_future!(check_content_type_for_json(&req));
     let response_builder = ResponseBuilder::new(&req);
     let body = req.into_body();
-    Box::new(
-        body.concat2()
-            .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
-            .and_then(|chunks| {
-                serde_json::from_slice(&chunks).map_err(|e| {
-                    ApiError::BadRequest(format!(
-                        "Unable to parse JSON into ValidatorSubscriptions: {:?}",
-                        e
-                    ))
-                })
-            })
-            .and_then(move |subscriptions: Vec<ValidatorSubscription>| {
-                network_chan
-                    .try_send(NetworkMessage::Subscribe { subscriptions })
-                    .map_err(|e| {
-                        ApiError::ServerError(format!(
-                            "Unable to subscriptions to the network: {:?}",
-                            e
-                        ))
-                    })?;
-                Ok(())
-            })
-            .and_then(|_| response_builder?.body_no_ssz(&())),
-    )
+    let chunks = hyper::body::to_bytes(body)
+        .await
+        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
+    serde_json::from_slice(&chunks)
+        .map_err(|e| {
+            ApiError::BadRequest(format!(
+                "Unable to parse JSON into ValidatorSubscriptions: {:?}",
+                e
+            ))
+        })
+        .and_then(move |subscriptions: Vec<ValidatorSubscription>| {
+            network_chan
+                .send(NetworkMessage::Subscribe { subscriptions })
+                .map_err(|e| {
+                    ApiError::ServerError(format!(
+                        "Unable to subscriptions to the network: {:?}",
+                        e
+                    ))
+                })?;
+            Ok(())
+        })
+        .and_then(|_| response_builder?.body_no_ssz(&()))
 }

 /// HTTP Handler to retrieve all validator duties for the given epoch.
@@ -291,24 +287,23 @@ pub fn get_new_beacon_block<T: BeaconChainTypes>(
 }

 /// HTTP Handler to publish a SignedBeaconBlock, which has been signed by a validator.
-pub fn publish_beacon_block<T: BeaconChainTypes>(
+pub async fn publish_beacon_block<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
     network_chan: NetworkChannel<T::EthSpec>,
     log: Logger,
-) -> BoxFut {
+) -> ApiResult {
     try_future!(check_content_type_for_json(&req));
     let response_builder = ResponseBuilder::new(&req);
     let body = req.into_body();
-    Box::new(
-        body.concat2()
-            .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
-            .and_then(|chunks| {
-                serde_json::from_slice(&chunks).map_err(|e| {
-                    ApiError::BadRequest(format!("Unable to parse JSON into SignedBeaconBlock: {:?}", e))
-                })
-            })
+    let chunks = hyper::body::to_bytes(body)
+        .await
+        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
+    serde_json::from_slice(&chunks).map_err(|e| {
+        ApiError::BadRequest(format!("Unable to parse JSON into SignedBeaconBlock: {:?}", e))
+    })
     .and_then(move |block: SignedBeaconBlock<T::EthSpec>| {
         let slot = block.slot();
         match beacon_chain.process_block(block.clone()) {
@@ -382,7 +377,6 @@ pub fn publish_beacon_block<T: BeaconChainTypes>(
         }
     })
     .and_then(|_| response_builder?.body_no_ssz(&()))
-    )
 }

 /// HTTP Handler to produce a new Attestation from the current state, ready to be signed by a validator.
@@ -424,59 +418,56 @@ pub fn get_aggregate_attestation<T: BeaconChainTypes>(
 }

 /// HTTP Handler to publish a list of Attestations, which have been signed by a number of validators.
-pub fn publish_attestations<T: BeaconChainTypes>(
+pub async fn publish_attestations<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
     network_chan: NetworkChannel<T::EthSpec>,
     log: Logger,
-) -> BoxFut {
+) -> ApiResult {
     try_future!(check_content_type_for_json(&req));
     let response_builder = ResponseBuilder::new(&req);
-    Box::new(
-        req.into_body()
-            .concat2()
-            .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
-            .map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
-            .and_then(|chunks| {
-                serde_json::from_slice(&chunks.as_slice()).map_err(|e| {
-                    ApiError::BadRequest(format!(
-                        "Unable to deserialize JSON into a list of attestations: {:?}",
-                        e
-                    ))
-                })
-            })
-            // Process all of the aggregates _without_ exiting early if one fails.
-            .map(move |attestations: Vec<Attestation<T::EthSpec>>| {
-                attestations
-                    .into_par_iter()
-                    .enumerate()
-                    .map(|(i, attestation)| {
-                        process_unaggregated_attestation(
-                            &beacon_chain,
-                            network_chan.clone(),
-                            attestation,
-                            i,
-                            &log,
-                        )
-                    })
-                    .collect::<Vec<Result<_, _>>>()
-            })
-            // Iterate through all the results and return on the first `Err`.
-            //
-            // Note: this will only provide info about the _first_ failure, not all failures.
-            .and_then(|processing_results| {
-                processing_results.into_iter().try_for_each(|result| result)
-            })
-            .and_then(|_| response_builder?.body_no_ssz(&())),
-    )
+    let body = req.into_body();
+    let chunk = hyper::body::to_bytes(body)
+        .await
+        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
+    let chunks = chunk.iter().cloned().collect::<Vec<u8>>();
+    serde_json::from_slice(&chunks.as_slice())
+        .map_err(|e| {
+            ApiError::BadRequest(format!(
+                "Unable to deserialize JSON into a list of attestations: {:?}",
+                e
+            ))
+        })
+        // Process all of the aggregates _without_ exiting early if one fails.
+        .map(move |attestations: Vec<Attestation<T::EthSpec>>| {
+            attestations
+                .into_par_iter()
+                .enumerate()
+                .map(|(i, attestation)| {
+                    process_unaggregated_attestation(
+                        &beacon_chain,
+                        network_chan.clone(),
+                        attestation,
+                        i,
+                        &log,
+                    )
+                })
+                .collect::<Vec<Result<_, _>>>()
+        })
+        // Iterate through all the results and return on the first `Err`.
+        //
+        // Note: this will only provide info about the _first_ failure, not all failures.
+        .and_then(|processing_results| processing_results.into_iter().try_for_each(|result| result))
+        .and_then(|_| response_builder?.body_no_ssz(&()))
 }

 /// Processes an unaggregrated attestation that was included in a list of attestations with the
 /// index `i`.
 fn process_unaggregated_attestation<T: BeaconChainTypes>(
     beacon_chain: &BeaconChain<T>,
-    mut network_chan: NetworkChannel<T::EthSpec>,
+    network_chan: NetworkChannel<T::EthSpec>,
     attestation: Attestation<T::EthSpec>,
     i: usize,
     log: &Logger,
@@ -496,7 +487,7 @@ fn process_unaggregated_attestation<T: BeaconChainTypes>(
     })?;

     // Publish the attestation to the network
-    if let Err(e) = network_chan.try_send(NetworkMessage::Publish {
+    if let Err(e) = network_chan.send(NetworkMessage::Publish {
         messages: vec![PubsubMessage::Attestation(Box::new((
             attestation
                 .subnet_id(&beacon_chain.spec)
@@ -542,61 +533,56 @@ fn process_unaggregated_attestation<T: BeaconChainTypes>(
 }

 /// HTTP Handler to publish an Attestation, which has been signed by a validator.
-pub fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
+pub async fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
     network_chan: NetworkChannel<T::EthSpec>,
     log: Logger,
-) -> BoxFut {
+) -> ApiResult {
     try_future!(check_content_type_for_json(&req));
     let response_builder = ResponseBuilder::new(&req);
-    Box::new(
-        req.into_body()
-            .concat2()
-            .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
-            .map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
-            .and_then(|chunks| {
-                serde_json::from_slice(&chunks.as_slice()).map_err(|e| {
-                    ApiError::BadRequest(format!(
-                        "Unable to deserialize JSON into a list of SignedAggregateAndProof: {:?}",
-                        e
-                    ))
-                })
-            })
-            // Process all of the aggregates _without_ exiting early if one fails.
-            .map(
-                move |signed_aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>| {
-                    signed_aggregates
-                        .into_par_iter()
-                        .enumerate()
-                        .map(|(i, signed_aggregate)| {
-                            process_aggregated_attestation(
-                                &beacon_chain,
-                                network_chan.clone(),
-                                signed_aggregate,
-                                i,
-                                &log,
-                            )
-                        })
-                        .collect::<Vec<Result<_, _>>>()
-                },
-            )
-            // Iterate through all the results and return on the first `Err`.
-            //
-            // Note: this will only provide info about the _first_ failure, not all failures.
-            .and_then(|processing_results| {
-                processing_results.into_iter().try_for_each(|result| result)
-            })
-            .and_then(|_| response_builder?.body_no_ssz(&())),
-    )
+    let body = req.into_body();
+    let chunk = hyper::body::to_bytes(body)
+        .await
+        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
+    let chunks = chunk.iter().cloned().collect::<Vec<u8>>();
+    serde_json::from_slice(&chunks.as_slice())
+        .map_err(|e| {
+            ApiError::BadRequest(format!(
+                "Unable to deserialize JSON into a list of SignedAggregateAndProof: {:?}",
+                e
+            ))
+        })
+        // Process all of the aggregates _without_ exiting early if one fails.
+        .map(
+            move |signed_aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>| {
+                signed_aggregates
+                    .into_par_iter()
+                    .enumerate()
+                    .map(|(i, signed_aggregate)| {
+                        process_aggregated_attestation(
+                            &beacon_chain,
+                            network_chan.clone(),
+                            signed_aggregate,
+                            i,
+                            &log,
+                        )
+                    })
+                    .collect::<Vec<Result<_, _>>>()
+            },
+        )
        // Iterate through all the results and return on the first `Err`.
        //
        // Note: this will only provide info about the _first_ failure, not all failures.
+        .and_then(|processing_results| processing_results.into_iter().try_for_each(|result| result))
+        .and_then(|_| response_builder?.body_no_ssz(&()))
 }

 /// Processes an aggregrated attestation that was included in a list of attestations with the index
 /// `i`.
 fn process_aggregated_attestation<T: BeaconChainTypes>(
     beacon_chain: &BeaconChain<T>,
-    mut network_chan: NetworkChannel<T::EthSpec>,
+    network_chan: NetworkChannel<T::EthSpec>,
     signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
     i: usize,
     log: &Logger,
@@ -643,7 +629,7 @@ fn process_aggregated_attestation<T: BeaconChainTypes>(
     };

     // Publish the attestation to the network
-    if let Err(e) = network_chan.try_send(NetworkMessage::Publish {
+    if let Err(e) = network_chan.send(NetworkMessage::Publish {
         messages: vec![PubsubMessage::AggregateAndProofAttestation(Box::new(
             signed_aggregate,
         ))],
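The body-handling change that recurs throughout this file, reduced to its essentials: the futures-0.1 `body.concat2()` combinator chain becomes a single `hyper::body::to_bytes(body).await` followed by ordinary `?`-based error handling. A standalone sketch, assuming hyper 0.13, tokio 0.2 (with the "full" feature), serde with derive, and serde_json; `DutiesRequest` is a hypothetical stand-in for `ValidatorDutiesRequest`:

use hyper::{Body, Request};
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct DutiesRequest {
    pubkeys: Vec<String>,
}

async fn parse_body(req: Request<Body>) -> Result<DutiesRequest, String> {
    // Buffers the whole request body; replaces the old
    // `body.concat2().map_err(..).and_then(..)` chain.
    let chunks = hyper::body::to_bytes(req.into_body())
        .await
        .map_err(|e| format!("Unable to get request body: {:?}", e))?;
    serde_json::from_slice(&chunks).map_err(|e| format!("Unable to parse JSON: {:?}", e))
}

#[tokio::main]
async fn main() {
    let req = Request::new(Body::from(r#"{"pubkeys": ["0xab"]}"#));
    let parsed = parse_body(req).await.unwrap();
    assert_eq!(parsed.pubkeys, vec!["0xab".to_string()]);
}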

View File

@@ -18,7 +18,6 @@ use beacon_chain::{
 use clap::ArgMatches;
 use config::get_config;
 use environment::RuntimeContext;
-use futures::{Future, IntoFuture};
 use slog::{info, warn};
 use std::ops::{Deref, DerefMut};
 use types::EthSpec;
@@ -51,27 +50,26 @@ impl<E: EthSpec> ProductionBeaconNode<E> {
     /// Identical to `start_from_client_config`, however the `client_config` is generated from the
     /// given `matches` and potentially configuration files on the local filesystem or other
     /// configurations hosted remotely.
-    pub fn new_from_cli<'a, 'b>(
+    pub async fn new_from_cli<'a, 'b>(
         context: RuntimeContext<E>,
         matches: &ArgMatches<'b>,
-    ) -> impl Future<Item = Self, Error = String> + 'a {
-        get_config::<E>(
+    ) -> Result<Self, String> {
+        let client_config = get_config::<E>(
             &matches,
             &context.eth2_config.spec_constants,
             &context.eth2_config().spec,
             context.log.clone(),
-        )
-        .into_future()
-        .and_then(move |client_config| Self::new(context, client_config))
+        )?;
+        Self::new(context, client_config).await
     }

     /// Starts a new beacon node `Client` in the given `environment`.
     ///
     /// Client behaviour is defined by the given `client_config`.
-    pub fn new(
+    pub async fn new(
         context: RuntimeContext<E>,
         mut client_config: ClientConfig,
-    ) -> impl Future<Item = Self, Error = String> {
+    ) -> Result<Self, String> {
         let http_eth2_config = context.eth2_config().clone();
         let spec = context.eth2_config().spec.clone();
         let client_config_1 = client_config.clone();
@@ -79,60 +77,56 @@ impl<E: EthSpec> ProductionBeaconNode<E> {
         let store_config = client_config.store.clone();
         let log = context.log.clone();

-        let db_path_res = client_config.create_db_path();
+        let db_path = client_config.create_db_path()?;
         let freezer_db_path_res = client_config.create_freezer_db_path();

-        db_path_res
-            .into_future()
-            .and_then(move |db_path| {
-                Ok(ClientBuilder::new(context.eth_spec_instance.clone())
-                    .runtime_context(context)
-                    .chain_spec(spec)
-                    .disk_store(&db_path, &freezer_db_path_res?, store_config)?
-                    .background_migrator()?)
-            })
-            .and_then(move |builder| builder.beacon_chain_builder(client_genesis, client_config_1))
-            .and_then(move |builder| {
-                let builder = if client_config.sync_eth1_chain && !client_config.dummy_eth1_backend
-                {
-                    info!(
-                        log,
-                        "Block production enabled";
-                        "endpoint" => &client_config.eth1.endpoint,
-                        "method" => "json rpc via http"
-                    );
-                    builder.caching_eth1_backend(client_config.eth1.clone())?
-                } else if client_config.dummy_eth1_backend {
-                    warn!(
-                        log,
-                        "Block production impaired";
-                        "reason" => "dummy eth1 backend is enabled"
-                    );
-                    builder.dummy_eth1_backend()?
-                } else {
-                    info!(
-                        log,
-                        "Block production disabled";
-                        "reason" => "no eth1 backend configured"
-                    );
-                    builder.no_eth1_backend()?
-                };
-                let builder = builder
-                    .system_time_slot_clock()?
-                    .websocket_event_handler(client_config.websocket_server.clone())?
-                    .build_beacon_chain()?
-                    .network(&mut client_config.network)?
-                    .notifier()?;
-                let builder = if client_config.rest_api.enabled {
-                    builder.http_server(&client_config, &http_eth2_config)?
-                } else {
-                    builder
-                };
-                Ok(Self(builder.build()))
-            })
+        let builder = ClientBuilder::new(context.eth_spec_instance.clone())
+            .runtime_context(context)
+            .chain_spec(spec)
+            .disk_store(&db_path, &freezer_db_path_res?, store_config)?
+            .background_migrator()?;
+        let builder = builder
+            .beacon_chain_builder(client_genesis, client_config_1)
+            .await?;
+        let builder = if client_config.sync_eth1_chain && !client_config.dummy_eth1_backend {
+            info!(
+                log,
+                "Block production enabled";
+                "endpoint" => &client_config.eth1.endpoint,
+                "method" => "json rpc via http"
+            );
+            builder.caching_eth1_backend(client_config.eth1.clone())?
+        } else if client_config.dummy_eth1_backend {
+            warn!(
+                log,
+                "Block production impaired";
+                "reason" => "dummy eth1 backend is enabled"
+            );
+            builder.dummy_eth1_backend()?
+        } else {
+            info!(
+                log,
+                "Block production disabled";
+                "reason" => "no eth1 backend configured"
+            );
+            builder.no_eth1_backend()?
+        };
+        let builder = builder
+            .system_time_slot_clock()?
+            .websocket_event_handler(client_config.websocket_server.clone())?
+            .build_beacon_chain()?
+            .network(&mut client_config.network)?
+            .notifier()?;
+        let builder = if client_config.rest_api.enabled {
+            builder.http_server(&client_config, &http_eth2_config)?
+        } else {
+            builder
+        };
+        Ok(Self(builder.build()))
     }

     pub fn into_inner(self) -> ProductionClient<E> {
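The `new_from_cli` change is a textbook futures-0.1 to async/await conversion: `into_future().and_then(...)` becomes `?` for the synchronous step and `.await` for the asynchronous one. A minimal sketch under that pattern, assuming `futures = "0.3"`; `Node` and `load_config` are hypothetical stand-ins, not the real `ProductionBeaconNode`/`get_config`:

use futures::executor::block_on;

struct Node {
    port: u16,
}

// Hypothetical stand-in for `get_config`: a plain synchronous Result.
fn load_config() -> Result<u16, String> {
    Ok(9000)
}

impl Node {
    // Before: load_config().into_future().and_then(|port| Self::new(port)).
    // After: `?` for the sync step, `.await` for the async one.
    async fn new_from_cli() -> Result<Self, String> {
        let port = load_config()?;
        Self::new(port).await
    }

    async fn new(port: u16) -> Result<Self, String> {
        Ok(Node { port })
    }
}

fn main() {
    assert_eq!(block_on(Node::new_from_cli()).unwrap().port, 9000);
}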

View File

@@ -10,23 +10,23 @@ harness = false
 [dev-dependencies]
 tempfile = "3.1.0"
-sloggers = "0.3.2"
-criterion = "0.3.0"
-rayon = "1.2.0"
+sloggers = "1.0.0"
+criterion = "0.3.2"
+rayon = "1.3.0"

 [dependencies]
 db-key = "0.0.5"
 leveldb = "0.8.4"
-parking_lot = "0.9.0"
-itertools = "0.8"
+parking_lot = "0.10.2"
+itertools = "0.9.0"
 eth2_ssz = "0.1.2"
 eth2_ssz_derive = "0.1.0"
 tree_hash = "0.1.0"
 types = { path = "../../eth2/types" }
 state_processing = { path = "../../eth2/state_processing" }
-slog = "2.2.3"
-serde = "1.0"
-serde_derive = "1.0.102"
+slog = "2.5.2"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 lazy_static = "1.4.0"
 lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }
 lru = "0.4.3"

View File

@@ -8,7 +8,7 @@ edition = "2018"
 beacon_chain = { path = "../beacon_chain" }
 types = { path = "../../eth2/types" }
 slot_clock = { path = "../../eth2/utils/slot_clock" }
-tokio = "0.1.22"
+tokio = { version = "0.2.20", features = ["full"] }
 slog = "2.5.2"
-parking_lot = "0.10.0"
-futures = "0.1.29"
+parking_lot = "0.10.2"
+futures = "0.3.5"

View File

@@ -3,20 +3,20 @@
 //! This service allows task execution on the beacon node for various functionality.

 use beacon_chain::{BeaconChain, BeaconChainTypes};
-use futures::{future, prelude::*};
-use slog::error;
+use futures::future;
+use futures::stream::StreamExt;
 use slot_clock::SlotClock;
 use std::sync::Arc;
-use std::time::{Duration, Instant};
-use tokio::runtime::TaskExecutor;
-use tokio::timer::Interval;
+use std::time::Duration;
+use tokio::time::{interval_at, Instant};

 /// Spawns a timer service which periodically executes tasks for the beacon chain
+/// TODO: We might not need a `Handle` to the runtime since this function should be
+/// called from the context of a runtime and we can simply spawn using task::spawn.
+/// Check for issues without the Handle.
 pub fn spawn<T: BeaconChainTypes>(
-    executor: &TaskExecutor,
     beacon_chain: Arc<BeaconChain<T>>,
     milliseconds_per_slot: u64,
-    log: slog::Logger,
 ) -> Result<tokio::sync::oneshot::Sender<()>, &'static str> {
     let (exit_signal, exit) = tokio::sync::oneshot::channel();
@@ -26,25 +26,15 @@ pub fn spawn<T: BeaconChainTypes>(
         .duration_to_next_slot()
         .ok_or_else(|| "slot_notifier unable to determine time to next slot")?;

-    let timer_future = Interval::new(start_instant, Duration::from_millis(milliseconds_per_slot))
-        .map_err(move |e| {
-            error!(
-                log,
-                "Beacon chain timer failed";
-                "error" => format!("{:?}", e)
-            )
-        })
+    // Warning: `interval_at` panics if `milliseconds_per_slot` = 0.
+    let timer_future = interval_at(start_instant, Duration::from_millis(milliseconds_per_slot))
         .for_each(move |_| {
             beacon_chain.per_slot_task();
-            future::ok(())
+            future::ready(())
         });

-    executor.spawn(
-        exit.map_err(|_| ())
-            .select(timer_future)
-            .map(|_| ())
-            .map_err(|_| ()),
-    );
+    let future = futures::future::select(timer_future, exit);
+    tokio::spawn(future);

     Ok(exit_signal)
 }

View File

@@ -7,11 +7,11 @@ edition = "2018"
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

 [dependencies]
-futures = "0.1.29"
-serde = "1.0.102"
-serde_derive = "1.0.102"
-serde_json = "1.0.41"
+futures = "0.3.5"
+serde = "1.0.110"
+serde_derive = "1.0.110"
+serde_json = "1.0.52"
 slog = "2.5.2"
-tokio = "0.1.22"
+tokio = { version = "0.2.20", features = ["full"] }
 types = { path = "../../eth2/types" }
 ws = "0.9.1"

View File

@@ -1,9 +1,6 @@
-use futures::Future;
 use slog::{debug, error, info, warn, Logger};
 use std::marker::PhantomData;
 use std::net::SocketAddr;
-use std::thread;
-use tokio::runtime::TaskExecutor;
 use types::EthSpec;
 use ws::{Sender, WebSocket};
@@ -38,7 +35,6 @@ impl<T: EthSpec> WebSocketSender<T> {
 pub fn start_server<T: EthSpec>(
     config: &Config,
-    executor: &TaskExecutor,
     log: &Logger,
 ) -> Result<
     (
@@ -76,30 +72,29 @@ pub fn start_server<T: EthSpec>(
         let log_inner = log.clone();
         let broadcaster_inner = server.broadcaster();
-        let exit_future = exit
-            .and_then(move |_| {
-                if let Err(e) = broadcaster_inner.shutdown() {
-                    warn!(
-                        log_inner,
-                        "Websocket server errored on shutdown";
-                        "error" => format!("{:?}", e)
-                    );
-                } else {
-                    info!(log_inner, "Websocket server shutdown");
-                }
-                Ok(())
-            })
-            .map_err(|_| ());
+        let exit_future = async move {
+            let _ = exit.await;
+            if let Err(e) = broadcaster_inner.shutdown() {
+                warn!(
+                    log_inner,
+                    "Websocket server errored on shutdown";
+                    "error" => format!("{:?}", e)
+                );
+            } else {
+                info!(log_inner, "Websocket server shutdown");
+            }
+        };

-        // Place a future on the executor that will shutdown the websocket server when the
+        // Place a future on the handle that will shutdown the websocket server when the
         // application exits.
-        executor.spawn(exit_future);
+        tokio::spawn(exit_future);

         exit_channel
     };

     let log_inner = log.clone();
-    let _handle = thread::spawn(move || match server.run() {
+    let _ = std::thread::spawn(move || match server.run() {
         Ok(_) => {
             debug!(
                 log_inner,
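The exit-future change above follows a small, reusable pattern: an `async move` block awaits a oneshot receiver and then runs the shutdown code, and `tokio::spawn` replaces the old `executor.spawn`. A standalone sketch, assuming tokio 0.2 with the "full" feature; `shutdown` is a hypothetical cleanup hook:

#[tokio::main]
async fn main() {
    let (exit_tx, exit_rx) = tokio::sync::oneshot::channel::<()>();

    fn shutdown() {
        println!("server shutdown");
    }

    // The async block replaces the futures-0.1 `exit.and_then(..).map_err(..)`
    // chain: await the signal (ignoring whether the sender was dropped), then
    // run the cleanup.
    let exit_future = async move {
        let _ = exit_rx.await;
        shutdown();
    };
    let handle = tokio::spawn(exit_future);

    let _ = exit_tx.send(());
    let _ = handle.await; // wait for cleanup to run before exiting
}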

View File

@@ -6,14 +6,14 @@
 [dependencies]
 int_to_bytes = { path = "../utils/int_to_bytes" }
-parking_lot = "0.9.0"
+parking_lot = "0.10.2"
 types = { path = "../types" }
 state_processing = { path = "../state_processing" }
 eth2_ssz = "0.1.2"
 eth2_ssz_derive = "0.1.0"
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 store = { path = "../../beacon_node/store" }

 [dev-dependencies]
-rand = "0.7.2"
+rand = "0.7.3"

View File

@@ -9,11 +9,11 @@ name = "proto_array_fork_choice"
 path = "src/bin.rs"

 [dependencies]
-parking_lot = "0.9.0"
+parking_lot = "0.10.2"
 types = { path = "../types" }
-itertools = "0.8.1"
+itertools = "0.9.0"
 eth2_ssz = "0.1.2"
 eth2_ssz_derive = "0.1.0"
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 serde_yaml = "0.8.11"

View File

@@ -9,10 +9,10 @@ name = "benches"
 harness = false

 [dev-dependencies]
-criterion = "0.3.0"
+criterion = "0.3.2"
 env_logger = "0.7.1"
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 lazy_static = "1.4.0"
 serde_yaml = "0.8.11"
 beacon_chain = { path = "../../beacon_node/beacon_chain" }
@@ -21,20 +21,20 @@ store = { path = "../../beacon_node/store" }
 [dependencies]
 bls = { path = "../utils/bls" }
-integer-sqrt = "0.1.2"
-itertools = "0.8.1"
+integer-sqrt = "0.1.3"
+itertools = "0.9.0"
 eth2_ssz = "0.1.2"
 eth2_ssz_types = { path = "../utils/ssz_types" }
 merkle_proof = { path = "../utils/merkle_proof" }
 log = "0.4.8"
 safe_arith = { path = "../utils/safe_arith" }
 tree_hash = "0.1.0"
-tree_hash_derive = "0.2"
+tree_hash_derive = "0.2.0"
 types = { path = "../types" }
-rayon = "1.2.0"
-eth2_hashing = { path = "../utils/eth2_hashing" }
+rayon = "1.3.0"
+eth2_hashing = "0.1.0"
 int_to_bytes = { path = "../utils/int_to_bytes" }
-arbitrary = { version = "0.4.3", features = ["derive"], optional = true }
+arbitrary = { version = "0.4.4", features = ["derive"], optional = true }

 [features]
 fake_crypto = ["bls/fake_crypto"]

View File

@@ -15,6 +15,11 @@ pub use process_slashings::process_slashings;
 pub use registry_updates::process_registry_updates;
 pub use validator_statuses::{TotalBalances, ValidatorStatus, ValidatorStatuses};

+/// Provides a summary of validator participation during the epoch.
+pub struct EpochProcessingSummary {
+    pub total_balances: TotalBalances,
+}
+
 /// Performs per-epoch processing on some BeaconState.
 ///
 /// Mutates the given `BeaconState`, returning early if an error is encountered. If an error is
@@ -24,7 +29,7 @@ pub use validator_statuses::{TotalBalances, ValidatorStatus, ValidatorStatuses};
 pub fn per_epoch_processing<T: EthSpec>(
     state: &mut BeaconState<T>,
     spec: &ChainSpec,
-) -> Result<(), Error> {
+) -> Result<EpochProcessingSummary, Error> {
     // Ensure the committee caches are built.
     state.build_committee_cache(RelativeEpoch::Previous, spec)?;
     state.build_committee_cache(RelativeEpoch::Current, spec)?;
@@ -58,7 +63,9 @@ pub fn per_epoch_processing<T: EthSpec>(
     // Rotate the epoch caches to suit the epoch transition.
     state.advance_caches();

-    Ok(())
+    Ok(EpochProcessingSummary {
+        total_balances: validator_statuses.total_balances,
+    })
 }

 /// Update the following fields on the `BeaconState`:

View File

@@ -1,4 +1,4 @@
-use crate::*;
+use crate::{per_epoch_processing::EpochProcessingSummary, *};
 use types::*;

 #[derive(Debug, PartialEq)]
@@ -18,16 +18,18 @@ pub fn per_slot_processing<T: EthSpec>(
     state: &mut BeaconState<T>,
     state_root: Option<Hash256>,
     spec: &ChainSpec,
-) -> Result<(), Error> {
+) -> Result<Option<EpochProcessingSummary>, Error> {
     cache_state(state, state_root)?;

+    let mut summary = None;
     if state.slot > spec.genesis_slot && (state.slot + 1) % T::slots_per_epoch() == 0 {
-        per_epoch_processing(state, spec)?;
+        summary = Some(per_epoch_processing(state, spec)?);
     }

     state.slot += 1;

-    Ok(())
+    Ok(summary)
 }

 fn cache_state<T: EthSpec>(
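How a caller sees the two changed signatures: `per_slot_processing` now yields `Some(summary)` exactly once per epoch (on the last slot) and `None` otherwise. A simplified, self-contained sketch of that plumbing; the types below are stand-ins, not the real `state_processing` API:

pub struct TotalBalances {
    pub current_epoch: u64,
}

/// Provides a summary of validator participation during the epoch.
pub struct EpochProcessingSummary {
    pub total_balances: TotalBalances,
}

fn per_epoch_processing() -> Result<EpochProcessingSummary, String> {
    Ok(EpochProcessingSummary {
        total_balances: TotalBalances { current_epoch: 0 },
    })
}

fn per_slot_processing(
    slot: &mut u64,
    slots_per_epoch: u64,
) -> Result<Option<EpochProcessingSummary>, String> {
    let mut summary = None;
    // Epoch processing runs only on the last slot of an epoch.
    if (*slot + 1) % slots_per_epoch == 0 {
        summary = Some(per_epoch_processing()?);
    }
    *slot += 1;
    Ok(summary)
}

fn main() {
    let mut slot = 31; // last slot of the first epoch, with 32 slots per epoch
    assert!(per_slot_processing(&mut slot, 32).unwrap().is_some());
    assert!(per_slot_processing(&mut slot, 32).unwrap().is_none());
}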

View File

@@ -13,19 +13,19 @@ bls = { path = "../utils/bls" }
 compare_fields = { path = "../utils/compare_fields" }
 compare_fields_derive = { path = "../utils/compare_fields_derive" }
 dirs = "2.0.2"
-derivative = "1.0.3"
+derivative = "2.1.1"
 eth2_interop_keypairs = { path = "../utils/eth2_interop_keypairs" }
 ethereum-types = "0.9.1"
 eth2_hashing = "0.1.0"
-hex = "0.3"
+hex = "0.4.2"
 int_to_bytes = { path = "../utils/int_to_bytes" }
 log = "0.4.8"
 merkle_proof = { path = "../utils/merkle_proof" }
-rayon = "1.2.0"
-rand = "0.7.2"
+rayon = "1.3.0"
+rand = "0.7.3"
 safe_arith = { path = "../utils/safe_arith" }
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 slog = "2.5.2"
 eth2_ssz = "0.1.2"
 eth2_ssz_derive = "0.1.0"
@@ -33,17 +33,17 @@ eth2_ssz_types = { path = "../utils/ssz_types" }
 swap_or_not_shuffle = { path = "../utils/swap_or_not_shuffle" }
 test_random_derive = { path = "../utils/test_random_derive" }
 tree_hash = "0.1.0"
-tree_hash_derive = "0.2"
+tree_hash_derive = "0.2.0"
 rand_xorshift = "0.2.0"
 cached_tree_hash = { path = "../utils/cached_tree_hash" }
 serde_yaml = "0.8.11"
 tempfile = "3.1.0"
-arbitrary = { version = "0.4", features = ["derive"], optional = true }
+arbitrary = { version = "0.4.4", features = ["derive"], optional = true }

 [dev-dependencies]
 env_logger = "0.7.1"
-serde_json = "1.0.41"
-criterion = "0.3.0"
+serde_json = "1.0.52"
+criterion = "0.3.2"

 [features]
 arbitrary-fuzz = [

View File

@@ -7,15 +7,15 @@ edition = "2018"
 [dependencies]
 milagro_bls = { git = "https://github.com/sigp/milagro_bls", tag = "v1.0.1" }
 eth2_hashing = "0.1.0"
-hex = "0.3"
-rand = "0.7.2"
+hex = "0.4.2"
+rand = "0.7.3"
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"
 serde_hex = { path = "../serde_hex" }
 eth2_ssz = "0.1.2"
 eth2_ssz_types = { path = "../ssz_types" }
 tree_hash = "0.1.0"
-arbitrary = { version = "0.4", features = ["derive"], optional = true }
+arbitrary = { version = "0.4.4", features = ["derive"], optional = true }

 [features]
 fake_crypto = []

View File

@@ -5,17 +5,17 @@ authors = ["Michael Sproul <michael@sigmaprime.io>"]
 edition = "2018"

 [dependencies]
-ethereum-types = "0.9"
+ethereum-types = "0.9.1"
 eth2_ssz_types = { path = "../ssz_types" }
-eth2_hashing = "0.1"
+eth2_hashing = "0.1.0"
 eth2_ssz_derive = "0.1.0"
 eth2_ssz = "0.1.2"
-tree_hash = "0.1"
-smallvec = "1.2.0"
+tree_hash = "0.1.0"
+smallvec = "1.4.0"

 [dev-dependencies]
-quickcheck = "0.9"
-quickcheck_macros = "0.8"
+quickcheck = "0.9.2"
+quickcheck_macros = "0.9.1"

 [features]
 arbitrary = ["ethereum-types/arbitrary"]

View File

@@ -8,8 +8,8 @@ edition = "2018"
 [dependencies]
 clap = "2.33.0"
-hex = "0.3"
-dirs = "2.0"
+hex = "0.4.2"
+dirs = "2.0.2"
 types = { path = "../../types" }
 eth2_testnet_config = { path = "../eth2_testnet_config" }
-eth2_ssz = { path = "../ssz" }
+eth2_ssz = "0.1.2"

View File

@@ -8,5 +8,5 @@
 proc-macro = true

 [dependencies]
-syn = "0.15"
-quote = "0.6"
+syn = "1.0.18"
+quote = "1.0.4"

View File

@@ -8,7 +8,7 @@ use syn::{parse_macro_input, DeriveInput};
 fn is_slice(field: &syn::Field) -> bool {
     field.attrs.iter().any(|attr| {
         attr.path.is_ident("compare_fields")
-            && attr.tts.to_string().replace(" ", "") == "(as_slice)"
+            && attr.tokens.to_string().replace(" ", "") == "(as_slice)"
     })
 }

View File

@@ -7,11 +7,11 @@ edition = "2018"
 build = "build.rs"

 [build-dependencies]
-reqwest = "0.9.20"
-serde_json = "1.0"
+reqwest = { version = "0.10.4", features = ["blocking", "json"] }
+serde_json = "1.0.52"

 [dependencies]
 types = { path = "../../types"}
-eth2_ssz = { path = "../ssz"}
-tree_hash = { path = "../tree_hash"}
-ethabi = "12.0"
+eth2_ssz = "0.1.2"
+tree_hash = "0.1.0"
+ethabi = "12.0.0"

View File

@@ -56,8 +56,8 @@ pub fn download_deposit_contract(
     if abi_file.exists() {
         // Nothing to do.
     } else {
-        match reqwest::get(url) {
-            Ok(mut response) => {
+        match reqwest::blocking::get(url) {
+            Ok(response) => {
                 let mut abi_file = File::create(abi_file)
                     .map_err(|e| format!("Failed to create local abi file: {:?}", e))?;
                 let mut bytecode_file = File::create(bytecode_file)
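In reqwest 0.10 the synchronous API moved behind the `blocking` feature and module, which is why the build scripts switch to `reqwest::blocking::get`. A minimal sketch of the new download flow, assuming `reqwest = { version = "0.10", features = ["blocking"] }`; the URL is illustrative:

fn download(url: &str) -> Result<Vec<u8>, String> {
    // `blocking::get` replaces the old synchronous `reqwest::get`;
    // `error_for_status` turns HTTP error codes into `Err`, and `bytes`
    // replaces the old `copy_to(&mut Vec<u8>)` dance.
    let bytes = reqwest::blocking::get(url)
        .map_err(|e| format!("Failed to download: {}", e))?
        .error_for_status()
        .map_err(|e| format!("Error downloading: {}", e))?
        .bytes()
        .map_err(|e| format!("Failed to read response bytes: {}", e))?;
    Ok(bytes.to_vec())
}

fn main() {
    match download("https://example.com/abi.json") {
        Ok(bytes) => println!("downloaded {} bytes", bytes.len()),
        Err(e) => eprintln!("{}", e),
    }
}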

View File

@@ -5,7 +5,7 @@ authors = ["Paul Hauner <paul@paulhauner.com>"]
 edition = "2018"

 [dependencies]
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"
-toml = "0.5.4"
+toml = "0.5.6"
 types = { path = "../../types" }

View File

@@ -13,13 +13,13 @@ lazy_static = { version = "1.4.0", optional = true }
 ring = "0.16.9"

 [target.'cfg(target_arch = "wasm32")'.dependencies]
-sha2 = "0.8.0"
+sha2 = "0.8.1"

 [dev-dependencies]
-rustc-hex = "2.0.1"
+rustc-hex = "2.1.0"

 [target.'cfg(target_arch = "wasm32")'.dev-dependencies]
-wasm-bindgen-test = "0.3.2"
+wasm-bindgen-test = "0.3.12"

 [features]
 default = ["zero_hash_cache"]

View File

@@ -8,13 +8,13 @@ edition = "2018"
 [dependencies]
 lazy_static = "1.4.0"
-num-bigint = "0.2.3"
+num-bigint = "0.2.6"
 eth2_hashing = "0.1.0"
-hex = "0.3"
+hex = "0.4.2"
 milagro_bls = { git = "https://github.com/sigp/milagro_bls", tag = "v1.0.1" }
 serde_yaml = "0.8.11"
-serde = "1.0.102"
-serde_derive = "1.0.102"
+serde = "1.0.110"
+serde_derive = "1.0.110"

 [dev-dependencies]
-base64 = "0.11.0"
+base64 = "0.12.1"

View File

@@ -11,7 +11,7 @@ rand = "0.7.2"
 rust-crypto = "0.2.36"
 uuid = { version = "0.8", features = ["serde", "v4"] }
 zeroize = { version = "1.0.0", features = ["zeroize_derive"] }
-serde = "1.0.102"
+serde = "1.0.110"
 serde_repr = "0.1"
 hex = "0.3"
 bls = { path = "../bls" }

View File

@@ -7,15 +7,14 @@ edition = "2018"
 build = "build.rs"

 [build-dependencies]
-reqwest = "0.9.20"
+reqwest = { version = "0.10.4", features = ["blocking"] }

 [dev-dependencies]
-tempdir = "0.3"
+tempdir = "0.3.7"
-reqwest = "0.9.20"

 [dependencies]
-serde = "1.0"
-serde_yaml = "0.8"
+serde = "1.0.110"
+serde_yaml = "0.8.11"
 types = { path = "../../types"}
 eth2-libp2p = { path = "../../../beacon_node/eth2-libp2p"}
-eth2_ssz = { path = "../ssz"}
+eth2_ssz = "0.1.2"

View File

@@ -46,13 +46,18 @@ pub fn get_file(filename: &str) -> Result<(), String> {
     let mut file =
         File::create(path).map_err(|e| format!("Failed to create {}: {:?}", filename, e))?;

-    let mut response = reqwest::get(&url)
+    let request = reqwest::blocking::Client::builder()
+        .build()
+        .map_err(|_| "Could not build request client".to_string())?
+        .get(&url)
+        .timeout(std::time::Duration::from_secs(120));
+    let contents = request
+        .send()
         .map_err(|e| format!("Failed to download {}: {}", filename, e))?
         .error_for_status()
-        .map_err(|e| format!("Error downloading {}: {}", filename, e))?;
-    let mut contents: Vec<u8> = vec![];
-    response
-        .copy_to(&mut contents)
+        .map_err(|e| format!("Error downloading {}: {}", filename, e))?
+        .bytes()
         .map_err(|e| format!("Failed to read {} response bytes: {}", filename, e))?;

     file.write(&contents)

Some files were not shown because too many files have changed in this diff.