Initial work towards v0.2.0 (#924)

* Remove ping protocol

* Initial renaming of network services

* Correct rebasing relative to latest master

* Start updating types

* Adds HashMapDelay struct to utils

* Initial network restructure

* Network restructure. Adds new types for v0.2.0

* Removes build artefacts

* Shift validation to beacon chain

* Temporarily remove gossip validation

This is to be updated to match current optimisation efforts.

* Adds AggregateAndProof

* Begin rebuilding pubsub encoding/decoding

* Signature hacking

* Shift gossipsub decoding into eth2_libp2p

* Existing EF tests passing with fake_crypto

* Shifts block encoding/decoding into RPC

* Delete outdated API spec

* All release tests passing bar genesis state parsing

* Update and test YamlConfig

* Update to spec v0.10 compatible BLS

* Updates to BLS EF tests

* Add EF test for AggregateVerify

And delete unused hash2curve tests for uncompressed points

* Update EF tests to v0.10.1

* Use optional block root correctly in block proc

* Use genesis fork in deposit domain. All tests pass

* Fast aggregate verify test

* Update REST API docs

* Fix unused import

* Bump spec tags to v0.10.1

* Add `seconds_per_eth1_block` to chainspec

* Update to timestamp based eth1 voting scheme

* Return None from `get_votes_to_consider` if block cache is empty

* Handle overflows in `is_candidate_block`

* Revert to failing tests

* Fix eth1 data sets test

* Choose default vote according to spec

* Fix collect_valid_votes tests

* Fix `get_votes_to_consider` to choose all eligible blocks

* Uncomment winning_vote tests

* Add comments; remove unused code

* Reduce seconds_per_eth1_block for simulation

* Addressed review comments

* Add test for default vote case

* Fix logs

* Remove unused functions

* Meter default eth1 votes

* Fix comments

* Progress on attestation service

* Address review comments; remove unused dependency

* Initial work on removing libp2p lock

* Add LRU caches to store (rollup)

* Update attestation validation for DB changes (WIP)

* Initial version of should_forward_block

* Scaffold

* Progress on attestation validation

Also, consolidate prod+testing slot clocks so that they share much
of the same implementation and can both handle sub-slot time changes.

* Removes lock from libp2p service

* Completed network lock removal

* Finish(?) attestation processing

* Correct network termination future

* Add slot check to block check

* Correct fmt issues

* Remove Drop implementation for network service

* Add first attempt at attestation proc. re-write

* Add version 2 of attestation processing

* Minor fixes

* Add validator pubkey cache

* Make get_indexed_attestation take a committee

* Link signature processing into new attn verification

* First working version

* Ensure pubkey cache is updated

* Add more metrics, slight optimizations

* Clone committee cache during attestation processing

* Update shuffling cache during block processing

* Remove old commented-out code

* Fix shuffling cache insert bug

* Used indexed attestation in fork choice

* Restructure attn processing, add metrics

* Add more detailed metrics

* Tidy, fix failing tests

* Fix failing tests, tidy

* Address reviewers' suggestions

* Disable/delete two outdated tests

* Modification of validator for subscriptions

* Add slot signing to validator client

* Further progress on validation subscription

* Adds necessary validator subscription functionality

* Add new Pubkeys struct to signature_sets

* Refactor with functional approach

* Update beacon chain

* Clean up validator <-> beacon node http types

* Add aggregator status to ValidatorDuty

* Impl Clone for manual slot clock

* Fix minor errors

* Further progress validator client subscription

* Initial subscription and aggregation handling

* Remove decompressed member from pubkey bytes

* Progress to modifying val client for attestation aggregation

* First draft of validator client upgrade for aggregate attestations

* Add hashmap for indices lookup

* Add state cache, remove store cache

* Only build the head committee cache

* Removes lock on a network channel

* Partially implement beacon node subscription http api

* Correct compilation issues

* Change `get_attesting_indices` to use Vec

* Fix failing test

* Partial implementation of timer

* Adds timer, removes exit_future, http api to op pool

* Partial multiple aggregate attestation handling

* Permits bulk messages across gossipsub network channel

* Correct compile issues

* Improve gossipsub messaging and correct rest api helpers

* Added global gossipsub subscriptions

* Update validator subscriptions data structs

* Tidy

* Re-structure validator subscriptions

* Initial handling of subscriptions

* Re-structure network service

* Add pubkey cache persistence file

* Add more comments

* Integrate persistence file into builder

* Add pubkey cache tests

* Add HashSetDelay and introduce into attestation service

* Handles validator subscriptions

* Add data_dir to beacon chain builder

* Remove Option in pubkey cache persistence file

* Ensure consistency between datadir/data_dir

* Fix failing network test

* Peer subnet discovery gets queued for future subscriptions

* Reorganise attestation service functions

* Initial wiring of attestation service

* First draft of attestation service timing logic

* Correct minor typos

* Tidy

* Fix todos

* Improve tests

* Add PeerInfo to connected peers mapping

* Fix compile error

* Fix compile error from merge

* Split up block processing metrics

* Tidy

* Refactor get_pubkey_from_state

* Remove commented-out code

* Rename state_cache -> checkpoint_cache

* Rename Checkpoint -> Snapshot

* Tidy, add comments

* Tidy up find_head function

* Change some checkpoint -> snapshot

* Add tests

* Expose max_len

* Remove dead code

* Tidy

* Fix bug

* Add sync-speed metric

* Add first attempt at VerifiableBlock

* Start integrating into beacon chain

* Integrate VerifiableBlock

* Rename VerifableBlock -> PartialBlockVerification

* Add start of typed methods

* Add progress

* Add further progress

* Rename structs

* Add full block verification to block_processing.rs

* Further beacon chain integration

* Update checks for gossip

* Add todo

* Start adding segment verification

* Add passing chain segment test

* Initial integration with batch sync

* Minor changes

* Tidy, add more error checking

* Start adding chain_segment tests

* Finish invalid signature tests

* Include single and gossip verified blocks in tests

* Add gossip verification tests

* Start adding docs

* Finish adding comments to block_processing.rs

* Rename block_processing.rs -> block_verification

* Start removing old block processing code

* Fixes beacon_chain compilation

* Fix project-wide compile errors

* Remove old code

* Correct code to pass all tests

* Fix bug with beacon proposer index

* Fix shim for BlockProcessingError

* Only process one epoch at a time

* Fix loop in chain segment processing

* Correct tests from master merge

* Add caching for state.eth1_data_votes

* Add BeaconChain::validator_pubkey

* Revert "Add caching for state.eth1_data_votes"

This reverts commit cd73dcd6434fb8d8e6bf30c5356355598ea7b78e.

Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
This commit is contained in:
Age Manning, 2020-03-17 17:24:44 +11:00, committed by GitHub
parent c198bddf9e · commit 95c8e476bc
GPG Key ID: 4AEE18F83AFDEB23 (no known key found for this signature in database)
161 changed files with 9771 additions and 5266 deletions

Cargo.lock (generated, 5179 lines): file diff suppressed because it is too large.


@@ -13,12 +13,14 @@ members = [
     "eth2/utils/eth2_testnet_config",
     "eth2/utils/logging",
     "eth2/utils/eth2_hashing",
+    "eth2/utils/hashmap_delay",
     "eth2/utils/lighthouse_metrics",
     "eth2/utils/lighthouse_bootstrap",
     "eth2/utils/merkle_proof",
     "eth2/utils/int_to_bytes",
     "eth2/utils/serde_hex",
     "eth2/utils/slot_clock",
+    "eth2/utils/rest_types",
     "eth2/utils/ssz",
     "eth2/utils/ssz_derive",
     "eth2/utils/ssz_types",
@@ -28,14 +30,15 @@ members = [
     "eth2/utils/tree_hash_derive",
     "eth2/utils/test_random_derive",
     "beacon_node",
-    "beacon_node/store",
-    "beacon_node/client",
-    "beacon_node/rest_api",
-    "beacon_node/network",
-    "beacon_node/eth2-libp2p",
-    "beacon_node/version",
-    "beacon_node/eth1",
     "beacon_node/beacon_chain",
+    "beacon_node/client",
+    "beacon_node/eth1",
+    "beacon_node/eth2-libp2p",
+    "beacon_node/network",
+    "beacon_node/rest_api",
+    "beacon_node/store",
+    "beacon_node/timer",
+    "beacon_node/version",
     "beacon_node/websocket_server",
     "tests/simulator",
     "tests/ef_tests",


@@ -1,6 +1,6 @@
 [package]
 name = "beacon_node"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com"]
 edition = "2018"


@@ -1,6 +1,6 @@
 [package]
 name = "beacon_chain"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>"]
 edition = "2018"
@@ -33,10 +33,10 @@ eth2_ssz_derive = "0.1.0"
 state_processing = { path = "../../eth2/state_processing" }
 tree_hash = "0.1.0"
 types = { path = "../../eth2/types" }
-tokio = "0.1.22"
 eth1 = { path = "../eth1" }
 websocket_server = { path = "../websocket_server" }
 futures = "0.1.25"
+exit-future = "0.1.3"
 genesis = { path = "../genesis" }
 integer-sqrt = "0.1"
 rand = "0.7.2"

File diff suppressed because it is too large.


@@ -5,14 +5,14 @@ use types::{BeaconState, EthSpec, Hash256, SignedBeaconBlock};
 /// Represents some block and its associated state. Generally, this will be used for tracking the
 /// head, justified head and finalized head.
 #[derive(Clone, Serialize, PartialEq, Debug, Encode, Decode)]
-pub struct CheckPoint<E: EthSpec> {
+pub struct BeaconSnapshot<E: EthSpec> {
     pub beacon_block: SignedBeaconBlock<E>,
     pub beacon_block_root: Hash256,
     pub beacon_state: BeaconState<E>,
     pub beacon_state_root: Hash256,
 }

-impl<E: EthSpec> CheckPoint<E> {
+impl<E: EthSpec> BeaconSnapshot<E> {
     /// Create a new checkpoint.
     pub fn new(
         beacon_block: SignedBeaconBlock<E>,

@@ -0,0 +1,801 @@
//! Provides `SignedBeaconBlock` verification logic.
//!
//! Specifically, it provides the following:
//!
//! - Verification for gossip blocks (i.e., should we gossip some block from the network).
//! - Verification for normal blocks (e.g., some block received on the RPC during a parent lookup).
//! - Verification for chain segments (e.g., some chain of blocks received on the RPC during a
//! sync).
//!
//! The primary source of complexity here is that we wish to avoid doing duplicate work as a block
//! moves through the verification process. For example, if some block is verified for gossip, we
//! do not wish to re-verify the block proposal signature or re-hash the block. Or, if we've
//! verified the signatures of a block during a chain segment import, we do not wish to verify each
//! signature individually again.
//!
//! The incremental processing steps (e.g., signatures verified but not the state transition) are
//! represented as a sequence of wrapper-types around the block. There is a linear progression of
//! types, starting at a `SignedBeaconBlock` and finishing with a `FullyVerifiedBlock` (see
//! diagram below).
//!
//! ```ignore
//! START
//! |
//! ▼
//! SignedBeaconBlock
//! |---------------
//! | |
//! | ▼
//! | GossipVerifiedBlock
//! | |
//! |---------------
//! |
//! ▼
//! SignatureVerifiedBlock
//! |
//! ▼
//! FullyVerifiedBlock
//! |
//! ▼
//! END
//!
//! ```
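The diagram above describes a typestate progression. As a minimal, standalone sketch of the same idea (with hypothetical simplified types, not the real Lighthouse structs), each stage can only be constructed from the previous one, so the compiler enforces the verification order:

```rust
// Hypothetical stand-ins for the real wrapper types.
struct SignedBlock {
    slot: u64,
    signature_ok: bool,
}

struct SignatureVerified(SignedBlock);
struct FullyVerified(#[allow(dead_code)] SignedBlock);

impl SignedBlock {
    // Stage 1: check the signature once; later stages never re-check it.
    fn verify_signature(self) -> Result<SignatureVerified, &'static str> {
        if self.signature_ok {
            Ok(SignatureVerified(self))
        } else {
            Err("ProposalSignatureInvalid")
        }
    }
}

impl SignatureVerified {
    // Stage 2: a FullyVerified value is only reachable via SignatureVerified.
    fn into_fully_verified(self, present_slot: u64) -> Result<FullyVerified, &'static str> {
        if self.0.slot <= present_slot {
            Ok(FullyVerified(self.0))
        } else {
            Err("FutureSlot")
        }
    }
}

fn main() {
    let block = SignedBlock { slot: 5, signature_ok: true };
    assert!(block.verify_signature().unwrap().into_fully_verified(10).is_ok());

    let bad = SignedBlock { slot: 5, signature_ok: false };
    assert!(bad.verify_signature().is_err());
}
```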
use crate::validator_pubkey_cache::ValidatorPubkeyCache;
use crate::{
beacon_chain::{BLOCK_PROCESSING_CACHE_LOCK_TIMEOUT, VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT},
metrics, BeaconChain, BeaconChainError, BeaconChainTypes, BeaconSnapshot,
};
use parking_lot::RwLockReadGuard;
use state_processing::{
block_signature_verifier::{
BlockSignatureVerifier, Error as BlockSignatureVerifierError, G1Point,
},
per_block_processing, per_slot_processing, BlockProcessingError, BlockSignatureStrategy,
SlotProcessingError,
};
use std::borrow::Cow;
use store::{Error as DBError, StateBatch};
use types::{
BeaconBlock, BeaconState, BeaconStateError, ChainSpec, CloneConfig, EthSpec, Hash256,
RelativeEpoch, SignedBeaconBlock, Slot,
};
mod block_processing_outcome;
pub use block_processing_outcome::BlockProcessingOutcome;
/// Maximum block slot number. Blocks with slots bigger than this constant will NOT be processed.
const MAXIMUM_BLOCK_SLOT_NUMBER: u64 = 4_294_967_296; // 2^32
/// Returned when a block was not verified. A block is not verified for one of two reasons:
///
/// - The block is malformed/invalid (indicated by all results other than `BeaconChainError`).
/// - We encountered an error whilst trying to verify the block (a `BeaconChainError`).
#[derive(Debug, PartialEq)]
pub enum BlockError {
/// The parent block was unknown.
ParentUnknown(Hash256),
/// The block slot is greater than the present slot.
FutureSlot {
present_slot: Slot,
block_slot: Slot,
},
/// The block state_root does not match the generated state.
StateRootMismatch { block: Hash256, local: Hash256 },
/// The block was a genesis block, these blocks cannot be re-imported.
GenesisBlock,
/// The slot is finalized, no need to import.
WouldRevertFinalizedSlot {
block_slot: Slot,
finalized_slot: Slot,
},
/// Block is already known, no need to re-import.
BlockIsAlreadyKnown,
/// The block slot exceeds the MAXIMUM_BLOCK_SLOT_NUMBER.
BlockSlotLimitReached,
/// The proposal signature is invalid.
ProposalSignatureInvalid,
/// A signature in the block is invalid (exactly which is unknown).
InvalidSignature,
/// The provided block is from an earlier slot than its parent.
BlockIsNotLaterThanParent { block_slot: Slot, state_slot: Slot },
/// At least one block in the chain segment did not have its parent root set to the root of
/// the prior block.
NonLinearParentRoots,
/// The slots of the blocks in the chain segment were not strictly increasing. I.e., a child
/// had lower slot than a parent.
NonLinearSlots,
/// The block failed the specification's `per_block_processing` function, it is invalid.
PerBlockProcessingError(BlockProcessingError),
/// There was an error whilst processing the block. It is not necessarily invalid.
BeaconChainError(BeaconChainError),
}
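The `NonLinearParentRoots` and `NonLinearSlots` variants above encode the two structural invariants a chain segment must satisfy. A minimal sketch of the corresponding checks (hypothetical simplified block type, not the real `SignedBeaconBlock`):

```rust
// Hypothetical simplified block: roots are u64 here instead of Hash256.
struct Block {
    root: u64,
    parent_root: u64,
    slot: u64,
}

// Each consecutive pair must be linked by parent root, with strictly
// increasing slots; otherwise the whole segment is rejected.
fn check_segment(segment: &[Block]) -> Result<(), &'static str> {
    for pair in segment.windows(2) {
        if pair[1].parent_root != pair[0].root {
            return Err("NonLinearParentRoots");
        }
        if pair[1].slot <= pair[0].slot {
            return Err("NonLinearSlots");
        }
    }
    Ok(())
}

fn main() {
    let good = vec![
        Block { root: 1, parent_root: 0, slot: 1 },
        Block { root: 2, parent_root: 1, slot: 2 },
    ];
    assert!(check_segment(&good).is_ok());

    let unlinked = vec![
        Block { root: 1, parent_root: 0, slot: 1 },
        Block { root: 3, parent_root: 9, slot: 2 },
    ];
    assert_eq!(check_segment(&unlinked), Err("NonLinearParentRoots"));
}
```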
impl From<BlockSignatureVerifierError> for BlockError {
fn from(e: BlockSignatureVerifierError) -> Self {
BlockError::BeaconChainError(BeaconChainError::BlockSignatureVerifierError(e))
}
}
impl From<BeaconChainError> for BlockError {
fn from(e: BeaconChainError) -> Self {
BlockError::BeaconChainError(e)
}
}
impl From<BeaconStateError> for BlockError {
fn from(e: BeaconStateError) -> Self {
BlockError::BeaconChainError(BeaconChainError::BeaconStateError(e))
}
}
impl From<SlotProcessingError> for BlockError {
fn from(e: SlotProcessingError) -> Self {
BlockError::BeaconChainError(BeaconChainError::SlotProcessingError(e))
}
}
impl From<DBError> for BlockError {
fn from(e: DBError) -> Self {
BlockError::BeaconChainError(BeaconChainError::DBError(e))
}
}
/// Verify all signatures (except deposit signatures) on all blocks in the `chain_segment`. If all
/// signatures are valid, the `chain_segment` is mapped to a `Vec<SignatureVerifiedBlock>` that can
/// later be transformed into a `FullyVerifiedBlock` without re-checking the signatures. If any
/// signature in the block is invalid, an `Err` is returned (it is not possible to know _which_
/// signature was invalid).
///
/// ## Errors
///
/// The given `chain_segment` must span no more than two epochs, otherwise an error will be
/// returned.
pub fn signature_verify_chain_segment<T: BeaconChainTypes>(
chain_segment: Vec<(Hash256, SignedBeaconBlock<T::EthSpec>)>,
chain: &BeaconChain<T>,
) -> Result<Vec<SignatureVerifiedBlock<T>>, BlockError> {
let (mut parent, slot) = if let Some(block) = chain_segment.first().map(|(_, block)| block) {
let parent = load_parent(&block.message, chain)?;
(parent, block.slot())
} else {
return Ok(vec![]);
};
let highest_slot = chain_segment
.last()
.map(|(_, block)| block.slot())
.unwrap_or_else(|| slot);
let state = cheap_state_advance_to_obtain_committees(
&mut parent.beacon_state,
highest_slot,
&chain.spec,
)?;
let pubkey_cache = get_validator_pubkey_cache(chain)?;
let mut signature_verifier = get_signature_verifier(&state, &pubkey_cache, &chain.spec);
for (block_root, block) in &chain_segment {
signature_verifier.include_all_signatures(block, Some(*block_root))?;
}
if signature_verifier.verify().is_err() {
return Err(BlockError::InvalidSignature);
}
drop(pubkey_cache);
let mut signature_verified_blocks = chain_segment
.into_iter()
.map(|(block_root, block)| SignatureVerifiedBlock {
block,
block_root,
parent: None,
})
.collect::<Vec<_>>();
if let Some(signature_verified_block) = signature_verified_blocks.first_mut() {
signature_verified_block.parent = Some(parent);
}
Ok(signature_verified_blocks)
}
/// A wrapper around a `SignedBeaconBlock` that indicates it has been approved for re-gossiping on
/// the p2p network.
pub struct GossipVerifiedBlock<T: BeaconChainTypes> {
block: SignedBeaconBlock<T::EthSpec>,
block_root: Hash256,
parent: BeaconSnapshot<T::EthSpec>,
}
/// A wrapper around a `SignedBeaconBlock` that indicates that all signatures (except the deposit
/// signatures) have been verified.
pub struct SignatureVerifiedBlock<T: BeaconChainTypes> {
block: SignedBeaconBlock<T::EthSpec>,
block_root: Hash256,
parent: Option<BeaconSnapshot<T::EthSpec>>,
}
/// A wrapper around a `SignedBeaconBlock` that indicates that this block is fully verified and
/// ready to import into the `BeaconChain`. The validation includes:
///
/// - Parent is known
/// - Signatures
/// - State root check
/// - Per block processing
///
/// Note: a `FullyVerifiedBlock` is not _forever_ valid to be imported, it may later become invalid
/// due to finality or some other event. A `FullyVerifiedBlock` should be imported into the
/// `BeaconChain` immediately after it is instantiated.
pub struct FullyVerifiedBlock<T: BeaconChainTypes> {
pub block: SignedBeaconBlock<T::EthSpec>,
pub block_root: Hash256,
pub state: BeaconState<T::EthSpec>,
pub parent_block: SignedBeaconBlock<T::EthSpec>,
pub intermediate_states: StateBatch<T::EthSpec>,
}
/// Implemented on types that can be converted into a `FullyVerifiedBlock`.
///
/// Used to allow functions to accept blocks at various stages of verification.
pub trait IntoFullyVerifiedBlock<T: BeaconChainTypes> {
fn into_fully_verified_block(
self,
chain: &BeaconChain<T>,
) -> Result<FullyVerifiedBlock<T>, BlockError>;
fn block(&self) -> &SignedBeaconBlock<T::EthSpec>;
}
impl<T: BeaconChainTypes> GossipVerifiedBlock<T> {
/// Instantiates `Self`, a wrapper that indicates the given `block` is safe to be re-gossiped
/// on the p2p network.
///
/// Returns an error if the block is invalid, or if the block was unable to be verified.
pub fn new(
block: SignedBeaconBlock<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<Self, BlockError> {
// Do not gossip or process blocks from future slots.
//
// TODO: adjust this to allow for clock disparity tolerance.
let present_slot = chain.slot()?;
if block.slot() > present_slot {
return Err(BlockError::FutureSlot {
present_slot,
block_slot: block.slot(),
});
}
// Do not gossip a block from a finalized slot.
//
// TODO: adjust this to allow for clock disparity tolerance.
check_block_against_finalized_slot(&block.message, chain)?;
// TODO: add check for the `(block.proposer_index, block.slot)` tuple once we have v0.11.0
let mut parent = load_parent(&block.message, chain)?;
let block_root = get_block_root(&block);
let state = cheap_state_advance_to_obtain_committees(
&mut parent.beacon_state,
block.slot(),
&chain.spec,
)?;
let pubkey_cache = get_validator_pubkey_cache(chain)?;
let mut signature_verifier = get_signature_verifier(&state, &pubkey_cache, &chain.spec);
signature_verifier.include_block_proposal(&block, Some(block_root))?;
if signature_verifier.verify().is_ok() {
Ok(Self {
block,
block_root,
parent,
})
} else {
Err(BlockError::ProposalSignatureInvalid)
}
}
}
impl<T: BeaconChainTypes> IntoFullyVerifiedBlock<T> for GossipVerifiedBlock<T> {
/// Completes verification of the wrapped `block`.
fn into_fully_verified_block(
self,
chain: &BeaconChain<T>,
) -> Result<FullyVerifiedBlock<T>, BlockError> {
let fully_verified = SignatureVerifiedBlock::from_gossip_verified_block(self, chain)?;
fully_verified.into_fully_verified_block(chain)
}
fn block(&self) -> &SignedBeaconBlock<T::EthSpec> {
&self.block
}
}
impl<T: BeaconChainTypes> SignatureVerifiedBlock<T> {
/// Instantiates `Self`, a wrapper that indicates that all signatures (except the deposit
/// signatures) are valid (i.e., signed by the correct public keys).
///
/// Returns an error if the block is invalid, or if the block was unable to be verified.
pub fn new(
block: SignedBeaconBlock<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<Self, BlockError> {
let mut parent = load_parent(&block.message, chain)?;
let block_root = get_block_root(&block);
let state = cheap_state_advance_to_obtain_committees(
&mut parent.beacon_state,
block.slot(),
&chain.spec,
)?;
let pubkey_cache = get_validator_pubkey_cache(chain)?;
let mut signature_verifier = get_signature_verifier(&state, &pubkey_cache, &chain.spec);
signature_verifier.include_all_signatures(&block, Some(block_root))?;
if signature_verifier.verify().is_ok() {
Ok(Self {
block,
block_root,
parent: Some(parent),
})
} else {
Err(BlockError::InvalidSignature)
}
}
/// Finishes signature verification on the provided `GossipVerifiedBlock`. Does not re-verify
/// the proposer signature.
pub fn from_gossip_verified_block(
from: GossipVerifiedBlock<T>,
chain: &BeaconChain<T>,
) -> Result<Self, BlockError> {
let mut parent = from.parent;
let block = from.block;
let state = cheap_state_advance_to_obtain_committees(
&mut parent.beacon_state,
block.slot(),
&chain.spec,
)?;
let pubkey_cache = get_validator_pubkey_cache(chain)?;
let mut signature_verifier = get_signature_verifier(&state, &pubkey_cache, &chain.spec);
signature_verifier.include_all_signatures_except_proposal(&block)?;
if signature_verifier.verify().is_ok() {
Ok(Self {
block,
block_root: from.block_root,
parent: Some(parent),
})
} else {
Err(BlockError::InvalidSignature)
}
}
}
impl<T: BeaconChainTypes> IntoFullyVerifiedBlock<T> for SignatureVerifiedBlock<T> {
/// Completes verification of the wrapped `block`.
fn into_fully_verified_block(
self,
chain: &BeaconChain<T>,
) -> Result<FullyVerifiedBlock<T>, BlockError> {
let block = self.block;
let parent = self
.parent
.map(Result::Ok)
.unwrap_or_else(|| load_parent(&block.message, chain))?;
FullyVerifiedBlock::from_signature_verified_components(
block,
self.block_root,
parent,
chain,
)
}
fn block(&self) -> &SignedBeaconBlock<T::EthSpec> {
&self.block
}
}
impl<T: BeaconChainTypes> IntoFullyVerifiedBlock<T> for SignedBeaconBlock<T::EthSpec> {
/// Verifies the `SignedBeaconBlock` by first transforming it into a `SignatureVerifiedBlock`
/// and then using that implementation of `IntoFullyVerifiedBlock` to complete verification.
fn into_fully_verified_block(
self,
chain: &BeaconChain<T>,
) -> Result<FullyVerifiedBlock<T>, BlockError> {
SignatureVerifiedBlock::new(self, chain)?.into_fully_verified_block(chain)
}
fn block(&self) -> &SignedBeaconBlock<T::EthSpec> {
&self
}
}
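The trait implementations above let one import path accept a block at any stage of verification. A minimal sketch of that design (hypothetical simplified types; the real trait methods also take a `&BeaconChain` and return the real block/error types):

```rust
struct Verified(u64);

trait IntoFullyVerified {
    fn into_fully_verified(self) -> Result<Verified, &'static str>;
}

// An unverified block and a gossip-verified block both implement the trait...
struct RawBlock(u64);
struct GossipBlock(u64);

impl IntoFullyVerified for RawBlock {
    fn into_fully_verified(self) -> Result<Verified, &'static str> {
        Ok(Verified(self.0))
    }
}

impl IntoFullyVerified for GossipBlock {
    fn into_fully_verified(self) -> Result<Verified, &'static str> {
        Ok(Verified(self.0))
    }
}

// ...so a single import routine can be generic over the verification stage.
fn import_block<B: IntoFullyVerified>(block: B) -> Result<u64, &'static str> {
    Ok(block.into_fully_verified()?.0)
}

fn main() {
    assert_eq!(import_block(RawBlock(1)), Ok(1));
    assert_eq!(import_block(GossipBlock(2)), Ok(2));
}
```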
impl<T: BeaconChainTypes> FullyVerifiedBlock<T> {
/// Instantiates `Self`, a wrapper that indicates that the given `block` is fully valid. See
/// the struct-level documentation for more information.
///
/// Note: this function does not verify block signatures, it assumes they are valid. Signature
/// verification must be done upstream (e.g., via a `SignatureVerifiedBlock`).
///
/// Returns an error if the block is invalid, or if the block was unable to be verified.
pub fn from_signature_verified_components(
block: SignedBeaconBlock<T::EthSpec>,
block_root: Hash256,
parent: BeaconSnapshot<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<Self, BlockError> {
// Reject any block if its parent is not known to fork choice.
//
// A block that is not in fork choice is either:
//
// - Not yet imported: we should reject this block because we should only import a child
// after its parent has been fully imported.
// - Pre-finalized: if the parent block is _prior_ to finalization, we should ignore it
// because it will revert finalization. Note that the finalized block is stored in fork
// choice, so we will not reject any child of the finalized block (this is relevant during
// genesis).
if !chain.fork_choice.contains_block(&block.parent_root()) {
return Err(BlockError::ParentUnknown(block.parent_root()));
}
/*
* Perform cursory checks to see if the block is even worth processing.
*/
check_block_relevancy(&block, Some(block_root), chain)?;
/*
* Advance the given `parent.beacon_state` to the slot of the given `block`.
*/
let catchup_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_CATCHUP_STATE);
// Keep a batch of any states that were "skipped" (block-less) in between the parent state
// slot and the block slot. These will be stored in the database.
let mut intermediate_states = StateBatch::new();
// The block must have a higher slot than its parent.
if block.slot() <= parent.beacon_state.slot {
return Err(BlockError::BlockIsNotLaterThanParent {
block_slot: block.slot(),
state_slot: parent.beacon_state.slot,
});
}
// Transition the parent state to the block slot.
let mut state = parent.beacon_state;
let distance = block.slot().as_u64().saturating_sub(state.slot.as_u64());
for i in 0..distance {
let state_root = if i == 0 {
parent.beacon_block.state_root()
} else {
// This is a new state we've reached, so stage it for storage in the DB.
// Computing the state root here is time-equivalent to computing it during slot
// processing, but we get early access to it.
let state_root = state.update_tree_hash_cache()?;
intermediate_states.add_state(state_root, &state)?;
state_root
};
per_slot_processing(&mut state, Some(state_root), &chain.spec)?;
}
metrics::stop_timer(catchup_timer);
/*
* Build the committee caches on the state.
*/
let committee_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_COMMITTEE);
state.build_committee_cache(RelativeEpoch::Previous, &chain.spec)?;
state.build_committee_cache(RelativeEpoch::Current, &chain.spec)?;
metrics::stop_timer(committee_timer);
/*
* Perform `per_block_processing` on the block and state, returning early if the block is
* invalid.
*/
let core_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_CORE);
if let Err(err) = per_block_processing(
&mut state,
&block,
Some(block_root),
// Signatures were verified earlier in this function.
BlockSignatureStrategy::NoVerification,
&chain.spec,
) {
match err {
// Capture `BeaconStateError` so that we can easily distinguish between a block
// that's invalid and one that caused an internal error.
BlockProcessingError::BeaconStateError(e) => return Err(e.into()),
other => return Err(BlockError::PerBlockProcessingError(other)),
}
};
metrics::stop_timer(core_timer);
/*
* Calculate the state root of the newly modified state
*/
let state_root_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_STATE_ROOT);
let state_root = state.update_tree_hash_cache()?;
metrics::stop_timer(state_root_timer);
/*
* Check to ensure the state root on the block matches the one we have calculated.
*/
if block.state_root() != state_root {
return Err(BlockError::StateRootMismatch {
block: block.state_root(),
local: state_root,
});
}
Ok(Self {
block,
block_root,
state,
parent_block: parent.beacon_block,
intermediate_states,
})
}
}
/// Returns `Ok(())` if the block is later than the finalized slot on `chain`.
///
/// Returns an error if the block is earlier or equal to the finalized slot, or there was an error
/// verifying that condition.
fn check_block_against_finalized_slot<T: BeaconChainTypes>(
block: &BeaconBlock<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<(), BlockError> {
let finalized_slot = chain
.head_info()?
.finalized_checkpoint
.epoch
.start_slot(T::EthSpec::slots_per_epoch());
if block.slot <= finalized_slot {
Err(BlockError::WouldRevertFinalizedSlot {
block_slot: block.slot,
finalized_slot,
})
} else {
Ok(())
}
}
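The comparison in the function above reduces to a small arithmetic check: the finalized slot is the first slot of the finalized epoch, and a block at or below it would revert finality. A standalone sketch (mainnet's 32 slots per epoch assumed; `is_after_finalization` is a hypothetical helper, not the real API):

```rust
const SLOTS_PER_EPOCH: u64 = 32;

// First slot of an epoch is epoch * SLOTS_PER_EPOCH; a block must land
// strictly after the finalized epoch's start slot to be importable.
fn is_after_finalization(block_slot: u64, finalized_epoch: u64) -> bool {
    let finalized_slot = finalized_epoch * SLOTS_PER_EPOCH;
    block_slot > finalized_slot
}

fn main() {
    // Epoch 2 finalized => finalized slot 64; slot 64 itself is rejected.
    assert!(!is_after_finalization(64, 2));
    assert!(is_after_finalization(65, 2));
}
```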
/// Performs simple, cheap checks to ensure that the block is relevant to be imported.
///
/// `Ok(block_root)` is returned if the block passes these checks and should progress with
/// verification (viz., it is relevant).
///
/// Returns an error if the block fails one of these checks (viz., is not relevant) or an error is
/// experienced whilst attempting to verify.
pub fn check_block_relevancy<T: BeaconChainTypes>(
signed_block: &SignedBeaconBlock<T::EthSpec>,
block_root: Option<Hash256>,
chain: &BeaconChain<T>,
) -> Result<Hash256, BlockError> {
let block = &signed_block.message;
// Do not process blocks from the future.
if block.slot > chain.slot()? {
return Err(BlockError::FutureSlot {
present_slot: chain.slot()?,
block_slot: block.slot,
});
}
// Do not re-process the genesis block.
if block.slot == 0 {
return Err(BlockError::GenesisBlock);
}
// This is an artificial (non-spec) restriction that provides some protection from overflow
// abuses.
if block.slot >= MAXIMUM_BLOCK_SLOT_NUMBER {
return Err(BlockError::BlockSlotLimitReached);
}
// Do not process a block from a finalized slot.
check_block_against_finalized_slot(block, chain)?;
let block_root = block_root.unwrap_or_else(|| get_block_root(&signed_block));
// Check if the block is already known. We know it is post-finalization, so it is
// sufficient to check the fork choice.
if chain.fork_choice.contains_block(&block_root) {
return Err(BlockError::BlockIsAlreadyKnown);
}
Ok(block_root)
}
/// Returns the canonical root of the given `block`.
///
/// Use this function to ensure that we report the block hashing time Prometheus metric.
pub fn get_block_root<E: EthSpec>(block: &SignedBeaconBlock<E>) -> Hash256 {
let block_root_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_BLOCK_ROOT);
let block_root = block.canonical_root();
metrics::stop_timer(block_root_timer);
block_root
}
/// Load the parent snapshot (block and state) of the given `block`.
///
/// Returns `Err(BlockError::ParentUnknown)` if the parent is not found, or if an error occurs
/// whilst attempting the operation.
fn load_parent<T: BeaconChainTypes>(
block: &BeaconBlock<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<BeaconSnapshot<T::EthSpec>, BlockError> {
let db_read_timer = metrics::start_timer(&metrics::BLOCK_PROCESSING_DB_READ);
// Reject any block if its parent is not known to fork choice.
//
// A block that is not in fork choice is either:
//
// - Not yet imported: we should reject this block because we should only import a child
// after its parent has been fully imported.
// - Pre-finalized: if the parent block is _prior_ to finalization, we should ignore it
// because it will revert finalization. Note that the finalized block is stored in fork
// choice, so we will not reject any child of the finalized block (this is relevant during
// genesis).
if !chain.fork_choice.contains_block(&block.parent_root) {
return Err(BlockError::ParentUnknown(block.parent_root));
}
// Load the parent block and state from disk, returning early if it's not available.
let result = chain
.snapshot_cache
.try_write_for(BLOCK_PROCESSING_CACHE_LOCK_TIMEOUT)
.and_then(|mut snapshot_cache| snapshot_cache.try_remove(block.parent_root))
.map(|snapshot| Ok(Some(snapshot)))
.unwrap_or_else(|| {
// Load the block's parent block from the database, returning `Ok(None)` if it is not
// found.
//
// We don't return a DBInconsistent error here since it's possible for a block to
// exist in fork choice but not in the database yet. In such a case we simply
// indicate that we don't yet know the parent.
let parent_block = if let Some(block) = chain.get_block(&block.parent_root)? {
block
} else {
return Ok(None);
};
// Load the parent block's state from the database, returning an error if it is not found.
// It is an error because if we know the parent block we should also know the parent state.
let parent_state_root = parent_block.state_root();
let parent_state = chain
.get_state(&parent_state_root, Some(parent_block.slot()))?
.ok_or_else(|| {
BeaconChainError::DBInconsistent(format!(
"Missing state {:?}",
parent_state_root
))
})?;
Ok(Some(BeaconSnapshot {
beacon_block: parent_block,
beacon_block_root: block.parent_root,
beacon_state: parent_state,
beacon_state_root: parent_state_root,
}))
})
.map_err(BlockError::BeaconChainError)?
.ok_or_else(|| BlockError::ParentUnknown(block.parent_root));
metrics::stop_timer(db_read_timer);
result
}
/// Performs a cheap (time-efficient) state advancement so the committees for `slot` can be
/// obtained from `state`.
///
/// The state advancement is "cheap" since it does not generate state roots. As a result, the
/// returned state might be holistically invalid but the committees will be correct (since they do
/// not rely upon state roots).
///
/// If the given `state` can already serve the `slot`, the committees will be built on the `state`
/// and `Cow::Borrowed(state)` will be returned. Otherwise, the state will be cloned, cheaply
/// advanced and then returned as a `Cow::Owned`. The end result is that the given `state` is never
/// mutated to be invalid (in fact, it is never changed beyond a simple committee cache build).
fn cheap_state_advance_to_obtain_committees<'a, E: EthSpec>(
state: &'a mut BeaconState<E>,
block_slot: Slot,
spec: &ChainSpec,
) -> Result<Cow<'a, BeaconState<E>>, BlockError> {
let block_epoch = block_slot.epoch(E::slots_per_epoch());
if state.current_epoch() == block_epoch {
state.build_committee_cache(RelativeEpoch::Current, spec)?;
Ok(Cow::Borrowed(state))
} else if state.slot > block_slot {
Err(BlockError::BlockIsNotLaterThanParent {
block_slot,
state_slot: state.slot,
})
} else {
let mut state = state.clone_with(CloneConfig::committee_caches_only());
while state.current_epoch() < block_epoch {
// Don't calculate state roots since they aren't required for calculating
// shuffling (achieved by providing Hash256::zero()).
per_slot_processing(&mut state, Some(Hash256::zero()), spec).map_err(|e| {
BlockError::BeaconChainError(BeaconChainError::SlotProcessingError(e))
})?;
}
state.build_committee_cache(RelativeEpoch::Current, spec)?;
Ok(Cow::Owned(state))
}
}
/// Obtains a read-locked `ValidatorPubkeyCache` from the `chain`.
fn get_validator_pubkey_cache<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
) -> Result<RwLockReadGuard<ValidatorPubkeyCache>, BlockError> {
chain
.validator_pubkey_cache
.try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::ValidatorPubkeyCacheLockTimeout)
.map_err(BlockError::BeaconChainError)
}
/// Produces an _empty_ `BlockSignatureVerifier`.
///
/// The signature verifier is empty because it does not yet have any of this block's signatures
/// added to it. Use `Self::apply_to_signature_verifier` to apply the signatures.
fn get_signature_verifier<'a, E: EthSpec>(
state: &'a BeaconState<E>,
validator_pubkey_cache: &'a ValidatorPubkeyCache,
spec: &'a ChainSpec,
) -> BlockSignatureVerifier<'a, E, impl Fn(usize) -> Option<Cow<'a, G1Point>> + Clone> {
BlockSignatureVerifier::new(
state,
move |validator_index| {
// Disallow access to any validator pubkeys that are not in the current beacon
// state.
if validator_index < state.validators.len() {
validator_pubkey_cache
.get(validator_index)
.map(|pk| Cow::Borrowed(pk.as_point()))
} else {
None
}
},
spec,
)
}
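The borrow-or-clone pattern used by `cheap_state_advance_to_obtain_committees` above can be sketched in isolation: return `Cow::Borrowed` when the caller's state can already serve the target slot, and only clone and advance otherwise, so the caller's state is never mutated. This is a simplified model, not the real implementation — `MiniState` and `advance_to` are hypothetical stand-ins, and the real function advances by epochs and runs full per-slot processing.

```rust
use std::borrow::Cow;

/// A simplified stand-in for `BeaconState`: just a slot counter (illustrative only).
#[derive(Clone, PartialEq, Debug)]
pub struct MiniState {
    pub slot: u64,
}

/// Borrow the state when it can already serve `target_slot`; otherwise clone it
/// and advance the clone, leaving the caller's state untouched.
pub fn advance_to(state: &MiniState, target_slot: u64) -> Cow<'_, MiniState> {
    if state.slot >= target_slot {
        Cow::Borrowed(state)
    } else {
        let mut owned = state.clone();
        // A real implementation would run per-slot processing here.
        owned.slot = target_slot;
        Cow::Owned(owned)
    }
}
```

Because the return type is `Cow`, callers that only need to read committees pay no clone cost in the common case where the state is already at the right slot.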


@@ -0,0 +1,105 @@
use crate::{BeaconChainError, BlockError};
use state_processing::BlockProcessingError;
use types::{Hash256, Slot};
/// This is a legacy object that is being kept around to reduce merge conflicts.
///
/// As soon as this is merged into master, it should be removed as soon as possible.
#[derive(Debug, PartialEq)]
pub enum BlockProcessingOutcome {
/// Block was valid and imported into the block graph.
Processed {
block_root: Hash256,
},
InvalidSignature,
/// The proposal signature is invalid.
ProposalSignatureInvalid,
/// The parent block was unknown.
ParentUnknown(Hash256),
/// The block slot is greater than the present slot.
FutureSlot {
present_slot: Slot,
block_slot: Slot,
},
/// The block state_root does not match the generated state.
StateRootMismatch {
block: Hash256,
local: Hash256,
},
/// The block was a genesis block, these blocks cannot be re-imported.
GenesisBlock,
/// The slot is finalized, no need to import.
WouldRevertFinalizedSlot {
block_slot: Slot,
finalized_slot: Slot,
},
/// Block is already known, no need to re-import.
BlockIsAlreadyKnown,
/// The block slot exceeds the MAXIMUM_BLOCK_SLOT_NUMBER.
BlockSlotLimitReached,
/// The provided block is from an earlier slot than its parent.
BlockIsNotLaterThanParent {
block_slot: Slot,
state_slot: Slot,
},
/// At least one block in the chain segment did not have its parent root set to the root of
/// the prior block.
NonLinearParentRoots,
/// The slots of the blocks in the chain segment were not strictly increasing, i.e., a child
/// had a lower slot than its parent.
NonLinearSlots,
/// The block could not be applied to the state, it is invalid.
PerBlockProcessingError(BlockProcessingError),
}
impl BlockProcessingOutcome {
pub fn shim(
result: Result<Hash256, BlockError>,
) -> Result<BlockProcessingOutcome, BeaconChainError> {
match result {
Ok(block_root) => Ok(BlockProcessingOutcome::Processed { block_root }),
Err(BlockError::ParentUnknown(root)) => Ok(BlockProcessingOutcome::ParentUnknown(root)),
Err(BlockError::FutureSlot {
present_slot,
block_slot,
}) => Ok(BlockProcessingOutcome::FutureSlot {
present_slot,
block_slot,
}),
Err(BlockError::StateRootMismatch { block, local }) => {
Ok(BlockProcessingOutcome::StateRootMismatch { block, local })
}
Err(BlockError::GenesisBlock) => Ok(BlockProcessingOutcome::GenesisBlock),
Err(BlockError::WouldRevertFinalizedSlot {
block_slot,
finalized_slot,
}) => Ok(BlockProcessingOutcome::WouldRevertFinalizedSlot {
block_slot,
finalized_slot,
}),
Err(BlockError::BlockIsAlreadyKnown) => Ok(BlockProcessingOutcome::BlockIsAlreadyKnown),
Err(BlockError::BlockSlotLimitReached) => {
Ok(BlockProcessingOutcome::BlockSlotLimitReached)
}
Err(BlockError::ProposalSignatureInvalid) => {
Ok(BlockProcessingOutcome::ProposalSignatureInvalid)
}
Err(BlockError::InvalidSignature) => Ok(BlockProcessingOutcome::InvalidSignature),
Err(BlockError::BlockIsNotLaterThanParent {
block_slot,
state_slot,
}) => Ok(BlockProcessingOutcome::BlockIsNotLaterThanParent {
block_slot,
state_slot,
}),
Err(BlockError::NonLinearParentRoots) => {
Ok(BlockProcessingOutcome::NonLinearParentRoots)
}
Err(BlockError::NonLinearSlots) => Ok(BlockProcessingOutcome::NonLinearSlots),
Err(BlockError::PerBlockProcessingError(e)) => {
Ok(BlockProcessingOutcome::PerBlockProcessingError(e))
}
Err(BlockError::BeaconChainError(e)) => Err(e),
}
}
}


@@ -7,10 +7,11 @@ use crate::fork_choice::SszForkChoice;
 use crate::head_tracker::HeadTracker;
 use crate::persisted_beacon_chain::PersistedBeaconChain;
 use crate::shuffling_cache::ShufflingCache;
+use crate::snapshot_cache::{SnapshotCache, DEFAULT_SNAPSHOT_CACHE_SIZE};
 use crate::timeout_rw_lock::TimeoutRwLock;
 use crate::validator_pubkey_cache::ValidatorPubkeyCache;
 use crate::{
-    BeaconChain, BeaconChainTypes, CheckPoint, Eth1Chain, Eth1ChainBackend, EventHandler,
+    BeaconChain, BeaconChainTypes, BeaconSnapshot, Eth1Chain, Eth1ChainBackend, EventHandler,
     ForkChoice,
 };
 use eth1::Config as Eth1Config;
@@ -71,10 +72,10 @@ where
 pub struct BeaconChainBuilder<T: BeaconChainTypes> {
     store: Option<Arc<T::Store>>,
     store_migrator: Option<T::StoreMigrator>,
-    canonical_head: Option<CheckPoint<T::EthSpec>>,
+    canonical_head: Option<BeaconSnapshot<T::EthSpec>>,
     /// The finalized checkpoint to anchor the chain. May be genesis or a higher
     /// checkpoint.
-    pub finalized_checkpoint: Option<CheckPoint<T::EthSpec>>,
+    pub finalized_snapshot: Option<BeaconSnapshot<T::EthSpec>>,
     genesis_block_root: Option<Hash256>,
     op_pool: Option<OperationPool<T::EthSpec>>,
     fork_choice: Option<ForkChoice<T>>,
@@ -110,7 +111,7 @@ where
             store: None,
             store_migrator: None,
             canonical_head: None,
-            finalized_checkpoint: None,
+            finalized_snapshot: None,
            genesis_block_root: None,
            op_pool: None,
            fork_choice: None,
@@ -247,14 +248,14 @@ where
            .map_err(|e| format!("DB error when reading finalized state: {:?}", e))?
            .ok_or_else(|| "Finalized state not found in store".to_string())?;
-        self.finalized_checkpoint = Some(CheckPoint {
+        self.finalized_snapshot = Some(BeaconSnapshot {
            beacon_block_root: finalized_block_root,
            beacon_block: finalized_block,
            beacon_state_root: finalized_state_root,
            beacon_state: finalized_state,
        });
-        self.canonical_head = Some(CheckPoint {
+        self.canonical_head = Some(BeaconSnapshot {
            beacon_block_root: head_block_root,
            beacon_block: head_block,
            beacon_state_root: head_state_root,
@@ -291,7 +292,7 @@ where
        self.genesis_block_root = Some(beacon_block_root);
        store
-            .put_state(&beacon_state_root, beacon_state.clone())
+            .put_state(&beacon_state_root, &beacon_state)
            .map_err(|e| format!("Failed to store genesis state: {:?}", e))?;
        store
            .put(&beacon_block_root, &beacon_block)
@@ -305,7 +306,7 @@ where
            )
        })?;
-        self.finalized_checkpoint = Some(CheckPoint {
+        self.finalized_snapshot = Some(BeaconSnapshot {
            beacon_block_root,
            beacon_block,
            beacon_state_root,
@@ -367,7 +368,7 @@ where
        let mut canonical_head = if let Some(head) = self.canonical_head {
            head
        } else {
-            self.finalized_checkpoint
+            self.finalized_snapshot
                .ok_or_else(|| "Cannot build without a state".to_string())?
        };
@@ -407,7 +408,7 @@ where
                .op_pool
                .ok_or_else(|| "Cannot build without op pool".to_string())?,
            eth1_chain: self.eth1_chain,
-            canonical_head: TimeoutRwLock::new(canonical_head),
+            canonical_head: TimeoutRwLock::new(canonical_head.clone()),
            genesis_block_root: self
                .genesis_block_root
                .ok_or_else(|| "Cannot build without a genesis block root".to_string())?,
@@ -418,6 +419,10 @@ where
                .event_handler
                .ok_or_else(|| "Cannot build without an event handler".to_string())?,
            head_tracker: self.head_tracker.unwrap_or_default(),
+            snapshot_cache: TimeoutRwLock::new(SnapshotCache::new(
+                DEFAULT_SNAPSHOT_CACHE_SIZE,
+                canonical_head,
+            )),
            shuffling_cache: TimeoutRwLock::new(ShufflingCache::new()),
            validator_pubkey_cache: TimeoutRwLock::new(validator_pubkey_cache),
            log: log.clone(),
@@ -469,30 +474,30 @@ where
            ForkChoice::from_ssz_container(persisted)
                .map_err(|e| format!("Unable to read persisted fork choice from disk: {:?}", e))?
        } else {
-            let finalized_checkpoint = &self
-                .finalized_checkpoint
+            let finalized_snapshot = &self
+                .finalized_snapshot
                .as_ref()
-                .ok_or_else(|| "fork_choice_backend requires a finalized_checkpoint")?;
+                .ok_or_else(|| "fork_choice_backend requires a finalized_snapshot")?;
            let genesis_block_root = self
                .genesis_block_root
                .ok_or_else(|| "fork_choice_backend requires a genesis_block_root")?;
            let backend = ProtoArrayForkChoice::new(
-                finalized_checkpoint.beacon_block.message.slot,
-                finalized_checkpoint.beacon_block.message.state_root,
+                finalized_snapshot.beacon_block.message.slot,
+                finalized_snapshot.beacon_block.message.state_root,
                // Note: here we set the `justified_epoch` to be the same as the epoch of the
                // finalized checkpoint. Whilst this finalized checkpoint may actually point to
                // a _later_ justified checkpoint, that checkpoint won't yet exist in the fork
                // choice.
-                finalized_checkpoint.beacon_state.current_epoch(),
-                finalized_checkpoint.beacon_state.current_epoch(),
-                finalized_checkpoint.beacon_block_root,
+                finalized_snapshot.beacon_state.current_epoch(),
+                finalized_snapshot.beacon_state.current_epoch(),
+                finalized_snapshot.beacon_block_root,
            )?;
            ForkChoice::new(
                backend,
                genesis_block_root,
-                &finalized_checkpoint.beacon_state,
+                &finalized_snapshot.beacon_state,
            )
        };
@@ -563,7 +568,7 @@ where
    /// Requires the state to be initialized.
    pub fn testing_slot_clock(self, slot_duration: Duration) -> Result<Self, String> {
        let genesis_time = self
-            .finalized_checkpoint
+            .finalized_snapshot
            .as_ref()
            .ok_or_else(|| "testing_slot_clock requires an initialized state")?
            .beacon_state
@@ -642,7 +647,7 @@ mod test {
    #[test]
    fn recent_genesis() {
-        let validator_count = 8;
+        let validator_count = 1;
        let genesis_time = 13_371_337;
        let log = get_logger();


@@ -3,9 +3,11 @@ use crate::fork_choice::Error as ForkChoiceError;
 use operation_pool::OpPoolError;
 use ssz::DecodeError;
 use ssz_types::Error as SszTypesError;
-use state_processing::per_block_processing::errors::AttestationValidationError;
-use state_processing::BlockProcessingError;
-use state_processing::SlotProcessingError;
+use state_processing::{
+    block_signature_verifier::Error as BlockSignatureVerifierError,
+    per_block_processing::errors::AttestationValidationError,
+    signature_sets::Error as SignatureSetError, BlockProcessingError, SlotProcessingError,
+};
 use std::time::Duration;
 use types::*;
@@ -57,13 +59,18 @@ pub enum BeaconChainError {
    IncorrectStateForAttestation(RelativeEpochError),
    InvalidValidatorPubkeyBytes(DecodeError),
    ValidatorPubkeyCacheIncomplete(usize),
-    SignatureSetError(state_processing::signature_sets::Error),
+    SignatureSetError(SignatureSetError),
+    BlockSignatureVerifierError(state_processing::block_signature_verifier::Error),
+    DuplicateValidatorPublicKey,
    ValidatorPubkeyCacheFileError(String),
+    OpPoolError(OpPoolError),
 }
 easy_from_to!(SlotProcessingError, BeaconChainError);
 easy_from_to!(AttestationValidationError, BeaconChainError);
 easy_from_to!(SszTypesError, BeaconChainError);
+easy_from_to!(OpPoolError, BeaconChainError);
+easy_from_to!(BlockSignatureVerifierError, BeaconChainError);
 #[derive(Debug, PartialEq)]
 pub enum BlockProductionError {
@@ -84,3 +91,27 @@ easy_from_to!(BlockProcessingError, BlockProductionError);
 easy_from_to!(BeaconStateError, BlockProductionError);
 easy_from_to!(SlotProcessingError, BlockProductionError);
 easy_from_to!(Eth1ChainError, BlockProductionError);
+/// A reason for not propagating an attestation (single or aggregate).
+#[derive(Debug, PartialEq)]
+pub enum AttestationDropReason {
+    SlotClockError,
+    TooNew { attestation_slot: Slot, now: Slot },
+    TooOld { attestation_slot: Slot, now: Slot },
+    NoValidationState(BeaconChainError),
+    BlockUnknown(Hash256),
+    BadIndexedAttestation(AttestationValidationError),
+    AggregatorNotInAttestingIndices,
+    AggregatorNotSelected,
+    AggregatorSignatureInvalid,
+    SignatureInvalid,
+}
+/// A reason for not propagating a block.
+#[derive(Debug, PartialEq)]
+pub enum BlockDropReason {
+    SlotClockError,
+    TooNew { block_slot: Slot, now: Slot },
+    // FIXME(sproul): add detail here
+    ValidationFailure,
+}


@@ -1,7 +1,6 @@
 use crate::metrics;
 use eth1::{Config as Eth1Config, Eth1Block, Service as HttpService};
 use eth2_hashing::hash;
-use exit_future::Exit;
 use futures::Future;
 use slog::{debug, error, trace, Logger};
 use ssz::{Decode, Encode};
@@ -279,7 +278,10 @@ impl<T: EthSpec, S: Store<T>> CachingEth1Backend<T, S> {
    }
    /// Starts the routine which connects to the external eth1 node and updates the caches.
-    pub fn start(&self, exit: Exit) -> impl Future<Item = (), Error = ()> {
+    pub fn start(
+        &self,
+        exit: tokio::sync::oneshot::Receiver<()>,
+    ) -> impl Future<Item = (), Error = ()> {
        self.core.auto_update(exit)
    }


@@ -306,10 +306,7 @@ impl CheckpointManager {
            .ok_or_else(|| Error::UnknownJustifiedBlock(block_root))?;
        let state = chain
-            .get_state_caching_only_with_committee_caches(
-                &block.state_root(),
-                Some(block.slot()),
-            )?
+            .get_state(&block.state_root(), Some(block.slot()))?
            .ok_or_else(|| Error::UnknownJustifiedState(block.state_root()))?;
        Ok(get_effective_balances(&state))


@@ -3,8 +3,9 @@
 extern crate lazy_static;
 mod beacon_chain;
+mod beacon_snapshot;
+mod block_verification;
 pub mod builder;
-mod checkpoint;
 mod errors;
 pub mod eth1_chain;
 pub mod events;
@@ -13,16 +14,17 @@ mod head_tracker;
 mod metrics;
 mod persisted_beacon_chain;
 mod shuffling_cache;
+mod snapshot_cache;
 pub mod test_utils;
 mod timeout_rw_lock;
 mod validator_pubkey_cache;
 pub use self::beacon_chain::{
-    AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BlockProcessingOutcome,
-    StateSkipConfig,
+    AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, StateSkipConfig,
 };
-pub use self::checkpoint::CheckPoint;
+pub use self::beacon_snapshot::BeaconSnapshot;
 pub use self::errors::{BeaconChainError, BlockProductionError};
+pub use block_verification::{BlockError, BlockProcessingOutcome};
 pub use eth1_chain::{Eth1Chain, Eth1ChainBackend};
 pub use events::EventHandler;
 pub use fork_choice::ForkChoice;


@@ -32,6 +32,10 @@ lazy_static! {
        "beacon_block_processing_committee_building_seconds",
        "Time spent building/obtaining committees for block processing."
    );
+    pub static ref BLOCK_PROCESSING_SIGNATURE: Result<Histogram> = try_create_histogram(
+        "beacon_block_processing_signature_seconds",
+        "Time spent doing signature verification for a block."
+    );
    pub static ref BLOCK_PROCESSING_CORE: Result<Histogram> = try_create_histogram(
        "beacon_block_processing_core_seconds",
        "Time spent doing the core per_block_processing state processing."


@@ -0,0 +1,217 @@
use crate::BeaconSnapshot;
use std::cmp;
use types::{Epoch, EthSpec, Hash256};
/// The default size of the cache.
pub const DEFAULT_SNAPSHOT_CACHE_SIZE: usize = 4;
/// Provides a cache of `BeaconSnapshot` that is intended primarily for block processing.
///
/// ## Cache Queuing
///
/// The cache has a non-standard queue mechanism (specifically, it is not LRU).
///
/// The cache has a max number of elements (`max_len`). Until `max_len` is achieved, all snapshots
/// are simply added to the queue. Once `max_len` is achieved, adding a new snapshot will cause an
/// existing snapshot to be ejected. The ejected snapshot will:
///
/// - Never be the `head_block_root`.
/// - Be the snapshot with the lowest `state.slot` (ties broken arbitrarily).
pub struct SnapshotCache<T: EthSpec> {
max_len: usize,
head_block_root: Hash256,
snapshots: Vec<BeaconSnapshot<T>>,
}
impl<T: EthSpec> SnapshotCache<T> {
/// Instantiate a new cache which contains the `head` snapshot.
///
/// Setting `max_len = 0` is equivalent to setting `max_len = 1`.
pub fn new(max_len: usize, head: BeaconSnapshot<T>) -> Self {
Self {
max_len: cmp::max(max_len, 1),
head_block_root: head.beacon_block_root,
snapshots: vec![head],
}
}
/// Insert a snapshot, potentially removing an existing snapshot if `self` is at capacity (see
/// struct-level documentation for more info).
pub fn insert(&mut self, snapshot: BeaconSnapshot<T>) {
if self.snapshots.len() < self.max_len {
self.snapshots.push(snapshot);
} else {
let insert_at = self
.snapshots
.iter()
.enumerate()
.filter_map(|(i, snapshot)| {
if snapshot.beacon_block_root != self.head_block_root {
Some((i, snapshot.beacon_state.slot))
} else {
None
}
})
.min_by_key(|(_i, slot)| *slot)
.map(|(i, _slot)| i);
if let Some(i) = insert_at {
self.snapshots[i] = snapshot;
}
}
}
/// If there is a snapshot with `block_root`, remove and return it.
pub fn try_remove(&mut self, block_root: Hash256) -> Option<BeaconSnapshot<T>> {
self.snapshots
.iter()
.position(|snapshot| snapshot.beacon_block_root == block_root)
.map(|i| self.snapshots.remove(i))
}
/// If there is a snapshot with `block_root`, clone it (with only the committee caches) and
/// return the clone.
pub fn get_cloned(&self, block_root: Hash256) -> Option<BeaconSnapshot<T>> {
self.snapshots
.iter()
.find(|snapshot| snapshot.beacon_block_root == block_root)
.map(|snapshot| snapshot.clone_with_only_committee_caches())
}
/// Removes all snapshots from the queue whose slots are at or before the start of the
/// finalized epoch.
pub fn prune(&mut self, finalized_epoch: Epoch) {
self.snapshots.retain(|snapshot| {
snapshot.beacon_state.slot > finalized_epoch.start_slot(T::slots_per_epoch())
})
}
/// Inform the cache that the head of the beacon chain has changed.
///
/// The snapshot that matches this `head_block_root` will never be ejected from the cache
/// during `Self::insert`.
pub fn update_head(&mut self, head_block_root: Hash256) {
self.head_block_root = head_block_root
}
}
#[cfg(test)]
mod test {
use super::*;
use types::{
test_utils::{generate_deterministic_keypair, TestingBeaconStateBuilder},
BeaconBlock, Epoch, MainnetEthSpec, Signature, SignedBeaconBlock, Slot,
};
const CACHE_SIZE: usize = 4;
fn get_snapshot(i: u64) -> BeaconSnapshot<MainnetEthSpec> {
let spec = MainnetEthSpec::default_spec();
let state_builder = TestingBeaconStateBuilder::from_deterministic_keypairs(1, &spec);
let (beacon_state, _keypairs) = state_builder.build();
BeaconSnapshot {
beacon_state,
beacon_state_root: Hash256::from_low_u64_be(i),
beacon_block: SignedBeaconBlock {
message: BeaconBlock::empty(&spec),
signature: Signature::new(&[42], &generate_deterministic_keypair(0).sk),
},
beacon_block_root: Hash256::from_low_u64_be(i),
}
}
#[test]
fn insert_get_prune_update() {
let mut cache = SnapshotCache::new(CACHE_SIZE, get_snapshot(0));
// Insert a bunch of entries in the cache. It should look like this:
//
// Index Root
// 0 0 <--head
// 1 1
// 2 2
// 3 3
for i in 1..CACHE_SIZE as u64 {
let mut snapshot = get_snapshot(i);
// Each snapshot should be one slot into an epoch, with each snapshot one epoch apart.
snapshot.beacon_state.slot = Slot::from(i * MainnetEthSpec::slots_per_epoch() + 1);
cache.insert(snapshot);
assert_eq!(
cache.snapshots.len(),
i as usize + 1,
"cache length should be as expected"
);
assert_eq!(cache.head_block_root, Hash256::from_low_u64_be(0));
}
// Insert a new value in the cache. Afterwards it should look like:
//
// Index Root
// 0 0 <--head
// 1 42
// 2 2
// 3 3
assert_eq!(cache.snapshots.len(), CACHE_SIZE);
cache.insert(get_snapshot(42));
assert_eq!(cache.snapshots.len(), CACHE_SIZE);
assert!(
cache.try_remove(Hash256::from_low_u64_be(1)).is_none(),
"the snapshot with the lowest slot should have been removed during the insert function"
);
assert!(cache.get_cloned(Hash256::from_low_u64_be(1)).is_none());
assert!(
cache
.get_cloned(Hash256::from_low_u64_be(0))
.expect("the head should still be in the cache")
.beacon_block_root
== Hash256::from_low_u64_be(0),
"get_cloned should get the correct snapshot"
);
assert!(
cache
.try_remove(Hash256::from_low_u64_be(0))
.expect("the head should still be in the cache")
.beacon_block_root
== Hash256::from_low_u64_be(0),
"try_remove should get the correct snapshot"
);
assert_eq!(
cache.snapshots.len(),
CACHE_SIZE - 1,
"try_remove should shorten the cache"
);
// Prune the cache. Afterwards it should look like:
//
// Index Root
// 0 2
// 1 3
cache.prune(Epoch::new(2));
assert_eq!(cache.snapshots.len(), 2);
cache.update_head(Hash256::from_low_u64_be(2));
// Over-fill the cache so it needs to eject some old values on insert.
for i in 0..CACHE_SIZE as u64 {
cache.insert(get_snapshot(u64::max_value() - i));
}
// Ensure that the new head value was not removed from the cache.
assert!(
cache
.try_remove(Hash256::from_low_u64_be(2))
.expect("the new head should still be in the cache")
.beacon_block_root
== Hash256::from_low_u64_be(2),
"try_remove should get the correct snapshot"
);
}
}
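The non-LRU queuing behaviour documented above (the head is never ejected; otherwise the lowest-slot snapshot is overwritten) can be modelled against plain types. This is a hedged sketch, not the real cache: `MiniCache` and `Entry` are hypothetical stand-ins for `SnapshotCache` and `BeaconSnapshot`, with `u64` roots and slots instead of `Hash256` and `Slot`.

```rust
use std::cmp;

/// A minimal model of a cache entry: a block root paired with a state slot.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Entry {
    pub root: u64,
    pub slot: u64,
}

/// Head-pinned, lowest-slot-eviction queue (a simplified model of `SnapshotCache`).
pub struct MiniCache {
    max_len: usize,
    head_root: u64,
    entries: Vec<Entry>,
}

impl MiniCache {
    /// `max_len = 0` behaves like `max_len = 1`, as in the real cache.
    pub fn new(max_len: usize, head: Entry) -> Self {
        Self {
            max_len: cmp::max(max_len, 1),
            head_root: head.root,
            entries: vec![head],
        }
    }

    /// Push an entry; at capacity, overwrite the non-head entry with the lowest slot.
    pub fn insert(&mut self, entry: Entry) {
        if self.entries.len() < self.max_len {
            self.entries.push(entry);
        } else if let Some(i) = self
            .entries
            .iter()
            .enumerate()
            .filter(|(_, e)| e.root != self.head_root)
            .min_by_key(|(_, e)| e.slot)
            .map(|(i, _)| i)
        {
            self.entries[i] = entry;
        }
    }

    pub fn contains(&self, root: u64) -> bool {
        self.entries.iter().any(|e| e.root == root)
    }
}
```

The eviction rule favours recency in slot terms rather than access order, which suits block processing: the parent most likely to be requested next is a recent head-adjacent snapshot, while old low-slot snapshots are cheap to regenerate from disk.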


@@ -6,8 +6,7 @@ use crate::{
    builder::{BeaconChainBuilder, Witness},
    eth1_chain::CachingEth1Backend,
    events::NullEventHandler,
-    AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BlockProcessingOutcome,
-    StateSkipConfig,
+    AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, StateSkipConfig,
 };
 use genesis::interop_genesis_state;
 use rayon::prelude::*;
@@ -256,20 +255,15 @@ where
            let (block, new_state) = self.build_block(state.clone(), slot, block_strategy);
-            let outcome = self
+            let block_root = self
                .chain
                .process_block(block)
                .expect("should not error during block processing");
            self.chain.fork_choice().expect("should find head");
-            if let BlockProcessingOutcome::Processed { block_root } = outcome {
-                head_block_root = Some(block_root);
-                self.add_free_attestations(&attestation_strategy, &new_state, block_root, slot);
-            } else {
-                panic!("block should be successfully processed: {:?}", outcome);
-            }
+            head_block_root = Some(block_root);
+            self.add_free_attestations(&attestation_strategy, &new_state, block_root, slot);
            state = new_state;
            slot += 1;
@@ -348,7 +342,7 @@ where
            .for_each(|attestation| {
                match self
                    .chain
-                    .process_attestation(attestation)
+                    .process_attestation(attestation, Some(false))
                    .expect("should not error during attestation processing")
                {
                    AttestationProcessingOutcome::Processed => (),


@ -1,10 +1,11 @@
use crate::errors::BeaconChainError;
use ssz::{Decode, DecodeError, Encode};
+ use std::collections::HashMap;
use std::convert::TryInto;
use std::fs::{File, OpenOptions};
use std::io::{self, Read, Write};
use std::path::Path;
- use types::{BeaconState, EthSpec, PublicKey, PublicKeyBytes};
+ use types::{BeaconState, EthSpec, PublicKey, PublicKeyBytes, Validator};
/// Provides a mapping of `validator_index -> validator_publickey`.
///
@ -19,6 +20,7 @@ use types::{BeaconState, EthSpec, PublicKey, PublicKeyBytes};
/// copy of itself. This allows it to be restored between process invocations.
pub struct ValidatorPubkeyCache {
pubkeys: Vec<PublicKey>,
+ indices: HashMap<PublicKeyBytes, usize>,
persitence_file: ValidatorPubkeyCacheFile,
}
@ -47,6 +49,7 @@ impl ValidatorPubkeyCache {
let mut cache = Self {
persitence_file: ValidatorPubkeyCacheFile::create(persistence_path)?,
pubkeys: vec![],
+ indices: HashMap::new(),
};
cache.import_new_pubkeys(state)?;
@ -61,13 +64,25 @@ impl ValidatorPubkeyCache {
&mut self,
state: &BeaconState<T>,
) -> Result<(), BeaconChainError> {
- state
- .validators
- .iter()
- .skip(self.pubkeys.len())
- .try_for_each(|v| {
+ if state.validators.len() > self.pubkeys.len() {
+ self.import(&state.validators[self.pubkeys.len()..])
+ } else {
+ Ok(())
+ }
+ }
+ /// Adds zero or more validators to `self`.
+ fn import(&mut self, validators: &[Validator]) -> Result<(), BeaconChainError> {
+ self.pubkeys.reserve(validators.len());
+ self.indices.reserve(validators.len());
+ for v in validators.iter() {
let i = self.pubkeys.len();
+ if self.indices.contains_key(&v.pubkey) {
+ return Err(BeaconChainError::DuplicateValidatorPublicKey);
+ }
// The item is written to disk (the persistence file) _before_ it is written into
// the local struct.
//
@ -85,14 +100,21 @@ impl ValidatorPubkeyCache {
.map_err(BeaconChainError::InvalidValidatorPubkeyBytes)?,
);
+ self.indices.insert(v.pubkey.clone(), i);
+ }
Ok(())
- })
}
/// Get the public key for a validator with index `i`.
pub fn get(&self, i: usize) -> Option<&PublicKey> {
self.pubkeys.get(i)
}
+ /// Get the index of a validator with `pubkey`.
+ pub fn get_index(&self, pubkey: &PublicKeyBytes) -> Option<usize> {
+ self.indices.get(pubkey).copied()
+ }
}
/// Allows for maintaining an on-disk copy of the `ValidatorPubkeyCache`. The file is raw SSZ bytes
@ -168,12 +190,14 @@ impl ValidatorPubkeyCacheFile {
let mut last = None;
let mut pubkeys = Vec::with_capacity(list.len());
+ let mut indices = HashMap::new();
for (index, pubkey) in list {
let expected = last.map(|n| n + 1);
if expected.map_or(true, |expected| index == expected) {
last = Some(index);
pubkeys.push((&pubkey).try_into().map_err(Error::SszError)?);
+ indices.insert(pubkey, index);
} else {
return Err(Error::InconsistentIndex {
expected,
@ -184,6 +208,7 @@ impl ValidatorPubkeyCacheFile {
Ok(ValidatorPubkeyCache {
pubkeys,
+ indices,
persitence_file: self,
})
}
@ -221,6 +246,16 @@ mod test {
if i < validator_count {
let pubkey = cache.get(i).expect("pubkey should be present");
assert_eq!(pubkey, &keypairs[i].pk, "pubkey should match cache");
+ let pubkey_bytes: PublicKeyBytes = pubkey.clone().into();
+ assert_eq!(
+ i,
+ cache
+ .get_index(&pubkey_bytes)
+ .expect("should resolve index"),
+ "index should match cache"
+ );
} else {
assert_eq!(
cache.get(i),
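The `ValidatorPubkeyCache` change above pairs a `Vec` (O(1) index-to-key lookup) with a `HashMap` (O(1) key-to-index lookup), rejecting duplicate keys on import. A minimal sketch of that dual-index pattern, with keys stubbed as `String`s instead of BLS public keys:

```rust
use std::collections::HashMap;

// Dual index: `pubkeys[i]` gives the key for index `i`,
// `indices[key]` gives the index for a key.
struct PubkeyCache {
    pubkeys: Vec<String>,
    indices: HashMap<String, usize>,
}

impl PubkeyCache {
    fn new() -> Self {
        Self { pubkeys: Vec::new(), indices: HashMap::new() }
    }

    // Rejects duplicates, mirroring the DuplicateValidatorPublicKey check.
    fn import(&mut self, key: String) -> Result<usize, String> {
        if self.indices.contains_key(&key) {
            return Err("duplicate key".into());
        }
        let i = self.pubkeys.len();
        self.indices.insert(key.clone(), i);
        self.pubkeys.push(key);
        Ok(i)
    }

    fn get(&self, i: usize) -> Option<&String> {
        self.pubkeys.get(i)
    }

    fn get_index(&self, key: &str) -> Option<usize> {
        self.indices.get(key).copied()
    }
}

fn main() {
    let mut cache = PubkeyCache::new();
    assert_eq!(cache.import("a".into()), Ok(0));
    assert_eq!(cache.import("b".into()), Ok(1));
    assert!(cache.import("a".into()).is_err());
    assert_eq!(cache.get(1).map(|s| s.as_str()), Some("b"));
    assert_eq!(cache.get_index("a"), Some(0));
}
```

The invariant the real cache relies on is the same one tested here: `get_index(get(i)) == i` for every imported validator.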

@ -0,0 +1,572 @@
#![cfg(not(debug_assertions))]
#[macro_use]
extern crate lazy_static;
use beacon_chain::{
test_utils::{AttestationStrategy, BeaconChainHarness, BlockStrategy, HarnessType},
BeaconSnapshot, BlockError,
};
use types::{
test_utils::generate_deterministic_keypair, AggregateSignature, AttestationData,
AttesterSlashing, Checkpoint, Deposit, DepositData, Epoch, EthSpec, Hash256,
IndexedAttestation, Keypair, MainnetEthSpec, ProposerSlashing, Signature, SignedBeaconBlock,
SignedBeaconBlockHeader, SignedVoluntaryExit, Slot, VoluntaryExit, DEPOSIT_TREE_DEPTH,
};
type E = MainnetEthSpec;
// Should ideally be divisible by 3.
pub const VALIDATOR_COUNT: usize = 24;
pub const CHAIN_SEGMENT_LENGTH: usize = 64 * 5;
lazy_static! {
/// A cached set of keys.
static ref KEYPAIRS: Vec<Keypair> = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT);
/// A cached set of valid blocks
static ref CHAIN_SEGMENT: Vec<BeaconSnapshot<E>> = get_chain_segment();
}
fn get_chain_segment() -> Vec<BeaconSnapshot<E>> {
let harness = get_harness(VALIDATOR_COUNT);
harness.extend_chain(
CHAIN_SEGMENT_LENGTH,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
harness
.chain
.chain_dump()
.expect("should dump chain")
.into_iter()
.skip(1)
.collect()
}
fn get_harness(validator_count: usize) -> BeaconChainHarness<HarnessType<E>> {
let harness = BeaconChainHarness::new(MainnetEthSpec, KEYPAIRS[0..validator_count].to_vec());
harness.advance_slot();
harness
}
fn chain_segment_blocks() -> Vec<SignedBeaconBlock<E>> {
CHAIN_SEGMENT
.iter()
.map(|snapshot| snapshot.beacon_block.clone())
.collect()
}
fn junk_signature() -> Signature {
let kp = generate_deterministic_keypair(VALIDATOR_COUNT);
let message = &[42, 42];
Signature::new(message, &kp.sk)
}
fn junk_aggregate_signature() -> AggregateSignature {
let mut agg_sig = AggregateSignature::new();
agg_sig.add(&junk_signature());
agg_sig
}
fn update_proposal_signatures(
snapshots: &mut [BeaconSnapshot<E>],
harness: &BeaconChainHarness<HarnessType<E>>,
) {
for snapshot in snapshots {
let spec = &harness.chain.spec;
let slot = snapshot.beacon_block.slot();
let state = &snapshot.beacon_state;
let proposer_index = state
.get_beacon_proposer_index(slot, spec)
.expect("should find proposer index");
let keypair = harness
.keypairs
.get(proposer_index)
.expect("proposer keypair should be available");
snapshot.beacon_block =
snapshot
.beacon_block
.message
.clone()
.sign(&keypair.sk, &state.fork, spec);
}
}
fn update_parent_roots(snapshots: &mut [BeaconSnapshot<E>]) {
for i in 0..snapshots.len() {
let root = snapshots[i].beacon_block.canonical_root();
if let Some(child) = snapshots.get_mut(i + 1) {
child.beacon_block.message.parent_root = root
}
}
}
#[test]
fn chain_segment_full_segment() {
let harness = get_harness(VALIDATOR_COUNT);
let blocks = chain_segment_blocks();
harness
.chain
.slot_clock
.set_slot(blocks.last().unwrap().slot().as_u64());
// Sneak in a little check to ensure we can process empty chain segments.
harness
.chain
.process_chain_segment(vec![])
.expect("should import empty chain segment");
harness
.chain
.process_chain_segment(blocks.clone())
.expect("should import chain segment");
harness.chain.fork_choice().expect("should run fork choice");
assert_eq!(
harness
.chain
.head_info()
.expect("should get head")
.block_root,
blocks.last().unwrap().canonical_root(),
"harness should have last block as head"
);
}
#[test]
fn chain_segment_varying_chunk_size() {
for chunk_size in &[1, 2, 3, 5, 31, 32, 33, 42] {
let harness = get_harness(VALIDATOR_COUNT);
let blocks = chain_segment_blocks();
harness
.chain
.slot_clock
.set_slot(blocks.last().unwrap().slot().as_u64());
for chunk in blocks.chunks(*chunk_size) {
harness
.chain
.process_chain_segment(chunk.to_vec())
.expect(&format!(
"should import chain segment of len {}",
chunk_size
));
}
harness.chain.fork_choice().expect("should run fork choice");
assert_eq!(
harness
.chain
.head_info()
.expect("should get head")
.block_root,
blocks.last().unwrap().canonical_root(),
"harness should have last block as head"
);
}
}
#[test]
fn chain_segment_non_linear_parent_roots() {
let harness = get_harness(VALIDATOR_COUNT);
harness
.chain
.slot_clock
.set_slot(CHAIN_SEGMENT.last().unwrap().beacon_block.slot().as_u64());
/*
* Test with a block removed.
*/
let mut blocks = chain_segment_blocks();
blocks.remove(2);
assert_eq!(
harness.chain.process_chain_segment(blocks.clone()),
Err(BlockError::NonLinearParentRoots),
"should not import chain with missing parent"
);
/*
* Test with a modified parent root.
*/
let mut blocks = chain_segment_blocks();
blocks[3].message.parent_root = Hash256::zero();
assert_eq!(
harness.chain.process_chain_segment(blocks.clone()),
Err(BlockError::NonLinearParentRoots),
"should not import chain with a broken parent root link"
);
}
#[test]
fn chain_segment_non_linear_slots() {
let harness = get_harness(VALIDATOR_COUNT);
harness
.chain
.slot_clock
.set_slot(CHAIN_SEGMENT.last().unwrap().beacon_block.slot().as_u64());
/*
* Test where a child is lower than the parent.
*/
let mut blocks = chain_segment_blocks();
blocks[3].message.slot = Slot::new(0);
assert_eq!(
harness.chain.process_chain_segment(blocks.clone()),
Err(BlockError::NonLinearSlots),
"should not import chain with a parent that has a lower slot than its child"
);
/*
* Test where a child is equal to the parent.
*/
let mut blocks = chain_segment_blocks();
blocks[3].message.slot = blocks[2].message.slot;
assert_eq!(
harness.chain.process_chain_segment(blocks.clone()),
Err(BlockError::NonLinearSlots),
"should not import chain with a parent that has an equal slot to its child"
);
}
#[test]
fn invalid_signatures() {
let mut checked_attestation = false;
for &block_index in &[0, 1, 32, 64, 68 + 1, 129, CHAIN_SEGMENT.len() - 1] {
let harness = get_harness(VALIDATOR_COUNT);
harness
.chain
.slot_clock
.set_slot(CHAIN_SEGMENT.last().unwrap().beacon_block.slot().as_u64());
// Import all the ancestors before the `block_index` block.
let ancestor_blocks = CHAIN_SEGMENT
.iter()
.take(block_index)
.map(|snapshot| snapshot.beacon_block.clone())
.collect();
harness
.chain
.process_chain_segment(ancestor_blocks)
.expect("should import all blocks prior to the one being tested");
// For the given snapshots, test the following:
//
// - The `process_chain_segment` function returns `InvalidSignature`.
// - The `process_block` function returns `InvalidSignature` when importing the
// `SignedBeaconBlock` directly.
// - The `verify_block_for_gossip` function does _not_ return an error.
// - The `process_block` function returns `InvalidSignature` when verifying the
// GossipVerifiedBlock.
let assert_invalid_signature = |snapshots: &[BeaconSnapshot<E>], item: &str| {
let blocks = snapshots
.iter()
.map(|snapshot| snapshot.beacon_block.clone())
.collect();
// Ensure the block will be rejected if imported in a chain segment.
assert_eq!(
harness.chain.process_chain_segment(blocks),
Err(BlockError::InvalidSignature),
"should not import chain segment with an invalid {} signature",
item
);
// Ensure the block will be rejected if imported on its own (without gossip checking).
assert_eq!(
harness
.chain
.process_block(snapshots[block_index].beacon_block.clone()),
Err(BlockError::InvalidSignature),
"should not import individual block with an invalid {} signature",
item
);
let gossip_verified = harness
.chain
.verify_block_for_gossip(snapshots[block_index].beacon_block.clone())
.expect("should obtain gossip verified block");
assert_eq!(
harness.chain.process_block(gossip_verified),
Err(BlockError::InvalidSignature),
"should not import gossip verified block with an invalid {} signature",
item
);
};
/*
* Block proposal
*/
let mut snapshots = CHAIN_SEGMENT.clone();
snapshots[block_index].beacon_block.signature = junk_signature();
let blocks = snapshots
.iter()
.map(|snapshot| snapshot.beacon_block.clone())
.collect();
// Ensure the block will be rejected if imported in a chain segment.
assert_eq!(
harness.chain.process_chain_segment(blocks),
Err(BlockError::InvalidSignature),
"should not import chain segment with an invalid gossip signature",
);
// Ensure the block will be rejected if imported on its own (without gossip checking).
assert_eq!(
harness
.chain
.process_block(snapshots[block_index].beacon_block.clone()),
Err(BlockError::InvalidSignature),
"should not import individual block with an invalid gossip signature",
);
/*
* Randao reveal
*/
let mut snapshots = CHAIN_SEGMENT.clone();
snapshots[block_index]
.beacon_block
.message
.body
.randao_reveal = junk_signature();
update_parent_roots(&mut snapshots);
update_proposal_signatures(&mut snapshots, &harness);
assert_invalid_signature(&snapshots, "randao");
/*
* Proposer slashing
*/
let mut snapshots = CHAIN_SEGMENT.clone();
let proposer_slashing = ProposerSlashing {
proposer_index: 0,
signed_header_1: SignedBeaconBlockHeader {
message: snapshots[block_index].beacon_block.message.block_header(),
signature: junk_signature(),
},
signed_header_2: SignedBeaconBlockHeader {
message: snapshots[block_index].beacon_block.message.block_header(),
signature: junk_signature(),
},
};
snapshots[block_index]
.beacon_block
.message
.body
.proposer_slashings
.push(proposer_slashing)
.expect("should update proposer slashing");
update_parent_roots(&mut snapshots);
update_proposal_signatures(&mut snapshots, &harness);
assert_invalid_signature(&snapshots, "proposer slashing");
/*
* Attester slashing
*/
let mut snapshots = CHAIN_SEGMENT.clone();
let indexed_attestation = IndexedAttestation {
attesting_indices: vec![0].into(),
data: AttestationData {
slot: Slot::new(0),
index: 0,
beacon_block_root: Hash256::zero(),
source: Checkpoint {
epoch: Epoch::new(0),
root: Hash256::zero(),
},
target: Checkpoint {
epoch: Epoch::new(0),
root: Hash256::zero(),
},
},
signature: junk_aggregate_signature(),
};
let attester_slashing = AttesterSlashing {
attestation_1: indexed_attestation.clone(),
attestation_2: indexed_attestation,
};
snapshots[block_index]
.beacon_block
.message
.body
.attester_slashings
.push(attester_slashing)
.expect("should update attester slashing");
update_parent_roots(&mut snapshots);
update_proposal_signatures(&mut snapshots, &harness);
assert_invalid_signature(&snapshots, "attester slashing");
/*
* Attestation
*/
let mut snapshots = CHAIN_SEGMENT.clone();
if let Some(attestation) = snapshots[block_index]
.beacon_block
.message
.body
.attestations
.get_mut(0)
{
attestation.signature = junk_aggregate_signature();
update_parent_roots(&mut snapshots);
update_proposal_signatures(&mut snapshots, &harness);
assert_invalid_signature(&snapshots, "attestation");
checked_attestation = true;
}
/*
* Deposit
*
* Note: an invalid deposit signature is permitted!
*/
let mut snapshots = CHAIN_SEGMENT.clone();
let deposit = Deposit {
proof: vec![Hash256::zero(); DEPOSIT_TREE_DEPTH + 1].into(),
data: DepositData {
pubkey: Keypair::random().pk.into(),
withdrawal_credentials: Hash256::zero(),
amount: 0,
signature: junk_signature().into(),
},
};
snapshots[block_index]
.beacon_block
.message
.body
.deposits
.push(deposit)
.expect("should update deposit");
update_parent_roots(&mut snapshots);
update_proposal_signatures(&mut snapshots, &harness);
let blocks = snapshots
.iter()
.map(|snapshot| snapshot.beacon_block.clone())
.collect();
assert!(
harness.chain.process_chain_segment(blocks) != Err(BlockError::InvalidSignature),
"should not throw an invalid signature error for a bad deposit signature"
);
/*
* Voluntary exit
*/
let mut snapshots = CHAIN_SEGMENT.clone();
let epoch = snapshots[block_index].beacon_state.current_epoch();
snapshots[block_index]
.beacon_block
.message
.body
.voluntary_exits
.push(SignedVoluntaryExit {
message: VoluntaryExit {
epoch,
validator_index: 0,
},
signature: junk_signature(),
})
.expect("should update voluntary exit")
update_parent_roots(&mut snapshots);
update_proposal_signatures(&mut snapshots, &harness);
assert_invalid_signature(&snapshots, "voluntary exit");
}
assert!(
checked_attestation,
"the test should check an attestation signature"
)
}
fn unwrap_err<T, E>(result: Result<T, E>) -> E {
match result {
Ok(_) => panic!("called unwrap_err on Ok"),
Err(e) => e,
}
}
#[test]
fn gossip_verification() {
let harness = get_harness(VALIDATOR_COUNT);
let block_index = CHAIN_SEGMENT_LENGTH - 2;
harness
.chain
.slot_clock
.set_slot(CHAIN_SEGMENT[block_index].beacon_block.slot().as_u64());
// Import the ancestors prior to the block we're testing.
for snapshot in &CHAIN_SEGMENT[0..block_index] {
let gossip_verified = harness
.chain
.verify_block_for_gossip(snapshot.beacon_block.clone())
.expect("should obtain gossip verified block");
harness
.chain
.process_block(gossip_verified)
.expect("should import valid gossip verified block");
}
/*
* Block with invalid signature
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
block.signature = junk_signature();
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block)),
BlockError::ProposalSignatureInvalid,
"should not import a block with an invalid proposal signature"
);
/*
* Block from a future slot.
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
let block_slot = block.message.slot + 1;
block.message.slot = block_slot;
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block)),
BlockError::FutureSlot {
present_slot: block_slot - 1,
block_slot
},
"should not import a block with a future slot"
);
/*
* Block from a finalized slot.
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
let finalized_slot = harness
.chain
.head_info()
.expect("should get head info")
.finalized_checkpoint
.epoch
.start_slot(E::slots_per_epoch());
block.message.slot = finalized_slot;
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block)),
BlockError::WouldRevertFinalizedSlot {
block_slot: finalized_slot,
finalized_slot
},
"should not import a block with a finalized slot"
);
}
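The `gossip_verification` test above exercises two slot-window rules: a block from the future is rejected with `FutureSlot`, and a block at or before the finalized slot is rejected with `WouldRevertFinalizedSlot`. A pure-function sketch of those checks (slots as plain `u64`; the error names mirror the test's `BlockError` variants but the cutoffs here are an assumption drawn only from what the test asserts):

```rust
#[derive(Debug, PartialEq)]
enum SlotCheckError {
    FutureSlot { present_slot: u64, block_slot: u64 },
    WouldRevertFinalizedSlot { block_slot: u64, finalized_slot: u64 },
}

// Reject blocks outside the (finalized_slot, present_slot] window.
fn check_block_slot(
    block_slot: u64,
    present_slot: u64,
    finalized_slot: u64,
) -> Result<(), SlotCheckError> {
    if block_slot > present_slot {
        return Err(SlotCheckError::FutureSlot { present_slot, block_slot });
    }
    if block_slot <= finalized_slot {
        return Err(SlotCheckError::WouldRevertFinalizedSlot { block_slot, finalized_slot });
    }
    Ok(())
}

fn main() {
    // Current slot 100, finalized slot 64.
    assert_eq!(check_block_slot(90, 100, 64), Ok(()));
    assert_eq!(
        check_block_slot(101, 100, 64),
        Err(SlotCheckError::FutureSlot { present_slot: 100, block_slot: 101 })
    );
    assert_eq!(
        check_block_slot(64, 100, 64),
        Err(SlotCheckError::WouldRevertFinalizedSlot { block_slot: 64, finalized_slot: 64 })
    );
}
```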


@ -3,13 +3,10 @@
#[macro_use]
extern crate lazy_static;
- use beacon_chain::AttestationProcessingOutcome;
- use beacon_chain::{
- test_utils::{
+ use beacon_chain::test_utils::{
AttestationStrategy, BeaconChainHarness, BlockStrategy, HarnessType, OP_POOL_DB_KEY,
- },
- BlockProcessingOutcome,
};
+ use beacon_chain::AttestationProcessingOutcome;
use operation_pool::PersistedOperationPool;
use state_processing::{
per_slot_processing, per_slot_processing::Error as SlotProcessingError, EpochProcessingError,
@ -562,15 +559,13 @@ fn run_skip_slot_test(skip_slots: u64) {
.head()
.expect("should get head")
.beacon_block
- .clone()
+ .clone(),
),
- Ok(BlockProcessingOutcome::Processed {
- block_root: harness_a
+ Ok(harness_a
.chain
.head()
.expect("should get head")
- .beacon_block_root
- })
+ .beacon_block_root)
);
harness_b

@ -1,6 +1,6 @@
[package]
name = "client"
- version = "0.1.0"
+ version = "0.2.0"
authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"
@ -12,6 +12,7 @@ toml = "^0.5"
beacon_chain = { path = "../beacon_chain" }
store = { path = "../store" }
network = { path = "../network" }
+ timer = { path = "../timer" }
eth2-libp2p = { path = "../eth2-libp2p" }
rest_api = { path = "../rest_api" }
parking_lot = "0.9.0"
@ -29,7 +30,6 @@ slog = { version = "2.5.2", features = ["max_level_trace"] }
slog-async = "2.3.0"
tokio = "0.1.22"
dirs = "2.0.2"
- exit-future = "0.1.4"
futures = "0.1.29"
reqwest = "0.9.22"
url = "2.1.0"
@ -38,3 +38,5 @@ genesis = { path = "../genesis" }
environment = { path = "../../lighthouse/environment" }
lighthouse_bootstrap = { path = "../../eth2/utils/lighthouse_bootstrap" }
eth2_ssz = { path = "../../eth2/utils/ssz" }
+ lazy_static = "1.4.0"
+ lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics" }


@ -14,13 +14,13 @@ use beacon_chain::{
use environment::RuntimeContext;
use eth1::{Config as Eth1Config, Service as Eth1Service};
use eth2_config::Eth2Config;
- use exit_future::Signal;
+ use eth2_libp2p::NetworkGlobals;
use futures::{future, Future, IntoFuture};
use genesis::{
generate_deterministic_keypairs, interop_genesis_state, state_from_ssz_file, Eth1GenesisService,
};
use lighthouse_bootstrap::Bootstrapper;
- use network::{NetworkConfig, NetworkMessage, Service as NetworkService};
+ use network::{NetworkConfig, NetworkMessage, NetworkService};
use slog::info;
use ssz::Decode;
use std::net::SocketAddr;
@ -56,10 +56,10 @@ pub struct ClientBuilder<T: BeaconChainTypes> {
beacon_chain_builder: Option<BeaconChainBuilder<T>>,
beacon_chain: Option<Arc<BeaconChain<T>>>,
eth1_service: Option<Eth1Service>,
- exit_signals: Vec<Signal>,
+ exit_channels: Vec<tokio::sync::oneshot::Sender<()>>,
event_handler: Option<T::EventHandler>,
- libp2p_network: Option<Arc<NetworkService<T>>>,
- libp2p_network_send: Option<UnboundedSender<NetworkMessage>>,
+ network_globals: Option<Arc<NetworkGlobals<T::EthSpec>>>,
+ network_send: Option<UnboundedSender<NetworkMessage<T::EthSpec>>>,
http_listen_addr: Option<SocketAddr>,
websocket_listen_addr: Option<SocketAddr>,
eth_spec_instance: T::EthSpec,
@ -90,10 +90,10 @@ where
beacon_chain_builder: None,
beacon_chain: None,
eth1_service: None,
- exit_signals: vec![],
+ exit_channels: vec![],
event_handler: None,
- libp2p_network: None,
- libp2p_network_send: None,
+ network_globals: None,
+ network_send: None,
http_listen_addr: None,
websocket_listen_addr: None,
eth_spec_instance,
@ -249,24 +249,55 @@ where
})
}
- /// Immediately starts the libp2p networking stack.
- pub fn libp2p_network(mut self, config: &NetworkConfig) -> Result<Self, String> {
+ /// Immediately starts the networking stack.
+ pub fn network(mut self, config: &NetworkConfig) -> Result<Self, String> {
let beacon_chain = self
.beacon_chain
.clone()
- .ok_or_else(|| "libp2p_network requires a beacon chain")?;
+ .ok_or_else(|| "network requires a beacon chain")?;
let context = self
.runtime_context
.as_ref()
- .ok_or_else(|| "libp2p_network requires a runtime_context")?
+ .ok_or_else(|| "network requires a runtime_context")?
.service_context("network".into());
- let (network, network_send) =
- NetworkService::new(beacon_chain, config, &context.executor, context.log)
- .map_err(|e| format!("Failed to start libp2p network: {:?}", e))?;
+ let (network_globals, network_send, network_exit) =
+ NetworkService::start(beacon_chain, config, &context.executor, context.log)
+ .map_err(|e| format!("Failed to start network: {:?}", e))?;
- self.libp2p_network = Some(network);
- self.libp2p_network_send = Some(network_send);
+ self.network_globals = Some(network_globals);
+ self.network_send = Some(network_send);
+ self.exit_channels.push(network_exit);
+ Ok(self)
+ }
+ /// Immediately starts the timer service.
+ fn timer(mut self) -> Result<Self, String> {
+ let context = self
+ .runtime_context
+ .as_ref()
+ .ok_or_else(|| "node timer requires a runtime_context")?
+ .service_context("node_timer".into());
+ let beacon_chain = self
+ .beacon_chain
+ .clone()
+ .ok_or_else(|| "node timer requires a beacon chain")?;
+ let milliseconds_per_slot = self
+ .chain_spec
+ .as_ref()
+ .ok_or_else(|| "node timer requires a chain spec".to_string())?
+ .milliseconds_per_slot;
+ let timer_exit = timer::spawn(
+ &context.executor,
+ beacon_chain,
+ milliseconds_per_slot,
+ context.log,
+ )
+ .map_err(|e| format!("Unable to start node timer: {}", e))?;
+ self.exit_channels.push(timer_exit);
Ok(self)
}
@ -286,21 +317,21 @@ where
.as_ref()
.ok_or_else(|| "http_server requires a runtime_context")?
.service_context("http".into());
- let network = self
- .libp2p_network
+ let network_globals = self
+ .network_globals
.clone()
.ok_or_else(|| "http_server requires a libp2p network")?;
let network_send = self
- .libp2p_network_send
+ .network_send
.clone()
.ok_or_else(|| "http_server requires a libp2p network sender")?;
let network_info = rest_api::NetworkInfo {
- network_service: network,
+ network_globals,
network_chan: network_send,
};
- let (exit_signal, listening_addr) = rest_api::start_server(
+ let (exit_channel, listening_addr) = rest_api::start_server(
&client_config.rest_api,
&context.executor,
beacon_chain,
@ -316,7 +347,7 @@ where
)
.map_err(|e| format!("Failed to start HTTP API: {:?}", e))?;
- self.exit_signals.push(exit_signal);
+ self.exit_channels.push(exit_channel);
self.http_listen_addr = Some(listening_addr);
Ok(self)
@ -333,8 +364,8 @@ where
.beacon_chain
.clone()
.ok_or_else(|| "slot_notifier requires a beacon chain")?;
- let network = self
- .libp2p_network
+ let network_globals = self
+ .network_globals
.clone()
.ok_or_else(|| "slot_notifier requires a libp2p network")?;
let milliseconds_per_slot = self
@ -343,10 +374,15 @@ where
.ok_or_else(|| "slot_notifier requires a chain spec".to_string())?
.milliseconds_per_slot;
- let exit_signal = spawn_notifier(context, beacon_chain, network, milliseconds_per_slot)
+ let exit_channel = spawn_notifier(
+ context,
+ beacon_chain,
+ network_globals,
+ milliseconds_per_slot,
+ )
.map_err(|e| format!("Unable to start slot notifier: {}", e))?;
- self.exit_signals.push(exit_signal);
+ self.exit_channels.push(exit_channel);
Ok(self)
}
@ -361,10 +397,10 @@ where
{
Client {
beacon_chain: self.beacon_chain,
- libp2p_network: self.libp2p_network,
+ network_globals: self.network_globals,
http_listen_addr: self.http_listen_addr,
websocket_listen_addr: self.websocket_listen_addr,
- _exit_signals: self.exit_signals,
+ _exit_channels: self.exit_channels,
}
}
}
@ -404,7 +440,8 @@ where
self.beacon_chain_builder = None;
self.event_handler = None;
- Ok(self)
+ // a beacon chain requires a timer
+ self.timer()
}
}
@ -434,7 +471,7 @@ where
.ok_or_else(|| "websocket_event_handler requires a runtime_context")?
.service_context("ws".into());
- let (sender, exit_signal, listening_addr): (
+ let (sender, exit_channel, listening_addr): (
WebSocketSender<TEthSpec>,
Option<_>,
Option<_>,
@ -446,8 +483,8 @@ where
(WebSocketSender::dummy(), None, None)
};
- if let Some(signal) = exit_signal {
- self.exit_signals.push(signal);
+ if let Some(channel) = exit_channel {
+ self.exit_channels.push(channel);
}
self.event_handler = Some(sender);
self.websocket_listen_addr = listening_addr;
@ -648,8 +685,8 @@ where
self.eth1_service = None;
let exit = {
- let (tx, rx) = exit_future::signal();
+ let (tx, rx) = tokio::sync::oneshot::channel();
- self.exit_signals.push(tx);
+ self.exit_channels.push(tx);
rx
};
@ -711,7 +748,7 @@ where
.ok_or_else(|| "system_time_slot_clock requires a beacon_chain_builder")?;
let genesis_time = beacon_chain_builder
- .finalized_checkpoint
+ .finalized_snapshot
.as_ref()
.ok_or_else(|| "system_time_slot_clock requires an initialized beacon state")?
.beacon_state


@ -1,15 +1,14 @@
extern crate slog;
pub mod config;
+ mod metrics;
mod notifier;
pub mod builder;
pub mod error;
use beacon_chain::BeaconChain;
- use eth2_libp2p::{Enr, Multiaddr};
+ use eth2_libp2p::{Enr, Multiaddr, NetworkGlobals};
- use exit_future::Signal;
- use network::Service as NetworkService;
use std::net::SocketAddr;
use std::sync::Arc;
@ -23,11 +22,11 @@ pub use eth2_config::Eth2Config;
/// Holds references to running services, cleanly shutting them down when dropped.
pub struct Client<T: BeaconChainTypes> {
beacon_chain: Option<Arc<BeaconChain<T>>>,
- libp2p_network: Option<Arc<NetworkService<T>>>,
+ network_globals: Option<Arc<NetworkGlobals<T::EthSpec>>>,
http_listen_addr: Option<SocketAddr>,
websocket_listen_addr: Option<SocketAddr>,
- /// Exit signals will "fire" when dropped, causing each service to exit gracefully.
- _exit_signals: Vec<Signal>,
+ /// Exit channels will complete/error when dropped, causing each service to exit gracefully.
+ _exit_channels: Vec<tokio::sync::oneshot::Sender<()>>,
}
impl<T: BeaconChainTypes> Client<T> {
@ -48,16 +47,16 @@ impl<T: BeaconChainTypes> Client<T> {
/// Returns the port of the client's libp2p stack, if it was started.
pub fn libp2p_listen_port(&self) -> Option<u16> {
- self.libp2p_network.as_ref().map(|n| n.listen_port())
+ self.network_globals.as_ref().map(|n| n.listen_port_tcp())
}
/// Returns the list of libp2p addresses the client is listening to.
pub fn libp2p_listen_addresses(&self) -> Option<Vec<Multiaddr>> {
- self.libp2p_network.as_ref().map(|n| n.listen_multiaddrs())
+ self.network_globals.as_ref().map(|n| n.listen_multiaddrs())
}
/// Returns the local libp2p ENR of this node, for network discovery.
pub fn enr(&self) -> Option<Enr> {
- self.libp2p_network.as_ref()?.local_enr()
+ self.network_globals.as_ref()?.local_enr()
}
}

View File

@ -0,0 +1,9 @@
+use lazy_static::lazy_static;
+pub use lighthouse_metrics::*;
+lazy_static! {
+    pub static ref SYNC_SLOTS_PER_SECOND: Result<IntGauge> = try_create_int_gauge(
+        "sync_slots_per_second",
+        "The number of blocks being imported per second"
+    );
+}

View File

@ -1,8 +1,8 @@
+use crate::metrics;
 use beacon_chain::{BeaconChain, BeaconChainTypes};
 use environment::RuntimeContext;
-use exit_future::Signal;
-use network::Service as NetworkService;
+use eth2_libp2p::NetworkGlobals;
 use futures::{Future, Stream};
 use parking_lot::Mutex;
 use slog::{debug, error, info, warn};
 use slot_clock::SlotClock;
@ -29,9 +29,9 @@ const SPEEDO_OBSERVATIONS: usize = 4;
 pub fn spawn_notifier<T: BeaconChainTypes>(
     context: RuntimeContext<T::EthSpec>,
     beacon_chain: Arc<BeaconChain<T>>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
     milliseconds_per_slot: u64,
-) -> Result<Signal, String> {
+) -> Result<tokio::sync::oneshot::Sender<()>, String> {
     let log_1 = context.log.clone();
     let log_2 = context.log.clone();
     let log_3 = context.log.clone();
@ -83,6 +83,8 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
         let mut speedo = speedo.lock();
         speedo.observe(head_slot, Instant::now());
+        metrics::set_gauge(&metrics::SYNC_SLOTS_PER_SECOND, speedo.slots_per_second().unwrap_or_else(|| 0_f64) as i64);
         // The next two lines take advantage of saturating subtraction on `Slot`.
         let head_distance = current_slot - head_slot;
@ -164,10 +166,11 @@ pub fn spawn_notifier<T: BeaconChainTypes>(
         Ok(())
     });
-    let (exit_signal, exit) = exit_future::signal();
+    let (exit_signal, exit) = tokio::sync::oneshot::channel();
     context
         .executor
-        .spawn(exit.until(interval_future).map(|_| ()));
+        .spawn(interval_future.select(exit).map(|_| ()).map_err(|_| ()));
     Ok(exit_signal)
 }
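The exit-channel migration above replaces `exit_future::Signal` with a `tokio::sync::oneshot::Sender<()>`: the sender is held by the `Client`, and when it is dropped the paired receiver resolves with an error, so the `select` on the service's main future tears the task down. A std-only sketch of the same drop-to-shutdown idea (the `spawn_service` helper and thread-based loop are illustrative, not part of the diff):

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: the service loops until the exit channel's sender is dropped,
// mirroring how dropping `_exit_channels` in `Client` stops each service.
fn spawn_service() -> (mpsc::Sender<()>, thread::JoinHandle<u32>) {
    let (exit_tx, exit_rx) = mpsc::channel::<()>();
    let handle = thread::spawn(move || {
        let mut ticks = 0u32;
        // `TryRecvError::Disconnected` fires once the sender is dropped.
        while exit_rx.try_recv() != Err(mpsc::TryRecvError::Disconnected) {
            ticks += 1;
            if ticks > 3 {
                break; // bounded so the sketch always terminates
            }
        }
        ticks
    });
    (exit_tx, handle)
}
```

Dropping the returned sender (as `Client`'s `Drop` implicitly does for every entry of `_exit_channels`) is the only shutdown signal the service needs.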

View File

@ -1,6 +1,6 @@
[package] [package]
name = "eth1" name = "eth1"
version = "0.1.0" version = "0.2.0"
authors = ["Paul Hauner <paul@paulhauner.com>"] authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018" edition = "2018"
@ -26,7 +26,6 @@ parking_lot = "0.7"
slog = "^2.2.3" slog = "^2.2.3"
tokio = "0.1.22" tokio = "0.1.22"
state_processing = { path = "../../eth2/state_processing" } state_processing = { path = "../../eth2/state_processing" }
exit-future = "0.1.4"
libflate = "0.1" libflate = "0.1"
lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics"} lighthouse_metrics = { path = "../../eth2/utils/lighthouse_metrics"}
lazy_static = "1.4.0" lazy_static = "1.4.0"

View File

@ -6,7 +6,6 @@ use crate::{
     inner::{DepositUpdater, Inner},
     DepositLog,
 };
-use exit_future::Exit;
 use futures::{
     future::{loop_fn, Loop},
     stream, Future, Stream,
@ -314,7 +313,10 @@ impl Service {
     /// - Err(_) if there is an error.
     ///
     /// Emits logs for debugging and errors.
-    pub fn auto_update(&self, exit: Exit) -> impl Future<Item = (), Error = ()> {
+    pub fn auto_update(
+        &self,
+        exit: tokio::sync::oneshot::Receiver<()>,
+    ) -> impl Future<Item = (), Error = ()> {
         let service = self.clone();
         let log = self.log.clone();
         let update_interval = Duration::from_millis(self.config().auto_update_interval_millis);
@ -360,7 +362,7 @@ impl Service {
             })
         });
-        exit.until(loop_future).map(|_: Option<()>| ())
+        loop_future.select(exit).map(|_| ()).map_err(|_| ())
     }
     /// Contacts the remote eth1 node and attempts to import deposit logs up to the configured

View File

@ -1,6 +1,6 @@
 [package]
 name = "eth2-libp2p"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Age Manning <Age@AgeManning.com>"]
 edition = "2018"

View File

@ -1,6 +1,6 @@
 use crate::discovery::Discovery;
 use crate::rpc::{RPCEvent, RPCMessage, RPC};
-use crate::{error, GossipTopic, NetworkConfig, NetworkGlobals, Topic, TopicHash};
+use crate::{error, GossipTopic, NetworkConfig, NetworkGlobals, PubsubMessage, TopicHash};
 use enr::Enr;
 use futures::prelude::*;
 use libp2p::{
@ -8,16 +8,14 @@ use libp2p::{
     discv5::Discv5Event,
     gossipsub::{Gossipsub, GossipsubEvent, MessageId},
     identify::{Identify, IdentifyEvent},
-    ping::{Ping, PingConfig, PingEvent},
     swarm::{NetworkBehaviourAction, NetworkBehaviourEventProcess},
     tokio_io::{AsyncRead, AsyncWrite},
     NetworkBehaviour, PeerId,
 };
 use lru::LruCache;
-use slog::{debug, o};
-use std::num::NonZeroU32;
+use slog::{debug, o, warn};
 use std::sync::Arc;
-use std::time::Duration;
+use types::EthSpec;
 const MAX_IDENTIFY_ADDRESSES: usize = 20;
@ -25,48 +23,43 @@
 /// This core behaviour is managed by `Behaviour` which adds peer management to all core
 /// behaviours.
 #[derive(NetworkBehaviour)]
-#[behaviour(out_event = "BehaviourEvent", poll_method = "poll")]
-pub struct Behaviour<TSubstream: AsyncRead + AsyncWrite> {
+#[behaviour(out_event = "BehaviourEvent<TSpec>", poll_method = "poll")]
+pub struct Behaviour<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec> {
     /// The routing pub-sub mechanism for eth2.
     gossipsub: Gossipsub<TSubstream>,
     /// The Eth2 RPC specified in the wire-0 protocol.
-    eth2_rpc: RPC<TSubstream>,
+    eth2_rpc: RPC<TSubstream, TSpec>,
     /// Keep regular connection to peers and disconnect if absent.
-    // TODO: Remove Libp2p ping in favour of discv5 ping.
-    ping: Ping<TSubstream>,
     // TODO: Using id for initial interop. This will be removed by mainnet.
     /// Provides IP addresses and peer information.
     identify: Identify<TSubstream>,
     /// Discovery behaviour.
-    discovery: Discovery<TSubstream>,
+    discovery: Discovery<TSubstream, TSpec>,
     /// The events generated by this behaviour to be consumed in the swarm poll.
     #[behaviour(ignore)]
-    events: Vec<BehaviourEvent>,
+    events: Vec<BehaviourEvent<TSpec>>,
     /// A cache of recently seen gossip messages. This is used to filter out any possible
     /// duplicates that may still be seen over gossipsub.
     #[behaviour(ignore)]
     seen_gossip_messages: LruCache<MessageId, ()>,
+    /// A collections of variables accessible outside the network service.
+    #[behaviour(ignore)]
+    network_globals: Arc<NetworkGlobals<TSpec>>,
     #[behaviour(ignore)]
     /// Logger for behaviour actions.
     log: slog::Logger,
 }
-impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec> Behaviour<TSubstream, TSpec> {
     pub fn new(
         local_key: &Keypair,
         net_conf: &NetworkConfig,
-        network_globals: Arc<NetworkGlobals>,
+        network_globals: Arc<NetworkGlobals<TSpec>>,
         log: &slog::Logger,
     ) -> error::Result<Self> {
         let local_peer_id = local_key.public().into_peer_id();
         let behaviour_log = log.new(o!());
-        let ping_config = PingConfig::new()
-            .with_timeout(Duration::from_secs(30))
-            .with_interval(Duration::from_secs(20))
-            .with_max_failures(NonZeroU32::new(2).expect("2 != 0"))
-            .with_keep_alive(false);
         let identify = Identify::new(
             "lighthouse/libp2p".into(),
             version::version(),
@ -76,16 +69,16 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
         Ok(Behaviour {
             eth2_rpc: RPC::new(log.clone()),
             gossipsub: Gossipsub::new(local_peer_id, net_conf.gs_config.clone()),
-            discovery: Discovery::new(local_key, net_conf, network_globals, log)?,
-            ping: Ping::new(ping_config),
+            discovery: Discovery::new(local_key, net_conf, network_globals.clone(), log)?,
             identify,
             events: Vec::new(),
             seen_gossip_messages: LruCache::new(100_000),
+            network_globals,
             log: behaviour_log,
         })
     }
-    pub fn discovery(&self) -> &Discovery<TSubstream> {
+    pub fn discovery(&self) -> &Discovery<TSubstream, TSpec> {
         &self.discovery
     }
@ -95,17 +88,20 @@
 // Implement the NetworkBehaviourEventProcess trait so that we can derive NetworkBehaviour for Behaviour
-impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<GossipsubEvent>
-    for Behaviour<TSubstream>
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec>
+    NetworkBehaviourEventProcess<GossipsubEvent> for Behaviour<TSubstream, TSpec>
 {
     fn inject_event(&mut self, event: GossipsubEvent) {
         match event {
             GossipsubEvent::Message(propagation_source, id, gs_msg) => {
-                let msg = PubsubMessage::from_topics(&gs_msg.topics, gs_msg.data);
                 // Note: We are keeping track here of the peer that sent us the message, not the
                 // peer that originally published the message.
                 if self.seen_gossip_messages.put(id.clone(), ()).is_none() {
+                    match PubsubMessage::decode(&gs_msg.topics, &gs_msg.data) {
+                        Err(e) => {
+                            debug!(self.log, "Could not decode gossipsub message"; "error" => format!("{}", e))
+                        }
+                        Ok(msg) => {
                     // if this message isn't a duplicate, notify the network
                     self.events.push(BehaviourEvent::GossipMessage {
                         id,
@ -113,8 +109,10 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<GossipsubE
                         topics: gs_msg.topics,
                         message: msg,
                     });
+                        }
+                    }
                 } else {
-                    debug!(self.log, "A duplicate message was received"; "message" => format!("{:?}", msg));
+                    warn!(self.log, "A duplicate gossipsub message was received"; "message" => format!("{:?}", gs_msg));
                 }
             }
             GossipsubEvent::Subscribed { peer_id, topic } => {
@ -126,10 +124,10 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<GossipsubE
         }
     }
 }
-impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<RPCMessage>
-    for Behaviour<TSubstream>
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec>
+    NetworkBehaviourEventProcess<RPCMessage<TSpec>> for Behaviour<TSubstream, TSpec>
 {
-    fn inject_event(&mut self, event: RPCMessage) {
+    fn inject_event(&mut self, event: RPCMessage<TSpec>) {
         match event {
             RPCMessage::PeerDialed(peer_id) => {
                 self.events.push(BehaviourEvent::PeerDialed(peer_id))
@ -144,19 +142,11 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<RPCMessage
         }
     }
 }
-impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<PingEvent>
-    for Behaviour<TSubstream>
-{
-    fn inject_event(&mut self, _event: PingEvent) {
-        // not interested in ping responses at the moment.
-    }
-}
-impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec> Behaviour<TSubstream, TSpec> {
     /// Consumes the events list when polled.
     fn poll<TBehaviourIn>(
         &mut self,
-    ) -> Async<NetworkBehaviourAction<TBehaviourIn, BehaviourEvent>> {
+    ) -> Async<NetworkBehaviourAction<TBehaviourIn, BehaviourEvent<TSpec>>> {
         if !self.events.is_empty() {
             return Async::Ready(NetworkBehaviourAction::GenerateEvent(self.events.remove(0)));
         }
@ -165,8 +155,8 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
     }
 }
-impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<IdentifyEvent>
-    for Behaviour<TSubstream>
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec> NetworkBehaviourEventProcess<IdentifyEvent>
+    for Behaviour<TSubstream, TSpec>
 {
     fn inject_event(&mut self, event: IdentifyEvent) {
         match event {
@ -196,8 +186,8 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<IdentifyEv
     }
 }
-impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<Discv5Event>
-    for Behaviour<TSubstream>
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec> NetworkBehaviourEventProcess<Discv5Event>
+    for Behaviour<TSubstream, TSpec>
 {
     fn inject_event(&mut self, _event: Discv5Event) {
         // discv5 has no events to inject
@ -205,24 +195,49 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<Discv5Even
 }
 /// Implements the combined behaviour for the libp2p service.
-impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
+impl<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec> Behaviour<TSubstream, TSpec> {
     /* Pubsub behaviour functions */
     /// Subscribes to a gossipsub topic.
-    pub fn subscribe(&mut self, topic: Topic) -> bool {
-        self.gossipsub.subscribe(topic)
+    pub fn subscribe(&mut self, topic: GossipTopic) -> bool {
+        if !self
+            .network_globals
+            .gossipsub_subscriptions
+            .read()
+            .contains(&topic)
+        {
+            self.network_globals
+                .gossipsub_subscriptions
+                .write()
+                .push(topic.clone());
+        }
+        self.gossipsub.subscribe(topic.into())
     }
     /// Unsubscribe from a gossipsub topic.
-    pub fn unsubscribe(&mut self, topic: Topic) -> bool {
-        self.gossipsub.unsubscribe(topic)
+    pub fn unsubscribe(&mut self, topic: GossipTopic) -> bool {
+        let pos = self
+            .network_globals
+            .gossipsub_subscriptions
+            .read()
+            .iter()
+            .position(|s| s == &topic);
+        if let Some(pos) = pos {
+            self.network_globals
+                .gossipsub_subscriptions
+                .write()
+                .swap_remove(pos);
+        }
+        self.gossipsub.unsubscribe(topic.into())
     }
-    /// Publishes a message on the pubsub (gossipsub) behaviour.
-    pub fn publish(&mut self, topics: &[Topic], message: PubsubMessage) {
-        let message_data = message.into_data();
-        for topic in topics {
-            self.gossipsub.publish(topic, message_data.clone());
+    /// Publishes a list of messages on the pubsub (gossipsub) behaviour, choosing the encoding.
+    pub fn publish(&mut self, messages: Vec<PubsubMessage<TSpec>>) {
+        for message in messages {
+            for topic in message.topics() {
+                let message_data = message.encode();
+                self.gossipsub.publish(&topic.into(), message_data);
+            }
         }
     }
@ -236,14 +251,15 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
     /* Eth2 RPC behaviour functions */
     /// Sends an RPC Request/Response via the RPC protocol.
-    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
+    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent<TSpec>) {
         self.eth2_rpc.send_rpc(peer_id, rpc_event);
     }
     /* Discovery / Peer management functions */
-    /// Return the list of currently connected peers.
+    /// The current number of connected libp2p peers.
     pub fn connected_peers(&self) -> usize {
-        self.discovery.connected_peers()
+        self.network_globals.connected_peers()
     }
     /// Notify discovery that the peer has been banned.
@ -268,9 +284,9 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
 }
 /// The types of events than can be obtained from polling the behaviour.
-pub enum BehaviourEvent {
+pub enum BehaviourEvent<TSpec: EthSpec> {
     /// A received RPC event and the peer that it was received from.
-    RPC(PeerId, RPCEvent),
+    RPC(PeerId, RPCEvent<TSpec>),
     /// We have completed an initial connection to a new peer.
     PeerDialed(PeerId),
     /// A peer has disconnected.
@ -284,60 +300,8 @@ pub enum BehaviourEvent {
     /// The topics that this message was sent on.
     topics: Vec<TopicHash>,
     /// The message itself.
-    message: PubsubMessage,
+    message: PubsubMessage<TSpec>,
     },
     /// Subscribed to peer for given topic
     PeerSubscribed(PeerId, TopicHash),
 }
-/// Messages that are passed to and from the pubsub (Gossipsub) behaviour. These are encoded and
-/// decoded upstream.
-#[derive(Debug, Clone, PartialEq, Eq, Hash)]
-pub enum PubsubMessage {
-    /// Gossipsub message providing notification of a new block.
-    Block(Vec<u8>),
-    /// Gossipsub message providing notification of a new attestation.
-    Attestation(Vec<u8>),
-    /// Gossipsub message providing notification of a voluntary exit.
-    VoluntaryExit(Vec<u8>),
-    /// Gossipsub message providing notification of a new proposer slashing.
-    ProposerSlashing(Vec<u8>),
-    /// Gossipsub message providing notification of a new attester slashing.
-    AttesterSlashing(Vec<u8>),
-    /// Gossipsub message from an unknown topic.
-    Unknown(Vec<u8>),
-}
-impl PubsubMessage {
-    /* Note: This is assuming we are not hashing topics. If we choose to hash topics, these will
-     * need to be modified.
-     *
-     * Also note that a message can be associated with many topics. As soon as one of the topics is
-     * known we match. If none of the topics are known we return an unknown state.
-     */
-    fn from_topics(topics: &[TopicHash], data: Vec<u8>) -> Self {
-        for topic in topics {
-            match GossipTopic::from(topic.as_str()) {
-                GossipTopic::BeaconBlock => return PubsubMessage::Block(data),
-                GossipTopic::BeaconAttestation => return PubsubMessage::Attestation(data),
-                GossipTopic::VoluntaryExit => return PubsubMessage::VoluntaryExit(data),
-                GossipTopic::ProposerSlashing => return PubsubMessage::ProposerSlashing(data),
-                GossipTopic::AttesterSlashing => return PubsubMessage::AttesterSlashing(data),
-                GossipTopic::Shard => return PubsubMessage::Unknown(data),
-                GossipTopic::Unknown(_) => continue,
-            }
-        }
-        PubsubMessage::Unknown(data)
-    }
-    fn into_data(self) -> Vec<u8> {
-        match self {
-            PubsubMessage::Block(data)
-            | PubsubMessage::Attestation(data)
-            | PubsubMessage::VoluntaryExit(data)
-            | PubsubMessage::ProposerSlashing(data)
-            | PubsubMessage::AttesterSlashing(data)
-            | PubsubMessage::Unknown(data) => data,
-        }
-    }
-}
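The `seen_gossip_messages: LruCache<MessageId, ()>` field above deduplicates gossip by message id: `put(id, ()).is_none()` is true only the first time an id is seen, and the LRU capacity (100,000) bounds memory by evicting the oldest ids. The real code uses the `lru` crate; a std-only sketch of the same idea, with ids stubbed as `u64`:

```rust
use std::collections::{HashSet, VecDeque};

// Illustrative FIFO-evicting "seen" cache (the real behaviour uses lru::LruCache).
struct SeenCache {
    capacity: usize,
    order: VecDeque<u64>, // insertion order, oldest at the front
    set: HashSet<u64>,    // membership test
}

impl SeenCache {
    fn new(capacity: usize) -> Self {
        SeenCache {
            capacity,
            order: VecDeque::new(),
            set: HashSet::new(),
        }
    }

    /// Returns `true` if the id was not seen before (process the message),
    /// `false` if it is a duplicate (log and drop it).
    fn insert(&mut self, id: u64) -> bool {
        if !self.set.insert(id) {
            return false;
        }
        self.order.push_back(id);
        if self.order.len() > self.capacity {
            // Evict the oldest id so memory stays bounded.
            if let Some(old) = self.order.pop_front() {
                self.set.remove(&old);
            }
        }
        true
    }
}
```

The trade-off is the same as in the diff: an evicted id can be "seen" again later, so the cache capacity must comfortably exceed the volume of in-flight duplicates.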

View File

@ -1,4 +1,4 @@
-use crate::topics::GossipTopic;
+use crate::types::{GossipEncoding, GossipKind, GossipTopic};
 use enr::Enr;
 use libp2p::gossipsub::{GossipsubConfig, GossipsubConfigBuilder, GossipsubMessage, MessageId};
 use libp2p::Multiaddr;
@ -67,11 +67,11 @@ impl Default for Config {
         // The default topics that we will initially subscribe to
         let topics = vec![
-            GossipTopic::BeaconBlock,
-            GossipTopic::BeaconAttestation,
-            GossipTopic::VoluntaryExit,
-            GossipTopic::ProposerSlashing,
-            GossipTopic::AttesterSlashing,
+            GossipTopic::new(GossipKind::BeaconBlock, GossipEncoding::SSZ),
+            GossipTopic::new(GossipKind::BeaconAggregateAndProof, GossipEncoding::SSZ),
+            GossipTopic::new(GossipKind::VoluntaryExit, GossipEncoding::SSZ),
+            GossipTopic::new(GossipKind::ProposerSlashing, GossipEncoding::SSZ),
+            GossipTopic::new(GossipKind::AttesterSlashing, GossipEncoding::SSZ),
         ];
         // The function used to generate a gossipsub message id
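The topic change above splits each `GossipTopic` into a `GossipKind` and a `GossipEncoding`, so the same kind can later be carried under different encodings. A std-only sketch of how such a pair might map to a topic string (the `/eth2/{kind}/{encoding}` format and these enum variants follow the diff, but the exact string layout is an assumption, not quoted from the source):

```rust
// Hypothetical mapping from (kind, encoding) to a gossipsub topic string.
#[derive(Clone, Copy)]
enum GossipKind {
    BeaconBlock,
    BeaconAggregateAndProof,
    VoluntaryExit,
}

#[derive(Clone, Copy)]
enum GossipEncoding {
    SSZ,
}

fn topic_string(kind: GossipKind, encoding: GossipEncoding) -> String {
    let kind = match kind {
        GossipKind::BeaconBlock => "beacon_block",
        GossipKind::BeaconAggregateAndProof => "beacon_aggregate_and_proof",
        GossipKind::VoluntaryExit => "voluntary_exit",
    };
    let enc = match encoding {
        GossipEncoding::SSZ => "ssz",
    };
    format!("/eth2/{}/{}", kind, enc)
}
```

Keeping the encoding as a separate dimension is what lets `PubsubMessage::decode` pick the right decoder from the topic a message arrived on.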

View File

@ -1,5 +1,5 @@
 use crate::metrics;
-use crate::{error, NetworkConfig, NetworkGlobals};
+use crate::{error, NetworkConfig, NetworkGlobals, PeerInfo};
 /// This manages the discovery and management of peers.
 ///
 /// Currently using discv5 for peer discovery.
@ -16,10 +16,11 @@ use std::fs::File;
 use std::io::prelude::*;
 use std::path::Path;
 use std::str::FromStr;
-use std::sync::{atomic::Ordering, Arc};
+use std::sync::Arc;
 use std::time::{Duration, Instant};
 use tokio::io::{AsyncRead, AsyncWrite};
 use tokio::timer::Delay;
+use types::EthSpec;
 /// Maximum seconds before searching for extra peers.
 const MAX_TIME_BETWEEN_PEER_SEARCHES: u64 = 120;
@ -30,7 +31,7 @@ const ENR_FILENAME: &str = "enr.dat";
 /// Lighthouse discovery behaviour. This provides peer management and discovery using the Discv5
 /// libp2p protocol.
-pub struct Discovery<TSubstream> {
+pub struct Discovery<TSubstream, TSpec: EthSpec> {
     /// The currently banned peers.
     banned_peers: HashSet<PeerId>,
@ -56,17 +57,17 @@ pub struct Discovery<TSubstream> {
     discovery: Discv5<TSubstream>,
     /// A collection of network constants that can be read from other threads.
-    network_globals: Arc<NetworkGlobals>,
+    network_globals: Arc<NetworkGlobals<TSpec>>,
     /// Logger for the discovery behaviour.
     log: slog::Logger,
 }
-impl<TSubstream> Discovery<TSubstream> {
+impl<TSubstream, TSpec: EthSpec> Discovery<TSubstream, TSpec> {
     pub fn new(
         local_key: &Keypair,
         config: &NetworkConfig,
-        network_globals: Arc<NetworkGlobals>,
+        network_globals: Arc<NetworkGlobals<TSpec>>,
         log: &slog::Logger,
     ) -> error::Result<Self> {
         let log = log.clone();
@ -81,8 +82,7 @@ impl<TSubstream> Discovery<TSubstream> {
             None => String::from(""),
         };
-        info!(log, "ENR Initialised"; "enr" => local_enr.to_base64(), "seq" => local_enr.seq());
-        debug!(log, "Discv5 Node ID Initialised"; "node_id" => format!("{}",local_enr.node_id()));
+        info!(log, "ENR Initialised"; "enr" => local_enr.to_base64(), "seq" => local_enr.seq(), "id"=> format!("{}",local_enr.node_id()), "ip" => format!("{:?}", local_enr.ip()), "udp"=> local_enr.udp().unwrap_or_else(|| 0), "tcp" => local_enr.tcp().unwrap_or_else(|| 0));
         // the last parameter enables IP limiting. 2 Nodes on the same /24 subnet per bucket and 10
         // nodes on the same /24 subnet per table.
@ -131,21 +131,6 @@ impl<TSubstream> Discovery<TSubstream> {
         self.discovery.add_enr(enr);
     }
-    /// The current number of connected libp2p peers.
-    pub fn connected_peers(&self) -> usize {
-        self.network_globals.connected_peers.load(Ordering::Relaxed)
-    }
-    /// The current number of connected libp2p peers.
-    pub fn connected_peer_set(&self) -> Vec<PeerId> {
-        self.network_globals
-            .connected_peer_set
-            .read()
-            .iter()
-            .cloned()
-            .collect::<Vec<_>>()
-    }
     /// The peer has been banned. Add this peer to the banned list to prevent any future
     /// re-connections.
     // TODO: Remove the peer from the DHT if present
@ -172,7 +157,7 @@ impl<TSubstream> Discovery<TSubstream> {
 }
 // Redirect all behaviour events to underlying discovery behaviour.
-impl<TSubstream> NetworkBehaviour for Discovery<TSubstream>
+impl<TSubstream, TSpec: EthSpec> NetworkBehaviour for Discovery<TSubstream, TSpec>
 where
     TSubstream: AsyncRead + AsyncWrite,
 {
@ -189,18 +174,18 @@ where
     }
     fn inject_connected(&mut self, peer_id: PeerId, _endpoint: ConnectedPoint) {
+        // TODO: Search for a known ENR once discv5 is updated.
         self.network_globals
             .connected_peer_set
             .write()
-            .insert(peer_id);
-        self.network_globals.connected_peers.store(
-            self.network_globals.connected_peer_set.read().len(),
-            Ordering::Relaxed,
-        );
+            .insert(peer_id, PeerInfo::new());
         // TODO: Drop peers if over max_peer limit
         metrics::inc_counter(&metrics::PEER_CONNECT_EVENT_COUNT);
-        metrics::set_gauge(&metrics::PEERS_CONNECTED, self.connected_peers() as i64);
+        metrics::set_gauge(
+            &metrics::PEERS_CONNECTED,
+            self.network_globals.connected_peers() as i64,
+        );
     }
@ -208,13 +193,12 @@ where
     fn inject_disconnected(&mut self, peer_id: &PeerId, _endpoint: ConnectedPoint) {
         self.network_globals
             .connected_peer_set
             .write()
             .remove(peer_id);
-        self.network_globals.connected_peers.store(
-            self.network_globals.connected_peer_set.read().len(),
-            Ordering::Relaxed,
-        );
         metrics::inc_counter(&metrics::PEER_DISCONNECT_EVENT_COUNT);
-        metrics::set_gauge(&metrics::PEERS_CONNECTED, self.connected_peers() as i64);
+        metrics::set_gauge(
+            &metrics::PEERS_CONNECTED,
+            self.network_globals.connected_peers() as i64,
+        );
     }
     fn inject_replaced(
@ -247,8 +231,7 @@ where
         loop {
             match self.peer_discovery_delay.poll() {
                 Ok(Async::Ready(_)) => {
-                    if self.network_globals.connected_peers.load(Ordering::Relaxed) < self.max_peers
-                    {
+                    if self.network_globals.connected_peers() < self.max_peers {
                         self.find_peers();
                     }
                     // Set to maximum, and update to earlier, once we get our results back.
@ -303,8 +286,7 @@ where
                     for peer_id in closer_peers {
                         // if we need more peers, attempt a connection
-                        if self.network_globals.connected_peers.load(Ordering::Relaxed)
-                            < self.max_peers
+                        if self.network_globals.connected_peers() < self.max_peers
                             && self
                                 .network_globals
                                 .connected_peer_set
View File

@ -1,30 +0,0 @@
-//! A collection of variables that are accessible outside of the network thread itself.
-use crate::{Enr, Multiaddr, PeerId};
-use parking_lot::RwLock;
-use std::collections::HashSet;
-use std::sync::atomic::AtomicUsize;
-pub struct NetworkGlobals {
-    /// The current local ENR.
-    pub local_enr: RwLock<Option<Enr>>,
-    /// The local peer_id.
-    pub peer_id: RwLock<PeerId>,
-    /// Listening multiaddrs.
-    pub listen_multiaddrs: RwLock<Vec<Multiaddr>>,
-    /// Current number of connected libp2p peers.
-    pub connected_peers: AtomicUsize,
-    /// The collection of currently connected peers.
-    pub connected_peer_set: RwLock<HashSet<PeerId>>,
-}
-impl NetworkGlobals {
-    pub fn new(peer_id: PeerId) -> Self {
-        NetworkGlobals {
-            local_enr: RwLock::new(None),
-            peer_id: RwLock::new(peer_id),
-            listen_multiaddrs: RwLock::new(Vec::new()),
-            connected_peers: AtomicUsize::new(0),
-            connected_peer_set: RwLock::new(HashSet::new()),
-        }
-    }
-}
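The deletion above retires the standalone `connected_peers: AtomicUsize` counter that had to be kept in sync with `connected_peer_set` by hand (the old `inject_connected`/`inject_disconnected` each did a `store` after mutating the set). Once the peer set itself is the source of truth, the count is just the map's length read under the same lock and can never drift. A std-only sketch of that shape (`PeerInfo` is a stub, `PeerId` is stubbed as `u64`; the real code uses `parking_lot::RwLock` and lives in the new `types` module):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

struct PeerInfo; // placeholder for the real per-peer metadata

struct NetworkGlobals {
    // The peer map is the single source of truth for the connected count.
    connected_peer_set: RwLock<HashMap<u64, PeerInfo>>,
}

impl NetworkGlobals {
    fn new() -> Self {
        NetworkGlobals {
            connected_peer_set: RwLock::new(HashMap::new()),
        }
    }

    /// The current number of connected peers: derived, never stored separately.
    fn connected_peers(&self) -> usize {
        self.connected_peer_set.read().unwrap().len()
    }
}
```

This is why the diff's `Discovery` methods can call `network_globals.connected_peers()` directly instead of juggling `Ordering::Relaxed` atomics.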

View File

@ -8,25 +8,16 @@ extern crate lazy_static;
 pub mod behaviour;
 mod config;
 mod discovery;
-pub mod error;
-mod globals;
 mod metrics;
 pub mod rpc;
 mod service;
-mod topics;
+pub mod types;
-pub use behaviour::PubsubMessage;
+pub use crate::types::{error, GossipTopic, NetworkGlobals, PeerInfo, PubsubData, PubsubMessage};
 pub use config::Config as NetworkConfig;
-pub use globals::NetworkGlobals;
 pub use libp2p::enr::Enr;
 pub use libp2p::gossipsub::{MessageId, Topic, TopicHash};
-pub use libp2p::multiaddr;
-pub use libp2p::Multiaddr;
-pub use libp2p::{
-    gossipsub::{GossipsubConfig, GossipsubConfigBuilder},
-    PeerId, Swarm,
-};
+pub use libp2p::{multiaddr, Multiaddr};
+pub use libp2p::{PeerId, Swarm};
 pub use rpc::RPCEvent;
-pub use service::Libp2pEvent;
-pub use service::Service;
-pub use topics::GossipTopic;
+pub use service::{Libp2pEvent, Service};

View File

@ -3,7 +3,9 @@
 use crate::rpc::{ErrorMessage, RPCErrorResponse, RPCRequest, RPCResponse};
 use libp2p::bytes::BufMut;
 use libp2p::bytes::BytesMut;
+use std::marker::PhantomData;
 use tokio::codec::{Decoder, Encoder};
+use types::EthSpec;

 pub trait OutboundCodec: Encoder + Decoder {
     type ErrorType;
@ -17,43 +19,53 @@ pub trait OutboundCodec: Encoder + Decoder {
 /* Global Inbound Codec */
 // This deals with Decoding RPC Requests from other peers and encoding our responses
-pub struct BaseInboundCodec<TCodec>
+pub struct BaseInboundCodec<TCodec, TSpec>
 where
     TCodec: Encoder + Decoder,
+    TSpec: EthSpec,
 {
     /// Inner codec for handling various encodings
     inner: TCodec,
+    phantom: PhantomData<TSpec>,
 }

-impl<TCodec> BaseInboundCodec<TCodec>
+impl<TCodec, TSpec> BaseInboundCodec<TCodec, TSpec>
 where
     TCodec: Encoder + Decoder,
+    TSpec: EthSpec,
 {
     pub fn new(codec: TCodec) -> Self {
-        BaseInboundCodec { inner: codec }
+        BaseInboundCodec {
+            inner: codec,
+            phantom: PhantomData,
+        }
     }
 }

 /* Global Outbound Codec */
 // This deals with Decoding RPC Responses from other peers and encoding our requests
-pub struct BaseOutboundCodec<TOutboundCodec>
+pub struct BaseOutboundCodec<TOutboundCodec, TSpec>
 where
     TOutboundCodec: OutboundCodec,
+    TSpec: EthSpec,
 {
     /// Inner codec for handling various encodings.
     inner: TOutboundCodec,
     /// Keeps track of the current response code for a chunk.
     current_response_code: Option<u8>,
+    phantom: PhantomData<TSpec>,
 }

-impl<TOutboundCodec> BaseOutboundCodec<TOutboundCodec>
+impl<TOutboundCodec, TSpec> BaseOutboundCodec<TOutboundCodec, TSpec>
 where
+    TSpec: EthSpec,
     TOutboundCodec: OutboundCodec,
 {
     pub fn new(codec: TOutboundCodec) -> Self {
         BaseOutboundCodec {
             inner: codec,
             current_response_code: None,
+            phantom: PhantomData,
         }
     }
 }
@ -63,11 +75,12 @@ where
 /* Base Inbound Codec */

 // This Encodes RPC Responses sent to external peers
-impl<TCodec> Encoder for BaseInboundCodec<TCodec>
+impl<TCodec, TSpec> Encoder for BaseInboundCodec<TCodec, TSpec>
 where
-    TCodec: Decoder + Encoder<Item = RPCErrorResponse>,
+    TSpec: EthSpec,
+    TCodec: Decoder + Encoder<Item = RPCErrorResponse<TSpec>>,
 {
-    type Item = RPCErrorResponse;
+    type Item = RPCErrorResponse<TSpec>;
     type Error = <TCodec as Encoder>::Error;

     fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
@ -82,11 +95,12 @@ where
 }

 // This Decodes RPC Requests from external peers
-impl<TCodec> Decoder for BaseInboundCodec<TCodec>
+impl<TCodec, TSpec> Decoder for BaseInboundCodec<TCodec, TSpec>
 where
-    TCodec: Encoder + Decoder<Item = RPCRequest>,
+    TSpec: EthSpec,
+    TCodec: Encoder + Decoder<Item = RPCRequest<TSpec>>,
 {
-    type Item = RPCRequest;
+    type Item = RPCRequest<TSpec>;
     type Error = <TCodec as Decoder>::Error;

     fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
@ -97,11 +111,12 @@ where
 /* Base Outbound Codec */

 // This Encodes RPC Requests sent to external peers
-impl<TCodec> Encoder for BaseOutboundCodec<TCodec>
+impl<TCodec, TSpec> Encoder for BaseOutboundCodec<TCodec, TSpec>
 where
-    TCodec: OutboundCodec + Encoder<Item = RPCRequest>,
+    TSpec: EthSpec,
+    TCodec: OutboundCodec + Encoder<Item = RPCRequest<TSpec>>,
 {
-    type Item = RPCRequest;
+    type Item = RPCRequest<TSpec>;
     type Error = <TCodec as Encoder>::Error;

     fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
@ -110,11 +125,12 @@ where
 }

 // This decodes RPC Responses received from external peers
-impl<TCodec> Decoder for BaseOutboundCodec<TCodec>
+impl<TCodec, TSpec> Decoder for BaseOutboundCodec<TCodec, TSpec>
 where
-    TCodec: OutboundCodec<ErrorType = ErrorMessage> + Decoder<Item = RPCResponse>,
+    TSpec: EthSpec,
+    TCodec: OutboundCodec<ErrorType = ErrorMessage> + Decoder<Item = RPCResponse<TSpec>>,
 {
-    type Item = RPCErrorResponse;
+    type Item = RPCErrorResponse<TSpec>;
     type Error = <TCodec as Decoder>::Error;

     fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
@ -130,7 +146,7 @@ where
         });

         let inner_result = {
-            if RPCErrorResponse::is_response(response_code) {
+            if RPCErrorResponse::<TSpec>::is_response(response_code) {
                 // decode an actual response and mutates the buffer if enough bytes have been read
                 // returning the result.
                 self.inner
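The recurring change in this file is threading a `TSpec: EthSpec` type parameter through codecs that never store a `TSpec` value, using a `phantom: PhantomData<TSpec>` field. A minimal sketch of why that works, with a hypothetical `Spec` trait standing in for `EthSpec`:

```rust
use std::marker::PhantomData;

// Hypothetical stand-in for the `EthSpec` trait: compile-time chain constants.
trait Spec {
    const MAX_PAYLOAD: usize;
}

struct MainnetSpec;
impl Spec for MainnetSpec {
    const MAX_PAYLOAD: usize = 1024;
}

/// A codec generic over `S` without holding an `S` value, mirroring the
/// `phantom: PhantomData<TSpec>` fields added in the diff. `PhantomData`
/// is zero-sized, so the marker costs nothing at runtime; it exists only
/// so the type parameter is "used" and trait impls can mention it.
struct Codec<S: Spec> {
    phantom: PhantomData<S>,
}

impl<S: Spec> Codec<S> {
    fn new() -> Self {
        Codec { phantom: PhantomData }
    }

    /// Reject frames larger than the spec's compile-time limit.
    fn validate(&self, frame: &[u8]) -> bool {
        frame.len() <= S::MAX_PAYLOAD
    }
}

fn main() {
    let codec: Codec<MainnetSpec> = Codec::new();
    // The phantom field adds no bytes to the struct.
    assert_eq!(std::mem::size_of::<Codec<MainnetSpec>>(), 0);
    println!("{}", codec.validate(&[0u8; 512])); // prints true
}
```

Without the phantom field, Rust rejects `struct Codec<S: Spec> {}` with an "unused type parameter" error, which is exactly the situation the diff's codecs are in.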

View File

@ -7,18 +7,19 @@ use crate::rpc::protocol::RPCError;
 use crate::rpc::{RPCErrorResponse, RPCRequest};
 use libp2p::bytes::BytesMut;
 use tokio::codec::{Decoder, Encoder};
+use types::EthSpec;

 // Known types of codecs
-pub enum InboundCodec {
-    SSZ(BaseInboundCodec<SSZInboundCodec>),
+pub enum InboundCodec<TSpec: EthSpec> {
+    SSZ(BaseInboundCodec<SSZInboundCodec<TSpec>, TSpec>),
 }

-pub enum OutboundCodec {
-    SSZ(BaseOutboundCodec<SSZOutboundCodec>),
+pub enum OutboundCodec<TSpec: EthSpec> {
+    SSZ(BaseOutboundCodec<SSZOutboundCodec<TSpec>, TSpec>),
 }

-impl Encoder for InboundCodec {
-    type Item = RPCErrorResponse;
+impl<T: EthSpec> Encoder for InboundCodec<T> {
+    type Item = RPCErrorResponse<T>;
     type Error = RPCError;

     fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
@ -28,8 +29,8 @@ impl Encoder for InboundCodec {
     }
 }

-impl Decoder for InboundCodec {
-    type Item = RPCRequest;
+impl<TSpec: EthSpec> Decoder for InboundCodec<TSpec> {
+    type Item = RPCRequest<TSpec>;
     type Error = RPCError;

     fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
@ -39,8 +40,8 @@ impl Decoder for InboundCodec {
     }
 }

-impl Encoder for OutboundCodec {
-    type Item = RPCRequest;
+impl<TSpec: EthSpec> Encoder for OutboundCodec<TSpec> {
+    type Item = RPCRequest<TSpec>;
     type Error = RPCError;

     fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
@ -50,8 +51,8 @@ impl Encoder for OutboundCodec {
     }
 }

-impl Decoder for OutboundCodec {
-    type Item = RPCErrorResponse;
+impl<T: EthSpec> Decoder for OutboundCodec<T> {
+    type Item = RPCErrorResponse<T>;
     type Error = RPCError;

     fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
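`InboundCodec` and `OutboundCodec` above are "known types of codecs": one enum variant per negotiated wire encoding, with the trait impls forwarding to the active variant. A stdlib sketch of that enum-delegation pattern, using a hypothetical `Codec` trait and an `SszLike` variant in place of the real `Encoder`/`Decoder` machinery:

```rust
/// Hypothetical codec trait standing in for tokio's Encoder/Decoder pair.
trait Codec {
    fn encode(&self, value: u32) -> Vec<u8>;
}

struct SszLike;
impl Codec for SszLike {
    fn encode(&self, value: u32) -> Vec<u8> {
        value.to_le_bytes().to_vec() // fixed-width little-endian, SSZ-style
    }
}

/// The dispatching enum: adding a second encoding later only means adding
/// a variant and a match arm, with no trait objects or heap allocation.
enum InboundCodec {
    Ssz(SszLike),
}

impl Codec for InboundCodec {
    fn encode(&self, value: u32) -> Vec<u8> {
        match self {
            InboundCodec::Ssz(c) => c.encode(value),
        }
    }
}

fn main() {
    let codec = InboundCodec::Ssz(SszLike);
    println!("{:?}", codec.encode(1)); // prints [1, 0, 0, 0]
}
```

Compared with `Box<dyn Codec>`, the enum keeps dispatch static and the set of supported encodings explicit at the type level, which suits a protocol where encodings are negotiated from a known list.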

View File

@ -8,17 +8,20 @@ use crate::rpc::{
 use crate::rpc::{ErrorMessage, RPCErrorResponse, RPCRequest, RPCResponse};
 use libp2p::bytes::{BufMut, Bytes, BytesMut};
 use ssz::{Decode, Encode};
+use std::marker::PhantomData;
 use tokio::codec::{Decoder, Encoder};
+use types::{EthSpec, SignedBeaconBlock};
 use unsigned_varint::codec::UviBytes;

 /* Inbound Codec */

-pub struct SSZInboundCodec {
+pub struct SSZInboundCodec<TSpec: EthSpec> {
     inner: UviBytes,
     protocol: ProtocolId,
+    phantom: PhantomData<TSpec>,
 }

-impl SSZInboundCodec {
+impl<T: EthSpec> SSZInboundCodec<T> {
     pub fn new(protocol: ProtocolId, max_packet_size: usize) -> Self {
         let mut uvi_codec = UviBytes::default();
         uvi_codec.set_max_len(max_packet_size);
@ -29,24 +32,23 @@ impl SSZInboundCodec {
         SSZInboundCodec {
             inner: uvi_codec,
             protocol,
+            phantom: PhantomData,
         }
     }
 }

 // Encoder for inbound streams: Encodes RPC Responses sent to peers.
-impl Encoder for SSZInboundCodec {
-    type Item = RPCErrorResponse;
+impl<TSpec: EthSpec> Encoder for SSZInboundCodec<TSpec> {
+    type Item = RPCErrorResponse<TSpec>;
     type Error = RPCError;

     fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
         let bytes = match item {
-            RPCErrorResponse::Success(resp) => {
-                match resp {
-                    RPCResponse::Status(res) => res.as_ssz_bytes(),
-                    RPCResponse::BlocksByRange(res) => res, // already raw bytes
-                    RPCResponse::BlocksByRoot(res) => res, // already raw bytes
-                }
-            }
+            RPCErrorResponse::Success(resp) => match resp {
+                RPCResponse::Status(res) => res.as_ssz_bytes(),
+                RPCResponse::BlocksByRange(res) => res.as_ssz_bytes(),
+                RPCResponse::BlocksByRoot(res) => res.as_ssz_bytes(),
+            },
             RPCErrorResponse::InvalidRequest(err) => err.as_ssz_bytes(),
             RPCErrorResponse::ServerError(err) => err.as_ssz_bytes(),
             RPCErrorResponse::Unknown(err) => err.as_ssz_bytes(),
@ -70,8 +72,8 @@ impl Encoder for SSZInboundCodec {
 }

 // Decoder for inbound streams: Decodes RPC requests from peers
-impl Decoder for SSZInboundCodec {
-    type Item = RPCRequest;
+impl<TSpec: EthSpec> Decoder for SSZInboundCodec<TSpec> {
+    type Item = RPCRequest<TSpec>;
     type Error = RPCError;

     fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
@ -111,12 +113,13 @@ impl Decoder for SSZInboundCodec {

 /* Outbound Codec: Codec for initiating RPC requests */

-pub struct SSZOutboundCodec {
+pub struct SSZOutboundCodec<TSpec: EthSpec> {
     inner: UviBytes,
     protocol: ProtocolId,
+    phantom: PhantomData<TSpec>,
 }

-impl SSZOutboundCodec {
+impl<TSpec: EthSpec> SSZOutboundCodec<TSpec> {
     pub fn new(protocol: ProtocolId, max_packet_size: usize) -> Self {
         let mut uvi_codec = UviBytes::default();
         uvi_codec.set_max_len(max_packet_size);
@ -127,13 +130,14 @@ impl SSZOutboundCodec {
         SSZOutboundCodec {
             inner: uvi_codec,
             protocol,
+            phantom: PhantomData,
         }
     }
 }

 // Encoder for outbound streams: Encodes RPC Requests to peers
-impl Encoder for SSZOutboundCodec {
-    type Item = RPCRequest;
+impl<TSpec: EthSpec> Encoder for SSZOutboundCodec<TSpec> {
+    type Item = RPCRequest<TSpec>;
     type Error = RPCError;

     fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {
@ -142,6 +146,7 @@ impl Encoder for SSZOutboundCodec {
             RPCRequest::Goodbye(req) => req.as_ssz_bytes(),
             RPCRequest::BlocksByRange(req) => req.as_ssz_bytes(),
             RPCRequest::BlocksByRoot(req) => req.block_roots.as_ssz_bytes(),
+            RPCRequest::Phantom(_) => unreachable!("Never encode phantom data"),
         };
         // length-prefix
         self.inner
@ -155,8 +160,8 @@ impl Encoder for SSZOutboundCodec {
 // The majority of the decoding has now been pushed upstream due to the changing specification.
 // We prefer to decode blocks and attestations with extra knowledge about the chain to perform
 // faster verification checks before decoding entire blocks/attestations.
-impl Decoder for SSZOutboundCodec {
-    type Item = RPCResponse;
+impl<TSpec: EthSpec> Decoder for SSZOutboundCodec<TSpec> {
+    type Item = RPCResponse<TSpec>;
     type Error = RPCError;

     fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
@ -173,11 +178,15 @@ impl Decoder for SSZOutboundCodec {
                 },
                 RPC_GOODBYE => Err(RPCError::InvalidProtocol("GOODBYE doesn't have a response")),
                 RPC_BLOCKS_BY_RANGE => match self.protocol.version.as_str() {
-                    "1" => Ok(Some(RPCResponse::BlocksByRange(Vec::new()))),
+                    "1" => Err(RPCError::Custom(
+                        "Status stream terminated unexpectedly, empty block".into(),
+                    )), // cannot have an empty block message.
                     _ => unreachable!("Cannot negotiate an unknown version"),
                 },
                 RPC_BLOCKS_BY_ROOT => match self.protocol.version.as_str() {
-                    "1" => Ok(Some(RPCResponse::BlocksByRoot(Vec::new()))),
+                    "1" => Err(RPCError::Custom(
+                        "Status stream terminated unexpectedly, empty block".into(),
+                    )), // cannot have an empty block message.
                     _ => unreachable!("Cannot negotiate an unknown version"),
                 },
                 _ => unreachable!("Cannot negotiate an unknown protocol"),
@ -199,11 +208,15 @@ impl Decoder for SSZOutboundCodec {
                     Err(RPCError::InvalidProtocol("GOODBYE doesn't have a response"))
                 }
                 RPC_BLOCKS_BY_RANGE => match self.protocol.version.as_str() {
-                    "1" => Ok(Some(RPCResponse::BlocksByRange(raw_bytes.to_vec()))),
+                    "1" => Ok(Some(RPCResponse::BlocksByRange(Box::new(
+                        SignedBeaconBlock::from_ssz_bytes(&raw_bytes)?,
+                    )))),
                     _ => unreachable!("Cannot negotiate an unknown version"),
                 },
                 RPC_BLOCKS_BY_ROOT => match self.protocol.version.as_str() {
-                    "1" => Ok(Some(RPCResponse::BlocksByRoot(raw_bytes.to_vec()))),
+                    "1" => Ok(Some(RPCResponse::BlocksByRoot(Box::new(
+                        SignedBeaconBlock::from_ssz_bytes(&raw_bytes)?,
+                    )))),
                     _ => unreachable!("Cannot negotiate an unknown version"),
                 },
                 _ => unreachable!("Cannot negotiate an unknown protocol"),
@ -216,7 +229,7 @@ impl Decoder for SSZOutboundCodec {
     }
 }

-impl OutboundCodec for SSZOutboundCodec {
+impl<TSpec: EthSpec> OutboundCodec for SSZOutboundCodec<TSpec> {
     type ErrorType = ErrorMessage;

     fn decode_error(&mut self, src: &mut BytesMut) -> Result<Option<Self::ErrorType>, RPCError> {
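Both SSZ codecs wrap a `UviBytes` framer with `set_max_len(max_packet_size)`, i.e. every RPC chunk travels behind an unsigned-varint length prefix. A stdlib sketch of that prefix format (my reading of the unsigned-varint wire convention, not the `unsigned_varint` crate's actual code): 7 payload bits per byte, high bit set on every byte except the last.

```rust
/// Encode `n` as an unsigned varint, appending to `out`.
fn encode_uvarint(mut n: u64, out: &mut Vec<u8>) {
    loop {
        let byte = (n & 0x7f) as u8;
        n >>= 7;
        if n == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80); // continuation bit: more bytes follow
    }
}

/// Decode a varint from the front of `buf`, returning the value and the
/// number of prefix bytes consumed, or `None` if the prefix is incomplete
/// (the framed-codec case of "wait for more bytes from the socket").
fn decode_uvarint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut n = 0u64;
    for (i, &byte) in buf.iter().enumerate() {
        n |= u64::from(byte & 0x7f) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((n, i + 1));
        }
    }
    None
}

fn main() {
    let mut frame = Vec::new();
    encode_uvarint(300, &mut frame); // 300 needs two prefix bytes
    frame.extend_from_slice(&[0u8; 300]); // payload follows the prefix
    let (len, consumed) = decode_uvarint(&frame).unwrap();
    println!("{} {}", len, consumed); // prints "300 2"
}
```

A `max_len` check like `UviBytes` performs would simply reject any decoded `len` above the configured packet size before buffering the payload, which is what prevents a hostile peer from declaring a multi-gigabyte frame.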

View File

@ -18,6 +18,7 @@ use std::collections::hash_map::Entry;
 use std::time::{Duration, Instant};
 use tokio::io::{AsyncRead, AsyncWrite};
 use tokio::timer::{delay_queue, DelayQueue};
+use types::EthSpec;

 //TODO: Implement close() on the substream types to improve the poll code.
 //TODO: Implement check_timeout() on the substream types
@ -36,42 +37,50 @@ type InboundRequestId = RequestId;
 type OutboundRequestId = RequestId;

 /// Implementation of `ProtocolsHandler` for the RPC protocol.
-pub struct RPCHandler<TSubstream>
+pub struct RPCHandler<TSubstream, TSpec>
 where
     TSubstream: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
 {
     /// The upgrade for inbound substreams.
-    listen_protocol: SubstreamProtocol<RPCProtocol>,
+    listen_protocol: SubstreamProtocol<RPCProtocol<TSpec>>,

     /// If something bad happened and we should shut down the handler with an error.
     pending_error: Vec<(RequestId, ProtocolsHandlerUpgrErr<RPCError>)>,

     /// Queue of events to produce in `poll()`.
-    events_out: SmallVec<[RPCEvent; 4]>,
+    events_out: SmallVec<[RPCEvent<TSpec>; 4]>,

     /// Queue of outbound substreams to open.
-    dial_queue: SmallVec<[RPCEvent; 4]>,
+    dial_queue: SmallVec<[RPCEvent<TSpec>; 4]>,

     /// Current number of concurrent outbound substreams being opened.
     dial_negotiated: u32,

     /// Current inbound substreams awaiting processing.
-    inbound_substreams:
-        FnvHashMap<InboundRequestId, (InboundSubstreamState<TSubstream>, Option<delay_queue::Key>)>,
+    inbound_substreams: FnvHashMap<
+        InboundRequestId,
+        (
+            InboundSubstreamState<TSubstream, TSpec>,
+            Option<delay_queue::Key>,
+        ),
+    >,

     /// Inbound substream `DelayQueue` which keeps track of when an inbound substream will timeout.
     inbound_substreams_delay: DelayQueue<InboundRequestId>,

     /// Map of outbound substreams that need to be driven to completion. The `RequestId` is
     /// maintained by the application sending the request.
-    outbound_substreams:
-        FnvHashMap<OutboundRequestId, (OutboundSubstreamState<TSubstream>, delay_queue::Key)>,
+    outbound_substreams: FnvHashMap<
+        OutboundRequestId,
+        (OutboundSubstreamState<TSubstream, TSpec>, delay_queue::Key),
+    >,

     /// Inbound substream `DelayQueue` which keeps track of when an inbound substream will timeout.
     outbound_substreams_delay: DelayQueue<OutboundRequestId>,

     /// Map of outbound items that are queued as the stream processes them.
-    queued_outbound_items: FnvHashMap<RequestId, Vec<RPCErrorResponse>>,
+    queued_outbound_items: FnvHashMap<RequestId, Vec<RPCErrorResponse<TSpec>>>,

     /// Sequential ID for waiting substreams. For inbound substreams, this is also the inbound request ID.
     current_inbound_substream_id: RequestId,
@ -97,14 +106,15 @@ where
 }

 /// State of an outbound substream. Either waiting for a response, or in the process of sending.
-pub enum InboundSubstreamState<TSubstream>
+pub enum InboundSubstreamState<TSubstream, TSpec>
 where
     TSubstream: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
 {
     /// A response has been sent, pending writing and flush.
     ResponsePendingSend {
         /// The substream used to send the response
-        substream: futures::sink::Send<InboundFramed<TSubstream>>,
+        substream: futures::sink::Send<InboundFramed<TSubstream, TSpec>>,
         /// Whether a stream termination is requested. If true the stream will be closed after
         /// this send. Otherwise it will transition to an idle state until a stream termination is
         /// requested or a timeout is reached.
@ -112,40 +122,41 @@ where
     },
     /// The response stream is idle and awaiting input from the application to send more chunked
     /// responses.
-    ResponseIdle(InboundFramed<TSubstream>),
+    ResponseIdle(InboundFramed<TSubstream, TSpec>),
     /// The substream is attempting to shutdown.
-    Closing(InboundFramed<TSubstream>),
+    Closing(InboundFramed<TSubstream, TSpec>),
     /// Temporary state during processing
     Poisoned,
 }

-pub enum OutboundSubstreamState<TSubstream> {
+pub enum OutboundSubstreamState<TSubstream, TSpec: EthSpec> {
     /// A request has been sent, and we are awaiting a response. This future is driven in the
     /// handler because GOODBYE requests can be handled and responses dropped instantly.
     RequestPendingResponse {
         /// The framed negotiated substream.
-        substream: OutboundFramed<TSubstream>,
+        substream: OutboundFramed<TSubstream, TSpec>,
         /// Keeps track of the actual request sent.
-        request: RPCRequest,
+        request: RPCRequest<TSpec>,
     },
     /// Closing an outbound substream>
-    Closing(OutboundFramed<TSubstream>),
+    Closing(OutboundFramed<TSubstream, TSpec>),
     /// Temporary state during processing
     Poisoned,
 }

-impl<TSubstream> InboundSubstreamState<TSubstream>
+impl<TSubstream, TSpec> InboundSubstreamState<TSubstream, TSpec>
 where
     TSubstream: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
 {
     /// Moves the substream state to closing and informs the connected peer. The
     /// `queued_outbound_items` must be given as a parameter to add stream termination messages to
     /// the outbound queue.
-    pub fn close(&mut self, outbound_queue: &mut Vec<RPCErrorResponse>) {
+    pub fn close(&mut self, outbound_queue: &mut Vec<RPCErrorResponse<TSpec>>) {
         // When terminating a stream, report the stream termination to the requesting user via
         // an RPC error
         let error = RPCErrorResponse::ServerError(ErrorMessage {
-            error_message: b"Request timed out".to_vec(),
+            error_message: "Request timed out".as_bytes().to_vec(),
         });

         // The stream termination type is irrelevant, this will terminate the
@ -163,17 +174,12 @@ where
                 *self = InboundSubstreamState::ResponsePendingSend { substream, closing }
             }
-            InboundSubstreamState::ResponseIdle(mut substream) => {
-                // check if the stream is already closed
-                if let Ok(Async::Ready(None)) = substream.poll() {
-                    *self = InboundSubstreamState::Closing(substream);
-                } else {
-                    *self = InboundSubstreamState::ResponsePendingSend {
-                        substream: substream.send(error),
-                        closing: true,
-                    };
-                }
-            }
+            InboundSubstreamState::ResponseIdle(substream) => {
+                *self = InboundSubstreamState::ResponsePendingSend {
+                    substream: substream.send(error),
+                    closing: true,
+                };
+            }
             InboundSubstreamState::Closing(substream) => {
                 // let the stream close
                 *self = InboundSubstreamState::Closing(substream);
@ -185,12 +191,13 @@ where
     }
 }

-impl<TSubstream> RPCHandler<TSubstream>
+impl<TSubstream, TSpec> RPCHandler<TSubstream, TSpec>
 where
     TSubstream: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
 {
     pub fn new(
-        listen_protocol: SubstreamProtocol<RPCProtocol>,
+        listen_protocol: SubstreamProtocol<RPCProtocol<TSpec>>,
         inactive_timeout: Duration,
         log: &slog::Logger,
     ) -> Self {
@ -224,7 +231,7 @@ where
     ///
     /// > **Note**: If you modify the protocol, modifications will only applies to future inbound
     /// > substreams, not the ones already being negotiated.
-    pub fn listen_protocol_ref(&self) -> &SubstreamProtocol<RPCProtocol> {
+    pub fn listen_protocol_ref(&self) -> &SubstreamProtocol<RPCProtocol<TSpec>> {
         &self.listen_protocol
     }
@ -232,29 +239,30 @@ where
     ///
     /// > **Note**: If you modify the protocol, modifications will only applies to future inbound
     /// > substreams, not the ones already being negotiated.
-    pub fn listen_protocol_mut(&mut self) -> &mut SubstreamProtocol<RPCProtocol> {
+    pub fn listen_protocol_mut(&mut self) -> &mut SubstreamProtocol<RPCProtocol<TSpec>> {
         &mut self.listen_protocol
     }

     /// Opens an outbound substream with a request.
-    pub fn send_request(&mut self, rpc_event: RPCEvent) {
+    pub fn send_request(&mut self, rpc_event: RPCEvent<TSpec>) {
         self.keep_alive = KeepAlive::Yes;
         self.dial_queue.push(rpc_event);
     }
 }

-impl<TSubstream> ProtocolsHandler for RPCHandler<TSubstream>
+impl<TSubstream, TSpec> ProtocolsHandler for RPCHandler<TSubstream, TSpec>
 where
     TSubstream: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
 {
-    type InEvent = RPCEvent;
-    type OutEvent = RPCEvent;
+    type InEvent = RPCEvent<TSpec>;
+    type OutEvent = RPCEvent<TSpec>;
     type Error = ProtocolsHandlerUpgrErr<RPCError>;
     type Substream = TSubstream;
-    type InboundProtocol = RPCProtocol;
-    type OutboundProtocol = RPCRequest;
-    type OutboundOpenInfo = RPCEvent; // Keep track of the id and the request
+    type InboundProtocol = RPCProtocol<TSpec>;
+    type OutboundProtocol = RPCRequest<TSpec>;
+    type OutboundOpenInfo = RPCEvent<TSpec>; // Keep track of the id and the request

     fn listen_protocol(&self) -> SubstreamProtocol<Self::InboundProtocol> {
         self.listen_protocol.clone()
@ -262,7 +270,7 @@ where
     fn inject_fully_negotiated_inbound(
         &mut self,
-        out: <RPCProtocol as InboundUpgrade<TSubstream>>::Output,
+        out: <RPCProtocol<TSpec> as InboundUpgrade<TSubstream>>::Output,
     ) {
         // update the keep alive timeout if there are no more remaining outbound streams
         if let KeepAlive::Until(_) = self.keep_alive {
@ -294,7 +302,7 @@ where
     fn inject_fully_negotiated_outbound(
         &mut self,
-        out: <RPCRequest as OutboundUpgrade<TSubstream>>::Output,
+        out: <RPCRequest<TSpec> as OutboundUpgrade<TSubstream>>::Output,
         rpc_event: Self::OutboundOpenInfo,
     ) {
         self.dial_negotiated -= 1;
@ -748,11 +756,11 @@ where
 }

 // Check for new items to send to the peer and update the underlying stream
-fn apply_queued_responses<TSubstream: AsyncRead + AsyncWrite>(
-    raw_substream: InboundFramed<TSubstream>,
-    queued_outbound_items: &mut Option<&mut Vec<RPCErrorResponse>>,
+fn apply_queued_responses<TSubstream: AsyncRead + AsyncWrite, TSpec: EthSpec>(
+    raw_substream: InboundFramed<TSubstream, TSpec>,
+    queued_outbound_items: &mut Option<&mut Vec<RPCErrorResponse<TSpec>>>,
     new_items_to_send: &mut bool,
-) -> InboundSubstreamState<TSubstream> {
+) -> InboundSubstreamState<TSubstream, TSpec> {
     match queued_outbound_items {
         Some(ref mut queue) if !queue.is_empty() => {
             *new_items_to_send = true;
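Both substream state machines above carry a `Poisoned` variant described as a "temporary state during processing". The reason is a borrow-checker idiom: to transition, the handler must move the owned substream out of `&mut self`, which requires leaving *something* behind. A cut-down stdlib sketch (with a `String` standing in for the framed substream):

```rust
use std::mem;

/// Cut-down version of the handler's substream state machine; the real
/// states hold framed substreams, here a `String` stands in for the stream.
enum State {
    Idle(String),
    Closing(String),
    /// Temporary state during processing, as in the diff: it is what
    /// `mem::replace` leaves behind while we own the old state by value.
    Poisoned,
}

impl State {
    fn close(&mut self) {
        // Take ownership of the current state, leaving `Poisoned` behind.
        match mem::replace(self, State::Poisoned) {
            State::Idle(stream) => *self = State::Closing(stream),
            State::Closing(stream) => *self = State::Closing(stream),
            // Only observable if a transition above panicked mid-way.
            State::Poisoned => unreachable!("Poisoned is never left in place"),
        }
    }

    fn name(&self) -> &'static str {
        match self {
            State::Idle(_) => "idle",
            State::Closing(_) => "closing",
            State::Poisoned => "poisoned",
        }
    }
}

fn main() {
    let mut state = State::Idle("substream-1".into());
    state.close();
    println!("{}", state.name()); // prints "closing"
}
```

Every `close()` path writes a real state back before returning, so `Poisoned` only survives if a transition panics, which is why matching on it is an `unreachable!`.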

View File

@@ -1,7 +1,7 @@
//! Available RPC methods types and ids.

use ssz_derive::{Decode, Encode};
-use types::{Epoch, Hash256, Slot};
+use types::{Epoch, EthSpec, Hash256, SignedBeaconBlock, Slot};

/* Request/Response data structures for RPC methods */
@@ -129,16 +129,16 @@ pub struct BlocksByRootRequest {

// Collection of enums and structs used by the Codecs to encode/decode RPC messages

#[derive(Debug, Clone, PartialEq)]
-pub enum RPCResponse {
+pub enum RPCResponse<T: EthSpec> {
    /// A HELLO message.
    Status(StatusMessage),
    /// A response to a get BLOCKS_BY_RANGE request. A None response signifies the end of the
    /// batch.
-    BlocksByRange(Vec<u8>),
+    BlocksByRange(Box<SignedBeaconBlock<T>>),
    /// A response to a get BLOCKS_BY_ROOT request.
-    BlocksByRoot(Vec<u8>),
+    BlocksByRoot(Box<SignedBeaconBlock<T>>),
}

/// Indicates which response is being terminated by a stream termination response.
@@ -152,9 +152,9 @@ pub enum ResponseTermination {
}

#[derive(Debug)]
-pub enum RPCErrorResponse {
+pub enum RPCErrorResponse<T: EthSpec> {
    /// The response is successful.
-    Success(RPCResponse),
+    Success(RPCResponse<T>),
    /// The response was invalid.
    InvalidRequest(ErrorMessage),
@@ -169,7 +169,7 @@ pub enum RPCErrorResponse {
    StreamTermination(ResponseTermination),
}

-impl RPCErrorResponse {
+impl<T: EthSpec> RPCErrorResponse<T> {
    /// Used to encode the response in the codec.
    pub fn as_u8(&self) -> Option<u8> {
        match self {
@@ -242,17 +242,21 @@ impl std::fmt::Display for StatusMessage {
    }
}

-impl std::fmt::Display for RPCResponse {
+impl<T: EthSpec> std::fmt::Display for RPCResponse<T> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            RPCResponse::Status(status) => write!(f, "{}", status),
-            RPCResponse::BlocksByRange(_) => write!(f, "<BlocksByRange>"),
-            RPCResponse::BlocksByRoot(_) => write!(f, "<BlocksByRoot>"),
+            RPCResponse::BlocksByRange(block) => {
+                write!(f, "BlocksByRange: Block slot: {}", block.message.slot)
+            }
+            RPCResponse::BlocksByRoot(block) => {
+                write!(f, "BlocksByRoot: Block slot: {}", block.message.slot)
+            }
        }
    }
}

-impl std::fmt::Display for RPCErrorResponse {
+impl<T: EthSpec> std::fmt::Display for RPCErrorResponse<T> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            RPCErrorResponse::Success(res) => write!(f, "{}", res),

View File

@@ -20,6 +20,7 @@ use slog::o;
use std::marker::PhantomData;
use std::time::Duration;
use tokio::io::{AsyncRead, AsyncWrite};
+use types::EthSpec;

pub(crate) mod codec;
mod handler;
@@ -28,19 +29,19 @@ mod protocol;

/// The return type used in the behaviour and the resultant event from the protocols handler.
#[derive(Debug)]
-pub enum RPCEvent {
+pub enum RPCEvent<T: EthSpec> {
    /// An inbound/outbound request for RPC protocol. The first parameter is a sequential
    /// id which tracks an awaiting substream for the response.
-    Request(RequestId, RPCRequest),
+    Request(RequestId, RPCRequest<T>),
    /// A response that is being sent or has been received from the RPC protocol. The first parameter returns
    /// that which was sent with the corresponding request, the second is a single chunk of a
    /// response.
-    Response(RequestId, RPCErrorResponse),
+    Response(RequestId, RPCErrorResponse<T>),
    /// An Error occurred.
    Error(RequestId, RPCError),
}

-impl RPCEvent {
+impl<T: EthSpec> RPCEvent<T> {
    pub fn id(&self) -> usize {
        match *self {
            RPCEvent::Request(id, _) => id,
@@ -50,7 +51,7 @@ impl RPCEvent {
    }
}

-impl std::fmt::Display for RPCEvent {
+impl<T: EthSpec> std::fmt::Display for RPCEvent<T> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            RPCEvent::Request(id, req) => write!(f, "RPC Request(id: {}, {})", id, req),
@@ -62,16 +63,16 @@ impl std::fmt::Display for RPCEvent {

/// Implements the libp2p `NetworkBehaviour` trait and therefore manages network-level
/// logic.
-pub struct RPC<TSubstream> {
+pub struct RPC<TSubstream, TSpec: EthSpec> {
    /// Queue of events to be processed.
-    events: Vec<NetworkBehaviourAction<RPCEvent, RPCMessage>>,
+    events: Vec<NetworkBehaviourAction<RPCEvent<TSpec>, RPCMessage<TSpec>>>,
    /// Pins the generic substream.
    marker: PhantomData<TSubstream>,
    /// Slog logger for RPC behaviour.
    log: slog::Logger,
}

-impl<TSubstream> RPC<TSubstream> {
+impl<TSubstream, TSpec: EthSpec> RPC<TSubstream, TSpec> {
    pub fn new(log: slog::Logger) -> Self {
        let log = log.new(o!("service" => "libp2p_rpc"));
        RPC {
@@ -84,7 +85,7 @@ impl<TSubstream> RPC<TSubstream> {
    /// Submits an RPC request.
    ///
    /// The peer must be connected for this to succeed.
-    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
+    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent<TSpec>) {
        self.events.push(NetworkBehaviourAction::SendEvent {
            peer_id,
            event: rpc_event,
@@ -92,16 +93,19 @@ impl<TSubstream> RPC<TSubstream> {
    }
}

-impl<TSubstream> NetworkBehaviour for RPC<TSubstream>
+impl<TSubstream, TSpec> NetworkBehaviour for RPC<TSubstream, TSpec>
where
    TSubstream: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
{
-    type ProtocolsHandler = RPCHandler<TSubstream>;
-    type OutEvent = RPCMessage;
+    type ProtocolsHandler = RPCHandler<TSubstream, TSpec>;
+    type OutEvent = RPCMessage<TSpec>;

    fn new_handler(&mut self) -> Self::ProtocolsHandler {
        RPCHandler::new(
-            SubstreamProtocol::new(RPCProtocol),
+            SubstreamProtocol::new(RPCProtocol {
+                phantom: PhantomData,
+            }),
            Duration::from_secs(30),
            &self.log,
        )
@@ -157,8 +161,8 @@ where
}

/// Messages sent to the user from the RPC protocol.
-pub enum RPCMessage {
-    RPC(PeerId, RPCEvent),
+pub enum RPCMessage<TSpec: EthSpec> {
+    RPC(PeerId, RPCEvent<TSpec>),
    PeerDialed(PeerId),
    PeerDisconnected(PeerId),
}

View File

@@ -15,6 +15,7 @@ use futures::{
};
use libp2p::core::{upgrade, InboundUpgrade, OutboundUpgrade, ProtocolName, UpgradeInfo};
use std::io;
+use std::marker::PhantomData;
use std::time::Duration;
use tokio::codec::Framed;
use tokio::io::{AsyncRead, AsyncWrite};
@@ -22,6 +23,7 @@ use tokio::prelude::*;
use tokio::timer::timeout;
use tokio::util::FutureExt;
use tokio_io_timeout::TimeoutStream;
+use types::EthSpec;

/// The maximum bytes that can be sent across the RPC.
const MAX_RPC_SIZE: usize = 4_194_304; // 4M
@@ -44,9 +46,11 @@ pub const RPC_BLOCKS_BY_RANGE: &str = "beacon_blocks_by_range";
pub const RPC_BLOCKS_BY_ROOT: &str = "beacon_blocks_by_root";

#[derive(Debug, Clone)]
-pub struct RPCProtocol;
+pub struct RPCProtocol<TSpec: EthSpec> {
+    pub phantom: PhantomData<TSpec>,
+}

-impl UpgradeInfo for RPCProtocol {
+impl<TSpec: EthSpec> UpgradeInfo for RPCProtocol<TSpec> {
    type Info = ProtocolId;
    type InfoIter = Vec<Self::Info>;
@@ -104,27 +108,30 @@ impl ProtocolName for ProtocolId {
// The inbound protocol reads the request, decodes it and returns the stream to the protocol
// handler to respond to once ready.

-pub type InboundOutput<TSocket> = (RPCRequest, InboundFramed<TSocket>);
-pub type InboundFramed<TSocket> = Framed<TimeoutStream<upgrade::Negotiated<TSocket>>, InboundCodec>;
-type FnAndThen<TSocket> = fn(
-    (Option<RPCRequest>, InboundFramed<TSocket>),
-) -> FutureResult<InboundOutput<TSocket>, RPCError>;
-type FnMapErr<TSocket> = fn(timeout::Error<(RPCError, InboundFramed<TSocket>)>) -> RPCError;
+pub type InboundOutput<TSocket, TSpec> = (RPCRequest<TSpec>, InboundFramed<TSocket, TSpec>);
+pub type InboundFramed<TSocket, TSpec> =
+    Framed<TimeoutStream<upgrade::Negotiated<TSocket>>, InboundCodec<TSpec>>;
+type FnAndThen<TSocket, TSpec> = fn(
+    (Option<RPCRequest<TSpec>>, InboundFramed<TSocket, TSpec>),
+) -> FutureResult<InboundOutput<TSocket, TSpec>, RPCError>;
+type FnMapErr<TSocket, TSpec> =
+    fn(timeout::Error<(RPCError, InboundFramed<TSocket, TSpec>)>) -> RPCError;

-impl<TSocket> InboundUpgrade<TSocket> for RPCProtocol
+impl<TSocket, TSpec> InboundUpgrade<TSocket> for RPCProtocol<TSpec>
where
    TSocket: AsyncRead + AsyncWrite,
+    TSpec: EthSpec,
{
-    type Output = InboundOutput<TSocket>;
+    type Output = InboundOutput<TSocket, TSpec>;
    type Error = RPCError;
    type Future = future::AndThen<
        future::MapErr<
-            timeout::Timeout<stream::StreamFuture<InboundFramed<TSocket>>>,
-            FnMapErr<TSocket>,
+            timeout::Timeout<stream::StreamFuture<InboundFramed<TSocket, TSpec>>>,
+            FnMapErr<TSocket, TSpec>,
        >,
-        FutureResult<InboundOutput<TSocket>, RPCError>,
-        FnAndThen<TSocket>,
+        FutureResult<InboundOutput<TSocket, TSpec>, RPCError>,
+        FnAndThen<TSocket, TSpec>,
    >;

    fn upgrade_inbound(
@@ -141,7 +148,7 @@ where
            Framed::new(timed_socket, codec)
                .into_future()
                .timeout(Duration::from_secs(REQUEST_TIMEOUT))
-                .map_err(RPCError::from as FnMapErr<TSocket>)
+                .map_err(RPCError::from as FnMapErr<TSocket, TSpec>)
                .and_then({
                    |(req, stream)| match req {
                        Some(req) => futures::future::ok((req, stream)),
@@ -149,7 +156,7 @@ where
                            "Stream terminated early".into(),
                        )),
                    }
-                } as FnAndThen<TSocket>)
+                } as FnAndThen<TSocket, TSpec>)
        }
    }
}
@@ -161,14 +168,15 @@ where
// `OutboundUpgrade`

#[derive(Debug, Clone, PartialEq)]
-pub enum RPCRequest {
+pub enum RPCRequest<TSpec: EthSpec> {
    Status(StatusMessage),
    Goodbye(GoodbyeReason),
    BlocksByRange(BlocksByRangeRequest),
    BlocksByRoot(BlocksByRootRequest),
+    Phantom(PhantomData<TSpec>),
}

-impl UpgradeInfo for RPCRequest {
+impl<TSpec: EthSpec> UpgradeInfo for RPCRequest<TSpec> {
    type Info = ProtocolId;
    type InfoIter = Vec<Self::Info>;
@@ -179,7 +187,7 @@ impl UpgradeInfo for RPCRequest {
}

/// Implements the encoding per supported protocol for RPCRequest.
-impl RPCRequest {
+impl<TSpec: EthSpec> RPCRequest<TSpec> {
    pub fn supported_protocols(&self) -> Vec<ProtocolId> {
        match self {
            // add more protocols when versions/encodings are supported
@@ -187,6 +195,7 @@ impl RPCRequest {
            RPCRequest::Goodbye(_) => vec![ProtocolId::new(RPC_GOODBYE, "1", "ssz")],
            RPCRequest::BlocksByRange(_) => vec![ProtocolId::new(RPC_BLOCKS_BY_RANGE, "1", "ssz")],
            RPCRequest::BlocksByRoot(_) => vec![ProtocolId::new(RPC_BLOCKS_BY_ROOT, "1", "ssz")],
+            RPCRequest::Phantom(_) => Vec::new(),
        }
    }
@@ -200,6 +209,7 @@ impl RPCRequest {
            RPCRequest::Goodbye(_) => false,
            RPCRequest::BlocksByRange(_) => true,
            RPCRequest::BlocksByRoot(_) => true,
+            RPCRequest::Phantom(_) => unreachable!("Phantom should never be initialised"),
        }
    }
@@ -211,6 +221,7 @@ impl RPCRequest {
            RPCRequest::Goodbye(_) => false,
            RPCRequest::BlocksByRange(_) => true,
            RPCRequest::BlocksByRoot(_) => true,
+            RPCRequest::Phantom(_) => unreachable!("Phantom should never be initialised"),
        }
    }
@@ -224,6 +235,7 @@ impl RPCRequest {
            RPCRequest::BlocksByRoot(_) => ResponseTermination::BlocksByRoot,
            RPCRequest::Status(_) => unreachable!(),
            RPCRequest::Goodbye(_) => unreachable!(),
+            RPCRequest::Phantom(_) => unreachable!("Phantom should never be initialised"),
        }
    }
}
@@ -232,15 +244,17 @@ impl RPCRequest {

/* Outbound upgrades */

-pub type OutboundFramed<TSocket> = Framed<upgrade::Negotiated<TSocket>, OutboundCodec>;
+pub type OutboundFramed<TSocket, TSpec> =
+    Framed<upgrade::Negotiated<TSocket>, OutboundCodec<TSpec>>;

-impl<TSocket> OutboundUpgrade<TSocket> for RPCRequest
+impl<TSocket, TSpec> OutboundUpgrade<TSocket> for RPCRequest<TSpec>
where
+    TSpec: EthSpec,
    TSocket: AsyncRead + AsyncWrite,
{
-    type Output = OutboundFramed<TSocket>;
+    type Output = OutboundFramed<TSocket, TSpec>;
    type Error = RPCError;
-    type Future = sink::Send<OutboundFramed<TSocket>>;
+    type Future = sink::Send<OutboundFramed<TSocket, TSpec>>;

    fn upgrade_outbound(
        self,
        socket: upgrade::Negotiated<TSocket>,
@@ -340,13 +354,14 @@ impl std::error::Error for RPCError {
    }
}

-impl std::fmt::Display for RPCRequest {
+impl<TSpec: EthSpec> std::fmt::Display for RPCRequest<TSpec> {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            RPCRequest::Status(status) => write!(f, "Status Message: {}", status),
            RPCRequest::Goodbye(reason) => write!(f, "Goodbye: {}", reason),
            RPCRequest::BlocksByRange(req) => write!(f, "Blocks by range: {}", req),
            RPCRequest::BlocksByRoot(req) => write!(f, "Blocks by root: {:?}", req),
+            RPCRequest::Phantom(_) => unreachable!("Phantom should never be initialised"),
        }
    }
}

View File

@@ -1,9 +1,9 @@
-use crate::behaviour::{Behaviour, BehaviourEvent, PubsubMessage};
+use crate::behaviour::{Behaviour, BehaviourEvent};
-use crate::error;
use crate::multiaddr::Protocol;
use crate::rpc::RPCEvent;
+use crate::types::error;
use crate::NetworkConfig;
-use crate::{NetworkGlobals, Topic, TopicHash};
+use crate::{NetworkGlobals, PubsubMessage, TopicHash};
use futures::prelude::*;
use futures::Stream;
use libp2p::core::{
@@ -24,9 +24,10 @@ use std::io::{Error, ErrorKind};
use std::sync::Arc;
use std::time::Duration;
use tokio::timer::DelayQueue;
+use types::EthSpec;

type Libp2pStream = Boxed<(PeerId, StreamMuxerBox), Error>;
-type Libp2pBehaviour = Behaviour<Substream<StreamMuxerBox>>;
+type Libp2pBehaviour<TSpec> = Behaviour<Substream<StreamMuxerBox>, TSpec>;

const NETWORK_KEY_FILENAME: &str = "key";
/// The time in milliseconds to wait before banning a peer. This allows for any Goodbye messages to be
@@ -34,10 +35,10 @@ const NETWORK_KEY_FILENAME: &str = "key";
const BAN_PEER_WAIT_TIMEOUT: u64 = 200;

/// The configuration and state of the libp2p components for the beacon node.
-pub struct Service {
+pub struct Service<TSpec: EthSpec> {
    /// The libp2p Swarm handler.
    //TODO: Make this private
-    pub swarm: Swarm<Libp2pStream, Libp2pBehaviour>,
+    pub swarm: Swarm<Libp2pStream, Libp2pBehaviour<TSpec>>,

    /// This node's PeerId.
    pub local_peer_id: PeerId,
@@ -52,11 +53,11 @@ pub struct Service {
    pub log: slog::Logger,
}

-impl Service {
+impl<TSpec: EthSpec> Service<TSpec> {
    pub fn new(
        config: &NetworkConfig,
        log: slog::Logger,
-    ) -> error::Result<(Arc<NetworkGlobals>, Self)> {
+    ) -> error::Result<(Arc<NetworkGlobals<TSpec>>, Self)> {
        trace!(log, "Libp2p Service starting");

        let local_keypair = if let Some(hex_bytes) = &config.secret_key_hex {
@@ -70,7 +71,11 @@ impl Service {
        info!(log, "Libp2p Service"; "peer_id" => format!("{:?}", local_peer_id));

        // set up a collection of variables accessible outside of the network crate
-        let network_globals = Arc::new(NetworkGlobals::new(local_peer_id.clone()));
+        let network_globals = Arc::new(NetworkGlobals::new(
+            local_peer_id.clone(),
+            config.libp2p_port,
+            config.discovery_port,
+        ));

        let mut swarm = {
            // Set up the transport - tcp/ws with noise/secio and mplex/yamux
@@ -133,12 +138,15 @@ impl Service {
        }

        let mut subscribed_topics: Vec<String> = vec![];
-        for topic in config.topics.clone() {
-            let raw_topic: Topic = topic.into();
-            let topic_string = raw_topic.no_hash();
-            if swarm.subscribe(raw_topic.clone()) {
+        for topic in &config.topics {
+            let topic_string: String = topic.clone().into();
+            if swarm.subscribe(topic.clone()) {
                trace!(log, "Subscribed to topic"; "topic" => format!("{}", topic_string));
-                subscribed_topics.push(topic_string.as_str().into());
+                subscribed_topics.push(topic_string);
+                network_globals
+                    .gossipsub_subscriptions
+                    .write()
+                    .push(topic.clone());
            } else {
                warn!(log, "Could not subscribe to topic"; "topic" => format!("{}",topic_string));
            }
@@ -167,9 +175,9 @@ impl Service {
    }
}

-impl Stream for Service {
-    type Item = Libp2pEvent;
-    type Error = crate::error::Error;
+impl<TSpec: EthSpec> Stream for Service<TSpec> {
+    type Item = Libp2pEvent<TSpec>;
+    type Error = error::Error;

    fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
        loop {
@@ -313,9 +321,9 @@ fn build_transport(local_private_key: Keypair) -> Boxed<(PeerId, StreamMuxerBox)
#[derive(Debug)]
/// Events that can be obtained from polling the Libp2p Service.
-pub enum Libp2pEvent {
+pub enum Libp2pEvent<TSpec: EthSpec> {
    /// An RPC response request has been received on the swarm.
-    RPC(PeerId, RPCEvent),
+    RPC(PeerId, RPCEvent<TSpec>),
    /// Initiated the connection to a new peer.
    PeerDialed(PeerId),
    /// A peer has disconnected.
@@ -325,7 +333,7 @@ pub enum Libp2pEvent {
        id: MessageId,
        source: PeerId,
        topics: Vec<TopicHash>,
-        message: PubsubMessage,
+        message: PubsubMessage<TSpec>,
    },
    /// Subscribed to peer for a topic hash.
    PeerSubscribed(PeerId, TopicHash),

View File

@@ -1,71 +0,0 @@
use libp2p::gossipsub::Topic;
use serde_derive::{Deserialize, Serialize};
/// The gossipsub topic names.
// These constants form a topic name of the form /TOPIC_PREFIX/TOPIC/ENCODING_POSTFIX
// For example /eth2/beacon_block/ssz
pub const TOPIC_PREFIX: &str = "eth2";
pub const TOPIC_ENCODING_POSTFIX: &str = "ssz";
pub const BEACON_BLOCK_TOPIC: &str = "beacon_block";
pub const BEACON_ATTESTATION_TOPIC: &str = "beacon_attestation";
pub const VOLUNTARY_EXIT_TOPIC: &str = "voluntary_exit";
pub const PROPOSER_SLASHING_TOPIC: &str = "proposer_slashing";
pub const ATTESTER_SLASHING_TOPIC: &str = "attester_slashing";
pub const SHARD_TOPIC_PREFIX: &str = "shard";
/// Enum that brings these topics into the rust type system.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub enum GossipTopic {
BeaconBlock,
BeaconAttestation,
VoluntaryExit,
ProposerSlashing,
AttesterSlashing,
Shard,
Unknown(String),
}
impl From<&str> for GossipTopic {
fn from(topic: &str) -> GossipTopic {
let topic_parts: Vec<&str> = topic.split('/').collect();
if topic_parts.len() == 4
&& topic_parts[1] == TOPIC_PREFIX
&& topic_parts[3] == TOPIC_ENCODING_POSTFIX
{
match topic_parts[2] {
BEACON_BLOCK_TOPIC => GossipTopic::BeaconBlock,
BEACON_ATTESTATION_TOPIC => GossipTopic::BeaconAttestation,
VOLUNTARY_EXIT_TOPIC => GossipTopic::VoluntaryExit,
PROPOSER_SLASHING_TOPIC => GossipTopic::ProposerSlashing,
ATTESTER_SLASHING_TOPIC => GossipTopic::AttesterSlashing,
unknown_topic => GossipTopic::Unknown(unknown_topic.into()),
}
} else {
GossipTopic::Unknown(topic.into())
}
}
}
impl Into<Topic> for GossipTopic {
fn into(self) -> Topic {
Topic::new(self.into())
}
}
impl Into<String> for GossipTopic {
fn into(self) -> String {
match self {
GossipTopic::BeaconBlock => topic_builder(BEACON_BLOCK_TOPIC),
GossipTopic::BeaconAttestation => topic_builder(BEACON_ATTESTATION_TOPIC),
GossipTopic::VoluntaryExit => topic_builder(VOLUNTARY_EXIT_TOPIC),
GossipTopic::ProposerSlashing => topic_builder(PROPOSER_SLASHING_TOPIC),
GossipTopic::AttesterSlashing => topic_builder(ATTESTER_SLASHING_TOPIC),
GossipTopic::Shard => topic_builder(SHARD_TOPIC_PREFIX),
GossipTopic::Unknown(topic) => topic,
}
}
}
fn topic_builder(topic: &'static str) -> String {
format!("/{}/{}/{}", TOPIC_PREFIX, topic, TOPIC_ENCODING_POSTFIX,)
}

View File

@@ -0,0 +1,68 @@
//! A collection of variables that are accessible outside of the network thread itself.
use crate::{Enr, GossipTopic, Multiaddr, PeerId, PeerInfo};
use parking_lot::RwLock;
use std::collections::HashMap;
use std::sync::atomic::{AtomicU16, Ordering};
use types::EthSpec;
pub struct NetworkGlobals<TSpec: EthSpec> {
/// The current local ENR.
pub local_enr: RwLock<Option<Enr>>,
/// The local peer_id.
pub peer_id: RwLock<PeerId>,
/// Listening multiaddrs.
pub listen_multiaddrs: RwLock<Vec<Multiaddr>>,
/// The tcp port that the libp2p service is listening on
pub listen_port_tcp: AtomicU16,
/// The udp port that the discovery service is listening on
pub listen_port_udp: AtomicU16,
/// The collection of currently connected peers.
pub connected_peer_set: RwLock<HashMap<PeerId, PeerInfo<TSpec>>>,
/// The current gossipsub topic subscriptions.
pub gossipsub_subscriptions: RwLock<Vec<GossipTopic>>,
}
impl<TSpec: EthSpec> NetworkGlobals<TSpec> {
pub fn new(peer_id: PeerId, tcp_port: u16, udp_port: u16) -> Self {
NetworkGlobals {
local_enr: RwLock::new(None),
peer_id: RwLock::new(peer_id),
listen_multiaddrs: RwLock::new(Vec::new()),
listen_port_tcp: AtomicU16::new(tcp_port),
listen_port_udp: AtomicU16::new(udp_port),
connected_peer_set: RwLock::new(HashMap::new()),
gossipsub_subscriptions: RwLock::new(Vec::new()),
}
}
/// Returns the local ENR from the underlying Discv5 behaviour that external peers may connect
/// to.
pub fn local_enr(&self) -> Option<Enr> {
self.local_enr.read().clone()
}
/// Returns the local libp2p PeerID.
pub fn local_peer_id(&self) -> PeerId {
self.peer_id.read().clone()
}
/// Returns the list of `Multiaddr` that the underlying libp2p instance is listening on.
pub fn listen_multiaddrs(&self) -> Vec<Multiaddr> {
self.listen_multiaddrs.read().clone()
}
/// Returns the libp2p TCP port that this node has been configured to listen on.
pub fn listen_port_tcp(&self) -> u16 {
self.listen_port_tcp.load(Ordering::Relaxed)
}
/// Returns the UDP discovery port that this node has been configured to listen on.
pub fn listen_port_udp(&self) -> u16 {
self.listen_port_udp.load(Ordering::Relaxed)
}
/// Returns the number of libp2p connected peers.
pub fn connected_peers(&self) -> usize {
self.connected_peer_set.read().len()
}
}

View File

@@ -0,0 +1,10 @@
pub mod error;
mod globals;
mod peer_info;
mod pubsub;
mod topics;
pub use globals::NetworkGlobals;
pub use peer_info::{EnrBitfield, PeerInfo};
pub use pubsub::{PubsubData, PubsubMessage};
pub use topics::{GossipEncoding, GossipKind, GossipTopic};

View File

@@ -0,0 +1,45 @@
//NOTE: This should be removed in favour of the PeerManager PeerInfo, once built.
use types::{BitVector, EthSpec, SubnetId};
#[allow(type_alias_bounds)]
pub type EnrBitfield<T: EthSpec> = BitVector<T::SubnetBitfieldLength>;
/// Information about a given connected peer.
#[derive(Debug, Clone)]
pub struct PeerInfo<T: EthSpec> {
/// The current syncing state of the peer. The state may be determined after its initial
/// connection.
pub syncing_state: Option<PeerSyncingState>,
/// The ENR subnet bitfield of the peer. This may be determined after its initial
/// connection.
pub enr_bitfield: Option<EnrBitfield<T>>,
}
#[derive(Debug, Clone)]
pub enum PeerSyncingState {
/// At the same state as our node.
Synced,
/// The peer is further ahead than our node and useful for block downloads.
Ahead,
/// Is behind our current head and not useful for block downloads.
Behind,
}
impl<T: EthSpec> PeerInfo<T> {
/// Creates a new PeerInfo with an unknown syncing state and ENR bitfield.
pub fn new() -> Self {
PeerInfo {
syncing_state: None,
enr_bitfield: None,
}
}
/// Returns `true` if the peer is subscribed to the given `SubnetId`.
pub fn on_subnet(&self, subnet_id: SubnetId) -> bool {
if let Some(bitfield) = &self.enr_bitfield {
return bitfield.get(*subnet_id as usize).unwrap_or_else(|_| false);
}
false
}
}

View File

@@ -0,0 +1,170 @@
//! Handles the encoding and decoding of pubsub messages.
use crate::types::{GossipEncoding, GossipKind, GossipTopic};
use crate::TopicHash;
use ssz::{Decode, Encode};
use std::boxed::Box;
use types::SubnetId;
use types::{
Attestation, AttesterSlashing, EthSpec, ProposerSlashing, SignedAggregateAndProof,
SignedBeaconBlock, VoluntaryExit,
};
/// Messages that are passed to and from the pubsub (Gossipsub) behaviour.
#[derive(Debug, Clone, PartialEq)]
pub struct PubsubMessage<T: EthSpec> {
/// The encoding to be used to encode/decode the message
pub encoding: GossipEncoding,
/// The actual message being sent.
pub data: PubsubData<T>,
}
#[derive(Debug, Clone, PartialEq)]
pub enum PubsubData<T: EthSpec> {
/// Gossipsub message providing notification of a new block.
BeaconBlock(Box<SignedBeaconBlock<T>>),
/// Gossipsub message providing notification of an aggregate attestation and associated proof.
AggregateAndProofAttestation(Box<SignedAggregateAndProof<T>>),
/// Gossipsub message providing notification of a raw un-aggregated attestation with its subnet id.
Attestation(Box<(SubnetId, Attestation<T>)>),
/// Gossipsub message providing notification of a voluntary exit.
VoluntaryExit(Box<VoluntaryExit>),
/// Gossipsub message providing notification of a new proposer slashing.
ProposerSlashing(Box<ProposerSlashing>),
/// Gossipsub message providing notification of a new attester slashing.
AttesterSlashing(Box<AttesterSlashing<T>>),
}
impl<T: EthSpec> PubsubMessage<T> {
pub fn new(encoding: GossipEncoding, data: PubsubData<T>) -> Self {
PubsubMessage { encoding, data }
}
/// Returns the topics that each pubsub message will be sent across, given a supported
/// gossipsub encoding.
pub fn topics(&self) -> Vec<GossipTopic> {
let encoding = self.encoding.clone();
match &self.data {
PubsubData::BeaconBlock(_) => vec![GossipTopic::new(GossipKind::BeaconBlock, encoding)],
PubsubData::AggregateAndProofAttestation(_) => vec![GossipTopic::new(
GossipKind::BeaconAggregateAndProof,
encoding,
)],
PubsubData::Attestation(attestation_data) => vec![GossipTopic::new(
GossipKind::CommitteeIndex(attestation_data.0),
encoding,
)],
PubsubData::VoluntaryExit(_) => {
vec![GossipTopic::new(GossipKind::VoluntaryExit, encoding)]
}
PubsubData::ProposerSlashing(_) => {
vec![GossipTopic::new(GossipKind::ProposerSlashing, encoding)]
}
PubsubData::AttesterSlashing(_) => {
vec![GossipTopic::new(GossipKind::AttesterSlashing, encoding)]
}
}
}
/* Note: This is assuming we are not hashing topics. If we choose to hash topics, these will
* need to be modified.
*
* Also note that a message can be associated with many topics. As soon as one of the topics is
* known we match. If none of the topics are known we return an unknown state.
*/
pub fn decode(topics: &[TopicHash], data: &[u8]) -> Result<Self, String> {
let mut unknown_topics = Vec::new();
for topic in topics {
match GossipTopic::decode(topic.as_str()) {
Err(_) => {
unknown_topics.push(topic);
continue;
}
Ok(gossip_topic) => {
match gossip_topic.encoding() {
// group each part by encoding type
GossipEncoding::SSZ => {
// the ssz decoders
let encoding = GossipEncoding::SSZ;
match gossip_topic.kind() {
GossipKind::BeaconAggregateAndProof => {
let agg_and_proof =
SignedAggregateAndProof::from_ssz_bytes(data)
.map_err(|e| format!("{:?}", e))?;
return Ok(PubsubMessage::new(
encoding,
PubsubData::AggregateAndProofAttestation(Box::new(
agg_and_proof,
)),
));
}
GossipKind::CommitteeIndex(subnet_id) => {
let attestation = Attestation::from_ssz_bytes(data)
.map_err(|e| format!("{:?}", e))?;
return Ok(PubsubMessage::new(
encoding,
PubsubData::Attestation(Box::new((
*subnet_id,
attestation,
))),
));
}
GossipKind::BeaconBlock => {
let beacon_block = SignedBeaconBlock::from_ssz_bytes(data)
.map_err(|e| format!("{:?}", e))?;
return Ok(PubsubMessage::new(
encoding,
PubsubData::BeaconBlock(Box::new(beacon_block)),
));
}
GossipKind::VoluntaryExit => {
let voluntary_exit = VoluntaryExit::from_ssz_bytes(data)
.map_err(|e| format!("{:?}", e))?;
return Ok(PubsubMessage::new(
encoding,
PubsubData::VoluntaryExit(Box::new(voluntary_exit)),
));
}
GossipKind::ProposerSlashing => {
let proposer_slashing = ProposerSlashing::from_ssz_bytes(data)
.map_err(|e| format!("{:?}", e))?;
return Ok(PubsubMessage::new(
encoding,
PubsubData::ProposerSlashing(Box::new(proposer_slashing)),
));
}
GossipKind::AttesterSlashing => {
let attester_slashing = AttesterSlashing::from_ssz_bytes(data)
.map_err(|e| format!("{:?}", e))?;
return Ok(PubsubMessage::new(
encoding,
PubsubData::AttesterSlashing(Box::new(attester_slashing)),
));
}
}
}
}
}
}
}
Err(format!("Unknown gossipsub topics: {:?}", unknown_topics))
}
/// Encodes a pubsub message using the message's own encoding type.
pub fn encode(&self) -> Vec<u8> {
match self.encoding {
GossipEncoding::SSZ => {
// SSZ Encodings
return match &self.data {
PubsubData::BeaconBlock(data) => data.as_ssz_bytes(),
PubsubData::AggregateAndProofAttestation(data) => data.as_ssz_bytes(),
PubsubData::VoluntaryExit(data) => data.as_ssz_bytes(),
PubsubData::ProposerSlashing(data) => data.as_ssz_bytes(),
PubsubData::AttesterSlashing(data) => data.as_ssz_bytes(),
PubsubData::Attestation(data) => data.1.as_ssz_bytes(),
};
}
}
}
}
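The decode path above tries each supplied topic in turn and dispatches on the first one that parses; unknown topics are collected and only reported if none match. A minimal sketch of that dispatch rule, with message kinds simplified to strings (the topic strings are the real ones, but the `kind_for_topic`/`decode_kind` helpers are hypothetical):

```rust
// Simplified dispatch mirroring `PubsubMessage::decode`: the first known
// topic wins; unknown topics are only an error when nothing matched.
fn kind_for_topic(topic: &str) -> Option<&'static str> {
    match topic {
        "/eth2/beacon_block/ssz" => Some("BeaconBlock"),
        "/eth2/beacon_aggregate_and_proof/ssz" => Some("AggregateAndProof"),
        "/eth2/voluntary_exit/ssz" => Some("VoluntaryExit"),
        _ => None,
    }
}

fn decode_kind(topics: &[&str]) -> Result<&'static str, String> {
    let mut unknown = Vec::new();
    for t in topics {
        match kind_for_topic(t) {
            Some(kind) => return Ok(kind),
            None => unknown.push(*t),
        }
    }
    Err(format!("Unknown gossipsub topics: {:?}", unknown))
}

fn main() {
    // an unknown topic before a known one does not cause an error
    assert_eq!(
        decode_kind(&["/bogus/topic", "/eth2/beacon_block/ssz"]),
        Ok("BeaconBlock")
    );
    assert!(decode_kind(&["/bogus/topic"]).is_err());
    println!("ok");
}
```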


@@ -0,0 +1,140 @@
use libp2p::gossipsub::Topic;
use serde_derive::{Deserialize, Serialize};
use types::SubnetId;
/// The gossipsub topic names.
// These constants form a topic name of the form /TOPIC_PREFIX/TOPIC/ENCODING_POSTFIX
// For example /eth2/beacon_block/ssz
pub const TOPIC_PREFIX: &str = "eth2";
pub const SSZ_ENCODING_POSTFIX: &str = "ssz";
pub const BEACON_BLOCK_TOPIC: &str = "beacon_block";
pub const BEACON_AGGREGATE_AND_PROOF_TOPIC: &str = "beacon_aggregate_and_proof";
// for speed and easier string manipulation, committee topic index is split into a prefix and a
// postfix. The topic is committee_index{}_beacon_attestation where {} is an integer.
pub const COMMITEE_INDEX_TOPIC_PREFIX: &str = "committee_index";
pub const COMMITEE_INDEX_TOPIC_POSTFIX: &str = "_beacon_attestation";
pub const VOLUNTARY_EXIT_TOPIC: &str = "voluntary_exit";
pub const PROPOSER_SLASHING_TOPIC: &str = "proposer_slashing";
pub const ATTESTER_SLASHING_TOPIC: &str = "attester_slashing";
/// A gossipsub topic which encapsulates the type of messages that should be sent and received over
/// the pubsub protocol and the way the messages should be encoded.
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]
pub struct GossipTopic {
/// The encoding of the topic.
encoding: GossipEncoding,
/// The kind of topic.
kind: GossipKind,
}
/// Enum that brings these topics into the rust type system.
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]
pub enum GossipKind {
/// Topic for publishing beacon blocks.
BeaconBlock,
/// Topic for publishing aggregate attestations and proofs.
BeaconAggregateAndProof,
/// Topic for publishing raw attestations on a particular subnet.
CommitteeIndex(SubnetId),
/// Topic for publishing voluntary exits.
VoluntaryExit,
/// Topic for publishing block proposer slashings.
ProposerSlashing,
/// Topic for publishing attester slashings.
AttesterSlashing,
}
/// The known encoding types for gossipsub messages.
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)]
pub enum GossipEncoding {
/// Messages are encoded with SSZ.
SSZ,
}
impl GossipTopic {
pub fn new(kind: GossipKind, encoding: GossipEncoding) -> Self {
GossipTopic { encoding, kind }
}
/// Returns the encoding type for the gossipsub topic.
pub fn encoding(&self) -> &GossipEncoding {
&self.encoding
}
/// Returns the kind of message expected on the gossipsub topic.
pub fn kind(&self) -> &GossipKind {
&self.kind
}
pub fn decode(topic: &str) -> Result<Self, String> {
let topic_parts: Vec<&str> = topic.split('/').collect();
if topic_parts.len() == 4 && topic_parts[1] == TOPIC_PREFIX {
let encoding = match topic_parts[3] {
SSZ_ENCODING_POSTFIX => GossipEncoding::SSZ,
_ => return Err(format!("Unknown encoding: {}", topic)),
};
let kind = match topic_parts[2] {
BEACON_BLOCK_TOPIC => GossipKind::BeaconBlock,
BEACON_AGGREGATE_AND_PROOF_TOPIC => GossipKind::BeaconAggregateAndProof,
VOLUNTARY_EXIT_TOPIC => GossipKind::VoluntaryExit,
PROPOSER_SLASHING_TOPIC => GossipKind::ProposerSlashing,
ATTESTER_SLASHING_TOPIC => GossipKind::AttesterSlashing,
topic => match committee_topic_index(topic) {
Some(subnet_id) => GossipKind::CommitteeIndex(subnet_id),
None => return Err(format!("Unknown topic: {}", topic)),
},
};
return Ok(GossipTopic { encoding, kind });
}
Err(format!("Unknown topic: {}", topic))
}
}
impl Into<Topic> for GossipTopic {
fn into(self) -> Topic {
Topic::new(self.into())
}
}
impl Into<String> for GossipTopic {
fn into(self) -> String {
let encoding = match self.encoding {
GossipEncoding::SSZ => SSZ_ENCODING_POSTFIX,
};
let kind = match self.kind {
GossipKind::BeaconBlock => BEACON_BLOCK_TOPIC.into(),
GossipKind::BeaconAggregateAndProof => BEACON_AGGREGATE_AND_PROOF_TOPIC.into(),
GossipKind::VoluntaryExit => VOLUNTARY_EXIT_TOPIC.into(),
GossipKind::ProposerSlashing => PROPOSER_SLASHING_TOPIC.into(),
GossipKind::AttesterSlashing => ATTESTER_SLASHING_TOPIC.into(),
GossipKind::CommitteeIndex(index) => format!(
"{}{}{}",
COMMITEE_INDEX_TOPIC_PREFIX, *index, COMMITEE_INDEX_TOPIC_POSTFIX
),
};
format!("/{}/{}/{}", TOPIC_PREFIX, kind, encoding)
}
}
// helper functions
// Determines if a string is a committee topic.
fn committee_topic_index(topic: &str) -> Option<SubnetId> {
if topic.starts_with(COMMITEE_INDEX_TOPIC_PREFIX)
&& topic.ends_with(COMMITEE_INDEX_TOPIC_POSTFIX)
{
return Some(SubnetId::new(
u64::from_str_radix(
topic
.trim_start_matches(COMMITEE_INDEX_TOPIC_PREFIX)
.trim_end_matches(COMMITEE_INDEX_TOPIC_POSTFIX),
10,
)
.ok()?,
));
}
None
}
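A sketch of the committee-subnet topic round trip implemented above (the `Into<String>` formatting and `committee_topic_index` parsing), with `SubnetId` simplified to a plain `u64`; the helper names here are hypothetical:

```rust
// Round trip for the committee-index topic, e.g.
// /eth2/committee_index5_beacon_attestation/ssz
const PREFIX: &str = "committee_index";
const POSTFIX: &str = "_beacon_attestation";

fn committee_topic(subnet_id: u64) -> String {
    format!("/eth2/{}{}{}/ssz", PREFIX, subnet_id, POSTFIX)
}

// Parses the middle path segment, as `committee_topic_index` does.
fn parse_committee_index(kind: &str) -> Option<u64> {
    if kind.starts_with(PREFIX) && kind.ends_with(POSTFIX) {
        kind.trim_start_matches(PREFIX)
            .trim_end_matches(POSTFIX)
            .parse::<u64>()
            .ok()
    } else {
        None
    }
}

fn main() {
    let topic = committee_topic(5);
    assert_eq!(topic, "/eth2/committee_index5_beacon_attestation/ssz");
    // extract the kind segment, as `GossipTopic::decode` does with split('/')
    let kind = topic.split('/').nth(2).unwrap();
    assert_eq!(parse_committee_index(kind), Some(5));
    assert_eq!(parse_committee_index("beacon_block"), None);
    println!("ok");
}
```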


@@ -5,6 +5,9 @@ use eth2_libp2p::NetworkConfig;
use eth2_libp2p::Service as LibP2PService;
use slog::{debug, error, o, Drain};
use std::time::Duration;
+use types::MinimalEthSpec;
+type E = MinimalEthSpec;
use tempdir::TempDir;
pub fn build_log(level: slog::Level, enabled: bool) -> slog::Logger {
@@ -43,7 +46,7 @@ pub fn build_libp2p_instance(
boot_nodes: Vec<Enr>,
secret_key: Option<String>,
log: slog::Logger,
-) -> LibP2PService {
+) -> LibP2PService<E> {
let config = build_config(port, boot_nodes, secret_key);
// launch libp2p service
LibP2PService::new(&config, log.clone())
@@ -52,15 +55,19 @@ pub fn build_libp2p_instance(
}
#[allow(dead_code)]
-pub fn get_enr(node: &LibP2PService) -> Enr {
+pub fn get_enr(node: &LibP2PService<E>) -> Enr {
node.swarm.discovery().local_enr().clone()
}
// Returns `n` libp2p peers in fully connected topology.
#[allow(dead_code)]
-pub fn build_full_mesh(log: slog::Logger, n: usize, start_port: Option<u16>) -> Vec<LibP2PService> {
+pub fn build_full_mesh(
+log: slog::Logger,
+n: usize,
+start_port: Option<u16>,
+) -> Vec<LibP2PService<E>> {
let base_port = start_port.unwrap_or(9000);
-let mut nodes: Vec<LibP2PService> = (base_port..base_port + n as u16)
+let mut nodes: Vec<LibP2PService<E>> = (base_port..base_port + n as u16)
.map(|p| build_libp2p_instance(p, vec![], None, log.clone()))
.collect();
let multiaddrs: Vec<Multiaddr> = nodes
@@ -84,7 +91,10 @@ pub fn build_full_mesh(log: slog::Logger, n: usize, start_port: Option<u16>) ->
// Constructs a pair of nodes with separate loggers. The sender dials the receiver.
// This returns a (sender, receiver) pair.
#[allow(dead_code)]
-pub fn build_node_pair(log: &slog::Logger, start_port: u16) -> (LibP2PService, LibP2PService) {
+pub fn build_node_pair(
+log: &slog::Logger,
+start_port: u16,
+) -> (LibP2PService<E>, LibP2PService<E>) {
let sender_log = log.new(o!("who" => "sender"));
let receiver_log = log.new(o!("who" => "receiver"));
@@ -101,9 +111,9 @@ pub fn build_node_pair(log: &slog::Logger, start_port: u16) -> (LibP2PService, L
// Returns `n` peers in a linear topology
#[allow(dead_code)]
-pub fn build_linear(log: slog::Logger, n: usize, start_port: Option<u16>) -> Vec<LibP2PService> {
+pub fn build_linear(log: slog::Logger, n: usize, start_port: Option<u16>) -> Vec<LibP2PService<E>> {
let base_port = start_port.unwrap_or(9000);
-let mut nodes: Vec<LibP2PService> = (base_port..base_port + n as u16)
+let mut nodes: Vec<LibP2PService<E>> = (base_port..base_port + n as u16)
.map(|p| build_libp2p_instance(p, vec![], None, log.clone()))
.collect();
let multiaddrs: Vec<Multiaddr> = nodes


@@ -1,8 +1,12 @@
#![cfg(test)]
+use crate::types::GossipEncoding;
+use ::types::{BeaconBlock, EthSpec, MinimalEthSpec, Signature, SignedBeaconBlock};
use eth2_libp2p::*;
use futures::prelude::*;
use slog::{debug, Level};
+type E = MinimalEthSpec;
mod common;
/* Gossipsub tests */
@@ -23,7 +27,14 @@ fn test_gossipsub_forward() {
let num_nodes = 20;
let mut nodes = common::build_linear(log.clone(), num_nodes, Some(19000));
let mut received_count = 0;
-let pubsub_message = PubsubMessage::Block(vec![0; 4]);
+let spec = E::default_spec();
+let empty_block = BeaconBlock::empty(&spec);
+let signed_block = SignedBeaconBlock {
+message: empty_block,
+signature: Signature::empty_signature(),
+};
+let data = PubsubData::BeaconBlock(Box::new(signed_block));
+let pubsub_message = PubsubMessage::new(GossipEncoding::SSZ, data);
let publishing_topic: String = "/eth2/beacon_block/ssz".into();
let mut subscribed_count = 0;
tokio::run(futures::future::poll_fn(move || -> Result<_, ()> {
@@ -61,10 +72,7 @@ fn test_gossipsub_forward() {
subscribed_count += 1;
// Every node except the corner nodes is connected to 2 nodes.
if subscribed_count == (num_nodes * 2) - 2 {
-node.swarm.publish(
-&[Topic::new(topic.into_string())],
-pubsub_message.clone(),
-);
+node.swarm.publish(vec![pubsub_message.clone()]);
}
}
}
@@ -90,7 +98,14 @@ fn test_gossipsub_full_mesh_publish() {
let num_nodes = 12;
let mut nodes = common::build_full_mesh(log, num_nodes, Some(11320));
let mut publishing_node = nodes.pop().unwrap();
-let pubsub_message = PubsubMessage::Block(vec![0; 4]);
+let spec = E::default_spec();
+let empty_block = BeaconBlock::empty(&spec);
+let signed_block = SignedBeaconBlock {
+message: empty_block,
+signature: Signature::empty_signature(),
+};
+let data = PubsubData::BeaconBlock(Box::new(signed_block));
+let pubsub_message = PubsubMessage::new(GossipEncoding::SSZ, data);
let publishing_topic: String = "/eth2/beacon_block/ssz".into();
let mut subscribed_count = 0;
let mut received_count = 0;
@@ -123,9 +138,7 @@ fn test_gossipsub_full_mesh_publish() {
if topic == TopicHash::from_raw("/eth2/beacon_block/ssz") {
subscribed_count += 1;
if subscribed_count == num_nodes - 1 {
-publishing_node
-.swarm
-.publish(&[Topic::new(topic.into_string())], pubsub_message.clone());
+publishing_node.swarm.publish(vec![pubsub_message.clone()]);
}
}
}


@@ -1,6 +1,7 @@
#![cfg(test)]
use crate::behaviour::{Behaviour, BehaviourEvent};
use crate::multiaddr::Protocol;
+use ::types::MinimalEthSpec;
use eth2_libp2p::*;
use futures::prelude::*;
use libp2p::core::identity::Keypair;
@@ -16,10 +17,12 @@ use std::sync::Arc;
use std::time::Duration;
use tokio::prelude::*;
+type TSpec = MinimalEthSpec;
mod common;
type Libp2pStream = Boxed<(PeerId, StreamMuxerBox), Error>;
-type Libp2pBehaviour = Behaviour<Substream<StreamMuxerBox>>;
+type Libp2pBehaviour = Behaviour<Substream<StreamMuxerBox>, TSpec>;
/// Build and return an eth2_libp2p Swarm with only secio support.
fn build_secio_swarm(
@@ -29,7 +32,11 @@ fn build_secio_swarm(
let local_keypair = Keypair::generate_secp256k1();
let local_peer_id = PeerId::from(local_keypair.public());
-let network_globals = Arc::new(NetworkGlobals::new(local_peer_id.clone()));
+let network_globals = Arc::new(NetworkGlobals::new(
+local_peer_id.clone(),
+config.libp2p_port,
+config.discovery_port,
+));
let mut swarm = {
// Set up the transport - tcp/ws with secio and mplex/yamux

@@ -7,10 +7,14 @@ use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use tokio::prelude::*;
-use types::{Epoch, Hash256, Slot};
+use types::{
+BeaconBlock, Epoch, EthSpec, Hash256, MinimalEthSpec, Signature, SignedBeaconBlock, Slot,
+};
mod common;
+type E = MinimalEthSpec;
#[test]
// Tests the STATUS RPC message
fn test_status_rpc() {
@@ -73,7 +77,7 @@ fn test_status_rpc() {
warn!(sender_log, "Sender Completed");
return Ok(Async::Ready(true));
}
-_ => panic!("Received invalid RPC message"),
+e => panic!("Received invalid RPC message {}", e),
},
Async::Ready(Some(_)) => (),
Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
@@ -98,7 +102,7 @@ fn test_status_rpc() {
RPCEvent::Response(id, RPCErrorResponse::Success(rpc_response.clone())),
);
}
-_ => panic!("Received invalid RPC message"),
+e => panic!("Received invalid RPC message {}", e),
},
Async::Ready(Some(_)) => (),
Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
@@ -145,7 +149,13 @@ fn test_blocks_by_range_chunked_rpc() {
});
// BlocksByRange Response
-let rpc_response = RPCResponse::BlocksByRange(vec![13, 13, 13]);
+let spec = E::default_spec();
+let empty_block = BeaconBlock::empty(&spec);
+let empty_signed = SignedBeaconBlock {
+message: empty_block,
+signature: Signature::empty_signature(),
+};
+let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));
let sender_request = rpc_request.clone();
let sender_log = log.clone();
@@ -272,7 +282,13 @@ fn test_blocks_by_range_single_empty_rpc() {
});
// BlocksByRange Response
-let rpc_response = RPCResponse::BlocksByRange(vec![]);
+let spec = E::default_spec();
+let empty_block = BeaconBlock::empty(&spec);
+let empty_signed = SignedBeaconBlock {
+message: empty_block,
+signature: Signature::empty_signature(),
+};
+let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));
let sender_request = rpc_request.clone();
let sender_log = log.clone();
@@ -373,132 +389,6 @@ fn test_blocks_by_range_single_empty_rpc() {
assert!(test_result.load(Relaxed));
}
#[test]
// Tests a streamed, chunked BlocksByRoot RPC Message
fn test_blocks_by_root_chunked_rpc() {
// set up the logging. The level and enabled logging or not
let log_level = Level::Trace;
let enable_logging = false;
let messages_to_send = 3;
let log = common::build_log(log_level, enable_logging);
// get sender/receiver
let (mut sender, mut receiver) = common::build_node_pair(&log, 10515);
// BlocksByRoot Request
let rpc_request = RPCRequest::BlocksByRoot(BlocksByRootRequest {
block_roots: vec![Hash256::from_low_u64_be(0), Hash256::from_low_u64_be(0)],
});
// BlocksByRoot Response
let rpc_response = RPCResponse::BlocksByRoot(vec![13, 13, 13]);
let sender_request = rpc_request.clone();
let sender_log = log.clone();
let sender_response = rpc_response.clone();
// keep count of the number of messages received
let messages_received = Arc::new(Mutex::new(0));
// build the sender future
let sender_future = future::poll_fn(move || -> Poll<bool, ()> {
loop {
match sender.poll().unwrap() {
Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id))) => {
// Send a BlocksByRoot request
warn!(sender_log, "Sender sending RPC request");
sender
.swarm
.send_rpc(peer_id, RPCEvent::Request(1, sender_request.clone()));
}
Async::Ready(Some(Libp2pEvent::RPC(_, event))) => match event {
// Should receive the RPC response
RPCEvent::Response(id, response) => {
warn!(sender_log, "Sender received a response");
assert_eq!(id, 1);
match response {
RPCErrorResponse::Success(res) => {
assert_eq!(res, sender_response.clone());
*messages_received.lock().unwrap() += 1;
warn!(sender_log, "Chunk received");
}
RPCErrorResponse::StreamTermination(
ResponseTermination::BlocksByRoot,
) => {
// should be exactly 10 messages before terminating
assert_eq!(*messages_received.lock().unwrap(), messages_to_send);
// end the test
return Ok(Async::Ready(true));
}
m => panic!("Invalid RPC received: {}", m),
}
}
m => panic!("Received invalid RPC message: {}", m),
},
Async::Ready(Some(_)) => {}
Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
};
}
});
// build the receiver future
let receiver_future = future::poll_fn(move || -> Poll<bool, ()> {
loop {
match receiver.poll().unwrap() {
Async::Ready(Some(Libp2pEvent::RPC(peer_id, event))) => match event {
// Should receive the sent RPC request
RPCEvent::Request(id, request) => {
assert_eq!(id, 1);
assert_eq!(rpc_request.clone(), request);
// send the response
warn!(log, "Receiver got request");
for _ in 1..=messages_to_send {
receiver.swarm.send_rpc(
peer_id.clone(),
RPCEvent::Response(
id,
RPCErrorResponse::Success(rpc_response.clone()),
),
);
}
// send the stream termination
receiver.swarm.send_rpc(
peer_id,
RPCEvent::Response(
id,
RPCErrorResponse::StreamTermination(
ResponseTermination::BlocksByRoot,
),
),
);
}
_ => panic!("Received invalid RPC message"),
},
Async::Ready(Some(_)) => (),
Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
}
}
});
// execute the futures and check the result
let test_result = Arc::new(AtomicBool::new(false));
let error_result = test_result.clone();
let thread_result = test_result.clone();
tokio::run(
sender_future
.select(receiver_future)
.timeout(Duration::from_millis(1000))
.map_err(move |_| error_result.store(false, Relaxed))
.map(move |result| {
thread_result.store(result.0, Relaxed);
}),
);
assert!(test_result.load(Relaxed));
}
#[test]
// Tests a Goodbye RPC message
fn test_goodbye_rpc() {


@@ -1,6 +1,6 @@
[package]
name = "genesis"
-version = "0.1.0"
+version = "0.2.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"


@@ -1,6 +1,6 @@
[package]
name = "network"
-version = "0.1.0"
+version = "0.2.0"
authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"
@@ -13,7 +13,10 @@ tempdir = "0.3"
beacon_chain = { path = "../beacon_chain" }
store = { path = "../store" }
eth2-libp2p = { path = "../eth2-libp2p" }
+hashmap_delay = { path = "../../eth2/utils/hashmap_delay" }
+rest_types = { path = "../../eth2/utils/rest_types" }
types = { path = "../../eth2/types" }
+slot_clock = { path = "../../eth2/utils/slot_clock" }
slog = { version = "2.5.2", features = ["max_level_trace"] }
hex = "0.3"
eth2_ssz = "0.1.2"


@@ -0,0 +1,575 @@
//! This service keeps track of which shard subnet the beacon node should be subscribed to at any
//! given time. It schedules subscriptions to shard subnets, requests peer discoveries and
//! determines whether attestations should be aggregated and/or passed to the beacon node.
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{types::GossipKind, NetworkGlobals};
use futures::prelude::*;
use hashmap_delay::HashSetDelay;
use rand::seq::SliceRandom;
use rest_types::ValidatorSubscription;
use slog::{crit, debug, error, o, warn};
use slot_clock::SlotClock;
use std::boxed::Box;
use std::collections::VecDeque;
use std::sync::Arc;
use std::time::{Duration, Instant};
use types::{Attestation, SubnetId};
use types::{EthSpec, Slot};
/// The minimum number of slots ahead that we attempt to discover peers for a subscription. If the
/// slot is less than this number, skip the peer discovery process.
const MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 1;
/// The number of slots ahead that we attempt to discover peers for a subscription. If the slot to
/// attest to is greater than this, we queue a discovery request for this many slots prior to
/// subscribing.
const TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD: u64 = 6;
/// The time (in seconds) before a last seen validator is considered absent and we unsubscribe from the random
/// gossip topics that we subscribed to due to the validator connection.
const LAST_SEEN_VALIDATOR_TIMEOUT: u64 = 1800; // 30 mins
/// The number of seconds in advance that we subscribe to a subnet before the required slot.
const ADVANCE_SUBSCRIBE_SECS: u64 = 3;
#[derive(Debug, PartialEq)]
pub enum AttServiceMessage {
/// Subscribe to the specified subnet id.
Subscribe(SubnetId),
/// Unsubscribe from the specified subnet id.
Unsubscribe(SubnetId),
/// Add the `SubnetId` to the ENR bitfield.
EnrAdd(SubnetId),
/// Remove the `SubnetId` from the ENR bitfield.
EnrRemove(SubnetId),
/// Discover peers for a particular subnet.
DiscoverPeers(SubnetId),
}
pub struct AttestationService<T: BeaconChainTypes> {
/// Queued events to return to the driving service.
events: VecDeque<AttServiceMessage>,
/// A collection of public network variables.
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
/// A reference to the beacon chain to process received attestations.
beacon_chain: Arc<BeaconChain<T>>,
/// The collection of currently subscribed random subnets mapped to their expiry deadline.
random_subnets: HashSetDelay<SubnetId>,
/// A collection of timeouts for when to start searching for peers for a particular shard.
discover_peers: HashSetDelay<(SubnetId, Slot)>,
/// A collection of timeouts for when to subscribe to a shard subnet.
subscriptions: HashSetDelay<(SubnetId, Slot)>,
/// A collection of timeouts for when to unsubscribe from a shard subnet.
unsubscriptions: HashSetDelay<(SubnetId, Slot)>,
/// A collection of seen validators. These dictate how many random subnets we should be
/// subscribed to. As these time out, we unsubscribe for the required random subnets and update
/// our ENR.
/// This is a set of validator indices.
known_validators: HashSetDelay<u64>,
/// The logger for the attestation service.
log: slog::Logger,
}
impl<T: BeaconChainTypes> AttestationService<T> {
/* Public functions */
pub fn new(
beacon_chain: Arc<BeaconChain<T>>,
network_globals: Arc<NetworkGlobals<T::EthSpec>>,
log: &slog::Logger,
) -> Self {
let log = log.new(o!("service" => "attestation_service"));
// calculate the random subnet duration from the spec constants
let spec = &beacon_chain.spec;
let random_subnet_duration_millis = spec
.epochs_per_random_subnet_subscription
.saturating_mul(T::EthSpec::slots_per_epoch())
.saturating_mul(spec.milliseconds_per_slot);
AttestationService {
events: VecDeque::with_capacity(10),
network_globals,
beacon_chain,
random_subnets: HashSetDelay::new(Duration::from_millis(random_subnet_duration_millis)),
discover_peers: HashSetDelay::default(),
subscriptions: HashSetDelay::default(),
unsubscriptions: HashSetDelay::default(),
known_validators: HashSetDelay::new(Duration::from_secs(LAST_SEEN_VALIDATOR_TIMEOUT)),
log,
}
}
/// Processes a list of validator subscriptions.
///
/// This will:
/// - Register new validators as being known.
/// - Subscribe to the required number of random subnets.
/// - Update the local ENR for new random subnets due to seeing new validators.
/// - Search for peers for required subnets.
/// - Request subscriptions for subnets on specific slots when required.
/// - Build the timeouts for each of these events.
///
/// This returns a result simply for the ergonomics of using ?. The result can be
/// safely dropped.
pub fn validator_subscriptions(
&mut self,
subscriptions: Vec<ValidatorSubscription>,
) -> Result<(), ()> {
for subscription in subscriptions {
//NOTE: We assume all subscriptions have been verified before reaching this service
// Registers the validator with the attestation service.
// This will subscribe to long-lived random subnets if required.
self.add_known_validator(subscription.validator_index);
let subnet_id = SubnetId::new(
subscription.attestation_committee_index
% self.beacon_chain.spec.attestation_subnet_count,
);
// determine if we should run a discovery lookup request and request it if required
let _ = self.discover_peers_request(subnet_id, subscription.slot);
// set the subscription timer to subscribe to the next subnet if required
let _ = self.subscribe_to_subnet(subnet_id, subscription.slot);
}
Ok(())
}
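The subnet mapping in `validator_subscriptions` above is a plain modulo reduction of the committee index by the spec's `attestation_subnet_count` (64 in the v0.10 spec). A sketch with the computation pulled into a free function (the `subnet_for_committee` helper is hypothetical):

```rust
// How a validator subscription maps to an attestation subnet: the
// attestation committee index is reduced modulo the subnet count.
fn subnet_for_committee(committee_index: u64, attestation_subnet_count: u64) -> u64 {
    committee_index % attestation_subnet_count
}

fn main() {
    let subnet_count = 64; // ATTESTATION_SUBNET_COUNT in the v0.10 spec
    assert_eq!(subnet_for_committee(5, subnet_count), 5);
    // committee indices at or beyond the subnet count wrap around
    assert_eq!(subnet_for_committee(64, subnet_count), 0);
    assert_eq!(subnet_for_committee(70, subnet_count), 6);
    println!("ok");
}
```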
pub fn handle_attestation(
&mut self,
subnet: SubnetId,
attestation: Box<Attestation<T::EthSpec>>,
) {
}
/* Internal private functions */
/// Checks if there are currently queued discovery requests and the time required to make the
/// request.
///
/// If there is sufficient time and no other request exists, queues a peer discovery request
/// for the required subnet.
fn discover_peers_request(
&mut self,
subnet_id: SubnetId,
subscription_slot: Slot,
) -> Result<(), ()> {
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
// if there is enough time to perform a discovery lookup
if subscription_slot >= current_slot.saturating_add(MIN_PEER_DISCOVERY_SLOT_LOOK_AHEAD) {
// check if a discovery request already exists
if self
.discover_peers
.get(&(subnet_id, subscription_slot))
.is_some()
{
// already a request queued, end
return Ok(());
}
// check the current event queue to see if a discovery event is already queued
if self
.events
.iter()
.any(|event| event == &AttServiceMessage::DiscoverPeers(subnet_id))
{
// already queued a discovery event
return Ok(());
}
// if the subscription slot is within the target look-ahead, start looking for peers now
if subscription_slot
< current_slot.saturating_add(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD)
{
// then instantly add a discovery request
self.events
.push_back(AttServiceMessage::DiscoverPeers(subnet_id));
} else {
// Queue the discovery event to be executed for
// TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD
let duration_to_discover = {
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_discover = subscription_slot
.saturating_sub(current_slot)
.saturating_sub(1u64)
.saturating_sub(TARGET_PEER_DISCOVERY_SLOT_LOOK_AHEAD);
duration_to_next_slot + slot_duration * (slots_until_discover.as_u64() as u32)
};
self.discover_peers
.insert_at((subnet_id, subscription_slot), duration_to_discover);
}
}
Ok(())
}
/// Checks the current random subnets and subscriptions to determine if a new subscription for this
/// subnet is required for the given slot.
///
/// If required, adds a subscription event and an associated unsubscription event.
fn subscribe_to_subnet(
&mut self,
subnet_id: SubnetId,
subscription_slot: Slot,
) -> Result<(), ()> {
// initialise timing variables
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
let advance_subscription_duration = Duration::from_secs(ADVANCE_SUBSCRIBE_SECS);
// calculate the time to subscribe to the subnet
let duration_to_subscribe = {
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
// The -1 is done here to exclude the current slot duration, as we will use
// `duration_to_next_slot`.
let slots_until_subscribe = subscription_slot
.saturating_sub(current_slot)
.saturating_sub(1u64);
duration_to_next_slot + slot_duration * (slots_until_subscribe.as_u64() as u32)
- advance_subscription_duration
};
// the duration until we no longer need this subscription. We assume a single slot is
// sufficient.
let expected_end_subscription_duration =
duration_to_subscribe + slot_duration + advance_subscription_duration;
// Checks on current subscriptions
// Note: We may be connected to a long-lived random subnet. In this case we still add the
// subscription timeout and check this case when the timeout fires. This is because a
// long-lived random subnet can be unsubscribed at any time when a validator becomes
// inactive. This case is checked on the subscription event (see `handle_subscriptions`).
// Return if we already have a subscription for this subnet_id and slot
if self.subscriptions.contains(&(subnet_id, subscription_slot)) {
return Ok(());
}
// We are not currently subscribed and have no waiting subscription, create one
self.subscriptions
.insert_at((subnet_id, subscription_slot), duration_to_subscribe);
// If there is an unsubscription event for the prior slot, remove it to prevent
// unsubscribing immediately after subscribing. We also want to minimize
// subscription churn and maintain consecutive subnet subscriptions.
self.unsubscriptions
.remove(&(subnet_id, subscription_slot.saturating_sub(1u64)));
// add an unsubscription event to remove ourselves from the subnet once completed
self.unsubscriptions.insert_at(
(subnet_id, subscription_slot),
expected_end_subscription_duration,
);
Ok(())
}
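The `duration_to_subscribe` arithmetic above can be exercised in isolation. This is a minimal sketch using only `std::time::Duration`; the concrete slot length, time-to-next-slot and advance window below are illustrative assumptions, not the chain's real parameters:

```rust
use std::time::Duration;

/// Mirrors the timing logic of `subscribe_to_subnet`: wait until the start of
/// `subscription_slot`, then subtract an advance window so we join early.
fn duration_to_subscribe(
    current_slot: u64,
    subscription_slot: u64,
    duration_to_next_slot: Duration,
    slot_duration: Duration,
    advance: Duration,
) -> Duration {
    // The -1 excludes the current slot, which `duration_to_next_slot` already covers.
    let slots_until_subscribe = subscription_slot
        .saturating_sub(current_slot)
        .saturating_sub(1);
    // Note: like the original, this panics if the advance window exceeds the total wait.
    duration_to_next_slot + slot_duration * (slots_until_subscribe as u32) - advance
}

fn main() {
    // Illustrative values: 12s slots, 4s until the next slot boundary,
    // a subscription 3 slots ahead, and a 3s advance window.
    let d = duration_to_subscribe(
        10,
        13,
        Duration::from_secs(4),
        Duration::from_secs(12),
        Duration::from_secs(3),
    );
    // 4s to next slot + 2 full slots (24s) - 3s advance = 25s.
    assert_eq!(d, Duration::from_secs(25));
    println!("subscribe in {:?}", d);
}
```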
/// Updates the `known_validators` mapping and subscribes to a set of random subnets if required.
///
/// This also updates the ENR to indicate our long-lived subscription to the subnet
fn add_known_validator(&mut self, validator_index: u64) {
if self.known_validators.get(&validator_index).is_none() {
// New validator has subscribed
// Subscribe to random topics and update the ENR if needed.
let spec = &self.beacon_chain.spec;
if self.random_subnets.len() < spec.attestation_subnet_count as usize {
// Still room for subscriptions
self.subscribe_to_random_subnets(
self.beacon_chain.spec.random_subnets_per_validator as usize,
);
}
}
// add the new validator or update the current timeout for a known validator
self.known_validators.insert(validator_index);
}
/// Subscribe to long-lived random subnets and update the local ENR bitfield.
fn subscribe_to_random_subnets(&mut self, no_subnets_to_subscribe: usize) {
let subnet_count = self.beacon_chain.spec.attestation_subnet_count;
// Build a list of random subnets that we are not currently subscribed to.
let available_subnets = (0..subnet_count)
.map(SubnetId::new)
.filter(|subnet_id| self.random_subnets.get(subnet_id).is_none())
.collect::<Vec<_>>();
let to_subscribe_subnets = {
if available_subnets.len() < no_subnets_to_subscribe {
debug!(self.log, "Reached maximum random subnet subscriptions");
available_subnets
} else {
// select a random sample of available subnets
available_subnets
.choose_multiple(&mut rand::thread_rng(), no_subnets_to_subscribe)
.cloned()
.collect::<Vec<_>>()
}
};
for subnet_id in to_subscribe_subnets {
// remove this subnet from any immediate subscription/un-subscription events
self.subscriptions
.retain(|(map_subnet_id, _)| map_subnet_id != &subnet_id);
self.unsubscriptions
.retain(|(map_subnet_id, _)| map_subnet_id != &subnet_id);
// insert a new random subnet
self.random_subnets.insert(subnet_id);
// if we are not already subscribed, then subscribe
let topic_kind = &GossipKind::CommitteeIndex(subnet_id);
if self
.network_globals
.gossipsub_subscriptions
.read()
.iter()
.find(|topic| topic.kind() == topic_kind)
.is_none()
{
// not already subscribed to the topic
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
// add the subnet to the ENR bitfield
self.events.push_back(AttServiceMessage::EnrAdd(subnet_id));
}
}
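`choose_multiple` above draws a uniform sample without replacement from the available subnets. The same effect can be sketched dependency-free with a partial Fisher–Yates shuffle; the xorshift generator and fixed seed here are toy stand-ins for `rand::thread_rng`, used only for illustration:

```rust
/// Select `n` distinct items from `pool` via a partial Fisher-Yates shuffle.
fn sample<T: Clone>(pool: &[T], n: usize, mut seed: u64) -> Vec<T> {
    let mut pool: Vec<T> = pool.to_vec();
    let n = n.min(pool.len());
    for i in 0..n {
        // xorshift64: a toy PRNG standing in for a real RNG.
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        // Swap a uniformly chosen remaining element into position i.
        let j = i + (seed as usize) % (pool.len() - i);
        pool.swap(i, j);
    }
    pool.truncate(n);
    pool
}

fn main() {
    // 64 attestation subnets, picking 2 random ones to join.
    let subnets: Vec<u64> = (0..64).collect();
    let picked = sample(&subnets, 2, 0x9E37_79B9_7F4A_7C15);
    assert_eq!(picked.len(), 2);
    assert_ne!(picked[0], picked[1]); // sampling is without replacement
    println!("random subnets: {:?}", picked);
}
```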
/* A collection of functions that handle the various timeouts */
/// Request a discovery query to find peers for a particular subnet.
fn handle_discover_peers(&mut self, subnet_id: SubnetId, target_slot: Slot) {
debug!(self.log, "Searching for peers for subnet"; "subnet" => *subnet_id, "target_slot" => target_slot);
self.events
.push_back(AttServiceMessage::DiscoverPeers(subnet_id));
}
/// A queued subscription is ready.
///
/// We add subscription events even if we are already subscribed to a random subnet (as
/// random subnets can be unsubscribed at any time when a validator goes inactive). If we
/// are still subscribed when the event fires, we don't re-subscribe.
fn handle_subscriptions(&mut self, subnet_id: SubnetId, target_slot: Slot) {
// Check if the subnet currently exists as a long-lasting random subnet
if let Some(expiry) = self.random_subnets.get(&subnet_id) {
// we are subscribed via a random subnet, if this is to expire during the time we need
// to be subscribed, just extend the expiry
let slot_duration = Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
let advance_subscription_duration = Duration::from_secs(ADVANCE_SUBSCRIBE_SECS);
// we require the subnet subscription for at least a slot on top of the initial
// subscription time
let expected_end_subscription_duration = slot_duration + advance_subscription_duration;
if expiry < &(Instant::now() + expected_end_subscription_duration) {
self.random_subnets
.update_timeout(&subnet_id, expected_end_subscription_duration);
}
} else {
// We do not unsubscribe from a subnet if the next slot requires us to be
// subscribed, so we may still be subscribed to the required subnet. In that
// case we do not issue another subscription request.
let topic_kind = &GossipKind::CommitteeIndex(subnet_id);
if self
.network_globals
.gossipsub_subscriptions
.read()
.iter()
.find(|topic| topic.kind() == topic_kind)
.is_none()
{
// we are not already subscribed
debug!(self.log, "Subscribing to subnet"; "subnet" => *subnet_id, "target_slot" => target_slot.as_u64());
self.events
.push_back(AttServiceMessage::Subscribe(subnet_id));
}
}
}
/// A queued unsubscription is ready.
///
/// Unsubscription events are added, even if we are subscribed to long-lived random subnets. If
/// a random subnet is present, we do not unsubscribe from it.
fn handle_unsubscriptions(&mut self, subnet_id: SubnetId, target_slot: Slot) {
// Check if the subnet currently exists as a long-lasting random subnet
if self.random_subnets.contains(&subnet_id) {
return;
}
debug!(self.log, "Unsubscribing from subnet"; "subnet" => *subnet_id, "processed_slot" => target_slot.as_u64());
// various logic checks
if self.subscriptions.contains(&(subnet_id, target_slot)) {
crit!(self.log, "Unsubscribing from a subnet in subscriptions");
}
self.events
.push_back(AttServiceMessage::Unsubscribe(subnet_id));
}
/// A random subnet has expired.
///
/// This function selects a new subnet to join, or extends the expiry if there are no more
/// available subnets to choose from.
fn handle_random_subnet_expiry(&mut self, subnet_id: SubnetId) {
let subnet_count = self.beacon_chain.spec.attestation_subnet_count;
if self.random_subnets.len() == (subnet_count - 1) as usize {
// We are at capacity, simply increase the timeout of the current subnet
self.random_subnets.insert(subnet_id);
return;
}
// we are not at capacity, unsubscribe from the current subnet, remove the ENR bitfield bit and choose a new random one
// from the available subnets
// Note: This should not occur during a required subnet as subscriptions update the timeout
// to last as long as they are needed.
debug!(self.log, "Unsubscribing from random subnet"; "subnet_id" => *subnet_id);
self.events
.push_back(AttServiceMessage::Unsubscribe(subnet_id));
self.events
.push_back(AttServiceMessage::EnrRemove(subnet_id));
self.subscribe_to_random_subnets(1);
}
/// A known validator has not sent a subscription in a while. They are considered offline and the
/// beacon node no longer needs to be subscribed to the allocated random subnets.
///
/// We don't track a mapping from specific validators to random subnets, only the ratio of
/// active validators to random subnets. So when a validator goes offline, we can simply
/// remove the allocated number of random subnets.
fn handle_known_validator_expiry(&mut self) -> Result<(), ()> {
let spec = &self.beacon_chain.spec;
let subnet_count = spec.attestation_subnet_count;
let random_subnets_per_validator = spec.random_subnets_per_validator;
if self.known_validators.len() as u64 * random_subnets_per_validator >= subnet_count {
// There are still enough known validators to warrant all current subnets; do nothing.
return Ok(());
}
let subscribed_subnets = self.random_subnets.keys_vec();
let to_remove_subnets = subscribed_subnets.choose_multiple(
&mut rand::thread_rng(),
random_subnets_per_validator as usize,
);
let current_slot = self.beacon_chain.slot_clock.now().ok_or_else(|| {
warn!(self.log, "Could not get the current slot");
})?;
for subnet_id in to_remove_subnets {
// If a subscription is queued for two slots in the future, its associated unsubscription
// will unsubscribe from the expired subnet.
// If there is no subscription for this (subnet, slot) pair, it is safe to add an
// unsubscription event without unsubscribing early from a required subnet.
if self
.subscriptions
.get(&(**subnet_id, current_slot + 2))
.is_none()
{
// set an unsubscribe event
let duration_to_next_slot = self
.beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| {
warn!(self.log, "Unable to determine duration to next slot");
})?;
let slot_duration =
Duration::from_millis(self.beacon_chain.spec.milliseconds_per_slot);
// Set the unsubscription timeout
let unsubscription_duration = duration_to_next_slot + slot_duration * 2;
self.unsubscriptions
.insert_at((**subnet_id, current_slot + 2), unsubscription_duration);
}
// as the long lasting subnet subscription is being removed, remove the subnet_id from
// the ENR bitfield
self.events
.push_back(AttServiceMessage::EnrRemove(**subnet_id));
}
Ok(())
}
}
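The `subscriptions`, `unsubscriptions`, `random_subnets` and `known_validators` collections are all timeout maps (the `HashMapDelay` struct this PR adds to utils). A minimal std-only sketch of the idea; the real type is stream-based and timer-backed, whereas `poll_expired` here is a synchronous stand-in:

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::time::{Duration, Instant};

/// A toy expiring map: entries fall out once their deadline passes.
struct DelayMap<K: Eq + Hash + Clone> {
    entries: HashMap<K, Instant>,
}

impl<K: Eq + Hash + Clone> DelayMap<K> {
    fn new() -> Self {
        DelayMap { entries: HashMap::new() }
    }

    /// Insert (or refresh) a key with the given time-to-live.
    fn insert_at(&mut self, key: K, ttl: Duration) {
        self.entries.insert(key, Instant::now() + ttl);
    }

    fn contains(&self, key: &K) -> bool {
        self.entries.contains_key(key)
    }

    /// Remove and return every key whose deadline has passed.
    fn poll_expired(&mut self) -> Vec<K> {
        let now = Instant::now();
        let expired: Vec<K> = self
            .entries
            .iter()
            .filter(|(_, deadline)| **deadline <= now)
            .map(|(k, _)| k.clone())
            .collect();
        for k in &expired {
            self.entries.remove(k);
        }
        expired
    }
}

fn main() {
    // Keys model (subnet_id, slot) pairs as used by the attestation service.
    let mut subs: DelayMap<(u64, u64)> = DelayMap::new();
    subs.insert_at((3, 100), Duration::from_secs(0)); // already due
    subs.insert_at((4, 101), Duration::from_secs(60));
    let expired = subs.poll_expired();
    assert_eq!(expired, vec![(3, 100)]);
    assert!(subs.contains(&(4, 101)));
}
```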
impl<T: BeaconChainTypes> Stream for AttestationService<T> {
type Item = AttServiceMessage;
type Error = ();
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
// process any peer discovery events
while let Async::Ready(Some((subnet_id, target_slot))) =
self.discover_peers.poll().map_err(|e| {
error!(self.log, "Failed to check for peer discovery requests"; "error"=> format!("{}", e));
})?
{
self.handle_discover_peers(subnet_id, target_slot);
}
// process any subscription events
while let Async::Ready(Some((subnet_id, target_slot))) = self.subscriptions.poll().map_err(|e| {
error!(self.log, "Failed to check for subnet subscription times"; "error"=> format!("{}", e));
})?
{
self.handle_subscriptions(subnet_id, target_slot);
}
// process any un-subscription events
while let Async::Ready(Some((subnet_id, target_slot))) = self.unsubscriptions.poll().map_err(|e| {
error!(self.log, "Failed to check for subnet unsubscription times"; "error"=> format!("{}", e));
})?
{
self.handle_unsubscriptions(subnet_id, target_slot);
}
// process any random subnet expiries
while let Async::Ready(Some(subnet)) = self.random_subnets.poll().map_err(|e| {
error!(self.log, "Failed to check for random subnet cycles"; "error"=> format!("{}", e));
})?
{
self.handle_random_subnet_expiry(subnet);
}
// process any known validator expiries
while let Async::Ready(Some(_validator_index)) = self.known_validators.poll().map_err(|e| {
error!(self.log, "Failed to check for known validator expiries"; "error"=> format!("{}", e));
})?
{
let _ = self.handle_known_validator_expiry();
}
// process any generated events
if let Some(event) = self.events.pop_front() {
return Ok(Async::Ready(Some(event)));
}
Ok(Async::NotReady)
}
}
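The `poll` implementation above follows a common futures-0.1 pattern: drain every internal timer stream, queueing the resulting messages, then emit at most one queued event per poll. The queue discipline can be sketched without the futures machinery; the timer streams are modelled here as plain vectors of ready items, an assumption for illustration:

```rust
use std::collections::VecDeque;

#[derive(Debug, PartialEq)]
enum AttServiceMessage {
    Subscribe(u64),
    Unsubscribe(u64),
}

/// Drain all ready timer items into the event queue, then emit one event,
/// mirroring the shape of `AttestationService::poll`.
fn poll_once(
    ready_subscriptions: &mut Vec<u64>,
    ready_unsubscriptions: &mut Vec<u64>,
    events: &mut VecDeque<AttServiceMessage>,
) -> Option<AttServiceMessage> {
    for subnet in ready_subscriptions.drain(..) {
        events.push_back(AttServiceMessage::Subscribe(subnet));
    }
    for subnet in ready_unsubscriptions.drain(..) {
        events.push_back(AttServiceMessage::Unsubscribe(subnet));
    }
    // At most one event per poll; `None` models `Async::NotReady`.
    events.pop_front()
}

fn main() {
    let mut events = VecDeque::new();
    let mut subs = vec![7];
    let mut unsubs = vec![2];
    // First poll drains both timers but yields only the first queued event.
    assert_eq!(
        poll_once(&mut subs, &mut unsubs, &mut events),
        Some(AttServiceMessage::Subscribe(7))
    );
    assert_eq!(
        poll_once(&mut subs, &mut unsubs, &mut events),
        Some(AttServiceMessage::Unsubscribe(2))
    );
    assert_eq!(poll_once(&mut subs, &mut unsubs, &mut events), None);
}
```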

View File

@ -1,6 +1,4 @@
 // generates error types
-use eth2_libp2p;
 use error_chain::error_chain;
 error_chain! {

View File

@ -1,12 +1,11 @@
 /// This crate provides the network server for Lighthouse.
 pub mod error;
-pub mod message_handler;
-pub mod message_processor;
-pub mod persisted_dht;
 pub mod service;
-pub mod sync;
+mod attestation_service;
+mod persisted_dht;
+mod router;
+mod sync;
 pub use eth2_libp2p::NetworkConfig;
-pub use message_processor::MessageProcessor;
-pub use service::NetworkMessage;
-pub use service::Service;
+pub use service::{NetworkMessage, NetworkService};

View File

@ -1,367 +0,0 @@
#![allow(clippy::unit_arg)]
use crate::error;
use crate::service::NetworkMessage;
use crate::MessageProcessor;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{
behaviour::PubsubMessage,
rpc::{RPCError, RPCErrorResponse, RPCRequest, RPCResponse, RequestId, ResponseTermination},
MessageId, PeerId, RPCEvent,
};
use futures::future::Future;
use futures::stream::Stream;
use slog::{debug, o, trace, warn};
use ssz::{Decode, DecodeError};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::{Attestation, AttesterSlashing, ProposerSlashing, SignedBeaconBlock, VoluntaryExit};
/// Handles messages received from the network and client, and organises syncing. The
/// primary functionality of this struct is to validate and decode messages from the network
/// before passing them to the internal message processor. The message processor spawns a
/// syncing thread which manages which blocks need to be requested and processed.
pub struct MessageHandler<T: BeaconChainTypes> {
/// A channel to the network service to allow for gossip propagation.
network_send: mpsc::UnboundedSender<NetworkMessage>,
/// Processes validated and decoded messages from the network. Has direct access to the
/// sync manager.
message_processor: MessageProcessor<T>,
/// The `MessageHandler` logger.
log: slog::Logger,
}
/// Types of messages the handler can receive.
#[derive(Debug)]
pub enum HandlerMessage {
/// We have initiated a connection to a new peer.
PeerDialed(PeerId),
/// A peer has disconnected.
PeerDisconnected(PeerId),
/// An RPC response/request has been received.
RPC(PeerId, RPCEvent),
/// A gossip message has been received. The fields are: message id, the peer that sent us this
/// message and the message itself.
PubsubMessage(MessageId, PeerId, PubsubMessage),
}
impl<T: BeaconChainTypes> MessageHandler<T> {
/// Initializes and runs the MessageHandler.
pub fn spawn(
beacon_chain: Arc<BeaconChain<T>>,
network_send: mpsc::UnboundedSender<NetworkMessage>,
executor: &tokio::runtime::TaskExecutor,
log: slog::Logger,
) -> error::Result<mpsc::UnboundedSender<HandlerMessage>> {
let message_handler_log = log.new(o!("service"=> "msg_handler"));
trace!(message_handler_log, "Service starting");
let (handler_send, handler_recv) = mpsc::unbounded_channel();
// Initialise a message instance, which itself spawns the syncing thread.
let message_processor =
MessageProcessor::new(executor, beacon_chain, network_send.clone(), &log);
// generate the Message handler
let mut handler = MessageHandler {
network_send,
message_processor,
log: message_handler_log,
};
// spawn handler task and move the message handler instance into the spawned thread
executor.spawn(
handler_recv
.for_each(move |msg| Ok(handler.handle_message(msg)))
.map_err(move |_| {
debug!(log, "Network message handler terminated.");
}),
);
Ok(handler_send)
}
/// Handle all messages incoming from the network service.
fn handle_message(&mut self, message: HandlerMessage) {
match message {
// we have initiated a connection to a peer
HandlerMessage::PeerDialed(peer_id) => {
self.message_processor.on_connect(peer_id);
}
// A peer has disconnected
HandlerMessage::PeerDisconnected(peer_id) => {
self.message_processor.on_disconnect(peer_id);
}
// An RPC message request/response has been received
HandlerMessage::RPC(peer_id, rpc_event) => {
self.handle_rpc_message(peer_id, rpc_event);
}
// A gossip message has been received
HandlerMessage::PubsubMessage(id, peer_id, gossip) => {
self.handle_gossip(id, peer_id, gossip);
}
}
}
/* RPC - Related functionality */
/// Handle RPC messages
fn handle_rpc_message(&mut self, peer_id: PeerId, rpc_message: RPCEvent) {
match rpc_message {
RPCEvent::Request(id, req) => self.handle_rpc_request(peer_id, id, req),
RPCEvent::Response(id, resp) => self.handle_rpc_response(peer_id, id, resp),
RPCEvent::Error(id, error) => self.handle_rpc_error(peer_id, id, error),
}
}
/// A new RPC request has been received from the network.
fn handle_rpc_request(&mut self, peer_id: PeerId, request_id: RequestId, request: RPCRequest) {
match request {
RPCRequest::Status(status_message) => {
self.message_processor
.on_status_request(peer_id, request_id, status_message)
}
RPCRequest::Goodbye(goodbye_reason) => {
debug!(
self.log, "PeerGoodbye";
"peer" => format!("{:?}", peer_id),
"reason" => format!("{:?}", goodbye_reason),
);
self.message_processor.on_disconnect(peer_id);
}
RPCRequest::BlocksByRange(request) => self
.message_processor
.on_blocks_by_range_request(peer_id, request_id, request),
RPCRequest::BlocksByRoot(request) => self
.message_processor
.on_blocks_by_root_request(peer_id, request_id, request),
}
}
/// An RPC response has been received from the network.
// we match on id and ignore responses past the timeout.
fn handle_rpc_response(
&mut self,
peer_id: PeerId,
request_id: RequestId,
error_response: RPCErrorResponse,
) {
// an error could have occurred.
match error_response {
RPCErrorResponse::InvalidRequest(error) => {
warn!(self.log, "Peer indicated invalid request";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::ServerError(error) => {
warn!(self.log, "Peer internal server error";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Unknown(error) => {
warn!(self.log, "Unknown peer error";"peer" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Success(response) => {
match response {
RPCResponse::Status(status_message) => {
self.message_processor
.on_status_response(peer_id, status_message);
}
RPCResponse::BlocksByRange(response) => {
match self.decode_beacon_block(response) {
Ok(beacon_block) => {
self.message_processor.on_blocks_by_range_response(
peer_id,
request_id,
Some(beacon_block),
);
}
Err(e) => {
// TODO: Down-vote Peer
warn!(self.log, "Peer sent invalid BEACON_BLOCKS response";"peer" => format!("{:?}", peer_id), "error" => format!("{:?}", e));
}
}
}
RPCResponse::BlocksByRoot(response) => {
match self.decode_beacon_block(response) {
Ok(beacon_block) => {
self.message_processor.on_blocks_by_root_response(
peer_id,
request_id,
Some(beacon_block),
);
}
Err(e) => {
// TODO: Down-vote Peer
warn!(self.log, "Peer sent invalid BEACON_BLOCKS response";"peer" => format!("{:?}", peer_id), "error" => format!("{:?}", e));
}
}
}
}
}
RPCErrorResponse::StreamTermination(response_type) => {
// have received a stream termination, notify the processing functions
match response_type {
ResponseTermination::BlocksByRange => {
self.message_processor
.on_blocks_by_range_response(peer_id, request_id, None);
}
ResponseTermination::BlocksByRoot => {
self.message_processor
.on_blocks_by_root_response(peer_id, request_id, None);
}
}
}
}
}
/// Handle various RPC errors
fn handle_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId, error: RPCError) {
warn!(self.log, "RPC Error"; "Peer" => format!("{:?}", peer_id), "request_id" => format!("{}", request_id), "Error" => format!("{:?}", error));
self.message_processor.on_rpc_error(peer_id, request_id);
}
/// Handle RPC messages
fn handle_gossip(&mut self, id: MessageId, peer_id: PeerId, gossip_message: PubsubMessage) {
match gossip_message {
PubsubMessage::Block(message) => match self.decode_gossip_block(message) {
Ok(block) => {
let should_forward_on = self
.message_processor
.on_block_gossip(peer_id.clone(), block);
// TODO: Apply more sophisticated validation and decoding logic
if should_forward_on {
self.propagate_message(id, peer_id);
}
}
Err(e) => {
debug!(self.log, "Invalid gossiped beacon block"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
},
PubsubMessage::Attestation(message) => match self.decode_gossip_attestation(message) {
Ok(attestation) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
self.message_processor
.on_attestation_gossip(peer_id, attestation);
}
Err(e) => {
debug!(self.log, "Invalid gossiped attestation"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
},
PubsubMessage::VoluntaryExit(message) => match self.decode_gossip_exit(message) {
Ok(_exit) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
// TODO: Handle exits
debug!(self.log, "Received a voluntary exit"; "peer_id" => format!("{}", peer_id) );
}
Err(e) => {
debug!(self.log, "Invalid gossiped exit"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
},
PubsubMessage::ProposerSlashing(message) => {
match self.decode_gossip_proposer_slashing(message) {
Ok(_slashing) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
// TODO: Handle proposer slashings
debug!(self.log, "Received a proposer slashing"; "peer_id" => format!("{}", peer_id) );
}
Err(e) => {
debug!(self.log, "Invalid gossiped proposer slashing"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
}
}
PubsubMessage::AttesterSlashing(message) => {
match self.decode_gossip_attestation_slashing(message) {
Ok(_slashing) => {
// TODO: Apply more sophisticated validation and decoding logic
self.propagate_message(id, peer_id.clone());
// TODO: Handle attester slashings
debug!(self.log, "Received an attester slashing"; "peer_id" => format!("{}", peer_id) );
}
Err(e) => {
debug!(self.log, "Invalid gossiped attester slashing"; "peer_id" => format!("{}", peer_id), "Error" => format!("{:?}", e));
}
}
}
PubsubMessage::Unknown(message) => {
// Received a message from an unknown topic. Ignore for now
debug!(self.log, "Unknown Gossip Message"; "peer_id" => format!("{}", peer_id), "Message" => format!("{:?}", message));
}
}
}
/// Informs the network service that the message should be forwarded to other peers.
fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
self.network_send
.try_send(NetworkMessage::Propagate {
propagation_source,
message_id,
})
.unwrap_or_else(|_| {
warn!(
self.log,
"Could not send propagation request to the network service"
)
});
}
/* Decoding of gossipsub objects from the network.
*
* The decoding is done in the message handler as it has access to a `BeaconChain` and can
* therefore apply more efficient logic in decoding and verification.
*
* TODO: Apply efficient decoding/verification of these objects
*/
/* Gossipsub Domain Decoding */
// Note: These are not generics as type-specific verification will need to be applied.
fn decode_gossip_block(
&self,
beacon_block: Vec<u8>,
) -> Result<SignedBeaconBlock<T::EthSpec>, DecodeError> {
//TODO: Apply verification before decoding.
SignedBeaconBlock::from_ssz_bytes(&beacon_block)
}
fn decode_gossip_attestation(
&self,
beacon_block: Vec<u8>,
) -> Result<Attestation<T::EthSpec>, DecodeError> {
//TODO: Apply verification before decoding.
Attestation::from_ssz_bytes(&beacon_block)
}
fn decode_gossip_exit(&self, voluntary_exit: Vec<u8>) -> Result<VoluntaryExit, DecodeError> {
//TODO: Apply verification before decoding.
VoluntaryExit::from_ssz_bytes(&voluntary_exit)
}
fn decode_gossip_proposer_slashing(
&self,
proposer_slashing: Vec<u8>,
) -> Result<ProposerSlashing, DecodeError> {
//TODO: Apply verification before decoding.
ProposerSlashing::from_ssz_bytes(&proposer_slashing)
}
fn decode_gossip_attestation_slashing(
&self,
attester_slashing: Vec<u8>,
) -> Result<AttesterSlashing<T::EthSpec>, DecodeError> {
//TODO: Apply verification before decoding.
AttesterSlashing::from_ssz_bytes(&attester_slashing)
}
/* Req/Resp Domain Decoding */
/// Verifies and decodes an ssz-encoded `SignedBeaconBlock`.
fn decode_beacon_block(
&self,
beacon_block: Vec<u8>,
) -> Result<SignedBeaconBlock<T::EthSpec>, DecodeError> {
//TODO: Implement faster block verification before decoding entirely
SignedBeaconBlock::from_ssz_bytes(&beacon_block)
}
}

View File

@ -1,13 +1,15 @@
+use beacon_chain::BeaconChainTypes;
 use eth2_libp2p::Enr;
 use rlp;
 use std::sync::Arc;
-use store::{DBColumn, Error as StoreError, SimpleStoreItem, Store};
-use types::{EthSpec, Hash256};
+use store::Store;
+use store::{DBColumn, Error as StoreError, SimpleStoreItem};
+use types::Hash256;
 /// 32-byte key for accessing the `DhtEnrs`.
 pub const DHT_DB_KEY: &str = "PERSISTEDDHTPERSISTEDDHTPERSISTE";
-pub fn load_dht<T: Store<E>, E: EthSpec>(store: Arc<T>) -> Vec<Enr> {
+pub fn load_dht<T: BeaconChainTypes>(store: Arc<T::Store>) -> Vec<Enr> {
     // Load DHT from store
     let key = Hash256::from_slice(&DHT_DB_KEY.as_bytes());
     match store.get(&key) {
@ -20,8 +22,8 @@ pub fn load_dht
 }
 /// Attempt to persist the ENR's in the DHT to `self.store`.
-pub fn persist_dht<T: Store<E>, E: EthSpec>(
-    store: Arc<T>,
+pub fn persist_dht<T: BeaconChainTypes>(
+    store: Arc<T::Store>,
     enrs: Vec<Enr>,
 ) -> Result<(), store::Error> {
     let key = Hash256::from_slice(&DHT_DB_KEY.as_bytes());
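`Hash256::from_slice` requires exactly 32 bytes, which is why `DHT_DB_KEY` is padded out to length 32. A quick std-only check of that invariant, with a plain `[u8; 32]` standing in for `Hash256` (an illustrative substitute):

```rust
/// Stand-in for `Hash256::from_slice`: panics unless the slice is exactly 32 bytes.
fn key_from_str(s: &str) -> [u8; 32] {
    let bytes = s.as_bytes();
    assert_eq!(bytes.len(), 32, "DB keys must be exactly 32 bytes");
    let mut key = [0u8; 32];
    key.copy_from_slice(bytes);
    key
}

fn main() {
    // The repeated string pads the human-readable name to the required width.
    const DHT_DB_KEY: &str = "PERSISTEDDHTPERSISTEDDHTPERSISTE";
    let key = key_from_str(DHT_DB_KEY);
    assert_eq!(key.len(), 32);
    assert_eq!(&key[..3], b"PER");
    println!("key ok: {:?}", &key[..4]);
}
```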

View File

@ -0,0 +1,275 @@
//! This module handles incoming network messages.
//!
//! It routes the messages to appropriate services, such as the sync service, and processes
//! those that are handled locally.
#![allow(clippy::unit_arg)]
pub mod processor;
use crate::error;
use crate::service::NetworkMessage;
use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::{
rpc::{RPCError, RPCErrorResponse, RPCRequest, RPCResponse, RequestId, ResponseTermination},
MessageId, PeerId, PubsubData, PubsubMessage, RPCEvent,
};
use futures::future::Future;
use futures::stream::Stream;
use processor::Processor;
use slog::{debug, o, trace, warn};
use std::sync::Arc;
use tokio::sync::mpsc;
use types::EthSpec;
/// Handles messages received from the network and client, and organises syncing. The
/// primary functionality of this struct is to validate and decode messages from the network
/// before passing them to the internal message processor. The message processor spawns a
/// syncing thread which manages which blocks need to be requested and processed.
pub struct Router<T: BeaconChainTypes> {
/// A channel to the network service to allow for gossip propagation.
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
/// Processes validated and decoded messages from the network. Has direct access to the
/// sync manager.
processor: Processor<T>,
/// The `Router` logger.
log: slog::Logger,
}
/// Types of messages the handler can receive.
#[derive(Debug)]
pub enum RouterMessage<T: EthSpec> {
/// We have initiated a connection to a new peer.
PeerDialed(PeerId),
/// A peer has disconnected.
PeerDisconnected(PeerId),
/// An RPC response/request has been received.
RPC(PeerId, RPCEvent<T>),
/// A gossip message has been received. The fields are: message id, the peer that sent us this
/// message and the message itself.
PubsubMessage(MessageId, PeerId, PubsubMessage<T>),
}
impl<T: BeaconChainTypes> Router<T> {
/// Initializes and runs the Router.
pub fn spawn(
beacon_chain: Arc<BeaconChain<T>>,
network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
executor: &tokio::runtime::TaskExecutor,
log: slog::Logger,
) -> error::Result<mpsc::UnboundedSender<RouterMessage<T::EthSpec>>> {
let message_handler_log = log.new(o!("service"=> "msg_handler"));
trace!(message_handler_log, "Service starting");
let (handler_send, handler_recv) = mpsc::unbounded_channel();
// Initialise a message instance, which itself spawns the syncing thread.
let processor = Processor::new(executor, beacon_chain, network_send.clone(), &log);
// generate the Message handler
let mut handler = Router {
network_send,
processor,
log: message_handler_log,
};
// spawn handler task and move the message handler instance into the spawned thread
executor.spawn(
handler_recv
.for_each(move |msg| Ok(handler.handle_message(msg)))
.map_err(move |_| {
debug!(log, "Network message handler terminated.");
}),
);
Ok(handler_send)
}
/// Handle all messages incoming from the network service.
fn handle_message(&mut self, message: RouterMessage<T::EthSpec>) {
match message {
// we have initiated a connection to a peer
RouterMessage::PeerDialed(peer_id) => {
self.processor.on_connect(peer_id);
}
// A peer has disconnected
RouterMessage::PeerDisconnected(peer_id) => {
self.processor.on_disconnect(peer_id);
}
// An RPC message request/response has been received
RouterMessage::RPC(peer_id, rpc_event) => {
self.handle_rpc_message(peer_id, rpc_event);
}
// A gossip message has been received
RouterMessage::PubsubMessage(id, peer_id, gossip) => {
self.handle_gossip(id, peer_id, gossip);
}
}
}
/* RPC - Related functionality */
/// Handle RPC messages
fn handle_rpc_message(&mut self, peer_id: PeerId, rpc_message: RPCEvent<T::EthSpec>) {
match rpc_message {
RPCEvent::Request(id, req) => self.handle_rpc_request(peer_id, id, req),
RPCEvent::Response(id, resp) => self.handle_rpc_response(peer_id, id, resp),
RPCEvent::Error(id, error) => self.handle_rpc_error(peer_id, id, error),
}
}
/// A new RPC request has been received from the network.
fn handle_rpc_request(
&mut self,
peer_id: PeerId,
request_id: RequestId,
request: RPCRequest<T::EthSpec>,
) {
match request {
RPCRequest::Status(status_message) => {
self.processor
.on_status_request(peer_id, request_id, status_message)
}
RPCRequest::Goodbye(goodbye_reason) => {
debug!(
self.log, "PeerGoodbye";
"peer" => format!("{:?}", peer_id),
"reason" => format!("{:?}", goodbye_reason),
);
self.processor.on_disconnect(peer_id);
}
RPCRequest::BlocksByRange(request) => self
.processor
.on_blocks_by_range_request(peer_id, request_id, request),
RPCRequest::BlocksByRoot(request) => self
.processor
.on_blocks_by_root_request(peer_id, request_id, request),
RPCRequest::Phantom(_) => unreachable!("Phantom never initialised"),
}
}
/// An RPC response has been received from the network.
// we match on id and ignore responses past the timeout.
fn handle_rpc_response(
&mut self,
peer_id: PeerId,
request_id: RequestId,
error_response: RPCErrorResponse<T::EthSpec>,
) {
// an error could have occurred.
match error_response {
RPCErrorResponse::InvalidRequest(error) => {
warn!(self.log, "Peer indicated invalid request";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::ServerError(error) => {
warn!(self.log, "Peer internal server error";"peer_id" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Unknown(error) => {
warn!(self.log, "Unknown peer error";"peer" => format!("{:?}", peer_id), "error" => error.as_string());
self.handle_rpc_error(peer_id, request_id, RPCError::RPCErrorResponse);
}
RPCErrorResponse::Success(response) => match response {
RPCResponse::Status(status_message) => {
self.processor.on_status_response(peer_id, status_message);
}
RPCResponse::BlocksByRange(beacon_block) => {
self.processor.on_blocks_by_range_response(
peer_id,
request_id,
Some(beacon_block),
);
}
RPCResponse::BlocksByRoot(beacon_block) => {
self.processor.on_blocks_by_root_response(
peer_id,
request_id,
Some(beacon_block),
);
}
},
RPCErrorResponse::StreamTermination(response_type) => {
// have received a stream termination, notify the processing functions
match response_type {
ResponseTermination::BlocksByRange => {
self.processor
.on_blocks_by_range_response(peer_id, request_id, None);
}
ResponseTermination::BlocksByRoot => {
self.processor
.on_blocks_by_root_response(peer_id, request_id, None);
}
}
}
}
}
/// Handle various RPC errors
fn handle_rpc_error(&mut self, peer_id: PeerId, request_id: RequestId, error: RPCError) {
warn!(self.log, "RPC Error"; "Peer" => format!("{:?}", peer_id), "request_id" => format!("{}", request_id), "Error" => format!("{:?}", error));
self.processor.on_rpc_error(peer_id, request_id);
}
    /// Handle gossip messages received from the gossipsub protocol.
fn handle_gossip(
&mut self,
id: MessageId,
peer_id: PeerId,
gossip_message: PubsubMessage<T::EthSpec>,
) {
match gossip_message.data {
PubsubData::BeaconBlock(block) => {
if self.processor.should_forward_block(&block) {
self.propagate_message(id, peer_id.clone());
}
self.processor.on_block_gossip(peer_id, block);
}
PubsubData::AggregateAndProofAttestation(_agg_attestation) => {
// TODO: Handle propagation conditions
self.propagate_message(id, peer_id);
                // TODO: Handle aggregate attestation
// self.processor
// .on_attestation_gossip(peer_id.clone(), &agg_attestation);
}
PubsubData::Attestation(boxed_shard_attestation) => {
// TODO: Handle propagation conditions
self.propagate_message(id, peer_id.clone());
self.processor
.on_attestation_gossip(peer_id, boxed_shard_attestation.1);
}
PubsubData::VoluntaryExit(_exit) => {
// TODO: Apply more sophisticated validation
self.propagate_message(id, peer_id.clone());
// TODO: Handle exits
debug!(self.log, "Received a voluntary exit"; "peer_id" => format!("{}", peer_id) );
}
PubsubData::ProposerSlashing(_proposer_slashing) => {
// TODO: Apply more sophisticated validation
self.propagate_message(id, peer_id.clone());
// TODO: Handle proposer slashings
debug!(self.log, "Received a proposer slashing"; "peer_id" => format!("{}", peer_id) );
}
PubsubData::AttesterSlashing(_attester_slashing) => {
// TODO: Apply more sophisticated validation
self.propagate_message(id, peer_id.clone());
// TODO: Handle attester slashings
debug!(self.log, "Received an attester slashing"; "peer_id" => format!("{}", peer_id) );
}
}
}
/// Informs the network service that the message should be forwarded to other peers.
fn propagate_message(&mut self, message_id: MessageId, propagation_source: PeerId) {
self.network_send
.try_send(NetworkMessage::Propagate {
propagation_source,
message_id,
})
.unwrap_or_else(|_| {
warn!(
self.log,
"Could not send propagation request to the network service"
)
});
}
}
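The router above never blocks on the network service: every outbound action is pushed through an unbounded channel as a `NetworkMessage` and dispatched by the service's event loop. A minimal, self-contained sketch of that pattern, using `std::sync::mpsc` in place of tokio's unbounded channel (the enum and names here are simplified stand-ins, not the real `NetworkMessage<T>`):

```rust
use std::sync::mpsc;

// Simplified stand-in for the service's message enum (the real
// `NetworkMessage<T>` carries RPC events, pubsub messages, propagation
// requests and validator subscriptions).
#[derive(Debug, PartialEq)]
enum NetworkMessage {
    Propagate { source: String, message_id: u64 },
    Disconnect { peer_id: String },
}

// Router side: fire-and-forget send, logging on failure instead of
// panicking, mirroring the `try_send(..).unwrap_or_else(..)` idiom above.
fn propagate(tx: &mpsc::Sender<NetworkMessage>, source: &str, message_id: u64) {
    tx.send(NetworkMessage::Propagate {
        source: source.to_string(),
        message_id,
    })
    .unwrap_or_else(|_| eprintln!("Could not send propagation request"));
}

fn main() {
    let (tx, rx) = mpsc::channel();
    propagate(&tx, "peer-1", 42);
    drop(tx); // close the channel so the receive loop terminates

    // Service side: drain the channel and dispatch on the variant.
    for msg in rx {
        match msg {
            NetworkMessage::Propagate { source, message_id } => {
                println!("propagating {} from {}", message_id, source)
            }
            NetworkMessage::Disconnect { peer_id } => println!("banning {}", peer_id),
        }
    }
}
```

The decoupling means a slow swarm never back-pressures the router; the trade-off is that an unbounded channel grows without limit if the consumer stalls.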


@@ -19,9 +19,6 @@ use types::{Attestation, Epoch, EthSpec, Hash256, SignedBeaconBlock, Slot};
 /// Otherwise we queue it.
 pub(crate) const FUTURE_SLOT_TOLERANCE: u64 = 1;
-const SHOULD_FORWARD_GOSSIP_BLOCK: bool = true;
-const SHOULD_NOT_FORWARD_GOSSIP_BLOCK: bool = false;
 /// Keeps track of syncing information for known connected peers.
 #[derive(Clone, Copy, Debug)]
 pub struct PeerSyncInfo {
@@ -52,7 +49,7 @@ impl PeerSyncInfo {
 /// Processes validated messages from the network. It relays necessary data to the syncing thread
 /// and processes blocks from the pubsub network.
-pub struct MessageProcessor<T: BeaconChainTypes> {
+pub struct Processor<T: BeaconChainTypes> {
     /// A reference to the underlying beacon chain.
     chain: Arc<BeaconChain<T>>,
     /// A channel to the syncing thread.
@@ -60,17 +57,17 @@ pub struct MessageProcessor<T: BeaconChainTypes> {
     /// A oneshot channel for destroying the sync thread.
     _sync_exit: oneshot::Sender<()>,
     /// A network context to return and handle RPC requests.
-    network: HandlerNetworkContext,
+    network: HandlerNetworkContext<T::EthSpec>,
     /// The `RPCHandler` logger.
     log: slog::Logger,
 }
-impl<T: BeaconChainTypes> MessageProcessor<T> {
-    /// Instantiate a `MessageProcessor` instance
+impl<T: BeaconChainTypes> Processor<T> {
+    /// Instantiate a `Processor` instance
     pub fn new(
         executor: &tokio::runtime::TaskExecutor,
         beacon_chain: Arc<BeaconChain<T>>,
-        network_send: mpsc::UnboundedSender<NetworkMessage>,
+        network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
         log: &slog::Logger,
     ) -> Self {
         let sync_logger = log.new(o!("service"=> "sync"));
@@ -83,7 +80,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
             sync_logger,
         );
-        MessageProcessor {
+        Processor {
            chain: beacon_chain,
            sync_send,
            _sync_exit,
@@ -303,7 +300,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
                self.network.send_rpc_response(
                    peer_id.clone(),
                    request_id,
-                    RPCResponse::BlocksByRoot(block.as_ssz_bytes()),
+                    RPCResponse::BlocksByRoot(Box::new(block)),
                );
                send_block_count += 1;
            } else {
@@ -389,7 +386,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
                    self.network.send_rpc_response(
                        peer_id.clone(),
                        request_id,
-                        RPCResponse::BlocksByRange(block.as_ssz_bytes()),
+                        RPCResponse::BlocksByRange(Box::new(block)),
                    );
                }
            } else {
@@ -436,9 +433,8 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
-        beacon_block: Option<SignedBeaconBlock<T::EthSpec>>,
+        beacon_block: Option<Box<SignedBeaconBlock<T::EthSpec>>>,
    ) {
-        let beacon_block = beacon_block.map(Box::new);
        trace!(
            self.log,
            "Received BlocksByRange Response";
@@ -457,9 +453,8 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
-        beacon_block: Option<SignedBeaconBlock<T::EthSpec>>,
+        beacon_block: Option<Box<SignedBeaconBlock<T::EthSpec>>>,
    ) {
-        let beacon_block = beacon_block.map(Box::new);
        trace!(
            self.log,
            "Received BlocksByRoot Response";
@@ -473,6 +468,22 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
        });
    }
+    /// Template function to be called on a block to determine if the block should be propagated
+    /// across the network.
+    pub fn should_forward_block(&mut self, _block: &Box<SignedBeaconBlock<T::EthSpec>>) -> bool {
+        // TODO: Propagate error once complete
+        // self.chain.should_forward_block(block).is_ok()
+        true
+    }
+    /// Template function to be called on an attestation to determine if the attestation should be propagated
+    /// across the network.
+    pub fn _should_forward_attestation(&mut self, _attestation: &Attestation<T::EthSpec>) -> bool {
+        // TODO: Propagate error once complete
+        //self.chain.should_forward_attestation(attestation).is_ok()
+        true
+    }
    /// Process a gossip message declaring a new block.
    ///
    /// Attempts to apply to block to the beacon chain. May queue the block for later processing.
@@ -481,9 +492,9 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
    pub fn on_block_gossip(
        &mut self,
        peer_id: PeerId,
-        block: SignedBeaconBlock<T::EthSpec>,
+        block: Box<SignedBeaconBlock<T::EthSpec>>,
    ) -> bool {
-        match self.chain.process_block(block.clone()) {
+        match BlockProcessingOutcome::shim(self.chain.process_block(*block.clone())) {
            Ok(outcome) => match outcome {
                BlockProcessingOutcome::Processed { .. } => {
                    trace!(self.log, "Gossipsub block processed";
@@ -508,24 +519,13 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
                            "location" => "block gossip"
                        ),
                    }
-                    SHOULD_FORWARD_GOSSIP_BLOCK
                }
                BlockProcessingOutcome::ParentUnknown { .. } => {
                    // Inform the sync manager to find parents for this block
                    trace!(self.log, "Block with unknown parent received";
                        "peer_id" => format!("{:?}",peer_id));
-                    self.send_to_sync(SyncMessage::UnknownBlock(peer_id, Box::new(block)));
-                    SHOULD_FORWARD_GOSSIP_BLOCK
+                    self.send_to_sync(SyncMessage::UnknownBlock(peer_id, block));
                }
-                BlockProcessingOutcome::FutureSlot {
-                    present_slot,
-                    block_slot,
-                } if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot => {
-                    //TODO: Decide the logic here
-                    SHOULD_FORWARD_GOSSIP_BLOCK
-                }
-                BlockProcessingOutcome::BlockIsAlreadyKnown => SHOULD_FORWARD_GOSSIP_BLOCK,
                other => {
                    warn!(
                        self.log,
@@ -539,7 +539,6 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
                        "Invalid gossip beacon block ssz";
                        "ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
                    );
-                    SHOULD_NOT_FORWARD_GOSSIP_BLOCK //TODO: Decide if we want to forward these
                }
            },
            Err(_) => {
@@ -549,15 +548,18 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
                    "Erroneous gossip beacon block ssz";
                    "ssz" => format!("0x{}", hex::encode(block.as_ssz_bytes())),
                );
-                SHOULD_NOT_FORWARD_GOSSIP_BLOCK
            }
        }
+        // TODO: Update with correct block gossip checking
+        true
    }
    /// Process a gossip message declaring a new attestation.
    ///
    /// Not currently implemented.
-    pub fn on_attestation_gossip(&mut self, peer_id: PeerId, msg: Attestation<T::EthSpec>) {
+    pub fn on_attestation_gossip(&mut self, _peer_id: PeerId, _msg: Attestation<T::EthSpec>) {
+        // TODO: Handle subnet gossip
+        /*
        match self.chain.process_attestation(msg.clone()) {
            Ok(outcome) => match outcome {
                AttestationProcessingOutcome::Processed => {
@@ -603,7 +605,8 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
                    "ssz" => format!("0x{}", hex::encode(msg.as_ssz_bytes())),
                );
            }
-        }
+        };
+        */
    }
 }
@@ -625,15 +628,15 @@ pub(crate) fn status_message<T: BeaconChainTypes>(
 /// Wraps a Network Channel to employ various RPC related network functionality for the message
 /// handler. The handler doesn't manage it's own request Id's and can therefore only send
 /// responses or requests with 0 request Ids.
-pub struct HandlerNetworkContext {
+pub struct HandlerNetworkContext<T: EthSpec> {
     /// The network channel to relay messages to the Network service.
-    network_send: mpsc::UnboundedSender<NetworkMessage>,
+    network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
     /// Logger for the `NetworkContext`.
     log: slog::Logger,
 }
-impl HandlerNetworkContext {
-    pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage>, log: slog::Logger) -> Self {
+impl<T: EthSpec> HandlerNetworkContext<T> {
+    pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage<T>>, log: slog::Logger) -> Self {
        Self { network_send, log }
    }
@@ -655,7 +658,7 @@ impl HandlerNetworkContext {
        });
    }
-    pub fn send_rpc_request(&mut self, peer_id: PeerId, rpc_request: RPCRequest) {
+    pub fn send_rpc_request(&mut self, peer_id: PeerId, rpc_request: RPCRequest<T>) {
        // the message handler cannot send requests with ids. Id's are managed by the sync
        // manager.
        let request_id = 0;
@@ -667,7 +670,7 @@ impl HandlerNetworkContext {
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
-        rpc_response: RPCResponse,
+        rpc_response: RPCResponse<T>,
    ) {
        self.send_rpc_event(
            peer_id,
@@ -680,12 +683,12 @@ impl HandlerNetworkContext {
        &mut self,
        peer_id: PeerId,
        request_id: RequestId,
-        rpc_error_response: RPCErrorResponse,
+        rpc_error_response: RPCErrorResponse<T>,
    ) {
        self.send_rpc_event(peer_id, RPCEvent::Response(request_id, rpc_error_response));
    }
-    fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
+    fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent<T>) {
        self.network_send
            .try_send(NetworkMessage::RPC(peer_id, rpc_event))
            .unwrap_or_else(|_| {


@@ -1,23 +1,24 @@
 use crate::error;
-use crate::message_handler::{HandlerMessage, MessageHandler};
 use crate::persisted_dht::{load_dht, persist_dht};
-use crate::NetworkConfig;
-use beacon_chain::{BeaconChain, BeaconChainTypes};
-use core::marker::PhantomData;
-use eth2_libp2p::Service as LibP2PService;
-use eth2_libp2p::{
-    rpc::RPCRequest, Enr, Libp2pEvent, MessageId, Multiaddr, NetworkGlobals, PeerId, Swarm, Topic,
-};
+use crate::router::{Router, RouterMessage};
+use crate::{
+    attestation_service::{AttServiceMessage, AttestationService},
+    NetworkConfig,
+};
+use beacon_chain::{BeaconChain, BeaconChainTypes};
+use eth2_libp2p::Service as LibP2PService;
+use eth2_libp2p::{rpc::RPCRequest, Enr, Libp2pEvent, MessageId, NetworkGlobals, PeerId, Swarm};
 use eth2_libp2p::{PubsubMessage, RPCEvent};
 use futures::prelude::*;
 use futures::Stream;
+use rest_types::ValidatorSubscription;
 use slog::{debug, error, info, trace};
-use std::collections::HashSet;
-use std::sync::{atomic::Ordering, Arc};
+use std::sync::Arc;
 use std::time::{Duration, Instant};
 use tokio::runtime::TaskExecutor;
 use tokio::sync::{mpsc, oneshot};
 use tokio::timer::Delay;
+use types::EthSpec;
 mod tests;
@@ -25,27 +26,46 @@ mod tests;
 const BAN_PEER_TIMEOUT: u64 = 30;
 /// Service that handles communication between internal services and the `eth2_libp2p` network service.
-pub struct Service<T: BeaconChainTypes> {
-    libp2p_port: u16,
-    network_globals: Arc<NetworkGlobals>,
-    _libp2p_exit: oneshot::Sender<()>,
-    _network_send: mpsc::UnboundedSender<NetworkMessage>,
-    _phantom: PhantomData<T>,
+pub struct NetworkService<T: BeaconChainTypes> {
+    /// The underlying libp2p service that drives all the network interactions.
+    libp2p: LibP2PService<T::EthSpec>,
+    /// An attestation and subnet manager service.
+    attestation_service: AttestationService<T>,
+    /// The receiver channel for lighthouse to communicate with the network service.
+    network_recv: mpsc::UnboundedReceiver<NetworkMessage<T::EthSpec>>,
+    /// The sending channel for the network service to send messages to be routed throughout
+    /// lighthouse.
+    router_send: mpsc::UnboundedSender<RouterMessage<T::EthSpec>>,
+    /// A reference to lighthouse's database to persist the DHT.
+    store: Arc<T::Store>,
+    /// A collection of global variables, accessible outside of the network service.
+    network_globals: Arc<NetworkGlobals<T::EthSpec>>,
+    /// An initial delay to update variables after the libp2p service has started.
+    initial_delay: Delay,
+    /// The logger for the network service.
+    log: slog::Logger,
+    /// A probability of propagation.
+    propagation_percentage: Option<u8>,
 }
-impl<T: BeaconChainTypes> Service<T> {
-    pub fn new(
+impl<T: BeaconChainTypes> NetworkService<T> {
+    pub fn start(
        beacon_chain: Arc<BeaconChain<T>>,
        config: &NetworkConfig,
        executor: &TaskExecutor,
        network_log: slog::Logger,
-    ) -> error::Result<(Arc<Self>, mpsc::UnboundedSender<NetworkMessage>)> {
+    ) -> error::Result<(
+        Arc<NetworkGlobals<T::EthSpec>>,
+        mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
+        oneshot::Sender<()>,
+    )> {
        // build the network channel
-        let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage>();
-        // launch message handler thread
+        let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage<T::EthSpec>>();
+        // Get a reference to the beacon chain store
        let store = beacon_chain.store.clone();
-        let message_handler_send = MessageHandler::spawn(
-            beacon_chain,
+        // launch the router task
+        let router_send = Router::spawn(
+            beacon_chain.clone(),
            network_send.clone(),
            executor,
            network_log.clone(),
@@ -53,82 +73,42 @@ impl<T: BeaconChainTypes> Service<T> {
        let propagation_percentage = config.propagation_percentage;
        // launch libp2p service
-        let (network_globals, mut libp2p_service) =
-            LibP2PService::new(config, network_log.clone())?;
+        let (network_globals, mut libp2p) = LibP2PService::new(config, network_log.clone())?;
-        for enr in load_dht::<T::Store, T::EthSpec>(store.clone()) {
-            libp2p_service.swarm.add_enr(enr);
+        for enr in load_dht::<T>(store.clone()) {
+            libp2p.swarm.add_enr(enr);
        }
        // A delay used to initialise code after the network has started
        // This is currently used to obtain the listening addresses from the libp2p service.
        let initial_delay = Delay::new(Instant::now() + Duration::from_secs(1));
-        let libp2p_exit = spawn_service::<T>(
-            libp2p_service,
-            network_recv,
-            message_handler_send,
-            executor,
-            store,
-            network_globals.clone(),
-            initial_delay,
-            network_log.clone(),
-            propagation_percentage,
-        )?;
+        // create the attestation service
+        let attestation_service =
+            AttestationService::new(beacon_chain, network_globals.clone(), &network_log);
-        let network_service = Service {
-            libp2p_port: config.libp2p_port,
-            network_globals,
-            _libp2p_exit: libp2p_exit,
-            _network_send: network_send.clone(),
-            _phantom: PhantomData,
+        // create the network service and spawn the task
+        let network_service = NetworkService {
+            libp2p,
+            attestation_service,
+            network_recv,
+            router_send,
+            store,
+            network_globals: network_globals.clone(),
+            initial_delay,
+            log: network_log,
+            propagation_percentage,
        };
-        Ok((Arc::new(network_service), network_send))
-    }
-    /// Returns the local ENR from the underlying Discv5 behaviour that external peers may connect
-    /// to.
-    pub fn local_enr(&self) -> Option<Enr> {
-        self.network_globals.local_enr.read().clone()
-    }
-    /// Returns the local libp2p PeerID.
-    pub fn local_peer_id(&self) -> PeerId {
-        self.network_globals.peer_id.read().clone()
-    }
-    /// Returns the list of `Multiaddr` that the underlying libp2p instance is listening on.
-    pub fn listen_multiaddrs(&self) -> Vec<Multiaddr> {
-        self.network_globals.listen_multiaddrs.read().clone()
-    }
-    /// Returns the libp2p port that this node has been configured to listen using.
-    pub fn listen_port(&self) -> u16 {
-        self.libp2p_port
-    }
-    /// Returns the number of libp2p connected peers.
-    pub fn connected_peers(&self) -> usize {
-        self.network_globals.connected_peers.load(Ordering::Relaxed)
-    }
-    /// Returns the set of `PeerId` that are connected via libp2p.
-    pub fn connected_peer_set(&self) -> HashSet<PeerId> {
-        self.network_globals.connected_peer_set.read().clone()
+        let network_exit = spawn_service(network_service, &executor)?;
+        Ok((network_globals, network_send, network_exit))
    }
 }
 fn spawn_service<T: BeaconChainTypes>(
-    mut libp2p_service: LibP2PService,
-    mut network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
-    mut message_handler_send: mpsc::UnboundedSender<HandlerMessage>,
+    mut service: NetworkService<T>,
    executor: &TaskExecutor,
-    store: Arc<T::Store>,
-    network_globals: Arc<NetworkGlobals>,
-    mut initial_delay: Delay,
-    log: slog::Logger,
-    propagation_percentage: Option<u8>,
 ) -> error::Result<tokio::sync::oneshot::Sender<()>> {
    let (network_exit, mut exit_rx) = tokio::sync::oneshot::channel();
@@ -136,25 +116,26 @@ fn spawn_service<T: BeaconChainTypes>(
    executor.spawn(
        futures::future::poll_fn(move || -> Result<_, ()> {
+            let log = &service.log;
-            if !initial_delay.is_elapsed() {
-                if let Ok(Async::Ready(_)) = initial_delay.poll() {
-                    let multi_addrs = Swarm::listeners(&libp2p_service.swarm).cloned().collect();
-                    *network_globals.listen_multiaddrs.write() = multi_addrs;
+            if !service.initial_delay.is_elapsed() {
+                if let Ok(Async::Ready(_)) = service.initial_delay.poll() {
+                    let multi_addrs = Swarm::listeners(&service.libp2p.swarm).cloned().collect();
+                    *service.network_globals.listen_multiaddrs.write() = multi_addrs;
                }
            }
            // perform termination tasks when the network is being shutdown
            if let Ok(Async::Ready(_)) | Err(_) = exit_rx.poll() {
                // network thread is terminating
-                let enrs: Vec<Enr> = libp2p_service.swarm.enr_entries().cloned().collect();
+                let enrs: Vec<Enr> = service.libp2p.swarm.enr_entries().cloned().collect();
                debug!(
                    log,
                    "Persisting DHT to store";
                    "Number of peers" => format!("{}", enrs.len()),
                );
-                match persist_dht::<T::Store, T::EthSpec>(store.clone(), enrs) {
+                match persist_dht::<T>(service.store.clone(), enrs) {
                    Err(e) => error!(
                        log,
                        "Failed to persist DHT on drop";
@@ -173,11 +154,11 @@ fn spawn_service<T: BeaconChainTypes>(
            // processes the network channel before processing the libp2p swarm
            loop {
                // poll the network channel
-                match network_recv.poll() {
+                match service.network_recv.poll() {
                    Ok(Async::Ready(Some(message))) => match message {
                        NetworkMessage::RPC(peer_id, rpc_event) => {
                            trace!(log, "Sending RPC"; "rpc" => format!("{}", rpc_event));
-                            libp2p_service.swarm.send_rpc(peer_id, rpc_event);
+                            service.libp2p.swarm.send_rpc(peer_id, rpc_event);
                        }
                        NetworkMessage::Propagate {
                            propagation_source,
@@ -186,7 +167,7 @@ fn spawn_service<T: BeaconChainTypes>(
                            // TODO: Remove this for mainnet
                            // randomly prevents propagation
                            let mut should_send = true;
-                            if let Some(percentage) = propagation_percentage {
+                            if let Some(percentage) = service.propagation_percentage {
                                // not exact percentage but close enough
                                let rand = rand::random::<u8>() % 100;
                                if rand > percentage {
@@ -201,16 +182,16 @@ fn spawn_service<T: BeaconChainTypes>(
                                    "propagation_peer" => format!("{:?}", propagation_source),
                                    "message_id" => message_id.to_string(),
                                );
-                                libp2p_service
+                                service.libp2p
                                    .swarm
                                    .propagate_message(&propagation_source, message_id);
                            }
                        }
-                        NetworkMessage::Publish { topics, message } => {
+                        NetworkMessage::Publish { messages } => {
                            // TODO: Remove this for mainnet
                            // randomly prevents propagation
                            let mut should_send = true;
-                            if let Some(percentage) = propagation_percentage {
+                            if let Some(percentage) = service.propagation_percentage {
                                // not exact percentage but close enough
                                let rand = rand::random::<u8>() % 100;
                                if rand > percentage {
@@ -219,18 +200,31 @@ fn spawn_service<T: BeaconChainTypes>(
                                }
                            }
                            if !should_send {
-                                info!(log, "Random filter did not publish message");
+                                info!(log, "Random filter did not publish messages");
                            } else {
-                                debug!(log, "Sending pubsub message"; "topics" => format!("{:?}",topics));
-                                libp2p_service.swarm.publish(&topics, message);
+                                let mut unique_topics = Vec::new();
+                                for message in &messages {
+                                    for topic in message.topics() {
+                                        if !unique_topics.contains(&topic) {
+                                            unique_topics.push(topic);
+                                        }
+                                    }
+                                }
+                                debug!(log, "Sending pubsub messages"; "count" => messages.len(), "topics" => format!("{:?}", unique_topics));
+                                service.libp2p.swarm.publish(messages);
                            }
                        }
                        NetworkMessage::Disconnect { peer_id } => {
-                            libp2p_service.disconnect_and_ban_peer(
+                            service.libp2p.disconnect_and_ban_peer(
                                peer_id,
                                std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
                            );
                        }
+                        NetworkMessage::Subscribe { subscriptions } => {
+                            // the result is dropped as it used solely for ergonomics
+                            let _ = service
+                                .attestation_service
+                                .validator_subscriptions(subscriptions);
+                        }
                    },
                    Ok(Async::NotReady) => break,
                    Ok(Async::Ready(None)) => {
@@ -244,10 +238,24 @@ fn spawn_service<T: BeaconChainTypes>(
                }
            }
+            // process any attestation service events
+            // NOTE: This must come after the network message processing as that may trigger events in
+            // the attestation service.
+            while let Ok(Async::Ready(Some(attestation_service_message))) =
+                service.attestation_service.poll()
+            {
+                match attestation_service_message {
+                    // TODO: Implement
+                    AttServiceMessage::Subscribe(_subnet) => {}
+                    AttServiceMessage::Unsubscribe(_subnet) => {}
+                    AttServiceMessage::EnrAdd(_subnet) => {}
+                    AttServiceMessage::EnrRemove(_subnet) => {}
+                    AttServiceMessage::DiscoverPeers(_subnet) => {}
+                }
+            }
            let mut peers_to_ban = Vec::new();
            // poll the swarm
            loop {
-                match libp2p_service.poll() {
+                match service.libp2p.poll() {
                    Ok(Async::Ready(Some(event))) => match event {
                        Libp2pEvent::RPC(peer_id, rpc_event) => {
                            // trace!(log, "Received RPC"; "rpc" => format!("{}", rpc_event));
@@ -256,21 +264,21 @@ fn spawn_service<T: BeaconChainTypes>(
                            if let RPCEvent::Request(_, RPCRequest::Goodbye(_)) = rpc_event {
                                peers_to_ban.push(peer_id.clone());
                            };
-                            message_handler_send
-                                .try_send(HandlerMessage::RPC(peer_id, rpc_event))
-                                .map_err(|_| { debug!(log, "Failed to send RPC to handler");} )?;
+                            service.router_send
+                                .try_send(RouterMessage::RPC(peer_id, rpc_event))
+                                .map_err(|_| { debug!(log, "Failed to send RPC to router");} )?;
                        }
                        Libp2pEvent::PeerDialed(peer_id) => {
                            debug!(log, "Peer Dialed"; "peer_id" => format!("{:?}", peer_id));
-                            message_handler_send
-                                .try_send(HandlerMessage::PeerDialed(peer_id))
-                                .map_err(|_| { debug!(log, "Failed to send peer dialed to handler");})?;
+                            service.router_send
+                                .try_send(RouterMessage::PeerDialed(peer_id))
+                                .map_err(|_| { debug!(log, "Failed to send peer dialed to router");})?;
                        }
                        Libp2pEvent::PeerDisconnected(peer_id) => {
                            debug!(log, "Peer Disconnected"; "peer_id" => format!("{:?}", peer_id));
-                            message_handler_send
-                                .try_send(HandlerMessage::PeerDisconnected(peer_id))
-                                .map_err(|_| { debug!(log, "Failed to send peer disconnect to handler");})?;
+                            service.router_send
+                                .try_send(RouterMessage::PeerDisconnected(peer_id))
+                                .map_err(|_| { debug!(log, "Failed to send peer disconnect to router");})?;
                        }
                        Libp2pEvent::PubsubMessage {
                            id,
@@ -278,9 +286,9 @@ fn spawn_service<T: BeaconChainTypes>(
                            message,
                            ..
                        } => {
-                            message_handler_send
-                                .try_send(HandlerMessage::PubsubMessage(id, source, message))
-                                .map_err(|_| { debug!(log, "Failed to send pubsub message to handler");})?;
+                            service.router_send
+                                .try_send(RouterMessage::PubsubMessage(id, source, message))
+                                .map_err(|_| { debug!(log, "Failed to send pubsub message to router");})?;
                        }
                        Libp2pEvent::PeerSubscribed(_, _) => {}
                    },
@@ -292,7 +300,7 @@ fn spawn_service<T: BeaconChainTypes>(
            // ban and disconnect any peers that sent Goodbye requests
            while let Some(peer_id) = peers_to_ban.pop() {
-                libp2p_service.disconnect_and_ban_peer(
+                service.libp2p.disconnect_and_ban_peer(
                    peer_id.clone(),
                    std::time::Duration::from_secs(BAN_PEER_TIMEOUT),
                );
@@ -308,14 +316,15 @@ fn spawn_service<T: BeaconChainTypes>(
 /// Types of messages that the network service can receive.
 #[derive(Debug)]
-pub enum NetworkMessage {
-    /// Send an RPC message to the libp2p service.
-    RPC(PeerId, RPCEvent),
-    /// Publish a message to gossipsub.
-    Publish {
-        topics: Vec<Topic>,
-        message: PubsubMessage,
+pub enum NetworkMessage<T: EthSpec> {
+    /// Subscribes a list of validators to specific slots for attestation duties.
+    Subscribe {
+        subscriptions: Vec<ValidatorSubscription>,
    },
+    /// Send an RPC message to the libp2p service.
+    RPC(PeerId, RPCEvent<T>),
+    /// Publish a list of messages to the gossipsub protocol.
+    Publish { messages: Vec<PubsubMessage<T>> },
    /// Propagate a received gossipsub message.
    Propagate {
        propagation_source: PeerId,
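The publish arm of the network service's event loop collects `unique_topics` purely for the log line, using `Vec::contains` to deduplicate while preserving first-seen order. A standalone sketch of that dedup idiom (the generic helper name is our own, not from the codebase):

```rust
// Order-preserving dedup with `Vec::contains`, as used when collecting
// `unique_topics` for logging. O(n^2) in the number of items, which is
// fine for the handful of topics a message batch carries; a HashSet
// would trade insertion order for speed.
fn unique_in_order<T: PartialEq + Clone>(items: &[T]) -> Vec<T> {
    let mut unique = Vec::new();
    for item in items {
        if !unique.contains(item) {
            unique.push(item.clone());
        }
    }
    unique
}

fn main() {
    let topics = ["beacon_block", "beacon_attestation", "beacon_block", "voluntary_exit"];
    // Duplicates are dropped; the first occurrence of each topic keeps its position.
    println!("{:?}", unique_in_order(&topics));
}
```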


@@ -35,7 +35,7 @@
 use super::network_context::SyncNetworkContext;
 use super::range_sync::{Batch, BatchProcessResult, RangeSync};
-use crate::message_processor::PeerSyncInfo;
+use crate::router::processor::PeerSyncInfo;
 use crate::service::NetworkMessage;
 use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
 use eth2_libp2p::rpc::methods::*;
@@ -153,7 +153,7 @@ pub struct SyncManager<T: BeaconChainTypes> {
     input_channel: mpsc::UnboundedReceiver<SyncMessage<T::EthSpec>>,
     /// A network context to contact the network service.
-    network: SyncNetworkContext,
+    network: SyncNetworkContext<T::EthSpec>,
     /// The object handling long-range batch load-balanced syncing.
     range_sync: RangeSync<T>,
@@ -180,7 +180,7 @@ pub struct SyncManager<T: BeaconChainTypes> {
 pub fn spawn<T: BeaconChainTypes>(
     executor: &tokio::runtime::TaskExecutor,
     beacon_chain: Weak<BeaconChain<T>>,
-    network_send: mpsc::UnboundedSender<NetworkMessage>,
+    network_send: mpsc::UnboundedSender<NetworkMessage<T::EthSpec>>,
     log: slog::Logger,
 ) -> (
     mpsc::UnboundedSender<SyncMessage<T::EthSpec>>,
@@ -391,7 +391,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
         // we have the correct block, try and process it
         if let Some(chain) = self.chain.upgrade() {
-            match chain.process_block(block.clone()) {
+            match BlockProcessingOutcome::shim(chain.process_block(block.clone())) {
                 Ok(outcome) => {
                     match outcome {
                         BlockProcessingOutcome::Processed { block_root } => {
@@ -597,7 +597,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
             .downloaded_blocks
             .pop()
             .expect("There is always at least one block in the queue");
-        match chain.process_block(newest_block.clone()) {
+        match BlockProcessingOutcome::shim(chain.process_block(newest_block.clone())) {
             Ok(BlockProcessingOutcome::ParentUnknown { .. }) => {
                 // need to keep looking for parents
                 // add the block back to the queue and continue the search
@@ -642,7 +642,7 @@ impl<T: BeaconChainTypes> SyncManager<T> {
         while let Some(block) = parent_request.downloaded_blocks.pop() {
             // check if the chain exists
             if let Some(chain) = self.chain.upgrade() {
-                match chain.process_block(block) {
+                match BlockProcessingOutcome::shim(chain.process_block(block)) {
                     Ok(BlockProcessingOutcome::Processed { .. })
                     | Ok(BlockProcessingOutcome::BlockIsAlreadyKnown { .. }) => {} // continue to the next block
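The diff wraps every `chain.process_block(...)` call in `BlockProcessingOutcome::shim(...)`, which suggests the block-import API now returns a `Result<_, BlockError>` and the shim folds it back into the legacy outcome enum so existing `match` arms keep working. A hypothetical sketch of that adapter (simplified names; the real `BlockError` and `BlockProcessingOutcome` have many more variants):

```rust
// Simplified stand-in for a 32-byte root.
pub type Hash256 = [u8; 32];

#[derive(Debug, PartialEq)]
pub enum BlockError {
    ParentUnknown(Hash256),
    BlockIsAlreadyKnown,
    BeaconChainError(String),
}

#[derive(Debug, PartialEq)]
pub enum BlockProcessingOutcome {
    Processed { block_root: Hash256 },
    ParentUnknown { parent: Hash256 },
    BlockIsAlreadyKnown,
}

impl BlockProcessingOutcome {
    /// Adapts the new `Result`-based import API to the legacy outcome
    /// enum. Recoverable conditions become `Ok(outcome)`; internal
    /// errors stay in the `Err` channel.
    pub fn shim(result: Result<Hash256, BlockError>) -> Result<Self, String> {
        match result {
            Ok(block_root) => Ok(BlockProcessingOutcome::Processed { block_root }),
            Err(BlockError::ParentUnknown(parent)) => {
                Ok(BlockProcessingOutcome::ParentUnknown { parent })
            }
            Err(BlockError::BlockIsAlreadyKnown) => Ok(BlockProcessingOutcome::BlockIsAlreadyKnown),
            Err(BlockError::BeaconChainError(e)) => Err(e),
        }
    }
}
```

The shim keeps the sync manager's call sites untouched while the beacon chain migrates to the error-based API.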


@@ -5,9 +5,4 @@ pub mod manager;
 mod network_context;
 mod range_sync;
-/// Currently implemented sync methods.
-pub enum SyncMethod {
-    SimpleSync,
-}
 pub use manager::SyncMessage;


@@ -1,7 +1,7 @@
 //! Provides network functionality for the Syncing thread. This fundamentally wraps a network
 //! channel and stores a global RPC ID to perform requests.
-use crate::message_processor::status_message;
+use crate::router::processor::status_message;
 use crate::service::NetworkMessage;
 use beacon_chain::{BeaconChain, BeaconChainTypes};
 use eth2_libp2p::rpc::methods::*;
@@ -10,20 +10,21 @@ use eth2_libp2p::PeerId;
 use slog::{debug, trace, warn};
 use std::sync::Weak;
 use tokio::sync::mpsc;
+use types::EthSpec;
 /// Wraps a Network channel to employ various RPC related network functionality for the Sync manager. This includes management of a global RPC request Id.
-pub struct SyncNetworkContext {
+pub struct SyncNetworkContext<T: EthSpec> {
     /// The network channel to relay messages to the Network service.
-    network_send: mpsc::UnboundedSender<NetworkMessage>,
+    network_send: mpsc::UnboundedSender<NetworkMessage<T>>,
     request_id: RequestId,
     /// Logger for the `SyncNetworkContext`.
     log: slog::Logger,
 }
-impl SyncNetworkContext {
-    pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage>, log: slog::Logger) -> Self {
+impl<T: EthSpec> SyncNetworkContext<T> {
+    pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage<T>>, log: slog::Logger) -> Self {
         Self {
             network_send,
             request_id: 0,
@@ -31,9 +32,9 @@ impl SyncNetworkContext {
         }
     }
-    pub fn status_peer<T: BeaconChainTypes>(
+    pub fn status_peer<U: BeaconChainTypes>(
         &mut self,
-        chain: Weak<BeaconChain<T>>,
+        chain: Weak<BeaconChain<U>>,
         peer_id: PeerId,
     ) {
         if let Some(chain) = chain.upgrade() {
@@ -117,7 +118,7 @@ impl SyncNetworkContext {
     pub fn send_rpc_request(
         &mut self,
         peer_id: PeerId,
-        rpc_request: RPCRequest,
+        rpc_request: RPCRequest<T>,
     ) -> Result<RequestId, &'static str> {
         let request_id = self.request_id;
         self.request_id += 1;
@@ -125,7 +126,11 @@ impl SyncNetworkContext {
         Ok(request_id)
     }
-    fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent) -> Result<(), &'static str> {
+    fn send_rpc_event(
+        &mut self,
+        peer_id: PeerId,
+        rpc_event: RPCEvent<T>,
+    ) -> Result<(), &'static str> {
         self.network_send
             .try_send(NetworkMessage::RPC(peer_id, rpc_event))
             .map_err(|_| {
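The core of `SyncNetworkContext` is a channel sender paired with a monotonically increasing request id. A toy sketch of that pattern, using `std::sync::mpsc` and stand-in types (`NetworkMessage` and `MinimalSpec` here are simplified; the real type is generic over `EthSpec` and uses tokio's unbounded channel):

```rust
use std::marker::PhantomData;
use std::sync::mpsc;

pub type RequestId = usize;

// Toy stand-in for the real message enum, tagged with the spec type.
#[derive(Debug)]
pub enum NetworkMessage<T> {
    Rpc(RequestId, PhantomData<T>),
}

pub struct SyncNetworkContext<T> {
    network_send: mpsc::Sender<NetworkMessage<T>>,
    request_id: RequestId,
}

impl<T> SyncNetworkContext<T> {
    pub fn new(network_send: mpsc::Sender<NetworkMessage<T>>) -> Self {
        Self { network_send, request_id: 0 }
    }

    /// Allocates a fresh request id and forwards the request to the
    /// network service, mirroring the `send_rpc_request` shape above.
    pub fn send_rpc_request(&mut self) -> Result<RequestId, &'static str> {
        let request_id = self.request_id;
        self.request_id += 1;
        self.network_send
            .send(NetworkMessage::Rpc(request_id, PhantomData))
            .map_err(|_| "network channel closed")?;
        Ok(request_id)
    }
}

// Marker type standing in for a concrete EthSpec.
pub struct MinimalSpec;
```

Keeping the counter inside the context gives every in-flight RPC a unique id without any shared locking, which is the point of the "global RPC request Id" mentioned in the module docs.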


@@ -1,7 +1,7 @@
 use super::batch::Batch;
-use crate::message_processor::FUTURE_SLOT_TOLERANCE;
+use crate::router::processor::FUTURE_SLOT_TOLERANCE;
 use crate::sync::manager::SyncMessage;
-use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
+use beacon_chain::{BeaconChain, BeaconChainTypes, BlockError};
 use slog::{debug, error, trace, warn};
 use std::sync::{Arc, Weak};
 use tokio::sync::mpsc;
@@ -54,48 +54,31 @@ fn process_batch<T: BeaconChainTypes>(
     batch: &Batch<T::EthSpec>,
     log: &slog::Logger,
 ) -> Result<(), String> {
-    let mut successful_block_import = false;
-    for block in &batch.downloaded_blocks {
     if let Some(chain) = chain.upgrade() {
-        let processing_result = chain.process_block(block.clone());
-        if let Ok(outcome) = processing_result {
-            match outcome {
-                BlockProcessingOutcome::Processed { block_root } => {
-                    // The block was valid and we processed it successfully.
-                    trace!(
-                        log, "Imported block from network";
-                        "slot" => block.slot(),
-                        "block_root" => format!("{}", block_root),
-                    );
-                    successful_block_import = true;
-                }
-                BlockProcessingOutcome::ParentUnknown { parent, .. } => {
+        match chain.process_chain_segment(batch.downloaded_blocks.clone()) {
+            Ok(roots) => {
+                trace!(
+                    log, "Imported blocks from network";
+                    "count" => roots.len(),
+                );
+            }
+            Err(BlockError::ParentUnknown(parent)) => {
                 // blocks should be sequential and all parents should exist
                 warn!(
                     log, "Parent block is unknown";
                     "parent_root" => format!("{}", parent),
-                    "baby_block_slot" => block.slot(),
                 );
-                if successful_block_import {
-                    run_fork_choice(chain, log);
-                }
-                return Err(format!(
-                    "Block at slot {} has an unknown parent.",
-                    block.slot()
-                ));
             }
-            BlockProcessingOutcome::BlockIsAlreadyKnown => {
+            Err(BlockError::BlockIsAlreadyKnown) => {
                 // this block is already known to us, move to the next
                 debug!(
                     log, "Imported a block that is already known";
-                    "block_slot" => block.slot(),
                 );
             }
-            BlockProcessingOutcome::FutureSlot {
+            Err(BlockError::FutureSlot {
                 present_slot,
                 block_slot,
-            } => {
+            }) => {
                 if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot {
                     // The block is too far in the future, drop it.
                     warn!(
@@ -105,13 +88,6 @@ fn process_batch<T: BeaconChainTypes>(
                         "block_slot" => block_slot,
                         "FUTURE_SLOT_TOLERANCE" => FUTURE_SLOT_TOLERANCE,
                     );
-                    if successful_block_import {
-                        run_fork_choice(chain, log);
-                    }
-                    return Err(format!(
-                        "Block at slot {} is too far in the future",
-                        block.slot()
-                    ));
                 } else {
                     // The block is in the future, but not too far.
                     debug!(
@@ -122,49 +98,35 @@ fn process_batch<T: BeaconChainTypes>(
                     );
                 }
             }
-            BlockProcessingOutcome::WouldRevertFinalizedSlot { .. } => {
+            Err(BlockError::WouldRevertFinalizedSlot { .. }) => {
                 debug!(
                     log, "Finalized or earlier block processed";
-                    "outcome" => format!("{:?}", outcome),
                 );
                 // block reached our finalized slot or was earlier, move to the next block
             }
-            BlockProcessingOutcome::GenesisBlock => {
+            Err(BlockError::GenesisBlock) => {
                 debug!(
                     log, "Genesis block was processed";
-                    "outcome" => format!("{:?}", outcome),
                 );
             }
-            _ => {
-                warn!(
-                    log, "Invalid block received";
-                    "msg" => "peer sent invalid block",
-                    "outcome" => format!("{:?}", outcome),
-                );
-                if successful_block_import {
-                    run_fork_choice(chain, log);
-                }
-                return Err(format!("Invalid block at slot {}", block.slot()));
-            }
-            }
-        } else {
+            Err(BlockError::BeaconChainError(e)) => {
                 warn!(
                     log, "BlockProcessingFailure";
                     "msg" => "unexpected condition in processing block.",
-                    "outcome" => format!("{:?}", processing_result)
+                    "outcome" => format!("{:?}", e)
                 );
-                if successful_block_import {
-                    run_fork_choice(chain, log);
-                }
-                return Err(format!(
-                    "Unexpected block processing error: {:?}",
-                    processing_result
-                ));
             }
+            other => {
+                warn!(
+                    log, "Invalid block received";
+                    "msg" => "peer sent invalid block",
+                    "outcome" => format!("{:?}", other),
+                );
+            }
+        }
     } else {
         return Ok(()); // terminate early due to dropped beacon chain
     }
-    }
     // Batch completed successfully, run fork choice.
     if let Some(chain) = chain.upgrade() {
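The change above replaces the per-block import loop with a single `process_chain_segment` call over the whole batch, and one `match` on its result decides the batch's fate. A toy sketch of that classification step (simplified stand-ins; the real `BlockError` has more variants and the tolerance logic lives inside the handler):

```rust
#[derive(Debug, PartialEq)]
pub enum BlockError {
    ParentUnknown(u64), // simplified stand-in for the missing parent root
    BlockIsAlreadyKnown,
    FutureSlot { present_slot: u64, block_slot: u64 },
}

pub const FUTURE_SLOT_TOLERANCE: u64 = 1;

/// Classifies a whole-segment result: `true` means the batch outcome is
/// tolerable (imported, duplicate, or only slightly ahead of our clock),
/// `false` means the batch is bad and needs re-download or peer penalty.
pub fn batch_is_tolerable(result: &Result<Vec<u64>, BlockError>) -> bool {
    match result {
        Ok(_) => true,
        Err(BlockError::BlockIsAlreadyKnown) => true,
        Err(BlockError::FutureSlot { present_slot, block_slot }) => {
            // Tolerate blocks at most FUTURE_SLOT_TOLERANCE slots ahead.
            present_slot + FUTURE_SLOT_TOLERANCE >= *block_slot
        }
        Err(BlockError::ParentUnknown(_)) => false,
    }
}
```

Processing the segment as a unit means fork choice runs once per batch instead of once per block, which is the main efficiency win of this restructure.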


@@ -17,7 +17,7 @@ use types::{Hash256, SignedBeaconBlock, Slot};
 /// downvote peers with poor bandwidth. This can be set arbitrarily high, in which case the
 /// responder will fill the response up to the max request size, assuming they have the bandwidth
 /// to do so.
-pub const BLOCKS_PER_BATCH: u64 = 50;
+pub const BLOCKS_PER_BATCH: u64 = 64;
 /// The number of times to retry a batch before the chain is considered failed and removed.
 const MAX_BATCH_RETRIES: u8 = 5;
@@ -141,7 +141,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// batch.
     pub fn on_block_response(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         request_id: RequestId,
         beacon_block: &Option<SignedBeaconBlock<T::EthSpec>>,
     ) -> Option<()> {
@@ -161,7 +161,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// failed indicating that further batches are required.
     fn handle_completed_batch(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         batch: Batch<T::EthSpec>,
     ) {
         // An entire batch of blocks has been received. This functions checks to see if it can be processed,
@@ -255,7 +255,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// of the batch processor.
     pub fn on_batch_process_result(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         processing_id: u64,
         batch: &mut Option<Batch<T::EthSpec>>,
         result: &BatchProcessResult,
@@ -385,7 +385,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     // TODO: Batches could have been partially downloaded due to RPC size-limit restrictions. We
     // need to add logic for partial batch downloads. Potentially, if another peer returns the same
     // batch, we try a partial download.
-    fn handle_invalid_batch(&mut self, network: &mut SyncNetworkContext, batch: Batch<T::EthSpec>) {
+    fn handle_invalid_batch(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        batch: Batch<T::EthSpec>,
+    ) {
         // The current batch could not be processed, indicating either the current or previous
         // batches are invalid
@@ -415,7 +419,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     ///
     /// If the re-downloaded batch is different to the original and can be processed, the original
     /// peer will be downvoted.
-    fn reprocess_batch(&mut self, network: &mut SyncNetworkContext, mut batch: Batch<T::EthSpec>) {
+    fn reprocess_batch(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        mut batch: Batch<T::EthSpec>,
+    ) {
         // marks the batch as attempting to be reprocessed by hashing the downloaded blocks
         batch.original_hash = Some(batch.hash());
@@ -455,7 +463,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// This chain has been requested to start syncing.
     ///
     /// This could be new chain, or an old chain that is being resumed.
-    pub fn start_syncing(&mut self, network: &mut SyncNetworkContext, local_finalized_slot: Slot) {
+    pub fn start_syncing(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        local_finalized_slot: Slot,
+    ) {
         // A local finalized slot is provided as other chains may have made
         // progress whilst this chain was Stopped or paused. If so, update the `processed_batch_id` to
         // accommodate potentially downloaded batches from other chains. Also prune any old batches
@@ -490,7 +502,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// Add a peer to the chain.
     ///
     /// If the chain is active, this starts requesting batches from this peer.
-    pub fn add_peer(&mut self, network: &mut SyncNetworkContext, peer_id: PeerId) {
+    pub fn add_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: PeerId) {
         self.peer_pool.insert(peer_id.clone());
         // do not request blocks if the chain is not syncing
         if let ChainSyncingState::Stopped = self.state {
@@ -503,7 +515,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     }
     /// Sends a STATUS message to all peers in the peer pool.
-    pub fn status_peers(&self, network: &mut SyncNetworkContext) {
+    pub fn status_peers(&self, network: &mut SyncNetworkContext<T::EthSpec>) {
         for peer_id in self.peer_pool.iter() {
             network.status_peer(self.chain.clone(), peer_id.clone());
         }
@@ -517,7 +529,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// this chain.
     pub fn inject_error(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         peer_id: &PeerId,
         request_id: RequestId,
     ) -> Option<ProcessingResult> {
@@ -541,7 +553,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// `MAX_BATCH_RETRIES`.
     pub fn failed_batch(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         mut batch: Batch<T::EthSpec>,
     ) -> ProcessingResult {
         batch.retries += 1;
@@ -575,7 +587,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// Attempts to request the next required batches from the peer pool if the chain is syncing. It will exhaust the peer
     /// pool and left over batches until the batch buffer is reached or all peers are exhausted.
-    fn request_batches(&mut self, network: &mut SyncNetworkContext) {
+    fn request_batches(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) {
         if let ChainSyncingState::Syncing = self.state {
             while self.send_range_request(network) {}
         }
@@ -583,7 +595,7 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     /// Requests the next required batch from a peer. Returns true, if there was a peer available
     /// to send a request and there are batches to request, false otherwise.
-    fn send_range_request(&mut self, network: &mut SyncNetworkContext) -> bool {
+    fn send_range_request(&mut self, network: &mut SyncNetworkContext<T::EthSpec>) -> bool {
         // find the next pending batch and request it from the peer
         if let Some(peer_id) = self.get_next_peer() {
             if let Some(batch) = self.get_next_batch(peer_id) {
@@ -669,7 +681,11 @@ impl<T: BeaconChainTypes> SyncingChain<T> {
     }
     /// Requests the provided batch from the provided peer.
-    fn send_batch(&mut self, network: &mut SyncNetworkContext, batch: Batch<T::EthSpec>) {
+    fn send_batch(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        batch: Batch<T::EthSpec>,
+    ) {
         let request = batch.to_blocks_by_range_request();
         if let Ok(request_id) = network.blocks_by_range_request(batch.current_peer.clone(), request)
         {


@@ -4,7 +4,7 @@
 //! with this struct to to simplify the logic of the other layers of sync.
 use super::chain::{ChainSyncingState, SyncingChain};
-use crate::message_processor::PeerSyncInfo;
+use crate::router::processor::PeerSyncInfo;
 use crate::sync::manager::SyncMessage;
 use crate::sync::network_context::SyncNetworkContext;
 use beacon_chain::{BeaconChain, BeaconChainTypes};
@@ -103,7 +103,11 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
     ///
     /// This removes any out-dated chains, swaps to any higher priority finalized chains and
     /// updates the state of the collection.
-    pub fn update_finalized(&mut self, network: &mut SyncNetworkContext, log: &slog::Logger) {
+    pub fn update_finalized(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        log: &slog::Logger,
+    ) {
         let local_slot = match self.beacon_chain.upgrade() {
             Some(chain) => {
                 let local = match PeerSyncInfo::from_chain(&chain) {
@@ -197,7 +201,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
     #[allow(clippy::too_many_arguments)]
     pub fn new_head_chain(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         remote_finalized_slot: Slot,
         target_head: Hash256,
         target_slot: Slot,
@@ -277,7 +281,11 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
     ///
     /// This removes chains with no peers, or chains whose start block slot is less than our current
     /// finalized block slot.
-    pub fn purge_outdated_chains(&mut self, network: &mut SyncNetworkContext, log: &slog::Logger) {
+    pub fn purge_outdated_chains(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        log: &slog::Logger,
+    ) {
         // Remove any chains that have no peers
         self.finalized_chains
             .retain(|chain| !chain.peer_pool.is_empty());
@@ -349,7 +357,7 @@ impl<T: BeaconChainTypes> ChainCollection<T> {
     /// This will re-status the chains peers on removal. The index must exist.
     pub fn remove_chain(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         index: usize,
         log: &slog::Logger,
     ) {


@@ -42,7 +42,7 @@
 use super::chain::ProcessingResult;
 use super::chain_collection::{ChainCollection, SyncState};
 use super::{Batch, BatchProcessResult};
-use crate::message_processor::PeerSyncInfo;
+use crate::router::processor::PeerSyncInfo;
 use crate::sync::manager::SyncMessage;
 use crate::sync::network_context::SyncNetworkContext;
 use beacon_chain::{BeaconChain, BeaconChainTypes};
@@ -108,7 +108,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
     /// prioritised by peer-pool size.
     pub fn add_peer(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         peer_id: PeerId,
         remote: PeerSyncInfo,
     ) {
@@ -228,7 +228,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
     /// This request could complete a chain or simply add to its progress.
     pub fn blocks_by_range_response(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         peer_id: PeerId,
         request_id: RequestId,
         beacon_block: Option<SignedBeaconBlock<T::EthSpec>>,
@@ -255,7 +255,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
     pub fn handle_block_process_result(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         processing_id: u64,
         batch: Batch<T::EthSpec>,
         result: BatchProcessResult,
@@ -326,7 +326,11 @@ impl<T: BeaconChainTypes> RangeSync<T> {
     /// A peer has disconnected. This removes the peer from any ongoing chains and mappings. A
     /// disconnected peer could remove a chain
-    pub fn peer_disconnect(&mut self, network: &mut SyncNetworkContext, peer_id: &PeerId) {
+    pub fn peer_disconnect(
+        &mut self,
+        network: &mut SyncNetworkContext<T::EthSpec>,
+        peer_id: &PeerId,
+    ) {
         // if the peer is in the awaiting head mapping, remove it
         self.awaiting_head_peers.remove(&peer_id);
@@ -340,7 +344,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
     /// When a peer gets removed, both the head and finalized chains need to be searched to check which pool the peer is in. The chain may also have a batch or batches awaiting
     /// for this peer. If so we mark the batch as failed. The batch may then hit it's maximum
     /// retries. In this case, we need to remove the chain and re-status all the peers.
-    fn remove_peer(&mut self, network: &mut SyncNetworkContext, peer_id: &PeerId) {
+    fn remove_peer(&mut self, network: &mut SyncNetworkContext<T::EthSpec>, peer_id: &PeerId) {
         if let Some((index, ProcessingResult::RemoveChain)) =
             self.chains.head_finalized_request(|chain| {
                 if chain.peer_pool.remove(peer_id) {
@@ -370,7 +374,7 @@ impl<T: BeaconChainTypes> RangeSync<T> {
     /// been too many failed attempts for the batch, remove the chain.
     pub fn inject_error(
         &mut self,
-        network: &mut SyncNetworkContext,
+        network: &mut SyncNetworkContext<T::EthSpec>,
         peer_id: PeerId,
         request_id: RequestId,
     ) {


@@ -1,12 +1,13 @@
 [package]
 name = "rest_api"
-version = "0.1.0"
-authors = ["Paul Hauner <paul@paulhauner.com>", "Luke Anderson <luke@sigmaprime.io>"]
+version = "0.2.0"
+authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>", "Luke Anderson <luke@sigmaprime.io>"]
 edition = "2018"
 # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
 [dependencies]
 bls = { path = "../../eth2/utils/bls" }
+rest_types = { path = "../../eth2/utils/rest_types" }
 beacon_chain = { path = "../beacon_chain" }
 network = { path = "../network" }
 eth2-libp2p = { path = "../eth2-libp2p" }
@@ -24,7 +25,6 @@ state_processing = { path = "../../eth2/state_processing" }
 types = { path = "../../eth2/types" }
 http = "0.1"
 hyper = "0.12"
-exit-future = "0.1.4"
 tokio = "0.1.22"
 url = "2.1"
 lazy_static = "1.3.0"
@@ -35,6 +35,7 @@ hex = "0.3"
 parking_lot = "0.9"
 futures = "0.1.29"
 operation_pool = { path = "../../eth2/operation_pool" }
+rayon = "1.3.0"
 [dev-dependencies]
 remote_beacon_node = { path = "../../eth2/utils/remote_beacon_node" }


@@ -5,29 +5,17 @@ use crate::{ApiError, ApiResult, BoxFut, UrlQuery};
 use beacon_chain::{BeaconChain, BeaconChainTypes, StateSkipConfig};
 use futures::{Future, Stream};
 use hyper::{Body, Request};
-use serde::{Deserialize, Serialize};
-use ssz_derive::{Decode, Encode};
+use rest_types::{
+    BlockResponse, CanonicalHeadResponse, Committee, HeadBeaconBlock, StateResponse,
+    ValidatorRequest, ValidatorResponse,
+};
 use std::sync::Arc;
 use store::Store;
 use types::{
-    AttesterSlashing, BeaconState, CommitteeIndex, EthSpec, Hash256, ProposerSlashing,
-    PublicKeyBytes, RelativeEpoch, SignedBeaconBlock, Slot, Validator,
+    AttesterSlashing, BeaconState, EthSpec, Hash256, ProposerSlashing, PublicKeyBytes,
+    RelativeEpoch, Slot,
 };
-/// Information about the block and state that are at head of the beacon chain.
-#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
-pub struct CanonicalHeadResponse {
-    pub slot: Slot,
-    pub block_root: Hash256,
-    pub state_root: Hash256,
-    pub finalized_slot: Slot,
-    pub finalized_block_root: Hash256,
-    pub justified_slot: Slot,
-    pub justified_block_root: Hash256,
-    pub previous_justified_slot: Slot,
-    pub previous_justified_block_root: Hash256,
-}
 /// HTTP handler to return a `BeaconBlock` at a given `root` or `slot`.
 pub fn get_head<T: BeaconChainTypes>(
     req: Request<Body>,
@@ -62,15 +50,7 @@ pub fn get_head<T: BeaconChainTypes>(
     ResponseBuilder::new(&req)?.body(&head)
 }
-/// Information about a block that is at the head of a chain. May or may not represent the
-/// canonical head.
-#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
-pub struct HeadBeaconBlock {
-    pub beacon_block_root: Hash256,
-    pub beacon_block_slot: Slot,
-}
-/// HTTP handler to return a list of head block roots.
+/// HTTP handler to return a list of head BeaconBlocks.
 pub fn get_heads<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
@@ -87,14 +67,7 @@ pub fn get_heads<T: BeaconChainTypes>(
     ResponseBuilder::new(&req)?.body(&heads)
 }
-#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
-#[serde(bound = "T: EthSpec")]
-pub struct BlockResponse<T: EthSpec> {
-    pub root: Hash256,
-    pub beacon_block: SignedBeaconBlock<T>,
-}
-/// HTTP handler to return a `SignedBeaconBlock` at a given `root` or `slot`.
+/// HTTP handler to return a `BeaconBlock` at a given `root` or `slot`.
 pub fn get_block<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
@@ -158,14 +131,6 @@ pub fn get_fork<T: BeaconChainTypes>(
     ResponseBuilder::new(&req)?.body(&beacon_chain.head()?.beacon_state.fork)
 }
-#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
-pub struct ValidatorResponse {
-    pub pubkey: PublicKeyBytes,
-    pub validator_index: Option<usize>,
-    pub balance: Option<u64>,
-    pub validator: Option<Validator>,
-}
 /// HTTP handler to which accepts a query string of a list of validator pubkeys and maps it to a
 /// `ValidatorResponse`.
 ///
@@ -246,13 +211,6 @@ pub fn get_active_validators<T: BeaconChainTypes>(
     ResponseBuilder::new(&req)?.body(&validators)
 }
-#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
-pub struct ValidatorRequest {
-    /// If set to `None`, uses the canonical head state.
-    pub state_root: Option<Hash256>,
-    pub pubkeys: Vec<PublicKeyBytes>,
-}
 /// HTTP handler to which accepts a `ValidatorRequest` and returns a `ValidatorResponse` for
/// each of the given `pubkeys`. When `state_root` is `None`, the canonical head is used. /// each of the given `pubkeys`. When `state_root` is `None`, the canonical head is used.
/// ///
@ -365,13 +323,6 @@ fn validator_response_by_pubkey<E: EthSpec>(
} }
} }
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
pub struct Committee {
pub slot: Slot,
pub index: CommitteeIndex,
pub committee: Vec<usize>,
}
/// HTTP handler /// HTTP handler
pub fn get_committees<T: BeaconChainTypes>( pub fn get_committees<T: BeaconChainTypes>(
req: Request<Body>, req: Request<Body>,
@ -405,13 +356,6 @@ pub fn get_committees<T: BeaconChainTypes>(
ResponseBuilder::new(&req)?.body(&committees) ResponseBuilder::new(&req)?.body(&committees)
} }
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, Encode, Decode)]
#[serde(bound = "T: EthSpec")]
pub struct StateResponse<T: EthSpec> {
pub root: Hash256,
pub beacon_state: BeaconState<T>,
}
/// HTTP handler to return a `BeaconState` at a given `root` or `slot`. /// HTTP handler to return a `BeaconState` at a given `root` or `slot`.
/// ///
/// Will not return a state if the request slot is in the future. Will return states higher than /// Will not return a state if the request slot is in the future. Will return states higher than
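The hunk above deletes the handler-local response structs (`CanonicalHeadResponse`, `HeadBeaconBlock`, `BlockResponse`, etc.) and imports them from a shared `rest_types` crate instead, so the HTTP server and API clients agree on a single definition. A minimal std-only sketch of that pattern; the module name mirrors the diff, but the field types are simplified stand-ins for the real `Hash256`/`Slot` newtypes:

```rust
// A shared "rest_types"-style module: response shapes live in one place and
// every handler imports them, instead of each handler file declaring its own.
mod rest_types {
    /// Simplified stand-in for the real `HeadBeaconBlock` (which uses the
    /// `Hash256` and `Slot` newtypes from the `types` crate).
    #[derive(Debug, Clone, PartialEq)]
    pub struct HeadBeaconBlock {
        pub beacon_block_root: [u8; 32],
        pub beacon_block_slot: u64,
    }
}

use rest_types::HeadBeaconBlock;

/// A handler now only constructs the shared type; it no longer owns its definition.
fn make_head(root: [u8; 32], slot: u64) -> HeadBeaconBlock {
    HeadBeaconBlock {
        beacon_block_root: root,
        beacon_block_slot: slot,
    }
}
```

Keeping the types in one crate also lets the validator client deserialize server responses without depending on the server crate.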


@@ -1,20 +1,17 @@
-use crate::{ApiError, ApiResult};
+use crate::{ApiError, ApiResult, NetworkChannel};
 use beacon_chain::{BeaconChain, BeaconChainTypes, StateSkipConfig};
 use bls::PublicKeyBytes;
-use eth2_libp2p::GossipTopic;
-use eth2_libp2p::PubsubMessage;
+use eth2_libp2p::types::GossipEncoding;
+use eth2_libp2p::{PubsubData, PubsubMessage};
 use hex;
 use http::header;
 use hyper::{Body, Request};
 use network::NetworkMessage;
-use parking_lot::RwLock;
-use ssz::{Decode, Encode};
-use std::sync::Arc;
+use ssz::Decode;
 use store::{iter::AncestorIter, Store};
-use tokio::sync::mpsc;
 use types::{
     Attestation, BeaconState, CommitteeIndex, Epoch, EthSpec, Hash256, RelativeEpoch, Signature,
-    SignedBeaconBlock, Slot,
+    SignedAggregateAndProof, SignedBeaconBlock, Slot,
 };

 /// Parse a slot.
@@ -49,7 +46,7 @@ pub fn parse_committee_index(string: &str) -> Result<CommitteeIndex, ApiError> {
 /// Checks the provided request to ensure that the `content-type` header.
 ///
 /// The content-type header should either be omitted, in which case JSON is assumed, or it should
-/// explicity specify `application/json`. If anything else is provided, an error is returned.
+/// explicitly specify `application/json`. If anything else is provided, an error is returned.
 pub fn check_content_type_for_json(req: &Request<Body>) -> Result<(), ApiError> {
     match req.headers().get(header::CONTENT_TYPE) {
         Some(h) if h == "application/json" => Ok(()),
@@ -61,7 +58,7 @@ pub fn check_content_type_for_json(req: &Request<Body>) -> Result<(), ApiError>
     }
 }

-/// Parse a signature from a `0x` preixed string.
+/// Parse a signature from a `0x` prefixed string.
 pub fn parse_signature(string: &str) -> Result<Signature, ApiError> {
     const PREFIX: &str = "0x";
@@ -78,7 +75,7 @@ pub fn parse_signature(string: &str) -> Result<Signature, ApiError> {
     }
 }

-/// Parse a root from a `0x` preixed string.
+/// Parse a root from a `0x` prefixed string.
 ///
 /// E.g., `"0x0000000000000000000000000000000000000000000000000000000000000000"`
 pub fn parse_root(string: &str) -> Result<Hash256, ApiError> {
@@ -232,18 +229,17 @@ pub fn implementation_pending_response(_req: Request<Body>) -> ApiResult {
 }

 pub fn publish_beacon_block_to_network<T: BeaconChainTypes + 'static>(
-    chan: Arc<RwLock<mpsc::UnboundedSender<NetworkMessage>>>,
+    mut chan: NetworkChannel<T::EthSpec>,
     block: SignedBeaconBlock<T::EthSpec>,
 ) -> Result<(), ApiError> {
-    // create the network topic to send on
-    let topic = GossipTopic::BeaconBlock;
-    let message = PubsubMessage::Block(block.as_ssz_bytes());
+    // send the block via SSZ encoding
+    let messages = vec![PubsubMessage::new(
+        GossipEncoding::SSZ,
+        PubsubData::BeaconBlock(Box::new(block)),
+    )];

     // Publish the block to the p2p network via gossipsub.
-    if let Err(e) = chan.write().try_send(NetworkMessage::Publish {
-        topics: vec![topic.into()],
-        message,
-    }) {
+    if let Err(e) = chan.try_send(NetworkMessage::Publish { messages }) {
         return Err(ApiError::ServerError(format!(
             "Unable to send new block to network: {:?}",
             e
@@ -253,19 +249,51 @@ pub fn publish_beacon_block_to_network<T: BeaconChainTypes + 'static>(
     Ok(())
 }

-pub fn publish_attestation_to_network<T: BeaconChainTypes + 'static>(
-    chan: Arc<RwLock<mpsc::UnboundedSender<NetworkMessage>>>,
-    attestation: Attestation<T::EthSpec>,
+/// Publishes a raw un-aggregated attestation to the network.
+pub fn publish_raw_attestations_to_network<T: BeaconChainTypes + 'static>(
+    mut chan: NetworkChannel<T::EthSpec>,
+    attestations: Vec<Attestation<T::EthSpec>>,
 ) -> Result<(), ApiError> {
-    // create the network topic to send on
-    let topic = GossipTopic::BeaconAttestation;
-    let message = PubsubMessage::Attestation(attestation.as_ssz_bytes());
+    let messages = attestations
+        .into_iter()
+        .map(|attestation| {
+            // create the gossip message to send to the network
+            let subnet_id = attestation.subnet_id();
+            PubsubMessage::new(
+                GossipEncoding::SSZ,
+                PubsubData::Attestation(Box::new((subnet_id, attestation))),
+            )
+        })
+        .collect::<Vec<_>>();

-    // Publish the attestation to the p2p network via gossipsub.
-    if let Err(e) = chan.write().try_send(NetworkMessage::Publish {
-        topics: vec![topic.into()],
-        message,
-    }) {
+    // Publish the attestations to the p2p network via gossipsub.
+    if let Err(e) = chan.try_send(NetworkMessage::Publish { messages }) {
+        return Err(ApiError::ServerError(format!(
+            "Unable to send new attestation to network: {:?}",
+            e
+        )));
+    }
+    Ok(())
+}
+
+/// Publishes an aggregated attestation to the network.
+pub fn publish_aggregate_attestations_to_network<T: BeaconChainTypes + 'static>(
+    mut chan: NetworkChannel<T::EthSpec>,
+    signed_proofs: Vec<SignedAggregateAndProof<T::EthSpec>>,
+) -> Result<(), ApiError> {
+    let messages = signed_proofs
+        .into_iter()
+        .map(|signed_proof| {
+            PubsubMessage::new(
+                GossipEncoding::SSZ,
+                PubsubData::AggregateAndProofAttestation(Box::new(signed_proof)),
+            )
+        })
+        .collect::<Vec<_>>();
+
+    // Publish the attestations to the p2p network via gossipsub.
+    if let Err(e) = chan.try_send(NetworkMessage::Publish { messages }) {
         return Err(ApiError::ServerError(format!(
             "Unable to send new attestation to network: {:?}",
             e
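The `parse_signature`/`parse_root` helpers touched above both follow the same shape: insist on a `0x` prefix, check the length, then decode the hex body. A std-only sketch of that shape (the real functions return `Hash256`/`Signature` wrapped in `ApiError`; here the error is a plain `String`, and the input is assumed to be ASCII hex):

```rust
/// Parse a 32-byte root from a `0x`-prefixed hex string, mirroring the
/// structure of `parse_root` in the diff.
fn parse_root_hex(string: &str) -> Result<[u8; 32], String> {
    const PREFIX: &str = "0x";

    // Reject anything that is not `0x` followed by exactly 64 hex characters.
    let hex = string
        .strip_prefix(PREFIX)
        .ok_or_else(|| format!("Root must have a {} prefix", PREFIX))?;
    if hex.len() != 64 {
        return Err(format!("Root must be 32 bytes, got {} hex chars", hex.len()));
    }

    // Decode two hex characters per output byte.
    let mut out = [0u8; 32];
    for (i, byte) in out.iter_mut().enumerate() {
        *byte = u8::from_str_radix(&hex[2 * i..2 * i + 2], 16)
            .map_err(|e| format!("Invalid hex: {:?}", e))?;
    }
    Ok(out)
}
```

The production code delegates the decoding to the `hex` crate; the checks around it (prefix, then length, then decode) are the part the handlers rely on for good error messages.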


@@ -21,38 +21,32 @@ mod validator;

 use beacon_chain::{BeaconChain, BeaconChainTypes};
 use client_network::NetworkMessage;
-use client_network::Service as NetworkService;
 pub use config::ApiEncodingFormat;
 use error::{ApiError, ApiResult};
 use eth2_config::Eth2Config;
+use eth2_libp2p::NetworkGlobals;
 use hyper::rt::Future;
 use hyper::server::conn::AddrStream;
 use hyper::service::{make_service_fn, service_fn};
 use hyper::{Body, Request, Response, Server};
-use parking_lot::RwLock;
 use slog::{info, warn};
 use std::net::SocketAddr;
 use std::ops::Deref;
 use std::path::PathBuf;
 use std::sync::Arc;
 use tokio::runtime::TaskExecutor;
-use tokio::sync::mpsc;
+use tokio::sync::{mpsc, oneshot};
 use url_query::UrlQuery;

 pub use crate::helpers::parse_pubkey_bytes;
-pub use beacon::{
-    BlockResponse, CanonicalHeadResponse, Committee, HeadBeaconBlock, StateResponse,
-    ValidatorRequest, ValidatorResponse,
-};
 pub use config::Config;
-pub use validator::{ValidatorDutiesRequest, ValidatorDuty};

 pub type BoxFut = Box<dyn Future<Item = Response<Body>, Error = ApiError> + Send>;
-pub type NetworkChannel = Arc<RwLock<mpsc::UnboundedSender<NetworkMessage>>>;
+pub type NetworkChannel<T> = mpsc::UnboundedSender<NetworkMessage<T>>;

 pub struct NetworkInfo<T: BeaconChainTypes> {
-    pub network_service: Arc<NetworkService<T>>,
-    pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
+    pub network_globals: Arc<NetworkGlobals<T::EthSpec>>,
+    pub network_chan: NetworkChannel<T::EthSpec>,
 }

 // Allowing more than 7 arguments.
@@ -66,7 +60,7 @@ pub fn start_server<T: BeaconChainTypes>(
     freezer_db_path: PathBuf,
     eth2_config: Eth2Config,
     log: slog::Logger,
-) -> Result<(exit_future::Signal, SocketAddr), hyper::Error> {
+) -> Result<(oneshot::Sender<()>, SocketAddr), hyper::Error> {
     let inner_log = log.clone();
     let eth2_config = Arc::new(eth2_config);

@@ -75,8 +69,8 @@ pub fn start_server<T: BeaconChainTypes>(
         let beacon_chain = beacon_chain.clone();
         let log = inner_log.clone();
         let eth2_config = eth2_config.clone();
-        let network_service = network_info.network_service.clone();
-        let network_channel = Arc::new(RwLock::new(network_info.network_chan.clone()));
+        let network_globals = network_info.network_globals.clone();
+        let network_channel = network_info.network_chan.clone();
         let db_path = db_path.clone();
         let freezer_db_path = freezer_db_path.clone();

@@ -84,7 +78,7 @@ pub fn start_server<T: BeaconChainTypes>(
             router::route(
                 req,
                 beacon_chain.clone(),
-                network_service.clone(),
+                network_globals.clone(),
                 network_channel.clone(),
                 eth2_config.clone(),
                 log.clone(),
@@ -104,7 +98,7 @@ pub fn start_server<T: BeaconChainTypes>(
     let actual_listen_addr = server.local_addr();

     // Build a channel to kill the HTTP server.
-    let (exit_signal, exit) = exit_future::signal();
+    let (exit_signal, exit) = oneshot::channel();
     let inner_log = log.clone();
     let server_exit = exit.and_then(move |_| {
         info!(inner_log, "HTTP service shutdown");
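The key simplification in this file is the `NetworkChannel` alias: the `Arc<RwLock<mpsc::UnboundedSender<..>>>` wrapper is dropped because a channel sender is already cheap to clone and safe to use from many tasks, so the lock only added contention. A std-only sketch of the idea (the real code uses tokio's unbounded sender, which behaves analogously; `NetworkMessage` here is a stand-in enum):

```rust
use std::sync::mpsc;
use std::thread;

/// Stand-in for the real `NetworkMessage<T>` enum.
#[derive(Debug)]
enum NetworkMessage {
    Publish { messages: Vec<String> },
}

/// The type alias from the diff, minus the now-removed `Arc<RwLock<..>>` wrapper.
type NetworkChannel = mpsc::Sender<NetworkMessage>;

/// Each handler can own its own clone of the sender; no shared lock is required.
fn publish(chan: &NetworkChannel, msg: &str) {
    chan.send(NetworkMessage::Publish {
        messages: vec![msg.to_string()],
    })
    .expect("receiver alive");
}

fn run_demo() -> usize {
    let (tx, rx) = mpsc::channel();
    // Four "handlers" publish concurrently, each through its own clone.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let chan: NetworkChannel = tx.clone();
            thread::spawn(move || publish(&chan, &format!("block-{}", i)))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    drop(tx); // close the channel so the count below terminates
    rx.iter().count()
}
```

The same reasoning motivates swapping `exit_future` for a plain `oneshot` channel: a single-shot sender is a simpler, lock-free shutdown signal.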


@@ -1,6 +1,6 @@
 use crate::error::ApiResult;
 use crate::response_builder::ResponseBuilder;
-use crate::NetworkService;
+use crate::NetworkGlobals;
 use beacon_chain::BeaconChainTypes;
 use eth2_libp2p::{Multiaddr, PeerId};
 use hyper::{Body, Request};
@@ -11,7 +11,7 @@ use std::sync::Arc;
 /// Returns a list of `Multiaddr`, serialized according to their `serde` impl.
 pub fn get_listen_addresses<T: BeaconChainTypes>(
     req: Request<Body>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
 ) -> ApiResult {
     let multiaddresses: Vec<Multiaddr> = network.listen_multiaddrs();
     ResponseBuilder::new(&req)?.body_no_ssz(&multiaddresses)
@@ -22,9 +22,9 @@ pub fn get_listen_addresses<T: BeaconChainTypes>(
 /// Returns the TCP port number in its plain form (which is also valid JSON serialization)
 pub fn get_listen_port<T: BeaconChainTypes>(
     req: Request<Body>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
 ) -> ApiResult {
-    ResponseBuilder::new(&req)?.body(&network.listen_port())
+    ResponseBuilder::new(&req)?.body(&network.listen_port_tcp())
 }

 /// HTTP handler to return the Discv5 ENR from the client's libp2p service.
@@ -32,7 +32,7 @@ pub fn get_listen_port<T: BeaconChainTypes>(
 /// ENR is encoded as base64 string.
 pub fn get_enr<T: BeaconChainTypes>(
     req: Request<Body>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
 ) -> ApiResult {
     ResponseBuilder::new(&req)?.body_no_ssz(
         &network
@@ -47,7 +47,7 @@ pub fn get_enr<T: BeaconChainTypes>(
 /// PeerId is encoded as base58 string.
 pub fn get_peer_id<T: BeaconChainTypes>(
     req: Request<Body>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
 ) -> ApiResult {
     ResponseBuilder::new(&req)?.body_no_ssz(&network.local_peer_id().to_base58())
 }
@@ -55,7 +55,7 @@ pub fn get_peer_id<T: BeaconChainTypes>(
 /// HTTP handler to return the number of peers connected in the client's libp2p service.
 pub fn get_peer_count<T: BeaconChainTypes>(
     req: Request<Body>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
 ) -> ApiResult {
     ResponseBuilder::new(&req)?.body(&network.connected_peers())
 }
@@ -65,11 +65,12 @@ pub fn get_peer_count<T: BeaconChainTypes>(
 /// Peers are presented as a list of `PeerId::to_string()`.
 pub fn get_peer_list<T: BeaconChainTypes>(
     req: Request<Body>,
-    network: Arc<NetworkService<T>>,
+    network: Arc<NetworkGlobals<T::EthSpec>>,
 ) -> ApiResult {
     let connected_peers: Vec<String> = network
-        .connected_peer_set()
-        .iter()
+        .connected_peer_set
+        .read()
+        .keys()
         .map(PeerId::to_string)
         .collect();
     ResponseBuilder::new(&req)?.body_no_ssz(&connected_peers)
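These handlers now read from a shared `NetworkGlobals` struct instead of calling into the network service, which is how the PR "removes the libp2p lock" from the request path: the state is a plain field behind a read-write lock, and handlers only ever take read guards. A std-only sketch of the new `get_peer_list` shape (the real code uses `parking_lot::RwLock`, whose `read()` does not return a `Result`, and `PeerId` keys instead of `String`s):

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Stand-in for `NetworkGlobals`: the peer set is a shared field guarded by
/// a lock, rather than a method on the network service.
struct NetworkGlobals {
    connected_peer_set: RwLock<HashMap<String, ()>>, // peer-id -> metadata
}

/// Mirrors the new `get_peer_list` body: take a read guard, walk the keys.
fn peer_list(network: &Arc<NetworkGlobals>) -> Vec<String> {
    let mut peers: Vec<String> = network
        .connected_peer_set
        .read()
        .expect("lock not poisoned") // std locks return Result; parking_lot's do not
        .keys()
        .cloned()
        .collect();
    peers.sort(); // HashMap iteration order is unspecified
    peers
}
```

Many concurrent HTTP requests can hold read guards simultaneously; only the network thread's inserts and removals need the write guard.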


@@ -3,8 +3,8 @@ use crate::{
     BoxFut, NetworkChannel,
 };
 use beacon_chain::{BeaconChain, BeaconChainTypes};
-use client_network::Service as NetworkService;
 use eth2_config::Eth2Config;
+use eth2_libp2p::NetworkGlobals;
 use futures::{Future, IntoFuture};
 use hyper::{Body, Error, Method, Request, Response};
 use slog::debug;
@@ -25,8 +25,8 @@ where
 pub fn route<T: BeaconChainTypes>(
     req: Request<Body>,
     beacon_chain: Arc<BeaconChain<T>>,
-    network_service: Arc<NetworkService<T>>,
-    network_channel: NetworkChannel,
+    network_globals: Arc<NetworkGlobals<T::EthSpec>>,
+    network_channel: NetworkChannel<T::EthSpec>,
     eth2_config: Arc<Eth2Config>,
     local_log: slog::Logger,
     db_path: PathBuf,
@@ -49,22 +49,22 @@ pub fn route<T: BeaconChainTypes>(

             // Methods for Network
             (&Method::GET, "/network/enr") => {
-                into_boxfut(network::get_enr::<T>(req, network_service))
+                into_boxfut(network::get_enr::<T>(req, network_globals))
             }
             (&Method::GET, "/network/peer_count") => {
-                into_boxfut(network::get_peer_count::<T>(req, network_service))
+                into_boxfut(network::get_peer_count::<T>(req, network_globals))
             }
             (&Method::GET, "/network/peer_id") => {
-                into_boxfut(network::get_peer_id::<T>(req, network_service))
+                into_boxfut(network::get_peer_id::<T>(req, network_globals))
             }
             (&Method::GET, "/network/peers") => {
-                into_boxfut(network::get_peer_list::<T>(req, network_service))
+                into_boxfut(network::get_peer_list::<T>(req, network_globals))
             }
             (&Method::GET, "/network/listen_port") => {
-                into_boxfut(network::get_listen_port::<T>(req, network_service))
+                into_boxfut(network::get_listen_port::<T>(req, network_globals))
             }
             (&Method::GET, "/network/listen_addresses") => {
-                into_boxfut(network::get_listen_addresses::<T>(req, network_service))
+                into_boxfut(network::get_listen_addresses::<T>(req, network_globals))
             }

             // Methods for Beacon Node
@@ -121,6 +121,14 @@ pub fn route<T: BeaconChainTypes>(
                 drop(timer);
                 into_boxfut(response)
             }
+            (&Method::POST, "/validator/subscribe") => {
+                validator::post_validator_subscriptions::<T>(
+                    req,
+                    beacon_chain,
+                    network_channel,
+                    log,
+                )
+            }
             (&Method::GET, "/validator/duties/all") => {
                 into_boxfut(validator::get_all_validator_duties::<T>(req, beacon_chain))
             }
@@ -144,10 +152,22 @@ pub fn route<T: BeaconChainTypes>(
                 drop(timer);
                 into_boxfut(response)
             }
-            (&Method::POST, "/validator/attestation") => {
-                validator::publish_attestation::<T>(req, beacon_chain, network_channel, log)
+            (&Method::GET, "/validator/aggregate_attestation") => {
+                into_boxfut(validator::get_aggregate_attestation::<T>(req, beacon_chain))
+            }
+            (&Method::POST, "/validator/attestations") => {
+                validator::publish_attestations::<T>(req, beacon_chain, network_channel, log)
+            }
+            (&Method::POST, "/validator/aggregate_and_proofs") => {
+                validator::publish_aggregate_and_proofs::<T>(
+                    req,
+                    beacon_chain,
+                    network_channel,
+                    log,
+                )
             }

+            // Methods for consensus
             (&Method::GET, "/consensus/global_votes") => {
                 into_boxfut(consensus::get_vote_count::<T>(req, beacon_chain))
             }


@ -1,47 +1,27 @@
use crate::helpers::{ use crate::helpers::{
check_content_type_for_json, publish_attestation_to_network, publish_beacon_block_to_network, check_content_type_for_json, publish_aggregate_attestations_to_network,
publish_beacon_block_to_network, publish_raw_attestations_to_network,
}; };
use crate::response_builder::ResponseBuilder; use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult, BoxFut, NetworkChannel, UrlQuery}; use crate::{ApiError, ApiResult, BoxFut, NetworkChannel, UrlQuery};
use beacon_chain::{ use beacon_chain::{
AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BlockProcessingOutcome, AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BlockError, StateSkipConfig,
StateSkipConfig,
}; };
use bls::PublicKeyBytes; use bls::PublicKeyBytes;
use futures::{Future, Stream}; use futures::{Future, Stream};
use hyper::{Body, Request}; use hyper::{Body, Request};
use serde::{Deserialize, Serialize}; use network::NetworkMessage;
use rayon::prelude::*;
use rest_types::{ValidatorDutiesRequest, ValidatorDutyBytes, ValidatorSubscription};
use slog::{error, info, warn, Logger}; use slog::{error, info, warn, Logger};
use ssz_derive::{Decode, Encode};
use std::sync::Arc; use std::sync::Arc;
use types::beacon_state::EthSpec; use types::beacon_state::EthSpec;
use types::{ use types::{
Attestation, BeaconState, CommitteeIndex, Epoch, RelativeEpoch, SignedBeaconBlock, Slot, Attestation, BeaconState, Epoch, RelativeEpoch, SignedAggregateAndProof, SignedBeaconBlock,
Slot,
}; };
#[derive(PartialEq, Debug, Serialize, Deserialize, Clone)] /// HTTP Handler to retrieve the duties for a set of validators during a particular epoch. This
pub struct ValidatorDuty {
/// The validator's BLS public key, uniquely identifying them. _48-bytes, hex encoded with 0x prefix, case insensitive._
pub validator_pubkey: PublicKeyBytes,
/// The validator's index in `state.validators`
pub validator_index: Option<usize>,
/// The slot at which the validator must attest.
pub attestation_slot: Option<Slot>,
/// The index of the committee within `slot` of which the validator is a member.
pub attestation_committee_index: Option<CommitteeIndex>,
/// The position of the validator in the committee.
pub attestation_committee_position: Option<usize>,
/// The slots in which a validator must propose a block (can be empty).
pub block_proposal_slots: Vec<Slot>,
}
#[derive(PartialEq, Debug, Serialize, Deserialize, Clone, Encode, Decode)]
pub struct ValidatorDutiesRequest {
pub epoch: Epoch,
pub pubkeys: Vec<PublicKeyBytes>,
}
/// HTTP Handler to retrieve a the duties for a set of validators during a particular epoch. This
/// method allows for collecting bulk sets of validator duties without risking exceeding the max /// method allows for collecting bulk sets of validator duties without risking exceeding the max
/// URL length with query pairs. /// URL length with query pairs.
pub fn post_validator_duties<T: BeaconChainTypes>( pub fn post_validator_duties<T: BeaconChainTypes>(
@ -74,6 +54,79 @@ pub fn post_validator_duties<T: BeaconChainTypes>(
Box::new(future) Box::new(future)
} }
/// HTTP Handler to retrieve subscriptions for a set of validators. This allows the node to
/// organise peer discovery and topic subscription for known validators.
pub fn post_validator_subscriptions<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
mut network_chan: NetworkChannel<T::EthSpec>,
log: Logger,
) -> BoxFut {
try_future!(check_content_type_for_json(&req));
let response_builder = ResponseBuilder::new(&req);
let body = req.into_body();
Box::new(
body.concat2()
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
.and_then(|chunks| {
serde_json::from_slice(&chunks).map_err(|e| {
ApiError::BadRequest(format!(
"Unable to parse JSON into ValidatorSubscriptions: {:?}",
e
))
})
})
.and_then(move |subscriptions: Vec<ValidatorSubscription>| {
let fork = beacon_chain
.wall_clock_state()
.map(|state| state.fork.clone())
.map_err(|e| {
error!(log, "Unable to get current beacon state");
ApiError::ServerError(format!("Error getting current beacon state {:?}", e))
})?;
// verify the signatures in parallel
subscriptions.par_iter().try_for_each(|subscription| {
if let Some(pubkey) =
&beacon_chain.validator_pubkey(subscription.validator_index as usize)?
{
if subscription.verify(
pubkey,
&beacon_chain.spec,
&fork,
T::EthSpec::slots_per_epoch(),
) {
Ok(())
} else {
error!(log, "HTTP RPC sent invalid signatures");
Err(ApiError::ProcessingError(format!(
"Could not verify signatures"
)))
}
} else {
error!(log, "HTTP RPC sent unknown validator");
Err(ApiError::ProcessingError(format!(
"Could not verify signatures"
)))
}
})?;
// subscriptions are verified, send them to the network thread
network_chan
.try_send(NetworkMessage::Subscribe { subscriptions })
.map_err(|e| {
ApiError::ServerError(format!(
"Unable to subscriptions to the network: {:?}",
e
))
})?;
Ok(())
})
.and_then(|_| response_builder?.body_no_ssz(&())),
)
}
/// HTTP Handler to retrieve all validator duties for the given epoch. /// HTTP Handler to retrieve all validator duties for the given epoch.
pub fn get_all_validator_duties<T: BeaconChainTypes>( pub fn get_all_validator_duties<T: BeaconChainTypes>(
req: Request<Body>, req: Request<Body>,
@ -154,7 +207,7 @@ fn return_validator_duties<T: BeaconChainTypes>(
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,
epoch: Epoch, epoch: Epoch,
validator_pubkeys: Vec<PublicKeyBytes>, validator_pubkeys: Vec<PublicKeyBytes>,
) -> Result<Vec<ValidatorDuty>, ApiError> { ) -> Result<Vec<ValidatorDutyBytes>, ApiError> {
let mut state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?; let mut state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;
let relative_epoch = RelativeEpoch::from_epoch(state.current_epoch(), epoch) let relative_epoch = RelativeEpoch::from_epoch(state.current_epoch(), epoch)
@ -189,11 +242,24 @@ fn return_validator_duties<T: BeaconChainTypes>(
validator_pubkeys validator_pubkeys
.into_iter() .into_iter()
.map(|validator_pubkey| { .map(|validator_pubkey| {
if let Some(validator_index) = // The `beacon_chain` can return a validator index that does not exist in all states.
state.get_validator_index(&validator_pubkey).map_err(|e| { // Therefore, we must check to ensure that the validator index is valid for our
ApiError::ServerError(format!("Unable to read pubkey cache: {:?}", e)) // `state`.
})? let validator_index = if let Some(i) = beacon_chain
{ .validator_index(&validator_pubkey)
.map_err(|e| {
ApiError::ServerError(format!("Unable to get validator index: {:?}", e))
})? {
if i < state.validators.len() {
Some(i)
} else {
None
}
} else {
None
};
if let Some(validator_index) = validator_index {
let duties = state let duties = state
.get_attestation_duties(validator_index, relative_epoch) .get_attestation_duties(validator_index, relative_epoch)
.map_err(|e| { .map_err(|e| {
@ -203,28 +269,39 @@ fn return_validator_duties<T: BeaconChainTypes>(
)) ))
})?; })?;
// Obtain the aggregator modulo
let aggregator_modulo = duties.map(|d| {
std::cmp::max(
1,
d.committee_len as u64
/ &beacon_chain.spec.target_aggregators_per_committee,
)
});
let block_proposal_slots = validator_proposers let block_proposal_slots = validator_proposers
.iter() .iter()
.filter(|(i, _slot)| validator_index == *i) .filter(|(i, _slot)| validator_index == *i)
.map(|(_i, slot)| *slot) .map(|(_i, slot)| *slot)
.collect(); .collect();
Ok(ValidatorDuty { Ok(ValidatorDutyBytes {
validator_pubkey, validator_pubkey,
validator_index: Some(validator_index), validator_index: Some(validator_index as u64),
attestation_slot: duties.map(|d| d.slot), attestation_slot: duties.map(|d| d.slot),
attestation_committee_index: duties.map(|d| d.index), attestation_committee_index: duties.map(|d| d.index),
attestation_committee_position: duties.map(|d| d.committee_position), attestation_committee_position: duties.map(|d| d.committee_position),
block_proposal_slots, block_proposal_slots,
aggregator_modulo,
}) })
} else { } else {
Ok(ValidatorDuty { Ok(ValidatorDutyBytes {
validator_pubkey, validator_pubkey,
validator_index: None, validator_index: None,
attestation_slot: None, attestation_slot: None,
attestation_committee_index: None, attestation_committee_index: None,
attestation_committee_position: None, attestation_committee_position: None,
block_proposal_slots: vec![], block_proposal_slots: vec![],
aggregator_modulo: None,
}) })
} }
}) })
@ -264,7 +341,7 @@ pub fn get_new_beacon_block<T: BeaconChainTypes>(
pub fn publish_beacon_block<T: BeaconChainTypes>( pub fn publish_beacon_block<T: BeaconChainTypes>(
req: Request<Body>, req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel, network_chan: NetworkChannel<T::EthSpec>,
log: Logger, log: Logger,
) -> BoxFut { ) -> BoxFut {
try_future!(check_content_type_for_json(&req)); try_future!(check_content_type_for_json(&req));
@ -282,7 +359,7 @@ pub fn publish_beacon_block<T: BeaconChainTypes>(
.and_then(move |block: SignedBeaconBlock<T::EthSpec>| { .and_then(move |block: SignedBeaconBlock<T::EthSpec>| {
let slot = block.slot(); let slot = block.slot();
match beacon_chain.process_block(block.clone()) { match beacon_chain.process_block(block.clone()) {
Ok(BlockProcessingOutcome::Processed { block_root }) => { Ok(block_root) => {
// Block was processed, publish via gossipsub // Block was processed, publish via gossipsub
info!( info!(
log, log,
@ -325,19 +402,7 @@ pub fn publish_beacon_block<T: BeaconChainTypes>(
Ok(()) Ok(())
} }
Ok(outcome) => { Err(BlockError::BeaconChainError(e)) => {
warn!(
log,
"Invalid block from local validator";
"outcome" => format!("{:?}", outcome)
);
Err(ApiError::ProcessingError(format!(
"The SignedBeaconBlock could not be processed and has not been published: {:?}",
outcome
)))
}
Err(e) => {
error!( error!(
log, log,
"Error whilst processing block"; "Error whilst processing block";
@ -349,6 +414,18 @@ pub fn publish_beacon_block<T: BeaconChainTypes>(
e e
))) )))
} }
Err(other) => {
warn!(
log,
"Invalid block from local validator";
"outcome" => format!("{:?}", other)
);
Err(ApiError::ProcessingError(format!(
"The SignedBeaconBlock could not be processed and has not been published: {:?}",
other
)))
}
} }
}) })
.and_then(|_| response_builder?.body_no_ssz(&())) .and_then(|_| response_builder?.body_no_ssz(&()))
@ -372,11 +449,28 @@ pub fn get_new_attestation<T: BeaconChainTypes>(
ResponseBuilder::new(&req)?.body(&attestation) ResponseBuilder::new(&req)?.body(&attestation)
} }
/// HTTP Handler to publish an Attestation, which has been signed by a validator. /// HTTP Handler to retrieve the aggregate attestation for a given slot and committee index.
pub fn publish_attestation<T: BeaconChainTypes>( pub fn get_aggregate_attestation<T: BeaconChainTypes>(
req: Request<Body>, req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel, ) -> ApiResult {
let query = UrlQuery::from_request(&req)?;
let slot = query.slot()?;
let index = query.committee_index()?;
let aggregate_attestation = beacon_chain
.return_aggregate_attestation(slot, index)
.map_err(|e| ApiError::BadRequest(format!("Unable to produce attestation: {:?}", e)))?;
ResponseBuilder::new(&req)?.body(&aggregate_attestation)
}
/// HTTP Handler to publish a list of Attestations, which have been signed by a number of validators.
pub fn publish_attestations<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel<T::EthSpec>,
log: Logger, log: Logger,
) -> BoxFut { ) -> BoxFut {
try_future!(check_content_type_for_json(&req)); try_future!(check_content_type_for_json(&req));
@ -390,13 +484,20 @@ pub fn publish_attestation<T: BeaconChainTypes>(
.and_then(|chunks| { .and_then(|chunks| {
serde_json::from_slice(&chunks.as_slice()).map_err(|e| { serde_json::from_slice(&chunks.as_slice()).map_err(|e| {
ApiError::BadRequest(format!( ApiError::BadRequest(format!(
"Unable to deserialize JSON into a SignedBeaconBlock: {:?}", "Unable to deserialize JSON into a list of attestations: {:?}",
e e
)) ))
}) })
}) })
.and_then(move |attestation: Attestation<T::EthSpec>| { .and_then(move |attestations: Vec<Attestation<T::EthSpec>>| {
match beacon_chain.process_attestation(attestation.clone()) { // Note: This is a new attestation from a validator. We want to process this and
// inform the validator whether the attestation was valid. In doing so, we store
// this un-aggregated raw attestation in the op_pool by default. This is
// sub-optimal as if we have no validators needing to aggregate, these don't need
// to be stored in the op-pool. This is minimal however as the op_pool gets pruned
// every slot
attestations.par_iter().try_for_each(|attestation| {
match beacon_chain.process_attestation(attestation.clone(), Some(true)) {
Ok(AttestationProcessingOutcome::Processed) => { Ok(AttestationProcessingOutcome::Processed) => {
// Block was processed, publish via gossipsub // Block was processed, publish via gossipsub
info!( info!(
@ -407,7 +508,99 @@ pub fn publish_attestation<T: BeaconChainTypes>(
"index" => attestation.data.index, "index" => attestation.data.index,
"slot" => attestation.data.slot, "slot" => attestation.data.slot,
); );
publish_attestation_to_network::<T>(network_chan, attestation) Ok(())
}
Ok(outcome) => {
warn!(
log,
"Invalid attestation from local validator";
"outcome" => format!("{:?}", outcome)
);
Err(ApiError::ProcessingError(format!(
"An Attestation could not be processed and has not been published: {:?}",
outcome
)))
}
Err(e) => {
error!(
log,
"Error whilst processing attestation";
"error" => format!("{:?}", e)
);
Err(ApiError::ServerError(format!(
"Error while processing attestation: {:?}",
e
)))
}
}
})?;
Ok(attestations)
})
.and_then(|attestations| {
publish_raw_attestations_to_network::<T>(network_chan, attestations)
})
.and_then(|_| response_builder?.body_no_ssz(&())),
)
}
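The handler above processes every attestation and only publishes if all of them validate: `try_for_each` stops at the first error. Below is a minimal, sequential sketch of that control flow; the real code uses rayon's `par_iter`, which keeps the same short-circuiting semantics in parallel, and `process` here is a toy validity check, not Lighthouse's `process_attestation`.

```rust
// Toy stand-in for attestation validation: even numbers are "valid".
fn process(att: &u64) -> Result<(), String> {
    if *att % 2 == 0 {
        Ok(())
    } else {
        Err(format!("invalid attestation: {}", att))
    }
}

// Mirrors the handler: validate everything first, publish only on full success.
fn publish_all(atts: &[u64]) -> Result<usize, String> {
    atts.iter().try_for_each(process)?;
    // Only reached when every attestation processed successfully.
    Ok(atts.len())
}

fn main() {
    assert_eq!(publish_all(&[2, 4, 6]), Ok(3));
    assert!(publish_all(&[2, 3, 4]).is_err());
}
```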
/// HTTP Handler to publish a list of SignedAggregateAndProof, signed by validators acting as aggregators.
pub fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
req: Request<Body>,
beacon_chain: Arc<BeaconChain<T>>,
network_chan: NetworkChannel<T::EthSpec>,
log: Logger,
) -> BoxFut {
try_future!(check_content_type_for_json(&req));
let response_builder = ResponseBuilder::new(&req);
Box::new(
req.into_body()
.concat2()
.map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))
.map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
.and_then(|chunks| {
serde_json::from_slice(&chunks.as_slice()).map_err(|e| {
ApiError::BadRequest(format!(
"Unable to deserialize JSON into a list of SignedAggregateAndProof: {:?}",
e
))
})
})
.and_then(move |signed_proofs: Vec<SignedAggregateAndProof<T::EthSpec>>| {
// Verify the signatures for the aggregate and proof and if valid process the
// aggregate
// TODO: Double check speed and logic consistency of handling current fork vs
// validator fork for signatures.
// TODO: More efficient way of getting a fork?
let fork = &beacon_chain.head()?.beacon_state.fork;
signed_proofs.par_iter().try_for_each(|signed_proof| {
let agg_proof = &signed_proof.message;
let validator_pubkey = &beacon_chain.validator_pubkey(agg_proof.aggregator_index as usize)?.ok_or_else(|| {
warn!(
log,
"Unknown validator from local validator client";
);
ApiError::ProcessingError(format!("The validator is not known"))
})?;
if signed_proof.is_valid(validator_pubkey, fork) {
let attestation = &agg_proof.aggregate;
match beacon_chain.process_attestation(attestation.clone(), Some(false)) {
Ok(AttestationProcessingOutcome::Processed) => {
// Attestation was processed, publish via gossipsub
info!(
log,
"Attestation from local validator";
"target" => attestation.data.target.epoch,
"source" => attestation.data.source.epoch,
"index" => attestation.data.index,
"slot" => attestation.data.slot,
);
Ok(())
} }
Ok(outcome) => { Ok(outcome) => {
warn!( warn!(
@ -434,6 +627,21 @@ pub fn publish_attestation<T: BeaconChainTypes>(
))) )))
} }
} }
} else {
error!(
log,
"Invalid AggregateAndProof Signature"
);
Err(ApiError::ServerError(format!(
"Invalid AggregateAndProof Signature"
)))
}
})?;
Ok(signed_proofs)
})
.and_then(move |signed_proofs| {
publish_aggregate_attestations_to_network::<T>(network_chan, signed_proofs)
}) })
.and_then(|_| response_builder?.body_no_ssz(&())), .and_then(|_| response_builder?.body_no_ssz(&())),
) )


@ -6,9 +6,9 @@ use node_test_rig::{
testing_client_config, ClientConfig, ClientGenesis, LocalBeaconNode, testing_client_config, ClientConfig, ClientGenesis, LocalBeaconNode,
}; };
use remote_beacon_node::{ use remote_beacon_node::{
Committee, HeadBeaconBlock, PersistedOperationPool, PublishStatus, ValidatorDuty, Committee, HeadBeaconBlock, PersistedOperationPool, PublishStatus, ValidatorResponse,
ValidatorResponse,
}; };
use rest_types::ValidatorDutyBytes;
use std::convert::TryInto; use std::convert::TryInto;
use std::sync::Arc; use std::sync::Arc;
use types::{ use types::{
@ -141,7 +141,7 @@ fn validator_produce_attestation() {
remote_node remote_node
.http .http
.validator() .validator()
.publish_attestation(attestation.clone()), .publish_attestations(vec![attestation.clone()]),
) )
.expect("should publish attestation"); .expect("should publish attestation");
assert!( assert!(
@ -167,7 +167,7 @@ fn validator_produce_attestation() {
remote_node remote_node
.http .http
.validator() .validator()
.publish_attestation(attestation), .publish_attestations(vec![attestation]),
) )
.expect("should publish attestation"); .expect("should publish attestation");
assert!( assert!(
@ -229,7 +229,7 @@ fn validator_duties() {
} }
fn check_duties<T: BeaconChainTypes>( fn check_duties<T: BeaconChainTypes>(
duties: Vec<ValidatorDuty>, duties: Vec<ValidatorDutyBytes>,
epoch: Epoch, epoch: Epoch,
validators: Vec<PublicKey>, validators: Vec<PublicKey>,
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,


@ -1,7 +1,7 @@
use clap::ArgMatches; use clap::ArgMatches;
use client::{config::DEFAULT_DATADIR, ClientConfig, ClientGenesis, Eth2Config}; use client::{config::DEFAULT_DATADIR, ClientConfig, ClientGenesis, Eth2Config};
use eth2_config::{read_from_file, write_to_file}; use eth2_config::{read_from_file, write_to_file};
use eth2_libp2p::{Enr, Multiaddr}; use eth2_libp2p::{Enr, GossipTopic, Multiaddr};
use eth2_testnet_config::Eth2TestnetConfig; use eth2_testnet_config::Eth2TestnetConfig;
use genesis::recent_genesis_time; use genesis::recent_genesis_time;
use rand::{distributions::Alphanumeric, Rng}; use rand::{distributions::Alphanumeric, Rng};
@ -135,7 +135,12 @@ pub fn get_configs<E: EthSpec>(
} }
if let Some(topics_str) = cli_args.value_of("topics") { if let Some(topics_str) = cli_args.value_of("topics") {
client_config.network.topics = topics_str.split(',').map(|s| s.into()).collect(); let mut topics = Vec::new();
let topic_list = topics_str.split(',').collect::<Vec<_>>();
for topic_str in topic_list {
topics.push(GossipTopic::decode(topic_str)?);
}
client_config.network.topics = topics;
} }
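The topics loop above decodes each comma-separated entry and propagates the first failure with `?`. A self-contained sketch of the same pattern: collecting an iterator of `Result`s short-circuits on the first `Err`, so the explicit loop can also be written as one `collect`. The `decode` here is a hypothetical stand-in that merely rejects empty entries; the real `GossipTopic::decode` lives in `eth2_libp2p` and does more.

```rust
// Hypothetical decoder: fails on empty topic strings.
fn decode(s: &str) -> Result<String, String> {
    if s.is_empty() {
        Err("empty topic".to_string())
    } else {
        Ok(s.to_string())
    }
}

// Equivalent to the diff's `for` loop with `?`: stops at the first bad entry.
fn parse_topics(topics_str: &str) -> Result<Vec<String>, String> {
    topics_str.split(',').map(decode).collect()
}

fn main() {
    assert_eq!(
        parse_topics("beacon_block,beacon_attestation"),
        Ok(vec!["beacon_block".to_string(), "beacon_attestation".to_string()])
    );
    // A trailing comma yields an empty entry, which fails to decode.
    assert!(parse_topics("beacon_block,").is_err());
}
```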
if let Some(discovery_address_str) = cli_args.value_of("discovery-address") { if let Some(discovery_address_str) = cli_args.value_of("discovery-address") {


@ -124,7 +124,7 @@ impl<E: EthSpec> ProductionBeaconNode<E> {
.system_time_slot_clock()? .system_time_slot_clock()?
.websocket_event_handler(client_config.websocket_server.clone())? .websocket_event_handler(client_config.websocket_server.clone())?
.build_beacon_chain()? .build_beacon_chain()?
.libp2p_network(&client_config.network)? .network(&client_config.network)?
.notifier()?; .notifier()?;
let builder = if client_config.rest_api.enabled { let builder = if client_config.rest_api.enabled {


@ -1,6 +1,6 @@
[package] [package]
name = "store" name = "store"
version = "0.1.0" version = "0.2.0"
authors = ["Paul Hauner <paul@paulhauner.com>"] authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018" edition = "2018"


@ -10,7 +10,7 @@ use crate::{
leveldb_store::LevelDB, DBColumn, Error, PartialBeaconState, SimpleStoreItem, Store, StoreItem, leveldb_store::LevelDB, DBColumn, Error, PartialBeaconState, SimpleStoreItem, Store, StoreItem,
}; };
use lru::LruCache; use lru::LruCache;
use parking_lot::{Mutex, RwLock}; use parking_lot::RwLock;
use slog::{debug, trace, warn, Logger}; use slog::{debug, trace, warn, Logger};
use ssz::{Decode, Encode}; use ssz::{Decode, Encode};
use ssz_derive::{Decode, Encode}; use ssz_derive::{Decode, Encode};
@ -22,7 +22,6 @@ use std::convert::TryInto;
use std::marker::PhantomData; use std::marker::PhantomData;
use std::path::Path; use std::path::Path;
use std::sync::Arc; use std::sync::Arc;
use types::beacon_state::CloneConfig;
use types::*; use types::*;
/// 32-byte key for accessing the `split` of the freezer DB. /// 32-byte key for accessing the `split` of the freezer DB.
@ -46,9 +45,7 @@ pub struct HotColdDB<E: EthSpec> {
/// The hot database also contains all blocks. /// The hot database also contains all blocks.
pub(crate) hot_db: LevelDB<E>, pub(crate) hot_db: LevelDB<E>,
/// LRU cache of deserialized blocks. Updated whenever a block is loaded. /// LRU cache of deserialized blocks. Updated whenever a block is loaded.
block_cache: Mutex<LruCache<Hash256, SignedBeaconBlock<E>>>, block_cache: RwLock<LruCache<Hash256, SignedBeaconBlock<E>>>,
/// LRU cache of deserialized states. Updated whenever a state is loaded.
state_cache: Mutex<LruCache<Hash256, BeaconState<E>>>,
/// Chain spec. /// Chain spec.
spec: ChainSpec, spec: ChainSpec,
/// Logger. /// Logger.
@ -112,7 +109,7 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
self.put(block_root, &block)?; self.put(block_root, &block)?;
// Update cache. // Update cache.
self.block_cache.lock().put(*block_root, block); self.block_cache.write().put(*block_root, block);
Ok(()) Ok(())
} }
@ -122,7 +119,7 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
metrics::inc_counter(&metrics::BEACON_BLOCK_GET_COUNT); metrics::inc_counter(&metrics::BEACON_BLOCK_GET_COUNT);
// Check the cache. // Check the cache.
if let Some(block) = self.block_cache.lock().get(block_root) { if let Some(block) = self.block_cache.write().get(block_root) {
metrics::inc_counter(&metrics::BEACON_BLOCK_CACHE_HIT_COUNT); metrics::inc_counter(&metrics::BEACON_BLOCK_CACHE_HIT_COUNT);
return Ok(Some(block.clone())); return Ok(Some(block.clone()));
} }
@ -131,7 +128,7 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
match self.get::<SignedBeaconBlock<E>>(block_root)? { match self.get::<SignedBeaconBlock<E>>(block_root)? {
Some(block) => { Some(block) => {
// Add to cache. // Add to cache.
self.block_cache.lock().put(*block_root, block.clone()); self.block_cache.write().put(*block_root, block.clone());
Ok(Some(block)) Ok(Some(block))
} }
None => Ok(None), None => Ok(None),
@ -140,12 +137,12 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
/// Delete a block from the store and the block cache. /// Delete a block from the store and the block cache.
fn delete_block(&self, block_root: &Hash256) -> Result<(), Error> { fn delete_block(&self, block_root: &Hash256) -> Result<(), Error> {
self.block_cache.lock().pop(block_root); self.block_cache.write().pop(block_root);
self.delete::<SignedBeaconBlock<E>>(block_root) self.delete::<SignedBeaconBlock<E>>(block_root)
} }
/// Store a state in the store. /// Store a state in the store.
fn put_state(&self, state_root: &Hash256, state: BeaconState<E>) -> Result<(), Error> { fn put_state(&self, state_root: &Hash256, state: &BeaconState<E>) -> Result<(), Error> {
if state.slot < self.get_split_slot() { if state.slot < self.get_split_slot() {
self.store_cold_state(state_root, &state) self.store_cold_state(state_root, &state)
} else { } else {
@ -159,7 +156,7 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
state_root: &Hash256, state_root: &Hash256,
slot: Option<Slot>, slot: Option<Slot>,
) -> Result<Option<BeaconState<E>>, Error> { ) -> Result<Option<BeaconState<E>>, Error> {
self.get_state_with(state_root, slot, CloneConfig::all()) self.get_state_with(state_root, slot)
} }
/// Get a state from the store. /// Get a state from the store.
@ -169,7 +166,6 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
&self, &self,
state_root: &Hash256, state_root: &Hash256,
slot: Option<Slot>, slot: Option<Slot>,
clone_config: CloneConfig,
) -> Result<Option<BeaconState<E>>, Error> { ) -> Result<Option<BeaconState<E>>, Error> {
metrics::inc_counter(&metrics::BEACON_STATE_GET_COUNT); metrics::inc_counter(&metrics::BEACON_STATE_GET_COUNT);
@ -177,10 +173,10 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
if slot < self.get_split_slot() { if slot < self.get_split_slot() {
self.load_cold_state_by_slot(slot).map(Some) self.load_cold_state_by_slot(slot).map(Some)
} else { } else {
self.load_hot_state(state_root, clone_config) self.load_hot_state(state_root)
} }
} else { } else {
match self.load_hot_state(state_root, clone_config)? { match self.load_hot_state(state_root)? {
Some(state) => Ok(Some(state)), Some(state) => Ok(Some(state)),
None => self.load_cold_state(state_root), None => self.load_cold_state(state_root),
} }
@ -204,9 +200,6 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
.key_delete(DBColumn::BeaconState.into(), state_root.as_bytes())?; .key_delete(DBColumn::BeaconState.into(), state_root.as_bytes())?;
} }
// Delete from the cache.
self.state_cache.lock().pop(state_root);
Ok(()) Ok(())
} }
@ -309,10 +302,7 @@ impl<E: EthSpec> Store<E> for HotColdDB<E> {
{ {
// NOTE: minor inefficiency here because we load an unnecessary hot state summary // NOTE: minor inefficiency here because we load an unnecessary hot state summary
let state = self let state = self
.load_hot_state( .load_hot_state(&epoch_boundary_state_root)?
&epoch_boundary_state_root,
CloneConfig::committee_caches_only(),
)?
.ok_or_else(|| { .ok_or_else(|| {
HotColdDBError::MissingEpochBoundaryState(epoch_boundary_state_root) HotColdDBError::MissingEpochBoundaryState(epoch_boundary_state_root)
})?; })?;
@ -348,8 +338,7 @@ impl<E: EthSpec> HotColdDB<E> {
split: RwLock::new(Split::default()), split: RwLock::new(Split::default()),
cold_db: LevelDB::open(cold_path)?, cold_db: LevelDB::open(cold_path)?,
hot_db: LevelDB::open(hot_path)?, hot_db: LevelDB::open(hot_path)?,
block_cache: Mutex::new(LruCache::new(config.block_cache_size)), block_cache: RwLock::new(LruCache::new(config.block_cache_size)),
state_cache: Mutex::new(LruCache::new(config.state_cache_size)),
config, config,
spec, spec,
log, log,
@ -371,7 +360,7 @@ impl<E: EthSpec> HotColdDB<E> {
pub fn store_hot_state( pub fn store_hot_state(
&self, &self,
state_root: &Hash256, state_root: &Hash256,
state: BeaconState<E>, state: &BeaconState<E>,
) -> Result<(), Error> { ) -> Result<(), Error> {
// On the epoch boundary, store the full state. // On the epoch boundary, store the full state.
if state.slot % E::slots_per_epoch() == 0 { if state.slot % E::slots_per_epoch() == 0 {
@ -387,10 +376,7 @@ impl<E: EthSpec> HotColdDB<E> {
// Store a summary of the state. // Store a summary of the state.
// We store one even for the epoch boundary states, as we may need their slots // We store one even for the epoch boundary states, as we may need their slots
// when doing a look up by state root. // when doing a look up by state root.
self.put_state_summary(state_root, HotStateSummary::new(state_root, &state)?)?; self.put_state_summary(state_root, HotStateSummary::new(state_root, state)?)?;
// Store the state in the cache.
self.state_cache.lock().put(*state_root, state);
Ok(()) Ok(())
} }
@ -398,24 +384,9 @@ impl<E: EthSpec> HotColdDB<E> {
/// Load a post-finalization state from the hot database. /// Load a post-finalization state from the hot database.
/// ///
/// Will replay blocks from the nearest epoch boundary. /// Will replay blocks from the nearest epoch boundary.
pub fn load_hot_state( pub fn load_hot_state(&self, state_root: &Hash256) -> Result<Option<BeaconState<E>>, Error> {
&self,
state_root: &Hash256,
clone_config: CloneConfig,
) -> Result<Option<BeaconState<E>>, Error> {
metrics::inc_counter(&metrics::BEACON_STATE_HOT_GET_COUNT); metrics::inc_counter(&metrics::BEACON_STATE_HOT_GET_COUNT);
// Check the cache.
if let Some(state) = self.state_cache.lock().get(state_root) {
metrics::inc_counter(&metrics::BEACON_STATE_CACHE_HIT_COUNT);
let timer = metrics::start_timer(&metrics::BEACON_STATE_CACHE_CLONE_TIME);
let state = state.clone_with(clone_config);
metrics::stop_timer(timer);
return Ok(Some(state));
}
if let Some(HotStateSummary { if let Some(HotStateSummary {
slot, slot,
latest_block_root, latest_block_root,
@ -439,9 +410,6 @@ impl<E: EthSpec> HotColdDB<E> {
self.replay_blocks(boundary_state, blocks, slot)? self.replay_blocks(boundary_state, blocks, slot)?
}; };
// Update the LRU cache.
self.state_cache.lock().put(*state_root, state.clone());
Ok(Some(state)) Ok(Some(state))
} else { } else {
Ok(None) Ok(None)
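A design note on the `Mutex` → `RwLock` swap above: even the read path calls `block_cache.write().get(...)`, because an LRU `get` is not a pure read — it must move the entry to the front of the recency order, so it takes `&mut self` and needs the write half of the lock. The toy cache below (a sketch only; the real code uses the `lru` crate's `LruCache`) mimics just that behaviour.

```rust
use std::collections::VecDeque;
use std::sync::RwLock;

// Minimal LRU over (key, value) pairs; front of the deque = most recent.
struct TinyLru {
    cap: usize,
    entries: VecDeque<(u64, u64)>,
}

impl TinyLru {
    fn new(cap: usize) -> Self {
        TinyLru { cap, entries: VecDeque::new() }
    }

    fn put(&mut self, key: u64, value: u64) {
        self.entries.retain(|(k, _)| *k != key);
        self.entries.push_front((key, value));
        if self.entries.len() > self.cap {
            self.entries.pop_back();
        }
    }

    // `get` takes `&mut self`: it reorders entries as a side effect,
    // which is why a shared `read()` guard would not suffice.
    fn get(&mut self, key: u64) -> Option<u64> {
        let pos = self.entries.iter().position(|(k, _)| *k == key)?;
        let entry = self.entries.remove(pos)?;
        self.entries.push_front(entry);
        Some(entry.1)
    }
}

fn main() {
    let cache = RwLock::new(TinyLru::new(2));
    cache.write().unwrap().put(1, 10);
    cache.write().unwrap().put(2, 20);
    // Reading key 1 refreshes it, so key 2 becomes the eviction candidate.
    assert_eq!(cache.write().unwrap().get(1), Some(10));
    cache.write().unwrap().put(3, 30);
    assert_eq!(cache.write().unwrap().get(2), None);
    assert_eq!(cache.write().unwrap().get(1), Some(10));
}
```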


@ -345,7 +345,7 @@ mod test {
let state_a_root = hashes.next().unwrap(); let state_a_root = hashes.next().unwrap();
state_b.state_roots[0] = state_a_root; state_b.state_roots[0] = state_a_root;
store.put_state(&state_a_root, state_a).unwrap(); store.put_state(&state_a_root, &state_a).unwrap();
let iter = BlockRootsIterator::new(store, &state_b); let iter = BlockRootsIterator::new(store, &state_b);
@ -393,8 +393,8 @@ mod test {
let state_a_root = Hash256::from_low_u64_be(slots_per_historical_root as u64); let state_a_root = Hash256::from_low_u64_be(slots_per_historical_root as u64);
let state_b_root = Hash256::from_low_u64_be(slots_per_historical_root as u64 * 2); let state_b_root = Hash256::from_low_u64_be(slots_per_historical_root as u64 * 2);
store.put_state(&state_a_root, state_a).unwrap(); store.put_state(&state_a_root, &state_a).unwrap();
store.put_state(&state_b_root, state_b.clone()).unwrap(); store.put_state(&state_b_root, &state_b).unwrap();
let iter = StateRootsIterator::new(store, &state_b); let iter = StateRootsIterator::new(store, &state_b);


@ -123,7 +123,7 @@ impl<E: EthSpec> Store<E> for LevelDB<E> {
} }
/// Store a state in the store. /// Store a state in the store.
fn put_state(&self, state_root: &Hash256, state: BeaconState<E>) -> Result<(), Error> { fn put_state(&self, state_root: &Hash256, state: &BeaconState<E>) -> Result<(), Error> {
store_full_state(self, state_root, &state) store_full_state(self, state_root, &state)
} }


@ -38,7 +38,6 @@ pub use errors::Error;
pub use impls::beacon_state::StorageContainer as BeaconStateStorageContainer; pub use impls::beacon_state::StorageContainer as BeaconStateStorageContainer;
pub use metrics::scrape_for_metrics; pub use metrics::scrape_for_metrics;
pub use state_batch::StateBatch; pub use state_batch::StateBatch;
pub use types::beacon_state::CloneConfig;
pub use types::*; pub use types::*;
/// An object capable of storing and retrieving objects implementing `StoreItem`. /// An object capable of storing and retrieving objects implementing `StoreItem`.
@ -97,7 +96,7 @@ pub trait Store<E: EthSpec>: Sync + Send + Sized + 'static {
} }
/// Store a state in the store. /// Store a state in the store.
fn put_state(&self, state_root: &Hash256, state: BeaconState<E>) -> Result<(), Error>; fn put_state(&self, state_root: &Hash256, state: &BeaconState<E>) -> Result<(), Error>;
/// Store a state summary in the store. /// Store a state summary in the store.
// NOTE: this is a hack for the HotColdDb, we could consider splitting this // NOTE: this is a hack for the HotColdDb, we could consider splitting this
@ -122,7 +121,6 @@ pub trait Store<E: EthSpec>: Sync + Send + Sized + 'static {
&self, &self,
state_root: &Hash256, state_root: &Hash256,
slot: Option<Slot>, slot: Option<Slot>,
_clone_config: CloneConfig,
) -> Result<Option<BeaconState<E>>, Error> { ) -> Result<Option<BeaconState<E>>, Error> {
// Default impl ignores config. Overridden in `HotColdDb`. // Default impl ignores config. Overridden in `HotColdDb`.
self.get_state(state_root, slot) self.get_state(state_root, slot)


@ -76,7 +76,7 @@ impl<E: EthSpec> Store<E> for MemoryStore<E> {
} }
/// Store a state in the store. /// Store a state in the store.
fn put_state(&self, state_root: &Hash256, state: BeaconState<E>) -> Result<(), Error> { fn put_state(&self, state_root: &Hash256, state: &BeaconState<E>) -> Result<(), Error> {
store_full_state(self, state_root, &state) store_full_state(self, state_root, &state)
} }


@ -38,7 +38,7 @@ impl<E: EthSpec> StateBatch<E> {
/// May fail to write the full batch if any of the items error (i.e. not atomic!) /// May fail to write the full batch if any of the items error (i.e. not atomic!)
pub fn commit<S: Store<E>>(self, store: &S) -> Result<(), Error> { pub fn commit<S: Store<E>>(self, store: &S) -> Result<(), Error> {
self.items.into_iter().try_for_each(|item| match item { self.items.into_iter().try_for_each(|item| match item {
BatchItem::Full(state_root, state) => store.put_state(&state_root, state), BatchItem::Full(state_root, state) => store.put_state(&state_root, &state),
BatchItem::Summary(state_root, summary) => { BatchItem::Summary(state_root, summary) => {
store.put_state_summary(&state_root, summary) store.put_state_summary(&state_root, summary)
} }


@ -0,0 +1,14 @@
[package]
name = "timer"
version = "0.2.0"
authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"
[dependencies]
beacon_chain = { path = "../beacon_chain" }
types = { path = "../../eth2/types" }
slot_clock = { path = "../../eth2/utils/slot_clock" }
tokio = "0.1.22"
slog = "2.5.2"
parking_lot = "0.10.0"
futures = "0.1.29"


@ -0,0 +1,97 @@
//! A timer service for the beacon node.
//!
//! This service allows task execution on the beacon node for various functionality.
use beacon_chain::{BeaconChain, BeaconChainTypes};
use futures::prelude::*;
use slog::warn;
use slot_clock::SlotClock;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::runtime::TaskExecutor;
use tokio::timer::Interval;
use types::EthSpec;
/// A collection of timers that can execute actions on the beacon node.
///
/// This currently only has a per-slot timer, although others may be added in the future.
struct Timer<T: BeaconChainTypes> {
/// Beacon chain associated.
beacon_chain: Arc<BeaconChain<T>>,
/// A timer that fires every slot.
per_slot_timer: Interval,
/// The logger for the timer.
log: slog::Logger,
}
impl<T: BeaconChainTypes> Timer<T> {
pub fn new(
beacon_chain: Arc<BeaconChain<T>>,
milliseconds_per_slot: u64,
log: slog::Logger,
) -> Result<Self, &'static str> {
let duration_to_next_slot = beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| "timer unable to determine time to next slot")?;
let slot_duration = Duration::from_millis(milliseconds_per_slot);
// A per-slot timer
let start_instant = Instant::now() + duration_to_next_slot;
let per_slot_timer = Interval::new(start_instant, slot_duration);
Ok(Timer {
beacon_chain,
per_slot_timer,
log,
})
}
/// Tasks that occur on a per-slot basis.
pub fn per_slot_task(&self) {
self.beacon_chain.per_slot_task();
}
pub fn per_epoch_task(&self) {
self.beacon_chain.per_epoch_task();
}
}
/// Spawns a timer service which periodically executes tasks for the beacon chain
pub fn spawn<T: BeaconChainTypes>(
executor: &TaskExecutor,
beacon_chain: Arc<BeaconChain<T>>,
milliseconds_per_slot: u64,
log: slog::Logger,
) -> Result<tokio::sync::oneshot::Sender<()>, &'static str> {
//let thread_log = log.clone();
let mut timer = Timer::new(beacon_chain, milliseconds_per_slot, log)?;
let (exit_signal, mut exit) = tokio::sync::oneshot::channel();
executor.spawn(futures::future::poll_fn(move || -> Result<_, ()> {
if let Ok(Async::Ready(_)) | Err(_) = exit.poll() {
// timer is terminating, end the task
return Ok(Async::Ready(()));
}
while let Async::Ready(_) = timer
.per_slot_timer
.poll()
.map_err(|e| warn!(timer.log, "Per slot timer error"; "error" => format!("{:?}", e)))?
{
timer.per_slot_task();
match timer
.beacon_chain
.slot_clock
.now()
.map(|slot| (slot % T::EthSpec::slots_per_epoch()).as_u64())
{
Some(0) => timer.per_epoch_task(),
_ => {}
}
}
Ok(Async::NotReady)
}));
Ok(exit_signal)
}
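The timer above aligns its first tick to `duration_to_next_slot` and then fires every `slot_duration`. The slot arithmetic it relies on can be sketched with std types alone: given the time elapsed since genesis and the slot duration, compute the current slot and the delay to the next slot boundary. The function names below are illustrative, not Lighthouse's `SlotClock` API.

```rust
use std::time::Duration;

// Current slot = elapsed time since genesis, integer-divided by slot length.
fn current_slot(since_genesis: Duration, slot_duration: Duration) -> u64 {
    (since_genesis.as_millis() / slot_duration.as_millis()) as u64
}

// Delay until the next slot boundary: the unconsumed remainder of this slot.
fn duration_to_next_slot(since_genesis: Duration, slot_duration: Duration) -> Duration {
    let rem = since_genesis.as_millis() % slot_duration.as_millis();
    Duration::from_millis((slot_duration.as_millis() - rem) as u64)
}

fn main() {
    let slot = Duration::from_millis(12_000);
    // 3.5 slots after genesis: we are in slot 3, with half a slot to go.
    let t = Duration::from_millis(42_000);
    assert_eq!(current_slot(t, slot), 3);
    assert_eq!(duration_to_next_slot(t, slot), Duration::from_millis(6_000));
}
```

The per-epoch branch in the poll loop is the same idea one level up: the epoch task fires whenever `slot % slots_per_epoch == 0`.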


@ -1,6 +1,6 @@
[package] [package]
name = "version" name = "version"
version = "0.1.0" version = "0.2.0"
authors = ["Age Manning <Age@AgeManning.com>"] authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018" edition = "2018"


@ -1,13 +1,12 @@
[package] [package]
name = "websocket_server" name = "websocket_server"
version = "0.1.0" version = "0.2.0"
authors = ["Paul Hauner <paul@paulhauner.com>"] authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018" edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies] [dependencies]
exit-future = "0.1.4"
futures = "0.1.29" futures = "0.1.29"
serde = "1.0.102" serde = "1.0.102"
serde_derive = "1.0.102" serde_derive = "1.0.102"


@ -40,7 +40,14 @@ pub fn start_server<T: EthSpec>(
config: &Config, config: &Config,
executor: &TaskExecutor, executor: &TaskExecutor,
log: &Logger, log: &Logger,
) -> Result<(WebSocketSender<T>, exit_future::Signal, SocketAddr), String> { ) -> Result<
(
WebSocketSender<T>,
tokio::sync::oneshot::Sender<()>,
SocketAddr,
),
String,
> {
let server_string = format!("{}:{}", config.listen_address, config.port); let server_string = format!("{}:{}", config.listen_address, config.port);
// Create a server that simply ignores any incoming messages. // Create a server that simply ignores any incoming messages.
@ -64,12 +71,13 @@ pub fn start_server<T: EthSpec>(
let broadcaster = server.broadcaster(); let broadcaster = server.broadcaster();
// Produce a signal/channel that can gracefully shutdown the websocket server. // Produce a signal/channel that can gracefully shutdown the websocket server.
let exit_signal = { let exit_channel = {
let (exit_signal, exit) = exit_future::signal(); let (exit_channel, exit) = tokio::sync::oneshot::channel();
let log_inner = log.clone(); let log_inner = log.clone();
let broadcaster_inner = server.broadcaster(); let broadcaster_inner = server.broadcaster();
let exit_future = exit.and_then(move |_| { let exit_future = exit
.and_then(move |_| {
if let Err(e) = broadcaster_inner.shutdown() { if let Err(e) = broadcaster_inner.shutdown() {
warn!( warn!(
log_inner, log_inner,
@ -80,13 +88,14 @@ pub fn start_server<T: EthSpec>(
info!(log_inner, "Websocket server shutdown"); info!(log_inner, "Websocket server shutdown");
} }
Ok(()) Ok(())
}); })
.map_err(|_| ());
// Place a future on the executor that will shutdown the websocket server when the // Place a future on the executor that will shutdown the websocket server when the
// application exits. // application exits.
executor.spawn(exit_future); executor.spawn(exit_future);
exit_signal exit_channel
}; };
let log_inner = log.clone(); let log_inner = log.clone();
@ -118,7 +127,7 @@ pub fn start_server<T: EthSpec>(
sender: Some(broadcaster), sender: Some(broadcaster),
_phantom: PhantomData, _phantom: PhantomData,
}, },
exit_signal, exit_channel,
actual_listen_addr, actual_listen_addr,
)) ))
} }
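The change above swaps `exit_future::Signal` for a tokio oneshot channel: the server's exit future resolves when the sender half fires (or is dropped), and the shutdown logic runs in `and_then`. The same shutdown pattern can be sketched with std's mpsc channel and a thread, standing in for the futures 0.1 machinery.

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a "server" that runs until the exit sender fires or is dropped,
// mirroring the websocket server's oneshot-driven shutdown.
fn spawn_server() -> (mpsc::Sender<()>, thread::JoinHandle<&'static str>) {
    let (exit_tx, exit_rx) = mpsc::channel::<()>();
    let handle = thread::spawn(move || {
        // Blocks until a signal arrives or the sender is dropped; either way
        // the server shuts down, like `exit.and_then(...)` in the diff.
        let _ = exit_rx.recv();
        "websocket server shutdown"
    });
    (exit_tx, handle)
}

fn main() {
    let (exit_tx, handle) = spawn_server();
    exit_tx.send(()).unwrap();
    assert_eq!(handle.join().unwrap(), "websocket server shutdown");
}
```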


@ -5,16 +5,23 @@ client to connect to the beacon node and produce blocks and attestations.
## Endpoints ## Endpoints
HTTP Path | Description | HTTP Path | HTTP Method | Description |
| --- | -- | | --- | --- | --- |
[`/validator/duties`](#validatorduties) | Provides block and attestation production information for validators. [`/validator/duties`](#validatorduties) | GET | Provides block and attestation production information for validators.
[`/validator/duties/all`](#validatordutiesall) | Provides block and attestation production information for all validators. [`/validator/duties/all`](#validatordutiesall) | GET | Provides block and attestation production information for all validators.
[`/validator/duties/active`](#validatordutiesactive) | Provides block and attestation production information for all active validators. [`/validator/duties/active`](#validatordutiesactive) | GET | Provides block and attestation production information for all active validators.
[`/validator/block`](#validatorblock) | Produces a `BeaconBlock` object from current state. [`/validator/block`](#validatorblockget) | GET | Retrieves the current beacon block for the validator to publish.
[`/validator/attestation`](#validatorattestation) | Produces an unsigned `Attestation` object from current state. [`/validator/attestation`](#validatorattestation) | GET | Retrieves the current best attestation for a validator to publish.
[`/validator/block`](#validatorblock) | Processes a `SignedBeaconBlock` object and publishes it to the network. [`/validator/block`](#validatorblockpost) | POST | Publishes a signed block to the network.
[`/validator/attestation`](#validatorattestation) | Processes a signed `Attestation` and publishes it to the network. [`/validator/attestations`](#validatorattestations) | POST | Publishes a list of raw unaggregated attestations to their appropriate subnets.
[`/validator/aggregate_attestation`](#validatoraggregateattestation) | GET | Gets an aggregate attestation for validators to sign and publish.
[`/validator/aggregate_attestations`](#validatoraggregateattestation) | POST | Publishes a list of aggregated attestations for validators who are aggregators.
[`/validator/subscribe`](#validatorsubscribe) | POST | Subscribes a list of validators to the beacon node for a particular duty/slot.
## `/validator/duties` ## `/validator/duties`
@ -81,7 +88,8 @@ _Note: for demonstration purposes the second pubkey is some unknown pubkey._
     "attestation_slot": 38511,
     "attestation_committee_index": 3,
     "attestation_committee_position": 39,
-    "block_proposal_slots": []
+    "block_proposal_slots": [],
+    "aggregator_modulo": 5
   },
   {
     "validator_pubkey": "0x42f87bc7c8fa10408425bbeeeb3dc3874242b4bd92f57775b60b39142426f9ec80b273a64269332d97bdb7d93ae05a42",
@@ -90,6 +98,7 @@ _Note: for demonstration purposes the second pubkey is some unknown pubkey._
     "attestation_committee_index": null,
     "attestation_committee_position": null,
-    "block_proposal_slots": []
+    "block_proposal_slots": [],
+    "aggregator_modulo": null
   }
 ]
```
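The new `aggregator_modulo` value is what the validator client uses to decide whether it must act as an aggregator for a slot. A hedged sketch of that decision, assuming the eth2 spec's `is_aggregator` rule (low eight bytes of the hash of the slot signature, modulo `aggregator_modulo`); the function name is illustrative, and the 32-byte digest is taken as input here where a real client would hash its BLS slot signature:

```rust
// Hedged sketch (not the actual client code): a validator is an aggregator
// when the low 8 bytes of the hash of its slot signature, read as a
// little-endian u64, are 0 modulo `aggregator_modulo`.
fn is_aggregator(slot_signature_digest: &[u8; 32], aggregator_modulo: u64) -> bool {
    let mut low = [0u8; 8];
    low.copy_from_slice(&slot_signature_digest[..8]);
    u64::from_le_bytes(low) % aggregator_modulo == 0
}

fn main() {
    // With modulo 1 every validator aggregates; larger moduli select fewer.
    let digest = [0u8; 32];
    assert!(is_aggregator(&digest, 1));
    assert!(is_aggregator(&digest, 5));

    let mut other = [0u8; 32];
    other[0] = 3; // low u64 is 3, and 3 % 5 != 0
    assert!(!is_aggregator(&other, 5));
}
```

This is why `aggregator_modulo` is returned per-duty: the check depends only on data the validator client already holds.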


@@ -1,6 +1,6 @@
 [package]
 name = "operation_pool"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Michael Sproul <michael@sigmaprime.io>"]
 edition = "2018"


@@ -10,8 +10,8 @@ use attestation_id::AttestationId;
 use max_cover::maximum_cover;
 use parking_lot::RwLock;
 use state_processing::per_block_processing::errors::{
-    AttestationValidationError, AttesterSlashingValidationError, ExitValidationError,
-    ProposerSlashingValidationError,
+    AttestationInvalid, AttestationValidationError, AttesterSlashingValidationError,
+    ExitValidationError, ProposerSlashingValidationError,
 };
 use state_processing::per_block_processing::{
     get_slashable_indices_modular, verify_attestation_for_block_inclusion,
@@ -22,25 +22,43 @@ use std::collections::{hash_map, HashMap, HashSet};
 use std::marker::PhantomData;
 use types::{
     typenum::Unsigned, Attestation, AttesterSlashing, BeaconState, BeaconStateError, ChainSpec,
-    EthSpec, Fork, ProposerSlashing, RelativeEpoch, SignedVoluntaryExit, Validator,
+    CommitteeIndex, Epoch, EthSpec, Fork, ProposerSlashing, RelativeEpoch, SignedVoluntaryExit,
+    Slot, Validator,
 };

+/// The number of extra slots we keep shard subnet attestations in the operation pool for. A value
+/// of 0 means we remove them as soon as their slot ends.
+const ATTESTATION_SUBNET_SLOT_DURATION: u64 = 1;
 #[derive(Default, Debug)]
 pub struct OperationPool<T: EthSpec + Default> {
-    /// Map from attestation ID (see below) to vectors of attestations.
-    attestations: RwLock<HashMap<AttestationId, Vec<Attestation<T>>>>,
+    /// Map from attestation ID (see `attestation_id`) to vectors of attestations.
+    ///
+    /// These are collected from the aggregate channel. They should already be aggregated, but we
+    /// check that the signer sets are disjoint in the unlikely event further aggregation is
+    /// possible.
+    aggregate_attestations: RwLock<HashMap<AttestationId, Vec<Attestation<T>>>>,
+    /// A collection of aggregated attestations for a particular slot and committee index.
+    ///
+    /// Un-aggregated attestations are collected on a shard subnet. If a connected validator is
+    /// required to aggregate them, they are aggregated and stored here until the validator is
+    /// required to publish the aggregate attestation.
+    /// Attestations are segregated by `(slot, committee_index)`, then by `AttestationId`.
+    committee_attestations:
+        RwLock<HashMap<(Slot, CommitteeIndex), HashMap<AttestationId, Attestation<T>>>>,
     /// Map from two attestation IDs to a slashing for those IDs.
     attester_slashings: RwLock<HashMap<(AttestationId, AttestationId), AttesterSlashing<T>>>,
     /// Map from proposer index to slashing.
     proposer_slashings: RwLock<HashMap<u64, ProposerSlashing>>,
     /// Map from exiting validator to their exit data.
     voluntary_exits: RwLock<HashMap<u64, SignedVoluntaryExit>>,
+    /// Marker to pin the generics.
     _phantom: PhantomData<T>,
 }

 #[derive(Debug, PartialEq)]
 pub enum OpPoolError {
     GetAttestationsTotalBalanceError(BeaconStateError),
+    NoAttestationsForSlotCommittee,
 }
 impl<T: EthSpec> OperationPool<T> {
@@ -49,12 +67,13 @@ impl<T: EthSpec> OperationPool<T> {
         Self::default()
     }

-    /// Insert an attestation into the pool, aggregating it with existing attestations if possible.
+    /// Insert an attestation from the aggregate channel into the pool, checking whether the
+    /// aggregate can be aggregated further.
     ///
     /// ## Note
     ///
     /// This function assumes the given `attestation` is valid.
-    pub fn insert_attestation(
+    pub fn insert_aggregate_attestation(
         &self,
         attestation: Attestation<T>,
         fork: &Fork,
@@ -63,7 +82,7 @@ impl<T: EthSpec> OperationPool<T> {
         let id = AttestationId::from_data(&attestation.data, fork, spec);

         // Take a write lock on the attestations map.
-        let mut attestations = self.attestations.write();
+        let mut attestations = self.aggregate_attestations.write();

         let existing_attestations = match attestations.entry(id) {
             hash_map::Entry::Vacant(entry) => {
@@ -90,9 +109,90 @@ impl<T: EthSpec> OperationPool<T> {
         Ok(())
     }
-    /// Total number of attestations in the pool, including attestations for the same data.
+    /// Insert a raw un-aggregated attestation into the pool for a given `(slot, committee_index)`.
+    ///
+    /// ## Note
+    ///
+    /// It would be fair to assume that all attestations here are unaggregated, and that we
+    /// therefore do not need to check `signers_disjoint_from`. However, the cost of doing so is
+    /// low, so we perform the check for added safety.
+    pub fn insert_raw_attestation(
+        &self,
+        attestation: Attestation<T>,
+        fork: &Fork,
+        spec: &ChainSpec,
+    ) -> Result<(), AttestationValidationError> {
+        let id = AttestationId::from_data(&attestation.data, fork, spec);
+        let slot = attestation.data.slot;
+        let committee_index = attestation.data.index;
+
+        // Take a write lock on the attestations map.
+        let mut attestations = self.committee_attestations.write();
+
+        let slot_index_map = attestations
+            .entry((slot, committee_index))
+            .or_insert_with(HashMap::new);
+
+        let existing_attestation = match slot_index_map.entry(id) {
+            hash_map::Entry::Vacant(entry) => {
+                entry.insert(attestation);
+                return Ok(());
+            }
+            hash_map::Entry::Occupied(entry) => entry.into_mut(),
+        };
+
+        if existing_attestation.signers_disjoint_from(&attestation) {
+            existing_attestation.aggregate(&attestation);
+        } else if *existing_attestation != attestation {
+            return Err(AttestationValidationError::Invalid(
+                AttestationInvalid::NotDisjoint,
+            ));
+        }
+
+        Ok(())
+    }
+    /// Total number of aggregate attestations in the pool from the aggregate channel, including
+    /// attestations for the same data.
     pub fn num_attestations(&self) -> usize {
-        self.attestations.read().values().map(Vec::len).sum()
+        self.aggregate_attestations
+            .read()
+            .values()
+            .map(Vec::len)
+            .sum()
     }
+    /// Total number of attestations in the pool, including attestations for the same data.
+    pub fn total_num_attestations(&self) -> usize {
+        self.num_attestations().saturating_add(
+            self.committee_attestations
+                .read()
+                .values()
+                .map(HashMap::len)
+                .sum(),
+        )
+    }
+    /// Get the aggregated raw attestation for a `(slot, committee)` pair.
+    //TODO: Check this logic and optimize
+    pub fn get_raw_aggregated_attestations(
+        &self,
+        slot: &Slot,
+        index: &CommitteeIndex,
+        state: &BeaconState<T>,
+        spec: &ChainSpec,
+    ) -> Result<Attestation<T>, OpPoolError> {
+        let curr_domain_bytes =
+            AttestationId::compute_domain_bytes(state.current_epoch(), &state.fork, spec);
+        self.committee_attestations
+            .read()
+            .get(&(*slot, *index))
+            .ok_or(OpPoolError::NoAttestationsForSlotCommittee)?
+            .iter()
+            .find(|(key, _)| key.domain_bytes_match(&curr_domain_bytes))
+            .map(|(_key, attestation)| attestation.clone())
+            .ok_or(OpPoolError::NoAttestationsForSlotCommittee)
+    }
     /// Get a list of attestations for inclusion in a block.
@@ -109,7 +209,7 @@ impl<T: EthSpec> OperationPool<T> {
         let prev_domain_bytes = AttestationId::compute_domain_bytes(prev_epoch, &state.fork, spec);
         let curr_domain_bytes =
             AttestationId::compute_domain_bytes(current_epoch, &state.fork, spec);
-        let reader = self.attestations.read();
+        let reader = self.aggregate_attestations.read();
         let active_indices = state
             .get_cached_active_validator_indices(RelativeEpoch::Current)
             .map_err(OpPoolError::GetAttestationsTotalBalanceError)?;
@@ -141,21 +241,40 @@ impl<T: EthSpec> OperationPool<T> {
         ))
     }

-    /// Remove attestations which are too old to be included in a block.
-    pub fn prune_attestations(&self, finalized_state: &BeaconState<T>) {
+    /// Removes aggregate attestations which are too old to be included in a block.
+    ///
+    /// This leaves `committee_attestations` intact. The committee attestations have their own
+    /// prune function, as they are not for block inclusion and can be pruned more frequently.
+    /// See `prune_committee_attestations`.
+    //TODO: Michael to check this before merge
+    pub fn prune_attestations(&self, current_epoch: &Epoch) {
         // We know we can include an attestation if:
         // state.slot <= attestation_slot + SLOTS_PER_EPOCH
         // We approximate this check using the attestation's epoch, to avoid computing
         // the slot or relying on the committee cache of the finalized state.
-        self.attestations.write().retain(|_, attestations| {
-            // All the attestations in this bucket have the same data, so we only need to
-            // check the first one.
-            attestations.first().map_or(false, |att| {
-                finalized_state.current_epoch() <= att.data.target.epoch + 1
-            })
-        });
+        self.aggregate_attestations.write().retain(|_, attestations| {
+            // All the attestations in this bucket have the same data, so we only need to
+            // check the first one.
+            attestations
+                .first()
+                .map_or(false, |att| *current_epoch <= att.data.target.epoch + 1)
+        });
     }
+    /// Removes old committee attestations. These should be used in the slot in which they are
+    /// collected; we keep them for one extra slot (i.e. until `current_slot + 1`) to account for
+    /// potential delays.
+    ///
+    /// The beacon chain should call this function every slot, passing the current slot as the
+    /// parameter.
+    pub fn prune_committee_attestations(&self, current_slot: &Slot) {
+        self.committee_attestations
+            .write()
+            .retain(|(slot, _), _| *slot + ATTESTATION_SUBNET_SLOT_DURATION >= *current_slot)
+    }
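The two prune paths use different retention horizons: block-inclusion aggregates survive roughly one epoch past their target, while subnet attestations survive only `ATTESTATION_SUBNET_SLOT_DURATION` extra slots. A sketch of both retention predicates, using plain `u64` values in place of the `Slot` and `Epoch` newtypes:

```rust
const ATTESTATION_SUBNET_SLOT_DURATION: u64 = 1;

// Keep an aggregate attestation while current_epoch <= target_epoch + 1,
// mirroring the retain closure in `prune_attestations`.
fn keep_aggregate(target_epoch: u64, current_epoch: u64) -> bool {
    current_epoch <= target_epoch + 1
}

// Keep a committee (subnet) attestation for one extra slot, mirroring the
// retain closure in `prune_committee_attestations`.
fn keep_committee(attestation_slot: u64, current_slot: u64) -> bool {
    attestation_slot + ATTESTATION_SUBNET_SLOT_DURATION >= current_slot
}

fn main() {
    assert!(keep_aggregate(10, 11)); // previous-epoch targets remain includable
    assert!(!keep_aggregate(10, 12)); // too old for block inclusion
    assert!(keep_committee(100, 101)); // kept one slot past collection
    assert!(!keep_committee(100, 102)); // pruned after that
}
```

The asymmetry is deliberate: committee attestations only feed aggregation duties in (or just after) their own slot, so holding them for an epoch would be wasted memory.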
     /// Insert a proposer slashing into the pool.
     pub fn insert_proposer_slashing(
         &self,
@@ -332,8 +451,8 @@ impl<T: EthSpec> OperationPool<T> {
     }

     /// Prune all types of transactions given the latest finalized state.
+    // TODO: Michael - Can we shift these to per-epoch?
     pub fn prune_all(&self, finalized_state: &BeaconState<T>, spec: &ChainSpec) {
-        self.prune_attestations(finalized_state);
         self.prune_proposer_slashings(finalized_state);
         self.prune_attester_slashings(finalized_state, spec);
         self.prune_voluntary_exits(finalized_state);
@@ -383,7 +502,8 @@ fn prune_validator_hash_map<T, F, E: EthSpec>(
 /// Compare two operation pools.
 impl<T: EthSpec + Default> PartialEq for OperationPool<T> {
     fn eq(&self, other: &Self) -> bool {
-        *self.attestations.read() == *other.attestations.read()
+        *self.aggregate_attestations.read() == *other.aggregate_attestations.read()
+            && *self.committee_attestations.read() == *other.committee_attestations.read()
             && *self.attester_slashings.read() == *other.attester_slashings.read()
             && *self.proposer_slashings.read() == *other.proposer_slashings.read()
             && *self.voluntary_exits.read() == *other.voluntary_exits.read()
@@ -397,6 +517,7 @@ mod release_tests {
     use super::*;
     use state_processing::common::{get_attesting_indices, get_base_reward};
     use std::collections::BTreeSet;
+    use std::iter::FromIterator;
     use types::test_utils::*;
     use types::*;
@@ -820,11 +941,15 @@ mod release_tests {
             let committee = state
                 .get_beacon_committee(att.data.slot, att.data.index)
                 .expect("should get beacon committee");
-            let att_indices = get_attesting_indices::<MainnetEthSpec>(
-                committee.committee,
-                &fresh_validators_bitlist,
-            )
-            .unwrap();
+            let att_indices = BTreeSet::from_iter(
+                get_attesting_indices::<MainnetEthSpec>(
+                    committee.committee,
+                    &fresh_validators_bitlist,
+                )
+                .unwrap(),
+            );
             let fresh_indices = &att_indices - &seen_indices;

             let rewards = fresh_indices


@@ -17,7 +17,9 @@ pub struct PersistedOperationPool<T: EthSpec> {
     /// Mapping from attestation ID to attestation mappings.
     // We could save space by not storing the attestation ID, but it might
     // be difficult to make that roundtrip due to eager aggregation.
-    attestations: Vec<(AttestationId, Vec<Attestation<T>>)>,
+    // Note that we don't store the committee attestations, as these are short-lived and not
+    // worth persisting.
+    aggregate_attestations: Vec<(AttestationId, Vec<Attestation<T>>)>,
     /// Attester slashings.
     attester_slashings: Vec<AttesterSlashing<T>>,
     /// Proposer slashings.
@@ -29,8 +31,8 @@ pub struct PersistedOperationPool<T: EthSpec> {
 impl<T: EthSpec> PersistedOperationPool<T> {
     /// Convert an `OperationPool` into serializable form.
     pub fn from_operation_pool(operation_pool: &OperationPool<T>) -> Self {
-        let attestations = operation_pool
-            .attestations
+        let aggregate_attestations = operation_pool
+            .aggregate_attestations
             .read()
             .iter()
             .map(|(att_id, att)| (att_id.clone(), att.clone()))
@@ -58,7 +60,7 @@ impl<T: EthSpec> PersistedOperationPool<T> {
             .collect();

         Self {
-            attestations,
+            aggregate_attestations,
             attester_slashings,
             proposer_slashings,
             voluntary_exits,
@@ -67,7 +69,7 @@ impl<T: EthSpec> PersistedOperationPool<T> {
     /// Reconstruct an `OperationPool`.
     pub fn into_operation_pool(self, state: &BeaconState<T>, spec: &ChainSpec) -> OperationPool<T> {
-        let attestations = RwLock::new(self.attestations.into_iter().collect());
+        let aggregate_attestations = RwLock::new(self.aggregate_attestations.into_iter().collect());
         let attester_slashings = RwLock::new(
             self.attester_slashings
                 .into_iter()
@@ -93,7 +95,8 @@ impl<T: EthSpec> PersistedOperationPool<T> {
         );

         OperationPool {
-            attestations,
+            aggregate_attestations,
+            committee_attestations: Default::default(),
             attester_slashings,
             proposer_slashings,
             voluntary_exits,


@@ -1,6 +1,6 @@
 [package]
 name = "proto_array_fork_choice"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Paul Hauner <paul@sigmaprime.io>"]
 edition = "2018"


@@ -1,6 +1,6 @@
 [package]
 name = "state_processing"
-version = "0.1.0"
+version = "0.2.0"
 authors = ["Paul Hauner <paul@paulhauner.com>"]
 edition = "2018"
@@ -15,7 +15,6 @@ serde = "1.0.102"
 serde_derive = "1.0.102"
 lazy_static = "1.4.0"
 serde_yaml = "0.8.11"
-eth2_ssz = "0.1.2"
 beacon_chain = { path = "../../beacon_node/beacon_chain" }
 store = { path = "../../beacon_node/store" }
@@ -24,6 +23,7 @@ store = { path = "../../beacon_node/store" }
 bls = { path = "../utils/bls" }
 integer-sqrt = "0.1.2"
 itertools = "0.8.1"
+eth2_ssz = "0.1.2"
 eth2_ssz_types = { path = "../utils/ssz_types" }
 merkle_proof = { path = "../utils/merkle_proof" }
 log = "0.4.8"


@@ -1,4 +1,3 @@
-use std::collections::BTreeSet;
 use types::*;

 /// Returns validator indices which participated in the attestation, sorted by increasing index.
@@ -7,17 +6,20 @@ use types::*;
 pub fn get_attesting_indices<T: EthSpec>(
     committee: &[usize],
     bitlist: &BitList<T::MaxValidatorsPerCommittee>,
-) -> Result<BTreeSet<usize>, BeaconStateError> {
+) -> Result<Vec<usize>, BeaconStateError> {
     if bitlist.len() != committee.len() {
         return Err(BeaconStateError::InvalidBitfield);
     }

-    Ok(committee
-        .iter()
-        .enumerate()
-        .filter_map(|(i, validator_index)| match bitlist.get(i) {
-            Ok(true) => Some(*validator_index),
-            _ => None,
-        })
-        .collect())
+    let mut indices = Vec::with_capacity(bitlist.num_set_bits());
+
+    for (i, validator_index) in committee.iter().enumerate() {
+        if let Ok(true) = bitlist.get(i) {
+            indices.push(*validator_index)
+        }
+    }
+
+    indices.sort_unstable();
+
+    Ok(indices)
 }
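The refactor above swaps the `BTreeSet` return for a sorted `Vec`, preserving the "sorted by increasing index" guarantee while avoiding tree allocation. A self-contained version, with the SSZ `BitList` replaced by a plain `&[bool]` and the error type reduced to a unit error for illustration:

```rust
// Returns validator indices which participated, sorted by increasing index.
// `bitlist: &[bool]` stands in for the SSZ `BitList`, and `()` stands in for
// `BeaconStateError::InvalidBitfield`.
fn get_attesting_indices(committee: &[usize], bitlist: &[bool]) -> Result<Vec<usize>, ()> {
    if bitlist.len() != committee.len() {
        return Err(());
    }
    let mut indices: Vec<usize> = committee
        .iter()
        .enumerate()
        .filter(|&(i, _)| bitlist[i])
        .map(|(_, &v)| v)
        .collect();
    // Committees are not ordered by validator index, so sort to keep the
    // ordering guarantee that the old BTreeSet provided for free.
    indices.sort_unstable();
    Ok(indices)
}

fn main() {
    // Committee members 7, 3, 5; bits set for members 7 and 5.
    assert_eq!(
        get_attesting_indices(&[7, 3, 5], &[true, false, true]),
        Ok(vec![5, 7])
    );
    // Mismatched bitfield length is rejected.
    assert!(get_attesting_indices(&[7, 3], &[true]).is_err());
}
```

Callers that still want set semantics (such as the `release_tests` change earlier, which rebuilds a `BTreeSet` via `FromIterator`) can do so at the call site.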


@@ -10,7 +10,7 @@ pub fn initiate_validator_exit<T: EthSpec>(
     spec: &ChainSpec,
 ) -> Result<(), Error> {
     if index >= state.validators.len() {
-        return Err(Error::UnknownValidator);
+        return Err(Error::UnknownValidator(index as u64));
     }

     // Return if the validator already initiated exit


@@ -12,7 +12,7 @@ pub fn slash_validator<T: EthSpec>(
     spec: &ChainSpec,
 ) -> Result<(), Error> {
     if slashed_index >= state.validators.len() || slashed_index >= state.balances.len() {
-        return Err(BeaconStateError::UnknownValidator);
+        return Err(BeaconStateError::UnknownValidator(slashed_index as u64));
     }

     let epoch = state.current_epoch();


@@ -10,8 +10,8 @@ pub mod test_utils;
 pub use genesis::{initialize_beacon_state_from_eth1, is_valid_genesis_state, process_activations};
 pub use per_block_processing::{
-    errors::BlockProcessingError, per_block_processing, signature_sets, BlockSignatureStrategy,
-    VerifySignatures,
+    block_signature_verifier, errors::BlockProcessingError, per_block_processing, signature_sets,
+    BlockSignatureStrategy, BlockSignatureVerifier, VerifySignatures,
 };
 pub use per_epoch_processing::{errors::EpochProcessingError, per_epoch_processing};
 pub use per_slot_processing::{per_slot_processing, Error as SlotProcessingError};

Some files were not shown because too many files have changed in this diff.