Add attestation gossip pre-verification (#983)

* Add PH & MS slot clock changes

* Account for genesis time

* Add progress on duties refactor

* Add simple is_aggregator bool to val subscription

* Start work on attestation_verification.rs

* Add progress on ObservedAttestations

* Progress with ObservedAttestations

* Fix tests

* Add observed attestations to the beacon chain

* Add attestation observation to processing code

* Add progress on attestation verification

* Add first draft of ObservedAttesters

* Add more tests

* Add observed attesters to beacon chain

* Add observers to attestation processing

* Add more attestation verification

* Create ObservedAggregators map

* Remove commented-out code

* Add observed aggregators into chain

* Add progress

* Finish adding features to attestation verification

* Ensure beacon chain compiles

* Link attn verification into chain

* Integrate new attn verification in chain

* Remove old attestation processing code

* Start trying to fix beacon_chain tests

* Split adding into pools into two functions

* Add aggregation to harness

* Get test harness working again

* Adjust the number of aggregators for test harness

* Fix edge-case in harness

* Integrate new attn processing in network

* Fix compile bug in validator_client

* Update validator API endpoints

* Fix aggregation in test harness

* Fix enum thing

* Fix attestation observation bug

* Patch failing API tests

* Start adding comments to attestation verification

* Remove unused attestation field

* Unify "is block known" logic

* Update comments

* Suppress fork choice errors for network processing

* Add todos

* Tidy

* Add gossip attn tests

* Disallow test harness to produce old attns

* Comment out in-progress tests

* Partially address pruning tests

* Fix failing store test

* Add aggregate tests

* Add comments about which spec conditions we check

* Don't re-aggregate

* Split apart test harness attn production

* Fix compile error in network

* Make progress on commented-out test

* Fix skipping attestation test

* Add fork choice verification tests

* Tidy attn tests, remove dead code

* Remove some accidentally added code

* Fix clippy lint

* Rename test file

* Add block tests, add cheap block proposer check

* Rename block testing file

* Add observed_block_producers

* Tidy

* Switch around block signature verification

* Finish block testing

* Remove gossip from signature tests

* First pass of self review

* Fix deviation in spec

* Update test spec tags

* Start moving over to hashset

* Finish moving observed attesters to hashmap

* Move aggregation pool over to hashmap

* Make fc attn borrow again

* Fix rest_api compile error

* Fix missing comments

* Fix monster test

* Uncomment increasing slots test

* Address remaining comments

* Remove unsafe, use cfg test

* Remove cfg test flag

* Fix dodgy comment

* Ignore aggregates that are already known.

* Unify aggregator modulo logic

* Fix typo in logs

* Refactor validator subscription logic

* Avoid reproducing selection proof

* Skip HTTP call if no subscriptions

* Rename DutyAndState -> DutyAndProof

* Tidy logs

* Print root as dbg

* Fix compile errors in tests

* Fix compile error in test
Paul Hauner 2020-05-06 21:42:56 +10:00 committed by GitHub
parent 1552f9997e
commit ad5bd6412a
38 changed files with 4952 additions and 1479 deletions

Cargo.lock (generated)

@@ -215,6 +215,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
name = "beacon_chain"
version = "0.2.0"
dependencies = [
"bitvec 0.17.4 (registry+https://github.com/rust-lang/crates.io-index)",
"bls 0.2.0",
"environment 0.2.0",
"eth1 0.2.0",
"eth2_config 0.2.0",

beacon_node/beacon_chain/Cargo.toml

@@ -41,6 +41,8 @@ rand = "0.7.2"
proto_array_fork_choice = { path = "../../eth2/proto_array_fork_choice" }
lru = "0.4.3"
tempfile = "3.1.0"
bitvec = "0.17.4"
bls = { path = "../../eth2/utils/bls" }
safe_arith = { path = "../../eth2/utils/safe_arith" }
[dev-dependencies]

View File

@@ -0,0 +1,886 @@
//! Provides verification for the following attestations:
//!
//! - "Unaggregated" `Attestation` received from either gossip or the HTTP API.
//! - "Aggregated" `SignedAggregateAndProof` received from gossip or the HTTP API.
//!
//! For clarity, we define:
//!
//! - Unaggregated: an `Attestation` object that has exactly one aggregation bit set.
//! - Aggregated: a `SignedAggregateAndProof` which has zero or more signatures.
//! - Note: "zero or more" may soon change to "one or more".
//!
//! Similar to the `crate::block_verification` module, we try to avoid doing duplicate verification
//! work as an attestation passes through different stages of verification. We represent these
//! different stages of verification with wrapper types. These wrapper-types flow in a particular
//! pattern:
//!
//! ```ignore
//!      types::Attestation              types::SignedAggregateAndProof
//!              |                                       |
//!              ▼                                       ▼
//!  VerifiedUnaggregatedAttestation       VerifiedAggregatedAttestation
//!              |                                       |
//!              -----------------------------------------
//!                                  |
//!                                  ▼
//!                  ForkChoiceVerifiedAttestation
//! ```
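//!
//! As an illustrative sketch (an editorial example, not code from this commit), the wrappers
//! are typically driven from `BeaconChain` like so:
//!
//! ```ignore
//! // Gossip-verify, apply to fork choice, then hand off to the relevant pool.
//! let unaggregated = chain.verify_unaggregated_attestation_for_gossip(attestation)?;
//! chain.apply_attestation_to_fork_choice(&unaggregated)?;
//! chain.add_to_naive_aggregation_pool(unaggregated)?;
//!
//! let aggregate = chain.verify_aggregated_attestation_for_gossip(signed_aggregate)?;
//! chain.apply_attestation_to_fork_choice(&aggregate)?;
//! chain.add_to_block_inclusion_pool(aggregate)?;
//! ```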
use crate::{
beacon_chain::{
ATTESTATION_CACHE_LOCK_TIMEOUT, HEAD_LOCK_TIMEOUT, MAXIMUM_GOSSIP_CLOCK_DISPARITY,
VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT,
},
metrics,
observed_attestations::ObserveOutcome,
observed_attesters::Error as ObservedAttestersError,
BeaconChain, BeaconChainError, BeaconChainTypes,
};
use bls::verify_signature_sets;
use slog::debug;
use slot_clock::SlotClock;
use state_processing::{
common::get_indexed_attestation,
per_block_processing::errors::AttestationValidationError,
per_slot_processing,
signature_sets::{
indexed_attestation_signature_set_from_pubkeys,
signed_aggregate_selection_proof_signature_set, signed_aggregate_signature_set,
},
};
use std::borrow::Cow;
use tree_hash::TreeHash;
use types::{
Attestation, BeaconCommittee, CommitteeIndex, Epoch, EthSpec, Hash256, IndexedAttestation,
RelativeEpoch, SelectionProof, SignedAggregateAndProof, Slot,
};
/// Returned when an attestation was not successfully verified. It might not have been verified for
/// one of two reasons:
///
/// - The attestation is malformed or inappropriate for the context (indicated by all variants
/// other than `BeaconChainError`).
/// - The application encountered an internal error whilst attempting to determine validity
/// (the `BeaconChainError` variant)
#[derive(Debug, PartialEq)]
pub enum Error {
/// The attestation is from a slot that is later than the current slot (with respect to the
/// gossip clock disparity).
FutureSlot {
attestation_slot: Slot,
latest_permissible_slot: Slot,
},
/// The attestation is from a slot that is prior to the earliest permissible slot (with
/// respect to the gossip clock disparity).
PastSlot {
attestation_slot: Slot,
earliest_permissible_slot: Slot,
},
/// The attestation's aggregation bits were empty when they shouldn't be.
EmptyAggregationBitfield,
/// The `selection_proof` on the aggregate attestation does not elect it as an aggregator.
InvalidSelectionProof { aggregator_index: u64 },
/// The `selection_proof` on the aggregate attestation selects it as an aggregator, however the
/// aggregator index is not in the committee for that attestation.
AggregatorNotInCommittee { aggregator_index: u64 },
/// The aggregator index refers to a validator index that we have not seen.
AggregatorPubkeyUnknown(u64),
/// The attestation has been seen before; either in a block, on the gossip network or from a
/// local validator.
AttestationAlreadyKnown(Hash256),
/// There has already been an aggregation observed for this validator, we refuse to process a
/// second.
AggregatorAlreadyKnown(u64),
/// The aggregator index is higher than the maximum possible validator count.
ValidatorIndexTooHigh(usize),
/// The `attestation.data.beacon_block_root` block is unknown.
UnknownHeadBlock { beacon_block_root: Hash256 },
/// The `attestation.data.slot` is not from the same epoch as `data.target.epoch` and therefore
/// the attestation is invalid.
BadTargetEpoch,
/// The target root of the attestation points to a block that we have not verified.
UnknownTargetRoot(Hash256),
/// A signature on the attestation is invalid.
InvalidSignature,
/// There is no committee for the slot and committee index of this attestation and the
/// attestation should not have been produced.
NoCommitteeForSlotAndIndex { slot: Slot, index: CommitteeIndex },
/// The unaggregated attestation doesn't have only one aggregation bit set.
NotExactlyOneAggregationBitSet(usize),
/// We have already observed an attestation for the `validator_index` and refuse to process
/// another.
PriorAttestationKnown { validator_index: u64, epoch: Epoch },
/// The attestation is for an epoch in the future (with respect to the gossip clock disparity).
FutureEpoch {
attestation_epoch: Epoch,
current_epoch: Epoch,
},
/// The attestation is for an epoch in the past (with respect to the gossip clock disparity).
PastEpoch {
attestation_epoch: Epoch,
current_epoch: Epoch,
},
/// The attestation is attesting to a state that is later than itself. (Viz., attesting to the
/// future).
AttestsToFutureBlock { block: Slot, attestation: Slot },
/// The attestation failed the `state_processing` verification stage.
Invalid(AttestationValidationError),
/// There was an error whilst processing the attestation. It is not known if it is valid or invalid.
BeaconChainError(BeaconChainError),
}
impl From<BeaconChainError> for Error {
fn from(e: BeaconChainError) -> Self {
Error::BeaconChainError(e)
}
}
/// Wraps a `SignedAggregateAndProof` that has been verified for propagation on the gossip network.
pub struct VerifiedAggregatedAttestation<T: BeaconChainTypes> {
signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
indexed_attestation: IndexedAttestation<T::EthSpec>,
}
/// Wraps an `Attestation` that has been verified for propagation on the gossip network.
pub struct VerifiedUnaggregatedAttestation<T: BeaconChainTypes> {
attestation: Attestation<T::EthSpec>,
indexed_attestation: IndexedAttestation<T::EthSpec>,
}
/// A custom `Clone` implementation is used to avoid the restrictive trait bounds applied by the
/// usual derive macro.
impl<T: BeaconChainTypes> Clone for VerifiedUnaggregatedAttestation<T> {
fn clone(&self) -> Self {
Self {
attestation: self.attestation.clone(),
indexed_attestation: self.indexed_attestation.clone(),
}
}
}
/// Wraps an `indexed_attestation` that is valid for application to fork choice. The
/// `indexed_attestation` will have been generated via the `VerifiedAggregatedAttestation` or
/// `VerifiedUnaggregatedAttestation` wrappers.
pub struct ForkChoiceVerifiedAttestation<'a, T: BeaconChainTypes> {
indexed_attestation: &'a IndexedAttestation<T::EthSpec>,
}
/// A helper trait implemented on wrapper types that can be progressed to a state where they can be
/// verified for application to fork choice.
pub trait IntoForkChoiceVerifiedAttestation<'a, T: BeaconChainTypes> {
fn into_fork_choice_verified_attestation(
&'a self,
chain: &BeaconChain<T>,
) -> Result<ForkChoiceVerifiedAttestation<'a, T>, Error>;
}
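// An illustrative generic helper (hypothetical, not part of this commit) showing how the
// trait erases the difference between the wrapper types:
//
// fn apply<'a, T: BeaconChainTypes>(
//     chain: &BeaconChain<T>,
//     attn: &'a impl IntoForkChoiceVerifiedAttestation<'a, T>,
// ) -> Result<(), Error> {
//     attn.into_fork_choice_verified_attestation(chain).map(|_| ())
// }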
impl<'a, T: BeaconChainTypes> IntoForkChoiceVerifiedAttestation<'a, T>
for VerifiedAggregatedAttestation<T>
{
/// Progresses the `VerifiedAggregatedAttestation` to a stage where it is valid for application
/// to the fork-choice rule (or not).
fn into_fork_choice_verified_attestation(
&'a self,
chain: &BeaconChain<T>,
) -> Result<ForkChoiceVerifiedAttestation<T>, Error> {
ForkChoiceVerifiedAttestation::from_signature_verified_components(
&self.indexed_attestation,
chain,
)
}
}
impl<'a, T: BeaconChainTypes> IntoForkChoiceVerifiedAttestation<'a, T>
for VerifiedUnaggregatedAttestation<T>
{
/// Progresses the `Attestation` to a stage where it is valid for application to the
/// fork-choice rule (or not).
fn into_fork_choice_verified_attestation(
&'a self,
chain: &BeaconChain<T>,
) -> Result<ForkChoiceVerifiedAttestation<T>, Error> {
ForkChoiceVerifiedAttestation::from_signature_verified_components(
&self.indexed_attestation,
chain,
)
}
}
impl<'a, T: BeaconChainTypes> IntoForkChoiceVerifiedAttestation<'a, T>
for ForkChoiceVerifiedAttestation<'a, T>
{
/// Simply returns itself.
fn into_fork_choice_verified_attestation(
&'a self,
_: &BeaconChain<T>,
) -> Result<ForkChoiceVerifiedAttestation<T>, Error> {
Ok(Self {
indexed_attestation: self.indexed_attestation,
})
}
}
impl<T: BeaconChainTypes> VerifiedAggregatedAttestation<T> {
/// Returns `Ok(Self)` if the `signed_aggregate` is valid to be (re)published on the gossip
/// network.
pub fn verify(
signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<Self, Error> {
let attestation = &signed_aggregate.message.aggregate;
// Ensure attestation is within the last ATTESTATION_PROPAGATION_SLOT_RANGE slots (within a
// MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance).
//
// We do not queue future attestations for later processing.
verify_propagation_slot_range(chain, attestation)?;
// Ensure the aggregated attestation has not already been seen locally.
//
// TODO: this part of the code is not technically to spec, however I have raised a PR to
// change it:
//
// https://github.com/ethereum/eth2.0-specs/pull/1749
let attestation_root = attestation.tree_hash_root();
if chain
.observed_attestations
.is_known(attestation, attestation_root)
.map_err(|e| Error::BeaconChainError(e.into()))?
{
return Err(Error::AttestationAlreadyKnown(attestation_root));
}
let aggregator_index = signed_aggregate.message.aggregator_index;
// Ensure there has been no other observed aggregate for the given `aggregator_index`.
//
// Note: do not observe yet, only observe once the attestation has been verified.
match chain
.observed_aggregators
.validator_has_been_observed(attestation, aggregator_index as usize)
{
Ok(true) => Err(Error::AggregatorAlreadyKnown(aggregator_index)),
Ok(false) => Ok(()),
Err(ObservedAttestersError::ValidatorIndexTooHigh(i)) => {
Err(Error::ValidatorIndexTooHigh(i))
}
Err(e) => Err(BeaconChainError::from(e).into()),
}?;
// Ensure the block being voted for (attestation.data.beacon_block_root) passes validation.
//
// This indirectly checks to see if the `attestation.data.beacon_block_root` is in our fork
// choice. Any known, non-finalized, processed block should be in fork choice, so this
// check immediately filters out attestations that attest to a block that has not been
// processed.
//
// Attestations must be for a known block. If the block is unknown, we simply drop the
// attestation and do not delay consideration for later.
verify_head_block_is_known(chain, &attestation)?;
let indexed_attestation = map_attestation_committee(chain, attestation, |committee| {
// Note: this clones the signature which is known to be a relatively slow operation.
//
// Future optimizations should remove this clone.
let selection_proof =
SelectionProof::from(signed_aggregate.message.selection_proof.clone());
if !selection_proof
.is_aggregator(committee.committee.len(), &chain.spec)
.map_err(|e| Error::BeaconChainError(e.into()))?
{
return Err(Error::InvalidSelectionProof { aggregator_index });
}
/*
* I have raised a PR that will likely get merged in v0.12.0:
*
* https://github.com/ethereum/eth2.0-specs/pull/1732
*
* If this PR gets merged, uncomment this code and remove the code below.
*
if !committee
.committee
.iter()
.any(|validator_index| *validator_index as u64 == aggregator_index)
{
return Err(Error::AggregatorNotInCommittee { aggregator_index });
}
*/
get_indexed_attestation(committee.committee, &attestation)
.map_err(|e| BeaconChainError::from(e).into())
})?;
// Ensure the aggregator is in the attestation.
//
// I've raised an issue with this here:
//
// https://github.com/ethereum/eth2.0-specs/pull/1732
//
// I suspect my PR will get merged in v0.12, at which point we'll need to delete this code
// and uncomment the code above.
if !indexed_attestation
.attesting_indices
.iter()
.any(|validator_index| *validator_index as u64 == aggregator_index)
{
return Err(Error::AggregatorNotInCommittee { aggregator_index });
}
if !verify_signed_aggregate_signatures(chain, &signed_aggregate, &indexed_attestation)? {
return Err(Error::InvalidSignature);
}
// Observe the valid attestation so we do not re-process it.
//
// It's important to double check that the attestation is not already known, otherwise two
// attestations processed at the same time could be published.
if let ObserveOutcome::AlreadyKnown = chain
.observed_attestations
.observe_attestation(attestation, Some(attestation_root))
.map_err(|e| Error::BeaconChainError(e.into()))?
{
return Err(Error::AttestationAlreadyKnown(attestation_root));
}
// Observe the aggregator so we don't process another aggregate from them.
//
// It's important to double check that the aggregator is not already known, otherwise two
// aggregates processed at the same time could be published.
if chain
.observed_aggregators
.observe_validator(&attestation, aggregator_index as usize)
.map_err(|e| BeaconChainError::from(e))?
{
return Err(Error::PriorAttestationKnown {
validator_index: aggregator_index,
epoch: attestation.data.target.epoch,
});
}
Ok(VerifiedAggregatedAttestation {
signed_aggregate,
indexed_attestation,
})
}
/// A helper function to add this aggregate to `beacon_chain.op_pool`.
pub fn add_to_pool(self, chain: &BeaconChain<T>) -> Result<Self, Error> {
chain.add_to_block_inclusion_pool(self)
}
/// A helper function to add this aggregate to `beacon_chain.fork_choice`.
pub fn add_to_fork_choice(
&self,
chain: &BeaconChain<T>,
) -> Result<ForkChoiceVerifiedAttestation<T>, Error> {
chain.apply_attestation_to_fork_choice(self)
}
/// Returns the underlying `attestation` for the `signed_aggregate`.
pub fn attestation(&self) -> &Attestation<T::EthSpec> {
&self.signed_aggregate.message.aggregate
}
}
impl<T: BeaconChainTypes> VerifiedUnaggregatedAttestation<T> {
/// Returns `Ok(Self)` if the `attestation` is valid to be (re)published on the gossip
/// network.
pub fn verify(
attestation: Attestation<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<Self, Error> {
// Ensure attestation is within the last ATTESTATION_PROPAGATION_SLOT_RANGE slots (within a
// MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance).
//
// We do not queue future attestations for later processing.
verify_propagation_slot_range(chain, &attestation)?;
// Check to ensure that the attestation is "unaggregated". I.e., it has exactly one
// aggregation bit set.
let num_aggregation_bits = attestation.aggregation_bits.num_set_bits();
if num_aggregation_bits != 1 {
return Err(Error::NotExactlyOneAggregationBitSet(num_aggregation_bits));
}
// Attestations must be for a known block. If the block is unknown, we simply drop the
// attestation and do not delay consideration for later.
verify_head_block_is_known(chain, &attestation)?;
let indexed_attestation = obtain_indexed_attestation(chain, &attestation)?;
let validator_index = *indexed_attestation
.attesting_indices
.first()
.ok_or_else(|| Error::NotExactlyOneAggregationBitSet(0))?;
/*
* The attestation is the first valid attestation received for the participating validator
* for the slot, attestation.data.slot.
*/
if chain
.observed_attesters
.validator_has_been_observed(&attestation, validator_index as usize)
.map_err(|e| BeaconChainError::from(e))?
{
return Err(Error::PriorAttestationKnown {
validator_index,
epoch: attestation.data.target.epoch,
});
}
// The aggregate signature of the attestation is valid.
verify_attestation_signature(chain, &indexed_attestation)?;
// Now that the attestation has been fully verified, store that we have received a valid
// attestation from this validator.
//
// It's important to double check that the attestation still hasn't been observed, since
// there can be a race-condition if we receive two attestations at the same time and
// process them in different threads.
if chain
.observed_attesters
.observe_validator(&attestation, validator_index as usize)
.map_err(|e| BeaconChainError::from(e))?
{
return Err(Error::PriorAttestationKnown {
validator_index,
epoch: attestation.data.target.epoch,
});
}
Ok(Self {
attestation,
indexed_attestation,
})
}
/// A helper function to add this attestation to `beacon_chain.naive_aggregation_pool`.
pub fn add_to_pool(self, chain: &BeaconChain<T>) -> Result<Self, Error> {
chain.add_to_naive_aggregation_pool(self)
}
/// Returns the wrapped `attestation`.
pub fn attestation(&self) -> &Attestation<T::EthSpec> {
&self.attestation
}
/// Returns a mutable reference to the underlying `IndexedAttestation`.
///
/// Only use during testing since modifying the `IndexedAttestation` can cause the attestation
/// to no longer be valid.
pub fn __indexed_attestation_mut(&mut self) -> &mut IndexedAttestation<T::EthSpec> {
&mut self.indexed_attestation
}
}
impl<'a, T: BeaconChainTypes> ForkChoiceVerifiedAttestation<'a, T> {
/// Returns `Ok(Self)` if the `attestation` is valid to be applied to the beacon chain fork
/// choice.
///
/// The supplied `indexed_attestation` MUST have a valid signature, this function WILL NOT
/// CHECK THE SIGNATURE. Use the `VerifiedAggregatedAttestation` or
/// `VerifiedUnaggregatedAttestation` structs to do signature verification.
fn from_signature_verified_components(
indexed_attestation: &'a IndexedAttestation<T::EthSpec>,
chain: &BeaconChain<T>,
) -> Result<Self, Error> {
// There is no point in processing an attestation with an empty bitfield. Reject
// it immediately.
//
// This is not in the specification, however it should be transparent to other nodes. We
// return early here to avoid wasting precious resources verifying the rest of it.
if indexed_attestation.attesting_indices.len() == 0 {
return Err(Error::EmptyAggregationBitfield);
}
let slot_now = chain.slot()?;
let epoch_now = slot_now.epoch(T::EthSpec::slots_per_epoch());
let target = indexed_attestation.data.target.clone();
// Attestation must be from the current or previous epoch.
if target.epoch > epoch_now {
return Err(Error::FutureEpoch {
attestation_epoch: target.epoch,
current_epoch: epoch_now,
});
} else if target.epoch + 1 < epoch_now {
return Err(Error::PastEpoch {
attestation_epoch: target.epoch,
current_epoch: epoch_now,
});
}
if target.epoch
!= indexed_attestation
.data
.slot
.epoch(T::EthSpec::slots_per_epoch())
{
return Err(Error::BadTargetEpoch);
}
// Attestation target must be for a known block.
if !chain.fork_choice.contains_block(&target.root) {
return Err(Error::UnknownTargetRoot(target.root));
}
// TODO: we're not testing an assert from the spec:
//
// `assert get_current_slot(store) >= compute_start_slot_at_epoch(target.epoch)`
//
// I think this check is redundant and I've raised an issue here:
//
// https://github.com/ethereum/eth2.0-specs/pull/1755
//
// To resolve this todo, observe the outcome of the above PR.
// Load the slot and state root for `attestation.data.beacon_block_root`.
//
// This indirectly checks to see if the `attestation.data.beacon_block_root` is in our fork
// choice. Any known, non-finalized block should be in fork choice, so this check
// immediately filters out attestations that attest to a block that has not been processed.
//
// Attestations must be for a known block. If the block is unknown, we simply drop the
// attestation and do not delay consideration for later.
let (block_slot, _state_root) = chain
.fork_choice
.block_slot_and_state_root(&indexed_attestation.data.beacon_block_root)
.ok_or_else(|| Error::UnknownHeadBlock {
beacon_block_root: indexed_attestation.data.beacon_block_root,
})?;
// TODO: currently we do not check the FFG source/target. This is what the spec dictates
// but it seems wrong.
//
// I have opened an issue on the specs repo for this:
//
// https://github.com/ethereum/eth2.0-specs/issues/1636
//
// We should revisit this code once that issue has been resolved.
// Attestations must not be for blocks in the future. If this is the case, the attestation
// should not be considered.
if block_slot > indexed_attestation.data.slot {
return Err(Error::AttestsToFutureBlock {
block: block_slot,
attestation: indexed_attestation.data.slot,
});
}
// Note: we're not checking the "attestations can only affect the fork choice of subsequent
// slots" part of the spec, we do this upstream.
Ok(Self {
indexed_attestation,
})
}
/// Returns the wrapped `IndexedAttestation`.
pub fn indexed_attestation(&self) -> &IndexedAttestation<T::EthSpec> {
&self.indexed_attestation
}
}
/// Returns `Ok(())` if the `attestation.data.beacon_block_root` is known to this chain.
///
/// The block root may not be known for two reasons:
///
/// 1. The block has never been verified by our application.
/// 2. The block is prior to the latest finalized block.
///
/// Case (1) is the exact thing we're trying to detect. Case (2) is a little different, but it's
/// still fine to reject here because there's no need for us to handle attestations that are
/// already finalized.
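///
/// A usage sketch (mirroring the call sites in the `verify` functions above):
///
/// ```ignore
/// // Unknown blocks cause the attestation to be dropped; it is not queued for later.
/// verify_head_block_is_known(chain, &attestation)?;
/// ```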
fn verify_head_block_is_known<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
attestation: &Attestation<T::EthSpec>,
) -> Result<(), Error> {
if chain
.fork_choice
.contains_block(&attestation.data.beacon_block_root)
{
Ok(())
} else {
Err(Error::UnknownHeadBlock {
beacon_block_root: attestation.data.beacon_block_root,
})
}
}
/// Verify that the `attestation` is within the acceptable gossip propagation range, with reference
/// to the current slot of the `chain`.
///
/// Accounts for `MAXIMUM_GOSSIP_CLOCK_DISPARITY`.
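///
/// As a hedged numeric sketch: with 32 slots per epoch and the wall clock reading slot 100,
/// slots 68 through 100 are acceptable, with `MAXIMUM_GOSSIP_CLOCK_DISPARITY` stretching each
/// boundary slightly.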
pub fn verify_propagation_slot_range<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
attestation: &Attestation<T::EthSpec>,
) -> Result<(), Error> {
let attestation_slot = attestation.data.slot;
let latest_permissible_slot = chain
.slot_clock
.now_with_future_tolerance(MAXIMUM_GOSSIP_CLOCK_DISPARITY)
.ok_or_else(|| BeaconChainError::UnableToReadSlot)?;
if attestation_slot > latest_permissible_slot {
return Err(Error::FutureSlot {
attestation_slot,
latest_permissible_slot,
});
}
// Taking advantage of saturating subtraction on `Slot`.
let earliest_permissible_slot = chain
.slot_clock
.now_with_past_tolerance(MAXIMUM_GOSSIP_CLOCK_DISPARITY)
.ok_or_else(|| BeaconChainError::UnableToReadSlot)?
- T::EthSpec::slots_per_epoch();
if attestation_slot < earliest_permissible_slot {
return Err(Error::PastSlot {
attestation_slot,
earliest_permissible_slot,
});
}
Ok(())
}
/// Verifies that the signature of the `indexed_attestation` is valid.
pub fn verify_attestation_signature<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
indexed_attestation: &IndexedAttestation<T::EthSpec>,
) -> Result<(), Error> {
let signature_setup_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_SIGNATURE_SETUP_TIMES);
let pubkey_cache = chain
.validator_pubkey_cache
.try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::ValidatorPubkeyCacheLockTimeout)?;
let fork = chain
.canonical_head
.try_read_for(HEAD_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::CanonicalHeadLockTimeout)
.map(|head| head.beacon_state.fork.clone())?;
let signature_set = indexed_attestation_signature_set_from_pubkeys(
|validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
&indexed_attestation.signature,
&indexed_attestation,
&fork,
chain.genesis_validators_root,
&chain.spec,
)
.map_err(BeaconChainError::SignatureSetError)?;
metrics::stop_timer(signature_setup_timer);
let _signature_verification_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_SIGNATURE_TIMES);
if signature_set.is_valid() {
Ok(())
} else {
Err(Error::InvalidSignature)
}
}
/// Verifies all the signatures in a `SignedAggregateAndProof` using BLS batch verification. This
/// includes three signatures:
///
/// - `signed_aggregate.signature`
/// - `signed_aggregate.message.selection_proof`
/// - `signed_aggregate.message.aggregate.signature`
///
/// # Returns
///
/// - `Ok(true)`: if all signatures are valid.
/// - `Ok(false)`: if one or more signatures are invalid.
/// - `Err(e)`: if there was an error preventing signature verification.
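///
/// Illustrative caller (mirroring its use in `VerifiedAggregatedAttestation::verify` above):
///
/// ```ignore
/// if !verify_signed_aggregate_signatures(chain, &signed_aggregate, &indexed_attestation)? {
///     return Err(Error::InvalidSignature);
/// }
/// ```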
pub fn verify_signed_aggregate_signatures<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
signed_aggregate: &SignedAggregateAndProof<T::EthSpec>,
indexed_attestation: &IndexedAttestation<T::EthSpec>,
) -> Result<bool, Error> {
let pubkey_cache = chain
.validator_pubkey_cache
.try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::ValidatorPubkeyCacheLockTimeout)?;
let aggregator_index = signed_aggregate.message.aggregator_index;
if aggregator_index >= pubkey_cache.len() as u64 {
return Err(Error::AggregatorPubkeyUnknown(aggregator_index));
}
let fork = chain
.canonical_head
.try_read_for(HEAD_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::CanonicalHeadLockTimeout)
.map(|head| head.beacon_state.fork.clone())?;
let signature_sets = vec![
signed_aggregate_selection_proof_signature_set(
|validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
&signed_aggregate,
&fork,
chain.genesis_validators_root,
&chain.spec,
)
.map_err(BeaconChainError::SignatureSetError)?,
signed_aggregate_signature_set(
|validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
&signed_aggregate,
&fork,
chain.genesis_validators_root,
&chain.spec,
)
.map_err(BeaconChainError::SignatureSetError)?,
indexed_attestation_signature_set_from_pubkeys(
|validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
&indexed_attestation.signature,
&indexed_attestation,
&fork,
chain.genesis_validators_root,
&chain.spec,
)
.map_err(BeaconChainError::SignatureSetError)?,
];
Ok(verify_signature_sets(signature_sets))
}
/// Returns the `indexed_attestation` for the `attestation` using the public keys cached in the
/// `chain`.
pub fn obtain_indexed_attestation<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
attestation: &Attestation<T::EthSpec>,
) -> Result<IndexedAttestation<T::EthSpec>, Error> {
map_attestation_committee(chain, attestation, |committee| {
get_indexed_attestation(committee.committee, &attestation)
.map_err(|e| BeaconChainError::from(e).into())
})
}
/// Runs the `map_fn` with the committee for the given `attestation`.
///
/// This function exists in this odd "map" pattern because efficiently obtaining the committee for
/// an attestation can be complex. It might involve reading straight from the
/// `beacon_chain.shuffling_cache` or it might involve reading it from a state from the DB. Due to
/// the complexities of `RwLock`s on the shuffling cache, a simple `Cow` isn't suitable here.
///
/// If the committee for `attestation` isn't found in the `shuffling_cache`, we will read a state
/// from disk and then update the `shuffling_cache`.
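///
/// A minimal sketch of the "map" pattern (the closure here is hypothetical, not from this
/// commit):
///
/// ```ignore
/// let committee_len =
///     map_attestation_committee(chain, &attestation, |committee| Ok(committee.committee.len()))?;
/// ```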
pub fn map_attestation_committee<'a, T, F, R>(
chain: &'a BeaconChain<T>,
attestation: &Attestation<T::EthSpec>,
map_fn: F,
) -> Result<R, Error>
where
T: BeaconChainTypes,
F: Fn(BeaconCommittee) -> Result<R, Error>,
{
let attestation_epoch = attestation.data.slot.epoch(T::EthSpec::slots_per_epoch());
let target = &attestation.data.target;
// Attestation target must be for a known block.
//
// We use fork choice to find the target root, which means that we reject any attestation
// that has a `target.root` earlier than our latest finalized root. There's no point in
// processing an attestation that does not include our latest finalized block in its chain.
//
// We do not delay consideration for later, we simply drop the attestation.
let (target_block_slot, target_block_state_root) = chain
.fork_choice
.block_slot_and_state_root(&target.root)
.ok_or_else(|| Error::UnknownTargetRoot(target.root))?;
// Obtain the shuffling cache, timing how long we wait.
let cache_wait_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_SHUFFLING_CACHE_WAIT_TIMES);
let mut shuffling_cache = chain
.shuffling_cache
.try_write_for(ATTESTATION_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::AttestationCacheLockTimeout)?;
metrics::stop_timer(cache_wait_timer);
if let Some(committee_cache) = shuffling_cache.get(attestation_epoch, target.root) {
committee_cache
.get_beacon_committee(attestation.data.slot, attestation.data.index)
.map(map_fn)
.unwrap_or_else(|| {
Err(Error::NoCommitteeForSlotAndIndex {
slot: attestation.data.slot,
index: attestation.data.index,
})
})
} else {
// Drop the shuffling cache to avoid holding the lock for any longer than
// required.
drop(shuffling_cache);
debug!(
chain.log,
"Attestation processing cache miss";
"attn_epoch" => attestation_epoch.as_u64(),
"target_block_epoch" => target_block_slot.epoch(T::EthSpec::slots_per_epoch()).as_u64(),
);
let state_read_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_STATE_READ_TIMES);
let mut state = chain
.get_state(&target_block_state_root, Some(target_block_slot))?
.ok_or_else(|| BeaconChainError::MissingBeaconState(target_block_state_root))?;
metrics::stop_timer(state_read_timer);
let state_skip_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_STATE_SKIP_TIMES);
while state.current_epoch() + 1 < attestation_epoch {
// Here we tell `per_slot_processing` to skip hashing the state and just
// use the zero hash instead.
//
// The state roots are not useful for the shuffling, so there's no need to
// compute them.
per_slot_processing(&mut state, Some(Hash256::zero()), &chain.spec)
.map_err(|e| BeaconChainError::from(e))?
}
metrics::stop_timer(state_skip_timer);
let committee_building_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_COMMITTEE_BUILDING_TIMES);
let relative_epoch = RelativeEpoch::from_epoch(state.current_epoch(), attestation_epoch)
.map_err(BeaconChainError::IncorrectStateForAttestation)?;
state
.build_committee_cache(relative_epoch, &chain.spec)
.map_err(|e| BeaconChainError::from(e))?;
let committee_cache = state
.committee_cache(relative_epoch)
.map_err(|e| BeaconChainError::from(e))?;
chain
.shuffling_cache
.try_write_for(ATTESTATION_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| BeaconChainError::AttestationCacheLockTimeout)?
.insert(attestation_epoch, target.root, committee_cache);
metrics::stop_timer(committee_building_timer);
committee_cache
.get_beacon_committee(attestation.data.slot, attestation.data.index)
.map(map_fn)
.unwrap_or_else(|| {
Err(Error::NoCommitteeForSlotAndIndex {
slot: attestation.data.slot,
index: attestation.data.index,
})
})
}
}

beacon_node/beacon_chain/src/beacon_chain.rs

@@ -1,3 +1,7 @@
use crate::attestation_verification::{
Error as AttestationError, ForkChoiceVerifiedAttestation, IntoForkChoiceVerifiedAttestation,
VerifiedAggregatedAttestation, VerifiedUnaggregatedAttestation,
};
use crate::block_verification::{
check_block_relevancy, get_block_root, signature_verify_chain_segment, BlockError,
FullyVerifiedBlock, GossipVerifiedBlock, IntoFullyVerifiedBlock,
@@ -10,6 +14,9 @@ use crate::head_tracker::HeadTracker;
use crate::metrics;
use crate::migrate::Migrate;
use crate::naive_aggregation_pool::{Error as NaiveAggregationError, NaiveAggregationPool};
use crate::observed_attestations::{Error as AttestationObservationError, ObservedAttestations};
use crate::observed_attesters::{ObservedAggregators, ObservedAttesters};
use crate::observed_block_producers::ObservedBlockProducers;
use crate::persisted_beacon_chain::PersistedBeaconChain;
use crate::shuffling_cache::ShufflingCache;
use crate::snapshot_cache::SnapshotCache;
@@ -23,10 +30,7 @@ use state_processing::per_block_processing::errors::{
AttestationValidationError, AttesterSlashingValidationError, ExitValidationError,
ProposerSlashingValidationError,
};
use state_processing::{
common::get_indexed_attestation, per_block_processing, per_slot_processing,
signature_sets::indexed_attestation_signature_set_from_pubkeys, BlockSignatureStrategy,
};
use state_processing::{per_block_processing, per_slot_processing, BlockSignatureStrategy};
use std::borrow::Cow;
use std::cmp::Ordering;
use std::collections::HashMap;
@@ -49,14 +53,14 @@ pub const GRAFFITI: &str = "sigp/lighthouse-0.2.0-prerelease";
/// The time-out before failure during an operation to take a read/write RwLock on the canonical
/// head.
const HEAD_LOCK_TIMEOUT: Duration = Duration::from_secs(1);
pub const HEAD_LOCK_TIMEOUT: Duration = Duration::from_secs(1);
/// The time-out before failure during an operation to take a read/write RwLock on the block
/// processing cache.
pub const BLOCK_PROCESSING_CACHE_LOCK_TIMEOUT: Duration = Duration::from_secs(1);
/// The time-out before failure during an operation to take a read/write RwLock on the
/// attestation cache.
const ATTESTATION_CACHE_LOCK_TIMEOUT: Duration = Duration::from_secs(1);
pub const ATTESTATION_CACHE_LOCK_TIMEOUT: Duration = Duration::from_secs(1);
/// The time-out before failure during an operation to take a read/write RwLock on the
/// validator pubkey cache.
@@ -67,23 +71,6 @@ pub const OP_POOL_DB_KEY: [u8; 32] = [0; 32];
pub const ETH1_CACHE_DB_KEY: [u8; 32] = [0; 32];
pub const FORK_CHOICE_DB_KEY: [u8; 32] = [0; 32];
#[derive(Debug, PartialEq)]
pub enum AttestationType {
/// An attestation with a single-signature that has been published in accordance with the naive
/// aggregation strategy.
///
/// These attestations may have come from a `committee_index{subnet_id}_beacon_attestation`
/// gossip subnet or they may have come directly from a validator attached to our API.
///
/// If `should_store == true`, the attestation will be added to the `NaiveAggregationPool`.
Unaggregated { should_store: bool },
/// An attestation with one or more signatures that has passed through the aggregation phase
/// of the naive aggregation scheme.
///
/// These attestations must have come from the `beacon_aggregate_and_proof` gossip subnet.
Aggregated,
}
/// The result of a chain segment processing.
#[derive(Debug)]
pub enum ChainSegmentResult {
@@ -190,6 +177,15 @@ pub struct BeaconChain<T: BeaconChainTypes> {
/// This pool accepts `Attestation` objects that only have one aggregation bit set and provides
/// a method to get an aggregated `Attestation` for some `AttestationData`.
pub naive_aggregation_pool: NaiveAggregationPool<T::EthSpec>,
/// Contains a store of attestations which have been observed by the beacon chain.
pub observed_attestations: ObservedAttestations<T::EthSpec>,
/// Maintains a record of which validators have been seen to attest in recent epochs.
pub observed_attesters: ObservedAttesters<T::EthSpec>,
/// Maintains a record of which validators have been seen to create `SignedAggregateAndProofs`
/// in recent epochs.
pub observed_aggregators: ObservedAggregators<T::EthSpec>,
/// Maintains a record of which validators have proposed blocks for each slot.
pub observed_block_producers: ObservedBlockProducers<T::EthSpec>,
/// Provides information from the Ethereum 1 (PoW) chain.
pub eth1_chain: Option<Eth1Chain<T::Eth1Chain, T::EthSpec, T::Store>>,
/// Stores a "snapshot" of the chain at the time the head-of-the-chain block was received.
@@ -742,10 +738,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
self.naive_aggregation_pool.get(data).map_err(Into::into)
}
/// Produce a raw unsigned `Attestation` that is valid for the given `slot` and `index`.
/// Produce an unaggregated `Attestation` that is valid for the given `slot` and `index`.
///
/// The produced `Attestation` will not be valid until it has been signed by exactly one
/// validator that is in the committee for `slot` and `index` in the canonical chain.
///
/// Always attests to the canonical chain.
pub fn produce_attestation(
pub fn produce_unaggregated_attestation(
&self,
slot: Slot,
index: CommitteeIndex,
@@ -758,7 +757,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
.ok_or_else(|| Error::CanonicalHeadLockTimeout)?;
if slot >= head.beacon_block.slot() {
self.produce_attestation_for_block(
self.produce_unaggregated_attestation_for_block(
slot,
index,
head.beacon_block_root,
@@ -790,15 +789,24 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
state.build_committee_cache(RelativeEpoch::Current, &self.spec)?;
self.produce_attestation_for_block(slot, index, beacon_block_root, Cow::Owned(state))
self.produce_unaggregated_attestation_for_block(
slot,
index,
beacon_block_root,
Cow::Owned(state),
)
}
}
/// Produce an `AttestationData` that attests to the chain denoted by `block_root` and `state`.
/// Produces an "unaggregated" attestation for the given `slot` and `index` that attests to
/// `beacon_block_root`. The provided `state` should match the `block.state_root` for the
/// `block` identified by `beacon_block_root`.
///
/// Permits attesting to any arbitrary chain. Generally, the `produce_attestation_data`
/// function should be used as it attests to the canonical chain.
pub fn produce_attestation_for_block(
/// The attestation doesn't _really_ have anything about it that makes it unaggregated per se,
/// however this function is only required in the context of forming an unaggregated
/// attestation. It would be an (undetectable) violation of the protocol to create a
/// `SignedAggregateAndProof` based upon the output of this function.
pub fn produce_unaggregated_attestation_for_block(
&self,
slot: Slot,
index: CommitteeIndex,
@@ -846,328 +854,62 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
})
}
/// Accept a new, potentially invalid attestation from the network.
/// Accepts some `Attestation` from the network and attempts to verify it, returning `Ok(_)` if
/// it is valid to be (re)broadcast on the gossip network.
///
/// If valid, the attestation is added to `self.op_pool` and `self.fork_choice`.
///
/// Returns an `Ok(AttestationProcessingOutcome)` if the chain was able to make a determination
/// about the `attestation` (whether it was invalid or not). Returns an `Err` if there was an
/// error during this process and no determination was able to be made.
///
/// ## Notes
///
/// - Whilst the `attestation` is added to fork choice, the head is not updated. That must be
/// done separately.
///
/// The `store_raw` parameter determines if this attestation is to be stored in the operation
/// pool. `None` indicates the attestation is not stored in the operation pool (we don't have a
/// validator required to aggregate these attestations). `Some(true)` indicates we are storing a
/// raw un-aggregated attestation from a subnet into the `op_pool` which is short-lived and `Some(false)`
/// indicates that we are storing an aggregate attestation in the `op_pool`.
pub fn process_attestation(
/// The attestation must be "unaggregated", that is it must have exactly one
/// aggregation bit set.
pub fn verify_unaggregated_attestation_for_gossip(
&self,
attestation: Attestation<T::EthSpec>,
attestation_type: AttestationType,
) -> Result<AttestationProcessingOutcome, Error> {
metrics::inc_counter(&metrics::ATTESTATION_PROCESSING_REQUESTS);
let timer = metrics::start_timer(&metrics::ATTESTATION_PROCESSING_TIMES);
let outcome = self.process_attestation_internal(attestation.clone(), attestation_type);
match &outcome {
Ok(outcome) => match outcome {
AttestationProcessingOutcome::Processed => {
metrics::inc_counter(&metrics::ATTESTATION_PROCESSING_SUCCESSES);
trace!(
self.log,
"Beacon attestation imported";
"target_epoch" => attestation.data.target.epoch,
"index" => attestation.data.index,
);
let _ = self
.event_handler
.register(EventKind::BeaconAttestationImported {
attestation: Box::new(attestation),
});
}
other => {
trace!(
self.log,
"Beacon attestation rejected";
"reason" => format!("{:?}", other),
);
let _ = self
.event_handler
.register(EventKind::BeaconAttestationRejected {
reason: format!("Invalid attestation: {:?}", other),
attestation: Box::new(attestation),
});
}
},
Err(e) => {
error!(
self.log,
"Beacon attestation processing error";
"error" => format!("{:?}", e),
);
let _ = self
.event_handler
.register(EventKind::BeaconAttestationRejected {
reason: format!("Internal error: {:?}", e),
attestation: Box::new(attestation),
});
}
) -> Result<VerifiedUnaggregatedAttestation<T>, AttestationError> {
VerifiedUnaggregatedAttestation::verify(attestation, self)
}
metrics::stop_timer(timer);
outcome
}
pub fn process_attestation_internal(
/// Accepts some `SignedAggregateAndProof` from the network and attempts to verify it,
/// returning `Ok(_)` if it is valid to be (re)broadcast on the gossip network.
pub fn verify_aggregated_attestation_for_gossip(
&self,
attestation: Attestation<T::EthSpec>,
attestation_type: AttestationType,
) -> Result<AttestationProcessingOutcome, Error> {
let initial_validation_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_INITIAL_VALIDATION_TIMES);
// There is no point in processing an attestation with an empty bitfield. Reject
// it immediately.
if attestation.aggregation_bits.num_set_bits() == 0 {
return Ok(AttestationProcessingOutcome::EmptyAggregationBitfield);
signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
) -> Result<VerifiedAggregatedAttestation<T>, AttestationError> {
VerifiedAggregatedAttestation::verify(signed_aggregate, self)
}
let attestation_epoch = attestation.data.slot.epoch(T::EthSpec::slots_per_epoch());
let epoch_now = self.epoch()?;
let target = attestation.data.target.clone();
// Attestation must be from the current or previous epoch.
if attestation_epoch > epoch_now {
return Ok(AttestationProcessingOutcome::FutureEpoch {
attestation_epoch,
current_epoch: epoch_now,
});
} else if attestation_epoch + 1 < epoch_now {
return Ok(AttestationProcessingOutcome::PastEpoch {
attestation_epoch,
current_epoch: epoch_now,
});
/// Accepts some attestation-type object and attempts to verify it in the context of fork
/// choice. If it is valid it is applied to `self.fork_choice`.
///
/// Common items that implement `IntoForkChoiceVerifiedAttestation`:
///
/// - `VerifiedUnaggregatedAttestation`
/// - `VerifiedAggregatedAttestation`
/// - `ForkChoiceVerifiedAttestation`
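///
/// Illustrative usage (a sketch, not new API):
///
/// ```ignore
/// let verified = chain.verify_aggregated_attestation_for_gossip(signed_aggregate)?;
/// let _fork_choice_verified = chain.apply_attestation_to_fork_choice(&verified)?;
/// ```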
pub fn apply_attestation_to_fork_choice<'a>(
&self,
unverified_attestation: &'a impl IntoForkChoiceVerifiedAttestation<'a, T>,
) -> Result<ForkChoiceVerifiedAttestation<'a, T>, AttestationError> {
let verified = unverified_attestation.into_fork_choice_verified_attestation(self)?;
let indexed_attestation = verified.indexed_attestation();
self.fork_choice
.process_indexed_attestation(indexed_attestation)
.map_err(|e| Error::from(e))?;
Ok(verified)
}
if target.epoch != attestation.data.slot.epoch(T::EthSpec::slots_per_epoch()) {
return Ok(AttestationProcessingOutcome::BadTargetEpoch);
}
/// Accepts a `VerifiedUnaggregatedAttestation` and attempts to apply it to the "naive
/// aggregation pool".
///
/// The naive aggregation pool is used by local validators to produce
/// `SignedAggregateAndProof`.
///
/// If the attestation is too old (low slot) to be included in the pool it is simply dropped
/// and no error is returned.
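///
/// Local validators typically reach this via `VerifiedUnaggregatedAttestation::add_to_pool`,
/// which forwards here; as a sketch:
///
/// ```ignore
/// let verified = chain.verify_unaggregated_attestation_for_gossip(attestation)?;
/// let verified = chain.add_to_naive_aggregation_pool(verified)?;
/// ```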
pub fn add_to_naive_aggregation_pool(
&self,
unaggregated_attestation: VerifiedUnaggregatedAttestation<T>,
) -> Result<VerifiedUnaggregatedAttestation<T>, AttestationError> {
let attestation = unaggregated_attestation.attestation();
// Attestation target must be for a known block.
//
// We use fork choice to find the target root, which means that we reject any attestation
// that has a `target.root` earlier than our latest finalized root. There's no point in
// processing an attestation that does not include our latest finalized block in its chain.
//
// We do not delay consideration for later, we simply drop the attestation.
let (target_block_slot, target_block_state_root) = if let Some((slot, state_root)) =
self.fork_choice.block_slot_and_state_root(&target.root)
{
(slot, state_root)
} else {
return Ok(AttestationProcessingOutcome::UnknownTargetRoot(target.root));
};
// Load the slot and state root for `attestation.data.beacon_block_root`.
//
// This indirectly checks to see if the `attestation.data.beacon_block_root` is in our fork
// choice. Any known, non-finalized block should be in fork choice, so this check
// immediately filters out attestations that attest to a block that has not been processed.
//
// Attestations must be for a known block. If the block is unknown, we simply drop the
// attestation and do not delay consideration for later.
let block_slot = if let Some((slot, _state_root)) = self
.fork_choice
.block_slot_and_state_root(&attestation.data.beacon_block_root)
{
slot
} else {
return Ok(AttestationProcessingOutcome::UnknownHeadBlock {
beacon_block_root: attestation.data.beacon_block_root,
});
};
// TODO: currently we do not check the FFG source/target. This is what the spec dictates
// but it seems wrong.
//
// I have opened an issue on the specs repo for this:
//
// https://github.com/ethereum/eth2.0-specs/issues/1636
//
// We should revisit this code once that issue has been resolved.
// Attestations must not be for blocks in the future. If this is the case, the attestation
// should not be considered.
if block_slot > attestation.data.slot {
return Ok(AttestationProcessingOutcome::AttestsToFutureBlock {
block: block_slot,
attestation: attestation.data.slot,
});
}
metrics::stop_timer(initial_validation_timer);
let cache_wait_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_SHUFFLING_CACHE_WAIT_TIMES);
let mut shuffling_cache = self
.shuffling_cache
.try_write_for(ATTESTATION_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| Error::AttestationCacheLockTimeout)?;
metrics::stop_timer(cache_wait_timer);
let indexed_attestation =
if let Some(committee_cache) = shuffling_cache.get(attestation_epoch, target.root) {
if let Some(committee) = committee_cache
.get_beacon_committee(attestation.data.slot, attestation.data.index)
{
let indexed_attestation =
get_indexed_attestation(committee.committee, &attestation)?;
// Drop the shuffling cache to avoid holding the lock for any longer than
// required.
drop(shuffling_cache);
indexed_attestation
} else {
return Ok(AttestationProcessingOutcome::NoCommitteeForSlotAndIndex {
slot: attestation.data.slot,
index: attestation.data.index,
});
}
} else {
// Drop the shuffling cache to avoid holding the lock for any longer than
// required.
drop(shuffling_cache);
debug!(
self.log,
"Attestation processing cache miss";
"attn_epoch" => attestation_epoch.as_u64(),
"head_block_epoch" => block_slot.epoch(T::EthSpec::slots_per_epoch()).as_u64(),
);
let state_read_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_STATE_READ_TIMES);
let mut state = self
.get_state(&target_block_state_root, Some(target_block_slot))?
.ok_or_else(|| Error::MissingBeaconState(target_block_state_root))?;
metrics::stop_timer(state_read_timer);
let state_skip_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_STATE_SKIP_TIMES);
while state.current_epoch() + 1 < attestation_epoch {
// Here we tell `per_slot_processing` to skip hashing the state and just
// use the zero hash instead.
//
// The state roots are not useful for the shuffling, so there's no need to
// compute them.
per_slot_processing(&mut state, Some(Hash256::zero()), &self.spec)?
}
metrics::stop_timer(state_skip_timer);
let committee_building_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_COMMITTEE_BUILDING_TIMES);
let relative_epoch =
RelativeEpoch::from_epoch(state.current_epoch(), attestation_epoch)
.map_err(Error::IncorrectStateForAttestation)?;
state.build_committee_cache(relative_epoch, &self.spec)?;
let committee_cache = state.committee_cache(relative_epoch)?;
self.shuffling_cache
.try_write_for(ATTESTATION_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| Error::AttestationCacheLockTimeout)?
.insert(attestation_epoch, target.root, committee_cache);
metrics::stop_timer(committee_building_timer);
if let Some(committee) = committee_cache
.get_beacon_committee(attestation.data.slot, attestation.data.index)
{
get_indexed_attestation(committee.committee, &attestation)?
} else {
return Ok(AttestationProcessingOutcome::NoCommitteeForSlotAndIndex {
slot: attestation.data.slot,
index: attestation.data.index,
});
}
};
let signature_setup_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_SIGNATURE_SETUP_TIMES);
let pubkey_cache = self
.validator_pubkey_cache
.try_read_for(VALIDATOR_PUBKEY_CACHE_LOCK_TIMEOUT)
.ok_or_else(|| Error::ValidatorPubkeyCacheLockTimeout)?;
let (fork, genesis_validators_root) = self
.canonical_head
.try_read_for(HEAD_LOCK_TIMEOUT)
.ok_or_else(|| Error::CanonicalHeadLockTimeout)
.map(|head| {
(
head.beacon_state.fork.clone(),
head.beacon_state.genesis_validators_root,
)
})?;
let signature_set = indexed_attestation_signature_set_from_pubkeys(
|validator_index| pubkey_cache.get(validator_index).map(Cow::Borrowed),
&attestation.signature,
&indexed_attestation,
&fork,
genesis_validators_root,
&self.spec,
)
.map_err(Error::SignatureSetError)?;
metrics::stop_timer(signature_setup_timer);
let signature_verification_timer =
metrics::start_timer(&metrics::ATTESTATION_PROCESSING_SIGNATURE_TIMES);
let signature_is_valid = signature_set.is_valid();
metrics::stop_timer(signature_verification_timer);
drop(pubkey_cache);
if signature_is_valid {
// Provide the attestation to fork choice, updating the validator latest messages but
// _without_ finding and updating the head.
if let Err(e) = self
.fork_choice
.process_indexed_attestation(&indexed_attestation)
{
error!(
self.log,
"Add attestation to fork choice failed";
"beacon_block_root" => format!("{}", attestation.data.beacon_block_root),
"error" => format!("{:?}", e)
);
return Err(e.into());
}
// Provide the valid attestation to op pool, which may choose to retain the
// attestation for inclusion in a future block. If we receive an attestation from a
// subnet without a validator responsible for aggregating it, we don't store it in the
// op pool.
if self.eth1_chain.is_some() {
match attestation_type {
AttestationType::Unaggregated { should_store } if should_store => {
match self.naive_aggregation_pool.insert(&attestation) {
match self.naive_aggregation_pool.insert(attestation) {
Ok(outcome) => trace!(
self.log,
"Stored unaggregated attestation";
@@ -1186,53 +928,51 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
"slot" => slot.as_u64(),
);
}
Err(e) => error!(
Err(e) => {
error!(
self.log,
"Failed to store unaggregated attestation";
"error" => format!("{:?}", e),
"index" => attestation.data.index,
"slot" => attestation.data.slot.as_u64(),
),
}
}
AttestationType::Unaggregated { .. } => trace!(
self.log,
"Did not store unaggregated attestation";
"index" => attestation.data.index,
"slot" => attestation.data.slot.as_u64(),
),
AttestationType::Aggregated => {
let index = attestation.data.index;
let slot = attestation.data.slot;
match self.op_pool.insert_attestation(
attestation,
&fork,
genesis_validators_root,
&self.spec,
) {
Ok(_) => {}
Err(e) => {
error!(
self.log,
"Failed to add attestation to op pool";
"error" => format!("{:?}", e),
"index" => index,
"slot" => slot.as_u64(),
);
return Err(Error::from(e).into());
}
}
}
}
};
Ok(unaggregated_attestation)
}
// Update the metrics.
metrics::inc_counter(&metrics::ATTESTATION_PROCESSING_SUCCESSES);
/// Accepts a `VerifiedAggregatedAttestation` and attempts to apply it to `self.op_pool`.
///
/// The op pool is used by local block producers to pack blocks with operations.
pub fn add_to_block_inclusion_pool(
&self,
signed_aggregate: VerifiedAggregatedAttestation<T>,
) -> Result<VerifiedAggregatedAttestation<T>, AttestationError> {
// If there's no eth1 chain then it's impossible to produce blocks and therefore
// useless to put things in the op pool.
if self.eth1_chain.is_some() {
let fork = self
.canonical_head
.try_read_for(HEAD_LOCK_TIMEOUT)
.ok_or_else(|| Error::CanonicalHeadLockTimeout)?
.beacon_state
.fork
.clone();
Ok(AttestationProcessingOutcome::Processed)
} else {
Ok(AttestationProcessingOutcome::InvalidSignature)
self.op_pool
.insert_attestation(
// TODO: address this clone.
signed_aggregate.attestation().clone(),
&fork,
self.genesis_validators_root,
&self.spec,
)
.map_err(Error::from)?;
}
Ok(signed_aggregate)
}
/// Check that the shuffling at `block_root` is equal to one of the shufflings of `state`.
@@ -1671,6 +1411,24 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
let parent_block = fully_verified_block.parent_block;
let intermediate_states = fully_verified_block.intermediate_states;
let attestation_observation_timer =
metrics::start_timer(&metrics::BLOCK_PROCESSING_ATTESTATION_OBSERVATION);
// Iterate through the attestations in the block and register them as an "observed
// attestation". This will stop us from propagating them on the gossip network.
for a in &block.body.attestations {
match self.observed_attestations.observe_attestation(a, None) {
// If the observation was successful or if the slot for the attestation was too
// low, continue.
//
// We ignore `SlotTooLow` since this will be very common whilst syncing.
Ok(_) | Err(AttestationObservationError::SlotTooLow { .. }) => {}
Err(e) => return Err(BlockError::BeaconChainError(e.into())),
}
}
metrics::stop_timer(attestation_observation_timer);
let fork_choice_register_timer =
metrics::start_timer(&metrics::BLOCK_PROCESSING_FORK_CHOICE_REGISTER);
@@ -2104,6 +1862,9 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
} else {
self.fork_choice.prune()?;
self.observed_block_producers
.prune(new_finalized_epoch.start_slot(T::EthSpec::slots_per_epoch()));
self.snapshot_cache
.try_write_for(BLOCK_PROCESSING_CACHE_LOCK_TIMEOUT)
.map(|mut snapshot_cache| {

View File

@ -104,10 +104,18 @@ pub enum BlockError {
},
/// Block is already known, no need to re-import.
BlockIsAlreadyKnown,
/// A block for this proposer and slot has already been observed.
RepeatProposal { proposer: u64, slot: Slot },
/// The block slot exceeds the MAXIMUM_BLOCK_SLOT_NUMBER.
BlockSlotLimitReached,
/// The `BeaconBlock` has a `proposer_index` that does not match the index we computed locally.
///
/// The block is invalid.
IncorrectBlockProposer { block: u64, local_shuffling: u64 },
/// The proposal signature is invalid.
ProposalSignatureInvalid,
/// The `block.proposer_index` is not known.
UnknownValidator(u64),
/// A signature in the block is invalid (exactly which is unknown).
InvalidSignature,
/// The provided block is from an earlier slot than its parent.
@ -126,7 +134,18 @@ pub enum BlockError {
impl From<BlockSignatureVerifierError> for BlockError {
fn from(e: BlockSignatureVerifierError) -> Self {
BlockError::BeaconChainError(BeaconChainError::BlockSignatureVerifierError(e))
match e {
// Make a special distinction for `IncorrectBlockProposer` since it indicates an
// invalid block, not an internal error.
BlockSignatureVerifierError::IncorrectBlockProposer {
block,
local_shuffling,
} => BlockError::IncorrectBlockProposer {
block,
local_shuffling,
},
e => BlockError::BeaconChainError(BeaconChainError::BlockSignatureVerifierError(e)),
}
}
}
@ -286,7 +305,17 @@ impl<T: BeaconChainTypes> GossipVerifiedBlock<T> {
// Do not gossip a block from a finalized slot.
check_block_against_finalized_slot(&block.message, chain)?;
// TODO: add check for the `(block.proposer_index, block.slot)` tuple once we have v0.11.0
// Check that we have not already received a block with a valid signature for this slot.
if chain
.observed_block_producers
.proposer_has_been_observed(&block.message)
.map_err(|e| BlockError::BeaconChainError(e.into()))?
{
return Err(BlockError::RepeatProposal {
proposer: block.message.proposer_index,
slot: block.message.slot,
});
}
let mut parent = load_parent(&block.message, chain)?;
let block_root = get_block_root(&block);
@ -297,21 +326,54 @@ impl<T: BeaconChainTypes> GossipVerifiedBlock<T> {
&chain.spec,
)?;
let signature_is_valid = {
let pubkey_cache = get_validator_pubkey_cache(chain)?;
let pubkey = pubkey_cache
.get(block.message.proposer_index as usize)
.ok_or_else(|| BlockError::UnknownValidator(block.message.proposer_index))?;
block.verify_signature(
Some(block_root),
pubkey,
&state.fork,
chain.genesis_validators_root,
&chain.spec,
)
};
let mut signature_verifier = get_signature_verifier(&state, &pubkey_cache, &chain.spec);
if !signature_is_valid {
return Err(BlockError::ProposalSignatureInvalid);
}
signature_verifier.include_block_proposal(&block, Some(block_root))?;
// Now the signature is valid, store the proposal so we don't accept another from this
// validator and slot.
//
// It's important to double-check that the proposer still hasn't been observed so we don't
// have a race-condition when verifying two blocks simultaneously.
if chain
.observed_block_producers
.observe_proposer(&block.message)
.map_err(|e| BlockError::BeaconChainError(e.into()))?
{
return Err(BlockError::RepeatProposal {
proposer: block.message.proposer_index,
slot: block.message.slot,
});
}
let expected_proposer =
state.get_beacon_proposer_index(block.message.slot, &chain.spec)? as u64;
if block.message.proposer_index != expected_proposer {
return Err(BlockError::IncorrectBlockProposer {
block: block.message.proposer_index,
local_shuffling: expected_proposer,
});
}
if signature_verifier.verify().is_ok() {
Ok(Self {
block,
block_root,
parent,
})
} else {
Err(BlockError::ProposalSignatureInvalid)
}
}
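
The check-then-verify-then-observe sequence above distils to the following standalone sketch (hypothetical types; `sig_ok` stands in for the real signature check):

use std::collections::HashSet;
use std::sync::Mutex;

// Keyed by (proposer_index, slot).
struct ObservedProposers(Mutex<HashSet<(u64, u64)>>);

impl ObservedProposers {
    // Cheap pre-check so repeats are rejected before doing any crypto.
    fn has_been_observed(&self, key: (u64, u64)) -> bool {
        self.0.lock().unwrap().contains(&key)
    }
    // Insert-and-report: returns `true` if the key was already present.
    fn observe(&self, key: (u64, u64)) -> bool {
        !self.0.lock().unwrap().insert(key)
    }
}

fn verify(observed: &ObservedProposers, key: (u64, u64), sig_ok: bool) -> Result<(), &'static str> {
    if observed.has_been_observed(key) {
        return Err("repeat proposal"); // rejected before signature verification
    }
    if !sig_ok {
        return Err("invalid signature");
    }
    // Re-check whilst inserting: another thread may have verified the same
    // (proposer, slot) between the pre-check and this point.
    if observed.observe(key) {
        return Err("repeat proposal");
    }
    Ok(())
}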
pub fn block_root(&self) -> Hash256 {

View File

@ -14,6 +14,8 @@ pub enum BlockProcessingOutcome {
InvalidSignature,
/// The proposal signature is invalid.
ProposalSignatureInvalid,
/// The `block.proposer_index` is not known.
UnknownValidator(u64),
/// The parent block was unknown.
ParentUnknown(Hash256),
/// The block slot is greater than the present slot.
@ -35,6 +37,11 @@ pub enum BlockProcessingOutcome {
},
/// Block is already known, no need to re-import.
BlockIsAlreadyKnown,
/// A block for this proposer and slot has already been observed.
RepeatProposal {
proposer: u64,
slot: Slot,
},
/// The block slot exceeds the MAXIMUM_BLOCK_SLOT_NUMBER.
BlockSlotLimitReached,
/// The provided block is from an earlier slot than its parent.
@ -42,6 +49,13 @@ pub enum BlockProcessingOutcome {
block_slot: Slot,
state_slot: Slot,
},
/// The `BeaconBlock` has a `proposer_index` that does not match the index we computed locally.
///
/// The block is invalid.
IncorrectBlockProposer {
block: u64,
local_shuffling: u64,
},
/// At least one block in the chain segment did not have its parent root set to the root of
/// the prior block.
NonLinearParentRoots,
@ -78,12 +92,16 @@ impl BlockProcessingOutcome {
finalized_slot,
}),
Err(BlockError::BlockIsAlreadyKnown) => Ok(BlockProcessingOutcome::BlockIsAlreadyKnown),
Err(BlockError::RepeatProposal { proposer, slot }) => {
Ok(BlockProcessingOutcome::RepeatProposal { proposer, slot })
}
Err(BlockError::BlockSlotLimitReached) => {
Ok(BlockProcessingOutcome::BlockSlotLimitReached)
}
Err(BlockError::ProposalSignatureInvalid) => {
Ok(BlockProcessingOutcome::ProposalSignatureInvalid)
}
Err(BlockError::UnknownValidator(i)) => Ok(BlockProcessingOutcome::UnknownValidator(i)),
Err(BlockError::InvalidSignature) => Ok(BlockProcessingOutcome::InvalidSignature),
Err(BlockError::BlockIsNotLaterThanParent {
block_slot,
@ -92,6 +110,13 @@ impl BlockProcessingOutcome {
block_slot,
state_slot,
}),
Err(BlockError::IncorrectBlockProposer {
block,
local_shuffling,
}) => Ok(BlockProcessingOutcome::IncorrectBlockProposer {
block,
local_shuffling,
}),
Err(BlockError::NonLinearParentRoots) => {
Ok(BlockProcessingOutcome::NonLinearParentRoots)
}

View File

@ -431,6 +431,14 @@ where
.ok_or_else(|| "Cannot build without op pool".to_string())?,
// TODO: allow for persisting and loading the pool from disk.
naive_aggregation_pool: <_>::default(),
// TODO: allow for persisting and loading the pool from disk.
observed_attestations: <_>::default(),
// TODO: allow for persisting and loading the pool from disk.
observed_attesters: <_>::default(),
// TODO: allow for persisting and loading the pool from disk.
observed_aggregators: <_>::default(),
// TODO: allow for persisting and loading the pool from disk.
observed_block_producers: <_>::default(),
eth1_chain: self.eth1_chain,
genesis_validators_root: canonical_head.beacon_state.genesis_validators_root,
canonical_head: TimeoutRwLock::new(canonical_head.clone()),

View File

@ -1,7 +1,11 @@
use crate::eth1_chain::Error as Eth1ChainError;
use crate::fork_choice::Error as ForkChoiceError;
use crate::naive_aggregation_pool::Error as NaiveAggregationError;
use crate::observed_attestations::Error as ObservedAttestationsError;
use crate::observed_attesters::Error as ObservedAttestersError;
use crate::observed_block_producers::Error as ObservedBlockProducersError;
use operation_pool::OpPoolError;
use safe_arith::ArithError;
use ssz::DecodeError;
use ssz_types::Error as SszTypesError;
use state_processing::{
@ -66,6 +70,10 @@ pub enum BeaconChainError {
ValidatorPubkeyCacheFileError(String),
OpPoolError(OpPoolError),
NaiveAggregationError(NaiveAggregationError),
ObservedAttestationsError(ObservedAttestationsError),
ObservedAttestersError(ObservedAttestersError),
ObservedBlockProducersError(ObservedBlockProducersError),
ArithError(ArithError),
}
easy_from_to!(SlotProcessingError, BeaconChainError);
@ -73,7 +81,11 @@ easy_from_to!(AttestationValidationError, BeaconChainError);
easy_from_to!(SszTypesError, BeaconChainError);
easy_from_to!(OpPoolError, BeaconChainError);
easy_from_to!(NaiveAggregationError, BeaconChainError);
easy_from_to!(ObservedAttestationsError, BeaconChainError);
easy_from_to!(ObservedAttestersError, BeaconChainError);
easy_from_to!(ObservedBlockProducersError, BeaconChainError);
easy_from_to!(BlockSignatureVerifierError, BeaconChainError);
easy_from_to!(ArithError, BeaconChainError);
#[derive(Debug, PartialEq)]
pub enum BlockProductionError {

View File

@ -2,6 +2,7 @@
#[macro_use]
extern crate lazy_static;
pub mod attestation_verification;
mod beacon_chain;
mod beacon_snapshot;
mod block_verification;
@ -14,6 +15,9 @@ mod head_tracker;
mod metrics;
pub mod migrate;
mod naive_aggregation_pool;
mod observed_attestations;
mod observed_attesters;
mod observed_block_producers;
mod persisted_beacon_chain;
mod shuffling_cache;
mod snapshot_cache;
@ -22,11 +26,12 @@ mod timeout_rw_lock;
mod validator_pubkey_cache;
pub use self::beacon_chain::{
AttestationProcessingOutcome, AttestationType, BeaconChain, BeaconChainTypes,
ChainSegmentResult, StateSkipConfig,
AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, ChainSegmentResult,
StateSkipConfig,
};
pub use self::beacon_snapshot::BeaconSnapshot;
pub use self::errors::{BeaconChainError, BlockProductionError};
pub use attestation_verification::Error as AttestationError;
pub use block_verification::{BlockError, BlockProcessingOutcome, GossipVerifiedBlock};
pub use eth1_chain::{Eth1Chain, Eth1ChainBackend};
pub use events::EventHandler;

View File

@ -52,6 +52,10 @@ lazy_static! {
"beacon_block_processing_fork_choice_register_seconds",
"Time spent registering the new block with fork choice (but not finding head)"
);
pub static ref BLOCK_PROCESSING_ATTESTATION_OBSERVATION: Result<Histogram> = try_create_histogram(
"beacon_block_processing_attestation_observation_seconds",
"Time spent hashing and remembering all the attestations in the block"
);
/*
* Block Production

View File

@ -46,8 +46,6 @@ pub enum Error {
/// stored. This indicates a fairly serious error somewhere in the code that called this
/// function.
InconsistentBitfieldLengths,
/// The function to obtain a map index failed, this is an internal error.
InvalidMapIndex(usize),
/// The given `attestation` was for the incorrect slot. This is an internal error.
IncorrectSlot { expected: Slot, attestation: Slot },
}
@ -56,30 +54,20 @@ pub enum Error {
/// `attestation` are from the same slot.
struct AggregatedAttestationMap<E: EthSpec> {
map: HashMap<AttestationData, Attestation<E>>,
slot: Slot,
}
impl<E: EthSpec> AggregatedAttestationMap<E> {
/// Create an empty collection that will only contain attestation for the given `slot`.
pub fn new(slot: Slot) -> Self {
/// Create an empty collection with the given `initial_capacity`.
pub fn new(initial_capacity: usize) -> Self {
Self {
slot,
map: <_>::default(),
map: HashMap::with_capacity(initial_capacity),
}
}
/// Insert an attestation into `self`, aggregating it into the pool.
///
/// The given attestation (`a`) must only have one signature and be from the slot that `self`
/// was initialized with.
/// The given attestation (`a`) must only have one signature.
pub fn insert(&mut self, a: &Attestation<E>) -> Result<InsertOutcome, Error> {
if a.data.slot != self.slot {
return Err(Error::IncorrectSlot {
expected: self.slot,
attestation: a.data.slot,
});
}
let set_bits = a
.aggregation_bits
.iter()
@ -124,14 +112,11 @@ impl<E: EthSpec> AggregatedAttestationMap<E> {
///
/// The given `a.data.slot` must match the slot that `self` was initialized with.
pub fn get(&self, data: &AttestationData) -> Result<Option<Attestation<E>>, Error> {
if data.slot != self.slot {
return Err(Error::IncorrectSlot {
expected: self.slot,
attestation: data.slot,
});
Ok(self.map.get(data).cloned())
}
Ok(self.map.get(data).cloned())
pub fn len(&self) -> usize {
self.map.len()
}
}
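
For intuition, a toy version of the aggregation map (hypothetical simplified types): attestations that share identical data are merged by OR-ing their participation bits; the real pool additionally aggregates the BLS signatures.

use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct Data {
    slot: u64,
    index: u64,
}

#[derive(Clone)]
struct Att {
    data: Data,
    bits: Vec<bool>, // one bit per committee member
}

fn insert(map: &mut HashMap<Data, Att>, a: &Att) {
    map.entry(a.data.clone())
        .and_modify(|existing| {
            // Merge participation; the real map also aggregates signatures.
            for (e, b) in existing.bits.iter_mut().zip(&a.bits) {
                *e |= *b;
            }
        })
        .or_insert_with(|| a.clone());
}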
@ -158,14 +143,14 @@ impl<E: EthSpec> AggregatedAttestationMap<E> {
/// receives and it can be triggered manually.
pub struct NaiveAggregationPool<E: EthSpec> {
lowest_permissible_slot: RwLock<Slot>,
maps: RwLock<Vec<AggregatedAttestationMap<E>>>,
maps: RwLock<HashMap<Slot, AggregatedAttestationMap<E>>>,
}
impl<E: EthSpec> Default for NaiveAggregationPool<E> {
fn default() -> Self {
Self {
lowest_permissible_slot: RwLock::new(Slot::new(0)),
maps: RwLock::new(vec![]),
maps: RwLock::new(HashMap::new()),
}
}
}
@ -179,28 +164,46 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
/// The pool may be pruned if the given `attestation.data` has a slot higher than any
/// previously seen.
pub fn insert(&self, attestation: &Attestation<E>) -> Result<InsertOutcome, Error> {
let slot = attestation.data.slot;
let lowest_permissible_slot = *self.lowest_permissible_slot.read();
// Reject any attestations that are too old.
if attestation.data.slot < lowest_permissible_slot {
if slot < lowest_permissible_slot {
return Err(Error::SlotTooLow {
slot: attestation.data.slot,
slot,
lowest_permissible_slot,
});
}
// Prune the pool if this attestation indicates that the current slot has advanced.
if (lowest_permissible_slot + SLOTS_RETAINED as u64) < attestation.data.slot + 1 {
self.prune(attestation.data.slot)
}
let mut maps = self.maps.write();
let index = self.get_map_index(attestation.data.slot);
let outcome = if let Some(map) = maps.get_mut(&slot) {
map.insert(attestation)
} else {
// To avoid re-allocations, try and determine a rough initial capacity for the new item
// by obtaining the mean size of all items in earlier slots.
let (count, sum) = maps
.iter()
// Only include slots that are less than the given slot in the average. This should
// generally avoid including recent slots that are still "filling up".
.filter(|(map_slot, _item)| **map_slot < slot)
.map(|(_slot, map)| map.len())
.fold((0, 0), |(count, sum), len| (count + 1, sum + len));
self.maps
.write()
.get_mut(index)
.ok_or_else(|| Error::InvalidMapIndex(index))?
.insert(attestation)
// Use the mainnet default committee size if we can't determine an average.
let initial_capacity = sum.checked_div(count).unwrap_or(128);
let mut item = AggregatedAttestationMap::new(initial_capacity);
let outcome = item.insert(attestation);
maps.insert(slot, item);
outcome
};
drop(maps);
self.prune(slot);
outcome
}
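
The initial-capacity heuristic above, extracted as a standalone sketch (a toy `HashMap<u64, Vec<u8>>` stands in for the real map type): average the sizes of earlier, presumably-full entries, falling back to a default when no earlier entries exist.

use std::collections::HashMap;

fn initial_capacity(maps: &HashMap<u64, Vec<u8>>, slot: u64, default: usize) -> usize {
    let (count, sum) = maps
        .iter()
        .filter(|(s, _)| **s < slot) // earlier slots only; later ones may still be filling
        .map(|(_, m)| m.len())
        .fold((0, 0), |(c, s), len| (c + 1, s + len));
    // `checked_div` returns `None` when `count == 0`.
    sum.checked_div(count).unwrap_or(default)
}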
/// Returns an aggregated `Attestation` with the given `data`, if any.
@ -208,8 +211,8 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
self.maps
.read()
.iter()
.find(|map| map.slot == data.slot)
.map(|map| map.get(data))
.find(|(slot, _map)| **slot == data.slot)
.map(|(_slot, map)| map.get(data))
.unwrap_or_else(|| Ok(None))
}
@ -218,41 +221,26 @@ impl<E: EthSpec> NaiveAggregationPool<E> {
pub fn prune(&self, current_slot: Slot) {
// Taking advantage of saturating subtraction on `Slot`.
let lowest_permissible_slot = current_slot - Slot::from(SLOTS_RETAINED);
self.maps
.write()
.retain(|map| map.slot >= lowest_permissible_slot);
*self.lowest_permissible_slot.write() = lowest_permissible_slot;
}
/// Returns the index of `self.maps` that matches `slot`.
///
/// If there is no existing map for this slot one will be created. If `self.maps.len() >=
/// SLOTS_RETAINED`, the map with the lowest slot will be replaced.
fn get_map_index(&self, slot: Slot) -> usize {
let mut maps = self.maps.write();
if let Some(index) = maps.iter().position(|map| map.slot == slot) {
return index;
// Remove any maps that are definitely expired.
maps.retain(|slot, _map| *slot >= lowest_permissible_slot);
// If we have too many maps, remove the lowest amount to ensure we only have
// `SLOTS_RETAINED` left.
if maps.len() > SLOTS_RETAINED {
let mut slots = maps.iter().map(|(slot, _map)| *slot).collect::<Vec<_>>();
// Sort is generally pretty slow, however `SLOTS_RETAINED` is quite low so it should be
// negligible.
slots.sort_unstable();
slots
.into_iter()
.take(maps.len().saturating_sub(SLOTS_RETAINED))
.for_each(|slot| {
maps.remove(&slot);
})
}
if maps.len() < SLOTS_RETAINED || maps.is_empty() {
let index = maps.len();
maps.push(AggregatedAttestationMap::new(slot));
return index;
}
let index = maps
.iter()
.enumerate()
.min_by_key(|(_i, map)| map.slot)
.map(|(i, _map)| i)
.expect("maps cannot be empty due to previous .is_empty() check");
maps[index] = AggregatedAttestationMap::new(slot);
index
}
}
@ -432,7 +420,7 @@ mod tests {
.maps
.read()
.iter()
.map(|map| map.slot)
.map(|(slot, _map)| *slot)
.collect::<Vec<_>>();
pool_slots.sort_unstable();

View File

@ -0,0 +1,442 @@
//! Provides an `ObservedAttestations` struct which allows us to reject aggregated attestations
//! that we have already seen.
use parking_lot::RwLock;
use std::collections::HashSet;
use std::marker::PhantomData;
use tree_hash::TreeHash;
use types::{Attestation, EthSpec, Hash256, Slot};
/// As a DoS protection measure, the maximum number of distinct `Attestations` that will be
/// recorded for each slot.
///
/// Currently this is set to ~524k. If we say that each entry is 40 bytes (Hash256 (32 bytes) + an
/// 8-byte hash) then this comes to about 20 MB per slot. If we're storing 34 of these slots, then
/// we're at 680 MB. This is a lot of memory usage, but probably not a show-stopper for most
/// reasonable hardware.
///
/// Upstream conditions should strongly restrict the amount of attestations that can show up in
/// this pool. The maximum size with respect to upstream restrictions is more likely on the order
/// of the number of validators.
const MAX_OBSERVATIONS_PER_SLOT: usize = 1 << 19; // 524,288
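
Sanity-checking the arithmetic in the comment above (the 40-byte entry size is the comment's own assumption about `HashSet<Hash256>` overhead):

const MAX_OBSERVATIONS: usize = 1 << 19; // 524,288
const ENTRY_BYTES: usize = 32 + 8; // Hash256 plus an 8-byte cached hash
const SLOTS_STORED: usize = 34; // slots_per_epoch (32) + 2

fn main() {
    let per_slot = MAX_OBSERVATIONS * ENTRY_BYTES;
    // 20,971,520 bytes (~20 MB) per slot; ~680 MB across all 34 slots.
    println!("{} bytes/slot, {} bytes total", per_slot, per_slot * SLOTS_STORED);
}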
#[derive(Debug, PartialEq)]
pub enum ObserveOutcome {
/// This attestation was already known.
AlreadyKnown,
/// This was the first time this attestation was observed.
New,
}
#[derive(Debug, PartialEq)]
pub enum Error {
SlotTooLow {
slot: Slot,
lowest_permissible_slot: Slot,
},
/// The function to obtain a set index failed, this is an internal error.
InvalidSetIndex(usize),
/// We have reached the maximum number of unique `Attestation` that can be observed in a slot.
/// This is a DoS protection function.
ReachedMaxObservationsPerSlot(usize),
IncorrectSlot {
expected: Slot,
attestation: Slot,
},
}
/// A `HashSet` that contains entries related to some `Slot`.
struct SlotHashSet {
set: HashSet<Hash256>,
slot: Slot,
}
impl SlotHashSet {
pub fn new(slot: Slot, initial_capacity: usize) -> Self {
Self {
slot,
set: HashSet::with_capacity(initial_capacity),
}
}
/// Store the attestation in self so future observations recognise its existence.
pub fn observe_attestation<E: EthSpec>(
&mut self,
a: &Attestation<E>,
root: Hash256,
) -> Result<ObserveOutcome, Error> {
if a.data.slot != self.slot {
return Err(Error::IncorrectSlot {
expected: self.slot,
attestation: a.data.slot,
});
}
if self.set.contains(&root) {
Ok(ObserveOutcome::AlreadyKnown)
} else {
// Here we check to see if this slot has reached the maximum observation count.
//
// The resulting behaviour is that we are no longer able to successfully observe new
// attestations, however we will continue to return `is_known` values. We could also
// disable `is_known`, however then we would stop forwarding attestations across the
// gossip network and I think that this is a worse case than sending some invalid ones.
// The underlying libp2p network is responsible for removing duplicate messages, so
// this doesn't risk a broadcast loop.
if self.set.len() >= MAX_OBSERVATIONS_PER_SLOT {
return Err(Error::ReachedMaxObservationsPerSlot(
MAX_OBSERVATIONS_PER_SLOT,
));
}
self.set.insert(root);
Ok(ObserveOutcome::New)
}
}
/// Indicates if `a` has been observed before.
pub fn is_known<E: EthSpec>(&self, a: &Attestation<E>, root: Hash256) -> Result<bool, Error> {
if a.data.slot != self.slot {
return Err(Error::IncorrectSlot {
expected: self.slot,
attestation: a.data.slot,
});
}
Ok(self.set.contains(&root))
}
/// The number of observed attestations in `self`.
pub fn len(&self) -> usize {
self.set.len()
}
}
/// Stores the roots of `Attestation` objects for some number of `Slots`, so we can determine if
/// these have previously been seen on the network.
pub struct ObservedAttestations<E: EthSpec> {
lowest_permissible_slot: RwLock<Slot>,
sets: RwLock<Vec<SlotHashSet>>,
_phantom: PhantomData<E>,
}
impl<E: EthSpec> Default for ObservedAttestations<E> {
fn default() -> Self {
Self {
lowest_permissible_slot: RwLock::new(Slot::new(0)),
sets: RwLock::new(vec![]),
_phantom: PhantomData,
}
}
}
impl<E: EthSpec> ObservedAttestations<E> {
/// Store the root of `a` in `self`.
///
/// `root` must equal `a.tree_hash_root()`.
pub fn observe_attestation(
&self,
a: &Attestation<E>,
root_opt: Option<Hash256>,
) -> Result<ObserveOutcome, Error> {
let index = self.get_set_index(a.data.slot)?;
let root = root_opt.unwrap_or_else(|| a.tree_hash_root());
self.sets
.write()
.get_mut(index)
.ok_or_else(|| Error::InvalidSetIndex(index))
.and_then(|set| set.observe_attestation(a, root))
}
/// Check to see if the `root` of `a` is in self.
///
/// `root` must equal `a.tree_hash_root()`.
pub fn is_known(&self, a: &Attestation<E>, root: Hash256) -> Result<bool, Error> {
let index = self.get_set_index(a.data.slot)?;
self.sets
.read()
.get(index)
.ok_or_else(|| Error::InvalidSetIndex(index))
.and_then(|set| set.is_known(a, root))
}
/// The maximum number of slots that attestations are stored for.
fn max_capacity(&self) -> u64 {
// We add `2` in order to account for one slot either side of the range due to
// `MAXIMUM_GOSSIP_CLOCK_DISPARITY`.
E::slots_per_epoch() + 2
}
/// Removes any attestations with a slot lower than `current_slot - (Self::max_capacity() - 1)`
/// and bars any future attestations with a slot lower than that limit.
pub fn prune(&self, current_slot: Slot) {
// Taking advantage of saturating subtraction on `Slot`.
let lowest_permissible_slot = current_slot - (self.max_capacity() - 1);
self.sets
.write()
.retain(|set| set.slot >= lowest_permissible_slot);
*self.lowest_permissible_slot.write() = lowest_permissible_slot;
}
/// Returns the index of `self.sets` that matches `slot`.
///
/// If there is no existing set for this slot one will be created. If `self.sets.len() >=
/// Self::max_capacity()`, the set with the lowest slot will be replaced.
fn get_set_index(&self, slot: Slot) -> Result<usize, Error> {
let lowest_permissible_slot: Slot = *self.lowest_permissible_slot.read();
if slot < lowest_permissible_slot {
return Err(Error::SlotTooLow {
slot,
lowest_permissible_slot,
});
}
// Prune the pool if this attestation indicates that the current slot has advanced.
if lowest_permissible_slot + self.max_capacity() < slot + 1 {
self.prune(slot)
}
let mut sets = self.sets.write();
if let Some(index) = sets.iter().position(|set| set.slot == slot) {
return Ok(index);
}
// To avoid re-allocations, try and determine a rough initial capacity for the new set
// by obtaining the mean size of all sets for earlier slots.
let (count, sum) = sets
.iter()
// Only include slots that are less than the given slot in the average. This should
// generally avoid including recent slots that are still "filling up".
.filter(|set| set.slot < slot)
.map(|set| set.len())
.fold((0, 0), |(count, sum), len| (count + 1, sum + len));
// If we are unable to determine an average, just use 128 as it's the target committee
// size for the mainnet spec. This is perhaps a little wasteful for the minimal spec,
// but considering it's approx. 128 * 32 bytes we're not wasting much.
let initial_capacity = sum.checked_div(count).unwrap_or(128);
if sets.len() < self.max_capacity() as usize || sets.is_empty() {
let index = sets.len();
sets.push(SlotHashSet::new(slot, initial_capacity));
return Ok(index);
}
let index = sets
.iter()
.enumerate()
.min_by_key(|(_i, set)| set.slot)
.map(|(i, _set)| i)
.expect("sets cannot be empty due to previous .is_empty() check");
sets[index] = SlotHashSet::new(slot, initial_capacity);
Ok(index)
}
}
#[cfg(test)]
mod tests {
use super::*;
use tree_hash::TreeHash;
use types::{test_utils::test_random_instance, Hash256};
type E = types::MainnetEthSpec;
const NUM_ELEMENTS: usize = 8;
fn get_attestation(slot: Slot, beacon_block_root: u64) -> Attestation<E> {
let mut a: Attestation<E> = test_random_instance();
a.data.slot = slot;
a.data.beacon_block_root = Hash256::from_low_u64_be(beacon_block_root);
a
}
fn single_slot_test(store: &ObservedAttestations<E>, slot: Slot) {
let attestations = (0..NUM_ELEMENTS as u64)
.map(|i| get_attestation(slot, i))
.collect::<Vec<_>>();
for a in &attestations {
assert_eq!(
store.is_known(a, a.tree_hash_root()),
Ok(false),
"should indicate an unknown attestation is unknown"
);
assert_eq!(
store.observe_attestation(a, None),
Ok(ObserveOutcome::New),
"should observe new attestation"
);
}
for a in &attestations {
assert_eq!(
store.is_known(a, a.tree_hash_root()),
Ok(true),
"should indicate a known attestation is known"
);
assert_eq!(
store.observe_attestation(a, Some(a.tree_hash_root())),
Ok(ObserveOutcome::AlreadyKnown),
"should acknowledge an existing attestation"
);
}
}
#[test]
fn single_slot() {
let store = ObservedAttestations::default();
single_slot_test(&store, Slot::new(0));
assert_eq!(
store.sets.read().len(),
1,
"should have a single set stored"
);
assert_eq!(
store.sets.read()[0].len(),
NUM_ELEMENTS,
"set should have NUM_ELEMENTS elements"
);
}
#[test]
fn multiple_contiguous_slots() {
let store = ObservedAttestations::default();
let max_cap = store.max_capacity();
for i in 0..max_cap * 3 {
let slot = Slot::new(i);
single_slot_test(&store, slot);
/*
* Ensure that the number of sets is correct.
*/
if i < max_cap {
assert_eq!(
store.sets.read().len(),
i as usize + 1,
"should have a {} sets stored",
i + 1
);
} else {
assert_eq!(
store.sets.read().len(),
max_cap as usize,
"should have max_capacity sets stored"
);
}
/*
* Ensure that each set contains the correct number of elements.
*/
for set in &store.sets.read()[..] {
assert_eq!(
set.len(),
NUM_ELEMENTS,
"each store should have NUM_ELEMENTS elements"
)
}
/*
* Ensure that all the sets have the expected slots
*/
let mut store_slots = store
.sets
.read()
.iter()
.map(|set| set.slot)
.collect::<Vec<_>>();
assert!(
store_slots.len() <= store.max_capacity() as usize,
"store size should not exceed max"
);
store_slots.sort_unstable();
let expected_slots = (i.saturating_sub(max_cap - 1)..=i)
.map(Slot::new)
.collect::<Vec<_>>();
assert_eq!(expected_slots, store_slots, "should have expected slots");
}
}
#[test]
fn multiple_non_contiguous_slots() {
let store = ObservedAttestations::default();
let max_cap = store.max_capacity();
let to_skip = vec![1_u64, 2, 3, 5, 6, 29, 30, 31, 32, 64];
let slots = (0..max_cap * 3)
.into_iter()
.filter(|i| !to_skip.contains(i))
.collect::<Vec<_>>();
for &i in &slots {
if to_skip.contains(&i) {
continue;
}
let slot = Slot::from(i);
single_slot_test(&store, slot);
/*
* Ensure that each set contains the correct number of elements.
*/
for set in &store.sets.read()[..] {
assert_eq!(
set.len(),
NUM_ELEMENTS,
"each store should have NUM_ELEMENTS elements"
)
}
/*
* Ensure that all the sets have the expected slots
*/
let mut store_slots = store
.sets
.read()
.iter()
.map(|set| set.slot)
.collect::<Vec<_>>();
store_slots.sort_unstable();
assert!(
store_slots.len() <= store.max_capacity() as usize,
"store size should not exceed max"
);
let lowest = store.lowest_permissible_slot.read().as_u64();
let highest = slot.as_u64();
let expected_slots = (lowest..=highest)
.filter(|i| !to_skip.contains(i))
.map(Slot::new)
.collect::<Vec<_>>();
assert_eq!(
expected_slots,
&store_slots[..],
"should have expected slots"
);
}
}
}

View File

@ -0,0 +1,441 @@
//! Provides two structs that help us filter out attestation gossip from validators that have
//! already published attestations:
//!
//! - `ObservedAttesters`: allows filtering unaggregated attestations from the same validator in
//! the same epoch.
//! - `ObservedAggregators`: allows filtering aggregated attestations from the same aggregators in
//! the same epoch.
use bitvec::vec::BitVec;
use parking_lot::RwLock;
use std::collections::{HashMap, HashSet};
use std::marker::PhantomData;
use types::{Attestation, Epoch, EthSpec, Unsigned};
pub type ObservedAttesters<E> = AutoPruningContainer<EpochBitfield, E>;
pub type ObservedAggregators<E> = AutoPruningContainer<EpochHashSet, E>;
#[derive(Debug, PartialEq)]
pub enum Error {
EpochTooLow {
epoch: Epoch,
lowest_permissible_epoch: Epoch,
},
/// We have reached the maximum number of unique `Attestation` that can be observed in a slot.
/// This is a DoS protection function.
ReachedMaxObservationsPerSlot(usize),
/// The given `validator_index` exceeds `VALIDATOR_REGISTRY_LIMIT`, this is an internal error.
ValidatorIndexTooHigh(usize),
}
/// Implemented on an item in an `AutoPruningContainer`.
pub trait Item {
/// Instantiate `Self` with the given `capacity`.
fn with_capacity(capacity: usize) -> Self;
/// The default capacity for self. Used when we can't guess a reasonable size.
fn default_capacity() -> usize;
/// Returns the number of validator indices stored in `self`.
fn len(&self) -> usize;
/// Store `validator_index` in `self`.
fn insert(&mut self, validator_index: usize) -> bool;
/// Returns `true` if `validator_index` has been stored in `self`.
fn contains(&self, validator_index: usize) -> bool;
}
/// Stores a `BitVec` that represents which validator indices have attested during an epoch.
pub struct EpochBitfield {
bitfield: BitVec,
}
impl Item for EpochBitfield {
fn with_capacity(capacity: usize) -> Self {
Self {
bitfield: BitVec::with_capacity(capacity),
}
}
/// Uses a default size that equals the number of genesis validators.
fn default_capacity() -> usize {
16_384
}
fn len(&self) -> usize {
self.bitfield.len()
}
fn insert(&mut self, validator_index: usize) -> bool {
self.bitfield
.get_mut(validator_index)
.map(|mut bit| {
if *bit {
true
} else {
*bit = true;
false
}
})
.unwrap_or_else(|| {
self.bitfield
.resize(validator_index.saturating_add(1), false);
self.bitfield
.get_mut(validator_index)
.map(|mut bit| *bit = true);
false
})
}
fn contains(&self, validator_index: usize) -> bool {
self.bitfield.get(validator_index).map_or(false, |bit| *bit)
}
}
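
The grow-on-demand `insert` above, re-sketched with a plain `Vec<bool>` in place of `bitvec` to make the return semantics explicit: `true` iff the index was already set.

struct Bitfield(Vec<bool>);

impl Bitfield {
    fn insert(&mut self, i: usize) -> bool {
        if let Some(bit) = self.0.get_mut(i) {
            let was_set = *bit;
            *bit = true;
            was_set
        } else {
            // Index beyond the current length: grow, then set the bit.
            self.0.resize(i + 1, false);
            self.0[i] = true;
            false
        }
    }
}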
/// Stores a `HashSet` of which validator indices have created an aggregate attestation during an
/// epoch.
pub struct EpochHashSet {
set: HashSet<usize>,
}
impl Item for EpochHashSet {
fn with_capacity(capacity: usize) -> Self {
Self {
set: HashSet::with_capacity(capacity),
}
}
/// Defaults to the target number of aggregators per committee (16) multiplied by the expected
/// max committee count (64).
fn default_capacity() -> usize {
16 * 64
}
fn len(&self) -> usize {
self.set.len()
}
/// Inserts the `validator_index` in the set. Returns `true` if the `validator_index` was
/// already in the set.
fn insert(&mut self, validator_index: usize) -> bool {
!self.set.insert(validator_index)
}
/// Returns `true` if the `validator_index` is in the set.
fn contains(&self, validator_index: usize) -> bool {
self.set.contains(&validator_index)
}
}
/// A container that stores some number of `T` items.
///
/// This container is "auto-pruning" since it gets an idea of the current epoch from the
/// attestations provided to it and prunes old entries based upon that. For example, if
/// `Self::max_capacity == 32` and an attestation with `a.data.target.epoch` is supplied, then all
/// attestations with an epoch prior to `a.data.target.epoch - 32` will be cleared from the cache.
///
/// `T` should be set to a `EpochBitfield` or `EpochHashSet`.
pub struct AutoPruningContainer<T, E: EthSpec> {
lowest_permissible_epoch: RwLock<Epoch>,
items: RwLock<HashMap<Epoch, T>>,
_phantom: PhantomData<E>,
}
impl<T, E: EthSpec> Default for AutoPruningContainer<T, E> {
fn default() -> Self {
Self {
lowest_permissible_epoch: RwLock::new(Epoch::new(0)),
items: RwLock::new(HashMap::new()),
_phantom: PhantomData,
}
}
}
impl<T: Item, E: EthSpec> AutoPruningContainer<T, E> {
/// Observe that `validator_index` has produced attestation `a`. Returns `Ok(true)` if `a` has
/// previously been observed for `validator_index`.
///
/// ## Errors
///
/// - `validator_index` is higher than `VALIDATOR_REGISTRY_LIMIT`.
/// - `a.data.target.epoch` is earlier than `self.lowest_permissible_epoch`.
pub fn observe_validator(
&self,
a: &Attestation<E>,
validator_index: usize,
) -> Result<bool, Error> {
self.sanitize_request(a, validator_index)?;
let epoch = a.data.target.epoch;
self.prune(epoch);
let mut items = self.items.write();
if let Some(item) = items.get_mut(&epoch) {
Ok(item.insert(validator_index))
} else {
// To avoid re-allocations, try and determine a rough initial capacity for the new item
// by obtaining the mean size of all items in earlier epochs.
let (count, sum) = items
.iter()
// Only include epochs that are less than the given epoch in the average. This should
// generally avoid including recent epochs that are still "filling up".
.filter(|(item_epoch, _item)| **item_epoch < epoch)
.map(|(_epoch, item)| item.len())
.fold((0, 0), |(count, sum), len| (count + 1, sum + len));
let initial_capacity = sum.checked_div(count).unwrap_or(T::default_capacity());
let mut item = T::with_capacity(initial_capacity);
item.insert(validator_index);
items.insert(epoch, item);
Ok(false)
}
}
/// Returns `Ok(true)` if the `validator_index` has produced an attestation conflicting with
/// `a`.
///
/// ## Errors
///
/// - `validator_index` is higher than `VALIDATOR_REGISTRY_LIMIT`.
/// - `a.data.target.epoch` is earlier than `self.lowest_permissible_epoch`.
pub fn validator_has_been_observed(
&self,
a: &Attestation<E>,
validator_index: usize,
) -> Result<bool, Error> {
self.sanitize_request(a, validator_index)?;
let exists = self
.items
.read()
.get(&a.data.target.epoch)
.map_or(false, |item| item.contains(validator_index));
Ok(exists)
}
fn sanitize_request(&self, a: &Attestation<E>, validator_index: usize) -> Result<(), Error> {
if validator_index > E::ValidatorRegistryLimit::to_usize() {
return Err(Error::ValidatorIndexTooHigh(validator_index));
}
let epoch = a.data.target.epoch;
let lowest_permissible_epoch: Epoch = *self.lowest_permissible_epoch.read();
if epoch < lowest_permissible_epoch {
return Err(Error::EpochTooLow {
epoch,
lowest_permissible_epoch,
});
}
Ok(())
}
/// The maximum number of epochs stored in `self`.
fn max_capacity(&self) -> u64 {
// The current epoch and the previous epoch. This is sufficient whilst
// GOSSIP_CLOCK_DISPARITY is 1/2 a slot or less:
//
// https://github.com/ethereum/eth2.0-specs/pull/1706#issuecomment-610151808
2
}
/// Updates `self` with the current epoch, removing all attestations that become expired
/// relative to `Self::max_capacity`.
///
/// Also sets `self.lowest_permissible_epoch` with relation to `current_epoch` and
/// `Self::max_capacity`.
pub fn prune(&self, current_epoch: Epoch) {
// Taking advantage of saturating subtraction on `Epoch`.
let lowest_permissible_epoch = current_epoch - (self.max_capacity().saturating_sub(1));
*self.lowest_permissible_epoch.write() = lowest_permissible_epoch;
self.items
.write()
.retain(|epoch, _item| *epoch >= lowest_permissible_epoch);
}
}
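
A worked example of the two-epoch retention window (standalone sketch using plain `u64` epochs): pruning at epoch `E` keeps only epochs `E - 1` and `E`.

use std::collections::HashMap;

fn prune(items: &mut HashMap<u64, ()>, current_epoch: u64, max_capacity: u64) -> u64 {
    let lowest_permissible = current_epoch.saturating_sub(max_capacity - 1);
    items.retain(|epoch, _| *epoch >= lowest_permissible);
    lowest_permissible
}

fn main() {
    let mut items: HashMap<u64, ()> = (0..=5).map(|e| (e, ())).collect();
    let lowest = prune(&mut items, 5, 2);
    assert_eq!(lowest, 4); // epochs 4 and 5 survive; epoch 3 and earlier are forgotten
    assert_eq!(items.len(), 2);
}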
#[cfg(test)]
mod tests {
use super::*;
macro_rules! test_suite {
($mod_name: ident, $type: ident) => {
#[cfg(test)]
mod $mod_name {
use super::*;
use types::test_utils::test_random_instance;
type E = types::MainnetEthSpec;
fn get_attestation(epoch: Epoch) -> Attestation<E> {
let mut a: Attestation<E> = test_random_instance();
a.data.target.epoch = epoch;
a
}
fn single_epoch_test(store: &$type<E>, epoch: Epoch) {
let attesters = [0, 1, 2, 3, 5, 6, 7, 18, 22];
let a = &get_attestation(epoch);
for &i in &attesters {
assert_eq!(
store.validator_has_been_observed(a, i),
Ok(false),
"should indicate an unknown attestation is unknown"
);
assert_eq!(
store.observe_validator(a, i),
Ok(false),
"should observe new attestation"
);
}
for &i in &attesters {
assert_eq!(
store.validator_has_been_observed(a, i),
Ok(true),
"should indicate a known attestation is known"
);
assert_eq!(
store.observe_validator(a, i),
Ok(true),
"should acknowledge an existing attestation"
);
}
}
#[test]
fn single_epoch() {
let store = $type::default();
single_epoch_test(&store, Epoch::new(0));
assert_eq!(
store.items.read().len(),
1,
"should have a single bitfield stored"
);
}
#[test]
fn multiple_contiguous_epochs() {
let store = $type::default();
let max_cap = store.max_capacity();
for i in 0..max_cap * 3 {
let epoch = Epoch::new(i);
single_epoch_test(&store, epoch);
/*
* Ensure that the number of sets is correct.
*/
if i < max_cap {
assert_eq!(
store.items.read().len(),
i as usize + 1,
"should have a {} items stored",
i + 1
);
} else {
assert_eq!(
store.items.read().len(),
max_cap as usize,
"should have max_capacity items stored"
);
}
/*
* Ensure that all the items have the expected epochs
*/
let mut store_epochs = store
.items
.read()
.iter()
.map(|(epoch, _set)| *epoch)
.collect::<Vec<_>>();
assert!(
store_epochs.len() <= store.max_capacity() as usize,
"store size should not exceed max"
);
store_epochs.sort_unstable();
let expected_epochs = (i.saturating_sub(max_cap - 1)..=i)
.map(Epoch::new)
.collect::<Vec<_>>();
assert_eq!(expected_epochs, store_epochs, "should have expected epochs");
}
}
#[test]
fn multiple_non_contiguous_epochs() {
let store = $type::default();
let max_cap = store.max_capacity();
let to_skip = vec![1_u64, 3, 4, 5];
let epochs = (0..max_cap * 3)
.into_iter()
.filter(|i| !to_skip.contains(i))
.collect::<Vec<_>>();
for &i in &epochs {
if to_skip.contains(&i) {
continue;
}
let epoch = Epoch::from(i);
single_epoch_test(&store, epoch);
/*
* Ensure that all the items have the expected epochs
*/
let mut store_epochs = store
.items
.read()
.iter()
.map(|(epoch, _)| *epoch)
.collect::<Vec<_>>();
store_epochs.sort_unstable();
assert!(
store_epochs.len() <= store.max_capacity() as usize,
"store size should not exceed max"
);
let lowest = store.lowest_permissible_epoch.read().as_u64();
let highest = epoch.as_u64();
let expected_epochs = (lowest..=highest)
.filter(|i| !to_skip.contains(i))
.map(Epoch::new)
.collect::<Vec<_>>();
assert_eq!(
expected_epochs,
&store_epochs[..],
"should have expected epochs"
);
}
}
}
};
}
test_suite!(observed_attesters, ObservedAttesters);
test_suite!(observed_aggregators, ObservedAggregators);
}

View File

@ -0,0 +1,428 @@
//! Provides the `ObservedBlockProducers` struct which allows for rejecting gossip blocks from
//! validators that have already produced a block.
use parking_lot::RwLock;
use std::collections::{HashMap, HashSet};
use std::marker::PhantomData;
use types::{BeaconBlock, EthSpec, Slot, Unsigned};
#[derive(Debug, PartialEq)]
pub enum Error {
/// The slot of the provided block is prior to finalization and should not have been provided
/// to this function. This is an internal error.
FinalizedBlock { slot: Slot, finalized_slot: Slot },
/// The given `proposer_index` exceeds `VALIDATOR_REGISTRY_LIMIT`, this is an internal error.
ValidatorIndexTooHigh(u64),
}
/// Maintains a cache of observed `(block.slot, block.proposer_index)`.
///
/// The cache supports pruning based upon the finalized epoch. It does not automatically prune, you
/// must call `Self::prune` manually.
///
/// The maximum size of the cache is determined by `slots_since_finality *
/// VALIDATOR_REGISTRY_LIMIT`. This is quite a large size, so it's important that upstream
/// functions only use this cache for blocks with a valid signature. Only allowing valid signed
/// blocks reduces the theoretical maximum size of this cache to `slots_since_finality *
/// active_validator_count`, however in reality that is more like `slots_since_finality *
/// known_distinct_shufflings` which is much smaller.
pub struct ObservedBlockProducers<E: EthSpec> {
finalized_slot: RwLock<Slot>,
items: RwLock<HashMap<Slot, HashSet<u64>>>,
_phantom: PhantomData<E>,
}
impl<E: EthSpec> Default for ObservedBlockProducers<E> {
/// Instantiates `Self` with `finalized_slot == 0`.
fn default() -> Self {
Self {
finalized_slot: RwLock::new(Slot::new(0)),
items: RwLock::new(HashMap::new()),
_phantom: PhantomData,
}
}
}
impl<E: EthSpec> ObservedBlockProducers<E> {
/// Observe that the `block` was produced by `block.proposer_index` at `block.slot`. This will
/// update `self` so future calls to it indicate that this block is known.
///
/// The supplied `block` **MUST** be signature verified (see struct-level documentation).
///
/// ## Errors
///
/// - `block.proposer_index` is greater than `VALIDATOR_REGISTRY_LIMIT`.
/// - `block.slot` is equal to or less than the latest pruned `finalized_slot`.
pub fn observe_proposer(&self, block: &BeaconBlock<E>) -> Result<bool, Error> {
self.sanitize_block(block)?;
let did_not_exist = self
.items
.write()
.entry(block.slot)
.or_insert_with(|| HashSet::with_capacity(E::SlotsPerEpoch::to_usize()))
.insert(block.proposer_index);
Ok(!did_not_exist)
}
/// Returns `Ok(true)` if the `block` has been observed before, `Ok(false)` if not. Does not
/// update the cache, so calling this function multiple times will continue to return
/// `Ok(false)`, until `Self::observe_proposer` is called.
///
/// ## Errors
///
/// - `block.proposer_index` is greater than `VALIDATOR_REGISTRY_LIMIT`.
/// - `block.slot` is equal to or less than the latest pruned `finalized_slot`.
pub fn proposer_has_been_observed(&self, block: &BeaconBlock<E>) -> Result<bool, Error> {
self.sanitize_block(block)?;
let exists = self
.items
.read()
.get(&block.slot)
.map_or(false, |set| set.contains(&block.proposer_index));
Ok(exists)
}
/// Returns `Ok(())` if the given `block` is sane.
fn sanitize_block(&self, block: &BeaconBlock<E>) -> Result<(), Error> {
if block.proposer_index > E::ValidatorRegistryLimit::to_u64() {
return Err(Error::ValidatorIndexTooHigh(block.proposer_index));
}
let finalized_slot = *self.finalized_slot.read();
if finalized_slot > 0 && block.slot <= finalized_slot {
return Err(Error::FinalizedBlock {
slot: block.slot,
finalized_slot,
});
}
Ok(())
}
/// Removes all observations of blocks equal to or earlier than `finalized_slot`.
///
/// Stores `finalized_slot` in `self`, so that `self` will reject any block that has a slot
/// equal to or less than `finalized_slot`.
///
/// No-op if `finalized_slot == 0`.
pub fn prune(&self, finalized_slot: Slot) {
if finalized_slot == 0 {
return;
}
*self.finalized_slot.write() = finalized_slot;
self.items
.write()
.retain(|slot, _set| *slot > finalized_slot);
}
}
#[cfg(test)]
mod tests {
use super::*;
use types::MainnetEthSpec;
type E = MainnetEthSpec;
fn get_block(slot: u64, proposer: u64) -> BeaconBlock<E> {
let mut block = BeaconBlock::empty(&E::default_spec());
block.slot = slot.into();
block.proposer_index = proposer;
block
}
#[test]
fn pruning() {
let cache = ObservedBlockProducers::default();
assert_eq!(*cache.finalized_slot.read(), 0, "finalized slot is zero");
assert_eq!(cache.items.read().len(), 0, "no slots should be present");
// Slot 0, proposer 0
let block_a = &get_block(0, 0);
assert_eq!(
cache.observe_proposer(block_a),
Ok(false),
"can observe proposer, indicates proposer unobserved"
);
/*
* Preconditions.
*/
assert_eq!(*cache.finalized_slot.read(), 0, "finalized slot is zero");
assert_eq!(
cache.items.read().len(),
1,
"only one slot should be present"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(0))
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune at the genesis slot does nothing.
*/
cache.prune(Slot::new(0));
assert_eq!(*cache.finalized_slot.read(), 0, "finalized slot is zero");
assert_eq!(
cache.items.read().len(),
1,
"only one slot should be present"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(0))
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune empties the cache
*/
cache.prune(E::slots_per_epoch().into());
assert_eq!(
*cache.finalized_slot.read(),
Slot::from(E::slots_per_epoch()),
"finalized slot is updated"
);
assert_eq!(cache.items.read().len(), 0, "no items left");
/*
* Check that we can't insert a finalized block
*/
// First slot of finalized epoch, proposer 0
let block_b = &get_block(E::slots_per_epoch(), 0);
assert_eq!(
cache.observe_proposer(block_b),
Err(Error::FinalizedBlock {
slot: E::slots_per_epoch().into(),
finalized_slot: E::slots_per_epoch().into(),
}),
"cant insert finalized block"
);
assert_eq!(cache.items.read().len(), 0, "block was not added");
/*
* Check that we _can_ insert a non-finalized block
*/
let three_epochs = E::slots_per_epoch() * 3;
// First slot of epoch 3 (not yet finalized), proposer 0
let block_b = &get_block(three_epochs, 0);
assert_eq!(
cache.observe_proposer(block_b),
Ok(false),
"can insert non-finalized block"
);
assert_eq!(
cache.items.read().len(),
1,
"only one slot should be present"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(three_epochs))
.expect("the three epochs slot should be present")
.len(),
1,
"only one proposer should be present"
);
/*
* Check that a prune doesn't wipe later blocks
*/
let two_epochs = E::slots_per_epoch() * 2;
cache.prune(two_epochs.into());
assert_eq!(
*cache.finalized_slot.read(),
Slot::from(two_epochs),
"finalized slot is updated"
);
assert_eq!(
cache.items.read().len(),
1,
"only one slot should be present"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(three_epochs))
.expect("the three epochs slot should be present")
.len(),
1,
"only one proposer should be present"
);
}
#[test]
fn simple_observations() {
let cache = ObservedBlockProducers::default();
// Slot 0, proposer 0
let block_a = &get_block(0, 0);
assert_eq!(
cache.proposer_has_been_observed(block_a),
Ok(false),
"no observation in empty cache"
);
assert_eq!(
cache.observe_proposer(block_a),
Ok(false),
"can observe proposer, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_has_been_observed(block_a),
Ok(true),
"observed block is indicated as true"
);
assert_eq!(
cache.observe_proposer(block_a),
Ok(true),
"observing again indicates true"
);
assert_eq!(*cache.finalized_slot.read(), 0, "finalized slot is zero");
assert_eq!(
cache.items.read().len(),
1,
"only one slot should be present"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(0))
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present"
);
// Slot 1, proposer 0
let block_b = &get_block(1, 0);
assert_eq!(
cache.proposer_has_been_observed(block_b),
Ok(false),
"no observation for new slot"
);
assert_eq!(
cache.observe_proposer(block_b),
Ok(false),
"can observe proposer for new slot, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_has_been_observed(block_b),
Ok(true),
"observed block in slot 1 is indicated as true"
);
assert_eq!(
cache.observe_proposer(block_b),
Ok(true),
"observing slot 1 again indicates true"
);
assert_eq!(*cache.finalized_slot.read(), 0, "finalized slot is zero");
assert_eq!(cache.items.read().len(), 2, "two slots should be present");
assert_eq!(
cache
.items
.read()
.get(&Slot::new(0))
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present in slot 0"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(1))
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present in slot 1"
);
// Slot 0, proposer 1
let block_c = &get_block(0, 1);
assert_eq!(
cache.proposer_has_been_observed(block_c),
Ok(false),
"no observation for new proposer"
);
assert_eq!(
cache.observe_proposer(block_c),
Ok(false),
"can observe new proposer, indicates proposer unobserved"
);
assert_eq!(
cache.proposer_has_been_observed(block_c),
Ok(true),
"observed new proposer block is indicated as true"
);
assert_eq!(
cache.observe_proposer(block_c),
Ok(true),
"observing new proposer again indicates true"
);
assert_eq!(*cache.finalized_slot.read(), 0, "finalized slot is zero");
assert_eq!(cache.items.read().len(), 2, "two slots should be present");
assert_eq!(
cache
.items
.read()
.get(&Slot::new(0))
.expect("slot zero should be present")
.len(),
2,
"two proposers should be present in slot 0"
);
assert_eq!(
cache
.items
.read()
.get(&Slot::new(1))
.expect("slot zero should be present")
.len(),
1,
"only one proposer should be present in slot 1"
);
}
}

View File

@ -7,7 +7,7 @@ use crate::{
builder::{BeaconChainBuilder, Witness},
eth1_chain::CachingEth1Backend,
events::NullEventHandler,
AttestationProcessingOutcome, AttestationType, BeaconChain, BeaconChainTypes, StateSkipConfig,
BeaconChain, BeaconChainTypes, StateSkipConfig,
};
use genesis::interop_genesis_state;
use rayon::prelude::*;
@ -23,8 +23,8 @@ use tempfile::{tempdir, TempDir};
use tree_hash::TreeHash;
use types::{
AggregateSignature, Attestation, BeaconState, BeaconStateHash, ChainSpec, Domain, EthSpec,
Hash256, Keypair, SecretKey, Signature, SignedBeaconBlock, SignedBeaconBlockHash, SignedRoot,
Slot,
Hash256, Keypair, SecretKey, SelectionProof, Signature, SignedAggregateAndProof,
SignedBeaconBlock, SignedBeaconBlockHash, SignedRoot, Slot,
};
pub use types::test_utils::generate_deterministic_keypairs;
@ -85,8 +85,24 @@ pub struct BeaconChainHarness<T: BeaconChainTypes> {
impl<E: EthSpec> BeaconChainHarness<HarnessType<E>> {
/// Instantiate a new harness with `validator_count` initial validators.
pub fn new(eth_spec_instance: E, keypairs: Vec<Keypair>) -> Self {
// Setting the target aggregators count very high means that _all_ validators in the
// committee are required to produce an aggregate. This is overkill, however with small
// validator counts it's the only way to be certain there is _at least one_ aggregator per
// committee.
Self::new_with_target_aggregators(eth_spec_instance, keypairs, 1 << 32)
}
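
The trick above leans on the spec's aggregator selection rule, sketched here under the assumption (per the v0.11 spec) that `sig_hash_prefix` is the first 8 bytes of the hashed selection proof: with `target_aggregators_per_committee = 1 << 32` the modulo collapses to 1, so every committee member is an aggregator.

fn is_aggregator(committee_len: u64, target_aggregators: u64, sig_hash_prefix: u64) -> bool {
    // modulo == 1 whenever the target is at least the committee size.
    let modulo = std::cmp::max(1, committee_len / target_aggregators);
    sig_hash_prefix % modulo == 0
}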
/// Instantiate a new harness with `validator_count` initial validators and a custom
/// `target_aggregators_per_committee` spec value.
pub fn new_with_target_aggregators(
eth_spec_instance: E,
keypairs: Vec<Keypair>,
target_aggregators_per_committee: u64,
) -> Self {
let data_dir = tempdir().expect("should create temporary data_dir");
let spec = E::default_spec();
let mut spec = E::default_spec();
spec.target_aggregators_per_committee = target_aggregators_per_committee;
let log = NullLoggerBuilder.build().expect("logger should build");
@ -268,9 +284,9 @@ where
.expect("should not error during block processing");
self.chain.fork_choice().expect("should find head");
head_block_root = Some(block_root);
self.add_free_attestations(&attestation_strategy, &new_state, block_root, slot);
self.add_attestations_for_slot(&attestation_strategy, &new_state, block_root, slot);
state = new_state;
slot += 1;
@ -312,7 +328,7 @@ where
self.chain.fork_choice().expect("should find head");
let attestation_strategy = AttestationStrategy::SomeValidators(validators.to_vec());
self.add_free_attestations(&attestation_strategy, &new_state, block_root, slot);
self.add_attestations_for_slot(&attestation_strategy, &new_state, block_root, slot);
(block_root.into(), new_state)
}
@ -448,72 +464,39 @@ where
(signed_block, state)
}
/// Adds attestations to the `BeaconChain` operations pool and fork choice.
/// A list of attestations for each committee for the given slot.
///
/// The `attestation_strategy` dictates which validators should attest.
fn add_free_attestations(
/// The first layer of the Vec is organised per committee. For example, if the return value is
/// called `all_attestations`, then all attestations in `all_attestations[0]` will be for
/// committee 0, whilst all in `all_attestations[1]` will be for committee 1.
pub fn get_unaggregated_attestations(
&self,
attestation_strategy: &AttestationStrategy,
state: &BeaconState<E>,
head_block_root: Hash256,
head_block_slot: Slot,
) {
self.get_free_attestations(
attestation_strategy,
state,
head_block_root,
head_block_slot,
)
.into_iter()
.for_each(|attestation| {
match self
.chain
.process_attestation(attestation, AttestationType::Aggregated)
.expect("should not error during attestation processing")
{
// PastEpoch can occur if we fork over several epochs
AttestationProcessingOutcome::Processed
| AttestationProcessingOutcome::PastEpoch { .. } => (),
other => panic!("did not successfully process attestation: {:?}", other),
}
});
}
/// Generates a `Vec<Attestation>` for some attestation strategy and head_block.
pub fn get_free_attestations(
&self,
attestation_strategy: &AttestationStrategy,
state: &BeaconState<E>,
head_block_root: Hash256,
head_block_slot: Slot,
) -> Vec<Attestation<E>> {
attestation_slot: Slot,
) -> Vec<Vec<Attestation<E>>> {
let spec = &self.spec;
let fork = &state.fork;
let attesting_validators: Vec<usize> = match attestation_strategy {
AttestationStrategy::AllValidators => (0..self.keypairs.len()).collect(),
AttestationStrategy::SomeValidators(vec) => vec.clone(),
};
let mut attestations = vec![];
let attesting_validators = self.get_attesting_validators(attestation_strategy);
state
.get_beacon_committees_at_slot(state.slot)
.expect("should get committees")
.iter()
.for_each(|bc| {
let mut local_attestations: Vec<Attestation<E>> = bc
.committee
.map(|bc| {
bc.committee
.par_iter()
.enumerate()
.filter_map(|(i, validator_index)| {
// Note: searching this array is worst-case `O(n)`. A hashset could be a better
// alternative.
if attesting_validators.contains(validator_index) {
if !attesting_validators.contains(validator_index) {
return None;
}
let mut attestation = self
.chain
.produce_attestation_for_block(
head_block_slot,
.produce_unaggregated_attestation_for_block(
attestation_slot,
bc.index,
head_block_root,
Cow::Borrowed(state),
@ -546,16 +529,118 @@ where
};
Some(attestation)
} else {
None
}
})
.collect();
.collect()
})
.collect()
}
attestations.append(&mut local_attestations);
fn get_attesting_validators(&self, attestation_strategy: &AttestationStrategy) -> Vec<usize> {
match attestation_strategy {
AttestationStrategy::AllValidators => (0..self.keypairs.len()).collect(),
AttestationStrategy::SomeValidators(vec) => vec.clone(),
}
}
/// Produces attestations for the given strategy and head block, adding them (plus a single
/// aggregate per committee) to the chain.
pub fn add_attestations_for_slot(
&self,
attestation_strategy: &AttestationStrategy,
state: &BeaconState<E>,
head_block_root: Hash256,
head_block_slot: Slot,
) {
// These attestations will not be accepted by the chain so no need to generate them.
if state.slot + E::slots_per_epoch() < self.chain.slot().expect("should get slot") {
return;
}
let spec = &self.spec;
let fork = &state.fork;
let attesting_validators = self.get_attesting_validators(attestation_strategy);
let unaggregated_attestations = self.get_unaggregated_attestations(
attestation_strategy,
state,
head_block_root,
head_block_slot,
);
// Loop through all unaggregated attestations, submit them to the chain and also submit a
// single aggregate.
unaggregated_attestations
.into_iter()
.for_each(|committee_attestations| {
// Submit each unaggregated attestation to the chain.
for attestation in &committee_attestations {
self.chain
.verify_unaggregated_attestation_for_gossip(attestation.clone())
.expect("should not error during attestation processing")
.add_to_pool(&self.chain)
.expect("should add attestation to naive pool");
}
// If there are any attestations in this committee, create an aggregate.
if let Some(attestation) = committee_attestations.first() {
let bc = state.get_beacon_committee(attestation.data.slot, attestation.data.index)
.expect("should get committee");
let aggregator_index = bc.committee
.iter()
.find(|&validator_index| {
if !attesting_validators.contains(validator_index) {
return false
}
let selection_proof = SelectionProof::new::<E>(
state.slot,
self.get_sk(*validator_index),
fork,
state.genesis_validators_root,
spec,
);
selection_proof.is_aggregator(bc.committee.len(), spec).unwrap_or(false)
})
.copied()
.expect(&format!(
"Committee {} at slot {} with {} attesting validators does not have any aggregators",
bc.index, state.slot, bc.committee.len()
));
// If the chain is able to produce an aggregate, use that. Otherwise, build an
// aggregate locally.
let aggregate = self
.chain
.get_aggregated_attestation(&attestation.data)
.expect("should not error whilst finding aggregate")
.unwrap_or_else(|| {
committee_attestations.iter().skip(1).fold(attestation.clone(), |mut agg, att| {
agg.aggregate(att);
agg
})
});
attestations
let signed_aggregate = SignedAggregateAndProof::from_aggregate(
aggregator_index as u64,
aggregate,
None,
self.get_sk(aggregator_index),
fork,
state.genesis_validators_root,
spec,
);
self.chain
.verify_aggregated_attestation_for_gossip(signed_aggregate)
.expect("should not error during attestation processing")
.add_to_pool(&self.chain)
.expect("should add attestation to naive aggregation pool")
.add_to_fork_choice(&self.chain)
.expect("should add attestation to fork choice");
}
});
}
/// Creates two forks:

View File

@ -115,6 +115,11 @@ impl ValidatorPubkeyCache {
pub fn get_index(&self, pubkey: &PublicKeyBytes) -> Option<usize> {
self.indices.get(pubkey).copied()
}
/// Returns the number of validators in the cache.
pub fn len(&self) -> usize {
self.indices.len()
}
}
/// Allows for maintaining an on-disk copy of the `ValidatorPubkeyCache`. The file is raw SSZ bytes

View File

@ -90,7 +90,7 @@ fn produces_attestations() {
.len();
let attestation = chain
.produce_attestation(slot, index)
.produce_unaggregated_attestation(slot, index)
.expect("should produce attestation");
let data = &attestation.data;


@ -1,268 +0,0 @@
#![cfg(not(debug_assertions))]
#[macro_use]
extern crate lazy_static;
use beacon_chain::test_utils::{
AttestationStrategy, BeaconChainHarness, BlockStrategy, HarnessType,
};
use beacon_chain::{AttestationProcessingOutcome, AttestationType};
use state_processing::per_slot_processing;
use types::{
test_utils::generate_deterministic_keypair, AggregateSignature, BitList, EthSpec, Hash256,
Keypair, MainnetEthSpec, Signature,
};
pub const VALIDATOR_COUNT: usize = 128;
lazy_static! {
/// A cached set of keys.
static ref KEYPAIRS: Vec<Keypair> = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT);
}
fn get_harness(validator_count: usize) -> BeaconChainHarness<HarnessType<MainnetEthSpec>> {
let harness = BeaconChainHarness::new(MainnetEthSpec, KEYPAIRS[0..validator_count].to_vec());
harness.advance_slot();
harness
}
#[test]
fn attestation_validity() {
let harness = get_harness(VALIDATOR_COUNT);
let chain = &harness.chain;
// Extend the chain out a few epochs so we have some chain depth to play with.
harness.extend_chain(
MainnetEthSpec::slots_per_epoch() as usize * 3 + 1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
let head = chain.head().expect("should get head");
let current_slot = chain.slot().expect("should get slot");
let current_epoch = chain.epoch().expect("should get epoch");
let valid_attestation = harness
.get_free_attestations(
&AttestationStrategy::AllValidators,
&head.beacon_state,
head.beacon_block_root,
head.beacon_block.slot(),
)
.first()
.cloned()
.expect("should get at least one attestation");
assert_eq!(
chain.process_attestation(valid_attestation.clone(), AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::Processed),
"should accept valid attestation"
);
/*
* Should reject attestations if the slot does not match the target epoch.
*/
let mut epoch_mismatch_attestation = valid_attestation.clone();
epoch_mismatch_attestation.data.target.epoch = current_epoch + 1;
assert_eq!(
harness
.chain
.process_attestation(epoch_mismatch_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::BadTargetEpoch),
"should not accept attestation where the slot is not in the same epoch as the target"
);
/*
* Should reject attestations from future epochs.
*/
let mut early_attestation = valid_attestation.clone();
early_attestation.data.target.epoch = current_epoch + 1;
early_attestation.data.slot = (current_epoch + 1).start_slot(MainnetEthSpec::slots_per_epoch());
assert_eq!(
harness
.chain
.process_attestation(early_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::FutureEpoch {
attestation_epoch: current_epoch + 1,
current_epoch
}),
"should not accept early attestation"
);
/*
* Should reject attestations from epochs prior to the previous epoch.
*/
let late_slot = (current_epoch - 2).start_slot(MainnetEthSpec::slots_per_epoch());
let late_block = chain
.block_at_slot(late_slot)
.expect("should not error getting block at slot")
.expect("should find block at slot");
let late_state = chain
.get_state(&late_block.state_root(), Some(late_slot))
.expect("should not error getting state")
.expect("should find state");
let late_attestation = harness
.get_free_attestations(
&AttestationStrategy::AllValidators,
&late_state,
late_block.canonical_root(),
late_slot,
)
.first()
.cloned()
.expect("should get at least one late attestation");
assert_eq!(
harness
.chain
.process_attestation(late_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::PastEpoch {
attestation_epoch: current_epoch - 2,
current_epoch
}),
"should not accept late attestation"
);
/*
* Should reject attestations if the target is unknown.
*/
let mut bad_target_attestation = valid_attestation.clone();
bad_target_attestation.data.target.root = Hash256::from_low_u64_be(42);
assert_eq!(
harness
.chain
.process_attestation(bad_target_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::UnknownTargetRoot(
Hash256::from_low_u64_be(42)
)),
"should not accept bad_target attestation"
);
/*
* Should reject attestations that attest to a block from a future slot.
*/
let mut future_block_attestation = valid_attestation.clone();
future_block_attestation.data.slot -= 1;
assert_eq!(
harness
.chain
.process_attestation(future_block_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::AttestsToFutureBlock {
block: current_slot,
attestation: current_slot - 1
}),
"should not accept future_block attestation"
);
/*
* Should reject attestations if the head block is unknown.
*/
let mut bad_head_attestation = valid_attestation.clone();
bad_head_attestation.data.beacon_block_root = Hash256::from_low_u64_be(42);
assert_eq!(
harness
.chain
.process_attestation(bad_head_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::UnknownHeadBlock {
beacon_block_root: Hash256::from_low_u64_be(42)
}),
"should not accept bad_head attestation"
);
/*
* Should reject attestations with a bad signature.
*/
let mut bad_signature_attestation = valid_attestation.clone();
let kp = generate_deterministic_keypair(0);
let mut agg_sig = AggregateSignature::new();
agg_sig.add(&Signature::new(&[42, 42], &kp.sk));
bad_signature_attestation.signature = agg_sig;
assert_eq!(
harness
.chain
.process_attestation(bad_signature_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::InvalidSignature),
"should not accept bad_signature attestation"
);
/*
* Should reject attestations with an empty bitfield.
*/
let mut empty_bitfield_attestation = valid_attestation.clone();
empty_bitfield_attestation.aggregation_bits =
BitList::with_capacity(1).expect("should build bitfield");
assert_eq!(
harness
.chain
.process_attestation(empty_bitfield_attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::EmptyAggregationBitfield),
"should not accept empty_bitfield attestation"
);
}
#[test]
fn attestation_that_skips_epochs() {
let harness = get_harness(VALIDATOR_COUNT);
let chain = &harness.chain;
// Extend the chain out a few epochs so we have some chain depth to play with.
harness.extend_chain(
MainnetEthSpec::slots_per_epoch() as usize * 3 + 1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
let current_slot = chain.slot().expect("should get slot");
let current_epoch = chain.epoch().expect("should get epoch");
let earlier_slot = (current_epoch - 2).start_slot(MainnetEthSpec::slots_per_epoch());
let earlier_block = chain
.block_at_slot(earlier_slot)
.expect("should not error getting block at slot")
.expect("should find block at slot");
let mut state = chain
.get_state(&earlier_block.state_root(), Some(earlier_slot))
.expect("should not error getting state")
.expect("should find state");
while state.slot < current_slot {
per_slot_processing(&mut state, None, &harness.spec).expect("should process slot");
}
let attestation = harness
.get_free_attestations(
&AttestationStrategy::AllValidators,
&state,
earlier_block.canonical_root(),
current_slot,
)
.first()
.cloned()
.expect("should get at least one attestation");
assert_eq!(
harness
.chain
.process_attestation(attestation, AttestationType::Aggregated),
Ok(AttestationProcessingOutcome::Processed),
"should process attestation that skips slots"
);
}


@ -0,0 +1,990 @@
#![cfg(not(debug_assertions))]
#[macro_use]
extern crate lazy_static;
use beacon_chain::{
attestation_verification::Error as AttnError,
test_utils::{AttestationStrategy, BeaconChainHarness, BlockStrategy, HarnessType},
BeaconChain, BeaconChainTypes,
};
use state_processing::per_slot_processing;
use store::Store;
use tree_hash::TreeHash;
use types::{
test_utils::generate_deterministic_keypair, AggregateSignature, Attestation, EthSpec, Hash256,
Keypair, MainnetEthSpec, SecretKey, SelectionProof, Signature, SignedAggregateAndProof,
SignedBeaconBlock, Unsigned,
};
pub type E = MainnetEthSpec;
/// The validator count needs to be relatively high compared to other tests to ensure that we can
/// have committees where _some_ validators are aggregators but not _all_.
pub const VALIDATOR_COUNT: usize = 256;
lazy_static! {
/// A cached set of keys.
static ref KEYPAIRS: Vec<Keypair> = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT);
}
/// Returns a beacon chain harness.
fn get_harness(validator_count: usize) -> BeaconChainHarness<HarnessType<E>> {
let harness = BeaconChainHarness::new_with_target_aggregators(
MainnetEthSpec,
KEYPAIRS[0..validator_count].to_vec(),
// A kind-of arbitrary number that ensures that _some_ validators are aggregators, but
// not all.
4,
);
harness.advance_slot();
harness
}
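/// A minimal sketch of the spec's aggregator-selection rule (`is_aggregator`, spec v0.11.2),
/// assuming the `eth2_hashing::hash` (SHA-256) helper; the function name and its standalone
/// form are illustrative, not code from this crate:
fn is_aggregator_sketch(committee_len: usize, selection_proof_bytes: &[u8]) -> bool {
    // Mainnet targets 16 aggregators per committee; the harness above lowers the target to 4
    // so that small committees reliably contain both aggregators and non-aggregators.
    const TARGET_AGGREGATORS_PER_COMMITTEE: u64 = 16;
    let modulo = std::cmp::max(1, committee_len as u64 / TARGET_AGGREGATORS_PER_COMMITTEE);
    let digest = eth2_hashing::hash(selection_proof_bytes);
    let mut prefix = [0u8; 8];
    prefix.copy_from_slice(&digest[0..8]);
    // The spec reads the first eight bytes of the digest as a little-endian integer.
    u64::from_le_bytes(prefix) % modulo == 0
}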
/// Returns an attestation that is valid for some slot in the given `chain`.
///
/// Also returns some info about who created it.
fn get_valid_unaggregated_attestation<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
) -> (Attestation<T::EthSpec>, usize, usize, SecretKey) {
let head = chain.head().expect("should get head");
let current_slot = chain.slot().expect("should get slot");
let mut valid_attestation = chain
.produce_unaggregated_attestation(current_slot, 0)
.expect("should not error while producing attestation");
let validator_committee_index = 0;
let validator_index = *head
.beacon_state
.get_beacon_committee(current_slot, valid_attestation.data.index)
.expect("should get committees")
.committee
.get(validator_committee_index)
.expect("there should be an attesting validator");
let validator_sk = generate_deterministic_keypair(validator_index).sk;
valid_attestation
.sign(
&validator_sk,
validator_committee_index,
&head.beacon_state.fork,
chain.genesis_validators_root,
&chain.spec,
)
.expect("should sign attestation");
(
valid_attestation,
validator_index,
validator_committee_index,
validator_sk,
)
}
fn get_valid_aggregated_attestation<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
aggregate: Attestation<T::EthSpec>,
) -> (SignedAggregateAndProof<T::EthSpec>, usize, SecretKey) {
let state = &chain.head().expect("should get head").beacon_state;
let current_slot = chain.slot().expect("should get slot");
let committee = state
.get_beacon_committee(current_slot, aggregate.data.index)
.expect("should get committees");
let committee_len = committee.committee.len();
let (aggregator_index, aggregator_sk) = committee
.committee
.iter()
.find_map(|&val_index| {
let aggregator_sk = generate_deterministic_keypair(val_index).sk;
let proof = SelectionProof::new::<T::EthSpec>(
aggregate.data.slot,
&aggregator_sk,
&state.fork,
chain.genesis_validators_root,
&chain.spec,
);
if proof.is_aggregator(committee_len, &chain.spec).unwrap() {
Some((val_index, aggregator_sk))
} else {
None
}
})
.expect("should find aggregator for committee");
let signed_aggregate = SignedAggregateAndProof::from_aggregate(
aggregator_index as u64,
aggregate,
None,
&aggregator_sk,
&state.fork,
chain.genesis_validators_root,
&chain.spec,
);
(signed_aggregate, aggregator_index, aggregator_sk)
}
/// Returns a proof and index for a validator that is **not** an aggregator for the given
/// attestation.
fn get_non_aggregator<T: BeaconChainTypes>(
chain: &BeaconChain<T>,
aggregate: &Attestation<T::EthSpec>,
) -> (usize, SecretKey) {
let state = &chain.head().expect("should get head").beacon_state;
let current_slot = chain.slot().expect("should get slot");
let committee = state
.get_beacon_committee(current_slot, aggregate.data.index)
.expect("should get committees");
let committee_len = committee.committee.len();
committee
.committee
.iter()
.find_map(|&val_index| {
let aggregator_sk = generate_deterministic_keypair(val_index).sk;
let proof = SelectionProof::new::<T::EthSpec>(
aggregate.data.slot,
&aggregator_sk,
&state.fork,
chain.genesis_validators_root,
&chain.spec,
);
if proof.is_aggregator(committee_len, &chain.spec).unwrap() {
None
} else {
Some((val_index, aggregator_sk))
}
})
.expect("should find non-aggregator for committee")
}
/// Tests verification of `SignedAggregateAndProof` from the gossip network.
#[test]
fn aggregated_gossip_verification() {
let harness = get_harness(VALIDATOR_COUNT);
let chain = &harness.chain;
// Extend the chain out a few epochs so we have some chain depth to play with.
harness.extend_chain(
MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
// Advance into a slot where there have not been blocks or attestations produced.
harness.advance_slot();
let current_slot = chain.slot().expect("should get slot");
assert_eq!(
current_slot % E::slots_per_epoch(),
0,
"the test requires a new epoch to avoid already-seen errors"
);
let (valid_attestation, _attester_index, _attester_committee_index, validator_sk) =
get_valid_unaggregated_attestation(&harness.chain);
let (valid_aggregate, aggregator_index, aggregator_sk) =
get_valid_aggregated_attestation(&harness.chain, valid_attestation);
macro_rules! assert_invalid {
($desc: tt, $attn_getter: expr, $error: expr) => {
assert_eq!(
harness
.chain
.verify_aggregated_attestation_for_gossip($attn_getter)
.err()
.expect(&format!(
"{} should error during verify_aggregated_attestation_for_gossip",
$desc
)),
$error,
"case: {}",
$desc,
);
};
}
/*
* The following two tests ensure:
*
* Spec v0.11.2
*
* aggregate.data.slot is within the last ATTESTATION_PROPAGATION_SLOT_RANGE slots (with a
* MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance) -- i.e. aggregate.data.slot +
* ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= aggregate.data.slot (a client MAY
* queue future aggregates for processing at the appropriate slot).
*/
let future_slot = current_slot + 1;
assert_invalid!(
"aggregate from future slot",
{
let mut a = valid_aggregate.clone();
a.message.aggregate.data.slot = future_slot;
a
},
AttnError::FutureSlot {
attestation_slot: future_slot,
latest_permissible_slot: current_slot,
}
);
let early_slot = current_slot
.as_u64()
.checked_sub(E::slots_per_epoch() + 2)
.expect("chain is not sufficiently deep for test")
.into();
assert_invalid!(
"aggregate from past slot",
{
let mut a = valid_aggregate.clone();
a.message.aggregate.data.slot = early_slot;
a
},
AttnError::PastSlot {
attestation_slot: early_slot,
// Subtract an additional slot since the harness will be exactly on the start of the
// slot and the propagation tolerance will allow an extra slot.
earliest_permissible_slot: current_slot - E::slots_per_epoch() - 1,
}
);
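    // A condensed sketch of the slot-range rule exercised by the two cases above
    // (hypothetical helper, not part of the chain's API; the real check also folds
    // MAXIMUM_GOSSIP_CLOCK_DISPARITY into its clock reads):
    fn within_propagation_range(attestation_slot: u64, current_slot: u64) -> bool {
        // ATTESTATION_PROPAGATION_SLOT_RANGE is 32 slots (one mainnet epoch) in spec v0.11.2.
        const ATTESTATION_PROPAGATION_SLOT_RANGE: u64 = 32;
        attestation_slot <= current_slot
            && current_slot <= attestation_slot + ATTESTATION_PROPAGATION_SLOT_RANGE
    }
    // E.g. at the start of slot 96 the range alone admits slots 64..=96; the disparity
    // allowance on a slot boundary admits one more slot of lateness, hence the extra `- 1`
    // in `earliest_permissible_slot` above.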
/*
* The following test ensures:
*
* Spec v0.11.2
*
* The block being voted for (aggregate.data.beacon_block_root) passes validation.
*/
let unknown_root = Hash256::from_low_u64_le(424242);
assert_invalid!(
"aggregate with unknown head block",
{
let mut a = valid_aggregate.clone();
a.message.aggregate.data.beacon_block_root = unknown_root;
a
},
AttnError::UnknownHeadBlock {
beacon_block_root: unknown_root
}
);
/*
* This test ensures:
*
* Spec v0.11.2
*
* The aggregator signature, signed_aggregate_and_proof.signature, is valid.
*/
assert_invalid!(
"aggregate with bad signature",
{
let mut a = valid_aggregate.clone();
a.signature = Signature::new(&[42, 42], &validator_sk);
a
},
AttnError::InvalidSignature
);
/*
* The following test ensures:
*
* Spec v0.11.2
*
* The aggregate_and_proof.selection_proof is a valid signature of the aggregate.data.slot by
* the validator with index aggregate_and_proof.aggregator_index.
*/
let committee_len = harness
.chain
.head()
.unwrap()
.beacon_state
.get_beacon_committee(
harness.chain.slot().unwrap(),
valid_aggregate.message.aggregate.data.index,
)
.expect("should get committees")
.committee
.len();
assert_invalid!(
"aggregate with bad selection proof signature",
{
let mut a = valid_aggregate.clone();
// Generate random signatures until one happens to be a valid selection proof. We need
// this in order to reach the signature-verification code.
//
// This could run forever, but that seems _really_ improbable.
let mut i: u64 = 0;
a.message.selection_proof = loop {
i += 1;
let proof: SelectionProof = Signature::new(&i.to_le_bytes(), &validator_sk).into();
if proof
.is_aggregator(committee_len, &harness.chain.spec)
.unwrap()
{
break proof.into();
}
};
a
},
AttnError::InvalidSignature
);
/*
* The following test ensures:
*
* Spec v0.11.2
*
* The signature of aggregate is valid.
*/
assert_invalid!(
"aggregate with bad aggregate signature",
{
let mut a = valid_aggregate.clone();
let mut agg_sig = AggregateSignature::new();
agg_sig.add(&Signature::new(&[42, 42], &aggregator_sk));
a.message.aggregate.signature = agg_sig;
a
},
AttnError::InvalidSignature
);
let too_high_index = <E as EthSpec>::ValidatorRegistryLimit::to_u64() + 1;
assert_invalid!(
"aggregate with too-high aggregator index",
{
let mut a = valid_aggregate.clone();
a.message.aggregator_index = too_high_index;
a
},
AttnError::ValidatorIndexTooHigh(too_high_index as usize)
);
/*
* The following test ensures:
*
* Spec v0.11.2
*
* The aggregator's validator index is within the aggregate's committee -- i.e.
* aggregate_and_proof.aggregator_index in get_attesting_indices(state, aggregate.data,
* aggregate.aggregation_bits).
*/
let unknown_validator = VALIDATOR_COUNT as u64;
assert_invalid!(
"aggregate with unknown aggregator index",
{
let mut a = valid_aggregate.clone();
a.message.aggregator_index = unknown_validator;
a
},
// Naively we should think this condition would trigger this error:
//
// AttnError::AggregatorPubkeyUnknown(unknown_validator)
//
// However the following error is triggered first:
AttnError::AggregatorNotInCommittee {
aggregator_index: unknown_validator
}
);
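    // A sketch of the membership condition itself (illustrative helper; the verifier derives
    // `attesting_indices` via `get_attesting_indices`):
    fn aggregator_in_committee(aggregator_index: u64, attesting_indices: &[u64]) -> bool {
        attesting_indices.contains(&aggregator_index)
    }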
/*
* The following test ensures:
*
* Spec v0.11.2
*
* aggregate_and_proof.selection_proof selects the validator as an aggregator for the slot --
* i.e. is_aggregator(state, aggregate.data.slot, aggregate.data.index,
* aggregate_and_proof.selection_proof) returns True.
*/
let (non_aggregator_index, non_aggregator_sk) =
get_non_aggregator(&harness.chain, &valid_aggregate.message.aggregate);
assert_invalid!(
"aggregate with from non-aggregator",
{
SignedAggregateAndProof::from_aggregate(
non_aggregator_index as u64,
valid_aggregate.message.aggregate.clone(),
None,
&non_aggregator_sk,
&harness.chain.head_info().unwrap().fork,
harness.chain.genesis_validators_root,
&harness.chain.spec,
)
},
AttnError::InvalidSelectionProof {
aggregator_index: non_aggregator_index as u64
}
);
assert!(
harness
.chain
.verify_aggregated_attestation_for_gossip(valid_aggregate.clone())
.is_ok(),
"valid aggregate should be verified"
);
/*
* The following test ensures:
*
* NOTE: this is a slight deviation from the spec, see:
* https://github.com/ethereum/eth2.0-specs/pull/1749
*
* Spec v0.11.2
*
* The aggregate attestation defined by hash_tree_root(aggregate) has not already been seen
* (via aggregate gossip, within a block, or through the creation of an equivalent aggregate
* locally).
*/
assert_invalid!(
"aggregate with that has already been seen",
valid_aggregate.clone(),
AttnError::AttestationAlreadyKnown(valid_aggregate.message.aggregate.tree_hash_root())
);
/*
* The following test ensures:
*
* Spec v0.11.2
*
* The aggregate is the first valid aggregate received for the aggregator with index
* aggregate_and_proof.aggregator_index for the epoch aggregate.data.target.epoch.
*/
assert_invalid!(
"aggregate from aggregator that has already been seen",
{
let mut a = valid_aggregate.clone();
a.message.aggregate.data.beacon_block_root = Hash256::from_low_u64_le(42);
a
},
AttnError::AggregatorAlreadyKnown(aggregator_index as u64)
);
}
/// Tests the verification conditions for an unaggregated attestation on the gossip network.
#[test]
fn unaggregated_gossip_verification() {
let harness = get_harness(VALIDATOR_COUNT);
let chain = &harness.chain;
// Extend the chain out a few epochs so we have some chain depth to play with.
harness.extend_chain(
MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
// Advance into a slot where there have not been blocks or attestations produced.
harness.advance_slot();
let current_slot = chain.slot().expect("should get slot");
let current_epoch = chain.epoch().expect("should get epoch");
assert_eq!(
current_slot % E::slots_per_epoch(),
0,
"the test requires a new epoch to avoid already-seen errors"
);
let (valid_attestation, validator_index, validator_committee_index, validator_sk) =
get_valid_unaggregated_attestation(&harness.chain);
macro_rules! assert_invalid {
($desc: tt, $attn_getter: expr, $error: expr) => {
assert_eq!(
harness
.chain
.verify_unaggregated_attestation_for_gossip($attn_getter)
.err()
.expect(&format!(
"{} should error during verify_unaggregated_attestation_for_gossip",
$desc
)),
$error,
"case: {}",
$desc,
);
};
}
/*
* The following two tests ensure:
*
* Spec v0.11.2
*
* attestation.data.slot is within the last ATTESTATION_PROPAGATION_SLOT_RANGE slots (within a
* MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance) -- i.e. attestation.data.slot +
* ATTESTATION_PROPAGATION_SLOT_RANGE >= current_slot >= attestation.data.slot (a client MAY
* queue future attestations for processing at the appropriate slot).
*/
let future_slot = current_slot + 1;
assert_invalid!(
"attestation from future slot",
{
let mut a = valid_attestation.clone();
a.data.slot = future_slot;
a
},
AttnError::FutureSlot {
attestation_slot: future_slot,
latest_permissible_slot: current_slot,
}
);
let early_slot = current_slot
.as_u64()
.checked_sub(E::slots_per_epoch() + 2)
.expect("chain is not sufficiently deep for test")
.into();
assert_invalid!(
"attestation from past slot",
{
let mut a = valid_attestation.clone();
a.data.slot = early_slot;
a
},
AttnError::PastSlot {
attestation_slot: early_slot,
// Subtract an additional slot since the harness will be exactly on the start of the
// slot and the propagation tolerance will allow an extra slot.
earliest_permissible_slot: current_slot - E::slots_per_epoch() - 1,
}
);
/*
* The following two tests ensure:
*
* Spec v0.11.2
*
* The attestation is unaggregated -- that is, it has exactly one participating validator
* (len([bit for bit in attestation.aggregation_bits if bit == 0b1]) == 1).
*/
assert_invalid!(
"attestation without any aggregation bits set",
{
let mut a = valid_attestation.clone();
a.aggregation_bits
.set(validator_committee_index, false)
.expect("should unset aggregation bit");
assert_eq!(
a.aggregation_bits.num_set_bits(),
0,
"test requires no set bits"
);
a
},
AttnError::NotExactlyOneAggregationBitSet(0)
);
assert_invalid!(
"attestation with two aggregation bits set",
{
let mut a = valid_attestation.clone();
a.aggregation_bits
.set(validator_committee_index + 1, true)
.expect("should set second aggregation bit");
a
},
AttnError::NotExactlyOneAggregationBitSet(2)
);
/*
* The following test ensures that:
*
* Spec v0.11.2
*
* The block being voted for (attestation.data.beacon_block_root) passes validation.
*/
let unknown_root = Hash256::from_low_u64_le(424242); // No one wants one of these
assert_invalid!(
"attestation with unknown head block",
{
let mut a = valid_attestation.clone();
a.data.beacon_block_root = unknown_root;
a
},
AttnError::UnknownHeadBlock {
beacon_block_root: unknown_root
}
);
/*
* The following test ensures that:
*
* Spec v0.11.2
*
* The signature of attestation is valid.
*/
assert_invalid!(
"attestation with bad signature",
{
let mut a = valid_attestation.clone();
let mut agg_sig = AggregateSignature::new();
agg_sig.add(&Signature::new(&[42, 42], &validator_sk));
a.signature = agg_sig;
a
},
AttnError::InvalidSignature
);
assert!(
harness
.chain
.verify_unaggregated_attestation_for_gossip(valid_attestation.clone())
.is_ok(),
"valid attestation should be verified"
);
/*
* The following test ensures that:
*
* Spec v0.11.2
*
* There has been no other valid attestation seen on an attestation subnet that has an
* identical attestation.data.target.epoch and participating validator index.
*/
assert_invalid!(
"attestation that has already been seen",
valid_attestation.clone(),
AttnError::PriorAttestationKnown {
validator_index: validator_index as u64,
epoch: current_epoch
}
);
}
/// Tests the verification conditions applied when adding an attestation to fork choice.
#[test]
fn fork_choice_verification() {
let harness = get_harness(VALIDATOR_COUNT);
let chain = &harness.chain;
// Extend the chain out a few epochs so we have some chain depth to play with.
harness.extend_chain(
MainnetEthSpec::slots_per_epoch() as usize * 3 - 1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
// Advance into a slot where there have not been blocks or attestations produced.
harness.advance_slot();
// We're going to produce the attestations at the first slot of the epoch.
let (valid_attestation, _validator_index, _validator_committee_index, _validator_sk) =
get_valid_unaggregated_attestation(&harness.chain);
// Extend the chain two more blocks, but without any attestations so we don't trigger the
// "already seen" caches.
//
// Because of this, the attestation we're dealing with was made one slot prior to the current
// slot. This allows us to test the `AttestsToFutureBlock` condition.
harness.extend_chain(
2,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::SomeValidators(vec![]),
);
let current_slot = chain.slot().expect("should get slot");
let current_epoch = chain.epoch().expect("should get epoch");
let attestation = harness
.chain
.verify_unaggregated_attestation_for_gossip(valid_attestation.clone())
.expect("precondition: should gossip verify attestation");
macro_rules! assert_invalid {
($desc: tt, $attn_getter: expr, $error: expr) => {
assert_eq!(
harness
.chain
.apply_attestation_to_fork_choice(&$attn_getter)
.err()
.expect(&format!(
"{} should error during apply_attestation_to_fork_choice",
$desc
)),
$error,
"case: {}",
$desc,
);
};
}
assert_invalid!(
"attestation without any aggregation bits set",
{
let mut a = attestation.clone();
a.__indexed_attestation_mut().attesting_indices = vec![].into();
a
},
AttnError::EmptyAggregationBitfield
);
/*
* The following two tests ensure that:
*
* Spec v0.11.2
*
* assert target.epoch in [current_epoch, previous_epoch]
*/
let future_epoch = current_epoch + 1;
assert_invalid!(
"attestation from future epoch",
{
let mut a = attestation.clone();
a.__indexed_attestation_mut().data.target.epoch = future_epoch;
a
},
AttnError::FutureEpoch {
attestation_epoch: future_epoch,
current_epoch
}
);
assert!(
current_epoch > 1,
"precondition: must be able to have a past epoch"
);
let past_epoch = current_epoch - 2;
assert_invalid!(
"attestation from past epoch",
{
let mut a = attestation.clone();
a.__indexed_attestation_mut().data.target.epoch = past_epoch;
a
},
AttnError::PastEpoch {
attestation_epoch: past_epoch,
current_epoch
}
);
/*
* This test ensures that:
*
* Spec v0.11.2
*
* assert target.epoch == compute_epoch_at_slot(attestation.data.slot)
*/
assert_invalid!(
"attestation with bad target epoch",
{
let mut a = attestation.clone();
let indexed = a.__indexed_attestation_mut();
indexed.data.target.epoch = indexed.data.slot.epoch(E::slots_per_epoch()) - 1;
a
},
AttnError::BadTargetEpoch
);
/*
* This test ensures that:
*
* Spec v0.11.2
*
* Attestations must target a known block. If the target block is unknown, delay
* consideration until the block is found
*
* assert target.root in store.blocks
*/
let unknown_root = Hash256::from_low_u64_le(42);
assert_invalid!(
"attestation with unknown target root",
{
let mut a = attestation.clone();
let indexed = a.__indexed_attestation_mut();
indexed.data.target.root = unknown_root;
a
},
AttnError::UnknownTargetRoot(unknown_root)
);
// NOTE: we're not testing an assert from the spec:
//
// `assert get_current_slot(store) >= compute_start_slot_at_epoch(target.epoch)`
//
// I think this check is redundant and I've raised an issue here:
//
// https://github.com/ethereum/eth2.0-specs/pull/1755
/*
* This test asserts that:
*
* Spec v0.11.2
*
* # Attestations must be for a known block. If block is unknown, delay consideration until the
* block is found
*
* assert attestation.data.beacon_block_root in store.blocks
*/
assert_invalid!(
"attestation with unknown beacon block root",
{
let mut a = attestation.clone();
let indexed = a.__indexed_attestation_mut();
indexed.data.beacon_block_root = unknown_root;
a
},
AttnError::UnknownHeadBlock {
beacon_block_root: unknown_root
}
);
let future_block = harness
.chain
.block_at_slot(current_slot)
.expect("should not error getting block")
.expect("should find block at current slot");
assert_invalid!(
"attestation to future block",
{
let mut a = attestation.clone();
let indexed = a.__indexed_attestation_mut();
assert!(
future_block.slot() > indexed.data.slot,
"precondition: the attestation must attest to the future"
);
indexed.data.beacon_block_root = future_block.canonical_root();
a
},
AttnError::AttestsToFutureBlock {
block: current_slot,
attestation: current_slot - 1
}
);
// Note: we're not checking the "attestations can only affect the fork choice of subsequent
// slots" part of the spec, we do this upstream.
assert!(
harness
.chain
.apply_attestation_to_fork_choice(&attestation.clone())
.is_ok(),
"should verify valid attestation"
);
// There's nothing stopping fork choice from accepting the same attestation twice.
assert!(
harness
.chain
.apply_attestation_to_fork_choice(&attestation)
.is_ok(),
"should verify valid attestation a second time"
);
}
/// Ensures that an attestation that skips epochs can still be processed.
///
/// This also checks that we can do a state lookup if we don't get a hit from the shuffling cache.
#[test]
fn attestation_that_skips_epochs() {
let harness = get_harness(VALIDATOR_COUNT);
let chain = &harness.chain;
// Extend the chain out a few epochs so we have some chain depth to play with.
harness.extend_chain(
MainnetEthSpec::slots_per_epoch() as usize * 3 + 1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
let current_slot = chain.slot().expect("should get slot");
let current_epoch = chain.epoch().expect("should get epoch");
let earlier_slot = (current_epoch - 2).start_slot(MainnetEthSpec::slots_per_epoch());
let earlier_block = chain
.block_at_slot(earlier_slot)
.expect("should not error getting block at slot")
.expect("should find block at slot");
let mut state = chain
.get_state(&earlier_block.state_root(), Some(earlier_slot))
.expect("should not error getting state")
.expect("should find state");
while state.slot < current_slot {
per_slot_processing(&mut state, None, &harness.spec).expect("should process slot");
}
let attestation = harness
.get_unaggregated_attestations(
&AttestationStrategy::AllValidators,
&state,
earlier_block.canonical_root(),
current_slot,
)
.first()
.expect("should have at least one committee")
.first()
.cloned()
.expect("should have at least one attestation in committee");
let block_root = attestation.data.beacon_block_root;
let block_slot = harness
.chain
.store
.get::<SignedBeaconBlock<E>>(&block_root)
.expect("should not error getting block")
.expect("should find attestation block")
.message
.slot;
assert!(
attestation.data.slot - block_slot > E::slots_per_epoch() * 2,
"the attestation must skip more than two epochs"
);
assert!(
harness
.chain
.verify_unaggregated_attestation_for_gossip(attestation)
.is_ok(),
"should gossip verify attestation that skips slots"
);
}


@ -314,16 +314,12 @@ fn invalid_signatures() {
item
);
let gossip_verified = harness
.chain
.verify_block_for_gossip(snapshots[block_index].beacon_block.clone())
.expect("should obtain gossip verified block");
assert_eq!(
harness.chain.process_block(gossip_verified),
Err(BlockError::InvalidSignature),
"should not import gossip verified block with an invalid {} signature",
item
);
// NOTE: we choose not to check gossip verification here. It only checks one signature
// (proposal) and that is already tested elsewhere in this file.
//
// It's not trivial to just check gossip verification since it will start refusing
// blocks as soon as it has seen one valid proposal signature for a given (validator,
// slot) tuple.
};
/*
@ -513,7 +509,7 @@ fn unwrap_err<T, E>(result: Result<T, E>) -> E {
}
#[test]
fn gossip_verification() {
fn block_gossip_verification() {
let harness = get_harness(VALIDATOR_COUNT);
let block_index = CHAIN_SEGMENT_LENGTH - 2;
@ -537,19 +533,13 @@ fn gossip_verification() {
}
/*
* Block with invalid signature
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
block.signature = junk_signature();
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block)),
BlockError::ProposalSignatureInvalid,
"should not import a block with an invalid proposal signature"
);
/*
* Block from a future slot.
* This test ensures that:
*
* Spec v0.11.2
*
* The block is not from a future slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance) --
* i.e. validate that signed_beacon_block.message.slot <= current_slot (a client MAY queue
* future blocks for processing at the appropriate slot).
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
@ -565,7 +555,15 @@ fn gossip_verification() {
);
/*
* Block from a finalized slot.
* This test ensures that:
*
* Spec v0.11.2
*
* The block is from a slot greater than the latest finalized slot -- i.e. validate that
* signed_beacon_block.message.slot >
* compute_start_slot_at_epoch(state.finalized_checkpoint.epoch) (a client MAY choose to
* validate and store such blocks for additional purposes -- e.g. slashing detection, archive
* nodes, etc).
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
@ -585,4 +583,92 @@ fn gossip_verification() {
},
"should not import a block with a finalized slot"
);
/*
* This test ensures that:
*
* Spec v0.11.2
*
* The proposer signature, signed_beacon_block.signature, is valid with respect to the
* proposer_index pubkey.
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
block.signature = junk_signature();
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block)),
BlockError::ProposalSignatureInvalid,
"should not import a block with an invalid proposal signature"
);
/*
* This test ensures that:
*
* Spec v0.11.2
*
* The block is proposed by the expected proposer_index for the block's slot in the context of
* the current shuffling (defined by parent_root/slot). If the proposer_index cannot
* immediately be verified against the expected shuffling, the block MAY be queued for later
* processing while proposers for the block's branch are calculated.
*/
let mut block = CHAIN_SEGMENT[block_index].beacon_block.clone();
let expected_proposer = block.message.proposer_index;
let other_proposer = (0..VALIDATOR_COUNT as u64)
.into_iter()
.find(|i| *i != block.message.proposer_index)
.expect("there must be more than one validator in this test");
block.message.proposer_index = other_proposer;
let block = block.message.clone().sign(
&generate_deterministic_keypair(other_proposer as usize).sk,
&harness.chain.head_info().unwrap().fork,
harness.chain.genesis_validators_root,
&harness.chain.spec,
);
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block.clone())),
BlockError::IncorrectBlockProposer {
block: other_proposer,
local_shuffling: expected_proposer
},
"should not import a block with the wrong proposer index"
);
// Check to ensure that we registered this as a valid proposal from this proposer.
assert_eq!(
unwrap_err(harness.chain.verify_block_for_gossip(block.clone())),
BlockError::RepeatProposal {
proposer: other_proposer,
slot: block.message.slot
},
"should register any valid signature against the proposer, even if the block failed later verification"
);
let block = CHAIN_SEGMENT[block_index].beacon_block.clone();
assert!(
harness.chain.verify_block_for_gossip(block).is_ok(),
"the valid block should be processed"
);
/*
* This test ensures that:
*
* Spec v0.11.2
*
* The block is the first block with valid signature received for the proposer for the slot,
* signed_beacon_block.message.slot.
*/
let block = CHAIN_SEGMENT[block_index].beacon_block.clone();
assert_eq!(
harness
.chain
.verify_block_for_gossip(block.clone())
.err()
.expect("should error when processing known block"),
BlockError::RepeatProposal {
proposer: block.message.proposer_index,
slot: block.message.slot,
},
"the second proposal by this validator should be rejected"
);
}


@ -1,13 +1,14 @@
#![cfg(not(debug_assertions))]
// #![cfg(not(debug_assertions))]
#[macro_use]
extern crate lazy_static;
use beacon_chain::attestation_verification::Error as AttnError;
use beacon_chain::test_utils::{
AttestationStrategy, BeaconChainHarness, BlockStrategy, DiskHarnessType,
};
use beacon_chain::BeaconSnapshot;
use beacon_chain::{AttestationProcessingOutcome, AttestationType, StateSkipConfig};
use beacon_chain::StateSkipConfig;
use rand::Rng;
use sloggers::{null::NullLoggerBuilder, Build};
use std::collections::HashMap;
@ -272,7 +273,7 @@ fn epoch_boundary_state_attestation_processing() {
);
let head = harness.chain.head().expect("head ok");
late_attestations.extend(harness.get_free_attestations(
late_attestations.extend(harness.get_unaggregated_attestations(
&AttestationStrategy::SomeValidators(late_validators.clone()),
&head.beacon_state,
head.beacon_block_root,
@ -289,7 +290,7 @@ fn epoch_boundary_state_attestation_processing() {
let mut checked_pre_fin = false;
for attestation in late_attestations {
for attestation in late_attestations.into_iter().flatten() {
// load_epoch_boundary_state is idempotent!
let block_root = attestation.data.beacon_block_root;
let block = store.get_block(&block_root).unwrap().expect("block exists");
@ -310,26 +311,29 @@ fn epoch_boundary_state_attestation_processing() {
.expect("head ok")
.finalized_checkpoint
.epoch;
let res = harness
.chain
.process_attestation_internal(attestation.clone(), AttestationType::Aggregated);
.verify_unaggregated_attestation_for_gossip(attestation.clone());
let current_epoch = harness.chain.epoch().expect("should get epoch");
let attestation_epoch = attestation.data.target.epoch;
let current_slot = harness.chain.slot().expect("should get slot");
let attestation_slot = attestation.data.slot;
// Extra -1 to handle gossip clock disparity.
let earliest_permissible_slot = current_slot - E::slots_per_epoch() - 1;
if attestation.data.slot <= finalized_epoch.start_slot(E::slots_per_epoch())
|| attestation_epoch + 1 < current_epoch
if attestation_slot <= finalized_epoch.start_slot(E::slots_per_epoch())
|| attestation_slot < earliest_permissible_slot
{
checked_pre_fin = true;
assert_eq!(
res,
Ok(AttestationProcessingOutcome::PastEpoch {
attestation_epoch,
current_epoch,
})
res.err().unwrap(),
AttnError::PastSlot {
attestation_slot,
earliest_permissible_slot,
}
);
} else {
assert_eq!(res, Ok(AttestationProcessingOutcome::Processed));
res.expect("should have verified attetation");
}
}
assert!(checked_pre_fin);
@ -1069,9 +1073,20 @@ fn prunes_fork_running_past_finalized_checkpoint() {
let (canonical_blocks_second_epoch, _, _, _, _) = harness.add_canonical_chain_blocks(
canonical_state,
canonical_slot,
slots_per_epoch * 4,
slots_per_epoch * 6,
&honest_validators,
);
assert_ne!(
harness
.chain
.head()
.unwrap()
.beacon_state
.finalized_checkpoint
.epoch,
0,
"chain should have finalized"
);
// Postconditions
let canonical_blocks: HashMap<Slot, SignedBeaconBlockHash> = canonical_blocks_zeroth_epoch
@ -1087,8 +1102,8 @@ fn prunes_fork_running_past_finalized_checkpoint() {
finalized_blocks,
vec![
Hash256::zero().into(),
canonical_blocks[&Slot::new(slots_per_epoch as u64)],
canonical_blocks[&Slot::new((slots_per_epoch * 2) as u64)],
canonical_blocks[&Slot::new(slots_per_epoch as u64 * 3)],
canonical_blocks[&Slot::new(slots_per_epoch as u64 * 4)],
]
.into_iter()
.collect()
@ -1203,9 +1218,20 @@ fn prunes_skipped_slots_states() {
let (canonical_blocks_post_finalization, _, _, _, _) = harness.add_canonical_chain_blocks(
canonical_state,
canonical_slot,
slots_per_epoch * 5,
slots_per_epoch * 6,
&honest_validators,
);
assert_eq!(
harness
.chain
.head()
.unwrap()
.beacon_state
.finalized_checkpoint
.epoch,
2,
"chain should have finalized"
);
// Postconditions
let chain_dump = harness.chain.chain_dump().unwrap();
@ -1218,7 +1244,7 @@ fn prunes_skipped_slots_states() {
finalized_blocks,
vec![
Hash256::zero().into(),
canonical_blocks[&Slot::new(slots_per_epoch as u64)],
canonical_blocks[&Slot::new(slots_per_epoch as u64 * 2)],
]
.into_iter()
.collect()
@ -1240,10 +1266,12 @@ fn prunes_skipped_slots_states() {
assert!(
harness
.chain
.get_state(&state_hash, Some(slot))
.get_state(&state_hash, None)
.unwrap()
.is_none(),
"skipped slot states should have been pruned"
"skipped slot {} state {} should have been pruned",
slot,
state_hash
);
}
}


@ -3,10 +3,12 @@
#[macro_use]
extern crate lazy_static;
use beacon_chain::test_utils::{
use beacon_chain::{
attestation_verification::Error as AttnError,
test_utils::{
AttestationStrategy, BeaconChainHarness, BlockStrategy, HarnessType, OP_POOL_DB_KEY,
},
};
use beacon_chain::{AttestationProcessingOutcome, AttestationType};
use operation_pool::PersistedOperationPool;
use state_processing::{
per_slot_processing, per_slot_processing::Error as SlotProcessingError, EpochProcessingError,
@ -361,7 +363,7 @@ fn roundtrip_operation_pool() {
}
#[test]
fn free_attestations_added_to_fork_choice_some_none() {
fn unaggregated_attestations_added_to_fork_choice_some_none() {
let num_blocks_produced = MinimalEthSpec::slots_per_epoch() / 2;
let harness = get_harness(VALIDATOR_COUNT);
@ -425,7 +427,7 @@ fn attestations_with_increasing_slots() {
);
attestations.append(
&mut harness.get_free_attestations(
&mut harness.get_unaggregated_attestations(
&AttestationStrategy::AllValidators,
&harness.chain.head().expect("should get head").beacon_state,
harness
@ -445,30 +447,31 @@ fn attestations_with_increasing_slots() {
harness.advance_slot();
}
let current_epoch = harness.chain.epoch().expect("should get epoch");
for attestation in attestations {
let attestation_epoch = attestation.data.target.epoch;
for attestation in attestations.into_iter().flatten() {
let res = harness
.chain
.process_attestation(attestation, AttestationType::Aggregated);
.verify_unaggregated_attestation_for_gossip(attestation.clone());
if attestation_epoch + 1 < current_epoch {
let current_slot = harness.chain.slot().expect("should get slot");
let attestation_slot = attestation.data.slot;
let earliest_permissible_slot = current_slot - MinimalEthSpec::slots_per_epoch() - 1;
if attestation_slot < earliest_permissible_slot {
assert_eq!(
res,
Ok(AttestationProcessingOutcome::PastEpoch {
attestation_epoch,
current_epoch,
})
res.err().unwrap(),
AttnError::PastSlot {
attestation_slot,
earliest_permissible_slot,
}
)
} else {
assert_eq!(res, Ok(AttestationProcessingOutcome::Processed))
res.expect("should process attestation");
}
}
}
#[test]
fn free_attestations_added_to_fork_choice_all_updated() {
fn unaggregated_attestations_added_to_fork_choice_all_updated() {
let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 2 - 1;
let harness = get_harness(VALIDATOR_COUNT);


@ -8,7 +8,7 @@ pub mod processor;
use crate::error;
use crate::service::NetworkMessage;
use beacon_chain::{AttestationType, BeaconChain, BeaconChainTypes, BlockError};
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockError};
use eth2_libp2p::{
rpc::{
RPCCodedResponse, RPCError, RPCRequest, RPCResponse, RPCResponseErrorCode, RequestId,
@ -252,30 +252,28 @@ impl<T: BeaconChainTypes> Router<T> {
match gossip_message {
// Attestations should never reach the router.
PubsubMessage::AggregateAndProofAttestation(aggregate_and_proof) => {
if self
.processor
.should_forward_aggregate_attestation(&aggregate_and_proof)
if let Some(gossip_verified) =
self.processor.verify_aggregated_attestation_for_gossip(
peer_id.clone(),
*aggregate_and_proof.clone(),
)
{
self.propagate_message(id, peer_id.clone());
self.processor
.import_aggregated_attestation(peer_id, gossip_verified);
}
self.processor.process_attestation_gossip(
peer_id,
aggregate_and_proof.message.aggregate,
AttestationType::Aggregated,
);
}
PubsubMessage::Attestation(subnet_attestation) => {
if self
.processor
.should_forward_attestation(&subnet_attestation.1)
if let Some(gossip_verified) =
self.processor.verify_unaggregated_attestation_for_gossip(
peer_id.clone(),
subnet_attestation.1.clone(),
)
{
self.propagate_message(id, peer_id.clone());
self.processor
.import_unaggregated_attestation(peer_id, gossip_verified);
}
self.processor.process_attestation_gossip(
peer_id,
subnet_attestation.1,
AttestationType::Unaggregated { should_store: true },
);
}
PubsubMessage::BeaconBlock(block) => {
match self.processor.should_forward_block(&peer_id, block) {


@ -1,8 +1,11 @@
use crate::service::NetworkMessage;
use crate::sync::{PeerSyncInfo, SyncMessage};
use beacon_chain::{
AttestationProcessingOutcome, AttestationType, BeaconChain, BeaconChainTypes, BlockError,
BlockProcessingOutcome, GossipVerifiedBlock,
attestation_verification::{
Error as AttnError, IntoForkChoiceVerifiedAttestation, VerifiedAggregatedAttestation,
VerifiedUnaggregatedAttestation,
},
BeaconChain, BeaconChainTypes, BlockError, BlockProcessingOutcome, GossipVerifiedBlock,
};
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::{RPCCodedResponse, RPCEvent, RPCRequest, RPCResponse, RequestId};
@ -548,44 +551,121 @@ impl<T: BeaconChainTypes> Processor<T> {
true
}
/// Verifies the Aggregate attestation before propagating.
pub fn should_forward_aggregate_attestation(
&self,
_aggregate_and_proof: &Box<SignedAggregateAndProof<T::EthSpec>>,
) -> bool {
// TODO: Implement
true
}
/// Verifies the attestation before propagating.
pub fn should_forward_attestation(&self, _aggregate: &Attestation<T::EthSpec>) -> bool {
// TODO: Implement
true
}
/// Process a new attestation received from gossipsub.
pub fn process_attestation_gossip(
/// Handle an error whilst verifying an `Attestation` or `SignedAggregateAndProof` from the
/// network.
pub fn handle_attestation_verification_failure(
&mut self,
peer_id: PeerId,
msg: Attestation<T::EthSpec>,
attestation_type: AttestationType,
beacon_block_root: Hash256,
attestation_type: &str,
error: AttnError,
) {
match self
.chain
.process_attestation(msg.clone(), attestation_type)
{
Ok(outcome) => match outcome {
AttestationProcessingOutcome::Processed => {
debug!(
self.log,
"Processed attestation";
"source" => "gossip",
"peer" => format!("{:?}",peer_id),
"block_root" => format!("{}", msg.data.beacon_block_root),
"slot" => format!("{}", msg.data.slot),
"Invalid attestation from network";
"block" => format!("{}", beacon_block_root),
"peer_id" => format!("{:?}", peer_id),
"type" => format!("{:?}", attestation_type),
);
match error {
AttnError::FutureEpoch { .. }
| AttnError::PastEpoch { .. }
| AttnError::FutureSlot { .. }
| AttnError::PastSlot { .. } => {
/*
* These errors can be triggered by a mismatch between our clock and the peer's.
*
* The peer has published an invalid consensus message _only_ if we trust our own clock.
*/
}
AttestationProcessingOutcome::UnknownHeadBlock { beacon_block_root } => {
AttnError::InvalidSelectionProof { .. } | AttnError::InvalidSignature => {
/*
* These errors are caused by invalid signatures.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::EmptyAggregationBitfield => {
/*
* The aggregate had no signatures and is therefore worthless.
*
* Whilst we don't gossip this attestation, this act is **not** a clear
* violation of the spec nor indication of fault.
*
* This may change soon. Reference:
*
* https://github.com/ethereum/eth2.0-specs/pull/1732
*/
}
AttnError::AggregatorPubkeyUnknown(_) => {
/*
* The aggregator index was higher than any known validator index. This is
* possible in two cases:
*
* 1. The attestation is malformed
* 2. The attestation attests to a beacon_block_root that we do not know.
*
* It should be impossible to reach (2) without triggering
* `AttnError::UnknownHeadBlock`, so we can safely assume the peer is
* faulty.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AggregatorNotInCommittee { .. } => {
/*
* The aggregator's validator index was not part of the committee that produced
* the aggregate, so the aggregate is malformed.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AttestationAlreadyKnown { .. } => {
/*
* The aggregate attestation has already been observed on the network or in
* a block.
*
* The peer is not necessarily faulty.
*/
}
AttnError::AggregatorAlreadyKnown(_) => {
/*
* There has already been an aggregate attestation seen from this
* aggregator index.
*
* The peer is not necessarily faulty.
*/
}
AttnError::PriorAttestationKnown { .. } => {
/*
* We have already seen an attestation from this validator for this epoch.
*
* The peer is not necessarily faulty.
*/
}
AttnError::ValidatorIndexTooHigh(_) => {
/*
* The aggregator index (or similar field) was higher than the maximum
* possible number of validators.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::UnknownHeadBlock { beacon_block_root } => {
// Note: it's a little unclear whether this block is unknown or just old. See:
//
// https://github.com/sigp/lighthouse/issues/1039
// TODO: Maintain this attestation and re-process once sync completes
debug!(
self.log,
@ -596,29 +676,197 @@ impl<T: BeaconChainTypes> Processor<T> {
// we don't know the block, get the sync manager to handle the block lookup
self.send_to_sync(SyncMessage::UnknownBlockHash(peer_id, beacon_block_root));
}
AttestationProcessingOutcome::FutureEpoch { .. }
| AttestationProcessingOutcome::PastEpoch { .. }
| AttestationProcessingOutcome::UnknownTargetRoot { .. }
| AttestationProcessingOutcome::FinalizedSlot { .. } => {} // ignore the attestation
AttestationProcessingOutcome::Invalid { .. }
| AttestationProcessingOutcome::EmptyAggregationBitfield { .. }
| AttestationProcessingOutcome::AttestsToFutureBlock { .. }
| AttestationProcessingOutcome::InvalidSignature
| AttestationProcessingOutcome::NoCommitteeForSlotAndIndex { .. }
| AttestationProcessingOutcome::BadTargetEpoch { .. } => {
// the peer has sent a bad attestation. Remove them.
self.network.disconnect(peer_id, GoodbyeReason::Fault);
AttnError::UnknownTargetRoot(_) => {
/*
* The block indicated by the target root is not known to us.
*
* We should always get `AttnError::UnknownHeadBlock` before we get this
* error, so this means we can get this error if:
*
* 1. The target root does not represent a valid block.
* 2. We do not have the target root in our DB.
*
* For (2), we should only be processing attestations when we should have
* all the available information. Note: if we do a weak-subjectivity sync
* it's possible that this situation could occur, but I think it's
* unlikely. For now, we will declare this to be an invalid message.
*
* The peer has published an invalid consensus message.
*/
}
},
Err(_) => {
// error is logged during the processing therefore no error is logged here
trace!(
AttnError::BadTargetEpoch => {
/*
* The attestation's target epoch did not match the epoch computed from its
* slot.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::NoCommitteeForSlotAndIndex { .. } => {
/*
* It is not possible to attest to the given committee in the given slot.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::NotExactlyOneAggregationBitSet(_) => {
/*
* The unaggregated attestation doesn't have exactly one aggregation bit set.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::AttestsToFutureBlock { .. } => {
/*
* The beacon_block_root is from a higher slot than the attestation.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::Invalid(_) => {
/*
* The attestation failed the state_processing verification.
*
* The peer has published an invalid consensus message.
*/
}
AttnError::BeaconChainError(e) => {
/*
* Lighthouse hit an unexpected error whilst processing the attestation. It
* should be impossible to trigger a `BeaconChainError` from the network,
* so we have a bug.
*
* It's not clear if the message is invalid/malicious.
*/
error!(
self.log,
"Erroneous gossip attestation ssz";
"ssz" => format!("0x{}", hex::encode(msg.as_ssz_bytes())),
"Unable to validate aggregate";
"peer_id" => format!("{:?}", peer_id),
"error" => format!("{:?}", e),
);
}
};
}
}
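/// The match arms above amount to a peer-fault policy; a condensed sketch of that
/// classification (an illustrative helper, not part of the processor's API):
fn peer_provably_faulty(error: &AttnError) -> bool {
    match error {
        // Bad signatures or malformed structure: the peer has provably published an
        // invalid consensus message.
        AttnError::InvalidSelectionProof { .. }
        | AttnError::InvalidSignature
        | AttnError::AggregatorPubkeyUnknown(_)
        | AttnError::AggregatorNotInCommittee { .. }
        | AttnError::ValidatorIndexTooHigh(_)
        | AttnError::BadTargetEpoch
        | AttnError::NoCommitteeForSlotAndIndex { .. }
        | AttnError::NotExactlyOneAggregationBitSet(_)
        | AttnError::AttestsToFutureBlock { .. }
        | AttnError::UnknownTargetRoot(_)
        | AttnError::Invalid(_) => true,
        // Clock mismatches, duplicates, unknown blocks and internal errors are not
        // conclusive evidence of fault.
        _ => false,
    }
}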
pub fn verify_aggregated_attestation_for_gossip(
&mut self,
peer_id: PeerId,
aggregate_and_proof: SignedAggregateAndProof<T::EthSpec>,
) -> Option<VerifiedAggregatedAttestation<T>> {
// This is provided to the error handling function to assist with debugging.
let beacon_block_root = aggregate_and_proof.message.aggregate.data.beacon_block_root;
self.chain
.verify_aggregated_attestation_for_gossip(aggregate_and_proof)
.map_err(|e| {
self.handle_attestation_verification_failure(
peer_id,
beacon_block_root,
"aggregated",
e,
)
})
.ok()
}
pub fn import_aggregated_attestation(
&mut self,
peer_id: PeerId,
verified_attestation: VerifiedAggregatedAttestation<T>,
) {
// This is provided to the error handling function to assist with debugging.
let beacon_block_root = verified_attestation.attestation().data.beacon_block_root;
self.apply_attestation_to_fork_choice(
peer_id.clone(),
beacon_block_root,
&verified_attestation,
);
if let Err(e) = self.chain.add_to_block_inclusion_pool(verified_attestation) {
debug!(
self.log,
"Attestation invalid for op pool";
"reason" => format!("{:?}", e),
"peer" => format!("{:?}", peer_id),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
}
pub fn verify_unaggregated_attestation_for_gossip(
&mut self,
peer_id: PeerId,
unaggregated_attestation: Attestation<T::EthSpec>,
) -> Option<VerifiedUnaggregatedAttestation<T>> {
// This is provided to the error handling function to assist with debugging.
let beacon_block_root = unaggregated_attestation.data.beacon_block_root;
self.chain
.verify_unaggregated_attestation_for_gossip(unaggregated_attestation)
.map_err(|e| {
self.handle_attestation_verification_failure(
peer_id,
beacon_block_root,
"unaggregated",
e,
)
})
.ok()
}
pub fn import_unaggregated_attestation(
&mut self,
peer_id: PeerId,
verified_attestation: VerifiedUnaggregatedAttestation<T>,
) {
// This is provided to the error handling function to assist with debugging.
let beacon_block_root = verified_attestation.attestation().data.beacon_block_root;
self.apply_attestation_to_fork_choice(
peer_id.clone(),
beacon_block_root,
&verified_attestation,
);
if let Err(e) = self
.chain
.add_to_naive_aggregation_pool(verified_attestation)
{
debug!(
self.log,
"Attestation invalid for agg pool";
"reason" => format!("{:?}", e),
"peer" => format!("{:?}", peer_id),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
}
/// Apply the attestation to fork choice, suppressing errors.
///
/// We suppress the errors when adding an attestation to fork choice since the spec
/// permits gossiping attestations that are invalid to be applied to fork choice.
///
/// An attestation that is invalid for fork choice can still be included in a block.
///
/// Reference:
/// https://github.com/ethereum/eth2.0-specs/issues/1408#issuecomment-617599260
fn apply_attestation_to_fork_choice<'a>(
&self,
peer_id: PeerId,
beacon_block_root: Hash256,
attestation: &'a impl IntoForkChoiceVerifiedAttestation<'a, T>,
) {
if let Err(e) = self.chain.apply_attestation_to_fork_choice(attestation) {
debug!(
self.log,
"Attestation invalid for fork choice";
"reason" => format!("{:?}", e),
"peer" => format!("{:?}", peer_id),
"beacon_block_root" => format!("{:?}", beacon_block_root)
)
}
}
}


@ -9,8 +9,7 @@ use network::NetworkMessage;
use ssz::Decode;
use store::{iter::AncestorIter, Store};
use types::{
Attestation, BeaconState, ChainSpec, CommitteeIndex, Epoch, EthSpec, Hash256, RelativeEpoch,
SignedAggregateAndProof, SignedBeaconBlock, Slot,
BeaconState, CommitteeIndex, Epoch, EthSpec, Hash256, RelativeEpoch, SignedBeaconBlock, Slot,
};
/// Parse a slot.
@ -247,59 +246,6 @@ pub fn publish_beacon_block_to_network<T: BeaconChainTypes + 'static>(
Ok(())
}
/// Publishes a raw un-aggregated attestation to the network.
pub fn publish_raw_attestations_to_network<T: BeaconChainTypes + 'static>(
mut chan: NetworkChannel<T::EthSpec>,
attestations: Vec<Attestation<T::EthSpec>>,
spec: &ChainSpec,
) -> Result<(), ApiError> {
let messages = attestations
.into_iter()
.map(|attestation| {
// create the gossip message to send to the network
let subnet_id = attestation
.subnet_id(spec)
.map_err(|e| ApiError::ServerError(format!("Unable to get subnet id: {:?}", e)))?;
Ok(PubsubMessage::Attestation(Box::new((
subnet_id,
attestation,
))))
})
.collect::<Result<Vec<_>, ApiError>>()?;
// Publish the attestations to the p2p network via gossipsub.
if let Err(e) = chan.try_send(NetworkMessage::Publish { messages }) {
return Err(ApiError::ServerError(format!(
"Unable to send new attestation to network: {:?}",
e
)));
}
Ok(())
}
/// Publishes an aggregated attestation to the network.
pub fn publish_aggregate_attestations_to_network<T: BeaconChainTypes + 'static>(
mut chan: NetworkChannel<T::EthSpec>,
signed_proofs: Vec<SignedAggregateAndProof<T::EthSpec>>,
) -> Result<(), ApiError> {
let messages = signed_proofs
.into_iter()
.map(|signed_proof| PubsubMessage::AggregateAndProofAttestation(Box::new(signed_proof)))
.collect::<Vec<_>>();
// Publish the attestations to the p2p network via gossipsub.
if let Err(e) = chan.try_send(NetworkMessage::Publish { messages }) {
return Err(ApiError::ServerError(format!(
"Unable to send new attestation to network: {:?}",
e
)));
}
Ok(())
}
#[cfg(test)]
mod test {
use super::*;

View File

@ -1,25 +1,23 @@
use crate::helpers::{
check_content_type_for_json, publish_aggregate_attestations_to_network,
publish_beacon_block_to_network, publish_raw_attestations_to_network,
};
use crate::helpers::{check_content_type_for_json, publish_beacon_block_to_network};
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult, BoxFut, NetworkChannel, UrlQuery};
use beacon_chain::{
AttestationProcessingOutcome, AttestationType, BeaconChain, BeaconChainTypes, BlockError,
attestation_verification::Error as AttnError, BeaconChain, BeaconChainTypes, BlockError,
StateSkipConfig,
};
use bls::PublicKeyBytes;
use eth2_libp2p::PubsubMessage;
use futures::{Future, Stream};
use hyper::{Body, Request};
use network::NetworkMessage;
use rayon::prelude::*;
use rest_types::{ValidatorDutiesRequest, ValidatorDutyBytes, ValidatorSubscription};
use slog::{error, info, warn, Logger};
use slog::{error, info, trace, warn, Logger};
use std::sync::Arc;
use types::beacon_state::EthSpec;
use types::{
Attestation, BeaconState, Epoch, RelativeEpoch, SignedAggregateAndProof, SignedBeaconBlock,
Slot,
Attestation, AttestationData, BeaconState, Epoch, RelativeEpoch, SelectionProof,
SignedAggregateAndProof, SignedBeaconBlock, Slot,
};
/// HTTP Handler to retrieve the duties for a set of validators during a particular epoch. This
@ -226,14 +224,12 @@ fn return_validator_duties<T: BeaconChainTypes>(
))
})?;
// Obtain the aggregator modulo
let aggregator_modulo = duties.map(|d| {
std::cmp::max(
1,
d.committee_len as u64
/ &beacon_chain.spec.target_aggregators_per_committee,
)
});
let aggregator_modulo = duties
.map(|duties| SelectionProof::modulo(duties.committee_len, &beacon_chain.spec))
.transpose()
.map_err(|e| {
ApiError::ServerError(format!("Unable to find modulo: {:?}", e))
})?;
let block_proposal_slots = validator_proposers
.iter()
@ -400,7 +396,7 @@ pub fn get_new_attestation<T: BeaconChainTypes>(
let index = query.committee_index()?;
let attestation = beacon_chain
.produce_attestation(slot, index)
.produce_unaggregated_attestation(slot, index)
.map_err(|e| ApiError::BadRequest(format!("Unable to produce attestation: {:?}", e)))?;
ResponseBuilder::new(&req)?.body(&attestation)
@ -450,73 +446,101 @@ pub fn publish_attestations<T: BeaconChainTypes>(
))
})
})
.and_then(move |attestations: Vec<Attestation<T::EthSpec>>| {
// Note: This is a new attestation from a validator. We want to process this and
// inform the validator whether the attestation was valid. In doing so, we store
// this un-aggregated raw attestation in the op_pool by default. This is
// sub-optimal: if we have no validators needing to aggregate, these don't need
// to be stored in the op-pool. The impact is minimal, however, as the op_pool gets pruned
// every slot.
attestations.par_iter().try_for_each(|attestation| {
// In accordance with the naive aggregation strategy, the validator client should
// only publish attestations to this endpoint with a single signature.
if attestation.aggregation_bits.num_set_bits() != 1 {
return Err(ApiError::BadRequest(format!("Attestation should have exactly one aggregation bit set")))
}
// TODO: we only need to store these attestations if we're aggregating for the
// given subnet.
let attestation_type = AttestationType::Unaggregated { should_store: true };
match beacon_chain.process_attestation(attestation.clone(), attestation_type) {
Ok(AttestationProcessingOutcome::Processed) => {
// Attestation was processed, publish via gossipsub
info!(
log,
"Attestation from local validator";
"target" => attestation.data.source.epoch,
"source" => attestation.data.source.epoch,
"index" => attestation.data.index,
"slot" => attestation.data.slot,
);
Ok(())
}
Ok(outcome) => {
warn!(
log,
"Invalid attestation from local validator";
"outcome" => format!("{:?}", outcome)
);
Err(ApiError::ProcessingError(format!(
"An Attestation could not be processed and has not been published: {:?}",
outcome
)))
}
Err(e) => {
error!(
log,
"Error whilst processing attestation";
"error" => format!("{:?}", e)
);
Err(ApiError::ServerError(format!(
"Error while processing attestation: {:?}",
e
)))
}
}
})?;
Ok((attestations, beacon_chain))
// Process all of the attestations _without_ exiting early if one fails.
.map(move |attestations: Vec<Attestation<T::EthSpec>>| {
attestations
.into_par_iter()
.enumerate()
.map(|(i, attestation)| {
process_unaggregated_attestation(
&beacon_chain,
network_chan.clone(),
attestation,
i,
&log,
)
})
.and_then(|(attestations, beacon_chain)| {
publish_raw_attestations_to_network::<T>(network_chan, attestations, &beacon_chain.spec)
.collect::<Vec<Result<_, _>>>()
})
// Iterate through all the results and return on the first `Err`.
//
// Note: this will only provide info about the _first_ failure, not all failures.
.and_then(|processing_results| {
processing_results.into_iter().try_for_each(|result| result)
})
.and_then(|_| response_builder?.body_no_ssz(&())),
)
}
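The shape of the chain above is deliberate: every attestation is processed and its result collected before the first error is surfaced, so one bad attestation cannot stop its siblings from being published. A minimal sketch of that collect-then-`try_for_each` pattern, with plain iterators standing in for rayon's `into_par_iter`:

fn process(i: usize) -> Result<(), String> {
    // Stand-in for per-attestation processing; item 1 fails.
    if i == 1 {
        Err(format!("item {} failed", i))
    } else {
        Ok(())
    }
}

fn main() {
    // All four items run to completion, regardless of failures...
    let results: Vec<Result<(), String>> = (0..4).map(process).collect();
    // ...and only then do we report the first failure.
    let outcome = results.into_iter().try_for_each(|result| result);
    assert_eq!(outcome, Err("item 1 failed".to_string()));
}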
/// Processes an unaggregated attestation that was included in a list of attestations with the
/// index `i`.
fn process_unaggregated_attestation<T: BeaconChainTypes>(
beacon_chain: &BeaconChain<T>,
mut network_chan: NetworkChannel<T::EthSpec>,
attestation: Attestation<T::EthSpec>,
i: usize,
log: &Logger,
) -> Result<(), ApiError> {
let data = &attestation.data.clone();
// Verify that the attestation is valid to be included on the gossip network.
let verified_attestation = beacon_chain
.verify_unaggregated_attestation_for_gossip(attestation.clone())
.map_err(|e| {
handle_attestation_error(
e,
&format!("unaggregated attestation {} failed gossip verification", i),
data,
log,
)
})?;
// Publish the attestation to the network
if let Err(e) = network_chan.try_send(NetworkMessage::Publish {
messages: vec![PubsubMessage::Attestation(Box::new((
attestation
.subnet_id(&beacon_chain.spec)
.map_err(|e| ApiError::ServerError(format!("Unable to get subnet id: {:?}", e)))?,
attestation,
)))],
}) {
return Err(ApiError::ServerError(format!(
"Unable to send unaggregated attestation {} to network: {:?}",
i, e
)));
}
beacon_chain
.apply_attestation_to_fork_choice(&verified_attestation)
.map_err(|e| {
handle_attestation_error(
e,
&format!(
"unaggregated attestation {} was unable to be added to fork choice",
i
),
data,
log,
)
})?;
beacon_chain
.add_to_naive_aggregation_pool(verified_attestation)
.map_err(|e| {
handle_attestation_error(
e,
&format!(
"unaggregated attestation {} was unable to be added to aggregation pool",
i
),
data,
log,
)
})?;
Ok(())
}
/// HTTP Handler to publish an Attestation, which has been signed by a validator.
pub fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
req: Request<Body>,
@ -540,90 +564,168 @@ pub fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
))
})
})
.and_then(move |signed_proofs: Vec<SignedAggregateAndProof<T::EthSpec>>| {
// Verify the signatures for the aggregate and proof and if valid process the
// aggregate
// TODO: Double check speed and logic consistency of handling current fork vs
// validator fork for signatures.
// TODO: More efficient way of getting a fork?
let fork = &beacon_chain.head()?.beacon_state.fork;
// TODO: Update to shift this task to dedicated task using await
signed_proofs.par_iter().try_for_each(|signed_proof| {
let agg_proof = &signed_proof.message;
let validator_pubkey = &beacon_chain.validator_pubkey(agg_proof.aggregator_index as usize)?.ok_or_else(|| {
warn!(
log,
"Unknown validator from local validator client";
);
ApiError::ProcessingError(format!("The validator is known"))
})?;
/*
* TODO: checking that `signed_proof.is_valid()` is not sufficient. It
* is also necessary to check that the validator is actually designated as an
* aggregator for this attestation.
*
* I (Paul H) will pick this up in a future PR.
*/
if signed_proof.is_valid(validator_pubkey, fork, beacon_chain.genesis_validators_root, &beacon_chain.spec) {
let attestation = &agg_proof.aggregate;
match beacon_chain.process_attestation(attestation.clone(), AttestationType::Aggregated) {
Ok(AttestationProcessingOutcome::Processed) => {
// Attestation was processed, publish via gossipsub
info!(
log,
"Attestation from local validator";
"target" => attestation.data.source.epoch,
"source" => attestation.data.source.epoch,
"index" => attestation.data.index,
"slot" => attestation.data.slot,
);
Ok(())
}
Ok(outcome) => {
warn!(
log,
"Invalid attestation from local validator";
"outcome" => format!("{:?}", outcome)
);
Err(ApiError::ProcessingError(format!(
"The Attestation could not be processed and has not been published: {:?}",
outcome
)))
}
Err(e) => {
error!(
log,
"Error whilst processing attestation";
"error" => format!("{:?}", e)
);
Err(ApiError::ServerError(format!(
"Error while processing attestation: {:?}",
e
)))
}
}
} else {
error!(
log,
"Invalid AggregateAndProof Signature"
);
Err(ApiError::ServerError(format!(
"Invalid AggregateAndProof Signature"
)))
}
})?;
Ok(signed_proofs)
// Process all of the aggregates _without_ exiting early if one fails.
.map(
move |signed_aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>| {
signed_aggregates
.into_par_iter()
.enumerate()
.map(|(i, signed_aggregate)| {
process_aggregated_attestation(
&beacon_chain,
network_chan.clone(),
signed_aggregate,
i,
&log,
)
})
.and_then(move |signed_proofs| {
publish_aggregate_attestations_to_network::<T>(network_chan, signed_proofs)
.collect::<Vec<Result<_, _>>>()
},
)
// Iterate through all the results and return on the first `Err`.
//
// Note: this will only provide info about the _first_ failure, not all failures.
.and_then(|processing_results| {
processing_results.into_iter().try_for_each(|result| result)
})
.and_then(|_| response_builder?.body_no_ssz(&())),
)
}
/// Processes an aggregated attestation that was included in a list of attestations with the index
/// `i`.
fn process_aggregated_attestation<T: BeaconChainTypes>(
beacon_chain: &BeaconChain<T>,
mut network_chan: NetworkChannel<T::EthSpec>,
signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
i: usize,
log: &Logger,
) -> Result<(), ApiError> {
let data = &signed_aggregate.message.aggregate.data.clone();
// Verify that the attestation is valid to be included on the gossip network.
//
// Using this gossip check for local validators is not necessarily ideal; there will be some
// attestations that we reject that could possibly be included in a block (e.g., attestations
// that are late by more than 1 epoch but less than 2). We can pick this back up if we notice
// that it's materially affecting validator profits. Until then, I'm hesitant to introduce yet
// _another_ attestation verification path.
let verified_attestation =
match beacon_chain.verify_aggregated_attestation_for_gossip(signed_aggregate.clone()) {
Ok(verified_attestation) => verified_attestation,
Err(AttnError::AttestationAlreadyKnown(attestation_root)) => {
trace!(
log,
"Ignored known attn from local validator";
"attn_root" => format!("{}", attestation_root)
);
// Exit early with success for a known attestation; there's no need to re-process
// an aggregate we already know.
return Ok(());
}
/*
* It's worth noting that we don't check for `Error::AggregatorAlreadyKnown` since (at
* the time of writing) we check for `AttestationAlreadyKnown` first.
*
* Given this, it's impossible to hit `Error::AggregatorAlreadyKnown` without that
* aggregator having already produced a conflicting aggregation. This is not slashable,
* but I think it's still the sort of condition we should error on, at least for now.
*/
Err(e) => {
return Err(handle_attestation_error(
e,
&format!("aggregated attestation {} failed gossip verification", i),
data,
log,
))
}
};
// Publish the attestation to the network
if let Err(e) = network_chan.try_send(NetworkMessage::Publish {
messages: vec![PubsubMessage::AggregateAndProofAttestation(Box::new(
signed_aggregate,
))],
}) {
return Err(ApiError::ServerError(format!(
"Unable to send aggregated attestation {} to network: {:?}",
i, e
)));
}
beacon_chain
.apply_attestation_to_fork_choice(&verified_attestation)
.map_err(|e| {
handle_attestation_error(
e,
&format!(
"aggregated attestation {} was unable to be added to fork choice",
i
),
data,
log,
)
})?;
beacon_chain
.add_to_block_inclusion_pool(verified_attestation)
.map_err(|e| {
handle_attestation_error(
e,
&format!(
"aggregated attestation {} was unable to be added to op pool",
i
),
data,
log,
)
})?;
Ok(())
}
/// Common handler for `AttnError` during attestation verification.
fn handle_attestation_error(
e: AttnError,
detail: &str,
data: &AttestationData,
log: &Logger,
) -> ApiError {
match e {
AttnError::BeaconChainError(e) => {
error!(
log,
"Internal error verifying local attestation";
"detail" => detail,
"error" => format!("{:?}", e),
"target" => data.target.epoch,
"source" => data.source.epoch,
"index" => data.index,
"slot" => data.slot,
);
ApiError::ServerError(format!(
"Internal error verifying local attestation. Error: {:?}. Detail: {}",
e, detail
))
}
e => {
error!(
log,
"Invalid local attestation";
"detail" => detail,
"reason" => format!("{:?}", e),
"target" => data.target.epoch,
"source" => data.source.epoch,
"index" => data.index,
"slot" => data.slot,
);
ApiError::ProcessingError(format!(
"Invalid local attestation. Error: {:?} Detail: {}",
e, detail
))
}
}
}

View File

@ -93,12 +93,21 @@ fn validator_produce_attestation() {
let genesis_validators_root = beacon_chain.genesis_validators_root;
let state = beacon_chain.head().expect("should get head").beacon_state;
let validator_index = 0;
// Find a validator that has duties in the current slot of the chain.
let mut validator_index = 0;
let duties = loop {
let duties = state
.get_attestation_duties(validator_index, RelativeEpoch::Current)
.expect("should have attestation duties cache")
.expect("should have attestation duties");
if duties.slot == node.client.beacon_chain().unwrap().slot().unwrap() {
break duties;
} else {
validator_index += 1
}
};
let mut attestation = env
.runtime()
.block_on(
@ -134,15 +143,18 @@ fn validator_produce_attestation() {
// Try publishing the attestation without a signature or a committee bit set, and ensure it
// is reported as invalid.
let publish_result = env.runtime().block_on(
let publish_status = env
.runtime()
.block_on(
remote_node
.http
.validator()
.publish_attestations(vec![attestation.clone()]),
);
)
.expect("should publish unsigned attestation");
assert!(
publish_result.is_err(),
"the unsigned published attestation should return error"
!publish_status.is_valid(),
"the unsigned published attestation should be invalid"
);
// Set the aggregation bit.
@ -224,6 +236,7 @@ fn validator_produce_attestation() {
let signed_aggregate_and_proof = SignedAggregateAndProof::from_aggregate(
validator_index as u64,
aggregated_attestation,
None,
&keypair.sk,
&state.fork,
genesis_validators_root,

View File

@ -22,6 +22,10 @@ pub enum Error {
/// There was an error attempting to read from a `BeaconState`. Block
/// validity was not determined.
BeaconStateError(BeaconStateError),
/// The `BeaconBlock` has a `proposer_index` that does not match the index we computed locally.
///
/// The block is invalid.
IncorrectBlockProposer { block: u64, local_shuffling: u64 },
/// Failed to load a signature set. The block may be invalid or we failed to process it.
SignatureSetError(SignatureSetError),
}
@ -34,7 +38,18 @@ impl From<BeaconStateError> for Error {
impl From<SignatureSetError> for Error {
fn from(e: SignatureSetError) -> Error {
Error::SignatureSetError(e)
match e {
// Make a special distinction for `IncorrectBlockProposer` since it indicates an
// invalid block, not an internal error.
SignatureSetError::IncorrectBlockProposer {
block,
local_shuffling,
} => Error::IncorrectBlockProposer {
block,
local_shuffling,
},
e => Error::SignatureSetError(e),
}
}
}

View File

@ -10,8 +10,8 @@ use tree_hash::TreeHash;
use types::{
AggregateSignature, AttesterSlashing, BeaconBlock, BeaconState, BeaconStateError, ChainSpec,
DepositData, Domain, EthSpec, Fork, Hash256, IndexedAttestation, ProposerSlashing, PublicKey,
Signature, SignedBeaconBlock, SignedBeaconBlockHeader, SignedRoot, SignedVoluntaryExit,
SigningRoot,
Signature, SignedAggregateAndProof, SignedBeaconBlock, SignedBeaconBlockHeader, SignedRoot,
SignedVoluntaryExit, SigningRoot,
};
pub type Result<T> = std::result::Result<T, Error>;
@ -26,6 +26,10 @@ pub enum Error {
/// Attempted to find the public key of a validator that does not exist. You cannot distinguish
/// between an error and an invalid block in this case.
ValidatorUnknown(u64),
/// The `BeaconBlock` has a `proposer_index` that does not match the index we computed locally.
///
/// The block is invalid.
IncorrectBlockProposer { block: u64, local_shuffling: u64 },
/// The public keys supplied do not match the number of objects requiring keys. Block validity
/// was not determined.
MismatchedPublicKeyLen { pubkey_len: usize, other_len: usize },
@ -73,6 +77,13 @@ where
let block = &signed_block.message;
let proposer_index = state.get_beacon_proposer_index(block.slot, spec)?;
if proposer_index as u64 != block.proposer_index {
return Err(Error::IncorrectBlockProposer {
block: block.proposer_index,
local_shuffling: proposer_index as u64,
});
}
let domain = spec.get_domain(
block.slot.epoch(T::slots_per_epoch()),
Domain::BeaconProposer,
@ -343,3 +354,74 @@ where
message,
))
}
pub fn signed_aggregate_selection_proof_signature_set<'a, T, F>(
get_pubkey: F,
signed_aggregate_and_proof: &'a SignedAggregateAndProof<T>,
fork: &Fork,
genesis_validators_root: Hash256,
spec: &'a ChainSpec,
) -> Result<SignatureSet>
where
T: EthSpec,
F: Fn(usize) -> Option<Cow<'a, PublicKey>>,
{
let slot = signed_aggregate_and_proof.message.aggregate.data.slot;
let domain = spec.get_domain(
slot.epoch(T::slots_per_epoch()),
Domain::SelectionProof,
fork,
genesis_validators_root,
);
let message = slot.signing_root(domain).as_bytes().to_vec();
let signature = &signed_aggregate_and_proof.message.selection_proof;
let validator_index = signed_aggregate_and_proof.message.aggregator_index;
Ok(SignatureSet::single(
signature,
get_pubkey(validator_index as usize)
.ok_or_else(|| Error::ValidatorUnknown(validator_index))?,
message,
))
}
pub fn signed_aggregate_signature_set<'a, T, F>(
get_pubkey: F,
signed_aggregate_and_proof: &'a SignedAggregateAndProof<T>,
fork: &Fork,
genesis_validators_root: Hash256,
spec: &'a ChainSpec,
) -> Result<SignatureSet>
where
T: EthSpec,
F: Fn(usize) -> Option<Cow<'a, PublicKey>>,
{
let target_epoch = signed_aggregate_and_proof
.message
.aggregate
.data
.target
.epoch;
let domain = spec.get_domain(
target_epoch,
Domain::AggregateAndProof,
fork,
genesis_validators_root,
);
let message = signed_aggregate_and_proof
.message
.signing_root(domain)
.as_bytes()
.to_vec();
let signature = &signed_aggregate_and_proof.signature;
let validator_index = signed_aggregate_and_proof.message.aggregator_index;
Ok(SignatureSet::single(
signature,
get_pubkey(validator_index as usize)
.ok_or_else(|| Error::ValidatorUnknown(validator_index))?,
message,
))
}
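A toy illustration of the domain separation between the two signature sets above: the selection proof signs the slot under a SelectionProof domain, while the outer signature signs the whole AggregateAndProof message under an AggregateAndProof domain, so the two messages can never collide. Strings stand in for the real SSZ signing roots and domains:

fn signing_root(object_root: &str, domain: &str) -> String {
    // Stand-in for SigningRoot { object_root, domain }.
    format!("{}|{}", domain, object_root)
}

fn main() {
    let slot = 42u64;
    let selection_msg = signing_root(&slot.to_string(), "DOMAIN_SELECTION_PROOF");
    let aggregate_msg = signing_root("aggregate_and_proof_root", "DOMAIN_AGGREGATE_AND_PROOF");
    // Distinct domains keep the two signatures from ever being interchangeable.
    assert_ne!(selection_msg, aggregate_msg);
}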

View File

@ -1039,7 +1039,12 @@ fn invalid_proposer_slashing_duplicate_slashing() {
let slashing = block.message.body.proposer_slashings[0].clone();
let slashed_proposer = slashing.signed_header_1.message.proposer_index;
block.message.body.proposer_slashings.push(slashing);
block
.message
.body
.proposer_slashings
.push(slashing)
.expect("should push slashing");
let result = per_block_processing(
&mut state,

View File

@ -27,21 +27,27 @@ pub struct AggregateAndProof<T: EthSpec> {
impl<T: EthSpec> AggregateAndProof<T> {
/// Produces a new `AggregateAndProof` with a `selection_proof` generated by signing
/// `aggregate.data.slot` with `secret_key`.
///
/// If `selection_proof.is_none()`, it will be computed locally.
pub fn from_aggregate(
aggregator_index: u64,
aggregate: Attestation<T>,
selection_proof: Option<SelectionProof>,
secret_key: &SecretKey,
fork: &Fork,
genesis_validators_root: Hash256,
spec: &ChainSpec,
) -> Self {
let selection_proof = SelectionProof::new::<T>(
let selection_proof = selection_proof
.unwrap_or_else(|| {
SelectionProof::new::<T>(
aggregate.data.slot,
secret_key,
fork,
genesis_validators_root,
spec,
)
})
.into();
Self {

View File

@ -1,5 +1,8 @@
use crate::{ChainSpec, Domain, EthSpec, Fork, Hash256, SecretKey, Signature, SignedRoot, Slot};
use crate::{
ChainSpec, Domain, EthSpec, Fork, Hash256, PublicKey, SecretKey, Signature, SignedRoot, Slot,
};
use safe_arith::{ArithError, SafeArith};
use std::cmp;
use std::convert::TryInto;
use tree_hash::TreeHash;
@ -26,7 +29,23 @@ impl SelectionProof {
Self(Signature::new(message.as_bytes(), secret_key))
}
pub fn is_aggregator(&self, modulo: u64) -> Result<bool, ArithError> {
/// Returns the "modulo" used for determining if a `SelectionProof` elects an aggregator.
pub fn modulo(committee_len: usize, spec: &ChainSpec) -> Result<u64, ArithError> {
Ok(cmp::max(
1,
(committee_len as u64).safe_div(spec.target_aggregators_per_committee)?,
))
}
pub fn is_aggregator(
&self,
committee_len: usize,
spec: &ChainSpec,
) -> Result<bool, ArithError> {
self.is_aggregator_from_modulo(Self::modulo(committee_len, spec)?)
}
pub fn is_aggregator_from_modulo(&self, modulo: u64) -> Result<bool, ArithError> {
let signature_hash = self.0.tree_hash_root();
let signature_hash_int = u64::from_le_bytes(
signature_hash[0..8]
@ -37,6 +56,25 @@ impl SelectionProof {
signature_hash_int.safe_rem(modulo).map(|rem| rem == 0)
}
pub fn verify<T: EthSpec>(
&self,
slot: Slot,
pubkey: &PublicKey,
fork: &Fork,
genesis_validators_root: Hash256,
spec: &ChainSpec,
) -> bool {
let domain = spec.get_domain(
slot.epoch(T::slots_per_epoch()),
Domain::SelectionProof,
fork,
genesis_validators_root,
);
let message = slot.signing_root(domain);
self.0.verify(message.as_bytes(), pubkey)
}
}
impl Into<Signature> for SelectionProof {
@ -44,3 +82,9 @@ impl Into<Signature> for SelectionProof {
self.0
}
}
impl From<Signature> for SelectionProof {
fn from(sig: Signature) -> Self {
Self(sig)
}
}
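A worked example of the selection logic above, assuming the spec's TARGET_AGGREGATORS_PER_COMMITTEE of 16: a committee of 128 gives modulo = max(1, 128 / 16) = 8, so on average one validator in eight elects itself as an aggregator. The hash bytes below are a stand-in for the hash of the selection proof signature:

const TARGET_AGGREGATORS_PER_COMMITTEE: u64 = 16;

fn modulo(committee_len: usize) -> u64 {
    std::cmp::max(1, committee_len as u64 / TARGET_AGGREGATORS_PER_COMMITTEE)
}

fn is_aggregator_from_modulo(signature_hash: &[u8; 32], modulo: u64) -> bool {
    // Interpret the first 8 bytes of the hash as a little-endian u64, as above.
    let int = u64::from_le_bytes(signature_hash[0..8].try_into().unwrap());
    int % modulo == 0
}

fn main() {
    assert_eq!(modulo(128), 8);
    assert_eq!(modulo(4), 1); // tiny committee: every member aggregates

    let mut hash = [0u8; 32];
    hash[0] = 16; // 16 % 8 == 0 -> elected
    assert!(is_aggregator_from_modulo(&hash, modulo(128)));
    hash[0] = 17; // 17 % 8 != 0 -> not elected
    assert!(!is_aggregator_from_modulo(&hash, modulo(128)));
}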

View File

@ -1,6 +1,6 @@
use super::{
AggregateAndProof, Attestation, ChainSpec, Domain, EthSpec, Fork, Hash256, PublicKey,
SecretKey, Signature, SignedRoot,
SecretKey, SelectionProof, Signature, SignedRoot,
};
use crate::test_utils::TestRandom;
use serde_derive::{Deserialize, Serialize};
@ -25,9 +25,12 @@ pub struct SignedAggregateAndProof<T: EthSpec> {
impl<T: EthSpec> SignedAggregateAndProof<T> {
/// Produces a new `SignedAggregateAndProof` with a `selection_proof` generated by signing
/// `aggregate.data.slot` with `secret_key`.
///
/// If `selection_proof.is_none()`, it will be computed locally.
pub fn from_aggregate(
aggregator_index: u64,
aggregate: Attestation<T>,
selection_proof: Option<SelectionProof>,
secret_key: &SecretKey,
fork: &Fork,
genesis_validators_root: Hash256,
@ -36,6 +39,7 @@ impl<T: EthSpec> SignedAggregateAndProof<T> {
let message = AggregateAndProof::from_aggregate(
aggregator_index,
aggregate,
selection_proof,
secret_key,
fork,
genesis_validators_root,

View File

@ -1,9 +1,11 @@
use crate::{test_utils::TestRandom, BeaconBlock, EthSpec, Hash256, Slot};
use std::fmt;
use crate::{
test_utils::TestRandom, BeaconBlock, ChainSpec, Domain, EthSpec, Fork, Hash256, PublicKey,
SignedRoot, SigningRoot, Slot,
};
use bls::Signature;
use serde_derive::{Deserialize, Serialize};
use ssz_derive::{Decode, Encode};
use std::fmt;
use test_random_derive::TestRandom;
use tree_hash::TreeHash;
@ -47,6 +49,38 @@ pub struct SignedBeaconBlock<E: EthSpec> {
}
impl<E: EthSpec> SignedBeaconBlock<E> {
/// Verify `self.signature`.
///
/// If the root of `block.message` is already known it can be passed in via `object_root_opt`.
/// Otherwise, it will be computed locally.
pub fn verify_signature(
&self,
object_root_opt: Option<Hash256>,
pubkey: &PublicKey,
fork: &Fork,
genesis_validators_root: Hash256,
spec: &ChainSpec,
) -> bool {
let domain = spec.get_domain(
self.message.slot.epoch(E::slots_per_epoch()),
Domain::BeaconProposer,
fork,
genesis_validators_root,
);
let message = if let Some(object_root) = object_root_opt {
SigningRoot {
object_root,
domain,
}
.tree_hash_root()
} else {
self.message.signing_root(domain)
};
self.signature.verify(message.as_bytes(), pubkey)
}
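A minimal sketch of the optional-precomputed-root pattern `verify_signature` uses: a caller that already holds the message root passes `Some(root)` to skip re-hashing, and both paths produce the same signing input. The hash function here is a cheap stand-in for the real tree hash:

fn expensive_root(message: &str) -> u64 {
    // Stand-in for the tree_hash_root / signing_root computation.
    message.bytes().map(u64::from).sum()
}

fn signing_input(message: &str, object_root_opt: Option<u64>) -> u64 {
    object_root_opt.unwrap_or_else(|| expensive_root(message))
}

fn main() {
    let message = "beacon_block";
    let cached = expensive_root(message);
    // The `Some` path avoids hashing the message a second time.
    assert_eq!(signing_input(message, Some(cached)), signing_input(message, None));
}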
/// Convenience accessor for the block's slot.
pub fn slot(&self) -> Slot {
self.message.slot

View File

@ -21,7 +21,7 @@ impl BlsCase for BlsSign {}
impl Case for BlsSign {
fn result(&self, _case_index: usize) -> Result<(), Error> {
// Convert private_key and message to required types
let mut sk = hex::decode(&self.input.privkey[2..])
let sk = hex::decode(&self.input.privkey[2..])
.map_err(|e| Error::FailedToParseTest(format!("{:?}", e)))?;
let sk = SecretKey::from_bytes(&sk).unwrap();
let msg = hex::decode(&self.input.message[2..])

View File

@ -1,12 +1,11 @@
use crate::{
duties_service::{DutiesService, DutyAndState},
duties_service::{DutiesService, DutyAndProof},
validator_store::ValidatorStore,
};
use environment::RuntimeContext;
use exit_future::Signal;
use futures::{future, Future, Stream};
use remote_beacon_node::{PublishStatus, RemoteBeaconNode};
use rest_types::ValidatorSubscription;
use slog::{crit, debug, info, trace};
use slot_clock::SlotClock;
use std::collections::HashMap;
@ -198,31 +197,15 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
.checked_sub(slot_duration / 3)
.unwrap_or_else(|| Duration::from_secs(0));
let epoch = slot.epoch(E::slots_per_epoch());
// Check if any attestation subscriptions are required. If there are new attestation duties for
// this epoch or the next, send them to the beacon node.
let mut duties_to_subscribe = service.duties_service.unsubscribed_epoch_duties(&epoch);
duties_to_subscribe.append(
&mut service
.duties_service
.unsubscribed_epoch_duties(&(epoch + 1)),
);
// spawn a task to subscribe all the duties
service
.context
.executor
.spawn(self.clone().send_subscriptions(duties_to_subscribe));
let duties_by_committee_index: HashMap<CommitteeIndex, Vec<DutyAndState>> = service
let duties_by_committee_index: HashMap<CommitteeIndex, Vec<DutyAndProof>> = service
.duties_service
.attesters(slot)
.into_iter()
.fold(HashMap::new(), |mut map, duty_and_state| {
if let Some(committee_index) = duty_and_state.duty.attestation_committee_index {
.fold(HashMap::new(), |mut map, duty_and_proof| {
if let Some(committee_index) = duty_and_proof.duty.attestation_committee_index {
let validator_duties = map.entry(committee_index).or_insert_with(|| vec![]);
validator_duties.push(duty_and_state);
validator_duties.push(duty_and_proof);
}
map
@ -250,81 +233,6 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
Ok(())
}
/// Subscribes any required validators to the beacon node for a particular slot.
///
/// This informs the beacon node that the validator has a duty on a particular
/// slot allowing the beacon node to connect to the required subnet and determine
/// if attestations need to be aggregated.
fn send_subscriptions(&self, duties: Vec<DutyAndState>) -> impl Future<Item = (), Error = ()> {
let service_1 = self.clone();
let num_duties = duties.len();
let log_1 = self.context.log.clone();
let log_2 = self.context.log.clone();
let (validator_subscriptions, successful_duties): (Vec<_>, Vec<_>) = duties
.into_iter()
.filter_map(|duty| {
let (slot, attestation_committee_index, _, validator_index) =
duty.attestation_duties()?;
let selection_proof = self
.validator_store
.produce_selection_proof(duty.validator_pubkey(), slot)?;
let modulo = duty.duty.aggregator_modulo?;
let subscription = ValidatorSubscription {
validator_index,
attestation_committee_index,
slot,
is_aggregator: selection_proof
.is_aggregator(modulo)
.map_err(|e| crit!(log_1, "Unable to determine aggregator: {:?}", e))
.ok()?,
};
Some((subscription, (duty, selection_proof)))
})
.unzip();
let num_failed_duties = num_duties - successful_duties.len();
self.beacon_node
.http
.validator()
.subscribe(validator_subscriptions)
.map_err(|e| format!("Failed to subscribe validators: {:?}", e))
.map(move |publish_status| match publish_status {
PublishStatus::Valid => info!(
log_1,
"Successfully subscribed validators";
"validators" => num_duties,
"failed_validators" => num_failed_duties,
),
PublishStatus::Invalid(msg) => crit!(
log_1,
"Validator Subscription was invalid";
"message" => msg,
),
PublishStatus::Unknown => {
crit!(log_1, "Unknown condition when publishing attestation")
}
})
.and_then(move |_| {
for (duty, selection_proof) in successful_duties {
service_1
.duties_service
.subscribe_duty(&duty.duty, selection_proof);
}
Ok(())
})
.map_err(move |e| {
crit!(
log_2,
"Error during attestation production";
"error" => e
)
})
}
/// Performs the first step of the attesting process: downloading `Attestation` objects,
/// signing them and returning them to the validator.
///
@ -338,7 +246,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
&self,
slot: Slot,
committee_index: CommitteeIndex,
validator_duties: Vec<DutyAndState>,
validator_duties: Vec<DutyAndProof>,
aggregate_production_instant: Instant,
) -> Box<dyn Future<Item = (), Error = ()> + Send> {
// There's no need to produce `Attestation` or `SignedAggregateAndProof` if we do not have
@ -421,7 +329,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
&self,
slot: Slot,
committee_index: CommitteeIndex,
validator_duties: Arc<Vec<DutyAndState>>,
validator_duties: Arc<Vec<DutyAndProof>>,
) -> Box<dyn Future<Item = Option<Attestation<E>>, Error = String> + Send> {
if validator_duties.is_empty() {
return Box::new(future::ok(None));
@ -522,6 +430,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
"head_block" => format!("{:?}", beacon_block_root),
"committee_index" => committee_index,
"slot" => slot.as_u64(),
"type" => "unaggregated",
),
PublishStatus::Invalid(msg) => crit!(
log,
@ -529,10 +438,12 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
"message" => msg,
"committee_index" => committee_index,
"slot" => slot.as_u64(),
"type" => "unaggregated",
),
PublishStatus::Unknown => crit!(
log,
"Unknown condition when publishing unagg. attestation"
),
PublishStatus::Unknown => {
crit!(log, "Unknown condition when publishing attestation")
}
})
.map(|()| Some(attestation)),
)
@ -565,7 +476,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
fn produce_and_publish_aggregates(
&self,
attestation: Attestation<E>,
validator_duties: Arc<Vec<DutyAndState>>,
validator_duties: Arc<Vec<DutyAndProof>>,
) -> impl Future<Item = (), Error = String> {
let service_1 = self.clone();
let log_1 = self.context.log.clone();
@ -581,23 +492,18 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
// a `SignedAggregateAndProof`
let signed_aggregate_and_proofs = validator_duties
.iter()
.filter_map(|duty_and_state| {
.filter_map(|duty_and_proof| {
// Do not produce a signed aggregate for validators that are not
// subscribed aggregators.
//
// Note: this function returns `false` if the validator is required to
// be an aggregator but has not yet subscribed.
if !duty_and_state.is_aggregator() {
return None;
}
let selection_proof = duty_and_proof.selection_proof.as_ref()?.clone();
let (duty_slot, duty_committee_index, _, validator_index) =
duty_and_state.attestation_duties().or_else(|| {
duty_and_proof.attestation_duties().or_else(|| {
crit!(log_1, "Missing duties when signing aggregate");
None
})?;
let pubkey = &duty_and_state.duty.validator_pubkey;
let pubkey = &duty_and_proof.duty.validator_pubkey;
let slot = attestation.data.slot;
let committee_index = attestation.data.index;
@ -612,6 +518,7 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
pubkey,
validator_index,
aggregated_attestation.clone(),
selection_proof,
)
{
Some(signed_aggregate_and_proof)
@ -637,11 +544,12 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
.map(move |(attestation, publish_status)| match publish_status {
PublishStatus::Valid => info!(
log_1,
"Successfully published aggregate attestations";
"Successfully published attestations";
"signatures" => attestation.aggregation_bits.num_set_bits(),
"head_block" => format!("{}", attestation.data.beacon_block_root),
"head_block" => format!("{:?}", attestation.data.beacon_block_root),
"committee_index" => attestation.data.index,
"slot" => attestation.data.slot.as_u64(),
"type" => "aggregated",
),
PublishStatus::Invalid(msg) => crit!(
log_1,
@ -649,9 +557,10 @@ impl<T: SlotClock + 'static, E: EthSpec> AttestationService<T, E> {
"message" => msg,
"committee_index" => attestation.data.index,
"slot" => attestation.data.slot.as_u64(),
"type" => "aggregated",
),
PublishStatus::Unknown => {
crit!(log_1, "Unknown condition when publishing attestation")
crit!(log_1, "Unknown condition when publishing agg. attestation")
}
}))
} else {

View File

@ -3,8 +3,8 @@ use environment::RuntimeContext;
use exit_future::Signal;
use futures::{future, Future, IntoFuture, Stream};
use parking_lot::RwLock;
use remote_beacon_node::RemoteBeaconNode;
use rest_types::{ValidatorDuty, ValidatorDutyBytes};
use remote_beacon_node::{PublishStatus, RemoteBeaconNode};
use rest_types::{ValidatorDuty, ValidatorDutyBytes, ValidatorSubscription};
use slog::{crit, debug, error, info, trace, warn};
use slot_clock::SlotClock;
use std::collections::HashMap;
@ -21,49 +21,73 @@ const TIME_DELAY_FROM_SLOT: Duration = Duration::from_millis(100);
/// Remove any duties where the `duties_epoch < current_epoch - PRUNE_DEPTH`.
const PRUNE_DEPTH: u64 = 4;
type BaseHashMap = HashMap<PublicKey, HashMap<Epoch, DutyAndState>>;
type BaseHashMap = HashMap<PublicKey, HashMap<Epoch, DutyAndProof>>;
#[derive(Debug, Clone)]
pub enum DutyState {
/// This duty has not been subscribed to the beacon node.
NotSubscribed,
/// The duty has been subscribed and the validator is an aggregator for this duty. The
/// selection proof is provided to construct the `AggregateAndProof` struct.
SubscribedAggregator(SelectionProof),
}
#[derive(Debug, Clone)]
pub struct DutyAndState {
pub struct DutyAndProof {
/// The validator duty.
pub duty: ValidatorDuty,
/// The current state of the validator duty.
state: DutyState,
/// Stores the selection proof if the duty elects the validator to be an aggregator.
pub selection_proof: Option<SelectionProof>,
}
impl DutyAndState {
/// Returns true if the duty is an aggregation duty (the validator must aggregate all
/// attestations).
pub fn is_aggregator(&self) -> bool {
match self.state {
DutyState::NotSubscribed => false,
DutyState::SubscribedAggregator(_) => true,
impl DutyAndProof {
/// Computes the selection proof for `self.validator_pubkey` and `self.duty.attestation_slot`,
/// storing it in `self.selection_proof` _if_ the validator is an aggregator. If the validator
/// is not an aggregator, `self.selection_proof` is set to `None`.
///
/// ## Errors
///
/// - `self.validator_pubkey` is not known in `validator_store`.
/// - There's an arith error during computation.
pub fn compute_selection_proof<T: SlotClock + 'static, E: EthSpec>(
&mut self,
validator_store: &ValidatorStore<T, E>,
) -> Result<(), String> {
let (modulo, slot) = if let (Some(modulo), Some(slot)) =
(self.duty.aggregator_modulo, self.duty.attestation_slot)
{
(modulo, slot)
} else {
// If there is no modulo or no attestation slot for the validator, we assume they are not
// activated and therefore not an aggregator.
self.selection_proof = None;
return Ok(());
};
let selection_proof = validator_store
.produce_selection_proof(&self.duty.validator_pubkey, slot)
.ok_or_else(|| "Validator pubkey missing from store".to_string())?;
self.selection_proof = selection_proof
.is_aggregator_from_modulo(modulo)
.map_err(|e| format!("Invalid modulo: {:?}", e))
.map(|is_aggregator| {
if is_aggregator {
Some(selection_proof)
} else {
None
}
})?;
Ok(())
}
/// Returns the selection proof if the duty is an aggregation duty.
pub fn selection_proof(&self) -> Option<SelectionProof> {
match &self.state {
DutyState::SubscribedAggregator(proof) => Some(proof.clone()),
_ => None,
}
/// Returns `true` if the two `Self` instances would result in the same beacon subscription.
pub fn subscription_eq(&self, other: &Self) -> bool {
self.selection_proof_eq(other)
&& self.duty.validator_index == other.duty.validator_index
&& self.duty.attestation_committee_index == other.duty.attestation_committee_index
&& self.duty.attestation_slot == other.duty.attestation_slot
}
/// Returns true if this duty has been subscribed with the beacon node.
pub fn is_subscribed(&self) -> bool {
match self.state {
DutyState::NotSubscribed => false,
DutyState::SubscribedAggregator(_) => true,
}
/// Returns `true` if the selection proof between `self` and `other` _should_ be equal.
///
/// It's important to note that this doesn't actually check `self.selection_proof`; instead it
/// checks to see if the inputs to computing the selection proof are equal.
fn selection_proof_eq(&self, other: &Self) -> bool {
self.duty.aggregator_modulo == other.duty.aggregator_modulo
&& self.duty.attestation_slot == other.duty.attestation_slot
}
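A sketch of why comparing inputs suffices here: BLS signing is deterministic, so identical (aggregator_modulo, attestation_slot) pairs must yield an identical selection proof, and the proof itself never needs to be recomputed just to decide whether a re-subscription is required. The struct below is a stand-in, not the real `DutyAndProof`:

#[derive(Debug, PartialEq)]
struct ProofInputs {
    aggregator_modulo: u64,
    attestation_slot: u64,
}

fn selection_proof_eq(a: &ProofInputs, b: &ProofInputs) -> bool {
    // Equal inputs imply an equal (deterministic) signature.
    a == b
}

fn main() {
    let known = ProofInputs { aggregator_modulo: 8, attestation_slot: 64 };
    let fresh = ProofInputs { aggregator_modulo: 8, attestation_slot: 64 };
    assert!(selection_proof_eq(&known, &fresh)); // no re-subscription required
}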
/// Returns the information required for an attesting validator, if they are scheduled to
@ -82,10 +106,10 @@ impl DutyAndState {
}
}
impl TryInto<DutyAndState> for ValidatorDutyBytes {
impl TryInto<DutyAndProof> for ValidatorDutyBytes {
type Error = String;
fn try_into(self) -> Result<DutyAndState, Self::Error> {
fn try_into(self) -> Result<DutyAndProof, Self::Error> {
let duty = ValidatorDuty {
validator_pubkey: (&self.validator_pubkey)
.try_into()
@ -97,9 +121,9 @@ impl TryInto<DutyAndState> for ValidatorDutyBytes {
block_proposal_slots: self.block_proposal_slots,
aggregator_modulo: self.aggregator_modulo,
};
Ok(DutyAndState {
Ok(DutyAndProof {
duty,
state: DutyState::NotSubscribed,
selection_proof: None,
})
}
}
@ -114,11 +138,24 @@ enum InsertOutcome {
Identical,
/// There were duties for this validator and epoch in the store that were different to the ones
/// provided. The existing duties were replaced.
Replaced,
Replaced { should_resubscribe: bool },
/// The given duties were invalid.
Invalid,
}
impl InsertOutcome {
/// Returns `true` if the outcome indicates that the validator _might_ require a subscription.
pub fn is_subscription_candidate(self) -> bool {
match self {
InsertOutcome::Replaced { should_resubscribe } => should_resubscribe,
InsertOutcome::NewValidator => true,
InsertOutcome::NewEpoch => true,
InsertOutcome::Identical => false,
InsertOutcome::Invalid => false,
}
}
}
#[derive(Default)]
pub struct DutiesStore {
store: RwLock<BaseHashMap>,
@ -173,49 +210,7 @@ impl DutiesStore {
.collect()
}
/// Gets a list of validator duties for an epoch that have not yet been subscribed
/// to the beacon node.
// Note: Potentially we should modify the data structure to store the unsubscribed epoch duties for validator clients with a large number of validators. This currently adds an O(N) search each slot.
fn unsubscribed_epoch_duties(&self, epoch: &Epoch) -> Vec<DutyAndState> {
self.store
.read()
.iter()
.filter_map(|(_validator_pubkey, validator_map)| {
validator_map.get(epoch).and_then(|duty_and_state| {
if !duty_and_state.is_subscribed() {
Some(duty_and_state)
} else {
None
}
})
})
.cloned()
.collect()
}
/// Marks a duty as being subscribed to the beacon node. This is called by the attestation
/// service once it has been sent.
fn set_duty_state(
&self,
validator: &PublicKey,
slot: Slot,
state: DutyState,
slots_per_epoch: u64,
) {
let epoch = slot.epoch(slots_per_epoch);
let mut store = self.store.write();
if let Some(map) = store.get_mut(validator) {
if let Some(duty) = map.get_mut(&epoch) {
if duty.duty.attestation_slot == Some(slot) {
// set the duty state
duty.state = state;
}
}
}
}
fn attesters(&self, slot: Slot, slots_per_epoch: u64) -> Vec<DutyAndState> {
fn attesters(&self, slot: Slot, slots_per_epoch: u64) -> Vec<DutyAndProof> {
self.store
.read()
.iter()
@ -236,27 +231,49 @@ impl DutiesStore {
.collect()
}
fn insert(&self, epoch: Epoch, duties: DutyAndState, slots_per_epoch: u64) -> InsertOutcome {
fn insert<T: SlotClock + 'static, E: EthSpec>(
&self,
epoch: Epoch,
mut duties: DutyAndProof,
slots_per_epoch: u64,
validator_store: &ValidatorStore<T, E>,
) -> Result<InsertOutcome, String> {
let mut store = self.store.write();
if !duties_match_epoch(&duties.duty, epoch, slots_per_epoch) {
return InsertOutcome::Invalid;
return Ok(InsertOutcome::Invalid);
}
// TODO: refactor with Entry.
if let Some(validator_map) = store.get_mut(&duties.duty.validator_pubkey) {
if let Some(known_duties) = validator_map.get_mut(&epoch) {
if known_duties.duty == duties.duty {
InsertOutcome::Identical
Ok(InsertOutcome::Identical)
} else {
// Compute the selection proof.
duties.compute_selection_proof(validator_store)?;
// Determine if a re-subscription is required.
let should_resubscribe = duties.subscription_eq(known_duties);
// Replace the existing duties.
*known_duties = duties;
InsertOutcome::Replaced
Ok(InsertOutcome::Replaced { should_resubscribe })
}
} else {
// Compute the selection proof.
duties.compute_selection_proof(validator_store)?;
validator_map.insert(epoch, duties);
InsertOutcome::NewEpoch
Ok(InsertOutcome::NewEpoch)
}
} else {
// Compute the selection proof.
duties.compute_selection_proof(validator_store)?;
let validator_pubkey = duties.duty.validator_pubkey.clone();
let mut validator_map = HashMap::new();
@ -264,7 +281,7 @@ impl DutiesStore {
store.insert(validator_pubkey, validator_map);
InsertOutcome::NewValidator
Ok(InsertOutcome::NewValidator)
}
}
@ -408,29 +425,10 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
}
/// Returns all `ValidatorDuty` for the given `slot`.
pub fn attesters(&self, slot: Slot) -> Vec<DutyAndState> {
pub fn attesters(&self, slot: Slot) -> Vec<DutyAndProof> {
self.store.attesters(slot, E::slots_per_epoch())
}
/// Returns all `ValidatorDuty` that have not been registered with the beacon node.
pub fn unsubscribed_epoch_duties(&self, epoch: &Epoch) -> Vec<DutyAndState> {
self.store.unsubscribed_epoch_duties(epoch)
}
/// Marks the duty as being subscribed to the beacon node.
///
/// If the duty is to be marked as an aggregator duty, a selection proof is also provided.
pub fn subscribe_duty(&self, duty: &ValidatorDuty, proof: SelectionProof) {
if let Some(slot) = duty.attestation_slot {
self.store.set_duty_state(
&duty.validator_pubkey,
slot,
DutyState::SubscribedAggregator(proof),
E::slots_per_epoch(),
)
}
}
/// Start the service that periodically polls the beacon node for validator duties.
pub fn start_update_service(&self, spec: &ChainSpec) -> Result<Signal, String> {
let log = self.context.log.clone();
@ -569,7 +567,8 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
/// Attempt to download the duties of all managed validators for the given `epoch`.
fn update_epoch(self, epoch: Epoch) -> impl Future<Item = (), Error = String> {
let service_1 = self.clone();
let service_2 = self;
let service_2 = self.clone();
let service_3 = self;
let pubkeys = service_1.validator_store.voting_pubkeys();
service_1
@ -588,13 +587,31 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
let mut replaced = 0;
let mut invalid = 0;
all_duties.into_iter().try_for_each::<_, Result<_, String>>(|remote_duties| {
let duties: DutyAndState = remote_duties.try_into()?;
// For each of the duties, attempt to insert them into our local store and build a
// list of new or changed selection proofs for any aggregating validators.
let validator_subscriptions = all_duties.into_iter().filter_map(|remote_duties| {
// Convert the remote duties into our local representation.
let duties: DutyAndProof = remote_duties
.try_into()
.map_err(|e| error!(
log,
"Unable to convert remote duties";
"error" => e
))
.ok()?;
match service_2
// Attempt to update our local store.
let outcome = service_2
.store
.insert(epoch, duties.clone(), E::slots_per_epoch())
{
.insert(epoch, duties.clone(), E::slots_per_epoch(), &service_2.validator_store)
.map_err(|e| error!(
log,
"Unable to store duties";
"error" => e
))
.ok()?;
match &outcome {
InsertOutcome::NewValidator => {
debug!(
log,
@ -603,16 +620,25 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
"attestation_slot" => format!("{:?}", &duties.duty.attestation_slot),
"validator" => format!("{:?}", &duties.duty.validator_pubkey)
);
new_validator += 1
new_validator += 1;
}
InsertOutcome::NewEpoch => new_epoch += 1,
InsertOutcome::Identical => identical += 1,
InsertOutcome::Replaced => replaced += 1,
InsertOutcome::Replaced { .. } => replaced += 1,
InsertOutcome::Invalid => invalid += 1,
};
Ok(())
})?;
if outcome.is_subscription_candidate() {
Some(ValidatorSubscription {
validator_index: duties.duty.validator_index?,
attestation_committee_index: duties.duty.attestation_committee_index?,
slot: duties.duty.attestation_slot?,
is_aggregator: duties.selection_proof.is_some(),
})
} else {
None
}
}).collect::<Vec<_>>();
if invalid > 0 {
error!(
@ -641,7 +667,51 @@ impl<T: SlotClock + 'static, E: EthSpec> DutiesService<T, E> {
)
}
Ok(())
Ok(validator_subscriptions)
})
.and_then::<_, Box<dyn Future<Item = _, Error = _> + Send>>(move |validator_subscriptions| {
let log = service_3.context.log.clone();
let count = validator_subscriptions.len();
if count == 0 {
debug!(
log,
"No new subscriptions required"
);
Box::new(future::ok(()))
} else {
Box::new(service_3.beacon_node
.http
.validator()
.subscribe(validator_subscriptions)
.map_err(|e| format!("Failed to subscribe validators: {:?}", e))
.map(move |status| {
match status {
PublishStatus::Valid => {
debug!(
log,
"Successfully subscribed validators";
"count" => count
)
},
PublishStatus::Unknown => {
error!(
log,
"Unknown response from subscription";
)
},
PublishStatus::Invalid(e) => {
error!(
log,
"Failed to subscribe validator";
"error" => e
)
},
};
}))
}
})
}
}

View File

@ -224,6 +224,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
validator_pubkey: &PublicKey,
validator_index: u64,
aggregate: Attestation<E>,
selection_proof: SelectionProof,
) -> Option<SignedAggregateAndProof<E>> {
let validators = self.validators.read();
let voting_keypair = validators.get(validator_pubkey)?.voting_keypair.as_ref()?;
@ -231,6 +232,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
Some(SignedAggregateAndProof::from_aggregate(
validator_index,
aggregate,
Some(selection_proof),
&voting_keypair.sk,
&self.fork()?,
self.genesis_validators_root,