use crate::helpers::{check_content_type_for_json, publish_beacon_block_to_network};
use crate::response_builder::ResponseBuilder;
use crate::{ApiError, ApiResult, NetworkChannel, UrlQuery};
use beacon_chain::{
    attestation_verification::Error as AttnError, BeaconChain, BeaconChainTypes, BlockError,
    StateSkipConfig,
};
use bls::PublicKeyBytes;
use eth2_libp2p::PubsubMessage;
use hyper::{Body, Request};
use network::NetworkMessage;
use rayon::prelude::*;
use rest_types::{ValidatorDutiesRequest, ValidatorDutyBytes, ValidatorSubscription};
use slog::{error, info, trace, warn, Logger};
use std::sync::Arc;
use types::beacon_state::EthSpec;
use types::{
    Attestation, AttestationData, BeaconState, Epoch, RelativeEpoch, SelectionProof,
    SignedAggregateAndProof, SignedBeaconBlock, Slot,
};

/// HTTP Handler to retrieve the duties for a set of validators during a particular epoch. This
/// method allows for collecting bulk sets of validator duties without risking exceeding the max
/// URL length with query pairs.
pub async fn post_validator_duties<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice::<ValidatorDutiesRequest>(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to parse JSON into ValidatorDutiesRequest: {:?}",
                e
            ))
        })
        .and_then(|bulk_request| {
            return_validator_duties(
                beacon_chain,
                bulk_request.epoch,
                bulk_request.pubkeys.into_iter().map(Into::into).collect(),
            )
        })
        .and_then(|duties| response_builder?.body_no_ssz(&duties))
}
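
// Illustrative request/response shapes for the handler above. This is a sketch: the
// JSON field names are assumed to mirror the `epoch` and `pubkeys` fields of
// `ValidatorDutiesRequest` and the fields of `ValidatorDutyBytes` used in this file,
// and the pubkey values are placeholders.
//
// Request:
//     { "epoch": 10, "pubkeys": ["0xa5e8702e...", "0x98f87bc7..."] }
//
// Response (one entry per requested pubkey; `null` fields for unknown validators):
//     [{
//         "validator_pubkey": "0xa5e8702e...",
//         "validator_index": 14,
//         "attestation_slot": 321,
//         "attestation_committee_index": 2,
//         "attestation_committee_position": 7,
//         "block_proposal_slots": [],
//         "aggregator_modulo": 1
//     }]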

/// HTTP Handler to retrieve subscriptions for a set of validators. This allows the node to
/// organise peer discovery and topic subscription for known validators.
pub async fn post_validator_subscriptions<T: BeaconChainTypes>(
    req: Request<Body>,
    network_chan: NetworkChannel<T::EthSpec>,
) -> ApiResult {
    try_future!(check_content_type_for_json(&req));
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to parse JSON into ValidatorSubscriptions: {:?}",
                e
            ))
        })
        .and_then(move |subscriptions: Vec<ValidatorSubscription>| {
            network_chan
                .send(NetworkMessage::Subscribe { subscriptions })
                .map_err(|e| {
                    ApiError::ServerError(format!(
                        "Unable to send subscriptions to the network: {:?}",
                        e
                    ))
                })?;
            Ok(())
        })
        .and_then(|_| response_builder?.body_no_ssz(&()))
}

/// HTTP Handler to retrieve all validator duties for the given epoch.
pub fn get_all_validator_duties<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let query = UrlQuery::from_request(&req)?;

    let epoch = query.epoch()?;

    let state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;

    let validator_pubkeys = state
        .validators
        .iter()
        .map(|validator| validator.pubkey.clone())
        .collect();

    let duties = return_validator_duties(beacon_chain, epoch, validator_pubkeys)?;

    ResponseBuilder::new(&req)?.body_no_ssz(&duties)
}

/// HTTP Handler to retrieve all active validator duties for the given epoch.
pub fn get_active_validator_duties<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let query = UrlQuery::from_request(&req)?;

    let epoch = query.epoch()?;

    let state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;

    let validator_pubkeys = state
        .validators
        .iter()
        .filter(|validator| validator.is_active_at(state.current_epoch()))
        .map(|validator| validator.pubkey.clone())
        .collect();

    let duties = return_validator_duties(beacon_chain, epoch, validator_pubkeys)?;

    ResponseBuilder::new(&req)?.body_no_ssz(&duties)
}

/// Helper function to return the state that can be used to determine the duties for some `epoch`.
pub fn get_state_for_epoch<T: BeaconChainTypes>(
    beacon_chain: &BeaconChain<T>,
    epoch: Epoch,
    config: StateSkipConfig,
) -> Result<BeaconState<T::EthSpec>, ApiError> {
    let slots_per_epoch = T::EthSpec::slots_per_epoch();
    let head_epoch = beacon_chain.head()?.beacon_state.current_epoch();

    if RelativeEpoch::from_epoch(head_epoch, epoch).is_ok() {
        Ok(beacon_chain.head()?.beacon_state)
    } else {
        let slot = if epoch > head_epoch {
            // Move to the first slot of the epoch prior to the request.
            //
            // Taking advantage of saturating epoch subtraction.
            (epoch - 1).start_slot(slots_per_epoch)
        } else {
            // Move to the end of the epoch following the target.
            //
            // Taking advantage of saturating slot subtraction.
            (epoch + 2).start_slot(slots_per_epoch) - 1
        };

        beacon_chain.state_at_slot(slot, config).map_err(|e| {
            ApiError::ServerError(format!("Unable to load state for epoch {}: {:?}", epoch, e))
        })
    }
}
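
// Worked example for the skip logic above, assuming 32 slots per epoch and a head
// state in epoch 5 (illustrative numbers, not taken from this codebase):
//
// - Requesting epoch 8 (ahead of the head): the state is advanced to slot
//   (8 - 1) * 32 = 224, the first slot of epoch 7, so the requested epoch is the
//   state's next epoch.
// - Requesting epoch 2 (behind the head): the state is loaded at slot
//   (2 + 2) * 32 - 1 = 127, the last slot of epoch 3, so the requested epoch is the
//   state's previous epoch.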

/// Helper function to get the duties for some `validator_pubkeys` in some `epoch`.
fn return_validator_duties<T: BeaconChainTypes>(
    beacon_chain: Arc<BeaconChain<T>>,
    epoch: Epoch,
    validator_pubkeys: Vec<PublicKeyBytes>,
) -> Result<Vec<ValidatorDutyBytes>, ApiError> {
    let mut state = get_state_for_epoch(&beacon_chain, epoch, StateSkipConfig::WithoutStateRoots)?;

    let relative_epoch = RelativeEpoch::from_epoch(state.current_epoch(), epoch)
        .map_err(|_| ApiError::ServerError(String::from("Loaded state is in the wrong epoch")))?;

    state.update_pubkey_cache()?;
    state
        .build_committee_cache(relative_epoch, &beacon_chain.spec)
        .map_err(|e| ApiError::ServerError(format!("Unable to build committee cache: {:?}", e)))?;
    state
        .update_pubkey_cache()
        .map_err(|e| ApiError::ServerError(format!("Unable to build pubkey cache: {:?}", e)))?;

    // Get a list of all validators for this epoch.
    //
    // Used for quickly determining the slot for a proposer.
    let validator_proposers: Vec<(usize, Slot)> = epoch
        .slot_iter(T::EthSpec::slots_per_epoch())
        .map(|slot| {
            state
                .get_beacon_proposer_index(slot, &beacon_chain.spec)
                .map(|i| (i, slot))
                .map_err(|e| {
                    ApiError::ServerError(format!(
                        "Unable to get proposer index for validator: {:?}",
                        e
                    ))
                })
        })
        .collect::<Result<Vec<_>, _>>()?;

    validator_pubkeys
        .into_iter()
        .map(|validator_pubkey| {
            // The `beacon_chain` can return a validator index that does not exist in all states.
            // Therefore, we must check to ensure that the validator index is valid for our
            // `state`.
            let validator_index = beacon_chain
                .validator_index(&validator_pubkey)
                .map_err(|e| {
                    ApiError::ServerError(format!("Unable to get validator index: {:?}", e))
                })?
                .filter(|i| *i < state.validators.len());

            if let Some(validator_index) = validator_index {
                let duties = state
                    .get_attestation_duties(validator_index, relative_epoch)
                    .map_err(|e| {
                        ApiError::ServerError(format!(
                            "Unable to obtain attestation duties: {:?}",
                            e
                        ))
                    })?;

                let aggregator_modulo = duties
                    .map(|duties| SelectionProof::modulo(duties.committee_len, &beacon_chain.spec))
                    .transpose()
                    .map_err(|e| {
                        ApiError::ServerError(format!("Unable to find modulo: {:?}", e))
                    })?;
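
                // How this value is consumed (an assumption drawn from the eth2 spec's
                // `is_aggregator`, not from code in this file): the validator signs the
                // slot, and it is an aggregator for the committee iff
                // `u64::from_le_bytes(hash(slot_signature)[0..8]) % aggregator_modulo == 0`,
                // where the modulo is derived from the committee length and
                // TARGET_AGGREGATORS_PER_COMMITTEE.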

                let block_proposal_slots = validator_proposers
                    .iter()
                    .filter(|(i, _slot)| validator_index == *i)
                    .map(|(_i, slot)| *slot)
                    .collect();

                Ok(ValidatorDutyBytes {
                    validator_pubkey,
                    validator_index: Some(validator_index as u64),
                    attestation_slot: duties.map(|d| d.slot),
                    attestation_committee_index: duties.map(|d| d.index),
                    attestation_committee_position: duties.map(|d| d.committee_position),
                    block_proposal_slots,
                    aggregator_modulo,
                })
            } else {
                Ok(ValidatorDutyBytes {
                    validator_pubkey,
                    validator_index: None,
                    attestation_slot: None,
                    attestation_committee_index: None,
                    attestation_committee_position: None,
                    block_proposal_slots: vec![],
                    aggregator_modulo: None,
                })
            }
        })
        .collect::<Result<Vec<_>, ApiError>>()
}

/// HTTP Handler to produce a new BeaconBlock from the current state, ready to be signed by a validator.
pub fn get_new_beacon_block<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
    log: Logger,
) -> ApiResult {
    let query = UrlQuery::from_request(&req)?;

    let slot = query.slot()?;
    let randao_reveal = query.randao_reveal()?;

    let (new_block, _state) = beacon_chain
        .produce_block(randao_reveal, slot)
        .map_err(|e| {
            error!(
                log,
                "Error whilst producing block";
                "error" => format!("{:?}", e)
            );

            ApiError::ServerError(format!(
                "Beacon node is not able to produce a block: {:?}",
                e
            ))
        })?;

    ResponseBuilder::new(&req)?.body(&new_block)
}
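
// Illustrative invocation of the handler above. The route prefix is defined in the
// router, not in this file, and the values are placeholders; only the parameter
// names `slot` and `randao_reveal` follow the query accessors used above:
//
//     GET <block-production-route>?slot=42&randao_reveal=0x8e65...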

/// HTTP Handler to publish a SignedBeaconBlock, which has been signed by a validator.
pub async fn publish_beacon_block<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
    network_chan: NetworkChannel<T::EthSpec>,
    log: Logger,
) -> ApiResult {
    try_future!(check_content_type_for_json(&req));
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!("Unable to parse JSON into SignedBeaconBlock: {:?}", e))
        })
        .and_then(move |block: SignedBeaconBlock<T::EthSpec>| {
            let slot = block.slot();
            match beacon_chain.process_block(block.clone()) {
                Ok(block_root) => {
                    // Block was processed, publish via gossipsub
                    info!(
                        log,
                        "Block from local validator";
                        "block_root" => format!("{}", block_root),
                        "block_slot" => slot,
                    );

                    publish_beacon_block_to_network::<T>(network_chan, block)?;

                    // Run the fork choice algorithm and enshrine a new canonical head, if
                    // found.
                    //
                    // The new head may or may not be the block we just received.
                    if let Err(e) = beacon_chain.fork_choice() {
                        error!(
                            log,
                            "Failed to find beacon chain head";
                            "error" => format!("{:?}", e)
                        );
                    } else {
                        // In the best case, validators should produce blocks that become the
                        // head.
                        //
                        // Potential reasons this may not be the case:
                        //
                        // - A quick re-org between block produce and publish.
                        // - Excessive time between block produce and publish.
                        // - A validator is using another beacon node to produce blocks and
                        //   submitting them here.
                        if beacon_chain.head()?.beacon_block_root != block_root {
                            warn!(
                                log,
                                "Block from validator is not head";
                                "desc" => "potential re-org",
                            );
                        }
                    }

                    Ok(())
                }
                Err(BlockError::BeaconChainError(e)) => {
                    error!(
                        log,
                        "Error whilst processing block";
                        "error" => format!("{:?}", e)
                    );

                    Err(ApiError::ServerError(format!(
                        "Error while processing block: {:?}",
                        e
                    )))
                }
                Err(other) => {
                    warn!(
                        log,
                        "Invalid block from local validator";
                        "outcome" => format!("{:?}", other)
                    );

                    Err(ApiError::ProcessingError(format!(
                        "The SignedBeaconBlock could not be processed and has not been published: {:?}",
                        other
                    )))
                }
            }
        })
        .and_then(|_| response_builder?.body_no_ssz(&()))
}

/// HTTP Handler to produce a new Attestation from the current state, ready to be signed by a validator.
pub fn get_new_attestation<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let query = UrlQuery::from_request(&req)?;

    let slot = query.slot()?;
    let index = query.committee_index()?;

    let attestation = beacon_chain
        .produce_unaggregated_attestation(slot, index)
        .map_err(|e| ApiError::BadRequest(format!("Unable to produce attestation: {:?}", e)))?;

    ResponseBuilder::new(&req)?.body(&attestation)
}

/// HTTP Handler to retrieve the aggregate attestation for a slot
pub fn get_aggregate_attestation<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
) -> ApiResult {
    let query = UrlQuery::from_request(&req)?;

    let attestation_data = query.attestation_data()?;

    match beacon_chain.get_aggregated_attestation(&attestation_data) {
        Ok(Some(attestation)) => ResponseBuilder::new(&req)?.body(&attestation),
        Ok(None) => Err(ApiError::NotFound(format!(
            "No matching aggregate attestation for slot {:?} is known in slot {:?}",
            attestation_data.slot,
            beacon_chain.slot()
        ))),
        Err(e) => Err(ApiError::ServerError(format!(
            "Unable to obtain attestation: {:?}",
            e
        ))),
    }
}

/// HTTP Handler to publish a list of Attestations, which have been signed by a number of validators.
pub async fn publish_attestations<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
    network_chan: NetworkChannel<T::EthSpec>,
    log: Logger,
) -> ApiResult {
    try_future!(check_content_type_for_json(&req));
    let response_builder = ResponseBuilder::new(&req);

    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;

    serde_json::from_slice(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to deserialize JSON into a list of attestations: {:?}",
                e
            ))
        })
        // Process all of the attestations _without_ exiting early if one fails.
        .map(move |attestations: Vec<Attestation<T::EthSpec>>| {
            attestations
                .into_par_iter()
                .enumerate()
                .map(|(i, attestation)| {
                    process_unaggregated_attestation(
                        &beacon_chain,
                        network_chan.clone(),
                        attestation,
                        i,
                        &log,
                    )
                })
                .collect::<Vec<Result<_, _>>>()
        })
        // Iterate through all the results and return on the first `Err`.
        //
        // Note: this will only provide info about the _first_ failure, not all failures.
        .and_then(|processing_results| processing_results.into_iter().try_for_each(|result| result))
        .and_then(|_| response_builder?.body_no_ssz(&()))
}

/// Processes an unaggregated attestation that was included in a list of attestations with the
/// index `i`.
fn process_unaggregated_attestation<T: BeaconChainTypes>(
    beacon_chain: &BeaconChain<T>,
    network_chan: NetworkChannel<T::EthSpec>,
    attestation: Attestation<T::EthSpec>,
    i: usize,
    log: &Logger,
) -> Result<(), ApiError> {
    let data = attestation.data.clone();

    // Verify that the attestation is valid to be included on the gossip network.
    let verified_attestation = beacon_chain
        .verify_unaggregated_attestation_for_gossip(attestation.clone())
        .map_err(|e| {
            handle_attestation_error(
                e,
                &format!("unaggregated attestation {} failed gossip verification", i),
                &data,
                log,
            )
        })?;

    // Publish the attestation to the network
    if let Err(e) = network_chan.send(NetworkMessage::Publish {
        messages: vec![PubsubMessage::Attestation(Box::new((
            attestation
                .subnet_id(&beacon_chain.spec)
                .map_err(|e| ApiError::ServerError(format!("Unable to get subnet id: {:?}", e)))?,
            attestation,
        )))],
    }) {
        return Err(ApiError::ServerError(format!(
            "Unable to send unaggregated attestation {} to network: {:?}",
            i, e
        )));
    }

    beacon_chain
        .apply_attestation_to_fork_choice(&verified_attestation)
        .map_err(|e| {
            handle_attestation_error(
                e,
                &format!(
                    "unaggregated attestation {} was unable to be added to fork choice",
                    i
                ),
                &data,
                log,
            )
        })?;

    beacon_chain
        .add_to_naive_aggregation_pool(verified_attestation)
        .map_err(|e| {
            handle_attestation_error(
                e,
                &format!(
                    "unaggregated attestation {} was unable to be added to aggregation pool",
                    i
                ),
                &data,
                log,
            )
        })?;

    Ok(())
}

/// HTTP Handler to publish a list of SignedAggregateAndProofs, which have been signed by validators.
pub async fn publish_aggregate_and_proofs<T: BeaconChainTypes>(
    req: Request<Body>,
    beacon_chain: Arc<BeaconChain<T>>,
    network_chan: NetworkChannel<T::EthSpec>,
    log: Logger,
) -> ApiResult {
    try_future!(check_content_type_for_json(&req));
    let response_builder = ResponseBuilder::new(&req);
    let body = req.into_body();
    let chunks = hyper::body::to_bytes(body)
        .await
        .map_err(|e| ApiError::ServerError(format!("Unable to get request body: {:?}", e)))?;
    serde_json::from_slice(&chunks)
        .map_err(|e| {
            ApiError::BadRequest(format!(
                "Unable to deserialize JSON into a list of SignedAggregateAndProof: {:?}",
                e
            ))
        })
        // Process all of the aggregates _without_ exiting early if one fails.
        .map(
            move |signed_aggregates: Vec<SignedAggregateAndProof<T::EthSpec>>| {
                signed_aggregates
                    .into_par_iter()
                    .enumerate()
                    .map(|(i, signed_aggregate)| {
                        process_aggregated_attestation(
                            &beacon_chain,
                            network_chan.clone(),
                            signed_aggregate,
                            i,
                            &log,
                        )
                    })
                    .collect::<Vec<Result<_, _>>>()
            },
        )
        // Iterate through all the results and return on the first `Err`.
        //
        // Note: this will only provide info about the _first_ failure, not all failures.
        .and_then(|processing_results| processing_results.into_iter().try_for_each(|result| result))
        .and_then(|_| response_builder?.body_no_ssz(&()))
}

/// Processes an aggregated attestation that was included in a list of attestations with the index
/// `i`.
fn process_aggregated_attestation<T: BeaconChainTypes>(
    beacon_chain: &BeaconChain<T>,
    network_chan: NetworkChannel<T::EthSpec>,
    signed_aggregate: SignedAggregateAndProof<T::EthSpec>,
    i: usize,
    log: &Logger,
) -> Result<(), ApiError> {
    let data = signed_aggregate.message.aggregate.data.clone();

    // Verify that the attestation is valid to be included on the gossip network.
    //
    // Using this gossip check for local validators is not necessarily ideal; there will be some
    // attestations that we reject that could possibly be included in a block (e.g., attestations
    // that are late by more than 1 epoch but less than 2). We can pick this back up if we notice
    // that it's materially affecting validator profits. Until then, I'm hesitant to introduce yet
    // _another_ attestation verification path.
    let verified_attestation =
        match beacon_chain.verify_aggregated_attestation_for_gossip(signed_aggregate.clone()) {
            Ok(verified_attestation) => verified_attestation,
            Err(AttnError::AttestationAlreadyKnown(attestation_root)) => {
                trace!(
                    log,
                    "Ignored known attn from local validator";
                    "attn_root" => format!("{}", attestation_root)
                );

                // Exit early with success for a known attestation; there's no need to re-process
                // an aggregate we already know.
                return Ok(());
            }
            /*
             * It's worth noting that we don't check for `Error::AggregatorAlreadyKnown` since (at
             * the time of writing) we check for `AttestationAlreadyKnown` first.
             *
             * Given this, it's impossible to hit `Error::AggregatorAlreadyKnown` without that
             * aggregator having already produced a conflicting aggregation. This is not slashable
             * but I think it's still the sort of condition we should error on, at least for now.
             */
            Err(e) => {
                return Err(handle_attestation_error(
                    e,
                    &format!("aggregated attestation {} failed gossip verification", i),
                    &data,
                    log,
                ))
            }
        };

    // Publish the attestation to the network
    if let Err(e) = network_chan.send(NetworkMessage::Publish {
        messages: vec![PubsubMessage::AggregateAndProofAttestation(Box::new(
            signed_aggregate,
        ))],
    }) {
        return Err(ApiError::ServerError(format!(
            "Unable to send aggregated attestation {} to network: {:?}",
            i, e
        )));
    }

    beacon_chain
        .apply_attestation_to_fork_choice(&verified_attestation)
        .map_err(|e| {
            handle_attestation_error(
                e,
                &format!(
                    "aggregated attestation {} was unable to be added to fork choice",
                    i
                ),
                &data,
                log,
            )
        })?;

    beacon_chain
        .add_to_block_inclusion_pool(verified_attestation)
        .map_err(|e| {
            handle_attestation_error(
                e,
                &format!(
                    "aggregated attestation {} was unable to be added to op pool",
                    i
                ),
                &data,
                log,
            )
        })?;

    Ok(())
}

/// Common handler for `AttnError` during attestation verification.
fn handle_attestation_error(
    e: AttnError,
    detail: &str,
    data: &AttestationData,
    log: &Logger,
) -> ApiError {
    match e {
        AttnError::BeaconChainError(e) => {
            error!(
                log,
                "Internal error verifying local attestation";
                "detail" => detail,
                "error" => format!("{:?}", e),
                "target" => data.target.epoch,
                "source" => data.source.epoch,
                "index" => data.index,
                "slot" => data.slot,
            );

            ApiError::ServerError(format!(
                "Internal error verifying local attestation. Error: {:?}. Detail: {}",
                e, detail
            ))
        }
        e => {
            error!(
                log,
                "Invalid local attestation";
                "detail" => detail,
                "reason" => format!("{:?}", e),
                "target" => data.target.epoch,
                "source" => data.source.epoch,
                "index" => data.index,
                "slot" => data.slot,
            );

            ApiError::ProcessingError(format!(
                "Invalid local attestation. Error: {:?}. Detail: {}",
                e, detail
            ))
        }
    }
}