95c8e476bc
* Remove ping protocol
* Initial renaming of network services
* Correct rebasing relative to latest master
* Start updating types
* Adds HashMapDelay struct to utils
* Initial network restructure
* Network restructure. Adds new types for v0.2.0
* Removes build artefacts
* Shift validation to beacon chain
* Temporarily remove gossip validation. This is to be updated to match current optimisation efforts.
* Adds AggregateAndProof
* Begin rebuilding pubsub encoding/decoding
* Signature hacking
* Shift gossipsub decoding into eth2_libp2p
* Existing EF tests passing with fake_crypto
* Shifts block encoding/decoding into RPC
* Delete outdated API spec
* All release tests passing bar genesis state parsing
* Update and test YamlConfig
* Update to spec v0.10 compatible BLS
* Updates to BLS EF tests
* Add EF test for AggregateVerify, and delete unused hash2curve tests for uncompressed points
* Update EF tests to v0.10.1
* Use optional block root correctly in block proc
* Use genesis fork in deposit domain. All tests pass
* Fast aggregate verify test
* Update REST API docs
* Fix unused import
* Bump spec tags to v0.10.1
* Add `seconds_per_eth1_block` to chainspec
* Update to timestamp based eth1 voting scheme
* Return None from `get_votes_to_consider` if block cache is empty
* Handle overflows in `is_candidate_block`
* Revert to failing tests
* Fix eth1 data sets test
* Choose default vote according to spec
* Fix collect_valid_votes tests
* Fix `get_votes_to_consider` to choose all eligible blocks
* Uncomment winning_vote tests
* Add comments; remove unused code
* Reduce seconds_per_eth1_block for simulation
* Addressed review comments
* Add test for default vote case
* Fix logs
* Remove unused functions
* Meter default eth1 votes
* Fix comments
* Progress on attestation service
* Address review comments; remove unused dependency
* Initial work on removing libp2p lock
* Add LRU caches to store (rollup)
* Update attestation validation for DB changes (WIP)
* Initial version of should_forward_block
* Scaffold
* Progress on attestation validation. Also, consolidate prod+testing slot clocks so that they share much of the same implementation and can both handle sub-slot time changes.
* Removes lock from libp2p service
* Completed network lock removal
* Finish(?) attestation processing
* Correct network termination future
* Add slot check to block check
* Correct fmt issues
* Remove Drop implementation for network service
* Add first attempt at attestation proc. re-write
* Add version 2 of attestation processing
* Minor fixes
* Add validator pubkey cache
* Make get_indexed_attestation take a committee
* Link signature processing into new attn verification
* First working version
* Ensure pubkey cache is updated
* Add more metrics, slight optimizations
* Clone committee cache during attestation processing
* Update shuffling cache during block processing
* Remove old commented-out code
* Fix shuffling cache insert bug
* Used indexed attestation in fork choice
* Restructure attn processing, add metrics
* Add more detailed metrics
* Tidy, fix failing tests
* Fix failing tests, tidy
* Address reviewers' suggestions
* Disable/delete two outdated tests
* Modification of validator for subscriptions
* Add slot signing to validator client
* Further progress on validation subscription
* Adds necessary validator subscription functionality
* Add new Pubkeys struct to signature_sets
* Refactor with functional approach
* Update beacon chain
* Clean up validator <-> beacon node http types
* Add aggregator status to ValidatorDuty
* Impl Clone for manual slot clock
* Fix minor errors
* Further progress validator client subscription
* Initial subscription and aggregation handling
* Remove decompressed member from pubkey bytes
* Progress to modifying val client for attestation aggregation
* First draft of validator client upgrade for aggregate attestations
* Add hashmap for indices lookup
* Add state cache, remove store cache
* Only build the head committee cache
* Removes lock on a network channel
* Partially implement beacon node subscription http api
* Correct compilation issues
* Change `get_attesting_indices` to use Vec
* Fix failing test
* Partial implementation of timer
* Adds timer, removes exit_future, http api to op pool
* Partial multiple aggregate attestation handling
* Permits bulk messages across gossipsub network channel
* Correct compile issues
* Improve gossipsub messaging and correct rest api helpers
* Added global gossipsub subscriptions
* Update validator subscriptions data structs
* Tidy
* Re-structure validator subscriptions
* Initial handling of subscriptions
* Re-structure network service
* Add pubkey cache persistence file
* Add more comments
* Integrate persistence file into builder
* Add pubkey cache tests
* Add HashSetDelay and introduce into attestation service
* Handles validator subscriptions
* Add data_dir to beacon chain builder
* Remove Option in pubkey cache persistence file
* Ensure consistency between datadir/data_dir
* Fix failing network test
* Peer subnet discovery gets queued for future subscriptions
* Reorganise attestation service functions
* Initial wiring of attestation service
* First draft of attestation service timing logic
* Correct minor typos
* Tidy
* Fix todos
* Improve tests
* Add PeerInfo to connected peers mapping
* Fix compile error
* Fix compile error from merge
* Split up block processing metrics
* Tidy
* Refactor get_pubkey_from_state
* Remove commented-out code
* Rename state_cache -> checkpoint_cache
* Rename Checkpoint -> Snapshot
* Tidy, add comments
* Tidy up find_head function
* Change some checkpoint -> snapshot
* Add tests
* Expose max_len
* Remove dead code
* Tidy
* Fix bug
* Add sync-speed metric
* Add first attempt at VerifiableBlock
* Start integrating into beacon chain
* Integrate VerifiableBlock
* Rename VerifableBlock -> PartialBlockVerification
* Add start of typed methods
* Add progress
* Add further progress
* Rename structs
* Add full block verification to block_processing.rs
* Further beacon chain integration
* Update checks for gossip
* Add todo
* Start adding segment verification
* Add passing chain segment test
* Initial integration with batch sync
* Minor changes
* Tidy, add more error checking
* Start adding chain_segment tests
* Finish invalid signature tests
* Include single and gossip verified blocks in tests
* Add gossip verification tests
* Start adding docs
* Finish adding comments to block_processing.rs
* Rename block_processing.rs -> block_verification
* Start removing old block processing code
* Fixes beacon_chain compilation
* Fix project-wide compile errors
* Remove old code
* Correct code to pass all tests
* Fix bug with beacon proposer index
* Fix shim for BlockProcessingError
* Only process one epoch at a time
* Fix loop in chain segment processing
* Correct tests from master merge
* Add caching for state.eth1_data_votes
* Add BeaconChain::validator_pubkey
* Revert "Add caching for state.eth1_data_votes". This reverts commit cd73dcd6434fb8d8e6bf30c5356355598ea7b78e.

Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
462 lines
18 KiB
Rust
#![cfg(test)]
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::*;
use eth2_libp2p::{Libp2pEvent, RPCEvent};
use slog::{warn, Level};
use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use tokio::prelude::*;
use types::{
    BeaconBlock, Epoch, EthSpec, Hash256, MinimalEthSpec, Signature, SignedBeaconBlock, Slot,
};

mod common;

type E = MinimalEthSpec;

#[test]
// Tests the STATUS RPC message
fn test_status_rpc() {
    // Set up the logging: the level, and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log, 10500);

    // Dummy STATUS RPC request
    let rpc_request = RPCRequest::Status(StatusMessage {
        fork_version: [0; 4],
        finalized_root: Hash256::from_low_u64_be(0),
        finalized_epoch: Epoch::new(1),
        head_root: Hash256::from_low_u64_be(0),
        head_slot: Slot::new(1),
    });

    // Dummy STATUS RPC response
    let rpc_response = RPCResponse::Status(StatusMessage {
        fork_version: [0; 4],
        finalized_root: Hash256::from_low_u64_be(0),
        finalized_epoch: Epoch::new(1),
        head_root: Hash256::from_low_u64_be(0),
        head_slot: Slot::new(1),
    });

    let sender_request = rpc_request.clone();
    let sender_log = log.clone();
    let sender_response = rpc_response.clone();

    // build the sender future
    let sender_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match sender.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id))) => {
                    // Send a STATUS message
                    warn!(sender_log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(1, sender_request.clone()));
                }
                Async::Ready(Some(Libp2pEvent::RPC(_, event))) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response @ RPCErrorResponse::Success(_)) => {
                        warn!(sender_log, "Sender Received");
                        assert_eq!(id, 1);

                        let response = {
                            match response {
                                RPCErrorResponse::Success(r) => r,
                                _ => unreachable!(),
                            }
                        };
                        assert_eq!(response, sender_response.clone());

                        warn!(sender_log, "Sender Completed");
                        return Ok(Async::Ready(true));
                    }
                    e => panic!("Received invalid RPC message {}", e),
                },
                Async::Ready(Some(_)) => (),
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            };
        }
    });

    // build the receiver future
    let receiver_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match receiver.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::RPC(peer_id, event))) => match event {
                    // Should receive the sent RPC request
                    RPCEvent::Request(id, request) => {
                        assert_eq!(id, 1);
                        assert_eq!(rpc_request.clone(), request);

                        // send the response
                        warn!(log, "Receiver Received");
                        receiver.swarm.send_rpc(
                            peer_id,
                            RPCEvent::Response(id, RPCErrorResponse::Success(rpc_response.clone())),
                        );
                    }
                    e => panic!("Received invalid RPC message {}", e),
                },
                Async::Ready(Some(_)) => (),
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            }
        }
    });

    // execute the futures and check the result
    let test_result = Arc::new(AtomicBool::new(false));
    let error_result = test_result.clone();
    let thread_result = test_result.clone();
    tokio::run(
        sender_future
            .select(receiver_future)
            .timeout(Duration::from_millis(1000))
            .map_err(move |_| error_result.store(false, Relaxed))
            .map(move |result| {
                thread_result.store(result.0, Relaxed);
            }),
    );
    assert!(test_result.load(Relaxed));
}

#[test]
// Tests a streamed BlocksByRange RPC Message
fn test_blocks_by_range_chunked_rpc() {
    // Set up the logging: the level, and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let messages_to_send = 10;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log, 10505);

    // BlocksByRange Request
    let rpc_request = RPCRequest::BlocksByRange(BlocksByRangeRequest {
        head_block_root: Hash256::from_low_u64_be(0),
        start_slot: 0,
        count: messages_to_send,
        step: 0,
    });

    // BlocksByRange Response
    let spec = E::default_spec();
    let empty_block = BeaconBlock::empty(&spec);
    let empty_signed = SignedBeaconBlock {
        message: empty_block,
        signature: Signature::empty_signature(),
    };
    let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));

    let sender_request = rpc_request.clone();
    let sender_log = log.clone();
    let sender_response = rpc_response.clone();

    // keep count of the number of messages received
    let messages_received = Arc::new(Mutex::new(0));
    // build the sender future
    let sender_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match sender.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id))) => {
                    // Send a BlocksByRange request
                    warn!(sender_log, "Sender sending RPC request");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(1, sender_request.clone()));
                }
                Async::Ready(Some(Libp2pEvent::RPC(_, event))) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        warn!(sender_log, "Sender received a response");
                        assert_eq!(id, 1);
                        match response {
                            RPCErrorResponse::Success(res) => {
                                assert_eq!(res, sender_response.clone());
                                *messages_received.lock().unwrap() += 1;
                                warn!(sender_log, "Chunk received");
                            }
                            RPCErrorResponse::StreamTermination(
                                ResponseTermination::BlocksByRange,
                            ) => {
                                // should be exactly 10 messages before terminating
                                assert_eq!(*messages_received.lock().unwrap(), messages_to_send);
                                // end the test
                                return Ok(Async::Ready(true));
                            }
                            _ => panic!("Invalid RPC received"),
                        }
                    }
                    _ => panic!("Received invalid RPC message"),
                },
                Async::Ready(Some(_)) => {}
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            };
        }
    });

    // build the receiver future
    let receiver_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match receiver.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::RPC(peer_id, event))) => match event {
                    // Should receive the sent RPC request
                    RPCEvent::Request(id, request) => {
                        assert_eq!(id, 1);
                        assert_eq!(rpc_request.clone(), request);

                        // send the response
                        warn!(log, "Receiver got request");

                        for _ in 1..=messages_to_send {
                            receiver.swarm.send_rpc(
                                peer_id.clone(),
                                RPCEvent::Response(
                                    id,
                                    RPCErrorResponse::Success(rpc_response.clone()),
                                ),
                            );
                        }
                        // send the stream termination
                        receiver.swarm.send_rpc(
                            peer_id,
                            RPCEvent::Response(
                                id,
                                RPCErrorResponse::StreamTermination(
                                    ResponseTermination::BlocksByRange,
                                ),
                            ),
                        );
                    }
                    _ => panic!("Received invalid RPC message"),
                },
                Async::Ready(Some(_)) => (),
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            }
        }
    });

    // execute the futures and check the result
    let test_result = Arc::new(AtomicBool::new(false));
    let error_result = test_result.clone();
    let thread_result = test_result.clone();
    tokio::run(
        sender_future
            .select(receiver_future)
            .timeout(Duration::from_millis(1000))
            .map_err(move |_| error_result.store(false, Relaxed))
            .map(move |result| {
                thread_result.store(result.0, Relaxed);
            }),
    );
    assert!(test_result.load(Relaxed));
}

#[test]
// Tests an empty response to a BlocksByRange RPC Message
fn test_blocks_by_range_single_empty_rpc() {
    // Set up the logging: the level, and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log, 10510);

    // BlocksByRange Request
    let rpc_request = RPCRequest::BlocksByRange(BlocksByRangeRequest {
        head_block_root: Hash256::from_low_u64_be(0),
        start_slot: 0,
        count: 10,
        step: 0,
    });

    // BlocksByRange Response
    let spec = E::default_spec();
    let empty_block = BeaconBlock::empty(&spec);
    let empty_signed = SignedBeaconBlock {
        message: empty_block,
        signature: Signature::empty_signature(),
    };
    let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));

    let sender_request = rpc_request.clone();
    let sender_log = log.clone();
    let sender_response = rpc_response.clone();

    // keep count of the number of messages received
    let messages_received = Arc::new(Mutex::new(0));
    // build the sender future
    let sender_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match sender.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id))) => {
                    // Send a BlocksByRange request
                    warn!(sender_log, "Sender sending RPC request");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(1, sender_request.clone()));
                }
                Async::Ready(Some(Libp2pEvent::RPC(_, event))) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        warn!(sender_log, "Sender received a response");
                        assert_eq!(id, 1);
                        match response {
                            RPCErrorResponse::Success(res) => {
                                assert_eq!(res, sender_response.clone());
                                *messages_received.lock().unwrap() += 1;
                                warn!(sender_log, "Chunk received");
                            }
                            RPCErrorResponse::StreamTermination(
                                ResponseTermination::BlocksByRange,
                            ) => {
                                // should be exactly 1 message before terminating
                                assert_eq!(*messages_received.lock().unwrap(), 1);
                                // end the test
                                return Ok(Async::Ready(true));
                            }
                            _ => panic!("Invalid RPC received"),
                        }
                    }
                    m => panic!("Received invalid RPC message: {}", m),
                },
                Async::Ready(Some(_)) => {}
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            };
        }
    });

    // build the receiver future
    let receiver_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match receiver.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::RPC(peer_id, event))) => match event {
                    // Should receive the sent RPC request
                    RPCEvent::Request(id, request) => {
                        assert_eq!(id, 1);
                        assert_eq!(rpc_request.clone(), request);

                        // send the response
                        warn!(log, "Receiver got request");

                        receiver.swarm.send_rpc(
                            peer_id.clone(),
                            RPCEvent::Response(id, RPCErrorResponse::Success(rpc_response.clone())),
                        );
                        // send the stream termination
                        receiver.swarm.send_rpc(
                            peer_id,
                            RPCEvent::Response(
                                id,
                                RPCErrorResponse::StreamTermination(
                                    ResponseTermination::BlocksByRange,
                                ),
                            ),
                        );
                    }
                    _ => panic!("Received invalid RPC message"),
                },
                Async::Ready(Some(_)) => (),
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            }
        }
    });

    // execute the futures and check the result
    let test_result = Arc::new(AtomicBool::new(false));
    let error_result = test_result.clone();
    let thread_result = test_result.clone();
    tokio::run(
        sender_future
            .select(receiver_future)
            .timeout(Duration::from_millis(1000))
            .map_err(move |_| error_result.store(false, Relaxed))
            .map(move |result| {
                thread_result.store(result.0, Relaxed);
            }),
    );
    assert!(test_result.load(Relaxed));
}

#[test]
// Tests a Goodbye RPC message
fn test_goodbye_rpc() {
    // Set up the logging: the level, and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log, 10520);

    // Goodbye Request
    let rpc_request = RPCRequest::Goodbye(GoodbyeReason::ClientShutdown);

    let sender_request = rpc_request.clone();
    let sender_log = log.clone();

    // build the sender future
    let sender_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match sender.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id))) => {
                    // Send a Goodbye request
                    warn!(sender_log, "Sender sending RPC request");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(1, sender_request.clone()));
                }
                Async::Ready(Some(_)) => {}
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            };
        }
    });

    // build the receiver future
    let receiver_future = future::poll_fn(move || -> Poll<bool, ()> {
        loop {
            match receiver.poll().unwrap() {
                Async::Ready(Some(Libp2pEvent::RPC(_, event))) => match event {
                    // Should receive the sent RPC request
                    RPCEvent::Request(id, request) => {
                        // the request was sent with id 1
                        assert_eq!(id, 1);
                        assert_eq!(rpc_request.clone(), request);
                        // received the goodbye; nothing left to do
                        return Ok(Async::Ready(true));
                    }
                    _ => panic!("Received invalid RPC message"),
                },
                Async::Ready(Some(_)) => (),
                Async::Ready(None) | Async::NotReady => return Ok(Async::NotReady),
            }
        }
    });

    // execute the futures and check the result
    let test_result = Arc::new(AtomicBool::new(false));
    let error_result = test_result.clone();
    let thread_result = test_result.clone();
    tokio::run(
        sender_future
            .select(receiver_future)
            .timeout(Duration::from_millis(1000))
            .map_err(move |_| error_result.store(false, Relaxed))
            .map(move |result| {
                thread_result.store(result.0, Relaxed);
            }),
    );
    assert!(test_result.load(Relaxed));
}
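Each test above reports its outcome the same way: the selected future stores a bool into a shared `AtomicBool`, and the final assertion runs only after `tokio::run` returns. Below is a minimal std-only sketch of that reporting pattern, with a plain thread standing in for the sender/receiver futures; the helper name `run_and_record` is invented for illustration and is not part of the codebase.

```rust
use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
use std::sync::Arc;
use std::thread;

/// Runs a closure on another thread and records whether it reported success,
/// mirroring how the tests store `result.0` from the selected future into a
/// shared `AtomicBool` before asserting on it.
fn run_and_record<F: FnOnce() -> bool + Send + 'static>(f: F) -> bool {
    let test_result = Arc::new(AtomicBool::new(false));
    let thread_result = test_result.clone();
    thread::spawn(move || {
        // Stand-in for `thread_result.store(result.0, Relaxed)` in the tests.
        thread_result.store(f(), Relaxed);
    })
    .join()
    .unwrap();
    // The flag defaults to false, so a path that never stores (the timeout
    // branch in the tests) leaves the final assertion failing.
    test_result.load(Relaxed)
}

fn main() {
    // A "future" that completes successfully sets the flag.
    assert!(run_and_record(|| true));
    // A failing path leaves the flag false.
    assert!(!run_and_record(|| false));
    println!("ok");
}
```

The design point this isolates: because the assertion lives outside the executor, a panic inside a future would otherwise be swallowed, so success must be communicated through shared state rather than a return value.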