b6408805a2
* Port eth1 lib to use stable futures
* Port eth1_test_rig to stable futures
* Port eth1 tests to stable futures
* Port genesis service to stable futures
* Port genesis tests to stable futures
* Port beacon_chain to stable futures
* Port lcli to stable futures
* Fix eth1_test_rig (#1014)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOS
* Update hashmap hashset to stable futures
* Adds panic test to hashset delay
* Port remote_beacon_node to stable futures
* Fix lcli merge conflicts
* Non rpc stuff compiles
* protocol.rs compiles
* Port websockets, timer and notifier to stable futures (#1035)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOS
* Port remote_beacon_node to stable futures
* Partial eth2-libp2p stable future upgrade
* Finished first round of fighting RPC types
* Further progress towards porting eth2-libp2p; adds caching to discovery
* Update behaviour
* RPC handler to stable futures
* Update RPC to master libp2p
* Network service additions
* Fix the fallback transport construction (#1102)
* Correct warning
* Remove hashmap delay
* Compiling version of eth2-libp2p
* Update all crates versions
* Fix conversion function and add tests (#1113)
* Port validator_client to stable futures (#1114)
* Add PH & MS slot clock changes
* Account for genesis time
* Add progress on duties refactor
* Add simple is_aggregator bool to val subscription
* Start work on attestation_verification.rs
* Add progress on ObservedAttestations
* Progress with ObservedAttestations
* Fix tests
* Add observed attestations to the beacon chain
* Add attestation observation to processing code
* Add progress on attestation verification
* Add first draft of ObservedAttesters
* Add more tests
* Add observed attesters to beacon chain
* Add observers to attestation processing
* Add more attestation verification
* Create ObservedAggregators map
* Remove commented-out code
* Add observed aggregators into chain
* Add progress
* Finish adding features to attestation verification
* Ensure beacon chain compiles
* Link attn verification into chain
* Integrate new attn verification in chain
* Remove old attestation processing code
* Start trying to fix beacon_chain tests
* Split adding into pools into two functions
* Add aggregation to harness
* Get test harness working again
* Adjust the number of aggregators for test harness
* Fix edge-case in harness
* Integrate new attn processing in network
* Fix compile bug in validator_client
* Update validator API endpoints
* Fix aggregation in test harness
* Fix enum thing
* Fix attestation observation bug
* Patch failing API tests
* Start adding comments to attestation verification
* Remove unused attestation field
* Unify "is block known" logic
* Update comments
* Suppress fork choice errors for network processing
* Add todos
* Tidy
* Add gossip attn tests
* Disallow test harness to produce old attns
* Comment out in-progress tests
* Partially address pruning tests
* Fix failing store test
* Add aggregate tests
* Add comments about which spec conditions we check
* Don't re-aggregate
* Split apart test harness attn production
* Fix compile error in network
* Make progress on commented-out test
* Fix skipping attestation test
* Add fork choice verification tests
* Tidy attn tests, remove dead code
* Remove some accidentally added code
* Fix clippy lint
* Rename test file
* Add block tests, add cheap block proposer check
* Rename block testing file
* Add observed_block_producers
* Tidy
* Switch around block signature verification
* Finish block testing
* Remove gossip from signature tests
* First pass of self review
* Fix deviation in spec
* Update test spec tags
* Start moving over to hashset
* Finish moving observed attesters to hashmap
* Move aggregation pool over to hashmap
* Make fc attn borrow again
* Fix rest_api compile error
* Fix missing comments
* Fix monster test
* Uncomment increasing slots test
* Address remaining comments
* Remove unsafe, use cfg test
* Remove cfg test flag
* Fix dodgy comment
* Revert "Update hashmap hashset to stable futures" (this reverts commit d432378a3cc5cd67fc29c0b15b96b886c1323554)
* Revert "Adds panic test to hashset delay" (this reverts commit 281502396fc5b90d9c421a309c2c056982c9525b)
* Ported attestation_service
* Ported duties_service
* Ported fork_service
* More ports
* Port block_service
* Minor fixes
* VC compiles
* Update TODOS
* Borrow self where possible
* Ignore aggregates that are already known.
* Unify aggregator modulo logic
* Fix typo in logs
* Refactor validator subscription logic
* Avoid reproducing selection proof
* Skip HTTP call if no subscriptions
* Rename DutyAndState -> DutyAndProof
* Tidy logs
* Print root as dbg
* Fix compile errors in tests
* Fix compile error in test
* Re-Fix attestation and duties service
* Minor fixes

Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Network crate update to stable futures
* Port account_manager to stable futures (#1121)
* Port account_manager to stable futures
* Run async fns in tokio environment
* Port rest_api crate to stable futures (#1118)
* Port rest_api lib to stable futures
* Reduce tokio features
* Update notifier to stable futures
* Builder update
* Further updates
* Convert self referential async functions
* stable futures fixes (#1124)
* Fix eth1 update functions
* Fix genesis and client
* Fix beacon node lib
* Return appropriate runtimes from environment
* Fix test rig
* Refactor eth1 service update
* Upgrade simulator to stable futures
* Lighthouse compiles on stable futures
* Remove println debugging statement
* Update libp2p service, start rpc test upgrade
* Update network crate for new libp2p
* Update tokio::codec to futures_codec (#1128)
* Further work towards RPC corrections
* Correct http timeout and network service select
* Use tokio runtime for libp2p
* Revert "Update tokio::codec to futures_codec (#1128)" (this reverts commit e57aea924acf5cbabdcea18895ac07e38a425ed7)
* Upgrade RPC libp2p tests
* Upgrade secio fallback test
* Upgrade gossipsub examples
* Clean up RPC protocol
* Test fixes (#1133)
* Correct websocket timeout and run on os thread
* Fix network test
* Clean up PR
* Correct tokio tcp move attestation service tests
* Upgrade attestation service tests
* Correct network test
* Correct genesis test
* Test corrections
* Log info when block is received
* Modify logs and update attester service events
* Stable futures: fixes to vc, eth1 and account manager (#1142)
* Add local testnet scripts
* Remove whiteblock script
* Rename local testnet script
* Move spawns onto handle
* Fix VC panic
* Initial fix to block production issue
* Tidy block producer fix
* Tidy further
* Add local testnet clean script
* Run cargo fmt
* Tidy duties service
* Tidy fork service
* Tidy ForkService
* Tidy AttestationService
* Tidy notifier
* Ensure await is not suppressed in eth1
* Ensure await is not suppressed in account_manager
* Use .ok() instead of .unwrap_or(())
* RPC decoding test for proto
* Update discv5 and eth2-libp2p deps
* Fix lcli double runtime issue (#1144)
* Handle stream termination and dialing peer errors
* Correct peer_info variant types
* Remove unnecessary warnings
* Handle subnet unsubscription removal and improve logging
* Add logs around ping
* Upgrade discv5 and improve logging
* Handle peer connection status for multiple connections
* Improve network service logging
* Improve logging around peer manager
* Upgrade swarm poll, centralise peer management
* Identify clients on error
* Fix `remove_peer` in sync (#1150)
* remove_peer removes from all chains
* Remove logs
* Fix early return from loop
* Improved logging, fix panic
* Partially correct tests
* Stable futures: Vc sync (#1149)
* Improve syncing heuristic
* Add comments
* Use safer method for tolerance
* Fix tests
* Stable futures: Fix VC bug, update agg pool, add more metrics (#1151)
* Expose epoch processing summary
* Expose participation metrics to prometheus
* Switch to f64
* Reduce precision
* Change precision
* Expose observed attesters metrics
* Add metrics for agg/unagg attn counts
* Add metrics for gossip rx
* Add metrics for gossip tx
* Adds ignored attns to prom
* Add attestation timing
* Add timer for aggregation pool sig agg
* Add write lock timer for agg pool
* Add more metrics to agg pool
* Change map lock code
* Add extra metric to agg pool
* Change lock handling in agg pool
* Change .write() to .read()
* Add another agg pool timer
* Fix for is_aggregator
* Fix pruning bug

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
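Most of the entries above repeat one mechanical transformation: futures 0.1 combinator chains (`and_then`/`map_err`) driven by `tokio::timer` become plain `async` functions and loops on a tokio 0.2 runtime, with `tokio::select!` racing work against a delay, the same shape the updated RPC tests below use. As a rough illustration only (not code from this commit), here is a minimal sketch of the target pattern, assuming tokio 0.2 with the `macros` and `time` features; `poll_once` is a hypothetical stand-in for one unit of service work:

use std::time::Duration;
use tokio::time::delay_for;

// Hypothetical unit of service work; a stand-in for e.g. one eth1
// cache update in the real services.
async fn poll_once() -> Result<(), String> {
    Ok(())
}

// The post-port shape of a service loop: an `async fn` driven by the
// runtime, rather than a futures-0.1 combinator chain.
async fn service_loop() {
    loop {
        tokio::select! {
            // Do one round of work.
            res = poll_once() => {
                if let Err(e) = res {
                    eprintln!("update failed: {}", e);
                }
            }
            // Bound each round, mirroring the timeouts in the tests below.
            _ = delay_for(Duration::from_secs(5)) => {
                eprintln!("update timed out");
            }
        }
        // Wait before the next round.
        delay_for(Duration::from_secs(1)).await;
    }
}

#[tokio::main]
async fn main() {
    service_loop().await;
}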
#![cfg(test)]
use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::*;
use eth2_libp2p::{BehaviourEvent, Libp2pEvent, RPCEvent};
use slog::{debug, warn, Level};
use std::time::Duration;
use tokio::time::delay_for;
use types::{
    BeaconBlock, Epoch, EthSpec, Hash256, MinimalEthSpec, Signature, SignedBeaconBlock, Slot,
};

mod common;

type E = MinimalEthSpec;
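// NOTE: the `common` helper module is not shown in this diff. From the call
// sites below, `build_log` takes a `slog::Level` and an enable flag and
// returns a `slog::Logger`, and `build_node_pair` asynchronously builds two
// connected libp2p services. These signatures are inferred from usage, not
// quoted from the module.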
#[tokio::test]
// Tests the STATUS RPC message
async fn test_status_rpc() {
    // Set up the logging; set the level and whether logging is enabled.
    let log_level = Level::Debug;
    let enable_logging = false;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // Dummy STATUS RPC request
    let rpc_request = RPCRequest::Status(StatusMessage {
        fork_digest: [0; 4],
        finalized_root: Hash256::from_low_u64_be(0),
        finalized_epoch: Epoch::new(1),
        head_root: Hash256::from_low_u64_be(0),
        head_slot: Slot::new(1),
    });

    // Dummy STATUS RPC response
    let rpc_response = RPCResponse::Status(StatusMessage {
        fork_digest: [0; 4],
        finalized_root: Hash256::from_low_u64_be(0),
        finalized_epoch: Epoch::new(1),
        head_root: Hash256::from_low_u64_be(0),
        head_slot: Slot::new(1),
    });

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a STATUS message
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response @ RPCCodedResponse::Success(_)) => {
                        if id == 10 {
                            debug!(log, "Sender Received");
                            let response = {
                                match response {
                                    RPCCodedResponse::Success(r) => r,
                                    _ => unreachable!(),
                                }
                            };
                            assert_eq!(response, rpc_response.clone());
                            debug!(log, "Sender Completed");
                            return;
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                debug!(log, "Receiver Received");
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::Success(rpc_response.clone()),
                                    ),
                                );
                            }
                        }
                        _ => {} // Ignore other RPC requests
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
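// NOTE: the remaining tests share the skeleton above: sender and receiver are
// polled in independent `async` blocks, and `tokio::select!` races them
// against `delay_for` so that a stalled exchange fails the test with
// "Future timed out" instead of hanging it.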
#[tokio::test]
// Tests a streamed BlocksByRange RPC Message
async fn test_blocks_by_range_chunked_rpc() {
    // Set up the logging; set the level and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let messages_to_send = 10;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // BlocksByRange Request
    let rpc_request = RPCRequest::BlocksByRange(BlocksByRangeRequest {
        start_slot: 0,
        count: messages_to_send,
        step: 0,
    });

    // BlocksByRange Response
    let spec = E::default_spec();
    let empty_block = BeaconBlock::empty(&spec);
    let empty_signed = SignedBeaconBlock {
        message: empty_block,
        signature: Signature::empty_signature(),
    };
    let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));

    // keep count of the number of messages received
    let mut messages_received = 0;
    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a BlocksByRange request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        if id == 10 {
                            warn!(log, "Sender received a response");
                            match response {
                                RPCCodedResponse::Success(res) => {
                                    assert_eq!(res, rpc_response.clone());
                                    messages_received += 1;
                                    warn!(log, "Chunk received");
                                }
                                RPCCodedResponse::StreamTermination(_) => {
                                    // should be exactly `messages_to_send` messages before terminating
                                    assert_eq!(messages_received, messages_to_send);
                                    // end the test
                                    return;
                                }
                                _ => panic!("Invalid RPC received"),
                            }
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other behaviour events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                warn!(log, "Receiver got request");

                                for _ in 1..=messages_to_send {
                                    receiver.swarm.send_rpc(
                                        peer_id.clone(),
                                        RPCEvent::Response(
                                            id,
                                            RPCCodedResponse::Success(rpc_response.clone()),
                                        ),
                                    );
                                }
                                // send the stream termination
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::StreamTermination(
                                            ResponseTermination::BlocksByRange,
                                        ),
                                    ),
                                );
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
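// NOTE: BlocksByRange (and BlocksByRoot below) are streamed responses: the
// responder emits one `RPCCodedResponse::Success` chunk per block, then a
// `StreamTermination` marker, which is what the sender side counts against
// `messages_to_send`.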
#[tokio::test]
// Tests an empty response to a BlocksByRange RPC Message
async fn test_blocks_by_range_single_empty_rpc() {
    // Set up the logging; set the level and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // BlocksByRange Request
    let rpc_request = RPCRequest::BlocksByRange(BlocksByRangeRequest {
        start_slot: 0,
        count: 10,
        step: 0,
    });

    // BlocksByRange Response
    let spec = E::default_spec();
    let empty_block = BeaconBlock::empty(&spec);
    let empty_signed = SignedBeaconBlock {
        message: empty_block,
        signature: Signature::empty_signature(),
    };
    let rpc_response = RPCResponse::BlocksByRange(Box::new(empty_signed));

    let messages_to_send = 1;

    // keep count of the number of messages received
    let mut messages_received = 0;
    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a BlocksByRange request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        if id == 10 {
                            warn!(log, "Sender received a response");
                            match response {
                                RPCCodedResponse::Success(res) => {
                                    assert_eq!(res, rpc_response.clone());
                                    messages_received += 1;
                                    warn!(log, "Chunk received");
                                }
                                RPCCodedResponse::StreamTermination(_) => {
                                    // should be exactly `messages_to_send` messages before terminating
                                    assert_eq!(messages_received, messages_to_send);
                                    // end the test
                                    return;
                                }
                                _ => panic!("Invalid RPC received"),
                            }
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other behaviour events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                warn!(log, "Receiver got request");

                                for _ in 1..=messages_to_send {
                                    receiver.swarm.send_rpc(
                                        peer_id.clone(),
                                        RPCEvent::Response(
                                            id,
                                            RPCCodedResponse::Success(rpc_response.clone()),
                                        ),
                                    );
                                }
                                // send the stream termination
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::StreamTermination(
                                            ResponseTermination::BlocksByRange,
                                        ),
                                    ),
                                );
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(800)) => {
            panic!("Future timed out");
        }
    }
}
#[tokio::test]
// Tests a streamed, chunked BlocksByRoot RPC Message.
// The size of the response is a full `BeaconBlock`,
// which is greater than the Snappy frame size. Hence, this test
// serves to test the Snappy framing format as well.
async fn test_blocks_by_root_chunked_rpc() {
    // Set up the logging; set the level and whether logging is enabled.
    let log_level = Level::Debug;
    let enable_logging = false;

    let messages_to_send = 3;

    let log = common::build_log(log_level, enable_logging);
    let spec = E::default_spec();

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // BlocksByRoot Request
    let rpc_request = RPCRequest::BlocksByRoot(BlocksByRootRequest {
        block_roots: vec![Hash256::from_low_u64_be(0), Hash256::from_low_u64_be(0)],
    });

    // BlocksByRoot Response
    let full_block = BeaconBlock::full(&spec);
    let signed_full_block = SignedBeaconBlock {
        message: full_block,
        signature: Signature::empty_signature(),
    };
    let rpc_response = RPCResponse::BlocksByRoot(Box::new(signed_full_block));

    // keep count of the number of messages received
    let mut messages_received = 0;
    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a BlocksByRoot request
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_, event)) => match event {
                    // Should receive the RPC response
                    RPCEvent::Response(id, response) => {
                        if id == 10 {
                            debug!(log, "Sender received a response");
                            match response {
                                RPCCodedResponse::Success(res) => {
                                    assert_eq!(res, rpc_response.clone());
                                    messages_received += 1;
                                    debug!(log, "Chunk received");
                                }
                                RPCCodedResponse::StreamTermination(_) => {
                                    // should be exactly `messages_to_send` messages before terminating
                                    assert_eq!(messages_received, messages_to_send);
                                    // end the test
                                    return;
                                }
                                _ => {} // Ignore other RPC messages
                            }
                        }
                    }
                    _ => {} // Ignore other RPC messages
                },
                _ => {} // Ignore other behaviour events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                // send the response
                                debug!(log, "Receiver got request");

                                for _ in 1..=messages_to_send {
                                    receiver.swarm.send_rpc(
                                        peer_id.clone(),
                                        RPCEvent::Response(
                                            id,
                                            RPCCodedResponse::Success(rpc_response.clone()),
                                        ),
                                    );
                                    debug!(log, "Sending message");
                                }
                                // send the stream termination
                                receiver.swarm.send_rpc(
                                    peer_id,
                                    RPCEvent::Response(
                                        id,
                                        RPCCodedResponse::StreamTermination(
                                            ResponseTermination::BlocksByRoot,
                                        ),
                                    ),
                                );
                                debug!(log, "Send stream term");
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(1000)) => {
            panic!("Future timed out");
        }
    }
}
#[tokio::test]
// Tests a Goodbye RPC message
async fn test_goodbye_rpc() {
    // Set up the logging; set the level and whether logging is enabled.
    let log_level = Level::Trace;
    let enable_logging = false;

    let log = common::build_log(log_level, enable_logging);

    // get sender/receiver
    let (mut sender, mut receiver) = common::build_node_pair(&log).await;

    // Goodbye Request
    let rpc_request = RPCRequest::Goodbye(GoodbyeReason::ClientShutdown);

    // build the sender future
    let sender_future = async {
        loop {
            match sender.next_event().await {
                Libp2pEvent::PeerConnected { peer_id, .. } => {
                    // Send a Goodbye message
                    debug!(log, "Sending RPC");
                    sender
                        .swarm
                        .send_rpc(peer_id, RPCEvent::Request(10, rpc_request.clone()));
                }
                _ => {} // Ignore other events
            }
        }
    };

    // build the receiver future
    let receiver_future = async {
        loop {
            match receiver.next_event().await {
                Libp2pEvent::Behaviour(BehaviourEvent::RPC(_peer_id, event)) => {
                    match event {
                        // Should receive sent RPC request
                        RPCEvent::Request(id, request) => {
                            if request == rpc_request {
                                assert_eq!(id, 0);
                                assert_eq!(rpc_request.clone(), request);
                                // Received the goodbye; nothing left to do.
                                return;
                            }
                        }
                        _ => {} // Ignore other events
                    }
                }
                _ => {} // Ignore other events
            }
        }
    };

    tokio::select! {
        _ = sender_future => {}
        _ = receiver_future => {}
        _ = delay_for(Duration::from_millis(1000)) => {
            panic!("Future timed out");
        }
    }
}