lighthouse/beacon_node/eth1/tests/test.rs

#![cfg(test)]

use environment::{Environment, EnvironmentBuilder};
use eth1::http::{get_deposit_count, get_deposit_logs_in_range, get_deposit_root, Block, Log};
use eth1::{Config, Service};
use eth1::{DepositCache, DepositLog};
use eth1_test_rig::GanacheEth1Instance;
use futures::compat::Future01CompatExt;
use merkle_proof::verify_merkle_proof;
use slog::Logger;
use sloggers::{null::NullLoggerBuilder, Build};
use std::ops::Range;
use std::time::Duration;
use tree_hash::TreeHash;
use types::{DepositData, EthSpec, Hash256, Keypair, MainnetEthSpec, MinimalEthSpec, Signature};
use web3::{transports::Http, Web3};

const DEPOSIT_CONTRACT_TREE_DEPTH: usize = 32;
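/// Returns a logger that discards all output, keeping test output quiet.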
pub fn null_logger() -> Logger {
    let log_builder = NullLoggerBuilder;
    log_builder.build().expect("should build logger")
}
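/// Builds a `MinimalEthSpec` test environment with a single-threaded tokio runtime and a null
/// logger.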
pub fn new_env() -> Environment<MinimalEthSpec> {
    EnvironmentBuilder::minimal()
        // Use a single thread so that tests running in parallel don't spawn an excessive number
        // of threads.
        .single_thread_tokio_runtime()
        .expect("should start tokio runtime")
        .null_logger()
        .expect("should start null logger")
        .build()
        .expect("should build env")
}
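/// The timeout used for the HTTP requests made by these tests.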
fn timeout() -> Duration {
    Duration::from_secs(2)
}
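/// Generates a random `DepositData` for a 32 ETH deposit (denominated in Gwei), signed with a
/// freshly generated keypair.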
fn random_deposit_data() -> DepositData {
    let keypair = Keypair::random();

    let mut deposit = DepositData {
        pubkey: keypair.pk.into(),
        withdrawal_credentials: Hash256::zero(),
        amount: 32_000_000_000,
        signature: Signature::empty_signature().into(),
    };

    deposit.signature = deposit.create_signature(&keypair.sk, &MainnetEthSpec::default_spec());

    deposit
}
/// Gets the deposit logs from the `deposit_contract` within the given block `range`.
async fn blocking_deposit_logs(eth1: &GanacheEth1Instance, range: Range<u64>) -> Vec<Log> {
    get_deposit_logs_in_range(
        &eth1.endpoint(),
        &eth1.deposit_contract.address(),
        range,
        timeout(),
    )
    .await
    .expect("should get logs")
}

/// Gets the deposit root from the `deposit_contract` at the given `block_number`.
async fn blocking_deposit_root(eth1: &GanacheEth1Instance, block_number: u64) -> Option<Hash256> {
    get_deposit_root(
        &eth1.endpoint(),
        &eth1.deposit_contract.address(),
        block_number,
        timeout(),
    )
    .await
    .expect("should get deposit root")
}

/// Gets the deposit count from the `deposit_contract` at the given `block_number`.
async fn blocking_deposit_count(eth1: &GanacheEth1Instance, block_number: u64) -> Option<u64> {
    get_deposit_count(
        &eth1.endpoint(),
        &eth1.deposit_contract.address(),
        block_number,
        timeout(),
    )
    .await
    .expect("should get deposit count")
}
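/// Returns the current block number of the eth1 chain, as reported by the `web3` instance.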
async fn get_block_number(web3: &Web3<Http>) -> u64 {
    web3.eth()
        .block_number()
        .compat()
        .await
        .map(|v| v.as_u64())
        .expect("should get block number")
}
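/// Tests for the eth1 block cache: importing blocks from ganache, cache truncation, pruning and
/// concurrent updates.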
mod eth1_cache {
    use super::*;

    #[tokio::test]
    async fn simple_scenario() {
        let log = null_logger();

        for follow_distance in 0..2 {
            let eth1 = GanacheEth1Instance::new()
                .await
                .expect("should start eth1 environment");
            let deposit_contract = &eth1.deposit_contract;
            let web3 = eth1.web3();

            let initial_block_number = get_block_number(&web3).await;

            let service = Service::new(
                Config {
                    endpoint: eth1.endpoint(),
                    deposit_contract_address: deposit_contract.address(),
                    lowest_cached_block_number: initial_block_number,
                    follow_distance,
                    ..Config::default()
                },
                log.clone(),
            );

            // Create some blocks and then consume them, performing the test over two rounds.
            for round in 0..2 {
                let blocks = 4;

                let initial = if round == 0 {
                    initial_block_number
                } else {
                    service
                        .blocks()
                        .read()
                        .highest_block_number()
                        .map(|n| n + follow_distance)
                        .expect("should have a latest block after the first round")
                };

                for _ in 0..blocks {
                    eth1.ganache.evm_mine().await.expect("should mine block");
                }

                Service::update_deposit_cache(service.clone())
                    .await
                    .expect("should update deposit cache");
                Service::update_block_cache(service.clone())
                    .await
                    .expect("should update block cache");
                Service::update_block_cache(service.clone())
                    .await
                    .expect("should update cache when nothing has changed");

                assert_eq!(
                    service
                        .blocks()
                        .read()
                        .highest_block_number()
                        .map(|n| n + follow_distance),
                    Some(initial + blocks),
                    "should update {} blocks in round {} (follow {})",
                    blocks,
                    round,
                    follow_distance,
                );
            }
        }
    }

    /// Tests the case where we attempt to download more blocks than will fit in the cache.
    #[tokio::test]
    async fn big_skip() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let cache_len = 4;

        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                lowest_cached_block_number: get_block_number(&web3).await,
                follow_distance: 0,
                block_cache_truncation: Some(cache_len),
                ..Config::default()
            },
            log,
        );

        let blocks = cache_len * 2;

        for _ in 0..blocks {
            eth1.ganache.evm_mine().await.expect("should mine block")
        }

        Service::update_deposit_cache(service.clone())
            .await
            .expect("should update deposit cache");
        Service::update_block_cache(service.clone())
            .await
            .expect("should update block cache");

        assert_eq!(
            service.block_cache_len(),
            cache_len,
            "should not grow cache beyond target"
        );
    }

    /// Tests to ensure that the cache gets pruned when doing multiple downloads smaller than the
    /// cache size.
    #[tokio::test]
    async fn pruning() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let cache_len = 4;

        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                lowest_cached_block_number: get_block_number(&web3).await,
                follow_distance: 0,
                block_cache_truncation: Some(cache_len),
                ..Config::default()
            },
            log,
        );

        for _ in 0..4u8 {
            for _ in 0..cache_len / 2 {
                eth1.ganache.evm_mine().await.expect("should mine block")
            }
            Service::update_deposit_cache(service.clone())
                .await
                .expect("should update deposit cache");
            Service::update_block_cache(service.clone())
                .await
                .expect("should update block cache");
        }

        assert_eq!(
            service.block_cache_len(),
            cache_len,
            "should not grow cache beyond target"
        );
    }

    #[tokio::test]
    async fn double_update() {
        let log = null_logger();

        let n = 16;

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                lowest_cached_block_number: get_block_number(&web3).await,
                follow_distance: 0,
                ..Config::default()
            },
            log,
        );

        for _ in 0..n {
            eth1.ganache.evm_mine().await.expect("should mine block")
        }

        futures::try_join!(
            Service::update_deposit_cache(service.clone()),
            Service::update_deposit_cache(service.clone())
        )
        .expect("should perform two simultaneous updates of deposit cache");
        futures::try_join!(
            Service::update_block_cache(service.clone()),
            Service::update_block_cache(service.clone())
        )
        .expect("should perform two simultaneous updates of block cache");

        assert!(service.block_cache_len() >= n, "should grow the cache");
    }
}
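/// Tests for the deposit cache: importing deposit logs, concurrent updates, and consistency of
/// roots, counts and merkle proofs against the deposit contract.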
mod deposit_tree {
    use super::*;

    #[tokio::test]
    async fn updating() {
        let log = null_logger();

        let n = 4;

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let start_block = get_block_number(&web3).await;

        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                deposit_contract_deploy_block: start_block,
                follow_distance: 0,
                ..Config::default()
            },
            log,
        );

        for round in 0..3 {
            let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();

            for deposit in &deposits {
                deposit_contract
                    .deposit(deposit.clone())
                    .await
                    .expect("should perform a deposit");
            }

            Service::update_deposit_cache(service.clone())
                .await
                .expect("should perform update");
            Service::update_deposit_cache(service.clone())
                .await
                .expect("should perform update when nothing has changed");

            let first = n * round;
            let last = n * (round + 1);

            let (_root, local_deposits) = service
                .deposits()
                .read()
                .cache
                .get_deposits(first, last, last, 32)
                .unwrap_or_else(|_| panic!("should get deposits in round {}", round));

            assert_eq!(
                local_deposits.len(),
                n as usize,
                "should get the right number of deposits in round {}",
                round
            );

            assert_eq!(
                local_deposits
                    .iter()
                    .map(|d| d.data.clone())
                    .collect::<Vec<_>>(),
                deposits.to_vec(),
                "obtained deposits should match those submitted in round {}",
                round
            );
        }
    }

    #[tokio::test]
    async fn double_update() {
        let log = null_logger();

        let n = 8;

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let start_block = get_block_number(&web3).await;

        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                deposit_contract_deploy_block: start_block,
                lowest_cached_block_number: start_block,
                follow_distance: 0,
                ..Config::default()
            },
            log,
        );

        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();

        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
        }

        futures::try_join!(
            Service::update_deposit_cache(service.clone()),
            Service::update_deposit_cache(service.clone())
        )
        .expect("should perform two updates concurrently");

        assert_eq!(service.deposit_cache_len(), n);
    }

    #[tokio::test]
    async fn cache_consistency() {
        let n = 8;

        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let mut deposit_roots = vec![];
        let mut deposit_counts = vec![];

        // Perform deposits to the smart contract, recording its state along the way.
        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
            let block_number = get_block_number(&web3).await;
            deposit_roots.push(
                blocking_deposit_root(&eth1, block_number)
                    .await
                    .expect("should get root if contract exists"),
            );
            deposit_counts.push(
                blocking_deposit_count(&eth1, block_number)
                    .await
                    .expect("should get count if contract exists"),
            );
        }

        let mut tree = DepositCache::default();

        // Pull all the deposit logs from the contract.
        let block_number = get_block_number(&web3).await;
        let logs: Vec<_> = blocking_deposit_logs(&eth1, 0..block_number)
            .await
            .iter()
            .map(|raw| DepositLog::from_log(raw).expect("should parse deposit log"))
            .inspect(|log| {
                tree.insert_log(log.clone())
                    .expect("should add consecutive logs")
            })
            .collect();

        // Check the logs for invariants.
        for i in 0..logs.len() {
            let log = &logs[i];
            assert_eq!(
                log.deposit_data, deposits[i],
                "log {} should have correct deposit data",
                i
            );
            assert_eq!(log.index, i as u64, "log {} should have correct index", i);
        }

        // For each deposit, test some more invariants.
        for i in 0..n {
            // Ensure the deposit count from the smart contract was as expected.
            assert_eq!(
                deposit_counts[i],
                i as u64 + 1,
                "deposit count should be accurate"
            );

            // Ensure that the root from the deposit tree matches what the contract reported.
            let (root, deposits) = tree
                .get_deposits(0, i as u64, deposit_counts[i], DEPOSIT_CONTRACT_TREE_DEPTH)
                .expect("should get deposits");
            assert_eq!(
                root, deposit_roots[i],
                "tree deposit root {} should match the contract",
                i
            );

            // Ensure that the deposits all prove into the root from the smart contract.
            let deposit_root = deposit_roots[i];
            for (j, deposit) in deposits.iter().enumerate() {
                assert!(
                    verify_merkle_proof(
                        deposit.data.tree_hash_root(),
                        &deposit.proof,
                        DEPOSIT_CONTRACT_TREE_DEPTH + 1,
                        j,
                        deposit_root
                    ),
                    "deposit merkle proof should prove into deposit contract root"
                )
            }
        }
    }
}
/// Tests for the base HTTP requests and response handlers.
mod http {
    use super::*;

    async fn get_block(eth1: &GanacheEth1Instance, block_number: u64) -> Block {
        eth1::http::get_block(&eth1.endpoint(), block_number, timeout())
            .await
            .expect("should get block")
    }

    #[tokio::test]
    async fn incrementing_deposits() {
        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let block_number = get_block_number(&web3).await;
        let logs = blocking_deposit_logs(&eth1, 0..block_number).await;
        assert_eq!(logs.len(), 0);

        let mut old_root = blocking_deposit_root(&eth1, block_number).await;
        let mut old_block = get_block(&eth1, block_number).await;
        let mut old_block_number = block_number;

        assert_eq!(
            blocking_deposit_count(&eth1, block_number).await,
            Some(0),
            "should have deposit count zero"
        );

        for i in 1..=8 {
            eth1.ganache
                .increase_time(1)
                .await
                .expect("should be able to increase time on ganache");

            deposit_contract
                .deposit(random_deposit_data())
                .await
                .expect("should perform a deposit");

            // Check the logs.
            let block_number = get_block_number(&web3).await;
            let logs = blocking_deposit_logs(&eth1, 0..block_number).await;
            assert_eq!(logs.len(), i, "the number of logs should be as expected");

            // Check the deposit count.
            assert_eq!(
                blocking_deposit_count(&eth1, block_number).await,
                Some(i as u64),
                "should have a correct deposit count"
            );

            // Check the deposit root.
            let new_root = blocking_deposit_root(&eth1, block_number).await;
            assert_ne!(
                new_root, old_root,
                "deposit root should change with each deposit"
            );
            old_root = new_root;

            // Check the block hash.
            let new_block = get_block(&eth1, block_number).await;
            assert_ne!(
                new_block.hash, old_block.hash,
                "block hash should change with each deposit"
            );

            // Check to ensure the timestamp is increasing.
            assert!(
                old_block.timestamp <= new_block.timestamp,
                "block timestamp should increase"
            );
            old_block = new_block.clone();

            // Check the block number.
            assert!(
                block_number > old_block_number,
                "block number should increase"
            );
            old_block_number = block_number;

            // Check to ensure the block root is changing.
            assert_ne!(
                new_root,
                Some(new_block.hash),
                "the deposit root should be different from the block hash"
            );
        }
    }
}
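/// Tests that query deposit counts and roots from the deposit cache and compare them against the
/// values reported by the deposit contract.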
mod fast {
    use super::*;

    // Adds deposits into the deposit cache and checks that the deposit count and root queried
    // from the contract match those computed from the deposit cache.
    #[tokio::test]
    async fn deposit_cache_query() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let now = get_block_number(&web3).await;
        let service = Service::new(
            Config {
                endpoint: eth1.endpoint(),
                deposit_contract_address: deposit_contract.address(),
                deposit_contract_deploy_block: now,
                lowest_cached_block_number: now,
                follow_distance: 0,
                block_cache_truncation: None,
                ..Config::default()
            },
            log,
        );
        let n = 10;
        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();
        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
            // Mine an extra block between deposits to test for corner cases.
            eth1.ganache.evm_mine().await.expect("should mine block");
        }

        Service::update_deposit_cache(service.clone())
            .await
            .expect("should perform update");

        assert!(
            service.deposit_cache_len() >= n,
            "should have imported n deposits"
        );

        for block_num in 0..=get_block_number(&web3).await {
            let expected_deposit_count = blocking_deposit_count(&eth1, block_num).await;
            let expected_deposit_root = blocking_deposit_root(&eth1, block_num).await;

            let deposit_count = service
                .deposits()
                .read()
                .cache
                .get_deposit_count_from_cache(block_num);
            let deposit_root = service
                .deposits()
                .read()
                .cache
                .get_deposit_root_from_cache(block_num);

            assert_eq!(
                expected_deposit_count, deposit_count,
                "deposit count from cache should match queried"
            );
            assert_eq!(
                expected_deposit_root, deposit_root,
                "deposit root from cache should match queried"
            );
        }
    }
}
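/// Tests for persisting and recovering the eth1 caches via `Service::as_bytes` /
/// `Service::from_bytes`.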
mod persist {
    use super::*;

    #[tokio::test]
    async fn test_persist_caches() {
        let log = null_logger();

        let eth1 = GanacheEth1Instance::new()
            .await
            .expect("should start eth1 environment");
        let deposit_contract = &eth1.deposit_contract;
        let web3 = eth1.web3();

        let now = get_block_number(&web3).await;
        let config = Config {
            endpoint: eth1.endpoint(),
            deposit_contract_address: deposit_contract.address(),
            deposit_contract_deploy_block: now,
            lowest_cached_block_number: now,
            follow_distance: 0,
            block_cache_truncation: None,
            ..Config::default()
        };
        let service = Service::new(config.clone(), log.clone());
        let n = 10;
        let deposits: Vec<_> = (0..n).map(|_| random_deposit_data()).collect();
        for deposit in &deposits {
            deposit_contract
                .deposit(deposit.clone())
                .await
                .expect("should perform a deposit");
        }

        Service::update_deposit_cache(service.clone())
            .await
            .expect("should perform update");

        assert!(
            service.deposit_cache_len() >= n,
            "should have imported n deposits"
        );

        let deposit_count = service.deposit_cache_len();

        Service::update_block_cache(service.clone())
            .await
            .expect("should perform update");

        assert!(
            service.block_cache_len() >= n,
            "should have imported n eth1 blocks"
        );

        let block_count = service.block_cache_len();

        let eth1_bytes = service.as_bytes();

        // Drop the service and recover it from the persisted bytes.
        drop(service);

        let recovered_service = Service::from_bytes(&eth1_bytes, config, log).unwrap();
        assert_eq!(
            recovered_service.block_cache_len(),
            block_count,
            "should have the same number of cached blocks as before recovery"
        );
        assert_eq!(
            recovered_service.deposit_cache_len(),
            deposit_count,
            "should have the same number of cached deposits as before recovery"
        );
    }
}