lighthouse/eth2/lmd_ghost/tests/test.rs
Paul Hauner f229bbba1c
Eth1 Integration (#542)
* Refactor to cache Eth1Data

* Fix merge conflicts and minor refactorings

* Rename Eth1Cache to Eth1DataCache

* Refactor events subscription

* Add deposits module to interface with BeaconChain deposits

* Remove utils

* Rename to types.rs and add trait constraints to Eth1DataFetcher

* Conform to trait constraints. Make Web3DataFetcher cloneable

* Make fetcher object member of deposit and eth1_data cache and other fixes
* Fix update_cache function
* Move fetch_eth1_data to impl block
* Fix deposit tests

* Create Eth1 object for interfacing with Beacon chain
* Add `run` function for running update_cache and subscribe_deposit_logs tasks
* Add logging

* Run `cargo fmt` and make tests pass

* Convert sync functions to async

* Add timeouts to web3 functions

* Return futures from cache functions

* Add failed chaining of futures

* Working cache update

* Clean up tests and `update_cache` function

* Refactor `get_eth1_data` functions to work with future returning functions

* Refactor eth1 `run` function to work with modified `update_cache` api

* Minor changes

* Add distance parameter to `update_cache`

* Fix tests and other minor fixes

* Working integration with cache and deposits

* Add merkle_tree construction, proof generation and verification code

* Add function to construct and fetch Deposits for BeaconNode

* Add error handling

* Import ssz

* Add error handling to eth1 cache and fix minor errors

* Run rustfmt

* Fix minor bug

* Rename Eth1Error and change to Result<T>

* Change deposit fetching mechanism from notification based to poll based
* Add deposits from eth1 chain in a given range every `x` blocks
* Modify `run` function to accommodate changes
* Minor fixes

* Fix formatting

* Initial commit. web3 api working.

* Tidied up lib. Add function for fetching logs.

* Refactor with `Eth1DataFetcher` trait

* Add parsing for deposit contract logs and get_eth1_data function

* Add `get_eth1_votes` function

* Fix merge issue

* Refactor with `Config` struct. Remove `ContractConfig`

* Rename eth1_chain crate to eth1

* Rename files and read abi file using `fs::read`

* Move eth1 to lib

* Remove unnecessary mutability constraint

* Add `Web3Backend` for returning actual eth1 data

* Refactor `get_eth1_votes` to return a Result

* Delete `eth1_chain` crate

* Return `Result` from `get_deposits`

* Fix range of deposits to return to beacon chain

* Add `get_block_height_by_hash` trait function

* Add naive method for getting `previous_eth1_distance`

* Add eth1 config params to main config

* Add instructions for setting up eth1 testing environment

* Add build script to fetch deposit contract abi

* Contract ABI is part of compiled binary

* Fix minor bugs

* Move docs to lib

* Add timeout to config

* Remove print statements

* Change warn to error

* Fix typos

* Removed prints in test and get timeout value from config

* Fixed error types

* Added logging to web3_fetcher

* Refactor for modified web3 api

* Fix minor stuff

* Add build script

* Tidy, hide eth1 integration tests behind flag

* Add http crate

* Add first stages of eth1_test_rig

* Fix deposits on test rig

* Fix bug with deposit count method

* Add block hash getter to http eth1

* Clean eth1 http crate and tests

* Add script to start ganache

* Adds deposit tree to eth1-http

* Extend deposit tree tests

* Tidy tests in eth1-http

* Add more detail to get block request

* Add block cache to eth1-http

* Rename deposit tree to deposit cache

* Add initial updating to eth1-http

* Tidy updater

* Fix compile bugs in tests

* Adds an Eth1DataCache builder

* Reorg eth1-http files

* Add (failing) tests for eth1 updater

* Rename files, fix bug in eth1-http

* Ensure that ganache timestamps are increasing

* Fix bugs with getting eth1data ancestors

* Improve eth1 testing, fix bugs

* Add truncate method to block cache

* Add pruning to block cache update process

* Add tests for block pruning

* Allow for dropping an expired cache.

* Add more comments

* Add first compiling version of deposit updater

* Add common fn for getting range of required blocks

* Add passing deposit update test

* Improve tests

* Fix block pruning bug

* Add tests for running two updates at once

* Add updater services to eth1

* Add deposit collection to beacon chain

* Add incomplete builder experiments

* Add first working version of beacon chain builder

* Update test harness to new beacon chain type

* Rename builder file, tidy

* Add first working client builder

* Progress further on client builder

* Update beacon node binary to use client builder

* Ensure release tests compile

* Remove old eth1 crate

* Add first pass of new lighthouse binary

* Fix websocket server startup

* Remove old binary code from beacon_node crate

* Add first working beacon node tests

* Add genesis crate, new eth1 cache_2

* Add Service to Eth1Cache

* Refactor with general eth1 improvements

* Add passing genesis test

* Tidy, add comments

* Add more comments to eth1 service

* Add further eth1 progress

* Fix some bugs with genesis

* Fix eth1 bugs, make eth1 linking more efficient

* Shift logic in genesis service

* Add more comments to genesis service

* Add gzip, max request values, timeouts to http

* Update testnet parameters to suit goerli testnet

* Add ability to vary Fork, fix custom spec

* Be more explicit about deposit fork version

* Start adding beacon chain eth1 option

* Add more flexibility to prod client

* Further runtime refactoring

* Allow for starting from store

* Add bootstrapping to client config

* Add remote_beacon_node crate

* Update eth1 service for more configurability

* Update eth1 tests to use fewer runtimes

* Patch issues with tests using too many files

* Move dummy eth1 backend flag

* Ensure all tests pass

* Add ganache-cli to Dockerfile

* Use a special docker hub image for testing

* Appease clippy

* Move validator client into lighthouse binary

* Allow starting with dummy eth1 backend

* Improve logging

* Fix dummy eth1 backend from cli

* Add extra testnet command

* Ensure consistent spec in beacon node

* Update eth1 rig to work on goerli

* Tidy lcli, start adding support for yaml config

* Add incomplete YamlConfig struct

* Remove efforts at YamlConfig

* Add incomplete eth1 voting. Blocked on spec issues

* Add (untested) first pass at eth1 vote algo

* Add tests for winning vote

* Add more tests for eth1 chain

* Add more eth1 voting tests

* Added more eth1 voting testing

* Change test name

* Add more tests to eth1 chain

* Tidy eth1 generics, add more tests

* Improve comments

* Tidy beacon_node tests

* Tidy, rename JsonRpc.. to Caching..

* Tidy voting logic

* Tidy builder docs

* Add comments, tidy eth1

* Add more comments to eth1

* Fix bug with winning_vote

* Add doc comments to the `ClientBuilder`

* Remove commented-out code

* Improve `ClientBuilder` docs

* Add comments to client config

* Add decoding test for `ClientConfig`

* Remove unused `DepositSet` struct

* Tidy `block_cache`

* Remove commented out lines

* Remove unused code in `eth1` crate

* Remove old validator binary `main.rs`

* Tidy, fix tests compile error

* Add initial tests for get_deposits

* Remove dead code in eth1_test_rig

* Update TestingDepositBuilder

* Add testing for getting eth1 deposits

* Fix duplicate rand dep

* Remove dead code

* Remove accidentally-added files

* Fix comment in eth1_genesis_service

* Add .gitignore for eth1_test_rig

* Fix bug in eth1_genesis_service

* Remove dead code from eth2_config

* Fix tabs/spaces in root Cargo.toml

* Tidy eth1 crate

* Allow for re-use of eth1 service after genesis

* Update docs for new CLI

* Change README gif

* Tidy eth1 http module

* Tidy eth1 service

* Tidy environment crate

* Remove unused file

* Tidy, add comments

* Remove commented-out code

* Address majority of Michael's comments

* Address other PR comments

* Add link to issue alongside TODO
2019-11-15 14:47:51 +11:00

#![cfg(not(debug_assertions))]

#[macro_use]
extern crate lazy_static;

use beacon_chain::test_utils::{
    generate_deterministic_keypairs, AttestationStrategy,
    BeaconChainHarness as BaseBeaconChainHarness, BlockStrategy, HarnessType,
};
use lmd_ghost::{LmdGhost, ThreadSafeReducedTree as BaseThreadSafeReducedTree};
use rand::{prelude::*, rngs::StdRng};
use std::sync::Arc;
use store::{
    iter::{AncestorIter, BlockRootsIterator},
    MemoryStore, Store,
};
use types::{BeaconBlock, EthSpec, Hash256, MinimalEthSpec, Slot};
// Should ideally be divisible by 3 so the validators can be split into a two-thirds honest
// majority and a one-third faulty minority.
pub const VALIDATOR_COUNT: usize = 3 * 8;
type TestEthSpec = MinimalEthSpec;
type ThreadSafeReducedTree = BaseThreadSafeReducedTree<MemoryStore, TestEthSpec>;
type BeaconChainHarness = BaseBeaconChainHarness<HarnessType<TestEthSpec>>;
type RootAndSlot = (Hash256, Slot);
lazy_static! {
    /// A lazy-static instance of a `BeaconChainHarness` that contains two forks.
    ///
    /// Reduces test setup time by providing a common harness.
    static ref FORKED_HARNESS: ForkedHarness = ForkedHarness::new();
}
/// Contains a `BeaconChainHarness` that has two forks, caused by a validator skipping a slot and
/// then some validators building on one head and some on the other.
///
/// Care should be taken to ensure that the `ForkedHarness` does not expose any interior mutability
/// from its fields. This would cause cross-contamination between tests when used with
/// `lazy_static`.
struct ForkedHarness {
    /// Private (not `pub`) because the `BeaconChainHarness` has interior mutability. We
    /// don't expose it to avoid contamination between tests.
    harness: BeaconChainHarness,
    pub genesis_block_root: Hash256,
    pub genesis_block: BeaconBlock<TestEthSpec>,
    pub honest_head: RootAndSlot,
    pub faulty_head: RootAndSlot,
    pub honest_roots: Vec<RootAndSlot>,
    pub faulty_roots: Vec<RootAndSlot>,
}
impl ForkedHarness {
    /// A new standard instance with constant parameters.
    pub fn new() -> Self {
        let harness = BeaconChainHarness::new(
            MinimalEthSpec,
            generate_deterministic_keypairs(VALIDATOR_COUNT),
        );

        // Move past the zero slot.
        harness.advance_slot();

        let delay = TestEthSpec::default_spec().min_attestation_inclusion_delay as usize;
        let initial_blocks = delay + 5;

        // Build an initial chain where all validators agree.
        harness.extend_chain(
            initial_blocks,
            BlockStrategy::OnCanonicalHead,
            AttestationStrategy::AllValidators,
        );
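        // Split the validators into a two-thirds "honest" majority and a one-third "faulty"
        // minority, then have each group extend its own fork on either side of a skipped slot.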
        let two_thirds = (VALIDATOR_COUNT / 3) * 2;
        let honest_validators: Vec<usize> = (0..two_thirds).collect();
        let faulty_validators: Vec<usize> = (two_thirds..VALIDATOR_COUNT).collect();
        let honest_fork_blocks = delay + 5;
        let faulty_fork_blocks = delay + 5;

        let (honest_head, faulty_head) = harness.generate_two_forks_by_skipping_a_block(
            &honest_validators,
            &faulty_validators,
            honest_fork_blocks,
            faulty_fork_blocks,
        );
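        // Collect the ancestor roots (and slots) of each head, prepending the head itself so that
        // each list runs from the fork tip back towards genesis.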
        let mut honest_roots =
            get_ancestor_roots::<TestEthSpec, _>(harness.chain.store.clone(), honest_head);

        honest_roots.insert(
            0,
            (honest_head, get_slot_for_block_root(&harness, honest_head)),
        );

        let mut faulty_roots =
            get_ancestor_roots::<TestEthSpec, _>(harness.chain.store.clone(), faulty_head);

        faulty_roots.insert(
            0,
            (faulty_head, get_slot_for_block_root(&harness, faulty_head)),
        );

        let genesis_block_root = harness.chain.genesis_block_root;
        let genesis_block = harness
            .chain
            .store
            .get::<BeaconBlock<TestEthSpec>>(&genesis_block_root)
            .expect("Genesis block should exist")
            .expect("DB should not error");

        Self {
            harness,
            genesis_block_root,
            genesis_block,
            honest_head: *honest_roots.last().expect("Chain cannot be empty"),
            faulty_head: *faulty_roots.last().expect("Chain cannot be empty"),
            honest_roots,
            faulty_roots,
        }
    }
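    /// Returns a clone of the `MemoryStore` that underlies the harness chain.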
    pub fn store_clone(&self) -> MemoryStore {
        (*self.harness.chain.store).clone()
    }
    /// Return a brand-new, empty fork choice with a reference to `harness.store`.
    pub fn new_fork_choice(&self) -> ThreadSafeReducedTree {
        // Take a full clone of the store built by the harness.
        //
        // Taking a clone here ensures that each fork choice gets its own store so there is no
        // cross-contamination between tests.
        let store: MemoryStore = self.store_clone();

        ThreadSafeReducedTree::new(
            Arc::new(store),
            &self.genesis_block,
            self.genesis_block_root,
        )
    }
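    /// Returns the honest and faulty block roots combined into a single list.
    ///
    /// Note that `Vec::dedup` only removes consecutive duplicates, so roots common to both forks
    /// (the shared pre-fork ancestry) may appear twice in the result.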
    pub fn all_block_roots(&self) -> Vec<RootAndSlot> {
        let mut all_roots = self.honest_roots.clone();
        all_roots.append(&mut self.faulty_roots.clone());

        all_roots.dedup();

        all_roots
    }
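    /// Gives every validator an equal weight of `1`, so head selection in these tests reduces to
    /// simple vote counting.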
    pub fn weight_function(_validator_index: usize) -> Option<u64> {
        Some(1)
    }
}
/// Helper: returns all the ancestor roots and slots for a given block_root.
fn get_ancestor_roots<E: EthSpec, U: Store>(
    store: Arc<U>,
    block_root: Hash256,
) -> Vec<(Hash256, Slot)> {
    let block = store
        .get::<BeaconBlock<TestEthSpec>>(&block_root)
        .expect("block should exist")
        .expect("store should not error");

    <BeaconBlock<TestEthSpec> as AncestorIter<_, BlockRootsIterator<TestEthSpec, _>>>::try_iter_ancestor_roots(
        &block, store,
    )
    .expect("should be able to create ancestor iter")
    .collect()
}
/// Helper: returns the slot for some block_root.
fn get_slot_for_block_root(harness: &BeaconChainHarness, block_root: Hash256) -> Slot {
    harness
        .chain
        .store
        .get::<BeaconBlock<TestEthSpec>>(&block_root)
        .expect("head block should exist")
        .expect("DB should not error")
        .slot
}
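// Number of fresh fork choice instances to build, and the number of random attestations to apply
// to each instance, in the `random_scenario` test below.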
const RANDOM_ITERATIONS: usize = 50;
const RANDOM_ACTIONS_PER_ITERATION: usize = 100;
/// Builds a fresh fork choice instance and applies randomly-chosen attestations (random block and
/// random validator) to it, verifying tree integrity after each one. Repeats for many iterations.
#[test]
fn random_scenario() {
    let harness = &FORKED_HARNESS;
    let block_roots = harness.all_block_roots();
    let validators: Vec<usize> = (0..VALIDATOR_COUNT).collect();
    let mut rng = StdRng::seed_from_u64(9375205782030385); // Keyboard mash.

    for _ in 0..RANDOM_ITERATIONS {
        let lmd = harness.new_fork_choice();

        for _ in 0..RANDOM_ACTIONS_PER_ITERATION {
            let (root, slot) = block_roots[rng.next_u64() as usize % block_roots.len()];
            let validator_index = validators[rng.next_u64() as usize % validators.len()];

            lmd.process_attestation(validator_index, root, slot)
                .expect("fork choice should accept randomly-placed attestations");

            assert_eq!(
                lmd.verify_integrity(),
                Ok(()),
                "New tree should have integrity"
            );
        }
    }
}
/// Create a single LMD instance and have one validator vote on each honest block in reverse list
/// order (lowest to highest slot), finishing at the honest head.
#[test]
fn single_voter_persistent_instance_reverse_order() {
    let harness = &FORKED_HARNESS;

    let lmd = harness.new_fork_choice();

    assert_eq!(
        lmd.verify_integrity(),
        Ok(()),
        "New tree should have integrity"
    );

    for (root, slot) in harness.honest_roots.iter().rev() {
        lmd.process_attestation(0, *root, *slot)
            .expect("fork choice should accept attestations to honest roots in reverse");

        assert_eq!(
            lmd.verify_integrity(),
            Ok(()),
            "Tree integrity should be maintained whilst processing attestations"
        );
    }

    // The honest head should be selected.
    let (head_root, head_slot) = harness.honest_roots.first().unwrap();
    let (finalized_root, _) = harness.honest_roots.last().unwrap();

    assert_eq!(
        lmd.find_head(*head_slot, *finalized_root, ForkedHarness::weight_function),
        Ok(*head_root),
        "Honest head should be selected"
    );
}
/// A single validator applies a single vote to each block in the honest fork, using a new tree
/// each time.
#[test]
fn single_voter_many_instance_honest_blocks_voting_forwards() {
    let harness = &FORKED_HARNESS;

    for (root, slot) in &harness.honest_roots {
        let lmd = harness.new_fork_choice();

        lmd.process_attestation(0, *root, *slot)
            .expect("fork choice should accept attestations to honest roots");

        assert_eq!(
            lmd.verify_integrity(),
            Ok(()),
            "Tree integrity should be maintained whilst processing attestations"
        );
    }
}
/// Same as above, but iterating in reverse order (votes on the lowest honest block first).
#[test]
fn single_voter_many_instance_honest_blocks_voting_in_reverse() {
    let harness = &FORKED_HARNESS;

    // Same as above, but iterating in reverse order (votes on the lowest honest block first).
    for (root, slot) in harness.honest_roots.iter().rev() {
        let lmd = harness.new_fork_choice();

        lmd.process_attestation(0, *root, *slot)
            .expect("fork choice should accept attestations to honest roots in reverse");

        assert_eq!(
            lmd.verify_integrity(),
            Ok(()),
            "Tree integrity should be maintained whilst processing attestations"
        );
    }
}
/// A single validator applies a single vote to each block in the faulty fork, using a new tree
/// each time.
#[test]
fn single_voter_many_instance_faulty_blocks_voting_forwards() {
    let harness = &FORKED_HARNESS;

    for (root, slot) in &harness.faulty_roots {
        let lmd = harness.new_fork_choice();

        lmd.process_attestation(0, *root, *slot)
            .expect("fork choice should accept attestations to faulty roots");

        assert_eq!(
            lmd.verify_integrity(),
            Ok(()),
            "Tree integrity should be maintained whilst processing attestations"
        );
    }
}
/// Same as above, but iterating in reverse order (votes on the lowest faulty block first).
#[test]
fn single_voter_many_instance_faulty_blocks_voting_in_reverse() {
    let harness = &FORKED_HARNESS;

    for (root, slot) in harness.faulty_roots.iter().rev() {
        let lmd = harness.new_fork_choice();

        lmd.process_attestation(0, *root, *slot)
            .expect("fork choice should accept attestations to faulty roots in reverse");

        assert_eq!(
            lmd.verify_integrity(),
            Ok(()),
            "Tree integrity should be maintained whilst processing attestations"
        );
    }
}
/// Ensures that the finalized root can be set to all values in `roots`.
fn test_update_finalized_root(roots: &[(Hash256, Slot)]) {
    let harness = &FORKED_HARNESS;

    let lmd = harness.new_fork_choice();

    for (root, _slot) in roots.iter().rev() {
        let block = harness
            .store_clone()
            .get::<BeaconBlock<TestEthSpec>>(root)
            .expect("block should exist")
            .expect("db should not error");

        lmd.update_finalized_root(&block, *root)
            .expect("finalized root should update");

        assert_eq!(
            lmd.verify_integrity(),
            Ok(()),
            "Tree integrity should be maintained after updating the finalized root"
        );
    }
}
/// Iterates from low-to-high slot through the faulty roots, updating the finalized root.
#[test]
fn update_finalized_root_faulty() {
    let harness = &FORKED_HARNESS;

    test_update_finalized_root(&harness.faulty_roots)
}

/// Iterates from low-to-high slot through the honest roots, updating the finalized root.
#[test]
fn update_finalized_root_honest() {
    let harness = &FORKED_HARNESS;

    test_update_finalized_root(&harness.honest_roots)
}