Eth1 Integration (#542)

* Refactor to cache Eth1Data

* Fix merge conflicts and minor refactorings

* Rename Eth1Cache to Eth1DataCache

* Refactor events subscription

* Add deposits module to interface with BeaconChain deposits

* Remove utils

* Rename to types.rs and add trait constraints to Eth1DataFetcher

* Conform to trait constraints. Make Web3DataFetcher cloneable

* Make fetcher object member of deposit and eth1_data cache and other fixes
* Fix update_cache function
* Move fetch_eth1_data to impl block
* Fix deposit tests

* Create Eth1 object for interfacing with Beacon chain
* Add `run` function for running update_cache and subscribe_deposit_logs tasks
* Add logging

* Run `cargo fmt` and make tests pass

* Convert sync functions to async

* Add timeouts to web3 functions

* Return futures from cache functions

* Add (failing) chaining of futures

* Working cache update

* Clean up tests and `update_cache` function

* Refactor `get_eth1_data` functions to work with future returning functions

* Refactor eth1 `run` function to work with modified `update_cache` api

* Minor changes

* Add distance parameter to `update_cache`

* Fix tests and other minor fixes

* Working integration with cache and deposits

* Add merkle_tree construction, proof generation and verification code
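  A minimal sketch of the deposit Merkle mechanics this commit adds, assuming a sha256 hasher from the `sha2` crate; the function names are illustrative, not the PR's actual API (the real deposit tree also mixes in the leaf count, omitted here for brevity):

  ```rust
  // Sketch only: fixed-depth binary Merkle root and proof verification, in the
  // style of the eth1 deposit tree. Assumes the `sha2` crate; names are illustrative.
  use sha2::{Digest, Sha256};

  fn hash_concat(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
      let mut h = Sha256::new();
      h.update(a);
      h.update(b);
      h.finalize().into()
  }

  /// Root of a tree whose leaves are zero-padded out to `2^depth`.
  fn merkle_root(leaves: &[[u8; 32]], depth: usize) -> [u8; 32] {
      let mut layer: Vec<[u8; 32]> = leaves.to_vec();
      layer.resize(1 << depth, [0u8; 32]);
      for _ in 0..depth {
          layer = layer.chunks(2).map(|p| hash_concat(&p[0], &p[1])).collect();
      }
      layer[0]
  }

  /// Check that `leaf` sits at `index` beneath `root`, given its sibling path
  /// (one 32-byte sibling per tree level, deepest first).
  fn verify_proof(leaf: [u8; 32], index: usize, branch: &[[u8; 32]], root: [u8; 32]) -> bool {
      let mut node = leaf;
      for (height, sibling) in branch.iter().enumerate() {
          node = if (index >> height) & 1 == 0 {
              hash_concat(&node, sibling)
          } else {
              hash_concat(sibling, &node)
          };
      }
      node == root
  }
  ```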

* Add function to construct and fetch Deposits for BeaconNode

* Add error handling

* Import ssz

* Add error handling to eth1 cache and fix minor errors

* Run rustfmt

* Fix minor bug

* Rename Eth1Error and change to Result<T>

* Change deposit fetching mechanism from notification based to poll based
* Add deposits from eth1 chain in a given range every `x` blocks
* Modify `run` function to accommodate changes
* Minor fixes

* Fix formatting

* Initial commit. web3 api working.

* Tidied up lib. Add function for fetching logs.

* Refactor with `Eth1DataFetcher` trait

* Add parsing for deposit contract logs and get_eth1_data function

* Add `get_eth1_votes` function

* Fix merge issue

* Refactor with `Config` struct. Remove `ContractConfig`

* Rename eth1_chain crate to eth1

* Rename files and read abi file using `fs::read`

* Move eth1 to lib

* Remove unnecessary mutability constraint

* Add `Web3Backend` for returning actual eth1 data

* Refactor `get_eth1_votes` to return a Result

* Delete `eth1_chain` crate

* Return `Result` from `get_deposits`

* Fix range of deposits to return to beacon chain

* Add `get_block_height_by_hash` trait function

* Add naive method for getting `previous_eth1_distance`

* Add eth1 config params to main config

* Add instructions for setting up eth1 testing environment

* Add build script to fetch deposit contract abi

* Contract ABI is part of compiled binary

* Fix minor bugs

* Move docs to lib

* Add timeout to config

* Remove print statements

* Change warn to error

* Fix typos

* Remove prints in test and get timeout value from config

* Fix error types

* Add logging to web3_fetcher

* Refactor for modified web3 api

* Fix minor stuff

* Add build script

* Tidy, hide eth1 integration tests behind flag

* Add http crate

* Add first stages of eth1_test_rig

* Fix deposits on test rig

* Fix bug with deposit count method

* Add block hash getter to http eth1

* Clean eth1 http crate and tests

* Add script to start ganache

* Adds deposit tree to eth1-http

* Extend deposit tree tests

* Tidy tests in eth1-http

* Add more detail to get block request

* Add block cache to eth1-http

* Rename deposit tree to deposit cache

* Add initial updating to eth1-http

* Tidy updater

* Fix compile bugs in tests

* Adds an Eth1DataCache builder

* Reorg eth1-http files

* Add (failing) tests for eth1 updater

* Rename files, fix bug in eth1-http

* Ensure that ganache timestamps are increasing

* Fix bugs with getting eth1data ancestors

* Improve eth1 testing, fix bugs

* Add truncate method to block cache

* Add pruning to block cache update process

* Add tests for block pruning
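  To make the truncation/pruning commits concrete: a sketch of capping the cache at the most recent `len` blocks, assuming a map keyed by eth1 block number; the real cache stores hashes, timestamps and deposit roots, for which a unit value stands in below:

  ```rust
  use std::collections::BTreeMap;

  /// Sketch of a block cache keyed by eth1 block number.
  struct BlockCache(BTreeMap<u64, ()>);

  impl BlockCache {
      /// Drop the oldest entries so that at most `len` blocks remain. Run as
      /// part of each update so the cache cannot grow without bound.
      fn truncate(&mut self, len: usize) {
          while self.0.len() > len {
              let oldest = *self.0.keys().next().expect("cache is non-empty");
              self.0.remove(&oldest);
          }
      }
  }
  ```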

* Allow for dropping an expired cache.

* Add more comments

* Add first compiling version of deposit updater

* Add common fn for getting range of required blocks
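  The shared range computation is presumably along these lines (a sketch; the name and the resume-from-zero behaviour are assumptions, but the follow-distance lag matches how the caches are described elsewhere in this PR):

  ```rust
  /// Sketch: which eth1 blocks an update needs to fetch. The node stays
  /// `follow_distance` blocks behind the remote head so that shallow re-orgs
  /// cannot invalidate the cache, and resumes from the block after the highest
  /// one already cached. An empty range means there is nothing to do.
  fn required_block_range(
      remote_head: u64,
      follow_distance: u64,
      highest_cached: Option<u64>,
  ) -> std::ops::RangeInclusive<u64> {
      let target = remote_head.saturating_sub(follow_distance);
      let first = highest_cached.map_or(0, |n| n + 1);
      first..=target
  }
  ```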

* Add passing deposit update test

* Improve tests

* Fix block pruning bug

* Add tests for running two updates at once

* Add updater services to eth1

* Add deposit collection to beacon chain

* Add incomplete builder experiments

* Add first working version of beacon chain builder

* Update test harness to new beacon chain type

* Rename builder file, tidy

* Add first working client builder

* Progress further on client builder

* Update beacon node binary to use client builder

* Ensure release tests compile

* Remove old eth1 crate

* Add first pass of new lighthouse binary

* Fix websocket server startup

* Remove old binary code from beacon_node crate

* Add first working beacon node tests

* Add genesis crate, new eth1 cache_2

* Add Service to Eth1Cache

* Refactor with general eth1 improvements

* Add passing genesis test

* Tidy, add comments

* Add more comments to eth1 service

* Add further eth1 progress

* Fix some bugs with genesis

* Fix eth1 bugs, make eth1 linking more efficient

* Shift logic in genesis service

* Add more comments to genesis service

* Add gzip, max request values, timeouts to http

* Update testnet parameters to suit goerli testnet

* Add ability to vary Fork, fix custom spec

* Be more explicit about deposit fork version

* Start adding beacon chain eth1 option

* Add more flexibility to prod client

* Further runtime refactoring

* Allow for starting from store

* Add bootstrapping to client config

* Add remote_beacon_node crate

* Update eth1 service for more configurability

* Update eth1 tests to use less runtimes

* Patch issues with tests using too many files

* Move dummy eth1 backend flag

* Ensure all tests pass

* Add ganache-cli to Dockerfile

* Use a special docker hub image for testing

* Appease clippy

* Move validator client into lighthouse binary

* Allow starting with dummy eth1 backend

* Improve logging

* Fix dummy eth1 backend from cli

* Add extra testnet command

* Ensure consistent spec in beacon node

* Update eth1 rig to work on goerli

* Tidy lcli, start adding support for yaml config

* Add incomplete YamlConfig struct

* Remove efforts at YamlConfig

* Add incomplete eth1 voting. Blocked on spec issues

* Add (untested) first pass at eth1 vote algo
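  The core of the vote algorithm is a tally over the `eth1_data` votes already cast in the current voting period, restricted to candidates the local node considers valid; a generic sketch (the real implementation's fallback and tie-breaking rules follow the spec and are simplified away here):

  ```rust
  use std::collections::HashMap;
  use std::hash::Hash;

  /// Sketch: pick the candidate with the most in-period votes, falling back to
  /// the first candidate when no valid vote has been cast yet. In the real
  /// code, `T` would be the consensus `Eth1Data` type.
  fn winning_vote<T: Eq + Hash + Clone>(state_votes: &[T], candidates: &[T]) -> Option<T> {
      let mut tally: HashMap<&T, usize> = HashMap::new();
      for vote in state_votes {
          if candidates.contains(vote) {
              *tally.entry(vote).or_insert(0) += 1;
          }
      }
      tally
          .into_iter()
          .max_by_key(|(_, count)| *count)
          .map(|(vote, _)| vote.clone())
          .or_else(|| candidates.first().cloned())
  }
  ```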

* Add tests for winning vote

* Add more tests for eth1 chain

* Add more eth1 voting tests

* Add more eth1 voting tests

* Change test name

* Add more tests to eth1 chain

* Tidy eth1 generics, add more tests

* Improve comments

* Tidy beacon_node tests

* Tidy, rename JsonRpc.. to Caching..

* Tidy voting logic

* Tidy builder docs

* Add comments, tidy eth1

* Add more comments to eth1

* Fix bug with winning_vote

* Add doc comments to the `ClientBuilder`

* Remove commented-out code

* Improve `ClientBuilder` docs

* Add comments to client config

* Add decoding test for `ClientConfig`

* Remove unused `DepositSet` struct

* Tidy `block_cache`

* Remove commented out lines

* Remove unused code in `eth1` crate

* Remove old validator binary `main.rs`

* Tidy, fix tests compile error

* Add initial tests for get_deposits

* Remove dead code in eth1_test_rig

* Update TestingDepositBuilder

* Add testing for getting eth1 deposits

* Fix duplicate rand dep

* Remove dead code

* Remove accidentally-added files

* Fix comment in eth1_genesis_service

* Add .gitignore for eth1_test_rig

* Fix bug in eth1_genesis_service

* Remove dead code from eth2_config

* Fix tabs/spaces in root Cargo.toml

* Tidy eth1 crate

* Allow for re-use of eth1 service after genesis

* Update docs for new CLI

* Change README gif

* Tidy eth1 http module

* Tidy eth1 service

* Tidy environment crate

* Remove unused file

* Tidy, add comments

* Remove commented-out code

* Address majority of Michael's comments

* Address other PR comments

* Add link to issue alongside TODO
Paul Hauner, 2019-11-15 14:47:51 +11:00, committed by GitHub
commit f229bbba1c (parent 97729f8654)
99 changed files with 8263 additions and 1631 deletions

@ -1,7 +1,7 @@
 #Adapted from https://users.rust-lang.org/t/my-gitlab-config-docs-tests/16396
 default:
-  image: 'sigp/lighthouse:latest'
+  image: 'sigp/lighthouse:eth1'
 cache:
   paths:
     - tests/ef_tests/*-v0.8.3.tar.gz

@ -33,13 +33,18 @@ members = [
     "beacon_node/eth2-libp2p",
     "beacon_node/rpc",
     "beacon_node/version",
+    "beacon_node/eth1",
     "beacon_node/beacon_chain",
     "beacon_node/websocket_server",
     "tests/ef_tests",
+    "tests/eth1_test_rig",
+    "tests/node_test_rig",
     "lcli",
     "protos",
     "validator_client",
     "account_manager",
+    "lighthouse",
+    "lighthouse/environment"
 ]

 [patch]

@ -19,6 +19,8 @@ RUN git clone https://github.com/google/protobuf.git && \
     cd .. && \
     rm -r protobuf

+RUN apt-get install -y nodejs npm
+RUN npm install -g ganache-cli --unsafe-perm
 RUN mkdir -p /cache/cargocache && chmod -R ugo+rwX /cache/cargocache

@ -15,7 +15,7 @@ An open-source Ethereum 2.0 client, written in Rust and maintained by Sigma Prim
 [Swagger Badge]: https://img.shields.io/badge/Open%20API-0.2.0-success
 [Swagger Link]: https://app.swaggerhub.com/apis-docs/spble/lighthouse_rest_api/0.2.0
-![terminalize](https://i.postimg.cc/Y0BQ0z3R/terminalize.gif)
+![terminalize](https://i.postimg.cc/kG11dpCW/lighthouse-cli-png.gif)
 ## Overview
@ -47,7 +47,7 @@ Current development overview:
 - ~~**April 2019**: Inital single-client testnets.~~
 - ~~**September 2019**: Inter-operability with other Ethereum 2.0 clients.~~
-- **Early-October 2019**: `lighthouse-0.0.1` release: All major phase 0
+- **Q4 2019**: `lighthouse-0.0.1` release: All major phase 0
   features implemented.
 - **Q4 2019**: Public, multi-client testnet with user-facing functionality.
 - **Q4 2019**: Third-party security review.

@ -4,6 +4,13 @@ version = "0.1.0"
 authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com"]
 edition = "2018"

+[lib]
+name = "beacon_node"
+path = "src/lib.rs"
+
+[dev-dependencies]
+node_test_rig = { path = "../tests/node_test_rig" }
+
 [dependencies]
 eth2_config = { path = "../eth2/utils/eth2_config" }
 lighthouse_bootstrap = { path = "../eth2/utils/lighthouse_bootstrap" }
@ -25,3 +32,5 @@ env_logger = "0.7.1"
 dirs = "2.0.2"
 logging = { path = "../eth2/utils/logging" }
 futures = "0.1.29"
+environment = { path = "../lighthouse/environment" }
+genesis = { path = "genesis" }

@ -33,7 +33,14 @@ state_processing = { path = "../../eth2/state_processing" }
 tree_hash = "0.1.0"
 types = { path = "../../eth2/types" }
 lmd_ghost = { path = "../../eth2/lmd_ghost" }
+eth1 = { path = "../eth1" }
+websocket_server = { path = "../websocket_server" }
+futures = "0.1.25"
+exit-future = "0.1.3"
+genesis = { path = "../genesis" }
+integer-sqrt = "0.1"
+rand = "0.7.2"

 [dev-dependencies]
-rand = "0.7.2"
 lazy_static = "1.4.0"
+environment = { path = "../../lighthouse/environment" }

@ -26,7 +26,6 @@ use state_processing::{
 use std::fs;
 use std::io::prelude::*;
 use std::sync::Arc;
-use std::time::Duration;
 use store::iter::{BlockRootsIterator, StateRootsIterator};
 use store::{Error as DBError, Store};
 use tree_hash::TreeHash;
@ -113,9 +112,9 @@ pub struct BeaconChain<T: BeaconChainTypes> {
     /// inclusion in a block.
     pub op_pool: OperationPool<T::EthSpec>,
     /// Provides information from the Ethereum 1 (PoW) chain.
-    pub eth1_chain: Eth1Chain<T>,
+    pub eth1_chain: Option<Eth1Chain<T::Eth1Chain, T::EthSpec>>,
     /// Stores a "snapshot" of the chain at the time the head-of-the-chain block was received.
-    canonical_head: RwLock<CheckPoint<T::EthSpec>>,
+    pub(crate) canonical_head: RwLock<CheckPoint<T::EthSpec>>,
     /// The root of the genesis block.
     pub genesis_block_root: Hash256,
     /// A state-machine that is updated with information from the network and chooses a canonical
@ -124,119 +123,12 @@ pub struct BeaconChain<T: BeaconChainTypes> {
     /// A handler for events generated by the beacon chain.
     pub event_handler: T::EventHandler,
     /// Logging to CLI, etc.
-    log: Logger,
+    pub(crate) log: Logger,
 }

-type BeaconInfo<T> = (BeaconBlock<T>, BeaconState<T>);
+type BeaconBlockAndState<T> = (BeaconBlock<T>, BeaconState<T>);

 impl<T: BeaconChainTypes> BeaconChain<T> {
-    /// Instantiate a new Beacon Chain, from genesis.
-    pub fn from_genesis(
-        store: Arc<T::Store>,
-        eth1_backend: T::Eth1Chain,
-        event_handler: T::EventHandler,
-        mut genesis_state: BeaconState<T::EthSpec>,
-        mut genesis_block: BeaconBlock<T::EthSpec>,
-        spec: ChainSpec,
-        log: Logger,
-    ) -> Result<Self, Error> {
-        genesis_state.build_all_caches(&spec)?;
-
-        let genesis_state_root = genesis_state.canonical_root();
-        store.put(&genesis_state_root, &genesis_state)?;
-
-        genesis_block.state_root = genesis_state_root;
-
-        let genesis_block_root = genesis_block.block_header().canonical_root();
-        store.put(&genesis_block_root, &genesis_block)?;
-
-        // Also store the genesis block under the `ZERO_HASH` key.
-        let genesis_block_root = genesis_block.canonical_root();
-        store.put(&Hash256::zero(), &genesis_block)?;
-
-        let canonical_head = RwLock::new(CheckPoint::new(
-            genesis_block.clone(),
-            genesis_block_root,
-            genesis_state.clone(),
-            genesis_state_root,
-        ));
-
-        // Slot clock
-        let slot_clock = T::SlotClock::new(
-            spec.genesis_slot,
-            Duration::from_secs(genesis_state.genesis_time),
-            Duration::from_millis(spec.milliseconds_per_slot),
-        );
-
-        info!(log, "Beacon chain initialized from genesis";
-            "validator_count" => genesis_state.validators.len(),
-            "state_root" => format!("{}", genesis_state_root),
-            "block_root" => format!("{}", genesis_block_root),
-        );
-
-        Ok(Self {
-            spec,
-            slot_clock,
-            op_pool: OperationPool::new(),
-            eth1_chain: Eth1Chain::new(eth1_backend),
-            canonical_head,
-            genesis_block_root,
-            fork_choice: ForkChoice::new(store.clone(), &genesis_block, genesis_block_root),
-            event_handler,
-            store,
-            log,
-        })
-    }
-
-    /// Attempt to load an existing instance from the given `store`.
-    pub fn from_store(
-        store: Arc<T::Store>,
-        eth1_backend: T::Eth1Chain,
-        event_handler: T::EventHandler,
-        spec: ChainSpec,
-        log: Logger,
-    ) -> Result<Option<BeaconChain<T>>, Error> {
-        let key = Hash256::from_slice(&BEACON_CHAIN_DB_KEY.as_bytes());
-        let p: PersistedBeaconChain<T> = match store.get(&key) {
-            Err(e) => return Err(e.into()),
-            Ok(None) => return Ok(None),
-            Ok(Some(p)) => p,
-        };
-
-        let state = &p.canonical_head.beacon_state;
-
-        let slot_clock = T::SlotClock::new(
-            spec.genesis_slot,
-            Duration::from_secs(state.genesis_time),
-            Duration::from_millis(spec.milliseconds_per_slot),
-        );
-
-        let last_finalized_root = p.canonical_head.beacon_state.finalized_checkpoint.root;
-        let last_finalized_block = &p.canonical_head.beacon_block;
-
-        let op_pool = p.op_pool.into_operation_pool(state, &spec);
-
-        info!(log, "Beacon chain initialized from store";
-            "head_root" => format!("{}", p.canonical_head.beacon_block_root),
-            "head_epoch" => format!("{}", p.canonical_head.beacon_block.slot.epoch(T::EthSpec::slots_per_epoch())),
-            "finalized_root" => format!("{}", last_finalized_root),
-            "finalized_epoch" => format!("{}", last_finalized_block.slot.epoch(T::EthSpec::slots_per_epoch())),
-        );
-
-        Ok(Some(BeaconChain {
-            spec,
-            slot_clock,
-            fork_choice: ForkChoice::new(store.clone(), last_finalized_block, last_finalized_root),
-            op_pool,
-            event_handler,
-            eth1_chain: Eth1Chain::new(eth1_backend),
-            canonical_head: RwLock::new(p.canonical_head),
-            genesis_block_root: p.genesis_block_root,
-            store,
-            log,
-        }))
-    }
-
     /// Attempt to save this instance to `self.store`.
     pub fn persist(&self) -> Result<(), Error> {
         let timer = metrics::start_timer(&metrics::PERSIST_CHAIN);
@ -1270,7 +1162,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         &self,
         randao_reveal: Signature,
         slot: Slot,
-    ) -> Result<BeaconInfo<T::EthSpec>, BlockProductionError> {
+    ) -> Result<BeaconBlockAndState<T::EthSpec>, BlockProductionError> {
         let state = self
             .state_at_slot(slot - 1)
             .map_err(|_| BlockProductionError::UnableToProduceAtSlot(slot))?;
@ -1291,10 +1183,15 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         mut state: BeaconState<T::EthSpec>,
         produce_at_slot: Slot,
         randao_reveal: Signature,
-    ) -> Result<BeaconInfo<T::EthSpec>, BlockProductionError> {
+    ) -> Result<BeaconBlockAndState<T::EthSpec>, BlockProductionError> {
         metrics::inc_counter(&metrics::BLOCK_PRODUCTION_REQUESTS);
         let timer = metrics::start_timer(&metrics::BLOCK_PRODUCTION_TIMES);

+        let eth1_chain = self
+            .eth1_chain
+            .as_ref()
+            .ok_or_else(|| BlockProductionError::NoEth1ChainConnection)?;
+
         // If required, transition the new state to the present slot.
         while state.slot < produce_at_slot {
             per_slot_processing(&mut state, &self.spec)?;
@ -1319,17 +1216,19 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         let mut block = BeaconBlock {
             slot: state.slot,
             parent_root,
-            state_root: Hash256::zero(), // Updated after the state is calculated.
-            signature: Signature::empty_signature(), // To be completed by a validator.
+            state_root: Hash256::zero(),
+            // The block is not signed here, that is the task of a validator client.
+            signature: Signature::empty_signature(),
             body: BeaconBlockBody {
                 randao_reveal,
-                // TODO: replace with real data.
-                eth1_data: self.eth1_chain.eth1_data_for_block_production(&state)?,
+                eth1_data: eth1_chain.eth1_data_for_block_production(&state, &self.spec)?,
                 graffiti,
                 proposer_slashings: proposer_slashings.into(),
                 attester_slashings: attester_slashings.into(),
                 attestations: self.op_pool.get_attestations(&state, &self.spec).into(),
-                deposits: self.eth1_chain.deposits_for_block_inclusion(&state)?.into(),
+                deposits: eth1_chain
+                    .deposits_for_block_inclusion(&state, &self.spec)?
+                    .into(),
                 voluntary_exits: self.op_pool.get_voluntary_exits(&state, &self.spec).into(),
                 transfers: self.op_pool.get_transfers(&state, &self.spec).into(),
             },

@ -0,0 +1,622 @@
use crate::eth1_chain::CachingEth1Backend;
use crate::events::NullEventHandler;
use crate::persisted_beacon_chain::{PersistedBeaconChain, BEACON_CHAIN_DB_KEY};
use crate::{
BeaconChain, BeaconChainTypes, CheckPoint, Eth1Chain, Eth1ChainBackend, EventHandler,
ForkChoice,
};
use eth1::Config as Eth1Config;
use lmd_ghost::{LmdGhost, ThreadSafeReducedTree};
use operation_pool::OperationPool;
use parking_lot::RwLock;
use slog::{info, Logger};
use slot_clock::{SlotClock, TestingSlotClock};
use std::marker::PhantomData;
use std::sync::Arc;
use std::time::Duration;
use store::Store;
use types::{BeaconBlock, BeaconState, ChainSpec, EthSpec, Hash256, Slot};
/// An empty struct used to "witness" all the `BeaconChainTypes` traits. It has no user-facing
/// functionality and only exists to satisfy the type system.
pub struct Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>(
PhantomData<(
TStore,
TSlotClock,
TLmdGhost,
TEth1Backend,
TEthSpec,
TEventHandler,
)>,
);
impl<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler> BeaconChainTypes
for Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
type Store = TStore;
type SlotClock = TSlotClock;
type LmdGhost = TLmdGhost;
type Eth1Chain = TEth1Backend;
type EthSpec = TEthSpec;
type EventHandler = TEventHandler;
}
/// Builds a `BeaconChain` by either creating anew from genesis, or, resuming from an existing chain
/// persisted to `store`.
///
/// Types may be elided and the compiler will infer them if all necessary builder methods have been
/// called. If type inference errors are being raised, it is likely that not all required methods
/// have been called.
///
/// See the tests for a complete working example.
pub struct BeaconChainBuilder<T: BeaconChainTypes> {
store: Option<Arc<T::Store>>,
/// The finalized checkpoint to anchor the chain. May be genesis or a higher
/// checkpoint.
pub finalized_checkpoint: Option<CheckPoint<T::EthSpec>>,
genesis_block_root: Option<Hash256>,
op_pool: Option<OperationPool<T::EthSpec>>,
fork_choice: Option<ForkChoice<T>>,
eth1_chain: Option<Eth1Chain<T::Eth1Chain, T::EthSpec>>,
event_handler: Option<T::EventHandler>,
slot_clock: Option<T::SlotClock>,
spec: ChainSpec,
log: Option<Logger>,
}
impl<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
BeaconChainBuilder<
Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Returns a new builder.
///
/// The `_eth_spec_instance` parameter is only supplied to make concrete the `TEthSpec` trait.
/// This should generally be either the `MinimalEthSpec` or `MainnetEthSpec` types.
pub fn new(_eth_spec_instance: TEthSpec) -> Self {
Self {
store: None,
finalized_checkpoint: None,
genesis_block_root: None,
op_pool: None,
fork_choice: None,
eth1_chain: None,
event_handler: None,
slot_clock: None,
spec: TEthSpec::default_spec(),
log: None,
}
}
/// Override the default spec (as defined by `TEthSpec`).
///
/// This method should generally be called immediately after `Self::new` to ensure components
/// are started with a consistent spec.
pub fn custom_spec(mut self, spec: ChainSpec) -> Self {
self.spec = spec;
self
}
/// Sets the store (database).
///
/// Should generally be called early in the build chain.
pub fn store(mut self, store: Arc<TStore>) -> Self {
self.store = Some(store);
self
}
/// Sets the logger.
///
/// Should generally be called early in the build chain.
pub fn logger(mut self, logger: Logger) -> Self {
self.log = Some(logger);
self
}
/// Attempt to load an existing chain from the builder's `Store`.
///
/// May initialize several components; including the op_pool and finalized checkpoints.
pub fn resume_from_db(mut self) -> Result<Self, String> {
let log = self
.log
.as_ref()
.ok_or_else(|| "resume_from_db requires a log".to_string())?;
info!(
log,
"Starting beacon chain";
"method" => "resume"
);
let store = self
.store
.clone()
.ok_or_else(|| "load_from_store requires a store.".to_string())?;
let key = Hash256::from_slice(&BEACON_CHAIN_DB_KEY.as_bytes());
let p: PersistedBeaconChain<
Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>,
> = match store.get(&key) {
Err(e) => {
return Err(format!(
"DB error when reading persisted beacon chain: {:?}",
e
))
}
Ok(None) => return Err("No persisted beacon chain found in store".into()),
Ok(Some(p)) => p,
};
self.op_pool = Some(
p.op_pool
.into_operation_pool(&p.canonical_head.beacon_state, &self.spec),
);
self.finalized_checkpoint = Some(p.canonical_head);
self.genesis_block_root = Some(p.genesis_block_root);
Ok(self)
}
/// Starts a new chain from a genesis state.
pub fn genesis_state(
mut self,
mut beacon_state: BeaconState<TEthSpec>,
) -> Result<Self, String> {
let store = self
.store
.clone()
.ok_or_else(|| "genesis_state requires a store")?;
let mut beacon_block = genesis_block(&beacon_state, &self.spec);
beacon_state
.build_all_caches(&self.spec)
.map_err(|e| format!("Failed to build genesis state caches: {:?}", e))?;
let beacon_state_root = beacon_state.canonical_root();
beacon_block.state_root = beacon_state_root;
let beacon_block_root = beacon_block.canonical_root();
self.genesis_block_root = Some(beacon_block_root);
store
.put(&beacon_state_root, &beacon_state)
.map_err(|e| format!("Failed to store genesis state: {:?}", e))?;
store
.put(&beacon_block_root, &beacon_block)
.map_err(|e| format!("Failed to store genesis block: {:?}", e))?;
// Store the genesis block under the `ZERO_HASH` key.
store.put(&Hash256::zero(), &beacon_block).map_err(|e| {
format!(
"Failed to store genesis block under 0x00..00 alias: {:?}",
e
)
})?;
self.finalized_checkpoint = Some(CheckPoint {
beacon_block_root,
beacon_block,
beacon_state_root,
beacon_state,
});
Ok(self.empty_op_pool())
}
/// Sets the `BeaconChain` fork choice backend.
///
/// Requires the store and state to have been specified earlier in the build chain.
pub fn fork_choice_backend(mut self, backend: TLmdGhost) -> Result<Self, String> {
let store = self
.store
.clone()
.ok_or_else(|| "reduced_tree_fork_choice requires a store")?;
let genesis_block_root = self
.genesis_block_root
.ok_or_else(|| "fork_choice_backend requires a genesis_block_root")?;
self.fork_choice = Some(ForkChoice::new(store, backend, genesis_block_root));
Ok(self)
}
/// Sets the `BeaconChain` eth1 backend.
pub fn eth1_backend(mut self, backend: Option<TEth1Backend>) -> Self {
self.eth1_chain = backend.map(Eth1Chain::new);
self
}
/// Sets the `BeaconChain` event handler backend.
///
/// For example, provide `WebSocketSender` as a `handler`.
pub fn event_handler(mut self, handler: TEventHandler) -> Self {
self.event_handler = Some(handler);
self
}
/// Sets the `BeaconChain` slot clock.
///
/// For example, provide `SystemTimeSlotClock` as a `clock`.
pub fn slot_clock(mut self, clock: TSlotClock) -> Self {
self.slot_clock = Some(clock);
self
}
/// Creates a new, empty operation pool.
fn empty_op_pool(mut self) -> Self {
self.op_pool = Some(OperationPool::new());
self
}
/// Consumes `self`, returning a `BeaconChain` if all required parameters have been supplied.
///
/// An error will be returned at runtime if all required parameters have not been configured.
///
/// Will also raise ambiguous type errors at compile time if some parameters have not been
/// configured.
#[allow(clippy::type_complexity)] // I think there's nothing to be gained here from a type alias.
pub fn build(
self,
) -> Result<
BeaconChain<Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>>,
String,
> {
let mut canonical_head = self
.finalized_checkpoint
.ok_or_else(|| "Cannot build without a state".to_string())?;
canonical_head
.beacon_state
.build_all_caches(&self.spec)
.map_err(|e| format!("Failed to build state caches: {:?}", e))?;
let log = self
.log
.ok_or_else(|| "Cannot build without a logger".to_string())?;
if canonical_head.beacon_block.state_root != canonical_head.beacon_state_root {
return Err("beacon_block.state_root != beacon_state".to_string());
}
let beacon_chain = BeaconChain {
spec: self.spec,
store: self
.store
.ok_or_else(|| "Cannot build without store".to_string())?,
slot_clock: self
.slot_clock
.ok_or_else(|| "Cannot build without slot clock".to_string())?,
op_pool: self
.op_pool
.ok_or_else(|| "Cannot build without op pool".to_string())?,
eth1_chain: self.eth1_chain,
canonical_head: RwLock::new(canonical_head),
genesis_block_root: self
.genesis_block_root
.ok_or_else(|| "Cannot build without a genesis block root".to_string())?,
fork_choice: self
.fork_choice
.ok_or_else(|| "Cannot build without a fork choice".to_string())?,
event_handler: self
.event_handler
.ok_or_else(|| "Cannot build without an event handler".to_string())?,
log: log.clone(),
};
info!(
log,
"Beacon chain initialized";
"head_state" => format!("{}", beacon_chain.head().beacon_state_root),
"head_block" => format!("{}", beacon_chain.head().beacon_block_root),
"head_slot" => format!("{}", beacon_chain.head().beacon_block.slot),
);
Ok(beacon_chain)
}
}
impl<TStore, TSlotClock, TEth1Backend, TEthSpec, TEventHandler>
BeaconChainBuilder<
Witness<
TStore,
TSlotClock,
ThreadSafeReducedTree<TStore, TEthSpec>,
TEth1Backend,
TEthSpec,
TEventHandler,
>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Initializes a new, empty (no recorded votes or blocks) fork choice, using the
/// `ThreadSafeReducedTree` backend.
///
/// Requires the store and state to be initialized.
pub fn empty_reduced_tree_fork_choice(self) -> Result<Self, String> {
let store = self
.store
.clone()
.ok_or_else(|| "reduced_tree_fork_choice requires a store")?;
let finalized_checkpoint = &self
.finalized_checkpoint
.as_ref()
.expect("should have finalized checkpoint");
let backend = ThreadSafeReducedTree::new(
store.clone(),
&finalized_checkpoint.beacon_block,
finalized_checkpoint.beacon_block_root,
);
self.fork_choice_backend(backend)
}
}
impl<TStore, TSlotClock, TLmdGhost, TEthSpec, TEventHandler>
BeaconChainBuilder<
Witness<
TStore,
TSlotClock,
TLmdGhost,
CachingEth1Backend<TEthSpec, TStore>,
TEthSpec,
TEventHandler,
>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Sets the `BeaconChain` eth1 back-end to `CachingEth1Backend`.
pub fn caching_eth1_backend(self, backend: CachingEth1Backend<TEthSpec, TStore>) -> Self {
self.eth1_backend(Some(backend))
}
/// Do not use any eth1 backend. The client will not be able to produce beacon blocks.
pub fn no_eth1_backend(self) -> Self {
self.eth1_backend(None)
}
/// Sets the `BeaconChain` eth1 back-end to produce predictably junk data when producing blocks.
pub fn dummy_eth1_backend(mut self) -> Result<Self, String> {
let log = self
.log
.as_ref()
.ok_or_else(|| "dummy_eth1_backend requires a log".to_string())?;
let store = self
.store
.clone()
.ok_or_else(|| "dummy_eth1_backend requires a store.".to_string())?;
let backend = CachingEth1Backend::new(Eth1Config::default(), log.clone(), store);
let mut eth1_chain = Eth1Chain::new(backend);
eth1_chain.use_dummy_backend = true;
self.eth1_chain = Some(eth1_chain);
Ok(self)
}
}
impl<TStore, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
BeaconChainBuilder<
Witness<TStore, TestingSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>,
>
where
TStore: Store + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Sets the `BeaconChain` slot clock to `TestingSlotClock`.
///
/// Requires the state to be initialized.
pub fn testing_slot_clock(self, slot_duration: Duration) -> Result<Self, String> {
let genesis_time = self
.finalized_checkpoint
.as_ref()
.ok_or_else(|| "testing_slot_clock requires an initialized state")?
.beacon_state
.genesis_time;
let slot_clock = TestingSlotClock::new(
Slot::new(0),
Duration::from_secs(genesis_time),
slot_duration,
);
Ok(self.slot_clock(slot_clock))
}
}
impl<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec>
BeaconChainBuilder<
Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, NullEventHandler<TEthSpec>>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
{
/// Sets the `BeaconChain` event handler to `NullEventHandler`.
pub fn null_event_handler(self) -> Self {
let handler = NullEventHandler::default();
self.event_handler(handler)
}
}
fn genesis_block<T: EthSpec>(genesis_state: &BeaconState<T>, spec: &ChainSpec) -> BeaconBlock<T> {
let mut genesis_block = BeaconBlock::empty(&spec);
genesis_block.state_root = genesis_state.canonical_root();
genesis_block
}
#[cfg(test)]
mod test {
use super::*;
use eth2_hashing::hash;
use genesis::{generate_deterministic_keypairs, interop_genesis_state};
use sloggers::{null::NullLoggerBuilder, Build};
use ssz::Encode;
use std::time::Duration;
use store::MemoryStore;
use types::{EthSpec, MinimalEthSpec, Slot};
type TestEthSpec = MinimalEthSpec;
fn get_logger() -> Logger {
let builder = NullLoggerBuilder;
builder.build().expect("should build logger")
}
#[test]
fn recent_genesis() {
let validator_count = 8;
let genesis_time = 13371337;
let log = get_logger();
let store = Arc::new(MemoryStore::open());
let spec = MinimalEthSpec::default_spec();
let genesis_state = interop_genesis_state(
&generate_deterministic_keypairs(validator_count),
genesis_time,
&spec,
)
.expect("should create interop genesis state");
let chain = BeaconChainBuilder::new(MinimalEthSpec)
.logger(log.clone())
.store(store.clone())
.genesis_state(genesis_state)
.expect("should build state using recent genesis")
.dummy_eth1_backend()
.expect("should build the dummy eth1 backend")
.null_event_handler()
.testing_slot_clock(Duration::from_secs(1))
.expect("should configure testing slot clock")
.empty_reduced_tree_fork_choice()
.expect("should add fork choice to builder")
.build()
.expect("should build");
let head = chain.head();
let state = head.beacon_state;
let block = head.beacon_block;
assert_eq!(state.slot, Slot::new(0), "should start from genesis");
assert_eq!(
state.genesis_time, 13371337,
"should have the correct genesis time"
);
assert_eq!(
block.state_root,
state.canonical_root(),
"block should have correct state root"
);
assert_eq!(
chain
.store
.get::<BeaconBlock<_>>(&Hash256::zero())
.expect("should read db")
.expect("should find genesis block"),
block,
"should store genesis block under zero hash alias"
);
assert_eq!(
state.validators.len(),
validator_count,
"should have correct validator count"
);
assert_eq!(
chain.genesis_block_root,
block.canonical_root(),
"should have correct genesis block root"
);
}
#[test]
fn interop_state() {
let validator_count = 16;
let genesis_time = 42;
let spec = &TestEthSpec::default_spec();
let keypairs = generate_deterministic_keypairs(validator_count);
let state = interop_genesis_state::<TestEthSpec>(&keypairs, genesis_time, spec)
.expect("should build state");
assert_eq!(
state.eth1_data.block_hash,
Hash256::from_slice(&[0x42; 32]),
"eth1 block hash should be co-ordinated junk"
);
assert_eq!(
state.genesis_time, genesis_time,
"genesis time should be as specified"
);
for b in &state.balances {
assert_eq!(
*b, spec.max_effective_balance,
"validator balances should be max effective balance"
);
}
for v in &state.validators {
let creds = v.withdrawal_credentials.as_bytes();
assert_eq!(
creds[0], spec.bls_withdrawal_prefix_byte,
"first byte of withdrawal creds should be bls prefix"
);
assert_eq!(
&creds[1..],
&hash(&v.pubkey.as_ssz_bytes())[1..],
"rest of withdrawal creds should be pubkey hash"
)
}
assert_eq!(
state.balances.len(),
validator_count,
"validator balances len should be correct"
);
assert_eq!(
state.validators.len(),
validator_count,
"validator count should be correct"
);
}
}

@ -55,6 +55,9 @@ pub enum BlockProductionError {
     BlockProcessingError(BlockProcessingError),
     Eth1ChainError(Eth1ChainError),
     BeaconStateError(BeaconStateError),
+    /// The `BeaconChain` was explicitly configured _without_ a connection to eth1, therefore it
+    /// cannot produce blocks.
+    NoEth1ChainConnection,
 }

 easy_from_to!(BlockProcessingError, BlockProductionError);

(File diff suppressed because it is too large.)

@ -1,6 +1,7 @@
 use serde_derive::{Deserialize, Serialize};
 use std::marker::PhantomData;
 use types::{Attestation, BeaconBlock, Epoch, EthSpec, Hash256};
+pub use websocket_server::WebSocketSender;

 pub trait EventHandler<T: EthSpec>: Sized + Send + Sync {
     fn register(&self, kind: EventKind<T>) -> Result<(), String>;
@ -8,6 +9,15 @@ pub trait EventHandler<T: EthSpec>: Sized + Send + Sync {

 pub struct NullEventHandler<T: EthSpec>(PhantomData<T>);

+impl<T: EthSpec> EventHandler<T> for WebSocketSender<T> {
+    fn register(&self, kind: EventKind<T>) -> Result<(), String> {
+        self.send_string(
+            serde_json::to_string(&kind)
+                .map_err(|e| format!("Unable to serialize event: {:?}", e))?,
+        )
+    }
+}
+
 impl<T: EthSpec> EventHandler<T> for NullEventHandler<T> {
     fn register(&self, _kind: EventKind<T>) -> Result<(), String> {
         Ok(())
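The trait shown above is deliberately small, so other sinks can plug in the same way as `WebSocketSender`; a hypothetical handler that merely counts events might look like this (sketch only, not part of the diff):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

use beacon_chain::events::{EventHandler, EventKind}; // re-exports added in this PR
use types::EthSpec;

/// Hypothetical event sink that counts how many events the chain emits.
pub struct CountingEventHandler(AtomicUsize);

impl<T: EthSpec> EventHandler<T> for CountingEventHandler {
    fn register(&self, _kind: EventKind<T>) -> Result<(), String> {
        self.0.fetch_add(1, Ordering::Relaxed);
        Ok(())
    }
}
```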

@ -33,14 +33,10 @@ impl<T: BeaconChainTypes> ForkChoice<T> {
     ///
     /// "Genesis" does not necessarily need to be the absolute genesis, it can be some finalized
     /// block.
-    pub fn new(
-        store: Arc<T::Store>,
-        genesis_block: &BeaconBlock<T::EthSpec>,
-        genesis_block_root: Hash256,
-    ) -> Self {
+    pub fn new(store: Arc<T::Store>, backend: T::LmdGhost, genesis_block_root: Hash256) -> Self {
         Self {
             store: store.clone(),
-            backend: T::LmdGhost::new(store, genesis_block, genesis_block_root),
+            backend,
             genesis_block_root,
         }
     }

@ -3,10 +3,10 @@
 extern crate lazy_static;

 mod beacon_chain;
-mod beacon_chain_builder;
+pub mod builder;
 mod checkpoint;
 mod errors;
-mod eth1_chain;
+pub mod eth1_chain;
 pub mod events;
 mod fork_choice;
 mod iter;
@ -19,8 +19,9 @@ pub use self::beacon_chain::{
 };
 pub use self::checkpoint::CheckPoint;
 pub use self::errors::{BeaconChainError, BlockProductionError};
-pub use beacon_chain_builder::BeaconChainBuilder;
-pub use eth1_chain::{Eth1ChainBackend, InteropEth1ChainBackend};
+pub use eth1_chain::{Eth1Chain, Eth1ChainBackend};
+pub use events::EventHandler;
+pub use fork_choice::ForkChoice;
 pub use lmd_ghost;
 pub use metrics::scrape_for_metrics;
 pub use parking_lot;

@ -1,14 +1,17 @@
 use crate::{
-    events::NullEventHandler, AttestationProcessingOutcome, BeaconChain, BeaconChainBuilder,
-    BeaconChainTypes, BlockProcessingOutcome, InteropEth1ChainBackend,
+    builder::{BeaconChainBuilder, Witness},
+    eth1_chain::CachingEth1Backend,
+    events::NullEventHandler,
+    AttestationProcessingOutcome, BeaconChain, BeaconChainTypes, BlockProcessingOutcome,
 };
-use lmd_ghost::LmdGhost;
+use genesis::interop_genesis_state;
+use lmd_ghost::ThreadSafeReducedTree;
 use rayon::prelude::*;
 use sloggers::{terminal::TerminalLoggerBuilder, types::Severity, Build};
 use slot_clock::TestingSlotClock;
 use state_processing::per_slot_processing;
-use std::marker::PhantomData;
 use std::sync::Arc;
+use std::time::Duration;
 use store::MemoryStore;
 use tree_hash::{SignedRoot, TreeHash};
 use types::{
@ -17,12 +20,20 @@ use types::{
     Slot,
 };

-pub use crate::persisted_beacon_chain::{PersistedBeaconChain, BEACON_CHAIN_DB_KEY};
 pub use types::test_utils::generate_deterministic_keypairs;
+pub use crate::persisted_beacon_chain::{PersistedBeaconChain, BEACON_CHAIN_DB_KEY};

 pub const HARNESS_GENESIS_TIME: u64 = 1_567_552_690; // 4th September 2019

+pub type HarnessType<E> = Witness<
+    MemoryStore,
+    TestingSlotClock,
+    ThreadSafeReducedTree<MemoryStore, E>,
+    CachingEth1Backend<E, MemoryStore>,
+    E,
+    NullEventHandler<E>,
+>;
+
 /// Indicates how the `BeaconChainHarness` should produce blocks.
 #[derive(Clone, Copy, Debug)]
 pub enum BlockStrategy {
@ -48,50 +59,19 @@ pub enum AttestationStrategy {
     SomeValidators(Vec<usize>),
 }

-/// Used to make the `BeaconChainHarness` generic over some types.
-pub struct CommonTypes<L, E>
-where
-    L: LmdGhost<MemoryStore, E>,
-    E: EthSpec,
-{
-    _phantom_l: PhantomData<L>,
-    _phantom_e: PhantomData<E>,
-}
-
-impl<L, E> BeaconChainTypes for CommonTypes<L, E>
-where
-    L: LmdGhost<MemoryStore, E> + 'static,
-    E: EthSpec,
-{
-    type Store = MemoryStore;
-    type SlotClock = TestingSlotClock;
-    type LmdGhost = L;
-    type Eth1Chain = InteropEth1ChainBackend<E>;
-    type EthSpec = E;
-    type EventHandler = NullEventHandler<E>;
-}
-
 /// A testing harness which can instantiate a `BeaconChain` and populate it with blocks and
 /// attestations.
 ///
 /// Used for testing.
-pub struct BeaconChainHarness<L, E>
-where
-    L: LmdGhost<MemoryStore, E> + 'static,
-    E: EthSpec,
-{
-    pub chain: BeaconChain<CommonTypes<L, E>>,
+pub struct BeaconChainHarness<T: BeaconChainTypes> {
+    pub chain: BeaconChain<T>,
     pub keypairs: Vec<Keypair>,
     pub spec: ChainSpec,
 }

-impl<L, E> BeaconChainHarness<L, E>
-where
-    L: LmdGhost<MemoryStore, E>,
-    E: EthSpec,
-{
+impl<E: EthSpec> BeaconChainHarness<HarnessType<E>> {
     /// Instantiate a new harness with `validator_count` initial validators.
-    pub fn new(keypairs: Vec<Keypair>) -> Self {
+    pub fn new(eth_spec_instance: E, keypairs: Vec<Keypair>) -> Self {
         let spec = E::default_spec();

         let log = TerminalLoggerBuilder::new()
@ -99,22 +79,29 @@ where
             .build()
             .expect("logger should build");

-        let store = Arc::new(MemoryStore::open());
-
-        let chain =
-            BeaconChainBuilder::quick_start(HARNESS_GENESIS_TIME, &keypairs, spec.clone(), log)
-                .unwrap_or_else(|e| panic!("Failed to create beacon chain builder: {}", e))
-                .build(
-                    store.clone(),
-                    InteropEth1ChainBackend::default(),
-                    NullEventHandler::default(),
-                )
-                .unwrap_or_else(|e| panic!("Failed to build beacon chain: {}", e));
+        let chain = BeaconChainBuilder::new(eth_spec_instance)
+            .logger(log.clone())
+            .custom_spec(spec.clone())
+            .store(Arc::new(MemoryStore::open()))
+            .genesis_state(
+                interop_genesis_state::<E>(&keypairs, HARNESS_GENESIS_TIME, &spec)
+                    .expect("should generate interop state"),
+            )
+            .expect("should build state using recent genesis")
+            .dummy_eth1_backend()
+            .expect("should build dummy backend")
+            .null_event_handler()
+            .testing_slot_clock(Duration::from_secs(1))
+            .expect("should configure testing slot clock")
+            .empty_reduced_tree_fork_choice()
+            .expect("should add fork choice to builder")
+            .build()
+            .expect("should build");

         Self {
+            spec: chain.spec.clone(),
             chain,
             keypairs,
-            spec,
         }
     }

@ -6,14 +6,13 @@ extern crate lazy_static;
 use beacon_chain::AttestationProcessingOutcome;
 use beacon_chain::{
     test_utils::{
-        AttestationStrategy, BeaconChainHarness, BlockStrategy, CommonTypes, PersistedBeaconChain,
+        AttestationStrategy, BeaconChainHarness, BlockStrategy, HarnessType, PersistedBeaconChain,
         BEACON_CHAIN_DB_KEY,
     },
     BlockProcessingOutcome,
 };
-use lmd_ghost::ThreadSafeReducedTree;
 use rand::Rng;
-use store::{MemoryStore, Store};
+use store::Store;
 use types::test_utils::{SeedableRng, TestRandom, XorShiftRng};
 use types::{Deposit, EthSpec, Hash256, Keypair, MinimalEthSpec, RelativeEpoch, Slot};
@ -25,10 +24,8 @@ lazy_static! {
     static ref KEYPAIRS: Vec<Keypair> = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT);
 }

-type TestForkChoice = ThreadSafeReducedTree<MemoryStore, MinimalEthSpec>;
-
-fn get_harness(validator_count: usize) -> BeaconChainHarness<TestForkChoice, MinimalEthSpec> {
-    let harness = BeaconChainHarness::new(KEYPAIRS[0..validator_count].to_vec());
+fn get_harness(validator_count: usize) -> BeaconChainHarness<HarnessType<MinimalEthSpec>> {
+    let harness = BeaconChainHarness::new(MinimalEthSpec, KEYPAIRS[0..validator_count].to_vec());

     harness.advance_slot();
@ -322,7 +319,7 @@ fn roundtrip_operation_pool() {
     harness.chain.persist().unwrap();

     let key = Hash256::from_slice(&BEACON_CHAIN_DB_KEY.as_bytes());
-    let p: PersistedBeaconChain<CommonTypes<TestForkChoice, MinimalEthSpec>> =
+    let p: PersistedBeaconChain<HarnessType<MinimalEthSpec>> =
         harness.chain.store.get(&key).unwrap().unwrap();

     let restored_op_pool = p

@ -4,6 +4,10 @@ version = "0.1.0"
 authors = ["Age Manning <Age@AgeManning.com>"]
 edition = "2018"

+[dev-dependencies]
+sloggers = "0.3.4"
+toml = "^0.5"
+
 [dependencies]
 beacon_chain = { path = "../beacon_chain" }
 store = { path = "../store" }
@ -31,3 +35,8 @@ exit-future = "0.1.4"
 futures = "0.1.29"
 reqwest = "0.9.22"
 url = "2.1.0"
+lmd_ghost = { path = "../../eth2/lmd_ghost" }
+eth1 = { path = "../eth1" }
+genesis = { path = "../genesis" }
+environment = { path = "../../lighthouse/environment" }
+lighthouse_bootstrap = { path = "../../eth2/utils/lighthouse_bootstrap" }

@ -0,0 +1,715 @@
use crate::config::{ClientGenesis, Config as ClientConfig};
use crate::Client;
use beacon_chain::{
builder::{BeaconChainBuilder, Witness},
eth1_chain::CachingEth1Backend,
lmd_ghost::ThreadSafeReducedTree,
slot_clock::{SlotClock, SystemTimeSlotClock},
store::{DiskStore, MemoryStore, Store},
BeaconChain, BeaconChainTypes, Eth1ChainBackend, EventHandler,
};
use environment::RuntimeContext;
use eth1::{Config as Eth1Config, Service as Eth1Service};
use eth2_config::Eth2Config;
use exit_future::Signal;
use futures::{future, Future, IntoFuture, Stream};
use genesis::{
generate_deterministic_keypairs, interop_genesis_state, state_from_ssz_file, Eth1GenesisService,
};
use lighthouse_bootstrap::Bootstrapper;
use lmd_ghost::LmdGhost;
use network::{NetworkConfig, NetworkMessage, Service as NetworkService};
use rpc::Config as RpcConfig;
use slog::{debug, error, info, warn};
use std::net::SocketAddr;
use std::path::Path;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::sync::mpsc::UnboundedSender;
use tokio::timer::Interval;
use types::{ChainSpec, EthSpec};
use websocket_server::{Config as WebSocketConfig, WebSocketSender};
/// The interval between notifier events.
pub const NOTIFIER_INTERVAL_SECONDS: u64 = 15;
/// Create a warning log whenever the peer count is at or below this value.
pub const WARN_PEER_COUNT: usize = 1;
/// Interval between polling the eth1 node for genesis information.
pub const ETH1_GENESIS_UPDATE_INTERVAL_MILLIS: u64 = 500;
/// Builds a `Client` instance.
///
/// ## Notes
///
/// The builder may start some services (e.g., libp2p, http server) immediately after they are
/// initialized, _before_ the `self.build(..)` method has been called.
///
/// Types may be elided and the compiler will infer them once all required methods have been
/// called.
///
/// If type inference errors are raised, ensure all necessary components have been initialized. For
/// example, the compiler will be unable to infer `T::Store` unless `self.disk_store(..)` or
/// `self.memory_store(..)` has been called.
pub struct ClientBuilder<T: BeaconChainTypes> {
slot_clock: Option<T::SlotClock>,
store: Option<Arc<T::Store>>,
runtime_context: Option<RuntimeContext<T::EthSpec>>,
chain_spec: Option<ChainSpec>,
beacon_chain_builder: Option<BeaconChainBuilder<T>>,
beacon_chain: Option<Arc<BeaconChain<T>>>,
eth1_service: Option<Eth1Service>,
exit_signals: Vec<Signal>,
event_handler: Option<T::EventHandler>,
libp2p_network: Option<Arc<NetworkService<T>>>,
libp2p_network_send: Option<UnboundedSender<NetworkMessage>>,
http_listen_addr: Option<SocketAddr>,
websocket_listen_addr: Option<SocketAddr>,
eth_spec_instance: T::EthSpec,
}
impl<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
ClientBuilder<Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>>
where
TStore: Store + 'static,
TSlotClock: SlotClock + Clone + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Instantiates a new, empty builder.
///
/// The `eth_spec_instance` parameter is used to concretize `TEthSpec`.
pub fn new(eth_spec_instance: TEthSpec) -> Self {
Self {
slot_clock: None,
store: None,
runtime_context: None,
chain_spec: None,
beacon_chain_builder: None,
beacon_chain: None,
eth1_service: None,
exit_signals: vec![],
event_handler: None,
libp2p_network: None,
libp2p_network_send: None,
http_listen_addr: None,
websocket_listen_addr: None,
eth_spec_instance,
}
}
/// Specifies the runtime context (tokio executor, logger, etc) for client services.
pub fn runtime_context(mut self, context: RuntimeContext<TEthSpec>) -> Self {
self.runtime_context = Some(context);
self
}
/// Specifies the `ChainSpec`.
pub fn chain_spec(mut self, spec: ChainSpec) -> Self {
self.chain_spec = Some(spec);
self
}
/// Initializes the `BeaconChainBuilder`. The `build_beacon_chain` method will need to be
/// called later in order to actually instantiate the `BeaconChain`.
pub fn beacon_chain_builder(
mut self,
client_genesis: ClientGenesis,
config: Eth1Config,
) -> impl Future<Item = Self, Error = String> {
let store = self.store.clone();
let chain_spec = self.chain_spec.clone();
let runtime_context = self.runtime_context.clone();
let eth_spec_instance = self.eth_spec_instance.clone();
future::ok(())
.and_then(move |()| {
let store = store
.ok_or_else(|| "beacon_chain_start_method requires a store".to_string())?;
let context = runtime_context
.ok_or_else(|| "beacon_chain_start_method requires a log".to_string())?
.service_context("beacon");
let spec = chain_spec
.ok_or_else(|| "beacon_chain_start_method requires a chain spec".to_string())?;
let builder = BeaconChainBuilder::new(eth_spec_instance)
.logger(context.log.clone())
.store(store.clone())
.custom_spec(spec.clone());
Ok((builder, spec, context))
})
.and_then(move |(builder, spec, context)| {
let genesis_state_future: Box<dyn Future<Item = _, Error = _> + Send> =
match client_genesis {
ClientGenesis::Interop {
validator_count,
genesis_time,
} => {
let keypairs = generate_deterministic_keypairs(validator_count);
let result = interop_genesis_state(&keypairs, genesis_time, &spec);
let future = result
.and_then(move |genesis_state| builder.genesis_state(genesis_state))
.into_future()
.map(|v| (v, None));
Box::new(future)
}
ClientGenesis::SszFile { path } => {
let result = state_from_ssz_file(path);
let future = result
.and_then(move |genesis_state| builder.genesis_state(genesis_state))
.into_future()
.map(|v| (v, None));
Box::new(future)
}
ClientGenesis::DepositContract => {
let genesis_service = Eth1GenesisService::new(
// Some of the configuration options for `Eth1Config` are
// hard-coded when listening for genesis from the deposit contract.
//
// The idea is that the `Eth1Config` supplied to this function
// (`config`) is intended for block production duties (i.e.,
// listening for deposit events and voting on eth1 data) and that
// we can make listening for genesis more efficient if we modify
// some params.
Eth1Config {
// Truncating the block cache makes searching for genesis more
// complicated.
block_cache_truncation: None,
// Scan large ranges of blocks when awaiting genesis.
blocks_per_log_query: 1_000,
// Only perform a single log request each time the eth1 node is
// polled.
//
// For small testnets this makes finding genesis much faster,
// as it usually happens within 1,000 blocks.
max_log_requests_per_update: Some(1),
// Only perform a single block request each time the eth1 node
// is polled.
//
// For small testnets, this is much faster as they do not have
// a `MIN_GENESIS_SECONDS`, so after `MIN_GENESIS_VALIDATOR_COUNT`
// has been reached only a single block needs to be read.
max_blocks_per_update: Some(1),
..config
},
context.log.clone(),
);
let future = genesis_service
.wait_for_genesis_state(
Duration::from_millis(ETH1_GENESIS_UPDATE_INTERVAL_MILLIS),
context.eth2_config().spec.clone(),
)
.and_then(move |genesis_state| builder.genesis_state(genesis_state))
.map(|v| (v, Some(genesis_service.into_core_service())));
Box::new(future)
}
ClientGenesis::RemoteNode { server, .. } => {
let future = Bootstrapper::connect(server.to_string(), &context.log)
.map_err(|e| {
format!("Failed to initialize bootstrap client: {}", e)
})
.into_future()
.and_then(|bootstrapper| {
let (genesis_state, _genesis_block) =
bootstrapper.genesis().map_err(|e| {
format!("Failed to bootstrap genesis state: {}", e)
})?;
builder.genesis_state(genesis_state)
})
.map(|v| (v, None));
Box::new(future)
}
ClientGenesis::Resume => {
let future = builder.resume_from_db().into_future().map(|v| (v, None));
Box::new(future)
}
};
genesis_state_future
})
.map(move |(beacon_chain_builder, eth1_service_option)| {
self.eth1_service = eth1_service_option;
self.beacon_chain_builder = Some(beacon_chain_builder);
self
})
}
/// Immediately starts the libp2p networking stack.
pub fn libp2p_network(mut self, config: &NetworkConfig) -> Result<Self, String> {
let beacon_chain = self
.beacon_chain
.clone()
.ok_or_else(|| "libp2p_network requires a beacon chain")?;
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "libp2p_network requires a runtime_context")?
.service_context("network");
let (network, network_send) =
NetworkService::new(beacon_chain, config, &context.executor, context.log)
.map_err(|e| format!("Failed to start libp2p network: {:?}", e))?;
self.libp2p_network = Some(network);
self.libp2p_network_send = Some(network_send);
Ok(self)
}
/// Immediately starts the gRPC server (gRPC is soon to be deprecated).
pub fn grpc_server(mut self, config: &RpcConfig) -> Result<Self, String> {
let beacon_chain = self
.beacon_chain
.clone()
.ok_or_else(|| "grpc_server requires a beacon chain")?;
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "grpc_server requires a runtime_context")?
.service_context("grpc");
let network_send = self
.libp2p_network_send
.clone()
.ok_or_else(|| "grpc_server requires a libp2p network")?;
let exit_signal = rpc::start_server(
config,
&context.executor,
network_send,
beacon_chain,
context.log,
);
self.exit_signals.push(exit_signal);
Ok(self)
}
/// Immediately starts the beacon node REST API http server.
pub fn http_server(
mut self,
client_config: &ClientConfig,
eth2_config: &Eth2Config,
) -> Result<Self, String> {
let beacon_chain = self
.beacon_chain
.clone()
.ok_or_else(|| "http_server requires a beacon chain")?;
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "http_server requires a runtime_context")?
.service_context("http");
let network = self
.libp2p_network
.clone()
.ok_or_else(|| "http_server requires a libp2p network")?;
let network_send = self
.libp2p_network_send
.clone()
.ok_or_else(|| "http_server requires a libp2p network sender")?;
let network_info = rest_api::NetworkInfo {
network_service: network.clone(),
network_chan: network_send.clone(),
};
let (exit_signal, listening_addr) = rest_api::start_server(
&client_config.rest_api,
&context.executor,
beacon_chain.clone(),
network_info,
client_config.db_path().expect("unable to read datadir"),
eth2_config.clone(),
context.log,
)
.map_err(|e| format!("Failed to start HTTP API: {:?}", e))?;
self.exit_signals.push(exit_signal);
self.http_listen_addr = Some(listening_addr);
Ok(self)
}
/// Immediately starts the service that periodically logs about the libp2p peer count.
pub fn peer_count_notifier(mut self) -> Result<Self, String> {
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "peer_count_notifier requires a runtime_context")?
.service_context("peer_notifier");
let log = context.log.clone();
let log_2 = context.log.clone();
let network = self
.libp2p_network
.clone()
.ok_or_else(|| "peer_count_notifier requires a libp2p network")?;
let (exit_signal, exit) = exit_future::signal();
self.exit_signals.push(exit_signal);
let interval_future = Interval::new(
Instant::now(),
Duration::from_secs(NOTIFIER_INTERVAL_SECONDS),
)
.map_err(move |e| error!(log_2, "Notifier timer failed"; "error" => format!("{:?}", e)))
.for_each(move |_| {
// NOTE: Panics if libp2p is poisoned.
let connected_peer_count = network.libp2p_service().lock().swarm.connected_peers();
debug!(log, "Connected peer status"; "peer_count" => connected_peer_count);
if connected_peer_count <= WARN_PEER_COUNT {
warn!(log, "Low peer count"; "peer_count" => connected_peer_count);
}
Ok(())
});
context
.executor
.spawn(exit.until(interval_future).map(|_| ()));
Ok(self)
}
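
Both notifier services here follow the same shutdown idiom: an `exit_future::Signal` is stored on the builder and the spawned task runs only `until` that signal is dropped. A minimal, self-contained sketch of the idiom (same futures 0.1 / tokio 0.1 / exit-future 0.1 stack as this PR; `spawn_heartbeat` is a hypothetical helper, not part of the PR):

```rust
use futures::{Future, Stream};
use std::time::{Duration, Instant};
use tokio::runtime::TaskExecutor;
use tokio::timer::Interval;

/// Spawns a periodic task that stops when the returned `Signal` is dropped.
fn spawn_heartbeat(executor: &TaskExecutor) -> exit_future::Signal {
    let (signal, exit) = exit_future::signal();
    let interval = Interval::new(Instant::now(), Duration::from_secs(15))
        // The timer can fail (e.g., runtime shutdown); discard the error here.
        .map_err(|_| ())
        .for_each(|_| {
            // Periodic work goes here (e.g., logging the peer count).
            Ok(())
        });
    // `exit.until(..)` resolves as soon as `signal` is dropped, ending the task.
    executor.spawn(exit.until(interval).map(|_| ()));
    signal
}
```
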
/// Immediately starts the service that periodically logs information each slot.
pub fn slot_notifier(mut self) -> Result<Self, String> {
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "slot_notifier requires a runtime_context")?
.service_context("slot_notifier");
let log = context.log.clone();
let log_2 = log.clone();
let beacon_chain = self
.beacon_chain
.clone()
.ok_or_else(|| "slot_notifier requires a beacon chain")?;
let spec = self
.chain_spec
.clone()
.ok_or_else(|| "slot_notifier requires a chain spec".to_string())?;
let slot_duration = Duration::from_millis(spec.milliseconds_per_slot);
let duration_to_next_slot = beacon_chain
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| "slot_notifier unable to determine time to next slot")?;
let (exit_signal, exit) = exit_future::signal();
self.exit_signals.push(exit_signal);
let interval_future = Interval::new(Instant::now() + duration_to_next_slot, slot_duration)
.map_err(move |e| error!(log_2, "Slot timer failed"; "error" => format!("{:?}", e)))
.for_each(move |_| {
let best_slot = beacon_chain.head().beacon_block.slot;
let latest_block_root = beacon_chain.head().beacon_block_root;
if let Ok(current_slot) = beacon_chain.slot() {
info!(
log,
"Slot start";
"skip_slots" => current_slot.saturating_sub(best_slot),
"best_block_root" => format!("{}", latest_block_root),
"best_block_slot" => best_slot,
"slot" => current_slot,
)
} else {
error!(
log,
"Beacon chain running whilst slot clock is unavailable."
);
};
Ok(())
});
context
.executor
.spawn(exit.until(interval_future).map(|_| ()));
Ok(self)
}
/// Consumes the builder, returning a `Client` if all necessary components have been
/// specified.
///
/// If type inference errors are being raised, see the comment on the definition of `Self`.
pub fn build(
self,
) -> Client<Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>> {
Client {
beacon_chain: self.beacon_chain,
libp2p_network: self.libp2p_network,
http_listen_addr: self.http_listen_addr,
websocket_listen_addr: self.websocket_listen_addr,
_exit_signals: self.exit_signals,
}
}
}
impl<TStore, TSlotClock, TEth1Backend, TEthSpec, TEventHandler>
ClientBuilder<
Witness<
TStore,
TSlotClock,
ThreadSafeReducedTree<TStore, TEthSpec>,
TEth1Backend,
TEthSpec,
TEventHandler,
>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + Clone + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Consumes the internal `BeaconChainBuilder`, attaching the resulting `BeaconChain` to self.
pub fn build_beacon_chain(mut self) -> Result<Self, String> {
let chain = self
.beacon_chain_builder
.ok_or_else(|| "build_beacon_chain requires a beacon_chain_builder")?
.event_handler(
self.event_handler
.ok_or_else(|| "build_beacon_chain requires an event handler")?,
)
.slot_clock(
self.slot_clock
.clone()
.ok_or_else(|| "build_beacon_chain requires a slot clock")?,
)
.empty_reduced_tree_fork_choice()
.map_err(|e| format!("Failed to init fork choice: {}", e))?
.build()
.map_err(|e| format!("Failed to build beacon chain: {}", e))?;
self.beacon_chain = Some(Arc::new(chain));
self.beacon_chain_builder = None;
self.event_handler = None;
Ok(self)
}
}
impl<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec>
ClientBuilder<
Witness<TStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, WebSocketSender<TEthSpec>>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
{
/// Specifies that the `BeaconChain` should publish events using the WebSocket server.
pub fn websocket_event_handler(mut self, config: WebSocketConfig) -> Result<Self, String> {
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "websocket_event_handler requires a runtime_context")?
.service_context("ws");
let (sender, exit_signal, listening_addr): (
WebSocketSender<TEthSpec>,
Option<_>,
Option<_>,
) = if config.enabled {
let (sender, exit, listening_addr) =
websocket_server::start_server(&config, &context.executor, &context.log)?;
(sender, Some(exit), Some(listening_addr))
} else {
(WebSocketSender::dummy(), None, None)
};
if let Some(signal) = exit_signal {
self.exit_signals.push(signal);
}
self.event_handler = Some(sender);
self.websocket_listen_addr = listening_addr;
Ok(self)
}
}
impl<TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
ClientBuilder<Witness<DiskStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>>
where
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<DiskStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Specifies that the `Client` should use a `DiskStore` database.
pub fn disk_store(mut self, path: &Path) -> Result<Self, String> {
let store = DiskStore::open(path)
.map_err(|e| format!("Unable to open database: {:?}", e))?;
self.store = Some(Arc::new(store));
Ok(self)
}
}
impl<TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
ClientBuilder<
Witness<MemoryStore, TSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>,
>
where
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<MemoryStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Specifies that the `Client` should use a `MemoryStore` database.
pub fn memory_store(mut self) -> Self {
let store = MemoryStore::open();
self.store = Some(Arc::new(store));
self
}
}
impl<TStore, TSlotClock, TLmdGhost, TEthSpec, TEventHandler>
ClientBuilder<
Witness<
TStore,
TSlotClock,
TLmdGhost,
CachingEth1Backend<TEthSpec, TStore>,
TEthSpec,
TEventHandler,
>,
>
where
TStore: Store + 'static,
TSlotClock: SlotClock + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Specifies that the `BeaconChain` should cache eth1 blocks/logs from a remote eth1 node
/// (e.g., Parity/Geth) and refer to that cache when collecting deposits or eth1 votes during
/// block production.
pub fn caching_eth1_backend(mut self, config: Eth1Config) -> Result<Self, String> {
let context = self
.runtime_context
.as_ref()
.ok_or_else(|| "caching_eth1_backend requires a runtime_context")?
.service_context("eth1_rpc");
let beacon_chain_builder = self
.beacon_chain_builder
.ok_or_else(|| "caching_eth1_backend requires a beacon_chain_builder")?;
let store = self
.store
.clone()
.ok_or_else(|| "caching_eth1_backend requires a store".to_string())?;
let backend = if let Some(eth1_service_from_genesis) = self.eth1_service {
eth1_service_from_genesis.update_config(config.clone())?;
CachingEth1Backend::from_service(eth1_service_from_genesis, store)
} else {
CachingEth1Backend::new(config, context.log, store)
};
self.eth1_service = None;
let exit = {
let (tx, rx) = exit_future::signal();
self.exit_signals.push(tx);
rx
};
// Starts the service that connects to an eth1 node and periodically updates caches.
context.executor.spawn(backend.start(exit));
self.beacon_chain_builder = Some(beacon_chain_builder.eth1_backend(Some(backend)));
Ok(self)
}
/// Do not use any eth1 backend. The client will not be able to produce beacon blocks.
pub fn no_eth1_backend(mut self) -> Result<Self, String> {
let beacon_chain_builder = self
.beacon_chain_builder
.ok_or_else(|| "no_eth1_backend requires a beacon_chain_builder")?;
self.beacon_chain_builder = Some(beacon_chain_builder.no_eth1_backend());
Ok(self)
}
/// Use an eth1 backend that can produce blocks but is not connected to an Eth1 node.
///
/// This backend will never produce deposits so it's impossible to add validators after
/// genesis. The `Eth1Data` votes will be deterministic junk data.
///
/// ## Notes
///
/// The client is given the `CachingEth1Backend` type, but the http backend is never started and the
/// caches are never used.
pub fn dummy_eth1_backend(mut self) -> Result<Self, String> {
let beacon_chain_builder = self
.beacon_chain_builder
.ok_or_else(|| "dummy_eth1_backend requires a beacon_chain_builder")?;
self.beacon_chain_builder = Some(beacon_chain_builder.dummy_eth1_backend()?);
Ok(self)
}
}
impl<TStore, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>
ClientBuilder<
Witness<TStore, SystemTimeSlotClock, TLmdGhost, TEth1Backend, TEthSpec, TEventHandler>,
>
where
TStore: Store + 'static,
TLmdGhost: LmdGhost<TStore, TEthSpec> + 'static,
TEth1Backend: Eth1ChainBackend<TEthSpec> + 'static,
TEthSpec: EthSpec + 'static,
TEventHandler: EventHandler<TEthSpec> + 'static,
{
/// Specifies that the slot clock should read the time from the computer's system clock.
pub fn system_time_slot_clock(mut self) -> Result<Self, String> {
let beacon_chain_builder = self
.beacon_chain_builder
.as_ref()
.ok_or_else(|| "system_time_slot_clock requires a beacon_chain_builder")?;
let genesis_time = beacon_chain_builder
.finalized_checkpoint
.as_ref()
.ok_or_else(|| "system_time_slot_clock requires an initialized beacon state")?
.beacon_state
.genesis_time;
let spec = self
.chain_spec
.clone()
.ok_or_else(|| "system_time_slot_clock requires a chain spec".to_string())?;
let slot_clock = SystemTimeSlotClock::new(
spec.genesis_slot,
Duration::from_secs(genesis_time),
Duration::from_millis(spec.milliseconds_per_slot),
);
self.slot_clock = Some(slot_clock);
Ok(self)
}
}
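
Taken together, the builder methods above are meant to be chained. The exact composition lives elsewhere in the PR; the following is only a rough, hypothetical sketch of the intended call order, assuming a `RuntimeContext`, `ClientConfig` and `Eth2Config` are available and blocking on the genesis future for simplicity:

```rust
use futures::Future;
use types::MinimalEthSpec;

// Hypothetical helper, not taken verbatim from this PR.
fn build_interop_client(
    context: RuntimeContext<MinimalEthSpec>,
    client_config: &ClientConfig,
    eth2_config: &Eth2Config,
) -> Result<(), String> {
    let _client = ClientBuilder::new(MinimalEthSpec)
        .runtime_context(context)
        .chain_spec(eth2_config.spec.clone())
        .memory_store()
        .beacon_chain_builder(
            ClientGenesis::Interop {
                validator_count: 16,
                genesis_time: 0, // placeholder; normally supplied via the CLI
            },
            client_config.eth1.clone(),
        )
        .wait()? // `beacon_chain_builder` returns a futures 0.1 future
        .dummy_eth1_backend()?
        .websocket_event_handler(client_config.websocket_server.clone())?
        .system_time_slot_clock()?
        .build_beacon_chain()?
        .libp2p_network(&client_config.network)?
        .http_server(client_config, eth2_config)?
        .peer_count_notifier()?
        .slot_notifier()?
        .build();
    Ok(())
}
```
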


@ -9,6 +9,32 @@ use std::sync::Mutex;
/// The number initial validators when starting the `Minimal`.
const TESTNET_SPEC_CONSTANTS: &str = "minimal";
+/// Defines how the client should initialize the `BeaconChain` and other components.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub enum ClientGenesis {
+/// Reads the genesis state and other persisted data from the `Store`.
+Resume,
+/// Creates a genesis state as per the 2019 Canada interop specifications.
+Interop {
+validator_count: usize,
+genesis_time: u64,
+},
+/// Connects to an eth1 node and waits until it can create the genesis state from the deposit
+/// contract.
+DepositContract,
+/// Loads the genesis state from a SSZ-encoded `BeaconState` file.
+SszFile { path: PathBuf },
+/// Connects to another Lighthouse instance and reads the genesis state and other data via the
+/// HTTP API.
+RemoteNode { server: String, port: Option<u16> },
+}
+impl Default for ClientGenesis {
+fn default() -> Self {
+Self::DepositContract
+}
+}
/// The core configuration of a Lighthouse beacon node.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
@ -17,74 +43,20 @@ pub struct Config {
db_name: String,
pub log_file: PathBuf,
pub spec_constants: String,
-/// Defines how we should initialize a BeaconChain instances.
-///
-/// This field is not serialized, there for it will not be written to (or loaded from) config
-/// files. It can only be configured via the CLI.
-#[serde(skip)]
-pub beacon_chain_start_method: BeaconChainStartMethod,
-pub eth1_backend_method: Eth1BackendMethod,
-pub network: network::NetworkConfig,
-pub rpc: rpc::RPCConfig,
-pub rest_api: rest_api::ApiConfig,
-pub websocket_server: websocket_server::Config,
-}
-/// Defines how the client should initialize a BeaconChain.
-///
-/// In general, there are two methods:
-///  - resuming a new chain, or
-///  - initializing a new one.
-#[derive(Debug, Clone)]
-pub enum BeaconChainStartMethod {
-/// Resume from an existing BeaconChain, loaded from the existing local database.
-Resume,
-/// Resume from an existing BeaconChain, loaded from the existing local database.
-Mainnet,
-/// Create a new beacon chain that can connect to mainnet.
-///
-/// Set the genesis time to be the start of the previous 30-minute window.
-RecentGenesis {
-validator_count: usize,
-minutes: u64,
-},
-/// Create a new beacon chain with `genesis_time` and `validator_count` validators, all with well-known
-/// secret keys.
-Generated {
-validator_count: usize,
-genesis_time: u64,
-},
-/// Create a new beacon chain by loading a YAML-encoded genesis state from a file.
-Yaml { file: PathBuf },
-/// Create a new beacon chain by loading a SSZ-encoded genesis state from a file.
-Ssz { file: PathBuf },
-/// Create a new beacon chain by loading a JSON-encoded genesis state from a file.
-Json { file: PathBuf },
-/// Create a new beacon chain by using a HTTP server (running our REST-API) to load genesis and
-/// finalized states and blocks.
-HttpBootstrap { server: String, port: Option<u16> },
-}
-impl Default for BeaconChainStartMethod {
-fn default() -> Self {
-BeaconChainStartMethod::Resume
-}
-}
-/// Defines which Eth1 backend the client should use.
-#[derive(Debug, Clone, Serialize, Deserialize)]
-#[serde(tag = "type")]
-pub enum Eth1BackendMethod {
-/// Use the mocked eth1 backend used in interop testing
-Interop,
-/// Use a web3 connection to a running Eth1 node.
-Web3 { server: String },
-}
-impl Default for Eth1BackendMethod {
-fn default() -> Self {
-Eth1BackendMethod::Interop
-}
-}
+/// If true, the node will use co-ordinated junk for eth1 values.
+///
+/// This is the method used for the 2019 client interop in Canada.
+pub dummy_eth1_backend: bool,
+pub sync_eth1_chain: bool,
+#[serde(skip)]
+/// The `genesis` field is not serialized or deserialized by `serde` to ensure it is defined
+/// via the CLI at runtime, instead of from a configuration file saved to disk.
+pub genesis: ClientGenesis,
+pub network: network::NetworkConfig,
+pub rpc: rpc::Config,
+pub rest_api: rest_api::Config,
+pub websocket_server: websocket_server::Config,
+pub eth1: eth1::Config,
+}
impl Default for Config {
@ -94,13 +66,15 @@ impl Default for Config {
log_file: PathBuf::from(""),
db_type: "disk".to_string(),
db_name: "chain_db".to_string(),
+genesis: <_>::default(),
network: NetworkConfig::new(),
rpc: <_>::default(),
rest_api: <_>::default(),
websocket_server: <_>::default(),
spec_constants: TESTNET_SPEC_CONSTANTS.into(),
-beacon_chain_start_method: <_>::default(),
-eth1_backend_method: <_>::default(),
+dummy_eth1_backend: false,
+sync_eth1_chain: false,
+eth1: <_>::default(),
}
}
}
@ -183,3 +157,16 @@ impl Config {
Ok(())
}
}
+#[cfg(test)]
+mod tests {
+use super::*;
+use toml;
+#[test]
+fn serde() {
+let config = Config::default();
+let serialized = toml::to_string(&config).expect("should serde encode default config");
+toml::from_str::<Config>(&serialized).expect("should serde decode default config");
+}
+}
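
Because `genesis` is marked `#[serde(skip)]`, it can never round-trip through a saved TOML file; a CLI layer has to set it explicitly after loading the rest of the config. A small illustrative sketch (hypothetical CLI-handling code, not from this PR):

```rust
// Sketch: load (or default) the config, then inject the genesis method
// chosen on the command line.
let mut config = ClientConfig::default();
config.genesis = ClientGenesis::Interop {
    validator_count: 16,
    genesis_time: 0, // placeholder value
};
```
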


@ -2,327 +2,58 @@ extern crate slog;
mod config;
+pub mod builder;
pub mod error;
-pub mod notifier;
-use beacon_chain::{
-lmd_ghost::ThreadSafeReducedTree, slot_clock::SystemTimeSlotClock, store::Store,
-test_utils::generate_deterministic_keypairs, BeaconChain, BeaconChainBuilder,
-};
+use beacon_chain::BeaconChain;
use exit_future::Signal;
-use futures::{future::Future, Stream};
use network::Service as NetworkService;
-use rest_api::NetworkInfo;
-use slog::{crit, debug, error, info, o};
-use slot_clock::SlotClock;
-use std::marker::PhantomData;
+use std::net::SocketAddr;
use std::sync::Arc;
-use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};
-use tokio::runtime::TaskExecutor;
-use tokio::timer::Interval;
-use types::EthSpec;
-use websocket_server::WebSocketSender;
-pub use beacon_chain::{BeaconChainTypes, Eth1ChainBackend, InteropEth1ChainBackend};
-pub use config::{BeaconChainStartMethod, Config as ClientConfig, Eth1BackendMethod};
+pub use beacon_chain::{BeaconChainTypes, Eth1ChainBackend};
+pub use builder::ClientBuilder;
+pub use config::{ClientGenesis, Config as ClientConfig};
pub use eth2_config::Eth2Config;
-#[derive(Clone)]
-pub struct RuntimeBeaconChainTypes<S: Store, E: EthSpec> {
-_phantom_s: PhantomData<S>,
-_phantom_e: PhantomData<E>,
-}
-impl<S, E> BeaconChainTypes for RuntimeBeaconChainTypes<S, E>
-where
-S: Store + 'static,
-E: EthSpec,
-{
-type Store = S;
-type SlotClock = SystemTimeSlotClock;
-type LmdGhost = ThreadSafeReducedTree<S, E>;
-type Eth1Chain = InteropEth1ChainBackend<E>;
-type EthSpec = E;
-type EventHandler = WebSocketSender<E>;
-}
-/// Main beacon node client service. This provides the connection and initialisation of the clients
-/// sub-services in multiple threads.
-pub struct Client<S, E>
-where
-S: Store + Clone + 'static,
-E: EthSpec,
-{
-/// Configuration for the lighthouse client.
-_client_config: ClientConfig,
-/// The beacon chain for the running client.
-beacon_chain: Arc<BeaconChain<RuntimeBeaconChainTypes<S, E>>>,
-/// Reference to the network service.
-pub network: Arc<NetworkService<RuntimeBeaconChainTypes<S, E>>>,
-/// Signal to terminate the RPC server.
-pub rpc_exit_signal: Option<Signal>,
-/// Signal to terminate the slot timer.
-pub slot_timer_exit_signal: Option<Signal>,
-/// Signal to terminate the API
-pub api_exit_signal: Option<Signal>,
-/// Signal to terminate the websocket server
-pub websocket_exit_signal: Option<Signal>,
-/// The clients logger.
-log: slog::Logger,
-}
-impl<S, E> Client<S, E>
-where
-S: Store + Clone + 'static,
-E: EthSpec,
-{
-/// Generate an instance of the client. Spawn and link all internal sub-processes.
-pub fn new(
-client_config: ClientConfig,
-eth2_config: Eth2Config,
-store: S,
-log: slog::Logger,
-executor: &TaskExecutor,
-) -> error::Result<Self> {
-let store = Arc::new(store);
-let milliseconds_per_slot = eth2_config.spec.milliseconds_per_slot;
-let spec = &eth2_config.spec.clone();
-let beacon_chain_builder = match &client_config.beacon_chain_start_method {
-BeaconChainStartMethod::Resume => {
-info!(
-log,
-"Starting beacon chain";
-"method" => "resume"
-);
-BeaconChainBuilder::from_store(spec.clone(), log.clone())
-}
-BeaconChainStartMethod::Mainnet => {
-crit!(log, "No mainnet beacon chain startup specification.");
-return Err("Mainnet launch is not yet announced.".into());
-}
-BeaconChainStartMethod::RecentGenesis {
-validator_count,
-minutes,
-} => {
-info!(
-log,
-"Starting beacon chain";
-"validator_count" => validator_count,
-"minutes" => minutes,
-"method" => "recent"
-);
-BeaconChainBuilder::recent_genesis(
-&generate_deterministic_keypairs(*validator_count),
-*minutes,
-spec.clone(),
-log.clone(),
-)?
-}
-BeaconChainStartMethod::Generated {
-validator_count,
-genesis_time,
-} => {
-info!(
-log,
-"Starting beacon chain";
-"validator_count" => validator_count,
-"genesis_time" => genesis_time,
-"method" => "quick"
-);
-BeaconChainBuilder::quick_start(
-*genesis_time,
-&generate_deterministic_keypairs(*validator_count),
-spec.clone(),
-log.clone(),
-)?
-}
-BeaconChainStartMethod::Yaml { file } => {
-info!(
-log,
-"Starting beacon chain";
-"file" => format!("{:?}", file),
-"method" => "yaml"
-);
-BeaconChainBuilder::yaml_state(file, spec.clone(), log.clone())?
-}
-BeaconChainStartMethod::Ssz { file } => {
-info!(
-log,
-"Starting beacon chain";
-"file" => format!("{:?}", file),
-"method" => "ssz"
-);
-BeaconChainBuilder::ssz_state(file, spec.clone(), log.clone())?
-}
-BeaconChainStartMethod::Json { file } => {
-info!(
-log,
-"Starting beacon chain";
-"file" => format!("{:?}", file),
-"method" => "json"
-);
-BeaconChainBuilder::json_state(file, spec.clone(), log.clone())?
-}
-BeaconChainStartMethod::HttpBootstrap { server, port } => {
-info!(
-log,
-"Starting beacon chain";
-"port" => port,
-"server" => server,
-"method" => "bootstrap"
-);
-BeaconChainBuilder::http_bootstrap(server, spec.clone(), log.clone())?
-}
-};
-let eth1_backend =
-InteropEth1ChainBackend::new(String::new()).map_err(|e| format!("{:?}", e))?;
-// Start the websocket server.
-let (websocket_sender, websocket_exit_signal): (WebSocketSender<E>, Option<_>) =
-if client_config.websocket_server.enabled {
-let (sender, exit) = websocket_server::start_server(
-&client_config.websocket_server,
-executor,
-&log,
-)?;
-(sender, Some(exit))
-} else {
-(WebSocketSender::dummy(), None)
-};
-let beacon_chain: Arc<BeaconChain<RuntimeBeaconChainTypes<S, E>>> = Arc::new(
-beacon_chain_builder
-.build(store, eth1_backend, websocket_sender)
-.map_err(error::Error::from)?,
-);
-let since_epoch = SystemTime::now()
-.duration_since(UNIX_EPOCH)
-.map_err(|e| format!("Unable to read system time: {}", e))?;
-let since_genesis = Duration::from_secs(beacon_chain.head().beacon_state.genesis_time);
-if since_genesis > since_epoch {
-info!(
-log,
-"Starting node prior to genesis";
-"now" => since_epoch.as_secs(),
-"genesis_seconds" => since_genesis.as_secs(),
-);
-}
-let network_config = &client_config.network;
-let (network, network_send) =
-NetworkService::new(beacon_chain.clone(), network_config, executor, log.clone())?;
-// spawn the RPC server
-let rpc_exit_signal = if client_config.rpc.enabled {
-Some(rpc::start_server(
-&client_config.rpc,
-executor,
-network_send.clone(),
-beacon_chain.clone(),
-&log,
-))
-} else {
-None
-};
-// Start the `rest_api` service
-let api_exit_signal = if client_config.rest_api.enabled {
-let network_info = NetworkInfo {
-network_service: network.clone(),
-network_chan: network_send.clone(),
-};
-match rest_api::start_server(
-&client_config.rest_api,
-executor,
-beacon_chain.clone(),
-network_info,
-client_config.db_path().expect("unable to read datadir"),
-eth2_config.clone(),
-&log,
-) {
-Ok(s) => Some(s),
-Err(e) => {
-error!(log, "API service failed to start."; "error" => format!("{:?}",e));
-None
-}
-}
-} else {
-None
-};
-let (slot_timer_exit_signal, exit) = exit_future::signal();
-if let Some(duration_to_next_slot) = beacon_chain.slot_clock.duration_to_next_slot() {
-// set up the validator work interval - start at next slot and proceed every slot
-let interval = {
-// Set the interval to start at the next slot, and every slot after
-let slot_duration = Duration::from_millis(milliseconds_per_slot);
-//TODO: Handle checked add correctly
-Interval::new(Instant::now() + duration_to_next_slot, slot_duration)
-};
-let chain = beacon_chain.clone();
-let log = log.new(o!("Service" => "SlotTimer"));
-executor.spawn(
-exit.until(
-interval
-.for_each(move |_| {
-log_new_slot(&chain, &log);
-Ok(())
-})
-.map_err(|_| ()),
-)
-.map(|_| ()),
-);
-}
-Ok(Client {
-_client_config: client_config,
-beacon_chain,
-rpc_exit_signal,
-slot_timer_exit_signal: Some(slot_timer_exit_signal),
-api_exit_signal,
-websocket_exit_signal,
-log,
-network,
-})
-}
-}
+/// The core "beacon node" client.
+///
+/// Holds references to running services, cleanly shutting them down when dropped.
+pub struct Client<T: BeaconChainTypes> {
+beacon_chain: Option<Arc<BeaconChain<T>>>,
+libp2p_network: Option<Arc<NetworkService<T>>>,
+http_listen_addr: Option<SocketAddr>,
+websocket_listen_addr: Option<SocketAddr>,
+/// Exit signals will "fire" when dropped, causing each service to exit gracefully.
+_exit_signals: Vec<Signal>,
+}
+impl<T: BeaconChainTypes> Client<T> {
+/// Returns an `Arc` reference to the client's `BeaconChain`, if it was started.
+pub fn beacon_chain(&self) -> Option<Arc<BeaconChain<T>>> {
+self.beacon_chain.clone()
+}
+/// Returns the address of the client's HTTP API server, if it was started.
+pub fn http_listen_addr(&self) -> Option<SocketAddr> {
+self.http_listen_addr
+}
+/// Returns the address of the client's WebSocket API server, if it was started.
+pub fn websocket_listen_addr(&self) -> Option<SocketAddr> {
+self.websocket_listen_addr
+}
+/// Returns the port of the client's libp2p stack, if it was started.
+pub fn libp2p_listen_port(&self) -> Option<u16> {
+self.libp2p_network.as_ref().map(|n| n.listen_port())
+}
+}
-impl<S: Store + Clone, E: EthSpec> Drop for Client<S, E> {
+impl<T: BeaconChainTypes> Drop for Client<T> {
fn drop(&mut self) {
-// Save the beacon chain to it's store before dropping.
-let _result = self.beacon_chain.persist();
+if let Some(beacon_chain) = &self.beacon_chain {
+let _result = beacon_chain.persist();
+}
}
}
-fn log_new_slot<T: BeaconChainTypes>(chain: &Arc<BeaconChain<T>>, log: &slog::Logger) {
-let best_slot = chain.head().beacon_block.slot;
-let latest_block_root = chain.head().beacon_block_root;
-if let Ok(current_slot) = chain.slot() {
-info!(
-log,
-"Slot start";
-"best_slot" => best_slot,
-"slot" => current_slot,
-);
-debug!(
-log,
-"Slot info";
-"skip_slots" => current_slot.saturating_sub(best_slot),
-"best_block_root" => format!("{}", latest_block_root),
-"slot" => current_slot,
-);
-} else {
-error!(
-log,
-"Beacon chain running whilst slot clock is unavailable."
-);
-};
-}


@ -1,58 +0,0 @@
use crate::Client;
use exit_future::Exit;
use futures::{Future, Stream};
use slog::{debug, o, warn};
use std::time::{Duration, Instant};
use store::Store;
use tokio::runtime::TaskExecutor;
use tokio::timer::Interval;
use types::EthSpec;
/// The interval between heartbeat events.
pub const HEARTBEAT_INTERVAL_SECONDS: u64 = 15;
/// Create a warning log whenever the peer count is at or below this value.
pub const WARN_PEER_COUNT: usize = 1;
/// Spawns a thread that can be used to run code periodically, on `HEARTBEAT_INTERVAL_SECONDS`
/// durations.
///
/// Presently unused, but remains for future use.
pub fn run<S, E>(client: &Client<S, E>, executor: TaskExecutor, exit: Exit)
where
S: Store + Clone + 'static,
E: EthSpec,
{
// notification heartbeat
let interval = Interval::new(
Instant::now(),
Duration::from_secs(HEARTBEAT_INTERVAL_SECONDS),
);
let log = client.log.new(o!("Service" => "Notifier"));
let libp2p = client.network.libp2p_service();
let heartbeat = move |_| {
// Number of libp2p (not discv5) peers connected.
//
// Panics if libp2p is poisoned.
let connected_peer_count = libp2p.lock().swarm.connected_peers();
debug!(log, "Connected peer status"; "peer_count" => connected_peer_count);
if connected_peer_count <= WARN_PEER_COUNT {
warn!(log, "Low peer count"; "peer_count" => connected_peer_count);
}
Ok(())
};
// map error and spawn
let err_log = client.log.clone();
let heartbeat_interval = interval
.map_err(move |e| debug!(err_log, "Timer error {}", e))
.for_each(heartbeat);
executor.spawn(exit.until(heartbeat_interval).map(|_| ()));
}


@ -0,0 +1,29 @@
[package]
name = "eth1"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dev-dependencies]
eth1_test_rig = { path = "../../tests/eth1_test_rig" }
environment = { path = "../../lighthouse/environment" }
toml = "^0.5"
web3 = "0.8.0"
[dependencies]
reqwest = "0.9"
futures = "0.1.25"
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
hex = "0.4"
types = { path = "../../eth2/types"}
merkle_proof = { path = "../../eth2/utils/merkle_proof"}
eth2_ssz = { path = "../../eth2/utils/ssz"}
tree_hash = { path = "../../eth2/utils/tree_hash"}
eth2_hashing = { path = "../../eth2/utils/eth2_hashing"}
parking_lot = "0.7"
slog = "^2.2.3"
tokio = "0.1.17"
state_processing = { path = "../../eth2/state_processing" }
exit-future = "0.1.4"
libflate = "0.1"


@ -0,0 +1,271 @@
use std::ops::RangeInclusive;
use types::{Eth1Data, Hash256};
#[derive(Debug, PartialEq, Clone)]
pub enum Error {
/// The timestamp of each block must be equal to or later than that of the block prior to it.
InconsistentTimestamp { parent: u64, child: u64 },
/// Some `Eth1Block` was provided with the same block number but different data. The source
/// of eth1 data is inconsistent.
Conflicting(u64),
/// The given block was not one block number higher than the highest known block number.
NonConsecutive { given: u64, expected: u64 },
/// Some invariant was violated, there is a likely bug in the code.
Internal(String),
}
/// A block of the eth1 chain.
///
/// Contains all information required to add a `BlockCache` entry.
#[derive(Debug, PartialEq, Clone, Eq, Hash)]
pub struct Eth1Block {
pub hash: Hash256,
pub timestamp: u64,
pub number: u64,
pub deposit_root: Option<Hash256>,
pub deposit_count: Option<u64>,
}
impl Eth1Block {
/// Returns an `Eth1Data`, but only if both the deposit root and deposit count are known.
pub fn eth1_data(self) -> Option<Eth1Data> {
Some(Eth1Data {
deposit_root: self.deposit_root?,
deposit_count: self.deposit_count?,
block_hash: self.hash,
})
}
}
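
For illustration, a hypothetical test (not in the PR) showing the `Option` chaining above: a block whose contract values have not yet been fetched cannot produce an `Eth1Data` vote.

```rust
#[test]
fn eth1_data_requires_both_contract_values() {
    let block = Eth1Block {
        hash: Hash256::zero(),
        timestamp: 42,
        number: 0,
        deposit_root: None, // not yet fetched from the deposit contract
        deposit_count: Some(0),
    };
    // Without a deposit root, there is nothing to vote on.
    assert!(block.eth1_data().is_none());
}
```
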
/// Stores block and deposit contract information and provides queries based upon the block
/// timestamp.
#[derive(Debug, PartialEq, Clone, Default)]
pub struct BlockCache {
blocks: Vec<Eth1Block>,
}
impl BlockCache {
/// Returns the number of blocks stored in `self`.
pub fn len(&self) -> usize {
self.blocks.len()
}
/// True if the cache does not store any blocks.
pub fn is_empty(&self) -> bool {
self.blocks.is_empty()
}
/// Returns the highest block number stored.
pub fn highest_block_number(&self) -> Option<u64> {
self.blocks.last().map(|block| block.number)
}
/// Returns an iterator over all blocks.
///
/// Blocks are guaranteed to be returned with:
///
/// - Monotonically increasing block numbers.
/// - Non-uniformly increasing block timestamps.
pub fn iter(&self) -> impl DoubleEndedIterator<Item = &Eth1Block> + Clone {
self.blocks.iter()
}
/// Shortens the cache, keeping the latest (by block number) `len` blocks while dropping the
/// rest.
///
/// If `len` is greater than the vector's current length, this has no effect.
pub fn truncate(&mut self, len: usize) {
if len < self.blocks.len() {
self.blocks = self.blocks.split_off(self.blocks.len() - len);
}
}
/// Returns the range of block numbers stored in the block cache. All blocks in this range can
/// be accessed.
fn available_block_numbers(&self) -> Option<RangeInclusive<u64>> {
Some(self.blocks.first()?.number..=self.blocks.last()?.number)
}
/// Returns a block with the corresponding number, if any.
pub fn block_by_number(&self, block_number: u64) -> Option<&Eth1Block> {
self.blocks.get(
self.blocks
.as_slice()
.binary_search_by(|block| block.number.cmp(&block_number))
.ok()?,
)
}
/// Insert an `Eth1Block` into `self`, allowing future queries.
///
/// Allows inserting either:
///
/// - The root block (i.e., any block if there are no existing blocks), or,
/// - An immediate child of the most recent (highest block number) block.
///
/// ## Errors
///
/// - If the cache is not empty and `block.number - 1` is not already in `self`.
/// - If `block.number` is already in `self`, but the given block is not identical to the
/// known one.
/// - If `block.timestamp` is prior to the parent's.
pub fn insert_root_or_child(&mut self, block: Eth1Block) -> Result<(), Error> {
let expected_block_number = self
.highest_block_number()
.map(|n| n + 1)
.unwrap_or_else(|| block.number);
// If there are already some cached blocks, check to see if the new block number is one of
// them.
//
// If the block is already known, check to see if the given block is identical to it. If
// not, raise a conflict error. This is most likely caused by some fork on the eth1
// chain.
if let Some(local) = self.available_block_numbers() {
if local.contains(&block.number) {
let known_block = self.block_by_number(block.number).ok_or_else(|| {
Error::Internal("An expected block was not present".to_string())
})?;
if known_block == &block {
return Ok(());
} else {
return Err(Error::Conflicting(block.number));
};
}
}
// Only permit blocks when it's either:
//
// - The first block inserted.
// - Exactly one block number higher than the highest known block number.
if block.number != expected_block_number {
return Err(Error::NonConsecutive {
given: block.number,
expected: expected_block_number,
});
}
// If the block is not the first block inserted, ensure that its timestamp is not lower
// than its parent's.
if let Some(previous_block) = self.blocks.last() {
if previous_block.timestamp > block.timestamp {
return Err(Error::InconsistentTimestamp {
parent: previous_block.timestamp,
child: block.timestamp,
});
}
}
self.blocks.push(block);
Ok(())
}
}
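
As a usage sketch, a hypothetical test (not in the PR) exercising the API above: blocks must be inserted consecutively, and `truncate` keeps only the highest block numbers.

```rust
#[test]
fn insert_then_truncate_sketch() {
    let mut cache = BlockCache::default();
    for n in 0..4u64 {
        cache
            .insert_root_or_child(Eth1Block {
                hash: Hash256::from_low_u64_be(n),
                timestamp: n * 10,
                number: n,
                deposit_root: None,
                deposit_count: None,
            })
            .expect("consecutive blocks should insert");
    }
    assert_eq!(cache.highest_block_number(), Some(3));
    // Keep only the two highest blocks; earlier ones are dropped.
    cache.truncate(2);
    assert!(cache.block_by_number(1).is_none());
    assert!(cache.block_by_number(3).is_some());
}
```
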
#[cfg(test)]
mod tests {
use super::*;
fn get_block(i: u64, interval_secs: u64) -> Eth1Block {
Eth1Block {
hash: Hash256::from_low_u64_be(i),
timestamp: i * interval_secs,
number: i,
deposit_root: Some(Hash256::from_low_u64_be(i << 32)),
deposit_count: Some(i),
}
}
fn get_blocks(n: usize, interval_secs: u64) -> Vec<Eth1Block> {
(0..n as u64)
.into_iter()
.map(|i| get_block(i, interval_secs))
.collect()
}
fn insert(cache: &mut BlockCache, s: Eth1Block) -> Result<(), Error> {
cache.insert_root_or_child(s)
}
#[test]
fn truncate() {
let n = 16;
let blocks = get_blocks(n, 10);
let mut cache = BlockCache::default();
for block in blocks {
insert(&mut cache, block.clone()).expect("should add consecutive blocks");
}
for len in vec![0, 1, 2, 3, 4, 8, 15, 16] {
let mut cache = cache.clone();
cache.truncate(len);
assert_eq!(
cache.blocks.len(),
len,
"should truncate to length: {}",
len
);
}
let mut cache_2 = cache.clone();
cache_2.truncate(17);
assert_eq!(
cache_2.blocks.len(),
n,
"truncate to larger than n should be a no-op"
);
}
#[test]
fn inserts() {
let n = 16;
let blocks = get_blocks(n, 10);
let mut cache = BlockCache::default();
for block in blocks {
insert(&mut cache, block.clone()).expect("should add consecutive blocks");
}
// No error for re-adding a block identical to one that exists.
assert!(insert(&mut cache, get_block(n as u64 - 1, 10)).is_ok());
// Error for re-adding a block that is different to the one that exists.
assert!(insert(&mut cache, get_block(n as u64 - 1, 11)).is_err());
// Error for adding non-consecutive blocks.
assert!(insert(&mut cache, get_block(n as u64 + 1, 10)).is_err());
assert!(insert(&mut cache, get_block(n as u64 + 2, 10)).is_err());
// Error for adding timestamp prior to previous.
assert!(insert(&mut cache, get_block(n as u64, 1)).is_err());
// Double check to make sure previous test was only affected by timestamp.
assert!(insert(&mut cache, get_block(n as u64, 10)).is_ok());
}
#[test]
fn duplicate_timestamp() {
let mut blocks = get_blocks(7, 10);
blocks[0].timestamp = 0;
blocks[1].timestamp = 10;
blocks[2].timestamp = 10;
blocks[3].timestamp = 20;
blocks[4].timestamp = 30;
blocks[5].timestamp = 40;
blocks[6].timestamp = 40;
let mut cache = BlockCache::default();
for block in &blocks {
insert(&mut cache, block.clone())
.expect("should add consecutive blocks with duplicate timestamps");
}
assert_eq!(cache.blocks, blocks, "should have added all blocks");
}
}


@ -0,0 +1,371 @@
use crate::DepositLog;
use eth2_hashing::hash;
use std::ops::Range;
use tree_hash::TreeHash;
use types::{Deposit, Hash256};
#[derive(Debug, PartialEq, Clone)]
pub enum Error {
/// A deposit log was added when a prior deposit was not already in the cache.
///
/// Logs have to be added one-by-one, with consecutive, monotonically-increasing indices.
NonConsecutive { log_index: u64, expected: usize },
/// The eth1 event log data was unable to be parsed.
LogParseError(String),
/// There are insufficient deposits in the cache to fulfil the request.
InsufficientDeposits {
known_deposits: usize,
requested: u64,
},
/// A log with the given index is already present in the cache and it does not match the one
/// provided.
DuplicateDistinctLog(u64),
/// The deposit count must always be large enough to account for the requested deposit range.
///
/// E.g., you cannot request deposit 10 when the deposit count is 9.
DepositCountInvalid { deposit_count: u64, range_end: u64 },
/// An unexpected condition was encountered.
InternalError(String),
}
/// Emulates the eth1 deposit contract merkle tree.
pub struct DepositDataTree {
tree: merkle_proof::MerkleTree,
mix_in_length: usize,
depth: usize,
}
impl DepositDataTree {
/// Create a new Merkle tree from a list of leaves (`DepositData::tree_hash_root`) and a fixed depth.
pub fn create(leaves: &[Hash256], mix_in_length: usize, depth: usize) -> Self {
Self {
tree: merkle_proof::MerkleTree::create(leaves, depth),
mix_in_length,
depth,
}
}
/// Returns 32 bytes representing the "mix in length" for the merkle root of this tree.
fn length_bytes(&self) -> Vec<u8> {
int_to_bytes32(self.mix_in_length)
}
/// Retrieve the root hash of this Merkle tree with the length mixed in.
pub fn root(&self) -> Hash256 {
let mut preimage = [0; 64];
preimage[0..32].copy_from_slice(&self.tree.hash()[..]);
preimage[32..64].copy_from_slice(&self.length_bytes());
Hash256::from_slice(&hash(&preimage))
}
/// Return the leaf at `index` and a Merkle proof of its inclusion.
///
/// The Merkle proof is in "bottom-up" order, starting with a leaf node
/// and moving up the tree. Its length will be exactly equal to `depth + 1`.
pub fn generate_proof(&self, index: usize) -> (Hash256, Vec<Hash256>) {
let (root, mut proof) = self.tree.generate_proof(index, self.depth);
proof.push(Hash256::from_slice(&self.length_bytes()));
(root, proof)
}
}
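
The "mix in length" step means the root commits to the leaf count as well as the leaves, so a proof is only valid against a specific deposit count. A hypothetical test (not in the PR) sketching that consequence:

```rust
#[test]
fn root_commits_to_length() {
    // Identical leaves, different mixed-in lengths: the roots must differ.
    let leaves = vec![Hash256::zero(); 2];
    let tree_a = DepositDataTree::create(&leaves, 2, 32);
    let tree_b = DepositDataTree::create(&leaves, 1, 32);
    assert_ne!(tree_a.root(), tree_b.root());
}
```
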
/// Mirrors the merkle tree of deposits in the eth1 deposit contract.
///
/// Provides `Deposit` objects with merkle proofs included.
#[derive(Default)]
pub struct DepositCache {
logs: Vec<DepositLog>,
roots: Vec<Hash256>,
}
impl DepositCache {
/// Returns the number of deposits available in the cache.
pub fn len(&self) -> usize {
self.logs.len()
}
/// True if the cache does not store any logs.
pub fn is_empty(&self) -> bool {
self.logs.is_empty()
}
/// Returns the block number for the most recent deposit in the cache.
pub fn latest_block_number(&self) -> Option<u64> {
self.logs.last().map(|log| log.block_number)
}
/// Returns an iterator over all the logs in `self`.
pub fn iter(&self) -> impl Iterator<Item = &DepositLog> {
self.logs.iter()
}
/// Returns the i'th deposit log.
pub fn get(&self, i: usize) -> Option<&DepositLog> {
self.logs.get(i)
}
/// Adds `log` to self.
///
/// This function enforces that `logs` are imported one-by-one with no gaps between
/// `log.index`, starting at `log.index == 0`.
///
/// ## Errors
///
/// - If a log with index `log.index - 1` is not already present in `self` (ignored when empty).
/// - If a log with `log.index` is already known, but the given `log` is distinct from it.
pub fn insert_log(&mut self, log: DepositLog) -> Result<(), Error> {
if log.index == self.logs.len() as u64 {
self.roots
.push(Hash256::from_slice(&log.deposit_data.tree_hash_root()));
self.logs.push(log);
Ok(())
} else if log.index < self.logs.len() as u64 {
if self.logs[log.index as usize] == log {
Ok(())
} else {
Err(Error::DuplicateDistinctLog(log.index))
}
} else {
Err(Error::NonConsecutive {
log_index: log.index,
expected: self.logs.len(),
})
}
}
/// Returns a list of `Deposit` objects, within the given deposit index `range`.
///
/// The `deposit_count` is used to generate the proofs for the `Deposits`. For example, if we
/// have 100 deposits, but the eth2 chain only acknowledges 50 of them, we must produce our
/// proofs with respect to a tree size of 50.
///
/// ## Errors
///
/// - If `deposit_count` is less than `range.end`.
/// - If there are not sufficient deposits in the cache to generate the proofs.
pub fn get_deposits(
&self,
range: Range<u64>,
deposit_count: u64,
tree_depth: usize,
) -> Result<(Hash256, Vec<Deposit>), Error> {
if deposit_count < range.end {
// It's invalid to ask for more deposits than should exist.
Err(Error::DepositCountInvalid {
deposit_count,
range_end: range.end,
})
} else if range.end > self.logs.len() as u64 {
// The range of requested deposits exceeds the deposits stored locally.
Err(Error::InsufficientDeposits {
requested: range.end,
known_deposits: self.logs.len(),
})
} else if deposit_count > self.roots.len() as u64 {
// There are not `deposit_count` known deposit roots, so we can't build the merkle tree
// to prove into.
Err(Error::InsufficientDeposits {
requested: deposit_count,
known_deposits: self.logs.len(),
})
} else {
let roots = self
.roots
.get(0..deposit_count as usize)
.ok_or_else(|| Error::InternalError("Unable to get known root".into()))?;
// Note: there is likely a more optimal solution than recreating the `DepositDataTree`
// each time this function is called.
//
// Perhaps a base merkle tree could be maintained that contains all deposits up to the
// last finalized eth1 deposit count. Then, that tree could be cloned and extended for
// each of these calls.
let tree = DepositDataTree::create(roots, deposit_count as usize, tree_depth);
let deposits = self
.logs
.get(range.start as usize..range.end as usize)
.ok_or_else(|| Error::InternalError("Unable to get known log".into()))?
.iter()
.map(|deposit_log| {
let (_leaf, proof) = tree.generate_proof(deposit_log.index as usize);
Deposit {
proof: proof.into(),
data: deposit_log.deposit_data.clone(),
}
})
.collect();
Ok((tree.root(), deposits))
}
}
}
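
A brief usage sketch of the proof-sizing rule above (assuming `cache` is a `DepositCache` already populated with at least eight logs, e.g., via `insert_log`): the returned root is the root of the `deposit_count`-sized tree, not of everything stored locally.

```rust
// Prove deposits 0 and 1 against a tree of exactly 8 deposits. The root
// matches what the eth2 chain acknowledges for a deposit count of 8, even
// if the cache holds more logs.
let (root, deposits) = cache
    .get_deposits(0..2, 8, 32)
    .expect("cache should hold at least 8 deposits");
assert_eq!(deposits.len(), 2);
assert_eq!(deposits[0].proof.len(), 33); // tree depth (32) + 1 length node
let _ = root;
```
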
/// Returns `int` as little-endian bytes with a length of 32.
fn int_to_bytes32(int: usize) -> Vec<u8> {
let mut vec = int.to_le_bytes().to_vec();
vec.resize(32, 0);
vec
}
#[cfg(test)]
pub mod tests {
use super::*;
use crate::deposit_log::tests::EXAMPLE_LOG;
use crate::http::Log;
pub const TREE_DEPTH: usize = 32;
fn example_log() -> DepositLog {
let log = Log {
block_number: 42,
data: EXAMPLE_LOG.to_vec(),
};
DepositLog::from_log(&log).expect("should decode log")
}
#[test]
fn insert_log_valid() {
let mut tree = DepositCache::default();
for i in 0..16 {
let mut log = example_log();
log.index = i;
tree.insert_log(log).expect("should add consecutive logs")
}
}
#[test]
fn insert_log_invalid() {
let mut tree = DepositCache::default();
for i in 0..4 {
let mut log = example_log();
log.index = i;
tree.insert_log(log).expect("should add consecutive logs")
}
// Add duplicate, when given is the same as the one known.
let mut log = example_log();
log.index = 3;
assert!(tree.insert_log(log).is_ok());
// Add duplicate, when given is different to the one known.
let mut log = example_log();
log.index = 3;
log.block_number = 99;
assert!(tree.insert_log(log).is_err());
// Skip inserting a log.
let mut log = example_log();
log.index = 5;
assert!(tree.insert_log(log).is_err());
}
#[test]
fn get_deposit_valid() {
let n = 1_024;
let mut tree = DepositCache::default();
for i in 0..n {
let mut log = example_log();
log.index = i;
log.block_number = i;
log.deposit_data.withdrawal_credentials = Hash256::from_low_u64_be(i);
tree.insert_log(log).expect("should add consecutive logs")
}
// Get 0 deposits, with max deposit count.
let (_, deposits) = tree
.get_deposits(0..0, n, TREE_DEPTH)
.expect("should get the full tree");
assert_eq!(deposits.len(), 0, "should return no deposits");
// Get 0 deposits, with 0 deposit count.
let (_, deposits) = tree
.get_deposits(0..0, 0, TREE_DEPTH)
.expect("should get the full tree");
assert_eq!(deposits.len(), 0, "should return no deposits");
// Get 0 deposits, with 0 deposit count, tree depth 0.
let (_, deposits) = tree
.get_deposits(0..0, 0, 0)
.expect("should get the full tree");
assert_eq!(deposits.len(), 0, "should return no deposits");
// Get all deposits, with max deposit count.
let (full_root, deposits) = tree
.get_deposits(0..n, n, TREE_DEPTH)
.expect("should get the full tree");
assert_eq!(deposits.len(), n as usize, "should return all deposits");
// Get 4 deposits, with max deposit count.
let (root, deposits) = tree
.get_deposits(0..4, n, TREE_DEPTH)
.expect("should get the four from the full tree");
assert_eq!(
deposits.len(),
4 as usize,
"should get 4 deposits from full tree"
);
assert_eq!(
root, full_root,
"should still return full root when getting deposit subset"
);
// Get half of the deposits, with half deposit count.
let (half_root, deposits) = tree
.get_deposits(0..n / 2, n / 2, TREE_DEPTH)
.expect("should get the half tree");
assert_eq!(
deposits.len(),
n as usize / 2,
"should return half deposits"
);
// Get 4 deposits, with half deposit count.
let (root, deposits) = tree
.get_deposits(0..4, n / 2, TREE_DEPTH)
.expect("should get the half tree");
assert_eq!(
deposits.len(),
4 as usize,
"should get 4 deposits from half tree"
);
assert_eq!(
root, half_root,
"should still return half root when getting deposit subset"
);
assert_ne!(
full_root, half_root,
"should get different root when pinning deposit count"
);
}
#[test]
fn get_deposit_invalid() {
let n = 16;
let mut tree = DepositCache::default();
for i in 0..n {
let mut log = example_log();
log.index = i;
log.block_number = i;
log.deposit_data.withdrawal_credentials = Hash256::from_low_u64_be(i);
tree.insert_log(log).expect("should add consecutive logs")
}
// Range too high.
assert!(tree.get_deposits(0..n + 1, n, TREE_DEPTH).is_err());
// Count too high.
assert!(tree.get_deposits(0..n, n + 1, TREE_DEPTH).is_err());
// Range higher than count.
assert!(tree.get_deposits(0..4, 2, TREE_DEPTH).is_err());
}
}


@ -0,0 +1,107 @@
use super::http::Log;
use ssz::Decode;
use types::{DepositData, Hash256, PublicKeyBytes, SignatureBytes};
/// The following constants define the layout of bytes in the deposit contract `DepositEvent`. The
/// event bytes are formatted according to the Ethereum ABI.
const PUBKEY_START: usize = 192;
const PUBKEY_LEN: usize = 48;
const CREDS_START: usize = PUBKEY_START + 64 + 32;
const CREDS_LEN: usize = 32;
const AMOUNT_START: usize = CREDS_START + 32 + 32;
const AMOUNT_LEN: usize = 8;
const SIG_START: usize = AMOUNT_START + 32 + 32;
const SIG_LEN: usize = 96;
const INDEX_START: usize = SIG_START + 96 + 32;
const INDEX_LEN: usize = 8;
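Each offset steps over the previous field padded to whole 32-byte ABI words, plus the 32-byte length word that precedes the next `bytes` value. A hypothetical sanity-check test (not in the PR) makes the arithmetic explicit:

```rust
#[test]
fn layout_offsets_are_consistent() {
    assert_eq!(CREDS_START, 192 + 64 + 32); // 48-byte pubkey padded to 64 bytes
    assert_eq!(AMOUNT_START, 288 + 32 + 32); // 32-byte credentials, no padding
    assert_eq!(SIG_START, 352 + 32 + 32); // 8-byte amount padded to 32 bytes
    assert_eq!(INDEX_START, 416 + 96 + 32); // 96-byte signature, no padding
}
```
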
/// A fully parsed eth1 deposit contract log.
#[derive(Debug, PartialEq, Clone)]
pub struct DepositLog {
pub deposit_data: DepositData,
/// The block number of the log that included this `DepositData`.
pub block_number: u64,
/// The index included with the deposit log.
pub index: u64,
}
impl DepositLog {
/// Attempts to parse a raw `Log` from the deposit contract into a `DepositLog`.
pub fn from_log(log: &Log) -> Result<Self, String> {
let bytes = &log.data;
let pubkey = bytes
.get(PUBKEY_START..PUBKEY_START + PUBKEY_LEN)
.ok_or_else(|| "Insufficient bytes for pubkey".to_string())?;
let withdrawal_credentials = bytes
.get(CREDS_START..CREDS_START + CREDS_LEN)
.ok_or_else(|| "Insufficient bytes for withdrawal credential".to_string())?;
let amount = bytes
.get(AMOUNT_START..AMOUNT_START + AMOUNT_LEN)
.ok_or_else(|| "Insufficient bytes for amount".to_string())?;
let signature = bytes
.get(SIG_START..SIG_START + SIG_LEN)
.ok_or_else(|| "Insufficient bytes for signature".to_string())?;
let index = bytes
.get(INDEX_START..INDEX_START + INDEX_LEN)
.ok_or_else(|| "Insufficient bytes for index".to_string())?;
let deposit_data = DepositData {
pubkey: PublicKeyBytes::from_ssz_bytes(pubkey)
.map_err(|e| format!("Invalid pubkey ssz: {:?}", e))?,
withdrawal_credentials: Hash256::from_ssz_bytes(withdrawal_credentials)
.map_err(|e| format!("Invalid withdrawal_credentials ssz: {:?}", e))?,
amount: u64::from_ssz_bytes(amount)
.map_err(|e| format!("Invalid amount ssz: {:?}", e))?,
signature: SignatureBytes::from_ssz_bytes(signature)
.map_err(|e| format!("Invalid signature ssz: {:?}", e))?,
};
Ok(DepositLog {
deposit_data,
block_number: log.block_number,
index: u64::from_ssz_bytes(index).map_err(|e| format!("Invalid index ssz: {:?}", e))?,
})
}
}
#[cfg(test)]
pub mod tests {
use super::*;
use crate::http::Log;
/// The data from a deposit event, using the v0.8.3 version of the deposit contract.
pub const EXAMPLE_LOG: &[u8] = &[
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 160, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1, 64, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 167, 108, 6, 69, 88, 17, 3, 51, 6, 4, 158, 232, 82,
248, 218, 2, 71, 219, 55, 102, 86, 125, 136, 203, 36, 77, 64, 213, 43, 52, 175, 154, 239,
50, 142, 52, 201, 77, 54, 239, 0, 229, 22, 46, 139, 120, 62, 240, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 0, 64, 89, 115, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 96, 140, 74, 175, 158, 209, 20, 206,
30, 63, 215, 238, 113, 60, 132, 216, 211, 100, 186, 202, 71, 34, 200, 160, 225, 212, 213,
119, 88, 51, 80, 101, 74, 2, 45, 78, 153, 12, 192, 44, 51, 77, 40, 10, 72, 246, 34, 193,
187, 22, 95, 4, 211, 245, 224, 13, 162, 21, 163, 54, 225, 22, 124, 3, 56, 14, 81, 122, 189,
149, 250, 251, 159, 22, 77, 94, 157, 197, 196, 253, 110, 201, 88, 193, 246, 136, 226, 221,
18, 113, 232, 105, 100, 114, 103, 237, 189, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 8, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
];
#[test]
fn can_parse_example_log() {
let log = Log {
block_number: 42,
data: EXAMPLE_LOG.to_vec(),
};
DepositLog::from_log(&log).expect("should decode log");
}
}


@ -0,0 +1,405 @@
//! Provides a very minimal set of functions for interfacing with the eth2 deposit contract via an
//! eth1 HTTP JSON-RPC endpoint.
//!
//! All remote functions return a future (i.e., are async).
//!
//! Does not use a web3 library, instead it uses `reqwest` (`hyper`) to call the remote endpoint
//! and `serde` to decode the response.
//!
//! ## Note
//!
//! There is no ABI parsing here, all function signatures and topics are hard-coded as constants.
use futures::{Future, Stream};
use libflate::gzip::Decoder;
use reqwest::{header::CONTENT_TYPE, r#async::ClientBuilder, StatusCode};
use serde_json::{json, Value};
use std::io::prelude::*;
use std::ops::Range;
use std::time::Duration;
use types::Hash256;
/// `keccak("DepositEvent(bytes,bytes,bytes,bytes,bytes)")`
pub const DEPOSIT_EVENT_TOPIC: &str =
"0x649bbc62d0e31342afea4e5cd82d4049e7e1ee912fc0889aa790803be39038c5";
/// `keccak("get_deposit_root()")[0..4]`
pub const DEPOSIT_ROOT_FN_SIGNATURE: &str = "0x863a311b";
/// `keccak("get_deposit_count()")[0..4]`
pub const DEPOSIT_COUNT_FN_SIGNATURE: &str = "0x621fd130";
/// Number of bytes in deposit contract deposit count response.
pub const DEPOSIT_COUNT_RESPONSE_BYTES: usize = 96;
/// Number of bytes in deposit contract deposit root (value only).
pub const DEPOSIT_ROOT_BYTES: usize = 32;
#[derive(Debug, PartialEq, Clone)]
pub struct Block {
pub hash: Hash256,
pub timestamp: u64,
pub number: u64,
}
/// Returns the current block number.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_block_number(
endpoint: &str,
timeout: Duration,
) -> impl Future<Item = u64, Error = String> {
send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout)
.and_then(|response_body| {
hex_to_u64_be(
response_result(&response_body)?
.ok_or_else(|| "No result field was returned for block number".to_string())?
.as_str()
.ok_or_else(|| "Data was not string")?,
)
})
.map_err(|e| format!("Failed to get block number: {}", e))
}
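
These functions return futures 0.1 futures, so a caller must drive them on a tokio runtime. A minimal sketch (the endpoint is illustrative; it assumes an eth1 node serving JSON-RPC at `http://localhost:8545`):

```rust
use std::time::Duration;

fn main() {
    // tokio 0.1: `block_on` drives a futures 0.1 future to completion.
    let mut runtime = tokio::runtime::Runtime::new().expect("should start runtime");
    let future = get_block_number("http://localhost:8545", Duration::from_secs(1));
    match runtime.block_on(future) {
        Ok(block_number) => println!("eth1 head is at block {}", block_number),
        Err(e) => eprintln!("request failed: {}", e),
    }
}
```
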
/// Gets a block hash by block number.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_block(
endpoint: &str,
block_number: u64,
timeout: Duration,
) -> impl Future<Item = Block, Error = String> {
let params = json!([
format!("0x{:x}", block_number),
false // do not return full tx objects.
]);
send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout)
.and_then(|response_body| {
let hash = hex_to_bytes(
response_result(&response_body)?
.ok_or_else(|| "No result field was returned for block".to_string())?
.get("hash")
.ok_or_else(|| "No hash for block")?
.as_str()
.ok_or_else(|| "Block hash was not string")?,
)?;
let hash = if hash.len() == 32 {
Ok(Hash256::from_slice(&hash))
} else {
Err(format!("Block has was not 32 bytes: {:?}", hash))
}?;
let timestamp = hex_to_u64_be(
response_result(&response_body)?
.ok_or_else(|| "No result field was returned for timestamp".to_string())?
.get("timestamp")
.ok_or_else(|| "No timestamp for block")?
.as_str()
.ok_or_else(|| "Block timestamp was not string")?,
)?;
let number = hex_to_u64_be(
response_result(&response_body)?
.ok_or_else(|| "No result field was returned for number".to_string())?
.get("number")
.ok_or_else(|| "No number for block")?
.as_str()
.ok_or_else(|| "Block number was not string")?,
)?;
if number <= usize::max_value() as u64 {
Ok(Block {
hash,
timestamp,
number,
})
} else {
Err(format!("Block number {} is larger than a usize", number))
}
})
.map_err(|e| format!("Failed to get block number: {}", e))
}
/// Returns the value of the `get_deposit_count()` call at the given `address` for the given
/// `block_number`.
///
/// Assumes that the `address` has the same ABI as the eth2 deposit contract.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_deposit_count(
endpoint: &str,
address: &str,
block_number: u64,
timeout: Duration,
) -> impl Future<Item = Option<u64>, Error = String> {
call(
endpoint,
address,
DEPOSIT_COUNT_FN_SIGNATURE,
block_number,
timeout,
)
.and_then(|result| result.ok_or_else(|| "No response to deposit count".to_string()))
.and_then(|bytes| {
if bytes.is_empty() {
Ok(None)
} else if bytes.len() == DEPOSIT_COUNT_RESPONSE_BYTES {
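// The `eth_call` return data is ABI-encoded as dynamic `bytes`: a 32-byte offset
// and a 32-byte length precede the data itself, in which the deposit contract
// encodes the count as a little-endian `u64` (hence the 96-byte response).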
let mut array = [0; 8];
array.copy_from_slice(&bytes[32 + 32..32 + 32 + 8]);
Ok(Some(u64::from_le_bytes(array)))
} else {
Err(format!(
"Deposit count response was not {} bytes: {:?}",
DEPOSIT_COUNT_RESPONSE_BYTES, bytes
))
}
})
}
/// Returns the value of the `get_deposit_root()` call at the given `address` for the given
/// `block_number`.
///
/// Assumes that the `address` has the same ABI as the eth2 deposit contract.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_deposit_root(
endpoint: &str,
address: &str,
block_number: u64,
timeout: Duration,
) -> impl Future<Item = Option<Hash256>, Error = String> {
call(
endpoint,
address,
DEPOSIT_ROOT_FN_SIGNATURE,
block_number,
timeout,
)
.and_then(|result| result.ok_or_else(|| "No response to deposit root".to_string()))
.and_then(|bytes| {
if bytes.is_empty() {
Ok(None)
} else if bytes.len() == DEPOSIT_ROOT_BYTES {
Ok(Some(Hash256::from_slice(&bytes)))
} else {
Err(format!(
"Deposit root response was not {} bytes: {:?}",
DEPOSIT_ROOT_BYTES, bytes
))
}
})
}
/// Performs an instant, no-transaction call to the contract `address` with the given `0x`-prefixed
/// `hex_data`.
///
/// Returns bytes, if any.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
fn call(
endpoint: &str,
address: &str,
hex_data: &str,
block_number: u64,
timeout: Duration,
) -> impl Future<Item = Option<Vec<u8>>, Error = String> {
let params = json!([
{
"to": address,
"data": hex_data,
},
format!("0x{:x}", block_number)
]);
send_rpc_request(endpoint, "eth_call", params, timeout).and_then(|response_body| {
match response_result(&response_body)? {
None => Ok(None),
Some(result) => {
let hex = result
.as_str()
.map(|s| s.to_string())
.ok_or_else(|| "'result' value was not a string".to_string())?;
Ok(Some(hex_to_bytes(&hex)?))
}
}
})
}
/// A reduced set of fields from an Eth1 contract log.
#[derive(Debug, PartialEq, Clone)]
pub struct Log {
pub(crate) block_number: u64,
pub(crate) data: Vec<u8>,
}
/// Returns logs for the `DEPOSIT_EVENT_TOPIC`, for the given `address` in the given
/// `block_height_range`.
///
/// It's not clear from the Ethereum JSON-RPC docs if this range is inclusive or not.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub fn get_deposit_logs_in_range(
endpoint: &str,
address: &str,
block_height_range: Range<u64>,
timeout: Duration,
) -> impl Future<Item = Vec<Log>, Error = String> {
let params = json!([{
"address": address,
"topics": [DEPOSIT_EVENT_TOPIC],
"fromBlock": format!("0x{:x}", block_height_range.start),
"toBlock": format!("0x{:x}", block_height_range.end),
}]);
send_rpc_request(endpoint, "eth_getLogs", params, timeout)
.and_then(|response_body| {
response_result(&response_body)?
.ok_or_else(|| "No result field was returned for deposit logs".to_string())?
.as_array()
.cloned()
.ok_or_else(|| "'result' value was not an array".to_string())?
.into_iter()
.map(|value| {
let block_number = value
.get("blockNumber")
.ok_or_else(|| "No block number field in log")?
.as_str()
.ok_or_else(|| "Block number was not string")?;
let data = value
.get("data")
.ok_or_else(|| "No block number field in log")?
.as_str()
.ok_or_else(|| "Data was not string")?;
Ok(Log {
block_number: hex_to_u64_be(&block_number)?,
data: hex_to_bytes(data)?,
})
})
.collect::<Result<Vec<Log>, String>>()
})
.map_err(|e| format!("Failed to get logs in range: {}", e))
}
/// Sends an RPC request to `endpoint`, using a POST with the given `body`.
///
/// Tries to receive the response and parse the body as a `String`.
pub fn send_rpc_request(
endpoint: &str,
method: &str,
params: Value,
timeout: Duration,
) -> impl Future<Item = String, Error = String> {
let body = json!({
"jsonrpc": "2.0",
"method": method,
"params": params,
"id": 1
})
.to_string();
// Note: it is not ideal to create a new client for each request.
//
// A better solution would be to create some struct that contains a built client and pass it
// around (similar to the `web3` crate's `Transport` structs).
ClientBuilder::new()
.timeout(timeout)
.build()
.expect("The builder should always build a client")
.post(endpoint)
.header(CONTENT_TYPE, "application/json")
.body(body)
.send()
.map_err(|e| format!("Request failed: {:?}", e))
.and_then(|response| {
if response.status() != StatusCode::OK {
Err(format!(
"Response HTTP status was not 200 OK: {}.",
response.status()
))
} else {
Ok(response)
}
})
.and_then(|response| {
response
.headers()
.get(CONTENT_TYPE)
.ok_or_else(|| "No content-type header in response".to_string())
.and_then(|encoding| {
encoding
.to_str()
.map(|s| s.to_string())
.map_err(|e| format!("Failed to parse content-type header: {}", e))
})
.map(|encoding| (response, encoding))
})
.and_then(|(response, encoding)| {
response
.into_body()
.concat2()
.map(|chunk| chunk.iter().cloned().collect::<Vec<u8>>())
.map_err(|e| format!("Failed to receive body: {:?}", e))
.and_then(move |bytes| match encoding.as_str() {
"application/json" => Ok(bytes),
"application/json; charset=utf-8" => Ok(bytes),
// Note: gzip is not presently working because we always seem to get an empty
// response from the server.
//
// I expect this is some simple-to-solve issue for someone who is familiar with
// the eth1 JSON RPC.
//
// Some public-facing web3 servers use gzip to compress their traffic, it would
// be good to support this.
"application/x-gzip" => {
let mut decoder = Decoder::new(&bytes[..])
.map_err(|e| format!("Failed to create gzip decoder: {}", e))?;
let mut decompressed = vec![];
decoder
.read_to_end(&mut decompressed)
.map_err(|e| format!("Failed to decompress gzip data: {}", e))?;
Ok(decompressed)
}
other => Err(format!("Unsupported encoding: {}", other)),
})
.map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
.map_err(|e| format!("Failed to receive body: {:?}", e))
})
}
/// Accepts an entire HTTP body (as a string) and returns the `result` field, as a serde `Value`.
fn response_result(response: &str) -> Result<Option<Value>, String> {
Ok(serde_json::from_str::<Value>(&response)
.map_err(|e| format!("Failed to parse response: {:?}", e))?
.get("result")
.cloned())
}
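// For reference (not from the original file), a typical successful response body
// looks like `{"jsonrpc":"2.0","id":1,"result":"0x2a"}`. An error response
// carries an `error` object and no `result` field, which maps to `Ok(None)` here.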
/// Parses a `0x`-prefixed, **big-endian** hex string as a u64.
///
/// Note: the JSON-RPC encodes integers as big-endian. The deposit contract uses little-endian.
/// Therefore, this function is only useful for numbers encoded by the JSON RPC.
///
/// E.g., `0x01 == 1`
fn hex_to_u64_be(hex: &str) -> Result<u64, String> {
u64::from_str_radix(strip_prefix(hex)?, 16)
.map_err(|e| format!("Failed to parse hex as u64: {:?}", e))
}
/// Parses a `0x`-prefixed, big-endian hex string as bytes.
///
/// E.g., `0x0102 == vec![1, 2]`
fn hex_to_bytes(hex: &str) -> Result<Vec<u8>, String> {
hex::decode(strip_prefix(hex)?).map_err(|e| format!("Failed to parse hex as bytes: {:?}", e))
}
/// Removes the `0x` prefix from some bytes. Returns an error if the prefix is not present.
fn strip_prefix(hex: &str) -> Result<&str, String> {
if hex.starts_with("0x") {
Ok(&hex[2..])
} else {
Err("Hex string did not start with `0x`".to_string())
}
}
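// Illustrative tests (not part of the original file). The expected values come
// directly from the doc comments on the helpers above.
#[cfg(test)]
mod hex_tests {
use super::*;
#[test]
fn hex_helpers() {
assert_eq!(hex_to_u64_be("0x01").expect("should parse u64"), 1);
assert_eq!(hex_to_bytes("0x0102").expect("should parse bytes"), vec![1, 2]);
assert!(strip_prefix("0102").is_err(), "should reject a missing 0x prefix");
// The hard-coded topic should decode to a 32-byte keccak digest.
assert_eq!(
hex_to_bytes(DEPOSIT_EVENT_TOPIC).expect("should parse topic").len(),
32
);
}
}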

View File

@ -0,0 +1,27 @@
use crate::Config;
use crate::{block_cache::BlockCache, deposit_cache::DepositCache};
use parking_lot::RwLock;
#[derive(Default)]
pub struct DepositUpdater {
pub cache: DepositCache,
pub last_processed_block: Option<u64>,
}
#[derive(Default)]
pub struct Inner {
pub block_cache: RwLock<BlockCache>,
pub deposit_cache: RwLock<DepositUpdater>,
pub config: RwLock<Config>,
}
impl Inner {
/// Prunes the block cache to the length given by `self.config.block_cache_truncation`.
///
/// Is a no-op if `self.config.block_cache_truncation` is `None`.
pub fn prune_blocks(&self) {
if let Some(block_cache_truncation) = self.config.read().block_cache_truncation {
self.block_cache.write().truncate(block_cache_truncation);
}
}
}

View File

@ -0,0 +1,11 @@
mod block_cache;
mod deposit_cache;
mod deposit_log;
pub mod http;
mod inner;
mod service;
pub use block_cache::{BlockCache, Eth1Block};
pub use deposit_cache::DepositCache;
pub use deposit_log::DepositLog;
pub use service::{BlockCacheUpdateOutcome, Config, DepositCacheUpdateOutcome, Error, Service};

View File

@ -0,0 +1,643 @@
use crate::{
block_cache::{BlockCache, Error as BlockCacheError, Eth1Block},
deposit_cache::Error as DepositCacheError,
http::{
get_block, get_block_number, get_deposit_count, get_deposit_logs_in_range, get_deposit_root,
},
inner::{DepositUpdater, Inner},
DepositLog,
};
use exit_future::Exit;
use futures::{
future::{loop_fn, Loop},
stream, Future, Stream,
};
use parking_lot::{RwLock, RwLockReadGuard};
use serde::{Deserialize, Serialize};
use slog::{debug, error, trace, Logger};
use std::ops::{Range, RangeInclusive};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::timer::Delay;
const STANDARD_TIMEOUT_MILLIS: u64 = 15_000;
/// Timeout when doing an eth_blockNumber call.
const BLOCK_NUMBER_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an eth_getBlockByNumber call.
const GET_BLOCK_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an eth_call to read the deposit contract root.
const GET_DEPOSIT_ROOT_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an eth_call to read the deposit contract deposit count.
const GET_DEPOSIT_COUNT_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
/// Timeout when doing an eth_getLogs to read the deposit contract logs.
const GET_DEPOSIT_LOG_TIMEOUT_MILLIS: u64 = STANDARD_TIMEOUT_MILLIS;
#[derive(Debug, PartialEq, Clone)]
pub enum Error {
/// The remote node is less synced than we expect; it is not useful until it has done more
/// syncing.
RemoteNotSynced {
next_required_block: u64,
remote_highest_block: u64,
follow_distance: u64,
},
/// Failed to download a block from the eth1 node.
BlockDownloadFailed(String),
/// Failed to get the current block number from the eth1 node.
GetBlockNumberFailed(String),
/// Failed to read the deposit contract root from the eth1 node.
GetDepositRootFailed(String),
/// Failed to read the deposit contract deposit count from the eth1 node.
GetDepositCountFailed(String),
/// Failed to read the deposit contract logs from the eth1 node.
GetDepositLogsFailed(String),
/// There was an inconsistency when adding a block to the cache.
FailedToInsertEth1Block(BlockCacheError),
/// There was an inconsistency when adding a deposit to the cache.
FailedToInsertDeposit(DepositCacheError),
/// A log downloaded from the eth1 contract was not well formed.
FailedToParseDepositLog {
block_range: Range<u64>,
error: String,
},
/// There was an unexpected internal error.
Internal(String),
}
/// The success message for an Eth1Data cache update.
#[derive(Debug, PartialEq, Clone)]
pub enum BlockCacheUpdateOutcome {
/// The cache was successfully updated.
Success {
blocks_imported: usize,
head_block_number: Option<u64>,
},
}
/// The success message for an Eth1 deposit cache update.
#[derive(Debug, PartialEq, Clone)]
pub enum DepositCacheUpdateOutcome {
/// The cache was successfully updated.
Success { logs_imported: usize },
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
/// An Eth1 node (e.g., Geth) running a HTTP JSON-RPC endpoint.
pub endpoint: String,
/// The address the `BlockCache` and `DepositCache` should assume is the canonical deposit contract.
pub deposit_contract_address: String,
/// Defines the first block from which the `DepositCache` will search for deposit logs.
///
/// Setting this too high can result in missed logs; setting it too low will result in
/// unnecessary calls to the Eth1 node's HTTP JSON RPC.
pub deposit_contract_deploy_block: u64,
/// Defines the lowest block number that should be downloaded and added to the `BlockCache`.
pub lowest_cached_block_number: u64,
/// Defines how far behind the Eth1 node's head we should follow.
///
/// Note: this should be less than or equal to the specification's `ETH1_FOLLOW_DISTANCE`.
pub follow_distance: u64,
/// Defines the number of blocks that should be retained each time the `BlockCache` calls truncate on
/// itself.
pub block_cache_truncation: Option<usize>,
/// The interval between updates when using the `auto_update` function.
pub auto_update_interval_millis: u64,
/// The span of blocks we should query for logs, per request.
pub blocks_per_log_query: usize,
/// The maximum number of log requests per update.
pub max_log_requests_per_update: Option<usize>,
/// The maximum number of blocks imported per update.
pub max_blocks_per_update: Option<usize>,
}
impl Default for Config {
fn default() -> Self {
Self {
endpoint: "http://localhost:8545".into(),
deposit_contract_address: "0x0000000000000000000000000000000000000000".into(),
deposit_contract_deploy_block: 0,
lowest_cached_block_number: 0,
follow_distance: 128,
block_cache_truncation: Some(4_096),
auto_update_interval_millis: 500,
blocks_per_log_query: 1_000,
max_log_requests_per_update: None,
max_blocks_per_update: None,
}
}
}
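// Configs are typically built with struct-update syntax (as in the tests for this
// crate), overriding a few fields and taking the rest from `Default`, e.g.:
//
// let config = Config {
//     endpoint: eth1_node_url,
//     follow_distance: 0,
//     ..Config::default()
// };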
/// Provides a set of Eth1 caches and async functions to update them.
///
/// Stores the following caches:
///
/// - Deposit cache: stores all deposit logs from the deposit contract.
/// - Block cache: stores some number of eth1 blocks.
#[derive(Clone)]
pub struct Service {
inner: Arc<Inner>,
pub log: Logger,
}
impl Service {
/// Creates a new service. Does not attempt to connect to the eth1 node.
pub fn new(config: Config, log: Logger) -> Self {
Self {
inner: Arc::new(Inner {
config: RwLock::new(config),
..Inner::default()
}),
log,
}
}
/// Provides access to the block cache.
pub fn blocks(&self) -> &RwLock<BlockCache> {
&self.inner.block_cache
}
/// Provides access to the deposit cache.
pub fn deposits(&self) -> &RwLock<DepositUpdater> {
&self.inner.deposit_cache
}
/// Returns the number of currently cached blocks.
pub fn block_cache_len(&self) -> usize {
self.blocks().read().len()
}
/// Returns the number deposits available in the deposit cache.
pub fn deposit_cache_len(&self) -> usize {
self.deposits().read().cache.len()
}
/// Read the service's configuration.
pub fn config(&self) -> RwLockReadGuard<Config> {
self.inner.config.read()
}
/// Updates the configuration in `self` to be `new_config`.
///
/// Will truncate the block cache if the new config specifies truncation.
pub fn update_config(&self, new_config: Config) -> Result<(), String> {
let mut old_config = self.inner.config.write();
if new_config.deposit_contract_deploy_block != old_config.deposit_contract_deploy_block {
// This may be possible, I just haven't looked into the details to ensure it's safe.
Err("Updating deposit_contract_deploy_block is not supported".to_string())
} else {
*old_config = new_config;
// Drop the write guard first to prevent a deadlock when `prune_blocks` takes its own locks.
drop(old_config);
self.inner.prune_blocks();
Ok(())
}
}
/// Set the lowest block that the block cache will store.
///
/// Note: this block may not always be present if truncating is enabled.
pub fn set_lowest_cached_block(&self, block_number: u64) {
self.inner.config.write().lowest_cached_block_number = block_number;
}
/// Update the deposit and block cache, returning an error if either fail.
///
/// ## Returns
///
/// - Ok(_) if the update was successful (the cache may or may not have been modified).
/// - Err(_) if there is an error.
///
/// Emits logs for debugging and errors.
pub fn update(
&self,
) -> impl Future<Item = (DepositCacheUpdateOutcome, BlockCacheUpdateOutcome), Error = String>
{
let log_a = self.log.clone();
let log_b = self.log.clone();
let deposit_future = self
.update_deposit_cache()
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e))
.then(move |result| {
match &result {
Ok(DepositCacheUpdateOutcome::Success { logs_imported }) => trace!(
log_a,
"Updated eth1 deposit cache";
"logs_imported" => logs_imported,
),
Err(e) => error!(
log_a,
"Failed to update eth1 deposit cache";
"error" => e
),
};
result
});
let block_future = self
.update_block_cache()
.map_err(|e| format!("Failed to update eth1 cache: {:?}", e))
.then(move |result| {
match &result {
Ok(BlockCacheUpdateOutcome::Success {
blocks_imported,
head_block_number,
}) => trace!(
log_b,
"Updated eth1 block cache";
"blocks_imported" => blocks_imported,
"head_block" => head_block_number,
),
Err(e) => error!(
log_b,
"Failed to update eth1 block cache";
"error" => e
),
};
result
});
deposit_future.join(block_future)
}
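// Illustrative usage (not from the original file): the tests in this crate drive
// updates to completion on a tokio 0.1 runtime, e.g.:
//
// let mut runtime = tokio::runtime::Runtime::new().expect("should start runtime");
// let (deposit_outcome, block_outcome) = runtime
//     .block_on(service.update())
//     .expect("should update both caches");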
/// A looping future that updates the cache, then waits `config.auto_update_interval_millis` before
/// updating it again.
///
/// ## Returns
///
/// - Ok(_) if the update was successful (the cache may or may not have been modified).
/// - Err(_) if there is an error.
///
/// Emits logs for debugging and errors.
pub fn auto_update(&self, exit: Exit) -> impl Future<Item = (), Error = ()> {
let service = self.clone();
let log = self.log.clone();
let update_interval = Duration::from_millis(self.config().auto_update_interval_millis);
loop_fn((), move |()| {
let exit = exit.clone();
let service = service.clone();
let log_a = log.clone();
let log_b = log.clone();
service
.update()
.then(move |update_result| {
match update_result {
Err(e) => error!(
log_a,
"Failed to update eth1 genesis cache";
"retry_millis" => update_interval.as_millis(),
"error" => e,
),
Ok((deposit, block)) => debug!(
log_a,
"Updated eth1 genesis cache";
"retry_millis" => update_interval.as_millis(),
"blocks" => format!("{:?}", block),
"deposits" => format!("{:?}", deposit),
),
};
// Do not break the loop if there is an update failure.
Ok(())
})
.and_then(move |_| Delay::new(Instant::now() + update_interval))
.then(move |timer_result| {
if let Err(e) = timer_result {
error!(
log_b,
"Failed to trigger eth1 cache update delay";
"error" => format!("{:?}", e),
);
}
// Do not break the loop if there is a timer failure.
Ok(())
})
.map(move |_| {
if exit.is_live() {
Loop::Continue(())
} else {
Loop::Break(())
}
})
})
}
/// Contacts the remote eth1 node and attempts to import deposit logs up to the configured
/// follow-distance block.
///
/// Will process no more than `blocks_per_log_query * max_log_requests_per_update` blocks in a
/// single update.
///
/// ## Resolves with
///
/// - Ok(_) if the update was successful (the cache may or may not have been modified).
/// - Err(_) if there is an error.
///
/// Emits logs for debugging and errors.
pub fn update_deposit_cache(
&self,
) -> impl Future<Item = DepositCacheUpdateOutcome, Error = Error> {
let service_1 = self.clone();
let service_2 = self.clone();
let blocks_per_log_query = self.config().blocks_per_log_query;
let max_log_requests_per_update = self
.config()
.max_log_requests_per_update
.unwrap_or_else(usize::max_value);
let next_required_block = self
.deposits()
.read()
.last_processed_block
.map(|n| n + 1)
.unwrap_or_else(|| self.config().deposit_contract_deploy_block);
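// Split the range of newly-available block numbers into chunks of
// `blocks_per_log_query` blocks and request the deposit logs for each chunk below.
// E.g., with a chunk size of 4, blocks 0..=9 become the ranges [0..4, 4..8, 8..10].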
get_new_block_numbers(
&self.config().endpoint,
next_required_block,
self.config().follow_distance,
)
.map(move |range| {
range
.map(|range| {
range
.collect::<Vec<u64>>()
.chunks(blocks_per_log_query)
.take(max_log_requests_per_update)
.map(|vec| {
let first = vec.first().cloned().unwrap_or_else(|| 0);
let last = vec.last().map(|n| n + 1).unwrap_or_else(|| 0);
(first..last)
})
.collect::<Vec<Range<u64>>>()
})
.unwrap_or_else(|| vec![])
})
.and_then(move |block_number_chunks| {
stream::unfold(
block_number_chunks.into_iter(),
move |mut chunks| match chunks.next() {
Some(chunk) => {
let chunk_1 = chunk.clone();
Some(
get_deposit_logs_in_range(
&service_1.config().endpoint,
&service_1.config().deposit_contract_address,
chunk,
Duration::from_millis(GET_DEPOSIT_LOG_TIMEOUT_MILLIS),
)
.map_err(Error::GetDepositLogsFailed)
.map(|logs| (chunk_1, logs))
.map(|logs| (logs, chunks)),
)
}
None => None,
},
)
.fold(0, move |mut sum, (block_range, log_chunk)| {
let mut cache = service_2.deposits().write();
log_chunk
.into_iter()
.map(|raw_log| {
DepositLog::from_log(&raw_log).map_err(|error| {
Error::FailedToParseDepositLog {
block_range: block_range.clone(),
error,
}
})
})
// Return early if any of the logs cannot be parsed.
//
// This costs an additional `collect`, however it enforces that no logs are
// imported if any one of them cannot be parsed.
.collect::<Result<Vec<_>, _>>()?
.into_iter()
.map(|deposit_log| {
cache
.cache
.insert_log(deposit_log)
.map_err(Error::FailedToInsertDeposit)?;
sum += 1;
Ok(())
})
// Return an error if a deposit is unable to be added to the cache.
//
// If this error occurs, the cache is no longer guaranteed to hold either
// none or all of the logs for each block (i.e., there may exist _some_ logs for
// a block, but not _all_ logs for that block). This scenario can cause the
// node to choose an invalid genesis state or propose an invalid block.
.collect::<Result<_, _>>()?;
cache.last_processed_block = Some(block_range.end.saturating_sub(1));
Ok(sum)
})
.map(|logs_imported| DepositCacheUpdateOutcome::Success { logs_imported })
})
}
/// Contacts the remote eth1 node and attempts to import all blocks up to the configured
/// follow-distance block.
///
/// If configured, prunes the block cache after importing new blocks.
///
/// ## Resolves with
///
/// - Ok(_) if the update was successful (the cache may or may not have been modified).
/// - Err(_) if there is an error.
///
/// Emits logs for debugging and errors.
pub fn update_block_cache(&self) -> impl Future<Item = BlockCacheUpdateOutcome, Error = Error> {
let cache_1 = self.inner.clone();
let cache_2 = self.inner.clone();
let cache_3 = self.inner.clone();
let cache_4 = self.inner.clone();
let cache_5 = self.inner.clone();
let block_cache_truncation = self.config().block_cache_truncation;
let max_blocks_per_update = self
.config()
.max_blocks_per_update
.unwrap_or_else(usize::max_value);
let next_required_block = cache_1
.block_cache
.read()
.highest_block_number()
.map(|n| n + 1)
.unwrap_or_else(|| self.config().lowest_cached_block_number);
get_new_block_numbers(
&self.config().endpoint,
next_required_block,
self.config().follow_distance,
)
// Map the range of required blocks into a Vec.
//
// If the required range is larger than the size of the cache, drop the existing cache
// because it has expired and just download enough blocks to fill the cache.
.and_then(move |range| {
range
.map(|range| {
if range.start() > range.end() {
// Note: this check is not strictly necessary, however it remains to safeguard
// against any regression which may cause an underflow in the following
// subtraction operation.
Err(Error::Internal("Range was not increasing".into()))
} else {
let range_size = range.end() - range.start();
let max_size = block_cache_truncation
.map(|n| n as u64)
.unwrap_or_else(u64::max_value);
if range_size > max_size {
// If the range of required blocks is larger than `max_size`, drop all
// existing blocks and download `max_size` count of blocks.
let first_block = range.end() - max_size;
(*cache_5.block_cache.write()) = BlockCache::default();
Ok((first_block..=*range.end()).collect::<Vec<u64>>())
} else {
Ok(range.collect::<Vec<u64>>())
}
}
})
.unwrap_or_else(|| Ok(vec![]))
})
// Download the range of blocks and sequentially import them into the cache.
.and_then(move |required_block_numbers| {
let required_block_numbers = required_block_numbers
.into_iter()
.take(max_blocks_per_update);
// Produce a stream from the list of required block numbers and return a future that
// consumes it.
stream::unfold(
required_block_numbers,
move |mut block_numbers| match block_numbers.next() {
Some(block_number) => Some(
download_eth1_block(cache_2.clone(), block_number)
.map(|v| (v, block_numbers)),
),
None => None,
},
)
.fold(0, move |sum, eth1_block| {
cache_3
.block_cache
.write()
.insert_root_or_child(eth1_block)
.map_err(Error::FailedToInsertEth1Block)?;
Ok(sum + 1)
})
})
.and_then(move |blocks_imported| {
// Prune the block cache, preventing it from growing too large.
cache_4.prune_blocks();
Ok(BlockCacheUpdateOutcome::Success {
blocks_imported,
head_block_number: cache_4.clone().block_cache.read().highest_block_number(),
})
})
}
}
/// Determines the range of blocks that need to be downloaded, given the remote's best block and
/// the locally stored best block.
fn get_new_block_numbers<'a>(
endpoint: &str,
next_required_block: u64,
follow_distance: u64,
) -> impl Future<Item = Option<RangeInclusive<u64>>, Error = Error> + 'a {
get_block_number(endpoint, Duration::from_millis(BLOCK_NUMBER_TIMEOUT_MILLIS))
.map_err(Error::GetBlockNumberFailed)
.and_then(move |remote_highest_block| {
let remote_follow_block = remote_highest_block.saturating_sub(follow_distance);
if next_required_block <= remote_follow_block {
Ok(Some(next_required_block..=remote_follow_block))
} else if next_required_block > remote_highest_block + 1 {
// If this is the case, the node must have gone "backwards" in terms of its sync
// (i.e., its head block is lower than it was before).
//
// We assume that the `follow_distance` should be sufficient to ensure this never
// happens, otherwise it is an error.
Err(Error::RemoteNotSynced {
next_required_block,
remote_highest_block,
follow_distance,
})
} else {
// Return an empty range.
Ok(None)
}
})
}
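// For example (not part of the original file), with `follow_distance == 128` and a
// remote head of block 1_000 the follow block is 872: a `next_required_block` of
// 800 yields `Some(800..=872)`, 900 yields `Ok(None)` and 1_002 is a
// `RemoteNotSynced` error.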
/// Downloads the `(block, deposit_root, deposit_count)` tuple from an eth1 node for the given
/// `block_number`.
///
/// Performs three async calls to an Eth1 HTTP JSON RPC endpoint.
fn download_eth1_block<'a>(
cache: Arc<Inner>,
block_number: u64,
) -> impl Future<Item = Eth1Block, Error = Error> + 'a {
// Performs an `eth_getBlockByNumber` call to an eth1 node.
get_block(
&cache.config.read().endpoint,
block_number,
Duration::from_millis(GET_BLOCK_TIMEOUT_MILLIS),
)
.map_err(Error::BlockDownloadFailed)
.join3(
// Perform 2x `eth_call` via an eth1 node to read the deposit contract root and count.
get_deposit_root(
&cache.config.read().endpoint,
&cache.config.read().deposit_contract_address,
block_number,
Duration::from_millis(GET_DEPOSIT_ROOT_TIMEOUT_MILLIS),
)
.map_err(Error::GetDepositRootFailed),
get_deposit_count(
&cache.config.read().endpoint,
&cache.config.read().deposit_contract_address,
block_number,
Duration::from_millis(GET_DEPOSIT_COUNT_TIMEOUT_MILLIS),
)
.map_err(Error::GetDepositCountFailed),
)
.map(|(http_block, deposit_root, deposit_count)| Eth1Block {
hash: http_block.hash,
number: http_block.number,
timestamp: http_block.timestamp,
deposit_root,
deposit_count,
})
}
#[cfg(test)]
mod tests {
use super::*;
use toml;
#[test]
fn serde_serialize() {
let serialized =
toml::to_string(&Config::default()).expect("Should serde encode default config");
toml::from_str::<Config>(&serialized).expect("Should serde decode default config");
}
}

View File

@ -0,0 +1,713 @@
#![cfg(test)]
use environment::{Environment, EnvironmentBuilder};
use eth1::http::{get_deposit_count, get_deposit_logs_in_range, get_deposit_root, Block, Log};
use eth1::{Config, Service};
use eth1::{DepositCache, DepositLog};
use eth1_test_rig::GanacheEth1Instance;
use exit_future;
use futures::Future;
use merkle_proof::verify_merkle_proof;
use std::ops::Range;
use std::time::Duration;
use tokio::runtime::Runtime;
use tree_hash::TreeHash;
use types::{DepositData, EthSpec, Hash256, Keypair, MainnetEthSpec, MinimalEthSpec, Signature};
use web3::{transports::Http, Web3};
const DEPOSIT_CONTRACT_TREE_DEPTH: usize = 32;
pub fn new_env() -> Environment<MinimalEthSpec> {
EnvironmentBuilder::minimal()
// Use a single thread, so that when all tests are run in parallel they don't have so many
// threads.
.single_thread_tokio_runtime()
.expect("should start tokio runtime")
.null_logger()
.expect("should start null logger")
.build()
.expect("should build env")
}
fn timeout() -> Duration {
Duration::from_secs(1)
}
fn random_deposit_data() -> DepositData {
let keypair = Keypair::random();
let mut deposit = DepositData {
pubkey: keypair.pk.into(),
withdrawal_credentials: Hash256::zero(),
amount: 32_000_000_000,
signature: Signature::empty_signature().into(),
};
deposit.signature = deposit.create_signature(&keypair.sk, &MainnetEthSpec::default_spec());
deposit
}
/// Blocking operation to get the deposit logs from the `deposit_contract`.
fn blocking_deposit_logs(
runtime: &mut Runtime,
eth1: &GanacheEth1Instance,
range: Range<u64>,
) -> Vec<Log> {
runtime
.block_on(get_deposit_logs_in_range(
&eth1.endpoint(),
&eth1.deposit_contract.address(),
range,
timeout(),
))
.expect("should get logs")
}
/// Blocking operation to get the deposit root from the `deposit_contract`.
fn blocking_deposit_root(
runtime: &mut Runtime,
eth1: &GanacheEth1Instance,
block_number: u64,
) -> Option<Hash256> {
runtime
.block_on(get_deposit_root(
&eth1.endpoint(),
&eth1.deposit_contract.address(),
block_number,
timeout(),
))
.expect("should get deposit root")
}
/// Blocking operation to get the deposit count from the `deposit_contract`.
fn blocking_deposit_count(
runtime: &mut Runtime,
eth1: &GanacheEth1Instance,
block_number: u64,
) -> Option<u64> {
runtime
.block_on(get_deposit_count(
&eth1.endpoint(),
&eth1.deposit_contract.address(),
block_number,
timeout(),
))
.expect("should get deposit count")
}
fn get_block_number(runtime: &mut Runtime, web3: &Web3<Http>) -> u64 {
runtime
.block_on(web3.eth().block_number().map(|v| v.as_u64()))
.expect("should get block number")
}
mod auto_update {
use super::*;
#[test]
fn can_auto_update() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let now = get_block_number(runtime, &web3);
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
deposit_contract_deploy_block: now,
lowest_cached_block_number: now,
follow_distance: 0,
block_cache_truncation: None,
..Config::default()
},
log,
);
// NOTE: this test is sensitive to the response speed of the external web3 server. If
// you're experiencing failures, try increasing the update_interval.
let update_interval = Duration::from_millis(2_000);
assert_eq!(
service.block_cache_len(),
0,
"should have imported no blocks"
);
assert_eq!(
service.deposit_cache_len(),
0,
"should have imported no deposits"
);
let (_exit, signal) = exit_future::signal();
runtime.executor().spawn(service.auto_update(signal));
let n = 4;
for _ in 0..n {
deposit_contract
.deposit(runtime, random_deposit_data())
.expect("should do first deposits");
}
std::thread::sleep(update_interval * 5);
assert!(
service.deposit_cache_len() >= n,
"should have imported n deposits"
);
for _ in 0..n {
deposit_contract
.deposit(runtime, random_deposit_data())
.expect("should do second deposits");
}
std::thread::sleep(update_interval * 4);
assert!(
service.block_cache_len() >= n * 2,
"should have imported all blocks"
);
assert!(
service.deposit_cache_len() >= n * 2,
"should have imported all deposits, not {}",
service.deposit_cache_len()
);
}
}
mod eth1_cache {
use super::*;
#[test]
fn simple_scenario() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
for follow_distance in 0..2 {
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let initial_block_number = get_block_number(runtime, &web3);
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
lowest_cached_block_number: initial_block_number,
follow_distance,
..Config::default()
},
log.clone(),
);
// Create some blocks and then consume them, performing the test `rounds` times.
for round in 0..2 {
let blocks = 4;
let initial = if round == 0 {
initial_block_number
} else {
service
.blocks()
.read()
.highest_block_number()
.map(|n| n + follow_distance)
.expect("should have a latest block after the first round")
};
for _ in 0..blocks {
runtime
.block_on(eth1.ganache.evm_mine())
.expect("should mine block");
}
runtime
.block_on(service.update_block_cache())
.expect("should update cache");
runtime
.block_on(service.update_block_cache())
.expect("should update cache when nothing has changed");
assert_eq!(
service
.blocks()
.read()
.highest_block_number()
.map(|n| n + follow_distance),
Some(initial + blocks),
"should update {} blocks in round {} (follow {})",
blocks,
round,
follow_distance,
);
}
}
}
/// Tests the case where we attempt to download more blocks than will fit in the cache.
#[test]
fn big_skip() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let cache_len = 4;
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
lowest_cached_block_number: get_block_number(runtime, &web3),
follow_distance: 0,
block_cache_truncation: Some(cache_len),
..Config::default()
},
log,
);
let blocks = cache_len * 2;
for _ in 0..blocks {
runtime
.block_on(eth1.ganache.evm_mine())
.expect("should mine block")
}
runtime
.block_on(service.update_block_cache())
.expect("should update cache");
assert_eq!(
service.block_cache_len(),
cache_len,
"should not grow cache beyond target"
);
}
/// Tests to ensure that the cache gets pruned when doing multiple downloads smaller than the
/// cache size.
#[test]
fn pruning() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let cache_len = 4;
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
lowest_cached_block_number: get_block_number(runtime, &web3),
follow_distance: 0,
block_cache_truncation: Some(cache_len),
..Config::default()
},
log,
);
for _ in 0..4 {
for _ in 0..cache_len / 2 {
runtime
.block_on(eth1.ganache.evm_mine())
.expect("should mine block")
}
runtime
.block_on(service.update_block_cache())
.expect("should update cache");
}
assert_eq!(
service.block_cache_len(),
cache_len,
"should not grow cache beyond target"
);
}
#[test]
fn double_update() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
let n = 16;
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
lowest_cached_block_number: get_block_number(runtime, &web3),
follow_distance: 0,
..Config::default()
},
log,
);
for _ in 0..n {
runtime
.block_on(eth1.ganache.evm_mine())
.expect("should mine block")
}
runtime
.block_on(
service
.update_block_cache()
.join(service.update_block_cache()),
)
.expect("should perform two simultaneous updates");
assert!(service.block_cache_len() >= n, "should grow the cache");
}
}
mod deposit_tree {
use super::*;
#[test]
fn updating() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
let n = 4;
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let start_block = get_block_number(runtime, &web3);
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
deposit_contract_deploy_block: start_block,
follow_distance: 0,
..Config::default()
},
log,
);
for round in 0..3 {
let deposits: Vec<_> = (0..n).into_iter().map(|_| random_deposit_data()).collect();
for deposit in &deposits {
deposit_contract
.deposit(runtime, deposit.clone())
.expect("should perform a deposit");
}
runtime
.block_on(service.update_deposit_cache())
.expect("should perform update");
runtime
.block_on(service.update_deposit_cache())
.expect("should perform update when nothing has changed");
let first = n * round;
let last = n * (round + 1);
let (_root, local_deposits) = service
.deposits()
.read()
.cache
.get_deposits(first..last, last, 32)
.expect(&format!("should get deposits in round {}", round));
assert_eq!(
local_deposits.len(),
n as usize,
"should get the right number of deposits in round {}",
round
);
assert_eq!(
local_deposits
.iter()
.map(|d| d.data.clone())
.collect::<Vec<_>>(),
deposits.to_vec(),
"obtained deposits should match those submitted in round {}",
round
);
}
}
#[test]
fn double_update() {
let mut env = new_env();
let log = env.core_context().log;
let runtime = env.runtime();
let n = 8;
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let start_block = get_block_number(runtime, &web3);
let service = Service::new(
Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
deposit_contract_deploy_block: start_block,
lowest_cached_block_number: start_block,
follow_distance: 0,
..Config::default()
},
log,
);
let deposits: Vec<_> = (0..n).into_iter().map(|_| random_deposit_data()).collect();
for deposit in &deposits {
deposit_contract
.deposit(runtime, deposit.clone())
.expect("should perform a deposit");
}
runtime
.block_on(
service
.update_deposit_cache()
.join(service.update_deposit_cache()),
)
.expect("should perform two updates concurrently");
assert_eq!(service.deposit_cache_len(), n);
}
#[test]
fn cache_consistency() {
let mut env = new_env();
let runtime = env.runtime();
let n = 8;
let deposits: Vec<_> = (0..n).into_iter().map(|_| random_deposit_data()).collect();
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let mut deposit_roots = vec![];
let mut deposit_counts = vec![];
// Perform deposits to the smart contract, recording its state along the way.
for deposit in &deposits {
deposit_contract
.deposit(runtime, deposit.clone())
.expect("should perform a deposit");
let block_number = get_block_number(runtime, &web3);
deposit_roots.push(
blocking_deposit_root(runtime, &eth1, block_number)
.expect("should get root if contract exists"),
);
deposit_counts.push(
blocking_deposit_count(runtime, &eth1, block_number)
.expect("should get count if contract exists"),
);
}
let mut tree = DepositCache::default();
// Pull all the deposit logs from the contract.
let block_number = get_block_number(runtime, &web3);
let logs: Vec<_> = blocking_deposit_logs(runtime, &eth1, 0..block_number)
.iter()
.map(|raw| DepositLog::from_log(raw).expect("should parse deposit log"))
.inspect(|log| {
tree.insert_log(log.clone())
.expect("should add consecutive logs")
})
.collect();
// Check the logs for invariants.
for i in 0..logs.len() {
let log = &logs[i];
assert_eq!(
log.deposit_data, deposits[i],
"log {} should have correct deposit data",
i
);
assert_eq!(log.index, i as u64, "log {} should have correct index", i);
}
// For each deposit test some more invariants
for i in 0..n {
// Ensure the deposit count from the smart contract was as expected.
assert_eq!(
deposit_counts[i],
i as u64 + 1,
"deposit count should be accurate"
);
// Ensure that the root from the deposit tree matches what the contract reported.
let (root, deposits) = tree
.get_deposits(0..i as u64, deposit_counts[i], DEPOSIT_CONTRACT_TREE_DEPTH)
.expect("should get deposits");
assert_eq!(
root, deposit_roots[i],
"tree deposit root {} should match the contract",
i
);
// Ensure that the deposits all prove into the root from the smart contract.
let deposit_root = deposit_roots[i];
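// Note: proofs are `DEPOSIT_CONTRACT_TREE_DEPTH + 1` nodes long because the
// deposit count is hashed into the final root, adding one level to each proof.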
for (j, deposit) in deposits.iter().enumerate() {
assert!(
verify_merkle_proof(
Hash256::from_slice(&deposit.data.tree_hash_root()),
&deposit.proof,
DEPOSIT_CONTRACT_TREE_DEPTH + 1,
j,
deposit_root
),
"deposit merkle proof should prove into deposit contract root"
)
}
}
}
}
/// Tests for the base HTTP requests and response handlers.
mod http {
use super::*;
fn get_block(runtime: &mut Runtime, eth1: &GanacheEth1Instance, block_number: u64) -> Block {
runtime
.block_on(eth1::http::get_block(
&eth1.endpoint(),
block_number,
timeout(),
))
.expect("should get block number")
}
#[test]
fn incrementing_deposits() {
let mut env = new_env();
let runtime = env.runtime();
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let block_number = get_block_number(runtime, &web3);
let logs = blocking_deposit_logs(runtime, &eth1, 0..block_number);
assert_eq!(logs.len(), 0);
let mut old_root = blocking_deposit_root(runtime, &eth1, block_number);
let mut old_block = get_block(runtime, &eth1, block_number);
let mut old_block_number = block_number;
assert_eq!(
blocking_deposit_count(runtime, &eth1, block_number),
Some(0),
"should have deposit count zero"
);
for i in 1..=8 {
runtime
.block_on(eth1.ganache.increase_time(1))
.expect("should be able to increase time on ganache");
deposit_contract
.deposit(runtime, random_deposit_data())
.expect("should perform a deposit");
// Check the logs.
let block_number = get_block_number(runtime, &web3);
let logs = blocking_deposit_logs(runtime, &eth1, 0..block_number);
assert_eq!(logs.len(), i, "the number of logs should be as expected");
// Check the deposit count.
assert_eq!(
blocking_deposit_count(runtime, &eth1, block_number),
Some(i as u64),
"should have a correct deposit count"
);
// Check the deposit root.
let new_root = blocking_deposit_root(runtime, &eth1, block_number);
assert_ne!(
new_root, old_root,
"deposit root should change with each deposit"
);
old_root = new_root;
// Check the block hash.
let new_block = get_block(runtime, &eth1, block_number);
assert_ne!(
new_block.hash, old_block.hash,
"block hash should change with each deposit"
);
// Check to ensure the timestamp is increasing
assert!(
old_block.timestamp <= new_block.timestamp,
"block timestamp should increase"
);
old_block = new_block.clone();
// Check the block number.
assert!(
block_number > old_block_number,
"block number should increase"
);
old_block_number = block_number;
// Check to ensure the block root is changing
assert_ne!(
new_root,
Some(new_block.hash),
"the deposit root should be different to the block hash"
);
}
}
}

View File

@ -69,7 +69,7 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
);
Ok(Behaviour {
- eth2_rpc: RPC::new(log),
+ eth2_rpc: RPC::new(log.clone()),
gossipsub: Gossipsub::new(local_peer_id.clone(), net_conf.gs_config.clone()),
discovery: Discovery::new(local_key, net_conf, log)?,
ping: Ping::new(ping_config),

View File

@ -69,8 +69,8 @@ pub struct RPC<TSubstream> {
}
impl<TSubstream> RPC<TSubstream> {
- pub fn new(log: &slog::Logger) -> Self {
- let log = log.new(o!("Service" => "Libp2p-RPC"));
+ pub fn new(log: slog::Logger) -> Self {
+ let log = log.new(o!("service" => "libp2p_rpc"));
RPC {
events: Vec::new(),
marker: PhantomData,

View File

@ -0,0 +1,28 @@
[package]
name = "genesis"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dev-dependencies]
eth1_test_rig = { path = "../../tests/eth1_test_rig" }
futures = "0.1.25"
[dependencies]
futures = "0.1.25"
types = { path = "../../eth2/types"}
environment = { path = "../../lighthouse/environment"}
eth1 = { path = "../eth1"}
rayon = "1.0"
state_processing = { path = "../../eth2/state_processing" }
merkle_proof = { path = "../../eth2/utils/merkle_proof" }
eth2_ssz = "0.1"
eth2_hashing = { path = "../../eth2/utils/eth2_hashing" }
tree_hash = "0.1"
tokio = "0.1.17"
parking_lot = "0.7"
slog = "^2.2.3"
exit-future = "0.1.4"
serde = "1.0"
serde_derive = "1.0"
int_to_bytes = { path = "../../eth2/utils/int_to_bytes" }

View File

@ -0,0 +1,44 @@
use int_to_bytes::int_to_bytes32;
use merkle_proof::MerkleTree;
use rayon::prelude::*;
use tree_hash::TreeHash;
use types::{ChainSpec, Deposit, DepositData, Hash256};
/// Accepts the genesis block validator `DepositData` list and produces a list of `Deposit`, with
/// proofs.
pub fn genesis_deposits(
deposit_data: Vec<DepositData>,
spec: &ChainSpec,
) -> Result<Vec<Deposit>, String> {
let deposit_root_leaves = deposit_data
.par_iter()
.map(|data| Hash256::from_slice(&data.tree_hash_root()))
.collect::<Vec<_>>();
let mut proofs = vec![];
let depth = spec.deposit_contract_tree_depth as usize;
let mut tree = MerkleTree::create(&[], depth);
for (i, deposit_leaf) in deposit_root_leaves.iter().enumerate() {
if let Err(_) = tree.push_leaf(*deposit_leaf, depth) {
return Err(String::from("Failed to push leaf"));
}
let (_, mut proof) = tree.generate_proof(i, depth);
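// Mix the deposit count (i + 1) into the proof; the deposit contract commits to
// `hash(root, count)`, so every proof carries one extra node beyond `depth`.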
proof.push(Hash256::from_slice(&int_to_bytes32((i + 1) as u64)));
assert_eq!(
proof.len(),
depth + 1,
"Deposit proof should be correct len"
);
proofs.push(proof);
}
Ok(deposit_data
.into_iter()
.zip(proofs.into_iter())
.map(|(data, proof)| (data, proof.into()))
.map(|(data, proof)| Deposit { proof, data })
.collect())
}

View File

@ -0,0 +1,379 @@
pub use crate::{common::genesis_deposits, interop::interop_genesis_state};
pub use eth1::Config as Eth1Config;
use eth1::{DepositLog, Eth1Block, Service};
use futures::{
future,
future::{loop_fn, Loop},
Future,
};
use parking_lot::Mutex;
use slog::{debug, error, info, Logger};
use state_processing::{
initialize_beacon_state_from_eth1, is_valid_genesis_state,
per_block_processing::process_deposit, process_activations,
};
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::timer::Delay;
use types::{BeaconState, ChainSpec, Deposit, Eth1Data, EthSpec, Hash256};
/// Provides a service that connects to some Eth1 HTTP JSON-RPC endpoint and maintains a cache of eth1
/// blocks and deposits, listening for the eth1 block that triggers eth2 genesis and returning the
/// genesis `BeaconState`.
///
/// Is a wrapper around the `Service` struct of the `eth1` crate.
#[derive(Clone)]
pub struct Eth1GenesisService {
/// The underlying service. Access to this object is only required for testing and diagnosis.
pub core: Service,
/// The highest block number we've processed and determined it does not trigger genesis.
highest_processed_block: Arc<Mutex<Option<u64>>>,
/// Enabled when the genesis service should start downloading blocks.
///
/// It is disabled until there are enough deposit logs to start syncing.
sync_blocks: Arc<Mutex<bool>>,
}
impl Eth1GenesisService {
/// Creates a new service. Does not attempt to connect to the Eth1 node.
pub fn new(config: Eth1Config, log: Logger) -> Self {
Self {
core: Service::new(config, log),
highest_processed_block: Arc::new(Mutex::new(None)),
sync_blocks: Arc::new(Mutex::new(false)),
}
}
fn first_viable_eth1_block(&self, min_genesis_active_validator_count: usize) -> Option<u64> {
if self.core.deposit_cache_len() < min_genesis_active_validator_count {
None
} else {
self.core
.deposits()
.read()
.cache
.get(min_genesis_active_validator_count.saturating_sub(1))
.map(|log| log.block_number)
}
}
/// Returns a future that will keep updating the cache and resolve once it has discovered the
/// first Eth1 block that triggers an Eth2 genesis.
///
/// ## Returns
///
/// - `Ok(state)` once the canonical eth2 genesis state has been discovered.
/// - `Err(e)` if there is some internal error during updates.
pub fn wait_for_genesis_state<E: EthSpec>(
&self,
update_interval: Duration,
spec: ChainSpec,
) -> impl Future<Item = BeaconState<E>, Error = String> {
let service = self.clone();
loop_fn::<(ChainSpec, Option<BeaconState<E>>), _, _, _>(
(spec, None),
move |(spec, state)| {
let service_1 = service.clone();
let service_2 = service.clone();
let service_3 = service.clone();
let service_4 = service.clone();
let log = service.core.log.clone();
let min_genesis_active_validator_count = spec.min_genesis_active_validator_count;
Delay::new(Instant::now() + update_interval)
.map_err(|e| format!("Delay between genesis deposit checks failed: {:?}", e))
.and_then(move |()| {
service_1
.core
.update_deposit_cache()
.map_err(|e| format!("{:?}", e))
})
.then(move |update_result| {
if let Err(e) = update_result {
error!(
log,
"Failed to update eth1 deposit cache";
"error" => e
)
}
// Do not exit the loop if there is an error whilst updating.
Ok(())
})
// Only enable the `sync_blocks` flag if there are enough deposits to feasibly
// trigger genesis.
//
// Note: genesis is triggered by the _active_ validator count, not just the
// deposit count, so it's possible that block downloads are started too early.
// This is just wasteful, not erroneous.
.and_then(move |()| {
let mut sync_blocks = service_2.sync_blocks.lock();
if !(*sync_blocks) {
if let Some(viable_eth1_block) = service_2.first_viable_eth1_block(
min_genesis_active_validator_count as usize,
) {
info!(
service_2.core.log,
"Minimum genesis deposit count met";
"deposit_count" => min_genesis_active_validator_count,
"block_number" => viable_eth1_block,
);
service_2.core.set_lowest_cached_block(viable_eth1_block);
*sync_blocks = true
}
}
Ok(*sync_blocks)
})
.and_then(move |should_update_block_cache| {
let maybe_update_future: Box<dyn Future<Item = _, Error = _> + Send> =
if should_update_block_cache {
Box::new(service_3.core.update_block_cache().then(
move |update_result| {
if let Err(e) = update_result {
error!(
service_3.core.log,
"Failed to update eth1 block cache";
"error" => format!("{:?}", e)
);
}
// Do not exit the loop if there is an error whilst updating.
Ok(())
},
))
} else {
Box::new(future::ok(()))
};
maybe_update_future
})
.and_then(move |()| {
if let Some(genesis_state) = service_4
.scan_new_blocks::<E>(&spec)
.map_err(|e| format!("Failed to scan for new blocks: {}", e))?
{
Ok(Loop::Break((spec, genesis_state)))
} else {
debug!(
service_4.core.log,
"No eth1 genesis block found";
"cached_blocks" => service_4.core.block_cache_len(),
"cached_deposits" => service_4.core.deposit_cache_len(),
"cache_head" => service_4.highest_known_block(),
);
Ok(Loop::Continue((spec, state)))
}
})
},
)
.map(|(_spec, state)| state)
}
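// Illustrative usage (not from the original file), assuming `MinimalEthSpec` and a
// populated `Eth1Config`:
//
// let genesis_service = Eth1GenesisService::new(eth1_config, log);
// let state_future = genesis_service
//     .wait_for_genesis_state::<MinimalEthSpec>(Duration::from_millis(500), spec);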
/// Processes any new blocks that have appeared since this function was last run.
///
/// A `highest_processed_block` value is stored in `self`. This function will find any blocks
/// in its caches that have a higher block number than `highest_processed_block` and check to
/// see if they would trigger an Eth2 genesis.
///
/// Blocks are always tested in increasing order, starting with the lowest unknown block
/// number in the cache.
///
/// ## Returns
///
/// - `Ok(Some(eth1_block))` if a previously-unprocessed block would trigger Eth2 genesis.
/// - `Ok(None)` if none of the new blocks would trigger genesis, or there were no new blocks.
/// - `Err(_)` if there was some internal error.
fn scan_new_blocks<E: EthSpec>(
&self,
spec: &ChainSpec,
) -> Result<Option<BeaconState<E>>, String> {
let genesis_trigger_eth1_block = self
.core
.blocks()
.read()
.iter()
// It's only worth scanning blocks that have timestamps _after_ genesis time. It's
// impossible for any other block to trigger genesis.
.filter(|block| block.timestamp >= spec.min_genesis_time)
// The block cache might be more recently updated than the deposit cache. Skip any
// block numbers that are not yet known to all caches.
.filter(|block| {
self.highest_known_block()
.map(|n| block.number <= n)
.unwrap_or_else(|| false)
})
.find(|block| {
let mut highest_processed_block = self.highest_processed_block.lock();
let next_new_block_number =
highest_processed_block.map(|n| n + 1).unwrap_or_else(|| 0);
if block.number < next_new_block_number {
return false;
}
self.is_valid_genesis_eth1_block::<E>(block, &spec)
.and_then(|val| {
*highest_processed_block = Some(block.number);
Ok(val)
})
.unwrap_or_else(|_| {
error!(
self.core.log,
"Failed to detect if eth1 block triggers genesis";
"eth1_block_number" => block.number,
"eth1_block_hash" => format!("{}", block.hash),
);
false
})
})
.cloned();
if let Some(eth1_block) = genesis_trigger_eth1_block {
debug!(
self.core.log,
"All genesis conditions met";
"eth1_block_height" => eth1_block.number,
);
let genesis_state = self
.genesis_from_eth1_block(eth1_block.clone(), &spec)
.map_err(|e| format!("Failed to generate valid genesis state : {}", e))?;
info!(
self.core.log,
"Deposit contract genesis complete";
"eth1_block_height" => eth1_block.number,
"validator_count" => genesis_state.validators.len(),
);
Ok(Some(genesis_state))
} else {
Ok(None)
}
}
/// Produces an eth2 genesis `BeaconState` from the given `eth1_block`.
///
/// ## Returns
///
/// - Ok(genesis_state) if all went well.
/// - Err(e) if the given `eth1_block` was not a viable block to trigger genesis or there was
/// an internal error.
fn genesis_from_eth1_block<E: EthSpec>(
&self,
eth1_block: Eth1Block,
spec: &ChainSpec,
) -> Result<BeaconState<E>, String> {
let deposit_logs = self
.core
.deposits()
.read()
.cache
.iter()
.take_while(|log| log.block_number <= eth1_block.number)
.map(|log| log.deposit_data.clone())
.collect::<Vec<_>>();
let genesis_state = initialize_beacon_state_from_eth1(
eth1_block.hash,
eth1_block.timestamp,
genesis_deposits(deposit_logs, &spec)?,
&spec,
)
.map_err(|e| format!("Unable to initialize genesis state: {:?}", e))?;
if is_valid_genesis_state(&genesis_state, &spec) {
Ok(genesis_state)
} else {
Err("Generated state was not valid.".to_string())
}
}
/// A cheap (compared to using `initialize_beacon_state_from_eth1`) method for determining if some
/// `target_block` will trigger genesis.
fn is_valid_genesis_eth1_block<E: EthSpec>(
&self,
target_block: &Eth1Block,
spec: &ChainSpec,
) -> Result<bool, String> {
if target_block.timestamp < spec.min_genesis_time {
Ok(false)
} else {
let mut local_state: BeaconState<E> = BeaconState::new(
0,
Eth1Data {
block_hash: Hash256::zero(),
deposit_root: Hash256::zero(),
deposit_count: 0,
},
&spec,
);
local_state.genesis_time = target_block.timestamp;
self.deposit_logs_at_block(target_block.number)
.iter()
// TODO: add the signature field back.
//.filter(|deposit_log| deposit_log.signature_is_valid)
.map(|deposit_log| Deposit {
proof: vec![Hash256::zero(); spec.deposit_contract_tree_depth as usize].into(),
data: deposit_log.deposit_data.clone(),
})
.try_for_each(|deposit| {
// No need to verify proofs in order to test if some block will trigger genesis.
const PROOF_VERIFICATION: bool = false;
// Note: presently all the signatures are verified each time this function is
// run.
//
// It would be more efficient to pre-verify signatures, filter out the invalid
// ones and disable verification for `process_deposit`.
//
// This is only more efficient in scenarios where `min_genesis_time` occurs
// _before_ `min_validator_count` is met. We're unlikely to see this scenario
// in testnets (`min_genesis_time` is usually `0`) and I'm not certain it will
// happen for the real, production deposit contract.
process_deposit(&mut local_state, &deposit, spec, PROOF_VERIFICATION)
.map_err(|e| format!("Error whilst processing deposit: {:?}", e))
})?;
process_activations(&mut local_state, spec);
Ok(is_valid_genesis_state(&local_state, spec))
}
}
/// Returns the `block_number` of the highest (by block number) block in the cache.
///
/// Takes the lower block number of the deposit and block caches to ensure this number is safe.
fn highest_known_block(&self) -> Option<u64> {
let block_cache = self.core.blocks().read().highest_block_number()?;
let deposit_cache = self.core.deposits().read().last_processed_block?;
Some(std::cmp::min(block_cache, deposit_cache))
}
/// Returns all deposit logs included in `block_number` and all prior blocks.
fn deposit_logs_at_block(&self, block_number: u64) -> Vec<DepositLog> {
self.core
.deposits()
.read()
.cache
.iter()
.take_while(|log| log.block_number <= block_number)
.cloned()
.collect()
}
/// Returns the `Service` contained in `self`.
pub fn into_core_service(self) -> Service {
self.core
}
}
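
For orientation, here is a minimal sketch of how this service is driven end-to-end. It assumes only the APIs visible in this diff (`Eth1GenesisService::new`, `wait_for_genesis_state`, `Eth1Config`) plus an illustrative tokio runtime; the real wiring lives in the client builder.

```rust
// Sketch: poll the eth1 chain until the genesis conditions are met, then
// hand the resulting `BeaconState` to the rest of the client. Illustrative
// only; `log`, `spec` and `runtime` are assumed to exist in scope.
let service = Eth1GenesisService::new(Eth1Config::default(), log.clone());
let genesis_future = service.wait_for_genesis_state::<MinimalEthSpec>(
    Duration::from_millis(500), // interval between cache updates / re-checks
    spec.clone(),
);
let genesis_state = runtime.block_on(genesis_future)?;
```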

View File

@ -0,0 +1,142 @@
use crate::common::genesis_deposits;
use eth2_hashing::hash;
use rayon::prelude::*;
use ssz::Encode;
use state_processing::initialize_beacon_state_from_eth1;
use std::time::SystemTime;
use tree_hash::SignedRoot;
use types::{
BeaconState, ChainSpec, DepositData, Domain, EthSpec, Fork, Hash256, Keypair, PublicKey,
Signature,
};
/// Builds a genesis state as defined by the Eth2 interop procedure (see below).
///
/// Reference:
/// https://github.com/ethereum/eth2.0-pm/tree/6e41fcf383ebeb5125938850d8e9b4e9888389b4/interop/mocked_start
pub fn interop_genesis_state<T: EthSpec>(
keypairs: &[Keypair],
genesis_time: u64,
spec: &ChainSpec,
) -> Result<BeaconState<T>, String> {
let eth1_block_hash = Hash256::from_slice(&[0x42; 32]);
let eth1_timestamp = 2_u64.pow(40);
let amount = spec.max_effective_balance;
let withdrawal_credentials = |pubkey: &PublicKey| {
let mut credentials = hash(&pubkey.as_ssz_bytes());
credentials[0] = spec.bls_withdrawal_prefix_byte;
Hash256::from_slice(&credentials)
};
let datas = keypairs
.into_par_iter()
.map(|keypair| {
let mut data = DepositData {
withdrawal_credentials: withdrawal_credentials(&keypair.pk),
pubkey: keypair.pk.clone().into(),
amount,
signature: Signature::empty_signature().into(),
};
let domain = spec.get_domain(
spec.genesis_slot.epoch(T::slots_per_epoch()),
Domain::Deposit,
&Fork::default(),
);
data.signature = Signature::new(&data.signed_root()[..], domain, &keypair.sk).into();
data
})
.collect::<Vec<_>>();
let mut state = initialize_beacon_state_from_eth1(
eth1_block_hash,
eth1_timestamp,
genesis_deposits(datas, spec)?,
spec,
)
.map_err(|e| format!("Unable to initialize genesis state: {:?}", e))?;
state.genesis_time = genesis_time;
// Invalidate all the caches after the manual state surgery.
state.drop_all_caches();
Ok(state)
}
/// Returns the system time, rounded down to the most recent `minutes`-minute boundary.
///
/// Used for easily creating testnets.
pub fn recent_genesis_time(minutes: u64) -> u64 {
let now = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap()
.as_secs();
let secs_after_last_period = now.checked_rem(minutes * 60).unwrap_or(0);
now - secs_after_last_period
}
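
A worked example of the rounding, with an arbitrary timestamp chosen purely for illustration:

```rust
// recent_genesis_time(30) rounds `now` down to the last 30-minute boundary.
// 1_000_000_000 % (30 * 60) == 1_000, so 1_000 seconds are shaved off:
let now = 1_000_000_000u64;
assert_eq!(now - now.checked_rem(30 * 60).unwrap_or(0), 999_999_000);
```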
#[cfg(test)]
mod test {
use super::*;
use types::{test_utils::generate_deterministic_keypairs, EthSpec, MinimalEthSpec};
type TestEthSpec = MinimalEthSpec;
#[test]
fn interop_state() {
let validator_count = 16;
let genesis_time = 42;
let spec = &TestEthSpec::default_spec();
let keypairs = generate_deterministic_keypairs(validator_count);
let state = interop_genesis_state::<TestEthSpec>(&keypairs, genesis_time, spec)
.expect("should build state");
assert_eq!(
state.eth1_data.block_hash,
Hash256::from_slice(&[0x42; 32]),
"eth1 block hash should be co-ordinated junk"
);
assert_eq!(
state.genesis_time, genesis_time,
"genesis time should be as specified"
);
for b in &state.balances {
assert_eq!(
*b, spec.max_effective_balance,
"validator balances should be max effective balance"
);
}
for v in &state.validators {
let creds = v.withdrawal_credentials.as_bytes();
assert_eq!(
creds[0], spec.bls_withdrawal_prefix_byte,
"first byte of withdrawal creds should be bls prefix"
);
assert_eq!(
&creds[1..],
&hash(&v.pubkey.as_ssz_bytes())[1..],
"rest of withdrawal creds should be pubkey hash"
)
}
assert_eq!(
state.balances.len(),
validator_count,
"validator balances len should be correct"
);
assert_eq!(
state.validators.len(),
validator_count,
"validator count should be correct"
);
}
}

View File

@ -0,0 +1,31 @@
mod common;
mod eth1_genesis_service;
mod interop;
pub use eth1::Config as Eth1Config;
pub use eth1_genesis_service::Eth1GenesisService;
pub use interop::{interop_genesis_state, recent_genesis_time};
pub use types::test_utils::generate_deterministic_keypairs;
use ssz::Decode;
use std::fs::File;
use std::io::prelude::*;
use std::path::PathBuf;
use types::{BeaconState, EthSpec};
/// Load a `BeaconState` from the given `path`. The file should contain raw SSZ bytes (i.e., no
/// ASCII encoding or schema).
pub fn state_from_ssz_file<E: EthSpec>(path: PathBuf) -> Result<BeaconState<E>, String> {
File::open(path.clone())
.map_err(move |e| format!("Unable to open SSZ genesis state file {:?}: {:?}", path, e))
.and_then(|mut file| {
let mut bytes = vec![];
file.read_to_end(&mut bytes)
.map_err(|e| format!("Failed to read SSZ file: {:?}", e))?;
Ok(bytes)
})
.and_then(|bytes| {
BeaconState::from_ssz_bytes(&bytes)
.map_err(|e| format!("Unable to parse SSZ genesis state file: {:?}", e))
})
}
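
A usage sketch, assuming a genesis state has already been written as raw SSZ (the path here is hypothetical):

```rust
use std::path::PathBuf;
use types::MinimalEthSpec;

// Load a `BeaconState` previously dumped as raw SSZ bytes.
let state = state_from_ssz_file::<MinimalEthSpec>(PathBuf::from("/tmp/genesis.ssz"))
    .expect("file should exist and decode under the chosen spec");
```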

View File

@ -0,0 +1,105 @@
//! NOTE: These tests will not pass unless ganache-cli is running on `ENDPOINT` (see below).
//!
//! You can start a suitable instance using the `ganache_test_node.sh` script in the `scripts`
//! dir in the root of the `lighthouse` repo.
#![cfg(test)]
use environment::{Environment, EnvironmentBuilder};
use eth1_test_rig::{DelayThenDeposit, GanacheEth1Instance};
use futures::Future;
use genesis::{Eth1Config, Eth1GenesisService};
use state_processing::is_valid_genesis_state;
use std::time::Duration;
use types::{test_utils::generate_deterministic_keypair, Hash256, MinimalEthSpec};
pub fn new_env() -> Environment<MinimalEthSpec> {
EnvironmentBuilder::minimal()
.single_thread_tokio_runtime()
.expect("should start tokio runtime")
.null_logger()
.expect("should start null logger")
.build()
.expect("should build env")
}
#[test]
fn basic() {
let mut env = new_env();
let log = env.core_context().log;
let mut spec = env.eth2_config().spec.clone();
let runtime = env.runtime();
let eth1 = runtime
.block_on(GanacheEth1Instance::new())
.expect("should start eth1 environment");
let deposit_contract = &eth1.deposit_contract;
let web3 = eth1.web3();
let now = runtime
.block_on(web3.eth().block_number().map(|v| v.as_u64()))
.expect("should get block number");
let service = Eth1GenesisService::new(
Eth1Config {
endpoint: eth1.endpoint(),
deposit_contract_address: deposit_contract.address(),
deposit_contract_deploy_block: now,
lowest_cached_block_number: now,
follow_distance: 0,
block_cache_truncation: None,
..Eth1Config::default()
},
log,
);
// NOTE: this test is sensitive to the response speed of the external web3 server. If
// you're experiencing failures, try increasing the update_interval.
let update_interval = Duration::from_millis(500);
spec.min_genesis_time = 0;
spec.min_genesis_active_validator_count = 8;
let deposits = (0..spec.min_genesis_active_validator_count + 2)
.into_iter()
.map(|i| {
deposit_contract.deposit_helper::<MinimalEthSpec>(
generate_deterministic_keypair(i as usize),
Hash256::from_low_u64_le(i),
32_000_000_000,
)
})
.map(|deposit| DelayThenDeposit {
delay: Duration::from_secs(0),
deposit,
})
.collect::<Vec<_>>();
let deposit_future = deposit_contract.deposit_multiple(deposits.clone());
let wait_future =
service.wait_for_genesis_state::<MinimalEthSpec>(update_interval, spec.clone());
let state = runtime
.block_on(deposit_future.join(wait_future))
.map(|(_, state)| state)
.expect("should finish waiting for genesis");
// Note: using ganache these deposits are 1-per-block, therefore we know there should only be
// the minimum number of validators.
assert_eq!(
state.validators.len(),
spec.min_genesis_active_validator_count as usize,
"should have expected validator count"
);
assert!(state.genesis_time > 0, "should have some genesis time");
assert!(
is_valid_genesis_state(&state, &spec),
"should be valid genesis state"
);
}

View File

@ -51,7 +51,7 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
executor: &tokio::runtime::TaskExecutor,
log: slog::Logger,
) -> error::Result<mpsc::UnboundedSender<HandlerMessage>> {
- let message_handler_log = log.new(o!("Service"=> "Message Handler"));
+ let message_handler_log = log.new(o!("service"=> "msg_handler"));
trace!(message_handler_log, "Service starting");
let (handler_send, handler_recv) = mpsc::unbounded_channel();

View File

@ -10,7 +10,7 @@ use eth2_libp2p::{PubsubMessage, RPCEvent};
use futures::prelude::*;
use futures::Stream;
use parking_lot::Mutex;
- use slog::{debug, info, o, trace};
+ use slog::{debug, info, trace};
use std::sync::Arc;
use tokio::runtime::TaskExecutor;
use tokio::sync::{mpsc, oneshot};
@ -29,15 +29,18 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
beacon_chain: Arc<BeaconChain<T>>,
config: &NetworkConfig,
executor: &TaskExecutor,
- log: slog::Logger,
+ network_log: slog::Logger,
) -> error::Result<(Arc<Self>, mpsc::UnboundedSender<NetworkMessage>)> {
// build the network channel
let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage>();
// launch message handler thread
- let message_handler_send =
- MessageHandler::spawn(beacon_chain, network_send.clone(), executor, log.clone())?;
- let network_log = log.new(o!("Service" => "Network"));
+ let message_handler_send = MessageHandler::spawn(
+ beacon_chain,
+ network_send.clone(),
+ executor,
+ network_log.clone(),
+ )?;
// launch libp2p service
let libp2p_service = Arc::new(Mutex::new(LibP2PService::new(
config.clone(),

View File

@ -75,7 +75,7 @@ impl<T: BeaconChainTypes> MessageProcessor<T> {
network_send: mpsc::UnboundedSender<NetworkMessage>,
log: &slog::Logger,
) -> Self {
- let sync_logger = log.new(o!("Service"=> "Sync"));
+ let sync_logger = log.new(o!("service"=> "sync"));
let sync_network_context = NetworkContext::new(network_send.clone(), sync_logger.clone());
// spawn the sync thread

View File

@ -26,7 +26,8 @@ use hyper::rt::Future;
use hyper::service::Service;
use hyper::{Body, Method, Request, Response, Server};
use parking_lot::RwLock;
- use slog::{info, o, warn};
+ use slog::{info, warn};
+ use std::net::SocketAddr;
use std::ops::Deref;
use std::path::PathBuf;
use std::sync::Arc;
@ -35,7 +36,7 @@ use tokio::sync::mpsc;
use url_query::UrlQuery;
pub use beacon::{BlockResponse, HeadResponse, StateResponse};
- pub use config::Config as ApiConfig;
+ pub use config::Config;
type BoxFut = Box<dyn Future<Item = Response<Body>, Error = ApiError> + Send>;
@ -196,16 +197,14 @@ impl<T: BeaconChainTypes> Service for ApiService<T> {
}
pub fn start_server<T: BeaconChainTypes>(
- config: &ApiConfig,
+ config: &Config,
executor: &TaskExecutor,
beacon_chain: Arc<BeaconChain<T>>,
network_info: NetworkInfo<T>,
db_path: PathBuf,
eth2_config: Eth2Config,
- log: &slog::Logger,
- ) -> Result<exit_future::Signal, hyper::Error> {
- let log = log.new(o!("Service" => "Api"));
+ log: slog::Logger,
+ ) -> Result<(exit_future::Signal, SocketAddr), hyper::Error> {
// build a channel to kill the HTTP server
let (exit_signal, exit) = exit_future::signal();
@ -237,8 +236,11 @@ pub fn start_server<T: BeaconChainTypes>(
};
let log_clone = log.clone();
- let server = Server::bind(&bind_addr)
- .serve(service)
+ let server = Server::bind(&bind_addr).serve(service);
+ let actual_listen_addr = server.local_addr();
+ let server_future = server
.with_graceful_shutdown(server_exit)
.map_err(move |e| {
warn!(
@ -248,15 +250,15 @@ pub fn start_server<T: BeaconChainTypes>(
});
info!(
log,
"REST API started";
- "address" => format!("{}", config.listen_address),
- "port" => config.port,
+ "address" => format!("{}", actual_listen_addr.ip()),
+ "port" => actual_listen_addr.port(),
);
- executor.spawn(server);
- Ok(exit_signal)
+ executor.spawn(server_future);
+ Ok((exit_signal, actual_listen_addr))
}
#[derive(Clone)]

View File

@ -16,13 +16,23 @@ use std::sync::Arc;
use tokio::sync::mpsc;
use types::{Attestation, Slot};
- #[derive(Clone)]
pub struct AttestationServiceInstance<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
pub log: slog::Logger,
}
+ // NOTE: Deriving Clone puts bogus bounds on T, so we implement it manually.
+ impl<T: BeaconChainTypes> Clone for AttestationServiceInstance<T> {
+ fn clone(&self) -> Self {
+ Self {
+ chain: self.chain.clone(),
+ network_chan: self.network_chan.clone(),
+ log: self.log.clone(),
+ }
+ }
+ }
impl<T: BeaconChainTypes> AttestationService for AttestationServiceInstance<T> {
/// Produce the `AttestationData` for signing by a validator.
fn produce_attestation_data(

View File

@ -16,13 +16,23 @@ use std::sync::Arc;
use tokio::sync::mpsc;
use types::{BeaconBlock, Signature, Slot};
- #[derive(Clone)]
pub struct BeaconBlockServiceInstance<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
pub log: Logger,
}
+ // NOTE: Deriving Clone puts bogus bounds on T, so we implement it manually.
+ impl<T: BeaconChainTypes> Clone for BeaconBlockServiceInstance<T> {
+ fn clone(&self) -> Self {
+ Self {
+ chain: self.chain.clone(),
+ network_chan: self.network_chan.clone(),
+ log: self.log.clone(),
+ }
+ }
+ }
impl<T: BeaconChainTypes> BeaconBlockService for BeaconBlockServiceInstance<T> {
/// Produce a `BeaconBlock` for signing by a validator.
fn produce_beacon_block(

View File

@ -6,12 +6,21 @@ use protos::services_grpc::BeaconNodeService;
use slog::{trace, warn};
use std::sync::Arc;
- #[derive(Clone)]
pub struct BeaconNodeServiceInstance<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
pub log: slog::Logger,
}
+ // NOTE: Deriving Clone puts bogus bounds on T, so we implement it manually.
+ impl<T: BeaconChainTypes> Clone for BeaconNodeServiceInstance<T> {
+ fn clone(&self) -> Self {
+ Self {
+ chain: self.chain.clone(),
+ log: self.log.clone(),
+ }
+ }
+ }
impl<T: BeaconChainTypes> BeaconNodeService for BeaconNodeServiceInstance<T> {
/// Provides basic node information.
fn info(&mut self, ctx: RpcContext, _req: Empty, sink: UnarySink<NodeInfoResponse>) {

View File

@ -9,7 +9,7 @@ use self::beacon_block::BeaconBlockServiceInstance;
use self::beacon_node::BeaconNodeServiceInstance;
use self::validator::ValidatorServiceInstance;
use beacon_chain::{BeaconChain, BeaconChainTypes};
- pub use config::Config as RPCConfig;
+ pub use config::Config;
use futures::Future;
use grpcio::{Environment, ServerBuilder};
use network::NetworkMessage;
@ -17,19 +17,18 @@ use protos::services_grpc::{
create_attestation_service, create_beacon_block_service, create_beacon_node_service,
create_validator_service,
};
- use slog::{info, o, warn};
+ use slog::{info, warn};
use std::sync::Arc;
use tokio::runtime::TaskExecutor;
use tokio::sync::mpsc;
- pub fn start_server<T: BeaconChainTypes + Clone + 'static>(
- config: &RPCConfig,
+ pub fn start_server<T: BeaconChainTypes>(
+ config: &Config,
executor: &TaskExecutor,
network_chan: mpsc::UnboundedSender<NetworkMessage>,
beacon_chain: Arc<BeaconChain<T>>,
- log: &slog::Logger,
+ log: slog::Logger,
) -> exit_future::Signal {
- let log = log.new(o!("Service"=>"RPC"));
let env = Arc::new(Environment::new(1));
// build a channel to kill the rpc server

View File

@ -9,12 +9,21 @@ use ssz::Decode;
use std::sync::Arc;
use types::{Epoch, EthSpec, RelativeEpoch};
- #[derive(Clone)]
pub struct ValidatorServiceInstance<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>,
pub log: slog::Logger,
}
+ // NOTE: Deriving Clone puts bogus bounds on T, so we implement it manually.
+ impl<T: BeaconChainTypes> Clone for ValidatorServiceInstance<T> {
+ fn clone(&self) -> Self {
+ Self {
+ chain: self.chain.clone(),
+ log: self.log.clone(),
+ }
+ }
+ }
impl<T: BeaconChainTypes> ValidatorService for ValidatorServiceInstance<T> {
/// For a list of validator public keys, this function returns the slot at which each
/// validator must propose a block, attest to a shard, their shard committee and the shard they

View File

@ -1,22 +1,9 @@
- mod config;
- mod run;
use clap::{App, Arg, SubCommand};
- use config::get_configs;
- use env_logger::{Builder, Env};
- use slog::{crit, o, warn, Drain, Level};
- pub const DEFAULT_DATA_DIR: &str = ".lighthouse";
- pub const CLIENT_CONFIG_FILENAME: &str = "beacon-node.toml";
- pub const ETH2_CONFIG_FILENAME: &str = "eth2-spec.toml";
- pub const TESTNET_CONFIG_FILENAME: &str = "testnet.toml";
- fn main() {
- // debugging output for libp2p and external crates
- Builder::from_env(Env::default()).init();
- let matches = App::new("Lighthouse")
- .version(version::version().as_str())
+ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
+ App::new("Beacon Node")
+ .visible_aliases(&["b", "bn", "beacon", "beacon_node"])
+ .version(crate_version!())
.author("Sigma Prime <contact@sigmaprime.io>")
.about("Eth 2.0 Client")
/*
@ -30,13 +17,6 @@ fn main() {
.takes_value(true)
.global(true)
)
- .arg(
- Arg::with_name("logfile")
- .long("logfile")
- .value_name("FILE")
- .help("File path where output will be written.")
- .takes_value(true),
- )
.arg(
Arg::with_name("network-dir")
.long("network-dir")
@ -197,35 +177,44 @@ fn main() {
* Eth1 Integration
*/
.arg(
- Arg::with_name("eth1-server")
- .long("eth1-server")
- .value_name("SERVER")
+ Arg::with_name("dummy-eth1")
+ .long("dummy-eth1")
+ .help("If present, uses an eth1 backend that generates static dummy data.\
+ Identical to the method used at the 2019 Canada interop.")
+ )
+ .arg(
+ Arg::with_name("eth1-endpoint")
+ .long("eth1-endpoint")
+ .value_name("HTTP-ENDPOINT")
.help("Specifies the server for a web3 connection to the Eth1 chain.")
.takes_value(true)
+ .default_value("http://localhost:8545")
)
- /*
- * Database parameters.
- */
.arg(
- Arg::with_name("db")
- .long("db")
- .value_name("DB")
- .help("Type of database to use.")
+ Arg::with_name("eth1-follow")
+ .long("eth1-follow")
+ .value_name("BLOCK_COUNT")
+ .help("Specifies how many blocks we should cache behind the eth1 head. A larger number means a smaller cache.")
.takes_value(true)
- .possible_values(&["disk", "memory"])
- .default_value("disk"),
+ // TODO: set this higher once we're not using testnets all the time.
+ .default_value("0")
)
- /*
- * Logging.
- */
.arg(
- Arg::with_name("debug-level")
- .long("debug-level")
- .value_name("LEVEL")
- .help("The title of the spec constants for chain config.")
+ Arg::with_name("deposit-contract")
+ .long("deposit-contract")
+ .short("e")
+ .value_name("DEPOSIT-CONTRACT")
+ .help("Specifies the deposit contract address on the Eth1 chain.")
.takes_value(true)
- .possible_values(&["info", "debug", "trace", "warn", "error", "crit"])
- .default_value("trace"),
+ )
+ .arg(
+ Arg::with_name("deposit-contract-deploy")
+ .long("deposit-contract-deploy")
+ .value_name("BLOCK_NUMBER")
+ .help("Specifies the block number that the deposit contract was deployed at.")
+ .takes_value(true)
+ // TODO: set this higher once we're not using testnets all the time.
+ .default_value("0")
)
/*
* The "testnet" sub-command.
@ -234,17 +223,6 @@ fn main() {
*/
.subcommand(SubCommand::with_name("testnet")
.about("Create a new Lighthouse datadir using a testnet strategy.")
- .arg(
- Arg::with_name("spec")
- .short("s")
- .long("spec")
- .value_name("TITLE")
- .help("Specifies the default eth2 spec type. Only effective when creating a new datadir.")
- .takes_value(true)
- .required(true)
- .possible_values(&["mainnet", "minimal", "interop"])
- .default_value("minimal")
- )
.arg(
Arg::with_name("eth2-config")
.long("eth2-config")
@ -347,68 +325,25 @@ fn main() {
* Start a new node, using a genesis state loaded from a YAML file
*/
.subcommand(SubCommand::with_name("file")
- .about("Creates a new datadir where the genesis state is read from YAML. May fail to parse \
+ .about("Creates a new datadir where the genesis state is read from file. May fail to parse \
a file that was generated to a different spec than that specified by --spec.")
.arg(Arg::with_name("format")
.value_name("FORMAT")
.required(true)
- .possible_values(&["yaml", "ssz", "json"])
+ .possible_values(&["ssz"])
.help("The encoding of the state in the file."))
.arg(Arg::with_name("file")
- .value_name("YAML_FILE")
+ .value_name("FILE")
.required(true)
- .help("A YAML file from which to read the state"))
+ .help("A file from which to read the state"))
+ )
+ /*
+ * `prysm`
+ *
+ * Connect to the Prysmatic Labs testnet.
+ */
+ .subcommand(SubCommand::with_name("prysm")
+ .about("Connect to the Prysmatic Labs testnet on Goerli.")
)
)
- .get_matches();
- // build the initial logger
- let decorator = slog_term::TermDecorator::new().build();
- let decorator = logging::AlignedTermDecorator::new(decorator, logging::MAX_MESSAGE_WIDTH);
- let drain = slog_term::FullFormat::new(decorator).build().fuse();
- let drain = slog_async::Async::new(drain).build();
- let drain = match matches.value_of("debug-level") {
- Some("info") => drain.filter_level(Level::Info),
- Some("debug") => drain.filter_level(Level::Debug),
- Some("trace") => drain.filter_level(Level::Trace),
- Some("warn") => drain.filter_level(Level::Warning),
- Some("error") => drain.filter_level(Level::Error),
- Some("crit") => drain.filter_level(Level::Critical),
- _ => unreachable!("guarded by clap"),
- };
- let log = slog::Logger::root(drain.fuse(), o!());
- if std::mem::size_of::<usize>() != 8 {
- crit!(
- log,
- "Lighthouse only supports 64bit CPUs";
- "detected" => format!("{}bit", std::mem::size_of::<usize>() * 8)
- );
- }
- warn!(
- log,
- "Ethereum 2.0 is pre-release. This software is experimental."
- );
- let log_clone = log.clone();
- // Load the process-wide configuration.
- //
- // May load this from disk or create a new configuration, depending on the CLI flags supplied.
- let (client_config, eth2_config, log) = match get_configs(&matches, log) {
- Ok(configs) => configs,
- Err(e) => {
- crit!(log_clone, "Failed to load configuration. Exiting"; "error" => e);
- return;
- }
- };
- // Start the node using a `tokio` executor.
- match run::run_beacon_node(client_config, eth2_config, &log) {
- Ok(_) => {}
- Err(e) => crit!(log, "Beacon node failed to start"; "reason" => format!("{:}", e)),
- }
}
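
The net effect of this diff is that `beacon_node` no longer owns `main`; it exposes `cli_app()` for a top-level binary to mount. A hedged sketch of that wiring (the parent `lighthouse` binary is implied by this refactor but not shown here):

```rust
// Sketch: mount the beacon node App as a sub-command of a wrapper binary.
// `cli_app()` is the function defined above; everything else is illustrative.
let matches = clap::App::new("Lighthouse")
    .subcommand(cli_app())
    .get_matches();
if matches.subcommand_matches("Beacon Node").is_some() {
    // ...hand off to `ProductionBeaconNode::new_from_cli(...)` (see lib.rs below).
}
```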

View File

@ -1,12 +1,14 @@
use clap::ArgMatches;
- use client::{BeaconChainStartMethod, ClientConfig, Eth1BackendMethod, Eth2Config};
+ use client::{ClientConfig, ClientGenesis, Eth2Config};
use eth2_config::{read_from_file, write_to_file};
+ use genesis::recent_genesis_time;
use lighthouse_bootstrap::Bootstrapper;
use rand::{distributions::Alphanumeric, Rng};
use slog::{crit, info, warn, Logger};
use std::fs;
use std::net::Ipv4Addr;
use std::path::{Path, PathBuf};
+ use types::{Address, Epoch, Fork};
pub const DEFAULT_DATA_DIR: &str = ".lighthouse";
pub const CLIENT_CONFIG_FILENAME: &str = "beacon-node.toml";
@ -27,12 +29,33 @@ pub fn get_configs(cli_args: &ArgMatches, core_log: Logger) -> Result<Config> {
let mut builder = ConfigBuilder::new(cli_args, core_log)?;
- if let Some(server) = cli_args.value_of("eth1-server") {
- builder.set_eth1_backend_method(Eth1BackendMethod::Web3 {
- server: server.into(),
- })
- } else {
- builder.set_eth1_backend_method(Eth1BackendMethod::Interop)
- }
+ if cli_args.is_present("dummy-eth1") {
+ builder.client_config.dummy_eth1_backend = true;
+ }
+ if let Some(val) = cli_args.value_of("eth1-endpoint") {
+ builder.set_eth1_endpoint(val)
+ }
+ if let Some(val) = cli_args.value_of("deposit-contract") {
+ builder.set_deposit_contract(
+ val.parse::<Address>()
+ .map_err(|e| format!("Unable to parse deposit-contract address: {:?}", e))?,
+ )
+ }
+ if let Some(val) = cli_args.value_of("deposit-contract-deploy") {
+ builder.set_deposit_contract_deploy_block(
+ val.parse::<u64>()
+ .map_err(|e| format!("Unable to parse deposit-contract-deploy: {:?}", e))?,
+ )
+ }
+ if let Some(val) = cli_args.value_of("eth1-follow") {
+ builder.set_eth1_follow(
+ val.parse::<u64>()
+ .map_err(|e| format!("Unable to parse follow distance: {:?}", e))?,
+ )
+ }
match cli_args.subcommand() {
@ -49,7 +72,7 @@ pub fn get_configs(cli_args: &ArgMatches, core_log: Logger) -> Result<Config> {
// If no primary subcommand was given, start the beacon chain from an existing
// database.
- builder.set_beacon_chain_start_method(BeaconChainStartMethod::Resume);
+ builder.set_genesis(ClientGenesis::Resume);
// Whilst there is no large testnet or mainnet force the user to specify how they want
// to start a new chain (e.g., from a genesis YAML file, another node, etc).
@ -142,7 +165,7 @@ fn process_testnet_subcommand(
builder.import_bootstrap_enr_address(server)?;
builder.import_bootstrap_eth2_config(server)?;
- builder.set_beacon_chain_start_method(BeaconChainStartMethod::HttpBootstrap {
+ builder.set_genesis(ClientGenesis::RemoteNode {
server: server.to_string(),
port,
})
@ -160,9 +183,11 @@ fn process_testnet_subcommand(
.parse::<u64>()
.map_err(|e| format!("Unable to parse minutes: {:?}", e))?;
- builder.set_beacon_chain_start_method(BeaconChainStartMethod::RecentGenesis {
+ builder.client_config.dummy_eth1_backend = true;
+ builder.set_genesis(ClientGenesis::Interop {
validator_count,
- minutes,
+ genesis_time: recent_genesis_time(minutes),
})
}
("quick", Some(cli_args)) => {
@ -178,13 +203,15 @@ fn process_testnet_subcommand(
.parse::<u64>()
.map_err(|e| format!("Unable to parse genesis time: {:?}", e))?;
- builder.set_beacon_chain_start_method(BeaconChainStartMethod::Generated {
+ builder.client_config.dummy_eth1_backend = true;
+ builder.set_genesis(ClientGenesis::Interop {
validator_count,
genesis_time,
})
}
("file", Some(cli_args)) => {
- let file = cli_args
+ let path = cli_args
.value_of("file")
.ok_or_else(|| "No filename specified")?
.parse::<PathBuf>()
@ -195,13 +222,34 @@ fn process_testnet_subcommand(
.ok_or_else(|| "No file format specified")?;
let start_method = match format {
- "yaml" => BeaconChainStartMethod::Yaml { file },
- "ssz" => BeaconChainStartMethod::Ssz { file },
- "json" => BeaconChainStartMethod::Json { file },
+ "ssz" => ClientGenesis::SszFile { path },
other => return Err(format!("Unknown genesis file format: {}", other)),
};
- builder.set_beacon_chain_start_method(start_method)
+ builder.set_genesis(start_method)
+ }
+ ("prysm", Some(_)) => {
+ let mut spec = &mut builder.eth2_config.spec;
+ let mut client_config = &mut builder.client_config;
+ spec.min_deposit_amount = 100;
+ spec.max_effective_balance = 3_200_000_000;
+ spec.ejection_balance = 1_600_000_000;
+ spec.effective_balance_increment = 100_000_000;
+ spec.min_genesis_time = 0;
+ spec.genesis_fork = Fork {
+ previous_version: [0; 4],
+ current_version: [0, 0, 0, 2],
+ epoch: Epoch::new(0),
+ };
+ client_config.eth1.deposit_contract_address =
+ "0x802dF6aAaCe28B2EEb1656bb18dF430dDC42cc2e".to_string();
+ client_config.eth1.deposit_contract_deploy_block = 1487270;
+ client_config.eth1.follow_distance = 16;
+ client_config.dummy_eth1_backend = false;
+ builder.set_genesis(ClientGenesis::DepositContract)
}
(cmd, Some(_)) => {
return Err(format!(
@ -220,8 +268,8 @@ fn process_testnet_subcommand(
/// Allows for building a set of configurations based upon `clap` arguments.
struct ConfigBuilder {
log: Logger,
- eth2_config: Eth2Config,
- client_config: ClientConfig,
+ pub eth2_config: Eth2Config,
+ pub client_config: ClientConfig,
}
impl ConfigBuilder {
@ -294,14 +342,24 @@ impl ConfigBuilder {
Ok(())
}
- /// Sets the method for starting the beacon chain.
- pub fn set_beacon_chain_start_method(&mut self, method: BeaconChainStartMethod) {
- self.client_config.beacon_chain_start_method = method;
- }
- /// Sets the method for starting the beacon chain.
- pub fn set_eth1_backend_method(&mut self, method: Eth1BackendMethod) {
- self.client_config.eth1_backend_method = method;
- }
+ pub fn set_eth1_endpoint(&mut self, endpoint: &str) {
+ self.client_config.eth1.endpoint = endpoint.to_string();
+ }
+ pub fn set_deposit_contract(&mut self, deposit_contract: Address) {
+ self.client_config.eth1.deposit_contract_address = format!("{:?}", deposit_contract);
+ }
+ pub fn set_deposit_contract_deploy_block(&mut self, eth1_block_number: u64) {
+ self.client_config.eth1.deposit_contract_deploy_block = eth1_block_number;
+ }
+ pub fn set_eth1_follow(&mut self, distance: u64) {
+ self.client_config.eth1.follow_distance = distance;
+ }
+ pub fn set_genesis(&mut self, method: ClientGenesis) {
+ self.client_config.genesis = method;
+ }
/// Import the libp2p address for `server` into the list of libp2p nodes to connect with.
@ -540,7 +598,6 @@ impl ConfigBuilder {
/// The supplied `cli_args` should be the base-level `clap` cli_args (i.e., not a subcommand
/// cli_args).
pub fn build(mut self, cli_args: &ArgMatches) -> Result<Config> {
- self.eth2_config.apply_cli_args(cli_args)?;
self.client_config.apply_cli_args(cli_args, &mut self.log)?;
if let Some(bump) = cli_args.value_of("port-bump") {
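
Condensed, the new eth1 flag plumbing maps CLI arguments straight onto `ClientConfig::eth1`. A sketch of the resulting configuration for a typical invocation (values illustrative):

```rust
// `$ lighthouse bn --eth1-endpoint http://localhost:8545 \
//       --deposit-contract 0x802d... --deposit-contract-deploy 1487270 --eth1-follow 16`
// ends up, via the builder setters above, as:
// client_config.eth1.endpoint                      = "http://localhost:8545"
// client_config.eth1.deposit_contract_address      = "0x802d..."
// client_config.eth1.deposit_contract_deploy_block = 1487270
// client_config.eth1.follow_distance               = 16
```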

beacon_node/src/lib.rs (new file, 153 lines)
View File

@ -0,0 +1,153 @@
#[macro_use]
extern crate clap;
mod cli;
mod config;
pub use beacon_chain;
pub use cli::cli_app;
pub use client::{Client, ClientBuilder, ClientConfig, ClientGenesis};
pub use eth2_config::Eth2Config;
use beacon_chain::{
builder::Witness, eth1_chain::CachingEth1Backend, events::WebSocketSender,
lmd_ghost::ThreadSafeReducedTree, slot_clock::SystemTimeSlotClock,
};
use clap::ArgMatches;
use config::get_configs;
use environment::RuntimeContext;
use futures::{Future, IntoFuture};
use slog::{info, warn};
use std::ops::{Deref, DerefMut};
use store::DiskStore;
use types::EthSpec;
/// A type-alias to tighten the definition of a production-intended `Client`.
pub type ProductionClient<E> = Client<
Witness<
DiskStore,
SystemTimeSlotClock,
ThreadSafeReducedTree<DiskStore, E>,
CachingEth1Backend<E, DiskStore>,
E,
WebSocketSender<E>,
>,
>;
/// The beacon node `Client` that will be used in production.
///
/// Generic over some `EthSpec`.
///
/// ## Notes:
///
/// Despite being titled `Production...`, this code is not ready for production. The name
/// demonstrates an intention, not a promise.
pub struct ProductionBeaconNode<E: EthSpec>(ProductionClient<E>);
impl<E: EthSpec> ProductionBeaconNode<E> {
/// Starts a new beacon node `Client` in the given `environment`.
///
/// Identical to `start_from_client_config`, however the `client_config` is generated from the
/// given `matches` and potentially configuration files on the local filesystem or other
/// configurations hosted remotely.
pub fn new_from_cli<'a, 'b>(
mut context: RuntimeContext<E>,
matches: &ArgMatches<'b>,
) -> impl Future<Item = Self, Error = String> + 'a {
let log = context.log.clone();
// TODO: the eth2 config in the env is being completely ignored.
//
// See https://github.com/sigp/lighthouse/issues/602
get_configs(&matches, log).into_future().and_then(
move |(client_config, eth2_config, _log)| {
context.eth2_config = eth2_config;
Self::new(context, client_config)
},
)
}
/// Starts a new beacon node `Client` in the given `environment`.
///
/// Client behaviour is defined by the given `client_config`.
pub fn new(
context: RuntimeContext<E>,
client_config: ClientConfig,
) -> impl Future<Item = Self, Error = String> {
let http_eth2_config = context.eth2_config().clone();
let spec = context.eth2_config().spec.clone();
let genesis_eth1_config = client_config.eth1.clone();
let client_genesis = client_config.genesis.clone();
let log = context.log.clone();
client_config
.db_path()
.ok_or_else(|| "Unable to access database path".to_string())
.into_future()
.and_then(move |db_path| {
Ok(ClientBuilder::new(context.eth_spec_instance.clone())
.runtime_context(context)
.disk_store(&db_path)?
.chain_spec(spec))
})
.and_then(move |builder| {
builder.beacon_chain_builder(client_genesis, genesis_eth1_config)
})
.and_then(move |builder| {
let builder = if client_config.sync_eth1_chain && !client_config.dummy_eth1_backend
{
info!(
log,
"Block production enabled";
"endpoint" => &client_config.eth1.endpoint,
"method" => "json rpc via http"
);
builder.caching_eth1_backend(client_config.eth1.clone())?
} else if client_config.dummy_eth1_backend {
warn!(
log,
"Block production impaired";
"reason" => "dummy eth1 backend is enabled"
);
builder.dummy_eth1_backend()?
} else {
info!(
log,
"Block production disabled";
"reason" => "no eth1 backend configured"
);
builder.no_eth1_backend()?
};
let builder = builder
.system_time_slot_clock()?
.websocket_event_handler(client_config.websocket_server.clone())?
.build_beacon_chain()?
.libp2p_network(&client_config.network)?
.http_server(&client_config, &http_eth2_config)?
.grpc_server(&client_config.rpc)?
.peer_count_notifier()?
.slot_notifier()?;
Ok(Self(builder.build()))
})
}
pub fn into_inner(self) -> ProductionClient<E> {
self.0
}
}
impl<E: EthSpec> Deref for ProductionBeaconNode<E> {
type Target = ProductionClient<E>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<E: EthSpec> DerefMut for ProductionBeaconNode<E> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
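
A minimal sketch of driving this from an `Environment` (the `Environment`/`RuntimeContext` API is the one exercised by the tests elsewhere in this diff; the default config is an assumption, real callers derive it from the CLI):

```rust
// Sketch: build a ProductionBeaconNode inside an existing environment.
fn start_node<E: EthSpec>(
    env: &mut environment::Environment<E>,
) -> Result<ProductionBeaconNode<E>, String> {
    let context = env.core_context();
    // Assumed: ClientConfig implements Default.
    let client_config = ClientConfig::default();
    env.runtime()
        .block_on(ProductionBeaconNode::new(context, client_config))
}
```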

View File

@ -1,138 +0,0 @@
use client::{error, notifier, Client, ClientConfig, Eth1BackendMethod, Eth2Config};
use futures::sync::oneshot;
use futures::Future;
use slog::{error, info};
use std::cell::RefCell;
use std::path::Path;
use std::path::PathBuf;
use store::Store;
use store::{DiskStore, MemoryStore};
use tokio::runtime::Builder;
use tokio::runtime::Runtime;
use tokio::runtime::TaskExecutor;
use tokio_timer::clock::Clock;
use types::{EthSpec, InteropEthSpec, MainnetEthSpec, MinimalEthSpec};
/// Reads the configuration and initializes a `BeaconChain` with the required types and parameters.
///
/// Spawns an executor which performs syncing, networking, block production, etc.
///
/// Blocks the current thread, returning after the `BeaconChain` has exited or a `Ctrl+C`
/// signal.
pub fn run_beacon_node(
client_config: ClientConfig,
eth2_config: Eth2Config,
log: &slog::Logger,
) -> error::Result<()> {
let runtime = Builder::new()
.name_prefix("main-")
.clock(Clock::system())
.build()
.map_err(|e| format!("{:?}", e))?;
let executor = runtime.executor();
let db_path: PathBuf = client_config
.db_path()
.ok_or_else::<error::Error, _>(|| "Unable to access database path".into())?;
let db_type = &client_config.db_type;
let spec_constants = eth2_config.spec_constants.clone();
let other_client_config = client_config.clone();
info!(
log,
"Starting beacon node";
"p2p_listen_address" => format!("{}", &other_client_config.network.listen_address),
"db_type" => &other_client_config.db_type,
"spec_constants" => &spec_constants,
);
macro_rules! run_client {
($store: ty, $eth_spec: ty) => {
run::<$store, $eth_spec>(&db_path, client_config, eth2_config, executor, runtime, log)
};
}
if let Eth1BackendMethod::Web3 { .. } = client_config.eth1_backend_method {
return Err("Starting from web3 backend is not supported for interop.".into());
}
match (db_type.as_str(), spec_constants.as_str()) {
("disk", "minimal") => run_client!(DiskStore, MinimalEthSpec),
("disk", "mainnet") => run_client!(DiskStore, MainnetEthSpec),
("disk", "interop") => run_client!(DiskStore, InteropEthSpec),
("memory", "minimal") => run_client!(MemoryStore, MinimalEthSpec),
("memory", "mainnet") => run_client!(MemoryStore, MainnetEthSpec),
("memory", "interop") => run_client!(MemoryStore, InteropEthSpec),
(db_type, spec) => {
error!(log, "Unknown runtime configuration"; "spec_constants" => spec, "db_type" => db_type);
Err("Unknown specification and/or db_type.".into())
}
}
}
/// Performs the type-generic parts of launching a `BeaconChain`.
fn run<S, E>(
db_path: &Path,
client_config: ClientConfig,
eth2_config: Eth2Config,
executor: TaskExecutor,
mut runtime: Runtime,
log: &slog::Logger,
) -> error::Result<()>
where
S: Store + Clone + 'static + OpenDatabase,
E: EthSpec,
{
let store = S::open_database(&db_path)?;
let client: Client<S, E> =
Client::new(client_config, eth2_config, store, log.clone(), &executor)?;
// run service until ctrl-c
let (ctrlc_send, ctrlc_oneshot) = oneshot::channel();
let ctrlc_send_c = RefCell::new(Some(ctrlc_send));
ctrlc::set_handler(move || {
if let Some(ctrlc_send) = ctrlc_send_c.try_borrow_mut().unwrap().take() {
ctrlc_send.send(()).expect("Error sending ctrl-c message");
}
})
.map_err(|e| format!("Could not set ctrlc handler: {:?}", e))?;
let (exit_signal, exit) = exit_future::signal();
notifier::run(&client, executor, exit);
runtime
.block_on(ctrlc_oneshot)
.map_err(|e| format!("Ctrlc oneshot failed: {:?}", e))?;
// perform global shutdown operations.
info!(log, "Shutting down..");
exit_signal.fire();
// shutdown the client
// client.exit_signal.fire();
drop(client);
runtime.shutdown_on_idle().wait().unwrap();
Ok(())
}
/// A convenience trait, providing a method to open a database.
///
/// Panics if unable to open the database.
pub trait OpenDatabase: Sized {
fn open_database(path: &Path) -> error::Result<Self>;
}
impl OpenDatabase for MemoryStore {
fn open_database(_path: &Path) -> error::Result<Self> {
Ok(MemoryStore::open())
}
}
impl OpenDatabase for DiskStore {
fn open_database(path: &Path) -> error::Result<Self> {
DiskStore::open(path).map_err(|e| format!("Unable to open database: {:?}", e).into())
}
}

beacon_node/tests/test.rs (new file, 40 lines)
View File

@ -0,0 +1,40 @@
#![cfg(test)]
use node_test_rig::{environment::EnvironmentBuilder, LocalBeaconNode};
use types::{MinimalEthSpec, Slot};
fn env_builder() -> EnvironmentBuilder<MinimalEthSpec> {
EnvironmentBuilder::minimal()
}
#[test]
fn http_server_genesis_state() {
let mut env = env_builder()
.null_logger()
.expect("should build env logger")
.multi_threaded_tokio_runtime()
.expect("should start tokio runtime")
.build()
.expect("environment should build");
let node = LocalBeaconNode::production(env.core_context());
let remote_node = node.remote_node().expect("should produce remote node");
let (api_state, _root) = env
.runtime()
.block_on(remote_node.http.beacon().state_at_slot(Slot::new(0)))
.expect("should fetch state from http api");
let mut db_state = node
.client
.beacon_chain()
.expect("client should have beacon chain")
.state_at_slot(Slot::new(0))
.expect("should find state");
db_state.drop_all_caches();
assert_eq!(
api_state, db_state,
"genesis state from api should match that from the DB"
);
}

View File

@ -7,7 +7,6 @@ edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
- beacon_chain = { path = "../beacon_chain" }
clap = "2.33.0"
exit-future = "0.1.4"
futures = "0.1.29"

View File

@ -1,7 +1,7 @@
- use beacon_chain::events::{EventHandler, EventKind};
use futures::Future;
use slog::{debug, error, info, warn, Logger};
use std::marker::PhantomData;
+ use std::net::SocketAddr;
use std::thread;
use tokio::runtime::TaskExecutor;
use types::EthSpec;
@ -36,31 +36,30 @@ impl<T: EthSpec> WebSocketSender<T> {
}
}
- impl<T: EthSpec> EventHandler<T> for WebSocketSender<T> {
- fn register(&self, kind: EventKind<T>) -> Result<(), String> {
- self.send_string(
- serde_json::to_string(&kind)
- .map_err(|e| format!("Unable to serialize event: {:?}", e))?,
- )
- }
- }
pub fn start_server<T: EthSpec>(
config: &Config,
executor: &TaskExecutor,
log: &Logger,
- ) -> Result<(WebSocketSender<T>, exit_future::Signal), String> {
+ ) -> Result<(WebSocketSender<T>, exit_future::Signal, SocketAddr), String> {
let server_string = format!("{}:{}", config.listen_address, config.port);
- info!(
- log,
- "Websocket server starting";
- "listen_address" => &server_string
- );
// Create a server that simply ignores any incoming messages.
let server = WebSocket::new(|_| |_| Ok(()))
- .map_err(|e| format!("Failed to initialize websocket server: {:?}", e))?;
+ .map_err(|e| format!("Failed to initialize websocket server: {:?}", e))?
+ .bind(server_string.clone())
+ .map_err(|e| {
+ format!(
+ "Failed to bind websocket server to {}: {:?}",
+ server_string, e
+ )
+ })?;
+ let actual_listen_addr = server.local_addr().map_err(|e| {
+ format!(
+ "Failed to read listening addr from websocket server: {:?}",
+ e
+ )
+ })?;
let broadcaster = server.broadcaster();
@ -91,7 +90,7 @@ pub fn start_server<T: EthSpec>(
};
let log_inner = log.clone();
- let _handle = thread::spawn(move || match server.listen(server_string) {
+ let _handle = thread::spawn(move || match server.run() {
Ok(_) => {
debug!(
log_inner,
@ -107,11 +106,19 @@ pub fn start_server<T: EthSpec>(
}
});
+ info!(
+ log,
+ "WebSocket server started";
+ "address" => format!("{}", actual_listen_addr.ip()),
+ "port" => actual_listen_addr.port(),
+ );
Ok((
WebSocketSender {
sender: Some(broadcaster),
_phantom: PhantomData,
},
exit_signal,
+ actual_listen_addr,
))
}

View File

@ -1,24 +1,32 @@
# Command-Line Interface (CLI)
- Lighthouse a collection of CLI applications. The two primary binaries are:
+ The `lighthouse` binary provides all necessary Ethereum 2.0 functionality. It
+ has two primary sub-commands:
- - `beacon_node`: the largest and most fundamental component which connects to
+ - `$ lighthouse beacon_node`: the largest and most fundamental component which connects to
the p2p network, processes messages and tracks the head of the beacon
chain.
- - `validator_client`: a lightweight but important component which loads a validator's private
+ - `$ lighthouse validator_client`: a lightweight but important component which loads a validator's private
key and signs messages using a `beacon_node` as a source-of-truth.
- There are also some ancillary binaries:
- - `account_manager`: generates cryptographic keys.
- - `lcli`: a general-purpose utility for troubleshooting Lighthouse state
-   transitions (developer tool).
+ There are also some ancillary binaries like `lcli` and `account_manager`, but
+ these are primarily for testing.
+ > **Note:** documentation sometimes uses `$ lighthouse bn` and `$ lighthouse
+ > vc` instead of the long-form `beacon_node` and `validator_client`. These
+ > commands are valid on the CLI too.
## Installation
- Presently, we recommend building Lighthouse using the `$ cargo build --release
- --all` command and executing binaries from the
- `<lighthouse-repository>/target/release` directory.
+ Typical users may install `lighthouse` to `CARGO_HOME` with `cargo install
+ --path lighthouse` from the root of the repository. See ["Configuring the
+ `PATH` environment variable"](https://www.rust-lang.org/tools/install) for more
+ information.
+ For developers, we recommend building Lighthouse using the `$ cargo build --release
+ --bin lighthouse` command and executing binaries from the
+ `<lighthouse-repository>/target/release` directory. This is more ergonomic when
+ modifying and rebuilding regularly.
## Documentation
@ -27,36 +35,29 @@ documentation.
```bash
- $ ./beacon_node --help
+ $ lighthouse beacon_node --help
```
```bash
- $ ./validator_client --help
+ $ lighthouse validator_client --help
```
- ```bash
- $ ./account_manager --help
- ```
- ```bash
- $ ./lcli --help
- ```
## Beacon Node
- The `beacon_node` CLI has two primary tasks:
+ The `$ lighthouse beacon_node` (or `$ lighthouse bn`) command has two primary
+ tasks:
- - **Resuming** an existing database with `$ ./beacon_node`.
- - **Creating** a new testnet database using `$ ./beacon_node testnet`.
+ - **Resuming** an existing database with `$ lighthouse bn`.
+ - **Creating** a new testnet database using `$ lighthouse bn testnet`.
## Creating a new database
- Use the `$ ./beacon_node testnet` command (see [testnets](./testnets.md) for more
- information).
+ Use the `$ lighthouse bn testnet` command (see [testnets](./testnets.md) for
+ more information).
## Resuming from an existing database
- Once a database has been created, it can be resumed by running `$ ./beacon_node`.
- Presently, this command will fail if no existing database is found. You must
- use the `$ ./beacon_node testnet` command to create a new database.
+ Once a database has been created, it can be resumed by running `$ lighthouse bn`.
+ Presently, you are not allowed to call `$ lighthouse bn` unless you have first
+ created a database using `$ lighthouse bn testnet`.

View File

@ -19,6 +19,18 @@
> `target/release` directory.
> - First-time compilation may take several minutes.
+ ### Installing to `PATH`
+ Use `cargo install --path lighthouse` from the root of the repository to
+ install the compiled binary to `CARGO_HOME` or `$HOME/.cargo`. If this
+ directory is on your `PATH`, you can run `$ lighthouse ..` from anywhere.
+ See ["Configuring the `PATH` environment
+ variable" (rust-lang.org)](https://www.rust-lang.org/tools/install) for more information.
+ > If you _don't_ install `lighthouse` to the path, you'll need to run the
+ > binaries directly from the `target` directory or using `cargo run ...`.
### Windows
Perl may also be required to build Lighthouse. You can install [Strawberry

View File

@ -3,9 +3,9 @@
With a functional [development environment](./setup.md), starting a local multi-node
testnet is easy:
- 1. Start the first node: `$ ./beacon_node testnet -f recent 8`
- 1. Start a validator client: `$ ./validator_client testnet -b insecure 0 8`
- 1. Start more nodes with `$ ./beacon_node -b 10 testnet -f bootstrap
+ 1. Start the first node: `$ lighthouse bn testnet -f recent 8`
+ 1. Start a validator client: `$ lighthouse vc testnet -b insecure 0 8`
+ 1. Start more nodes with `$ lighthouse bn -b 10 testnet -f bootstrap
http://localhost:5052`
- Increment the `-b` value by `10` for each additional node.
@ -16,10 +16,10 @@ First, setup a Lighthouse development environment and navigate to the
## Starting a beacon node
- Start a new node (creating a fresh database and configuration in `~/.lighthouse`), using:
+ Start a new node (creating a fresh database and configuration in `$HOME/.lighthouse`), using:
```bash
- $ ./beacon_node testnet -f recent 8
+ $ lighthouse bn testnet -f recent 8
```
> Notes:
@ -27,7 +27,7 @@ $ ./beacon_node testnet -f recent 8
> - The `-f` flag ignores any existing database or configuration, backing them
>   up before re-initializing.
> - `8` is number of validators with deposits in the genesis state.
- > - See `$ ./beacon_node testnet recent --help` for more configuration options,
+ > - See `$ lighthouse bn testnet recent --help` for more configuration options,
>   including `minimal`/`mainnet` specification.
## Starting a validator client
@ -35,7 +35,7 @@ $ ./beacon_node testnet -f recent 8
In a new terminal window, start the validator client with:
```bash
- $ ./validator_client testnet -b insecure 0 8
+ $ lighthouse vc testnet -b insecure 0 8
```
> Notes:
@ -58,7 +58,7 @@ In a new terminal window, run:
```bash
- $ ./beacon_node -b 10 testnet -r bootstrap
+ $ lighthouse bn -b 10 testnet -r bootstrap
```
> Notes:
@ -70,4 +70,4 @@ $ ./beacon_node -b 10 testnet -r bootstrap
>   (avoids data directory collisions between nodes).
> - The default bootstrap HTTP address is `http://localhost:5052`. The new node
>   will download configuration via HTTP before starting sync via libp2p.
- > - See `$ ./beacon_node testnet bootstrap --help` for more configuration.
+ > - See `$ lighthouse bn testnet bootstrap --help` for more configuration.

View File

@@ -1,16 +1,16 @@
# Testnets

-The Lighthouse CLI has a `testnet` sub-command to allow creating or connecting
-to Eth2 beacon chain testnets.
+The `beacon_node` and `validator` commands have a `testnet` sub-command to
+allow creating or connecting to Eth2 beacon chain testnets.

For detailed documentation, use the `--help` flag on the CLI:

```bash
-$ ./beacon_node testnet --help
+$ lighthouse bn testnet --help
```

```bash
-$ ./validator_client testnet --help
+$ lighthouse vc testnet --help
```

## Examples
@@ -25,7 +25,7 @@ commands are based in the `target/release` directory (this is the build dir for
To start a brand-new beacon node (with no history) use:

```bash
-$ ./beacon_node testnet -f quick 8 <GENESIS_TIME>
+$ lighthouse bn testnet -f quick 8 <GENESIS_TIME>
```

Where `GENESIS_TIME` is in [unix time](https://duckduckgo.com/?q=unix+time&t=ffab&ia=answer).
@@ -38,7 +38,7 @@ method in the `ethereum/eth2.0-pm` repository.
> - The `-f` flag ignores any existing database or configuration, backing them
> up before re-initializing.
> - `8` is the validator count and `1567222226` is the genesis time.
-> - See `$ ./beacon_node testnet quick --help` for more configuration options.
+> - See `$ lighthouse bn testnet quick --help` for more configuration options.

### Start a beacon node given a genesis state file
@@ -52,14 +52,14 @@ There are three supported formats:
Start a new node using `/tmp/genesis.ssz` as the genesis state:

```bash
-$ ./beacon_node testnet --spec minimal -f file ssz /tmp/genesis.ssz
+$ lighthouse bn testnet --spec minimal -f file ssz /tmp/genesis.ssz
```

> Notes:
>
> - The `-f` flag ignores any existing database or configuration, backing them
> up before re-initializing.
-> - See `$ ./beacon_node testnet file --help` for more configuration options.
+> - See `$ lighthouse bn testnet file --help` for more configuration options.
> - The `--spec` flag is required to allow SSZ parsing of fixed-length lists.
> Here the `minimal` eth2 specification is chosen, allowing for lower
> validator counts. See
@@ -71,7 +71,7 @@ $ ./beacon_node testnet --spec minimal -f file ssz /tmp/genesis.ssz
To start a brand-new validator client (with no history) use:

```bash
-$ ./validator_client testnet -b insecure 0 8
+$ lighthouse vc testnet -b insecure 0 8
```

> Notes:
@@ -113,7 +113,7 @@ the `--libp2p-addresses` command.
#### Example:

```bash
-$ ./beacon_node --libp2p-addresses /ip4/192.168.0.1/tcp/9000
+$ lighthouse bn --libp2p-addresses /ip4/192.168.0.1/tcp/9000
```

### Specify a boot node by ENR (Ethereum Name Record)
@@ -124,7 +124,7 @@ the `--boot-nodes` command.
#### Example:

```bash
-$ ./beacon_node --boot-nodes -IW4QB2Hi8TPuEzQ41Cdf1r2AUU1FFVFDBJdJyOkWk2qXpZfFZQy2YnJIyoT_5fnbtrXUouoskmydZl4pIg90clIkYUDgmlwhH8AAAGDdGNwgiMog3VkcIIjKIlzZWNwMjU2azGhAjg0-DsTkQynhJCRnLLttBK1RS78lmUkLa-wgzAi-Ob5
+$ lighthouse bn --boot-nodes -IW4QB2Hi8TPuEzQ41Cdf1r2AUU1FFVFDBJdJyOkWk2qXpZfFZQy2YnJIyoT_5fnbtrXUouoskmydZl4pIg90clIkYUDgmlwhH8AAAGDdGNwgiMog3VkcIIjKIlzZWNwMjU2azGhAjg0-DsTkQynhJCRnLLttBK1RS78lmUkLa-wgzAi-Ob5
```

### Avoid port clashes when starting nodes
@@ -138,7 +138,7 @@ ports by some `n`.
Increase all ports by `10` (using multiples of `10` is recommended).

```bash
-$ ./beacon_node -b 10
+$ lighthouse bn -b 10
```

### Start a testnet with a custom slot time
@@ -151,7 +151,7 @@ Lighthouse can run at quite low slot times when there are few validators (e.g.,
The `-t` (`--slot-time`) flag specifies the milliseconds per slot.

```bash
-$ ./beacon_node testnet -t 500 recent 8
+$ lighthouse bn testnet -t 500 recent 8
```

> Note: `bootstrap` loads the slot time via HTTP and therefore conflicts with
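Taken together, the renamed commands above give a three-terminal local testnet. A minimal walkthrough sketch (assuming the release build of the new `lighthouse` binary is on `PATH`):

```bash
# Terminal 1: fresh beacon node with 8 genesis validators.
lighthouse bn testnet -f recent 8

# Terminal 2: validator client driving the insecure test keys 0..8.
lighthouse vc testnet -b insecure 0 8

# Terminal 3: a second node, ports offset by 10, bootstrapped over HTTP.
lighthouse bn -b 10 testnet -r bootstrap
```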

View File

@@ -5,7 +5,7 @@ extern crate lazy_static;
use beacon_chain::test_utils::{
    generate_deterministic_keypairs, AttestationStrategy,
-    BeaconChainHarness as BaseBeaconChainHarness, BlockStrategy,
+    BeaconChainHarness as BaseBeaconChainHarness, BlockStrategy, HarnessType,
};
use lmd_ghost::{LmdGhost, ThreadSafeReducedTree as BaseThreadSafeReducedTree};
use rand::{prelude::*, rngs::StdRng};
@@ -21,7 +21,7 @@ pub const VALIDATOR_COUNT: usize = 3 * 8;
type TestEthSpec = MinimalEthSpec;
type ThreadSafeReducedTree = BaseThreadSafeReducedTree<MemoryStore, TestEthSpec>;
-type BeaconChainHarness = BaseBeaconChainHarness<ThreadSafeReducedTree, TestEthSpec>;
+type BeaconChainHarness = BaseBeaconChainHarness<HarnessType<TestEthSpec>>;
type RootAndSlot = (Hash256, Slot);

lazy_static! {
@@ -52,7 +52,10 @@ struct ForkedHarness {
impl ForkedHarness {
    /// A new standard instance of with constant parameters.
    pub fn new() -> Self {
-        let harness = BeaconChainHarness::new(generate_deterministic_keypairs(VALIDATOR_COUNT));
+        let harness = BeaconChainHarness::new(
+            MinimalEthSpec,
+            generate_deterministic_keypairs(VALIDATOR_COUNT),
+        );

        // Move past the zero slot.
        harness.advance_slot();

View File

@@ -598,7 +598,7 @@ mod tests {
        let mut state = BeaconState::random_for_test(rng);

-        state.fork = Fork::genesis(MainnetEthSpec::genesis_epoch());
+        state.fork = Fork::default();

        (spec, state)
    }

View File

@@ -35,18 +35,7 @@ pub fn initialize_beacon_state_from_eth1<T: EthSpec>(
        process_deposit(&mut state, &deposit, spec, true)?;
    }

-    // Process activations
-    for (index, validator) in state.validators.iter_mut().enumerate() {
-        let balance = state.balances[index];
-        validator.effective_balance = std::cmp::min(
-            balance - balance % spec.effective_balance_increment,
-            spec.max_effective_balance,
-        );
-        if validator.effective_balance == spec.max_effective_balance {
-            validator.activation_eligibility_epoch = T::genesis_epoch();
-            validator.activation_epoch = T::genesis_epoch();
-        }
-    }
+    process_activations(&mut state, spec);

    // Now that we have our validators, initialize the caches (including the committees)
    state.build_all_caches(spec)?;
@@ -71,3 +60,20 @@ pub fn is_valid_genesis_state<T: EthSpec>(state: &BeaconState<T>, spec: &ChainSp
        && state.get_active_validator_indices(T::genesis_epoch()).len() as u64
            >= spec.min_genesis_active_validator_count
}
+
+/// Activate genesis validators, if their balance is acceptable.
+///
+/// Spec v0.8.0
+pub fn process_activations<T: EthSpec>(state: &mut BeaconState<T>, spec: &ChainSpec) {
+    for (index, validator) in state.validators.iter_mut().enumerate() {
+        let balance = state.balances[index];
+        validator.effective_balance = std::cmp::min(
+            balance - balance % spec.effective_balance_increment,
+            spec.max_effective_balance,
+        );
+        if validator.effective_balance == spec.max_effective_balance {
+            validator.activation_eligibility_epoch = T::genesis_epoch();
+            validator.activation_epoch = T::genesis_epoch();
+        }
+    }
+}
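The effective-balance rounding in `process_activations` is easy to sanity-check in isolation. A standalone sketch (not part of the diff) using illustrative Gwei values for the increment and maximum:

```rust
use std::cmp::min;

// Mirror of the rule in `process_activations`, with the spec constants
// passed in explicitly (values below are illustrative Gwei amounts).
fn effective_balance(balance: u64, increment: u64, max: u64) -> u64 {
    min(balance - balance % increment, max)
}

fn main() {
    let (increment, max) = (1_000_000_000, 32_000_000_000);
    // 32.5 ETH rounds down to the 32 ETH maximum: eligible for activation.
    assert_eq!(effective_balance(32_500_000_000, increment, max), max);
    // 17.3 ETH rounds down to 17 ETH: below the maximum, not activated at genesis.
    assert_eq!(effective_balance(17_300_000_000, increment, max), 17_000_000_000);
}
```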

View File

@@ -8,7 +8,7 @@ pub mod per_epoch_processing;
pub mod per_slot_processing;
pub mod test_utils;

-pub use genesis::{initialize_beacon_state_from_eth1, is_valid_genesis_state};
+pub use genesis::{initialize_beacon_state_from_eth1, is_valid_genesis_state, process_activations};
pub use per_block_processing::{
    errors::BlockProcessingError, per_block_processing, BlockSignatureStrategy, VerifySignatures,
};

View File

@@ -444,7 +444,7 @@ pub fn process_deposit<T: EthSpec>(
    } else {
        // The signature should be checked for new validators. Return early for a bad
        // signature.
-        if verify_deposit_signature(state, deposit, spec).is_err() {
+        if verify_deposit_signature(&deposit.data, spec).is_err() {
            return Ok(());
        }

View File

@@ -7,7 +7,7 @@ use std::convert::TryInto;
use tree_hash::{SignedRoot, TreeHash};
use types::{
    AggregateSignature, AttestationDataAndCustodyBit, AttesterSlashing, BeaconBlock,
-    BeaconBlockHeader, BeaconState, BeaconStateError, ChainSpec, Deposit, Domain, EthSpec, Fork,
+    BeaconBlockHeader, BeaconState, BeaconStateError, ChainSpec, DepositData, Domain, EthSpec,
    Hash256, IndexedAttestation, ProposerSlashing, PublicKey, RelativeEpoch, Signature, Transfer,
    VoluntaryExit,
};
@@ -194,18 +194,17 @@ pub fn attester_slashing_signature_sets<'a, T: EthSpec>(
///
/// This method is separate to `deposit_signature_set` to satisfy lifetime requirements.
pub fn deposit_pubkey_signature_message(
-    deposit: &Deposit,
+    deposit_data: &DepositData,
) -> Option<(PublicKey, Signature, Vec<u8>)> {
-    let pubkey = (&deposit.data.pubkey).try_into().ok()?;
-    let signature = (&deposit.data.signature).try_into().ok()?;
-    let message = deposit.data.signed_root();
+    let pubkey = (&deposit_data.pubkey).try_into().ok()?;
+    let signature = (&deposit_data.signature).try_into().ok()?;
+    let message = deposit_data.signed_root();
    Some((pubkey, signature, message))
}

/// Returns the signature set for some set of deposit signatures, made with
/// `deposit_pubkey_signature_message`.
-pub fn deposit_signature_set<'a, T: EthSpec>(
-    state: &'a BeaconState<T>,
+pub fn deposit_signature_set<'a>(
    pubkey_signature_message: &'a (PublicKey, Signature, Vec<u8>),
    spec: &'a ChainSpec,
) -> SignatureSet<'a> {
@@ -213,9 +212,12 @@ pub fn deposit_signature_set<'a, T: EthSpec>(
    // Note: Deposits are valid across forks, thus the deposit domain is computed
    // with the fork zeroed.
-    let domain = spec.get_domain(state.current_epoch(), Domain::Deposit, &Fork::default());
-
-    SignatureSet::single(signature, pubkey, message.clone(), domain)
+    SignatureSet::single(
+        signature,
+        pubkey,
+        message.clone(),
+        spec.get_deposit_domain(),
+    )
}

/// Returns a signature set that is valid if the `VoluntaryExit` was signed by the indicated

View File

@@ -15,16 +15,12 @@ fn error(reason: DepositInvalid) -> BlockOperationError<DepositInvalid> {
/// Verify `Deposit.pubkey` signed `Deposit.signature`.
///
/// Spec v0.8.0
-pub fn verify_deposit_signature<T: EthSpec>(
-    state: &BeaconState<T>,
-    deposit: &Deposit,
-    spec: &ChainSpec,
-) -> Result<()> {
-    let deposit_signature_message = deposit_pubkey_signature_message(deposit)
+pub fn verify_deposit_signature(deposit_data: &DepositData, spec: &ChainSpec) -> Result<()> {
+    let deposit_signature_message = deposit_pubkey_signature_message(&deposit_data)
        .ok_or_else(|| error(DepositInvalid::BadBlsBytes))?;

    verify!(
-        deposit_signature_set(state, &deposit_signature_message, spec).is_valid(),
+        deposit_signature_set(&deposit_signature_message, spec).is_valid(),
        DepositInvalid::BadSignature
    );

View File

@@ -216,7 +216,7 @@ impl<T: EthSpec> BeaconState<T> {
            // Versioning
            genesis_time,
            slot: spec.genesis_slot,
-            fork: Fork::genesis(T::genesis_epoch()),
+            fork: spec.genesis_fork.clone(),

            // History
            latest_block_header: BeaconBlock::<T>::empty(spec).temporary_block_header(),

View File

@@ -91,8 +91,15 @@ pub struct ChainSpec {
    domain_voluntary_exit: u32,
    domain_transfer: u32,

+    /*
+     * Eth1
+     */
+    pub eth1_follow_distance: u64,
+
    pub boot_nodes: Vec<String>,
    pub network_id: u8,
+    pub genesis_fork: Fork,
}

impl ChainSpec {
@@ -118,6 +125,22 @@ impl ChainSpec {
        u64::from_le_bytes(fork_and_domain)
    }

+    /// Get the domain for a deposit signature.
+    ///
+    /// Deposits are valid across forks, thus the deposit domain is computed
+    /// with the fork zeroed.
+    ///
+    /// Spec v0.8.1
+    pub fn get_deposit_domain(&self) -> u64 {
+        let mut bytes: Vec<u8> = int_to_bytes4(self.domain_deposit);
+        bytes.append(&mut vec![0; 4]);
+
+        let mut fork_and_domain = [0; 8];
+        fork_and_domain.copy_from_slice(&bytes);
+
+        u64::from_le_bytes(fork_and_domain)
+    }
+
    /// Returns a `ChainSpec` compatible with the Ethereum Foundation specification.
    ///
    /// Spec v0.8.1
@@ -186,6 +209,20 @@ impl ChainSpec {
            domain_voluntary_exit: 4,
            domain_transfer: 5,

+            /*
+             * Eth1
+             */
+            eth1_follow_distance: 1_024,
+
+            /*
+             * Fork
+             */
+            genesis_fork: Fork {
+                previous_version: [0; 4],
+                current_version: [0; 4],
+                epoch: Epoch::new(0),
+            },
+
            /*
             * Network specific
             */
@@ -210,6 +247,7 @@ impl ChainSpec {
            max_epochs_per_crosslink: 4,
            network_id: 2, // lighthouse testnet network id
            boot_nodes,
+            eth1_follow_distance: 16,
            ..ChainSpec::mainnet()
        }
    }
@@ -248,7 +286,7 @@ mod tests {
    }

    fn test_domain(domain_type: Domain, raw_domain: u32, spec: &ChainSpec) {
-        let fork = Fork::genesis(Epoch::new(0));
+        let fork = &spec.genesis_fork;
        let epoch = Epoch::new(0);

        let domain = spec.get_domain(epoch, domain_type, &fork);
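The doc comment on `get_deposit_domain` states a checkable property. A sketch (not from the diff; assumes the `types` crate is in scope and `get_domain` is public) asserting that the deposit domain matches `get_domain` with a zeroed fork at any epoch:

```rust
use types::{ChainSpec, Domain, Epoch, Fork};

fn main() {
    let spec = ChainSpec::mainnet();
    let zeroed_fork = Fork::default();

    // The deposit domain never changes, whichever epoch we ask about.
    for epoch in &[0u64, 7, 100_000] {
        assert_eq!(
            spec.get_deposit_domain(),
            spec.get_domain(Epoch::new(*epoch), Domain::Deposit, &zeroed_fork)
        );
    }
}
```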

View File

@@ -7,6 +7,8 @@ use ssz_derive::{Decode, Encode};
use test_random_derive::TestRandom;
use tree_hash_derive::TreeHash;

+pub const DEPOSIT_TREE_DEPTH: usize = 32;
+
/// A deposit to potentially become a beacon chain validator.
///
/// Spec v0.8.0

View File

@@ -36,15 +36,9 @@ impl DepositData {
    /// Generate the signature for a given DepositData details.
    ///
    /// Spec v0.8.1
-    pub fn create_signature(
-        &self,
-        secret_key: &SecretKey,
-        epoch: Epoch,
-        fork: &Fork,
-        spec: &ChainSpec,
-    ) -> SignatureBytes {
+    pub fn create_signature(&self, secret_key: &SecretKey, spec: &ChainSpec) -> SignatureBytes {
        let msg = self.signed_root();
-        let domain = spec.get_domain(epoch, Domain::Deposit, fork);
+        let domain = spec.get_deposit_domain();

        SignatureBytes::from(Signature::new(msg.as_slice(), domain, secret_key))
    }
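With the `epoch` and `fork` parameters gone, signing a deposit needs only the spec. A short sketch of the new call, mirroring the `deposit_helper` usage that appears later in this diff (the `types` crate is assumed in scope):

```rust
use types::{DepositData, EthSpec, Hash256, Keypair, MinimalEthSpec, Signature};

fn main() {
    let keypair = Keypair::random();
    let mut data = DepositData {
        pubkey: keypair.pk.clone().into(),
        withdrawal_credentials: Hash256::zero(),
        amount: 32_000_000_000, // Gwei
        signature: Signature::empty_signature().into(),
    };

    // The new API: no epoch or fork required, only the chain spec.
    data.signature = data.create_signature(&keypair.sk, &MinimalEthSpec::default_spec());
}
```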

View File

@@ -10,7 +10,18 @@ use tree_hash_derive::TreeHash;
///
/// Spec v0.8.1
#[derive(
-    Debug, PartialEq, Clone, Default, Serialize, Deserialize, Encode, Decode, TreeHash, TestRandom,
+    Debug,
+    PartialEq,
+    Clone,
+    Default,
+    Eq,
+    Hash,
+    Serialize,
+    Deserialize,
+    Encode,
+    Decode,
+    TreeHash,
+    TestRandom,
)]
pub struct Eth1Data {
    pub deposit_root: Hash256,
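The newly derived `Eq` and `Hash` make `Eth1Data` usable as a hash-map key. A hypothetical sketch of the kind of vote tallying this enables (the use-case is an assumption, not code from the PR):

```rust
use std::collections::HashMap;
use types::Eth1Data;

// Count how many times each distinct eth1 vote appears; `Eq` + `Hash`
// are exactly what `HashMap` requires of its key type.
fn tally_votes(votes: &[Eth1Data]) -> HashMap<Eth1Data, u64> {
    let mut counts = HashMap::new();
    for vote in votes {
        *counts.entry(vote.clone()).or_insert(0) += 1;
    }
    counts
}
```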

View File

@@ -28,17 +28,6 @@ pub struct Fork {
}

impl Fork {
-    /// Initialize the `Fork` from the genesis parameters in the `spec`.
-    ///
-    /// Spec v0.8.1
-    pub fn genesis(genesis_epoch: Epoch) -> Self {
-        Self {
-            previous_version: [0; 4],
-            current_version: [0; 4],
-            epoch: genesis_epoch,
-        }
-    }
-
    /// Return the fork version of the given ``epoch``.
    ///
    /// Spec v0.8.1
@@ -56,24 +45,6 @@ mod tests {
    ssz_tests!(Fork);

-    fn test_genesis(epoch: Epoch) {
-        let fork = Fork::genesis(epoch);
-
-        assert_eq!(fork.epoch, epoch, "epoch incorrect");
-        assert_eq!(
-            fork.previous_version, fork.current_version,
-            "previous and current are not identical"
-        );
-    }
-
-    #[test]
-    fn genesis() {
-        test_genesis(Epoch::new(0));
-        test_genesis(Epoch::new(11));
-        test_genesis(Epoch::new(2_u64.pow(63)));
-        test_genesis(Epoch::max_value());
-    }
-
    #[test]
    fn get_fork_version() {
        let previous_version = [1; 4];

View File

@@ -58,7 +58,7 @@ pub use crate::checkpoint::Checkpoint;
pub use crate::compact_committee::CompactCommittee;
pub use crate::crosslink::Crosslink;
pub use crate::crosslink_committee::{CrosslinkCommittee, OwnedCrosslinkCommittee};
-pub use crate::deposit::Deposit;
+pub use crate::deposit::{Deposit, DEPOSIT_TREE_DEPTH};
pub use crate::deposit_data::DepositData;
pub use crate::eth1_data::Eth1Data;
pub use crate::fork::Fork;

View File

@@ -294,13 +294,7 @@ impl<T: EthSpec> TestingBeaconBlockBuilder<T> {
            let keypair = Keypair::random();

            let mut builder = TestingDepositBuilder::new(keypair.pk.clone(), amount);
-            builder.sign(
-                &test_task,
-                &keypair,
-                state.slot.epoch(T::slots_per_epoch()),
-                &state.fork,
-                spec,
-            );
+            builder.sign(&test_task, &keypair, spec);

            datas.push(builder.build().data);
        }

View File

@@ -30,14 +30,7 @@ impl TestingDepositBuilder {
    /// - `pubkey` to the signing pubkey.
    /// - `withdrawal_credentials` to the signing pubkey.
    /// - `proof_of_possession`
-    pub fn sign(
-        &mut self,
-        test_task: &DepositTestTask,
-        keypair: &Keypair,
-        epoch: Epoch,
-        fork: &Fork,
-        spec: &ChainSpec,
-    ) {
+    pub fn sign(&mut self, test_task: &DepositTestTask, keypair: &Keypair, spec: &ChainSpec) {
        let new_key = Keypair::random();
        let mut pubkeybytes = PublicKeyBytes::from(keypair.pk.clone());
        let mut secret_key = keypair.sk.clone();
@@ -61,10 +54,7 @@ impl TestingDepositBuilder {
        // Building the data and signing it
        self.deposit.data.pubkey = pubkeybytes;
        self.deposit.data.withdrawal_credentials = withdrawal_credentials;
-        self.deposit.data.signature = self
-            .deposit
-            .data
-            .create_signature(&secret_key, epoch, fork, spec);
+        self.deposit.data.signature = self.deposit.data.create_signature(&secret_key, spec);
    }

    /// Builds the deposit, consuming the builder.

View File

@@ -5,7 +5,6 @@ authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"

[dependencies]
-clap = "2.33.0"
serde = "1.0.102"
serde_derive = "1.0.102"
toml = "0.5.4"

View File

@@ -1,9 +1,7 @@
-use clap::ArgMatches;
use serde_derive::{Deserialize, Serialize};
use std::fs::File;
use std::io::prelude::*;
use std::path::PathBuf;
-use std::time::SystemTime;
use types::ChainSpec;

/// The core configuration of a Lighthouse beacon node.
@@ -46,33 +44,6 @@ impl Eth2Config {
    }
}

-impl Eth2Config {
-    /// Apply the following arguments to `self`, replacing values if they are specified in `args`.
-    ///
-    /// Returns an error if arguments are obviously invalid. May succeed even if some values are
-    /// invalid.
-    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), &'static str> {
-        if args.is_present("recent-genesis") {
-            self.spec.min_genesis_time = recent_genesis_time()
-        }
-
-        Ok(())
-    }
-}
-
-/// Returns the system time, mod 30 minutes.
-///
-/// Used for easily creating testnets.
-fn recent_genesis_time() -> u64 {
-    let now = SystemTime::now()
-        .duration_since(SystemTime::UNIX_EPOCH)
-        .unwrap()
-        .as_secs();
-    let secs_after_last_period = now.checked_rem(30 * 60).unwrap_or(0);
-    // genesis is now the last 30 minute block.
-    now - secs_after_last_period
-}
-
/// Write a configuration to file.
pub fn write_to_file<T>(path: PathBuf, config: &T) -> Result<(), String>
where
@@ -111,3 +82,15 @@ where
        Ok(None)
    }
}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use toml;
+
+    #[test]
+    fn serde_serialize() {
+        let _ =
+            toml::to_string(&Eth2Config::default()).expect("Should serde encode default config");
+    }
+}

View File

@@ -0,0 +1,14 @@
[package]
name = "remote_beacon_node"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
reqwest = "0.9"
url = "1.2"
serde = "1.0"
futures = "0.1.25"
types = { path = "../../../eth2/types" }

View File

@@ -0,0 +1,141 @@
//! Provides a `RemoteBeaconNode` which interacts with an HTTP API on another Lighthouse (or
//! compatible) instance.
//!
//! Presently, this is only used for testing but it _could_ become a user-facing library.
use futures::{Future, IntoFuture};
use reqwest::r#async::{Client, RequestBuilder};
use serde::Deserialize;
use std::marker::PhantomData;
use std::net::SocketAddr;
use types::{BeaconBlock, BeaconState, EthSpec};
use types::{Hash256, Slot};
use url::Url;
/// Connects to a remote Lighthouse (or compatible) node via HTTP.
pub struct RemoteBeaconNode<E: EthSpec> {
pub http: HttpClient<E>,
}
impl<E: EthSpec> RemoteBeaconNode<E> {
pub fn new(http_endpoint: SocketAddr) -> Result<Self, String> {
Ok(Self {
http: HttpClient::new(format!("http://{}", http_endpoint.to_string()))
.map_err(|e| format!("Unable to create http client: {:?}", e))?,
})
}
}
#[derive(Debug)]
pub enum Error {
UrlParseError(url::ParseError),
ReqwestError(reqwest::Error),
}
#[derive(Clone)]
pub struct HttpClient<E> {
client: Client,
url: Url,
_phantom: PhantomData<E>,
}
impl<E: EthSpec> HttpClient<E> {
/// Creates a new instance (without connecting to the node).
pub fn new(server_url: String) -> Result<Self, Error> {
Ok(Self {
client: Client::new(),
url: Url::parse(&server_url)?,
_phantom: PhantomData,
})
}
pub fn beacon(&self) -> Beacon<E> {
Beacon(self.clone())
}
fn url(&self, path: &str) -> Result<Url, Error> {
self.url.join(path).map_err(|e| e.into())
}
pub fn get(&self, path: &str) -> Result<RequestBuilder, Error> {
self.url(path)
.map(|url| Client::new().get(&url.to_string()))
}
}
/// Provides the functions on the `/beacon` endpoint of the node.
#[derive(Clone)]
pub struct Beacon<E>(HttpClient<E>);
impl<E: EthSpec> Beacon<E> {
fn url(&self, path: &str) -> Result<Url, Error> {
self.0
.url("beacon/")
.and_then(move |url| url.join(path).map_err(Error::from))
.map_err(Into::into)
}
/// Returns the block and block root at the given slot.
pub fn block_at_slot(
&self,
slot: Slot,
) -> impl Future<Item = (BeaconBlock<E>, Hash256), Error = Error> {
let client = self.0.clone();
self.url("block")
.into_future()
.and_then(move |mut url| {
url.query_pairs_mut()
.append_pair("slot", &format!("{}", slot.as_u64()));
client.get(&url.to_string())
})
.and_then(|builder| builder.send().map_err(Error::from))
.and_then(|response| response.error_for_status().map_err(Error::from))
.and_then(|mut success| success.json::<BlockResponse<E>>().map_err(Error::from))
.map(|response| (response.beacon_block, response.root))
}
/// Returns the state and state root at the given slot.
pub fn state_at_slot(
&self,
slot: Slot,
) -> impl Future<Item = (BeaconState<E>, Hash256), Error = Error> {
let client = self.0.clone();
self.url("state")
.into_future()
.and_then(move |mut url| {
url.query_pairs_mut()
.append_pair("slot", &format!("{}", slot.as_u64()));
client.get(&url.to_string())
})
.and_then(|builder| builder.send().map_err(Error::from))
.and_then(|response| response.error_for_status().map_err(Error::from))
.and_then(|mut success| success.json::<StateResponse<E>>().map_err(Error::from))
.map(|response| (response.beacon_state, response.root))
}
}
#[derive(Deserialize)]
#[serde(bound = "T: EthSpec")]
pub struct BlockResponse<T: EthSpec> {
pub beacon_block: BeaconBlock<T>,
pub root: Hash256,
}
#[derive(Deserialize)]
#[serde(bound = "T: EthSpec")]
pub struct StateResponse<T: EthSpec> {
pub beacon_state: BeaconState<T>,
pub root: Hash256,
}
impl From<reqwest::Error> for Error {
fn from(e: reqwest::Error) -> Error {
Error::ReqwestError(e)
}
}
impl From<url::ParseError> for Error {
fn from(e: url::ParseError) -> Error {
Error::UrlParseError(e)
}
}
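A hypothetical consumer of `RemoteBeaconNode` (not part of the diff; assumes a `tokio` 0.1 runtime is available as a dev-dependency and a node is serving the HTTP API on the default `5052` port):

```rust
use remote_beacon_node::RemoteBeaconNode;
use std::net::SocketAddr;
use types::{MinimalEthSpec, Slot};

fn main() -> Result<(), String> {
    let addr: SocketAddr = "127.0.0.1:5052"
        .parse()
        .map_err(|e| format!("Invalid address: {:?}", e))?;
    let node: RemoteBeaconNode<MinimalEthSpec> = RemoteBeaconNode::new(addr)?;

    // Drive the request future to completion on a throwaway runtime.
    let mut runtime =
        tokio::runtime::Runtime::new().map_err(|e| format!("Runtime error: {:?}", e))?;
    let (block, root) = runtime
        .block_on(node.http.beacon().block_at_slot(Slot::new(0)))
        .map_err(|e| format!("Request failed: {:?}", e))?;

    println!("block at slot {} has root {:?}", block.slot, root);
    Ok(())
}
```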

View File

@@ -18,3 +18,7 @@ types = { path = "../eth2/types" }
state_processing = { path = "../eth2/state_processing" }
eth2_ssz = "0.1.2"
regex = "1.3.1"
+eth1_test_rig = { path = "../tests/eth1_test_rig" }
+futures = "0.1.25"
+environment = { path = "../lighthouse/environment" }
+web3 = "0.8.0"

View File

@@ -0,0 +1,78 @@
use clap::ArgMatches;
use environment::Environment;
use eth1_test_rig::{DelayThenDeposit, DepositContract};
use futures::Future;
use std::time::Duration;
use types::{test_utils::generate_deterministic_keypair, EthSpec, Hash256};
use web3::{transports::Http, Web3};
pub fn run_deposit_contract<T: EthSpec>(
mut env: Environment<T>,
matches: &ArgMatches,
) -> Result<(), String> {
let count = matches
.value_of("count")
.ok_or_else(|| "Deposit count not specified")?
.parse::<usize>()
.map_err(|e| format!("Failed to parse deposit count: {}", e))?;
let delay = matches
.value_of("delay")
.ok_or_else(|| "Deposit delay not specified")?
.parse::<u64>()
.map(Duration::from_millis)
.map_err(|e| format!("Failed to parse deposit delay: {}", e))?;
let confirmations = matches
.value_of("confirmations")
.ok_or_else(|| "Confirmations not specified")?
.parse::<usize>()
.map_err(|e| format!("Failed to parse confirmations: {}", e))?;
let endpoint = matches
.value_of("endpoint")
.ok_or_else(|| "Endpoint not specified")?;
let (_event_loop, transport) = Http::new(&endpoint).map_err(|e| {
format!(
"Failed to start HTTP transport connected to ganache: {:?}",
e
)
})?;
let web3 = Web3::new(transport);
let deposit_contract = env
.runtime()
.block_on(DepositContract::deploy(web3, confirmations))
.map_err(|e| format!("Failed to deploy contract: {}", e))?;
info!(
"Deposit contract deployed. Address: {}",
deposit_contract.address()
);
env.runtime()
.block_on(do_deposits::<T>(deposit_contract, count, delay))
.map_err(|e| format!("Failed to submit deposits: {}", e))?;
Ok(())
}
fn do_deposits<T: EthSpec>(
deposit_contract: DepositContract,
count: usize,
delay: Duration,
) -> impl Future<Item = (), Error = String> {
let deposits = (0..count)
.map(|i| DelayThenDeposit {
deposit: deposit_contract.deposit_helper::<T>(
generate_deterministic_keypair(i),
Hash256::from_low_u64_le(i as u64),
32_000_000_000,
),
delay,
})
.collect();
deposit_contract.deposit_multiple(deposits)
}

View File

@@ -1,11 +1,15 @@
#[macro_use]
extern crate log;

+mod deposit_contract;
mod parse_hex;
mod pycli;
mod transition_blocks;

use clap::{App, Arg, SubCommand};
+use deposit_contract::run_deposit_contract;
+use environment::EnvironmentBuilder;
+use log::Level;
use parse_hex::run_parse_hex;
use pycli::run_pycli;
use std::fs::File;
@@ -17,7 +21,7 @@ use types::{test_utils::TestingBeaconStateBuilder, EthSpec, MainnetEthSpec, Mini
type LocalEthSpec = MinimalEthSpec;

fn main() {
-    simple_logger::init().expect("logger should initialize");
+    simple_logger::init_with_level(Level::Info).expect("logger should initialize");

    let matches = App::new("Lighthouse CLI Tool")
        .version("0.1.0")
@@ -115,6 +119,45 @@ fn main() {
                        .help("SSZ encoded as 0x-prefixed hex"),
                ),
        )
+        .subcommand(
+            SubCommand::with_name("deposit-contract")
+                .about(
+                    "Uses an eth1 test rpc (e.g., ganache-cli) to simulate the deposit contract.",
+                )
+                .version("0.1.0")
+                .author("Paul Hauner <paul@sigmaprime.io>")
+                .arg(
+                    Arg::with_name("count")
+                        .short("c")
+                        .value_name("INTEGER")
+                        .takes_value(true)
+                        .required(true)
+                        .help("The number of deposits to be submitted."),
+                )
+                .arg(
+                    Arg::with_name("delay")
+                        .short("d")
+                        .value_name("MILLIS")
+                        .takes_value(true)
+                        .required(true)
+                        .help("The delay (in milliseconds) between each deposit"),
+                )
+                .arg(
+                    Arg::with_name("endpoint")
+                        .short("e")
+                        .value_name("HTTP_SERVER")
+                        .takes_value(true)
+                        .default_value("http://localhost:8545")
+                        .help("The URL to the eth1 JSON-RPC http API."),
+                )
+                .arg(
+                    Arg::with_name("confirmations")
+                        .value_name("INTEGER")
+                        .takes_value(true)
+                        .default_value("3")
+                        .help("The number of block confirmations before declaring the contract deployed."),
+                )
+        )
        .subcommand(
            SubCommand::with_name("pycli")
                .about("TODO")
@@ -132,6 +175,14 @@ fn main() {
        )
        .get_matches();

+    let env = EnvironmentBuilder::minimal()
+        .multi_threaded_tokio_runtime()
+        .expect("should start tokio runtime")
+        .null_logger()
+        .expect("should start null logger")
+        .build()
+        .expect("should build env");
+
    match matches.subcommand() {
        ("genesis_yaml", Some(matches)) => {
            let num_validators = matches
@@ -178,6 +229,8 @@ fn main() {
        }
        ("pycli", Some(matches)) => run_pycli::<LocalEthSpec>(matches)
            .unwrap_or_else(|e| error!("Failed to run pycli: {}", e)),
+        ("deposit-contract", Some(matches)) => run_deposit_contract::<LocalEthSpec>(env, matches)
+            .unwrap_or_else(|e| error!("Failed to run deposit contract sim: {}", e)),
        (other, _) => error!("Unknown subcommand {}. See --help.", other),
    }
}
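A hypothetical invocation of the new sub-command (assuming the tool builds as `lcli` and an eth1 test RPC such as ganache-cli is serving the default endpoint; note the flags only define short forms):

```bash
# Submit 8 deposits, 500 ms apart, against http://localhost:8545,
# waiting the default 3 confirmations for contract deployment.
lcli deposit-contract -c 8 -d 500
```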

lighthouse/Cargo.toml (new file)
View File

@@ -0,0 +1,20 @@
[package]
name = "lighthouse"
version = "0.1.0"
authors = ["Sigma Prime <contact@sigmaprime.io>"]
edition = "2018"
[dependencies]
beacon_node = { "path" = "../beacon_node" }
tokio = "0.1.15"
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
sloggers = "0.3.4"
types = { "path" = "../eth2/types" }
clap = "2.32.0"
env_logger = "0.6.1"
logging = { path = "../eth2/utils/logging" }
slog-term = "^2.4.0"
slog-async = "^2.3.0"
environment = { path = "./environment" }
futures = "0.1.25"
validator_client = { "path" = "../validator_client" }

View File

@@ -0,0 +1,19 @@
[package]
name = "environment"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
tokio = "0.1.15"
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
sloggers = "0.3.4"
types = { "path" = "../../eth2/types" }
eth2_config = { "path" = "../../eth2/utils/eth2_config" }
env_logger = "0.6.1"
logging = { path = "../../eth2/utils/logging" }
slog-term = "^2.4.0"
slog-async = "^2.3.0"
ctrlc = { version = "3.1.1", features = ["termination"] }
futures = "0.1.25"
parking_lot = "0.7"

View File

@@ -0,0 +1,241 @@
//! This crate aims to provide a common set of tools that can be used to create an "environment" to
//! run Lighthouse services like the `beacon_node` or `validator_client`. This allows for the
//! unification of creating tokio runtimes, loggers and eth2 specifications in production and in
//! testing.
//!
//! The idea is that the main thread creates an `Environment`, which is then used to spawn a
//! `Context` which can be handed to any service that wishes to start async tasks or perform
//! logging.
use eth2_config::Eth2Config;
use futures::{sync::oneshot, Future};
use slog::{o, Drain, Level, Logger};
use sloggers::{null::NullLoggerBuilder, Build};
use std::cell::RefCell;
use tokio::runtime::{Builder as RuntimeBuilder, Runtime, TaskExecutor};
use types::{EthSpec, InteropEthSpec, MainnetEthSpec, MinimalEthSpec};
/// Builds an `Environment`.
pub struct EnvironmentBuilder<E: EthSpec> {
runtime: Option<Runtime>,
log: Option<Logger>,
eth_spec_instance: E,
eth2_config: Eth2Config,
}
impl EnvironmentBuilder<MinimalEthSpec> {
/// Creates a new builder using the `minimal` eth2 specification.
pub fn minimal() -> Self {
Self {
runtime: None,
log: None,
eth_spec_instance: MinimalEthSpec,
eth2_config: Eth2Config::minimal(),
}
}
}
impl EnvironmentBuilder<MainnetEthSpec> {
/// Creates a new builder using the `mainnet` eth2 specification.
pub fn mainnet() -> Self {
Self {
runtime: None,
log: None,
eth_spec_instance: MainnetEthSpec,
eth2_config: Eth2Config::mainnet(),
}
}
}
impl EnvironmentBuilder<InteropEthSpec> {
/// Creates a new builder using the `interop` eth2 specification.
pub fn interop() -> Self {
Self {
runtime: None,
log: None,
eth_spec_instance: InteropEthSpec,
eth2_config: Eth2Config::interop(),
}
}
}
impl<E: EthSpec> EnvironmentBuilder<E> {
/// Specifies that a multi-threaded tokio runtime should be used. Ideal for production uses.
///
/// The `Runtime` used is just the standard tokio runtime.
pub fn multi_threaded_tokio_runtime(mut self) -> Result<Self, String> {
self.runtime =
Some(Runtime::new().map_err(|e| format!("Failed to start runtime: {:?}", e))?);
Ok(self)
}
/// Specifies that a single-threaded tokio runtime should be used. Ideal for testing purposes
/// where tests are already multi-threaded.
///
/// This can solve problems if "too many open files" errors are thrown during tests.
pub fn single_thread_tokio_runtime(mut self) -> Result<Self, String> {
self.runtime = Some(
RuntimeBuilder::new()
.core_threads(1)
.build()
.map_err(|e| format!("Failed to start runtime: {:?}", e))?,
);
Ok(self)
}
/// Specifies that all logs should be sent to `null` (i.e., ignored).
pub fn null_logger(mut self) -> Result<Self, String> {
self.log = Some(null_logger()?);
Ok(self)
}
/// Specifies that the `slog` asynchronous logger should be used. Ideal for production.
///
/// The logger is "async" because it has a dedicated thread that accepts logs and then
/// asynchronously flushes them to stdout/files/etc. This means the thread that raised the log
/// does not have to wait for the logs to be flushed.
pub fn async_logger(mut self, debug_level: &str) -> Result<Self, String> {
// Build the initial logger.
let decorator = slog_term::TermDecorator::new().build();
let decorator = logging::AlignedTermDecorator::new(decorator, logging::MAX_MESSAGE_WIDTH);
let drain = slog_term::FullFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).build();
let drain = match debug_level {
"info" => drain.filter_level(Level::Info),
"debug" => drain.filter_level(Level::Debug),
"trace" => drain.filter_level(Level::Trace),
"warn" => drain.filter_level(Level::Warning),
"error" => drain.filter_level(Level::Error),
"crit" => drain.filter_level(Level::Critical),
unknown => return Err(format!("Unknown debug-level: {}", unknown)),
};
self.log = Some(Logger::root(drain.fuse(), o!()));
Ok(self)
}
/// Consumes the builder, returning an `Environment`.
pub fn build(self) -> Result<Environment<E>, String> {
Ok(Environment {
runtime: self
.runtime
.ok_or_else(|| "Cannot build environment without runtime".to_string())?,
log: self
.log
.ok_or_else(|| "Cannot build environment without log".to_string())?,
eth_spec_instance: self.eth_spec_instance,
eth2_config: self.eth2_config,
})
}
}
/// An execution context that can be used by a service.
///
/// Distinct from an `Environment` because a `Context` is not able to give a mutable reference to a
/// `Runtime`, instead it only has access to a `TaskExecutor`.
#[derive(Clone)]
pub struct RuntimeContext<E: EthSpec> {
pub executor: TaskExecutor,
pub log: Logger,
pub eth_spec_instance: E,
pub eth2_config: Eth2Config,
}
impl<E: EthSpec> RuntimeContext<E> {
/// Returns a sub-context of this context.
///
/// The generated service will have the `service_name` in all its logs.
pub fn service_context(&self, service_name: &'static str) -> Self {
Self {
executor: self.executor.clone(),
log: self.log.new(o!("service" => service_name)),
eth_spec_instance: self.eth_spec_instance.clone(),
eth2_config: self.eth2_config.clone(),
}
}
/// Returns the `eth2_config` for this service.
pub fn eth2_config(&self) -> &Eth2Config {
&self.eth2_config
}
}
/// An environment where Lighthouse services can run. Used to start a production beacon node or
/// validator client, or to run tests that involve logging and async task execution.
pub struct Environment<E: EthSpec> {
runtime: Runtime,
log: Logger,
eth_spec_instance: E,
eth2_config: Eth2Config,
}
impl<E: EthSpec> Environment<E> {
/// Returns a mutable reference to the `tokio` runtime.
///
/// Useful in the rare scenarios where it's necessary to block the current thread until a task
/// is finished (e.g., during testing).
pub fn runtime(&mut self) -> &mut Runtime {
&mut self.runtime
}
/// Returns a `Context` where no "service" has been added to the logger output.
pub fn core_context(&mut self) -> RuntimeContext<E> {
RuntimeContext {
executor: self.runtime.executor(),
log: self.log.clone(),
eth_spec_instance: self.eth_spec_instance.clone(),
eth2_config: self.eth2_config.clone(),
}
}
/// Returns a `Context` where the `service_name` is added to the logger output.
pub fn service_context(&mut self, service_name: &'static str) -> RuntimeContext<E> {
RuntimeContext {
executor: self.runtime.executor(),
log: self.log.new(o!("service" => service_name)),
eth_spec_instance: self.eth_spec_instance.clone(),
eth2_config: self.eth2_config.clone(),
}
}
/// Block the current thread until Ctrl+C is received.
pub fn block_until_ctrl_c(&mut self) -> Result<(), String> {
let (ctrlc_send, ctrlc_oneshot) = oneshot::channel();
let ctrlc_send_c = RefCell::new(Some(ctrlc_send));
ctrlc::set_handler(move || {
if let Some(ctrlc_send) = ctrlc_send_c.try_borrow_mut().unwrap().take() {
ctrlc_send.send(()).expect("Error sending ctrl-c message");
}
})
.map_err(|e| format!("Could not set ctrlc handler: {:?}", e))?;
// Block this thread until Ctrl+C is pressed.
self.runtime()
.block_on(ctrlc_oneshot)
.map_err(|e| format!("Ctrlc oneshot failed: {:?}", e))
}
/// Shutdown the `tokio` runtime when all tasks are idle.
pub fn shutdown_on_idle(self) -> Result<(), String> {
self.runtime
.shutdown_on_idle()
.wait()
.map_err(|e| format!("Tokio runtime shutdown returned an error: {:?}", e))
}
pub fn eth_spec_instance(&self) -> &E {
&self.eth_spec_instance
}
pub fn eth2_config(&self) -> &Eth2Config {
&self.eth2_config
}
}
pub fn null_logger() -> Result<Logger, String> {
let log_builder = NullLoggerBuilder;
log_builder
.build()
.map_err(|e| format!("Failed to start null logger: {:?}", e))
}
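A minimal usage sketch for the builder flow above (an assumption of how a consumer might drive it, not code from the PR):

```rust
use environment::EnvironmentBuilder;
use futures::future;

fn main() -> Result<(), String> {
    // Single-threaded runtime and null logger keep the sketch test-friendly.
    let mut env = EnvironmentBuilder::minimal()
        .single_thread_tokio_runtime()?
        .null_logger()?
        .build()?;

    // Spawn a trivial task via a context, then wait for the runtime to go idle.
    let context = env.core_context();
    context.executor.spawn(future::lazy(|| {
        println!("hello from the environment's runtime");
        Ok::<(), ()>(())
    }));

    env.shutdown_on_idle()
}
```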

lighthouse/src/main.rs (new file)
View File

@@ -0,0 +1,165 @@
#[macro_use]
extern crate clap;
use beacon_node::ProductionBeaconNode;
use clap::{App, Arg, ArgMatches};
use env_logger::{Builder, Env};
use environment::EnvironmentBuilder;
use slog::{crit, info, warn};
use std::process::exit;
use types::EthSpec;
use validator_client::ProductionValidatorClient;
pub const DEFAULT_DATA_DIR: &str = ".lighthouse";
pub const CLIENT_CONFIG_FILENAME: &str = "beacon-node.toml";
pub const ETH2_CONFIG_FILENAME: &str = "eth2-spec.toml";
fn main() {
// Debugging output for libp2p and external crates.
Builder::from_env(Env::default()).init();
// Parse the CLI parameters.
let matches = App::new("Lighthouse")
.version(crate_version!())
.author("Sigma Prime <contact@sigmaprime.io>")
.about("Eth 2.0 Client")
.arg(
Arg::with_name("spec")
.short("s")
.long("spec")
.value_name("TITLE")
.help("Specifies the default eth2 spec type. Only effective when creating a new datadir.")
.takes_value(true)
.required(true)
.possible_values(&["mainnet", "minimal", "interop"])
.global(true)
.default_value("minimal")
)
.arg(
Arg::with_name("logfile")
.long("logfile")
.value_name("FILE")
.help("File path where output will be written.")
.takes_value(true),
)
.arg(
Arg::with_name("debug-level")
.long("debug-level")
.value_name("LEVEL")
.help("The title of the spec constants for chain config.")
.takes_value(true)
.possible_values(&["info", "debug", "trace", "warn", "error", "crit"])
.default_value("trace"),
)
.subcommand(beacon_node::cli_app())
.subcommand(validator_client::cli_app())
.get_matches();
macro_rules! run_with_spec {
($env_builder: expr) => {
match run($env_builder, &matches) {
Ok(()) => exit(0),
Err(e) => {
println!("Failed to start Lighthouse: {}", e);
exit(1)
}
}
};
}
match matches.value_of("spec") {
Some("minimal") => run_with_spec!(EnvironmentBuilder::minimal()),
Some("mainnet") => run_with_spec!(EnvironmentBuilder::mainnet()),
Some("interop") => run_with_spec!(EnvironmentBuilder::interop()),
spec => {
// This path should be unreachable due to clap having a `default_value`
unreachable!("Unknown spec configuration: {:?}", spec);
}
}
}
fn run<E: EthSpec>(
environment_builder: EnvironmentBuilder<E>,
matches: &ArgMatches,
) -> Result<(), String> {
let mut environment = environment_builder
.async_logger(
matches
.value_of("debug-level")
.ok_or_else(|| "Expected --debug-level flag".to_string())?,
)?
.multi_threaded_tokio_runtime()?
.build()?;
let log = environment.core_context().log;
if std::mem::size_of::<usize>() != 8 {
crit!(
log,
"Lighthouse only supports 64bit CPUs";
"detected" => format!("{}bit", std::mem::size_of::<usize>() * 8)
);
return Err("Invalid CPU architecture".into());
}
warn!(
log,
"Ethereum 2.0 is pre-release. This software is experimental."
);
// Note: the current code technically allows for starting a beacon node _and_ a validator
// client at the same time.
//
// Whilst this is possible, the mutual-exclusivity of `clap` sub-commands prevents it from
// actually happening.
//
// Creating a command which can run both might be useful future works.
let beacon_node = if let Some(sub_matches) = matches.subcommand_matches("Beacon Node") {
let runtime_context = environment.core_context();
let beacon = environment
.runtime()
.block_on(ProductionBeaconNode::new_from_cli(
runtime_context,
sub_matches,
))
.map_err(|e| format!("Failed to start beacon node: {}", e))?;
Some(beacon)
} else {
None
};
let validator_client = if let Some(sub_matches) = matches.subcommand_matches("Validator Client")
{
let runtime_context = environment.core_context();
let validator = ProductionValidatorClient::new_from_cli(runtime_context, sub_matches)
.map_err(|e| format!("Failed to init validator client: {}", e))?;
validator
.start_service()
.map_err(|e| format!("Failed to start validator client service: {}", e))?;
Some(validator)
} else {
None
};
if beacon_node.is_none() && validator_client.is_none() {
crit!(log, "No subcommand supplied. See --help .");
return Err("No subcommand supplied.".into());
}
// Block this thread until Ctrl+C is pressed.
environment.block_until_ctrl_c()?;
info!(log, "Shutting down..");
drop(beacon_node);
drop(validator_client);
// Shutdown the environment once all tasks have completed.
environment.shutdown_on_idle()
}
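For orientation, hypothetical invocations of the unified binary; the sub-command names here are assumptions borrowed from the `bn`/`vc` shorthand in the docs above, while the authoritative names come from `beacon_node::cli_app()` and `validator_client::cli_app()`:

```bash
# Beacon node on the minimal spec with info-level logging.
lighthouse --spec minimal --debug-level info bn testnet -f recent 8

# Validator client against the local beacon node.
lighthouse vc testnet -b insecure 0 8
```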

scripts/ganache_test_node.sh (new executable file)
View File

@@ -0,0 +1,8 @@
#!/usr/bin/env bash
ganache-cli \
--defaultBalanceEther 1000000000 \
--gasLimit 1000000000 \
--accounts 10 \
--mnemonic "vast thought differ pull jewel broom cook wrist tribe word before omit" \
--port 8545

View File

@@ -74,9 +74,10 @@
    shift
done

-./beacon_node \
-    --p2p-priv-key $IDENTITY \
+./lighthouse \
    --logfile $BEACON_LOG_FILE \
+    beacon \
+    --p2p-priv-key $IDENTITY \
    --libp2p-addresses $PEERS \
    --port $PORT \
    testnet \
@@ -86,8 +87,9 @@ done
    $GEN_STATE \
    & \

-./validator_client \
+./lighthouse \
    --logfile $VALIDATOR_LOG_FILE \
+    validator \
    testnet \
    --bootstrap \
    interop-yaml \

tests/eth1_test_rig/.gitignore (new file)
View File

@@ -0,0 +1 @@
contract/

View File

@@ -0,0 +1,19 @@
[package]
name = "eth1_test_rig"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
build = "build.rs"
[build-dependencies]
reqwest = "0.9.20"
serde_json = "1.0"
[dependencies]
web3 = "0.8.0"
tokio = "0.1.17"
futures = "0.1.25"
types = { path = "../../eth2/types"}
eth2_ssz = { path = "../../eth2/utils/ssz"}
serde_json = "1.0"

View File

@@ -0,0 +1,95 @@
//! Downloads the ABI and bytecode for the deposit contract from the ethereum spec repository and
//! stores them in a `contract/` directory in the crate root.
//!
//! These files are required for some `include_bytes` calls used in this crate.
use reqwest::Response;
use serde_json::Value;
use std::env;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;
const GITHUB_RAW: &str = "https://raw.githubusercontent.com";
const SPEC_REPO: &str = "ethereum/eth2.0-specs";
const SPEC_TAG: &str = "v0.8.3";
const ABI_FILE: &str = "validator_registration.json";
const BYTECODE_FILE: &str = "validator_registration.bytecode";
fn main() {
match init_deposit_contract_abi() {
Ok(()) => (),
Err(e) => panic!(e),
}
}
/// Attempts to download the deposit contract ABI from github if a local copy is not already
/// present.
pub fn init_deposit_contract_abi() -> Result<(), String> {
let abi_file = abi_dir().join(format!("{}_{}", SPEC_TAG, ABI_FILE));
let bytecode_file = abi_dir().join(format!("{}_{}", SPEC_TAG, BYTECODE_FILE));
if abi_file.exists() {
// Nothing to do.
} else {
match download_abi() {
Ok(mut response) => {
let mut abi_file = File::create(abi_file)
.map_err(|e| format!("Failed to create local abi file: {:?}", e))?;
let mut bytecode_file = File::create(bytecode_file)
.map_err(|e| format!("Failed to create local bytecode file: {:?}", e))?;
let contract: Value = response
.json()
.map_err(|e| format!("Respsonse is not a valid json {:?}", e))?;
let abi = contract
.get("abi")
.ok_or(format!("Response does not contain key: abi"))?
.to_string();
abi_file
.write(abi.as_bytes())
.map_err(|e| format!("Failed to write http response to abi file: {:?}", e))?;
let bytecode = contract
.get("bytecode")
.ok_or(format!("Response does not contain key: bytecode"))?
.to_string();
bytecode_file.write(bytecode.as_bytes()).map_err(|e| {
format!("Failed to write http response to bytecode file: {:?}", e)
})?;
}
Err(e) => {
return Err(format!(
"No abi file found. Failed to download from github: {:?}",
e
))
}
}
}
Ok(())
}
/// Attempts to download the deposit contract file from the Ethereum github.
fn download_abi() -> Result<Response, String> {
reqwest::get(&format!(
"{}/{}/{}/deposit_contract/contracts/{}",
GITHUB_RAW, SPEC_REPO, SPEC_TAG, ABI_FILE
))
.map_err(|e| format!("Failed to download deposit ABI from github: {:?}", e))
}
/// Returns the directory that will be used to store the deposit contract ABI.
fn abi_dir() -> PathBuf {
let base = env::var("CARGO_MANIFEST_DIR")
.expect("should know manifest dir")
.parse::<PathBuf>()
.expect("should parse manifest dir as path")
.join("contract");
std::fs::create_dir_all(base.clone())
.expect("should be able to create abi directory in manifest");
base
}

View File

@@ -0,0 +1,157 @@
use futures::Future;
use serde_json::json;
use std::io::prelude::*;
use std::io::BufReader;
use std::net::TcpListener;
use std::process::{Child, Command, Stdio};
use std::sync::Arc;
use std::time::{Duration, Instant};
use web3::{
transports::{EventLoopHandle, Http},
Transport, Web3,
};
/// How long we will wait for ganache to indicate that it is ready.
const GANACHE_STARTUP_TIMEOUT_MILLIS: u64 = 10_000;
/// Provides a dedicated `ganache-cli` instance with a connected `Web3` instance.
///
/// Requires that `ganache-cli` is installed and available on `PATH`.
pub struct GanacheInstance {
pub port: u16,
child: Child,
_event_loop: Arc<EventLoopHandle>,
pub web3: Web3<Http>,
}
impl GanacheInstance {
/// Start a new `ganache-cli` process, waiting until it indicates that it is ready to accept
/// RPC connections.
pub fn new() -> Result<Self, String> {
let port = unused_port()?;
let mut child = Command::new("ganache-cli")
.stdout(Stdio::piped())
.arg("--defaultBalanceEther")
.arg("1000000000")
.arg("--gasLimit")
.arg("1000000000")
.arg("--accounts")
.arg("10")
.arg("--port")
.arg(format!("{}", port))
.arg("--mnemonic")
.arg("\"vast thought differ pull jewel broom cook wrist tribe word before omit\"")
.spawn()
.map_err(|e| {
format!(
"Failed to start ganche-cli. \
Is it ganache-cli installed and available on $PATH? Error: {:?}",
e
)
})?;
let stdout = child
.stdout
.ok_or_else(|| "Unable to get stdout for ganache child process")?;
let start = Instant::now();
let mut reader = BufReader::new(stdout);
loop {
if start + Duration::from_millis(GANACHE_STARTUP_TIMEOUT_MILLIS) <= Instant::now() {
break Err(
"Timed out waiting for ganache to start. Is ganache-cli installed?".to_string(),
);
}
let mut line = String::new();
if let Err(e) = reader.read_line(&mut line) {
break Err(format!("Failed to read line from ganache process: {:?}", e));
} else if line.starts_with("Listening on") {
break Ok(());
} else {
continue;
}
}?;
let (event_loop, transport) = Http::new(&endpoint(port)).map_err(|e| {
format!(
"Failed to start HTTP transport connected to ganache: {:?}",
e
)
})?;
let web3 = Web3::new(transport);
child.stdout = Some(reader.into_inner());
Ok(Self {
child,
port,
_event_loop: Arc::new(event_loop),
web3,
})
}
/// Returns the endpoint that this instance is listening on.
pub fn endpoint(&self) -> String {
endpoint(self.port)
}
/// Increase the timestamp on future blocks by `increase_by` seconds.
pub fn increase_time(&self, increase_by: u64) -> impl Future<Item = (), Error = String> {
self.web3
.transport()
.execute("evm_increaseTime", vec![json!(increase_by)])
.map(|_json_value| ())
.map_err(|e| format!("Failed to increase time on EVM (is this ganache?): {:?}", e))
}
/// Returns the current block number, as u64
pub fn block_number(&self) -> impl Future<Item = u64, Error = String> {
self.web3
.eth()
.block_number()
.map(|v| v.as_u64())
.map_err(|e| format!("Failed to get block number: {:?}", e))
}
/// Mines a single block.
pub fn evm_mine(&self) -> impl Future<Item = (), Error = String> {
self.web3
.transport()
.execute("evm_mine", vec![])
.map(|_| ())
.map_err(|_| {
"utils should mine new block with evm_mine (only works with ganache-cli!)"
.to_string()
})
}
}
fn endpoint(port: u16) -> String {
format!("http://localhost:{}", port)
}
/// A bit of hack to find an unused TCP port.
///
/// Does not guarantee that the given port is unused after the function exits, just that it was
/// unused before the function started (i.e., it does not reserve a port).
pub fn unused_port() -> Result<u16, String> {
let listener = TcpListener::bind("127.0.0.1:0")
.map_err(|e| format!("Failed to create TCP listener to find unused port: {:?}", e))?;
let local_addr = listener.local_addr().map_err(|e| {
format!(
"Failed to read TCP listener local_addr to find unused port: {:?}",
e
)
})?;
Ok(local_addr.port())
}
impl Drop for GanacheInstance {
fn drop(&mut self) {
let _ = self.child.kill();
}
}

View File

@@ -0,0 +1,240 @@
//! Provides utilities for deploying and manipulating the eth2 deposit contract on the eth1 chain.
//!
//! Presently used with [`ganache-cli`](https://github.com/trufflesuite/ganache-cli) to simulate
//! the deposit contract for testing beacon node eth1 integration.
//!
//! Not tested to work with actual clients (e.g., geth). It should work fine, however there may be
//! some initial issues.
mod ganache;
use futures::{stream, Future, IntoFuture, Stream};
use ganache::GanacheInstance;
use ssz::Encode;
use std::time::{Duration, Instant};
use tokio::{runtime::Runtime, timer::Delay};
use types::DepositData;
use types::{EthSpec, Hash256, Keypair, Signature};
use web3::contract::{Contract, Options};
use web3::transports::Http;
use web3::types::{Address, U256};
use web3::{Transport, Web3};
pub const DEPLOYER_ACCOUNTS_INDEX: usize = 0;
pub const DEPOSIT_ACCOUNTS_INDEX: usize = 0;
const CONTRACT_DEPLOY_GAS: usize = 4_000_000;
const DEPOSIT_GAS: usize = 4_000_000;
// Deposit contract
pub const ABI: &[u8] = include_bytes!("../contract/v0.8.3_validator_registration.json");
pub const BYTECODE: &[u8] = include_bytes!("../contract/v0.8.3_validator_registration.bytecode");
/// Provides a dedicated ganache-cli instance with the deposit contract already deployed.
pub struct GanacheEth1Instance {
pub ganache: GanacheInstance,
pub deposit_contract: DepositContract,
}
impl GanacheEth1Instance {
pub fn new() -> impl Future<Item = Self, Error = String> {
GanacheInstance::new().into_future().and_then(|ganache| {
DepositContract::deploy(ganache.web3.clone(), 0).map(|deposit_contract| Self {
ganache,
deposit_contract,
})
})
}
pub fn endpoint(&self) -> String {
self.ganache.endpoint()
}
pub fn web3(&self) -> Web3<Http> {
self.ganache.web3.clone()
}
}
/// Deploys and provides functions for the eth2 deposit contract, deployed on the eth1 chain.
#[derive(Clone, Debug)]
pub struct DepositContract {
web3: Web3<Http>,
contract: Contract<Http>,
}
impl DepositContract {
pub fn deploy(
web3: Web3<Http>,
confirmations: usize,
) -> impl Future<Item = Self, Error = String> {
let web3_1 = web3.clone();
deploy_deposit_contract(web3.clone(), confirmations)
.map_err(|e| {
format!(
"Failed to deploy contract: {}. Is scripts/ganache_tests_node.sh running?.",
e
)
})
.and_then(move |address| {
Contract::from_json(web3_1.eth(), address, ABI)
.map_err(|e| format!("Failed to init contract: {:?}", e))
})
.map(|contract| Self { contract, web3 })
}
/// The deposit contract's address in `0x00ab...` format.
pub fn address(&self) -> String {
format!("0x{:x}", self.contract.address())
}
/// A helper to return a fully-formed `DepositData`. Does not submit the deposit data to the
/// smart contract.
pub fn deposit_helper<E: EthSpec>(
&self,
keypair: Keypair,
withdrawal_credentials: Hash256,
amount: u64,
) -> DepositData {
let mut deposit = DepositData {
pubkey: keypair.pk.into(),
withdrawal_credentials,
amount,
signature: Signature::empty_signature().into(),
};
deposit.signature = deposit.create_signature(&keypair.sk, &E::default_spec());
deposit
}
/// Creates a random, valid deposit and submits it to the deposit contract.
///
/// The keypairs are created randomly and destroyed.
pub fn deposit_random<E: EthSpec>(&self, runtime: &mut Runtime) -> Result<(), String> {
let keypair = Keypair::random();
let mut deposit = DepositData {
pubkey: keypair.pk.into(),
withdrawal_credentials: Hash256::zero(),
amount: 32_000_000_000,
signature: Signature::empty_signature().into(),
};
deposit.signature = deposit.create_signature(&keypair.sk, &E::default_spec());
self.deposit(runtime, deposit)
}
/// Performs a blocking deposit.
pub fn deposit(&self, runtime: &mut Runtime, deposit_data: DepositData) -> Result<(), String> {
runtime
.block_on(self.deposit_async(deposit_data))
.map_err(|e| format!("Deposit failed: {:?}", e))
}
/// Performs a non-blocking deposit.
pub fn deposit_async(
&self,
deposit_data: DepositData,
) -> impl Future<Item = (), Error = String> {
let contract = self.contract.clone();
self.web3
.eth()
.accounts()
.map_err(|e| format!("Failed to get accounts: {:?}", e))
.and_then(|accounts| {
accounts
.get(DEPOSIT_ACCOUNTS_INDEX)
.cloned()
.ok_or_else(|| "Insufficient accounts for deposit".to_string())
})
.and_then(move |from_address| {
let params = (
deposit_data.pubkey.as_ssz_bytes(),
deposit_data.withdrawal_credentials.as_ssz_bytes(),
deposit_data.signature.as_ssz_bytes(),
);
let options = Options {
gas: Some(U256::from(DEPOSIT_GAS)),
value: Some(from_gwei(deposit_data.amount)),
..Options::default()
};
contract
.call("deposit", params, from_address, options)
.map_err(|e| format!("Failed to call deposit fn: {:?}", e))
})
.map(|_| ())
}
/// Performs many deposits, each preceded by a delay.
pub fn deposit_multiple(
&self,
deposits: Vec<DelayThenDeposit>,
) -> impl Future<Item = (), Error = String> {
let s = self.clone();
stream::unfold(deposits.into_iter(), move |mut deposit_iter| {
let s = s.clone();
match deposit_iter.next() {
Some(deposit) => Some(
Delay::new(Instant::now() + deposit.delay)
.map_err(|e| format!("Failed to execute delay: {:?}", e))
.and_then(move |_| s.deposit_async(deposit.deposit))
.map(move |yielded| (yielded, deposit_iter)),
),
None => None,
}
})
.collect()
.map(|_| ())
}
}
/// Describes a deposit and a delay that should precede its submission to the deposit
/// contract.
#[derive(Clone)]
pub struct DelayThenDeposit {
/// Wait this duration ...
pub delay: Duration,
/// ... then submit this deposit.
pub deposit: DepositData,
}
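// A hedged sketch (not part of this PR) of how `DelayThenDeposit` combines
// with `deposit_multiple`: two deposits submitted half a second apart.
// `submit_two_delayed_deposits` and `types::MinimalEthSpec` are illustrative
// names, not items defined in this file.
fn submit_two_delayed_deposits(
    deposit_contract: &DepositContract,
    runtime: &mut Runtime,
) -> Result<(), String> {
    let deposits = (0..2)
        .map(|_| DelayThenDeposit {
            // Wait 500ms before each submission.
            delay: Duration::from_millis(500),
            // Build a fully-signed `DepositData` without submitting it.
            deposit: deposit_contract.deposit_helper::<types::MinimalEthSpec>(
                Keypair::random(),
                Hash256::zero(),
                32_000_000_000, // 32 ETH, denominated in gwei.
            ),
        })
        .collect::<Vec<_>>();
    runtime.block_on(deposit_contract.deposit_multiple(deposits))
}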
/// Converts an amount denominated in gwei into wei (1 gwei = 10^9 wei).
fn from_gwei(gwei: u64) -> U256 {
U256::from(gwei) * U256::exp10(9)
}
/// Deploys the deposit contract to the given web3 instance using the account with index
/// `DEPLOYER_ACCOUNTS_INDEX`.
fn deploy_deposit_contract<T: Transport>(
web3: Web3<T>,
confirmations: usize,
) -> impl Future<Item = Address, Error = String> {
let bytecode = String::from_utf8_lossy(&BYTECODE);
web3.eth()
.accounts()
.map_err(|e| format!("Failed to get accounts: {:?}", e))
.and_then(|accounts| {
accounts
.get(DEPLOYER_ACCOUNTS_INDEX)
.cloned()
.ok_or_else(|| "Insufficient accounts for deployer".to_string())
})
.and_then(move |deploy_address| {
Contract::deploy(web3.eth(), &ABI)
.map_err(|e| format!("Unable to build contract deployer: {:?}", e))?
.confirmations(confirmations)
.options(Options {
gas: Some(U256::from(CONTRACT_DEPLOY_GAS)),
..Options::default()
})
.execute(bytecode, (), deploy_address)
.map_err(|e| format!("Failed to execute deployment: {:?}", e))
})
.and_then(|pending_contract| {
pending_contract
.map(|contract| contract.address())
.map_err(|e| format!("Unable to resolve pending contract: {:?}", e))
})
}


@ -0,0 +1,18 @@
[package]
name = "node_test_rig"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
environment = { path = "../../lighthouse/environment" }
beacon_node = { path = "../../beacon_node" }
types = { path = "../../eth2/types" }
eth2_config = { path = "../../eth2/utils/eth2_config" }
tempdir = "0.3"
reqwest = "0.9"
url = "1.2"
serde = "1.0"
futures = "0.1.25"
genesis = { path = "../../beacon_node/genesis" }
remote_beacon_node = { path = "../../eth2/utils/remote_beacon_node" }


@ -0,0 +1,67 @@
use beacon_node::{
beacon_chain::BeaconChainTypes, Client, ClientConfig, ClientGenesis, ProductionBeaconNode,
ProductionClient,
};
use environment::RuntimeContext;
use futures::Future;
use remote_beacon_node::RemoteBeaconNode;
use tempdir::TempDir;
use types::EthSpec;
pub use environment;
/// Provides a beacon node that is running in the current process. Useful for testing purposes.
pub struct LocalBeaconNode<T> {
pub client: T,
pub datadir: TempDir,
}
impl<E: EthSpec> LocalBeaconNode<ProductionClient<E>> {
/// Starts a new, production beacon node.
pub fn production(context: RuntimeContext<E>) -> Self {
let (client_config, datadir) = testing_client_config();
let client = ProductionBeaconNode::new(context, client_config)
.wait()
.expect("should build production client")
.into_inner();
LocalBeaconNode { client, datadir }
}
}
impl<T: BeaconChainTypes> LocalBeaconNode<Client<T>> {
/// Returns a `RemoteBeaconNode` that can connect to `self`. Useful for testing the node as if
/// it were external to this process.
pub fn remote_node(&self) -> Result<RemoteBeaconNode<T::EthSpec>, String> {
Ok(RemoteBeaconNode::new(
self.client
.http_listen_addr()
.ok_or_else(|| "A remote beacon node must have a http server".to_string())?,
)?)
}
}
fn testing_client_config() -> (ClientConfig, TempDir) {
// Creates a temporary directory that will be deleted once this `TempDir` is dropped.
let tempdir = TempDir::new("lighthouse_node_test_rig")
.expect("should create temp directory for client datadir");
let mut client_config = ClientConfig::default();
client_config.data_dir = tempdir.path().into();
// Setting ports to `0` means that the OS will choose some available port.
client_config.network.libp2p_port = 0;
client_config.network.discovery_port = 0;
client_config.rpc.port = 0;
client_config.rest_api.port = 0;
client_config.websocket_server.port = 0;
client_config.genesis = ClientGenesis::Interop {
validator_count: 8,
genesis_time: 13_371_337,
};
(client_config, tempdir)
}
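// A hedged usage sketch (not part of this file): spin up an in-process node
// and obtain an HTTP client for it, given a runtime context. The function
// name is illustrative only.
fn example_usage<E: EthSpec>(context: RuntimeContext<E>) -> Result<(), String> {
    // Backed by a throw-away datadir; ports are chosen by the OS.
    let node = LocalBeaconNode::production(context);
    // Talk to the node over HTTP exactly as an external client would.
    let _remote = node.remote_node()?;
    Ok(())
}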


@ -4,10 +4,6 @@ version = "0.1.0"
 authors = ["Paul Hauner <paul@paulhauner.com>", "Age Manning <Age@AgeManning.com>", "Luke Anderson <luke@lukeanderson.com.au>"]
 edition = "2018"
-[[bin]]
-name = "validator_client"
-path = "src/main.rs"
 [lib]
 name = "validator_client"
 path = "src/lib.rs"
@ -38,4 +34,7 @@ bincode = "1.2.0"
 futures = "0.1.29"
 dirs = "2.0.2"
 logging = { path = "../eth2/utils/logging" }
+environment = { path = "../lighthouse/environment" }
+parking_lot = "0.7"
+exit-future = "0.1.4"
 libc = "0.2.65"

validator_client/src/cli.rs

@ -0,0 +1,123 @@
use crate::config::{DEFAULT_SERVER, DEFAULT_SERVER_GRPC_PORT, DEFAULT_SERVER_HTTP_PORT};
use clap::{App, Arg, SubCommand};
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("Validator Client")
.visible_aliases(&["v", "vc", "validator", "validator_client"])
.version("0.0.1")
.author("Sigma Prime <contact@sigmaprime.io>")
.about("Eth 2.0 Validator Client")
.arg(
Arg::with_name("datadir")
.long("datadir")
.short("d")
.value_name("DIR")
.help("Data directory for keys and databases.")
.takes_value(true),
)
.arg(
Arg::with_name("logfile")
.long("logfile")
.value_name("logfile")
.help("File path where output will be written.")
.takes_value(true),
)
.arg(
Arg::with_name("spec")
.long("spec")
.value_name("TITLE")
.help("Specifies the default eth2 spec type.")
.takes_value(true)
.possible_values(&["mainnet", "minimal", "interop"])
.conflicts_with("eth2-config")
.global(true)
)
.arg(
Arg::with_name("eth2-config")
.long("eth2-config")
.short("e")
.value_name("TOML_FILE")
.help("Path to Ethereum 2.0 config and specification file (e.g., eth2_spec.toml).")
.takes_value(true),
)
.arg(
Arg::with_name("server")
.long("server")
.value_name("NETWORK_ADDRESS")
.help("Address to connect to BeaconNode.")
.default_value(DEFAULT_SERVER)
.takes_value(true),
)
.arg(
Arg::with_name("server-grpc-port")
.long("server-grpc-port")
.short("g")
.value_name("PORT")
.help("Port to use for gRPC API connection to the server.")
.default_value(DEFAULT_SERVER_GRPC_PORT)
.takes_value(true),
)
.arg(
Arg::with_name("server-http-port")
.long("server-http-port")
.short("h")
.value_name("PORT")
.help("Port to use for HTTP API connection to the server.")
.default_value(DEFAULT_SERVER_HTTP_PORT)
.takes_value(true),
)
.arg(
Arg::with_name("debug-level")
.long("debug-level")
.value_name("LEVEL")
.short("s")
.help("The title of the spec constants for chain config.")
.takes_value(true)
.possible_values(&["info", "debug", "trace", "warn", "error", "crit"])
.default_value("trace"),
)
/*
* The "testnet" sub-command.
*
* Used for starting testnet validator clients.
*/
.subcommand(SubCommand::with_name("testnet")
.about("Starts a testnet validator using INSECURE, predicatable private keys, based off the canonical \
validator index. ONLY USE FOR TESTING PURPOSES!")
.arg(
Arg::with_name("bootstrap")
.short("b")
.long("bootstrap")
.help("Connect to the RPC server to download the eth2_config via the HTTP API.")
)
.subcommand(SubCommand::with_name("insecure")
.about("Uses the standard, predicatable `interop` keygen method to produce a range \
of predicatable private keys and starts performing their validator duties.")
.arg(Arg::with_name("first_validator")
.value_name("VALIDATOR_INDEX")
.required(true)
.help("The first validator public key to be generated for this client."))
.arg(Arg::with_name("validator_count")
.value_name("COUNT")
.required(true)
.help("The number of validators."))
)
.subcommand(SubCommand::with_name("interop-yaml")
.about("Loads plain-text secret keys from YAML files. Expects the interop format defined
in the ethereum/eth2.0-pm repo.")
.arg(Arg::with_name("path")
.value_name("PATH")
.required(true)
.help("Path to a YAML file."))
)
)
.subcommand(SubCommand::with_name("sign_block")
.about("Connects to the beacon server, requests a new block (after providing reveal),\
and prints the signed block to standard out")
.arg(Arg::with_name("validator")
.value_name("VALIDATOR")
.required(true)
.help("The pubkey of the validator that should sign the block.")
)
)
}
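// A hedged sketch (not part of this file) of how the nested subcommands
// above parse, using clap's `get_matches_from`; the argument vector mirrors
// a typical insecure-testnet invocation. `example_parse` is an illustrative name.
fn example_parse() {
    let matches = cli_app().get_matches_from(vec![
        "validator_client",
        "testnet",
        "insecure",
        "0", // first_validator
        "8", // validator_count
    ]);
    // Both levels of subcommand are addressable after parsing.
    if let ("testnet", Some(testnet)) = matches.subcommand() {
        if let ("insecure", Some(insecure)) = testnet.subcommand() {
            assert_eq!(insecure.value_of("first_validator"), Some("0"));
            assert_eq!(insecure.value_of("validator_count"), Some("8"));
        }
    }
}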


@ -10,10 +10,10 @@ use self::epoch_duties::{EpochDuties, EpochDutiesMapError};
 pub use self::epoch_duties::{EpochDutiesMap, WorkInfo};
 use super::signer::Signer;
 use futures::Async;
+use parking_lot::RwLock;
 use slog::{debug, error, info};
 use std::fmt::Display;
 use std::sync::Arc;
-use std::sync::RwLock;
 use types::{Epoch, PublicKey, Slot};
 #[derive(Debug, PartialEq, Clone)]
@ -55,20 +55,20 @@ impl<U: BeaconNodeDuties, S: Signer + Display> DutiesManager<U, S> {
 let duties = self.beacon_node.request_duties(epoch, &public_keys)?;
 {
 // If these duties were known, check to see if they're updates or identical.
-if let Some(known_duties) = self.duties_map.read()?.get(&epoch) {
+if let Some(known_duties) = self.duties_map.read().get(&epoch) {
 if *known_duties == duties {
 return Ok(UpdateOutcome::NoChange(epoch));
 }
 }
 }
-if !self.duties_map.read()?.contains_key(&epoch) {
+if !self.duties_map.read().contains_key(&epoch) {
 //TODO: Remove clone by removing duties from outcome
-self.duties_map.write()?.insert(epoch, duties.clone());
+self.duties_map.write().insert(epoch, duties.clone());
 return Ok(UpdateOutcome::NewDuties(epoch, duties));
 }
 // duties have changed
 //TODO: Duties could be large here. Remove from display and avoid the clone.
-self.duties_map.write()?.insert(epoch, duties.clone());
+self.duties_map.write().insert(epoch, duties.clone());
 Ok(UpdateOutcome::DutiesChanged(epoch, duties))
 }
@ -97,7 +97,7 @@ impl<U: BeaconNodeDuties, S: Signer + Display> DutiesManager<U, S> {
 let mut current_work: Vec<(usize, WorkInfo)> = Vec::new();
 // if the map is poisoned, return None
-let duties = self.duties_map.read().ok()?;
+let duties = self.duties_map.read();
 for (index, validator_signer) in self.signers.iter().enumerate() {
 match duties.is_work_slot(slot, &validator_signer.to_public()) {

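// The point of the swap above is that `parking_lot::RwLock` cannot be
// poisoned: lock acquisition is infallible, which is why the `?` and `.ok()?`
// plumbing disappears from the diff. A minimal illustrative sketch of the
// difference (not code from this PR):
use std::collections::HashMap;

fn main() {
    // std::sync::RwLock: a panicking writer poisons the lock, so every
    // acquisition returns a Result that callers must propagate or unwrap.
    let std_map = std::sync::RwLock::new(HashMap::<u64, String>::new());
    let _v = std_map.read().unwrap().get(&0).cloned();

    // parking_lot::RwLock: no poisoning, so the guard is returned directly.
    let pl_map = parking_lot::RwLock::new(HashMap::<u64, String>::new());
    let _v = pl_map.read().get(&0).cloned();
}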

@ -1,4 +1,258 @@
-extern crate libc;
-pub mod config;
-pub use crate::config::Config;
+mod attestation_producer;
+mod block_producer;
+mod cli;
+mod config;
+mod duties;
+mod error;
+mod service;
+mod signer;
+pub use cli::cli_app;
+pub use config::Config;
use clap::ArgMatches;
use config::{Config as ClientConfig, KeySource};
use environment::RuntimeContext;
use eth2_config::Eth2Config;
use exit_future::Signal;
use futures::Stream;
use lighthouse_bootstrap::Bootstrapper;
use parking_lot::RwLock;
use protos::services_grpc::ValidatorServiceClient;
use service::Service;
use slog::{error, info, warn, Logger};
use slot_clock::SlotClock;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::{Duration, Instant};
use tokio::timer::Interval;
use types::{EthSpec, Keypair};
/// A fixed amount of time after a slot to perform operations. This gives the node time to complete
/// per-slot processes.
const TIME_DELAY_FROM_SLOT: Duration = Duration::from_millis(100);
#[derive(Clone)]
pub struct ProductionValidatorClient<T: EthSpec> {
context: RuntimeContext<T>,
service: Arc<Service<ValidatorServiceClient, Keypair, T>>,
exit_signals: Arc<RwLock<Vec<Signal>>>,
}
impl<T: EthSpec> ProductionValidatorClient<T> {
/// Instantiates the validator client, _without_ starting the timers to trigger block
/// and attestation production.
pub fn new_from_cli(context: RuntimeContext<T>, matches: &ArgMatches) -> Result<Self, String> {
let mut log = context.log.clone();
let (client_config, eth2_config) = get_configs(&matches, &mut log)
.map_err(|e| format!("Unable to initialize config: {}", e))?;
info!(
log,
"Starting validator client";
"datadir" => client_config.full_data_dir().expect("Unable to find datadir").to_str(),
);
let service: Service<ValidatorServiceClient, Keypair, T> =
Service::initialize_service(client_config, eth2_config, log.clone())
.map_err(|e| e.to_string())?;
Ok(Self {
context,
service: Arc::new(service),
exit_signals: Arc::new(RwLock::new(vec![])),
})
}
/// Starts the timers to trigger block and attestation production.
pub fn start_service(&self) -> Result<(), String> {
let service = self.clone().service;
let log = self.context.log.clone();
let duration_to_next_slot = service
.slot_clock
.duration_to_next_slot()
.ok_or_else(|| "Unable to determine duration to next slot. Exiting.".to_string())?;
// set up the validator work interval - start at next slot and proceed every slot
let interval = {
// Set the interval to start at the next slot, and every slot after
let slot_duration = Duration::from_millis(service.spec.milliseconds_per_slot);
//TODO: Handle checked add correctly
Interval::new(Instant::now() + duration_to_next_slot, slot_duration)
};
if service.slot_clock.now().is_none() {
warn!(
log,
"Starting node prior to genesis";
);
}
info!(
log,
"Waiting for next slot";
"seconds_to_wait" => duration_to_next_slot.as_secs()
);
let (exit_signal, exit_fut) = exit_future::signal();
self.exit_signals.write().push(exit_signal);
/* kick off the core service */
self.context.executor.spawn(
interval
.map_err(move |e| {
error! {
log,
"Timer thread failed";
"error" => format!("{}", e)
}
})
.and_then(move |_| if exit_fut.is_live() { Ok(()) } else { Err(()) })
.for_each(move |_| {
// wait for node to process
std::thread::sleep(TIME_DELAY_FROM_SLOT);
// if a non-fatal error occurs, proceed to the next slot.
let _ignore_error = service.per_slot_execution();
// completed a slot process
Ok(())
}),
);
Ok(())
}
}
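// A hedged sketch (not part of this PR) of how a caller such as the
// `lighthouse` binary might drive the type above, given a runtime context and
// the clap matches produced by `cli_app()`. `example_start` is an
// illustrative name only.
fn example_start<T: EthSpec>(
    context: RuntimeContext<T>,
    matches: &ArgMatches,
) -> Result<(), String> {
    let validator_client = ProductionValidatorClient::new_from_cli(context, matches)?;
    // Spawns the per-slot interval onto the context's executor and returns
    // immediately; block/attestation production then runs in the background.
    validator_client.start_service()
}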
/// Parses the CLI arguments and attempts to load the client and eth2 configuration.
///
/// This is not a pure function, it reads from disk and may contact network servers.
fn get_configs(
cli_args: &ArgMatches,
mut log: &mut Logger,
) -> Result<(ClientConfig, Eth2Config), String> {
let mut client_config = ClientConfig::default();
client_config.apply_cli_args(&cli_args, &mut log)?;
if let Some(server) = cli_args.value_of("server") {
client_config.server = server.to_string();
}
if let Some(port) = cli_args.value_of("server-http-port") {
client_config.server_http_port = port
.parse::<u16>()
.map_err(|e| format!("Unable to parse HTTP port: {:?}", e))?;
}
if let Some(port) = cli_args.value_of("server-grpc-port") {
client_config.server_grpc_port = port
.parse::<u16>()
.map_err(|e| format!("Unable to parse gRPC port: {:?}", e))?;
}
info!(
*log,
"Beacon node connection info";
"grpc_port" => client_config.server_grpc_port,
"http_port" => client_config.server_http_port,
"server" => &client_config.server,
);
let (client_config, eth2_config) = match cli_args.subcommand() {
("testnet", Some(sub_cli_args)) => {
if cli_args.is_present("eth2-config") && sub_cli_args.is_present("bootstrap") {
return Err(
"Cannot specify --eth2-config and --bootstrap as it may result \
in ambiguity."
.into(),
);
}
process_testnet_subcommand(sub_cli_args, client_config, log)
}
_ => return Err("You must use the testnet command. See '--help'.".into()),
}?;
Ok((client_config, eth2_config))
}
/// Parses the `testnet` CLI subcommand.
///
/// This is not a pure function, it reads from disk and may contact network servers.
fn process_testnet_subcommand(
cli_args: &ArgMatches,
mut client_config: ClientConfig,
log: &Logger,
) -> Result<(ClientConfig, Eth2Config), String> {
let eth2_config = if cli_args.is_present("bootstrap") {
info!(log, "Connecting to bootstrap server");
let bootstrapper = Bootstrapper::connect(
format!(
"http://{}:{}",
client_config.server, client_config.server_http_port
),
&log,
)?;
let eth2_config = bootstrapper.eth2_config()?;
info!(
log,
"Bootstrapped eth2 config via HTTP";
"slot_time_millis" => eth2_config.spec.milliseconds_per_slot,
"spec" => &eth2_config.spec_constants,
);
eth2_config
} else {
match cli_args.value_of("spec") {
Some("mainnet") => Eth2Config::mainnet(),
Some("minimal") => Eth2Config::minimal(),
Some("interop") => Eth2Config::interop(),
_ => return Err("No --spec flag provided. See '--help'.".into()),
}
};
client_config.key_source = match cli_args.subcommand() {
("insecure", Some(sub_cli_args)) => {
let first = sub_cli_args
.value_of("first_validator")
.ok_or_else(|| "No first validator supplied")?
.parse::<usize>()
.map_err(|e| format!("Unable to parse first validator: {:?}", e))?;
let count = sub_cli_args
.value_of("validator_count")
.ok_or_else(|| "No validator count supplied")?
.parse::<usize>()
.map_err(|e| format!("Unable to parse validator count: {:?}", e))?;
info!(
log,
"Generating unsafe testing keys";
"first_validator" => first,
"count" => count
);
KeySource::TestingKeypairRange(first..first + count)
}
("interop-yaml", Some(sub_cli_args)) => {
let path = sub_cli_args
.value_of("path")
.ok_or_else(|| "No yaml path supplied")?
.parse::<PathBuf>()
.map_err(|e| format!("Unable to parse yaml path: {:?}", e))?;
info!(
log,
"Loading keypairs from interop YAML format";
"path" => format!("{:?}", path),
);
KeySource::YamlKeypairs(path)
}
_ => KeySource::Disk,
};
Ok((client_config, eth2_config))
}


@ -1,354 +0,0 @@
mod attestation_producer;
mod block_producer;
mod config;
mod duties;
pub mod error;
mod service;
mod signer;
use crate::config::{
Config as ClientConfig, KeySource, DEFAULT_SERVER, DEFAULT_SERVER_GRPC_PORT,
DEFAULT_SERVER_HTTP_PORT,
};
use crate::service::Service as ValidatorService;
use clap::{App, Arg, ArgMatches, SubCommand};
use eth2_config::Eth2Config;
use lighthouse_bootstrap::Bootstrapper;
use protos::services_grpc::ValidatorServiceClient;
use slog::{crit, error, info, o, Drain, Level, Logger};
use std::path::PathBuf;
use types::{InteropEthSpec, Keypair, MainnetEthSpec, MinimalEthSpec};
pub const DEFAULT_SPEC: &str = "minimal";
pub const DEFAULT_DATA_DIR: &str = ".lighthouse-validator";
pub const CLIENT_CONFIG_FILENAME: &str = "validator-client.toml";
pub const ETH2_CONFIG_FILENAME: &str = "eth2-spec.toml";
type Result<T> = core::result::Result<T, String>;
fn main() {
// Logging
let decorator = slog_term::TermDecorator::new().build();
let decorator = logging::AlignedTermDecorator::new(decorator, logging::MAX_MESSAGE_WIDTH);
let drain = slog_term::FullFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).build().fuse();
// CLI
let matches = App::new("Lighthouse Validator Client")
.version("0.0.1")
.author("Sigma Prime <contact@sigmaprime.io>")
.about("Eth 2.0 Validator Client")
.arg(
Arg::with_name("datadir")
.long("datadir")
.short("d")
.value_name("DIR")
.help("Data directory for keys and databases.")
.takes_value(true),
)
.arg(
Arg::with_name("logfile")
.long("logfile")
.value_name("logfile")
.help("File path where output will be written.")
.takes_value(true),
)
.arg(
Arg::with_name("spec")
.long("spec")
.value_name("TITLE")
.help("Specifies the default eth2 spec type.")
.takes_value(true)
.possible_values(&["mainnet", "minimal", "interop"])
.conflicts_with("eth2-config")
.global(true)
)
.arg(
Arg::with_name("eth2-config")
.long("eth2-config")
.short("e")
.value_name("TOML_FILE")
.help("Path to Ethereum 2.0 config and specification file (e.g., eth2_spec.toml).")
.takes_value(true),
)
.arg(
Arg::with_name("server")
.long("server")
.value_name("NETWORK_ADDRESS")
.help("Address to connect to BeaconNode.")
.default_value(DEFAULT_SERVER)
.takes_value(true),
)
.arg(
Arg::with_name("server-grpc-port")
.long("server-grpc-port")
.short("g")
.value_name("PORT")
.help("Port to use for gRPC API connection to the server.")
.default_value(DEFAULT_SERVER_GRPC_PORT)
.takes_value(true),
)
.arg(
Arg::with_name("server-http-port")
.long("server-http-port")
.short("h")
.value_name("PORT")
.help("Port to use for HTTP API connection to the server.")
.default_value(DEFAULT_SERVER_HTTP_PORT)
.takes_value(true),
)
.arg(
Arg::with_name("debug-level")
.long("debug-level")
.value_name("LEVEL")
.short("s")
.help("The title of the spec constants for chain config.")
.takes_value(true)
.possible_values(&["info", "debug", "trace", "warn", "error", "crit"])
.default_value("trace"),
)
/*
* The "testnet" sub-command.
*
* Used for starting testnet validator clients.
*/
.subcommand(SubCommand::with_name("testnet")
.about("Starts a testnet validator using INSECURE, predicatable private keys, based off the canonical \
validator index. ONLY USE FOR TESTING PURPOSES!")
.arg(
Arg::with_name("bootstrap")
.short("b")
.long("bootstrap")
.help("Connect to the RPC server to download the eth2_config via the HTTP API.")
)
.subcommand(SubCommand::with_name("insecure")
.about("Uses the standard, predicatable `interop` keygen method to produce a range \
of predicatable private keys and starts performing their validator duties.")
.arg(Arg::with_name("first_validator")
.value_name("VALIDATOR_INDEX")
.required(true)
.help("The first validator public key to be generated for this client."))
.arg(Arg::with_name("validator_count")
.value_name("COUNT")
.required(true)
.help("The number of validators."))
)
.subcommand(SubCommand::with_name("interop-yaml")
.about("Loads plain-text secret keys from YAML files. Expects the interop format defined
in the ethereum/eth2.0-pm repo.")
.arg(Arg::with_name("path")
.value_name("PATH")
.required(true)
.help("Path to a YAML file."))
)
)
.subcommand(SubCommand::with_name("sign_block")
.about("Connects to the beacon server, requests a new block (after providing reveal),\
and prints the signed block to standard out")
.arg(Arg::with_name("validator")
.value_name("VALIDATOR")
.required(true)
.help("The pubkey of the validator that should sign the block.")
)
)
.get_matches();
let drain = match matches.value_of("debug-level") {
Some("info") => drain.filter_level(Level::Info),
Some("debug") => drain.filter_level(Level::Debug),
Some("trace") => drain.filter_level(Level::Trace),
Some("warn") => drain.filter_level(Level::Warning),
Some("error") => drain.filter_level(Level::Error),
Some("crit") => drain.filter_level(Level::Critical),
_ => unreachable!("guarded by clap"),
};
let mut log = slog::Logger::root(drain.fuse(), o!());
if std::mem::size_of::<usize>() != 8 {
crit!(
log,
"Lighthouse only supports 64bit CPUs";
"detected" => format!("{}bit", std::mem::size_of::<usize>() * 8)
);
}
let (client_config, eth2_config) = match get_configs(&matches, &mut log) {
Ok(tuple) => tuple,
Err(e) => {
crit!(
log,
"Unable to initialize configuration";
"error" => e
);
return;
}
};
info!(
log,
"Starting validator client";
"datadir" => client_config.full_data_dir().expect("Unable to find datadir").to_str(),
);
let result = match eth2_config.spec_constants.as_str() {
"mainnet" => ValidatorService::<ValidatorServiceClient, Keypair, MainnetEthSpec>::start(
client_config,
eth2_config,
log.clone(),
),
"minimal" => ValidatorService::<ValidatorServiceClient, Keypair, MinimalEthSpec>::start(
client_config,
eth2_config,
log.clone(),
),
"interop" => ValidatorService::<ValidatorServiceClient, Keypair, InteropEthSpec>::start(
client_config,
eth2_config,
log.clone(),
),
other => {
crit!(log, "Unknown spec constants"; "title" => other);
return;
}
};
// start the validator service.
// this specifies the GRPC and signer type to use as the duty manager beacon node.
match result {
Ok(_) => info!(log, "Validator client shutdown successfully."),
Err(e) => crit!(log, "Validator client exited with error"; "error" => e.to_string()),
}
}
/// Parses the CLI arguments and attempts to load the client and eth2 configuration.
///
/// This is not a pure function, it reads from disk and may contact network servers.
pub fn get_configs(
cli_args: &ArgMatches,
mut log: &mut Logger,
) -> Result<(ClientConfig, Eth2Config)> {
let mut client_config = ClientConfig::default();
client_config.apply_cli_args(&cli_args, &mut log)?;
if let Some(server) = cli_args.value_of("server") {
client_config.server = server.to_string();
}
if let Some(port) = cli_args.value_of("server-http-port") {
client_config.server_http_port = port
.parse::<u16>()
.map_err(|e| format!("Unable to parse HTTP port: {:?}", e))?;
}
if let Some(port) = cli_args.value_of("server-grpc-port") {
client_config.server_grpc_port = port
.parse::<u16>()
.map_err(|e| format!("Unable to parse gRPC port: {:?}", e))?;
}
info!(
*log,
"Beacon node connection info";
"grpc_port" => client_config.server_grpc_port,
"http_port" => client_config.server_http_port,
"server" => &client_config.server,
);
let (client_config, eth2_config) = match cli_args.subcommand() {
("testnet", Some(sub_cli_args)) => {
if cli_args.is_present("eth2-config") && sub_cli_args.is_present("bootstrap") {
return Err(
"Cannot specify --eth2-config and --bootstrap as it may result \
in ambiguity."
.into(),
);
}
process_testnet_subcommand(sub_cli_args, client_config, log)
}
_ => return Err("You must use the testnet command. See '--help'.".into()),
}?;
Ok((client_config, eth2_config))
}
/// Parses the `testnet` CLI subcommand.
///
/// This is not a pure function, it reads from disk and may contact network servers.
fn process_testnet_subcommand(
cli_args: &ArgMatches,
mut client_config: ClientConfig,
log: &Logger,
) -> Result<(ClientConfig, Eth2Config)> {
let eth2_config = if cli_args.is_present("bootstrap") {
info!(log, "Connecting to bootstrap server");
let bootstrapper = Bootstrapper::connect(
format!(
"http://{}:{}",
client_config.server, client_config.server_http_port
),
&log,
)?;
let eth2_config = bootstrapper.eth2_config()?;
info!(
log,
"Bootstrapped eth2 config via HTTP";
"slot_time_millis" => eth2_config.spec.milliseconds_per_slot,
"spec" => &eth2_config.spec_constants,
);
eth2_config
} else {
match cli_args.value_of("spec") {
Some("mainnet") => Eth2Config::mainnet(),
Some("minimal") => Eth2Config::minimal(),
Some("interop") => Eth2Config::interop(),
_ => return Err("No --spec flag provided. See '--help'.".into()),
}
};
client_config.key_source = match cli_args.subcommand() {
("insecure", Some(sub_cli_args)) => {
let first = sub_cli_args
.value_of("first_validator")
.ok_or_else(|| "No first validator supplied")?
.parse::<usize>()
.map_err(|e| format!("Unable to parse first validator: {:?}", e))?;
let count = sub_cli_args
.value_of("validator_count")
.ok_or_else(|| "No validator count supplied")?
.parse::<usize>()
.map_err(|e| format!("Unable to parse validator count: {:?}", e))?;
info!(
log,
"Generating unsafe testing keys";
"first_validator" => first,
"count" => count
);
KeySource::TestingKeypairRange(first..first + count)
}
("interop-yaml", Some(sub_cli_args)) => {
let path = sub_cli_args
.value_of("path")
.ok_or_else(|| "No yaml path supplied")?
.parse::<PathBuf>()
.map_err(|e| format!("Unable to parse yaml path: {:?}", e))?;
info!(
log,
"Loading keypairs from interop YAML format";
"path" => format!("{:?}", path),
);
KeySource::YamlKeypairs(path)
}
_ => KeySource::Disk,
};
Ok((client_config, eth2_config))
}


@ -17,6 +17,7 @@ use crate::signer::Signer;
 use bls::Keypair;
 use eth2_config::Eth2Config;
 use grpcio::{ChannelBuilder, EnvBuilder};
+use parking_lot::RwLock;
 use protos::services::Empty;
 use protos::services_grpc::{
 AttestationServiceClient, BeaconBlockServiceClient, BeaconNodeServiceClient,
@ -26,18 +27,9 @@ use slog::{crit, error, info, trace, warn};
 use slot_clock::{SlotClock, SystemTimeSlotClock};
 use std::marker::PhantomData;
 use std::sync::Arc;
-use std::sync::RwLock;
-use std::time::{Duration, Instant};
-use tokio::prelude::*;
-use tokio::runtime::Builder;
-use tokio::timer::Interval;
-use tokio_timer::clock::Clock;
+use std::time::Duration;
 use types::{ChainSpec, Epoch, EthSpec, Fork, Slot};
-/// A fixed amount of time after a slot to perform operations. This gives the node time to complete
-/// per-slot processes.
-const TIME_DELAY_FROM_SLOT: Duration = Duration::from_millis(100);
 /// The validator service. This is the main thread that executes and maintains validator
 /// duties.
 //TODO: Generalize the BeaconNode types to use testing
@ -45,12 +37,12 @@ pub struct Service<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpe
 /// The node's current fork version we are processing on.
 fork: Fork,
 /// The slot clock for this service.
-slot_clock: SystemTimeSlotClock,
+pub slot_clock: SystemTimeSlotClock,
 /// The slot that is currently, or was previously processed by the service.
-current_slot: Option<Slot>,
+current_slot: RwLock<Option<Slot>>,
 slots_per_epoch: u64,
 /// The chain specification for this clients instance.
-spec: Arc<ChainSpec>,
+pub spec: Arc<ChainSpec>,
 /// The duties manager which maintains the state of when to perform actions.
 duties_manager: Arc<DutiesManager<B, S>>,
 // GRPC Clients
@ -63,12 +55,12 @@ pub struct Service<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpe
 _phantom: PhantomData<E>,
 }
-impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B, S, E> {
+impl<E: EthSpec> Service<ValidatorServiceClient, Keypair, E> {
 /// Initial connection to the beacon node to determine its properties.
 ///
 /// This tries to connect to a beacon node. Once connected, it initialised the gRPC clients
 /// and returns an instance of the service.
-fn initialize_service(
+pub fn initialize_service(
 client_config: ValidatorConfig,
 eth2_config: Eth2Config,
 log: slog::Logger,
@ -195,7 +187,7 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 Ok(Service {
 fork,
 slot_clock,
-current_slot: None,
+current_slot: RwLock::new(None),
 slots_per_epoch,
 spec,
 duties_manager,
@ -205,78 +197,12 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 _phantom: PhantomData,
 })
 }
+}
-/// Initialise the service then run the core thread.
-// TODO: Improve handling of generic BeaconNode types, to stub grpcClient
-pub fn start(
-client_config: ValidatorConfig,
-eth2_config: Eth2Config,
-log: slog::Logger,
-) -> error_chain::Result<()> {
-// connect to the node and retrieve its properties and initialize the gRPC clients
-let mut service = Service::<ValidatorServiceClient, Keypair, E>::initialize_service(
-client_config,
-eth2_config,
-log.clone(),
-)?;
-// we have connected to a node and established its parameters. Spin up the core service
-// set up the validator service runtime
-let mut runtime = Builder::new()
-.clock(Clock::system())
-.name_prefix("validator-client-")
-.build()
-.map_err(|e| format!("Tokio runtime failed: {}", e))?;
-let duration_to_next_slot = service
-.slot_clock
-.duration_to_next_slot()
-.ok_or_else::<error_chain::Error, _>(|| {
-"Unable to determine duration to next slot. Exiting.".into()
-})?;
-// set up the validator work interval - start at next slot and proceed every slot
-let interval = {
-// Set the interval to start at the next slot, and every slot after
-let slot_duration = Duration::from_millis(service.spec.milliseconds_per_slot);
-//TODO: Handle checked add correctly
-Interval::new(Instant::now() + duration_to_next_slot, slot_duration)
-};
-if service.slot_clock.now().is_none() {
-warn!(
-log,
-"Starting node prior to genesis";
-);
-}
-info!(
-log,
-"Waiting for next slot";
-"seconds_to_wait" => duration_to_next_slot.as_secs()
-);
-/* kick off the core service */
-runtime.block_on(
-interval
-.for_each(move |_| {
-// wait for node to process
-std::thread::sleep(TIME_DELAY_FROM_SLOT);
-// if a non-fatal error occurs, proceed to the next slot.
-let _ignore_error = service.per_slot_execution();
-// completed a slot process
-Ok(())
-})
-.map_err(|e| format!("Service thread failed: {:?}", e)),
-)?;
-// validator client exited
-Ok(())
-}
+impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B, S, E> {
 /// The execution logic that runs every slot.
 // Errors are logged to output, and core execution continues unless fatal errors occur.
-fn per_slot_execution(&mut self) -> error_chain::Result<()> {
+pub fn per_slot_execution(&self) -> error_chain::Result<()> {
 /* get the new current slot and epoch */
 self.update_current_slot()?;
@ -295,7 +221,7 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 }
 /// Updates the known current slot and epoch.
-fn update_current_slot(&mut self) -> error_chain::Result<()> {
+fn update_current_slot(&self) -> error_chain::Result<()> {
 let wall_clock_slot = self
 .slot_clock
 .now()
@ -304,11 +230,12 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 })?;
 let wall_clock_epoch = wall_clock_slot.epoch(self.slots_per_epoch);
+let mut current_slot = self.current_slot.write();
 // this is a non-fatal error. If the slot clock repeats, the node could
 // have been slow to process the previous slot and is now duplicating tasks.
 // We ignore duplicated but raise a critical error.
-if let Some(current_slot) = self.current_slot {
+if let Some(current_slot) = *current_slot {
 if wall_clock_slot <= current_slot {
 crit!(
 self.log,
@ -317,17 +244,18 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 return Err("Duplicate slot".into());
 }
 }
-self.current_slot = Some(wall_clock_slot);
+*current_slot = Some(wall_clock_slot);
 info!(self.log, "Processing"; "slot" => wall_clock_slot.as_u64(), "epoch" => wall_clock_epoch.as_u64());
 Ok(())
 }
 /// For all known validator keypairs, update any known duties from the beacon node.
-fn check_for_duties(&mut self) {
+fn check_for_duties(&self) {
 let cloned_manager = self.duties_manager.clone();
 let cloned_log = self.log.clone();
 let current_epoch = self
 .current_slot
+.read()
 .expect("The current slot must be updated before checking for duties")
 .epoch(self.slots_per_epoch);
@ -349,9 +277,10 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 }
 /// If there are any duties to process, spawn a separate thread and perform required actions.
-fn process_duties(&mut self) {
+fn process_duties(&self) {
 if let Some(work) = self.duties_manager.get_current_work(
 self.current_slot
+.read()
 .expect("The current slot must be updated before processing duties"),
 ) {
 trace!(
@ -368,6 +297,7 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 let fork = self.fork.clone();
 let slot = self
 .current_slot
+.read()
 .expect("The current slot must be updated before processing duties");
 let spec = self.spec.clone();
 let beacon_node = self.beacon_block_client.clone();
@ -399,6 +329,7 @@ impl<B: BeaconNodeDuties + 'static, S: Signer + 'static, E: EthSpec> Service<B,
 // spawns a thread to produce and sign an attestation
 let slot = self
 .current_slot
+.read()
 .expect("The current slot must be updated before processing duties");
 let signers = self.duties_manager.signers.clone(); // this is an arc
 let fork = self.fork.clone();
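
// The recurring theme in this diff is moving mutable state behind an
// `RwLock` so the per-slot methods can take `&self` rather than `&mut self`,
// letting a single `Arc<Service>` be shared with the tokio timer task.
// A stripped-down illustrative sketch of the pattern (names are hypothetical,
// not the real types; assumes the `parking_lot` crate added above):
use parking_lot::RwLock;
use std::sync::Arc;

struct SlotTracker {
    // Interior mutability: updated through &self.
    current_slot: RwLock<Option<u64>>,
}

impl SlotTracker {
    fn update(&self, wall_clock_slot: u64) -> Result<(), String> {
        let mut current = self.current_slot.write();
        // Refuse to process the same (or an earlier) slot twice.
        if let Some(prev) = *current {
            if wall_clock_slot <= prev {
                return Err("Duplicate slot".into());
            }
        }
        *current = Some(wall_clock_slot);
        Ok(())
    }
}

fn main() {
    let tracker = Arc::new(SlotTracker { current_slot: RwLock::new(None) });
    // Because update() takes &self, Arc clones can move into timer closures
    // while the original handle remains usable here.
    let clone = tracker.clone();
    assert!(clone.update(1).is_ok());
    assert!(tracker.update(1).is_err()); // the duplicate slot is rejected
}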