Merge branch 'master' into process-free-attestation

Grant Wuerker 2019-07-18 12:33:22 +02:00
commit b90edaf7f6
No known key found for this signature in database
GPG Key ID: F7EA56FDDA6C464F
60 changed files with 3857 additions and 715 deletions


@ -19,6 +19,7 @@ members = [
"eth2/utils/slot_clock", "eth2/utils/slot_clock",
"eth2/utils/ssz", "eth2/utils/ssz",
"eth2/utils/ssz_derive", "eth2/utils/ssz_derive",
"eth2/utils/ssz_types",
"eth2/utils/swap_or_not_shuffle", "eth2/utils/swap_or_not_shuffle",
"eth2/utils/tree_hash", "eth2/utils/tree_hash",
"eth2/utils/tree_hash_derive", "eth2/utils/tree_hash_derive",

README.md

@ -1,4 +1,6 @@
-# Lighthouse: an Ethereum Serenity client
+# Lighthouse: Ethereum 2.0
+
+An open-source Ethereum 2.0 client, written in Rust and maintained by Sigma Prime.

 [![Build Status]][Build Link] [![Doc Status]][Doc Link] [![Gitter Badge]][Gitter Link]
@ -9,24 +11,126 @@
 [Doc Status]: https://img.shields.io/badge/docs-master-blue.svg
 [Doc Link]: http://lighthouse-docs.sigmaprime.io/

-A work-in-progress, open-source implementation of the Serenity Beacon
-Chain, maintained by Sigma Prime.
-
-The "Serenity" project is also known as "Ethereum 2.0" or "Shasper".
-
-## Lighthouse Client
+## Overview
+
+Lighthouse is:
+
+- Fully open-source, licensed under Apache 2.0.
+- Security-focussed: fuzzing has begun, and security reviews are planned
+  for late-2019.
+- Built in [Rust](https://www.rust-lang.org/), a modern language providing unique safety guarantees and
+  excellent performance (comparable to C++).
+- Funded by various organisations, including Sigma Prime, the
+  Ethereum Foundation, Consensys and private individuals.
+- Actively working to promote an inter-operable, multi-client Ethereum 2.0.

-Lighthouse is an open-source Ethereum Serenity client that is currently under
-development. Designed as a Serenity-only client, Lighthouse will not
-re-implement the existing proof-of-work protocol. Maintaining a forward-focus
-on Ethereum Serenity ensures that Lighthouse avoids reproducing the high-quality
-work already undertaken by existing projects. As such, Lighthouse will connect
-to existing clients, such as
-[Geth](https://github.com/ethereum/go-ethereum) or
-[Parity-Ethereum](https://github.com/paritytech/parity-ethereum), via RPC to enable
-present-Ethereum functionality.
-### Further Reading
+## Development Status
+
+Lighthouse, like all Ethereum 2.0 clients, is a work-in-progress. Instructions
+are provided for running the client; however, these instructions are designed
+for developers and researchers working on the project. We do not (yet) provide
+user-facing functionality.
+
+Current development overview:
+
+- Specification `v0.6.3` implemented, optimized and passing test vectors.
+- Rust-native libp2p integrated, with Gossipsub.
+- Discv5 (P2P discovery mechanism) integration started.
+- Metrics via Prometheus.
+- Basic gRPC API, soon to be replaced with RESTful HTTP/JSON.
+
+### Roadmap
+
+- **July 2019**: `lighthouse-0.0.1` release: a stable testnet for developers with a useful
+  HTTP API.
+- **September 2019**: Inter-operability with other Ethereum 2.0 clients.
+- **October 2019**: Public, multi-client testnet with user-facing functionality.
+- **January 2020**: Production Beacon Chain testnet.
+## Usage
+
+Lighthouse consists of multiple binaries:
+
+- [`beacon_node/`](beacon_node/): produces and verifies blocks from the P2P
+  connected validators and the P2P network. Provides an API for external services to
+  interact with Ethereum 2.0.
+- [`validator_client/`](validator_client/): connects to a `beacon_node` and
+  performs the role of a proof-of-stake validator.
+- [`account_manager/`](account_manager/): a stand-alone component providing key
+  management and creation for validators.
+
+### Simple Local Testnet
+
+**Note: these instructions are intended for developers and researchers. We do
+not yet support end-users.**
+
+In this example we use the `account_manager` to create some keys, launch two
+`beacon_node` instances and connect a `validator_client` to one. The two
+`beacon_nodes` should stay in sync and build a Beacon Chain.
+
+First, clone this repository, [set up a development
+environment](docs/installation.md) and navigate to the root directory of this repository.
+Then run `$ cargo build --all --release`, navigate to the `target/release`
+directory, and follow the steps below:
+#### 1. Generate Validator Keys
+
+Generate 16 validator keys and store them in `~/.lighthouse-validator`:
+
+```
+$ ./account_manager -d ~/.lighthouse-validator generate_deterministic -i 0 -n 16
+```
+
+_Note: these keys are for development only. The secret keys are
+deterministically generated from low integers. Assume they are public
+knowledge._
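The key derivation behind this command looks roughly like the sketch below. `generate_deterministic_keypair` does exist in this repository (see `types::test_utils` elsewhere in this diff), but the loop and printout are illustrative only, not part of this commit.

```
use types::test_utils::generate_deterministic_keypair;

fn main() {
    // Mirrors `-i 0 -n 16` above: indices 0..16, where each low integer maps
    // to a well-known secret key. Never use these keys to secure real funds.
    for index in 0..16 {
        let keypair = generate_deterministic_keypair(index);
        println!("validator {}: {:?}", index, keypair.pk);
    }
}
```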
+#### 2. Start a Beacon Node
+
+This node will act as the boot node and provide an API for the
+`validator_client`.
+
+```
+$ ./beacon_node --recent-genesis --rpc
+```
+
+_Note: `--recent-genesis` defines the genesis time as either the start of the
+current hour, or half-way through the current hour (whichever is most recent).
+This makes it very easy to create a testnet, but does not allow nodes to
+connect if they were started in separate 30-minute windows._
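The half-hour snapping described in this note can be written in a few lines; this is a minimal sketch of the rule under the stated 30-minute-window assumption, not the client's actual implementation.

```
use std::time::{SystemTime, UNIX_EPOCH};

/// Returns the most recent 30-minute boundary, so any two nodes started
/// within the same half-hour window derive the same genesis time.
fn recent_genesis_time() -> u64 {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set after 1970")
        .as_secs();
    now - (now % (30 * 60))
}
```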
+#### 3. Start Another Beacon Node
+
+In another terminal window, start another node that will connect to the
+running node.
+
+The running node will display its ENR as a base64 string. This ENR, by default, has a target address of `127.0.0.1`, meaning that any new node will connect to this node via `127.0.0.1`. If a boot node should be connected to on a different address, it should be run with the `--discovery-address` CLI flag to specify how other nodes may connect to it.
+
+```
+$ ./beacon_node -r --boot-nodes <boot-node-ENR> --listen-address 127.0.0.1 --port 9001 --datadir /tmp/.lighthouse
+```
+
+Here `<boot-node-ENR>` is the ENR string displayed in the terminal by the first node. The ENR can also be obtained from its default directory `.lighthouse/network/enr.dat`.
+
+The `--datadir` flag tells this Beacon Node to store its files in a different
+directory. If you're on a system that doesn't have a `/tmp` dir (e.g., Mac,
+Windows), substitute this with any directory that has write access.
+
+Note that all subsequently created nodes can use the same boot-node ENR. Once connected to the boot node, all nodes should discover and connect with each other.
+
+#### 4. Start a Validator Client
+
+In a third terminal window, start a validator client:
+
+```
+$ ./validator_client
+```
+
+You should be able to observe the validator signing blocks, the boot node
+processing these blocks and publishing them to the other node. If you have
+issues, try restarting the beacon nodes to ensure they have the same genesis
+time. Alternatively, raise an issue and include your terminal output.
+## Further Reading
+
 - [About Lighthouse](docs/lighthouse.md): Goals, Ideology and Ethos surrounding
   this implementation.

@ -37,7 +141,7 @@ If you'd like some background on Sigma Prime, please see the [Lighthouse Update
 \#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
 [company website](https://sigmaprime.io).

-### Directory Structure
+## Directory Structure

 - [`beacon_node/`](beacon_node/): the "Beacon Node" binary and crates exclusively
   associated with it.

@ -50,115 +154,9 @@ If you'd like some background on Sigma Prime, please see the [Lighthouse Update
 - [`validator_client/`](validator_client/): the "Validator Client" binary and crates exclusively
   associated with it.

-### Components
+## Contributing

-The following list describes some of the components actively under development
-by the team:
+**Lighthouse welcomes contributors.**
-- **BLS cryptography**: Lighthouse presently uses the [Apache
-  Milagro](https://milagro.apache.org/) cryptography library to create and
-  verify BLS aggregate signatures. BLS signatures are core to Serenity as they
-  allow the signatures of many validators to be compressed into a constant 96
-  bytes and efficiently verified. The Lighthouse project is presently
-  maintaining its own [BLS aggregates
-  library](https://github.com/sigp/signature-schemes), gratefully forked from
-  [@lovesh](https://github.com/lovesh).
-- **DoS-resistant block pre-processing**: Processing blocks in proof-of-stake
-  is more resource intensive than proof-of-work. As such, clients need to
-  ensure that bad blocks can be rejected as efficiently as possible. At
-  present, blocks having 10 million ETH staked can be processed in 0.006
-  seconds, and invalid blocks are rejected even more quickly. See
-  [issue #103](https://github.com/ethereum/beacon_chain/issues/103) on
-  [ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
-- **P2P networking**: Serenity will likely use the [libp2p
-  framework](https://libp2p.io/). Lighthouse is working alongside
-  [Parity](https://www.parity.io/) to ensure
-  [libp2p-rust](https://github.com/libp2p/rust-libp2p) is fit-for-purpose.
-- **Validator duties**: The project involves development of "validator
-  services" for users who wish to stake ETH. To fulfill their duties,
-  validators require a consistent view of the chain and the ability to vote
-  upon blocks from both shard and beacon chains.
-- **New serialization formats**: Lighthouse is working alongside researchers
-  from the Ethereum Foundation to develop *simpleserialize* (SSZ), a
-  purpose-built serialization format for sending information across a network.
-  Check out the [SSZ
-  implementation](https://github.com/ethereum/eth2.0-specs/blob/00aa553fee95963b74fbec84dbd274d7247b8a0e/specs/simple-serialize.md)
-  and this
-  [research](https://github.com/sigp/serialization_sandbox/blob/report/report/serialization_report.md)
-  on serialization formats for more information. (A sketch follows this list.)
-- **Fork-choice**: The current fork choice rule is
-  [*LMD Ghost*](https://vitalik.ca/general/2018/12/05/cbc_casper.html#lmd-ghost),
-  which effectively takes the latest messages and forms the canonical chain using
-  the [GHOST](https://eprint.iacr.org/2013/881.pdf) mechanism. (A sketch follows
-  this list.)
-- **Efficient state transition logic**: State transition logic governs
-  updates to the validator set as validators log in/out, penalizes/rewards
-  validators, rotates validators across shards, and implements other core tasks.
-- **Fuzzing and testing environments**: Implementation of lab environments with
-  continuous integration (CI) workflows, providing automated security analysis.
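Two of the bullets above lend themselves to short sketches. First, SSZ round-tripping with the `ssz`/`ssz_derive` crates used elsewhere in this diff; the `Example` struct and its fields are assumptions for illustration, not spec types.

```
use ssz::{ssz_encode, Decode};
use ssz_derive::{Decode, Encode};

#[derive(Debug, PartialEq, Encode, Decode)]
struct Example {
    slot: u64,
    data: Vec<u8>,
}

fn main() {
    let original = Example { slot: 42, data: vec![1, 2, 3] };
    // Encode to SSZ bytes, then decode and confirm the value survives.
    let bytes = ssz_encode(&original);
    let decoded = Example::from_ssz_bytes(&bytes).expect("valid SSZ bytes");
    assert_eq!(original, decoded);
}
```

Second, the GHOST walk that LMD GHOST performs over latest messages: start at the justified block and repeatedly descend into the child whose subtree carries the greatest vote weight. All types here are assumed for the sketch; this is not Lighthouse's fork-choice API.

```
use std::collections::HashMap;

type Root = [u8; 32];

/// Walks from `justified_root` towards the leaves, always taking the child
/// with the greatest accumulated vote weight in its subtree.
fn lmd_ghost_head(
    children: &HashMap<Root, Vec<Root>>,
    subtree_weight: &HashMap<Root, u64>,
    justified_root: Root,
) -> Root {
    let mut head = justified_root;
    while let Some(kids) = children.get(&head) {
        match kids
            .iter()
            // Real implementations break ties deterministically; this sketch does not.
            .max_by_key(|child| subtree_weight.get(*child).copied().unwrap_or(0))
        {
            Some(best) => head = *best,
            None => break, // reached a leaf
        }
    }
    head
}
```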
-In addition to these components we are also working on database schemas, RPC
-frameworks, specification development, database optimizations (e.g.,
-bloom-filters), and tons of other interesting stuff (at least we think so).
-
-### Running
-
-**NOTE: The cryptography libraries used in this implementation are
-experimental. As such all cryptography is assumed to be insecure.**
-
-This code-base is still very much under-development and does not provide any
-user-facing functionality. For developers and researchers, there are several
-tests and benchmarks which may be of interest.
-
-A few basic steps are needed to get set up:
-
-1. Install [rustup](https://rustup.rs/). It's a toolchain manager for Rust (Linux | macOS | Windows). For installation, download the script with `$ curl -f https://sh.rustup.rs > rustup.sh`, review its content (e.g. `$ less ./rustup.sh`) and run the script `$ ./rustup.sh` (you may need to change the permissions to allow execution, i.e. `$ chmod +x rustup.sh`).
-2. (Linux & MacOS) To configure your current shell, run: `$ source $HOME/.cargo/env`.
-3. Use the command `rustup show` to get information about the Rust installation. You should see that the
-   active toolchain is the stable version.
-4. Run `rustc --version` to check the installation and version of rust.
-   - Updates can be performed using `rustup update`.
-5. Install build dependencies (Arch packages are listed here; your distribution will likely be similar):
-   - `clang`: required by RocksDB.
-   - `protobuf`: required for protobuf serialization (gRPC).
-   - `cmake`: required for building protobuf.
-   - `git-lfs`: the Git extension for [Large File Support](https://git-lfs.github.com/) (required for the EF tests submodule).
-6. Navigate to the working directory.
-7. If you haven't already, clone the repository with submodules: `git clone --recursive https://github.com/sigp/lighthouse`.
-   Alternatively, run `git submodule init` in a repository which was cloned without submodules.
-8. Run the tests with `cargo test --all --release`. This builds everything and runs all the required
-   test cases. If you are doing this for the first time, you can grab a coffee in the meantime; it takes
-   a while to build, compile and pass all the tests. If everything passes without error, it's time to get
-   your hands dirty. If there is an error, please raise an
-   [issue](https://github.com/sigp/lighthouse/issues) and we will help you.
-9. As an alternative to, or in addition to, the above step, you may also run benchmarks with
-   `cargo bench --all`.
-
-##### Note:
-Lighthouse presently runs on Rust `stable`; however, benchmarks currently require the
-`nightly` version.
-
-##### Note for Windows users:
-Perl may also be required to build Lighthouse. You can install [Strawberry Perl](http://strawberryperl.com/),
-or alternatively use a choco install command: `choco install strawberryperl`.
-
-Additionally, the dependency `protoc-grpcio v0.3.1` is reported to have issues compiling in Windows. You can specify
-a known working version by editing the version in `protos/Cargo.toml`'s "build-dependencies" section to
-`protoc-grpcio = "<=0.3.0"`.
-
-### Contributing
-
-**Lighthouse welcomes contributors with open-arms.**
-
-If you would like to learn more about Ethereum Serenity and/or
-[Rust](https://www.rust-lang.org/), we are more than happy to on-board you
-and assign you some tasks. We aim to be as accepting and understanding as
-possible; we are more than happy to up-skill contributors in exchange for their
-assistance with the project.
-
-Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're
-always looking for the best way to implement things and welcome all
-respectful criticisms.

 If you are looking to contribute, please head to our
 [onboarding documentation](https://github.com/sigp/lighthouse/blob/master/docs/onboarding.md).

@ -173,10 +171,9 @@ your support!

 ## Contact

 The best place for discussion is the [sigp/lighthouse gitter](https://gitter.im/sigp/lighthouse).
+Ping @paulhauner or @AgeManning to get the quickest response.

-# Donations
+## Donations

-If you support the cause, we could certainly use donations to help fund development:
+If you support the cause, we accept donations to help fund development:

 `0x25c4a76E7d118705e7Ea2e9b7d8C59930d8aCD3b` (donation.sigmaprime.eth)


@ -13,3 +13,4 @@ slog-async = "^2.3.0"
 validator_client = { path = "../validator_client" }
 types = { path = "../eth2/types" }
 eth2_config = { path = "../eth2/utils/eth2_config" }
+dirs = "2.0.1"


@ -1,12 +1,12 @@
 use bls::Keypair;
 use clap::{App, Arg, SubCommand};
-use eth2_config::get_data_dir;
 use slog::{crit, debug, info, o, Drain};
+use std::fs;
 use std::path::PathBuf;
 use types::test_utils::generate_deterministic_keypair;
 use validator_client::Config as ValidatorClientConfig;

-pub const DEFAULT_DATA_DIR: &str = ".lighthouse-account-manager";
+pub const DEFAULT_DATA_DIR: &str = ".lighthouse-validator";
 pub const CLIENT_CONFIG_FILENAME: &str = "account-manager.toml";

 fn main() {

@ -14,13 +14,20 @@ fn main() {
     let decorator = slog_term::TermDecorator::new().build();
     let drain = slog_term::CompactFormat::new(decorator).build().fuse();
     let drain = slog_async::Async::new(drain).build().fuse();
-    let log = slog::Logger::root(drain, o!());
+    let mut log = slog::Logger::root(drain, o!());

     // CLI
     let matches = App::new("Lighthouse Accounts Manager")
         .version("0.0.1")
         .author("Sigma Prime <contact@sigmaprime.io>")
         .about("Eth 2.0 Accounts Manager")
+        .arg(
+            Arg::with_name("logfile")
+                .long("logfile")
+                .value_name("logfile")
+                .help("File path where output will be written.")
+                .takes_value(true),
+        )
         .arg(
             Arg::with_name("datadir")
                 .long("datadir")

@ -61,31 +68,42 @@
         )
         .get_matches();

-    let data_dir = match get_data_dir(&matches, PathBuf::from(DEFAULT_DATA_DIR)) {
-        Ok(dir) => dir,
-        Err(e) => {
-            crit!(log, "Failed to initialize data dir"; "error" => format!("{:?}", e));
-            return;
+    let data_dir = match matches
+        .value_of("datadir")
+        .and_then(|v| Some(PathBuf::from(v)))
+    {
+        Some(v) => v,
+        None => {
+            // use the default
+            let mut default_dir = match dirs::home_dir() {
+                Some(v) => v,
+                None => {
+                    crit!(log, "Failed to find a home directory");
+                    return;
+                }
+            };
+            default_dir.push(DEFAULT_DATA_DIR);
+            PathBuf::from(default_dir)
         }
     };

-    let mut client_config = ValidatorClientConfig::default();
+    // create the directory if needed
+    match fs::create_dir_all(&data_dir) {
+        Ok(_) => {}
+        Err(e) => {
+            crit!(log, "Failed to initialize data dir"; "error" => format!("{}", e));
+            return;
+        }
+    }

-    if let Err(e) = client_config.apply_cli_args(&matches) {
-        crit!(log, "Failed to apply CLI args"; "error" => format!("{:?}", e));
-        return;
-    };
+    let mut client_config = ValidatorClientConfig::default();

     // Ensure the `data_dir` in the config matches that supplied to the CLI.
     client_config.data_dir = data_dir.clone();

-    // Update the client config with any CLI args.
-    match client_config.apply_cli_args(&matches) {
-        Ok(()) => (),
-        Err(s) => {
-            crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => s);
-            return;
-        }
-    };
+    if let Err(e) = client_config.apply_cli_args(&matches, &mut log) {
+        crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => format!("{:?}", e));
+        return;
+    };

     // Log configuration


@ -13,7 +13,7 @@ client = { path = "client" }
 version = { path = "version" }
 clap = "2.32.0"
 serde = "1.0"
-slog = { version = "^2.2.3" , features = ["max_level_trace", "release_max_level_debug"] }
+slog = { version = "^2.2.3" , features = ["max_level_trace"] }
 slog-term = "^2.4.0"
 slog-async = "^2.3.0"
 ctrlc = { version = "3.1.1", features = ["termination"] }

@ -22,3 +22,5 @@ tokio-timer = "0.2.10"
 futures = "0.1.25"
 exit-future = "0.1.3"
 state_processing = { path = "../eth2/state_processing" }
+env_logger = "0.6.1"
+dirs = "2.0.1"


@ -18,7 +18,7 @@ use state_processing::{
     per_slot_processing, BlockProcessingError, common
 };
 use std::sync::Arc;
-use store::iter::{BlockIterator, BlockRootsIterator, StateRootsIterator};
+use store::iter::{BestBlockRootsIterator, BlockIterator, BlockRootsIterator, StateRootsIterator};
 use store::{Error as DBError, Store};
 use tree_hash::TreeHash;
 use types::*;

@ -226,6 +226,19 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
         BlockRootsIterator::owned(self.store.clone(), self.state.read().clone(), slot)
     }

+    /// Iterates in reverse (highest to lowest slot) through all block roots from largest
+    /// `slot <= beacon_state.slot` through to genesis.
+    ///
+    /// Returns `None` for roots prior to genesis or when there is an error reading from `Store`.
+    ///
+    /// Contains duplicate roots when skip slots are encountered.
+    pub fn rev_iter_best_block_roots(
+        &self,
+        slot: Slot,
+    ) -> BestBlockRootsIterator<T::EthSpec, T::Store> {
+        BestBlockRootsIterator::owned(self.store.clone(), self.state.read().clone(), slot)
+    }
+
     /// Iterates in reverse (highest to lowest slot) through all state roots from `slot` through to
     /// genesis.
     ///


@ -191,7 +191,7 @@ where
     fn get_state_at_slot(&self, state_slot: Slot) -> BeaconState<E> {
         let state_root = self
             .chain
-            .rev_iter_state_roots(self.chain.current_state().slot)
+            .rev_iter_state_roots(self.chain.current_state().slot - 1)
             .find(|(_hash, slot)| *slot == state_slot)
             .map(|(hash, _slot)| hash)
             .expect("could not find state root");


@ -9,17 +9,21 @@ beacon_chain = { path = "../beacon_chain" }
 network = { path = "../network" }
 store = { path = "../store" }
 http_server = { path = "../http_server" }
+eth2-libp2p = { path = "../eth2-libp2p" }
 rpc = { path = "../rpc" }
 prometheus = "^0.6"
 types = { path = "../../eth2/types" }
 tree_hash = { path = "../../eth2/utils/tree_hash" }
 eth2_config = { path = "../../eth2/utils/eth2_config" }
 slot_clock = { path = "../../eth2/utils/slot_clock" }
-serde = "1.0"
+serde = "1.0.93"
 serde_derive = "1.0"
 error-chain = "0.12.0"
-slog = "^2.2.3"
 eth2_ssz = { path = "../../eth2/utils/ssz" }
+slog = { version = "^2.2.3" , features = ["max_level_trace"] }
+slog-async = "^2.3.0"
+slog-json = "^2.3"
+slog-term = "^2.4.0"
 tokio = "0.1.15"
 clap = "2.32.0"
 dirs = "1.0.3"


@ -2,36 +2,40 @@ use clap::ArgMatches;
 use http_server::HttpServerConfig;
 use network::NetworkConfig;
 use serde_derive::{Deserialize, Serialize};
-use std::fs;
+use slog::{info, o, Drain};
+use std::fs::{self, OpenOptions};
 use std::path::PathBuf;
+use std::sync::Mutex;

 /// The core configuration of a Lighthouse beacon node.
 #[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct ClientConfig {
+pub struct Config {
     pub data_dir: PathBuf,
     pub db_type: String,
     db_name: String,
+    pub log_file: PathBuf,
     pub network: network::NetworkConfig,
     pub rpc: rpc::RPCConfig,
     pub http: HttpServerConfig,
 }

-impl Default for ClientConfig {
+impl Default for Config {
     fn default() -> Self {
         Self {
             data_dir: PathBuf::from(".lighthouse"),
+            log_file: PathBuf::from(""),
             db_type: "disk".to_string(),
             db_name: "chain_db".to_string(),
             // Note: there are no default bootnodes specified.
             // Once bootnodes are established, add them here.
-            network: NetworkConfig::new(vec![]),
+            network: NetworkConfig::new(),
             rpc: rpc::RPCConfig::default(),
             http: HttpServerConfig::default(),
         }
     }
 }

-impl ClientConfig {
+impl Config {
     /// Returns the path to which the client may initialize an on-disk database.
     pub fn db_path(&self) -> Option<PathBuf> {
         self.data_dir()

@ -45,23 +49,64 @@ impl ClientConfig {
         Some(path)
     }

+    // Update the logger to output in JSON to specified file
+    fn update_logger(&mut self, log: &mut slog::Logger) -> Result<(), &'static str> {
+        let file = OpenOptions::new()
+            .create(true)
+            .write(true)
+            .truncate(true)
+            .open(&self.log_file);
+
+        if file.is_err() {
+            return Err("Cannot open log file");
+        }
+        let file = file.unwrap();
+
+        if let Some(file) = self.log_file.to_str() {
+            info!(
+                *log,
+                "Log file specified, output will now be written to {} in json.", file
+            );
+        } else {
+            info!(
+                *log,
+                "Log file specified output will now be written in json"
+            );
+        }
+
+        let drain = Mutex::new(slog_json::Json::default(file)).fuse();
+        let drain = slog_async::Async::new(drain).build().fuse();
+        *log = slog::Logger::root(drain, o!());
+
+        Ok(())
+    }
+
     /// Apply the following arguments to `self`, replacing values if they are specified in `args`.
     ///
     /// Returns an error if arguments are obviously invalid. May succeed even if some values are
     /// invalid.
-    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), &'static str> {
+    pub fn apply_cli_args(
+        &mut self,
+        args: &ArgMatches,
+        log: &mut slog::Logger,
+    ) -> Result<(), String> {
         if let Some(dir) = args.value_of("datadir") {
             self.data_dir = PathBuf::from(dir);
         };

         if let Some(dir) = args.value_of("db") {
             self.db_type = dir.to_string();
-        }
+        };

         self.network.apply_cli_args(args)?;
         self.rpc.apply_cli_args(args)?;
         self.http.apply_cli_args(args)?;

+        if let Some(log_file) = args.value_of("logfile") {
+            self.log_file = PathBuf::from(log_file);
+            self.update_logger(log)?;
+        };
+
         Ok(())
     }
 }


@ -1,7 +1,7 @@
 extern crate slog;

 mod beacon_chain_types;
-mod client_config;
+mod config;

 pub mod error;
 pub mod notifier;

@ -21,7 +21,7 @@ use tokio::timer::Interval;
 pub use beacon_chain::BeaconChainTypes;
 pub use beacon_chain_types::ClientType;
 pub use beacon_chain_types::InitialiseBeaconChain;
-pub use client_config::ClientConfig;
+pub use config::Config as ClientConfig;
 pub use eth2_config::Eth2Config;

 /// Main beacon node client service. This provides the connection and initialisation of the clients


@ -3,39 +3,34 @@ use beacon_chain::BeaconChainTypes;
 use exit_future::Exit;
 use futures::{Future, Stream};
 use slog::{debug, o};
-use std::sync::{Arc, Mutex};
 use std::time::{Duration, Instant};
 use tokio::runtime::TaskExecutor;
 use tokio::timer::Interval;

-/// Thread that monitors the client and reports useful statistics to the user.
+/// The interval between heartbeat events.
+pub const HEARTBEAT_INTERVAL_SECONDS: u64 = 5;
+
+/// Spawns a thread that can be used to run code periodically, on `HEARTBEAT_INTERVAL_SECONDS`
+/// durations.
+///
+/// Presently unused, but remains for future use.
 pub fn run<T: BeaconChainTypes + Send + Sync + 'static>(
     client: &Client<T>,
     executor: TaskExecutor,
     exit: Exit,
 ) {
     // notification heartbeat
-    let interval = Interval::new(Instant::now(), Duration::from_secs(5));
+    let interval = Interval::new(
+        Instant::now(),
+        Duration::from_secs(HEARTBEAT_INTERVAL_SECONDS),
+    );

     let _log = client.log.new(o!("Service" => "Notifier"));

-    // TODO: Debugging only
-    let counter = Arc::new(Mutex::new(0));
-    let network = client.network.clone();
-
-    // build heartbeat logic here
-    let heartbeat = move |_| {
-        //debug!(log, "Temp heartbeat output");
-        //TODO: Remove this logic. Testing only
-        let mut count = counter.lock().unwrap();
-        *count += 1;
-
-        if *count % 5 == 0 {
-            // debug!(log, "Sending Message");
-            network.send_message();
-        }
+    let heartbeat = |_| {
+        // There is not presently any heartbeat logic.
+        //
+        // We leave this function empty for future use.
         Ok(())
     };


@ -7,15 +7,18 @@ edition = "2018"
[dependencies] [dependencies]
beacon_chain = { path = "../beacon_chain" } beacon_chain = { path = "../beacon_chain" }
clap = "2.32.0" clap = "2.32.0"
# SigP repository until PR is merged #SigP repository
libp2p = { git = "https://github.com/SigP/rust-libp2p", rev = "b3c32d9a821ae6cc89079499cc6e8a6bab0bffc3" } libp2p = { git = "https://github.com/SigP/rust-libp2p", rev = "be5710bbde69d8c5be732c13ba64239e2f370a7b" }
enr = { git = "https://github.com/SigP/rust-libp2p/", rev = "be5710bbde69d8c5be732c13ba64239e2f370a7b", features = ["serde"] }
types = { path = "../../eth2/types" } types = { path = "../../eth2/types" }
serde = "1.0" serde = "1.0"
serde_derive = "1.0" serde_derive = "1.0"
eth2_ssz = { path = "../../eth2/utils/ssz" } eth2_ssz = { path = "../../eth2/utils/ssz" }
eth2_ssz_derive = { path = "../../eth2/utils/ssz_derive" } eth2_ssz_derive = { path = "../../eth2/utils/ssz_derive" }
slog = "2.4.1" slog = { version = "^2.4.1" , features = ["max_level_trace"] }
version = { path = "../version" } version = { path = "../version" }
tokio = "0.1.16" tokio = "0.1.16"
futures = "0.1.25" futures = "0.1.25"
error-chain = "0.12.0" error-chain = "0.12.0"
tokio-timer = "0.2.10"
dirs = "2.0.1"


@ -1,45 +1,72 @@
+use crate::discovery::Discovery;
 use crate::rpc::{RPCEvent, RPCMessage, Rpc};
-use crate::NetworkConfig;
+use crate::{error, NetworkConfig};
+use crate::{Topic, TopicHash};
 use futures::prelude::*;
 use libp2p::{
     core::{
+        identity::Keypair,
         swarm::{NetworkBehaviourAction, NetworkBehaviourEventProcess},
-        PublicKey,
     },
+    discv5::Discv5Event,
     gossipsub::{Gossipsub, GossipsubEvent},
-    identify::{protocol::IdentifyInfo, Identify, IdentifyEvent},
-    ping::{Ping, PingEvent},
+    ping::{Ping, PingConfig, PingEvent},
     tokio_io::{AsyncRead, AsyncWrite},
     NetworkBehaviour, PeerId,
 };
-use slog::{debug, o, trace, warn};
+use slog::{o, trace, warn};
 use ssz::{ssz_encode, Decode, DecodeError, Encode};
+use std::num::NonZeroU32;
+use std::time::Duration;
 use types::{Attestation, BeaconBlock};
-use types::{Topic, TopicHash};

-/// Builds the network behaviour for the libp2p Swarm.
-/// Implements gossipsub message routing.
+/// Builds the network behaviour that manages the core protocols of eth2.
+/// This core behaviour is managed by `Behaviour` which adds peer management to all core
+/// behaviours.
 #[derive(NetworkBehaviour)]
 #[behaviour(out_event = "BehaviourEvent", poll_method = "poll")]
 pub struct Behaviour<TSubstream: AsyncRead + AsyncWrite> {
     /// The routing pub-sub mechanism for eth2.
     gossipsub: Gossipsub<TSubstream>,
-    // TODO: Add Kademlia for peer discovery
-    /// The events generated by this behaviour to be consumed in the swarm poll.
+    /// The serenity RPC specified in the wire-0 protocol.
     serenity_rpc: Rpc<TSubstream>,
-    /// Allows discovery of IP addresses for peers on the network.
-    identify: Identify<TSubstream>,
     /// Keep regular connection to peers and disconnect if absent.
-    // TODO: Keepalive, likely remove this later.
-    // TODO: Make the ping time customizeable.
     ping: Ping<TSubstream>,
+    /// Kademlia for peer discovery.
+    discovery: Discovery<TSubstream>,
     #[behaviour(ignore)]
+    /// The events generated by this behaviour to be consumed in the swarm poll.
     events: Vec<BehaviourEvent>,
     /// Logger for behaviour actions.
     #[behaviour(ignore)]
     log: slog::Logger,
 }

+impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
+    pub fn new(
+        local_key: &Keypair,
+        net_conf: &NetworkConfig,
+        log: &slog::Logger,
+    ) -> error::Result<Self> {
+        let local_peer_id = local_key.public().clone().into_peer_id();
+        let behaviour_log = log.new(o!());
+
+        let ping_config = PingConfig::new()
+            .with_timeout(Duration::from_secs(30))
+            .with_interval(Duration::from_secs(20))
+            .with_max_failures(NonZeroU32::new(2).expect("2 != 0"))
+            .with_keep_alive(false);
+
+        Ok(Behaviour {
+            serenity_rpc: Rpc::new(log),
+            gossipsub: Gossipsub::new(local_peer_id.clone(), net_conf.gs_config.clone()),
+            discovery: Discovery::new(local_key, net_conf, log)?,
+            ping: Ping::new(ping_config),
+            events: Vec::new(),
+            log: behaviour_log,
+        })
+    }
+}
+
 // Implement the NetworkBehaviourEventProcess trait so that we can derive NetworkBehaviour for Behaviour
 impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<GossipsubEvent>
     for Behaviour<TSubstream>

@ -89,30 +116,6 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<RPCMessage
     }
 }

-impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<IdentifyEvent>
-    for Behaviour<TSubstream>
-{
-    fn inject_event(&mut self, event: IdentifyEvent) {
-        match event {
-            IdentifyEvent::Identified {
-                peer_id, mut info, ..
-            } => {
-                if info.listen_addrs.len() > 20 {
-                    debug!(
-                        self.log,
-                        "More than 20 peers have been identified, truncating"
-                    );
-                    info.listen_addrs.truncate(20);
-                }
-                self.events
-                    .push(BehaviourEvent::Identified(peer_id, Box::new(info)));
-            }
-            IdentifyEvent::Error { .. } => {}
-            IdentifyEvent::SendBack { .. } => {}
-        }
-    }
-}
-
 impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<PingEvent>
     for Behaviour<TSubstream>
 {

@ -122,25 +125,6 @@ impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<PingEvent>
 }

 impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
-    pub fn new(local_public_key: PublicKey, net_conf: &NetworkConfig, log: &slog::Logger) -> Self {
-        let local_peer_id = local_public_key.clone().into_peer_id();
-        let identify_config = net_conf.identify_config.clone();
-        let behaviour_log = log.new(o!());
-
-        Behaviour {
-            gossipsub: Gossipsub::new(local_peer_id, net_conf.gs_config.clone()),
-            serenity_rpc: Rpc::new(log),
-            identify: Identify::new(
-                identify_config.version,
-                identify_config.user_agent,
-                local_public_key,
-            ),
-            ping: Ping::new(),
-            events: Vec::new(),
-            log: behaviour_log,
-        }
-    }
-
     /// Consumes the events list when polled.
     fn poll<TBehaviourIn>(
         &mut self,

@ -153,18 +137,23 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
     }
 }

+impl<TSubstream: AsyncRead + AsyncWrite> NetworkBehaviourEventProcess<Discv5Event>
+    for Behaviour<TSubstream>
+{
+    fn inject_event(&mut self, _event: Discv5Event) {
+        // discv5 has no events to inject
+    }
+}
+
 /// Implements the combined behaviour for the libp2p service.
 impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
+    /* Pubsub behaviour functions */
+
     /// Subscribes to a gossipsub topic.
     pub fn subscribe(&mut self, topic: Topic) -> bool {
         self.gossipsub.subscribe(topic)
     }

-    /// Sends an RPC Request/Response via the RPC protocol.
-    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
-        self.serenity_rpc.send_rpc(peer_id, rpc_event);
-    }
-
     /// Publishes a message on the pubsub (gossipsub) behaviour.
     pub fn publish(&mut self, topics: Vec<Topic>, message: PubsubMessage) {
         let message_bytes = ssz_encode(&message);

@ -172,14 +161,19 @@ impl<TSubstream: AsyncRead + AsyncWrite> Behaviour<TSubstream> {
             self.gossipsub.publish(topic, message_bytes.clone());
         }
     }

+    /* Eth2 RPC behaviour functions */
+
+    /// Sends an RPC Request/Response via the RPC protocol.
+    pub fn send_rpc(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
+        self.serenity_rpc.send_rpc(peer_id, rpc_event);
+    }
 }

 /// The types of events that can be obtained from polling the behaviour.
 pub enum BehaviourEvent {
     RPC(PeerId, RPCEvent),
     PeerDialed(PeerId),
-    Identified(PeerId, Box<IdentifyInfo>),
-    // TODO: This is a stub at the moment
     GossipMessage {
         source: PeerId,
         topics: Vec<TopicHash>,


@ -1,89 +1,129 @@
 use clap::ArgMatches;
+use enr::Enr;
 use libp2p::gossipsub::{GossipsubConfig, GossipsubConfigBuilder};
 use serde_derive::{Deserialize, Serialize};
-use types::multiaddr::{Error as MultiaddrError, Multiaddr};
+use std::path::PathBuf;
+use std::time::Duration;

+/// The beacon node topic string to subscribe to.
+pub const BEACON_PUBSUB_TOPIC: &str = "beacon_block";
+pub const BEACON_ATTESTATION_TOPIC: &str = "beacon_attestation";
+pub const SHARD_TOPIC_PREFIX: &str = "shard";

 #[derive(Clone, Debug, Serialize, Deserialize)]
 #[serde(default)]
 /// Network configuration for lighthouse.
 pub struct Config {
+    /// Data directory where node's keyfile is stored
+    pub network_dir: PathBuf,

     /// IP address to listen on.
-    listen_addresses: Vec<String>,
+    pub listen_address: std::net::IpAddr,

+    /// The TCP port that libp2p listens on.
+    pub libp2p_port: u16,
+
+    /// The address to broadcast to peers about which address we are listening on.
+    pub discovery_address: std::net::IpAddr,
+
+    /// UDP port that discovery listens on.
+    pub discovery_port: u16,
+
+    /// Target number of connected peers.
+    pub max_peers: usize,
+
     /// Gossipsub configuration parameters.
     #[serde(skip)]
     pub gs_config: GossipsubConfig,

-    /// Configuration parameters for node identification protocol.
-    #[serde(skip)]
-    pub identify_config: IdentifyConfig,
-
     /// List of nodes to initially connect to.
-    boot_nodes: Vec<String>,
+    pub boot_nodes: Vec<Enr>,

     /// Client version
     pub client_version: String,

-    /// List of topics to subscribe to as strings
+    /// List of extra topics to initially subscribe to as strings.
     pub topics: Vec<String>,
 }

 impl Default for Config {
     /// Generate a default network configuration.
     fn default() -> Self {
+        let mut network_dir = dirs::home_dir().unwrap_or_else(|| PathBuf::from("."));
+        network_dir.push(".lighthouse");
+        network_dir.push("network");

         Config {
-            listen_addresses: vec!["/ip4/127.0.0.1/tcp/9000".to_string()],
+            network_dir,
+            listen_address: "127.0.0.1".parse().expect("valid ip address"),
+            libp2p_port: 9000,
+            discovery_address: "127.0.0.1".parse().expect("valid ip address"),
+            discovery_port: 9000,
+            max_peers: 10,
+            //TODO: Set realistic values for production
             gs_config: GossipsubConfigBuilder::new()
                 .max_gossip_size(4_000_000)
+                .inactivity_timeout(Duration::from_secs(90))
+                .heartbeat_interval(Duration::from_secs(20))
                 .build(),
-            identify_config: IdentifyConfig::default(),
             boot_nodes: vec![],
             client_version: version::version(),
-            topics: vec![String::from("beacon_chain")],
+            topics: Vec::new(),
         }
     }
 }

+/// Generates a default Config.
 impl Config {
-    pub fn new(boot_nodes: Vec<String>) -> Self {
-        let mut conf = Config::default();
-        conf.boot_nodes = boot_nodes;
-
-        conf
+    pub fn new() -> Self {
+        Config::default()
     }

-    pub fn listen_addresses(&self) -> Result<Vec<Multiaddr>, MultiaddrError> {
-        self.listen_addresses.iter().map(|s| s.parse()).collect()
-    }
-
-    pub fn boot_nodes(&self) -> Result<Vec<Multiaddr>, MultiaddrError> {
-        self.boot_nodes.iter().map(|s| s.parse()).collect()
-    }
-
-    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), &'static str> {
+    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), String> {
+        if let Some(dir) = args.value_of("datadir") {
+            self.network_dir = PathBuf::from(dir).join("network");
+        };
+
         if let Some(listen_address_str) = args.value_of("listen-address") {
-            let listen_addresses = listen_address_str.split(',').map(Into::into).collect();
-            self.listen_addresses = listen_addresses;
+            let listen_address = listen_address_str
+                .parse()
+                .map_err(|_| format!("Invalid listen address: {:?}", listen_address_str))?;
+            self.listen_address = listen_address;
+            self.discovery_address = listen_address;
         }

-        if let Some(boot_addresses_str) = args.value_of("boot-nodes") {
-            let boot_addresses = boot_addresses_str.split(',').map(Into::into).collect();
-            self.boot_nodes = boot_addresses;
+        if let Some(max_peers_str) = args.value_of("maxpeers") {
+            self.max_peers = max_peers_str
+                .parse::<usize>()
+                .map_err(|_| format!("Invalid number of max peers: {}", max_peers_str))?;
+        }
+
+        if let Some(port_str) = args.value_of("port") {
+            let port = port_str
+                .parse::<u16>()
+                .map_err(|_| format!("Invalid port: {}", port_str))?;
+            self.libp2p_port = port;
+            self.discovery_port = port;
+        }
+
+        if let Some(boot_enr_str) = args.value_of("boot-nodes") {
+            self.boot_nodes = boot_enr_str
+                .split(',')
+                .map(|enr| enr.parse().map_err(|_| format!("Invalid ENR: {}", enr)))
+                .collect::<Result<Vec<Enr>, _>>()?;
+        }
+
+        if let Some(discovery_address_str) = args.value_of("discovery-address") {
+            self.discovery_address = discovery_address_str
+                .parse()
+                .map_err(|_| format!("Invalid discovery address: {:?}", discovery_address_str))?
+        }
+
+        if let Some(disc_port_str) = args.value_of("disc-port") {
+            self.discovery_port = disc_port_str
+                .parse::<u16>()
+                .map_err(|_| format!("Invalid discovery port: {}", disc_port_str))?;
         }

         Ok(())
     }
 }

-/// The configuration parameters for the Identify protocol
-#[derive(Debug, Clone)]
-pub struct IdentifyConfig {
-    /// The protocol version to listen on.
-    pub version: String,
-    /// The client's name and version for identification.
-    pub user_agent: String,
-}
-
-impl Default for IdentifyConfig {
-    fn default() -> Self {
-        Self {
-            version: "/eth/serenity/1.0".to_string(),
-            user_agent: version::version(),
-        }
-    }
-}


@ -0,0 +1,313 @@
use crate::{error, NetworkConfig};
/// This manages the discovery and management of peers.
///
/// Currently using discv5 for peer discovery.
///
use futures::prelude::*;
use libp2p::core::swarm::{
    ConnectedPoint, NetworkBehaviour, NetworkBehaviourAction, PollParameters,
};
use libp2p::core::{identity::Keypair, Multiaddr, PeerId, ProtocolsHandler};
use libp2p::discv5::{Discv5, Discv5Event};
use libp2p::enr::{Enr, EnrBuilder, NodeId};
use libp2p::multiaddr::Protocol;
use slog::{debug, info, o, warn};
use std::collections::HashSet;
use std::fs::File;
use std::io::prelude::*;
use std::str::FromStr;
use std::time::{Duration, Instant};
use tokio::io::{AsyncRead, AsyncWrite};
use tokio_timer::Delay;

/// Maximum seconds before searching for extra peers.
const MAX_TIME_BETWEEN_PEER_SEARCHES: u64 = 60;
/// Initial delay between peer searches.
const INITIAL_SEARCH_DELAY: u64 = 5;
/// Local ENR storage filename.
const ENR_FILENAME: &str = "enr.dat";

/// Lighthouse discovery behaviour. This provides peer management and discovery using the Discv5
/// libp2p protocol.
pub struct Discovery<TSubstream> {
    /// The peers currently connected to libp2p streams.
    connected_peers: HashSet<PeerId>,

    /// The target number of connected peers on the libp2p interface.
    max_peers: usize,

    /// The delay between peer discovery searches.
    peer_discovery_delay: Delay,

    /// Tracks the last discovery delay. The delay is doubled each round until the max
    /// time is reached.
    past_discovery_delay: u64,

    /// The TCP port for libp2p. Used to convert an updated IP address to a multiaddr. Note: This
    /// assumes that the external TCP port is the same as the internal TCP port if behind a NAT.
    //TODO: Improve NAT handling limit the above restriction
    tcp_port: u16,

    /// The discovery behaviour used to discover new peers.
    discovery: Discv5<TSubstream>,

    /// Logger for the discovery behaviour.
    log: slog::Logger,
}

impl<TSubstream> Discovery<TSubstream> {
    pub fn new(
        local_key: &Keypair,
        config: &NetworkConfig,
        log: &slog::Logger,
    ) -> error::Result<Self> {
        let log = log.new(o!("Service" => "Libp2p-Discovery"));

        // checks if current ENR matches that found on disk
        let local_enr = load_enr(local_key, config, &log)?;

        info!(log, "Local ENR: {}", local_enr.to_base64());
        debug!(log, "Local Node Id: {}", local_enr.node_id());

        let mut discovery = Discv5::new(local_enr, local_key.clone(), config.listen_address)
            .map_err(|e| format!("Discv5 service failed: {:?}", e))?;

        // Add bootnodes to routing table
        for bootnode_enr in config.boot_nodes.clone() {
            debug!(
                log,
                "Adding node to routing table: {}",
                bootnode_enr.node_id()
            );
            discovery.add_enr(bootnode_enr);
        }

        Ok(Self {
            connected_peers: HashSet::new(),
            max_peers: config.max_peers,
            peer_discovery_delay: Delay::new(Instant::now()),
            past_discovery_delay: INITIAL_SEARCH_DELAY,
            tcp_port: config.libp2p_port,
            discovery,
            log,
        })
    }

    /// Manually search for peers. This restarts the discovery round, sparking multiple rapid
    /// queries.
    pub fn discover_peers(&mut self) {
        self.past_discovery_delay = INITIAL_SEARCH_DELAY;
        self.find_peers();
    }

    /// Add an Enr to the routing table of the discovery mechanism.
    pub fn add_enr(&mut self, enr: Enr) {
        self.discovery.add_enr(enr);
    }

    /// Search for new peers using the underlying discovery mechanism.
    fn find_peers(&mut self) {
        // pick a random NodeId
        let random_node = NodeId::random();
        debug!(self.log, "Searching for peers...");
        self.discovery.find_node(random_node);

        // update the time until next discovery
        let delay = {
            if self.past_discovery_delay < MAX_TIME_BETWEEN_PEER_SEARCHES {
                self.past_discovery_delay *= 2;
                self.past_discovery_delay
            } else {
                MAX_TIME_BETWEEN_PEER_SEARCHES
            }
        };
        self.peer_discovery_delay
            .reset(Instant::now() + Duration::from_secs(delay));
    }
}

// Redirect all behaviour events to underlying discovery behaviour.
impl<TSubstream> NetworkBehaviour for Discovery<TSubstream>
where
    TSubstream: AsyncRead + AsyncWrite,
{
    type ProtocolsHandler = <Discv5<TSubstream> as NetworkBehaviour>::ProtocolsHandler;
    type OutEvent = <Discv5<TSubstream> as NetworkBehaviour>::OutEvent;

    fn new_handler(&mut self) -> Self::ProtocolsHandler {
        NetworkBehaviour::new_handler(&mut self.discovery)
    }

    fn addresses_of_peer(&mut self, peer_id: &PeerId) -> Vec<Multiaddr> {
        // Let discovery track possible known peers.
        self.discovery.addresses_of_peer(peer_id)
    }

    fn inject_connected(&mut self, peer_id: PeerId, _endpoint: ConnectedPoint) {
        self.connected_peers.insert(peer_id);
    }

    fn inject_disconnected(&mut self, peer_id: &PeerId, _endpoint: ConnectedPoint) {
        self.connected_peers.remove(peer_id);
    }

    fn inject_replaced(
        &mut self,
        _peer_id: PeerId,
        _closed: ConnectedPoint,
        _opened: ConnectedPoint,
    ) {
        // discv5 doesn't implement
    }

    fn inject_node_event(
        &mut self,
        _peer_id: PeerId,
        _event: <Self::ProtocolsHandler as ProtocolsHandler>::OutEvent,
    ) {
        // discv5 doesn't implement
    }

    fn poll(
        &mut self,
        params: &mut impl PollParameters,
    ) -> Async<
        NetworkBehaviourAction<
            <Self::ProtocolsHandler as ProtocolsHandler>::InEvent,
            Self::OutEvent,
        >,
    > {
        // search for peers if it is time
        loop {
            match self.peer_discovery_delay.poll() {
                Ok(Async::Ready(_)) => {
                    if self.connected_peers.len() < self.max_peers {
                        self.find_peers();
                    }
                }
                Ok(Async::NotReady) => break,
                Err(e) => {
                    warn!(self.log, "Discovery peer search failed: {:?}", e);
                }
            }
        }

        // Poll discovery
        loop {
            match self.discovery.poll(params) {
                Async::Ready(NetworkBehaviourAction::GenerateEvent(event)) => {
                    match event {
                        Discv5Event::Discovered(_enr) => {
                            // not concerned about FINDNODE results, rather the result of an entire
                            // query.
                        }
                        Discv5Event::SocketUpdated(socket) => {
                            info!(self.log, "Address updated"; "IP" => format!("{}", socket.ip()));
                            let mut address = Multiaddr::from(socket.ip());
                            address.push(Protocol::Tcp(self.tcp_port));
                            return Async::Ready(NetworkBehaviourAction::ReportObservedAddr {
                                address,
                            });
                        }
                        Discv5Event::FindNodeResult { closer_peers, .. } => {
                            debug!(self.log, "Discv5 query found {} peers", closer_peers.len());
                            if closer_peers.is_empty() {
                                debug!(self.log, "Discv5 random query yielded empty results");
                            }
                            for peer_id in closer_peers {
                                // if we need more peers, attempt a connection
                                if self.connected_peers.len() < self.max_peers
                                    && self.connected_peers.get(&peer_id).is_none()
                                {
                                    debug!(self.log, "Discv5: Peer discovered"; "Peer" => format!("{:?}", peer_id));
                                    return Async::Ready(NetworkBehaviourAction::DialPeer {
                                        peer_id,
                                    });
                                }
                            }
                        }
                        _ => {}
                    }
                }
                // discv5 does not output any other NetworkBehaviourAction
                Async::Ready(_) => {}
                Async::NotReady => break,
            }
        }

        Async::NotReady
    }
}

/// Loads an ENR from file if it exists and matches the current NodeId and sequence number. If none
/// exists, generates a new one.
///
/// If an ENR exists, with the same NodeId and IP address, we use the disk-generated one as its
/// ENR sequence will be equal or higher than a newly generated one.
fn load_enr(
    local_key: &Keypair,
    config: &NetworkConfig,
    log: &slog::Logger,
) -> Result<Enr, String> {
    // Build the local ENR.
    // Note: Discovery should update the ENR record's IP to the external IP as seen by the
    // majority of our peers.
    let mut local_enr = EnrBuilder::new()
        .ip(config.discovery_address.into())
        .tcp(config.libp2p_port)
        .udp(config.discovery_port)
        .build(&local_key)
        .map_err(|e| format!("Could not build Local ENR: {:?}", e))?;

    let enr_f = config.network_dir.join(ENR_FILENAME);
    if let Ok(mut enr_file) = File::open(enr_f.clone()) {
        let mut enr_string = String::new();
        match enr_file.read_to_string(&mut enr_string) {
            Err(_) => debug!(log, "Could not read ENR from file"),
            Ok(_) => {
                match Enr::from_str(&enr_string) {
                    Ok(enr) => {
                        debug!(log, "ENR found in file: {:?}", enr_f);
                        if enr.node_id() == local_enr.node_id() {
                            if enr.ip() == config.discovery_address.into()
                                && enr.tcp() == Some(config.libp2p_port)
                                && enr.udp() == Some(config.discovery_port)
                            {
                                debug!(log, "ENR loaded from file");
                                // the stored ENR has the same configuration, use it
                                return Ok(enr);
                            }

                            // same node id, different configuration - update the sequence number
                            let new_seq_no = enr.seq().checked_add(1).ok_or_else(|| "ENR sequence number on file is too large. Remove it to generate a new NodeId")?;
                            local_enr.set_seq(new_seq_no, local_key).map_err(|e| {
                                format!("Could not update ENR sequence number: {:?}", e)
                            })?;
                            debug!(log, "ENR sequence number increased to: {}", new_seq_no);
                        }
                    }
                    Err(e) => {
                        warn!(log, "ENR from file could not be decoded: {:?}", e);
                    }
                }
            }
        }
    }

    // write ENR to disk
    let _ = std::fs::create_dir_all(&config.network_dir);
    match File::create(enr_f.clone())
        .and_then(|mut f| f.write_all(&local_enr.to_base64().as_bytes()))
    {
        Ok(_) => {
            debug!(log, "ENR written to disk");
        }
        Err(e) => {
            warn!(
                log,
                "Could not write ENR to file: {:?}. Error: {}", enr_f, e
            );
        }
    }

    Ok(local_enr)
}


@ -4,12 +4,18 @@
 /// This crate builds and manages the libp2p services required by the beacon node.
 pub mod behaviour;
 mod config;
+mod discovery;
 pub mod error;
 pub mod rpc;
 mod service;

 pub use behaviour::PubsubMessage;
-pub use config::Config as NetworkConfig;
+pub use config::{
+    Config as NetworkConfig, BEACON_ATTESTATION_TOPIC, BEACON_PUBSUB_TOPIC, SHARD_TOPIC_PREFIX,
+};
+pub use libp2p::floodsub::{Topic, TopicBuilder, TopicHash};
+pub use libp2p::multiaddr;
+pub use libp2p::Multiaddr;
 pub use libp2p::{
     gossipsub::{GossipsubConfig, GossipsubConfigBuilder},
     PeerId,

@ -17,5 +23,3 @@ pub use libp2p::{
 pub use rpc::RPCEvent;
 pub use service::Libp2pEvent;
 pub use service::Service;
-pub use types::multiaddr;
-pub use types::Multiaddr;


@ -94,7 +94,7 @@ where
     fn poll(
         &mut self,
-        _: &mut PollParameters<'_>,
+        _: &mut impl PollParameters,
     ) -> Async<
         NetworkBehaviourAction<
             <Self::ProtocolsHandler as ProtocolsHandler>::InEvent,


@ -11,7 +11,6 @@ use tokio::io::{AsyncRead, AsyncWrite};
 const MAX_READ_SIZE: usize = 4_194_304; // 4M

 /// Implementation of the `ConnectionUpgrade` for the rpc protocol.
-#[derive(Debug, Clone)]
 pub struct RPCProtocol;


@ -3,25 +3,30 @@ use crate::error;
 use crate::multiaddr::Protocol;
 use crate::rpc::RPCEvent;
 use crate::NetworkConfig;
+use crate::{TopicBuilder, TopicHash};
+use crate::{BEACON_ATTESTATION_TOPIC, BEACON_PUBSUB_TOPIC};
 use futures::prelude::*;
 use futures::Stream;
 use libp2p::core::{
-    identity,
+    identity::Keypair,
+    multiaddr::Multiaddr,
     muxing::StreamMuxerBox,
     nodes::Substream,
     transport::boxed::Boxed,
     upgrade::{InboundUpgradeExt, OutboundUpgradeExt},
 };
-use libp2p::identify::protocol::IdentifyInfo;
 use libp2p::{core, secio, PeerId, Swarm, Transport};
 use slog::{debug, info, trace, warn};
+use std::fs::File;
+use std::io::prelude::*;
 use std::io::{Error, ErrorKind};
 use std::time::Duration;
-use types::{TopicBuilder, TopicHash};

 type Libp2pStream = Boxed<(PeerId, StreamMuxerBox), Error>;
 type Libp2pBehaviour = Behaviour<Substream<StreamMuxerBox>>;

+const NETWORK_KEY_FILENAME: &str = "key";
+
 /// The configuration and state of the libp2p components for the beacon node.
 pub struct Service {
     /// The libp2p Swarm handler.

@ -35,59 +40,52 @@ pub struct Service {
 impl Service {
     pub fn new(config: NetworkConfig, log: slog::Logger) -> error::Result<Self> {
-        debug!(log, "Libp2p Service starting");
+        debug!(log, "Network-libp2p Service starting");

-        // TODO: Currently using secp256k1 key pairs. Wire protocol specifies RSA. Waiting for this
-        // PR to be merged to generate RSA keys: https://github.com/briansmith/ring/pull/733
-        // TODO: Save and recover node key from disk
-        let local_private_key = identity::Keypair::generate_secp256k1();
-        let local_public_key = local_private_key.public();
+        // load the private key from CLI flag, disk or generate a new one
+        let local_private_key = load_private_key(&config, &log);

         let local_peer_id = PeerId::from(local_private_key.public());
         info!(log, "Local peer id: {:?}", local_peer_id);

         let mut swarm = {
-            // Set up the transport
-            let transport = build_transport(local_private_key);
-            // Set up gossipsub routing
-            let behaviour = Behaviour::new(local_public_key.clone(), &config, &log);
-            // Set up Topology
-            let topology = local_peer_id.clone();
-            Swarm::new(transport, behaviour, topology)
+            // Set up the transport - tcp/ws with secio and mplex/yamux
+            let transport = build_transport(local_private_key.clone());
+            // Lighthouse network behaviour
+            let behaviour = Behaviour::new(&local_private_key, &config, &log)?;
+            Swarm::new(transport, behaviour, local_peer_id.clone())
         };

-        // listen on all addresses
-        for address in config
-            .listen_addresses()
-            .map_err(|e| format!("Invalid listen multiaddr: {}", e))?
-        {
-            match Swarm::listen_on(&mut swarm, address.clone()) {
-                Ok(mut listen_addr) => {
-                    listen_addr.append(Protocol::P2p(local_peer_id.clone().into()));
-                    info!(log, "Listening on: {}", listen_addr);
-                }
-                Err(err) => warn!(log, "Cannot listen on: {} : {:?}", address, err),
-            };
-        }
+        // listen on the specified address
+        let listen_multiaddr = {
+            let mut m = Multiaddr::from(config.listen_address);
+            m.push(Protocol::Tcp(config.libp2p_port));
+            m
+        };
+
+        match Swarm::listen_on(&mut swarm, listen_multiaddr.clone()) {
+            Ok(_) => {
+                let mut log_address = listen_multiaddr;
+                log_address.push(Protocol::P2p(local_peer_id.clone().into()));
+                info!(log, "Listening on: {}", log_address);
+            }
+            Err(err) => warn!(
+                log,
+                "Cannot listen on: {} because: {:?}", listen_multiaddr, err
+            ),
+        };

-        // connect to boot nodes - these are currently stored as multiaddrs
-        // Once we have discovery, can set to peerId
-        for bootnode in config
-            .boot_nodes()
-            .map_err(|e| format!("Invalid boot node multiaddr: {:?}", e))?
{
match Swarm::dial_addr(&mut swarm, bootnode.clone()) {
Ok(()) => debug!(log, "Dialing bootnode: {}", bootnode),
Err(err) => debug!(
log,
"Could not connect to bootnode: {} error: {:?}", bootnode, err
),
};
}
// subscribe to default gossipsub topics // subscribe to default gossipsub topics
let mut topics = vec![];
//TODO: Handle multiple shard attestations. For now we simply use a separate topic for
//attestations
topics.push(BEACON_ATTESTATION_TOPIC.to_string());
topics.push(BEACON_PUBSUB_TOPIC.to_string());
topics.append(&mut config.topics.clone());
let mut subscribed_topics = vec![]; let mut subscribed_topics = vec![];
for topic in config.topics { for topic in topics {
let t = TopicBuilder::new(topic.to_string()).build(); let t = TopicBuilder::new(topic.clone()).build();
if swarm.subscribe(t) { if swarm.subscribe(t) {
trace!(log, "Subscribed to topic: {:?}", topic); trace!(log, "Subscribed to topic: {:?}", topic);
subscribed_topics.push(topic); subscribed_topics.push(topic);
@ -111,8 +109,6 @@ impl Stream for Service {
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> { fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
loop { loop {
// TODO: Currently only gossipsub events passed here.
// Build a type for more generic events
match self.swarm.poll() { match self.swarm.poll() {
//Behaviour events //Behaviour events
Ok(Async::Ready(Some(event))) => match event { Ok(Async::Ready(Some(event))) => match event {
@ -135,9 +131,6 @@ impl Stream for Service {
BehaviourEvent::PeerDialed(peer_id) => { BehaviourEvent::PeerDialed(peer_id) => {
return Ok(Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id)))); return Ok(Async::Ready(Some(Libp2pEvent::PeerDialed(peer_id))));
} }
BehaviourEvent::Identified(peer_id, info) => {
return Ok(Async::Ready(Some(Libp2pEvent::Identified(peer_id, info))));
}
}, },
Ok(Async::Ready(None)) => unreachable!("Swarm stream shouldn't end"), Ok(Async::Ready(None)) => unreachable!("Swarm stream shouldn't end"),
Ok(Async::NotReady) => break, Ok(Async::NotReady) => break,
@ -150,7 +143,7 @@ impl Stream for Service {
/// The implementation supports TCP/IP, WebSockets over TCP/IP, secio as the encryption layer, and /// The implementation supports TCP/IP, WebSockets over TCP/IP, secio as the encryption layer, and
/// mplex or yamux as the multiplexing layer. /// mplex or yamux as the multiplexing layer.
fn build_transport(local_private_key: identity::Keypair) -> Boxed<(PeerId, StreamMuxerBox), Error> { fn build_transport(local_private_key: Keypair) -> Boxed<(PeerId, StreamMuxerBox), Error> {
// TODO: The Wire protocol currently doesn't specify encryption and this will need to be customised // TODO: The Wire protocol currently doesn't specify encryption and this will need to be customised
// in the future. // in the future.
let transport = libp2p::tcp::TcpConfig::new(); let transport = libp2p::tcp::TcpConfig::new();
@ -187,8 +180,6 @@ pub enum Libp2pEvent {
RPC(PeerId, RPCEvent), RPC(PeerId, RPCEvent),
/// Initiated the connection to a new peer. /// Initiated the connection to a new peer.
PeerDialed(PeerId), PeerDialed(PeerId),
/// Received information about a peer on the network.
Identified(PeerId, Box<IdentifyInfo>),
/// Received pubsub message. /// Received pubsub message.
PubsubMessage { PubsubMessage {
source: PeerId, source: PeerId,
@ -196,3 +187,51 @@ pub enum Libp2pEvent {
message: Box<PubsubMessage>, message: Box<PubsubMessage>,
}, },
} }
/// Loads a private key from disk. If this fails, a new key is
/// generated and is then saved to disk.
///
/// Currently only secp256k1 keys are allowed, as these are the only keys supported by discv5.
fn load_private_key(config: &NetworkConfig, log: &slog::Logger) -> Keypair {
// TODO: Currently using secp256k1 keypairs - currently required for discv5
// check for key from disk
let network_key_f = config.network_dir.join(NETWORK_KEY_FILENAME);
if let Ok(mut network_key_file) = File::open(network_key_f.clone()) {
let mut key_bytes: Vec<u8> = Vec::with_capacity(36);
match network_key_file.read_to_end(&mut key_bytes) {
Err(_) => debug!(log, "Could not read network key file"),
Ok(_) => {
// only accept secp256k1 keys for now
if let Ok(secret_key) =
libp2p::core::identity::secp256k1::SecretKey::from_bytes(&mut key_bytes)
{
let kp: libp2p::core::identity::secp256k1::Keypair = secret_key.into();
debug!(log, "Loaded network key from disk.");
return Keypair::Secp256k1(kp);
} else {
debug!(log, "Network key file is not a valid secp256k1 key");
}
}
}
}
// if a key could not be loaded from disk, generate a new one and save it
let local_private_key = Keypair::generate_secp256k1();
if let Keypair::Secp256k1(key) = local_private_key.clone() {
let _ = std::fs::create_dir_all(&config.network_dir);
match File::create(network_key_f.clone())
.and_then(|mut f| f.write_all(&key.secret().to_bytes()))
{
Ok(_) => {
debug!(log, "New network key generated and written to disk");
}
Err(e) => {
warn!(
log,
"Could not write node key to file: {:?}. Error: {}", network_key_f, e
);
}
}
}
local_private_key
}
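
A sketch of the round-trip `load_private_key` relies on, using only the identity calls that already appear above; the file I/O is elided and `key_round_trip` is an illustrative name:

```rust
use libp2p::core::identity::{secp256k1, Keypair};

/// Serialise a fresh secp256k1 secret and recover the same identity from the
/// raw bytes, mirroring the disk round-trip in `load_private_key`.
fn key_round_trip() -> Keypair {
    let original = Keypair::generate_secp256k1();
    let mut bytes = match &original {
        Keypair::Secp256k1(kp) => kp.secret().to_bytes().to_vec(),
        _ => unreachable!("generate_secp256k1 always yields a secp256k1 key"),
    };
    // `from_bytes` takes `&mut` because it overwrites the buffer it consumes.
    let secret = secp256k1::SecretKey::from_bytes(&mut bytes)
        .expect("bytes we just serialised form a valid secret key");
    Keypair::Secp256k1(secret.into())
}

fn main() {
    let restored = key_round_trip();
    println!("restored peer id: {:?}", libp2p::PeerId::from(restored.public()));
}
```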

View File

@ -27,9 +27,8 @@ futures = "0.1.23"
serde = "1.0" serde = "1.0"
serde_derive = "1.0" serde_derive = "1.0"
serde_json = "1.0" serde_json = "1.0"
slog = "^2.2.3" slog = { version = "^2.2.3" , features = ["max_level_trace"] }
slog-term = "^2.4.0" slog-term = "^2.4.0"
slog-async = "^2.3.0" slog-async = "^2.3.0"
tokio = "0.1.17" tokio = "0.1.17"
exit-future = "0.1.4" exit-future = "0.1.4"
crossbeam-channel = "0.3.8"

View File

@ -14,6 +14,7 @@ use slog::{info, o, warn};
use std::path::PathBuf; use std::path::PathBuf;
use std::sync::Arc; use std::sync::Arc;
use tokio::runtime::TaskExecutor; use tokio::runtime::TaskExecutor;
use tokio::sync::mpsc;
#[derive(PartialEq, Clone, Debug, Serialize, Deserialize)] #[derive(PartialEq, Clone, Debug, Serialize, Deserialize)]
pub struct HttpServerConfig { pub struct HttpServerConfig {
@ -75,7 +76,7 @@ pub fn create_iron_http_server<T: BeaconChainTypes + 'static>(
pub fn start_service<T: BeaconChainTypes + 'static>( pub fn start_service<T: BeaconChainTypes + 'static>(
config: &HttpServerConfig, config: &HttpServerConfig,
executor: &TaskExecutor, executor: &TaskExecutor,
_network_chan: crossbeam_channel::Sender<NetworkMessage>, _network_chan: mpsc::UnboundedSender<NetworkMessage>,
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,
db_path: PathBuf, db_path: PathBuf,
metrics_registry: Registry, metrics_registry: Registry,

View File

@ -13,10 +13,9 @@ store = { path = "../store" }
eth2-libp2p = { path = "../eth2-libp2p" } eth2-libp2p = { path = "../eth2-libp2p" }
version = { path = "../version" } version = { path = "../version" }
types = { path = "../../eth2/types" } types = { path = "../../eth2/types" }
slog = { version = "^2.2.3" , features = ["max_level_trace", "release_max_level_debug"] } slog = { version = "^2.2.3" , features = ["max_level_trace"] }
eth2_ssz = { path = "../../eth2/utils/ssz" } eth2_ssz = { path = "../../eth2/utils/ssz" }
tree_hash = { path = "../../eth2/utils/tree_hash" } tree_hash = { path = "../../eth2/utils/tree_hash" }
futures = "0.1.25" futures = "0.1.25"
error-chain = "0.12.0" error-chain = "0.12.0"
crossbeam-channel = "0.3.8"
tokio = "0.1.16" tokio = "0.1.16"

View File

@ -2,17 +2,18 @@ use crate::error;
use crate::service::{NetworkMessage, OutgoingMessage}; use crate::service::{NetworkMessage, OutgoingMessage};
use crate::sync::SimpleSync; use crate::sync::SimpleSync;
use beacon_chain::{BeaconChain, BeaconChainTypes}; use beacon_chain::{BeaconChain, BeaconChainTypes};
use crossbeam_channel::{unbounded as channel, Sender};
use eth2_libp2p::{ use eth2_libp2p::{
behaviour::PubsubMessage, behaviour::PubsubMessage,
rpc::{methods::GoodbyeReason, RPCRequest, RPCResponse, RequestId}, rpc::{methods::GoodbyeReason, RPCRequest, RPCResponse, RequestId},
PeerId, RPCEvent, PeerId, RPCEvent,
}; };
use futures::future; use futures::future::Future;
use futures::stream::Stream;
use slog::{debug, warn}; use slog::{debug, warn};
use std::collections::HashMap; use std::collections::HashMap;
use std::sync::Arc; use std::sync::Arc;
use std::time::Instant; use std::time::Instant;
use tokio::sync::mpsc;
/// Timeout for RPC requests. /// Timeout for RPC requests.
// const REQUEST_TIMEOUT: Duration = Duration::from_secs(30); // const REQUEST_TIMEOUT: Duration = Duration::from_secs(30);
@ -48,13 +49,13 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
/// Initializes and runs the MessageHandler. /// Initializes and runs the MessageHandler.
pub fn spawn( pub fn spawn(
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,
network_send: crossbeam_channel::Sender<NetworkMessage>, network_send: mpsc::UnboundedSender<NetworkMessage>,
executor: &tokio::runtime::TaskExecutor, executor: &tokio::runtime::TaskExecutor,
log: slog::Logger, log: slog::Logger,
) -> error::Result<Sender<HandlerMessage>> { ) -> error::Result<mpsc::UnboundedSender<HandlerMessage>> {
debug!(log, "Service starting"); debug!(log, "Service starting");
let (handler_send, handler_recv) = channel(); let (handler_send, handler_recv) = mpsc::unbounded_channel();
// Initialise sync and begin processing in thread // Initialise sync and begin processing in thread
// generate the Message handler // generate the Message handler
@ -69,13 +70,13 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
// spawn handler task // spawn handler task
// TODO: Handle manual termination of thread // TODO: Handle manual termination of thread
executor.spawn(future::poll_fn(move || -> Result<_, _> { executor.spawn(
loop { handler_recv
handler.handle_message(handler_recv.recv().map_err(|_| { .for_each(move |msg| Ok(handler.handle_message(msg)))
.map_err(move |_| {
debug!(log, "Network message handler terminated."); debug!(log, "Network message handler terminated.");
})?); }),
} );
}));
Ok(handler_send) Ok(handler_send)
} }
@ -222,7 +223,7 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
pub struct NetworkContext { pub struct NetworkContext {
/// The network channel to relay messages to the Network service. /// The network channel to relay messages to the Network service.
network_send: crossbeam_channel::Sender<NetworkMessage>, network_send: mpsc::UnboundedSender<NetworkMessage>,
/// A mapping of peers and the RPC id we have sent an RPC request to. /// A mapping of peers and the RPC id we have sent an RPC request to.
outstanding_outgoing_request_ids: HashMap<(PeerId, RequestId), Instant>, outstanding_outgoing_request_ids: HashMap<(PeerId, RequestId), Instant>,
/// Stores the next `RequestId` we should include on an outgoing `RPCRequest` to a `PeerId`. /// Stores the next `RequestId` we should include on an outgoing `RPCRequest` to a `PeerId`.
@ -232,7 +233,7 @@ pub struct NetworkContext {
} }
impl NetworkContext { impl NetworkContext {
pub fn new(network_send: crossbeam_channel::Sender<NetworkMessage>, log: slog::Logger) -> Self { pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage>, log: slog::Logger) -> Self {
Self { Self {
network_send, network_send,
outstanding_outgoing_request_ids: HashMap::new(), outstanding_outgoing_request_ids: HashMap::new(),
@ -278,13 +279,13 @@ impl NetworkContext {
); );
} }
fn send_rpc_event(&self, peer_id: PeerId, rpc_event: RPCEvent) { fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
self.send(peer_id, OutgoingMessage::RPC(rpc_event)) self.send(peer_id, OutgoingMessage::RPC(rpc_event))
} }
fn send(&self, peer_id: PeerId, outgoing_message: OutgoingMessage) { fn send(&mut self, peer_id: PeerId, outgoing_message: OutgoingMessage) {
self.network_send self.network_send
.send(NetworkMessage::Send(peer_id, outgoing_message)) .try_send(NetworkMessage::Send(peer_id, outgoing_message))
.unwrap_or_else(|_| { .unwrap_or_else(|_| {
warn!( warn!(
self.log, self.log,
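
The crossbeam-to-tokio migration in this file follows one pattern used throughout the commit: receivers become futures 0.1 `Stream`s driven on the executor, and sends become non-blocking `try_send`s. A self-contained model of that pattern (tokio 0.1 / futures 0.1 era APIs, not the actual service wiring):

```rust
use futures::{future::Future, lazy, stream::Stream};
use tokio::sync::mpsc;

fn main() {
    // An unbounded channel whose receiver is a futures 0.1 Stream, so it can
    // be polled on the executor instead of blocking a dedicated thread.
    let (mut tx, rx) = mpsc::unbounded_channel::<u32>();

    tokio::run(lazy(move || {
        // The receive side becomes a task: one wakeup per message.
        tokio::spawn(
            rx.for_each(|msg| {
                println!("handled message: {}", msg);
                Ok(())
            })
            .map_err(|_| ()),
        );

        // Senders never block; try_send only fails if the receiver is gone.
        tx.try_send(1).expect("receiver alive");
        tx.try_send(2).expect("receiver alive");
        Ok::<(), ()>(())
    }));
}
```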

View File

@ -2,24 +2,23 @@ use crate::error;
use crate::message_handler::{HandlerMessage, MessageHandler}; use crate::message_handler::{HandlerMessage, MessageHandler};
use crate::NetworkConfig; use crate::NetworkConfig;
use beacon_chain::{BeaconChain, BeaconChainTypes}; use beacon_chain::{BeaconChain, BeaconChainTypes};
use crossbeam_channel::{unbounded as channel, Sender, TryRecvError};
use eth2_libp2p::Service as LibP2PService; use eth2_libp2p::Service as LibP2PService;
use eth2_libp2p::Topic;
use eth2_libp2p::{Libp2pEvent, PeerId}; use eth2_libp2p::{Libp2pEvent, PeerId};
use eth2_libp2p::{PubsubMessage, RPCEvent}; use eth2_libp2p::{PubsubMessage, RPCEvent};
use futures::prelude::*; use futures::prelude::*;
use futures::sync::oneshot;
use futures::Stream; use futures::Stream;
use slog::{debug, info, o, trace}; use slog::{debug, info, o, trace};
use std::marker::PhantomData; use std::marker::PhantomData;
use std::sync::Arc; use std::sync::Arc;
use tokio::runtime::TaskExecutor; use tokio::runtime::TaskExecutor;
use types::Topic; use tokio::sync::{mpsc, oneshot};
/// Service that handles communication between internal services and the eth2_libp2p network service. /// Service that handles communication between internal services and the eth2_libp2p network service.
pub struct Service<T: BeaconChainTypes> { pub struct Service<T: BeaconChainTypes> {
//libp2p_service: Arc<Mutex<LibP2PService>>, //libp2p_service: Arc<Mutex<LibP2PService>>,
_libp2p_exit: oneshot::Sender<()>, _libp2p_exit: oneshot::Sender<()>,
network_send: crossbeam_channel::Sender<NetworkMessage>, network_send: mpsc::UnboundedSender<NetworkMessage>,
_phantom: PhantomData<T>, //message_handler: MessageHandler, _phantom: PhantomData<T>, //message_handler: MessageHandler,
//message_handler_send: Sender<HandlerMessage> //message_handler_send: Sender<HandlerMessage>
} }
@ -30,9 +29,9 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
config: &NetworkConfig, config: &NetworkConfig,
executor: &TaskExecutor, executor: &TaskExecutor,
log: slog::Logger, log: slog::Logger,
) -> error::Result<(Arc<Self>, Sender<NetworkMessage>)> { ) -> error::Result<(Arc<Self>, mpsc::UnboundedSender<NetworkMessage>)> {
// build the network channel // build the network channel
let (network_send, network_recv) = channel::<NetworkMessage>(); let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage>();
// launch message handler thread // launch message handler thread
let message_handler_log = log.new(o!("Service" => "MessageHandler")); let message_handler_log = log.new(o!("Service" => "MessageHandler"));
let message_handler_send = MessageHandler::spawn( let message_handler_send = MessageHandler::spawn(
@ -64,9 +63,9 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
} }
// TODO: Testing only // TODO: Testing only
pub fn send_message(&self) { pub fn send_message(&mut self) {
self.network_send self.network_send
.send(NetworkMessage::Send( .try_send(NetworkMessage::Send(
PeerId::random(), PeerId::random(),
OutgoingMessage::NotifierTest, OutgoingMessage::NotifierTest,
)) ))
@ -76,12 +75,12 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
fn spawn_service( fn spawn_service(
libp2p_service: LibP2PService, libp2p_service: LibP2PService,
network_recv: crossbeam_channel::Receiver<NetworkMessage>, network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
message_handler_send: crossbeam_channel::Sender<HandlerMessage>, message_handler_send: mpsc::UnboundedSender<HandlerMessage>,
executor: &TaskExecutor, executor: &TaskExecutor,
log: slog::Logger, log: slog::Logger,
) -> error::Result<oneshot::Sender<()>> { ) -> error::Result<tokio::sync::oneshot::Sender<()>> {
let (network_exit, exit_rx) = oneshot::channel(); let (network_exit, exit_rx) = tokio::sync::oneshot::channel();
// spawn on the current executor // spawn on the current executor
executor.spawn( executor.spawn(
@ -105,76 +104,76 @@ fn spawn_service(
//TODO: Potentially handle channel errors //TODO: Potentially handle channel errors
fn network_service( fn network_service(
mut libp2p_service: LibP2PService, mut libp2p_service: LibP2PService,
network_recv: crossbeam_channel::Receiver<NetworkMessage>, mut network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
message_handler_send: crossbeam_channel::Sender<HandlerMessage>, mut message_handler_send: mpsc::UnboundedSender<HandlerMessage>,
log: slog::Logger, log: slog::Logger,
) -> impl futures::Future<Item = (), Error = eth2_libp2p::error::Error> { ) -> impl futures::Future<Item = (), Error = eth2_libp2p::error::Error> {
futures::future::poll_fn(move || -> Result<_, eth2_libp2p::error::Error> { futures::future::poll_fn(move || -> Result<_, eth2_libp2p::error::Error> {
// poll the swarm // only end the loop once both major polls are not ready.
loop { let mut not_ready_count = 0;
while not_ready_count < 2 {
not_ready_count = 0;
// poll the network channel
match network_recv.poll() {
Ok(Async::Ready(Some(message))) => {
match message {
// TODO: Testing message - remove
NetworkMessage::Send(peer_id, outgoing_message) => {
match outgoing_message {
OutgoingMessage::RPC(rpc_event) => {
trace!(log, "Sending RPC Event: {:?}", rpc_event);
//TODO: Make swarm private
//TODO: Implement correct peer id topic message handling
libp2p_service.swarm.send_rpc(peer_id, rpc_event);
}
OutgoingMessage::NotifierTest => {
// debug!(log, "Received message from notifier");
}
};
}
NetworkMessage::Publish { topics, message } => {
debug!(log, "Sending pubsub message"; "topics" => format!("{:?}",topics));
libp2p_service.swarm.publish(topics, *message);
}
}
}
Ok(Async::NotReady) => not_ready_count += 1,
Ok(Async::Ready(None)) => {
return Err(eth2_libp2p::error::Error::from("Network channel closed"));
}
Err(_) => {
return Err(eth2_libp2p::error::Error::from("Network channel error"));
}
}
// poll the swarm
match libp2p_service.poll() { match libp2p_service.poll() {
Ok(Async::Ready(Some(event))) => match event { Ok(Async::Ready(Some(event))) => match event {
Libp2pEvent::RPC(peer_id, rpc_event) => { Libp2pEvent::RPC(peer_id, rpc_event) => {
trace!(log, "RPC Event: RPC message received: {:?}", rpc_event); trace!(log, "RPC Event: RPC message received: {:?}", rpc_event);
message_handler_send message_handler_send
.send(HandlerMessage::RPC(peer_id, rpc_event)) .try_send(HandlerMessage::RPC(peer_id, rpc_event))
.map_err(|_| "failed to send rpc to handler")?; .map_err(|_| "failed to send rpc to handler")?;
} }
Libp2pEvent::PeerDialed(peer_id) => { Libp2pEvent::PeerDialed(peer_id) => {
debug!(log, "Peer Dialed: {:?}", peer_id); debug!(log, "Peer Dialed: {:?}", peer_id);
message_handler_send message_handler_send
.send(HandlerMessage::PeerDialed(peer_id)) .try_send(HandlerMessage::PeerDialed(peer_id))
.map_err(|_| "failed to send rpc to handler")?; .map_err(|_| "failed to send rpc to handler")?;
} }
Libp2pEvent::Identified(peer_id, info) => {
debug!(
log,
"We have identified peer: {:?} with {:?}", peer_id, info
);
}
Libp2pEvent::PubsubMessage { Libp2pEvent::PubsubMessage {
source, message, .. source, message, ..
} => { } => {
//TODO: Decide if we need to propagate the topic upwards. (Potentially for //TODO: Decide if we need to propagate the topic upwards. (Potentially for
//attestations) //attestations)
message_handler_send message_handler_send
.send(HandlerMessage::PubsubMessage(source, message)) .try_send(HandlerMessage::PubsubMessage(source, message))
.map_err(|_| " failed to send pubsub message to handler")?; .map_err(|_| " failed to send pubsub message to handler")?;
} }
}, },
Ok(Async::Ready(None)) => unreachable!("Stream never ends"), Ok(Async::Ready(None)) => unreachable!("Stream never ends"),
Ok(Async::NotReady) => break, Ok(Async::NotReady) => not_ready_count += 1,
Err(_) => break, Err(_) => not_ready_count += 1,
}
}
// poll the network channel
// TODO: refactor - combine poll_fn's?
loop {
match network_recv.try_recv() {
// TODO: Testing message - remove
Ok(NetworkMessage::Send(peer_id, outgoing_message)) => {
match outgoing_message {
OutgoingMessage::RPC(rpc_event) => {
trace!(log, "Sending RPC Event: {:?}", rpc_event);
//TODO: Make swarm private
//TODO: Implement correct peer id topic message handling
libp2p_service.swarm.send_rpc(peer_id, rpc_event);
}
OutgoingMessage::NotifierTest => {
// debug!(log, "Received message from notifier");
}
};
}
Ok(NetworkMessage::Publish { topics, message }) => {
debug!(log, "Sending pubsub message on topics {:?}", topics);
libp2p_service.swarm.publish(topics, *message);
}
Err(TryRecvError::Empty) => break,
Err(TryRecvError::Disconnected) => {
return Err(eth2_libp2p::error::Error::from(
"Network channel disconnected",
));
}
} }
} }
Ok(Async::NotReady) Ok(Async::NotReady)
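
The `not_ready_count` trick above generalises: poll several sources inside one `poll_fn` and yield only once every source reports `NotReady` in the same pass, so neither source starves the other. A generic sketch (simplified in that a closed stream resolves the future rather than erroring, unlike the real code):

```rust
use futures::{future, Async, Poll, Stream};

/// Drain two sources in one task; return NotReady only when *both* report
/// NotReady in the same pass of the loop.
fn combined_poll<A, B>(
    mut chan: A,
    mut swarm: B,
) -> impl futures::Future<Item = (), Error = A::Error>
where
    A: Stream,
    B: Stream<Error = A::Error>,
{
    future::poll_fn(move || -> Poll<(), A::Error> {
        let mut not_ready_count = 0;
        while not_ready_count < 2 {
            not_ready_count = 0;
            match chan.poll()? {
                Async::Ready(Some(_msg)) => { /* forward to the swarm */ }
                Async::Ready(None) => return Ok(Async::Ready(())), // channel closed
                Async::NotReady => not_ready_count += 1,
            }
            match swarm.poll()? {
                Async::Ready(Some(_event)) => { /* forward to the handler */ }
                Async::Ready(None) => return Ok(Async::Ready(())),
                Async::NotReady => not_ready_count += 1,
            }
        }
        Ok(Async::NotReady)
    })
}
```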

View File

@ -41,31 +41,23 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
} }
} }
/// Completes all possible partials into `BeaconBlock` and returns them, sorted by increasing /// Returns true if the `BlockRoot` is found in the `import_queue`.
/// slot number. Does not delete the partials from the queue, this must be done manually.
///
/// Returns `(queue_index, block, sender)`:
///
/// - `block_root`: may be used to remove the entry if it is successfully processed.
/// - `block`: the completed block.
/// - `sender`: the `PeerId` the provided the `BeaconBlockBody` which completed the partial.
pub fn complete_blocks(&self) -> Vec<(Hash256, BeaconBlock, PeerId)> {
let mut complete: Vec<(Hash256, BeaconBlock, PeerId)> = self
.partials
.iter()
.filter_map(|(_, partial)| partial.clone().complete())
.collect();
// Sort the completable partials to be in ascending slot order.
complete.sort_unstable_by(|a, b| a.1.slot.partial_cmp(&b.1.slot).unwrap());
complete
}
pub fn contains_block_root(&self, block_root: Hash256) -> bool { pub fn contains_block_root(&self, block_root: Hash256) -> bool {
self.partials.contains_key(&block_root) self.partials.contains_key(&block_root)
} }
/// Attempts to complete the `BlockRoot` if it is found in the `import_queue`.
///
/// Returns a `PartialBeaconBlockCompletion` enum describing the result.
/// Does not remove the `block_root` from the `import_queue`.
pub fn attempt_complete_block(&self, block_root: Hash256) -> PartialBeaconBlockCompletion {
if let Some(partial) = self.partials.get(&block_root) {
partial.attempt_complete()
} else {
PartialBeaconBlockCompletion::MissingRoot
}
}
/// Removes the first `PartialBeaconBlock` with a matching `block_root`, returning the partial /// Removes the first `PartialBeaconBlock` with a matching `block_root`, returning the partial
/// if it exists. /// if it exists.
pub fn remove(&mut self, block_root: Hash256) -> Option<PartialBeaconBlock> { pub fn remove(&mut self, block_root: Hash256) -> Option<PartialBeaconBlock> {
@ -102,6 +94,8 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
block_roots: &[BlockRootSlot], block_roots: &[BlockRootSlot],
sender: PeerId, sender: PeerId,
) -> Vec<BlockRootSlot> { ) -> Vec<BlockRootSlot> {
// TODO: This will currently not return a `BlockRootSlot` if this root exists but there is no header.
// It would be more robust if it did.
let new_block_root_slots: Vec<BlockRootSlot> = block_roots let new_block_root_slots: Vec<BlockRootSlot> = block_roots
.iter() .iter()
// Ignore any roots already stored in the queue. // Ignore any roots already stored in the queue.
@ -135,12 +129,8 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
/// the queue and its block root is included in the output. /// the queue and its block root is included in the output.
/// ///
/// If a `header` is already in the queue, but not yet processed by the chain the block root is /// If a `header` is already in the queue, but not yet processed by the chain the block root is
/// included in the output and the `inserted` time for the partial record is set to /// not included in the output and the `inserted` time for the partial record is set to
/// `Instant::now()`. Updating the `inserted` time stops the partial from becoming stale. /// `Instant::now()`. Updating the `inserted` time stops the partial from becoming stale.
///
/// Presently the queue enforces that a `BeaconBlockHeader` _must_ be received before its
/// `BeaconBlockBody`. This is not a natural requirement and we could enhance the queue to lift
/// this restraint.
pub fn enqueue_headers( pub fn enqueue_headers(
&mut self, &mut self,
headers: Vec<BeaconBlockHeader>, headers: Vec<BeaconBlockHeader>,
@ -152,8 +142,10 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
let block_root = Hash256::from_slice(&header.canonical_root()[..]); let block_root = Hash256::from_slice(&header.canonical_root()[..]);
if self.chain_has_not_seen_block(&block_root) { if self.chain_has_not_seen_block(&block_root) {
self.insert_header(block_root, header, sender.clone()); if !self.insert_header(block_root, header, sender.clone()) {
required_bodies.push(block_root); // If the partial has no body yet, request one
required_bodies.push(block_root);
}
} }
} }
@ -163,10 +155,17 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
/// If there is a matching `header` for this `body`, adds it to the queue. /// If there is a matching `header` for this `body`, adds it to the queue.
/// ///
/// If there is no `header` for the `body`, the body is simply discarded. /// If there is no `header` for the `body`, the body is simply discarded.
pub fn enqueue_bodies(&mut self, bodies: Vec<BeaconBlockBody>, sender: PeerId) { pub fn enqueue_bodies(
&mut self,
bodies: Vec<BeaconBlockBody>,
sender: PeerId,
) -> Option<Hash256> {
let mut last_block_hash = None;
for body in bodies { for body in bodies {
self.insert_body(body, sender.clone()); last_block_hash = self.insert_body(body, sender.clone());
} }
last_block_hash
} }
pub fn enqueue_full_blocks(&mut self, blocks: Vec<BeaconBlock>, sender: PeerId) { pub fn enqueue_full_blocks(&mut self, blocks: Vec<BeaconBlock>, sender: PeerId) {
@ -179,12 +178,22 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
/// ///
/// If the header already exists, the `inserted` time is set to `now` and not other /// If the header already exists, the `inserted` time is set to `now` and not other
/// modifications are made. /// modifications are made.
fn insert_header(&mut self, block_root: Hash256, header: BeaconBlockHeader, sender: PeerId) { /// Returns true if a `body` already exists for this partial.
fn insert_header(
&mut self,
block_root: Hash256,
header: BeaconBlockHeader,
sender: PeerId,
) -> bool {
let mut exists = false;
self.partials self.partials
.entry(block_root) .entry(block_root)
.and_modify(|partial| { .and_modify(|partial| {
partial.header = Some(header.clone()); partial.header = Some(header.clone());
partial.inserted = Instant::now(); partial.inserted = Instant::now();
if partial.body.is_some() {
exists = true;
}
}) })
.or_insert_with(|| PartialBeaconBlock { .or_insert_with(|| PartialBeaconBlock {
slot: header.slot, slot: header.slot,
@ -194,28 +203,30 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
inserted: Instant::now(), inserted: Instant::now(),
sender, sender,
}); });
exists
} }
/// Updates an existing partial with the `body`. /// Updates an existing partial with the `body`.
/// ///
/// If there is no header for the `body`, the body is simply discarded.
///
/// If the body already existed, the `inserted` time is set to `now`. /// If the body already existed, the `inserted` time is set to `now`.
fn insert_body(&mut self, body: BeaconBlockBody, sender: PeerId) { ///
/// Returns the block root of the partial that the body completed, if any
fn insert_body(&mut self, body: BeaconBlockBody, sender: PeerId) -> Option<Hash256> {
let body_root = Hash256::from_slice(&body.tree_hash_root()[..]); let body_root = Hash256::from_slice(&body.tree_hash_root()[..]);
let mut last_root = None;
self.partials.iter_mut().for_each(|(_, mut p)| { self.partials.iter_mut().for_each(|(root, mut p)| {
if let Some(header) = &mut p.header { if let Some(header) = &mut p.header {
if body_root == header.block_body_root { if body_root == header.block_body_root {
p.inserted = Instant::now(); p.inserted = Instant::now();
p.body = Some(body.clone());
if p.body.is_none() { p.sender = sender.clone();
p.body = Some(body.clone()); last_root = Some(*root);
p.sender = sender.clone();
}
} }
} }
}); });
last_root
} }
/// Updates an existing `partial` with the completed block, or adds a new (complete) partial. /// Updates an existing `partial` with the completed block, or adds a new (complete) partial.
@ -257,13 +268,33 @@ pub struct PartialBeaconBlock {
} }
impl PartialBeaconBlock { impl PartialBeaconBlock {
/// Consumes `self` and returns a full built `BeaconBlock`, it's root and the `sender` /// Attempts to build a block.
/// `PeerId`, if enough information exists to complete the block. Otherwise, returns `None`. ///
pub fn complete(self) -> Option<(Hash256, BeaconBlock, PeerId)> { /// Does not consume the `PartialBeaconBlock`.
Some(( pub fn attempt_complete(&self) -> PartialBeaconBlockCompletion {
self.block_root, if self.header.is_none() {
self.header?.into_block(self.body?), PartialBeaconBlockCompletion::MissingHeader(self.slot)
self.sender, } else if self.body.is_none() {
)) PartialBeaconBlockCompletion::MissingBody
} else {
PartialBeaconBlockCompletion::Complete(
self.header
.clone()
.unwrap()
.into_block(self.body.clone().unwrap()),
)
}
} }
} }
/// The result of attempting to complete a `PartialBeaconBlock` into a `BeaconBlock`.
pub enum PartialBeaconBlockCompletion {
/// The partial contains a valid BeaconBlock.
Complete(BeaconBlock),
/// The partial does not exist.
MissingRoot,
/// The partial contains a `BeaconBlockRoot` but no `BeaconBlockHeader`.
MissingHeader(Slot),
/// The partial contains a `BeaconBlockRoot` and `BeaconBlockHeader` but no `BeaconBlockBody`.
MissingBody,
}
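
The same completion decision reads naturally as a tuple match. A toy model with string stand-ins (the real variants carry a `BeaconBlock` and a `Slot`):

```rust
/// Minimal stand-ins for a partial block and its completion result.
struct Partial { slot: u64, header: Option<String>, body: Option<String> }

enum Completion { Complete(String), MissingHeader(u64), MissingBody }

/// A block can only be built once both header and body are present.
fn attempt_complete(p: &Partial) -> Completion {
    match (&p.header, &p.body) {
        (None, _) => Completion::MissingHeader(p.slot),
        (Some(_), None) => Completion::MissingBody,
        (Some(h), Some(b)) => Completion::Complete(format!("{} + {}", h, b)),
    }
}

fn main() {
    let p = Partial { slot: 7, header: Some("header".into()), body: None };
    match attempt_complete(&p) {
        Completion::MissingBody => println!("request the body for slot {}", p.slot),
        _ => unreachable!(),
    }
}
```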

View File

@ -1,10 +1,10 @@
use super::import_queue::ImportQueue; use super::import_queue::{ImportQueue, PartialBeaconBlockCompletion};
use crate::message_handler::NetworkContext; use crate::message_handler::NetworkContext;
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome}; use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
use eth2_libp2p::rpc::methods::*; use eth2_libp2p::rpc::methods::*;
use eth2_libp2p::rpc::{RPCRequest, RPCResponse, RequestId}; use eth2_libp2p::rpc::{RPCRequest, RPCResponse, RequestId};
use eth2_libp2p::PeerId; use eth2_libp2p::PeerId;
use slog::{debug, error, info, o, warn}; use slog::{debug, error, info, o, trace, warn};
use std::collections::HashMap; use std::collections::HashMap;
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration; use std::time::Duration;
@ -17,7 +17,7 @@ use types::{
const SLOT_IMPORT_TOLERANCE: u64 = 100; const SLOT_IMPORT_TOLERANCE: u64 = 100;
/// The number of seconds a block (or partial block) may exist in the import queue. /// The number of seconds a block (or partial block) may exist in the import queue.
const QUEUE_STALE_SECS: u64 = 6; const QUEUE_STALE_SECS: u64 = 100;
/// If a block is more than `FUTURE_SLOT_TOLERANCE` slots ahead of our slot clock, we drop it. /// If a block is more than `FUTURE_SLOT_TOLERANCE` slots ahead of our slot clock, we drop it.
/// Otherwise we queue it. /// Otherwise we queue it.
@ -227,7 +227,12 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
// //
// Therefore, there are some blocks between the local finalized epoch and the remote // Therefore, there are some blocks between the local finalized epoch and the remote
// head that are worth downloading. // head that are worth downloading.
debug!(self.log, "UsefulPeer"; "peer" => format!("{:?}", peer_id)); debug!(
self.log, "UsefulPeer";
"peer" => format!("{:?}", peer_id),
"local_finalized_epoch" => local.latest_finalized_epoch,
"remote_latest_finalized_epoch" => remote.latest_finalized_epoch,
);
let start_slot = local let start_slot = local
.latest_finalized_epoch .latest_finalized_epoch
@ -238,7 +243,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
peer_id, peer_id,
BeaconBlockRootsRequest { BeaconBlockRootsRequest {
start_slot, start_slot,
count: required_slots.into(), count: required_slots.as_u64(),
}, },
network, network,
); );
@ -247,7 +252,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
fn root_at_slot(&self, target_slot: Slot) -> Option<Hash256> { fn root_at_slot(&self, target_slot: Slot) -> Option<Hash256> {
self.chain self.chain
.rev_iter_block_roots(target_slot) .rev_iter_best_block_roots(target_slot)
.take(1) .take(1)
.find(|(_root, slot)| *slot == target_slot) .find(|(_root, slot)| *slot == target_slot)
.map(|(root, _slot)| root) .map(|(root, _slot)| root)
@ -271,8 +276,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
let mut roots: Vec<BlockRootSlot> = self let mut roots: Vec<BlockRootSlot> = self
.chain .chain
.rev_iter_block_roots(req.start_slot + req.count) .rev_iter_best_block_roots(req.start_slot + req.count)
.skip(1)
.take(req.count as usize) .take(req.count as usize)
.map(|(block_root, slot)| BlockRootSlot { slot, block_root }) .map(|(block_root, slot)| BlockRootSlot { slot, block_root })
.collect(); .collect();
@ -356,7 +360,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
BeaconBlockHeadersRequest { BeaconBlockHeadersRequest {
start_root: first.block_root, start_root: first.block_root,
start_slot: first.slot, start_slot: first.slot,
max_headers: (last.slot - first.slot).as_u64(), max_headers: (last.slot - first.slot + 1).as_u64(),
skip_slots: 0, skip_slots: 0,
}, },
network, network,
@ -386,7 +390,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
// unnecessary block deserialization when `req.skip_slots > 0`. // unnecessary block deserialization when `req.skip_slots > 0`.
let mut roots: Vec<Hash256> = self let mut roots: Vec<Hash256> = self
.chain .chain
.rev_iter_block_roots(req.start_slot + (count - 1)) .rev_iter_best_block_roots(req.start_slot + count)
.take(count as usize) .take(count as usize)
.map(|(root, _slot)| root) .map(|(root, _slot)| root)
.collect(); .collect();
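
The `+ 1` and dropped `skip(1)` changes in this file fix off-by-one errors in range arithmetic. A small check, assuming the reverse iterator yields roots from its seed slot downwards, as the surrounding code implies:

```rust
/// Model of the reverse iterator: seeded at `slot`, it yields slot, slot - 1, ...
fn rev_roots_from(slot: u64) -> impl Iterator<Item = u64> {
    (0..=slot).rev()
}

fn main() {
    let (start_slot, count) = (10u64, 3u64);
    // Seeding at `start_slot + count` and taking `count` yields slots
    // 13, 12, 11: exactly `count` roots after `start_slot`, no skip(1) needed.
    let slots: Vec<u64> = rev_roots_from(start_slot + count).take(count as usize).collect();
    assert_eq!(slots, vec![13, 12, 11]);
    // Likewise `max_headers = last - first + 1` makes both endpoints inclusive:
    // slots 5..=8 is four headers, not three.
    let (first, last) = (5u64, 8u64);
    assert_eq!(last - first + 1, 4);
}
```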
@ -499,14 +503,26 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
"count" => res.block_bodies.len(), "count" => res.block_bodies.len(),
); );
self.import_queue if !res.block_bodies.is_empty() {
.enqueue_bodies(res.block_bodies, peer_id.clone()); // Import all blocks to queue
let last_root = self
.import_queue
.enqueue_bodies(res.block_bodies, peer_id.clone());
// Attempt to process all received bodies by recursively processing the latest block
if let Some(root) = last_root {
match self.attempt_process_partial_block(peer_id, root, network, &"rpc") {
Some(BlockProcessingOutcome::Processed { block_root: _ }) => {
// If processing is successful remove from `import_queue`
self.import_queue.remove(root);
}
_ => {}
}
}
}
// Clear out old entries // Clear out old entries
self.import_queue.remove_stale(); self.import_queue.remove_stale();
// Import blocks, if possible.
self.process_import_queue(network);
} }
/// Process a gossip message declaring a new block. /// Process a gossip message declaring a new block.
@ -526,26 +542,35 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
match outcome { match outcome {
BlockProcessingOutcome::Processed { .. } => SHOULD_FORWARD_GOSSIP_BLOCK, BlockProcessingOutcome::Processed { .. } => SHOULD_FORWARD_GOSSIP_BLOCK,
BlockProcessingOutcome::ParentUnknown { parent } => { BlockProcessingOutcome::ParentUnknown { parent } => {
// Clean the stale entries from the queue.
self.import_queue.remove_stale();
// Add this block to the queue // Add this block to the queue
self.import_queue self.import_queue
.enqueue_full_blocks(vec![block], peer_id.clone()); .enqueue_full_blocks(vec![block.clone()], peer_id.clone());
debug!(
self.log, "RequestParentBlock";
"parent_root" => format!("{}", parent),
"parent_slot" => block.slot - 1,
"peer" => format!("{:?}", peer_id),
);
// Unless the parent is in the queue, request the parent block from the peer. // Request roots between parent and start of finality from peer.
// let start_slot = self
// It is likely that this is duplicate work, given we already send a hello .chain
// request. However, I believe there are some edge-cases where the hello .head()
// message doesn't suffice, so we perform this request as well. .beacon_state
if !self.import_queue.contains_block_root(parent) { .finalized_epoch
// Send a hello to learn of the clients best slot so we can then sync the required .start_slot(T::EthSpec::slots_per_epoch());
// parent(s). self.request_block_roots(
network.send_rpc_request( peer_id,
peer_id.clone(), BeaconBlockRootsRequest {
RPCRequest::Hello(hello_message(&self.chain)), // Request blocks between `latest_finalized_slot` and the `block`
); start_slot,
} count: block.slot.as_u64() - start_slot.as_u64(),
},
network,
);
// Clean the stale entries from the queue.
self.import_queue.remove_stale();
SHOULD_FORWARD_GOSSIP_BLOCK SHOULD_FORWARD_GOSSIP_BLOCK
} }
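
The roots request built above spans from the start of the head's finalized epoch up to, but excluding, the gossip block's slot. As plain arithmetic (illustrative function name):

```rust
/// start_slot is the first slot of the finalized epoch; count excludes the
/// gossip block's own slot.
fn roots_request_range(finalized_epoch: u64, slots_per_epoch: u64, block_slot: u64) -> (u64, u64) {
    let start_slot = finalized_epoch * slots_per_epoch;
    let count = block_slot - start_slot;
    (start_slot, count)
}

fn main() {
    // Finalized epoch 2 with 8-slot epochs and a gossip block at slot 21:
    // request 5 roots covering slots 16 through 20.
    assert_eq!(roots_request_range(2, 8, 21), (16, 5));
}
```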
@ -587,40 +612,6 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
} }
} }
/// Iterate through the `import_queue` and process any complete blocks.
///
/// If a block is successfully processed it is removed from the queue, otherwise it remains in
/// the queue.
pub fn process_import_queue(&mut self, network: &mut NetworkContext) {
let mut successful = 0;
// Loop through all of the complete blocks in the queue.
for (block_root, block, sender) in self.import_queue.complete_blocks() {
let processing_result = self.process_block(sender, block.clone(), network, &"gossip");
let should_dequeue = match processing_result {
Some(BlockProcessingOutcome::ParentUnknown { .. }) => false,
Some(BlockProcessingOutcome::FutureSlot {
present_slot,
block_slot,
}) if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot => false,
_ => true,
};
if processing_result == Some(BlockProcessingOutcome::Processed { block_root }) {
successful += 1;
}
if should_dequeue {
self.import_queue.remove(block_root);
}
}
if successful > 0 {
info!(self.log, "Imported {} blocks", successful)
}
}
/// Request some `BeaconBlockRoots` from the remote peer. /// Request some `BeaconBlockRoots` from the remote peer.
fn request_block_roots( fn request_block_roots(
&mut self, &mut self,
@ -695,6 +686,89 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
hello_message(&self.chain) hello_message(&self.chain)
} }
/// Helper function to attempt to process a partial block.
///
/// If the block can be completed recursively call `process_block`
/// else request missing parts.
fn attempt_process_partial_block(
&mut self,
peer_id: PeerId,
block_root: Hash256,
network: &mut NetworkContext,
source: &str,
) -> Option<BlockProcessingOutcome> {
match self.import_queue.attempt_complete_block(block_root) {
PartialBeaconBlockCompletion::MissingBody => {
// Unable to complete the block because the block body is missing.
debug!(
self.log, "RequestParentBody";
"source" => source,
"block_root" => format!("{}", block_root),
"peer" => format!("{:?}", peer_id),
);
// Request the block body from the peer.
self.request_block_bodies(
peer_id,
BeaconBlockBodiesRequest {
block_roots: vec![block_root],
},
network,
);
None
}
PartialBeaconBlockCompletion::MissingHeader(slot) => {
// Unable to complete the block because the block header is missing.
debug!(
self.log, "RequestParentHeader";
"source" => source,
"block_root" => format!("{}", block_root),
"peer" => format!("{:?}", peer_id),
);
// Request the block header from the peer.
self.request_block_headers(
peer_id,
BeaconBlockHeadersRequest {
start_root: block_root,
start_slot: slot,
max_headers: 1,
skip_slots: 0,
},
network,
);
None
}
PartialBeaconBlockCompletion::MissingRoot => {
// The `block_root` is not known to the queue.
debug!(
self.log, "MissingParentRoot";
"source" => source,
"block_root" => format!("{}", block_root),
"peer" => format!("{:?}", peer_id),
);
// Do nothing.
None
}
PartialBeaconBlockCompletion::Complete(block) => {
// The block exists in the queue, attempt to process it
trace!(
self.log, "AttemptProcessParent";
"source" => source,
"block_root" => format!("{}", block_root),
"parent_slot" => block.slot,
"peer" => format!("{:?}", peer_id),
);
self.process_block(peer_id.clone(), block, network, source)
}
}
}
/// Processes the `block` that was received from `peer_id`. /// Processes the `block` that was received from `peer_id`.
/// ///
/// If the block was submitted to the beacon chain without internal error, `Some(outcome)` is /// If the block was submitted to the beacon chain without internal error, `Some(outcome)` is
@ -721,6 +795,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
if let Ok(outcome) = processing_result { if let Ok(outcome) = processing_result {
match outcome { match outcome {
BlockProcessingOutcome::Processed { block_root } => { BlockProcessingOutcome::Processed { block_root } => {
// The block was valid and we processed it successfully.
debug!( debug!(
self.log, "Imported block from network"; self.log, "Imported block from network";
"source" => source, "source" => source,
@ -730,26 +805,29 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
); );
} }
BlockProcessingOutcome::ParentUnknown { parent } => { BlockProcessingOutcome::ParentUnknown { parent } => {
// The block was valid and we processed it successfully. // The parent has not been processed
debug!( trace!(
self.log, "ParentBlockUnknown"; self.log, "ParentBlockUnknown";
"source" => source, "source" => source,
"parent_root" => format!("{}", parent), "parent_root" => format!("{}", parent),
"baby_block_slot" => block.slot,
"peer" => format!("{:?}", peer_id), "peer" => format!("{:?}", peer_id),
); );
// Unless the parent is in the queue, request the parent block from the peer. // If the parent is in the `import_queue` attempt to complete it then process it.
// match self.attempt_process_partial_block(peer_id, parent, network, source) {
// It is likely that this is duplicate work, given we already send a hello // If processing parent is sucessful, re-process block and remove parent from queue
// request. However, I believe there are some edge-cases where the hello Some(BlockProcessingOutcome::Processed { block_root: _ }) => {
// message doesn't suffice, so we perform this request as well. self.import_queue.remove(parent);
if !self.import_queue.contains_block_root(parent) {
// Send a hello to learn of the clients best slot so we can then sync the require // Attempt to process `block` again
// parent(s). match self.chain.process_block(block) {
network.send_rpc_request( Ok(outcome) => return Some(outcome),
peer_id.clone(), Err(_) => return None,
RPCRequest::Hello(hello_message(&self.chain)), }
); }
// All other cases leave `parent` in `import_queue` and return original outcome.
_ => {}
} }
} }
BlockProcessingOutcome::FutureSlot { BlockProcessingOutcome::FutureSlot {
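
A self-contained model of the recovery flow this file introduces: on `ParentUnknown`, try to complete the parent from the queue and, on success, remove it and retry the child, recursing until an ancestor processes. Toy types throughout; the real code also re-requests missing headers and bodies via `attempt_process_partial_block`:

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Block { root: u64, parent: Option<u64> }

enum Outcome { Processed, ParentUnknown(u64) }

/// Toy chain: a block processes only once its parent is known.
fn process(chain: &mut Vec<u64>, block: &Block) -> Outcome {
    match block.parent {
        Some(p) if !chain.contains(&p) => Outcome::ParentUnknown(p),
        _ => { chain.push(block.root); Outcome::Processed }
    }
}

/// Recursion walks back through queued ancestors; each success removes the
/// ancestor from the queue and retries its child.
fn process_with_queue(chain: &mut Vec<u64>, queue: &mut HashMap<u64, Block>, block: &Block) -> Outcome {
    match process(chain, block) {
        Outcome::ParentUnknown(parent_root) => {
            if let Some(parent) = queue.get(&parent_root).cloned() {
                if let Outcome::Processed = process_with_queue(chain, queue, &parent) {
                    queue.remove(&parent_root);
                    return process(chain, block); // retry the child
                }
            }
            Outcome::ParentUnknown(parent_root) // parent not recoverable locally
        }
        outcome => outcome,
    }
}

fn main() {
    let mut chain = vec![0];
    let mut queue: HashMap<u64, Block> = vec![
        (1, Block { root: 1, parent: Some(0) }),
        (2, Block { root: 2, parent: Some(1) }),
    ]
    .into_iter()
    .collect();
    let gossip = Block { root: 3, parent: Some(2) };
    assert!(matches!(process_with_queue(&mut chain, &mut queue, &gossip), Outcome::Processed));
    assert_eq!(chain, vec![0, 1, 2, 3]);
}
```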

View File

@ -22,9 +22,8 @@ dirs = "1.0.3"
futures = "0.1.23" futures = "0.1.23"
serde = "1.0" serde = "1.0"
serde_derive = "1.0" serde_derive = "1.0"
slog = "^2.2.3" slog = { version = "^2.2.3" , features = ["max_level_trace"] }
slog-term = "^2.4.0" slog-term = "^2.4.0"
slog-async = "^2.3.0" slog-async = "^2.3.0"
tokio = "0.1.17" tokio = "0.1.17"
exit-future = "0.1.4" exit-future = "0.1.4"
crossbeam-channel = "0.3.8"

View File

@ -1,5 +1,7 @@
use beacon_chain::{BeaconChain, BeaconChainTypes}; use beacon_chain::{BeaconChain, BeaconChainTypes};
use eth2_libp2p::PubsubMessage; use eth2_libp2p::PubsubMessage;
use eth2_libp2p::TopicBuilder;
use eth2_libp2p::BEACON_ATTESTATION_TOPIC;
use futures::Future; use futures::Future;
use grpcio::{RpcContext, RpcStatus, RpcStatusCode, UnarySink}; use grpcio::{RpcContext, RpcStatus, RpcStatusCode, UnarySink};
use network::NetworkMessage; use network::NetworkMessage;
@ -11,12 +13,13 @@ use protos::services_grpc::AttestationService;
use slog::{error, info, trace, warn}; use slog::{error, info, trace, warn};
use ssz::{ssz_encode, Decode}; use ssz::{ssz_encode, Decode};
use std::sync::Arc; use std::sync::Arc;
use tokio::sync::mpsc;
use types::Attestation; use types::Attestation;
#[derive(Clone)] #[derive(Clone)]
pub struct AttestationServiceInstance<T: BeaconChainTypes> { pub struct AttestationServiceInstance<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>, pub chain: Arc<BeaconChain<T>>,
pub network_chan: crossbeam_channel::Sender<NetworkMessage>, pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
pub log: slog::Logger, pub log: slog::Logger,
} }
@ -136,13 +139,12 @@ impl<T: BeaconChainTypes> AttestationService for AttestationServiceInstance<T> {
"type" => "valid_attestation", "type" => "valid_attestation",
); );
// TODO: Obtain topics from the network service properly. // valid attestation, propagate to the network
let topic = types::TopicBuilder::new("beacon_chain".to_string()).build(); let topic = TopicBuilder::new(BEACON_ATTESTATION_TOPIC).build();
let message = PubsubMessage::Attestation(attestation); let message = PubsubMessage::Attestation(attestation);
// Publish the attestation to the p2p network via gossipsub.
self.network_chan self.network_chan
.send(NetworkMessage::Publish { .try_send(NetworkMessage::Publish {
topics: vec![topic], topics: vec![topic],
message: Box::new(message), message: Box::new(message),
}) })
@ -150,7 +152,7 @@ impl<T: BeaconChainTypes> AttestationService for AttestationServiceInstance<T> {
error!( error!(
self.log, self.log,
"PublishAttestation"; "PublishAttestation";
"type" => "failed to publish to gossipsub", "type" => "failed to publish attestation to gossipsub",
"error" => format!("{:?}", e) "error" => format!("{:?}", e)
); );
}); });

View File

@ -1,6 +1,6 @@
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome}; use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
use crossbeam_channel; use eth2_libp2p::BEACON_PUBSUB_TOPIC;
use eth2_libp2p::PubsubMessage; use eth2_libp2p::{PubsubMessage, TopicBuilder};
use futures::Future; use futures::Future;
use grpcio::{RpcContext, RpcStatus, RpcStatusCode, UnarySink}; use grpcio::{RpcContext, RpcStatus, RpcStatusCode, UnarySink};
use network::NetworkMessage; use network::NetworkMessage;
@ -13,12 +13,13 @@ use slog::Logger;
use slog::{error, info, trace, warn}; use slog::{error, info, trace, warn};
use ssz::{ssz_encode, Decode}; use ssz::{ssz_encode, Decode};
use std::sync::Arc; use std::sync::Arc;
use tokio::sync::mpsc;
use types::{BeaconBlock, Signature, Slot}; use types::{BeaconBlock, Signature, Slot};
#[derive(Clone)] #[derive(Clone)]
pub struct BeaconBlockServiceInstance<T: BeaconChainTypes> { pub struct BeaconBlockServiceInstance<T: BeaconChainTypes> {
pub chain: Arc<BeaconChain<T>>, pub chain: Arc<BeaconChain<T>>,
pub network_chan: crossbeam_channel::Sender<NetworkMessage>, pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
pub log: Logger, pub log: Logger,
} }
@ -104,14 +105,13 @@ impl<T: BeaconChainTypes> BeaconBlockService for BeaconBlockServiceInstance<T> {
"block_root" => format!("{}", block_root), "block_root" => format!("{}", block_root),
); );
// TODO: Obtain topics from the network service properly. // get the network topic to send on
let topic = let topic = TopicBuilder::new(BEACON_PUBSUB_TOPIC).build();
types::TopicBuilder::new("beacon_chain".to_string()).build();
let message = PubsubMessage::Block(block); let message = PubsubMessage::Block(block);
// Publish the block to the p2p network via gossipsub. // Publish the block to the p2p network via gossipsub.
self.network_chan self.network_chan
.send(NetworkMessage::Publish { .try_send(NetworkMessage::Publish {
topics: vec![topic], topics: vec![topic],
message: Box::new(message), message: Box::new(message),
}) })

View File

@ -20,11 +20,12 @@ use protos::services_grpc::{
use slog::{info, o, warn}; use slog::{info, o, warn};
use std::sync::Arc; use std::sync::Arc;
use tokio::runtime::TaskExecutor; use tokio::runtime::TaskExecutor;
use tokio::sync::mpsc;
pub fn start_server<T: BeaconChainTypes + Clone + 'static>( pub fn start_server<T: BeaconChainTypes + Clone + 'static>(
config: &RPCConfig, config: &RPCConfig,
executor: &TaskExecutor, executor: &TaskExecutor,
network_chan: crossbeam_channel::Sender<NetworkMessage>, network_chan: mpsc::UnboundedSender<NetworkMessage>,
beacon_chain: Arc<BeaconChain<T>>, beacon_chain: Arc<BeaconChain<T>>,
log: &slog::Logger, log: &slog::Logger,
) -> exit_future::Signal { ) -> exit_future::Signal {
@ -60,8 +61,8 @@ pub fn start_server<T: BeaconChainTypes + Clone + 'static>(
}; };
let attestation_service = { let attestation_service = {
let instance = AttestationServiceInstance { let instance = AttestationServiceInstance {
chain: beacon_chain.clone(),
network_chan, network_chan,
chain: beacon_chain.clone(),
log: log.clone(), log: log.clone(),
}; };
create_attestation_service(instance) create_attestation_service(instance)

View File

@ -1,11 +1,11 @@
extern crate slog;
mod run; mod run;
use clap::{App, Arg}; use clap::{App, Arg};
use client::{ClientConfig, Eth2Config}; use client::{ClientConfig, Eth2Config};
use eth2_config::{get_data_dir, read_from_file, write_to_file}; use env_logger::{Builder, Env};
use slog::{crit, o, Drain}; use eth2_config::{read_from_file, write_to_file};
use slog::{crit, o, Drain, Level};
use std::fs;
use std::path::PathBuf; use std::path::PathBuf;
pub const DEFAULT_DATA_DIR: &str = ".lighthouse"; pub const DEFAULT_DATA_DIR: &str = ".lighthouse";
@ -14,10 +14,8 @@ pub const CLIENT_CONFIG_FILENAME: &str = "beacon-node.toml";
pub const ETH2_CONFIG_FILENAME: &str = "eth2-spec.toml"; pub const ETH2_CONFIG_FILENAME: &str = "eth2-spec.toml";
fn main() { fn main() {
let decorator = slog_term::TermDecorator::new().build(); // debugging output for libp2p and external crates
let drain = slog_term::CompactFormat::new(decorator).build().fuse(); Builder::from_env(Env::default()).init();
let drain = slog_async::Async::new(drain).build().fuse();
let logger = slog::Logger::root(drain, o!());
let matches = App::new("Lighthouse") let matches = App::new("Lighthouse")
.version(version::version().as_str()) .version(version::version().as_str())
@ -30,21 +28,55 @@ fn main() {
.value_name("DIR") .value_name("DIR")
.help("Data directory for keys and databases.") .help("Data directory for keys and databases.")
.takes_value(true) .takes_value(true)
.default_value(DEFAULT_DATA_DIR), )
.arg(
Arg::with_name("logfile")
.long("logfile")
.value_name("logfile")
.help("File path where output will be written.")
.takes_value(true),
) )
// network related arguments // network related arguments
.arg( .arg(
Arg::with_name("listen-address") Arg::with_name("listen-address")
.long("listen-address") .long("listen-address")
.value_name("Listen Address") .value_name("Address")
.help("One or more comma-delimited multi-addresses to listen for p2p connections.") .help("The address lighthouse will listen for UDP and TCP connections. (default 127.0.0.1).")
.takes_value(true),
)
.arg(
Arg::with_name("maxpeers")
.long("maxpeers")
.help("The maximum number of peers (default 10).")
.takes_value(true), .takes_value(true),
) )
.arg( .arg(
Arg::with_name("boot-nodes") Arg::with_name("boot-nodes")
.long("boot-nodes") .long("boot-nodes")
.allow_hyphen_values(true)
.value_name("BOOTNODES") .value_name("BOOTNODES")
.help("One or more comma-delimited multi-addresses to bootstrap the p2p network.") .help("One or more comma-delimited base64-encoded ENR's to bootstrap the p2p network.")
.takes_value(true),
)
.arg(
Arg::with_name("port")
.long("port")
.value_name("Lighthouse Port")
.help("The TCP/UDP port to listen on. The UDP port can be modified by the --discovery-port flag.")
.takes_value(true),
)
.arg(
Arg::with_name("discovery-port")
.long("disc-port")
.value_name("DiscoveryPort")
.help("The discovery UDP port.")
.takes_value(true),
)
.arg(
Arg::with_name("discovery-address")
.long("discovery-address")
.value_name("Address")
.help("The IP address to broadcast to other peers on how to reach this node.")
.takes_value(true), .takes_value(true),
) )
// rpc related arguments // rpc related arguments
@ -58,14 +90,13 @@ fn main() {
.arg( .arg(
Arg::with_name("rpc-address") Arg::with_name("rpc-address")
.long("rpc-address") .long("rpc-address")
.value_name("RPCADDRESS") .value_name("Address")
.help("Listen address for RPC endpoint.") .help("Listen address for RPC endpoint.")
.takes_value(true), .takes_value(true),
) )
.arg( .arg(
Arg::with_name("rpc-port") Arg::with_name("rpc-port")
.long("rpc-port") .long("rpc-port")
.value_name("RPCPORT")
.help("Listen port for RPC endpoint.") .help("Listen port for RPC endpoint.")
.takes_value(true), .takes_value(true),
) )
@ -73,21 +104,19 @@ fn main() {
.arg( .arg(
Arg::with_name("http") Arg::with_name("http")
.long("http") .long("http")
.value_name("HTTP")
.help("Enable the HTTP server.") .help("Enable the HTTP server.")
.takes_value(false), .takes_value(false),
) )
.arg( .arg(
Arg::with_name("http-address") Arg::with_name("http-address")
.long("http-address") .long("http-address")
.value_name("HTTPADDRESS") .value_name("Address")
.help("Listen address for the HTTP server.") .help("Listen address for the HTTP server.")
.takes_value(true), .takes_value(true),
) )
.arg( .arg(
Arg::with_name("http-port") Arg::with_name("http-port")
.long("http-port") .long("http-port")
.value_name("HTTPPORT")
.help("Listen port for the HTTP server.") .help("Listen port for the HTTP server.")
.takes_value(true), .takes_value(true),
) )
@@ -116,19 +145,60 @@ fn main() {
                .short("r")
                .help("When present, genesis will be within 30 minutes prior. Only for testing"),
        )
+       .arg(
+           Arg::with_name("verbosity")
+               .short("v")
+               .multiple(true)
+               .help("Sets the verbosity level")
+               .takes_value(true),
+       )
        .get_matches();

+   // build the initial logger
+   let decorator = slog_term::TermDecorator::new().build();
+   let drain = slog_term::CompactFormat::new(decorator).build().fuse();
+   let drain = slog_async::Async::new(drain).build();
+
+   let drain = match matches.occurrences_of("verbosity") {
+       0 => drain.filter_level(Level::Info),
+       1 => drain.filter_level(Level::Debug),
+       2 => drain.filter_level(Level::Trace),
+       _ => drain.filter_level(Level::Trace),
+   };
+
+   let mut log = slog::Logger::root(drain.fuse(), o!());
+
-   let data_dir = match get_data_dir(&matches, PathBuf::from(DEFAULT_DATA_DIR)) {
-       Ok(dir) => dir,
-       Err(e) => {
-           crit!(logger, "Failed to initialize data dir"; "error" => format!("{:?}", e));
-           return;
-       }
-   };
+   let data_dir = match matches
+       .value_of("datadir")
+       .and_then(|v| Some(PathBuf::from(v)))
+   {
+       Some(v) => v,
+       None => {
+           // use the default
+           let mut default_dir = match dirs::home_dir() {
+               Some(v) => v,
+               None => {
+                   crit!(log, "Failed to find a home directory");
+                   return;
+               }
+           };
+           default_dir.push(DEFAULT_DATA_DIR);
+           PathBuf::from(default_dir)
+       }
+   };
+
+   // create the directory if needed
+   match fs::create_dir_all(&data_dir) {
+       Ok(_) => {}
+       Err(e) => {
+           crit!(log, "Failed to initialize data dir"; "error" => format!("{}", e));
+           return;
+       }
+   }
    let client_config_path = data_dir.join(CLIENT_CONFIG_FILENAME);

-   // Attempt to lead the `ClientConfig` from disk.
+   // Attempt to load the `ClientConfig` from disk.
    //
    // If file doesn't exist, create a new, default one.
    let mut client_config = match read_from_file::<ClientConfig>(client_config_path.clone()) {
@@ -136,13 +206,13 @@ fn main() {
        Ok(None) => {
            let default = ClientConfig::default();
            if let Err(e) = write_to_file(client_config_path, &default) {
-               crit!(logger, "Failed to write default ClientConfig to file"; "error" => format!("{:?}", e));
+               crit!(log, "Failed to write default ClientConfig to file"; "error" => format!("{:?}", e));
                return;
            }
            default
        }
        Err(e) => {
-           crit!(logger, "Failed to load a ChainConfig file"; "error" => format!("{:?}", e));
+           crit!(log, "Failed to load a ChainConfig file"; "error" => format!("{:?}", e));
            return;
        }
    };
@@ -151,10 +221,10 @@ fn main() {
    client_config.data_dir = data_dir.clone();

    // Update the client config with any CLI args.
-   match client_config.apply_cli_args(&matches) {
+   match client_config.apply_cli_args(&matches, &mut log) {
        Ok(()) => (),
        Err(s) => {
-           crit!(logger, "Failed to parse ClientConfig CLI arguments"; "error" => s);
+           crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => s);
            return;
        }
    };
@@ -173,13 +243,13 @@ fn main() {
            _ => unreachable!(), // Guarded by slog.
        };
        if let Err(e) = write_to_file(eth2_config_path, &default) {
-           crit!(logger, "Failed to write default Eth2Config to file"; "error" => format!("{:?}", e));
+           crit!(log, "Failed to write default Eth2Config to file"; "error" => format!("{:?}", e));
            return;
        }
        default
        }
        Err(e) => {
-           crit!(logger, "Failed to load/generate an Eth2Config"; "error" => format!("{:?}", e));
+           crit!(log, "Failed to load/generate an Eth2Config"; "error" => format!("{:?}", e));
            return;
        }
    };
@@ -188,13 +258,13 @@ fn main() {
    match eth2_config.apply_cli_args(&matches) {
        Ok(()) => (),
        Err(s) => {
-           crit!(logger, "Failed to parse Eth2Config CLI arguments"; "error" => s);
+           crit!(log, "Failed to parse Eth2Config CLI arguments"; "error" => s);
            return;
        }
    };

-   match run::run_beacon_node(client_config, eth2_config, &logger) {
+   match run::run_beacon_node(client_config, eth2_config, &log) {
        Ok(_) => {}
-       Err(e) => crit!(logger, "Beacon node failed to start"; "reason" => format!("{:}", e)),
+       Err(e) => crit!(log, "Beacon node failed to start"; "reason" => format!("{:}", e)),
    }
}


@@ -41,6 +41,15 @@ pub fn run_beacon_node(
        "This software is EXPERIMENTAL and provides no guarantees or warranties."
    );

+   info!(
+       log,
+       "Starting beacon node";
+       "p2p_listen_address" => format!("{:?}", &other_client_config.network.listen_address),
+       "data_dir" => format!("{:?}", other_client_config.data_dir()),
+       "spec_constants" => &spec_constants,
+       "db_type" => &other_client_config.db_type,
+   );
+
    let result = match (db_type.as_str(), spec_constants.as_str()) {
        ("disk", "minimal") => run::<ClientType<DiskStore, MinimalEthSpec>>(
            &db_path,
@@ -80,17 +89,6 @@ pub fn run_beacon_node(
        }
    };

-   if result.is_ok() {
-       info!(
-           log,
-           "Started beacon node";
-           "p2p_listen_addresses" => format!("{:?}", &other_client_config.network.listen_addresses()),
-           "data_dir" => format!("{:?}", other_client_config.data_dir()),
-           "spec_constants" => &spec_constants,
-           "db_type" => &other_client_config.db_type,
-       );
-   }
-
    result
}


@@ -15,15 +15,15 @@ impl<'a, T: EthSpec, U: Store> StateRootsIterator<'a, T, U> {
        Self {
            store,
            beacon_state: Cow::Borrowed(beacon_state),
-           slot: start_slot,
+           slot: start_slot + 1,
        }
    }

    pub fn owned(store: Arc<U>, beacon_state: BeaconState<T>, start_slot: Slot) -> Self {
        Self {
-           slot: start_slot,
-           beacon_state: Cow::Owned(beacon_state),
            store,
+           beacon_state: Cow::Owned(beacon_state),
+           slot: start_slot + 1,
        }
    }
}
@@ -90,13 +90,19 @@ impl<'a, T: EthSpec, U: Store> Iterator for BlockIterator<'a, T, U> {
    }
}

-/// Iterates backwards through block roots.
+/// Iterates backwards through block roots. If any specified slot is unable to be retrieved, the
+/// iterator returns `None` indefinitely.
 ///
-/// Uses the `latest_block_roots` field of `BeaconState` to as the source of block roots and will
+/// Uses the `latest_block_roots` field of `BeaconState` as the source of block roots and will
 /// perform a lookup on the `Store` for a prior `BeaconState` if `latest_block_roots` has been
 /// exhausted.
 ///
 /// Returns `None` for roots prior to genesis or when there is an error reading from `Store`.
+///
+/// ## Notes
+///
+/// See [`BestBlockRootsIterator`](struct.BestBlockRootsIterator.html), which has different
+/// `start_slot` logic.
#[derive(Clone)]
pub struct BlockRootsIterator<'a, T: EthSpec, U> {
    store: Arc<U>,
@@ -108,18 +114,18 @@ impl<'a, T: EthSpec, U: Store> BlockRootsIterator<'a, T, U> {
    /// Create a new iterator over all block roots in the given `beacon_state` and prior states.
    pub fn new(store: Arc<U>, beacon_state: &'a BeaconState<T>, start_slot: Slot) -> Self {
        Self {
-           slot: start_slot,
-           beacon_state: Cow::Borrowed(beacon_state),
            store,
+           beacon_state: Cow::Borrowed(beacon_state),
+           slot: start_slot + 1,
        }
    }

    /// Create a new iterator over all block roots in the given `beacon_state` and prior states.
    pub fn owned(store: Arc<U>, beacon_state: BeaconState<T>, start_slot: Slot) -> Self {
        Self {
-           slot: start_slot,
-           beacon_state: Cow::Owned(beacon_state),
            store,
+           beacon_state: Cow::Owned(beacon_state),
+           slot: start_slot + 1,
        }
    }
}
@@ -156,6 +162,104 @@ impl<'a, T: EthSpec, U: Store> Iterator for BlockRootsIterator<'a, T, U> {
    }
}
/// Iterates backwards through block roots, clamping `start_slot` to the highest possible value
/// `<= beacon_state.slot`.
///
/// The distinction between `BestBlockRootsIterator` and `BlockRootsIterator` is:
///
/// - `BestBlockRootsIterator` uses a best-effort slot. When `start_slot` is greater than the
///   latest available block root on `beacon_state`, it returns `Some(root, slot)` where `slot` is
///   the slot of the latest available block root.
/// - `BlockRootsIterator` is strict about `start_slot`. When `start_slot` is greater than the
///   latest available block root on `beacon_state`, it returns `None`.
///
/// Uses the `latest_block_roots` field of `BeaconState` as the source of block roots and will
/// perform a lookup on the `Store` for a prior `BeaconState` if `latest_block_roots` has been
/// exhausted.
///
/// Returns `None` for roots prior to genesis or when there is an error reading from `Store`.
#[derive(Clone)]
pub struct BestBlockRootsIterator<'a, T: EthSpec, U> {
store: Arc<U>,
beacon_state: Cow<'a, BeaconState<T>>,
slot: Slot,
}
impl<'a, T: EthSpec, U: Store> BestBlockRootsIterator<'a, T, U> {
/// Create a new iterator over all block roots in the given `beacon_state` and prior states.
pub fn new(store: Arc<U>, beacon_state: &'a BeaconState<T>, start_slot: Slot) -> Self {
let mut slot = start_slot;
if slot >= beacon_state.slot {
// Slot may be too high.
slot = beacon_state.slot;
if beacon_state.get_block_root(slot).is_err() {
slot -= 1;
}
}
Self {
store,
beacon_state: Cow::Borrowed(beacon_state),
slot: slot + 1,
}
}
/// Create a new iterator over all block roots in the given `beacon_state` and prior states.
pub fn owned(store: Arc<U>, beacon_state: BeaconState<T>, start_slot: Slot) -> Self {
let mut slot = start_slot;
if slot >= beacon_state.slot {
// Slot may be too high.
slot = beacon_state.slot;
// TODO: Use a function other than `get_block_root` as this will always return `Err()`
// for slot = state.slot.
if beacon_state.get_block_root(slot).is_err() {
slot -= 1;
}
}
Self {
store,
beacon_state: Cow::Owned(beacon_state),
slot: slot + 1,
}
}
}
impl<'a, T: EthSpec, U: Store> Iterator for BestBlockRootsIterator<'a, T, U> {
type Item = (Hash256, Slot);
fn next(&mut self) -> Option<Self::Item> {
if self.slot == 0 {
// End of Iterator
return None;
}
self.slot -= 1;
match self.beacon_state.get_block_root(self.slot) {
Ok(root) => Some((*root, self.slot)),
Err(BeaconStateError::SlotOutOfBounds) => {
// Read a `BeaconState` from the store that has access to prior historical root.
let beacon_state: BeaconState<T> = {
// Load the earliest state from disk.
let new_state_root = self.beacon_state.get_oldest_state_root().ok()?;
self.store.get(&new_state_root).ok()?
}?;
self.beacon_state = Cow::Owned(beacon_state);
let root = self.beacon_state.get_block_root(self.slot).ok()?;
Some((*root, self.slot))
}
_ => None,
}
}
}
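To make the `start_slot` distinction concrete, here is an illustrative sketch (not part of this diff), assuming `store` and `state` are populated as in the `best_block_root_iter` test below:

```rust
// Hypothetical setup: `store: Arc<MemoryStore>`, `state: BeaconState<MainnetEthSpec>`.
// Strict: `state.slot` itself has no block root yet, so iteration yields nothing.
let strict = BlockRootsIterator::new(store.clone(), &state, state.slot);

// Best-effort: the start slot is clamped down to the latest available block
// root, then iteration proceeds backwards towards genesis as usual.
let best = BestBlockRootsIterator::new(store.clone(), &state, state.slot);
```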
#[cfg(test)]
mod test {
    use super::*;
@@ -206,7 +310,50 @@ mod test {
        let mut collected: Vec<(Hash256, Slot)> = iter.collect();
        collected.reverse();

-       let expected_len = 2 * MainnetEthSpec::slots_per_historical_root() - 1;
+       let expected_len = 2 * MainnetEthSpec::slots_per_historical_root();

        assert_eq!(collected.len(), expected_len);

        for i in 0..expected_len {
            assert_eq!(collected[i].0, Hash256::from(i as u64));
        }
    }

+   #[test]
+   fn best_block_root_iter() {
+       let store = Arc::new(MemoryStore::open());
+       let slots_per_historical_root = MainnetEthSpec::slots_per_historical_root();
+
+       let mut state_a: BeaconState<MainnetEthSpec> = get_state();
+       let mut state_b: BeaconState<MainnetEthSpec> = get_state();
+
+       state_a.slot = Slot::from(slots_per_historical_root);
+       state_b.slot = Slot::from(slots_per_historical_root * 2);
+
+       let mut hashes = (0..).into_iter().map(|i| Hash256::from(i));
+
+       for root in &mut state_a.latest_block_roots[..] {
+           *root = hashes.next().unwrap()
+       }
+       for root in &mut state_b.latest_block_roots[..] {
+           *root = hashes.next().unwrap()
+       }
+
+       let state_a_root = hashes.next().unwrap();
+       state_b.latest_state_roots[0] = state_a_root;
+       store.put(&state_a_root, &state_a).unwrap();
+
+       let iter = BestBlockRootsIterator::new(store.clone(), &state_b, state_b.slot);
+
+       assert!(
+           iter.clone().find(|(_root, slot)| *slot == 0).is_some(),
+           "iter should contain zero slot"
+       );
+
+       let mut collected: Vec<(Hash256, Slot)> = iter.collect();
+       collected.reverse();
+
+       let expected_len = 2 * MainnetEthSpec::slots_per_historical_root();
+
+       assert_eq!(collected.len(), expected_len);
@@ -255,7 +402,7 @@ mod test {
        let mut collected: Vec<(Hash256, Slot)> = iter.collect();
        collected.reverse();

-       let expected_len = MainnetEthSpec::slots_per_historical_root() * 2 - 1;
+       let expected_len = MainnetEthSpec::slots_per_historical_root() * 2;

        assert_eq!(collected.len(), expected_len, "collection length incorrect");

docs/installation.md Normal file

@@ -0,0 +1,40 @@
# Development Environment Setup
A few basic steps are needed to get set up (skip to #5 if you already have Rust
installed):
1. Install [rustup](https://rustup.rs/). It's a toolchain manager for Rust (Linux | macOS | Windows). For installation, download the script with `$ curl -f https://sh.rustup.rs > rustup.sh`, review its content (e.g. `$ less ./rustup.sh`) and run the script `$ ./rustup.sh` (you may need to change the permissions to allow execution, i.e. `$ chmod +x rustup.sh`)
2. (Linux & MacOS) To configure your current shell run: `$ source $HOME/.cargo/env`
3. Use the command `rustup show` to get information about the Rust installation. You should see that the
active toolchain is the stable version.
4. Run `rustc --version` to check the installation and version of rust.
   - Updates can be performed using `rustup update`.
5. Install build dependencies (Arch packages are listed here, your distribution will likely be similar):
- `clang`: required by RocksDB.
- `protobuf`: required for protobuf serialization (gRPC).
   - `cmake`: required for building protobuf.
- `git-lfs`: The Git extension for [Large File Support](https://git-lfs.github.com/) (required for EF tests submodule).
6. If you haven't already, clone the repository with submodules: `git clone --recursive https://github.com/sigp/lighthouse`.
Alternatively, run `git submodule init` in a repository which was cloned without submodules.
7. Change directory to the root of the repository.
8. Run the test suite with `cargo test --all --release`. The first run takes a while to build,
   compile and pass all test cases, so feel free to grab a coffee in the meantime. If there is no
   error then everything is working properly and it's time to get your hands dirty.
   If there is an error, please raise an [issue](https://github.com/sigp/lighthouse/issues) and we
   will help you.
9. As an alternative to, or in addition to, the above step you may also run the benchmarks with
   `cargo bench --all`.
## Notes:
Lighthouse targets Rust `stable` but _should_ run on `nightly`.
### Note for Windows users:
Perl may also be required to build lighthouse. You can install [Strawberry Perl](http://strawberryperl.com/),
or alternatively use a choco install command `choco install strawberryperl`.
Additionally, the dependency `protoc-grpcio v0.3.1` is reported to have issues compiling in Windows. You can specify
a known working version by editing the version in the "build-dependencies" section of protos/Cargo.toml to
`protoc-grpcio = "<=0.3.0"`.


@@ -8,7 +8,7 @@ use parking_lot::RwLock;
 use std::collections::HashMap;
 use std::marker::PhantomData;
 use std::sync::Arc;
-use store::{iter::BlockRootsIterator, Error as StoreError, Store};
+use store::{iter::BestBlockRootsIterator, Error as StoreError, Store};
 use types::{BeaconBlock, BeaconState, EthSpec, Hash256, Slot};

 type Result<T> = std::result::Result<T, Error>;
@@ -530,14 +530,14 @@ where
        Ok(a_root)
    }

-   fn iter_ancestors(&self, child: Hash256) -> Result<BlockRootsIterator<E, T>> {
+   fn iter_ancestors(&self, child: Hash256) -> Result<BestBlockRootsIterator<E, T>> {
        let block = self.get_block(child)?;
        let state = self.get_state(block.state_root)?;

-       Ok(BlockRootsIterator::owned(
+       Ok(BestBlockRootsIterator::owned(
            self.store.clone(),
            state,
-           block.slot,
+           block.slot - 1,
        ))
    }


@@ -32,7 +32,6 @@ swap_or_not_shuffle = { path = "../utils/swap_or_not_shuffle" }
 test_random_derive = { path = "../utils/test_random_derive" }
 tree_hash = { path = "../utils/tree_hash" }
 tree_hash_derive = { path = "../utils/tree_hash_derive" }
-libp2p = { git = "https://github.com/SigP/rust-libp2p", rev = "b3c32d9a821ae6cc89079499cc6e8a6bab0bffc3" }

 [dev-dependencies]
 env_logger = "0.6.0"


@@ -104,11 +104,7 @@ pub struct ChainSpec {
    domain_voluntary_exit: u32,
    domain_transfer: u32,

-   /*
-    * Network specific parameters
-    *
-    */
-   pub boot_nodes: Vec<Multiaddr>,
+   pub boot_nodes: Vec<String>,
    pub chain_id: u8,
}
@@ -216,7 +212,7 @@ impl ChainSpec {
            domain_transfer: 5,

            /*
-            * Boot nodes
+            * Network specific
             */
            boot_nodes: vec![],
            chain_id: 1, // mainnet chain id
@@ -231,12 +227,8 @@ impl ChainSpec {
    pub fn minimal() -> Self {
        let genesis_slot = Slot::new(0);

-       // Note: these bootnodes are placeholders.
-       //
-       // Should be updated once static bootnodes exist.
-       let boot_nodes = vec!["/ip4/127.0.0.1/tcp/9000"
-           .parse()
-           .expect("correct multiaddr")];
+       // Note: bootnodes to be updated when static nodes exist.
+       let boot_nodes = vec![];

        Self {
            target_committee_size: 4,


@@ -82,6 +82,3 @@ pub type ProposerMap = HashMap<u64, usize>;
 pub use bls::{AggregatePublicKey, AggregateSignature, Keypair, PublicKey, SecretKey, Signature};
 pub use fixed_len_vec::{typenum, typenum::Unsigned, FixedLenVec};
-pub use libp2p::floodsub::{Topic, TopicBuilder, TopicHash};
-pub use libp2p::multiaddr;
-pub use libp2p::Multiaddr;


@@ -1,6 +1,5 @@
 use clap::ArgMatches;
 use serde_derive::{Deserialize, Serialize};
-use std::fs;
 use std::fs::File;
 use std::io::prelude::*;
 use std::path::PathBuf;
@@ -105,15 +104,3 @@ where
        Ok(None)
    }
 }
-
-pub fn get_data_dir(args: &ArgMatches, default_data_dir: PathBuf) -> Result<PathBuf, &'static str> {
-    if let Some(data_dir) = args.value_of("data_dir") {
-        Ok(PathBuf::from(data_dir))
-    } else {
-        let path = dirs::home_dir()
-            .ok_or_else(|| "Unable to locate home directory")?
-            .join(&default_data_dir);
-        fs::create_dir_all(&path).map_err(|_| "Unable to create data_dir")?;
-        Ok(path)
-    }
-}


@@ -4,6 +4,7 @@ version = "0.1.0"
 authors = ["Paul Hauner <paul@sigmaprime.io>"]
 edition = "2018"
 description = "SimpleSerialize (SSZ) as used in Ethereum 2.0"
+license = "Apache-2.0"

 [lib]
 name = "ssz"
@@ -14,12 +15,10 @@ harness = false
 [dev-dependencies]
 criterion = "0.2"
-eth2_ssz_derive = { path = "../ssz_derive" }
+eth2_ssz_derive = "0.1.0"

 [dependencies]
 bytes = "0.4.9"
 ethereum-types = "0.5"
-hashing = { path = "../hashing" }
-int_to_bytes = { path = "../int_to_bytes" }
 hex = "0.3"
 yaml-rust = "0.4"


@@ -47,9 +47,9 @@ pub use encode::{Encode, SszEncoder};
 pub const BYTES_PER_LENGTH_OFFSET: usize = 4;
 /// The maximum value that can be represented using `BYTES_PER_LENGTH_OFFSET`.
 #[cfg(target_pointer_width = "32")]
-pub const MAX_LENGTH_VALUE: usize = (std::u32::MAX >> 8 * (4 - BYTES_PER_LENGTH_OFFSET)) as usize;
+pub const MAX_LENGTH_VALUE: usize = (std::u32::MAX >> (8 * (4 - BYTES_PER_LENGTH_OFFSET))) as usize;
 #[cfg(target_pointer_width = "64")]
-pub const MAX_LENGTH_VALUE: usize = (std::u64::MAX >> 8 * (8 - BYTES_PER_LENGTH_OFFSET)) as usize;
+pub const MAX_LENGTH_VALUE: usize = (std::u64::MAX >> (8 * (8 - BYTES_PER_LENGTH_OFFSET))) as usize;

 /// Convenience function to SSZ encode an object supporting ssz::Encode.
 ///
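The added parentheses only make the existing precedence explicit (in Rust, `*` binds more tightly than `>>`), so the constant's value is unchanged. A hedged sanity check of the 64-bit arithmetic (the test name is illustrative, not part of this diff):

```rust
#[test]
#[cfg(target_pointer_width = "64")]
fn max_length_value_fits_four_offset_bytes() {
    // u64::MAX >> (8 * (8 - 4)) == u64::MAX >> 32 == u32::MAX.
    assert_eq!(MAX_LENGTH_VALUE, std::u32::MAX as usize);
}
```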


@@ -4,6 +4,7 @@ version = "0.1.0"
 authors = ["Paul Hauner <paul@sigmaprime.io>"]
 edition = "2018"
 description = "Procedural derive macros to accompany the eth2_ssz crate."
+license = "Apache-2.0"

 [lib]
 name = "ssz_derive"
@@ -12,4 +13,3 @@ proc-macro = true
 [dependencies]
 syn = "0.15"
 quote = "0.6"
-eth2_ssz = { path = "../ssz" }
eth2_ssz = { path = "../ssz" }


@@ -1,4 +1,7 @@
 #![recursion_limit = "128"]

+//! Provides procedural derive macros for the `Encode` and `Decode` traits of the `eth2_ssz` crate.
+//!
+//! Supports field attributes, see each derive macro for more information.

 extern crate proc_macro;
@@ -61,6 +64,10 @@ fn should_skip_serializing(field: &syn::Field) -> bool {
 /// Implements `ssz::Encode` for some `struct`.
 ///
 /// Fields are encoded in the order they are defined.
+///
+/// ## Field attributes
+///
+/// - `#[ssz(skip_serializing)]`: the field will not be serialized.
 #[proc_macro_derive(Encode, attributes(ssz))]
 pub fn ssz_encode_derive(input: TokenStream) -> TokenStream {
     let item = parse_macro_input!(input as DeriveInput);
@@ -132,6 +139,12 @@ fn should_skip_deserializing(field: &syn::Field) -> bool {
 /// Implements `ssz::Decode` for some `struct`.
 ///
 /// Fields are decoded in the order they are defined.
+///
+/// ## Field attributes
+///
+/// - `#[ssz(skip_deserializing)]`: during de-serialization the field will be instantiated from a
+///   `Default` implementation. The decoder will assume that the field was not serialized at all
+///   (e.g., if it has been serialized, an error will be raised instead of `Default` overriding it).
 #[proc_macro_derive(Decode)]
 pub fn ssz_decode_derive(input: TokenStream) -> TokenStream {
     let item = parse_macro_input!(input as DeriveInput);
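As an illustration of the `skip_serializing` attribute documented above (a hedged sketch, not part of this diff):

```rust
use ssz::Encode;
use ssz_derive::Encode;

#[derive(Encode)]
pub struct Example {
    a: u16,
    #[ssz(skip_serializing)]
    b: u16, // omitted from the SSZ encoding
}
```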


@@ -1,22 +0,0 @@
-use ssz::Encode;
-use ssz_derive::Encode;
-
-#[derive(Debug, PartialEq, Encode)]
-pub struct Foo {
-    a: u16,
-    b: Vec<u8>,
-    c: u16,
-}
-
-#[test]
-fn encode() {
-    let foo = Foo {
-        a: 42,
-        b: vec![0, 1, 2, 3],
-        c: 11,
-    };
-
-    let bytes = vec![42, 0, 8, 0, 0, 0, 11, 0, 0, 1, 2, 3];
-
-    assert_eq!(foo.as_ssz_bytes(), bytes);
-}


@@ -0,0 +1,17 @@
[package]
name = "ssz_types"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
cached_tree_hash = { path = "../cached_tree_hash" }
tree_hash = { path = "../tree_hash" }
serde = "1.0"
serde_derive = "1.0"
serde_hex = { path = "../serde_hex" }
eth2_ssz = { path = "../ssz" }
typenum = "1.10"
[dev-dependencies]
serde_yaml = "0.8"

File diff suppressed because it is too large


@@ -0,0 +1,335 @@
use crate::Error;
use serde_derive::{Deserialize, Serialize};
use std::marker::PhantomData;
use std::ops::{Deref, Index, IndexMut};
use std::slice::SliceIndex;
use typenum::Unsigned;
pub use typenum;
/// Emulates a SSZ `Vector` (distinct from a Rust `Vec`).
///
/// An ordered, heap-allocated, fixed-length, homogeneous collection of `T`, with `N` values.
///
/// This struct is backed by a Rust `Vec` but constrained such that it must be instantiated with a
/// fixed number of elements and you may not add or remove elements, only modify.
///
/// The length of this struct is fixed at the type-level using
/// [typenum](https://crates.io/crates/typenum).
///
/// ## Note
///
/// Whilst this library makes it possible, SSZ declares that a `FixedVector` with a length of `0`
/// is illegal.
///
/// ## Example
///
/// ```
/// use ssz_types::{FixedVector, typenum};
///
/// let base: Vec<u64> = vec![1, 2, 3, 4];
///
/// // Create a `FixedVector` from a `Vec` that has the expected length.
/// let exact: FixedVector<_, typenum::U4> = FixedVector::from(base.clone());
/// assert_eq!(&exact[..], &[1, 2, 3, 4]);
///
/// // Create a `FixedVector` from a `Vec` that is too long and the `Vec` is truncated.
/// let short: FixedVector<_, typenum::U3> = FixedVector::from(base.clone());
/// assert_eq!(&short[..], &[1, 2, 3]);
///
/// // Create a `FixedVector` from a `Vec` that is too short and the missing values are created
/// // using `std::default::Default`.
/// let long: FixedVector<_, typenum::U5> = FixedVector::from(base);
/// assert_eq!(&long[..], &[1, 2, 3, 4, 0]);
/// ```
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
#[serde(transparent)]
pub struct FixedVector<T, N> {
vec: Vec<T>,
_phantom: PhantomData<N>,
}
impl<T, N: Unsigned> FixedVector<T, N> {
/// Returns `Ok` if the given `vec` equals the fixed length of `Self`. Otherwise returns
/// `Err`.
pub fn new(vec: Vec<T>) -> Result<Self, Error> {
if vec.len() == Self::capacity() {
Ok(Self {
vec,
_phantom: PhantomData,
})
} else {
Err(Error::OutOfBounds {
i: vec.len(),
len: Self::capacity(),
})
}
}
    /// Identical to `Self::capacity()`; returns the type-level constant length.
///
/// Exists for compatibility with `Vec`.
pub fn len(&self) -> usize {
self.vec.len()
}
/// True if the type-level constant length of `self` is zero.
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Returns the type-level constant length.
pub fn capacity() -> usize {
N::to_usize()
}
}
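The strict `new` constructor and the lenient `From<Vec<T>>` conversion (defined below) treat a length mismatch differently; a short sketch mirroring the tests further down:

```rust
use ssz_types::{typenum::U4, FixedVector};

// `new` rejects any length other than exactly `N`...
assert!(FixedVector::<u64, U4>::new(vec![0; 3]).is_err());

// ...while `From<Vec<T>>` pads (or truncates) to exactly `N` elements.
let padded: FixedVector<u64, U4> = vec![0; 3].into();
assert_eq!(padded.len(), 4);
```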
impl<T: Default, N: Unsigned> From<Vec<T>> for FixedVector<T, N> {
fn from(mut vec: Vec<T>) -> Self {
vec.resize_with(Self::capacity(), Default::default);
Self {
vec,
_phantom: PhantomData,
}
}
}
impl<T, N: Unsigned> Into<Vec<T>> for FixedVector<T, N> {
fn into(self) -> Vec<T> {
self.vec
}
}
impl<T, N: Unsigned> Default for FixedVector<T, N> {
fn default() -> Self {
Self {
vec: Vec::default(),
_phantom: PhantomData,
}
}
}
impl<T, N: Unsigned, I: SliceIndex<[T]>> Index<I> for FixedVector<T, N> {
type Output = I::Output;
#[inline]
fn index(&self, index: I) -> &Self::Output {
Index::index(&self.vec, index)
}
}
impl<T, N: Unsigned, I: SliceIndex<[T]>> IndexMut<I> for FixedVector<T, N> {
#[inline]
fn index_mut(&mut self, index: I) -> &mut Self::Output {
IndexMut::index_mut(&mut self.vec, index)
}
}
impl<T, N: Unsigned> Deref for FixedVector<T, N> {
type Target = [T];
fn deref(&self) -> &[T] {
&self.vec[..]
}
}
#[cfg(test)]
mod test {
use super::*;
use typenum::*;
#[test]
fn new() {
let vec = vec![42; 5];
let fixed: Result<FixedVector<u64, U4>, _> = FixedVector::new(vec.clone());
assert!(fixed.is_err());
let vec = vec![42; 3];
let fixed: Result<FixedVector<u64, U4>, _> = FixedVector::new(vec.clone());
assert!(fixed.is_err());
let vec = vec![42; 4];
let fixed: Result<FixedVector<u64, U4>, _> = FixedVector::new(vec.clone());
assert!(fixed.is_ok());
}
#[test]
fn indexing() {
let vec = vec![1, 2];
let mut fixed: FixedVector<u64, U8192> = vec.clone().into();
assert_eq!(fixed[0], 1);
assert_eq!(&fixed[0..1], &vec[0..1]);
assert_eq!((&fixed[..]).len(), 8192);
fixed[1] = 3;
assert_eq!(fixed[1], 3);
}
#[test]
fn length() {
let vec = vec![42; 5];
let fixed: FixedVector<u64, U4> = FixedVector::from(vec.clone());
assert_eq!(&fixed[..], &vec[0..4]);
let vec = vec![42; 3];
let fixed: FixedVector<u64, U4> = FixedVector::from(vec.clone());
assert_eq!(&fixed[0..3], &vec[..]);
assert_eq!(&fixed[..], &vec![42, 42, 42, 0][..]);
let vec = vec![];
let fixed: FixedVector<u64, U4> = FixedVector::from(vec.clone());
assert_eq!(&fixed[..], &vec![0, 0, 0, 0][..]);
}
#[test]
fn deref() {
let vec = vec![0, 2, 4, 6];
let fixed: FixedVector<u64, U4> = FixedVector::from(vec);
assert_eq!(fixed.get(0), Some(&0));
assert_eq!(fixed.get(3), Some(&6));
assert_eq!(fixed.get(4), None);
}
}
impl<T, N: Unsigned> tree_hash::TreeHash for FixedVector<T, N>
where
T: tree_hash::TreeHash,
{
fn tree_hash_type() -> tree_hash::TreeHashType {
tree_hash::TreeHashType::Vector
}
fn tree_hash_packed_encoding(&self) -> Vec<u8> {
unreachable!("Vector should never be packed.")
}
fn tree_hash_packing_factor() -> usize {
unreachable!("Vector should never be packed.")
}
fn tree_hash_root(&self) -> Vec<u8> {
tree_hash::impls::vec_tree_hash_root(&self.vec)
}
}
impl<T, N: Unsigned> cached_tree_hash::CachedTreeHash for FixedVector<T, N>
where
T: cached_tree_hash::CachedTreeHash + tree_hash::TreeHash,
{
fn new_tree_hash_cache(
&self,
depth: usize,
) -> Result<cached_tree_hash::TreeHashCache, cached_tree_hash::Error> {
let (cache, _overlay) = cached_tree_hash::vec::new_tree_hash_cache(&self.vec, depth)?;
Ok(cache)
}
fn tree_hash_cache_schema(&self, depth: usize) -> cached_tree_hash::BTreeSchema {
cached_tree_hash::vec::produce_schema(&self.vec, depth)
}
fn update_tree_hash_cache(
&self,
cache: &mut cached_tree_hash::TreeHashCache,
) -> Result<(), cached_tree_hash::Error> {
cached_tree_hash::vec::update_tree_hash_cache(&self.vec, cache)?;
Ok(())
}
}
impl<T, N: Unsigned> ssz::Encode for FixedVector<T, N>
where
T: ssz::Encode,
{
fn is_ssz_fixed_len() -> bool {
true
}
fn ssz_fixed_len() -> usize {
if <Self as ssz::Encode>::is_ssz_fixed_len() {
T::ssz_fixed_len() * N::to_usize()
} else {
ssz::BYTES_PER_LENGTH_OFFSET
}
}
fn ssz_append(&self, buf: &mut Vec<u8>) {
if T::is_ssz_fixed_len() {
buf.reserve(T::ssz_fixed_len() * self.len());
for item in &self.vec {
item.ssz_append(buf);
}
} else {
let mut encoder = ssz::SszEncoder::list(buf, self.len() * ssz::BYTES_PER_LENGTH_OFFSET);
for item in &self.vec {
encoder.append(item);
}
encoder.finalize();
}
}
}
impl<T, N: Unsigned> ssz::Decode for FixedVector<T, N>
where
T: ssz::Decode + Default,
{
fn is_ssz_fixed_len() -> bool {
T::is_ssz_fixed_len()
}
fn ssz_fixed_len() -> usize {
if <Self as ssz::Decode>::is_ssz_fixed_len() {
T::ssz_fixed_len() * N::to_usize()
} else {
ssz::BYTES_PER_LENGTH_OFFSET
}
}
fn from_ssz_bytes(bytes: &[u8]) -> Result<Self, ssz::DecodeError> {
if bytes.is_empty() {
Ok(FixedVector::from(vec![]))
} else if T::is_ssz_fixed_len() {
bytes
.chunks(T::ssz_fixed_len())
.map(|chunk| T::from_ssz_bytes(chunk))
.collect::<Result<Vec<T>, _>>()
.and_then(|vec| Ok(vec.into()))
} else {
ssz::decode_list_of_variable_length_items(bytes).and_then(|vec| Ok(vec.into()))
}
}
}
#[cfg(test)]
mod ssz_tests {
use super::*;
use ssz::*;
use typenum::*;
#[test]
fn encode() {
let vec: FixedVector<u16, U2> = vec![0; 2].into();
assert_eq!(vec.as_ssz_bytes(), vec![0, 0, 0, 0]);
assert_eq!(<FixedVector<u16, U2> as Encode>::ssz_fixed_len(), 4);
}
fn round_trip<T: Encode + Decode + std::fmt::Debug + PartialEq>(item: T) {
let encoded = &item.as_ssz_bytes();
assert_eq!(T::from_ssz_bytes(&encoded), Ok(item));
}
#[test]
fn u16_len_8() {
round_trip::<FixedVector<u16, U8>>(vec![42; 8].into());
round_trip::<FixedVector<u16, U8>>(vec![0; 8].into());
}
}


@@ -0,0 +1,66 @@
//! Provides types with unique properties required for SSZ serialization and Merklization:
//!
//! - `FixedVector`: A heap-allocated list with a size that is fixed at compile time.
//! - `VariableList`: A heap-allocated list that cannot grow past a type-level maximum length.
//! - `BitList`: A heap-allocated bitfield with a type-level _maximum_ length.
//! - `BitVector`: A heap-allocated bitfield with a type-level _fixed_ length.
//!
//! These structs are required as SSZ serialization and Merklization rely upon type-level lengths
//! for padding and verification.
//!
//! ## Example
//! ```
//! use ssz_types::*;
//!
//! pub struct Example {
//! bit_vector: BitVector<typenum::U8>,
//! bit_list: BitList<typenum::U8>,
//! variable_list: VariableList<u64, typenum::U8>,
//! fixed_vector: FixedVector<u64, typenum::U8>,
//! }
//!
//! let mut example = Example {
//! bit_vector: Bitfield::new(),
//! bit_list: Bitfield::with_capacity(4).unwrap(),
//! variable_list: <_>::from(vec![0, 1]),
//! fixed_vector: <_>::from(vec![2, 3]),
//! };
//!
//! assert_eq!(example.bit_vector.len(), 8);
//! assert_eq!(example.bit_list.len(), 4);
//! assert_eq!(&example.variable_list[..], &[0, 1]);
//! assert_eq!(&example.fixed_vector[..], &[2, 3, 0, 0, 0, 0, 0, 0]);
//!
//! ```
#[macro_use]
mod bitfield;
mod fixed_vector;
mod variable_list;
pub use bitfield::{BitList, BitVector, Bitfield};
pub use fixed_vector::FixedVector;
pub use typenum;
pub use variable_list::VariableList;
pub mod length {
pub use crate::bitfield::{Fixed, Variable};
}
/// Returned when an item encounters an error.
#[derive(PartialEq, Debug)]
pub enum Error {
OutOfBounds {
i: usize,
len: usize,
},
    /// A `BitList` does not have a set bit, therefore its length is unknowable.
MissingLengthInformation,
/// A `BitList` has excess bits set to true.
ExcessBits,
/// A `BitList` has an invalid number of bytes for a given bit length.
InvalidByteCount {
given: usize,
expected: usize,
},
}


@@ -0,0 +1,320 @@
use crate::Error;
use serde_derive::{Deserialize, Serialize};
use std::marker::PhantomData;
use std::ops::{Deref, Index, IndexMut};
use std::slice::SliceIndex;
use typenum::Unsigned;
pub use typenum;
/// Emulates a SSZ `List`.
///
/// An ordered, heap-allocated, variable-length, homogeneous collection of `T`, with no more than
/// `N` values.
///
/// This struct is backed by a Rust `Vec` but constrained such that it can never hold more than `N`
/// elements; values may be appended with `push` until that maximum is reached.
///
/// The maximum length of this struct is fixed at the type-level using
/// [typenum](https://crates.io/crates/typenum).
///
/// ## Example
///
/// ```
/// use ssz_types::{VariableList, typenum};
///
/// let base: Vec<u64> = vec![1, 2, 3, 4];
///
/// // Create a `VariableList` from a `Vec` that has the expected length.
/// let exact: VariableList<_, typenum::U4> = VariableList::from(base.clone());
/// assert_eq!(&exact[..], &[1, 2, 3, 4]);
///
/// // Create a `VariableList` from a `Vec` that is too long and the `Vec` is truncated.
/// let short: VariableList<_, typenum::U3> = VariableList::from(base.clone());
/// assert_eq!(&short[..], &[1, 2, 3]);
///
/// // Create a `VariableList` from a `Vec` that is shorter than the maximum.
/// let mut long: VariableList<_, typenum::U5> = VariableList::from(base);
/// assert_eq!(&long[..], &[1, 2, 3, 4]);
///
/// // Push a value if it does not exceed the maximum.
/// long.push(5).unwrap();
/// assert_eq!(&long[..], &[1, 2, 3, 4, 5]);
///
/// // Pushing a value that _does_ exceed the maximum fails.
/// assert!(long.push(6).is_err());
/// ```
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
#[serde(transparent)]
pub struct VariableList<T, N> {
vec: Vec<T>,
_phantom: PhantomData<N>,
}
impl<T, N: Unsigned> VariableList<T, N> {
    /// Returns `Ok` if the given `vec`'s length does not exceed the maximum length of `Self`.
    /// Otherwise returns `Err`.
pub fn new(vec: Vec<T>) -> Result<Self, Error> {
if vec.len() <= N::to_usize() {
Ok(Self {
vec,
_phantom: PhantomData,
})
} else {
Err(Error::OutOfBounds {
i: vec.len(),
len: Self::max_len(),
})
}
}
/// Returns the number of values presently in `self`.
pub fn len(&self) -> usize {
self.vec.len()
}
/// True if `self` does not contain any values.
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Returns the type-level maximum length.
pub fn max_len() -> usize {
N::to_usize()
}
/// Appends `value` to the back of `self`.
///
    /// Returns `Err(Error::OutOfBounds)` when appending `value` would exceed the maximum length.
pub fn push(&mut self, value: T) -> Result<(), Error> {
if self.vec.len() < Self::max_len() {
self.vec.push(value);
Ok(())
} else {
Err(Error::OutOfBounds {
i: self.vec.len() + 1,
len: Self::max_len(),
})
}
}
}
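The `new`/`push` contract in practice; a minimal sketch consistent with the doc example above:

```rust
use ssz_types::{typenum::U4, VariableList};

let mut list = VariableList::<u64, U4>::new(vec![1, 2, 3]).unwrap();

list.push(4).unwrap();          // reaches the maximum length `N`
assert!(list.push(5).is_err()); // exceeding `N` is rejected
```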
impl<T: Default, N: Unsigned> From<Vec<T>> for VariableList<T, N> {
fn from(mut vec: Vec<T>) -> Self {
vec.truncate(N::to_usize());
Self {
vec,
_phantom: PhantomData,
}
}
}
impl<T, N: Unsigned> Into<Vec<T>> for VariableList<T, N> {
fn into(self) -> Vec<T> {
self.vec
}
}
impl<T, N: Unsigned> Default for VariableList<T, N> {
fn default() -> Self {
Self {
vec: Vec::default(),
_phantom: PhantomData,
}
}
}
impl<T, N: Unsigned, I: SliceIndex<[T]>> Index<I> for VariableList<T, N> {
type Output = I::Output;
#[inline]
fn index(&self, index: I) -> &Self::Output {
Index::index(&self.vec, index)
}
}
impl<T, N: Unsigned, I: SliceIndex<[T]>> IndexMut<I> for VariableList<T, N> {
#[inline]
fn index_mut(&mut self, index: I) -> &mut Self::Output {
IndexMut::index_mut(&mut self.vec, index)
}
}
impl<T, N: Unsigned> Deref for VariableList<T, N> {
type Target = [T];
fn deref(&self) -> &[T] {
&self.vec[..]
}
}
#[cfg(test)]
mod test {
use super::*;
use typenum::*;
#[test]
fn new() {
let vec = vec![42; 5];
let fixed: Result<VariableList<u64, U4>, _> = VariableList::new(vec.clone());
assert!(fixed.is_err());
let vec = vec![42; 3];
let fixed: Result<VariableList<u64, U4>, _> = VariableList::new(vec.clone());
assert!(fixed.is_ok());
let vec = vec![42; 4];
let fixed: Result<VariableList<u64, U4>, _> = VariableList::new(vec.clone());
assert!(fixed.is_ok());
}
#[test]
fn indexing() {
let vec = vec![1, 2];
let mut fixed: VariableList<u64, U8192> = vec.clone().into();
assert_eq!(fixed[0], 1);
assert_eq!(&fixed[0..1], &vec[0..1]);
assert_eq!((&fixed[..]).len(), 2);
fixed[1] = 3;
assert_eq!(fixed[1], 3);
}
#[test]
fn length() {
let vec = vec![42; 5];
let fixed: VariableList<u64, U4> = VariableList::from(vec.clone());
assert_eq!(&fixed[..], &vec[0..4]);
let vec = vec![42; 3];
let fixed: VariableList<u64, U4> = VariableList::from(vec.clone());
assert_eq!(&fixed[0..3], &vec[..]);
assert_eq!(&fixed[..], &vec![42, 42, 42][..]);
let vec = vec![];
let fixed: VariableList<u64, U4> = VariableList::from(vec.clone());
assert_eq!(&fixed[..], &vec![][..]);
}
#[test]
fn deref() {
let vec = vec![0, 2, 4, 6];
let fixed: VariableList<u64, U4> = VariableList::from(vec);
assert_eq!(fixed.get(0), Some(&0));
assert_eq!(fixed.get(3), Some(&6));
assert_eq!(fixed.get(4), None);
}
}
impl<T, N: Unsigned> tree_hash::TreeHash for VariableList<T, N>
where
T: tree_hash::TreeHash,
{
fn tree_hash_type() -> tree_hash::TreeHashType {
tree_hash::TreeHashType::Vector
}
fn tree_hash_packed_encoding(&self) -> Vec<u8> {
unreachable!("Vector should never be packed.")
}
fn tree_hash_packing_factor() -> usize {
unreachable!("Vector should never be packed.")
}
fn tree_hash_root(&self) -> Vec<u8> {
tree_hash::impls::vec_tree_hash_root(&self.vec)
}
}
impl<T, N: Unsigned> cached_tree_hash::CachedTreeHash for VariableList<T, N>
where
T: cached_tree_hash::CachedTreeHash + tree_hash::TreeHash,
{
fn new_tree_hash_cache(
&self,
depth: usize,
) -> Result<cached_tree_hash::TreeHashCache, cached_tree_hash::Error> {
let (cache, _overlay) = cached_tree_hash::vec::new_tree_hash_cache(&self.vec, depth)?;
Ok(cache)
}
fn tree_hash_cache_schema(&self, depth: usize) -> cached_tree_hash::BTreeSchema {
cached_tree_hash::vec::produce_schema(&self.vec, depth)
}
fn update_tree_hash_cache(
&self,
cache: &mut cached_tree_hash::TreeHashCache,
) -> Result<(), cached_tree_hash::Error> {
cached_tree_hash::vec::update_tree_hash_cache(&self.vec, cache)?;
Ok(())
}
}
impl<T, N: Unsigned> ssz::Encode for VariableList<T, N>
where
T: ssz::Encode,
{
fn is_ssz_fixed_len() -> bool {
<Vec<T>>::is_ssz_fixed_len()
}
fn ssz_fixed_len() -> usize {
<Vec<T>>::ssz_fixed_len()
}
fn ssz_append(&self, buf: &mut Vec<u8>) {
self.vec.ssz_append(buf)
}
}
impl<T, N: Unsigned> ssz::Decode for VariableList<T, N>
where
T: ssz::Decode + Default,
{
fn is_ssz_fixed_len() -> bool {
<Vec<T>>::is_ssz_fixed_len()
}
fn ssz_fixed_len() -> usize {
<Vec<T>>::ssz_fixed_len()
}
fn from_ssz_bytes(bytes: &[u8]) -> Result<Self, ssz::DecodeError> {
let vec = <Vec<T>>::from_ssz_bytes(bytes)?;
Self::new(vec).map_err(|e| ssz::DecodeError::BytesInvalid(format!("VariableList {:?}", e)))
}
}
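Because `from_ssz_bytes` funnels through the length-checked `Self::new`, decoding also enforces the type-level maximum; a hedged sketch:

```rust
use ssz::{Decode, Encode};
use ssz_types::{typenum::U2, VariableList};

// Three `u16` values encode fine as a plain `Vec`...
let bytes = vec![0u16, 1, 2].as_ssz_bytes();

// ...but decoding them into a list with `N = 2` is rejected.
assert!(VariableList::<u16, U2>::from_ssz_bytes(&bytes).is_err());
```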
#[cfg(test)]
mod tests {
use super::*;
use ssz::*;
use typenum::*;
#[test]
fn encode() {
let vec: VariableList<u16, U2> = vec![0; 2].into();
assert_eq!(vec.as_ssz_bytes(), vec![0, 0, 0, 0]);
assert_eq!(<VariableList<u16, U2> as Encode>::ssz_fixed_len(), 4);
}
fn round_trip<T: Encode + Decode + std::fmt::Debug + PartialEq>(item: T) {
let encoded = &item.as_ssz_bytes();
assert_eq!(T::from_ssz_bytes(&encoded), Ok(item));
}
#[test]
fn u16_len_8() {
round_trip::<VariableList<u16, U8>>(vec![42; 8].into());
round_trip::<VariableList<u16, U8>>(vec![0; 8].into());
}
}


@@ -5,9 +5,11 @@ authors = ["Paul Hauner <paul@paulhauner.com>"]
 edition = "2018"

 [dev-dependencies]
+rand = "0.7"
 tree_hash_derive = { path = "../tree_hash_derive" }

 [dependencies]
 ethereum-types = "0.5"
 hashing = { path = "../hashing" }
 int_to_bytes = { path = "../int_to_bytes" }
+lazy_static = "0.1"


@@ -1,5 +1,5 @@
 use super::*;
-use crate::merkleize::merkle_root;
+use crate::merkle_root;
 use ethereum_types::H256;
 use hashing::hash;
 use int_to_bytes::int_to_bytes32;


@@ -1,5 +1,17 @@
+#[macro_use]
+extern crate lazy_static;
+
 pub mod impls;
-pub mod merkleize;
+mod merkleize_padded;
+mod merkleize_standard;
+
+pub use merkleize_padded::merkleize_padded;
+pub use merkleize_standard::merkleize_standard;
+
+/// Alias to `merkleize_padded(&bytes, 0)`
+pub fn merkle_root(bytes: &[u8]) -> Vec<u8> {
+    merkleize_padded(&bytes, 0)
+}

 pub const BYTES_PER_CHUNK: usize = 32;
 pub const HASHSIZE: usize = 32;
@@ -44,7 +56,7 @@ macro_rules! tree_hash_ssz_encoding_as_vector {
            }

            fn tree_hash_root(&self) -> Vec<u8> {
-               tree_hash::merkleize::merkle_root(&ssz::ssz_encode(self))
+               tree_hash::merkle_root(&ssz::ssz_encode(self))
            }
        }
    };


@@ -0,0 +1,366 @@
use super::BYTES_PER_CHUNK;
use hashing::hash;
/// The size of the cache that stores padding nodes for a given height.
///
/// Currently, we panic if we encounter a tree with a height larger than `MAX_TREE_DEPTH`.
///
/// It is set to 48 as we expect it to be sufficiently high that we won't exceed it.
pub const MAX_TREE_DEPTH: usize = 48;
lazy_static! {
/// Cached zero hashes where `ZERO_HASHES[i]` is the hash of a Merkle tree with 2^i zero leaves.
static ref ZERO_HASHES: Vec<Vec<u8>> = {
let mut hashes = vec![vec![0; 32]; MAX_TREE_DEPTH + 1];
for i in 0..MAX_TREE_DEPTH {
hashes[i + 1] = hash_concat(&hashes[i], &hashes[i]);
}
hashes
};
}
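The cache invariant, written out as a hedged check (the test name is illustrative): each level is the hash of two copies of the level below.

```rust
#[test]
fn zero_hashes_are_built_level_by_level() {
    assert_eq!(
        get_zero_hash(1),
        &hash_concat(get_zero_hash(0), get_zero_hash(0))[..],
    );
}
```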
/// Merkleize `bytes` and return the root, optionally padding the tree out to `min_leaves` number of
/// leaves.
///
/// First all nodes are extracted from `bytes` and then a padding node is added until the number of
/// leaf chunks is greater than or equal to `min_leaves`. Callers may set `min_leaves` to `0` if
/// no additional chunks should be added to the given `bytes`.
///
/// If `bytes.len() <= BYTES_PER_CHUNK`, no hashing is done and `bytes` is returned, potentially
/// padded out to `BYTES_PER_CHUNK` length with `0`.
///
/// ## CPU Performance
///
/// A cache of `MAX_TREE_DEPTH` hashes is stored to avoid re-computing the hashes of padding nodes
/// (or their parents). Therefore, adding padding nodes only incurs one more hash per additional
/// height of the tree.
///
/// ## Memory Performance
///
/// This algorithm has two interesting memory usage properties:
///
/// 1. The maximum memory footprint is roughly `O(V / 2)` memory, where `V` is the number of leaf
///    chunks with values (i.e., leaves that are not padding). This means adding padding nodes to
/// the tree does not increase the memory footprint.
/// 2. At each height of the tree half of the memory is freed until only a single chunk is stored.
/// 3. The input `bytes` are not copied into another list before processing.
///
/// _Note: there are some minor memory overheads, including a handful of usizes and a list of
/// `MAX_TREE_DEPTH` hashes as `lazy_static` constants._
pub fn merkleize_padded(bytes: &[u8], min_leaves: usize) -> Vec<u8> {
// If the bytes are just one chunk or less, pad to one chunk and return without hashing.
if bytes.len() <= BYTES_PER_CHUNK && min_leaves <= 1 {
let mut o = bytes.to_vec();
o.resize(BYTES_PER_CHUNK, 0);
return o;
}
assert!(
bytes.len() > BYTES_PER_CHUNK || min_leaves > 1,
"Merkle hashing only needs to happen if there is more than one chunk"
);
// The number of leaves that can be made directly from `bytes`.
let leaves_with_values = (bytes.len() + (BYTES_PER_CHUNK - 1)) / BYTES_PER_CHUNK;
// The number of parents that have at least one non-padding leaf.
//
// Since there is more than one node in this tree (see prior assertion), there should always be
// one or more initial parent nodes.
let initial_parents_with_values = std::cmp::max(1, next_even_number(leaves_with_values) / 2);
// The number of leaves in the full tree (including padding nodes).
let num_leaves = std::cmp::max(leaves_with_values, min_leaves).next_power_of_two();
// The number of levels in the tree.
//
// A tree with a single node has `height == 1`.
let height = num_leaves.trailing_zeros() as usize + 1;
assert!(height >= 2, "The tree should have two or more heights");
// A buffer/scratch-space used for storing each round of hashes at each height.
//
// This buffer is kept as small as possible; it will shrink so it never stores a padding node.
let mut chunks = ChunkStore::with_capacity(initial_parents_with_values);
// Create a parent in the `chunks` buffer for every two chunks in `bytes`.
//
// I.e., do the first round of hashing, hashing from the `bytes` slice and filling the `chunks`
// struct.
for i in 0..initial_parents_with_values {
let start = i * BYTES_PER_CHUNK * 2;
// Hash two chunks, creating a parent chunk.
let hash = match bytes.get(start..start + BYTES_PER_CHUNK * 2) {
// All bytes are available, hash as usual.
Some(slice) => hash(slice),
// Unable to get all the bytes, get a small slice and pad it out.
None => {
let mut preimage = bytes
.get(start..)
.expect("`i` can only be larger than zero if there are bytes to read")
.to_vec();
preimage.resize(BYTES_PER_CHUNK * 2, 0);
hash(&preimage)
}
};
assert_eq!(
hash.len(),
BYTES_PER_CHUNK,
"Hashes should be exactly one chunk"
);
// Store the parent node.
chunks
.set(i, &hash)
.expect("Buffer should always have capacity for parent nodes")
}
// Iterate through all heights above the leaf nodes and either (a) hash two children or, (b)
// hash a left child and a right padding node.
//
    // Skip the 0'th height because the leaves have already been processed. Skip the highest height
    // in the tree as it is the root and does not require hashing.
//
// The padding nodes for each height are cached via `lazy static` to simulate non-adjacent
// padding nodes (i.e., avoid doing unnecessary hashing).
for height in 1..height - 1 {
let child_nodes = chunks.len();
let parent_nodes = next_even_number(child_nodes) / 2;
// For each pair of nodes stored in `chunks`:
//
// - If two nodes are available, hash them to form a parent.
// - If one node is available, hash it and a cached padding node to form a parent.
for i in 0..parent_nodes {
let (left, right) = match (chunks.get(i * 2), chunks.get(i * 2 + 1)) {
(Ok(left), Ok(right)) => (left, right),
(Ok(left), Err(_)) => (left, get_zero_hash(height)),
// Deriving `parent_nodes` from `chunks.len()` has ensured that we never encounter the
// scenario where we expect two nodes but there are none.
(Err(_), Err(_)) => unreachable!("Parent must have one child"),
// `chunks` is a contiguous array so it is impossible for an index to be missing
// when a higher index is present.
(Err(_), Ok(_)) => unreachable!("Parent must have a left child"),
};
assert!(
left.len() == right.len() && right.len() == BYTES_PER_CHUNK,
"Both children should be `BYTES_PER_CHUNK` bytes."
);
let hash = hash_concat(left, right);
// Store a parent node.
chunks
.set(i, &hash)
.expect("Buf is adequate size for parent");
}
// Shrink the buffer so it neatly fits the number of new nodes created in this round.
//
// The number of `parent_nodes` is either decreasing or stable. It never increases.
chunks.truncate(parent_nodes);
}
// There should be a single chunk left in the buffer and it is the Merkle root.
let root = chunks.into_vec();
assert_eq!(root.len(), BYTES_PER_CHUNK, "Only one chunk should remain");
root
}
/// A helper struct for storing words of `BYTES_PER_CHUNK` size in a flat byte array.
#[derive(Debug)]
struct ChunkStore(Vec<u8>);
impl ChunkStore {
/// Creates a new instance with `chunks` padding nodes.
fn with_capacity(chunks: usize) -> Self {
Self(vec![0; chunks * BYTES_PER_CHUNK])
}
/// Set the `i`th chunk to `value`.
///
/// Returns `Err` if `value.len() != BYTES_PER_CHUNK` or `i` is out-of-bounds.
fn set(&mut self, i: usize, value: &[u8]) -> Result<(), ()> {
if i < self.len() && value.len() == BYTES_PER_CHUNK {
let slice = &mut self.0[i * BYTES_PER_CHUNK..i * BYTES_PER_CHUNK + BYTES_PER_CHUNK];
slice.copy_from_slice(value);
Ok(())
} else {
Err(())
}
}
/// Gets the `i`th chunk.
///
/// Returns `Err` if `i` is out-of-bounds.
fn get(&self, i: usize) -> Result<&[u8], ()> {
if i < self.len() {
Ok(&self.0[i * BYTES_PER_CHUNK..i * BYTES_PER_CHUNK + BYTES_PER_CHUNK])
} else {
Err(())
}
}
/// Returns the number of chunks presently stored in `self`.
fn len(&self) -> usize {
self.0.len() / BYTES_PER_CHUNK
}
/// Truncates 'self' to `num_chunks` chunks.
///
/// Functionally identical to `Vec::truncate`.
fn truncate(&mut self, num_chunks: usize) {
self.0.truncate(num_chunks * BYTES_PER_CHUNK)
}
/// Consumes `self`, returning the underlying byte array.
fn into_vec(self) -> Vec<u8> {
self.0
}
}
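A quick sketch of the `ChunkStore` contract relied upon above:

```rust
let mut chunks = ChunkStore::with_capacity(2);

// Writing exactly `BYTES_PER_CHUNK` bytes to an in-bounds index succeeds.
assert!(chunks.set(0, &[1; BYTES_PER_CHUNK]).is_ok());

// Out-of-bounds indices and wrong-sized values are rejected.
assert!(chunks.set(2, &[1; BYTES_PER_CHUNK]).is_err());
assert!(chunks.set(0, &[1; BYTES_PER_CHUNK - 1]).is_err());

assert_eq!(chunks.len(), 2);
```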
/// Returns a cached padding node for a given height.
fn get_zero_hash(height: usize) -> &'static [u8] {
if height <= MAX_TREE_DEPTH {
&ZERO_HASHES[height]
} else {
panic!("Tree exceeds MAX_TREE_DEPTH of {}", MAX_TREE_DEPTH)
}
}
/// Concatenate two vectors.
fn concat(mut vec1: Vec<u8>, mut vec2: Vec<u8>) -> Vec<u8> {
vec1.append(&mut vec2);
vec1
}
/// Compute the hash of two other hashes concatenated.
fn hash_concat(h1: &[u8], h2: &[u8]) -> Vec<u8> {
hash(&concat(h1.to_vec(), h2.to_vec()))
}
/// Returns the next even number following `n`. If `n` is even, `n` is returned.
fn next_even_number(n: usize) -> usize {
n + n % 2
}
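A hedged usage sketch: Merkleizing three chunks of data implicitly pads the tree out to four leaves (the next power of two), and the returned root is always a single chunk.

```rust
let bytes = vec![42; 3 * BYTES_PER_CHUNK];
let root = merkleize_padded(&bytes, 0);

assert_eq!(root.len(), BYTES_PER_CHUNK);
```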
#[cfg(test)]
mod test {
use super::*;
pub fn reference_root(bytes: &[u8]) -> Vec<u8> {
crate::merkleize_standard(&bytes)[0..32].to_vec()
}
macro_rules! common_tests {
($get_bytes: ident) => {
#[test]
fn zero_value_0_nodes() {
test_against_reference(&$get_bytes(0 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_1_nodes() {
test_against_reference(&$get_bytes(1 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_2_nodes() {
test_against_reference(&$get_bytes(2 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_3_nodes() {
test_against_reference(&$get_bytes(3 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_4_nodes() {
test_against_reference(&$get_bytes(4 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_8_nodes() {
test_against_reference(&$get_bytes(8 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_9_nodes() {
test_against_reference(&$get_bytes(9 * BYTES_PER_CHUNK), 0);
}
#[test]
fn zero_value_8_nodes_varying_min_length() {
for i in 0..64 {
test_against_reference(&$get_bytes(8 * BYTES_PER_CHUNK), i);
}
}
#[test]
fn zero_value_range_of_nodes() {
for i in 0..32 * BYTES_PER_CHUNK {
test_against_reference(&$get_bytes(i), 0);
}
}
#[test]
fn max_tree_depth_min_nodes() {
let input = vec![0; 10 * BYTES_PER_CHUNK];
let min_nodes = 2usize.pow(MAX_TREE_DEPTH as u32);
assert_eq!(
merkleize_padded(&input, min_nodes),
get_zero_hash(MAX_TREE_DEPTH)
);
}
};
}
mod zero_value {
use super::*;
fn zero_bytes(bytes: usize) -> Vec<u8> {
vec![0; bytes]
}
common_tests!(zero_bytes);
}
mod random_value {
use super::*;
use rand::RngCore;
    fn random_bytes(bytes: usize) -> Vec<u8> {
        // `Vec::with_capacity` would yield a zero-length slice and leave
        // nothing to fill, so allocate `bytes` zeroed bytes before filling.
        let mut bytes = vec![0; bytes];
        rand::thread_rng().fill_bytes(&mut bytes);
        bytes
    }
common_tests!(random_bytes);
}
fn test_against_reference(input: &[u8], min_nodes: usize) {
let mut reference_input = input.to_vec();
reference_input.resize(
std::cmp::max(
reference_input.len(),
min_nodes.next_power_of_two() * BYTES_PER_CHUNK,
),
0,
);
assert_eq!(
reference_root(&reference_input),
merkleize_padded(&input, min_nodes),
"input.len(): {:?}",
input.len()
);
}
}


@@ -1,12 +1,23 @@
 use super::*;
 use hashing::hash;

-pub fn merkle_root(bytes: &[u8]) -> Vec<u8> {
-    // TODO: replace this with a more memory efficient method.
-    efficient_merkleize(&bytes)[0..32].to_vec()
-}
-
-pub fn efficient_merkleize(bytes: &[u8]) -> Vec<u8> {
+/// Merkleizes bytes and returns the root, using a simple algorithm that does not optimize to avoid
+/// processing or storing padding bytes.
+///
+/// The input `bytes` will be padded to ensure that the number of leaves is a power-of-two.
+///
+/// It is likely a better choice to use [merkleize_padded](fn.merkleize_padded.html) instead.
+///
+/// ## CPU Performance
+///
+/// Will hash all nodes in the tree, even if they are padding and pre-determined.
+///
+/// ## Memory Performance
+///
+/// - Duplicates the input `bytes`.
+/// - Stores all internal nodes, even if they are padding.
+/// - Does not free up unused memory during operation.
+pub fn merkleize_standard(bytes: &[u8]) -> Vec<u8> {
     // If the bytes are just one chunk (or less than one chunk) just return them.
     if bytes.len() <= HASHSIZE {
         let mut o = bytes.to_vec();
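The reference tests in `merkleize_padded` rely on the two implementations agreeing once the input covers a power-of-two number of chunks; a hedged sketch of that equivalence:

```rust
let input = vec![7; 4 * HASHSIZE];

assert_eq!(
    merkleize_standard(&input)[0..HASHSIZE].to_vec(),
    merkleize_padded(&input, 0),
);
```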


@@ -150,7 +150,7 @@ pub fn tree_hash_derive(input: TokenStream) -> TokenStream {
                leaves.append(&mut self.#idents.tree_hash_root());
            )*

-           tree_hash::merkleize::merkle_root(&leaves)
+           tree_hash::merkle_root(&leaves)
        }
    }
    };
@@ -180,7 +180,7 @@ pub fn tree_hash_signed_root_derive(input: TokenStream) -> TokenStream {
                leaves.append(&mut self.#idents.tree_hash_root());
            )*

-           tree_hash::merkleize::merkle_root(&leaves)
+           tree_hash::merkle_root(&leaves)
        }
    }
    };


@@ -1,5 +1,5 @@
 use cached_tree_hash::{CachedTreeHash, TreeHashCache};
-use tree_hash::{merkleize::merkle_root, SignedRoot, TreeHash};
+use tree_hash::{merkle_root, SignedRoot, TreeHash};
 use tree_hash_derive::{CachedTreeHash, SignedRoot, TreeHash};

 #[derive(Clone, Debug, TreeHash, CachedTreeHash)]


@@ -26,11 +26,13 @@ types = { path = "../eth2/types" }
 serde = "1.0"
 serde_derive = "1.0"
 slog = "^2.2.3"
-slog-term = "^2.4.0"
 slog-async = "^2.3.0"
+slog-json = "^2.3"
+slog-term = "^2.4.0"
 tokio = "0.1.18"
 tokio-timer = "0.2.10"
 toml = "^0.5"
 error-chain = "0.12.0"
 bincode = "^1.1.2"
 futures = "0.1.25"
+dirs = "2.0.1"

View File

@@ -12,7 +12,7 @@ The VC is responsible for the following tasks:
   duties require.
 - Completing all the fields on a new block (e.g., RANDAO reveal, signature) and
   publishing the block to a BN.
-- Prompting the BN to produce a new shard atteststation as per a validators
+- Prompting the BN to produce a new shard attestation as per a validator's
   duties.
 - Ensuring that no slashable messages are signed by a validator private key.
 - Keeping track of the system clock and how it relates to slots/epochs.
@@ -40,7 +40,7 @@ index. The outcome of a successful poll is a `EpochDuties` struct:
 
 ```rust
 EpochDuties {
     validator_index: u64,
-    block_prodcution_slot: u64,
+    block_production_slot: u64,
 }
 ```
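
For concreteness, a hypothetical sketch of how a VC might act on `EpochDuties` each slot; only the struct's fields come from the design doc above, and the helper is invented for this illustration.

```rust
// Hypothetical illustration only: the struct fields come from the design doc;
// the helper and the `main` driver are invented for this sketch.
struct EpochDuties {
    validator_index: u64,
    block_production_slot: u64,
}

/// True when this validator should prompt the BN to produce a block.
fn should_produce_block(duties: &EpochDuties, current_slot: u64) -> bool {
    duties.block_production_slot == current_slot
}

fn main() {
    let duties = EpochDuties {
        validator_index: 7,
        block_production_slot: 1024,
    };
    assert!(should_produce_block(&duties, 1024));
    assert!(!should_produce_block(&duties, 1025));
    println!("validator {} ok", duties.validator_index);
}
```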

View File

@@ -2,11 +2,11 @@ use bincode;
 use bls::Keypair;
 use clap::ArgMatches;
 use serde_derive::{Deserialize, Serialize};
-use slog::{debug, error, info};
-use std::fs;
-use std::fs::File;
+use slog::{debug, error, info, o, Drain};
+use std::fs::{self, File, OpenOptions};
 use std::io::{Error, ErrorKind};
 use std::path::PathBuf;
+use std::sync::Mutex;
 use types::{EthSpec, MainnetEthSpec};
 
 /// Stores the core configuration for this validator instance.
@@ -14,6 +14,8 @@ use types::{EthSpec, MainnetEthSpec};
 pub struct Config {
     /// The data directory, which stores all validator databases
     pub data_dir: PathBuf,
+    /// The path where the logs will be written
+    pub log_file: PathBuf,
     /// The server at which the Beacon Node can be contacted
     pub server: String,
     /// The number of slots per epoch.
@@ -27,6 +29,7 @@ impl Default for Config {
     fn default() -> Self {
         Self {
             data_dir: PathBuf::from(".lighthouse-validator"),
+            log_file: PathBuf::from(""),
             server: "localhost:5051".to_string(),
             slots_per_epoch: MainnetEthSpec::slots_per_epoch(),
         }
@@ -38,11 +41,20 @@ impl Config {
     ///
     /// Returns an error if arguments are obviously invalid. May succeed even if some values are
     /// invalid.
-    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), &'static str> {
+    pub fn apply_cli_args(
+        &mut self,
+        args: &ArgMatches,
+        log: &mut slog::Logger,
+    ) -> Result<(), &'static str> {
         if let Some(datadir) = args.value_of("datadir") {
             self.data_dir = PathBuf::from(datadir);
         };
 
+        if let Some(log_file) = args.value_of("logfile") {
+            self.log_file = PathBuf::from(log_file);
+            self.update_logger(log)?;
+        };
+
         if let Some(srv) = args.value_of("server") {
             self.server = srv.to_string();
         };
@@ -50,6 +62,38 @@ impl Config {
         Ok(())
     }
 
+    // Update the logger to output in JSON to the specified file.
+    fn update_logger(&mut self, log: &mut slog::Logger) -> Result<(), &'static str> {
+        let file = OpenOptions::new()
+            .create(true)
+            .write(true)
+            .truncate(true)
+            .open(&self.log_file);
+
+        if file.is_err() {
+            return Err("Cannot open log file");
+        }
+        let file = file.unwrap();
+
+        if let Some(file) = self.log_file.to_str() {
+            info!(
+                *log,
+                "Log file specified, output will now be written to {} in json.", file
+            );
+        } else {
+            info!(
+                *log,
+                "Log file specified, output will now be written in json."
+            );
+        }
+
+        let drain = Mutex::new(slog_json::Json::default(file)).fuse();
+        let drain = slog_async::Async::new(drain).build().fuse();
+        *log = slog::Logger::root(drain, o!());
+
+        Ok(())
+    }
+
     /// Try to load keys from validator_dir, returning None if none are found or an error.
     #[allow(dead_code)]
     pub fn fetch_keys(&self, log: &slog::Logger) -> Option<Vec<Keypair>> {
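
The `update_logger` pattern above is standard slog plumbing. Below is a minimal, self-contained sketch of the same drain stack, assuming the `slog`, `slog-json` and `slog-async` crates from the Cargo.toml hunk earlier. The `Mutex` makes the non-thread-safe JSON serializer usable as a drain, and the async drain moves serialization off the calling thread.

```rust
// Standalone version of the drain stack built in `update_logger`; the file
// name here is invented for the example.
use slog::{info, o, Drain};
use std::fs::OpenOptions;
use std::sync::Mutex;

fn main() {
    let file = OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true)
        .open("validator.log")
        .expect("cannot open log file");

    // JSON serializer behind a Mutex, then an async drain in front of it.
    let drain = Mutex::new(slog_json::Json::default(file)).fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    let log = slog::Logger::root(drain, o!());

    info!(log, "output will now be written to validator.log in JSON");
}
```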

View File

@@ -9,9 +9,10 @@ mod signer;
 use crate::config::Config as ValidatorClientConfig;
 use crate::service::Service as ValidatorService;
 use clap::{App, Arg};
-use eth2_config::{get_data_dir, read_from_file, write_to_file, Eth2Config};
+use eth2_config::{read_from_file, write_to_file, Eth2Config};
 use protos::services_grpc::ValidatorServiceClient;
 use slog::{crit, error, info, o, Drain};
+use std::fs;
 use std::path::PathBuf;
 use types::{Keypair, MainnetEthSpec, MinimalEthSpec};
 
@@ -25,7 +26,7 @@ fn main() {
     let decorator = slog_term::TermDecorator::new().build();
     let drain = slog_term::CompactFormat::new(decorator).build().fuse();
     let drain = slog_async::Async::new(drain).build().fuse();
-    let log = slog::Logger::root(drain, o!());
+    let mut log = slog::Logger::root(drain, o!());
 
     // CLI
     let matches = App::new("Lighthouse Validator Client")
@@ -35,10 +36,18 @@ fn main() {
         .arg(
             Arg::with_name("datadir")
                 .long("datadir")
+                .short("d")
                 .value_name("DIR")
                 .help("Data directory for keys and databases.")
                 .takes_value(true),
         )
+        .arg(
+            Arg::with_name("logfile")
+                .long("logfile")
+                .value_name("logfile")
+                .help("File path where output will be written.")
+                .takes_value(true),
+        )
         .arg(
             Arg::with_name("eth2-spec")
                 .long("eth2-spec")
@@ -66,14 +75,34 @@ fn main() {
         )
         .get_matches();
 
-    let data_dir = match get_data_dir(&matches, PathBuf::from(DEFAULT_DATA_DIR)) {
-        Ok(dir) => dir,
-        Err(e) => {
-            crit!(log, "Failed to initialize data dir"; "error" => format!("{:?}", e));
-            return;
+    let data_dir = match matches
+        .value_of("datadir")
+        .and_then(|v| Some(PathBuf::from(v)))
+    {
+        Some(v) => v,
+        None => {
+            // Use the default directory.
+            let mut default_dir = match dirs::home_dir() {
+                Some(v) => v,
+                None => {
+                    crit!(log, "Failed to find a home directory");
+                    return;
+                }
+            };
+            default_dir.push(DEFAULT_DATA_DIR);
+            PathBuf::from(default_dir)
         }
     };
 
+    // Create the directory if needed.
+    match fs::create_dir_all(&data_dir) {
+        Ok(_) => {}
+        Err(e) => {
+            crit!(log, "Failed to initialize data dir"; "error" => format!("{}", e));
+            return;
+        }
+    }
+
     let client_config_path = data_dir.join(CLIENT_CONFIG_FILENAME);
 
     // Attempt to load the `ClientConfig` from disk.
@@ -101,7 +130,7 @@ fn main() {
     client_config.data_dir = data_dir.clone();
 
     // Update the client config with any CLI args.
-    match client_config.apply_cli_args(&matches) {
+    match client_config.apply_cli_args(&matches, &mut log) {
         Ok(()) => (),
         Err(s) => {
             crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => s);
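
To close, a standalone sketch of the new data-directory fallback, assuming the `dirs` crate added in the Cargo.toml hunk earlier; `.lighthouse-validator` stands in for `DEFAULT_DATA_DIR`. (`Option::map` is the idiomatic spelling of the `.and_then(|v| Some(...))` used in the hunk above.)

```rust
use std::path::PathBuf;

// CLI value wins; otherwise fall back to $HOME/.lighthouse-validator.
// Returns None only when no home directory can be found.
fn resolve_data_dir(cli_value: Option<&str>) -> Option<PathBuf> {
    cli_value.map(PathBuf::from).or_else(|| {
        let mut dir = dirs::home_dir()?;
        dir.push(".lighthouse-validator");
        Some(dir)
    })
}

fn main() {
    assert_eq!(
        resolve_data_dir(Some("/tmp/keys")),
        Some(PathBuf::from("/tmp/keys"))
    );
    println!("default: {:?}", resolve_data_dir(None));
}
```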