Merge latest master
commit b2a1b20e24
@@ -19,6 +19,7 @@ members = [
    "eth2/utils/slot_clock",
    "eth2/utils/ssz",
    "eth2/utils/ssz_derive",
    "eth2/utils/ssz_types",
    "eth2/utils/swap_or_not_shuffle",
    "eth2/utils/tree_hash",
    "eth2/utils/tree_hash_derive",
README.md
@@ -1,4 +1,6 @@
# Lighthouse: an Ethereum Serenity client
# Lighthouse: Ethereum 2.0

An open-source Ethereum 2.0 client, written in Rust and maintained by Sigma Prime.

[![Build Status]][Build Link] [![Doc Status]][Doc Link] [![Gitter Badge]][Gitter Link]

@@ -9,24 +11,126 @@
[Doc Status]: https://img.shields.io/badge/docs-master-blue.svg
[Doc Link]: http://lighthouse-docs.sigmaprime.io/

A work-in-progress, open-source implementation of the Serenity Beacon
Chain, maintained by Sigma Prime.
## Overview

The "Serenity" project is also known as "Ethereum 2.0" or "Shasper".
Lighthouse is:

## Lighthouse Client
- Fully open-source, licensed under Apache 2.0.
- Security-focused; fuzzing has begun and security reviews are planned
  for late-2019.
- Built in [Rust](https://www.rust-lang.org/), a modern language providing unique safety guarantees and
  excellent performance (comparable to C++).
- Funded by various organisations, including Sigma Prime, the
  Ethereum Foundation, Consensys and private individuals.
- Actively working to promote an inter-operable, multi-client Ethereum 2.0.

Lighthouse is an open-source Ethereum Serenity client that is currently under
development. Designed as a Serenity-only client, Lighthouse will not
re-implement the existing proof-of-work protocol. Maintaining a forward focus
on Ethereum Serenity ensures that Lighthouse avoids reproducing the high-quality
work already undertaken by existing projects. As such, Lighthouse will connect
to existing clients, such as
[Geth](https://github.com/ethereum/go-ethereum) or
[Parity-Ethereum](https://github.com/paritytech/parity-ethereum), via RPC to enable
present-Ethereum functionality.

### Further Reading
## Development Status

Lighthouse, like all Ethereum 2.0 clients, is a work in progress. Instructions
are provided for running the client; however, these instructions are designed
for developers and researchers working on the project. We do not (yet) provide
user-facing functionality.

Current development overview:

- Specification `v0.6.3` implemented, optimized and passing test vectors.
- Rust-native libp2p integrated, with Gossipsub.
- Discv5 (P2P discovery mechanism) integration started.
- Metrics via Prometheus.
- Basic gRPC API, soon to be replaced with a RESTful HTTP/JSON API.

### Roadmap

- **July 2019**: `lighthouse-0.0.1` release: a stable testnet for developers with a useful
  HTTP API.
- **September 2019**: Inter-operability with other Ethereum 2.0 clients.
- **October 2019**: Public, multi-client testnet with user-facing functionality.
- **January 2020**: Production Beacon Chain testnet.

## Usage

Lighthouse consists of multiple binaries:

- [`beacon_node/`](beacon_node/): produces and verifies blocks from the P2P-connected
  validators and the P2P network. Provides an API for external services to
  interact with Ethereum 2.0.
- [`validator_client/`](validator_client/): connects to a `beacon_node` and
  performs the role of a proof-of-stake validator.
- [`account_manager/`](account_manager/): a stand-alone component providing key
  management and creation for validators.

### Simple Local Testnet

**Note: these instructions are intended for developers and researchers. We do
not yet support end-users.**

In this example we use the `account_manager` to create some keys, launch two
`beacon_node` instances and connect a `validator_client` to one. The two
`beacon_node` instances should stay in sync and build a Beacon Chain.

First, clone this repository, [set up a development
environment](docs/installation.md) and navigate to the root directory of this repository.

Then, run `$ cargo build --all --release`, navigate to the `target/release`
directory and follow these steps:

#### 1. Generate Validator Keys

Generate 16 validator keys and store them in `~/.lighthouse-validator`:

```
$ ./account_manager -d ~/.lighthouse-validator generate_deterministic -i 0 -n 16
```

_Note: these keys are for development only. The secret keys are
deterministically generated from low integers. Assume they are public
knowledge._

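For the curious, the `generate_deterministic` sub-command is built on the `generate_deterministic_keypair` helper that `account_manager` imports later in this diff. Below is a minimal, hypothetical sketch of the loop it implies; the exact signature and the `pk` field name are assumptions rather than verified API:

```rust
use types::test_utils::generate_deterministic_keypair;

fn main() {
    // Indices 0..16 mirror the `-i 0 -n 16` flags in the command above.
    for validator_index in 0usize..16 {
        // The secret key is derived from the (low) index, which is why the
        // note above says to treat these keys as public knowledge.
        let keypair = generate_deterministic_keypair(validator_index);
        println!("validator {}: {:?}", validator_index, keypair.pk);
    }
}
```
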
#### 2. Start a Beacon Node

This node will act as the boot node and provide an API for the
`validator_client`.

```
$ ./beacon_node --recent-genesis --rpc
```

_Note: `--recent-genesis` defines the genesis time as either the start of the
current hour, or half-way through the current hour (whichever is most recent).
This makes it very easy to create a testnet, but does not allow nodes to
connect if they were started in separate 30-minute windows._

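To make the half-hour rounding concrete, here is a small self-contained sketch of the rule the note describes. It is an illustration only, not the client's actual `--recent-genesis` code:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Round the current Unix time down to the most recent half-hour boundary,
/// mirroring the behaviour described for `--recent-genesis` above.
fn recent_genesis_time() -> u64 {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before 1970")
        .as_secs();
    // 30 minutes = 1_800 seconds; nodes started inside the same window agree.
    now - (now % 1_800)
}

fn main() {
    println!("genesis time: {}", recent_genesis_time());
}
```
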
#### 3. Start Another Beacon Node

In another terminal window, start another node that will connect to the
running node.

The running node will display its ENR as a base64 string. This ENR, by default,
has a target address of `127.0.0.1`, meaning that any new node will connect to
this node via `127.0.0.1`. If the boot node should be reachable on a different
address, run it with the `--discovery-address` CLI flag to specify how other
nodes may connect to it.

```
$ ./beacon_node -r --boot-nodes <boot-node-ENR> --listen-address 127.0.0.1 --port 9001 --datadir /tmp/.lighthouse
```

Here `<boot-node-ENR>` is the ENR string displayed in the terminal by the first
node. The ENR can also be obtained from its default directory,
`.lighthouse/network/enr.dat`.

The `--datadir` flag tells this Beacon Node to store its files in a different
directory. If you're on a system that doesn't have a `/tmp` dir (e.g., Mac,
Windows), substitute this with any directory to which you have write access.

Note that all subsequently created nodes can use the same boot-node ENR. Once
connected to the boot node, all nodes should discover and connect with each other.

#### 4. Start a Validator Client

In a third terminal window, start a validator client:

```
$ ./validator_client
```

You should be able to observe the validator signing blocks, the boot node
processing these blocks and publishing them to the other node. If you have
issues, try restarting the beacon nodes to ensure they have the same genesis
time. Alternatively, raise an issue and include your terminal output.

## Further Reading

- [About Lighthouse](docs/lighthouse.md): Goals, Ideology and Ethos surrounding
  this implementation.

@@ -37,7 +141,7 @@ If you'd like some background on Sigma Prime, please see the [Lighthouse Update
\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
[company website](https://sigmaprime.io).

### Directory Structure
## Directory Structure

- [`beacon_node/`](beacon_node/): the "Beacon Node" binary and crates exclusively
  associated with it.

@@ -50,115 +154,9 @@ If you'd like some background on Sigma Prime, please see the [Lighthouse Update
- [`validator_client/`](validator_client/): the "Validator Client" binary and crates exclusively
  associated with it.

### Components
## Contributing

The following list describes some of the components actively under development
by the team:

- **BLS cryptography**: Lighthouse presently uses the [Apache
  Milagro](https://milagro.apache.org/) cryptography library to create and
  verify BLS aggregate signatures. BLS signatures are core to Serenity as they
  allow the signatures of many validators to be compressed into a constant 96
  bytes and efficiently verified. The Lighthouse project is presently
  maintaining its own [BLS aggregates
  library](https://github.com/sigp/signature-schemes), gratefully forked from
  [@lovesh](https://github.com/lovesh).
- **DoS-resistant block pre-processing**: Processing blocks in proof-of-stake
  is more resource intensive than proof-of-work. As such, clients need to
  ensure that bad blocks can be rejected as efficiently as possible. At
  present, blocks having 10 million ETH staked can be processed in 0.006
  seconds, and invalid blocks are rejected even more quickly. See
  [issue #103](https://github.com/ethereum/beacon_chain/issues/103) on
  [ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
- **P2P networking**: Serenity will likely use the [libp2p
  framework](https://libp2p.io/). Lighthouse is working alongside
  [Parity](https://www.parity.io/) to ensure
  [libp2p-rust](https://github.com/libp2p/rust-libp2p) is fit for purpose.
- **Validator duties**: The project involves development of "validator
  services" for users who wish to stake ETH. To fulfill their duties,
  validators require a consistent view of the chain and the ability to vote
  upon blocks from both shard and beacon chains.
- **New serialization formats**: Lighthouse is working alongside researchers
  from the Ethereum Foundation to develop *simpleserialize* (SSZ), a
  purpose-built serialization format for sending information across a network.
  Check out the [SSZ
  implementation](https://github.com/ethereum/eth2.0-specs/blob/00aa553fee95963b74fbec84dbd274d7247b8a0e/specs/simple-serialize.md)
  and this
  [research](https://github.com/sigp/serialization_sandbox/blob/report/report/serialization_report.md)
  on serialization formats for more information.
- **Fork-choice**: The current fork choice rule is
  [*LMD Ghost*](https://vitalik.ca/general/2018/12/05/cbc_casper.html#lmd-ghost),
  which effectively takes the latest messages and forms the canonical chain using
  the [GHOST](https://eprint.iacr.org/2013/881.pdf) mechanism (a toy sketch of
  the rule follows this list).
- **Efficient state transition logic**: State transition logic governs
  updates to the validator set as validators log in/out, penalizes/rewards
  validators, rotates validators across shards, and implements other core tasks.
- **Fuzzing and testing environments**: Implementation of lab environments with
  continuous integration (CI) workflows, providing automated security analysis.

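As promised in the fork-choice bullet, here is a toy sketch of LMD GHOST: starting from the justified block, repeatedly descend into the child whose subtree is backed by the greatest latest-message weight. The types, block tree and balances below are hypothetical and are not the API of the `lmd_ghost` crate referenced elsewhere in this commit:

```rust
use std::collections::HashMap;

type Root = [u8; 32];

/// A pruned block tree: every block knows its parent (None for the start block).
struct Block {
    parent: Option<Root>,
}

/// True if `descendant` is `ancestor` or sits somewhere below it in the tree.
fn is_descendant(blocks: &HashMap<Root, Block>, ancestor: Root, descendant: Root) -> bool {
    let mut cursor = Some(descendant);
    while let Some(root) = cursor {
        if root == ancestor {
            return true;
        }
        cursor = blocks.get(&root).and_then(|b| b.parent);
    }
    false
}

/// Toy LMD GHOST: from `justified`, repeatedly step into the child whose
/// subtree is supported by the greatest total balance of latest messages.
fn lmd_ghost_head(
    blocks: &HashMap<Root, Block>,
    // validator index -> (latest vote target, effective balance)
    latest_messages: &HashMap<u64, (Root, u64)>,
    justified: Root,
) -> Root {
    let mut head = justified;
    loop {
        let children: Vec<Root> = blocks
            .iter()
            .filter(|(_, block)| block.parent == Some(head))
            .map(|(root, _)| *root)
            .collect();

        if children.is_empty() {
            return head;
        }

        // Weight of a child = sum of balances whose latest vote lies in its subtree.
        head = children
            .into_iter()
            .max_by_key(|child| {
                latest_messages
                    .values()
                    .filter(|(target, _)| is_descendant(blocks, *child, *target))
                    .map(|(_, balance)| *balance)
                    .sum::<u64>()
            })
            .expect("children is non-empty");
    }
}

fn main() {
    // Tiny two-block chain: genesis <- a, with one vote for `a`.
    let genesis = [0u8; 32];
    let a = [1u8; 32];
    let mut blocks = HashMap::new();
    blocks.insert(genesis, Block { parent: None });
    blocks.insert(a, Block { parent: Some(genesis) });

    let mut latest_messages = HashMap::new();
    latest_messages.insert(0u64, (a, 32));

    assert_eq!(lmd_ghost_head(&blocks, &latest_messages, genesis), a);
}
```

Ties between equally weighted children are not broken deterministically here; a real implementation needs a tie-breaking rule and an efficient weight cache.
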
In addition to these components we are also working on database schemas, RPC
frameworks, specification development, database optimizations (e.g.,
bloom filters), and tons of other interesting stuff (at least we think so).

### Running

**NOTE: The cryptography libraries used in this implementation are
experimental. As such, all cryptography is assumed to be insecure.**

This code-base is still very much under development and does not provide any
user-facing functionality. For developers and researchers, there are several
tests and benchmarks which may be of interest.

A few basic steps are needed to get set up:

1. Install [rustup](https://rustup.rs/). It's a toolchain manager for Rust (Linux | macOS | Windows). For installation, download the script with `$ curl -f https://sh.rustup.rs > rustup.sh`, review its content (e.g. `$ less ./rustup.sh`) and run the script `$ ./rustup.sh` (you may need to change the permissions to allow execution, i.e. `$ chmod +x rustup.sh`).
2. (Linux & macOS) To configure your current shell, run: `$ source $HOME/.cargo/env`.
3. Use the command `rustup show` to get information about the Rust installation. You should see that the
   active toolchain is the stable version.
4. Run `rustc --version` to check the installation and version of Rust.
   - Updates can be performed using `rustup update`.
5. Install build dependencies (Arch packages are listed here; your distribution will likely be similar):
   - `clang`: required by RocksDB.
   - `protobuf`: required for protobuf serialization (gRPC).
   - `cmake`: required for building protobuf.
   - `git-lfs`: the Git extension for [Large File Support](https://git-lfs.github.com/) (required for the EF tests submodule).
6. Navigate to the working directory.
7. If you haven't already, clone the repository with submodules: `git clone --recursive https://github.com/sigp/lighthouse`.
   Alternatively, run `git submodule init` in a repository which was cloned without submodules.
8. Run the test suite with `cargo test --all --release`. The first run takes a while to
   build, compile and run all the test cases, so feel free to grab a coffee in the meantime.
   If everything passes, it's time to get your hands dirty. If you hit an error, please raise an
   [issue](https://github.com/sigp/lighthouse/issues) and we will help you.
9. As an alternative to, or in addition to, the above step, you may also run the benchmarks with
   the command `cargo bench --all`.

##### Note:
Lighthouse presently runs on Rust `stable`; however, benchmarks currently require the
`nightly` version.

##### Note for Windows users:
Perl may also be required to build Lighthouse. You can install [Strawberry Perl](http://strawberryperl.com/),
or alternatively use the Chocolatey command `choco install strawberryperl`.

Additionally, the dependency `protoc-grpcio v0.3.1` is reported to have issues compiling on Windows. You can specify
a known working version by editing the version in the `[build-dependencies]` section of `protos/Cargo.toml` to
`protoc-grpcio = "<=0.3.0"`.

### Contributing

**Lighthouse welcomes contributors with open arms.**

If you would like to learn more about Ethereum Serenity and/or
[Rust](https://www.rust-lang.org/), we are more than happy to on-board you
and assign you some tasks. We aim to be as accepting and understanding as
possible; we are more than happy to up-skill contributors in exchange for their
assistance with the project.

Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're
always looking for the best way to implement things and welcome all
respectful criticisms.
**Lighthouse welcomes contributors.**

If you are looking to contribute, please head to our
[onboarding documentation](https://github.com/sigp/lighthouse/blob/master/docs/onboarding.md).

@@ -173,10 +171,9 @@ your support!
## Contact

The best place for discussion is the [sigp/lighthouse gitter](https://gitter.im/sigp/lighthouse).
Ping @paulhauner or @AgeManning to get the quickest response.

# Donations
## Donations

If you support the cause, we could certainly use donations to help fund development:
If you support the cause, we accept donations to help fund development:

`0x25c4a76E7d118705e7Ea2e9b7d8C59930d8aCD3b` (donation.sigmaprime.eth)

@@ -6,7 +6,7 @@ use std::path::PathBuf;
use types::test_utils::generate_deterministic_keypair;
use validator_client::Config as ValidatorClientConfig;

pub const DEFAULT_DATA_DIR: &str = ".lighthouse-account-manager";
pub const DEFAULT_DATA_DIR: &str = ".lighthouse-validator";
pub const CLIENT_CONFIG_FILENAME: &str = "account-manager.toml";

fn main() {
@@ -14,13 +14,20 @@ fn main() {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::CompactFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    let log = slog::Logger::root(drain, o!());
    let mut log = slog::Logger::root(drain, o!());

    // CLI
    let matches = App::new("Lighthouse Accounts Manager")
        .version("0.0.1")
        .author("Sigma Prime <contact@sigmaprime.io>")
        .about("Eth 2.0 Accounts Manager")
        .arg(
            Arg::with_name("logfile")
                .long("logfile")
                .value_name("logfile")
                .help("File path where output will be written.")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("datadir")
                .long("datadir")
@@ -91,21 +98,12 @@ fn main() {

    let mut client_config = ValidatorClientConfig::default();

    if let Err(e) = client_config.apply_cli_args(&matches) {
        crit!(log, "Failed to apply CLI args"; "error" => format!("{:?}", e));
        return;
    };

    // Ensure the `data_dir` in the config matches that supplied to the CLI.
    client_config.data_dir = data_dir.clone();

    // Update the client config with any CLI args.
    match client_config.apply_cli_args(&matches) {
        Ok(()) => (),
        Err(s) => {
            crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => s);
            return;
        }
    if let Err(e) = client_config.apply_cli_args(&matches, &mut log) {
        crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => format!("{:?}", e));
        return;
    };

    // Log configuration
@@ -13,7 +13,7 @@ client = { path = "client" }
version = { path = "version" }
clap = "2.32.0"
serde = "1.0"
slog = { version = "^2.2.3" , features = ["max_level_trace", "release_max_level_debug"] }
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
slog-term = "^2.4.0"
slog-async = "^2.3.0"
ctrlc = { version = "3.1.1", features = ["termination"] }
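The `slog` line is the interesting change here: slog's `max_level_*` and `release_max_level_*` Cargo features set compile-time ceilings on which log macros emit anything at all. A small illustrative program (not part of this commit) shows the effect; with only `max_level_trace`, release builds fall back to slog's default release ceiling (info level, if memory of slog's docs serves), so the `trace!` and `debug!` calls below compile to no-ops in `--release` builds:

```rust
use slog::{debug, o, trace, Drain};

fn main() {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::CompactFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    let log = slog::Logger::root(drain, o!());

    // With `features = ["max_level_trace"]`, both lines emit output in a
    // debug build; in a `--release` build they are governed by the
    // `release_max_level_*` ceiling instead, which this commit leaves at
    // slog's default by dropping `release_max_level_debug`.
    trace!(log, "fine-grained detail"; "component" => "network");
    debug!(log, "coarser detail");
}
```
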
@@ -20,9 +20,12 @@ serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
slot_clock = { path = "../../eth2/utils/slot_clock" }
ssz = { path = "../../eth2/utils/ssz" }
ssz_derive = { path = "../../eth2/utils/ssz_derive" }
eth2_ssz = { path = "../../eth2/utils/ssz" }
eth2_ssz_derive = { path = "../../eth2/utils/ssz_derive" }
state_processing = { path = "../../eth2/state_processing" }
tree_hash = { path = "../../eth2/utils/tree_hash" }
types = { path = "../../eth2/types" }
lmd_ghost = { path = "../../eth2/lmd_ghost" }

[dev-dependencies]
rand = "0.5.5"
@@ -6,7 +6,7 @@ use crate::persisted_beacon_chain::{PersistedBeaconChain, BEACON_CHAIN_DB_KEY};
use lmd_ghost::LmdGhost;
use log::trace;
use operation_pool::DepositInsertStatus;
use operation_pool::OperationPool;
use operation_pool::{OperationPool, PersistedOperationPool};
use parking_lot::{RwLock, RwLockReadGuard};
use slot_clock::SlotClock;
use state_processing::per_block_processing::errors::{
@@ -18,7 +18,7 @@ use state_processing::{
    per_slot_processing, BlockProcessingError,
};
use std::sync::Arc;
use store::iter::{BlockIterator, BlockRootsIterator, StateRootsIterator};
use store::iter::{BestBlockRootsIterator, BlockIterator, BlockRootsIterator, StateRootsIterator};
use store::{Error as DBError, Store};
use tree_hash::TreeHash;
use types::*;
@@ -147,11 +147,13 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        let last_finalized_root = p.canonical_head.beacon_state.finalized_root;
        let last_finalized_block = &p.canonical_head.beacon_block;

        let op_pool = p.op_pool.into_operation_pool(&p.state, &spec);

        Ok(Some(BeaconChain {
            spec,
            slot_clock,
            fork_choice: ForkChoice::new(store.clone(), last_finalized_block, last_finalized_root),
            op_pool: OperationPool::default(),
            op_pool,
            canonical_head: RwLock::new(p.canonical_head),
            state: RwLock::new(p.state),
            genesis_block_root: p.genesis_block_root,
@@ -164,6 +166,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
    pub fn persist(&self) -> Result<(), Error> {
        let p: PersistedBeaconChain<T> = PersistedBeaconChain {
            canonical_head: self.canonical_head.read().clone(),
            op_pool: PersistedOperationPool::from_operation_pool(&self.op_pool),
            genesis_block_root: self.genesis_block_root,
            state: self.state.read().clone(),
        };
@@ -223,6 +226,19 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        BlockRootsIterator::owned(self.store.clone(), self.state.read().clone(), slot)
    }

    /// Iterates in reverse (highest to lowest slot) through all block roots from the largest
    /// `slot <= beacon_state.slot` through to genesis.
    ///
    /// Returns `None` for roots prior to genesis or when there is an error reading from `Store`.
    ///
    /// Contains duplicate roots when skip slots are encountered.
    pub fn rev_iter_best_block_roots(
        &self,
        slot: Slot,
    ) -> BestBlockRootsIterator<T::EthSpec, T::Store> {
        BestBlockRootsIterator::owned(self.store.clone(), self.state.read().clone(), slot)
    }

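A hedged usage sketch of the new iterator, assuming it yields `(Hash256, Slot)` pairs in the same way the sibling state-roots iterator is consumed in `get_state_at_slot` later in this diff:

```rust
use beacon_chain::{BeaconChain, BeaconChainTypes};
use types::{Hash256, Slot};

/// Illustration only: collect every block root from `from_slot` back to genesis.
/// Duplicate roots will appear wherever slots were skipped, per the doc comment above.
fn ancestor_roots<T: BeaconChainTypes>(
    chain: &BeaconChain<T>,
    from_slot: Slot,
) -> Vec<(Hash256, Slot)> {
    chain.rev_iter_best_block_roots(from_slot).collect()
}
```
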
    /// Iterates in reverse (highest to lowest slot) through all state roots from `slot` through to
    /// genesis.
    ///
@@ -506,8 +522,7 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
        &self,
        deposit: Deposit,
    ) -> Result<DepositInsertStatus, DepositValidationError> {
        self.op_pool
            .insert_deposit(deposit, &*self.state.read(), &self.spec)
        self.op_pool.insert_deposit(deposit)
    }

    /// Accept some exit and queue it for inclusion in an appropriate block.
@@ -1,4 +1,5 @@
use crate::{BeaconChainTypes, CheckPoint};
use operation_pool::PersistedOperationPool;
use ssz::{Decode, Encode};
use ssz_derive::{Decode, Encode};
use store::{DBColumn, Error as StoreError, StoreItem};
@@ -10,7 +11,7 @@ pub const BEACON_CHAIN_DB_KEY: &str = "PERSISTEDBEACONCHAINPERSISTEDBEA";
#[derive(Encode, Decode)]
pub struct PersistedBeaconChain<T: BeaconChainTypes> {
    pub canonical_head: CheckPoint<T::EthSpec>,
    // TODO: operations pool.
    pub op_pool: PersistedOperationPool,
    pub genesis_block_root: Hash256,
    pub state: BeaconState<T::EthSpec>,
}
@@ -14,6 +14,8 @@ use types::{
    Hash256, Keypair, RelativeEpoch, SecretKey, Signature, Slot,
};

pub use crate::persisted_beacon_chain::{PersistedBeaconChain, BEACON_CHAIN_DB_KEY};

/// Indicates how the `BeaconChainHarness` should produce blocks.
#[derive(Clone, Copy, Debug)]
pub enum BlockStrategy {
@@ -68,8 +70,8 @@ where
    E: EthSpec,
{
    pub chain: BeaconChain<CommonTypes<L, E>>,
    keypairs: Vec<Keypair>,
    spec: ChainSpec,
    pub keypairs: Vec<Keypair>,
    pub spec: ChainSpec,
}

impl<L, E> BeaconChainHarness<L, E>
@@ -189,7 +191,7 @@ where
    fn get_state_at_slot(&self, state_slot: Slot) -> BeaconState<E> {
        let state_root = self
            .chain
            .rev_iter_state_roots(self.chain.current_state().slot)
            .rev_iter_state_roots(self.chain.current_state().slot - 1)
            .find(|(_hash, slot)| *slot == state_slot)
            .map(|(hash, _slot)| hash)
            .expect("could not find state root");
@@ -1,16 +1,21 @@
#![cfg(not(debug_assertions))]

use beacon_chain::test_utils::{AttestationStrategy, BeaconChainHarness, BlockStrategy};
use beacon_chain::test_utils::{
    AttestationStrategy, BeaconChainHarness, BlockStrategy, CommonTypes, PersistedBeaconChain,
    BEACON_CHAIN_DB_KEY,
};
use lmd_ghost::ThreadSafeReducedTree;
use store::MemoryStore;
use types::{EthSpec, MinimalEthSpec, Slot};
use rand::Rng;
use store::{MemoryStore, Store};
use types::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use types::{Deposit, EthSpec, Hash256, MinimalEthSpec, Slot};

// Should ideally be divisible by 3.
pub const VALIDATOR_COUNT: usize = 24;

fn get_harness(
    validator_count: usize,
) -> BeaconChainHarness<ThreadSafeReducedTree<MemoryStore, MinimalEthSpec>, MinimalEthSpec> {
type TestForkChoice = ThreadSafeReducedTree<MemoryStore, MinimalEthSpec>;

fn get_harness(validator_count: usize) -> BeaconChainHarness<TestForkChoice, MinimalEthSpec> {
    let harness = BeaconChainHarness::new(validator_count);

    // Move past the zero slot.
@@ -225,3 +230,38 @@ fn does_not_finalize_without_attestation() {
        "no epoch should have been finalized"
    );
}

#[test]
fn roundtrip_operation_pool() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;

    let harness = get_harness(VALIDATOR_COUNT);

    // Add some attestations
    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::AllValidators,
    );
    assert!(harness.chain.op_pool.num_attestations() > 0);

    // Add some deposits
    let rng = &mut XorShiftRng::from_seed([66; 16]);
    for _ in 0..rng.gen_range(1, VALIDATOR_COUNT) {
        harness
            .chain
            .process_deposit(Deposit::random_for_test(rng))
            .unwrap();
    }

    // TODO: could add some other operations
    harness.chain.persist().unwrap();

    let key = Hash256::from_slice(&BEACON_CHAIN_DB_KEY.as_bytes());
    let p: PersistedBeaconChain<CommonTypes<TestForkChoice, MinimalEthSpec>> =
        harness.chain.store.get(&key).unwrap().unwrap();

    let restored_op_pool = p.op_pool.into_operation_pool(&p.state, &harness.spec);

    assert_eq!(harness.chain.op_pool, restored_op_pool);
}
@@ -19,10 +19,11 @@ slot_clock = { path = "../../eth2/utils/slot_clock" }
serde = "1.0.93"
serde_derive = "1.0"
error-chain = "0.12.0"
slog = { version = "^2.2.3" , features = ["max_level_trace", "release_max_level_trace"] }
slog-term = "^2.4.0"
eth2_ssz = { path = "../../eth2/utils/ssz" }
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
slog-async = "^2.3.0"
ssz = { path = "../../eth2/utils/ssz" }
slog-json = "^2.3"
slog-term = "^2.4.0"
tokio = "0.1.15"
clap = "2.32.0"
dirs = "1.0.3"
@@ -2,8 +2,10 @@ use clap::ArgMatches;
use http_server::HttpServerConfig;
use network::NetworkConfig;
use serde_derive::{Deserialize, Serialize};
use std::fs;
use slog::{info, o, Drain};
use std::fs::{self, OpenOptions};
use std::path::PathBuf;
use std::sync::Mutex;

/// The core configuration of a Lighthouse beacon node.
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -11,6 +13,7 @@ pub struct Config {
    pub data_dir: PathBuf,
    pub db_type: String,
    db_name: String,
    pub log_file: PathBuf,
    pub network: network::NetworkConfig,
    pub rpc: rpc::RPCConfig,
    pub http: HttpServerConfig,
@@ -20,6 +23,7 @@ impl Default for Config {
    fn default() -> Self {
        Self {
            data_dir: PathBuf::from(".lighthouse"),
            log_file: PathBuf::from(""),
            db_type: "disk".to_string(),
            db_name: "chain_db".to_string(),
            // Note: there are no default bootnodes specified.
@@ -45,23 +49,64 @@ impl Config {
        Some(path)
    }

    // Update the logger to output JSON to the specified file.
    fn update_logger(&mut self, log: &mut slog::Logger) -> Result<(), &'static str> {
        let file = OpenOptions::new()
            .create(true)
            .write(true)
            .truncate(true)
            .open(&self.log_file);

        if file.is_err() {
            return Err("Cannot open log file");
        }
        let file = file.unwrap();

        if let Some(file) = self.log_file.to_str() {
            info!(
                *log,
                "Log file specified, output will now be written to {} in json.", file
            );
        } else {
            info!(
                *log,
                "Log file specified output will now be written in json"
            );
        }

        let drain = Mutex::new(slog_json::Json::default(file)).fuse();
        let drain = slog_async::Async::new(drain).build().fuse();
        *log = slog::Logger::root(drain, o!());

        Ok(())
    }

    /// Apply the following arguments to `self`, replacing values if they are specified in `args`.
    ///
    /// Returns an error if arguments are obviously invalid. May succeed even if some values are
    /// invalid.
    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), String> {
    pub fn apply_cli_args(
        &mut self,
        args: &ArgMatches,
        log: &mut slog::Logger,
    ) -> Result<(), String> {
        if let Some(dir) = args.value_of("datadir") {
            self.data_dir = PathBuf::from(dir);
        };

        if let Some(dir) = args.value_of("db") {
            self.db_type = dir.to_string();
        }
        };

        self.network.apply_cli_args(args)?;
        self.rpc.apply_cli_args(args)?;
        self.http.apply_cli_args(args)?;

        if let Some(log_file) = args.value_of("logfile") {
            self.log_file = PathBuf::from(log_file);
            self.update_logger(log)?;
        };

        Ok(())
    }
}
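The `update_logger`/`apply_cli_args` plumbing above works by rebuilding the root `Logger` behind the `&mut slog::Logger` handle the caller passes in, which is exactly what `account_manager`'s `main` now does with `&mut log`. A stripped-down sketch of that pattern, reusing only constructors that already appear in this diff (the JSON drain is swapped for a second terminal drain to keep the example self-contained):

```rust
use slog::{info, o, Drain};

fn terminal_logger() -> slog::Logger {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::CompactFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    slog::Logger::root(drain, o!())
}

/// Stand-in for `Config::update_logger`: rebuilding the root logger behind
/// the `&mut` handle is all the "update" amounts to; every later logging
/// call made through `log` then uses the new drain.
fn reconfigure(log: &mut slog::Logger) {
    *log = terminal_logger(); // the real code builds a `slog_json` drain here
    info!(*log, "logger reconfigured");
}

fn main() {
    let mut log = terminal_logger();
    reconfigure(&mut log);
}
```
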
@ -3,39 +3,34 @@ use beacon_chain::BeaconChainTypes;
|
||||
use exit_future::Exit;
|
||||
use futures::{Future, Stream};
|
||||
use slog::{debug, o};
|
||||
use std::sync::{Arc, Mutex};
|
||||
use std::time::{Duration, Instant};
|
||||
use tokio::runtime::TaskExecutor;
|
||||
use tokio::timer::Interval;
|
||||
|
||||
/// Thread that monitors the client and reports useful statistics to the user.
|
||||
/// The interval between heartbeat events.
|
||||
pub const HEARTBEAT_INTERVAL_SECONDS: u64 = 5;
|
||||
|
||||
/// Spawns a thread that can be used to run code periodically, on `HEARTBEAT_INTERVAL_SECONDS`
|
||||
/// durations.
|
||||
///
|
||||
/// Presently unused, but remains for future use.
|
||||
pub fn run<T: BeaconChainTypes + Send + Sync + 'static>(
|
||||
client: &Client<T>,
|
||||
executor: TaskExecutor,
|
||||
exit: Exit,
|
||||
) {
|
||||
// notification heartbeat
|
||||
let interval = Interval::new(Instant::now(), Duration::from_secs(5));
|
||||
let interval = Interval::new(
|
||||
Instant::now(),
|
||||
Duration::from_secs(HEARTBEAT_INTERVAL_SECONDS),
|
||||
);
|
||||
|
||||
let _log = client.log.new(o!("Service" => "Notifier"));
|
||||
|
||||
// TODO: Debugging only
|
||||
let counter = Arc::new(Mutex::new(0));
|
||||
let network = client.network.clone();
|
||||
|
||||
// build heartbeat logic here
|
||||
let heartbeat = move |_| {
|
||||
//debug!(log, "Temp heartbeat output");
|
||||
//TODO: Remove this logic. Testing only
|
||||
let mut count = counter.lock().unwrap();
|
||||
*count += 1;
|
||||
|
||||
if *count % 5 == 0 {
|
||||
// debug!(log, "Sending Message");
|
||||
network.send_message();
|
||||
}
|
||||
|
||||
let heartbeat = |_| {
|
||||
// There is not presently any heartbeat logic.
|
||||
//
|
||||
// We leave this function empty for future use.
|
||||
Ok(())
|
||||
};
|
||||
|
||||
|
@ -13,9 +13,9 @@ enr = { git = "https://github.com/SigP/rust-libp2p/", rev = "be5710bbde69d8c5be
|
||||
types = { path = "../../eth2/types" }
|
||||
serde = "1.0"
|
||||
serde_derive = "1.0"
|
||||
ssz = { path = "../../eth2/utils/ssz" }
|
||||
ssz_derive = { path = "../../eth2/utils/ssz_derive" }
|
||||
slog = { version = "2.4.1" , features = ["max_level_trace", "release_max_level_trace"] }
|
||||
eth2_ssz = { path = "../../eth2/utils/ssz" }
|
||||
eth2_ssz_derive = { path = "../../eth2/utils/ssz_derive" }
|
||||
slog = { version = "^2.4.1" , features = ["max_level_trace"] }
|
||||
version = { path = "../version" }
|
||||
tokio = "0.1.16"
|
||||
futures = "0.1.25"
|
||||
|
@ -25,6 +25,8 @@ const PROTOCOL_PREFIX: &str = "/eth/serenity/rpc/";
|
||||
const REQUEST_TIMEOUT: u64 = 3;
|
||||
|
||||
/// Implementation of the `ConnectionUpgrade` for the RPC protocol.
|
||||
const MAX_READ_SIZE: usize = 4_194_304; // 4M
|
||||
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct RPCProtocol;
|
||||
|
||||
|
@ -109,8 +109,6 @@ impl Stream for Service {
|
||||
|
||||
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
|
||||
loop {
|
||||
// TODO: Currently only gossipsub events passed here.
|
||||
// Build a type for more generic events
|
||||
match self.swarm.poll() {
|
||||
//Behaviour events
|
||||
Ok(Async::Ready(Some(event))) => match event {
|
||||
|
@ -13,7 +13,7 @@ network = { path = "../network" }
|
||||
eth2-libp2p = { path = "../eth2-libp2p" }
|
||||
version = { path = "../version" }
|
||||
types = { path = "../../eth2/types" }
|
||||
ssz = { path = "../../eth2/utils/ssz" }
|
||||
eth2_ssz = { path = "../../eth2/utils/ssz" }
|
||||
slot_clock = { path = "../../eth2/utils/slot_clock" }
|
||||
protos = { path = "../../protos" }
|
||||
grpcio = { version = "0.4", default-features = false, features = ["protobuf-codec"] }
|
||||
@ -27,9 +27,8 @@ futures = "0.1.23"
|
||||
serde = "1.0"
|
||||
serde_derive = "1.0"
|
||||
serde_json = "1.0"
|
||||
slog = "^2.2.3"
|
||||
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
|
||||
slog-term = "^2.4.0"
|
||||
slog-async = "^2.3.0"
|
||||
tokio = "0.1.17"
|
||||
exit-future = "0.1.4"
|
||||
crossbeam-channel = "0.3.8"
|
||||
|
@ -14,6 +14,7 @@ use slog::{info, o, warn};
|
||||
use std::path::PathBuf;
|
||||
use std::sync::Arc;
|
||||
use tokio::runtime::TaskExecutor;
|
||||
use tokio::sync::mpsc;
|
||||
|
||||
#[derive(PartialEq, Clone, Debug, Serialize, Deserialize)]
|
||||
pub struct HttpServerConfig {
|
||||
@ -75,7 +76,7 @@ pub fn create_iron_http_server<T: BeaconChainTypes + 'static>(
|
||||
pub fn start_service<T: BeaconChainTypes + 'static>(
|
||||
config: &HttpServerConfig,
|
||||
executor: &TaskExecutor,
|
||||
_network_chan: crossbeam_channel::Sender<NetworkMessage>,
|
||||
_network_chan: mpsc::UnboundedSender<NetworkMessage>,
|
||||
beacon_chain: Arc<BeaconChain<T>>,
|
||||
db_path: PathBuf,
|
||||
metrics_registry: Registry,
|
||||
|
@ -13,10 +13,9 @@ store = { path = "../store" }
|
||||
eth2-libp2p = { path = "../eth2-libp2p" }
|
||||
version = { path = "../version" }
|
||||
types = { path = "../../eth2/types" }
|
||||
slog = { version = "^2.2.3" }
|
||||
ssz = { path = "../../eth2/utils/ssz" }
|
||||
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
|
||||
eth2_ssz = { path = "../../eth2/utils/ssz" }
|
||||
tree_hash = { path = "../../eth2/utils/tree_hash" }
|
||||
futures = "0.1.25"
|
||||
error-chain = "0.12.0"
|
||||
crossbeam-channel = "0.3.8"
|
||||
tokio = "0.1.16"
|
||||
|
@ -2,19 +2,20 @@ use crate::error;
|
||||
use crate::service::{NetworkMessage, OutgoingMessage};
|
||||
use crate::sync::SimpleSync;
|
||||
use beacon_chain::{BeaconChain, BeaconChainTypes};
|
||||
use crossbeam_channel::{unbounded as channel, Sender};
|
||||
use eth2_libp2p::rpc::methods::*;
|
||||
use eth2_libp2p::{
|
||||
behaviour::PubsubMessage,
|
||||
rpc::{RPCError, RPCErrorResponse, RPCRequest, RPCResponse, RequestId},
|
||||
PeerId, RPCEvent,
|
||||
};
|
||||
use futures::future;
|
||||
use futures::future::Future;
|
||||
use futures::stream::Stream;
|
||||
use slog::{debug, error, warn};
|
||||
use ssz::Decode;
|
||||
use std::collections::HashMap;
|
||||
use std::sync::Arc;
|
||||
use std::time::Instant;
|
||||
use tokio::sync::mpsc;
|
||||
|
||||
/// Handles messages received from the network and client and organises syncing.
|
||||
pub struct MessageHandler<T: BeaconChainTypes> {
|
||||
@ -45,13 +46,13 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
|
||||
/// Initializes and runs the MessageHandler.
|
||||
pub fn spawn(
|
||||
beacon_chain: Arc<BeaconChain<T>>,
|
||||
network_send: crossbeam_channel::Sender<NetworkMessage>,
|
||||
network_send: mpsc::UnboundedSender<NetworkMessage>,
|
||||
executor: &tokio::runtime::TaskExecutor,
|
||||
log: slog::Logger,
|
||||
) -> error::Result<Sender<HandlerMessage>> {
|
||||
) -> error::Result<mpsc::UnboundedSender<HandlerMessage>> {
|
||||
debug!(log, "Service starting");
|
||||
|
||||
let (handler_send, handler_recv) = channel();
|
||||
let (handler_send, handler_recv) = mpsc::unbounded_channel();
|
||||
|
||||
// Initialise sync and begin processing in thread
|
||||
// generate the Message handler
|
||||
@ -66,13 +67,13 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
|
||||
|
||||
// spawn handler task
|
||||
// TODO: Handle manual termination of thread
|
||||
executor.spawn(future::poll_fn(move || -> Result<_, _> {
|
||||
loop {
|
||||
handler.handle_message(handler_recv.recv().map_err(|_| {
|
||||
executor.spawn(
|
||||
handler_recv
|
||||
.for_each(move |msg| Ok(handler.handle_message(msg)))
|
||||
.map_err(move |_| {
|
||||
debug!(log, "Network message handler terminated.");
|
||||
})?);
|
||||
}
|
||||
}));
|
||||
}),
|
||||
);
|
||||
|
||||
Ok(handler_send)
|
||||
}
|
||||
@ -302,7 +303,7 @@ impl<T: BeaconChainTypes + 'static> MessageHandler<T> {
|
||||
|
||||
pub struct NetworkContext {
|
||||
/// The network channel to relay messages to the Network service.
|
||||
network_send: crossbeam_channel::Sender<NetworkMessage>,
|
||||
network_send: mpsc::UnboundedSender<NetworkMessage>,
|
||||
/// A mapping of peers and the RPC id we have sent an RPC request to.
|
||||
outstanding_outgoing_request_ids: HashMap<(PeerId, RequestId), Instant>,
|
||||
/// Stores the next `RequestId` we should include on an outgoing `RPCRequest` to a `PeerId`.
|
||||
@ -312,7 +313,7 @@ pub struct NetworkContext {
|
||||
}
|
||||
|
||||
impl NetworkContext {
|
||||
pub fn new(network_send: crossbeam_channel::Sender<NetworkMessage>, log: slog::Logger) -> Self {
|
||||
pub fn new(network_send: mpsc::UnboundedSender<NetworkMessage>, log: slog::Logger) -> Self {
|
||||
Self {
|
||||
network_send,
|
||||
outstanding_outgoing_request_ids: HashMap::new(),
|
||||
@ -348,13 +349,13 @@ impl NetworkContext {
|
||||
);
|
||||
}
|
||||
|
||||
fn send_rpc_event(&self, peer_id: PeerId, rpc_event: RPCEvent) {
|
||||
fn send_rpc_event(&mut self, peer_id: PeerId, rpc_event: RPCEvent) {
|
||||
self.send(peer_id, OutgoingMessage::RPC(rpc_event))
|
||||
}
|
||||
|
||||
fn send(&self, peer_id: PeerId, outgoing_message: OutgoingMessage) {
|
||||
fn send(&mut self, peer_id: PeerId, outgoing_message: OutgoingMessage) {
|
||||
self.network_send
|
||||
.send(NetworkMessage::Send(peer_id, outgoing_message))
|
||||
.try_send(NetworkMessage::Send(peer_id, outgoing_message))
|
||||
.unwrap_or_else(|_| {
|
||||
warn!(
|
||||
self.log,
|
||||
|
@ -2,24 +2,23 @@ use crate::error;
|
||||
use crate::message_handler::{HandlerMessage, MessageHandler};
|
||||
use crate::NetworkConfig;
|
||||
use beacon_chain::{BeaconChain, BeaconChainTypes};
|
||||
use crossbeam_channel::{unbounded as channel, Sender, TryRecvError};
|
||||
use eth2_libp2p::Service as LibP2PService;
|
||||
use eth2_libp2p::Topic;
|
||||
use eth2_libp2p::{Libp2pEvent, PeerId};
|
||||
use eth2_libp2p::{PubsubMessage, RPCEvent};
|
||||
use futures::prelude::*;
|
||||
use futures::sync::oneshot;
|
||||
use futures::Stream;
|
||||
use slog::{debug, info, o, trace};
|
||||
use std::marker::PhantomData;
|
||||
use std::sync::Arc;
|
||||
use tokio::runtime::TaskExecutor;
|
||||
use tokio::sync::{mpsc, oneshot};
|
||||
|
||||
/// Service that handles communication between internal services and the eth2_libp2p network service.
|
||||
pub struct Service<T: BeaconChainTypes> {
|
||||
//libp2p_service: Arc<Mutex<LibP2PService>>,
|
||||
_libp2p_exit: oneshot::Sender<()>,
|
||||
network_send: crossbeam_channel::Sender<NetworkMessage>,
|
||||
network_send: mpsc::UnboundedSender<NetworkMessage>,
|
||||
_phantom: PhantomData<T>, //message_handler: MessageHandler,
|
||||
//message_handler_send: Sender<HandlerMessage>
|
||||
}
|
||||
@ -30,9 +29,9 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
|
||||
config: &NetworkConfig,
|
||||
executor: &TaskExecutor,
|
||||
log: slog::Logger,
|
||||
) -> error::Result<(Arc<Self>, Sender<NetworkMessage>)> {
|
||||
) -> error::Result<(Arc<Self>, mpsc::UnboundedSender<NetworkMessage>)> {
|
||||
// build the network channel
|
||||
let (network_send, network_recv) = channel::<NetworkMessage>();
|
||||
let (network_send, network_recv) = mpsc::unbounded_channel::<NetworkMessage>();
|
||||
// launch message handler thread
|
||||
let message_handler_log = log.new(o!("Service" => "MessageHandler"));
|
||||
let message_handler_send = MessageHandler::spawn(
|
||||
@ -64,9 +63,9 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
|
||||
}
|
||||
|
||||
// TODO: Testing only
|
||||
pub fn send_message(&self) {
|
||||
pub fn send_message(&mut self) {
|
||||
self.network_send
|
||||
.send(NetworkMessage::Send(
|
||||
.try_send(NetworkMessage::Send(
|
||||
PeerId::random(),
|
||||
OutgoingMessage::NotifierTest,
|
||||
))
|
||||
@ -76,12 +75,12 @@ impl<T: BeaconChainTypes + 'static> Service<T> {
|
||||
|
||||
fn spawn_service(
|
||||
libp2p_service: LibP2PService,
|
||||
network_recv: crossbeam_channel::Receiver<NetworkMessage>,
|
||||
message_handler_send: crossbeam_channel::Sender<HandlerMessage>,
|
||||
network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
|
||||
message_handler_send: mpsc::UnboundedSender<HandlerMessage>,
|
||||
executor: &TaskExecutor,
|
||||
log: slog::Logger,
|
||||
) -> error::Result<oneshot::Sender<()>> {
|
||||
let (network_exit, exit_rx) = oneshot::channel();
|
||||
) -> error::Result<tokio::sync::oneshot::Sender<()>> {
|
||||
let (network_exit, exit_rx) = tokio::sync::oneshot::channel();
|
||||
|
||||
// spawn on the current executor
|
||||
executor.spawn(
|
||||
@ -105,31 +104,67 @@ fn spawn_service(
|
||||
//TODO: Potentially handle channel errors
|
||||
fn network_service(
|
||||
mut libp2p_service: LibP2PService,
|
||||
network_recv: crossbeam_channel::Receiver<NetworkMessage>,
|
||||
message_handler_send: crossbeam_channel::Sender<HandlerMessage>,
|
||||
mut network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
|
||||
mut message_handler_send: mpsc::UnboundedSender<HandlerMessage>,
|
||||
log: slog::Logger,
|
||||
) -> impl futures::Future<Item = (), Error = eth2_libp2p::error::Error> {
|
||||
futures::future::poll_fn(move || -> Result<_, eth2_libp2p::error::Error> {
|
||||
// poll the swarm
|
||||
loop {
|
||||
// only end the loop once both major polls are not ready.
|
||||
let mut not_ready_count = 0;
|
||||
while not_ready_count < 2 {
|
||||
not_ready_count = 0;
|
||||
// poll the network channel
|
||||
match network_recv.poll() {
|
||||
Ok(Async::Ready(Some(message))) => {
|
||||
match message {
|
||||
// TODO: Testing message - remove
|
||||
NetworkMessage::Send(peer_id, outgoing_message) => {
|
||||
match outgoing_message {
|
||||
OutgoingMessage::RPC(rpc_event) => {
|
||||
trace!(log, "Sending RPC Event: {:?}", rpc_event);
|
||||
//TODO: Make swarm private
|
||||
//TODO: Implement correct peer id topic message handling
|
||||
libp2p_service.swarm.send_rpc(peer_id, rpc_event);
|
||||
}
|
||||
OutgoingMessage::NotifierTest => {
|
||||
// debug!(log, "Received message from notifier");
|
||||
}
|
||||
};
|
||||
}
|
||||
NetworkMessage::Publish { topics, message } => {
|
||||
debug!(log, "Sending pubsub message"; "topics" => format!("{:?}",topics));
|
||||
libp2p_service.swarm.publish(topics, *message);
|
||||
}
|
||||
}
|
||||
}
|
||||
Ok(Async::NotReady) => not_ready_count += 1,
|
||||
Ok(Async::Ready(None)) => {
|
||||
return Err(eth2_libp2p::error::Error::from("Network channel closed"));
|
||||
}
|
||||
Err(_) => {
|
||||
return Err(eth2_libp2p::error::Error::from("Network channel error"));
|
||||
}
|
||||
}
|
||||
|
||||
// poll the swarm
|
||||
match libp2p_service.poll() {
|
||||
Ok(Async::Ready(Some(event))) => match event {
|
||||
Libp2pEvent::RPC(peer_id, rpc_event) => {
|
||||
trace!(log, "RPC Event: RPC message received: {:?}", rpc_event);
|
||||
message_handler_send
|
||||
.send(HandlerMessage::RPC(peer_id, rpc_event))
|
||||
.map_err(|_| "Failed to send rpc to handler")?;
|
||||
.try_send(HandlerMessage::RPC(peer_id, rpc_event))
|
||||
.map_err(|_| "Failed to send RPC to handler")?;
|
||||
}
|
||||
Libp2pEvent::PeerDialed(peer_id) => {
|
||||
debug!(log, "Peer Dialed: {:?}", peer_id);
|
||||
message_handler_send
|
||||
.send(HandlerMessage::PeerDialed(peer_id))
|
||||
.try_send(HandlerMessage::PeerDialed(peer_id))
|
||||
.map_err(|_| "Failed to send PeerDialed to handler")?;
|
||||
}
|
||||
Libp2pEvent::PeerDisconnected(peer_id) => {
|
||||
debug!(log, "Peer Disconnected: {:?}", peer_id);
|
||||
message_handler_send
|
||||
.send(HandlerMessage::PeerDisconnected(peer_id))
|
||||
.try_send(HandlerMessage::PeerDisconnected(peer_id))
|
||||
.map_err(|_| "Failed to send PeerDisconnected to handler")?;
|
||||
}
|
||||
Libp2pEvent::PubsubMessage {
|
||||
@ -138,43 +173,13 @@ fn network_service(
|
||||
//TODO: Decide if we need to propagate the topic upwards. (Potentially for
|
||||
//attestations)
|
||||
message_handler_send
|
||||
.send(HandlerMessage::PubsubMessage(source, message))
|
||||
.try_send(HandlerMessage::PubsubMessage(source, message))
|
||||
.map_err(|_| " failed to send pubsub message to handler")?;
|
||||
}
|
||||
},
|
||||
Ok(Async::Ready(None)) => unreachable!("Stream never ends"),
|
||||
Ok(Async::NotReady) => break,
|
||||
Err(_) => break,
|
||||
}
|
||||
}
|
||||
// poll the network channel
|
||||
// TODO: refactor - combine poll_fn's?
|
||||
loop {
|
||||
match network_recv.try_recv() {
|
||||
// TODO: Testing message - remove
|
||||
Ok(NetworkMessage::Send(peer_id, outgoing_message)) => {
|
||||
match outgoing_message {
|
||||
OutgoingMessage::RPC(rpc_event) => {
|
||||
trace!(log, "Sending RPC Event: {:?}", rpc_event);
|
||||
//TODO: Make swarm private
|
||||
//TODO: Implement correct peer id topic message handling
|
||||
libp2p_service.swarm.send_rpc(peer_id, rpc_event);
|
||||
}
|
||||
OutgoingMessage::NotifierTest => {
|
||||
// debug!(log, "Received message from notifier");
|
||||
}
|
||||
};
|
||||
}
|
||||
Ok(NetworkMessage::Publish { topics, message }) => {
|
||||
debug!(log, "Sending pubsub message on topics {:?}", topics);
|
||||
libp2p_service.swarm.publish(topics, *message);
|
||||
}
|
||||
Err(TryRecvError::Empty) => break,
|
||||
Err(TryRecvError::Disconnected) => {
|
||||
return Err(eth2_libp2p::error::Error::from(
|
||||
"Network channel disconnected",
|
||||
));
|
||||
}
|
||||
Ok(Async::NotReady) => not_ready_count += 1,
|
||||
Err(_) => not_ready_count += 1,
|
||||
}
|
||||
}
|
||||
Ok(Async::NotReady)
|
||||
|
@ -1,7 +1,8 @@
|
||||
use beacon_chain::{BeaconChain, BeaconChainTypes};
|
||||
use eth2_libp2p::rpc::methods::*;
|
||||
use eth2_libp2p::PeerId;
|
||||
use slog::{debug, error};
|
||||
use slog::error;
|
||||
use std::collections::HashMap;
|
||||
use std::sync::Arc;
|
||||
use std::time::{Duration, Instant};
|
||||
use tree_hash::TreeHash;
|
||||
@ -22,7 +23,7 @@ use types::{BeaconBlock, BeaconBlockBody, BeaconBlockHeader, Hash256, Slot};
|
||||
pub struct ImportQueue<T: BeaconChainTypes> {
|
||||
pub chain: Arc<BeaconChain<T>>,
|
||||
/// Partially imported blocks, keyed by the root of `BeaconBlockBody`.
|
||||
pub partials: Vec<PartialBeaconBlock>,
|
||||
partials: HashMap<Hash256, PartialBeaconBlock>,
|
||||
/// Time before a queue entry is considered state.
|
||||
pub stale_time: Duration,
|
||||
/// Logging
|
||||
@ -34,41 +35,33 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
|
||||
pub fn new(chain: Arc<BeaconChain<T>>, stale_time: Duration, log: slog::Logger) -> Self {
|
||||
Self {
|
||||
chain,
|
||||
partials: vec![],
|
||||
partials: HashMap::new(),
|
||||
stale_time,
|
||||
log,
|
||||
}
|
||||
}
|
||||
|
||||
/// Completes all possible partials into `BeaconBlock` and returns them, sorted by increasing
|
||||
/// slot number. Does not delete the partials from the queue, this must be done manually.
|
||||
///
|
||||
/// Returns `(queue_index, block, sender)`:
|
||||
///
|
||||
/// - `block_root`: may be used to remove the entry if it is successfully processed.
|
||||
/// - `block`: the completed block.
|
||||
/// - `sender`: the `PeerId` the provided the `BeaconBlockBody` which completed the partial.
|
||||
pub fn complete_blocks(&self) -> Vec<(Hash256, BeaconBlock, PeerId)> {
|
||||
let mut complete: Vec<(Hash256, BeaconBlock, PeerId)> = self
|
||||
.partials
|
||||
.iter()
|
||||
.filter_map(|partial| partial.clone().complete())
|
||||
.collect();
|
||||
/// Returns true of the if the `BlockRoot` is found in the `import_queue`.
|
||||
pub fn contains_block_root(&self, block_root: Hash256) -> bool {
|
||||
self.partials.contains_key(&block_root)
|
||||
}
|
||||
|
||||
// Sort the completable partials to be in ascending slot order.
|
||||
complete.sort_unstable_by(|a, b| a.1.slot.partial_cmp(&b.1.slot).unwrap());
|
||||
|
||||
complete
|
||||
/// Attempts to complete the `BlockRoot` if it is found in the `import_queue`.
|
||||
///
|
||||
/// Returns an Enum with a `PartialBeaconBlockCompletion`.
|
||||
/// Does not remove the `block_root` from the `import_queue`.
|
||||
pub fn attempt_complete_block(&self, block_root: Hash256) -> PartialBeaconBlockCompletion {
|
||||
if let Some(partial) = self.partials.get(&block_root) {
|
||||
partial.attempt_complete()
|
||||
} else {
|
||||
PartialBeaconBlockCompletion::MissingRoot
|
||||
}
|
||||
}
|
||||
|
||||
/// Removes the first `PartialBeaconBlock` with a matching `block_root`, returning the partial
|
||||
/// if it exists.
|
||||
pub fn remove(&mut self, block_root: Hash256) -> Option<PartialBeaconBlock> {
|
||||
let position = self
|
||||
.partials
|
||||
.iter()
|
||||
.position(|p| p.block_root == block_root)?;
|
||||
Some(self.partials.remove(position))
|
||||
self.partials.remove(&block_root)
|
||||
}
|
||||
|
||||
/// Flushes all stale entries from the queue.
|
||||
@ -76,31 +69,10 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
|
||||
/// An entry is stale if it has as a `inserted` time that is more than `self.stale_time` in the
|
||||
/// past.
|
||||
pub fn remove_stale(&mut self) {
|
||||
let stale_indices: Vec<usize> = self
|
||||
.partials
|
||||
.iter()
|
||||
.enumerate()
|
||||
.filter_map(|(i, partial)| {
|
||||
if partial.inserted + self.stale_time <= Instant::now() {
|
||||
Some(i)
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
.collect();
|
||||
let stale_time = self.stale_time;
|
||||
|
||||
if !stale_indices.is_empty() {
|
||||
debug!(
|
||||
self.log,
|
||||
"ImportQueue removing stale entries";
|
||||
"stale_items" => stale_indices.len(),
|
||||
"stale_time_seconds" => self.stale_time.as_secs()
|
||||
);
|
||||
}
|
||||
|
||||
stale_indices.iter().for_each(|&i| {
|
||||
self.partials.remove(i);
|
||||
});
|
||||
self.partials
|
||||
.retain(|_, partial| partial.inserted + stale_time > Instant::now())
|
||||
}
|
||||
|
||||
/// Returns `true` if `self.chain` has not yet processed this block.
|
||||
@ -122,27 +94,32 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
|
||||
block_roots: &[BlockRootSlot],
|
||||
sender: PeerId,
|
||||
) -> Vec<BlockRootSlot> {
|
||||
let new_roots: Vec<BlockRootSlot> = block_roots
|
||||
// TODO: This will currently not return a `BlockRootSlot` if this root exists but there is no header.
|
||||
// It would be more robust if it did.
|
||||
let new_block_root_slots: Vec<BlockRootSlot> = block_roots
|
||||
.iter()
|
||||
// Ignore any roots already stored in the queue.
|
||||
.filter(|brs| !self.contains_block_root(brs.block_root))
|
||||
// Ignore any roots already processed by the chain.
|
||||
.filter(|brs| self.chain_has_not_seen_block(&brs.block_root))
|
||||
// Ignore any roots already stored in the queue.
|
||||
.filter(|brs| !self.partials.iter().any(|p| p.block_root == brs.block_root))
|
||||
.cloned()
|
||||
.collect();
|
||||
|
||||
new_roots.iter().for_each(|brs| {
|
||||
self.partials.push(PartialBeaconBlock {
|
||||
slot: brs.slot,
|
||||
block_root: brs.block_root,
|
||||
sender: sender.clone(),
|
||||
header: None,
|
||||
body: None,
|
||||
inserted: Instant::now(),
|
||||
})
|
||||
});
|
||||
self.partials.extend(
|
||||
new_block_root_slots
|
||||
.iter()
|
||||
.map(|brs| PartialBeaconBlock {
|
||||
slot: brs.slot,
|
||||
block_root: brs.block_root,
|
||||
sender: sender.clone(),
|
||||
header: None,
|
||||
body: None,
|
||||
inserted: Instant::now(),
|
||||
})
|
||||
.map(|partial| (partial.block_root, partial)),
|
||||
);
|
||||
|
||||
new_roots
|
||||
new_block_root_slots
|
||||
}
|
||||
|
||||
/// Adds the `headers` to the `partials` queue. Returns a list of `Hash256` block roots for
|
||||
@ -152,12 +129,8 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
|
||||
/// the queue and it's block root is included in the output.
|
||||
///
|
||||
/// If a `header` is already in the queue, but not yet processed by the chain the block root is
|
||||
/// included in the output and the `inserted` time for the partial record is set to
|
||||
/// not included in the output and the `inserted` time for the partial record is set to
|
||||
/// `Instant::now()`. Updating the `inserted` time stops the partial from becoming stale.
|
||||
///
|
||||
/// Presently the queue enforces that a `BeaconBlockHeader` _must_ be received before its
|
||||
/// `BeaconBlockBody`. This is not a natural requirement and we could enhance the queue to lift
|
||||
/// this restraint.
|
||||
pub fn enqueue_headers(
|
||||
&mut self,
|
||||
headers: Vec<BeaconBlockHeader>,
|
||||
@ -169,8 +142,10 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
|
||||
let block_root = Hash256::from_slice(&header.canonical_root()[..]);
|
||||
|
||||
if self.chain_has_not_seen_block(&block_root) {
|
||||
self.insert_header(block_root, header, sender.clone());
|
||||
required_bodies.push(block_root)
|
||||
if !self.insert_header(block_root, header, sender.clone()) {
|
||||
// If a body is empty
|
||||
required_bodies.push(block_root);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -180,10 +155,17 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
|
||||
/// If there is a matching `header` for this `body`, adds it to the queue.
|
||||
///
|
||||
/// If there is no `header` for the `body`, the body is simply discarded.
|
||||
pub fn enqueue_bodies(&mut self, bodies: Vec<BeaconBlockBody>, sender: PeerId) {
|
||||
pub fn enqueue_bodies(
|
||||
&mut self,
|
||||
bodies: Vec<BeaconBlockBody>,
|
||||
sender: PeerId,
|
||||
) -> Option<Hash256> {
|
||||
let mut last_block_hash = None;
|
||||
for body in bodies {
|
||||
self.insert_body(body, sender.clone());
|
||||
last_block_hash = self.insert_body(body, sender.clone());
|
||||
}
|
||||
|
||||
last_block_hash
|
||||
}
|
||||
|
||||
pub fn enqueue_full_blocks(&mut self, blocks: Vec<BeaconBlock>, sender: PeerId) {
@ -196,54 +178,55 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
///
/// If the header already exists, the `inserted` time is set to `now` and no other
/// modifications are made.
fn insert_header(&mut self, block_root: Hash256, header: BeaconBlockHeader, sender: PeerId) {
if let Some(i) = self
.partials
.iter()
.position(|p| p.block_root == block_root)
{
// Case 1: there already exists a partial with a matching block root.
//
// The `inserted` time is set to now and the header is replaced, regardless of whether
// it existed or not.
self.partials[i].header = Some(header);
self.partials[i].inserted = Instant::now();
} else {
// Case 2: there was no partial with a matching block root.
//
// A new partial is added. This case permits adding a header without already knowing the
// root.
self.partials.push(PartialBeaconBlock {
/// Returns `true` if a `body` already exists for this header.
fn insert_header(
&mut self,
block_root: Hash256,
header: BeaconBlockHeader,
sender: PeerId,
) -> bool {
let mut exists = false;
self.partials
.entry(block_root)
.and_modify(|partial| {
partial.header = Some(header.clone());
partial.inserted = Instant::now();
if partial.body.is_some() {
exists = true;
}
})
.or_insert_with(|| PartialBeaconBlock {
slot: header.slot,
block_root,
header: Some(header),
body: None,
inserted: Instant::now(),
sender,
})
}
});
exists
}

/// Updates an existing partial with the `body`.
///
/// If there is no header for the `body`, the body is simply discarded.
///
/// If the body already existed, the `inserted` time is set to `now`.
fn insert_body(&mut self, body: BeaconBlockBody, sender: PeerId) {
///
/// Returns the block root of the partial that the `body` was inserted into, if any.
fn insert_body(&mut self, body: BeaconBlockBody, sender: PeerId) -> Option<Hash256> {
let body_root = Hash256::from_slice(&body.tree_hash_root()[..]);
let mut last_root = None;

self.partials.iter_mut().for_each(|mut p| {
self.partials.iter_mut().for_each(|(root, mut p)| {
if let Some(header) = &mut p.header {
if body_root == header.block_body_root {
p.inserted = Instant::now();

if p.body.is_none() {
p.body = Some(body.clone());
p.sender = sender.clone();
}
p.body = Some(body.clone());
p.sender = sender.clone();
last_root = Some(*root);
}
}
});

last_root
}

/// Updates an existing `partial` with the completed block, or adds a new (complete) partial.
@ -261,15 +244,10 @@ impl<T: BeaconChainTypes> ImportQueue<T> {
sender,
};

if let Some(i) = self
.partials
.iter()
.position(|p| p.block_root == block_root)
{
self.partials[i] = partial;
} else {
self.partials.push(partial)
}
self.partials
.entry(block_root)
.and_modify(|existing_partial| *existing_partial = partial.clone())
.or_insert(partial);
}
}

@ -290,13 +268,33 @@ pub struct PartialBeaconBlock {
}

impl PartialBeaconBlock {
/// Consumes `self` and returns a fully built `BeaconBlock`, its root and the `sender`
/// `PeerId`, if enough information exists to complete the block. Otherwise, returns `None`.
pub fn complete(self) -> Option<(Hash256, BeaconBlock, PeerId)> {
Some((
self.block_root,
self.header?.into_block(self.body?),
self.sender,
))
/// Attempts to build a block.
///
/// Does not consume the `PartialBeaconBlock`.
pub fn attempt_complete(&self) -> PartialBeaconBlockCompletion {
if self.header.is_none() {
PartialBeaconBlockCompletion::MissingHeader(self.slot)
} else if self.body.is_none() {
PartialBeaconBlockCompletion::MissingBody
} else {
PartialBeaconBlockCompletion::Complete(
self.header
.clone()
.unwrap()
.into_block(self.body.clone().unwrap()),
)
}
}
}

/// The result of trying to convert a `PartialBeaconBlock` into a `BeaconBlock`.
pub enum PartialBeaconBlockCompletion {
/// The partial contains a valid BeaconBlock.
Complete(BeaconBlock),
/// The partial does not exist.
MissingRoot,
/// The partial contains a `BeaconBlockRoot` but no `BeaconBlockHeader`.
MissingHeader(Slot),
/// The partial contains a `BeaconBlockRoot` and `BeaconBlockHeader` but no `BeaconBlockBody`.
MissingBody,
}
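
// A minimal sketch (not taken from this change) of how a caller might drive
// `attempt_complete`; the surrounding request logic is assumed and simplified.
fn example_handle_completion(partial: &PartialBeaconBlock) {
    match partial.attempt_complete() {
        PartialBeaconBlockCompletion::Complete(block) => {
            // All parts are present; the block can be handed to the beacon chain.
            let _ = block;
        }
        PartialBeaconBlockCompletion::MissingHeader(slot) => {
            // Only the root is known; a header request for `slot` would be sent here.
            let _ = slot;
        }
        PartialBeaconBlockCompletion::MissingBody => {
            // The header is known; a body request for the block root would be sent here.
        }
        PartialBeaconBlockCompletion::MissingRoot => {
            // The root is not in the queue; nothing to do.
        }
    }
}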
@ -1,4 +1,4 @@
|
||||
use super::import_queue::ImportQueue;
|
||||
use super::import_queue::{ImportQueue, PartialBeaconBlockCompletion};
|
||||
use crate::message_handler::NetworkContext;
|
||||
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
|
||||
use eth2_libp2p::rpc::methods::*;
|
||||
@ -18,7 +18,7 @@ use types::{
|
||||
const SLOT_IMPORT_TOLERANCE: u64 = 100;
|
||||
|
||||
/// The number of seconds a block (or partial block) may exist in the import queue.
|
||||
const QUEUE_STALE_SECS: u64 = 600;
|
||||
const QUEUE_STALE_SECS: u64 = 100;
|
||||
|
||||
/// If a block is more than `FUTURE_SLOT_TOLERANCE` slots ahead of our slot clock, we drop it.
|
||||
/// Otherwise we queue it.
|
||||
@ -75,7 +75,6 @@ pub struct SimpleSync<T: BeaconChainTypes> {
|
||||
import_queue: ImportQueue<T>,
|
||||
/// The current state of the syncing protocol.
|
||||
state: SyncState,
|
||||
/// Sync logger.
|
||||
log: slog::Logger,
|
||||
}
|
||||
|
||||
@ -174,96 +173,105 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
hello: HelloMessage,
|
||||
network: &mut NetworkContext,
|
||||
) {
|
||||
let spec = &self.chain.spec;
|
||||
|
||||
let remote = PeerSyncInfo::from(hello);
|
||||
let local = PeerSyncInfo::from(&self.chain);
|
||||
|
||||
// Disconnect nodes who are on a different network.
|
||||
let start_slot = |epoch: Epoch| epoch.start_slot(T::EthSpec::slots_per_epoch());
|
||||
|
||||
if local.network_id != remote.network_id {
|
||||
// The node is on a different network, disconnect them.
|
||||
info!(
|
||||
self.log, "HandshakeFailure";
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
"reason" => "network_id"
|
||||
);
|
||||
|
||||
network.disconnect(peer_id.clone(), GoodbyeReason::IrreleventNetwork);
|
||||
// Disconnect nodes if our finalized epoch is greater than theirs, and their finalized
|
||||
// epoch is not in our chain. Viz., they are on another chain.
|
||||
//
|
||||
// If the local or remote has a `latest_finalized_root == ZERO_HASH`, skip the checks about
|
||||
// the finalized_root. The logic is awkward and I think we're better without it.
|
||||
} else if (local.latest_finalized_epoch >= remote.latest_finalized_epoch)
|
||||
&& (!self
|
||||
.chain
|
||||
.rev_iter_block_roots(local.best_slot)
|
||||
.any(|(root, _slot)| root == remote.latest_finalized_root))
|
||||
&& (local.latest_finalized_root != spec.zero_hash)
|
||||
&& (remote.latest_finalized_root != spec.zero_hash)
|
||||
} else if remote.latest_finalized_epoch <= local.latest_finalized_epoch
|
||||
&& remote.latest_finalized_root != self.chain.spec.zero_hash
|
||||
&& local.latest_finalized_root != self.chain.spec.zero_hash
|
||||
&& (self.root_at_slot(start_slot(remote.latest_finalized_epoch))
|
||||
!= Some(remote.latest_finalized_root))
|
||||
{
|
||||
// The remote's finalized epoch is less than or equal to ours, but the block root is
|
||||
// different to the one in our chain.
|
||||
//
|
||||
// Therefore, the node is on a different chain and we should not communicate with them.
|
||||
info!(
|
||||
self.log, "HandshakeFailure";
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
"reason" => "wrong_finalized_chain"
|
||||
"reason" => "different finalized chain"
|
||||
);
|
||||
network.disconnect(peer_id.clone(), GoodbyeReason::IrreleventNetwork);
|
||||
// Process handshakes from peers that seem to be on our chain.
|
||||
} else {
|
||||
info!(self.log, "HandshakeSuccess"; "peer" => format!("{:?}", peer_id));
|
||||
self.known_peers.insert(peer_id.clone(), remote);
|
||||
|
||||
// If we have equal or better finalized epochs and best slots, we require nothing else from
|
||||
// this peer.
|
||||
} else if remote.latest_finalized_epoch < local.latest_finalized_epoch {
|
||||
// The node has a lower finalized epoch, their chain is not useful to us. There are two
|
||||
// cases where a node can have a lower finalized epoch:
|
||||
//
|
||||
// We make an exception when our best slot is 0. Best slot does not indicate whether or
|
||||
// not there is a block at slot zero.
|
||||
if (remote.latest_finalized_epoch <= local.latest_finalized_epoch)
|
||||
&& (remote.best_slot <= local.best_slot)
|
||||
&& (local.best_slot > 0)
|
||||
{
|
||||
debug!(self.log, "Peer is naive"; "peer" => format!("{:?}", peer_id));
|
||||
return;
|
||||
}
|
||||
// ## The node is on the same chain
|
||||
//
|
||||
// If a node is on the same chain but has a lower finalized epoch, their head must be
|
||||
// lower than ours. Therefore, we have nothing to request from them.
|
||||
//
|
||||
// ## The node is on a fork
|
||||
//
|
||||
// If a node is on a fork that has a lower finalized epoch, switching to that fork would
|
||||
// cause us to revert a finalized block. This is not permitted, therefore we have no
|
||||
// interest in their blocks.
|
||||
debug!(
|
||||
self.log,
|
||||
"NaivePeer";
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
"reason" => "lower finalized epoch"
|
||||
);
|
||||
} else if self
|
||||
.chain
|
||||
.store
|
||||
.exists::<BeaconBlock>(&remote.best_root)
|
||||
.unwrap_or_else(|_| false)
|
||||
{
|
||||
// If the node's best-block is already known to us, we have nothing to request.
|
||||
debug!(
|
||||
self.log,
|
||||
"NaivePeer";
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
"reason" => "best block is known"
|
||||
);
|
||||
} else {
|
||||
// The remote node has an equal or greater finalized epoch and we don't know its head.
|
||||
//
|
||||
// Therefore, there are some blocks between the local finalized epoch and the remote
|
||||
// head that are worth downloading.
|
||||
debug!(
|
||||
self.log, "UsefulPeer";
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
"local_finalized_epoch" => local.latest_finalized_epoch,
|
||||
"remote_latest_finalized_epoch" => remote.latest_finalized_epoch,
|
||||
);
|
||||
|
||||
// If the remote has a higher finalized epoch, request all block roots from our finalized
|
||||
// epoch through to its best slot.
|
||||
if remote.latest_finalized_epoch > local.latest_finalized_epoch {
|
||||
debug!(self.log, "Peer has high finalized epoch"; "peer" => format!("{:?}", peer_id));
|
||||
let start_slot = local
|
||||
.latest_finalized_epoch
|
||||
.start_slot(T::EthSpec::slots_per_epoch());
|
||||
let required_slots = remote.best_slot - start_slot;
|
||||
let start_slot = local
|
||||
.latest_finalized_epoch
|
||||
.start_slot(T::EthSpec::slots_per_epoch());
|
||||
let required_slots = remote.best_slot - start_slot;
|
||||
|
||||
self.request_block_roots(
|
||||
peer_id,
|
||||
BeaconBlockRootsRequest {
|
||||
start_slot,
|
||||
count: required_slots.into(),
|
||||
},
|
||||
network,
|
||||
);
|
||||
// If the remote has a greater best slot, request the roots between our best slot and their
|
||||
// best slot.
|
||||
} else if remote.best_slot > local.best_slot {
|
||||
debug!(self.log, "Peer has higher best slot"; "peer" => format!("{:?}", peer_id));
|
||||
let start_slot = local
|
||||
.latest_finalized_epoch
|
||||
.start_slot(T::EthSpec::slots_per_epoch());
|
||||
let required_slots = remote.best_slot - start_slot;
|
||||
|
||||
self.request_block_roots(
|
||||
peer_id,
|
||||
BeaconBlockRootsRequest {
|
||||
start_slot,
|
||||
count: required_slots.into(),
|
||||
},
|
||||
network,
|
||||
);
|
||||
} else {
|
||||
debug!(self.log, "Nothing to request from peer"; "peer" => format!("{:?}", peer_id));
|
||||
}
|
||||
self.request_block_roots(
|
||||
peer_id,
|
||||
BeaconBlockRootsRequest {
|
||||
start_slot,
|
||||
count: required_slots.as_u64(),
|
||||
},
|
||||
network,
|
||||
);
|
||||
}
|
||||
}
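
// A condensed sketch of the handshake decision above, over a simplified view of
// `PeerSyncInfo` (the field names and exact conditions here are assumptions; see the
// handshake handler above for the real logic). It mirrors the branch order: wrong
// network -> disconnect, different finalized chain -> disconnect, nothing useful ->
// ignore, otherwise -> sync.
#[derive(Clone, Copy)]
struct SyncInfoSketch {
    network_id: u8,
    finalized_epoch: u64,
}

enum PeerAction {
    Disconnect,
    Ignore,
    Sync,
}

fn classify_peer(
    local: SyncInfoSketch,
    remote: SyncInfoSketch,
    remote_finalized_root_on_our_chain: bool,
    remote_head_known: bool,
) -> PeerAction {
    if local.network_id != remote.network_id {
        // Different network id: nothing to share with this peer.
        PeerAction::Disconnect
    } else if remote.finalized_epoch <= local.finalized_epoch && !remote_finalized_root_on_our_chain {
        // Their finalized block is not on our chain: they follow a different chain.
        PeerAction::Disconnect
    } else if remote.finalized_epoch < local.finalized_epoch || remote_head_known {
        // Lower finalized epoch, or a head we already have: nothing to request.
        PeerAction::Ignore
    } else {
        // The peer may hold blocks between our finalized epoch and its head.
        PeerAction::Sync
    }
}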
|
||||
|
||||
fn root_at_slot(&self, target_slot: Slot) -> Option<Hash256> {
|
||||
self.chain
|
||||
.rev_iter_best_block_roots(target_slot)
|
||||
.take(1)
|
||||
.find(|(_root, slot)| *slot == target_slot)
|
||||
.map(|(root, _slot)| root)
|
||||
}
|
||||
|
||||
/// Handle a `BeaconBlockRoots` request from the peer.
|
||||
pub fn on_beacon_block_roots_request(
|
||||
&mut self,
|
||||
@ -282,18 +290,19 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
|
||||
let mut roots: Vec<BlockRootSlot> = self
|
||||
.chain
|
||||
.rev_iter_block_roots(req.start_slot + req.count)
|
||||
.skip(1)
|
||||
.rev_iter_best_block_roots(req.start_slot + req.count)
|
||||
.take(req.count as usize)
|
||||
.map(|(block_root, slot)| BlockRootSlot { slot, block_root })
|
||||
.collect();
|
||||
|
||||
if roots.len() as u64 != req.count {
|
||||
debug!(
|
||||
warn!(
|
||||
self.log,
|
||||
"BlockRootsRequest";
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
"msg" => "Failed to return all requested hashes",
|
||||
"start_slot" => req.start_slot,
|
||||
"current_slot" => self.chain.current_state().slot,
|
||||
"requested" => req.count,
|
||||
"returned" => roots.len(),
|
||||
);
|
||||
@ -395,7 +404,7 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
// unnecessary block deserialization when `req.skip_slots > 0`.
|
||||
let mut roots: Vec<Hash256> = self
|
||||
.chain
|
||||
.rev_iter_block_roots(req.start_slot + (count - 1))
|
||||
.rev_iter_best_block_roots(req.start_slot + count)
|
||||
.take(count as usize)
|
||||
.map(|(root, _slot)| root)
|
||||
.collect();
|
||||
@ -454,7 +463,9 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
.import_queue
|
||||
.enqueue_headers(res.headers, peer_id.clone());
|
||||
|
||||
self.request_block_bodies(peer_id, BeaconBlockBodiesRequest { block_roots }, network);
|
||||
if !block_roots.is_empty() {
|
||||
self.request_block_bodies(peer_id, BeaconBlockBodiesRequest { block_roots }, network);
|
||||
}
|
||||
}
|
||||
|
||||
/// Handle a `BeaconBlockBodies` request from the peer.
|
||||
@ -522,14 +533,26 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
"count" => res.block_bodies.len(),
|
||||
);
|
||||
|
||||
self.import_queue
|
||||
.enqueue_bodies(res.block_bodies, peer_id.clone());
|
||||
if !res.block_bodies.is_empty() {
|
||||
// Import all blocks to queue
|
||||
let last_root = self
|
||||
.import_queue
|
||||
.enqueue_bodies(res.block_bodies, peer_id.clone());
|
||||
|
||||
// Attempt to process all received bodies by recursively processing the latest block
|
||||
if let Some(root) = last_root {
|
||||
match self.attempt_process_partial_block(peer_id, root, network, &"rpc") {
|
||||
Some(BlockProcessingOutcome::Processed { block_root: _ }) => {
|
||||
// If processing is successful remove from `import_queue`
|
||||
self.import_queue.remove(root);
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Clear out old entries
|
||||
self.import_queue.remove_stale();
|
||||
|
||||
// Import blocks, if possible.
|
||||
self.process_import_queue(network);
|
||||
}
|
||||
|
||||
/// Process a gossip message declaring a new block.
|
||||
@ -548,15 +571,37 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
{
|
||||
match outcome {
|
||||
BlockProcessingOutcome::Processed { .. } => SHOULD_FORWARD_GOSSIP_BLOCK,
|
||||
BlockProcessingOutcome::ParentUnknown { .. } => {
|
||||
BlockProcessingOutcome::ParentUnknown { parent } => {
|
||||
// Add this block to the queue
|
||||
self.import_queue
|
||||
.enqueue_full_blocks(vec![block], peer_id.clone());
|
||||
trace!(
|
||||
self.log,
|
||||
"NewGossipBlock";
|
||||
.enqueue_full_blocks(vec![block.clone()], peer_id.clone());
|
||||
debug!(
|
||||
self.log, "RequestParentBlock";
|
||||
"parent_root" => format!("{}", parent),
|
||||
"parent_slot" => block.slot - 1,
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
);
|
||||
|
||||
// Request roots between parent and start of finality from peer.
|
||||
let start_slot = self
|
||||
.chain
|
||||
.head()
|
||||
.beacon_state
|
||||
.finalized_epoch
|
||||
.start_slot(T::EthSpec::slots_per_epoch());
|
||||
self.request_block_roots(
|
||||
peer_id,
|
||||
BeaconBlockRootsRequest {
|
||||
// Request blocks between `latest_finalized_slot` and the `block`
|
||||
start_slot,
|
||||
count: block.slot.as_u64() - start_slot.as_u64(),
|
||||
},
|
||||
network,
|
||||
);
|
||||
|
||||
// Clean the stale entries from the queue.
|
||||
self.import_queue.remove_stale();
|
||||
|
||||
SHOULD_FORWARD_GOSSIP_BLOCK
|
||||
}
|
||||
|
||||
@ -598,40 +643,6 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
}
|
||||
}
|
||||
|
||||
/// Iterate through the `import_queue` and process any complete blocks.
|
||||
///
|
||||
/// If a block is successfully processed it is removed from the queue, otherwise it remains in
|
||||
/// the queue.
|
||||
pub fn process_import_queue(&mut self, network: &mut NetworkContext) {
|
||||
let mut successful = 0;
|
||||
|
||||
// Loop through all of the complete blocks in the queue.
|
||||
for (block_root, block, sender) in self.import_queue.complete_blocks() {
|
||||
let processing_result = self.process_block(sender, block.clone(), network, &"gossip");
|
||||
|
||||
let should_dequeue = match processing_result {
|
||||
Some(BlockProcessingOutcome::ParentUnknown { .. }) => false,
|
||||
Some(BlockProcessingOutcome::FutureSlot {
|
||||
present_slot,
|
||||
block_slot,
|
||||
}) if present_slot + FUTURE_SLOT_TOLERANCE >= block_slot => false,
|
||||
_ => true,
|
||||
};
|
||||
|
||||
if processing_result == Some(BlockProcessingOutcome::Processed { block_root }) {
|
||||
successful += 1;
|
||||
}
|
||||
|
||||
if should_dequeue {
|
||||
self.import_queue.remove(block_root);
|
||||
}
|
||||
}
|
||||
|
||||
if successful > 0 {
|
||||
info!(self.log, "Imported {} blocks", successful)
|
||||
}
|
||||
}
|
||||
|
||||
/// Request some `BeaconBlockRoots` from the remote peer.
|
||||
fn request_block_roots(
|
||||
&mut self,
|
||||
@ -706,6 +717,89 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
hello_message(&self.chain)
|
||||
}
|
||||
|
||||
/// Helper function to attempt to process a partial block.
|
||||
///
|
||||
/// If the block can be completed recursively call `process_block`
|
||||
/// else request missing parts.
|
||||
fn attempt_process_partial_block(
|
||||
&mut self,
|
||||
peer_id: PeerId,
|
||||
block_root: Hash256,
|
||||
network: &mut NetworkContext,
|
||||
source: &str,
|
||||
) -> Option<BlockProcessingOutcome> {
|
||||
match self.import_queue.attempt_complete_block(block_root) {
|
||||
PartialBeaconBlockCompletion::MissingBody => {
|
||||
// Unable to complete the block because the block body is missing.
|
||||
debug!(
|
||||
self.log, "RequestParentBody";
|
||||
"source" => source,
|
||||
"block_root" => format!("{}", block_root),
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
);
|
||||
|
||||
// Request the block body from the peer.
|
||||
self.request_block_bodies(
|
||||
peer_id,
|
||||
BeaconBlockBodiesRequest {
|
||||
block_roots: vec![block_root],
|
||||
},
|
||||
network,
|
||||
);
|
||||
|
||||
None
|
||||
}
|
||||
PartialBeaconBlockCompletion::MissingHeader(slot) => {
|
||||
// Unable to complete the block because the block header is missing.
|
||||
debug!(
|
||||
self.log, "RequestParentHeader";
|
||||
"source" => source,
|
||||
"block_root" => format!("{}", block_root),
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
);
|
||||
|
||||
// Request the block header from the peer.
|
||||
self.request_block_headers(
|
||||
peer_id,
|
||||
BeaconBlockHeadersRequest {
|
||||
start_root: block_root,
|
||||
start_slot: slot,
|
||||
max_headers: 1,
|
||||
skip_slots: 0,
|
||||
},
|
||||
network,
|
||||
);
|
||||
|
||||
None
|
||||
}
|
||||
PartialBeaconBlockCompletion::MissingRoot => {
|
||||
// The `block_root` is not known to the queue.
|
||||
debug!(
|
||||
self.log, "MissingParentRoot";
|
||||
"source" => source,
|
||||
"block_root" => format!("{}", block_root),
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
);
|
||||
|
||||
// Do nothing.
|
||||
|
||||
None
|
||||
}
|
||||
PartialBeaconBlockCompletion::Complete(block) => {
|
||||
// The block exists in the queue, attempt to process it
|
||||
trace!(
|
||||
self.log, "AttemptProcessParent";
|
||||
"source" => source,
|
||||
"block_root" => format!("{}", block_root),
|
||||
"parent_slot" => block.slot,
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
);
|
||||
|
||||
self.process_block(peer_id.clone(), block, network, source)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Processes the `block` that was received from `peer_id`.
|
||||
///
|
||||
/// If the block was submitted to the beacon chain without internal error, `Some(outcome)` is
|
||||
@ -732,7 +826,8 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
if let Ok(outcome) = processing_result {
|
||||
match outcome {
|
||||
BlockProcessingOutcome::Processed { block_root } => {
|
||||
info!(
|
||||
// The block was valid and we processed it successfully.
|
||||
debug!(
|
||||
self.log, "Imported block from network";
|
||||
"source" => source,
|
||||
"slot" => block.slot,
|
||||
@ -741,36 +836,30 @@ impl<T: BeaconChainTypes> SimpleSync<T> {
|
||||
);
|
||||
}
|
||||
BlockProcessingOutcome::ParentUnknown { parent } => {
|
||||
// The block was valid and we processed it successfully.
|
||||
debug!(
|
||||
// The parent has not been processed
|
||||
trace!(
|
||||
self.log, "ParentBlockUnknown";
|
||||
"source" => source,
|
||||
"parent_root" => format!("{}", parent),
|
||||
"baby_block_slot" => block.slot,
|
||||
"peer" => format!("{:?}", peer_id),
|
||||
);
|
||||
|
||||
// Send a hello to learn of the client's best slot so we can then sync the required
|
||||
// parent(s).
|
||||
network.send_rpc_request(
|
||||
peer_id.clone(),
|
||||
RPCRequest::Hello(hello_message(&self.chain)),
|
||||
);
|
||||
// If the parent is in the `import_queue` attempt to complete it then process it.
|
||||
match self.attempt_process_partial_block(peer_id, parent, network, source) {
|
||||
// If processing the parent is successful, re-process the block and remove the parent from the queue
|
||||
Some(BlockProcessingOutcome::Processed { block_root: _ }) => {
|
||||
self.import_queue.remove(parent);
|
||||
|
||||
// Explicitly request the parent block from the peer.
|
||||
//
|
||||
// It is likely that this is duplicate work, given we already send a hello
|
||||
// request. However, I believe there are some edge-cases where the hello
|
||||
// message doesn't suffice, so we perform this request as well.
|
||||
self.request_block_headers(
|
||||
peer_id,
|
||||
BeaconBlockHeadersRequest {
|
||||
start_root: parent,
|
||||
start_slot: block.slot - 1,
|
||||
max_headers: 1,
|
||||
skip_slots: 0,
|
||||
},
|
||||
network,
|
||||
)
|
||||
// Attempt to process `block` again
|
||||
match self.chain.process_block(block) {
|
||||
Ok(outcome) => return Some(outcome),
|
||||
Err(_) => return None,
|
||||
}
|
||||
}
|
||||
// All other cases leave `parent` in `import_queue` and return original outcome.
|
||||
_ => {}
|
||||
}
|
||||
}
|
||||
BlockProcessingOutcome::FutureSlot {
|
||||
present_slot,
|
||||
|
@ -11,7 +11,7 @@ network = { path = "../network" }
|
||||
eth2-libp2p = { path = "../eth2-libp2p" }
|
||||
version = { path = "../version" }
|
||||
types = { path = "../../eth2/types" }
|
||||
ssz = { path = "../../eth2/utils/ssz" }
|
||||
eth2_ssz = { path = "../../eth2/utils/ssz" }
|
||||
slot_clock = { path = "../../eth2/utils/slot_clock" }
|
||||
protos = { path = "../../protos" }
|
||||
grpcio = { version = "0.4", default-features = false, features = ["protobuf-codec"] }
|
||||
@ -22,9 +22,8 @@ dirs = "1.0.3"
|
||||
futures = "0.1.23"
|
||||
serde = "1.0"
|
||||
serde_derive = "1.0"
|
||||
slog = "^2.2.3"
|
||||
slog = { version = "^2.2.3" , features = ["max_level_trace"] }
|
||||
slog-term = "^2.4.0"
|
||||
slog-async = "^2.3.0"
|
||||
tokio = "0.1.17"
|
||||
exit-future = "0.1.4"
|
||||
crossbeam-channel = "0.3.8"
|
||||
|
@ -1,7 +1,7 @@
|
||||
use beacon_chain::{BeaconChain, BeaconChainTypes};
|
||||
use eth2_libp2p::PubsubMessage;
|
||||
use eth2_libp2p::TopicBuilder;
|
||||
use eth2_libp2p::SHARD_TOPIC_PREFIX;
|
||||
use eth2_libp2p::BEACON_ATTESTATION_TOPIC;
|
||||
use futures::Future;
|
||||
use grpcio::{RpcContext, RpcStatus, RpcStatusCode, UnarySink};
|
||||
use network::NetworkMessage;
|
||||
@ -13,12 +13,13 @@ use protos::services_grpc::AttestationService;
|
||||
use slog::{error, info, trace, warn};
|
||||
use ssz::{ssz_encode, Decode};
|
||||
use std::sync::Arc;
|
||||
use tokio::sync::mpsc;
|
||||
use types::Attestation;
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct AttestationServiceInstance<T: BeaconChainTypes> {
|
||||
pub chain: Arc<BeaconChain<T>>,
|
||||
pub network_chan: crossbeam_channel::Sender<NetworkMessage>,
|
||||
pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
|
||||
pub log: slog::Logger,
|
||||
}
|
||||
|
||||
@ -139,11 +140,11 @@ impl<T: BeaconChainTypes> AttestationService for AttestationServiceInstance<T> {
|
||||
);
|
||||
|
||||
// valid attestation, propagate to the network
|
||||
let topic = TopicBuilder::new(SHARD_TOPIC_PREFIX).build();
|
||||
let topic = TopicBuilder::new(BEACON_ATTESTATION_TOPIC).build();
|
||||
let message = PubsubMessage::Attestation(attestation);
|
||||
|
||||
self.network_chan
|
||||
.send(NetworkMessage::Publish {
|
||||
.try_send(NetworkMessage::Publish {
|
||||
topics: vec![topic],
|
||||
message: Box::new(message),
|
||||
})
|
||||
|
@ -1,5 +1,4 @@
|
||||
use beacon_chain::{BeaconChain, BeaconChainTypes, BlockProcessingOutcome};
|
||||
use crossbeam_channel;
|
||||
use eth2_libp2p::BEACON_PUBSUB_TOPIC;
|
||||
use eth2_libp2p::{PubsubMessage, TopicBuilder};
|
||||
use futures::Future;
|
||||
@ -14,12 +13,13 @@ use slog::Logger;
|
||||
use slog::{error, info, trace, warn};
|
||||
use ssz::{ssz_encode, Decode};
|
||||
use std::sync::Arc;
|
||||
use tokio::sync::mpsc;
|
||||
use types::{BeaconBlock, Signature, Slot};
|
||||
|
||||
#[derive(Clone)]
|
||||
pub struct BeaconBlockServiceInstance<T: BeaconChainTypes> {
|
||||
pub chain: Arc<BeaconChain<T>>,
|
||||
pub network_chan: crossbeam_channel::Sender<NetworkMessage>,
|
||||
pub network_chan: mpsc::UnboundedSender<NetworkMessage>,
|
||||
pub log: Logger,
|
||||
}
|
||||
|
||||
@ -111,7 +111,7 @@ impl<T: BeaconChainTypes> BeaconBlockService for BeaconBlockServiceInstance<T> {
|
||||
|
||||
// Publish the block to the p2p network via gossipsub.
|
||||
self.network_chan
|
||||
.send(NetworkMessage::Publish {
|
||||
.try_send(NetworkMessage::Publish {
|
||||
topics: vec![topic],
|
||||
message: Box::new(message),
|
||||
})
|
||||
|
@ -20,11 +20,12 @@ use protos::services_grpc::{
|
||||
use slog::{info, o, warn};
|
||||
use std::sync::Arc;
|
||||
use tokio::runtime::TaskExecutor;
|
||||
use tokio::sync::mpsc;
|
||||
|
||||
pub fn start_server<T: BeaconChainTypes + Clone + 'static>(
|
||||
config: &RPCConfig,
|
||||
executor: &TaskExecutor,
|
||||
network_chan: crossbeam_channel::Sender<NetworkMessage>,
|
||||
network_chan: mpsc::UnboundedSender<NetworkMessage>,
|
||||
beacon_chain: Arc<BeaconChain<T>>,
|
||||
log: &slog::Logger,
|
||||
) -> exit_future::Signal {
|
||||
|
@ -29,6 +29,13 @@ fn main() {
|
||||
.help("Data directory for keys and databases.")
|
||||
.takes_value(true)
|
||||
)
|
||||
.arg(
|
||||
Arg::with_name("logfile")
|
||||
.long("logfile")
|
||||
.value_name("logfile")
|
||||
.help("File path where output will be written.")
|
||||
.takes_value(true),
|
||||
)
|
||||
// network related arguments
|
||||
.arg(
|
||||
Arg::with_name("listen-address")
|
||||
@ -156,10 +163,10 @@ fn main() {
|
||||
0 => drain.filter_level(Level::Info),
|
||||
1 => drain.filter_level(Level::Debug),
|
||||
2 => drain.filter_level(Level::Trace),
|
||||
_ => drain.filter_level(Level::Info),
|
||||
_ => drain.filter_level(Level::Trace),
|
||||
};
|
||||
|
||||
let log = slog::Logger::root(drain.fuse(), o!());
|
||||
let mut log = slog::Logger::root(drain.fuse(), o!());
|
||||
|
||||
let data_dir = match matches
|
||||
.value_of("datadir")
|
||||
@ -214,7 +221,7 @@ fn main() {
|
||||
client_config.data_dir = data_dir.clone();
|
||||
|
||||
// Update the client config with any CLI args.
|
||||
match client_config.apply_cli_args(&matches) {
|
||||
match client_config.apply_cli_args(&matches, &mut log) {
|
||||
Ok(()) => (),
|
||||
Err(s) => {
|
||||
crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => s);
|
||||
|
@ -14,7 +14,7 @@ bytes = "0.4.10"
|
||||
db-key = "0.0.5"
|
||||
leveldb = "0.8.4"
|
||||
parking_lot = "0.7"
|
||||
ssz = { path = "../../eth2/utils/ssz" }
|
||||
ssz_derive = { path = "../../eth2/utils/ssz_derive" }
|
||||
eth2_ssz = { path = "../../eth2/utils/ssz" }
|
||||
eth2_ssz_derive = { path = "../../eth2/utils/ssz_derive" }
|
||||
tree_hash = { path = "../../eth2/utils/tree_hash" }
|
||||
types = { path = "../../eth2/types" }
|
||||
|
@ -15,15 +15,15 @@ impl<'a, T: EthSpec, U: Store> StateRootsIterator<'a, T, U> {
|
||||
Self {
|
||||
store,
|
||||
beacon_state: Cow::Borrowed(beacon_state),
|
||||
slot: start_slot,
|
||||
slot: start_slot + 1,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn owned(store: Arc<U>, beacon_state: BeaconState<T>, start_slot: Slot) -> Self {
|
||||
Self {
|
||||
slot: start_slot,
|
||||
beacon_state: Cow::Owned(beacon_state),
|
||||
store,
|
||||
beacon_state: Cow::Owned(beacon_state),
|
||||
slot: start_slot + 1,
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -90,13 +90,19 @@ impl<'a, T: EthSpec, U: Store> Iterator for BlockIterator<'a, T, U> {
|
||||
}
|
||||
}
|
||||
|
||||
/// Iterates backwards through block roots.
|
||||
/// Iterates backwards through block roots. If any specified slot is unable to be retrieved, the
|
||||
/// iterator returns `None` indefinitely.
|
||||
///
|
||||
/// Uses the `latest_block_roots` field of `BeaconState` as the source of block roots and will
|
||||
/// perform a lookup on the `Store` for a prior `BeaconState` if `latest_block_roots` has been
|
||||
/// exhausted.
|
||||
///
|
||||
/// Returns `None` for roots prior to genesis or when there is an error reading from `Store`.
|
||||
///
|
||||
/// ## Notes
|
||||
///
|
||||
/// See [`BestBlockRootsIterator`](struct.BestBlockRootsIterator.html), which has different
|
||||
/// `start_slot` logic.
|
||||
#[derive(Clone)]
|
||||
pub struct BlockRootsIterator<'a, T: EthSpec, U> {
|
||||
store: Arc<U>,
|
||||
@ -108,18 +114,18 @@ impl<'a, T: EthSpec, U: Store> BlockRootsIterator<'a, T, U> {
|
||||
/// Create a new iterator over all block roots in the given `beacon_state` and prior states.
|
||||
pub fn new(store: Arc<U>, beacon_state: &'a BeaconState<T>, start_slot: Slot) -> Self {
|
||||
Self {
|
||||
slot: start_slot,
|
||||
beacon_state: Cow::Borrowed(beacon_state),
|
||||
store,
|
||||
beacon_state: Cow::Borrowed(beacon_state),
|
||||
slot: start_slot + 1,
|
||||
}
|
||||
}
|
||||
|
||||
/// Create a new iterator over all block roots in the given `beacon_state` and prior states.
|
||||
pub fn owned(store: Arc<U>, beacon_state: BeaconState<T>, start_slot: Slot) -> Self {
|
||||
Self {
|
||||
slot: start_slot,
|
||||
beacon_state: Cow::Owned(beacon_state),
|
||||
store,
|
||||
beacon_state: Cow::Owned(beacon_state),
|
||||
slot: start_slot + 1,
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -139,8 +145,105 @@ impl<'a, T: EthSpec, U: Store> Iterator for BlockRootsIterator<'a, T, U> {
|
||||
Err(BeaconStateError::SlotOutOfBounds) => {
|
||||
// Read a `BeaconState` from the store that has access to prior historical root.
|
||||
let beacon_state: BeaconState<T> = {
|
||||
// Load the earlier state from disk. Skip forward one slot, because a state
|
||||
// doesn't return its own state root.
|
||||
// Load the earliest state from disk.
|
||||
let new_state_root = self.beacon_state.get_oldest_state_root().ok()?;
|
||||
|
||||
self.store.get(&new_state_root).ok()?
|
||||
}?;
|
||||
|
||||
self.beacon_state = Cow::Owned(beacon_state);
|
||||
|
||||
let root = self.beacon_state.get_block_root(self.slot).ok()?;
|
||||
|
||||
Some((*root, self.slot))
|
||||
}
|
||||
_ => None,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Iterates backwards through block roots, clamping `start_slot` to the highest possible value
/// `<= beacon_state.slot`.
///
/// The distinction between `BestBlockRootsIterator` and `BlockRootsIterator` is:
///
/// - `BestBlockRootsIterator` uses a best-effort slot. When `start_slot` is greater than the
/// latest available block root on `beacon_state`, it returns `Some(root, slot)` where `slot` is
/// the slot of the latest available block root.
/// - `BlockRootsIterator` is strict about `start_slot`. When `start_slot` is greater than the
/// latest available block root on `beacon_state`, it returns `None`.
///
/// Uses the `latest_block_roots` field of `BeaconState` as the source of block roots and will
/// perform a lookup on the `Store` for a prior `BeaconState` if `latest_block_roots` has been
/// exhausted.
///
/// Returns `None` for roots prior to genesis or when there is an error reading from `Store`.
|
||||
#[derive(Clone)]
|
||||
pub struct BestBlockRootsIterator<'a, T: EthSpec, U> {
|
||||
store: Arc<U>,
|
||||
beacon_state: Cow<'a, BeaconState<T>>,
|
||||
slot: Slot,
|
||||
}
|
||||
|
||||
impl<'a, T: EthSpec, U: Store> BestBlockRootsIterator<'a, T, U> {
|
||||
/// Create a new iterator over all block roots in the given `beacon_state` and prior states.
|
||||
pub fn new(store: Arc<U>, beacon_state: &'a BeaconState<T>, start_slot: Slot) -> Self {
|
||||
let mut slot = start_slot;
|
||||
if slot >= beacon_state.slot {
|
||||
// Slot may be too high.
|
||||
slot = beacon_state.slot;
|
||||
if beacon_state.get_block_root(slot).is_err() {
|
||||
slot -= 1;
|
||||
}
|
||||
}
|
||||
|
||||
Self {
|
||||
store,
|
||||
beacon_state: Cow::Borrowed(beacon_state),
|
||||
slot: slot + 1,
|
||||
}
|
||||
}
|
||||
|
||||
/// Create a new iterator over all block roots in the given `beacon_state` and prior states.
|
||||
pub fn owned(store: Arc<U>, beacon_state: BeaconState<T>, start_slot: Slot) -> Self {
|
||||
let mut slot = start_slot;
|
||||
if slot >= beacon_state.slot {
|
||||
// Slot may be too high.
|
||||
slot = beacon_state.slot;
|
||||
// TODO: Use a function other than `get_block_root` as this will always return `Err()`
|
||||
// for slot = state.slot.
|
||||
if beacon_state.get_block_root(slot).is_err() {
|
||||
slot -= 1;
|
||||
}
|
||||
}
|
||||
|
||||
Self {
|
||||
store,
|
||||
beacon_state: Cow::Owned(beacon_state),
|
||||
slot: slot + 1,
|
||||
}
|
||||
}
|
||||
}
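
// A small sketch of the `start_slot` clamping described above, with `Slot`
// simplified to `u64`. `BlockRootsIterator` trusts the caller's `start_slot`,
// while `BestBlockRootsIterator` falls back to the newest slot that actually
// has a block root available.
fn best_effort_start_slot(state_slot: u64, requested: u64, root_at_state_slot: bool) -> u64 {
    if requested >= state_slot {
        // The requested slot may be too high; clamp to the state's slot, stepping
        // back one slot if no root is stored for the state's own slot.
        if root_at_state_slot {
            state_slot
        } else {
            state_slot - 1
        }
    } else {
        requested
    }
}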
|
||||
|
||||
impl<'a, T: EthSpec, U: Store> Iterator for BestBlockRootsIterator<'a, T, U> {
|
||||
type Item = (Hash256, Slot);
|
||||
|
||||
fn next(&mut self) -> Option<Self::Item> {
|
||||
if self.slot == 0 {
|
||||
// End of Iterator
|
||||
return None;
|
||||
}
|
||||
|
||||
self.slot -= 1;
|
||||
|
||||
match self.beacon_state.get_block_root(self.slot) {
|
||||
Ok(root) => Some((*root, self.slot)),
|
||||
Err(BeaconStateError::SlotOutOfBounds) => {
|
||||
// Read a `BeaconState` from the store that has access to prior historical root.
|
||||
let beacon_state: BeaconState<T> = {
|
||||
// Load the earliest state from disk.
|
||||
let new_state_root = self.beacon_state.get_oldest_state_root().ok()?;
|
||||
|
||||
self.store.get(&new_state_root).ok()?
|
||||
@ -207,7 +310,50 @@ mod test {
|
||||
let mut collected: Vec<(Hash256, Slot)> = iter.collect();
|
||||
collected.reverse();
|
||||
|
||||
let expected_len = 2 * MainnetEthSpec::slots_per_historical_root() - 1;
|
||||
let expected_len = 2 * MainnetEthSpec::slots_per_historical_root();
|
||||
|
||||
assert_eq!(collected.len(), expected_len);
|
||||
|
||||
for i in 0..expected_len {
|
||||
assert_eq!(collected[i].0, Hash256::from(i as u64));
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn best_block_root_iter() {
|
||||
let store = Arc::new(MemoryStore::open());
|
||||
let slots_per_historical_root = MainnetEthSpec::slots_per_historical_root();
|
||||
|
||||
let mut state_a: BeaconState<MainnetEthSpec> = get_state();
|
||||
let mut state_b: BeaconState<MainnetEthSpec> = get_state();
|
||||
|
||||
state_a.slot = Slot::from(slots_per_historical_root);
|
||||
state_b.slot = Slot::from(slots_per_historical_root * 2);
|
||||
|
||||
let mut hashes = (0..).into_iter().map(|i| Hash256::from(i));
|
||||
|
||||
for root in &mut state_a.latest_block_roots[..] {
|
||||
*root = hashes.next().unwrap()
|
||||
}
|
||||
for root in &mut state_b.latest_block_roots[..] {
|
||||
*root = hashes.next().unwrap()
|
||||
}
|
||||
|
||||
let state_a_root = hashes.next().unwrap();
|
||||
state_b.latest_state_roots[0] = state_a_root;
|
||||
store.put(&state_a_root, &state_a).unwrap();
|
||||
|
||||
let iter = BestBlockRootsIterator::new(store.clone(), &state_b, state_b.slot);
|
||||
|
||||
assert!(
|
||||
iter.clone().find(|(_root, slot)| *slot == 0).is_some(),
|
||||
"iter should contain zero slot"
|
||||
);
|
||||
|
||||
let mut collected: Vec<(Hash256, Slot)> = iter.collect();
|
||||
collected.reverse();
|
||||
|
||||
let expected_len = 2 * MainnetEthSpec::slots_per_historical_root();
|
||||
|
||||
assert_eq!(collected.len(), expected_len);
|
||||
|
||||
@ -256,7 +402,7 @@ mod test {
|
||||
let mut collected: Vec<(Hash256, Slot)> = iter.collect();
|
||||
collected.reverse();
|
||||
|
||||
let expected_len = MainnetEthSpec::slots_per_historical_root() * 2 - 1;
|
||||
let expected_len = MainnetEthSpec::slots_per_historical_root() * 2;
|
||||
|
||||
assert_eq!(collected.len(), expected_len, "collection length incorrect");
|
||||
|
||||
|
40
docs/installation.md
Normal file
@ -0,0 +1,40 @@
# Development Environment Setup

A few basic steps are needed to get set up (skip to #5 if you already have Rust
installed):

1. Install [rustup](https://rustup.rs/). It's a toolchain manager for Rust (Linux | macOS | Windows). For installation, download the script with `$ curl -f https://sh.rustup.rs > rustup.sh`, review its content (e.g. `$ less ./rustup.sh`) and run the script `$ ./rustup.sh` (you may need to change the permissions to allow execution, i.e. `$ chmod +x rustup.sh`).
2. (Linux & MacOS) To configure your current shell, run: `$ source $HOME/.cargo/env`
3. Use the command `rustup show` to get information about the Rust installation. You should see that the
active toolchain is the stable version.
4. Run `rustc --version` to check the installation and version of Rust.
   - Updates can be performed using `rustup update`.
5. Install build dependencies (Arch packages are listed here, your distribution will likely be similar):
   - `clang`: required by RocksDB.
   - `protobuf`: required for protobuf serialization (gRPC).
   - `cmake`: required for building protobuf.
   - `git-lfs`: the Git extension for [Large File Support](https://git-lfs.github.com/) (required for the EF tests submodule).
6. If you haven't already, clone the repository with submodules: `git clone --recursive https://github.com/sigp/lighthouse`.
Alternatively, run `git submodule init` in a repository which was cloned without submodules.
7. Change directory to the root of the repository.
8. Run the test suite with `cargo test --all --release`. This builds everything and runs all the required test cases.
If you are doing it for the first time, you can grab a coffee in the meantime; building, compiling and running all the
tests usually takes a while. If there are no errors, everything is working properly and it's time to get your hands dirty.
If there is an error, please raise an [issue](https://github.com/sigp/lighthouse/issues) and we will help you.
9. As an alternative to, or in addition to, the step above, you may also run the benchmarks with
the command `cargo bench --all`.

## Notes:

Lighthouse targets Rust `stable` but _should_ run on `nightly`.

### Note for Windows users:

Perl may also be required to build Lighthouse. You can install [Strawberry Perl](http://strawberryperl.com/),
or alternatively use a choco install command `choco install strawberryperl`.

Additionally, the dependency `protoc-grpcio v0.3.1` is reported to have issues compiling in Windows. You can specify
a known working version by editing the version in protos/Cargo.toml's "build-dependencies" section to
`protoc-grpcio = "<=0.3.0"`.
|
@ -7,7 +7,7 @@ edition = "2018"
|
||||
[dependencies]
|
||||
parking_lot = "0.7"
|
||||
store = { path = "../../beacon_node/store" }
|
||||
ssz = { path = "../utils/ssz" }
|
||||
eth2_ssz = { path = "../utils/ssz" }
|
||||
state_processing = { path = "../state_processing" }
|
||||
types = { path = "../types" }
|
||||
log = "0.4.6"
|
||||
|
@ -8,7 +8,7 @@ use parking_lot::RwLock;
|
||||
use std::collections::HashMap;
|
||||
use std::marker::PhantomData;
|
||||
use std::sync::Arc;
|
||||
use store::{iter::BlockRootsIterator, Error as StoreError, Store};
|
||||
use store::{iter::BestBlockRootsIterator, Error as StoreError, Store};
|
||||
use types::{BeaconBlock, BeaconState, EthSpec, Hash256, Slot};
|
||||
|
||||
type Result<T> = std::result::Result<T, Error>;
|
||||
@ -530,14 +530,14 @@ where
|
||||
Ok(a_root)
|
||||
}
|
||||
|
||||
fn iter_ancestors(&self, child: Hash256) -> Result<BlockRootsIterator<E, T>> {
|
||||
fn iter_ancestors(&self, child: Hash256) -> Result<BestBlockRootsIterator<E, T>> {
|
||||
let block = self.get_block(child)?;
|
||||
let state = self.get_state(block.state_root)?;
|
||||
|
||||
Ok(BlockRootsIterator::owned(
|
||||
Ok(BestBlockRootsIterator::owned(
|
||||
self.store.clone(),
|
||||
state,
|
||||
block.slot,
|
||||
block.slot - 1,
|
||||
))
|
||||
}
|
||||
|
||||
|
@ -5,9 +5,11 @@ authors = ["Michael Sproul <michael@sigmaprime.io>"]
|
||||
edition = "2018"
|
||||
|
||||
[dependencies]
|
||||
boolean-bitfield = { path = "../utils/boolean-bitfield" }
|
||||
int_to_bytes = { path = "../utils/int_to_bytes" }
|
||||
itertools = "0.8"
|
||||
parking_lot = "0.7"
|
||||
types = { path = "../types" }
|
||||
state_processing = { path = "../state_processing" }
|
||||
ssz = { path = "../utils/ssz" }
|
||||
eth2_ssz = { path = "../utils/ssz" }
|
||||
eth2_ssz_derive = { path = "../utils/ssz_derive" }
|
||||
|
91
eth2/operation_pool/src/attestation.rs
Normal file
@ -0,0 +1,91 @@
|
||||
use crate::max_cover::MaxCover;
|
||||
use boolean_bitfield::BooleanBitfield;
|
||||
use types::{Attestation, BeaconState, EthSpec};
|
||||
|
||||
pub struct AttMaxCover<'a> {
|
||||
/// Underlying attestation.
|
||||
att: &'a Attestation,
|
||||
/// Bitfield of validators that are covered by this attestation.
|
||||
fresh_validators: BooleanBitfield,
|
||||
}
|
||||
|
||||
impl<'a> AttMaxCover<'a> {
|
||||
pub fn new(att: &'a Attestation, fresh_validators: BooleanBitfield) -> Self {
|
||||
Self {
|
||||
att,
|
||||
fresh_validators,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> MaxCover for AttMaxCover<'a> {
|
||||
type Object = Attestation;
|
||||
type Set = BooleanBitfield;
|
||||
|
||||
fn object(&self) -> Attestation {
|
||||
self.att.clone()
|
||||
}
|
||||
|
||||
fn covering_set(&self) -> &BooleanBitfield {
|
||||
&self.fresh_validators
|
||||
}
|
||||
|
||||
/// Sneaky: we keep all the attestations together in one bucket, even though
|
||||
/// their aggregation bitfields refer to different committees. In order to avoid
|
||||
/// confusing committees when updating covering sets, we update only those attestations
|
||||
/// whose shard and epoch match the attestation being included in the solution, by the logic
|
||||
/// that a shard and epoch uniquely identify a committee.
|
||||
fn update_covering_set(
|
||||
&mut self,
|
||||
best_att: &Attestation,
|
||||
covered_validators: &BooleanBitfield,
|
||||
) {
|
||||
if self.att.data.shard == best_att.data.shard
|
||||
&& self.att.data.target_epoch == best_att.data.target_epoch
|
||||
{
|
||||
self.fresh_validators.difference_inplace(covered_validators);
|
||||
}
|
||||
}
|
||||
|
||||
fn score(&self) -> usize {
|
||||
self.fresh_validators.num_set_bits()
|
||||
}
|
||||
}
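
// An illustrative greedy maximum-cover loop, consistent with how the `MaxCover`
// methods above are intended to be used. The real `maximum_cover` lives in
// `max_cover.rs` (not shown in this diff); the trait's generic method signatures
// are assumed here from the impl above.
fn greedy_max_cover<T: MaxCover>(mut items: Vec<T>, limit: usize) -> Vec<T::Object> {
    let mut result = Vec::new();
    while result.len() < limit {
        // Pick the candidate that still covers the most uncovered elements.
        let best_idx = match items
            .iter()
            .enumerate()
            .filter(|(_, x)| x.score() > 0)
            .max_by_key(|(_, x)| x.score())
            .map(|(i, _)| i)
        {
            Some(i) => i,
            None => break,
        };

        let best = items.swap_remove(best_idx);
        let best_object = best.object();

        // Mark the newly covered elements in every remaining candidate.
        for item in items.iter_mut() {
            item.update_covering_set(&best_object, best.covering_set());
        }

        result.push(best_object);
    }
    result
}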
|
||||
|
||||
/// Extract the validators for which `attestation` would be their earliest in the epoch.
|
||||
///
|
||||
/// The reward paid to a proposer for including an attestation is proportional to the number
|
||||
/// of validators for which the included attestation is their first in the epoch. The attestation
|
||||
/// is judged against the state's `current_epoch_attestations` or `previous_epoch_attestations`
|
||||
/// depending on when it was created, and all those validators who have already attested are
|
||||
/// removed from the `aggregation_bitfield` before returning it.
|
||||
// TODO: This could be optimised with a map from validator index to whether that validator has
|
||||
// attested in each of the current and previous epochs. Currently quadratic in number of validators.
|
||||
pub fn earliest_attestation_validators<T: EthSpec>(
|
||||
attestation: &Attestation,
|
||||
state: &BeaconState<T>,
|
||||
) -> BooleanBitfield {
|
||||
// Bitfield of validators whose attestations are new/fresh.
|
||||
let mut new_validators = attestation.aggregation_bitfield.clone();
|
||||
|
||||
let state_attestations = if attestation.data.target_epoch == state.current_epoch() {
|
||||
&state.current_epoch_attestations
|
||||
} else if attestation.data.target_epoch == state.previous_epoch() {
|
||||
&state.previous_epoch_attestations
|
||||
} else {
|
||||
return BooleanBitfield::from_elem(attestation.aggregation_bitfield.len(), false);
|
||||
};
|
||||
|
||||
state_attestations
|
||||
.iter()
|
||||
// In a single epoch, an attester should only be attesting for one shard.
|
||||
// TODO: we avoid including slashable attestations in the state here,
|
||||
// but maybe we should do something else with them (like construct slashings).
|
||||
.filter(|existing_attestation| existing_attestation.data.shard == attestation.data.shard)
|
||||
.for_each(|existing_attestation| {
|
||||
// Remove the validators who have signed the existing attestation (they are not new)
|
||||
new_validators.difference_inplace(&existing_attestation.aggregation_bitfield);
|
||||
});
|
||||
|
||||
new_validators
|
||||
}
|
38
eth2/operation_pool/src/attestation_id.rs
Normal file
@ -0,0 +1,38 @@
|
||||
use int_to_bytes::int_to_bytes8;
|
||||
use ssz::ssz_encode;
|
||||
use ssz_derive::{Decode, Encode};
|
||||
use types::{AttestationData, BeaconState, ChainSpec, Domain, Epoch, EthSpec};
|
||||
|
||||
/// Serialized `AttestationData` augmented with a domain to encode the fork info.
|
||||
#[derive(PartialEq, Eq, Clone, Hash, Debug, PartialOrd, Ord, Encode, Decode)]
|
||||
pub struct AttestationId {
|
||||
v: Vec<u8>,
|
||||
}
|
||||
|
||||
/// Number of domain bytes that the end of an attestation ID is padded with.
|
||||
const DOMAIN_BYTES_LEN: usize = 8;
|
||||
|
||||
impl AttestationId {
|
||||
pub fn from_data<T: EthSpec>(
|
||||
attestation: &AttestationData,
|
||||
state: &BeaconState<T>,
|
||||
spec: &ChainSpec,
|
||||
) -> Self {
|
||||
let mut bytes = ssz_encode(attestation);
|
||||
let epoch = attestation.target_epoch;
|
||||
bytes.extend_from_slice(&AttestationId::compute_domain_bytes(epoch, state, spec));
|
||||
AttestationId { v: bytes }
|
||||
}
|
||||
|
||||
pub fn compute_domain_bytes<T: EthSpec>(
|
||||
epoch: Epoch,
|
||||
state: &BeaconState<T>,
|
||||
spec: &ChainSpec,
|
||||
) -> Vec<u8> {
|
||||
int_to_bytes8(spec.get_domain(epoch, Domain::Attestation, &state.fork))
|
||||
}
|
||||
|
||||
pub fn domain_bytes_match(&self, domain_bytes: &[u8]) -> bool {
|
||||
&self.v[self.v.len() - DOMAIN_BYTES_LEN..] == domain_bytes
|
||||
}
|
||||
}
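
// A small usage sketch (the pool wiring itself is assumed, not shown here):
// attestations whose `AttestationData` and fork domain are identical collapse onto
// one `AttestationId` key, and `domain_bytes_match` lets the pool keep only keys
// from the current or previous epoch.
fn example_id_is_recent<T: EthSpec>(
    id: &AttestationId,
    state: &BeaconState<T>,
    spec: &ChainSpec,
) -> bool {
    let prev = AttestationId::compute_domain_bytes(state.previous_epoch(), state, spec);
    let curr = AttestationId::compute_domain_bytes(state.current_epoch(), state, spec);
    id.domain_bytes_match(&prev) || id.domain_bytes_match(&curr)
}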
|
@ -1,13 +1,19 @@
|
||||
use int_to_bytes::int_to_bytes8;
|
||||
mod attestation;
|
||||
mod attestation_id;
|
||||
mod max_cover;
|
||||
mod persistence;
|
||||
|
||||
pub use persistence::PersistedOperationPool;
|
||||
|
||||
use attestation::{earliest_attestation_validators, AttMaxCover};
|
||||
use attestation_id::AttestationId;
|
||||
use itertools::Itertools;
|
||||
use max_cover::maximum_cover;
|
||||
use parking_lot::RwLock;
|
||||
use ssz::ssz_encode;
|
||||
use state_processing::per_block_processing::errors::{
|
||||
AttestationValidationError, AttesterSlashingValidationError, DepositValidationError,
|
||||
ExitValidationError, ProposerSlashingValidationError, TransferValidationError,
|
||||
};
|
||||
#[cfg(not(test))]
|
||||
use state_processing::per_block_processing::verify_deposit_merkle_proof;
|
||||
use state_processing::per_block_processing::{
|
||||
get_slashable_indices_modular, validate_attestation,
|
||||
validate_attestation_time_independent_only, verify_attester_slashing, verify_exit,
|
||||
@ -16,13 +22,12 @@ use state_processing::per_block_processing::{
|
||||
};
|
||||
use std::collections::{btree_map::Entry, hash_map, BTreeMap, HashMap, HashSet};
|
||||
use std::marker::PhantomData;
|
||||
use types::chain_spec::Domain;
|
||||
use types::{
|
||||
Attestation, AttestationData, AttesterSlashing, BeaconState, ChainSpec, Deposit, Epoch,
|
||||
EthSpec, ProposerSlashing, Transfer, Validator, VoluntaryExit,
|
||||
Attestation, AttesterSlashing, BeaconState, ChainSpec, Deposit, EthSpec, ProposerSlashing,
|
||||
Transfer, Validator, VoluntaryExit,
|
||||
};
|
||||
|
||||
#[derive(Default)]
|
||||
#[derive(Default, Debug)]
|
||||
pub struct OperationPool<T: EthSpec + Default> {
|
||||
/// Map from attestation ID (see below) to vectors of attestations.
|
||||
attestations: RwLock<HashMap<AttestationId, Vec<Attestation>>>,
|
||||
@ -43,71 +48,6 @@ pub struct OperationPool<T: EthSpec + Default> {
|
||||
_phantom: PhantomData<T>,
|
||||
}
|
||||
|
||||
/// Serialized `AttestationData` augmented with a domain to encode the fork info.
|
||||
#[derive(PartialEq, Eq, Clone, Hash, Debug)]
|
||||
struct AttestationId(Vec<u8>);
|
||||
|
||||
/// Number of domain bytes that the end of an attestation ID is padded with.
|
||||
const DOMAIN_BYTES_LEN: usize = 8;
|
||||
|
||||
impl AttestationId {
|
||||
fn from_data<T: EthSpec>(
|
||||
attestation: &AttestationData,
|
||||
state: &BeaconState<T>,
|
||||
spec: &ChainSpec,
|
||||
) -> Self {
|
||||
let mut bytes = ssz_encode(attestation);
|
||||
let epoch = attestation.target_epoch;
|
||||
bytes.extend_from_slice(&AttestationId::compute_domain_bytes(epoch, state, spec));
|
||||
AttestationId(bytes)
|
||||
}
|
||||
|
||||
fn compute_domain_bytes<T: EthSpec>(
|
||||
epoch: Epoch,
|
||||
state: &BeaconState<T>,
|
||||
spec: &ChainSpec,
|
||||
) -> Vec<u8> {
|
||||
int_to_bytes8(spec.get_domain(epoch, Domain::Attestation, &state.fork))
|
||||
}
|
||||
|
||||
fn domain_bytes_match(&self, domain_bytes: &[u8]) -> bool {
|
||||
&self.0[self.0.len() - DOMAIN_BYTES_LEN..] == domain_bytes
|
||||
}
|
||||
}
|
||||
|
||||
/// Compute a fitness score for an attestation.
|
||||
///
|
||||
/// The score is calculated by determining the number of *new* attestations that
|
||||
/// the aggregate attestation introduces, and is proportional to the size of the reward we will
|
||||
/// receive for including it in a block.
|
||||
// TODO: this could be optimised with a map from validator index to whether that validator has
|
||||
// attested in each of the current and previous epochs. Currently quadratic in the number of validators.
|
||||
fn attestation_score<T: EthSpec>(attestation: &Attestation, state: &BeaconState<T>) -> usize {
|
||||
// Bitfield of validators whose attestations are new/fresh.
|
||||
let mut new_validators = attestation.aggregation_bitfield.clone();
|
||||
|
||||
let state_attestations = if attestation.data.target_epoch == state.current_epoch() {
|
||||
&state.current_epoch_attestations
|
||||
} else if attestation.data.target_epoch == state.previous_epoch() {
|
||||
&state.previous_epoch_attestations
|
||||
} else {
|
||||
return 0;
|
||||
};
|
||||
|
||||
state_attestations
|
||||
.iter()
|
||||
// In a single epoch, an attester should only be attesting for one shard.
|
||||
// TODO: we avoid including slashable attestations in the state here,
|
||||
// but maybe we should do something else with them (like construct slashings).
|
||||
.filter(|current_attestation| current_attestation.data.shard == attestation.data.shard)
|
||||
.for_each(|current_attestation| {
|
||||
// Remove the validators who have signed the existing attestation (they are not new)
|
||||
new_validators.difference_inplace(&current_attestation.aggregation_bitfield);
|
||||
});
|
||||
|
||||
new_validators.num_set_bits()
|
||||
}
|
||||
|
||||
#[derive(Debug, PartialEq, Clone)]
|
||||
pub enum DepositInsertStatus {
|
||||
/// The deposit was not already in the pool.
|
||||
@ -176,29 +116,19 @@ impl<T: EthSpec> OperationPool<T> {
|
||||
let current_epoch = state.current_epoch();
|
||||
let prev_domain_bytes = AttestationId::compute_domain_bytes(prev_epoch, state, spec);
|
||||
let curr_domain_bytes = AttestationId::compute_domain_bytes(current_epoch, state, spec);
|
||||
self.attestations
|
||||
.read()
|
||||
let reader = self.attestations.read();
|
||||
let valid_attestations = reader
|
||||
.iter()
|
||||
.filter(|(key, _)| {
|
||||
key.domain_bytes_match(&prev_domain_bytes)
|
||||
|| key.domain_bytes_match(&curr_domain_bytes)
|
||||
})
|
||||
.flat_map(|(_, attestations)| attestations)
|
||||
// That are not superseded by an attestation included in the state...
|
||||
.filter(|attestation| !superior_attestation_exists_in_state(state, attestation))
|
||||
// That are valid...
|
||||
.filter(|attestation| validate_attestation(state, attestation, spec).is_ok())
|
||||
// Scored by the number of new attestations they introduce (descending)
|
||||
// TODO: need to consider attestations introduced in THIS block
|
||||
.map(|att| (att, attestation_score(att, state)))
|
||||
// Don't include any useless attestations (score 0)
|
||||
.filter(|&(_, score)| score != 0)
|
||||
.sorted_by_key(|&(_, score)| std::cmp::Reverse(score))
|
||||
// Limited to the maximum number of attestations per block
|
||||
.take(spec.max_attestations as usize)
|
||||
.map(|(att, _)| att)
|
||||
.cloned()
|
||||
.collect()
|
||||
.map(|att| AttMaxCover::new(att, earliest_attestation_validators(att, state)));
|
||||
|
||||
maximum_cover(valid_attestations, spec.max_attestations as usize)
|
||||
}
|
||||
|
||||
/// Remove attestations which are too old to be included in a block.
|
||||
@ -219,20 +149,14 @@ impl<T: EthSpec> OperationPool<T> {
|
||||
/// Add a deposit to the pool.
|
||||
///
|
||||
/// No two distinct deposits should be added with the same index.
|
||||
#[cfg_attr(test, allow(unused_variables))]
|
||||
pub fn insert_deposit(
|
||||
&self,
|
||||
deposit: Deposit,
|
||||
state: &BeaconState<T>,
|
||||
spec: &ChainSpec,
|
||||
) -> Result<DepositInsertStatus, DepositValidationError> {
|
||||
use DepositInsertStatus::*;
|
||||
|
||||
match self.deposits.write().entry(deposit.index) {
|
||||
Entry::Vacant(entry) => {
|
||||
// TODO: fix tests to generate valid merkle proofs
|
||||
#[cfg(not(test))]
|
||||
verify_deposit_merkle_proof(state, &deposit, spec)?;
|
||||
entry.insert(deposit);
|
||||
Ok(Fresh)
|
||||
}
|
||||
@ -240,9 +164,6 @@ impl<T: EthSpec> OperationPool<T> {
|
||||
if entry.get() == &deposit {
|
||||
Ok(Duplicate)
|
||||
} else {
|
||||
// TODO: fix tests to generate valid merkle proofs
|
||||
#[cfg(not(test))]
|
||||
verify_deposit_merkle_proof(state, &deposit, spec)?;
|
||||
Ok(Replaced(Box::new(entry.insert(deposit))))
|
||||
}
|
||||
}
|
||||
@ -253,7 +174,9 @@ impl<T: EthSpec> OperationPool<T> {
|
||||
///
|
||||
/// Take at most the maximum number of deposits, beginning from the current deposit index.
|
||||
pub fn get_deposits(&self, state: &BeaconState<T>, spec: &ChainSpec) -> Vec<Deposit> {
|
||||
// TODO: might want to re-check the Merkle proof to account for Eth1 forking
|
||||
// TODO: We need to update the Merkle proofs for existing deposits as more deposits
|
||||
// are added. It probably makes sense to construct the proofs from scratch when forming
|
||||
// a block, using fresh info from the ETH1 chain for the current deposit root.
|
||||
let start_idx = state.deposit_index;
|
||||
(start_idx..start_idx + spec.max_deposits)
|
||||
.map(|idx| self.deposits.read().get(&idx).cloned())
|
||||
@ -484,34 +407,6 @@ impl<T: EthSpec> OperationPool<T> {
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns `true` if the state already contains a `PendingAttestation` that is superior to the
|
||||
/// given `attestation`.
|
||||
///
|
||||
/// A validator has nothing to gain from re-including an attestation and it adds load to the
|
||||
/// network.
|
||||
///
|
||||
/// An existing `PendingAttestation` is superior to an existing `attestation` if:
|
||||
///
|
||||
/// - Their `AttestationData` is equal.
|
||||
/// - `attestation` does not contain any signatures that `PendingAttestation` does not have.
|
||||
fn superior_attestation_exists_in_state<T: EthSpec>(
|
||||
state: &BeaconState<T>,
|
||||
attestation: &Attestation,
|
||||
) -> bool {
|
||||
state
|
||||
.current_epoch_attestations
|
||||
.iter()
|
||||
.chain(state.previous_epoch_attestations.iter())
|
||||
.any(|existing_attestation| {
|
||||
let bitfield = &attestation.aggregation_bitfield;
|
||||
let existing_bitfield = &existing_attestation.aggregation_bitfield;
|
||||
|
||||
existing_attestation.data == attestation.data
|
||||
&& bitfield.intersection(existing_bitfield).num_set_bits()
|
||||
== bitfield.num_set_bits()
|
||||
})
|
||||
}
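For example (illustrative bit patterns only): if the state already holds a `PendingAttestation` with aggregation bits `1110` for the same `AttestationData`, a new attestation with bits `0110` is superseded, because the intersection `0110` still contains every bit the new attestation sets. A new attestation with bits `0111` is not superseded, since it carries one signature the existing `PendingAttestation` lacks.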
|
||||
|
||||
/// Filter up to a maximum number of operations out of an iterator.
|
||||
fn filter_limit_operations<'a, T: 'a, I, F>(operations: I, filter: F, limit: u64) -> Vec<T>
|
||||
where
|
||||
@ -547,6 +442,18 @@ fn prune_validator_hash_map<T, F, E: EthSpec>(
|
||||
});
|
||||
}
|
||||
|
||||
/// Compare two operation pools.
|
||||
impl<T: EthSpec + Default> PartialEq for OperationPool<T> {
|
||||
fn eq(&self, other: &Self) -> bool {
|
||||
*self.attestations.read() == *other.attestations.read()
|
||||
&& *self.deposits.read() == *other.deposits.read()
|
||||
&& *self.attester_slashings.read() == *other.attester_slashings.read()
|
||||
&& *self.proposer_slashings.read() == *other.proposer_slashings.read()
|
||||
&& *self.voluntary_exits.read() == *other.voluntary_exits.read()
|
||||
&& *self.transfers.read() == *other.transfers.read()
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::DepositInsertStatus::*;
|
||||
@ -557,22 +464,15 @@ mod tests {
|
||||
#[test]
|
||||
fn insert_deposit() {
|
||||
let rng = &mut XorShiftRng::from_seed([42; 16]);
|
||||
let (ref spec, ref state) = test_state(rng);
|
||||
let op_pool = OperationPool::new();
|
||||
let op_pool = OperationPool::<MinimalEthSpec>::new();
|
||||
let deposit1 = make_deposit(rng);
|
||||
let mut deposit2 = make_deposit(rng);
|
||||
deposit2.index = deposit1.index;
|
||||
|
||||
assert_eq!(op_pool.insert_deposit(deposit1.clone()), Ok(Fresh));
|
||||
assert_eq!(op_pool.insert_deposit(deposit1.clone()), Ok(Duplicate));
|
||||
assert_eq!(
|
||||
op_pool.insert_deposit(deposit1.clone(), state, spec),
|
||||
Ok(Fresh)
|
||||
);
|
||||
assert_eq!(
|
||||
op_pool.insert_deposit(deposit1.clone(), state, spec),
|
||||
Ok(Duplicate)
|
||||
);
|
||||
assert_eq!(
|
||||
op_pool.insert_deposit(deposit2, state, spec),
|
||||
op_pool.insert_deposit(deposit2),
|
||||
Ok(Replaced(Box::new(deposit1)))
|
||||
);
|
||||
}
|
||||
@ -591,10 +491,7 @@ mod tests {
|
||||
let deposits = dummy_deposits(rng, start, max_deposits + extra);
|
||||
|
||||
for deposit in &deposits {
|
||||
assert_eq!(
|
||||
op_pool.insert_deposit(deposit.clone(), &state, &spec),
|
||||
Ok(Fresh)
|
||||
);
|
||||
assert_eq!(op_pool.insert_deposit(deposit.clone()), Ok(Fresh));
|
||||
}
|
||||
|
||||
state.deposit_index = start + offset;
|
||||
@ -610,8 +507,7 @@ mod tests {
|
||||
#[test]
|
||||
fn prune_deposits() {
|
||||
let rng = &mut XorShiftRng::from_seed([42; 16]);
|
||||
let (spec, state) = test_state(rng);
|
||||
let op_pool = OperationPool::new();
|
||||
let op_pool = OperationPool::<MinimalEthSpec>::new();
|
||||
|
||||
let start1 = 100;
|
||||
// test is super slow in debug mode if this parameter is too high
|
||||
@ -623,7 +519,7 @@ mod tests {
|
||||
let deposits2 = dummy_deposits(rng, start2, count);
|
||||
|
||||
for d in deposits1.into_iter().chain(deposits2) {
|
||||
assert!(op_pool.insert_deposit(d, &state, &spec).is_ok());
|
||||
assert!(op_pool.insert_deposit(d).is_ok());
|
||||
}
|
||||
|
||||
assert_eq!(op_pool.num_deposits(), 2 * count as usize);
|
||||
@ -734,15 +630,13 @@ mod tests {
|
||||
state_builder.teleport_to_slot(slot);
|
||||
state_builder.build_caches(&spec).unwrap();
|
||||
let (state, keypairs) = state_builder.build();
|
||||
|
||||
(state, keypairs, MainnetEthSpec::default_spec())
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn test_attestation_score() {
|
||||
fn test_earliest_attestation() {
|
||||
let (ref mut state, ref keypairs, ref spec) =
|
||||
attestation_test_state::<MainnetEthSpec>(1);
|
||||
|
||||
let slot = state.slot - 1;
|
||||
let committees = state
|
||||
.get_crosslink_committees_at_slot(slot)
|
||||
@ -775,9 +669,8 @@ mod tests {
|
||||
|
||||
assert_eq!(
|
||||
att1.aggregation_bitfield.num_set_bits(),
|
||||
attestation_score(&att1, state)
|
||||
earliest_attestation_validators(&att1, state).num_set_bits()
|
||||
);
|
||||
|
||||
state.current_epoch_attestations.push(PendingAttestation {
|
||||
aggregation_bitfield: att1.aggregation_bitfield.clone(),
|
||||
data: att1.data.clone(),
|
||||
@ -785,7 +678,10 @@ mod tests {
|
||||
proposer_index: 0,
|
||||
});
|
||||
|
||||
assert_eq!(cc.committee.len() - 2, attestation_score(&att2, state));
|
||||
assert_eq!(
|
||||
cc.committee.len() - 2,
|
||||
earliest_attestation_validators(&att2, state).num_set_bits()
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
|
189
eth2/operation_pool/src/max_cover.rs
Normal file
@ -0,0 +1,189 @@
|
||||
/// Trait for types that we can compute a maximum cover for.
|
||||
///
|
||||
/// Terminology:
|
||||
/// * `item`: something that implements this trait
|
||||
/// * `element`: something contained in a set, and covered by the covering set of an item
|
||||
/// * `object`: something extracted from an item in order to comprise a solution
|
||||
/// See: https://en.wikipedia.org/wiki/Maximum_coverage_problem
|
||||
pub trait MaxCover {
|
||||
/// The result type, of which we would eventually like a collection of maximal quality.
|
||||
type Object;
|
||||
/// The type used to represent sets.
|
||||
type Set: Clone;
|
||||
|
||||
/// Extract an object for inclusion in a solution.
|
||||
fn object(&self) -> Self::Object;
|
||||
|
||||
/// Get the set of elements covered.
|
||||
fn covering_set(&self) -> &Self::Set;
|
||||
/// Update the set of elements covered, for the inclusion of some object in the solution.
|
||||
fn update_covering_set(&mut self, max_obj: &Self::Object, max_set: &Self::Set);
|
||||
/// The quality of this item's covering set, usually its cardinality.
|
||||
fn score(&self) -> usize;
|
||||
}
|
||||
|
||||
/// Helper struct to track which items of the input are still available for inclusion.
|
||||
/// Saves removing elements from the work vector.
|
||||
struct MaxCoverItem<T> {
|
||||
item: T,
|
||||
available: bool,
|
||||
}
|
||||
|
||||
impl<T> MaxCoverItem<T> {
|
||||
fn new(item: T) -> Self {
|
||||
MaxCoverItem {
|
||||
item,
|
||||
available: true,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Compute an approximate maximum cover using a greedy algorithm.
|
||||
///
|
||||
/// * Time complexity: `O(limit * items_iter.len())`
|
||||
/// * Space complexity: `O(items_iter.len())`
|
||||
pub fn maximum_cover<'a, I, T>(items_iter: I, limit: usize) -> Vec<T::Object>
|
||||
where
|
||||
I: IntoIterator<Item = T>,
|
||||
T: MaxCover,
|
||||
{
|
||||
// Construct an initial vec of all items, marked available.
|
||||
let mut all_items: Vec<_> = items_iter
|
||||
.into_iter()
|
||||
.map(MaxCoverItem::new)
|
||||
.filter(|x| x.item.score() != 0)
|
||||
.collect();
|
||||
|
||||
let mut result = vec![];
|
||||
|
||||
for _ in 0..limit {
|
||||
// Select the item with the maximum score.
|
||||
let (best_item, best_cover) = match all_items
|
||||
.iter_mut()
|
||||
.filter(|x| x.available && x.item.score() != 0)
|
||||
.max_by_key(|x| x.item.score())
|
||||
{
|
||||
Some(x) => {
|
||||
x.available = false;
|
||||
(x.item.object(), x.item.covering_set().clone())
|
||||
}
|
||||
None => return result,
|
||||
};
|
||||
|
||||
// Update the covering sets of the other items, for the inclusion of the selected item.
|
||||
// Elements covered by the selected item can't be re-covered.
|
||||
all_items
|
||||
.iter_mut()
|
||||
.filter(|x| x.available && x.item.score() != 0)
|
||||
.for_each(|x| x.item.update_covering_set(&best_item, &best_cover));
|
||||
|
||||
result.push(best_item);
|
||||
}
|
||||
|
||||
result
|
||||
}
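As a rough illustration of how the `MaxCover` trait composes with `maximum_cover`, here is a minimal sketch (not part of this diff; the `Proposal` type and its fields are hypothetical) of an item whose extracted object differs from its covering set:

// Hypothetical item: the object returned in the solution is an id, while the
// covering set tracks which element ids this item would newly cover.
use std::collections::HashSet;

#[derive(Clone)]
struct Proposal {
    id: u64,
    covers: HashSet<u64>,
}

impl MaxCover for Proposal {
    type Object = u64;
    type Set = HashSet<u64>;

    fn object(&self) -> u64 {
        self.id
    }

    fn covering_set(&self) -> &HashSet<u64> {
        &self.covers
    }

    fn update_covering_set(&mut self, _max_obj: &u64, max_set: &HashSet<u64>) {
        // Elements claimed by the chosen item can no longer add value here.
        let remaining: HashSet<u64> = self.covers.difference(max_set).cloned().collect();
        self.covers = remaining;
    }

    fn score(&self) -> usize {
        self.covers.len()
    }
}

// e.g. let best_ids: Vec<u64> = maximum_cover(proposals, 8);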
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use super::*;
|
||||
use std::iter::FromIterator;
|
||||
use std::{collections::HashSet, hash::Hash};
|
||||
|
||||
impl<T> MaxCover for HashSet<T>
|
||||
where
|
||||
T: Clone + Eq + Hash,
|
||||
{
|
||||
type Object = Self;
|
||||
type Set = Self;
|
||||
|
||||
fn object(&self) -> Self {
|
||||
self.clone()
|
||||
}
|
||||
|
||||
fn covering_set(&self) -> &Self {
|
||||
&self
|
||||
}
|
||||
|
||||
fn update_covering_set(&mut self, _: &Self, other: &Self) {
|
||||
let mut difference = &*self - other;
|
||||
std::mem::swap(self, &mut difference);
|
||||
}
|
||||
|
||||
fn score(&self) -> usize {
|
||||
self.len()
|
||||
}
|
||||
}
|
||||
|
||||
fn example_system() -> Vec<HashSet<usize>> {
|
||||
vec![
|
||||
HashSet::from_iter(vec![3]),
|
||||
HashSet::from_iter(vec![1, 2, 4, 5]),
|
||||
HashSet::from_iter(vec![1, 2, 4, 5]),
|
||||
HashSet::from_iter(vec![1]),
|
||||
HashSet::from_iter(vec![2, 4, 5]),
|
||||
]
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn zero_limit() {
|
||||
let cover = maximum_cover(example_system(), 0);
|
||||
assert_eq!(cover.len(), 0);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn one_limit() {
|
||||
let sets = example_system();
|
||||
let cover = maximum_cover(sets.clone(), 1);
|
||||
assert_eq!(cover.len(), 1);
|
||||
assert_eq!(cover[0], sets[1]);
|
||||
}
|
||||
|
||||
// Check that even if the limit provides room, we don't include useless items in the solution.
|
||||
#[test]
|
||||
fn exclude_zero_score() {
|
||||
let sets = example_system();
|
||||
for k in 2..10 {
|
||||
let cover = maximum_cover(sets.clone(), k);
|
||||
assert_eq!(cover.len(), 2);
|
||||
assert_eq!(cover[0], sets[1]);
|
||||
assert_eq!(cover[1], sets[0]);
|
||||
}
|
||||
}
|
||||
|
||||
fn quality<T: Eq + Hash>(solution: &[HashSet<T>]) -> usize {
|
||||
solution.iter().map(HashSet::len).sum()
|
||||
}
|
||||
|
||||
// Optimal solution is the first three sets (quality 15) but our greedy algorithm
|
||||
// will select the last three (quality 11). The comment at the end of each line
|
||||
// shows that set's score at each iteration, with a * indicating that it will be chosen.
|
||||
#[test]
|
||||
fn suboptimal() {
|
||||
let sets = vec![
|
||||
HashSet::from_iter(vec![0, 1, 8, 11, 14]), // 5, 3, 2
|
||||
HashSet::from_iter(vec![2, 3, 7, 9, 10]), // 5, 3, 2
|
||||
HashSet::from_iter(vec![4, 5, 6, 12, 13]), // 5, 4, 2
|
||||
HashSet::from_iter(vec![9, 10]), // 4, 4, 2*
|
||||
HashSet::from_iter(vec![5, 6, 7, 8]), // 4, 4*
|
||||
HashSet::from_iter(vec![0, 1, 2, 3, 4]), // 5*
|
||||
];
|
||||
let cover = maximum_cover(sets.clone(), 3);
|
||||
assert_eq!(quality(&cover), 11);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn intersecting_ok() {
|
||||
let sets = vec![
|
||||
HashSet::from_iter(vec![1, 2, 3, 4, 5, 6, 7, 8]),
|
||||
HashSet::from_iter(vec![1, 2, 3, 9, 10, 11]),
|
||||
HashSet::from_iter(vec![4, 5, 6, 12, 13, 14]),
|
||||
HashSet::from_iter(vec![7, 8, 15, 16, 17, 18]),
|
||||
HashSet::from_iter(vec![1, 2, 9, 10]),
|
||||
HashSet::from_iter(vec![1, 5, 6, 8]),
|
||||
HashSet::from_iter(vec![1, 7, 11, 19]),
|
||||
];
|
||||
let cover = maximum_cover(sets.clone(), 5);
|
||||
assert_eq!(quality(&cover), 19);
|
||||
assert_eq!(cover.len(), 5);
|
||||
}
|
||||
}
|
121
eth2/operation_pool/src/persistence.rs
Normal file
@ -0,0 +1,121 @@
|
||||
use crate::attestation_id::AttestationId;
|
||||
use crate::OperationPool;
|
||||
use parking_lot::RwLock;
|
||||
use ssz_derive::{Decode, Encode};
|
||||
use types::*;
|
||||
|
||||
/// SSZ-serializable version of `OperationPool`.
|
||||
///
|
||||
/// Operations are stored in arbitrary order, so it's not a good idea to compare instances
|
||||
/// of this type (or its encoded form) for equality. Convert back to an `OperationPool` first.
|
||||
#[derive(Encode, Decode)]
|
||||
pub struct PersistedOperationPool {
|
||||
/// Mapping from attestation ID to attestations.
|
||||
// We could save space by not storing the attestation ID, but it might
|
||||
// be difficult to make that roundtrip due to eager aggregation.
|
||||
attestations: Vec<(AttestationId, Vec<Attestation>)>,
|
||||
deposits: Vec<Deposit>,
|
||||
/// Attester slashings.
|
||||
attester_slashings: Vec<AttesterSlashing>,
|
||||
/// Proposer slashings.
|
||||
proposer_slashings: Vec<ProposerSlashing>,
|
||||
/// Voluntary exits.
|
||||
voluntary_exits: Vec<VoluntaryExit>,
|
||||
/// Transfers.
|
||||
transfers: Vec<Transfer>,
|
||||
}
|
||||
|
||||
impl PersistedOperationPool {
|
||||
/// Convert an `OperationPool` into serializable form.
|
||||
pub fn from_operation_pool<T: EthSpec>(operation_pool: &OperationPool<T>) -> Self {
|
||||
let attestations = operation_pool
|
||||
.attestations
|
||||
.read()
|
||||
.iter()
|
||||
.map(|(att_id, att)| (att_id.clone(), att.clone()))
|
||||
.collect();
|
||||
|
||||
let deposits = operation_pool
|
||||
.deposits
|
||||
.read()
|
||||
.iter()
|
||||
.map(|(_, d)| d.clone())
|
||||
.collect();
|
||||
|
||||
let attester_slashings = operation_pool
|
||||
.attester_slashings
|
||||
.read()
|
||||
.iter()
|
||||
.map(|(_, slashing)| slashing.clone())
|
||||
.collect();
|
||||
|
||||
let proposer_slashings = operation_pool
|
||||
.proposer_slashings
|
||||
.read()
|
||||
.iter()
|
||||
.map(|(_, slashing)| slashing.clone())
|
||||
.collect();
|
||||
|
||||
let voluntary_exits = operation_pool
|
||||
.voluntary_exits
|
||||
.read()
|
||||
.iter()
|
||||
.map(|(_, exit)| exit.clone())
|
||||
.collect();
|
||||
|
||||
let transfers = operation_pool.transfers.read().iter().cloned().collect();
|
||||
|
||||
Self {
|
||||
attestations,
|
||||
deposits,
|
||||
attester_slashings,
|
||||
proposer_slashings,
|
||||
voluntary_exits,
|
||||
transfers,
|
||||
}
|
||||
}
|
||||
|
||||
/// Reconstruct an `OperationPool`.
|
||||
pub fn into_operation_pool<T: EthSpec>(
|
||||
self,
|
||||
state: &BeaconState<T>,
|
||||
spec: &ChainSpec,
|
||||
) -> OperationPool<T> {
|
||||
let attestations = RwLock::new(self.attestations.into_iter().collect());
|
||||
let deposits = RwLock::new(self.deposits.into_iter().map(|d| (d.index, d)).collect());
|
||||
let attester_slashings = RwLock::new(
|
||||
self.attester_slashings
|
||||
.into_iter()
|
||||
.map(|slashing| {
|
||||
(
|
||||
OperationPool::attester_slashing_id(&slashing, state, spec),
|
||||
slashing,
|
||||
)
|
||||
})
|
||||
.collect(),
|
||||
);
|
||||
let proposer_slashings = RwLock::new(
|
||||
self.proposer_slashings
|
||||
.into_iter()
|
||||
.map(|slashing| (slashing.proposer_index, slashing))
|
||||
.collect(),
|
||||
);
|
||||
let voluntary_exits = RwLock::new(
|
||||
self.voluntary_exits
|
||||
.into_iter()
|
||||
.map(|exit| (exit.validator_index, exit))
|
||||
.collect(),
|
||||
);
|
||||
let transfers = RwLock::new(self.transfers.into_iter().collect());
|
||||
|
||||
OperationPool {
|
||||
attestations,
|
||||
deposits,
|
||||
attester_slashings,
|
||||
proposer_slashings,
|
||||
voluntary_exits,
|
||||
transfers,
|
||||
_phantom: Default::default(),
|
||||
}
|
||||
}
|
||||
}
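A rough usage sketch (not part of this diff) of how these conversions pair with SSZ for persistence; the `op_pool`, `state` and `spec` bindings are assumed to exist, and `MinimalEthSpec` is used purely as an example spec:

use ssz::{Decode, Encode};

// Snapshot the pool, e.g. before shutting down.
let persisted = PersistedOperationPool::from_operation_pool(&op_pool);
let bytes: Vec<u8> = persisted.as_ssz_bytes();

// Later, restore it; the state and spec are needed to rebuild the
// attestation and slashing identifiers.
let restored = PersistedOperationPool::from_ssz_bytes(&bytes).expect("valid SSZ bytes");
let op_pool: OperationPool<MinimalEthSpec> = restored.into_operation_pool(&state, &spec);

As the doc comment above notes, equality checks should be made on the reconstructed `OperationPool` (which implements `PartialEq`), not on the persisted or encoded form.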
|
@ -24,12 +24,12 @@ integer-sqrt = "0.1"
|
||||
itertools = "0.8"
|
||||
log = "0.4"
|
||||
merkle_proof = { path = "../utils/merkle_proof" }
|
||||
ssz = { path = "../utils/ssz" }
|
||||
ssz_derive = { path = "../utils/ssz_derive" }
|
||||
eth2_ssz = { path = "../utils/ssz" }
|
||||
eth2_ssz_derive = { path = "../utils/ssz_derive" }
|
||||
tree_hash = { path = "../utils/tree_hash" }
|
||||
tree_hash_derive = { path = "../utils/tree_hash_derive" }
|
||||
types = { path = "../types" }
|
||||
rayon = "1.0"
|
||||
|
||||
[features]
|
||||
fake_crypto = ["bls/fake_crypto"]
|
||||
fake_crypto = ["bls/fake_crypto"]
|
||||
|
@ -26,8 +26,8 @@ serde_derive = "1.0"
|
||||
serde_json = "1.0"
|
||||
serde_yaml = "0.8"
|
||||
slog = "^2.2.3"
|
||||
ssz = { path = "../utils/ssz" }
|
||||
ssz_derive = { path = "../utils/ssz_derive" }
|
||||
eth2_ssz = { path = "../utils/ssz" }
|
||||
eth2_ssz_derive = { path = "../utils/ssz_derive" }
|
||||
swap_or_not_shuffle = { path = "../utils/swap_or_not_shuffle" }
|
||||
test_random_derive = { path = "../utils/test_random_derive" }
|
||||
tree_hash = { path = "../utils/tree_hash" }
|
||||
|
@ -13,7 +13,7 @@ rand = "^0.5"
|
||||
serde = "1.0"
|
||||
serde_derive = "1.0"
|
||||
serde_hex = { path = "../serde_hex" }
|
||||
ssz = { path = "../ssz" }
|
||||
eth2_ssz = { path = "../ssz" }
|
||||
tree_hash = { path = "../tree_hash" }
|
||||
|
||||
[features]
|
||||
|
@ -7,7 +7,7 @@ edition = "2018"
|
||||
[dependencies]
|
||||
cached_tree_hash = { path = "../cached_tree_hash" }
|
||||
serde_hex = { path = "../serde_hex" }
|
||||
ssz = { path = "../ssz" }
|
||||
eth2_ssz = { path = "../ssz" }
|
||||
bit-vec = "0.5.0"
|
||||
bit_reverse = "0.1"
|
||||
serde = "1.0"
|
||||
|
@ -9,7 +9,7 @@ publish = false
|
||||
cargo-fuzz = true
|
||||
|
||||
[dependencies]
|
||||
ssz = { path = "../../ssz" }
|
||||
eth2_ssz = { path = "../../ssz" }
|
||||
|
||||
[dependencies.boolean-bitfield]
|
||||
path = ".."
|
||||
|
@ -13,7 +13,7 @@ use std::default;
|
||||
|
||||
/// A BooleanBitfield represents a set of booleans compactly stored as a vector of bits.
|
||||
/// The BooleanBitfield is given a fixed size during construction. Reads outside of the current size return an out-of-bounds error. Writes outside of the current size expand the size of the set.
|
||||
#[derive(Debug, Clone)]
|
||||
#[derive(Debug, Clone, Hash)]
|
||||
pub struct BooleanBitfield(BitVec);
|
||||
|
||||
/// Error represents some reason a request against a bitfield was not satisfied
|
||||
@ -170,6 +170,7 @@ impl cmp::PartialEq for BooleanBitfield {
|
||||
ssz::ssz_encode(self) == ssz::ssz_encode(other)
|
||||
}
|
||||
}
|
||||
impl Eq for BooleanBitfield {}
|
||||
|
||||
/// Create a new bitfield that is a union of two other bitfields.
|
||||
///
|
||||
|
@ -9,5 +9,5 @@ cached_tree_hash = { path = "../cached_tree_hash" }
|
||||
tree_hash = { path = "../tree_hash" }
|
||||
serde = "1.0"
|
||||
serde_derive = "1.0"
|
||||
ssz = { path = "../ssz" }
|
||||
eth2_ssz = { path = "../ssz" }
|
||||
typenum = "1.10"
|
||||
|
2
eth2/utils/hashing/.cargo/config
Normal file
@ -0,0 +1,2 @@
|
||||
[target.wasm32-unknown-unknown]
|
||||
runner = 'wasm-bindgen-test-runner'
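This runner setting is what lets the wasm-gated tests further down execute: with it in place, `cargo test --target wasm32-unknown-unknown` invokes the compiled test binary through `wasm-bindgen-test-runner` (this reading of the standard Cargo `[target.<triple>] runner` key is inferred, not stated in the diff).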
|
@ -4,5 +4,14 @@ version = "0.1.0"
|
||||
authors = ["Paul Hauner <paul@paulhauner.com>"]
|
||||
edition = "2018"
|
||||
|
||||
[dependencies]
|
||||
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
|
||||
ring = "0.14.6"
|
||||
|
||||
[target.'cfg(target_arch = "wasm32")'.dependencies]
|
||||
sha2 = "0.8.0"
|
||||
|
||||
[dev-dependencies]
|
||||
rustc-hex = "2.0.1"
|
||||
|
||||
[target.'cfg(target_arch = "wasm32")'.dev-dependencies]
|
||||
wasm-bindgen-test = "0.2.47"
|
||||
|
@ -1,7 +1,17 @@
|
||||
#[cfg(not(target_arch = "wasm32"))]
|
||||
use ring::digest::{digest, SHA256};
|
||||
|
||||
#[cfg(target_arch = "wasm32")]
|
||||
use sha2::{Digest, Sha256};
|
||||
|
||||
pub fn hash(input: &[u8]) -> Vec<u8> {
|
||||
digest(&SHA256, input).as_ref().into()
|
||||
#[cfg(not(target_arch = "wasm32"))]
|
||||
let h = digest(&SHA256, input).as_ref().into();
|
||||
|
||||
#[cfg(target_arch = "wasm32")]
|
||||
let h = Sha256::digest(input).as_ref().into();
|
||||
|
||||
h
|
||||
}
|
||||
|
||||
/// Get merkle root of some hashed values - the input leaf nodes is expected to already be hashed
|
||||
@ -37,19 +47,24 @@ pub fn merkle_root(values: &[Vec<u8>]) -> Option<Vec<u8>> {
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::*;
|
||||
use ring::test;
|
||||
use rustc_hex::FromHex;
|
||||
|
||||
#[test]
|
||||
#[cfg(target_arch = "wasm32")]
|
||||
use wasm_bindgen_test::*;
|
||||
|
||||
#[cfg_attr(not(target_arch = "wasm32"), test)]
|
||||
#[cfg_attr(target_arch = "wasm32", wasm_bindgen_test)]
|
||||
fn test_hashing() {
|
||||
let input: Vec<u8> = b"hello world".as_ref().into();
|
||||
|
||||
let output = hash(input.as_ref());
|
||||
let expected_hex = "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9";
|
||||
let expected: Vec<u8> = test::from_hex(expected_hex).unwrap();
|
||||
let expected: Vec<u8> = expected_hex.from_hex().unwrap();
|
||||
assert_eq!(expected, output);
|
||||
}
|
||||
|
||||
#[test]
|
||||
#[cfg_attr(not(target_arch = "wasm32"), test)]
|
||||
#[cfg_attr(target_arch = "wasm32", wasm_bindgen_test)]
|
||||
fn test_merkle_root() {
|
||||
// hash the leaf nodes
|
||||
let mut input = vec![
|
||||
@ -79,13 +94,17 @@ mod tests {
|
||||
|
||||
assert_eq!(&expected[..], output.unwrap().as_slice());
|
||||
}
|
||||
#[test]
|
||||
|
||||
#[cfg_attr(not(target_arch = "wasm32"), test)]
|
||||
#[cfg_attr(target_arch = "wasm32", wasm_bindgen_test)]
|
||||
fn test_empty_input_merkle_root() {
|
||||
let input = vec![];
|
||||
let output = merkle_root(&input[..]);
|
||||
assert_eq!(None, output);
|
||||
}
|
||||
#[test]
|
||||
|
||||
#[cfg_attr(not(target_arch = "wasm32"), test)]
|
||||
#[cfg_attr(target_arch = "wasm32", wasm_bindgen_test)]
|
||||
fn test_odd_leaf_merkle_root() {
|
||||
let input = vec![
|
||||
hash("a".as_bytes()),
|
||||
|
@ -1,8 +1,13 @@
|
||||
[package]
|
||||
name = "ssz"
|
||||
name = "eth2_ssz"
|
||||
version = "0.1.0"
|
||||
authors = ["Paul Hauner <paul@paulhauner.com>"]
|
||||
authors = ["Paul Hauner <paul@sigmaprime.io>"]
|
||||
edition = "2018"
|
||||
description = "SimpleSerialize (SSZ) as used in Ethereum 2.0"
|
||||
license = "Apache-2.0"
|
||||
|
||||
[lib]
|
||||
name = "ssz"
|
||||
|
||||
[[bench]]
|
||||
name = "benches"
|
||||
@ -10,12 +15,10 @@ harness = false
|
||||
|
||||
[dev-dependencies]
|
||||
criterion = "0.2"
|
||||
ssz_derive = { path = "../ssz_derive" }
|
||||
eth2_ssz_derive = "0.1.0"
|
||||
|
||||
[dependencies]
|
||||
bytes = "0.4.9"
|
||||
ethereum-types = "0.5"
|
||||
hashing = { path = "../hashing" }
|
||||
int_to_bytes = { path = "../int_to_bytes" }
|
||||
hex = "0.3"
|
||||
yaml-rust = "0.4"
|
||||
|
@ -34,8 +34,160 @@ impl_decodable_for_uint!(u8, 8);
|
||||
impl_decodable_for_uint!(u16, 16);
|
||||
impl_decodable_for_uint!(u32, 32);
|
||||
impl_decodable_for_uint!(u64, 64);
|
||||
|
||||
#[cfg(target_pointer_width = "32")]
|
||||
impl_decodable_for_uint!(usize, 32);
|
||||
|
||||
#[cfg(target_pointer_width = "64")]
|
||||
impl_decodable_for_uint!(usize, 64);
|
||||
|
||||
macro_rules! impl_decode_for_tuples {
|
||||
($(
|
||||
$Tuple:ident {
|
||||
$(($idx:tt) -> $T:ident)+
|
||||
}
|
||||
)+) => {
|
||||
$(
|
||||
impl<$($T: Decode),+> Decode for ($($T,)+) {
|
||||
fn is_ssz_fixed_len() -> bool {
|
||||
$(
|
||||
<$T as Decode>::is_ssz_fixed_len() &&
|
||||
)*
|
||||
true
|
||||
}
|
||||
|
||||
fn ssz_fixed_len() -> usize {
|
||||
if <Self as Decode>::is_ssz_fixed_len() {
|
||||
$(
|
||||
<$T as Decode>::ssz_fixed_len() +
|
||||
)*
|
||||
0
|
||||
} else {
|
||||
BYTES_PER_LENGTH_OFFSET
|
||||
}
|
||||
}
|
||||
|
||||
fn from_ssz_bytes(bytes: &[u8]) -> Result<Self, DecodeError> {
|
||||
let mut builder = SszDecoderBuilder::new(bytes);
|
||||
|
||||
$(
|
||||
builder.register_type::<$T>()?;
|
||||
)*
|
||||
|
||||
let mut decoder = builder.build()?;
|
||||
|
||||
Ok(($(
|
||||
decoder.decode_next::<$T>()?,
|
||||
)*
|
||||
))
|
||||
}
|
||||
}
|
||||
)+
|
||||
}
|
||||
}
|
||||
|
||||
impl_decode_for_tuples! {
|
||||
Tuple2 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
}
|
||||
Tuple3 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
}
|
||||
Tuple4 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
}
|
||||
Tuple5 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
}
|
||||
Tuple6 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
}
|
||||
Tuple7 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
}
|
||||
Tuple8 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
}
|
||||
Tuple9 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
}
|
||||
Tuple10 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
(9) -> J
|
||||
}
|
||||
Tuple11 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
(9) -> J
|
||||
(10) -> K
|
||||
}
|
||||
Tuple12 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
(9) -> J
|
||||
(10) -> K
|
||||
(11) -> L
|
||||
}
|
||||
}
|
||||
|
||||
impl Decode for bool {
|
||||
fn is_ssz_fixed_len() -> bool {
|
||||
true
|
||||
@ -515,4 +667,15 @@ mod tests {
|
||||
})
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tuple() {
|
||||
assert_eq!(<(u16, u16)>::from_ssz_bytes(&[0, 0, 0, 0]), Ok((0, 0)));
|
||||
assert_eq!(<(u16, u16)>::from_ssz_bytes(&[16, 0, 17, 0]), Ok((16, 17)));
|
||||
assert_eq!(<(u16, u16)>::from_ssz_bytes(&[0, 1, 2, 0]), Ok((256, 2)));
|
||||
assert_eq!(
|
||||
<(u16, u16)>::from_ssz_bytes(&[255, 255, 0, 0]),
|
||||
Ok((65535, 0))
|
||||
);
|
||||
}
|
||||
}
|
||||
|
@ -24,8 +24,161 @@ impl_encodable_for_uint!(u8, 8);
|
||||
impl_encodable_for_uint!(u16, 16);
|
||||
impl_encodable_for_uint!(u32, 32);
|
||||
impl_encodable_for_uint!(u64, 64);
|
||||
|
||||
#[cfg(target_pointer_width = "32")]
|
||||
impl_encodable_for_uint!(usize, 32);
|
||||
|
||||
#[cfg(target_pointer_width = "64")]
|
||||
impl_encodable_for_uint!(usize, 64);
|
||||
|
||||
// Based on the `tuple_impls` macro from the standard library.
|
||||
macro_rules! impl_encode_for_tuples {
|
||||
($(
|
||||
$Tuple:ident {
|
||||
$(($idx:tt) -> $T:ident)+
|
||||
}
|
||||
)+) => {
|
||||
$(
|
||||
impl<$($T: Encode),+> Encode for ($($T,)+) {
|
||||
fn is_ssz_fixed_len() -> bool {
|
||||
$(
|
||||
<$T as Encode>::is_ssz_fixed_len() &&
|
||||
)*
|
||||
true
|
||||
}
|
||||
|
||||
fn ssz_fixed_len() -> usize {
|
||||
if <Self as Encode>::is_ssz_fixed_len() {
|
||||
$(
|
||||
<$T as Encode>::ssz_fixed_len() +
|
||||
)*
|
||||
0
|
||||
} else {
|
||||
BYTES_PER_LENGTH_OFFSET
|
||||
}
|
||||
}
|
||||
|
||||
fn ssz_append(&self, buf: &mut Vec<u8>) {
|
||||
let offset = $(
|
||||
<$T as Encode>::ssz_fixed_len() +
|
||||
)*
|
||||
0;
|
||||
|
||||
let mut encoder = SszEncoder::container(buf, offset);
|
||||
|
||||
$(
|
||||
encoder.append(&self.$idx);
|
||||
)*
|
||||
|
||||
encoder.finalize();
|
||||
}
|
||||
}
|
||||
)+
|
||||
}
|
||||
}
|
||||
|
||||
impl_encode_for_tuples! {
|
||||
Tuple2 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
}
|
||||
Tuple3 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
}
|
||||
Tuple4 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
}
|
||||
Tuple5 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
}
|
||||
Tuple6 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
}
|
||||
Tuple7 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
}
|
||||
Tuple8 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
}
|
||||
Tuple9 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
}
|
||||
Tuple10 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
(9) -> J
|
||||
}
|
||||
Tuple11 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
(9) -> J
|
||||
(10) -> K
|
||||
}
|
||||
Tuple12 {
|
||||
(0) -> A
|
||||
(1) -> B
|
||||
(2) -> C
|
||||
(3) -> D
|
||||
(4) -> E
|
||||
(5) -> F
|
||||
(6) -> G
|
||||
(7) -> H
|
||||
(8) -> I
|
||||
(9) -> J
|
||||
(10) -> K
|
||||
(11) -> L
|
||||
}
|
||||
}
|
||||
|
||||
/// The SSZ "union" type.
|
||||
impl<T: Encode> Encode for Option<T> {
|
||||
fn is_ssz_fixed_len() -> bool {
|
||||
@ -287,4 +440,11 @@ mod tests {
|
||||
assert_eq!([1, 0, 0, 0].as_ssz_bytes(), vec![1, 0, 0, 0]);
|
||||
assert_eq!([1, 2, 3, 4].as_ssz_bytes(), vec![1, 2, 3, 4]);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tuple() {
|
||||
assert_eq!((10u8, 11u8).as_ssz_bytes(), vec![10, 11]);
|
||||
assert_eq!((10u32, 11u8).as_ssz_bytes(), vec![10, 0, 0, 0, 11]);
|
||||
assert_eq!((10u8, 11u8, 12u8).as_ssz_bytes(), vec![10, 11, 12]);
|
||||
}
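For tuples containing variable-length members, the container layout produced above places fixed parts and 4-byte offsets first, then the variable payloads. A small additional check in the same style (not in this diff; the expected bytes are worked out by hand from the encoder rules):

    #[test]
    fn tuple_with_variable_length_member() {
        // (u8, Vec<u8>): the u8 occupies one fixed byte; the Vec contributes a
        // 4-byte offset pointing at byte 5, where its payload begins.
        assert_eq!(
            (10u8, vec![1u8, 2]).as_ssz_bytes(),
            vec![10, 5, 0, 0, 0, 1, 2]
        );
    }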
|
||||
}
|
||||
|
@ -2,7 +2,7 @@
|
||||
//! format designed for use in Ethereum 2.0.
|
||||
//!
|
||||
//! Conforms to
|
||||
//! [v0.6.1](https://github.com/ethereum/eth2.0-specs/blob/v0.6.1/specs/simple-serialize.md) of the
|
||||
//! [v0.7.1](https://github.com/ethereum/eth2.0-specs/blob/v0.7.1/specs/simple-serialize.md) of the
|
||||
//! Ethereum 2.0 specification.
|
||||
//!
|
||||
//! ## Example
|
||||
@ -46,7 +46,10 @@ pub use encode::{Encode, SszEncoder};
|
||||
/// The number of bytes used to represent an offset.
|
||||
pub const BYTES_PER_LENGTH_OFFSET: usize = 4;
|
||||
/// The maximum value that can be represented using `BYTES_PER_LENGTH_OFFSET`.
|
||||
pub const MAX_LENGTH_VALUE: usize = (1 << (BYTES_PER_LENGTH_OFFSET * 8)) - 1;
|
||||
#[cfg(target_pointer_width = "32")]
|
||||
pub const MAX_LENGTH_VALUE: usize = (std::u32::MAX >> (8 * (4 - BYTES_PER_LENGTH_OFFSET))) as usize;
|
||||
#[cfg(target_pointer_width = "64")]
|
||||
pub const MAX_LENGTH_VALUE: usize = (std::u64::MAX >> (8 * (8 - BYTES_PER_LENGTH_OFFSET))) as usize;
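With `BYTES_PER_LENGTH_OFFSET = 4`, both branches evaluate to the same value: `u32::MAX >> 0` and `u64::MAX >> 32` are each `2^32 - 1 = 4_294_967_295`. Splitting the constant by pointer width appears intended to avoid the `1 << 32` shift in the previous expression, which would overflow a 32-bit `usize`.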
|
||||
|
||||
/// Convenience function to SSZ encode an object supporting ssz::Encode.
|
||||
///
|
||||
|
@ -346,4 +346,34 @@ mod round_trip {
|
||||
|
||||
round_trip(vec);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tuple_u8_u16() {
|
||||
let vec: Vec<(u8, u16)> = vec![
|
||||
(0, 0),
|
||||
(0, 1),
|
||||
(1, 0),
|
||||
(u8::max_value(), u16::max_value()),
|
||||
(0, u16::max_value()),
|
||||
(u8::max_value(), 0),
|
||||
(42, 12301),
|
||||
];
|
||||
|
||||
round_trip(vec);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn tuple_vec_vec() {
|
||||
let vec: Vec<(u64, Vec<u8>, Vec<Vec<u16>>)> = vec![
|
||||
(0, vec![], vec![vec![]]),
|
||||
(99, vec![101], vec![vec![], vec![]]),
|
||||
(
|
||||
42,
|
||||
vec![12, 13, 14],
|
||||
vec![vec![99, 98, 97, 96], vec![42, 44, 46, 48, 50]],
|
||||
),
|
||||
];
|
||||
|
||||
round_trip(vec);
|
||||
}
|
||||
}
|
||||
|
@ -1,14 +1,15 @@
|
||||
[package]
|
||||
name = "ssz_derive"
|
||||
name = "eth2_ssz_derive"
|
||||
version = "0.1.0"
|
||||
authors = ["Paul Hauner <paul@paulhauner.com>"]
|
||||
authors = ["Paul Hauner <paul@sigmaprime.io>"]
|
||||
edition = "2018"
|
||||
description = "Procedural derive macros for SSZ encoding and decoding."
|
||||
description = "Procedural derive macros to accompany the eth2_ssz crate."
|
||||
license = "Apache-2.0"
|
||||
|
||||
[lib]
|
||||
name = "ssz_derive"
|
||||
proc-macro = true
|
||||
|
||||
[dependencies]
|
||||
syn = "0.15"
|
||||
quote = "0.6"
|
||||
ssz = { path = "../ssz" }
|
||||
|
@ -1,4 +1,7 @@
|
||||
#![recursion_limit = "128"]
|
||||
//! Provides procedural derive macros for the `Encode` and `Decode` traits of the `eth2_ssz` crate.
|
||||
//!
|
||||
//! Supports field attributes, see each derive macro for more information.
|
||||
|
||||
extern crate proc_macro;
|
||||
|
||||
@ -61,6 +64,10 @@ fn should_skip_serializing(field: &syn::Field) -> bool {
|
||||
/// Implements `ssz::Encode` for some `struct`.
|
||||
///
|
||||
/// Fields are encoded in the order they are defined.
|
||||
///
|
||||
/// ## Field attributes
|
||||
///
|
||||
/// - `#[ssz(skip_serializing)]`: the field will not be serialized.
|
||||
#[proc_macro_derive(Encode, attributes(ssz))]
|
||||
pub fn ssz_encode_derive(input: TokenStream) -> TokenStream {
|
||||
let item = parse_macro_input!(input as DeriveInput);
|
||||
@ -132,6 +139,12 @@ fn should_skip_deserializing(field: &syn::Field) -> bool {
|
||||
/// Implements `ssz::Decode` for some `struct`.
|
||||
///
|
||||
/// Fields are decoded in the order they are defined.
|
||||
///
|
||||
/// ## Field attributes
|
||||
///
|
||||
/// - `#[ssz(skip_deserializing)]`: during de-serialization the field will be instantiated from a
|
||||
/// `Default` implementation. The decoder will assume that the field was not serialized at all
|
||||
/// (e.g., if it has been serialized, an error will be raised instead of `Default` overriding it).
|
||||
#[proc_macro_derive(Decode)]
|
||||
pub fn ssz_decode_derive(input: TokenStream) -> TokenStream {
|
||||
let item = parse_macro_input!(input as DeriveInput);
|
||||
|
@ -1,22 +0,0 @@
|
||||
use ssz::Encode;
|
||||
use ssz_derive::Encode;
|
||||
|
||||
#[derive(Debug, PartialEq, Encode)]
|
||||
pub struct Foo {
|
||||
a: u16,
|
||||
b: Vec<u8>,
|
||||
c: u16,
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn encode() {
|
||||
let foo = Foo {
|
||||
a: 42,
|
||||
b: vec![0, 1, 2, 3],
|
||||
c: 11,
|
||||
};
|
||||
|
||||
let bytes = vec![42, 0, 8, 0, 0, 0, 11, 0, 0, 1, 2, 3];
|
||||
|
||||
assert_eq!(foo.as_ssz_bytes(), bytes);
|
||||
}
|
17
eth2/utils/ssz_types/Cargo.toml
Normal file
@ -0,0 +1,17 @@
|
||||
[package]
|
||||
name = "ssz_types"
|
||||
version = "0.1.0"
|
||||
authors = ["Paul Hauner <paul@paulhauner.com>"]
|
||||
edition = "2018"
|
||||
|
||||
[dependencies]
|
||||
cached_tree_hash = { path = "../cached_tree_hash" }
|
||||
tree_hash = { path = "../tree_hash" }
|
||||
serde = "1.0"
|
||||
serde_derive = "1.0"
|
||||
serde_hex = { path = "../serde_hex" }
|
||||
eth2_ssz = { path = "../ssz" }
|
||||
typenum = "1.10"
|
||||
|
||||
[dev-dependencies]
|
||||
serde_yaml = "0.8"
|
1140
eth2/utils/ssz_types/src/bitfield.rs
Normal file
File diff suppressed because it is too large
335
eth2/utils/ssz_types/src/fixed_vector.rs
Normal file
@ -0,0 +1,335 @@
|
||||
use crate::Error;
|
||||
use serde_derive::{Deserialize, Serialize};
|
||||
use std::marker::PhantomData;
|
||||
use std::ops::{Deref, Index, IndexMut};
|
||||
use std::slice::SliceIndex;
|
||||
use typenum::Unsigned;
|
||||
|
||||
pub use typenum;
|
||||
|
||||
/// Emulates a SSZ `Vector` (distinct from a Rust `Vec`).
|
||||
///
|
||||
/// An ordered, heap-allocated, fixed-length, homogeneous collection of `T`, with `N` values.
|
||||
///
|
||||
/// This struct is backed by a Rust `Vec` but constrained such that it must be instantiated with a
|
||||
/// fixed number of elements and you may not add or remove elements, only modify.
|
||||
///
|
||||
/// The length of this struct is fixed at the type-level using
|
||||
/// [typenum](https://crates.io/crates/typenum).
|
||||
///
|
||||
/// ## Note
|
||||
///
|
||||
/// Whilst it is possible with this library, SSZ declares that a `FixedVector` with a length of `0`
|
||||
/// is illegal.
|
||||
///
|
||||
/// ## Example
|
||||
///
|
||||
/// ```
|
||||
/// use ssz_types::{FixedVector, typenum};
|
||||
///
|
||||
/// let base: Vec<u64> = vec![1, 2, 3, 4];
|
||||
///
|
||||
/// // Create a `FixedVector` from a `Vec` that has the expected length.
|
||||
/// let exact: FixedVector<_, typenum::U4> = FixedVector::from(base.clone());
|
||||
/// assert_eq!(&exact[..], &[1, 2, 3, 4]);
|
||||
///
|
||||
/// // Create a `FixedVector` from a `Vec` that is too long and the `Vec` is truncated.
|
||||
/// let short: FixedVector<_, typenum::U3> = FixedVector::from(base.clone());
|
||||
/// assert_eq!(&short[..], &[1, 2, 3]);
|
||||
///
|
||||
/// // Create a `FixedVector` from a `Vec` that is too short and the missing values are created
|
||||
/// // using `std::default::Default`.
|
||||
/// let long: FixedVector<_, typenum::U5> = FixedVector::from(base);
|
||||
/// assert_eq!(&long[..], &[1, 2, 3, 4, 0]);
|
||||
/// ```
|
||||
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
|
||||
#[serde(transparent)]
|
||||
pub struct FixedVector<T, N> {
|
||||
vec: Vec<T>,
|
||||
_phantom: PhantomData<N>,
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> FixedVector<T, N> {
|
||||
/// Returns `Ok` if the given `vec` equals the fixed length of `Self`. Otherwise returns
|
||||
/// `Err`.
|
||||
pub fn new(vec: Vec<T>) -> Result<Self, Error> {
|
||||
if vec.len() == Self::capacity() {
|
||||
Ok(Self {
|
||||
vec,
|
||||
_phantom: PhantomData,
|
||||
})
|
||||
} else {
|
||||
Err(Error::OutOfBounds {
|
||||
i: vec.len(),
|
||||
len: Self::capacity(),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
/// Identical to `Self::capacity()`; returns the type-level constant length.
|
||||
///
|
||||
/// Exists for compatibility with `Vec`.
|
||||
pub fn len(&self) -> usize {
|
||||
self.vec.len()
|
||||
}
|
||||
|
||||
/// True if the type-level constant length of `self` is zero.
|
||||
pub fn is_empty(&self) -> bool {
|
||||
self.len() == 0
|
||||
}
|
||||
|
||||
/// Returns the type-level constant length.
|
||||
pub fn capacity() -> usize {
|
||||
N::to_usize()
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Default, N: Unsigned> From<Vec<T>> for FixedVector<T, N> {
|
||||
fn from(mut vec: Vec<T>) -> Self {
|
||||
vec.resize_with(Self::capacity(), Default::default);
|
||||
|
||||
Self {
|
||||
vec,
|
||||
_phantom: PhantomData,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> Into<Vec<T>> for FixedVector<T, N> {
|
||||
fn into(self) -> Vec<T> {
|
||||
self.vec
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> Default for FixedVector<T, N> {
|
||||
fn default() -> Self {
|
||||
Self {
|
||||
vec: Vec::default(),
|
||||
_phantom: PhantomData,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned, I: SliceIndex<[T]>> Index<I> for FixedVector<T, N> {
|
||||
type Output = I::Output;
|
||||
|
||||
#[inline]
|
||||
fn index(&self, index: I) -> &Self::Output {
|
||||
Index::index(&self.vec, index)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned, I: SliceIndex<[T]>> IndexMut<I> for FixedVector<T, N> {
|
||||
#[inline]
|
||||
fn index_mut(&mut self, index: I) -> &mut Self::Output {
|
||||
IndexMut::index_mut(&mut self.vec, index)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> Deref for FixedVector<T, N> {
|
||||
type Target = [T];
|
||||
|
||||
fn deref(&self) -> &[T] {
|
||||
&self.vec[..]
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use super::*;
|
||||
use typenum::*;
|
||||
|
||||
#[test]
|
||||
fn new() {
|
||||
let vec = vec![42; 5];
|
||||
let fixed: Result<FixedVector<u64, U4>, _> = FixedVector::new(vec.clone());
|
||||
assert!(fixed.is_err());
|
||||
|
||||
let vec = vec![42; 3];
|
||||
let fixed: Result<FixedVector<u64, U4>, _> = FixedVector::new(vec.clone());
|
||||
assert!(fixed.is_err());
|
||||
|
||||
let vec = vec![42; 4];
|
||||
let fixed: Result<FixedVector<u64, U4>, _> = FixedVector::new(vec.clone());
|
||||
assert!(fixed.is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn indexing() {
|
||||
let vec = vec![1, 2];
|
||||
|
||||
let mut fixed: FixedVector<u64, U8192> = vec.clone().into();
|
||||
|
||||
assert_eq!(fixed[0], 1);
|
||||
assert_eq!(&fixed[0..1], &vec[0..1]);
|
||||
assert_eq!((&fixed[..]).len(), 8192);
|
||||
|
||||
fixed[1] = 3;
|
||||
assert_eq!(fixed[1], 3);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn length() {
|
||||
let vec = vec![42; 5];
|
||||
let fixed: FixedVector<u64, U4> = FixedVector::from(vec.clone());
|
||||
assert_eq!(&fixed[..], &vec[0..4]);
|
||||
|
||||
let vec = vec![42; 3];
|
||||
let fixed: FixedVector<u64, U4> = FixedVector::from(vec.clone());
|
||||
assert_eq!(&fixed[0..3], &vec[..]);
|
||||
assert_eq!(&fixed[..], &vec![42, 42, 42, 0][..]);
|
||||
|
||||
let vec = vec![];
|
||||
let fixed: FixedVector<u64, U4> = FixedVector::from(vec.clone());
|
||||
assert_eq!(&fixed[..], &vec![0, 0, 0, 0][..]);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn deref() {
|
||||
let vec = vec![0, 2, 4, 6];
|
||||
let fixed: FixedVector<u64, U4> = FixedVector::from(vec);
|
||||
|
||||
assert_eq!(fixed.get(0), Some(&0));
|
||||
assert_eq!(fixed.get(3), Some(&6));
|
||||
assert_eq!(fixed.get(4), None);
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> tree_hash::TreeHash for FixedVector<T, N>
|
||||
where
|
||||
T: tree_hash::TreeHash,
|
||||
{
|
||||
fn tree_hash_type() -> tree_hash::TreeHashType {
|
||||
tree_hash::TreeHashType::Vector
|
||||
}
|
||||
|
||||
fn tree_hash_packed_encoding(&self) -> Vec<u8> {
|
||||
unreachable!("Vector should never be packed.")
|
||||
}
|
||||
|
||||
fn tree_hash_packing_factor() -> usize {
|
||||
unreachable!("Vector should never be packed.")
|
||||
}
|
||||
|
||||
fn tree_hash_root(&self) -> Vec<u8> {
|
||||
tree_hash::impls::vec_tree_hash_root(&self.vec)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> cached_tree_hash::CachedTreeHash for FixedVector<T, N>
|
||||
where
|
||||
T: cached_tree_hash::CachedTreeHash + tree_hash::TreeHash,
|
||||
{
|
||||
fn new_tree_hash_cache(
|
||||
&self,
|
||||
depth: usize,
|
||||
) -> Result<cached_tree_hash::TreeHashCache, cached_tree_hash::Error> {
|
||||
let (cache, _overlay) = cached_tree_hash::vec::new_tree_hash_cache(&self.vec, depth)?;
|
||||
|
||||
Ok(cache)
|
||||
}
|
||||
|
||||
fn tree_hash_cache_schema(&self, depth: usize) -> cached_tree_hash::BTreeSchema {
|
||||
cached_tree_hash::vec::produce_schema(&self.vec, depth)
|
||||
}
|
||||
|
||||
fn update_tree_hash_cache(
|
||||
&self,
|
||||
cache: &mut cached_tree_hash::TreeHashCache,
|
||||
) -> Result<(), cached_tree_hash::Error> {
|
||||
cached_tree_hash::vec::update_tree_hash_cache(&self.vec, cache)?;
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> ssz::Encode for FixedVector<T, N>
|
||||
where
|
||||
T: ssz::Encode,
|
||||
{
|
||||
fn is_ssz_fixed_len() -> bool {
|
||||
true
|
||||
}
|
||||
|
||||
fn ssz_fixed_len() -> usize {
|
||||
if <Self as ssz::Encode>::is_ssz_fixed_len() {
|
||||
T::ssz_fixed_len() * N::to_usize()
|
||||
} else {
|
||||
ssz::BYTES_PER_LENGTH_OFFSET
|
||||
}
|
||||
}
|
||||
|
||||
fn ssz_append(&self, buf: &mut Vec<u8>) {
|
||||
if T::is_ssz_fixed_len() {
|
||||
buf.reserve(T::ssz_fixed_len() * self.len());
|
||||
|
||||
for item in &self.vec {
|
||||
item.ssz_append(buf);
|
||||
}
|
||||
} else {
|
||||
let mut encoder = ssz::SszEncoder::list(buf, self.len() * ssz::BYTES_PER_LENGTH_OFFSET);
|
||||
|
||||
for item in &self.vec {
|
||||
encoder.append(item);
|
||||
}
|
||||
|
||||
encoder.finalize();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> ssz::Decode for FixedVector<T, N>
|
||||
where
|
||||
T: ssz::Decode + Default,
|
||||
{
|
||||
fn is_ssz_fixed_len() -> bool {
|
||||
T::is_ssz_fixed_len()
|
||||
}
|
||||
|
||||
fn ssz_fixed_len() -> usize {
|
||||
if <Self as ssz::Decode>::is_ssz_fixed_len() {
|
||||
T::ssz_fixed_len() * N::to_usize()
|
||||
} else {
|
||||
ssz::BYTES_PER_LENGTH_OFFSET
|
||||
}
|
||||
}
|
||||
|
||||
fn from_ssz_bytes(bytes: &[u8]) -> Result<Self, ssz::DecodeError> {
|
||||
if bytes.is_empty() {
|
||||
Ok(FixedVector::from(vec![]))
|
||||
} else if T::is_ssz_fixed_len() {
|
||||
bytes
|
||||
.chunks(T::ssz_fixed_len())
|
||||
.map(|chunk| T::from_ssz_bytes(chunk))
|
||||
.collect::<Result<Vec<T>, _>>()
|
||||
.and_then(|vec| Ok(vec.into()))
|
||||
} else {
|
||||
ssz::decode_list_of_variable_length_items(bytes).and_then(|vec| Ok(vec.into()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod ssz_tests {
|
||||
use super::*;
|
||||
use ssz::*;
|
||||
use typenum::*;
|
||||
|
||||
#[test]
|
||||
fn encode() {
|
||||
let vec: FixedVector<u16, U2> = vec![0; 2].into();
|
||||
assert_eq!(vec.as_ssz_bytes(), vec![0, 0, 0, 0]);
|
||||
assert_eq!(<FixedVector<u16, U2> as Encode>::ssz_fixed_len(), 4);
|
||||
}
|
||||
|
||||
fn round_trip<T: Encode + Decode + std::fmt::Debug + PartialEq>(item: T) {
|
||||
let encoded = &item.as_ssz_bytes();
|
||||
assert_eq!(T::from_ssz_bytes(&encoded), Ok(item));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn u16_len_8() {
|
||||
round_trip::<FixedVector<u16, U8>>(vec![42; 8].into());
|
||||
round_trip::<FixedVector<u16, U8>>(vec![0; 8].into());
|
||||
}
|
||||
}
|
66
eth2/utils/ssz_types/src/lib.rs
Normal file
@ -0,0 +1,66 @@
|
||||
//! Provides types with unique properties required for SSZ serialization and Merklization:
|
||||
//!
|
||||
//! - `FixedVector`: A heap-allocated list with a size that is fixed at compile time.
|
||||
//! - `VariableList`: A heap-allocated list that cannot grow past a type-level maximum length.
|
||||
//! - `BitList`: A heap-allocated bitfield with a type-level _maximum_ length.
|
||||
//! - `BitVector`: A heap-allocated bitfield with a type-level _fixed_ length.
|
||||
//!
|
||||
//! These structs are required as SSZ serialization and Merklization rely upon type-level lengths
|
||||
//! for padding and verification.
|
||||
//!
|
||||
//! ## Example
|
||||
//! ```
|
||||
//! use ssz_types::*;
|
||||
//!
|
||||
//! pub struct Example {
|
||||
//! bit_vector: BitVector<typenum::U8>,
|
||||
//! bit_list: BitList<typenum::U8>,
|
||||
//! variable_list: VariableList<u64, typenum::U8>,
|
||||
//! fixed_vector: FixedVector<u64, typenum::U8>,
|
||||
//! }
|
||||
//!
|
||||
//! let mut example = Example {
|
||||
//! bit_vector: Bitfield::new(),
|
||||
//! bit_list: Bitfield::with_capacity(4).unwrap(),
|
||||
//! variable_list: <_>::from(vec![0, 1]),
|
||||
//! fixed_vector: <_>::from(vec![2, 3]),
|
||||
//! };
|
||||
//!
|
||||
//! assert_eq!(example.bit_vector.len(), 8);
|
||||
//! assert_eq!(example.bit_list.len(), 4);
|
||||
//! assert_eq!(&example.variable_list[..], &[0, 1]);
|
||||
//! assert_eq!(&example.fixed_vector[..], &[2, 3, 0, 0, 0, 0, 0, 0]);
|
||||
//!
|
||||
//! ```
|
||||
|
||||
#[macro_use]
|
||||
mod bitfield;
|
||||
mod fixed_vector;
|
||||
mod variable_list;
|
||||
|
||||
pub use bitfield::{BitList, BitVector, Bitfield};
|
||||
pub use fixed_vector::FixedVector;
|
||||
pub use typenum;
|
||||
pub use variable_list::VariableList;
|
||||
|
||||
pub mod length {
|
||||
pub use crate::bitfield::{Fixed, Variable};
|
||||
}
|
||||
|
||||
/// Returned when an item encounters an error.
|
||||
#[derive(PartialEq, Debug)]
|
||||
pub enum Error {
|
||||
OutOfBounds {
|
||||
i: usize,
|
||||
len: usize,
|
||||
},
|
||||
/// A `BitList` does not have a set bit, therefore its length is unknowable.
|
||||
MissingLengthInformation,
|
||||
/// A `BitList` has excess bits set to true.
|
||||
ExcessBits,
|
||||
/// A `BitList` has an invalid number of bytes for a given bit length.
|
||||
InvalidByteCount {
|
||||
given: usize,
|
||||
expected: usize,
|
||||
},
|
||||
}
|
320
eth2/utils/ssz_types/src/variable_list.rs
Normal file
@ -0,0 +1,320 @@
|
||||
use crate::Error;
|
||||
use serde_derive::{Deserialize, Serialize};
|
||||
use std::marker::PhantomData;
|
||||
use std::ops::{Deref, Index, IndexMut};
|
||||
use std::slice::SliceIndex;
|
||||
use typenum::Unsigned;
|
||||
|
||||
pub use typenum;
|
||||
|
||||
/// Emulates a SSZ `List`.
|
||||
///
|
||||
/// An ordered, heap-allocated, variable-length, homogeneous collection of `T`, with no more than
|
||||
/// `N` values.
|
||||
///
|
||||
/// This struct is backed by a Rust `Vec` but constrained such that it may never hold more than
|
||||
/// `N` elements; values may be appended with `push` until that maximum is reached.
|
||||
///
|
||||
/// The maximum length of this struct is fixed at the type-level using
|
||||
/// [typenum](https://crates.io/crates/typenum).
|
||||
///
|
||||
/// ## Example
|
||||
///
|
||||
/// ```
|
||||
/// use ssz_types::{VariableList, typenum};
|
||||
///
|
||||
/// let base: Vec<u64> = vec![1, 2, 3, 4];
|
||||
///
|
||||
/// // Create a `VariableList` from a `Vec` that has the expected length.
|
||||
/// let exact: VariableList<_, typenum::U4> = VariableList::from(base.clone());
|
||||
/// assert_eq!(&exact[..], &[1, 2, 3, 4]);
|
||||
///
|
||||
/// // Create a `VariableList` from a `Vec` that is too long and the `Vec` is truncated.
|
||||
/// let short: VariableList<_, typenum::U3> = VariableList::from(base.clone());
|
||||
/// assert_eq!(&short[..], &[1, 2, 3]);
|
||||
///
|
||||
/// // Create a `VariableList` from a `Vec` that is shorter than the maximum.
|
||||
/// let mut long: VariableList<_, typenum::U5> = VariableList::from(base);
|
||||
/// assert_eq!(&long[..], &[1, 2, 3, 4]);
|
||||
///
|
||||
/// // Push a value if it does not exceed the maximum.
|
||||
/// long.push(5).unwrap();
|
||||
/// assert_eq!(&long[..], &[1, 2, 3, 4, 5]);
|
||||
///
|
||||
/// // Pushing a value that _would_ exceed the maximum fails.
|
||||
/// assert!(long.push(6).is_err());
|
||||
/// ```
|
||||
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize)]
|
||||
#[serde(transparent)]
|
||||
pub struct VariableList<T, N> {
|
||||
vec: Vec<T>,
|
||||
_phantom: PhantomData<N>,
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> VariableList<T, N> {
|
||||
/// Returns `Ok` if the given `vec` does not exceed the maximum length of `Self`. Otherwise returns
|
||||
/// `Err`.
|
||||
pub fn new(vec: Vec<T>) -> Result<Self, Error> {
|
||||
if vec.len() <= N::to_usize() {
|
||||
Ok(Self {
|
||||
vec,
|
||||
_phantom: PhantomData,
|
||||
})
|
||||
} else {
|
||||
Err(Error::OutOfBounds {
|
||||
i: vec.len(),
|
||||
len: Self::max_len(),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
/// Returns the number of values presently in `self`.
|
||||
pub fn len(&self) -> usize {
|
||||
self.vec.len()
|
||||
}
|
||||
|
||||
/// True if `self` does not contain any values.
|
||||
pub fn is_empty(&self) -> bool {
|
||||
self.len() == 0
|
||||
}
|
||||
|
||||
/// Returns the type-level maximum length.
|
||||
pub fn max_len() -> usize {
|
||||
N::to_usize()
|
||||
}
|
||||
|
||||
/// Appends `value` to the back of `self`.
|
||||
///
|
||||
/// Returns `Err(Error::OutOfBounds)` when appending `value` would exceed the maximum length.
|
||||
pub fn push(&mut self, value: T) -> Result<(), Error> {
|
||||
if self.vec.len() < Self::max_len() {
|
||||
self.vec.push(value);
|
||||
Ok(())
|
||||
} else {
|
||||
Err(Error::OutOfBounds {
|
||||
i: self.vec.len() + 1,
|
||||
len: Self::max_len(),
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: Default, N: Unsigned> From<Vec<T>> for VariableList<T, N> {
|
||||
fn from(mut vec: Vec<T>) -> Self {
|
||||
vec.truncate(N::to_usize());
|
||||
|
||||
Self {
|
||||
vec,
|
||||
_phantom: PhantomData,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> Into<Vec<T>> for VariableList<T, N> {
|
||||
fn into(self) -> Vec<T> {
|
||||
self.vec
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> Default for VariableList<T, N> {
|
||||
fn default() -> Self {
|
||||
Self {
|
||||
vec: Vec::default(),
|
||||
_phantom: PhantomData,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned, I: SliceIndex<[T]>> Index<I> for VariableList<T, N> {
|
||||
type Output = I::Output;
|
||||
|
||||
#[inline]
|
||||
fn index(&self, index: I) -> &Self::Output {
|
||||
Index::index(&self.vec, index)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned, I: SliceIndex<[T]>> IndexMut<I> for VariableList<T, N> {
|
||||
#[inline]
|
||||
fn index_mut(&mut self, index: I) -> &mut Self::Output {
|
||||
IndexMut::index_mut(&mut self.vec, index)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T, N: Unsigned> Deref for VariableList<T, N> {
|
||||
type Target = [T];
|
||||
|
||||
fn deref(&self) -> &[T] {
|
||||
&self.vec[..]
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod test {
|
||||
use super::*;
|
||||
use typenum::*;
|
||||
|
||||
#[test]
|
||||
fn new() {
|
||||
let vec = vec![42; 5];
|
||||
let fixed: Result<VariableList<u64, U4>, _> = VariableList::new(vec.clone());
|
||||
assert!(fixed.is_err());
|
||||
|
||||
let vec = vec![42; 3];
|
||||
let fixed: Result<VariableList<u64, U4>, _> = VariableList::new(vec.clone());
|
||||
assert!(fixed.is_ok());
|
||||
|
||||
let vec = vec![42; 4];
|
||||
let fixed: Result<VariableList<u64, U4>, _> = VariableList::new(vec.clone());
|
||||
assert!(fixed.is_ok());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn indexing() {
|
||||
let vec = vec![1, 2];
|
||||
|
||||
let mut fixed: VariableList<u64, U8192> = vec.clone().into();
|
||||
|
||||
assert_eq!(fixed[0], 1);
|
||||
assert_eq!(&fixed[0..1], &vec[0..1]);
|
||||
assert_eq!((&fixed[..]).len(), 2);
|
||||
|
||||
fixed[1] = 3;
|
||||
assert_eq!(fixed[1], 3);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn length() {
|
||||
let vec = vec![42; 5];
|
||||
let fixed: VariableList<u64, U4> = VariableList::from(vec.clone());
|
||||
assert_eq!(&fixed[..], &vec[0..4]);
|
||||
|
||||
let vec = vec![42; 3];
|
||||
let fixed: VariableList<u64, U4> = VariableList::from(vec.clone());
|
||||
assert_eq!(&fixed[0..3], &vec[..]);
|
||||
assert_eq!(&fixed[..], &vec![42, 42, 42][..]);
|
||||
|
||||
let vec = vec![];
|
||||
        let fixed: VariableList<u64, U4> = VariableList::from(vec.clone());
        assert_eq!(&fixed[..], &vec![][..]);
    }

    #[test]
    fn deref() {
        let vec = vec![0, 2, 4, 6];
        let fixed: VariableList<u64, U4> = VariableList::from(vec);

        assert_eq!(fixed.get(0), Some(&0));
        assert_eq!(fixed.get(3), Some(&6));
        assert_eq!(fixed.get(4), None);
    }
}

impl<T, N: Unsigned> tree_hash::TreeHash for VariableList<T, N>
where
    T: tree_hash::TreeHash,
{
    fn tree_hash_type() -> tree_hash::TreeHashType {
        tree_hash::TreeHashType::Vector
    }

    fn tree_hash_packed_encoding(&self) -> Vec<u8> {
        unreachable!("Vector should never be packed.")
    }

    fn tree_hash_packing_factor() -> usize {
        unreachable!("Vector should never be packed.")
    }

    fn tree_hash_root(&self) -> Vec<u8> {
        tree_hash::impls::vec_tree_hash_root(&self.vec)
    }
}

impl<T, N: Unsigned> cached_tree_hash::CachedTreeHash for VariableList<T, N>
where
    T: cached_tree_hash::CachedTreeHash + tree_hash::TreeHash,
{
    fn new_tree_hash_cache(
        &self,
        depth: usize,
    ) -> Result<cached_tree_hash::TreeHashCache, cached_tree_hash::Error> {
        let (cache, _overlay) = cached_tree_hash::vec::new_tree_hash_cache(&self.vec, depth)?;

        Ok(cache)
    }

    fn tree_hash_cache_schema(&self, depth: usize) -> cached_tree_hash::BTreeSchema {
        cached_tree_hash::vec::produce_schema(&self.vec, depth)
    }

    fn update_tree_hash_cache(
        &self,
        cache: &mut cached_tree_hash::TreeHashCache,
    ) -> Result<(), cached_tree_hash::Error> {
        cached_tree_hash::vec::update_tree_hash_cache(&self.vec, cache)?;

        Ok(())
    }
}

impl<T, N: Unsigned> ssz::Encode for VariableList<T, N>
where
    T: ssz::Encode,
{
    fn is_ssz_fixed_len() -> bool {
        <Vec<T>>::is_ssz_fixed_len()
    }

    fn ssz_fixed_len() -> usize {
        <Vec<T>>::ssz_fixed_len()
    }

    fn ssz_append(&self, buf: &mut Vec<u8>) {
        self.vec.ssz_append(buf)
    }
}

impl<T, N: Unsigned> ssz::Decode for VariableList<T, N>
where
    T: ssz::Decode + Default,
{
    fn is_ssz_fixed_len() -> bool {
        <Vec<T>>::is_ssz_fixed_len()
    }

    fn ssz_fixed_len() -> usize {
        <Vec<T>>::ssz_fixed_len()
    }

    fn from_ssz_bytes(bytes: &[u8]) -> Result<Self, ssz::DecodeError> {
        let vec = <Vec<T>>::from_ssz_bytes(bytes)?;

        Self::new(vec).map_err(|e| ssz::DecodeError::BytesInvalid(format!("VariableList {:?}", e)))
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use ssz::*;
    use typenum::*;

    #[test]
    fn encode() {
        let vec: VariableList<u16, U2> = vec![0; 2].into();
        assert_eq!(vec.as_ssz_bytes(), vec![0, 0, 0, 0]);
        assert_eq!(<VariableList<u16, U2> as Encode>::ssz_fixed_len(), 4);
    }

    fn round_trip<T: Encode + Decode + std::fmt::Debug + PartialEq>(item: T) {
        let encoded = &item.as_ssz_bytes();
        assert_eq!(T::from_ssz_bytes(&encoded), Ok(item));
    }

    #[test]
    fn u16_len_8() {
        round_trip::<VariableList<u16, U8>>(vec![42; 8].into());
        round_trip::<VariableList<u16, U8>>(vec![0; 8].into());
    }
}
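The `Decode` impl above funnels through `VariableList::new`, so a list longer than the type-level maximum should be rejected at decode time rather than silently truncated (assuming `new` enforces the maximum length, as the `map_err` above suggests). A minimal standalone sketch, with crate import names (`ssz_types`, `ssz`, `typenum`) assumed from this commit's workspace:

```rust
use ssz::{Decode, Encode};
use ssz_types::VariableList;
use typenum::U4;

fn main() {
    // Within the maximum length, encoding and decoding round-trips cleanly.
    let list: VariableList<u64, U4> = VariableList::from(vec![1, 2, 3, 4]);
    let bytes = list.as_ssz_bytes();
    assert_eq!(VariableList::<u64, U4>::from_ssz_bytes(&bytes), Ok(list));

    // Five `u64` values exceed the `U4` maximum, so decoding should fail with
    // `DecodeError::BytesInvalid` via `VariableList::new`.
    let too_long = vec![1u64, 2, 3, 4, 5].as_ssz_bytes();
    assert!(VariableList::<u64, U4>::from_ssz_bytes(&too_long).is_err());
}
```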
@ -5,9 +5,11 @@ authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"

[dev-dependencies]
rand = "0.7"
tree_hash_derive = { path = "../tree_hash_derive" }

[dependencies]
ethereum-types = "0.5"
hashing = { path = "../hashing" }
int_to_bytes = { path = "../int_to_bytes" }
lazy_static = "0.1"
@ -1,5 +1,5 @@
use super::*;
use crate::merkleize::merkle_root;
use crate::merkle_root;
use ethereum_types::H256;
use hashing::hash;
use int_to_bytes::int_to_bytes32;
@ -1,5 +1,17 @@
#[macro_use]
extern crate lazy_static;

pub mod impls;
pub mod merkleize;
mod merkleize_padded;
mod merkleize_standard;

pub use merkleize_padded::merkleize_padded;
pub use merkleize_standard::merkleize_standard;

/// Alias to `merkleize_padded(&bytes, 0)`
pub fn merkle_root(bytes: &[u8]) -> Vec<u8> {
    merkleize_padded(&bytes, 0)
}

pub const BYTES_PER_CHUNK: usize = 32;
pub const HASHSIZE: usize = 32;
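Taken together with the `merkle_root` alias above, the root of exactly two chunks reduces to a single hash of their concatenation. A minimal standalone sketch (crate import names assumed from the workspace in this commit):

```rust
use hashing::hash;
use tree_hash::merkle_root;

fn main() {
    // 64 bytes = exactly two 32-byte chunks, so no padding is required and the
    // Merkle root is simply hash(chunk_0 || chunk_1).
    let two_chunks = vec![42u8; 2 * 32];
    assert_eq!(merkle_root(&two_chunks), hash(&two_chunks));
}
```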
@ -44,7 +56,7 @@ macro_rules! tree_hash_ssz_encoding_as_vector {
            }

            fn tree_hash_root(&self) -> Vec<u8> {
                tree_hash::merkleize::merkle_root(&ssz::ssz_encode(self))
                tree_hash::merkle_root(&ssz::ssz_encode(self))
            }
        }
    };
366
eth2/utils/tree_hash/src/merkleize_padded.rs
Normal file
@ -0,0 +1,366 @@
use super::BYTES_PER_CHUNK;
use hashing::hash;

/// The size of the cache that stores padding nodes for a given height.
///
/// Currently, we panic if we encounter a tree with a height larger than `MAX_TREE_DEPTH`.
///
/// It is set to 48 as we expect it to be sufficiently high that we won't exceed it.
pub const MAX_TREE_DEPTH: usize = 48;

lazy_static! {
    /// Cached zero hashes where `ZERO_HASHES[i]` is the hash of a Merkle tree with 2^i zero leaves.
    static ref ZERO_HASHES: Vec<Vec<u8>> = {
        let mut hashes = vec![vec![0; 32]; MAX_TREE_DEPTH + 1];

        for i in 0..MAX_TREE_DEPTH {
            hashes[i + 1] = hash_concat(&hashes[i], &hashes[i]);
        }

        hashes
    };
}
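The `ZERO_HASHES` cache above is built from a simple recurrence: the padding node at height `i + 1` is the hash of two copies of the padding node at height `i`, starting from a 32-byte zero chunk. A standalone sketch of the same recurrence, assuming only the `hashing` crate:

```rust
use hashing::hash;

/// Recompute the padding node for a given height from scratch.
fn zero_hash(height: usize) -> Vec<u8> {
    let mut node = vec![0u8; 32];
    for _ in 0..height {
        // Parent = hash(node || node).
        let mut preimage = node.clone();
        preimage.extend_from_slice(&node);
        node = hash(&preimage);
    }
    node
}

fn main() {
    // Height 0 is the zero chunk itself; every other height is one hash deeper.
    assert_eq!(zero_hash(0), vec![0u8; 32]);
    assert_eq!(zero_hash(2).len(), 32);
}
```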
/// Merkleize `bytes` and return the root, optionally padding the tree out to `min_leaves` number of
/// leaves.
///
/// First all nodes are extracted from `bytes` and then padding nodes are added until the number of
/// leaf chunks is greater than or equal to `min_leaves`. Callers may set `min_leaves` to `0` if no
/// additional padding chunks should be added to the given `bytes`.
///
/// If `bytes.len() <= BYTES_PER_CHUNK`, no hashing is done and `bytes` is returned, potentially
/// padded out to `BYTES_PER_CHUNK` length with `0`.
///
/// ## CPU Performance
///
/// A cache of `MAX_TREE_DEPTH` hashes is stored to avoid re-computing the hashes of padding nodes
/// (or their parents). Therefore, adding padding nodes only incurs one more hash per additional
/// height of the tree.
///
/// ## Memory Performance
///
/// This algorithm has three interesting memory usage properties:
///
/// 1. The maximum memory footprint is roughly `O(V / 2)` memory, where `V` is the number of leaf
///    chunks with values (i.e., leaves that are not padding). This means adding padding nodes to
///    the tree does not increase the memory footprint.
/// 2. At each height of the tree half of the memory is freed until only a single chunk is stored.
/// 3. The input `bytes` are not copied into another list before processing.
///
/// _Note: there are some minor memory overheads, including a handful of usizes and a list of
/// `MAX_TREE_DEPTH` hashes as `lazy_static` constants._
pub fn merkleize_padded(bytes: &[u8], min_leaves: usize) -> Vec<u8> {
    // If the bytes are just one chunk or less, pad to one chunk and return without hashing.
    if bytes.len() <= BYTES_PER_CHUNK && min_leaves <= 1 {
        let mut o = bytes.to_vec();
        o.resize(BYTES_PER_CHUNK, 0);
        return o;
    }

    assert!(
        bytes.len() > BYTES_PER_CHUNK || min_leaves > 1,
        "Merkle hashing only needs to happen if there is more than one chunk"
    );

    // The number of leaves that can be made directly from `bytes`.
    let leaves_with_values = (bytes.len() + (BYTES_PER_CHUNK - 1)) / BYTES_PER_CHUNK;

    // The number of parents that have at least one non-padding leaf.
    //
    // Since there is more than one node in this tree (see prior assertion), there should always be
    // one or more initial parent nodes.
    let initial_parents_with_values = std::cmp::max(1, next_even_number(leaves_with_values) / 2);

    // The number of leaves in the full tree (including padding nodes).
    let num_leaves = std::cmp::max(leaves_with_values, min_leaves).next_power_of_two();

    // The number of levels in the tree.
    //
    // A tree with a single node has `height == 1`.
    let height = num_leaves.trailing_zeros() as usize + 1;

    assert!(height >= 2, "The tree should have two or more heights");

    // A buffer/scratch-space used for storing each round of hashes at each height.
    //
    // This buffer is kept as small as possible; it will shrink so it never stores a padding node.
    let mut chunks = ChunkStore::with_capacity(initial_parents_with_values);

    // Create a parent in the `chunks` buffer for every two chunks in `bytes`.
    //
    // I.e., do the first round of hashing, hashing from the `bytes` slice and filling the `chunks`
    // struct.
    for i in 0..initial_parents_with_values {
        let start = i * BYTES_PER_CHUNK * 2;

        // Hash two chunks, creating a parent chunk.
        let hash = match bytes.get(start..start + BYTES_PER_CHUNK * 2) {
            // All bytes are available, hash as usual.
            Some(slice) => hash(slice),
            // Unable to get all the bytes, get a small slice and pad it out.
            None => {
                let mut preimage = bytes
                    .get(start..)
                    .expect("`i` can only be larger than zero if there are bytes to read")
                    .to_vec();
                preimage.resize(BYTES_PER_CHUNK * 2, 0);
                hash(&preimage)
            }
        };

        assert_eq!(
            hash.len(),
            BYTES_PER_CHUNK,
            "Hashes should be exactly one chunk"
        );

        // Store the parent node.
        chunks
            .set(i, &hash)
            .expect("Buffer should always have capacity for parent nodes")
    }
    // Iterate through all heights above the leaf nodes and either (a) hash two children or, (b)
    // hash a left child and a right padding node.
    //
    // Skip the 0'th height because the leaves have already been processed. Skip the highest
    // height in the tree, as it is the root and does not require hashing.
    //
    // The padding nodes for each height are cached via `lazy_static` to simulate non-adjacent
    // padding nodes (i.e., avoid doing unnecessary hashing).
    for height in 1..height - 1 {
        let child_nodes = chunks.len();
        let parent_nodes = next_even_number(child_nodes) / 2;

        // For each pair of nodes stored in `chunks`:
        //
        // - If two nodes are available, hash them to form a parent.
        // - If one node is available, hash it and a cached padding node to form a parent.
        for i in 0..parent_nodes {
            let (left, right) = match (chunks.get(i * 2), chunks.get(i * 2 + 1)) {
                (Ok(left), Ok(right)) => (left, right),
                (Ok(left), Err(_)) => (left, get_zero_hash(height)),
                // Deriving `parent_nodes` from `chunks.len()` has ensured that we never encounter
                // the scenario where we expect two nodes but there are none.
                (Err(_), Err(_)) => unreachable!("Parent must have one child"),
                // `chunks` is a contiguous array so it is impossible for an index to be missing
                // when a higher index is present.
                (Err(_), Ok(_)) => unreachable!("Parent must have a left child"),
            };

            assert!(
                left.len() == right.len() && right.len() == BYTES_PER_CHUNK,
                "Both children should be `BYTES_PER_CHUNK` bytes."
            );

            let hash = hash_concat(left, right);

            // Store a parent node.
            chunks
                .set(i, &hash)
                .expect("Buf is adequate size for parent");
        }

        // Shrink the buffer so it neatly fits the number of new nodes created in this round.
        //
        // The number of `parent_nodes` is either decreasing or stable. It never increases.
        chunks.truncate(parent_nodes);
    }

    // There should be a single chunk left in the buffer and it is the Merkle root.
    let root = chunks.into_vec();

    assert_eq!(root.len(), BYTES_PER_CHUNK, "Only one chunk should remain");

    root
}
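A quick illustration of the early-return branch at the top of `merkleize_padded` (a standalone sketch, assuming the function is used via the public `tree_hash` re-export): one chunk or less, with `min_leaves <= 1`, is returned zero-padded to 32 bytes without any hashing.

```rust
use tree_hash::merkleize_padded;

fn main() {
    let root = merkleize_padded(&[1, 2, 3], 0);

    // The three input bytes come back zero-padded to a single 32-byte chunk.
    let mut expected = vec![1u8, 2, 3];
    expected.resize(32, 0);
    assert_eq!(root, expected);
}
```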
/// A helper struct for storing words of `BYTES_PER_CHUNK` size in a flat byte array.
#[derive(Debug)]
struct ChunkStore(Vec<u8>);

impl ChunkStore {
    /// Creates a new instance with space for `chunks` chunks, all initialised to zero.
    fn with_capacity(chunks: usize) -> Self {
        Self(vec![0; chunks * BYTES_PER_CHUNK])
    }

    /// Set the `i`th chunk to `value`.
    ///
    /// Returns `Err` if `value.len() != BYTES_PER_CHUNK` or `i` is out-of-bounds.
    fn set(&mut self, i: usize, value: &[u8]) -> Result<(), ()> {
        if i < self.len() && value.len() == BYTES_PER_CHUNK {
            let slice = &mut self.0[i * BYTES_PER_CHUNK..i * BYTES_PER_CHUNK + BYTES_PER_CHUNK];
            slice.copy_from_slice(value);
            Ok(())
        } else {
            Err(())
        }
    }

    /// Gets the `i`th chunk.
    ///
    /// Returns `Err` if `i` is out-of-bounds.
    fn get(&self, i: usize) -> Result<&[u8], ()> {
        if i < self.len() {
            Ok(&self.0[i * BYTES_PER_CHUNK..i * BYTES_PER_CHUNK + BYTES_PER_CHUNK])
        } else {
            Err(())
        }
    }

    /// Returns the number of chunks presently stored in `self`.
    fn len(&self) -> usize {
        self.0.len() / BYTES_PER_CHUNK
    }

    /// Truncates `self` to `num_chunks` chunks.
    ///
    /// Functionally identical to `Vec::truncate`.
    fn truncate(&mut self, num_chunks: usize) {
        self.0.truncate(num_chunks * BYTES_PER_CHUNK)
    }

    /// Consumes `self`, returning the underlying byte array.
    fn into_vec(self) -> Vec<u8> {
        self.0
    }
}
/// Returns a cached padding node for a given height.
fn get_zero_hash(height: usize) -> &'static [u8] {
    if height <= MAX_TREE_DEPTH {
        &ZERO_HASHES[height]
    } else {
        panic!("Tree exceeds MAX_TREE_DEPTH of {}", MAX_TREE_DEPTH)
    }
}

/// Concatenate two vectors.
fn concat(mut vec1: Vec<u8>, mut vec2: Vec<u8>) -> Vec<u8> {
    vec1.append(&mut vec2);
    vec1
}

/// Compute the hash of two other hashes concatenated.
fn hash_concat(h1: &[u8], h2: &[u8]) -> Vec<u8> {
    hash(&concat(h1.to_vec(), h2.to_vec()))
}

/// Returns the next even number following `n`. If `n` is even, `n` is returned.
fn next_even_number(n: usize) -> usize {
    n + n % 2
}
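The `next_even_number(n) / 2` expression used when deriving `parent_nodes` is just `ceil(n / 2)`: a lone left child still needs one parent, formed by pairing it with a padding node. A small standalone check of that equivalence:

```rust
fn next_even_number(n: usize) -> usize {
    n + n % 2
}

fn main() {
    // `next_even_number(n) / 2` equals the number of parents needed for `n` children.
    for n in 1..=8 {
        assert_eq!(next_even_number(n) / 2, (n + 1) / 2);
    }
}
```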
#[cfg(test)]
mod test {
    use super::*;

    pub fn reference_root(bytes: &[u8]) -> Vec<u8> {
        crate::merkleize_standard(&bytes)[0..32].to_vec()
    }

    macro_rules! common_tests {
        ($get_bytes: ident) => {
            #[test]
            fn zero_value_0_nodes() {
                test_against_reference(&$get_bytes(0 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_1_nodes() {
                test_against_reference(&$get_bytes(1 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_2_nodes() {
                test_against_reference(&$get_bytes(2 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_3_nodes() {
                test_against_reference(&$get_bytes(3 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_4_nodes() {
                test_against_reference(&$get_bytes(4 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_8_nodes() {
                test_against_reference(&$get_bytes(8 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_9_nodes() {
                test_against_reference(&$get_bytes(9 * BYTES_PER_CHUNK), 0);
            }

            #[test]
            fn zero_value_8_nodes_varying_min_length() {
                for i in 0..64 {
                    test_against_reference(&$get_bytes(8 * BYTES_PER_CHUNK), i);
                }
            }

            #[test]
            fn zero_value_range_of_nodes() {
                for i in 0..32 * BYTES_PER_CHUNK {
                    test_against_reference(&$get_bytes(i), 0);
                }
            }

            #[test]
            fn max_tree_depth_min_nodes() {
                let input = vec![0; 10 * BYTES_PER_CHUNK];
                let min_nodes = 2usize.pow(MAX_TREE_DEPTH as u32);
                assert_eq!(
                    merkleize_padded(&input, min_nodes),
                    get_zero_hash(MAX_TREE_DEPTH)
                );
            }
        };
    }

    mod zero_value {
        use super::*;

        fn zero_bytes(bytes: usize) -> Vec<u8> {
            vec![0; bytes]
        }

        common_tests!(zero_bytes);
    }

    mod random_value {
        use super::*;
        use rand::RngCore;

        fn random_bytes(bytes: usize) -> Vec<u8> {
            // Allocate a zeroed buffer of the requested length before filling it; `fill_bytes`
            // only overwrites the bytes that already exist in the slice.
            let mut bytes = vec![0; bytes];
            rand::thread_rng().fill_bytes(&mut bytes);
            bytes
        }

        common_tests!(random_bytes);
    }

    fn test_against_reference(input: &[u8], min_nodes: usize) {
        let mut reference_input = input.to_vec();
        reference_input.resize(
            std::cmp::max(
                reference_input.len(),
                min_nodes.next_power_of_two() * BYTES_PER_CHUNK,
            ),
            0,
        );

        assert_eq!(
            reference_root(&reference_input),
            merkleize_padded(&input, min_nodes),
            "input.len(): {:?}",
            input.len()
        );
    }
}
@ -1,12 +1,23 @@
use super::*;
use hashing::hash;

pub fn merkle_root(bytes: &[u8]) -> Vec<u8> {
    // TODO: replace this with a more memory efficient method.
    efficient_merkleize(&bytes)[0..32].to_vec()
}

pub fn efficient_merkleize(bytes: &[u8]) -> Vec<u8> {
/// Merkleizes bytes and returns the root, using a simple algorithm that makes no attempt to avoid
/// processing or storing padding bytes.
///
/// The input `bytes` will be padded to ensure that the number of leaves is a power-of-two.
///
/// It is likely a better choice to use [merkleize_padded](fn.merkleize_padded.html) instead.
///
/// ## CPU Performance
///
/// Will hash all nodes in the tree, even if they are padding and pre-determined.
///
/// ## Memory Performance
///
/// - Duplicates the input `bytes`.
/// - Stores all internal nodes, even if they are padding.
/// - Does not free up unused memory during operation.
pub fn merkleize_standard(bytes: &[u8]) -> Vec<u8> {
    // If the bytes are just one chunk (or less than one chunk) just return them.
    if bytes.len() <= HASHSIZE {
        let mut o = bytes.to_vec();
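The test helper `reference_root` above relies on the two algorithms agreeing on the root; the same relationship can be stated directly. A standalone sketch using the public re-exports of the `tree_hash` crate in this commit:

```rust
use tree_hash::{merkleize_padded, merkleize_standard};

fn main() {
    // The memory-efficient and the standard Merkleization should produce the same root,
    // which is the first 32 bytes of `merkleize_standard`'s output.
    let bytes = vec![7u8; 5 * 32];
    assert_eq!(
        merkleize_padded(&bytes, 0),
        merkleize_standard(&bytes)[0..32].to_vec()
    );
}
```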
@ -150,7 +150,7 @@ pub fn tree_hash_derive(input: TokenStream) -> TokenStream {
                leaves.append(&mut self.#idents.tree_hash_root());
            )*

            tree_hash::merkleize::merkle_root(&leaves)
            tree_hash::merkle_root(&leaves)
        }
    }
};
@ -180,7 +180,7 @@ pub fn tree_hash_signed_root_derive(input: TokenStream) -> TokenStream {
                leaves.append(&mut self.#idents.tree_hash_root());
            )*

            tree_hash::merkleize::merkle_root(&leaves)
            tree_hash::merkle_root(&leaves)
        }
    }
};
@ -1,5 +1,5 @@
use cached_tree_hash::{CachedTreeHash, TreeHashCache};
use tree_hash::{merkleize::merkle_root, SignedRoot, TreeHash};
use tree_hash::{merkle_root, SignedRoot, TreeHash};
use tree_hash_derive::{CachedTreeHash, SignedRoot, TreeHash};

#[derive(Clone, Debug, TreeHash, CachedTreeHash)]
@ -7,5 +7,5 @@ edition = "2018"
[dependencies]
bytes = "0.4.10"
hashing = { path = "../utils/hashing" }
ssz = { path = "../utils/ssz" }
eth2_ssz = { path = "../utils/ssz" }
types = { path = "../types" }

@ -17,7 +17,7 @@ serde = "1.0"
serde_derive = "1.0"
serde_repr = "0.1"
serde_yaml = "0.8"
ssz = { path = "../../eth2/utils/ssz" }
eth2_ssz = { path = "../../eth2/utils/ssz" }
tree_hash = { path = "../../eth2/utils/tree_hash" }
cached_tree_hash = { path = "../../eth2/utils/cached_tree_hash" }
state_processing = { path = "../../eth2/state_processing" }

@ -14,7 +14,7 @@ path = "src/lib.rs"

[dependencies]
bls = { path = "../eth2/utils/bls" }
ssz = { path = "../eth2/utils/ssz" }
eth2_ssz = { path = "../eth2/utils/ssz" }
eth2_config = { path = "../eth2/utils/eth2_config" }
tree_hash = { path = "../eth2/utils/tree_hash" }
clap = "2.32.0"

@ -26,8 +26,9 @@ types = { path = "../eth2/types" }
serde = "1.0"
serde_derive = "1.0"
slog = "^2.2.3"
slog-term = "^2.4.0"
slog-async = "^2.3.0"
slog-json = "^2.3"
slog-term = "^2.4.0"
tokio = "0.1.18"
tokio-timer = "0.2.10"
toml = "^0.5"
@ -12,7 +12,7 @@ The VC is responsible for the following tasks:
  duties require.
- Completing all the fields on a new block (e.g., RANDAO reveal, signature) and
  publishing the block to a BN.
- Prompting the BN to produce a new shard atteststation as per a validators
- Prompting the BN to produce a new shard attestation as per a validators
  duties.
- Ensuring that no slashable messages are signed by a validator private key.
- Keeping track of the system clock and how it relates to slots/epochs.

@ -40,7 +40,7 @@ index. The outcome of a successful poll is a `EpochDuties` struct:

```rust
EpochDuties {
    validator_index: u64,
    block_prodcution_slot: u64,
    block_production_slot: u64,
}
```
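To make the duty-polling flow concrete, here is a hypothetical sketch of how a validator client might use the polled `EpochDuties` to decide whether the current slot is its block-production slot. The struct mirrors the fields shown above; the helper function and its values are invented purely for illustration.

```rust
// Hypothetical illustration only; not part of the Lighthouse codebase.
struct EpochDuties {
    validator_index: u64,
    block_production_slot: u64,
}

fn should_produce_block(duties: &EpochDuties, current_slot: u64) -> bool {
    duties.block_production_slot == current_slot
}

fn main() {
    let duties = EpochDuties {
        validator_index: 41,
        block_production_slot: 8,
    };

    assert!(should_produce_block(&duties, 8));
    assert!(!should_produce_block(&duties, 9));
}
```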
@ -2,11 +2,11 @@ use bincode;
use bls::Keypair;
use clap::ArgMatches;
use serde_derive::{Deserialize, Serialize};
use slog::{debug, error, info};
use std::fs;
use std::fs::File;
use slog::{debug, error, info, o, Drain};
use std::fs::{self, File, OpenOptions};
use std::io::{Error, ErrorKind};
use std::path::PathBuf;
use std::sync::Mutex;
use types::{EthSpec, MainnetEthSpec};

/// Stores the core configuration for this validator instance.
@ -14,6 +14,8 @@ use types::{EthSpec, MainnetEthSpec};
pub struct Config {
    /// The data directory, which stores all validator databases
    pub data_dir: PathBuf,
    /// The path where the logs will be written
    pub log_file: PathBuf,
    /// The server at which the Beacon Node can be contacted
    pub server: String,
    /// The number of slots per epoch.
@ -27,6 +29,7 @@ impl Default for Config {
    fn default() -> Self {
        Self {
            data_dir: PathBuf::from(".lighthouse-validator"),
            log_file: PathBuf::from(""),
            server: "localhost:5051".to_string(),
            slots_per_epoch: MainnetEthSpec::slots_per_epoch(),
        }
@ -38,11 +41,20 @@ impl Config {
    ///
    /// Returns an error if arguments are obviously invalid. May succeed even if some values are
    /// invalid.
    pub fn apply_cli_args(&mut self, args: &ArgMatches) -> Result<(), &'static str> {
    pub fn apply_cli_args(
        &mut self,
        args: &ArgMatches,
        log: &mut slog::Logger,
    ) -> Result<(), &'static str> {
        if let Some(datadir) = args.value_of("datadir") {
            self.data_dir = PathBuf::from(datadir);
        };

        if let Some(log_file) = args.value_of("logfile") {
            self.log_file = PathBuf::from(log_file);
            self.update_logger(log)?;
        };

        if let Some(srv) = args.value_of("server") {
            self.server = srv.to_string();
        };
@ -50,6 +62,38 @@ impl Config {
        Ok(())
    }
    // Update the logger to output JSON to the specified file.
    fn update_logger(&mut self, log: &mut slog::Logger) -> Result<(), &'static str> {
        let file = OpenOptions::new()
            .create(true)
            .write(true)
            .truncate(true)
            .open(&self.log_file);

        if file.is_err() {
            return Err("Cannot open log file");
        }
        let file = file.unwrap();

        if let Some(file) = self.log_file.to_str() {
            info!(
                *log,
                "Log file specified, output will now be written to {} in json.", file
            );
        } else {
            info!(
                *log,
                "Log file specified, output will now be written in json"
            );
        }

        let drain = Mutex::new(slog_json::Json::default(file)).fuse();
        let drain = slog_async::Async::new(drain).build().fuse();
        *log = slog::Logger::root(drain, o!());

        Ok(())
    }

    /// Try to load keys from validator_dir, returning `None` if none are found or an error occurs.
    #[allow(dead_code)]
    pub fn fetch_keys(&self, log: &slog::Logger) -> Option<Vec<Keypair>> {
@ -26,7 +26,7 @@ fn main() {
    let decorator = slog_term::TermDecorator::new().build();
    let drain = slog_term::CompactFormat::new(decorator).build().fuse();
    let drain = slog_async::Async::new(drain).build().fuse();
    let log = slog::Logger::root(drain, o!());
    let mut log = slog::Logger::root(drain, o!());

    // CLI
    let matches = App::new("Lighthouse Validator Client")
@ -36,10 +36,18 @@ fn main() {
        .arg(
            Arg::with_name("datadir")
                .long("datadir")
                .short("d")
                .value_name("DIR")
                .help("Data directory for keys and databases.")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("logfile")
                .long("logfile")
                .value_name("logfile")
                .help("File path where output will be written.")
                .takes_value(true),
        )
        .arg(
            Arg::with_name("eth2-spec")
                .long("eth2-spec")
@ -122,7 +130,7 @@ fn main() {
    client_config.data_dir = data_dir.clone();

    // Update the client config with any CLI args.
    match client_config.apply_cli_args(&matches) {
    match client_config.apply_cli_args(&matches, &mut log) {
        Ok(()) => (),
        Err(s) => {
            crit!(log, "Failed to parse ClientConfig CLI arguments"; "error" => s);