Revert "Merge pull request #200 from sigp/new-structure"

This reverts commit d7a3545be1, reversing
changes made to 1da06c156c.
Paul Hauner 2019-02-14 12:09:18 +11:00
parent d7a3545be1
commit 35c914baa6
163 changed files with 15510 additions and 137 deletions

.gitmodules vendored

@@ -1,9 +0,0 @@
[submodule "lighthouse-libs"]
path = lighthouse-libs
url = git@github.com:sigp/lighthouse-libs.git
[submodule "lighthouse-beacon"]
path = lighthouse-beacon
url = git@github.com:sigp/lighthouse-beacon.git
[submodule "lighthouse-validator"]
path = lighthouse-validator
url = https://github.com/sigp/lighthouse-validator

.travis.yml

@@ -1,6 +1,4 @@
language: rust
git:
submodules: false
before_install:
- curl -OL https://github.com/google/protobuf/releases/download/v3.4.0/protoc-3.4.0-linux-x86_64.zip
- unzip protoc-3.4.0-linux-x86_64.zip -d protoc3
@@ -8,15 +6,10 @@ before_install:
- sudo mv protoc3/include/* /usr/local/include/
- sudo chown $USER /usr/local/bin/protoc
- sudo chown -R $USER /usr/local/include/google
- sed -i 's/git@github.com:/https:\/\/github.com\//' .gitmodules
- git submodule update --init --recursive
script:
- cargo build --verbose --all --manifest-path lighthouse-beacon/Cargo.toml
- cargo build --verbose --all --manifest-path lighthouse-validator/Cargo.toml
- cargo build --verbose --all --manifest-path lighthouse-libs/Cargo.toml
- cargo test --verbose --all --manifest-path lighthouse-beacon/Cargo.toml
- cargo test --verbose --all --manifest-path lighthouse-validator/Cargo.toml
- cargo test --verbose --all --manifest-path lighthouse-libs/Cargo.toml
- cargo build --verbose --all
- cargo test --verbose --all
- cargo fmt --all -- --check
rust:
- stable
- beta

CONTRIBUTING.md Normal file

@@ -0,0 +1,121 @@
# Contributors Guide
Lighthouse is an open-source Ethereum 2.0 client. We're community driven and
welcome all contributions. We aim to provide a constructive, respectful and fun
environment for collaboration.
We are active contributors to the [Ethereum 2.0 specification](https://github.com/ethereum/eth2.0-specs) and attend all [Eth
2.0 implementers calls](https://github.com/ethereum/eth2.0-pm).
This guide is geared towards beginners. If you're an open-source veteran, feel
free to just skim this document and get straight into crushing issues.
## Why Contribute
There are many reasons you might contribute to Lighthouse. For example, you may
wish to:
- contribute to the Ethereum ecosystem.
- establish yourself as a layer-1 Ethereum developer.
- work in the amazing Rust programming language.
- learn how to participate in open-source projects.
- expand your software development skills.
- flex your skills in a public forum to expand your career
opportunities (or simply for the fun of it).
- grow your network by working with core Ethereum developers.
## How to Contribute
Regardless of the reason, the process to begin contributing is very much the
same. We operate like a typical open-source project on GitHub: the
repository [Issues](https://github.com/sigp/lighthouse/issues) is where we
track what needs to be done and [Pull
Requests](https://github.com/sigp/lighthouse/pulls) is where code gets
reviewed. We use [gitter](https://gitter.im/sigp/lighthouse) to chat
informally.
### General Work-Flow
We recommend the following work-flow for contributors:
1. **Find an issue** to work on, either because it's interesting or suited to
your skill-set. Use comments to communicate your intentions and ask
questions.
2. **Work in a feature branch** of your personal fork
(github.com/YOUR_NAME/lighthouse) of the main repository
(github.com/sigp/lighthouse).
3. Once you feel you have addressed the issue, **create a pull-request** to merge
your changes into the main repository.
4. Wait for the repository maintainers to **review your changes** to ensure the
issue is addressed satisfactorily. Optionally, mention your PR on
[gitter](https://gitter.im/sigp/lighthouse).
5. If the issue is addressed, the repository maintainers will **merge your
pull-request** and you'll be an official contributor!
Generally, you find an issue you'd like to work on and announce your intentions
to start work in a comment on the issue. Then, do your work on a separate
branch (a "feature branch") in your own fork of the main repository. Once
you're happy and you think the issue has been addressed, create a pull request
into the main repository.
### First-time Set-up
First time contributors can get their git environment up and running with these
steps:
1. [Create a
fork](https://help.github.com/articles/fork-a-repo/#fork-an-example-repository)
and [clone
it](https://help.github.com/articles/fork-a-repo/#step-2-create-a-local-clone-of-your-fork)
to your local machine.
2. [Add an _"upstream"_
branch](https://help.github.com/articles/fork-a-repo/#step-3-configure-git-to-sync-your-fork-with-the-original-spoon-knife-repository)
that tracks github.com/sigp/lighthouse using `$ git remote add upstream
https://github.com/sigp/lighthouse.git` (pro-tip: [use SSH](https://help.github.com/articles/connecting-to-github-with-ssh/) instead of HTTPS).
3. Create a new feature branch with `$ git checkout -b your_feature_name`. The
name of your branch isn't critical but it should be short and instructive.
E.g., if you're fixing a bug with serialization, you could name your branch
`fix_serialization_bug`.
4. Commit your changes and push them to your fork with `$ git push origin
your_feature_name`.
5. Go to your fork on github.com and use the web interface to create a pull
request into the sigp/lighthouse repo.
From there, the repository maintainers will review the PR and either accept it
or provide some constructive feedback.
There's a great
[guide](https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/)
by Rob Allen that provides much more detail on each of these steps, if you're
having trouble. As always, jump on [gitter](https://gitter.im/sigp/lighthouse)
if you get stuck.
## FAQs
### I don't think I have anything to add
There's lots to be done, across all sorts of tasks. You can do anything
from correcting typos through to writing core consensus code. If you reach out,
we'll include you.
### I'm not sure my Rust is good enough
We're open to developers of all levels. If you create a PR and your code
doesn't meet our standards, we'll help you fix it and we'll share the reasoning
with you. Contributing to open-source is a great way to learn.
### I'm not sure I know enough about Ethereum 2.0
No problem, there are plenty of tasks that don't require extensive Ethereum
knowledge. You can learn about Ethereum as you go.
### I'm afraid of making a mistake and looking silly
Don't be. We're all about personal development and constructive feedback. If you
make a mistake and learn from it, everyone wins.
### I don't like the way you do things
Please make an issue and explain why. We're open to constructive criticism and
will happily change our ways.

Cargo.toml Normal file

@@ -0,0 +1,21 @@
[workspace]
members = [
"eth2/attester",
"eth2/block_producer",
"eth2/fork_choice",
"eth2/state_processing",
"eth2/types",
"eth2/utils/bls",
"eth2/utils/boolean-bitfield",
"eth2/utils/hashing",
"eth2/utils/honey-badger-split",
"eth2/utils/slot_clock",
"eth2/utils/ssz",
"eth2/utils/vec_shuffle",
"beacon_node",
"beacon_node/db",
"beacon_node/beacon_chain",
"beacon_node/beacon_chain/test_harness",
"protos",
"validator_client",
]

Dockerfile Normal file

@@ -0,0 +1,17 @@
FROM rust:latest
RUN apt-get update && apt-get install -y clang libclang-dev cmake build-essential git unzip autoconf libtool
RUN git clone https://github.com/google/protobuf.git && \
cd protobuf && \
./autogen.sh && \
./configure && \
make && \
make install && \
ldconfig && \
make clean && \
cd .. && \
rm -r protobuf
RUN mkdir /cargocache && chmod -R ugo+rwX /cargocache

Jenkinsfile vendored Normal file

@@ -0,0 +1,20 @@
pipeline {
agent {
dockerfile {
filename 'Dockerfile'
args '-v cargo-cache:/cargocache:rw -e "CARGO_HOME=/cargocache"'
}
}
stages {
stage('Build') {
steps {
sh 'cargo build'
}
}
stage('Test') {
steps {
sh 'cargo test --all'
}
}
}
}

README.md

@@ -7,49 +7,7 @@ Chain, maintained by Sigma Prime.
The "Serenity" project is also known as "Ethereum 2.0" or "Shasper".
## Project Structure
The Lighthouse project is managed across four GitHub repositories:
- [sigp/lighthouse](https://github.com/sigp/lighthouse) (this repo): The
"integration" repository which provides:
- Project-wide documentation
- A landing-page for users and contributors.
- In the future, various other integration tests and orchestration suites.
- [sigp/lighthouse-libs](https://github.com/sigp/lighthouse-libs): Contains
Rust crates common to the entire Lighthouse project, including:
- Pure specification logic (e.g., state transitions)
- SSZ (SimpleSerialize)
- BLS Signature libraries, and more.
- [sigp/lighthouse-beacon](https://github.com/sigp/lighthouse-beacon): The
beacon node binary, responsible for connection to peers across the
network and maintaining a view of the Beacon Chain.
- [sigp/lighthouse-validator](https://github.com/sigp/lighthouse-validator):
The validator client binary, which connects to a beacon node and fulfils
the duties of a staked validator (producing and attesting to blocks).
## Contributing
We welcome new contributors and greatly appreciate the efforts from existing
contributors.
If you'd like to contribute to development on Lighthouse, we recommend checking
for [issues on the lighthouse-libs
repo](https://github.com/sigp/lighthouse-libs/issues) first, then checking the
other repositories.
If you don't find anything there, please reach out on the
[gitter](https://gitter.im/sigp/lighthouse) channel.
Additional resources:
- [ONBOARDING.md](docs/ONBOARDING.md): General on-boarding info,
including style-guide.
- [LIGHTHOUSE.md](docs/LIGHTHOUSE.md): Project goals and ethos.
- [RUNNING.md](docs/RUNNING.md): Step-by-step on getting the code running.
- [SERENITY.md](docs/SERENITY.md): Introduction to Ethereum Serenity.
## Project Summary
## Lighthouse Client
Lighthouse is an open-source Ethereum Serenity client that is currently under
development. Designed as a Serenity-only client, Lighthouse will not
@@ -61,6 +19,15 @@ to existing clients, such as
[Parity-Ethereum](https://github.com/paritytech/parity-ethereum), via RPC to enable
present-Ethereum functionality.
### Further Reading
- [About Lighthouse](docs/lighthouse.md): Goals, Ideology and Ethos surrounding
this implementation.
- [What is Ethereum Serenity](docs/serenity.md): an introduction to Ethereum Serenity.
If you'd like some background on Sigma Prime, please see the [Lighthouse Update
\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
[company website](https://sigmaprime.io).
### Components
@@ -94,7 +61,7 @@ by the team:
from the Ethereum Foundation to develop *simpleserialize* (SSZ), a
purpose-built serialization format for sending information across a network.
Check out the [SSZ
implementation](https://github.com/sigp/lighthouse-libs/tree/master/ssz)
implementation](https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz)
and this
[research](https://github.com/sigp/serialization_sandbox/blob/report/report/serialization_report.md)
on serialization formats for more information.
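
For a feel of the API: a minimal, hedged sketch of encoding a value with the in-tree `ssz` crate (this assumes the crate as a dependency and that `u64` implements its `Encodable` trait; the values here are illustrative only):

```rust
use ssz::ssz_encode;

fn main() {
    // Any type implementing the crate's `Encodable` trait can be encoded;
    // a bare `u64` is used purely for illustration.
    let slot: u64 = 42;
    let bytes: Vec<u8> = ssz_encode(&slot);
    println!("{} encodes to {} byte(s)", slot, bytes.len());
}
```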
@@ -112,23 +79,89 @@ In addition to these components we are also working on database schemas, RPC
frameworks, specification development, database optimizations (e.g.,
bloom-filters), and tons of other interesting stuff (at least we think so).
### Directory Structure
Here we provide an overview of the directory structure:
- `beacon_chain/`: contains logic derived directly from the specification.
E.g., shuffling algorithms, state transition logic and structs, block
validation, BLS crypto, etc.
- `lighthouse/`: contains logic specific to this client implementation. E.g.,
CLI parsing, RPC end-points, databases, etc.
### Running
**NOTE: The cryptography libraries used in this implementation are
experimental. As such, all cryptography is assumed to be insecure.**
This code-base is still very much under development and does not provide any
user-facing functionality. For developers and researchers, there are several
tests and benchmarks which may be of interest.
A few basic steps are needed to get set up:
1. Install [rustup](https://rustup.rs/), the Rust toolchain manager (Linux | macOS | Windows). To install it, run the following command in your terminal: `$ curl https://sh.rustup.rs -sSf | sh`
2. (Linux & macOS) To configure your current shell, run: `$ source $HOME/.cargo/env`
3. Use the command `rustup show` to get information about the Rust installation. You should see that the
active toolchain is the stable version.
4. Run `rustc --version` to check the installation and version of Rust.
- Updates can be performed using `rustup update`.
5. Install build dependencies (Arch packages are listed here, your distribution will likely be similar):
- `clang`: required by RocksDB.
- `protobuf`: required for protobuf serialization (gRPC).
6. Navigate to the working directory.
7. Run the test suite using the command `cargo test --all`. If you're running it for the
first time, grab a coffee in the meantime: building, compiling and running all the test
cases takes a while. If there are no errors, everything is working properly and it's time
to get your hands dirty.
If there is an error, please raise an [issue](https://github.com/sigp/lighthouse/issues).
We will help you.
8. Alternatively, or in addition to the above step, you may run the benchmarks using
the command `cargo bench --all`.
##### Note:
Lighthouse presently runs on Rust `stable`; however, benchmarks currently require the
`nightly` version.
##### Note for Windows users:
Perl may also be required to build Lighthouse. You can install [Strawberry Perl](http://strawberryperl.com/),
or alternatively use the choco install command `choco install strawberryperl`.
Additionally, the dependency `protoc-grpcio v0.3.1` is reported to have issues compiling on Windows. You can specify
a known working version by editing the version in the "build-dependencies" section of protos/Cargo.toml to
`protoc-grpcio = "<=0.3.0"`.
### Contributing
**Lighthouse welcomes contributors with open arms.**
If you would like to learn more about Ethereum Serenity and/or
[Rust](https://www.rust-lang.org/), we are more than happy to on-board you
and assign you some tasks. We aim to be as accepting and understanding as
possible, and we are glad to up-skill contributors in exchange for their
assistance with the project.
Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're
always looking for the best way to implement things and welcome all
respectful criticisms.
If you are looking to contribute, please head to our
[onboarding documentation](https://github.com/sigp/lighthouse/blob/master/docs/onboarding.md).
If you'd like to contribute, try having a look through the [open
issues](https://github.com/sigp/lighthouse/issues) (tip: look for the [good
first
issue](https://github.com/sigp/lighthouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse) channel. We need
your support!
## Contact
The best place for discussion is the [sigp/lighthouse gitter](https://gitter.im/sigp/lighthouse).
Ping @paulhauner or @AgeManning to get the quickest response.
If you'd like some background on Sigma Prime, please see the [Lighthouse Update
\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
[company website](https://sigmaprime.io).
# Donations
We accept donations at the following Ethereum address. All donations go towards
funding development of Ethereum 2.0.
If you support the cause, we could certainly use donations to help fund development:
[`0x25c4a76E7d118705e7Ea2e9b7d8C59930d8aCD3b`](https://etherscan.io/address/0x25c4a76e7d118705e7ea2e9b7d8c59930d8acd3b)
Alternatively, you can contribute via [Gitcoin Grant](https://gitcoin.co/grants/25/lighthouse-ethereum-20-client).
We appreciate all contributions to the project.
`0x25c4a76E7d118705e7Ea2e9b7d8C59930d8aCD3b`

beacon_node/Cargo.toml Normal file

@@ -0,0 +1,24 @@
[package]
name = "beacon_node"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
bls = { path = "../eth2/utils/bls" }
beacon_chain = { path = "beacon_chain" }
grpcio = { version = "0.4", default-features = false, features = ["protobuf-codec"] }
protobuf = "2.0.2"
protos = { path = "../protos" }
clap = "2.32.0"
db = { path = "db" }
dirs = "1.0.3"
futures = "0.1.23"
fork_choice = { path = "../eth2/fork_choice" }
slog = "^2.2.3"
slot_clock = { path = "../eth2/utils/slot_clock" }
slog-term = "^2.4.0"
slog-async = "^2.3.0"
types = { path = "../eth2/types" }
ssz = { path = "../eth2/utils/ssz" }
tokio = "0.1"

beacon_node/beacon_chain/Cargo.toml Normal file

@@ -0,0 +1,25 @@
[package]
name = "beacon_chain"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
block_producer = { path = "../../eth2/block_producer" }
bls = { path = "../../eth2/utils/bls" }
boolean-bitfield = { path = "../../eth2/utils/boolean-bitfield" }
db = { path = "../db" }
failure = "0.1"
failure_derive = "0.1"
hashing = { path = "../../eth2/utils/hashing" }
fork_choice = { path = "../../eth2/fork_choice" }
parking_lot = "0.7"
log = "0.4"
env_logger = "0.6"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
slot_clock = { path = "../../eth2/utils/slot_clock" }
ssz = { path = "../../eth2/utils/ssz" }
state_processing = { path = "../../eth2/state_processing" }
types = { path = "../../eth2/types" }

beacon_node/beacon_chain/src/attestation_aggregator.rs Normal file

@@ -0,0 +1,217 @@
use state_processing::validate_attestation_without_signature;
use std::collections::{HashMap, HashSet};
use types::{
beacon_state::CommitteesError, AggregateSignature, Attestation, AttestationData, BeaconState,
Bitfield, ChainSpec, FreeAttestation, Signature,
};
const PHASE_0_CUSTODY_BIT: bool = false;
/// Provides the functionality to:
///
/// - Receive a `FreeAttestation` and aggregate it into an `Attestation` (or create a new one if it
/// doesn't exist).
/// - Store all aggregated or created `Attestation`s.
/// - Produce a list of attestations that would be valid for inclusion in some `BeaconState` (and
/// therefore valid for inclusion in a `BeaconBlock`).
///
/// Note: `Attestations` are stored in memory and never deleted. This is not scalable and must be
/// rectified in a future revision.
#[derive(Default)]
pub struct AttestationAggregator {
store: HashMap<Vec<u8>, Attestation>,
}
pub struct Outcome {
pub valid: bool,
pub message: Message,
}
pub enum Message {
/// The free attestation was added to an existing attestation.
Aggregated,
/// The free attestation has already been aggregated to an existing attestation.
AggregationNotRequired,
/// The free attestation was transformed into a new attestation.
NewAttestationCreated,
/// The supplied `validator_index` is not in the committee for the given `shard` and `slot`.
BadValidatorIndex,
/// The given `signature` did not match the `pubkey` in the given
/// `state.validator_registry`.
BadSignature,
/// The given `slot` does not match the validator's committee assignment.
BadSlot,
/// The given `shard` does not match the validator's committee assignment.
BadShard,
}
macro_rules! some_or_invalid {
($expression: expr, $error: expr) => {
match $expression {
Some(x) => x,
None => {
return Ok(Outcome {
valid: false,
message: $error,
});
}
}
};
}
impl AttestationAggregator {
/// Instantiates a new AttestationAggregator with an empty database.
pub fn new() -> Self {
Self {
store: HashMap::new(),
}
}
/// Accepts some `FreeAttestation`, validates it and either aggregates it into some existing
/// `Attestation` or produces a new `Attestation`.
///
/// The "validation" provided is not complete, instead the following points are checked:
/// - The given `validator_index` is in the committee for the given `shard` for the given
/// `slot`.
/// - The signature is verified against that of the validator at `validator_index`.
pub fn process_free_attestation(
&mut self,
state: &BeaconState,
free_attestation: &FreeAttestation,
spec: &ChainSpec,
) -> Result<Outcome, CommitteesError> {
let (slot, shard, committee_index) = some_or_invalid!(
state.attestation_slot_and_shard_for_validator(
free_attestation.validator_index as usize,
spec,
)?,
Message::BadValidatorIndex
);
if free_attestation.data.slot != slot {
return Ok(Outcome {
valid: false,
message: Message::BadSlot,
});
}
if free_attestation.data.shard != shard {
return Ok(Outcome {
valid: false,
message: Message::BadShard,
});
}
let signable_message = free_attestation.data.signable_message(PHASE_0_CUSTODY_BIT);
let validator_record = some_or_invalid!(
state
.validator_registry
.get(free_attestation.validator_index as usize),
Message::BadValidatorIndex
);
if !free_attestation
.signature
.verify(&signable_message, &validator_record.pubkey)
{
return Ok(Outcome {
valid: false,
message: Message::BadSignature,
});
}
if let Some(existing_attestation) = self.store.get(&signable_message) {
if let Some(updated_attestation) = aggregate_attestation(
existing_attestation,
&free_attestation.signature,
committee_index as usize,
) {
self.store.insert(signable_message, updated_attestation);
Ok(Outcome {
valid: true,
message: Message::Aggregated,
})
} else {
Ok(Outcome {
valid: true,
message: Message::AggregationNotRequired,
})
}
} else {
let mut aggregate_signature = AggregateSignature::new();
aggregate_signature.add(&free_attestation.signature);
let mut aggregation_bitfield = Bitfield::new();
aggregation_bitfield.set(committee_index as usize, true);
let new_attestation = Attestation {
data: free_attestation.data.clone(),
aggregation_bitfield,
custody_bitfield: Bitfield::new(),
aggregate_signature,
};
self.store.insert(signable_message, new_attestation);
Ok(Outcome {
valid: true,
message: Message::NewAttestationCreated,
})
}
}
/// Returns all known attestations which are:
///
/// - Valid for the given state
/// - Not already in `state.latest_attestations`.
pub fn get_attestations_for_state(
&self,
state: &BeaconState,
spec: &ChainSpec,
) -> Vec<Attestation> {
let mut known_attestation_data: HashSet<AttestationData> = HashSet::new();
state.latest_attestations.iter().for_each(|attestation| {
known_attestation_data.insert(attestation.data.clone());
});
self.store
.values()
.filter_map(|attestation| {
if validate_attestation_without_signature(&state, attestation, spec).is_ok()
&& !known_attestation_data.contains(&attestation.data)
{
Some(attestation.clone())
} else {
None
}
})
.collect()
}
}
/// Produces a new `Attestation` where:
///
/// - `signature` is added to `Attestation.aggregate_signature`
/// - `Attestation.aggregation_bitfield[committee_index]` is set to true.
fn aggregate_attestation(
existing_attestation: &Attestation,
signature: &Signature,
committee_index: usize,
) -> Option<Attestation> {
let already_signed = existing_attestation
.aggregation_bitfield
.get(committee_index)
.unwrap_or(false);
if already_signed {
None
} else {
let mut aggregation_bitfield = existing_attestation.aggregation_bitfield.clone();
aggregation_bitfield.set(committee_index, true);
let mut aggregate_signature = existing_attestation.aggregate_signature.clone();
aggregate_signature.add(&signature);
Some(Attestation {
aggregation_bitfield,
aggregate_signature,
..existing_attestation.clone()
})
}
}
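
The aggregate-or-create decision above is the heart of the aggregator. Below is a minimal, self-contained sketch of the same flow using plain `std` types; `MiniAttestation` and `MiniAggregator` are illustrative stand-ins for the real `Attestation`/`Bitfield` types, and the BLS signature handling is omitted:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

/// Simplified stand-in for `Attestation`: one bit per committee member.
struct MiniAttestation {
    aggregation_bits: Vec<bool>,
}

/// Keyed by the signable message, like `AttestationAggregator.store`.
struct MiniAggregator {
    store: HashMap<Vec<u8>, MiniAttestation>,
}

impl MiniAggregator {
    fn process(
        &mut self,
        signable_message: Vec<u8>,
        committee_index: usize,
        committee_size: usize,
    ) -> &'static str {
        match self.store.entry(signable_message) {
            Entry::Occupied(mut entry) => {
                let existing = entry.get_mut();
                if existing.aggregation_bits[committee_index] {
                    // Already aggregated; mirrors `Message::AggregationNotRequired`.
                    "aggregation not required"
                } else {
                    // Set the validator's bit (the real code also folds the BLS
                    // signature into the aggregate); mirrors `Message::Aggregated`.
                    existing.aggregation_bits[committee_index] = true;
                    "aggregated"
                }
            }
            Entry::Vacant(entry) => {
                // First vote for this data; mirrors `Message::NewAttestationCreated`.
                let mut bits = vec![false; committee_size];
                bits[committee_index] = true;
                entry.insert(MiniAttestation { aggregation_bits: bits });
                "new attestation created"
            }
        }
    }
}

fn main() {
    let mut agg = MiniAggregator { store: HashMap::new() };
    let msg = b"attestation data".to_vec();
    assert_eq!(agg.process(msg.clone(), 0, 4), "new attestation created");
    assert_eq!(agg.process(msg.clone(), 1, 4), "aggregated");
    assert_eq!(agg.process(msg, 1, 4), "aggregation not required");
}
```

Running `main` walks through all three outcomes: a first vote creates a new attestation, a second distinct vote is aggregated, and a repeat vote is reported as already aggregated.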

beacon_node/beacon_chain/src/beacon_chain.rs Normal file

@@ -0,0 +1,595 @@
use crate::attestation_aggregator::{AttestationAggregator, Outcome as AggregationOutcome};
use crate::checkpoint::CheckPoint;
use db::{
stores::{BeaconBlockStore, BeaconStateStore},
ClientDB, DBError,
};
use fork_choice::{ForkChoice, ForkChoiceError};
use log::{debug, trace};
use parking_lot::{RwLock, RwLockReadGuard};
use slot_clock::SlotClock;
use ssz::ssz_encode;
use state_processing::{
BlockProcessable, BlockProcessingError, SlotProcessable, SlotProcessingError,
};
use std::sync::Arc;
use types::{
beacon_state::CommitteesError,
readers::{BeaconBlockReader, BeaconStateReader},
AttestationData, BeaconBlock, BeaconBlockBody, BeaconState, ChainSpec, Crosslink, Deposit,
Epoch, Eth1Data, FreeAttestation, Hash256, PublicKey, Signature, Slot,
};
#[derive(Debug, PartialEq)]
pub enum Error {
InsufficientValidators,
BadRecentBlockRoots,
CommitteesError(CommitteesError),
DBInconsistent(String),
DBError(String),
ForkChoiceError(ForkChoiceError),
MissingBeaconBlock(Hash256),
MissingBeaconState(Hash256),
}
#[derive(Debug, PartialEq)]
pub enum ValidBlock {
/// The block was successfully processed.
Processed,
}
#[derive(Debug, PartialEq)]
pub enum InvalidBlock {
/// The block slot is greater than the present slot.
FutureSlot,
/// The block state_root does not match the generated state.
StateRootMismatch,
/// The block's parent_root is unknown.
ParentUnknown,
/// There was an error whilst advancing the parent state to the present slot. This condition
/// should not occur; it likely represents an internal error.
SlotProcessingError(SlotProcessingError),
/// The block could not be applied to the state, it is invalid.
PerBlockProcessingError(BlockProcessingError),
}
#[derive(Debug, PartialEq)]
pub enum BlockProcessingOutcome {
/// The block was successfully validated.
ValidBlock(ValidBlock),
/// The block was not successfully validated.
InvalidBlock(InvalidBlock),
}
pub struct BeaconChain<T: ClientDB + Sized, U: SlotClock, F: ForkChoice> {
pub block_store: Arc<BeaconBlockStore<T>>,
pub state_store: Arc<BeaconStateStore<T>>,
pub slot_clock: U,
pub attestation_aggregator: RwLock<AttestationAggregator>,
canonical_head: RwLock<CheckPoint>,
finalized_head: RwLock<CheckPoint>,
pub state: RwLock<BeaconState>,
pub spec: ChainSpec,
pub fork_choice: RwLock<F>,
}
impl<T, U, F> BeaconChain<T, U, F>
where
T: ClientDB,
U: SlotClock,
F: ForkChoice,
{
/// Instantiate a new Beacon Chain, from genesis.
pub fn genesis(
state_store: Arc<BeaconStateStore<T>>,
block_store: Arc<BeaconBlockStore<T>>,
slot_clock: U,
genesis_time: u64,
latest_eth1_data: Eth1Data,
initial_validator_deposits: Vec<Deposit>,
spec: ChainSpec,
fork_choice: F,
) -> Result<Self, Error> {
if initial_validator_deposits.is_empty() {
return Err(Error::InsufficientValidators);
}
let genesis_state = BeaconState::genesis(
genesis_time,
initial_validator_deposits,
latest_eth1_data,
&spec,
);
let state_root = genesis_state.canonical_root();
state_store.put(&state_root, &ssz_encode(&genesis_state)[..])?;
let genesis_block = BeaconBlock::genesis(state_root, &spec);
let block_root = genesis_block.canonical_root();
block_store.put(&block_root, &ssz_encode(&genesis_block)[..])?;
let finalized_head = RwLock::new(CheckPoint::new(
genesis_block.clone(),
block_root,
genesis_state.clone(),
state_root,
));
let canonical_head = RwLock::new(CheckPoint::new(
genesis_block.clone(),
block_root,
genesis_state.clone(),
state_root,
));
let attestation_aggregator = RwLock::new(AttestationAggregator::new());
Ok(Self {
block_store,
state_store,
slot_clock,
attestation_aggregator,
state: RwLock::new(genesis_state.clone()),
finalized_head,
canonical_head,
spec,
fork_choice: RwLock::new(fork_choice),
})
}
/// Update the canonical head to some new values.
pub fn update_canonical_head(
&self,
new_beacon_block: BeaconBlock,
new_beacon_block_root: Hash256,
new_beacon_state: BeaconState,
new_beacon_state_root: Hash256,
) {
let mut head = self.canonical_head.write();
head.update(
new_beacon_block,
new_beacon_block_root,
new_beacon_state,
new_beacon_state_root,
);
}
/// Returns a read-lock guarded `CheckPoint` struct for reading the head (as chosen by the
/// fork-choice rule).
///
/// It is important to note that the `beacon_state` returned may not match the present slot. It
/// is the state as it was when the head block was received, which could be some slots prior to
/// now.
pub fn head(&self) -> RwLockReadGuard<CheckPoint> {
self.canonical_head.read()
}
/// Update the finalized head to some new values.
pub fn update_finalized_head(
&self,
new_beacon_block: BeaconBlock,
new_beacon_block_root: Hash256,
new_beacon_state: BeaconState,
new_beacon_state_root: Hash256,
) {
let mut finalized_head = self.finalized_head.write();
finalized_head.update(
new_beacon_block,
new_beacon_block_root,
new_beacon_state,
new_beacon_state_root,
);
}
/// Returns a read-lock guarded `CheckPoint` struct for reading the finalized head (as chosen,
/// indirectly, by the fork-choice rule).
pub fn finalized_head(&self) -> RwLockReadGuard<CheckPoint> {
self.finalized_head.read()
}
/// Advance the `self.state` `BeaconState` to the supplied slot.
///
/// This will perform per_slot and per_epoch processing as required.
///
/// The `previous_block_root` will be set to the root of the current head block (as determined
/// by the fork-choice rule).
///
/// It is important to note that this is _not_ the state corresponding to the canonical head
/// block; instead, it is the state that may or may not have had additional per slot/epoch
/// processing applied to it.
pub fn advance_state(&self, slot: Slot) -> Result<(), SlotProcessingError> {
let state_slot = self.state.read().slot;
let head_block_root = self.head().beacon_block_root;
for _ in state_slot.as_u64()..slot.as_u64() {
self.state
.write()
.per_slot_processing(head_block_root, &self.spec)?;
}
Ok(())
}
/// Returns the validator index (if any) for the given public key.
///
/// Information is retrieved from the present `beacon_state.validator_registry`.
pub fn validator_index(&self, pubkey: &PublicKey) -> Option<usize> {
for (i, validator) in self
.head()
.beacon_state
.validator_registry
.iter()
.enumerate()
{
if validator.pubkey == *pubkey {
return Some(i);
}
}
None
}
/// Reads the slot clock, returns `None` if the slot is unavailable.
///
/// The slot might be unavailable due to an error with the system clock, or if the present time
/// is before genesis (i.e., a negative slot).
///
/// This is distinct from `present_slot`, which simply reads the latest state. If a
/// call to `read_slot_clock` results in a higher slot than a call to `present_slot`,
/// `self.state` should undergo per slot processing.
pub fn read_slot_clock(&self) -> Option<Slot> {
match self.slot_clock.present_slot() {
Ok(Some(some_slot)) => Some(some_slot),
Ok(None) => None,
_ => None,
}
}
/// Returns slot of the present state.
///
/// This is distinct from `read_slot_clock`, which reads from the actual system clock. If
/// `self.state` has not been transitioned, it is possible for the system clock to be on a
/// different slot from the one returned by this call.
pub fn present_slot(&self) -> Slot {
self.state.read().slot
}
/// Returns the block proposer for a given slot.
///
/// Information is read from the present `beacon_state` shuffling, so only information from the
/// present and prior epoch is available.
pub fn block_proposer(&self, slot: Slot) -> Result<usize, CommitteesError> {
let index = self
.state
.read()
.get_beacon_proposer_index(slot, &self.spec)?;
Ok(index)
}
/// Returns the justified epoch for the present state.
pub fn justified_epoch(&self) -> Epoch {
self.state.read().justified_epoch
}
/// Returns the attestation slot and shard for a given validator index.
///
/// Information is read from the current state, so only information from the present and prior
/// epoch is available.
pub fn validator_attestion_slot_and_shard(
&self,
validator_index: usize,
) -> Result<Option<(Slot, u64)>, CommitteesError> {
if let Some((slot, shard, _committee)) = self
.state
.read()
.attestation_slot_and_shard_for_validator(validator_index, &self.spec)?
{
Ok(Some((slot, shard)))
} else {
Ok(None)
}
}
/// Produce an `AttestationData` that is valid for the present `slot` and given `shard`.
pub fn produce_attestation_data(&self, shard: u64) -> Result<AttestationData, Error> {
let justified_epoch = self.justified_epoch();
let justified_block_root = *self
.state
.read()
.get_block_root(
justified_epoch.start_slot(self.spec.epoch_length),
&self.spec,
)
.ok_or_else(|| Error::BadRecentBlockRoots)?;
let epoch_boundary_root = *self
.state
.read()
.get_block_root(
self.state.read().current_epoch_start_slot(&self.spec),
&self.spec,
)
.ok_or_else(|| Error::BadRecentBlockRoots)?;
Ok(AttestationData {
slot: self.state.read().slot,
shard,
beacon_block_root: self.head().beacon_block_root,
epoch_boundary_root,
shard_block_root: Hash256::zero(),
latest_crosslink: Crosslink {
epoch: self.state.read().slot.epoch(self.spec.epoch_length),
shard_block_root: Hash256::zero(),
},
justified_epoch,
justified_block_root,
})
}
/// Validate a `FreeAttestation` and either:
///
/// - Create a new `Attestation`.
/// - Aggregate it to an existing `Attestation`.
pub fn process_free_attestation(
&self,
free_attestation: FreeAttestation,
) -> Result<AggregationOutcome, Error> {
let aggregation_outcome = self
.attestation_aggregator
.write()
.process_free_attestation(&self.state.read(), &free_attestation, &self.spec)?;
// TODO: Check this comment
//.map_err(|e| e.into())?;
// return if the attestation is invalid
if !aggregation_outcome.valid {
return Ok(aggregation_outcome);
}
// valid attestation, proceed with fork-choice logic
self.fork_choice.write().add_attestation(
free_attestation.validator_index,
&free_attestation.data.beacon_block_root,
)?;
Ok(aggregation_outcome)
}
/// Dumps the entire canonical chain, from the head to genesis, into a vector for analysis.
///
/// This could be a very expensive operation and should only be done in testing/analysis
/// activities.
pub fn chain_dump(&self) -> Result<Vec<CheckPoint>, Error> {
let mut dump = vec![];
let mut last_slot = CheckPoint {
beacon_block: self.head().beacon_block.clone(),
beacon_block_root: self.head().beacon_block_root,
beacon_state: self.head().beacon_state.clone(),
beacon_state_root: self.head().beacon_state_root,
};
dump.push(last_slot.clone());
loop {
let beacon_block_root = last_slot.beacon_block.parent_root;
if beacon_block_root == self.spec.zero_hash {
break; // Genesis has been reached.
}
let beacon_block = self
.block_store
.get_deserialized(&beacon_block_root)?
.ok_or_else(|| {
Error::DBInconsistent(format!("Missing block {}", beacon_block_root))
})?;
let beacon_state_root = beacon_block.state_root;
let beacon_state = self
.state_store
.get_deserialized(&beacon_state_root)?
.ok_or_else(|| {
Error::DBInconsistent(format!("Missing state {}", beacon_state_root))
})?;
let slot = CheckPoint {
beacon_block,
beacon_block_root,
beacon_state,
beacon_state_root,
};
dump.push(slot.clone());
last_slot = slot;
}
Ok(dump)
}
/// Accept some block and attempt to add it to block DAG.
///
/// Will accept blocks from prior slots; however, it will reject any block from a future slot.
pub fn process_block(&self, block: BeaconBlock) -> Result<BlockProcessingOutcome, Error> {
debug!("Processing block with slot {}...", block.slot());
let block_root = block.canonical_root();
let present_slot = self.present_slot();
if block.slot > present_slot {
return Ok(BlockProcessingOutcome::InvalidBlock(
InvalidBlock::FutureSlot,
));
}
// Load the block's parent block from the database, returning invalid if that block is not
// found.
let parent_block_root = block.parent_root;
let parent_block = match self.block_store.get_reader(&parent_block_root)? {
Some(parent_block) => parent_block,
None => {
return Ok(BlockProcessingOutcome::InvalidBlock(
InvalidBlock::ParentUnknown,
));
}
};
// Load the parent block's state from the database, returning an error if it is not found.
// It is an error because if we know the parent block we should also know the parent state.
let parent_state_root = parent_block.state_root();
let parent_state = self
.state_store
.get_reader(&parent_state_root)?
.ok_or_else(|| Error::DBInconsistent(format!("Missing state {}", parent_state_root)))?
.into_beacon_state()
.ok_or_else(|| {
Error::DBInconsistent(format!("State SSZ invalid {}", parent_state_root))
})?;
// TODO: check the block proposer signature BEFORE doing a state transition. This will
// significantly lower exposure surface to DoS attacks.
// Transition the parent state to the present slot.
let mut state = parent_state;
for _ in state.slot.as_u64()..present_slot.as_u64() {
if let Err(e) = state.per_slot_processing(parent_block_root, &self.spec) {
return Ok(BlockProcessingOutcome::InvalidBlock(
InvalidBlock::SlotProcessingError(e),
));
}
}
// Apply the received block to its parent state (which has been transitioned into this
// slot).
if let Err(e) = state.per_block_processing(&block, &self.spec) {
return Ok(BlockProcessingOutcome::InvalidBlock(
InvalidBlock::PerBlockProcessingError(e),
));
}
let state_root = state.canonical_root();
if block.state_root != state_root {
return Ok(BlockProcessingOutcome::InvalidBlock(
InvalidBlock::StateRootMismatch,
));
}
// Store the block and state.
self.block_store.put(&block_root, &ssz_encode(&block)[..])?;
self.state_store.put(&state_root, &ssz_encode(&state)[..])?;
// run the fork_choice add_block logic
self.fork_choice.write().add_block(&block, &block_root)?;
// If the block's parent is the current canonical head, automatically update the canonical head.
//
// TODO: this is a first-in-best-dressed scenario that is not ideal; fork_choice should be
// run instead.
if self.head().beacon_block_root == parent_block_root {
self.update_canonical_head(
block.clone(),
block_root.clone(),
state.clone(),
state_root,
);
// Update the local state variable.
*self.state.write() = state.clone();
}
Ok(BlockProcessingOutcome::ValidBlock(ValidBlock::Processed))
}
/// Produce a new block at the present slot.
///
/// The produced block will not be inherently valid; it must be signed by a block producer.
/// Block signing is out of the scope of this function and should be done by a separate program.
pub fn produce_block(&self, randao_reveal: Signature) -> Option<(BeaconBlock, BeaconState)> {
debug!("Producing block at slot {}...", self.state.read().slot);
let mut state = self.state.read().clone();
trace!("Finding attestations for new block...");
let attestations = self
.attestation_aggregator
.read()
.get_attestations_for_state(&state, &self.spec);
trace!(
"Inserting {} attestation(s) into new block.",
attestations.len()
);
let parent_root = *state.get_block_root(state.slot.saturating_sub(1_u64), &self.spec)?;
let mut block = BeaconBlock {
slot: state.slot,
parent_root,
state_root: Hash256::zero(), // Updated after the state is calculated.
randao_reveal,
eth1_data: Eth1Data {
// TODO: replace with real data
deposit_root: Hash256::zero(),
block_hash: Hash256::zero(),
},
signature: self.spec.empty_signature.clone(), // To be completed by a validator.
body: BeaconBlockBody {
proposer_slashings: vec![],
attester_slashings: vec![],
attestations,
deposits: vec![],
exits: vec![],
},
};
state
.per_block_processing_without_verifying_block_signature(&block, &self.spec)
.ok()?;
let state_root = state.canonical_root();
block.state_root = state_root;
trace!("Block produced.");
Some((block, state))
}
// TODO: Left this as is, modify later
pub fn fork_choice(&self) -> Result<(), Error> {
let present_head = self.finalized_head().beacon_block_root;
let new_head = self.fork_choice.write().find_head(&present_head)?;
if new_head != present_head {
let block = self
.block_store
.get_deserialized(&new_head)?
.ok_or_else(|| Error::MissingBeaconBlock(new_head))?;
let block_root = block.canonical_root();
let state = self
.state_store
.get_deserialized(&block.state_root)?
.ok_or_else(|| Error::MissingBeaconState(block.state_root))?;
let state_root = state.canonical_root();
self.update_canonical_head(block, block_root, state, state_root);
}
Ok(())
}
}
impl From<DBError> for Error {
fn from(e: DBError) -> Error {
Error::DBError(e.message)
}
}
impl From<ForkChoiceError> for Error {
fn from(e: ForkChoiceError) -> Error {
Error::ForkChoiceError(e)
}
}
impl From<CommitteesError> for Error {
fn from(e: CommitteesError) -> Error {
Error::CommitteesError(e)
}
}
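
A caller of `process_block` has to separate internal failures (the outer `Result`) from invalid blocks (the `BlockProcessingOutcome`). A hedged, self-contained sketch of that pattern, using simplified local copies of the enums above rather than the crate's real types:

```rust
// Simplified local stand-ins for the outcome types defined above.
#[derive(Debug)]
enum InvalidBlock {
    FutureSlot,
    StateRootMismatch,
}

#[derive(Debug)]
enum BlockProcessingOutcome {
    ValidBlock,
    InvalidBlock(InvalidBlock),
}

#[derive(Debug)]
enum Error {
    DBError(String),
}

fn handle(result: Result<BlockProcessingOutcome, Error>) {
    match result {
        Ok(BlockProcessingOutcome::ValidBlock) => {
            // The block was applied; the canonical head may have moved.
        }
        Ok(BlockProcessingOutcome::InvalidBlock(reason)) => {
            // The block itself is at fault (e.g., from a future slot); discard it.
            println!("invalid block: {:?}", reason);
        }
        Err(e) => {
            // An internal failure (e.g., the database); not the block's fault.
            println!("internal error: {:?}", e);
        }
    }
}

fn main() {
    handle(Ok(BlockProcessingOutcome::ValidBlock));
    handle(Ok(BlockProcessingOutcome::InvalidBlock(InvalidBlock::FutureSlot)));
    handle(Ok(BlockProcessingOutcome::InvalidBlock(
        InvalidBlock::StateRootMismatch,
    )));
    handle(Err(Error::DBError("disk failure".to_string())));
}
```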

beacon_node/beacon_chain/src/checkpoint.rs Normal file

@@ -0,0 +1,43 @@
use serde_derive::Serialize;
use types::{BeaconBlock, BeaconState, Hash256};
/// Represents some block and its associated state. Generally, this will be used for tracking the
/// head, justified head and finalized head.
#[derive(PartialEq, Clone, Serialize)]
pub struct CheckPoint {
pub beacon_block: BeaconBlock,
pub beacon_block_root: Hash256,
pub beacon_state: BeaconState,
pub beacon_state_root: Hash256,
}
impl CheckPoint {
/// Create a new checkpoint.
pub fn new(
beacon_block: BeaconBlock,
beacon_block_root: Hash256,
beacon_state: BeaconState,
beacon_state_root: Hash256,
) -> Self {
Self {
beacon_block,
beacon_block_root,
beacon_state,
beacon_state_root,
}
}
/// Update all fields of the checkpoint.
pub fn update(
&mut self,
beacon_block: BeaconBlock,
beacon_block_root: Hash256,
beacon_state: BeaconState,
beacon_state_root: Hash256,
) {
self.beacon_block = beacon_block;
self.beacon_block_root = beacon_block_root;
self.beacon_state = beacon_state;
self.beacon_state_root = beacon_state_root;
}
}
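
The reason all four fields travel together: the chain swaps an entire checkpoint atomically behind a lock (see `update_canonical_head` in `beacon_chain.rs` above), so readers never observe a block paired with the wrong state. A minimal sketch of that pattern, with `u64` stand-ins for the real block/state types:

```rust
use std::sync::RwLock;

/// Simplified checkpoint: all fields are replaced together, so a reader can
/// never observe a block paired with a mismatched state.
#[derive(Clone, Debug, PartialEq)]
struct MiniCheckPoint {
    block: u64,
    block_root: u64,
    state: u64,
    state_root: u64,
}

fn main() {
    let head = RwLock::new(MiniCheckPoint { block: 0, block_root: 0, state: 0, state_root: 0 });

    // Writer: update every field in one critical section, as `CheckPoint::update` does.
    {
        let mut h = head.write().unwrap();
        *h = MiniCheckPoint { block: 1, block_root: 11, state: 1, state_root: 111 };
    }

    // Reader: always a consistent block/state snapshot.
    let snapshot = head.read().unwrap().clone();
    assert_eq!(snapshot.block, snapshot.state);
}
```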

beacon_node/beacon_chain/src/lib.rs Normal file

@@ -0,0 +1,7 @@
mod attestation_aggregator;
mod beacon_chain;
mod checkpoint;
pub use self::beacon_chain::{BeaconChain, Error};
pub use self::checkpoint::CheckPoint;
pub use fork_choice::{ForkChoice, ForkChoiceAlgorithms, ForkChoiceError};

beacon_node/beacon_chain/test_harness/Cargo.toml Normal file

@@ -0,0 +1,34 @@
[package]
name = "test_harness"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[[bench]]
name = "state_transition"
harness = false
[dev-dependencies]
criterion = "0.2"
[dependencies]
attester = { path = "../../../eth2/attester" }
beacon_chain = { path = "../../beacon_chain" }
block_producer = { path = "../../../eth2/block_producer" }
bls = { path = "../../../eth2/utils/bls" }
boolean-bitfield = { path = "../../../eth2/utils/boolean-bitfield" }
db = { path = "../../db" }
parking_lot = "0.7"
failure = "0.1"
failure_derive = "0.1"
fork_choice = { path = "../../../eth2/fork_choice" }
hashing = { path = "../../../eth2/utils/hashing" }
log = "0.4"
env_logger = "0.6.0"
rayon = "1.0"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
slot_clock = { path = "../../../eth2/utils/slot_clock" }
ssz = { path = "../../../eth2/utils/ssz" }
types = { path = "../../../eth2/types" }

beacon_node/beacon_chain/test_harness/benches/state_transition.rs Normal file

@@ -0,0 +1,68 @@
use criterion::Criterion;
use criterion::{black_box, criterion_group, criterion_main, Benchmark};
// use env_logger::{Builder, Env};
use test_harness::BeaconChainHarness;
use types::{ChainSpec, Hash256};
fn mid_epoch_state_transition(c: &mut Criterion) {
// Builder::from_env(Env::default().default_filter_or("debug")).init();
let validator_count = 1000;
let mut rig = BeaconChainHarness::new(ChainSpec::foundation(), validator_count);
let epoch_depth = (rig.spec.epoch_length * 2) + (rig.spec.epoch_length / 2);
for _ in 0..epoch_depth {
rig.advance_chain_with_block();
}
let state = rig.beacon_chain.state.read().clone();
assert!((state.slot + 1) % rig.spec.epoch_length != 0);
c.bench_function("mid-epoch state transition 10k validators", move |b| {
let state = state.clone();
b.iter(|| {
let mut state = state.clone();
black_box(state.per_slot_processing(Hash256::zero(), &rig.spec))
})
});
}
fn epoch_boundary_state_transition(c: &mut Criterion) {
// Builder::from_env(Env::default().default_filter_or("debug")).init();
let validator_count = 10000;
let mut rig = BeaconChainHarness::new(ChainSpec::foundation(), validator_count);
let epoch_depth = rig.spec.epoch_length * 2;
for _ in 0..(epoch_depth - 1) {
rig.advance_chain_with_block();
}
let state = rig.beacon_chain.state.read().clone();
assert_eq!((state.slot + 1) % rig.spec.epoch_length, 0);
c.bench(
"routines",
Benchmark::new("routine_1", move |b| {
let state = state.clone();
b.iter(|| {
let mut state = state.clone();
black_box(black_box(
state.per_slot_processing(Hash256::zero(), &rig.spec),
))
})
})
.sample_size(5), // sample size is low because function is sloooow.
);
}
criterion_group!(
benches,
mid_epoch_state_transition,
epoch_boundary_state_transition
);
criterion_main!(benches);

beacon_node/beacon_chain/test_harness/src/beacon_chain_harness.rs Normal file

@@ -0,0 +1,245 @@
use super::ValidatorHarness;
use beacon_chain::BeaconChain;
pub use beacon_chain::{CheckPoint, Error as BeaconChainError};
use bls::create_proof_of_possession;
use db::{
stores::{BeaconBlockStore, BeaconStateStore},
MemoryDB,
};
use fork_choice::{optimised_lmd_ghost::OptimisedLMDGhost, slow_lmd_ghost::SlowLMDGhost}; // import all the algorithms
use log::debug;
use rayon::prelude::*;
use slot_clock::TestingSlotClock;
use std::collections::HashSet;
use std::fs::File;
use std::io::prelude::*;
use std::iter::FromIterator;
use std::sync::Arc;
use types::{
BeaconBlock, ChainSpec, Deposit, DepositData, DepositInput, Eth1Data, FreeAttestation, Hash256,
Keypair, Slot,
};
/// The beacon chain harness simulates a single beacon node with `validator_count` validators connected
/// to it. Each validator is provided a borrow to the beacon chain, where it may read
/// information and submit blocks/attestations for processing.
///
/// This test harness is useful for testing validator and internal state transition logic. It
/// is not useful for testing that multiple beacon nodes can reach consensus.
pub struct BeaconChainHarness {
pub db: Arc<MemoryDB>,
pub beacon_chain: Arc<BeaconChain<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>>,
pub block_store: Arc<BeaconBlockStore<MemoryDB>>,
pub state_store: Arc<BeaconStateStore<MemoryDB>>,
pub validators: Vec<ValidatorHarness>,
pub spec: Arc<ChainSpec>,
}
impl BeaconChainHarness {
/// Create a new harness with:
///
/// - A keypair, `BlockProducer` and `Attester` for each validator.
/// - A new BeaconChain struct where the given validators are in the genesis.
pub fn new(spec: ChainSpec, validator_count: usize) -> Self {
let db = Arc::new(MemoryDB::open());
let block_store = Arc::new(BeaconBlockStore::new(db.clone()));
let state_store = Arc::new(BeaconStateStore::new(db.clone()));
let genesis_time = 1_549_935_547; // 12th Feb 2019 (arbitrary value in the past).
let slot_clock = TestingSlotClock::new(spec.genesis_slot.as_u64());
let fork_choice = OptimisedLMDGhost::new(block_store.clone(), state_store.clone());
let latest_eth1_data = Eth1Data {
deposit_root: Hash256::zero(),
block_hash: Hash256::zero(),
};
debug!("Generating validator keypairs...");
let keypairs: Vec<Keypair> = (0..validator_count)
.collect::<Vec<usize>>()
.par_iter()
.map(|_| Keypair::random())
.collect();
debug!("Creating validator deposits...");
let initial_validator_deposits = keypairs
.par_iter()
.map(|keypair| Deposit {
branch: vec![], // branch verification is not specified.
index: 0, // index verification is not specified.
deposit_data: DepositData {
amount: 32_000_000_000, // 32 ETH (in Gwei)
timestamp: genesis_time - 1,
deposit_input: DepositInput {
pubkey: keypair.pk.clone(),
withdrawal_credentials: Hash256::zero(), // Withdrawal not possible.
proof_of_possession: create_proof_of_possession(&keypair),
},
},
})
.collect();
debug!("Creating the BeaconChain...");
// Create the Beacon Chain
let beacon_chain = Arc::new(
BeaconChain::genesis(
state_store.clone(),
block_store.clone(),
slot_clock,
genesis_time,
latest_eth1_data,
initial_validator_deposits,
spec.clone(),
fork_choice,
)
.unwrap(),
);
let spec = Arc::new(spec);
debug!("Creating validator producer and attester instances...");
// Spawn the test validator instances.
let validators: Vec<ValidatorHarness> = keypairs
.iter()
.map(|keypair| {
ValidatorHarness::new(keypair.clone(), beacon_chain.clone(), spec.clone())
})
.collect();
debug!("Created {} ValidatorHarnesss", validators.len());
Self {
db,
beacon_chain,
block_store,
state_store,
validators,
spec,
}
}
/// Move the `slot_clock` for the `BeaconChain` forward one slot.
///
/// This is the equivalent of advancing a system clock forward one `SLOT_DURATION`.
///
/// Returns the new slot.
pub fn increment_beacon_chain_slot(&mut self) -> Slot {
let slot = self.beacon_chain.present_slot() + 1;
debug!("Incrementing BeaconChain slot to {}.", slot);
self.beacon_chain.slot_clock.set_slot(slot.as_u64());
self.beacon_chain.advance_state(slot).unwrap();
slot
}
/// Gather the `FreeAttestation`s from the validators.
///
/// Note: validators will only produce attestations _once per slot_. So, if you call this twice
/// you'll only get attestations on the first run.
pub fn gather_free_attesations(&mut self) -> Vec<FreeAttestation> {
let present_slot = self.beacon_chain.present_slot();
let attesting_validators = self
.beacon_chain
.state
.read()
.get_crosslink_committees_at_slot(present_slot, false, &self.spec)
.unwrap()
.iter()
.fold(vec![], |mut acc, (committee, _slot)| {
acc.append(&mut committee.clone());
acc
});
let attesting_validators: HashSet<usize> =
HashSet::from_iter(attesting_validators.iter().cloned());
let free_attestations: Vec<FreeAttestation> = self
.validators
.par_iter_mut()
.enumerate()
.filter_map(|(i, validator)| {
if attesting_validators.contains(&i) {
// Advance the validator slot.
validator.set_slot(present_slot);
// Prompt the validator to produce an attestation (if required).
validator.produce_free_attestation().ok()
} else {
None
}
})
.collect();
debug!(
"Gathered {} FreeAttestations for slot {}.",
free_attestations.len(),
present_slot
);
free_attestations
}
/// Get the block from the proposer for the slot.
///
/// Note: the validator will only produce it _once per slot_. So, if you call this twice you'll
/// only get a block once.
pub fn produce_block(&mut self) -> BeaconBlock {
let present_slot = self.beacon_chain.present_slot();
let proposer = self.beacon_chain.block_proposer(present_slot).unwrap();
debug!(
"Producing block from validator #{} for slot {}.",
proposer, present_slot
);
// Ensure the validator's slot clock is accurate.
self.validators[proposer].set_slot(present_slot);
self.validators[proposer].produce_block().unwrap()
}
/// Advances the chain with a BeaconBlock and attestations from all validators.
///
/// This is the ideal scenario for the Beacon Chain: 100% honest participation from
/// validators.
pub fn advance_chain_with_block(&mut self) {
self.increment_beacon_chain_slot();
// Produce a new block.
let block = self.produce_block();
debug!("Submitting block for processing...");
self.beacon_chain.process_block(block).unwrap();
debug!("...block processed by BeaconChain.");
debug!("Producing free attestations...");
// Produce new attestations.
let free_attestations = self.gather_free_attesations();
debug!("Processing free attestations...");
free_attestations.par_iter().for_each(|free_attestation| {
self.beacon_chain
.process_free_attestation(free_attestation.clone())
.unwrap();
});
debug!("Free attestations processed.");
}
/// Dump all blocks and states from the canonical beacon chain.
pub fn chain_dump(&self) -> Result<Vec<CheckPoint>, BeaconChainError> {
self.beacon_chain.chain_dump()
}
/// Write the output of `chain_dump` to a JSON file.
pub fn dump_to_file(&self, filename: String, chain_dump: &[CheckPoint]) {
let json = serde_json::to_string(chain_dump).unwrap();
let mut file = File::create(filename).unwrap();
file.write_all(json.as_bytes())
.expect("Failed writing dump to file.");
}
}
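
Typical harness usage follows the pattern in the benches earlier: build a chain with some validators, advance it with fully-attested blocks, then inspect or dump the result. A short sketch, assuming the `test_harness` crate as a dependency (the validator count and filename are arbitrary):

```rust
use test_harness::BeaconChainHarness;
use types::ChainSpec;

fn main() {
    // A small validator set keeps genesis fast; real tests use many more.
    let mut harness = BeaconChainHarness::new(ChainSpec::foundation(), 8);

    // Each call increments the slot, produces a block from the scheduled
    // proposer and processes attestations from every scheduled validator.
    for _ in 0..4 {
        harness.advance_chain_with_block();
    }

    // Walk the canonical chain from the head back to genesis.
    let dump = harness.chain_dump().unwrap();
    assert_eq!(dump.len(), 5); // four blocks plus genesis

    // Optionally persist the dump for offline analysis.
    harness.dump_to_file("chain_dump.json".to_string(), &dump);
}
```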

beacon_node/beacon_chain/test_harness/src/lib.rs Normal file

@@ -0,0 +1,5 @@
mod beacon_chain_harness;
mod validator_harness;
pub use self::beacon_chain_harness::BeaconChainHarness;
pub use self::validator_harness::ValidatorHarness;

beacon_node/beacon_chain/test_harness/src/validator_harness/direct_beacon_node.rs Normal file

@@ -0,0 +1,108 @@
use attester::{
BeaconNode as AttesterBeaconNode, BeaconNodeError as NodeError,
PublishOutcome as AttestationPublishOutcome,
};
use beacon_chain::BeaconChain;
use block_producer::{
BeaconNode as BeaconBlockNode, BeaconNodeError as BeaconBlockNodeError,
PublishOutcome as BlockPublishOutcome,
};
use db::ClientDB;
use fork_choice::ForkChoice;
use parking_lot::RwLock;
use slot_clock::SlotClock;
use std::sync::Arc;
use types::{AttestationData, BeaconBlock, FreeAttestation, Signature, Slot};
// mod attester;
// mod producer;
/// Connect directly to a borrowed `BeaconChain` instance so an attester/producer can request/submit
/// blocks/attestations.
///
/// `BeaconBlock`s and `FreeAttestation`s are not actually published to the `BeaconChain`, instead
/// they are stored inside this struct. This is to allow one to benchmark the submission of the
/// block/attestation directly, or modify it before submission.
pub struct DirectBeaconNode<T: ClientDB, U: SlotClock, F: ForkChoice> {
beacon_chain: Arc<BeaconChain<T, U, F>>,
published_blocks: RwLock<Vec<BeaconBlock>>,
published_attestations: RwLock<Vec<FreeAttestation>>,
}
impl<T: ClientDB, U: SlotClock, F: ForkChoice> DirectBeaconNode<T, U, F> {
pub fn new(beacon_chain: Arc<BeaconChain<T, U, F>>) -> Self {
Self {
beacon_chain,
published_blocks: RwLock::new(vec![]),
published_attestations: RwLock::new(vec![]),
}
}
/// Get the last published block (if any).
pub fn last_published_block(&self) -> Option<BeaconBlock> {
Some(self.published_blocks.read().last()?.clone())
}
/// Get the last published attestation (if any).
pub fn last_published_free_attestation(&self) -> Option<FreeAttestation> {
Some(self.published_attestations.read().last()?.clone())
}
}
impl<T: ClientDB, U: SlotClock, F: ForkChoice> AttesterBeaconNode for DirectBeaconNode<T, U, F> {
fn produce_attestation_data(
&self,
_slot: Slot,
shard: u64,
) -> Result<Option<AttestationData>, NodeError> {
match self.beacon_chain.produce_attestation_data(shard) {
Ok(attestation_data) => Ok(Some(attestation_data)),
Err(e) => Err(NodeError::RemoteFailure(format!("{:?}", e))),
}
}
fn publish_attestation_data(
&self,
free_attestation: FreeAttestation,
) -> Result<AttestationPublishOutcome, NodeError> {
self.published_attestations.write().push(free_attestation);
Ok(AttestationPublishOutcome::ValidAttestation)
}
}
impl<T: ClientDB, U: SlotClock, F: ForkChoice> BeaconBlockNode for DirectBeaconNode<T, U, F> {
/// Requests a new `BeaconBlock` from the `BeaconChain`.
fn produce_beacon_block(
&self,
slot: Slot,
randao_reveal: &Signature,
) -> Result<Option<BeaconBlock>, BeaconBlockNodeError> {
let (block, _state) = self
.beacon_chain
.produce_block(randao_reveal.clone())
.ok_or_else(|| {
BeaconBlockNodeError::RemoteFailure("Did not produce block.".to_string())
})?;
if block.slot == slot {
Ok(Some(block))
} else {
Err(BeaconBlockNodeError::RemoteFailure(
"Unable to produce at non-current slot.".to_string(),
))
}
}
/// A block is not _actually_ published to the `BeaconChain`; instead, it is stored in
/// `published_blocks` and a successful `ValidBlock` is returned to the caller.
///
/// The block may be retrieved and then applied to the `BeaconChain` manually, potentially in a
/// benchmarking scenario.
fn publish_beacon_block(
&self,
block: BeaconBlock,
) -> Result<BlockPublishOutcome, BeaconBlockNodeError> {
self.published_blocks.write().push(block);
Ok(BlockPublishOutcome::ValidBlock)
}
}
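
Since `publish_beacon_block` and `publish_attestation_data` only buffer their input, a test can produce a block, inspect or mutate it, and then apply it to the chain manually. A self-contained sketch of this buffer-instead-of-publish pattern (using `String` and `std::sync::RwLock` as stand-ins for the real block type and the `parking_lot` lock):

```rust
use std::sync::RwLock;

/// Stand-in for `DirectBeaconNode`'s behaviour: "publishing" just records the
/// value so a test can inspect or replay it later.
struct BufferingNode {
    published: RwLock<Vec<String>>,
}

impl BufferingNode {
    fn publish(&self, block: String) {
        self.published.write().unwrap().push(block);
    }

    /// Get the last published block (if any), like `last_published_block`.
    fn last_published(&self) -> Option<String> {
        Some(self.published.read().unwrap().last()?.clone())
    }
}

fn main() {
    let node = BufferingNode { published: RwLock::new(vec![]) };
    assert_eq!(node.last_published(), None);

    node.publish("block at slot 1".to_string());
    assert_eq!(node.last_published(), Some("block at slot 1".to_string()));
}
```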

beacon_node/beacon_chain/test_harness/src/validator_harness/direct_duties.rs Normal file

@@ -0,0 +1,70 @@
use attester::{
DutiesReader as AttesterDutiesReader, DutiesReaderError as AttesterDutiesReaderError,
};
use beacon_chain::BeaconChain;
use block_producer::{
DutiesReader as ProducerDutiesReader, DutiesReaderError as ProducerDutiesReaderError,
};
use db::ClientDB;
use fork_choice::ForkChoice;
use slot_clock::SlotClock;
use std::sync::Arc;
use types::{PublicKey, Slot};
/// Connects directly to a borrowed `BeaconChain` and reads attester/proposer duties directly from
/// it.
pub struct DirectDuties<T: ClientDB, U: SlotClock, F: ForkChoice> {
beacon_chain: Arc<BeaconChain<T, U, F>>,
pubkey: PublicKey,
}
impl<T: ClientDB, U: SlotClock, F: ForkChoice> DirectDuties<T, U, F> {
pub fn new(pubkey: PublicKey, beacon_chain: Arc<BeaconChain<T, U, F>>) -> Self {
Self {
beacon_chain,
pubkey,
}
}
}
impl<T: ClientDB, U: SlotClock, F: ForkChoice> ProducerDutiesReader for DirectDuties<T, U, F> {
fn is_block_production_slot(&self, slot: Slot) -> Result<bool, ProducerDutiesReaderError> {
let validator_index = self
.beacon_chain
.validator_index(&self.pubkey)
.ok_or_else(|| ProducerDutiesReaderError::UnknownValidator)?;
match self.beacon_chain.block_proposer(slot) {
Ok(proposer) if proposer == validator_index => Ok(true),
Ok(_) => Ok(false),
Err(_) => Err(ProducerDutiesReaderError::UnknownEpoch),
}
}
}
impl<T: ClientDB, U: SlotClock, F: ForkChoice> AttesterDutiesReader for DirectDuties<T, U, F> {
fn validator_index(&self) -> Option<u64> {
match self.beacon_chain.validator_index(&self.pubkey) {
Some(index) => Some(index as u64),
None => None,
}
}
fn attestation_shard(&self, slot: Slot) -> Result<Option<u64>, AttesterDutiesReaderError> {
if let Some(validator_index) = self.validator_index() {
match self
.beacon_chain
.validator_attestion_slot_and_shard(validator_index as usize)
{
Ok(Some((attest_slot, attest_shard))) if attest_slot == slot => {
Ok(Some(attest_shard))
}
Ok(Some(_)) => Ok(None),
Ok(None) => Err(AttesterDutiesReaderError::UnknownEpoch),
Err(_) => unreachable!("Error when getting validator attestation shard."),
}
} else {
Err(AttesterDutiesReaderError::UnknownValidator)
}
}
}
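As a usage sketch (the `keypair` and `beacon_chain` bindings are hypothetical, not part of this file), a caller might query proposer duties like so:

// Hypothetical usage: ask whether our validator is the proposer at slot 5.
let duties = DirectDuties::new(keypair.pk.clone(), beacon_chain.clone());
if duties.is_block_production_slot(Slot::new(5)).unwrap_or(false) {
    // ... produce and publish a block ...
}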


@ -0,0 +1,47 @@
use attester::Signer as AttesterSigner;
use block_producer::Signer as BlockProposerSigner;
use std::sync::RwLock;
use types::{Keypair, Signature};
/// A test-only struct used to perform signing for a proposer or attester.
pub struct LocalSigner {
keypair: Keypair,
should_sign: RwLock<bool>,
}
impl LocalSigner {
/// Produce a new LocalSigner with signing enabled by default.
pub fn new(keypair: Keypair) -> Self {
Self {
keypair,
should_sign: RwLock::new(true),
}
}
/// If set to `false`, the service will refuse to sign all messages. Otherwise, all messages
/// will be signed.
pub fn enable_signing(&self, enabled: bool) {
*self.should_sign.write().unwrap() = enabled;
}
/// Sign some message, unless signing has been disabled via `enable_signing`.
fn bls_sign(&self, message: &[u8]) -> Option<Signature> {
    if *self.should_sign.read().unwrap() {
        Some(Signature::new(message, &self.keypair.sk))
    } else {
        None
    }
}
}
impl BlockProposerSigner for LocalSigner {
fn sign_block_proposal(&self, message: &[u8]) -> Option<Signature> {
self.bls_sign(message)
}
fn sign_randao_reveal(&self, message: &[u8]) -> Option<Signature> {
self.bls_sign(message)
}
}
impl AttesterSigner for LocalSigner {
fn sign_attestation_message(&self, message: &[u8]) -> Option<Signature> {
self.bls_sign(message)
}
}
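A short sketch of exercising the signer in a test (hypothetical snippet; per the documented behaviour, `enable_signing(false)` causes signing to be refused):

// A disabled signer refuses to sign; an enabled one signs.
let signer = LocalSigner::new(Keypair::random());
signer.enable_signing(false);
assert!(signer.sign_randao_reveal(b"message").is_none());
signer.enable_signing(true);
assert!(signer.sign_randao_reveal(b"message").is_some());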


@ -0,0 +1,137 @@
mod direct_beacon_node;
mod direct_duties;
mod local_signer;
use attester::PollOutcome as AttestationPollOutcome;
use attester::{Attester, Error as AttestationPollError};
use beacon_chain::BeaconChain;
use block_producer::PollOutcome as BlockPollOutcome;
use block_producer::{BlockProducer, Error as BlockPollError};
use db::MemoryDB;
use direct_beacon_node::DirectBeaconNode;
use direct_duties::DirectDuties;
use fork_choice::{optimised_lmd_ghost::OptimisedLMDGhost, slow_lmd_ghost::SlowLMDGhost};
use local_signer::LocalSigner;
use slot_clock::TestingSlotClock;
use std::sync::Arc;
use types::{BeaconBlock, ChainSpec, FreeAttestation, Keypair, Slot};
#[derive(Debug, PartialEq)]
pub enum BlockProduceError {
DidNotProduce(BlockPollOutcome),
PollError(BlockPollError),
}
#[derive(Debug, PartialEq)]
pub enum AttestationProduceError {
DidNotProduce(AttestationPollOutcome),
PollError(AttestationPollError),
}
/// A `BlockProducer` and `Attester` which sign using a common keypair.
///
/// The test validator connects directly to a borrowed `BeaconChain` struct. It is useful for
/// testing that the core proposer and attester logic is functioning, and for supporting beacon
/// chain tests.
pub struct ValidatorHarness {
pub block_producer: BlockProducer<
TestingSlotClock,
DirectBeaconNode<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>,
DirectDuties<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>,
LocalSigner,
>,
pub attester: Attester<
TestingSlotClock,
DirectBeaconNode<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>,
DirectDuties<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>,
LocalSigner,
>,
pub spec: Arc<ChainSpec>,
pub epoch_map: Arc<DirectDuties<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>>,
pub keypair: Keypair,
pub beacon_node: Arc<DirectBeaconNode<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>>,
pub slot_clock: Arc<TestingSlotClock>,
pub signer: Arc<LocalSigner>,
}
impl ValidatorHarness {
/// Create a new ValidatorHarness that signs with the given keypair, operates per the given
/// spec and connects to the supplied beacon node.
///
/// A `BlockProducer` and an `Attester` are created.
pub fn new(
keypair: Keypair,
beacon_chain: Arc<BeaconChain<MemoryDB, TestingSlotClock, OptimisedLMDGhost<MemoryDB>>>,
spec: Arc<ChainSpec>,
) -> Self {
let slot_clock = Arc::new(TestingSlotClock::new(spec.genesis_slot.as_u64()));
let signer = Arc::new(LocalSigner::new(keypair.clone()));
let beacon_node = Arc::new(DirectBeaconNode::new(beacon_chain.clone()));
let epoch_map = Arc::new(DirectDuties::new(keypair.pk.clone(), beacon_chain.clone()));
let block_producer = BlockProducer::new(
spec.clone(),
epoch_map.clone(),
slot_clock.clone(),
beacon_node.clone(),
signer.clone(),
);
let attester = Attester::new(
epoch_map.clone(),
slot_clock.clone(),
beacon_node.clone(),
signer.clone(),
);
Self {
block_producer,
attester,
spec,
epoch_map,
keypair,
beacon_node,
slot_clock,
signer,
}
}
/// Run the `poll` function on the `BlockProducer` and produce a block.
///
/// An error is returned if the producer refuses to produce.
pub fn produce_block(&mut self) -> Result<BeaconBlock, BlockProduceError> {
// Using `DirectBeaconNode`, the validator will always return successfully if it tries to
// publish a block.
match self.block_producer.poll() {
Ok(BlockPollOutcome::BlockProduced(_)) => {}
Ok(outcome) => return Err(BlockProduceError::DidNotProduce(outcome)),
Err(error) => return Err(BlockProduceError::PollError(error)),
};
Ok(self
.beacon_node
.last_published_block()
.expect("Unable to obtain produced block."))
}
/// Run the `poll` function on the `Attester` and produce a `FreeAttestation`.
///
/// An error is returned if the attester refuses to attest.
pub fn produce_free_attestation(&mut self) -> Result<FreeAttestation, AttestationProduceError> {
match self.attester.poll() {
Ok(AttestationPollOutcome::AttestationProduced(_)) => {}
Ok(outcome) => return Err(AttestationProduceError::DidNotProduce(outcome)),
Err(error) => return Err(AttestationProduceError::PollError(error)),
};
Ok(self
.beacon_node
.last_published_free_attestation()
.expect("Unable to obtain produced attestation."))
}
/// Set the validator's slot clock to the specified slot.
///
/// The validator's slot clock will always read this value until it is set to something else.
pub fn set_slot(&mut self, slot: Slot) {
self.slot_clock.set_slot(slot.as_u64())
}
}
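A sketch of driving a validator through one slot with the harness (the `keypair`, `beacon_chain` and `spec` bindings are hypothetical; construction of the chain itself is elided):

// Hypothetical driver: advance the clock to slot 1, then poll for a block
// and an attestation via the harness convenience methods.
let mut validator = ValidatorHarness::new(keypair, beacon_chain, spec.clone());
validator.set_slot(Slot::new(1));
let _block = validator.produce_block().expect("should produce a block");
let _attestation = validator
    .produce_free_attestation()
    .expect("should produce an attestation");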


@ -0,0 +1,47 @@
use env_logger::{Builder, Env};
use log::debug;
use test_harness::BeaconChainHarness;
use types::{ChainSpec, Slot};
#[test]
#[ignore]
fn it_can_build_on_genesis_block() {
let mut spec = ChainSpec::foundation();
spec.genesis_slot = Slot::new(spec.epoch_length * 8);
/*
spec.shard_count = spec.shard_count / 8;
spec.target_committee_size = spec.target_committee_size / 8;
*/
let validator_count = 1000;
let mut harness = BeaconChainHarness::new(spec, validator_count as usize);
harness.advance_chain_with_block();
}
#[test]
#[ignore]
fn it_can_produce_past_first_epoch_boundary() {
Builder::from_env(Env::default().default_filter_or("debug")).init();
let validator_count = 100;
debug!("Starting harness build...");
let mut harness = BeaconChainHarness::new(ChainSpec::foundation(), validator_count);
debug!("Harness built, tests starting..");
let blocks = harness.spec.epoch_length * 3 + 1;
for i in 0..blocks {
harness.advance_chain_with_block();
debug!("Produced block {}/{}.", i, blocks);
}
let dump = harness.chain_dump().expect("Chain dump failed.");
assert_eq!(dump.len() as u64, blocks + 1); // + 1 for genesis block.
harness.dump_to_file("/tmp/chaindump.json".to_string(), &dump);
}

13 beacon_node/db/Cargo.toml Normal file

@ -0,0 +1,13 @@
[package]
name = "db"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
blake2-rfc = "0.2.18"
bls = { path = "../../eth2/utils/bls" }
bytes = "0.4.10"
rocksdb = "0.10.1"
ssz = { path = "../../eth2/utils/ssz" }
types = { path = "../../eth2/types" }


@ -0,0 +1,197 @@
extern crate rocksdb;
use super::rocksdb::Error as RocksError;
use super::rocksdb::{Options, DB};
use super::{ClientDB, DBError, DBValue};
use std::fs;
use std::path::Path;
/// An on-disk database which implements the ClientDB trait.
///
/// This implementation uses RocksDB with default options.
pub struct DiskDB {
db: DB,
}
impl DiskDB {
/// Open the RocksDB database, optionally supplying columns if required.
///
/// The RocksDB database will be contained in a directory titled
/// "database" in the supplied path.
///
/// # Panics
///
/// Panics if the database is unable to be created.
pub fn open(path: &Path, columns: Option<&[&str]>) -> Self {
/*
* Initialise the options
*/
let mut options = Options::default();
options.create_if_missing(true);
// TODO: ensure that columns are created (and remove
// the dead_code allow)
/*
* Initialise the path
*/
fs::create_dir_all(&path).unwrap_or_else(|_| panic!("Unable to create {:?}", &path));
let db_path = path.join("database");
/*
* Open the database
*/
let db = match columns {
None => DB::open(&options, db_path),
Some(columns) => DB::open_cf(&options, db_path, columns),
}
.expect("Unable to open local database");;
Self { db }
}
/// Create a RocksDB column family. Corresponds to the
/// `create_cf()` function on the RocksDB API.
#[allow(dead_code)]
fn create_col(&mut self, col: &str) -> Result<(), DBError> {
match self.db.create_cf(col, &Options::default()) {
Err(e) => Err(e.into()),
Ok(_) => Ok(()),
}
}
}
impl From<RocksError> for DBError {
fn from(e: RocksError) -> Self {
Self {
message: e.to_string(),
}
}
}
impl ClientDB for DiskDB {
/// Get the value for some key on some column.
///
/// Corresponds to the `get_cf()` method on the RocksDB API.
/// Will attempt to get the `ColumnFamily` and return an Err
/// if it fails.
fn get(&self, col: &str, key: &[u8]) -> Result<Option<DBValue>, DBError> {
match self.db.cf_handle(col) {
None => Err(DBError {
message: "Unknown column".to_string(),
}),
Some(handle) => match self.db.get_cf(handle, key)? {
None => Ok(None),
Some(db_vec) => Ok(Some(DBValue::from(&*db_vec))),
},
}
}
/// Set some value for some key on some column.
///
/// Corresponds to the `cf_handle()` method on the RocksDB API.
/// Will attempt to get the `ColumnFamily` and return an Err
/// if it fails.
fn put(&self, col: &str, key: &[u8], val: &[u8]) -> Result<(), DBError> {
match self.db.cf_handle(col) {
None => Err(DBError {
message: "Unknown column".to_string(),
}),
Some(handle) => self.db.put_cf(handle, key, val).map_err(|e| e.into()),
}
}
/// Return true if some key exists in some column.
fn exists(&self, col: &str, key: &[u8]) -> Result<bool, DBError> {
/*
* I'm not sure if this is the correct way to check whether some
* block exists. Naively I would expect this to unnecessarily
* copy some data, but I could be wrong.
*/
match self.db.cf_handle(col) {
None => Err(DBError {
message: "Unknown column".to_string(),
}),
Some(handle) => Ok(self.db.get_cf(handle, key)?.is_some()),
}
}
/// Delete the value for some key on some column.
///
/// Corresponds to the `delete_cf()` method on the RocksDB API.
/// Will attempt to get the `ColumnFamily` and return an Err
/// if it fails.
fn delete(&self, col: &str, key: &[u8]) -> Result<(), DBError> {
match self.db.cf_handle(col) {
None => Err(DBError {
message: "Unknown column".to_string(),
}),
Some(handle) => {
self.db.delete_cf(handle, key)?;
Ok(())
}
}
}
}
#[cfg(test)]
mod tests {
use super::super::ClientDB;
use super::*;
use std::sync::Arc;
use std::{env, fs, thread};
#[test]
#[ignore]
fn test_rocksdb_can_use_db() {
let pwd = env::current_dir().unwrap();
let path = pwd.join("testdb_please_remove");
let _ = fs::remove_dir_all(&path);
fs::create_dir_all(&path).unwrap();
let col_name: &str = "TestColumn";
let column_families = vec![col_name];
let mut db = DiskDB::open(&path, None);
for cf in column_families {
db.create_col(&cf).unwrap();
}
let db = Arc::new(db);
let thread_count = 10;
let write_count = 10;
// We're expecting the product of these numbers to fit in one byte.
assert!(thread_count * write_count <= 255);
let mut handles = vec![];
for t in 0..thread_count {
let wc = write_count;
let db = db.clone();
let col = col_name;
let handle = thread::spawn(move || {
for w in 0..wc {
let key = (t * w) as u8;
let val = 42;
db.put(&col, &vec![key], &vec![val]).unwrap();
}
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
for t in 0..thread_count {
for w in 0..write_count {
let key = (t * w) as u8;
let val = db.get(&col_name, &vec![key]).unwrap().unwrap();
assert_eq!(vec![42], val);
}
}
fs::remove_dir_all(&path).unwrap();
}
}
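Mirroring the test above, a minimal sketch of opening a `DiskDB` and round-tripping a value (the path and column name are illustrative only; `create_col` is private to this module, as in the test):

// Hypothetical usage: open a database, create a column, write and read back.
let path = std::env::temp_dir().join("lighthouse_example_db");
let mut db = DiskDB::open(&path, None);
db.create_col("blocks").unwrap();
db.put("blocks", b"key", b"value").unwrap();
assert_eq!(db.get("blocks", b"key").unwrap().unwrap(), b"value".to_vec());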

14 beacon_node/db/src/lib.rs Normal file

@ -0,0 +1,14 @@
extern crate blake2_rfc as blake2;
extern crate bls;
extern crate rocksdb;
mod disk_db;
mod memory_db;
pub mod stores;
mod traits;
use self::stores::COLUMNS;
pub use self::disk_db::DiskDB;
pub use self::memory_db::MemoryDB;
pub use self::traits::{ClientDB, DBError, DBValue};


@ -0,0 +1,236 @@
use super::blake2::blake2b::blake2b;
use super::COLUMNS;
use super::{ClientDB, DBError, DBValue};
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;
type DBHashMap = HashMap<Vec<u8>, Vec<u8>>;
type ColumnHashSet = HashSet<String>;
/// An in-memory database implementing the ClientDB trait.
///
/// It is not particularly optimized; it exists for ease and speed of testing. It's not expected
/// this DB would be used outside of tests.
pub struct MemoryDB {
db: RwLock<DBHashMap>,
known_columns: RwLock<ColumnHashSet>,
}
impl MemoryDB {
/// Open the in-memory database.
///
/// All columns must be supplied initially; you will get an error if you try to access a column
/// that was not declared here. This condition is enforced artificially to simulate RocksDB.
pub fn open() -> Self {
let db: DBHashMap = HashMap::new();
let mut known_columns: ColumnHashSet = HashSet::new();
for col in &COLUMNS {
known_columns.insert(col.to_string());
}
Self {
db: RwLock::new(db),
known_columns: RwLock::new(known_columns),
}
}
/// Hashes a key and a column name in order to get a unique key for the supplied column.
fn get_key_for_col(col: &str, key: &[u8]) -> Vec<u8> {
blake2b(32, col.as_bytes(), key).as_bytes().to_vec()
}
}
impl ClientDB for MemoryDB {
/// Get the value of some key from the database. Returns `None` if the key does not exist.
fn get(&self, col: &str, key: &[u8]) -> Result<Option<DBValue>, DBError> {
// Panic if the DB locks are poisoned.
let db = self.db.read().unwrap();
let known_columns = self.known_columns.read().unwrap();
if known_columns.contains(&col.to_string()) {
let column_key = MemoryDB::get_key_for_col(col, key);
Ok(db.get(&column_key).map(|val| val.clone()))
} else {
Err(DBError {
message: "Unknown column".to_string(),
})
}
}
/// Puts a key in the database.
fn put(&self, col: &str, key: &[u8], val: &[u8]) -> Result<(), DBError> {
// Panic if the DB locks are poisoned.
let mut db = self.db.write().unwrap();
let known_columns = self.known_columns.read().unwrap();
if known_columns.contains(&col.to_string()) {
let column_key = MemoryDB::get_key_for_col(col, key);
db.insert(column_key, val.to_vec());
Ok(())
} else {
Err(DBError {
message: "Unknown column".to_string(),
})
}
}
/// Return true if some key exists in some column.
fn exists(&self, col: &str, key: &[u8]) -> Result<bool, DBError> {
// Panic if the DB locks are poisoned.
let db = self.db.read().unwrap();
let known_columns = self.known_columns.read().unwrap();
if known_columns.contains(&col.to_string()) {
let column_key = MemoryDB::get_key_for_col(col, key);
Ok(db.contains_key(&column_key))
} else {
Err(DBError {
message: "Unknown column".to_string(),
})
}
}
/// Delete some key from the database.
fn delete(&self, col: &str, key: &[u8]) -> Result<(), DBError> {
// Panic if the DB locks are poisoned.
let mut db = self.db.write().unwrap();
let known_columns = self.known_columns.read().unwrap();
if known_columns.contains(&col.to_string()) {
let column_key = MemoryDB::get_key_for_col(col, key);
db.remove(&column_key);
Ok(())
} else {
Err(DBError {
message: "Unknown column".to_string(),
})
}
}
}
#[cfg(test)]
mod tests {
use super::super::stores::{BLOCKS_DB_COLUMN, VALIDATOR_DB_COLUMN};
use super::super::ClientDB;
use super::*;
use std::sync::Arc;
use std::thread;
#[test]
fn test_memorydb_can_delete() {
let col_a: &str = BLOCKS_DB_COLUMN;
let db = MemoryDB::open();
db.put(col_a, "dogs".as_bytes(), "lol".as_bytes()).unwrap();
assert_eq!(
db.get(col_a, "dogs".as_bytes()).unwrap().unwrap(),
"lol".as_bytes()
);
db.delete(col_a, "dogs".as_bytes()).unwrap();
assert_eq!(db.get(col_a, "dogs".as_bytes()).unwrap(), None);
}
#[test]
fn test_memorydb_column_access() {
let col_a: &str = BLOCKS_DB_COLUMN;
let col_b: &str = VALIDATOR_DB_COLUMN;
let db = MemoryDB::open();
/*
* Testing that if we write to the same key in different columns that
* there is not an overlap.
*/
db.put(col_a, "same".as_bytes(), "cat".as_bytes()).unwrap();
db.put(col_b, "same".as_bytes(), "dog".as_bytes()).unwrap();
assert_eq!(
db.get(col_a, "same".as_bytes()).unwrap().unwrap(),
"cat".as_bytes()
);
assert_eq!(
db.get(col_b, "same".as_bytes()).unwrap().unwrap(),
"dog".as_bytes()
);
}
#[test]
fn test_memorydb_unknown_column_access() {
let col_a: &str = BLOCKS_DB_COLUMN;
let col_x: &str = "ColumnX";
let db = MemoryDB::open();
/*
* Test that we get errors when using undeclared columns
*/
assert!(db.put(col_a, "cats".as_bytes(), "lol".as_bytes()).is_ok());
assert!(db.put(col_x, "cats".as_bytes(), "lol".as_bytes()).is_err());
assert!(db.get(col_a, "cats".as_bytes()).is_ok());
assert!(db.get(col_x, "cats".as_bytes()).is_err());
}
#[test]
fn test_memorydb_exists() {
let col_a: &str = BLOCKS_DB_COLUMN;
let col_b: &str = VALIDATOR_DB_COLUMN;
let db = MemoryDB::open();
/*
* Testing that if we write to the same key in different columns that
* there is not an overlap.
*/
db.put(col_a, "cats".as_bytes(), "lol".as_bytes()).unwrap();
assert_eq!(true, db.exists(col_a, "cats".as_bytes()).unwrap());
assert_eq!(false, db.exists(col_b, "cats".as_bytes()).unwrap());
assert_eq!(false, db.exists(col_a, "dogs".as_bytes()).unwrap());
assert_eq!(false, db.exists(col_b, "dogs".as_bytes()).unwrap());
}
#[test]
fn test_memorydb_threading() {
let col_name: &str = BLOCKS_DB_COLUMN;
let db = Arc::new(MemoryDB::open());
let thread_count = 10;
let write_count = 10;
// We're expecting the product of these numbers to fit in one byte.
assert!(thread_count * write_count <= 255);
let mut handles = vec![];
for t in 0..thread_count {
let wc = write_count;
let db = db.clone();
let col = col_name;
let handle = thread::spawn(move || {
for w in 0..wc {
let key = (t * w) as u8;
let val = 42;
db.put(&col, &vec![key], &vec![val]).unwrap();
}
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
for t in 0..thread_count {
for w in 0..write_count {
let key = (t * w) as u8;
let val = db.get(&col_name, &vec![key]).unwrap().unwrap();
assert_eq!(vec![42], val);
}
}
}
}


@ -0,0 +1,265 @@
use super::BLOCKS_DB_COLUMN as DB_COLUMN;
use super::{ClientDB, DBError};
use ssz::Decodable;
use std::sync::Arc;
use types::{readers::BeaconBlockReader, BeaconBlock, Hash256, Slot};
#[derive(Clone, Debug, PartialEq)]
pub enum BeaconBlockAtSlotError {
UnknownBeaconBlock(Hash256),
InvalidBeaconBlock(Hash256),
DBError(String),
}
pub struct BeaconBlockStore<T>
where
T: ClientDB,
{
db: Arc<T>,
}
// Implements `put`, `get`, `exists` and `delete` for the store.
impl_crud_for_store!(BeaconBlockStore, DB_COLUMN);
impl<T: ClientDB> BeaconBlockStore<T> {
pub fn new(db: Arc<T>) -> Self {
Self { db }
}
pub fn get_deserialized(&self, hash: &Hash256) -> Result<Option<BeaconBlock>, DBError> {
match self.get(&hash)? {
None => Ok(None),
Some(ssz) => {
let (block, _) = BeaconBlock::ssz_decode(&ssz, 0).map_err(|_| DBError {
message: "Bad BeaconBlock SSZ.".to_string(),
})?;
Ok(Some(block))
}
}
}
/// Returns an object implementing `BeaconBlockReader`, or `None` (if hash not known).
///
/// Note: Presently, this function fully deserializes a `BeaconBlock` and returns that. In the
/// future, it would be ideal to return an object capable of reading directly from serialized
/// SSZ bytes.
pub fn get_reader(&self, hash: &Hash256) -> Result<Option<impl BeaconBlockReader>, DBError> {
match self.get(&hash)? {
None => Ok(None),
Some(ssz) => {
let (block, _) = BeaconBlock::ssz_decode(&ssz, 0).map_err(|_| DBError {
message: "Bad BeaconBlock SSZ.".to_string(),
})?;
Ok(Some(block))
}
}
}
/// Retrieve the block at a given slot, descending the chain from a "head_hash".
///
/// A "head_hash" must be a block hash with a slot number greater than or equal to the desired
/// slot.
///
/// This function will read each block down the chain until it finds a block with the given
/// slot number. If the slot is skipped, the function will return None.
///
/// If a block is found, a tuple of (block_hash, serialized_block) is returned.
///
/// Note: this function uses a loop instead of recursion as the compiler is over-strict when it
/// comes to recursion and the `impl Trait` pattern. See:
/// https://stackoverflow.com/questions/54032940/using-impl-trait-in-a-recursive-function
pub fn block_at_slot(
&self,
head_hash: &Hash256,
slot: Slot,
) -> Result<Option<(Hash256, impl BeaconBlockReader)>, BeaconBlockAtSlotError> {
let mut current_hash = *head_hash;
loop {
if let Some(block_reader) = self.get_reader(&current_hash)? {
if block_reader.slot() == slot {
break Ok(Some((current_hash, block_reader)));
} else if block_reader.slot() < slot {
break Ok(None);
} else {
current_hash = block_reader.parent_root();
}
} else {
break Err(BeaconBlockAtSlotError::UnknownBeaconBlock(current_hash));
}
}
}
}
impl From<DBError> for BeaconBlockAtSlotError {
fn from(e: DBError) -> Self {
BeaconBlockAtSlotError::DBError(e.message)
}
}
#[cfg(test)]
mod tests {
use super::super::super::MemoryDB;
use super::*;
use std::sync::Arc;
use std::thread;
use ssz::ssz_encode;
use types::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use types::BeaconBlock;
use types::Hash256;
test_crud_for_store!(BeaconBlockStore, DB_COLUMN);
#[test]
fn head_hash_slot_too_low() {
let db = Arc::new(MemoryDB::open());
let bs = Arc::new(BeaconBlockStore::new(db.clone()));
let mut rng = XorShiftRng::from_seed([42; 16]);
let mut block = BeaconBlock::random_for_test(&mut rng);
block.slot = Slot::from(10_u64);
let block_root = block.canonical_root();
bs.put(&block_root, &ssz_encode(&block)).unwrap();
let result = bs.block_at_slot(&block_root, Slot::from(11_u64)).unwrap();
assert_eq!(result, None);
}
#[test]
fn test_invalid_block_at_slot() {
let db = Arc::new(MemoryDB::open());
let store = BeaconBlockStore::new(db.clone());
let ssz = "definitly not a valid block".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
db.put(DB_COLUMN, hash, ssz).unwrap();
assert_eq!(
store.block_at_slot(hash, Slot::from(42_u64)),
Err(BeaconBlockAtSlotError::DBError(
"Bad BeaconBlock SSZ.".into()
))
);
}
#[test]
fn test_unknown_block_at_slot() {
let db = Arc::new(MemoryDB::open());
let store = BeaconBlockStore::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
let other_hash = &Hash256::from("another hash".as_bytes());
db.put(DB_COLUMN, hash, ssz).unwrap();
assert_eq!(
store.block_at_slot(other_hash, Slot::from(42_u64)),
Err(BeaconBlockAtSlotError::UnknownBeaconBlock(*other_hash))
);
}
#[test]
fn test_block_store_on_memory_db() {
let db = Arc::new(MemoryDB::open());
let bs = Arc::new(BeaconBlockStore::new(db.clone()));
let thread_count = 10;
let write_count = 10;
// We're expecting the product of these numbers to fit in one byte.
assert!(thread_count * write_count <= 255);
let mut handles = vec![];
for t in 0..thread_count {
let wc = write_count;
let bs = bs.clone();
let handle = thread::spawn(move || {
for w in 0..wc {
let key = (t * w) as u8;
let val = 42;
bs.put(&[key][..].into(), &vec![val]).unwrap();
}
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
for t in 0..thread_count {
for w in 0..write_count {
let key = (t * w) as u8;
assert!(bs.exists(&[key][..].into()).unwrap());
let val = bs.get(&[key][..].into()).unwrap().unwrap();
assert_eq!(vec![42], val);
}
}
}
#[test]
fn test_block_at_slot() {
let db = Arc::new(MemoryDB::open());
let bs = Arc::new(BeaconBlockStore::new(db.clone()));
let mut rng = XorShiftRng::from_seed([42; 16]);
// Specify test block parameters.
let hashes = [
Hash256::from(&[0; 32][..]),
Hash256::from(&[1; 32][..]),
Hash256::from(&[2; 32][..]),
Hash256::from(&[3; 32][..]),
Hash256::from(&[4; 32][..]),
];
let parent_hashes = [
Hash256::from(&[255; 32][..]), // Genesis block.
Hash256::from(&[0; 32][..]),
Hash256::from(&[1; 32][..]),
Hash256::from(&[2; 32][..]),
Hash256::from(&[3; 32][..]),
];
let slots: Vec<Slot> = vec![0, 1, 3, 4, 5].iter().map(|x| Slot::new(*x)).collect();
// Generate a vec of random blocks and store them in the DB.
let block_count = 5;
let mut blocks: Vec<BeaconBlock> = Vec::with_capacity(5);
for i in 0..block_count {
let mut block = BeaconBlock::random_for_test(&mut rng);
block.parent_root = parent_hashes[i];
block.slot = slots[i];
let ssz = ssz_encode(&block);
db.put(DB_COLUMN, &hashes[i], &ssz).unwrap();
blocks.push(block);
}
// Test that certain slots can be reached from certain hashes.
let test_cases = vec![(4, 4), (4, 3), (4, 2), (4, 1), (4, 0)];
for (hashes_index, slot_index) in test_cases {
let (matched_block_hash, reader) = bs
.block_at_slot(&hashes[hashes_index], slots[slot_index])
.unwrap()
.unwrap();
assert_eq!(matched_block_hash, hashes[slot_index]);
assert_eq!(reader.slot(), slots[slot_index]);
}
let ssz = bs.block_at_slot(&hashes[4], Slot::new(2)).unwrap();
assert_eq!(ssz, None);
let ssz = bs.block_at_slot(&hashes[4], Slot::new(6)).unwrap();
assert_eq!(ssz, None);
let bad_hash = &Hash256::from("unknown".as_bytes());
let ssz = bs.block_at_slot(bad_hash, Slot::new(2));
assert_eq!(
ssz,
Err(BeaconBlockAtSlotError::UnknownBeaconBlock(*bad_hash))
);
}
}
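A brief sketch of the chain-walk described above (the `bs` and `head_hash` bindings are hypothetical):

// Hypothetical lookup: walk back from head_hash until slot 3 is found,
// skipped, or the chain runs off the known blocks.
match bs.block_at_slot(&head_hash, Slot::new(3)) {
    Ok(Some((hash, reader))) => println!("found {:?} at slot {:?}", hash, reader.slot()),
    Ok(None) => println!("slot 3 was skipped"),
    Err(e) => println!("lookup failed: {:?}", e),
}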


@ -0,0 +1,80 @@
use super::STATES_DB_COLUMN as DB_COLUMN;
use super::{ClientDB, DBError};
use ssz::Decodable;
use std::sync::Arc;
use types::{readers::BeaconStateReader, BeaconState, Hash256};
pub struct BeaconStateStore<T>
where
T: ClientDB,
{
db: Arc<T>,
}
// Implements `put`, `get`, `exists` and `delete` for the store.
impl_crud_for_store!(BeaconStateStore, DB_COLUMN);
impl<T: ClientDB> BeaconStateStore<T> {
pub fn new(db: Arc<T>) -> Self {
Self { db }
}
pub fn get_deserialized(&self, hash: &Hash256) -> Result<Option<BeaconState>, DBError> {
match self.get(&hash)? {
None => Ok(None),
Some(ssz) => {
let (state, _) = BeaconState::ssz_decode(&ssz, 0).map_err(|_| DBError {
message: "Bad State SSZ.".to_string(),
})?;
Ok(Some(state))
}
}
}
/// Returns an object implementing `BeaconStateReader`, or `None` (if hash not known).
///
/// Note: Presently, this function fully deserializes a `BeaconState` and returns that. In the
/// future, it would be ideal to return an object capable of reading directly from serialized
/// SSZ bytes.
pub fn get_reader(&self, hash: &Hash256) -> Result<Option<impl BeaconStateReader>, DBError> {
match self.get(&hash)? {
None => Ok(None),
Some(ssz) => {
let (state, _) = BeaconState::ssz_decode(&ssz, 0).map_err(|_| DBError {
message: "Bad State SSZ.".to_string(),
})?;
Ok(Some(state))
}
}
}
}
#[cfg(test)]
mod tests {
use super::super::super::MemoryDB;
use super::*;
use ssz::ssz_encode;
use std::sync::Arc;
use types::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use types::Hash256;
test_crud_for_store!(BeaconStateStore, DB_COLUMN);
#[test]
fn test_reader() {
let db = Arc::new(MemoryDB::open());
let store = BeaconStateStore::new(db.clone());
let mut rng = XorShiftRng::from_seed([42; 16]);
let state = BeaconState::random_for_test(&mut rng);
let state_root = state.canonical_root();
store.put(&state_root, &ssz_encode(&state)).unwrap();
let reader = store.get_reader(&state_root).unwrap().unwrap();
let decoded = reader.into_beacon_state().unwrap();
assert_eq!(state, decoded);
}
}


@ -0,0 +1,103 @@
macro_rules! impl_crud_for_store {
($store: ident, $db_column: expr) => {
impl<T: ClientDB> $store<T> {
pub fn put(&self, hash: &Hash256, ssz: &[u8]) -> Result<(), DBError> {
self.db.put($db_column, hash, ssz)
}
pub fn get(&self, hash: &Hash256) -> Result<Option<Vec<u8>>, DBError> {
self.db.get($db_column, hash)
}
pub fn exists(&self, hash: &Hash256) -> Result<bool, DBError> {
self.db.exists($db_column, hash)
}
pub fn delete(&self, hash: &Hash256) -> Result<(), DBError> {
self.db.delete($db_column, hash)
}
}
};
}
#[allow(unused_macros)]
macro_rules! test_crud_for_store {
($store: ident, $db_column: expr) => {
#[test]
fn test_put() {
let db = Arc::new(MemoryDB::open());
let store = $store::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
store.put(hash, ssz).unwrap();
assert_eq!(db.get(DB_COLUMN, hash).unwrap().unwrap(), ssz);
}
#[test]
fn test_get() {
let db = Arc::new(MemoryDB::open());
let store = $store::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
db.put(DB_COLUMN, hash, ssz).unwrap();
assert_eq!(store.get(hash).unwrap().unwrap(), ssz);
}
#[test]
fn test_get_unknown() {
let db = Arc::new(MemoryDB::open());
let store = $store::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
let other_hash = &Hash256::from("another hash".as_bytes());
db.put(DB_COLUMN, other_hash, ssz).unwrap();
assert_eq!(store.get(hash).unwrap(), None);
}
#[test]
fn test_exists() {
let db = Arc::new(MemoryDB::open());
let store = $store::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
db.put(DB_COLUMN, hash, ssz).unwrap();
assert!(store.exists(hash).unwrap());
}
#[test]
fn test_block_does_not_exist() {
let db = Arc::new(MemoryDB::open());
let store = $store::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
let other_hash = &Hash256::from("another hash".as_bytes());
db.put(DB_COLUMN, hash, ssz).unwrap();
assert!(!store.exists(other_hash).unwrap());
}
#[test]
fn test_delete() {
let db = Arc::new(MemoryDB::open());
let store = $store::new(db.clone());
let ssz = "some bytes".as_bytes();
let hash = &Hash256::from("some hash".as_bytes());
db.put(DB_COLUMN, hash, ssz).unwrap();
assert!(db.exists(DB_COLUMN, hash).unwrap());
store.delete(hash).unwrap();
assert!(!db.exists(DB_COLUMN, hash).unwrap());
}
};
}
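As a sketch, any store shaped like the ones above can reuse these macros (`FooStore` is hypothetical; `DB_COLUMN` must name a declared column, and `Arc`, `Hash256` and `DBError` must be in scope):

// Hypothetical store: the macro generates put/get/exists/delete over DB_COLUMN.
pub struct FooStore<T: ClientDB> {
    db: Arc<T>,
}
impl_crud_for_store!(FooStore, DB_COLUMN);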


@ -0,0 +1,25 @@
use super::{ClientDB, DBError};
#[macro_use]
mod macros;
mod beacon_block_store;
mod beacon_state_store;
mod pow_chain_store;
mod validator_store;
pub use self::beacon_block_store::{BeaconBlockAtSlotError, BeaconBlockStore};
pub use self::beacon_state_store::BeaconStateStore;
pub use self::pow_chain_store::PoWChainStore;
pub use self::validator_store::{ValidatorStore, ValidatorStoreError};
pub const BLOCKS_DB_COLUMN: &str = "blocks";
pub const STATES_DB_COLUMN: &str = "states";
pub const POW_CHAIN_DB_COLUMN: &str = "powchain";
pub const VALIDATOR_DB_COLUMN: &str = "validator";
pub const COLUMNS: [&str; 4] = [
BLOCKS_DB_COLUMN,
STATES_DB_COLUMN,
POW_CHAIN_DB_COLUMN,
VALIDATOR_DB_COLUMN,
];


@ -0,0 +1,68 @@
use super::POW_CHAIN_DB_COLUMN as DB_COLUMN;
use super::{ClientDB, DBError};
use std::sync::Arc;
pub struct PoWChainStore<T>
where
T: ClientDB,
{
db: Arc<T>,
}
impl<T: ClientDB> PoWChainStore<T> {
pub fn new(db: Arc<T>) -> Self {
Self { db }
}
pub fn put_block_hash(&self, hash: &[u8]) -> Result<(), DBError> {
self.db.put(DB_COLUMN, hash, &[0])
}
pub fn block_hash_exists(&self, hash: &[u8]) -> Result<bool, DBError> {
self.db.exists(DB_COLUMN, hash)
}
}
#[cfg(test)]
mod tests {
extern crate types;
use super::super::super::MemoryDB;
use super::*;
use self::types::Hash256;
#[test]
fn test_put_block_hash() {
let db = Arc::new(MemoryDB::open());
let store = PoWChainStore::new(db.clone());
let hash = &Hash256::from("some hash".as_bytes()).to_vec();
store.put_block_hash(hash).unwrap();
assert!(db.exists(DB_COLUMN, hash).unwrap());
}
#[test]
fn test_block_hash_exists() {
let db = Arc::new(MemoryDB::open());
let store = PoWChainStore::new(db.clone());
let hash = &Hash256::from("some hash".as_bytes()).to_vec();
db.put(DB_COLUMN, hash, &[0]).unwrap();
assert!(store.block_hash_exists(hash).unwrap());
}
#[test]
fn test_block_hash_does_not_exist() {
let db = Arc::new(MemoryDB::open());
let store = PoWChainStore::new(db.clone());
let hash = &Hash256::from("some hash".as_bytes()).to_vec();
let other_hash = &Hash256::from("another hash".as_bytes()).to_vec();
db.put(DB_COLUMN, hash, &[0]).unwrap();
assert!(!store.block_hash_exists(other_hash).unwrap());
}
}


@ -0,0 +1,215 @@
extern crate bytes;
use self::bytes::{BufMut, BytesMut};
use super::VALIDATOR_DB_COLUMN as DB_COLUMN;
use super::{ClientDB, DBError};
use bls::PublicKey;
use ssz::{ssz_encode, Decodable};
use std::sync::Arc;
#[derive(Debug, PartialEq)]
pub enum ValidatorStoreError {
DBError(String),
DecodeError,
}
impl From<DBError> for ValidatorStoreError {
fn from(error: DBError) -> Self {
ValidatorStoreError::DBError(error.message)
}
}
#[derive(Debug, PartialEq)]
enum KeyPrefixes {
PublicKey,
}
pub struct ValidatorStore<T>
where
T: ClientDB,
{
db: Arc<T>,
}
impl<T: ClientDB> ValidatorStore<T> {
pub fn new(db: Arc<T>) -> Self {
Self { db }
}
fn prefix_bytes(&self, key_prefix: &KeyPrefixes) -> Vec<u8> {
match key_prefix {
KeyPrefixes::PublicKey => b"pubkey".to_vec(),
}
}
fn get_db_key_for_index(&self, key_prefix: &KeyPrefixes, index: usize) -> Vec<u8> {
let mut buf = BytesMut::with_capacity(6 + 8);
buf.put(self.prefix_bytes(key_prefix));
buf.put_u64_be(index as u64);
buf.take().to_vec()
}
pub fn put_public_key_by_index(
&self,
index: usize,
public_key: &PublicKey,
) -> Result<(), ValidatorStoreError> {
let key = self.get_db_key_for_index(&KeyPrefixes::PublicKey, index);
let val = ssz_encode(public_key);
self.db
.put(DB_COLUMN, &key[..], &val[..])
.map_err(ValidatorStoreError::from)
}
pub fn get_public_key_by_index(
&self,
index: usize,
) -> Result<Option<PublicKey>, ValidatorStoreError> {
let key = self.get_db_key_for_index(&KeyPrefixes::PublicKey, index);
let val = self.db.get(DB_COLUMN, &key[..])?;
match val {
None => Ok(None),
Some(val) => match PublicKey::ssz_decode(&val, 0) {
Ok((key, _)) => Ok(Some(key)),
Err(_) => Err(ValidatorStoreError::DecodeError),
},
}
}
}
#[cfg(test)]
mod tests {
use super::super::super::MemoryDB;
use super::*;
use bls::Keypair;
#[test]
fn test_prefix_bytes() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db.clone());
assert_eq!(
store.prefix_bytes(&KeyPrefixes::PublicKey),
b"pubkey".to_vec()
);
}
#[test]
fn test_get_db_key_for_index() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db.clone());
let mut buf = BytesMut::with_capacity(6 + 8);
buf.put(b"pubkey".to_vec());
buf.put_u64_be(42);
assert_eq!(
store.get_db_key_for_index(&KeyPrefixes::PublicKey, 42),
buf.take().to_vec()
)
}
#[test]
fn test_put_public_key_by_index() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db.clone());
let index = 3;
let public_key = Keypair::random().pk;
store.put_public_key_by_index(index, &public_key).unwrap();
let public_key_at_index = db
.get(
DB_COLUMN,
&store.get_db_key_for_index(&KeyPrefixes::PublicKey, index)[..],
)
.unwrap()
.unwrap();
assert_eq!(public_key_at_index, ssz_encode(&public_key));
}
#[test]
fn test_get_public_key_by_index() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db.clone());
let index = 4;
let public_key = Keypair::random().pk;
db.put(
DB_COLUMN,
&store.get_db_key_for_index(&KeyPrefixes::PublicKey, index)[..],
&ssz_encode(&public_key)[..],
)
.unwrap();
let public_key_at_index = store.get_public_key_by_index(index).unwrap().unwrap();
assert_eq!(public_key_at_index, public_key);
}
#[test]
fn test_get_public_key_by_unknown_index() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db.clone());
let public_key = Keypair::random().pk;
db.put(
DB_COLUMN,
&store.get_db_key_for_index(&KeyPrefixes::PublicKey, 3)[..],
&ssz_encode(&public_key)[..],
)
.unwrap();
let public_key_at_index = store.get_public_key_by_index(4).unwrap();
assert_eq!(public_key_at_index, None);
}
#[test]
fn test_get_invalid_public_key() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db.clone());
let key = store.get_db_key_for_index(&KeyPrefixes::PublicKey, 42);
db.put(DB_COLUMN, &key[..], "cats".as_bytes()).unwrap();
assert_eq!(
store.get_public_key_by_index(42),
Err(ValidatorStoreError::DecodeError)
);
}
#[test]
fn test_validator_store_put_get() {
let db = Arc::new(MemoryDB::open());
let store = ValidatorStore::new(db);
let keys = vec![
Keypair::random(),
Keypair::random(),
Keypair::random(),
Keypair::random(),
Keypair::random(),
];
for i in 0..keys.len() {
store.put_public_key_by_index(i, &keys[i].pk).unwrap();
}
/*
* Check all keys are retrieved correctly.
*/
for i in 0..keys.len() {
let retrieved = store.get_public_key_by_index(i).unwrap().unwrap();
assert_eq!(retrieved, keys[i].pk);
}
/*
* Check that an index that wasn't stored returns None.
*/
assert!(store
.get_public_key_by_index(keys.len() + 1)
.unwrap()
.is_none());
}
}
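For reference, the key scheme above is simply the 6-byte prefix followed by the index as a big-endian u64. A sketch of the layout for index 42 (the `store` binding is hypothetical; `get_db_key_for_index` is private to this module, as in the tests):

// Key layout: b"pubkey" ++ index.to_be_bytes() (8 bytes, big-endian).
let expected: Vec<u8> = [&b"pubkey"[..], &42u64.to_be_bytes()[..]].concat();
assert_eq!(expected.len(), 6 + 8);
assert_eq!(store.get_db_key_for_index(&KeyPrefixes::PublicKey, 42), expected);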


@ -0,0 +1,28 @@
pub type DBValue = Vec<u8>;
#[derive(Debug)]
pub struct DBError {
pub message: String,
}
impl DBError {
pub fn new(message: String) -> Self {
Self { message }
}
}
/// A generic database to be used by the "client" (i.e.,
/// the lighthouse blockchain client).
///
/// The purpose of having this generic trait is to allow the
/// program to use a persistent on-disk database during production,
/// but use a transient database during tests.
pub trait ClientDB: Sync + Send {
fn get(&self, col: &str, key: &[u8]) -> Result<Option<DBValue>, DBError>;
fn put(&self, col: &str, key: &[u8], val: &[u8]) -> Result<(), DBError>;
fn exists(&self, col: &str, key: &[u8]) -> Result<bool, DBError>;
fn delete(&self, col: &str, key: &[u8]) -> Result<(), DBError>;
}
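Because the trait only deals in byte slices, storage-agnostic helpers are easy to write. A sketch (hypothetical function; works against `MemoryDB` or `DiskDB` alike):

// Hypothetical helper: copy one key from one column to another on any ClientDB.
fn copy_key<T: ClientDB>(db: &T, from: &str, to: &str, key: &[u8]) -> Result<(), DBError> {
    match db.get(from, key)? {
        Some(val) => db.put(to, key, &val),
        None => Ok(()),
    }
}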


@ -0,0 +1,30 @@
use std::fs;
use std::path::PathBuf;
/// Stores the core configuration for this Lighthouse instance.
/// This struct is general, other components may implement more
/// specialized config structs.
#[derive(Clone)]
pub struct LighthouseConfig {
pub data_dir: PathBuf,
pub p2p_listen_port: u16,
}
const DEFAULT_LIGHTHOUSE_DIR: &str = ".lighthouse";
impl LighthouseConfig {
/// Build a new lighthouse configuration from defaults.
pub fn default() -> Self {
let data_dir = {
let home = dirs::home_dir().expect("Unable to determine home dir.");
home.join(DEFAULT_LIGHTHOUSE_DIR)
};
fs::create_dir_all(&data_dir)
.unwrap_or_else(|_| panic!("Unable to create {:?}", &data_dir));
let p2p_listen_port = 0;
Self {
data_dir,
p2p_listen_port,
}
}
}
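A sketch of taking the defaults and overriding them (this is essentially what `main.rs` below does with its CLI flags; the values are illustrative):

// Hypothetical: start from defaults, then point the node somewhere else.
let mut config = LighthouseConfig::default();
config.data_dir = PathBuf::from("/tmp/lighthouse-example");
config.p2p_listen_port = 9000;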

134 beacon_node/src/main.rs Normal file

@ -0,0 +1,134 @@
extern crate slog;
mod config;
mod rpc;
use std::path::PathBuf;
use crate::config::LighthouseConfig;
use crate::rpc::start_server;
use beacon_chain::BeaconChain;
use bls::create_proof_of_possession;
use clap::{App, Arg};
use db::{
stores::{BeaconBlockStore, BeaconStateStore},
MemoryDB,
};
use fork_choice::optimised_lmd_ghost::OptimisedLMDGhost;
use slog::{error, info, o, Drain};
use slot_clock::SystemTimeSlotClock;
use std::sync::Arc;
use types::{ChainSpec, Deposit, DepositData, DepositInput, Eth1Data, Hash256, Keypair};
fn main() {
let decorator = slog_term::TermDecorator::new().build();
let drain = slog_term::CompactFormat::new(decorator).build().fuse();
let drain = slog_async::Async::new(drain).build().fuse();
let log = slog::Logger::root(drain, o!());
let matches = App::new("Lighthouse")
.version("0.0.1")
.author("Sigma Prime <paul@sigmaprime.io>")
.about("Eth 2.0 Client")
.arg(
Arg::with_name("datadir")
.long("datadir")
.value_name("DIR")
.help("Data directory for keys and databases.")
.takes_value(true),
)
.arg(
Arg::with_name("port")
.long("port")
.value_name("PORT")
.help("Network listen port for p2p connections.")
.takes_value(true),
)
.get_matches();
let mut config = LighthouseConfig::default();
// Custom datadir
if let Some(dir) = matches.value_of("datadir") {
config.data_dir = PathBuf::from(dir.to_string());
}
// Custom p2p listen port
if let Some(port_str) = matches.value_of("port") {
if let Ok(port) = port_str.parse::<u16>() {
config.p2p_listen_port = port;
} else {
error!(log, "Invalid port"; "port" => port_str);
return;
}
}
// Log configuration
info!(log, "";
"data_dir" => &config.data_dir.to_str(),
"port" => &config.p2p_listen_port);
// Specification (presently fixed to foundation).
let spec = ChainSpec::foundation();
// Database (presently in-memory)
let db = Arc::new(MemoryDB::open());
let block_store = Arc::new(BeaconBlockStore::new(db.clone()));
let state_store = Arc::new(BeaconStateStore::new(db.clone()));
// Slot clock
let genesis_time = 1_549_935_547; // 12th Feb 2019 (arbitrary value in the past).
let slot_clock = SystemTimeSlotClock::new(genesis_time, spec.slot_duration)
.expect("Unable to load SystemTimeSlotClock");
// Choose the fork choice
let fork_choice = OptimisedLMDGhost::new(block_store.clone(), state_store.clone());
/*
* Generate some random data to start a chain with.
*
* This will need to be replaced for production usage.
*/
let latest_eth1_data = Eth1Data {
deposit_root: Hash256::zero(),
block_hash: Hash256::zero(),
};
let keypairs: Vec<Keypair> = (0..10).map(|_| Keypair::random()).collect();
let initial_validator_deposits = keypairs
.iter()
.map(|keypair| Deposit {
branch: vec![], // branch verification is not specified.
index: 0, // index verification is not specified.
deposit_data: DepositData {
amount: 32_000_000_000, // 32 ETH (in Gwei)
timestamp: genesis_time - 1,
deposit_input: DepositInput {
pubkey: keypair.pk.clone(),
withdrawal_credentials: Hash256::zero(), // Withdrawal not possible.
proof_of_possession: create_proof_of_possession(&keypair),
},
},
})
.collect();
// Genesis chain
let _chain_result = BeaconChain::genesis(
state_store.clone(),
block_store.clone(),
slot_clock,
genesis_time,
latest_eth1_data,
initial_validator_deposits,
spec,
fork_choice,
);
let _server = start_server(log.clone());
loop {
std::thread::sleep(std::time::Duration::from_secs(1));
}
}


@ -0,0 +1,57 @@
use futures::Future;
use grpcio::{RpcContext, UnarySink};
use protos::services::{
BeaconBlock as BeaconBlockProto, ProduceBeaconBlockRequest, ProduceBeaconBlockResponse,
PublishBeaconBlockRequest, PublishBeaconBlockResponse,
};
use protos::services_grpc::BeaconBlockService;
use slog::Logger;
#[derive(Clone)]
pub struct BeaconBlockServiceInstance {
pub log: Logger,
}
impl BeaconBlockService for BeaconBlockServiceInstance {
/// Produce a `BeaconBlock` for signing by a validator.
fn produce_beacon_block(
&mut self,
ctx: RpcContext,
req: ProduceBeaconBlockRequest,
sink: UnarySink<ProduceBeaconBlockResponse>,
) {
println!("producing at slot {}", req.get_slot());
// TODO: build a legit block.
let mut block = BeaconBlockProto::new();
block.set_slot(req.get_slot());
block.set_block_root(b"cats".to_vec());
let mut resp = ProduceBeaconBlockResponse::new();
resp.set_block(block);
let f = sink
.success(resp)
.map_err(move |e| println!("failed to reply {:?}: {:?}", req, e));
ctx.spawn(f)
}
/// Accept some fully-formed `BeaconBlock`, process and publish it.
fn publish_beacon_block(
&mut self,
ctx: RpcContext,
req: PublishBeaconBlockRequest,
sink: UnarySink<PublishBeaconBlockResponse>,
) {
println!("publishing {:?}", req.get_block());
// TODO: actually process the block.
let mut resp = PublishBeaconBlockResponse::new();
resp.set_success(true);
let f = sink
.success(resp)
.map_err(move |e| println!("failed to reply {:?}: {:?}", req, e));
ctx.spawn(f)
}
}


@ -0,0 +1,36 @@
mod beacon_block;
mod validator;
use self::beacon_block::BeaconBlockServiceInstance;
use self::validator::ValidatorServiceInstance;
use grpcio::{Environment, Server, ServerBuilder};
use protos::services_grpc::{create_beacon_block_service, create_validator_service};
use std::sync::Arc;
use slog::{info, Logger};
pub fn start_server(log: Logger) -> Server {
let log_clone = log.clone();
let env = Arc::new(Environment::new(1));
let beacon_block_service = {
let instance = BeaconBlockServiceInstance { log: log.clone() };
create_beacon_block_service(instance)
};
let validator_service = {
let instance = ValidatorServiceInstance { log: log.clone() };
create_validator_service(instance)
};
let mut server = ServerBuilder::new(env)
.register_service(beacon_block_service)
.register_service(validator_service)
.bind("127.0.0.1", 50_051)
.build()
.unwrap();
server.start();
for &(ref host, port) in server.bind_addrs() {
info!(log_clone, "gRPC listening on {}:{}", host, port);
}
server
}


@ -0,0 +1,64 @@
use bls::PublicKey;
use futures::Future;
use grpcio::{RpcContext, RpcStatus, RpcStatusCode, UnarySink};
use protos::services::{
IndexResponse, ProposeBlockSlotRequest, ProposeBlockSlotResponse, PublicKey as PublicKeyRequest,
};
use protos::services_grpc::ValidatorService;
use slog::{debug, Logger};
use ssz::Decodable;
#[derive(Clone)]
pub struct ValidatorServiceInstance {
pub log: Logger,
}
impl ValidatorService for ValidatorServiceInstance {
fn validator_index(
&mut self,
ctx: RpcContext,
req: PublicKeyRequest,
sink: UnarySink<IndexResponse>,
) {
if let Ok((public_key, _)) = PublicKey::ssz_decode(req.get_public_key(), 0) {
debug!(self.log, "RPC request"; "endpoint" => "ValidatorIndex", "public_key" => public_key.concatenated_hex_id());
let mut resp = IndexResponse::new();
// TODO: return a legit value.
resp.set_index(1);
let f = sink
.success(resp)
.map_err(move |e| println!("failed to reply {:?}: {:?}", req, e));
ctx.spawn(f)
} else {
let f = sink
.fail(RpcStatus::new(
RpcStatusCode::InvalidArgument,
Some("Invalid public_key".to_string()),
))
.map_err(move |e| println!("failed to reply {:?}: {:?}", req, e));
ctx.spawn(f)
}
}
fn propose_block_slot(
&mut self,
ctx: RpcContext,
req: ProposeBlockSlotRequest,
sink: UnarySink<ProposeBlockSlotResponse>,
) {
debug!(self.log, "RPC request"; "endpoint" => "ProposeBlockSlot", "epoch" => req.get_epoch(), "validator_index" => req.get_validator_index());
let mut resp = ProposeBlockSlotResponse::new();
// TODO: return a legit value.
resp.set_slot(1);
let f = sink
.success(resp)
.map_err(move |e| println!("failed to reply {:?}: {:?}", req, e));
ctx.spawn(f)
}
}


@ -1,10 +0,0 @@
# Lighthouse Documentation
Table of Contents:
- [ONBOARDING.md](ONBOARDING.md): General on-boarding info,
including style-guide.
- [LIGHTHOUSE.md](LIGHTHOUSE.md): Project goals and ethos.
- [RUNNING.md](RUNNING.md): Step-by-step on getting the code running.
- [SERENITY.md](SERENITY.md): Introduction to Ethereum Serenity.


@ -1,50 +0,0 @@
# Running Lighthouse Code
These documents provide a guide for running code in the following repositories:
- [lighthouse-libs](https://github.com/sigp/lighthouse-libs)
- [lighthouse-beacon](https://github.com/sigp/lighthouse-beacon)
- [lighthouse-validator](https://github.com/sigp/lighthouse-validator)
This code-base is still very much under development and does not provide any
user-facing functionality. For developers and researchers, there are several
tests and benchmarks which may be of interest.
A few basic steps are needed to get set up:
1. Install [rustup](https://rustup.rs/). It's a toolchain manager for Rust
(Linux | macOS | Windows). For installation, run the below command in your
terminal: `$ curl https://sh.rustup.rs -sSf | sh`
2. (Linux & macOS) To configure your current shell run: `$ source
$HOME/.cargo/env`
3. Use the command `rustup show` to get information about the Rust
installation. You should see that the active toolchain is the stable
version.
4. Run `rustc --version` to check the installation and version of Rust.
- Updates can be performed using `rustup update`.
5. Install build dependencies (Arch packages are listed here, your
distribution will likely be similar):
- `clang`: required by RocksDB.
- `protobuf`: required for protobuf serialization (gRPC).
6. Navigate to the working directory.
7. Run the tests with `cargo test --all`. If you are doing this for the first
time, grab a coffee in the meantime: building, compiling and running all test
cases takes a while. If everything passes, it's working properly and it's time
to get your hands dirty. If there is an error, please raise an
[issue](https://github.com/sigp/lighthouse/issues) and we will help you.
8. As an alternative to the above step, you may also run the benchmarks with
`cargo bench --all`. (Note: not all repositories have benchmarks.)
##### Note: Lighthouse presently runs on Rust `stable`.
##### Note for Windows users: Perl may also be required to build Lighthouse.
You can install [Strawberry Perl](http://strawberryperl.com/), or alternatively
use a choco install command `choco install strawberryperl`.
Additionally, the dependency `protoc-grpcio v0.3.1` is reported to have issues
compiling in Windows. You can specify a known working version by editing the
version in the "build-dependencies" section of protos/Cargo.toml to
`protoc-grpcio = "<=0.3.0"`.

10 eth2/attester/Cargo.toml Normal file

@ -0,0 +1,10 @@
[package]
name = "attester"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
slot_clock = { path = "../../eth2/utils/slot_clock" }
ssz = { path = "../../eth2/utils/ssz" }
types = { path = "../../eth2/types" }

250 eth2/attester/src/lib.rs Normal file

@ -0,0 +1,250 @@
pub mod test_utils;
mod traits;
use slot_clock::SlotClock;
use std::sync::Arc;
use types::{AttestationData, FreeAttestation, Signature, Slot};
pub use self::traits::{
BeaconNode, BeaconNodeError, DutiesReader, DutiesReaderError, PublishOutcome, Signer,
};
const PHASE_0_CUSTODY_BIT: bool = false;
#[derive(Debug, PartialEq)]
pub enum PollOutcome {
AttestationProduced(Slot),
AttestationNotRequired(Slot),
SlashableAttestationNotProduced(Slot),
BeaconNodeUnableToProduceAttestation(Slot),
ProducerDutiesUnknown(Slot),
SlotAlreadyProcessed(Slot),
SignerRejection(Slot),
ValidatorIsUnknown(Slot),
}
#[derive(Debug, PartialEq)]
pub enum Error {
SlotClockError,
SlotUnknowable,
EpochMapPoisoned,
SlotClockPoisoned,
EpochLengthIsZero,
BeaconNodeError(BeaconNodeError),
}
/// A polling state machine which performs attestation duties, based upon some epoch duties
/// (`EpochDutiesMap`) and a concept of time (`SlotClock`).
///
/// Ensures that messages are not slashable.
///
/// Relies upon an external service to keep the `EpochDutiesMap` updated.
pub struct Attester<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer> {
pub last_processed_slot: Option<Slot>,
duties: Arc<V>,
slot_clock: Arc<T>,
beacon_node: Arc<U>,
signer: Arc<W>,
}
impl<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer> Attester<T, U, V, W> {
/// Returns a new instance where `last_processed_slot == None`.
pub fn new(duties: Arc<V>, slot_clock: Arc<T>, beacon_node: Arc<U>, signer: Arc<W>) -> Self {
Self {
last_processed_slot: None,
duties,
slot_clock,
beacon_node,
signer,
}
}
}
impl<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer> Attester<T, U, V, W> {
/// Poll the `BeaconNode` and produce an attestation if required.
pub fn poll(&mut self) -> Result<PollOutcome, Error> {
let slot = self
.slot_clock
.present_slot()
.map_err(|_| Error::SlotClockError)?
.ok_or(Error::SlotUnknowable)?;
if !self.is_processed_slot(slot) {
self.last_processed_slot = Some(slot);
let shard = match self.duties.attestation_shard(slot) {
Ok(Some(result)) => result,
Ok(None) => return Ok(PollOutcome::AttestationNotRequired(slot)),
Err(DutiesReaderError::UnknownEpoch) => {
return Ok(PollOutcome::ProducerDutiesUnknown(slot));
}
Err(DutiesReaderError::UnknownValidator) => {
return Ok(PollOutcome::ValidatorIsUnknown(slot));
}
Err(DutiesReaderError::EpochLengthIsZero) => return Err(Error::EpochLengthIsZero),
Err(DutiesReaderError::Poisoned) => return Err(Error::EpochMapPoisoned),
};
self.produce_attestation(slot, shard)
} else {
Ok(PollOutcome::SlotAlreadyProcessed(slot))
}
}
fn produce_attestation(&mut self, slot: Slot, shard: u64) -> Result<PollOutcome, Error> {
let attestation_data = match self.beacon_node.produce_attestation_data(slot, shard)? {
Some(attestation_data) => attestation_data,
None => return Ok(PollOutcome::BeaconNodeUnableToProduceAttestation(slot)),
};
if !self.safe_to_produce(&attestation_data) {
return Ok(PollOutcome::SlashableAttestationNotProduced(slot));
}
let signature = match self.sign_attestation_data(&attestation_data) {
Some(signature) => signature,
None => return Ok(PollOutcome::SignerRejection(slot)),
};
let validator_index = match self.duties.validator_index() {
Some(validator_index) => validator_index,
None => return Ok(PollOutcome::ValidatorIsUnknown(slot)),
};
let free_attestation = FreeAttestation {
data: attestation_data,
signature,
validator_index,
};
self.beacon_node
.publish_attestation_data(free_attestation)?;
Ok(PollOutcome::AttestationProduced(slot))
}
fn is_processed_slot(&self, slot: Slot) -> bool {
match self.last_processed_slot {
Some(processed_slot) if slot <= processed_slot => true,
_ => false,
}
}
/// Returns the given attestation data signed with the validator's private key.
///
/// Important: this function will not check to ensure the attestation is not slashable. This
/// must be done upstream.
fn sign_attestation_data(&mut self, attestation_data: &AttestationData) -> Option<Signature> {
self.store_produce(attestation_data);
self.signer
.sign_attestation_message(&attestation_data.signable_message(PHASE_0_CUSTODY_BIT)[..])
}
/// Returns `true` if signing some attestation_data is safe (non-slashable).
///
/// !!! UNSAFE !!!
///
/// Important: this function is presently stubbed-out. It provides ZERO SAFETY.
fn safe_to_produce(&self, _attestation_data: &AttestationData) -> bool {
// TODO: ensure the producer doesn't produce slashable blocks.
// https://github.com/sigp/lighthouse/issues/160
true
}
/// Record that an attestation was produced so that slashable votes may not be made in the future.
///
/// !!! UNSAFE !!!
///
/// Important: this function is presently stubbed-out. It provides ZERO SAFETY.
fn store_produce(&mut self, _attestation_data: &AttestationData) {
// TODO: record this block production to prevent future slashings.
// https://github.com/sigp/lighthouse/issues/160
}
}
impl From<BeaconNodeError> for Error {
fn from(e: BeaconNodeError) -> Error {
Error::BeaconNodeError(e)
}
}
#[cfg(test)]
mod tests {
use super::test_utils::{EpochMap, LocalSigner, SimulatedBeaconNode};
use super::*;
use slot_clock::TestingSlotClock;
use types::{
test_utils::{SeedableRng, TestRandom, XorShiftRng},
ChainSpec, Keypair,
};
// TODO: implement more thorough testing.
// https://github.com/sigp/lighthouse/issues/160
//
// These tests should serve as a good example for future tests.
#[test]
pub fn polling() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let spec = Arc::new(ChainSpec::foundation());
let slot_clock = Arc::new(TestingSlotClock::new(0));
let beacon_node = Arc::new(SimulatedBeaconNode::default());
let signer = Arc::new(LocalSigner::new(Keypair::random()));
let mut duties = EpochMap::new(spec.epoch_length);
let attest_slot = Slot::new(100);
let attest_epoch = attest_slot / spec.epoch_length;
let attest_shard = 12;
duties.insert_attestation_shard(attest_slot, attest_shard);
duties.set_validator_index(Some(2));
let duties = Arc::new(duties);
let mut attester = Attester::new(
duties.clone(),
slot_clock.clone(),
beacon_node.clone(),
signer.clone(),
);
// Configure responses from the BeaconNode.
beacon_node.set_next_produce_result(Ok(Some(AttestationData::random_for_test(&mut rng))));
beacon_node.set_next_publish_result(Ok(PublishOutcome::ValidAttestation));
// One slot before attestation slot...
slot_clock.set_slot(attest_slot.as_u64() - 1);
assert_eq!(
attester.poll(),
Ok(PollOutcome::AttestationNotRequired(attest_slot - 1))
);
// On the attest slot...
slot_clock.set_slot(attest_slot.as_u64());
assert_eq!(
attester.poll(),
Ok(PollOutcome::AttestationProduced(attest_slot))
);
// Trying the same attest slot again...
slot_clock.set_slot(attest_slot.as_u64());
assert_eq!(
attester.poll(),
Ok(PollOutcome::SlotAlreadyProcessed(attest_slot))
);
// One slot after the attest slot...
slot_clock.set_slot(attest_slot.as_u64() + 1);
assert_eq!(
attester.poll(),
Ok(PollOutcome::AttestationNotRequired(attest_slot + 1))
);
// In an epoch without known duties...
let slot = (attest_epoch + 1) * spec.epoch_length;
slot_clock.set_slot(slot.into());
assert_eq!(
attester.poll(),
Ok(PollOutcome::ProducerDutiesUnknown(slot))
);
}
}


@ -0,0 +1,44 @@
use crate::{DutiesReader, DutiesReaderError};
use std::collections::HashMap;
use types::{Epoch, Slot};
pub struct EpochMap {
epoch_length: u64,
validator_index: Option<u64>,
map: HashMap<Epoch, (Slot, u64)>,
}
impl EpochMap {
pub fn new(epoch_length: u64) -> Self {
Self {
epoch_length,
validator_index: None,
map: HashMap::new(),
}
}
pub fn insert_attestation_shard(&mut self, slot: Slot, shard: u64) {
let epoch = slot.epoch(self.epoch_length);
self.map.insert(epoch, (slot, shard));
}
pub fn set_validator_index(&mut self, index: Option<u64>) {
self.validator_index = index;
}
}
impl DutiesReader for EpochMap {
fn attestation_shard(&self, slot: Slot) -> Result<Option<u64>, DutiesReaderError> {
let epoch = slot.epoch(self.epoch_length);
match self.map.get(&epoch) {
Some((attest_slot, attest_shard)) if *attest_slot == slot => Ok(Some(*attest_shard)),
Some((attest_slot, _attest_shard)) if *attest_slot != slot => Ok(None),
_ => Err(DutiesReaderError::UnknownEpoch),
}
}
fn validator_index(&self) -> Option<u64> {
self.validator_index
}
}

View File

@ -0,0 +1,31 @@
use crate::traits::Signer;
use std::sync::RwLock;
use types::{Keypair, Signature};
/// A test-only struct used to simulate a signer.
pub struct LocalSigner {
keypair: Keypair,
should_sign: RwLock<bool>,
}
impl LocalSigner {
/// Produce a new LocalSigner with signing enabled by default.
pub fn new(keypair: Keypair) -> Self {
Self {
keypair,
should_sign: RwLock::new(true),
}
}
/// If set to `false`, the signer will refuse to sign any messages. Otherwise, all messages
/// will be signed.
pub fn enable_signing(&self, enabled: bool) {
*self.should_sign.write().unwrap() = enabled;
}
}
impl Signer for LocalSigner {
fn sign_attestation_message(&self, message: &[u8]) -> Option<Signature> {
// Honour the `should_sign` flag documented on `enable_signing`.
match *self.should_sign.read().unwrap() {
false => None,
true => Some(Signature::new(message, &self.keypair.sk)),
}
}
}

View File

@ -0,0 +1,7 @@
mod epoch_map;
mod local_signer;
mod simulated_beacon_node;
pub use self::epoch_map::EpochMap;
pub use self::local_signer::LocalSigner;
pub use self::simulated_beacon_node::SimulatedBeaconNode;

View File

@ -0,0 +1,44 @@
use crate::traits::{BeaconNode, BeaconNodeError, PublishOutcome};
use std::sync::RwLock;
use types::{AttestationData, FreeAttestation, Slot};
type ProduceResult = Result<Option<AttestationData>, BeaconNodeError>;
type PublishResult = Result<PublishOutcome, BeaconNodeError>;
/// A test-only struct used to simulate a Beacon Node.
#[derive(Default)]
pub struct SimulatedBeaconNode {
pub produce_input: RwLock<Option<(Slot, u64)>>,
pub produce_result: RwLock<Option<ProduceResult>>,
pub publish_input: RwLock<Option<FreeAttestation>>,
pub publish_result: RwLock<Option<PublishResult>>,
}
impl SimulatedBeaconNode {
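/// Set the result to be returned when `produce_attestation_data` is called.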
pub fn set_next_produce_result(&self, result: ProduceResult) {
*self.produce_result.write().unwrap() = Some(result);
}
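/// Set the result to be returned when `publish_attestation_data` is called.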
pub fn set_next_publish_result(&self, result: PublishResult) {
*self.publish_result.write().unwrap() = Some(result);
}
}
impl BeaconNode for SimulatedBeaconNode {
fn produce_attestation_data(&self, slot: Slot, shard: u64) -> ProduceResult {
*self.produce_input.write().unwrap() = Some((slot, shard));
match *self.produce_result.read().unwrap() {
Some(ref r) => r.clone(),
None => panic!("TestBeaconNode: produce_result == None"),
}
}
fn publish_attestation_data(&self, free_attestation: FreeAttestation) -> PublishResult {
*self.publish_input.write().unwrap() = Some(free_attestation.clone());
match *self.publish_result.read().unwrap() {
Some(ref r) => r.clone(),
None => panic!("TestBeaconNode: publish_result == None"),
}
}
}

View File

@ -0,0 +1,49 @@
use types::{AttestationData, FreeAttestation, Signature, Slot};
#[derive(Debug, PartialEq, Clone)]
pub enum BeaconNodeError {
RemoteFailure(String),
DecodeFailure,
}
#[derive(Debug, PartialEq, Clone)]
pub enum PublishOutcome {
ValidAttestation,
InvalidAttestation(String),
}
/// Defines the methods required to produce and publish attestations on a Beacon Node.
pub trait BeaconNode: Send + Sync {
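/// Request that the node produces `AttestationData` for the given slot and shard.
///
/// Returns `Ok(None)` if the Beacon Node is unable to produce at the given slot.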
fn produce_attestation_data(
&self,
slot: Slot,
shard: u64,
) -> Result<Option<AttestationData>, BeaconNodeError>;
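/// Request that the node publishes the given `FreeAttestation`.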
fn publish_attestation_data(
&self,
free_attestation: FreeAttestation,
) -> Result<PublishOutcome, BeaconNodeError>;
}
#[derive(Debug, PartialEq, Clone)]
pub enum DutiesReaderError {
UnknownValidator,
UnknownEpoch,
EpochLengthIsZero,
Poisoned,
}
/// Informs a validator of their duties (e.g., attestation production).
pub trait DutiesReader: Send + Sync {
/// Returns `Some(shard)` if this slot is an attestation slot. Otherwise, returns `None`.
fn attestation_shard(&self, slot: Slot) -> Result<Option<u64>, DutiesReaderError>;
/// Returns the index of the validator, if it is known.
fn validator_index(&self) -> Option<u64>;
}
/// Signs message using an internally-maintained private key.
pub trait Signer {
fn sign_attestation_message(&self, message: &[u8]) -> Option<Signature>;
}

View File

@ -0,0 +1,10 @@
[package]
name = "block_producer"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
slot_clock = { path = "../../eth2/utils/slot_clock" }
ssz = { path = "../../eth2/utils/ssz" }
types = { path = "../../eth2/types" }

View File

@ -0,0 +1,287 @@
pub mod test_utils;
mod traits;
use slot_clock::SlotClock;
use ssz::ssz_encode;
use std::sync::Arc;
use types::{BeaconBlock, ChainSpec, Slot};
pub use self::traits::{
BeaconNode, BeaconNodeError, DutiesReader, DutiesReaderError, PublishOutcome, Signer,
};
#[derive(Debug, PartialEq)]
pub enum PollOutcome {
/// A new block was produced.
BlockProduced(Slot),
/// A block was not produced as it would have been slashable.
SlashableBlockNotProduced(Slot),
/// The validator duties did not require a block to be produced.
BlockProductionNotRequired(Slot),
/// The duties for the present epoch were not found.
ProducerDutiesUnknown(Slot),
/// The slot has already been processed, execution was skipped.
SlotAlreadyProcessed(Slot),
/// The Beacon Node was unable to produce a block at that slot.
BeaconNodeUnableToProduceBlock(Slot),
/// The signer failed to sign the message.
SignerRejection(Slot),
/// The public key for this validator is not an active validator.
ValidatorIsUnknown(Slot),
}
#[derive(Debug, PartialEq)]
pub enum Error {
SlotClockError,
SlotUnknowable,
EpochMapPoisoned,
SlotClockPoisoned,
EpochLengthIsZero,
BeaconNodeError(BeaconNodeError),
}
/// A polling state machine which performs block production duties, based upon some epoch duties
/// (`EpochDutiesMap`) and a concept of time (`SlotClock`).
///
/// Ensures that messages are not slashable.
///
/// Relies upon an external service to keep the `EpochDutiesMap` updated.
pub struct BlockProducer<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer> {
pub last_processed_slot: Option<Slot>,
spec: Arc<ChainSpec>,
epoch_map: Arc<V>,
slot_clock: Arc<T>,
beacon_node: Arc<U>,
signer: Arc<W>,
}
impl<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer> BlockProducer<T, U, V, W> {
/// Returns a new instance where `last_processed_slot == 0`.
pub fn new(
spec: Arc<ChainSpec>,
epoch_map: Arc<V>,
slot_clock: Arc<T>,
beacon_node: Arc<U>,
signer: Arc<W>,
) -> Self {
Self {
last_processed_slot: None,
spec,
epoch_map,
slot_clock,
beacon_node,
signer,
}
}
}
impl<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer> BlockProducer<T, U, V, W> {
/// "Poll" to see if the validator is required to take any action.
///
/// The slot clock will be read and any new actions undertaken.
pub fn poll(&mut self) -> Result<PollOutcome, Error> {
let slot = self
.slot_clock
.present_slot()
.map_err(|_| Error::SlotClockError)?
.ok_or(Error::SlotUnknowable)?;
// If this is a new slot.
if !self.is_processed_slot(slot) {
let is_block_production_slot = match self.epoch_map.is_block_production_slot(slot) {
Ok(result) => result,
Err(DutiesReaderError::UnknownEpoch) => {
return Ok(PollOutcome::ProducerDutiesUnknown(slot));
}
Err(DutiesReaderError::UnknownValidator) => {
return Ok(PollOutcome::ValidatorIsUnknown(slot));
}
Err(DutiesReaderError::EpochLengthIsZero) => return Err(Error::EpochLengthIsZero),
Err(DutiesReaderError::Poisoned) => return Err(Error::EpochMapPoisoned),
};
if is_block_production_slot {
self.last_processed_slot = Some(slot);
self.produce_block(slot)
} else {
Ok(PollOutcome::BlockProductionNotRequired(slot))
}
} else {
Ok(PollOutcome::SlotAlreadyProcessed(slot))
}
}
fn is_processed_slot(&self, slot: Slot) -> bool {
match self.last_processed_slot {
Some(processed_slot) if processed_slot >= slot => true,
_ => false,
}
}
/// Produce a block at some slot.
///
/// Assumes that a block is required at this slot (does not check the duties).
///
/// Ensures the message is not slashable.
///
/// !!! UNSAFE !!!
///
/// The slash-protection code is not yet implemented. There is zero protection against
/// slashing.
fn produce_block(&mut self, slot: Slot) -> Result<PollOutcome, Error> {
let randao_reveal = {
// TODO: add domain, etc to this message. Also ensure result matches `into_to_bytes32`.
let message = ssz_encode(&slot.epoch(self.spec.epoch_length));
match self.signer.sign_randao_reveal(&message) {
None => return Ok(PollOutcome::SignerRejection(slot)),
Some(signature) => signature,
}
};
if let Some(block) = self
.beacon_node
.produce_beacon_block(slot, &randao_reveal)?
{
if self.safe_to_produce(&block) {
if let Some(block) = self.sign_block(block) {
self.beacon_node.publish_beacon_block(block)?;
Ok(PollOutcome::BlockProduced(slot))
} else {
Ok(PollOutcome::SignerRejection(slot))
}
} else {
Ok(PollOutcome::SlashableBlockNotProduced(slot))
}
} else {
Ok(PollOutcome::BeaconNodeUnableToProduceBlock(slot))
}
}
/// Consumes a block, returning that block signed by the validators private key.
///
/// Important: this function will not check to ensure the block is not slashable. This must be
/// done upstream.
fn sign_block(&mut self, mut block: BeaconBlock) -> Option<BeaconBlock> {
self.store_produce(&block);
match self
.signer
.sign_block_proposal(&block.proposal_root(&self.spec)[..])
{
None => None,
Some(signature) => {
block.signature = signature;
Some(block)
}
}
}
/// Returns `true` if signing a block is safe (non-slashable).
///
/// !!! UNSAFE !!!
///
/// Important: this function is presently stubbed-out. It provides ZERO SAFETY.
fn safe_to_produce(&self, _block: &BeaconBlock) -> bool {
// TODO: ensure the producer doesn't produce slashable blocks.
// https://github.com/sigp/lighthouse/issues/160
true
}
/// Record that a block was produced so that slashable votes may not be made in the future.
///
/// !!! UNSAFE !!!
///
/// Important: this function is presently stubbed-out. It provides ZERO SAFETY.
fn store_produce(&mut self, _block: &BeaconBlock) {
// TODO: record this block production to prevent future slashings.
// https://github.com/sigp/lighthouse/issues/160
}
}
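// NOTE: a hypothetical usage sketch, not part of this commit, showing how a
// service loop might drive the polling state machine above. The tick interval
// and the logging are illustrative only.
#[allow(dead_code)]
fn example_poll_loop<T: SlotClock, U: BeaconNode, V: DutiesReader, W: Signer>(
producer: &mut BlockProducer<T, U, V, W>,
) {
loop {
match producer.poll() {
// A block was signed and published for this slot.
Ok(PollOutcome::BlockProduced(slot)) => println!("block produced at slot {:?}", slot),
// All other outcomes are benign; wait and poll again.
Ok(_) => (),
Err(e) => {
eprintln!("block producer error: {:?}", e);
break;
}
}
std::thread::sleep(std::time::Duration::from_millis(100));
}
}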
impl From<BeaconNodeError> for Error {
fn from(e: BeaconNodeError) -> Error {
Error::BeaconNodeError(e)
}
}
#[cfg(test)]
mod tests {
use super::test_utils::{EpochMap, LocalSigner, SimulatedBeaconNode};
use super::*;
use slot_clock::TestingSlotClock;
use types::{
test_utils::{SeedableRng, TestRandom, XorShiftRng},
Keypair,
};
// TODO: implement more thorough testing.
// https://github.com/sigp/lighthouse/issues/160
//
// These tests should serve as a good example for future tests.
#[test]
pub fn polling() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let spec = Arc::new(ChainSpec::foundation());
let slot_clock = Arc::new(TestingSlotClock::new(0));
let beacon_node = Arc::new(SimulatedBeaconNode::default());
let signer = Arc::new(LocalSigner::new(Keypair::random()));
let mut epoch_map = EpochMap::new(spec.epoch_length);
let produce_slot = Slot::new(100);
let produce_epoch = produce_slot.epoch(spec.epoch_length);
epoch_map.map.insert(produce_epoch, produce_slot);
let epoch_map = Arc::new(epoch_map);
let mut block_producer = BlockProducer::new(
spec.clone(),
epoch_map.clone(),
slot_clock.clone(),
beacon_node.clone(),
signer.clone(),
);
// Configure responses from the BeaconNode.
beacon_node.set_next_produce_result(Ok(Some(BeaconBlock::random_for_test(&mut rng))));
beacon_node.set_next_publish_result(Ok(PublishOutcome::ValidBlock));
// One slot before production slot...
slot_clock.set_slot(produce_slot.as_u64() - 1);
assert_eq!(
block_producer.poll(),
Ok(PollOutcome::BlockProductionNotRequired(produce_slot - 1))
);
// On the produce slot...
slot_clock.set_slot(produce_slot.as_u64());
assert_eq!(
block_producer.poll(),
Ok(PollOutcome::BlockProduced(produce_slot))
);
// Trying the same produce slot again...
slot_clock.set_slot(produce_slot.as_u64());
assert_eq!(
block_producer.poll(),
Ok(PollOutcome::SlotAlreadyProcessed(produce_slot))
);
// One slot after the produce slot...
slot_clock.set_slot(produce_slot.as_u64() + 1);
assert_eq!(
block_producer.poll(),
Ok(PollOutcome::BlockProductionNotRequired(produce_slot + 1))
);
// In an epoch without known duties...
let slot = (produce_epoch.as_u64() + 1) * spec.epoch_length;
slot_clock.set_slot(slot);
assert_eq!(
block_producer.poll(),
Ok(PollOutcome::ProducerDutiesUnknown(Slot::new(slot)))
);
}
}

View File

@ -0,0 +1,28 @@
use crate::{DutiesReader, DutiesReaderError};
use std::collections::HashMap;
use types::{Epoch, Slot};
pub struct EpochMap {
epoch_length: u64,
pub map: HashMap<Epoch, Slot>,
}
impl EpochMap {
pub fn new(epoch_length: u64) -> Self {
Self {
epoch_length,
map: HashMap::new(),
}
}
}
impl DutiesReader for EpochMap {
fn is_block_production_slot(&self, slot: Slot) -> Result<bool, DutiesReaderError> {
let epoch = slot.epoch(self.epoch_length);
match self.map.get(&epoch) {
Some(s) if *s == slot => Ok(true),
Some(s) if *s != slot => Ok(false),
_ => Err(DutiesReaderError::UnknownEpoch),
}
}
}

View File

@ -0,0 +1,35 @@
use crate::traits::Signer;
use std::sync::RwLock;
use types::{Keypair, Signature};
/// A test-only struct used to simulate a signer.
pub struct LocalSigner {
keypair: Keypair,
should_sign: RwLock<bool>,
}
impl LocalSigner {
/// Produce a new LocalSigner with signing enabled by default.
pub fn new(keypair: Keypair) -> Self {
Self {
keypair,
should_sign: RwLock::new(true),
}
}
/// If set to `false`, the signer will refuse to sign any messages. Otherwise, all messages
/// will be signed.
pub fn enable_signing(&self, enabled: bool) {
*self.should_sign.write().unwrap() = enabled;
}
}
impl Signer for LocalSigner {
fn sign_block_proposal(&self, message: &[u8]) -> Option<Signature> {
// Honour the `should_sign` flag documented on `enable_signing`.
match *self.should_sign.read().unwrap() {
false => None,
true => Some(Signature::new(message, &self.keypair.sk)),
}
}
fn sign_randao_reveal(&self, message: &[u8]) -> Option<Signature> {
match *self.should_sign.read().unwrap() {
false => None,
true => Some(Signature::new(message, &self.keypair.sk)),
}
}
}

View File

@ -0,0 +1,7 @@
mod epoch_map;
mod local_signer;
mod simulated_beacon_node;
pub use self::epoch_map::EpochMap;
pub use self::local_signer::LocalSigner;
pub use self::simulated_beacon_node::SimulatedBeaconNode;

View File

@ -0,0 +1,48 @@
use crate::traits::{BeaconNode, BeaconNodeError, PublishOutcome};
use std::sync::RwLock;
use types::{BeaconBlock, Signature, Slot};
type ProduceResult = Result<Option<BeaconBlock>, BeaconNodeError>;
type PublishResult = Result<PublishOutcome, BeaconNodeError>;
/// A test-only struct used to simulate a Beacon Node.
#[derive(Default)]
pub struct SimulatedBeaconNode {
pub produce_input: RwLock<Option<(Slot, Signature)>>,
pub produce_result: RwLock<Option<ProduceResult>>,
pub publish_input: RwLock<Option<BeaconBlock>>,
pub publish_result: RwLock<Option<PublishResult>>,
}
impl SimulatedBeaconNode {
/// Set the result to be returned when `produce_beacon_block` is called.
pub fn set_next_produce_result(&self, result: ProduceResult) {
*self.produce_result.write().unwrap() = Some(result);
}
/// Set the result to be returned when `publish_beacon_block` is called.
pub fn set_next_publish_result(&self, result: PublishResult) {
*self.publish_result.write().unwrap() = Some(result);
}
}
impl BeaconNode for SimulatedBeaconNode {
/// Returns the value specified by the `set_next_produce_result`.
fn produce_beacon_block(&self, slot: Slot, randao_reveal: &Signature) -> ProduceResult {
*self.produce_input.write().unwrap() = Some((slot, randao_reveal.clone()));
match *self.produce_result.read().unwrap() {
Some(ref r) => r.clone(),
None => panic!("SimulatedBeaconNode: produce_result == None"),
}
}
/// Returns the value specified by the `set_next_publish_result`.
fn publish_beacon_block(&self, block: BeaconBlock) -> PublishResult {
*self.publish_input.write().unwrap() = Some(block);
match *self.publish_result.read().unwrap() {
Some(ref r) => r.clone(),
None => panic!("SimulatedBeaconNode: publish_result == None"),
}
}
}

View File

@ -0,0 +1,49 @@
use types::{BeaconBlock, Signature, Slot};
#[derive(Debug, PartialEq, Clone)]
pub enum BeaconNodeError {
RemoteFailure(String),
DecodeFailure,
}
#[derive(Debug, PartialEq, Clone)]
pub enum PublishOutcome {
ValidBlock,
InvalidBlock(String),
}
/// Defines the methods required to produce and publish blocks on a Beacon Node.
pub trait BeaconNode: Send + Sync {
/// Request that the node produces a block.
///
/// Returns `Ok(None)` if the Beacon Node is unable to produce at the given slot.
fn produce_beacon_block(
&self,
slot: Slot,
randao_reveal: &Signature,
) -> Result<Option<BeaconBlock>, BeaconNodeError>;
/// Request that the node publishes a block.
///
/// Returns the `PublishOutcome` if the publish was successful.
fn publish_beacon_block(&self, block: BeaconBlock) -> Result<PublishOutcome, BeaconNodeError>;
}
#[derive(Debug, PartialEq, Clone)]
pub enum DutiesReaderError {
UnknownValidator,
UnknownEpoch,
EpochLengthIsZero,
Poisoned,
}
/// Informs a validator of their duties (e.g., block production).
pub trait DutiesReader: Send + Sync {
fn is_block_production_slot(&self, slot: Slot) -> Result<bool, DutiesReaderError>;
}
/// Signs message using an internally-maintained private key.
pub trait Signer {
fn sign_block_proposal(&self, message: &[u8]) -> Option<Signature>;
fn sign_randao_reveal(&self, message: &[u8]) -> Option<Signature>;
}

View File

@ -0,0 +1,12 @@
[package]
name = "fork_choice"
version = "0.1.0"
authors = ["Age Manning <Age@AgeManning.com>"]
edition = "2018"
[dependencies]
db = { path = "../../beacon_node/db" }
ssz = { path = "../utils/ssz" }
types = { path = "../types" }
fast-math = "0.1.1"
byteorder = "1.3.1"

118
eth2/fork_choice/src/lib.rs Normal file
View File

@ -0,0 +1,118 @@
// Copyright 2019 Sigma Prime Pty Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! This crate stores the various implementations of fork-choice rules that can be used for the
//! beacon blockchain.
//!
//! There are four implementations. One is the naive longest chain rule (primarily for testing
//! purposes). The other three are proposed implementations of the LMD-GHOST fork-choice rule with various forms of optimisation.
//!
//! The current implementations are:
//! - [`longest-chain`]: Simplistic longest-chain fork choice - primarily for testing, **not for
//! production**.
//! - [`slow_lmd_ghost`]: This is a simple and very inefficient implementation given in the ethereum 2.0
//! specifications (https://github.com/ethereum/eth2.0-specs/blob/v0.1/specs/core/0_beacon-chain.md#get_block_root).
//! - [`optimised_lmd_ghost`]: This is an optimised version of the naive implementation as proposed
//! by Vitalik. The reference implementation can be found at: https://github.com/ethereum/research/blob/master/ghost/ghost.py
//! - [`protolambda_lmd_ghost`]: Another optimised version of LMD-GHOST designed by @protolambda.
//! The go implementation can be found here: https://github.com/protolambda/lmd-ghost.
//!
//! [`slow_lmd_ghost`]: struct.SlowLmdGhost.html
//! [`optimised_lmd_ghost`]: struct.OptimisedLmdGhost.html
//! [`protolambda_lmd_ghost`]: struct.ProtolambdaLmdGhost.html
extern crate db;
extern crate ssz;
extern crate types;
pub mod longest_chain;
pub mod optimised_lmd_ghost;
pub mod protolambda_lmd_ghost;
pub mod slow_lmd_ghost;
use db::stores::BeaconBlockAtSlotError;
use db::DBError;
use types::{BeaconBlock, Hash256};
/// Defines the interface for fork choices. Each fork choice will define its own data structures
/// which can be built in block processing through the `add_block` and `add_attestation` functions.
/// The main fork-choice algorithm is specified in `find_head`.
pub trait ForkChoice: Send + Sync {
/// Called when a block has been added. Allows generic block-level data structures to be
/// built for a given fork-choice.
fn add_block(
&mut self,
block: &BeaconBlock,
block_hash: &Hash256,
) -> Result<(), ForkChoiceError>;
/// Called when an attestation has been added. Allows generic attestation-level data structures to be built for a given fork choice.
// This can be generalised to a full attestation if required later.
fn add_attestation(
&mut self,
validator_index: u64,
target_block_hash: &Hash256,
) -> Result<(), ForkChoiceError>;
/// The fork-choice algorithm to find the current canonical head of the chain.
// TODO: Remove the justified_start_block parameter and make it internal
fn find_head(&mut self, justified_start_block: &Hash256) -> Result<Hash256, ForkChoiceError>;
}
/// Possible fork choice errors that can occur.
#[derive(Debug, PartialEq)]
pub enum ForkChoiceError {
MissingBeaconBlock(Hash256),
MissingBeaconState(Hash256),
IncorrectBeaconState(Hash256),
CannotFindBestChild,
ChildrenNotFound,
StorageError(String),
}
impl From<DBError> for ForkChoiceError {
fn from(e: DBError) -> ForkChoiceError {
ForkChoiceError::StorageError(e.message)
}
}
impl From<BeaconBlockAtSlotError> for ForkChoiceError {
fn from(e: BeaconBlockAtSlotError) -> ForkChoiceError {
match e {
BeaconBlockAtSlotError::UnknownBeaconBlock(hash) => {
ForkChoiceError::MissingBeaconBlock(hash)
}
BeaconBlockAtSlotError::InvalidBeaconBlock(hash) => {
ForkChoiceError::MissingBeaconBlock(hash)
}
BeaconBlockAtSlotError::DBError(string) => ForkChoiceError::StorageError(string),
}
}
}
/// Fork choice options that are currently implemented.
pub enum ForkChoiceAlgorithms {
/// Chooses the longest chain to be the head. Not for production.
LongestChain,
/// A simple and highly inefficient implementation of LMD ghost.
SlowLMDGhost,
/// An optimised version of LMD-GHOST by Vitalik.
OptimisedLMDGhost,
/// An optimised version of LMD-GHOST by Protolambda.
ProtoLMDGhost,
}
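// NOTE: a hypothetical helper, not part of this commit, sketching how a client
// might map a `ForkChoiceAlgorithms` variant onto a concrete `ForkChoice`
// implementation. Constructor signatures differ between implementations here
// (e.g. `SlowLMDGhost::new` takes stores by value), so only the optimised
// variant is shown.
#[allow(dead_code)]
fn fork_choice_from_algorithm<T: db::ClientDB + Sized + 'static>(
algorithm: ForkChoiceAlgorithms,
block_store: std::sync::Arc<db::stores::BeaconBlockStore<T>>,
state_store: std::sync::Arc<db::stores::BeaconStateStore<T>>,
) -> Box<dyn ForkChoice> {
match algorithm {
ForkChoiceAlgorithms::OptimisedLMDGhost => Box::new(
optimised_lmd_ghost::OptimisedLMDGhost::new(block_store, state_store),
),
// The remaining variants would be matched similarly once their
// constructors share a common shape.
_ => unimplemented!(),
}
}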

View File

@ -0,0 +1,93 @@
use db::stores::BeaconBlockStore;
use db::{ClientDB, DBError};
use ssz::{Decodable, DecodeError};
use std::sync::Arc;
use types::{BeaconBlock, Hash256, Slot};
pub enum ForkChoiceError {
BadSszInDatabase,
MissingBlock,
DBError(String),
}
pub fn longest_chain<T>(
head_block_hashes: &[Hash256],
block_store: &Arc<BeaconBlockStore<T>>,
) -> Result<Option<usize>, ForkChoiceError>
where
T: ClientDB + Sized,
{
let mut head_blocks: Vec<(usize, BeaconBlock)> = vec![];
/*
* Load all the head_block hashes from the DB as SszBeaconBlocks.
*/
for (index, block_hash) in head_block_hashes.iter().enumerate() {
let ssz = block_store
.get(&block_hash)?
.ok_or(ForkChoiceError::MissingBlock)?;
let (block, _) = BeaconBlock::ssz_decode(&ssz, 0)?;
head_blocks.push((index, block));
}
/*
* Loop through all the head blocks and find the highest slot.
*/
let mut highest_slot: Option<Slot> = None;
for (_, block) in &head_blocks {
let slot = block.slot;
highest_slot = match highest_slot {
None => Some(slot),
Some(winning_slot) => {
if slot > winning_slot {
Some(slot)
} else {
Some(winning_slot)
}
}
};
}
/*
* Loop through all the highest blocks and sort them by highest hash.
*
* Ultimately, the index of the head_block hash with the highest slot and highest block
* hash will be the winner.
*/
match highest_slot {
None => Ok(None),
Some(highest_slot) => {
let mut highest_blocks = vec![];
for (index, block) in head_blocks {
if block.slot == highest_slot {
highest_blocks.push((index, block))
}
}
highest_blocks.sort_by(|a, b| head_block_hashes[a.0].cmp(&head_block_hashes[b.0]));
let (index, _) = highest_blocks[0];
Ok(Some(index))
}
}
}
impl From<DecodeError> for ForkChoiceError {
fn from(_: DecodeError) -> Self {
ForkChoiceError::BadSszInDatabase
}
}
impl From<DBError> for ForkChoiceError {
fn from(e: DBError) -> Self {
ForkChoiceError::DBError(e.message)
}
}
#[cfg(test)]
mod tests {
#[test]
fn test_naive_fork_choice() {
assert_eq!(2 + 2, 4);
}
}

View File

@ -0,0 +1,443 @@
// Copyright 2019 Sigma Prime Pty Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
extern crate byteorder;
extern crate fast_math;
use crate::{ForkChoice, ForkChoiceError};
use byteorder::{BigEndian, ByteOrder};
use db::{
stores::{BeaconBlockStore, BeaconStateStore},
ClientDB,
};
use fast_math::log2_raw;
use std::collections::HashMap;
use std::sync::Arc;
use types::{
readers::BeaconBlockReader,
slot_epoch_height::{Height, Slot},
validator_registry::get_active_validator_indices,
BeaconBlock, Hash256,
};
//TODO: Pruning - Children
//TODO: Handle Syncing
//TODO: Sort out global constants
const GENESIS_SLOT: u64 = 0;
const FORK_CHOICE_BALANCE_INCREMENT: u64 = 1e9 as u64;
const MAX_DEPOSIT_AMOUNT: u64 = 32e9 as u64;
const EPOCH_LENGTH: u64 = 64;
/// The optimised LMD-GHOST fork choice rule.
/// NOTE: This uses u32 to represent the difference between block heights. Thus it is only
/// applicable for block height differences in the range of a u32.
/// This can potentially be parallelized in some parts.
// We use fast log2; a log2 lookup table is implemented in Vitalik's code and could
// potentially be compared against. `log2_raw` takes 2ns according to its documentation.
#[inline]
fn log2_int(x: u32) -> u32 {
log2_raw(x as f32) as u32
}
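/// Returns the largest power of two less than or equal to `x` (e.g.,
/// `power_of_2_below(33) == 32`), relying on the approximate `log2_raw`.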
fn power_of_2_below(x: u32) -> u32 {
2u32.pow(log2_int(x))
}
/// Stores the necessary data structures to run the optimised lmd ghost algorithm.
pub struct OptimisedLMDGhost<T: ClientDB + Sized> {
/// A cache of known ancestors at given heights for a specific block.
//TODO: Consider FnvHashMap
cache: HashMap<CacheKey<u32>, Hash256>,
/// Log lookup table for blocks to their ancestors.
//TODO: Verify we only want/need a size 16 log lookup
ancestors: Vec<HashMap<Hash256, Hash256>>,
/// Stores the children for any given parent.
children: HashMap<Hash256, Vec<Hash256>>,
/// The latest attestation targets as a map of validator index to block hash.
//TODO: Could this be a fixed size vec
latest_attestation_targets: HashMap<u64, Hash256>,
/// Block storage access.
block_store: Arc<BeaconBlockStore<T>>,
/// State storage access.
state_store: Arc<BeaconStateStore<T>>,
max_known_height: Height,
}
impl<T> OptimisedLMDGhost<T>
where
T: ClientDB + Sized,
{
pub fn new(
block_store: Arc<BeaconBlockStore<T>>,
state_store: Arc<BeaconStateStore<T>>,
) -> Self {
OptimisedLMDGhost {
cache: HashMap::new(),
ancestors: vec![HashMap::new(); 16],
latest_attestation_targets: HashMap::new(),
children: HashMap::new(),
max_known_height: Height::new(0),
block_store,
state_store,
}
}
/// Finds the latest votes weighted by validator balance. Returns a hashmap of block_hash to
/// weighted votes.
pub fn get_latest_votes(
&self,
state_root: &Hash256,
block_slot: Slot,
) -> Result<HashMap<Hash256, u64>, ForkChoiceError> {
// get latest votes
// Note: Votes are weighted by min(balance, MAX_DEPOSIT_AMOUNT) //
// FORK_CHOICE_BALANCE_INCREMENT
// build a hashmap of block_hash to weighted votes
let mut latest_votes: HashMap<Hash256, u64> = HashMap::new();
// gets the current weighted votes
let current_state = self
.state_store
.get_deserialized(&state_root)?
.ok_or_else(|| ForkChoiceError::MissingBeaconState(*state_root))?;
let active_validator_indices = get_active_validator_indices(
&current_state.validator_registry,
block_slot.epoch(EPOCH_LENGTH),
);
for index in active_validator_indices {
let balance =
std::cmp::min(current_state.validator_balances[index], MAX_DEPOSIT_AMOUNT)
/ FORK_CHOICE_BALANCE_INCREMENT;
if balance > 0 {
if let Some(target) = self.latest_attestation_targets.get(&(index as u64)) {
*latest_votes.entry(*target).or_insert_with(|| 0) += balance;
}
}
}
Ok(latest_votes)
}
/// Gets the ancestor at a given height `at_height` of a block specified by `block_hash`.
fn get_ancestor(&mut self, block_hash: Hash256, at_height: Height) -> Option<Hash256> {
// return None if the DB read fails; a block missing from the store panics below.
let block_height = {
let block_slot = self
.block_store
.get_deserialized(&block_hash)
.ok()?
.expect("Should have returned already if None")
.slot;
block_slot.height(Slot::from(GENESIS_SLOT))
};
// verify we haven't exceeded the block height
if at_height >= block_height {
if at_height > block_height {
return None;
} else {
return Some(block_hash);
}
}
// check if the result is stored in our cache
let cache_key = CacheKey::new(&block_hash, at_height.as_u32());
if let Some(ancestor) = self.cache.get(&cache_key) {
return Some(*ancestor);
}
// not in the cache recursively search for ancestors using a log-lookup
if let Some(ancestor) = {
let ancestor_lookup = self.ancestors
[log2_int((block_height - at_height - 1u64).as_u32()) as usize]
.get(&block_hash)
//TODO: Panic if we can't lookup and fork choice fails
.expect("All blocks should be added to the ancestor log lookup table");
self.get_ancestor(*ancestor_lookup, at_height)
} {
// add the result to the cache
self.cache.insert(cache_key, ancestor);
return Some(ancestor);
}
None
}
// looks for an obvious block winner given the latest votes for a specific height
fn get_clear_winner(
&mut self,
latest_votes: &HashMap<Hash256, u64>,
block_height: Height,
) -> Option<Hash256> {
// map of vote counts for every hash at this height
let mut current_votes: HashMap<Hash256, u64> = HashMap::new();
let mut total_vote_count = 0;
// loop through the latest votes and count all votes
// these have already been weighted by balance
for (hash, votes) in latest_votes.iter() {
if let Some(ancestor) = self.get_ancestor(*hash, block_height) {
let current_vote_value = current_votes.get(&ancestor).unwrap_or_else(|| &0);
current_votes.insert(ancestor, current_vote_value + *votes);
total_vote_count += votes;
}
}
// Check if there is a clear block winner at this height. If so return it.
for (hash, votes) in current_votes.iter() {
if *votes >= total_vote_count / 2 {
// we have a clear winner, return it
return Some(*hash);
}
}
// didn't find a clear winner
None
}
// Finds the best child, splitting children into a binary tree, based on their hashes
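// For example, with candidates 0b100... and 0b110..., the walk fixes the most
// significant bit to whichever side holds more votes, then descends the implied
// binary tree until a single candidate remains in the chosen subtree.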
fn choose_best_child(&self, votes: &HashMap<Hash256, u64>) -> Option<Hash256> {
let mut bitmask: u64 = 0;
// NOTE: only the first four bytes of each candidate hash are compared, so the
// walk covers 32 bits; `u64` arithmetic ensures the `bit + 1` shift below
// cannot overflow at the top bit.
for bit in (0..=31).rev() {
let mut zero_votes = 0;
let mut one_votes = 0;
let mut single_candidate = None;
for (candidate, votes) in votes.iter() {
let candidate_uint = u64::from(BigEndian::read_u32(candidate));
if candidate_uint >> (bit + 1) != bitmask {
continue;
}
if (candidate_uint >> bit) % 2 == 0 {
zero_votes += votes;
} else {
one_votes += votes;
}
if single_candidate.is_none() {
single_candidate = Some(candidate);
} else {
single_candidate = None;
}
}
bitmask = (bitmask * 2) + {
if one_votes > zero_votes {
1
} else {
0
}
};
if let Some(candidate) = single_candidate {
return Some(*candidate);
}
//TODO Remove this during benchmark after testing
assert!(bit >= 1);
}
// should never reach here
None
}
}
impl<T: ClientDB + Sized> ForkChoice for OptimisedLMDGhost<T> {
fn add_block(
&mut self,
block: &BeaconBlock,
block_hash: &Hash256,
) -> Result<(), ForkChoiceError> {
// get the height of the parent
let parent_height = self
.block_store
.get_deserialized(&block.parent_root)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(block.parent_root))?
.slot()
.height(Slot::from(GENESIS_SLOT));
let parent_hash = &block.parent_root;
// add the new block to the children of parent
(*self
.children
.entry(block.parent_root)
.or_insert_with(|| vec![]))
.push(block_hash.clone());
// build the ancestor data structure
for index in 0..16 {
if parent_height % (1 << index) == 0 {
self.ancestors[index].insert(*block_hash, *parent_hash);
} else {
// TODO: This is unsafe. Will panic if parent_hash doesn't exist. Using it for debugging
let parent_ancestor = self.ancestors[index][parent_hash];
self.ancestors[index].insert(*block_hash, parent_ancestor);
}
}
// update the max height
self.max_known_height = std::cmp::max(self.max_known_height, parent_height + 1);
Ok(())
}
fn add_attestation(
&mut self,
validator_index: u64,
target_block_root: &Hash256,
) -> Result<(), ForkChoiceError> {
// simply add the attestation to the latest_attestation_target if the block_height is
// larger
let attestation_target = self
.latest_attestation_targets
.entry(validator_index)
.or_insert_with(|| *target_block_root);
// if we already have a value
if attestation_target != target_block_root {
// get the height of the target block
let block_height = self
.block_store
.get_deserialized(&target_block_root)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*target_block_root))?
.slot()
.height(Slot::from(GENESIS_SLOT));
// get the height of the past target block
let past_block_height = self
.block_store
.get_deserialized(&attestation_target)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*attestation_target))?
.slot()
.height(Slot::from(GENESIS_SLOT));
// update the attestation only if the new target is higher
if past_block_height < block_height {
*attestation_target = *target_block_root;
}
}
Ok(())
}
/// Perform lmd_ghost on the current chain to find the head.
fn find_head(&mut self, justified_block_start: &Hash256) -> Result<Hash256, ForkChoiceError> {
let block = self
.block_store
.get_deserialized(&justified_block_start)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*justified_block_start))?;
let block_slot = block.slot();
let block_height = block_slot.height(Slot::from(GENESIS_SLOT));
let state_root = block.state_root();
let mut current_head = *justified_block_start;
let mut latest_votes = self.get_latest_votes(&state_root, block_slot)?;
// remove any votes that don't relate to our current head.
latest_votes.retain(|hash, _| self.get_ancestor(*hash, block_height) == Some(current_head));
// begin searching for the head
loop {
// if there are no children, we are done, return the current_head
let children = match self.children.get(&current_head) {
Some(children) => children.clone(),
None => return Ok(current_head),
};
// logarithmic lookup blocks to see if there are obvious winners, if so,
// progress to the next iteration.
let mut step =
power_of_2_below(self.max_known_height.saturating_sub(block_height).as_u32()) / 2;
while step > 0 {
if let Some(clear_winner) = self.get_clear_winner(
&latest_votes,
block_height - (block_height % u64::from(step)) + u64::from(step),
) {
current_head = clear_winner;
break;
}
step /= 2;
}
if step > 0 {
// A clear winner was found by the log-lookup above; search again from it.
}
// if our skip lookup failed and we only have one child, progress to that child
else if children.len() == 1 {
current_head = children[0];
}
// we need to find the best child path to progress down.
else {
let mut child_votes = HashMap::new();
for (voted_hash, vote) in latest_votes.iter() {
// if the latest votes correspond to a child
if let Some(child) = self.get_ancestor(*voted_hash, block_height + 1) {
// add up the votes for each child
*child_votes.entry(child).or_insert_with(|| 0) += vote;
}
}
// given the votes on the children, find the best child
current_head = self
.choose_best_child(&child_votes)
.ok_or(ForkChoiceError::CannotFindBestChild)?;
}
// No head was found, re-iterate
// update the block height for the next iteration
let block_height = self
.block_store
.get_deserialized(&current_head)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*justified_block_start))?
.slot()
.height(Slot::from(GENESIS_SLOT));
// prune the latest votes for votes that are not part of current chosen chain
// more specifically, only keep votes that have head as an ancestor
latest_votes
.retain(|hash, _| self.get_ancestor(*hash, block_height) == Some(current_head));
}
}
}
/// Type for storing blocks in a memory cache. The key comprises the block hash plus the height.
#[derive(PartialEq, Eq, Hash)]
pub struct CacheKey<T> {
block_hash: Hash256,
block_height: T,
}
impl<T> CacheKey<T> {
pub fn new(block_hash: &Hash256, block_height: T) -> Self {
CacheKey {
block_hash: *block_hash,
block_height,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
pub fn test_power_of_2_below() {
println!("{:?}", std::f32::MAX);
assert_eq!(power_of_2_below(4), 4);
assert_eq!(power_of_2_below(5), 4);
assert_eq!(power_of_2_below(7), 4);
assert_eq!(power_of_2_below(24), 16);
assert_eq!(power_of_2_below(32), 32);
assert_eq!(power_of_2_below(33), 32);
assert_eq!(power_of_2_below(63), 32);
}
}

View File

@ -0,0 +1,223 @@
// Copyright 2019 Sigma Prime Pty Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
extern crate db;
use crate::{ForkChoice, ForkChoiceError};
use db::{
stores::{BeaconBlockStore, BeaconStateStore},
ClientDB,
};
use std::collections::HashMap;
use std::sync::Arc;
use types::{
readers::{BeaconBlockReader, BeaconStateReader},
slot_epoch_height::Slot,
validator_registry::get_active_validator_indices,
BeaconBlock, Hash256,
};
//TODO: Pruning and syncing
//TODO: Sort out global constants
const GENESIS_SLOT: u64 = 0;
const FORK_CHOICE_BALANCE_INCREMENT: u64 = 1e9 as u64;
const MAX_DEPOSIT_AMOUNT: u64 = 32e9 as u64;
const EPOCH_LENGTH: u64 = 64;
pub struct SlowLMDGhost<T: ClientDB + Sized> {
/// The latest attestation targets as a map of validator index to block hash.
//TODO: Could this be a fixed size vec
latest_attestation_targets: HashMap<u64, Hash256>,
/// Stores the children for any given parent.
children: HashMap<Hash256, Vec<Hash256>>,
/// Block storage access.
block_store: Arc<BeaconBlockStore<T>>,
/// State storage access.
state_store: Arc<BeaconStateStore<T>>,
}
impl<T> SlowLMDGhost<T>
where
T: ClientDB + Sized,
{
pub fn new(block_store: BeaconBlockStore<T>, state_store: BeaconStateStore<T>) -> Self {
SlowLMDGhost {
latest_attestation_targets: HashMap::new(),
children: HashMap::new(),
block_store: Arc::new(block_store),
state_store: Arc::new(state_store),
}
}
/// Finds the latest votes weighted by validator balance. Returns a hashmap of block_hash to
/// weighted votes.
pub fn get_latest_votes(
&self,
state_root: &Hash256,
block_slot: Slot,
) -> Result<HashMap<Hash256, u64>, ForkChoiceError> {
// get latest votes
// Note: Votes are weighted by min(balance, MAX_DEPOSIT_AMOUNT) //
// FORK_CHOICE_BALANCE_INCREMENT
// build a hashmap of block_hash to weighted votes
let mut latest_votes: HashMap<Hash256, u64> = HashMap::new();
// gets the current weighted votes
let current_state = self
.state_store
.get_deserialized(&state_root)?
.ok_or_else(|| ForkChoiceError::MissingBeaconState(*state_root))?;
let active_validator_indices = get_active_validator_indices(
&current_state.validator_registry,
block_slot.epoch(EPOCH_LENGTH),
);
for index in active_validator_indices {
let balance =
std::cmp::min(current_state.validator_balances[index], MAX_DEPOSIT_AMOUNT)
/ FORK_CHOICE_BALANCE_INCREMENT;
if balance > 0 {
if let Some(target) = self.latest_attestation_targets.get(&(index as u64)) {
*latest_votes.entry(*target).or_insert_with(|| 0) += balance;
}
}
}
Ok(latest_votes)
}
/// Get the total number of votes for some given block root.
///
/// The vote count is incremented each time an attestation target votes for a block root.
fn get_vote_count(
&self,
latest_votes: &HashMap<Hash256, u64>,
block_root: &Hash256,
) -> Result<u64, ForkChoiceError> {
let mut count = 0;
let block_slot = self
.block_store
.get_deserialized(&block_root)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*block_root))?
.slot();
for (target_hash, votes) in latest_votes.iter() {
let (root_at_slot, _) = self
.block_store
.block_at_slot(&block_root, block_slot)?
.ok_or(ForkChoiceError::MissingBeaconBlock(*block_root))?;
if root_at_slot == *target_hash {
count += votes;
}
}
Ok(count)
}
}
impl<T: ClientDB + Sized> ForkChoice for SlowLMDGhost<T> {
/// Process when a block is added
fn add_block(
&mut self,
block: &BeaconBlock,
block_hash: &Hash256,
) -> Result<(), ForkChoiceError> {
// build the children hashmap
// add the new block to the children of parent
(*self
.children
.entry(block.parent_root)
.or_insert_with(|| vec![]))
.push(block_hash.clone());
// complete
Ok(())
}
fn add_attestation(
&mut self,
validator_index: u64,
target_block_root: &Hash256,
) -> Result<(), ForkChoiceError> {
// simply add the attestation to the latest_attestation_target if the block_height is
// larger
let attestation_target = self
.latest_attestation_targets
.entry(validator_index)
.or_insert_with(|| *target_block_root);
// if we already have a value
if attestation_target != target_block_root {
// get the height of the target block
let block_height = self
.block_store
.get_deserialized(&target_block_root)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*target_block_root))?
.slot()
.height(Slot::from(GENESIS_SLOT));
// get the height of the past target block
let past_block_height = self
.block_store
.get_deserialized(&attestation_target)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*attestation_target))?
.slot()
.height(Slot::from(GENESIS_SLOT));
// update the attestation only if the new target is higher
if past_block_height < block_height {
*attestation_target = *target_block_root;
}
}
Ok(())
}
/// A very inefficient implementation of LMD ghost.
fn find_head(&mut self, justified_block_start: &Hash256) -> Result<Hash256, ForkChoiceError> {
let start = self
.block_store
.get_deserialized(&justified_block_start)?
.ok_or_else(|| ForkChoiceError::MissingBeaconBlock(*justified_block_start))?;
let start_state_root = start.state_root();
let latest_votes = self.get_latest_votes(&start_state_root, start.slot())?;
let mut head_hash = Hash256::zero();
loop {
let mut head_vote_count = 0;
let children = match self.children.get(&head_hash) {
Some(children) => children,
// we have found the head, exit
None => break,
};
for child_hash in children {
let vote_count = self.get_vote_count(&latest_votes, &child_hash)?;
if vote_count > head_vote_count {
head_hash = *child_hash;
head_vote_count = vote_count;
}
}
}
Ok(head_hash)
}
}

View File

@ -0,0 +1,13 @@
[package]
name = "state_processing"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
hashing = { path = "../utils/hashing" }
integer-sqrt = "0.1"
log = "0.4"
ssz = { path = "../utils/ssz" }
types = { path = "../types" }
rayon = "1.0"

View File

@ -0,0 +1,403 @@
use crate::SlotProcessingError;
use hashing::hash;
use log::debug;
use ssz::{ssz_encode, TreeHash};
use types::{
beacon_state::{AttestationValidationError, CommitteesError},
AggregatePublicKey, Attestation, BeaconBlock, BeaconState, ChainSpec, Crosslink, Epoch, Exit,
Fork, Hash256, PendingAttestation, PublicKey, Signature,
};
// TODO: define elsewhere.
const DOMAIN_PROPOSAL: u64 = 2;
const DOMAIN_EXIT: u64 = 3;
const DOMAIN_RANDAO: u64 = 4;
const PHASE_0_CUSTODY_BIT: bool = false;
const DOMAIN_ATTESTATION: u64 = 1;
#[derive(Debug, PartialEq)]
pub enum Error {
DBError(String),
StateAlreadyTransitioned,
PresentSlotIsNone,
UnableToDecodeBlock,
MissingParentState(Hash256),
InvalidParentState(Hash256),
MissingBeaconBlock(Hash256),
InvalidBeaconBlock(Hash256),
MissingParentBlock(Hash256),
NoBlockProducer,
StateSlotMismatch,
BadBlockSignature,
BadRandaoSignature,
MaxProposerSlashingsExceeded,
BadProposerSlashing,
MaxAttestationsExceeded,
InvalidAttestation(AttestationValidationError),
NoBlockRoot,
MaxDepositsExceeded,
MaxExitsExceeded,
BadExit,
BadCustodyReseeds,
BadCustodyChallenges,
BadCustodyResponses,
CommitteesError(CommitteesError),
SlotProcessingError(SlotProcessingError),
}
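// Returns early from the enclosing function with `$result` whenever
// `$condition` is false; a `Result`-returning counterpart to `assert!`.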
macro_rules! ensure {
($condition: expr, $result: expr) => {
if !$condition {
return Err($result);
}
};
}
pub trait BlockProcessable {
fn per_block_processing(&mut self, block: &BeaconBlock, spec: &ChainSpec) -> Result<(), Error>;
fn per_block_processing_without_verifying_block_signature(
&mut self,
block: &BeaconBlock,
spec: &ChainSpec,
) -> Result<(), Error>;
}
impl BlockProcessable for BeaconState {
fn per_block_processing(&mut self, block: &BeaconBlock, spec: &ChainSpec) -> Result<(), Error> {
per_block_processing_signature_optional(self, block, true, spec)
}
fn per_block_processing_without_verifying_block_signature(
&mut self,
block: &BeaconBlock,
spec: &ChainSpec,
) -> Result<(), Error> {
per_block_processing_signature_optional(self, block, false, spec)
}
}
fn per_block_processing_signature_optional(
state: &mut BeaconState,
block: &BeaconBlock,
verify_block_signature: bool,
spec: &ChainSpec,
) -> Result<(), Error> {
ensure!(block.slot == state.slot, Error::StateSlotMismatch);
/*
* Proposer Signature
*/
let block_proposer_index = state
.get_beacon_proposer_index(block.slot, spec)
.map_err(|_| Error::NoBlockProducer)?;
let block_proposer = &state.validator_registry[block_proposer_index];
if verify_block_signature {
ensure!(
bls_verify(
&block_proposer.pubkey,
&block.proposal_root(spec)[..],
&block.signature,
get_domain(&state.fork, state.current_epoch(spec), DOMAIN_PROPOSAL)
),
Error::BadBlockSignature
);
}
/*
* RANDAO
*/
ensure!(
bls_verify(
&block_proposer.pubkey,
&ssz_encode(&state.current_epoch(spec)),
&block.randao_reveal,
get_domain(&state.fork, state.current_epoch(spec), DOMAIN_RANDAO)
),
Error::BadRandaoSignature
);
// TODO: check this is correct.
let new_mix = {
let mut mix = state.latest_randao_mixes
[state.slot.as_usize() % spec.latest_randao_mixes_length]
.to_vec();
mix.append(&mut ssz_encode(&block.randao_reveal));
Hash256::from(&hash(&mix)[..])
};
state.latest_randao_mixes[state.slot.as_usize() % spec.latest_randao_mixes_length] = new_mix;
/*
* Eth1 data
*/
// TODO: Eth1 data processing.
/*
* Proposer slashings
*/
ensure!(
block.body.proposer_slashings.len() as u64 <= spec.max_proposer_slashings,
Error::MaxProposerSlashingsExceeded
);
for proposer_slashing in &block.body.proposer_slashings {
let proposer = state
.validator_registry
.get(proposer_slashing.proposer_index as usize)
.ok_or(Error::BadProposerSlashing)?;
ensure!(
proposer_slashing.proposal_data_1.slot == proposer_slashing.proposal_data_2.slot,
Error::BadProposerSlashing
);
ensure!(
proposer_slashing.proposal_data_1.shard == proposer_slashing.proposal_data_2.shard,
Error::BadProposerSlashing
);
ensure!(
proposer_slashing.proposal_data_1.block_root
!= proposer_slashing.proposal_data_2.block_root,
Error::BadProposerSlashing
);
ensure!(
proposer.penalized_epoch > state.current_epoch(spec),
Error::BadProposerSlashing
);
ensure!(
bls_verify(
&proposer.pubkey,
&proposer_slashing.proposal_data_1.hash_tree_root(),
&proposer_slashing.proposal_signature_1,
get_domain(
&state.fork,
proposer_slashing
.proposal_data_1
.slot
.epoch(spec.epoch_length),
DOMAIN_PROPOSAL
)
),
Error::BadProposerSlashing
);
ensure!(
bls_verify(
&proposer.pubkey,
&proposer_slashing.proposal_data_2.hash_tree_root(),
&proposer_slashing.proposal_signature_2,
get_domain(
&state.fork,
proposer_slashing
.proposal_data_2
.slot
.epoch(spec.epoch_length),
DOMAIN_PROPOSAL
)
),
Error::BadProposerSlashing
);
state.penalize_validator(proposer_slashing.proposer_index as usize, spec)?;
}
/*
* Attestations
*/
ensure!(
block.body.attestations.len() as u64 <= spec.max_attestations,
Error::MaxAttestationsExceeded
);
for attestation in &block.body.attestations {
validate_attestation(&state, attestation, spec)?;
let pending_attestation = PendingAttestation {
data: attestation.data.clone(),
aggregation_bitfield: attestation.aggregation_bitfield.clone(),
custody_bitfield: attestation.custody_bitfield.clone(),
inclusion_slot: state.slot,
};
state.latest_attestations.push(pending_attestation);
}
debug!(
"{} attestations verified & processed.",
block.body.attestations.len()
);
/*
* Deposits
*/
ensure!(
block.body.deposits.len() as u64 <= spec.max_deposits,
Error::MaxDepositsExceeded
);
// TODO: process deposits.
/*
* Exits
*/
ensure!(
block.body.exits.len() as u64 <= spec.max_exits,
Error::MaxExitsExceeded
);
for exit in &block.body.exits {
let validator = state
.validator_registry
.get(exit.validator_index as usize)
.ok_or(Error::BadExit)?;
ensure!(
validator.exit_epoch
> state.get_entry_exit_effect_epoch(state.current_epoch(spec), spec),
Error::BadExit
);
ensure!(state.current_epoch(spec) >= exit.epoch, Error::BadExit);
let exit_message = {
let exit_struct = Exit {
epoch: exit.epoch,
validator_index: exit.validator_index,
signature: spec.empty_signature.clone(),
};
exit_struct.hash_tree_root()
};
ensure!(
bls_verify(
&validator.pubkey,
&exit_message,
&exit.signature,
get_domain(&state.fork, exit.epoch, DOMAIN_EXIT)
),
Error::BadExit
);
state.initiate_validator_exit(exit.validator_index as usize);
}
debug!("State transition complete.");
Ok(())
}
pub fn validate_attestation(
state: &BeaconState,
attestation: &Attestation,
spec: &ChainSpec,
) -> Result<(), AttestationValidationError> {
validate_attestation_signature_optional(state, attestation, spec, true)
}
pub fn validate_attestation_without_signature(
state: &BeaconState,
attestation: &Attestation,
spec: &ChainSpec,
) -> Result<(), AttestationValidationError> {
validate_attestation_signature_optional(state, attestation, spec, false)
}
fn validate_attestation_signature_optional(
state: &BeaconState,
attestation: &Attestation,
spec: &ChainSpec,
verify_signature: bool,
) -> Result<(), AttestationValidationError> {
ensure!(
attestation.data.slot + spec.min_attestation_inclusion_delay <= state.slot,
AttestationValidationError::IncludedTooEarly
);
ensure!(
attestation.data.slot + spec.epoch_length >= state.slot,
AttestationValidationError::IncludedTooLate
);
if attestation.data.slot >= state.current_epoch_start_slot(spec) {
ensure!(
attestation.data.justified_epoch == state.justified_epoch,
AttestationValidationError::WrongJustifiedSlot
);
} else {
ensure!(
attestation.data.justified_epoch == state.previous_justified_epoch,
AttestationValidationError::WrongJustifiedSlot
);
}
ensure!(
attestation.data.justified_block_root
== *state
.get_block_root(
attestation
.data
.justified_epoch
.start_slot(spec.epoch_length),
&spec
)
.ok_or(AttestationValidationError::NoBlockRoot)?,
AttestationValidationError::WrongJustifiedRoot
);
let potential_crosslink = Crosslink {
shard_block_root: attestation.data.shard_block_root,
epoch: attestation.data.slot.epoch(spec.epoch_length),
};
ensure!(
(attestation.data.latest_crosslink
== state.latest_crosslinks[attestation.data.shard as usize])
| (attestation.data.latest_crosslink == potential_crosslink),
AttestationValidationError::BadLatestCrosslinkRoot
);
if verify_signature {
let participants = state.get_attestation_participants(
&attestation.data,
&attestation.aggregation_bitfield,
spec,
)?;
let mut group_public_key = AggregatePublicKey::new();
for participant in participants {
group_public_key.add(
state.validator_registry[participant as usize]
.pubkey
.as_raw(),
)
}
ensure!(
attestation.verify_signature(
&group_public_key,
PHASE_0_CUSTODY_BIT,
get_domain(
&state.fork,
attestation.data.slot.epoch(spec.epoch_length),
DOMAIN_ATTESTATION,
)
),
AttestationValidationError::BadSignature
);
}
ensure!(
attestation.data.shard_block_root == spec.zero_hash,
AttestationValidationError::ShardBlockRootNotZero
);
Ok(())
}
fn get_domain(_fork: &Fork, _epoch: Epoch, _domain_type: u64) -> u64 {
// TODO: stubbed out.
0
}
fn bls_verify(pubkey: &PublicKey, message: &[u8], signature: &Signature, _domain: u64) -> bool {
// TODO: add domain
signature.verify(message, pubkey)
}
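// NOTE: a hypothetical sketch, not part of this commit, of what the stubbed
// `get_domain` above is expected to compute under the spec of this era: the
// fork version active at `epoch`, shifted into the high bits, plus the domain
// type. The `Fork` field names used here are assumptions.
#[allow(dead_code)]
fn get_domain_sketch(fork: &Fork, epoch: Epoch, domain_type: u64) -> u64 {
// Select the fork version in effect at the given epoch (assumed fields).
let fork_version = if epoch < fork.epoch {
fork.previous_version
} else {
fork.current_version
};
(fork_version << 32) + domain_type
}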
impl From<AttestationValidationError> for Error {
fn from(e: AttestationValidationError) -> Error {
Error::InvalidAttestation(e)
}
}
impl From<CommitteesError> for Error {
fn from(e: CommitteesError) -> Error {
Error::CommitteesError(e)
}
}
impl From<SlotProcessingError> for Error {
fn from(e: SlotProcessingError) -> Error {
Error::SlotProcessingError(e)
}
}

View File

@ -0,0 +1,716 @@
use integer_sqrt::IntegerSquareRoot;
use log::{debug, trace};
use rayon::prelude::*;
use ssz::TreeHash;
use std::collections::{HashMap, HashSet};
use std::iter::FromIterator;
use types::{
beacon_state::{AttestationParticipantsError, CommitteesError, InclusionError},
validator_registry::get_active_validator_indices,
BeaconState, ChainSpec, Crosslink, Epoch, Hash256, PendingAttestation,
};
macro_rules! safe_add_assign {
($a: expr, $b: expr) => {
$a = $a.saturating_add($b);
};
}
macro_rules! safe_sub_assign {
($a: expr, $b: expr) => {
$a = $a.saturating_sub($b);
};
}
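// The `safe_*_assign` macros above clamp balance arithmetic at the numeric
// bounds (saturating) rather than panicking on overflow or underflow.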
#[derive(Debug, PartialEq)]
pub enum Error {
UnableToDetermineProducer,
NoBlockRoots,
BaseRewardQuotientIsZero,
NoRandaoSeed,
CommitteesError(CommitteesError),
AttestationParticipantsError(AttestationParticipantsError),
InclusionError(InclusionError),
WinningRootError(WinningRootError),
}
#[derive(Debug, PartialEq)]
pub enum WinningRootError {
NoWinningRoot,
AttestationParticipantsError(AttestationParticipantsError),
}
#[derive(Clone)]
pub struct WinningRoot {
pub shard_block_root: Hash256,
pub attesting_validator_indices: Vec<usize>,
pub total_balance: u64,
pub total_attesting_balance: u64,
}
pub trait EpochProcessable {
fn per_epoch_processing(&mut self, spec: &ChainSpec) -> Result<(), Error>;
}
impl EpochProcessable for BeaconState {
// Cyclomatic complexity is ignored. It would be ideal to split this function apart, however it
// remains monolithic to allow for easier spec updates. Once the spec is more stable we can
// optimise.
#[allow(clippy::cyclomatic_complexity)]
fn per_epoch_processing(&mut self, spec: &ChainSpec) -> Result<(), Error> {
let current_epoch = self.current_epoch(spec);
let previous_epoch = self.previous_epoch(spec);
let next_epoch = self.next_epoch(spec);
debug!(
"Starting per-epoch processing on epoch {}...",
self.current_epoch(spec)
);
/*
* Validators attesting during the current epoch.
*/
let active_validator_indices = get_active_validator_indices(
&self.validator_registry,
self.slot.epoch(spec.epoch_length),
);
let current_total_balance = self.get_total_balance(&active_validator_indices[..], spec);
trace!(
"{} validators with a total balance of {} wei.",
active_validator_indices.len(),
current_total_balance
);
let current_epoch_attestations: Vec<&PendingAttestation> = self
.latest_attestations
.par_iter()
.filter(|a| {
a.data.slot.epoch(spec.epoch_length) == self.current_epoch(spec)
})
.collect();
trace!(
"Current epoch attestations: {}",
current_epoch_attestations.len()
);
let current_epoch_boundary_attestations: Vec<&PendingAttestation> =
current_epoch_attestations
.par_iter()
.filter(
|a| match self.get_block_root(self.current_epoch_start_slot(spec), spec) {
Some(block_root) => {
(a.data.epoch_boundary_root == *block_root)
&& (a.data.justified_epoch == self.justified_epoch)
}
None => unreachable!(),
},
)
.cloned()
.collect();
let current_epoch_boundary_attester_indices = self
.get_attestation_participants_union(&current_epoch_boundary_attestations[..], spec)?;
let current_epoch_boundary_attesting_balance =
self.get_total_balance(&current_epoch_boundary_attester_indices[..], spec);
trace!(
"Current epoch boundary attesters: {}",
current_epoch_boundary_attester_indices.len()
);
/*
* Validators attesting during the previous epoch
*/
/*
* Validators that made an attestation during the previous epoch
*/
let previous_epoch_attestations: Vec<&PendingAttestation> = self
.latest_attestations
.par_iter()
.filter(|a| {
                a.data.slot.epoch(spec.epoch_length) == self.previous_epoch(spec)
})
.collect();
debug!(
"previous epoch attestations: {}",
previous_epoch_attestations.len()
);
let previous_epoch_attester_indices =
self.get_attestation_participants_union(&previous_epoch_attestations[..], spec)?;
let previous_total_balance =
self.get_total_balance(&previous_epoch_attester_indices[..], spec);
/*
         * Validators targeting the previous justified epoch
*/
let previous_epoch_justified_attestations: Vec<&PendingAttestation> = {
let mut a: Vec<&PendingAttestation> = current_epoch_attestations
.iter()
.filter(|a| a.data.justified_epoch == self.previous_justified_epoch)
.cloned()
.collect();
let mut b: Vec<&PendingAttestation> = previous_epoch_attestations
.iter()
.filter(|a| a.data.justified_epoch == self.previous_justified_epoch)
.cloned()
.collect();
a.append(&mut b);
a
};
let previous_epoch_justified_attester_indices = self
.get_attestation_participants_union(&previous_epoch_justified_attestations[..], spec)?;
let previous_epoch_justified_attesting_balance =
self.get_total_balance(&previous_epoch_justified_attester_indices[..], spec);
/*
* Validators justifying the epoch boundary block at the start of the previous epoch
*/
let previous_epoch_boundary_attestations: Vec<&PendingAttestation> =
previous_epoch_justified_attestations
.iter()
.filter(
|a| match self.get_block_root(self.previous_epoch_start_slot(spec), spec) {
Some(block_root) => a.data.epoch_boundary_root == *block_root,
None => unreachable!(),
},
)
.cloned()
.collect();
let previous_epoch_boundary_attester_indices = self
.get_attestation_participants_union(&previous_epoch_boundary_attestations[..], spec)?;
let previous_epoch_boundary_attesting_balance =
self.get_total_balance(&previous_epoch_boundary_attester_indices[..], spec);
/*
* Validators attesting to the expected beacon chain head during the previous epoch.
*/
let previous_epoch_head_attestations: Vec<&PendingAttestation> =
previous_epoch_attestations
.iter()
.filter(|a| match self.get_block_root(a.data.slot, spec) {
Some(block_root) => a.data.beacon_block_root == *block_root,
None => unreachable!(),
})
.cloned()
.collect();
let previous_epoch_head_attester_indices =
self.get_attestation_participants_union(&previous_epoch_head_attestations[..], spec)?;
let previous_epoch_head_attesting_balance =
self.get_total_balance(&previous_epoch_head_attester_indices[..], spec);
debug!(
"previous_epoch_head_attester_balance of {} wei.",
previous_epoch_head_attesting_balance
);
/*
* Eth1 Data
*/
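        // Example (illustrative): with `eth1_data_voting_period == 16`, an
        // `eth1_data_vote` needs at least 9 votes (strictly more than half of
        // the period) to become the canonical `latest_eth1_data`.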
if self.next_epoch(spec) % spec.eth1_data_voting_period == 0 {
for eth1_data_vote in &self.eth1_data_votes {
if eth1_data_vote.vote_count * 2 > spec.eth1_data_voting_period {
self.latest_eth1_data = eth1_data_vote.eth1_data.clone();
}
}
self.eth1_data_votes = vec![];
}
/*
* Justification
*/
let mut new_justified_epoch = self.justified_epoch;
self.justification_bitfield <<= 1;
// If > 2/3 of the total balance attested to the previous epoch boundary
//
// - Set the 2nd bit of the bitfield.
// - Set the previous epoch to be justified.
if (3 * previous_epoch_boundary_attesting_balance) >= (2 * current_total_balance) {
self.justification_bitfield |= 2;
new_justified_epoch = previous_epoch;
trace!(">= 2/3 voted for previous epoch boundary");
}
        // If > 2/3 of the total balance attested to the current epoch boundary
//
// - Set the 1st bit of the bitfield.
// - Set the current epoch to be justified.
if (3 * current_epoch_boundary_attesting_balance) >= (2 * current_total_balance) {
self.justification_bitfield |= 1;
new_justified_epoch = current_epoch;
trace!(">= 2/3 voted for current epoch boundary");
}
// If:
//
// - All three epochs prior to this epoch have been justified.
        // - The previously justified epoch was three epochs ago.
//
// Then, set the finalized epoch to be three epochs ago.
if ((self.justification_bitfield >> 1) % 8 == 0b111)
& (self.previous_justified_epoch == previous_epoch - 2)
{
self.finalized_epoch = self.previous_justified_epoch;
trace!("epoch - 3 was finalized (1st condition).");
}
// If:
//
// - Both two epochs prior to this epoch have been justified.
// - The previous justified epoch was two epochs ago.
//
// Then, set the finalized epoch to two epochs ago.
if ((self.justification_bitfield >> 1) % 4 == 0b11)
& (self.previous_justified_epoch == previous_epoch - 1)
{
self.finalized_epoch = self.previous_justified_epoch;
trace!("epoch - 2 was finalized (2nd condition).");
}
// If:
//
// - This epoch and the two prior have been justified.
// - The presently justified epoch was two epochs ago.
//
// Then, set the finalized epoch to two epochs ago.
if (self.justification_bitfield % 8 == 0b111) & (self.justified_epoch == previous_epoch - 1)
{
self.finalized_epoch = self.justified_epoch;
trace!("epoch - 2 was finalized (3rd condition).");
}
// If:
//
// - This epoch and the epoch prior to it have been justified.
        // - The presently justified epoch is the previous epoch.
//
// Then, set the finalized epoch to be the previous epoch.
if (self.justification_bitfield % 4 == 0b11) & (self.justified_epoch == previous_epoch) {
self.finalized_epoch = self.justified_epoch;
trace!("epoch - 1 was finalized (4th condition).");
}
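        // Worked example (illustrative): a bitfield ending in 0b111 means this
        // epoch and the two before it were all justified, so if the presently
        // justified epoch is `previous_epoch - 1` the 3rd condition fires and
        // that epoch (two epochs back) becomes finalized.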
self.previous_justified_epoch = self.justified_epoch;
self.justified_epoch = new_justified_epoch;
debug!(
"Finalized epoch {}, justified epoch {}.",
self.finalized_epoch, self.justified_epoch
);
/*
* Crosslinks
*/
// Cached for later lookups.
let mut winning_root_for_shards: HashMap<u64, Result<WinningRoot, WinningRootError>> =
HashMap::new();
// for slot in self.slot.saturating_sub(2 * spec.epoch_length)..self.slot {
for slot in self.previous_epoch(spec).slot_iter(spec.epoch_length) {
let crosslink_committees_at_slot =
self.get_crosslink_committees_at_slot(slot, false, spec)?;
for (crosslink_committee, shard) in crosslink_committees_at_slot {
let shard = shard as u64;
let winning_root = winning_root(
self,
shard,
&current_epoch_attestations,
&previous_epoch_attestations,
spec,
);
if let Ok(winning_root) = &winning_root {
let total_committee_balance =
self.get_total_balance(&crosslink_committee[..], spec);
if (3 * winning_root.total_attesting_balance) >= (2 * total_committee_balance) {
self.latest_crosslinks[shard as usize] = Crosslink {
epoch: current_epoch,
shard_block_root: winning_root.shard_block_root,
}
}
}
winning_root_for_shards.insert(shard, winning_root);
}
}
trace!(
"Found {} winning shard roots.",
winning_root_for_shards.len()
);
/*
         * Rewards and Penalties
*/
let base_reward_quotient = previous_total_balance.integer_sqrt();
if base_reward_quotient == 0 {
return Err(Error::BaseRewardQuotientIsZero);
}
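        // `integer_sqrt` floors: e.g. a previous total balance of
        // 10_000_000_000 yields a base reward quotient of 100_000. The
        // quotient is zero only when the previous total balance is zero.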
/*
* Justification and finalization
*/
let epochs_since_finality = next_epoch - self.finalized_epoch;
let previous_epoch_justified_attester_indices_hashset: HashSet<usize> =
HashSet::from_iter(previous_epoch_justified_attester_indices.iter().cloned());
let previous_epoch_boundary_attester_indices_hashset: HashSet<usize> =
HashSet::from_iter(previous_epoch_boundary_attester_indices.iter().cloned());
let previous_epoch_head_attester_indices_hashset: HashSet<usize> =
HashSet::from_iter(previous_epoch_head_attester_indices.iter().cloned());
let previous_epoch_attester_indices_hashset: HashSet<usize> =
HashSet::from_iter(previous_epoch_attester_indices.iter().cloned());
let active_validator_indices_hashset: HashSet<usize> =
HashSet::from_iter(active_validator_indices.iter().cloned());
debug!("previous epoch justified attesters: {}, previous epoch boundary attesters: {}, previous epoch head attesters: {}, previous epoch attesters: {}", previous_epoch_justified_attester_indices.len(), previous_epoch_boundary_attester_indices.len(), previous_epoch_head_attester_indices.len(), previous_epoch_attester_indices.len());
debug!("{} epochs since finality.", epochs_since_finality);
if epochs_since_finality <= 4 {
for index in 0..self.validator_balances.len() {
let base_reward = self.base_reward(index, base_reward_quotient, spec);
if previous_epoch_justified_attester_indices_hashset.contains(&index) {
safe_add_assign!(
self.validator_balances[index],
base_reward * previous_epoch_justified_attesting_balance
/ previous_total_balance
);
} else if active_validator_indices_hashset.contains(&index) {
safe_sub_assign!(self.validator_balances[index], base_reward);
}
if previous_epoch_boundary_attester_indices_hashset.contains(&index) {
safe_add_assign!(
self.validator_balances[index],
base_reward * previous_epoch_boundary_attesting_balance
/ previous_total_balance
);
} else if active_validator_indices_hashset.contains(&index) {
safe_sub_assign!(self.validator_balances[index], base_reward);
}
if previous_epoch_head_attester_indices_hashset.contains(&index) {
safe_add_assign!(
self.validator_balances[index],
base_reward * previous_epoch_head_attesting_balance
/ previous_total_balance
);
} else if active_validator_indices_hashset.contains(&index) {
safe_sub_assign!(self.validator_balances[index], base_reward);
}
}
for index in previous_epoch_attester_indices {
let base_reward = self.base_reward(index, base_reward_quotient, spec);
let inclusion_distance =
self.inclusion_distance(&previous_epoch_attestations, index, spec)?;
safe_add_assign!(
self.validator_balances[index],
base_reward * spec.min_attestation_inclusion_delay / inclusion_distance
)
}
} else {
for index in 0..self.validator_balances.len() {
let inactivity_penalty = self.inactivity_penalty(
index,
epochs_since_finality,
base_reward_quotient,
spec,
);
if active_validator_indices_hashset.contains(&index) {
if !previous_epoch_justified_attester_indices_hashset.contains(&index) {
safe_sub_assign!(self.validator_balances[index], inactivity_penalty);
}
if !previous_epoch_boundary_attester_indices_hashset.contains(&index) {
safe_sub_assign!(self.validator_balances[index], inactivity_penalty);
}
if !previous_epoch_head_attester_indices_hashset.contains(&index) {
safe_sub_assign!(self.validator_balances[index], inactivity_penalty);
}
if self.validator_registry[index].penalized_epoch <= current_epoch {
let base_reward = self.base_reward(index, base_reward_quotient, spec);
safe_sub_assign!(
self.validator_balances[index],
2 * inactivity_penalty + base_reward
);
}
}
}
for index in previous_epoch_attester_indices {
let base_reward = self.base_reward(index, base_reward_quotient, spec);
let inclusion_distance =
self.inclusion_distance(&previous_epoch_attestations, index, spec)?;
safe_sub_assign!(
self.validator_balances[index],
base_reward
- base_reward * spec.min_attestation_inclusion_delay / inclusion_distance
);
}
}
trace!("Processed validator justification and finalization rewards/penalities.");
/*
* Attestation inclusion
*/
for &index in &previous_epoch_attester_indices_hashset {
let inclusion_slot =
self.inclusion_slot(&previous_epoch_attestations[..], index, spec)?;
let proposer_index = self
.get_beacon_proposer_index(inclusion_slot, spec)
.map_err(|_| Error::UnableToDetermineProducer)?;
let base_reward = self.base_reward(proposer_index, base_reward_quotient, spec);
safe_add_assign!(
self.validator_balances[proposer_index],
base_reward / spec.includer_reward_quotient
);
}
trace!(
"Previous epoch attesters: {}.",
previous_epoch_attester_indices_hashset.len()
);
/*
* Crosslinks
*/
for slot in self.previous_epoch(spec).slot_iter(spec.epoch_length) {
let crosslink_committees_at_slot =
self.get_crosslink_committees_at_slot(slot, false, spec)?;
for (_crosslink_committee, shard) in crosslink_committees_at_slot {
let shard = shard as u64;
if let Some(Ok(winning_root)) = winning_root_for_shards.get(&shard) {
// TODO: remove the map.
let attesting_validator_indices: HashSet<usize> = HashSet::from_iter(
winning_root.attesting_validator_indices.iter().cloned(),
);
for index in 0..self.validator_balances.len() {
let base_reward = self.base_reward(index, base_reward_quotient, spec);
if attesting_validator_indices.contains(&index) {
safe_add_assign!(
self.validator_balances[index],
base_reward * winning_root.total_attesting_balance
/ winning_root.total_balance
);
} else {
safe_sub_assign!(self.validator_balances[index], base_reward);
}
}
for index in &winning_root.attesting_validator_indices {
let base_reward = self.base_reward(*index, base_reward_quotient, spec);
safe_add_assign!(
self.validator_balances[*index],
base_reward * winning_root.total_attesting_balance
/ winning_root.total_balance
);
}
}
}
}
/*
* Ejections
*/
self.process_ejections(spec);
/*
* Validator Registry
*/
self.previous_calculation_epoch = self.current_calculation_epoch;
self.previous_epoch_start_shard = self.current_epoch_start_shard;
self.previous_epoch_seed = self.current_epoch_seed;
        let should_update_validator_registry = if self.finalized_epoch
> self.validator_registry_update_epoch
{
(0..self.get_current_epoch_committee_count(spec)).all(|i| {
let shard = (self.current_epoch_start_shard + i as u64) % spec.shard_count;
self.latest_crosslinks[shard as usize].epoch > self.validator_registry_update_epoch
})
} else {
false
};
        if should_update_validator_registry {
self.update_validator_registry(spec);
self.current_calculation_epoch = next_epoch;
self.current_epoch_start_shard = (self.current_epoch_start_shard
+ self.get_current_epoch_committee_count(spec) as u64)
% spec.shard_count;
self.current_epoch_seed = self
.generate_seed(self.current_calculation_epoch, spec)
.ok_or_else(|| Error::NoRandaoSeed)?;
} else {
let epochs_since_last_registry_update =
current_epoch - self.validator_registry_update_epoch;
if (epochs_since_last_registry_update > 1)
& epochs_since_last_registry_update.is_power_of_two()
{
self.current_calculation_epoch = next_epoch;
self.current_epoch_seed = self
.generate_seed(self.current_calculation_epoch, spec)
.ok_or_else(|| Error::NoRandaoSeed)?;
}
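            // Example (illustrative): if the registry last updated at epoch 10,
            // the seed refreshes whenever the gap is a power of two greater
            // than one, i.e. at epochs 12, 14, 18, 26, ...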
}
self.process_penalties_and_exits(spec);
self.latest_index_roots[(next_epoch.as_usize() + spec.entry_exit_delay as usize)
% spec.latest_index_roots_length] = hash_tree_root(get_active_validator_indices(
&self.validator_registry,
next_epoch + Epoch::from(spec.entry_exit_delay),
));
self.latest_penalized_balances[next_epoch.as_usize() % spec.latest_penalized_exit_length] =
self.latest_penalized_balances
[current_epoch.as_usize() % spec.latest_penalized_exit_length];
self.latest_randao_mixes[next_epoch.as_usize() % spec.latest_randao_mixes_length] = self
.get_randao_mix(current_epoch, spec)
            .cloned()
.ok_or_else(|| Error::NoRandaoSeed)?;
self.latest_attestations = self
.latest_attestations
.iter()
.filter(|a| a.data.slot.epoch(spec.epoch_length) >= current_epoch)
.cloned()
.collect();
debug!("Epoch transition complete.");
Ok(())
}
}
fn hash_tree_root<T: TreeHash>(input: Vec<T>) -> Hash256 {
Hash256::from(&input.hash_tree_root()[..])
}
fn winning_root(
state: &BeaconState,
shard: u64,
current_epoch_attestations: &[&PendingAttestation],
previous_epoch_attestations: &[&PendingAttestation],
spec: &ChainSpec,
) -> Result<WinningRoot, WinningRootError> {
let mut attestations = current_epoch_attestations.to_vec();
attestations.append(&mut previous_epoch_attestations.to_vec());
let mut candidates: HashMap<Hash256, WinningRoot> = HashMap::new();
let mut highest_seen_balance = 0;
for a in &attestations {
if a.data.shard != shard {
continue;
}
let shard_block_root = &a.data.shard_block_root;
if candidates.contains_key(shard_block_root) {
continue;
}
        // Find the union of validator indices attesting to this shard block root.
        let attesting_validator_indices: Vec<usize> = {
            let mut indices = vec![];
            for a in attestations.iter() {
                if (a.data.shard == shard) && (a.data.shard_block_root == *shard_block_root) {
                    indices.append(&mut state.get_attestation_participants(
                        &a.data,
                        &a.aggregation_bitfield,
                        spec,
                    )?);
                }
            }
            indices
        };
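        // TODO: `total_balance` and `total_attesting_balance` below are folded
        // over the same index set and are therefore always equal; the reward
        // denominator appears intended to cover the full committee balance.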
let total_balance: u64 = attesting_validator_indices
.iter()
.fold(0, |acc, i| acc + state.get_effective_balance(*i, spec));
let total_attesting_balance: u64 = attesting_validator_indices
.iter()
.fold(0, |acc, i| acc + state.get_effective_balance(*i, spec));
if total_attesting_balance > highest_seen_balance {
highest_seen_balance = total_attesting_balance;
}
let candidate_root = WinningRoot {
shard_block_root: *shard_block_root,
attesting_validator_indices,
total_attesting_balance,
total_balance,
};
candidates.insert(*shard_block_root, candidate_root);
}
Ok(candidates
.iter()
.filter_map(|(_hash, candidate)| {
if candidate.total_attesting_balance == highest_seen_balance {
Some(candidate)
} else {
None
}
})
.min_by_key(|candidate| candidate.shard_block_root)
.ok_or_else(|| WinningRootError::NoWinningRoot)?
// TODO: avoid clone.
.clone())
}
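// Usage sketch (names as in the caller above): for each shard in the previous
// epoch's committees, `winning_root` returns the shard block root with the
// highest total attesting balance, breaking ties by the smallest root.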
impl From<InclusionError> for Error {
fn from(e: InclusionError) -> Error {
Error::InclusionError(e)
}
}
impl From<CommitteesError> for Error {
fn from(e: CommitteesError) -> Error {
Error::CommitteesError(e)
}
}
impl From<AttestationParticipantsError> for Error {
fn from(e: AttestationParticipantsError) -> Error {
Error::AttestationParticipantsError(e)
}
}
impl From<AttestationParticipantsError> for WinningRootError {
fn from(e: AttestationParticipantsError) -> WinningRootError {
WinningRootError::AttestationParticipantsError(e)
}
}
#[cfg(test)]
mod tests {
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
}


@ -0,0 +1,10 @@
mod block_processable;
mod epoch_processable;
mod slot_processable;
pub use block_processable::{
validate_attestation, validate_attestation_without_signature, BlockProcessable,
Error as BlockProcessingError,
};
pub use epoch_processable::{EpochProcessable, Error as EpochProcessingError};
pub use slot_processable::{Error as SlotProcessingError, SlotProcessable};


@ -0,0 +1,70 @@
use crate::{EpochProcessable, EpochProcessingError};
use types::{beacon_state::CommitteesError, BeaconState, ChainSpec, Hash256};
#[derive(Debug, PartialEq)]
pub enum Error {
CommitteesError(CommitteesError),
EpochProcessingError(EpochProcessingError),
}
pub trait SlotProcessable {
fn per_slot_processing(
&mut self,
previous_block_root: Hash256,
spec: &ChainSpec,
) -> Result<(), Error>;
}
impl SlotProcessable for BeaconState
where
BeaconState: EpochProcessable,
{
fn per_slot_processing(
&mut self,
previous_block_root: Hash256,
spec: &ChainSpec,
) -> Result<(), Error> {
if (self.slot + 1) % spec.epoch_length == 0 {
self.per_epoch_processing(spec)?;
}
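        // e.g. with `epoch_length == 64`, epoch processing runs when
        // `self.slot` is 63, 127, 191, ... (the final slot of each epoch).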
self.slot += 1;
self.latest_randao_mixes[self.slot.as_usize() % spec.latest_randao_mixes_length] =
self.latest_randao_mixes[(self.slot.as_usize() - 1) % spec.latest_randao_mixes_length];
// Block roots.
self.latest_block_roots[(self.slot.as_usize() - 1) % spec.latest_block_roots_length] =
previous_block_root;
if self.slot.as_usize() % spec.latest_block_roots_length == 0 {
let root = merkle_root(&self.latest_block_roots[..]);
self.batched_block_roots.push(root);
}
Ok(())
}
}
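/// Placeholder: always returns the zero hash. A real Merkle root of the
/// batched block roots is still to be implemented.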
fn merkle_root(_input: &[Hash256]) -> Hash256 {
Hash256::zero()
}
impl From<CommitteesError> for Error {
fn from(e: CommitteesError) -> Error {
Error::CommitteesError(e)
}
}
impl From<EpochProcessingError> for Error {
fn from(e: EpochProcessingError) -> Error {
Error::EpochProcessingError(e)
}
}
#[cfg(test)]
mod tests {
#[test]
fn it_works() {
assert_eq!(2 + 2, 4);
}
}

21
eth2/types/Cargo.toml Normal file

@ -0,0 +1,21 @@
[package]
name = "types"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[dependencies]
bls = { path = "../utils/bls" }
boolean-bitfield = { path = "../utils/boolean-bitfield" }
ethereum-types = "0.4.0"
hashing = { path = "../utils/hashing" }
honey-badger-split = { path = "../utils/honey-badger-split" }
log = "0.4"
rayon = "1.0"
rand = "0.5.5"
serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"
slog = "^2.2.3"
ssz = { path = "../utils/ssz" }
vec_shuffle = { path = "../utils/vec_shuffle" }


@ -0,0 +1,112 @@
use super::{AggregatePublicKey, AggregateSignature, AttestationData, Bitfield, Hash256};
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, Clone, PartialEq, Serialize)]
pub struct Attestation {
pub aggregation_bitfield: Bitfield,
pub data: AttestationData,
pub custody_bitfield: Bitfield,
pub aggregate_signature: AggregateSignature,
}
impl Attestation {
pub fn canonical_root(&self) -> Hash256 {
Hash256::from(&self.hash_tree_root()[..])
}
pub fn signable_message(&self, custody_bit: bool) -> Vec<u8> {
self.data.signable_message(custody_bit)
}
pub fn verify_signature(
&self,
group_public_key: &AggregatePublicKey,
custody_bit: bool,
// TODO: use domain.
_domain: u64,
) -> bool {
self.aggregate_signature
.verify(&self.signable_message(custody_bit), group_public_key)
}
}
impl Encodable for Attestation {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.aggregation_bitfield);
s.append(&self.data);
s.append(&self.custody_bitfield);
s.append(&self.aggregate_signature);
}
}
impl Decodable for Attestation {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (aggregation_bitfield, i) = Bitfield::ssz_decode(bytes, i)?;
let (data, i) = AttestationData::ssz_decode(bytes, i)?;
let (custody_bitfield, i) = Bitfield::ssz_decode(bytes, i)?;
let (aggregate_signature, i) = AggregateSignature::ssz_decode(bytes, i)?;
let attestation_record = Self {
aggregation_bitfield,
data,
custody_bitfield,
aggregate_signature,
};
Ok((attestation_record, i))
}
}
impl TreeHash for Attestation {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.aggregation_bitfield.hash_tree_root());
result.append(&mut self.data.hash_tree_root());
result.append(&mut self.custody_bitfield.hash_tree_root());
result.append(&mut self.aggregate_signature.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Attestation {
fn random_for_test(rng: &mut T) -> Self {
Self {
data: <_>::random_for_test(rng),
aggregation_bitfield: <_>::random_for_test(rng),
custody_bitfield: <_>::random_for_test(rng),
aggregate_signature: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Attestation::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Attestation::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,142 @@
use crate::test_utils::TestRandom;
use crate::{AttestationDataAndCustodyBit, Crosslink, Epoch, Hash256, Slot};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
pub const SSZ_ATTESTATION_DATA_LENGTH: usize = {
    8 + // slot
    8 + // shard
    32 + // beacon_block_root
    32 + // epoch_boundary_root
    32 + // shard_block_root
    40 + // latest_crosslink (epoch + shard_block_root)
    8 + // justified_epoch
    32 // justified_block_root
};
#[derive(Debug, Clone, PartialEq, Default, Serialize, Hash)]
pub struct AttestationData {
pub slot: Slot,
pub shard: u64,
pub beacon_block_root: Hash256,
pub epoch_boundary_root: Hash256,
pub shard_block_root: Hash256,
pub latest_crosslink: Crosslink,
pub justified_epoch: Epoch,
pub justified_block_root: Hash256,
}
impl Eq for AttestationData {}
impl AttestationData {
pub fn canonical_root(&self) -> Hash256 {
Hash256::from(&self.hash_tree_root()[..])
}
pub fn signable_message(&self, custody_bit: bool) -> Vec<u8> {
let attestation_data_and_custody_bit = AttestationDataAndCustodyBit {
data: self.clone(),
custody_bit,
};
attestation_data_and_custody_bit.hash_tree_root()
}
}
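// Sketch of the intended flow (see `Attestation::verify_signature` earlier in
// this diff): a validator signs `data.signable_message(custody_bit)` and a
// verifier checks the aggregate signature over that same tree-hash root.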
impl Encodable for AttestationData {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.slot);
s.append(&self.shard);
s.append(&self.beacon_block_root);
s.append(&self.epoch_boundary_root);
s.append(&self.shard_block_root);
s.append(&self.latest_crosslink);
s.append(&self.justified_epoch);
s.append(&self.justified_block_root);
}
}
impl Decodable for AttestationData {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (slot, i) = <_>::ssz_decode(bytes, i)?;
let (shard, i) = <_>::ssz_decode(bytes, i)?;
let (beacon_block_root, i) = <_>::ssz_decode(bytes, i)?;
let (epoch_boundary_root, i) = <_>::ssz_decode(bytes, i)?;
let (shard_block_root, i) = <_>::ssz_decode(bytes, i)?;
let (latest_crosslink, i) = <_>::ssz_decode(bytes, i)?;
let (justified_epoch, i) = <_>::ssz_decode(bytes, i)?;
let (justified_block_root, i) = <_>::ssz_decode(bytes, i)?;
let attestation_data = AttestationData {
slot,
shard,
beacon_block_root,
epoch_boundary_root,
shard_block_root,
latest_crosslink,
justified_epoch,
justified_block_root,
};
Ok((attestation_data, i))
}
}
impl TreeHash for AttestationData {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.slot.hash_tree_root());
result.append(&mut self.shard.hash_tree_root());
result.append(&mut self.beacon_block_root.hash_tree_root());
result.append(&mut self.epoch_boundary_root.hash_tree_root());
result.append(&mut self.shard_block_root.hash_tree_root());
result.append(&mut self.latest_crosslink.hash_tree_root());
result.append(&mut self.justified_epoch.hash_tree_root());
result.append(&mut self.justified_block_root.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for AttestationData {
fn random_for_test(rng: &mut T) -> Self {
Self {
slot: <_>::random_for_test(rng),
shard: <_>::random_for_test(rng),
beacon_block_root: <_>::random_for_test(rng),
epoch_boundary_root: <_>::random_for_test(rng),
shard_block_root: <_>::random_for_test(rng),
latest_crosslink: <_>::random_for_test(rng),
justified_epoch: <_>::random_for_test(rng),
justified_block_root: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttestationData::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttestationData::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,81 @@
use super::AttestationData;
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, Clone, PartialEq, Default, Serialize)]
pub struct AttestationDataAndCustodyBit {
pub data: AttestationData,
pub custody_bit: bool,
}
impl Encodable for AttestationDataAndCustodyBit {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.data);
// TODO: deal with bools
}
}
impl Decodable for AttestationDataAndCustodyBit {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (data, i) = <_>::ssz_decode(bytes, i)?;
let custody_bit = false;
let attestation_data_and_custody_bit = AttestationDataAndCustodyBit { data, custody_bit };
Ok((attestation_data_and_custody_bit, i))
}
}
impl TreeHash for AttestationDataAndCustodyBit {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.data.hash_tree_root());
// TODO: add bool ssz
// result.append(custody_bit.hash_tree_root());
ssz::hash(&result)
}
}
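// A possible resolution for the bool TODOs above (an assumption, not a settled
// spec encoding): append `custody_bit` as a single byte
// (`if self.custody_bit { 1u8 } else { 0u8 }`) in `ssz_append`, decode it the
// same way, and tree-hash it as its own chunk.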
impl<T: RngCore> TestRandom<T> for AttestationDataAndCustodyBit {
fn random_for_test(rng: &mut T) -> Self {
Self {
data: <_>::random_for_test(rng),
// TODO: deal with bools
custody_bit: false,
}
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttestationDataAndCustodyBit::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttestationDataAndCustodyBit::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,80 @@
use crate::{test_utils::TestRandom, SlashableAttestation};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct AttesterSlashing {
pub slashable_attestation_1: SlashableAttestation,
pub slashable_attestation_2: SlashableAttestation,
}
impl Encodable for AttesterSlashing {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.slashable_attestation_1);
s.append(&self.slashable_attestation_2);
}
}
impl Decodable for AttesterSlashing {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (slashable_attestation_1, i) = <_>::ssz_decode(bytes, i)?;
let (slashable_attestation_2, i) = <_>::ssz_decode(bytes, i)?;
Ok((
AttesterSlashing {
slashable_attestation_1,
slashable_attestation_2,
},
i,
))
}
}
impl TreeHash for AttesterSlashing {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.slashable_attestation_1.hash_tree_root());
result.append(&mut self.slashable_attestation_2.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for AttesterSlashing {
fn random_for_test(rng: &mut T) -> Self {
Self {
slashable_attestation_1: <_>::random_for_test(rng),
slashable_attestation_2: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttesterSlashing::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttesterSlashing::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,155 @@
use crate::test_utils::TestRandom;
use crate::{BeaconBlockBody, ChainSpec, Eth1Data, Hash256, ProposalSignedData, Slot};
use bls::Signature;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct BeaconBlock {
pub slot: Slot,
pub parent_root: Hash256,
pub state_root: Hash256,
pub randao_reveal: Signature,
pub eth1_data: Eth1Data,
pub signature: Signature,
pub body: BeaconBlockBody,
}
impl BeaconBlock {
/// Produce the first block of the Beacon Chain.
pub fn genesis(state_root: Hash256, spec: &ChainSpec) -> BeaconBlock {
BeaconBlock {
slot: spec.genesis_slot,
parent_root: spec.zero_hash,
state_root,
randao_reveal: spec.empty_signature.clone(),
eth1_data: Eth1Data {
deposit_root: spec.zero_hash,
block_hash: spec.zero_hash,
},
signature: spec.empty_signature.clone(),
body: BeaconBlockBody {
proposer_slashings: vec![],
attester_slashings: vec![],
attestations: vec![],
deposits: vec![],
exits: vec![],
},
}
}
pub fn canonical_root(&self) -> Hash256 {
Hash256::from(&self.hash_tree_root()[..])
}
pub fn proposal_root(&self, spec: &ChainSpec) -> Hash256 {
let block_without_signature_root = {
let mut block_without_signature = self.clone();
block_without_signature.signature = spec.empty_signature.clone();
block_without_signature.canonical_root()
};
let proposal = ProposalSignedData {
slot: self.slot,
shard: spec.beacon_chain_shard_number,
block_root: block_without_signature_root,
};
Hash256::from(&proposal.hash_tree_root()[..])
}
}
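// Illustrative only (the exact bls signing call is assumed, not defined here):
// a proposer signs the proposal root rather than the raw block, e.g.
//
//     let root = block.proposal_root(&spec);
//     let signature = Signature::new(root.as_ref(), &keypair.sk); // hypothetical call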
impl Encodable for BeaconBlock {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.slot);
s.append(&self.parent_root);
s.append(&self.state_root);
s.append(&self.randao_reveal);
s.append(&self.eth1_data);
s.append(&self.signature);
s.append(&self.body);
}
}
impl Decodable for BeaconBlock {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (slot, i) = <_>::ssz_decode(bytes, i)?;
let (parent_root, i) = <_>::ssz_decode(bytes, i)?;
let (state_root, i) = <_>::ssz_decode(bytes, i)?;
let (randao_reveal, i) = <_>::ssz_decode(bytes, i)?;
let (eth1_data, i) = <_>::ssz_decode(bytes, i)?;
let (signature, i) = <_>::ssz_decode(bytes, i)?;
let (body, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
slot,
parent_root,
state_root,
randao_reveal,
eth1_data,
signature,
body,
},
i,
))
}
}
impl TreeHash for BeaconBlock {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.slot.hash_tree_root());
result.append(&mut self.parent_root.hash_tree_root());
result.append(&mut self.state_root.hash_tree_root());
result.append(&mut self.randao_reveal.hash_tree_root());
result.append(&mut self.eth1_data.hash_tree_root());
result.append(&mut self.signature.hash_tree_root());
result.append(&mut self.body.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for BeaconBlock {
fn random_for_test(rng: &mut T) -> Self {
Self {
slot: <_>::random_for_test(rng),
parent_root: <_>::random_for_test(rng),
state_root: <_>::random_for_test(rng),
randao_reveal: <_>::random_for_test(rng),
eth1_data: <_>::random_for_test(rng),
signature: <_>::random_for_test(rng),
body: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = BeaconBlock::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = BeaconBlock::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,99 @@
use super::{Attestation, AttesterSlashing, Deposit, Exit, ProposerSlashing};
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Default, Serialize)]
pub struct BeaconBlockBody {
pub proposer_slashings: Vec<ProposerSlashing>,
pub attester_slashings: Vec<AttesterSlashing>,
pub attestations: Vec<Attestation>,
pub deposits: Vec<Deposit>,
pub exits: Vec<Exit>,
}
impl Encodable for BeaconBlockBody {
fn ssz_append(&self, s: &mut SszStream) {
s.append_vec(&self.proposer_slashings);
s.append_vec(&self.attester_slashings);
s.append_vec(&self.attestations);
s.append_vec(&self.deposits);
s.append_vec(&self.exits);
}
}
impl Decodable for BeaconBlockBody {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (proposer_slashings, i) = <_>::ssz_decode(bytes, i)?;
let (attester_slashings, i) = <_>::ssz_decode(bytes, i)?;
let (attestations, i) = <_>::ssz_decode(bytes, i)?;
let (deposits, i) = <_>::ssz_decode(bytes, i)?;
let (exits, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
proposer_slashings,
attester_slashings,
attestations,
deposits,
exits,
},
i,
))
}
}
impl TreeHash for BeaconBlockBody {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.proposer_slashings.hash_tree_root());
result.append(&mut self.attester_slashings.hash_tree_root());
result.append(&mut self.attestations.hash_tree_root());
result.append(&mut self.deposits.hash_tree_root());
result.append(&mut self.exits.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for BeaconBlockBody {
fn random_for_test(rng: &mut T) -> Self {
Self {
proposer_slashings: <_>::random_for_test(rng),
attester_slashings: <_>::random_for_test(rng),
attestations: <_>::random_for_test(rng),
deposits: <_>::random_for_test(rng),
exits: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = BeaconBlockBody::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = BeaconBlockBody::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

File diff suppressed because it is too large


@ -0,0 +1,81 @@
use super::SlashableVoteData;
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct CasperSlashing {
pub slashable_vote_data_1: SlashableVoteData,
pub slashable_vote_data_2: SlashableVoteData,
}
impl Encodable for CasperSlashing {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.slashable_vote_data_1);
s.append(&self.slashable_vote_data_2);
}
}
impl Decodable for CasperSlashing {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (slashable_vote_data_1, i) = <_>::ssz_decode(bytes, i)?;
let (slashable_vote_data_2, i) = <_>::ssz_decode(bytes, i)?;
Ok((
CasperSlashing {
slashable_vote_data_1,
slashable_vote_data_2,
},
i,
))
}
}
impl TreeHash for CasperSlashing {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.slashable_vote_data_1.hash_tree_root());
result.append(&mut self.slashable_vote_data_2.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for CasperSlashing {
fn random_for_test(rng: &mut T) -> Self {
Self {
slashable_vote_data_1: <_>::random_for_test(rng),
slashable_vote_data_2: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = CasperSlashing::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = CasperSlashing::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,91 @@
use crate::test_utils::TestRandom;
use crate::{Epoch, Hash256};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, Clone, PartialEq, Default, Serialize, Hash)]
pub struct Crosslink {
pub epoch: Epoch,
pub shard_block_root: Hash256,
}
impl Crosslink {
    /// Generates a new instance where `epoch` and `shard_block_root` are both zero.
pub fn zero() -> Self {
Self {
epoch: Epoch::new(0),
shard_block_root: Hash256::zero(),
}
}
}
impl Encodable for Crosslink {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.epoch);
s.append(&self.shard_block_root);
}
}
impl Decodable for Crosslink {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (epoch, i) = <_>::ssz_decode(bytes, i)?;
let (shard_block_root, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
epoch,
shard_block_root,
},
i,
))
}
}
impl TreeHash for Crosslink {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.epoch.hash_tree_root());
result.append(&mut self.shard_block_root.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Crosslink {
fn random_for_test(rng: &mut T) -> Self {
Self {
epoch: <_>::random_for_test(rng),
shard_block_root: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Crosslink::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Crosslink::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

87
eth2/types/src/deposit.rs Normal file

@ -0,0 +1,87 @@
use super::{DepositData, Hash256};
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct Deposit {
pub branch: Vec<Hash256>,
pub index: u64,
pub deposit_data: DepositData,
}
impl Encodable for Deposit {
fn ssz_append(&self, s: &mut SszStream) {
s.append_vec(&self.branch);
s.append(&self.index);
s.append(&self.deposit_data);
}
}
impl Decodable for Deposit {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (branch, i) = <_>::ssz_decode(bytes, i)?;
let (index, i) = <_>::ssz_decode(bytes, i)?;
let (deposit_data, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
branch,
index,
deposit_data,
},
i,
))
}
}
impl TreeHash for Deposit {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.branch.hash_tree_root());
result.append(&mut self.index.hash_tree_root());
result.append(&mut self.deposit_data.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Deposit {
fn random_for_test(rng: &mut T) -> Self {
Self {
branch: <_>::random_for_test(rng),
index: <_>::random_for_test(rng),
deposit_data: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Deposit::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Deposit::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,87 @@
use super::DepositInput;
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct DepositData {
pub amount: u64,
pub timestamp: u64,
pub deposit_input: DepositInput,
}
impl Encodable for DepositData {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.amount);
s.append(&self.timestamp);
s.append(&self.deposit_input);
}
}
impl Decodable for DepositData {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (amount, i) = <_>::ssz_decode(bytes, i)?;
let (timestamp, i) = <_>::ssz_decode(bytes, i)?;
let (deposit_input, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
amount,
timestamp,
deposit_input,
},
i,
))
}
}
impl TreeHash for DepositData {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.amount.hash_tree_root());
result.append(&mut self.timestamp.hash_tree_root());
result.append(&mut self.deposit_input.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for DepositData {
fn random_for_test(rng: &mut T) -> Self {
Self {
amount: <_>::random_for_test(rng),
timestamp: <_>::random_for_test(rng),
deposit_input: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = DepositData::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = DepositData::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,88 @@
use super::Hash256;
use crate::test_utils::TestRandom;
use bls::{PublicKey, Signature};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct DepositInput {
pub pubkey: PublicKey,
pub withdrawal_credentials: Hash256,
pub proof_of_possession: Signature,
}
impl Encodable for DepositInput {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.pubkey);
s.append(&self.withdrawal_credentials);
s.append(&self.proof_of_possession);
}
}
impl Decodable for DepositInput {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (pubkey, i) = <_>::ssz_decode(bytes, i)?;
let (withdrawal_credentials, i) = <_>::ssz_decode(bytes, i)?;
let (proof_of_possession, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
pubkey,
withdrawal_credentials,
proof_of_possession,
},
i,
))
}
}
impl TreeHash for DepositInput {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.pubkey.hash_tree_root());
result.append(&mut self.withdrawal_credentials.hash_tree_root());
result.append(&mut self.proof_of_possession.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for DepositInput {
fn random_for_test(rng: &mut T) -> Self {
Self {
pubkey: <_>::random_for_test(rng),
withdrawal_credentials: <_>::random_for_test(rng),
proof_of_possession: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = DepositInput::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = DepositInput::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,82 @@
use super::Hash256;
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
// Note: this is referred to as DepositRootVote in the spec.
#[derive(Debug, PartialEq, Clone, Default, Serialize)]
pub struct Eth1Data {
pub deposit_root: Hash256,
pub block_hash: Hash256,
}
impl Encodable for Eth1Data {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.deposit_root);
s.append(&self.block_hash);
}
}
impl Decodable for Eth1Data {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (deposit_root, i) = <_>::ssz_decode(bytes, i)?;
let (block_hash, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
deposit_root,
block_hash,
},
i,
))
}
}
impl TreeHash for Eth1Data {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.deposit_root.hash_tree_root());
result.append(&mut self.block_hash.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Eth1Data {
fn random_for_test(rng: &mut T) -> Self {
Self {
deposit_root: <_>::random_for_test(rng),
block_hash: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Eth1Data::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Eth1Data::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,82 @@
use super::Eth1Data;
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
// Note: this is referred to as DepositRootVote in the spec.
#[derive(Debug, PartialEq, Clone, Default, Serialize)]
pub struct Eth1DataVote {
pub eth1_data: Eth1Data,
pub vote_count: u64,
}
impl Encodable for Eth1DataVote {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.eth1_data);
s.append(&self.vote_count);
}
}
impl Decodable for Eth1DataVote {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (eth1_data, i) = <_>::ssz_decode(bytes, i)?;
let (vote_count, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
eth1_data,
vote_count,
},
i,
))
}
}
impl TreeHash for Eth1DataVote {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.eth1_data.hash_tree_root());
result.append(&mut self.vote_count.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Eth1DataVote {
fn random_for_test(rng: &mut T) -> Self {
Self {
eth1_data: <_>::random_for_test(rng),
vote_count: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Eth1DataVote::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Eth1DataVote::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

87
eth2/types/src/exit.rs Normal file

@ -0,0 +1,87 @@
use crate::{test_utils::TestRandom, Epoch};
use bls::Signature;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct Exit {
pub epoch: Epoch,
pub validator_index: u64,
pub signature: Signature,
}
impl Encodable for Exit {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.epoch);
s.append(&self.validator_index);
s.append(&self.signature);
}
}
impl Decodable for Exit {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (epoch, i) = <_>::ssz_decode(bytes, i)?;
let (validator_index, i) = <_>::ssz_decode(bytes, i)?;
let (signature, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
epoch,
validator_index,
signature,
},
i,
))
}
}
impl TreeHash for Exit {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.epoch.hash_tree_root());
result.append(&mut self.validator_index.hash_tree_root());
result.append(&mut self.signature.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Exit {
fn random_for_test(rng: &mut T) -> Self {
Self {
epoch: <_>::random_for_test(rng),
validator_index: <_>::random_for_test(rng),
signature: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Exit::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Exit::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

86
eth2/types/src/fork.rs Normal file

@ -0,0 +1,86 @@
use crate::{test_utils::TestRandom, Epoch};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, Clone, PartialEq, Default, Serialize)]
pub struct Fork {
pub previous_version: u64,
pub current_version: u64,
pub epoch: Epoch,
}
impl Encodable for Fork {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.previous_version);
s.append(&self.current_version);
s.append(&self.epoch);
}
}
impl Decodable for Fork {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (previous_version, i) = <_>::ssz_decode(bytes, i)?;
let (current_version, i) = <_>::ssz_decode(bytes, i)?;
let (epoch, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
previous_version,
current_version,
epoch,
},
i,
))
}
}
impl TreeHash for Fork {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.previous_version.hash_tree_root());
result.append(&mut self.current_version.hash_tree_root());
result.append(&mut self.epoch.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for Fork {
fn random_for_test(rng: &mut T) -> Self {
Self {
previous_version: <_>::random_for_test(rng),
current_version: <_>::random_for_test(rng),
epoch: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Fork::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = Fork::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}


@ -0,0 +1,12 @@
/// Note: this object does not actually exist in the spec.
///
/// We use it for managing attestations that have not been aggregated.
use super::{AttestationData, Signature};
use serde_derive::Serialize;
#[derive(Debug, Clone, PartialEq, Serialize)]
pub struct FreeAttestation {
pub data: AttestationData,
pub signature: Signature,
pub validator_index: u64,
}

75
eth2/types/src/lib.rs Normal file

@ -0,0 +1,75 @@
pub mod test_utils;
pub mod attestation;
pub mod attestation_data;
pub mod attestation_data_and_custody_bit;
pub mod attester_slashing;
pub mod beacon_block;
pub mod beacon_block_body;
pub mod beacon_state;
pub mod casper_slashing;
pub mod crosslink;
pub mod deposit;
pub mod deposit_data;
pub mod deposit_input;
pub mod eth1_data;
pub mod eth1_data_vote;
pub mod exit;
pub mod fork;
pub mod free_attestation;
pub mod pending_attestation;
pub mod proposal_signed_data;
pub mod proposer_slashing;
pub mod readers;
pub mod shard_reassignment_record;
pub mod slashable_attestation;
pub mod slashable_vote_data;
pub mod slot_epoch_height;
pub mod spec;
pub mod validator;
pub mod validator_registry;
pub mod validator_registry_delta_block;
use ethereum_types::{H160, H256, U256};
use std::collections::HashMap;
pub use crate::attestation::Attestation;
pub use crate::attestation_data::AttestationData;
pub use crate::attestation_data_and_custody_bit::AttestationDataAndCustodyBit;
pub use crate::attester_slashing::AttesterSlashing;
pub use crate::beacon_block::BeaconBlock;
pub use crate::beacon_block_body::BeaconBlockBody;
pub use crate::beacon_state::BeaconState;
pub use crate::casper_slashing::CasperSlashing;
pub use crate::crosslink::Crosslink;
pub use crate::deposit::Deposit;
pub use crate::deposit_data::DepositData;
pub use crate::deposit_input::DepositInput;
pub use crate::eth1_data::Eth1Data;
pub use crate::eth1_data_vote::Eth1DataVote;
pub use crate::exit::Exit;
pub use crate::fork::Fork;
pub use crate::free_attestation::FreeAttestation;
pub use crate::pending_attestation::PendingAttestation;
pub use crate::proposal_signed_data::ProposalSignedData;
pub use crate::proposer_slashing::ProposerSlashing;
pub use crate::slashable_attestation::SlashableAttestation;
pub use crate::slashable_vote_data::SlashableVoteData;
pub use crate::slot_epoch_height::{Epoch, Slot};
pub use crate::spec::ChainSpec;
pub use crate::validator::{StatusFlags as ValidatorStatusFlags, Validator};
pub use crate::validator_registry_delta_block::ValidatorRegistryDeltaBlock;
pub type Hash256 = H256;
pub type Address = H160;
pub type EthBalance = U256;
pub type Bitfield = boolean_bitfield::BooleanBitfield;
pub type BitfieldError = boolean_bitfield::Error;
/// Maps a (slot, shard_id) to attestation_indices.
pub type AttesterMap = HashMap<(u64, u64), Vec<usize>>;
/// Maps a slot to a block proposer.
pub type ProposerMap = HashMap<u64, usize>;
pub use bls::{AggregatePublicKey, AggregateSignature, Keypair, PublicKey, Signature};
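
To make the two map aliases above concrete, a small illustrative sketch (module and test names are hypothetical):

#[cfg(test)]
mod map_examples {
    use super::*;

    #[test]
    fn alias_shapes() {
        let mut attesters: AttesterMap = HashMap::new();
        // Validators 0 and 3 are the attesters for (slot 1, shard 2).
        attesters.insert((1, 2), vec![0, 3]);

        let mut proposers: ProposerMap = HashMap::new();
        // Validator 7 proposes the block at slot 1.
        proposers.insert(1, 7);

        assert_eq!(attesters[&(1, 2)], vec![0, 3]);
        assert_eq!(proposers[&1], 7);
    }
}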

93
eth2/types/src/pending_attestation.rs Normal file
View File

@ -0,0 +1,93 @@
use crate::test_utils::TestRandom;
use crate::{AttestationData, Bitfield, Slot};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, Clone, PartialEq, Serialize)]
pub struct PendingAttestation {
pub aggregation_bitfield: Bitfield,
pub data: AttestationData,
pub custody_bitfield: Bitfield,
pub inclusion_slot: Slot,
}
impl Encodable for PendingAttestation {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.aggregation_bitfield);
s.append(&self.data);
s.append(&self.custody_bitfield);
s.append(&self.inclusion_slot);
}
}
impl Decodable for PendingAttestation {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (aggregation_bitfield, i) = <_>::ssz_decode(bytes, i)?;
let (data, i) = <_>::ssz_decode(bytes, i)?;
let (custody_bitfield, i) = <_>::ssz_decode(bytes, i)?;
let (inclusion_slot, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
data,
aggregation_bitfield,
custody_bitfield,
inclusion_slot,
},
i,
))
}
}
impl TreeHash for PendingAttestation {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.aggregation_bitfield.hash_tree_root());
result.append(&mut self.data.hash_tree_root());
result.append(&mut self.custody_bitfield.hash_tree_root());
result.append(&mut self.inclusion_slot.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for PendingAttestation {
fn random_for_test(rng: &mut T) -> Self {
Self {
data: <_>::random_for_test(rng),
aggregation_bitfield: <_>::random_for_test(rng),
custody_bitfield: <_>::random_for_test(rng),
inclusion_slot: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = PendingAttestation::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = PendingAttestation::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

87
eth2/types/src/proposal_signed_data.rs Normal file
View File

@ -0,0 +1,87 @@
use crate::test_utils::TestRandom;
use crate::{Hash256, Slot};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Default, Serialize)]
pub struct ProposalSignedData {
pub slot: Slot,
pub shard: u64,
pub block_root: Hash256,
}
impl Encodable for ProposalSignedData {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.slot);
s.append(&self.shard);
s.append(&self.block_root);
}
}
impl Decodable for ProposalSignedData {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (slot, i) = <_>::ssz_decode(bytes, i)?;
let (shard, i) = <_>::ssz_decode(bytes, i)?;
let (block_root, i) = <_>::ssz_decode(bytes, i)?;
Ok((
ProposalSignedData {
slot,
shard,
block_root,
},
i,
))
}
}
impl TreeHash for ProposalSignedData {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.slot.hash_tree_root());
result.append(&mut self.shard.hash_tree_root());
result.append(&mut self.block_root.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for ProposalSignedData {
fn random_for_test(rng: &mut T) -> Self {
Self {
slot: <_>::random_for_test(rng),
shard: <_>::random_for_test(rng),
block_root: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ProposalSignedData::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ProposalSignedData::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

100
eth2/types/src/proposer_slashing.rs Normal file
View File

@ -0,0 +1,100 @@
use super::ProposalSignedData;
use crate::test_utils::TestRandom;
use bls::Signature;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct ProposerSlashing {
pub proposer_index: u64,
pub proposal_data_1: ProposalSignedData,
pub proposal_signature_1: Signature,
pub proposal_data_2: ProposalSignedData,
pub proposal_signature_2: Signature,
}
impl Encodable for ProposerSlashing {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.proposer_index);
s.append(&self.proposal_data_1);
s.append(&self.proposal_signature_1);
s.append(&self.proposal_data_2);
s.append(&self.proposal_signature_2);
}
}
impl Decodable for ProposerSlashing {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (proposer_index, i) = <_>::ssz_decode(bytes, i)?;
let (proposal_data_1, i) = <_>::ssz_decode(bytes, i)?;
let (proposal_signature_1, i) = <_>::ssz_decode(bytes, i)?;
let (proposal_data_2, i) = <_>::ssz_decode(bytes, i)?;
let (proposal_signature_2, i) = <_>::ssz_decode(bytes, i)?;
Ok((
ProposerSlashing {
proposer_index,
proposal_data_1,
proposal_signature_1,
proposal_data_2,
proposal_signature_2,
},
i,
))
}
}
impl TreeHash for ProposerSlashing {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.proposer_index.hash_tree_root());
result.append(&mut self.proposal_data_1.hash_tree_root());
result.append(&mut self.proposal_signature_1.hash_tree_root());
result.append(&mut self.proposal_data_2.hash_tree_root());
result.append(&mut self.proposal_signature_2.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for ProposerSlashing {
fn random_for_test(rng: &mut T) -> Self {
Self {
proposer_index: <_>::random_for_test(rng),
proposal_data_1: <_>::random_for_test(rng),
proposal_signature_1: <_>::random_for_test(rng),
proposal_data_2: <_>::random_for_test(rng),
proposal_signature_2: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ProposerSlashing::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ProposerSlashing::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

40
eth2/types/src/readers/block_reader.rs Normal file
View File

@ -0,0 +1,40 @@
use crate::{BeaconBlock, Hash256, Slot};
use std::fmt::Debug;
/// The `BeaconBlockReader` provides interfaces for reading a subset of fields of a `BeaconBlock`.
///
/// The purpose of this trait is to allow reading from either:
/// - a standard `BeaconBlock` struct, or
/// - an SSZ-serialized byte array.
///
/// Note: presently, direct SSZ reading has not been implemented, so this trait exists for
/// "future-proofing".
pub trait BeaconBlockReader: Debug + PartialEq {
fn slot(&self) -> Slot;
fn parent_root(&self) -> Hash256;
fn state_root(&self) -> Hash256;
fn canonical_root(&self) -> Hash256;
fn into_beacon_block(self) -> Option<BeaconBlock>;
}
impl BeaconBlockReader for BeaconBlock {
fn slot(&self) -> Slot {
self.slot
}
fn parent_root(&self) -> Hash256 {
self.parent_root
}
fn state_root(&self) -> Hash256 {
self.state_root
}
fn canonical_root(&self) -> Hash256 {
self.canonical_root()
}
fn into_beacon_block(self) -> Option<BeaconBlock> {
Some(self)
}
}
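
A brief, hypothetical sketch of the intended benefit: downstream code can stay generic over anything implementing the trait, whether backed by a full struct today or raw SSZ bytes later (the helper name is illustrative):

// Hypothetical, illustrative helper: code that only needs a couple of fields
// can accept anything implementing the trait.
#[allow(dead_code)]
fn is_genesis_child<R: BeaconBlockReader>(block: &R, genesis_root: Hash256) -> bool {
    block.parent_root() == genesis_root
}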

5
eth2/types/src/readers/mod.rs Normal file
View File

@ -0,0 +1,5 @@
mod block_reader;
mod state_reader;
pub use self::block_reader::BeaconBlockReader;
pub use self::state_reader::BeaconStateReader;

30
eth2/types/src/readers/state_reader.rs Normal file
View File

@ -0,0 +1,30 @@
use crate::{BeaconState, Hash256, Slot};
use std::fmt::Debug;
/// The `BeaconStateReader` provides interfaces for reading a subset of fields of a `BeaconState`.
///
/// The purpose of this trait is to allow reading from either:
/// - a standard `BeaconState` struct, or
/// - an SSZ-serialized byte array.
///
/// Note: presently, direct SSZ reading has not been implemented, so this trait exists for
/// "future-proofing".
pub trait BeaconStateReader: Debug + PartialEq {
fn slot(&self) -> Slot;
fn canonical_root(&self) -> Hash256;
fn into_beacon_state(self) -> Option<BeaconState>;
}
impl BeaconStateReader for BeaconState {
fn slot(&self) -> Slot {
self.slot
}
fn canonical_root(&self) -> Hash256 {
self.canonical_root()
}
fn into_beacon_state(self) -> Option<BeaconState> {
Some(self)
}
}

86
eth2/types/src/shard_reassignment_record.rs Normal file
View File

@ -0,0 +1,86 @@
use crate::{test_utils::TestRandom, Slot};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct ShardReassignmentRecord {
pub validator_index: u64,
pub shard: u64,
pub slot: Slot,
}
impl Encodable for ShardReassignmentRecord {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.validator_index);
s.append(&self.shard);
s.append(&self.slot);
}
}
impl Decodable for ShardReassignmentRecord {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (validator_index, i) = <_>::ssz_decode(bytes, i)?;
let (shard, i) = <_>::ssz_decode(bytes, i)?;
let (slot, i) = <_>::ssz_decode(bytes, i)?;
Ok((
Self {
validator_index,
shard,
slot,
},
i,
))
}
}
impl TreeHash for ShardReassignmentRecord {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.validator_index.hash_tree_root());
result.append(&mut self.shard.hash_tree_root());
result.append(&mut self.slot.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for ShardReassignmentRecord {
fn random_for_test(rng: &mut T) -> Self {
Self {
validator_index: <_>::random_for_test(rng),
shard: <_>::random_for_test(rng),
slot: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ShardReassignmentRecord::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ShardReassignmentRecord::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

92
eth2/types/src/slashable_attestation.rs Normal file
View File

@ -0,0 +1,92 @@
use crate::{test_utils::TestRandom, AggregateSignature, AttestationData, Bitfield};
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct SlashableAttestation {
pub validator_indices: Vec<u64>,
pub data: AttestationData,
pub custody_bitfield: Bitfield,
pub aggregate_signature: AggregateSignature,
}
impl Encodable for SlashableAttestation {
fn ssz_append(&self, s: &mut SszStream) {
s.append_vec(&self.validator_indices);
s.append(&self.data);
s.append(&self.custody_bitfield);
s.append(&self.aggregate_signature);
}
}
impl Decodable for SlashableAttestation {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (validator_indices, i) = <_>::ssz_decode(bytes, i)?;
let (data, i) = <_>::ssz_decode(bytes, i)?;
let (custody_bitfield, i) = <_>::ssz_decode(bytes, i)?;
let (aggregate_signature, i) = <_>::ssz_decode(bytes, i)?;
Ok((
SlashableAttestation {
validator_indices,
data,
custody_bitfield,
aggregate_signature,
},
i,
))
}
}
impl TreeHash for SlashableAttestation {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.validator_indices.hash_tree_root());
result.append(&mut self.data.hash_tree_root());
result.append(&mut self.custody_bitfield.hash_tree_root());
result.append(&mut self.aggregate_signature.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for SlashableAttestation {
fn random_for_test(rng: &mut T) -> Self {
Self {
validator_indices: <_>::random_for_test(rng),
data: <_>::random_for_test(rng),
custody_bitfield: <_>::random_for_test(rng),
aggregate_signature: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = SlashableAttestation::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = SlashableAttestation::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

94
eth2/types/src/slashable_vote_data.rs Normal file
View File

@ -0,0 +1,94 @@
use super::AttestationData;
use crate::test_utils::TestRandom;
use bls::AggregateSignature;
use rand::RngCore;
use serde_derive::Serialize;
use ssz::{hash, Decodable, DecodeError, Encodable, SszStream, TreeHash};
#[derive(Debug, PartialEq, Clone, Serialize)]
pub struct SlashableVoteData {
pub custody_bit_0_indices: Vec<u32>,
pub custody_bit_1_indices: Vec<u32>,
pub data: AttestationData,
pub aggregate_signature: AggregateSignature,
}
impl Encodable for SlashableVoteData {
fn ssz_append(&self, s: &mut SszStream) {
s.append_vec(&self.custody_bit_0_indices);
s.append_vec(&self.custody_bit_1_indices);
s.append(&self.data);
s.append(&self.aggregate_signature);
}
}
impl Decodable for SlashableVoteData {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (custody_bit_0_indices, i) = <_>::ssz_decode(bytes, i)?;
let (custody_bit_1_indices, i) = <_>::ssz_decode(bytes, i)?;
let (data, i) = <_>::ssz_decode(bytes, i)?;
let (aggregate_signature, i) = <_>::ssz_decode(bytes, i)?;
Ok((
SlashableVoteData {
custody_bit_0_indices,
custody_bit_1_indices,
data,
aggregate_signature,
},
i,
))
}
}
impl TreeHash for SlashableVoteData {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.custody_bit_0_indices.hash_tree_root());
result.append(&mut self.custody_bit_1_indices.hash_tree_root());
result.append(&mut self.data.hash_tree_root());
result.append(&mut self.aggregate_signature.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for SlashableVoteData {
fn random_for_test(rng: &mut T) -> Self {
Self {
custody_bit_0_indices: <_>::random_for_test(rng),
custody_bit_1_indices: <_>::random_for_test(rng),
data: <_>::random_for_test(rng),
aggregate_signature: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = SlashableVoteData::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = SlashableVoteData::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
}

763
eth2/types/src/slot_epoch_height.rs Normal file
View File

@ -0,0 +1,763 @@
/// The `Slot`, `Epoch` and `Height` types are defined as newtypes over `u64` to enforce
/// type-safety between the three concepts.
///
/// `Slot`, `Epoch` and `Height` have implementations which permit conversion, comparison and math
/// operations between each and `u64`, however specifically not between each other.
///
/// All math operations on these types are saturating; they never wrap.
///
/// It would be easy to define `PartialOrd` and other traits generically across all types which
/// implement `Into<u64>`, however this would allow operations between `Slot`s, `Epoch`s and
/// `Height`s which may lead to programming errors not detected by the compiler.
use crate::test_utils::TestRandom;
use rand::RngCore;
use serde_derive::Serialize;
use slog;
use ssz::{hash, ssz_encode, Decodable, DecodeError, Encodable, SszStream, TreeHash};
use std::cmp::{Ord, Ordering};
use std::fmt;
use std::hash::{Hash, Hasher};
use std::iter::Iterator;
use std::ops::{Add, AddAssign, Div, DivAssign, Mul, MulAssign, Rem, Sub, SubAssign};
macro_rules! impl_from_into_u64 {
($main: ident) => {
impl From<u64> for $main {
fn from(n: u64) -> $main {
$main(n)
}
}
impl Into<u64> for $main {
fn into(self) -> u64 {
self.0
}
}
impl $main {
pub fn as_u64(&self) -> u64 {
self.0
}
}
};
}
// need to truncate for some fork-choice algorithms
macro_rules! impl_into_u32 {
($main: ident) => {
impl Into<u32> for $main {
fn into(self) -> u32 {
self.0 as u32
}
}
impl $main {
pub fn as_u32(&self) -> u32 {
self.0 as u32
}
}
};
}
macro_rules! impl_from_into_usize {
($main: ident) => {
impl From<usize> for $main {
fn from(n: usize) -> $main {
$main(n as u64)
}
}
impl Into<usize> for $main {
fn into(self) -> usize {
self.0 as usize
}
}
impl $main {
pub fn as_usize(&self) -> usize {
self.0 as usize
}
}
};
}
macro_rules! impl_math_between {
($main: ident, $other: ident) => {
impl PartialOrd<$other> for $main {
/// Utilizes `partial_cmp` on the underlying `u64`.
fn partial_cmp(&self, other: &$other) -> Option<Ordering> {
Some(self.0.cmp(&(*other).into()))
}
}
impl PartialEq<$other> for $main {
fn eq(&self, other: &$other) -> bool {
let other: u64 = (*other).into();
self.0 == other
}
}
impl Add<$other> for $main {
type Output = $main;
fn add(self, other: $other) -> $main {
$main::from(self.0.saturating_add(other.into()))
}
}
impl AddAssign<$other> for $main {
fn add_assign(&mut self, other: $other) {
self.0 = self.0.saturating_add(other.into());
}
}
impl Sub<$other> for $main {
type Output = $main;
fn sub(self, other: $other) -> $main {
$main::from(self.0.saturating_sub(other.into()))
}
}
impl SubAssign<$other> for $main {
fn sub_assign(&mut self, other: $other) {
self.0 = self.0.saturating_sub(other.into());
}
}
impl Mul<$other> for $main {
type Output = $main;
fn mul(self, rhs: $other) -> $main {
let rhs: u64 = rhs.into();
$main::from(self.0.saturating_mul(rhs))
}
}
impl MulAssign<$other> for $main {
fn mul_assign(&mut self, rhs: $other) {
let rhs: u64 = rhs.into();
self.0 = self.0.saturating_mul(rhs)
}
}
impl Div<$other> for $main {
type Output = $main;
fn div(self, rhs: $other) -> $main {
let rhs: u64 = rhs.into();
if rhs == 0 {
panic!("Cannot divide by zero-valued Slot/Epoch")
}
$main::from(self.0 / rhs)
}
}
impl DivAssign<$other> for $main {
fn div_assign(&mut self, rhs: $other) {
let rhs: u64 = rhs.into();
if rhs == 0 {
panic!("Cannot divide by zero-valued Slot/Epoch")
}
self.0 = self.0 / rhs
}
}
impl Rem<$other> for $main {
type Output = $main;
fn rem(self, modulus: $other) -> $main {
let modulus: u64 = modulus.into();
$main::from(self.0 % modulus)
}
}
};
}
macro_rules! impl_math {
($type: ident) => {
impl $type {
pub fn saturating_sub<T: Into<$type>>(&self, other: T) -> $type {
*self - other.into()
}
pub fn saturating_add<T: Into<$type>>(&self, other: T) -> $type {
*self + other.into()
}
pub fn checked_div<T: Into<$type>>(&self, rhs: T) -> Option<$type> {
let rhs: $type = rhs.into();
if rhs == 0 {
None
} else {
Some(*self / rhs)
}
}
pub fn is_power_of_two(&self) -> bool {
self.0.is_power_of_two()
}
}
impl Ord for $type {
fn cmp(&self, other: &$type) -> Ordering {
let other: u64 = (*other).into();
self.0.cmp(&other)
}
}
};
}
macro_rules! impl_display {
($type: ident) => {
impl fmt::Display for $type {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.0)
}
}
impl slog::Value for $type {
fn serialize(
&self,
record: &slog::Record,
key: slog::Key,
serializer: &mut slog::Serializer,
) -> slog::Result {
self.0.serialize(record, key, serializer)
}
}
};
}
macro_rules! impl_ssz {
($type: ident) => {
impl Encodable for $type {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.0);
}
}
impl Decodable for $type {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (value, i) = <_>::ssz_decode(bytes, i)?;
Ok(($type(value), i))
}
}
impl TreeHash for $type {
fn hash_tree_root(&self) -> Vec<u8> {
let mut result: Vec<u8> = vec![];
result.append(&mut self.0.hash_tree_root());
hash(&result)
}
}
impl<T: RngCore> TestRandom<T> for $type {
fn random_for_test(rng: &mut T) -> Self {
$type::from(u64::random_for_test(rng))
}
}
};
}
macro_rules! impl_hash {
($type: ident) => {
// Implemented to stop clippy lint:
// https://rust-lang.github.io/rust-clippy/master/index.html#derive_hash_xor_eq
impl Hash for $type {
fn hash<H: Hasher>(&self, state: &mut H) {
ssz_encode(self).hash(state)
}
}
};
}
macro_rules! impl_common {
($type: ident) => {
impl_from_into_u64!($type);
impl_from_into_usize!($type);
impl_math_between!($type, $type);
impl_math_between!($type, u64);
impl_math!($type);
impl_display!($type);
impl_ssz!($type);
impl_hash!($type);
};
}
/// Beacon block slot.
#[derive(Eq, Debug, Clone, Copy, Default, Serialize)]
pub struct Slot(u64);
/// Beacon block height, effectively `Slot - GENESIS_SLOT`.
#[derive(Eq, Debug, Clone, Copy, Default, Serialize)]
pub struct Height(u64);
/// Beacon Epoch, effectively `Slot / EPOCH_LENGTH`.
#[derive(Eq, Debug, Clone, Copy, Default, Serialize)]
pub struct Epoch(u64);
impl_common!(Slot);
impl_common!(Height);
impl_into_u32!(Height); // height can be converted to u32
impl_common!(Epoch);
impl Slot {
pub fn new(slot: u64) -> Slot {
Slot(slot)
}
pub fn epoch(self, epoch_length: u64) -> Epoch {
Epoch::from(self.0 / epoch_length)
}
pub fn height(self, genesis_slot: Slot) -> Height {
Height::from(self.0.saturating_sub(genesis_slot.as_u64()))
}
pub fn max_value() -> Slot {
Slot(u64::max_value())
}
}
impl Height {
pub fn new(slot: u64) -> Height {
Height(slot)
}
pub fn slot(self, genesis_slot: Slot) -> Slot {
Slot::from(self.0.saturating_add(genesis_slot.as_u64()))
}
pub fn epoch(self, genesis_slot: u64, epoch_length: u64) -> Epoch {
Epoch::from(self.0.saturating_add(genesis_slot) / epoch_length)
}
pub fn max_value() -> Height {
Height(u64::max_value())
}
}
impl Epoch {
pub fn new(slot: u64) -> Epoch {
Epoch(slot)
}
pub fn max_value() -> Epoch {
Epoch(u64::max_value())
}
pub fn start_slot(self, epoch_length: u64) -> Slot {
Slot::from(self.0.saturating_mul(epoch_length))
}
pub fn end_slot(self, epoch_length: u64) -> Slot {
Slot::from(
self.0
.saturating_add(1)
.saturating_mul(epoch_length)
.saturating_sub(1),
)
}
pub fn slot_iter(&self, epoch_length: u64) -> SlotIter {
SlotIter {
current: self.start_slot(epoch_length),
epoch: self,
epoch_length,
}
}
}
pub struct SlotIter<'a> {
current: Slot,
epoch: &'a Epoch,
epoch_length: u64,
}
impl<'a> Iterator for SlotIter<'a> {
type Item = Slot;
    fn next(&mut self) -> Option<Slot> {
        // Yield every slot in the epoch, from `start_slot` through the
        // (inclusive) `end_slot`.
        if self.current > self.epoch.end_slot(self.epoch_length) {
            None
        } else {
            let previous = self.current;
            self.current += 1;
            Some(previous)
        }
    }
}
#[cfg(test)]
mod tests {
use super::*;
macro_rules! new_tests {
($type: ident) => {
#[test]
fn new() {
assert_eq!($type(0), $type::new(0));
assert_eq!($type(3), $type::new(3));
assert_eq!($type(u64::max_value()), $type::new(u64::max_value()));
}
};
}
macro_rules! from_into_tests {
($type: ident, $other: ident) => {
#[test]
fn into() {
let x: $other = $type(0).into();
assert_eq!(x, 0);
let x: $other = $type(3).into();
assert_eq!(x, 3);
let x: $other = $type(u64::max_value()).into();
// Note: this will fail on 32 bit systems. This is expected as we don't have a proper
// 32-bit system strategy in place.
assert_eq!(x, $other::max_value());
}
#[test]
fn from() {
assert_eq!($type(0), $type::from(0_u64));
assert_eq!($type(3), $type::from(3_u64));
assert_eq!($type(u64::max_value()), $type::from($other::max_value()));
}
};
}
macro_rules! math_between_tests {
($type: ident, $other: ident) => {
#[test]
fn partial_ord() {
let assert_partial_ord = |a: u64, partial_ord: Ordering, b: u64| {
let other: $other = $type(b).into();
assert_eq!($type(a).partial_cmp(&other), Some(partial_ord));
};
assert_partial_ord(1, Ordering::Less, 2);
assert_partial_ord(2, Ordering::Greater, 1);
assert_partial_ord(0, Ordering::Less, u64::max_value());
assert_partial_ord(u64::max_value(), Ordering::Greater, 0);
}
#[test]
fn partial_eq() {
let assert_partial_eq = |a: u64, b: u64, is_equal: bool| {
let other: $other = $type(b).into();
assert_eq!($type(a).eq(&other), is_equal);
};
assert_partial_eq(0, 0, true);
assert_partial_eq(0, 1, false);
assert_partial_eq(1, 0, false);
assert_partial_eq(1, 1, true);
assert_partial_eq(u64::max_value(), u64::max_value(), true);
assert_partial_eq(0, u64::max_value(), false);
assert_partial_eq(u64::max_value(), 0, false);
}
#[test]
fn add_and_add_assign() {
let assert_add = |a: u64, b: u64, result: u64| {
let other: $other = $type(b).into();
assert_eq!($type(a) + other, $type(result));
let mut add_assigned = $type(a);
add_assigned += other;
assert_eq!(add_assigned, $type(result));
};
assert_add(0, 1, 1);
assert_add(1, 0, 1);
assert_add(1, 2, 3);
assert_add(2, 1, 3);
assert_add(7, 7, 14);
// Addition should be saturating.
assert_add(u64::max_value(), 1, u64::max_value());
assert_add(u64::max_value(), u64::max_value(), u64::max_value());
}
#[test]
fn sub_and_sub_assign() {
let assert_sub = |a: u64, b: u64, result: u64| {
let other: $other = $type(b).into();
assert_eq!($type(a) - other, $type(result));
let mut sub_assigned = $type(a);
sub_assigned -= other;
assert_eq!(sub_assigned, $type(result));
};
assert_sub(1, 0, 1);
assert_sub(2, 1, 1);
assert_sub(14, 7, 7);
assert_sub(u64::max_value(), 1, u64::max_value() - 1);
assert_sub(u64::max_value(), u64::max_value(), 0);
// Subtraction should be saturating
assert_sub(0, 1, 0);
assert_sub(1, 2, 0);
}
#[test]
fn mul_and_mul_assign() {
let assert_mul = |a: u64, b: u64, result: u64| {
let other: $other = $type(b).into();
assert_eq!($type(a) * other, $type(result));
let mut mul_assigned = $type(a);
mul_assigned *= other;
assert_eq!(mul_assigned, $type(result));
};
assert_mul(2, 2, 4);
assert_mul(1, 2, 2);
assert_mul(0, 2, 0);
// Multiplication should be saturating.
assert_mul(u64::max_value(), 2, u64::max_value());
}
#[test]
fn div_and_div_assign() {
let assert_div = |a: u64, b: u64, result: u64| {
let other: $other = $type(b).into();
assert_eq!($type(a) / other, $type(result));
let mut div_assigned = $type(a);
div_assigned /= other;
assert_eq!(div_assigned, $type(result));
};
assert_div(0, 2, 0);
assert_div(2, 2, 1);
assert_div(100, 50, 2);
assert_div(128, 2, 64);
assert_div(u64::max_value(), 2, 2_u64.pow(63) - 1);
}
#[test]
#[should_panic]
fn div_panics_with_divide_by_zero() {
let other: $other = $type(0).into();
let _ = $type(2) / other;
}
#[test]
#[should_panic]
fn div_assign_panics_with_divide_by_zero() {
let other: $other = $type(0).into();
let mut assigned = $type(2);
assigned /= other;
}
#[test]
fn rem() {
let assert_rem = |a: u64, b: u64, result: u64| {
let other: $other = $type(b).into();
assert_eq!($type(a) % other, $type(result));
};
assert_rem(3, 2, 1);
assert_rem(40, 2, 0);
assert_rem(10, 100, 10);
assert_rem(302042, 3293, 2379);
}
};
}
macro_rules! math_tests {
($type: ident) => {
#[test]
fn saturating_sub() {
let assert_saturating_sub = |a: u64, b: u64, result: u64| {
assert_eq!($type(a).saturating_sub($type(b)), $type(result));
};
assert_saturating_sub(1, 0, 1);
assert_saturating_sub(2, 1, 1);
assert_saturating_sub(14, 7, 7);
assert_saturating_sub(u64::max_value(), 1, u64::max_value() - 1);
assert_saturating_sub(u64::max_value(), u64::max_value(), 0);
// Subtraction should be saturating
assert_saturating_sub(0, 1, 0);
assert_saturating_sub(1, 2, 0);
}
#[test]
fn saturating_add() {
let assert_saturating_add = |a: u64, b: u64, result: u64| {
assert_eq!($type(a).saturating_add($type(b)), $type(result));
};
assert_saturating_add(0, 1, 1);
assert_saturating_add(1, 0, 1);
assert_saturating_add(1, 2, 3);
assert_saturating_add(2, 1, 3);
assert_saturating_add(7, 7, 14);
// Addition should be saturating.
assert_saturating_add(u64::max_value(), 1, u64::max_value());
assert_saturating_add(u64::max_value(), u64::max_value(), u64::max_value());
}
#[test]
fn checked_div() {
let assert_checked_div = |a: u64, b: u64, result: Option<u64>| {
let division_result_as_u64 = match $type(a).checked_div($type(b)) {
None => None,
Some(val) => Some(val.as_u64()),
};
assert_eq!(division_result_as_u64, result);
};
assert_checked_div(0, 2, Some(0));
assert_checked_div(2, 2, Some(1));
assert_checked_div(100, 50, Some(2));
assert_checked_div(128, 2, Some(64));
assert_checked_div(u64::max_value(), 2, Some(2_u64.pow(63) - 1));
assert_checked_div(2, 0, None);
assert_checked_div(0, 0, None);
assert_checked_div(u64::max_value(), 0, None);
}
#[test]
fn is_power_of_two() {
let assert_is_power_of_two = |a: u64, result: bool| {
assert_eq!(
$type(a).is_power_of_two(),
result,
"{}.is_power_of_two() != {}",
a,
result
);
};
assert_is_power_of_two(0, false);
assert_is_power_of_two(1, true);
assert_is_power_of_two(2, true);
assert_is_power_of_two(3, false);
assert_is_power_of_two(4, true);
assert_is_power_of_two(2_u64.pow(4), true);
assert_is_power_of_two(u64::max_value(), false);
}
#[test]
fn ord() {
let assert_ord = |a: u64, ord: Ordering, b: u64| {
assert_eq!($type(a).cmp(&$type(b)), ord);
};
assert_ord(1, Ordering::Less, 2);
assert_ord(2, Ordering::Greater, 1);
assert_ord(0, Ordering::Less, u64::max_value());
assert_ord(u64::max_value(), Ordering::Greater, 0);
}
};
}
macro_rules! ssz_tests {
($type: ident) => {
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = $type::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = $type::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
#[test]
pub fn test_hash_tree_root() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = $type::random_for_test(&mut rng);
let result = original.hash_tree_root();
assert_eq!(result.len(), 32);
// TODO: Add further tests
// https://github.com/sigp/lighthouse/issues/170
}
};
}
macro_rules! all_tests {
($type: ident) => {
new_tests!($type);
math_between_tests!($type, $type);
math_tests!($type);
ssz_tests!($type);
mod u64_tests {
use super::*;
from_into_tests!($type, u64);
math_between_tests!($type, u64);
#[test]
pub fn as_64() {
let x = $type(0).as_u64();
assert_eq!(x, 0);
let x = $type(3).as_u64();
assert_eq!(x, 3);
let x = $type(u64::max_value()).as_u64();
assert_eq!(x, u64::max_value());
}
}
mod usize_tests {
use super::*;
from_into_tests!($type, usize);
#[test]
pub fn as_usize() {
let x = $type(0).as_usize();
assert_eq!(x, 0);
let x = $type(3).as_usize();
assert_eq!(x, 3);
let x = $type(u64::max_value()).as_usize();
assert_eq!(x, usize::max_value());
}
}
};
}
#[cfg(test)]
mod slot_tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
all_tests!(Slot);
}
#[cfg(test)]
mod epoch_tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use ssz::ssz_encode;
all_tests!(Epoch);
}
}
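
To make the newtype guarantees described at the top of this file concrete, a short sketch; the `epoch_length` of 64 is assumed purely for illustration (module and test names are hypothetical):

#[cfg(test)]
mod usage_examples {
    use super::*;

    #[test]
    fn slot_epoch_conversions() {
        let slot = Slot::new(130);
        // 130 / 64 => epoch 2; conversion between types is always explicit.
        let epoch = slot.epoch(64);
        assert_eq!(epoch, Epoch::new(2));
        assert_eq!(epoch.start_slot(64), Slot::new(128));
        assert_eq!(epoch.end_slot(64), Slot::new(191));

        // Math is saturating: subtracting past zero clamps at zero.
        assert_eq!(Slot::new(1) - 2u64, Slot::new(0));

        // `Slot::new(1) + Epoch::new(1)` would not compile: math is only
        // defined between a type and itself or `u64`, by design.
    }
}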

111
eth2/types/src/spec/foundation.rs Normal file
View File

@ -0,0 +1,111 @@
use crate::{Address, ChainSpec, Epoch, Hash256, Signature, Slot};
const GWEI: u64 = 1_000_000_000;
impl ChainSpec {
/// Returns a `ChainSpec` compatible with the specification from the Ethereum Foundation.
///
/// Of course, the actual Foundation specs are unknown at this point, so these are just a
/// rough estimate.
///
/// Spec v0.2.0
pub fn foundation() -> Self {
let genesis_slot = Slot::new(2_u64.pow(19));
let epoch_length = 64;
let genesis_epoch = genesis_slot.epoch(epoch_length);
Self {
/*
* Misc
*/
shard_count: 1_024,
target_committee_size: 128,
max_balance_churn_quotient: 32,
beacon_chain_shard_number: u64::max_value(),
max_indices_per_slashable_vote: 4_096,
max_withdrawals_per_epoch: 4,
shuffle_round_count: 90,
/*
* Deposit contract
*/
deposit_contract_address: Address::zero(),
deposit_contract_tree_depth: 32,
/*
* Gwei values
*/
min_deposit_amount: u64::pow(2, 0) * GWEI,
max_deposit_amount: u64::pow(2, 5) * GWEI,
fork_choice_balance_increment: u64::pow(2, 0) * GWEI,
ejection_balance: u64::pow(2, 4) * GWEI,
/*
* Initial Values
*/
genesis_fork_version: 0,
genesis_slot,
genesis_epoch,
genesis_start_shard: 0,
far_future_epoch: Epoch::new(u64::max_value()),
zero_hash: Hash256::zero(),
empty_signature: Signature::empty_signature(),
bls_withdrawal_prefix_byte: 0,
/*
* Time parameters
*/
slot_duration: 6,
min_attestation_inclusion_delay: 4,
epoch_length,
seed_lookahead: Epoch::new(1),
entry_exit_delay: 4,
eth1_data_voting_period: 16,
min_validator_withdrawal_epochs: Epoch::new(256),
/*
* State list lengths
*/
latest_block_roots_length: 8_192,
latest_randao_mixes_length: 8_192,
latest_index_roots_length: 8_192,
latest_penalized_exit_length: 8_192,
/*
* Reward and penalty quotients
*/
base_reward_quotient: 32,
whistleblower_reward_quotient: 512,
includer_reward_quotient: 8,
inactivity_penalty_quotient: 16_777_216,
/*
* Max operations per block
*/
max_proposer_slashings: 16,
max_attester_slashings: 1,
max_attestations: 128,
max_deposits: 16,
max_exits: 16,
/*
* Signature domains
*/
domain_deposit: 0,
domain_attestation: 1,
domain_proposal: 2,
domain_exit: 3,
domain_randao: 4,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_foundation_spec_can_be_constructed() {
let _ = ChainSpec::foundation();
}
}
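
A short sketch of consuming the spec, showing that the pre-derived `genesis_epoch` agrees with recomputing it from `genesis_slot` and `epoch_length` (module and test names are illustrative):

#[cfg(test)]
mod usage_example {
    use super::*;

    #[test]
    fn genesis_epoch_is_consistent() {
        let spec = ChainSpec::foundation();
        // The pre-computed genesis epoch matches deriving it from the
        // genesis slot and the epoch length.
        assert_eq!(
            spec.genesis_epoch,
            spec.genesis_slot.epoch(spec.epoch_length)
        );
        assert_eq!(spec.epoch_length, 64);
    }
}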

92
eth2/types/src/spec/mod.rs Normal file
View File

@ -0,0 +1,92 @@
mod foundation;
use crate::{Address, Epoch, Hash256, Slot};
use bls::Signature;
/// Holds all the "constants" for a BeaconChain.
///
/// Spec v0.2.0
#[derive(PartialEq, Debug, Clone)]
pub struct ChainSpec {
/*
* Misc
*/
pub shard_count: u64,
pub target_committee_size: u64,
pub max_balance_churn_quotient: u64,
pub beacon_chain_shard_number: u64,
pub max_indices_per_slashable_vote: u64,
pub max_withdrawals_per_epoch: u64,
pub shuffle_round_count: u64,
/*
* Deposit contract
*/
pub deposit_contract_address: Address,
pub deposit_contract_tree_depth: u64,
/*
* Gwei values
*/
pub min_deposit_amount: u64,
pub max_deposit_amount: u64,
pub fork_choice_balance_increment: u64,
pub ejection_balance: u64,
/*
* Initial Values
*/
pub genesis_fork_version: u64,
pub genesis_slot: Slot,
pub genesis_epoch: Epoch,
pub genesis_start_shard: u64,
pub far_future_epoch: Epoch,
pub zero_hash: Hash256,
pub empty_signature: Signature,
pub bls_withdrawal_prefix_byte: u8,
/*
* Time parameters
*/
pub slot_duration: u64,
pub min_attestation_inclusion_delay: u64,
pub epoch_length: u64,
pub seed_lookahead: Epoch,
pub entry_exit_delay: u64,
pub eth1_data_voting_period: u64,
pub min_validator_withdrawal_epochs: Epoch,
/*
* State list lengths
*/
pub latest_block_roots_length: usize,
pub latest_randao_mixes_length: usize,
pub latest_index_roots_length: usize,
pub latest_penalized_exit_length: usize,
/*
* Reward and penalty quotients
*/
pub base_reward_quotient: u64,
pub whistleblower_reward_quotient: u64,
pub includer_reward_quotient: u64,
pub inactivity_penalty_quotient: u64,
/*
* Max operations per block
*/
pub max_proposer_slashings: u64,
pub max_attester_slashings: u64,
pub max_attestations: u64,
pub max_deposits: u64,
pub max_exits: u64,
/*
* Signature domains
*/
pub domain_deposit: u64,
pub domain_attestation: u64,
pub domain_proposal: u64,
pub domain_exit: u64,
pub domain_randao: u64,
}

11
eth2/types/src/test_utils/address.rs Normal file
View File

@ -0,0 +1,11 @@
use super::TestRandom;
use crate::Address;
use rand::RngCore;
impl<T: RngCore> TestRandom<T> for Address {
fn random_for_test(rng: &mut T) -> Self {
let mut key_bytes = vec![0; 20];
rng.fill_bytes(&mut key_bytes);
Address::from(&key_bytes[..])
}
}
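
Seeded generators keep these "random" values reproducible across test runs; a hedged sketch (module and test names are illustrative):

#[cfg(test)]
mod example {
    use super::*;
    use crate::test_utils::{SeedableRng, XorShiftRng};

    #[test]
    fn seeded_addresses_are_deterministic() {
        // Identical seeds must yield identical addresses.
        let mut a = XorShiftRng::from_seed([42; 16]);
        let mut b = XorShiftRng::from_seed([42; 16]);
        assert_eq!(
            Address::random_for_test(&mut a),
            Address::random_for_test(&mut b)
        );
    }
}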

Some files were not shown because too many files have changed in this diff.