Merge branch 'master' into grpc-rs

This commit is contained in:
Paul Hauner 2019-01-22 12:58:17 +11:00
commit 80e37f8d49
No known key found for this signature in database
GPG Key ID: D362883A9218FCC6
21 changed files with 1005 additions and 471 deletions


@ -7,6 +7,7 @@ before_install:
- sudo chown $USER /usr/local/bin/protoc
- sudo chown -R $USER /usr/local/include/google
script:
- cargo fmt --all -- --check
- cargo build --verbose --all
- cargo test --verbose --all
rust:
@ -17,3 +18,5 @@ matrix:
allow_failures:
- rust: nightly
fast_finish: true
install:
- rustup component add rustfmt

README.md

@ -7,22 +7,10 @@ Chain, maintained by Sigma Prime.
The "Serenity" project is also known as "Ethereum 2.0" or "Shasper".
## Introduction
This readme is split into two major sections:
- [Lighthouse Client](#lighthouse-client): information about this
implementation.
- [What is Ethereum Serenity](#what-is-ethereum-serenity): an introduction to Ethereum Serenity.
If you'd like some background on Sigma Prime, please see the [Lighthouse Update
\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
[company website](https://sigmaprime.io).
## Lighthouse Client
Lighthouse is an open-source Ethereum Serenity client that is currently under
development. Designed as a Serenity-only client, Lighthouse will not
re-implement the existing proof-of-work protocol. Maintaining a forward-focus
on Ethereum Serenity ensures that Lighthouse avoids reproducing the high-quality
work already undertaken by existing projects. As such, Lighthouse will connect
@ -31,14 +19,15 @@ to existing clients, such as
[Parity-Ethereum](https://github.com/paritytech/parity-ethereum), via RPC to enable
present-Ethereum functionality.
### Goals
### Further Reading
The purpose of this project is to further research and development towards a
secure, efficient, and decentralized Ethereum protocol, facilitated by a new
open-source Ethereum Serenity client.
- [About Lighthouse](docs/lighthouse.md): Goals, Ideology and Ethos surrounding
this implementation.
- [What is Ethereum Serenity](docs/serenity.md): an introduction to Ethereum Serenity.
In addition to implementing a new client, the project seeks to maintain and
improve the Ethereum protocol wherever possible.
If you'd like some background on Sigma Prime, please see the [Lighthouse Update
\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
[company website](https://sigmaprime.io).
### Components
@ -57,12 +46,11 @@ by the team:
is more resource intensive than proof-of-work. As such, clients need to
ensure that bad blocks can be rejected as efficiently as possible. At
present, blocks having 10 million ETH staked can be processed in 0.006
seconds, and invalid blocks are rejected even more quickly. See
[issue #103](https://github.com/ethereum/beacon_chain/issues/103) on
[ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
- **P2P networking**: Serenity will likely use the [libp2p
framework](https://libp2p.io/). Lighthouse aims to work alongside
framework](https://libp2p.io/). Lighthouse is working alongside
[Parity](https://www.parity.io/) to ensure
[libp2p-rust](https://github.com/libp2p/rust-libp2p) is fit-for-purpose.
- **Validator duties**: The project involves development of "validator
@ -77,9 +65,10 @@ implementation](https://github.com/sigp/lighthouse/tree/master/beacon_chain/util
and this
[research](https://github.com/sigp/serialization_sandbox/blob/report/report/serialization_report.md)
on serialization formats for more information.
- **Casper FFG fork-choice**: The [Casper
FFG](https://arxiv.org/abs/1710.09437) fork-choice rules allow the chain to
select a canonical chain in the case of a fork.
- **Fork-choice**: The current fork choice rule is
[*LMD Ghost*](https://vitalik.ca/general/2018/12/05/cbc_casper.html#lmd-ghost),
which effectively takes the latest messages and forms the canonical chain using
the [GHOST](https://eprint.iacr.org/2013/881.pdf) mechanism (a minimal sketch
follows this list).
- **Efficient state transition logic**: State transition logic governs
updates to the validator set as validators log in/out, penalizes/rewards
validators, rotates validators across shards, and implements other core tasks.
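To make the *LMD Ghost* rule above concrete, the following is a minimal, hedged
sketch. It is not the Lighthouse implementation: the block-tree and attestation
representations are simplified assumptions, and votes are counted equally rather
than weighted by validator balance.

```rust
use std::collections::HashMap;

type BlockHash = [u8; 32];

/// Greedy LMD GHOST descent: starting from `start` (e.g. the last justified
/// block), repeatedly move to the child whose subtree is supported by the most
/// "latest messages" (one latest attestation per validator), until a leaf is
/// reached.
fn lmd_ghost_head(
    start: BlockHash,
    children: &HashMap<BlockHash, Vec<BlockHash>>, // block -> known child blocks
    latest_messages: &HashMap<u64, BlockHash>,     // validator index -> latest attested block
    is_ancestor: impl Fn(&BlockHash, &BlockHash) -> bool, // is `a` an ancestor of (or equal to) `b`?
) -> BlockHash {
    let mut head = start;
    loop {
        match children.get(&head) {
            Some(kids) if !kids.is_empty() => {
                // Pick the child whose subtree contains the most latest messages.
                head = *kids
                    .iter()
                    .max_by_key(|child| {
                        latest_messages
                            .values()
                            .filter(|target| is_ancestor(*child, *target))
                            .count()
                    })
                    .expect("`kids` is non-empty");
            }
            // No known children: `head` is the tip of the canonical chain.
            _ => return head,
        }
    }
}
```

A production rule would likely also weight each message by the attesting
validator's balance and define a deterministic tie-break; the sketch only shows
the greedy descent.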
@ -90,31 +79,15 @@ In addition to these components we are also working on database schemas, RPC
frameworks, specification development, database optimizations (e.g.,
bloom-filters), and tons of other interesting stuff (at least we think so).
### Contributing
### Directory Structure
**Lighthouse welcomes contributors with open arms.**
Here we provide an overview of the directory structure:
Layer-1 infrastructure is a critical component for the ecosystem and relies
heavily on contributions from the community. Building Ethereum Serenity is a huge
task and we refuse to conduct an inappropriate ICO or charge licensing fees.
Instead, we fund development through grants and support from Sigma Prime.
If you would like to learn more about Ethereum Serenity and/or
[Rust](https://www.rust-lang.org/), we are more than happy to on-board you
and assign you some tasks. We aim to be as accepting and understanding as
possible; we are more than happy to up-skill contributors in exchange for their
assistance with the project.
Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're
always looking for the best way to implement things and welcome all
respectful criticisms.
If you'd like to contribute, try having a look through the [open
issues](https://github.com/sigp/lighthouse/issues) (tip: look for the [good
first
issue](https://github.com/sigp/lighthouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse) channel. We need
your support!
- `beacon_chain/`: contains logic derived directly from the specification.
E.g., shuffling algorithms, state transition logic and structs, block
validation, BLS crypto, etc.
- `lighthouse/`: contains logic specific to this client implementation. E.g.,
CLI parsing, RPC end-points, databases, etc.
### Running
@ -152,123 +125,35 @@ A few basic steps are needed to get set up:
Lighthouse presently runs on Rust `stable`; however, benchmarks currently require the
`nightly` version.
### Engineering Ethos
### Contributing
Lighthouse aims to produce many small easily-tested components, each separated
into individual crates wherever possible.
**Lighthouse welcomes contributors with open arms.**
Generally, tests can be kept in the same file, as is typical in Rust.
Integration tests should be placed in the `tests` directory in the crate's
root. Particularly large (line-count) tests should be placed into a separate
file.
If you would like to learn more about Ethereum Serenity and/or
[Rust](https://www.rust-lang.org/), we are more than happy to on-board you
and assign you some tasks. We aim to be as accepting and understanding as
possible; we are more than happy to up-skill contributors in exchange for their
assistance with the project.
A function is not considered complete until a test exists for it. We produce
tests to protect against regression (accidentally breaking things) and to
provide examples that help readers of the code base understand how functions
should (or should not) be used.
Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're
always looking for the best way to implement things and welcome all
respectful criticisms.
Each pull request is to be reviewed by at least one "core developer" (i.e.,
someone with write-access to the repository). This helps to ensure bugs are
detected, consistency is maintained, and responsibility for errors is dispersed.
If you are looking to contribute, please head to our
[onboarding documentation](https://github.com/sigp/lighthouse/blob/master/docs/onboarding.md).
Discussion must be respectful and intellectual. Have fun and make jokes, but
always respect the limits of other people.
### Directory Structure
Here we provide an overview of the directory structure:
- `/beacon_chain`: contains logic derived directly from the specification.
E.g., shuffling algorithms, state transition logic and structs, block
validation, BLS crypto, etc.
- `/lighthouse`: contains logic specific to this client implementation. E.g.,
CLI parsing, RPC end-points, databases, etc.
If you'd like to contribute, try having a look through the [open
issues](https://github.com/sigp/lighthouse/issues) (tip: look for the [good
first
issue](https://github.com/sigp/lighthouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse) channel. We need
your support!
## Contact
The best place for discussion is the [sigp/lighthouse gitter](https://gitter.im/sigp/lighthouse).
Ping @paulhauner or @AgeManning to get the quickest response.
# What is Ethereum Serenity
Ethereum Serenity refers to a new blockchain system currently under development by
the Ethereum Foundation and the Ethereum community. The Serenity blockchain
consists of 1,025 proof-of-stake blockchains. This includes the "beacon chain"
and 1,024 "shard chains".
Ethereum Serenity is also known as "Ethereum 2.0" and "Shasper". We prefer
Serenity as it more accurately reflects the established Ethereum roadmap (plus
we think it's a nice name).
## Beacon Chain
The concept of a beacon chain differs from existing blockchains, such as
Bitcoin and Ethereum, in that it doesn't process transactions per se. Instead,
it maintains a set of bonded (staked) validators and coordinates these to
provide services to a static set of *sub-blockchains* (i.e. shards). Each of
these shard blockchains processes normal transactions (e.g. "Transfer 5 ETH
from A to B") in parallel whilst deferring consensus mechanisms to the beacon
chain.
Major services provided by the beacon chain to its shards include the following:
- A source of entropy, likely using a [RANDAO + VDF
scheme](https://ethresear.ch/t/minimal-vdf-randomness-beacon/3566) (a
commit-reveal sketch follows this list).
- Validator management, including:
- Inducting and ejecting validators.
- Assigning randomly-shuffled subsets of validators to particular shards.
- Penalizing and rewarding validators.
- Proof-of-stake consensus for shard chain blocks.
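As a rough illustration of the RANDAO idea referenced in the entropy item
above, here is a hedged commit-reveal sketch. It omits the VDF entirely, uses
the `sha2` crate purely for illustration, and is not the spec's mixing rule;
the names echo the `randao_commitment`/`randao_mix` fields appearing elsewhere
in this diff.

```rust
use sha2::{Digest, Sha256};

/// A validator pre-commits to `sha256(secret)`; revealing `secret` later lets
/// everyone verify the commitment and mix the reveal into the shared seed.
fn verify_and_mix(
    randao_commitment: [u8; 32],
    randao_reveal: [u8; 32],
    randao_mix: &mut [u8; 32],
) -> bool {
    let hashed: [u8; 32] = Sha256::digest(&randao_reveal).into();
    if hashed != randao_commitment {
        // The reveal does not match the earlier commitment; reject it.
        return false;
    }
    // Fold the reveal into the running mix (XOR is used here for simplicity).
    for (mix_byte, reveal_byte) in randao_mix.iter_mut().zip(randao_reveal.iter()) {
        *mix_byte ^= reveal_byte;
    }
    true
}
```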
## Shard Chains
Shards are analogous to CPU cores - they're a resource where transactions can
execute in series (one-after-another). Presently, Ethereum is single-core and
can only _fully_ process one transaction at a time. Sharding allows processing
of multiple transactions simultaneously, greatly increasing the per-second
transaction capacity of Ethereum.
Each shard uses a proof-of-stake consensus mechanism and shares its validators
(stakers) with other shards. The beacon chain rotates validators
pseudo-randomly between different shards. Shards will likely be the basis of
layer-2 transaction processing schemes; however, that is out of scope for this
discussion.
## The Proof-of-Work Chain
The present-Ethereum proof-of-work (PoW) chain will host a smart contract that
enables accounts to deposit 32 ETH, a BLS public key, and some [other
parameters](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md#pow-chain-changes),
allowing them to become beacon chain validators. Each beacon chain will
reference a PoW block hash allowing PoW clients to use the beacon chain as a
source of [Casper FFG finality](https://arxiv.org/abs/1710.09437), if desired.
It is a requirement that ETH can move freely between shard chains, as well as between
Serenity and present-Ethereum blockchains. The exact mechanics of these transfers remain
an active topic of research and their details are yet to be confirmed.
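For illustration only, the registration data described above could be modelled
roughly as below. The field names are borrowed from the `DepositInput`/`DepositData`
types appearing elsewhere in this diff; the byte lengths and the
`ValidatorRegistration` name are assumptions, and this is not the deposit
contract's ABI.

```rust
/// Hypothetical model of a validator registration deposit (sketch only).
pub struct ValidatorRegistration {
    pub pubkey: Vec<u8>,                  // BLS public key
    pub withdrawal_credentials: [u8; 32],
    pub randao_commitment: [u8; 32],
    pub custody_commitment: [u8; 32],
    pub proof_of_possession: Vec<u8>,     // BLS signature over the deposit input
    pub value: u64,                       // deposit amount, in gwei
}

/// 32 ETH expressed in gwei (1 ETH = 10^9 gwei).
pub const FULL_DEPOSIT_GWEI: u64 = 32 * 1_000_000_000;
```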
## Ethereum Serenity Progress
Ethereum Serenity is not fully specified and a working implementation does not yet
exist. Some teams have demos available which indicate progress, but do not
constitute a complete product. We look forward to providing user functionality
once we are ready to provide a minimum-viable user experience.
The work-in-progress Serenity specification lives
[here](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md)
in the [ethereum/eth2.0-specs](https://github.com/ethereum/eth2.0-specs)
repository. The spec is still in a draft phase; however, several teams are basing
their Serenity implementations upon it while the Ethereum Foundation research team
continues to fill in the gaps. There is active discussion about the specification in the
[ethereum/sharding](https://gitter.im/ethereum/sharding) gitter channel. A
proof-of-concept implementation in Python is available at
[ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
Presently, the specification focuses almost exclusively on the beacon chain,
as it is the focus of current development efforts. Progress on shard chain
specification will soon follow.
# Donations
If you support the cause, we could certainly use donations to help fund development:


@ -1,43 +1,32 @@
use bls::{Signature, BLS_AGG_SIG_BYTE_SIZE};
use spec::ChainSpec;
use ssz::{encode::encode_length, Decodable, LENGTH_BYTES};
use types::{BeaconBlock, BeaconBlockBody, Hash256};
/// Generate a genesis BeaconBlock.
pub fn genesis_beacon_block(state_root: Hash256, spec: &ChainSpec) -> BeaconBlock {
BeaconBlock {
slot: spec.initial_slot_number,
slot: spec.genesis_slot_number,
parent_root: spec.zero_hash,
state_root,
randao_reveal: spec.zero_hash,
candidate_pow_receipt_root: spec.zero_hash,
signature: genesis_signature(),
signature: spec.empty_signature.clone(),
body: BeaconBlockBody {
proposer_slashings: vec![],
casper_slashings: vec![],
attestations: vec![],
custody_reseeds: vec![],
custody_challenges: vec![],
custody_responses: vec![],
deposits: vec![],
exits: vec![],
},
}
}
fn genesis_signature() -> Signature {
let mut bytes = encode_length(BLS_AGG_SIG_BYTE_SIZE, LENGTH_BYTES);
bytes.append(&mut vec![0; BLS_AGG_SIG_BYTE_SIZE]);
let (signature, _) = match Signature::ssz_decode(&bytes, 0) {
Ok(sig) => sig,
Err(_) => unreachable!(),
};
signature
}
#[cfg(test)]
mod tests {
use super::*;
// TODO: enhance these tests.
// https://github.com/sigp/lighthouse/issues/117
use bls::Signature;
#[test]
fn test_genesis() {
@ -47,4 +36,54 @@ mod tests {
// This only checks that the function runs without panic.
genesis_beacon_block(state_root, &spec);
}
#[test]
fn test_zero_items() {
let spec = ChainSpec::foundation();
let state_root = Hash256::zero();
let genesis_block = genesis_beacon_block(state_root, &spec);
assert!(genesis_block.slot == 0);
assert!(genesis_block.parent_root.is_zero());
assert!(genesis_block.randao_reveal.is_zero());
assert!(genesis_block.candidate_pow_receipt_root.is_zero()); // aka deposit_root
}
#[test]
fn test_beacon_body() {
let spec = ChainSpec::foundation();
let state_root = Hash256::zero();
let genesis_block = genesis_beacon_block(state_root, &spec);
// Custody items are not implemented until Phase 1, so tests will be added later
assert!(genesis_block.body.proposer_slashings.is_empty());
assert!(genesis_block.body.casper_slashings.is_empty());
assert!(genesis_block.body.attestations.is_empty());
assert!(genesis_block.body.deposits.is_empty());
assert!(genesis_block.body.exits.is_empty());
}
#[test]
fn test_signature() {
let spec = ChainSpec::foundation();
let state_root = Hash256::zero();
let genesis_block = genesis_beacon_block(state_root, &spec);
// Signature should consist of [bytes48(0), bytes48(0)]
// Note: this is implemented using Apache Milagro BLS, which requires one extra byte -> 97 bytes
let raw_sig = genesis_block.signature.as_raw();
let raw_sig_bytes = raw_sig.as_bytes();
for item in raw_sig_bytes.iter() {
assert!(*item == 0);
}
assert_eq!(genesis_block.signature, Signature::empty_signature());
}
}


@ -21,7 +21,7 @@ pub fn genesis_beacon_state(spec: &ChainSpec) -> Result<BeaconState, Error> {
};
let initial_crosslink = CrosslinkRecord {
slot: spec.initial_slot_number,
slot: spec.genesis_slot_number,
shard_block_root: spec.zero_hash,
};
@ -29,43 +29,54 @@ pub fn genesis_beacon_state(spec: &ChainSpec) -> Result<BeaconState, Error> {
/*
* Misc
*/
slot: spec.initial_slot_number,
slot: spec.genesis_slot_number,
genesis_time: spec.genesis_time,
fork_data: ForkData {
pre_fork_version: spec.initial_fork_version,
post_fork_version: spec.initial_fork_version,
fork_slot: spec.initial_slot_number,
pre_fork_version: spec.genesis_fork_version,
post_fork_version: spec.genesis_fork_version,
fork_slot: spec.genesis_slot_number,
},
/*
* Validator registry
*/
validator_registry: spec.initial_validators.clone(),
validator_balances: spec.initial_balances.clone(),
validator_registry_latest_change_slot: spec.initial_slot_number,
validator_registry_latest_change_slot: spec.genesis_slot_number,
validator_registry_exit_count: 0,
validator_registry_delta_chain_tip: spec.zero_hash,
/*
* Randomness and committees
*/
randao_mix: spec.zero_hash,
next_seed: spec.zero_hash,
shard_committees_at_slots: vec![],
persistent_committees: vec![],
persistent_committee_reassignments: vec![],
latest_randao_mixes: vec![spec.zero_hash; spec.latest_randao_mixes_length as usize],
latest_vdf_outputs: vec![
spec.zero_hash;
(spec.latest_randao_mixes_length / spec.epoch_length) as usize
],
previous_epoch_start_shard: spec.genesis_start_shard,
current_epoch_start_shard: spec.genesis_start_shard,
previous_epoch_calculation_slot: spec.genesis_slot_number,
current_epoch_calculation_slot: spec.genesis_slot_number,
previous_epoch_randao_mix: spec.zero_hash,
current_epoch_randao_mix: spec.zero_hash,
/*
* Custody challenges
*/
custody_challenges: vec![],
/*
* Finality
*/
previous_justified_slot: spec.initial_slot_number,
justified_slot: spec.initial_slot_number,
previous_justified_slot: spec.genesis_slot_number,
justified_slot: spec.genesis_slot_number,
justification_bitfield: 0,
finalized_slot: spec.initial_slot_number,
finalized_slot: spec.genesis_slot_number,
/*
* Recent state
*/
latest_crosslinks: vec![initial_crosslink; spec.shard_count as usize],
latest_block_roots: vec![spec.zero_hash; spec.epoch_length as usize],
latest_penalized_exit_balances: vec![],
latest_block_roots: vec![spec.zero_hash; spec.latest_block_roots_length as usize],
latest_penalized_exit_balances: vec![0; spec.latest_penalized_exit_length as usize],
latest_attestations: vec![],
batched_block_roots: vec![],
/*
* PoW receipt root
*/
@ -82,16 +93,11 @@ impl From<ValidatorAssignmentError> for Error {
#[cfg(test)]
mod tests {
extern crate bls;
extern crate validator_induction;
use super::*;
// TODO: enhance these tests.
// https://github.com/sigp/lighthouse/issues/117
use types::Hash256;
#[test]
fn test_genesis() {
fn test_genesis_state() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
@ -101,4 +107,116 @@ mod tests {
spec.initial_validators.len()
);
}
#[test]
fn test_genesis_state_misc() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
assert_eq!(state.slot, 0);
assert_eq!(state.genesis_time, spec.genesis_time);
assert_eq!(state.fork_data.pre_fork_version, 0);
assert_eq!(state.fork_data.post_fork_version, 0);
assert_eq!(state.fork_data.fork_slot, 0);
}
#[test]
fn test_genesis_state_validators() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
assert_eq!(state.validator_registry, spec.initial_validators);
assert_eq!(state.validator_balances, spec.initial_balances);
assert!(state.validator_registry_latest_change_slot == 0);
assert!(state.validator_registry_exit_count == 0);
assert_eq!(state.validator_registry_delta_chain_tip, Hash256::zero());
}
#[test]
fn test_genesis_state_randomness_committees() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
// Array of size 8,192, each entry being the zero hash
assert_eq!(state.latest_randao_mixes.len(), 8_192);
for item in state.latest_randao_mixes.iter() {
assert_eq!(*item, Hash256::zero());
}
// Array of size 8,192 / 64 = 128, each entry being the zero hash
assert_eq!(state.latest_vdf_outputs.len(), (8_192 / 64));
for item in state.latest_vdf_outputs.iter() {
assert_eq!(*item, Hash256::zero());
}
// TODO: Check shard and committee shuffling requires solving issue:
// https://github.com/sigp/lighthouse/issues/151
// initial_shuffling = get_shuffling(Hash256::zero(), &state.validator_registry, 0, 0)
// initial_shuffling = initial_shuffling.append(initial_shuffling.clone());
}
// Custody not implemented until Phase 1
#[test]
fn test_genesis_state_custody() {}
#[test]
fn test_genesis_state_finality() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
assert_eq!(state.previous_justified_slot, 0);
assert_eq!(state.justified_slot, 0);
assert_eq!(state.justification_bitfield, 0);
assert_eq!(state.finalized_slot, 0);
}
#[test]
fn test_genesis_state_recent_state() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
// Test latest_crosslinks
assert_eq!(state.latest_crosslinks.len(), 1_024);
for link in state.latest_crosslinks.iter() {
assert_eq!(link.slot, 0);
assert_eq!(link.shard_block_root, Hash256::zero());
}
// Test latest_block_roots
assert_eq!(state.latest_block_roots.len(), 8_192);
for block in state.latest_block_roots.iter() {
assert_eq!(*block, Hash256::zero());
}
// Test latest_penalized_exit_balances
assert_eq!(state.latest_penalized_exit_balances.len(), 8_192);
for item in state.latest_penalized_exit_balances.iter() {
assert!(*item == 0);
}
// Test latest_attestations
assert!(state.latest_attestations.is_empty());
// batched_block_roots
assert!(state.batched_block_roots.is_empty());
}
#[test]
fn test_genesis_state_deposit_root() {
let spec = ChainSpec::foundation();
let state = genesis_beacon_state(&spec).unwrap();
assert_eq!(
state.processed_pow_receipt_root,
spec.processed_pow_receipt_root
);
assert!(state.candidate_pow_receipt_roots.is_empty());
}
}


@ -1,5 +1,5 @@
use super::ChainSpec;
use bls::{Keypair, PublicKey, SecretKey};
use bls::{Keypair, PublicKey, SecretKey, Signature};
use types::{Address, Hash256, ValidatorRecord};
@ -17,13 +17,16 @@ impl ChainSpec {
* Misc
*/
shard_count: 1_024,
target_committee_size: 256,
target_committee_size: 128,
ejection_balance: 16,
max_balance_churn_quotient: 32,
gwei_per_eth: u64::pow(10, 9),
beacon_chain_shard_number: u64::max_value(),
bls_withdrawal_prefix_byte: 0x00,
max_casper_votes: 1_024,
latest_block_roots_length: 8_192,
latest_randao_mixes_length: 8_192,
latest_penalized_exit_length: 8_192,
max_withdrawals_per_epoch: 4,
/*
* Deposit contract
*/
@ -34,32 +37,35 @@ impl ChainSpec {
/*
* Initial Values
*/
initial_fork_version: 0,
initial_slot_number: 0,
genesis_fork_version: 0,
genesis_slot_number: 0,
genesis_start_shard: 0,
far_future_slot: u64::max_value(),
zero_hash: Hash256::zero(),
empty_signature: Signature::empty_signature(),
bls_withdrawal_prefix_byte: 0x00,
/*
* Time parameters
*/
slot_duration: 6,
min_attestation_inclusion_delay: 4,
epoch_length: 64,
min_validator_registry_change_interval: 256,
seed_lookahead: 64,
entry_exit_delay: 256,
pow_receipt_root_voting_period: 1_024,
shard_persistent_committee_change_period: u64::pow(2, 17),
collective_penalty_calculation_period: u64::pow(2, 20),
zero_balance_validator_ttl: u64::pow(2, 22),
min_validator_withdrawal_time: u64::pow(2, 14),
/*
* Reward and penalty quotients
*/
base_reward_quotient: 2_048,
base_reward_quotient: 1_024,
whistleblower_reward_quotient: 512,
includer_reward_quotient: 8,
inactivity_penalty_quotient: u64::pow(2, 34),
inactivity_penalty_quotient: u64::pow(2, 24),
/*
* Max operations per block
*/
max_proposer_slashings: 16,
max_casper_slashings: 15,
max_casper_slashings: 16,
max_attestations: 128,
max_deposits: 16,
max_exits: 16,
@ -103,9 +109,12 @@ fn initial_validators_for_testing() -> Vec<ValidatorRecord> {
withdrawal_credentials: Hash256::zero(),
randao_commitment: Hash256::zero(),
randao_layers: 0,
status: From::from(0),
latest_status_change_slot: 0,
activation_slot: u64::max_value(),
exit_slot: u64::max_value(),
withdrawal_slot: u64::max_value(),
penalized_slot: u64::max_value(),
exit_count: 0,
status_flags: None,
custody_commitment: Hash256::zero(),
latest_custody_reseed_slot: 0,
penultimate_custody_reseed_slot: 0,


@ -3,6 +3,7 @@ extern crate types;
mod foundation;
use bls::Signature;
use types::{Address, Hash256, ValidatorRecord};
#[derive(PartialEq, Debug)]
@ -16,8 +17,11 @@ pub struct ChainSpec {
pub max_balance_churn_quotient: u64,
pub gwei_per_eth: u64,
pub beacon_chain_shard_number: u64,
pub bls_withdrawal_prefix_byte: u8,
pub max_casper_votes: u64,
pub latest_block_roots_length: u64,
pub latest_randao_mixes_length: u64,
pub latest_penalized_exit_length: u64,
pub max_withdrawals_per_epoch: u64,
/*
* Deposit contract
*/
@ -28,20 +32,23 @@ pub struct ChainSpec {
/*
* Initial Values
*/
pub initial_fork_version: u64,
pub initial_slot_number: u64,
pub genesis_fork_version: u64,
pub genesis_slot_number: u64,
pub genesis_start_shard: u64,
pub far_future_slot: u64,
pub zero_hash: Hash256,
pub empty_signature: Signature,
pub bls_withdrawal_prefix_byte: u8,
/*
* Time parameters
*/
pub slot_duration: u64,
pub min_attestation_inclusion_delay: u64,
pub epoch_length: u64,
pub min_validator_registry_change_interval: u64,
pub pow_receipt_root_voting_period: u64,
pub shard_persistent_committee_change_period: u64,
pub collective_penalty_calculation_period: u64,
pub zero_balance_validator_ttl: u64,
pub seed_lookahead: u64,
pub entry_exit_delay: u64,
pub pow_receipt_root_voting_period: u64, // a.k.a. deposit_root_voting_period
pub min_validator_withdrawal_time: u64,
/*
* Reward and penalty quotients
*/


@ -0,0 +1,60 @@
use super::ssz::{Decodable, DecodeError, Encodable, SszStream};
use rand::RngCore;
use crate::test_utils::TestRandom;
use super::AttestationData;
#[derive(Debug, Clone, PartialEq, Default)]
pub struct AttestationDataAndCustodyBit {
pub data: AttestationData,
pub custody_bit: bool,
}
impl Encodable for AttestationDataAndCustodyBit {
fn ssz_append(&self, s: &mut SszStream) {
s.append(&self.data);
s.append(&self.custody_bit);
}
}
impl Decodable for AttestationDataAndCustodyBit {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (data, i) = <_>::ssz_decode(bytes, i)?;
let (custody_bit, i) = <_>::ssz_decode(bytes, i)?;
let attestation_data_and_custody_bit = AttestationDataAndCustodyBit {
data,
custody_bit,
};
Ok((attestation_data_and_custody_bit, i))
}
}
impl<T: RngCore> TestRandom<T> for AttestationDataAndCustodyBit {
fn random_for_test(rng: &mut T) -> Self {
Self {
data: <_>::random_for_test(rng),
custody_bit: <_>::random_for_test(rng),
}
}
}
#[cfg(test)]
mod test {
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
use super::*;
use super::super::ssz::ssz_encode;
#[test]
pub fn test_ssz_round_trip() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = AttestationDataAndCustodyBit::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
assert_eq!(original, decoded);
}
}


@ -3,11 +3,20 @@ use super::{Attestation, CasperSlashing, Deposit, Exit, ProposerSlashing};
use crate::test_utils::TestRandom;
use rand::RngCore;
// The following are placeholder types; their real definitions will not exist until
// Phase 1 (the sharding phase)
type CustodyReseed = usize;
type CustodyChallenge = usize;
type CustodyResponse = usize;
#[derive(Debug, PartialEq, Clone, Default)]
pub struct BeaconBlockBody {
pub proposer_slashings: Vec<ProposerSlashing>,
pub casper_slashings: Vec<CasperSlashing>,
pub attestations: Vec<Attestation>,
pub custody_reseeds: Vec<CustodyReseed>,
pub custody_challenges: Vec<CustodyChallenge>,
pub custody_responses: Vec<CustodyResponse>,
pub deposits: Vec<Deposit>,
pub exits: Vec<Exit>,
}
@ -17,6 +26,9 @@ impl Encodable for BeaconBlockBody {
s.append_vec(&self.proposer_slashings);
s.append_vec(&self.casper_slashings);
s.append_vec(&self.attestations);
s.append_vec(&self.custody_reseeds);
s.append_vec(&self.custody_challenges);
s.append_vec(&self.custody_responses);
s.append_vec(&self.deposits);
s.append_vec(&self.exits);
}
@ -27,6 +39,9 @@ impl Decodable for BeaconBlockBody {
let (proposer_slashings, i) = <_>::ssz_decode(bytes, i)?;
let (casper_slashings, i) = <_>::ssz_decode(bytes, i)?;
let (attestations, i) = <_>::ssz_decode(bytes, i)?;
let (custody_reseeds, i) = <_>::ssz_decode(bytes, i)?;
let (custody_challenges, i) = <_>::ssz_decode(bytes, i)?;
let (custody_responses, i) = <_>::ssz_decode(bytes, i)?;
let (deposits, i) = <_>::ssz_decode(bytes, i)?;
let (exits, i) = <_>::ssz_decode(bytes, i)?;
@ -35,6 +50,9 @@ impl Decodable for BeaconBlockBody {
proposer_slashings,
casper_slashings,
attestations,
custody_reseeds,
custody_challenges,
custody_responses,
deposits,
exits,
},
@ -49,6 +67,9 @@ impl<T: RngCore> TestRandom<T> for BeaconBlockBody {
proposer_slashings: <_>::random_for_test(rng),
casper_slashings: <_>::random_for_test(rng),
attestations: <_>::random_for_test(rng),
custody_reseeds: <_>::random_for_test(rng),
custody_challenges: <_>::random_for_test(rng),
custody_responses: <_>::random_for_test(rng),
deposits: <_>::random_for_test(rng),
exits: <_>::random_for_test(rng),
}


@ -2,8 +2,6 @@ use super::candidate_pow_receipt_root_record::CandidatePoWReceiptRootRecord;
use super::crosslink_record::CrosslinkRecord;
use super::fork_data::ForkData;
use super::pending_attestation_record::PendingAttestationRecord;
use super::shard_committee::ShardCommittee;
use super::shard_reassignment_record::ShardReassignmentRecord;
use super::validator_record::ValidatorRecord;
use super::Hash256;
use crate::test_utils::TestRandom;
@ -11,6 +9,9 @@ use hashing::canonical_hash;
use rand::RngCore;
use ssz::{ssz_encode, Decodable, DecodeError, Encodable, SszStream};
// Custody will not be added to the specs until Phase 1 (Sharding Phase), so a placeholder type is used.
type CustodyChallenge = usize;
#[derive(Debug, PartialEq, Clone, Default)]
pub struct BeaconState {
// Misc
@ -26,11 +27,17 @@ pub struct BeaconState {
pub validator_registry_delta_chain_tip: Hash256,
// Randomness and committees
pub randao_mix: Hash256,
pub next_seed: Hash256,
pub shard_committees_at_slots: Vec<Vec<ShardCommittee>>,
pub persistent_committees: Vec<Vec<u32>>,
pub persistent_committee_reassignments: Vec<ShardReassignmentRecord>,
pub latest_randao_mixes: Vec<Hash256>,
pub latest_vdf_outputs: Vec<Hash256>,
pub previous_epoch_start_shard: u64,
pub current_epoch_start_shard: u64,
pub previous_epoch_calculation_slot: u64,
pub current_epoch_calculation_slot: u64,
pub previous_epoch_randao_mix: Hash256,
pub current_epoch_randao_mix: Hash256,
// Custody challenges
pub custody_challenges: Vec<CustodyChallenge>,
// Finality
pub previous_justified_slot: u64,
@ -43,8 +50,9 @@ pub struct BeaconState {
pub latest_block_roots: Vec<Hash256>,
pub latest_penalized_exit_balances: Vec<u64>,
pub latest_attestations: Vec<PendingAttestationRecord>,
pub batched_block_roots: Vec<Hash256>,
// PoW receipt root
// PoW receipt root (a.k.a. deposit root)
pub processed_pow_receipt_root: Hash256,
pub candidate_pow_receipt_roots: Vec<CandidatePoWReceiptRootRecord>,
}
@ -67,11 +75,15 @@ impl Encodable for BeaconState {
s.append(&self.validator_registry_latest_change_slot);
s.append(&self.validator_registry_exit_count);
s.append(&self.validator_registry_delta_chain_tip);
s.append(&self.randao_mix);
s.append(&self.next_seed);
s.append(&self.shard_committees_at_slots);
s.append(&self.persistent_committees);
s.append(&self.persistent_committee_reassignments);
s.append(&self.latest_randao_mixes);
s.append(&self.latest_vdf_outputs);
s.append(&self.previous_epoch_start_shard);
s.append(&self.current_epoch_start_shard);
s.append(&self.previous_epoch_calculation_slot);
s.append(&self.current_epoch_calculation_slot);
s.append(&self.previous_epoch_randao_mix);
s.append(&self.current_epoch_randao_mix);
s.append(&self.custody_challenges);
s.append(&self.previous_justified_slot);
s.append(&self.justified_slot);
s.append(&self.justification_bitfield);
@ -80,6 +92,7 @@ impl Encodable for BeaconState {
s.append(&self.latest_block_roots);
s.append(&self.latest_penalized_exit_balances);
s.append(&self.latest_attestations);
s.append(&self.batched_block_roots);
s.append(&self.processed_pow_receipt_root);
s.append(&self.candidate_pow_receipt_roots);
}
@ -95,11 +108,15 @@ impl Decodable for BeaconState {
let (validator_registry_latest_change_slot, i) = <_>::ssz_decode(bytes, i)?;
let (validator_registry_exit_count, i) = <_>::ssz_decode(bytes, i)?;
let (validator_registry_delta_chain_tip, i) = <_>::ssz_decode(bytes, i)?;
let (randao_mix, i) = <_>::ssz_decode(bytes, i)?;
let (next_seed, i) = <_>::ssz_decode(bytes, i)?;
let (shard_committees_at_slots, i) = <_>::ssz_decode(bytes, i)?;
let (persistent_committees, i) = <_>::ssz_decode(bytes, i)?;
let (persistent_committee_reassignments, i) = <_>::ssz_decode(bytes, i)?;
let (latest_randao_mixes, i) = <_>::ssz_decode(bytes, i)?;
let (latest_vdf_outputs, i) = <_>::ssz_decode(bytes, i)?;
let (previous_epoch_start_shard, i) = <_>::ssz_decode(bytes, i)?;
let (current_epoch_start_shard, i) = <_>::ssz_decode(bytes, i)?;
let (previous_epoch_calculation_slot, i) = <_>::ssz_decode(bytes, i)?;
let (current_epoch_calculation_slot, i) = <_>::ssz_decode(bytes, i)?;
let (previous_epoch_randao_mix, i) = <_>::ssz_decode(bytes, i)?;
let (current_epoch_randao_mix, i) = <_>::ssz_decode(bytes, i)?;
let (custody_challenges, i) = <_>::ssz_decode(bytes, i)?;
let (previous_justified_slot, i) = <_>::ssz_decode(bytes, i)?;
let (justified_slot, i) = <_>::ssz_decode(bytes, i)?;
let (justification_bitfield, i) = <_>::ssz_decode(bytes, i)?;
@ -108,6 +125,7 @@ impl Decodable for BeaconState {
let (latest_block_roots, i) = <_>::ssz_decode(bytes, i)?;
let (latest_penalized_exit_balances, i) = <_>::ssz_decode(bytes, i)?;
let (latest_attestations, i) = <_>::ssz_decode(bytes, i)?;
let (batched_block_roots, i) = <_>::ssz_decode(bytes, i)?;
let (processed_pow_receipt_root, i) = <_>::ssz_decode(bytes, i)?;
let (candidate_pow_receipt_roots, i) = <_>::ssz_decode(bytes, i)?;
@ -121,11 +139,15 @@ impl Decodable for BeaconState {
validator_registry_latest_change_slot,
validator_registry_exit_count,
validator_registry_delta_chain_tip,
randao_mix,
next_seed,
shard_committees_at_slots,
persistent_committees,
persistent_committee_reassignments,
latest_randao_mixes,
latest_vdf_outputs,
previous_epoch_start_shard,
current_epoch_start_shard,
previous_epoch_calculation_slot,
current_epoch_calculation_slot,
previous_epoch_randao_mix,
current_epoch_randao_mix,
custody_challenges,
previous_justified_slot,
justified_slot,
justification_bitfield,
@ -134,6 +156,7 @@ impl Decodable for BeaconState {
latest_block_roots,
latest_penalized_exit_balances,
latest_attestations,
batched_block_roots,
processed_pow_receipt_root,
candidate_pow_receipt_roots,
},
@ -153,11 +176,15 @@ impl<T: RngCore> TestRandom<T> for BeaconState {
validator_registry_latest_change_slot: <_>::random_for_test(rng),
validator_registry_exit_count: <_>::random_for_test(rng),
validator_registry_delta_chain_tip: <_>::random_for_test(rng),
randao_mix: <_>::random_for_test(rng),
next_seed: <_>::random_for_test(rng),
shard_committees_at_slots: <_>::random_for_test(rng),
persistent_committees: <_>::random_for_test(rng),
persistent_committee_reassignments: <_>::random_for_test(rng),
latest_randao_mixes: <_>::random_for_test(rng),
latest_vdf_outputs: <_>::random_for_test(rng),
previous_epoch_start_shard: <_>::random_for_test(rng),
current_epoch_start_shard: <_>::random_for_test(rng),
previous_epoch_calculation_slot: <_>::random_for_test(rng),
current_epoch_calculation_slot: <_>::random_for_test(rng),
previous_epoch_randao_mix: <_>::random_for_test(rng),
current_epoch_randao_mix: <_>::random_for_test(rng),
custody_challenges: <_>::random_for_test(rng),
previous_justified_slot: <_>::random_for_test(rng),
justified_slot: <_>::random_for_test(rng),
justification_bitfield: <_>::random_for_test(rng),
@ -166,6 +193,7 @@ impl<T: RngCore> TestRandom<T> for BeaconState {
latest_block_roots: <_>::random_for_test(rng),
latest_penalized_exit_balances: <_>::random_for_test(rng),
latest_attestations: <_>::random_for_test(rng),
batched_block_roots: <_>::random_for_test(rng),
processed_pow_receipt_root: <_>::random_for_test(rng),
candidate_pow_receipt_roots: <_>::random_for_test(rng),
}


@ -3,6 +3,7 @@ use super::Hash256;
use crate::test_utils::TestRandom;
use rand::RngCore;
// Note: this is referred to as DepositRootVote in the specs
#[derive(Debug, PartialEq, Clone)]
pub struct CandidatePoWReceiptRootRecord {
pub candidate_pow_receipt_root: Hash256,


@ -26,6 +26,7 @@ pub mod shard_reassignment_record;
pub mod slashable_vote_data;
pub mod special_record;
pub mod validator_record;
pub mod validator_registry;
pub mod readers;
@ -50,7 +51,7 @@ pub use crate::proposer_slashing::ProposerSlashing;
pub use crate::shard_committee::ShardCommittee;
pub use crate::slashable_vote_data::SlashableVoteData;
pub use crate::special_record::{SpecialRecord, SpecialRecordKind};
pub use crate::validator_record::{ValidatorRecord, ValidatorStatus};
pub use crate::validator_record::{StatusFlags as ValidatorStatusFlags, ValidatorRecord};
pub type Hash256 = H256;
pub type Address = H160;


@ -3,29 +3,43 @@ use super::Hash256;
use crate::test_utils::TestRandom;
use rand::RngCore;
use ssz::{Decodable, DecodeError, Encodable, SszStream};
use std::convert;
const STATUS_FLAG_INITIATED_EXIT: u8 = 1;
const STATUS_FLAG_WITHDRAWABLE: u8 = 2;
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum ValidatorStatus {
PendingActivation,
Active,
PendingExit,
PendingWithdraw,
Withdrawn,
Penalized,
pub enum StatusFlags {
InitiatedExit,
Withdrawable,
}
impl convert::From<u8> for ValidatorStatus {
fn from(status: u8) -> Self {
match status {
0 => ValidatorStatus::PendingActivation,
1 => ValidatorStatus::Active,
2 => ValidatorStatus::PendingExit,
3 => ValidatorStatus::PendingWithdraw,
5 => ValidatorStatus::Withdrawn,
127 => ValidatorStatus::Penalized,
_ => unreachable!(),
struct StatusFlagsDecodeError;
impl From<StatusFlagsDecodeError> for DecodeError {
fn from(_: StatusFlagsDecodeError) -> DecodeError {
DecodeError::Invalid
}
}
/// Handles the serialization logic for the `status_flags` field of the `ValidatorRecord`.
fn status_flag_to_byte(flag: Option<StatusFlags>) -> u8 {
if let Some(flag) = flag {
match flag {
StatusFlags::InitiatedExit => STATUS_FLAG_INITIATED_EXIT,
StatusFlags::Withdrawable => STATUS_FLAG_WITHDRAWABLE,
}
} else {
0
}
}
/// Handles the deserialization logic for the `status_flags` field of the `ValidatorRecord`.
fn status_flag_from_byte(flag: u8) -> Result<Option<StatusFlags>, StatusFlagsDecodeError> {
match flag {
0 => Ok(None),
1 => Ok(Some(StatusFlags::InitiatedExit)),
2 => Ok(Some(StatusFlags::Withdrawable)),
_ => Err(StatusFlagsDecodeError),
}
}
@ -35,61 +49,49 @@ pub struct ValidatorRecord {
pub withdrawal_credentials: Hash256,
pub randao_commitment: Hash256,
pub randao_layers: u64,
pub status: ValidatorStatus,
pub latest_status_change_slot: u64,
pub activation_slot: u64,
pub exit_slot: u64,
pub withdrawal_slot: u64,
pub penalized_slot: u64,
pub exit_count: u64,
pub status_flags: Option<StatusFlags>,
pub custody_commitment: Hash256,
pub latest_custody_reseed_slot: u64,
pub penultimate_custody_reseed_slot: u64,
}
impl ValidatorRecord {
pub fn status_is(&self, status: ValidatorStatus) -> bool {
self.status == status
/// This predicate indicates if the validator represented by this record is considered "active" at `slot`.
pub fn is_active_at(&self, slot: u64) -> bool {
self.activation_slot <= slot && slot < self.exit_slot
}
}
impl Encodable for ValidatorStatus {
fn ssz_append(&self, s: &mut SszStream) {
let byte: u8 = match self {
ValidatorStatus::PendingActivation => 0,
ValidatorStatus::Active => 1,
ValidatorStatus::PendingExit => 2,
ValidatorStatus::PendingWithdraw => 3,
ValidatorStatus::Withdrawn => 5,
ValidatorStatus::Penalized => 127,
};
s.append(&byte);
impl Default for ValidatorRecord {
/// Yields a "default" `ValidatorRecord`. Primarily used for testing.
fn default() -> Self {
Self {
pubkey: PublicKey::default(),
withdrawal_credentials: Hash256::default(),
randao_commitment: Hash256::default(),
randao_layers: 0,
activation_slot: std::u64::MAX,
exit_slot: std::u64::MAX,
withdrawal_slot: std::u64::MAX,
penalized_slot: std::u64::MAX,
exit_count: 0,
status_flags: None,
custody_commitment: Hash256::default(),
latest_custody_reseed_slot: 0, // NOTE: is `GENESIS_SLOT`
penultimate_custody_reseed_slot: 0, // NOTE: is `GENESIS_SLOT`
}
}
}
impl Decodable for ValidatorStatus {
fn ssz_decode(bytes: &[u8], i: usize) -> Result<(Self, usize), DecodeError> {
let (byte, i) = u8::ssz_decode(bytes, i)?;
let status = match byte {
0 => ValidatorStatus::PendingActivation,
1 => ValidatorStatus::Active,
2 => ValidatorStatus::PendingExit,
3 => ValidatorStatus::PendingWithdraw,
5 => ValidatorStatus::Withdrawn,
127 => ValidatorStatus::Penalized,
_ => return Err(DecodeError::Invalid),
};
Ok((status, i))
}
}
impl<T: RngCore> TestRandom<T> for ValidatorStatus {
impl<T: RngCore> TestRandom<T> for StatusFlags {
fn random_for_test(rng: &mut T) -> Self {
let options = vec![
ValidatorStatus::PendingActivation,
ValidatorStatus::Active,
ValidatorStatus::PendingExit,
ValidatorStatus::PendingWithdraw,
ValidatorStatus::Withdrawn,
ValidatorStatus::Penalized,
];
options[(rng.next_u32() as usize) % options.len()]
let options = vec![StatusFlags::InitiatedExit, StatusFlags::Withdrawable];
options[(rng.next_u32() as usize) % options.len()].clone()
}
}
@ -99,9 +101,12 @@ impl Encodable for ValidatorRecord {
s.append(&self.withdrawal_credentials);
s.append(&self.randao_commitment);
s.append(&self.randao_layers);
s.append(&self.status);
s.append(&self.latest_status_change_slot);
s.append(&self.activation_slot);
s.append(&self.exit_slot);
s.append(&self.withdrawal_slot);
s.append(&self.penalized_slot);
s.append(&self.exit_count);
s.append(&status_flag_to_byte(self.status_flags));
s.append(&self.custody_commitment);
s.append(&self.latest_custody_reseed_slot);
s.append(&self.penultimate_custody_reseed_slot);
@ -114,22 +119,30 @@ impl Decodable for ValidatorRecord {
let (withdrawal_credentials, i) = <_>::ssz_decode(bytes, i)?;
let (randao_commitment, i) = <_>::ssz_decode(bytes, i)?;
let (randao_layers, i) = <_>::ssz_decode(bytes, i)?;
let (status, i) = <_>::ssz_decode(bytes, i)?;
let (latest_status_change_slot, i) = <_>::ssz_decode(bytes, i)?;
let (activation_slot, i) = <_>::ssz_decode(bytes, i)?;
let (exit_slot, i) = <_>::ssz_decode(bytes, i)?;
let (withdrawal_slot, i) = <_>::ssz_decode(bytes, i)?;
let (penalized_slot, i) = <_>::ssz_decode(bytes, i)?;
let (exit_count, i) = <_>::ssz_decode(bytes, i)?;
let (status_flags_byte, i): (u8, usize) = <_>::ssz_decode(bytes, i)?;
let (custody_commitment, i) = <_>::ssz_decode(bytes, i)?;
let (latest_custody_reseed_slot, i) = <_>::ssz_decode(bytes, i)?;
let (penultimate_custody_reseed_slot, i) = <_>::ssz_decode(bytes, i)?;
let status_flags = status_flag_from_byte(status_flags_byte)?;
Ok((
Self {
pubkey,
withdrawal_credentials,
randao_commitment,
randao_layers,
status,
latest_status_change_slot,
activation_slot,
exit_slot,
withdrawal_slot,
penalized_slot,
exit_count,
status_flags,
custody_commitment,
latest_custody_reseed_slot,
penultimate_custody_reseed_slot,
@ -146,9 +159,12 @@ impl<T: RngCore> TestRandom<T> for ValidatorRecord {
withdrawal_credentials: <_>::random_for_test(rng),
randao_commitment: <_>::random_for_test(rng),
randao_layers: <_>::random_for_test(rng),
status: <_>::random_for_test(rng),
latest_status_change_slot: <_>::random_for_test(rng),
activation_slot: <_>::random_for_test(rng),
exit_slot: <_>::random_for_test(rng),
withdrawal_slot: <_>::random_for_test(rng),
penalized_slot: <_>::random_for_test(rng),
exit_count: <_>::random_for_test(rng),
status_flags: Some(<_>::random_for_test(rng)),
custody_commitment: <_>::random_for_test(rng),
latest_custody_reseed_slot: <_>::random_for_test(rng),
penultimate_custody_reseed_slot: <_>::random_for_test(rng),
@ -174,13 +190,24 @@ mod tests {
}
#[test]
pub fn test_validator_status_ssz_round_trip() {
fn test_validator_can_be_active() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let original = ValidatorStatus::random_for_test(&mut rng);
let mut validator = ValidatorRecord::random_for_test(&mut rng);
let bytes = ssz_encode(&original);
let (decoded, _) = <_>::ssz_decode(&bytes, 0).unwrap();
let activation_slot = u64::random_for_test(&mut rng);
let exit_slot = activation_slot + 234;
assert_eq!(original, decoded);
validator.activation_slot = activation_slot;
validator.exit_slot = exit_slot;
for slot in (activation_slot - 100)..(exit_slot + 100) {
if slot < activation_slot {
assert!(!validator.is_active_at(slot));
} else if slot >= exit_slot {
assert!(!validator.is_active_at(slot));
} else {
assert!(validator.is_active_at(slot));
}
}
}
}


@ -0,0 +1,171 @@
/// Contains logic to manipulate a `&[ValidatorRecord]`.
/// For now, we avoid defining a newtype and just have flat functions here.
use super::validator_record::*;
/// Given an indexed sequence of `validators`, return the indices corresponding to validators that are active at `slot`.
pub fn get_active_validator_indices(validators: &[ValidatorRecord], slot: u64) -> Vec<usize> {
validators
.iter()
.enumerate()
.filter_map(|(index, validator)| {
if validator.is_active_at(slot) {
Some(index)
} else {
None
}
})
.collect::<Vec<_>>()
}
#[cfg(test)]
mod tests {
use super::*;
use crate::test_utils::{SeedableRng, TestRandom, XorShiftRng};
#[test]
fn can_get_empty_active_validator_indices() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let validators = vec![];
let some_slot = u64::random_for_test(&mut rng);
let indices = get_active_validator_indices(&validators, some_slot);
assert_eq!(indices, vec![]);
}
#[test]
fn can_get_no_active_validator_indices() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let mut validators = vec![];
let count_validators = 10;
for _ in 0..count_validators {
validators.push(ValidatorRecord::default())
}
let some_slot = u64::random_for_test(&mut rng);
let indices = get_active_validator_indices(&validators, some_slot);
assert_eq!(indices, vec![]);
}
#[test]
fn can_get_all_active_validator_indices() {
let mut rng = XorShiftRng::from_seed([42; 16]);
let count_validators = 10;
let some_slot = u64::random_for_test(&mut rng);
let mut validators = (0..count_validators)
.into_iter()
.map(|_| {
let mut validator = ValidatorRecord::default();
let activation_offset = u64::random_for_test(&mut rng);
let exit_offset = u64::random_for_test(&mut rng);
validator.activation_slot = some_slot.checked_sub(activation_offset).unwrap_or(0);
validator.exit_slot = some_slot.checked_add(exit_offset).unwrap_or(std::u64::MAX);
validator
})
.collect::<Vec<_>>();
// test boundary condition by ensuring that at least one validator in the list just activated
if let Some(validator) = validators.get_mut(0) {
validator.activation_slot = some_slot;
}
let indices = get_active_validator_indices(&validators, some_slot);
assert_eq!(
indices,
(0..count_validators).into_iter().collect::<Vec<_>>()
);
}
fn set_validators_to_default_entry_exit(validators: &mut [ValidatorRecord]) {
for validator in validators.iter_mut() {
validator.activation_slot = std::u64::MAX;
validator.exit_slot = std::u64::MAX;
}
}
// sets all `validators` to be active as of some slot prior to `slot`. returns the activation slot.
fn set_validators_to_activated(validators: &mut [ValidatorRecord], slot: u64) -> u64 {
let activation_slot = slot - 10;
for validator in validators.iter_mut() {
validator.activation_slot = activation_slot;
}
activation_slot
}
// sets all `validators` to be exited as of some slot before `slot`.
fn set_validators_to_exited(
validators: &mut [ValidatorRecord],
slot: u64,
activation_slot: u64,
) {
assert!(activation_slot < slot);
let mut exit_slot = activation_slot + 10;
while exit_slot >= slot {
exit_slot -= 1;
}
assert!(activation_slot < exit_slot && exit_slot < slot);
for validator in validators.iter_mut() {
validator.exit_slot = exit_slot;
}
}
#[test]
fn can_get_some_active_validator_indices() {
let mut rng = XorShiftRng::from_seed([42; 16]);
const COUNT_PARTITIONS: usize = 3;
const COUNT_VALIDATORS: usize = 3 * COUNT_PARTITIONS;
let some_slot: u64 = u64::random_for_test(&mut rng);
let mut validators = (0..COUNT_VALIDATORS)
.into_iter()
.map(|_| {
let mut validator = ValidatorRecord::default();
let activation_offset = u64::random_for_test(&mut rng);
let exit_offset = u64::random_for_test(&mut rng);
validator.activation_slot = some_slot.checked_sub(activation_offset).unwrap_or(0);
validator.exit_slot = some_slot.checked_add(exit_offset).unwrap_or(std::u64::MAX);
validator
})
.collect::<Vec<_>>();
// we partition the set into partitions based on lifecycle:
for (i, chunk) in validators.chunks_exact_mut(COUNT_PARTITIONS).enumerate() {
match i {
0 => {
// 1. not activated (Default::default())
set_validators_to_default_entry_exit(chunk);
}
1 => {
// 2. activated, but not exited
set_validators_to_activated(chunk, some_slot);
// test boundary condition by ensuring that at least one validator in the list just activated
if let Some(validator) = chunk.get_mut(0) {
validator.activation_slot = some_slot;
}
}
2 => {
// 3. exited
let activation_slot = set_validators_to_activated(chunk, some_slot);
set_validators_to_exited(chunk, some_slot, activation_slot);
// test boundary condition by ensuring that at least one validator in the list just exited
if let Some(validator) = chunk.get_mut(0) {
validator.exit_slot = some_slot;
}
}
_ => unreachable!(
"constants local to this test not in sync with generation of test case"
),
}
}
let indices = get_active_validator_indices(&validators, some_slot);
assert_eq!(indices, vec![3, 4, 5]);
}
}


@ -2,6 +2,7 @@ use super::SecretKey;
use bls_aggregates::PublicKey as RawPublicKey;
use hex::encode as hex_encode;
use ssz::{decode_ssz_list, ssz_encode, Decodable, DecodeError, Encodable, SszStream};
use std::default;
use std::hash::{Hash, Hasher};
/// A single BLS public key.
@ -31,6 +32,13 @@ impl PublicKey {
}
}
impl default::Default for PublicKey {
fn default() -> Self {
let secret_key = SecretKey::random();
PublicKey::from_secret_key(&secret_key)
}
}
impl Encodable for PublicKey {
fn ssz_append(&self, s: &mut SszStream) {
s.append_vec(&self.0.as_bytes());


@ -35,6 +35,12 @@ impl Signature {
pub fn as_raw(&self) -> &RawSignature {
&self.0
}
/// Returns a new empty signature.
pub fn empty_signature() -> Self {
let empty: Vec<u8> = vec![0; 97];
Signature(RawSignature::from_bytes(&empty).unwrap())
}
}
impl Encodable for Signature {
@ -68,4 +74,16 @@ mod tests {
assert_eq!(original, decoded);
}
#[test]
pub fn test_empty_signature() {
let sig = Signature::empty_signature();
let sig_as_bytes: Vec<u8> = sig.as_raw().as_bytes();
assert_eq!(sig_as_bytes.len(), 97);
for one_byte in sig_as_bytes.iter() {
assert_eq!(*one_byte, 0);
}
}
}


@ -1,6 +1,6 @@
use bls::verify_proof_of_possession;
use spec::ChainSpec;
use types::{BeaconState, Deposit, ValidatorRecord, ValidatorStatus};
use types::{BeaconState, Deposit, ValidatorRecord};
#[derive(Debug, PartialEq, Clone)]
pub enum ValidatorInductionError {
@ -13,7 +13,7 @@ pub fn process_deposit(
state: &mut BeaconState,
deposit: &Deposit,
spec: &ChainSpec,
) -> Result<usize, ValidatorInductionError> {
) -> Result<(), ValidatorInductionError> {
let deposit_input = &deposit.deposit_data.deposit_input;
let deposit_data = &deposit.deposit_data;
@ -33,7 +33,7 @@ pub fn process_deposit(
== deposit_input.withdrawal_credentials
{
state.validator_balances[i] += deposit_data.value;
return Ok(i);
return Ok(());
}
Err(ValidatorInductionError::InvalidWithdrawalCredentials)
@ -44,43 +44,25 @@ pub fn process_deposit(
withdrawal_credentials: deposit_input.withdrawal_credentials,
randao_commitment: deposit_input.randao_commitment,
randao_layers: 0,
status: ValidatorStatus::PendingActivation,
latest_status_change_slot: state.validator_registry_latest_change_slot,
activation_slot: spec.far_future_slot,
exit_slot: spec.far_future_slot,
withdrawal_slot: spec.far_future_slot,
penalized_slot: spec.far_future_slot,
exit_count: 0,
status_flags: None,
custody_commitment: deposit_input.custody_commitment,
latest_custody_reseed_slot: 0,
penultimate_custody_reseed_slot: 0,
};
match min_empty_validator_index(state, spec) {
Some(i) => {
state.validator_registry[i] = validator;
state.validator_balances[i] = deposit_data.value;
Ok(i)
}
None => {
state.validator_registry.push(validator);
state.validator_balances.push(deposit_data.value);
Ok(state.validator_registry.len() - 1)
}
}
let _index = state.validator_registry.len();
state.validator_registry.push(validator);
state.validator_balances.push(deposit_data.value);
Ok(())
}
}
}
fn min_empty_validator_index(state: &BeaconState, spec: &ChainSpec) -> Option<usize> {
for i in 0..state.validator_registry.len() {
if state.validator_balances[i] == 0
&& state.validator_registry[i].latest_status_change_slot
+ spec.zero_balance_validator_ttl
<= state.slot
{
return Some(i);
}
}
None
}
#[cfg(test)]
mod tests {
use super::*;
@ -125,7 +107,7 @@ mod tests {
let result = process_deposit(&mut state, &deposit, &spec);
assert_eq!(result.unwrap(), 0);
assert_eq!(result.unwrap(), ());
assert!(deposit_equals_record(
&deposit,
&state.validator_registry[0]
@ -143,7 +125,7 @@ mod tests {
let mut deposit = get_deposit();
let result = process_deposit(&mut state, &deposit, &spec);
deposit.deposit_data.value = DEPOSIT_GWEI;
assert_eq!(result.unwrap(), i);
assert_eq!(result.unwrap(), ());
assert!(deposit_equals_record(
&deposit,
&state.validator_registry[i]
@ -172,7 +154,7 @@ mod tests {
let result = process_deposit(&mut state, &deposit, &spec);
assert_eq!(result.unwrap(), 0);
assert_eq!(result.unwrap(), ());
assert!(deposit_equals_record(
&deposit,
&state.validator_registry[0]
@ -182,32 +164,6 @@ mod tests {
assert_eq!(state.validator_balances.len(), 1);
}
#[test]
fn test_process_deposit_replace_validator() {
let mut state = BeaconState::default();
let spec = ChainSpec::foundation();
let mut validator = get_validator();
validator.latest_status_change_slot = 0;
state.validator_registry.push(validator);
state.validator_balances.push(0);
let mut deposit = get_deposit();
deposit.deposit_data.value = DEPOSIT_GWEI;
state.slot = spec.zero_balance_validator_ttl;
let result = process_deposit(&mut state, &deposit, &spec);
assert_eq!(result.unwrap(), 0);
assert!(deposit_equals_record(
&deposit,
&state.validator_registry[0]
));
assert_eq!(state.validator_balances[0], DEPOSIT_GWEI);
assert_eq!(state.validator_registry.len(), 1);
assert_eq!(state.validator_balances.len(), 1);
}
#[test]
fn test_process_deposit_invalid_proof_of_possession() {
let mut state = BeaconState::default();


@ -2,7 +2,8 @@ use std::cmp::min;
use honey_badger_split::SplitExt;
use spec::ChainSpec;
use types::{ShardCommittee, ValidatorRecord, ValidatorStatus};
use types::validator_registry::get_active_validator_indices;
use types::{ShardCommittee, ValidatorRecord};
use vec_shuffle::{shuffle, ShuffleErr};
type DelegatedCycle = Vec<Vec<ShardCommittee>>;
@ -24,17 +25,7 @@ pub fn shard_and_committees_for_cycle(
spec: &ChainSpec,
) -> Result<DelegatedCycle, ValidatorAssignmentError> {
let shuffled_validator_indices = {
let validator_indices = validators
.iter()
.enumerate()
.filter_map(|(i, validator)| {
if validator.status_is(ValidatorStatus::Active) {
Some(i)
} else {
None
}
})
.collect();
let validator_indices = get_active_validator_indices(validators, 0);
shuffle(seed, validator_indices)?
};
let shard_indices: Vec<usize> = (0_usize..spec.shard_count as usize).into_iter().collect();


@ -1,4 +1,4 @@
use chain::{BlockProcessingOutcome, BeaconChain};
use chain::{BeaconChain, BlockProcessingOutcome};
use db::{
stores::{BeaconBlockStore, BeaconStateStore},
MemoryDB,

docs/lighthouse.md Normal file

@ -0,0 +1,83 @@
# About Lighthouse
## Goals
The purpose of this project is to work alongside the Ethereum community to
implement a secure, trustworthy, open-source Ethereum Serenity client in Rust.
* **Security**: Lighthouse's main goal is to implement everything with a
security-first mindset. The goal is to ensure that all components of lighthouse
are thoroughly tested, checked and secure.
* **Trust**: As Ethereum Serenity is a Proof-of-Stake system, it involves the
interaction of the Ethereum protocol and user funds. A goal of Lighthouse is
therefore to provide a client that is trustworthy: all code can be tested and
verified, so the code itself can be trusted.
* **Transparency**: Lighthouse aims to be as transparent as possible,
embracing the open-source community and allowing everyone to understand the
decisions, direction and changes in all aspects of the project.
* **Error Resilience**: Lighthouse embraces the "never `panic`" mindset: the
goal is to be resilient to errors that may occur. Tolerance against errors
further supports the secure, trustworthy client that Lighthouse aims to
provide.
In addition to implementing a new client, the project seeks to maintain and
improve the Ethereum protocol wherever possible.
## Ideology
### Never Panic
Lighthouse will be the gateway interacting with the Proof-of-Stake system
employed by Ethereum. This requires the validation and proposal of blocks
and extremely timely responses. As part of this, Lighthouse aims to ensure
as much uptime as possible, meaning minimising the number of exceptions and
gracefully handling any issues.
Rust's `panic` throws an exception and exits, terminating the running
process. Thus, Lighthouse aims to use `panic` as little as possible to
minimise the possible termination cases.
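As an illustrative sketch of this mindset (nothing here is taken from the Lighthouse codebase), a function can return a `Result` rather than calling `panic!` or `expect` on bad input:

```rust
use std::num::ParseIntError;

/// Using `input.parse().expect("invalid slot")` would `panic!` and terminate
/// the process on malformed input. Returning a `Result` instead lets the
/// caller decide how to recover, keeping the client alive.
fn parse_slot(input: &str) -> Result<u64, ParseIntError> {
    input.parse()
}

fn main() {
    match parse_slot("not-a-number") {
        Ok(slot) => println!("slot = {}", slot),
        Err(e) => eprintln!("ignoring malformed slot: {}", e),
    }
}
```

Propagating errors up to a caller that can log, ignore or retry is what keeps a long-running node from being taken down by a single bad message.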
### Security First Mindset
Lighthouse aims to provide a safe, secure Serenity client for the Ethereum
ecosystem. At each step of development, the aim is to maintain a
security-first mindset. When contributing to any part of the Lighthouse
client, always ensure you understand each aspect thoroughly and cover all
potential security considerations of your code.
### Functions aren't completed until they are tested
As part of the Security First mindset, we aim to cover as many distinct cases
as possible. A function being developed is not considered "completed" until
tests exist for that function. The tests not only help show the correctness of
the function, but also provide a way for new developers to understand how the
function is to be called and how it works.
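As a minimal, hypothetical illustration of this convention (the function and values below are examples only, not code from this repository):

```rust
/// Returns the epoch that a given slot belongs to.
/// `slots_per_epoch` is assumed to be non-zero.
pub fn slot_to_epoch(slot: u64, slots_per_epoch: u64) -> u64 {
    slot / slots_per_epoch
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_slot_to_epoch() {
        // The test documents both the expected result and how the function
        // is intended to be called.
        assert_eq!(slot_to_epoch(0, 64), 0);
        assert_eq!(slot_to_epoch(63, 64), 0);
        assert_eq!(slot_to_epoch(64, 64), 1);
    }
}
```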
## Engineering Ethos
Lighthouse aims to produce many small, easily-tested components, each separated
into individual crates wherever possible.
Generally, tests can be kept in the same file, as is typical in Rust.
Integration tests should be placed in the `tests` directory in the crate's
root. Particularly large (by line count) tests should be placed into a separate
file.
A function is not considered complete until a test exists for it. We produce
tests to protect against regression (accidentally breaking things) and to
provide examples that help readers of the code base understand how functions
should (or should not) be used.
Each pull request is to be reviewed by at least one "core developer" (i.e.,
someone with write-access to the repository). This helps to ensure bugs are
detected, consistency is maintained, and responsibility for errors is dispersed.
Discussion must be respectful and intellectual. Have fun and make jokes, but
always respect the limits of other people.

View File

@ -1,5 +1,7 @@
# Contributing to Lighthouse
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/sigp/lighthouse?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
Lighthouse is an open-source Ethereum Serenity client built in
[Rust](https://www.rust-lang.org/).
@ -13,39 +15,31 @@ documentation, writing extra tests or developing components, all help is
appreciated and your contributions will help not only the community but all
the contributors.
We've bundled up our Goals, Ethos and Ideology into one document for you to
read through. Please read our [About Lighthouse](lighthouse.md) docs. :smile:
Layer-1 infrastructure is a critical component for the ecosystem and relies
heavily on contributions from the community. Building Ethereum Serenity is a
huge task and we refuse to conduct an inappropriate ICO or charge licensing
fees. Instead, we fund development through grants and support from Sigma
Prime.
If you have any additional questions, please feel free to jump on the
[gitter](https://gitter.im/sigp/lighthouse) and have a chat with all of us.
## Ideology
**Pre-reading Materials:**
### Never Panic
* [About Lighthouse](lighthouse.md)
* [Ethereum Serenity](serenity.md)
Lighthouse will be the gateway interacting with the Proof-of-Stake system
employed by Ethereum. This requires the validation and proposal of blocks
and extremely timely responses. As part of this, Lighthouse aims to ensure
the most uptime as possible, meaning minimising the amount of
exceptions and gracefully handling any issues.
**Repository**
Rust's `panic` provides the ability to throw an exception and exit, this
will terminate the running processes. Thus, Lighthouse aims to use `panic`
as little as possible to minimise the possible termination cases.
### Security First Mindset
Lighthouse aims to provide a safe, secure Serenity client for the Ethereum
ecosystem. At each step of development, the aim is to have a security-first
mindset and always ensure you are following the safe, secure mindset. When
contributing to any part of the Lighthouse client, through any development,
always ensure you understand each aspect thoroughly and cover all potential
security considerations of your code.
### Functions aren't completed until they are tested
As part of the Security First mindset, we want to aim to cover as many distinct
cases. A function being developed is not considered "completed" until tests
exist for that function. The tests not only help show the correctness of the
function, but also provide a way for new developers to understand how the
function is to be called and how it works.
If you'd like to contribute, try having a look through the [open
issues](https://github.com/sigp/lighthouse/issues) (tip: look for the [good
first
issue](https://github.com/sigp/lighthouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse) channel. We need
your support!
## Understanding Serenity
@ -54,59 +48,12 @@ Ethereum's Serenity is based on a Proof-of-Stake based sharded beacon chain.
(*If you don't know what that is, don't `panic`, that's what this documentation
is for!* :smile:)
### Ethereum
Read through our [Understanding
Serenity](https://github.com/sigp/lighthouse/blob/master/docs/serenity.md) docs
to learn more! :smile: (*unless you've already read it.*)
Ethereum is an open blockchain protocol, allowing for the building and use of
decentralized applications that run on blockchain technology. The blockchain can
be seen as a decentralized, distributed ledger of transactions.
General Ethereum Introduction:
* [What is Ethereum](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
* [Ethereum Introduction](https://github.com/ethereum/wiki/wiki/Ethereum-introduction)
### Proof-of-Work and the current state of Ethereum.
Currently, Ethereum is based on the Proof-of-Work model, a Sybil resilient
mechanism to allow nodes to propose blocks to the network. Although it provides
properties that allow the blockchain to operate in an open, public
(permissionless) network, it faces it's challenges and as a result impacts
the operation of the blockchain.
The main goals to advance Ethereum is to (1) increase the scalability and
overall transaction processing power of the Ethereum world computer and (2)
find a suitable replacement for Proof-of-Work that still provides the necessary
properties that we need.
* [Proof-of-Work in Cryptocurrencies: an accessible introduction](https://blog.sigmaprime.io/what-is-proof-of-work.html)
### Serenity
As part of the original Ethereum roadmap
[\[1\]](https://blog.ethereum.org/2015/03/03/ethereum-launch-process/)
[\[2\]](http://ethdocs.org/en/latest/introduction/the-homestead-release.html),
the Proof-of-Stake integration falls under **Release Step 4:*Serenity***. With
this, a number of changes are to be made to the current Ethereum protocol to
incorporate some of the new Proof-of-Stake mechanisms as well as improve on
some of the hindrances faced by the current Proof-of-Work chain.
To now advance the current Ethereum, the decision is made to move to a sharded
Beacon chain structure where multiple shard-chains will be operating and
interacting with a central beacon chain.
(Be mindful, the specifications change occasionally, so check these to keep up
to date)
* Current Specifications:
* [Danny Ryan's "State of the Spec"](https://notes.ethereum.org/s/BJEZWNoyE) (A nice summary of the current specifications)
* [Ethereum Serenity - Phase 0: Beacon Chain Spec](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md)
* [Ethereum Serenity - Phase 1: Sharded Data Chains](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/1_shard-data-chains.md)
* [Beacon Chain - Vitalik Buterin and Justin Drake explain](https://www.youtube.com/watch?v=GAywmwGToUI)
* Understanding Sharding:
* [Prysmatic Labs: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
* Other relevant resources
* [Proof of Stake - Casper FFG](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
The document explains the necessary fundamentals for understanding Ethereum,
Proof-of-Stake and the Serenity upgrade we are working towards.
## Development Onboarding
@ -143,7 +90,6 @@ $ curl https://sh.rustup.rs -sSf | sh
```
**Windows (You need a bit more):**
* Install the Visual Studio 2015 with C++ support
* Install Rustup using: https://static.rust-lang.org/rustup/dist/x86_64-pc-windows-msvc/rustup-init.exe
* You can then use the ``VS2015 x64 Native Tools Command Prompt`` and run:
@ -161,7 +107,6 @@ handy for handling dependencies and helping to modularise your project better.
*Note: If you've installed rust through rustup, you should have ``cargo``
installed.*
#### Rust Terminology
When developing rust, you'll come across some terminology that differs to
@ -217,6 +162,8 @@ and intellectual. Have fun, but always respect the limits of other people.
**Testing**
*"A function is not considered complete until tests exist for it."*
Generally, tests can be self-contained in the same file. Integration tests
should be added into the ``tests/`` directory in the crate's **root**.

161
docs/serenity.md Normal file
View File

@ -0,0 +1,161 @@
# Ethereum Serenity
This document aims to provide a high-level understanding of Ethereum and the
Serenity phase of the Ethereum roadmap.
## The Blockchain
A blockchain can be seen as a decentralized, distributed ledger. The ledger of
transactions is replicated onto all nodes in the network. When a transaction
occurs, it is first propagated to the nodes. Once the nodes receive the
transaction and verify its correctness, they attempt to batch the transactions
into a block and append that block to the ledger. Once a block has been
successfully appended, it is propagated to the network. If accepted, this
block becomes the latest block in the chain. If two nodes propose a block at
the same time, the single canonical blockchain forks. At this point the fork
must be resolved, and each system has its own way of resolving these forks.
![Blockchain](http://yuml.me/b0d6b30a.jpg)
<center>Figure 1. Example blockchain with a resolved fork.</center>
<br>
The idea of the blockchain was first proposed in the seminal [Bitcoin
whitepaper](https://bitcoin.org/bitcoin.pdf) by Satoshi Nakamoto. Since then, a
vast number of updates and blockchains have taken shape providing different
functionality or properties to the original blockchain.
## What is Ethereum?
Ethereum is an open blockchain protocol, allowing for the building and use of
decentralized applications that run on blockchain technology. Ethereum was one
of the initial platforms providing Turing-complete code to be run on the
blockchain, allowing for conditional payments to occur through the use of this
code. Since then, Ethereum has advanced to allow for a number of Decentralized
Applications (DApps) to be developed and run completely with the blockchain as
the backbone.
General Ethereum Introduction:
* [What is Ethereum](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
* [Ethereum Introduction](https://github.com/ethereum/wiki/wiki/Ethereum-introduction)
### Proof-of-Work and the current state of Ethereum
Currently, Ethereum is based on the Proof-of-Work model, a Sybil-resilient
mechanism that allows nodes to propose blocks to the network. Although it
provides properties that allow the blockchain to operate in an open, public
(permissionless) network, it faces its own challenges, which in turn impact
the operation of the blockchain.
The main goals in advancing Ethereum are to (1) increase the scalability and
overall transaction processing power of the Ethereum world computer and (2)
find a suitable replacement for Proof-of-Work that still provides the
necessary properties that we need.
* [Proof-of-Work in Cryptocurrencies: an accessible introduction](https://blog.sigmaprime.io/what-is-proof-of-work.html)
## Serenity
Ethereum Serenity refers to a new blockchain system currently under development
by the Ethereum Foundation and the Ethereum community.
As part of the original Ethereum roadmap
[\[1\]](https://blog.ethereum.org/2015/03/03/ethereum-launch-process/)
[\[2\]](http://ethdocs.org/en/latest/introduction/the-homestead-release.html),
the Proof-of-Stake integration falls under **Release Step 4: *Serenity***. With
this, a number of changes are to be made to the current Ethereum protocol to
incorporate some of the new Proof-of-Stake mechanisms as well as address some
of the hindrances faced by the current Proof-of-Work chain.
To advance the current Ethereum, the decision was made to move to a sharded
beacon chain structure, where multiple shard chains operate and interact with
a central beacon chain. The Serenity blockchain consists of 1,025
proof-of-stake blockchains: the "beacon chain" and 1,024 "shard chains".
Ethereum Serenity is also known as "Ethereum 2.0" and "Shasper". We prefer
Serenity as it more accurately reflects the established Ethereum roadmap (plus
we think it's a nice name).
(Be mindful, the specifications change occasionally, so check these to keep up
to date)
* Current Specifications:
* [Danny Ryan's "State of the Spec"](https://notes.ethereum.org/s/BJEZWNoyE) (A nice summary of the current specifications)
* [Ethereum Serenity - Phase 0: Beacon Chain Spec](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/0_beacon-chain.md)
* [Ethereum Serenity - Phase 1: Sharded Data Chains](https://github.com/ethereum/eth2.0-specs/blob/master/specs/core/1_shard-data-chains.md)
* [Beacon Chain - Vitalik Buterin and Justin Drake explain](https://www.youtube.com/watch?v=GAywmwGToUI)
* Understanding Sharding:
* [Prysmatic Labs: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
* Other relevant resources
* [Proof of Stake - Casper FFG](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
* [Justin Drake VDF Devcon4 Talk](https://www.youtube.com/watch?v=zqL_cMlPjOI)
### Beacon Chain
The concept of a beacon chain differs from existing blockchains, such as
Bitcoin and Ethereum, in that it doesn't process transactions per se. Instead,
it maintains a set of bonded (staked) validators and coordinates these to
provide services to a static set of *sub-blockchains* (i.e. shards). Each of
these shard blockchains processes normal transactions (e.g. "Transfer 5 ETH
from A to B") in parallel whilst deferring consensus mechanisms to the beacon
chain.
Major services provided by the beacon chain to its shards include the following:
- A source of entropy, likely using a [RANDAO + VDF
scheme](https://ethresear.ch/t/minimal-vdf-randomness-beacon/3566).
- Validator management, including:
- Inducting and ejecting validators.
- Assigning randomly-shuffled subsets of validators to particular shards (see
the sketch after this list).
- Penalizing and rewarding validators.
- Proof-of-stake consensus for shard chain blocks.
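As a toy illustration of the committee-assignment service above (this is not the actual algorithm; the real logic lives in functions such as `shard_and_committees_for_cycle`), an already-shuffled list of validator indices might be split into per-shard committees like so:

```rust
/// Illustrative only: split an already-shuffled list of validator indices
/// into roughly equal committees, one per shard.
fn committees_per_shard(shuffled: &[usize], shard_count: usize) -> Vec<Vec<usize>> {
    // Ceiling division, so every validator lands in some committee.
    let committee_size = (shuffled.len() + shard_count - 1) / shard_count;
    shuffled
        .chunks(committee_size.max(1))
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    // Ten (pre-shuffled) validator indices spread across four shards.
    let shuffled: Vec<usize> = vec![7, 2, 9, 0, 4, 1, 8, 3, 6, 5];
    for (shard, committee) in committees_per_shard(&shuffled, 4).iter().enumerate() {
        println!("shard {}: {:?}", shard, committee);
    }
}
```

The key property is that the shuffle happens first, so no validator can predict or choose which shard it will serve.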
### Shard Chains
Shards are analogous to CPU cores - they're a resource where transactions can
execute in series (one-after-another). Presently, Ethereum is single-core and
can only _fully_ process one transaction at a time. Sharding allows processing
of multiple transactions simultaneously, greatly increasing the per-second
transaction capacity of Ethereum.
Each shard uses a proof-of-stake consensus mechanism and shares its validators
(stakers) with other shards. The beacon chain rotates validators
pseudo-randomly between different shards. Shards will likely be the basis of
layer-2 transaction processing schemes; however, that is beyond the scope of
this discussion.
### The Proof-of-Work Chain
The present-Ethereum proof-of-work (PoW) chain will host a smart contract that
enables accounts to deposit 32 ETH, a BLS public key, and some [other
parameters](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md#pow-chain-changes),
allowing them to become beacon chain validators. Each beacon chain block will
reference a PoW block hash allowing PoW clients to use the beacon chain as a
source of [Casper FFG finality](https://arxiv.org/abs/1710.09437), if desired.
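As a rough sketch only (the field names and sizes are assumptions drawn from the in-progress specification, not a definitive layout), the data submitted by a prospective validator might be modelled like this:

```rust
/// Illustrative model of a validator deposit; the canonical field set is
/// defined by the eth2.0-specs repository and may change.
pub struct DepositData {
    /// BLS public key identifying the validator (48 bytes, compressed BLS12-381).
    pub pubkey: [u8; 48],
    /// Commitment controlling where funds may eventually be withdrawn.
    pub withdrawal_credentials: [u8; 32],
    /// Deposit amount in Gwei (32 ETH = 32_000_000_000 Gwei).
    pub amount_gwei: u64,
    /// BLS signature proving possession of the secret key for `pubkey`.
    pub proof_of_possession: [u8; 96],
}

fn main() {
    let deposit = DepositData {
        pubkey: [0u8; 48],
        withdrawal_credentials: [0u8; 32],
        amount_gwei: 32_000_000_000,
        proof_of_possession: [0u8; 96],
    };
    println!("depositing {} Gwei", deposit.amount_gwei);
}
```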
It is a requirement that ETH can move freely between shard chains, as well as between
Serenity and present-Ethereum blockchains. The exact mechanics of these transfers remain
an active topic of research and their details are yet to be confirmed.
## Serenity Progress
Ethereum Serenity is not fully specified and a working implementation does not
yet exist. Some teams have demos available which indicate progress, but do not
constitute a complete product. We look forward to providing user functionality
once we are ready to provide a minimum-viable user experience.
The work-in-progress specifications live in the
[ethereum/eth2.0-specs](https://github.com/ethereum/eth2.0-specs) repository.
There is active discussion about the specification in the
[ethereum/sharding](https://gitter.im/ethereum/sharding) gitter channel. A
proof-of-concept implementation in Python is available at
[ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
Presently, the specification focuses almost exclusively on the beacon chain,
as it is the focus of current development efforts. Progress on shard chain
specification will soon follow.