Merge branch 'master' into beacon_block

This commit is contained in:
Paul Hauner 2018-10-18 10:14:55 +11:00
commit 1acfb87e77
No known key found for this signature in database
GPG Key ID: 303E4494BB28068C
17 changed files with 680 additions and 186 deletions

16
.github/ISSUE_TEMPLATE.md vendored Normal file
View File

@ -0,0 +1,16 @@
## Description
Please provide a brief description of the issue.
## Present Behaviour
Describe the present behaviour of the application, with regards to this
issue.
## Expected Behaviour
How _should_ the application behave?
## Steps to resolve
Please describe the steps required to resolve this issue, if known.

12
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file
View File

@ -0,0 +1,12 @@
## Issue Addressed
Which issue # does this PR address?
## Proposed Changes
Please list or describe the changes introduced by this PR.
## Additional Info
Please provide any additional information. For example, future considerations
or information useful for reviewers.

0
.gitmodules vendored
View File

121
CONTRIBUTING.md Normal file
View File

@ -0,0 +1,121 @@
# Contributors Guide
Lighthouse is an open-source Ethereum 2.0 client. We're community driven and
welcome all contributions. We aim to provide a constructive, respectful and fun
environment for collaboration.
We are active contributors to the [Ethereum 2.0 specification](https://github.com/ethereum/eth2.0-specs) and attend all [Eth
2.0 implementers calls](https://github.com/ethereum/eth2.0-pm).
This guide is geared towards beginners. If you're an open-source veteran feel
free to just skim this document and get straight into crushing issues.
## Why Contribute
There are many reasons you might contribute to Lighthouse. For example, you may
wish to:
- contribute to the Ethereum ecosystem.
- establish yourself as a layer-1 Ethereum developer.
- work in the amazing Rust programming language.
- learn how to participate in open-source projects.
- expand your software development skills.
- flex your skills in a public forum to expand your career
opportunities (or simply for the fun of it).
- grow your network by working with core Ethereum developers.
## How to Contribute
Regardless of the reason, the process to begin contributing is very much the
same. We operate as a typical open-source project on GitHub: the
repository [Issues](https://github.com/sigp/lighthouse/issues) is where we
track what needs to be done and [Pull
Requests](https://github.com/sigp/lighthouse/pulls) is where code gets
reviewed. We use [gitter](https://gitter.im/sigp/lighthouse) to chat
informally.
### General Work-Flow
We recommend the following work-flow for contributors:
1. **Find an issue** to work on, either because it's interesting or suitable to
your skill-set. Use comments to communicate your intentions and ask
questions.
2. **Work in a feature branch** of your personal fork
(github.com/YOUR_NAME/lighthouse) of the main repository
(github.com/sigp/lighthouse).
3. Once you feel you have addressed the issue, **create a pull-request** to merge
your changes into the main repository.
4. Wait for the repository maintainers to **review your changes** to ensure the
issue is addressed satisfactorily. Optionally, mention your PR on
[gitter](https://gitter.im/sigp/lighthouse).
5. If the issue is addressed, the repository maintainers will **merge your
pull-request** and you'll be an official contributor!
Generally, you find an issue you'd like to work on and announce your intentions
to start work in a comment on the issue. Then, do your work on a separate
branch (a "feature branch") in your own fork of the main repository. Once
you're happy and you think the issue has been addressed, create a pull request
into the main repository.
### First-time Set-up
First time contributors can get their git environment up and running with these
steps:
1. [Create a
fork](https://help.github.com/articles/fork-a-repo/#fork-an-example-repository)
and [clone
it](https://help.github.com/articles/fork-a-repo/#step-2-create-a-local-clone-of-your-fork)
to your local machine.
2. [Add an _"upstream"_
remote](https://help.github.com/articles/fork-a-repo/#step-3-configure-git-to-sync-your-fork-with-the-original-spoon-knife-repository)
that tracks github.com/sigp/lighthouse using `$ git remote add upstream
https://github.com/sigp/lighthouse.git` (pro-tip: [use SSH](https://help.github.com/articles/connecting-to-github-with-ssh/) instead of HTTPS).
3. Create a new feature branch with `$ git checkout -b your_feature_name`. The
name of your branch isn't critical but it should be short and instructive.
E.g., if you're fixing a bug with serialization, you could name your branch
`fix_serialization_bug`.
4. Commit your changes and push them to your fork with `$ git push origin
your_feature_name`.
5. Go to your fork on github.com and use the web interface to create a pull
request into the sigp/lighthouse repo.
From there, the repository maintainers will review the PR and either accept it
or provide some constructive feedback.
There's a great
[guide](https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/)
by Rob Allen that provides much more detail on each of these steps, if you're
having trouble. As always, jump on [gitter](https://gitter.im/sigp/lighthouse)
if you get stuck.
## FAQs
### I don't think I have anything to add
There's lots to be done and there's all sorts of tasks. You can do anything
from correcting typos through to writing core consensus code. If you reach out,
we'll include you.
### I'm not sure my Rust is good enough
We're open to developers of all levels. If you create a PR and your code
doesn't meet our standards, we'll help you fix it and we'll share the reasoning
with you. Contributing to open-source is a great way to learn.
### I'm not sure I know enough about Ethereum 2.0
No problem, there are plenty of tasks that don't require extensive Ethereum
knowledge. You can learn about Ethereum as you go.
### I'm afraid of making a mistake and looking silly
Don't be. We're all about personal development and constructive feedback. If you
make a mistake and learn from it, everyone wins.
### I don't like the way you do things
Please, make an issue and explain why. We're open to constructive criticism and
will happily change our ways.

View File

@ -33,9 +33,11 @@ name = "lighthouse"
[workspace]
members = [
    "beacon_chain/types",
    "beacon_chain/transition",
    "beacon_chain/utils/bls",
    "beacon_chain/utils/boolean-bitfield",
    "beacon_chain/utils/hashing",
    "beacon_chain/utils/honey-badger-split",
    "beacon_chain/utils/shuffling",
    "beacon_chain/utils/ssz",
    "beacon_chain/utils/ssz_helpers",

239
README.md
View File

@ -2,8 +2,8 @@
[![Build Status](https://travis-ci.org/sigp/lighthouse.svg?branch=master)](https://travis-ci.org/sigp/lighthouse) [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/sigp/lighthouse?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
A work-in-progress, open-source implementation of the Ethereum 2.0 Beacon
Chain, maintained by Sigma Prime.
## Introduction
@ -14,180 +14,184 @@ This readme is split into two major sections:
- [What is Ethereum 2.0](#what-is-ethereum-20): an introduction to Ethereum 2.0.
If you'd like some background on Sigma Prime, please see the [Lighthouse Update
\#00](https://lighthouse.sigmaprime.io/update-00.html) blog post or the
[company website](https://sigmaprime.io).
## Lighthouse Client
Lighthouse is an open-source Ethereum 2.0 client that is currently under
development. Designed as an Ethereum 2.0-only client, Lighthouse will not
re-implement the existing proof-of-work protocol. Maintaining a forward-focus
on Ethereum 2.0 ensures that Lighthouse avoids reproducing the high-quality
work already undertaken by existing projects. As such, Lighthouse will connect
to existing clients, such as
[Geth](https://github.com/ethereum/go-ethereum) or
[Parity-Ethereum](https://github.com/paritytech/parity-ethereum), via RPC to enable
present-Ethereum functionality.
### Goals
The purpose of this project is to further research and development towards a
secure, efficient, and decentralized Ethereum protocol, facilitated by a new
open-source Ethereum 2.0 client.
In addition to implementing a new client, the project seeks to maintain and
improve the Ethereum protocol wherever possible.
### Components
The following list describes some of the components actively under development
by the team:
- **BLS cryptography**: Lighthouse presently uses the [Apache
Milagro](https://milagro.apache.org/) cryptography library to create and
verify BLS aggregate signatures. BLS signatures are core to Eth 2.0 as they
allow the signatures of many validators to be compressed into a constant 96
bytes and efficiently verified. The Lighthouse project is presently
maintaining its own [BLS aggregates
library](https://github.com/sigp/signature-schemes), gratefully forked from
[@lovesh](https://github.com/lovesh).
- **DoS-resistant block pre-processing**: Processing blocks in proof-of-stake
is more resource intensive than proof-of-work. As such, clients need to
ensure that bad blocks can be rejected as efficiently as possible. At
present, blocks having 10 million ETH staked can be processed in 0.006
seconds, and invalid blocks are rejected even more quickly. See [issue
#103](https://github.com/ethereum/beacon_chain/issues/103) on
[ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
- **P2P networking**: Eth 2.0 will likely use the [libp2p
framework](https://libp2p.io/). Lighthouse aims to work alongside
[Parity](https://www.parity.io/) to ensure
[libp2p-rust](https://github.com/libp2p/rust-libp2p) is fit-for-purpose.
- **Validator duties**: The project involves development of "validator
services" for users who wish to stake ETH. To fulfill their duties,
validators require a consistent view of the chain and the ability to vote
upon blocks from both shard and beacon chains.
- **New serialization formats**: Lighthouse is working alongside researchers
from the Ethereum Foundation to develop *simpleserialize* (SSZ), a
purpose-built serialization format for sending information across a network.
Check out the [SSZ
implementation](https://github.com/sigp/lighthouse/tree/master/beacon_chain/utils/ssz)
and this
[research](https://github.com/sigp/serialization_sandbox/blob/report/report/serialization_report.md)
on serialization formats for more information.
- **Casper FFG fork-choice**: The [Casper
FFG](https://arxiv.org/abs/1710.09437) fork-choice rules allow the chain to
select a canonical chain in the case of a fork.
- **Efficient state transition logic**: State transition logic governs
updates to the validator set as validators log in/out, penalizes/rewards
validators, rotates validators across shards, and implements other core tasks.
- **Fuzzing and testing environments**: Implementation of lab environments with
continuous integration (CI) workflows, providing automated security analysis.
In addition to these components we are also working on database schemas, RPC
frameworks, specification development, database optimizations (e.g.,
bloom-filters), and tons of other interesting stuff (at least we think so).
### Contributing
**Lighthouse welcomes contributors with open-arms.**
Layer-1 infrastructure is a critical component for the ecosystem and relies
heavily on contributions from the community. Building Ethereum 2.0 is a huge
task and we refuse to conduct an inappropriate ICO or charge licensing fees.
Instead, we fund development through grants and support from Sigma Prime.
If you would like to learn more about Ethereum 2.0 and/or
[Rust](https://www.rust-lang.org/), we are more than happy to on-board you
and assign you some tasks. We aim to be as accepting and understanding as
possible; we are more than happy to up-skill contributors in exchange for their
assistance with the project.
Alternatively, if you are an ETH/Rust veteran, we'd love your input. We're
always looking for the best way to implement things and welcome all
respectful criticisms.
If you'd like to contribute, try having a look through the [open
issues](https://github.com/sigp/lighthouse/issues) (tip: look for the [good
first
issue](https://github.com/sigp/lighthouse/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
tag) and ping us on the [gitter](https://gitter.im/sigp/lighthouse) channel. We need
your support!
### Running
**NOTE: The cryptography libraries used in this implementation are
experimental. As such all cryptography is assumed to be insecure.**
This code-base is still very much under-development and does not provide any
user-facing functionality. For developers and researchers, there are several
tests and benchmarks which may be of interest.
To run tests, use:
```
$ cargo test --all
```
To run benchmarks, use:
```
$ cargo bench --all
```
Lighthouse presently runs on Rust `stable`, however, benchmarks currently require the
`nightly` version.
### Engineering Ethos
Lighthouse aims to produce many small, easily-tested components, each separated
into individual crates wherever possible.
Generally, tests can be kept in the same file, as is typical in Rust.
Integration tests should be placed in the `tests` directory in the crate's
root. Particularly large (line-count) tests should be placed into a separate
file.
A function is not considered complete until a test exists for it. We produce
tests to protect against regression (accidentally breaking things) and to
provide examples that help readers of the code base understand how functions
should (or should not) be used.
Each pull request is to be reviewed by at least one "core developer" (i.e.,
someone with write-access to the repository). This helps to ensure bugs are
detected, consistency is maintained, and responsibility for errors is dispersed.
Discussion must be respectful and intellectual. Have fun and make jokes, but
always respect the limits of other people.
### Directory Structure
Here we provide an overview of the directory structure:
- `/beacon_chain`: contains logic derived directly from the specification.
E.g., shuffling algorithms, state transition logic and structs, block
validation, BLS crypto, etc.
- `/lighthouse`: contains logic specific to this client implementation. E.g.,
CLI parsing, RPC end-points, databases, etc.
- `/network-libp2p`: contains a proof-of-concept libp2p implementation. Will be
replaced once research around p2p has been finalized.
## Contact
The best place for discussion is the [sigp/lighthouse gitter](https://gitter.im/sigp/lighthouse).
Ping @paulhauner or @AgeManning to get the quickest response.
# What is Ethereum 2.0
Ethereum 2.0 refers to a new blockchain system currently under development by
the Ethereum Foundation and the Ethereum community. The Ethereum 2.0 blockchain
consists of 1,025 proof-of-stake blockchains. This includes the "beacon chain"
and 1,024 "shard chains".
## Beacon Chain
The concept of a beacon chain differs from existing blockchains, such as
Bitcoin and Ethereum, in that it doesn't process transactions per se. Instead,
it maintains a set of bonded (staked) validators and coordinates these to
provide services to a static set of *sub-blockchains* (i.e. shards). Each of
these shard blockchains processes normal transactions (e.g. "Transfer 5 ETH
from A to B") in parallel whilst deferring consensus mechanisms to the beacon
chain.
Major services provided by the beacon chain to its shards include the following:
@ -195,53 +199,54 @@ Major services provided by the beacon chain to its shards include the following:
scheme](https://ethresear.ch/t/minimal-vdf-randomness-beacon/3566).
- Validator management, including:
    - Inducting and ejecting validators.
    - Assigning randomly-shuffled subsets of validators to particular shards.
    - Penalizing and rewarding validators.
- Proof-of-stake consensus for shard chain blocks.
## Shard Chains
Shards are analogous to CPU cores - they're a resource where transactions can
execute in series (one-after-another). Presently, Ethereum is single-core and
can only _fully_ process one transaction at a time. Sharding allows processing
of multiple transactions simultaneously, greatly increasing the per-second
transaction capacity of Ethereum.
Each shard uses a proof-of-stake consensus mechanism and shares its validators
(stakers) with other shards. The beacon chain rotates validators
pseudo-randomly between different shards. Shards will likely be the basis of
layer-2 transaction processing schemes, however, that is not in scope of this
discussion.
## The Proof-of-Work Chain
The present-Ethereum proof-of-work (PoW) chain will host a smart contract that
enables accounts to deposit 32 ETH, a BLS public key, and some [other
parameters](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md#pow-chain-changes),
allowing them to become beacon chain validators. Each beacon chain block will
reference a PoW block hash, allowing PoW clients to use the beacon chain as a
source of [Casper FFG finality](https://arxiv.org/abs/1710.09437), if desired.
It is a requirement that ETH can move freely between shard chains, as well as between
Eth 2.0 and present-Ethereum blockchains. The exact mechanics of these transfers remain
an active topic of research and their details are yet to be confirmed.
## Ethereum 2.0 Progress
Ethereum 2.0 is not fully specified and a working implementation does not yet
exist. Some teams have demos available which indicate progress, but do not
constitute a complete product. We look forward to providing user functionality
once we are ready to provide a minimum-viable user experience.
The work-in-progress Eth 2.0 specification lives
[here](https://github.com/ethereum/eth2.0-specs/blob/master/specs/casper_sharding_v2.1.md)
in the [ethereum/eth2.0-specs](https://github.com/ethereum/eth2.0-specs)
repository. The spec is still in a draft phase, however there are several teams
basing their Eth 2.0 implementations upon it while the Ethereum Foundation research
team continue to fill in the gaps. There is active discussion about the specification in the
[ethereum/sharding](https://gitter.im/ethereum/sharding) gitter channel. A
proof-of-concept implementation in Python is available at
[ethereum/beacon_chain](https://github.com/ethereum/beacon_chain).
Presently, the specification focuses almost exclusively on the beacon chain,
as it is the focus of current development efforts. Progress on shard chain
specification will soon follow.

View File

@ -0,0 +1,9 @@
[package]
name = "transition"
version = "0.1.0"
authors = ["Age Manning <Age@AgeManning.com>"]
[dependencies]
honey-badger-split = { path = "../utils/honey-badger-split" }
types = { path = "../types" }
shuffling = { path = "../utils/shuffling" }

View File

@ -0,0 +1,6 @@
use super::honey_badger_split;
use super::types;
use super::TransitionError;
use super::shuffling::shuffle;
pub mod validator;

View File

@ -0,0 +1,263 @@
use super::honey_badger_split::SplitExt;
use super::types::{ShardAndCommittee, ValidatorRecord, ChainConfig};
use super::TransitionError;
use super::shuffle;
use std::cmp::min;
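/// The delegation result for one full cycle: the outer `Vec` has one entry per slot,
/// and each inner `Vec` holds the `ShardAndCommittee`s assigned to that slot.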
type DelegatedCycle = Vec<Vec<ShardAndCommittee>>;
/// Produce a vector of validator indices for validators that are active during the
/// supplied `dynasty` (i.e., whose start and end dynasties span it).
fn active_validator_indicies(
dynasty: u64,
validators: &[ValidatorRecord])
-> Vec<usize>
{
validators.iter()
.enumerate()
.filter_map(|(i, validator)| {
if (validator.start_dynasty <= dynasty) &&
(dynasty < validator.end_dynasty)
{
Some(i)
} else {
None
}
})
.collect()
}
/// Delegates active validators into slots for a given cycle, given a random seed.
/// Returns a vector of `ShardAndCommittee` vectors representing the shards and committees for
/// each slot.
/// References `get_new_shuffling` (Ethereum 2.1 specification).
pub fn delegate_validators(
seed: &[u8],
validators: &[ValidatorRecord],
dynasty: u64,
crosslinking_shard_start: u16,
config: &ChainConfig)
-> Result<DelegatedCycle, TransitionError>
{
let shuffled_validator_indices = {
let mut validator_indices = active_validator_indicies(dynasty, validators);
match shuffle(seed, validator_indices) {
Ok(shuffled) => shuffled,
_ => return Err(TransitionError::InvalidInput(
String::from("Shuffle list length exceed.")))
}
};
let shard_indices: Vec<usize> = (0_usize..config.shard_count as usize).into_iter().collect();
let crosslinking_shard_start = crosslinking_shard_start as usize;
let cycle_length = config.cycle_length as usize;
let min_committee_size = config.min_committee_size as usize;
generate_cycle(
&shuffled_validator_indices,
&shard_indices,
crosslinking_shard_start,
cycle_length,
min_committee_size)
}
/// Given the validator list, delegates the validators into slots and committees for a given cycle.
fn generate_cycle(
validator_indices: &[usize],
shard_indices: &[usize],
crosslinking_shard_start: usize,
cycle_length: usize,
min_committee_size: usize)
-> Result<DelegatedCycle, TransitionError>
{
let validator_count = validator_indices.len();
let shard_count = shard_indices.len();
if shard_count / cycle_length == 0 {
return Err(TransitionError::InvalidInput(String::from("Number of
shards needs to be greater than
cycle length")));
}
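// Decide how validators map onto committees: if there are enough active validators
// to give every slot a committee of at least `min_committee_size`, allow several
// committees per slot (bounded by the shard count); otherwise keep one committee
// per slot and stretch each shard assignment across multiple slots.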
let (committees_per_slot, slots_per_committee) = {
if validator_count >= cycle_length * min_committee_size {
let committees_per_slot = min(validator_count / cycle_length /
(min_committee_size * 2) + 1, shard_count /
cycle_length);
let slots_per_committee = 1;
(committees_per_slot, slots_per_committee)
} else {
let committees_per_slot = 1;
let mut slots_per_committee = 1;
while (validator_count * slots_per_committee < cycle_length * min_committee_size) &
(slots_per_committee < cycle_length) {
slots_per_committee *= 2;
}
(committees_per_slot, slots_per_committee)
}
};
let cycle = validator_indices.honey_badger_split(cycle_length)
.enumerate()
.map(|(i, slot_indices)| {
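// The shard id window advances with the slot index; when `slots_per_committee > 1`
// the same shard ids are reused across several consecutive slots.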
let shard_id_start = crosslinking_shard_start + i * committees_per_slot / slots_per_committee;
slot_indices.honey_badger_split(committees_per_slot)
.enumerate()
.map(|(j, shard_indices)| {
ShardAndCommittee{
shard_id: ((shard_id_start + j) % shard_count) as u16,
committee: shard_indices.to_vec(),
}
})
.collect()
})
.collect();
Ok(cycle)
}
#[cfg(test)]
mod tests {
use super::*;
fn generate_cycle_helper(
validator_count: &usize,
shard_count: &usize,
crosslinking_shard_start: usize,
cycle_length: usize,
min_committee_size: usize)
-> (Vec<usize>, Vec<usize>, Result<DelegatedCycle, TransitionError>)
{
let validator_indices: Vec<usize> = (0_usize..*validator_count).into_iter().collect();
let shard_indices: Vec<usize> = (0_usize..*shard_count).into_iter().collect();
let result = generate_cycle(
&validator_indices,
&shard_indices,
crosslinking_shard_start,
cycle_length,
min_committee_size);
(validator_indices, shard_indices, result)
}
#[allow(dead_code)]
fn print_cycle(cycle: &DelegatedCycle) {
cycle.iter()
.enumerate()
.for_each(|(i, slot)| {
println!("slot {:?}", &i);
slot.iter()
.enumerate()
.for_each(|(i, sac)| {
println!("#{:?}\tshard_id={}\tcommittee.len()={}",
&i, &sac.shard_id, &sac.committee.len())
})
});
}
fn flatten_validators(cycle: &DelegatedCycle)
-> Vec<usize>
{
let mut flattened = vec![];
for slot in cycle.iter() {
for sac in slot.iter() {
for validator in sac.committee.iter() {
flattened.push(*validator);
}
}
}
flattened
}
fn flatten_and_dedup_shards(cycle: &DelegatedCycle)
-> Vec<usize>
{
let mut flattened = vec![];
for slot in cycle.iter() {
for sac in slot.iter() {
flattened.push(sac.shard_id as usize);
}
}
flattened.dedup();
flattened
}
fn flatten_shards_in_slots(cycle: &DelegatedCycle)
-> Vec<Vec<usize>>
{
let mut shards_in_slots: Vec<Vec<usize>> = vec![];
for slot in cycle.iter() {
let mut shards: Vec<usize> = vec![];
for sac in slot.iter() {
shards.push(sac.shard_id as usize);
}
shards_in_slots.push(shards);
}
shards_in_slots
}
// TODO: Improve these tests to check committee lengths
#[test]
fn test_generate_cycle() {
let validator_count: usize = 100;
let shard_count: usize = 20;
let crosslinking_shard_start: usize = 0;
let cycle_length: usize = 20;
let min_committee_size: usize = 10;
let (validators, shards, result) = generate_cycle_helper(
&validator_count,
&shard_count,
crosslinking_shard_start,
cycle_length,
min_committee_size);
let cycle = result.unwrap();
let assigned_validators = flatten_validators(&cycle);
let assigned_shards = flatten_and_dedup_shards(&cycle);
let shards_in_slots = flatten_shards_in_slots(&cycle);
let expected_shards = shards.get(0..10).unwrap();
assert_eq!(assigned_validators, validators, "Validator assignment incorrect");
assert_eq!(assigned_shards, expected_shards, "Shard assignment incorrect");
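// 100 validators is fewer than cycle_length * min_committee_size (200), so the
// else-branch applies: one committee per slot with `slots_per_committee == 2`,
// meaning the shard id advances only every second slot.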
let expected_shards_in_slots: Vec<Vec<usize>> = vec![
vec![0], vec![0], // Each line is 2 slots..
vec![1], vec![1],
vec![2], vec![2],
vec![3], vec![3],
vec![4], vec![4],
vec![5], vec![5],
vec![6], vec![6],
vec![7], vec![7],
vec![8], vec![8],
vec![9], vec![9],
];
// assert!(compare_shards_in_slots(&cycle, &expected_shards_in_slots));
assert_eq!(expected_shards_in_slots, shards_in_slots, "Shard assignment incorrect.")
}
#[test]
// Check that the committees per slot is upper bounded by shard count
fn test_generate_cycle_committees_bounded() {
let validator_count: usize = 523;
let shard_count: usize = 31;
let crosslinking_shard_start: usize = 0;
let cycle_length: usize = 11;
let min_committee_size: usize = 5;
let (validators, shards, result) = generate_cycle_helper(
&validator_count,
&shard_count,
crosslinking_shard_start,
cycle_length,
min_committee_size);
let cycle = result.unwrap();
let assigned_validators = flatten_validators(&cycle);
let assigned_shards = flatten_and_dedup_shards(&cycle);
let shards_in_slots = flatten_shards_in_slots(&cycle);
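// Here `committees_per_slot` is capped by shard_count / cycle_length (31 / 11 == 2),
// so the 11 slots use the first 22 shard ids, two per slot.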
let expected_shards = shards.get(0..22).unwrap();
let expected_shards_in_slots: Vec<Vec<usize>> =
(0_usize..11_usize) .map(|x| vec![2*x,2*x+1]).collect();
assert_eq!(assigned_validators, validators, "Validator assignment incorrect");
assert_eq!(assigned_shards, expected_shards, "Shard assignment incorrect");
// assert!(compare_shards_in_slots(&cycle, &expected_shards_in_slots));
assert_eq!(expected_shards_in_slots, shards_in_slots, "Shard assignment incorrect.")
}
}

View File

@ -0,0 +1,10 @@
extern crate honey_badger_split;
extern crate types;
extern crate shuffling;
pub mod delegation;
#[derive(Debug)]
pub enum TransitionError {
InvalidInput(String),
}

View File

@ -20,6 +20,20 @@ impl ChainConfig {
}
}
pub fn validate(&self) -> bool {
// criteria that ensure the config is valid
// shard_count / cycle_length > 0 otherwise validator delegation
// will fail.
if self.shard_count / self.cycle_length as u16 == 0 {
return false;
}
true
}
#[cfg(test)]
pub fn super_fast_tests() -> Self {
Self {

View File

@ -1,62 +0,0 @@
use super::utils::errors::ParameterError;
use super::utils::types::Hash256;
/*
* Work-in-progress function: not ready for review.
*/
pub fn get_block_hash(
active_state_recent_block_hashes: &[Hash256],
current_block_slot: u64,
slot: u64,
cycle_length: u64, // convert from standard u8
) -> Result<Hash256, ParameterError> {
// active_state must have exactly 2 * cycle_length hashes
assert_error!(
active_state_recent_block_hashes.len() as u64 == cycle_length * 2,
ParameterError::InvalidInput(String::from(
"active state has incorrect number of block hashes"
))
);
let state_start_slot = (current_block_slot)
.checked_sub(cycle_length * 2)
.unwrap_or(0);
assert_error!(
(state_start_slot <= slot) && (slot < current_block_slot),
ParameterError::InvalidInput(String::from("incorrect slot number"))
);
let index = 2 * cycle_length + slot - current_block_slot; // should always be positive
Ok(active_state_recent_block_hashes[index as usize])
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_get_block_hash() {
let block_slot: u64 = 10;
let slot: u64 = 3;
let cycle_length: u64 = 8;
let mut block_hashes: Vec<Hash256> = Vec::new();
for _i in 0..2 * cycle_length {
block_hashes.push(Hash256::random());
}
let result = get_block_hash(
&block_hashes,
block_slot,
slot,
cycle_length)
.unwrap();
assert_eq!(
result,
block_hashes[(2 * cycle_length + slot - block_slot) as usize]
);
}
}

View File

@ -1,3 +0,0 @@
mod block_hash;
use super::utils;

View File

@ -0,0 +1,6 @@
[package]
name = "honey-badger-split"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
[dependencies]

View File

@ -0,0 +1,85 @@
/// A function for splitting a list into N pieces.
///
/// We have titled it the "honey badger split" because of its robustness. It don't care.
/// Iterator for the honey_badger_split function
pub struct Split<'a, T: 'a> {
n: usize,
current_pos: usize,
list: &'a [T],
list_length: usize
}
impl<'a,T> Iterator for Split<'a, T> {
type Item = &'a [T];
fn next(&mut self) -> Option<Self::Item> {
self.current_pos +=1;
if self.current_pos <= self.n {
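// The i-th chunk (1-indexed) spans indices [len * (i - 1) / n, len * i / n), so the
// list is always divided into exactly `n` (possibly empty) contiguous chunks.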
match self.list.get(self.list_length*(self.current_pos-1)/self.n..self.list_length*self.current_pos/self.n) {
Some(v) => Some(v),
None => unreachable!()
}
}
else {
None
}
}
}
/// Splits a slice into `n` chunks. All positive n values are applicable,
/// hence the honey_badger prefix.
///
/// Returns an iterator over the original list.
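///
/// For example, `[0, 1, 2, 3].honey_badger_split(2)` yields `[0, 1]` then `[2, 3]`,
/// while splitting the same slice into 6 chunks yields some empty chunks.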
pub trait SplitExt<T> {
fn honey_badger_split(&self, n: usize) -> Split<T>;
}
impl<T> SplitExt<T> for [T] {
fn honey_badger_split(&self, n: usize) -> Split<T> {
Split {
n,
current_pos: 0,
list: &self,
list_length: self.len(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_honey_badger_split() {
/*
* These test cases are generated from the eth2.0 spec `split()`
* function at commit cbd254a.
*/
let input: Vec<usize> = vec![0, 1, 2, 3];
let output: Vec<&[usize]> = input.honey_badger_split(2).collect();
assert_eq!(output, vec![&[0, 1], &[2, 3]]);
let input: Vec<usize> = vec![0, 1, 2, 3];
let output: Vec<&[usize]> = input.honey_badger_split(6).collect();
let expected: Vec<&[usize]> = vec![&[], &[0], &[1], &[], &[2], &[3]];
assert_eq!(output, expected);
let input: Vec<usize> = vec![0, 1, 2, 3];
let output: Vec<&[usize]> = input.honey_badger_split(10).collect();
let expected: Vec<&[usize]> = vec![&[], &[], &[0], &[], &[1], &[], &[], &[2], &[], &[3]];
assert_eq!(output, expected);
let input: Vec<usize> = vec![0];
let output: Vec<&[usize]> = input.honey_badger_split(5).collect();
let expected: Vec<&[usize]> = vec![&[], &[], &[], &[], &[0]];
assert_eq!(output, expected);
let input: Vec<usize> = vec![0, 1, 2];
let output: Vec<&[usize]> = input.honey_badger_split(2).collect();
let expected: Vec<&[usize]> = vec![&[0], &[1, 2]];
assert_eq!(output, expected);
}
}

View File

@ -29,6 +29,7 @@ use super::signature_verification::{
#[derive(Debug,PartialEq)]
pub enum AttestationValidationError {
ParentSlotTooHigh,
ParentSlotTooLow,
BlockSlotTooHigh,
BlockSlotTooLow,
JustifiedSlotIncorrect,
@ -94,11 +95,11 @@ impl<T> AttestationValidationContext<T>
/*
* The slot of this attestation must not be more than cycle_length + 1 distance
* from the parent_slot of the block that contained it.
*/
if a.slot < self.parent_block_slot
.saturating_sub(u64::from(self.cycle_length).saturating_add(1)) {
return Err(AttestationValidationError::ParentSlotTooLow);
}
/*

View File

@ -42,6 +42,15 @@ fn test_attestation_validation_invalid_parent_slot_too_high() {
assert_eq!(result, Err(AttestationValidationError::ParentSlotTooHigh));
}
#[test]
fn test_attestation_validation_invalid_parent_slot_too_low() {
let mut rig = generic_rig();
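// One slot below the lowest accepted value (parent_block_slot - cycle_length - 1)
// must be rejected as ParentSlotTooLow.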
rig.attestation.slot = rig.context.parent_block_slot - u64::from(rig.context.cycle_length) - 2;
let result = rig.context.validate_attestation(&rig.attestation);
assert_eq!(result, Err(AttestationValidationError::ParentSlotTooLow));
}
#[test]
fn test_attestation_validation_invalid_block_slot_too_high() {
let mut rig = generic_rig();
@ -56,7 +65,7 @@ fn test_attestation_validation_invalid_block_slot_too_high() {
fn test_attestation_validation_invalid_block_slot_too_low() {
let mut rig = generic_rig();
rig.context.block_slot = rig.context.block_slot + u64::from(rig.context.cycle_length);
let result = rig.context.validate_attestation(&rig.attestation);
assert_eq!(result, Err(AttestationValidationError::BlockSlotTooLow));
}