use std::ops::RangeInclusive;

use types::{Eth1Data, Hash256};

#[derive(Debug, PartialEq, Clone)]
pub enum Error {
    /// The timestamp of each block must be equal to or later than the timestamp of the block
    /// prior to it.
    InconsistentTimestamp { parent: u64, child: u64 },
    /// Some `Eth1Block` was provided with the same block number but different data. The source
    /// of eth1 data is inconsistent.
    Conflicting(u64),
    /// The given block was not one block number higher than the highest known block number.
    NonConsecutive { given: u64, expected: u64 },
    /// Some invariant was violated; there is likely a bug in the code.
    Internal(String),
}

/// A block of the eth1 chain.
///
/// Contains all information required to add a `BlockCache` entry.
#[derive(Debug, PartialEq, Clone, Eq, Hash)]
pub struct Eth1Block {
    pub hash: Hash256,
    pub timestamp: u64,
    pub number: u64,
    pub deposit_root: Option<Hash256>,
    pub deposit_count: Option<u64>,
}

impl Eth1Block {
    /// Returns an `Eth1Data` built from `self`, or `None` if either the deposit root or the
    /// deposit count is unknown (see the illustrative `eth1_data_conversion` test below).
    pub fn eth1_data(self) -> Option<Eth1Data> {
        Some(Eth1Data {
            deposit_root: self.deposit_root?,
            deposit_count: self.deposit_count?,
            block_hash: self.hash,
        })
    }
}

/// Stores block and deposit contract information and provides queries based upon the block
/// timestamp.
#[derive(Debug, PartialEq, Clone, Default)]
pub struct BlockCache {
    blocks: Vec<Eth1Block>,
}

impl BlockCache {
    /// Returns the number of blocks stored in `self`.
    pub fn len(&self) -> usize {
        self.blocks.len()
    }

    /// True if the cache does not store any blocks.
    pub fn is_empty(&self) -> bool {
        self.blocks.is_empty()
    }

    /// Returns the timestamp of the earliest block in the cache (if any).
    pub fn earliest_block_timestamp(&self) -> Option<u64> {
        self.blocks.first().map(|block| block.timestamp)
    }

    /// Returns the timestamp of the latest block in the cache (if any).
    pub fn latest_block_timestamp(&self) -> Option<u64> {
        self.blocks.last().map(|block| block.timestamp)
    }

    /// Returns the lowest block number stored.
    pub fn lowest_block_number(&self) -> Option<u64> {
        self.blocks.first().map(|block| block.number)
    }

    /// Returns the highest block number stored.
    pub fn highest_block_number(&self) -> Option<u64> {
        self.blocks.last().map(|block| block.number)
    }

    /// Returns an iterator over all blocks.
    ///
    /// Blocks are guaranteed to be returned with:
    ///
    /// - Monotonically increasing block numbers.
    /// - Non-decreasing block timestamps (consecutive blocks may share a timestamp).
    pub fn iter(&self) -> impl DoubleEndedIterator<Item = &Eth1Block> + Clone {
        self.blocks.iter()
    }

    /// Shortens the cache, keeping the latest (by block number) `len` blocks while dropping the
    /// rest.
    ///
    /// If `len` is greater than the vector's current length, this has no effect.
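    ///
    /// ## Example (illustrative)
    ///
    /// A sketch of the "keep the latest blocks" behaviour, assuming a cache already holding
    /// blocks numbered `0..=15`. The values are assumptions made for illustration, so the
    /// snippet is marked `ignore`.
    ///
    /// ```ignore
    /// cache.truncate(4);
    ///
    /// // The earliest blocks are dropped; the latest four remain.
    /// assert_eq!(cache.lowest_block_number(), Some(12));
    /// assert_eq!(cache.highest_block_number(), Some(15));
    /// ```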
    pub fn truncate(&mut self, len: usize) {
        if len < self.blocks.len() {
            self.blocks = self.blocks.split_off(self.blocks.len() - len);
        }
    }

    /// Returns the range of block numbers stored in the block cache. All blocks in this range can
    /// be accessed.
    fn available_block_numbers(&self) -> Option<RangeInclusive<u64>> {
        Some(self.blocks.first()?.number..=self.blocks.last()?.number)
    }

    /// Returns a block with the corresponding number, if any.
    pub fn block_by_number(&self, block_number: u64) -> Option<&Eth1Block> {
        self.blocks.get(
            self.blocks
                .as_slice()
                .binary_search_by(|block| block.number.cmp(&block_number))
                .ok()?,
        )
    }

    /// Insert an `Eth1Block` into `self`, allowing future queries.
    ///
    /// Allows inserting either:
    ///
    /// - The root block (i.e., any block if there are no existing blocks), or,
    /// - An immediate child of the most recent (highest block number) block.
    ///
    /// ## Errors
    ///
    /// - If the cache is not empty and `block.number - 1` is not already in `self`.
    /// - If `block.number` is already in `self`, but the supplied block is not identical to the
    ///   stored one.
    /// - If `block.timestamp` is prior to the parent's timestamp.
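    ///
    /// ## Example (illustrative)
    ///
    /// A minimal sketch of the insertion rules. The `eth1::block_cache` import path and the
    /// block values are assumptions made for illustration, so the snippet is marked `ignore`.
    ///
    /// ```ignore
    /// use eth1::block_cache::{BlockCache, Eth1Block};
    /// use types::Hash256;
    ///
    /// let mut cache = BlockCache::default();
    ///
    /// let block_0 = Eth1Block {
    ///     hash: Hash256::zero(),
    ///     timestamp: 0,
    ///     number: 0,
    ///     deposit_root: None,
    ///     deposit_count: None,
    /// };
    ///
    /// // Any block may become the root of an empty cache.
    /// cache.insert_root_or_child(block_0.clone()).unwrap();
    ///
    /// // Re-inserting an identical block is a no-op, not an error.
    /// cache.insert_root_or_child(block_0.clone()).unwrap();
    ///
    /// // Skipping ahead is rejected with `Error::NonConsecutive`.
    /// let block_2 = Eth1Block { number: 2, ..block_0 };
    /// assert!(cache.insert_root_or_child(block_2).is_err());
    /// ```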
    pub fn insert_root_or_child(&mut self, block: Eth1Block) -> Result<(), Error> {
        let expected_block_number = self
            .highest_block_number()
            .map(|n| n + 1)
            .unwrap_or_else(|| block.number);

        // If there are already some cached blocks, check to see if the new block number is one
        // of them.
        //
        // If the block is already known, check to see that the given block is identical to it.
        // If not, raise an inconsistency error. This is most likely caused by some fork on the
        // eth1 chain.
        if let Some(local) = self.available_block_numbers() {
            if local.contains(&block.number) {
                let known_block = self.block_by_number(block.number).ok_or_else(|| {
                    Error::Internal("An expected block was not present".to_string())
                })?;

                if known_block == &block {
                    return Ok(());
                } else {
                    return Err(Error::Conflicting(block.number));
                };
            }
        }

        // Only permit a block when it is either:
        //
        // - The first block inserted.
        // - Exactly one block number higher than the highest known block number.
        if block.number != expected_block_number {
            return Err(Error::NonConsecutive {
                given: block.number,
                expected: expected_block_number,
            });
        }

        // If the block is not the first block inserted, ensure that its timestamp is not prior
        // to its parent's.
        if let Some(previous_block) = self.blocks.last() {
            if previous_block.timestamp > block.timestamp {
                return Err(Error::InconsistentTimestamp {
                    parent: previous_block.timestamp,
                    child: block.timestamp,
                });
            }
        }

        self.blocks.push(block);

        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    fn get_block(i: u64, interval_secs: u64) -> Eth1Block {
        Eth1Block {
            hash: Hash256::from_low_u64_be(i),
            timestamp: i * interval_secs,
            number: i,
            deposit_root: Some(Hash256::from_low_u64_be(i << 32)),
            deposit_count: Some(i),
        }
    }

    fn get_blocks(n: usize, interval_secs: u64) -> Vec<Eth1Block> {
        (0..n as u64).map(|i| get_block(i, interval_secs)).collect()
    }

    fn insert(cache: &mut BlockCache, s: Eth1Block) -> Result<(), Error> {
        cache.insert_root_or_child(s)
    }

    #[test]
    fn truncate() {
        let n = 16;
        let blocks = get_blocks(n, 10);

        let mut cache = BlockCache::default();

        for block in blocks {
            insert(&mut cache, block.clone()).expect("should add consecutive blocks");
        }

        for len in vec![0, 1, 2, 3, 4, 8, 15, 16] {
            let mut cache = cache.clone();

            cache.truncate(len);

            assert_eq!(
                cache.blocks.len(),
                len,
                "should truncate to length: {}",
                len
            );
        }

        let mut cache_2 = cache.clone();
        cache_2.truncate(17);
        assert_eq!(
            cache_2.blocks.len(),
            n,
            "truncate to larger than n should be a no-op"
        );
    }
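
    // An illustrative sketch of `Eth1Block::eth1_data`, reusing the `get_block` helper above:
    // the conversion only succeeds when both the deposit root and the deposit count are known.
    #[test]
    fn eth1_data_conversion() {
        let block = get_block(1, 10);

        let eth1_data = block
            .clone()
            .eth1_data()
            .expect("should convert when deposit data is present");
        assert_eq!(eth1_data.block_hash, block.hash);
        assert_eq!(eth1_data.deposit_root, block.deposit_root.unwrap());
        assert_eq!(eth1_data.deposit_count, block.deposit_count.unwrap());

        // Without a known deposit root there is no complete `Eth1Data`.
        let incomplete = Eth1Block {
            deposit_root: None,
            ..get_block(2, 10)
        };
        assert!(incomplete.eth1_data().is_none());
    }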

    #[test]
    fn inserts() {
        let n = 16;
        let blocks = get_blocks(n, 10);

        let mut cache = BlockCache::default();

        for block in blocks {
            insert(&mut cache, block.clone()).expect("should add consecutive blocks");
        }

        // No error for re-adding a block identical to one that exists.
        assert!(insert(&mut cache, get_block(n as u64 - 1, 10)).is_ok());

        // Error for re-adding a block that is different to the one that exists.
        assert!(insert(&mut cache, get_block(n as u64 - 1, 11)).is_err());

        // Error for adding non-consecutive blocks.
        assert!(insert(&mut cache, get_block(n as u64 + 1, 10)).is_err());
        assert!(insert(&mut cache, get_block(n as u64 + 2, 10)).is_err());

        // Error for adding a timestamp prior to the previous block's timestamp.
        assert!(insert(&mut cache, get_block(n as u64, 1)).is_err());
        // Double check that the previous insert was rejected only because of the timestamp.
        assert!(insert(&mut cache, get_block(n as u64, 10)).is_ok());
    }
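
    // A small sketch of the read-only query helpers, reusing the `get_blocks` helper above. The
    // expected values follow directly from how `get_block` derives numbers and timestamps.
    #[test]
    fn queries() {
        let blocks = get_blocks(4, 10);

        let mut cache = BlockCache::default();

        for block in blocks {
            insert(&mut cache, block).expect("should add consecutive blocks");
        }

        assert_eq!(cache.len(), 4);
        assert!(!cache.is_empty());

        assert_eq!(cache.lowest_block_number(), Some(0));
        assert_eq!(cache.highest_block_number(), Some(3));
        assert_eq!(cache.earliest_block_timestamp(), Some(0));
        assert_eq!(cache.latest_block_timestamp(), Some(30));

        // `block_by_number` only returns blocks that are actually stored.
        assert_eq!(cache.block_by_number(2).map(|block| block.timestamp), Some(20));
        assert!(cache.block_by_number(4).is_none());

        // `iter` returns blocks with monotonically increasing block numbers.
        let numbers: Vec<u64> = cache.iter().map(|block| block.number).collect();
        assert_eq!(numbers, vec![0, 1, 2, 3]);
    }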

    #[test]
    fn duplicate_timestamp() {
        let mut blocks = get_blocks(7, 10);

        blocks[0].timestamp = 0;
        blocks[1].timestamp = 10;
        blocks[2].timestamp = 10;
        blocks[3].timestamp = 20;
        blocks[4].timestamp = 30;
        blocks[5].timestamp = 40;
        blocks[6].timestamp = 40;

        let mut cache = BlockCache::default();

        for block in &blocks {
            insert(&mut cache, block.clone())
                .expect("should add consecutive blocks with duplicate timestamps");
        }

        assert_eq!(cache.blocks, blocks, "should have added all blocks");
    }
}