// lighthouse/beacon_node/beacon_chain/tests/tests.rs

#![cfg(not(debug_assertions))]

use beacon_chain::{
    attestation_verification::Error as AttnError,
    test_utils::{
        AttestationStrategy, BeaconChainHarness, BlockStrategy, EphemeralHarnessType,
        OP_POOL_DB_KEY,
    },
    StateSkipConfig, WhenSlotSkipped,
};
use lazy_static::lazy_static;
use operation_pool::PersistedOperationPool;
use state_processing::{
    per_slot_processing, per_slot_processing::Error as SlotProcessingError, EpochProcessingError,
};
use types::{BeaconStateError, EthSpec, Hash256, Keypair, MinimalEthSpec, RelativeEpoch, Slot};

// Should ideally be divisible by 3.
pub const VALIDATOR_COUNT: usize = 24;

lazy_static! {
    /// A cached set of keys.
    static ref KEYPAIRS: Vec<Keypair> = types::test_utils::generate_deterministic_keypairs(VALIDATOR_COUNT);
}
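
/// Builds a `BeaconChainHarness` over `MinimalEthSpec` with `validator_count` deterministic
/// keypairs, an ephemeral store and a mock execution layer, advanced one slot past genesis.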
fn get_harness(validator_count: usize) -> BeaconChainHarness<EphemeralHarnessType<MinimalEthSpec>> {
    let harness = BeaconChainHarness::builder(MinimalEthSpec)
        .default_spec()
        .keypairs(KEYPAIRS[0..validator_count].to_vec())
        .fresh_ephemeral_store()
        .mock_execution_layer()
        .build();

    harness.advance_slot();

    harness
}
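
// Advancing a state over a huge run of skip slots should eventually make `per_slot_processing`
// fail with `InsufficientValidators`, as the asserts below check.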
#[test]
fn massive_skips() {
    let harness = get_harness(8);
    let spec = &harness.chain.spec;
    let mut state = harness.chain.head().expect("should get head").beacon_state;

    // Run `per_slot_processing` until it returns an error.
    let error = loop {
        match per_slot_processing(&mut state, None, spec) {
            Ok(_) => continue,
            Err(e) => break e,
        }
    };

    assert!(state.slot() > 1, "the state should skip at least one slot");
    assert_eq!(
        error,
        SlotProcessingError::EpochProcessingError(EpochProcessingError::BeaconStateError(
            BeaconStateError::InsufficientValidators
        )),
        "should return error indicating that validators have been slashed out"
    )
}
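
// The forwards block-root and state-root iterators should cover every slot from genesis to the
// head, with slots increasing by one.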
#[test]
fn iterators() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 2 - 1;

    let harness = get_harness(VALIDATOR_COUNT);

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        // No need to produce attestations for this test.
        AttestationStrategy::SomeValidators(vec![]),
    );

    let block_roots: Vec<(Hash256, Slot)> = harness
        .chain
        .forwards_iter_block_roots(Slot::new(0))
        .expect("should get iter")
        .map(Result::unwrap)
        .collect();
    let state_roots: Vec<(Hash256, Slot)> = harness
        .chain
        .forwards_iter_state_roots(Slot::new(0))
        .expect("should get iter")
        .map(Result::unwrap)
        .collect();

    assert_eq!(
        block_roots.len(),
        state_roots.len(),
        "should be an equal number of block and state roots"
    );

    assert!(
        block_roots.iter().any(|(_root, slot)| *slot == 0),
        "should contain genesis block root"
    );
    assert!(
        state_roots.iter().any(|(_root, slot)| *slot == 0),
        "should contain genesis state root"
    );

    assert_eq!(
        block_roots.len(),
        num_blocks_produced as usize + 1,
        "should contain all produced blocks, plus the genesis block"
    );

    block_roots.windows(2).for_each(|x| {
        assert_eq!(
            x[1].1,
            x[0].1 + 1,
            "block root slots should be increasing by one"
        )
    });
    state_roots.windows(2).for_each(|x| {
        assert_eq!(
            x[1].1,
            x[0].1 + 1,
            "state root slots should be increasing by one"
        )
    });

    let head = &harness.chain.head().expect("should get head");

    assert_eq!(
        *block_roots.last().expect("should have some block roots"),
        (head.beacon_block_root, head.beacon_block.slot()),
        "last block root and slot should be for the head block"
    );

    assert_eq!(
        *state_roots.last().expect("should have some state roots"),
        (head.beacon_state_root(), head.beacon_state.slot()),
        "last state root and slot should be for the head state"
    );
}
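
// Exercise `find_reorg_slot` against the genesis state, the head state, and the state one slot
// behind the head.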
#[test]
fn find_reorgs() {
    let num_blocks_produced = MinimalEthSpec::slots_per_historical_root() + 1;

    let harness = get_harness(VALIDATOR_COUNT);

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        // No need to produce attestations for this test.
        AttestationStrategy::SomeValidators(vec![]),
    );

    let head_state = harness.chain.head_beacon_state().unwrap();
    let head_slot = head_state.slot();
    let genesis_state = harness
        .chain
        .state_at_slot(Slot::new(0), StateSkipConfig::WithStateRoots)
        .unwrap();

    // Because genesis is more than `SLOTS_PER_HISTORICAL_ROOT` slots away, this should return
    // the finalized slot.
    assert_eq!(
        harness
            .chain
            .find_reorg_slot(&genesis_state, harness.chain.genesis_block_root)
            .unwrap(),
        head_state
            .finalized_checkpoint()
            .epoch
            .start_slot(MinimalEthSpec::slots_per_epoch())
    );

    // Test the head.
    assert_eq!(
        harness
            .chain
            .find_reorg_slot(
                &head_state,
                harness.chain.head_beacon_block().unwrap().canonical_root()
            )
            .unwrap(),
        head_slot
    );

    // Re-org back to the slot prior to the head.
    let prev_slot = head_slot - Slot::new(1);
    let prev_state = harness
        .chain
        .state_at_slot(prev_slot, StateSkipConfig::WithStateRoots)
        .unwrap();
    let prev_block_root = harness
        .chain
        .block_root_at_slot(prev_slot, WhenSlotSkipped::None)
        .unwrap()
        .unwrap();
    assert_eq!(
        harness
            .chain
            .find_reorg_slot(&prev_state, prev_block_root)
            .unwrap(),
        prev_slot
    );
}
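
// Split the validators into an honest majority and a faulty minority, build two competing
// forks, and check that the honest fork becomes the canonical head.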
#[test]
fn chooses_fork() {
    let harness = get_harness(VALIDATOR_COUNT);

    let two_thirds = (VALIDATOR_COUNT / 3) * 2;
    let delay = MinimalEthSpec::default_spec().min_attestation_inclusion_delay as usize;

    let honest_validators: Vec<usize> = (0..two_thirds).collect();
    let faulty_validators: Vec<usize> = (two_thirds..VALIDATOR_COUNT).collect();

    let initial_blocks = delay + 1;
    let honest_fork_blocks = delay + 1;
    let faulty_fork_blocks = delay + 2;

    // Build an initial chain where all validators agree.
    harness.extend_chain(
        initial_blocks,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::AllValidators,
    );

    let (honest_head, faulty_head) = harness.generate_two_forks_by_skipping_a_block(
        &honest_validators,
        &faulty_validators,
        honest_fork_blocks,
        faulty_fork_blocks,
    );

    assert_ne!(honest_head, faulty_head, "forks should be distinct");

    let state = &harness.chain.head().expect("should get head").beacon_state;

    assert_eq!(
        state.slot(),
        Slot::from(initial_blocks + honest_fork_blocks),
        "head should be at the current slot"
    );

    assert_eq!(
        harness
            .chain
            .head()
            .expect("should get head")
            .beacon_block_root,
        honest_head,
        "the honest chain should be the canonical chain"
    );
}
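
// With all validators attesting, the previous epoch should be justified and the epoch before
// that finalized.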
#[test]
fn finalizes_with_full_participation() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;

    let harness = get_harness(VALIDATOR_COUNT);

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::AllValidators,
    );

    let state = &harness.chain.head().expect("should get head").beacon_state;

    assert_eq!(
        state.slot(),
        num_blocks_produced,
        "head should be at the current slot"
    );
    assert_eq!(
        state.current_epoch(),
        num_blocks_produced / MinimalEthSpec::slots_per_epoch(),
        "head should be at the expected epoch"
    );
    assert_eq!(
        state.current_justified_checkpoint().epoch,
        state.current_epoch() - 1,
        "the head should be justified one behind the current epoch"
    );
    assert_eq!(
        state.finalized_checkpoint().epoch,
        state.current_epoch() - 2,
        "the head should be finalized two behind the current epoch"
    );
}
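
// With exactly two-thirds of validators attesting, justification and finalization should still
// occur, just further behind the current epoch (see the note on the inclusion delay inside the
// test).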
#[test]
fn finalizes_with_two_thirds_participation() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;

    let harness = get_harness(VALIDATOR_COUNT);

    let two_thirds = (VALIDATOR_COUNT / 3) * 2;
    let attesters = (0..two_thirds).collect();

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::SomeValidators(attesters),
    );

    let state = &harness.chain.head().expect("should get head").beacon_state;

    assert_eq!(
        state.slot(),
        num_blocks_produced,
        "head should be at the current slot"
    );
    assert_eq!(
        state.current_epoch(),
        num_blocks_produced / MinimalEthSpec::slots_per_epoch(),
        "head should be at the expected epoch"
    );

    // Note: the 2/3rds tests are not justifying the immediately prior epochs because the
    // `MIN_ATTESTATION_INCLUSION_DELAY` is preventing an adequate number of attestations from
    // being included in blocks during that epoch.
    assert_eq!(
        state.current_justified_checkpoint().epoch,
        state.current_epoch() - 2,
        "the head should be justified two behind the current epoch"
    );
    assert_eq!(
        state.finalized_checkpoint().epoch,
        state.current_epoch() - 4,
        "the head should be finalized four behind the current epoch"
    );
}
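
// With fewer than two-thirds of validators attesting, no epoch should be justified or
// finalized.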
#[test]
fn does_not_finalize_with_less_than_two_thirds_participation() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;

    let harness = get_harness(VALIDATOR_COUNT);

    let two_thirds = (VALIDATOR_COUNT / 3) * 2;
    let less_than_two_thirds = two_thirds - 1;
    let attesters = (0..less_than_two_thirds).collect();

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::SomeValidators(attesters),
    );

    let state = &harness.chain.head().expect("should get head").beacon_state;

    assert_eq!(
        state.slot(),
        num_blocks_produced,
        "head should be at the current slot"
    );
    assert_eq!(
        state.current_epoch(),
        num_blocks_produced / MinimalEthSpec::slots_per_epoch(),
        "head should be at the expected epoch"
    );
    assert_eq!(
        state.current_justified_checkpoint().epoch,
        0,
        "no epoch should have been justified"
    );
    assert_eq!(
        state.finalized_checkpoint().epoch,
        0,
        "no epoch should have been finalized"
    );
}
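
// With no attestations at all, the chain should keep growing but never justify or finalize.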
#[test]
fn does_not_finalize_without_attestation() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;

    let harness = get_harness(VALIDATOR_COUNT);

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::SomeValidators(vec![]),
    );

    let state = &harness.chain.head().expect("should get head").beacon_state;

    assert_eq!(
        state.slot(),
        num_blocks_produced,
        "head should be at the current slot"
    );
    assert_eq!(
        state.current_epoch(),
        num_blocks_produced / MinimalEthSpec::slots_per_epoch(),
        "head should be at the expected epoch"
    );
    assert_eq!(
        state.current_justified_checkpoint().epoch,
        0,
        "no epoch should have been justified"
    );
    assert_eq!(
        state.finalized_checkpoint().epoch,
        0,
        "no epoch should have been finalized"
    );
}
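
// The operation pool should survive a round-trip through the store without losing its contents.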
#[test]
fn roundtrip_operation_pool() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;

    let harness = get_harness(VALIDATOR_COUNT);

    // Add some attestations.
    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::AllValidators,
    );
    assert!(harness.chain.op_pool.num_attestations() > 0);

    // TODO: could add some other operations
    harness
        .chain
        .persist_op_pool()
        .expect("should persist op pool");

    let restored_op_pool = harness
        .chain
        .store
        .get_item::<PersistedOperationPool<MinimalEthSpec>>(&OP_POOL_DB_KEY)
        .expect("should read db")
        .expect("should find op pool")
        .into_operation_pool()
        .unwrap();

    assert_eq!(harness.chain.op_pool, restored_op_pool);
}
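
// After a chain with full attestation participation, validators whose attestation duty fell on
// a slot that has already been filled (and is not the genesis slot) should have a latest
// message in fork choice; the remaining validators should have none.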
#[test]
fn unaggregated_attestations_added_to_fork_choice_some_none() {
    let num_blocks_produced = MinimalEthSpec::slots_per_epoch() / 2;

    let harness = get_harness(VALIDATOR_COUNT);

    harness.extend_chain(
        num_blocks_produced as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::AllValidators,
    );

    let state = &harness.chain.head().expect("should get head").beacon_state;
    let mut fork_choice = harness.chain.fork_choice.write();

    // Move forward a slot so all queued attestations can be processed.
    harness.advance_slot();
    fork_choice
        .update_time(harness.chain.slot().unwrap())
        .unwrap();

    let validator_slots: Vec<(usize, Slot)> = (0..VALIDATOR_COUNT)
        .into_iter()
        .map(|validator_index| {
            let slot = state
                .get_attestation_duties(validator_index, RelativeEpoch::Current)
                .expect("should get attester duties")
                .unwrap()
                .slot;

            (validator_index, slot)
        })
        .collect();

    for (validator, slot) in validator_slots.clone() {
        let latest_message = fork_choice.latest_message(validator);

        if slot <= num_blocks_produced && slot != 0 {
            assert_eq!(
                latest_message.unwrap().1,
                slot.epoch(MinimalEthSpec::slots_per_epoch()),
                "Latest message epoch for {} should be equal to epoch {}.",
                validator,
                slot
            )
        } else {
            assert!(
                latest_message.is_none(),
                "Latest message slot should be None."
            )
        }
    }
}
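
// Collect unaggregated attestations for a series of increasing slots (without including them in
// blocks), then feed each one through gossip verification.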
#[test]
2019-08-07 06:40:49 +00:00
fn attestations_with_increasing_slots() {
let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 5;
Fix head tracker concurrency bugs (#1771) ## Issue Addressed Closes #1557 ## Proposed Changes Modify the pruning algorithm so that it mutates the head-tracker _before_ committing the database transaction to disk, and _only if_ all the heads to be removed are still present in the head-tracker (i.e. no concurrent mutations). In the process of writing and testing this I also had to make a few other changes: * Use internal mutability for all `BeaconChainHarness` functions (namely the RNG and the graffiti), in order to enable parallel calls (see testing section below). * Disable logging in harness tests unless the `test_logger` feature is turned on And chose to make some clean-ups: * Delete the `NullMigrator` * Remove type-based configuration for the migrator in favour of runtime config (simpler, less duplicated code) * Use the non-blocking migrator unless the blocking migrator is required. In the store tests we need the blocking migrator because some tests make asserts about the state of the DB after the migration has run. * Rename `validators_keypairs` -> `validator_keypairs` in the `BeaconChainHarness` ## Testing To confirm that the fix worked, I wrote a test using [Hiatus](https://crates.io/crates/hiatus), which can be found here: https://github.com/michaelsproul/lighthouse/tree/hiatus-issue-1557 That test can't be merged because it inserts random breakpoints everywhere, but if you check out that branch you can run the test with: ``` $ cd beacon_node/beacon_chain $ cargo test --release --test parallel_tests --features test_logger ``` It should pass, and the log output should show: ``` WARN Pruning deferred because of a concurrent mutation, message: this is expected only very rarely! ``` ## Additional Info This is a backwards-compatible change with no impact on consensus.
2020-10-19 05:58:39 +00:00
let harness = get_harness(VALIDATOR_COUNT);
let mut attestations = vec![];
for _ in 0..num_blocks_produced {
harness.extend_chain(
2,
BlockStrategy::OnCanonicalHead,
// Don't produce & include any attestations (we'll collect them later).
AttestationStrategy::SomeValidators(vec![]),
);
Optimize validator duties (#2243) ## Issue Addressed Closes #2052 ## Proposed Changes - Refactor the attester/proposer duties endpoints in the BN - Performance improvements - Fixes some potential inconsistencies with the dependent root fields. - Removes `http_api::beacon_proposer_cache` and just uses the one on the `BeaconChain` instead. - Move the code for the proposer/attester duties endpoints into separate files, for readability. - Refactor the `DutiesService` in the VC - Required to reduce the delay on broadcasting new blocks. - Gets rid of the `ValidatorDuty` shim struct that came about when we adopted the standard API. - Separate block/attestation duty tasks so that they don't block each other when one is slow. - In the VC, use `PublicKeyBytes` to represent validators instead of `PublicKey`. `PublicKey` is a legit crypto object whilst `PublicKeyBytes` is just a byte-array, it's much faster to clone/hash `PublicKeyBytes` and this change has had a significant impact on runtimes. - Unfortunately this has created lots of dust changes. - In the BN, store `PublicKeyBytes` in the `beacon_proposer_cache` and allow access to them. The HTTP API always sends `PublicKeyBytes` over the wire and the conversion from `PublicKey` -> `PublickeyBytes` is non-trivial, especially when queries have 100s/1000s of validators (like Pyrmont). - Add the `state_processing::state_advance` mod which dedups a lot of the "apply `n` skip slots to the state" code. - This also fixes a bug with some functions which were failing to include a state root as per [this comment](https://github.com/sigp/lighthouse/blob/072695284f7eff82c51f79bc921ad942fea7483a/consensus/state_processing/src/state_advance.rs#L69-L74). I couldn't find any instance of this bug that resulted in anything more severe than keying a shuffling cache by the wrong block root. - Swap the VC block service to use `mpsc` from `tokio` instead of `futures`. This is consistent with the rest of the code base. ~~This PR *reduces* the size of the codebase :tada:~~ It *used* to reduce the size of the code base before I added more comments. ## Observations on Prymont - Proposer duties times down from peaks of 450ms to consistent <1ms. - Current epoch attester duties times down from >1s peaks to a consistent 20-30ms. - Block production down from +600ms to 100-200ms. ## Additional Info - ~~Blocked on #2241~~ - ~~Blocked on #2234~~ ## TODO - [x] ~~Refactor this into some smaller PRs?~~ Leaving this as-is for now. - [x] Address `per_slot_processing` roots. - [x] Investigate slow next epoch times. Not getting added to cache on block processing? - [x] Consider [this](https://github.com/sigp/lighthouse/blob/072695284f7eff82c51f79bc921ad942fea7483a/beacon_node/store/src/hot_cold_store.rs#L811-L812) in the scenario of replacing the state roots Co-authored-by: pawan <pawandhananjay@gmail.com> Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2021-03-17 05:09:57 +00:00
let head = harness.chain.head().unwrap();
let head_state_root = head.beacon_state_root();
attestations.extend(harness.get_unaggregated_attestations(
&AttestationStrategy::AllValidators,
&head.beacon_state,
head_state_root,
head.beacon_block_root,
head.beacon_block.slot(),
));
harness.advance_slot();
}
for (attestation, subnet_id) in attestations.into_iter().flatten() {
let res = harness
.chain
.verify_unaggregated_attestation_for_gossip(&attestation, Some(subnet_id));
let current_slot = harness.chain.slot().expect("should get slot");
let expected_attestation_slot = attestation.data.slot;
let expected_earliest_permissible_slot =
current_slot - MinimalEthSpec::slots_per_epoch() - 1;
if expected_attestation_slot < expected_earliest_permissible_slot {
assert!(matches!(
res.err().unwrap(),
AttnError::PastSlot {
attestation_slot,
earliest_permissible_slot,
}
if attestation_slot == expected_attestation_slot && earliest_permissible_slot == expected_earliest_permissible_slot
))
} else {
res.expect("should process attestation");
}
}
}
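// Produces a chain where every validator attests to every block, then checks that fork choice
// has recorded a latest message for each validator consistent with its attester duty.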
#[test]
fn unaggregated_attestations_added_to_fork_choice_all_updated() {
let num_blocks_produced = MinimalEthSpec::slots_per_epoch() * 2 - 1;
let harness = get_harness(VALIDATOR_COUNT);
harness.extend_chain(
num_blocks_produced as usize,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
let state = &harness.chain.head().expect("should get head").beacon_state;
let mut fork_choice = harness.chain.fork_choice.write();
// Move forward a slot so all queued attestations can be processed.
harness.advance_slot();
fork_choice
.update_time(harness.chain.slot().unwrap())
.unwrap();
let validators: Vec<usize> = (0..VALIDATOR_COUNT).collect();
let slots: Vec<Slot> = validators
.iter()
.map(|&v| {
state
.get_attestation_duties(v, RelativeEpoch::Current)
.expect("should get attester duties")
.unwrap()
.slot
})
.collect();
let validator_slots: Vec<(&usize, Slot)> = validators.iter().zip(slots).collect();
for (validator, slot) in validator_slots {
let latest_message = fork_choice.latest_message(*validator);
assert_eq!(
latest_message.unwrap().1,
slot.epoch(MinimalEthSpec::slots_per_epoch()),
"Latest message slot should be equal to attester duty."
);
if slot != num_blocks_produced {
let block_root = state
.get_block_root(slot)
.expect("Should get block root at slot");
assert_eq!(
latest_message.unwrap().0,
*block_root,
"Latest message block root should be equal to block at slot."
);
}
}
}
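// Builds a chain on `harness_a` after `skip_slots` empty slots, imports the resulting head block
// into `harness_b`, and checks that both harnesses agree on the new head after fork choice runs.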
fn run_skip_slot_test(skip_slots: u64) {
let num_validators = 8;
let harness_a = get_harness(num_validators);
let harness_b = get_harness(num_validators);
for _ in 0..skip_slots {
harness_a.advance_slot();
harness_b.advance_slot();
}
harness_a.extend_chain(
1,
BlockStrategy::OnCanonicalHead,
// No attestations are required for this test.
AttestationStrategy::SomeValidators(vec![]),
);
assert_eq!(
harness_a
.chain
.head()
.expect("should get head")
.beacon_block
.slot(),
Slot::new(skip_slots + 1)
);
assert_eq!(
harness_b
.chain
.head()
.expect("should get head")
.beacon_block
.slot(),
Slot::new(0)
);
assert_eq!(
harness_b
.chain
.process_block(
harness_a
.chain
.head()
.expect("should get head")
.beacon_block
.clone(),
)
.unwrap(),
harness_a
.chain
.head()
.expect("should get head")
.beacon_block_root
);
harness_b
.chain
.fork_choice()
.expect("should run fork choice");
assert_eq!(
harness_b
.chain
.head()
.expect("should get head")
.beacon_block
.slot(),
Slot::new(skip_slots + 1)
);
}
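// Runs the skip-slot test for every skip distance from zero up to four epochs' worth of slots.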
#[test]
fn produces_and_processes_with_genesis_skip_slots() {
for i in 0..MinimalEthSpec::slots_per_epoch() * 4 {
run_skip_slot_test(i)
}
}
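// Exercises `block_root_at_slot` and `block_at_slot` under both `WhenSlotSkipped` variants for
// skipped slots, non-skipped slots and a future slot.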
#[test]
fn block_roots_skip_slot_behaviour() {
let harness = get_harness(VALIDATOR_COUNT);
// Build a chain longer than the state's `block_roots` array so that early roots must be fetched
// from the database rather than the head state.
let chain_length = harness
.chain
.head()
.unwrap()
.beacon_state
.block_roots()
.len() as u64
* 3;
let skipped_slots = [1, 6, 7, 10, chain_length];
// Build a chain with some skip slots.
for i in 1..=chain_length {
if i > 1 {
harness.advance_slot();
}
let slot = harness.chain.slot().unwrap().as_u64();
if !skipped_slots.contains(&slot) {
harness.extend_chain(
1,
BlockStrategy::OnCanonicalHead,
AttestationStrategy::AllValidators,
);
}
}
let mut prev_unskipped_root = None;
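// Walk every slot up to and including the tip, checking the behaviour of both `WhenSlotSkipped`
// variants against the structure built above.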
for target_slot in 0..=chain_length {
if skipped_slots.contains(&target_slot) {
/*
* A skip slot
*/
assert!(
harness
.chain
.block_root_at_slot(target_slot.into(), WhenSlotSkipped::None)
.unwrap()
.is_none(),
"WhenSlotSkipped::None should return None on a skip slot"
);
let skipped_root = harness
.chain
.block_root_at_slot(target_slot.into(), WhenSlotSkipped::Prev)
.unwrap()
.expect("WhenSlotSkipped::Prev should always return Some");
assert_eq!(
skipped_root,
prev_unskipped_root.expect("test is badly formed"),
"WhenSlotSkipped::Prev should accurately return the prior skipped block"
);
let expected_block = harness.chain.get_block(&skipped_root).unwrap().unwrap();
assert_eq!(
harness
.chain
.block_at_slot(target_slot.into(), WhenSlotSkipped::Prev)
.unwrap()
.unwrap(),
expected_block,
);
assert!(
harness
.chain
.block_at_slot(target_slot.into(), WhenSlotSkipped::None)
.unwrap()
.is_none(),
"WhenSlotSkipped::None should return None on a skip slot"
);
} else {
/*
* Not a skip slot
*/
let skips_none = harness
.chain
.block_root_at_slot(target_slot.into(), WhenSlotSkipped::None)
.unwrap()
.expect("WhenSlotSkipped::None should return Some for non-skipped block");
let skips_prev = harness
.chain
.block_root_at_slot(target_slot.into(), WhenSlotSkipped::Prev)
.unwrap()
.expect("WhenSlotSkipped::Prev should always return Some");
assert_eq!(
skips_none, skips_prev,
"WhenSlotSkipped::None and WhenSlotSkipped::Prev should be equal on non-skipped slot"
);
let expected_block = harness.chain.get_block(&skips_prev).unwrap().unwrap();
assert_eq!(
harness
.chain
.block_at_slot(target_slot.into(), WhenSlotSkipped::Prev)
.unwrap()
.unwrap(),
expected_block
);
assert_eq!(
harness
.chain
.block_at_slot(target_slot.into(), WhenSlotSkipped::None)
.unwrap()
.unwrap(),
expected_block
);
prev_unskipped_root = Some(skips_prev);
}
}
/*
* A future, non-existent slot.
*/
let future_slot = harness.chain.slot().unwrap() + 1;
assert_eq!(
harness.chain.head().unwrap().beacon_block.slot(),
future_slot - 2,
"test precondition"
);
assert!(
harness
.chain
.block_root_at_slot(future_slot, WhenSlotSkipped::None)
.unwrap()
.is_none(),
"WhenSlotSkipped::None should return None on a future slot"
);
assert!(
harness
.chain
.block_root_at_slot(future_slot, WhenSlotSkipped::Prev)
.unwrap()
.is_none(),
"WhenSlotSkipped::Prev should return None on a future slot"
);
}