lighthouse/beacon_node/http_api/tests/interactive_tests.rs

//! Generic tests that make use of the (newer) `InteractiveTester`.
use crate::common::*;
use beacon_chain::test_utils::{AttestationStrategy, BlockStrategy};
use eth2::types::DepositContractData;
use tree_hash::TreeHash;
use types::{EthSpec, FullPayload, MainnetEthSpec, Slot};

type E = MainnetEthSpec;

// Test that the deposit_contract endpoint returns the correct chain_id and address.
// Regression test for https://github.com/sigp/lighthouse/issues/2657
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn deposit_contract_custom_network() {
    let validator_count = 24;
    let mut spec = E::default_spec();

    // Rinkeby's chain and network ID (4), which we don't use elsewhere.
    spec.deposit_chain_id = 4;
    spec.deposit_network_id = 4;
    // Arbitrary contract address.
    spec.deposit_contract_address = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa".parse().unwrap();
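    // Using values that differ from the defaults ensures the endpoint reflects the custom spec
    // rather than falling back to mainnet configuration (the failure mode in #2657).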

    let tester = InteractiveTester::<E>::new(Some(spec.clone()), validator_count).await;
    let client = &tester.client;

    let result = client.get_config_deposit_contract().await.unwrap().data;
    let expected = DepositContractData {
        address: spec.deposit_contract_address,
        chain_id: spec.deposit_chain_id,
    };

    assert_eq!(result, expected);
}

// Test that running fork choice before proposing results in selection of the correct head.
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
pub async fn fork_choice_before_proposal() {
    // Validator count needs to be at least 32, or proposer boost gets set to 0 when computing
    // `validator_count // 32`.
    let validator_count = 32;
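    // A sketch of the arithmetic behind the comment above (not code from this test): the boost
    // weight is derived from an average per-slot committee weight of roughly
    // `validator_count / 32` (32 slots per epoch on mainnet). In integer arithmetic
    // `31 / 32 == 0`, so with fewer than 32 validators the proposer boost would be zero and C
    // could never temporarily out-weigh B.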
    let all_validators = (0..validator_count).collect::<Vec<_>>();
    let num_initial: u64 = 31;

    let tester = InteractiveTester::<E>::new(None, validator_count).await;
    let harness = &tester.harness;

    // Create some chain depth.
    harness.advance_slot();
    harness.extend_chain(
        num_initial as usize,
        BlockStrategy::OnCanonicalHead,
        AttestationStrategy::AllValidators,
    );

    // We set up the following block graph, where B is a block that is temporarily orphaned by C,
    // but is then reinstated and built upon by D.
    //
    // A | B | - | D |
    // ^ | - | C |
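    //
    // Reading the rows by slot: B (slot_a + 1) and C (slot_a + 2) both build on A, so C
    // orphans B; D (slot_a + 3) should build on B once fork choice is re-run with all of B's
    // attestations counted.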
    let slot_a = Slot::new(num_initial);
    let slot_b = slot_a + 1;
    let slot_c = slot_a + 2;
    let slot_d = slot_a + 3;

    let state_a = harness.get_current_state();
    let (block_b, state_b) = harness.make_block(state_a.clone(), slot_b);
    let block_root_b = harness.process_block(slot_b, block_b).unwrap();

    // Create attestations to B but keep them in reserve until after C has been processed.
    let attestations_b = harness.make_attestations(
        &all_validators,
        &state_b,
        state_b.tree_hash_root(),
        block_root_b,
        slot_b,
    );

    let (block_c, state_c) = harness.make_block(state_a, slot_c);
    let block_root_c = harness.process_block(slot_c, block_c.clone()).unwrap();
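    // Note that C is built on the same pre-state A as B (`state_a` was cloned for B above and
    // moved here), which is what makes C a sibling fork that orphans B rather than a
    // descendant of it.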

    // Create attestations to C from half of the validators and process them immediately.
    let attestations_c = harness.make_attestations(
        &all_validators[..validator_count / 2],
        &state_c,
        state_c.tree_hash_root(),
        block_root_c,
        slot_c,
    );
    harness.process_attestations(attestations_c);

    // Apply the attestations to B, but don't re-run fork choice.
    harness.process_attestations(attestations_b);

    // Due to proposer boost, the head should be C during slot C.
    assert_eq!(
        harness.chain.head_info().unwrap().block_root,
        block_root_c.into()
    );
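    // The head is stale at this point: B's attestations have been queued but fork choice has
    // not been re-run to dequeue them, and C's proposer boost from slot C has not yet been
    // removed. Re-running fork choice (below) should count B's attestations, drop C's boost,
    // and restore B as head.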

    // Ensure that building a block via the HTTP API re-runs fork choice and builds block D upon B.
    // Manually prod the per-slot task, because the slot timer doesn't run in the background in
    // these tests.
    harness.advance_slot();
    harness.chain.per_slot_task();
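    // In production this manual prod is unnecessary: `per_slot_task` is invoked from the slot
    // timer at the start of every slot (and again 3/4 of the way through the slot), re-running
    // fork choice and using a condition variable to signal block production that the head is
    // up to date.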

    let proposer_index = state_b
        .get_beacon_proposer_index(slot_d, &harness.chain.spec)
        .unwrap();
    let randao_reveal = harness
        .sign_randao_reveal(&state_b, proposer_index, slot_d)
        .into();
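    // The block production endpoint requires a RANDAO reveal signed by the slot's expected
    // proposer, hence the proposer index and signature computed above from state B (the state
    // D is expected to build on).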
    let block_d = tester
        .client
        .get_validator_blocks::<E, FullPayload<E>>(slot_d, &randao_reveal, None)
        .await
        .unwrap()
        .data;

    // Head is now B.
    assert_eq!(
        harness.chain.head_info().unwrap().block_root,
        block_root_b.into()
    );
    // D's parent is B.
    assert_eq!(block_d.parent_root(), block_root_b.into());
}