## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing into `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc.) and execution payload retrieval during block production.
- Concurrent per-block processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least twice during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - It avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅).
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks, by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
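To illustrate the access pattern this enables (and why the `Arc` clone is cheap compared to cloning the whole snapshot), here is a minimal self-contained sketch. It uses `std::sync::RwLock` and a stubbed `Snapshot`, so treat it as the shape of the idea rather than the actual Lighthouse API:

```rust
use std::sync::{Arc, RwLock};

/// Stand-in for the real head snapshot; illustrative only.
struct Snapshot {
    head_slot: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    // In the real struct, `fork_choice: RwLock<BeaconForkChoice>` sits alongside.
}

impl CanonicalHead {
    /// Clone the `Arc` and release the read lock immediately, so callers can
    /// read from the snapshot without holding `cached_head`.
    fn cached_head(&self) -> Arc<Snapshot> {
        Arc::clone(&self.cached_head.read().unwrap())
    }
}

fn head_slot(head: &CanonicalHead) -> u64 {
    // No lock is held on this line; only the cheap `Arc` clone escaped it.
    head.cached_head().head_slot
}
```

Because the read lock is released as soon as the `Arc` is cloned, a caller can never be holding `cached_head` while it also waits on `fork_choice`, which removes the lock-ordering deadlock described above.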
## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)), which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
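To make the two options concrete, here is a hedged sketch; `Store`, `block` and `state_root_at_slot` are hypothetical stand-ins, not Lighthouse or Teku APIs:

```rust
/// Hypothetical store interface to contrast the two candidate state roots.
type Hash256 = [u8; 32];

struct Block {
    state_root: Hash256,
}

trait Store {
    /// Look up a block by its root.
    fn block(&self, root: Hash256) -> Option<Block>;
    /// The state root recorded at `slot` on the canonical chain; for a skip
    /// slot this is the predecessor's state "skipped forward".
    fn state_root_at_slot(&self, slot: u64) -> Option<Hash256>;
}

/// Option (1): the state root of the finalized block itself. A single block
/// lookup, consistent with Teku.
fn state_root_option_1(store: &impl Store, finalized_root: Hash256) -> Option<Hash256> {
    Some(store.block(finalized_root)?.state_root)
}

/// Option (2): the state root at `start_slot(n)`, i.e. option (1) advanced
/// through any skip slots up to the epoch boundary.
fn state_root_option_2(store: &impl Store, start_slot: u64) -> Option<Hash256> {
    store.state_root_at_slot(start_slot)
}
```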
## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice, and I also find it more descriptive: it describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way, and I don't think we were making any promises about SSE event ordering, so it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option` (see the sketch after these notes). This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like it would be a faithful representation of the slot when we saved it.

I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the finalized/justified checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` expected the `0x00..00` alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:

- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; not so efficient, but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.

Co-authored-by: Mac L <mjladson@pm.me>
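As a sketch of the constructor invariant described above (illustrative types only, not the real `ForkChoice` API):

```rust
/// Illustrative types only; not the real Lighthouse `ForkChoice`.
type Hash256 = [u8; 32];
type Slot = u64;

struct CachedHead {
    best_root: Hash256,
    slot: Slot,
}

struct ForkChoiceSketch {
    /// A plain value, not `Option<CachedHead>`: the constructor establishes
    /// the invariant that head-selection has run at least once.
    cached: CachedHead,
}

impl ForkChoiceSketch {
    /// Passing `current_slot` lets the constructor run head-selection
    /// immediately, so `cached` is always populated.
    fn new(anchor_root: Hash256, current_slot: Slot) -> Self {
        let mut fc = Self {
            cached: CachedHead {
                best_root: anchor_root,
                slot: current_slot,
            },
        };
        fc.recompute_head(current_slot);
        fc
    }

    fn recompute_head(&mut self, slot: Slot) {
        // The real implementation runs the fork choice algorithm; here we
        // only record when the head was last computed.
        self.cached.slot = slot;
    }
}
```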
```rust
use crate::{errors::BeaconChainError as Error, metrics, BeaconChain, BeaconChainTypes};
use itertools::Itertools;
use slog::debug;
use state_processing::{
    per_block_processing::ParallelSignatureSets,
    signature_sets::{block_proposal_signature_set_from_parts, Error as SignatureSetError},
};
use std::borrow::Cow;
use std::iter;
use std::sync::Arc;
use std::time::Duration;
use store::{chunked_vector::BlockRoots, AnchorInfo, ChunkWriter, KeyValueStore};
use types::{Hash256, SignedBlindedBeaconBlock, Slot};

/// Use a longer timeout on the pubkey cache.
///
/// It's ok if historical sync is stalled due to writes from forwards block processing.
const PUBKEY_CACHE_LOCK_TIMEOUT: Duration = Duration::from_secs(30);

#[derive(Debug)]
pub enum HistoricalBlockError {
    /// Block is not available (only returned when fetching historic blocks).
    BlockOutOfRange { slot: Slot, oldest_block_slot: Slot },
    /// Block root mismatch, caller should retry with different blocks.
    MismatchedBlockRoot {
        block_root: Hash256,
        expected_block_root: Hash256,
    },
    /// Bad signature, caller should retry with different blocks.
    SignatureSet(SignatureSetError),
    /// Bad signature, caller should retry with different blocks.
    InvalidSignature,
    /// Transitory error, caller should retry with the same blocks.
    ValidatorPubkeyCacheTimeout,
    /// No historical sync needed.
    NoAnchorInfo,
    /// Logic error: should never occur.
    IndexOutOfBounds,
}
```
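The doc comments on the enum distinguish errors where retrying the *same* batch may succeed from errors where the batch itself is bad. A minimal sketch of how a backfill caller might branch on that distinction; `request_different_batch`, `retry_same_batch` and `abort_backfill` are hypothetical stand-ins, not Lighthouse functions:

```rust
fn request_different_batch() { /* hypothetical: ask another peer */ }
fn retry_same_batch() { /* hypothetical: retry the same blocks later */ }
fn abort_backfill() { /* hypothetical: stop backfilling */ }

// Illustrative only: how a backfill routine might react to each error class.
fn handle_import_error(err: HistoricalBlockError) {
    match err {
        // The batch itself is bad: fetch different blocks.
        HistoricalBlockError::MismatchedBlockRoot { .. }
        | HistoricalBlockError::SignatureSet(_)
        | HistoricalBlockError::InvalidSignature => request_different_batch(),
        // Transitory: the same batch may succeed on a later attempt.
        HistoricalBlockError::ValidatorPubkeyCacheTimeout => retry_same_batch(),
        // Nothing to backfill, a bad request, or a logic error: don't retry.
        HistoricalBlockError::NoAnchorInfo
        | HistoricalBlockError::BlockOutOfRange { .. }
        | HistoricalBlockError::IndexOutOfBounds => abort_backfill(),
    }
}
```

The import routine itself follows: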
```rust
impl<T: BeaconChainTypes> BeaconChain<T> {
    /// Store a batch of historical blocks in the database.
    ///
    /// The `blocks` should be given in slot-ascending order. One of the blocks should have a block
    /// root corresponding to the `oldest_block_parent` from the store's `AnchorInfo`.
    ///
    /// The block roots and proposer signatures are verified. If any block doesn't match the parent
    /// root listed in its successor, then the whole batch will be discarded and
    /// `MismatchedBlockRoot` will be returned. If any proposer signature is invalid then
    /// `SignatureSetError` or `InvalidSignature` will be returned.
    ///
    /// To align with sync we allow some excess blocks with slots greater than or equal to
    /// `oldest_block_slot` to be provided. They will be ignored without being checked.
    ///
    /// This function should not be called concurrently with any other function that mutates
    /// the anchor info (including this function itself). If a concurrent mutation occurs that
    /// would violate consistency then an `AnchorInfoConcurrentMutation` error will be returned.
    ///
    /// Return the number of blocks successfully imported.
    pub fn import_historical_block_batch(
        &self,
        blocks: Vec<Arc<SignedBlindedBeaconBlock<T::EthSpec>>>,
    ) -> Result<usize, Error> {
        let anchor_info = self
            .store
            .get_anchor_info()
            .ok_or(HistoricalBlockError::NoAnchorInfo)?;

        // Take all blocks with slots less than the oldest block slot.
        let num_relevant =
            blocks.partition_point(|block| block.slot() < anchor_info.oldest_block_slot);
        let blocks_to_import = &blocks
            .get(..num_relevant)
            .ok_or(HistoricalBlockError::IndexOutOfBounds)?;

        if blocks_to_import.len() != blocks.len() {
            debug!(
                self.log,
                "Ignoring some historic blocks";
                "oldest_block_slot" => anchor_info.oldest_block_slot,
                "total_blocks" => blocks.len(),
                "ignored" => blocks.len().saturating_sub(blocks_to_import.len()),
            );
        }

        if blocks_to_import.is_empty() {
            return Ok(0);
        }

        let mut expected_block_root = anchor_info.oldest_block_parent;
        let mut prev_block_slot = anchor_info.oldest_block_slot;
        let mut chunk_writer =
            ChunkWriter::<BlockRoots, _, _>::new(&self.store.cold_db, prev_block_slot.as_usize())?;

        let mut cold_batch = Vec::with_capacity(blocks.len());
        let mut hot_batch = Vec::with_capacity(blocks.len());

        for block in blocks_to_import.iter().rev() {
            // Check chain integrity.
            let block_root = block.canonical_root();

            if block_root != expected_block_root {
                return Err(HistoricalBlockError::MismatchedBlockRoot {
                    block_root,
                    expected_block_root,
                }
                .into());
            }

            // Store block in the hot database without payload.
            self.store
                .blinded_block_as_kv_store_ops(&block_root, block, &mut hot_batch);

            // Store block roots, including at all skip slots in the freezer DB.
            for slot in (block.slot().as_usize()..prev_block_slot.as_usize()).rev() {
                chunk_writer.set(slot, block_root, &mut cold_batch)?;
            }

            prev_block_slot = block.slot();
            expected_block_root = block.message().parent_root();

            // If we've reached genesis, add the genesis block root to the batch and set the
            // anchor slot to 0 to indicate completion.
            if expected_block_root == self.genesis_block_root {
                let genesis_slot = self.spec.genesis_slot;
                chunk_writer.set(
                    genesis_slot.as_usize(),
                    self.genesis_block_root,
                    &mut cold_batch,
                )?;
                prev_block_slot = genesis_slot;
                expected_block_root = Hash256::zero();
                break;
            }
        }
        chunk_writer.write(&mut cold_batch)?;

        // Verify signatures in one batch, holding the pubkey cache lock for the shortest duration
        // possible. For each block fetch the parent root from its successor. Slicing from index 1
        // is safe because we've already checked that `blocks_to_import` is non-empty.
        let sig_timer = metrics::start_timer(&metrics::BACKFILL_SIGNATURE_TOTAL_TIMES);
        let setup_timer = metrics::start_timer(&metrics::BACKFILL_SIGNATURE_SETUP_TIMES);
        let pubkey_cache = self
            .validator_pubkey_cache
            .try_read_for(PUBKEY_CACHE_LOCK_TIMEOUT)
            .ok_or(HistoricalBlockError::ValidatorPubkeyCacheTimeout)?;
        let block_roots = blocks_to_import
            .get(1..)
            .ok_or(HistoricalBlockError::IndexOutOfBounds)?
            .iter()
            .map(|block| block.parent_root())
            .chain(iter::once(anchor_info.oldest_block_parent));
        let signature_set = blocks_to_import
            .iter()
            .zip_eq(block_roots)
            .map(|(block, block_root)| {
                block_proposal_signature_set_from_parts(
                    block,
                    Some(block_root),
                    block.message().proposer_index(),
                    &self.spec.fork_at_epoch(block.message().epoch()),
                    self.genesis_validators_root,
                    |validator_index| pubkey_cache.get(validator_index).cloned().map(Cow::Owned),
                    &self.spec,
                )
            })
            .collect::<Result<Vec<_>, _>>()
            .map_err(HistoricalBlockError::SignatureSet)
            .map(ParallelSignatureSets::from)?;
        drop(pubkey_cache);
        drop(setup_timer);

        let verify_timer = metrics::start_timer(&metrics::BACKFILL_SIGNATURE_VERIFY_TIMES);
        if !signature_set.verify() {
            return Err(HistoricalBlockError::InvalidSignature.into());
        }
        drop(verify_timer);
        drop(sig_timer);

        // Write the I/O batches to disk, writing the blocks themselves first, as it's better
        // for the hot DB to contain extra blocks than for the cold DB to point to blocks that
        // do not exist.
        self.store.hot_db.do_atomically(hot_batch)?;
        self.store.cold_db.do_atomically(cold_batch)?;

        // Update the anchor.
        let new_anchor = AnchorInfo {
            oldest_block_slot: prev_block_slot,
            oldest_block_parent: expected_block_root,
            ..anchor_info
        };
        let backfill_complete = new_anchor.block_backfill_complete();
        self.store
            .compare_and_set_anchor_info_with_write(Some(anchor_info), Some(new_anchor))?;

        // If backfill has completed and the chain is configured to reconstruct historic states,
        // send a message to the background migrator instructing it to begin reconstruction.
        if backfill_complete && self.config.reconstruct_historic_states {
            self.store_migrator.process_reconstruction();
        }

        Ok(blocks_to_import.len())
    }
}
```
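For context, a rough sketch of the call-site shape during backfill sync; the real caller lives in Lighthouse's sync code, and `backfill_step` and `downloaded_batch` are stand-ins for illustration:

```rust
// Illustrative call-site shape only. `downloaded_batch` stands in for blocks
// fetched from peers, ordered slot-ascending as the docs above require.
fn backfill_step<T: BeaconChainTypes>(
    chain: &BeaconChain<T>,
    downloaded_batch: Vec<Arc<SignedBlindedBeaconBlock<T::EthSpec>>>,
) -> Result<usize, Error> {
    // Returns the number of blocks imported, which may be smaller than the
    // batch if some blocks were at or above `oldest_block_slot` and ignored.
    chain.import_historical_block_batch(downloaded_batch)
}
```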