## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc.) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least twice during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - Avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough 😅).
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously (a rough sketch of this access pattern follows these notes).

## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)), which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.

## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice, and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering, so it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like it would be a faithful representation of the slot when we saved it.

I added `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the finalized/justified checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` crate expected the `0x00..00` alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:

- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; not so efficient, but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.

Co-authored-by: Mac L <mjladson@pm.me>
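To make the deadlock-avoidance point in the `canonical_head` section concrete, here is a minimal, self-contained sketch of the access pattern. The struct bodies, field names and the use of `std::sync` locks are illustrative stand-ins rather than the real Lighthouse definitions; only the overall shape (`RwLock<Arc<Snapshot>>` plus a `cached_head()` accessor that clones the `Arc`) mirrors the description above.

```rust
use std::sync::{Arc, RwLock};

// Illustrative stand-ins for the real `Snapshot` and `BeaconForkChoice` types;
// the fields here are made up for the example.
struct Snapshot {
    head_slot: u64,
}

struct BeaconForkChoice {
    justified_epoch: u64,
}

// Rough shape of the grouped `canonical_head` field described above.
struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>,
}

impl CanonicalHead {
    /// Clone the `Arc` and drop the read guard immediately, so callers read the head
    /// snapshot without ever holding the `cached_head` and `fork_choice` locks together.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}

fn main() {
    let canonical_head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_slot: 42 })),
        fork_choice: RwLock::new(BeaconForkChoice { justified_epoch: 3 }),
    };

    // The `cached_head` lock has already been released here...
    let head = canonical_head.cached_head();
    // ...so taking the `fork_choice` lock afterwards cannot deadlock against it.
    let justified_epoch = canonical_head.fork_choice.read().unwrap().justified_epoch;
    println!("head slot {}, justified epoch {}", head.head_slot, justified_epoch);
}
```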
The database schema migration module (141 lines, 6.0 KiB, Rust):
```rust
//! Utilities for managing database schema changes.
mod migration_schema_v6;
mod migration_schema_v7;
mod migration_schema_v8;
mod migration_schema_v9;
mod types;

use crate::beacon_chain::{BeaconChainTypes, FORK_CHOICE_DB_KEY};
use crate::persisted_fork_choice::{PersistedForkChoiceV1, PersistedForkChoiceV7};
use crate::types::ChainSpec;
use slog::{warn, Logger};
use std::path::Path;
use std::sync::Arc;
use store::hot_cold_store::{HotColdDB, HotColdDBError};
use store::metadata::{SchemaVersion, CURRENT_SCHEMA_VERSION};
use store::{Error as StoreError, StoreItem};

/// Migrate the database from one schema version to another, applying all requisite mutations.
pub fn migrate_schema<T: BeaconChainTypes>(
    db: Arc<HotColdDB<T::EthSpec, T::HotStore, T::ColdStore>>,
    datadir: &Path,
    from: SchemaVersion,
    to: SchemaVersion,
    log: Logger,
    spec: &ChainSpec,
) -> Result<(), StoreError> {
    match (from, to) {
        // Migrating from the current schema version to itself is always OK, a no-op.
        (_, _) if from == to && to == CURRENT_SCHEMA_VERSION => Ok(()),
        // Upgrade across multiple versions by recursively migrating one step at a time.
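        // For example, an upgrade from `SchemaVersion(5)` to `SchemaVersion(9)` is applied as
        // the 5→6, 6→7, 7→8 and 8→9 steps below, in order.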
        (_, _) if from.as_u64() + 1 < to.as_u64() => {
            let next = SchemaVersion(from.as_u64() + 1);
            migrate_schema::<T>(db.clone(), datadir, from, next, log.clone(), spec)?;
            migrate_schema::<T>(db, datadir, next, to, log, spec)
        }

        //
        // Migrations from before SchemaVersion(5) are deprecated.
        //

        // Migration for adding `execution_status` field to the fork choice store.
        (SchemaVersion(5), SchemaVersion(6)) => {
            // Database operations to be done atomically
            let mut ops = vec![];

            // The top-level `PersistedForkChoice` struct is still V1 but will have its internal
            // bytes for the fork choice updated to V6.
            let fork_choice_opt = db.get_item::<PersistedForkChoiceV1>(&FORK_CHOICE_DB_KEY)?;
            if let Some(mut persisted_fork_choice) = fork_choice_opt {
                migration_schema_v6::update_execution_statuses::<T>(&mut persisted_fork_choice)
                    .map_err(StoreError::SchemaMigrationError)?;

                // Store the converted fork choice store under the same key.
                ops.push(persisted_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
            }

            db.store_schema_version_atomically(to, ops)?;

            Ok(())
        }
        // 1. Add `proposer_boost_root`.
        // 2. Update `justified_epoch` to `justified_checkpoint` and `finalized_epoch` to
        //    `finalized_checkpoint`.
        // 3. This migration also includes a potential update to the justified
        //    checkpoint in case the fork choice store's justified checkpoint and finalized checkpoint
        //    combination does not actually exist for any blocks in fork choice. This was possible in
        //    the consensus spec prior to v1.1.6.
        //
        // Relevant issues:
        //
        // https://github.com/sigp/lighthouse/issues/2741
        // https://github.com/ethereum/consensus-specs/pull/2727
        // https://github.com/ethereum/consensus-specs/pull/2730
        (SchemaVersion(6), SchemaVersion(7)) => {
            // Database operations to be done atomically
            let mut ops = vec![];

            let fork_choice_opt = db.get_item::<PersistedForkChoiceV1>(&FORK_CHOICE_DB_KEY)?;
            if let Some(persisted_fork_choice_v1) = fork_choice_opt {
                // This migrates the `PersistedForkChoiceStore`, adding the `proposer_boost_root` field.
                let mut persisted_fork_choice_v7 = persisted_fork_choice_v1.into();

                let result = migration_schema_v7::update_fork_choice::<T>(
                    &mut persisted_fork_choice_v7,
                    db.clone(),
                );

                // Fall back to re-initializing fork choice from an anchor state if necessary.
                if let Err(e) = result {
                    warn!(log, "Unable to migrate to database schema 7, re-initializing fork choice"; "error" => ?e);
                    migration_schema_v7::update_with_reinitialized_fork_choice::<T>(
                        &mut persisted_fork_choice_v7,
                        db.clone(),
                        spec,
                    )
                    .map_err(StoreError::SchemaMigrationError)?;
                }

                // Store the converted fork choice store under the same key.
                ops.push(persisted_fork_choice_v7.as_kv_store_op(FORK_CHOICE_DB_KEY));
            }

            db.store_schema_version_atomically(to, ops)?;

            Ok(())
        }
        // Migration to add an `epoch` key to the fork choice's balances cache.
        (SchemaVersion(7), SchemaVersion(8)) => {
            let mut ops = vec![];
            let fork_choice_opt = db.get_item::<PersistedForkChoiceV7>(&FORK_CHOICE_DB_KEY)?;
            if let Some(fork_choice) = fork_choice_opt {
                let updated_fork_choice =
                    migration_schema_v8::update_fork_choice::<T>(fork_choice, db.clone())?;

                ops.push(updated_fork_choice.as_kv_store_op(FORK_CHOICE_DB_KEY));
            }

            db.store_schema_version_atomically(to, ops)?;

            Ok(())
        }
        // Upgrade from v8 to v9 to separate the execution payloads into their own column.
        (SchemaVersion(8), SchemaVersion(9)) => {
            migration_schema_v9::upgrade_to_v9::<T>(db.clone(), log)?;
            db.store_schema_version(to)
        }
        // Downgrade from v9 to v8 to ignore the separation of execution payloads
        // NOTE: only works before the Bellatrix fork epoch.
        (SchemaVersion(9), SchemaVersion(8)) => {
            migration_schema_v9::downgrade_from_v9::<T>(db.clone(), log)?;
            db.store_schema_version(to)
        }
        // Anything else is an error.
        (_, _) => Err(HotColdDBError::UnsupportedSchemaVersion {
            target_version: to,
            current_version: from,
        }
        .into()),
    }
}
```
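The multi-version arm in `migrate_schema` never jumps straight from `from` to `to`; it recurses until every intermediate schema has been applied. Below is a minimal, runnable sketch of just that recursion, using toy `u64` versions and a step log in place of the real `HotColdDB` plumbing (upgrades only):

```rust
/// Toy model of the one-step-at-a-time recursion in `migrate_schema`: instead of
/// mutating a database, we simply record which single-step migrations would run.
fn plan_upgrade(from: u64, to: u64, steps: &mut Vec<(u64, u64)>) {
    if from == to {
        // Migrating a schema version to itself is a no-op.
        return;
    }
    if from + 1 < to {
        // Split a multi-version jump into "one step" plus "the rest", exactly like
        // the recursive match arm above.
        let next = from + 1;
        plan_upgrade(from, next, steps);
        plan_upgrade(next, to, steps);
    } else {
        // A single-step migration, e.g. (5, 6) or (8, 9).
        steps.push((from, to));
    }
}

fn main() {
    let mut steps = Vec::new();
    plan_upgrade(5, 9, &mut steps);
    // Prints: [(5, 6), (6, 7), (7, 8), (8, 9)]
    println!("{:?}", steps);
}
```

Because each single-step arm persists its new schema version before the next step runs, an interrupted multi-version upgrade should leave the database at the last fully-applied intermediate version rather than in a half-migrated state.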