Persist to DB after setting canonical head (#2547)

## Issue Addressed

NA

## Proposed Changes

Missed head votes on attestations are a well-known issue. The primary cause is a block getting set as the head *after* the attestation deadline.

This PR aims to shorten the overall time between "block received" and "block set as head" by:

1. Persisting the head and fork choice *after* setting the canonical head.
    - Informal measurements show this takes ~200ms.
2. Pruning the op pool *after* setting the canonical head.
3. No longer persisting the op pool to disk during `BeaconChain::fork_choice`.
    - Informal measurements show this can take up to 1.2s.
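To make the reordering concrete, here is a minimal sketch. Every name below is an illustrative stand-in rather than the real Lighthouse API (the actual diff is at the end of this description), and the timings are the informal measurements above:

```rust
// Illustrative stand-ins only; see the real diff below.
fn persist_head_and_fork_choice() { /* ~200ms of disk IO */ }
fn persist_op_pool() { /* up to ~1.2s of disk IO */ }
fn prune_op_pool() {}
fn set_canonical_head() { /* the new head becomes visible to attesters */ }

// Before this PR: slow disk IO ran between finding the head and setting it.
fn update_head_before() {
    persist_head_and_fork_choice();
    prune_op_pool();
    persist_op_pool();
    set_canonical_head();
}

// After this PR: set the head first, then do the slow work; the op pool
// is pruned but no longer persisted here (only on clean shutdown).
fn update_head_after() {
    set_canonical_head();
    persist_head_and_fork_choice();
    prune_op_pool();
}

fn main() {
    update_head_before();
    update_head_after();
}
```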
     
I also add some metrics to help measure the effect of these changes.
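For context on how those metrics work: Lighthouse's `metrics::start_timer` helpers wrap the `prometheus` crate's histogram timer guard, which records the elapsed time when dropped. A self-contained sketch of that pattern (the histogram name and help text match the diff below; the direct `prometheus` usage is illustrative, since Lighthouse goes through its own metrics wrappers):

```rust
use prometheus::{Histogram, HistogramOpts, Registry};

fn main() {
    let registry = Registry::new();
    let lag_times = Histogram::with_opts(HistogramOpts::new(
        "beacon_fork_choice_set_head_lag_times",
        "Time taken between finding the head and setting the canonical head value",
    ))
    .unwrap();
    registry.register(Box::new(lag_times.clone())).unwrap();

    // `start_timer` returns a guard; dropping it observes the elapsed
    // time into the histogram, mirroring `drop(lag_timer)` in the diff.
    let lag_timer = lag_times.start_timer();
    // ... find the head, then set it as the canonical head ...
    drop(lag_timer);
}
```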
     
Persistence changes like this run the risk of breaking assumptions downstream. However, I have considered these risks and I think we're fine here. I will describe my reasoning for each change.

## Reasoning

### Change 1: Persisting the head and fork choice *after* setting the canonical head

For (1), although the function is called `persist_head_and_fork_choice`, it only persists:

- Fork choice
- Head tracker
- Genesis block root

Since `BeaconChain::fork_choice_internal` does not modify these values between the point where we previously persisted them and the point where we persist them now, I assert that the change I've made is non-substantial in terms of what ends up on disk. There's the possibility that some *other* thread has modified fork choice in the extra time we've given it, but that's totally fine.

Since the only time we *read* those values from disk is during startup, I assert that this has no impact during runtime. 
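A toy model of that claim (illustrative names only): the persisted values are write-only during normal operation and have exactly one read path, at startup, so *when* we write them relative to the head update cannot affect any runtime reader.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Store {
    kv: HashMap<&'static str, Vec<u8>>,
}

impl Store {
    // Runtime write path: called on epoch transitions / reorgs.
    fn persist_fork_choice(&mut self, snapshot: Vec<u8>) {
        self.kv.insert("fork_choice", snapshot);
    }

    // The only read path: rebuilding fork choice after a restart.
    fn load_fork_choice_on_startup(&self) -> Option<&Vec<u8>> {
        self.kv.get("fork_choice")
    }
}

fn main() {
    let mut store = Store::default();
    store.persist_fork_choice(vec![1, 2, 3]);
    assert!(store.load_fork_choice_on_startup().is_some());
}
```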

### Change 2: Pruning the op pool after setting the canonical head

Similar to the argument above, we don't modify the op pool during `BeaconChain::fork_choice_internal`, so it shouldn't matter when we prune. This change should be non-substantial.
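A toy version of the pruning, to make that concrete (illustrative only; the real `OperationPool::prune_attestations` is more involved):

```rust
use std::collections::HashMap;

type Epoch = u64;

struct OpPool {
    // Attestations bucketed by their target epoch.
    attestations: HashMap<Epoch, Vec<String>>,
}

impl OpPool {
    // Attestations are only includable for roughly one epoch, so keep
    // just the current and previous epochs. Fork choice never mutates
    // the pool, so running this before or after the head update yields
    // the same result.
    fn prune_attestations(&mut self, current_epoch: Epoch) {
        self.attestations
            .retain(|&epoch, _| epoch + 1 >= current_epoch);
    }
}

fn main() {
    let mut pool = OpPool { attestations: HashMap::new() };
    pool.attestations.insert(8, vec!["att_a".into()]);
    pool.attestations.insert(10, vec!["att_b".into()]);
    pool.prune_attestations(10);
    assert!(!pool.attestations.contains_key(&8)); // stale epoch pruned
    assert!(pool.attestations.contains_key(&10)); // current epoch kept
}
```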

### Change 3: No longer persisting the op pool to disk during `BeaconChain::fork_choice`

This change *is* substantial. With the proposed changes, we'll only be persisting the op pool to disk when we shut down cleanly (i.e., the `BeaconChain` gets dropped). This means we'll save disk IO and time during usual operation, but a `kill -9` or similar "crash" will probably result in an out-of-date op pool when we reboot. An out-of-date op pool can only have an impact when producing blocks, aggregate attestations, or sync committee contributions.
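A sketch of that shutdown path, under the assumption stated above that persistence happens when the `BeaconChain` is dropped (simplified types, not the real Lighthouse code):

```rust
struct BeaconChain {
    op_pool: Vec<String>, // stand-in for the real operation pool
}

impl BeaconChain {
    fn persist_op_pool(&self) -> Result<(), String> {
        // Stand-in for serializing the pool into the database.
        Ok(())
    }
}

impl Drop for BeaconChain {
    fn drop(&mut self) {
        // Runs on a clean shutdown. A `kill -9` skips `Drop`, so the
        // on-disk op pool would be stale after such a crash.
        if let Err(e) = self.persist_op_pool() {
            eprintln!("failed to persist op pool on shutdown: {e}");
        }
    }
}

fn main() {
    let chain = BeaconChain { op_pool: vec!["attestation".into()] };
    drop(chain); // clean shutdown: the pool is persisted here
}
```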

I think it's pretty reasonable that a crash might result in an out-of-date op pool, since:

- Crashes are fairly rare. Practically the only time I see LH suffer a full crash is when the OOM killer shows up, and that's a very serious event.
- It's generally quite rare to produce a block/aggregate immediately after a reboot. Just a few slots of runtime is probably enough to have a decent-enough op pool again.

## Additional Info

Credits to @macladson for the timings referenced here.
Commit `44fa54004c` (parent `1031f79aca`), authored by Paul Hauner on 2021-08-31 04:48:21 +00:00. 2 changed files with 13 additions and 6 deletions.

Changes to the `BeaconChain` head-update code:

```diff
@@ -2810,6 +2810,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
             return Ok(());
         }
 
+        let lag_timer = metrics::start_timer(&metrics::FORK_CHOICE_SET_HEAD_LAG_TIMES);
+
         // At this point we know that the new head block is not the same as the previous one
         metrics::inc_counter(&metrics::FORK_CHOICE_CHANGED_HEAD);
@@ -2913,12 +2915,6 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
             .slot()
             .epoch(T::EthSpec::slots_per_epoch());
 
-        if is_epoch_transition || is_reorg {
-            self.persist_head_and_fork_choice()?;
-            self.op_pool.prune_attestations(self.epoch()?);
-            self.persist_op_pool()?;
-        }
-
         let update_head_timer = metrics::start_timer(&metrics::UPDATE_HEAD_TIMES);
 
         // These fields are used for server-sent events
@@ -2934,6 +2930,8 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
             .start_slot(T::EthSpec::slots_per_epoch());
         let head_proposer_index = new_head.beacon_block.message().proposer_index();
 
+        drop(lag_timer);
+
         // Update the snapshot that stores the head of the chain at the time it received the
         // block.
         *self
@@ -2984,6 +2982,11 @@ impl<T: BeaconChainTypes> BeaconChain<T> {
            );
        });
 
+        if is_epoch_transition || is_reorg {
+            self.persist_head_and_fork_choice()?;
+            self.op_pool.prune_attestations(self.epoch()?);
+        }
+
        if new_finalized_checkpoint.epoch != old_finalized_checkpoint.epoch {
            // Due to race conditions, it's technically possible that the head we load here is
            // different to the one earlier in this function.
```

Changes to the metrics definitions:

```diff
@@ -261,6 +261,10 @@ lazy_static! {
         "beacon_fork_choice_process_attestation_seconds",
         "Time taken to add an attestation to fork choice"
     );
+    pub static ref FORK_CHOICE_SET_HEAD_LAG_TIMES: Result<Histogram> = try_create_histogram(
+        "beacon_fork_choice_set_head_lag_times",
+        "Time taken between finding the head and setting the canonical head value"
+    );
     pub static ref BALANCES_CACHE_HITS: Result<IntCounter> =
         try_create_int_counter("beacon_balances_cache_hits_total", "Count of times balances cache fulfils request");
     pub static ref BALANCES_CACHE_MISSES: Result<IntCounter> =
```