## Issue Addressed

Fixes an issue that @paulhauner found with the v2.3.0 release candidate whereby the fork choice runs introduced by #3168 tripped over each other during sync:

```
May 24 23:06:40.542 WARN Error signalling fork choice waiter, slot: 3884129, error: ForkChoiceSignalOutOfOrder { current: Slot(3884131), latest: Slot(3884129) }, service: beacon
```

This can occur because fork choice is called from the state advance _and_ the per-slot task. When one of these runs takes a long time it can end up finishing after a run from a later slot, tripping the error above. The problem is resolved by not running either of these fork choice calls during sync.

Additionally, these parallel fork choice runs were causing issues in the database:

```
May 24 07:49:05.098 WARN Found a chain that should already have been pruned, head_slot: 92925, head_block_root: 0xa76c7bf1b98e54ed4b0d8686efcfdf853484e6c2a4c67e91cbf19e5ad1f96b17, service: beacon
May 24 07:49:05.101 WARN Database migration failed, error: HotColdDBError(FreezeSlotError { current_split_slot: Slot(92608), proposed_split_slot: Slot(92576) }), service: beacon
```

In this case, two fork choice calls triggering finalization processing were processed out of order due to differences in their processing time, causing the background migrator to try to advance finalization _backwards_ 😳.

Removing the parallel fork choice runs from sync effectively addresses the issue, because these runs are the most likely to have different finalized checkpoints (because of the speed at which fork choice advances during sync). In theory it's still possible to process updates out of order if any other fork choice runs complete out of order, but this should be much less common. Fixing out-of-order fork choice runs in general is difficult, as it requires architectural changes like serialising fork choice updates through a single thread, or locking fork choice along with the head when it is mutated (https://github.com/sigp/lighthouse/pull/3175).

## Proposed Changes

* Don't run per-slot fork choice during sync (if the head is older than 4 slots).
* Don't run state-advance fork choice during sync (if the head is older than 4 slots).
* Check for monotonic finalization updates in the background migrator (see the sketch below). This is a good defensive check to have, and I'm not sure why we didn't have it before (we may have had it and wrongly removed it).
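For illustration, here is a minimal, self-contained Rust sketch of the two guards described above: skipping fork choice while the node looks like it is syncing, and rejecting non-monotonic finalization updates in the background migrator. The names (`Slot`, `Epoch`, `BackgroundMigrator`, `MAX_FORK_CHOICE_DISTANCE`) and signatures are simplified stand-ins for this sketch, not the actual Lighthouse types.

```rust
/// Assumed threshold: treat the node as syncing if the head is more than 4 slots old.
const MAX_FORK_CHOICE_DISTANCE: u64 = 4;

#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
struct Slot(u64);

#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
struct Epoch(u64);

/// Returns true if the per-slot / state-advance fork choice run should be skipped.
/// During sync, block processing drives fork choice instead, so the extra runs
/// (which are the ones most likely to race) are simply not scheduled.
fn should_skip_fork_choice(head_slot: Slot, current_slot: Slot) -> bool {
    current_slot.0.saturating_sub(head_slot.0) > MAX_FORK_CHOICE_DISTANCE
}

/// Simplified stand-in for the background migrator's finalization state.
struct BackgroundMigrator {
    latest_finalized_epoch: Epoch,
}

#[derive(Debug)]
enum MigrateError {
    /// A finalization notification arrived out of order; it is rejected rather than
    /// allowing the freezer split slot to move backwards.
    NonMonotonicFinalization { old: Epoch, new: Epoch },
}

impl BackgroundMigrator {
    /// Defensive monotonicity check: only accept finalization updates that move forward.
    fn process_finalization(&mut self, new_finalized_epoch: Epoch) -> Result<(), MigrateError> {
        if new_finalized_epoch < self.latest_finalized_epoch {
            return Err(MigrateError::NonMonotonicFinalization {
                old: self.latest_finalized_epoch,
                new: new_finalized_epoch,
            });
        }
        self.latest_finalized_epoch = new_finalized_epoch;
        // ... prune the hot database and advance the freezer split slot here ...
        Ok(())
    }
}

fn main() {
    // Head 10 slots behind the wall-clock slot: treat the node as syncing.
    assert!(should_skip_fork_choice(Slot(100), Slot(110)));

    let mut migrator = BackgroundMigrator { latest_finalized_epoch: Epoch(10) };
    // A backwards finalization update is rejected...
    assert!(migrator.process_finalization(Epoch(9)).is_err());
    // ...while a forward update is accepted.
    assert!(migrator.process_finalization(Epoch(11)).is_ok());
}
```

Note this is only a sketch of the guards' logic; in the real codebase the staleness check and the migrator live in separate components, and the migrator check is a last line of defence rather than the primary fix.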