## Issue Addressed
NA
## Proposed Changes
Add an optimization that moves `per_slot_processing` from the *leading edge* of block processing to the *trailing edge*. Ultimately, this allows us to import the block at slot `n` faster, because we have already used the tail end of slot `n - 1` to perform `per_slot_processing`.
Additionally, add a "block proposer cache" which caches the block proposers for a given epoch. Since we're now doing trailing-edge `per_slot_processing`, we can prime this cache with the values for the next epoch before those blocks arrive (assuming those blocks don't involve some unusual forking).
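To illustrate the idea (this is a hypothetical sketch, not the PR's actual types): a proposer cache can be keyed by the epoch together with the block root that decides the shuffling, so a lookup only succeeds when the chain didn't fork away from the state used to prime it. The `ProposerCache` name, key shape, and methods below are all illustrative assumptions.

```rust
use std::collections::HashMap;

/// Hypothetical proposer cache: maps (epoch, shuffling decision root) to the
/// proposer index for each slot in that epoch.
#[derive(Default)]
pub struct ProposerCache {
    map: HashMap<(u64, [u8; 32]), Vec<usize>>,
}

impl ProposerCache {
    /// Prime the cache with all proposers for `epoch`, e.g. computed during
    /// trailing-edge `per_slot_processing` before the epoch's blocks arrive.
    pub fn insert(&mut self, epoch: u64, decision_root: [u8; 32], proposers: Vec<usize>) {
        self.map.insert((epoch, decision_root), proposers);
    }

    /// Look up the proposer for a slot, if the cache was primed for this
    /// epoch/fork. `None` simply means we fall back to computing it.
    pub fn get(
        &self,
        epoch: u64,
        decision_root: [u8; 32],
        slot: u64,
        slots_per_epoch: u64,
    ) -> Option<usize> {
        let index = (slot % slots_per_epoch) as usize;
        self.map.get(&(epoch, decision_root))?.get(index).copied()
    }
}
```

A miss (wrong epoch or a different decision root after a re-org) is harmless: the caller just computes the proposer the slow way.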
There were several ancillary changes required to achieve this:
- Remove the `state_root` field of `BeaconSnapshot`, since there's no need to know it on a `pre_state` and in all other cases we can just read it from `block.state_root()`.
- This caused some "dust" changes of `snapshot.beacon_state_root` to `snapshot.beacon_state_root()`, where the `BeaconSnapshot::beacon_state_root()` function just reads the state root from the block.
- Rename `types::ShufflingId` to `AttestationShufflingId`. I originally did this because I added a `ProposerShufflingId` struct which turned out to be not so useful. I thought this new name was more descriptive, so I kept it.
- Address https://github.com/ethereum/eth2.0-specs/pull/2196
- Add a debug log when we get a block with an unknown parent. There was previously no logging around this case.
- Add a function to `BeaconState` to compute all proposers for an epoch without re-computing the active indices for each slot.
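The last point can be sketched as follows (a simplified stand-in, not the spec's seed-hashing and balance-weighted sampling): the expensive step, building the list of active validator indices, is done once and reused for every slot of the epoch, instead of being rebuilt per slot. The function names here are illustrative assumptions.

```rust
/// Collect the indices of active validators. In the real `BeaconState` this
/// involves scanning the validator registry; doing it once per epoch instead
/// of once per slot is the point of the optimization.
fn active_indices(is_active: &[bool]) -> Vec<usize> {
    is_active
        .iter()
        .enumerate()
        .filter(|(_, active)| **active)
        .map(|(i, _)| i)
        .collect()
}

/// Stand-in proposer selection: a real implementation hashes a per-slot seed
/// and samples weighted by effective balance; here we simply rotate through
/// the active set so the shape of the optimization is visible. Assumes a
/// non-empty active set.
fn proposers_for_epoch(is_active: &[bool], epoch: u64, slots_per_epoch: u64) -> Vec<usize> {
    let active = active_indices(is_active); // computed once, not per slot
    (0..slots_per_epoch)
        .map(|offset| {
            let slot = epoch * slots_per_epoch + offset;
            active[(slot as usize) % active.len()]
        })
        .collect()
}
```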
## Additional Info
- ~~Blocked on #2173~~
- ~~Blocked on #2179~~ That PR was wrapped into this PR.
- There's potentially some places where we could avoid computing the proposer indices in `per_block_processing` but I haven't done this here. These would be an optimization beyond the issue at hand (improving block propagation times) and I think this PR is already doing enough. We can come back for that later.
## TODO
- [x] Tidy, improve comments.
- [x] ~~Try to avoid computing the proposer index in `per_block_processing`?~~
220 lines · 7.4 KiB · Rust
```rust
use errors::EpochProcessingError as Error;
use safe_arith::SafeArith;
use tree_hash::TreeHash;
use types::*;

pub mod apply_rewards;
pub mod errors;
pub mod process_slashings;
pub mod registry_updates;
pub mod tests;
pub mod validator_statuses;

pub use apply_rewards::process_rewards_and_penalties;
pub use process_slashings::process_slashings;
pub use registry_updates::process_registry_updates;
pub use validator_statuses::{TotalBalances, ValidatorStatus, ValidatorStatuses};

/// Provides a summary of validator participation during the epoch.
pub struct EpochProcessingSummary {
    pub total_balances: TotalBalances,
    pub statuses: Vec<ValidatorStatus>,
}

/// Performs per-epoch processing on some BeaconState.
///
/// Mutates the given `BeaconState`, returning early if an error is encountered. If an error is
/// returned, a state might be "half-processed" and therefore in an invalid state.
///
/// Spec v0.12.1
pub fn per_epoch_processing<T: EthSpec>(
    state: &mut BeaconState<T>,
    spec: &ChainSpec,
) -> Result<EpochProcessingSummary, Error> {
    // Ensure the committee caches are built.
    state.build_committee_cache(RelativeEpoch::Previous, spec)?;
    state.build_committee_cache(RelativeEpoch::Current, spec)?;
    state.build_committee_cache(RelativeEpoch::Next, spec)?;

    // Load the struct we use to assign validators into sets based on their participation.
    //
    // E.g., attestation in the previous epoch, attested to the head, etc.
    let mut validator_statuses = ValidatorStatuses::new(state, spec)?;
    validator_statuses.process_attestations(&state, spec)?;

    // Justification and finalization.
    process_justification_and_finalization(state, &validator_statuses.total_balances)?;

    // Rewards and Penalties.
    process_rewards_and_penalties(state, &mut validator_statuses, spec)?;

    // Registry Updates.
    process_registry_updates(state, spec)?;

    // Slashings.
    process_slashings(
        state,
        validator_statuses.total_balances.current_epoch(),
        spec,
    )?;

    // Final updates.
    process_final_updates(state, spec)?;

    // Rotate the epoch caches to suit the epoch transition.
    state.advance_caches();

    Ok(EpochProcessingSummary {
        total_balances: validator_statuses.total_balances,
        statuses: validator_statuses.statuses,
    })
}

/// Update the following fields on the `BeaconState`:
///
/// - `justification_bitfield`.
/// - `previous_justified_epoch`
/// - `previous_justified_root`
/// - `current_justified_epoch`
/// - `current_justified_root`
/// - `finalized_epoch`
/// - `finalized_root`
///
/// Spec v0.12.1
#[allow(clippy::if_same_then_else)] // For readability and consistency with spec.
pub fn process_justification_and_finalization<T: EthSpec>(
    state: &mut BeaconState<T>,
    total_balances: &TotalBalances,
) -> Result<(), Error> {
    if state.current_epoch() <= T::genesis_epoch().safe_add(1)? {
        return Ok(());
    }

    let previous_epoch = state.previous_epoch();
    let current_epoch = state.current_epoch();

    let old_previous_justified_checkpoint = state.previous_justified_checkpoint;
    let old_current_justified_checkpoint = state.current_justified_checkpoint;

    // Process justifications
    state.previous_justified_checkpoint = state.current_justified_checkpoint;
    state.justification_bits.shift_up(1)?;

    if total_balances
        .previous_epoch_target_attesters()
        .safe_mul(3)?
        >= total_balances.current_epoch().safe_mul(2)?
    {
        state.current_justified_checkpoint = Checkpoint {
            epoch: previous_epoch,
            root: *state.get_block_root_at_epoch(previous_epoch)?,
        };
        state.justification_bits.set(1, true)?;
    }
    // If the current epoch gets justified, fill the last bit.
    if total_balances
        .current_epoch_target_attesters()
        .safe_mul(3)?
        >= total_balances.current_epoch().safe_mul(2)?
    {
        state.current_justified_checkpoint = Checkpoint {
            epoch: current_epoch,
            root: *state.get_block_root_at_epoch(current_epoch)?,
        };
        state.justification_bits.set(0, true)?;
    }

    let bits = &state.justification_bits;

    // The 2nd/3rd/4th most recent epochs are all justified, the 2nd using the 4th as source.
    if (1..4).all(|i| bits.get(i).unwrap_or(false))
        && old_previous_justified_checkpoint.epoch.safe_add(3)? == current_epoch
    {
        state.finalized_checkpoint = old_previous_justified_checkpoint;
    }
    // The 2nd/3rd most recent epochs are both justified, the 2nd using the 3rd as source.
    else if (1..3).all(|i| bits.get(i).unwrap_or(false))
        && old_previous_justified_checkpoint.epoch.safe_add(2)? == current_epoch
    {
        state.finalized_checkpoint = old_previous_justified_checkpoint;
    }
    // The 1st/2nd/3rd most recent epochs are all justified, the 1st using the 3rd as source.
    if (0..3).all(|i| bits.get(i).unwrap_or(false))
        && old_current_justified_checkpoint.epoch.safe_add(2)? == current_epoch
    {
        state.finalized_checkpoint = old_current_justified_checkpoint;
    }
    // The 1st/2nd most recent epochs are both justified, the 1st using the 2nd as source.
    else if (0..2).all(|i| bits.get(i).unwrap_or(false))
        && old_current_justified_checkpoint.epoch.safe_add(1)? == current_epoch
    {
        state.finalized_checkpoint = old_current_justified_checkpoint;
    }

    Ok(())
}

/// Finish up an epoch update.
///
/// Spec v0.12.1
pub fn process_final_updates<T: EthSpec>(
    state: &mut BeaconState<T>,
    spec: &ChainSpec,
) -> Result<(), Error> {
    let current_epoch = state.current_epoch();
    let next_epoch = state.next_epoch()?;

    // Reset eth1 data votes.
    if state
        .slot
        .safe_add(1)?
        .safe_rem(T::SlotsPerEth1VotingPeriod::to_u64())?
        == 0
    {
        state.eth1_data_votes = VariableList::empty();
    }

    // Update effective balances with hysteresis (lag).
    let hysteresis_increment = spec
        .effective_balance_increment
        .safe_div(spec.hysteresis_quotient)?;
    let downward_threshold = hysteresis_increment.safe_mul(spec.hysteresis_downward_multiplier)?;
    let upward_threshold = hysteresis_increment.safe_mul(spec.hysteresis_upward_multiplier)?;
    for (index, validator) in state.validators.iter_mut().enumerate() {
        let balance = state.balances[index];

        if balance.safe_add(downward_threshold)? < validator.effective_balance
            || validator.effective_balance.safe_add(upward_threshold)? < balance
        {
            validator.effective_balance = std::cmp::min(
                balance.safe_sub(balance.safe_rem(spec.effective_balance_increment)?)?,
                spec.max_effective_balance,
            );
        }
    }

    // Reset slashings
    state.set_slashings(next_epoch, 0)?;

    // Set randao mix
    state.set_randao_mix(next_epoch, *state.get_randao_mix(current_epoch)?)?;

    // Set historical root accumulator
    if next_epoch
        .as_u64()
        .safe_rem(T::SlotsPerHistoricalRoot::to_u64().safe_div(T::slots_per_epoch())?)?
        == 0
    {
        let historical_batch = state.historical_batch();
        state
            .historical_roots
            .push(historical_batch.tree_hash_root())?;
    }

    // Rotate current/previous epoch attestations
    state.previous_epoch_attestations =
        std::mem::replace(&mut state.current_epoch_attestations, VariableList::empty());

    Ok(())
}
```
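The hysteresis branch in `process_final_updates` is easiest to see with concrete numbers. Below is a self-contained sketch of the same arithmetic (without the `SafeArith` wrappers), using constants that match the v0.12.x mainnet-style presets; treat the exact values as assumptions.

```rust
// Assumed mainnet-style constants, in Gwei (1 ETH = 1_000_000_000 Gwei).
const EFFECTIVE_BALANCE_INCREMENT: u64 = 1_000_000_000;
const HYSTERESIS_QUOTIENT: u64 = 4;
const HYSTERESIS_DOWNWARD_MULTIPLIER: u64 = 1;
const HYSTERESIS_UPWARD_MULTIPLIER: u64 = 5;
const MAX_EFFECTIVE_BALANCE: u64 = 32_000_000_000;

/// Mirror of the hysteresis update: the effective balance only moves when the
/// actual balance leaves a band around it (0.25 ETH below, 1.25 ETH above with
/// these constants), then rounds down to a whole increment, capped at the max.
fn updated_effective_balance(balance: u64, effective_balance: u64) -> u64 {
    let hysteresis_increment = EFFECTIVE_BALANCE_INCREMENT / HYSTERESIS_QUOTIENT;
    let downward_threshold = hysteresis_increment * HYSTERESIS_DOWNWARD_MULTIPLIER;
    let upward_threshold = hysteresis_increment * HYSTERESIS_UPWARD_MULTIPLIER;
    if balance + downward_threshold < effective_balance
        || effective_balance + upward_threshold < balance
    {
        std::cmp::min(
            balance - balance % EFFECTIVE_BALANCE_INCREMENT,
            MAX_EFFECTIVE_BALANCE,
        )
    } else {
        effective_balance
    }
}
```

For example, a validator at 32 ETH effective balance whose actual balance dips to 31.8 ETH stays inside the 0.25 ETH downward band and keeps 32 ETH; a dip to 31.7 ETH crosses the threshold and drops the effective balance to 31 ETH.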