lighthouse/slasher/service/src/service.rs
Paul Hauner be4e261e74 Use async code when interacting with EL (#3244)
## Overview

This rather extensive PR achieves two primary goals:

1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.

Additionally, it achieves:

- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected (see the sketch after this list).
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
    - I had to do this to deal with sending blocks into spawned tasks.
    - Previously we were cloning the beacon block at least twice during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
    - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
    - Avoids cloning *all the blocks* in *every chain segment* during sync.
    - It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this; my PR is already big enough 😅)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
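
As a rough illustration of the first of these (a hedged sketch, not the actual Lighthouse code; `update_execution_engine_forkchoice` and `prune_caches` are hypothetical stand-ins for the real functions):

```
// After a new head is selected, notify the execution layer (EL) and prune
// caches concurrently rather than one after the other.
async fn update_execution_engine_forkchoice() { /* send the forkchoice update to the EL */ }
async fn prune_caches() { /* drop cache entries orphaned by the new head */ }

async fn on_new_head() {
    // `tokio::join!` polls both futures concurrently and waits for both.
    tokio::join!(update_execution_engine_forkchoice(), prune_caches());
}
```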

For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

## Changes to `canonical_head` and `fork_choice`

Previously, the `BeaconChain` had two separate fields:

```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```

Now, we have grouped these values under a single struct:

```
canonical_head: CanonicalHead {
  cached_head: RwLock<Arc<Snapshot>>,
  fork_choice: RwLock<BeaconForkChoice>
} 
```

Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
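
As a minimal sketch of the locking pattern (simplified types, and the standard library's `RwLock` in place of the actual Lighthouse types):

```
use std::sync::{Arc, RwLock};

struct Snapshot {
    // Head block, head state, etc.
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    // fork_choice: RwLock<BeaconForkChoice> (elided here).
}

impl CanonicalHead {
    /// Clone the `Arc` and release the lock immediately. Callers read the
    /// snapshot without holding `cached_head`, so they can never hold it at
    /// the same time as the `fork_choice` lock.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}
```

This mirrors the `chain.canonical_head.cached_head()` accessor mentioned in the test notes below.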

## Breaking Changes

### The `state` (root) field in the `finalized_checkpoint` SSE event

Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:

1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.

Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](de2b2801c8/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java (L171-L182)) it uses [`getStateRootFromBlockRoot`](de2b2801c8/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java (L336-L341)) which uses (1).

I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
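
To make the difference concrete, here's a hedged sketch with hypothetical helpers (`get_block`, `state_root_at_slot`) and simplified types:

```
type Root = [u8; 32];

struct Checkpoint {
    epoch: u64,
    root: Root,
}

struct Block {
    state_root: Root,
}

const SLOTS_PER_EPOCH: u64 = 32;

// Hypothetical accessors; stand-ins for real chain lookups.
fn get_block(_root: Root) -> Block {
    unimplemented!()
}

fn state_root_at_slot(_slot: u64) -> Root {
    unimplemented!()
}

fn finalized_event_state_roots(finalized: &Checkpoint) -> (Root, Root) {
    // (1) The state root of the finalized block itself: what Teku reports,
    //     and what Lighthouse now reports after this PR.
    let option_1 = get_block(finalized.root).state_root;
    // (2) The state root at start_slot(n): the state from (1), advanced
    //     through any skip slots (Lighthouse's previous behaviour).
    let option_2 = state_root_at_slot(finalized.epoch * SLOTS_PER_EPOCH);
    (option_1, option_2)
}
```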

## Notes for Reviewers

I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.

I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".

I've made it so fork choice will run when it's first constructed. I did this because I wanted a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy, I've just used the `slot` that was saved in the `fork_choice_store`. That seems like a faithful representation of the slot when we saved it.
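
A simplified sketch of the idea (assumed names, not the real `ForkChoice` API):

```
type Slot = u64;

struct Head {
    slot: Slot,
    // Root, justified/finalized checkpoints, etc.
}

struct ForkChoice {
    /// Never an `Option`: `get_head` runs in the constructor, so a cached
    /// head always exists.
    cached_head: Head,
}

impl ForkChoice {
    /// `current_slot` tells fork choice *when* it's being constructed. When
    /// loading from the store without a slot clock handy, the slot saved in
    /// the `fork_choice_store` is used instead.
    fn new(current_slot: Slot) -> Self {
        Self {
            cached_head: Self::get_head(current_slot),
        }
    }

    fn get_head(slot: Slot) -> Head {
        // Placeholder for the real head computation.
        Head { slot }
    }
}
```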

I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.

Since we're using FC for the finalized/justified checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue; if anything, I think it'll be better, since the genesis alias has caught us out a few times (`0x00..00` isn't actually a real root). Edit: I did find a case where the `network` crate expected the `0x00..00` alias and patched it here: 3f26ac3e2.

You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test; not as efficient, but hopefully insignificant.

I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.

Co-authored-by: Mac L <mjladson@pm.me>
2022-07-03 05:36:50 +00:00

use beacon_chain::{
    observed_operations::ObservationOutcome, BeaconChain, BeaconChainError, BeaconChainTypes,
};
use directory::size_of_dir;
use lighthouse_network::PubsubMessage;
use network::NetworkMessage;
use slasher::{
    metrics::{self, SLASHER_DATABASE_SIZE, SLASHER_RUN_TIME},
    Slasher,
};
use slog::{debug, error, info, trace, warn, Logger};
use slot_clock::SlotClock;
use state_processing::{
    per_block_processing::errors::{
        AttesterSlashingInvalid, BlockOperationError, ProposerSlashingInvalid,
    },
    VerifyOperation,
};
use std::sync::mpsc::{sync_channel, Receiver, SyncSender, TrySendError};
use std::sync::Arc;
use task_executor::TaskExecutor;
use tokio::sync::mpsc::UnboundedSender;
use tokio::time::{interval_at, Duration, Instant};
use types::{AttesterSlashing, Epoch, EthSpec, ProposerSlashing};

/// A service that runs the slasher and forwards any slashings it finds to the
/// beacon chain and (optionally) the network.
pub struct SlasherService<T: BeaconChainTypes> {
    beacon_chain: Arc<BeaconChain<T>>,
    network_sender: UnboundedSender<NetworkMessage<T::EthSpec>>,
}

impl<T: BeaconChainTypes> SlasherService<T> {
    /// Create a new service but don't start any tasks yet.
    pub fn new(
        beacon_chain: Arc<BeaconChain<T>>,
        network_sender: UnboundedSender<NetworkMessage<T::EthSpec>>,
    ) -> Self {
        Self {
            beacon_chain,
            network_sender,
        }
    }

    /// Start the slasher service tasks on the `executor`.
    pub fn run(&self, executor: &TaskExecutor) -> Result<(), String> {
        let slasher = self
            .beacon_chain
            .slasher
            .clone()
            .ok_or("No slasher is configured")?;
        let log = slasher.log().clone();

        info!(log, "Starting slasher"; "broadcast" => slasher.config().broadcast);

        // Buffer just a single message in the channel. If the receiver is still processing, we
        // don't need to burden them with more work (we can wait).
        let (notif_sender, notif_receiver) = sync_channel(1);
        let update_period = slasher.config().update_period;
        let slot_offset = slasher.config().slot_offset;
        let beacon_chain = self.beacon_chain.clone();
        let network_sender = self.network_sender.clone();

        executor.spawn(
            Self::run_notifier(
                beacon_chain.clone(),
                update_period,
                slot_offset,
                notif_sender,
                log,
            ),
            "slasher_server_notifier",
        );

        executor.spawn_blocking(
            || Self::run_processor(beacon_chain, slasher, notif_receiver, network_sender),
            "slasher_server_processor",
        );

        Ok(())
    }

    /// Run the async notifier which periodically prompts the processor to run.
    async fn run_notifier(
        beacon_chain: Arc<BeaconChain<T>>,
        update_period: u64,
        slot_offset: f64,
        notif_sender: SyncSender<Epoch>,
        log: Logger,
    ) {
        let slot_offset = Duration::from_secs_f64(slot_offset);

        // Align the first tick to `slot_offset` seconds after the start of the
        // next slot, falling back to "now" if the slot clock is unusable.
        let start_instant =
            if let Some(duration_to_next_slot) = beacon_chain.slot_clock.duration_to_next_slot() {
                Instant::now() + duration_to_next_slot + slot_offset
            } else {
                error!(log, "Error aligning slasher to slot clock");
                Instant::now()
            };
        let mut interval = interval_at(start_instant, Duration::from_secs(update_period));

        loop {
            interval.tick().await;
            if let Some(current_slot) = beacon_chain.slot_clock.now() {
                let current_epoch = current_slot.epoch(T::EthSpec::slots_per_epoch());
                // Stop the notifier if the processor has shut down.
                if let Err(TrySendError::Disconnected(_)) = notif_sender.try_send(current_epoch) {
                    break;
                }
            } else {
                trace!(log, "Slasher has nothing to do: we are pre-genesis");
            }
        }
    }

    /// Run the blocking task that performs work.
    fn run_processor(
        beacon_chain: Arc<BeaconChain<T>>,
        slasher: Arc<Slasher<T::EthSpec>>,
        notif_receiver: Receiver<Epoch>,
        network_sender: UnboundedSender<NetworkMessage<T::EthSpec>>,
    ) {
        let log = slasher.log();
        while let Ok(current_epoch) = notif_receiver.recv() {
            let t = Instant::now();

            let batch_timer = metrics::start_timer(&SLASHER_RUN_TIME);
            let stats = match slasher.process_queued(current_epoch) {
                Ok(stats) => Some(stats),
                Err(e) => {
                    error!(
                        log,
                        "Error during scheduled slasher processing";
                        "epoch" => current_epoch,
                        "error" => ?e,
                    );
                    None
                }
            };
            drop(batch_timer);

            // Prune the database, even in the case where batch processing failed.
            // If the database is full then pruning could help to free it up.
            if let Err(e) = slasher.prune_database(current_epoch) {
                error!(
                    log,
                    "Error during slasher database pruning";
                    "epoch" => current_epoch,
                    "error" => ?e,
                );
                continue;
            };

            // Provide slashings to the beacon chain, and optionally publish them.
            Self::process_slashings(&beacon_chain, &slasher, &network_sender);

            let database_size = size_of_dir(&slasher.config().database_path);
            metrics::set_gauge(&SLASHER_DATABASE_SIZE, database_size as i64);

            if let Some(stats) = stats {
                debug!(
                    log,
                    "Completed slasher update";
                    "epoch" => current_epoch,
                    "time_taken" => format!("{}ms", t.elapsed().as_millis()),
                    "num_attestations" => stats.attestation_stats.num_processed,
                    "num_blocks" => stats.block_stats.num_processed,
                );
            }
        }
    }

    /// Push any slashings found to the beacon chain, optionally publishing them on the network.
    fn process_slashings(
        beacon_chain: &BeaconChain<T>,
        slasher: &Slasher<T::EthSpec>,
        network_sender: &UnboundedSender<NetworkMessage<T::EthSpec>>,
    ) {
        Self::process_attester_slashings(beacon_chain, slasher, network_sender);
        Self::process_proposer_slashings(beacon_chain, slasher, network_sender);
    }

    fn process_attester_slashings(
        beacon_chain: &BeaconChain<T>,
        slasher: &Slasher<T::EthSpec>,
        network_sender: &UnboundedSender<NetworkMessage<T::EthSpec>>,
    ) {
        let log = slasher.log();
        let attester_slashings = slasher.get_attester_slashings();

        for slashing in attester_slashings {
            // Verify slashing signature.
            let verified_slashing = match beacon_chain.with_head(|head| {
                Ok::<_, BeaconChainError>(
                    slashing
                        .clone()
                        .validate(&head.beacon_state, &beacon_chain.spec)?,
                )
            }) {
                Ok(verified) => verified,
                Err(BeaconChainError::AttesterSlashingValidationError(
                    BlockOperationError::Invalid(AttesterSlashingInvalid::NoSlashableIndices),
                )) => {
                    debug!(
                        log,
                        "Skipping attester slashing for slashed validators";
                        "slashing" => ?slashing,
                    );
                    continue;
                }
                Err(e) => {
                    warn!(
                        log,
                        "Attester slashing produced is invalid";
                        "error" => ?e,
                        "slashing" => ?slashing,
                    );
                    continue;
                }
            };

            // Add to local op pool.
            beacon_chain.import_attester_slashing(verified_slashing);

            // Publish to the network if broadcast is enabled.
            if slasher.config().broadcast {
                if let Err(e) =
                    Self::publish_attester_slashing(beacon_chain, network_sender, slashing)
                {
                    debug!(
                        log,
                        "Unable to publish attester slashing";
                        "error" => e,
                    );
                }
            }
        }
    }

    fn process_proposer_slashings(
        beacon_chain: &BeaconChain<T>,
        slasher: &Slasher<T::EthSpec>,
        network_sender: &UnboundedSender<NetworkMessage<T::EthSpec>>,
    ) {
        let log = slasher.log();
        let proposer_slashings = slasher.get_proposer_slashings();

        for slashing in proposer_slashings {
            // Verify slashing signature.
            let verified_slashing = match beacon_chain.with_head(|head| {
                Ok(slashing
                    .clone()
                    .validate(&head.beacon_state, &beacon_chain.spec)?)
            }) {
                Ok(verified) => verified,
                Err(BeaconChainError::ProposerSlashingValidationError(
                    BlockOperationError::Invalid(ProposerSlashingInvalid::ProposerNotSlashable(
                        index,
                    )),
                )) => {
                    debug!(
                        log,
                        "Skipping proposer slashing for slashed validator";
                        "validator_index" => index,
                    );
                    continue;
                }
                Err(e) => {
                    error!(
                        log,
                        "Proposer slashing produced is invalid";
                        "error" => ?e,
                        "slashing" => ?slashing,
                    );
                    continue;
                }
            };

            // Add to local op pool.
            beacon_chain.import_proposer_slashing(verified_slashing);

            // Publish to the network if broadcast is enabled.
            if slasher.config().broadcast {
                if let Err(e) =
                    Self::publish_proposer_slashing(beacon_chain, network_sender, slashing)
                {
                    debug!(
                        log,
                        "Unable to publish proposer slashing";
                        "error" => e,
                    );
                }
            }
        }
    }

    fn publish_attester_slashing(
        beacon_chain: &BeaconChain<T>,
        network_sender: &UnboundedSender<NetworkMessage<T::EthSpec>>,
        slashing: AttesterSlashing<T::EthSpec>,
    ) -> Result<(), String> {
        let outcome = beacon_chain
            .verify_attester_slashing_for_gossip(slashing)
            .map_err(|e| format!("gossip verification error: {:?}", e))?;

        if let ObservationOutcome::New(slashing) = outcome {
            network_sender
                .send(NetworkMessage::Publish {
                    messages: vec![PubsubMessage::AttesterSlashing(Box::new(
                        slashing.into_inner(),
                    ))],
                })
                .map_err(|e| format!("network error: {:?}", e))?;
        }

        Ok(())
    }

    fn publish_proposer_slashing(
        beacon_chain: &BeaconChain<T>,
        network_sender: &UnboundedSender<NetworkMessage<T::EthSpec>>,
        slashing: ProposerSlashing,
    ) -> Result<(), String> {
        let outcome = beacon_chain
            .verify_proposer_slashing_for_gossip(slashing)
            .map_err(|e| format!("gossip verification error: {:?}", e))?;

        if let ObservationOutcome::New(slashing) = outcome {
            network_sender
                .send(NetworkMessage::Publish {
                    messages: vec![PubsubMessage::ProposerSlashing(Box::new(
                        slashing.into_inner(),
                    ))],
                })
                .map_err(|e| format!("network error: {:?}", e))?;
        }

        Ok(())
    }
}