lighthouse/validator_client/slashing_protection/src/interchange.rs
Michael Sproul 0b54ff17f2 Fix assert in slashing protection import (#2881)
## Issue Addressed

There was an overeager assert in the import of slashing protection data here:

fff01b24dd/validator_client/slashing_protection/src/slashing_database.rs (L939)

We were asserting that if the import contained any blocks for a validator, then the database should contain only a single block for that validator due to pruning/consolidation. However, we would only prune if the import contained _relevant blocks_ (that would actually change the maximum slot):

fff01b24dd/validator_client/slashing_protection/src/slashing_database.rs (L629-L633)

This led to spurious failures (in the form of `ConsistencyError`s) when importing an interchange containing no new blocks for any of the validators. This wasn't hard to trigger: e.g. exporting and then immediately re-importing the same file.
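The pre-fix relevance condition can be sketched in isolation. This is a hypothetical reconstruction with illustrative names (`import_is_relevant` is not the actual Lighthouse function), showing why a same-slot re-import skipped pruning:

```rust
// Hypothetical sketch of the pre-fix pruning condition; the names are
// illustrative, not the actual Lighthouse code. Pruning only ran when the
// import strictly raised the stored maximum slot, so re-importing identical
// data skipped pruning entirely and the post-import assert could fire.
fn import_is_relevant(current_max_slot: Option<u64>, imported_max_slot: Option<u64>) -> bool {
    match (current_max_slot, imported_max_slot) {
        // No imported blocks: nothing to prune.
        (_, None) => false,
        // Empty database: any imported block is relevant.
        (None, Some(_)) => true,
        // Only prune when the import strictly raises the maximum slot.
        (Some(current), Some(imported)) => imported > current,
    }
}
```

Under this condition, an export followed by an immediate re-import has `imported == current`, so no pruning ran and the database could legitimately hold more than one block for the validator.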

## Proposed Changes

This PR fixes the issue by simplifying the import so that it works more like the import for attestations, i.e. we make the assert hold by always pruning when the imported file contains blocks.

In practice this doesn't have any downsides: if we import a new block then the behaviour is as before, except that we drop the `signing_root`. If we import an existing or older block then we prune the database to a single block. The only case where this matters is extreme local clock drift combined with the import of a non-drifted interchange, which should occur infrequently.
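The post-fix behaviour described above can be sketched with plain types. This is a hedged sketch, not the actual Lighthouse implementation (`StoredBlock` and `import_blocks` are illustrative names): whenever the import contains any blocks, consolidate down to a single maximum-slot entry with the signing root dropped, mirroring the attestation import.

```rust
#[derive(Debug, Clone, PartialEq)]
struct StoredBlock {
    slot: u64,
    signing_root: Option<[u8; 32]>,
}

// Sketch of the always-prune import: if the imported file contains any blocks
// for a validator, reduce the stored blocks to a single entry at the overall
// maximum slot, dropping the signing root. This makes the post-import
// "single block per validator" assert hold unconditionally.
fn import_blocks(stored: Vec<StoredBlock>, imported_slots: &[u64]) -> Vec<StoredBlock> {
    match imported_slots.iter().copied().max() {
        // No imported blocks: the database is untouched.
        None => stored,
        Some(imported_max) => {
            let stored_max = stored.iter().map(|b| b.slot).max();
            let max_slot = stored_max.map_or(imported_max, |s| s.max(imported_max));
            // Always prune to a single block, even if the import didn't raise the max.
            vec![StoredBlock { slot: max_slot, signing_root: None }]
        }
    }
}
```

Note that re-importing an already-known slot now consolidates rather than leaving multiple blocks behind, which is exactly the export/re-import case that used to trip the assert.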

## Additional Info

I've also added `Arbitrary` implementations to the slashing protection types so that we can fuzz them. I have a fuzzer sitting in a separate directory which I may or may not commit in a subsequent PR.
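For context, `derive(arbitrary::Arbitrary)` builds each field from a stream of fuzzer-supplied bytes. The real implementation lives in the `arbitrary` crate; the following std-only sketch mimics the idea by hand for a simplified block type (`SimpleSignedBlock` and `arbitrary_block` are illustrative names, not part of the crate):

```rust
// Simplified stand-in for `SignedBlock`: a slot plus an optional signing root.
struct SimpleSignedBlock {
    slot: u64,
    signing_root: Option<[u8; 32]>,
}

// Hand-rolled analogue of what the Arbitrary derive generates: consume bytes
// from the front of the input to populate each field, failing (None) if the
// fuzzer didn't supply enough data.
fn arbitrary_block(bytes: &mut &[u8]) -> Option<SimpleSignedBlock> {
    fn take<'a>(bytes: &mut &'a [u8], n: usize) -> Option<&'a [u8]> {
        if bytes.len() < n {
            return None;
        }
        let (head, tail) = bytes.split_at(n);
        *bytes = tail;
        Some(head)
    }
    let slot = u64::from_le_bytes(take(bytes, 8)?.try_into().ok()?);
    // One byte decides whether a signing root follows.
    let has_root = take(bytes, 1)?[0] & 1 == 1;
    let signing_root = if has_root {
        Some(<[u8; 32]>::try_from(take(bytes, 32)?).ok()?)
    } else {
        None
    };
    Some(SimpleSignedBlock { slot, signing_root })
}
```

A fuzz harness would decode an `Interchange` this way from raw fuzzer input and then exercise the import path with it.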

There's a new test in the standard interchange tests v5.2.1 that checks for this issue: https://github.com/eth-clients/slashing-protection-interchange-tests/pull/12
2022-01-04 20:46:44 +00:00


use crate::InterchangeError;
use serde_derive::{Deserialize, Serialize};
use std::cmp::max;
use std::collections::{HashMap, HashSet};
use std::io;
use types::{Epoch, Hash256, PublicKeyBytes, Slot};

#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub struct InterchangeMetadata {
#[serde(with = "eth2_serde_utils::quoted_u64::require_quotes")]
pub interchange_format_version: u64,
pub genesis_validators_root: Hash256,
}

#[derive(Debug, Clone, PartialEq, Eq, Hash, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub struct InterchangeData {
pub pubkey: PublicKeyBytes,
pub signed_blocks: Vec<SignedBlock>,
pub signed_attestations: Vec<SignedAttestation>,
}

#[derive(Debug, Clone, PartialEq, Eq, Hash, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub struct SignedBlock {
#[serde(with = "eth2_serde_utils::quoted_u64::require_quotes")]
pub slot: Slot,
#[serde(skip_serializing_if = "Option::is_none")]
pub signing_root: Option<Hash256>,
}

#[derive(Debug, Clone, PartialEq, Eq, Hash, Deserialize, Serialize)]
#[serde(deny_unknown_fields)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub struct SignedAttestation {
#[serde(with = "eth2_serde_utils::quoted_u64::require_quotes")]
pub source_epoch: Epoch,
#[serde(with = "eth2_serde_utils::quoted_u64::require_quotes")]
pub target_epoch: Epoch,
#[serde(skip_serializing_if = "Option::is_none")]
pub signing_root: Option<Hash256>,
}

#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
#[cfg_attr(feature = "arbitrary", derive(arbitrary::Arbitrary))]
pub struct Interchange {
pub metadata: InterchangeMetadata,
pub data: Vec<InterchangeData>,
}

impl Interchange {
pub fn from_json_str(json: &str) -> Result<Self, serde_json::Error> {
serde_json::from_str(json)
}

pub fn from_json_reader(mut reader: impl std::io::Read) -> Result<Self, io::Error> {
// We read the entire file into memory first, as this is *a lot* faster than using
// `serde_json::from_reader`. See https://github.com/serde-rs/json/issues/160
let mut json_str = String::new();
reader.read_to_string(&mut json_str)?;
Ok(Interchange::from_json_str(&json_str)?)
}

pub fn write_to(&self, writer: impl std::io::Write) -> Result<(), serde_json::Error> {
serde_json::to_writer(writer, self)
}

/// Do these two `Interchange`s contain the same data (ignoring ordering)?
pub fn equiv(&self, other: &Self) -> bool {
let self_set = self.data.iter().collect::<HashSet<_>>();
let other_set = other.data.iter().collect::<HashSet<_>>();
self.metadata == other.metadata && self_set == other_set
}

/// The number of entries in `data`.
pub fn len(&self) -> usize {
self.data.len()
}

/// Is the `data` part of the interchange completely empty?
pub fn is_empty(&self) -> bool {
self.len() == 0
}

/// Minify an interchange by constructing a synthetic block & attestation for each validator.
pub fn minify(&self) -> Result<Self, InterchangeError> {
// Map from pubkey to optional max block and max attestation.
let mut validator_data =
HashMap::<PublicKeyBytes, (Option<SignedBlock>, Option<SignedAttestation>)>::new();
for data in self.data.iter() {
// Existing maximum attestation and maximum block.
let (max_block, max_attestation) = validator_data
.entry(data.pubkey)
.or_insert_with(|| (None, None));
// Find maximum source and target epochs.
let max_source_epoch = data
.signed_attestations
.iter()
.map(|attestation| attestation.source_epoch)
.max();
let max_target_epoch = data
.signed_attestations
.iter()
.map(|attestation| attestation.target_epoch)
.max();
match (max_source_epoch, max_target_epoch) {
(Some(source_epoch), Some(target_epoch)) => {
if let Some(prev_max) = max_attestation {
prev_max.source_epoch = max(prev_max.source_epoch, source_epoch);
prev_max.target_epoch = max(prev_max.target_epoch, target_epoch);
} else {
*max_attestation = Some(SignedAttestation {
source_epoch,
target_epoch,
signing_root: None,
});
}
}
(None, None) => {}
_ => return Err(InterchangeError::MaxInconsistent),
};
// Find maximum block slot.
let max_block_slot = data.signed_blocks.iter().map(|block| block.slot).max();
if let Some(max_slot) = max_block_slot {
if let Some(prev_max) = max_block {
prev_max.slot = max(prev_max.slot, max_slot);
} else {
*max_block = Some(SignedBlock {
slot: max_slot,
signing_root: None,
});
}
}
}
let data = validator_data
.into_iter()
.map(|(pubkey, (maybe_block, maybe_att))| InterchangeData {
pubkey,
signed_blocks: maybe_block.into_iter().collect(),
signed_attestations: maybe_att.into_iter().collect(),
})
.collect();
Ok(Self {
metadata: self.metadata.clone(),
data,
})
}
}
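The per-validator consolidation at the heart of `minify` can be sketched with plain standard-library types. This is an illustrative reduction only (plain `u64`s stand in for `Slot`/`Epoch`, a `&str` for the pubkey; `minify_maxima` is a made-up name):

```rust
use std::collections::HashMap;

// For each validator, keep only the maximum block slot and the maximum
// source/target epochs across all entries, discarding signing roots -- the
// same per-validator reduction `minify` performs. Each input tuple is
// (pubkey, block slots, (source, target) attestation epochs).
fn minify_maxima(
    entries: &[(&'static str, Vec<u64>, Vec<(u64, u64)>)],
) -> HashMap<&'static str, (Option<u64>, Option<(u64, u64)>)> {
    let mut out: HashMap<&'static str, (Option<u64>, Option<(u64, u64)>)> = HashMap::new();
    for (pubkey, block_slots, attestations) in entries {
        let (max_block, max_att) = out.entry(pubkey).or_insert((None, None));
        // Maximum block slot, merged with any previous entry for this pubkey.
        if let Some(slot) = block_slots.iter().copied().max() {
            *max_block = Some(max_block.map_or(slot, |prev| prev.max(slot)));
        }
        // Maximum source and target epochs, merged independently.
        let max_source = attestations.iter().map(|&(s, _)| s).max();
        let max_target = attestations.iter().map(|&(_, t)| t).max();
        if let (Some(s), Some(t)) = (max_source, max_target) {
            *max_att = Some(match *max_att {
                Some((ps, pt)) => (ps.max(s), pt.max(t)),
                None => (s, t),
            });
        }
    }
    out
}
```

Note the source and target maxima are taken independently, matching the synthetic attestation `minify` constructs; the real code additionally returns `MaxInconsistent` when one is present without the other.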