Refine and test slashing protection semantics (#1885)

## Issue Addressed

Closes #1873

## Proposed Changes

Fixes the bug in slashing protection import (#1873) by pruning the database upon import.
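
To illustrate the pruning approach: after a validator's interchange records are inserted, everything below the file's minimum block slot (and, analogously, below the minimum attestation source/target) is deleted, so the database's lower bound moves up to match the imported file. Below is a minimal sketch of the block case using a toy in-memory table and a made-up `prune_blocks_below` helper — the real method added in this PR is `prune_signed_blocks`, shown in the diff further down.

```rust
// Illustrative only: a toy table standing in for Lighthouse's slashing
// protection schema, demonstrating prune-on-import for blocks.
use rusqlite::{params, Connection, Result};

// Hypothetical helper; the real method in this PR is `prune_signed_blocks`.
fn prune_blocks_below(conn: &Connection, validator_id: i64, new_min_slot: i64) -> Result<usize> {
    // Drop everything below the imported file's minimum slot, so the database
    // can never sign into a "gap" between the old and the imported history.
    conn.execute(
        "DELETE FROM signed_blocks WHERE validator_id = ?1 AND slot < ?2",
        params![validator_id, new_min_slot],
    )
}

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute_batch(
        "CREATE TABLE signed_blocks (validator_id INTEGER, slot INTEGER);
         INSERT INTO signed_blocks VALUES (1, 40), (1, 50);",
    )?;
    // Suppose the imported file's minimum block slot for validator 1 is 50:
    // the stale slot-40 row is pruned, raising the lower bound to 50.
    let deleted = prune_blocks_below(&conn, 1, 50)?;
    assert_eq!(deleted, 1);
    Ok(())
}
```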

Also expands the test generator to cover this case and a few others which are under discussion here:

https://ethereum-magicians.org/t/eip-3076-validator-client-interchange-format-slashing-protection/4883
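
One of the cases under discussion is slashable data within a single interchange file, e.g. two blocks at the same slot with no signing roots. The new tests treat this as a partial import: the offending validator's record fails while the others still succeed. A rough sketch of that per-record semantics, with simplified stand-in types (not Lighthouse's real `InterchangeImportOutcome` or its checks):

```rust
// Simplified stand-ins for the per-validator import outcome; not Lighthouse's API.
use std::collections::HashSet;

struct Outcome {
    pubkey: String,
    error: Option<String>,
}

// Import each record independently; a failure for one validator does not
// abort the others, it just ends up recorded as a failed outcome.
fn import_records(records: Vec<(String, Vec<u64>)>) -> Vec<Outcome> {
    records
        .into_iter()
        .map(|(pubkey, block_slots)| {
            // Stand-in check: two blocks at the same slot with no signing root
            // are treated as conflicting (the real checks also cover signing
            // roots and surrounding/surrounded attestations).
            let mut seen = HashSet::new();
            let error = block_slots
                .iter()
                .find(|slot| !seen.insert(**slot))
                .map(|slot| format!("conflicting block at slot {}", slot));
            Outcome { pubkey, error }
        })
        .collect()
}

fn main() {
    let outcomes = import_records(vec![
        ("validator_a".into(), vec![10, 20]),
        ("validator_b".into(), vec![10, 10]), // double proposal, no signing roots
    ]);
    let failed = outcomes.iter().filter(|o| o.error.is_some()).count();
    assert_eq!(failed, 1); // partial import: one record rejected, the other kept
}
```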

## Additional Info

Depending on the outcome of the discussion on Eth Magicians, we can either wait for consensus before merging, or merge our preferred solution and patch things later.
Michael Sproul 2020-11-24 07:21:14 +00:00
parent 84b3387d09
commit 20339ade01
11 changed files with 763 additions and 236 deletions

View File

@@ -1,11 +1,12 @@
use clap::{App, Arg, ArgMatches};
use environment::Environment;
use slashing_protection::{
interchange::Interchange, SlashingDatabase, SLASHING_PROTECTION_FILENAME,
interchange::Interchange, InterchangeImportOutcome, SlashingDatabase,
SLASHING_PROTECTION_FILENAME,
};
use std::fs::File;
use std::path::PathBuf;
use types::{BeaconState, EthSpec};
use types::{BeaconState, Epoch, EthSpec, Slot};
pub const CMD: &str = "slashing-protection";
pub const IMPORT_CMD: &str = "import";
@@ -84,17 +85,64 @@ pub fn cli_run<T: EthSpec>(
)
})?;
slashing_protection_database
.import_interchange_info(&interchange, genesis_validators_root)
let outcomes = slashing_protection_database
.import_interchange_info(interchange, genesis_validators_root)
.map_err(|e| {
format!(
"Error during import, no data imported: {:?}\n\
"Error during import: {:?}\n\
IT IS NOT SAFE TO START VALIDATING",
e
)
})?;
eprintln!("Import completed successfully");
let display_slot = |slot: Option<Slot>| {
slot.map_or("none".to_string(), |slot| format!("{}", slot.as_u64()))
};
let display_epoch = |epoch: Option<Epoch>| {
epoch.map_or("?".to_string(), |epoch| format!("{}", epoch.as_u64()))
};
let display_attestation = |source, target| match (source, target) {
(None, None) => "none".to_string(),
(source, target) => format!("{}=>{}", display_epoch(source), display_epoch(target)),
};
let mut num_failed = 0;
for outcome in &outcomes {
match outcome {
InterchangeImportOutcome::Success { pubkey, summary } => {
eprintln!("- {:?} SUCCESS min block: {}, max block: {}, min attestation: {}, max attestation: {}",
pubkey,
display_slot(summary.min_block_slot),
display_slot(summary.max_block_slot),
display_attestation(summary.min_attestation_source, summary.min_attestation_target),
display_attestation(summary.max_attestation_source,
summary.max_attestation_target),
);
}
InterchangeImportOutcome::Failure { pubkey, error } => {
eprintln!("- {:?} ERROR: {:?}", pubkey, error);
num_failed += 1;
}
}
}
if num_failed == 0 {
eprintln!("Import completed successfully.");
eprintln!(
"Please double-check that the minimum and maximum blocks and slots above \
match your expectations."
);
} else {
eprintln!(
"WARNING: history was NOT imported for {} of {} records",
num_failed,
outcomes.len()
);
eprintln!("IT IS NOT SAFE TO START VALIDATING");
eprintln!("Please see https://lighthouse-book.sigmaprime.io/slashing-protection.html#slashable-data-in-import");
return Err("Partial import".to_string());
}
Ok(())
}

View File

@@ -60,9 +60,9 @@ Examples where it is **ineffective** are:
## Import and Export
Lighthouse supports v5 of the slashing protection interchange format described
[here][interchange-spec]. An interchange file is a record of all blocks and attestations
signing by a set of validator keys – basically a portable slashing protection database!
Lighthouse supports the slashing protection interchange format described in [EIP-3076][]. An
interchange file is a record of blocks and attestations signed by a set of validator keys –
basically a portable slashing protection database!
With your validator client stopped, you can import a `.json` interchange file from another client
using this command:
@@ -86,9 +86,10 @@ You can export Lighthouse's database for use with another client with this comma
lighthouse account validator slashing-protection export <lighthouse_interchange.json>
```
The validator client needs to be stopped in order to export.
The validator client needs to be stopped in order to export, to guarantee that the data exported is
up to date.
[interchange-spec]: https://hackmd.io/@sproul/Bk0Y0qdGD
[EIP-3076]: https://eips.ethereum.org/EIPS/eip-3076
## Troubleshooting
@@ -134,6 +135,43 @@ Sep 29 15:15:05.303 CRIT Not signing slashable attestation error: InvalidA
This log is still marked as `CRIT` because in general it should occur only very rarely,
and _could_ indicate a serious error or misconfiguration (see [Avoiding Slashing](#avoiding-slashing)).
### Slashable Data in Import
If you receive a warning when trying to import an [interchange file](#import-and-export) about
the file containing slashable data, then you must carefully consider whether you want to continue.
There are several potential causes for this warning, each of which requires a different reaction. If
you have seen the warning for multiple validator keys, the cause could be different for each of them.
1. Your validator has actually signed slashable data. If this is the case, you should assess
whether your validator has been slashed (or is likely to be slashed). It's up to you
whether you'd like to continue.
2. You have exported data from Lighthouse to another client, and then back to Lighthouse,
_in a way that didn't preserve the signing roots_. A message with no signing roots
is considered slashable with respect to _any_ other message at the same slot/epoch,
so even if it was signed by Lighthouse originally, Lighthouse has no way of knowing this.
If you're sure you haven't run Lighthouse and the other client simultaneously, you
can [drop Lighthouse's DB in favour of the interchange file](#drop-and-re-import).
3. You have imported the same interchange file (which lacks signing roots) twice, e.g. from Teku.
It might be safe to continue as-is, or you could consider a [Drop and
Re-import](#drop-and-re-import).
#### Drop and Re-import
If you'd like to prioritize an interchange file over any existing database stored by Lighthouse,
then you can _move_ (not delete) Lighthouse's database and replace it like so:
```bash
mv $datadir/validators/slashing_protection.sqlite ~/slashing_protection_backup.sqlite
```
```bash
lighthouse account validator slashing-protection import <my_interchange.json>
```
If your interchange file doesn't cover all of your validators, you shouldn't do this. Please reach
out on Discord if you need help.
## Limitation of Liability
The Lighthouse developers do not guarantee the perfect functioning of this software, or accept

View File

@@ -1,4 +1,4 @@
TESTS_TAG := 359085be9da6e5e19644977aa45947bcec5d99de
TESTS_TAG := b8413ca42dc92308019d0d4db52c87e9e125c4e9
GENERATE_DIR := generated-tests
OUTPUT_DIR := interchange-tests
TARBALL := $(OUTPUT_DIR)-$(TESTS_TAG).tar.gz

View File

@@ -1,7 +1,7 @@
use slashing_protection::interchange::{
Interchange, InterchangeData, InterchangeMetadata, SignedAttestation, SignedBlock,
};
use slashing_protection::interchange_test::TestCase;
use slashing_protection::interchange_test::{MultiTestCase, TestCase};
use slashing_protection::test_utils::{pubkey, DEFAULT_GENESIS_VALIDATORS_ROOT};
use slashing_protection::SUPPORTED_INTERCHANGE_FORMAT_VERSION;
use std::fs::{self, File};
@@ -80,65 +80,69 @@ fn main() {
];
let tests = vec![
TestCase::new(
MultiTestCase::single(
"single_validator_import_only",
interchange(vec![(0, vec![22], vec![(0, 2)])]),
TestCase::new(interchange(vec![(0, vec![22], vec![(0, 2)])])),
),
TestCase::new(
MultiTestCase::single(
"single_validator_single_block",
interchange(vec![(0, vec![32], vec![])]),
)
.with_blocks(single_validator_blocks.clone()),
TestCase::new(
TestCase::new(interchange(vec![(0, vec![32], vec![])]))
.with_blocks(single_validator_blocks.clone()),
),
MultiTestCase::single(
"single_validator_single_attestation",
interchange(vec![(0, vec![], vec![(15, 20)])]),
)
.with_attestations(single_validator_attestations.clone()),
TestCase::new(
TestCase::new(interchange(vec![(0, vec![], vec![(15, 20)])]))
.with_attestations(single_validator_attestations.clone()),
),
MultiTestCase::single(
"single_validator_single_block_and_attestation",
interchange(vec![(0, vec![32], vec![(15, 20)])]),
)
.with_blocks(single_validator_blocks)
.with_attestations(single_validator_attestations),
TestCase::new(
TestCase::new(interchange(vec![(0, vec![32], vec![(15, 20)])]))
.with_blocks(single_validator_blocks)
.with_attestations(single_validator_attestations),
),
MultiTestCase::single(
"single_validator_genesis_attestation",
interchange(vec![(0, vec![], vec![(0, 0)])]),
)
.with_attestations(vec![(0, 0, 0, false)]),
TestCase::new(
TestCase::new(interchange(vec![(0, vec![], vec![(0, 0)])]))
.with_attestations(vec![(0, 0, 0, false)]),
),
MultiTestCase::single(
"single_validator_multiple_blocks_and_attestations",
interchange(vec![(
TestCase::new(interchange(vec![(
0,
vec![2, 3, 10, 1200],
vec![(10, 11), (12, 13), (20, 24)],
)]),
)
.with_blocks(vec![
(0, 1, false),
(0, 2, false),
(0, 3, false),
(0, 10, false),
(0, 1200, false),
(0, 4, true),
(0, 256, true),
(0, 1201, true),
])
.with_attestations(vec![
(0, 9, 10, false),
(0, 12, 13, false),
(0, 11, 14, false),
(0, 21, 22, false),
(0, 10, 24, false),
(0, 11, 12, true),
(0, 20, 25, true),
]),
TestCase::new(
"single_validator_single_block_and_attestation_signing_root",
interchange_with_signing_roots(vec![(0, vec![(19, Some(1))], vec![(0, 1, Some(2))])]),
)]))
.with_blocks(vec![
(0, 1, false),
(0, 2, false),
(0, 3, false),
(0, 10, false),
(0, 1200, false),
(0, 4, true),
(0, 256, true),
(0, 1201, true),
])
.with_attestations(vec![
(0, 9, 10, false),
(0, 12, 13, false),
(0, 11, 14, false),
(0, 21, 22, false),
(0, 10, 24, false),
(0, 11, 12, true),
(0, 20, 25, true),
]),
),
TestCase::new(
MultiTestCase::single(
"single_validator_single_block_and_attestation_signing_root",
TestCase::new(interchange_with_signing_roots(vec![(
0,
vec![(19, Some(1))],
vec![(0, 1, Some(2))],
)])),
),
MultiTestCase::single(
"multiple_validators_multiple_blocks_and_attestations",
interchange(vec![
TestCase::new(interchange(vec![
(
0,
vec![10, 15, 20],
@@ -150,37 +154,189 @@ fn main() {
vec![(0, 0), (0, 1), (1, 2), (2, 5), (5, 6)],
),
(2, vec![10, 15, 20], vec![(1, 2), (1, 3), (2, 4)]),
]))
.with_blocks(vec![
(0, 9, false),
(0, 10, false),
(0, 21, true),
(0, 11, true),
(1, 2, false),
(1, 3, false),
(1, 0, false),
(1, 101, true),
(2, 9, false),
(2, 10, false),
(2, 22, true),
])
.with_attestations(vec![
(0, 0, 5, false),
(0, 3, 6, false),
(0, 4, 6, true),
(0, 5, 7, true),
(0, 6, 8, true),
(1, 1, 7, false),
(1, 1, 4, true),
(1, 5, 7, true),
(2, 0, 0, false),
(2, 0, 1, false),
(2, 2, 5, true),
]),
),
MultiTestCase::single(
"multiple_validators_same_slot_blocks",
TestCase::new(interchange_with_signing_roots(vec![
(0, vec![(1, Some(0)), (2, Some(0)), (3, Some(0))], vec![]),
(1, vec![(1, Some(1)), (3, Some(1))], vec![]),
(2, vec![(1, Some(2)), (2, Some(2))], vec![]),
])),
),
MultiTestCase::single(
"wrong_genesis_validators_root",
TestCase::new(interchange(vec![])).should_fail(),
)
.with_blocks(vec![
(0, 9, false),
(0, 10, false),
(0, 21, true),
(0, 11, true),
(1, 2, false),
(1, 3, false),
(1, 0, false),
(1, 101, true),
(2, 9, false),
(2, 10, false),
(2, 22, true),
])
.with_attestations(vec![
(0, 0, 5, false),
(0, 3, 6, false),
(0, 4, 6, true),
(0, 5, 7, true),
(0, 6, 8, true),
(1, 1, 7, false),
(1, 1, 4, true),
(1, 5, 7, true),
(2, 0, 0, false),
(2, 0, 1, false),
(2, 2, 5, true),
]),
TestCase::new("wrong_genesis_validators_root", interchange(vec![]))
.gvr(Hash256::from_low_u64_be(1))
.should_fail(),
.gvr(Hash256::from_low_u64_be(1)),
MultiTestCase::new(
"multiple_interchanges_single_validator_single_message_gap",
vec![
TestCase::new(interchange(vec![(0, vec![40], vec![(2, 30)])])),
TestCase::new(interchange(vec![(0, vec![50], vec![(10, 50)])]))
.with_blocks(vec![
(0, 41, false),
(0, 45, false),
(0, 49, false),
(0, 50, false),
(0, 51, true),
])
.with_attestations(vec![
(0, 3, 31, false),
(0, 9, 49, false),
(0, 10, 51, true),
]),
],
),
MultiTestCase::new(
"multiple_interchanges_single_validator_single_message_out_of_order",
vec![
TestCase::new(interchange(vec![(0, vec![40], vec![])])),
TestCase::new(interchange(vec![(0, vec![20], vec![])]))
.allow_partial_import()
.with_blocks(vec![(0, 20, false)]),
],
),
MultiTestCase::single(
"single_validator_source_greater_than_target",
TestCase::new(interchange(vec![(0, vec![], vec![(8, 7)])])).allow_partial_import(),
),
MultiTestCase::single(
"single_validator_out_of_order_blocks",
TestCase::new(interchange(vec![(0, vec![6, 5], vec![])])).with_blocks(vec![
(0, 5, false),
(0, 6, false),
(0, 7, true),
]),
),
MultiTestCase::single(
"single_validator_out_of_order_attestations",
TestCase::new(interchange(vec![(0, vec![], vec![(4, 5), (3, 4)])])).with_attestations(
vec![
(0, 3, 4, false),
(0, 4, 5, false),
(0, 1, 10, false),
(0, 3, 3, false),
],
),
),
// Ensure that it's not just the minimum bound check preventing blocks at the same slot
// from being signed.
MultiTestCase::single(
"single_validator_two_blocks_no_signing_root",
TestCase::new(interchange(vec![(0, vec![10, 20], vec![])]))
.with_blocks(vec![(0, 20, false)]),
),
MultiTestCase::single(
"single_validator_multiple_block_attempts",
TestCase::new(interchange(vec![(0, vec![15, 16, 17], vec![])]))
.with_signing_root_blocks(vec![
(0, 16, 0, false),
(0, 16, 1, false),
(0, 16, u64::MAX, false),
]),
),
MultiTestCase::single(
"single_validator_resign_block",
TestCase::new(interchange_with_signing_roots(vec![(
0,
vec![(15, Some(151)), (16, Some(161)), (17, Some(171))],
vec![],
)]))
.with_signing_root_blocks(vec![
(0, 15, 151, true),
(0, 16, 161, true),
(0, 17, 171, true),
(0, 15, 152, false),
(0, 15, 0, false),
(0, 16, 151, false),
(0, 17, 151, false),
(0, 18, 151, true),
(0, 14, 171, false),
]),
),
MultiTestCase::single(
"single_validator_resign_attestation",
TestCase::new(interchange_with_signing_roots(vec![(
0,
vec![],
vec![(5, 15, Some(515))],
)]))
.with_signing_root_attestations(vec![
(0, 5, 15, 0, false),
(0, 5, 15, 1, false),
(0, 5, 15, 515, true),
(0, 6, 15, 615, false),
(0, 5, 14, 515, false),
]),
),
MultiTestCase::single(
"single_validator_slashable_blocks",
TestCase::new(interchange_with_signing_roots(vec![(
0,
vec![(10, Some(0)), (10, Some(11))],
vec![],
)]))
.allow_partial_import(),
),
MultiTestCase::single(
"single_validator_slashable_blocks_no_root",
TestCase::new(interchange(vec![(0, vec![10, 10], vec![])])).allow_partial_import(),
),
MultiTestCase::single(
"single_validator_slashable_attestations_double_vote",
TestCase::new(interchange_with_signing_roots(vec![(
0,
vec![],
vec![(2, 3, Some(0)), (2, 3, Some(1))],
)]))
.allow_partial_import(),
),
MultiTestCase::single(
"single_validator_slashable_attestations_surrounds_existing",
TestCase::new(interchange(vec![(0, vec![], vec![(2, 3), (0, 4)])]))
.allow_partial_import(),
),
MultiTestCase::single(
"single_validator_slashable_attestations_surrounded_by_existing",
TestCase::new(interchange(vec![(0, vec![], vec![(0, 4), (2, 3)])]))
.allow_partial_import(),
),
MultiTestCase::single(
"duplicate_pubkey_not_slashable",
TestCase::new(interchange(vec![
(0, vec![10, 11], vec![(0, 2)]),
(0, vec![12, 13], vec![(1, 3)]),
]))
.with_blocks(vec![(0, 10, false), (0, 13, false), (0, 14, true)])
.with_attestations(vec![(0, 0, 2, false), (0, 1, 3, false)]),
),
];
let args = std::env::args().collect::<Vec<_>>();

View File

@@ -1,17 +1,23 @@
use crate::{
interchange::Interchange,
test_utils::{pubkey, DEFAULT_GENESIS_VALIDATORS_ROOT},
SlashingDatabase,
SigningRoot, SlashingDatabase,
};
use serde_derive::{Deserialize, Serialize};
use tempfile::tempdir;
use types::{Epoch, Hash256, PublicKey, Slot};
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct TestCase {
pub struct MultiTestCase {
pub name: String,
pub should_succeed: bool,
pub genesis_validators_root: Hash256,
pub steps: Vec<TestCase>,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct TestCase {
pub should_succeed: bool,
pub allow_partial_import: bool,
pub interchange: Interchange,
pub blocks: Vec<TestBlock>,
pub attestations: Vec<TestAttestation>,
@@ -21,6 +27,7 @@ pub struct TestCase {
pub struct TestBlock {
pub pubkey: PublicKey,
pub slot: Slot,
pub signing_root: Hash256,
pub should_succeed: bool,
}
@@ -29,123 +36,183 @@ pub struct TestAttestation {
pub pubkey: PublicKey,
pub source_epoch: Epoch,
pub target_epoch: Epoch,
pub signing_root: Hash256,
pub should_succeed: bool,
}
impl TestCase {
pub fn new(name: &str, interchange: Interchange) -> Self {
TestCase {
impl MultiTestCase {
pub fn new(name: &str, steps: Vec<TestCase>) -> Self {
MultiTestCase {
name: name.into(),
should_succeed: true,
genesis_validators_root: DEFAULT_GENESIS_VALIDATORS_ROOT,
interchange,
blocks: vec![],
attestations: vec![],
steps,
}
}
pub fn single(name: &str, test_case: TestCase) -> Self {
Self::new(name, vec![test_case])
}
pub fn gvr(mut self, genesis_validators_root: Hash256) -> Self {
self.genesis_validators_root = genesis_validators_root;
self
}
pub fn should_fail(mut self) -> Self {
self.should_succeed = false;
self
}
pub fn with_blocks(mut self, blocks: impl IntoIterator<Item = (usize, u64, bool)>) -> Self {
self.blocks.extend(
blocks
.into_iter()
.map(|(pk, slot, should_succeed)| TestBlock {
pubkey: pubkey(pk),
slot: Slot::new(slot),
should_succeed,
}),
);
self
}
pub fn with_attestations(
mut self,
attestations: impl IntoIterator<Item = (usize, u64, u64, bool)>,
) -> Self {
self.attestations.extend(attestations.into_iter().map(
|(pk, source, target, should_succeed)| TestAttestation {
pubkey: pubkey(pk),
source_epoch: Epoch::new(source),
target_epoch: Epoch::new(target),
should_succeed,
},
));
self
}
pub fn run(&self) {
let dir = tempdir().unwrap();
let slashing_db_file = dir.path().join("slashing_protection.sqlite");
let slashing_db = SlashingDatabase::create(&slashing_db_file).unwrap();
match slashing_db.import_interchange_info(&self.interchange, self.genesis_validators_root) {
Ok(()) if !self.should_succeed => {
panic!(
"test `{}` succeeded on import when it should have failed",
self.name
);
}
Err(e) if self.should_succeed => {
panic!(
"test `{}` failed on import when it should have succeeded, error: {:?}",
self.name, e
);
}
_ => (),
}
for (i, block) in self.blocks.iter().enumerate() {
match slashing_db.check_and_insert_block_signing_root(
&block.pubkey,
block.slot,
Hash256::random(),
for test_case in &self.steps {
match slashing_db.import_interchange_info(
test_case.interchange.clone(),
self.genesis_validators_root,
) {
Ok(safe) if !block.should_succeed => {
panic!(
"block {} from `{}` succeeded when it should have failed: {:?}",
i, self.name, safe
);
Ok(import_outcomes) => {
let failed_records = import_outcomes
.iter()
.filter(|o| o.failed())
.collect::<Vec<_>>();
if !test_case.should_succeed {
panic!(
"test `{}` succeeded on import when it should have failed",
self.name
);
}
if !failed_records.is_empty() && !test_case.allow_partial_import {
panic!(
"test `{}` failed to import some records but should have succeeded: {:#?}",
self.name, failed_records,
);
}
}
Err(e) if block.should_succeed => {
Err(e) if test_case.should_succeed => {
panic!(
"block {} from `{}` failed when it should have succeeded: {:?}",
i, self.name, e
"test `{}` failed on import when it should have succeeded, error: {:?}",
self.name, e
);
}
_ => (),
}
}
for (i, att) in self.attestations.iter().enumerate() {
match slashing_db.check_and_insert_attestation_signing_root(
&att.pubkey,
att.source_epoch,
att.target_epoch,
Hash256::random(),
) {
Ok(safe) if !att.should_succeed => {
panic!(
"attestation {} from `{}` succeeded when it should have failed: {:?}",
i, self.name, safe
);
for (i, block) in test_case.blocks.iter().enumerate() {
match slashing_db.check_and_insert_block_signing_root(
&block.pubkey,
block.slot,
SigningRoot::from(block.signing_root),
) {
Ok(safe) if !block.should_succeed => {
panic!(
"block {} from `{}` succeeded when it should have failed: {:?}",
i, self.name, safe
);
}
Err(e) if block.should_succeed => {
panic!(
"block {} from `{}` failed when it should have succeeded: {:?}",
i, self.name, e
);
}
_ => (),
}
Err(e) if att.should_succeed => {
panic!(
"attestation {} from `{}` failed when it should have succeeded: {:?}",
i, self.name, e
);
}
for (i, att) in test_case.attestations.iter().enumerate() {
match slashing_db.check_and_insert_attestation_signing_root(
&att.pubkey,
att.source_epoch,
att.target_epoch,
SigningRoot::from(att.signing_root),
) {
Ok(safe) if !att.should_succeed => {
panic!(
"attestation {} from `{}` succeeded when it should have failed: {:?}",
i, self.name, safe
);
}
Err(e) if att.should_succeed => {
panic!(
"attestation {} from `{}` failed when it should have succeeded: {:?}",
i, self.name, e
);
}
_ => (),
}
_ => (),
}
}
}
}
impl TestCase {
pub fn new(interchange: Interchange) -> Self {
TestCase {
should_succeed: true,
allow_partial_import: false,
interchange,
blocks: vec![],
attestations: vec![],
}
}
pub fn should_fail(mut self) -> Self {
self.should_succeed = false;
self
}
pub fn allow_partial_import(mut self) -> Self {
self.allow_partial_import = true;
self
}
pub fn with_blocks(self, blocks: impl IntoIterator<Item = (usize, u64, bool)>) -> Self {
self.with_signing_root_blocks(
blocks
.into_iter()
.map(|(index, slot, should_succeed)| (index, slot, 0, should_succeed)),
)
}
pub fn with_signing_root_blocks(
mut self,
blocks: impl IntoIterator<Item = (usize, u64, u64, bool)>,
) -> Self {
self.blocks.extend(
blocks
.into_iter()
.map(|(pk, slot, signing_root, should_succeed)| TestBlock {
pubkey: pubkey(pk),
slot: Slot::new(slot),
signing_root: Hash256::from_low_u64_be(signing_root),
should_succeed,
}),
);
self
}
pub fn with_attestations(
self,
attestations: impl IntoIterator<Item = (usize, u64, u64, bool)>,
) -> Self {
self.with_signing_root_attestations(
attestations
.into_iter()
.map(|(id, source, target, succeed)| (id, source, target, 0, succeed)),
)
}
pub fn with_signing_root_attestations(
mut self,
attestations: impl IntoIterator<Item = (usize, u64, u64, u64, bool)>,
) -> Self {
self.attestations.extend(attestations.into_iter().map(
|(pk, source, target, signing_root, should_succeed)| TestAttestation {
pubkey: pubkey(pk),
source_epoch: Epoch::new(source),
target_epoch: Epoch::new(target),
signing_root: Hash256::from_low_u64_be(signing_root),
should_succeed,
},
));
self
}
}

View File

@@ -11,7 +11,9 @@ pub mod test_utils;
pub use crate::signed_attestation::{InvalidAttestation, SignedAttestation};
pub use crate::signed_block::{InvalidBlock, SignedBlock};
pub use crate::slashing_database::{SlashingDatabase, SUPPORTED_INTERCHANGE_FORMAT_VERSION};
pub use crate::slashing_database::{
InterchangeImportOutcome, SlashingDatabase, SUPPORTED_INTERCHANGE_FORMAT_VERSION,
};
use rusqlite::Error as SQLError;
use std::io::{Error as IOError, ErrorKind};
use std::string::ToString;
@@ -42,6 +44,36 @@ pub enum Safe {
Valid,
}
/// A wrapper for `Hash256` that treats `0x0` as a special null value.
///
/// Notably `SigningRoot(0x0) != SigningRoot(0x0)`. It is `PartialEq` but not `Eq`!
#[derive(Debug, Clone, Copy, Default)]
pub struct SigningRoot(Hash256);
impl PartialEq for SigningRoot {
fn eq(&self, other: &Self) -> bool {
!self.0.is_zero() && self.0 == other.0
}
}
impl From<Hash256> for SigningRoot {
fn from(hash: Hash256) -> Self {
SigningRoot(hash)
}
}
impl Into<Hash256> for SigningRoot {
fn into(self) -> Hash256 {
self.0
}
}
impl SigningRoot {
fn to_hash256(self) -> Hash256 {
self.into()
}
}
/// Safely parse a `Hash256` from the given `column` of an SQLite `row`.
fn hash256_from_row(column: usize, row: &rusqlite::Row) -> rusqlite::Result<Hash256> {
use rusqlite::{types::Type, Error};
@@ -81,3 +113,20 @@ impl ToString for NotSafe {
format!("{:?}", self)
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn signing_root_partial_eq() {
let h0 = SigningRoot(Hash256::zero());
let h1 = SigningRoot(Hash256::repeat_byte(1));
let h2 = SigningRoot(Hash256::repeat_byte(2));
assert_ne!(h0, h0);
assert_ne!(h0, h1);
assert_ne!(h1, h0);
assert_eq!(h1, h1);
assert_ne!(h1, h2);
}
}

View File

@@ -1,4 +1,4 @@
use crate::hash256_from_row;
use crate::{hash256_from_row, SigningRoot};
use types::{AttestationData, Epoch, Hash256, SignedRoot};
/// An attestation that has previously been signed.
@@ -6,7 +6,7 @@ use types::{AttestationData, Epoch, Hash256, SignedRoot};
pub struct SignedAttestation {
pub source_epoch: Epoch,
pub target_epoch: Epoch,
pub signing_root: Hash256,
pub signing_root: SigningRoot,
}
/// Reasons why an attestation may be slashable (or invalid).
@@ -35,7 +35,7 @@ pub enum InvalidAttestation {
}
impl SignedAttestation {
pub fn new(source_epoch: Epoch, target_epoch: Epoch, signing_root: Hash256) -> Self {
pub fn new(source_epoch: Epoch, target_epoch: Epoch, signing_root: SigningRoot) -> Self {
Self {
source_epoch,
target_epoch,
@@ -48,7 +48,7 @@ impl SignedAttestation {
Self {
source_epoch: attestation.source.epoch,
target_epoch: attestation.target.epoch,
signing_root: attestation.signing_root(domain),
signing_root: attestation.signing_root(domain).into(),
}
}
@@ -56,7 +56,7 @@ impl SignedAttestation {
pub fn from_row(row: &rusqlite::Row) -> rusqlite::Result<Self> {
let source = row.get(0)?;
let target = row.get(1)?;
let signing_root = hash256_from_row(2, row)?;
let signing_root = hash256_from_row(2, row)?.into();
Ok(SignedAttestation::new(source, target, signing_root))
}
}

View File

@@ -1,11 +1,11 @@
use crate::hash256_from_row;
use crate::{hash256_from_row, SigningRoot};
use types::{BeaconBlockHeader, Hash256, SignedRoot, Slot};
/// A block that has previously been signed.
#[derive(Clone, Debug, PartialEq)]
pub struct SignedBlock {
pub slot: Slot,
pub signing_root: Hash256,
pub signing_root: SigningRoot,
}
/// Reasons why a block may be slashable.
@@ -16,21 +16,21 @@ pub enum InvalidBlock {
}
impl SignedBlock {
pub fn new(slot: Slot, signing_root: Hash256) -> Self {
pub fn new(slot: Slot, signing_root: SigningRoot) -> Self {
Self { slot, signing_root }
}
pub fn from_header(header: &BeaconBlockHeader, domain: Hash256) -> Self {
Self {
slot: header.slot,
signing_root: header.signing_root(domain),
signing_root: header.signing_root(domain).into(),
}
}
/// Parse an SQLite row of `(slot, signing_root)`.
pub fn from_row(row: &rusqlite::Row) -> rusqlite::Result<Self> {
let slot = row.get(0)?;
let signing_root = hash256_from_row(1, row)?;
let signing_root = hash256_from_row(1, row)?.into();
Ok(SignedBlock { slot, signing_root })
}
}

View File

@@ -4,7 +4,7 @@ use crate::interchange::{
};
use crate::signed_attestation::InvalidAttestation;
use crate::signed_block::InvalidBlock;
use crate::{hash256_from_row, NotSafe, Safe, SignedAttestation, SignedBlock};
use crate::{hash256_from_row, NotSafe, Safe, SignedAttestation, SignedBlock, SigningRoot};
use r2d2_sqlite::SqliteConnectionManager;
use rusqlite::{params, OptionalExtension, Transaction, TransactionBehavior};
use std::fs::{File, OpenOptions};
@@ -231,7 +231,7 @@ impl SlashingDatabase {
txn: &Transaction,
validator_pubkey: &PublicKey,
slot: Slot,
signing_root: Hash256,
signing_root: SigningRoot,
) -> Result<Safe, NotSafe> {
let validator_id = self.get_validator_id_in_txn(txn, validator_pubkey)?;
@@ -281,7 +281,7 @@ impl SlashingDatabase {
validator_pubkey: &PublicKey,
att_source_epoch: Epoch,
att_target_epoch: Epoch,
att_signing_root: Hash256,
att_signing_root: SigningRoot,
) -> Result<Safe, NotSafe> {
// Although it's not required to avoid slashing, we disallow attestations
// which are obviously invalid by virtue of their source epoch exceeding their target.
@@ -410,14 +410,14 @@ impl SlashingDatabase {
txn: &Transaction,
validator_pubkey: &PublicKey,
slot: Slot,
signing_root: Hash256,
signing_root: SigningRoot,
) -> Result<(), NotSafe> {
let validator_id = self.get_validator_id_in_txn(txn, validator_pubkey)?;
txn.execute(
"INSERT INTO signed_blocks (validator_id, slot, signing_root)
VALUES (?1, ?2, ?3)",
params![validator_id, slot, signing_root.as_bytes()],
params![validator_id, slot, signing_root.to_hash256().as_bytes()],
)?;
Ok(())
}
@@ -432,7 +432,7 @@ impl SlashingDatabase {
validator_pubkey: &PublicKey,
att_source_epoch: Epoch,
att_target_epoch: Epoch,
att_signing_root: Hash256,
att_signing_root: SigningRoot,
) -> Result<(), NotSafe> {
let validator_id = self.get_validator_id_in_txn(txn, validator_pubkey)?;
@@ -443,7 +443,7 @@ impl SlashingDatabase {
validator_id,
att_source_epoch,
att_target_epoch,
att_signing_root.as_bytes()
att_signing_root.to_hash256().as_bytes()
],
)?;
Ok(())
@@ -464,7 +464,7 @@ impl SlashingDatabase {
self.check_and_insert_block_signing_root(
validator_pubkey,
block_header.slot,
block_header.signing_root(domain),
block_header.signing_root(domain).into(),
)
}
@@ -473,7 +473,7 @@ impl SlashingDatabase {
&self,
validator_pubkey: &PublicKey,
slot: Slot,
signing_root: Hash256,
signing_root: SigningRoot,
) -> Result<Safe, NotSafe> {
let mut conn = self.conn_pool.get()?;
let txn = conn.transaction_with_behavior(TransactionBehavior::Exclusive)?;
@@ -492,7 +492,7 @@ impl SlashingDatabase {
&self,
validator_pubkey: &PublicKey,
slot: Slot,
signing_root: Hash256,
signing_root: SigningRoot,
txn: &Transaction,
) -> Result<Safe, NotSafe> {
let safe = self.check_block_proposal(&txn, validator_pubkey, slot, signing_root)?;
@@ -515,7 +515,7 @@ impl SlashingDatabase {
attestation: &AttestationData,
domain: Hash256,
) -> Result<Safe, NotSafe> {
let attestation_signing_root = attestation.signing_root(domain);
let attestation_signing_root = attestation.signing_root(domain).into();
self.check_and_insert_attestation_signing_root(
validator_pubkey,
attestation.source.epoch,
@@ -530,7 +530,7 @@ impl SlashingDatabase {
validator_pubkey: &PublicKey,
att_source_epoch: Epoch,
att_target_epoch: Epoch,
att_signing_root: Hash256,
att_signing_root: SigningRoot,
) -> Result<Safe, NotSafe> {
let mut conn = self.conn_pool.get()?;
let txn = conn.transaction_with_behavior(TransactionBehavior::Exclusive)?;
@@ -551,7 +551,7 @@ impl SlashingDatabase {
validator_pubkey: &PublicKey,
att_source_epoch: Epoch,
att_target_epoch: Epoch,
att_signing_root: Hash256,
att_signing_root: SigningRoot,
txn: &Transaction,
) -> Result<Safe, NotSafe> {
let safe = self.check_attestation(
@@ -575,11 +575,14 @@ impl SlashingDatabase {
}
/// Import slashing protection from another client in the interchange format.
///
/// Return a vector of public keys and errors for any validators whose data could not be
/// imported.
pub fn import_interchange_info(
&self,
interchange: &Interchange,
interchange: Interchange,
genesis_validators_root: Hash256,
) -> Result<(), InterchangeError> {
) -> Result<Vec<InterchangeImportOutcome>, InterchangeError> {
let version = interchange.metadata.interchange_format_version;
if version != SUPPORTED_INTERCHANGE_FORMAT_VERSION {
return Err(InterchangeError::UnsupportedVersion(version));
@@ -592,37 +595,87 @@ impl SlashingDatabase {
});
}
// Import atomically, to prevent registering validators with partial information.
let mut conn = self.conn_pool.get()?;
let txn = conn.transaction()?;
for record in &interchange.data {
self.register_validators_in_txn(std::iter::once(&record.pubkey), &txn)?;
let mut import_outcomes = vec![];
// Insert all signed blocks.
for block in &record.signed_blocks {
self.check_and_insert_block_signing_root_txn(
&record.pubkey,
block.slot,
block.signing_root.unwrap_or_else(Hash256::zero),
&txn,
)?;
}
// Insert all signed attestations.
for attestation in &record.signed_attestations {
self.check_and_insert_attestation_signing_root_txn(
&record.pubkey,
attestation.source_epoch,
attestation.target_epoch,
attestation.signing_root.unwrap_or_else(Hash256::zero),
&txn,
)?;
for record in interchange.data {
let pubkey = record.pubkey.clone();
let txn = conn.transaction()?;
match self.import_interchange_record(record, &txn) {
Ok(summary) => {
import_outcomes.push(InterchangeImportOutcome::Success { pubkey, summary });
txn.commit()?;
}
Err(error) => {
import_outcomes.push(InterchangeImportOutcome::Failure { pubkey, error });
}
}
}
txn.commit()?;
Ok(())
Ok(import_outcomes)
}
pub fn import_interchange_record(
&self,
mut record: InterchangeData,
txn: &Transaction,
) -> Result<ValidatorSummary, NotSafe> {
self.register_validators_in_txn(std::iter::once(&record.pubkey), txn)?;
// Insert all signed blocks, sorting them so that the minimum bounds are not
// violated by blocks earlier in the file.
record.signed_blocks.sort_unstable_by_key(|b| b.slot);
for block in &record.signed_blocks {
self.check_and_insert_block_signing_root_txn(
&record.pubkey,
block.slot,
block
.signing_root
.map(SigningRoot::from)
.unwrap_or_default(),
txn,
)?;
}
// Prune blocks less than the min slot from this interchange file.
// This ensures we don't sign anything less than the min slot after successful import,
// which is significant if we have imported two files with a "gap" in between.
if let Some(new_min_slot) = record.signed_blocks.iter().map(|block| block.slot).min() {
self.prune_signed_blocks(&record.pubkey, new_min_slot, txn)?;
}
// Insert all signed attestations.
record
.signed_attestations
.sort_unstable_by_key(|att| (att.source_epoch, att.target_epoch));
for attestation in &record.signed_attestations {
self.check_and_insert_attestation_signing_root_txn(
&record.pubkey,
attestation.source_epoch,
attestation.target_epoch,
attestation
.signing_root
.map(SigningRoot::from)
.unwrap_or_default(),
txn,
)?;
}
// Prune attestations less than the min source and target from this interchange file.
// See the rationale for blocks above.
if let Some((new_min_source, new_min_target)) = record
.signed_attestations
.iter()
.map(|attestation| (attestation.source_epoch, attestation.target_epoch))
.min()
{
self.prune_signed_attestations(&record.pubkey, new_min_source, new_min_target, txn)?;
}
let summary = self.validator_summary(&record.pubkey, txn)?;
Ok(summary)
}
pub fn export_interchange_info(
@@ -641,7 +694,8 @@ impl SlashingDatabase {
txn.prepare(
"SELECT public_key, slot, signing_root
FROM signed_blocks, validators
WHERE signed_blocks.validator_id = validators.id",
WHERE signed_blocks.validator_id = validators.id
ORDER BY slot ASC",
)?
.query_and_then(params![], |row| {
let validator_pubkey: String = row.get(0)?;
@@ -659,7 +713,8 @@ impl SlashingDatabase {
txn.prepare(
"SELECT public_key, source_epoch, target_epoch, signing_root
FROM signed_attestations, validators
WHERE signed_attestations.validator_id = validators.id",
WHERE signed_attestations.validator_id = validators.id
ORDER BY source_epoch ASC, target_epoch ASC",
)?
.query_and_then(params![], |row| {
let validator_pubkey: String = row.get(0)?;
@@ -698,6 +753,50 @@ impl SlashingDatabase {
Ok(Interchange { metadata, data })
}
/// Remove all blocks for `public_key` with slots less than `new_min_slot`.
pub fn prune_signed_blocks(
&self,
public_key: &PublicKey,
new_min_slot: Slot,
txn: &Transaction,
) -> Result<(), NotSafe> {
let validator_id = self.get_validator_id_in_txn(txn, public_key)?;
txn.execute(
"DELETE FROM signed_blocks
WHERE validator_id = ?1 AND slot < ?2",
params![validator_id, new_min_slot],
)?;
Ok(())
}
/// Remove all attestations for `public_key` with
/// `(source, target) < (new_min_source, new_min_target)`.
pub fn prune_signed_attestations(
&self,
public_key: &PublicKey,
new_min_source: Epoch,
new_min_target: Epoch,
txn: &Transaction,
) -> Result<(), NotSafe> {
let validator_id = self.get_validator_id_in_txn(txn, public_key)?;
// Delete attestations with source *and* target less than the minimums.
// Assuming `(new_min_source, new_min_target)` was successfully
// inserted into the database, then any other attestation in the database
// can't have just its source or just its target less than the new minimum.
// I.e. the following holds:
// a.source < new_min_source <--> a.target < new_min_target
txn.execute(
"DELETE FROM signed_attestations
WHERE validator_id = ?1 AND source_epoch < ?2 AND target_epoch < ?3",
params![validator_id, new_min_source, new_min_target],
)?;
Ok(())
}
pub fn num_validator_rows(&self) -> Result<u32, NotSafe> {
let mut conn = self.conn_pool.get()?;
let txn = conn.transaction()?;
@@ -706,6 +805,76 @@ impl SlashingDatabase {
.query_row(params![], |row| row.get(0))?;
Ok(count)
}
/// Get a summary of a validator's slashing protection data for consumption by the user.
pub fn validator_summary(
&self,
public_key: &PublicKey,
txn: &Transaction,
) -> Result<ValidatorSummary, NotSafe> {
let validator_id = self.get_validator_id_in_txn(txn, public_key)?;
let (min_block_slot, max_block_slot) = txn
.prepare(
"SELECT MIN(slot), MAX(slot)
FROM signed_blocks
WHERE validator_id = ?1",
)?
.query_row(params![validator_id], |row| Ok((row.get(0)?, row.get(1)?)))?;
let (
min_attestation_source,
min_attestation_target,
max_attestation_source,
max_attestation_target,
) = txn
.prepare(
"SELECT MIN(source_epoch), MIN(target_epoch), MAX(source_epoch), MAX(target_epoch)
FROM signed_attestations
WHERE validator_id = ?1",
)?
.query_row(params![validator_id], |row| {
Ok((row.get(0)?, row.get(1)?, row.get(2)?, row.get(3)?))
})?;
Ok(ValidatorSummary {
min_block_slot,
max_block_slot,
min_attestation_source,
min_attestation_target,
max_attestation_source,
max_attestation_target,
})
}
}
/// Minimum and maximum slots and epochs signed by a validator.
#[derive(Debug)]
pub struct ValidatorSummary {
pub min_block_slot: Option<Slot>,
pub max_block_slot: Option<Slot>,
pub min_attestation_source: Option<Epoch>,
pub min_attestation_target: Option<Epoch>,
pub max_attestation_source: Option<Epoch>,
pub max_attestation_target: Option<Epoch>,
}
/// The result of importing a single entry from an interchange file.
#[derive(Debug)]
pub enum InterchangeImportOutcome {
Success {
pubkey: PublicKey,
summary: ValidatorSummary,
},
Failure {
pubkey: PublicKey,
error: NotSafe,
},
}
impl InterchangeImportOutcome {
pub fn failed(&self) -> bool {
matches!(self, InterchangeImportOutcome::Failure { .. })
}
}
#[derive(Debug)]

View File

@@ -135,7 +135,7 @@ fn roundtrip_database(dir: &TempDir, db: &SlashingDatabase, is_empty: bool) {
let new_db =
SlashingDatabase::create(&dir.path().join("roundtrip_slashing_protection.sqlite")).unwrap();
new_db
.import_interchange_info(&exported, DEFAULT_GENESIS_VALIDATORS_ROOT)
.import_interchange_info(exported.clone(), DEFAULT_GENESIS_VALIDATORS_ROOT)
.unwrap();
let reexported = new_db
.export_interchange_info(DEFAULT_GENESIS_VALIDATORS_ROOT)

View File

@@ -1,4 +1,4 @@
use slashing_protection::interchange_test::TestCase;
use slashing_protection::interchange_test::MultiTestCase;
use std::fs::File;
use std::path::PathBuf;
@@ -17,7 +17,7 @@ fn generated() {
.map(Result::unwrap)
{
let file = File::open(entry.path()).unwrap();
let test_case: TestCase = serde_json::from_reader(&file).unwrap();
let test_case: MultiTestCase = serde_json::from_reader(&file).unwrap();
test_case.run();
}
}