015ab7d0a7
## Issue Addressed

Closes #2052

## Proposed Changes

- Refactor the attester/proposer duties endpoints in the BN:
  - Performance improvements.
  - Fixes some potential inconsistencies with the dependent root fields.
  - Removes `http_api::beacon_proposer_cache` and just uses the one on the `BeaconChain` instead.
  - Moves the code for the proposer/attester duties endpoints into separate files, for readability.
- Refactor the `DutiesService` in the VC:
  - Required to reduce the delay on broadcasting new blocks.
  - Gets rid of the `ValidatorDuty` shim struct that came about when we adopted the standard API.
  - Separates block/attestation duty tasks so that they don't block each other when one is slow.
- In the VC, use `PublicKeyBytes` to represent validators instead of `PublicKey`. `PublicKey` is a legit crypto object whilst `PublicKeyBytes` is just a byte array; it's much faster to clone/hash `PublicKeyBytes`, and this change has had a significant impact on runtimes (see the first sketch after this description).
  - Unfortunately this has created lots of dust changes.
- In the BN, store `PublicKeyBytes` in the `beacon_proposer_cache` and allow access to them. The HTTP API always sends `PublicKeyBytes` over the wire, and the conversion from `PublicKey` -> `PublicKeyBytes` is non-trivial, especially when queries have 100s/1000s of validators (like Pyrmont).
- Add the `state_processing::state_advance` mod, which dedups a lot of the "apply `n` skip slots to the state" code (see the second sketch after this description).
  - This also fixes a bug with some functions which were failing to include a state root, as per [this comment](072695284f/consensus/state_processing/src/state_advance.rs (L69-L74)). I couldn't find any instance of this bug that resulted in anything more severe than keying a shuffling cache by the wrong block root.
- Swap the VC block service to use `mpsc` from `tokio` instead of `futures`, consistent with the rest of the code base (see the third sketch after this description).

~~This PR *reduces* the size of the codebase 🎉~~ It *used* to reduce the size of the code base before I added more comments.

## Observations on Pyrmont

- Proposer duties times are down from peaks of 450ms to a consistent <1ms.
- Current-epoch attester duties times are down from >1s peaks to a consistent 20-30ms.
- Block production is down from 600ms+ to 100-200ms.

## Additional Info

- ~~Blocked on #2241~~
- ~~Blocked on #2234~~

## TODO

- [x] ~~Refactor this into some smaller PRs?~~ Leaving this as-is for now.
- [x] Address `per_slot_processing` roots.
- [x] Investigate slow next-epoch times. Not getting added to cache on block processing?
- [x] Consider [this](072695284f/beacon_node/store/src/hot_cold_store.rs (L811-L812)) in the scenario of replacing the state roots.

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
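To illustrate the `PublicKeyBytes` point in the list above, here is a minimal sketch of why a byte-array representation is cheaper to clone and hash than a fully deserialised key. The two types below are simplified stand-ins, not Lighthouse's real `PublicKey`/`PublicKeyBytes` definitions; only the cost asymmetry they demonstrate is the point.

```rust
// Simplified stand-ins -- NOT the real Lighthouse types. The real `PublicKeyBytes`
// wraps 48 raw bytes, while the real `PublicKey` wraps a validated BLS point.
#[derive(Clone, PartialEq, Eq, Hash)]
struct PublicKeyBytes([u8; 48]);

struct PublicKey {
    // Stand-in for a deserialised curve point. Building this from bytes requires
    // validity checks, which is the expensive step avoided in hot paths.
    point: Vec<u8>,
}

impl PublicKeyBytes {
    // Bytes -> key is the non-trivial direction: it must verify that the bytes
    // encode a valid key. Cloning or hashing the raw bytes needs no such work.
    fn decompress(&self) -> Result<PublicKey, String> {
        // Hypothetical check standing in for real point decompression.
        if self.0[0] & 0x80 != 0 {
            Ok(PublicKey { point: self.0.to_vec() })
        } else {
            Err("invalid encoding".into())
        }
    }
}

fn main() {
    let pk_bytes = PublicKeyBytes([0xA0; 48]);
    // Cheap: a fixed-size copy, fine to repeat for 100s/1000s of validators.
    let _cheap_clone = pk_bytes.clone();
    // Expensive (in a real BLS library): only do this when a crypto operation
    // actually requires the deserialised key.
    let _full_key = pk_bytes.decompress().expect("valid stand-in encoding");
}
```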
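The `state_advance` item can be illustrated in the same spirit. The sketch below shows the general "apply `n` skip slots" pattern the new module deduplicates; `BeaconState`, `per_slot_processing`, and `apply_skip_slots` here are simplified stand-ins rather than Lighthouse's actual types and signatures. The point it demonstrates is that a state root the caller already knows should be threaded into slot processing instead of being dropped, which is the omission the bug-fix above refers to.

```rust
// Simplified stand-ins -- not Lighthouse's real types or function signatures.
type Hash256 = [u8; 32];

struct BeaconState {
    slot: u64,
    latest_state_root: Hash256,
}

impl BeaconState {
    // Stand-in for tree-hashing the state: expensive in a real client.
    fn compute_state_root(&self) -> Hash256 {
        [self.slot as u8; 32]
    }
}

// Stand-in for per-slot processing: advances the state by one slot, preferring a
// caller-supplied state root over recomputing it from scratch.
fn per_slot_processing(state: &mut BeaconState, state_root: Option<Hash256>) {
    let root = state_root.unwrap_or_else(|| state.compute_state_root());
    state.latest_state_root = root;
    state.slot += 1;
}

// Hypothetical deduplicated helper: apply skip slots until `target_slot` is reached.
// Only the first iteration can reuse a root the caller already knows.
fn apply_skip_slots(state: &mut BeaconState, known_root: Option<Hash256>, target_slot: u64) {
    let mut root = known_root;
    while state.slot < target_slot {
        per_slot_processing(state, root.take());
    }
}

fn main() {
    let mut state = BeaconState { slot: 0, latest_state_root: [0; 32] };
    // The caller knows the root of the slot-0 state, so it is passed in rather
    // than silently omitted.
    let known_root = Some(state.compute_state_root());
    apply_skip_slots(&mut state, known_root, 3);
    assert_eq!(state.slot, 3);
}
```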
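Lastly, a small sketch of the channel swap in the final bullet. The message type and buffer size are made up for illustration; only the use of `tokio::sync::mpsc` (rather than `futures::channel::mpsc`) reflects the change described above.

```rust
use tokio::sync::mpsc;

// Hypothetical notification type; the real VC block service sends richer messages.
#[derive(Debug)]
struct BlockServiceNotification {
    slot: u64,
}

#[tokio::main]
async fn main() {
    // Bounded tokio channel, consistent with the rest of the code base.
    let (tx, mut rx) = mpsc::channel::<BlockServiceNotification>(2);

    // Duties-side task: notify the block service that new slots have started.
    let producer = tokio::spawn(async move {
        for slot in 0..3 {
            // `send` awaits when the buffer is full, providing back-pressure.
            tx.send(BlockServiceNotification { slot }).await.unwrap();
        }
        // Dropping `tx` closes the channel and ends the receiver loop below.
    });

    // Block-service side: react to each notification as it arrives.
    while let Some(notification) = rx.recv().await {
        println!("produce block for slot {}", notification.slot);
    }

    producer.await.unwrap();
}
```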
//! Tests that stress the concurrency safety of the slashing protection DB.

#![cfg(test)]

use crate::attestation_tests::attestation_data_builder;
use crate::block_tests::block;
use crate::test_utils::*;
use crate::*;
use rayon::prelude::*;
use tempfile::tempdir;

#[test]
fn block_same_slot() {
    let dir = tempdir().unwrap();
    let slashing_db_file = dir.path().join("slashing_protection.sqlite");
    let slashing_db = SlashingDatabase::create(&slashing_db_file).unwrap();

    let pk = pubkey(0);

    slashing_db.register_validator(pk).unwrap();

    // A stream of blocks all with the same slot.
    let num_blocks = 10;
    let results = (0..num_blocks)
        .into_par_iter()
        .map(|_| slashing_db.check_and_insert_block_proposal(&pk, &block(1), DEFAULT_DOMAIN))
        .collect::<Vec<_>>();

    let num_successes = results.iter().filter(|res| res.is_ok()).count();
    assert_eq!(num_successes, 1);
}

#[test]
fn attestation_same_target() {
    let dir = tempdir().unwrap();
    let slashing_db_file = dir.path().join("slashing_protection.sqlite");
    let slashing_db = SlashingDatabase::create(&slashing_db_file).unwrap();

    let pk = pubkey(0);

    slashing_db.register_validator(pk).unwrap();

    // A stream of attestations all with the same target.
    let num_attestations = 10;
    let results = (0..num_attestations)
        .into_par_iter()
        .map(|i| {
            slashing_db.check_and_insert_attestation(
                &pk,
                &attestation_data_builder(i, num_attestations),
                DEFAULT_DOMAIN,
            )
        })
        .collect::<Vec<_>>();

    let num_successes = results.iter().filter(|res| res.is_ok()).count();
    assert_eq!(num_successes, 1);
}

#[test]
fn attestation_surround_fest() {
    let dir = tempdir().unwrap();
    let slashing_db_file = dir.path().join("slashing_protection.sqlite");
    let slashing_db = SlashingDatabase::create(&slashing_db_file).unwrap();

    let pk = pubkey(0);

    slashing_db.register_validator(pk).unwrap();

    // A stream of attestations that all surround each other.
    let num_attestations = 10;

    let results = (0..num_attestations)
        .into_par_iter()
        .map(|i| {
            let att = attestation_data_builder(i, 2 * num_attestations - i);
            slashing_db.check_and_insert_attestation(&pk, &att, DEFAULT_DOMAIN)
        })
        .collect::<Vec<_>>();

    let num_successes = results.iter().filter(|res| res.is_ok()).count();
    assert_eq!(num_successes, 1);
}