use crate::{
    doppelganger_service::DoppelgangerService,
    http_metrics::metrics,
    initialized_validators::InitializedValidators,
    signing_method::{Error as SigningError, SignableMessage, SigningContext, SigningMethod},
    Config,
};
use account_utils::{validator_definitions::ValidatorDefinition, ZeroizeString};
use parking_lot::{Mutex, RwLock};
use slashing_protection::{
    interchange::Interchange, InterchangeError, NotSafe, Safe, SlashingDatabase,
};
use slog::{crit, error, info, warn, Logger};
use slot_clock::SlotClock;
use std::iter::FromIterator;
use std::marker::PhantomData;
use std::path::Path;
use std::sync::Arc;
use task_executor::TaskExecutor;
use types::{
    attestation::Error as AttestationError, graffiti::GraffitiString, AbstractExecPayload, Address,
    AggregateAndProof, Attestation, BeaconBlock, BlindedPayload, ChainSpec, ContributionAndProof,
    Domain, Epoch, EthSpec, Fork, Graffiti, Hash256, Keypair, PublicKeyBytes, SelectionProof,
    Signature, SignedAggregateAndProof, SignedBeaconBlock, SignedContributionAndProof, SignedRoot,
    SignedValidatorRegistrationData, Slot, SyncAggregatorSelectionData, SyncCommitteeContribution,
    SyncCommitteeMessage, SyncSelectionProof, SyncSubnetId, ValidatorRegistrationData,
};
use validator_dir::ValidatorDir;

pub use crate::doppelganger_service::DoppelgangerStatus;
use crate::preparation_service::ProposalData;

#[derive(Debug, PartialEq)]
pub enum Error {
    DoppelgangerProtected(PublicKeyBytes),
    UnknownToDoppelgangerService(PublicKeyBytes),
    UnknownPubkey(PublicKeyBytes),
    Slashable(NotSafe),
    SameData,
    GreaterThanCurrentSlot { slot: Slot, current_slot: Slot },
    GreaterThanCurrentEpoch { epoch: Epoch, current_epoch: Epoch },
    UnableToSignAttestation(AttestationError),
    UnableToSign(SigningError),
}

impl From<SigningError> for Error {
    fn from(e: SigningError) -> Self {
        Error::UnableToSign(e)
    }
}
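
// Note: the `From<SigningError>` impl above is what lets the `?` operator convert a failed
// `SigningMethod::get_signature(..)` call into `Error::UnableToSign(..)` in the signing
// methods below (e.g. `randao_reveal`, `sign_block`, `sign_attestation`).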

/// Number of epochs of slashing protection history to keep.
///
/// This acts as a maximum safe-guard against clock drift.
const SLASHING_PROTECTION_HISTORY_EPOCHS: u64 = 512;

/// Currently used as the default gas limit in execution clients.
///
/// https://github.com/ethereum/builder-specs/issues/17
pub const DEFAULT_GAS_LIMIT: u64 = 30_000_000;
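
// This value is the final fallback in `get_gas_limit_defaulting` below, used when neither
// `validator_definitions.yml` nor the process-level `Config::gas_limit` provides a value.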

struct LocalValidator {
    validator_dir: ValidatorDir,
    voting_keypair: Keypair,
}

/// We derive our own `PartialEq` to avoid doing equality checks between secret keys.
///
/// It's nice to avoid secret key comparisons from a security perspective, but it's also a little
/// risky when it comes to `HashMap` integrity (that's why we need `PartialEq`).
///
/// Currently, we obtain keypairs from keystores where we derive the `PublicKey` from a `SecretKey`
/// via a hash function. In order to have two equal `PublicKey` with different `SecretKey` we would
/// need to have either:
///
/// - A serious upstream integrity error.
/// - A hash collision.
///
/// It seems reasonable to make these two assumptions in order to avoid the equality checks.
impl PartialEq for LocalValidator {
    fn eq(&self, other: &Self) -> bool {
        self.validator_dir == other.validator_dir
            && self.voting_keypair.pk == other.voting_keypair.pk
    }
}

pub struct ValidatorStore<T, E: EthSpec> {
    validators: Arc<RwLock<InitializedValidators>>,
    slashing_protection: SlashingDatabase,
    slashing_protection_last_prune: Arc<Mutex<Epoch>>,
    genesis_validators_root: Hash256,
    spec: Arc<ChainSpec>,
    log: Logger,
    doppelganger_service: Option<Arc<DoppelgangerService>>,
    slot_clock: T,
    fee_recipient_process: Option<Address>,
    gas_limit: Option<u64>,
    builder_proposals: bool,
    task_executor: TaskExecutor,
    _phantom: PhantomData<E>,
}

impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
    // All arguments are different types. Making the fields `pub` is undesired. A builder seems
    // unnecessary.
    #[allow(clippy::too_many_arguments)]
    pub fn new(
        validators: InitializedValidators,
        slashing_protection: SlashingDatabase,
        genesis_validators_root: Hash256,
        spec: ChainSpec,
        doppelganger_service: Option<Arc<DoppelgangerService>>,
        slot_clock: T,
        config: &Config,
        task_executor: TaskExecutor,
        log: Logger,
    ) -> Self {
        Self {
            validators: Arc::new(RwLock::new(validators)),
            slashing_protection,
            slashing_protection_last_prune: Arc::new(Mutex::new(Epoch::new(0))),
            genesis_validators_root,
            spec: Arc::new(spec),
            log,
            doppelganger_service,
            slot_clock,
            fee_recipient_process: config.fee_recipient,
            gas_limit: config.gas_limit,
            builder_proposals: config.builder_proposals,
            task_executor,
            _phantom: PhantomData,
        }
    }
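
    // The process-level defaults (`fee_recipient_process`, `gas_limit`, `builder_proposals`)
    // are captured from `Config` here at construction time; per-validator values from
    // `validator_definitions.yml` are layered on top of them by the `*_defaulting` helpers
    // further down.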

    /// Register all local validators in doppelganger protection to try and prevent instances of
    /// duplicate validators operating on the network at the same time.
    ///
    /// This function has no effect if doppelganger protection is disabled.
    pub fn register_all_in_doppelganger_protection_if_enabled(&self) -> Result<(), String> {
        if let Some(doppelganger_service) = &self.doppelganger_service {
            for pubkey in self.validators.read().iter_voting_pubkeys() {
                doppelganger_service.register_new_validator::<E, _>(*pubkey, &self.slot_clock)?
            }
        }

        Ok(())
    }

    /// Returns `true` if doppelganger protection is enabled, or else `false`.
    pub fn doppelganger_protection_enabled(&self) -> bool {
        self.doppelganger_service.is_some()
    }

    pub fn initialized_validators(&self) -> Arc<RwLock<InitializedValidators>> {
        self.validators.clone()
    }

    /// Insert a new validator to `self`, where the validator is represented by an EIP-2335
    /// keystore on the filesystem.
    #[allow(clippy::too_many_arguments)]
    pub async fn add_validator_keystore<P: AsRef<Path>>(
        &self,
        voting_keystore_path: P,
        password: ZeroizeString,
        enable: bool,
        graffiti: Option<GraffitiString>,
        suggested_fee_recipient: Option<Address>,
        gas_limit: Option<u64>,
        builder_proposals: Option<bool>,
    ) -> Result<ValidatorDefinition, String> {
        let mut validator_def = ValidatorDefinition::new_keystore_with_password(
            voting_keystore_path,
            Some(password),
            graffiti.map(Into::into),
            suggested_fee_recipient,
            gas_limit,
            builder_proposals,
        )
        .map_err(|e| format!("failed to create validator definitions: {:?}", e))?;

        validator_def.enabled = enable;

        self.add_validator(validator_def).await
    }
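
    // Hypothetical usage sketch (not from the original file) of how a caller such as an HTTP
    // API handler might import a keystore; the real call site lives elsewhere:
    //
    //     let def = store
    //         .add_validator_keystore(path, password, true, None, None, None, None)
    //         .await?;
    //
    // The returned `ValidatorDefinition` reflects what was persisted to
    // `validator_definitions.yml`.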

    /// Insert a new validator to `self`.
    ///
    /// This function includes:
    ///
    /// - Adding the validator definition to the YAML file, saving it to the filesystem.
    /// - Enabling the validator with the slashing protection database.
    /// - If `enable == true`, starting to perform duties for the validator.
    // FIXME: ignore this clippy lint until the validator store is refactored to use async locks
    #[allow(clippy::await_holding_lock)]
    pub async fn add_validator(
        &self,
        validator_def: ValidatorDefinition,
    ) -> Result<ValidatorDefinition, String> {
        let validator_pubkey = validator_def.voting_public_key.compress();

        self.slashing_protection
            .register_validator(validator_pubkey)
            .map_err(|e| format!("failed to register validator: {:?}", e))?;

        if let Some(doppelganger_service) = &self.doppelganger_service {
            doppelganger_service
                .register_new_validator::<E, _>(validator_pubkey, &self.slot_clock)?;
        }

        self.validators
            .write()
            .add_definition_replace_disabled(validator_def.clone())
            .await
            .map_err(|e| format!("Unable to add definition: {:?}", e))?;

        Ok(validator_def)
    }
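
    // Ordering matters above: the validator is registered with the slashing protection
    // database (and with doppelganger protection, if enabled) *before* its definition is
    // written to disk and enabled, so a validator never becomes active without a
    // slashing-protection record.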

    /// Returns `ProposalData` for the provided `pubkey` if it exists in `InitializedValidators`.
    /// `ProposalData` fields include defaulting logic described in `get_fee_recipient_defaulting`,
    /// `get_gas_limit_defaulting`, and `get_builder_proposals_defaulting`.
    pub fn proposal_data(&self, pubkey: &PublicKeyBytes) -> Option<ProposalData> {
        self.validators
            .read()
            .validator(pubkey)
            .map(|validator| ProposalData {
                validator_index: validator.get_index(),
                fee_recipient: self
                    .get_fee_recipient_defaulting(validator.get_suggested_fee_recipient()),
                gas_limit: self.get_gas_limit_defaulting(validator.get_gas_limit()),
                builder_proposals: self
                    .get_builder_proposals_defaulting(validator.get_builder_proposals()),
            })
    }

    /// Attempts to resolve the pubkey to a validator index.
    ///
    /// It may return `None` if the `pubkey` is:
    ///
    /// - Unknown.
    /// - Known, but with an unknown index.
    pub fn validator_index(&self, pubkey: &PublicKeyBytes) -> Option<u64> {
        self.validators.read().get_index(pubkey)
    }

    /// Returns all voting pubkeys for all enabled validators.
    ///
    /// The `filter_func` allows for filtering pubkeys based upon their `DoppelgangerStatus`. There
    /// are two primary functions used here:
    ///
    /// - `DoppelgangerStatus::only_safe`: only returns pubkeys which have passed doppelganger
    ///   protection and are safe-enough to sign messages.
    /// - `DoppelgangerStatus::ignored`: returns all the pubkeys from `only_safe` *plus* those still
    ///   undergoing protection. This is useful for collecting duties or other non-signing tasks.
    #[allow(clippy::needless_collect)] // Collect is required to avoid holding a lock.
    pub fn voting_pubkeys<I, F>(&self, filter_func: F) -> I
    where
        I: FromIterator<PublicKeyBytes>,
        F: Fn(DoppelgangerStatus) -> Option<PublicKeyBytes>,
    {
        // Collect all the pubkeys first to avoid interleaving locks on `self.validators` and
        // `self.doppelganger_service()`.
        let pubkeys = self
            .validators
            .read()
            .iter_voting_pubkeys()
            .cloned()
            .collect::<Vec<_>>();

        pubkeys
            .into_iter()
            .map(|pubkey| {
                self.doppelganger_service
                    .as_ref()
                    .map(|doppelganger_service| doppelganger_service.validator_status(pubkey))
                    // Allow signing on all pubkeys if doppelganger protection is disabled.
                    .unwrap_or_else(|| DoppelgangerStatus::SigningEnabled(pubkey))
            })
            .filter_map(filter_func)
            .collect()
    }
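
    // Illustrative sketch (assumed call shape, not from the original file): callers typically
    // select the filter by passing one of the `DoppelgangerStatus` helpers named in the doc
    // comment above, e.g.
    //
    //     let safe: Vec<PublicKeyBytes> = store.voting_pubkeys(DoppelgangerStatus::only_safe);
    //     let all: Vec<PublicKeyBytes> = store.voting_pubkeys(DoppelgangerStatus::ignored);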

    /// Returns doppelganger statuses for all enabled validators.
    #[allow(clippy::needless_collect)] // Collect is required to avoid holding a lock.
    pub fn doppelganger_statuses(&self) -> Vec<DoppelgangerStatus> {
        // Collect all the pubkeys first to avoid interleaving locks on `self.validators` and
        // `self.doppelganger_service`.
        let pubkeys = self
            .validators
            .read()
            .iter_voting_pubkeys()
            .cloned()
            .collect::<Vec<_>>();

        pubkeys
            .into_iter()
            .map(|pubkey| {
                self.doppelganger_service
                    .as_ref()
                    .map(|doppelganger_service| doppelganger_service.validator_status(pubkey))
                    // Allow signing on all pubkeys if doppelganger protection is disabled.
                    .unwrap_or_else(|| DoppelgangerStatus::SigningEnabled(pubkey))
            })
            .collect()
    }

    /// Check if the `validator_pubkey` is permitted by the doppelganger protection to sign
    /// messages.
    pub fn doppelganger_protection_allows_signing(&self, validator_pubkey: PublicKeyBytes) -> bool {
        self.doppelganger_service
            .as_ref()
            // If there's no doppelganger service then we assume it is purposefully disabled and
            // declare that all keys are safe with regard to it.
            .map_or(true, |doppelganger_service| {
                doppelganger_service
                    .validator_status(validator_pubkey)
                    .only_safe()
                    .is_some()
            })
    }

    pub fn num_voting_validators(&self) -> usize {
        self.validators.read().num_enabled()
    }

    fn fork(&self, epoch: Epoch) -> Fork {
        self.spec.fork_at_epoch(epoch)
    }

    /// Returns a `SigningMethod` for `validator_pubkey` *only if* that validator is considered safe
    /// by doppelganger protection.
    fn doppelganger_checked_signing_method(
        &self,
        validator_pubkey: PublicKeyBytes,
    ) -> Result<Arc<SigningMethod>, Error> {
        if self.doppelganger_protection_allows_signing(validator_pubkey) {
            self.validators
                .read()
                .signing_method(&validator_pubkey)
                .ok_or(Error::UnknownPubkey(validator_pubkey))
        } else {
            Err(Error::DoppelgangerProtected(validator_pubkey))
        }
    }

    /// Returns a `SigningMethod` for `validator_pubkey` regardless of that validator's
    /// doppelganger protection status.
    ///
    /// ## Warning
    ///
    /// This method should only be used for signing non-slashable messages.
    fn doppelganger_bypassed_signing_method(
        &self,
        validator_pubkey: PublicKeyBytes,
    ) -> Result<Arc<SigningMethod>, Error> {
        self.validators
            .read()
            .signing_method(&validator_pubkey)
            .ok_or(Error::UnknownPubkey(validator_pubkey))
    }

    fn signing_context(&self, domain: Domain, signing_epoch: Epoch) -> SigningContext {
        SigningContext {
            domain,
            epoch: signing_epoch,
            fork: self.fork(signing_epoch),
            genesis_validators_root: self.genesis_validators_root,
        }
    }
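
    // `SigningContext` bundles everything needed to compute the signing domain for a message:
    // the `Domain` variant, the signing epoch, the fork at that epoch and the genesis
    // validators root. The signing methods below build one of these per message before calling
    // `SigningMethod::get_signature`.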

    pub async fn randao_reveal(
        &self,
        validator_pubkey: PublicKeyBytes,
        signing_epoch: Epoch,
    ) -> Result<Signature, Error> {
        let signing_method = self.doppelganger_checked_signing_method(validator_pubkey)?;
        let signing_context = self.signing_context(Domain::Randao, signing_epoch);

        let signature = signing_method
            .get_signature::<E, BlindedPayload<E>>(
                SignableMessage::RandaoReveal(signing_epoch),
                signing_context,
                &self.spec,
                &self.task_executor,
            )
            .await?;

        Ok(signature)
    }

    pub fn graffiti(&self, validator_pubkey: &PublicKeyBytes) -> Option<Graffiti> {
        self.validators.read().graffiti(validator_pubkey)
    }

    /// Returns the fee recipient for the given public key. The priority order for fetching
    /// the fee recipient is:
    ///
    /// 1. validator_definitions.yml
    /// 2. process level fee recipient
    pub fn get_fee_recipient(&self, validator_pubkey: &PublicKeyBytes) -> Option<Address> {
        // If there is a `suggested_fee_recipient` in the validator definitions yaml
        // file, use that value.
        self.get_fee_recipient_defaulting(self.suggested_fee_recipient(validator_pubkey))
    }

    pub fn get_fee_recipient_defaulting(&self, fee_recipient: Option<Address>) -> Option<Address> {
        // If there's nothing in the file, try the process-level default value.
        fee_recipient.or(self.fee_recipient_process)
    }
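
    // Example of the precedence (descriptive only): if `validator_definitions.yml` sets a
    // `suggested_fee_recipient` for the key, that address wins; otherwise the process-wide
    // `Config::fee_recipient` (if any) is used; if neither is set, `None` is returned and the
    // caller must handle the missing fee recipient.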

    /// Returns the suggested_fee_recipient from `validator_definitions.yml` if any.
    /// This has been pulled into a private function so the read lock is dropped easily.
    fn suggested_fee_recipient(&self, validator_pubkey: &PublicKeyBytes) -> Option<Address> {
        self.validators
            .read()
            .suggested_fee_recipient(validator_pubkey)
    }

    /// Returns the gas limit for the given public key. The priority order for fetching
    /// the gas limit is:
    ///
    /// 1. validator_definitions.yml
    /// 2. process level gas limit
    /// 3. `DEFAULT_GAS_LIMIT`
    pub fn get_gas_limit(&self, validator_pubkey: &PublicKeyBytes) -> u64 {
        self.get_gas_limit_defaulting(self.validators.read().gas_limit(validator_pubkey))
    }

    fn get_gas_limit_defaulting(&self, gas_limit: Option<u64>) -> u64 {
        // If there is a `gas_limit` in the validator definitions yaml
        // file, use that value.
        gas_limit
            // If there's nothing in the file, try the process-level default value.
            .or(self.gas_limit)
            // If there's no process-level default, use the `DEFAULT_GAS_LIMIT`.
            .unwrap_or(DEFAULT_GAS_LIMIT)
    }
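
    // Worked example (descriptive only): with `gas_limit: 36_000_000` set for a validator in
    // `validator_definitions.yml`, the result is 36_000_000 regardless of any process-level
    // setting; with no per-validator value and no process-level value, the result falls back
    // to `DEFAULT_GAS_LIMIT` (30_000_000).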

    /// Returns a `bool` for the given public key that denotes whether this validator should use the
    /// builder API. The priority order for fetching this value is:
    ///
    /// 1. validator_definitions.yml
    /// 2. process level flag
    pub fn get_builder_proposals(&self, validator_pubkey: &PublicKeyBytes) -> bool {
        // If there is a `builder_proposals` value in the validator definitions yaml
        // file, use that value.
        self.get_builder_proposals_defaulting(
            self.validators.read().builder_proposals(validator_pubkey),
        )
    }

    fn get_builder_proposals_defaulting(&self, builder_proposals: Option<bool>) -> bool {
        builder_proposals
            // If there's nothing in the file, try the process-level default value.
            .unwrap_or(self.builder_proposals)
    }

    pub async fn sign_block<Payload: AbstractExecPayload<E>>(
        &self,
        validator_pubkey: PublicKeyBytes,
        block: BeaconBlock<E, Payload>,
        current_slot: Slot,
    ) -> Result<SignedBeaconBlock<E, Payload>, Error> {
        // Make sure the block slot is not higher than the current slot to avoid potential attacks.
        if block.slot() > current_slot {
            warn!(
                self.log,
                "Not signing block with slot greater than current slot";
                "block_slot" => block.slot().as_u64(),
                "current_slot" => current_slot.as_u64()
            );
            return Err(Error::GreaterThanCurrentSlot {
                slot: block.slot(),
                current_slot,
            });
        }

        let signing_epoch = block.epoch();
        let signing_context = self.signing_context(Domain::BeaconProposer, signing_epoch);
        let domain_hash = signing_context.domain_hash(&self.spec);

        // Check for slashing conditions.
        let slashing_status = self.slashing_protection.check_and_insert_block_proposal(
            &validator_pubkey,
            &block.block_header(),
            domain_hash,
        );

        match slashing_status {
            // We can safely sign this block without slashing.
            Ok(Safe::Valid) => {
                metrics::inc_counter_vec(&metrics::SIGNED_BLOCKS_TOTAL, &[metrics::SUCCESS]);

                let signing_method = self.doppelganger_checked_signing_method(validator_pubkey)?;
                let signature = signing_method
                    .get_signature::<E, Payload>(
                        SignableMessage::BeaconBlock(&block),
                        signing_context,
                        &self.spec,
                        &self.task_executor,
                    )
                    .await?;
                Ok(SignedBeaconBlock::from_block(block, signature))
            }
            Ok(Safe::SameData) => {
                warn!(
                    self.log,
                    "Skipping signing of previously signed block";
                );
                metrics::inc_counter_vec(&metrics::SIGNED_BLOCKS_TOTAL, &[metrics::SAME_DATA]);
                Err(Error::SameData)
            }
            Err(NotSafe::UnregisteredValidator(pk)) => {
                warn!(
                    self.log,
                    "Not signing block for unregistered validator";
                    "msg" => "Carefully consider running with --init-slashing-protection (see --help)",
                    "public_key" => format!("{:?}", pk)
                );
                metrics::inc_counter_vec(&metrics::SIGNED_BLOCKS_TOTAL, &[metrics::UNREGISTERED]);
                Err(Error::Slashable(NotSafe::UnregisteredValidator(pk)))
            }
            Err(e) => {
                crit!(
                    self.log,
                    "Not signing slashable block";
                    "error" => format!("{:?}", e)
                );
                metrics::inc_counter_vec(&metrics::SIGNED_BLOCKS_TOTAL, &[metrics::SLASHABLE]);
                Err(Error::Slashable(e))
            }
        }
    }
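
    // Summary of the `sign_block` flow above: reject blocks from the future, record the
    // proposal in the slashing protection DB via `check_and_insert_block_proposal`, and only
    // produce a signature when that check returns `Safe::Valid`; the `SameData`, unregistered
    // and slashable outcomes all return an `Error` and increment the corresponding
    // `SIGNED_BLOCKS_TOTAL` metric. `sign_attestation` below follows the same pattern for
    // attestations.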
    /// Sign an attestation for `validator_pubkey`, adding the signature to `attestation` at
    /// `validator_committee_position`.
    ///
    /// Refuses to sign if the target epoch is in the future or if the slashing protection
    /// database reports the message as slashable.
    pub async fn sign_attestation(
        &self,
        validator_pubkey: PublicKeyBytes,
        validator_committee_position: usize,
        attestation: &mut Attestation<E>,
        current_epoch: Epoch,
    ) -> Result<(), Error> {
        // Make sure the target epoch is not higher than the current epoch to avoid potential attacks.
        if attestation.data.target.epoch > current_epoch {
            return Err(Error::GreaterThanCurrentEpoch {
                epoch: attestation.data.target.epoch,
                current_epoch,
            });
        }

        // Checking for slashing conditions.
        let signing_epoch = attestation.data.target.epoch;
        let signing_context = self.signing_context(Domain::BeaconAttester, signing_epoch);
        let domain_hash = signing_context.domain_hash(&self.spec);
        let slashing_status = self.slashing_protection.check_and_insert_attestation(
            &validator_pubkey,
            &attestation.data,
            domain_hash,
        );

        match slashing_status {
            // We can safely sign this attestation.
            Ok(Safe::Valid) => {
                let signing_method = self.doppelganger_checked_signing_method(validator_pubkey)?;
                let signature = signing_method
                    .get_signature::<E, BlindedPayload<E>>(
                        SignableMessage::AttestationData(&attestation.data),
                        signing_context,
                        &self.spec,
                        &self.task_executor,
                    )
                    .await?;
                attestation
                    .add_signature(&signature, validator_committee_position)
                    .map_err(Error::UnableToSignAttestation)?;

                metrics::inc_counter_vec(&metrics::SIGNED_ATTESTATIONS_TOTAL, &[metrics::SUCCESS]);

                Ok(())
            }
            Ok(Safe::SameData) => {
                warn!(
                    self.log,
                    "Skipping signing of previously signed attestation"
                );
                metrics::inc_counter_vec(
                    &metrics::SIGNED_ATTESTATIONS_TOTAL,
                    &[metrics::SAME_DATA],
                );
                Err(Error::SameData)
            }
            Err(NotSafe::UnregisteredValidator(pk)) => {
                warn!(
                    self.log,
                    "Not signing attestation for unregistered validator";
                    "msg" => "Carefully consider running with --init-slashing-protection (see --help)",
                    "public_key" => format!("{:?}", pk)
                );
                metrics::inc_counter_vec(
                    &metrics::SIGNED_ATTESTATIONS_TOTAL,
                    &[metrics::UNREGISTERED],
                );
                Err(Error::Slashable(NotSafe::UnregisteredValidator(pk)))
            }
            Err(e) => {
                crit!(
                    self.log,
                    "Not signing slashable attestation";
                    "attestation" => format!("{:?}", attestation.data),
                    "error" => format!("{:?}", e)
                );
                metrics::inc_counter_vec(
                    &metrics::SIGNED_ATTESTATIONS_TOTAL,
                    &[metrics::SLASHABLE],
                );
                Err(Error::Slashable(e))
            }
        }
    }

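    // The target-epoch guard above reduces to a plain integer comparison. A standalone
    // sketch (illustrative only, using bare `u64` epochs rather than the `Epoch` type):
    //
    //     fn refuses_future_target(target_epoch: u64, current_epoch: u64) -> bool {
    //         // Refuse to sign if the attestation targets an epoch beyond the current one.
    //         target_epoch > current_epoch
    //     }
    //
    //     assert!(refuses_future_target(101, 100));
    //     assert!(!refuses_future_target(100, 100));
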
    /// Sign a validator registration for the external builder network.
    ///
    /// Signing uses the builder domain and bypasses doppelganger protection, since
    /// registrations are not slashable messages.
    pub async fn sign_validator_registration_data(
        &self,
        validator_registration_data: ValidatorRegistrationData,
    ) -> Result<SignedValidatorRegistrationData, Error> {
        let domain_hash = self.spec.get_builder_domain();
        let signing_root = validator_registration_data.signing_root(domain_hash);

        let signing_method =
            self.doppelganger_bypassed_signing_method(validator_registration_data.pubkey)?;
        let signature = signing_method
            .get_signature_from_root::<E, BlindedPayload<E>>(
                SignableMessage::ValidatorRegistration(&validator_registration_data),
                signing_root,
                &self.task_executor,
                None,
            )
            .await?;

        metrics::inc_counter_vec(
            &metrics::SIGNED_VALIDATOR_REGISTRATIONS_TOTAL,
            &[metrics::SUCCESS],
        );

        Ok(SignedValidatorRegistrationData {
            message: validator_registration_data,
            signature,
        })
    }

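    // A hedged usage sketch: the registration message carries the fields defined by the
    // builder API (fee recipient, gas limit, timestamp and pubkey). The exact field names
    // and values below are assumptions based on that spec, not taken from this file:
    //
    //     let registration = ValidatorRegistrationData {
    //         fee_recipient,
    //         gas_limit: 30_000_000,
    //         timestamp,
    //         pubkey,
    //     };
    //     let signed = validator_store
    //         .sign_validator_registration_data(registration)
    //         .await?;
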
    /// Signs an `AggregateAndProof` for a given validator.
    ///
    /// The resulting `SignedAggregateAndProof` is sent on the aggregation channel and cannot be
    /// modified by actors other than the signing validator.
    pub async fn produce_signed_aggregate_and_proof(
        &self,
        validator_pubkey: PublicKeyBytes,
        aggregator_index: u64,
        aggregate: Attestation<E>,
        selection_proof: SelectionProof,
    ) -> Result<SignedAggregateAndProof<E>, Error> {
        let signing_epoch = aggregate.data.target.epoch;
        let signing_context = self.signing_context(Domain::AggregateAndProof, signing_epoch);

        let message = AggregateAndProof {
            aggregator_index,
            aggregate,
            selection_proof: selection_proof.into(),
        };

        let signing_method = self.doppelganger_checked_signing_method(validator_pubkey)?;
        let signature = signing_method
            .get_signature::<E, BlindedPayload<E>>(
                SignableMessage::SignedAggregateAndProof(&message),
                signing_context,
                &self.spec,
                &self.task_executor,
            )
            .await?;

        metrics::inc_counter_vec(&metrics::SIGNED_AGGREGATES_TOTAL, &[metrics::SUCCESS]);

        Ok(SignedAggregateAndProof { message, signature })
    }

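    // A hedged sketch of the expected call order (names other than the two methods shown
    // are assumptions): the selection proof is produced first, used to decide whether this
    // validator aggregates for the slot, and only then wrapped into the signed aggregate:
    //
    //     let proof = validator_store.produce_selection_proof(pubkey, slot).await?;
    //     if is_aggregator {
    //         let signed = validator_store
    //             .produce_signed_aggregate_and_proof(pubkey, aggregator_index, aggregate, proof)
    //             .await?;
    //         // `signed.message.selection_proof` proves the right to aggregate for this slot.
    //     }
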
    /// Produces a `SelectionProof` for the `slot`, signed with the secret key corresponding
    /// to `validator_pubkey`.
    pub async fn produce_selection_proof(
        &self,
        validator_pubkey: PublicKeyBytes,
        slot: Slot,
    ) -> Result<SelectionProof, Error> {
        let signing_epoch = slot.epoch(E::slots_per_epoch());
        let signing_context = self.signing_context(Domain::SelectionProof, signing_epoch);

        // Bypass the `with_validator_signing_method` function.
        //
        // This is because we don't care about doppelganger protection when it comes to selection
        // proofs. They are not slashable and we need them to subscribe to subnets on the BN.
        //
        // As long as we disallow `SignedAggregateAndProof` then these selection proofs will never
        // be published on the network.
        let signing_method = self.doppelganger_bypassed_signing_method(validator_pubkey)?;

        let signature = signing_method
            .get_signature::<E, BlindedPayload<E>>(
                SignableMessage::SelectionProof(slot),
                signing_context,
                &self.spec,
                &self.task_executor,
            )
            .await
            .map_err(Error::UnableToSign)?;

        metrics::inc_counter_vec(&metrics::SIGNED_SELECTION_PROOFS_TOTAL, &[metrics::SUCCESS]);

        Ok(signature.into())
    }

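    // For context, the selection proof is what the consensus spec's `is_aggregator` check
    // hashes: take the first 8 bytes of `hash(proof)` as a `u64` and test it modulo
    // `max(1, committee_len / TARGET_AGGREGATORS_PER_COMMITTEE)`. A standalone integer
    // sketch, assuming the caller has already reduced the hash to a `u64` prefix:
    //
    //     fn is_aggregator(proof_hash_prefix: u64, committee_len: u64) -> bool {
    //         let modulo = std::cmp::max(1, committee_len / 16);
    //         proof_hash_prefix % modulo == 0
    //     }
    //
    // This is also why the proof is produced even under doppelganger protection: without it
    // the validator cannot subscribe to subnets or discover its aggregation duties.
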
    /// Produce a `SyncSelectionProof` for `slot` signed by the secret key of `validator_pubkey`.
    pub async fn produce_sync_selection_proof(
        &self,
        validator_pubkey: &PublicKeyBytes,
        slot: Slot,
        subnet_id: SyncSubnetId,
    ) -> Result<SyncSelectionProof, Error> {
        let signing_epoch = slot.epoch(E::slots_per_epoch());
        let signing_context =
            self.signing_context(Domain::SyncCommitteeSelectionProof, signing_epoch);

        // Bypass `with_validator_signing_method`: sync committee messages are not slashable.
        let signing_method = self.doppelganger_bypassed_signing_method(*validator_pubkey)?;

        metrics::inc_counter_vec(
            &metrics::SIGNED_SYNC_SELECTION_PROOFS_TOTAL,
            &[metrics::SUCCESS],
        );

        let message = SyncAggregatorSelectionData {
            slot,
            subcommittee_index: subnet_id.into(),
        };

        let signature = signing_method
            .get_signature::<E, BlindedPayload<E>>(
                SignableMessage::SyncSelectionProof(&message),
                signing_context,
                &self.spec,
                &self.task_executor,
            )
            .await
            .map_err(Error::UnableToSign)?;

        Ok(signature.into())
    }

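    // A hedged caller sketch (surrounding names are assumptions). The proof is over
    // `SyncAggregatorSelectionData { slot, subcommittee_index }`, so a distinct proof is
    // needed for each sync subnet the validator serves in:
    //
    //     for subnet_id in assigned_sync_subnets {
    //         let proof = validator_store
    //             .produce_sync_selection_proof(&pubkey, slot, subnet_id)
    //             .await?;
    //         // The proof is then hashed to decide whether to aggregate on this subcommittee.
    //     }
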
    /// Produce a `SyncCommitteeMessage` attesting to `beacon_block_root` at `slot`, signed by
    /// the validator with `validator_pubkey`.
    pub async fn produce_sync_committee_signature(
        &self,
        slot: Slot,
        beacon_block_root: Hash256,
        validator_index: u64,
        validator_pubkey: &PublicKeyBytes,
    ) -> Result<SyncCommitteeMessage, Error> {
        let signing_epoch = slot.epoch(E::slots_per_epoch());
        let signing_context = self.signing_context(Domain::SyncCommittee, signing_epoch);

        // Bypass `with_validator_signing_method`: sync committee messages are not slashable.
        let signing_method = self.doppelganger_bypassed_signing_method(*validator_pubkey)?;

        let signature = signing_method
            .get_signature::<E, BlindedPayload<E>>(
                SignableMessage::SyncCommitteeSignature {
                    beacon_block_root,
                    slot,
                },
                signing_context,
                &self.spec,
                &self.task_executor,
            )
            .await
            .map_err(Error::UnableToSign)?;

        metrics::inc_counter_vec(
            &metrics::SIGNED_SYNC_COMMITTEE_MESSAGES_TOTAL,
            &[metrics::SUCCESS],
        );

        Ok(SyncCommitteeMessage {
            slot,
            beacon_block_root,
            validator_index,
            signature,
        })
    }

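    // A hedged usage sketch (names outside this method are assumptions): every sync
    // committee member signs the beacon block root it considers the head at `slot`, so a
    // caller typically fetches the head root once and fans out per validator:
    //
    //     let head_root = head_root_from_beacon_node; // assumed to be fetched elsewhere
    //     let message = validator_store
    //         .produce_sync_committee_signature(slot, head_root, validator_index, &pubkey)
    //         .await?;
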
    /// Sign a `ContributionAndProof` for a sync committee aggregator.
    pub async fn produce_signed_contribution_and_proof(
        &self,
        aggregator_index: u64,
        aggregator_pubkey: PublicKeyBytes,
        contribution: SyncCommitteeContribution<E>,
        selection_proof: SyncSelectionProof,
    ) -> Result<SignedContributionAndProof<E>, Error> {
        let signing_epoch = contribution.slot.epoch(E::slots_per_epoch());
        let signing_context = self.signing_context(Domain::ContributionAndProof, signing_epoch);

        // Bypass `with_validator_signing_method`: sync committee messages are not slashable.
        let signing_method = self.doppelganger_bypassed_signing_method(aggregator_pubkey)?;

        let message = ContributionAndProof {
            aggregator_index,
            contribution,
            selection_proof: selection_proof.into(),
        };

        let signature = signing_method
            .get_signature::<E, BlindedPayload<E>>(
                SignableMessage::SignedContributionAndProof(&message),
                signing_context,
                &self.spec,
                &self.task_executor,
            )
            .await
            .map_err(Error::UnableToSign)?;

        metrics::inc_counter_vec(
            &metrics::SIGNED_SYNC_COMMITTEE_CONTRIBUTIONS_TOTAL,
            &[metrics::SUCCESS],
        );

        Ok(SignedContributionAndProof { message, signature })
    }

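    // A hedged sketch of how the pieces fit together (surrounding names are assumptions):
    // the aggregator pairs the `SyncCommitteeContribution` it has collected for its
    // subcommittee with the selection proof produced earlier for the same slot and subnet:
    //
    //     let signed = validator_store
    //         .produce_signed_contribution_and_proof(
    //             aggregator_index,
    //             aggregator_pubkey,
    //             contribution,
    //             sync_selection_proof,
    //         )
    //         .await?;
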
    /// Import slashing protection data in the interchange format, attributing it to this
    /// node's `genesis_validators_root`.
    pub fn import_slashing_protection(
        &self,
        interchange: Interchange,
    ) -> Result<(), InterchangeError> {
        self.slashing_protection
            .import_interchange_info(interchange, self.genesis_validators_root)?;
        Ok(())
    }

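    // The `Interchange` type is the EIP-3076 slashing protection interchange format. A
    // minimal example of the JSON it is deserialized from (all values are placeholders):
    //
    //     {
    //       "metadata": {
    //         "interchange_format_version": "5",
    //         "genesis_validators_root": "0x…"
    //       },
    //       "data": [
    //         {
    //           "pubkey": "0x…",
    //           "signed_blocks": [{ "slot": "81952", "signing_root": "0x…" }],
    //           "signed_attestations": [
    //             { "source_epoch": "2290", "target_epoch": "3007", "signing_root": "0x…" }
    //           ]
    //         }
    //       ]
    //     }
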
    /// Export slashing protection data while also disabling the given keys in the database.
    ///
    /// If any key is unknown to the slashing protection database it will be silently omitted
    /// from the result. It is the caller's responsibility to check whether all keys provided
    /// had data returned for them.
    pub fn export_slashing_protection_for_keys(
        &self,
        pubkeys: &[PublicKeyBytes],
    ) -> Result<Interchange, InterchangeError> {
        self.slashing_protection.with_transaction(|txn| {
            let known_pubkeys = pubkeys
                .iter()
                .filter_map(|pubkey| {
                    let validator_id = self
                        .slashing_protection
                        .get_validator_id_ignoring_status(txn, pubkey)
                        .ok()?;

                    Some(
                        self.slashing_protection
                            .update_validator_status(txn, validator_id, false)
                            .map(|()| *pubkey),
                    )
                })
                .collect::<Result<Vec<PublicKeyBytes>, _>>()?;

            self.slashing_protection.export_interchange_info_in_txn(
                self.genesis_validators_root,
                Some(&known_pubkeys),
                txn,
            )
        })
    }

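    // A hedged caller sketch for the check that the doc comment above delegates to callers
    // (treating `interchange.data` as the per-validator records is an assumption about the
    // interchange type's layout):
    //
    //     let interchange = validator_store.export_slashing_protection_for_keys(&pubkeys)?;
    //     if interchange.data.len() != pubkeys.len() {
    //         // At least one requested key was unknown to the slashing protection database
    //         // and was silently omitted; report this rather than ignoring it.
    //     }
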
    /// Prune the slashing protection database so that it remains performant.
    ///
    /// This function will only do actual pruning periodically, so it should usually be cheap
    /// to call. The `first_run` flag can be used to print a more verbose message when pruning
    /// runs.
    pub fn prune_slashing_protection_db(&self, current_epoch: Epoch, first_run: bool) {
        // Attempt to prune every `SLASHING_PROTECTION_HISTORY_EPOCHS` epochs, with a tolerance
        // for missing the epoch that aligns exactly.
        let mut last_prune = self.slashing_protection_last_prune.lock();
        if current_epoch / SLASHING_PROTECTION_HISTORY_EPOCHS
            <= *last_prune / SLASHING_PROTECTION_HISTORY_EPOCHS
        {
            return;
        }

        if first_run {
            info!(
                self.log,
                "Pruning slashing protection DB";
                "epoch" => current_epoch,
                "msg" => "pruning may take several minutes the first time it runs"
            );
        } else {
            info!(self.log, "Pruning slashing protection DB"; "epoch" => current_epoch);
        }

        let _timer = metrics::start_timer(&metrics::SLASHING_PROTECTION_PRUNE_TIMES);

        let new_min_target_epoch = current_epoch.saturating_sub(SLASHING_PROTECTION_HISTORY_EPOCHS);
        let new_min_slot = new_min_target_epoch.start_slot(E::slots_per_epoch());

        let all_pubkeys: Vec<_> = self.voting_pubkeys(DoppelgangerStatus::ignored);

        if let Err(e) = self
            .slashing_protection
            .prune_all_signed_attestations(all_pubkeys.iter(), new_min_target_epoch)
        {
            error!(
                self.log,
                "Error during pruning of signed attestations";
                "error" => ?e,
            );
            return;
        }

        if let Err(e) = self
            .slashing_protection
            .prune_all_signed_blocks(all_pubkeys.iter(), new_min_slot)
        {
            error!(
                self.log,
                "Error during pruning of signed blocks";
                "error" => ?e,
            );
            return;
        }

        *last_prune = current_epoch;

        info!(self.log, "Completed pruning of slashing protection DB");
    }

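    // The pruning cadence above is a plain integer "bucket" comparison. A standalone sketch,
    // assuming a 512-epoch history window (the actual value comes from
    // `SLASHING_PROTECTION_HISTORY_EPOCHS`, defined elsewhere):
    //
    //     fn should_prune(current_epoch: u64, last_prune: u64, window: u64) -> bool {
    //         current_epoch / window > last_prune / window
    //     }
    //
    //     // With window = 512: epochs 1030 and 1500 both fall in bucket 2, so pruning is
    //     // skipped; it runs again once the current epoch reaches 1536 (bucket 3).
    //     assert!(!should_prune(1500, 1030, 512));
    //     assert!(should_prune(1536, 1030, 512));
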
}