Wallet-based, encrypted key management (#1138)

* Update hashmap hashset to stable futures

* Adds panic test to hashset delay

* Port remote_beacon_node to stable futures

* Fix lcli merge conflicts

* Non rpc stuff compiles

* Remove padding

* Add error enum, zeroize more things

* Fix comment

* protocol.rs compiles

* Port websockets, timer and notifier to stable futures (#1035)

* Fix lcli

* Port timer to stable futures

* Fix timer

* Port websocket_server to stable futures

* Port notifier to stable futures

* Add TODOS

* Port remote_beacon_node to stable futures

* Partial eth2-libp2p stable future upgrade

* Finished first round of fighting RPC types

* Further progress towards porting eth2-libp2p adds caching to discovery

* Update behaviour

* Add keystore builder

* Remove keystore stuff from val client

* Add more tests, comments

* RPC handler to stable futures

* Update RPC to master libp2p

* Add more comments, test vectors

* Network service additions

* Progress on improving JSON validation

* More JSON verification

* Start moving JSON into own mod

* Remove old code

* Add more tests, reader/writers

* Tidy

* Move keystore into own file

* Move more logic into keystore file

* Tidy

* Tidy

* Fix the fallback transport construction (#1102)

* Allow for odd-character hex

* Correct warning

* Remove hashmap delay

* Compiling version of eth2-libp2p

* Update all crates versions

* Fix conversion function and add tests (#1113)

* Add more json missing field checks

* Use scrypt by default

* Tidy, address comments

* Test path and uuid in vectors

* Fix comment

* Add checks for kdf params

* Enforce empty kdf message

* Port validator_client to stable futures (#1114)

* Add PH & MS slot clock changes

* Account for genesis time

* Add progress on duties refactor

* Add simple is_aggregator bool to val subscription

* Start work on attestation_verification.rs

* Add progress on ObservedAttestations

* Progress with ObservedAttestations

* Fix tests

* Add observed attestations to the beacon chain

* Add attestation observation to processing code

* Add progress on attestation verification

* Add first draft of ObservedAttesters

* Add more tests

* Add observed attesters to beacon chain

* Add observers to attestation processing

* Add more attestation verification

* Create ObservedAggregators map

* Remove commented-out code

* Add observed aggregators into chain

* Add progress

* Finish adding features to attestation verification

* Ensure beacon chain compiles

* Link attn verification into chain

* Integrate new attn verification in chain

* Remove old attestation processing code

* Start trying to fix beacon_chain tests

* Split adding into pools into two functions

* Add aggregation to harness

* Get test harness working again

* Adjust the number of aggregators for test harness

* Fix edge-case in harness

* Integrate new attn processing in network

* Fix compile bug in validator_client

* Update validator API endpoints

* Fix aggregation in test harness

* Fix enum thing

* Fix attestation observation bug:

* Patch failing API tests

* Start adding comments to attestation verification

* Remove unused attestation field

* Unify "is block known" logic

* Update comments

* Suppress fork choice errors for network processing

* Add todos

* Tidy

* Add gossip attn tests

* Disallow test harness to produce old attns

* Comment out in-progress tests

* Partially address pruning tests

* Fix failing store test

* Add aggregate tests

* Add comments about which spec conditions we check

* Dont re-aggregate

* Split apart test harness attn production

* Fix compile error in network

* Make progress on commented-out test

* Fix skipping attestation test

* Add fork choice verification tests

* Tidy attn tests, remove dead code

* Remove some accidentally added code

* Fix clippy lint

* Rename test file

* Add block tests, add cheap block proposer check

* Rename block testing file

* Add observed_block_producers

* Tidy

* Switch around block signature verification

* Finish block testing

* Remove gossip from signature tests

* First pass of self review

* Fix deviation in spec

* Update test spec tags

* Start moving over to hashset

* Finish moving observed attesters to hashmap

* Move aggregation pool over to hashmap

* Make fc attn borrow again

* Fix rest_api compile error

* Fix missing comments

* Fix monster test

* Uncomment increasing slots test

* Address remaining comments

* Remove unsafe, use cfg test

* Remove cfg test flag

* Fix dodgy comment

* Revert "Update hashmap hashset to stable futures"

This reverts commit d432378a3cc5cd67fc29c0b15b96b886c1323554.

* Revert "Adds panic test to hashset delay"

This reverts commit 281502396fc5b90d9c421a309c2c056982c9525b.

* Ported attestation_service

* Ported duties_service

* Ported fork_service

* More ports

* Port block_service

* Minor fixes

* VC compiles

* Update TODOS

* Borrow self where possible

* Ignore aggregates that are already known.

* Unify aggregator modulo logic

* Fix typo in logs

* Refactor validator subscription logic

* Avoid reproducing selection proof

* Skip HTTP call if no subscriptions

* Rename DutyAndState -> DutyAndProof

* Tidy logs

* Print root as dbg

* Fix compile errors in tests

* Fix compile error in test

* Re-Fix attestation and duties service

* Minor fixes

Co-authored-by: Paul Hauner <paul@paulhauner.com>

* Expose json_keystore mod

* First commits on path derivation

* Progress with implementation

* More progress

* Passing intermediate test vectors

* Tidy, add comments

* Add DerivedKey structs

* Move key derivation into own crate

* Add zeroize structs

* Return error for empty seed

* Add tests

* Tidy

* First commits on path derivation

* Progress with implementation

* Move key derivation into own crate

* Start defining JSON wallet

* Add progress

* Split out encrypt/decrypt

* First commits on path derivation

* Progress with implementation

* More progress

* Passing intermediate test vectors

* Tidy, add comments

* Add DerivedKey structs

* Move key derivation into own crate

* Add zeroize structs

* Return error for empty seed

* Add tests

* Tidy

* Add progress

* Replace some password usage with slice

* First commits on path derivation

* Progress with implementation

* More progress

* Passing intermediate test vectors

* Tidy, add comments

* Add DerivedKey structs

* Move key derivation into own crate

* Add zeroize structs

* Return error for empty seed

* Add tests

* Tidy

* Add progress

* Expose PlainText struct

* First commits on path derivation

* Progress with implementation

* More progress

* Passing intermediate test vectors

* Tidy, add comments

* Add DerivedKey structs

* Move key derivation into own crate

* Add zeroize structs

* Return error for empty seed

* Add tests

* Tidy

* Add builder

* Expose consts, remove Password

* Minor progress

* Expose SALT_SIZE

* First compiling version

* Add test vectors

* Network crate update to stable futures

* Move dbg assert statement

* Port account_manager to stable futures (#1121)

* Port account_manager to stable futures

* Run async fns in tokio environment

* Port rest_api crate to stable futures (#1118)

* Port rest_api lib to stable futures

* Reduce tokio features

* Update notifier to stable futures

* Builder update

* Further updates

* Add mnemonic, tidy

* Convert self referential async functions

* Tidy

* Add testing

* Add first attempt at validator_dir

* Present pubkey field

* stable futures fixes (#1124)

* Fix eth1 update functions

* Fix genesis and client

* Fix beacon node lib

* Return appropriate runtimes from environment

* Fix test rig

* Refactor eth1 service update

* Upgrade simulator to stable futures

* Lighthouse compiles on stable futures

* Add first pass of wallet manager

* Progress with CLI

* Remove println debugging statement

* Tidy output

* Tidy 600 perms

* Update libp2p service, start rpc test upgrade

* Add validator creation flow

* Update network crate for new libp2p

* Start tidying, adding comments

* Update tokio::codec to futures_codec (#1128)

* Further work towards RPC corrections

* Correct http timeout and network service select

* Add wallet mgr testing

* Shift LockedWallet into own file

* Add comments to fs

* Start integration into VC

* Use tokio runtime for libp2p

* Revert "Update tokio::codec to futures_codec (#1128)"

This reverts commit e57aea924acf5cbabdcea18895ac07e38a425ed7.

* Upgrade RPC libp2p tests

* Upgrade secio fallback test

* Add lcli keypair upgrade command

* Upgrade gossipsub examples

* Clean up RPC protocol

* Test fixes (#1133)

* Correct websocket timeout and run on os thread

* Fix network test

* Add --secrets-dir to VC

* Remove --legacy-keys from VC

* Clean up PR

* Correct tokio tcp move attestation service tests

* Upgrade attestation service tests

* Fix sim

* Correct network test

* Correct genesis test

* Start docs

* Add progress for validator generation

* Tidy error messages

* Test corrections

* Log info when block is received

* Modify logs and update attester service events

* Stable futures: fixes to vc, eth1 and account manager (#1142)

* Add local testnet scripts

* Remove whiteblock script

* Rename local testnet script

* Move spawns onto handle

* Fix VC panic

* Initial fix to block production issue

* Tidy block producer fix

* Tidy further

* Add local testnet clean script

* Run cargo fmt

* Tidy duties service

* Tidy fork service

* Tidy ForkService

* Tidy AttestationService

* Tidy notifier

* Ensure await is not suppressed in eth1

* Ensure await is not suppressed in account_manager

* Use .ok() instead of .unwrap_or(())

* RPC decoding test for proto

* Update discv5 and eth2-libp2p deps

* Run cargo fmt

* Pre-build keystores for sim

* Fix lcli double runtime issue (#1144)

* Handle stream termination and dialing peer errors

* Correct peer_info variant types

* Add progress on new deposit flow

* Remove unnecessary warnings

* Handle subnet unsubscription removal and improve logging

* Add logs around ping

* Upgrade discv5 and improve logging

* Handle peer connection status for multiple connections

* Improve network service logging

* Add more incomplete progress

* Improve logging around peer manager

* Upgrade swarm poll centralise peer management

* Identify clients on error

* Fix `remove_peer` in sync (#1150)

* remove_peer removes from all chains

* Remove logs

* Fix early return from loop

* Improved logging, fix panic

* Partially correct tests

* Add deposit command

* Remove old validator directory

* Start adding AM tests

* Stable futures: Vc sync (#1149)

* Improve syncing heuristic

* Add comments

* Use safer method for tolerance

* Fix tests

* Binary testing progress

* Progress with CLI tests

* Use constants for flags

* More account manager testing

* Improve CLI tests

* Move upgrade-legacy-keypairs into account man

* Use rayon for VC key generation

* Add comments to `validator_dir`

* Add testing to validator_dir

* Add fix to eth1-sim

* Check errors in eth1-sim

* Fix mutability issue

* Ensure password file ends in .pass

* Add more tests to wallet manager

* Tidy deposit

* Tidy account manager

* Tidy account manager

* Remove panic

* Generate keypairs earlier in sim

* Tidy eth1-sim

* Try to fix eth1 sim

* Address review comments

* Fix typo in CLI command

* Update docs

* Disable eth1 sim

* Remove eth1 sim completely

Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: pawanjay176 <pawandhananjay@gmail.com>
Paul Hauner, 2020-05-18 19:01:45 +10:00 (committed via GitHub)
commit c571afb8d8, parent a4b07a833c
55 changed files with 3761 additions and 1163 deletions
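At a glance, the change replaces raw on-disk keypairs with an EIP-2386 HD wallet from which validator keys are derived (EIP-2333) and stored as encrypted EIP-2335 keystores. The sketch below is a condensed, hypothetical helper assembled only from APIs that appear in the diffs further down (`WalletManager`, `next_validator`, `validator_dir::Builder`); it is illustrative rather than the actual implementation, and it omits CLI parsing and password-file handling.

    use std::path::PathBuf;
    use eth2_wallet_manager::WalletManager;
    use types::ChainSpec;
    use validator_dir::Builder as ValidatorDirBuilder;

    /// Hypothetical helper mirroring the new `validator create` flow: derive the
    /// next validator from an HD wallet and write its encrypted keystores to disk.
    fn create_one_validator(
        wallet_base_dir: PathBuf,      // e.g. ~/.lighthouse/wallets
        wallet_name: &str,
        wallet_password: &[u8],        // unlocks the EIP-2386 wallet seed
        voting_password: &[u8],        // encrypts the voting keystore
        withdrawal_password: &[u8],    // encrypts the withdrawal keystore
        validator_dir: PathBuf,        // e.g. ~/.lighthouse/validators
        secrets_dir: PathBuf,          // e.g. ~/.lighthouse/secrets
        deposit_gwei: u64,
        spec: &ChainSpec,
    ) -> Result<(), String> {
        let mgr = WalletManager::open(&wallet_base_dir)
            .map_err(|e| format!("Unable to open wallet directory: {:?}", e))?;
        let mut wallet = mgr
            .wallet_by_name(wallet_name)
            .map_err(|e| format!("Unable to open wallet: {:?}", e))?;
        // Derive the next voting/withdrawal keys and wrap them in EIP-2335 keystores.
        let keystores = wallet
            .next_validator(wallet_password, voting_password, withdrawal_password)
            .map_err(|e| format!("Unable to create validator keys: {:?}", e))?;
        // Write the validator directory: keystores, keystore passwords and eth1 deposit data.
        ValidatorDirBuilder::new(validator_dir, secrets_dir)
            .voting_keystore(keystores.voting, voting_password)
            .withdrawal_keystore(keystores.withdrawal, withdrawal_password)
            .create_eth1_tx_data(deposit_gwei, spec)
            .store_withdrawal_keystore(false)
            .build()
            .map_err(|e| format!("Unable to build validator directory: {:?}", e))?;
        Ok(())
    }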


@@ -58,15 +58,6 @@ jobs:
- uses: actions/checkout@v1
- name: Build the root Dockerfile
run: docker build .
eth1-simulator-ubuntu:
runs-on: ubuntu-latest
needs: cargo-fmt
steps:
- uses: actions/checkout@v1
- name: Install ganache-cli
run: sudo npm install -g ganache-cli
- name: Run the beacon chain sim that starts from an eth1 contract
run: cargo run --release --bin simulator eth1-sim
no-eth1-simulator-ubuntu:
runs-on: ubuntu-latest
needs: cargo-fmt

Cargo.lock

@@ -10,12 +10,16 @@ dependencies = [
"deposit_contract",
"dirs",
"environment",
"eth2_keystore",
"eth2_ssz",
"eth2_ssz_derive",
"eth2_testnet_config",
"eth2_wallet",
"eth2_wallet_manager",
"futures 0.3.5",
"hex 0.4.2",
"libc",
"rand 0.7.3",
"rayon",
"slog",
"slog-async",
@@ -24,6 +28,7 @@ dependencies = [
"tokio 0.2.20",
"types",
"validator_client",
"validator_dir",
"web3",
]
@@ -1340,6 +1345,15 @@ dependencies = [
"uuid",
]
[[package]]
name = "eth2_wallet_manager"
version = "0.1.0"
dependencies = [
"eth2_keystore",
"eth2_wallet",
"tempfile",
]
[[package]]
name = "ethabi"
version = "12.0.0"
@@ -2164,12 +2178,14 @@ dependencies = [
"environment",
"eth1_test_rig",
"eth2-libp2p",
"eth2_keystore",
"eth2_ssz",
"eth2_testnet_config",
"futures 0.3.5",
"genesis",
"hex 0.4.2",
"log 0.4.8",
"rand 0.7.3",
"regex",
"serde",
"serde_yaml",
@@ -2178,6 +2194,7 @@ dependencies = [
"tokio 0.2.20",
"tree_hash",
"types",
"validator_dir",
"web3",
]
@@ -2542,9 +2559,11 @@ dependencies = [
"slog-async",
"slog-term",
"sloggers",
"tempfile",
"tokio 0.2.20",
"types",
"validator_client",
"validator_dir",
]
[[package]]
@@ -2901,6 +2920,7 @@ dependencies = [
"types",
"url 2.1.1",
"validator_client",
"validator_dir",
]
[[package]]
@@ -4130,6 +4150,7 @@ dependencies = [
"futures 0.3.5",
"node_test_rig",
"parking_lot 0.10.2",
"rayon",
"tokio 0.2.20",
"types",
"validator_client",
@@ -5232,6 +5253,7 @@ dependencies = [
"bincode",
"bls",
"clap",
"clap_utils",
"deposit_contract",
"dirs",
"environment",
@@ -5261,9 +5283,27 @@ dependencies = [
"tokio 0.2.20",
"tree_hash",
"types",
"validator_dir",
"web3",
]
[[package]]
name = "validator_dir"
version = "0.1.0"
dependencies = [
"bls",
"deposit_contract",
"eth2_keystore",
"eth2_ssz",
"eth2_ssz_derive",
"eth2_wallet",
"rand 0.7.3",
"rayon",
"tempfile",
"tree_hash",
"types",
]
[[package]]
name = "vcpkg"
version = "0.2.8"


@@ -15,6 +15,7 @@ members = [
"eth2/utils/eth2_keystore",
"eth2/utils/eth2_testnet_config",
"eth2/utils/eth2_wallet",
"eth2/utils/eth2_wallet_manager",
"eth2/utils/logging",
"eth2/utils/eth2_hashing",
"eth2/utils/hashset_delay",
@@ -33,6 +34,7 @@ members = [
"eth2/utils/tree_hash",
"eth2/utils/tree_hash_derive",
"eth2/utils/test_random_derive",
"eth2/utils/validator_dir",
"beacon_node",
"beacon_node/beacon_chain",
"beacon_node/client",


@@ -27,5 +27,9 @@ eth2_testnet_config = { path = "../eth2/utils/eth2_testnet_config" }
web3 = "0.10.0"
futures = { version = "0.3.5", features = ["compat"] }
clap_utils = { path = "../eth2/utils/clap_utils" }
# reduce feature set
eth2_wallet = { path = "../eth2/utils/eth2_wallet" }
eth2_wallet_manager = { path = "../eth2/utils/eth2_wallet_manager" }
rand = "0.7.2"
validator_dir = { path = "../eth2/utils/validator_dir", features = ["unencrypted_keys"] }
tokio = {version = "0.2.20", features = ["full"]}
eth2_keystore = { path = "../eth2/utils/eth2_keystore" }


@@ -1,91 +0,0 @@
use crate::deposits;
use clap::{App, Arg, SubCommand};
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("account_manager")
.visible_aliases(&["a", "am", "account", "account_manager"])
.about("Utilities for generating and managing Ethereum 2.0 accounts.")
.subcommand(
SubCommand::with_name("validator")
.about("Generate or manage Ethereum 2.0 validators.")
.subcommand(deposits::cli_app())
.subcommand(
SubCommand::with_name("new")
.about("Create a new Ethereum 2.0 validator.")
.arg(
Arg::with_name("deposit-value")
.short("v")
.long("deposit-value")
.value_name("GWEI")
.takes_value(true)
.default_value("32000000000")
.help("The deposit amount in Gwei (not Wei). Default is 32 ETH."),
)
.arg(
Arg::with_name("send-deposits")
.long("send-deposits")
.help("If present, submit validator deposits to an eth1 endpoint /
defined by the --eth1-endpoint. Requires either the /
--deposit-contract or --testnet-dir flag.")
)
.arg(
Arg::with_name("eth1-endpoint")
.short("e")
.long("eth1-endpoint")
.value_name("HTTP_SERVER")
.takes_value(true)
.default_value("http://localhost:8545")
.help("The URL to the Eth1 JSON-RPC HTTP API (e.g., Geth/Parity-Ethereum)."),
)
.arg(
Arg::with_name("account-index")
.short("i")
.long("account-index")
.value_name("INDEX")
.takes_value(true)
.default_value("0")
.help("The eth1 accounts[] index which will send the transaction"),
)
.arg(
Arg::with_name("password")
.short("p")
.long("password")
.value_name("FILE")
.takes_value(true)
.help("The password file to unlock the eth1 account (see --index)"),
)
.subcommand(
SubCommand::with_name("insecure")
.about("Produce insecure, ephemeral validators. DO NOT USE TO STORE VALUE.")
.arg(
Arg::with_name("first")
.index(1)
.value_name("INDEX")
.help("Index of the first validator")
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name("last")
.index(2)
.value_name("INDEX")
.help("Index of the last validator")
.takes_value(true)
.required(true),
),
)
.subcommand(
SubCommand::with_name("random")
.about("Produces public keys using entropy from the Rust 'rand' library.")
.arg(
Arg::with_name("validator_count")
.index(1)
.value_name("INTEGER")
.help("The number of new validators to generate.")
.takes_value(true)
.default_value("1"),
),
)
)
)
}


@@ -0,0 +1,39 @@
use clap::ArgMatches;
use eth2_wallet::PlainText;
use rand::{distributions::Alphanumeric, Rng};
use std::fs::create_dir_all;
use std::path::{Path, PathBuf};
/// The `Alphanumeric` distribution (from the `rand` crate) only generates a-z, A-Z, 0-9, therefore it has a range of 62
/// characters.
///
/// 62**48 is greater than 255**32, therefore this password has more bits of entropy than a byte
/// array of length 32.
const DEFAULT_PASSWORD_LEN: usize = 48;
pub fn random_password() -> PlainText {
rand::thread_rng()
.sample_iter(&Alphanumeric)
.take(DEFAULT_PASSWORD_LEN)
.collect::<String>()
.into_bytes()
.into()
}
pub fn ensure_dir_exists<P: AsRef<Path>>(path: P) -> Result<(), String> {
let path = path.as_ref();
if !path.exists() {
create_dir_all(path).map_err(|e| format!("Unable to create {:?}: {:?}", path, e))?;
}
Ok(())
}
pub fn base_wallet_dir(matches: &ArgMatches, arg: &'static str) -> Result<PathBuf, String> {
clap_utils::parse_path_with_default_in_home_dir(
matches,
arg,
PathBuf::new().join(".lighthouse").join("wallets"),
)
}
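As a quick sanity check on the entropy comment above: a 48-character alphanumeric password carries 48 × log2(62) ≈ 286 bits of entropy, comfortably more than the ≈ 256 bits of a uniformly-random 32-byte array, so the 62**48 > 255**32 claim holds.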


@@ -1,241 +0,0 @@
use clap::{App, Arg, ArgMatches};
use clap_utils;
use environment::Environment;
use futures::compat::Future01CompatExt;
use slog::{info, Logger};
use std::fs;
use std::path::PathBuf;
use tokio::time::{delay_until, Duration, Instant};
use types::EthSpec;
use validator_client::validator_directory::ValidatorDirectoryBuilder;
use web3::{
transports::Ipc,
types::{Address, SyncInfo, SyncState},
Transport, Web3,
};
const SYNCING_STATE_RETRY_DELAY: Duration = Duration::from_secs(2);
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("deposited")
.about("Creates new Lighthouse validator keys and directories. Each newly-created validator
will have a deposit transaction formed and submitted to the deposit contract via
--eth1-ipc. This application will only write each validator keys to disk if the deposit
transaction returns successfully from the eth1 node. The process exits immediately if any
Eth1 tx fails. Does not wait for Eth1 confirmation blocks, so there is no guarantee that a
deposit will be accepted in the Eth1 chain. Before key generation starts, this application
will wait until the eth1 indicates that it is not syncing via the eth_syncing endpoint")
.arg(
Arg::with_name("validator-dir")
.long("validator-dir")
.value_name("VALIDATOR_DIRECTORY")
.help("The path where the validator directories will be created. Defaults to ~/.lighthouse/validators")
.takes_value(true),
)
.arg(
Arg::with_name("eth1-ipc")
.long("eth1-ipc")
.value_name("ETH1_IPC_PATH")
.help("Path to an Eth1 JSON-RPC IPC endpoint")
.takes_value(true)
.required(true)
)
.arg(
Arg::with_name("from-address")
.long("from-address")
.value_name("FROM_ETH1_ADDRESS")
.help("The address that will submit the eth1 deposit. Must be unlocked on the node
at --eth1-ipc.")
.takes_value(true)
.required(true)
)
.arg(
Arg::with_name("deposit-gwei")
.long("deposit-gwei")
.value_name("DEPOSIT_GWEI")
.help("The GWEI value of the deposit amount. Defaults to the minimum amount
required for an active validator (MAX_EFFECTIVE_BALANCE.")
.takes_value(true),
)
.arg(
Arg::with_name("count")
.long("count")
.value_name("DEPOSIT_COUNT")
.help("The number of deposits to create, regardless of how many already exist")
.conflicts_with("limit")
.takes_value(true),
)
.arg(
Arg::with_name("at-most")
.long("at-most")
.value_name("VALIDATOR_COUNT")
.help("Observe the number of validators in --validator-dir, only creating enough to
ensure reach the given count. Never deletes an existing validator.")
.conflicts_with("count")
.takes_value(true),
)
}
pub fn cli_run<T: EthSpec>(
matches: &ArgMatches<'_>,
mut env: Environment<T>,
) -> Result<(), String> {
let spec = env.core_context().eth2_config.spec;
let log = env.core_context().log;
let validator_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
"validator_dir",
PathBuf::new().join(".lighthouse").join("validators"),
)?;
let eth1_ipc_path: PathBuf = clap_utils::parse_required(matches, "eth1-ipc")?;
let from_address: Address = clap_utils::parse_required(matches, "from-address")?;
let deposit_gwei = clap_utils::parse_optional(matches, "deposit-gwei")?
.unwrap_or_else(|| spec.max_effective_balance);
let count: Option<usize> = clap_utils::parse_optional(matches, "count")?;
let at_most: Option<usize> = clap_utils::parse_optional(matches, "at-most")?;
let starting_validator_count = existing_validator_count(&validator_dir)?;
let n = match (count, at_most) {
(Some(_), Some(_)) => Err("Cannot supply --count and --at-most".to_string()),
(None, None) => Err("Must supply either --count or --at-most".to_string()),
(Some(count), None) => Ok(count),
(None, Some(at_most)) => Ok(at_most.saturating_sub(starting_validator_count)),
}?;
if n == 0 {
info!(
log,
"No need to produce and validators, exiting";
"--count" => count,
"--at-most" => at_most,
"existing_validators" => starting_validator_count,
);
return Ok(());
}
let deposit_contract = env
.testnet
.as_ref()
.ok_or_else(|| "Unable to run account manager without a testnet dir".to_string())?
.deposit_contract_address()
.map_err(|e| format!("Unable to parse deposit contract address: {}", e))?;
if deposit_contract == Address::zero() {
return Err("Refusing to deposit to the zero address. Check testnet configuration.".into());
}
let (_event_loop_handle, transport) =
Ipc::new(eth1_ipc_path).map_err(|e| format!("Unable to connect to eth1 IPC: {:?}", e))?;
let web3 = Web3::new(transport);
env.runtime()
.block_on(poll_until_synced(web3.clone(), log.clone()))?;
for i in 0..n {
let tx_hash_log = log.clone();
env.runtime()
.block_on(async {
ValidatorDirectoryBuilder::default()
.spec(spec.clone())
.custom_deposit_amount(deposit_gwei)
.thread_random_keypairs()
.submit_eth1_deposit(web3.clone(), from_address, deposit_contract)
.await
.map(move |(builder, tx_hash)| {
info!(
tx_hash_log,
"Validator deposited";
"eth1_tx_hash" => format!("{:?}", tx_hash),
"index" => format!("{}/{}", i + 1, n),
);
builder
})
})?
.create_directory(validator_dir.clone())?
.write_keypair_files()?
.write_eth1_data_file()?
.build()?;
}
let ending_validator_count = existing_validator_count(&validator_dir)?;
let delta = ending_validator_count.saturating_sub(starting_validator_count);
info!(
log,
"Success";
"validators_created_and_deposited" => delta,
);
Ok(())
}
/// Returns the number of validators that exist in the given `validator_dir`.
///
/// This function just assumes any file is a validator directory, making it likely to return a
/// higher number than accurate but never a lower one.
fn existing_validator_count(validator_dir: &PathBuf) -> Result<usize, String> {
fs::read_dir(&validator_dir)
.map(|iter| iter.count())
.map_err(|e| format!("Unable to read {:?}: {}", validator_dir, e))
}
/// Run a poll on the `eth_syncing` endpoint, blocking until the node is synced.
async fn poll_until_synced<T>(web3: Web3<T>, log: Logger) -> Result<(), String>
where
T: Transport + Send + 'static,
<T as Transport>::Out: Send,
{
loop {
let sync_state = web3
.clone()
.eth()
.syncing()
.compat()
.await
.map_err(|e| format!("Unable to read syncing state from eth1 node: {:?}", e))?;
match sync_state {
SyncState::Syncing(SyncInfo {
current_block,
highest_block,
..
}) => {
info!(
log,
"Waiting for eth1 node to sync";
"est_highest_block" => format!("{}", highest_block),
"current_block" => format!("{}", current_block),
);
delay_until(Instant::now() + SYNCING_STATE_RETRY_DELAY).await;
}
SyncState::NotSyncing => {
let block_number = web3
.clone()
.eth()
.block_number()
.compat()
.await
.map_err(|e| format!("Unable to read block number from eth1 node: {:?}", e))?;
if block_number > 0.into() {
info!(
log,
"Eth1 node is synced";
"head_block" => format!("{}", block_number),
);
break;
} else {
delay_until(Instant::now() + SYNCING_STATE_RETRY_DELAY).await;
info!(
log,
"Waiting for eth1 node to sync";
"current_block" => 0,
);
}
}
}
}
Ok(())
}


@@ -1,457 +1,40 @@
mod cli;
mod deposits;
mod common;
pub mod upgrade_legacy_keypairs;
pub mod validator;
pub mod wallet;
use clap::App;
use clap::ArgMatches;
use deposit_contract::DEPOSIT_GAS;
use environment::{Environment, RuntimeContext};
use eth2_testnet_config::Eth2TestnetConfig;
use futures::compat::Future01CompatExt;
use futures::{FutureExt, StreamExt};
use rayon::prelude::*;
use slog::{error, info, Logger};
use std::fs;
use std::fs::File;
use std::io::Read;
use std::path::PathBuf;
use types::{ChainSpec, EthSpec};
use validator_client::validator_directory::{ValidatorDirectory, ValidatorDirectoryBuilder};
use web3::{
transports::Http,
types::{Address, TransactionRequest, U256},
Web3,
};
use environment::Environment;
use types::EthSpec;
pub use cli::cli_app;
pub const CMD: &str = "account_manager";
pub const SECRETS_DIR_FLAG: &str = "secrets-dir";
pub const VALIDATOR_DIR_FLAG: &str = "validator-dir";
pub const BASE_DIR_FLAG: &str = "base-dir";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.visible_aliases(&["a", "am", "account", CMD])
.about("Utilities for generating and managing Ethereum 2.0 accounts.")
.subcommand(wallet::cli_app())
.subcommand(validator::cli_app())
.subcommand(upgrade_legacy_keypairs::cli_app())
}
/// Run the account manager, returning an error if the operation did not succeed.
pub fn run<T: EthSpec>(matches: &ArgMatches<'_>, mut env: Environment<T>) -> Result<(), String> {
let context = env.core_context();
let log = context.log.clone();
// If the `datadir` was not provided, default to the home directory. If the home directory is
// not known, use the current directory.
let datadir = matches
.value_of("datadir")
.map(PathBuf::from)
.unwrap_or_else(|| {
dirs::home_dir()
.unwrap_or_else(|| PathBuf::from("."))
.join(".lighthouse")
.join("validators")
});
fs::create_dir_all(&datadir).map_err(|e| format!("Failed to create datadir: {}", e))?;
info!(
log,
"Located data directory";
"path" => format!("{:?}", datadir)
);
pub fn run<T: EthSpec>(matches: &ArgMatches<'_>, env: Environment<T>) -> Result<(), String> {
match matches.subcommand() {
("validator", Some(matches)) => match matches.subcommand() {
("deposited", Some(matches)) => deposits::cli_run(matches, env)?,
("new", Some(matches)) => run_new_validator_subcommand(matches, datadir, env)?,
_ => {
return Err("Invalid 'validator new' command. See --help.".to_string());
}
},
_ => {
return Err("Invalid 'validator' command. See --help.".to_string());
(wallet::CMD, Some(matches)) => wallet::cli_run(matches)?,
(validator::CMD, Some(matches)) => validator::cli_run(matches, env)?,
(upgrade_legacy_keypairs::CMD, Some(matches)) => upgrade_legacy_keypairs::cli_run(matches)?,
(unknown, _) => {
return Err(format!(
"{} is not a valid {} command. See --help.",
unknown, CMD
));
}
}
Ok(())
}
/// Describes the crypto key generation methods for a validator.
enum KeygenMethod {
/// Produce an insecure "deterministic" keypair. Used only for interop and testing.
Insecure(usize),
/// Generate a new key from the `rand` thread random RNG.
ThreadRandom,
}
/// Process the subcommand for creating new validators.
fn run_new_validator_subcommand<T: EthSpec>(
matches: &ArgMatches,
datadir: PathBuf,
mut env: Environment<T>,
) -> Result<(), String> {
let mut context = env.core_context();
let log = context.log.clone();
// Load the testnet configuration from disk, or use the default testnet.
let eth2_testnet_config: Eth2TestnetConfig<T> =
if let Some(testnet_dir_str) = matches.value_of("testnet-dir") {
let testnet_dir = testnet_dir_str
.parse::<PathBuf>()
.map_err(|e| format!("Unable to parse testnet-dir: {}", e))?;
if !testnet_dir.exists() {
return Err(format!(
"Testnet directory at {:?} does not exist",
testnet_dir
));
}
info!(
log,
"Loading deposit contract address";
"testnet_dir" => format!("{:?}", &testnet_dir)
);
Eth2TestnetConfig::load(testnet_dir.clone())
.map_err(|e| format!("Failed to load testnet dir at {:?}: {}", testnet_dir, e))?
} else {
info!(
log,
"Using Lighthouse testnet deposit contract";
);
Eth2TestnetConfig::hard_coded()
.map_err(|e| format!("Failed to load hard_coded testnet dir: {}", e))?
};
context.eth2_config.spec = eth2_testnet_config
.yaml_config
.as_ref()
.ok_or_else(|| "The testnet directory must contain a spec config".to_string())?
.apply_to_chain_spec::<T>(&context.eth2_config.spec)
.ok_or_else(|| {
format!(
"The loaded config is not compatible with the {} spec",
&context.eth2_config.spec_constants
)
})?;
let methods: Vec<KeygenMethod> = match matches.subcommand() {
("insecure", Some(matches)) => {
let first = matches
.value_of("first")
.ok_or_else(|| "No first index".to_string())?
.parse::<usize>()
.map_err(|e| format!("Unable to parse first index: {}", e))?;
let last = matches
.value_of("last")
.ok_or_else(|| "No last index".to_string())?
.parse::<usize>()
.map_err(|e| format!("Unable to parse first index: {}", e))?;
(first..last).map(KeygenMethod::Insecure).collect()
}
("random", Some(matches)) => {
let count = matches
.value_of("validator_count")
.ok_or_else(|| "No validator count".to_string())?
.parse::<usize>()
.map_err(|e| format!("Unable to parse validator count: {}", e))?;
(0..count).map(|_| KeygenMethod::ThreadRandom).collect()
}
_ => {
return Err("Invalid 'validator' command. See --help.".to_string());
}
};
let deposit_value = matches
.value_of("deposit-value")
.ok_or_else(|| "No deposit-value".to_string())?
.parse::<u64>()
.map_err(|e| format!("Unable to parse deposit-value: {}", e))?;
let validators = make_validators(
datadir.clone(),
&methods,
deposit_value,
&context.eth2_config.spec,
&log,
)?;
if matches.is_present("send-deposits") {
let eth1_endpoint = matches
.value_of("eth1-endpoint")
.ok_or_else(|| "No eth1-endpoint".to_string())?;
let account_index = matches
.value_of("account-index")
.ok_or_else(|| "No account-index".to_string())?
.parse::<usize>()
.map_err(|e| format!("Unable to parse account-index: {}", e))?;
// If supplied, load the eth1 account password from file.
let password = if let Some(password_path) = matches.value_of("password") {
Some(
File::open(password_path)
.map_err(|e| format!("Unable to open password file: {:?}", e))
.and_then(|mut file| {
let mut password = String::new();
file.read_to_string(&mut password)
.map_err(|e| format!("Unable to read password file to string: {:?}", e))
.map(|_| password)
})
.map(|password| {
// Trim the line feed from the end of the password file, if present.
if password.ends_with('\n') {
password[0..password.len() - 1].to_string()
} else {
password
}
})?,
)
} else {
None
};
info!(
log,
"Submitting validator deposits";
"eth1_node_http_endpoint" => eth1_endpoint
);
// Convert from `types::Address` to `web3::types::Address`.
let deposit_contract = Address::from_slice(
eth2_testnet_config
.deposit_contract_address()?
.as_fixed_bytes(),
);
if let Err(()) = env.runtime().block_on(deposit_validators(
context.clone(),
eth1_endpoint.to_string(),
deposit_contract,
validators.clone(),
account_index,
deposit_value,
password,
)) {
error!(
log,
"Created validators but could not submit deposits";
)
} else {
info!(
log,
"Validator deposits complete";
);
}
}
info!(
log,
"Generated validator directories";
"base_path" => format!("{:?}", datadir),
"count" => validators.len(),
);
Ok(())
}
/// Produces a validator directory for each of the key generation methods provided in `methods`.
fn make_validators(
datadir: PathBuf,
methods: &[KeygenMethod],
deposit_value: u64,
spec: &ChainSpec,
log: &Logger,
) -> Result<Vec<ValidatorDirectory>, String> {
methods
.par_iter()
.map(|method| {
let mut builder = ValidatorDirectoryBuilder::default()
.spec(spec.clone())
.custom_deposit_amount(deposit_value);
builder = match method {
KeygenMethod::Insecure(index) => builder.insecure_keypairs(*index),
KeygenMethod::ThreadRandom => builder.thread_random_keypairs(),
};
let validator = builder
.create_directory(datadir.clone())?
.write_keypair_files()?
.write_eth1_data_file()?
.build()?;
let pubkey = &validator
.voting_keypair
.as_ref()
.ok_or_else(|| "Generated validator must have voting keypair".to_string())?
.pk;
info!(
log,
"Saved new validator to disk";
"voting_pubkey" => format!("{:?}", pubkey)
);
Ok(validator)
})
.collect()
}
/// For each `ValidatorDirectory`, submit a deposit transaction to the `eth1_endpoint`.
///
/// Returns success as soon as the eth1 endpoint accepts the transaction (i.e., does not wait for
/// transaction success/revert).
async fn deposit_validators<E: EthSpec>(
context: RuntimeContext<E>,
eth1_endpoint: String,
deposit_contract: Address,
validators: Vec<ValidatorDirectory>,
account_index: usize,
deposit_value: u64,
password: Option<String>,
) -> Result<(), ()> {
let log_1 = context.log.clone();
let log_2 = context.log.clone();
let (event_loop, transport) = Http::new(&eth1_endpoint).map_err(move |e| {
error!(
log_1,
"Failed to start web3 HTTP transport";
"error" => format!("{:?}", e)
)
})?;
/*
* Loop through the validator directories and submit the deposits.
*/
let web3 = Web3::new(transport);
futures::stream::iter(validators)
.for_each(|validator| async {
let web3 = web3.clone();
let log = log_2.clone();
let password = password.clone();
let _ = deposit_validator(
web3,
deposit_contract,
validator,
deposit_value,
account_index,
password,
log,
)
.await;
})
.map(|_| event_loop)
// // Web3 gives errors if the event loop is dropped whilst performing requests.
.map(drop)
.await;
Ok(())
}
/// For the given `ValidatorDirectory`, submit a deposit transaction to the `web3` node.
///
/// Returns success as soon as the eth1 endpoint accepts the transaction (i.e., does not wait for
/// transaction success/revert).
async fn deposit_validator(
web3: Web3<Http>,
deposit_contract: Address,
validator: ValidatorDirectory,
deposit_amount: u64,
account_index: usize,
password_opt: Option<String>,
log: Logger,
) -> Result<(), ()> {
let voting_keypair = validator
.voting_keypair
.clone()
.ok_or_else(|| error!(log, "Validator does not have voting keypair"))?;
let deposit_data = validator
.deposit_data
.clone()
.ok_or_else(|| error!(log, "Validator does not have deposit data"))?;
let pubkey_1 = voting_keypair.pk.clone();
let pubkey_2 = voting_keypair.pk;
let log_1 = log.clone();
let log_2 = log.clone();
// TODO: creating a future to extract the Error type
// check if there's a better way
let future = async move {
let accounts = web3
.eth()
.accounts()
.compat()
.await
.map_err(|e| format!("Failed to get accounts: {:?}", e))?;
let from_address = accounts
.get(account_index)
.cloned()
.ok_or_else(|| "Insufficient accounts for deposit".to_string())?;
/*
* If a password was supplied, unlock the account.
*/
let from = if let Some(password) = password_opt {
// Unlock for only a single transaction.
let duration = None;
let result = web3
.personal()
.unlock_account(from_address, &password, duration)
.compat()
.await;
match result {
Ok(true) => from_address,
Ok(false) => {
return Err::<(), String>(
"Eth1 node refused to unlock account. Check password.".to_string(),
)
}
Err(e) => return Err::<(), String>(format!("Eth1 unlock request failed: {:?}", e)),
}
} else {
from_address
};
/*
* Submit the deposit transaction.
*/
let tx_request = TransactionRequest {
from,
to: Some(deposit_contract),
gas: Some(U256::from(DEPOSIT_GAS)),
gas_price: None,
value: Some(from_gwei(deposit_amount)),
data: Some(deposit_data.into()),
nonce: None,
condition: None,
};
let tx = web3
.eth()
.send_transaction(tx_request)
.compat()
.await
.map_err(|e| format!("Failed to call deposit fn: {:?}", e))?;
info!(
log_1,
"Validator deposit successful";
"eth1_tx_hash" => format!("{:?}", tx),
"validator_voting_pubkey" => format!("{:?}", pubkey_1)
);
Ok(())
};
future.await.map_err(move |e| {
error!(
log_2,
"Validator deposit_failed";
"error" => e,
"validator_voting_pubkey" => format!("{:?}", pubkey_2)
);
})?;
Ok(())
}
/// Converts gwei to wei.
fn from_gwei(gwei: u64) -> U256 {
U256::from(gwei) * U256::exp10(9)
}


@@ -0,0 +1,149 @@
//! This command allows migrating from the old method of storing keys (unencrypted SSZ) to the
//! current method of using encrypted EIP-2335 keystores.
//!
//! This command should be completely removed once the `unencrypted_keys` feature is removed from
//! the `validator_dir` crate. This should hopefully be in mid-June 2020.
//!
//! ## Example
//!
//! This command will upgrade all keypairs in the `--validators-dir`, storing the newly-generated
//! passwords in `--secrets-dir`.
//!
//! ```ignore
//! lighthouse am upgrade-legacy-keypairs \
//! --validators-dir ~/.lighthouse/validators \
//! --secrets-dir ~/.lighthouse/secrets
//! ```
use crate::{SECRETS_DIR_FLAG, VALIDATOR_DIR_FLAG};
use clap::{App, Arg, ArgMatches};
use clap_utils::parse_required;
use eth2_keystore::KeystoreBuilder;
use rand::{distributions::Alphanumeric, Rng};
use std::fs::{create_dir_all, read_dir, write, File};
use std::path::{Path, PathBuf};
use types::Keypair;
use validator_dir::{
unencrypted_keys::load_unencrypted_keypair, VOTING_KEYSTORE_FILE, WITHDRAWAL_KEYSTORE_FILE,
};
pub const CMD: &str = "upgrade-legacy-keypairs";
pub const VOTING_KEYPAIR_FILE: &str = "voting_keypair";
pub const WITHDRAWAL_KEYPAIR_FILE: &str = "withdrawal_keypair";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about(
"Converts legacy unencrypted SSZ keypairs into encrypted keystores.",
)
.arg(
Arg::with_name(VALIDATOR_DIR_FLAG)
.long(VALIDATOR_DIR_FLAG)
.value_name("VALIDATORS_DIRECTORY")
.takes_value(true)
.required(true)
.help("The directory containing legacy validators. Generally ~/.lighthouse/validators"),
)
.arg(
Arg::with_name(SECRETS_DIR_FLAG)
.long(SECRETS_DIR_FLAG)
.value_name("SECRETS_DIRECTORY")
.takes_value(true)
.required(true)
.help("The directory where keystore passwords will be stored. Generally ~/.lighthouse/secrets"),
)
}
pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
let validators_dir: PathBuf = parse_required(matches, VALIDATOR_DIR_FLAG)?;
let secrets_dir: PathBuf = parse_required(matches, SECRETS_DIR_FLAG)?;
if !secrets_dir.exists() {
create_dir_all(&secrets_dir)
.map_err(|e| format!("Failed to create secrets dir {:?}: {:?}", secrets_dir, e))?;
}
read_dir(&validators_dir)
.map_err(|e| {
format!(
"Failed to read validators directory {:?}: {:?}",
validators_dir, e
)
})?
.try_for_each(|dir| {
let path = dir
.map_err(|e| format!("Unable to read dir: {}", e))?
.path();
if path.is_dir() {
if let Err(e) = upgrade_keypair(
&path,
&secrets_dir,
VOTING_KEYPAIR_FILE,
VOTING_KEYSTORE_FILE,
) {
println!("Validator {:?}: {:?}", path, e);
} else {
println!("Validator {:?} voting keys: success", path);
}
if let Err(e) = upgrade_keypair(
&path,
&secrets_dir,
WITHDRAWAL_KEYPAIR_FILE,
WITHDRAWAL_KEYSTORE_FILE,
) {
println!("Validator {:?}: {:?}", path, e);
} else {
println!("Validator {:?} withdrawal keys: success", path);
}
}
Ok(())
})
}
fn upgrade_keypair<P: AsRef<Path>>(
validator_dir: P,
secrets_dir: P,
input_filename: &str,
output_filename: &str,
) -> Result<(), String> {
let validator_dir = validator_dir.as_ref();
let secrets_dir = secrets_dir.as_ref();
let keypair: Keypair = load_unencrypted_keypair(validator_dir.join(input_filename))?.into();
let password = rand::thread_rng()
.sample_iter(&Alphanumeric)
.take(48)
.collect::<String>()
.into_bytes();
let keystore = KeystoreBuilder::new(&keypair, &password, "".into())
.map_err(|e| format!("Unable to create keystore builder: {:?}", e))?
.build()
.map_err(|e| format!("Unable to build keystore: {:?}", e))?;
let keystore_path = validator_dir.join(output_filename);
if keystore_path.exists() {
return Err(format!("{:?} already exists", keystore_path));
}
let mut file = File::create(&keystore_path).map_err(|e| format!("Cannot create: {:?}", e))?;
keystore
.to_json_writer(&mut file)
.map_err(|e| format!("Cannot write keystore to {:?}: {:?}", keystore_path, e))?;
let password_path = secrets_dir.join(format!("{}", keypair.pk.as_hex_string()));
if password_path.exists() {
return Err(format!("{:?} already exists", password_path));
}
write(&password_path, &password)
.map_err(|e| format!("Unable to write password to {:?}: {:?}", password_path, e))?;
Ok(())
}
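In effect, for each legacy validator directory this writes an encrypted EIP-2335 keystore alongside the old unencrypted SSZ keypair (under the voting/withdrawal keystore file names exported by `validator_dir`) and stores the freshly-generated 48-character password in `--secrets-dir`, in a file named after the corresponding public key's hex string.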


@@ -0,0 +1,203 @@
use crate::{
common::{ensure_dir_exists, random_password},
SECRETS_DIR_FLAG, VALIDATOR_DIR_FLAG,
};
use clap::{App, Arg, ArgMatches};
use environment::Environment;
use eth2_wallet::PlainText;
use eth2_wallet_manager::WalletManager;
use std::fs;
use std::path::{Path, PathBuf};
use types::EthSpec;
use validator_dir::Builder as ValidatorDirBuilder;
pub const CMD: &str = "create";
pub const BASE_DIR_FLAG: &str = "base-dir";
pub const WALLET_NAME_FLAG: &str = "wallet-name";
pub const WALLET_PASSPHRASE_FLAG: &str = "wallet-passphrase";
pub const DEPOSIT_GWEI_FLAG: &str = "deposit-gwei";
pub const STORE_WITHDRAW_FLAG: &str = "store-withdrawal-keystore";
pub const COUNT_FLAG: &str = "count";
pub const AT_MOST_FLAG: &str = "at-most";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about(
"Creates new validators from an existing EIP-2386 wallet using the EIP-2333 HD key \
derivation scheme.",
)
.arg(
Arg::with_name(WALLET_NAME_FLAG)
.long(WALLET_NAME_FLAG)
.value_name("WALLET_NAME")
.help("Use the wallet identified by this name")
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(WALLET_PASSPHRASE_FLAG)
.long(WALLET_PASSPHRASE_FLAG)
.value_name("WALLET_PASSWORD_PATH")
.help("A path to a file containing the password which will unlock the wallet.")
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(VALIDATOR_DIR_FLAG)
.long(VALIDATOR_DIR_FLAG)
.value_name("VALIDATOR_DIRECTORY")
.help(
"The path where the validator directories will be created. \
Defaults to ~/.lighthouse/validators",
)
.takes_value(true),
)
.arg(
Arg::with_name(SECRETS_DIR_FLAG)
.long(SECRETS_DIR_FLAG)
.value_name("SECRETS_DIR")
.help(
"The path where the validator keystore passwords will be stored. \
Defaults to ~/.lighthouse/secrets",
)
.takes_value(true),
)
.arg(
Arg::with_name(DEPOSIT_GWEI_FLAG)
.long(DEPOSIT_GWEI_FLAG)
.value_name("DEPOSIT_GWEI")
.help(
"The GWEI value of the deposit amount. Defaults to the minimum amount \
required for an active validator (MAX_EFFECTIVE_BALANCE)",
)
.takes_value(true),
)
.arg(
Arg::with_name(STORE_WITHDRAW_FLAG)
.long(STORE_WITHDRAW_FLAG)
.help(
"If present, the withdrawal keystore will be stored alongside the voting \
keypair. It is generally recommended to *not* store the withdrawal key and \
instead generate them from the wallet seed when required.",
),
)
.arg(
Arg::with_name(COUNT_FLAG)
.long(COUNT_FLAG)
.value_name("VALIDATOR_COUNT")
.help("The number of validators to create, regardless of how many already exist")
.conflicts_with("at-most")
.takes_value(true),
)
.arg(
Arg::with_name(AT_MOST_FLAG)
.long(AT_MOST_FLAG)
.value_name("AT_MOST_VALIDATORS")
.help(
"Observe the number of validators in --validator-dir, only creating enough to \
reach the given count. Never deletes an existing validator.",
)
.conflicts_with("count")
.takes_value(true),
)
}
pub fn cli_run<T: EthSpec>(
matches: &ArgMatches,
mut env: Environment<T>,
wallet_base_dir: PathBuf,
) -> Result<(), String> {
let spec = env.core_context().eth2_config.spec;
let name: String = clap_utils::parse_required(matches, WALLET_NAME_FLAG)?;
let wallet_password_path: PathBuf =
clap_utils::parse_required(matches, WALLET_PASSPHRASE_FLAG)?;
let validator_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
VALIDATOR_DIR_FLAG,
PathBuf::new().join(".lighthouse").join("validators"),
)?;
let secrets_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
SECRETS_DIR_FLAG,
PathBuf::new().join(".lighthouse").join("secrets"),
)?;
let deposit_gwei = clap_utils::parse_optional(matches, DEPOSIT_GWEI_FLAG)?
.unwrap_or_else(|| spec.max_effective_balance);
let count: Option<usize> = clap_utils::parse_optional(matches, COUNT_FLAG)?;
let at_most: Option<usize> = clap_utils::parse_optional(matches, AT_MOST_FLAG)?;
ensure_dir_exists(&validator_dir)?;
ensure_dir_exists(&secrets_dir)?;
let starting_validator_count = existing_validator_count(&validator_dir)?;
let n = match (count, at_most) {
(Some(_), Some(_)) => Err(format!(
"Cannot supply --{} and --{}",
COUNT_FLAG, AT_MOST_FLAG
)),
(None, None) => Err(format!(
"Must supply either --{} or --{}",
COUNT_FLAG, AT_MOST_FLAG
)),
(Some(count), None) => Ok(count),
(None, Some(at_most)) => Ok(at_most.saturating_sub(starting_validator_count)),
}?;
if n == 0 {
eprintln!(
"No validators to create. {}={:?}, {}={:?}",
COUNT_FLAG, count, AT_MOST_FLAG, at_most
);
return Ok(());
}
let wallet_password = fs::read(&wallet_password_path)
.map_err(|e| format!("Unable to read {:?}: {:?}", wallet_password_path, e))
.map(|bytes| PlainText::from(bytes))?;
let mgr = WalletManager::open(&wallet_base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", BASE_DIR_FLAG, e))?;
let mut wallet = mgr
.wallet_by_name(&name)
.map_err(|e| format!("Unable to open wallet: {:?}", e))?;
for i in 0..n {
let voting_password = random_password();
let withdrawal_password = random_password();
let keystores = wallet
.next_validator(
wallet_password.as_bytes(),
voting_password.as_bytes(),
withdrawal_password.as_bytes(),
)
.map_err(|e| format!("Unable to create validator keys: {:?}", e))?;
let voting_pubkey = keystores.voting.pubkey().to_string();
ValidatorDirBuilder::new(validator_dir.clone(), secrets_dir.clone())
.voting_keystore(keystores.voting, voting_password.as_bytes())
.withdrawal_keystore(keystores.withdrawal, withdrawal_password.as_bytes())
.create_eth1_tx_data(deposit_gwei, &spec)
.store_withdrawal_keystore(matches.is_present(STORE_WITHDRAW_FLAG))
.build()
.map_err(|e| format!("Unable to build validator directory: {:?}", e))?;
println!("{}/{}\t0x{}", i + 1, n, voting_pubkey);
}
Ok(())
}
/// Returns the number of validators that exist in the given `validator_dir`.
///
/// This function just assumes any file is a validator directory, making it likely to return a
/// higher number than accurate but never a lower one.
fn existing_validator_count<P: AsRef<Path>>(validator_dir: P) -> Result<usize, String> {
fs::read_dir(validator_dir.as_ref())
.map(|iter| iter.count())
.map_err(|e| format!("Unable to read {:?}: {}", validator_dir.as_ref(), e))
}
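For illustration, an invocation using the flags defined above might look like the following (the wallet name and passphrase file are placeholders, and `account` is one of the `account_manager` aliases); each created validator prints an `<i>/<n>` counter followed by its voting public key:

    lighthouse account validator create \
        --wallet-name wally \
        --wallet-passphrase wally.pass \
        --count 2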


@@ -0,0 +1,271 @@
use crate::VALIDATOR_DIR_FLAG;
use clap::{App, Arg, ArgMatches};
use clap_utils;
use deposit_contract::DEPOSIT_GAS;
use environment::Environment;
use futures::compat::Future01CompatExt;
use slog::{info, Logger};
use std::path::PathBuf;
use tokio::time::{delay_until, Duration, Instant};
use types::EthSpec;
use validator_dir::Manager as ValidatorManager;
use web3::{
transports::Ipc,
types::{Address, SyncInfo, SyncState, TransactionRequest, U256},
Transport, Web3,
};
pub const CMD: &str = "deposit";
pub const VALIDATOR_FLAG: &str = "validator";
pub const ETH1_IPC_FLAG: &str = "eth1-ipc";
pub const FROM_ADDRESS_FLAG: &str = "from-address";
const GWEI: u64 = 1_000_000_000;
const SYNCING_STATE_RETRY_DELAY: Duration = Duration::from_secs(2);
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("deposit")
.about(
"Submits a deposit to an Eth1 validator registration contract via an IPC endpoint \
of an Eth1 client (e.g., Geth, OpenEthereum, etc.). The validators must already \
have been created and exist on the file-system. The process will exit immediately \
with an error if any error occurs. After each deposit is submitted to the Eth1 \
node, a file will be saved in the validator directory with the transaction hash. \
The application does not wait for confirmations so there is no guarantee that \
the transaction is included in the Eth1 chain; use a block explorer and the \
transaction hash to check for confirmations. The deposit contract address will \
be determined by the --testnet-dir flag on the primary Lighthouse binary.",
)
.arg(
Arg::with_name(VALIDATOR_DIR_FLAG)
.long(VALIDATOR_DIR_FLAG)
.value_name("VALIDATOR_DIRECTORY")
.help(
"The path the validator client data directory. \
Defaults to ~/.lighthouse/validators",
)
.takes_value(true),
)
.arg(
Arg::with_name(VALIDATOR_FLAG)
.long(VALIDATOR_FLAG)
.value_name("VALIDATOR_NAME")
.help(
"The name of the directory in --data-dir for which to deposit. \
Set to 'all' to deposit all validators in the --data-dir.",
)
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(ETH1_IPC_FLAG)
.long(ETH1_IPC_FLAG)
.value_name("ETH1_IPC_PATH")
.help("Path to an Eth1 JSON-RPC IPC endpoint")
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(FROM_ADDRESS_FLAG)
.long(FROM_ADDRESS_FLAG)
.value_name("FROM_ETH1_ADDRESS")
.help(
"The address that will submit the eth1 deposit. \
Must be unlocked on the node at --eth1-ipc.",
)
.takes_value(true)
.required(true),
)
}
pub fn cli_run<T: EthSpec>(
matches: &ArgMatches<'_>,
mut env: Environment<T>,
) -> Result<(), String> {
let log = env.core_context().log;
let data_dir = clap_utils::parse_path_with_default_in_home_dir(
matches,
VALIDATOR_DIR_FLAG,
PathBuf::new().join(".lighthouse").join("validators"),
)?;
let validator: String = clap_utils::parse_required(matches, VALIDATOR_FLAG)?;
let eth1_ipc_path: PathBuf = clap_utils::parse_required(matches, ETH1_IPC_FLAG)?;
let from_address: Address = clap_utils::parse_required(matches, FROM_ADDRESS_FLAG)?;
let manager = ValidatorManager::open(&data_dir)
.map_err(|e| format!("Unable to read --{}: {:?}", VALIDATOR_DIR_FLAG, e))?;
let validators = match validator.as_ref() {
"all" => manager
.open_all_validators()
.map_err(|e| format!("Unable to read all validators: {:?}", e)),
name => {
let path = manager
.directory_names()
.map_err(|e| {
format!(
"Unable to read --{} directory names: {:?}",
VALIDATOR_DIR_FLAG, e
)
})?
.get(name)
.ok_or_else(|| format!("Unknown validator: {}", name))?
.clone();
manager
.open_validator(&path)
.map_err(|e| format!("Unable to open {}: {:?}", name, e))
.map(|v| vec![v])
}
}?;
let eth1_deposit_datas = validators
.into_iter()
.filter(|v| !v.eth1_deposit_tx_hash_exists())
.map(|v| match v.eth1_deposit_data() {
Ok(Some(data)) => Ok((v, data)),
Ok(None) => Err(format!(
"Validator is missing deposit data file: {:?}",
v.dir()
)),
Err(e) => Err(format!(
"Unable to read deposit data for {:?}: {:?}",
v.dir(),
e
)),
})
.collect::<Result<Vec<_>, _>>()?;
let total_gwei: u64 = eth1_deposit_datas
.iter()
.map(|(_, d)| d.deposit_data.amount)
.sum();
if eth1_deposit_datas.is_empty() {
info!(log, "No validators to deposit");
return Ok(());
}
info!(
log,
"Starting deposits";
"deposit_count" => eth1_deposit_datas.len(),
"total_eth" => total_gwei / GWEI,
);
let deposit_contract = env
.testnet
.as_ref()
.ok_or_else(|| "Unable to run account manager without a testnet dir".to_string())?
.deposit_contract_address()
.map_err(|e| format!("Unable to parse deposit contract address: {}", e))?;
if deposit_contract == Address::zero() {
return Err("Refusing to deposit to the zero address. Check testnet configuration.".into());
}
let (_event_loop_handle, transport) =
Ipc::new(eth1_ipc_path).map_err(|e| format!("Unable to connect to eth1 IPC: {:?}", e))?;
let web3 = Web3::new(transport);
let deposits_fut = async {
poll_until_synced(web3.clone(), log.clone()).await?;
for (mut validator_dir, eth1_deposit_data) in eth1_deposit_datas {
let tx_hash = web3
.eth()
.send_transaction(TransactionRequest {
from: from_address,
to: Some(deposit_contract),
gas: Some(DEPOSIT_GAS.into()),
gas_price: None,
value: Some(from_gwei(eth1_deposit_data.deposit_data.amount)),
data: Some(eth1_deposit_data.rlp.into()),
nonce: None,
condition: None,
})
.compat()
.await
.map_err(|e| format!("Failed to send transaction: {:?}", e))?;
validator_dir
.save_eth1_deposit_tx_hash(&format!("{:?}", tx_hash))
.map_err(|e| format!("Failed to save tx hash {:?} to disk: {:?}", tx_hash, e))?;
}
Ok::<(), String>(())
};
env.runtime().block_on(deposits_fut)?;
Ok(())
}
/// Converts gwei to wei.
fn from_gwei(gwei: u64) -> U256 {
U256::from(gwei) * U256::exp10(9)
}
/// Run a poll on the `eth_syncing` endpoint, blocking until the node is synced.
async fn poll_until_synced<T>(web3: Web3<T>, log: Logger) -> Result<(), String>
where
T: Transport + Send + 'static,
<T as Transport>::Out: Send,
{
loop {
let sync_state = web3
.clone()
.eth()
.syncing()
.compat()
.await
.map_err(|e| format!("Unable to read syncing state from eth1 node: {:?}", e))?;
match sync_state {
SyncState::Syncing(SyncInfo {
current_block,
highest_block,
..
}) => {
info!(
log,
"Waiting for eth1 node to sync";
"est_highest_block" => format!("{}", highest_block),
"current_block" => format!("{}", current_block),
);
delay_until(Instant::now() + SYNCING_STATE_RETRY_DELAY).await;
}
SyncState::NotSyncing => {
let block_number = web3
.clone()
.eth()
.block_number()
.compat()
.await
.map_err(|e| format!("Unable to read block number from eth1 node: {:?}", e))?;
if block_number > 0.into() {
info!(
log,
"Eth1 node is synced";
"head_block" => format!("{}", block_number),
);
break;
} else {
delay_until(Instant::now() + SYNCING_STATE_RETRY_DELAY).await;
info!(
log,
"Waiting for eth1 node to sync";
"current_block" => 0,
);
}
}
}
}
Ok(())
}
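A matching (illustrative) deposit invocation, with placeholder IPC path and sender address, might be:

    lighthouse account validator deposit \
        --validator all \
        --eth1-ipc /path/to/geth.ipc \
        --from-address <UNLOCKED_ETH1_ADDRESS>

Note that `from_gwei` above converts gwei to wei by multiplying by 10^9, so the default 32,000,000,000 gwei deposit (MAX_EFFECTIVE_BALANCE) is submitted as 32 × 10^18 wei, i.e. 32 ETH.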


@@ -0,0 +1,38 @@
pub mod create;
pub mod deposit;
use crate::common::base_wallet_dir;
use clap::{App, Arg, ArgMatches};
use environment::Environment;
use types::EthSpec;
pub const CMD: &str = "validator";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about("Provides commands for managing Eth2 validators.")
.arg(
Arg::with_name("base-dir")
.long("base-dir")
.value_name("BASE_DIRECTORY")
.help("A path containing Eth2 EIP-2386 wallets. Defaults to ~/.lighthouse/wallets")
.takes_value(true),
)
.subcommand(create::cli_app())
.subcommand(deposit::cli_app())
}
pub fn cli_run<T: EthSpec>(matches: &ArgMatches, env: Environment<T>) -> Result<(), String> {
let base_wallet_dir = base_wallet_dir(matches, "base-dir")?;
match matches.subcommand() {
(create::CMD, Some(matches)) => create::cli_run::<T>(matches, env, base_wallet_dir),
(deposit::CMD, Some(matches)) => deposit::cli_run::<T>(matches, env),
(unknown, _) => {
return Err(format!(
"{} does not have a {} command. See --help",
CMD, unknown
));
}
}
}
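Taken together with the account_manager wiring earlier in this diff, this `validator` subcommand (with its wallet `--base-dir`) sits alongside the new wallet and `upgrade-legacy-keypairs` subcommands, all reachable via the `a`, `am` or `account` aliases, e.g. `lighthouse account validator create` and `lighthouse account validator deposit` as sketched above.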


@@ -0,0 +1,163 @@
use crate::{common::random_password, BASE_DIR_FLAG};
use clap::{App, Arg, ArgMatches};
use eth2_wallet::{
bip39::{Language, Mnemonic, MnemonicType},
PlainText,
};
use eth2_wallet_manager::{WalletManager, WalletType};
use std::ffi::OsStr;
use std::fs::{self, File};
use std::io::prelude::*;
use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};
pub const CMD: &str = "create";
pub const HD_TYPE: &str = "hd";
pub const NAME_FLAG: &str = "name";
pub const PASSPHRASE_FLAG: &str = "passphrase-file";
pub const TYPE_FLAG: &str = "type";
pub const MNEMONIC_FLAG: &str = "mnemonic-output-path";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about("Creates a new HD (hierarchical-deterministic) EIP-2386 wallet.")
.arg(
Arg::with_name(NAME_FLAG)
.long(NAME_FLAG)
.value_name("WALLET_NAME")
.help(
"The wallet will be created with this name. It is not allowed to \
create two wallets with the same name for the same --base-dir.",
)
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(PASSPHRASE_FLAG)
.long(PASSPHRASE_FLAG)
.value_name("WALLET_PASSWORD_PATH")
.help(
"A path to a file containing the password which will unlock the wallet. \
If the file does not exist, a random password will be generated and \
saved at that path. To avoid confusion, if the file does not already \
exist it must include a '.pass' suffix.",
)
.takes_value(true)
.required(true),
)
.arg(
Arg::with_name(TYPE_FLAG)
.long(TYPE_FLAG)
.value_name("WALLET_TYPE")
.help(
"The type of wallet to create. Only HD (hierarchical-deterministic) \
wallets are supported at present.",
)
.takes_value(true)
.possible_values(&[HD_TYPE])
.default_value(HD_TYPE),
)
.arg(
Arg::with_name(MNEMONIC_FLAG)
.long(MNEMONIC_FLAG)
.value_name("MNEMONIC_PATH")
.help(
"If present, the mnemonic will be saved to this file. DO NOT SHARE THE MNEMONIC.",
)
.takes_value(true)
)
}
pub fn cli_run(matches: &ArgMatches, base_dir: PathBuf) -> Result<(), String> {
let name: String = clap_utils::parse_required(matches, NAME_FLAG)?;
let wallet_password_path: PathBuf = clap_utils::parse_required(matches, PASSPHRASE_FLAG)?;
let mnemonic_output_path: Option<PathBuf> = clap_utils::parse_optional(matches, MNEMONIC_FLAG)?;
let type_field: String = clap_utils::parse_required(matches, TYPE_FLAG)?;
let wallet_type = match type_field.as_ref() {
HD_TYPE => WalletType::Hd,
unknown => return Err(format!("--{} {} is not supported", TYPE_FLAG, unknown)),
};
let mgr = WalletManager::open(&base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", BASE_DIR_FLAG, e))?;
// Create a new random mnemonic.
//
// The `tiny-bip39` crate uses `thread_rng()` for this entropy.
let mnemonic = Mnemonic::new(MnemonicType::Words12, Language::English);
// Create a random password if the file does not exist.
if !wallet_password_path.exists() {
// To prevent users from accidentally supplying their password as the PASSPHRASE_FLAG value and
// creating a file with that name, we require that the password file has a .pass suffix.
if wallet_password_path.extension() != Some(&OsStr::new("pass")) {
return Err(format!(
"Only creates a password file if that file ends in .pass: {:?}",
wallet_password_path
));
}
create_with_600_perms(&wallet_password_path, random_password().as_bytes())
.map_err(|e| format!("Unable to write to {:?}: {:?}", wallet_password_path, e))?;
}
let wallet_password = fs::read(&wallet_password_path)
.map_err(|e| format!("Unable to read {:?}: {:?}", wallet_password_path, e))
.map(|bytes| PlainText::from(bytes))?;
let wallet = mgr
.create_wallet(name, wallet_type, &mnemonic, wallet_password.as_bytes())
.map_err(|e| format!("Unable to create wallet: {:?}", e))?;
if let Some(path) = mnemonic_output_path {
create_with_600_perms(&path, mnemonic.phrase().as_bytes())
.map_err(|e| format!("Unable to write mnemonic to {:?}: {:?}", path, e))?;
}
println!("Your wallet's 12-word BIP-39 mnemonic is:");
println!("");
println!("\t{}", mnemonic.phrase());
println!("");
println!("This mnemonic can be used to fully restore your wallet, should ");
println!("you lose the JSON file or your password. ");
println!("");
println!("It is very important that you DO NOT SHARE this mnemonic as it will ");
println!("reveal the private keys of all validators and keys generated with ");
println!("this wallet. That would be catastrophic.");
println!("");
println!("It is also import to store a backup of this mnemonic so you can ");
println!("recover your private keys in the case of data loss. Writing it on ");
println!("a piece of paper and storing it in a safe place would be prudent.");
println!("");
println!("Your wallet's UUID is:");
println!("");
println!("\t{}", wallet.wallet().uuid());
println!("");
println!("You do not need to backup your UUID or keep it secret.");
Ok(())
}
/// Creates a file with `600 (-rw-------)` permissions.
pub fn create_with_600_perms<P: AsRef<Path>>(path: P, bytes: &[u8]) -> Result<(), String> {
let path = path.as_ref();
let mut file =
File::create(&path).map_err(|e| format!("Unable to create {:?}: {}", path, e))?;
let mut perm = file
.metadata()
.map_err(|e| format!("Unable to get {:?} metadata: {}", path, e))?
.permissions();
perm.set_mode(0o600);
file.set_permissions(perm)
.map_err(|e| format!("Unable to set {:?} permissions: {}", path, e))?;
file.write_all(bytes)
.map_err(|e| format!("Unable to write to {:?}: {}", path, e))?;
Ok(())
}
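For reference, a condensed, hedged sketch of the same flow done programmatically (the wallet name and base directory are illustrative; it simply re-uses the `random_password` helper and `WalletManager` shown elsewhere in this diff):

```rust
use crate::common::random_password;
use eth2_wallet::bip39::{Language, Mnemonic, MnemonicType};
use eth2_wallet_manager::{WalletManager, WalletType};
use std::path::Path;

/// Illustrative only: the programmatic equivalent of `lighthouse account wallet create`.
fn create_demo_wallet(base_dir: &Path) -> Result<(), String> {
    let mgr = WalletManager::open(base_dir)
        .map_err(|e| format!("Unable to open base dir: {:?}", e))?;

    // Fresh 12-word mnemonic and a random password, as in `cli_run` above.
    let mnemonic = Mnemonic::new(MnemonicType::Words12, Language::English);
    let password = random_password();

    let wallet = mgr
        .create_wallet("demo".into(), WalletType::Hd, &mnemonic, password.as_bytes())
        .map_err(|e| format!("Unable to create wallet: {:?}", e))?;

    println!("Created wallet {}", wallet.wallet().uuid());
    Ok(())
}
```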

View File

@ -0,0 +1,24 @@
use crate::BASE_DIR_FLAG;
use clap::App;
use eth2_wallet_manager::WalletManager;
use std::path::PathBuf;
pub const CMD: &str = "list";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD).about("Lists the names of all wallets.")
}
pub fn cli_run(base_dir: PathBuf) -> Result<(), String> {
let mgr = WalletManager::open(&base_dir)
.map_err(|e| format!("Unable to open --{}: {:?}", BASE_DIR_FLAG, e))?;
for (name, _uuid) in mgr
.wallets()
.map_err(|e| format!("Unable to list wallets: {:?}", e))?
{
println!("{}", name)
}
Ok(())
}

View File

@ -0,0 +1,40 @@
pub mod create;
pub mod list;
use crate::{
common::{base_wallet_dir, ensure_dir_exists},
BASE_DIR_FLAG,
};
use clap::{App, Arg, ArgMatches};
pub const CMD: &str = "wallet";
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new(CMD)
.about("TODO")
.arg(
Arg::with_name(BASE_DIR_FLAG)
.long(BASE_DIR_FLAG)
.value_name("BASE_DIRECTORY")
.help("A path containing Eth2 EIP-2386 wallets. Defaults to ~/.lighthouse/wallets")
.takes_value(true),
)
.subcommand(create::cli_app())
.subcommand(list::cli_app())
}
pub fn cli_run(matches: &ArgMatches) -> Result<(), String> {
let base_dir = base_wallet_dir(matches, BASE_DIR_FLAG)?;
ensure_dir_exists(&base_dir)?;
match matches.subcommand() {
(create::CMD, Some(matches)) => create::cli_run(matches, base_dir),
(list::CMD, Some(_)) => list::cli_run(base_dir),
(unknown, _) => {
return Err(format!(
"{} does not have a {} command. See --help",
CMD, unknown
));
}
}
}

View File

@ -407,9 +407,11 @@ pub fn get_aggregate_attestation<T: BeaconChainTypes>(
match beacon_chain.get_aggregated_attestation(&attestation_data) {
Ok(Some(attestation)) => ResponseBuilder::new(&req)?.body(&attestation),
Ok(None) => Err(ApiError::NotFound(
"No matching aggregate attestation is known".into(),
)),
Ok(None) => Err(ApiError::NotFound(format!(
"No matching aggregate attestation for slot {:?} is known in slot {:?}",
attestation_data.slot,
beacon_chain.slot()
))),
Err(e) => Err(ApiError::ServerError(format!(
"Unable to obtain attestation: {:?}",
e

View File

@ -6,6 +6,9 @@
* [Building from Source](./become-a-validator-source.md)
* [Installation](./installation.md)
* [Docker](./docker.md)
* [Key Management](./key-managment.md)
* [Create a wallet](./wallet-create.md)
* [Create a validator](./validator-create.md)
* [Local Testnets](./local-testnets.md)
* [API](./api.md)
* [HTTP (RESTful JSON)](./http.md)

View File

@ -27,7 +27,7 @@ Since Eth2 relies upon the Eth1 chain for validator on-boarding, all Eth2 valida
We provide instructions for using Geth (the Eth1 client that, by chance, we ended up testing with), but you could use any client that implements the JSON RPC via HTTP. A fast-synced node should be sufficient.
### Installing Geth
If you're using a Mac, follow the instructions [listed here](https://github.com/ethereum/go-ethereum/wiki/Installation-Instructions-for-Mac) to install geth. Otherwise [see here](https://github.com/ethereum/go-ethereum/wiki/Installing-Geth).
### Starting Geth
@ -73,30 +73,71 @@ slot: 16835, ...
## 4. Generate your validator key
Generate new validator BLS keypairs using:
First, [create a wallet](./wallet-create) that can be used to generate
validator keys. Then, from that wallet [create a
validator](./validator-create). A two-step example follows:
### 4.1 Create a Wallet
Create a wallet with:
```bash
lighthouse account validator new random
lighthouse account wallet create --name my-validators --passphrase-file my-validators.pass
```
Take note of the `voting_pubkey` of the new validator:
The output will look like this:
```
INFO Saved new validator to disk
voting_pubkey: 0xa1625249d80...
Your wallet's 12-word BIP-39 mnemonic is:
thank beach essence clerk gun library key grape hotel wise dutch segment
This mnemonic can be used to fully restore your wallet, should
you lose the JSON file or your password.
It is very important that you DO NOT SHARE this mnemonic as it will
reveal the private keys of all validators and keys generated with
this wallet. That would be catastrophic.
It is also important to store a backup of this mnemonic so you can
recover your private keys in the case of data loss. Writing it on
a piece of paper and storing it in a safe place would be prudent.
Your wallet's UUID is:
e762671a-2a33-4922-901b-62a43dbd5227
You do not need to backup your UUID or keep it secret.
```
It's the validator's primary identifier, and will be used to find your validator in block explorers.
**Don't forget to make a backup** of the 12-word BIP-39 mnemonic. It can be
used to restore your validator in the case of data loss.
You've completed this step when you see something like the following line:
### 4.2 Create a Validator from the Wallet
```
Dec 02 21:42:01.337 INFO Generated validator directories count: 1, base_path: "/home/karl/.lighthouse/validators"
Create a validator from the wallet with:
```bash
lighthouse account validator create --wallet-name my-validators --wallet-passphrase my-validators.pass
```
This means you've successfully generated a new sub-directory for your validator in the `.lighthouse/validators` directory. The sub-directory is identified by your validator's public key (`voting_pubkey`). And is used to store your validator's deposit data, along with its voting and withdrawal keys.
The output will look like this:
```bash
1/1 0x80f3dce8d6745a725d8442c9bc3ca0852e772394b898c95c134b94979ebb0af6f898d5c5f65b71be6889185c486918a7
```
Take note of the _validator public key_ (the `0x` and the 96 characters following
it). It's the validator's primary identifier, and will be used to find your
validator in block explorers. (The `1/1` at the start indicates that this is the
first of one key generated.)
Once you've observed the validator public key, you've successfully generated a
new sub-directory for your validator in the `.lighthouse/validators` directory.
The sub-directory is identified by your validator's public key and is used to
store your validator's deposit data, along with its voting keys and other
information.
> Note: these keypairs are good enough for the Lighthouse testnet, however they shouldn't be considered secure until we've undergone a security audit (planned March/April).
## 5. Start your validator client
@ -148,7 +189,7 @@ However, since it generally takes somewhere between [4 and 8 hours](./faq.md) af
In the [next step](become-a-validator.html#2-submit-your-deposit-to-goerli) you'll need to upload your validator's deposit data. This data is stored in a file called `eth1_deposit_data.rlp`.
You'll find it in `/home/.lighthouse/validators` -- in the sub-directory that corresponds to your validator's public key (`voting_pubkey`).
You'll find it in `/home/.lighthouse/validators` -- in the sub-directory that corresponds to your validator's public key.
> For example, if your username is `karlm`, and your validator's public key (aka `voting_pubkey`) is `0x8592c7..`, then you'll find your `eth1_deposit_data.rlp` file in the following directory:
>

book/src/key-managment.md Normal file
View File

@ -0,0 +1,104 @@
# Key Management
Lighthouse uses a _hierarchical_ key management system for producing validator
keys. It is hierarchical because each validator key can be _derived_ from a
master key, making the validator keys _children_ of the master key. This
scheme means that a single 12-word mnemonic can be used to backup all of your
validator keys without providing any observable link between them (i.e., it is
privacy-retaining). Hierarchical key derivation schemes are commonplace in
cryptocurrencies; they are already used by most hardware and software wallets
to secure BTC, ETH and many other coins.
## Key Concepts
We define some terms in the context of validator key management:
- **Mnemonic**: a string of 12-words that is designed to be easy to write down
and remember. E.g., _"enemy fog enlist laundry nurse hungry discover turkey holiday resemble glad discover"_.
- Defined in BIP-39
- **Wallet**: a wallet is a JSON file which stores an
encrypted version of a mnemonic.
- Defined in EIP-2386
- **Keystore**: typically created by a wallet, it contains a single encrypted BLS
keypair.
- Defined in EIP-2335.
- **Voting Keypair**: a BLS public and private keypair which is used for
signing blocks, attestations and other messages on regular intervals,
whilst staking in Phase 0.
- **Withdrawal Keypair**: a BLS public and private keypair which will be
required _after_ Phase 0 to manage ETH once a validator has exited.
## Overview
The key management system in Lighthouse involves moving down the above list of
items, starting at one easy-to-backup mnemonic and ending with multiple
keypairs. Creating a single validator looks like this:
1. Create a **wallet** and record the **mnemonic**:
- `lighthouse account wallet create --name wally --passphrase-file wally.pass`
1. Create the voting and withdrawal **keystores** for one validator:
- `lighthouse account validator create --wallet-name wally --wallet-passphrase wally.pass`
In step (1), we created a wallet in `~/.lighthouse/wallets` with the name
`wally`. We encrypted this using a pre-defined password in the
`wally.pass` file. Then, in step (2), we created a new validator in the
`~/.lighthouse/validators` directory using `wally` (unlocking it with
`wally.pass`) and storing the password to the validator's voting key in
`~/.lighthouse/secrets`.
Thanks to the hierarchical key derivation scheme, we can delete all of the
aforementioned directories and then regenerate them as long as we remember
the 12-word mnemonic (we don't recommend doing this, though).
Creating another validator is easy: it's just a matter of repeating step (2).
The wallet keeps track of how many validators it has generated and ensures that
a new validator is generated each time.
## Detail
### Directory Structure
There are three important directories in Lighthouse validator key management:
- `wallets/`: contains encrypted wallets which are used for hierarchical
key derivation.
- Defaults to `~/.lighthouse/wallets`
- `validators/`: contains a directory for each validator containing
encrypted keystores and other validator-specific data.
- Defaults to `~/.lighthouse/validators`
- `secrets/`: since the validator signing keys are "hot", the validator process
needs access to the passwords to decrypt the keystores in the validators
dir. These passwords are stored here.
- Defaults to `~/.lighthouse/secrets`
When the validator client boots, it searches the `validators/` dir for directories
containing voting keystores. When it discovers a keystore, it searches the
`secrets/` dir for a file with the same name as the 0x-prefixed hex
representation of the keystore public key. If it finds this file, it attempts
to decrypt the keystore using the contents of this file as the password. If it
fails, it logs an error and moves on to the next keystore.
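For the curious, the lookup is roughly equivalent to the sketch below. This is illustrative only: the file name comes from the `validator_dir` crate in this change, while the `Keystore::from_json_reader` call and the error handling are assumptions; the real logic lives in the validator client.

```rust
use eth2_keystore::Keystore;
use std::fs::{read, read_dir, File};
use std::path::Path;

/// Illustrative only: decrypt each voting keystore under `validators/` using the
/// matching password file in `secrets/`.
fn decrypt_all(validators: &Path, secrets: &Path) -> Result<(), String> {
    for entry in read_dir(validators).map_err(|e| format!("{:?}", e))? {
        let dir = entry.map_err(|e| format!("{:?}", e))?.path();
        let keystore_path = dir.join("voting-keystore.json");
        if !keystore_path.is_file() {
            continue;
        }
        let keystore = File::open(&keystore_path)
            .map_err(|e| format!("{:?}", e))
            .and_then(|f| Keystore::from_json_reader(f).map_err(|e| format!("{:?}", e)))?;

        // The password file is named after the 0x-prefixed hex pubkey of the keystore.
        let password_path = secrets.join(format!("0x{}", keystore.pubkey()));
        match read(&password_path) {
            Ok(password) => match keystore.decrypt_keypair(&password) {
                Ok(_keypair) => println!("Decrypted {:?}", keystore_path),
                // Failure to decrypt is logged and the next keystore is tried.
                Err(e) => eprintln!("Unable to decrypt {:?}: {:?}", keystore_path, e),
            },
            Err(_) => eprintln!("No password file found for {:?}", keystore_path),
        }
    }
    Ok(())
}
```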
The `validators/` and `secrets/` directories are kept separate to allow for
ease-of-backup; you can safely backup `validators/` without worrying about
leaking private key data.
### Withdrawal Keypairs
In Eth2 Phase 0, withdrawal keypairs do not serve any immediate purpose.
However, they become very important _after_ Phase 0: they will provide the
ultimate control of the ETH of withdrawn validators.
This presents an interesting key management scenario: withdrawal keys are very
important, but not right now. Considering this, Lighthouse has adopted a
strategy where **we do not save withdrawal keypairs to disk by default** (it is
opt-in). Instead, we assert that since the withdrawal keys can be regenerated
from a mnemonic, having them lying around on the file-system only presents risk
and complexity.
At the time of writing, we do not expose the commands to regenerate keys from
mnemonics. However, key regeneration is tested on the public Lighthouse
repository and will be exposed prior to mainnet launch.
In summary, withdrawal keypairs can be trivially regenerated from the
mnemonic via EIP-2333, so they are not saved to disk like the voting keypairs are.

View File

@ -0,0 +1,75 @@
# Create a validator
Validators are fundamentally represented by a BLS keypair. In Lighthouse, we
use a [wallet](./wallet-create) to generate these keypairs. Once a wallet
exists, the `lighthouse account validator create` command is used to generate
the BLS keypair and all necessary information to submit a validator deposit and
have that validator operate in the `lighthouse validator_client`.
## Usage
To create a validator from a [wallet](./wallet-create), use the `lighthouse
account validator create` command:
```bash
lighthouse account validator create --help
Creates new validators from an existing EIP-2386 wallet using the EIP-2333 HD key-derivation scheme.
USAGE:
lighthouse account_manager validator create [FLAGS] [OPTIONS] --wallet-name <WALLET_NAME> --wallet-passphrase <WALLET_PASSWORD_PATH>
FLAGS:
-h, --help Prints help information
--store-withdrawal-keystore If present, the withdrawal keystore will be stored alongside the voting keypair.
It is generally recommended to *not* store the withdrawal key and instead
generate them from the wallet seed when required.
-V, --version Prints version information
OPTIONS:
--at-most <AT_MOST_VALIDATORS>
Observe the number of validators in --validator-dir, only creating enough to reach the given count. Never
deletes an existing validator.
--count <VALIDATOR_COUNT>
The number of validators to create, regardless of how many already exist
-d, --datadir <DIR> Data directory for lighthouse keys and databases.
--deposit-gwei <DEPOSIT_GWEI>
The GWEI value of the deposit amount. Defaults to the minimum amount required for an active validator
(MAX_EFFECTIVE_BALANCE)
--secrets-dir <SECRETS_DIR>
The path where the validator keystore passwords will be stored. Defaults to ~/.lighthouse/secrets
-s, --spec <TITLE>
Specifies the default eth2 spec type. [default: mainnet] [possible values: mainnet, minimal, interop]
-t, --testnet-dir <DIR>
Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
if there is no existing database.
--validator-dir <VALIDATOR_DIRECTORY>
The path where the validator directories will be created. Defaults to ~/.lighthouse/validators
--wallet-name <WALLET_NAME> Use the wallet identified by this name
--wallet-passphrase <WALLET_PASSWORD_PATH>
A path to a file containing the password which will unlock the wallet.
```
## Example
The example assumes that the `wally` wallet was generated from the
[wallet](./wallet-create) example.
```bash
lighthouse account validator create --wallet-name wally --wallet-passphrase wally.pass
```
This command will:
- Derive a new BLS keypair from `wally`, updating it so that it generates a
new key next time.
- Create a new directory in `~/.lighthouse/validators` containing:
- An encrypted keystore containing the validator's voting keypair.
- An `eth1_deposit_data.rlp` assuming the default deposit amount (`32 ETH`
for most testnets and mainnet) which can be submitted to the deposit
contract.
- Store a password to the validator's voting keypair in `~/.lighthouse/secrets`.

book/src/wallet-create.md Normal file
View File

@ -0,0 +1,72 @@
# Create a wallet
A wallet allows for generating practically unlimited validators from an
easy-to-remember 12-word string (a mnemonic). As long as that mnemonic is
backed up, all validator keys can be trivially re-generated.
The 12-word string is randomly generated during wallet creation and printed out
to the terminal. It's important to **make one or more backups of the mnemonic**
to ensure your ETH is not lost in the case of data loss. It very important to
**keep your mnemonic private** as it represents the ultimate control of your
ETH.
Whilst the wallet stores the mnemonic, it does not store it in plain-text: the
mnemonic is encrypted with a password. It is the responsibility of the user to
define a strong password. The password is only required for interacting with
the wallet; it is not required for recovering keys from a mnemonic.
## Usage
To create a wallet, use the `lighthouse account wallet` command:
```bash
lighthouse account wallet create --help
Creates a new HD (hierarchical-deterministic) EIP-2386 wallet.
USAGE:
lighthouse account_manager wallet create [OPTIONS] --name <WALLET_NAME> --passphrase-file <WALLET_PASSWORD_PATH>
FLAGS:
-h, --help Prints help information
-V, --version Prints version information
OPTIONS:
-d, --datadir <DIR> Data directory for lighthouse keys and databases.
--mnemonic-output-path <MNEMONIC_PATH>
If present, the mnemonic will be saved to this file. DO NOT SHARE THE MNEMONIC.
--name <WALLET_NAME>
The wallet will be created with this name. It is not allowed to create two wallets with the same name for
the same --base-dir.
--passphrase-file <WALLET_PASSWORD_PATH>
A path to a file containing the password which will unlock the wallet. If the file does not exist, a random
password will be generated and saved at that path. To avoid confusion, if the file does not already exist it
must include a '.pass' suffix.
-s, --spec <TITLE>
Specifies the default eth2 spec type. [default: mainnet] [possible values: mainnet, minimal, interop]
-t, --testnet-dir <DIR>
Path to directory containing eth2_testnet specs. Defaults to a hard-coded Lighthouse testnet. Only effective
if there is no existing database.
--type <WALLET_TYPE>
The type of wallet to create. Only HD (hierarchical-deterministic) wallets are supported at present.
[default: hd] [possible values: hd]
```
## Example
Creates a new wallet named `wally` with a randomly generated password saved
to `./wallet.pass`:
```bash
lighthouse account wallet create --name wally --passphrase-file wally.pass
```
> Notes:
>
> - The password is not `wally.pass`; it is the _contents_ of the
> `wally.pass` file.
> - If `wally.pass` already exists, the wallet password will be set to the contents
> of that file.

View File

@ -203,6 +203,11 @@ impl Keystore {
&self.json.path
}
/// Returns the pubkey for the keystore.
pub fn pubkey(&self) -> &str {
&self.json.pubkey
}
/// Encodes `self` as a JSON object.
pub fn to_json_string(&self) -> Result<String, Error> {
serde_json::to_string(self).map_err(|e| Error::UnableToSerialize(format!("{}", e)))

View File

@ -6,6 +6,6 @@ pub mod json_wallet;
pub use bip39;
pub use validator_path::{KeyType, ValidatorPath, COIN_TYPE, PURPOSE};
pub use wallet::{
recover_validator_secret, DerivedKey, Error, KeystoreError, PlainText, ValidatorKeystores,
Wallet, WalletBuilder,
recover_validator_secret, DerivedKey, Error, KeystoreError, PlainText, Uuid,
ValidatorKeystores, Wallet, WalletBuilder,
};

View File

@ -12,11 +12,11 @@ use eth2_keystore::{
use rand::prelude::*;
use serde::{Deserialize, Serialize};
use std::io::{Read, Write};
use uuid::Uuid;
pub use bip39::{Mnemonic, Seed as Bip39Seed};
pub use eth2_key_derivation::DerivedKey;
pub use eth2_keystore::{Error as KeystoreError, PlainText};
pub use uuid::Uuid;
#[derive(Debug, PartialEq)]
pub enum Error {
@ -112,7 +112,7 @@ impl<'a> WalletBuilder<'a> {
}
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
#[derive(Debug, PartialEq, Serialize, Deserialize)]
#[serde(transparent)]
pub struct Wallet {
json: JsonWallet,

View File

@ -0,0 +1,14 @@
[package]
name = "eth2_wallet_manager"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
eth2_keystore = { path = "../eth2_keystore" }
eth2_wallet = { path = "../eth2_wallet" }
[dev-dependencies]
tempfile = "3.1.0"

View File

@ -0,0 +1,97 @@
//! Provides some CRUD functions for wallets on the filesystem.
use eth2_wallet::Error as WalletError;
use eth2_wallet::{Uuid, Wallet};
use std::fs::{copy as copy_file, remove_file, OpenOptions};
use std::io;
use std::path::{Path, PathBuf};
#[derive(Debug)]
pub enum Error {
WalletAlreadyExists(PathBuf),
WalletDoesNotExist(PathBuf),
WalletBackupAlreadyExists(PathBuf),
UnableToCreateBackup(io::Error),
UnableToRemoveBackup(io::Error),
UnableToRemoveWallet(io::Error),
UnableToCreateWallet(io::Error),
UnableToReadWallet(io::Error),
JsonWriteError(WalletError),
JsonReadError(WalletError),
}
/// Read a wallet with the given `uuid` from the `wallet_dir`.
pub fn read<P: AsRef<Path>>(wallet_dir: P, uuid: &Uuid) -> Result<Wallet, Error> {
let json_path = wallet_json_path(wallet_dir, uuid);
if !json_path.exists() {
Err(Error::WalletDoesNotExist(json_path))
} else {
OpenOptions::new()
.read(true)
.create(false)
.open(json_path)
.map_err(Error::UnableToReadWallet)
.and_then(|f| Wallet::from_json_reader(f).map_err(Error::JsonReadError))
}
}
/// Update the JSON file in the `wallet_dir` with the given `wallet`.
///
/// Performs a three-step copy:
///
/// 1. Copy the current JSON file to a backup file.
/// 2. Over-write the existing JSON file.
/// 3. Delete the backup file.
pub fn update<P: AsRef<Path>>(wallet_dir: P, wallet: &Wallet) -> Result<(), Error> {
let wallet_dir = wallet_dir.as_ref();
let json_path = wallet_json_path(wallet_dir, wallet.uuid());
let json_backup_path = wallet_json_backup_path(wallet_dir, wallet.uuid());
// Require that a wallet already exists.
if !json_path.exists() {
return Err(Error::WalletDoesNotExist(json_path));
// Require that there is no existing backup.
} else if json_backup_path.exists() {
return Err(Error::WalletBackupAlreadyExists(json_backup_path));
}
// Copy the existing wallet to the backup location.
copy_file(&json_path, &json_backup_path).map_err(Error::UnableToCreateBackup)?;
// Remove the existing wallet
remove_file(json_path).map_err(Error::UnableToRemoveWallet)?;
// Create the new wallet.
create(wallet_dir, wallet)?;
// Remove the backup file.
remove_file(json_backup_path).map_err(Error::UnableToRemoveBackup)?;
Ok(())
}
/// Writes the `wallet` into the `wallet_dir`, returning an error if it already exists.
pub fn create<P: AsRef<Path>>(wallet_dir: P, wallet: &Wallet) -> Result<(), Error> {
let json_path = wallet_json_path(wallet_dir, wallet.uuid());
if json_path.exists() {
Err(Error::WalletAlreadyExists(json_path))
} else {
OpenOptions::new()
.write(true)
.create_new(true)
.open(json_path)
.map_err(Error::UnableToCreateWallet)
.and_then(|f| wallet.to_json_writer(f).map_err(Error::JsonWriteError))
}
}
fn wallet_json_backup_path<P: AsRef<Path>>(wallet_dir: P, uuid: &Uuid) -> PathBuf {
wallet_dir.as_ref().join(format!("{}.backup", uuid))
}
fn wallet_json_path<P: AsRef<Path>>(wallet_dir: P, uuid: &Uuid) -> PathBuf {
wallet_dir.as_ref().join(format!("{}", uuid))
}
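A hedged usage sketch of these helpers (the wallet name and password are made up for the example; the builder calls mirror those in `WalletManager::create_wallet`):

```rust
use eth2_wallet::bip39::{Language, Mnemonic, MnemonicType};
use eth2_wallet::WalletBuilder;
use std::path::Path;

/// Illustrative only: round-trip a wallet through `create`, `read` and `update`.
fn example(wallet_dir: &Path) -> Result<(), String> {
    // Build an in-memory wallet from a fresh 12-word mnemonic.
    let mnemonic = Mnemonic::new(MnemonicType::Words12, Language::English);
    let wallet = WalletBuilder::from_mnemonic(&mnemonic, b"password", "demo".to_string())
        .and_then(|builder| builder.build())
        .map_err(|e| format!("builder: {:?}", e))?;

    // Persist it, then read it back by UUID.
    create(wallet_dir, &wallet).map_err(|e| format!("create: {:?}", e))?;
    let on_disk = read(wallet_dir, wallet.uuid()).map_err(|e| format!("read: {:?}", e))?;
    assert_eq!(on_disk.uuid(), wallet.uuid());

    // After something like `Wallet::next_validator` mutates the wallet, `update`
    // writes the new JSON via the backup-copy-delete steps documented above.
    update(wallet_dir, &wallet).map_err(|e| format!("update: {:?}", e))?;
    Ok(())
}
```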

View File

@ -0,0 +1,6 @@
mod filesystem;
mod locked_wallet;
mod wallet_manager;
pub use locked_wallet::LockedWallet;
pub use wallet_manager::{Error, WalletManager, WalletType};

View File

@ -0,0 +1,111 @@
use crate::{
filesystem::{read, update},
Error,
};
use eth2_wallet::{Uuid, ValidatorKeystores, Wallet};
use std::fs::{remove_file, OpenOptions};
use std::path::{Path, PathBuf};
pub const LOCK_FILE: &str = ".lock";
/// Represents a `Wallet` in a `wallet_dir`.
///
/// For example:
///
/// ```ignore
/// <wallet_dir>
/// └── .lock
/// └── <wallet-json>
/// ```
///
/// Provides the following functionality:
///
/// - Control over the `.lock` file to prevent concurrent access.
/// - A `next_validator` function which wraps `Wallet::next_validator`, ensuring that the wallet is
/// persisted to disk (as JSON) between each consecutive call.
pub struct LockedWallet {
wallet_dir: PathBuf,
wallet: Wallet,
}
impl LockedWallet {
/// Opens a wallet with the `uuid` from a `base_dir`.
///
/// ```ignore
/// <base-dir>
/// ├── <uuid (directory)>
///    └── <uuid (json file)>
/// ```
///
/// ## Errors
///
/// - If the wallet does not exist.
/// - If there is a file-system or parsing error.
/// - If the lock-file already exists.
pub(crate) fn open<P: AsRef<Path>>(base_dir: P, uuid: &Uuid) -> Result<Self, Error> {
let wallet_dir = base_dir.as_ref().join(format!("{}", uuid));
if !wallet_dir.exists() {
return Err(Error::MissingWalletDir(wallet_dir));
}
let lockfile = wallet_dir.join(LOCK_FILE);
if lockfile.exists() {
return Err(Error::WalletIsLocked(wallet_dir));
} else {
OpenOptions::new()
.write(true)
.create_new(true)
.open(lockfile)
.map_err(Error::UnableToCreateLockfile)?;
}
Ok(Self {
wallet: read(&wallet_dir, uuid)?,
wallet_dir,
})
}
/// Returns a reference to the underlying wallet.
///
/// Note: this does not read from the file-system on each call. It assumes that the wallet does
/// not change due to the use of a lock-file.
pub fn wallet(&self) -> &Wallet {
&self.wallet
}
/// Calls `Wallet::next_validator` on the underlying `wallet`.
///
/// Ensures that the wallet JSON file is updated after each call.
///
/// ## Errors
///
/// - If there is an error generating the validator keys.
/// - If there is a file-system error.
pub fn next_validator(
&mut self,
wallet_password: &[u8],
voting_keystore_password: &[u8],
withdrawal_keystore_password: &[u8],
) -> Result<ValidatorKeystores, Error> {
let keystores = self.wallet.next_validator(
wallet_password,
voting_keystore_password,
withdrawal_keystore_password,
)?;
update(&self.wallet_dir, &self.wallet)?;
Ok(keystores)
}
}
impl Drop for LockedWallet {
/// Clean-up the lockfile.
fn drop(&mut self) {
let lockfile = self.wallet_dir.clone().join(LOCK_FILE);
if let Err(e) = remove_file(&lockfile) {
eprintln!("Unable to remove {:?}: {:?}", lockfile, e);
}
}
}
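A hedged sketch of the locking behaviour from a caller's perspective (`LockedWallet::open` is crate-private, so the public entry point is `WalletManager::wallet_by_name`; the wallet name and passwords below are illustrative and must match the ones used when the wallet was created):

```rust
use eth2_wallet_manager::{Error, WalletManager};
use std::path::Path;

fn example(base_dir: &Path) -> Result<(), Error> {
    let mgr = WalletManager::open(base_dir)?;

    // Taking the wallet creates `<base_dir>/<uuid>/.lock`.
    let mut locked = mgr.wallet_by_name("wally")?;

    // A second handle to the same wallet is refused while the lock-file exists.
    match mgr.wallet_by_name("wally") {
        Err(Error::WalletIsLocked(_)) => (),
        _ => panic!("expected the wallet to be locked"),
    }

    // Each call persists the incremented `nextaccount` back to disk.
    let _keystores = locked.next_validator(b"wallet-pass", b"voting-pass", b"withdrawal-pass")?;

    // Dropping the handle removes the lock-file again.
    drop(locked);
    Ok(())
}
```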

View File

@ -0,0 +1,380 @@
use crate::{
filesystem::{create, Error as FilesystemError},
LockedWallet,
};
use eth2_wallet::{bip39::Mnemonic, Error as WalletError, Uuid, Wallet, WalletBuilder};
use std::collections::HashMap;
use std::ffi::OsString;
use std::fs::{create_dir_all, read_dir, OpenOptions};
use std::io;
use std::path::{Path, PathBuf};
#[derive(Debug)]
pub enum Error {
DirectoryDoesNotExist(PathBuf),
WalletError(WalletError),
FilesystemError(FilesystemError),
UnableToReadDir(io::Error),
UnableToReadWallet(io::Error),
UnableToReadFilename(OsString),
NameAlreadyTaken(String),
WalletNameUnknown(String),
WalletDirExists(PathBuf),
IoError(io::Error),
WalletIsLocked(PathBuf),
MissingWalletDir(PathBuf),
UnableToCreateLockfile(io::Error),
UuidMismatch((Uuid, Uuid)),
}
impl From<io::Error> for Error {
fn from(e: io::Error) -> Error {
Error::IoError(e)
}
}
impl From<WalletError> for Error {
fn from(e: WalletError) -> Error {
Error::WalletError(e)
}
}
impl From<FilesystemError> for Error {
fn from(e: FilesystemError) -> Error {
Error::FilesystemError(e)
}
}
/// Defines the type of an EIP-2386 wallet.
///
/// Presently only `Hd` wallets are supported.
pub enum WalletType {
/// Hierarchical-deterministic.
Hd,
}
/// Manages a directory containing EIP-2386 wallets.
///
/// Each wallet is stored in a directory with the name of the wallet UUID. Inside each directory an
/// EIP-2386 JSON wallet is also stored using the UUID as the filename.
///
/// In each wallet directory an optional `.lock` exists to prevent concurrent reads and writes from
/// the same wallet.
///
/// Example:
///
/// ```ignore
/// wallets
/// ├── 35c07717-c6f3-45e8-976f-ef5d267e86c9
/// │   └── 35c07717-c6f3-45e8-976f-ef5d267e86c9
/// └── 747ad9dc-e1a1-4804-ada4-0dc124e46c49
/// └── .lock
/// └── 747ad9dc-e1a1-4804-ada4-0dc124e46c49
/// ```
pub struct WalletManager {
dir: PathBuf,
}
impl WalletManager {
/// Open a directory containing multiple wallets.
///
/// Pass the `wallets` directory as `dir` (see struct-level example).
pub fn open<P: AsRef<Path>>(dir: P) -> Result<Self, Error> {
let dir: PathBuf = dir.as_ref().into();
if dir.exists() {
Ok(Self { dir })
} else {
Err(Error::DirectoryDoesNotExist(dir))
}
}
/// Searches all wallets in `self.dir` and returns the wallet with this name.
///
/// ## Errors
///
/// - If there is no wallet with this name.
/// - If there is a file-system or parsing error.
pub fn wallet_by_name(&self, name: &str) -> Result<LockedWallet, Error> {
LockedWallet::open(
self.dir.clone(),
self.wallets()?
.get(name)
.ok_or_else(|| Error::WalletNameUnknown(name.into()))?,
)
}
/// Creates a new wallet with the given `name` in `self.dir` with the given `mnemonic` as a
/// seed, encrypted with `password`.
///
/// ## Errors
///
/// - If a wallet with this name already exists.
/// - If there is a file-system or parsing error.
pub fn create_wallet(
&self,
name: String,
_wallet_type: WalletType,
mnemonic: &Mnemonic,
password: &[u8],
) -> Result<LockedWallet, Error> {
if self.wallets()?.contains_key(&name) {
return Err(Error::NameAlreadyTaken(name));
}
let wallet = WalletBuilder::from_mnemonic(mnemonic, password, name)?.build()?;
let uuid = wallet.uuid().clone();
let wallet_dir = self.dir.join(format!("{}", uuid));
if wallet_dir.exists() {
return Err(Error::WalletDirExists(wallet_dir));
}
create_dir_all(&wallet_dir)?;
create(&wallet_dir, &wallet)?;
drop(wallet);
LockedWallet::open(&self.dir, &uuid)
}
/// Iterates all wallets in `self.dir` and returns a mapping of their name to their UUID.
///
/// Ignores any items in `self.dir` that:
///
/// - Are files.
/// - Are directories, but their file-name does not parse as a UUID.
///
/// This function is fairly strict; it will fail if any directory is found that does not obey
/// the expected structure (e.g., there is a UUID directory that does not contain a valid JSON
/// keystore with the same UUID).
pub fn wallets(&self) -> Result<HashMap<String, Uuid>, Error> {
let mut wallets = HashMap::new();
for f in read_dir(&self.dir).map_err(Error::UnableToReadDir)? {
let f = f?;
// Ignore any non-directory objects in the root wallet dir.
if f.file_type()?.is_dir() {
let file_name = f
.file_name()
.into_string()
.map_err(Error::UnableToReadFilename)?;
// Ignore any paths that don't parse as a UUID.
if let Ok(uuid) = Uuid::parse_str(&file_name) {
let wallet_path = f.path().join(format!("{}", uuid));
let wallet = OpenOptions::new()
.read(true)
.create(false)
.open(wallet_path)
.map_err(Error::UnableToReadWallet)
.and_then(|f| Wallet::from_json_reader(f).map_err(Error::WalletError))?;
if *wallet.uuid() != uuid {
return Err(Error::UuidMismatch((uuid, *wallet.uuid())));
}
wallets.insert(wallet.name().into(), *wallet.uuid());
}
}
}
Ok(wallets)
}
}
#[cfg(test)]
// These tests are very slow in debug, only test in release.
#[cfg(not(debug_assertions))]
mod tests {
use super::*;
use crate::{filesystem::read, locked_wallet::LOCK_FILE};
use eth2_wallet::bip39::{Language, Mnemonic};
use tempfile::tempdir;
const MNEMONIC: &str =
"enemy fog enlist laundry nurse hungry discover turkey holiday resemble glad discover";
const WALLET_PASSWORD: &[u8] = &[43; 43];
fn get_mnemonic() -> Mnemonic {
Mnemonic::from_phrase(MNEMONIC, Language::English).unwrap()
}
fn create_wallet(mgr: &WalletManager, id: usize) -> LockedWallet {
let wallet = mgr
.create_wallet(
format!("{}", id),
WalletType::Hd,
&get_mnemonic(),
WALLET_PASSWORD,
)
.expect("should create wallet");
assert!(
wallet_dir_path(&mgr.dir, wallet.wallet().uuid()).exists(),
"should have created wallet dir"
);
assert!(
json_path(&mgr.dir, wallet.wallet().uuid()).exists(),
"should have created json file"
);
assert!(
lockfile_path(&mgr.dir, wallet.wallet().uuid()).exists(),
"should have created lockfile"
);
wallet
}
fn load_wallet_raw<P: AsRef<Path>>(base_dir: P, uuid: &Uuid) -> Wallet {
read(wallet_dir_path(base_dir, uuid), uuid).expect("should load raw json")
}
fn wallet_dir_path<P: AsRef<Path>>(base_dir: P, uuid: &Uuid) -> PathBuf {
let s = format!("{}", uuid);
base_dir.as_ref().join(&s)
}
fn lockfile_path<P: AsRef<Path>>(base_dir: P, uuid: &Uuid) -> PathBuf {
let s = format!("{}", uuid);
base_dir.as_ref().join(&s).join(LOCK_FILE)
}
fn json_path<P: AsRef<Path>>(base_dir: P, uuid: &Uuid) -> PathBuf {
let s = format!("{}", uuid);
base_dir.as_ref().join(&s).join(&s)
}
#[test]
fn duplicate_names() {
let dir = tempdir().unwrap();
let base_dir = dir.path();
let mgr = WalletManager::open(base_dir).unwrap();
let name = "cats".to_string();
mgr.create_wallet(
name.clone(),
WalletType::Hd,
&get_mnemonic(),
WALLET_PASSWORD,
)
.expect("should create first wallet");
match mgr.create_wallet(
name.clone(),
WalletType::Hd,
&get_mnemonic(),
WALLET_PASSWORD,
) {
Err(Error::NameAlreadyTaken(_)) => {}
_ => panic!("expected name error"),
}
}
#[test]
fn keystore_generation() {
let dir = tempdir().unwrap();
let base_dir = dir.path();
let mgr = WalletManager::open(base_dir).unwrap();
let name = "cats".to_string();
let mut w = mgr
.create_wallet(
name.clone(),
WalletType::Hd,
&get_mnemonic(),
WALLET_PASSWORD,
)
.expect("should create first wallet");
let uuid = w.wallet().uuid().clone();
assert_eq!(
load_wallet_raw(&base_dir, &uuid).nextaccount(),
0,
"should start wallet with nextaccount 0"
);
for i in 1..3 {
w.next_validator(WALLET_PASSWORD, &[1], &[0])
.expect("should create validator");
assert_eq!(
load_wallet_raw(&base_dir, &uuid).nextaccount(),
i,
"should update wallet with nextaccount {}",
i
);
}
drop(w);
// Check that we can open the wallet by name.
let by_name = mgr.wallet_by_name(&name).unwrap();
assert_eq!(by_name.wallet().name(), name);
drop(by_name);
let wallets = mgr.wallets().unwrap().into_iter().collect::<Vec<_>>();
assert_eq!(wallets, vec![(name, uuid)]);
}
#[test]
fn locked_wallet_lockfile() {
let dir = tempdir().unwrap();
let base_dir = dir.path();
let mgr = WalletManager::open(base_dir).unwrap();
let uuid_a = create_wallet(&mgr, 0).wallet().uuid().clone();
let uuid_b = create_wallet(&mgr, 1).wallet().uuid().clone();
let locked_a = LockedWallet::open(&base_dir, &uuid_a).expect("should open wallet a");
assert!(
lockfile_path(&base_dir, &uuid_a).exists(),
"lockfile should exist"
);
drop(locked_a);
assert!(
!lockfile_path(&base_dir, &uuid_a).exists(),
"lockfile have been cleaned up"
);
let locked_a = LockedWallet::open(&base_dir, &uuid_a).expect("should open wallet a");
let locked_b = LockedWallet::open(&base_dir, &uuid_b).expect("should open wallet b");
assert!(
lockfile_path(&base_dir, &uuid_a).exists(),
"lockfile a should exist"
);
assert!(
lockfile_path(&base_dir, &uuid_b).exists(),
"lockfile b should exist"
);
match LockedWallet::open(&base_dir, &uuid_a) {
Err(Error::WalletIsLocked(_)) => {}
_ => panic!("did not get locked error"),
};
drop(locked_a);
LockedWallet::open(&base_dir, &uuid_a)
.expect("should open wallet a after previous instance is dropped");
match LockedWallet::open(&base_dir, &uuid_b) {
Err(Error::WalletIsLocked(_)) => {}
_ => panic!("did not get locked error"),
};
drop(locked_b);
LockedWallet::open(&base_dir, &uuid_b)
.expect("should open wallet a after previous instance is dropped");
}
}

View File

@ -0,0 +1,26 @@
[package]
name = "validator_dir"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = "2018"
[features]
unencrypted_keys = []
insecure_keys = []
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
eth2_wallet = { path = "../eth2_wallet" }
bls = { path = "../bls" }
eth2_keystore = { path = "../eth2_keystore" }
types = { path = "../../types" }
rand = "0.7.2"
deposit_contract = { path = "../deposit_contract" }
eth2_ssz = { path = "../ssz" }
eth2_ssz_derive = { path = "../ssz_derive" }
rayon = "1.3.0"
tree_hash = { path = "../tree_hash" }
[dev-dependencies]
tempfile = "3.1.0"

View File

@ -0,0 +1,286 @@
use crate::{Error as DirError, ValidatorDir};
use bls::get_withdrawal_credentials;
use deposit_contract::{encode_eth1_tx_data, Error as DepositError};
use eth2_keystore::{Error as KeystoreError, Keystore, KeystoreBuilder, PlainText};
use rand::{distributions::Alphanumeric, Rng};
use std::fs::{create_dir_all, File, OpenOptions};
use std::io::{self, Write};
use std::os::unix::fs::PermissionsExt;
use std::path::{Path, PathBuf};
use types::{ChainSpec, DepositData, Hash256, Keypair, Signature};
/// The `Alphanumeric` distribution only generates a-z, A-Z, 0-9, therefore it has a range of 62
/// characters.
///
/// 62**48 is greater than 256**32, therefore this password has more bits of entropy than a byte
/// array of length 32.
const DEFAULT_PASSWORD_LEN: usize = 48;
pub const VOTING_KEYSTORE_FILE: &str = "voting-keystore.json";
pub const WITHDRAWAL_KEYSTORE_FILE: &str = "withdrawal-keystore.json";
pub const ETH1_DEPOSIT_DATA_FILE: &str = "eth1-deposit-data.rlp";
pub const ETH1_DEPOSIT_AMOUNT_FILE: &str = "eth1-deposit-gwei.txt";
#[derive(Debug)]
pub enum Error {
DirectoryAlreadyExists(PathBuf),
UnableToCreateDir(io::Error),
UnableToEncodeDeposit(DepositError),
DepositDataAlreadyExists(PathBuf),
UnableToSaveDepositData(io::Error),
DepositAmountAlreadyExists(PathBuf),
UnableToSaveDepositAmount(io::Error),
KeystoreAlreadyExists(PathBuf),
UnableToSaveKeystore(io::Error),
PasswordAlreadyExists(PathBuf),
UnableToSavePassword(io::Error),
KeystoreError(KeystoreError),
UnableToOpenDir(DirError),
#[cfg(feature = "insecure_keys")]
InsecureKeysError(String),
}
impl From<KeystoreError> for Error {
fn from(e: KeystoreError) -> Error {
Error::KeystoreError(e)
}
}
/// A builder for creating a `ValidatorDir`.
pub struct Builder<'a> {
base_validators_dir: PathBuf,
password_dir: PathBuf,
pub(crate) voting_keystore: Option<(Keystore, PlainText)>,
pub(crate) withdrawal_keystore: Option<(Keystore, PlainText)>,
store_withdrawal_keystore: bool,
deposit_info: Option<(u64, &'a ChainSpec)>,
}
impl<'a> Builder<'a> {
/// Instantiate a new builder.
pub fn new(base_validators_dir: PathBuf, password_dir: PathBuf) -> Self {
Self {
base_validators_dir,
password_dir,
voting_keystore: None,
withdrawal_keystore: None,
store_withdrawal_keystore: true,
deposit_info: None,
}
}
/// Build the `ValidatorDir` using the given `keystore` which can be unlocked with `password`.
///
/// If this argument (or equivalent key specification argument) is not supplied a keystore will
/// be randomly generated.
pub fn voting_keystore(mut self, keystore: Keystore, password: &[u8]) -> Self {
self.voting_keystore = Some((keystore, password.to_vec().into()));
self
}
/// Build the `ValidatorDir` using the given `keystore` which can be unlocked with `password`.
///
/// If this argument (or equivalent key specification argument) is not supplied a keystore will
/// be randomly generated.
pub fn withdrawal_keystore(mut self, keystore: Keystore, password: &[u8]) -> Self {
self.withdrawal_keystore = Some((keystore, password.to_vec().into()));
self
}
/// Upon build, create files in the `ValidatorDir` which will permit the submission of a
/// deposit to the eth1 deposit contract with the given `deposit_amount`.
pub fn create_eth1_tx_data(mut self, deposit_amount: u64, spec: &'a ChainSpec) -> Self {
self.deposit_info = Some((deposit_amount, spec));
self
}
/// If `should_store == true`, the withdrawal keystore will be saved in the `ValidatorDir` (and
/// the password to it stored in the `password_dir`). If `should_store == false`, the
/// withdrawal keystore will be dropped after `Self::build`.
///
/// ## Notes
///
/// If `should_store == false`, it is important to ensure that the withdrawal keystore is
/// backed up. Backup can be via saving the files elsewhere, or in the case of HD key
/// derivation, ensuring the seed and path are known.
///
/// If the builder is not specifically given a withdrawal keystore then one will be generated
/// randomly. When this random keystore is generated, calls to this function are ignored and
/// the withdrawal keystore is *always* stored to disk. This is to prevent data loss.
pub fn store_withdrawal_keystore(mut self, should_store: bool) -> Self {
self.store_withdrawal_keystore = should_store;
self
}
/// Consumes `self`, returning a `ValidatorDir` if no error is encountered.
pub fn build(mut self) -> Result<ValidatorDir, Error> {
// If the withdrawal keystore will be generated randomly, always store it.
if self.withdrawal_keystore.is_none() {
self.store_withdrawal_keystore = true;
}
// Attempts to get `self.$keystore`, unwrapping it into a random keystore if it is `None`.
// Then, decrypts the keypair from the keystore.
macro_rules! expand_keystore {
($keystore: ident) => {
self.$keystore
.map(Result::Ok)
.unwrap_or_else(random_keystore)
.and_then(|(keystore, password)| {
keystore
.decrypt_keypair(password.as_bytes())
.map(|keypair| (keystore, password, keypair))
.map_err(Into::into)
})?;
};
}
let (voting_keystore, voting_password, voting_keypair) = expand_keystore!(voting_keystore);
let (withdrawal_keystore, withdrawal_password, withdrawal_keypair) =
expand_keystore!(withdrawal_keystore);
let dir = self
.base_validators_dir
.join(format!("0x{}", voting_keystore.pubkey()));
if dir.exists() {
return Err(Error::DirectoryAlreadyExists(dir));
} else {
create_dir_all(&dir).map_err(Error::UnableToCreateDir)?;
}
if let Some((amount, spec)) = self.deposit_info {
let withdrawal_credentials = Hash256::from_slice(&get_withdrawal_credentials(
&withdrawal_keypair.pk,
spec.bls_withdrawal_prefix_byte,
));
let mut deposit_data = DepositData {
pubkey: voting_keypair.pk.clone().into(),
withdrawal_credentials,
amount,
signature: Signature::empty_signature().into(),
};
deposit_data.signature = deposit_data.create_signature(&voting_keypair.sk, &spec);
let deposit_data =
encode_eth1_tx_data(&deposit_data).map_err(Error::UnableToEncodeDeposit)?;
// Save `ETH1_DEPOSIT_DATA_FILE` to file.
//
// This allows us to know the RLP data for the eth1 transaction without needing to know
// the withdrawal/voting keypairs again at a later date.
let path = dir.clone().join(ETH1_DEPOSIT_DATA_FILE);
if path.exists() {
return Err(Error::DepositDataAlreadyExists(path));
} else {
OpenOptions::new()
.write(true)
.read(true)
.create(true)
.open(path.clone())
.map_err(Error::UnableToSaveDepositData)?
.write_all(&deposit_data)
.map_err(Error::UnableToSaveDepositData)?
}
// Save `ETH1_DEPOSIT_AMOUNT_FILE` to file.
//
// This allows us to know the intended deposit amount at a later date.
let path = dir.clone().join(ETH1_DEPOSIT_AMOUNT_FILE);
if path.exists() {
return Err(Error::DepositAmountAlreadyExists(path));
} else {
OpenOptions::new()
.write(true)
.read(true)
.create(true)
.open(path.clone())
.map_err(Error::UnableToSaveDepositAmount)?
.write_all(format!("{}", amount).as_bytes())
.map_err(Error::UnableToSaveDepositAmount)?
}
}
write_password_to_file(
self.password_dir
.clone()
.join(voting_keypair.pk.as_hex_string()),
voting_password.as_bytes(),
)?;
write_keystore_to_file(dir.clone().join(VOTING_KEYSTORE_FILE), &voting_keystore)?;
if self.store_withdrawal_keystore {
write_password_to_file(
self.password_dir
.clone()
.join(withdrawal_keypair.pk.as_hex_string()),
withdrawal_password.as_bytes(),
)?;
write_keystore_to_file(
dir.clone().join(WITHDRAWAL_KEYSTORE_FILE),
&withdrawal_keystore,
)?;
}
ValidatorDir::open(dir).map_err(Error::UnableToOpenDir)
}
}
/// Writes a JSON keystore to file.
fn write_keystore_to_file(path: PathBuf, keystore: &Keystore) -> Result<(), Error> {
if path.exists() {
Err(Error::KeystoreAlreadyExists(path))
} else {
let file = OpenOptions::new()
.write(true)
.read(true)
.create_new(true)
.open(path.clone())
.map_err(Error::UnableToSaveKeystore)?;
keystore.to_json_writer(file).map_err(Into::into)
}
}
/// Creates a file with `600 (-rw-------)` permissions.
pub fn write_password_to_file<P: AsRef<Path>>(path: P, bytes: &[u8]) -> Result<(), Error> {
let path = path.as_ref();
if path.exists() {
return Err(Error::PasswordAlreadyExists(path.into()));
}
let mut file = File::create(&path).map_err(Error::UnableToSavePassword)?;
let mut perm = file
.metadata()
.map_err(Error::UnableToSavePassword)?
.permissions();
perm.set_mode(0o600);
file.set_permissions(perm)
.map_err(Error::UnableToSavePassword)?;
file.write_all(bytes).map_err(Error::UnableToSavePassword)?;
Ok(())
}
/// Generates a random keystore with a random password.
fn random_keystore() -> Result<(Keystore, PlainText), Error> {
let keypair = Keypair::random();
let password: PlainText = rand::thread_rng()
.sample_iter(&Alphanumeric)
.take(DEFAULT_PASSWORD_LEN)
.collect::<String>()
.into_bytes()
.into();
let keystore = KeystoreBuilder::new(&keypair, password.as_bytes(), "".into())?.build()?;
Ok((keystore, password))
}
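A hedged usage sketch of the builder (the directory paths are illustrative, and `MainnetEthSpec::default_spec()` from the `types` crate is assumed as a convenient way to obtain a `ChainSpec`):

```rust
use std::path::PathBuf;
use types::{EthSpec, MainnetEthSpec};

/// Illustrative only: create a validator directory with random keystores and
/// eth1 deposit data for a 32 ETH deposit.
fn example() -> Result<(), Error> {
    let spec = MainnetEthSpec::default_spec();

    // With no keystores supplied, both keystores are generated randomly and, for
    // random keys, the withdrawal keystore is always written to disk.
    let validator_dir = Builder::new(
        PathBuf::from("/tmp/validators"), // illustrative paths
        PathBuf::from("/tmp/secrets"),
    )
    .create_eth1_tx_data(32_000_000_000, &spec)
    .build()?;

    println!("Created {:?}", validator_dir.dir());
    Ok(())
}
```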

View File

@ -0,0 +1,67 @@
//! These features exist to allow for generating deterministic, well-known, unsafe keys for use in
//! testing.
//!
//! **NEVER** use these keys in production!
#![cfg(feature = "insecure_keys")]
use crate::{Builder, BuilderError};
use eth2_keystore::{Keystore, KeystoreBuilder, PlainText};
use std::path::PathBuf;
use types::test_utils::generate_deterministic_keypair;
/// A very weak password with which to encrypt the keystores.
pub const INSECURE_PASSWORD: &[u8] = &[30; 32];
impl<'a> Builder<'a> {
/// Generate the voting and withdrawal keystores using deterministic, well-known, **unsafe**
/// keypairs.
///
/// **NEVER** use these keys in production!
pub fn insecure_keys(mut self, deterministic_key_index: usize) -> Result<Self, BuilderError> {
self.voting_keystore = Some(
generate_deterministic_keystore(deterministic_key_index)
.map_err(BuilderError::InsecureKeysError)?,
);
self.withdrawal_keystore = Some(
generate_deterministic_keystore(deterministic_key_index)
.map_err(BuilderError::InsecureKeysError)?,
);
Ok(self)
}
}
/// Generate a keystore, encrypted with `INSECURE_PASSWORD` using a deterministic, well-known,
/// **unsafe** secret key.
///
/// **NEVER** use these keys in production!
pub fn generate_deterministic_keystore(i: usize) -> Result<(Keystore, PlainText), String> {
let keypair = generate_deterministic_keypair(i);
let keystore = KeystoreBuilder::new(&keypair, INSECURE_PASSWORD, "".into())
.map_err(|e| format!("Unable to create keystore builder: {:?}", e))?
.build()
.map_err(|e| format!("Unable to build keystore: {:?}", e))?;
Ok((keystore, INSECURE_PASSWORD.to_vec().into()))
}
/// A helper function to use the `Builder` to generate deterministic, well-known, **unsafe**
/// validator directories for the given validator `indices`.
///
/// **NEVER** use these keys in production!
pub fn build_deterministic_validator_dirs(
validators_dir: PathBuf,
password_dir: PathBuf,
indices: &[usize],
) -> Result<(), String> {
for &i in indices {
Builder::new(validators_dir.clone(), password_dir.clone())
.insecure_keys(i)
.map_err(|e| format!("Unable to generate insecure keypair: {:?}", e))?
.store_withdrawal_keystore(false)
.build()
.map_err(|e| format!("Unable to build keystore: {:?}", e))?;
}
Ok(())
}
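For example, a test might populate a few throw-away validator directories like so (`tempfile` is already a dev-dependency of this crate; everything here is for testing only):

```rust
use tempfile::tempdir;

#[test]
fn builds_insecure_dirs() {
    let validators = tempdir().unwrap();
    let secrets = tempdir().unwrap();

    // Validators 0..=3 with the well-known, deterministic (and unsafe!) keys.
    build_deterministic_validator_dirs(
        validators.path().into(),
        secrets.path().into(),
        &[0, 1, 2, 3],
    )
    .expect("should build insecure validator dirs");
}
```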

View File

@ -0,0 +1,21 @@
//! Provides:
//!
//! - `ValidatorDir`: manages a directory containing validator keypairs, deposit info and other
//! things.
//! - `Manager`: manages a directory that contains multiple `ValidatorDir`.
//!
//! This crate is intended to be used by the account manager to create validators and the validator
//! client to load those validators.
mod builder;
pub mod insecure_keys;
mod manager;
pub mod unencrypted_keys;
mod validator_dir;
pub use crate::validator_dir::{Error, Eth1DepositData, ValidatorDir, ETH1_DEPOSIT_TX_HASH_FILE};
pub use builder::{
Builder, Error as BuilderError, ETH1_DEPOSIT_DATA_FILE, VOTING_KEYSTORE_FILE,
WITHDRAWAL_KEYSTORE_FILE,
};
pub use manager::{Error as ManagerError, Manager};

View File

@ -0,0 +1,115 @@
use crate::{Error as ValidatorDirError, ValidatorDir};
use bls::Keypair;
use rayon::prelude::*;
use std::collections::HashMap;
use std::fs::read_dir;
use std::io;
use std::iter::FromIterator;
use std::path::{Path, PathBuf};
#[derive(Debug)]
pub enum Error {
DirectoryDoesNotExist(PathBuf),
UnableToReadBaseDir(io::Error),
UnableToReadFile(io::Error),
ValidatorDirError(ValidatorDirError),
}
/// Manages a directory containing multiple `ValidatorDir` directories.
///
/// ## Example
///
/// ```ignore
/// validators
/// └── 0x91494d3ac4c078049f37aa46934ba8cdf5a9cca6e1b9a9e12403d69d8a2c43a25a7f576df2a5a3d7cb3f45e6aa5e2812
/// ├── eth1_deposit_data.rlp
/// ├── deposit-tx-hash.txt
/// ├── voting-keystore.json
/// └── withdrawal-keystore.json
/// ```
pub struct Manager {
dir: PathBuf,
}
impl Manager {
/// Open a directory containing multiple validators.
///
/// Pass the `validators` directory as `dir` (see struct-level example).
pub fn open<P: AsRef<Path>>(dir: P) -> Result<Self, Error> {
let dir: PathBuf = dir.as_ref().into();
if dir.exists() {
Ok(Self { dir })
} else {
Err(Error::DirectoryDoesNotExist(dir))
}
}
/// Iterate the nodes in `self.dir`, filtering out things that are unlikely to be a validator
/// directory.
fn iter_dir(&self) -> Result<Vec<PathBuf>, Error> {
read_dir(&self.dir)
.map_err(Error::UnableToReadBaseDir)?
.map(|file_res| file_res.map(|f| f.path()))
// We use `map_or` with `true` here to ensure that we always fail if there is any
// error.
.filter(|path_res| path_res.as_ref().map_or(true, |p| p.is_dir()))
.map(|res| res.map_err(Error::UnableToReadFile))
.collect()
}
/// Open a `ValidatorDir` at the given `path`.
///
/// ## Note
///
/// It is not enforced that `path` is contained in `self.dir`.
pub fn open_validator<P: AsRef<Path>>(&self, path: P) -> Result<ValidatorDir, Error> {
ValidatorDir::open(path).map_err(Error::ValidatorDirError)
}
/// Opens all the validator directories in `self`.
///
/// ## Errors
///
/// Returns an error if any of the directories is unable to be opened, perhaps due to a
/// file-system error or directory with an active lockfile.
pub fn open_all_validators(&self) -> Result<Vec<ValidatorDir>, Error> {
self.iter_dir()?
.into_iter()
.map(|path| ValidatorDir::open(path).map_err(Error::ValidatorDirError))
.collect()
}
/// Opens all the validator directories in `self` and decrypts the validator keypairs.
///
/// ## Errors
///
/// Returns an error if any of the directories is unable to be opened.
pub fn decrypt_all_validators(
&self,
secrets_dir: PathBuf,
) -> Result<Vec<(Keypair, ValidatorDir)>, Error> {
self.iter_dir()?
.into_par_iter()
.map(|path| {
ValidatorDir::open(path)
.and_then(|v| v.voting_keypair(&secrets_dir).map(|kp| (kp, v)))
.map_err(Error::ValidatorDirError)
})
.collect()
}
/// Returns a map of directory name to full directory path. E.g., `myval -> /home/vals/myval`.
/// Filters out nodes in `self.dir` that are unlikely to be a validator directory.
///
/// ## Errors
///
/// Returns an error if a directory is unable to be read.
pub fn directory_names(&self) -> Result<HashMap<String, PathBuf>, Error> {
Ok(HashMap::from_iter(
self.iter_dir()?
.into_iter()
.map(|path| (format!("{:?}", path), path)),
))
}
}
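A hedged usage sketch (the paths are illustrative; in practice the validator client would pass its configured `validators/` and `secrets/` directories):

```rust
use std::path::PathBuf;

/// Illustrative only: open a validators directory and decrypt every voting keypair.
fn example() -> Result<(), Error> {
    let manager = Manager::open("/home/user/.lighthouse/validators")?;
    let secrets = PathBuf::from("/home/user/.lighthouse/secrets");

    // Opens each validator dir and decrypts its voting keypair (in parallel).
    for (keypair, dir) in manager.decrypt_all_validators(secrets)? {
        println!("{:?} -> {}", dir.dir(), keypair.pk.as_hex_string());
    }
    Ok(())
}
```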

View File

@ -0,0 +1,66 @@
//! The functionality in this module is only required for backward compatibility with the old
//! method of key generation (unencrypted, SSZ-encoded keypairs). It should be removed as soon as
//! we're confident that no-one is using these keypairs anymore (hopefully mid-June 2020).
#![cfg(feature = "unencrypted_keys")]
use eth2_keystore::PlainText;
use ssz::Decode;
use ssz_derive::{Decode, Encode};
use std::fs::File;
use std::io::Read;
use std::path::Path;
use types::{Keypair, PublicKey, SecretKey};
/// Read a keypair from disk, using the old format where keys were stored as unencrypted
/// SSZ-encoded keypairs.
///
/// This only exists for compatibility with the old scheme and should not be used for any new
/// features.
pub fn load_unencrypted_keypair<P: AsRef<Path>>(path: P) -> Result<Keypair, String> {
let path = path.as_ref();
if !path.exists() {
return Err(format!("Keypair file does not exist: {:?}", path));
}
let mut bytes = vec![];
File::open(&path)
.map_err(|e| format!("Unable to open keypair file: {}", e))?
.read_to_end(&mut bytes)
.map_err(|e| format!("Unable to read keypair file: {}", e))?;
let bytes: PlainText = bytes.into();
SszEncodableKeypair::from_ssz_bytes(bytes.as_bytes())
.map(Into::into)
.map_err(|e| format!("Unable to decode keypair: {:?}", e))
}
/// A helper struct to allow SSZ enc/dec for a `Keypair`.
///
/// This only exists for compatibility with the old scheme and should not be used for any new
/// features.
#[derive(Encode, Decode)]
pub struct SszEncodableKeypair {
pk: PublicKey,
sk: SecretKey,
}
impl Into<Keypair> for SszEncodableKeypair {
fn into(self) -> Keypair {
Keypair {
sk: self.sk,
pk: self.pk,
}
}
}
impl From<Keypair> for SszEncodableKeypair {
fn from(kp: Keypair) -> Self {
Self {
sk: kp.sk,
pk: kp.pk,
}
}
}

View File

@ -0,0 +1,230 @@
use crate::builder::{
ETH1_DEPOSIT_AMOUNT_FILE, ETH1_DEPOSIT_DATA_FILE, VOTING_KEYSTORE_FILE,
WITHDRAWAL_KEYSTORE_FILE,
};
use deposit_contract::decode_eth1_tx_data;
use eth2_keystore::{Error as KeystoreError, Keystore, PlainText};
use std::fs::{read, remove_file, write, OpenOptions};
use std::io;
use std::path::{Path, PathBuf};
use tree_hash::TreeHash;
use types::{DepositData, Hash256, Keypair};
/// The file used for indicating if a directory is in-use by another process.
const LOCK_FILE: &str = ".lock";
/// The file used to save the Eth1 transaction hash from a deposit.
pub const ETH1_DEPOSIT_TX_HASH_FILE: &str = "eth1-deposit-tx-hash.txt";
#[derive(Debug)]
pub enum Error {
DirectoryDoesNotExist(PathBuf),
DirectoryLocked(PathBuf),
UnableToCreateLockfile(io::Error),
UnableToOpenKeystore(io::Error),
UnableToReadKeystore(KeystoreError),
UnableToOpenPassword(io::Error),
UnableToReadPassword(PathBuf),
UnableToDecryptKeypair(KeystoreError),
UnableToReadDepositData(io::Error),
DepositAmountDoesNotExist(PathBuf),
UnableToReadDepositAmount(io::Error),
UnableToParseDepositAmount(std::num::ParseIntError),
DepositAmountIsNotUtf8(std::string::FromUtf8Error),
UnableToParseDepositData(deposit_contract::DecodeError),
Eth1TxHashExists(PathBuf),
UnableToWriteEth1TxHash(io::Error),
/// The deposit root in the deposit data file does not match the one generated locally. This is
/// generally caused by supplying an `amount` at deposit-time that is different to the one used
/// at generation-time.
Eth1DepositRootMismatch,
#[cfg(feature = "unencrypted_keys")]
SszKeypairError(String),
}
/// Information required to submit a deposit to the Eth1 deposit contract.
#[derive(Debug, PartialEq)]
pub struct Eth1DepositData {
/// An RLP encoded Eth1 transaction.
pub rlp: Vec<u8>,
/// The deposit data used to generate `self.rlp`.
pub deposit_data: DepositData,
/// The root of `self.deposit_data`.
pub root: Hash256,
}
/// Provides a wrapper around a directory containing validator information.
///
/// Creates/deletes a lockfile in `self.dir` to attempt to prevent concurrent access from multiple
/// processes.
#[derive(Debug, PartialEq)]
pub struct ValidatorDir {
dir: PathBuf,
}
impl ValidatorDir {
/// Open `dir`, creating a lockfile to prevent concurrent access.
///
/// ## Errors
///
/// If there is a filesystem error or if a lockfile already exists.
pub fn open<P: AsRef<Path>>(dir: P) -> Result<Self, Error> {
let dir: &Path = dir.as_ref();
let dir: PathBuf = dir.into();
if !dir.exists() {
return Err(Error::DirectoryDoesNotExist(dir));
}
let lockfile = dir.join(LOCK_FILE);
if lockfile.exists() {
return Err(Error::DirectoryLocked(dir));
} else {
OpenOptions::new()
.write(true)
.create_new(true)
.open(lockfile)
.map_err(Error::UnableToCreateLockfile)?;
}
Ok(Self { dir })
}
/// Returns the `dir` provided to `Self::open`.
pub fn dir(&self) -> &PathBuf {
&self.dir
}
/// Attempts to read the keystore in `self.dir` and decrypt the keypair using a password file
/// in `password_dir`.
///
/// The password file that is used will be based upon the pubkey value in the keystore.
///
/// ## Errors
///
/// If there is a filesystem error, a password is missing or the password is incorrect.
pub fn voting_keypair<P: AsRef<Path>>(&self, password_dir: P) -> Result<Keypair, Error> {
unlock_keypair(&self.dir.clone(), VOTING_KEYSTORE_FILE, password_dir)
}
/// Attempts to read the keystore in `self.dir` and decrypt the keypair using a password file
/// in `password_dir`.
///
/// The password file that is used will be based upon the pubkey value in the keystore.
///
/// ## Errors
///
/// If there is a file-system error, a password is missing or the password is incorrect.
pub fn withdrawal_keypair<P: AsRef<Path>>(&self, password_dir: P) -> Result<Keypair, Error> {
unlock_keypair(&self.dir.clone(), WITHDRAWAL_KEYSTORE_FILE, password_dir)
}
/// Indicates if there is a file containing an Eth1 deposit transaction hash. This can be used to
/// check if a deposit transaction has been created.
///
/// ## Note
///
/// It's possible to submit an Eth1 deposit without creating this file, so use caution when
/// relying upon this value.
pub fn eth1_deposit_tx_hash_exists(&self) -> bool {
self.dir.join(ETH1_DEPOSIT_TX_HASH_FILE).exists()
}
/// Saves the `tx_hash` to a file in `self.dir`. Artificially requires `&mut self` to prevent concurrent
/// calls.
///
/// ## Errors
///
/// If there is a file-system error, or if there is already a transaction hash stored in
/// `self.dir`.
pub fn save_eth1_deposit_tx_hash(&mut self, tx_hash: &str) -> Result<(), Error> {
let path = self.dir.join(ETH1_DEPOSIT_TX_HASH_FILE);
if path.exists() {
return Err(Error::Eth1TxHashExists(path));
}
write(path, tx_hash.as_bytes()).map_err(Error::UnableToWriteEth1TxHash)
}
/// Attempts to read files in `self.dir` and return an `Eth1DepositData` that can be used for
/// submitting an Eth1 deposit.
///
/// ## Errors
///
/// If there is a file-system error, not all required files exist or the files are
/// inconsistent.
pub fn eth1_deposit_data(&self) -> Result<Option<Eth1DepositData>, Error> {
// Read and parse `ETH1_DEPOSIT_DATA_FILE`.
let path = self.dir.join(ETH1_DEPOSIT_DATA_FILE);
if !path.exists() {
return Ok(None);
}
let deposit_data_rlp = read(path).map_err(Error::UnableToReadDepositData)?;
// Read and parse `ETH1_DEPOSIT_AMOUNT_FILE`.
let path = self.dir.join(ETH1_DEPOSIT_AMOUNT_FILE);
if !path.exists() {
return Err(Error::DepositAmountDoesNotExist(path));
}
let deposit_amount: u64 =
String::from_utf8(read(path).map_err(Error::UnableToReadDepositAmount)?)
.map_err(Error::DepositAmountIsNotUtf8)?
.parse()
.map_err(Error::UnableToParseDepositAmount)?;
let (deposit_data, root) = decode_eth1_tx_data(&deposit_data_rlp, deposit_amount)
.map_err(Error::UnableToParseDepositData)?;
// This acts as a sanity check to ensure that the amount from `ETH1_DEPOSIT_AMOUNT_FILE`
// matches the value that `ETH1_DEPOSIT_DATA_FILE` was created with.
if deposit_data.tree_hash_root() != root {
return Err(Error::Eth1DepositRootMismatch);
}
Ok(Some(Eth1DepositData {
rlp: deposit_data_rlp,
deposit_data,
root,
}))
}
}
impl Drop for ValidatorDir {
fn drop(&mut self) {
let lockfile = self.dir.clone().join(LOCK_FILE);
if let Err(e) = remove_file(&lockfile) {
eprintln!(
"Unable to remove validator lockfile {:?}: {:?}",
lockfile, e
);
}
}
}
/// Attempts to load and decrypt a keystore.
fn unlock_keypair<P: AsRef<Path>>(
keystore_dir: &PathBuf,
filename: &str,
password_dir: P,
) -> Result<Keypair, Error> {
let keystore = Keystore::from_json_reader(
&mut OpenOptions::new()
.read(true)
.create(false)
.open(keystore_dir.clone().join(filename))
.map_err(Error::UnableToOpenKeystore)?,
)
.map_err(Error::UnableToReadKeystore)?;
let password_path = password_dir
.as_ref()
.join(format!("0x{}", keystore.pubkey()));
let password: PlainText = read(&password_path)
.map_err(|_| Error::UnableToReadPassword(password_path.into()))?
.into();
keystore
.decrypt_keypair(password.as_bytes())
.map_err(Error::UnableToDecryptKeypair)
}
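
For orientation, a hedged usage sketch of the `ValidatorDir` API above (not part of the diff); the paths are hypothetical and the module's own imports are assumed.

fn open_validator_sketch() -> Result<(), Error> {
    let validator_path = Path::new("/tmp/lighthouse-test/validators/0xabcd");
    let secrets_dir = Path::new("/tmp/lighthouse-test/secrets");
    // `open` creates the `.lock` file; a second `open` on the same directory fails with
    // `Error::DirectoryLocked` until this value is dropped.
    let validator = ValidatorDir::open(validator_path)?;
    // Decrypt the voting keystore using the password file in `secrets_dir` named after the
    // keystore's 0x-prefixed public key.
    let _voting_keypair = validator.voting_keypair(secrets_dir)?;
    // If deposit data was generated at build time, recover it for submission to the Eth1
    // deposit contract.
    if let Some(deposit) = validator.eth1_deposit_data()? {
        println!("deposit root: {:?}", deposit.root);
    }
    // Dropping `validator` removes the lockfile.
    Ok(())
}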

View File

@ -0,0 +1,273 @@
#![cfg(not(debug_assertions))]
use eth2_keystore::{Keystore, KeystoreBuilder, PlainText};
use std::fs::{self, File};
use std::path::Path;
use tempfile::{tempdir, TempDir};
use types::{test_utils::generate_deterministic_keypair, EthSpec, Keypair, MainnetEthSpec};
use validator_dir::{
Builder, ValidatorDir, ETH1_DEPOSIT_TX_HASH_FILE, VOTING_KEYSTORE_FILE,
WITHDRAWAL_KEYSTORE_FILE,
};
/// A very weak password with which to encrypt the keystores.
pub const INSECURE_PASSWORD: &[u8] = &[30; 32];
/// Helper struct for configuring tests.
struct BuildConfig {
random_voting_keystore: bool,
random_withdrawal_keystore: bool,
deposit_amount: Option<u64>,
store_withdrawal_keystore: bool,
}
impl Default for BuildConfig {
fn default() -> Self {
Self {
random_voting_keystore: true,
random_withdrawal_keystore: true,
deposit_amount: None,
store_withdrawal_keystore: true,
}
}
}
/// Check that a keystore exists and can be decrypted with a password in `password_dir`.
fn check_keystore<P: AsRef<Path>>(path: P, password_dir: P) -> Keypair {
let mut file = File::open(path).unwrap();
let keystore = Keystore::from_json_reader(&mut file).unwrap();
let pubkey = keystore.pubkey();
let password_path = password_dir.as_ref().join(format!("0x{}", pubkey));
let password = fs::read(password_path).unwrap();
keystore.decrypt_keypair(&password).unwrap()
}
/// Creates a keystore using `generate_deterministic_keypair`.
pub fn generate_deterministic_keystore(i: usize) -> Result<(Keystore, PlainText), String> {
let keypair = generate_deterministic_keypair(i);
let keystore = KeystoreBuilder::new(&keypair, INSECURE_PASSWORD, "".into())
.map_err(|e| format!("Unable to create keystore builder: {:?}", e))?
.build()
.map_err(|e| format!("Unable to build keystore: {:?}", e))?;
Ok((keystore, INSECURE_PASSWORD.to_vec().into()))
}
/// A testing harness for generating validator directories.
struct Harness {
validators_dir: TempDir,
password_dir: TempDir,
}
impl Harness {
/// Create a new harness using temporary directories.
pub fn new() -> Self {
Self {
validators_dir: tempdir().unwrap(),
password_dir: tempdir().unwrap(),
}
}
/// Create a `ValidatorDir` from the `config`, then assert that the `ValidatorDir` was generated
/// correctly with respect to the `config`.
pub fn create_and_test(&self, config: &BuildConfig) -> ValidatorDir {
let spec = MainnetEthSpec::default_spec();
/*
* Build the `ValidatorDir`.
*/
let builder = Builder::new(
self.validators_dir.path().into(),
self.password_dir.path().into(),
)
.store_withdrawal_keystore(config.store_withdrawal_keystore);
let builder = if config.random_voting_keystore {
builder
} else {
let (keystore, password) = generate_deterministic_keystore(0).unwrap();
builder.voting_keystore(keystore, password.as_bytes())
};
let builder = if config.random_withdrawal_keystore {
builder
} else {
let (keystore, password) = generate_deterministic_keystore(1).unwrap();
builder.withdrawal_keystore(keystore, password.as_bytes())
};
let builder = if let Some(amount) = config.deposit_amount {
builder.create_eth1_tx_data(amount, &spec)
} else {
builder
};
let mut validator = builder.build().unwrap();
/*
* Assert that the dir is consistent with the config.
*/
let withdrawal_keystore_path = validator.dir().join(WITHDRAWAL_KEYSTORE_FILE);
let password_dir = self.password_dir.path().into();
// Ensure the voting keypair exists and can be decrypted.
let voting_keypair =
check_keystore(&validator.dir().join(VOTING_KEYSTORE_FILE), &password_dir);
if !config.random_voting_keystore {
assert_eq!(voting_keypair, generate_deterministic_keypair(0))
}
// Use OR here instead of AND so we *always* check for the withdrawal keystores if random
// keystores were generated.
if config.random_withdrawal_keystore || config.store_withdrawal_keystore {
// Ensure the withdrawal keypair exists and can be decrypted.
let withdrawal_keypair = check_keystore(&withdrawal_keystore_path, &password_dir);
if !config.random_withdrawal_keystore {
assert_eq!(withdrawal_keypair, generate_deterministic_keypair(1))
}
// The withdrawal keys should be distinct from the voting keypairs.
assert_ne!(withdrawal_keypair, voting_keypair);
}
if !config.store_withdrawal_keystore && !config.random_withdrawal_keystore {
assert!(!withdrawal_keystore_path.exists())
}
if let Some(amount) = config.deposit_amount {
// Check that the deposit data can be decoded.
let data = validator.eth1_deposit_data().unwrap().unwrap();
// Ensure the amount is consistent.
assert_eq!(data.deposit_data.amount, amount);
} else {
// If there was no deposit then we should return `Ok(None)`.
assert!(validator.eth1_deposit_data().unwrap().is_none());
}
let tx_hash_path = validator.dir().join(ETH1_DEPOSIT_TX_HASH_FILE);
// The Eth1 deposit tx hash file should not exist yet.
assert!(!tx_hash_path.exists());
let tx = "junk data";
// Save a tx hash.
validator.save_eth1_deposit_tx_hash(tx).unwrap();
// Ensure the saved tx hash is correct.
assert_eq!(fs::read(tx_hash_path).unwrap(), tx.as_bytes().to_vec());
// Saving a second tx hash should fail.
validator.save_eth1_deposit_tx_hash(tx).unwrap_err();
validator
}
}
#[test]
fn concurrency() {
let harness = Harness::new();
let val_dir = harness.create_and_test(&BuildConfig::default());
let path = val_dir.dir().clone();
// Should not re-open whilst opened after build.
ValidatorDir::open(&path).unwrap_err();
drop(val_dir);
// Should re-open after drop.
let val_dir = ValidatorDir::open(&path).unwrap();
// Should not re-open when opened via ValidatorDir.
ValidatorDir::open(&path).unwrap_err();
drop(val_dir);
// Should re-open again.
ValidatorDir::open(&path).unwrap();
}
#[test]
fn deterministic_voting_keystore() {
let harness = Harness::new();
let config = BuildConfig {
random_voting_keystore: false,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn deterministic_withdrawal_keystore_without_saving() {
let harness = Harness::new();
let config = BuildConfig {
random_withdrawal_keystore: false,
store_withdrawal_keystore: false,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn deterministic_withdrawal_keystore_with_saving() {
let harness = Harness::new();
let config = BuildConfig {
random_withdrawal_keystore: false,
store_withdrawal_keystore: true,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn both_keystores_deterministic_without_saving() {
let harness = Harness::new();
let config = BuildConfig {
random_voting_keystore: false,
random_withdrawal_keystore: false,
store_withdrawal_keystore: false,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn both_keystores_deterministic_with_saving() {
let harness = Harness::new();
let config = BuildConfig {
random_voting_keystore: false,
random_withdrawal_keystore: false,
store_withdrawal_keystore: true,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn eth1_data() {
let harness = Harness::new();
let config = BuildConfig {
deposit_amount: Some(123456),
..BuildConfig::default()
};
harness.create_and_test(&config);
}
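
A hedged sketch of driving `Builder` outside the test harness, assuming the same imports as the tests above; the deposit amount and the choice to store the withdrawal keystore are illustrative.

fn build_validator_sketch(validators_dir: PathBuf, secrets_dir: PathBuf) -> ValidatorDir {
    let spec = MainnetEthSpec::default_spec();
    // No explicit keystores are supplied, so random voting and withdrawal keystores are
    // generated; 32_000_000_000 Gwei (32 ETH) is the standard deposit amount.
    Builder::new(validators_dir, secrets_dir)
        .store_withdrawal_keystore(true)
        .create_eth1_tx_data(32_000_000_000, &spec)
        .build()
        .expect("should build validator directory")
}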

View File

@ -30,3 +30,6 @@ tree_hash = "0.1.0"
tokio = { version = "0.2.20", features = ["full"] }
clap_utils = { path = "../eth2/utils/clap_utils" }
eth2-libp2p = { path = "../beacon_node/eth2-libp2p" }
validator_dir = { path = "../eth2/utils/validator_dir", features = ["unencrypted_keys"] }
rand = "0.7.2"
eth2_keystore = { path = "../eth2/utils/eth2_keystore" }

View File

@ -405,7 +405,7 @@ fn main() {
.subcommand(
SubCommand::with_name("generate-bootnode-enr")
.about(
"Generates an ENR address to be used as a pre-genesis boot node..",
"Generates an ENR address to be used as a pre-genesis boot node.",
)
.arg(
Arg::with_name("ip")

View File

@ -24,3 +24,7 @@ validator_client = { "path" = "../validator_client" }
account_manager = { "path" = "../account_manager" }
clap_utils = { path = "../eth2/utils/clap_utils" }
eth2_testnet_config = { path = "../eth2/utils/eth2_testnet_config" }
[dev-dependencies]
tempfile = "3.1.0"
validator_dir = { path = "../eth2/utils/validator_dir" }

View File

@ -95,17 +95,11 @@ fn main() {
macro_rules! run_with_spec {
($env_builder: expr) => {
match run($env_builder, &matches) {
Ok(()) => exit(0),
Err(e) => {
println!("Failed to start Lighthouse: {}", e);
exit(1)
}
}
run($env_builder, &matches)
};
}
match matches.value_of("spec") {
let result = match matches.value_of("spec") {
Some("minimal") => run_with_spec!(EnvironmentBuilder::minimal()),
Some("mainnet") => run_with_spec!(EnvironmentBuilder::mainnet()),
Some("interop") => run_with_spec!(EnvironmentBuilder::interop()),
@ -113,6 +107,19 @@ fn main() {
// This path should be unreachable due to clap having a `default_value`
unreachable!("Unknown spec configuration: {:?}", spec);
}
};
// `std::process::exit` does not run destructors so we drop manually.
drop(matches);
// Return the appropriate error code.
match result {
Ok(()) => exit(0),
Err(e) => {
eprintln!("{}", e);
drop(e);
exit(1)
}
}
}
@ -152,19 +159,6 @@ fn run<E: EthSpec>(
return Err("Invalid CPU architecture".into());
}
warn!(
log,
"Ethereum 2.0 is pre-release. This software is experimental."
);
if !matches.is_present("testnet-dir") {
info!(
log,
"Using default testnet";
"default" => HARDCODED_TESTNET
)
}
// Note: the current code technically allows for starting a beacon node _and_ a validator
// client at the same time.
//
@ -181,6 +175,19 @@ fn run<E: EthSpec>(
return Ok(());
};
warn!(
log,
"Ethereum 2.0 is pre-release. This software is experimental."
);
if !matches.is_present("testnet-dir") {
info!(
log,
"Using default testnet";
"default" => HARDCODED_TESTNET
)
}
let beacon_node = if let Some(sub_matches) = matches.subcommand_matches("beacon_node") {
let runtime_context = environment.core_context();

View File

@ -0,0 +1,420 @@
#![cfg(not(debug_assertions))]
use account_manager::{
upgrade_legacy_keypairs::{CMD as UPGRADE_CMD, *},
validator::{create::*, CMD as VALIDATOR_CMD},
wallet::{
create::{CMD as CREATE_CMD, *},
list::CMD as LIST_CMD,
CMD as WALLET_CMD,
},
BASE_DIR_FLAG, CMD as ACCOUNT_CMD, *,
};
use std::env;
use std::fs;
use std::path::{Path, PathBuf};
use std::process::{Command, Output};
use std::str::from_utf8;
use tempfile::{tempdir, TempDir};
use types::Keypair;
use validator_dir::ValidatorDir;
// TODO: create tests for the `lighthouse account validator deposit` command. This involves getting
// access to an IPC endpoint during testing or adding support for deposit submission via HTTP and
// using ganache-cli.
/// Returns the `lighthouse account` command.
fn account_cmd() -> Command {
let lighthouse_bin = env!("CARGO_BIN_EXE_lighthouse");
let path = lighthouse_bin
    .parse::<PathBuf>()
    .expect("should parse CARGO_BIN_EXE_lighthouse");
let mut cmd = Command::new(path);
cmd.arg(ACCOUNT_CMD);
cmd
}
/// Returns the `lighthouse account wallet` command.
fn wallet_cmd() -> Command {
let mut cmd = account_cmd();
cmd.arg(WALLET_CMD);
cmd
}
/// Executes a `Command`, returning a `Result` based upon whether the command exited successfully.
fn output_result(cmd: &mut Command) -> Result<Output, String> {
let output = cmd.output().expect("should run command");
if output.status.success() {
Ok(output)
} else {
Err(from_utf8(&output.stderr)
.expect("stderr is not utf8")
.to_string())
}
}
/// Returns the number of nodes in a directory.
fn dir_child_count<P: AsRef<Path>>(dir: P) -> usize {
fs::read_dir(dir).expect("should read dir").count()
}
/// Uses `lighthouse account wallet list` to list all wallets.
fn list_wallets<P: AsRef<Path>>(base_dir: P) -> Vec<String> {
let output = output_result(
wallet_cmd()
.arg(format!("--{}", BASE_DIR_FLAG))
.arg(base_dir.as_ref().as_os_str())
.arg(LIST_CMD),
)
.unwrap();
let stdout = from_utf8(&output.stdout)
.expect("stdout is not utf8")
.to_string();
stdout[..stdout.len() - 1]
.split("\n")
.map(Into::into)
.collect()
}
/// Create a wallet using the lighthouse CLI.
fn create_wallet<P: AsRef<Path>>(
name: &str,
base_dir: P,
password: P,
mnemonic: P,
) -> Result<Output, String> {
output_result(
wallet_cmd()
.arg(format!("--{}", BASE_DIR_FLAG))
.arg(base_dir.as_ref().as_os_str())
.arg(CREATE_CMD)
.arg(format!("--{}", NAME_FLAG))
.arg(&name)
.arg(format!("--{}", PASSPHRASE_FLAG))
.arg(password.as_ref().as_os_str())
.arg(format!("--{}", MNEMONIC_FLAG))
.arg(mnemonic.as_ref().as_os_str()),
)
}
/// Helper struct for testing wallets.
struct TestWallet {
base_dir: PathBuf,
password_dir: TempDir,
mnemonic_dir: TempDir,
name: String,
}
impl TestWallet {
/// Creates a new wallet tester, without _actually_ creating it via the CLI.
pub fn new<P: AsRef<Path>>(base_dir: P, name: &str) -> Self {
Self {
base_dir: base_dir.as_ref().into(),
password_dir: tempdir().unwrap(),
mnemonic_dir: tempdir().unwrap(),
name: name.into(),
}
}
pub fn base_dir(&self) -> PathBuf {
self.base_dir.clone()
}
pub fn password_path(&self) -> PathBuf {
self.password_dir.path().join("password.pass")
}
pub fn mnemonic_path(&self) -> PathBuf {
self.mnemonic_dir.path().join("mnemonic")
}
/// Actually create the wallet using the lighthouse CLI.
pub fn create(&self) -> Result<Output, String> {
create_wallet(
&self.name,
self.base_dir(),
self.password_path(),
self.mnemonic_path(),
)
}
/// Create a wallet, expecting it to succeed.
pub fn create_expect_success(&self) {
self.create().unwrap();
assert!(self.password_path().exists(), "{} password", self.name);
assert!(self.mnemonic_path().exists(), "{} mnemonic", self.name);
assert!(list_wallets(self.base_dir()).contains(&self.name));
}
}
#[test]
fn without_pass_extension() {
let base_dir = tempdir().unwrap();
let password_dir = tempdir().unwrap();
let mnemonic_dir = tempdir().unwrap();
let err = create_wallet(
"bad_extension",
base_dir.path(),
&password_dir.path().join("password"),
&mnemonic_dir.path().join("mnemonic"),
)
.unwrap_err();
assert!(err.contains("ends in .pass"));
}
#[test]
fn wallet_create_and_list() {
let base_temp_dir = tempdir().unwrap();
let base_dir: PathBuf = base_temp_dir.path().into();
let wally = TestWallet::new(&base_dir, "wally");
assert_eq!(dir_child_count(&base_dir), 0);
wally.create_expect_success();
assert_eq!(dir_child_count(&base_dir), 1);
assert!(wally.password_path().exists());
assert!(wally.mnemonic_path().exists());
// Should not create a wallet with a duplicate name.
wally.create().unwrap_err();
assert_eq!(list_wallets(wally.base_dir()).len(), 1);
let wally2 = TestWallet::new(&base_dir, "wally2");
wally2.create_expect_success();
assert_eq!(list_wallets(wally.base_dir()).len(), 2);
}
/// Returns the `lighthouse account validator` command.
fn validator_cmd() -> Command {
let mut cmd = account_cmd();
cmd.arg(VALIDATOR_CMD);
cmd
}
/// Helper struct for testing validator creation via the CLI.
struct TestValidator {
wallet: TestWallet,
validator_dir: PathBuf,
secrets_dir: PathBuf,
}
impl TestValidator {
pub fn new<P: AsRef<Path>>(validator_dir: P, secrets_dir: P, wallet: TestWallet) -> Self {
Self {
wallet,
validator_dir: validator_dir.as_ref().into(),
secrets_dir: secrets_dir.as_ref().into(),
}
}
/// Create validators, returning a list of validator pubkeys on success.
pub fn create(
&self,
quantity_flag: &str,
quantity: usize,
store_withdrawal_key: bool,
) -> Result<Vec<String>, String> {
let mut cmd = validator_cmd();
cmd.arg(format!("--{}", BASE_DIR_FLAG))
.arg(self.wallet.base_dir().into_os_string())
.arg(CREATE_CMD)
.arg(format!("--{}", WALLET_NAME_FLAG))
.arg(&self.wallet.name)
.arg(format!("--{}", WALLET_PASSPHRASE_FLAG))
.arg(self.wallet.password_path().into_os_string())
.arg(format!("--{}", VALIDATOR_DIR_FLAG))
.arg(self.validator_dir.clone().into_os_string())
.arg(format!("--{}", SECRETS_DIR_FLAG))
.arg(self.secrets_dir.clone().into_os_string())
.arg(format!("--{}", DEPOSIT_GWEI_FLAG))
.arg("32000000000")
.arg(format!("--{}", quantity_flag))
.arg(format!("{}", quantity));
let output = if store_withdrawal_key {
output_result(cmd.arg(format!("--{}", STORE_WITHDRAW_FLAG))).unwrap()
} else {
output_result(&mut cmd).unwrap()
};
let stdout = from_utf8(&output.stdout)
.expect("stdout is not utf8")
.to_string();
if stdout == "" {
return Ok(vec![]);
}
let pubkeys = stdout[..stdout.len() - 1]
.split("\n")
.map(|line| {
let tab = line.find("\t").expect("line must have tab");
let (_, pubkey) = line.split_at(tab + 1);
pubkey.to_string()
})
.collect::<Vec<_>>();
Ok(pubkeys)
}
/// Create validators, expecting success.
pub fn create_expect_success(
&self,
quantity_flag: &str,
quantity: usize,
store_withdrawal_key: bool,
) -> Vec<ValidatorDir> {
let pubkeys = self
.create(quantity_flag, quantity, store_withdrawal_key)
.unwrap();
pubkeys
.into_iter()
.map(|pk| {
// Password should have been created.
assert!(self.secrets_dir.join(&pk).exists(), "password exists");
// Should have created a validator directory.
let dir = ValidatorDir::open(self.validator_dir.join(&pk))
.expect("should open validator dir");
// Validator dir should have a voting keypair.
let voting_keypair = dir.voting_keypair(&self.secrets_dir).unwrap();
// Attempt to read the withdrawal keypair; it should only exist if `store_withdrawal_key` was set.
let withdrawal_result = dir.withdrawal_keypair(&self.secrets_dir);
if store_withdrawal_key {
let withdrawal_keypair = withdrawal_result.unwrap();
assert_ne!(voting_keypair.pk, withdrawal_keypair.pk);
} else {
withdrawal_result.unwrap_err();
}
// Deposit tx file should not exist yet.
assert!(!dir.eth1_deposit_tx_hash_exists(), "deposit tx");
// Should have created a valid deposit data file.
dir.eth1_deposit_data().unwrap().unwrap();
dir
})
.collect()
}
}
#[test]
fn validator_create() {
let base_dir = tempdir().unwrap();
let validator_dir = tempdir().unwrap();
let secrets_dir = tempdir().unwrap();
let wallet = TestWallet::new(base_dir.path(), "wally");
wallet.create_expect_success();
assert_eq!(dir_child_count(validator_dir.path()), 0);
let validator = TestValidator::new(validator_dir.path(), secrets_dir.path(), wallet);
// Create a validator _without_ storing the withdraw key.
validator.create_expect_success(COUNT_FLAG, 1, false);
assert_eq!(dir_child_count(validator_dir.path()), 1);
// Create a validator storing the withdraw key.
validator.create_expect_success(COUNT_FLAG, 1, true);
assert_eq!(dir_child_count(validator_dir.path()), 2);
// Use the at-most flag with fewer validators than are in the directory.
assert_eq!(
validator.create_expect_success(AT_MOST_FLAG, 1, true).len(),
0
);
assert_eq!(dir_child_count(validator_dir.path()), 2);
// Use the at-most flag with the same number of validators as are in the directory.
assert_eq!(
validator.create_expect_success(AT_MOST_FLAG, 2, true).len(),
0
);
assert_eq!(dir_child_count(validator_dir.path()), 2);
// Use the at-most flag with two more validators than are in the directory.
assert_eq!(
validator.create_expect_success(AT_MOST_FLAG, 4, true).len(),
2
);
assert_eq!(dir_child_count(validator_dir.path()), 4);
// Create multiple validators with the count flag.
assert_eq!(
validator.create_expect_success(COUNT_FLAG, 2, true).len(),
2
);
assert_eq!(dir_child_count(validator_dir.path()), 6);
}
fn write_legacy_keypair<P: AsRef<Path>>(name: &str, dir: P) -> Keypair {
let keypair = Keypair::random();
let mut keypair_bytes = keypair.pk.as_bytes();
keypair_bytes.extend_from_slice(&keypair.sk.as_raw().as_bytes());
fs::write(dir.as_ref().join(name), &keypair_bytes).unwrap();
keypair
}
#[test]
fn upgrade_legacy_keypairs() {
let validators_dir = tempdir().unwrap();
let secrets_dir = tempdir().unwrap();
let validators = (0..2)
.into_iter()
.map(|i| {
let validator_dir = validators_dir.path().join(format!("myval{}", i));
fs::create_dir_all(&validator_dir).unwrap();
let voting_keypair = write_legacy_keypair(VOTING_KEYPAIR_FILE, &validator_dir);
let withdrawal_keypair = write_legacy_keypair(WITHDRAWAL_KEYPAIR_FILE, &validator_dir);
(validator_dir, voting_keypair, withdrawal_keypair)
})
.collect::<Vec<_>>();
account_cmd()
.arg(UPGRADE_CMD)
.arg(format!("--{}", VALIDATOR_DIR_FLAG))
.arg(validators_dir.path().as_os_str())
.arg(format!("--{}", SECRETS_DIR_FLAG))
.arg(secrets_dir.path().as_os_str())
.output()
.unwrap();
for (validator_dir, voting_keypair, withdrawal_keypair) in validators {
let dir = ValidatorDir::open(&validator_dir).unwrap();
assert_eq!(
voting_keypair,
dir.voting_keypair(secrets_dir.path()).unwrap()
);
assert_eq!(
withdrawal_keypair,
dir.withdrawal_keypair(secrets_dir.path()).unwrap()
);
}
}

View File

@ -17,3 +17,4 @@ futures = "0.3.5"
genesis = { path = "../../beacon_node/genesis" }
remote_beacon_node = { path = "../../eth2/utils/remote_beacon_node" }
validator_client = { path = "../../validator_client" }
validator_dir = { path = "../../eth2/utils/validator_dir", features = ["insecure_keys"] }

View File

@ -8,7 +8,8 @@ use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};
use tempdir::TempDir;
use types::EthSpec;
use validator_client::{KeySource, ProductionValidatorClient};
use validator_client::ProductionValidatorClient;
use validator_dir::insecure_keys::build_deterministic_validator_dirs;
pub use beacon_node::{ClientConfig, ClientGenesis, ProductionClient};
pub use environment;
@ -90,13 +91,52 @@ pub fn testing_client_config() -> ClientConfig {
client_config
}
/// Contains the directories for a `LocalValidatorClient`.
///
/// This struct is separate from `LocalValidatorClient` to allow for pre-computation of validator
/// keypairs, since that task is quite resource-intensive.
pub struct ValidatorFiles {
pub datadir: TempDir,
pub secrets_dir: TempDir,
}
impl ValidatorFiles {
/// Creates temporary data and secrets dirs.
pub fn new() -> Result<Self, String> {
let datadir = TempDir::new("lighthouse-validator-client")
.map_err(|e| format!("Unable to create VC data dir: {:?}", e))?;
let secrets_dir = TempDir::new("lighthouse-validator-client-secrets")
.map_err(|e| format!("Unable to create VC secrets dir: {:?}", e))?;
Ok(Self {
datadir,
secrets_dir,
})
}
/// Creates temporary data and secrets dirs, preloaded with keystores.
pub fn with_keystores(keypair_indices: &[usize]) -> Result<Self, String> {
let this = Self::new()?;
build_deterministic_validator_dirs(
this.datadir.path().into(),
this.secrets_dir.path().into(),
keypair_indices,
)
.map_err(|e| format!("Unable to build validator directories: {:?}", e))?;
Ok(this)
}
}
/// Provides a validator client that is running in the current process on a given tokio executor (it
/// is _local_ to this process).
///
/// Intended for use in testing and simulation. Not for production.
pub struct LocalValidatorClient<T: EthSpec> {
pub client: ProductionValidatorClient<T>,
pub datadir: TempDir,
pub files: ValidatorFiles,
}
impl<E: EthSpec> LocalValidatorClient<E> {
@ -106,16 +146,10 @@ impl<E: EthSpec> LocalValidatorClient<E> {
/// The validator created is using the same types as the node we use in production.
pub async fn production_with_insecure_keypairs(
context: RuntimeContext<E>,
mut config: ValidatorConfig,
keypair_indices: &[usize],
config: ValidatorConfig,
files: ValidatorFiles,
) -> Result<Self, String> {
// Creates a temporary directory that will be deleted once this `TempDir` is dropped.
let datadir = TempDir::new("lighthouse-beacon-node")
.expect("should create temp directory for client datadir");
config.key_source = KeySource::InsecureKeypairs(keypair_indices.to_vec());
Self::new(context, config, datadir).await
Self::new(context, config, files).await
}
/// Creates a validator client that attempts to read keys from the default data dir.
@ -126,19 +160,18 @@ impl<E: EthSpec> LocalValidatorClient<E> {
context: RuntimeContext<E>,
config: ValidatorConfig,
) -> Result<Self, String> {
// Creates a temporary directory that will be deleted once this `TempDir` is dropped.
let datadir = TempDir::new("lighthouse-validator")
.expect("should create temp directory for client datadir");
let files = ValidatorFiles::new()?;
Self::new(context, config, datadir).await
Self::new(context, config, files).await
}
async fn new(
context: RuntimeContext<E>,
mut config: ValidatorConfig,
datadir: TempDir,
files: ValidatorFiles,
) -> Result<Self, String> {
config.data_dir = datadir.path().into();
config.data_dir = files.datadir.path().into();
config.secrets_dir = files.secrets_dir.path().into();
ProductionValidatorClient::new(context, config)
.await
@ -146,7 +179,7 @@ impl<E: EthSpec> LocalValidatorClient<E> {
client
.start_service()
.expect("should start validator services");
Self { client, datadir }
Self { client, files }
})
}
}

View File

@ -16,3 +16,4 @@ tokio = "0.2.20"
eth1_test_rig = { path = "../eth1_test_rig" }
env_logger = "0.7.1"
clap = "2.33.0"
rayon = "1.3.0"

View File

@ -4,9 +4,12 @@ use eth1_test_rig::GanacheEth1Instance;
use futures::prelude::*;
use node_test_rig::{
environment::EnvironmentBuilder, testing_client_config, ClientGenesis, ValidatorConfig,
ValidatorFiles,
};
use rayon::prelude::*;
use std::net::{IpAddr, Ipv4Addr};
use std::time::Duration;
use tokio::time::{delay_until, Instant};
pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
let node_count = value_t!(matches, "nodes", usize).expect("missing nodes default");
@ -24,6 +27,24 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
println!(" validators_per_node:{}", validators_per_node);
println!(" end_after_checks:{}", end_after_checks);
// Generate the directories and keystores required for the validator clients.
let validator_files = (0..node_count)
.into_par_iter()
.map(|i| {
println!(
"Generating keystores for validator {} of {}",
i + 1,
node_count
);
let indices =
(i * validators_per_node..(i + 1) * validators_per_node).collect::<Vec<_>>();
ValidatorFiles::with_keystores(&indices).unwrap()
})
.collect::<Vec<_>>();
let expected_genesis_instant = Instant::now() + Duration::from_secs(60);
let log_level = "debug";
let log_format = None;
@ -103,54 +124,64 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
for _ in 0..node_count - 1 {
network.add_beacon_node(beacon_config.clone()).await?;
}
/*
* One by one, add validator clients to the network. Each validator client is attached to
* a single corresponding beacon node.
* Create a future that will add validator clients to the network. Each validator client is
* attached to a single corresponding beacon node.
*/
let add_validators_fut = async {
for (i, files) in validator_files.into_iter().enumerate() {
network
.add_validator_client(
ValidatorConfig {
auto_register: true,
..ValidatorConfig::default()
},
i,
files,
)
.await?;
}
// Note: presently the validator client future will only resolve once genesis time
// occurs. This is great for this scenario, but likely to change in the future.
//
// If the validator client future behaviour changes, we would need to add a new future
// that delays until genesis. Otherwise, all of the checks that start in the next
// future will start too early.
for i in 0..node_count {
let indices =
(i * validators_per_node..(i + 1) * validators_per_node).collect::<Vec<_>>();
network
.add_validator_client(
ValidatorConfig {
auto_register: true,
..ValidatorConfig::default()
},
i,
indices,
)
.await?;
}
Ok::<(), String>(())
};
/*
* Start the processes that will run checks on the network as it runs.
*/
let _err = futures::join!(
// Check that the chain finalizes at the first given opportunity.
checks::verify_first_finalization(network.clone(), slot_duration),
// Check that the chain starts with the expected validator count.
checks::verify_initial_validator_count(
network.clone(),
slot_duration,
initial_validator_count,
),
// Check that validators greater than `spec.min_genesis_active_validator_count` are
// onboarded at the first possible opportunity.
checks::verify_validator_onboarding(
network.clone(),
slot_duration,
total_validator_count,
)
);
let checks_fut = async {
delay_until(expected_genesis_instant).await;
let (finalization, validator_count, onboarding) = futures::join!(
// Check that the chain finalizes at the first given opportunity.
checks::verify_first_finalization(network.clone(), slot_duration),
// Check that the chain starts with the expected validator count.
checks::verify_initial_validator_count(
network.clone(),
slot_duration,
initial_validator_count,
),
// Check that validators greater than `spec.min_genesis_active_validator_count` are
// onboarded at the first possible opportunity.
checks::verify_validator_onboarding(
network.clone(),
slot_duration,
total_validator_count,
)
);
finalization?;
validator_count?;
onboarding?;
Ok::<(), String>(())
};
let (add_validators, checks) = futures::join!(add_validators_fut, checks_fut);
add_validators?;
checks?;
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.

View File

@ -1,6 +1,6 @@
use node_test_rig::{
environment::RuntimeContext, ClientConfig, LocalBeaconNode, LocalValidatorClient,
RemoteBeaconNode, ValidatorConfig,
RemoteBeaconNode, ValidatorConfig, ValidatorFiles,
};
use parking_lot::RwLock;
use std::ops::Deref;
@ -111,7 +111,7 @@ impl<E: EthSpec> LocalNetwork<E> {
&self,
mut validator_config: ValidatorConfig,
beacon_node: usize,
keypair_indices: Vec<usize>,
validator_files: ValidatorFiles,
) -> Result<(), String> {
let index = self.validator_clients.read().len();
let context = self.context.service_context(format!("validator_{}", index));
@ -132,7 +132,7 @@ impl<E: EthSpec> LocalNetwork<E> {
let validator_client = LocalValidatorClient::production_with_insecure_keypairs(
context,
validator_config,
&keypair_indices,
validator_files,
)
.await?;
self_1.validator_clients.write().push(validator_client);

View File

@ -3,9 +3,12 @@ use clap::ArgMatches;
use futures::prelude::*;
use node_test_rig::{
environment::EnvironmentBuilder, testing_client_config, ClientGenesis, ValidatorConfig,
ValidatorFiles,
};
use rayon::prelude::*;
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use tokio::time::{delay_until, Instant};
pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
let node_count = value_t!(matches, "nodes", usize).expect("missing nodes default");
@ -23,6 +26,22 @@ pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
println!(" validators_per_node:{}", validators_per_node);
println!(" end_after_checks:{}", end_after_checks);
// Generate the directories and keystores required for the validator clients.
let validator_files = (0..node_count)
.into_par_iter()
.map(|i| {
println!(
"Generating keystores for validator {} of {}",
i + 1,
node_count
);
let indices =
(i * validators_per_node..(i + 1) * validators_per_node).collect::<Vec<_>>();
ValidatorFiles::with_keystores(&indices).unwrap()
})
.collect::<Vec<_>>();
let log_level = "debug";
let log_format = None;
@ -42,10 +61,12 @@ pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
spec.min_genesis_active_validator_count = 64;
spec.seconds_per_eth1_block = 1;
let genesis_delay = Duration::from_secs(5);
let genesis_time = SystemTime::now()
.duration_since(UNIX_EPOCH)
.map_err(|_| "should get system time")?
+ Duration::from_secs(5);
+ genesis_delay;
let genesis_instant = Instant::now() + genesis_delay;
let slot_duration = Duration::from_millis(spec.milliseconds_per_slot);
let total_validator_count = validators_per_node * node_count;
@ -72,37 +93,44 @@ pub fn run_no_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
for _ in 0..node_count - 1 {
network.add_beacon_node(beacon_config.clone()).await?;
}
/*
* One by one, add validator clients to the network. Each validator client is attached to
* a single corresponding beacon node.
*/
// Note: presently the validator client future will only resolve once genesis time
// occurs. This is great for this scenario, but likely to change in the future.
//
// If the validator client future behaviour changes, we would need to add a new future
// that delays until genesis. Otherwise, all of the checks that start in the next
// future will start too early.
for i in 0..node_count {
let indices =
(i * validators_per_node..(i + 1) * validators_per_node).collect::<Vec<_>>();
network
.add_validator_client(
ValidatorConfig {
auto_register: true,
..ValidatorConfig::default()
},
i,
indices,
)
.await?;
}
/*
* Start the processes that will run checks on the network as it runs.
* Create a future that will add validator clients to the network. Each validator client is
* attached to a single corresponding beacon node.
*/
// Check that the chain finalizes at the first given opportunity.
checks::verify_first_finalization(network.clone(), slot_duration).await?;
let add_validators_fut = async {
for (i, files) in validator_files.into_iter().enumerate() {
network
.add_validator_client(
ValidatorConfig {
auto_register: true,
..ValidatorConfig::default()
},
i,
files,
)
.await?;
}
Ok::<(), String>(())
};
/*
* The processes that will run checks on the network as it runs.
*/
let checks_fut = async {
delay_until(genesis_instant).await;
// Check that the chain finalizes at the first given opportunity.
checks::verify_first_finalization(network.clone(), slot_duration).await?;
Ok::<(), String>(())
};
let (add_validators, start_checks) = futures::join!(add_validators_fut, checks_fut);
add_validators?;
start_checks?;
// The `final_future` either completes immediately or never completes, depending on the value
// of `end_after_checks`.

View File

@ -5,6 +5,7 @@ use futures::prelude::*;
use node_test_rig::ClientConfig;
use node_test_rig::{
environment::EnvironmentBuilder, testing_client_config, ClientGenesis, ValidatorConfig,
ValidatorFiles,
};
use std::net::{IpAddr, Ipv4Addr};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
@ -77,6 +78,10 @@ fn syncing_sim(
beacon_config.network.enr_address = Some(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)));
// Generate the directories and keystores required for the validator clients.
let validator_indices = (0..num_validators).collect::<Vec<_>>();
let validator_files = ValidatorFiles::with_keystores(&validator_indices).unwrap();
let main_future = async {
/*
* Create a new `LocalNetwork` with one beacon node.
@ -87,7 +92,7 @@ fn syncing_sim(
* Add a validator client which handles all validators from the genesis state.
*/
network
.add_validator_client(ValidatorConfig::default(), 0, (0..num_validators).collect())
.add_validator_client(ValidatorConfig::default(), 0, validator_files)
.await?;
// Check all syncing strategies one after other.

View File

@ -45,3 +45,5 @@ remote_beacon_node = { path = "../eth2/utils/remote_beacon_node" }
tempdir = "0.3.7"
rayon = "1.3.0"
web3 = "0.10.0"
validator_dir = { path = "../eth2/utils/validator_dir" }
clap_utils = { path = "../eth2/utils/clap_utils" }

View File

@ -1,11 +1,13 @@
use crate::config::DEFAULT_HTTP_SERVER;
use clap::{App, Arg, SubCommand};
use clap::{App, Arg};
pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
App::new("validator_client")
.visible_aliases(&["v", "vc", "validator"])
.about("When connected to a beacon node, performs the duties of a staked \
validator (e.g., proposing blocks and attestations).")
.about(
"When connected to a beacon node, performs the duties of a staked \
validator (e.g., proposing blocks and attestations).",
)
.arg(
Arg::with_name("server")
.long("server")
@ -15,57 +17,31 @@ pub fn cli_app<'a, 'b>() -> App<'a, 'b> {
.takes_value(true),
)
.arg(
Arg::with_name("allow-unsynced")
.long("allow-unsynced")
.help("If present, the validator client will still poll for duties if the beacon \
node is not synced.")
Arg::with_name("secrets-dir")
.long("secrets-dir")
.value_name("SECRETS_DIRECTORY")
.help(
"The directory which contains the password to unlock the validator \
voting keypairs. Each password should be contained in a file where the \
name is the 0x-prefixed hex representation of the validators voting public \
key. Defaults to ~/.lighthouse/secrets.",
)
.takes_value(true),
)
.arg(
Arg::with_name("auto-register")
.long("auto-register")
.help("If present, the validator client will register any new signing keys with \
.arg(Arg::with_name("auto-register").long("auto-register").help(
"If present, the validator client will register any new signing keys with \
the slashing protection database so that they may be used. WARNING: \
enabling the same signing key on multiple validator clients WILL lead to \
that validator getting slashed. Only use this flag the first time you run \
the validator client, or if you're certain there are no other \
nodes using the same key.")
)
/*
* The "testnet" sub-command.
*
* Used for starting testnet validator clients.
*/
.subcommand(SubCommand::with_name("testnet")
.about("Starts a testnet validator using INSECURE, predicatable private keys, based off the canonical \
validator index. ONLY USE FOR TESTING PURPOSES!")
.subcommand(SubCommand::with_name("insecure")
.about("Uses the standard, predicatable `interop` keygen method to produce a range \
of predicatable private keys and starts performing their validator duties.")
.arg(Arg::with_name("first_validator")
.value_name("VALIDATOR_INDEX")
.required(true)
.help("The first validator public key to be generated for this client."))
.arg(Arg::with_name("last_validator")
.value_name("VALIDATOR_INDEX")
.required(true)
.help("The end of the range of keys to generate. This index is not generated."))
)
.subcommand(SubCommand::with_name("interop-yaml")
.about("Loads plain-text secret keys from YAML files. Expects the interop format defined
in the ethereum/eth2.0-pm repo.")
.arg(Arg::with_name("path")
.value_name("PATH")
.required(true)
.help("Path to a YAML file."))
)
)
.subcommand(SubCommand::with_name("sign_block")
.about("Connects to the beacon server, requests a new block (after providing reveal),\
and prints the signed block to standard out")
.arg(Arg::with_name("validator")
.value_name("VALIDATOR")
.required(true)
.help("The pubkey of the validator that should sign the block.")
)
nodes using the same key.",
))
.arg(
Arg::with_name("allow-unsynced")
.long("allow-unsynced")
.help(
"If present, the validator client will still poll for duties if the beacon
node is not synced.",
),
)
}
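
A hedged sketch of parsing the reworked flags with the `cli_app` above, assuming clap 2.x's `get_matches_from`; the secrets path is hypothetical.

fn parse_flags_sketch() {
    let matches = cli_app().get_matches_from(vec![
        "validator_client",
        "--secrets-dir",
        "/tmp/lighthouse-test/secrets",
        "--auto-register",
        "--allow-unsynced",
    ]);
    // The boolean flags are simple presence checks.
    assert!(matches.is_present("auto-register"));
    assert!(matches.is_present("allow-unsynced"));
    // The secrets directory is passed through as a plain value.
    assert_eq!(
        matches.value_of("secrets-dir"),
        Some("/tmp/lighthouse-test/secrets")
    );
}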

View File

@ -1,35 +1,21 @@
use clap::ArgMatches;
use clap_utils::{parse_optional, parse_path_with_default_in_home_dir};
use serde_derive::{Deserialize, Serialize};
use std::path::PathBuf;
pub const DEFAULT_HTTP_SERVER: &str = "http://localhost:5052/";
pub const DEFAULT_DATA_DIR: &str = ".lighthouse/validators";
pub const DEFAULT_SECRETS_DIR: &str = ".lighthouse/secrets";
/// Path to the slashing protection database within the datadir.
pub const SLASHING_PROTECTION_FILENAME: &str = "slashing_protection.sqlite";
/// Specifies a method for obtaining validator keypairs.
#[derive(Clone)]
pub enum KeySource {
/// Load the keypairs from disk.
Disk,
/// Generate the keypairs (insecure, generates predictable keys).
InsecureKeypairs(Vec<usize>),
}
impl Default for KeySource {
fn default() -> Self {
KeySource::Disk
}
}
/// Stores the core configuration for this validator instance.
#[derive(Clone, Serialize, Deserialize)]
pub struct Config {
/// The data directory, which stores all validator databases
pub data_dir: PathBuf,
/// Specifies how the validator client should load keypairs.
#[serde(skip)]
pub key_source: KeySource,
/// The directory containing the passwords to unlock validator keystores.
pub secrets_dir: PathBuf,
/// The http endpoint of the beacon node API.
///
/// Should be similar to `http://localhost:8080`
@ -47,9 +33,12 @@ impl Default for Config {
let data_dir = dirs::home_dir()
.map(|home| home.join(DEFAULT_DATA_DIR))
.unwrap_or_else(|| PathBuf::from("."));
let secrets_dir = dirs::home_dir()
.map(|home| home.join(DEFAULT_SECRETS_DIR))
.unwrap_or_else(|| PathBuf::from("."));
Self {
data_dir,
key_source: <_>::default(),
secrets_dir,
http_server: DEFAULT_HTTP_SERVER.to_string(),
allow_unsynced_beacon_node: false,
auto_register: false,
@ -63,71 +52,37 @@ impl Config {
pub fn from_cli(cli_args: &ArgMatches) -> Result<Config, String> {
let mut config = Config::default();
// Read the `--datadir` flag.
//
// If it's not present, try and find the home directory (`~`) and push the default data
// directory onto it. If the home directory is not available, use the present directory.
config.data_dir = cli_args
.value_of("datadir")
.map(PathBuf::from)
.unwrap_or_else(|| {
dirs::home_dir()
.map(|home| home.join(DEFAULT_DATA_DIR))
.unwrap_or_else(|| PathBuf::from("."))
});
config.data_dir = parse_path_with_default_in_home_dir(
cli_args,
"datadir",
PathBuf::from(".lighthouse").join("validators"),
)?;
if let Some(server) = cli_args.value_of("server") {
config.http_server = server.to_string();
if !config.data_dir.exists() {
return Err(format!(
"The directory for validator data (--datadir) does not exist: {:?}",
config.data_dir
));
}
let mut config = match cli_args.subcommand() {
("testnet", Some(sub_cli_args)) => {
if cli_args.is_present("eth2-config") && sub_cli_args.is_present("bootstrap") {
return Err(
"Cannot specify --eth2-config and --bootstrap as it may result \
in ambiguity."
.into(),
);
}
process_testnet_subcommand(sub_cli_args, config)?
}
_ => {
config.key_source = KeySource::Disk;
config
}
};
if let Some(server) = parse_optional(cli_args, "server")? {
config.http_server = server;
}
config.allow_unsynced_beacon_node = cli_args.is_present("allow-unsynced");
config.auto_register = cli_args.is_present("auto-register");
if let Some(secrets_dir) = parse_optional(cli_args, "secrets-dir")? {
config.secrets_dir = secrets_dir;
}
if !config.secrets_dir.exists() {
return Err(format!(
"The directory for validator passwords (--secrets-dir) does not exist: {:?}",
config.secrets_dir
));
}
Ok(config)
}
}
/// Parses the `testnet` CLI subcommand, modifying the `config` based upon the parameters in
/// `cli_args`.
fn process_testnet_subcommand(cli_args: &ArgMatches, mut config: Config) -> Result<Config, String> {
config.key_source = match cli_args.subcommand() {
("insecure", Some(sub_cli_args)) => {
let first = sub_cli_args
.value_of("first_validator")
.ok_or_else(|| "No first validator supplied")?
.parse::<usize>()
.map_err(|e| format!("Unable to parse first validator: {:?}", e))?;
let last = sub_cli_args
.value_of("last_validator")
.ok_or_else(|| "No last validator supplied")?
.parse::<usize>()
.map_err(|e| format!("Unable to parse last validator: {:?}", e))?;
if last < first {
return Err("Cannot supply a last validator less than the first".to_string());
}
KeySource::InsecureKeypairs((first..last).collect())
}
_ => KeySource::Disk,
};
Ok(config)
}
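
A minimal sketch of a hand-built `Config` for local testing, using only the fields shown above; the paths are hypothetical and, as `from_cli` enforces for the CLI path, the directories must already exist.

fn local_test_config() -> Config {
    let mut config = Config::default();
    // Point at pre-existing, throwaway directories rather than ~/.lighthouse.
    config.data_dir = PathBuf::from("/tmp/lighthouse-test/validators");
    config.secrets_dir = PathBuf::from("/tmp/lighthouse-test/secrets");
    config.allow_unsynced_beacon_node = true;
    config.auto_register = false;
    config
}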

View File

@ -56,7 +56,7 @@ impl DutyAndProof {
let selection_proof = validator_store
.produce_selection_proof(&self.duty.validator_pubkey, slot)
.ok_or_else(|| "Validator pubkey missing from store".to_string())?;
.ok_or_else(|| "Failed to produce selection proof".to_string())?;
self.selection_proof = selection_proof
.is_aggregator_from_modulo(modulo)

View File

@ -8,10 +8,8 @@ mod is_synced;
mod notifier;
mod validator_store;
pub mod validator_directory;
pub use cli::cli_app;
pub use config::{Config, KeySource};
pub use config::Config;
use attestation_service::{AttestationService, AttestationServiceBuilder};
use block_service::{BlockService, BlockServiceBuilder};
@ -159,28 +157,14 @@ impl<T: EthSpec> ProductionValidatorClient<T> {
.runtime_context(context.service_context("fork".into()))
.build()?;
let validator_store: ValidatorStore<SystemTimeSlotClock, T> = match &config.key_source {
// Load pre-existing validators from the data dir.
//
// Use the `account_manager` to generate these files.
KeySource::Disk => ValidatorStore::load_from_disk(
config.data_dir.clone(),
let validator_store: ValidatorStore<SystemTimeSlotClock, T> =
ValidatorStore::load_from_disk(
&config,
genesis_validators_root,
context.eth2_config.spec.clone(),
fork_service.clone(),
log.clone(),
)?,
// Generate ephemeral insecure keypairs for testing purposes.
//
// Do not use in production.
KeySource::InsecureKeypairs(indices) => ValidatorStore::insecure_ephemeral_validators(
&indices,
genesis_validators_root,
context.eth2_config.spec.clone(),
fork_service.clone(),
log.clone(),
)?,
};
)?;
info!(
log,

View File

@ -1,26 +1,29 @@
use crate::config::SLASHING_PROTECTION_FILENAME;
use crate::fork_service::ForkService;
use crate::validator_directory::{ValidatorDirectory, ValidatorDirectoryBuilder};
use crate::{config::Config, fork_service::ForkService};
use parking_lot::RwLock;
use rayon::prelude::*;
use slashing_protection::{NotSafe, Safe, SlashingDatabase};
use slog::{crit, error, warn, Logger};
use slot_clock::SlotClock;
use std::collections::HashMap;
use std::fs::read_dir;
use std::iter::FromIterator;
use std::marker::PhantomData;
use std::path::PathBuf;
use std::sync::Arc;
use tempdir::TempDir;
use types::{
Attestation, BeaconBlock, ChainSpec, Domain, Epoch, EthSpec, Fork, Hash256, PublicKey,
Attestation, BeaconBlock, ChainSpec, Domain, Epoch, EthSpec, Fork, Hash256, Keypair, PublicKey,
SelectionProof, Signature, SignedAggregateAndProof, SignedBeaconBlock, SignedRoot, Slot,
};
use validator_dir::{Manager as ValidatorManager, ValidatorDir};
#[derive(PartialEq)]
struct LocalValidator {
validator_dir: ValidatorDir,
voting_keypair: Keypair,
}
#[derive(Clone)]
pub struct ValidatorStore<T, E: EthSpec> {
validators: Arc<RwLock<HashMap<PublicKey, ValidatorDirectory>>>,
validators: Arc<RwLock<HashMap<PublicKey, LocalValidator>>>,
slashing_protection: SlashingDatabase,
genesis_validators_root: Hash256,
spec: Arc<ChainSpec>,
@ -32,13 +35,13 @@ pub struct ValidatorStore<T, E: EthSpec> {
impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
pub fn load_from_disk(
base_dir: PathBuf,
config: &Config,
genesis_validators_root: Hash256,
spec: ChainSpec,
fork_service: ForkService<T, E>,
log: Logger,
) -> Result<Self, String> {
let slashing_db_path = base_dir.join(SLASHING_PROTECTION_FILENAME);
let slashing_db_path = config.data_dir.join(SLASHING_PROTECTION_FILENAME);
let slashing_protection =
SlashingDatabase::open_or_create(&slashing_db_path).map_err(|e| {
format!(
@ -47,39 +50,23 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
)
})?;
let validator_key_values = read_dir(&base_dir)
.map_err(|e| format!("Failed to read base directory {:?}: {:?}", base_dir, e))?
.collect::<Vec<_>>()
.into_par_iter()
.filter_map(|validator_dir| {
let path = validator_dir.ok()?.path();
if path.is_dir() {
match ValidatorDirectory::load_for_signing(path.clone()) {
Ok(validator_directory) => Some(validator_directory),
Err(e) => {
error!(
log,
"Failed to load a validator directory";
"error" => e,
"path" => path.to_str(),
);
None
}
}
} else {
None
}
})
.filter_map(|validator_directory| {
validator_directory
.voting_keypair
.clone()
.map(|voting_keypair| (voting_keypair.pk, validator_directory))
let validator_key_values = ValidatorManager::open(&config.data_dir)
.map_err(|e| format!("unable to read data_dir: {:?}", e))?
.decrypt_all_validators(config.secrets_dir.clone())
.map_err(|e| format!("unable to decrypt all validator directories: {:?}", e))?
.into_iter()
.map(|(kp, dir)| {
(
kp.pk.clone(),
LocalValidator {
validator_dir: dir,
voting_keypair: kp,
},
)
});
Ok(Self {
validators: Arc::new(RwLock::new(HashMap::from_par_iter(validator_key_values))),
validators: Arc::new(RwLock::new(HashMap::from_iter(validator_key_values))),
slashing_protection,
genesis_validators_root,
spec: Arc::new(spec),
@ -90,54 +77,6 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
})
}
pub fn insecure_ephemeral_validators(
validator_indices: &[usize],
genesis_validators_root: Hash256,
spec: ChainSpec,
fork_service: ForkService<T, E>,
log: Logger,
) -> Result<Self, String> {
let temp_dir = TempDir::new("insecure_validator")
.map_err(|e| format!("Unable to create temp dir: {:?}", e))?;
let data_dir = PathBuf::from(temp_dir.path());
let slashing_db_path = data_dir.join(SLASHING_PROTECTION_FILENAME);
let slashing_protection = SlashingDatabase::create(&slashing_db_path)
.map_err(|e| format!("Failed to create slashing protection database: {:?}", e))?;
let validators = validator_indices
.par_iter()
.map(|index| {
ValidatorDirectoryBuilder::default()
.spec(spec.clone())
.full_deposit_amount()?
.insecure_keypairs(*index)
.create_directory(data_dir.clone())?
.write_keypair_files()?
.write_eth1_data_file()?
.build()
})
.collect::<Result<Vec<_>, _>>()?
.into_iter()
.filter_map(|validator_directory| {
validator_directory
.voting_keypair
.clone()
.map(|voting_keypair| (voting_keypair.pk, validator_directory))
});
Ok(Self {
validators: Arc::new(RwLock::new(HashMap::from_iter(validators))),
slashing_protection,
genesis_validators_root,
spec: Arc::new(spec),
log,
temp_dir: Some(Arc::new(temp_dir)),
fork_service,
_phantom: PhantomData,
})
}
/// Register all known validators with the slashing protection database.
///
/// Registration is required to protect against a lost or missing slashing database,
@ -175,8 +114,8 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
self.validators
.read()
.get(validator_pubkey)
.and_then(|validator_dir| {
let voting_keypair = validator_dir.voting_keypair.as_ref()?;
.and_then(|local_validator| {
let voting_keypair = &local_validator.voting_keypair;
let domain = self.spec.get_domain(
epoch,
Domain::Randao,
@ -226,7 +165,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
Ok(Safe::Valid) => {
let validators = self.validators.read();
let validator = validators.get(validator_pubkey)?;
let voting_keypair = validator.voting_keypair.as_ref()?;
let voting_keypair = &validator.voting_keypair;
Some(block.sign(
&voting_keypair.sk,
@ -294,7 +233,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
Ok(Safe::Valid) => {
let validators = self.validators.read();
let validator = validators.get(validator_pubkey)?;
let voting_keypair = validator.voting_keypair.as_ref()?;
let voting_keypair = &validator.voting_keypair;
attestation
.sign(
@ -355,7 +294,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
selection_proof: SelectionProof,
) -> Option<SignedAggregateAndProof<E>> {
let validators = self.validators.read();
let voting_keypair = validators.get(validator_pubkey)?.voting_keypair.as_ref()?;
let voting_keypair = &validators.get(validator_pubkey)?.voting_keypair;
Some(SignedAggregateAndProof::from_aggregate(
validator_index,
@ -376,7 +315,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
slot: Slot,
) -> Option<SelectionProof> {
let validators = self.validators.read();
let voting_keypair = validators.get(validator_pubkey)?.voting_keypair.as_ref()?;
let voting_keypair = &validators.get(validator_pubkey)?.voting_keypair;
Some(SelectionProof::new::<E>(
slot,