Update local testnet scripts, fix eth1 sim (#1184)

* Update local testnet scripts

* Add logs when decrypting validators

* Update comment

* Update account manager

* Make random key generation explicit

* Remove unnecessary clap constraint

* Only decrypt voting keypair for eth1 deposit

* Use insecure kdf for insecure keypairs

* Simplify local testnet keygen

* Update local testnet

* Fix eth1 sim

* Add eth1 sim to CI again

* Remove old local testnet docs

* Tidy

* Remove checks for existing validators

* Tidy

* Fix typos
Paul Hauner 2020-05-26 18:30:44 +10:00 committed by GitHub
parent d41a9f7aa6
commit 8bc82c573d
25 changed files with 599 additions and 338 deletions


@@ -62,6 +62,15 @@ jobs:
     - uses: actions/checkout@v1
     - name: Build the root Dockerfile
       run: docker build .
  eth1-simulator-ubuntu:
    runs-on: ubuntu-latest
    needs: cargo-fmt
    steps:
      - uses: actions/checkout@v1
      - name: Install ganache-cli
        run: sudo npm install -g ganache-cli
      - name: Run the beacon chain sim that starts from an eth1 contract
        run: cargo run --release --bin simulator eth1-sim
  no-eth1-simulator-ubuntu:
    runs-on: ubuntu-latest
    needs: cargo-fmt

Cargo.lock generated

@@ -5412,6 +5412,7 @@ dependencies = [
 "hex 0.4.2",
 "rand 0.7.3",
 "rayon",
 "slog",
 "tempfile",
 "tree_hash",
 "types",


@@ -1,154 +1,8 @@
 # Local Testnets
-> This section is about running your own private local testnets.
-> - If you wish to join the ongoing public testnet, please read [become a validator](./become-a-validator.md).
-It is possible to create local, short-lived Lighthouse testnets that _don't_
-require a deposit contract and Eth1 connection. There are two components
-required for this:
-1. Creating a "testnet directory", containing the configuration of your new
-   testnet.
-1. Using the `--dummy-eth1` flag on your beacon node to avoid needing an Eth1
-   node for block production.
-There is a TL;DR (too long; didn't read), followed by detailed steps if the
-TL;DR isn't adequate.
+During development and testing it can be useful to start a small, local
+testnet.
+The
+[scripts/local_testnet/](https://github.com/sigp/lighthouse/tree/master/scripts)
+directory contains several scripts and a README that should make this process easy.
## TL;DR
```bash
make install-lcli
lcli new-testnet
lcli interop-genesis 128
lighthouse bn --testnet-dir ~/.lighthouse/testnet --dummy-eth1 --http --enr-match
lighthouse vc --testnet-dir ~/.lighthouse/testnet --auto-register --allow-unsynced testnet insecure 0 128
```
Optionally update the genesis time to now:
```bash
lcli change-genesis-time ~/.lighthouse/testnet/genesis.ssz $(date +%s)
```
## 1. Creating a testnet directory
### 1.1 Install `lcli`
This guide requires `lcli`, the "Lighthouse CLI tool". It is a development tool
used for starting testnets and debugging.
Install `lcli` from the root directory of this repository with:
```bash
make install-lcli
```
### 1.2 Create a testnet directory
The default location for a testnet directory is `~/.lighthouse/testnet`. We'll
use this directory to keep the examples simple, however you can always specify
a different directory using the `--testnet-dir` flag.
Once you have `lcli` installed, create a new testnet directory with:
```bash
lcli new-testnet
```
> - This will create a "mainnet" spec testnet. To create a minimal spec use `lcli --spec minimal new-testnet`.
> - The `lcli new-testnet` command has many options, use `lcli new-testnet --help` to see them.
### 1.3 Create a genesis state
Your new testnet directory at `~/.lighthouse/testnet` doesn't yet have a
genesis state (`genesis.ssz`). Since there's no deposit contract in this
testnet, there's no way for nodes to find genesis themselves.
Manually create an "interop" genesis state with `128` validators:
```bash
lcli interop-genesis 128
```
> - A custom genesis time can be provided with `-t`.
> - See `lcli interop-genesis --help` for more info.
## 2. Start the beacon nodes and validator clients
Now that the testnet has been specified in `~/.lighthouse/testnet`, it's time to
start a beacon node and validator client.
### 2.1 Start a beacon node
Start a beacon node:
```bash
lighthouse bn --testnet-dir ~/.lighthouse/testnet --dummy-eth1 --http --enr-match
```
> - `--testnet-dir` instructs the beacon node to use the spec we generated earlier.
> - `--dummy-eth1` uses deterministic "junk data" for linking to the eth1 chain, avoiding the requirement for an eth1 node. The downside is that new validators cannot be on-boarded after genesis.
> - `--http` starts the REST API so the validator client can produce blocks.
> - `--enr-match` sets the local ENR to use the local IP address and port which allows other nodes to connect. This node can then behave as a bootnode for other nodes.
### 2.2 Start a validator client
Once the beacon node has started and begun trying to sync, start a validator
client:
```bash
lighthouse vc --testnet-dir ~/.lighthouse/testnet --auto-register --allow-unsynced testnet insecure 0 128
```
> - `--testnet-dir` instructs the validator client to use the spec we generated earlier.
> - `--auto-register` enables slashing protection and signing for any new validator keys.
> - `--allow-unsynced` stops the validator client from checking whether the beacon node is synced prior to producing blocks.
> - `testnet insecure 0 128` instructs the validator client to use insecure
> testnet private keys and that it should control validators from `0` to
> `127` (inclusive).
## 3. Connect other nodes
Other nodes can now join this local testnet.
The initial node will output the ENR on boot. The ENR can also be obtained via
the http:
```bash
curl localhost:5052/network/enr
```
or from its default directory:
```
~/.lighthouse/beacon/network/enr.dat
```
Once the ENR of the first node is obtained, other nodes may connect and
participate in the local network. Simply run:
```bash
lighthouse bn --testnet-dir ~/.lighthouse/testnet --dummy-eth1 --http --http-port 5053 --port 9002 --boot-nodes <ENR>
```
> - `--testnet-dir` instructs the beacon node to use the spec we generated earlier.
> - `--dummy-eth1` uses deterministic "junk data" for linking to the eth1 chain, avoiding the requirement for an eth1 node. The downside is that new validators cannot be on-boarded after genesis.
> - `--http` starts the REST API so the validator client can produce blocks.
> - `--http-port` sets the REST API port to a non-standard port to avoid conflicts with the first local node.
> - `--port` sets the ports of the lighthouse client to a non-standard value to avoid conflicts with the original node.
> - `--boot-nodes` provides the ENR of the original node to connect to. Note all nodes can use this ENR and should discover each other automatically via the discv5 discovery.
Note: the `--enr-match` flag is only required for the boot node. The local ENR of
all subsequent nodes will update automatically.
This node should now connect to the original node, sync and follow its head.
## 4. Updating genesis time
To re-use a testnet directory one may simply update the genesis time and repeat
the process.
To update the genesis time of a `genesis.ssz` file, use the following command:
```bash
lcli change-genesis-time ~/.lighthouse/testnet/genesis.ssz $(date +%s)
```


@@ -1 +0,0 @@
# Simple Local Testnet


@@ -19,6 +19,7 @@ rand = "0.7.2"
deposit_contract = { path = "../deposit_contract" }
rayon = "1.3.0"
tree_hash = { path = "../../consensus/tree_hash" }
slog = { version = "2.5.2", features = ["max_level_trace", "release_max_level_trace"] }
hex = "0.4.2"

[dev-dependencies]


@@ -36,6 +36,8 @@ pub enum Error {
    UnableToSavePassword(io::Error),
    KeystoreError(KeystoreError),
    UnableToOpenDir(DirError),
    UninitializedVotingKeystore,
    UninitializedWithdrawalKeystore,
    #[cfg(feature = "insecure_keys")]
    InsecureKeysError(String),
}
@@ -71,8 +73,7 @@ impl<'a> Builder<'a> {
    /// Build the `ValidatorDir` using the given `keystore` which can be unlocked with `password`.
    ///
-   /// If this argument (or equivalent key specification argument) is not supplied a keystore will
-   /// be randomly generated.
+   /// The builder will not necessarily check that `password` can unlock `keystore`.
    pub fn voting_keystore(mut self, keystore: Keystore, password: &[u8]) -> Self {
        self.voting_keystore = Some((keystore, password.to_vec().into()));
        self
@@ -80,13 +81,27 @@ impl<'a> Builder<'a> {
    /// Build the `ValidatorDir` using the given `keystore` which can be unlocked with `password`.
    ///
-   /// If this argument (or equivalent key specification argument) is not supplied a keystore will
-   /// be randomly generated.
+   /// The builder will not necessarily check that `password` can unlock `keystore`.
    pub fn withdrawal_keystore(mut self, keystore: Keystore, password: &[u8]) -> Self {
        self.withdrawal_keystore = Some((keystore, password.to_vec().into()));
        self
    }

    /// Build the `ValidatorDir` using a randomly generated voting keypair.
    pub fn random_voting_keystore(mut self) -> Result<Self, Error> {
        self.voting_keystore = Some(random_keystore()?);
        Ok(self)
    }

    /// Build the `ValidatorDir` using a randomly generated withdrawal keypair.
    ///
    /// Also calls `Self::store_withdrawal_keystore(true)` in an attempt to protect against data
    /// loss.
    pub fn random_withdrawal_keystore(mut self) -> Result<Self, Error> {
        self.withdrawal_keystore = Some(random_keystore()?);
        Ok(self.store_withdrawal_keystore(true))
    }

    /// Upon build, create files in the `ValidatorDir` which will permit the submission of a
    /// deposit to the eth1 deposit contract with the given `deposit_amount`.
    pub fn create_eth1_tx_data(mut self, deposit_amount: u64, spec: &'a ChainSpec) -> Self {
@@ -113,31 +128,10 @@ impl<'a> Builder<'a> {
    }
    /// Consumes `self`, returning a `ValidatorDir` if no error is encountered.
-   pub fn build(mut self) -> Result<ValidatorDir, Error> {
-       // If the withdrawal keystore will be generated randomly, always store it.
-       if self.withdrawal_keystore.is_none() {
-           self.store_withdrawal_keystore = true;
-       }
-       // Attempts to get `self.$keystore`, unwrapping it into a random keystore if it is `None`.
-       // Then, decrypts the keypair from the keystore.
-       macro_rules! expand_keystore {
-           ($keystore: ident) => {
-               self.$keystore
-                   .map(Result::Ok)
-                   .unwrap_or_else(random_keystore)
-                   .and_then(|(keystore, password)| {
-                       keystore
-                           .decrypt_keypair(password.as_bytes())
-                           .map(|keypair| (keystore, password, keypair))
-                           .map_err(Into::into)
-                   })?;
-           };
-       }
-       let (voting_keystore, voting_password, voting_keypair) = expand_keystore!(voting_keystore);
-       let (withdrawal_keystore, withdrawal_password, withdrawal_keypair) =
-           expand_keystore!(withdrawal_keystore);
+   pub fn build(self) -> Result<ValidatorDir, Error> {
+       let (voting_keystore, voting_password) = self
+           .voting_keystore
+           .ok_or_else(|| Error::UninitializedVotingKeystore)?;

        let dir = self
            .base_validators_dir
@@ -149,6 +143,23 @@ impl<'a> Builder<'a> {
        create_dir_all(&dir).map_err(Error::UnableToCreateDir)?;
    }
// The withdrawal keystore must be initialized in order to store it or create an eth1
// deposit.
if (self.store_withdrawal_keystore || self.deposit_info.is_some())
&& self.withdrawal_keystore.is_none()
{
return Err(Error::UninitializedWithdrawalKeystore);
};
if let Some((withdrawal_keystore, withdrawal_password)) = self.withdrawal_keystore {
// Attempt to decrypt the voting keypair.
let voting_keypair = voting_keystore.decrypt_keypair(voting_password.as_bytes())?;
// Attempt to decrypt the withdrawal keypair.
let withdrawal_keypair =
withdrawal_keystore.decrypt_keypair(withdrawal_password.as_bytes())?;
// If a deposit amount was specified, create a deposit.
            if let Some((amount, spec)) = self.deposit_info {
                let withdrawal_credentials = Hash256::from_slice(&get_withdrawal_credentials(
                    &withdrawal_keypair.pk,
@@ -169,7 +180,7 @@ impl<'a> Builder<'a> {
                // Save `ETH1_DEPOSIT_DATA_FILE` to file.
                //
-               // This allows us to know the RLP data for the eth1 transaction without needed to know
+               // This allows us to know the RLP data for the eth1 transaction without needing to know
                // the withdrawal/voting keypairs again at a later date.
                let path = dir.clone().join(ETH1_DEPOSIT_DATA_FILE);
                if path.exists() {
@@ -204,27 +215,34 @@ impl<'a> Builder<'a> {
            }
        }
-       write_password_to_file(
-           self.password_dir
-               .clone()
-               .join(voting_keypair.pk.as_hex_string()),
-           voting_password.as_bytes(),
-       )?;
-       write_keystore_to_file(dir.clone().join(VOTING_KEYSTORE_FILE), &voting_keystore)?;
+           // Only store the withdrawal keystore if explicitly required.
            if self.store_withdrawal_keystore {
+               // Write the withdrawal password to file.
                write_password_to_file(
                    self.password_dir
                        .clone()
                        .join(withdrawal_keypair.pk.as_hex_string()),
                    withdrawal_password.as_bytes(),
                )?;

+               // Write the withdrawal keystore to file.
                write_keystore_to_file(
                    dir.clone().join(WITHDRAWAL_KEYSTORE_FILE),
                    &withdrawal_keystore,
                )?;
            }
+       }
+
+       // Write the voting password to file.
+       write_password_to_file(
+           self.password_dir
+               .clone()
+               .join(format!("0x{}", voting_keystore.pubkey())),
+           voting_password.as_bytes(),
+       )?;
+
+       // Write the voting keystore to file.
+       write_keystore_to_file(dir.clone().join(VOTING_KEYSTORE_FILE), &voting_keystore)?;

        ValidatorDir::open(dir).map_err(Error::UnableToOpenDir)
    }


@@ -5,7 +5,10 @@
#![cfg(feature = "insecure_keys")]

use crate::{Builder, BuilderError};
-use eth2_keystore::{Keystore, KeystoreBuilder, PlainText};
+use eth2_keystore::{
+    json_keystore::{Kdf, Scrypt},
+    Keystore, KeystoreBuilder, PlainText, DKLEN,
+};
use std::path::PathBuf;
use types::test_utils::generate_deterministic_keypair;

@@ -13,19 +16,17 @@ use types::test_utils::generate_deterministic_keypair;
pub const INSECURE_PASSWORD: &[u8] = &[30; 32];

impl<'a> Builder<'a> {
-   /// Generate the voting and withdrawal keystores using deterministic, well-known, **unsafe**
-   /// keypairs.
+   /// Generate the voting keystore using a deterministic, well-known, **unsafe** keypair.
    ///
    /// **NEVER** use these keys in production!
-   pub fn insecure_keys(mut self, deterministic_key_index: usize) -> Result<Self, BuilderError> {
+   pub fn insecure_voting_keypair(
+       mut self,
+       deterministic_key_index: usize,
+   ) -> Result<Self, BuilderError> {
        self.voting_keystore = Some(
            generate_deterministic_keystore(deterministic_key_index)
                .map_err(BuilderError::InsecureKeysError)?,
        );
-       self.withdrawal_keystore = Some(
-           generate_deterministic_keystore(deterministic_key_index)
-               .map_err(BuilderError::InsecureKeysError)?,
-       );
        Ok(self)
    }
}
@@ -39,12 +40,29 @@ pub fn generate_deterministic_keystore(i: usize) -> Result<(Keystore, PlainText)
    let keystore = KeystoreBuilder::new(&keypair, INSECURE_PASSWORD, "".into())
        .map_err(|e| format!("Unable to create keystore builder: {:?}", e))?
        .kdf(insecure_kdf())
        .build()
        .map_err(|e| format!("Unable to build keystore: {:?}", e))?;

    Ok((keystore, INSECURE_PASSWORD.to_vec().into()))
}
/// Returns an INSECURE key derivation function.
///
/// **NEVER** use this KDF in production!
fn insecure_kdf() -> Kdf {
Kdf::Scrypt(Scrypt {
dklen: DKLEN,
// `n` is set very low, making it cheap to encrypt/decrypt keystores.
//
// This is very insecure, only use during testing.
n: 2,
p: 1,
r: 8,
salt: vec![1, 3, 3, 5].into(),
})
}
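The reason `n: 2` makes these keystores cheap is that scrypt's memory and CPU cost scale with `n`. A hedged sketch using Python's stdlib `hashlib.scrypt` (not the repo's Rust code) with the same `r`, `p`, `dklen` shape and the salt from the diff; the password is illustrative:

```python
import hashlib

password, salt = b"insecure-test-password", bytes([1, 3, 3, 5])

# Mirrors the insecure KDF above: n=2 makes key derivation nearly free,
# which is exactly what you want for throwaway testnet keystores.
cheap = hashlib.scrypt(password, salt=salt, n=2, r=8, p=1, dklen=32)

# A stronger cost factor for comparison (production keystores use far
# higher n, so brute-forcing the password is expensive).
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32,
                        maxmem=64 * 1024 * 1024)

# Same inputs, different cost parameters: different derived keys, which is
# why a keystore records its KDF params alongside the ciphertext.
print(len(cheap), len(strong), cheap != strong)  # → 32 32 True
```

Changing the KDF parameters changes the derived key, so the parameters must travel with the keystore JSON, as the `Keystore::kdf` accessor added later in this commit exposes.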
/// A helper function to use the `Builder` to generate deterministic, well-known, **unsafe**
/// validator directories for the given validator `indices`.
///
@@ -56,7 +74,7 @@ pub fn build_deterministic_validator_dirs(
) -> Result<(), String> {
    for &i in indices {
        Builder::new(validators_dir.clone(), password_dir.clone())
-           .insecure_keys(i)
+           .insecure_voting_keypair(i)
            .map_err(|e| format!("Unable to generate insecure keypair: {:?}", e))?
            .store_withdrawal_keystore(false)
            .build()


@@ -1,6 +1,7 @@
use crate::{Error as ValidatorDirError, ValidatorDir};
use bls::Keypair;
use rayon::prelude::*;
use slog::{info, Logger};
use std::collections::HashMap;
use std::fs::read_dir;
use std::io;
@@ -82,18 +83,31 @@ impl Manager {
    /// Opens all the validator directories in `self` and decrypts the validator keypairs.
    ///
    /// If `log.is_some()`, an `info` log will be generated for each decrypted validator.
    ///
    /// ## Errors
    ///
    /// Returns an error if any of the directories is unable to be opened.
    pub fn decrypt_all_validators(
        &self,
        secrets_dir: PathBuf,
        log_opt: Option<&Logger>,
    ) -> Result<Vec<(Keypair, ValidatorDir)>, Error> {
        self.iter_dir()?
            .into_par_iter()
            .map(|path| {
                ValidatorDir::open(path)
                    .and_then(|v| v.voting_keypair(&secrets_dir).map(|kp| (kp, v)))
                    .map(|(kp, v)| {
                        if let Some(log) = log_opt {
                            info!(
                                log,
                                "Decrypted validator keystore";
                                "pubkey" => kp.pk.as_hex_string()
                            )
                        }
                        (kp, v)
                    })
                    .map_err(Error::ValidatorDirError)
            })
            .collect()


@@ -6,8 +6,8 @@ use std::path::Path;
use tempfile::{tempdir, TempDir};
use types::{test_utils::generate_deterministic_keypair, EthSpec, Keypair, MainnetEthSpec};
use validator_dir::{
-   Builder, ValidatorDir, ETH1_DEPOSIT_DATA_FILE, ETH1_DEPOSIT_TX_HASH_FILE, VOTING_KEYSTORE_FILE,
-   WITHDRAWAL_KEYSTORE_FILE,
+   Builder, BuilderError, ValidatorDir, ETH1_DEPOSIT_DATA_FILE, ETH1_DEPOSIT_TX_HASH_FILE,
+   VOTING_KEYSTORE_FILE, WITHDRAWAL_KEYSTORE_FILE,
};

/// A very weak password with which to encrypt the keystores.
@@ -82,17 +82,19 @@ impl Harness {
            self.validators_dir.path().into(),
            self.password_dir.path().into(),
        )
        // Note: setting the withdrawal keystore here ensures that it can be overridden by later
        // calls to `random_withdrawal_keystore`.
        .store_withdrawal_keystore(config.store_withdrawal_keystore);

        let builder = if config.random_voting_keystore {
-           builder
+           builder.random_voting_keystore().unwrap()
        } else {
            let (keystore, password) = generate_deterministic_keystore(0).unwrap();
            builder.voting_keystore(keystore, password.as_bytes())
        };

        let builder = if config.random_withdrawal_keystore {
-           builder
+           builder.random_withdrawal_keystore().unwrap()
        } else {
            let (keystore, password) = generate_deterministic_keystore(1).unwrap();
            builder.withdrawal_keystore(keystore, password.as_bytes())

@@ -201,6 +203,69 @@ fn concurrency() {
    ValidatorDir::open(&path).unwrap();
}
#[test]
fn without_voting_keystore() {
let harness = Harness::new();
assert!(matches!(
Builder::new(
harness.validators_dir.path().into(),
harness.password_dir.path().into(),
)
.random_withdrawal_keystore()
.unwrap()
.build(),
Err(BuilderError::UninitializedVotingKeystore)
))
}
#[test]
fn without_withdrawal_keystore() {
let harness = Harness::new();
let spec = &MainnetEthSpec::default_spec();
// Should build without a withdrawal keystore if not storing it or creating eth1 data.
Builder::new(
harness.validators_dir.path().into(),
harness.password_dir.path().into(),
)
.random_voting_keystore()
.unwrap()
.store_withdrawal_keystore(false)
.build()
.unwrap();
assert!(
matches!(
Builder::new(
harness.validators_dir.path().into(),
harness.password_dir.path().into(),
)
.random_voting_keystore()
.unwrap()
.store_withdrawal_keystore(true)
.build(),
Err(BuilderError::UninitializedWithdrawalKeystore)
),
"storing the withdrawal keystore requires it to be initialized"
);
assert!(
matches!(
Builder::new(
harness.validators_dir.path().into(),
harness.password_dir.path().into(),
)
.random_voting_keystore()
.unwrap()
.create_eth1_tx_data(42, spec)
.build(),
Err(BuilderError::UninitializedWithdrawalKeystore)
),
"creating eth1 tx data requires the withdrawal keystore to be initialized"
);
}
#[test]
fn deterministic_voting_keystore() {
    let harness = Harness::new();

@@ -253,6 +318,20 @@ fn both_keystores_deterministic_without_saving() {
    harness.create_and_test(&config);
}
#[test]
fn both_keystores_random_without_saving() {
let harness = Harness::new();
let config = BuildConfig {
random_voting_keystore: true,
random_withdrawal_keystore: true,
store_withdrawal_keystore: false,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn both_keystores_deterministic_with_saving() {
    let harness = Harness::new();

@@ -278,3 +357,31 @@ fn eth1_data() {
    harness.create_and_test(&config);
}
#[test]
fn store_withdrawal_keystore_without_eth1_data() {
let harness = Harness::new();
let config = BuildConfig {
store_withdrawal_keystore: false,
random_withdrawal_keystore: true,
deposit_amount: None,
..BuildConfig::default()
};
harness.create_and_test(&config);
}
#[test]
fn store_withdrawal_keystore_with_eth1_data() {
let harness = Harness::new();
let config = BuildConfig {
store_withdrawal_keystore: false,
random_withdrawal_keystore: true,
deposit_amount: Some(32000000000),
..BuildConfig::default()
};
harness.create_and_test(&config);
}


@@ -97,6 +97,12 @@ impl<'a> KeystoreBuilder<'a> {
        }
    }
/// Build the keystore using the supplied `kdf` instead of `crate::default_kdf`.
pub fn kdf(mut self, kdf: Kdf) -> Self {
self.kdf = kdf;
self
}
    /// Consumes `self`, returning a `Keystore`.
    pub fn build(self) -> Result<Keystore, Error> {
        Keystore::encrypt(
@@ -208,6 +214,11 @@ impl Keystore {
        &self.json.pubkey
    }
/// Returns the key derivation function for the keystore.
pub fn kdf(&self) -> &Kdf {
&self.json.crypto.kdf.params
}
    /// Encodes `self` as a JSON object.
    pub fn to_json_string(&self) -> Result<String, Error> {
        serde_json::to_string(self).map_err(|e| Error::UnableToSerialize(format!("{}", e)))


@@ -2,7 +2,11 @@
#![cfg(not(debug_assertions))]

use bls::Keypair;
-use eth2_keystore::{Error, Keystore, KeystoreBuilder};
+use eth2_keystore::{
+    default_kdf,
+    json_keystore::{Kdf, Pbkdf2, Prf, Scrypt},
+    Error, Keystore, KeystoreBuilder, DKLEN,
+};
use std::fs::OpenOptions;
use tempfile::tempdir;

@@ -107,3 +111,52 @@ fn scrypt_params() {
        "should decrypt with good password"
    );
}
#[test]
fn custom_scrypt_kdf() {
let keypair = Keypair::random();
let salt = vec![42];
let my_kdf = Kdf::Scrypt(Scrypt {
dklen: DKLEN,
n: 2,
p: 1,
r: 8,
salt: salt.clone().into(),
});
assert!(my_kdf != default_kdf(salt));
let keystore = KeystoreBuilder::new(&keypair, GOOD_PASSWORD, "".into())
.unwrap()
.kdf(my_kdf.clone())
.build()
.unwrap();
assert_eq!(keystore.kdf(), &my_kdf);
}
#[test]
fn custom_pbkdf2_kdf() {
let keypair = Keypair::random();
let salt = vec![42];
let my_kdf = Kdf::Pbkdf2(Pbkdf2 {
dklen: DKLEN,
c: 2,
prf: Prf::HmacSha256,
salt: salt.clone().into(),
});
assert!(my_kdf != default_kdf(salt));
let keystore = KeystoreBuilder::new(&keypair, GOOD_PASSWORD, "".into())
.unwrap()
.kdf(my_kdf.clone())
.build()
.unwrap();
assert_eq!(keystore.kdf(), &my_kdf);
}


@@ -30,6 +30,6 @@ tree_hash = "0.1.0"
tokio = { version = "0.2.20", features = ["full"] }
clap_utils = { path = "../common/clap_utils" }
eth2-libp2p = { path = "../beacon_node/eth2-libp2p" }
-validator_dir = { path = "../common/validator_dir", features = ["unencrypted_keys"] }
+validator_dir = { path = "../common/validator_dir", features = ["insecure_keys"] }
rand = "0.7.2"
eth2_keystore = { path = "../crypto/eth2_keystore" }


@@ -0,0 +1,33 @@
use clap::ArgMatches;
use std::fs;
use std::path::PathBuf;
use validator_dir::Builder as ValidatorBuilder;
pub fn run(matches: &ArgMatches) -> Result<(), String> {
let validator_count: usize = clap_utils::parse_required(matches, "count")?;
let validators_dir: PathBuf = clap_utils::parse_required(matches, "validators-dir")?;
let secrets_dir: PathBuf = clap_utils::parse_required(matches, "secrets-dir")?;
if !validators_dir.exists() {
fs::create_dir_all(&validators_dir)
.map_err(|e| format!("Unable to create validators dir: {:?}", e))?;
}
if !secrets_dir.exists() {
fs::create_dir_all(&secrets_dir)
.map_err(|e| format!("Unable to create secrets dir: {:?}", e))?;
}
for i in 0..validator_count {
println!("Validator {}/{}", i + 1, validator_count);
ValidatorBuilder::new(validators_dir.clone(), secrets_dir.clone())
.store_withdrawal_keystore(false)
.insecure_voting_keypair(i)
.map_err(|e| format!("Unable to generate keys: {:?}", e))?
.build()
.map_err(|e| format!("Unable to build validator: {:?}", e))?;
}
Ok(())
}
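The shape of the loop above is easy to picture outside of Rust. A hedged Python sketch of the same idea, one directory per deterministic validator index; the real builder also writes a keystore and password file into each directory, which this stub does not attempt:

```python
import os
import tempfile

def make_insecure_validator_dirs(base: str, count: int) -> list:
    """Create one directory per validator index, mirroring the 0..count
    loop above (keystore/password contents omitted in this sketch)."""
    paths = []
    for i in range(count):
        print("Validator {}/{}".format(i + 1, count))
        path = os.path.join(base, "validator_{}".format(i))
        os.makedirs(path, exist_ok=True)
        paths.append(path)
    return paths

with tempfile.TemporaryDirectory() as base:
    dirs = make_insecure_validator_dirs(base, 4)
    print(len(dirs))  # → 4
```

Note `os.makedirs(..., exist_ok=True)` keeps the sketch idempotent, matching the command's intent of being safe to re-run against an existing directory.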


@@ -6,6 +6,7 @@ mod check_deposit_data;
mod deploy_deposit_contract;
mod eth1_genesis;
mod generate_bootnode_enr;
mod insecure_validators;
mod interop_genesis;
mod new_testnet;
mod parse_hex;
@@ -440,6 +441,33 @@ fn main() {
                        .help("The directory in which to create the network dir"),
                )
        )
.subcommand(
SubCommand::with_name("insecure-validators")
.about(
"Produces validator directories with INSECURE, deterministic keypairs.",
)
.arg(
Arg::with_name("count")
.long("count")
.value_name("COUNT")
.takes_value(true)
.help("Produces validators in the range of 0..count."),
)
.arg(
Arg::with_name("validators-dir")
.long("validators-dir")
.value_name("VALIDATOR_DIR")
.takes_value(true)
.help("The directory for storing validators."),
)
.arg(
Arg::with_name("secrets-dir")
.long("secrets-dir")
.value_name("SECRETS_DIR")
.takes_value(true)
.help("The directory for storing secrets."),
)
)
        .get_matches();

    macro_rules! run_with_spec {
@@ -544,6 +572,8 @@ fn run<T: EthSpec>(
            .map_err(|e| format!("Failed to run check-deposit-data command: {}", e)),
        ("generate-bootnode-enr", Some(matches)) => generate_bootnode_enr::run::<T>(matches)
            .map_err(|e| format!("Failed to run generate-bootnode-enr command: {}", e)),
        ("insecure-validators", Some(matches)) => insecure_validators::run(matches)
            .map_err(|e| format!("Failed to run insecure-validators command: {}", e)),
        (other, _) => Err(format!("Unknown subcommand {}. See --help.", other)),
    }
}


@@ -0,0 +1,79 @@
# Simple Local Testnet
These scripts allow for running a small local testnet with two beacon nodes and
one validator client. This setup can be useful for testing and development.
## Requirements
The scripts require `lcli` and `lighthouse` to be installed on `PATH`. From the
root of this repository, run:
```bash
cargo install --path lighthouse --force --locked
cargo install --path lcli --force --locked
```
## Starting the testnet
Assuming you are happy with the configuration in `var.env`, create the testnet
directory, genesis state and validator keys with:
```bash
./setup
```
Start the first beacon node:
```bash
./beacon_node
```
In a new terminal, start the validator client which will attach to the first
beacon node:
```bash
./validator_client
```
In a new terminal, start the second beacon node which will peer with the first:
```bash
./second_beacon_node
```
## Additional Info
### Debug level
The beacon nodes and validator client have their `--debug-level` set to `info`.
Specify a different debug level like this:
```bash
./validator_client debug
./beacon_node trace
./second_beacon_node warn
```
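
All three scripts read the debug level the same way: as an optional first positional argument. A minimal sketch of the bash default-expansion idiom they rely on (the `info` fallback matches the scripts above):

```shell
#!/bin/bash
# ${1:-info} expands to the first positional argument if one was given,
# otherwise to the literal string "info".
DEBUG_LEVEL=${1:-info}
echo "$DEBUG_LEVEL"
```

Run with no arguments this prints `info`; run as `./script trace` it prints `trace`.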
### Starting fresh
Delete the current testnet and all related files using:
```bash
./clean
```
### Updating the genesis time of the beacon state
If it's been a while since you ran `./setup` then the genesis time of the
genesis state will be far in the past, causing lots of skip slots.
Update the genesis time to now using:
```bash
./reset_genesis_time
```
> Note: you probably want to drop the beacon node database and the validator
> client slashing database if you do this. When using small validator counts
> it's probably easiest to just use `./clean && ./setup`.


@@ -5,14 +5,17 @@
 # `./local_testnet_genesis_state`.
 #

-TESTNET_DIR=~/.lighthouse/local-testnet/testnet
-DATADIR=~/.lighthouse/local-testnet/beacon
+source ./vars.env

 DEBUG_LEVEL=${1:-info}

 exec lighthouse \
     --debug-level $DEBUG_LEVEL \
     bn \
-    --datadir $DATADIR \
+    --datadir $BEACON_DIR \
     --testnet-dir $TESTNET_DIR \
     --dummy-eth1 \
-    --http
+    --http \
+    --enr-address 127.0.0.1 \
+    --enr-udp-port 9000 \
+    --enr-tcp-port 9000 \

scripts/local_testnet/clean.sh (new executable file)

@@ -0,0 +1,9 @@
#!/bin/bash
#
# Deletes all files associated with the local testnet.
#
source ./vars.env
rm -r $DATADIR


@@ -0,0 +1,16 @@
#!/bin/bash
#
# Resets the beacon state genesis time to now.
#
source ./vars.env
NOW=$(date +%s)
lcli \
change-genesis-time \
$TESTNET_DIR/genesis.ssz \
    $NOW
echo "Reset genesis time to now ($NOW)"
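
`lcli change-genesis-time` takes the new genesis time as a unix timestamp. To double-check what a given timestamp means in human terms, `date` can convert it back. A sketch assuming GNU coreutils (on macOS, `date -u -r "$NOW"` is the equivalent):

```shell
#!/bin/bash
# Print the current unix timestamp, then render it as a UTC datetime.
NOW=$(date +%s)
date -u -d "@$NOW" +"%Y-%m-%dT%H:%M:%SZ"
```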


@@ -0,0 +1,20 @@
#!/bin/bash
#
# Starts a beacon node based upon a genesis state created by
# `./local_testnet_genesis_state`.
#
source ./vars.env
DEBUG_LEVEL=${1:-info}
exec lighthouse \
--debug-level $DEBUG_LEVEL \
bn \
--datadir $BEACON_DIR-2 \
--testnet-dir $TESTNET_DIR \
--dummy-eth1 \
--http \
--http-port 6052 \
--boot-nodes $(cat $BEACON_DIR/beacon/network/enr.dat)


@@ -4,12 +4,8 @@
 # Produces a testnet specification and a genesis state where the genesis time
 # is now.
 #
-# Optionally, supply an integer as the first argument to override the default
-# validator count of 1024.
-#
-TESTNET_DIR=~/.lighthouse/local-testnet/testnet
-VALIDATOR_COUNT=${1:-1024}
+source ./vars.env

 lcli \
     --spec mainnet \
@@ -19,7 +15,16 @@ lcli \
     --min-genesis-active-validator-count $VALIDATOR_COUNT \
     --force

-echo Created tesnet directory at $TESTNET_DIR
+echo Specification generated at $TESTNET_DIR.
+
+echo "Generating $VALIDATOR_COUNT validators concurrently... (this may take a while)"
+
+lcli \
+    insecure-validators \
+    --count $VALIDATOR_COUNT \
+    --validators-dir $VALIDATORS_DIR \
+    --secrets-dir $SECRETS_DIR
+
+echo Validators generated at $VALIDATORS_DIR with keystore passwords at $SECRETS_DIR.

 echo "Building genesis state... (this might take a while)"

 lcli \
@@ -29,5 +34,3 @@ lcli \
     $VALIDATOR_COUNT

 echo Created genesis state in $TESTNET_DIR
-
-echo $VALIDATOR_COUNT > $TESTNET_DIR/validator_count.txt


@@ -5,16 +5,14 @@
 # `./local_testnet_genesis_state`.
 #

-TESTNET_DIR=~/.lighthouse/local-testnet/testnet
-DATADIR=~/.lighthouse/local-testnet/validator
+source ./vars.env

 DEBUG_LEVEL=${1:-info}

 exec lighthouse \
     --debug-level $DEBUG_LEVEL \
     vc \
-    --datadir $DATADIR \
+    --datadir $VALIDATORS_DIR \
+    --secrets-dir $SECRETS_DIR \
     --testnet-dir $TESTNET_DIR \
-    testnet \
-    insecure \
-    0 \
-    $(cat $TESTNET_DIR/validator_count.txt)
+    --auto-register


@@ -0,0 +1,7 @@
DATADIR=~/.lighthouse/local-testnet
TESTNET_DIR=$DATADIR/testnet
BEACON_DIR=$DATADIR/beacon
VALIDATORS_DIR=$DATADIR/validators
SECRETS_DIR=$DATADIR/secrets
VALIDATOR_COUNT=1024
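
As written, `vars.env` assigns fixed values, so changing the validator count means editing the file. If per-run overrides from the environment are preferred, a common alternative (not what the file above uses) is default-expansion, which only falls back when the variable is unset or empty:

```shell
#!/bin/bash
# Keep an existing VALIDATOR_COUNT from the environment; fall back to 1024.
VALIDATOR_COUNT=${VALIDATOR_COUNT:-1024}
echo "$VALIDATOR_COUNT"
```

With that form, `VALIDATOR_COUNT=64 ./setup` would take precedence over the default.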


@@ -1,7 +0,0 @@
#!/bin/bash
#
# Removes any existing local testnet
#
rm -rf ~/.lighthouse/local-testnet


@@ -9,7 +9,6 @@ use node_test_rig::{
 use rayon::prelude::*;
 use std::net::{IpAddr, Ipv4Addr};
 use std::time::Duration;
-use tokio::time::{delay_until, Instant};

 pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
     let node_count = value_t!(matches, "nodes", usize).expect("missing nodes default");
@@ -43,8 +42,6 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
         })
         .collect::<Vec<_>>();

-    let expected_genesis_instant = Instant::now() + Duration::from_secs(60);
-
     let log_level = "debug";
     let log_format = None;
@@ -117,19 +114,17 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
          * Create a new `LocalNetwork` with one beacon node.
          */
         let network = LocalNetwork::new(context, beacon_config.clone()).await?;

         /*
          * One by one, add beacon nodes to the network.
          */
         for _ in 0..node_count - 1 {
             network.add_beacon_node(beacon_config.clone()).await?;
         }

         /*
-         * Create a future that will add validator clients to the network. Each validator client is
-         * attached to a single corresponding beacon node.
+         * One by one, add validators to the network.
          */
-        let add_validators_fut = async {
         for (i, files) in validator_files.into_iter().enumerate() {
             network
                 .add_validator_client(
@@ -143,16 +138,14 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
                 .await?;
         }

-            Ok::<(), String>(())
-        };

         /*
-         * Start the processes that will run checks on the network as it runs.
-         *
-         * We start these checks immediately after the validators have started. This means we're
-         * relying on the validator futures to all return immediately after genesis so that these
-         * tests start at the right time. Whilst this is works well for now, it's subject to
-         * breakage by changes to the VC.
+         * Start the checks that ensure the network performs as expected.
          */
-        let checks_fut = async {
-            delay_until(expected_genesis_instant).await;

         let (finalization, validator_count, onboarding) = futures::join!(
             // Check that the chain finalizes at the first given opportunity.
             checks::verify_first_finalization(network.clone(), slot_duration),
@@ -175,14 +168,6 @@ pub fn run_eth1_sim(matches: &ArgMatches) -> Result<(), String> {
         validator_count?;
         onboarding?;

-        Ok::<(), String>(())
-    };
-
-    let (add_validators, checks) = futures::join!(add_validators_fut, checks_fut);
-    add_validators?;
-    checks?;

     // The `final_future` either completes immediately or never completes, depending on the value
     // of `end_after_checks`.


@@ -71,7 +71,7 @@ impl<T: SlotClock + 'static, E: EthSpec> ValidatorStore<T, E> {
         let validator_key_values = ValidatorManager::open(&config.data_dir)
             .map_err(|e| format!("unable to read data_dir: {:?}", e))?
-            .decrypt_all_validators(config.secrets_dir.clone())
+            .decrypt_all_validators(config.secrets_dir.clone(), Some(&log))
             .map_err(|e| format!("unable to decrypt all validator directories: {:?}", e))?
             .into_iter()
             .map(|(kp, dir)| {