Fixes a couple of low-hanging fruits.
- Fixes #2037
- The `validators-dir` and `secrets-dir` flags don't really need to depend on each other
- Fixes #2006 and #1995
## Issue Addressed
Resolves #2004
## Proposed Changes
Only print the validator dir path once
## Additional Info
N/A
Co-authored-by: realbigsean <seananderson33@gmail.com>
Adds a `--privacy` CLI flag to the beacon node that users may opt into.
This does two things:
- Removes client identifying information from the identify libp2p protocol
- Changes the default graffiti to "" if no graffiti is set.
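As an illustration of the second point, a minimal sketch (hypothetical function and field names, not Lighthouse's actual code) of how the graffiti default might be selected:
```rust
// Hypothetical sketch: pick the default graffiti based on a privacy flag.
fn default_graffiti(privacy: bool, user_graffiti: Option<String>) -> String {
    user_graffiti.unwrap_or_else(|| {
        if privacy {
            String::new() // avoid leaking client name/version on-chain
        } else {
            format!("Lighthouse/{}", env!("CARGO_PKG_VERSION"))
        }
    })
}
```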
## Issue Addressed
NA
## Proposed Changes
- Log about eth1 whilst waiting for genesis.
- For the block and deposit caches, update them after each download instead of when *all* downloads are complete.
- This prevents the case where a single timeout error can cause us to drop *all* previously downloaded blocks/deposits (a sketch of this follows the list).
- Set `max_log_requests_per_update` to avoid timeouts due to very large log counts in a response.
- Set `max_blocks_per_update` to prevent a single update of the block cache from downloading an unreasonable number of blocks.
- This shouldn't have any effect in normal use; it's just a safeguard against bugs.
- Increase the timeout for eth1 calls from 15s to 60s, as per @pawanjay176's experience with Infura.
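A minimal sketch of the per-download cache update mentioned above, with hypothetical stand-in types (Lighthouse's actual eth1 service differs):
```rust
use std::ops::Range;

// Stand-in for Lighthouse's eth1 block cache.
struct BlockCache {
    blocks: Vec<u64>,
}

impl BlockCache {
    fn insert(&mut self, batch: Vec<u64>) {
        self.blocks.extend(batch)
    }
}

// Placeholder for an eth1 RPC call that may time out.
fn download_batch(range: Range<u64>) -> Result<Vec<u64>, String> {
    Ok(range.collect())
}

// Update the cache after *each* batch, so a timeout on batch N no longer
// discards batches 0..N that were already downloaded.
fn update_cache(cache: &mut BlockCache, batches: Vec<Range<u64>>) -> Result<(), String> {
    for range in batches {
        let blocks = download_batch(range)?; // on error, prior batches stay cached
        cache.insert(blocks);
    }
    Ok(())
}
```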
## Additional Info
NA
## Issue Addressed
Following slog's documentation, this should help a bit with string allocations. I let it run for two days and memory usage was lower. This is of course anecdotal, but it shouldn't do any harm either way.
## Proposed Changes
Remove `String` creation in logs where possible
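For illustration, a minimal sketch of the kind of change involved, using plain slog (the exact call sites in Lighthouse differ):
```rust
use slog::{info, o, Logger};

fn main() {
    let log = Logger::root(slog::Discard, o!());
    let peer_id = 42u64;

    // Allocates a String at the call site on every invocation:
    info!(log, "peer connected"; "peer" => format!("{}", peer_id));

    // Defers Display formatting to the drain, avoiding the eager allocation:
    info!(log, "peer connected"; "peer" => %peer_id);
}
```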
## Issue Addressed
NA
## Proposed Changes
This was causing:
```
Nov 28 21:56:08.154 ERRO slog-async: logger dropped messages due to channel overflow, count: 44, service: libp2p
```
## Additional Info
NA
Update lighthouse to version `v1.0.2`.
There are two major updates in this version:
- Updates the task executor to tokio 0.3, along with all sub-dependencies relying on core execution, including libp2p
- Updates BLST
## Issue Addressed
Move to the latest official version of blst (v0.3.1), incorporating all the subgroup check API changes.
## Proposed Changes
Update `Cargo.toml` to use the official blst crate 0.3.1.
Modify the `blst.rs` wrapper for the subgroup check API changes.
## Additional Info
The overall subgroup check methodology is: public keys should be checked for validity using `key_validate()` when they are first seen. This checks that the key is neither infinity nor outside the group. Those keys can then be cached for future use. All calls into blst set the `pk_validate` boolean to `false` to indicate there is no need for on-the-fly checking of public keys in the library. Additionally, public keys are supposed to be validated for proof of possession outside of blst.
For signatures, the subgroup check can be done at deserialization time, prior to use in aggregation or verification, or in the blst aggregation or verification functions themselves. In the interface wrapper the call to `subgroup_check` has been left in one instance, although that could be moved into the `verify_multiple_aggregate_signatures()` call if wanted. Checking beforehand does save some compute resources in the scenario where a bad signature is received. Elsewhere the subgroup check is done inside the higher-level operations. See comments in the code.
All signature checks are for subgroup membership only; there are no checks for infinity. The rationale is that an aggregate signature could technically equal infinity. If any individual signature were infinity (invalid), it would simply fail at verification time: a loss of compute resources, but safety would be preserved.
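To make the methodology concrete, here is a minimal sketch against blst 0.3's `min_pk` bindings (error handling elided; the actual Lighthouse wrapper is more involved):
```rust
use blst::min_pk::{PublicKey, Signature};
use blst::BLST_ERROR;

const DST: &[u8] = b"BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_";

fn validate_and_verify(pk_bytes: &[u8], sig_bytes: &[u8], msg: &[u8]) -> bool {
    // key_validate() rejects infinity and out-of-group points; do this once
    // when a key is first seen, then cache the validated key.
    let pk = match PublicKey::key_validate(pk_bytes) {
        Ok(pk) => pk,
        Err(_) => return false,
    };
    let sig = match Signature::from_bytes(sig_bytes) {
        Ok(sig) => sig,
        Err(_) => return false,
    };
    // sig_groupcheck = true: subgroup-check the signature here;
    // pk_validate = false: the key was already validated above.
    sig.verify(true, msg, DST, &[], &pk, false) == BLST_ERROR::BLST_SUCCESS
}
```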
## Description
This PR updates Lighthouse to tokio 0.3. It includes a number of dependency updates and some structural changes to how we create and spawn tasks.
This also brings with it a number of various improvements:
- Discv5 update
- Libp2p update
- Fix for recompilation issues
- Improved UPnP port mapping handling
- Futures dependency update
- Log downgrade to trace for rejecting peers when we've reached our max peer count
Co-authored-by: blacktemplar <blacktemplar@a1.net>
## Issue Addressed
Part of #1883
## Proposed Changes
Adds a new cli argument `--eth1-endpoints` that can be used instead of `--eth1-endpoint` to specify a comma-separated list of endpoints. If the first endpoint returns an error for some request the other endpoints are tried in the given order.
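A rough sketch of the fallback behaviour (a hypothetical helper, not the actual implementation):
```rust
// Try each endpoint in the order given on the CLI, returning the first
// success or the collected errors if all endpoints fail.
fn first_success<T, E>(
    endpoints: &[String],
    request: impl Fn(&str) -> Result<T, E>,
) -> Result<T, Vec<E>> {
    let mut errors = Vec::new();
    for endpoint in endpoints {
        match request(endpoint) {
            Ok(value) => return Ok(value),
            Err(e) => errors.push(e), // fall through to the next endpoint
        }
    }
    Err(errors)
}
```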
## Additional Info
Currently, if the first endpoint fails, the fallbacks are used silently (except for `try_fallback_test_endpoint`, used in `do_update`, which logs a `WARN` for each endpoint that is not reachable). An open question is whether we should add more logging so that the user is warned if their main endpoint is, for example, just slow and sometimes hits timeouts.
## Proposed Changes
A user on Discord reported build issues when trying to compile Lighthouse checked out to a path with spaces in it. I've fixed the issue upstream in `leveldb-sys` (https://github.com/skade/leveldb-sys/pull/22), but rather than waiting for a new release of the `leveldb` crate, we can also work around the issue by disabling Snappy in LevelDB, which we weren't using anyway.
This may also have the side-effect of slightly improving compilation times, as LevelDB+Snappy was found to be a substantial contributor to build time (although I'm not sure how much was LevelDB and how much was Snappy).
## Issue Addressed
Closes #1823
## Proposed Changes
* Use OS-level file locking for validator keystores, eliminating problems with lockfiles lingering after ungraceful shutdowns (`SIGKILL`, power outage). I'm using the `fs2` crate because it's cross-platform (unlike `file-lock`), and it seems to have the most downloads on crates.io. A sketch of the approach follows this list.
* Deprecate and disable the `--delete-lockfiles` CLI param, as it's no longer necessary
* Delete the `validator_dir::Manager`, as it was mostly dead code and was only used in the `validator list` command, which has been rewritten to read the validator definitions YAML instead.
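A minimal sketch of OS-level locking with the `fs2` crate (hypothetical function name and path handling):
```rust
use fs2::FileExt;
use std::fs::{File, OpenOptions};
use std::io;
use std::path::Path;

// The OS releases the lock automatically when the process exits, so a SIGKILL
// or power outage can no longer leave a stale lockfile behind.
fn lock_keystore(path: &Path) -> io::Result<File> {
    let file = OpenOptions::new().read(true).open(path)?;
    // Fails immediately (rather than blocking) if another process holds it.
    file.try_lock_exclusive()?;
    Ok(file) // keep the handle alive for as long as the keystore is in use
}
```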
## Additional Info
Tested on:
- [x] Linux
- [x] macOS
- [x] Docker Linux
- [x] Docker macOS
- [ ] Windows
## Issue Addressed
- Add metrics to keep track of peer counts by sync type
- Add metric to keep track of the number of syncing chains in range
## Proposed Changes
Plug into the network metrics update interval and also update the counts of peers with respect to their sync status with us.
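Roughly, using the `prometheus` crate directly (the metric name and labels here are illustrative, not the exact ones added):
```rust
use lazy_static::lazy_static;
use prometheus::{register_int_gauge_vec, IntGaugeVec};

lazy_static! {
    // Illustrative metric; registered once, updated on a timer.
    static ref SYNC_PEERS: IntGaugeVec = register_int_gauge_vec!(
        "sync_peers",
        "Peer count by sync status",
        &["status"]
    )
    .unwrap();
}

// Called once per metrics update interval rather than on every peer event.
fn update_peer_counts(synced: i64, advanced: i64, behind: i64) {
    SYNC_PEERS.with_label_values(&["synced"]).set(synced);
    SYNC_PEERS.with_label_values(&["advanced"]).set(advanced);
    SYNC_PEERS.with_label_values(&["behind"]).set(behind);
}
```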
## Additional Info
For the peer counts
- Because of the way it is implemented, the numbers won't always match the total peer count in the `libp2p` metric.
- Updating the gauge with every change is messy because it requires updating on connection (in the `eth2_libp2p` crate, while the metrics are defined in the `network` crate), on Goodbye sent (for an `IrrelevantPeer`) in either the `beacon_processor` or the `peer_manager`, and on disconnection. Since this is not a critical metric, I think counting once every second is enough. If you think more accuracy is needed we can do that too, but it would be harder to maintain.
At the moment the metrics look like this:
![image](https://user-images.githubusercontent.com/26765164/100275387-22137b00-2f60-11eb-93b9-94b0f265240c.png)
## Issue Addressed
NA
## Proposed Changes
- Adds an HTTP server to the VC which provides Prometheus metrics.
- Moves the health metrics into the `lighthouse_metrics` crate so it can be shared between BN/VC.
- Sprinkles some metrics around the VC.
- Update the book to indicate that we now have VC metrics.
- Shifts the "waiting for genesis" logic to later in `ProductionValidatorClient::new_from_cli`
- This is worth attention during the review.
## Additional Info
- ~~`clippy` has some new lints that are failing. I'll deal with that in another PR.~~
## Issue Addressed
Boot nodes were being successfully created and publishing valid ENRs; however, the `eth2` field was not being saved to disk, leading to a discrepancy between the published ENR and the on-disk ENR.
If the `eth2` field is known, it is now constructed in the initial ENR and saved to disk.
Previous mainnet bootnodes did not contain the `eth2` field and these have also been updated.
## Issue Addressed
NA
## Proposed Changes
Presently `master` is stable (and will be sunsetted), which means our docs only update after a release. This PR sets the docs to build on the `unstable` branch, which is equivalent to what we've always had.
## Additional Info
This does raise the question of whether docs should target `stable` or `unstable`, but I'd prefer to maintain current functionality and merge #1966 for now. I think having two versions might be handy, one for stable and one for unstable; I don't imagine this would be very difficult to achieve.
## Issue Addressed
Closes #1873
## Proposed Changes
Fixes the bug in slashing protection import (#1873) by pruning the database upon import.
Also expands the test generator to cover this case and a few others which are under discussion here:
https://ethereum-magicians.org/t/eip-3076-validator-client-interchange-format-slashing-protection/4883
## Additional Info
Depending on the outcome of the discussion on Eth Magicians, we can either wait for consensus before merging, or merge our preferred solution and patch things later.
## Issue Addressed
NA
## Proposed Changes
Updates the validator guide to provide instructions for mainnet users.
## Additional Info
- ~~Blocked on #1751~~
## Issue Addressed
Closes #1790
## Proposed Changes
Make a new method that creates an empty transaction with `TransactionBehavior::Exclusive` to check whether the slashing protection database is locked. Call this method before attempting to create or import new validator keystores.
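A sketch of the idea, assuming the slashing protection database is SQLite accessed via `rusqlite` (as in Lighthouse):
```rust
use rusqlite::{Connection, TransactionBehavior};

// Begin an exclusive transaction purely to probe the lock: if another process
// holds the database lock this errors (SQLITE_BUSY) instead of proceeding.
fn ensure_unlocked(conn: &mut Connection) -> rusqlite::Result<()> {
    let tx = conn.transaction_with_behavior(TransactionBehavior::Exclusive)?;
    // Nothing was written; just release the lock again.
    tx.rollback()
}
```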
## Additional Info
N/A
Co-authored-by: realbigsean <seananderson33@gmail.com>
## Issue Addressed
Two issues related to empty batches
- The chain's target was not being advanced when the batch was successful and empty, and the chain didn't have an optimistic batch.
- Finalized chains were not being switched. We now switch finalized chains, requiring a minimum amount of work first.
## Issue Addressed
Resolves #1890
## Proposed Changes
Change the slasher database schema to key indexed attestations by `(target_epoch, indexed_attestation_root)` instead of just `indexed_attestation_root`. This allows more straightforward pruning (a linear scan) that is also "re-entrant". By re-entrant, we mean that a pruning pass that gets stuck because of a `MapFull` error can attempt to commit midway, and be resumed later without issue. The previous pruning strategy for indexed attestations did not have this property. There was also a flaw in the previous pruning that could leave "zombie" indexed attestations in the database (ones not referenced by any attester record), which could build up and contribute to bloat (although in practice I think they occur quite infrequently).
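As a sketch, the composite key can be encoded with a big-endian epoch prefix so that keys sort by epoch (the exact encoding in the slasher may differ):
```rust
type Root = [u8; 32];

// Big-endian epoch prefix + attestation root: lexicographic key order equals
// epoch order, so pruning epochs below a cutoff is a linear scan from the
// start of the keyspace that can be committed and resumed at any point.
fn indexed_attestation_key(target_epoch: u64, root: &Root) -> [u8; 40] {
    let mut key = [0u8; 40];
    key[..8].copy_from_slice(&target_epoch.to_be_bytes());
    key[8..].copy_from_slice(root);
    key
}
```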
## Additional Info
During testing I noticed that a `MapFull` error can still occur during the commit of the transaction itself, which is irritating, but not unbearable. This PR should at least reduce the frequency with which users need to manually resize their DB, and if the `MapFull` on commit rears its ugly head too often we could use a dynamic strategy (temporarily increase the size of the map until the transaction commits).
The extra bytes for the epoch make the database a bit heavier, so the size estimate docs have been updated to reflect this. This is also a breaking schema change, so anyone using a v0 database from a few hours ago will need to drop it and update 😅
This is an implementation of a slasher that lives inside the BN and can be enabled via `lighthouse bn --slasher`.
Features included in this PR:
- [x] Detection of attester slashing conditions (double votes, surrounds existing, surrounded by existing)
- [x] Integration into Lighthouse's attestation verification flow
- [x] Detection of proposer slashing conditions
- [x] Extraction of attestations from blocks as they are verified
- [x] Compression of chunks
- [x] Configurable history length
- [x] Pruning of old attestations and blocks
- [x] More tests
Future work:
* Focus on a slice of history separate from the most recent N epochs (e.g. epochs `current - K` to `current - M`)
* Run out-of-process
* Ingest attestations from the chain without a resync
Design notes are here https://hackmd.io/@sproul/HJSEklmPL
## Issue Addressed
- Resolves #1424
## Proposed Changes
Add a `GET lighthouse/staking` endpoint that returns a 200 if the node is ready to stake (i.e., the `--eth1` flag is present) or a 404 otherwise.
Whilst the VC is waiting for the genesis time to start (i.e., when the genesis state is known), check the `lighthouse/staking` endpoint and log an error if the node isn't configured for staking.
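A self-contained sketch of the endpoint's behaviour using `warp` (which the BN's HTTP API is built on); the route wiring here is illustrative, not the actual server code:
```rust
use warp::{http::StatusCode, Filter};

#[tokio::main]
async fn main() {
    let eth1_enabled = true; // would come from the beacon node's config

    // 200 when the node is configured for staking, 404 otherwise.
    let staking = warp::get()
        .and(warp::path!("lighthouse" / "staking"))
        .map(move || {
            if eth1_enabled {
                StatusCode::OK
            } else {
                StatusCode::NOT_FOUND
            }
        });

    warp::serve(staking).run(([127, 0, 0, 1], 5052)).await;
}
```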
## Additional Info
NA
## Issue Addressed
NA
## Proposed Changes
Adds the `--import-all-attestations` flag which tells the `network::AttestationService` to import/aggregate all attestations after verification (instead of only ones for subnets that are relevant to local validators).
This is useful for testing/debugging and also for creating back-up nodes that should be all cached up and ready for any validator.
## Additional Info
NA
## Issue Addressed
We have a log saying we added a peer to a chain, and another one in case the chain is not syncing. To avoid needing to pair these two (and to reduce log entries), simply log the chain's syncing state in the chain's KV.
## Issue Addressed
Closes #1719
## Proposed Changes
Lift the internal `RwLock`s and `Mutex`es from the `Observed*` data structures to resolve the race conditions described in #1719.
Most of this work was done by @paulhauner on his `lift-locks` branch, I merely updated it for the current `master` and checked over it.
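To illustrate the pattern (with a made-up `ObservedItems` type rather than the real `Observed*` structures): lifting the lock means a caller can hold a single guard across a whole check-then-act sequence instead of acquiring internal locks twice.
```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Before: the lock lives inside the structure. A "contains?" read followed by
// an insert takes the lock twice, leaving a window for a race in between.
#[allow(dead_code)]
struct ObservedItemsInternal {
    items: RwLock<HashSet<u64>>,
}

// After: the lock is lifted outside, wrapping the whole structure.
struct ObservedItems {
    items: HashSet<u64>,
}

// Check-and-insert is now atomic under one write guard.
fn observe(observed: &RwLock<ObservedItems>, item: u64) -> bool {
    let mut guard = observed.write().unwrap();
    guard.items.insert(item) // true if the item was not seen before
}
```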
## Additional Info
I think it would be prudent to test this on a testnet or two before mainnet launch, just to be sure that the extra lock contention doesn't negatively impact performance.
## Issue Addressed
- Resolves #1945
## Proposed Changes
- As per #1945, fix a log message from the metrics server that was falsely claiming to be from the api server.
- Ensure successful api request logs are published to debug, not trace. This is something I've wanted to do for a while.
## Additional Info
NA
## Issue Addressed
NA
## Proposed Changes
- Caches blocks later (closer to the head) than is required by `ETH1_FOLLOW_DISTANCE`.
- Adds logging to `warn` if the eth1 cache is insufficiently primed.
- Uses `max_by_key` instead of `max_by` in `BeaconChain::Eth1Chain` since it's simpler.
- Renames `voting_period_start_timestamp` to `voting_target_timestamp` for accuracy.
## Additional Info
The reason for eating into the `ETH1_FOLLOW_DISTANCE` and caching blocks that are closer to the head is the possibility of `SECONDS_PER_ETH1_BLOCK` being incorrect (as is the case for the Pyrmont testnet on Goerli).
If `SECONDS_PER_ETH1_BLOCK` is too short, we'll skip back too far from the head and skip over blocks that would be valid according to [`is_candidate_block`](https://github.com/ethereum/eth2.0-specs/blob/v1.0.0/specs/phase0/validator.md#eth1-data). This was the case on the Pyrmont testnet and resulted in Lighthouse choosing blocks that were about 30 minutes older than is ideal.
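For reference, the spec predicate linked above, transcribed to Rust with the v1.0.0 mainnet constants; a block is a candidate only if it sits between one and two follow-distances (in seconds) before the voting period start:
```rust
const ETH1_FOLLOW_DISTANCE: u64 = 2048;
const SECONDS_PER_ETH1_BLOCK: u64 = 14;

// Mirrors `is_candidate_block` from the phase 0 validator spec: if
// SECONDS_PER_ETH1_BLOCK doesn't match real block times, this window lands in
// the wrong place and blocks that should be candidates get skipped.
fn is_candidate_block(block_timestamp: u64, period_start: u64) -> bool {
    let follow_seconds = SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE;
    block_timestamp + follow_seconds <= period_start
        && block_timestamp + 2 * follow_seconds >= period_start
}
```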
## Issue Addressed
Resolves #1333
## Proposed Changes
- Remove `deposit_signature_set()` function
- Prevent deposits from being in `SignatureSets`
- Use `Signature.verify()` to verify deposit signatures rather than a signature set, which uses `fast_aggregate_verify()`
## Additional Info
n/a