Commit Graph

4195 Commits

Author SHA1 Message Date
Pawan Dhananjay
58b04acf28 Fix simulator issues (#2802)
## Issue Addressed

Resolves #2775 

## Proposed Changes

1. Reduces the target_peers to stop recursive discovery requests.
2. Changes the eth1 `auto_update_interval` config parameter to be equal to the eth1 block time in the simulator. Without this, the eth1 `latest_cached_block` wouldn't be updated for some slots around `eth1_voting_period` boundaries which would lead to the eth1 cache falling out of sync. This is what caused the `Syncing eth1 block cache` and `No valid eth1_data votes` logs.
2021-11-15 06:38:29 +00:00
Pawan Dhananjay
3919046731 Add a section on using infura as the checkpoint sync provider (#2797)
## Issue Addressed

N/A

## Proposed Changes

From discord, it seems like users are a bit unclear on how to run checkpoint sync if they don't have an existing synced beacon node. Adds a note on how to use infura for the checkpoint sync feature.
2021-11-10 23:43:12 +00:00
Paul Hauner
931daa40d7 Add fork choice EF tests (#2737)
## Issue Addressed

Resolves #2545

## Proposed Changes

Adds the long-overdue EF tests for fork choice. Although we had pretty good coverage via other implementations that closely followed our approach, it is nonetheless important for us to implement these tests too.

During testing I found that we were using a hard-coded `SAFE_SLOTS_TO_UPDATE_JUSTIFIED` value rather than one from the `ChainSpec`. This caused a failure during a minimal preset test. This doesn't represent a risk to mainnet or testnets, since the hard-coded value matched the mainnet preset.

## Failing Cases

There is one failing case which is presently marked as `SkippedKnownFailure`:

```
case 4 ("new_finalized_slot_is_justified_checkpoint_ancestor") from /home/paul/development/lighthouse/testing/ef_tests/consensus-spec-tests/tests/minimal/phase0/fork_choice/on_block/pyspec_tests/new_finalized_slot_is_justified_checkpoint_ancestor failed with NotEqual:
head check failed: Got Head { slot: Slot(40), root: 0x9183dbaed4191a862bd307d476e687277fc08469fc38618699863333487703e7 } | Expected Head { slot: Slot(24), root: 0x105b49b51bf7103c182aa58860b039550a89c05a4675992e2af703bd02c84570 }
```

This failure is due to #2741. It's not a particularly high-priority issue at the moment, so we'll fix it after merging this PR.
2021-11-08 07:29:04 +00:00
mooori
d01fe02824 Add regression tests for boot_node (#2749)
## Issue Addressed
Resolves #2602

## Proposed Changes

*Note: For a review it might help to look at the individual commits.*

### `boot_node`
Add support for the flags `dump-config` and `immediate-shutdown`. For `immediate-shutdown` the actual behavior could be described as `dump-config-and-exit`.

Both flags are handled in `boot_node::main`, which appears to be the simplest approach.

### `boot_node` regression tests
Added in `lighthouse/tests/boot_node.rs`.

### `CommandLineTestExec`
Factors out boilerplate related to CLI tests. It's used in the regression tests for `boot_node`, `beacon_node` and `validator_client`.

## Open TODO
Add tests for `boot_node` flags `enable-enr-auto-update` and `disable-packet-filter`. They end up in [`Discv5Config`](9ed2cba6bc/boot_node/src/config.rs (L29)), which doesn't support serde (de)serialization.

I haven't found a workaround - guidance would be appreciated.
2021-11-08 01:37:58 +00:00
Divma
fbafe416d1 Move the peer manager to be a behaviour (#2773)
This simply moves some functions that were "swarm notifications" to a network behaviour implementation.

Notes
------
- We could disconnect from the peer manager but we would lose the rpc shutdown message
- We still notify from the swarm since this is the most reliable way to get some events. Ugly but best for now
- Events need to be pushed with "add event" to wake the waker

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
2021-11-08 00:01:10 +00:00
Michael Sproul
df02639b71 De-duplicate attestations in the slasher (#2767)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2112
Closes https://github.com/sigp/lighthouse/issues/1861

## Proposed Changes

Collect attestations by validator index in the slasher, and use the magic of reference counting to automatically discard redundant attestations. This results in us storing only 1-2% of the attestations observed when subscribed to all subnets, which carries over to a 50-100x reduction in data stored 🎉 
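
As an illustration, here is a minimal sketch of the reference-counting idea using hypothetical types (the `IndexedAttestation` fields and the batch structure are illustrative, not the slasher's actual storage):

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Arc;

/// Hypothetical stand-in for the signed content of an attestation.
#[derive(Hash, PartialEq, Eq)]
struct IndexedAttestation {
    source_epoch: u64,
    target_epoch: u64,
    signing_root: [u8; 32],
}

/// Sketch: store each unique attestation once, and map every attesting
/// validator index to a shared `Arc` of it.
#[derive(Default)]
struct AttestationBatch {
    unique: HashSet<Arc<IndexedAttestation>>,
    by_validator: HashMap<u64, Vec<Arc<IndexedAttestation>>>,
}

impl AttestationBatch {
    fn insert(&mut self, validator_index: u64, attestation: IndexedAttestation) {
        // Re-use the existing `Arc` if identical data has already been seen, so
        // redundant copies are never stored and vanish once their refcount drops.
        let existing = self.unique.get(&attestation).cloned();
        let shared = match existing {
            Some(arc) => arc,
            None => {
                let arc = Arc::new(attestation);
                self.unique.insert(arc.clone());
                arc
            }
        };
        self.by_validator
            .entry(validator_index)
            .or_default()
            .push(shared);
    }
}
```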

## Additional Info

There's some nuance to the configuration of the `slot-offset`. It has a profound effect on the effectiveness of de-duplication, see the docs added to the book for an explanation: 5442e695e5/book/src/slasher.md (slot-offset)
2021-11-08 00:01:09 +00:00
Rémy Roy
fadb8b2b2b Add minimum supported version for checkpoint sync in book (#2779)
## Issue Addressed

No specific issue. Just some improvement in the documentation provided by the book.

## Proposed Changes

Add minimum supported version for checkpoint sync in book to make sure users who want to use this feature know they need to be using at least version 2.0.0.
2021-11-05 06:35:52 +00:00
Pascal Bach
2ed6775dcf Add cargo vendor test (#2076)
## Issue Addressed

This is related to #1926 and #1712.

## Proposed Changes

This PR adds a test that makes sure the used dependencies can be vendored.

Being able to vendor the dependencies is important for archival and reproducibility purposes.
It's also required to package lighthouse for some Linux distributions, specifically [NixOS](https://nixos.org/) and [Yocto](https://www.yoctoproject.org/).

## Additional Info

This PR only adds the test, it doesn't clean up the dependencies yet. That's why it is in draft.
2021-11-05 04:42:12 +00:00
Divma
a683e0296a Peer manager cfg (#2766)
## Issue Addressed
I've done this change in a couple of WIPs already, so I might as well submit it on its own. This changes no functionality but reduces coupling by a tiny amount (0.0001%). It also helps new people who need to work in the peer manager to better understand what it actually needs from the outside.

## Proposed Changes

Add a config to the peer manager
2021-11-03 23:44:44 +00:00
Divma
7502970a7d Do not compute metrics in the network service if the cli flag is not set (#2765)
## Issue Addressed

The computation of metrics in the network service can be expensive. This disables the computation unless the cli flag `metrics` is set.

## Additional Info
Metrics in other parts of the network are still updated, since most are simple metrics and checking whether metrics are enabled on every single update doesn't seem worth it.
2021-11-03 00:06:03 +00:00
realbigsean
c4ad0e3fb3 Ensure dependent root consistency in head events (#2753)
## Issue Addressed

@paulhauner noticed that when we send head events, we use the block root from `new_head` in `fork_choice_internal`, but calculate `dependent_root` and `previous_dependent_root` using the `canonical_head`. This is normally fine because `new_head` updates the `canonical_head` in `fork_choice_internal`, but it's possible we have a reorg updating `canonical_head` before our head events are sent. So this PR ensures `dependent_root` and `previous_dependent_root` are always derived from the state associated with `new_head`.
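
For illustration, a hedged sketch of the invariant with stand-in types (not the actual `BeaconChain` code): the dependent root is read from the state tied to `new_head`, rather than re-reading the canonical head.

```rust
/// Illustrative stand-in for the state snapshot associated with a particular head.
struct HeadState {
    /// Historical block roots indexed by slot (sketch only).
    block_roots: Vec<[u8; 32]>,
}

impl HeadState {
    fn block_root_at_slot(&self, slot: u64) -> [u8; 32] {
        self.block_roots[slot as usize]
    }
}

/// Compute the duty-dependent root for `epoch` from the state belonging to the
/// *new* head, so a concurrent re-org of the canonical head can't change the
/// answer between computing the head event and sending it.
fn dependent_root(new_head_state: &HeadState, epoch: u64, slots_per_epoch: u64) -> [u8; 32] {
    // Duties for `epoch` depend on the block root at the last slot of the
    // previous epoch (the genesis edge case is elided in this sketch).
    let last_slot_of_prev_epoch = (epoch * slots_per_epoch).saturating_sub(1);
    new_head_state.block_root_at_slot(last_slot_of_prev_epoch)
}
```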



Co-authored-by: realbigsean <seananderson33@gmail.com>
2021-11-02 02:26:32 +00:00
Mac L
80627b428b Fix linting error on Windows (#2759)
While testing some code on Windows, I ran into a failure when running `clippy` via `make lint`:
```
error: this expression borrows a reference (`&str`) that is immediately dereferenced by the compiler
   --> common/filesystem/src/lib.rs:105:43
    |
105 |         let mut acl = ACL::from_file_path(&path_str, false).map_err(Error::UnableToRetrieveACL)?;
    |                                           ^^^^^^^^^ help: change this to: `path_str`
    |
    = note: `-D clippy::needless-borrow` implied by `-D warnings`
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
    
error: could not compile `filesystem` due to previous error
```

## Proposed Changes

Remove the unnecessary borrow as suggested.

## Additional Info

Since we only run `clippy` in CI on Ubuntu, I believe we don't have any way (in CI) to detect these Windows-specific lint errors (either from new code, or from lint changes in new Rust versions).
This is because code marked as `#[cfg(windows)]` is not checked on `unix` systems and vice versa.

I'm conscious that our CI runs are already taking a long time, and that adding a new Windows `clippy` run would add a non-negligible amount of time to the runs (not sure if this topic has already been discussed), but it is something to be aware of.

## Extra Note
I don't think this is the case, but it might be worth someone else running `make lint` on their Windows machines to eliminate the possibility that this is an error specific to my setup.
2021-11-01 07:44:42 +00:00
Pawan Dhananjay
4499adc7fd Check proposer index during block production (#2740)
## Issue Addressed

Resolves #2612 

## Proposed Changes

Implements both of the checks mentioned in the original issue:
1. Verifies the `randao_reveal` in the beacon node.
2. Cross-checks the proposer index after getting the block back from the beacon node.

## Additional info
The block production time increases by ~10x because of the signature verification on the beacon node (based on the `beacon_block_production_process_seconds` metric) when running on a local testnet.
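
For illustration, a minimal sketch of the proposer-index cross-check (point 2 above); the error type is hypothetical, not Lighthouse's actual API:

```rust
/// Hypothetical error for the sketch below.
#[derive(Debug)]
enum BlockProductionError {
    ProposerIndexMismatch { expected: u64, found: u64 },
}

/// After the validator client receives a block from the beacon node, confirm
/// that the block's proposer index matches the duty we were assigned before
/// signing and publishing it.
fn check_proposer_index(expected: u64, block_proposer_index: u64) -> Result<(), BlockProductionError> {
    if expected != block_proposer_index {
        return Err(BlockProductionError::ProposerIndexMismatch {
            expected,
            found: block_proposer_index,
        });
    }
    Ok(())
}
```
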
2021-11-01 07:44:40 +00:00
Michael Sproul
ffb04e1a9e Add op pool metrics for attestations (#2758)
## Proposed Changes

Add several metrics for the number of attestations in the op pool. These give us a way to observe the number of valid, non-trivial attestations during block packing rather than just the size of the entire op pool.
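
For illustration only, a sketch of how such gauges could be defined with the `prometheus` crate (the metric names here are made up; Lighthouse uses its own metrics wrappers):

```rust
use lazy_static::lazy_static;
use prometheus::IntGauge;

lazy_static! {
    // Illustrative metric names; the real metrics differ.
    static ref OP_POOL_ATTESTATIONS_TOTAL: IntGauge = IntGauge::new(
        "op_pool_attestations_total",
        "Total attestations currently in the op pool",
    )
    .expect("metric is well formed");
    static ref OP_POOL_ATTESTATIONS_PACKABLE: IntGauge = IntGauge::new(
        "op_pool_attestations_packable",
        "Valid, non-trivial attestations available during block packing",
    )
    .expect("metric is well formed");
}

/// Update the gauges during block packing.
fn record_attestation_metrics(total: usize, packable: usize) {
    OP_POOL_ATTESTATIONS_TOTAL.set(total as i64);
    OP_POOL_ATTESTATIONS_PACKABLE.set(packable as i64);
}
```
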
2021-11-01 05:52:31 +00:00
Michael Sproul
e8a557fdd8 Update prometheus to v0.13.0 (#2757)
## Issue Addressed

Closes #2485
2021-11-01 05:52:30 +00:00
Divma
e2c0650d16 Relax late sync committee penalty (#2752)
## Issue Addressed

Too many peers were getting kicked due to slightly late sync committee messages, as tested on... under-performant hardware.

## Proposed Changes

Only penalize if the message is more than one slot late. The message is still ignored.

Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
2021-10-31 22:30:19 +00:00
Age Manning
1790010260 Upgrade to latest libp2p (#2605)
This is a precursor to the next libp2p upgrade.

It is currently being used for staging a number of PR upgrades which are contingent on the latest libp2p.
2021-10-29 01:59:29 +00:00
ethDreamer
2c4413454a Fixed Gossip Topics on Fork Boundary (#2619)
## Issue Addressed

The [p2p-interface section of the `altair` spec](https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/p2p-interface.md#transitioning-the-gossip) says you should subscribe to the topics for a fork "In advance of the fork" and unsubscribe from old topics `2 Epochs` after the new fork is activated. We've chosen to subscribe to new fork topics `2 slots` before the fork is initiated.

This function is supposed to return the required fork digests at any given time, but as currently written it doesn't return the fork digest for the previous fork if you've switched to the current fork less than 2 epochs ago. This function also required modification for every new fork we add.

## Proposed Changes

Make this function fork-agnostic and correctly handle the previous fork topic digests when you've only just switched to the new fork.
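
As a rough illustration of the fork-agnostic approach (simplified types; the real code operates on Lighthouse's chain-spec and fork types):

```rust
/// Illustrative fork schedule entry.
struct ForkEntry {
    epoch: u64,
    digest: [u8; 4],
}

/// Sketch of "which fork digests should we be subscribed to right now?":
/// always the current fork, the previous fork until two epochs after the
/// transition, and the next fork from two slots before it activates.
fn required_fork_digests(
    schedule: &[ForkEntry], // sorted ascending by fork epoch
    current_epoch: u64,
    current_slot: u64,
    slots_per_epoch: u64,
) -> Vec<[u8; 4]> {
    if schedule.is_empty() {
        return Vec::new();
    }
    let mut digests = Vec::new();

    // Index of the fork we are currently on.
    let current = schedule
        .iter()
        .rposition(|f| f.epoch <= current_epoch)
        .unwrap_or(0);
    digests.push(schedule[current].digest);

    // Keep the previous fork's topics until two epochs after the transition.
    if current > 0 && current_epoch < schedule[current].epoch + 2 {
        digests.push(schedule[current - 1].digest);
    }

    // Subscribe to the next fork's topics from two slots before it activates.
    if let Some(next) = schedule.get(current + 1) {
        if current_slot + 2 >= next.epoch * slots_per_epoch {
            digests.push(next.digest);
        }
    }

    digests
}
```
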
2021-10-29 00:05:27 +00:00
Pawan Dhananjay
88063398f6 Prevent double import of blocks (#2647)
## Issue Addressed

Resolves #2611 

## Proposed Changes

Adds a duplicate block root cache to the `BeaconProcessor`. Adds the block root to the cache before calling `process_gossip_block` and `process_rpc_block`. Since `process_rpc_block` is called only for single block lookups, we don't have to worry about batched block imports.

The block is imported from the source (gossip/rpc) that arrives first. The block that arrives second is not imported, to avoid the db access issue.
There are 2 cases:
1. Block that arrives second is from rpc: In this case, we return an optimistic `BlockError::BlockIsAlreadyKnown` to sync.
2. Block that arrives second is from gossip: In this case, we only do gossip verification and forwarding but don't import the block into the beacon chain.

## Additional info
Splits up the `process_gossip_block` function into `process_gossip_unverified_block` and `process_gossip_verified_block`.
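
For illustration, a minimal sketch of the duplicate-root guard (hypothetical types; the real cache lives in the `BeaconProcessor` and also has to handle failures and eviction):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

type Hash256 = [u8; 32]; // illustrative stand-in for a block root

/// Record a block root before importing so a second copy arriving via the
/// other source (gossip vs. RPC) is not imported again.
#[derive(Default)]
struct DuplicateBlockCache {
    in_progress: Mutex<HashSet<Hash256>>,
}

impl DuplicateBlockCache {
    /// Returns `true` if we are the first to claim this root and should import
    /// the block; `false` if an import of the same root is already underway.
    fn try_claim(&self, block_root: Hash256) -> bool {
        self.in_progress
            .lock()
            .expect("cache lock poisoned")
            .insert(block_root)
    }

    /// Release the root once the import has finished (or failed).
    fn release(&self, block_root: &Hash256) {
        self.in_progress
            .lock()
            .expect("cache lock poisoned")
            .remove(block_root);
    }
}
```
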
2021-10-28 03:36:14 +00:00
Michael Sproul
2dc6163043 Add API version headers and map_fork_name! (#2745)
## Proposed Changes

* Add the `Eth-Consensus-Version` header to the HTTP API for the block and state endpoints. This is part of the v2.1.0 API that was recently released: https://github.com/ethereum/beacon-APIs/pull/170
* Add tests for the above. I refactored the `eth2` crate's helper functions to make this more straight-forward, and introduced some new mixin traits that I think greatly improve readability and flexibility.
* Add a new `map_with_fork!` macro which is useful for decoding a superstruct type without naming all its variants. It is now used for SSZ-decoding `BeaconBlock` and `BeaconState`, and for JSON-decoding `SignedBeaconBlock` in the API.

## Additional Info

The `map_with_fork!` changes will conflict with the Merge changes, but when resolving the conflict the changes from this branch should be preferred (it is no longer necessary to enumerate every fork). The merge fork _will_  need to be added to `map_fork_name_with`.
2021-10-28 01:18:04 +00:00
Mac L
8edd9d45ab Fix purge-db edge case (#2747)
## Issue Addressed

Currently, if you launch the beacon node with the `--purge-db` flag and the `beacon` directory exists, but one (or both) of the `chain_db` or `freezer-db` directories are missing, it will error unnecessarily with: 
```
Failed to remove chain_db: No such file or directory (os error 2)
```

This is an edge case which can occur in cases of manual intervention (a user deleted the directory) or if you had previously run with the `--purge-db` flag and Lighthouse errored before it could initialize the db directories.

## Proposed Changes

Check if the `chain_db`/`freezer_db` exists before attempting to remove them. This prevents unnecessary errors.
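
The fix boils down to a guard like the following sketch (the helper name is illustrative):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Only attempt to remove a database directory if it actually exists, so
/// `--purge-db` no longer errors when a directory is already missing.
fn remove_db_dir_if_exists(path: &Path) -> io::Result<()> {
    if path.exists() {
        fs::remove_dir_all(path)?;
    }
    Ok(())
}
```
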
2021-10-25 22:11:28 +00:00
Mac L
39c0d1219c Add note about java dependency (#2746)
## Issue Addressed
Currently, running the Web3Signer tests locally without having a java runtime environment installed and available on your PATH will result in the tests failing. 

## Proposed Changes
Add a note regarding the Web3Signer tests being dependent on java (similar to what we have for `ganache-cli`)
2021-10-25 22:11:26 +00:00
Age Manning
3a51f829d5 Add metrics for individual async tasks (#2735)
I have been in the process of debugging libp2p tasks as there is something locking our executor. 

This adds a metric that tracks all tasks within Lighthouse, allowing us to identify sections of Lighthouse code that may be taking longer than normal to process.
2021-10-25 05:18:49 +00:00
Michael Sproul
bf1667a904 Fix test warnings on Rust 1.56.0 (#2743)
## Issue Addressed

Continuation of #2728, fix the fork choice tests for Rust 1.56.0 so that `unstable` is free of warnings.

CI will be broken until this PR merges, because we strictly enforce the absence of warnings (even for tests).
2021-10-22 04:49:51 +00:00
Divma
d4819bfd42 Add a waker to the RPC handler (#2721)
## Issue Addressed

Attempts to fix #2701, but I doubt this is the reason behind it.

## Proposed Changes

Maintain a waker in the RPC handler and wake it when an event is received.
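
Roughly, the pattern looks like this sketch (types are illustrative, not the actual libp2p handler):

```rust
use std::task::{Context, Waker};

/// Store the most recent task waker when the handler is polled, and wake it
/// whenever a new event is queued so the handler is polled again promptly.
struct RpcHandlerSketch<E> {
    events: Vec<E>,
    waker: Option<Waker>,
}

impl<E> RpcHandlerSketch<E> {
    /// Called from `poll` with the current task context.
    fn register_waker(&mut self, cx: &Context<'_>) {
        // Only replace the stored waker if it would wake a different task.
        if self
            .waker
            .as_ref()
            .map_or(true, |w| !w.will_wake(cx.waker()))
        {
            self.waker = Some(cx.waker().clone());
        }
    }

    /// Called when an event arrives from outside the poll loop.
    fn push_event(&mut self, event: E) {
        self.events.push(event);
        if let Some(waker) = &self.waker {
            waker.wake_by_ref();
        }
    }
}
```
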
2021-10-21 06:14:36 +00:00
Pawan Dhananjay
de34001e78 Update next_fork_subscriptions correctly (#2688)
## Issue Addressed

N/A

## Proposed Changes

Update the `next_fork_subscriptions` timer only after a fork happens.
2021-10-21 04:38:44 +00:00
Divma
2f7b5f8cb0 Update mainnet altair types test (#2738)
## Issue Addressed
e895074ba updated the Altair fork, and now that we are a week away this test no longer panics.

## Proposed Changes
Remove the expected panic and explanatory note.
2021-10-20 23:12:12 +00:00
divma
99f7a7db58 remove double backfill sync state (#2733)
## Issue Addressed
In the backfill sync the state was maintained twice, once locally and also in the globals. This makes it so that it's maintained only once.

The only behavioral change is that when backfill sync is paused, the global backfill state is updated. I asked @AgeManning about this and he deemed it a bug, so this solves it.
2021-10-19 22:32:25 +00:00
Michael Sproul
aad397f00a Resolve Rust 1.56 lints and warnings (#2728)
## Issue Addressed

When compiling with Rust 1.56.0 the compiler generates 3 instances of this warning:

```
warning: trailing semicolon in macro used in expression position
   --> common/eth2_network_config/src/lib.rs:181:24
    |
181 |                     })?;
    |                        ^
...
195 |         let deposit_contract_deploy_block = load_from_file!(DEPLOY_BLOCK_FILE);
    |                                             ---------------------------------- in this macro invocation
    |
    = note: `#[warn(semicolon_in_expressions_from_macros)]` on by default
    = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
    = note: for more information, see issue #79813 <https://github.com/rust-lang/rust/issues/79813>
    = note: this warning originates in the macro `load_from_file` (in Nightly builds, run with -Z macro-backtrace for more info)
```

This warning is completely harmless, but will be visible to users compiling Lighthouse v2.0.1 (or earlier) with Rust 1.56.0 (to be released October 21st). It is **completely safe** to ignore this warning, it's just a superficial change to Rust's syntax.

## Proposed Changes

This PR removes the semi-colon as recommended, and fixes the new Clippy lints from 1.56.0
2021-10-19 00:30:42 +00:00
Akihito Nakano
efec60ee90 Tiny fix: wrong log level (#2720)
## Proposed Changes

If the `RemoveChain` is critical, the log level should be crit. 🙂
2021-10-19 00:30:41 +00:00
Michael Sproul
d2e3d4c6f1 Add flag to disable lock timeouts (#2714)
## Issue Addressed

Mitigates #1096

## Proposed Changes

Add a flag to the beacon node called `--disable-lock-timeouts` which allows opting out of lock timeouts.

The lock timeouts serve a dual purpose:

1. They prevent any single operation from hogging the lock for too long. When a timeout occurs it logs a nasty error which indicates that there's suboptimal lock use occurring, which we can then act on.
2. They allow deadlock detection. We're fairly sure there are no deadlocks left in Lighthouse anymore but the timeout locks offer a safeguard against that.

However, timeouts on locks are not without downsides:

They allow for the possibility of livelock, particularly on slower hardware. If lock timeouts keep failing spuriously the node can be prevented from making any progress, even if it would be able to make progress slowly without the timeout. One particularly concerning scenario which could occur would be if a DoS attack succeeded in slowing block signature verification times across the network, and all Lighthouse nodes got livelocked because they timed out repeatedly. This could also occur on just a subset of nodes (e.g. dual core VPSs or Raspberry Pis).

By making the behaviour runtime configurable this PR allows us to choose the behaviour we want depending on circumstance. I suspect that long term we could make the timeout-free approach the default (#2381 moves in this direction) and just enable the timeouts on our testnet nodes for debugging purposes. This PR conservatively leaves the default as-is so we can gain some more experience before switching the default.
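
Conceptually, the flag toggles between a timed and an untimed lock acquisition, along the lines of this sketch (assuming `parking_lot` locks; names are illustrative):

```rust
use parking_lot::{RwLock, RwLockReadGuard};
use std::time::Duration;

/// Hypothetical error for the sketch.
#[derive(Debug)]
struct LockTimeout;

/// With timeouts enabled a slow lock acquisition becomes an error we can log;
/// with them disabled we simply wait as long as it takes, trading deadlock
/// detection for freedom from spurious-timeout livelock.
fn read_with_optional_timeout<'a, T>(
    lock: &'a RwLock<T>,
    disable_lock_timeouts: bool,
    timeout: Duration,
) -> Result<RwLockReadGuard<'a, T>, LockTimeout> {
    if disable_lock_timeouts {
        Ok(lock.read())
    } else {
        lock.try_read_for(timeout).ok_or(LockTimeout)
    }
}
```
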
2021-10-19 00:30:40 +00:00
Age Manning
df40700ddd Rename eth2_libp2p to lighthouse_network (#2702)
## Description

The `eth2_libp2p` crate was originally named and designed to incorporate a simple libp2p integration into lighthouse. Since its origins, the crate's purpose has expanded dramatically. It now houses a lot more sophistication that is specific to lighthouse and is no longer just a libp2p integration.

As of this writing it currently houses the following high-level lighthouse-specific logic:
- Lighthouse's implementation of the eth2 RPC protocol and specific encodings/decodings
- Integration and handling of ENRs with respect to libp2p and eth2
- Lighthouse's discovery logic, its integration with discv5 and logic about searching and handling peers. 
- Lighthouse's peer manager - This is a large module handling various aspects of Lighthouse's network, such as peer scoring, handling pings and metadata, connection maintenance and recording, etc.
- Lighthouse's peer database - This is a collection of information stored for each individual peer which is specific to lighthouse. We store connection state, sync state, last seen ips and scores etc. The data stored for each peer is designed for various elements of the lighthouse code base such as syncing and the http api.
- Gossipsub scoring - This stores a collection of gossipsub 1.1 scoring mechanisms that are continuously analysed and updated based on the ethereum 2 networks and how Lighthouse performs on these networks.
- Lighthouse specific types for managing gossipsub topics, sync status and ENR fields
- Lighthouse's network HTTP API metrics - A collection of metrics for lighthouse network monitoring
- Lighthouse's custom configuration of all networking protocols, RPC, gossipsub, discovery, identify and libp2p. 

Therefore it makes sense to rename the crate to reflect its current purpose: it manages the majority of Lighthouse's network stack. This PR renames the crate to `lighthouse_network`.

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2021-10-19 00:30:39 +00:00
Michael Sproul
06e310c4eb Export slashing protection per validator (#2674)
## Issue Addressed

Part of https://github.com/sigp/lighthouse/issues/2557

## Proposed Changes

Refactor the slashing protection export so that it can export data for a subset of validators.

This is the last remaining building block required for supporting the standard validator API (which I'll start to build atop this branch)

## Additional Info

Built on and requires #2598
2021-10-19 00:30:38 +00:00
divma
e75ce534f6 Ignore cargo audit advisory (#2730)
## Issue Addressed
Related to #2727 

Ignores the audit failure for the same reasons in #2727
2021-10-18 21:59:27 +00:00
Paul Hauner
fff01b24dd Release v2.0.1 (#2726)
## Issue Addressed

NA

## Proposed Changes

- Update versions to `v2.0.1` in anticipation for a release early next week.
- Add `--ignore` to `cargo audit`. See #2727.

## Additional Info

NA
2021-10-18 03:08:32 +00:00
Age Manning
180c90bf6d Correct peer connection transition logic (#2725)
## Description

This PR updates the peer connection transition logic. It is acceptable for a peer to immediately transition from a disconnected state to a disconnecting state. This can occur when we are at our peer limit and new peers dial us.
2021-10-17 04:04:36 +00:00
Paul Hauner
a7b675460d Add Altair tests to op pool (#2723)
## Issue Addressed

NA

## Proposed Changes

Adds some more testing for Altair to the op pool. Credits to @michaelsproul for some appropriated efforts here.

## Additional Info

NA


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2021-10-16 05:07:23 +00:00
Paul Hauner
cfafe7ba3a Update to consensus-spec-tests v1.1.3 (#2722)
## Issue Addressed

NA

## Proposed Changes

Updates to `testing/ef_tests` to use https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.1.3.

Also updates `initialize_beacon_state_from_eth1` to set the `state.fork.previous_version` to the Altair fork version when starting a new Altair chain from genesis. This will not have an effect on mainnet or any long-lived testnets. This was introduced in https://github.com/ethereum/consensus-specs/releases/tag/v1.1.1.

## Additional Info

NA
2021-10-16 05:07:21 +00:00
Michael Sproul
5cde3fc4da Reduce lock contention in backfill sync (#2716)
## Proposed Changes

Clone the proposer pubkeys during backfill signature verification to reduce the time that the pubkey cache lock is held for. Cloning such a small number of pubkeys has negligible impact on the total running time, but greatly reduces lock contention.

On a Ryzen 5950X, the setup step seems to take around 180us regardless of whether the key is cloned or not, while the verification takes 7ms. When Lighthouse is limited to 10% of one core using `sudo cpulimit --pid <pid> --limit 10` the total time jumps up to 800ms, but the setup step remains only 250us. This means that under heavy load this PR could cut the time the lock is held for from 800ms to 250us, which is a huge saving of 99.97%!
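
The pattern, sketched with stand-in types (the real cache is Lighthouse's validator pubkey cache, not a plain `HashMap`):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

type ValidatorIndex = u64;

#[derive(Clone)]
struct PublicKey(Vec<u8>);

/// Hold the pubkey cache lock only long enough to clone the handful of
/// proposer keys we need, then drop it before the expensive signature
/// verification so other tasks aren't blocked for the full verification time.
fn proposer_pubkeys_for_batch(
    pubkey_cache: &RwLock<HashMap<ValidatorIndex, PublicKey>>,
    proposer_indices: &[ValidatorIndex],
) -> Vec<PublicKey> {
    let cache = pubkey_cache.read().expect("pubkey cache lock poisoned");
    let keys = proposer_indices
        .iter()
        .filter_map(|index| cache.get(index).cloned())
        .collect();
    drop(cache); // lock released here, before any verification work
    keys
}
```
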
2021-10-15 03:28:03 +00:00
Paul Hauner
9c5a8ab7f2 Change "too many resources" to "insufficient resources" in eth2_libp2p (#2713)
## Issue Addressed

NA

## Proposed Changes

Fixes what I assume is a typo in a log message. See the diff for details.

## Additional Info

NA
2021-10-15 00:07:12 +00:00
Mac L
7c23e2142a Allow custom certificates when connecting to BN (#2703)
## Issue Addressed

Resolves #2262 

## Proposed Changes

Add a new CLI flag `--beacon-nodes-tls-certs` which allows the user to specify a path to a certificate file (or a list of files, separated by commas). The VC will then use these certificates (in addition to the existing certificates in the OS trust store) when connecting to a beacon node over HTTPS.
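
A rough sketch of how the certificates could be loaded and added to an HTTPS client, assuming a `reqwest`-based client (the exact plumbing in the VC differs):

```rust
use std::fs;
use std::path::Path;

/// Load extra PEM root certificates and add them on top of the OS trust store.
fn client_with_extra_certs(
    cert_paths: &[&Path],
) -> Result<reqwest::Client, Box<dyn std::error::Error>> {
    let mut builder = reqwest::Client::builder();
    for path in cert_paths {
        let pem = fs::read(path)?;
        let cert = reqwest::Certificate::from_pem(&pem)?;
        builder = builder.add_root_certificate(cert);
    }
    Ok(builder.build()?)
}
```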

## Additional Info

This only supports certificates in PEM format.
2021-10-15 00:07:11 +00:00
Age Manning
05040e68ec Update discovery (#2711)
## Issue Addressed

#2695 

## Proposed Changes

This updates discovery to the latest version which has patched a panic that occurred due to a race condition in the bucket logic.
2021-10-14 22:09:38 +00:00
Paul Hauner
ef49524ff8 Quoted altair fields (2.0) (#2712)
## Issue Addressed

Resolves #2689

## Proposed Changes

Copy of #2709 so I can appease CI and merge without waiting for @realbigsean to come online. See #2709 for more information.

## Additional Info

NA


Co-authored-by: realbigsean <seananderson33@gmail.com>
2021-10-14 02:58:12 +00:00
Paul Hauner
18340d1fb6 Get arbitrary check passing (2.0) (#2710)
## Issue Addressed

NA

## Proposed Changes

This is a wholesale rip-off of #2708, see that PR for more of a description.

I've made this PR since @realbigsean is offline and I can't merge his PR due to Github's frustrating `target-branch-check` bug. I also changed the branch to `unstable`, since I'm trying to minimize the diff between `merge-f2f`/`unstable`. I'll just rebase `merge-f2f` onto `unstable` after this PR merges.

When running `make lint` I noticed the following warning:

```
warning: patch for `fixed-hash` uses the features mechanism. default-features and features will not take effect because the patch dependency does not support this mechanism
```

So, I removed the `features` section from the patch.

## Additional Info

NA


Co-authored-by: realbigsean <seananderson33@gmail.com>
2021-10-14 02:58:11 +00:00
Paul Hauner
e2d09bb8ac Add BeaconChainHarness::builder (#2707)
## Issue Addressed

NA

## Proposed Changes

This PR is near-identical to https://github.com/sigp/lighthouse/pull/2652, however it is to be merged into `unstable` instead of `merge-f2f`. Please see that PR for reasoning.

I'm making this duplicate PR to merge to `unstable` in an effort to shrink the diff between `unstable` and `merge-f2f` by doing smaller, lead-up PRs.

## Additional Info

NA
2021-10-14 02:58:10 +00:00
Michael Sproul
0a77d783a4 Make slashing protection import more resilient (#2598)
## Issue Addressed

Closes #2419

## Proposed Changes

Address a long-standing issue with the import of slashing protection data where the import would fail due to the data appearing slashable w.r.t the existing database. Importing is now idempotent, and will have no issues importing data that has been handed back and forth between different validator clients, or different implementations.

The implementation works by updating the high and low watermarks if they need updating, and not attempting to check if the input is slashable w.r.t itself or the database. This is a strengthening of the minification that we started to do by default since #2380, and what Teku has been doing since the beginning.
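
The core of the idempotent import can be pictured as a watermark merge, sketched below with simplified per-validator summaries (the real implementation also handles low watermarks and the interchange format):

```rust
use std::cmp::max;

/// Illustrative per-validator slashing protection summary.
#[derive(Clone, Copy)]
struct ProtectionSummary {
    max_signed_block_slot: u64,
    max_source_epoch: u64,
    max_target_epoch: u64,
}

/// Rather than checking the imported data against itself or the database,
/// simply advance the high watermarks wherever the import is ahead of what we
/// already have; importing the same data twice is then a no-op.
fn merge_import(existing: ProtectionSummary, imported: ProtectionSummary) -> ProtectionSummary {
    ProtectionSummary {
        max_signed_block_slot: max(existing.max_signed_block_slot, imported.max_signed_block_slot),
        max_source_epoch: max(existing.max_source_epoch, imported.max_source_epoch),
        max_target_epoch: max(existing.max_target_epoch, imported.max_target_epoch),
    }
}
```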

## Additional Info

The only feature we lose by doing this is the ability to do non-minified imports of clock drifted messages (cf. Prysm on Medalla). In theory, with the previous implementation we could import all the messages in case of clock drift and be aware of the "gap" between the real present time and the messages signed in the far future. _However_ for attestations this is close to useless, as the source epoch will advance as soon as justification occurs, which will require us to make slashable attestations with respect to our bogus attestation(s). E.g. if I sign an attestation 100=>200 when the current epoch is 101, then I won't be able to vote in any epochs prior to 101 becoming justified because 101=>102, 101=>103, etc are all surrounded by 100=>200. Seeing as signing attestations gets blocked almost immediately in this case regardless of our import behaviour, there's no point trying to handle it. For blocks the situation is more hopeful due to the lack of surrounds, but losing block proposals from validators who by definition can't attest doesn't seem like an issue (the other block proposers can pick up the slack).
2021-10-13 01:49:51 +00:00
Pawan Dhananjay
34d22b5920 Reduce validator monitor logging verbosity (#2606)
## Issue Addressed

Resolves #2541

## Proposed Changes

Reduces verbosity of validator monitor per epoch logging by batching info logs for multiple validators.

Instead of a log for every validator managed by the validator monitor, we now batch logs for attestation records for the previous epoch.

Before:
```log
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 1, epoch: 65875, matched_head: true, matched_target: true, inclusion_lag: 0 slot(s), service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 2, epoch: 65875, matched_head: true, matched_target: true, inclusion_lag: 0 slot(s), service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 3, epoch: 65875, matched_head: true, matched_target: true, inclusion_lag: 0 slot(s), service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 4, epoch: 65875, matched_head: true, matched_target: true, inclusion_lag: 0 slot(s), service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 5, epoch: 65875, matched_head: false, matched_target: true, inclusion_lag: 0 slot(s), service: val_mon
Sep 20 06:53:08.239 WARN Attestation failed to match head        validator: 5, epoch: 65875, service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 6, epoch: 65875, matched_head: false, matched_target: true, inclusion_lag: 0 slot(s), service: val_mon
Sep 20 06:53:08.239 WARN Attestation failed to match head        validator: 6, epoch: 65875, service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 7, epoch: 65875, matched_head: true, matched_target: false, inclusion_lag: 1 slot(s), service: val_mon
Sep 20 06:53:08.239 WARN Attestation failed to match target      validator: 7, epoch: 65875, service: val_mon
Sep 20 06:53:08.239 WARN Sub-optimal inclusion delay             validator: 7, epoch: 65875, optimal: 1, delay: 2, service: val_mon
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validator: 8, epoch: 65875, matched_head: true, matched_target: false, inclusion_lag: 1 slot(s), service: val_mon
Sep 20 06:53:08.239 WARN Attestation failed to match target      validator: 8, epoch: 65875, service: val_mon
Sep 20 06:53:08.239 WARN Sub-optimal inclusion delay             validator: 8, epoch: 65875, optimal: 1, delay: 2, service: val_mon
Sep 20 06:53:08.239 ERRO Previous epoch attestation missing      validator: 9, epoch: 65875, service: val_mon
Sep 20 06:53:08.239 ERRO Previous epoch attestation missing      validator: 10, epoch: 65875, service: val_mon
```

After:
```
Sep 20 06:53:08.239 INFO Previous epoch attestation success      validators: [1,2,3,4,5,6,7,8,9] , epoch: 65875, service: val_mon
Sep 20 06:53:08.239 WARN Previous epoch attestation failed to match head, validators: [5,6], epoch: 65875, service: val_mon
Sep 20 06:53:08.239 WARN Previous epoch attestation failed to match target, validators: [7,8], epoch: 65875, service: val_mon
Sep 20 06:53:08.239 WARN Previous epoch attestations had sub-optimal inclusion delay, validators: [7,8], epoch: 65875, service: val_mon
Sep 20 06:53:08.239 ERRO Previous epoch attestation missing      validators: [9,10], epoch: 65875, service: val_mon
```

The detailed individual logs are downgraded to debug logs.
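
The batching itself is simple, as in this sketch with stand-in types (the real code emits the `slog` lines shown above):

```rust
/// Illustrative summary of one validator's previous-epoch attestation record.
struct AttestationRecord {
    validator_index: u64,
    attested: bool,
    matched_head: bool,
    matched_target: bool,
    sub_optimal_inclusion: bool,
}

/// Validator indices bucketed by outcome, ready to be emitted as one log line each.
#[derive(Default)]
struct BatchedOutcomes {
    success: Vec<u64>,
    missing: Vec<u64>,
    missed_head: Vec<u64>,
    missed_target: Vec<u64>,
    sub_optimal_inclusion: Vec<u64>,
}

/// Bucket validator indices by outcome so the monitor logs once per bucket
/// instead of once (or more) per validator.
fn batch_attestation_outcomes(records: &[AttestationRecord]) -> BatchedOutcomes {
    let mut out = BatchedOutcomes::default();
    for r in records {
        if !r.attested {
            out.missing.push(r.validator_index);
            continue;
        }
        out.success.push(r.validator_index);
        if !r.matched_head {
            out.missed_head.push(r.validator_index);
        }
        if !r.matched_target {
            out.missed_target.push(r.validator_index);
        }
        if r.sub_optimal_inclusion {
            out.sub_optimal_inclusion.push(r.validator_index);
        }
    }
    out
}
```
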
2021-10-12 05:06:48 +00:00
Mac L
a73d698e30 Add TLS capability to the beacon node HTTP API (#2668)
Currently, the beacon node has no ability to serve the HTTP API over TLS.
Adding this functionality would be helpful for certain use cases, such as when you need a validator client to connect to a backup beacon node which is outside your local network, and the use of an SSH tunnel or reverse proxy would be inappropriate.

## Proposed Changes

- Add three new CLI flags to the beacon node
  - `--http-enable-tls`: enables TLS
  - `--http-tls-cert`: to specify the path to the certificate file
  - `--http-tls-key`: to specify the path to the key file
- Update the HTTP API to optionally use `warp`'s [`TlsServer`](https://docs.rs/warp/0.3.1/warp/struct.TlsServer.html) depending on the presence of the `--http-enable-tls` flag
- Update tests and docs
- Use a custom branch for `warp` to ensure proper error handling

## Additional Info

Serving the API over TLS should currently be considered experimental. The reason for this is that it uses code from an [unmerged PR](https://github.com/seanmonstar/warp/pull/717). This commit provides the `try_bind_with_graceful_shutdown` method to `warp`, which is helpful for controlling error flow when the TLS configuration is invalid (cert/key files don't exist, incorrect permissions, etc). 
I've implemented the same code in my [branch here](https://github.com/macladson/warp/tree/tls).

Once the code has been reviewed and merged upstream into `warp`, we can remove the dependency on my branch and the feature can be considered more stable.

Currently, the private key file must not be password-protected in order to be read into Lighthouse.
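
For reference, serving a `warp` filter over TLS looks roughly like the sketch below (assuming warp's `tls` feature, a tokio runtime, and placeholder cert/key paths; the real server also wires in graceful shutdown via the patched method mentioned above):

```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // Minimal illustrative route.
    let routes = warp::path("health").map(|| "ok");

    warp::serve(routes)
        .tls()
        // Paths are placeholders; Lighthouse takes them from --http-tls-cert / --http-tls-key.
        .cert_path("cert.pem")
        .key_path("key.pem")
        .run(([127, 0, 0, 1], 5052))
        .await;
}
```
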
2021-10-12 03:35:49 +00:00
Age Manning
0aee7ec873 Refactor Peerdb and PeerManager (#2660)
## Proposed Changes

This is a refactor of the PeerDB and PeerManager. A number of bugs have been surfacing around the connection state of peers and their interaction with the score state. 

This refactor tightens the mutability properties of peers such that only specific modules are able to modify the state of peer information, preventing inadvertent state changes that can lead to our local peer manager db being out of sync with libp2p.

Further, the logic around connection and scoring was quite convoluted and the distinction between the PeerManager and Peerdb was not well defined. Although these issues are not fully resolved, this PR is a step towards cleaning up this logic. The peerdb solely manages most mutability operations of peers, leaving higher-level logic to the peer manager.

A single `update_connection_state()` function has been added to the peer-db, making it solely responsible for modifying the peer's connection state. The ways the peer's scores can be modified have been reduced to three simple functions (`update_scores()`, `update_gossipsub_scores()` and `report_peer()`). This prevents ad-hoc modifications of scores; only the natural processes of score modification are allowed, which simplifies reasoning about score and state changes.
2021-10-11 02:45:06 +00:00
Michael Sproul
708557a473 Fix cargo audit warns for nix, psutil, time (#2699)
## Issue Addressed

Fix `cargo audit` failures on `unstable`

Closes #2698

## Proposed Changes

The main culprit is `nix`, which is vulnerable for versions below v0.23.0. We can't get by with a straight-forward `cargo update` because `psutil` depends on an old version of `nix` (cf. https://github.com/rust-psutil/rust-psutil/pull/93). Hence I've temporarily forked `psutil` under the `sigp` org, where I've included the update to `nix` v0.23.0.

Additionally, I took the chance to update the `time` dependency to v0.3, which removed a bunch of stale deps including `stdweb` which is no longer maintained. Lighthouse only uses the `time` crate in the notifier to do some pretty printing, and so wasn't affected by any of the breaking changes in v0.3 ([changelog here](https://github.com/time-rs/time/blob/main/CHANGELOG.md#030-2021-07-30)).
2021-10-11 00:10:35 +00:00