## Issue Addressed
Closes #2990
## Proposed Changes
Add a check to see if the `--validators-dir` CLI flag is set and, if so, store validator logs in it.
Ensure that if the log directory cannot be created, a `WARN` is emitted and file logging is disabled rather than panicking.
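For reference, a minimal sketch of the intended fallback behaviour (the helper name and signature here are hypothetical, not the actual Lighthouse code):
```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Hypothetical helper: try to create the log directory, returning `None` and
/// emitting a `WARN` (rather than panicking) if creation fails.
fn try_create_log_dir(log_dir: &Path, stdout_log: &slog::Logger) -> Option<PathBuf> {
    match fs::create_dir_all(log_dir) {
        Ok(()) => Some(log_dir.to_path_buf()),
        Err(e) => {
            slog::warn!(
                stdout_log,
                "Unable to create log directory; file logging disabled";
                "path" => ?log_dir,
                "error" => %e
            );
            None
        }
    }
}
```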
## Additional Info
Panics associated with logfiles can still occur in these scenarios:
1. The `$datadir/validators/logs` directory already exists with the wrong permissions (or was changed after creation).
1. The logfile already exists with the wrong permissions (or was changed after creation).
> These panics are cosmetic only since only the logfile thread panics. Following the panics, LH will continue to function as normal.
I believe this is due to the use of [`slog::Fuse`](https://docs.rs/slog/latest/slog/struct.Fuse.html) when initializing the logger.
I'm not sure if there is a better way of handling logfile errors?
I think ideally, rather than panicking, we would emit a `WARN` to the stdout logger with the panic reason, then exit the logfile thread gracefully.
## Issue Addressed
#3020
## Proposed Changes
- Alias the `validators-dir` arg to `validator-dir` in the `validator_client` subcommand.
- Alias the `validator-dir` arg to `validators-dir` in the `account_manager validator` subcommand.
- Add a test for the `validator_client` alias; a sketch of the aliasing is shown below.
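The aliasing can be expressed with clap's `Arg::alias` (a rough sketch assuming the clap 2.x API Lighthouse currently uses, not the exact flag definition):
```rust
use clap::Arg;

/// Hypothetical helper returning the flag definition; `--validator-dir` is
/// accepted as a (hidden) alias of `--validators-dir`.
fn validators_dir_flag<'a, 'b>() -> Arg<'a, 'b> {
    Arg::with_name("validators-dir")
        .long("validators-dir")
        .alias("validator-dir")
        .takes_value(true)
        .help("The directory which contains the validator keystores")
}
```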
## Issue Addressed
Addresses https://github.com/sigp/lighthouse/issues/2926
## Proposed Changes
Appropriated from https://github.com/sigp/lighthouse/issues/2926#issuecomment-1039676768:
When a node returns *any* error we call [`CandidateBeaconNode::set_offline`](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L424)) which sets its `status` to `CandidateError::Offline`. That node will then be ignored until the routine [`fallback_updater_service`](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L44)) manages to reconnect to it.
However, I believe there was an issue in the [`CandidateBeaconNode::refresh_status`](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L157-L178)) method, which is used by the updater service to see if the node has come good again. It was holding a [write lock on the `status` field](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L165)) whilst it polled the node's status. This means a long timeout would hog the write lock and starve other processes.
When a VC is trying to access a beacon node for whatever purpose (getting duties, posting blocks, etc), it performs [three passes](c3a793fd73/validator_client/src/beacon_node_fallback.rs (L432-L482)) through the lists of nodes, trying to run some generic `function` (closure, lambda, etc) on each node:
- 1st pass: only try running `function` on all nodes which are both synced and online.
- 2nd pass: try running `function` on all nodes that are online, but not necessarily synced.
- 3rd pass: for each offline node, try refreshing its status and then running `function` on it.
So, it turns out that if the `CandidateBeaconNode::refresh_status` function from the routine update service is hogging the write lock, the 1st pass gets blocked whilst trying to read the status of the first node. Nodes that should be left until the 3rd pass are therefore blocking the 1st and 2nd passes, hence the behaviour described in #2926.
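A simplified sketch of the fix applied to `refresh_status` (types and names reduced for illustration; not the actual Lighthouse code):
```rust
use tokio::sync::RwLock;

#[derive(Clone, Copy, Debug)]
enum CandidateError {
    Offline,
    NotSynced,
}

struct Candidate {
    status: RwLock<Result<(), CandidateError>>,
}

impl Candidate {
    /// Poll the node *without* holding the `status` write-lock, then take the
    /// lock only long enough to record the result, so a slow or timed-out
    /// health check can no longer starve readers of `status`.
    async fn refresh_status(&self) {
        let new_status = self.poll_health().await; // no lock held during the (possibly slow) request
        *self.status.write().await = new_status; // lock held only for the assignment
    }

    async fn poll_health(&self) -> Result<(), CandidateError> {
        // Placeholder for the real HTTP health/sync check against the beacon node.
        Ok(())
    }
}
```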
## Additional Info
NA
## Issue Addressed
Timeouts due to Windows builds running for 2h 20m.
## Proposed Changes
* Increase Bors timeout to 3h
* Refine the target branch check so that it will pass when we make PRs to feature branches. This is just an extra change I've been meaning to sneak in for a while.
## Additional Info
* I think it would also be cool to try caching for CI again, but that's a separate issue and we'll still need the long timeout on a cache miss.
## Issue Addressed
N/A
## Proposed Changes
Add an HTTP API which can be used to compute the block packing data for all blocks over a discrete range of epochs.
## Usage
### Request
```
curl "http:localhost:5052/lighthouse/analysis/block_packing_efficiency?start_epoch=57730&end_epoch=57732"
```
### Response
```
[
  {
    "slot": "1847360",
    "block_hash": "0xa7dc230659802df2f99ea3798faede2e75942bb5735d56e6bfdc2df335dcd61f",
    "proposer_info": {
      "validator_index": 1686,
      "graffiti": ""
    },
    "available_attestations": 7096,
    "included_attestations": 6459,
    "prior_skip_slots": 0
  },
  ...
]
```
## Additional Info
This is notably different to the existing lcli code:
- Uses `BlockReplayer` #2863 and as such runs significantly faster than the previous method.
- Corrects the off-by-one error (#2878).
- Removes the `offline` validators component. This was only a "best guess" used to estimate the "true" packing efficiency, and it was generally not helpful for direct comparisons between different packing methods. It has therefore been removed from the API; any future estimates of "offline" validators would be better suited to a separate, more targeted API or to 'beacon watch': #2873
- Includes `prior_skip_slots`.
## Issue Addressed
Closes #2880
## Proposed Changes
Support requests for the next epoch in the proposer duties API.
## Additional Info
Implemented by skipping the proposer cache for this case, because the cache for the future epoch would be missed every new slot as the `dependent_root` changes, and we don't want to "wash out" the cache by saving these additional values.
## Issue Addressed
#2953
## Proposed Changes
Adds an empty local validator check (a sketch is included below).
## Additional Info
Two other options:
- Add the check inside the `local_index` collection, instead of after it.
- Move the `local_index` collection to the beginning of the `poll_sync_committee_duties` function and combine the sync committee check with the Altair fork check.
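A rough sketch of the chosen approach, with heavily simplified signatures (the real function takes `&self` and far more context):
```rust
/// Skip the sync committee duties request entirely when this VC manages no
/// validators, rather than querying the BN with an empty index list.
async fn poll_sync_committee_duties(local_indices: &[u64]) -> Result<(), String> {
    if local_indices.is_empty() {
        return Ok(());
    }
    // ... fetch and process sync committee duties for `local_indices` ...
    Ok(())
}
```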
## Issue Addressed
I noticed in some logs some excess and unnecessary discovery queries. What was happening was that we were pruning our peers down to our outbound target and some would then disconnect. When we were below this threshold we would try to find more peers (even if we were at our peer limit). The request was futile because we had no more peer slots.
This PR corrects this issue and advances the pruning mechanism to favour subnet peers.
An overview of the new logic is:
- We prune peers down to a target outbound peer count which is higher than the minimum outbound peer count.
- We only search for more peers if there is room to do so, and we are below the minimum outbound peer count rather than the target. This gives us some buffer for peers to disconnect. The buffer is currently 10%.
The modified pruning logic is documented in the code, but for reference it should do the following (see the sketch below):
- Prune peers with bad scores first.
- If we need to prune more peers, then prune peers that are not subscribed to a long-lived subnet.
- If we still need to prune peers, then prune peers from subnets where we have a higher density of peers, which should drive towards a uniform distribution of peers across all subnets.
This will need a bit of testing as it modifies some significant peer management behaviours in lighthouse.
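A rough illustration of that pruning order (heavily simplified; the real peer manager works over `PeerDB` state rather than a flat list):
```rust
use std::cmp::Reverse;

struct PeerInfo {
    score: f64,
    /// Number of long-lived (attestation/sync committee) subnets this peer serves.
    long_lived_subnets: usize,
    /// Peers we already hold on this peer's most crowded subnet.
    max_subnet_density: usize,
}

/// Pick `excess` peers to prune: bad scores first, then peers on no long-lived
/// subnet, then peers on the most over-represented subnets.
fn select_peers_to_prune(mut peers: Vec<PeerInfo>, excess: usize) -> Vec<PeerInfo> {
    peers.sort_by_key(|p| {
        (
            p.score >= 0.0,                // bad-score peers sort first
            p.long_lived_subnets > 0,      // then peers serving no long-lived subnet
            Reverse(p.max_subnet_density), // then peers on the densest subnets
        )
    });
    peers.into_iter().take(excess).collect()
}
```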
## Issue Addressed
Alternative to #2935
## Proposed Changes
Replace the `Vec<u8>` inside `Bitfield` with a `SmallVec<[u8; 32]>`. This eliminates heap allocations for attestation bitfields until we reach 500K validators, at which point we can consider increasing `SMALLVEC_LEN` to 40 or 48.
While running Lighthouse under `heaptrack` I found that SSZ encoding and decoding of bitfields corresponded to 22% of all allocations by count. I've confirmed that with this change applied those allocations disappear entirely.
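A sketch of the change to the backing storage, plus the arithmetic behind the 500K figure (the real `Bitfield` type carries more machinery than shown here):
```rust
use smallvec::SmallVec;

/// With ~500k active validators there are roughly
/// 500_000 / (32 slots * 64 committees) ≈ 244 members per committee, so an
/// SSZ `Bitlist` (committee size + 1 length bit) still fits in 31 bytes and
/// the 32 inline bytes avoid any heap allocation.
pub const SMALLVEC_LEN: usize = 32;

pub struct Bitfield {
    bytes: SmallVec<[u8; SMALLVEC_LEN]>,
    len: usize,
}
```
Raising `SMALLVEC_LEN` later only changes the inline capacity; spilling to the heap beyond it remains transparent.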
## Additional Info
We can win another 8 bytes of space by using `smallvec`'s [`union` feature](https://docs.rs/smallvec/1.8.0/smallvec/#union), although I might leave that for a future PR because I don't know how experimental that feature is and whether it uses some spicy `unsafe` blocks.
## Issue Addressed
NA
## Proposed Changes
This PR extends #3018 to address my review comments there and add automated integration tests with Geth (and other implementations, in the future).
I've also de-duplicated the "unused port" logic by creating a `common/unused_port` crate.
## Additional Info
I'm not sure if we want to merge this PR, or update #3018 and merge that. I don't mind, I'm primarily opening this PR to make sure CI works.
Co-authored-by: Mark Mackey <mark@sigmaprime.io>
## Issue Addressed
Closes #3014
## Proposed Changes
- Rename `receipt_root` to `receipts_root`
- Rename `execute_payload` to `notify_new_payload`
- This is slightly weird since we modify everything except the actual HTTP call to the engine API. That change is expected to be implemented in #2985 (cc @ethDreamer)
- Enable "random" tests for Bellatrix.
## Notes
This will *partially* break compatibility with Kintsugi testnets in order to gain compatibility with [Kiln](https://hackmd.io/@n0ble/kiln-spec) testnets. I think it will only break the BN APIs due to the `receipts_root` change, however it might have some other effects too.
Co-authored-by: Michael Sproul <micsproul@gmail.com>
## Issue Addressed
N/A
## Proposed Changes
Removes all configurations and hard-coded rules related to the deprecated Pyrmont testnet.
## Additional Info
Pyrmont is deprecated and will be shut down after being used for scenario testing; this PR removes configurations related to it.
Co-authored-by: Zachinquarantine <zachinquarantine@yahoo.com>
## Issue Addressed
Lighthouse gossiping late messages
## Proposed Changes
Point LH to our fork that uses tokio's interval, which 1) works as expected and 2) is more performant than the previous version that did work as expected.
Upgrade libp2p
## Additional Info
https://github.com/libp2p/rust-libp2p/issues/2497
## Issue Addressed
NA
## Proposed Changes
Add the "Update Priority" section which has featured in many of our previous releases (e.g., [Poñeta](https://github.com/sigp/lighthouse/releases/v2.1.1)).
Previously this section has been copied in manually.
## Additional Info
NA
## Issue Addressed
#2883
## Proposed Changes
* Added `suggested-fee-recipient` & `suggested-fee-recipient-file` flags to the validator client (similar to the graffiti / graffiti-file implementation).
* Added a proposer preparation service to the VC, which sends the fee recipient of all known validators to the BN via the [/eth/v1/validator/prepare_beacon_proposer](https://github.com/ethereum/beacon-APIs/pull/178) API once per slot (see the sketch of the request body after this list).
* Added the [/eth/v1/validator/prepare_beacon_proposer](https://github.com/ethereum/beacon-APIs/pull/178) API endpoint and preparation-data caching.
* Added a cleanup routine to remove cached proposer preparations that have not been updated for 2 epochs.
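For reference, a sketch of one entry of the `POST /eth/v1/validator/prepare_beacon_proposer` request body (field names follow the standard API; the real Lighthouse type uses a quoted-integer index and a typed 20-byte address rather than plain strings):
```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct ProposerPreparationData {
    pub validator_index: String, // e.g. "1686"
    pub fee_recipient: String,   // e.g. "0xabcf8e0d4e9587369b2301d0790347320302cc09"
}
```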
## Additional Info
Changed the implementation following the discussion in #2883.
Co-authored-by: pk910 <philipp@pk910.de>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: Philipp K <philipp@pk910.de>
## Issue Addressed
On a network with few nodes, it is possible that the same node can be found from a subnet discovery and a normal peer discovery at the same time.
The network behaviour loads these peers into events and processes them when it has the chance. It can happen that the same peer enters the event queue more than once and is then dialed twice.
This PR registers nodes in the peerdb as being dialed *before* they enter the `NetworkBehaviour` queue, preventing the same peer from being queued multiple times and avoiding the race condition.
## Issue Addressed
#2947
## Proposed Changes
Store messages that fail to be published due to insufficient peers, so they can be retried later. Messages expire after half an epoch and are retried if gossipsub informs us that a useful peer has connected. Currently running in Atlanta.
## Additional Info
If sending the messages fails on retry, they will not be tried again.
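A toy version of the retry behaviour, under the assumption that messages are stored with a timestamp and drained in one pass when a useful peer connects (not the actual implementation):
```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

struct GossipRetryQueue<T> {
    expiry: Duration, // roughly half an epoch
    queue: VecDeque<(Instant, T)>,
}

impl<T> GossipRetryQueue<T> {
    fn new(expiry: Duration) -> Self {
        Self { expiry, queue: VecDeque::new() }
    }

    fn push(&mut self, msg: T) {
        self.queue.push_back((Instant::now(), msg));
    }

    /// Drain all non-expired messages for a single retry attempt; anything that
    /// fails to publish again is dropped rather than re-queued.
    fn drain_for_retry(&mut self) -> Vec<T> {
        let expiry = self.expiry;
        self.queue
            .drain(..)
            .filter(|(queued_at, _)| queued_at.elapsed() < expiry)
            .map(|(_, msg)| msg)
            .collect()
    }
}
```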
## Issue Addressed
NA
## Proposed Changes
Checks to see if attestations or sync messages are still valid before "accepting" them for propagation.
## Additional Info
NA
## Issue Addressed
NA
## Proposed Changes
Adds `STRICT_LATE_MESSAGE_PENALTIES: bool` which allows for toggling penalties for late sync/attn messages.
`STRICT_LATE_MESSAGE_PENALTIES` is set to `false`, since we're seeing a lot of late messages on the network which are causing peer drops. We can toggle the bool during testing to try and figure out what/who is the cause of these late messages.
In effect, this PR *relaxes* peer downscoring for late attns and sync committee messages.
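In sketch form (names simplified; the real toggle sits in the gossip verification/scoring path):
```rust
/// When `false`, late-but-otherwise-valid attestations and sync committee
/// messages are ignored without touching the sending peer's score.
pub const STRICT_LATE_MESSAGE_PENALTIES: bool = false;

enum LateMessageAction {
    Penalise,
    Ignore,
}

fn action_for_late_message() -> LateMessageAction {
    if STRICT_LATE_MESSAGE_PENALTIES {
        LateMessageAction::Penalise
    } else {
        LateMessageAction::Ignore
    }
}
```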
## Additional Info
- ~~Blocked on #2974~~
## Issue Addressed
Closes #2938
## Proposed Changes
* Build and publish images with a `-modern` suffix which enable CPU optimizations for modern hardware.
* Add docs for the plethora of available images!
* Unify all the Docker workflows in `docker.yml` (including for tagged releases).
## Additional Info
The `Dockerfile` is no longer used by our Docker Hub builds, as we use `cross` and a generic approach for ARM and x86. There's a new CI job `docker-build-from-source` which tests the `Dockerfile` without publishing anything.
In the last release, the gossipsub message history was increased from 2.1 seconds to a good portion of a slot.
Although it shouldn't cause too much of an issue, it could be related to receiving messages later than usual and interacting badly with our scoring system, penalizing peers. For consistency, this PR reduces the time we gossip messages back to the values of the previous release.
It also adjusts the gossipsub heartbeat time for testing purposes via a developer flag, but this should not affect end users.
## Issue Addressed
N/A
## Proposed Changes
https://github.com/sigp/lighthouse/pull/2940 introduced a bug where we parsed the uint256 terminal total difficulty as a hex string instead of a decimal string. Fixes the bug and adds tests.
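In essence (illustrative only; the real fix lives in the serde deserialiser for the spec config):
```rust
use ethereum_types::U256;

/// TERMINAL_TOTAL_DIFFICULTY is serialised as a *decimal* string in the spec
/// config, so it must be parsed with a decimal parser rather than a hex one.
fn parse_ttd(s: &str) -> Result<U256, String> {
    U256::from_dec_str(s).map_err(|e| format!("invalid decimal TTD: {:?}", e))
}
```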
## Issue Addressed
Implements the standard key manager API from https://ethereum.github.io/keymanager-APIs/, formerly https://github.com/ethereum/beacon-APIs/pull/151
Related to https://github.com/sigp/lighthouse/issues/2557
## Proposed Changes
- [x] Add all of the new endpoints from the standard API: GET, POST and DELETE.
- [x] Add a `validators.enabled` column to the slashing protection database to support atomic disable + export.
- [x] Add tests for all the common sequential accesses of the API
- [x] Add tests for interactions with remote signer validators
- [x] Add end-to-end tests for migration of validators from one VC to another
- [x] Implement the authentication scheme from the standard (token bearer auth)
## Additional Info
The `enabled` column in the validators SQL database is necessary to prevent a race condition when exporting slashing protection data. Without the slashing protection database having a way of knowing that a key has been disabled, a concurrent request to sign a message could insert a new record into the database. The `delete_concurrent_with_signing` test exercises this code path, and was indeed failing before the `enabled` column was added.
The validator client authentication has been modified from basic auth to bearer auth, with basic auth preserved for backwards compatibility.
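A simplified illustration of the bearer-token check (the real VC reuses its existing API token and HTTP server machinery):
```rust
/// Accept requests carrying `Authorization: Bearer <api_token>`.
fn check_bearer_auth(authorization: Option<&str>, api_token: &str) -> Result<(), &'static str> {
    match authorization.and_then(|header| header.strip_prefix("Bearer ")) {
        Some(token) if token.trim() == api_token => Ok(()),
        _ => Err("missing or invalid bearer token"),
    }
}
```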
## Proposed Changes
Add a new hardcoded spec for the Gnosis Beacon Chain.
Ideally, official Lighthouse executables will be able to connect to the Gnosis Beacon Chain from now on, using the `--network gnosis` CLI option.
## Issue Addressed
This PR fixes the unnecessary `WARN Single block lookup failed` messages described here:
https://github.com/sigp/lighthouse/pull/2866#issuecomment-1008442640
## Proposed Changes
Add a new cache to the `BeaconChain` that tracks the block roots of blocks from before finalization. These could be blocks from the canonical chain (which might need to be read from disk), or old pre-finalization blocks that have been forked out.
The cache also stores a set of block roots for in-progress single block lookups, which duplicates some of the information from sync's `single_block_lookups` hashmap:
a836e180f9/beacon_node/network/src/sync/manager.rs (L192-L196)
On a live node you can confirm that the cache is working by grepping logs for the message: `Rejected attestation to finalized block`.
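Conceptually the cache looks something like this (simplified, assuming the `lru` crate; not the actual type):
```rust
use lru::LruCache;
use std::collections::HashSet;

type BlockRoot = [u8; 32];

pub struct PreFinalizationBlockCache {
    /// Roots known to belong to blocks from before finalization.
    pre_finalization_roots: LruCache<BlockRoot, ()>,
    /// Roots sync is currently performing a single block lookup for.
    in_progress_lookups: HashSet<BlockRoot>,
}

impl PreFinalizationBlockCache {
    /// If this returns true, the attestation is rejected instead of triggering
    /// an unknown-block lookup.
    pub fn contains(&self, root: &BlockRoot) -> bool {
        self.pre_finalization_roots.contains(root) || self.in_progress_lookups.contains(root)
    }
}
```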
## Issue Addressed
N/A
## Proposed Changes
Add an HTTP API which can be used to compute the attestation performances of a validator (or all validators) over a discrete range of epochs.
Performances can be computed for a single validator, or for the global validator set.
## Usage
### Request
The API can be used as follows:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/{validator_index}?start_epoch=57730&end_epoch=57732"
```
Alternatively, to compute performances for the global validator set:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/global?start_epoch=57730&end_epoch=57732"
```
### Response
The response is JSON formatted as follows:
```
[
  {
    "index": 72,
    "epochs": {
      "57730": {
        "active": true,
        "head": false,
        "target": false,
        "source": false
      },
      "57731": {
        "active": true,
        "head": true,
        "target": true,
        "source": true,
        "delay": 1
      },
      "57732": {
        "active": true,
        "head": true,
        "target": true,
        "source": true,
        "delay": 1
      }
    }
  }
]
```
> Note that the `"epochs"` are not guaranteed to be in ascending order.
## Additional Info
- This API is intended to be used in our upcoming validator analysis tooling (#2873) and will likely not be very useful for regular users. Some advanced users or block explorers may find this API useful however.
- The request range is limited to 100 epochs (since the range is inclusive and it also computes the `end_epoch` it's actually 101 epochs) to prevent Lighthouse using exceptionally large amounts of memory.
## Issue Addressed
Currently performing a voluntary exit prompts for manual confirmation. This prevents automation of exits.
## Proposed Changes
Add the flag `--no-confirmation` to the account manager when performing voluntary exits to bypass this manual confirmation.
## Issue Addressed
Closes https://github.com/sigp/lighthouse/issues/2857
## Proposed Changes
Explicitly set GNU malloc's MMAP_THRESHOLD to 128KB, disabling dynamic adjustments. For rationale see the linked issue.
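The setting boils down to a single `mallopt` call (Linux/glibc only; this sketch assumes the `libc` crate's `M_MMAP_THRESHOLD` constant on linux-gnu targets):
```rust
/// Pin glibc's mmap threshold, disabling its dynamic adjustment.
/// `mallopt` returns 1 on success and 0 on failure.
fn set_mmap_threshold(bytes: libc::c_int) -> Result<(), &'static str> {
    if unsafe { libc::mallopt(libc::M_MMAP_THRESHOLD, bytes) } == 1 {
        Ok(())
    } else {
        Err("mallopt(M_MMAP_THRESHOLD) failed")
    }
}

// e.g. set_mmap_threshold(128 * 1024) for a fixed 128 KB threshold.
```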
While checking how to prioritise the polling of the network, I moved most of the service code into functions. I think this change is worth it on its own for code quality, since inside the `tokio::select` many tools don't work (cargo fmt, sometimes clippy, and sometimes even the compiler's errors get wacky). This is functionally equivalent to the previous code, just better organised.
## Proposed Changes
Initially the idea was to remove hashing of blocks in backfill sync. After considering it more, we concluded that we need to do it in both (forward and backfill sync) anyway. But since we had forgotten why we were doing it in the first place, this PR documents the logic.
Future us should find it useful.
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>
## Issue Addressed
Resolves https://github.com/sigp/lighthouse/issues/2763
## Proposed Changes
- Add a workflow which tests that the local testnet starts successfully.
- Add the `set` option to the scripts so they fail fast, letting us notice errors when starting the local testnet.
- Fix errors on macOS:
  - The redirect `&>>` has been supported since bash v4, but the version bundled with macOS (11.6.1) is v3. a54f119c9b
## Issue Addressed
NA
## Proposed Changes
- Bump Lighthouse version to v2.1.1
- Update `thread_local` from v1.1.3 to v1.1.4 to address https://rustsec.org/advisories/RUSTSEC-2022-0006
## Additional Info
- ~~Blocked on #2950~~
- ~~Blocked on #2952~~
## Proposed Changes
Remove the check for exact equality on the beacon node spec when polling `/config/spec` from the VC. This check was always overzealous, and mostly served to check that the BN was configured for upcoming forks. I've replaced it by explicit checks of the `altair_fork_epoch` and `bellatrix_fork_epoch` instead.
## Additional Info
We should come back to this and clean it up so that we can retain compatibility while removing the field `default`s we installed.
In the latest release we decreased the target number of subnet peers.
It appears this could be causing issues in some cases, so reverting it back to the previous number is wise. A larger PR that follows this one will address some other related discovery issues and peer management around subnet peer discovery.
## Proposed Changes
This PR establishes compatibility between Lighthouse's VC and Nimbus's BN.
Lighthouse was previously `POST`ing unquoted lists of validator indices to the attester and sync duties endpoints which were (correctly) not accepted by Nimbus. These lists had slipped through the cracks because they didn't have an explicit wrapper type to add `serde` annotations to. I've added the `ValidatorIndexDataRef` newtype in order to implement the modified serialisation behaviour.
## Testing
Combined with https://github.com/sigp/lighthouse/pull/2940, I've confirmed that this PR allows my Lighthouse VC on Prater to validate with the public Nimbus BN listed here: https://github.com/status-im/nimbus-eth2#quickly-test-your-tooling-against-nimbus. I haven't had a block proposal yet, but attestations and sync committee messages are working.
## Additional Info
This may also provide compatibility with Prysm BNs but I haven't had a chance to test that yet.
## Issue Addressed
Continuation to #2934
## Proposed Changes
Currently, we have the transition fields in the config (`TERMINAL_TOTAL_DIFFICULTY`, `TERMINAL_BLOCK_HASH` and `TERMINAL_BLOCK_HASH_ACTIVATION_EPOCH`) as mandatory fields.
This is causing compatibility issues with other clients' BNs (Nimbus and Teku v22.1.0) which don't return these fields on an `eth/v1/config/spec` API call. Since we don't use these values until the merge, I think it's okay to have default values set for these fields as well to ensure compatibility.
## Issue Addressed
#2923
There's a redundant field on the `BeaconChain` called `disabled_forks` that was once part of our fork-aware networking (#953) but which is no longer used and can be deleted.
## Proposed Changes
Removed all references to `disabled_forks`, so that the code compiles and `git grep disabled_forks` returns no results.
Co-authored-by: Divma <26765164+divagant-martian@users.noreply.github.com>