Commit Graph

5164 Commits

Jimmy Chen
ca050053bf Use the native concurrency property to cancel workflows (#4572)
I noticed that some of our workflows aren't getting cancelled when a new one is triggered, so we end up with a long CI queue when multiple changes land in a short period.

Looking at the comment here, I noticed that the list of workflow IDs is outdated (some no longer exist, and some new ones are missing):
dfcb3363c7/.github/workflows/cancel-previous-runs.yml (L12-L13)

I attempted to update these, and came across this comment on the [`cancel-workflow-action`](https://github.com/styfle/cancel-workflow-action) repo:
> You probably don't need to install this custom action.
>
> Instead, use the native [concurrency](https://github.blog/changelog/2021-04-19-github-actions-limit-workflow-run-or-job-concurrency/) property to cancel workflows, for example:

So I thought that, instead of updating the workflow and maintaining the workflow IDs, perhaps we could experiment with the [native `concurrency` property](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#concurrency).
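For reference, a minimal sketch of what that might look like in a workflow file (the `group` key below is illustrative, adapted from the GitHub docs linked above):

```yaml
# Cancel any in-progress run of this workflow for the same ref
# whenever a new run is triggered.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```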
2023-08-14 03:16:03 +00:00
zhiqiangxu
842b42297b Fix bug of init_from_beacon_node (#4613) 2023-08-14 00:29:47 +00:00
zhiqiangxu
e92359b756 use account_manager::CMD instead of magic string (#4612)
Make the code style a bit more consistent with the following lines.
2023-08-14 00:29:47 +00:00
zhiqiangxu
fa93b58257 remove optional_eth2_network_config (#4611)
It seems the passed [`optional_config`](dfcb3363c7/lighthouse/src/main.rs (L515)) is always `Some` instead of `None`.
2023-08-14 00:29:46 +00:00
zhiqiangxu
501ce62d7c minor optimize process_active_validator: avoid a call to state.get_validator (#4608) 2023-08-14 00:29:45 +00:00
João Oliveira
9d8b2764ef align editorconfig with rustfmt (#4600)
## Issue Addressed
There seems to be a conflict between `editorconfig` and `rustfmt`. 
`editorconfig` is configured with [`insert_final_newline=false`](https://github.com/sigp/lighthouse/blob/stable/.editorconfig#L9C1-L9C21) which [removes the newline](https://github.com/editorconfig/editorconfig/wiki/EditorConfig-Properties#insert_final_newline), whereas `rustfmt` [adds a newline](https://rust-lang.github.io/rustfmt/?version=v1.6.0&search=#newline_style). 

## Proposed Changes

Align `.editorconfig` with `rustfmt`
2023-08-14 00:29:44 +00:00
zhiqiangxu
f1ac12f23a Fix some typos (#4565) 2023-08-14 00:29:43 +00:00
Eitan Seri-Levi
1fcada8a32 Improve transport connection errors (#4540)
## Issue Addressed

#4538 

## Proposed Changes

add newtype wrapper around DialError that extracts error messages and logs them in a more readable format

## Additional Info

I was able to test Transport Dial Errors in the situation where a libp2p instance attempts to ping a nonexistent peer. That error message should look something like

`A transport level error has ocurred: Connection refused (os error 61)`

AgeManning mentioned we should try fetching only the innermost error (in situations where there's a nested error). I took a stab at implementing that.

For non-transport `DialError`s, I wrote out the error messages explicitly (as per the docs). Could potentially clean things up here if that's not necessary.
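For the innermost-error idea, a minimal sketch using only `std::error::Error` (the libp2p types and exact formatting are omitted; the newtype name is illustrative):

```rust
use std::error::Error;
use std::fmt;

/// Hypothetical newtype that displays only the innermost cause of an error.
struct ClearDialError<E: Error>(E);

impl<E: Error> fmt::Display for ClearDialError<E> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Walk the `source()` chain down to the most inner error.
        let mut inner: &dyn Error = &self.0;
        while let Some(source) = inner.source() {
            inner = source;
        }
        write!(f, "{inner}")
    }
}
```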


Co-authored-by: Age Manning <Age@AgeManning.com>
2023-08-10 00:10:09 +00:00
Paul Hauner
b60304b19f Use BeaconProcessor for API requests (#4462)
## Issue Addressed

NA

## Proposed Changes

Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:

1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
1. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
1. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).

Presently there are two levels of priorities (see the sketch after this list):

- `Priority::P0`
    - The beacon processor will prioritise these above everything other than importing new blocks.
    - Roughly all validator-sensitive endpoints.
- `Priority::P1`
    - The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill things.
    - Everything that's not `Priority::P0`
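As a rough sketch of the bounding behaviour described above (this is not the actual `BeaconProcessor` API; the types and bound are illustrative):

```rust
use tokio::sync::mpsc;

// Hypothetical stand-ins for the real work items and priorities.
#[derive(Debug)]
enum Priority {
    P0, // validator-sensitive endpoints, beaten only by new block imports
    P1, // everything else, below practically all other P2P messages
}

#[derive(Debug)]
struct ApiRequest {
    priority: Priority,
}

fn submit(tx: &mpsc::Sender<ApiRequest>, req: ApiRequest) {
    // `try_send` fails immediately once the bounded queue is full, which
    // is how excess requests get dropped rather than piling up unbounded.
    if let Err(e) = tx.try_send(req) {
        eprintln!("queue full, dropping request: {e}");
    }
}

#[tokio::main]
async fn main() {
    // The channel bound caps how many requests can await processing at once.
    let (tx, mut rx) = mpsc::channel::<ApiRequest>(16);
    submit(&tx, ApiRequest { priority: Priority::P0 });
    println!("{:?}", rx.recv().await);
}
```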
    
The `--http-enable-beacon-processor false` flag can be supplied to revert to the old behaviour of spawning new `tokio` tasks for each request:

```
        --http-enable-beacon-processor <BOOLEAN>
            The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
            "true", HTTP API requests will queued and scheduled alongside other tasks. When set to "false", HTTP API
            responses will be executed immediately. [default: true]
```
    
## New CLI Flags

I added some other new CLI flags:

```
        --beacon-processor-aggregate-batch-size <INTEGER>
            Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
            reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
            network. [default: 64]
        --beacon-processor-attestation-batch-size <INTEGER>
            Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
            usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
            [default: 64]
        --beacon-processor-max-workers <INTEGER>
            Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
            consumption. Reducing the value may result in decreased resource usage and diminished performance. The
            default value is the number of logical CPU cores on the host.
        --beacon-processor-reprocess-queue-len <INTEGER>
            Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
            messages from being dropped while lower values may help protect the node from becoming overwhelmed.
            [default: 12288]
```


I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the GitHub runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.

## Additional Info

I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses, I guess this is what we signed up for 🤷 

The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for a builder response.

I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](efbabe3252). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
2023-08-08 23:30:15 +00:00
Paul Hauner
1373dcf076 Add validator-manager (#3502)
## Issue Addressed

Addresses #2557

## Proposed Changes

Adds the `lighthouse validator-manager` command, which provides:

- `lighthouse validator-manager create`
    - Creates a `validators.json` file and a `deposits.json` (same format as https://github.com/ethereum/staking-deposit-cli)
- `lighthouse validator-manager import`
    - Imports validators from a `validators.json` file to the VC via the HTTP API.
- `lighthouse validator-manager move`
    - Moves validators from one VC to another, utilizing only the VC API.

## Additional Info

In 98bcb947c I've reduced some VC `ERRO` and `CRIT` warnings to `WARN` or `DEBG` for the case where a pubkey is missing from the validator store. These were being triggered when we removed a validator but still had it in caches. It seems to me that `UnknownPubkey` will only happen in the case where we've removed a validator, so downgrading the logs is prudent. All the logs are `DEBG` apart from attestations and blocks which are `WARN`. I thought having *some* logging about this condition might help us down the track.

In 856cd7e37d I've made the VC delete the corresponding password file when it's deleting a keystore. This seemed like nice hygiene. Notably, it'll only delete that password file after it scans the validator definitions and finds that no other validator is also using that password file.
2023-08-08 00:03:22 +00:00
Paul Hauner
5ea75052a8 Increase slashing protection test SQL timeout to 500ms (#4574)
## Issue Addressed

NA

## Proposed Changes

We've been seeing a lot of [CI failures](https://github.com/sigp/lighthouse/actions/runs/5781296217/job/15666209142) with errors like this:

```
---- extra_interchange_tests::export_same_key_twice stdout ----
thread 'extra_interchange_tests::export_same_key_twice' panicked at 'called `Result::unwrap()` on an `Err` value: SQLError("Unable to open database: Error(None)")', validator_client/slashing_protection/src/extra_interchange_tests.rs:48:67
```

I'm assuming they're timeouts. I noticed that the tests use a 0.1s timeout; perhaps this just doesn't cut it when our new runners are overloaded.

## Additional Info

NA
2023-08-07 22:53:05 +00:00
Eitan Seri-Levi
521432129d Support SSZ request body for POST /beacon/blinded_blocks endpoints (v1 & v2) (#4504)
## Issue Addressed

#4262 

## Proposed Changes

add SSZ support in the request body for the POST /beacon/blinded_blocks endpoints (v1 & v2)

## Additional Info
2023-08-07 22:53:04 +00:00
Nico Flaig
31daf3a87c Update doppelganger note about sync committee contributions (#4425)
**Motivation**

As clarified [on discord](https://discord.com/channels/605577013327167508/605577013331361793/1121246688183603240), sync committee contributions are not delayed if DP is enabled.

**Description**

This PR updates the doppelganger note about sync committee contributions. Based on the current docs, a user might assume that DP is not working as expected.
2023-08-07 00:46:29 +00:00
Armağan Yıldırak
3397612160 Shift networking configuration (#4426)
## Issue Addressed
Addresses [#4401](https://github.com/sigp/lighthouse/issues/4401)

## Proposed Changes
Shift some constants into `ChainSpec` and remove the hard-coded values from the codebase.

## Additional Info

I mostly used `MainnetEthSpec::default_spec()` for getting a `ChainSpec`. I wonder whether I made a mistake there.


Co-authored-by: armaganyildirak <armaganyildirak@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: Diva M <divma@protonmail.com>
2023-08-03 01:51:47 +00:00
zhiqiangxu
7399a54ca3 CommitteeCache.get_all_beacon_committees: set correct capacity to avoid realloc (#4557) 2023-08-02 23:50:43 +00:00
zhiqiangxu
b5e25aeb2f CommitteeCache.initialized: fail early if possible (#4556) 2023-08-02 23:50:42 +00:00
zhiqiangxu
fcf51d691e fix typo (#4555) 2023-08-02 23:50:41 +00:00
Divma
ff9b09d964 upgrade to libp2p 0.52 (#4431)
## Issue Addressed

Upgrade libp2p to v0.52

## Proposed Changes
- **Workflows**: remove installation of `protoc`
- **Book**: remove installation of `protoc`
- **`Dockerfile`s and `cross`**: remove custom base `Dockerfile` for cross since it's no longer needed. Remove `protoc` from remaining `Dockerfiles`s
- **Upgrade `discv5` to `v0.3.1`:** we have some cool stuff in there: no longer needs `protoc` and faster ip updates on cold start
- **Upgrade `prometheus` to `0.21.0`**, now it no longer needs encoding checks
- **things that look like refactors:** bunch of api types were renamed and need to be accessed in a different (clearer) way
- **Lighthouse network**
	- connection limits are now a behaviour
	- banned peers no longer exist at the swarm level, but at the behaviour level
	- `connection_event_buffer_size` is now handled per connection, with a buffer size of 4
	- `mplex` is deprecated and was removed
	- the rpc handler now logs the peer to which it belongs

## Additional Info

Tried to keep as much behaviour unchanged as possible. However, there are many improvements we can make _after_ this upgrade:
- Smart connection limits: connection limits have so far been checked only by count; we can now use information about the incoming peer to decide if we want it
- More powerful peer management: dial attempts from other behaviours can be rejected early
- Incoming connections can be rejected early
- Banning can be returned exclusively to peer management: making use of this, we should no longer get connections from banned peers
- TCP NAT updates: we might be able to take advantage of confirmed external addresses to check TCP ports/IPs


Co-authored-by: Age Manning <Age@AgeManning.com>
Co-authored-by: Akihito Nakano <sora.akatsuki@gmail.com>
2023-08-02 00:59:34 +00:00
Gua00va
73764d0dd2 Deprecate exchangeTransitionConfiguration functionality (#4517)
## Issue Addressed

Solves #4442 
## Proposed Changes

EL clients log errors if we don't query this endpoint, but they are making releases that remove this error logging. After those are out we can stop calling it, after which point EL teams will remove the endpoint entirely. 
Refer to https://hackmd.io/@n0ble/deprecate-exchgTC
2023-07-31 23:51:39 +00:00
chonghe
cb275e746d Update Lighthouse book FAQ (#4510)
Some updates to the FAQ based on issues seen on Discord. Additionally, corrected the disk usage figure for the default SPRP, as the previously provided value was incorrect.

Co-authored-by: chonghe <44791194+chong-he@users.noreply.github.com>
2023-07-31 23:51:38 +00:00
Eitan Seri-Levi
e8c411c288 add ssz support in request body for /beacon/blocks endpoints (v1 & v2) (#4479)
## Issue Addressed

[#4457](https://github.com/sigp/lighthouse/issues/4457)

## Proposed Changes

add SSZ support in the request body for the /beacon/blocks endpoints (v1 & v2)


## Additional Info
2023-07-31 23:51:37 +00:00
Age Manning
8654f20028 Development feature flag - Disable backfill (#4537)
Often when testing I have to create a hack which is annoying to maintain. 

I think it might be handy to add a custom compile-time flag that developers can use if they want to test things locally without having to backfill a bunch of blocks.

There is probably an argument to have a feature called "backfill" which is enabled by default and can be disabled. I didn't go this route because I think it's counter-intuitive to have a feature that enables a core and necessary behaviour.
2023-07-31 01:53:08 +00:00
Gua00va
117802cef1 Add Eth Version Header (#4528)
## Issue Addressed
Closes #4525 

## Proposed Changes
The `GET /eth/v1/validator/blinded_blocks` and `GET /eth/v1/validator/blocks` endpoints now send the `Eth-Version` header.

Co-authored-by: Gua00va <105484243+Gua00va@users.noreply.github.com>
2023-07-31 01:53:07 +00:00
Jimmy Chen
b5337c0ea5 Fix incorrect ideal rewards calculation (#4520)
## Issue Addressed

This PR fixes a bug where the ideal rewards for source and head were incorrectly set.

Output from testing a validator that performed optimally in a Phase 0 epoch; note that the `source` and `head` under ideal rewards are incorrect (compared to the actual `total_rewards` below):

```json
{ 
   "ideal_rewards": [
    ...
      {
        "effective_balance": "32000000000",
        "head": "18771",
        "target": "18770",
        "source": "18729",
        "inclusion_delay": "17083",
        "inactivity": "0"
      }
    ],
    "total_rewards": [
      {
        "validator_index": "0",
        "head": "18729",
        "target": "18770",
        "source": "18771",
        "inclusion_delay": "17083",
        "inactivity": "0"
      }
    ]
}
```
2023-07-31 01:53:06 +00:00
Michael Sproul
b96cfcaaa4 Fix bug in lcli transition-blocks and improve pretty-ssz (#4513)
## Proposed Changes

- Fix bad `state_root` reuse in `lcli transition-blocks` that resulted in invalid results at skipped slots.
- Modernise `lcli pretty-ssz` to include fork-generic decoders for `SignedBeaconBlock` and `BeaconState` which respect the `--network`/`--testnet-dir` flag.

## Additional Info

Breaking change: the underscore names like `signed_block_merge` are removed in favour of the fork-generic name `SignedBeaconBlock`, and fork-specific names which match the superstruct variants, e.g. `SignedBeaconBlockMerge`.
2023-07-31 01:53:05 +00:00
Michael Sproul
eafe08780c Restore upstream arbitrary (#4372)
## Proposed Changes

Remove patch for `arbitrary` in favour of upstream, now that the `arithmetic_side_effects` lint no longer triggers in derive macro code.

## Additional Info

~~Blocked on Rust 1.71.0, to be released 13 July 23~~
2023-07-31 01:53:04 +00:00
Aoi Kurokawa
85a3340d0e Implement liveness BeaconAPI (#4343)
## Issue Addressed

#4243

## Proposed Changes

- create a new endpoint for `liveness/{epoch}`

## Additional Info
This is my first PR.
2023-07-31 01:53:03 +00:00
Paul Hauner
071dd4cd9c Add self-hosted runners v2 (#4506)
## Issue Addressed

NA

## Proposed Changes

Carries on from #4115, with the following modifications:

1. Self-hosted runners are only enabled if `github.repository == sigp/lighthouse`.
    - This allows forks to still have GitHub-hosted CI.
    - This gives us a method to switch back to Github-runners if we have extended downtime on self-hosted. 
1. Does not remove any existing dependency builds for GitHub-hosted runners (e.g., installing the latest Rust).
1. Adds the `WATCH_HOST` environment variable which defines where we expect to find the postgres db in the `watch` tests. This should be set to `host.docker.internal` for the tests to pass on self-hosted runners.

## Additional Info

NA


Co-authored-by: antondlr <anton@delaruelle.net>
2023-07-21 04:46:52 +00:00
Jimmy Chen
fc7f1ba6b9 Phase 0 attestation rewards via Beacon API (#4474)
## Issue Addressed

Addresses #4026.

Beacon-API spec [here](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Beacon/getAttestationsRewards).

Endpoint: `POST /eth/v1/beacon/rewards/attestations/{epoch}`

This endpoint already supports post-Altair epochs. This PR adds support for phase 0 rewards calculation.

## Proposed Changes

- [x] Attestation rewards API to support phase 0 rewards calculation, re-using logic from `state_processing`. Refactored `get_attestation_deltas` slightly to support computing deltas for a subset of validators.
- [x] Add `inclusion_delay` to `ideal_rewards` (`beacon-API` spec update to follow)
- [x] Add `inactivity` penalties to both `ideal_rewards` and `total_rewards` (`beacon-API` spec update to follow)
- [x] Add tests to compute attestation rewards and compare results with beacon states 

## Additional Notes

- The extra penalty for missing attestations or being slashed during an inactivity leak is currently not included in the API response (for both phase 0 and Altair) in the spec. 
- I went with adding `inactivity` as a separate component rather than combining it with the 4 rewards, because this is how it was grouped in [the phase 0 spec](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#get_attestation_deltas). During an inactivity leak, all rewards include the optimal reward, and inactivity penalties are calculated separately (see the code snippet from the spec below), so it would be quite confusing if we merged them. This also works better with Altair, because there's no "cancelling" of rewards and inactivity penalties are more separate.
- Altair calculation logic (to include inactivity penalties) to be updated in a follow-up PR.

```python
def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
    """
    Return attestation reward/penalty deltas for each validator.
    """
    source_rewards, source_penalties = get_source_deltas(state)
    target_rewards, target_penalties = get_target_deltas(state)
    head_rewards, head_penalties = get_head_deltas(state)
    inclusion_delay_rewards, _ = get_inclusion_delay_deltas(state)
    _, inactivity_penalties = get_inactivity_penalty_deltas(state)

    rewards = [
        source_rewards[i] + target_rewards[i] + head_rewards[i] + inclusion_delay_rewards[i]
        for i in range(len(state.validators))
    ]

    penalties = [
        source_penalties[i] + target_penalties[i] + head_penalties[i] + inactivity_penalties[i]
        for i in range(len(state.validators))
    ]

    return rewards, penalties
```

## Example API Response

<details>
  <summary>Click me</summary>
  
```json
{
  "ideal_rewards": [
    {
      "effective_balance": "1000000000",
      "head": "6638",
      "target": "6638",
      "source": "6638",
      "inclusion_delay": "9783",
      "inactivity": "0"
    },
    {
      "effective_balance": "2000000000",
      "head": "13276",
      "target": "13276",
      "source": "13276",
      "inclusion_delay": "19565",
      "inactivity": "0"
    },
    {
      "effective_balance": "3000000000",
      "head": "19914",
      "target": "19914",
      "source": "19914",
      "inclusion_delay": "29349",
      "inactivity": "0"
    },
    {
      "effective_balance": "4000000000",
      "head": "26553",
      "target": "26553",
      "source": "26553",
      "inclusion_delay": "39131",
      "inactivity": "0"
    },
    {
      "effective_balance": "5000000000",
      "head": "33191",
      "target": "33191",
      "source": "33191",
      "inclusion_delay": "48914",
      "inactivity": "0"
    },
    {
      "effective_balance": "6000000000",
      "head": "39829",
      "target": "39829",
      "source": "39829",
      "inclusion_delay": "58697",
      "inactivity": "0"
    },
    {
      "effective_balance": "7000000000",
      "head": "46468",
      "target": "46468",
      "source": "46468",
      "inclusion_delay": "68480",
      "inactivity": "0"
    },
    {
      "effective_balance": "8000000000",
      "head": "53106",
      "target": "53106",
      "source": "53106",
      "inclusion_delay": "78262",
      "inactivity": "0"
    },
    {
      "effective_balance": "9000000000",
      "head": "59744",
      "target": "59744",
      "source": "59744",
      "inclusion_delay": "88046",
      "inactivity": "0"
    },
    {
      "effective_balance": "10000000000",
      "head": "66383",
      "target": "66383",
      "source": "66383",
      "inclusion_delay": "97828",
      "inactivity": "0"
    },
    {
      "effective_balance": "11000000000",
      "head": "73021",
      "target": "73021",
      "source": "73021",
      "inclusion_delay": "107611",
      "inactivity": "0"
    },
    {
      "effective_balance": "12000000000",
      "head": "79659",
      "target": "79659",
      "source": "79659",
      "inclusion_delay": "117394",
      "inactivity": "0"
    },
    {
      "effective_balance": "13000000000",
      "head": "86298",
      "target": "86298",
      "source": "86298",
      "inclusion_delay": "127176",
      "inactivity": "0"
    },
    {
      "effective_balance": "14000000000",
      "head": "92936",
      "target": "92936",
      "source": "92936",
      "inclusion_delay": "136959",
      "inactivity": "0"
    },
    {
      "effective_balance": "15000000000",
      "head": "99574",
      "target": "99574",
      "source": "99574",
      "inclusion_delay": "146742",
      "inactivity": "0"
    },
    {
      "effective_balance": "16000000000",
      "head": "106212",
      "target": "106212",
      "source": "106212",
      "inclusion_delay": "156525",
      "inactivity": "0"
    },
    {
      "effective_balance": "17000000000",
      "head": "112851",
      "target": "112851",
      "source": "112851",
      "inclusion_delay": "166307",
      "inactivity": "0"
    },
    {
      "effective_balance": "18000000000",
      "head": "119489",
      "target": "119489",
      "source": "119489",
      "inclusion_delay": "176091",
      "inactivity": "0"
    },
    {
      "effective_balance": "19000000000",
      "head": "126127",
      "target": "126127",
      "source": "126127",
      "inclusion_delay": "185873",
      "inactivity": "0"
    },
    {
      "effective_balance": "20000000000",
      "head": "132766",
      "target": "132766",
      "source": "132766",
      "inclusion_delay": "195656",
      "inactivity": "0"
    },
    {
      "effective_balance": "21000000000",
      "head": "139404",
      "target": "139404",
      "source": "139404",
      "inclusion_delay": "205439",
      "inactivity": "0"
    },
    {
      "effective_balance": "22000000000",
      "head": "146042",
      "target": "146042",
      "source": "146042",
      "inclusion_delay": "215222",
      "inactivity": "0"
    },
    {
      "effective_balance": "23000000000",
      "head": "152681",
      "target": "152681",
      "source": "152681",
      "inclusion_delay": "225004",
      "inactivity": "0"
    },
    {
      "effective_balance": "24000000000",
      "head": "159319",
      "target": "159319",
      "source": "159319",
      "inclusion_delay": "234787",
      "inactivity": "0"
    },
    {
      "effective_balance": "25000000000",
      "head": "165957",
      "target": "165957",
      "source": "165957",
      "inclusion_delay": "244570",
      "inactivity": "0"
    },
    {
      "effective_balance": "26000000000",
      "head": "172596",
      "target": "172596",
      "source": "172596",
      "inclusion_delay": "254352",
      "inactivity": "0"
    },
    {
      "effective_balance": "27000000000",
      "head": "179234",
      "target": "179234",
      "source": "179234",
      "inclusion_delay": "264136",
      "inactivity": "0"
    },
    {
      "effective_balance": "28000000000",
      "head": "185872",
      "target": "185872",
      "source": "185872",
      "inclusion_delay": "273918",
      "inactivity": "0"
    },
    {
      "effective_balance": "29000000000",
      "head": "192510",
      "target": "192510",
      "source": "192510",
      "inclusion_delay": "283701",
      "inactivity": "0"
    },
    {
      "effective_balance": "30000000000",
      "head": "199149",
      "target": "199149",
      "source": "199149",
      "inclusion_delay": "293484",
      "inactivity": "0"
    },
    {
      "effective_balance": "31000000000",
      "head": "205787",
      "target": "205787",
      "source": "205787",
      "inclusion_delay": "303267",
      "inactivity": "0"
    },
    {
      "effective_balance": "32000000000",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    }
  ],
  "total_rewards": [
    {
      "validator_index": "0",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "32",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "63",
      "head": "-357771",
      "target": "-357771",
      "source": "-357771",
      "inclusion_delay": "0",
      "inactivity": "0"
    }
  ]
}
```
</details>
2023-07-18 01:48:40 +00:00
Divma
4435a22221 Cleanup unreachable code in lcli::generate_bootnode_enr and some tests (#4485)
## Issue Addressed
n/a. Noticed this while working on something else.

## Proposed Changes
- leverage the appropriate types to avoid a bunch of `unwrap` calls and errors

## Additional Info
n/a
2023-07-17 05:31:53 +00:00
Jimmy Chen
a25ec16a67 Speed up CI by installing foundry with Github action (#4505)
## Issue Addressed

Speed up CI by installing foundry with a GitHub action instead of building Anvil from source.

Building Anvil from source on GitHub-hosted runners currently takes about 10 mins. Using the `foundry-toolchain` action to install it only takes about 2 seconds.
2023-07-17 00:14:20 +00:00
Pawan Dhananjay
f2223feb21 Rust 1.71 lints (#4503)
## Issue Addressed

N/A

## Proposed Changes

Add lints for rust 1.71

[3789134](3789134ae2) is probably the one that needs the most attention, as it changes beacon state code. I changed the `is_in_inactivity_leak` function to return an `ArithError`, as not all consumers of that function work well with a `BeaconState::Error`.
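As a rough illustration of the shape of that change (the error type and arithmetic below are stand-ins, not the actual Lighthouse code):

```rust
#[derive(Debug, PartialEq)]
enum ArithError {
    Overflow,
}

// Per the spec, the chain is in an inactivity leak when finality lags by
// more than MIN_EPOCHS_TO_INACTIVITY_PENALTY (4) epochs.
fn is_in_inactivity_leak(previous_epoch: u64, finalized_epoch: u64) -> Result<bool, ArithError> {
    let finality_delay = previous_epoch
        .checked_sub(finalized_epoch)
        .ok_or(ArithError::Overflow)?;
    Ok(finality_delay > 4)
}
```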
2023-07-17 00:14:19 +00:00
Jimmy Chen
d4a61756ca CI fix: add retries to eth1 sim tests (#4501)
## Issue Addressed

This PR attempts to work around the recent frequent eth1 simulator failures caused by missing eth logs from Anvil.

> FailedToInsertDeposit(NonConsecutive { log_index: 1, expected: 0 })

This usually occurs at the beginning of the tests, and it guarantees a timeout after a few hours if this log shows up; it's currently causing our CI to fail quite frequently.

Example failure here: https://github.com/sigp/lighthouse/actions/runs/5525760195/jobs/10079736914

## Proposed Changes

The quick fix applied here adds a timeout to node startup and restarts the node if it fails to start in time.

- Add a 60-second timeout to beacon node startup in the eth1 simulator tests. It takes ~10 seconds on my machine, but could take longer on CI runners.
- Wrap the startup code in a retry function that allows for 3 retries before returning an error (see the sketch below).
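A condensed sketch of the timeout-plus-retry pattern (the function names, error type, and startup body are illustrative):

```rust
use std::time::Duration;
use tokio::time::timeout;

// Hypothetical startup routine; the real tests boot a beacon node here.
async fn start_node() -> Result<(), String> {
    Ok(())
}

async fn start_with_retries(attempts: usize) -> Result<(), String> {
    for i in 1..=attempts {
        // Give each startup attempt 60 seconds before retrying.
        match timeout(Duration::from_secs(60), start_node()).await {
            Ok(Ok(())) => return Ok(()),
            Ok(Err(e)) => eprintln!("attempt {i} failed: {e}"),
            Err(_) => eprintln!("attempt {i} timed out"),
        }
    }
    Err("node failed to start after all retries".into())
}
```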

## Additional Info

We should probably raise an issue under the Anvil GitHub repo so this can be investigated further.
2023-07-17 00:14:18 +00:00
Jimmy Chen
68d5a6cf99 Clean up local testnet files without prompting (#4498)
## Issue Addressed

Addresses an issue where CI could fail due to a nonexistent-file error:

```
Run ./clean.sh
rm: cannot remove '/home/runner/.lighthouse/local-testnet/geth_datadir4/geth/fastcache.tmp.1549331618': No such file or directory
Error: Process completed with exit code 1.
```

This seems to happen quite frequently now. I'm not sure exactly why, but perhaps it's worth suppressing the prompt?

https://github.com/sigp/lighthouse/actions/runs/5455027574/jobs/9925916159?pr=4463
2023-07-17 00:14:17 +00:00
Michael Sproul
5569d97a07 Remove wget dependency (#4497)
## Proposed Changes

Replace `wget` in the EF-tests makefile with `curl`.

On macOS `curl` is pre-installed, and I found myself making this change to avoid installing `wget`.

The `-L` flag is used to follow redirects, which is useful if a repo gets renamed, and makes the behaviour more similar to `wget`'s default.
2023-07-17 00:14:16 +00:00
Michael Sproul
03674c7199 Update mev-rs and remove patches (#4496)
## Issue Addressed

Fixes occasional compilation errors with mev-rs (see #4456).

## Proposed Changes

- Update `mev-rs` to the latest version, which allows us to remove hacky `[patch]` sections
- Update the `axum` version used in `watch` so LH only uses a single version
2023-07-17 00:14:15 +00:00
Jack McPherson
1a5de8b0f0 Remove instances of required arguments with default values (#4489)
## Issue Addressed

#4488 

## Proposed Changes

 - Remove all instances of the `required` modifier where we have a default value specified for a subcommand

## Additional Info

N/A
2023-07-17 00:14:14 +00:00
Jimmy Chen
5cd738c882 Use unique arg names for eth1-sim (#4463)
## Issue Addressed

When trying to run `eth1-sim` locally, the simulator doesn't start for me; it panics due to duplicate arg names for `proposer-nodes` (which uses the same arg names as `nodes`). Not sure why this isn't failing on CI but is failing for me 🤔

```
thread 'main' panicked at 'Argument short must be unique
thread 'main' panicked at 'Argument long must be unique
```
2023-07-17 00:14:13 +00:00
Michael Sproul
6c375205fb Fix HTTP state API bug and add --epochs-per-migration (#4236)
## Issue Addressed

Fix an issue observed by `@zlan` on Discord where Lighthouse would sometimes return this error when looking up states via the API:

> {"code":500,"message":"UNHANDLED_ERROR: ForkChoiceError(MissingProtoArrayBlock(0xc9cf1495421b6ef3215d82253b388d77321176a1dcef0db0e71a0cd0ffc8cdb7))","stacktraces":[]}

## Proposed Changes

The error stems from a faulty assumption in the HTTP API logic: that any state in the hot database must have its block in fork choice. This isn't true, because the hot database may update much less frequently than the fork choice store, e.g. if states are being reconstructed (which pauses the freezer migration), or if the freezer migration runs slowly. There could also be a race between loading the hot state and checking fork choice: even if the finalization migration of the DB and fork choice were atomic, the update could happen between the first and second calls.

To address this I've changed the HTTP API logic to use the finalized block's execution status as a fallback where it is safe to do so. In the case where a block is non-canonical and prior to finalization (permanently orphaned) we default `execution_optimistic` to `true`.

## Additional Info

I've also added a new CLI flag to reduce the frequency of the finalization migration as this is useful for several purposes:

- Spacing out database writes (less frequent, larger batches)
- Keeping a limited chain history with high availability, e.g. the last month in the hot database.

This new flag made it _substantially_ easier to test this change. It was extracted from `tree-states` (where it's called `--db-migration-period`), which is why this PR also carries the `tree-states` label.
2023-07-17 00:14:12 +00:00
Tyler
0c7eed5e58 bump proc-macro2 (#4464)
## Issue Addressed

Addresses issue:  #4459

## Proposed Changes

`cargo update -p proc-macro2`

proc-macro2 v1.0.58 -> v1.0.63

## Additional Info

See: #4459 / https://github.com/rust-lang/rust/issues/113152
2023-07-12 22:02:26 +00:00
Jack McPherson
62c9170755 Remove hidden re-exports to appease Rust 1.73 (#4495)
## Issue Addressed

#4494 

## Proposed Changes

 - Remove explicit re-exports of various types to appease the new compiler lint

## Additional Info

It seems `warn(hidden_glob_reexports)` is the main culprit.
2023-07-12 07:06:00 +00:00
Michael Sproul
420e9c490e Add state-root command and network support to lcli (#4492)
## Proposed Changes

* Add `lcli state-root` command for computing the hash tree root of a `BeaconState`.
* Add a `--network` flag which can be used instead of `--testnet-dir` to set the network, e.g. Mainnet, Goerli, Gnosis.
* Use the new network flag in `transition-blocks`, `skip-slots`, and `block-root`, which previously only supported mainnet.
* **BREAKING CHANGE** Remove the default value of `~/.lighthouse/testnet` from `--testnet-dir`. This may have made sense in previous versions where `lcli` was more testnet focussed, but IMO it is an unnecessary complication and foot-gun today.
2023-07-12 07:05:58 +00:00
Paul Hauner
c25825a539 Move the BeaconProcessor into a new crate (#4435)
*Replaces #4434. It is identical, but this PR has a smaller diff due to a curated commit history.*

## Issue Addressed

NA

## Proposed Changes

This PR moves the scheduling logic for the `BeaconProcessor` into a new crate in `beacon_node/beacon_processor`. Previously it existed in the `beacon_node/network` crate.

This addresses a circular-dependency problem where it's not possible to use the `BeaconProcessor` from the `beacon_chain` crate. The `network` crate depends on the `beacon_chain` crate (`network -> beacon_chain`), but importing the `BeaconProcessor` into the `beacon_chain` crate would create a circular dependency of `beacon_chain -> network`.

The `BeaconProcessor` was designed to provide queuing and prioritized scheduling for messages from the network. It has proven to be quite valuable and I believe we'd make Lighthouse more stable and effective by using it elsewhere. In particular, I think we should use the `BeaconProcessor` for:

1. HTTP API requests.
1. Scheduled tasks in the `BeaconChain` (e.g., state advance).

Using the `BeaconProcessor` for these tasks would help prevent the BN from becoming overwhelmed and would also help it to prioritize operations (e.g., choosing to process blocks from gossip before responding to low-priority HTTP API requests).

## Additional Info

This PR is intended to have zero impact on runtime behaviour. It aims to simply separate the *scheduling* code (i.e., the `BeaconProcessor`) from the *business logic* in the `network` crate (i.e., the `Worker` impls). Future PRs (see #4462) can build upon these works to actually use the `BeaconProcessor` for more operations.

I've gone to some effort to use `git mv` to make the diff look more like "file was moved and modified" rather than "file was deleted and a new one added". This should reduce review burden and help maintain commit attribution.
2023-07-10 07:45:54 +00:00
Michael Sproul
ea2420d193 Bump default checkpoint sync timeout to 3 minutes (#4466)
## Issue Addressed

[Users on Twitter](https://twitter.com/ashekhirin/status/1676334843192397824) are getting checkpoint sync URL timeouts with the default of 60s, so this PR increases the default timeout to 3 minutes.

I've also added a short section to the book about adjusting the timeout with `--checkpoint-sync-url-timeout`.
2023-07-08 13:16:06 +00:00
Jack McPherson
a6d5c7d7e0 Correct checks for backfill completeness (#4465)
## Issue Addressed

#4331 

## Proposed Changes

 - Use a comparison rather than strict equality between the earliest epoch we know about and the backfill target (which will be the most recent WSP by default, or genesis)
 - Add a helper function `BackFillSync<T>::would_complete` to achieve this in one location (see the sketch below)
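A hypothetical sketch of the helper (the field names and comparison direction are assumptions based on the description above, not the actual implementation):

```rust
/// Backfill syncs backwards from the head, so it is complete once the
/// earliest epoch we hold is at or before the target epoch.
struct BackFillSync {
    target_epoch: u64, // the most recent WSP by default, or genesis
}

impl BackFillSync {
    fn would_complete(&self, earliest_known_epoch: u64) -> bool {
        // A comparison rather than strict equality: stepping past the
        // target (instead of landing exactly on it) still completes.
        earliest_known_epoch <= self.target_epoch
    }
}
```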

## Additional Info

 - There's an ad hoc test for this in #4461


Co-authored-by: Age Manning <Age@AgeManning.com>
2023-07-06 07:35:31 +00:00
Paul Hauner
dfcb3363c7 Release v4.3.0 (#4452)
## Issue Addressed

NA

## Proposed Changes

Bump versions

## Additional Info

NA
2023-07-04 13:29:55 +00:00
Age Manning
8e65419455 Ipv6 bootnodes update (#4394)
We now officially have IPv6 support. The mainnet bootnodes have been updated to support IPv6. This PR updates Lighthouse's internal bootnodes for mainnet to avoid fetching them on initial load.
2023-07-03 03:20:21 +00:00
Jimmy Chen
46be05f728 Cache target attester balances for unrealized FFG progression calculation (#4362)
## Issue Addressed

#4118 

## Proposed Changes

This PR introduces a "progressive balances" cache on the `BeaconState`, which keeps track of the accumulated target attestation balance for the current & previous epochs. The cached values are utilised by fork choice to calculate unrealized justification and finalization (instead of converting epoch participation arrays to balances for each block we receive).

This optimization will be rolled out gradually to allow for more testing. A new `--progressive-balances disabled|checked|strict|fast` flag is introduced to support this (the modes are sketched after the list):
- `checked`: enabled with checks against participation cache, and falls back to the existing epoch processing calculation if there is a total target attester balance mismatch. There is no performance gain from this as the participation cache still needs to be computed. **This is the default mode for now.**
- `strict`: enabled with checks against participation cache, returns error if there is a mismatch. **Used for testing only**.
- `fast`: enabled with no comparative checks and without computing the participation cache. This mode gives us the performance gains from the optimization. This is still experimental and not currently recommended for production usage, but will become the default mode in a future release.
- `disabled`: disable the usage of progressive cache, and use the existing method for FFG progression calculation. This mode may be useful if we find a bug and want to stop the frequent error logs.
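A hypothetical sketch of the flag's modes and the cached values (names are illustrative, not the actual Lighthouse types):

```rust
/// The four modes accepted by `--progressive-balances`.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ProgressiveBalancesMode {
    Disabled, // existing FFG progression calculation
    Checked,  // default for now: verified against the participation cache
    Strict,   // testing only: error on any mismatch
    Fast,     // no comparative checks; the actual performance win
}

/// Accumulated target-attesting balances tracked by the cache.
struct ProgressiveBalancesCache {
    previous_epoch_target_attesting_balance: u64,
    current_epoch_target_attesting_balance: u64,
}
```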

### Tasks

- [x] Initial cache implementation in `BeaconState`
- [x] Perform checks in fork choice to compare the progressive balances cache against results from `ParticipationCache`
- [x] Add CLI flag, and disable the optimization by default
- [x] Testing on Goerli & Benchmarking
- [x]  Move caching logic from state processing to the `ProgressiveBalancesCache` (see [this comment](https://github.com/sigp/lighthouse/pull/4362#discussion_r1230877001))
- [x] Add attesting balance metrics



Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-06-30 01:13:06 +00:00
Eitan Seri-Levi
826e090f50 Update node health endpoint (#4310)
## Issue Addressed

[#4292](https://github.com/sigp/lighthouse/issues/4292)

## Proposed Changes

Updated the node health endpoint. It now:

- returns a 200 status code if `!syncing && !el_offline && !optimistic`
- returns a 206 if `(syncing || optimistic) && !el_offline`
- returns a 503 if `el_offline`
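That mapping condenses to a tiny function; a sketch (status codes per the description above, not the actual handler code):

```rust
// 200: healthy; 206: syncing or optimistic but EL reachable; 503: EL offline.
fn node_health_status(syncing: bool, el_offline: bool, optimistic: bool) -> u16 {
    if el_offline {
        503
    } else if syncing || optimistic {
        206
    } else {
        200
    }
}
```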



## Additional Info
2023-06-30 01:13:04 +00:00
Eitan Seri-Levi
edd093293a added debounce to log (#4269)
## Issue Addressed

[#4259](https://github.com/sigp/lighthouse/issues/4259)

## Proposed Changes

Debounce the spammy `Unable to send message to the beacon processor` log messages.
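A minimal sketch of time-based debouncing (the interval handling and names are illustrative):

```rust
use std::time::{Duration, Instant};

/// Suppresses repeat log lines until `interval` has elapsed.
struct DebounceTimer {
    last_logged: Option<Instant>,
    interval: Duration,
}

impl DebounceTimer {
    fn should_log(&mut self) -> bool {
        match self.last_logged {
            // Still within the quiet period: stay silent.
            Some(last) if last.elapsed() < self.interval => false,
            // Otherwise log and restart the quiet period.
            _ => {
                self.last_logged = Some(Instant::now());
                true
            }
        }
    }
}
```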

## Additional Info

We could also debounce other logs that tend to be spammy.

After some feedback we decided to additionally add the following change:

Create a newtype wrapper around `mpsc::Sender<BeaconWorkEvent<T>>`. When `try_send` on the wrapper returns an error, we increment a counter metric with one label per work type.
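A rough sketch of that wrapper (the event type and metrics call are stand-ins for the real machinery):

```rust
use tokio::sync::mpsc;

// Hypothetical work event carrying a label for metrics.
struct BeaconWorkEvent {
    work_type: &'static str,
}

fn increment_dropped_counter(work_type: &'static str) {
    // In the real code this would bump a counter metric with one
    // label per work type.
    eprintln!("dropped work event: {work_type}");
}

/// Newtype around the sender so every failed `try_send` is counted.
struct BeaconProcessorSend(mpsc::Sender<BeaconWorkEvent>);

impl BeaconProcessorSend {
    fn try_send(&self, event: BeaconWorkEvent) -> Result<(), ()> {
        let work_type = event.work_type;
        self.0.try_send(event).map_err(|_| {
            increment_dropped_counter(work_type);
        })
    }
}
```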
2023-06-30 01:13:03 +00:00