Commit Graph

111 Commits

Author SHA1 Message Date
Michael Sproul
20067b9465 Remove checkpoint alignment requirements and enable historic state pruning (#4610)
## Issue Addressed

Closes #3210
Closes #3211

## Proposed Changes

- Checkpoint sync from the latest finalized state regardless of its alignment.
- Add the `block_root` to the database's split point. This is _only_ added to the in-memory split in order to avoid a schema migration. See `load_split`.
- Add a new method to the DB called `get_advanced_state`, which looks up a state _by block root_, with a `state_root` as fallback. Using this method prevents accidental accesses of the split's unadvanced state, which does not exist in the hot DB and is not guaranteed to exist in the freezer DB at all. Previously Lighthouse would look up this state _from the freezer DB_, even if it was required for block/attestation processing, which was suboptimal.
- Replace several state look-ups in block and attestation processing with `get_advanced_state` so that they can't hit the split block's unadvanced state.
- Do not store any states in the freezer database by default. All states will be deleted upon being evicted from the hot database unless `--reconstruct-historic-states` is set. This is implemented using the anchor info that was previously used for checkpoint sync, which is now set even when syncing from genesis.

## Additional Info

Needs further testing. I want to stress-test the pruned database under Hydra.

The `get_advanced_state` method is intended to become more relevant over time: `tree-states` includes an identically named method that returns advanced states from its in-memory cache.
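
Below is a minimal stand-in sketch of the lookup order described above. The types and method names are simplified assumptions, not Lighthouse's actual store API; the point is only that the advanced state is fetched by block root from the hot database first, with the exact state root as a fallback.

```rust
use std::collections::HashMap;

type Hash256 = [u8; 32];
#[derive(Clone)]
struct BeaconState; // stand-in for the real state type

// Stand-in store: a hot map keyed by block root and a cold (freezer) map
// keyed by state root.
struct Store {
    hot_by_block_root: HashMap<Hash256, BeaconState>,
    cold_by_state_root: HashMap<Hash256, BeaconState>,
}

impl Store {
    // Prefer the advanced state keyed by block root (always present in the hot
    // DB); fall back to the exact state root, which may require a freezer
    // lookup and may be absent once historic state pruning is enabled.
    fn get_advanced_state(
        &self,
        block_root: Hash256,
        state_root: Hash256,
    ) -> Option<BeaconState> {
        self.hot_by_block_root
            .get(&block_root)
            .or_else(|| self.cold_by_state_root.get(&state_root))
            .cloned()
    }
}
```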

Co-authored-by: realbigsean <seananderson33@gmail.com>
2023-08-21 05:02:32 +00:00
Jimmy Chen
fc7f1ba6b9 Phase 0 attestation rewards via Beacon API (#4474)
## Issue Addressed

Addresses #4026.

Beacon-API spec [here](https://ethereum.github.io/beacon-APIs/?urls.primaryName=dev#/Beacon/getAttestationsRewards).

Endpoint: `POST /eth/v1/beacon/rewards/attestations/{epoch}`

This endpoint already supports post-Altair epochs. This PR adds support for phase 0 rewards calculation.
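
As a rough illustration of how the endpoint above can be queried, here is a hedged client-side sketch assuming `reqwest` (with the `blocking` and `json` features) and `serde_json`; the host/port are placeholders, and the request body is a list of validator indices or public keys (an empty list conventionally means all validators).

```rust
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let epoch = 100u64; // placeholder epoch
    let url = format!("http://localhost:5052/eth/v1/beacon/rewards/attestations/{epoch}");

    // Request rewards for a few specific validators; use an empty Vec to ask
    // for every validator.
    let validators: Vec<String> = vec!["0".into(), "32".into(), "63".into()];

    let response: serde_json::Value = reqwest::blocking::Client::new()
        .post(url)
        .json(&validators)
        .send()?
        .error_for_status()?
        .json()?;

    // Pretty-print whatever came back (ideal_rewards / total_rewards as shown
    // in the example response below).
    println!("{response:#}");
    Ok(())
}
```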

## Proposed Changes

- [x] Attestation rewards API to support phase 0 rewards calculation, re-using logic from `state_processing`. Refactored `get_attestation_deltas` slightly to support computing deltas for a subset of validators.
- [x] Add `inclusion_delay` to `ideal_rewards` (`beacon-API` spec update to follow)
- [x] Add `inactivity` penalties to both `ideal_rewards` and `total_rewards` (`beacon-API` spec update to follow)
- [x] Add tests to compute attestation rewards and compare results with beacon states 

## Additional Notes

- The extra penalty for missing attestations or being slashed during an inactivity leak is currently not included in the spec's API response (for both phase 0 and Altair).
- I went with adding `inactivity` as a separate component rather than combining it with the 4 rewards, because this is how it is grouped in [the phase 0 spec](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#get_attestation_deltas). During an inactivity leak, all rewards include the optimal reward, and inactivity penalties are calculated separately (see the code snippet from the spec below), so it would be quite confusing if we merged them. This also works better with Altair, because there's no "cancelling" of rewards and inactivity penalties are more separate.
- Altair calculation logic (to include inactivity penalties) to be updated in a follow-up PR.

```python
def get_attestation_deltas(state: BeaconState) -> Tuple[Sequence[Gwei], Sequence[Gwei]]:
    """
    Return attestation reward/penalty deltas for each validator.
    """
    source_rewards, source_penalties = get_source_deltas(state)
    target_rewards, target_penalties = get_target_deltas(state)
    head_rewards, head_penalties = get_head_deltas(state)
    inclusion_delay_rewards, _ = get_inclusion_delay_deltas(state)
    _, inactivity_penalties = get_inactivity_penalty_deltas(state)

    rewards = [
        source_rewards[i] + target_rewards[i] + head_rewards[i] + inclusion_delay_rewards[i]
        for i in range(len(state.validators))
    ]

    penalties = [
        source_penalties[i] + target_penalties[i] + head_penalties[i] + inactivity_penalties[i]
        for i in range(len(state.validators))
    ]

    return rewards, penalties
```

## Example API Response

<details>
  <summary>Click me</summary>
  
```json
{
  "ideal_rewards": [
    {
      "effective_balance": "1000000000",
      "head": "6638",
      "target": "6638",
      "source": "6638",
      "inclusion_delay": "9783",
      "inactivity": "0"
    },
    {
      "effective_balance": "2000000000",
      "head": "13276",
      "target": "13276",
      "source": "13276",
      "inclusion_delay": "19565",
      "inactivity": "0"
    },
    {
      "effective_balance": "3000000000",
      "head": "19914",
      "target": "19914",
      "source": "19914",
      "inclusion_delay": "29349",
      "inactivity": "0"
    },
    {
      "effective_balance": "4000000000",
      "head": "26553",
      "target": "26553",
      "source": "26553",
      "inclusion_delay": "39131",
      "inactivity": "0"
    },
    {
      "effective_balance": "5000000000",
      "head": "33191",
      "target": "33191",
      "source": "33191",
      "inclusion_delay": "48914",
      "inactivity": "0"
    },
    {
      "effective_balance": "6000000000",
      "head": "39829",
      "target": "39829",
      "source": "39829",
      "inclusion_delay": "58697",
      "inactivity": "0"
    },
    {
      "effective_balance": "7000000000",
      "head": "46468",
      "target": "46468",
      "source": "46468",
      "inclusion_delay": "68480",
      "inactivity": "0"
    },
    {
      "effective_balance": "8000000000",
      "head": "53106",
      "target": "53106",
      "source": "53106",
      "inclusion_delay": "78262",
      "inactivity": "0"
    },
    {
      "effective_balance": "9000000000",
      "head": "59744",
      "target": "59744",
      "source": "59744",
      "inclusion_delay": "88046",
      "inactivity": "0"
    },
    {
      "effective_balance": "10000000000",
      "head": "66383",
      "target": "66383",
      "source": "66383",
      "inclusion_delay": "97828",
      "inactivity": "0"
    },
    {
      "effective_balance": "11000000000",
      "head": "73021",
      "target": "73021",
      "source": "73021",
      "inclusion_delay": "107611",
      "inactivity": "0"
    },
    {
      "effective_balance": "12000000000",
      "head": "79659",
      "target": "79659",
      "source": "79659",
      "inclusion_delay": "117394",
      "inactivity": "0"
    },
    {
      "effective_balance": "13000000000",
      "head": "86298",
      "target": "86298",
      "source": "86298",
      "inclusion_delay": "127176",
      "inactivity": "0"
    },
    {
      "effective_balance": "14000000000",
      "head": "92936",
      "target": "92936",
      "source": "92936",
      "inclusion_delay": "136959",
      "inactivity": "0"
    },
    {
      "effective_balance": "15000000000",
      "head": "99574",
      "target": "99574",
      "source": "99574",
      "inclusion_delay": "146742",
      "inactivity": "0"
    },
    {
      "effective_balance": "16000000000",
      "head": "106212",
      "target": "106212",
      "source": "106212",
      "inclusion_delay": "156525",
      "inactivity": "0"
    },
    {
      "effective_balance": "17000000000",
      "head": "112851",
      "target": "112851",
      "source": "112851",
      "inclusion_delay": "166307",
      "inactivity": "0"
    },
    {
      "effective_balance": "18000000000",
      "head": "119489",
      "target": "119489",
      "source": "119489",
      "inclusion_delay": "176091",
      "inactivity": "0"
    },
    {
      "effective_balance": "19000000000",
      "head": "126127",
      "target": "126127",
      "source": "126127",
      "inclusion_delay": "185873",
      "inactivity": "0"
    },
    {
      "effective_balance": "20000000000",
      "head": "132766",
      "target": "132766",
      "source": "132766",
      "inclusion_delay": "195656",
      "inactivity": "0"
    },
    {
      "effective_balance": "21000000000",
      "head": "139404",
      "target": "139404",
      "source": "139404",
      "inclusion_delay": "205439",
      "inactivity": "0"
    },
    {
      "effective_balance": "22000000000",
      "head": "146042",
      "target": "146042",
      "source": "146042",
      "inclusion_delay": "215222",
      "inactivity": "0"
    },
    {
      "effective_balance": "23000000000",
      "head": "152681",
      "target": "152681",
      "source": "152681",
      "inclusion_delay": "225004",
      "inactivity": "0"
    },
    {
      "effective_balance": "24000000000",
      "head": "159319",
      "target": "159319",
      "source": "159319",
      "inclusion_delay": "234787",
      "inactivity": "0"
    },
    {
      "effective_balance": "25000000000",
      "head": "165957",
      "target": "165957",
      "source": "165957",
      "inclusion_delay": "244570",
      "inactivity": "0"
    },
    {
      "effective_balance": "26000000000",
      "head": "172596",
      "target": "172596",
      "source": "172596",
      "inclusion_delay": "254352",
      "inactivity": "0"
    },
    {
      "effective_balance": "27000000000",
      "head": "179234",
      "target": "179234",
      "source": "179234",
      "inclusion_delay": "264136",
      "inactivity": "0"
    },
    {
      "effective_balance": "28000000000",
      "head": "185872",
      "target": "185872",
      "source": "185872",
      "inclusion_delay": "273918",
      "inactivity": "0"
    },
    {
      "effective_balance": "29000000000",
      "head": "192510",
      "target": "192510",
      "source": "192510",
      "inclusion_delay": "283701",
      "inactivity": "0"
    },
    {
      "effective_balance": "30000000000",
      "head": "199149",
      "target": "199149",
      "source": "199149",
      "inclusion_delay": "293484",
      "inactivity": "0"
    },
    {
      "effective_balance": "31000000000",
      "head": "205787",
      "target": "205787",
      "source": "205787",
      "inclusion_delay": "303267",
      "inactivity": "0"
    },
    {
      "effective_balance": "32000000000",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    }
  ],
  "total_rewards": [
    {
      "validator_index": "0",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "32",
      "head": "212426",
      "target": "212426",
      "source": "212426",
      "inclusion_delay": "313050",
      "inactivity": "0"
    },
    {
      "validator_index": "63",
      "head": "-357771",
      "target": "-357771",
      "source": "-357771",
      "inclusion_delay": "0",
      "inactivity": "0"
    }
  ]
}
```
</details>
2023-07-18 01:48:40 +00:00
Michael Sproul
5569d97a07 Remove wget dependency (#4497)
## Proposed Changes

Replace `wget` in the EF-tests makefile with `curl`.

On macOS `curl` is pre-installed, and I found myself making this change to avoid installing `wget`.

The `-L` flag is used to follow redirects which is useful if a repo gets renamed, and more similar to `wget`'s default behaviour.
2023-07-17 00:14:16 +00:00
Jimmy Chen
46be05f728 Cache target attester balances for unrealized FFG progression calculation (#4362)
## Issue Addressed

#4118 

## Proposed Changes

This PR introduces a "progressive balances" cache on the `BeaconState`, which keeps track of the accumulated target attestation balance for the current & previous epochs. The cached values are utilised by fork choice to calculate unrealized justification and finalization (instead of converting epoch participation arrays to balances for each block we receive).

This optimization will be rolled out gradually to allow for more testing. A new `--progressive-balances disabled|checked|strict|fast` flag is introduced to support this:
- `checked`: enabled with checks against participation cache, and falls back to the existing epoch processing calculation if there is a total target attester balance mismatch. There is no performance gain from this as the participation cache still needs to be computed. **This is the default mode for now.**
- `strict`: enabled with checks against participation cache, returns error if there is a mismatch. **Used for testing only**.
- `fast`: enabled with no comparative checks and without computing the participation cache. This mode gives us the performance gains from the optimization. This is still experimental and not currently recommended for production usage, but will become the default mode in a future release.
- `disabled`: disables the progressive balances cache and uses the existing method for FFG progression calculation. This mode may be useful if we find a bug and want to stop the frequent error logs.
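
A minimal sketch of the idea behind the cache described above, with simplified names (the real `ProgressiveBalancesCache` tracks more detail): keep running totals of the previous/current epoch target-attesting balance and update them incrementally as participation flags are set, instead of recomputing the totals from the participation arrays for every block.

```rust
// Simplified stand-in for the progressive balances cache.
#[derive(Default)]
struct ProgressiveBalancesCache {
    previous_epoch_target_attesting_balance: u64,
    current_epoch_target_attesting_balance: u64,
}

impl ProgressiveBalancesCache {
    // Called when a validator newly gains the target participation flag.
    fn on_new_target_attester(&mut self, is_current_epoch: bool, effective_balance: u64) {
        if is_current_epoch {
            self.current_epoch_target_attesting_balance += effective_balance;
        } else {
            self.previous_epoch_target_attesting_balance += effective_balance;
        }
    }

    // Called at the epoch boundary: the current epoch's total becomes the
    // previous epoch's total and the new current epoch starts from zero.
    fn on_epoch_transition(&mut self) {
        self.previous_epoch_target_attesting_balance =
            self.current_epoch_target_attesting_balance;
        self.current_epoch_target_attesting_balance = 0;
    }
}
```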

### Tasks

- [x] Initial cache implementation in `BeaconState`
- [x] Perform checks in fork choice to compare the progressive balances cache against results from `ParticipationCache`
- [x] Add CLI flag, and disable the optimization by default
- [x] Testing on Goerli & Benchmarking
- [x] Move caching logic from state processing to the `ProgressiveBalancesCache` (see [this comment](https://github.com/sigp/lighthouse/pull/4362#discussion_r1230877001))
- [x] Add attesting balance metrics



Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-06-30 01:13:06 +00:00
Jack McPherson
1aff082eea Add broadcast validation routes to Beacon Node HTTP API (#4316)
## Issue Addressed

 - #4293 
 - #4264 

## Proposed Changes

*Changes largely follow those suggested in the main issue*.

 - Add new routes to HTTP API
   - `post_beacon_blocks_v2`
   - `post_blinded_beacon_blocks_v2`
 - Add new routes to `BeaconNodeHttpClient`
   - `post_beacon_blocks_v2`
   - `post_blinded_beacon_blocks_v2`
 - Define new Eth2 common types
   - `BroadcastValidation`, an enum representing the level of validation to apply to blocks prior to broadcast (a sketch of this enum follows the list below)
   - `BroadcastValidationQuery`, the corresponding HTTP query string type for the above type
 - ~~Define `_checked` variants of both `publish_block` and `publish_blinded_block` that enforce a validation level at a type level~~
 - Add interactive tests to the `bn_http_api_tests` test target covering each validation level (to their own test module, `broadcast_validation_tests`)
   - `beacon/blocks`
     - `broadcast_validation=gossip`
       - Invalid (400)
       - Full pass (200)
       - Partial pass (202)
     - `broadcast_validation=consensus`
       - Invalid (400)
       - Only gossip (400)
       - Only consensus pass (i.e., equivocates) (200)
       - Full pass (200)
     - `broadcast_validation=consensus_and_equivocation`
       - Invalid (400)
       - Invalid due to early equivocation (400)
       - Only gossip (400)
       - Only consensus (400)
       - Pass (200)
   - `beacon/blinded_blocks`
     - `broadcast_validation=gossip`
       - Invalid (400)
       - Full pass (200)
       - Partial pass (202)
     - `broadcast_validation=consensus`
       - Invalid (400)
       - Only gossip (400)
       - ~~Only consensus pass (i.e., equivocates) (200)~~
       - Full pass (200)
     - `broadcast_validation=consensus_and_equivocation`
       - Invalid (400)
       - Invalid due to early equivocation (400)
       - Only gossip (400)
       - Only consensus (400)
       - Pass (200)
 - Add a new trait, `IntoGossipVerifiedBlock`, which allows type-level guarantees to be made as to gossip validity
 - Modify the structure of the `ObservedBlockProducers` cache from a `(slot, validator_index)` mapping to a `((slot, validator_index), block_root)` mapping
 - Modify `ObservedBlockProducers::proposer_has_been_observed` to return a `SeenBlock` rather than a boolean on success
 - Punish gossip peer (low) for submitting equivocating blocks
 - Rename `BlockError::SlashablePublish` to `BlockError::SlashableProposal`
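
The following is a hedged sketch of the `BroadcastValidation` levels exercised above and how the query-string values map onto them; the variant and type names follow the PR text, but the exact definitions in Lighthouse may differ.

```rust
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
enum BroadcastValidation {
    // Default: the block must pass gossip checks before broadcast.
    #[default]
    Gossip,
    // The block must additionally pass full consensus checks before broadcast.
    Consensus,
    // Consensus checks plus a guarantee that the block does not equivocate
    // with a previously observed block for the same (slot, proposer).
    ConsensusAndEquivocation,
}

impl std::str::FromStr for BroadcastValidation {
    type Err = String;

    // Parse the `broadcast_validation` query parameter values used in the
    // tests listed above.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "gossip" => Ok(Self::Gossip),
            "consensus" => Ok(Self::Consensus),
            "consensus_and_equivocation" => Ok(Self::ConsensusAndEquivocation),
            other => Err(format!("unknown broadcast_validation level: {other}")),
        }
    }
}
```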

## Additional Info

This PR contains changes that directly modify how blocks are verified within the client. For more context, consult [comments in-thread](https://github.com/sigp/lighthouse/pull/4316#discussion_r1234724202).


Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-06-29 12:02:38 +00:00
Michael Sproul
affea585f4 Remove CountUnrealized (#4357)
## Issue Addressed

Closes #4332

## Proposed Changes

Remove the `CountUnrealized` type, defaulting unrealized justification to _on_. This fixes the #4332 issue by ensuring that importing the same block to fork choice always results in the same outcome.

Finalized sync speed may be slightly impacted by this change, but that is deemed an acceptable trade-off until the optimisation from #4118 is implemented.

TODO:

- [x] Also check that the block isn't a duplicate before importing
2023-06-16 06:44:31 +00:00
Jimmy Chen
8d9c748025 Fix attestation withdrawals root mismatch (#4249)
## Issue Addressed

Addresses #4234 

## Proposed Changes

- Skip withdrawals processing in an inconsistent state replay. 
- Repurpose `StateRootStrategy`: rename to `StateProcessingStrategy` and always skip withdrawals if using `StateProcessingStrategy::Inconsistent` (a minimal sketch follows this list)
- Add a test to reproduce the scenario
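
As a minimal sketch of the behaviour described above, assuming the strategy keeps `Accurate`/`Inconsistent` variants (an assumption based on the old `StateRootStrategy`), withdrawals processing could be gated roughly like this:

```rust
// Simplified stand-in for the renamed strategy.
enum StateProcessingStrategy {
    // Replay with state roots available and verified.
    Accurate,
    // Replay without reliable state roots; withdrawals processing is skipped
    // to avoid the root mismatch described in this PR.
    Inconsistent,
}

fn should_process_withdrawals(strategy: &StateProcessingStrategy) -> bool {
    matches!(strategy, StateProcessingStrategy::Accurate)
}
```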


Co-authored-by: Jimmy Chen <jimmy@sigmaprime.io>
2023-05-09 10:48:15 +00:00
Michael Sproul
c11638c36c Split common crates out into their own repos (#3890)
## Proposed Changes

Split out several crates which now exist in separate repos under `sigp`.

- [`ssz` and `ssz_derive`](https://github.com/sigp/ethereum_ssz)
- [`tree_hash` and `tree_hash_derive`](https://github.com/sigp/tree_hash)
- [`ethereum_hashing`](https://github.com/sigp/ethereum_hashing)
- [`ethereum_serde_utils`](https://github.com/sigp/ethereum_serde_utils)
- [`ssz_types`](https://github.com/sigp/ssz_types)

For the published crates see: https://crates.io/teams/github:sigp:crates-io?sort=recent-updates.

## Additional Info

- [x] Need to work out how to handle versioning. I was hoping to do 1.0 versions of several crates, but if they depend on `ethereum-types 0.x` that is not going to work. EDIT: decided to go with 0.5.x versions.
- [x] Need to port several changes from `tree-states`, `capella`, `eip4844` branches to the external repos.
2023-04-28 01:15:40 +00:00
Jimmy Chen
2de3451011 Rate limiting backfill sync (#3936)
## Issue Addressed

#3212 

## Proposed Changes

- Introduce a new `rate_limiting_backfill_queue` - any new inbound backfill work event is immediately sent to this FIFO queue **without any processing**
- Spawn a `backfill_scheduler` routine that pops a backfill event from the FIFO queue at specified intervals (currently halfway through a slot, or at 6s after slot start for 12s slots) and sends the event to `BeaconProcessor` via a `scheduled_backfill_work_tx` channel (a sketch of this scheduler loop follows the list)
- This channel gets polled last in the `InboundEvents`, and the work event received is wrapped in an `InboundEvent::ScheduledBackfillWork` enum variant, which gets processed immediately or queued by the `BeaconProcessor` (existing logic applies from here)
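
Below is a hedged, self-contained sketch of the scheduling pattern described above (not Lighthouse's actual code): inbound backfill work is parked in a FIFO queue and at most one event per tick is forwarded to the processor channel. Channel and event names are placeholders.

```rust
use std::collections::VecDeque;
use std::time::Duration;
use tokio::sync::mpsc;

// Stand-in for a real backfill work event.
struct BackfillWork(u64);

async fn backfill_scheduler(
    mut inbound_rx: mpsc::Receiver<BackfillWork>,
    scheduled_tx: mpsc::Sender<BackfillWork>,
    tick: Duration, // e.g. half a slot
) {
    let mut queue: VecDeque<BackfillWork> = VecDeque::new();
    let mut interval = tokio::time::interval(tick);
    loop {
        tokio::select! {
            // Buffer new inbound events without processing them.
            Some(event) = inbound_rx.recv() => queue.push_back(event),
            // At each tick, release at most one queued event to the processor.
            _ = interval.tick() => {
                if let Some(event) = queue.pop_front() {
                    if scheduled_tx.send(event).await.is_err() {
                        return; // processor dropped; stop scheduling
                    }
                }
            }
        }
    }
}
```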

Diagram comparing backfill processing with / without rate-limiting: 
https://github.com/sigp/lighthouse/issues/3212#issuecomment-1386249922

See this comment for @paulhauner's explanation and solution: https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956

## Additional Info

I've compared this branch (with backfill processing rate limited to 1 and 3 batches per slot) against the latest stable version. The CPU usage during backfill sync is reduced by ~5%-20%; more details on this page:

https://hackmd.io/@jimmygchen/SJuVpJL3j

The above testing is done on Goerli (as I don't currently have hardware for Mainnet); I'm guessing the differences are likely to be bigger on mainnet due to block size.

### TODOs

- [x] Experiment with processing multiple batches per slot. (need to think about how to do this for different slot durations)
- [x] Add option to disable rate-limiting, enabled by default.
- [x] (No longer required now we're reusing the reprocessing queue) Complete the `backfill_scheduler` task when backfill sync is completed or not required
2023-04-03 03:02:55 +00:00
Paul Hauner
1f8c17b530 Fork choice modifications and cleanup (#3962)
## Issue Addressed

NA

## Proposed Changes

- Implements https://github.com/ethereum/consensus-specs/pull/3290/
- Bumps `ef-tests` to [v1.3.0-rc.4](https://github.com/ethereum/consensus-spec-tests/releases/tag/v1.3.0-rc.4).

The `CountRealizedFull` concept has been removed and the `--count-unrealized-full` and `--count-unrealized` BN flags now do nothing but log a `WARN` when used.

## Database Migration Debt

This PR removes the `best_justified_checkpoint` from fork choice. This field is persisted on-disk and the correct way to go about this would be to make a DB migration to remove the field. However, in this PR I've simply stubbed out the value with a junk value. I've taken this approach because if we're going to do a DB migration I'd love to remove the `Option`s around the justified and finalized checkpoints on `ProtoNode` whilst we're at it. Those options were added in #2822 which was included in Lighthouse v2.1.0. The options were only put there to handle the migration and they've been set to `Some` ever since v2.1.0. There's no reason to keep them as options anymore.

I started adding the DB migration to this branch but I started to feel like I was bloating this rather critical PR with nice-to-haves. I've kept the partially-complete migration [over in my repo](https://github.com/paulhauner/lighthouse/tree/fc-pr-18-migration) so we can pick it up after this PR is merged.
2023-03-21 07:34:41 +00:00
Divma
047c7544e3 Clean capella (#4019)
## Issue Addressed

Cleans up all the remnants of 4844 in capella. This makes sure that when 4844 is reviewed there is nothing we miss because it was already included here.

## Proposed Changes

drop a bomb on every 4844 thing 

## Additional Info

The merge process I did (locally) is as follows:
- squash merge to produce one commit
- in a new branch off unstable containing the squashed commit, create a `git revert HEAD` commit
- merge that new branch onto 4844 with `--strategy ours`
- compare local 4844 to remote 4844 and make sure the diff is empty
- enjoy

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2023-03-01 03:19:02 +00:00
Paul Hauner
caa6190d4a Use consensus-spec-tests v1.3.0-rc.3 (#4021)
## Issue Addressed

NA

## Proposed Changes

Updates our `ef_tests` to use: https://github.com/ethereum/consensus-specs/releases/tag/v1.3.0-rc.3

This required:

- Skipping a `merkle_proof_validity` test (see #4022)
- Account for the `eip4844` tests changing name to `deneb`
    - My IDE did some Python linting during this change. It seemed simple and nice so I left it there.

## Additional Info

NA
2023-02-28 02:20:51 +00:00
realbigsean
302eaca344
intentionally skip LightClientHeader ssz static tests 2023-01-13 16:15:27 -05:00
realbigsean
35c9e2407b
bump ef-tests 2023-01-13 15:11:46 -05:00
Michael Sproul
2af8110529
Merge remote-tracking branch 'origin/unstable' into capella
Fixing the conflicts involved patching up some of the `block_hash` verification,
the rest will be done as part of https://github.com/sigp/lighthouse/issues/3870
2023-01-12 16:22:00 +11:00
realbigsean
98b11bbd3f
add historical summaries (#3865)
* add historical summaries

* fix tree hash caching, disable the sanity slots test with fake crypto

* add ssz static HistoricalSummary

* only store historical summaries after capella

* Teach `UpdatePattern` about Capella

* Tidy EF tests

* Clippy

Co-authored-by: Michael Sproul <michael@sigmaprime.io>
2023-01-11 12:40:21 +11:00
Michael Sproul
4bd2b777ec Verify execution block hashes during finalized sync (#3794)
## Issue Addressed

Recent discussions with other client devs about optimistic sync have revealed a conceptual issue with the optimisation implemented in #3738. In designing that feature I failed to consider that the execution node checks the `blockHash` of the execution payload before responding with `SYNCING`, and that omitting this check entirely results in a degradation of the full node's validation. A node omitting the `blockHash` checks could be tricked by a supermajority of validators into following an invalid chain, something which is ordinarily impossible.

## Proposed Changes

I've added verification of the `payload.block_hash` in Lighthouse. In case of failure we log a warning and fall back to verifying the payload with the execution client.

I've used our existing dependency on `ethers_core` for RLP support, and a new dependency on Parity's `triehash` crate for the Merkle patricia trie. Although the `triehash` crate is currently unmaintained it seems like our best option at the moment (it is also used by Reth, and requires vastly less boilerplate than Parity's generic `trie-root` library).

Block hash verification is pretty quick, about 500us per block on my machine (mainnet).

The optimistic finalized sync feature can be disabled using `--disable-optimistic-finalized-sync` which forces full verification with the EL.

## Additional Info

This PR also introduces a new dependency on our [`metastruct`](https://github.com/sigp/metastruct) library, which was perfectly suited to the RLP serialization method. There will likely be changes as `metastruct` grows, but I think this is a good way to start dogfooding it.

I took inspiration from some Parity and Reth code while writing this, and have preserved the relevant license headers on the files containing code that was copied and modified.
2023-01-09 03:11:59 +00:00
ethDreamer
cb94f639b0
Isolate withdrawals-processing Feature (#3854) 2023-01-09 11:05:28 +11:00
Mark Mackey
8711db2f3b Fix EF Tests 2023-01-04 15:14:43 -06:00
Mark Mackey
c188cde034
merge upstream/unstable 2022-12-28 14:43:25 -06:00
Michael Sproul
59a7a4703c Various CI fixes (#3813)
## Issue Addressed

Closes #3812
Closes #3750
Closes #3705
2022-12-20 01:34:52 +00:00
Mark Mackey
b75ca74222 Removed withdrawals feature flag 2022-12-19 15:38:46 -06:00
Michael Sproul
558367ab8c
Bounded withdrawals and spec v1.3.0-alpha.2 (#3802) 2022-12-16 09:20:45 +11:00
ethDreamer
b1c33361ea
Fixed Clippy Complaints & Some Failing Tests (#3791)
* Fixed Clippy Complaints & Some Failing Tests
* Update Dockerfile to Rust-1.65
* EF test file renamed
* Touch up comments based on feedback
2022-12-13 10:50:24 -06:00
ethDreamer
1a39976715
Fixed Compiler Warnings & Failing Tests (#3771) 2022-12-03 10:42:12 +11:00
Mark Mackey
8a04c3428e Merged with unstable 2022-11-30 17:29:10 -06:00
Michael Sproul
22115049ee Prioritise important parts of block processing (#3696)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2327

## Proposed Changes

This is an extension of some ideas I implemented while working on `tree-states`:

- Cache the indexed attestations from blocks in the `ConsensusContext`. Previously we were re-computing them 3-4 times over.
- Clean up `import_block` by splitting each part into `import_block_XXX`.
- Move some stuff off hot paths, specifically:
  - Relocate non-essential tasks that were running between receiving the payload verification status and priming the early attester cache. These tasks are moved after the cache priming:
    - Attestation observation
    - Validator monitor updates
    - Slasher updates
    - Updating the shuffling cache
  - Fork choice attestation observation now happens at the end of block verification in parallel with payload verification (this seems to save 5-10ms).
  - Payload verification now happens _before_ advancing the pre-state and writing it to disk! States were previously being written eagerly and adding ~20-30ms in front of verifying the execution payload. State catchup also sometimes takes ~500ms if we get a cache miss and need to rebuild the tree hash cache.

The remaining task that's taking substantial time (~20ms) is importing the block to fork choice. I _think_ this is because of pull-tips, and we should be able to optimise it out with a clever total active balance cache in the state (which would be computed in parallel with payload verification). I've decided to leave that for future work though. For now it can be observed via the new `beacon_block_processing_post_exec_pre_attestable_seconds` metric.


Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-30 05:22:58 +00:00
GeemoCandama
3534c85e30 Optimize finalized chain sync by skipping newPayload messages (#3738)
## Issue Addressed

#3704 

## Proposed Changes
Adds an `is_syncing_finalized: bool` parameter to the block verification functions. Sets the `payload_verification_status` to `Optimistic` if `is_syncing_finalized` is true. Uses the `SyncState` in `NetworkGlobals` in the `BeaconProcessor` to retrieve the syncing status.
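
A rough sketch of the decision described above, with assumed type and variant names (the real verification code threads this through several layers):

```rust
// Stand-in for the status attached to a block's execution payload.
enum PayloadVerificationStatus {
    // The execution layer has verified the payload via `newPayload`.
    Verified,
    // The payload was not sent to the execution layer; the block is imported
    // optimistically because it is part of an already-finalized chain.
    Optimistic,
}

fn payload_status_for_finalized_sync(is_syncing_finalized: bool) -> Option<PayloadVerificationStatus> {
    if is_syncing_finalized {
        // Skip the `newPayload` round-trip during finalized sync.
        Some(PayloadVerificationStatus::Optimistic)
    } else {
        // Fall through to the usual execution-layer verification.
        None
    }
}
```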

## Additional Info
I could implement `FinalizedSignatureVerifiedBlock` if you think it would be nicer.
2022-11-29 08:19:27 +00:00
Michael Sproul
788b337951
Op pool and gossip for BLS to execution changes (#3726) 2022-11-25 07:09:26 +11:00
ethDreamer
24e5252a55
Massive Update to Engine API (#3740)
* Massive Update to Engine API

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <micsproul@gmail.com>

* Update beacon_node/beacon_chain/src/execution_payload.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

* Update beacon_node/execution_layer/src/engine_api.rs

Co-authored-by: realbigsean <seananderson33@GMAIL.com>

Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: realbigsean <seananderson33@GMAIL.com>
2022-11-22 13:27:48 -05:00
Michael Sproul
0cdd049da9
Fixes to make EF Capella tests pass (#3719)
* Fixes to make EF Capella tests pass

* Clippy for state_processing
2022-11-14 13:14:31 -06:00
Michael Sproul
3be41006a6 Add --light-client-server flag and state cache utils (#3714)
## Issue Addressed

Part of https://github.com/sigp/lighthouse/issues/3651.

## Proposed Changes

Add a flag for enabling the light client server, which should be checked before gossip/RPC traffic is processed (e.g. https://github.com/sigp/lighthouse/pull/3693, https://github.com/sigp/lighthouse/pull/3711). The flag is available at runtime from `beacon_chain.config.enable_light_client_server`.

Additionally, a new method `BeaconChain::with_mutable_state_for_block` is added which I envisage being used for computing light client updates. Unfortunately its performance will be quite poor on average because it will only run quickly with access to the tree hash cache. Each slot the tree hash cache is only available for a brief window of time between the head block being processed and the state advance at 9s in the slot. When the state advance happens the cache is moved and mutated to get ready for the next slot, which makes it no longer useful for merkle proofs related to the head block. Rather than spend more time trying to optimise this I think we should continue prototyping with this code, and I'll make sure `tree-states` is ready to ship before we enable the light client server in prod (cf. https://github.com/sigp/lighthouse/pull/3206).

## Additional Info

I also fixed a bug in the implementation of `BeaconState::compute_merkle_proof` whereby the tree hash cache was moved with `.take()` but never put back with `.restore()`.
2022-11-11 11:03:18 +00:00
Giulio rebuffo
9d6209725f Added Merkle Proof Generation for Beacon State (#3674)
## Issue Addressed

This PR addresses partially #3651

## Proposed Changes

This PR adds the following methods:

* a new method on the `TreeHash` trait, `hash_tree_leaves`, which returns all the Merkle leaves of the SSZ object.
* a new method on `BeaconState`, `compute_merkle_proof`, which generates a Merkle proof for a given depth and index, using `hash_tree_leaves` to obtain the leaves (a toy sketch of proof generation follows the list).
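
To illustrate the proof-generation step described above, here is a toy, dependency-free sketch of building a Merkle proof from a list of leaves. The `hash2` function is a placeholder mixer standing in for the real consensus hash function, and the real Lighthouse code also handles padding and generalized indices.

```rust
// Placeholder two-to-one "hash"; NOT a real hash function.
fn hash2(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i].rotate_left(1);
    }
    out
}

// Return the sibling hashes proving `index` among `leaves`, bottom-up.
// This toy version assumes `leaves.len()` is a power of two.
fn compute_merkle_proof(leaves: &[[u8; 32]], mut index: usize) -> Vec<[u8; 32]> {
    let mut layer = leaves.to_vec();
    let mut proof = Vec::new();
    while layer.len() > 1 {
        // Record the sibling of the current node, then move up one level.
        proof.push(layer[index ^ 1]);
        layer = layer
            .chunks(2)
            .map(|pair| hash2(&pair[0], &pair[1]))
            .collect();
        index /= 2;
    }
    proof
}
```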

## Additional Info

Now here is some rationale on why I decided to go down this route: adding a new function to a commonly used trait is a pain, but it was necessary to make sure we have all the Merkle leaves for every object. That is why I only added `hash_tree_leaves` to the trait and not `compute_merkle_proof` as well: although it would make sense, it would give us code duplication and a harder review, and we only need it for one specific object in one specific use case, so it is not worth the effort yet, in my humble opinion.

Co-authored-by: Michael Sproul <micsproul@gmail.com>
2022-11-08 01:58:18 +00:00
realbigsean
c45b809b76
Cleanup payload types (#3675)
* Add transparent support

* Add `Config` struct

* Deprecate `enum_behaviour`

* Partially remove enum_behaviour from project

* Revert "Partially remove enum_behaviour from project"

This reverts commit 46ffb7fe77622cf420f7ba2fccf432c0050535d6.

* Revert "Deprecate `enum_behaviour`"

This reverts commit 89b64a6f53d0f68685be88d5b60d39799d9933b5.

* Add `struct_behaviour`

* Tidy

* Move tests into `ssz_derive`

* Bump ssz derive

* Fix comment

* newtype transaparent ssz

* use ssz transparent and create macros for  per fork implementations

* use superstruct map macros

Co-authored-by: Paul Hauner <paul@paulhauner.com>
2022-11-02 10:30:41 -04:00
realbigsean
d8a49aad2b
merge with unstable fixes 2022-11-01 13:26:56 -04:00
realbigsean
8656d23327
merge with unstable 2022-11-01 13:18:00 -04:00
realbigsean
137f230344
Capella eip 4844 cleanup (#3652)
* add capella gossip boiler plate

* get everything compiling

Co-authored-by: realbigsean <sean@sigmaprime.io
Co-authored-by: Mark Mackey <mark@sigmaprime.io>

* small cleanup

* small cleanup

* cargo fix + some test cleanup

* improve block production

* add fixme for potential panic

Co-authored-by: Mark Mackey <mark@sigmaprime.io>
2022-10-26 15:15:26 -04:00
Michael Sproul
59ec6b71b8 Consensus context with proposer index caching (#3604)
## Issue Addressed

Closes https://github.com/sigp/lighthouse/issues/2371

## Proposed Changes

Backport some changes from `tree-states` that remove duplicated calculations of the `proposer_index`.

With this change the proposer index should be calculated only once for each block, and then plumbed through to every place it is required.
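
A minimal sketch of the per-block caching pattern described above, with assumed field and method names (the real `ConsensusContext` carries more than just the proposer index):

```rust
#[derive(Default)]
struct ConsensusContext {
    proposer_index: Option<u64>,
}

impl ConsensusContext {
    // Compute the proposer index at most once per block; later callers get
    // the cached value instead of re-running the (expensive) computation.
    fn get_proposer_index<F>(&mut self, compute: F) -> u64
    where
        F: FnOnce() -> u64,
    {
        *self.proposer_index.get_or_insert_with(compute)
    }
}
```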

## Additional Info

In future I hope to add more data to the consensus context that is cached on a per-epoch basis, like the effective balances of validators and the base rewards.

There are some other changes to remove indexing in tests that were also useful for `tree-states` (the `tree-states` types don't implement `Index`).
2022-10-15 22:25:54 +00:00
Michael Sproul
e4cbdc1c77 Optimistic sync spec tests (v1.2.0) (#3564)
## Issue Addressed

Implements new optimistic sync test format from https://github.com/ethereum/consensus-specs/pull/2982.

## Proposed Changes

- Add parsing and runner support for the new test format.
- Extend the mock EL with a set of canned responses keyed by block hash. Although this doubles up on some of the existing functionality I think it's really nice to use compared to the `preloaded_responses` or static responses. I think we could write novel new opt sync tests using these primitives much more easily than the previous ones. Forks are natively supported, and different responses to `forkchoiceUpdated` and `newPayload` are also straightforward.

## Additional Info

Blocked on merge of the spec PR and release of new test vectors.
2022-10-15 22:25:52 +00:00
will
9f242137b0 Add a new bls test (#3235)
## Issue Addressed

#2629

## Proposed Changes

1. CI downloads the BLS test cases from https://github.com/ethereum/bls12-381-tests/
2. All the BLS test cases (except the `eth` ones) use the cases in the archive from step one.
3. The BLS test cases from https://github.com/ethereum/consensus-spec-tests stay there unused. In the future these BLS test cases may be removed, as suggested in https://github.com/ethereum/consensus-spec-tests/issues/25, so this does no harm and stays compatible with future cases.

## Additional Info


Question: 

I am not sure whether I should implement tests for `deserialization_G1`, `deserialization_G2` and `hash_to_G2` for this issue.
2022-10-12 23:40:42 +00:00
realbigsean
e81dbbfea4
compile 2022-10-03 21:48:02 -04:00
realbigsean
88006735c4
compile 2022-10-03 10:06:04 -04:00
realbigsean
fe6fc55449
fix compilation errors, rename capella -> shanghai, cleanup some rebase issues 2022-09-29 12:43:13 -04:00
realbigsean
acaa340b41
add new beacon state variant for shanghai 2022-09-29 12:37:14 -04:00
realbigsean
4cdf1b546d
add shanghai fork version and epoch 2022-09-29 12:28:58 -04:00
realbigsean
de44b300c0
add/update types 2022-09-29 12:25:56 -04:00
Paul Hauner
fa6ad1a11a Deduplicate block root computation (#3590)
## Issue Addressed

NA

## Proposed Changes

This PR removes duplicated block root computation.

Computing the `SignedBeaconBlock::canonical_root` has become more expensive since the merge as we need to compute the Merkle root of each transaction inside an `ExecutionPayload`.

Computing the root for [a mainnet block](https://beaconcha.in/slot/4704236) is taking ~10ms on my i7-8700K CPU @ 3.70GHz (no sha extensions). Given that our median seen-to-imported time for blocks is presently 300-400ms, removing a few duplicated block roots (~30ms) could represent an easy 10% improvement. When we consider that the seen-to-imported times include operations *after* the block has been placed in the early attester cache, we could expect the 30ms to be more significant WRT our seen-to-attestable times.

## Additional Info

NA
2022-09-23 03:52:42 +00:00
Paul Hauner
b0b606dabe Use SmallVec for TreeHash packed encoding (#3581)
## Issue Addressed

NA

## Proposed Changes

I've noticed that our block hashing times increase significantly after the merge. I did some flamegraph-ing and noticed that we're allocating a `Vec` for each byte of each execution payload transaction. This seems like unnecessary work and a bit of a fragmentation risk.

This PR switches to `SmallVec<[u8; 32]>` for the packed encoding of `TreeHash`. I believe this is a nice simple optimisation with no downside.
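
As a hedged illustration of the change (assuming the `smallvec` crate), the packed encoding type moves from a heap-allocating `Vec<u8>` to a `SmallVec<[u8; 32]>`, which keeps encodings of up to 32 bytes inline on the stack and only spills to the heap for longer values:

```rust
use smallvec::SmallVec;

// Typical packed leaf: a u64 encodes to 8 bytes, well within the 32-byte
// inline buffer, so no heap allocation happens for this common case.
fn packed_encoding(value: u64) -> SmallVec<[u8; 32]> {
    SmallVec::from_slice(&value.to_le_bytes())
}
```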

### Benchmarking

These numbers were computed using #3580 on my desktop (i7 hex-core). You can see a bit of noise in the numbers, that's probably just my computer doing other things. Generally I found this change takes the time from 10-11ms to 8-9ms. I can also see all the allocations disappear from flamegraph.

This is the block being benchmarked: https://beaconcha.in/slot/4704236

#### Before

```
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 980: 10.553003ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 981: 10.563737ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 982: 10.646352ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 983: 10.628532ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 984: 10.552112ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 985: 10.587778ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 986: 10.640526ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 987: 10.587243ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 988: 10.554748ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 989: 10.551111ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 990: 11.559031ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 991: 11.944827ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 992: 10.554308ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 993: 11.043397ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 994: 11.043315ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 995: 11.207711ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 996: 11.056246ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 997: 11.049706ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 998: 11.432449ms
[2022-09-15T21:44:19Z INFO  lcli::block_root] Run 999: 11.149617ms
```

#### After

```
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 980: 14.011653ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 981: 8.925314ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 982: 8.849563ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 983: 8.893689ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 984: 8.902964ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 985: 8.942067ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 986: 8.907088ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 987: 9.346101ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 988: 8.96142ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 989: 9.366437ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 990: 9.809334ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 991: 9.541561ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 992: 11.143518ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 993: 10.821181ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 994: 9.855973ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 995: 10.941006ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 996: 9.596155ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 997: 9.121739ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 998: 9.090019ms
[2022-09-15T21:41:49Z INFO  lcli::block_root] Run 999: 9.071885ms
```

2022-09-16 08:54:06 +00:00
Paul Hauner
ebd661783e Enable block_lookup_failed EF test (#3489)
## Issue Addressed

Resolves #3448

## Proposed Changes

Removes a known failure that wasn't actually a known failure. The tests declare this block invalid and we refuse to import it due to `ExecutionPayloadError(UnverifiedNonOptimisticCandidate)`.

This is correct since there is only one "eth1" block included in this test and two are required to trigger the merge (pre- and post-TTD blocks). It is slot 1 (tick = 12s) when this block is imported so the import must be prevented by `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY`.

I'm not sure where I got the idea in #3448 that this test needed retrospective checking, that seems like a false assumption in hindsight.

## Additional Info

- Blocked on #3464
2022-08-24 23:34:59 +00:00
Paul Hauner
a0605c4ee6 Bump EF tests to v1.2.0 rc.3 (#3483)
## Issue Addressed

NA

## Proposed Changes

Bumps test vectors and ignores another weird MacOS file.

## Additional Info

NA
2022-08-19 04:27:21 +00:00