use crate::{
    attester_cache::{CommitteeLengths, Error},
    metrics,
};
use parking_lot::RwLock;
use proto_array::Block as ProtoBlock;

Use async code when interacting with EL (#3244)
## Overview
This rather extensive PR achieves two primary goals:
1. Uses the finalized/justified checkpoints of fork choice (FC), rather than those of the head state.
2. Refactors fork choice, block production and block processing to `async` functions.
Additionally, it achieves:
- Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
- Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
- Concurrent per-block-processing and execution payload verification during block processing.
- The `Arc`-ification of `SignedBeaconBlock` during block processing (it's never mutated, so why not?):
  - I had to do this to deal with sending blocks into spawned tasks.
  - Previously we were cloning the beacon block at least 2 times during each block processing; these clones are either removed or turned into cheaper `Arc` clones.
  - We were also `Box`-ing and un-`Box`-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
- Avoids cloning *all the blocks* in *every chain segment* during sync.
- It also has the potential to clean up our code where we need to pass an *owned* block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough :sweat_smile:)
- The `BeaconChain::HeadSafetyStatus` struct was removed. It was an old relic from prior merge specs.
For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273
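As a rough sketch of the concurrency pattern used for the points above (the function names here are made up, not Lighthouse's real API; only the `tokio::join!` shape matters):
```
// Toy stand-ins for the real operations (e.g. notifying the EL and pruning caches
// after a new head is chosen); only the concurrency pattern is the point here.
async fn update_execution_layer_forkchoice() {
    println!("notified the EL of the new head");
}

async fn prune_caches_for_old_head() {
    println!("pruned caches");
}

#[tokio::main]
async fn main() {
    // Run both tasks concurrently rather than back-to-back.
    tokio::join!(update_execution_layer_forkchoice(), prune_caches_for_old_head());
}
```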
## Changes to `canonical_head` and `fork_choice`
Previously, the `BeaconChain` had two separate fields:
```
canonical_head: RwLock<Snapshot>,
fork_choice: RwLock<BeaconForkChoice>
```
Now, we have grouped these values under a single struct:
```
canonical_head: CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
    fork_choice: RwLock<BeaconForkChoice>
}
```
Apart from ergonomics, the only *actual* change here is wrapping the canonical head snapshot in an `Arc`. This means that we no longer need to hold the `cached_head` (`canonical_head`, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the `cached_head` and `fork_choice` locks simultaneously.
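To illustrate that locking pattern with toy types (this is just a sketch, not the real `CanonicalHead`/`Snapshot` definitions): clone the `Arc` under a short-lived read lock, then use the snapshot with no lock held:
```
use parking_lot::RwLock;
use std::sync::Arc;

// Toy stand-in for the beacon chain's head snapshot.
struct Snapshot {
    head_block_root: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
}

impl CanonicalHead {
    // Clone the `Arc` while briefly holding the read lock, then drop the guard immediately.
    // Callers never hold `cached_head` while doing other work, so they can't end up holding
    // it together with the `fork_choice` lock.
    fn cached_head(&self) -> Arc<Snapshot> {
        Arc::clone(&*self.cached_head.read())
    }
}

fn main() {
    let head = CanonicalHead {
        cached_head: RwLock::new(Arc::new(Snapshot { head_block_root: 42 })),
    };
    let snapshot = head.cached_head();
    // No lock is held here; the `Arc` keeps the snapshot alive.
    assert_eq!(snapshot.head_block_root, 42);
}
```
This is the crux of the deadlock avoidance: readers only take `cached_head` for the duration of the `Arc` clone.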
## Breaking Changes
### The `state` (root) field in the `finalized_checkpoint` SSE event
Consider the scenario where epoch `n` is just finalized, but `start_slot(n)` is skipped. There are two state roots we might include in the `finalized_checkpoint` SSE event:
1. The state root of the finalized block, which is `get_block(finalized_checkpoint.root).state_root`.
2. The state root at `start_slot(n)`, which would be the state from (1), but "skipped forward" through any skip slots.
Previously, Lighthouse would choose (2). However, we can see that when [Teku generates that event](https://github.com/ConsenSys/teku/blob/de2b2801c89ef5abf983d6bf37867c37fc47121f/data/beaconrestapi/src/main/java/tech/pegasys/teku/beaconrestapi/handlers/v1/events/EventSubscriptionManager.java#L171-L182) it uses [`getStateRootFromBlockRoot`](https://github.com/ConsenSys/teku/blob/de2b2801c89ef5abf983d6bf37867c37fc47121f/data/provider/src/main/java/tech/pegasys/teku/api/ChainDataProvider.java#L336-L341) which uses (1).
I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.
## Notes for Reviewers
I've renamed `BeaconChain::fork_choice` to `BeaconChain::recompute_head`. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the `ForkChoice` struct.
I've changed the ordering of SSE events when a block is received. It used to be `[block, finalized, head]` and now it's `[block, head, finalized]`. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".
I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to `get_head`. Ensuring `get_head` has been run *at least once* means that the cached values don't need to be wrapped in an `Option`. This was fairly simple; it just involved passing a `slot` to the constructor so it knows *when* it's being run. When loading a fork choice from the store and a slot clock isn't handy I've just used the `slot` that was saved in the `fork_choice_store`. That seems like it would be a faithful representation of the slot when we saved it.
I added the `genesis_time: u64` to the `BeaconChain`. It's small, constant and nice to have around.
Since we're using FC for the fin/just checkpoints, we no longer get the `0x00..00` roots at genesis. You can see I had to remove a work-around in `ef-tests` here: b56be3bc2. I can't find any reason why this would be an issue, if anything I think it'll be better since the genesis-alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the `network` expected the 0x00..00 alias and patched it here: 3f26ac3e2.
You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:
- Changing tests to be `tokio::async` tests.
- Adding `.await` to fork choice, block processing and block production functions.
- Refactor of the `canonical_head` "API" provided by the `BeaconChain`. E.g., `chain.canonical_head.cached_head()` instead of `chain.canonical_head.read()`.
- Wrapping `SignedBeaconBlock` in an `Arc`.
- In the `beacon_chain/tests/block_verification`, we can't use the `lazy_static` `CHAIN_SEGMENT` variable anymore since it's generated with an async function. We just generate it in each test, not so efficient but hopefully insignificant.
I had to disable `rayon` concurrent tests in the `fork_choice` tests. This is because the use of `rayon` and `block_on` was causing a panic.
Co-authored-by: Mac L <mjladson@pm.me>

use std::sync::Arc;
use store::signed_block_and_blobs::BlockWrapper;
use types::*;

pub struct CacheItem<E: EthSpec> {
    /*
     * Values used to create attestations.
     */
    epoch: Epoch,
    committee_lengths: CommitteeLengths,
    beacon_block_root: Hash256,
    source: Checkpoint,
    target: Checkpoint,
    /*
     * Values used to make the block available.
     */
    block: Arc<SignedBeaconBlock<E>>,
    blobs: Option<Arc<BlobsSidecar<E>>>,
    proto_block: ProtoBlock,
}

/// Provides a single-item cache which allows for attesting to blocks before those blocks have
/// reached the database.
///
/// This cache stores enough information to allow Lighthouse to:
///
/// - Produce an attestation without using `chain.canonical_head`.
/// - Verify that a block root exists (i.e., will be imported in the future) during attestation
///   verification.
/// - Provide a block which can be sent to peers via RPC.
#[derive(Default)]
pub struct EarlyAttesterCache<E: EthSpec> {
    item: RwLock<Option<CacheItem<E>>>,
}

impl<E: EthSpec> EarlyAttesterCache<E> {
    /// Removes the cached item, meaning that all future calls to `Self::try_attest` will return
    /// `None` until a new cache item is added.
    pub fn clear(&self) {
        *self.item.write() = None
    }

    /// Updates the cache item, so that `Self::try_attest` will return `Some` when given suitable
    /// parameters.
    pub fn add_head_block(
        &self,
        beacon_block_root: Hash256,
        block: BlockWrapper<E>,
        proto_block: ProtoBlock,
        state: &BeaconState<E>,
        spec: &ChainSpec,
    ) -> Result<(), Error> {
        let epoch = state.current_epoch();
        let committee_lengths = CommitteeLengths::new(state, spec)?;
        let source = state.current_justified_checkpoint();
        let target_slot = epoch.start_slot(E::slots_per_epoch());
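        // If the state hasn't advanced past the epoch's start slot then the head block is itself
        // the epoch-boundary block; otherwise, read the target root from the state at that slot.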
        let target = Checkpoint {
            epoch,
            root: if state.slot() <= target_slot {
                beacon_block_root
            } else {
                *state.get_block_root(target_slot)?
            },
        };

        let (block, blobs) = block.deconstruct(Some(beacon_block_root));
        let item = CacheItem {
            epoch,
            committee_lengths,
            beacon_block_root,
            source,
            target,
            block,
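            // `deconstruct` hands the blobs back as a `Result`; map any error to `MissingBlobs`
            // and return early.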
            blobs: blobs.map_err(|_| Error::MissingBlobs)?,
            proto_block,
        };

        *self.item.write() = Some(item);

        Ok(())
    }

    /// Will return `Some(attestation)` if all the following conditions are met:
    ///
    /// - There is a cache `item` present.
    /// - The `request_slot` is in the same epoch as `item.epoch`.
    /// - The `request_index` does not exceed `item.committee_count`.
    pub fn try_attest(
        &self,
        request_slot: Slot,
        request_index: CommitteeIndex,
        spec: &ChainSpec,
    ) -> Result<Option<Attestation<E>>, Error> {
        let lock = self.item.read();
        let item = if let Some(item) = lock.as_ref() {
            item
        } else {
            return Ok(None);
        };

        let request_epoch = request_slot.epoch(E::slots_per_epoch());
        if request_epoch != item.epoch {
            return Ok(None);
        }

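        // An attestation can't point to a block from a later slot than the attestation itself, so
        // refuse to serve requests for slots before the cached block's slot.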
        if request_slot < item.block.slot() {
            return Ok(None);
        }

        let committee_count = item
            .committee_lengths
            .get_committee_count_per_slot::<E>(spec)?;
        if request_index >= committee_count as u64 {
            return Ok(None);
        }

        let committee_len =
            item.committee_lengths
                .get_committee_length::<E>(request_slot, request_index, spec)?;

        let attestation = Attestation {
            aggregation_bits: BitList::with_capacity(committee_len)
                .map_err(BeaconStateError::from)?,
            data: AttestationData {
                slot: request_slot,
                index: request_index,
                beacon_block_root: item.beacon_block_root,
                source: item.source,
                target: item.target,
            },
            signature: AggregateSignature::empty(),
        };

        metrics::inc_counter(&metrics::BEACON_EARLY_ATTESTER_CACHE_HITS);

        Ok(Some(attestation))
    }

    /// Returns `true` if `block_root` matches the cached item.
    pub fn contains_block(&self, block_root: Hash256) -> bool {
        self.item
            .read()
            .as_ref()
            .map_or(false, |item| item.beacon_block_root == block_root)
    }

    /// Returns the block, if `block_root` matches the cached item.
    pub fn get_block(&self, block_root: Hash256) -> Option<Arc<SignedBeaconBlock<E>>> {
        self.item
            .read()
            .as_ref()
            .filter(|item| item.beacon_block_root == block_root)
            .map(|item| item.block.clone())
    }

    /// Returns the blobs, if `block_root` matches the cached item.
    pub fn get_blobs(&self, block_root: Hash256) -> Option<Arc<BlobsSidecar<E>>> {
        self.item
            .read()
            .as_ref()
            .filter(|item| item.beacon_block_root == block_root)
            .and_then(|item| item.blobs.clone())
    }

    /// Returns the proto-array block, if `block_root` matches the cached item.
    pub fn get_proto_block(&self, block_root: Hash256) -> Option<ProtoBlock> {
        self.item
            .read()
            .as_ref()
            .filter(|item| item.beacon_block_root == block_root)
            .map(|item| item.proto_block.clone())
    }
}