lighthouse/beacon_node/execution_layer/src/engines.rs


//! Provides generic behaviour for multiple execution engines, specifically fallback behaviour.
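//!
//! The engine's detailed state (`EngineStateInternal`) is collapsed into a coarse
//! online/offline `EngineState` that other services (e.g. sync) can observe through a
//! `tokio::sync::watch` channel; see `State::watch`.
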
use crate::engine_api::{
    Error as EngineApiError, ForkchoiceUpdatedResponse, PayloadAttributes, PayloadId,
};
use crate::HttpJsonRpc;
use lru::LruCache;
use slog::{debug, error, info, Logger};
use std::future::Future;
use std::sync::Arc;
use task_executor::TaskExecutor;
use tokio::sync::{watch, Mutex, RwLock};
use tokio_stream::wrappers::WatchStream;
use types::ExecutionBlockHash;

/// The number of payload IDs that will be stored for each `Engine`.
///
/// Since the size of each value is small (~100 bytes) a large number is used for safety.
/// FIXME: check this assumption now that the key includes entire payload attributes which now includes withdrawals
const PAYLOAD_ID_LRU_CACHE_SIZE: usize = 512;

/// Stores the remembered state of an engine.
#[derive(Copy, Clone, PartialEq, Debug, Eq, Default)]
enum EngineStateInternal {
    Synced,
    #[default]
    Offline,
    Syncing,
    AuthFailed,
}

/// A subset of the engine state to inform other services if the engine is online or offline.
#[derive(Debug, Clone, PartialEq, Eq, Copy)]
pub enum EngineState {
    Online,
    Offline,
}

impl From<EngineStateInternal> for EngineState {
    fn from(state: EngineStateInternal) -> Self {
        match state {
            EngineStateInternal::Synced | EngineStateInternal::Syncing => EngineState::Online,
            EngineStateInternal::Offline | EngineStateInternal::AuthFailed => EngineState::Offline,
        }
    }
}

/// Wrapper structure that ensures changes to the engine state are correctly reported to watchers.
struct State {
    /// The actual engine state.
    state: EngineStateInternal,
    /// Notifier to watch the engine state.
    notifier: watch::Sender<EngineState>,
}

impl std::ops::Deref for State {
    type Target = EngineStateInternal;

    fn deref(&self) -> &Self::Target {
        &self.state
    }
}

impl Default for State {
    fn default() -> Self {
        let state = EngineStateInternal::default();
        // The initial receiver is dropped immediately; watchers subscribe on demand via
        // `State::watch`, and `send_if_modified` does not require live receivers.
        let (notifier, _receiver) = watch::channel(state.into());
        State { state, notifier }
    }
}

impl State {
    /// Updates the state and notifies all watchers if the state has changed.
    pub fn update(&mut self, new_state: EngineStateInternal) {
        self.state = new_state;
        self.notifier.send_if_modified(|last_state| {
            let changed = *last_state != new_state.into(); // notify conditionally
            *last_state = new_state.into(); // update the state unconditionally
            changed
        });
    }

    /// Gives access to a channel containing whether the last state is online.
    ///
    /// This can be called several times.
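    ///
    /// A minimal consumption sketch (hypothetical caller, not taken from this crate); the
    /// stream yields the current state on subscription and thereafter only changes:
    ///
    /// ```ignore
    /// use tokio_stream::StreamExt;
    ///
    /// let mut stream = state.watch();
    /// while let Some(engine_state) = stream.next().await {
    ///     // React to EngineState::Online / EngineState::Offline transitions here.
    /// }
    /// ```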
    pub fn watch(&self) -> WatchStream<EngineState> {
        self.notifier.subscribe().into()
    }
}

#[derive(Copy, Clone, PartialEq, Debug)]
pub struct ForkchoiceState {
    pub head_block_hash: ExecutionBlockHash,
    pub safe_block_hash: ExecutionBlockHash,
    pub finalized_block_hash: ExecutionBlockHash,
}

/// The cache key for payload IDs: a payload is identified by the head block it builds on
/// together with the full payload attributes used to request it.
#[derive(Hash, PartialEq, Eq)]
struct PayloadIdCacheKey {
    pub head_block_hash: ExecutionBlockHash,
    pub payload_attributes: PayloadAttributes,
}

#[derive(Debug)]
pub enum EngineError {
    Offline,
    Api { error: EngineApiError },
    BuilderApi { error: EngineApiError },
    Auth,
}

/// An execution engine.
pub struct Engine {
    pub api: HttpJsonRpc,
    payload_id_cache: Mutex<LruCache<PayloadIdCacheKey, PayloadId>>,
    state: RwLock<State>,
    latest_forkchoice_state: RwLock<Option<ForkchoiceState>>,
    executor: TaskExecutor,
    log: Logger,
}

impl Engine {
    /// Creates a new, offline engine.
    pub fn new(api: HttpJsonRpc, executor: TaskExecutor, log: &Logger) -> Self {
        Self {
            api,
            payload_id_cache: Mutex::new(LruCache::new(PAYLOAD_ID_LRU_CACHE_SIZE)),
            state: Default::default(),
            latest_forkchoice_state: Default::default(),
            executor,
            log: log.clone(),
        }
    }

    /// Gives access to a channel containing the last engine state.
    ///
    /// This can be called several times.
    pub async fn watch_state(&self) -> WatchStream<EngineState> {
        self.state.read().await.watch()
    }

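    /// Returns the payload ID, if any, that the engine previously returned for this
    /// `(head_block_hash, payload_attributes)` pair via `notify_forkchoice_updated`.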
    pub async fn get_payload_id(
        &self,
        head_block_hash: &ExecutionBlockHash,
        payload_attributes: &PayloadAttributes,
    ) -> Option<PayloadId> {
        self.payload_id_cache
            .lock()
            .await
            .get(&PayloadIdCacheKey::new(head_block_hash, payload_attributes))
            .cloned()
    }

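    /// Sends a `forkchoiceUpdated` call to the engine, caching any returned payload ID against
    /// the head block hash and the payload attributes so that `get_payload_id` can recover it
    /// when a block is later produced. A rough sketch of the intended flow (hypothetical
    /// caller, not taken from this crate):
    ///
    /// ```ignore
    /// // Ask the engine to start building a payload on top of the new head...
    /// engine
    ///     .notify_forkchoice_updated(forkchoice_state, Some(attributes.clone()), log)
    ///     .await?;
    /// // ...and later recover the ID of that payload build job from the cache.
    /// let payload_id = engine
    ///     .get_payload_id(&forkchoice_state.head_block_hash, &attributes)
    ///     .await;
    /// ```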
    pub async fn notify_forkchoice_updated(
        &self,
        forkchoice_state: ForkchoiceState,
        payload_attributes: Option<PayloadAttributes>,
        log: &Logger,
    ) -> Result<ForkchoiceUpdatedResponse, EngineApiError> {
        let response = self
            .api
            .forkchoice_updated(forkchoice_state, payload_attributes.clone())
            .await?;

        if let Some(payload_id) = response.payload_id {
            if let Some(key) = payload_attributes
                .map(|pa| PayloadIdCacheKey::new(&forkchoice_state.head_block_hash, &pa))
            {
                self.payload_id_cache.lock().await.put(key, payload_id);
            } else {
                // The engine returned a payload ID even though no payload attributes were
                // supplied; there is nothing to key a cache entry on, so just log it.
                debug!(
                    log,
                    "Engine returned unexpected payload_id";
                    "payload_id" => ?payload_id
                );
            }
        }

        Ok(response)
    }

async fn get_latest_forkchoice_state(&self) -> Option<ForkchoiceState> {
*self.latest_forkchoice_state.read().await
}
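
    /// Caches the given forkchoice state so it can be re-sent to the engine after a
    /// reconnection (see `send_latest_forkchoice_state`).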
pub async fn set_latest_forkchoice_state(&self, state: ForkchoiceState) {
*self.latest_forkchoice_state.write().await = Some(state);
}
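
    /// Sends the most recently cached forkchoice state to the engine, if one exists.
    /// Called when the engine comes back online, to bring it up to date with our head.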
async fn send_latest_forkchoice_state(&self) {
let latest_forkchoice_state = self.get_latest_forkchoice_state().await;
if let Some(forkchoice_state) = latest_forkchoice_state {
if forkchoice_state.head_block_hash == ExecutionBlockHash::zero() {
debug!(
self.log,
"No need to call forkchoiceUpdated";
"msg" => "head does not have execution enabled",
);
return;
}
info!(
self.log,
"Issuing forkchoiceUpdated";
"forkchoice_state" => ?forkchoice_state,
);
// For simplicity, payload attributes are never included in this call. It may be
// reasonable to include them in the future.
if let Err(e) = self.api.forkchoice_updated(forkchoice_state, None).await {
debug!(
self.log,
"Failed to issue latest head to engine";
"error" => ?e,
);
}
} else {
debug!(
self.log,
"No head, not sending to engine";
);
}
}
/// Returns `true` if the engine has a "synced" status.
pub async fn is_synced(&self) -> bool {
**self.state.read().await == EngineStateInternal::Synced
}
    /// Runs the `EngineApi::upcheck` function and updates the cached engine state with the
    /// result. This might be used to recover the engine if it has gone offline.
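    ///
    /// ## Example
    ///
    /// A minimal sketch of a periodic health-check task; the 30-second interval is an
    /// illustrative assumption, not a value taken from this crate:
    ///
    /// ```ignore
    /// let engine = engine.clone(); // `Arc<Engine>`
    /// tokio::spawn(async move {
    ///     let mut interval = tokio::time::interval(std::time::Duration::from_secs(30));
    ///     loop {
    ///         interval.tick().await;
    ///         engine.upcheck().await;
    ///     }
    /// });
    /// ```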
pub async fn upcheck(&self) {
let state: EngineStateInternal = match self.api.upcheck().await {
Ok(()) => {
let mut state = self.state.write().await;
if **state != EngineStateInternal::Synced {
info!(
self.log,
"Execution engine online";
);
// Send the node our latest forkchoice_state.
self.send_latest_forkchoice_state().await;
} else {
debug!(
self.log,
"Execution engine online";
);
}
state.update(EngineStateInternal::Synced);
**state
}
Err(EngineApiError::IsSyncing) => {
let mut state = self.state.write().await;
state.update(EngineStateInternal::Syncing);
**state
}
Err(EngineApiError::Auth(err)) => {
error!(
self.log,
"Failed jwt authorization";
"error" => ?err,
);
let mut state = self.state.write().await;
state.update(EngineStateInternal::AuthFailed);
**state
}
Err(e) => {
error!(
self.log,
"Error during execution engine upcheck";
"error" => ?e,
);
let mut state = self.state.write().await;
state.update(EngineStateInternal::Offline);
**state
}
};
debug!(
self.log,
"Execution engine upcheck complete";
"state" => ?state,
);
}
/// Run `func` on the node regardless of the node's current state.
///
/// ## Note
///
    /// This function takes locks on `self.state`; holding a conflicting lock while calling
    /// it may cause a deadlock.
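    ///
    /// ## Example
    ///
    /// A minimal sketch; `forkchoice_state` is assumed to be in scope, and the closure
    /// simply forwards to the underlying API:
    ///
    /// ```ignore
    /// let response = engine
    ///     .request(|e| e.api.forkchoice_updated(forkchoice_state, None))
    ///     .await?;
    /// ```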
pub async fn request<'a, F, G, H>(self: &'a Arc<Self>, func: F) -> Result<H, EngineError>
where
F: Fn(&'a Engine) -> G,
G: Future<Output = Result<H, EngineApiError>>,
{
match func(self).await {
Ok(result) => {
                // Copy the state out of the read-lock and release it immediately, since the
                // `upcheck` spawned below will take a write-lock.
let state: EngineStateInternal = **self.state.read().await;
                // Keep the engine state up to date.
if state != EngineStateInternal::Synced {
// Spawn the upcheck in another task to avoid slowing down this request.
let inner_self = self.clone();
self.executor.spawn(
async move { inner_self.upcheck().await },
"upcheck_after_success",
Initial work to remove engines fallback from the `execution_layer` crate (#3257) ## Issue Addressed Part of #3160 ## Proposed Changes Use only the first url given in the execution engine, if more than one is provided log it. This change only moves having multiple engines to one. The amount of code cleanup that can and should be done forward is not small and would interfere with ongoing PRs. I'm keeping the changes intentionally very very minimal. ## Additional Info Future works: - In [ `EngineError` ](https://github.com/sigp/lighthouse/blob/9c429d0764ed91cf56efb8a47a35a556b54a86a4/beacon_node/execution_layer/src/engines.rs#L173-L177) the id is not needed since it now has no meaning. - the [ `first_success_without_retry` ](https://github.com/sigp/lighthouse/blob/9c429d0764ed91cf56efb8a47a35a556b54a86a4/beacon_node/execution_layer/src/engines.rs#L348-L351) function can return a single error. - the [`first_success`](https://github.com/sigp/lighthouse/blob/9c429d0764ed91cf56efb8a47a35a556b54a86a4/beacon_node/execution_layer/src/engines.rs#L324) function can return a single error. - After the redundancy is removed for the builders, we can probably make the [ `EngineErrors` ](https://github.com/sigp/lighthouse/blob/9c429d0764ed91cf56efb8a47a35a556b54a86a4/beacon_node/execution_layer/src/lib.rs#L69) carry a single error. - Merge the [`Engines`](https://github.com/sigp/lighthouse/blob/9c429d0764ed91cf56efb8a47a35a556b54a86a4/beacon_node/execution_layer/src/engines.rs#L161-L165) struct and [`Engine` ](https://github.com/sigp/lighthouse/blob/9c429d0764ed91cf56efb8a47a35a556b54a86a4/beacon_node/execution_layer/src/engines.rs#L62-L67) - Fix the associated configurations and cli params. Not sure if both are done in https://github.com/sigp/lighthouse/pull/3214 In general I think those changes can be done incrementally and in individual pull requests.
2022-06-22 14:27:16 +00:00
);
}
Ok(result)
}
Err(error) => {
error!(
self.log,
"Execution engine call failed";
"error" => ?error,
);
// The node just returned an error, run an upcheck so we can update the endpoint
// state.
//
// Spawn the upcheck in another task to avoid slowing down this request.
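                // (Cloning here hands the spawned task its own handle on the
                // engine, so the upcheck future is not tied to the lifetime of
                // this failed call.)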
let inner_self = self.clone();
self.executor.spawn(
async move { inner_self.upcheck().await },
"upcheck_after_error",
);
Err(EngineError::Api { error })
}
}
}
}
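// A minimal usage sketch for the fallible call path handled above, assuming
// the enclosing method is a generic `request(closure)` helper and that the
// engine exposes its JSON-RPC client as `api` (both names are assumptions for
// illustration, not confirmed by this excerpt):
//
//     let response: ForkchoiceUpdatedResponse = engine
//         .request(|engine| engine.api.forkchoice_updated_v1(forkchoice_state, payload_attributes))
//         .await?;
//
// On failure the error is logged, an upcheck is scheduled on the executor,
// and the caller receives an `EngineError::Api`.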
impl PayloadIdCacheKey {
fn new(head_block_hash: &ExecutionBlockHash, attributes: &PayloadAttributes) -> Self {
Self {
head_block_hash: *head_block_hash,
payload_attributes: attributes.clone(),
}
}
}
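// A sketch of the intended cache interaction, assuming a hypothetical
// `payload_id_cache: Mutex<LruCache<PayloadIdCacheKey, PayloadId>>` field
// (`LruCache` and `PayloadId` are imported at the top of this file):
//
//     let key = PayloadIdCacheKey::new(&head_block_hash, &attributes);
//     if let Some(payload_id) = self.payload_id_cache.lock().await.get(&key).cloned() {
//         // Reuse the payload the execution engine is already building for
//         // this head block and attribute set.
//     }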
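// The test below exercises the `State` notifier. It is built on
// `tokio::sync::watch`, which retains only the most recent value, so a
// subscriber that joins after an update still observes the latest engine
// state rather than a backlog of transitions.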
#[cfg(test)]
mod tests {
use super::*;
use tokio_stream::StreamExt;
#[tokio::test]
async fn test_state_notifier() {
let mut state = State::default();
let initial_state: EngineState = state.state.into();
assert_eq!(initial_state, EngineState::Offline);
state.update(EngineStateInternal::Synced);
        // A watcher that subscribes after the first update should still
        // observe the latest state.
let mut watcher = state.watch();
let new_state = watcher.next().await.expect("Last state is always present");
assert_eq!(new_state, EngineState::Online);
}
}