* Port eth1 lib to use stable futures
* Port eth1_test_rig to stable futures
* Port eth1 tests to stable futures
* Port genesis service to stable futures
* Port genesis tests to stable futures
* Port beacon_chain to stable futures
* Port lcli to stable futures
* Fix eth1_test_rig (#1014)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOS
* Update hashmap hashset to stable futures
* Adds panic test to hashset delay
* Port remote_beacon_node to stable futures
* Fix lcli merge conflicts
* Non rpc stuff compiles
* protocol.rs compiles
* Port websockets, timer and notifier to stable futures (#1035)
* Fix lcli
* Port timer to stable futures
* Fix timer
* Port websocket_server to stable futures
* Port notifier to stable futures
* Add TODOS
* Port remote_beacon_node to stable futures
* Partial eth2-libp2p stable future upgrade
* Finished first round of fighting RPC types
* Further progress towards porting eth2-libp2p adds caching to discovery
* Update behaviour
* RPC handler to stable futures
* Update RPC to master libp2p
* Network service additions
* Fix the fallback transport construction (#1102)
* Correct warning
* Remove hashmap delay
* Compiling version of eth2-libp2p
* Update all crates versions
* Fix conversion function and add tests (#1113)
* Port validator_client to stable futures (#1114)
* Add PH & MS slot clock changes
* Account for genesis time
* Add progress on duties refactor
* Add simple is_aggregator bool to val subscription
* Start work on attestation_verification.rs
* Add progress on ObservedAttestations
* Progress with ObservedAttestations
* Fix tests
* Add observed attestations to the beacon chain
* Add attestation observation to processing code
* Add progress on attestation verification
* Add first draft of ObservedAttesters
* Add more tests
* Add observed attesters to beacon chain
* Add observers to attestation processing
* Add more attestation verification
* Create ObservedAggregators map
* Remove commented-out code
* Add observed aggregators into chain
* Add progress
* Finish adding features to attestation verification
* Ensure beacon chain compiles
* Link attn verification into chain
* Integrate new attn verification in chain
* Remove old attestation processing code
* Start trying to fix beacon_chain tests
* Split adding into pools into two functions
* Add aggregation to harness
* Get test harness working again
* Adjust the number of aggregators for test harness
* Fix edge-case in harness
* Integrate new attn processing in network
* Fix compile bug in validator_client
* Update validator API endpoints
* Fix aggreagation in test harness
* Fix enum thing
* Fix attestation observation bug:
* Patch failing API tests
* Start adding comments to attestation verification
* Remove unused attestation field
* Unify "is block known" logic
* Update comments
* Supress fork choice errors for network processing
* Add todos
* Tidy
* Add gossip attn tests
* Disallow test harness to produce old attns
* Comment out in-progress tests
* Partially address pruning tests
* Fix failing store test
* Add aggregate tests
* Add comments about which spec conditions we check
* Dont re-aggregate
* Split apart test harness attn production
* Fix compile error in network
* Make progress on commented-out test
* Fix skipping attestation test
* Add fork choice verification tests
* Tidy attn tests, remove dead code
* Remove some accidentally added code
* Fix clippy lint
* Rename test file
* Add block tests, add cheap block proposer check
* Rename block testing file
* Add observed_block_producers
* Tidy
* Switch around block signature verification
* Finish block testing
* Remove gossip from signature tests
* First pass of self review
* Fix deviation in spec
* Update test spec tags
* Start moving over to hashset
* Finish moving observed attesters to hashmap
* Move aggregation pool over to hashmap
* Make fc attn borrow again
* Fix rest_api compile error
* Fix missing comments
* Fix monster test
* Uncomment increasing slots test
* Address remaining comments
* Remove unsafe, use cfg test
* Remove cfg test flag
* Fix dodgy comment
* Revert "Update hashmap hashset to stable futures"

  This reverts commit d432378a3cc5cd67fc29c0b15b96b886c1323554.
* Revert "Adds panic test to hashset delay"

  This reverts commit 281502396fc5b90d9c421a309c2c056982c9525b.
* Ported attestation_service
* Ported duties_service
* Ported fork_service
* More ports
* Port block_service
* Minor fixes
* VC compiles
* Update TODOS
* Borrow self where possible
* Ignore aggregates that are already known.
* Unify aggregator modulo logic
* Fix typo in logs
* Refactor validator subscription logic
* Avoid reproducing selection proof
* Skip HTTP call if no subscriptions
* Rename DutyAndState -> DutyAndProof
* Tidy logs
* Print root as dbg
* Fix compile errors in tests
* Fix compile error in test
* Re-Fix attestation and duties service
* Minor fixes

  Co-authored-by: Paul Hauner <paul@paulhauner.com>
* Network crate update to stable futures
* Port account_manager to stable futures (#1121)
* Port account_manager to stable futures
* Run async fns in tokio environment
* Port rest_api crate to stable futures (#1118)
* Port rest_api lib to stable futures
* Reduce tokio features
* Update notifier to stable futures
* Builder update
* Further updates
* Convert self referential async functions
* stable futures fixes (#1124)
* Fix eth1 update functions
* Fix genesis and client
* Fix beacon node lib
* Return appropriate runtimes from environment
* Fix test rig
* Refactor eth1 service update
* Upgrade simulator to stable futures
* Lighthouse compiles on stable futures
* Remove println debugging statement
* Update libp2p service, start rpc test upgrade
* Update network crate for new libp2p
* Update tokio::codec to futures_codec (#1128)
* Further work towards RPC corrections
* Correct http timeout and network service select
* Use tokio runtime for libp2p
* Revert "Update tokio::codec to futures_codec (#1128)"

  This reverts commit e57aea924acf5cbabdcea18895ac07e38a425ed7.
* Upgrade RPC libp2p tests
* Upgrade secio fallback test
* Upgrade gossipsub examples
* Clean up RPC protocol
* Test fixes (#1133)
* Correct websocket timeout and run on os thread
* Fix network test
* Clean up PR
* Correct tokio tcp move attestation service tests
* Upgrade attestation service tests
* Correct network test
* Correct genesis test
* Test corrections
* Log info when block is received
* Modify logs and update attester service events
* Stable futures: fixes to vc, eth1 and account manager (#1142)
* Add local testnet scripts
* Remove whiteblock script
* Rename local testnet script
* Move spawns onto handle
* Fix VC panic
* Initial fix to block production issue
* Tidy block producer fix
* Tidy further
* Add local testnet clean script
* Run cargo fmt
* Tidy duties service
* Tidy fork service
* Tidy ForkService
* Tidy AttestationService
* Tidy notifier
* Ensure await is not suppressed in eth1
* Ensure await is not suppressed in account_manager
* Use .ok() instead of .unwrap_or(())
* RPC decoding test for proto
* Update discv5 and eth2-libp2p deps
* Fix lcli double runtime issue (#1144)
* Handle stream termination and dialing peer errors
* Correct peer_info variant types
* Remove unnecessary warnings
* Handle subnet unsubscription removal and improve logigng
* Add logs around ping
* Upgrade discv5 and improve logging
* Handle peer connection status for multiple connections
* Improve network service logging
* Improve logging around peer manager
* Upgrade swarm poll centralise peer management
* Identify clients on error
* Fix `remove_peer` in sync (#1150)
* remove_peer removes from all chains
* Remove logs
* Fix early return from loop
* Improved logging, fix panic
* Partially correct tests
* Stable futures: Vc sync (#1149)
* Improve syncing heuristic
* Add comments
* Use safer method for tolerance
* Fix tests
* Stable futures: Fix VC bug, update agg pool, add more metrics (#1151)
* Expose epoch processing summary
* Expose participation metrics to prometheus
* Switch to f64
* Reduce precision
* Change precision
* Expose observed attesters metrics
* Add metrics for agg/unagg attn counts
* Add metrics for gossip rx
* Add metrics for gossip tx
* Adds ignored attns to prom
* Add attestation timing
* Add timer for aggregation pool sig agg
* Add write lock timer for agg pool
* Add more metrics to agg pool
* Change map lock code
* Add extra metric to agg pool
* Change lock handling in agg pool
* Change .write() to .read()
* Add another agg pool timer
* Fix for is_aggregator
* Fix pruning bug

Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>

//! Provides a very minimal set of functions for interfacing with the eth2 deposit contract via an
//! eth1 HTTP JSON-RPC endpoint.
//!
//! All remote functions return a future (i.e., are async).
//!
//! Does not use a web3 library; instead it uses `reqwest` (`hyper`) to call the remote endpoint
//! and `serde` to decode the response.
//!
//! ## Note
//!
//! There is no ABI parsing here; all function signatures and topics are hard-coded as constants.

use futures::future::TryFutureExt;
use reqwest::{header::CONTENT_TYPE, ClientBuilder, StatusCode};
use serde_json::{json, Value};
use std::ops::Range;
use std::time::Duration;
use types::Hash256;

/// `keccak("DepositEvent(bytes,bytes,bytes,bytes,bytes)")`
pub const DEPOSIT_EVENT_TOPIC: &str =
    "0x649bbc62d0e31342afea4e5cd82d4049e7e1ee912fc0889aa790803be39038c5";
/// `keccak("get_deposit_root()")[0..4]`
pub const DEPOSIT_ROOT_FN_SIGNATURE: &str = "0xc5f2892f";
/// `keccak("get_deposit_count()")[0..4]`
pub const DEPOSIT_COUNT_FN_SIGNATURE: &str = "0x621fd130";

/// Number of bytes in the deposit contract deposit count response.
pub const DEPOSIT_COUNT_RESPONSE_BYTES: usize = 96;
/// Number of bytes in the deposit contract deposit root (value only).
pub const DEPOSIT_ROOT_BYTES: usize = 32;

#[derive(Debug, PartialEq, Clone)]
pub struct Block {
    pub hash: Hash256,
    pub timestamp: u64,
    pub number: u64,
}

/// Returns the current block number.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub async fn get_block_number(endpoint: &str, timeout: Duration) -> Result<u64, String> {
    let response_body = send_rpc_request(endpoint, "eth_blockNumber", json!([]), timeout).await?;
    hex_to_u64_be(
        response_result(&response_body)?
            .ok_or_else(|| "No result field was returned for block number".to_string())?
            .as_str()
            .ok_or_else(|| "Data was not string")?,
    )
    .map_err(|e| format!("Failed to get block number: {}", e))
}

/// Gets a block hash by block number.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub async fn get_block(
    endpoint: &str,
    block_number: u64,
    timeout: Duration,
) -> Result<Block, String> {
    let params = json!([
        format!("0x{:x}", block_number),
        false // do not return full tx objects.
    ]);

    let response_body = send_rpc_request(endpoint, "eth_getBlockByNumber", params, timeout).await?;
    let hash = hex_to_bytes(
        response_result(&response_body)?
            .ok_or_else(|| "No result field was returned for block".to_string())?
            .get("hash")
            .ok_or_else(|| "No hash for block")?
            .as_str()
            .ok_or_else(|| "Block hash was not string")?,
    )?;
    let hash = if hash.len() == 32 {
        Ok(Hash256::from_slice(&hash))
    } else {
        Err(format!("Block hash was not 32 bytes: {:?}", hash))
    }?;

    let timestamp = hex_to_u64_be(
        response_result(&response_body)?
            .ok_or_else(|| "No result field was returned for timestamp".to_string())?
            .get("timestamp")
            .ok_or_else(|| "No timestamp for block")?
            .as_str()
            .ok_or_else(|| "Block timestamp was not string")?,
    )?;

    let number = hex_to_u64_be(
        response_result(&response_body)?
            .ok_or_else(|| "No result field was returned for number".to_string())?
            .get("number")
            .ok_or_else(|| "No number for block")?
            .as_str()
            .ok_or_else(|| "Block number was not string")?,
    )?;

    if number <= usize::max_value() as u64 {
        Ok(Block {
            hash,
            timestamp,
            number,
        })
    } else {
        Err(format!("Block number {} is larger than a usize", number))
    }
    .map_err(|e| format!("Failed to get block number: {}", e))
}

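// Illustrative usage sketch (not from the original module): fetch the head block by first asking
// for the current block number and then requesting that block. The endpoint URL and timeout are
// assumptions for the example only.
#[allow(dead_code)]
async fn example_get_head_block() -> Result<Block, String> {
    let endpoint = "http://localhost:8545";
    let timeout = Duration::from_secs(1);
    let head_number = get_block_number(endpoint, timeout).await?;
    get_block(endpoint, head_number, timeout).await
}
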
/// Returns the value of the `get_deposit_count()` call at the given `address` for the given
/// `block_number`.
///
/// Assumes that the `address` has the same ABI as the eth2 deposit contract.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub async fn get_deposit_count(
    endpoint: &str,
    address: &str,
    block_number: u64,
    timeout: Duration,
) -> Result<Option<u64>, String> {
    let result = call(
        endpoint,
        address,
        DEPOSIT_COUNT_FN_SIGNATURE,
        block_number,
        timeout,
    )
    .await?;
    match result {
        None => Err("Deposit count response was none".to_string()),
        Some(bytes) => {
            if bytes.is_empty() {
                Ok(None)
            } else if bytes.len() == DEPOSIT_COUNT_RESPONSE_BYTES {
                // The response is ABI-encoded dynamic `bytes`: a 32-byte offset, a 32-byte
                // length, then the 8-byte little-endian count padded to 32 bytes.
                let mut array = [0; 8];
                array.copy_from_slice(&bytes[32 + 32..32 + 32 + 8]);
                Ok(Some(u64::from_le_bytes(array)))
            } else {
                Err(format!(
                    "Deposit count response was not {} bytes: {:?}",
                    DEPOSIT_COUNT_RESPONSE_BYTES, bytes
                ))
            }
        }
    }
}

/// Returns the value of the `get_deposit_root()` call at the given `block_number`.
///
/// Assumes that the `address` has the same ABI as the eth2 deposit contract.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub async fn get_deposit_root(
    endpoint: &str,
    address: &str,
    block_number: u64,
    timeout: Duration,
) -> Result<Option<Hash256>, String> {
    let result = call(
        endpoint,
        address,
        DEPOSIT_ROOT_FN_SIGNATURE,
        block_number,
        timeout,
    )
    .await?;
    match result {
        None => Err("Deposit root response was none".to_string()),
        Some(bytes) => {
            if bytes.is_empty() {
                Ok(None)
            } else if bytes.len() == DEPOSIT_ROOT_BYTES {
                Ok(Some(Hash256::from_slice(&bytes)))
            } else {
                Err(format!(
                    "Deposit root response was not {} bytes: {:?}",
                    DEPOSIT_ROOT_BYTES, bytes
                ))
            }
        }
    }
}

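// Illustrative usage sketch (not from the original module): read the deposit count and the
// deposit root at the same block so the two values describe a consistent contract state. The
// endpoint, contract address and timeout are assumptions for the example only.
#[allow(dead_code)]
async fn example_read_deposit_state(
    endpoint: &str,
    address: &str,
    block_number: u64,
) -> Result<(Option<u64>, Option<Hash256>), String> {
    let timeout = Duration::from_secs(1);
    let count = get_deposit_count(endpoint, address, block_number, timeout).await?;
    let root = get_deposit_root(endpoint, address, block_number, timeout).await?;
    Ok((count, root))
}
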
/// Performs an instant, no-transaction call to the contract `address` with the given `0x`-prefixed
/// `hex_data`.
///
/// Returns bytes, if any.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
async fn call(
    endpoint: &str,
    address: &str,
    hex_data: &str,
    block_number: u64,
    timeout: Duration,
) -> Result<Option<Vec<u8>>, String> {
    let params = json!([
        {
            "to": address,
            "data": hex_data,
        },
        format!("0x{:x}", block_number)
    ]);

    let response_body = send_rpc_request(endpoint, "eth_call", params, timeout).await?;
    match response_result(&response_body)? {
        None => Ok(None),
        Some(result) => {
            let hex = result
                .as_str()
                .map(|s| s.to_string())
                .ok_or_else(|| "'result' value was not a string".to_string())?;

            Ok(Some(hex_to_bytes(&hex)?))
        }
    }
}

/// A reduced set of fields from an Eth1 contract log.
#[derive(Debug, PartialEq, Clone)]
pub struct Log {
    pub(crate) block_number: u64,
    pub(crate) data: Vec<u8>,
}

/// Returns logs for the `DEPOSIT_EVENT_TOPIC`, for the given `address` in the given
/// `block_height_range`.
///
/// It's not clear from the Ethereum JSON-RPC docs if this range is inclusive or not.
///
/// Uses HTTP JSON RPC at `endpoint`. E.g., `http://localhost:8545`.
pub async fn get_deposit_logs_in_range(
    endpoint: &str,
    address: &str,
    block_height_range: Range<u64>,
    timeout: Duration,
) -> Result<Vec<Log>, String> {
    let params = json!([{
        "address": address,
        "topics": [DEPOSIT_EVENT_TOPIC],
        "fromBlock": format!("0x{:x}", block_height_range.start),
        "toBlock": format!("0x{:x}", block_height_range.end),
    }]);

    let response_body = send_rpc_request(endpoint, "eth_getLogs", params, timeout).await?;
    response_result(&response_body)?
        .ok_or_else(|| "No result field was returned for deposit logs".to_string())?
        .as_array()
        .cloned()
        .ok_or_else(|| "'result' value was not an array".to_string())?
        .into_iter()
        .map(|value| {
            let block_number = value
                .get("blockNumber")
                .ok_or_else(|| "No block number field in log")?
                .as_str()
                .ok_or_else(|| "Block number was not string")?;

            let data = value
                .get("data")
                .ok_or_else(|| "No data field in log")?
                .as_str()
                .ok_or_else(|| "Data was not string")?;

            Ok(Log {
                block_number: hex_to_u64_be(&block_number)?,
                data: hex_to_bytes(data)?,
            })
        })
        .collect::<Result<Vec<Log>, String>>()
        .map_err(|e| format!("Failed to get logs in range: {}", e))
}

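// Illustrative usage sketch (not from the original module): fetch the deposit logs for a small,
// assumed block range and report how many were found. The endpoint, address and range are
// placeholders for the example only.
#[allow(dead_code)]
async fn example_count_deposit_logs(endpoint: &str, address: &str) -> Result<usize, String> {
    let logs = get_deposit_logs_in_range(endpoint, address, 0..1024, Duration::from_secs(5)).await?;
    Ok(logs.len())
}
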
/// Sends an RPC request to `endpoint`, using a POST with the given `body`.
///
/// Tries to receive the response and parse the body as a `String`.
pub async fn send_rpc_request(
    endpoint: &str,
    method: &str,
    params: Value,
    timeout: Duration,
) -> Result<String, String> {
    let body = json!({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": 1
    })
    .to_string();

    // Note: it is not ideal to create a new client for each request.
    //
    // A better solution would be to create some struct that contains a built client and pass it
    // around (similar to the `web3` crate's `Transport` structs).
    let response = ClientBuilder::new()
        .timeout(timeout)
        .build()
        .expect("The builder should always build a client")
        .post(endpoint)
        .header(CONTENT_TYPE, "application/json")
        .body(body)
        .send()
        .map_err(|e| format!("Request failed: {:?}", e))
        .await?;
    if response.status() != StatusCode::OK {
        return Err(format!(
            "Response HTTP status was not 200 OK: {}.",
            response.status()
        ));
    };
    let encoding = response
        .headers()
        .get(CONTENT_TYPE)
        .ok_or_else(|| "No content-type header in response".to_string())?
        .to_str()
        .map(|s| s.to_string())
        .map_err(|e| format!("Failed to parse content-type header: {}", e))?;

    response
        .bytes()
        .map_err(|e| format!("Failed to receive body: {:?}", e))
        .await
        .and_then(move |bytes| match encoding.as_str() {
            "application/json" => Ok(bytes),
            "application/json; charset=utf-8" => Ok(bytes),
            other => Err(format!("Unsupported encoding: {}", other)),
        })
        .map(|bytes| String::from_utf8_lossy(&bytes).into_owned())
        .map_err(|e| format!("Failed to receive body: {:?}", e))
}

/// Accepts an entire HTTP body (as a string) and returns the `result` field, as a serde `Value`.
fn response_result(response: &str) -> Result<Option<Value>, String> {
    let json = serde_json::from_str::<Value>(&response)
        .map_err(|e| format!("Failed to parse response: {:?}", e))?;

    if let Some(error) = json.get("error") {
        Err(format!("Eth1 node returned error: {}", error))
    } else {
        Ok(json.get("result").cloned())
    }
}

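// Minimal sketch of how `response_result` behaves on typical JSON-RPC bodies; the sample
// payloads below are assumptions for illustration, not fixtures from the original crate.
#[cfg(test)]
mod response_result_examples {
    use super::*;

    #[test]
    fn extracts_result_field() {
        let body = r#"{"jsonrpc":"2.0","id":1,"result":"0x01"}"#;
        assert_eq!(response_result(body).unwrap(), Some(json!("0x01")));
    }

    #[test]
    fn surfaces_node_errors() {
        let body = r#"{"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"bad params"}}"#;
        assert!(response_result(body).is_err());
    }
}
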
/// Parses a `0x`-prefixed, **big-endian** hex string as a u64.
///
/// Note: the JSON-RPC encodes integers as big-endian. The deposit contract uses little-endian.
/// Therefore, this function is only useful for numbers encoded by the JSON RPC.
///
/// E.g., `0x01 == 1`
fn hex_to_u64_be(hex: &str) -> Result<u64, String> {
    u64::from_str_radix(strip_prefix(hex)?, 16)
        .map_err(|e| format!("Failed to parse hex as u64: {:?}", e))
}

/// Parses a `0x`-prefixed, big-endian hex string as bytes.
///
/// E.g., `0x0102 == vec![1, 2]`
fn hex_to_bytes(hex: &str) -> Result<Vec<u8>, String> {
    hex::decode(strip_prefix(hex)?).map_err(|e| format!("Failed to parse hex as bytes: {:?}", e))
}

/// Removes the `0x` prefix from some bytes. Returns an error if the prefix is not present.
fn strip_prefix(hex: &str) -> Result<&str, String> {
    if hex.starts_with("0x") {
        Ok(&hex[2..])
    } else {
        Err("Hex string did not start with `0x`".to_string())
    }
}

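// Hedged examples for the hex helpers, mirroring the doc comments above (`0x01 == 1`,
// `0x0102 == vec![1, 2]`); this test module is illustrative and not part of the original file.
#[cfg(test)]
mod hex_helper_examples {
    use super::*;

    #[test]
    fn parses_big_endian_u64() {
        assert_eq!(hex_to_u64_be("0x01").unwrap(), 1);
        assert_eq!(hex_to_u64_be("0xff").unwrap(), 255);
    }

    #[test]
    fn parses_hex_bytes() {
        assert_eq!(hex_to_bytes("0x0102").unwrap(), vec![1, 2]);
    }

    #[test]
    fn rejects_missing_hex_prefix() {
        assert!(strip_prefix("0102").is_err());
    }
}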