* Remove ping protocol
* Initial renaming of network services
* Correct rebasing relative to latest master
* Start updating types
* Adds HashMapDelay struct to utils
* Initial network restructure
* Network restructure. Adds new types for v0.2.0
* Removes build artefacts
* Shift validation to beacon chain
* Temporarily remove gossip validation. This is to be updated to match current optimisation efforts.
* Adds AggregateAndProof
* Begin rebuilding pubsub encoding/decoding
* Signature hacking
* Shift gossipsub decoding into eth2_libp2p
* Existing EF tests passing with fake_crypto
* Shifts block encoding/decoding into RPC
* Delete outdated API spec
* All release tests passing bar genesis state parsing
* Update and test YamlConfig
* Update to spec v0.10 compatible BLS
* Updates to BLS EF tests
* Add EF test for AggregateVerify, and delete unused hash2curve tests for uncompressed points
* Update EF tests to v0.10.1
* Use optional block root correctly in block proc
* Use genesis fork in deposit domain. All tests pass
* Fast aggregate verify test
* Update REST API docs
* Fix unused import
* Bump spec tags to v0.10.1
* Add `seconds_per_eth1_block` to chainspec
* Update to timestamp-based eth1 voting scheme
* Return None from `get_votes_to_consider` if block cache is empty
* Handle overflows in `is_candidate_block`
* Revert to failing tests
* Fix eth1 data sets test
* Choose default vote according to spec
* Fix collect_valid_votes tests
* Fix `get_votes_to_consider` to choose all eligible blocks
* Uncomment winning_vote tests
* Add comments; remove unused code
* Reduce seconds_per_eth1_block for simulation
* Address review comments
* Add test for default vote case
* Fix logs
* Remove unused functions
* Meter default eth1 votes
* Fix comments
* Progress on attestation service
* Address review comments; remove unused dependency
* Initial work on removing libp2p lock
* Add LRU caches to store (rollup)
* Update attestation validation for DB changes (WIP)
* Initial version of should_forward_block
* Scaffold
* Progress on attestation validation. Also, consolidate prod+testing slot clocks so that they share much of the same implementation and can both handle sub-slot time changes.
* Removes lock from libp2p service
* Completed network lock removal
* Finish(?) attestation processing
* Correct network termination future
* Add slot check to block check
* Correct fmt issues
* Remove Drop implementation for network service
* Add first attempt at attestation proc. re-write
* Add version 2 of attestation processing
* Minor fixes
* Add validator pubkey cache
* Make get_indexed_attestation take a committee
* Link signature processing into new attn verification
* First working version
* Ensure pubkey cache is updated
* Add more metrics, slight optimizations
* Clone committee cache during attestation processing
* Update shuffling cache during block processing
* Remove old commented-out code
* Fix shuffling cache insert bug
* Use indexed attestation in fork choice
* Restructure attn processing, add metrics
* Add more detailed metrics
* Tidy, fix failing tests
* Fix failing tests, tidy
* Address reviewers' suggestions
* Disable/delete two outdated tests
* Modification of validator for subscriptions
* Add slot signing to validator client
* Further progress on validation subscription
* Adds necessary validator subscription functionality
* Add new Pubkeys struct to signature_sets
* Refactor with functional approach
* Update beacon chain
* Clean up validator <-> beacon node http types
* Add aggregator status to ValidatorDuty
* Impl Clone for manual slot clock
* Fix minor errors
* Further progress on validator client subscription
* Initial subscription and aggregation handling
* Remove decompressed member from pubkey bytes
* Progress to modifying val client for attestation aggregation
* First draft of validator client upgrade for aggregate attestations
* Add hashmap for indices lookup
* Add state cache, remove store cache
* Only build the head committee cache
* Removes lock on a network channel
* Partially implement beacon node subscription http api
* Correct compilation issues
* Change `get_attesting_indices` to use Vec
* Fix failing test
* Partial implementation of timer
* Adds timer, removes exit_future, http api to op pool
* Partial multiple aggregate attestation handling
* Permits bulk messages across gossipsub network channel
* Correct compile issues
* Improve gossipsub messaging and correct rest api helpers
* Added global gossipsub subscriptions
* Update validator subscriptions data structs
* Tidy
* Re-structure validator subscriptions
* Initial handling of subscriptions
* Re-structure network service
* Add pubkey cache persistence file
* Add more comments
* Integrate persistence file into builder
* Add pubkey cache tests
* Add HashSetDelay and introduce into attestation service
* Handles validator subscriptions
* Add data_dir to beacon chain builder
* Remove Option in pubkey cache persistence file
* Ensure consistency between datadir/data_dir
* Fix failing network test
* Peer subnet discovery gets queued for future subscriptions
* Reorganise attestation service functions
* Initial wiring of attestation service
* First draft of attestation service timing logic
* Correct minor typos
* Tidy
* Fix todos
* Improve tests
* Add PeerInfo to connected peers mapping
* Fix compile error
* Fix compile error from merge
* Split up block processing metrics
* Tidy
* Refactor get_pubkey_from_state
* Remove commented-out code
* Rename state_cache -> checkpoint_cache
* Rename Checkpoint -> Snapshot
* Tidy, add comments
* Tidy up find_head function
* Change some checkpoint -> snapshot
* Add tests
* Expose max_len
* Remove dead code
* Tidy
* Fix bug
* Add sync-speed metric
* Add first attempt at VerifiableBlock
* Start integrating into beacon chain
* Integrate VerifiableBlock
* Rename VerifiableBlock -> PartialBlockVerification
* Add start of typed methods
* Add progress
* Add further progress
* Rename structs
* Add full block verification to block_processing.rs
* Further beacon chain integration
* Update checks for gossip
* Add todo
* Start adding segment verification
* Add passing chain segment test
* Initial integration with batch sync
* Minor changes
* Tidy, add more error checking
* Start adding chain_segment tests
* Finish invalid signature tests
* Include single and gossip verified blocks in tests
* Add gossip verification tests
* Start adding docs
* Finish adding comments to block_processing.rs
* Rename block_processing.rs -> block_verification
* Start removing old block processing code
* Fixes beacon_chain compilation
* Fix project-wide compile errors
* Remove old code
* Correct code to pass all tests
* Fix bug with beacon proposer index
* Fix shim for BlockProcessingError
* Only process one epoch at a time
* Fix loop in chain segment processing
* Correct tests from master merge
* Add caching for state.eth1_data_votes
* Add BeaconChain::validator_pubkey
* Revert "Add caching for state.eth1_data_votes". This reverts commit cd73dcd6434fb8d8e6bf30c5356355598ea7b78e.

Co-authored-by: Grant Wuerker <gwuerker@gmail.com>
Co-authored-by: Michael Sproul <michael@sigmaprime.io>
Co-authored-by: Michael Sproul <micsproul@gmail.com>
Co-authored-by: pawan <pawandhananjay@gmail.com>
Co-authored-by: Paul Hauner <paul@paulhauner.com>
use futures::Future;
use slog::{debug, error, info, warn, Logger};
use std::marker::PhantomData;
use std::net::SocketAddr;
use std::thread;
use tokio::runtime::TaskExecutor;
use types::EthSpec;
use ws::{Sender, WebSocket};

mod config;

pub use config::Config;

pub struct WebSocketSender<T: EthSpec> {
    sender: Option<Sender>,
    _phantom: PhantomData<T>,
}

impl<T: EthSpec> WebSocketSender<T> {
    /// Creates a dummy websocket server that never starts and where all future calls are no-ops.
    pub fn dummy() -> Self {
        Self {
            sender: None,
            _phantom: PhantomData,
        }
    }

    pub fn send_string(&self, string: String) -> Result<(), String> {
        if let Some(sender) = &self.sender {
            sender
                .send(string)
                .map_err(|e| format!("Unable to broadcast to websocket clients: {:?}", e))
        } else {
            Ok(())
        }
    }
}

pub fn start_server<T: EthSpec>(
    config: &Config,
    executor: &TaskExecutor,
    log: &Logger,
) -> Result<
    (
        WebSocketSender<T>,
        tokio::sync::oneshot::Sender<()>,
        SocketAddr,
    ),
    String,
> {
    let server_string = format!("{}:{}", config.listen_address, config.port);

    // Create a server that simply ignores any incoming messages.
    let server = WebSocket::new(|_| |_| Ok(()))
        .map_err(|e| format!("Failed to initialize websocket server: {:?}", e))?
        .bind(server_string.clone())
        .map_err(|e| {
            format!(
                "Failed to bind websocket server to {}: {:?}",
                server_string, e
            )
        })?;

    let actual_listen_addr = server.local_addr().map_err(|e| {
        format!(
            "Failed to read listening addr from websocket server: {:?}",
            e
        )
    })?;

    let broadcaster = server.broadcaster();

    // Produce a signal/channel that can gracefully shutdown the websocket server.
    let exit_channel = {
        let (exit_channel, exit) = tokio::sync::oneshot::channel();

        let log_inner = log.clone();
        let broadcaster_inner = server.broadcaster();
        let exit_future = exit
            .and_then(move |_| {
                if let Err(e) = broadcaster_inner.shutdown() {
                    warn!(
                        log_inner,
                        "Websocket server errored on shutdown";
                        "error" => format!("{:?}", e)
                    );
                } else {
                    info!(log_inner, "Websocket server shutdown");
                }
                Ok(())
            })
            .map_err(|_| ());

        // Place a future on the executor that will shutdown the websocket server when the
        // application exits.
        executor.spawn(exit_future);

        exit_channel
    };

    let log_inner = log.clone();
    let _handle = thread::spawn(move || match server.run() {
        Ok(_) => {
            debug!(
                log_inner,
                "Websocket server thread stopped";
            );
        }
        Err(e) => {
            error!(
                log_inner,
                "Websocket server failed to start";
                "error" => format!("{:?}", e)
            );
        }
    });

    info!(
        log,
        "WebSocket server started";
        "address" => format!("{}", actual_listen_addr.ip()),
        "port" => actual_listen_addr.port(),
    );

    Ok((
        WebSocketSender {
            sender: Some(broadcaster),
            _phantom: PhantomData,
        },
        exit_channel,
        actual_listen_addr,
    ))
}
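For context, a minimal usage sketch of the API above, written as if it lived inside this crate so that `Config`, `start_server`, and `WebSocketSender` are in scope. It is an illustration only: it assumes `Config` implements `Default` (only its `listen_address` and `port` fields are visible in this file), uses `MainnetEthSpec` from the `types` crate as the `EthSpec` implementation, and takes the `TaskExecutor` from a tokio 0.1 `Runtime`.

// Usage sketch only; not part of the crate. Assumptions: `Config: Default`,
// `MainnetEthSpec` as the `EthSpec` impl, and a tokio 0.1 `Runtime` for the executor.
use slog::{o, Logger};
use tokio::runtime::Runtime;
use types::MainnetEthSpec;

fn run_websocket_example() -> Result<(), String> {
    // Discard-all logger; a real node would wire this to its terminal/file drains.
    let log = Logger::root(slog::Discard, o!());

    // A tokio 0.1 runtime supplies the `TaskExecutor` expected by `start_server`.
    let runtime = Runtime::new().map_err(|e| format!("Failed to start runtime: {:?}", e))?;
    let executor = runtime.executor();

    let config = Config::default();

    // Start the server; keep the broadcast handle, shutdown signal and bound address.
    let (sender, exit_channel, addr) =
        start_server::<MainnetEthSpec>(&config, &executor, &log)?;
    println!("websocket server listening on {}", addr);

    // Broadcast a message to all connected clients. A `WebSocketSender::dummy()`
    // would turn this into a no-op instead.
    sender.send_string(r#"{"event": "example"}"#.to_string())?;

    // Trigger the graceful-shutdown future that `start_server` placed on the executor.
    let _ = exit_channel.send(());
    Ok(())
}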