lighthouse/beacon_node/client/src/notifier.rs
Paul Hauner 177df12149
Testnet stability (#451)
* Change reduced tree for adding weightless node

* Add more comments for reduced tree fork choice

* Small refactor on reduced tree for readability

* Move test_harness forking logic into itself

* Add new `AncestorIter` trait to store

* Add unfinished tests to fork choice

* Make `beacon_state.genesis_block_root` public

* Add failing lmd_ghost fork choice tests

* Extend fork_choice tests, create failing test

* Implement Debug for generic ReducedTree

* Add lazy_static to fork choice tests

* Add verify_integrity fn to reduced tree

* Fix bugs in reduced tree

* Ensure all reduced tree tests verify integrity

* Slightly alter reduced tree test params

* Add (failing) reduced tree test

* Fix bug in fork choice

Iter ancestors was not working well with skip slots (see the sketch after this commit message)

* Put maximum depth for common ancestor search

Ensures that we don't search back past the finalized root (see the sketch after this commit message).

* Add basic finalization tests for reduced tree

* Change fork choice to use beacon_block_root

Previously it was using target_root, which was wrong

* Add network dir CLI flag

* Simplify "NewSlot" log message

* Rename network-dir CLI flag

* Update db dir size for metrics

* Change slog to use `FullFormat` logging

* Update some comments and log formatting

* Add prom gauge for best block root

* Only add known target blocks to fork choice

* Add finalized and justified root prom metrics

* Add CLI flag for setting log level

* Add logger to beacon chain

* Add debug-level CLI flag to validator

* Allow block processing if fork choice fails

* Create warn log when there's low libp2p peer count

* Minor change to logging

* Make ancestor iter return option

* Disable fork choice test when !debug_assertions

* Fix type, remove code fragment

* Tidy some borrow-checker evading

* Lower reduced tree random test iterations
2019-07-29 13:45:45 +10:00
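
Two of the fixes above concern how fork choice walks back through ancestors: ancestor iteration had to cope with skip slots, and the common-ancestor search gained a depth cap so it can never walk past the finalized root. The following is a minimal, self-contained sketch of both ideas. It is illustrative only; the function names, the `[u8; 32]` stand-in for `Hash256`, and the slot-indexed layout are assumptions, not the repository's actual `ReducedTree` code.

use std::collections::HashSet;

// Stand-in for Lighthouse's 32-byte `Hash256` root type (assumption for this sketch).
type Hash256 = [u8; 32];

/// Illustrative only: the root "at" a slot, given (slot, root) pairs for slots
/// that actually contain a block, sorted ascending by slot. During skip slots
/// no new block exists, so the most recent root at or before the requested
/// slot must be repeated rather than treated as missing.
fn root_at_slot(block_roots: &[(u64, Hash256)], slot: u64) -> Option<Hash256> {
    block_roots
        .iter()
        .rev()
        .find(|(s, _)| *s <= slot)
        .map(|(_, root)| *root)
}

/// Illustrative only: a depth-bounded common-ancestor search. Each iterator
/// yields roots from a head back towards genesis; capping both walks at
/// `max_depth` ensures the search never proceeds past the finalized root.
fn common_ancestor<I>(a: I, b: I, max_depth: usize) -> Option<Hash256>
where
    I: IntoIterator<Item = Hash256>,
{
    // Record the ancestors reachable from `a` within the depth limit.
    let a_ancestors: HashSet<Hash256> = a.into_iter().take(max_depth).collect();

    // The first of `b`'s ancestors that also lies on `a`'s path is a
    // common ancestor; if none is found within the limit, give up.
    b.into_iter()
        .take(max_depth)
        .find(|root| a_ancestors.contains(root))
}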


use crate::Client;
use beacon_chain::BeaconChainTypes;
use exit_future::Exit;
use futures::{Future, Stream};
use slog::{debug, o, warn};
use std::time::{Duration, Instant};
use tokio::runtime::TaskExecutor;
use tokio::timer::Interval;

/// The interval between heartbeat events.
pub const HEARTBEAT_INTERVAL_SECONDS: u64 = 15;

/// Create a warning log whenever the peer count is at or below this value.
pub const WARN_PEER_COUNT: usize = 1;

/// Spawns a thread that can be used to run code periodically, on `HEARTBEAT_INTERVAL_SECONDS`
/// durations.
///
/// Presently unused, but remains for future use.
pub fn run<T: BeaconChainTypes + Send + Sync + 'static>(
    client: &Client<T>,
    executor: TaskExecutor,
    exit: Exit,
) {
    // notification heartbeat
    let interval = Interval::new(
        Instant::now(),
        Duration::from_secs(HEARTBEAT_INTERVAL_SECONDS),
    );

    let log = client.log.new(o!("Service" => "Notifier"));

    let libp2p = client.network.libp2p_service();

    let heartbeat = move |_| {
        // Number of libp2p (not discv5) peers connected.
        //
        // Panics if libp2p is poisoned.
        let connected_peer_count = libp2p.lock().swarm.connected_peers();

        debug!(log, "libp2p"; "peer_count" => connected_peer_count);

        if connected_peer_count <= WARN_PEER_COUNT {
            warn!(log, "Low libp2p peer count"; "peer_count" => connected_peer_count);
        }

        Ok(())
    };

    // map error and spawn
    let err_log = client.log.clone();
    let heartbeat_interval = interval
        .map_err(move |e| debug!(err_log, "Timer error {}", e))
        .for_each(heartbeat);

    executor.spawn(exit.until(heartbeat_interval).map(|_| ()));
}