On heavily crowded networks, we are seeing many attempted connections to our node every second. Often these connections come from peers that have just been disconnected. This can happen for a number of reasons, including:

- We have deemed them to be less useful than other peers
- They have performed poorly
- They have dropped the connection with us
- The connection was spontaneously lost
- They were randomly removed because we have too many peers

In all of these cases, if we have reached or exceeded our target peer limit, there is no desire to accept new connections from these peers immediately after the disconnect. In fact, handling the established connections often costs us resources and defeats some of the logic of dropping them in the first place.

This PR adds a timeout that prevents recently disconnected peers from reconnecting to us. Technically, we implement a ban at the swarm layer to prevent immediate re-connections for at least 10 minutes. I decided to keep this light and use a time-based LRU cache which only gets updated during the peer manager heartbeat, to avoid the added stress of polling a delay map for what could be a large number of peers. This cache is bounded in time; an extra space bound could be added should people consider this a risk.

Co-authored-by: Diva M <divma@protonmail.com>
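The described mechanism boils down to a small, time-bounded cache of recently disconnected peers that is checked on every incoming connection and pruned on the heartbeat rather than with per-entry timers. Below is a minimal sketch of that idea; it is not the Lighthouse implementation, and the names `RecentlyDisconnectedCache`, `insert`, `contains`, and `prune` are illustrative, with `PeerId` standing in as a placeholder for libp2p's peer identifier.

```rust
use std::collections::{HashSet, VecDeque};
use std::time::{Duration, Instant};

/// Placeholder peer identifier; a real implementation would use libp2p's `PeerId`.
type PeerId = String;

/// A minimal time-bounded cache of recently disconnected peers.
/// Expired entries are only removed when `prune` is called (e.g. from the
/// peer manager heartbeat), so no per-entry timers or delay maps are needed.
struct RecentlyDisconnectedCache {
    ttl: Duration,
    /// Insertion-ordered queue of (peer, time inserted).
    entries: VecDeque<(PeerId, Instant)>,
    /// Fast membership checks for incoming connection attempts.
    members: HashSet<PeerId>,
}

impl RecentlyDisconnectedCache {
    fn new(ttl: Duration) -> Self {
        Self {
            ttl,
            entries: VecDeque::new(),
            members: HashSet::new(),
        }
    }

    /// Record a peer we have just disconnected from.
    fn insert(&mut self, peer: PeerId) {
        if self.members.insert(peer.clone()) {
            self.entries.push_back((peer, Instant::now()));
        }
    }

    /// Returns true if this peer disconnected from us within the TTL,
    /// i.e. an incoming connection from it should be rejected.
    fn contains(&self, peer: &PeerId) -> bool {
        self.members.contains(peer)
    }

    /// Drop expired entries; intended to run on the heartbeat interval.
    fn prune(&mut self) {
        let now = Instant::now();
        while let Some(&(_, inserted)) = self.entries.front() {
            if now.duration_since(inserted) >= self.ttl {
                let (peer, _) = self.entries.pop_front().expect("front exists");
                self.members.remove(&peer);
            } else {
                break;
            }
        }
    }
}

fn main() {
    // Ban re-connections for 10 minutes, as described above.
    let mut cache = RecentlyDisconnectedCache::new(Duration::from_secs(10 * 60));
    cache.insert("peer_a".to_string());

    // On an incoming connection attempt:
    if cache.contains(&"peer_a".to_string()) {
        println!("rejecting connection: peer recently disconnected");
    }

    // Later, from the peer manager heartbeat:
    cache.prune();
}
```

Because pruning only happens on the heartbeat, a ban can outlast the nominal TTL by up to one heartbeat interval, which is consistent with the "at least 10 minutes" wording above.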
account_utils
clap_utils
compare_fields
compare_fields_derive
deposit_contract
directory
eth2
eth2_config
eth2_interop_keypairs
eth2_network_config
eth2_wallet_manager
filesystem
lighthouse_metrics
lighthouse_version
lockfile
logging
lru_cache
malloc_utils
monitoring_api
oneshot_broadcast
sensitive_url
slot_clock
system_health
target_check
task_executor
test_random_derive
unused_port
validator_dir
warp_utils
README.md
eth2
Common crates containing eth2-specific logic.