## Issue Addressed
NA
## Proposed Changes
As we've seen on Prater, there seems to be a correlation between these messages
```
WARN Not enough time for a discovery search subnet_id: ExactSubnet { subnet_id: SubnetId(19), slot: Slot(3742336) }, service: attestation_service
```
... and nodes falling 20-30 slots behind the head for short periods. These nodes are running ~20k Prater validators.
After collecting some metrics, I can see that the `network_recv` channel is processing ~250k `AttestationSubscribe` messages per minute. It occurred to me that perhaps the `AttestationSubscribe` messages are "washing out" the `SendRequest` and `SendResponse` messages. In this PR I separate the `AttestationSubscribe` and `SyncCommitteeSubscribe` messages into their own queue, so the `tokio::select!` in the `NetworkService` can keep processing the other messages in the `network_recv` channel without necessarily having to clear all the subscription messages first.
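For illustration, here is a minimal sketch of the queue-separation pattern. The names (`NetworkMessage`, `SubscriptionMessage`, `run_service`, the channel names) are hypothetical stand-ins, not Lighthouse's actual types, and it assumes unbounded tokio `mpsc` channels for both queues:

```rust
use tokio::sync::mpsc;

// Hypothetical stand-ins for the real message variants.
enum NetworkMessage {
    SendRequest,
    SendResponse,
}

enum SubscriptionMessage {
    AttestationSubscribe,
    SyncCommitteeSubscribe,
}

async fn run_service(
    mut network_recv: mpsc::UnboundedReceiver<NetworkMessage>,
    mut subscription_recv: mpsc::UnboundedReceiver<SubscriptionMessage>,
) {
    loop {
        tokio::select! {
            // Requests/responses are polled independently of the (much busier)
            // subscription queue, so a burst of subscriptions cannot starve them.
            Some(msg) = network_recv.recv() => {
                // handle SendRequest / SendResponse, etc.
                let _ = msg;
            }
            Some(sub) = subscription_recv.recv() => {
                // handle AttestationSubscribe / SyncCommitteeSubscribe
                let _ = sub;
            }
            // Exit once both channels are closed.
            else => break,
        }
    }
}
```

Because `tokio::select!` chooses among the branches that are ready rather than draining one channel in FIFO order, request/response handling no longer has to wait behind every queued subscription message.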
~~I've also added a filter to the HTTP API to prevent duplicate subscriptions from going to the network service.~~
## Additional Info
- Currently being tested on Prater