2020-09-29 03:46:54 +00:00
|
|
|
//! This module contains endpoints that are non-standard and only available on Lighthouse servers.
|
|
|
|
|
Add API to compute discrete validator attestation performance (#2874)
## Issue Addressed
N/A
## Proposed Changes
Add an HTTP API which can be used to compute the attestation performances of a validator (or all validators) over a discrete range of epochs.
Performances can be computed for a single validator, or for the global validator set.
## Usage
### Request
The API can be used as follows:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/{validator_index}?start_epoch=57730&end_epoch=57732"
```
Alternatively, to compute performances for the global validator set:
```
curl "http://localhost:5052/lighthouse/analysis/attestation_performance/global?start_epoch=57730&end_epoch=57732"
```
### Response
The response is JSON formatted as follows:
```
[
  {
    "index": 72,
    "epochs": {
      "57730": {
        "active": true,
        "head": false,
        "target": false,
        "source": false
      },
      "57731": {
        "active": true,
        "head": true,
        "target": true,
        "source": true,
        "delay": 1
      },
      "57732": {
        "active": true,
        "head": true,
        "target": true,
        "source": true,
        "delay": 1
      }
    }
  }
]
```
> Note that the `"epochs"` are not guaranteed to be in ascending order.
## Additional Info
- This API is intended to be used in our upcoming validator analysis tooling (#2873) and will likely not be very useful for regular users; however, some advanced users or block explorers may find it useful.
- The request range is limited to 100 epochs (since the range is inclusive of both `start_epoch` and `end_epoch`, up to 101 epochs are actually computed) to prevent Lighthouse from using exceptionally large amounts of memory.
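As a sketch of the inclusive-range limit described above (all names are hypothetical; this is not the actual Lighthouse implementation), the epoch count and bound could be checked like so:

```rust
// Hypothetical sketch of the inclusive epoch-range limit; names are
// illustrative and do not match the actual Lighthouse implementation.
const MAX_REQUEST_EPOCHS: u64 = 100;

/// An inclusive range [start, end] contains `end - start + 1` epochs;
/// returns `None` when `start > end`.
fn epochs_in_range(start_epoch: u64, end_epoch: u64) -> Option<u64> {
    end_epoch.checked_sub(start_epoch).map(|d| d + 1)
}

fn validate_range(start_epoch: u64, end_epoch: u64) -> Result<(), String> {
    match epochs_in_range(start_epoch, end_epoch) {
        // The limit is quoted as 100 epochs, but because the range is
        // inclusive, up to 101 epochs are actually computed.
        Some(n) if n <= MAX_REQUEST_EPOCHS + 1 => Ok(()),
        Some(n) => Err(format!("a range of {} epochs exceeds the limit", n)),
        None => Err("start_epoch must not exceed end_epoch".into()),
    }
}

fn main() {
    // The example request above covers epochs 57730..=57732, i.e. 3 epochs.
    assert_eq!(epochs_in_range(57730, 57732), Some(3));
    assert!(validate_range(57730, 57732).is_ok());
    assert!(validate_range(0, 200).is_err());
}
```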
2022-01-27 22:58:31 +00:00
|
|
|
mod attestation_performance;
|
2022-02-21 23:21:02 +00:00
|
|
|
mod block_packing_efficiency;
|
2022-01-27 01:06:02 +00:00
|
|
|
mod block_rewards;
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
use crate::{
|
2020-10-22 06:05:49 +00:00
|
|
|
ok_or_error,
|
2021-07-09 06:15:32 +00:00
|
|
|
types::{BeaconState, ChainSpec, Epoch, EthSpec, GenericResponse, ValidatorId},
|
2020-11-02 00:37:30 +00:00
|
|
|
BeaconNodeHttpClient, DepositData, Error, Eth1Data, Hash256, StateId, StatusCode,
|
2020-09-29 03:46:54 +00:00
|
|
|
};
|
|
|
|
use proto_array::core::ProtoArray;
|
2020-10-22 06:05:49 +00:00
|
|
|
use reqwest::IntoUrl;
|
2020-09-29 03:46:54 +00:00
|
|
|
use serde::{Deserialize, Serialize};
|
Implement SSZ union type (#2579)
## Issue Addressed
NA
## Proposed Changes
Implements the "union" type from the SSZ spec for `ssz`, `ssz_derive`, `tree_hash` and `tree_hash_derive` so it may be derived for `enums`:
https://github.com/ethereum/consensus-specs/blob/v1.1.0-beta.3/ssz/simple-serialize.md#union
The union type is required for the merge, since the `Transaction` type is defined as a single-variant union `Union[OpaqueTransaction]`.
### Crate Updates
This PR will (hopefully) cause CI to publish new versions for the following crates:
- `eth2_ssz_derive`: `0.2.1` -> `0.3.0`
- `eth2_ssz`: `0.3.0` -> `0.4.0`
- `eth2_ssz_types`: `0.2.0` -> `0.2.1`
- `tree_hash`: `0.3.0` -> `0.4.0`
- `tree_hash_derive`: `0.3.0` -> `0.4.0`
Since these crates depend on each other, I've had to add a workspace-level `[patch]` for them. A follow-up PR will need to remove this patch once the new versions are published.
### Union Behaviors
We already had SSZ `Encode` and `TreeHash` derives for enums, however they just did a "transparent" pass-through of the inner value. Since the "union" decoding from the spec conflicts with the transparent method, I've required that every `enum` has exactly one of the following enum-level attributes:
#### SSZ
- `#[ssz(enum_behaviour = "union")]`
- matches the spec used for the merge
- `#[ssz(enum_behaviour = "transparent")]`
- maintains existing functionality
- not supported for `Decode` (never was)
#### TreeHash
- `#[tree_hash(enum_behaviour = "union")]`
- matches the spec used for the merge
- `#[tree_hash(enum_behaviour = "transparent")]`
- maintains existing functionality
This means that we can maintain the existing transparent behaviour, but all existing users will get a compile-time error until they explicitly opt in to being transparent.
### Legacy Option Encoding
Before this PR, we already had a union-esque encoding for `Option<T>`. However, this was with the *old* SSZ spec where the union selector was 4 bytes. During merge specification, the spec was changed to use 1 byte for the selector.
Whilst the 4-byte `Option` encoding was never used in the spec, we used it in our database. Writing a migration script for all occurrences of `Option` in the database would be painful, especially since it's used in the `CommitteeCache`. To avoid the migration script, I added a serde-esque `#[ssz(with = "module")]` field-level attribute to `ssz_derive` so that we can opt into the 4-byte encoding on a field-by-field basis.
The `ssz::legacy::four_byte_impl!` macro allows a one-liner to define the module required for the `#[ssz(with = "module")]` for some `Option<T> where T: Encode + Decode`.
Notably, **I have removed `Encode` and `Decode` impls for `Option`**. I've done this to force a break on downstream users. Like I mentioned, `Option` isn't used in the spec so I don't think it'll be *that* annoying. I think it's nicer than quietly having two different union implementations or quietly breaking the existing `Option` impl.
### Crate Publish Ordering
I've modified the order in which CI publishes crates to ensure that we don't publish a crate without ensuring we already published a crate that it depends upon.
## TODO
- [ ] Queue a follow-up `[patch]`-removing PR.
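To make the selector-width difference concrete, here is a hand-rolled sketch (hypothetical helpers, not the real `ssz` crate API) encoding an `Option<u64>` with the legacy 4-byte selector versus the 1-byte spec union selector:

```rust
/// Legacy encoding: a 4-byte little-endian selector, then the value bytes.
/// Hypothetical helper for illustration only.
fn encode_option_u64_legacy(v: Option<u64>) -> Vec<u8> {
    match v {
        None => 0u32.to_le_bytes().to_vec(),
        Some(x) => {
            let mut out = 1u32.to_le_bytes().to_vec();
            out.extend_from_slice(&x.to_le_bytes());
            out
        }
    }
}

/// Spec union encoding: a single selector byte, then the value bytes.
/// Hypothetical helper for illustration only.
fn encode_option_u64_union(v: Option<u64>) -> Vec<u8> {
    match v {
        None => vec![0u8],
        Some(x) => {
            let mut out = vec![1u8];
            out.extend_from_slice(&x.to_le_bytes());
            out
        }
    }
}

fn main() {
    // 4 selector bytes + 8 value bytes vs 1 selector byte + 8 value bytes.
    assert_eq!(encode_option_u64_legacy(Some(5)).len(), 12);
    assert_eq!(encode_option_u64_union(Some(5)).len(), 9);
}
```

Since the database already stores values in the legacy layout, the field-level `#[ssz(with = "module")]` attribute lets those fields keep the 4-byte form while new types use the spec union encoding.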
2021-09-25 05:58:36 +00:00
|
|
|
use ssz::four_byte_option_impl;
|
2020-11-02 00:37:30 +00:00
|
|
|
use ssz_derive::{Decode, Encode};
|
2022-04-01 07:16:25 +00:00
|
|
|
use store::{AnchorInfo, Split, StoreConfig};
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2022-01-27 22:58:31 +00:00
|
|
|
pub use attestation_performance::{
|
|
|
|
AttestationPerformance, AttestationPerformanceQuery, AttestationPerformanceStatistics,
|
|
|
|
};
|
2022-02-21 23:21:02 +00:00
|
|
|
pub use block_packing_efficiency::{
|
|
|
|
BlockPackingEfficiency, BlockPackingEfficiencyQuery, ProposerInfo, UniqueAttestation,
|
|
|
|
};
|
2022-01-27 01:06:02 +00:00
|
|
|
pub use block_rewards::{AttestationRewards, BlockReward, BlockRewardMeta, BlockRewardsQuery};
|
Rename eth2_libp2p to lighthouse_network (#2702)
## Description
The `eth2_libp2p` crate was originally named and designed to incorporate a simple libp2p integration into lighthouse. Since its origins, the crate's purpose has expanded dramatically: it now houses a lot of lighthouse-specific sophistication and is no longer just a libp2p integration.
As of this writing it currently houses the following high-level lighthouse-specific logic:
- Lighthouse's implementation of the eth2 RPC protocol and specific encodings/decodings
- Integration and handling of ENRs with respect to libp2p and eth2
- Lighthouse's discovery logic, its integration with discv5 and logic about searching and handling peers.
- Lighthouse's peer manager - This is a large module handling various aspects of Lighthouse's network, such as peer scoring, handling pings and metadata, connection maintenance and recording, etc.
- Lighthouse's peer database - This is a collection of information stored for each individual peer which is specific to lighthouse. We store connection state, sync state, last seen ips and scores etc. The data stored for each peer is designed for various elements of the lighthouse code base such as syncing and the http api.
- Gossipsub scoring - This stores a collection of gossipsub 1.1 scoring mechanisms that are continuously analysed and updated based on the ethereum 2 networks and how Lighthouse performs on them.
- Lighthouse specific types for managing gossipsub topics, sync status and ENR fields
- Lighthouse's network HTTP API metrics - A collection of metrics for lighthouse network monitoring
- Lighthouse's custom configuration of all networking protocols, RPC, gossipsub, discovery, identify and libp2p.
Therefore it makes sense to rename the crate to reflect its current purpose: managing the majority of Lighthouse's network stack. This PR renames the crate to `lighthouse_network`.
Co-authored-by: Paul Hauner <paul@paulhauner.com>
2021-10-19 00:30:39 +00:00
|
|
|
pub use lighthouse_network::{types::SyncState, PeerInfo};
|
2020-09-29 03:46:54 +00:00
|
|
|
|
2021-09-25 05:58:36 +00:00
|
|
|
// Define "legacy" implementations of `Option<T>` which use four bytes for encoding the union
|
|
|
|
// selector.
|
|
|
|
four_byte_option_impl!(four_byte_option_u64, u64);
|
|
|
|
four_byte_option_impl!(four_byte_option_hash256, Hash256);
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
/// Information returned by `peers` and `connected_peers`.
|
|
|
|
// TODO: this should be deserializable.
|
|
|
|
#[derive(Debug, Clone, Serialize)]
|
|
|
|
#[serde(bound = "T: EthSpec")]
|
|
|
|
pub struct Peer<T: EthSpec> {
|
|
|
|
/// The peer's ID.
|
|
|
|
pub peer_id: String,
|
|
|
|
/// The PeerInfo associated with the peer.
|
|
|
|
pub peer_info: PeerInfo<T>,
|
|
|
|
}
|
|
|
|
|
|
|
|
/// The results of validators voting during an epoch.
|
|
|
|
///
|
|
|
|
/// Provides information about the current and previous epochs.
|
|
|
|
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
|
|
|
|
pub struct GlobalValidatorInclusionData {
|
|
|
|
/// The total effective balance of all active validators during the _current_ epoch.
|
|
|
|
pub current_epoch_active_gwei: u64,
|
|
|
|
/// The total effective balance of all active validators during the _previous_ epoch.
|
|
|
|
pub previous_epoch_active_gwei: u64,
|
|
|
|
/// The total effective balance of all validators who attested during the _current_ epoch and
|
|
|
|
/// agreed with the state about the beacon block at the first slot of the _current_ epoch.
|
|
|
|
pub current_epoch_target_attesting_gwei: u64,
|
|
|
|
/// The total effective balance of all validators who attested during the _previous_ epoch and
|
|
|
|
/// agreed with the state about the beacon block at the first slot of the _previous_ epoch.
|
|
|
|
pub previous_epoch_target_attesting_gwei: u64,
|
|
|
|
/// The total effective balance of all validators who attested during the _previous_ epoch and
|
|
|
|
/// agreed with the state about the beacon block at the time of attestation.
|
|
|
|
pub previous_epoch_head_attesting_gwei: u64,
|
|
|
|
}
|
|
|
|
|
|
|
|
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
|
|
|
|
pub struct ValidatorInclusionData {
|
|
|
|
/// True if the validator has been slashed, ever.
|
|
|
|
pub is_slashed: bool,
|
|
|
|
/// True if the validator can withdraw in the current epoch.
|
|
|
|
pub is_withdrawable_in_current_epoch: bool,
|
2021-07-27 07:01:01 +00:00
|
|
|
/// True if the validator was active and not slashed in the state's _current_ epoch.
|
|
|
|
pub is_active_unslashed_in_current_epoch: bool,
|
|
|
|
/// True if the validator was active and not slashed in the state's _previous_ epoch.
|
|
|
|
pub is_active_unslashed_in_previous_epoch: bool,
|
2020-09-29 03:46:54 +00:00
|
|
|
/// The validator's effective balance in the _current_ epoch.
|
|
|
|
pub current_epoch_effective_balance_gwei: u64,
|
|
|
|
/// True if the validator's beacon block root attestation for the first slot of the _current_
|
|
|
|
/// epoch matches the block root known to the state.
|
|
|
|
pub is_current_epoch_target_attester: bool,
|
|
|
|
/// True if the validator's beacon block root attestation for the first slot of the _previous_
|
|
|
|
/// epoch matches the block root known to the state.
|
|
|
|
pub is_previous_epoch_target_attester: bool,
|
|
|
|
/// True if the validator's beacon block root attestation in the _previous_ epoch at the
|
|
|
|
/// attestation's slot (`attestation_data.slot`) matches the block root known to the state.
|
|
|
|
pub is_previous_epoch_head_attester: bool,
|
|
|
|
}
|
|
|
|
|
|
|
|
#[cfg(target_os = "linux")]
|
2021-05-26 05:58:41 +00:00
|
|
|
use {
|
|
|
|
procinfo::pid, psutil::cpu::os::linux::CpuTimesExt,
|
|
|
|
psutil::memory::os::linux::VirtualMemoryExt, psutil::process::Process,
|
|
|
|
};
|
2020-09-29 03:46:54 +00:00
|
|
|
|
|
|
|
/// Reports on the health of the Lighthouse instance.
|
|
|
|
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
|
|
|
|
pub struct Health {
|
2021-05-26 05:58:41 +00:00
|
|
|
#[serde(flatten)]
|
|
|
|
pub system: SystemHealth,
|
|
|
|
#[serde(flatten)]
|
|
|
|
pub process: ProcessHealth,
|
|
|
|
}
|
|
|
|
|
|
|
|
/// System related health.
|
|
|
|
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
|
|
|
|
pub struct SystemHealth {
|
2020-09-29 03:46:54 +00:00
|
|
|
/// Total virtual memory on the system.
|
|
|
|
pub sys_virt_mem_total: u64,
|
|
|
|
/// Total virtual memory available for new processes.
|
|
|
|
pub sys_virt_mem_available: u64,
|
2021-05-26 05:58:41 +00:00
|
|
|
/// Total virtual memory used on the system.
|
2020-09-29 03:46:54 +00:00
|
|
|
pub sys_virt_mem_used: u64,
|
2021-05-26 05:58:41 +00:00
|
|
|
/// Total virtual memory not used on the system.
|
2020-09-29 03:46:54 +00:00
|
|
|
pub sys_virt_mem_free: u64,
|
2021-05-26 05:58:41 +00:00
|
|
|
/// Percentage of virtual memory used on the system.
|
2020-09-29 03:46:54 +00:00
|
|
|
pub sys_virt_mem_percent: f32,
|
2021-05-26 05:58:41 +00:00
|
|
|
/// Total cached virtual memory on the system.
|
|
|
|
pub sys_virt_mem_cached: u64,
|
|
|
|
/// Total buffered virtual memory on the system.
|
|
|
|
pub sys_virt_mem_buffers: u64,
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
/// System load average over 1 minute.
|
|
|
|
pub sys_loadavg_1: f64,
|
|
|
|
/// System load average over 5 minutes.
|
|
|
|
pub sys_loadavg_5: f64,
|
|
|
|
/// System load average over 15 minutes.
|
|
|
|
pub sys_loadavg_15: f64,
|
2021-05-26 05:58:41 +00:00
|
|
|
|
|
|
|
/// Total cpu cores.
|
|
|
|
pub cpu_cores: u64,
|
|
|
|
/// Total cpu threads.
|
|
|
|
pub cpu_threads: u64,
|
|
|
|
|
|
|
|
/// Total time spent in kernel mode.
|
|
|
|
pub system_seconds_total: u64,
|
|
|
|
/// Total time spent in user mode.
|
|
|
|
pub user_seconds_total: u64,
|
|
|
|
/// Total time spent waiting for IO.
|
|
|
|
pub iowait_seconds_total: u64,
|
|
|
|
/// Total idle cpu time.
|
|
|
|
pub idle_seconds_total: u64,
|
|
|
|
/// Total cpu time.
|
|
|
|
pub cpu_time_total: u64,
|
|
|
|
|
|
|
|
/// Total capacity of disk.
|
|
|
|
pub disk_node_bytes_total: u64,
|
|
|
|
/// Free space in disk.
|
|
|
|
pub disk_node_bytes_free: u64,
|
|
|
|
/// Number of disk reads.
|
|
|
|
pub disk_node_reads_total: u64,
|
|
|
|
/// Number of disk writes.
|
|
|
|
pub disk_node_writes_total: u64,
|
|
|
|
|
|
|
|
/// Total bytes received over all network interfaces.
|
|
|
|
pub network_node_bytes_total_received: u64,
|
|
|
|
/// Total bytes sent over all network interfaces.
|
|
|
|
pub network_node_bytes_total_transmit: u64,
|
|
|
|
|
|
|
|
/// Boot time, in seconds since the unix epoch.
|
|
|
|
pub misc_node_boot_ts_seconds: u64,
|
|
|
|
/// The operating system name.
|
|
|
|
pub misc_os: String,
|
2020-09-29 03:46:54 +00:00
|
|
|
}
|
|
|
|
|
2021-05-26 05:58:41 +00:00
|
|
|
impl SystemHealth {
|
2020-09-29 03:46:54 +00:00
|
|
|
#[cfg(not(target_os = "linux"))]
|
|
|
|
pub fn observe() -> Result<Self, String> {
|
|
|
|
Err("Health is only available on Linux".into())
|
|
|
|
}
|
|
|
|
|
|
|
|
#[cfg(target_os = "linux")]
|
|
|
|
pub fn observe() -> Result<Self, String> {
|
|
|
|
let vm = psutil::memory::virtual_memory()
|
|
|
|
.map_err(|e| format!("Unable to get virtual memory: {:?}", e))?;
|
|
|
|
let loadavg =
|
|
|
|
psutil::host::loadavg().map_err(|e| format!("Unable to get loadavg: {:?}", e))?;
|
|
|
|
|
2021-05-26 05:58:41 +00:00
|
|
|
let cpu =
|
|
|
|
psutil::cpu::cpu_times().map_err(|e| format!("Unable to get cpu times: {:?}", e))?;
|
|
|
|
|
|
|
|
let disk_usage = psutil::disk::disk_usage("/")
|
|
|
|
.map_err(|e| format!("Unable to get disk usage info: {:?}", e))?;
|
|
|
|
|
|
|
|
let disk = psutil::disk::DiskIoCountersCollector::default()
|
|
|
|
.disk_io_counters()
|
|
|
|
.map_err(|e| format!("Unable to get disk counters: {:?}", e))?;
|
|
|
|
|
|
|
|
let net = psutil::network::NetIoCountersCollector::default()
|
|
|
|
.net_io_counters()
|
|
|
|
.map_err(|e| format!("Unable to get network io counters: {:?}", e))?;
|
|
|
|
|
|
|
|
let boot_time = psutil::host::boot_time()
|
|
|
|
.map_err(|e| format!("Unable to get system boot time: {:?}", e))?
|
|
|
|
.duration_since(std::time::UNIX_EPOCH)
|
|
|
|
.map_err(|e| format!("Boot time is lower than unix epoch: {}", e))?
|
|
|
|
.as_secs();
|
|
|
|
|
2020-09-29 03:46:54 +00:00
|
|
|
Ok(Self {
|
|
|
|
sys_virt_mem_total: vm.total(),
|
|
|
|
sys_virt_mem_available: vm.available(),
|
|
|
|
sys_virt_mem_used: vm.used(),
|
|
|
|
sys_virt_mem_free: vm.free(),
|
2021-05-26 05:58:41 +00:00
|
|
|
sys_virt_mem_cached: vm.cached(),
|
|
|
|
sys_virt_mem_buffers: vm.buffers(),
|
2020-09-29 03:46:54 +00:00
|
|
|
sys_virt_mem_percent: vm.percent(),
|
|
|
|
sys_loadavg_1: loadavg.one,
|
|
|
|
sys_loadavg_5: loadavg.five,
|
|
|
|
sys_loadavg_15: loadavg.fifteen,
|
2021-05-26 05:58:41 +00:00
|
|
|
cpu_cores: psutil::cpu::cpu_count_physical(),
|
|
|
|
cpu_threads: psutil::cpu::cpu_count(),
|
|
|
|
system_seconds_total: cpu.system().as_secs(),
|
|
|
|
cpu_time_total: cpu.total().as_secs(),
|
|
|
|
user_seconds_total: cpu.user().as_secs(),
|
|
|
|
iowait_seconds_total: cpu.iowait().as_secs(),
|
|
|
|
idle_seconds_total: cpu.idle().as_secs(),
|
|
|
|
disk_node_bytes_total: disk_usage.total(),
|
|
|
|
disk_node_bytes_free: disk_usage.free(),
|
|
|
|
disk_node_reads_total: disk.read_count(),
|
|
|
|
disk_node_writes_total: disk.write_count(),
|
|
|
|
network_node_bytes_total_received: net.bytes_recv(),
|
|
|
|
network_node_bytes_total_transmit: net.bytes_sent(),
|
|
|
|
misc_node_boot_ts_seconds: boot_time,
|
|
|
|
misc_os: std::env::consts::OS.to_string(),
|
|
|
|
})
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/// Process-specific health.
|
|
|
|
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
|
|
|
|
pub struct ProcessHealth {
|
|
|
|
/// The pid of this process.
|
|
|
|
pub pid: u32,
|
|
|
|
/// The number of threads used by this pid.
|
|
|
|
pub pid_num_threads: i32,
|
|
|
|
/// The total resident memory used by this pid.
|
|
|
|
pub pid_mem_resident_set_size: u64,
|
|
|
|
/// The total virtual memory used by this pid.
|
|
|
|
pub pid_mem_virtual_memory_size: u64,
|
|
|
|
/// Number of cpu seconds consumed by this pid.
|
|
|
|
pub pid_process_seconds_total: u64,
|
|
|
|
}
|
|
|
|
|
|
|
|
impl ProcessHealth {
|
|
|
|
#[cfg(not(target_os = "linux"))]
|
|
|
|
pub fn observe() -> Result<Self, String> {
|
|
|
|
Err("Health is only available on Linux".into())
|
|
|
|
}
|
|
|
|
|
|
|
|
#[cfg(target_os = "linux")]
|
|
|
|
pub fn observe() -> Result<Self, String> {
|
|
|
|
let process =
|
|
|
|
Process::current().map_err(|e| format!("Unable to get current process: {:?}", e))?;
|
|
|
|
|
|
|
|
let process_mem = process
|
|
|
|
.memory_info()
|
|
|
|
.map_err(|e| format!("Unable to get process memory info: {:?}", e))?;
|
|
|
|
|
|
|
|
let stat = pid::stat_self().map_err(|e| format!("Unable to get stat: {:?}", e))?;
|
|
|
|
let process_times = process
|
|
|
|
.cpu_times()
|
|
|
|
.map_err(|e| format!("Unable to get process cpu times: {:?}", e))?;
|
|
|
|
|
|
|
|
Ok(Self {
|
|
|
|
pid: process.pid(),
|
|
|
|
pid_num_threads: stat.num_threads,
|
|
|
|
pid_mem_resident_set_size: process_mem.rss(),
|
|
|
|
pid_mem_virtual_memory_size: process_mem.vms(),
|
|
|
|
pid_process_seconds_total: process_times.busy().as_secs()
|
|
|
|
+ process_times.children_user().as_secs()
|
|
|
|
+ process_times.children_system().as_secs(),
|
|
|
|
})
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
impl Health {
|
|
|
|
#[cfg(not(target_os = "linux"))]
|
|
|
|
pub fn observe() -> Result<Self, String> {
|
|
|
|
Err("Health is only available on Linux".into())
|
|
|
|
}
|
|
|
|
|
|
|
|
#[cfg(target_os = "linux")]
|
|
|
|
pub fn observe() -> Result<Self, String> {
|
|
|
|
Ok(Self {
|
|
|
|
process: ProcessHealth::observe()?,
|
|
|
|
system: SystemHealth::observe()?,
|
2020-09-29 03:46:54 +00:00
|
|
|
})
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-11-02 00:37:30 +00:00
|
|
|
/// Indicates how up-to-date the Eth1 caches are.
|
|
|
|
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
|
|
|
|
pub struct Eth1SyncStatusData {
|
|
|
|
pub head_block_number: Option<u64>,
|
|
|
|
pub head_block_timestamp: Option<u64>,
|
|
|
|
pub latest_cached_block_number: Option<u64>,
|
|
|
|
pub latest_cached_block_timestamp: Option<u64>,
|
2020-11-21 00:26:15 +00:00
|
|
|
pub voting_target_timestamp: u64,
|
2020-11-02 00:37:30 +00:00
|
|
|
pub eth1_node_sync_status_percentage: f64,
|
|
|
|
pub lighthouse_is_cached_and_ready: bool,
|
|
|
|
}
|
|
|
|
|
|
|
|
/// A fully parsed eth1 deposit contract log.
|
|
|
|
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize, Encode, Decode)]
|
|
|
|
pub struct DepositLog {
|
|
|
|
pub deposit_data: DepositData,
|
|
|
|
/// The block number of the log that included this `DepositData`.
|
|
|
|
pub block_number: u64,
|
|
|
|
/// The index included with the deposit log.
|
|
|
|
pub index: u64,
|
|
|
|
/// True if the signature is valid.
|
|
|
|
pub signature_is_valid: bool,
|
|
|
|
}
|
|
|
|
|
|
|
|
/// A block of the eth1 chain.
|
|
|
|
#[derive(Debug, PartialEq, Clone, Serialize, Deserialize, Encode, Decode)]
|
|
|
|
pub struct Eth1Block {
|
|
|
|
pub hash: Hash256,
|
|
|
|
pub timestamp: u64,
|
|
|
|
pub number: u64,
|
2021-09-25 05:58:36 +00:00
|
|
|
#[ssz(with = "four_byte_option_hash256")]
|
2020-11-02 00:37:30 +00:00
|
|
|
pub deposit_root: Option<Hash256>,
|
Implement SSZ union type (#2579)
## Issue Addressed
NA
## Proposed Changes
Implements the "union" type from the SSZ spec for `ssz`, `ssz_derive`, `tree_hash` and `tree_hash_derive` so it may be derived for `enums`:
https://github.com/ethereum/consensus-specs/blob/v1.1.0-beta.3/ssz/simple-serialize.md#union
The union type is required for the merge, since the `Transaction` type is defined as a single-variant union `Union[OpaqueTransaction]`.
### Crate Updates
This PR will (hopefully) cause CI to publish new versions for the following crates:
- `eth2_ssz_derive`: `0.2.1` -> `0.3.0`
- `eth2_ssz`: `0.3.0` -> `0.4.0`
- `eth2_ssz_types`: `0.2.0` -> `0.2.1`
- `tree_hash`: `0.3.0` -> `0.4.0`
- `tree_hash_derive`: `0.3.0` -> `0.4.0`
These these crates depend on each other, I've had to add a workspace-level `[patch]` for these crates. A follow-up PR will need to remove this patch, ones the new versions are published.
### Union Behaviors
We already had SSZ `Encode` and `TreeHash` derive for enums, however it just did a "transparent" pass-through of the inner value. Since the "union" decoding from the spec is in conflict with the transparent method, I've required that all `enum` have exactly one of the following enum-level attributes:
#### SSZ
- `#[ssz(enum_behaviour = "union")]`
- matches the spec used for the merge
- `#[ssz(enum_behaviour = "transparent")]`
- maintains existing functionality
- not supported for `Decode` (never was)
#### TreeHash
- `#[tree_hash(enum_behaviour = "union")]`
- matches the spec used for the merge
- `#[tree_hash(enum_behaviour = "transparent")]`
- maintains existing functionality
This means that we can maintain the existing transparent behaviour, but all existing users will get a compile-time error until they explicitly opt-in to being transparent.
### Legacy Option Encoding
Before this PR, we already had a union-esque encoding for `Option<T>`. However, this was with the *old* SSZ spec where the union selector was 4 bytes. During merge specification, the spec was changed to use 1 byte for the selector.
Whilst the 4-byte `Option` encoding was never used in the spec, we used it in our database. Writing a migrate script for all occurrences of `Option` in the database would be painful, especially since it's used in the `CommitteeCache`. To avoid the migrate script, I added a serde-esque `#[ssz(with = "module")]` field-level attribute to `ssz_derive` so that we can opt into the 4-byte encoding on a field-by-field basis.
The `ssz::legacy::four_byte_impl!` macro allows a one-liner to define the module required for the `#[ssz(with = "module")]` for some `Option<T> where T: Encode + Decode`.
Notably, **I have removed `Encode` and `Decode` impls for `Option`**. I've done this to force a break on downstream users. Like I mentioned, `Option` isn't used in the spec so I don't think it'll be *that* annoying. I think it's nicer than quietly having two different union implementations or quietly breaking the existing `Option` impl.
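The difference between the two encodings can be sketched as follows (assuming the legacy impl wrote a 4-byte little-endian selector, `0` for `None` and `1` for `Some`, where the spec union uses a single selector byte; both functions are illustrative, not the crate's actual code):

```rust
// Legacy Lighthouse encoding (assumed): 4-byte LE selector, then the value.
fn encode_option_legacy(value: Option<u64>) -> Vec<u8> {
    match value {
        None => 0u32.to_le_bytes().to_vec(),
        Some(x) => {
            let mut bytes = 1u32.to_le_bytes().to_vec();
            bytes.extend_from_slice(&x.to_le_bytes());
            bytes
        }
    }
}

// Spec-style union encoding: 1 selector byte, then the value.
fn encode_option_union(value: Option<u64>) -> Vec<u8> {
    match value {
        None => vec![0u8],
        Some(x) => {
            let mut bytes = vec![1u8];
            bytes.extend_from_slice(&x.to_le_bytes());
            bytes
        }
    }
}

fn main() {
    // Same logical value, different lengths: 4 + 8 bytes vs 1 + 8 bytes.
    assert_eq!(encode_option_legacy(Some(1)).len(), 12);
    assert_eq!(encode_option_union(Some(1)).len(), 9);
    assert_eq!(encode_option_legacy(None).len(), 4);
    assert_eq!(encode_option_union(None), vec![0]);
}
```

Because the two formats are incompatible byte-for-byte, keeping both implicitly behind the same `impl` would be a silent footgun; forcing the explicit `#[ssz(with = "module")]` opt-in makes the choice visible at every field.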
### Crate Publish Ordering
I've modified the order in which CI publishes crates to ensure that we don't publish a crate without ensuring we already published a crate that it depends upon.
## TODO
- [ ] Queue a follow-up `[patch]`-removing PR.
    #[ssz(with = "four_byte_option_u64")]
    pub deposit_count: Option<u64>,
}

impl Eth1Block {
    pub fn eth1_data(self) -> Option<Eth1Data> {
        Some(Eth1Data {
            deposit_root: self.deposit_root?,
            deposit_count: self.deposit_count?,
            block_hash: self.hash,
        })
    }
}

#[derive(Debug, Serialize, Deserialize)]
pub struct DatabaseInfo {
    pub schema_version: u64,
    pub config: StoreConfig,
    pub split: Split,
    pub anchor: Option<AnchorInfo>,
}

impl BeaconNodeHttpClient {
    /// Perform an HTTP GET request, returning `None` on a 404 error.
    async fn get_bytes_opt<U: IntoUrl>(&self, url: U) -> Result<Option<Vec<u8>>, Error> {
        let response = self.client.get(url).send().await.map_err(Error::Reqwest)?;
        match ok_or_error(response).await {
            Ok(resp) => Ok(Some(
                resp.bytes()
                    .await
                    .map_err(Error::Reqwest)?
                    .into_iter()
                    .collect::<Vec<_>>(),
            )),
            Err(err) => {
                if err.status() == Some(StatusCode::NOT_FOUND) {
                    Ok(None)
                } else {
                    Err(err)
                }
            }
        }
    }

    /// `GET lighthouse/health`
    pub async fn get_lighthouse_health(&self) -> Result<GenericResponse<Health>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("health");

        self.get(path).await
    }

    /// `GET lighthouse/syncing`
    pub async fn get_lighthouse_syncing(&self) -> Result<GenericResponse<SyncState>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("syncing");

        self.get(path).await
    }

    /*
     * Note:
     *
     * The `lighthouse/peers` endpoints do not have functions here. We are yet to implement
     * `Deserialize` on the `PeerInfo` struct since it contains use of `Instant`. This could be
     * fairly simply achieved, if desired.
     */

    /// `GET lighthouse/proto_array`
    pub async fn get_lighthouse_proto_array(&self) -> Result<GenericResponse<ProtoArray>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("proto_array");

        self.get(path).await
    }

    /// `GET lighthouse/validator_inclusion/{epoch}/global`
    pub async fn get_lighthouse_validator_inclusion_global(
        &self,
        epoch: Epoch,
    ) -> Result<GenericResponse<GlobalValidatorInclusionData>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("validator_inclusion")
            .push(&epoch.to_string())
            .push("global");

        self.get(path).await
    }

    /// `GET lighthouse/validator_inclusion/{epoch}/{validator_id}`
    pub async fn get_lighthouse_validator_inclusion(
        &self,
        epoch: Epoch,
        validator_id: ValidatorId,
    ) -> Result<GenericResponse<Option<ValidatorInclusionData>>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("validator_inclusion")
            .push(&epoch.to_string())
            .push(&validator_id.to_string());

        self.get(path).await
    }

    /// `GET lighthouse/eth1/syncing`
    pub async fn get_lighthouse_eth1_syncing(
        &self,
    ) -> Result<GenericResponse<Eth1SyncStatusData>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("eth1")
            .push("syncing");

        self.get(path).await
    }

    /// `GET lighthouse/eth1/block_cache`
    pub async fn get_lighthouse_eth1_block_cache(
        &self,
    ) -> Result<GenericResponse<Vec<Eth1Block>>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("eth1")
            .push("block_cache");

        self.get(path).await
    }

    /// `GET lighthouse/eth1/deposit_cache`
    pub async fn get_lighthouse_eth1_deposit_cache(
        &self,
    ) -> Result<GenericResponse<Vec<DepositLog>>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("eth1")
            .push("deposit_cache");

        self.get(path).await
    }

    /// `GET lighthouse/beacon/states/{state_id}/ssz`
    pub async fn get_lighthouse_beacon_states_ssz<E: EthSpec>(
        &self,
        state_id: &StateId,
        spec: &ChainSpec,
    ) -> Result<Option<BeaconState<E>>, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("beacon")
            .push("states")
            .push(&state_id.to_string())
            .push("ssz");

        self.get_bytes_opt(path)
            .await?
            .map(|bytes| BeaconState::from_ssz_bytes(&bytes, spec).map_err(Error::InvalidSsz))
            .transpose()
    }

    /// `GET lighthouse/staking`
    pub async fn get_lighthouse_staking(&self) -> Result<bool, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("staking");

        self.get_opt::<(), _>(path).await.map(|opt| opt.is_some())
    }

    /// `GET lighthouse/database/info`
    pub async fn get_lighthouse_database_info(&self) -> Result<DatabaseInfo, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("database")
            .push("info");

        self.get(path).await
    }

    /// `POST lighthouse/database/reconstruct`
    pub async fn post_lighthouse_database_reconstruct(&self) -> Result<String, Error> {
        let mut path = self.server.full.clone();

        path.path_segments_mut()
            .map_err(|()| Error::InvalidUrl(self.server.clone()))?
            .push("lighthouse")
            .push("database")
            .push("reconstruct");

        self.post_with_response(path, &()).await
    }
}