First RESTful HTTP API (#399)
* Added generated code for the REST API.
- Created a new crate, rest_api, which will adapt the OpenAPI-generated code to Lighthouse.
- Committed automatically generated code from openapi-generator-cli (via Docker). Hopefully this will not need to be modified at all, with all changes made in the rest_api crate.
* Removed the OpenAPI-generated code, because it was the Rust client, not the Rust server.
* Added the correct rust-server code, automatically generated from OpenAPI.
* Included the REST API in the configuration.
- Started adding rest_api to the beacon node's dependencies.
- Set up a configuration file for rest_api and integrated it into the main client config.
- Added CLI flags for the REST API.
* Further work on the REST API.
- Added the dependencies to the rest_api crate.
- Created a skeleton BeaconNodeService, which will handle /node requests.
- Started the rest_api server definition, with the high-level request handling logic.
* WIP: Restructured the REST API to use hyper_router and separate services.
* WIP: Fixing Rust for the REST API.
* WIP: Fixed many bugs while getting the router to compile.
* WIP: Got the beacon_node to compile with the REST changes.
* Basic API works!
- Changed CLI flags from rest-api* to api*.
- Fixed the port CLI flag.
- Tested; works over HTTP.
* WIP: Moved things around so that we can get state inside the handlers.
* WIP: Significant API updates.
- Started writing a macro for getting the handler functions.
- Added the BeaconChain into the type map, giving stateful access to the beacon state.
- Created new generic error types (not fully worked out yet) to reduce code duplication.
- Moved common code into lib.rs.
* WIP: Factored macros; defined the API result and error types.
- Added more logging when creating HTTP responses.
- Tried moving more code into macros, but couldn't get macros-within-macros to compile.
- Pulled out a lot of placeholder code.
* Fixed macros so that things compile.
* Cleaned up code.
- Removed unused imports.
- Removed comments.
- Addressed all compiler warnings.
- Ran cargo fmt.
* Removed auto-generated OpenAPI code.
* Addressed Paul's suggestions.
- Fixed a spelling mistake.
- Moved the simple macros into functions, since it doesn't make sense for them to be macros.
- Removed redundant code and inclusions.
* Removed the redundant validate_request function.
* Included graceful shutdown in the Hyper server.
* Fixed the dropped exit_signal, which prevented the API from starting.
* Wrapped the exit signal, to get an API shutdown log line (a rough sketch of this pattern follows below).
2019-07-31 08:29:41 +00:00
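The last three bullets describe wiring a graceful shutdown signal into the Hyper server and the bug where dropping the exit signal shut the API down before it could serve anything. As a rough, hedged illustration of that pattern (not Lighthouse's actual code; the handler, port, and Ctrl-C trigger are placeholders, and this assumes hyper 0.14 with the server features plus tokio), a minimal server with a signal-driven graceful shutdown might look like:

```rust
use std::convert::Infallible;
use std::net::SocketAddr;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use tokio::sync::oneshot;

// Placeholder handler; the real API routes requests to per-service handlers.
async fn handle(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("Lighthouse REST API")))
}

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 5052));
    let (exit_tx, exit_rx) = oneshot::channel::<()>();

    let make_svc =
        make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(handle)) });

    // `with_graceful_shutdown` keeps serving until the future resolves.
    // If `exit_tx` were dropped immediately, `exit_rx` would resolve at once
    // and the server would shut down before serving anything (the "dropped
    // exit_signal" bug mentioned in the bullets above).
    let server = Server::bind(&addr)
        .serve(make_svc)
        .with_graceful_shutdown(async {
            let _ = exit_rx.await;
        });

    // Trigger shutdown on Ctrl-C and log a shutdown line, as in the final bullet.
    tokio::spawn(async move {
        let _ = tokio::signal::ctrl_c().await;
        println!("HTTP API shutting down");
        let _ = exit_tx.send(());
    });

    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}
```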
[package]
name = "http_api"
version = "0.1.0"
authors = ["Paul Hauner <paul@paulhauner.com>"]
edition = { workspace = true }

Use `BeaconProcessor` for API requests (#4462)
## Issue Addressed
NA
## Proposed Changes
Rather than spawning new tasks on the tokio executor to process each HTTP API request, send the tasks to the `BeaconProcessor`. This achieves:
1. Places a bound on how many concurrent requests are being served (i.e., how many we are actually trying to compute at one time).
2. Places a bound on how many requests can be awaiting a response at one time (i.e., starts dropping requests when we have too many queued).
3. Allows the BN to prioritise HTTP requests with respect to messages coming from the P2P network (i.e., prioritise importing gossip blocks rather than serving API requests).
Presently there are two levels of priorities (a rough sketch of the queueing idea follows this list):
- `Priority::P0`
  - The beacon processor will prioritise these above everything other than importing new blocks.
  - Roughly all validator-sensitive endpoints.
- `Priority::P1`
  - The beacon processor will prioritise practically all other P2P messages over these, except for historical backfill tasks.
  - Everything that's not `Priority::P0`.
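To make the bounding and prioritisation concrete, here is a heavily simplified, hypothetical sketch of the idea (the names `Scheduler`, `ApiTask`, and the channel layout are illustrative, not Lighthouse's actual `BeaconProcessor` types): API handlers submit work to bounded, per-priority queues and are rejected immediately when a queue is full.

```rust
use tokio::sync::mpsc;

/// Illustrative priority levels, mirroring the two levels described above.
#[derive(Clone, Copy)]
enum Priority {
    P0, // validator-sensitive endpoints
    P1, // everything else
}

/// A unit of API work plus its priority (hypothetical type).
struct ApiTask {
    priority: Priority,
    work: Box<dyn FnOnce() + Send>,
}

/// Hypothetical scheduler front-end: one bounded queue per priority level.
struct Scheduler {
    p0_tx: mpsc::Sender<ApiTask>,
    p1_tx: mpsc::Sender<ApiTask>,
}

impl Scheduler {
    fn new(capacity: usize) -> (Self, mpsc::Receiver<ApiTask>, mpsc::Receiver<ApiTask>) {
        let (p0_tx, p0_rx) = mpsc::channel(capacity);
        let (p1_tx, p1_rx) = mpsc::channel(capacity);
        (Self { p0_tx, p1_tx }, p0_rx, p1_rx)
    }

    /// Submit a task; `try_send` fails fast when the queue is full, which is
    /// what bounds the number of requests awaiting a response.
    fn submit(&self, task: ApiTask) -> Result<(), &'static str> {
        let tx = match task.priority {
            Priority::P0 => &self.p0_tx,
            Priority::P1 => &self.p1_tx,
        };
        tx.try_send(task).map_err(|_| "queue full, request dropped")
    }
}

fn main() {
    let (scheduler, _p0_rx, _p1_rx) = Scheduler::new(1024);
    let task = ApiTask {
        priority: Priority::P0,
        work: Box::new(|| println!("serve a validator-sensitive request (placeholder)")),
    };
    // In the real system a worker pool drains the receivers, preferring P0
    // over P1; here we only demonstrate the bounded submission path.
    scheduler.submit(task).expect("queue has capacity");
}
```

A worker pool on the other side would drain the P0 queue before the P1 queue, interleaved with P2P work; the `--http-enable-beacon-processor false` flag described below bypasses this path entirely and spawns a plain tokio task per request.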
The `--http-enable-beacon-processor false` flag can be supplied to revert back to the old behaviour of spawning new `tokio` tasks for each request:
```
--http-enable-beacon-processor <BOOLEAN>
The beacon processor is a scheduler which provides quality-of-service and DoS protection. When set to
"true", HTTP API requests will be queued and scheduled alongside other tasks. When set to "false", HTTP API
responses will be executed immediately. [default: true]
```
## New CLI Flags
I added some other new CLI flags:
```
--beacon-processor-aggregate-batch-size <INTEGER>
Specifies the number of gossip aggregate attestations in a signature verification batch. Higher values may
reduce CPU usage in a healthy network while lower values may increase CPU usage in an unhealthy or hostile
network. [default: 64]
--beacon-processor-attestation-batch-size <INTEGER>
Specifies the number of gossip attestations in a signature verification batch. Higher values may reduce CPU
usage in a healthy network whilst lower values may increase CPU usage in an unhealthy or hostile network.
[default: 64]
--beacon-processor-max-workers <INTEGER>
Specifies the maximum concurrent tasks for the task scheduler. Increasing this value may increase resource
consumption. Reducing the value may result in decreased resource usage and diminished performance. The
default value is the number of logical CPU cores on the host.
--beacon-processor-reprocess-queue-len <INTEGER>
Specifies the length of the queue for messages requiring delayed processing. Higher values may prevent
messages from being dropped while lower values may help protect the node from becoming overwhelmed.
[default: 12288]
```
I needed to add the max-workers flag since the "simulator" flavor tests started failing with HTTP timeouts on the test assertions. I believe they were failing because the GitHub runners only have 2 cores and there just weren't enough workers available to process our requests in time. I added the other flags since they seem fun to fiddle with.
## Additional Info
I bumped the timeouts on the "simulator" flavor test from 4s to 8s. The prioritisation of consensus messages seems to be causing slower responses; I guess this is what we signed up for 🤷
The `validator/register` endpoint has some special handling because the relays have a bad habit of timing out on these calls. It seems like a waste of a `BeaconProcessor` worker to just wait for the builder API HTTP response, so we spawn a new `tokio` task to wait for the builder response (sketched below).
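As a rough illustration of that choice (hypothetical function names; not the actual Lighthouse handler), the long-running builder call is handed to its own tokio task rather than holding a scheduler worker:

```rust
/// Hypothetical stand-in for the slow builder/relay HTTP request.
async fn forward_registrations_to_builder(registrations: Vec<String>) -> Result<(), String> {
    // Imagine an HTTP POST to the builder here; relays may take a long time
    // to answer, or time out entirely.
    let _ = registrations;
    Ok(())
}

/// Sketch of the handler: spawn the builder call on its own tokio task so a
/// `BeaconProcessor`-style worker is not tied up waiting for the response.
async fn register_validators(registrations: Vec<String>) -> Result<(), String> {
    let builder_call = tokio::spawn(forward_registrations_to_builder(registrations));
    builder_call
        .await
        .map_err(|e| format!("builder task panicked: {e}"))?
}
```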
I've added an optimisation for the `GET beacon/states/{state_id}/validators/{validator_id}` endpoint in [efbabe3](https://github.com/sigp/lighthouse/pull/4462/commits/efbabe32521ed6eb3564764da4e507d26a1c4bd0). That's the endpoint the VC uses to resolve pubkeys to validator indices, and it's the endpoint that was causing us grief. Perhaps I should move that into a new PR, not sure.
2023-08-08 23:30:15 +00:00

autotests = false # using a single test binary compiles faster
[dependencies]
warp = { workspace = true }
serde = { workspace = true }
tokio = { workspace = true }
tokio-stream = { workspace = true }
types = { workspace = true }
hex = { workspace = true }
beacon_chain = { workspace = true }
eth2 = { workspace = true }
slog = { workspace = true }
network = { workspace = true }
lighthouse_network = { workspace = true }
eth1 = { workspace = true }
state_processing = { workspace = true }
lighthouse_version = { workspace = true }
lighthouse_metrics = { workspace = true }
lazy_static = { workspace = true }
warp_utils = { workspace = true }
slot_clock = { workspace = true }
ethereum_ssz = { workspace = true }
bs58 = "0.4.0"
futures = { workspace = true }
execution_layer = { workspace = true }
parking_lot = { workspace = true }
safe_arith = { workspace = true }
task_executor = { workspace = true }
lru = { workspace = true }
tree_hash = { workspace = true }
sysinfo = { workspace = true }
system_health = { path = "../../common/system_health" }
directory = { workspace = true }
logging = { workspace = true }
ethereum_serde_utils = { workspace = true }
operation_pool = { workspace = true }
sensitive_url = { workspace = true }
store = { workspace = true }
bytes = { workspace = true }
beacon_processor = { workspace = true }

[dev-dependencies]
environment = { workspace = true }
serde_json = { workspace = true }
proto_array = { workspace = true }
genesis = { workspace = true }

[[test]]
name = "bn_http_api_tests"
path = "tests/main.rs"
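The `autotests = false` setting plus this single `[[test]]` target means Cargo builds one integration-test binary instead of one binary per file under `tests/`, which compiles faster. A hypothetical `tests/main.rs` for this pattern (module and test names are illustrative, not the crate's actual test files) simply gathers the individual test modules into that one binary:

```rust
// tests/main.rs -- the single test binary declared as `bn_http_api_tests` above.
// In the real crate each module lives in its own file under `tests/` and is
// pulled in with `mod <name>;`. Inline modules are used here only so the
// sketch is self-contained; names and assertions are placeholders.
mod status_tests {
    #[test]
    fn node_version_smoke_test() {
        let version = "Lighthouse/vX.Y.Z"; // placeholder value
        assert!(version.starts_with("Lighthouse"));
    }
}

mod validator_tests {
    #[test]
    fn duties_endpoint_smoke_test() {
        let duties: Vec<u64> = Vec::new(); // placeholder value
        assert!(duties.is_empty());
    }
}
```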