# beacon.watch

> beacon.watch is pre-MVP, still under active development, and subject to change.

beacon.watch is an Ethereum Beacon Chain monitoring platform whose goal is to provide fast access to data which is:

- Not already stored natively in the Beacon Chain
- Too specialized for Block Explorers
- Too sensitive for public Block Explorers
## Requirements

- `git`
- `rust`: https://rustup.rs/
- `libpq`: https://www.postgresql.org/download/
- `diesel_cli`: `cargo install diesel_cli --no-default-features --features postgres`
- `docker`: https://docs.docker.com/engine/install/
- `docker-compose`: https://docs.docker.com/compose/install/
## Setup

- Set up the database:
  ```bash
  cd postgres_docker_compose
  docker-compose up
  ```
- Ensure the tests pass:
  ```bash
  cargo test --release
  ```
- Drop the database (if it already exists) and run the required migrations:
  ```bash
  diesel database reset --database-url postgres://postgres:postgres@localhost/dev
  ```
- Ensure a synced Lighthouse beacon node with historical states is available at `localhost:5052`.
  The smaller the value of `--slots-per-restore-point`, the faster beacon.watch will be able to sync to the beacon node.
- Run the updater daemon:
  ```bash
  cargo run --release -- run-updater
  ```
- Start the HTTP API server:
  ```bash
  cargo run --release -- serve
  ```
- Ensure connectivity:
  ```bash
  curl "http://localhost:5059/v1/slots/highest"
  ```
Functionality on macOS has not been tested. Windows is not supported.
## Configuration

beacon.watch can be configured through the use of a config file.
Available options can be seen in `config.yaml.default`.

You can specify a config file at runtime:

```bash
cargo run -- run-updater --config path/to/config.yaml
cargo run -- serve --config path/to/config.yaml
```

You can specify only the parts of the config file which you need changed. Missing values will remain at their defaults.

For example, if you wish to run with default settings but only wish to alter `log_level`, your config file would be:

```yaml
# config.yaml
log_level: "info"
```
## Available Endpoints

As beacon.watch continues to develop, more endpoints will be added.

In these examples, any data containing information from blockprint has either been redacted or fabricated.
### `/v1/slots/{slot}`

```bash
curl "http://localhost:5059/v1/slots/4635296"
```

```json
{
  "slot": "4635296",
  "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
  "skipped": false,
  "beacon_block": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62"
}
```
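For programmatic access, the responses deserialize cleanly into typed structures. Below is a minimal client sketch in Rust, not part of the project itself: the `reqwest` (with the `blocking` and `json` features) and `serde` dependencies and the `WatchSlot` struct name are illustrative choices based on the example response above.

```rust
// Minimal sketch of querying beacon.watch from Rust. The crates used and the
// struct name `WatchSlot` are assumptions, not part of the beacon.watch API.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct WatchSlot {
    slot: String,
    root: String,
    skipped: bool,
    // Assumed nullable for skipped slots, hence `Option`.
    beacon_block: Option<String>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let slot: WatchSlot =
        reqwest::blocking::get("http://localhost:5059/v1/slots/4635296")?.json()?;
    println!("slot {} skipped: {}", slot.slot, slot.skipped);
    Ok(())
}
```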
### `/v1/slots?start_slot={}&end_slot={}`

```bash
curl "http://localhost:5059/v1/slots?start_slot=4635296&end_slot=4635297"
```

```json
[
  {
    "slot": "4635297",
    "root": "0x04ad2e963811207e344bebeba5b1217805bcc3a9e2ed9fcf2205d491778c6182",
    "skipped": false,
    "beacon_block": "0x04ad2e963811207e344bebeba5b1217805bcc3a9e2ed9fcf2205d491778c6182"
  },
  {
    "slot": "4635296",
    "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
    "skipped": false,
    "beacon_block": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62"
  }
]
```
### `/v1/slots/lowest`

```bash
curl "http://localhost:5059/v1/slots/lowest"
```

```json
{
  "slot": "4635296",
  "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
  "skipped": false,
  "beacon_block": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62"
}
```
### `/v1/slots/highest`

```bash
curl "http://localhost:5059/v1/slots/highest"
```

```json
{
  "slot": "4635358",
  "root": "0xe9eff13560688f1bf15cf07b60c84963d4d04a4a885ed0eb19ceb8450011894b",
  "skipped": false,
  "beacon_block": "0xe9eff13560688f1bf15cf07b60c84963d4d04a4a885ed0eb19ceb8450011894b"
}
```
### `/v1/slots/{slot}/block`

```bash
curl "http://localhost:5059/v1/slots/4635296/block"
```

```json
{
  "slot": "4635296",
  "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
  "parent_root": "0x7c4860b420a23de9d126da71f9043b3744af98c847efd9e1440f2da8fbf7f31b"
}
```
### `/v1/blocks/{block_id}`

```bash
curl "http://localhost:5059/v1/blocks/4635296"
# OR
curl "http://localhost:5059/v1/blocks/0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62"
```

```json
{
  "slot": "4635296",
  "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
  "parent_root": "0x7c4860b420a23de9d126da71f9043b3744af98c847efd9e1440f2da8fbf7f31b"
}
```
### `/v1/blocks?start_slot={}&end_slot={}`

```bash
curl "http://localhost:5059/v1/blocks?start_slot=4635296&end_slot=4635297"
```

```json
[
  {
    "slot": "4635297",
    "root": "0x04ad2e963811207e344bebeba5b1217805bcc3a9e2ed9fcf2205d491778c6182",
    "parent_root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62"
  },
  {
    "slot": "4635296",
    "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
    "parent_root": "0x7c4860b420a23de9d126da71f9043b3744af98c847efd9e1440f2da8fbf7f31b"
  }
]
```
### `/v1/blocks/{block_id}/previous`

```bash
curl "http://localhost:5059/v1/blocks/4635297/previous"
# OR
curl "http://localhost:5059/v1/blocks/0x04ad2e963811207e344bebeba5b1217805bcc3a9e2ed9fcf2205d491778c6182/previous"
```

```json
{
  "slot": "4635296",
  "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
  "parent_root": "0x7c4860b420a23de9d126da71f9043b3744af98c847efd9e1440f2da8fbf7f31b"
}
```
### `/v1/blocks/{block_id}/next`

```bash
curl "http://localhost:5059/v1/blocks/4635296/next"
# OR
curl "http://localhost:5059/v1/blocks/0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62/next"
```

```json
{
  "slot": "4635297",
  "root": "0x04ad2e963811207e344bebeba5b1217805bcc3a9e2ed9fcf2205d491778c6182",
  "parent_root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62"
}
```
### `/v1/blocks/lowest`

```bash
curl "http://localhost:5059/v1/blocks/lowest"
```

```json
{
  "slot": "4635296",
  "root": "0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62",
  "parent_root": "0x7c4860b420a23de9d126da71f9043b3744af98c847efd9e1440f2da8fbf7f31b"
}
```
### `/v1/blocks/highest`

```bash
curl "http://localhost:5059/v1/blocks/highest"
```

```json
{
  "slot": "4635358",
  "root": "0xe9eff13560688f1bf15cf07b60c84963d4d04a4a885ed0eb19ceb8450011894b",
  "parent_root": "0xb66e05418bb5b1d4a965c994e1f0e5b5f0d7b780e0df12f3f6321510654fa1d2"
}
```
### `/v1/blocks/{block_id}/proposer`

```bash
curl "http://localhost:5059/v1/blocks/4635296/proposer"
# OR
curl "http://localhost:5059/v1/blocks/0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62/proposer"
```

```json
{
  "slot": "4635296",
  "proposer_index": 223126,
  "graffiti": ""
}
```
### `/v1/blocks/{block_id}/rewards`

```bash
curl "http://localhost:5059/v1/blocks/4635296/reward"
# OR
curl "http://localhost:5059/v1/blocks/0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62/reward"
```

```json
{
  "slot": "4635296",
  "total": 25380059,
  "attestation_reward": 24351867,
  "sync_committee_reward": 1028192
}
```
### `/v1/blocks/{block_id}/packing`

```bash
curl "http://localhost:5059/v1/blocks/4635296/packing"
# OR
curl "http://localhost:5059/v1/blocks/0xf7063a9d6c663682e59bd0b41d29ce80c3ff0b089049ff8676d6f9ee79622c62/packing"
```

```json
{
  "slot": "4635296",
  "available": 16152,
  "included": 13101,
  "prior_skip_slots": 0
}
```
### `/v1/validators/{validator}`

```bash
curl "http://localhost:5059/v1/validators/1"
# OR
curl "http://localhost:5059/v1/validators/0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c"
```

```json
{
  "index": 1,
  "public_key": "0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c",
  "status": "active_ongoing",
  "client": null,
  "activation_epoch": 0,
  "exit_epoch": null
}
```
### `/v1/validators/{validator}/attestation/{epoch}`

```bash
curl "http://localhost:5059/v1/validators/1/attestation/144853"
# OR
curl "http://localhost:5059/v1/validators/0xa1d1ad0714035353258038e964ae9675dc0252ee22cea896825c01458e1807bfad2f9969338798548d9858a571f7425c/attestation/144853"
```

```json
{
  "index": 1,
  "epoch": "144853",
  "source": true,
  "head": true,
  "target": true
}
```
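As a usage sketch, the per-epoch booleans make it straightforward to tally a validator's performance over a range of epochs. The helper below is hypothetical, using the same assumed `reqwest`/`serde` stack as the earlier sketch; only the fields shown in the example response are relied upon.

```rust
// Hypothetical tally of attestation performance over an epoch range.
use serde::Deserialize;

#[derive(Deserialize)]
struct AttestationPerformance {
    source: bool,
    head: bool,
    target: bool,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let validator = 1;
    let (start_epoch, end_epoch) = (144850u64, 144853);
    let mut all_correct = 0u64;
    for epoch in start_epoch..=end_epoch {
        let url = format!(
            "http://localhost:5059/v1/validators/{validator}/attestation/{epoch}"
        );
        let perf: AttestationPerformance = reqwest::blocking::get(url)?.json()?;
        if perf.source && perf.head && perf.target {
            all_correct += 1;
        }
    }
    println!(
        "{all_correct}/{} epochs with source, head and target all correct",
        end_epoch - start_epoch + 1
    );
    Ok(())
}
```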
### `/v1/validators/missed/{vote}/{epoch}`

```bash
curl "http://localhost:5059/v1/validators/missed/head/144853"
```

```json
[
  63,
  67,
  98,
  ...
]
```
### `/v1/validators/missed/{vote}/{epoch}/graffiti`

```bash
curl "http://localhost:5059/v1/validators/missed/head/144853/graffiti"
```

```json
{
  "Mr F was here": 3,
  "Lighthouse/v3.1.0-aa022f4": 5,
  ...
}
```
### `/v1/clients/missed/{vote}/{epoch}`

```bash
curl "http://localhost:5059/v1/clients/missed/source/144853"
```

```json
{
  "Lighthouse": 100,
  "Lodestar": 100,
  "Nimbus": 100,
  "Prysm": 100,
  "Teku": 100,
  "Unknown": 100
}
```
### `/v1/clients/missed/{vote}/{epoch}/percentages`

Note that this endpoint answers the question:

> What percentage of each client implementation missed this vote?

```bash
curl "http://localhost:5059/v1/clients/missed/target/144853/percentages"
```

```json
{
  "Lighthouse": 0.51234567890,
  "Lodestar": 0.51234567890,
  "Nimbus": 0.51234567890,
  "Prysm": 0.09876543210,
  "Teku": 0.09876543210,
  "Unknown": 0.05647382910
}
```
### `/v1/clients/missed/{vote}/{epoch}/percentages/relative`

Note that this endpoint answers the question:

> For the validators which did miss this vote, what percentage of them were from each client implementation?

You can check these values against the output of `/v1/clients/percentages` to see any discrepancies.

```bash
curl "http://localhost:5059/v1/clients/missed/target/144853/percentages/relative"
```

```json
{
  "Lighthouse": 11.11111111111111,
  "Lodestar": 11.11111111111111,
  "Nimbus": 11.11111111111111,
  "Prysm": 16.66666666666667,
  "Teku": 16.66666666666667,
  "Unknown": 33.33333333333333
}
```
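To make the difference between the two percentage views concrete, here is a small worked sketch with made-up per-client counts (the figures and variable names are illustrative only, not taken from the API):

```rust
// Illustrates the two "missed" percentage views from hypothetical counts.
use std::collections::HashMap;

fn main() {
    // Made-up figures: total validators per client, and how many missed.
    let totals = HashMap::from([("Lighthouse", 5000.0), ("Prysm", 5000.0)]);
    let missed = HashMap::from([("Lighthouse", 100.0), ("Prysm", 150.0)]);
    let total_missed: f64 = missed.values().sum();

    for (client, m) in &missed {
        // /percentages: share of this client's validators that missed.
        let pct = 100.0 * m / totals[client];
        // /percentages/relative: this client's share of all missed votes.
        let rel = 100.0 * m / total_missed;
        println!("{client}: {pct:.2}% of client missed; {rel:.2}% of all misses");
    }
}
```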
### `/v1/clients`

```bash
curl "http://localhost:5059/v1/clients"
```

```json
{
  "Lighthouse": 5000,
  "Lodestar": 5000,
  "Nimbus": 5000,
  "Prysm": 5000,
  "Teku": 5000,
  "Unknown": 5000
}
```
### `/v1/clients/percentages`

```bash
curl "http://localhost:5059/v1/clients/percentages"
```

```json
{
  "Lighthouse": 16.66666666666667,
  "Lodestar": 16.66666666666667,
  "Nimbus": 16.66666666666667,
  "Prysm": 16.66666666666667,
  "Teku": 16.66666666666667,
  "Unknown": 16.66666666666667
}
```
## Future work

- New tables:
  - `skip_slots`?
- More API endpoints:
  - `/v1/proposers?start_epoch={}&end_epoch={}` and similar
  - `/v1/validators/{status}/count`
- Concurrently backfill and forwards fill, so forwards fill is not bottlenecked by large backfills.
- Better/prettier (async?) logging.
- Connect to a range of beacon nodes to sync different components concurrently. Generally, API queries such as `block_packing` and `attestation_performance` take the longest to sync.
## Architecture

Connection pooling:

- 1 pool for the updater (read and write)
- 1 pool for the HTTP server (should be read-only, although it is unclear whether this can be enforced at the application level)
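A minimal sketch of that two-pool layout, assuming `diesel`'s bundled `r2d2` pooling (an assumption; the actual pool types used by the updater and server may differ). Read-only access for the server pool cannot be expressed in the pool itself, but it could be enforced Postgres-side by connecting as a role with SELECT-only grants.

```rust
// Sketch of two separate connection pools over the same database, using
// diesel with its "r2d2" feature (assumed stack, not the project's code).
use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};

type PgPool = Pool<ConnectionManager<PgConnection>>;

fn build_pool(database_url: &str) -> PgPool {
    Pool::builder()
        .build(ConnectionManager::<PgConnection>::new(database_url))
        .expect("failed to build connection pool")
}

fn main() {
    let url = "postgres://postgres:postgres@localhost/dev";
    // Updater pool: reads and writes while syncing from the beacon node.
    let _updater_pool = build_pool(url);
    // HTTP server pool: intended read-only; connect as a SELECT-only
    // Postgres role to actually enforce that.
    let _server_pool = build_pool(url);
}
```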