Add attestation simulator, blobs info and some updates to Lighthouse Book (#5364)

* Apply suggestions from code review

* Revise attestation simulator doc

* Revise blobs.md

* Summary

* Add blobs

* Simulator docs

* Revise attestation simulator

* minor formatting

* Revise vm node

* Update faq

* Update faq

* Add link to v4.6.0

* Remove minification in the docs

* Update Goerli to Holesky

* Add a note on moved vm validator monitor

* Update Rpi 4 note

* Revise attestation simulator doc

* Add docs for attestation simulator

* update database table

* Update faq on resources used

* Fix and update table
This commit is contained in:
chonghe 2024-03-14 14:12:25 +08:00 committed by GitHub
parent 2a3c709f8c
commit eab3672c6d
18 changed files with 171 additions and 65 deletions

View File

@ -53,6 +53,7 @@
* [MEV](./builders.md)
* [Merge Migration](./merge-migration.md)
* [Late Block Re-orgs](./late-block-re-orgs.md)
* [Blobs](./advanced-blobs.md)
* [Built-In Documentation](./help_general.md)
* [Beacon Node](./help_bn.md)
* [Validator Client](./help_vc.md)

View File

@ -0,0 +1,42 @@
# Blobs
In the Deneb network upgrade, one of the changes is the implementation of EIP-4844, also known as [Proto-danksharding](https://blog.ethereum.org/2024/02/27/dencun-mainnet-announcement). Alongside this, a new term, `blob` (binary large object), is introduced. Blobs are "side-cars" carrying transaction data in a block. They are mainly used by Ethereum layer 2 operators. As far as stakers are concerned, the main difference with the introduction of blobs is the increased storage requirement.
### FAQ
1. What is the storage requirement for blobs?
We expect an additional ~50 GB of storage to be required for blobs (on top of what is required by the consensus and execution client databases). The calculation is as below:
One blob is 128 KB in size. Each block can carry a maximum of 6 blobs. Blobs will be kept for 4096 epochs and pruned afterwards. This means that the maximum increase in storage requirement will be:
```
2**17 bytes / blob * 6 blobs / block * 32 blocks / epoch * 4096 epochs = 96 GB
```
However, the blob base fee targets 3 blobs per block, and it works similarly to how EIP-1559 operates for the Ethereum gas fee. Therefore, in practice the average is likely to be around 3 blobs per block, which translates to a storage requirement of about 48 GB.
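For reference, repeating the calculation with the 3-blob target gives:
```
2**17 bytes / blob * 3 blobs / block * 32 blocks / epoch * 4096 epochs = 48 GB
```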
1. Do I have to add any flags for blobs?
No, you can use the default values for blob-related flags, which means you do not need to add or remove any flags.
1. What if I want to keep all blobs?
Use the flag `--prune-blobs false` in the beacon node. The storage requirement will be:
```
2**17 bytes * 3 blobs / block * 7200 blocks / day * 30 days = 79 GB / month or 948 GB / year
```
To keep blobs for a custom period, you may use the flag `--blob-prune-margin-epochs <EPOCHS>`, which keeps blobs for 4096 epochs plus the `EPOCHS` specified in the flag.
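As a rough sketch (assuming a typical `lighthouse bn` invocation; other required flags, such as the execution endpoint, are omitted):
```bash
# Keep all blobs indefinitely (disables blob pruning)
lighthouse bn --network mainnet --prune-blobs false
# Keep blobs for an extra 8192 epochs (~36 days) on top of the default 4096 epochs
lighthouse bn --network mainnet --blob-prune-margin-epochs 8192
```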
1. How do I view the information of the blobs database?
We can call the API:
```bash
curl "http://localhost:5052/lighthouse/database/info" | jq
```
Refer to [Lighthouse API](./api-lighthouse.md#lighthousedatabaseinfo) for an example response.
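As a sketch, assuming the response contains a `blob_info` object as in the example on the linked API page, the output can be narrowed to the blob-related fields:
```bash
# Hypothetical filter: show only the blob-related portion of the database info
curl -s "http://localhost:5052/lighthouse/database/info" | jq '.blob_info'
```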

View File

@ -40,5 +40,5 @@ There can also be a scenario that a bug has been found and requires an urgent fi
## When *not* to use a release candidate
Other than the above scenarios, it is generally not recommended to use release candidates for any critical tasks on mainnet (e.g., staking). To test new release candidate features, try one of the testnets (e.g., Goerli).
Other than the above scenarios, it is generally not recommended to use release candidates for any critical tasks on mainnet (e.g., staking). To test new release candidate features, try one of the testnets (e.g., Holesky).

View File

@ -21,3 +21,4 @@ tips about how things work under the hood.
* [Maximal Extractable Value](./builders.md): use external builders for potentially higher rewards during block proposals
* [Merge Migration](./merge-migration.md): look at what you need to do during a significant network upgrade: The Merge
* [Late Block Re-orgs](./late-block-re-orgs.md): read information about Lighthouse late block re-orgs.
* [Blobs](./advanced-blobs.md): information about blobs in the Deneb upgrade

View File

@ -44,7 +44,7 @@ The values shown in the table are approximate, calculated using a simple heurist
The **Load Historical State** time is the worst-case load time for a state in the last slot
before a restore point.
To run a full archival node with fast access to beacon states and a SPRP of 32, the disk usage will be more than 10 TB per year, which is impractical for many users. As such, users may consider running the [tree-states](https://github.com/sigp/lighthouse/releases/tag/v4.5.444-exp) release, which only uses less than 150 GB for a full archival node. The caveat is that it is currently experimental and in alpha release (as of Dec 2023), thus not recommended for running mainnet validators. Nevertheless, it is suitable to be used for analysis purposes, and if you encounter any issues in tree-states, we do appreciate any feedback. We plan to have a stable release of tree-states in 1H 2024.
To run a full archival node with fast access to beacon states and a SPRP of 32, the disk usage will be more than 10 TB per year, which is impractical for many users. As such, users may consider running the [tree-states](https://github.com/sigp/lighthouse/releases/tag/v5.0.111-exp) release, which uses less than 200 GB for a full archival node. The caveat is that it is currently experimental and in alpha release (as of Dec 2023), thus not recommended for running mainnet validators. Nevertheless, it is suitable for analysis purposes, and if you encounter any issues in tree-states, we do appreciate any feedback. We plan to have a stable release of tree-states in 1H 2024.
### Defaults

View File

@ -1,6 +1,6 @@
# Checkpoint Sync
Lighthouse supports syncing from a recent finalized checkpoint. This is substantially faster than syncing from genesis, while still providing all the same features. Checkpoint sync is also safer as it protects the node from long-range attacks. Since 4.6.0, checkpoint sync is required by default and genesis sync will no longer work without the use of `--allow-insecure-genesis-sync`.
Lighthouse supports syncing from a recent finalized checkpoint. This is substantially faster than syncing from genesis, while still providing all the same features. Checkpoint sync is also safer as it protects the node from long-range attacks. Since [v4.6.0](https://github.com/sigp/lighthouse/releases/tag/v4.6.0), checkpoint sync is required by default and genesis sync will no longer work without the use of `--allow-insecure-genesis-sync`.
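As a minimal sketch of what this looks like in practice (the values below, including the checkpoint sync URL and file paths, are examples only):
```bash
# Example only: substitute a checkpoint sync provider and paths appropriate to your setup
lighthouse bn \
  --network mainnet \
  --checkpoint-sync-url https://mainnet.checkpoint.sigp.io \
  --execution-endpoint http://localhost:8551 \
  --execution-jwt /secrets/jwt.hex \
  --http
```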
To quickly get started with checkpoint sync, read the sections below on:

View File

@ -16,7 +16,8 @@ validator client or the slasher**.
| Lighthouse version | Release date | Schema version | Downgrade available? |
|--------------------|--------------|----------------|----------------------|
| v5.1.0 | Mar 2024 | v19 | yes before Deneb |
| v5.0.0 | Feb 2024 | v19 | yes before Deneb |
| v4.6.0 | Dec 2023 | v19 | yes before Deneb |
| v4.6.0-rc.0 | Dec 2023 | v18 | yes before Deneb |
| v4.5.0 | Sep 2023 | v17 | yes |
@ -127,7 +128,7 @@ Several conditions need to be met in order to run `lighthouse db`:
2. The command must run as the user that owns the beacon node database. If you are using systemd then
your beacon node might run as a user called `lighthousebeacon`.
3. The `--datadir` flag must be set to the location of the Lighthouse data directory.
4. The `--network` flag must be set to the correct network, e.g. `mainnet`, `goerli` or `sepolia`.
4. The `--network` flag must be set to the correct network, e.g. `mainnet`, `holesky` or `sepolia`.
The general form for a `lighthouse db` command is:
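A sketch of that form, assuming a systemd setup with a `lighthousebeacon` user and a data directory of `/var/lib/lighthouse`:
```bash
sudo -u lighthousebeacon lighthouse db <SUBCOMMAND> --datadir /var/lib/lighthouse --network mainnet
# For example, to print the current database schema version:
sudo -u lighthousebeacon lighthouse db version --datadir /var/lib/lighthouse --network mainnet
```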

View File

@ -115,7 +115,7 @@ You can run a Docker beacon node with the following command:
docker run -p 9000:9000/tcp -p 9000:9000/udp -p 9001:9001/udp -p 127.0.0.1:5052:5052 -v $HOME/.lighthouse:/root/.lighthouse sigp/lighthouse lighthouse --network mainnet beacon --http --http-address 0.0.0.0
```
> To join the Goerli testnet, use `--network goerli` instead.
> To join the Holesky testnet, use `--network holesky` instead.
> The `-v` (Volumes) and `-p` (Ports) flags and their values are described below.

View File

@ -3,6 +3,7 @@
## [Beacon Node](#beacon-node-1)
- [I see a warning about "Syncing deposit contract block cache" or an error about "updating deposit contract cache", what should I do?](#bn-deposit-contract)
- [I see beacon logs showing `WARN: Execution engine called failed`, what should I do?](#bn-ee)
- [I see beacon logs showing `Error during execution engine upcheck`, what should I do?](#bn-upcheck)
- [My beacon node is stuck at downloading historical blocks using checkpoint sync. What should I do?](#bn-download-historical)
- [I proposed a block but the beacon node shows `could not publish message` with error `duplicate` as below, should I be worried?](#bn-duplicate)
- [I see beacon node logs `Head is optimistic` and I am missing attestations. What should I do?](#bn-optimistic)
@ -12,6 +13,7 @@
- [My beacon node logs `WARN Error processing HTTP API request`, what should I do?](#bn-http)
- [My beacon node logs `WARN Error signalling fork choice waiter`, what should I do?](#bn-fork-choice)
- [My beacon node logs `ERRO Aggregate attestation queue full`, what should I do?](#bn-queue-full)
- [My beacon node logs `WARN Failed to finalize deposit cache`, what should I do?](#bn-deposit-cache)
## [Validator](#validator-1)
- [Why does it take so long for a validator to be activated?](#vc-activation)
@ -46,8 +48,6 @@
## Beacon Node
### <a name="bn-deposit-contract"></a> I see a warning about "Syncing deposit contract block cache" or an error about "updating deposit contract cache", what should I do?
The error can be a warning:
@ -77,7 +77,7 @@ If this log continues appearing during operation, it means your execution client
The `WARN Execution engine called failed` log is shown when the beacon node cannot reach the execution engine. When this warning occurs, it will be followed by a detailed message. A frequently encountered example of the error message is:
`error: Reqwest(reqwest::Error { kind: Request, url: Url { scheme: "http", cannot_be_a_base: false, username: "", password: None, host: Some(Ipv4(127.0.0.1)), port: Some(8551), path: "/", query: None, fragment: None }, source: TimedOut }), service: exec`
`error: HttpClient(url: http://127.0.0.1:8551/, kind: timeout, detail: operation timed out), service: exec`
which indicates a timeout at the end of the message. This means that the execution engine has not responded in time to the beacon node. One option is to add the flags `--execution-timeout-multiplier 3` and `--disable-lock-timeouts` to the beacon node. However, if the error persists, it is worth digging further to find out the cause. There are a few reasons why this can occur:
1. The execution engine is not synced. Check the log of the execution engine to make sure that it is synced. If it is syncing, wait until it is synced and the error will disappear. You will see the beacon node logs `INFO Execution engine online` when it is synced.
@ -87,7 +87,17 @@ which says `TimedOut` at the end of the message. This means that the execution e
If the reason for the error message is caused by no. 1 above, you may want to look further. If the execution engine is out of sync suddenly, it is usually caused by ungraceful shutdown. The common causes for ungraceful shutdown are:
- Power outage. If power outages are an issue at your place, consider getting a UPS to avoid ungraceful shutdown of services.
- The service file is not stopped properly. To overcome this, make sure that the process is stopped properly, e.g., during client updates.
- Out of memory (oom) error. This can happen when the system memory usage has reached its maximum and causes the execution engine to be killed. When this occurs, the log file will show `Main process exited, code=killed, status=9/KILL`. You can also run `sudo journalctl -a --since "18 hours ago" | grep -i "killed process` to confirm that the execution client has been killed due to oom. If you are using geth as the execution client, a short term solution is to reduce the resources used. For example, you can reduce the cache by adding the flag `--cache 2048`. If the oom occurs rather frequently, a long term solution is to increase the memory capacity of the computer.
- Out of memory (oom) error. This can happen when the system memory usage has reached its maximum and causes the execution engine to be killed. To confirm that the error is due to oom, run `sudo dmesg -T | grep killed` to look for killed processes. If you are using geth as the execution client, a short term solution is to reduce the resources used. For example, you can reduce the cache by adding the flag `--cache 2048`. If the oom occurs rather frequently, a long term solution is to increase the memory capacity of the computer.
### <a name="bn-upcheck"></a> I see beacon logs showing `Error during execution engine upcheck`, what should I do?
An example of the full error is:
`ERRO Error during execution engine upcheck error: HttpClient(url: http://127.0.0.1:8551/, kind: request, detail: error trying to connect: tcp connect error: Connection refused (os error 111)), service: exec`
Connection refused means the beacon node cannot reach the execution client. This could be because the execution client is offline or because it is misconfigured. If the execution client is offline, start the execution engine and the error will disappear.
If it is a configuration issue, ensure that the execution engine can be reached. The standard endpoint to connect to the execution client is `--execution-endpoint http://localhost:8551`. If the execution client is on a different host, the endpoint to connect to it will change, e.g., `--execution-endpoint http://IP_address:8551` where `IP_address` is the IP of the execution client node (you may also need additional flags to be set). If it is using another port, the endpoint link needs to be changed accordingly. Once the execution client/beacon node is configured correctly, the error will disappear.
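A quick way to distinguish the two cases is to check whether anything is listening on the configured endpoint (a sketch, assuming the default `localhost:8551`):
```bash
# If the execution client is listening, curl receives an HTTP response
# (typically an authentication error, since no JWT is supplied here).
# "Connection refused" means nothing is listening on that address/port.
curl -v http://localhost:8551
```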
### <a name="bn-download-historical"></a> My beacon node is stuck at downloading historical block using checkpoint sync. What should I do?
@ -195,6 +205,9 @@ ERRO Aggregate attestation queue full, queue_len: 4096, msg: the system has insu
This suggests that the computer resources are being overwhelmed. It could be due to high CPU usage or high disk I/O usage. This can happen, e.g., when the beacon node is downloading historical blocks, or when the execution client is syncing. The error will disappear when the resources used return to normal or when the node is synced.
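A couple of generic checks (not Lighthouse-specific) can help confirm whether CPU or disk I/O is the bottleneck:
```bash
# CPU load and the busiest processes
top -bn1 | head -n 20
# Per-device disk utilisation, sampled 3 times (requires the sysstat package)
iostat -x 1 3
```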
### <a name="bn-deposit-cache"></a> My beacon node logs `WARN Failed to finalize deposit cache`, what should I do?
This is a known [bug](https://github.com/sigp/lighthouse/issues/3707) that will resolve by itself.
## Validator
@ -269,14 +282,14 @@ repeats until the queue is cleared. The churn limit is summarised in the table b
<div align="center" style="text-align: center;">
| Number of active validators | Validators activated per epoch | Validators activated per day |
|-------------------|--------------------------------------------|----|
|----------------|----|------|
| 327679 or less | 4 | 900 |
| 327680-393215 | 5 | 1125 |
| 393216-458751 | 6 | 1350
| 458752-524287 | 7 | 1575
| 524288-589823 | 8| 1800 |
| 589824-655359 | 9| 2025 |
| 655360-720895 | 10 | 2250|
| 393216-458751 | 6 | 1350 |
| 458752-524287 | 7 | 1575 |
| 524288-589823 | 8 | 1800 |
| 589824-655359 | 9 | 2025 |
| 655360-720895 | 10 | 2250 |
| 720896-786431 | 11 | 2475 |
| 786432-851967 | 12 | 2700 |
| 851968-917503 | 13 | 2925 |
@ -335,7 +348,7 @@ If you would like to still use Lighthouse to submit the message, you will need t
### <a name="vc-resource"></a> Does increasing the number of validators increase the CPU and other computer resources used?
A computer with hardware specifications stated in the [Recommended System Requirements](./installation.md#recommended-system-requirements) can run hundreds validators with only marginal increase in cpu usage. When validators are active, there is a bit of an increase in resources used from validators 0-64, because you end up subscribed to more subnets. After that, the increase in resources plateaus when the number of validators go from 64 to ~500.
A computer with hardware specifications stated in the [Recommended System Requirements](./installation.md#recommended-system-requirements) can run hundreds of validators with only a marginal increase in CPU usage.
### <a name="vc-reimport"></a> I want to add new validators. Do I have to reimport the existing keys?
@ -363,7 +376,7 @@ network configuration settings. Ensure that the network you wish to connect to
is correct (the beacon node outputs the network it is connecting to in the
initial boot-up log lines). On top of this, ensure that you are not using the
same `datadir` as a previous network, i.e., if you have been running the
`Goerli` testnet and are now trying to join a new network but using the same
`Holesky` testnet and are now trying to join a new network but using the same
`datadir` (the `datadir` is also printed out in the beacon node's logs on
boot-up).
@ -551,7 +564,7 @@ which says that the version is v4.1.0.
### <a name="misc-prune"></a> Does Lighthouse have pruning function like the execution client to save disk space?
There is no pruning of Lighthouse database for now. However, since v4.2.0, a feature to only sync back to the weak subjectivity point (approximately 5 months) when syncing via a checkpoint sync was added. This will help to save disk space since the previous behaviour will sync back to the genesis by default.
Yes, Lighthouse supports [state pruning](./database-migrations.md#how-to-prune-historic-states) which can help to save disk space.
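A sketch of how state pruning can be run (assuming a systemd setup with a `lighthousebeacon` user; see the linked page for the authoritative instructions):
```bash
sudo -u lighthousebeacon lighthouse db prune-states --datadir /var/lib/lighthouse --network mainnet
```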
### <a name="misc-freezer"></a> Can I use a HDD for the freezer database and only have the hot db on SSD?
@ -565,8 +578,6 @@ The reason why Lighthouse logs in UTC is due to the dependency on an upstream li
A quick way to get the validator back online is by removing the Lighthouse beacon node database and resyncing Lighthouse using checkpoint sync. A guide to do this can be found in the [Lighthouse Discord server](https://discord.com/channels/605577013327167508/605577013331361793/1019755522985050142). With some free space left, you will then be able to prune the execution client database to free up more space.
For a relatively long term solution, if you are using Geth or Nethermind as the execution client, you can consider setting up the online pruning feature. Refer to [Geth](https://blog.ethereum.org/2023/09/12/geth-v1-13-0) and [Nethermind](https://gist.github.com/yorickdowne/67be09b3ba0a9ff85ed6f83315b5f7e0) for details.

View File

@ -10,7 +10,7 @@ There are three core methods to obtain the Lighthouse application:
Additionally, there are two extra guides for specific uses:
- [Raspberry Pi 4 guide](./pi.md).
- [Raspberry Pi 4 guide](./pi.md). (Archived)
- [Cross-compiling guide for developers](./cross-compiling.md).
There are also community-maintained installation methods:

View File

@ -13,7 +13,7 @@ managing servers. You'll also need at least 32 ETH!
Being educated is critical to a validator's success. Before submitting your mainnet deposit, we recommend:
- Thoroughly exploring the [Staking Launchpad][launchpad] website, try running through the deposit process using a testnet launchpad such as the [Goerli staking launchpad](https://goerli.launchpad.ethereum.org/en/).
- Thoroughly exploring the [Staking Launchpad][launchpad] website and trying out the deposit process using a testnet launchpad such as the [Holesky staking launchpad](https://holesky.launchpad.ethereum.org/en/).
- Running a testnet validator.
- Reading through this documentation, especially the [Slashing Protection][slashing] section.
- Performing a web search and doing your own research.
@ -41,10 +41,7 @@ There are five primary steps to become a validator:
> **Important note**: The guide below contains both mainnet and testnet instructions. We highly recommend *all* users to **run a testnet validator** prior to staking mainnet ETH. By far, the best technical learning experience is to run a testnet validator. You can get hands-on experience with all the tools and it's a great way to test your staking
hardware. 32 ETH is a significant outlay and joining a testnet is a great way to "try before you buy".
<!--To join a testnet, for example the Goerli testnet, select `Goerli` when you are prompted to select the network in the `staking-deposit-cli` in Step 1, replace `--network mainnet` with `--network goerli` in Steps 2-4, and visit [Goerli staking launchpad](https://goerli.launchpad.ethereum.org/en/) to deposit testnet ETH in Step 5.-->
> **Never use real ETH to join a testnet!** Testnet such as the Goerli testnet uses Goerli ETH which is worthless. This allows experimentation without real-world costs.
> **Never use real ETH to join a testnet!** Testnets such as the Holesky testnet use Holesky ETH, which is worthless. This allows experimentation without real-world costs.
### Step 1. Create validator keys
@ -52,7 +49,7 @@ The Ethereum Foundation provides the [staking-deposit-cli](https://github.com/et
```bash
./deposit new-mnemonic
```
and follow the instructions to generate the keys. When prompted for a network, select `mainnet` if you want to run a mainnet validator, or select `goerli` if you want to run a Goerli testnet validator. A new mnemonic will be generated in the process.
and follow the instructions to generate the keys. When prompted for a network, select `mainnet` if you want to run a mainnet validator, or select `holesky` if you want to run a Holesky testnet validator. A new mnemonic will be generated in the process.
> **Important note:** A mnemonic (or seed phrase) is a 24-word string randomly generated in the process. It is highly recommended to write down the mnemonic and keep it safe offline. It is important to ensure that the mnemonic is never stored in any digital form (computers, mobile phones, etc) connected to the internet. Please also make one or more backups of the mnemonic to ensure your ETH is not lost in the case of data loss. It is very important to keep your mnemonic private as it represents the ultimate control of your ETH.
@ -75,9 +72,9 @@ Mainnet:
lighthouse --network mainnet account validator import --directory $HOME/staking-deposit-cli/validator_keys
```
Goerli testnet:
Holesky testnet:
```bash
lighthouse --network goerli account validator import --directory $HOME/staking-deposit-cli/validator_keys
lighthouse --network holesky account validator import --directory $HOME/staking-deposit-cli/validator_keys
```
> Note: The user must specify the consensus client network for which they are importing the keys by using the `--network` flag.
@ -137,9 +134,9 @@ Mainnet:
lighthouse vc --network mainnet --suggested-fee-recipient YourFeeRecipientAddress
```
Goerli testnet:
Holesky testnet:
```bash
lighthouse vc --network goerli --suggested-fee-recipient YourFeeRecipientAddress
lighthouse vc --network holesky --suggested-fee-recipient YourFeeRecipientAddress
```
The `validator client` manages validators using data obtained from the beacon node via a HTTP API. It is highly recommended that you set a fee recipient by changing `YourFeeRecipientAddress` to an Ethereum address under your control.
@ -157,7 +154,7 @@ by the protocol.
### Step 5: Submit deposit (32ETH per validator)
After you have successfully run and synced the execution client, beacon node and validator client, you can now proceed to submit the deposit. Go to the mainnet [Staking launchpad](https://launchpad.ethereum.org/en/) (or [Goerli staking launchpad](https://goerli.launchpad.ethereum.org/en/) for testnet validator) and carefully go through the steps to becoming a validator. Once you are ready, you can submit the deposit by sending 32ETH per validator to the deposit contract. Upload the `deposit_data-*.json` file generated in [Step 1](#step-1-create-validator-keys) to the Staking launchpad.
After you have successfully run and synced the execution client, beacon node and validator client, you can now proceed to submit the deposit. Go to the mainnet [Staking launchpad](https://launchpad.ethereum.org/en/) (or [Holesky staking launchpad](https://holesky.launchpad.ethereum.org/en/) for testnet validator) and carefully go through the steps to becoming a validator. Once you are ready, you can submit the deposit by sending 32ETH per validator to the deposit contract. Upload the `deposit_data-*.json` file generated in [Step 1](#step-1-create-validator-keys) to the Staking launchpad.
> **Important note:** Double check that the deposit contract for mainnet is `0x00000000219ab540356cBB839Cbe05303d7705Fa` before you confirm the transaction.

View File

@ -26,13 +26,13 @@ All networks (**Mainnet**, **Goerli (Prater)**, **Ropsten**, **Sepolia**, **Kiln
<div align="center">
| Network | Bellatrix | The Merge | Remark |
|-------------------|--------------------------------------------|----|----|
| Ropsten | 2<sup>nd</sup> June 2022 | 8<sup>th</sup> June 2022 | Deprecated
|---------|-------------------------------|-------------------------------| -----------|
| Ropsten | 2<sup>nd</sup> June 2022 | 8<sup>th</sup> June 2022 | Deprecated |
| Sepolia | 20<sup>th</sup> June 2022 | 6<sup>th</sup> July 2022 | |
| Goerli | 4<sup>th</sup> August 2022 | 10<sup>th</sup> August 2022 | Previously named `Prater`|
| Mainnet | 6<sup>th</sup> September 2022 | 15<sup>th</sup> September 2022 |
| Chiado | 10<sup>th</sup> October 2022 | 4<sup>th</sup> November 2022 |
| Gnosis| 30<sup>th</sup> November 2022 | 8<sup>th</sup> December 2022
| Mainnet | 6<sup>th</sup> September 2022| 15<sup>th</sup> September 2022| |
| Chiado | 10<sup>th</sup> October 2022 | 4<sup>th</sup> November 2022 | |
| Gnosis | 30<sup>th</sup> November 2022| 8<sup>th</sup> December 2022 | |
</div>

View File

@ -1,5 +1,7 @@
# Raspberry Pi 4 Installation
> Note: This page is left here for archival purposes. As the number of validators on mainnet has increased significantly, so have the hardware requirements (e.g., RAM). Running Ethereum mainnet on a Raspberry Pi 4 is no longer recommended.
Tested on:
- Raspberry Pi 4 Model B (4GB)

View File

@ -56,7 +56,7 @@ Notable flags:
- `--network` flag, which selects a network:
- `lighthouse` (no flag): Mainnet.
- `lighthouse --network mainnet`: Mainnet.
- `lighthouse --network goerli`: Goerli (testnet).
- `lighthouse --network holesky`: Holesky (testnet).
- `lighthouse --network sepolia`: Sepolia (testnet).
- `lighthouse --network chiado`: Chiado (testnet).
- `lighthouse --network gnosis`: Gnosis chain.

View File

@ -101,18 +101,6 @@ update the low watermarks for blocks and attestations. It will store only the ma
for each validator, and the maximum source/target attestation. This is faster than importing
all data while also being more resilient to repeated imports & stale data.
### Minification
The exporter can be configured to minify (shrink) the data it exports by keeping only the
maximum-slot and maximum-epoch messages. Provide the `--minify=true` flag:
```
lighthouse account validator slashing-protection export --minify=true <lighthouse_interchange.json>
```
This may make the file faster to import into other clients, but is unnecessary for Lighthouse to
Lighthouse transfers since v1.5.0.
## Troubleshooting
### Misplaced Slashing Database

View File

@ -182,6 +182,13 @@ lighthouse \
--validators 0x9096aab771e44da149bd7c9926d6f7bb96ef465c0eeb4918be5178cd23a1deb4aec232c61d85ff329b54ed4a3bdfff3a,0x90fc4f72d898a8f01ab71242e36f4545aaf87e3887be81632bb8ba4b2ae8fb70753a62f866344d7905e9a07f5a9cdda1
```
> Note: If you have the `validator-monitor-auto` flag turned on, the source beacon node may still report the attestation status of the validators that have been moved:
```
INFO Previous epoch attestation(s) success validators: ["validator_index"], epoch: 100000, service: val_mon, service: beacon
```
> This is fine as the validator monitor does not know that the validators have been moved (it *does not* mean that the validators have attested twice for the same slot). A restart of the beacon node will resolve this.
Any errors encountered during the operation should include information on how to
proceed. Assistance is also available on our
[Discord](https://discord.gg/cyAszAh).

View File

@ -96,4 +96,60 @@ Jan 18 11:21:09.808 INFO Attestation included in block validator: 1, s
The
[`ValidatorMonitor`](https://github.com/sigp/lighthouse-metrics/blob/master/dashboards/ValidatorMonitor.json)
dashboard contains all/most of the metrics exposed via the validator monitor.
dashboard contains most of the metrics exposed via the validator monitor.
### Attestation Simulator Metrics
Lighthouse v4.6.0 introduces a new feature to track the performance of a beacon node. This feature internally simulates an attestation for each slot, and outputs a hit or miss for the head, target and source votes. The attestation simulator is turned on automatically (even when there are no validators) and prints logs at the debug level.
> Note: The simulated attestations are never published to the network, so the simulator does not reflect the attestation performance of a validator.
The attestation simulator prints the following logs when simulating an attestation:
```
DEBG Simulating unagg. attestation production, service: beacon, module: beacon_chain::attestation_simulator:39
DEBG Produce unagg. attestation, attestation_target: 0x59fc…1a67, attestation_source: 0xc4c5…d414, service: beacon, module: beacon_chain::attestation_simulator:87
```
When the simulated attestation has completed, it prints a log that specifies if the head, target and source votes are hit. An example of a log when all head, target and source are hit:
```
DEBG Simulated attestation evaluated, head_hit: true, target_hit: true, source_hit: true, attestation_slot: Slot(1132616), attestation_head: 0x61367335c30b0f114582fe298724b75b56ae9372bdc6e7ce5d735db68efbdd5f, attestation_target: 0xaab25a6d01748cf4528e952666558317b35874074632550c37d935ca2ec63c23, attestation_source: 0x13ccbf8978896c43027013972427ee7ce02b2bb9b898dbb264b870df9288c1e7, service: val_mon, service: beacon, module: beacon_chain::validator_monitor:2051
```
An example of a log when the head is missed:
```
DEBG Simulated attestation evaluated, head_hit: false, target_hit: true, source_hit: true, attestation_slot: Slot(1132623), attestation_head: 0x1c0e53c6ace8d0ff57f4a963e4460fe1c030b37bf1c76f19e40928dc2e214c59, attestation_target: 0xaab25a6d01748cf4528e952666558317b35874074632550c37d935ca2ec63c23, attestation_source: 0x13ccbf8978896c43027013972427ee7ce02b2bb9b898dbb264b870df9288c1e7, service: val_mon, service: beacon, module: beacon_chain::validator_monitor:2051
```
With `--metrics` enabled on the beacon node, the following metrics will be recorded:
```
validator_monitor_attestation_simulator_head_attester_hit_total
validator_monitor_attestation_simulator_head_attester_miss_total
validator_monitor_attestation_simulator_target_attester_hit_total
validator_monitor_attestation_simulator_target_attester_miss_total
validator_monitor_attestation_simulator_source_attester_hit_total
validator_monitor_attestation_simulator_source_attester_miss_total
```
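To verify that these counters are being exposed, the metrics endpoint can be scraped directly (a sketch, assuming the beacon node metrics server is listening on its default port 5054):
```bash
curl -s http://localhost:5054/metrics | grep attestation_simulator
```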
A grafana dashboard to view the metrics for attestation simulator is available [here](https://github.com/sigp/lighthouse-metrics/blob/master/dashboards/AttestationSimulator.json).
The attestation simulator provides an insight into the attestation performance of a beacon node. It can be used as an indication of how quickly the beacon node completes importing blocks within the 4-second window in which an attestation should be made.
The attestation simulator *does not* consider:
- the latency between the beacon node and the validator client
- the potential delays when publishing the attestation to the network
which are critical factors to consider when evaluating the attestation performance of a validator.
Assuming the above factors are ignored (no delays between beacon node and validator client, and in publishing the attestation to the network):
1. If the attestation simulator says that all votes are hit, it means that if the beacon node were to publish the attestation for this slot, the validator should receive the rewards for the head, target and source votes.
1. If the attestation simulator says that one or more votes are missed, it means that there is a delay in importing the block. The delay could be due to slowness in processing the block (e.g., due to a slow CPU) or the block arriving late (e.g., the proposer publishes the block late). If the beacon node were to publish the attestation for this slot, the validator would miss one or more votes (e.g., the head vote).

View File

@ -30,13 +30,13 @@ The exit phrase is the following:
Below is an example for initiating a voluntary exit on the Goerli testnet.
Below is an example for initiating a voluntary exit on the Holesky testnet.
```
$ lighthouse --network goerli account validator exit --keystore /path/to/keystore --beacon-node http://localhost:5052
$ lighthouse --network holesky account validator exit --keystore /path/to/keystore --beacon-node http://localhost:5052
Running account manager for holesky network
validator-dir path: ~/.lighthouse/goerli/validators
validator-dir path: ~/.lighthouse/holesky/validators
Enter the keystore password for validator in 0xabcd