update docsgen

This commit is contained in:
Łukasz Magiera 2024-02-21 17:00:30 +01:00
parent 2eaf34bc92
commit a586982c2a
3 changed files with 173 additions and 67 deletions


@@ -10,15 +10,15 @@ VERSION:
1.25.3-dev
COMMANDS:
cli Execute cli commands
run Start a lotus provider process
stop Stop a running lotus provider
config Manage node config by layers. The layer 'base' will always be applied.
test Utility functions for testing
web Start lotus provider web interface
pipeline Manage the sealing pipeline
version Print version
help, h Shows a list of commands or help for one command
cli Execute cli commands
run Start a lotus provider process
stop Stop a running lotus provider
config Manage node config by layers. The layer 'base' will always be applied.
test Utility functions for testing
web Start lotus provider web interface
seal Manage the sealing pipeline
version Print version
help, h Shows a list of commands or help for one command
DEVELOPER:
auth Manage RPC permissions
log Manage logging
@@ -26,16 +26,16 @@ COMMANDS:
fetch-params Fetch proving parameters
GLOBAL OPTIONS:
--color use color in display output (default: depends on output being a TTY)
--db-host value Comma separated list of hostnames for yugabyte cluster (default: "yugabyte") [$LOTUS_DB_HOST]
--db-name value (default: "yugabyte") [$LOTUS_DB_NAME, $LOTUS_HARMONYDB_HOSTS]
--db-user value (default: "yugabyte") [$LOTUS_DB_USER, $LOTUS_HARMONYDB_USERNAME]
--db-password value (default: "yugabyte") [$LOTUS_DB_PASSWORD, $LOTUS_HARMONYDB_PASSWORD]
--layers value (default: "base") [$LOTUS_LAYERS, $LOTUS_CONFIG_LAYERS]
--repo-path value (default: "~/.lotusprovider") [$LOTUS_REPO_PATH]
--vv enables very verbose mode, useful for debugging the CLI (default: false)
--help, -h show help
--version, -v print the version
--color use color in display output (default: depends on output being a TTY)
--db-host value Comma separated list of hostnames for yugabyte cluster (default: "yugabyte") [$LOTUS_DB_HOST]
--db-name value (default: "yugabyte") [$LOTUS_DB_NAME, $LOTUS_HARMONYDB_HOSTS]
--db-user value (default: "yugabyte") [$LOTUS_DB_USER, $LOTUS_HARMONYDB_USERNAME]
--db-password value (default: "yugabyte") [$LOTUS_DB_PASSWORD, $LOTUS_HARMONYDB_PASSWORD]
--layers value [ --layers value ] list of layers to be interpreted (atop defaults). Default: base (default: "base") [$CURIO_LAYERS]
--repo-path value (default: "~/.lotusprovider") [$LOTUS_REPO_PATH]
--vv enables very verbose mode, useful for debugging the CLI (default: false)
--help, -h show help
--version, -v print the version
```
## lotus-provider cli
@@ -51,7 +51,7 @@ COMMANDS:
help, h Shows a list of commands or help for one command
OPTIONS:
--machine value machine host:port
--machine value machine host:port (lotus-provider run --listen address)
--help, -h show help
```
@@ -68,13 +68,12 @@ USAGE:
lotus-provider run [command options] [arguments...]
OPTIONS:
--listen value host address and port the worker api will listen on (default: "0.0.0.0:12300") [$LOTUS_WORKER_LISTEN]
--nosync don't check full-node sync status (default: false)
--manage-fdlimit manage open file limit (default: true)
--layers value [ --layers value ] list of layers to be interpreted (atop defaults). Default: base (default: "base")
--storage-json value path to json file containing storage config (default: "~/.lotus-provider/storage.json")
--journal value path to journal files (default: "~/.lotus-provider/")
--help, -h show help
--listen value host address and port the worker api will listen on (default: "0.0.0.0:12300") [$LOTUS_WORKER_LISTEN]
--nosync don't check full-node sync status (default: false)
--manage-fdlimit manage open file limit (default: true)
--storage-json value path to json file containing storage config (default: "~/.lotus-provider/storage.json")
--journal value path to journal files (default: "~/.lotus-provider/")
--help, -h show help
```
## lotus-provider stop
@@ -217,7 +216,7 @@ DESCRIPTION:
Express a database config (for lotus-provider) from an existing miner.
OPTIONS:
--miner-repo value, --storagerepo value Specify miner repo path. flag(storagerepo) and env(LOTUS_STORAGE_PATH) are DEPRECATION, will REMOVE SOON (default: "~/.lotusminer") [$LOTUS_MINER_PATH, $LOTUS_STORAGE_PATH]
--miner-repo value, --storagerepo value Miner repo path (default: "~/.lotusminer") [$LOTUS_MINER_PATH, $LOTUS_STORAGE_PATH]
--to-layer value, -t value The layer name for this data push. 'base' is recommended for single-miner setup.
--overwrite, -o Use this with --to-layer to replace an existing layer (default: false)
--help, -h show help
@@ -297,9 +296,8 @@ USAGE:
lotus-provider test window-post task [command options] [arguments...]
OPTIONS:
--deadline value deadline to compute WindowPoSt for (default: 0)
--layers value [ --layers value ] list of layers to be interpreted (atop defaults). Default: base (default: "base")
--help, -h show help
--deadline value deadline to compute WindowPoSt for (default: 0)
--help, -h show help
```
## lotus-provider web
@@ -315,19 +313,18 @@ DESCRIPTION:
This creates the 'web' layer if it does not exist, then calls run with that layer.
OPTIONS:
--listen value Address to listen on (default: "127.0.0.1:4701")
--layers value [ --layers value ] list of layers to be interpreted (atop defaults). Default: base. Web will be added (default: "base")
--nosync don't check full-node sync status (default: false)
--help, -h show help
--listen value Address to listen on (default: "127.0.0.1:4701")
--nosync don't check full-node sync status (default: false)
--help, -h show help
```
## lotus-provider pipeline
## lotus-provider seal
```
NAME:
lotus-provider pipeline - Manage the sealing pipeline
lotus-provider seal - Manage the sealing pipeline
USAGE:
lotus-provider pipeline command [command options] [arguments...]
lotus-provider seal command [command options] [arguments...]
COMMANDS:
start Start new sealing operations manually
@@ -337,22 +334,21 @@ OPTIONS:
--help, -h show help
```
### lotus-provider pipeline start
### lotus-provider seal start
```
NAME:
lotus-provider pipeline start - Start new sealing operations manually
lotus-provider seal start - Start new sealing operations manually
USAGE:
lotus-provider pipeline start [command options] [arguments...]
lotus-provider seal start [command options] [arguments...]
OPTIONS:
--actor value Specify actor address to start sealing sectors for
--now Start sealing sectors for all actors now (not on schedule) (default: false)
--cc Start sealing new CC sectors (default: false)
--count value Number of sectors to start (default: 1)
--synthetic Use synthetic PoRep (default: false)
--layers value [ --layers value ] list of layers to be interpreted (atop defaults). Default: base (default: "base")
--help, -h show help
--actor value Specify actor address to start sealing sectors for
--now Start sealing sectors for all actors now (not on schedule) (default: false)
--cc Start sealing new CC sectors (default: false)
--count value Number of sectors to start (default: 1)
--synthetic Use synthetic PoRep (default: false)
--help, -h show help
```
## lotus-provider version


@@ -1,10 +1,24 @@
[Subsystems]
# EnableWindowPost enables window post to be executed on this lotus-provider instance. Each machine in the cluster
# with WindowPoSt enabled will also participate in the window post scheduler. It is possible to have multiple
# machines with WindowPoSt enabled which will provide redundancy, and in case of multiple partitions per deadline,
# will allow for parallel processing of partitions.
#
# It is possible to have instances handling both WindowPoSt and WinningPoSt, which can provide redundancy without
# the need for additional machines. In setups like this it is generally recommended to run
# partitionsPerDeadline+1 machines.
#
# type: bool
#EnableWindowPost = false
# type: int
#WindowPostMaxTasks = 0
# EnableWinningPost enables winning post to be executed on this lotus-provider instance.
# Each machine in the cluster with WinningPoSt enabled will also participate in the winning post scheduler.
# It is possible to mix machines with WindowPoSt and WinningPoSt enabled, for details see the EnableWindowPost
# documentation.
#
# type: bool
#EnableWinningPost = false
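As a sketch of how the PoSt options above combine, a hypothetical config layer for a machine handling both schedulers might look like the fragment below; the layer itself is an illustration, only the option names come from the defaults above:

```toml
# Hypothetical PoSt layer: enables both schedulers on one machine,
# which (per the comments above) provides redundancy without extra hardware.
[Subsystems]
  EnableWindowPost = true
  EnableWinningPost = true
```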
@@ -12,18 +26,39 @@
#WinningPostMaxTasks = 0
# EnableSealSDR enables SDR tasks to run. SDR is the long sequential computation
# creating layers.
# creating 11 layer files in sector cache directory.
#
# SDR is the first task in the sealing pipeline. Its inputs are just the hash of the
# unsealed data (CommD), sector number, miner id, and the seal proof type.
# Its outputs are the 11 layer files in the sector cache directory.
#
# In lotus-miner this was run as part of PreCommit1.
#
# type: bool
#EnableSealSDR = false
# The maximum amount of SDR tasks that can run simultaneously. Note that the maximum number of tasks will
# also be bounded by resources available on the machine.
#
# type: int
#SealSDRMaxTasks = 0
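For illustration, a hypothetical layer for a CPU-heavy node dedicated to SDR might set these two options; the cap value is an assumption, not a recommendation from the source:

```toml
# Hypothetical SDR layer: run the long sequential layer computation here.
[Subsystems]
  EnableSealSDR = true
  SealSDRMaxTasks = 4  # illustrative cap; actual limit is also bounded by machine resources
```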
# EnableSealSDRTrees enables the SDR pipeline tree-building task to run.
# This task handles encoding of unsealed data into last sdr layer and building
# of TreeR, TreeC and TreeD.
#
# This task runs after SDR
# TreeD is first computed with optional input of unsealed data
# TreeR is computed from replica, which is first computed as field
# addition of the last SDR layer and the bottom layer of TreeD (which is the unsealed data)
# TreeC is computed from the 11 SDR layers
# The 3 trees will later be used to compute the PoRep proof.
#
# In case of SyntheticPoRep challenges for PoRep will be pre-generated at this step, and trees and layers
# will be dropped. SyntheticPoRep works by pre-generating a very large set of challenges (~30GiB on disk)
# then using a small subset of them for the actual PoRep computation. This allows for significant scratch space
# saving between PreCommit and PoRep generation at the expense of more computation (generating challenges in this step).
#
# In lotus-miner this was run as part of PreCommit2 (TreeD was run in PreCommit1).
# Note that nodes with SDRTrees enabled will also answer to Finalize tasks,
# which just remove unneeded tree data after PoRep is computed.
@@ -31,24 +66,42 @@
# type: bool
#EnableSealSDRTrees = false
# The maximum amount of SealSDRTrees tasks that can run simultaneously. Note that the maximum number of tasks will
# also be bounded by resources available on the machine.
#
# type: int
#SealSDRTreesMaxTasks = 0
# FinalizeMaxTasks is the maximum amount of finalize tasks that can run simultaneously.
# The finalize task is enabled on all machines which also handle SDRTrees tasks. Finalize ALWAYS runs on whichever
# machine holds sector cache files, as it removes unneeded tree data after PoRep is computed.
# Finalize will run in parallel with the SubmitCommitMsg task.
#
# type: int
#FinalizeMaxTasks = 0
# EnableSendPrecommitMsg enables the sending of precommit messages to the chain
# from this lotus-provider instance.
# This runs after SDRTrees and uses the output CommD / CommR (roots of TreeD / TreeR) for the message
#
# type: bool
#EnableSendPrecommitMsg = false
# EnableSendCommitMsg enables the computation of the porep proof
# EnablePoRepProof enables the computation of the porep proof
#
# This task runs after interactive-porep seed becomes available, which happens 150 epochs (75min) after the
# precommit message lands on chain. This task should run on a machine with a GPU. Vanilla PoRep proofs are
# requested from the machine which holds sector cache files which most likely is the machine which ran the SDRTrees
# task.
#
# In lotus-miner this was Commit1 / Commit2
#
# type: bool
#EnablePoRepProof = false
# The maximum amount of PoRepProof tasks that can run simultaneously. Note that the maximum number of tasks will
# also be bounded by resources available on the machine.
#
# type: int
#PoRepProofMaxTasks = 0
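The "150 epochs (75min)" above follows from Filecoin's 30-second epochs: 150 × 30 s = 4500 s = 75 min. A hypothetical layer for a GPU machine dedicated to PoRep proving might look like this; the values are illustrative, only the option names come from the defaults above:

```toml
# Hypothetical PoRep-prover layer: proofs run after the interactive-porep
# seed is available and should be computed on a machine with a GPU.
[Subsystems]
  EnablePoRepProof = true
  PoRepProofMaxTasks = 2  # illustrative; each proof is GPU-heavy
```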
@@ -58,16 +111,25 @@
# type: bool
#EnableSendCommitMsg = false
# EnableMoveStorage enables the move-into-long-term-storage task to run
# on this lotus-provider instance. This task should only be enabled on
# nodes with long-term storage.
# EnableMoveStorage enables the move-into-long-term-storage task to run on this lotus-provider instance.
# This task should only be enabled on nodes with long-term storage.
#
# The MoveStorage task is the last task in the sealing pipeline. It moves the sealed sector data from the
# SDRTrees machine into long-term storage. This task runs after the Finalize task.
#
# type: bool
#EnableMoveStorage = false
# The maximum amount of MoveStorage tasks that can run simultaneously. Note that the maximum number of tasks will
# also be bounded by resources available on the machine. It is recommended that this value is set to a number which
# uses all available network (or disk) bandwidth on the machine without causing bottlenecks.
#
# type: int
#MoveStorageMaxTasks = 0
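A hypothetical layer for a node attached to long-term storage, per the MoveStorage options above, might be the fragment below; the concurrency value is an assumption to be tuned against available bandwidth:

```toml
# Hypothetical storage layer: runs the final move-into-long-term-storage step.
[Subsystems]
  EnableMoveStorage = true
  MoveStorageMaxTasks = 8  # illustrative; aim to use, not exceed, disk/network bandwidth
```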
# EnableWebGui enables the web GUI on this lotus-provider instance. The UI has minimal local overhead, but it should
# only need to be run on a single machine in the cluster.
#
# type: bool
#EnableWebGui = false
@@ -124,6 +186,8 @@
#DisableWorkerFallback = false
#MinerAddresses = []
[Proving]
# Maximum number of sector checks to run in parallel. (0 = unlimited)


@@ -987,7 +987,14 @@ block rewards will be missed!`,
Name: "EnableWindowPost",
Type: "bool",
Comment: ``,
Comment: `EnableWindowPost enables window post to be executed on this lotus-provider instance. Each machine in the cluster
with WindowPoSt enabled will also participate in the window post scheduler. It is possible to have multiple
machines with WindowPoSt enabled which will provide redundancy, and in case of multiple partitions per deadline,
will allow for parallel processing of partitions.
It is possible to have instances handling both WindowPoSt and WinningPoSt, which can provide redundancy without
the need for additional machines. In setups like this it is generally recommended to run
partitionsPerDeadline+1 machines.`,
},
{
Name: "WindowPostMaxTasks",
@@ -999,7 +1006,10 @@ block rewards will be missed!`,
Name: "EnableWinningPost",
Type: "bool",
Comment: ``,
Comment: `EnableWinningPost enables winning post to be executed on this lotus-provider instance.
Each machine in the cluster with WinningPoSt enabled will also participate in the winning post scheduler.
It is possible to mix machines with WindowPoSt and WinningPoSt enabled, for details see the EnableWindowPost
documentation.`,
},
{
Name: "WinningPostMaxTasks",
@@ -1012,14 +1022,20 @@ block rewards will be missed!`,
Type: "bool",
Comment: `EnableSealSDR enables SDR tasks to run. SDR is the long sequential computation
creating layers.
creating 11 layer files in sector cache directory.
SDR is the first task in the sealing pipeline. Its inputs are just the hash of the
unsealed data (CommD), sector number, miner id, and the seal proof type.
Its outputs are the 11 layer files in the sector cache directory.
In lotus-miner this was run as part of PreCommit1.`,
},
{
Name: "SealSDRMaxTasks",
Type: "int",
Comment: ``,
Comment: `The maximum amount of SDR tasks that can run simultaneously. Note that the maximum number of tasks will
also be bounded by resources available on the machine.`,
},
{
Name: "EnableSealSDRTrees",
@@ -1028,6 +1044,19 @@ In lotus-miner this was run as part of PreCommit1.`,
Comment: `EnableSealSDRTrees enables the SDR pipeline tree-building task to run.
This task handles encoding of unsealed data into last sdr layer and building
of TreeR, TreeC and TreeD.
This task runs after SDR
TreeD is first computed with optional input of unsealed data
TreeR is computed from replica, which is first computed as field
addition of the last SDR layer and the bottom layer of TreeD (which is the unsealed data)
TreeC is computed from the 11 SDR layers
The 3 trees will later be used to compute the PoRep proof.
In case of SyntheticPoRep challenges for PoRep will be pre-generated at this step, and trees and layers
will be dropped. SyntheticPoRep works by pre-generating a very large set of challenges (~30GiB on disk)
then using a small subset of them for the actual PoRep computation. This allows for significant scratch space
saving between PreCommit and PoRep generation at the expense of more computation (generating challenges in this step).
In lotus-miner this was run as part of PreCommit2 (TreeD was run in PreCommit1).
Note that nodes with SDRTrees enabled will also answer to Finalize tasks,
which just remove unneeded tree data after PoRep is computed.`,
@@ -1036,33 +1065,45 @@ which just remove unneeded tree data after PoRep is computed.`,
Name: "SealSDRTreesMaxTasks",
Type: "int",
Comment: ``,
Comment: `The maximum amount of SealSDRTrees tasks that can run simultaneously. Note that the maximum number of tasks will
also be bounded by resources available on the machine.`,
},
{
Name: "FinalizeMaxTasks",
Type: "int",
Comment: ``,
Comment: `FinalizeMaxTasks is the maximum amount of finalize tasks that can run simultaneously.
The finalize task is enabled on all machines which also handle SDRTrees tasks. Finalize ALWAYS runs on whichever
machine holds sector cache files, as it removes unneeded tree data after PoRep is computed.
Finalize will run in parallel with the SubmitCommitMsg task.`,
},
{
Name: "EnableSendPrecommitMsg",
Type: "bool",
Comment: `EnableSendPrecommitMsg enables the sending of precommit messages to the chain
from this lotus-provider instance.`,
from this lotus-provider instance.
This runs after SDRTrees and uses the output CommD / CommR (roots of TreeD / TreeR) for the message`,
},
{
Name: "EnablePoRepProof",
Type: "bool",
Comment: `EnableSendCommitMsg enables the computation of the porep proof
Comment: `EnablePoRepProof enables the computation of the porep proof
This task runs after interactive-porep seed becomes available, which happens 150 epochs (75min) after the
precommit message lands on chain. This task should run on a machine with a GPU. Vanilla PoRep proofs are
requested from the machine which holds sector cache files which most likely is the machine which ran the SDRTrees
task.
In lotus-miner this was Commit1 / Commit2`,
},
{
Name: "PoRepProofMaxTasks",
Type: "int",
Comment: ``,
Comment: `The maximum amount of PoRepProof tasks that can run simultaneously. Note that the maximum number of tasks will
also be bounded by resources available on the machine.`,
},
{
Name: "EnableSendCommitMsg",
@@ -1075,21 +1116,26 @@ from this lotus-provider instance.`,
Name: "EnableMoveStorage",
Type: "bool",
Comment: `EnableMoveStorage enables the move-into-long-term-storage task to run
on this lotus-provider instance. This task should only be enabled on
nodes with long-term storage.`,
Comment: `EnableMoveStorage enables the move-into-long-term-storage task to run on this lotus-provider instance.
This task should only be enabled on nodes with long-term storage.
The MoveStorage task is the last task in the sealing pipeline. It moves the sealed sector data from the
SDRTrees machine into long-term storage. This task runs after the Finalize task.`,
},
{
Name: "MoveStorageMaxTasks",
Type: "int",
Comment: ``,
Comment: `The maximum amount of MoveStorage tasks that can run simultaneously. Note that the maximum number of tasks will
also be bounded by resources available on the machine. It is recommended that this value is set to a number which
uses all available network (or disk) bandwidth on the machine without causing bottlenecks.`,
},
{
Name: "EnableWebGui",
Type: "bool",
Comment: ``,
Comment: `EnableWebGui enables the web GUI on this lotus-provider instance. The UI has minimal local overhead, but it should
only need to be run on a single machine in the cluster.`,
},
{
Name: "GuiAddress",