Merge branch 'master' into sbansal/nonce-coordination-and-consensus-for-chain-nodes

This commit is contained in:
Shrenuj Bansal 2022-09-27 16:29:03 +00:00
commit 559c2c6d34
174 changed files with 5055 additions and 1220 deletions

View File

@ -924,6 +924,11 @@ workflows:
suite: itest-mempool
target: "./itests/mempool_test.go"
- test:
name: test-itest-mpool_msg_uuid
suite: itest-mpool_msg_uuid
target: "./itests/mpool_msg_uuid_test.go"
- test:
name: test-itest-mpool_push_with_uuid
suite: itest-mpool_push_with_uuid
@ -979,6 +984,16 @@ workflows:
suite: itest-sector_finalize_early
target: "./itests/sector_finalize_early_test.go"
- test:
name: test-itest-sector_import_full
suite: itest-sector_import_full
target: "./itests/sector_import_full_test.go"
- test:
name: test-itest-sector_import_simple
suite: itest-sector_import_simple
target: "./itests/sector_import_simple_test.go"
- test:
name: test-itest-sector_make_cc_avail
suite: itest-sector_make_cc_avail
@ -1124,12 +1139,15 @@ workflows:
- /.*/
tags:
only:
- /^v\d+\.\d+\.\d+(-rc\d+)?$/
- /^v\d+\.\d+\.\d+$/
- build-macos:
filters:
branches:
only:
- /^release\/v\d+\.\d+\.\d+(-rc\d+)?$/
tags:
only:
- /^v\d+\.\d+\.\d+-rc\d+$/
- build-appimage:
filters:
branches:
@ -1260,4 +1278,4 @@ workflows:
only:
- master
jobs:
- publish-packer-snap
- publish-packer-snap

View File

@ -854,12 +854,15 @@ workflows:
- /.*/
tags:
only:
- /^v\d+\.\d+\.\d+(-rc\d+)?$/
- /^v\d+\.\d+\.\d+$/
- build-macos:
filters:
branches:
only:
- /^release\/v\d+\.\d+\.\d+(-rc\d+)?$/
tags:
only:
- /^v\d+\.\d+\.\d+-rc\d+$/
- build-appimage:
filters:
branches:
@ -990,4 +993,4 @@ workflows:
only:
- master
jobs:
- publish-packer-snap
- publish-packer-snap

View File

@ -81,7 +81,7 @@ body:
render: text
description: |
Please provide debug logs of the problem. Remember that you can set the log level for:
* lotus: use `lotus log list` to get all log systems available and set level by `lotus log set-level`. An example can be found [here](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/#log-level-control).
* lotus: use `lotus log list` to get all log systems available and set level by `lotus log set-level`. An example can be found [here](https://lotus.filecoin.io/lotus/configure/defaults/#log-level-control).
* lotus-miner: use `lotus-miner log list` to get all log systems available and set level by `lotus-miner log set-level`.
If you don't provide detailed logs when you raise the issue, it will almost certainly be the first request I make before further diagnosing the problem.
validations:

View File

@ -103,6 +103,8 @@ Contributors
| Aloxaf | 1 | +2/-2 | 1 |
# Lotus changelog
# v1.17.0 / 2022-08-02
This is an optional release of Lotus. This feature release introduces many new sealing and scheduler improvements, along with other new functionality and bug fixes.
@ -1439,7 +1441,7 @@ storage providers and clients.
## Highlights
- 🌟🌟🌟 Introduce Dagstore and CARv2 for deal-making (#6671) ([filecoin-project/lotus#6671](https://github.com/filecoin-project/lotus/pull/6671))
- **[lotus miner markets' Dagstore](https://docs.filecoin.io/mine/lotus/dagstore/#conceptual-overview)** is a
- **[lotus miner markets' Dagstore](https://lotus.filecoin.io/storage-providers/operate/dagstore/)** is a
component of the `markets` subsystem in lotus-miner. It is a sharded store to hold large IPLD graphs efficiently,
packaged as location-transparent attachable CAR files and it replaces the former Badger staging blockstore. It
is designed to provide high efficiency and throughput, and minimize resource utilization during deal-making operations.
@ -1447,18 +1449,18 @@ storage providers and clients.
blockstores, which are served as the direct medium for data exchanges in markets for both storage and retrieval
deal making without requiring intermediate buffers.
- In the future, lotus will leverage and interact with Dagstore a lot for new features and improvements for deal
making; therefore, lotus users are highly recommended to go through [Lotus Miner: About the markets dagstore](https://docs.filecoin.io/mine/lotus/dagstore/#conceptual-overview) thoroughly to learn more about Dagstore's
making; therefore, lotus users are highly recommended to go through [Lotus Miner: About the markets dagstore](https://lotus.filecoin.io/storage-providers/operate/dagstore/) thoroughly to learn more about Dagstore's
conceptual overview, terminology, directory structure, configuration and so on.
- **Note**:
- When you first start your lotus-miner or market subsystem with this release, a one-time/first-time **dagstore migration** will be triggered which replaces the former Badger staging blockstore with dagstore. We highly
recommend that storage providers read this [section](https://docs.filecoin.io/mine/lotus/dagstore/#first-time-migration) to learn more about
recommend that storage providers read this [section](https://lotus.filecoin.io/storage-providers/operate/dagstore/#first-time-migration) to learn more about
what the process does, what to expect and how to monitor it.
- It is highly recommended to **wait for all ongoing data transfers to finish, or cancel inbound storage deals that
are still transferring**, using the `lotus-miner data-transfers cancel` command before upgrading your market nodes. This is because the new dagstore changes attributes in the internal deal state objects, and the paths to the staging CARs where the deal data was being placed will be lost.
- ‼Having your dags initialized will become important in the near future for you to provide a better storage
and retrieval service. We suggest you start [forced bulk initialization] soon if possible, as this process
places a relatively high I/O workload on your storage system and is better carried out gradually and over a
longer timeframe. Read how to properly perform a forced bulk initialization [here](https://docs.filecoin.io/mine/lotus/dagstore/#forcing-bulk-initialization).
longer timeframe. Read how to properly perform a forced bulk initialization [here](https://lotus.filecoin.io/storage-providers/operate/dagstore/#forcing-bulk-initialization).
- ⏮ Rollback Alert (from v1.11.2-rcX to any lower version): If a storage deal is initiated with the M1/v1.11.2(-rcX)
release, it needs to get to the `StorageDealAwaitingPrecommit` state before you can do a version rollback, or the markets process may panic.
- 💙 **Special thanks to [MinerX fellows for testing and providing valuable feedback](https://github.com/filecoin-project/lotus/discussions/6852) for Dagstore in the past month!**
@ -1588,8 +1590,8 @@ Contributors
This is a **highly recommended** but optional Lotus v1.11.1 release that introduces many deal making and datastore improvements and new features along with other bug fixes.
## Highlights
- ⭐️⭐️⭐️[**lotus-miner market subsystem**](https://docs.filecoin.io/mine/lotus/split-markets-miners/#frontmatter-title) is introduced in this release! It is **highly recommended** for storage providers to run markets processes on a separate machine! Doing so, only this machine needs to expose public ports for deal making. This also means that the other miner operations can now be completely isolated from the deal making processes, and storage providers can stop and restart the markets process without affecting an ongoing Winning/Window PoSt!
- More details on the concepts, architecture and how to split the market process can be found [here](https://docs.filecoin.io/mine/lotus/split-markets-miners/#concepts).
- ⭐️⭐️⭐️[**lotus-miner market subsystem**](https://lotus.filecoin.io/storage-providers/advanced-configurations/split-markets-miners/) is introduced in this release! It is **highly recommended** for storage providers to run markets processes on a separate machine! Doing so, only this machine needs to expose public ports for deal making. This also means that the other miner operations can now be completely isolated from the deal making processes, and storage providers can stop and restart the markets process without affecting an ongoing Winning/Window PoSt!
- More details on the concepts, architecture and how to split the market process can be found [here](https://lotus.filecoin.io/storage-providers/advanced-configurations/split-markets-miners/#concepts).
- Based on your system setup (running on separate machines, the same machine, and so on), please see the suggested practices from community members [here](https://github.com/filecoin-project/lotus/discussions/7047#discussion-3515335).
- Note: if you are running lotus-worker on a different machine, you will need to set `MARKETS_API_INFO` for certain CLI commands to work properly. This will be improved by #7072.
- Huge thanks to MinerX fellows for [helping test the implementation, reporting issues so they could be fixed, and providing feedback](https://github.com/filecoin-project/lotus/discussions/6861) on user docs in the past three weeks!
@ -1599,7 +1601,7 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
- `AvailableBalanceBuffer`: minimum available balance to keep in the miner actor before sending it with messages, default is 0FIL.
- `DisableCollateralFallback`: whether to send collateral with messages even if there is no available balance in the miner actor, default is `false`.
- Config for deal publishing control addresses ([filecoin-project/lotus#6697](https://github.com/filecoin-project/lotus/pull/6697))
- Set `DealPublishControl` to set the wallet used for sending `PublishStorageDeals` messages, instructions [here](https://docs.filecoin.io/mine/lotus/miner-addresses/#control-addresses).
- Set `DealPublishControl` to set the wallet used for sending `PublishStorageDeals` messages, instructions [here](https://lotus.filecoin.io/storage-providers/operate/addresses/#control-addresses).
- Config UX improvements ([filecoin-project/lotus#6848](https://github.com/filecoin-project/lotus/pull/6848))
- You can now preview the default and updated node config by running `lotus/lotus-miner config default/updated`
@ -2028,7 +2030,7 @@ Note that this release is built on top of Lotus v1.9.0. Enterprising users can u
FIPs [0008](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0008.md) and [0013](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0013.md) combine to allow for a significant increase in the rate of onboarding storage on the Filecoin network. This aims to lead to more useful data being stored on the network, reduced network congestion, and a lower network base fee.
**Check out the documentation [here](https://docs.filecoin.io/mine/lotus/miner-configuration/#precommitsectorsbatch) for details on the new Lotus miner sealing config options, [here](https://docs.filecoin.io/mine/lotus/miner-configuration/#fees-section) for fee config options, and explanations of the new features.**
**Check out the documentation [here](https://lotus.filecoin.io/storage-providers/advanced-configurations/sealing/#precommitsectorsbatch) for details on the new Lotus miner sealing config options, [here](https://lotus.filecoin.io/storage-providers/setup/configuration/#fees-section) for fee config options, and explanations of the new features.**
Note:
- We recommend keeping `PreCommitSectorsBatch` at 1.
@ -2044,7 +2046,7 @@ Given these assumptions:
- We'd expect a network storage growth rate of around 530PiB per day. 😳 🎉 🥳 😅
- We'd expect network bandwidth dedicated to `SubmitWindowedPoSt` to grow by about 0.02% per day.
- We'd expect the [state-tree](https://spec.filecoin.io/#section-systems.filecoin_vm.state_tree) (and therefore [snapshot](https://docs.filecoin.io/get-started/lotus/chain/#lightweight-snapshot)) size to grow by 1.16GiB per day.
- We'd expect the [state-tree](https://spec.filecoin.io/#section-systems.filecoin_vm.state_tree) (and therefore [snapshot](https://lotus.filecoin.io/lotus/manage/chain-management/#lightweight-snapshot)) size to grow by 1.16GiB per day.
- Nearly all of the state-tree growth is expected to come from new sector metadata.
- We'd expect the daily lotus datastore growth rate to increase by about 10-15% (from current ~21GiB/day).
- Most "growth" of the lotus datastore is due to "churn", historical data that's no longer referenced by the latest state-tree.
@ -2065,7 +2067,7 @@ Included in the HyperDrive upgrade is [FIP-0015](https://github.com/filecoin-pro
- Implement FIP-0015 ([filecoin-project/lotus#6361](https://github.com/filecoin-project/lotus/pull/6361))
- Integrate FIP0013 and FIP0008 ([filecoin-project/lotus#6235](https://github.com/filecoin-project/lotus/pull/6235))
- [Configuration docs and cli examples](https://docs.filecoin.io/mine/lotus/miner-configuration/#precommitsectorsbatch)
- [Configuration docs and cli examples](https://lotus.filecoin.io/storage-providers/advanced-configurations/sealing/#precommitsectorsbatch)
- [cli docs](https://github.com/filecoin-project/lotus/blob/master/documentation/en/cli-lotus-miner.md#lotus-miner-sectors-batching)
- Introduce gas prices for aggregate verifications ([filecoin-project/lotus#6347](https://github.com/filecoin-project/lotus/pull/6347))
- Introduce v5 actors ([filecoin-project/lotus#6195](https://github.com/filecoin-project/lotus/pull/6195))
@ -2383,7 +2385,7 @@ Note that this release does NOT set an upgrade epoch for v3 actors to take effec
- [#5309](https://github.com/filecoin-project/lotus/pull/5309) Batch multiple deals in one `PublishStorageMessages`
- [#5411](https://github.com/filecoin-project/lotus/pull/5411) Handle batch `PublishStorageDeals` message in sealing recovery
- [#5505](https://github.com/filecoin-project/lotus/pull/5505) Exclude expired deals from batching in `PublishStorageDeals` messages
- Added `PublishMsgPeriod` and `MaxDealsPerPublishMsg` to miner `Dealmaking` [configuration](https://docs.filecoin.io/mine/lotus/miner-configuration/#dealmaking-section). See how they work [here](https://docs.filecoin.io/mine/lotus/miner-configuration/#publishing-several-deals-in-one-message).
- Added `PublishMsgPeriod` and `MaxDealsPerPublishMsg` to miner `Dealmaking` [configuration](https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#dealmaking-section). See how they work [here](https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#publishing-several-deals-in-one-message).
- [#5538](https://github.com/filecoin-project/lotus/pull/5538), [#5549](https://github.com/filecoin-project/lotus/pull/5549) Added a command to list pending deals and force publish messages.
- Run `lotus-miner market pending-publish`
- [#5428](https://github.com/filecoin-project/lotus/pull/5428) Moved waiting for `PublishStorageDeals` messages' receipt from markets to lotus

View File

@ -1,5 +1,5 @@
<p align="center">
<a href="https://docs.filecoin.io/" title="Filecoin Docs">
<a href="https://lotus.filecoin.io/" title="Filecoin Docs">
<img src="documentation/images/lotus_logo_h.png" alt="Project Lotus Logo" width="244" />
</a>
</p>
@ -10,7 +10,7 @@
<a href="https://circleci.com/gh/filecoin-project/lotus"><img src="https://circleci.com/gh/filecoin-project/lotus.svg?style=svg"></a>
<a href="https://codecov.io/gh/filecoin-project/lotus"><img src="https://codecov.io/gh/filecoin-project/lotus/branch/master/graph/badge.svg"></a>
<a href="https://goreportcard.com/report/github.com/filecoin-project/lotus"><img src="https://goreportcard.com/badge/github.com/filecoin-project/lotus" /></a>
<a href=""><img src="https://img.shields.io/badge/golang-%3E%3D1.17-blue.svg" /></a>
<a href=""><img src="https://img.shields.io/badge/golang-%3E%3D1.18.1-blue.svg" /></a>
<br>
</p>
@ -67,7 +67,7 @@ Fedora:
sudo dnf -y install gcc make git bzr jq pkgconfig mesa-libOpenCL mesa-libOpenCL-devel opencl-headers ocl-icd ocl-icd-devel clang llvm wget hwloc hwloc-devel
```
For other distributions you can find the required dependencies [here.](https://docs.filecoin.io/get-started/lotus/installation/#system-specific) For instructions specific to macOS, you can find them [here.](https://docs.filecoin.io/get-started/lotus/installation/#macos)
For other distributions you can find the required dependencies [here.](https://lotus.filecoin.io/lotus/install/prerequisites/#supported-platforms) For instructions specific to macOS, you can find them [here.](https://lotus.filecoin.io/lotus/install/macos/)
#### Go
@ -101,7 +101,7 @@ Note: The default branch `master` is the dev branch where the latest new feature
2. To join mainnet, checkout the [latest release](https://github.com/filecoin-project/lotus/releases).
If you are changing networks from a previous Lotus installation or there has been a network reset, read the [Switch networks guide](https://docs.filecoin.io/get-started/lotus/switch-networks/) before proceeding.
If you are changing networks from a previous Lotus installation or there has been a network reset, read the [Switch networks guide](https://lotus.filecoin.io/lotus/manage/switch-networks/) before proceeding.
For networks other than mainnet, look up the current branch or tag/commit for the network you want to join in the [Filecoin networks dashboard](https://network.filecoin.io), then build Lotus for your specific network below.
@ -113,8 +113,8 @@ Note: The default branch `master` is the dev branch where the latest new feature
Currently, the latest code on the _master_ branch corresponds to mainnet.
3. If you are in China, see "[Lotus: tips when running in China](https://docs.filecoin.io/get-started/lotus/tips-running-in-china/)".
4. This build instruction uses the prebuilt proofs binaries. If you want to build the proof binaries from source check the [complete instructions](https://docs.filecoin.io/get-started/lotus/installation/#build-and-install-lotus). Note, if you are building the proof binaries from source, [installing rustup](https://docs.filecoin.io/get-started/lotus/installation/#rustup) is also needed.
3. If you are in China, see "[Lotus: tips when running in China](https://lotus.filecoin.io/lotus/configure/nodes-in-china/)".
4. This build instruction uses the prebuilt proofs binaries. If you want to build the proof binaries from source check the [complete instructions](https://lotus.filecoin.io/lotus/install/prerequisites/). Note, if you are building the proof binaries from source, [installing rustup](https://lotus.filecoin.io/lotus/install/linux/#rustup) is also needed.
5. Build and install Lotus:
@ -129,9 +129,9 @@ Note: The default branch `master` is the dev branch where the latest new feature
This will put `lotus`, `lotus-miner` and `lotus-worker` in `/usr/local/bin`.
`lotus` will use the `$HOME/.lotus` folder by default for storage (configuration, chain data, wallets, etc). See [advanced options](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/) for information on how to customize the Lotus folder.
`lotus` will use the `$HOME/.lotus` folder by default for storage (configuration, chain data, wallets, etc). See [advanced options](https://lotus.filecoin.io/lotus/configure/defaults/#environment-variables) for information on how to customize the Lotus folder.
6. You should now have Lotus installed. You can now [start the Lotus daemon and sync the chain](https://docs.filecoin.io/get-started/lotus/installation/#start-the-lotus-daemon-and-sync-the-chain).
6. You should now have Lotus installed. You can now [start the Lotus daemon and sync the chain](https://lotus.filecoin.io/lotus/install/linux/#start-the-lotus-daemon-and-sync-the-chain).
## License

api/api_errors.go (new file, 42 lines)
View File

@ -0,0 +1,42 @@
package api
import (
"errors"
"reflect"
"github.com/filecoin-project/go-jsonrpc"
)
const (
EOutOfGas = iota + jsonrpc.FirstUserCode
EActorNotFound
)
type ErrOutOfGas struct{}
func (e *ErrOutOfGas) Error() string {
return "call ran out of gas"
}
type ErrActorNotFound struct{}
func (e *ErrActorNotFound) Error() string {
return "actor not found"
}
var RPCErrors = jsonrpc.NewErrors()
func ErrorIsIn(err error, errorTypes []error) bool {
for _, etype := range errorTypes {
tmp := reflect.New(reflect.PointerTo(reflect.ValueOf(etype).Elem().Type())).Interface()
if errors.As(err, tmp) {
return true
}
}
return false
}
func init() {
RPCErrors.Register(EOutOfGas, new(*ErrOutOfGas))
RPCErrors.Register(EActorNotFound, new(*ErrActorNotFound))
}
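Below is a minimal caller-side sketch of how these registered error types are meant to be used: because the JSON-RPC clients are constructed with `jsonrpc.WithErrors(api.RPCErrors)` (see the client changes further down), a typed error such as `*ErrOutOfGas` survives the RPC boundary and can be matched with `ErrorIsIn`. The `callChain` helper below is a placeholder for illustration, not part of the lotus API.

```go
package main

import (
	"fmt"

	"github.com/filecoin-project/lotus/api"
)

// callChain is a stand-in for an RPC call that failed with a registered,
// typed error, wrapped the way errors usually are on the caller side.
func callChain() error {
	return fmt.Errorf("state call failed: %w", &api.ErrOutOfGas{})
}

func main() {
	// Only retry when the failure is one of these registered error types.
	retryable := []error{new(api.ErrOutOfGas)}

	if err := callChain(); err != nil && api.ErrorIsIn(err, retryable) {
		fmt.Println("out of gas - retry with a higher gas limit")
	}
}
```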

View File

@ -17,6 +17,7 @@ import (
"github.com/filecoin-project/go-fil-markets/storagemarket"
"github.com/filecoin-project/go-jsonrpc/auth"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/builtin/v8/market"
"github.com/filecoin-project/go-state-types/builtin/v9/miner"
abinetwork "github.com/filecoin-project/go-state-types/network"
@ -54,6 +55,11 @@ type StorageMiner interface {
// and does not wait for message execution
ActorWithdrawBalance(ctx context.Context, amount abi.TokenAmount) (cid.Cid, error) //perm:admin
// BeneficiaryWithdrawBalance allows the beneficiary of a miner to withdraw balance from miner actor
// Specify amount as "0" to withdraw full balance. This method returns a message CID
// and does not wait for message execution
BeneficiaryWithdrawBalance(context.Context, abi.TokenAmount) (cid.Cid, error) //perm:admin
MiningBase(context.Context) (*types.TipSet, error) //perm:read
ComputeWindowPoSt(ctx context.Context, dlIdx uint64, tsk types.TipSetKey) ([]miner.SubmitWindowedPoStParams, error) //perm:admin
@ -139,6 +145,8 @@ type StorageMiner interface {
// SectorNumFree drops a sector reservation
SectorNumFree(ctx context.Context, name string) error //perm:admin
SectorReceive(ctx context.Context, meta RemoteSectorMeta) error //perm:admin
// WorkerConnect tells the node to connect to workers RPC
WorkerConnect(context.Context, string) error //perm:admin retry:true
WorkerStats(context.Context) (map[uuid.UUID]storiface.WorkerStats, error) //perm:admin
@ -161,6 +169,7 @@ type StorageMiner interface {
ReturnMoveStorage(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnUnsealPiece(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnReadPiece(ctx context.Context, callID storiface.CallID, ok bool, err *storiface.CallError) error //perm:admin retry:true
ReturnDownloadSector(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnFetch(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
// SealingSchedDiag dumps internal sealing scheduler state
@ -499,3 +508,109 @@ type NumAssignerMeta struct {
Next abi.SectorNumber
}
type RemoteSectorMeta struct {
////////
// BASIC SECTOR INFORMATION
// State specifies the first state the sector will enter after being imported
// Must be one of the following states:
// * Packing
// * GetTicket
// * PreCommitting
// * SubmitCommit
// * Proving/Available
State SectorState
Sector abi.SectorID
Type abi.RegisteredSealProof
////////
// SEALING METADATA
// (allows lotus to continue the sealing process)
// Required in Packing and later
Pieces []SectorPiece // todo better type?
// Required in PreCommitting and later
TicketValue abi.SealRandomness
TicketEpoch abi.ChainEpoch
PreCommit1Out storiface.PreCommit1Out // todo specify better
CommD *cid.Cid
CommR *cid.Cid // SectorKey
// Required in SubmitCommit and later
PreCommitInfo *miner.SectorPreCommitInfo
PreCommitDeposit *big.Int
PreCommitMessage *cid.Cid
PreCommitTipSet types.TipSetKey
SeedValue abi.InteractiveSealRandomness
SeedEpoch abi.ChainEpoch
CommitProof []byte
// Required in Proving/Available
CommitMessage *cid.Cid
// Optional sector metadata to import
Log []SectorLog
////////
// SECTOR DATA SOURCE
// Sector urls - lotus will use those for fetching files into local storage
// Required in all states
DataUnsealed *storiface.SectorLocation
// Required in PreCommitting and later
DataSealed *storiface.SectorLocation
DataCache *storiface.SectorLocation
////////
// SEALING SERVICE HOOKS
// URL
// RemoteCommit1Endpoint is the URL of a POST endpoint which lotus will call when requesting Commit1 (seal_commit_phase1);
// the request body will be a json-serialized RemoteCommit1Params struct
RemoteCommit1Endpoint string
// RemoteCommit2Endpoint is the URL of a POST endpoint which lotus will call when requesting Commit2 (seal_commit_phase2);
// the request body will be a json-serialized RemoteCommit2Params struct
RemoteCommit2Endpoint string
// RemoteSealingDoneEndpoint is called after the sector exits the sealing pipeline;
// the request body will be a json-serialized RemoteSealingDoneParams struct
RemoteSealingDoneEndpoint string
}
type RemoteCommit1Params struct {
Ticket, Seed []byte
Unsealed cid.Cid
Sealed cid.Cid
ProofType abi.RegisteredSealProof
}
type RemoteCommit2Params struct {
Sector abi.SectorID
ProofType abi.RegisteredSealProof
// todo spec better
Commit1Out storiface.Commit1Out
}
type RemoteSealingDoneParams struct {
// Successful is true if the sector has entered a state considered as "successfully sealed"
Successful bool
// State is the state the sector has entered
// For example "Proving" / "Removing"
State string
// Optional commit message CID
CommitMessage *cid.Cid
}
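As a rough illustration of the sealing-service hooks above, here is a minimal sketch of an HTTP endpoint that a sealing service could expose as `RemoteSealingDoneEndpoint`. The route path and listen address are assumptions made for this example; lotus is expected to POST a json-serialized `RemoteSealingDoneParams` body to the configured URL once the sector leaves the pipeline.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/filecoin-project/lotus/api"
)

// sealingDoneHandler decodes the json-serialized RemoteSealingDoneParams that
// lotus POSTs to RemoteSealingDoneEndpoint and logs the outcome.
func sealingDoneHandler(w http.ResponseWriter, r *http.Request) {
	var params api.RemoteSealingDoneParams
	if err := json.NewDecoder(r.Body).Decode(&params); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if params.Successful {
		log.Printf("sector sealed, state %s, commit message %v", params.State, params.CommitMessage)
	} else {
		log.Printf("sector did not seal successfully, state %s", params.State)
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	// Path and port are placeholders; the full URL is whatever you set in
	// RemoteSectorMeta.RemoteSealingDoneEndpoint.
	http.HandleFunc("/sealing-done", sealingDoneHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```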

View File

@ -12,6 +12,9 @@ import (
"testing"
"github.com/stretchr/testify/require"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-jsonrpc"
)
func goCmd() string {
@ -124,3 +127,18 @@ func TestPermTags(t *testing.T) {
_ = PermissionedStorMinerAPI(&StorageMinerStruct{})
_ = PermissionedWorkerAPI(&WorkerStruct{})
}
func TestRetryErrorIsInTrue(t *testing.T) {
errorsToRetry := []error{&jsonrpc.RPCConnectionError{}}
require.True(t, ErrorIsIn(&jsonrpc.RPCConnectionError{}, errorsToRetry))
}
func TestRetryErrorIsInFalse(t *testing.T) {
errorsToRetry := []error{&jsonrpc.RPCConnectionError{}}
require.False(t, ErrorIsIn(xerrors.Errorf("random error"), errorsToRetry))
}
func TestRetryWrappedErrorIsInTrue(t *testing.T) {
errorsToRetry := []error{&jsonrpc.RPCConnectionError{}}
require.True(t, ErrorIsIn(xerrors.Errorf("wrapped: %w", &jsonrpc.RPCConnectionError{}), errorsToRetry))
}

View File

@ -49,6 +49,7 @@ type Worker interface {
MoveStorage(ctx context.Context, sector storiface.SectorRef, types storiface.SectorFileType) (storiface.CallID, error) //perm:admin
UnsealPiece(context.Context, storiface.SectorRef, storiface.UnpaddedByteIndex, abi.UnpaddedPieceSize, abi.SealRandomness, cid.Cid) (storiface.CallID, error) //perm:admin
Fetch(context.Context, storiface.SectorRef, storiface.SectorFileType, storiface.PathType, storiface.AcquireMode) (storiface.CallID, error) //perm:admin
DownloadSectorData(ctx context.Context, sector storiface.SectorRef, finalized bool, src map[storiface.SectorFileType]storiface.SectorLocation) (storiface.CallID, error) //perm:admin
GenerateWinningPoSt(ctx context.Context, ppt abi.RegisteredPoStProof, mid abi.ActorID, sectors []storiface.PostSectorChallenge, randomness abi.PoStRandomness) ([]proof.PoStProof, error) //perm:admin
GenerateWindowPoSt(ctx context.Context, ppt abi.RegisteredPoStProof, mid abi.ActorID, sectors []storiface.PostSectorChallenge, partitionIdx int, randomness abi.PoStRandomness) (storiface.WindowPoStResult, error) //perm:admin

View File

@ -1005,6 +1005,129 @@ func (t *PieceDealInfo) UnmarshalCBOR(r io.Reader) (err error) {
return nil
}
func (t *SectorPiece) MarshalCBOR(w io.Writer) error {
if t == nil {
_, err := w.Write(cbg.CborNull)
return err
}
cw := cbg.NewCborWriter(w)
if _, err := cw.Write([]byte{162}); err != nil {
return err
}
// t.Piece (abi.PieceInfo) (struct)
if len("Piece") > cbg.MaxLength {
return xerrors.Errorf("Value in field \"Piece\" was too long")
}
if err := cw.WriteMajorTypeHeader(cbg.MajTextString, uint64(len("Piece"))); err != nil {
return err
}
if _, err := io.WriteString(w, string("Piece")); err != nil {
return err
}
if err := t.Piece.MarshalCBOR(cw); err != nil {
return err
}
// t.DealInfo (api.PieceDealInfo) (struct)
if len("DealInfo") > cbg.MaxLength {
return xerrors.Errorf("Value in field \"DealInfo\" was too long")
}
if err := cw.WriteMajorTypeHeader(cbg.MajTextString, uint64(len("DealInfo"))); err != nil {
return err
}
if _, err := io.WriteString(w, string("DealInfo")); err != nil {
return err
}
if err := t.DealInfo.MarshalCBOR(cw); err != nil {
return err
}
return nil
}
func (t *SectorPiece) UnmarshalCBOR(r io.Reader) (err error) {
*t = SectorPiece{}
cr := cbg.NewCborReader(r)
maj, extra, err := cr.ReadHeader()
if err != nil {
return err
}
defer func() {
if err == io.EOF {
err = io.ErrUnexpectedEOF
}
}()
if maj != cbg.MajMap {
return fmt.Errorf("cbor input should be of type map")
}
if extra > cbg.MaxLength {
return fmt.Errorf("SectorPiece: map struct too large (%d)", extra)
}
var name string
n := extra
for i := uint64(0); i < n; i++ {
{
sval, err := cbg.ReadString(cr)
if err != nil {
return err
}
name = string(sval)
}
switch name {
// t.Piece (abi.PieceInfo) (struct)
case "Piece":
{
if err := t.Piece.UnmarshalCBOR(cr); err != nil {
return xerrors.Errorf("unmarshaling t.Piece: %w", err)
}
}
// t.DealInfo (api.PieceDealInfo) (struct)
case "DealInfo":
{
b, err := cr.ReadByte()
if err != nil {
return err
}
if b != cbg.CborNull[0] {
if err := cr.UnreadByte(); err != nil {
return err
}
t.DealInfo = new(PieceDealInfo)
if err := t.DealInfo.UnmarshalCBOR(cr); err != nil {
return xerrors.Errorf("unmarshaling t.DealInfo pointer: %w", err)
}
}
}
default:
// Field doesn't exist on this type, so ignore it
cbg.ScanForLinks(r, func(cid.Cid) {})
}
}
return nil
}
func (t *DealSchedule) MarshalCBOR(w io.Writer) error {
if t == nil {
_, err := w.Write(cbg.CborNull)

View File

@ -19,7 +19,7 @@ import (
func NewCommonRPCV0(ctx context.Context, addr string, requestHeader http.Header) (api.CommonNet, jsonrpc.ClientCloser, error) {
var res v0api.CommonNetStruct
closer, err := jsonrpc.NewMergeClient(ctx, addr, "Filecoin",
api.GetInternalStructs(&res), requestHeader)
api.GetInternalStructs(&res), requestHeader, jsonrpc.WithErrors(api.RPCErrors))
return &res, closer, err
}
@ -29,7 +29,7 @@ func NewFullNodeRPCV0(ctx context.Context, addr string, requestHeader http.Heade
var res v0api.FullNodeStruct
closer, err := jsonrpc.NewMergeClient(ctx, addr, "Filecoin",
api.GetInternalStructs(&res), requestHeader)
api.GetInternalStructs(&res), requestHeader, jsonrpc.WithErrors(api.RPCErrors))
return &res, closer, err
}
@ -38,7 +38,7 @@ func NewFullNodeRPCV0(ctx context.Context, addr string, requestHeader http.Heade
func NewFullNodeRPCV1(ctx context.Context, addr string, requestHeader http.Header) (api.FullNode, jsonrpc.ClientCloser, error) {
var res v1api.FullNodeStruct
closer, err := jsonrpc.NewMergeClient(ctx, addr, "Filecoin",
api.GetInternalStructs(&res), requestHeader)
api.GetInternalStructs(&res), requestHeader, jsonrpc.WithErrors(api.RPCErrors))
return &res, closer, err
}
@ -72,6 +72,7 @@ func NewStorageMinerRPCV0(ctx context.Context, addr string, requestHeader http.H
api.GetInternalStructs(&res), requestHeader,
append([]jsonrpc.Option{
rpcenc.ReaderParamEncoder(pushUrl),
jsonrpc.WithErrors(api.RPCErrors),
}, opts...)...)
return &res, closer, err
@ -90,6 +91,7 @@ func NewWorkerRPCV0(ctx context.Context, addr string, requestHeader http.Header)
rpcenc.ReaderParamEncoder(pushUrl),
jsonrpc.WithNoReconnect(),
jsonrpc.WithTimeout(30*time.Second),
jsonrpc.WithErrors(api.RPCErrors),
)
return &res, closer, err
@ -101,7 +103,7 @@ func NewGatewayRPCV1(ctx context.Context, addr string, requestHeader http.Header
closer, err := jsonrpc.NewMergeClient(ctx, addr, "Filecoin",
api.GetInternalStructs(&res),
requestHeader,
opts...,
append(opts, jsonrpc.WithErrors(api.RPCErrors))...,
)
return &res, closer, err
@ -113,7 +115,7 @@ func NewGatewayRPCV0(ctx context.Context, addr string, requestHeader http.Header
closer, err := jsonrpc.NewMergeClient(ctx, addr, "Filecoin",
api.GetInternalStructs(&res),
requestHeader,
opts...,
append(opts, jsonrpc.WithErrors(api.RPCErrors))...,
)
return &res, closer, err
@ -124,6 +126,7 @@ func NewWalletRPCV0(ctx context.Context, addr string, requestHeader http.Header)
closer, err := jsonrpc.NewMergeClient(ctx, addr, "Filecoin",
api.GetInternalStructs(&res),
requestHeader,
jsonrpc.WithErrors(api.RPCErrors),
)
return &res, closer, err
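A hedged usage sketch of one of these constructors: with `jsonrpc.WithErrors(api.RPCErrors)` wired in, typed errors registered in `api/api_errors.go` are reconstructed on the client side and can be matched with `errors.As` or `api.ErrorIsIn`. The endpoint URL and auth token below are placeholders, not values taken from this commit.

```go
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/filecoin-project/lotus/api/client"
)

func main() {
	ctx := context.Background()

	// Placeholder endpoint and token; use your node's listen address and a
	// real JWT in practice.
	header := http.Header{"Authorization": []string{"Bearer <token>"}}
	full, closer, err := client.NewFullNodeRPCV1(ctx, "ws://127.0.0.1:1234/rpc/v1", header)
	if err != nil {
		log.Fatal(err)
	}
	defer closer()

	// Errors returned by full node calls now carry their registered codes,
	// so types like *api.ErrOutOfGas can be detected with api.ErrorIsIn on
	// this side of the RPC boundary.
	head, err := full.ChainHead(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("chain head height:", head.Height())
}
```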

View File

@ -6,6 +6,7 @@ import (
"go/ast"
"go/parser"
"go/token"
"net/http"
"path/filepath"
"reflect"
"strings"
@ -345,6 +346,17 @@ func init() {
MsgUuids: make(map[uuid.UUID]*types.SignedMessage),
})
addExample(http.Header{
"Authorization": []string{"Bearer ey.."},
})
addExample(map[storiface.SectorFileType]storiface.SectorLocation{
storiface.FTSealed: {
Local: false,
URL: "https://example.com/sealingservice/sectors/s-f0123-12345",
Headers: nil,
},
})
}
func GetAPIType(name, pkg string) (i interface{}, t reflect.Type, permStruct []reflect.Type) {

View File

@ -669,6 +669,8 @@ type StorageMinerStruct struct {
ActorWithdrawBalance func(p0 context.Context, p1 abi.TokenAmount) (cid.Cid, error) `perm:"admin"`
BeneficiaryWithdrawBalance func(p0 context.Context, p1 abi.TokenAmount) (cid.Cid, error) `perm:"admin"`
CheckProvable func(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storiface.SectorRef, p3 bool) (map[abi.SectorNumber]string, error) `perm:"admin"`
ComputeDataCid func(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 storiface.Data) (abi.PieceInfo, error) `perm:"admin"`
@ -781,6 +783,8 @@ type StorageMinerStruct struct {
ReturnDataCid func(p0 context.Context, p1 storiface.CallID, p2 abi.PieceInfo, p3 *storiface.CallError) error `perm:"admin"`
ReturnDownloadSector func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
ReturnFetch func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
ReturnFinalizeReplicaUpdate func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
@ -849,6 +853,8 @@ type StorageMinerStruct struct {
SectorPreCommitPending func(p0 context.Context) ([]abi.SectorID, error) `perm:"admin"`
SectorReceive func(p0 context.Context, p1 RemoteSectorMeta) error `perm:"admin"`
SectorRemove func(p0 context.Context, p1 abi.SectorNumber) error `perm:"admin"`
SectorSetExpectedSealDuration func(p0 context.Context, p1 time.Duration) error `perm:"write"`
@ -954,6 +960,8 @@ type WorkerStruct struct {
DataCid func(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 storiface.Data) (storiface.CallID, error) `perm:"admin"`
DownloadSectorData func(p0 context.Context, p1 storiface.SectorRef, p2 bool, p3 map[storiface.SectorFileType]storiface.SectorLocation) (storiface.CallID, error) `perm:"admin"`
Enabled func(p0 context.Context) (bool, error) `perm:"admin"`
Fetch func(p0 context.Context, p1 storiface.SectorRef, p2 storiface.SectorFileType, p3 storiface.PathType, p4 storiface.AcquireMode) (storiface.CallID, error) `perm:"admin"`
@ -4063,6 +4071,17 @@ func (s *StorageMinerStub) ActorWithdrawBalance(p0 context.Context, p1 abi.Token
return *new(cid.Cid), ErrNotSupported
}
func (s *StorageMinerStruct) BeneficiaryWithdrawBalance(p0 context.Context, p1 abi.TokenAmount) (cid.Cid, error) {
if s.Internal.BeneficiaryWithdrawBalance == nil {
return *new(cid.Cid), ErrNotSupported
}
return s.Internal.BeneficiaryWithdrawBalance(p0, p1)
}
func (s *StorageMinerStub) BeneficiaryWithdrawBalance(p0 context.Context, p1 abi.TokenAmount) (cid.Cid, error) {
return *new(cid.Cid), ErrNotSupported
}
func (s *StorageMinerStruct) CheckProvable(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storiface.SectorRef, p3 bool) (map[abi.SectorNumber]string, error) {
if s.Internal.CheckProvable == nil {
return *new(map[abi.SectorNumber]string), ErrNotSupported
@ -4679,6 +4698,17 @@ func (s *StorageMinerStub) ReturnDataCid(p0 context.Context, p1 storiface.CallID
return ErrNotSupported
}
func (s *StorageMinerStruct) ReturnDownloadSector(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
if s.Internal.ReturnDownloadSector == nil {
return ErrNotSupported
}
return s.Internal.ReturnDownloadSector(p0, p1, p2)
}
func (s *StorageMinerStub) ReturnDownloadSector(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
return ErrNotSupported
}
func (s *StorageMinerStruct) ReturnFetch(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
if s.Internal.ReturnFetch == nil {
return ErrNotSupported
@ -5053,6 +5083,17 @@ func (s *StorageMinerStub) SectorPreCommitPending(p0 context.Context) ([]abi.Sec
return *new([]abi.SectorID), ErrNotSupported
}
func (s *StorageMinerStruct) SectorReceive(p0 context.Context, p1 RemoteSectorMeta) error {
if s.Internal.SectorReceive == nil {
return ErrNotSupported
}
return s.Internal.SectorReceive(p0, p1)
}
func (s *StorageMinerStub) SectorReceive(p0 context.Context, p1 RemoteSectorMeta) error {
return ErrNotSupported
}
func (s *StorageMinerStruct) SectorRemove(p0 context.Context, p1 abi.SectorNumber) error {
if s.Internal.SectorRemove == nil {
return ErrNotSupported
@ -5537,6 +5578,17 @@ func (s *WorkerStub) DataCid(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 st
return *new(storiface.CallID), ErrNotSupported
}
func (s *WorkerStruct) DownloadSectorData(p0 context.Context, p1 storiface.SectorRef, p2 bool, p3 map[storiface.SectorFileType]storiface.SectorLocation) (storiface.CallID, error) {
if s.Internal.DownloadSectorData == nil {
return *new(storiface.CallID), ErrNotSupported
}
return s.Internal.DownloadSectorData(p0, p1, p2, p3)
}
func (s *WorkerStub) DownloadSectorData(p0 context.Context, p1 storiface.SectorRef, p2 bool, p3 map[storiface.SectorFileType]storiface.SectorLocation) (storiface.CallID, error) {
return *new(storiface.CallID), ErrNotSupported
}
func (s *WorkerStruct) Enabled(p0 context.Context) (bool, error) {
if s.Internal.Enabled == nil {
return false, ErrNotSupported

View File

@ -17,6 +17,7 @@ import (
datatransfer "github.com/filecoin-project/go-data-transfer"
"github.com/filecoin-project/go-fil-markets/retrievalmarket"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/builtin/v9/miner"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/node/modules/dtypes"
@ -300,6 +301,9 @@ type MinerInfo struct {
SectorSize abi.SectorSize
WindowPoStPartitionSectors uint64
ConsensusFaultElapsed abi.ChainEpoch
Beneficiary address.Address
BeneficiaryTerm *miner.BeneficiaryTerm
PendingBeneficiaryTerm *miner.PendingBeneficiaryChange
}
type NetworkParams struct {

View File

@ -1,2 +1,2 @@
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWSUZhAY3eyoPUboJ1ZWe4dNPFWTr1EPoDjbTDSAN15uhY
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWDfvNrSRVGWAGbn3sm9C8z98W2x25qCZjaXGHXmGiH24e
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWHc8xB2S1wFeF9ar9bVdXoEEaBPGLqfKxVQH55c4nNmxs
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWPcNcwS3cKarWrN7MfANWNpzXmZA9Ag6eH9FHFdLQ3LFQ

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -4,6 +4,9 @@
package build
import (
"os"
"strconv"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/go-address"
@ -86,13 +89,28 @@ func init() {
Devnet = true
// NOTE: DO NOT change this unless you REALLY know what you're doing. This is not consensus critical; however,
// setting this value too high may impact your block submission, and setting it too low may cause you to miss
// parent tipsets for block forming and mining.
if len(os.Getenv("PROPAGATION_DELAY_SECS")) != 0 {
pds, err := strconv.ParseUint(os.Getenv("PROPAGATION_DELAY_SECS"), 10, 64)
if err != nil {
log.Warnw("Error setting PROPAGATION_DELAY_SECS, %v, proceed with default value %s", err,
PropagationDelaySecs)
} else {
PropagationDelaySecs = pds
log.Warnw(" !!WARNING!! propagation delay is set to be %s second, "+
"this value impacts your message republish interval and block forming - monitor with caution!!", PropagationDelaySecs)
}
}
BuildType = BuildCalibnet
}
const BlockDelaySecs = uint64(builtin2.EpochDurationSeconds)
const PropagationDelaySecs = uint64(6)
var PropagationDelaySecs = uint64(10)
// BootstrapPeerThreshold is the minimum number of peers we need to track for a sync worker to start
const BootstrapPeerThreshold = 4

View File

@ -6,6 +6,7 @@ package build
import (
"math"
"os"
"strconv"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
@ -87,6 +88,7 @@ var SupportedProofTypes = []abi.RegisteredSealProof{
var ConsensusMinerMinPower = abi.NewStoragePower(10 << 40)
var MinVerifiedDealSize = abi.NewStoragePower(1 << 20)
var PreCommitChallengeDelay = abi.ChainEpoch(150)
var PropagationDelaySecs = uint64(10)
func init() {
if os.Getenv("LOTUS_USE_TEST_ADDRESSES") != "1" {
@ -97,6 +99,21 @@ func init() {
UpgradeV17Height = math.MaxInt64
}
// NOTE: DO NOT change this unless you REALLY know what you're doing. This is not consensus critical; however,
// setting this value too high may impact your block submission, and setting it too low may cause you to miss
// parent tipsets for block forming and mining.
if len(os.Getenv("PROPAGATION_DELAY_SECS")) != 0 {
pds, err := strconv.ParseUint(os.Getenv("PROPAGATION_DELAY_SECS"), 10, 64)
if err != nil {
log.Warnw("Error setting PROPAGATION_DELAY_SECS, %v, proceed with default value %s", err,
PropagationDelaySecs)
} else {
PropagationDelaySecs = pds
log.Warnw(" !!WARNING!! propagation delay is set to be %s second, "+
"this value impacts your message republish interval and block forming - monitor with caution!!", PropagationDelaySecs)
}
}
Devnet = false
BuildType = BuildMainnet
@ -104,8 +121,6 @@ func init() {
const BlockDelaySecs = uint64(builtin2.EpochDurationSeconds)
const PropagationDelaySecs = uint64(6)
// BootstrapPeerThreshold is the minimum number of peers we need to track for a sync worker to start
const BootstrapPeerThreshold = 4

View File

@ -118,8 +118,9 @@ const VerifSigCacheSize = 32000
// TODO: If this is gonna stay, it should move to specs-actors
const BlockMessageLimit = 10000
const BlockGasLimit = 10_000_000_000
const BlockGasTarget = BlockGasLimit / 2
var BlockGasLimit = int64(10_000_000_000)
var BlockGasTarget = BlockGasLimit / 2
const BaseFeeMaxChangeDenom = 8 // 12.5%
const InitialBaseFee = 100e6
const MinimumBaseFee = 100

View File

@ -37,7 +37,7 @@ func BuildTypeString() string {
}
// BuildVersion is the local build version
const BuildVersion = "1.17.2-dev"
const BuildVersion = "1.17.3-dev"
func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {

View File

@ -554,7 +554,7 @@ func (me *messageEvents) Called(ctx context.Context, check CheckFunc, msgHnd Msg
id, err := me.hcAPI.onHeadChanged(ctx, check, hnd, rev, confidence, timeout)
if err != nil {
return err
return xerrors.Errorf("on head changed error: %w", err)
}
me.lk.Lock()

View File

@ -79,7 +79,7 @@ func (mp *MessagePool) republishPendingMessages(ctx context.Context) error {
return chains[i].Before(chains[j])
})
gasLimit := int64(build.BlockGasLimit)
gasLimit := build.BlockGasLimit
minGas := int64(gasguess.MinGas)
var msgs []*types.SignedMessage
loop:

View File

@ -258,7 +258,7 @@ func (mp *MessagePool) selectMessagesOptimal(ctx context.Context, curTs, ts *typ
nextChain := 0
partitions := make([][]*msgChain, MaxBlocks)
for i := 0; i < MaxBlocks && nextChain < len(chains); i++ {
gasLimit := int64(build.BlockGasLimit)
gasLimit := build.BlockGasLimit
msgLimit := build.BlockMessageLimit
for nextChain < len(chains) {
chain := chains[nextChain]
@ -590,7 +590,7 @@ func (mp *MessagePool) selectPriorityMessages(ctx context.Context, pending map[a
mpCfg := mp.getConfig()
result := &selectedMessages{
msgs: make([]*types.SignedMessage, 0, mpCfg.SizeLimitLow),
gasLimit: int64(build.BlockGasLimit),
gasLimit: build.BlockGasLimit,
blsLimit: cbg.MaxLength,
secpLimit: cbg.MaxLength,
}

View File

@ -25,7 +25,7 @@ func TestBaseFee(t *testing.T) {
{100e6, build.BlockGasTarget, 1, 103.125e6, 100e6},
{100e6, build.BlockGasTarget * 2, 2, 103.125e6, 100e6},
{100e6, build.BlockGasLimit * 2, 2, 112.5e6, 112.5e6},
{100e6, build.BlockGasLimit * 1.5, 2, 110937500, 106.250e6},
{100e6, (build.BlockGasLimit * 15) / 10, 2, 110937500, 106.250e6},
}
for _, test := range tests {

View File

@ -57,7 +57,7 @@ func (t *FvmExecutionTrace) MarshalCBOR(w io.Writer) error {
}
// t.GasCharges ([]vm.FvmGasCharge) (slice)
if len(t.GasCharges) > cbg.MaxLength {
if len(t.GasCharges) > 1000000000 {
return xerrors.Errorf("Slice value in field t.GasCharges was too long")
}
@ -71,7 +71,7 @@ func (t *FvmExecutionTrace) MarshalCBOR(w io.Writer) error {
}
// t.Subcalls ([]vm.FvmExecutionTrace) (slice)
if len(t.Subcalls) > cbg.MaxLength {
if len(t.Subcalls) > 1000000000 {
return xerrors.Errorf("Slice value in field t.Subcalls was too long")
}
@ -164,7 +164,7 @@ func (t *FvmExecutionTrace) UnmarshalCBOR(r io.Reader) (err error) {
return err
}
if extra > cbg.MaxLength {
if extra > 1000000000 {
return fmt.Errorf("t.GasCharges: array too large (%d)", extra)
}
@ -193,7 +193,7 @@ func (t *FvmExecutionTrace) UnmarshalCBOR(r io.Reader) (err error) {
return err
}
if extra > cbg.MaxLength {
if extra > 1000000000 {
return fmt.Errorf("t.Subcalls: array too large (%d)", extra)
}

View File

@ -114,8 +114,8 @@ this command must be within this base path`,
},
ArgsUsage: "[backup file path]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument")
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
if cctx.Bool("offline") {

View File

@ -946,8 +946,8 @@ var ChainBisectCmd = &cli.Command{
defer closer()
ctx := ReqContext(cctx)
if cctx.Args().Len() < 4 {
return xerrors.New("need at least 4 args")
if cctx.NArg() < 4 {
return IncorrectNumArgs(cctx)
}
start, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -1312,8 +1312,8 @@ var chainDecodeParamsCmd = &cli.Command{
defer closer()
ctx := ReqContext(cctx)
if cctx.Args().Len() != 3 {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments"))
if cctx.NArg() != 3 {
return IncorrectNumArgs(cctx)
}
to, err := address.NewFromString(cctx.Args().First())
@ -1391,8 +1391,8 @@ var chainEncodeParamsCmd = &cli.Command{
Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
if cctx.Args().Len() != 3 {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments"))
if cctx.NArg() != 3 {
return IncorrectNumArgs(cctx)
}
method, err := strconv.ParseInt(cctx.Args().Get(1), 10, 64)

View File

@ -130,7 +130,7 @@ var clientImportCmd = &cli.Command{
ctx := ReqContext(cctx)
if cctx.NArg() != 1 {
return xerrors.New("expected input path as the only arg")
return IncorrectNumArgs(cctx)
}
absPath, err := filepath.Abs(cctx.Args().First())
@ -212,8 +212,8 @@ var clientCommPCmd = &cli.Command{
defer closer()
ctx := ReqContext(cctx)
if cctx.Args().Len() != 1 {
return fmt.Errorf("usage: commP <inputPath>")
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
ret, err := api.ClientCalcCommP(ctx, cctx.Args().Get(0))
@ -245,8 +245,8 @@ var clientCarGenCmd = &cli.Command{
defer closer()
ctx := ReqContext(cctx)
if cctx.Args().Len() != 2 {
return fmt.Errorf("usage: generate-car <inputPath> <outputPath>")
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ref := lapi.FileRef{
@ -376,7 +376,7 @@ The minimum value is 518400 (6 months).`,
afmt := NewAppFmt(cctx.App)
if cctx.NArg() != 4 {
return xerrors.New(expectedArgsMsg)
return IncorrectNumArgs(cctx)
}
// [data, miner, price, dur]

View File

@ -289,7 +289,7 @@ Examples:
}, retrFlagsCommon...),
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 2 {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments"))
return IncorrectNumArgs(cctx)
}
if cctx.Bool("car-export-merkle-proof") {
@ -405,7 +405,7 @@ var clientRetrieveCatCmd = &cli.Command{
}, retrFlagsCommon...),
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments"))
return IncorrectNumArgs(cctx)
}
ainfo, err := GetAPIInfo(cctx, repo.FullNode)
@ -484,7 +484,7 @@ var clientRetrieveLsCmd = &cli.Command{
}, retrFlagsCommon...),
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments"))
return IncorrectNumArgs(cctx)
}
ainfo, err := GetAPIInfo(cctx, repo.FullNode)

View File

@ -61,8 +61,8 @@ var filplusVerifyClientCmd = &cli.Command{
return err
}
if cctx.Args().Len() != 2 {
return fmt.Errorf("must specify two arguments: address and allowance")
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
target, err := address.NewFromString(cctx.Args().Get(0))
@ -120,7 +120,7 @@ var filplusVerifyClientCmd = &cli.Command{
return err
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("failed to add verified client: %d", mwait.Receipt.ExitCode)
}
@ -289,8 +289,8 @@ var filplusSignRemoveDataCapProposal = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return fmt.Errorf("must specify three arguments: notary address, client address, and allowance to remove")
if cctx.NArg() != 3 {
return IncorrectNumArgs(cctx)
}
api, closer, err := GetFullNodeAPI(cctx)

View File

@ -31,6 +31,10 @@ func ShowHelp(cctx *ufcli.Context, err error) error {
return &PrintHelpErr{Err: err, Ctx: cctx}
}
func IncorrectNumArgs(cctx *ufcli.Context) error {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments, got %d", cctx.NArg()))
}
func RunApp(app *ufcli.App) {
if err := app.Run(os.Args); err != nil {
if os.Getenv("LOTUS_DEV") != "" {

View File

@ -404,7 +404,7 @@ var MpoolReplaceCmd = &cli.Command{
var from address.Address
var nonce uint64
switch cctx.Args().Len() {
switch cctx.NArg() {
case 1:
mcid, err := cid.Decode(cctx.Args().First())
if err != nil {
@ -610,8 +610,8 @@ var MpoolConfig = &cli.Command{
Usage: "get or set current mpool configuration",
ArgsUsage: "[new-config]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() > 1 {
return cli.ShowCommandHelp(cctx, cctx.Command.Name)
if cctx.NArg() > 1 {
return IncorrectNumArgs(cctx)
}
afmt := NewAppFmt(cctx.App)
@ -624,7 +624,7 @@ var MpoolConfig = &cli.Command{
ctx := ReqContext(cctx)
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
cfg, err := api.MpoolGetConfig(ctx)
if err != nil {
return err

View File

@ -88,8 +88,8 @@ var msigCreateCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 1 {
return ShowHelp(cctx, fmt.Errorf("multisigs must have at least one signer"))
if cctx.NArg() < 1 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -167,7 +167,7 @@ var msigCreateCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "actor creation failed!")
return err
}
@ -365,11 +365,11 @@ var msigProposeCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 3 {
if cctx.NArg() < 3 {
return ShowHelp(cctx, fmt.Errorf("must pass at least multisig address, destination, and value"))
}
if cctx.Args().Len() > 3 && cctx.Args().Len() != 5 {
if cctx.NArg() > 3 && cctx.NArg() != 5 {
return ShowHelp(cctx, fmt.Errorf("must either pass three or five arguments"))
}
@ -399,7 +399,7 @@ var msigProposeCmd = &cli.Command{
var method uint64
var params []byte
if cctx.Args().Len() == 5 {
if cctx.NArg() == 5 {
m, err := strconv.ParseUint(cctx.Args().Get(3), 10, 64)
if err != nil {
return err
@ -456,7 +456,7 @@ var msigProposeCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("proposal returned exit %d", wait.Receipt.ExitCode)
}
@ -487,15 +487,15 @@ var msigApproveCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 2 {
if cctx.NArg() < 2 {
return ShowHelp(cctx, fmt.Errorf("must pass at least multisig address and message ID"))
}
if cctx.Args().Len() > 2 && cctx.Args().Len() < 5 {
if cctx.NArg() > 2 && cctx.NArg() < 5 {
return ShowHelp(cctx, fmt.Errorf("usage: msig approve <msig addr> <message ID> <proposer address> <desination> <value>"))
}
if cctx.Args().Len() > 5 && cctx.Args().Len() != 7 {
if cctx.NArg() > 5 && cctx.NArg() != 7 {
return ShowHelp(cctx, fmt.Errorf("usage: msig approve <msig addr> <message ID> <proposer address> <desination> <value> [ <method> <params> ]"))
}
@ -534,7 +534,7 @@ var msigApproveCmd = &cli.Command{
}
var msgCid cid.Cid
if cctx.Args().Len() == 2 {
if cctx.NArg() == 2 {
proto, err := api.MsigApprove(ctx, msig, txid, from)
if err != nil {
return err
@ -571,7 +571,7 @@ var msigApproveCmd = &cli.Command{
var method uint64
var params []byte
if cctx.Args().Len() == 7 {
if cctx.NArg() == 7 {
m, err := strconv.ParseUint(cctx.Args().Get(5), 10, 64)
if err != nil {
return err
@ -605,7 +605,7 @@ var msigApproveCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("approve returned exit %d", wait.Receipt.ExitCode)
}
@ -624,15 +624,15 @@ var msigCancelCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 2 {
if cctx.NArg() < 2 {
return ShowHelp(cctx, fmt.Errorf("must pass at least multisig address and message ID"))
}
if cctx.Args().Len() > 2 && cctx.Args().Len() < 4 {
if cctx.NArg() > 2 && cctx.NArg() < 4 {
return ShowHelp(cctx, fmt.Errorf("usage: msig cancel <msig addr> <message ID> <desination> <value>"))
}
if cctx.Args().Len() > 4 && cctx.Args().Len() != 6 {
if cctx.NArg() > 4 && cctx.NArg() != 6 {
return ShowHelp(cctx, fmt.Errorf("usage: msig cancel <msig addr> <message ID> <desination> <value> [ <method> <params> ]"))
}
@ -671,7 +671,7 @@ var msigCancelCmd = &cli.Command{
}
var msgCid cid.Cid
if cctx.Args().Len() == 2 {
if cctx.NArg() == 2 {
proto, err := api.MsigCancel(ctx, msig, txid, from)
if err != nil {
return err
@ -696,7 +696,7 @@ var msigCancelCmd = &cli.Command{
var method uint64
var params []byte
if cctx.Args().Len() == 6 {
if cctx.NArg() == 6 {
m, err := strconv.ParseUint(cctx.Args().Get(4), 10, 64)
if err != nil {
return err
@ -730,7 +730,7 @@ var msigCancelCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("cancel returned exit %d", wait.Receipt.ExitCode)
}
@ -753,8 +753,8 @@ var msigRemoveProposeCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address and signer address"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -810,7 +810,7 @@ var msigRemoveProposeCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("add proposal returned exit %d", wait.Receipt.ExitCode)
}
@ -840,8 +840,8 @@ var msigAddProposeCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address and signer address"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -930,7 +930,7 @@ var msigAddProposeCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("add proposal returned exit %d", wait.Receipt.ExitCode)
}
@ -949,8 +949,8 @@ var msigAddApproveCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 5 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, proposer address, transaction id, new signer address, whether to increase threshold"))
if cctx.NArg() != 5 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1021,7 +1021,7 @@ var msigAddApproveCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("add approval returned exit %d", wait.Receipt.ExitCode)
}
@ -1040,8 +1040,8 @@ var msigAddCancelCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 4 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, transaction id, new signer address, whether to increase threshold"))
if cctx.NArg() != 4 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1107,7 +1107,7 @@ var msigAddCancelCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("add cancellation returned exit %d", wait.Receipt.ExitCode)
}
@ -1126,8 +1126,8 @@ var msigSwapProposeCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, old signer address, new signer address"))
if cctx.NArg() != 3 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1188,7 +1188,7 @@ var msigSwapProposeCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("swap proposal returned exit %d", wait.Receipt.ExitCode)
}
@ -1207,8 +1207,8 @@ var msigSwapApproveCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 5 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, proposer address, transaction id, old signer address, new signer address"))
if cctx.NArg() != 5 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1279,7 +1279,7 @@ var msigSwapApproveCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("swap approval returned exit %d", wait.Receipt.ExitCode)
}
@ -1298,8 +1298,8 @@ var msigSwapCancelCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 4 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, transaction id, old signer address, new signer address"))
if cctx.NArg() != 4 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1365,7 +1365,7 @@ var msigSwapCancelCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("swap cancellation returned exit %d", wait.Receipt.ExitCode)
}
@ -1384,8 +1384,8 @@ var msigLockProposeCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 4 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, start epoch, unlock duration, and amount"))
if cctx.NArg() != 4 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1461,7 +1461,7 @@ var msigLockProposeCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("lock proposal returned exit %d", wait.Receipt.ExitCode)
}
@ -1480,8 +1480,8 @@ var msigLockApproveCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 6 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, proposer address, tx id, start epoch, unlock duration, and amount"))
if cctx.NArg() != 6 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1567,7 +1567,7 @@ var msigLockApproveCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("lock approval returned exit %d", wait.Receipt.ExitCode)
}
@ -1586,8 +1586,8 @@ var msigLockCancelCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 5 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, tx id, start epoch, unlock duration, and amount"))
if cctx.NArg() != 5 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1668,7 +1668,7 @@ var msigLockCancelCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("lock cancellation returned exit %d", wait.Receipt.ExitCode)
}
@ -1693,8 +1693,8 @@ var msigVestedCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address"))
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
api, closer, err := GetFullNodeAPI(cctx)
@ -1749,8 +1749,8 @@ var msigProposeThresholdCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass multisig address and new threshold value"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)
@ -1814,7 +1814,7 @@ var msigProposeThresholdCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("change threshold proposal returned exit %d", wait.Receipt.ExitCode)
}
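
The bespoke usage errors above are folded into a shared IncorrectNumArgs helper from the lotus cli package. Its implementation is not shown in this diff; a minimal sketch of the idea, with the message wording assumed rather than copied from the source, would be:

    package cli

    import (
        "fmt"

        ufcli "github.com/urfave/cli/v2"
    )

    // IncorrectNumArgs reports that a command received the wrong number of
    // arguments and shows the command's help. ShowHelp is the existing helper
    // in this package; the exact message text here is an assumption.
    func IncorrectNumArgs(cctx *ufcli.Context) error {
        return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments, got %d", cctx.NArg()))
    }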

View File

@ -141,8 +141,8 @@ var NetPing = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("please provide a peerID")
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
api, closer, err := GetAPI(cctx)

View File

@ -50,8 +50,8 @@ var paychAddFundsCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return ShowHelp(cctx, fmt.Errorf("must pass three arguments: <from> <to> <available funds>"))
if cctx.NArg() != 3 {
return IncorrectNumArgs(cctx)
}
from, err := address.NewFromString(cctx.Args().Get(0))
@ -112,8 +112,8 @@ var paychStatusByFromToCmd = &cli.Command{
Usage: "Show the status of an active outbound payment channel by from/to addresses",
ArgsUsage: "[fromAddress toAddress]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass two arguments: <from address> <to address>"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ctx := ReqContext(cctx)
@ -148,8 +148,8 @@ var paychStatusCmd = &cli.Command{
Usage: "Show the status of an outbound payment channel",
ArgsUsage: "[channelAddress]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return ShowHelp(cctx, fmt.Errorf("must pass an argument: <channel address>"))
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
ctx := ReqContext(cctx)
@ -260,8 +260,8 @@ var paychSettleCmd = &cli.Command{
Usage: "Settle a payment channel",
ArgsUsage: "[channelAddress]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return fmt.Errorf("must pass payment channel address")
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -286,7 +286,7 @@ var paychSettleCmd = &cli.Command{
if err != nil {
return nil
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("settle message execution failed (exit code %d)", mwait.Receipt.ExitCode)
}
@ -300,8 +300,8 @@ var paychCloseCmd = &cli.Command{
Usage: "Collect funds for a payment channel",
ArgsUsage: "[channelAddress]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return fmt.Errorf("must pass payment channel address")
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -326,7 +326,7 @@ var paychCloseCmd = &cli.Command{
if err != nil {
return nil
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("collect message execution failed (exit code %d)", mwait.Receipt.ExitCode)
}
@ -360,8 +360,8 @@ var paychVoucherCreateCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass two arguments: <channel> <amount>"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -408,8 +408,8 @@ var paychVoucherCheckCmd = &cli.Command{
Usage: "Check validity of payment channel voucher",
ArgsUsage: "[channelAddress voucher]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass payment channel address and voucher to validate"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -444,8 +444,8 @@ var paychVoucherAddCmd = &cli.Command{
Usage: "Add payment channel voucher to local datastore",
ArgsUsage: "[channelAddress voucher]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass payment channel address and voucher"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -486,8 +486,8 @@ var paychVoucherListCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return ShowHelp(cctx, fmt.Errorf("must pass payment channel address"))
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -531,8 +531,8 @@ var paychVoucherBestSpendableCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return ShowHelp(cctx, fmt.Errorf("must pass payment channel address"))
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -602,8 +602,8 @@ var paychVoucherSubmitCmd = &cli.Command{
Usage: "Submit voucher to chain to update payment channel state",
ArgsUsage: "[channelAddress voucher]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("must pass payment channel address and voucher"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ch, err := address.NewFromString(cctx.Args().Get(0))
@ -634,7 +634,7 @@ var paychVoucherSubmitCmd = &cli.Command{
return err
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("message execution failed (exit code %d)", mwait.Receipt.ExitCode)
}
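
The receipt checks in this file (and throughout the diff) move from comparing the raw exit code against 0 to the IsError helper on the go-state-types ExitCode type. The two are conceptually equivalent; a sketch of the assumed semantics, not the upstream source, is:

    package exitcode

    // Sketch only: an exit code is an error exactly when it is not the
    // success code 0, which is what the old `!= 0` comparisons expressed.
    type ExitCode int64

    const Ok ExitCode = 0

    func (e ExitCode) IsSuccess() bool { return e == Ok }
    func (e ExitCode) IsError() bool   { return !e.IsSuccess() }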

View File

@ -67,8 +67,8 @@ var sendCmd = &cli.Command{
fmt.Println("'force' flag is deprecated, use global flag 'force-send'")
}
if cctx.Args().Len() != 2 {
return ShowHelp(cctx, fmt.Errorf("'send' expects two arguments, target and amount"))
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
srv, err := GetFullNodeServices(cctx)

View File

@ -169,6 +169,22 @@ var StateMinerInfo = &cli.Command{
for i, controlAddress := range mi.ControlAddresses {
fmt.Printf("Control %d: \t%s\n", i, controlAddress)
}
if mi.Beneficiary != address.Undef {
fmt.Printf("Beneficiary:\t%s\n", mi.Beneficiary)
if mi.Beneficiary != mi.Owner {
fmt.Printf("Beneficiary Quota:\t%s\n", mi.BeneficiaryTerm.Quota)
fmt.Printf("Beneficiary Used Quota:\t%s\n", mi.BeneficiaryTerm.UsedQuota)
fmt.Printf("Beneficiary Expiration:\t%s\n", mi.BeneficiaryTerm.Expiration)
}
}
if mi.PendingBeneficiaryTerm != nil {
fmt.Printf("Pending Beneficiary Term:\n")
fmt.Printf("New Beneficiary:\t%s\n", mi.PendingBeneficiaryTerm.NewBeneficiary)
fmt.Printf("New Quota:\t%s\n", mi.PendingBeneficiaryTerm.NewQuota)
fmt.Printf("New Expiration:\t%s\n", mi.PendingBeneficiaryTerm.NewExpiration)
fmt.Printf("Approved By Beneficiary:\t%t\n", mi.PendingBeneficiaryTerm.ApprovedByBeneficiary)
fmt.Printf("Approved By Nominee:\t%t\n", mi.PendingBeneficiaryTerm.ApprovedByNominee)
}
fmt.Printf("PeerID:\t%s\n", mi.PeerId)
fmt.Printf("Multiaddrs:\t")
@ -504,9 +520,8 @@ var StateReplayCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
fmt.Println("must provide cid of message to replay")
return nil
if cctx.NArg() != 1 {
return IncorrectNumArgs(cctx)
}
mcid, err := cid.Decode(cctx.Args().First())
@ -1232,7 +1247,7 @@ var compStateMsg = `
<details>
<summary>Gas Trace</summary>
<table>
<tr><th>Num</th><th>Total/Compute/Storage</th><th>Time Taken</th><th>Location</th></tr>
<tr><th>Name</th><th>Total/Compute/Storage</th><th>Time Taken</th><th>Location</th></tr>
{{define "virt" -}}
{{- if . -}}
<span class="deemp">+({{.}})</span>
@ -1244,7 +1259,7 @@ var compStateMsg = `
{{- end}}
{{range .GasCharges}}
<tr><td>{{.Num}}{{if .Extra}}:{{.Extra}}{{end}}</td>
<tr><td>{{.Name}}{{if .Extra}}:{{.Extra}}{{end}}</td>
{{template "gasC" .}}
<td>{{if PrintTiming}}{{.TimeTaken}}{{end}}</td>
<td>
@ -1580,8 +1595,8 @@ var StateCallCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 2 {
return fmt.Errorf("must specify at least actor and method to invoke")
if cctx.NArg() < 2 {
return ShowHelp(cctx, fmt.Errorf("must specify at least actor and method to invoke"))
}
api, closer, err := GetFullNodeAPI(cctx)
@ -1619,7 +1634,7 @@ var StateCallCmd = &cli.Command{
var params []byte
// If params were passed in, decode them
if cctx.Args().Len() > 2 {
if cctx.NArg() > 2 {
switch cctx.String("encoding") {
case "base64":
params, err = base64.StdEncoding.DecodeString(cctx.Args().Get(2))
@ -1743,8 +1758,8 @@ var StateSectorCmd = &cli.Command{
ctx := ReqContext(cctx)
if cctx.Args().Len() != 2 {
return xerrors.Errorf("expected 2 params: minerAddress and sectorNumber")
if cctx.NArg() != 2 {
return IncorrectNumArgs(cctx)
}
ts, err := LoadTipSet(ctx, cctx, api)
@ -1885,7 +1900,6 @@ var StateSysActorCIDsCmd = &cli.Command{
&cli.UintFlag{
Name: "network-version",
Usage: "specify network version",
Value: uint(build.NewestNetworkVersion),
},
},
Action: func(cctx *cli.Context) error {
@ -1901,7 +1915,15 @@ var StateSysActorCIDsCmd = &cli.Command{
ctx := ReqContext(cctx)
nv := network.Version(cctx.Uint64("network-version"))
var nv network.Version
if cctx.IsSet("network-version") {
nv = network.Version(cctx.Uint64("network-version"))
} else {
nv, err = api.StateNetworkVersion(ctx, types.EmptyTSK)
if err != nil {
return err
}
}
fmt.Printf("Network Version: %d\n", nv)

View File

@ -645,7 +645,7 @@ var walletMarketWithdraw = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
afmt.Println(cctx.App.Writer, "withdrawal failed!")
return err
}

View File

@ -959,7 +959,7 @@ var simpleProveReplicaUpdate2 = &cli.Command{
}
func ParsePieceInfos(cctx *cli.Context, firstArg int) ([]abi.PieceInfo, error) {
args := cctx.Args().Len() - firstArg
args := cctx.NArg() - firstArg
if args%2 != 0 {
return nil, xerrors.Errorf("piece info argunemts need to be supplied in pairs")
}

View File

@ -4,9 +4,11 @@ import (
"bytes"
"fmt"
"os"
"strconv"
"strings"
"github.com/fatih/color"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
@ -19,7 +21,7 @@ import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/builtin"
"github.com/filecoin-project/go-state-types/builtin/v8/miner"
"github.com/filecoin-project/go-state-types/builtin/v9/miner"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/api"
@ -47,6 +49,8 @@ var actorCmd = &cli.Command{
actorProposeChangeWorker,
actorConfirmChangeWorker,
actorCompactAllocatedCmd,
actorProposeChangeBeneficiary,
actorConfirmChangeBeneficiary,
},
}
@ -80,7 +84,7 @@ var actorSetAddrsCmd = &cli.Command{
return fmt.Errorf("unset can only be used with no arguments")
}
nodeAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -111,7 +115,7 @@ var actorSetAddrsCmd = &cli.Command{
addrs = append(addrs, maddrNop2p.Bytes())
}
maddr, err := nodeAPI.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -176,7 +180,7 @@ var actorSetPeeridCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -195,7 +199,7 @@ var actorSetPeeridCmd = &cli.Command{
return fmt.Errorf("failed to parse input as a peerId: %w", err)
}
maddr, err := nodeAPI.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -232,7 +236,7 @@ var actorSetPeeridCmd = &cli.Command{
var actorWithdrawCmd = &cli.Command{
Name: "withdraw",
Usage: "withdraw available balance",
Usage: "withdraw available balance to beneficiary",
ArgsUsage: "[amount (FIL)]",
Flags: []cli.Flag{
&cli.IntFlag{
@ -240,6 +244,10 @@ var actorWithdrawCmd = &cli.Command{
Usage: "number of block confirmations to wait for",
Value: int(build.MessageConfidence),
},
&cli.BoolFlag{
Name: "beneficiary",
Usage: "send withdraw message from the beneficiary address",
},
},
Action: func(cctx *cli.Context) error {
amount := abi.NewTokenAmount(0)
@ -253,7 +261,7 @@ var actorWithdrawCmd = &cli.Command{
amount = abi.TokenAmount(f)
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -267,7 +275,12 @@ var actorWithdrawCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
res, err := nodeApi.ActorWithdrawBalance(ctx, amount)
var res cid.Cid
if cctx.IsSet("beneficiary") {
res, err = minerApi.BeneficiaryWithdrawBalance(ctx, amount)
} else {
res, err = minerApi.ActorWithdrawBalance(ctx, amount)
}
if err != nil {
return err
}
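
The new --beneficiary flag sends the withdraw message from the beneficiary address via BeneficiaryWithdrawBalance instead of from the owner via ActorWithdrawBalance; for example `lotus-miner actor withdraw --beneficiary 1` (the amount is a placeholder, in FIL). In either case the withdrawn funds go to the beneficiary, as the updated usage string notes.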
@ -280,7 +293,7 @@ var actorWithdrawCmd = &cli.Command{
return xerrors.Errorf("Timeout waiting for withdrawal message %s", wait.Message)
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return xerrors.Errorf("Failed to execute withdrawal message %s: %w", wait.Message, wait.Receipt.ExitCode.Error())
}
@ -316,7 +329,7 @@ var actorRepayDebtCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -330,7 +343,7 @@ var actorRepayDebtCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -431,7 +444,7 @@ var actorControlList = &cli.Command{
color.NoColor = !cctx.Bool("color")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -463,7 +476,7 @@ var actorControlList = &cli.Command{
tablewriter.Col("balance"),
)
ac, err := nodeApi.ActorAddressConfig(ctx)
ac, err := minerApi.ActorAddressConfig(ctx)
if err != nil {
return err
}
@ -536,7 +549,9 @@ var actorControlList = &cli.Command{
}
kstr := k.String()
if !cctx.Bool("verbose") {
kstr = kstr[:9] + "..."
if len(kstr) > 9 {
kstr = kstr[:6] + "..."
}
}
bstr := types.FIL(b).String()
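
Guarding the truncation with a length check avoids a slice-bounds panic when the key string is shorter than nine characters; short keys are now printed in full, while longer ones are trimmed to six characters plus an ellipsis unless --verbose is set.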
@ -600,7 +615,7 @@ var actorControlSet = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -614,7 +629,7 @@ var actorControlSet = &cli.Command{
ctx := lcli.ReqContext(cctx)
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -719,7 +734,7 @@ var actorSetOwnerCmd = &cli.Command{
}
if cctx.NArg() != 2 {
return fmt.Errorf("must pass new owner address and sender address")
return lcli.IncorrectNumArgs(cctx)
}
api, acloser, err := lcli.GetFullNodeAPI(cctx)
@ -789,7 +804,7 @@ var actorSetOwnerCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Println("owner change failed!")
return err
}
@ -816,7 +831,7 @@ var actorProposeChangeWorker = &cli.Command{
return fmt.Errorf("must pass address of new worker address")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -840,7 +855,7 @@ var actorProposeChangeWorker = &cli.Command{
return err
}
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -895,9 +910,8 @@ var actorProposeChangeWorker = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
fmt.Fprintln(cctx.App.Writer, "Propose worker change failed!")
return err
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("propose worker change failed")
}
mi, err = api.StateMinerInfo(ctx, maddr, wait.TipSet)
@ -915,6 +929,139 @@ var actorProposeChangeWorker = &cli.Command{
},
}
var actorProposeChangeBeneficiary = &cli.Command{
Name: "propose-change-beneficiary",
Usage: "Propose a beneficiary address change",
ArgsUsage: "[beneficiaryAddress quota expiration]",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "really-do-it",
Usage: "Actually send transaction performing the action",
Value: false,
},
&cli.BoolFlag{
Name: "overwrite-pending-change",
Usage: "Overwrite the current beneficiary change proposal",
Value: false,
},
&cli.StringFlag{
Name: "actor",
Usage: "specify the address of miner actor",
},
},
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 3 {
return lcli.IncorrectNumArgs(cctx)
}
api, acloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return xerrors.Errorf("getting fullnode api: %w", err)
}
defer acloser()
ctx := lcli.ReqContext(cctx)
na, err := address.NewFromString(cctx.Args().Get(0))
if err != nil {
return xerrors.Errorf("parsing beneficiary address: %w", err)
}
newAddr, err := api.StateLookupID(ctx, na, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("looking up new beneficiary address: %w", err)
}
quota, err := types.ParseFIL(cctx.Args().Get(1))
if err != nil {
return xerrors.Errorf("parsing quota: %w", err)
}
expiration, err := strconv.ParseInt(cctx.Args().Get(2), 10, 64)
if err != nil {
return xerrors.Errorf("parsing expiration: %w", err)
}
maddr, err := getActorAddress(ctx, cctx)
if err != nil {
return xerrors.Errorf("getting miner address: %w", err)
}
mi, err := api.StateMinerInfo(ctx, maddr, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("getting miner info: %w", err)
}
if mi.Beneficiary == mi.Owner && newAddr == mi.Owner {
return fmt.Errorf("beneficiary %s already set to owner address", mi.Beneficiary)
}
if mi.PendingBeneficiaryTerm != nil {
fmt.Println("WARNING: replacing Pending Beneficiary Term of:")
fmt.Println("Beneficiary: ", mi.PendingBeneficiaryTerm.NewBeneficiary)
fmt.Println("Quota:", mi.PendingBeneficiaryTerm.NewQuota)
fmt.Println("Expiration Epoch:", mi.PendingBeneficiaryTerm.NewExpiration)
if !cctx.Bool("overwrite-pending-change") {
return fmt.Errorf("must pass --overwrite-pending-change to replace current pending beneficiary change. Please review CAREFULLY")
}
}
if !cctx.Bool("really-do-it") {
fmt.Println("Pass --really-do-it to actually execute this action. Review what you're about to approve CAREFULLY please")
return nil
}
params := &miner.ChangeBeneficiaryParams{
NewBeneficiary: newAddr,
NewQuota: abi.TokenAmount(quota),
NewExpiration: abi.ChainEpoch(expiration),
}
sp, err := actors.SerializeParams(params)
if err != nil {
return xerrors.Errorf("serializing params: %w", err)
}
smsg, err := api.MpoolPushMessage(ctx, &types.Message{
From: mi.Owner,
To: maddr,
Method: builtin.MethodsMiner.ChangeBeneficiary,
Value: big.Zero(),
Params: sp,
}, nil)
if err != nil {
return xerrors.Errorf("mpool push: %w", err)
}
fmt.Println("Propose Message CID:", smsg.Cid())
// wait for it to get mined into a block
wait, err := api.StateWaitMsg(ctx, smsg.Cid(), build.MessageConfidence)
if err != nil {
return xerrors.Errorf("waiting for message to be included in block: %w", err)
}
// check it executed successfully
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("propose beneficiary change failed")
}
updatedMinerInfo, err := api.StateMinerInfo(ctx, maddr, wait.TipSet)
if err != nil {
return xerrors.Errorf("getting miner info: %w", err)
}
if updatedMinerInfo.PendingBeneficiaryTerm == nil && updatedMinerInfo.Beneficiary == newAddr {
fmt.Println("Beneficiary address successfully changed")
} else {
fmt.Println("Beneficiary address change awaiting additional confirmations")
}
return nil
},
}
var actorConfirmChangeWorker = &cli.Command{
Name: "confirm-change-worker",
Usage: "Confirm a worker address change",
@ -931,7 +1078,7 @@ var actorConfirmChangeWorker = &cli.Command{
return fmt.Errorf("must pass address of new worker address")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -955,7 +1102,7 @@ var actorConfirmChangeWorker = &cli.Command{
return err
}
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -978,7 +1125,7 @@ var actorConfirmChangeWorker = &cli.Command{
}
if !cctx.Bool("really-do-it") {
fmt.Fprintln(cctx.App.Writer, "Pass --really-do-it to actually execute this action")
fmt.Println("Pass --really-do-it to actually execute this action")
return nil
}
@ -992,7 +1139,7 @@ var actorConfirmChangeWorker = &cli.Command{
return xerrors.Errorf("mpool push: %w", err)
}
fmt.Fprintln(cctx.App.Writer, "Confirm Message CID:", smsg.Cid())
fmt.Println("Confirm Message CID:", smsg.Cid())
// wait for it to get mined into a block
wait, err := api.StateWaitMsg(ctx, smsg.Cid(), build.MessageConfidence)
@ -1001,7 +1148,7 @@ var actorConfirmChangeWorker = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Worker change failed!")
return err
}
@ -1018,6 +1165,129 @@ var actorConfirmChangeWorker = &cli.Command{
},
}
var actorConfirmChangeBeneficiary = &cli.Command{
Name: "confirm-change-beneficiary",
Usage: "Confirm a beneficiary address change",
ArgsUsage: "[minerAddress]",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "really-do-it",
Usage: "Actually send transaction performing the action",
Value: false,
},
&cli.BoolFlag{
Name: "existing-beneficiary",
Usage: "send confirmation from the existing beneficiary address",
},
&cli.BoolFlag{
Name: "new-beneficiary",
Usage: "send confirmation from the new beneficiary address",
},
},
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
api, acloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return xerrors.Errorf("getting fullnode api: %w", err)
}
defer acloser()
ctx := lcli.ReqContext(cctx)
maddr, err := address.NewFromString(cctx.Args().First())
if err != nil {
return xerrors.Errorf("parsing beneficiary address: %w", err)
}
mi, err := api.StateMinerInfo(ctx, maddr, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("getting miner info: %w", err)
}
if mi.PendingBeneficiaryTerm == nil {
return fmt.Errorf("no pending beneficiary term found for miner %s", maddr)
}
if (cctx.IsSet("existing-beneficiary") && cctx.IsSet("new-beneficiary")) || (!cctx.IsSet("existing-beneficiary") && !cctx.IsSet("new-beneficiary")) {
return lcli.ShowHelp(cctx, fmt.Errorf("must pass exactly one of --existing-beneficiary or --new-beneficiary"))
}
var fromAddr address.Address
if cctx.IsSet("existing-beneficiary") {
if mi.PendingBeneficiaryTerm.ApprovedByBeneficiary {
return fmt.Errorf("beneficiary change already approved by current beneficiary")
}
fromAddr = mi.Beneficiary
} else {
if mi.PendingBeneficiaryTerm.ApprovedByNominee {
return fmt.Errorf("beneficiary change already approved by new beneficiary")
}
fromAddr = mi.PendingBeneficiaryTerm.NewBeneficiary
}
fmt.Println("Confirming Pending Beneficiary Term of:")
fmt.Println("Beneficiary: ", mi.PendingBeneficiaryTerm.NewBeneficiary)
fmt.Println("Quota:", mi.PendingBeneficiaryTerm.NewQuota)
fmt.Println("Expiration Epoch:", mi.PendingBeneficiaryTerm.NewExpiration)
if !cctx.Bool("really-do-it") {
fmt.Println("Pass --really-do-it to actually execute this action. Review what you're about to approve CAREFULLY please")
return nil
}
params := &miner.ChangeBeneficiaryParams{
NewBeneficiary: mi.PendingBeneficiaryTerm.NewBeneficiary,
NewQuota: mi.PendingBeneficiaryTerm.NewQuota,
NewExpiration: mi.PendingBeneficiaryTerm.NewExpiration,
}
sp, err := actors.SerializeParams(params)
if err != nil {
return xerrors.Errorf("serializing params: %w", err)
}
smsg, err := api.MpoolPushMessage(ctx, &types.Message{
From: fromAddr,
To: maddr,
Method: builtin.MethodsMiner.ChangeBeneficiary,
Value: big.Zero(),
Params: sp,
}, nil)
if err != nil {
return xerrors.Errorf("mpool push: %w", err)
}
fmt.Println("Confirm Message CID:", smsg.Cid())
// wait for it to get mined into a block
wait, err := api.StateWaitMsg(ctx, smsg.Cid(), build.MessageConfidence)
if err != nil {
return xerrors.Errorf("waiting for message to be included in block: %w", err)
}
// check it executed successfully
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("confirm beneficiary change failed with code %d", wait.Receipt.ExitCode)
}
updatedMinerInfo, err := api.StateMinerInfo(ctx, maddr, types.EmptyTSK)
if err != nil {
return err
}
if updatedMinerInfo.PendingBeneficiaryTerm == nil && updatedMinerInfo.Beneficiary == mi.PendingBeneficiaryTerm.NewBeneficiary {
fmt.Println("Beneficiary address successfully changed")
} else {
fmt.Println("Beneficiary address change awaiting additional confirmations")
}
return nil
},
}
var actorCompactAllocatedCmd = &cli.Command{
Name: "compact-allocated",
Usage: "compact allocated sectors bitfield",
@ -1046,7 +1316,7 @@ var actorCompactAllocatedCmd = &cli.Command{
return fmt.Errorf("must pass address of new owner address")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -1060,7 +1330,7 @@ var actorCompactAllocatedCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -1160,7 +1430,7 @@ var actorCompactAllocatedCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Println("Propose owner change failed!")
return err
}
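
Taken together, the two new subcommands implement a two-party handshake for beneficiary changes: the owner proposes with `lotus-miner actor propose-change-beneficiary --really-do-it <beneficiaryAddress> <quota> <expirationEpoch>` (quota is a FIL amount, expiration a chain epoch; values are placeholders), and the change only lands once `lotus-miner actor confirm-change-beneficiary` has been run with --existing-beneficiary by the current beneficiary and with --new-beneficiary by the nominated one, matching the ApprovedByBeneficiary and ApprovedByNominee flags printed in the pending term.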

View File

@ -77,7 +77,7 @@ var dagstoreRegisterShardCmd = &cli.Command{
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide a single shard key")
return lcli.IncorrectNumArgs(cctx)
}
marketsAPI, closer, err := lcli.GetMarketsAPI(cctx)
@ -116,7 +116,7 @@ var dagstoreInitializeShardCmd = &cli.Command{
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide a single shard key")
return lcli.IncorrectNumArgs(cctx)
}
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
@ -148,7 +148,7 @@ var dagstoreRecoverShardCmd = &cli.Command{
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide a single shard key")
return lcli.IncorrectNumArgs(cctx)
}
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
@ -330,7 +330,7 @@ var dagstoreLookupPiecesCmd = &cli.Command{
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide a CID")
return lcli.IncorrectNumArgs(cctx)
}
cidStr := cctx.Args().First()

View File

@ -36,7 +36,7 @@ var indexProvAnnounceCmd = &cli.Command{
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide the deal proposal CID")
return lcli.IncorrectNumArgs(cctx)
}
proposalCidStr := cctx.Args().First()

View File

@ -313,6 +313,23 @@ func handleMiningInfo(ctx context.Context, cctx *cli.Context, fullapi v1api.Full
}
colorTokenAmount("Total Spendable: %s\n", spendable)
if mi.Beneficiary != address.Undef {
fmt.Println()
fmt.Printf("Beneficiary:\t%s\n", mi.Beneficiary)
if mi.Beneficiary != mi.Owner {
fmt.Printf("Beneficiary Quota:\t%s\n", mi.BeneficiaryTerm.Quota)
fmt.Printf("Beneficiary Used Quota:\t%s\n", mi.BeneficiaryTerm.UsedQuota)
fmt.Printf("Beneficiary Expiration:\t%s\n", mi.BeneficiaryTerm.Expiration)
}
}
if mi.PendingBeneficiaryTerm != nil {
fmt.Printf("Pending Beneficiary Term:\n")
fmt.Printf("New Beneficiary:\t%s\n", mi.PendingBeneficiaryTerm.NewBeneficiary)
fmt.Printf("New Quota:\t%s\n", mi.PendingBeneficiaryTerm.NewQuota)
fmt.Printf("New Expiration:\t%s\n", mi.PendingBeneficiaryTerm.NewExpiration)
fmt.Printf("Approved By Beneficiary:\t%t\n", mi.PendingBeneficiaryTerm.ApprovedByBeneficiary)
fmt.Printf("Approved By Nominee:\t%t\n", mi.PendingBeneficiaryTerm.ApprovedByNominee)
}
fmt.Println()
if !cctx.Bool("hide-sectors-info") {
@ -481,6 +498,8 @@ var stateList = []stateMeta{
{col: color.FgGreen, state: sealing.Available},
{col: color.FgGreen, state: sealing.UpdateActivating},
{col: color.FgMagenta, state: sealing.ReceiveSector},
{col: color.FgBlue, state: sealing.Empty},
{col: color.FgBlue, state: sealing.WaitDeals},
{col: color.FgBlue, state: sealing.AddPiece},
@ -526,6 +545,7 @@ var stateList = []stateMeta{
{col: color.FgRed, state: sealing.SealPreCommit2Failed},
{col: color.FgRed, state: sealing.PreCommitFailed},
{col: color.FgRed, state: sealing.ComputeProofFailed},
{col: color.FgRed, state: sealing.RemoteCommitFailed},
{col: color.FgRed, state: sealing.CommitFailed},
{col: color.FgRed, state: sealing.CommitFinalizeFailed},
{col: color.FgRed, state: sealing.PackingFailed},
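
ReceiveSector and RemoteCommitFailed appear to be the list entries for the new sector-import flow, so externally sealed sectors get their own colour coding in the sector state listing.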

View File

@ -16,7 +16,7 @@ var infoAllCmd = &cli.Command{
Name: "all",
Usage: "dump all related miner info",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -220,7 +220,7 @@ var infoAllCmd = &cli.Command{
// Very Very Verbose info
fmt.Println("\n#: Per Sector Info")
list, err := nodeApi.SectorsList(ctx)
list, err := minerApi.SectorsList(ctx)
if err != nil {
fmt.Println("ERROR: ", err)
}

View File

@ -90,7 +90,6 @@ var initCmd = &cli.Command{
&cli.StringFlag{
Name: "sector-size",
Usage: "specify sector size to use",
Value: units.BytesSize(float64(abi.SectorSize(2048))),
},
&cli.StringSliceFlag{
Name: "pre-sealed-sectors",
@ -129,11 +128,18 @@ var initCmd = &cli.Command{
Action: func(cctx *cli.Context) error {
log.Info("Initializing lotus miner")
sectorSizeInt, err := units.RAMInBytes(cctx.String("sector-size"))
ssize, err := abi.RegisteredSealProof_StackedDrg32GiBV1.SectorSize()
if err != nil {
return err
return xerrors.Errorf("failed to calculate default sector size: %w", err)
}
if cctx.IsSet("sector-size") {
sectorSizeInt, err := units.RAMInBytes(cctx.String("sector-size"))
if err != nil {
return err
}
ssize = abi.SectorSize(sectorSizeInt)
}
ssize := abi.SectorSize(sectorSizeInt)
gasPrice, err := types.BigFromString(cctx.String("gas-premium"))
if err != nil {
@ -314,7 +320,7 @@ func migratePreSealMeta(ctx context.Context, api v1api.FullNode, metadata string
info := &pipeline.SectorInfo{
State: pipeline.Proving,
SectorNumber: sector.SectorID,
Pieces: []pipeline.Piece{
Pieces: []lapi.SectorPiece{
{
Piece: abi.PieceInfo{
Size: abi.PaddedPieceSize(meta.SectorSize),
@ -541,7 +547,7 @@ func storageMinerInit(ctx context.Context, cctx *cli.Context, api v1api.FullNode
addr = a
} else {
a, err := createStorageMiner(ctx, api, peerid, gasPrice, cctx)
a, err := createStorageMiner(ctx, api, ssize, peerid, gasPrice, cctx)
if err != nil {
return xerrors.Errorf("creating miner failed: %w", err)
}
@ -621,7 +627,7 @@ func configureStorageMiner(ctx context.Context, api v1api.FullNode, addr address
return nil
}
func createStorageMiner(ctx context.Context, api v1api.FullNode, peerid peer.ID, gasPrice types.BigInt, cctx *cli.Context) (address.Address, error) {
func createStorageMiner(ctx context.Context, api v1api.FullNode, ssize abi.SectorSize, peerid peer.ID, gasPrice types.BigInt, cctx *cli.Context) (address.Address, error) {
var err error
var owner address.Address
if cctx.String("owner") != "" {
@ -633,11 +639,6 @@ func createStorageMiner(ctx context.Context, api v1api.FullNode, peerid peer.ID,
return address.Undef, err
}
ssize, err := units.RAMInBytes(cctx.String("sector-size"))
if err != nil {
return address.Undef, fmt.Errorf("failed to parse sector size: %w", err)
}
worker := owner
if cctx.String("worker") != "" {
worker, err = address.NewFromString(cctx.String("worker"))
@ -712,7 +713,7 @@ func createStorageMiner(ctx context.Context, api v1api.FullNode, peerid peer.ID,
}
// Note: the correct thing to do would be to call SealProofTypeFromSectorSize if actors version is v3 or later, but this still works
spt, err := miner.WindowPoStProofTypeFromSectorSize(abi.SectorSize(ssize))
spt, err := miner.WindowPoStProofTypeFromSectorSize(ssize)
if err != nil {
return address.Undef, xerrors.Errorf("getting post proof type: %w", err)
}
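
Note the behaviour change here: `lotus-miner init` now defaults to the 32GiB sector size derived from RegisteredSealProof_StackedDrg32GiBV1 instead of the old 2KiB (2048-byte) default, so test and devnet setups that relied on the old default presumably need to pass --sector-size 2KiB explicitly.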

View File

@ -96,8 +96,8 @@ var restoreCmd = &cli.Command{
}
func restore(ctx context.Context, cctx *cli.Context, targetPath string, strConfig *paths.StorageConfig, manageConfig func(*config.StorageMiner) error, after func(api lapi.FullNode, addr address.Address, peerid peer.ID, mi api.MinerInfo) error) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
log.Info("Trying to connect to full node RPC")

View File

@ -177,13 +177,13 @@ func getActorAddress(ctx context.Context, cctx *cli.Context) (maddr address.Addr
return
}
nodeAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return address.Undef, err
}
defer closer()
maddr, err = nodeAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return maddr, xerrors.Errorf("getting actor address: %w", err)
}

View File

@ -370,8 +370,8 @@ var dealsImportDataCmd = &cli.Command{
ctx := lcli.DaemonContext(cctx)
if cctx.Args().Len() < 2 {
return fmt.Errorf("must specify proposal CID and file path")
if cctx.NArg() != 2 {
return lcli.IncorrectNumArgs(cctx)
}
propCid, err := cid.Decode(cctx.Args().Get(0))
@ -617,8 +617,8 @@ var setSealDurationCmd = &cli.Command{
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass duration in minutes")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
hs, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)

View File

@ -314,8 +314,8 @@ var provingDeadlineInfoCmd = &cli.Command{
ArgsUsage: "<deadlineIdx>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass deadline index")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
dlIdx, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -461,8 +461,8 @@ var provingCheckProvableCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass deadline index")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
dlIdx, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -476,7 +476,7 @@ var provingCheckProvableCmd = &cli.Command{
}
defer closer()
sapi, scloser, err := lcli.GetStorageMinerAPI(cctx)
minerApi, scloser, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -484,7 +484,7 @@ var provingCheckProvableCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
addr, err := sapi.ActorAddress(ctx)
addr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -510,7 +510,7 @@ var provingCheckProvableCmd = &cli.Command{
var filter map[abi.SectorID]struct{}
if cctx.IsSet("storage-id") {
sl, err := sapi.StorageList(ctx)
sl, err := minerApi.StorageList(ctx)
if err != nil {
return err
}
@ -582,7 +582,7 @@ var provingCheckProvableCmd = &cli.Command{
})
}
bad, err := sapi.CheckProvable(ctx, info.WindowPoStProofType, tocheck, cctx.Bool("slow"))
bad, err := minerApi.CheckProvable(ctx, info.WindowPoStProofType, tocheck, cctx.Bool("slow"))
if err != nil {
return err
}
@ -616,8 +616,8 @@ var provingComputeWindowPoStCmd = &cli.Command{
It will not send any messages to the chain.`,
ArgsUsage: "[deadline index]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass deadline index")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
dlIdx, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -625,7 +625,7 @@ It will not send any messages to the chain.`,
return xerrors.Errorf("could not parse deadline index: %w", err)
}
sapi, scloser, err := lcli.GetStorageMinerAPI(cctx)
minerApi, scloser, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -634,7 +634,7 @@ It will not send any messages to the chain.`,
ctx := lcli.ReqContext(cctx)
start := time.Now()
res, err := sapi.ComputeWindowPoSt(ctx, dlIdx, types.EmptyTSK)
res, err := minerApi.ComputeWindowPoSt(ctx, dlIdx, types.EmptyTSK)
fmt.Printf("Took %s\n", time.Now().Sub(start))
if err != nil {
return err
@ -661,8 +661,8 @@ var provingRecoverFaultsCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 1 {
return xerrors.Errorf("must pass at least 1 sector number")
if cctx.NArg() < 1 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass at least 1 sector number"))
}
arglist := cctx.Args().Slice()
@ -675,7 +675,7 @@ var provingRecoverFaultsCmd = &cli.Command{
sectors = append(sectors, abi.SectorNumber(s))
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -689,7 +689,7 @@ var provingRecoverFaultsCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
msgs, err := nodeApi.RecoverFault(ctx, sectors)
msgs, err := minerApi.RecoverFault(ctx, sectors)
if err != nil {
return err
}
@ -707,7 +707,7 @@ var provingRecoverFaultsCmd = &cli.Command{
return
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
results <- xerrors.Errorf("Failed to execute message %s: %w", wait.Message, wait.Receipt.ExitCode.Error())
return
}

View File

@ -57,7 +57,7 @@ func workersCmd(sealing bool) *cli.Command {
color.NoColor = !cctx.Bool("color")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -65,7 +65,7 @@ func workersCmd(sealing bool) *cli.Command {
ctx := lcli.ReqContext(cctx)
stats, err := nodeApi.WorkerStats(ctx)
stats, err := minerApi.WorkerStats(ctx)
if err != nil {
return err
}
@ -233,7 +233,7 @@ var sealingJobsCmd = &cli.Command{
color.NoColor = !cctx.Bool("color")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -241,7 +241,7 @@ var sealingJobsCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
jobs, err := nodeApi.WorkerJobs(ctx)
jobs, err := minerApi.WorkerJobs(ctx)
if err != nil {
return xerrors.Errorf("getting worker jobs: %w", err)
}
@ -275,7 +275,7 @@ var sealingJobsCmd = &cli.Command{
workerHostnames := map[uuid.UUID]string{}
wst, err := nodeApi.WorkerStats(ctx)
wst, err := minerApi.WorkerStats(ctx)
if err != nil {
return xerrors.Errorf("getting worker stats: %w", err)
}
@ -337,7 +337,7 @@ var sealingSchedDiagCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -345,7 +345,7 @@ var sealingSchedDiagCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
st, err := nodeApi.SealingSchedDiag(ctx, cctx.Bool("force-sched"))
st, err := minerApi.SealingSchedDiag(ctx, cctx.Bool("force-sched"))
if err != nil {
return err
}
@ -372,11 +372,11 @@ var sealingAbortCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -385,14 +385,14 @@ var sealingAbortCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
if cctx.Bool("sched") {
err = nodeApi.SealingRemoveRequest(ctx, uuid.Must(uuid.Parse(cctx.Args().First())))
err = minerApi.SealingRemoveRequest(ctx, uuid.Must(uuid.Parse(cctx.Args().First())))
if err != nil {
return xerrors.Errorf("Failed to removed the request with UUID %s: %w", cctx.Args().First(), err)
}
return nil
}
jobs, err := nodeApi.WorkerJobs(ctx)
jobs, err := minerApi.WorkerJobs(ctx)
if err != nil {
return xerrors.Errorf("getting worker jobs: %w", err)
}
@ -415,7 +415,7 @@ var sealingAbortCmd = &cli.Command{
fmt.Printf("aborting job %s, task %s, sector %d, running on host %s\n", job.ID.String(), job.Task.Short(), job.Sector.Number, job.Hostname)
return nodeApi.SealingAbort(ctx, job.ID)
return minerApi.SealingAbort(ctx, job.ID)
},
}
@ -430,11 +430,11 @@ var sealingDataCidCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 1 || cctx.Args().Len() > 2 {
return xerrors.Errorf("expected 1 or 2 arguments")
if cctx.NArg() < 1 || cctx.NArg() > 2 {
return lcli.ShowHelp(cctx, xerrors.Errorf("expected 1 or 2 arguments"))
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -484,7 +484,7 @@ var sealingDataCidCmd = &cli.Command{
}
var psize abi.PaddedPieceSize
if cctx.Args().Len() == 2 {
if cctx.NArg() == 2 {
rps, err := humanize.ParseBytes(cctx.Args().Get(1))
if err != nil {
return xerrors.Errorf("parsing piece size: %w", err)
@ -500,7 +500,7 @@ var sealingDataCidCmd = &cli.Command{
psize = padreader.PaddedSize(sz).Padded()
}
pc, err := nodeApi.ComputeDataCid(ctx, psize.Unpadded(), r)
pc, err := minerApi.ComputeDataCid(ctx, psize.Unpadded(), r)
if err != nil {
return xerrors.Errorf("computing data CID: %w", err)
}

View File

@ -69,14 +69,14 @@ var sectorsPledgeCmd = &cli.Command{
Name: "pledge",
Usage: "store random data in a sector",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
id, err := nodeApi.PledgeSector(ctx)
id, err := minerApi.PledgeSector(ctx)
if err != nil {
return err
}
@ -113,7 +113,7 @@ var sectorsStatusCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -130,7 +130,7 @@ var sectorsStatusCmd = &cli.Command{
}
onChainInfo := cctx.Bool("on-chain-info")
status, err := nodeApi.SectorsStatus(ctx, abi.SectorNumber(id), onChainInfo)
status, err := minerApi.SectorsStatus(ctx, abi.SectorNumber(id), onChainInfo)
if err != nil {
return err
}
@ -318,7 +318,7 @@ var sectorsListCmd = &cli.Command{
color.NoColor = !cctx.Bool("color")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -359,16 +359,16 @@ var sectorsListCmd = &cli.Command{
}
if len(states) == 0 {
list, err = nodeApi.SectorsList(ctx)
list, err = minerApi.SectorsList(ctx)
} else {
list, err = nodeApi.SectorsListInStates(ctx, states)
list, err = minerApi.SectorsListInStates(ctx, states)
}
if err != nil {
return err
}
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -418,7 +418,7 @@ var sectorsListCmd = &cli.Command{
fast := cctx.Bool("fast")
for _, s := range list {
st, err := nodeApi.SectorsStatus(ctx, s, !fast)
st, err := minerApi.SectorsStatus(ctx, s, !fast)
if err != nil {
tw.Write(map[string]interface{}{
"ID": s,
@ -1372,14 +1372,14 @@ var sectorsTerminateCmd = &cli.Command{
if !cctx.Bool("really-do-it") {
return xerrors.Errorf("pass --really-do-it to confirm this action")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass sector number")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -1387,7 +1387,7 @@ var sectorsTerminateCmd = &cli.Command{
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorTerminate(ctx, abi.SectorNumber(id))
return minerApi.SectorTerminate(ctx, abi.SectorNumber(id))
},
}
@ -1395,14 +1395,14 @@ var sectorsTerminateFlushCmd = &cli.Command{
Name: "flush",
Usage: "Send a terminate message if there are sectors queued for termination",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
mcid, err := nodeApi.SectorTerminateFlush(ctx)
mcid, err := minerApi.SectorTerminateFlush(ctx)
if err != nil {
return err
}
@ -1421,7 +1421,7 @@ var sectorsTerminatePendingCmd = &cli.Command{
Name: "pending",
Usage: "List sector numbers of sectors pending termination",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -1433,12 +1433,12 @@ var sectorsTerminatePendingCmd = &cli.Command{
defer nCloser()
ctx := lcli.ReqContext(cctx)
pending, err := nodeApi.SectorTerminatePending(ctx)
pending, err := minerAPI.SectorTerminatePending(ctx)
if err != nil {
return err
}
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerAPI.ActorAddress(ctx)
if err != nil {
return err
}
@ -1482,14 +1482,14 @@ var sectorsRemoveCmd = &cli.Command{
if !cctx.Bool("really-do-it") {
return xerrors.Errorf("this is a command for advanced users, only use it if you are sure of what you are doing")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass sector number")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -1497,7 +1497,7 @@ var sectorsRemoveCmd = &cli.Command{
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorRemove(ctx, abi.SectorNumber(id))
return minerAPI.SectorRemove(ctx, abi.SectorNumber(id))
},
}
@ -1506,11 +1506,11 @@ var sectorsSnapUpCmd = &cli.Command{
Usage: "Mark a committed capacity sector to be filled with deals",
ArgsUsage: "<sectorNum>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number"))
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -1535,7 +1535,7 @@ var sectorsSnapUpCmd = &cli.Command{
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorMarkForUpgrade(ctx, abi.SectorNumber(id), true)
return minerAPI.SectorMarkForUpgrade(ctx, abi.SectorNumber(id), true)
},
}
@ -1550,7 +1550,7 @@ var sectorsSnapAbortCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
if cctx.NArg() != 1 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number"))
}
@ -1560,7 +1560,7 @@ var sectorsSnapAbortCmd = &cli.Command{
return fmt.Errorf("--really-do-it must be specified for this action to have an effect; you have been warned")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -1572,7 +1572,7 @@ var sectorsSnapAbortCmd = &cli.Command{
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorAbortUpgrade(ctx, abi.SectorNumber(id))
return minerAPI.SectorAbortUpgrade(ctx, abi.SectorNumber(id))
},
}
@ -1581,14 +1581,14 @@ var sectorsStartSealCmd = &cli.Command{
Usage: "Manually start sealing a sector (filling any unused space with junk)",
ArgsUsage: "<sectorNum>",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass sector number")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -1596,7 +1596,7 @@ var sectorsStartSealCmd = &cli.Command{
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorStartSealing(ctx, abi.SectorNumber(id))
return minerAPI.SectorStartSealing(ctx, abi.SectorNumber(id))
},
}
@ -1605,14 +1605,14 @@ var sectorsSealDelayCmd = &cli.Command{
Usage: "Set the time, in minutes, that a new sector waits for deals before sealing starts",
ArgsUsage: "<minutes>",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 1 {
return xerrors.Errorf("must pass duration in minutes")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
hs, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -1622,7 +1622,7 @@ var sectorsSealDelayCmd = &cli.Command{
delay := hs * uint64(time.Minute)
return nodeApi.SectorSetSealDelay(ctx, time.Duration(delay))
return minerAPI.SectorSetSealDelay(ctx, time.Duration(delay))
},
}
@ -1708,14 +1708,14 @@ var sectorsUpdateCmd = &cli.Command{
if !cctx.Bool("really-do-it") {
return xerrors.Errorf("this is a command for advanced users, only use it if you are sure of what you are doing")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() < 2 {
return xerrors.Errorf("must pass sector number and new state")
if cctx.NArg() < 2 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number and new state"))
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
@ -1723,7 +1723,7 @@ var sectorsUpdateCmd = &cli.Command{
return xerrors.Errorf("could not parse sector number: %w", err)
}
_, err = nodeApi.SectorsStatus(ctx, abi.SectorNumber(id), false)
_, err = minerAPI.SectorsStatus(ctx, abi.SectorNumber(id), false)
if err != nil {
return xerrors.Errorf("sector %d not found, could not change state", id)
}
@ -1737,7 +1737,7 @@ var sectorsUpdateCmd = &cli.Command{
return nil
}
return nodeApi.SectorsUpdate(ctx, abi.SectorNumber(id), api.SectorState(cctx.Args().Get(1)))
return minerAPI.SectorsUpdate(ctx, abi.SectorNumber(id), api.SectorState(cctx.Args().Get(1)))
},
}
@ -1765,7 +1765,7 @@ var sectorsExpiredCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -1805,7 +1805,7 @@ var sectorsExpiredCmd = &cli.Command{
return xerrors.Errorf("getting lookback tipset: %w", err)
}
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerAPI.ActorAddress(ctx)
if err != nil {
return xerrors.Errorf("getting actor address: %w", err)
}
@ -1813,7 +1813,7 @@ var sectorsExpiredCmd = &cli.Command{
// toCheck is a working bitfield which will only contain terminated sectors
toCheck := bitfield.New()
{
sectors, err := nodeApi.SectorsList(ctx)
sectors, err := minerAPI.SectorsList(ctx)
if err != nil {
return xerrors.Errorf("getting sector list: %w", err)
}
@ -1890,7 +1890,7 @@ var sectorsExpiredCmd = &cli.Command{
err = toCheck.ForEach(func(u uint64) error {
s := abi.SectorNumber(u)
st, err := nodeApi.SectorsStatus(ctx, s, true)
st, err := minerAPI.SectorsStatus(ctx, s, true)
if err != nil {
fmt.Printf("%d:\tError getting status: %s\n", u, err)
return nil
@ -1933,7 +1933,7 @@ var sectorsExpiredCmd = &cli.Command{
for _, number := range toRemove {
fmt.Printf("Removing sector\t%s:\t", color.YellowString("%d", number))
err := nodeApi.SectorRemove(ctx, number)
err := minerAPI.SectorRemove(ctx, number)
if err != nil {
color.Red("ERROR: %s\n", err.Error())
} else {
@ -1965,7 +1965,7 @@ var sectorsBatchingPendingCommit = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -1973,7 +1973,7 @@ var sectorsBatchingPendingCommit = &cli.Command{
ctx := lcli.ReqContext(cctx)
if cctx.Bool("publish-now") {
res, err := api.SectorCommitFlush(ctx)
res, err := minerAPI.SectorCommitFlush(ctx)
if err != nil {
return xerrors.Errorf("flush: %w", err)
}
@ -2000,7 +2000,7 @@ var sectorsBatchingPendingCommit = &cli.Command{
return nil
}
pending, err := api.SectorCommitPending(ctx)
pending, err := minerAPI.SectorCommitPending(ctx)
if err != nil {
return xerrors.Errorf("getting pending deals: %w", err)
}
@ -2027,7 +2027,7 @@ var sectorsBatchingPendingPreCommit = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -2035,7 +2035,7 @@ var sectorsBatchingPendingPreCommit = &cli.Command{
ctx := lcli.ReqContext(cctx)
if cctx.Bool("publish-now") {
res, err := api.SectorPreCommitFlush(ctx)
res, err := minerAPI.SectorPreCommitFlush(ctx)
if err != nil {
return xerrors.Errorf("flush: %w", err)
}
@ -2058,7 +2058,7 @@ var sectorsBatchingPendingPreCommit = &cli.Command{
return nil
}
pending, err := api.SectorPreCommitPending(ctx)
pending, err := minerAPI.SectorPreCommitPending(ctx)
if err != nil {
return xerrors.Errorf("getting pending deals: %w", err)
}
@ -2079,14 +2079,14 @@ var sectorsRefreshPieceMatchingCmd = &cli.Command{
Name: "match-pending-pieces",
Usage: "force a refreshed match of pending pieces to open sectors without manually waiting for more deals",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if err := nodeApi.SectorMatchPendingPiecesToOpenSectors(ctx); err != nil {
if err := minerAPI.SectorMatchPendingPiecesToOpenSectors(ctx); err != nil {
return err
}
@ -2195,7 +2195,7 @@ var sectorsCompactPartitionsCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "compact partitions failed!")
return err
}
@ -2219,14 +2219,14 @@ var sectorsNumbersInfoCmd = &cli.Command{
Name: "info",
Usage: "view sector assigner state",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
am, err := api.SectorNumAssignerMeta(ctx)
am, err := minerAPI.SectorNumAssignerMeta(ctx)
if err != nil {
return err
}
@ -2253,14 +2253,14 @@ var sectorsNumbersReservationsCmd = &cli.Command{
Name: "reservations",
Usage: "list sector number reservations",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
rs, err := api.SectorNumReservations(ctx)
rs, err := minerAPI.SectorNumReservations(ctx)
if err != nil {
return err
}
@ -2303,15 +2303,15 @@ var sectorsNumbersReserveCmd = &cli.Command{
},
ArgsUsage: "[reservation name] [reserved ranges]",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 2 {
return xerrors.Errorf("expected 2 arguments: [reservation name] [reserved ranges]")
if cctx.NArg() != 2 {
return lcli.IncorrectNumArgs(cctx)
}
bf, err := strle.HumanRangesToBitField(cctx.Args().Get(1))
@ -2319,7 +2319,7 @@ var sectorsNumbersReserveCmd = &cli.Command{
return xerrors.Errorf("parsing ranges: %w", err)
}
return api.SectorNumReserve(ctx, cctx.Args().First(), bf, cctx.Bool("force"))
return minerAPI.SectorNumReserve(ctx, cctx.Args().First(), bf, cctx.Bool("force"))
},
}
@ -2328,17 +2328,17 @@ var sectorsNumbersFreeCmd = &cli.Command{
Usage: "remove sector number reservations",
ArgsUsage: "[reservation name]",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument: [reservation name]")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
return api.SectorNumFree(ctx, cctx.Args().First())
return minerAPI.SectorNumFree(ctx, cctx.Args().First())
},
}

View File

@ -109,7 +109,7 @@ over time
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -172,7 +172,7 @@ over time
}
}
return nodeApi.StorageAddLocal(ctx, p)
return minerApi.StorageAddLocal(ctx, p)
},
}
@ -186,7 +186,7 @@ var storageDetachCmd = &cli.Command{
},
ArgsUsage: "[path]",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -206,7 +206,7 @@ var storageDetachCmd = &cli.Command{
return xerrors.Errorf("pass --really-do-it to execute the action")
}
return nodeApi.StorageDetachLocal(ctx, p)
return minerApi.StorageDetachLocal(ctx, p)
},
}
@ -228,7 +228,7 @@ var storageRedeclareCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -241,11 +241,11 @@ var storageRedeclareCmd = &cli.Command{
if cctx.IsSet("id") {
id := storiface.ID(cctx.String("id"))
return nodeApi.StorageRedeclareLocal(ctx, &id, cctx.Bool("drop-missing"))
return minerApi.StorageRedeclareLocal(ctx, &id, cctx.Bool("drop-missing"))
}
if cctx.Bool("all") {
return nodeApi.StorageRedeclareLocal(ctx, nil, cctx.Bool("drop-missing"))
return minerApi.StorageRedeclareLocal(ctx, nil, cctx.Bool("drop-missing"))
}
return xerrors.Errorf("either --all or --id must be specified")
@ -270,19 +270,19 @@ var storageListCmd = &cli.Command{
color.NoColor = !cctx.Bool("color")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
st, err := nodeApi.StorageList(ctx)
st, err := minerApi.StorageList(ctx)
if err != nil {
return err
}
local, err := nodeApi.StorageLocal(ctx)
local, err := minerApi.StorageLocal(ctx)
if err != nil {
return err
}
@ -295,7 +295,7 @@ var storageListCmd = &cli.Command{
sorted := make([]fsInfo, 0, len(st))
for id, decls := range st {
st, err := nodeApi.StorageStat(ctx, id)
st, err := minerApi.StorageStat(ctx, id)
if err != nil {
sorted = append(sorted, fsInfo{ID: id, sectors: decls})
continue
@ -313,7 +313,7 @@ var storageListCmd = &cli.Command{
for _, s := range sorted {
var cnt [3]int
var cnt [5]int
for _, decl := range s.sectors {
for i := range cnt {
if decl.SectorFileType&(1<<i) != 0 {
@ -325,7 +325,7 @@ var storageListCmd = &cli.Command{
fmt.Printf("%s:\n", s.ID)
pingStart := time.Now()
st, err := nodeApi.StorageStat(ctx, s.ID)
st, err := minerApi.StorageStat(ctx, s.ID)
if err != nil {
fmt.Printf("\t%s: %s:\n", color.RedString("Error"), err)
continue
@ -392,13 +392,15 @@ var storageListCmd = &cli.Command{
color.New(percCol).Sprintf("%d%%", usedPercent))
}
fmt.Printf("\t%s; %s; %s; Reserved: %s\n",
fmt.Printf("\t%s; %s; %s; %s; %s; Reserved: %s\n",
color.YellowString("Unsealed: %d", cnt[0]),
color.GreenString("Sealed: %d", cnt[1]),
color.BlueString("Caches: %d", cnt[2]),
color.GreenString("Updated: %d", cnt[3]),
color.BlueString("Update-caches: %d", cnt[4]),
types.SizeStr(types.NewInt(uint64(st.Reserved))))
si, err := nodeApi.StorageInfo(ctx, s.ID)
si, err := minerApi.StorageInfo(ctx, s.ID)
if err != nil {
return err
}
@ -469,14 +471,14 @@ var storageFindCmd = &cli.Command{
Usage: "find sector in the storage system",
ArgsUsage: "[sector number]",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
ma, err := nodeApi.ActorAddress(ctx)
ma, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -500,27 +502,27 @@ var storageFindCmd = &cli.Command{
Number: abi.SectorNumber(snum),
}
u, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUnsealed, 0, false)
u, err := minerApi.StorageFindSector(ctx, sid, storiface.FTUnsealed, 0, false)
if err != nil {
return xerrors.Errorf("finding unsealed: %w", err)
}
s, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTSealed, 0, false)
s, err := minerApi.StorageFindSector(ctx, sid, storiface.FTSealed, 0, false)
if err != nil {
return xerrors.Errorf("finding sealed: %w", err)
}
c, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTCache, 0, false)
c, err := minerApi.StorageFindSector(ctx, sid, storiface.FTCache, 0, false)
if err != nil {
return xerrors.Errorf("finding cache: %w", err)
}
us, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUpdate, 0, false)
us, err := minerApi.StorageFindSector(ctx, sid, storiface.FTUpdate, 0, false)
if err != nil {
return xerrors.Errorf("finding sealed: %w", err)
}
uc, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUpdateCache, 0, false)
uc, err := minerApi.StorageFindSector(ctx, sid, storiface.FTUpdateCache, 0, false)
if err != nil {
return xerrors.Errorf("finding cache: %w", err)
}
@ -582,7 +584,7 @@ var storageFindCmd = &cli.Command{
sts.updatecache = true
}
local, err := nodeApi.StorageLocal(ctx)
local, err := minerApi.StorageLocal(ctx)
if err != nil {
return err
}
@ -644,7 +646,7 @@ var storageListSectorsCmd = &cli.Command{
color.NoColor = !cctx.Bool("color")
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -658,12 +660,12 @@ var storageListSectorsCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
sectors, err := nodeApi.SectorsList(ctx)
sectors, err := minerApi.SectorsList(ctx)
if err != nil {
return xerrors.Errorf("listing sectors: %w", err)
}
maddr, err := nodeApi.ActorAddress(ctx)
maddr, err := minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -706,7 +708,7 @@ var storageListSectorsCmd = &cli.Command{
var list []entry
for _, sector := range sectors {
st, err := nodeApi.SectorsStatus(ctx, sector, false)
st, err := minerApi.SectorsStatus(ctx, sector, false)
if err != nil {
return xerrors.Errorf("getting sector status for sector %d: %w", sector, err)
}
@ -716,7 +718,7 @@ var storageListSectorsCmd = &cli.Command{
}
for _, ft := range storiface.PathTypes {
si, err := nodeApi.StorageFindSector(ctx, sid(sector), ft, mi.SectorSize, false)
si, err := minerApi.StorageFindSector(ctx, sid(sector), ft, mi.SectorSize, false)
if err != nil {
return xerrors.Errorf("find sector %d: %w", sector, err)
}
@ -869,7 +871,7 @@ var storageCleanupCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
@ -884,7 +886,7 @@ var storageCleanupCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
if cctx.Bool("removed") {
if err := cleanupRemovedSectorData(ctx, api, napi); err != nil {
if err := cleanupRemovedSectorData(ctx, minerAPI, napi); err != nil {
return err
}
}
@ -962,20 +964,20 @@ var storageLocks = &cli.Command{
Name: "locks",
Usage: "show active sector locks",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetStorageMinerAPI(cctx)
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
locks, err := api.StorageGetLocks(ctx)
locks, err := minerAPI.StorageGetLocks(ctx)
if err != nil {
return err
}
for _, lock := range locks.Locks {
st, err := api.SectorsStatus(ctx, lock.Sector.Number, false)
st, err := minerAPI.SectorsStatus(ctx, lock.Sector.Number, false)
if err != nil {
return xerrors.Errorf("getting sector status(%d): %w", lock.Sector.Number, err)
}

View File

@ -93,7 +93,7 @@ var genesisAddMinerCmd = &cli.Command{
Description: "add genesis miner",
Flags: []cli.Flag{},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
if cctx.NArg() != 2 {
return xerrors.New("seed genesis add-miner [genesis.json] [preseal.json]")
}
@ -181,7 +181,7 @@ type GenAccountEntry struct {
var genesisAddMsigsCmd = &cli.Command{
Name: "add-msigs",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 2 {
if cctx.NArg() < 2 {
return fmt.Errorf("must specify template file and csv file with accounts")
}
@ -329,7 +329,7 @@ var genesisSetVRKCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
if cctx.NArg() != 1 {
return fmt.Errorf("must specify template file")
}
@ -425,7 +425,7 @@ var genesisSetRemainderCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
if cctx.NArg() != 1 {
return fmt.Errorf("must specify template file")
}
@ -519,7 +519,7 @@ var genesisSetActorVersionCmd = &cli.Command{
},
ArgsUsage: "<genesisFile>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
if cctx.NArg() != 1 {
return fmt.Errorf("must specify genesis file")
}
@ -597,7 +597,7 @@ var genesisSetVRKSignersCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
if cctx.NArg() != 1 {
return fmt.Errorf("must specify template file")
}

View File

@ -38,7 +38,7 @@ var actorCmd = &cli.Command{
var actorWithdrawCmd = &cli.Command{
Name: "withdraw",
Usage: "withdraw available balance",
Usage: "withdraw available balance to beneficiary",
ArgsUsage: "[amount (FIL)]",
Flags: []cli.Flag{
&cli.StringFlag{
@ -50,6 +50,10 @@ var actorWithdrawCmd = &cli.Command{
Usage: "number of block confirmations to wait for",
Value: int(build.MessageConfidence),
},
&cli.BoolFlag{
Name: "beneficiary",
Usage: "send withdraw message from the beneficiary address",
},
},
Action: func(cctx *cli.Context) error {
var maddr address.Address
@ -70,13 +74,13 @@ var actorWithdrawCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
if maddr.Empty() {
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
maddr, err = minerAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -113,9 +117,16 @@ var actorWithdrawCmd = &cli.Command{
return err
}
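// Pick the sender for the withdraw message: the owner by default, or the beneficiary when --beneficiary is set.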
var sender address.Address
if cctx.IsSet("beneficiary") {
sender = mi.Beneficiary
} else {
sender = mi.Owner
}
smsg, err := nodeAPI.MpoolPushMessage(ctx, &types.Message{
To: maddr,
From: mi.Owner,
From: sender,
Value: types.NewInt(0),
Method: builtin.MethodsMiner.WithdrawBalance,
Params: params,
@ -133,7 +144,7 @@ var actorWithdrawCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "withdrawal failed!")
return err
}
@ -181,7 +192,7 @@ var actorSetOwnerCmd = &cli.Command{
}
if cctx.NArg() != 2 {
return fmt.Errorf("must pass new owner address and sender address")
return lcli.IncorrectNumArgs(cctx)
}
var maddr address.Address
@ -222,13 +233,13 @@ var actorSetOwnerCmd = &cli.Command{
}
if maddr.Empty() {
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
maddr, err = minerAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -268,7 +279,7 @@ var actorSetOwnerCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Println("owner change failed!")
return err
}
@ -328,13 +339,13 @@ var actorControlList = &cli.Command{
ctx := lcli.ReqContext(cctx)
if maddr.Empty() {
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
maddr, err = minerAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -367,7 +378,9 @@ var actorControlList = &cli.Command{
kstr := k.String()
if !cctx.Bool("verbose") {
kstr = kstr[:9] + "..."
if len(kstr) > 9 {
kstr = kstr[:6] + "..."
}
}
bstr := types.FIL(b).String()
@ -437,13 +450,13 @@ var actorControlSet = &cli.Command{
ctx := lcli.ReqContext(cctx)
if maddr.Empty() {
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
maddr, err = minerAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -579,13 +592,13 @@ var actorProposeChangeWorker = &cli.Command{
}
if maddr.Empty() {
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
maddr, err = minerAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -636,7 +649,7 @@ var actorProposeChangeWorker = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Propose worker change failed!")
return err
}
@ -709,13 +722,13 @@ var actorConfirmChangeWorker = &cli.Command{
}
if maddr.Empty() {
minerAPI, closer, err := lcli.GetStorageMinerAPI(cctx)
minerApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
maddr, err = minerAPI.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -757,7 +770,7 @@ var actorConfirmChangeWorker = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Worker change failed!")
return err
}

View File

@ -24,7 +24,7 @@ var base16Cmd = &cli.Command{
Action: func(cctx *cli.Context) error {
var input io.Reader
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
input = os.Stdin
} else {
input = strings.NewReader(cctx.Args().First())

View File

@ -24,7 +24,7 @@ var base32Cmd = &cli.Command{
Action: func(cctx *cli.Context) error {
var input io.Reader
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
input = os.Stdin
} else {
input = strings.NewReader(cctx.Args().First())

View File

@ -32,7 +32,7 @@ var base64Cmd = &cli.Command{
Action: func(cctx *cli.Context) error {
var input io.Reader
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
input = os.Stdin
} else {
input = strings.NewReader(cctx.Args().First())

View File

@ -56,7 +56,7 @@ var computeStateRangeCmd = &cli.Command{
ArgsUsage: "[START_TIPSET_REF] [END_TIPSET_REF]",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 2 {
return fmt.Errorf("expected two arguments: a start and an end tipset")
return lcli.IncorrectNumArgs(cctx)
}
api, closer, err := lcli.GetFullNodeAPI(cctx)

View File

@ -84,7 +84,7 @@ var consensusCheckCmd = &cli.Command{
filePath := cctx.Args().First()
var input *bufio.Reader
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
input = bufio.NewReader(os.Stdin)
} else {
var err error

View File

@ -20,6 +20,7 @@ import (
"go.uber.org/multierr"
"golang.org/x/xerrors"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/lib/backupds"
"github.com/filecoin-project/lotus/node/repo"
)
@ -171,8 +172,8 @@ var datastoreBackupStatCmd = &cli.Command{
Description: "validate and print info about datastore backup",
ArgsUsage: "[file]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
f, err := os.Open(cctx.Args().First())
@ -220,8 +221,8 @@ var datastoreBackupListCmd = &cli.Command{
},
ArgsUsage: "[file]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
f, err := os.Open(cctx.Args().First())
@ -308,7 +309,7 @@ var datastoreRewriteCmd = &cli.Command{
ArgsUsage: "source destination",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 2 {
return xerrors.Errorf("expected 2 arguments, got %d", cctx.NArg())
return lcli.IncorrectNumArgs(cctx)
}
fromPath, err := homedir.Expand(cctx.Args().Get(0))
if err != nil {

View File

@ -31,7 +31,7 @@ var diffStateTrees = &cli.Command{
ctx := lcli.ReqContext(cctx)
if cctx.NArg() != 2 {
return xerrors.Errorf("expected two state-tree roots")
return lcli.IncorrectNumArgs(cctx)
}
argA := cctx.Args().Get(0)

View File

@ -38,8 +38,8 @@ var exportCarCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return lcli.ShowHelp(cctx, fmt.Errorf("must specify file name and object"))
if cctx.NArg() != 2 {
return lcli.IncorrectNumArgs(cctx)
}
outfile := cctx.Args().First()

View File

@ -149,7 +149,7 @@ var keyinfoImportCmd = &cli.Command{
flagRepo := cctx.String("repo")
var input io.Reader
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
input = os.Stdin
} else {
var err error
@ -261,7 +261,7 @@ var keyinfoInfoCmd = &cli.Command{
format := cctx.String("format")
var input io.Reader
if cctx.Args().Len() == 0 {
if cctx.NArg() == 0 {
input = os.Stdin
} else {
var err error

View File

@ -300,7 +300,7 @@ var ledgerNewAddressesCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
if cctx.NArg() != 1 {
return fmt.Errorf("must pass account index")
return lcli.IncorrectNumArgs(cctx)
}
index, err := strconv.ParseUint(cctx.Args().First(), 10, 32)

View File

@ -73,6 +73,7 @@ func main() {
migrationsCmd,
diffCmd,
itestdCmd,
msigCmd,
}
app := &cli.App{

View File

@ -16,6 +16,7 @@ import (
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/node/repo"
"github.com/filecoin-project/lotus/storage/sealer/ffiwrapper"
)
@ -34,7 +35,7 @@ var migrationsCmd = &cli.Command{
ctx := context.TODO()
if cctx.NArg() != 1 {
return fmt.Errorf("must pass block cid")
return lcli.IncorrectNumArgs(cctx)
}
blkCid, err := cid.Decode(cctx.Args().First())

View File

@ -12,9 +12,8 @@ import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/builtin"
miner2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/miner"
miner5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/miner"
msig5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/multisig"
"github.com/filecoin-project/go-state-types/builtin/v9/miner"
"github.com/filecoin-project/go-state-types/builtin/v9/multisig"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors"
@ -33,6 +32,8 @@ var minerMultisigsCmd = &cli.Command{
mmProposeChangeWorker,
mmConfirmChangeWorker,
mmProposeControlSet,
mmProposeChangeBeneficiary,
mmConfirmChangeBeneficiary,
},
Flags: []cli.Flag{
&cli.StringFlag{
@ -80,7 +81,7 @@ var mmProposeWithdrawBalance = &cli.Command{
return err
}
sp, err := actors.SerializeParams(&miner5.WithdrawBalanceParams{
sp, err := actors.SerializeParams(&miner.WithdrawBalanceParams{
AmountRequested: abi.TokenAmount(val),
})
if err != nil {
@ -101,12 +102,12 @@ var mmProposeWithdrawBalance = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Propose owner change tx failed!")
return err
}
var retval msig5.ProposeReturn
var retval multisig.ProposeReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal propose return value: %w", err)
}
@ -128,7 +129,7 @@ var mmApproveWithdrawBalance = &cli.Command{
ArgsUsage: "[amount txnId proposer]",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 3 {
return fmt.Errorf("must pass amount, txn Id, and proposer address")
return lcli.IncorrectNumArgs(cctx)
}
api, closer, err := lcli.GetFullNodeAPI(cctx)
@ -149,7 +150,7 @@ var mmApproveWithdrawBalance = &cli.Command{
return err
}
sp, err := actors.SerializeParams(&miner5.WithdrawBalanceParams{
sp, err := actors.SerializeParams(&miner.WithdrawBalanceParams{
AmountRequested: abi.TokenAmount(val),
})
if err != nil {
@ -180,12 +181,12 @@ var mmApproveWithdrawBalance = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Approve owner change tx failed!")
return err
}
var retval msig5.ApproveReturn
var retval multisig.ApproveReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal approve return value: %w", err)
}
@ -261,12 +262,12 @@ var mmProposeChangeOwner = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Propose owner change tx failed!")
return err
}
var retval msig5.ProposeReturn
var retval multisig.ProposeReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal propose return value: %w", err)
}
@ -287,7 +288,7 @@ var mmApproveChangeOwner = &cli.Command{
ArgsUsage: "[newOwner txnId proposer]",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 3 {
return fmt.Errorf("must pass new owner address, txn Id, and proposer address")
return lcli.IncorrectNumArgs(cctx)
}
api, closer, err := lcli.GetFullNodeAPI(cctx)
@ -351,12 +352,12 @@ var mmApproveChangeOwner = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Approve owner change tx failed!")
return err
}
var retval msig5.ApproveReturn
var retval multisig.ApproveReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal approve return value: %w", err)
}
@ -421,7 +422,7 @@ var mmProposeChangeWorker = &cli.Command{
}
}
cwp := &miner2.ChangeWorkerAddressParams{
cwp := &miner.ChangeWorkerAddressParams{
NewWorker: newAddr,
NewControlAddrs: mi.ControlAddresses,
}
@ -448,12 +449,12 @@ var mmProposeChangeWorker = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Propose worker change tx failed!")
return err
}
var retval msig5.ProposeReturn
var retval multisig.ProposeReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal propose return value: %w", err)
}
@ -469,6 +470,114 @@ var mmProposeChangeWorker = &cli.Command{
},
}
var mmProposeChangeBeneficiary = &cli.Command{
Name: "propose-change-beneficiary",
Usage: "Propose a beneficiary address change",
ArgsUsage: "[beneficiaryAddress quota expiration]",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "really-do-it",
Usage: "Actually send transaction performing the action",
Value: false,
},
&cli.BoolFlag{
Name: "overwrite-pending-change",
Usage: "Overwrite the current beneficiary change proposal",
Value: false,
},
},
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 3 {
return lcli.IncorrectNumArgs(cctx)
}
api, acloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return xerrors.Errorf("getting fullnode api: %w", err)
}
defer acloser()
ctx := lcli.ReqContext(cctx)
na, err := address.NewFromString(cctx.Args().Get(0))
if err != nil {
return xerrors.Errorf("parsing beneficiary address: %w", err)
}
newAddr, err := api.StateLookupID(ctx, na, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("looking up new beneficiary address: %w", err)
}
quota, err := types.ParseFIL(cctx.Args().Get(1))
if err != nil {
return xerrors.Errorf("parsing quota: %w", err)
}
expiration, err := types.BigFromString(cctx.Args().Get(2))
if err != nil {
return xerrors.Errorf("parsing expiration: %w", err)
}
multisigAddr, sender, minerAddr, err := getInputs(cctx)
if err != nil {
return err
}
mi, err := api.StateMinerInfo(ctx, minerAddr, types.EmptyTSK)
if err != nil {
return err
}
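// Warn when a pending beneficiary change already exists; it is only replaced when --overwrite-pending-change is passed.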
if mi.PendingBeneficiaryTerm != nil {
fmt.Println("WARNING: replacing Pending Beneficiary Term of:")
fmt.Println("Beneficiary: ", mi.PendingBeneficiaryTerm.NewBeneficiary)
fmt.Println("Quota:", mi.PendingBeneficiaryTerm.NewQuota)
fmt.Println("Expiration Epoch:", mi.PendingBeneficiaryTerm.NewExpiration)
if !cctx.Bool("overwrite-pending-change") {
return fmt.Errorf("must pass --overwrite-pending-change to replace current pending beneficiary change. Please review CAREFULLY")
}
}
if !cctx.Bool("really-do-it") {
fmt.Println("Pass --really-do-it to actually execute this action. Review what you're about to approve CAREFULLY please")
return nil
}
params := &miner.ChangeBeneficiaryParams{
NewBeneficiary: newAddr,
NewQuota: abi.TokenAmount(quota),
NewExpiration: abi.ChainEpoch(expiration.Int64()),
}
sp, err := actors.SerializeParams(params)
if err != nil {
return xerrors.Errorf("serializing params: %w", err)
}
pcid, err := api.MsigPropose(ctx, multisigAddr, minerAddr, big.Zero(), sender, uint64(builtin.MethodsMiner.ChangeBeneficiary), sp)
if err != nil {
return xerrors.Errorf("proposing message: %w", err)
}
fmt.Println("Propose Message CID: ", pcid)
// wait for it to get mined into a block
wait, err := api.StateWaitMsg(ctx, pcid, build.MessageConfidence)
if err != nil {
return xerrors.Errorf("waiting for message to be included in block: %w", err)
}
// check it executed successfully
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("propose beneficiary change failed")
}
return nil
},
}
var mmConfirmChangeWorker = &cli.Command{
Name: "confirm-change-worker",
Usage: "Confirm an worker address change",
@ -532,12 +641,12 @@ var mmConfirmChangeWorker = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Propose worker change tx failed!")
return err
}
var retval msig5.ProposeReturn
var retval multisig.ProposeReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal propose return value: %w", err)
}
@ -552,6 +661,98 @@ var mmConfirmChangeWorker = &cli.Command{
},
}
var mmConfirmChangeBeneficiary = &cli.Command{
Name: "confirm-change-beneficiary",
Usage: "Confirm a beneficiary address change",
ArgsUsage: "[minerAddress]",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "really-do-it",
Usage: "Actually send transaction performing the action",
Value: false,
},
},
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
api, acloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return xerrors.Errorf("getting fullnode api: %w", err)
}
defer acloser()
ctx := lcli.ReqContext(cctx)
multisigAddr, sender, minerAddr, err := getInputs(cctx)
if err != nil {
return err
}
mi, err := api.StateMinerInfo(ctx, minerAddr, types.EmptyTSK)
if err != nil {
return err
}
if mi.PendingBeneficiaryTerm == nil {
return fmt.Errorf("no pending beneficiary term found for miner %s", minerAddr)
}
fmt.Println("Confirming Pending Beneficiary Term of:")
fmt.Println("Beneficiary: ", mi.PendingBeneficiaryTerm.NewBeneficiary)
fmt.Println("Quota:", mi.PendingBeneficiaryTerm.NewQuota)
fmt.Println("Expiration Epoch:", mi.PendingBeneficiaryTerm.NewExpiration)
if !cctx.Bool("really-do-it") {
fmt.Println("Pass --really-do-it to actually execute this action. Review what you're about to approve CAREFULLY please")
return nil
}
params := &miner.ChangeBeneficiaryParams{
NewBeneficiary: mi.PendingBeneficiaryTerm.NewBeneficiary,
NewQuota: mi.PendingBeneficiaryTerm.NewQuota,
NewExpiration: mi.PendingBeneficiaryTerm.NewExpiration,
}
sp, err := actors.SerializeParams(params)
if err != nil {
return xerrors.Errorf("serializing params: %w", err)
}
pcid, err := api.MsigPropose(ctx, multisigAddr, minerAddr, big.Zero(), sender, uint64(builtin.MethodsMiner.ChangeBeneficiary), sp)
if err != nil {
return xerrors.Errorf("proposing message: %w", err)
}
fmt.Println("Confirm Message CID:", pcid)
// wait for it to get mined into a block
wait, err := api.StateWaitMsg(ctx, pcid, build.MessageConfidence)
if err != nil {
return xerrors.Errorf("waiting for message to be included in block: %w", err)
}
// check it executed successfully
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("confirm beneficiary change failed with code %d", wait.Receipt.ExitCode)
}
updatedMinerInfo, err := api.StateMinerInfo(ctx, minerAddr, types.EmptyTSK)
if err != nil {
return err
}
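// The change only takes effect once every required party has confirmed; until then it remains pending.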
if updatedMinerInfo.PendingBeneficiaryTerm == nil && updatedMinerInfo.Beneficiary == mi.PendingBeneficiaryTerm.NewBeneficiary {
fmt.Println("Beneficiary address successfully changed")
} else {
fmt.Println("Beneficiary address change awaiting additional confirmations")
}
return nil
},
}
var mmProposeControlSet = &cli.Command{
Name: "propose-control-set",
Usage: "Set control address(-es)",
@ -623,7 +824,7 @@ var mmProposeControlSet = &cli.Command{
}
}
cwp := &miner2.ChangeWorkerAddressParams{
cwp := &miner.ChangeWorkerAddressParams{
NewWorker: mi.Worker,
NewControlAddrs: toSet,
}
@ -647,12 +848,12 @@ var mmProposeControlSet = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Propose worker change tx failed!")
return err
}
var retval msig5.ProposeReturn
var retval multisig.ProposeReturn
if err := retval.UnmarshalCBOR(bytes.NewReader(wait.Receipt.Return)); err != nil {
return fmt.Errorf("failed to unmarshal propose return value: %w", err)
}

View File

@ -20,6 +20,7 @@ import (
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/node/repo"
)
@ -35,7 +36,7 @@ var minerPeeridCmd = &cli.Command{
ctx := context.TODO()
if cctx.NArg() != 2 {
return fmt.Errorf("must pass peer id and state root")
return lcli.IncorrectNumArgs(cctx)
}
pid, err := peer.Decode(cctx.Args().Get(0))

View File

@ -135,8 +135,8 @@ var minerCreateCmd = &cli.Command{
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() != 4 {
return xerrors.Errorf("expected 4 args (sender owner worker sectorSize)")
if cctx.NArg() != 4 {
return lcli.IncorrectNumArgs(cctx)
}
sender, err := address.NewFromString(cctx.Args().First())
@ -273,8 +273,8 @@ var minerUnpackInfoCmd = &cli.Command{
Usage: "unpack miner info all dump",
ArgsUsage: "[allinfo.txt] [dir]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return xerrors.Errorf("expected 2 args")
if cctx.NArg() != 2 {
return lcli.IncorrectNumArgs(cctx)
}
src, err := homedir.Expand(cctx.Args().Get(0))
@ -475,7 +475,7 @@ var sendInvalidWindowPoStCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Invalid PoST message failed!")
return err
}
@ -488,9 +488,10 @@ var generateAndSendConsensusFaultCmd = &cli.Command{
Name: "generate-and-send-consensus-fault",
Usage: "Provided a block CID mined by the miner, will create another block at the same height, and send both block headers to generate a consensus fault.",
Description: `Note: This is meant for testing purposes and should NOT be used on mainnet or you will be slashed`,
ArgsUsage: "blockCID",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return xerrors.Errorf("expected 1 arg (blockCID)")
return lcli.IncorrectNumArgs(cctx)
}
blockCid, err := cid.Parse(cctx.Args().First())
@ -574,7 +575,7 @@ var generateAndSendConsensusFaultCmd = &cli.Command{
}
// check it executed successfully
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
fmt.Fprintln(cctx.App.Writer, "Report consensus fault failed!")
return err
}

View File

@ -27,8 +27,8 @@ var msgCmd = &cli.Command{
Usage: "Translate message between various formats",
ArgsUsage: "Message in any form",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("expected 1 argument")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
msg, err := messageFromString(cctx, cctx.Args().First())

131
cmd/lotus-shed/msig.go Normal file
View File

@ -0,0 +1,131 @@
package main
import (
"context"
"encoding/json"
"fmt"
"io"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
"github.com/urfave/cli/v2"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/multisig"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/node/repo"
)
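// msigBriefInfo summarizes a multisig actor: its ID address, signers, balance, and approval threshold.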
type msigBriefInfo struct {
ID address.Address
Signer interface{}
Balance abi.TokenAmount
Threshold uint64
}
var msigCmd = &cli.Command{
Name: "msig",
Subcommands: []*cli.Command{
multisigGetAllCmd,
},
}
var multisigGetAllCmd = &cli.Command{
Name: "all",
Usage: "get all multisig actor on chain with id, signers, threshold and balance at a tipset",
ArgsUsage: "[state root]",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Value: "~/.lotus",
},
},
Action: func(cctx *cli.Context) error {
ctx := context.TODO()
if !cctx.Args().Present() {
return fmt.Errorf("must pass state root")
}
sroot, err := cid.Decode(cctx.Args().First())
if err != nil {
return fmt.Errorf("failed to parse input: %w", err)
}
fsrepo, err := repo.NewFS(cctx.String("repo"))
if err != nil {
return err
}
lkrepo, err := fsrepo.Lock(repo.FullNode)
if err != nil {
return err
}
defer lkrepo.Close() //nolint:errcheck
bs, err := lkrepo.Blockstore(ctx, repo.UniversalBlockstore)
if err != nil {
return fmt.Errorf("failed to open blockstore: %w", err)
}
defer func() {
if c, ok := bs.(io.Closer); ok {
if err := c.Close(); err != nil {
log.Warnf("failed to close blockstore: %s", err)
}
}
}()
mds, err := lkrepo.Datastore(context.Background(), "/metadata")
if err != nil {
return err
}
cs := store.NewChainStore(bs, bs, mds, filcns.Weight, nil)
defer cs.Close() //nolint:errcheck
cst := cbor.NewCborStore(bs)
store := adt.WrapStore(ctx, cst)
tree, err := state.LoadStateTree(cst, sroot)
if err != nil {
return err
}
var msigActorsInfo []msigBriefInfo
err = tree.ForEach(func(addr address.Address, act *types.Actor) error {
if builtin.IsMultisigActor(act.Code) {
ms, err := multisig.Load(store, act)
if err != nil {
return fmt.Errorf("load msig failed %v", err)
}
signers, _ := ms.Signers()
threshold, _ := ms.Threshold()
info := msigBriefInfo{
ID: addr,
Signer: signers,
Balance: act.Balance,
Threshold: threshold,
}
msigActorsInfo = append(msigActorsInfo, info)
}
return nil
})
if err != nil {
return err
}
out, err := json.MarshalIndent(msigActorsInfo, "", " ")
if err != nil {
return err
}
fmt.Println(string(out))
return nil
},
}

View File

@ -11,6 +11,8 @@ import (
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
prooftypes "github.com/filecoin-project/go-state-types/proof"
lcli "github.com/filecoin-project/lotus/cli"
)
var proofsCmd = &cli.Command{
@ -42,8 +44,8 @@ var verifySealProofCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return fmt.Errorf("must specify commR, commD, and proof to verify")
if cctx.NArg() != 3 {
return lcli.IncorrectNumArgs(cctx)
}
commr, err := cid.Decode(cctx.Args().Get(0))

View File

@ -112,8 +112,8 @@ var rpcCmd = &cli.Command{
}
if cctx.Args().Present() {
if cctx.Args().Len() > 2 {
return xerrors.Errorf("expected 1 or 2 arguments: method [params]")
if cctx.NArg() > 2 {
return lcli.ShowHelp(cctx, xerrors.Errorf("expected 1 or 2 arguments: method [params]"))
}
params := cctx.Args().Get(1)

View File

@ -58,10 +58,14 @@ var terminateSectorCmd = &cli.Command{
Name: "really-do-it",
Usage: "pass this flag if you know what you are doing",
},
&cli.StringFlag{
Name: "from",
Usage: "specify the address to send the terminate message from",
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 1 {
return fmt.Errorf("at least one sector must be specified")
if cctx.NArg() < 1 {
return lcli.ShowHelp(cctx, fmt.Errorf("at least one sector must be specified"))
}
var maddr address.Address
@ -86,13 +90,13 @@ var terminateSectorCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
if maddr.Empty() {
api, acloser, err := lcli.GetStorageMinerAPI(cctx)
minerApi, acloser, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer acloser()
maddr, err = api.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}
@ -137,8 +141,19 @@ var terminateSectorCmd = &cli.Command{
return xerrors.Errorf("serializing params: %w", err)
}
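// Send the terminate message from --from when provided, otherwise default to the worker address.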
var fromAddr address.Address
if from := cctx.String("from"); from != "" {
var err error
fromAddr, err = address.NewFromString(from)
if err != nil {
return fmt.Errorf("parsing address %s: %w", from, err)
}
} else {
fromAddr = mi.Worker
}
smsg, err := nodeApi.MpoolPushMessage(ctx, &types.Message{
From: mi.Owner,
From: fromAddr,
To: maddr,
Method: builtin.MethodsMiner.TerminateSectors,
@ -156,7 +171,7 @@ var terminateSectorCmd = &cli.Command{
return err
}
if wait.Receipt.ExitCode != 0 {
if wait.Receipt.ExitCode.IsError() {
return fmt.Errorf("terminate sectors message returned exit %d", wait.Receipt.ExitCode)
}
@ -185,8 +200,8 @@ var terminateSectorPenaltyEstimationCmd = &cli.Command{
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 1 {
return fmt.Errorf("at least one sector must be specified")
if cctx.NArg() < 1 {
return lcli.ShowHelp(cctx, fmt.Errorf("at least one sector must be specified"))
}
var maddr address.Address
@ -207,13 +222,13 @@ var terminateSectorPenaltyEstimationCmd = &cli.Command{
ctx := lcli.ReqContext(cctx)
if maddr.Empty() {
api, acloser, err := lcli.GetStorageMinerAPI(cctx)
minerApi, acloser, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer acloser()
maddr, err = api.ActorAddress(ctx)
maddr, err = minerApi.ActorAddress(ctx)
if err != nil {
return err
}

View File

@ -34,7 +34,7 @@ var sendCsvCmd = &cli.Command{
ArgsUsage: "[csvfile]",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return xerrors.New("must supply path to csv file")
return lcli.IncorrectNumArgs(cctx)
}
api, closer, err := lcli.GetFullNodeAPIV1(cctx)

View File

@ -31,8 +31,8 @@ var sigsVerifyBlsMsgsCmd = &cli.Command{
Description: "given a block, verifies the bls signature of the messages in the block",
Usage: "<blockCid>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return xerrors.Errorf("usage: <blockCid>")
if cctx.NArg() != 1 {
return lcli.IncorrectNumArgs(cctx)
}
api, closer, err := lcli.GetFullNodeAPI(cctx)
@ -101,8 +101,8 @@ var sigsVerifyVoteCmd = &cli.Command{
Usage: "<FIPnumber> <signingAddress> <signature>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return xerrors.Errorf("usage: verify-vote <FIPnumber> <signingAddress> <signature>")
if cctx.NArg() != 3 {
return lcli.IncorrectNumArgs(cctx)
}
fip, err := strconv.ParseInt(cctx.Args().First(), 10, 64)

View File

@ -174,8 +174,8 @@ var staterootStatCmd = &cli.Command{
}
outcap := 10
if cctx.Args().Len() > outcap {
outcap = cctx.Args().Len()
if cctx.NArg() > outcap {
outcap = cctx.NArg()
}
if len(infos) < outcap {
outcap = len(infos)

View File

@ -38,10 +38,8 @@ var syncValidateCmd = &cli.Command{
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() < 1 {
fmt.Println("usage: <blockCid1> <blockCid2>...")
fmt.Println("At least one block cid must be provided")
return nil
if cctx.NArg() < 1 {
return lcli.ShowHelp(cctx, fmt.Errorf("at least one block cid must be provided"))
}
args := cctx.Args().Slice()
@ -75,7 +73,7 @@ var syncScrapePowerCmd = &cli.Command{
Usage: "given a height and a tipset, reports what percentage of mining power had a winning ticket between the tipset and height",
ArgsUsage: "[height tipsetkey]",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() < 1 {
if cctx.NArg() < 1 {
fmt.Println("usage: <height> [blockCid1 blockCid2...]")
fmt.Println("Any CIDs passed after the height will be used as the tipset key")
fmt.Println("If no block CIDs are provided, chain head will be used")
@ -90,10 +88,8 @@ var syncScrapePowerCmd = &cli.Command{
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.Args().Len() < 1 {
fmt.Println("usage: <blockCid1> <blockCid2>...")
fmt.Println("At least one block cid must be provided")
return nil
if cctx.NArg() < 1 {
return lcli.ShowHelp(cctx, fmt.Errorf("at least one block cid must be provided"))
}
h, err := strconv.ParseInt(cctx.Args().Get(0), 10, 0)

View File

@ -23,6 +23,7 @@ import (
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/node/repo"
)
@ -40,7 +41,7 @@ var terminationsCmd = &cli.Command{
ctx := context.TODO()
if cctx.NArg() != 2 {
return fmt.Errorf("must pass block cid && lookback period")
return lcli.IncorrectNumArgs(cctx)
}
blkCid, err := cid.Decode(cctx.Args().First())

View File

@ -46,8 +46,8 @@ var verifRegAddVerifierFromMsigCmd = &cli.Command{
Usage: "make a given account a verifier",
ArgsUsage: "<message sender> <new verifier> <allowance>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return fmt.Errorf("must specify three arguments: sender, verifier, and allowance")
if cctx.NArg() != 3 {
return lcli.IncorrectNumArgs(cctx)
}
sender, err := address.NewFromString(cctx.Args().Get(0))
@ -104,7 +104,7 @@ var verifRegAddVerifierFromMsigCmd = &cli.Command{
return err
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("failed to add verifier: %d", mwait.Receipt.ExitCode)
}
@ -119,8 +119,8 @@ var verifRegAddVerifierFromAccountCmd = &cli.Command{
Usage: "make a given account a verifier",
ArgsUsage: "<verifier root key> <new verifier> <allowance>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return fmt.Errorf("must specify three arguments: sender, verifier, and allowance")
if cctx.NArg() != 3 {
return lcli.IncorrectNumArgs(cctx)
}
sender, err := address.NewFromString(cctx.Args().Get(0))
@ -170,7 +170,7 @@ var verifRegAddVerifierFromAccountCmd = &cli.Command{
return err
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("failed to add verified client: %d", mwait.Receipt.ExitCode)
}
@ -201,8 +201,8 @@ var verifRegVerifyClientCmd = &cli.Command{
return err
}
if cctx.Args().Len() != 2 {
return fmt.Errorf("must specify two arguments: address and allowance")
if cctx.NArg() != 2 {
return lcli.IncorrectNumArgs(cctx)
}
target, err := address.NewFromString(cctx.Args().Get(0))
@ -246,7 +246,7 @@ var verifRegVerifyClientCmd = &cli.Command{
return err
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("failed to add verified client: %d", mwait.Receipt.ExitCode)
}
@ -418,8 +418,8 @@ var verifRegRemoveVerifiedClientDataCapCmd = &cli.Command{
Usage: "Remove data cap from verified client",
ArgsUsage: "<message sender> <client address> <allowance to remove> <verifier 1 address> <verifier 1 signature> <verifier 2 address> <verifier 2 signature>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 7 {
return fmt.Errorf("must specify seven arguments: sender, client, allowance to remove, verifier 1 address, verifier 1 signature, verifier 2 address, verifier 2 signature")
if cctx.NArg() != 7 {
return lcli.IncorrectNumArgs(cctx)
}
srv, err := lcli.GetFullNodeServices(cctx)
@ -555,7 +555,7 @@ var verifRegRemoveVerifiedClientDataCapCmd = &cli.Command{
return err
}
if mwait.Receipt.ExitCode != 0 {
if mwait.Receipt.ExitCode.IsError() {
return fmt.Errorf("failed to removed verified data cap: %d", mwait.Receipt.ExitCode)
}

View File

@ -31,12 +31,13 @@ const (
// has.
// 5 per tipset, but we effectively get 4 blocks worth of messages.
expectedBlocks = 4
// TODO: This will produce invalid blocks but it will accurately model the amount of gas
// we're willing to use per-tipset.
// A more correct approach would be to produce 5 blocks. We can do that later.
targetGas = build.BlockGasTarget * expectedBlocks
)
// TODO: This will produce invalid blocks but it will accurately model the amount of gas
// we're willing to use per-tipset.
// A more correct approach would be to produce 5 blocks. We can do that later.
var targetGas = build.BlockGasTarget * expectedBlocks
type BlockBuilder struct {
ctx context.Context
logger *zap.SugaredLogger

View File

@ -209,7 +209,7 @@ var runCmd = &cli.Command{
rpcApi = api.PermissionedWalletAPI(rpcApi)
}
rpcServer := jsonrpc.NewServer()
rpcServer := jsonrpc.NewServer(jsonrpc.WithServerErrors(api.RPCErrors))
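// WithServerErrors registers the shared api.RPCErrors set so typed API errors keep their codes across the JSON-RPC boundary.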
rpcServer.Register("Filecoin", rpcApi)
mux.Handle("/rpc/v0", rpcServer)

View File

@ -224,6 +224,12 @@ var runCmd = &cli.Command{
Value: true,
EnvVars: []string{"LOTUS_WORKER_REGEN_SECTOR_KEY"},
},
&cli.BoolFlag{
Name: "sector-download",
Usage: "enable external sector data download",
Value: false,
EnvVars: []string{"LOTUS_WORKER_SECTOR_DOWNLOAD"},
},
&cli.BoolFlag{
Name: "windowpost",
Usage: "enable window post",
@ -373,6 +379,9 @@ var runCmd = &cli.Command{
if (workerType == sealtasks.WorkerSealing || cctx.IsSet("addpiece")) && cctx.Bool("addpiece") {
taskTypes = append(taskTypes, sealtasks.TTAddPiece, sealtasks.TTDataCid)
}
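// Sealing workers, or any worker started with --sector-download, additionally register the new TTDownloadSector task.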
if (workerType == sealtasks.WorkerSealing || cctx.IsSet("sector-download")) && cctx.Bool("sector-download") {
taskTypes = append(taskTypes, sealtasks.TTDownloadSector)
}
if (workerType == sealtasks.WorkerSealing || cctx.IsSet("precommit1")) && cctx.Bool("precommit1") {
taskTypes = append(taskTypes, sealtasks.TTPreCommit1)
}

View File

@ -29,7 +29,7 @@ var log = logging.Logger("sealworker")
func WorkerHandler(authv func(ctx context.Context, token string) ([]auth.Permission, error), remote http.HandlerFunc, a api.Worker, permissioned bool) http.Handler {
mux := mux.NewRouter()
readerHandler, readerServerOpt := rpcenc.ReaderParamDecoder()
rpcServer := jsonrpc.NewServer(readerServerOpt)
rpcServer := jsonrpc.NewServer(jsonrpc.WithServerErrors(api.RPCErrors), readerServerOpt)
wapi := proxy.MetricedWorkerAPI(a)
if permissioned {

View File

@ -30,7 +30,7 @@ import (
"github.com/filecoin-project/go-jsonrpc"
"github.com/filecoin-project/go-paramfetch"
"github.com/filecoin-project/lotus/api"
lapi "github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/stmgr"
@ -303,7 +303,7 @@ var DaemonCmd = &cli.Command{
}
defer closer()
liteModeDeps = node.Override(new(api.Gateway), gapi)
liteModeDeps = node.Override(new(lapi.Gateway), gapi)
}
// some libraries like ipfs/go-ds-measure and ipfs/go-ipfs-blockstore
@ -313,7 +313,7 @@ var DaemonCmd = &cli.Command{
log.Warnf("unable to inject prometheus ipfs/go-metrics exporter; some metrics will be unavailable; err: %s", err)
}
var api api.FullNode
var api lapi.FullNode
stop, err := node.New(ctx,
node.FullAPI(&api, node.Lite(isLite)),
@ -360,7 +360,7 @@ var DaemonCmd = &cli.Command{
// ----
// Populate JSON-RPC options.
serverOptions := make([]jsonrpc.ServerOption, 0)
serverOptions := []jsonrpc.ServerOption{jsonrpc.WithServerErrors(lapi.RPCErrors)}
if maxRequestSize := cctx.Int("api-max-req-size"); maxRequestSize != 0 {
serverOptions = append(serverOptions, jsonrpc.WithMaxRequestSize(int64(maxRequestSize)))
}
@ -392,7 +392,7 @@ var DaemonCmd = &cli.Command{
},
}
func importKey(ctx context.Context, api api.FullNode, f string) error {
func importKey(ctx context.Context, api lapi.FullNode, f string) error {
f, err := homedir.Expand(f)
if err != nil {
return err

View File

@ -15,7 +15,7 @@
# docker swarm init (if you haven't already)
# docker stack deploy -c docker-compose.yaml mylotuscluster
#
# for more information, please visit docs.filecoin.io
# for more information, please visit lotus.filecoin.io
version: "3.8"

View File

@ -2,16 +2,12 @@
This folder contains some Lotus documentation mostly intended for Lotus developers.
User documentation (including documentation for miners) has been moved to https://docs.filecoin.io and https://lotus.filecoin.io:
User documentation (including documentation for miners) has been moved to https://lotus.filecoin.io:
- https://docs.filecoin.io/get-started/overview/
- https://lotus.filecoin.io/lotus/get-started/what-is-lotus/
- https://docs.filecoin.io/store/
- https://lotus.filecoin.io/tutorials/lotus/store-and-retrieve/store-data/
- https://docs.filecoin.io/storage-provider/
- https://lotus.filecoin.io/tutorials/lotus-miner/run-a-miner/
- https://docs.filecoin.io/build/
- https://lotus.filecoin.io/developers/
- https://lotus.filecoin.io/lotus/get-started/what-is-lotus/
- https://lotus.filecoin.io/tutorials/lotus/store-and-retrieve/store-data/
- https://lotus.filecoin.io/tutorials/lotus-miner/run-a-miner/
- https://lotus.filecoin.io/developers/
## Documentation Website

View File

@ -8,12 +8,12 @@ It is written in Go and provides a suite of command-line applications:
- Lotus Miner (`lotus-miner`): a Filecoin miner. See the respective Lotus Miner section in the Mine documentation.
- Lotus Worker (`lotus-worker`): a worker that assists miners to perform mining-related tasks. See its respective guide for more information.
The [Lotus user documentation](https://docs.filecoin.io/get-started/lotus) is part of the [Filecoin documentation site](https://docs.filecoin.io):
The [Lotus user documentation](https://lotus.filecoin.io/lotus/get-started/what-is-lotus/) is part of the [Lotus documentation site](https://lotus.filecoin.io):
* To install and get started with Lotus, visit the [Get Started section](https://docs.filecoin.io/get-started/lotus).
* Information about how to perform deals on the Filecoin network using Lotus can be found in the [Store section](https://docs.filecoin.io/store/lotus).
* Miners looking to provide storage to the Network can find the latest guides in the [Mine section](https://docs.filecoin.io/mine/lotus).
* Developers and integrators that wish to use the Lotus APIs can start in the [Build section](https://docs.filecoin.io/mine/lotus).
* To install and get started with Lotus, visit the [Get Started section](https://lotus.filecoin.io/lotus/install/prerequisites/).
* Information about how to perform deals on the Filecoin network using Lotus can be found in the [Store section](https://lotus.filecoin.io/tutorials/lotus/store-and-retrieve/store-data/).
* Miners looking to provide storage to the Network can find the latest guides in the [Mine section](https://lotus.filecoin.io/tutorials/lotus-miner/run-a-miner/).
* Developers and integrators that wish to use the Lotus APIs can start in the [Build section](https://lotus.filecoin.io/tutorials/lotus/build-with-lotus-api/).
For more details about Filecoin, check out the [Filecoin Docs](https://docs.filecoin.io) and [Filecoin Spec](https://spec.filecoin.io/).
For more details, check out the [Lotus Docs](https://lotus.filecoin.io) and the [Filecoin Spec](https://spec.filecoin.io/).

View File

@ -13,6 +13,8 @@
* [Auth](#Auth)
* [AuthNew](#AuthNew)
* [AuthVerify](#AuthVerify)
* [Beneficiary](#Beneficiary)
* [BeneficiaryWithdrawBalance](#BeneficiaryWithdrawBalance)
* [Check](#Check)
* [CheckProvable](#CheckProvable)
* [Compute](#Compute)
@ -111,6 +113,7 @@
* [Return](#Return)
* [ReturnAddPiece](#ReturnAddPiece)
* [ReturnDataCid](#ReturnDataCid)
* [ReturnDownloadSector](#ReturnDownloadSector)
* [ReturnFetch](#ReturnFetch)
* [ReturnFinalizeReplicaUpdate](#ReturnFinalizeReplicaUpdate)
* [ReturnFinalizeSector](#ReturnFinalizeSector)
@ -148,6 +151,7 @@
* [SectorNumReserveCount](#SectorNumReserveCount)
* [SectorPreCommitFlush](#SectorPreCommitFlush)
* [SectorPreCommitPending](#SectorPreCommitPending)
* [SectorReceive](#SectorReceive)
* [SectorRemove](#SectorRemove)
* [SectorSetExpectedSealDuration](#SectorSetExpectedSealDuration)
* [SectorSetSealDelay](#SectorSetSealDelay)
@ -364,6 +368,31 @@ Response:
]
```
## Beneficiary
### BeneficiaryWithdrawBalance
BeneficiaryWithdrawBalance allows the beneficiary of a miner to withdraw balance from miner actor
Specify amount as "0" to withdraw full balance. This method returns a message CID
and does not wait for message execution
Perms: admin
Inputs:
```json
[
"0"
]
```
Response:
```json
{
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
}
```
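For orientation, here is a minimal, untested Go sketch of how a client might call this method over JSON-RPC. The `client.NewStorageMinerRPCV0` helper, the default `127.0.0.1:2345` miner API endpoint, and the admin-token placeholder are assumptions for illustration, not part of the generated docs:

```go
package main

import (
    "context"
    "fmt"
    "net/http"

    "github.com/filecoin-project/go-state-types/big"

    "github.com/filecoin-project/lotus/api/client"
)

func main() {
    ctx := context.Background()

    // Assumption: a lotus-miner API listening on the default local endpoint,
    // reached with a token that carries admin permissions.
    headers := http.Header{"Authorization": []string{"Bearer <admin-token>"}}
    minerAPI, closer, err := client.NewStorageMinerRPCV0(ctx, "ws://127.0.0.1:2345/rpc/v0", headers)
    if err != nil {
        panic(err)
    }
    defer closer()

    // Passing "0" (big.Zero()) withdraws the full available balance, per the docs above.
    msgCid, err := minerAPI.BeneficiaryWithdrawBalance(ctx, big.Zero())
    if err != nil {
        panic(err)
    }
    fmt.Println("withdraw message CID:", msgCid)
}
```

The `--beneficiary` flag added to `lotus-miner actor withdraw` in this changeset covers the same flow from the CLI.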
## Check
@ -2361,6 +2390,30 @@ Inputs:
Response: `{}`
### ReturnDownloadSector
Perms: admin
Inputs:
```json
[
{
"Sector": {
"Miner": 1000,
"Number": 9
},
"ID": "07070707-0707-0707-0707-070707070707"
},
{
"Code": 0,
"Message": "string value"
}
]
```
Response: `{}`
### ReturnFetch
@ -3124,6 +3177,134 @@ Response:
]
```
### SectorReceive
Perms: admin
Inputs:
```json
[
{
"State": "Proving",
"Sector": {
"Miner": 1000,
"Number": 9
},
"Type": 8,
"Pieces": [
{
"Piece": {
"Size": 1032,
"PieceCID": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
}
},
"DealInfo": {
"PublishCid": null,
"DealID": 5432,
"DealProposal": {
"PieceCID": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
},
"PieceSize": 1032,
"VerifiedDeal": true,
"Client": "f01234",
"Provider": "f01234",
"Label": "",
"StartEpoch": 10101,
"EndEpoch": 10101,
"StoragePricePerEpoch": "0",
"ProviderCollateral": "0",
"ClientCollateral": "0"
},
"DealSchedule": {
"StartEpoch": 10101,
"EndEpoch": 10101
},
"KeepUnsealed": true
}
}
],
"TicketValue": "Bw==",
"TicketEpoch": 10101,
"PreCommit1Out": "Bw==",
"CommD": null,
"CommR": null,
"PreCommitInfo": {
"SealProof": 8,
"SectorNumber": 9,
"SealedCID": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
},
"SealRandEpoch": 10101,
"DealIDs": [
5432
],
"Expiration": 10101,
"UnsealedCid": null
},
"PreCommitDeposit": "0",
"PreCommitMessage": null,
"PreCommitTipSet": [
{
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
},
{
"/": "bafy2bzacebp3shtrn43k7g3unredz7fxn4gj533d3o43tqn2p2ipxxhrvchve"
}
],
"SeedValue": "Bw==",
"SeedEpoch": 10101,
"CommitProof": "Ynl0ZSBhcnJheQ==",
"CommitMessage": null,
"Log": [
{
"Kind": "string value",
"Timestamp": 42,
"Trace": "string value",
"Message": "string value"
}
],
"DataUnsealed": {
"Local": true,
"URL": "string value",
"Headers": [
{
"Key": "string value",
"Value": "string value"
}
]
},
"DataSealed": {
"Local": true,
"URL": "string value",
"Headers": [
{
"Key": "string value",
"Value": "string value"
}
]
},
"DataCache": {
"Local": true,
"URL": "string value",
"Headers": [
{
"Key": "string value",
"Value": "string value"
}
]
},
"RemoteCommit1Endpoint": "string value",
"RemoteCommit2Endpoint": "string value",
"RemoteSealingDoneEndpoint": "string value"
}
]
```
Response: `{}`
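The generated docs carry no prose for SectorReceive, so purely as an illustration: the sketch below assumes the sector state shown above has been saved to a local `sector-state.json` file (a hypothetical name) and posts it as the single parameter of `Filecoin.SectorReceive`; the endpoint and the admin-token environment variable are likewise assumptions.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Assumed defaults: miner API on localhost:2345, admin token in MINER_API_TOKEN.
	endpoint := "http://127.0.0.1:2345/rpc/v0"
	token := os.Getenv("MINER_API_TOKEN")

	// sector-state.json holds a single object with the schema shown above
	// (state, sector ID, pieces, ticket/seed values, data locations, remote endpoints).
	state, err := os.ReadFile("sector-state.json")
	if err != nil {
		panic(err)
	}

	req := map[string]interface{}{
		"jsonrpc": "2.0",
		"method":  "Filecoin.SectorReceive",
		"params":  []json.RawMessage{json.RawMessage(state)},
		"id":      1,
	}
	body, err := json.Marshal(req)
	if err != nil {
		panic(err)
	}

	httpReq, err := http.NewRequest(http.MethodPost, endpoint, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	httpReq.Header.Set("Content-Type", "application/json")
	httpReq.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // a successful call returns an empty result: {}
}
```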
### SectorRemove
SectorRemove removes the sector from storage. It doesn't terminate it on-chain, which can
be done with SectorTerminate. Removing and not terminating live sectors will cause additional penalties.

View File

@ -12,6 +12,8 @@
* [AddPiece](#AddPiece)
* [Data](#Data)
* [DataCid](#DataCid)
* [Download](#Download)
* [DownloadSectorData](#DownloadSectorData)
* [Finalize](#Finalize)
* [FinalizeReplicaUpdate](#FinalizeReplicaUpdate)
* [FinalizeSector](#FinalizeSector)
@ -1542,6 +1544,46 @@ Response:
}
```
## Download
### DownloadSectorData
Perms: admin
Inputs:
```json
[
{
"ID": {
"Miner": 1000,
"Number": 9
},
"ProofType": 8
},
true,
{
"2": {
"Local": false,
"URL": "https://example.com/sealingservice/sectors/s-f0123-12345",
"Headers": null
}
}
]
```
Response:
```json
{
"Sector": {
"Miner": 1000,
"Number": 9
},
"ID": "07070707-0707-0707-0707-070707070707"
}
```
## Finalize

View File

@ -5811,7 +5811,20 @@ Response:
"WindowPoStProofType": 8,
"SectorSize": 34359738368,
"WindowPoStPartitionSectors": 42,
"ConsensusFaultElapsed": 10101
"ConsensusFaultElapsed": 10101,
"Beneficiary": "f01234",
"BeneficiaryTerm": {
"Quota": "0",
"UsedQuota": "0",
"Expiration": 10101
},
"PendingBeneficiaryTerm": {
"NewBeneficiary": "f01234",
"NewQuota": "0",
"NewExpiration": 10101,
"ApprovedByBeneficiary": true,
"ApprovedByNominee": true
}
}
```
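The new beneficiary fields lend themselves to a small decoding sketch. The Go structs below model only the fields added above; the sample values, and the assumption that the remaining withdrawal allowance is `Quota - UsedQuota` (token amounts being attoFIL encoded as decimal strings, as elsewhere in the API), are illustrative rather than taken from the Lotus source.

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
)

// Only the beneficiary-related fields of the MinerInfo response are modelled here.
type beneficiaryTerm struct {
	Quota      string `json:"Quota"`      // total withdrawal quota (attoFIL, decimal string)
	UsedQuota  string `json:"UsedQuota"`  // amount already withdrawn
	Expiration int64  `json:"Expiration"` // epoch at which the term expires
}

type pendingBeneficiaryChange struct {
	NewBeneficiary        string `json:"NewBeneficiary"`
	NewQuota              string `json:"NewQuota"`
	NewExpiration         int64  `json:"NewExpiration"`
	ApprovedByBeneficiary bool   `json:"ApprovedByBeneficiary"`
	ApprovedByNominee     bool   `json:"ApprovedByNominee"`
}

type minerInfo struct {
	Beneficiary            string                    `json:"Beneficiary"`
	BeneficiaryTerm        beneficiaryTerm           `json:"BeneficiaryTerm"`
	PendingBeneficiaryTerm *pendingBeneficiaryChange `json:"PendingBeneficiaryTerm"`
}

func main() {
	// Illustrative response body, truncated to the fields added above.
	raw := []byte(`{
		"Beneficiary": "f01234",
		"BeneficiaryTerm": {"Quota": "1000", "UsedQuota": "250", "Expiration": 10101},
		"PendingBeneficiaryTerm": null
	}`)

	var mi minerInfo
	if err := json.Unmarshal(raw, &mi); err != nil {
		panic(err)
	}

	quota, _ := new(big.Int).SetString(mi.BeneficiaryTerm.Quota, 10)
	used, _ := new(big.Int).SetString(mi.BeneficiaryTerm.UsedQuota, 10)
	remaining := new(big.Int).Sub(quota, used)

	fmt.Printf("beneficiary %s can still withdraw %s attoFIL before epoch %d\n",
		mi.Beneficiary, remaining, mi.BeneficiaryTerm.Expiration)
}
```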

View File

@ -6358,7 +6358,20 @@ Response:
"WindowPoStProofType": 8,
"SectorSize": 34359738368,
"WindowPoStPartitionSectors": 42,
"ConsensusFaultElapsed": 10101
"ConsensusFaultElapsed": 10101,
"Beneficiary": "f01234",
"BeneficiaryTerm": {
"Quota": "0",
"UsedQuota": "0",
"Expiration": 10101
},
"PendingBeneficiaryTerm": {
"NewBeneficiary": "f01234",
"NewQuota": "0",
"NewExpiration": 10101,
"ApprovedByBeneficiary": true,
"ApprovedByNominee": true
}
}
```

View File

@ -7,7 +7,7 @@ USAGE:
lotus-miner [global options] command [command options] [arguments...]
VERSION:
1.17.2-dev
1.17.3-dev
COMMANDS:
init Initialize a lotus miner repo
@ -71,7 +71,7 @@ OPTIONS:
--create-worker-key create separate worker key (default: false)
--worker value, -w value worker key to use (overrides --create-worker-key)
--owner value, -o value owner key to use
--sector-size value specify sector size to use (default: "2KiB")
--sector-size value specify sector size to use
--pre-sealed-sectors value specify set of presealed sectors for starting as a genesis miner (accepts multiple inputs)
--pre-sealed-metadata value specify the metadata file for the presealed sectors
--nosync don't check full-node sync status (default: false)
@ -231,16 +231,18 @@ USAGE:
lotus-miner actor command [command options] [arguments...]
COMMANDS:
set-addresses, set-addrs set addresses that your miner can be publicly dialed on
withdraw withdraw available balance
repay-debt pay down a miner's debt
set-peer-id set the peer id of your miner
set-owner Set owner address (this command should be invoked twice, first with the old owner as the senderAddress, and then with the new owner)
control Manage control addresses
propose-change-worker Propose a worker address change
confirm-change-worker Confirm a worker address change
compact-allocated compact allocated sectors bitfield
help, h Shows a list of commands or help for one command
set-addresses, set-addrs set addresses that your miner can be publicly dialed on
withdraw withdraw available balance to beneficiary
repay-debt pay down a miner's debt
set-peer-id set the peer id of your miner
set-owner Set owner address (this command should be invoked twice, first with the old owner as the senderAddress, and then with the new owner)
control Manage control addresses
propose-change-worker Propose a worker address change
confirm-change-worker Confirm a worker address change
compact-allocated compact allocated sectors bitfield
propose-change-beneficiary Propose a beneficiary address change
confirm-change-beneficiary Confirm a beneficiary address change
help, h Shows a list of commands or help for one command
OPTIONS:
--help, -h show help (default: false)
@ -254,12 +256,13 @@ OPTIONS:
### lotus-miner actor withdraw
```
NAME:
lotus-miner actor withdraw - withdraw available balance
lotus-miner actor withdraw - withdraw available balance to beneficiary
USAGE:
lotus-miner actor withdraw [command options] [amount (FIL)]
OPTIONS:
--beneficiary send withdraw message from the beneficiary address (default: false)
--confidence value number of block confirmations to wait for (default: 5)
```
@ -389,6 +392,36 @@ OPTIONS:
```
### lotus-miner actor propose-change-beneficiary
```
NAME:
lotus-miner actor propose-change-beneficiary - Propose a beneficiary address change
USAGE:
lotus-miner actor propose-change-beneficiary [command options] [beneficiaryAddress quota expiration]
OPTIONS:
--actor value specify the address of miner actor
--overwrite-pending-change Overwrite the current beneficiary change proposal (default: false)
--really-do-it Actually send transaction performing the action (default: false)
```
### lotus-miner actor confirm-change-beneficiary
```
NAME:
lotus-miner actor confirm-change-beneficiary - Confirm a beneficiary address change
USAGE:
lotus-miner actor confirm-change-beneficiary [command options] [minerAddress]
OPTIONS:
--existing-beneficiary send confirmation from the existing beneficiary address (default: false)
--new-beneficiary send confirmation from the new beneficiary address (default: false)
--really-do-it Actually send transaction performing the action (default: false)
```
## lotus-miner info
```
NAME:

View File

@ -7,7 +7,7 @@ USAGE:
lotus-worker [global options] command [command options] [arguments...]
VERSION:
1.17.2-dev
1.17.3-dev
COMMANDS:
run Start lotus worker
@ -53,6 +53,7 @@ OPTIONS:
--prove-replica-update2 enable prove replica update 2 (default: true) [$LOTUS_WORKER_PROVE_REPLICA_UPDATE2]
--regen-sector-key enable regen sector key (default: true) [$LOTUS_WORKER_REGEN_SECTOR_KEY]
--replica-update enable replica update (default: true) [$LOTUS_WORKER_REPLICA_UPDATE]
--sector-download enable external sector data download (default: false) [$LOTUS_WORKER_SECTOR_DOWNLOAD]
--timeout value used when 'listen' is unspecified. must be a valid duration recognized by golang's time.ParseDuration function (default: "30m") [$LOTUS_WORKER_TIMEOUT]
--unseal enable unsealing (32G sectors: 1 core, 128GiB Memory) (default: true) [$LOTUS_WORKER_UNSEAL]
--windowpost enable window post (default: false) [$LOTUS_WORKER_WINDOWPOST]

View File

@ -7,7 +7,7 @@ USAGE:
lotus [global options] command [command options] [arguments...]
VERSION:
1.17.2-dev
1.17.3-dev
COMMANDS:
daemon Start a lotus daemon process
@ -2029,7 +2029,7 @@ USAGE:
lotus state actor-cids [command options] [arguments...]
OPTIONS:
--network-version value specify network version (default: 17)
--network-version value specify network version (default: 0)
```

View File

@ -240,14 +240,14 @@
#StartEpochSealingBuffer = 480
# A command used for fine-grained evaluation of storage deals
# see https://docs.filecoin.io/mine/lotus/miner-configuration/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
# see https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
#
# type: string
# env var: LOTUS_DEALMAKING_FILTER
#Filter = ""
# A command used for fine-grained evaluation of retrieval deals
# see https://docs.filecoin.io/mine/lotus/miner-configuration/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
# see https://lotus.filecoin.io/storage-providers/advanced-configurations/market/#using-filters-for-fine-grained-storage-and-retrieval-deal-acceptance for more details
#
# type: string
# env var: LOTUS_DEALMAKING_RETRIEVALFILTER
@ -442,6 +442,30 @@
# env var: LOTUS_SEALING_MAXUPGRADINGSECTORS
#MaxUpgradingSectors = 0
# When set to a non-zero value, minimum number of epochs until sector expiration required for sectors to be considered
# for upgrades (0 = DealMinDuration = 180 days = 518400 epochs)
#
# Note that if all deals waiting in the input queue have lifetimes longer than this value, upgrade sectors will be
# required to have an expiration no earlier than that of the soonest-ending deal
#
# type: uint64
# env var: LOTUS_SEALING_MINUPGRADESECTOREXPIRATION
#MinUpgradeSectorExpiration = 0
# When set to a non-zero value, minimum number of epochs until sector expiration above which upgrade candidates will
# be selected based on lowest initial pledge.
#
# Target sector expiration is calculated by looking at the input deal queue, sorting it by deal expiration, and
# selecting N deals from the queue up to the sector size. The target expiration will be the Nth deal's end epoch or,
# if there weren't enough deals to fill a sector, DealMaxDuration (540 days = 1555200 epochs)
#
# Setting this to a high value (for example to the maximum deal duration, 1555200 epochs) will disable selection based
# on initial pledge; upgrade sectors will always be chosen based on longest expiration
#
# type: uint64
# env var: LOTUS_SEALING_MINTARGETUPGRADESECTOREXPIRATION
#MinTargetUpgradeSectorExpiration = 0
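The two options above effectively describe a candidate-selection procedure for sector upgrades. Purely as a reading of that description (not the actual Lotus implementation), the Go sketch below shows the minimum-lifetime filter and the target-expiration calculation; the sort direction and the `now + DealMaxDuration` fallback are interpretations, and the pledge-based selection mentioned for MinTargetUpgradeSectorExpiration is omitted.

```go
package main

import (
	"fmt"
	"sort"
)

const (
	dealMinDuration = 518400  // 180 days in epochs, used when MinUpgradeSectorExpiration is 0
	dealMaxDuration = 1555200 // 540 days in epochs, fallback when the queue cannot fill a sector
)

type deal struct {
	pieceSize uint64 // padded piece size in bytes
	endEpoch  int64
}

// isUpgradeCandidate applies MinUpgradeSectorExpiration: a sector qualifies only if it
// still has at least minExpiration epochs of life left (0 meaning DealMinDuration).
func isUpgradeCandidate(now, sectorExpiration, minExpiration int64) bool {
	if minExpiration == 0 {
		minExpiration = dealMinDuration
	}
	return sectorExpiration-now >= minExpiration
}

// targetExpiration mirrors the comment above: sort the input deal queue by end epoch,
// take deals until they would fill a sector, and use the end epoch of the last one taken;
// fall back to DealMaxDuration from now if the queue cannot fill a sector.
func targetExpiration(now int64, queue []deal, sectorSize uint64) int64 {
	sort.Slice(queue, func(i, j int) bool { return queue[i].endEpoch < queue[j].endEpoch })

	var filled uint64
	for _, d := range queue {
		filled += d.pieceSize
		if filled >= sectorSize {
			return d.endEpoch
		}
	}
	return now + dealMaxDuration
}

func main() {
	now := int64(100000)
	queue := []deal{
		{pieceSize: 16 << 30, endEpoch: now + 600000},
		{pieceSize: 20 << 30, endEpoch: now + 700000},
	}

	fmt.Println("target expiration:", targetExpiration(now, queue, 32<<30))
	fmt.Println("upgrade candidate:", isUpgradeCandidate(now, now+600000, 0))
}
```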
# CommittedCapacitySectorLifetime is the duration a Committed Capacity (CC) sector will
# live before it must be extended or converted into sector containing deals before it is
# terminated. Value must be between 180-540 days inclusive
@ -588,8 +612,10 @@
# env var: LOTUS_STORAGE_PARALLELFETCHLIMIT
#ParallelFetchLimit = 10
# Local worker config
#
# type: bool
# env var: LOTUS_STORAGE_ALLOWSECTORDOWNLOAD
#AllowSectorDownload = true
# type: bool
# env var: LOTUS_STORAGE_ALLOWADDPIECE
#AllowAddPiece = true

Some files were not shown because too many files have changed in this diff