commit 06ec571c20
Merge remote-tracking branch 'origin/master' into refactor/net-upgrade
@@ -231,11 +231,11 @@ jobs:
       - run:
           name: install statediff globally
           command: |
+            ## statediff is optional; we succeed even if compilation fails.
             mkdir -p /tmp/statediff
             git clone https://github.com/filecoin-project/statediff.git /tmp/statediff
             cd /tmp/statediff
-            go generate ./...
-            go install ./cmd/statediff
+            go install ./cmd/statediff || exit 0
       - run:
           name: go test
           environment:
@@ -400,7 +400,8 @@ workflows:
   version: 2.1
   ci:
     jobs:
-      - lint-all
+      - lint-all:
+          concurrency: "16"  # expend all docker 2xlarge CPUs.
       - mod-tidy-check
       - gofmt
      - cbor-gen-check
.github/CODEOWNERS | 2
@@ -8,7 +8,7 @@
 ## the PR before merging.

 ### Global owners.
-* @magik6k @whyrusleeping
+* @magik6k @whyrusleeping @Kubuxu

 ### Conformance testing.
 conformance/ @raulk
.gitmodules | 3
@@ -8,3 +8,6 @@
 [submodule "extern/test-vectors"]
     path = extern/test-vectors
     url = https://github.com/filecoin-project/test-vectors.git
+[submodule "extern/fil-blst"]
+    path = extern/fil-blst
+    url = https://github.com/filecoin-project/fil-blst.git
CHANGELOG.md | 93
@@ -1,5 +1,98 @@
 # Lotus changelog

+
+# 0.7.0 / 2020-09-10
+
+This consensus-breaking release of Lotus is designed to test a network upgrade on the space race testnet. The changes that break consensus are:
+
+- Upgrading the Drand network used from the test Drand network to the League of Entropy main drand network. This is the same Drand network that will be used in the Filecoin mainnet.
+- Upgrading to specs-actors v0.9.8, which adds a new method to the Multisig actor.
+
+## Changes
+
+#### Core Lotus
+
+- Fix IsAncestorOf (https://github.com/filecoin-project/lotus/pull/3717)
+- Update to specs-actors v0.9.8 (https://github.com/filecoin-project/lotus/pull/3725)
+- Increase chain throughput by 20% (https://github.com/filecoin-project/lotus/pull/3732)
+- Update to go-libp2p-pubsub `master` (https://github.com/filecoin-project/lotus/pull/3735)
+- Drand upgrade (https://github.com/filecoin-project/lotus/pull/3670)
+- Multisig API additions (https://github.com/filecoin-project/lotus/pull/3590)
+
+#### Storage Miner
+
+- Increase the number of times precommit2 is attempted before moving back to precommit1 (https://github.com/filecoin-project/lotus/pull/3720)
+
+#### Message pool
+
+- Relax mpool add strictness checks for local pushes (https://github.com/filecoin-project/lotus/pull/3724)
+
+#### Maintenance
+
+- Fix devnets (https://github.com/filecoin-project/lotus/pull/3712)
+- Fix(chainwatch): compare prev miner with cur miner (https://github.com/filecoin-project/lotus/pull/3715)
+- CI: fix statediff build; make optional (https://github.com/filecoin-project/lotus/pull/3729)
+- Feat: Chaos abort (https://github.com/filecoin-project/lotus/pull/3733)
+
+## Contributors
+
+The following contributors had commits go into this release.
+We are grateful for every contribution!
+
+| Contributor   | Commits | Lines ±    |
+|---------------|---------|------------|
+| arajasek      | 28      | +1144/-239 |
+| Kubuxu        | 19      | +452/-261  |
+| whyrusleeping | 13      | +456/-87   |
+| vyzo          | 11      | +318/-20   |
+| raulk         | 10      | +1289/-350 |
+| magik6k       | 6       | +188/-55   |
+| dirkmc        | 3       | +31/-8     |
+| alanshaw      | 3       | +176/-37   |
+| Stebalien     | 2       | +9/-12     |
+| lanzafame     | 1       | +1/-1      |
+| frrist        | 1       | +1/-1      |
+| mishmosh      | 1       | +1/-1      |
+| nonsense      | 1       | +1/-0      |
+
+# 0.6.2 / 2020-09-09
+
+This release introduces some critical fixes to message selection and gas estimation logic. It also adds the ability for nodes to mark a certain tipset as checkpointed, as well as various minor improvements and bugfixes.
+
+## Changes
+
+#### Messagepool
+
+- Warn when optimal selection fails to pack a block and we fall back to random selection (https://github.com/filecoin-project/lotus/pull/3708)
+- Add basic command for printing gas performance of messages in the mpool (https://github.com/filecoin-project/lotus/pull/3701)
+- Adjust optimal selection to always try to fill blocks (https://github.com/filecoin-project/lotus/pull/3685)
+- Fix very minor bug in repub baseFeeLowerBound (https://github.com/filecoin-project/lotus/pull/3663)
+- Add an auto flag to mpool replace (https://github.com/filecoin-project/lotus/pull/3676)
+- Fix mpool optimal selection packing failure (https://github.com/filecoin-project/lotus/pull/3698)
+
+#### Core Lotus
+
+- Don't use latency as initial estimate for blocksync (https://github.com/filecoin-project/lotus/pull/3648)
+- Add niceSleep 1 second when drand errors (https://github.com/filecoin-project/lotus/pull/3664)
+- Fix isChainNearSync check in block validator (https://github.com/filecoin-project/lotus/pull/3650)
+- Add peer to peer manager before fetching the tipset (https://github.com/filecoin-project/lotus/pull/3667)
+- Add StageFetchingMessages to sync status (https://github.com/filecoin-project/lotus/pull/3668)
+- Pass tipset through upgrade logic (https://github.com/filecoin-project/lotus/pull/3673)
+- Allow nodes to mark tipsets as checkpointed (https://github.com/filecoin-project/lotus/pull/3680)
+- Remove hard-coded late-fee in window PoSt (https://github.com/filecoin-project/lotus/pull/3702)
+- Gas: Fix median calc (https://github.com/filecoin-project/lotus/pull/3686)
+
+#### Storage
+
+- Storage manager: bail out with an error if unsealed cid is undefined (https://github.com/filecoin-project/lotus/pull/3655)
+- Storage: return true from Sealer.ReadPiece() on success (https://github.com/filecoin-project/lotus/pull/3657)
+
+#### Maintenance
+
+- Resolve lotus, test-vectors, statediff dependency cycle (https://github.com/filecoin-project/lotus/pull/3688)
+- Paych: add docs on how to use paych status (https://github.com/filecoin-project/lotus/pull/3690)
+- Initial CODEOWNERS (https://github.com/filecoin-project/lotus/pull/3691)
+
 # 0.6.1 / 2020-09-08

 This optional release introduces a minor improvement to the sync process, ensuring nodes don't fall behind and then resync.
@@ -117,7 +117,8 @@ type FullNode interface {
     // The exported chain data includes the header chain from the given tipset
     // back to genesis, the entire genesis state, and the most recent 'nroots'
     // state trees.
-    ChainExport(ctx context.Context, nroots abi.ChainEpoch, tsk types.TipSetKey) (<-chan []byte, error)
+    // If oldmsgskip is set, messages from before the requested roots are also not included.
+    ChainExport(ctx context.Context, nroots abi.ChainEpoch, oldmsgskip bool, tsk types.TipSetKey) (<-chan []byte, error)

     // MethodGroup: Beacon
     // The Beacon method group contains methods for interacting with the random beacon (DRAND)
@@ -158,10 +159,16 @@ type FullNode interface {
     // yet synced block headers.
     SyncIncomingBlocks(ctx context.Context) (<-chan *types.BlockHeader, error)

+    // SyncCheckpoint marks a block as checkpointed, meaning that it won't ever fork away from it.
+    SyncCheckpoint(ctx context.Context, tsk types.TipSetKey) error
+
     // SyncMarkBad marks a block as bad, meaning that it won't ever be synced.
     // Use with extreme caution.
     SyncMarkBad(ctx context.Context, bcid cid.Cid) error

+    // SyncUnmarkBad unmarks a block as bad, making it possible to be validated and synced again.
+    SyncUnmarkBad(ctx context.Context, bcid cid.Cid) error
+
     // SyncCheckBad checks if a block was marked as bad, and if it was, returns
     // the reason.
     SyncCheckBad(ctx context.Context, bcid cid.Cid) (string, error)
@@ -388,6 +395,9 @@ type FullNode interface {

     // MsigGetAvailableBalance returns the portion of a multisig's balance that can be withdrawn or spent
     MsigGetAvailableBalance(context.Context, address.Address, types.TipSetKey) (types.BigInt, error)
+    // MsigGetVested returns the amount of FIL that vested in a multisig in a certain period.
+    // It takes the following params: <multisig address>, <start epoch>, <end epoch>
+    MsigGetVested(context.Context, address.Address, types.TipSetKey, types.TipSetKey) (types.BigInt, error)
     // MsigCreate creates a multisig wallet
     // It takes the following params: <required number of senders>, <approving addresses>, <unlock duration>
     //<initial balance>, <sender address of the create msg>, <gas price>
@@ -404,17 +414,29 @@ type FullNode interface {
     // It takes the following params: <multisig address>, <proposed message ID>, <recipient address>, <value to transfer>,
     // <sender address of the cancel msg>, <method to call in the proposed message>, <params to include in the proposed message>
     MsigCancel(context.Context, address.Address, uint64, address.Address, types.BigInt, address.Address, uint64, []byte) (cid.Cid, error)
+    // MsigAddPropose proposes adding a signer in the multisig
+    // It takes the following params: <multisig address>, <sender address of the propose msg>,
+    // <new signer>, <whether the number of required signers should be increased>
+    MsigAddPropose(context.Context, address.Address, address.Address, address.Address, bool) (cid.Cid, error)
+    // MsigAddApprove approves a previously proposed AddSigner message
+    // It takes the following params: <multisig address>, <sender address of the approve msg>, <proposed message ID>,
+    // <proposer address>, <new signer>, <whether the number of required signers should be increased>
+    MsigAddApprove(context.Context, address.Address, address.Address, uint64, address.Address, address.Address, bool) (cid.Cid, error)
+    // MsigAddCancel cancels a previously proposed AddSigner message
+    // It takes the following params: <multisig address>, <sender address of the cancel msg>, <proposed message ID>,
+    // <new signer>, <whether the number of required signers should be increased>
+    MsigAddCancel(context.Context, address.Address, address.Address, uint64, address.Address, bool) (cid.Cid, error)
     // MsigSwapPropose proposes swapping 2 signers in the multisig
     // It takes the following params: <multisig address>, <sender address of the propose msg>,
-    // <old signer> <new signer>
+    // <old signer>, <new signer>
     MsigSwapPropose(context.Context, address.Address, address.Address, address.Address, address.Address) (cid.Cid, error)
     // MsigSwapApprove approves a previously proposed SwapSigner
     // It takes the following params: <multisig address>, <sender address of the approve msg>, <proposed message ID>,
-    // <proposer address>, <old signer> <new signer>
+    // <proposer address>, <old signer>, <new signer>
     MsigSwapApprove(context.Context, address.Address, address.Address, uint64, address.Address, address.Address, address.Address) (cid.Cid, error)
     // MsigSwapCancel cancels a previously proposed SwapSigner message
     // It takes the following params: <multisig address>, <sender address of the cancel msg>, <proposed message ID>,
-    // <old signer> <new signer>
+    // <old signer>, <new signer>
     MsigSwapCancel(context.Context, address.Address, address.Address, uint64, address.Address, address.Address) (cid.Cid, error)

     MarketEnsureAvailable(context.Context, address.Address, address.Address, types.BigInt) (cid.Cid, error)
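The `ChainExport` signature above gains an `oldmsgskip` flag ahead of the tipset key. A minimal client-side sketch of the new call, assuming any `api.FullNode` implementation; the helper name and wiring are illustrative, not from this diff:

```go
package snapshot

import (
	"context"
	"io"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/types"
)

// exportSnapshot streams a chain export to w, keeping the most recent nroots
// state trees. With oldmsgskip=true, messages older than the requested roots
// are omitted, matching the updated interface above. (Illustrative helper.)
func exportSnapshot(ctx context.Context, node api.FullNode, w io.Writer, nroots abi.ChainEpoch, tsk types.TipSetKey) error {
	stream, err := node.ChainExport(ctx, nroots, true, tsk)
	if err != nil {
		return err
	}
	// The API delivers the snapshot as a stream of byte chunks.
	for chunk := range stream {
		if _, err := w.Write(chunk); err != nil {
			return err
		}
	}
	return nil
}
```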
@@ -7,6 +7,8 @@ import (
     "io"
     "time"

+    "github.com/filecoin-project/go-state-types/dline"
+
     "github.com/ipfs/go-cid"
     metrics "github.com/libp2p/go-libp2p-core/metrics"
     "github.com/libp2p/go-libp2p-core/network"
@@ -95,7 +97,7 @@ type FullNodeStruct struct {
     ChainGetNode    func(ctx context.Context, p string) (*api.IpldObject, error) `perm:"read"`
     ChainGetMessage func(context.Context, cid.Cid) (*types.Message, error) `perm:"read"`
     ChainGetPath    func(context.Context, types.TipSetKey, types.TipSetKey) ([]*api.HeadChange, error) `perm:"read"`
-    ChainExport     func(context.Context, abi.ChainEpoch, types.TipSetKey) (<-chan []byte, error) `perm:"read"`
+    ChainExport     func(context.Context, abi.ChainEpoch, bool, types.TipSetKey) (<-chan []byte, error) `perm:"read"`

     BeaconGetEntry func(ctx context.Context, epoch abi.ChainEpoch) (*types.BeaconEntry, error) `perm:"read"`

@@ -107,7 +109,9 @@ type FullNodeStruct struct {
     SyncState          func(context.Context) (*api.SyncState, error) `perm:"read"`
     SyncSubmitBlock    func(ctx context.Context, blk *types.BlockMsg) error `perm:"write"`
     SyncIncomingBlocks func(ctx context.Context) (<-chan *types.BlockHeader, error) `perm:"read"`
+    SyncCheckpoint     func(ctx context.Context, key types.TipSetKey) error `perm:"admin"`
     SyncMarkBad        func(ctx context.Context, bcid cid.Cid) error `perm:"admin"`
+    SyncUnmarkBad      func(ctx context.Context, bcid cid.Cid) error `perm:"admin"`
     SyncCheckBad       func(ctx context.Context, bcid cid.Cid) (string, error) `perm:"read"`

     MpoolGetConfig func(context.Context) (*types.MpoolConfig, error) `perm:"read"`
@@ -162,7 +166,7 @@ type FullNodeStruct struct {
     StateNetworkName          func(context.Context) (dtypes.NetworkName, error) `perm:"read"`
     StateMinerSectors         func(context.Context, address.Address, *bitfield.BitField, bool, types.TipSetKey) ([]*api.ChainSectorInfo, error) `perm:"read"`
     StateMinerActiveSectors   func(context.Context, address.Address, types.TipSetKey) ([]*api.ChainSectorInfo, error) `perm:"read"`
     StateMinerProvingDeadline func(context.Context, address.Address, types.TipSetKey) (*dline.Info, error) `perm:"read"`
     StateMinerPower           func(context.Context, address.Address, types.TipSetKey) (*api.MinerPower, error) `perm:"read"`
     StateMinerInfo            func(context.Context, address.Address, types.TipSetKey) (miner2.MinerInfo, error) `perm:"read"`
     StateMinerFaults          func(context.Context, address.Address, types.TipSetKey) (bitfield.BitField, error) `perm:"read"`
@@ -199,10 +203,14 @@ type FullNodeStruct struct {
     StateCirculatingSupply func(context.Context, types.TipSetKey) (api.CirculatingSupply, error) `perm:"read"`

     MsigGetAvailableBalance func(context.Context, address.Address, types.TipSetKey) (types.BigInt, error) `perm:"read"`
+    MsigGetVested           func(context.Context, address.Address, types.TipSetKey, types.TipSetKey) (types.BigInt, error) `perm:"read"`
     MsigCreate              func(context.Context, uint64, []address.Address, abi.ChainEpoch, types.BigInt, address.Address, types.BigInt) (cid.Cid, error) `perm:"sign"`
     MsigPropose             func(context.Context, address.Address, address.Address, types.BigInt, address.Address, uint64, []byte) (cid.Cid, error) `perm:"sign"`
     MsigApprove             func(context.Context, address.Address, uint64, address.Address, address.Address, types.BigInt, address.Address, uint64, []byte) (cid.Cid, error) `perm:"sign"`
     MsigCancel              func(context.Context, address.Address, uint64, address.Address, types.BigInt, address.Address, uint64, []byte) (cid.Cid, error) `perm:"sign"`
+    MsigAddPropose          func(context.Context, address.Address, address.Address, address.Address, bool) (cid.Cid, error) `perm:"sign"`
+    MsigAddApprove          func(context.Context, address.Address, address.Address, uint64, address.Address, address.Address, bool) (cid.Cid, error) `perm:"sign"`
+    MsigAddCancel           func(context.Context, address.Address, address.Address, uint64, address.Address, bool) (cid.Cid, error) `perm:"sign"`
     MsigSwapPropose         func(context.Context, address.Address, address.Address, address.Address, address.Address) (cid.Cid, error) `perm:"sign"`
     MsigSwapApprove         func(context.Context, address.Address, address.Address, uint64, address.Address, address.Address, address.Address) (cid.Cid, error) `perm:"sign"`
     MsigSwapCancel          func(context.Context, address.Address, address.Address, uint64, address.Address, address.Address) (cid.Cid, error) `perm:"sign"`
@@ -684,8 +692,8 @@ func (c *FullNodeStruct) ChainGetPath(ctx context.Context, from types.TipSetKey,
     return c.Internal.ChainGetPath(ctx, from, to)
 }

-func (c *FullNodeStruct) ChainExport(ctx context.Context, nroots abi.ChainEpoch, tsk types.TipSetKey) (<-chan []byte, error) {
-    return c.Internal.ChainExport(ctx, nroots, tsk)
+func (c *FullNodeStruct) ChainExport(ctx context.Context, nroots abi.ChainEpoch, iom bool, tsk types.TipSetKey) (<-chan []byte, error) {
+    return c.Internal.ChainExport(ctx, nroots, iom, tsk)
 }

 func (c *FullNodeStruct) BeaconGetEntry(ctx context.Context, epoch abi.ChainEpoch) (*types.BeaconEntry, error) {
@@ -704,10 +712,18 @@ func (c *FullNodeStruct) SyncIncomingBlocks(ctx context.Context) (<-chan *types.
     return c.Internal.SyncIncomingBlocks(ctx)
 }

+func (c *FullNodeStruct) SyncCheckpoint(ctx context.Context, tsk types.TipSetKey) error {
+    return c.Internal.SyncCheckpoint(ctx, tsk)
+}
+
 func (c *FullNodeStruct) SyncMarkBad(ctx context.Context, bcid cid.Cid) error {
     return c.Internal.SyncMarkBad(ctx, bcid)
 }

+func (c *FullNodeStruct) SyncUnmarkBad(ctx context.Context, bcid cid.Cid) error {
+    return c.Internal.SyncUnmarkBad(ctx, bcid)
+}
+
 func (c *FullNodeStruct) SyncCheckBad(ctx context.Context, bcid cid.Cid) (string, error) {
     return c.Internal.SyncCheckBad(ctx, bcid)
 }
@@ -864,6 +880,10 @@ func (c *FullNodeStruct) MsigGetAvailableBalance(ctx context.Context, a address.
     return c.Internal.MsigGetAvailableBalance(ctx, a, tsk)
 }

+func (c *FullNodeStruct) MsigGetVested(ctx context.Context, a address.Address, sTsk types.TipSetKey, eTsk types.TipSetKey) (types.BigInt, error) {
+    return c.Internal.MsigGetVested(ctx, a, sTsk, eTsk)
+}
+
 func (c *FullNodeStruct) MsigCreate(ctx context.Context, req uint64, addrs []address.Address, duration abi.ChainEpoch, val types.BigInt, src address.Address, gp types.BigInt) (cid.Cid, error) {
     return c.Internal.MsigCreate(ctx, req, addrs, duration, val, src, gp)
 }
@@ -880,6 +900,18 @@ func (c *FullNodeStruct) MsigCancel(ctx context.Context, msig address.Address, t
     return c.Internal.MsigCancel(ctx, msig, txID, to, amt, src, method, params)
 }

+func (c *FullNodeStruct) MsigAddPropose(ctx context.Context, msig address.Address, src address.Address, newAdd address.Address, inc bool) (cid.Cid, error) {
+    return c.Internal.MsigAddPropose(ctx, msig, src, newAdd, inc)
+}
+
+func (c *FullNodeStruct) MsigAddApprove(ctx context.Context, msig address.Address, src address.Address, txID uint64, proposer address.Address, newAdd address.Address, inc bool) (cid.Cid, error) {
+    return c.Internal.MsigAddApprove(ctx, msig, src, txID, proposer, newAdd, inc)
+}
+
+func (c *FullNodeStruct) MsigAddCancel(ctx context.Context, msig address.Address, src address.Address, txID uint64, newAdd address.Address, inc bool) (cid.Cid, error) {
+    return c.Internal.MsigAddCancel(ctx, msig, src, txID, newAdd, inc)
+}
+
 func (c *FullNodeStruct) MsigSwapPropose(ctx context.Context, msig address.Address, src address.Address, oldAdd address.Address, newAdd address.Address) (cid.Cid, error) {
     return c.Internal.MsigSwapPropose(ctx, msig, src, oldAdd, newAdd)
 }
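For orientation, a hedged sketch of the add-signer round trip the new `MsigAdd*` methods enable: one wallet proposes the addition, a second approves it by transaction ID. The helper, addresses, and the way `txID` is obtained are illustrative only; real code must recover the proposal ID from the propose message's receipt.

```go
package msigdemo

import (
	"context"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/lotus/api"
)

// addSigner is an illustrative helper, not part of the diff: it proposes
// adding newSigner to the multisig and then approves the pending proposal
// from a second wallet. txID must come from the propose receipt (not shown).
func addSigner(ctx context.Context, node api.FullNode, msig, proposer, approver, newSigner address.Address, txID uint64) error {
	// Propose adding newSigner without raising the approval threshold.
	if _, err := node.MsigAddPropose(ctx, msig, proposer, newSigner, false); err != nil {
		return err
	}
	// A second signer approves the pending AddSigner proposal.
	if _, err := node.MsigAddApprove(ctx, msig, approver, txID, proposer, newSigner, false); err != nil {
		return err
	}
	return nil
}
```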
@@ -105,7 +105,7 @@ func init() {
     addExample(network.Connected)
     addExample(dtypes.NetworkName("lotus"))
     addExample(api.SyncStateStage(1))
-    addExample(build.APIVersion)
+    addExample(build.FullAPIVersion)
     addExample(api.PCHInbound)
     addExample(time.Minute)
     addExample(datatransfer.TransferID(3))
@@ -65,6 +65,8 @@ func TestApis(t *testing.T, b APIBuilder) {
 var OneMiner = []StorageMiner{{Full: 0, Preseal: PresealGenesis}}

 func (ts *testSuite) testVersion(t *testing.T) {
+    build.RunningNodeType = build.NodeFull
+
     ctx := context.Background()
     apis, _ := ts.makeNodes(t, 1, OneMiner)
     api := apis[0]
@@ -192,7 +192,7 @@ func TestWindowPost(t *testing.T, b APIBuilder, blocktime time.Duration, nSector

     // Drop the partition
     err = parts[0].Sectors.ForEach(func(sid uint64) error {
-        return miner.StorageMiner.(*impl.StorageMinerAPI).IStorageMgr.(*mock.SectorMgr).MarkFailed(abi.SectorID{
+        return miner.StorageMiner.(*impl.StorageMinerAPI).IStorageMgr.(*mock.SectorMgr).MarkCorrupted(abi.SectorID{
             Miner:  abi.ActorID(mid),
             Number: abi.SectorNumber(sid),
         }, true)
@@ -1,15 +1,26 @@
 package build

-import "github.com/filecoin-project/lotus/node/modules/dtypes"
+import (
+    "sort"

-var DrandNetwork = DrandIncentinet
+    "github.com/filecoin-project/lotus/node/modules/dtypes"
+)

-func DrandConfig() dtypes.DrandConfig {
-    return DrandConfigs[DrandNetwork]
-}
-
 type DrandEnum int

+func DrandConfigSchedule() dtypes.DrandSchedule {
+    out := dtypes.DrandSchedule{}
+    for start, config := range DrandSchedule {
+        out = append(out, dtypes.DrandPoint{Start: start, Config: DrandConfigs[config]})
+    }
+
+    sort.Slice(out, func(i, j int) bool {
+        return out[i].Start < out[j].Start
+    })
+
+    return out
+}
+
 const (
     DrandMainnet DrandEnum = iota + 1
     DrandTestnet
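The new `DrandConfigSchedule` turns each network's `DrandSchedule` map into a list of points sorted by start epoch, so callers can ask which drand configuration is in force at a given height. A small illustrative sketch of consuming it; the printing loop is not from the diff:

```go
package main

import (
	"fmt"

	"github.com/filecoin-project/lotus/build"
)

// Illustrative only: print the drand schedule points produced by the new
// DrandConfigSchedule helper, in ascending start-epoch order. For the testnet
// values later in this diff, that is one point at epoch 0 (incentinet drand)
// and one at UpgradeSmokeHeight (mainnet drand).
func main() {
	for _, p := range build.DrandConfigSchedule() {
		fmt.Printf("from epoch %d: %d drand servers\n", p.Start, len(p.Config.Servers))
	}
}
```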
@@ -10,9 +10,15 @@ import (
     "github.com/filecoin-project/specs-actors/actors/builtin/verifreg"
 )

-const UpgradeBreezeHeight = 0
+const UpgradeBreezeHeight = -1
 const BreezeGasTampingDuration = 0

+const UpgradeSmokeHeight = -1
+
+var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
+    0: DrandMainnet,
+}
+
 func init() {
     power.ConsensusMinerMinPower = big.NewInt(2048)
     miner.SupportedProofTypes = map[abi.RegisteredSealProof]struct{}{
@@ -23,7 +29,7 @@ func init() {
     BuildType |= Build2k
 }

-const BlockDelaySecs = uint64(4)
+const BlockDelaySecs = uint64(30)

 const PropagationDelaySecs = uint64(1)
@@ -36,3 +36,11 @@ func MessagesTopic(netName dtypes.NetworkName) string { return "/fil/msgs/" + st
 func DhtProtocolName(netName dtypes.NetworkName) protocol.ID {
     return protocol.ID("/fil/kad/" + string(netName))
 }
+
+func UseNewestNetwork() bool {
+    // TODO: Put these in a container we can iterate over
+    if UpgradeBreezeHeight <= 0 && UpgradeSmokeHeight <= 0 {
+        return true
+    }
+    return false
+}
@@ -5,6 +5,8 @@ package build
 import (
     "math/big"

+    "github.com/filecoin-project/go-state-types/network"
+
     "github.com/filecoin-project/go-state-types/abi"
     "github.com/filecoin-project/specs-actors/actors/builtin"
     "github.com/filecoin-project/specs-actors/actors/builtin/miner"
@@ -20,6 +22,7 @@ const UnixfsLinksPerLevel = 1024
 // Consensus / Network

 const AllowableClockDriftSecs = uint64(1)
+const NewestNetworkVersion = network.Version2

 // Epochs
 const ForkLengthThreshold = Finality
@@ -11,6 +11,7 @@ import (
     "math/big"

     "github.com/filecoin-project/go-state-types/abi"
+    "github.com/filecoin-project/go-state-types/network"
     "github.com/filecoin-project/specs-actors/actors/builtin"
     "github.com/filecoin-project/specs-actors/actors/builtin/miner"
 )
@@ -70,6 +71,14 @@ var (
     PackingEfficiencyNum   int64 = 4
     PackingEfficiencyDenom int64 = 5

-    UpgradeBreezeHeight      abi.ChainEpoch = 0
+    UpgradeBreezeHeight      abi.ChainEpoch = -1
     BreezeGasTampingDuration abi.ChainEpoch = 0
+
+    UpgradeSmokeHeight abi.ChainEpoch = -1
+
+    DrandSchedule = map[abi.ChainEpoch]DrandEnum{
+        0: DrandMainnet,
+    }
+
+    NewestNetworkVersion = network.Version2
 )
@@ -12,9 +12,16 @@ import (
     "github.com/filecoin-project/specs-actors/actors/builtin/power"
 )

+var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
+    0:                  DrandIncentinet,
+    UpgradeSmokeHeight: DrandMainnet,
+}
+
 const UpgradeBreezeHeight = 41280
 const BreezeGasTampingDuration = 120

+const UpgradeSmokeHeight = 51000
+
 func init() {
     power.ConsensusMinerMinPower = big.NewInt(10 << 40)
     miner.SupportedProofTypes = map[abi.RegisteredSealProof]struct{}{
@@ -1,6 +1,10 @@
 package build

-import "fmt"
+import (
+    "fmt"
+
+    "golang.org/x/xerrors"
+)

 var CurrentCommit string
 var BuildType int
@@ -25,7 +29,7 @@ func buildType() string {
 }

 // BuildVersion is the local build version, set by build system
-const BuildVersion = "0.6.2-rc1"
+const BuildVersion = "0.7.0"

 func UserVersion() string {
     return BuildVersion + buildType() + CurrentCommit
@@ -52,8 +56,37 @@ func (ve Version) EqMajorMinor(v2 Version) bool {
     return ve&minorMask == v2&minorMask
 }

-// APIVersion is a semver version of the rpc api exposed
-var APIVersion Version = newVer(0, 15, 0)
+type NodeType int
+
+const (
+    NodeUnknown NodeType = iota
+
+    NodeFull
+    NodeMiner
+    NodeWorker
+)
+
+var RunningNodeType NodeType
+
+func VersionForType(nodeType NodeType) (Version, error) {
+    switch nodeType {
+    case NodeFull:
+        return FullAPIVersion, nil
+    case NodeMiner:
+        return MinerAPIVersion, nil
+    case NodeWorker:
+        return WorkerAPIVersion, nil
+    default:
+        return Version(0), xerrors.Errorf("unknown node type %d", nodeType)
+    }
+}
+
+// semver versions of the rpc api exposed
+var (
+    FullAPIVersion   = newVer(0, 15, 0)
+    MinerAPIVersion  = newVer(0, 14, 0)
+    WorkerAPIVersion = newVer(0, 14, 0)
+)

 //nolint:varcheck,deadcode
 const (
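A short sketch of how the split API versions are meant to be consumed: the process records its node type at startup and advertises the matching version. The surrounding program and the printing are illustrative assumptions, not part of the diff:

```go
package main

import (
	"fmt"

	"github.com/filecoin-project/lotus/build"
)

func main() {
	// A miner process would set NodeMiner instead; a full node sets NodeFull.
	build.RunningNodeType = build.NodeFull

	v, err := build.VersionForType(build.RunningNodeType)
	if err != nil {
		panic(err) // unknown node type
	}
	fmt.Println("advertising API version:", v)
}
```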
@@ -56,6 +56,10 @@ func (bts *BadBlockCache) Add(c cid.Cid, bbr BadBlockReason) {
     bts.badBlocks.Add(c, bbr)
 }

+func (bts *BadBlockCache) Remove(c cid.Cid) {
+    bts.badBlocks.Remove(c)
+}
+
 func (bts *BadBlockCache) Has(c cid.Cid) (BadBlockReason, bool) {
     rval, ok := bts.badBlocks.Get(c)
     if !ok {
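The new `Remove` method is what backs the `SyncUnmarkBad` API added earlier in this diff. A hedged sketch of clearing a previously bad-marked block so it can be validated again; the helper and the CID parsing are illustrative only:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/api"
)

// unmarkBad is an illustrative helper: it parses a block CID and asks the
// node to drop it from the bad-blocks cache via the new SyncUnmarkBad call.
func unmarkBad(ctx context.Context, node api.FullNode, cidStr string) error {
	c, err := cid.Decode(cidStr)
	if err != nil {
		return fmt.Errorf("parsing cid: %w", err)
	}
	return node.SyncUnmarkBad(ctx, c)
}
```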
@@ -18,6 +18,23 @@ type Response struct {
     Err   error
 }

+type Schedule []BeaconPoint
+
+func (bs Schedule) BeaconForEpoch(e abi.ChainEpoch) RandomBeacon {
+    for i := len(bs) - 1; i >= 0; i-- {
+        bp := bs[i]
+        if e >= bp.Start {
+            return bp.Beacon
+        }
+    }
+    return bs[0].Beacon
+}
+
+type BeaconPoint struct {
+    Start  abi.ChainEpoch
+    Beacon RandomBeacon
+}
+
 // RandomBeacon represents a system that provides randomness to Lotus.
 // Other components interrogate the RandomBeacon to acquire randomness that's
 // valid for a specific chain epoch. Also to verify beacon entries that have
@@ -25,11 +42,30 @@ type Response struct {
 type RandomBeacon interface {
     Entry(context.Context, uint64) <-chan Response
     VerifyEntry(types.BeaconEntry, types.BeaconEntry) error
-    MaxBeaconRoundForEpoch(abi.ChainEpoch, types.BeaconEntry) uint64
+    MaxBeaconRoundForEpoch(abi.ChainEpoch) uint64
 }

-func ValidateBlockValues(b RandomBeacon, h *types.BlockHeader, prevEntry types.BeaconEntry) error {
-    maxRound := b.MaxBeaconRoundForEpoch(h.Height, prevEntry)
+func ValidateBlockValues(bSchedule Schedule, h *types.BlockHeader, parentEpoch abi.ChainEpoch,
+    prevEntry types.BeaconEntry) error {
+    {
+        parentBeacon := bSchedule.BeaconForEpoch(parentEpoch)
+        currBeacon := bSchedule.BeaconForEpoch(h.Height)
+        if parentBeacon != currBeacon {
+            if len(h.BeaconEntries) != 2 {
+                return xerrors.Errorf("expected two beacon entries at beacon fork, got %d", len(h.BeaconEntries))
+            }
+            err := currBeacon.VerifyEntry(h.BeaconEntries[1], h.BeaconEntries[0])
+            if err != nil {
+                return xerrors.Errorf("beacon at fork point invalid: (%v, %v): %w",
+                    h.BeaconEntries[1], h.BeaconEntries[0], err)
+            }
+            return nil
+        }
+    }
+
+    // TODO: fork logic
+    b := bSchedule.BeaconForEpoch(h.Height)
+    maxRound := b.MaxBeaconRoundForEpoch(h.Height)
     if maxRound == prevEntry.Round {
         if len(h.BeaconEntries) != 0 {
             return xerrors.Errorf("expected not to have any beacon entries in this block, got %d", len(h.BeaconEntries))
@@ -56,10 +92,35 @@ func ValidateBlockValues(b RandomBeacon, h *types.BlockHeader, prevEntry types.B
     return nil
 }

-func BeaconEntriesForBlock(ctx context.Context, beacon RandomBeacon, round abi.ChainEpoch, prev types.BeaconEntry) ([]types.BeaconEntry, error) {
+func BeaconEntriesForBlock(ctx context.Context, bSchedule Schedule, epoch abi.ChainEpoch, parentEpoch abi.ChainEpoch, prev types.BeaconEntry) ([]types.BeaconEntry, error) {
+    {
+        parentBeacon := bSchedule.BeaconForEpoch(parentEpoch)
+        currBeacon := bSchedule.BeaconForEpoch(epoch)
+        if parentBeacon != currBeacon {
+            // Fork logic
+            round := currBeacon.MaxBeaconRoundForEpoch(epoch)
+            out := make([]types.BeaconEntry, 2)
+            rch := currBeacon.Entry(ctx, round-1)
+            res := <-rch
+            if res.Err != nil {
+                return nil, xerrors.Errorf("getting entry %d returned error: %w", round-1, res.Err)
+            }
+            out[0] = res.Entry
+            rch = currBeacon.Entry(ctx, round)
+            res = <-rch
+            if res.Err != nil {
+                return nil, xerrors.Errorf("getting entry %d returned error: %w", round, res.Err)
+            }
+            out[1] = res.Entry
+            return out, nil
+        }
+    }
+
+    beacon := bSchedule.BeaconForEpoch(epoch)
+
     start := build.Clock.Now()

-    maxRound := beacon.MaxBeaconRoundForEpoch(round, prev)
+    maxRound := beacon.MaxBeaconRoundForEpoch(epoch)
     if maxRound == prev.Round {
         return nil, nil
     }
@@ -82,7 +143,7 @@ func BeaconEntriesForBlock(ctx context.Context, beacon RandomBeacon, round abi.C
         out = append(out, resp.Entry)
         cur = resp.Entry.Round - 1
     case <-ctx.Done():
-        return nil, xerrors.Errorf("context timed out waiting on beacon entry to come back for round %d: %w", round, ctx.Err())
+        return nil, xerrors.Errorf("context timed out waiting on beacon entry to come back for epoch %d: %w", epoch, ctx.Err())
     }
 }
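To make the schedule semantics concrete: `BeaconForEpoch` walks the points from newest to oldest and returns the last beacon whose `Start` is at or below the queried epoch, which is exactly how the fork check above detects a beacon switch between parent and child epochs. A toy sketch using the mock beacon from this diff; the fork height and comparisons are placeholders:

```go
package main

import (
	"fmt"
	"time"

	"github.com/filecoin-project/lotus/chain/beacon"
)

func main() {
	oldBeacon := beacon.NewMockBeacon(time.Second)
	newBeacon := beacon.NewMockBeacon(time.Second)

	// Placeholder fork height; on the space race testnet the switch happens
	// at UpgradeSmokeHeight.
	sched := beacon.Schedule{
		{Start: 0, Beacon: oldBeacon},
		{Start: 51000, Beacon: newBeacon},
	}

	fmt.Println(sched.BeaconForEpoch(100) == oldBeacon)   // true: before the fork
	fmt.Println(sched.BeaconForEpoch(60000) == newBeacon) // true: at/after the fork
}
```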
@@ -187,7 +187,7 @@ func (db *DrandBeacon) VerifyEntry(curr types.BeaconEntry, prev types.BeaconEntr
     return err
 }

-func (db *DrandBeacon) MaxBeaconRoundForEpoch(filEpoch abi.ChainEpoch, prevEntry types.BeaconEntry) uint64 {
+func (db *DrandBeacon) MaxBeaconRoundForEpoch(filEpoch abi.ChainEpoch) uint64 {
     // TODO: sometimes the genesis time for filecoin is zero and this goes negative
     latestTs := ((uint64(filEpoch) * db.filRoundTime) + db.filGenTime) - db.filRoundTime
     dround := (latestTs - db.drandGenTime) / uint64(db.interval.Seconds())
@@ -12,7 +12,7 @@ import (
 )

 func TestPrintGroupInfo(t *testing.T) {
-    server := build.DrandConfig().Servers[0]
+    server := build.DrandConfigs[build.DrandIncentinet].Servers[0]
     c, err := hclient.New(server, nil, nil)
     assert.NoError(t, err)
     cg := c.(interface {
@@ -53,11 +53,7 @@ func (mb *mockBeacon) VerifyEntry(from types.BeaconEntry, to types.BeaconEntry)
     return nil
 }

-func (mb *mockBeacon) IsEntryForEpoch(e types.BeaconEntry, epoch abi.ChainEpoch, nulls int) (bool, error) {
-    return int64(e.Round) <= int64(epoch) && int64(epoch)-int64(nulls) >= int64(e.Round), nil
-}
-
-func (mb *mockBeacon) MaxBeaconRoundForEpoch(epoch abi.ChainEpoch, prevEntry types.BeaconEntry) uint64 {
+func (mb *mockBeacon) MaxBeaconRoundForEpoch(epoch abi.ChainEpoch) uint64 {
     return uint64(epoch)
 }
chain/checkpoint.go (new file) | 81
@@ -0,0 +1,81 @@
+package chain
+
+import (
+    "encoding/json"
+
+    "github.com/filecoin-project/lotus/chain/types"
+
+    "github.com/filecoin-project/lotus/node/modules/dtypes"
+    "github.com/ipfs/go-datastore"
+    "golang.org/x/xerrors"
+)
+
+var CheckpointKey = datastore.NewKey("/chain/checks")
+
+func loadCheckpoint(ds dtypes.MetadataDS) (types.TipSetKey, error) {
+    haveChks, err := ds.Has(CheckpointKey)
+    if err != nil {
+        return types.EmptyTSK, err
+    }
+
+    if !haveChks {
+        return types.EmptyTSK, nil
+    }
+
+    tskBytes, err := ds.Get(CheckpointKey)
+    if err != nil {
+        return types.EmptyTSK, err
+    }
+
+    var tsk types.TipSetKey
+    err = json.Unmarshal(tskBytes, &tsk)
+    if err != nil {
+        return types.EmptyTSK, err
+    }
+
+    return tsk, err
+}
+
+func (syncer *Syncer) SetCheckpoint(tsk types.TipSetKey) error {
+    if tsk == types.EmptyTSK {
+        return xerrors.Errorf("called with empty tsk")
+    }
+
+    syncer.checkptLk.Lock()
+    defer syncer.checkptLk.Unlock()
+
+    ts, err := syncer.ChainStore().LoadTipSet(tsk)
+    if err != nil {
+        return xerrors.Errorf("cannot find tipset: %w", err)
+    }
+
+    hts := syncer.ChainStore().GetHeaviestTipSet()
+    anc, err := syncer.ChainStore().IsAncestorOf(ts, hts)
+    if err != nil {
+        return xerrors.Errorf("cannot determine whether checkpoint tipset is in main-chain: %w", err)
+    }
+
+    if !hts.Equals(ts) && !anc {
+        return xerrors.Errorf("cannot mark tipset as checkpoint, since it isn't in the main-chain: %w", err)
+    }
+
+    tskBytes, err := json.Marshal(tsk)
+    if err != nil {
+        return err
+    }
+
+    err = syncer.ds.Put(CheckpointKey, tskBytes)
+    if err != nil {
+        return err
+    }
+
+    syncer.checkpt = tsk
+
+    return nil
+}
+
+func (syncer *Syncer) GetCheckpoint() types.TipSetKey {
+    syncer.checkptLk.Lock()
+    defer syncer.checkptLk.Unlock()
+    return syncer.checkpt
+}
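`SetCheckpoint` rejects tipsets that are not on the node's current heavy chain and persists the key so the checkpoint survives restarts. A hedged sketch of driving it through the public API added earlier in this diff, checkpointing the current head; the helper itself is illustrative:

```go
package main

import (
	"context"

	"github.com/filecoin-project/lotus/api"
)

// checkpointHead marks the node's current head tipset as checkpointed, so the
// syncer will never fork away from it. Illustrative helper, not in the diff.
func checkpointHead(ctx context.Context, node api.FullNode) error {
	head, err := node.ChainHead(ctx)
	if err != nil {
		return err
	}
	return node.SyncCheckpoint(ctx, head.Key())
}
```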
@@ -59,7 +59,7 @@ type ChainGen struct {

     cs *store.ChainStore

-    beacon beacon.RandomBeacon
+    beacon beacon.Schedule

     sm *stmgr.StateManager

@@ -252,7 +252,7 @@ func NewGeneratorWithSectors(numSectors int) (*ChainGen, error) {

     miners := []address.Address{maddr1, maddr2}

-    beac := beacon.NewMockBeacon(time.Second)
+    beac := beacon.Schedule{{Start: 0, Beacon: beacon.NewMockBeacon(time.Second)}}
     //beac, err := drand.NewDrandBeacon(tpl.Timestamp, build.BlockDelaySecs)
     //if err != nil {
     //return nil, xerrors.Errorf("creating drand beacon: %w", err)
@@ -338,7 +338,7 @@ func (cg *ChainGen) nextBlockProof(ctx context.Context, pts *types.TipSet, m add

     prev := mbi.PrevBeaconEntry

-    entries, err := beacon.BeaconEntriesForBlock(ctx, cg.beacon, round, prev)
+    entries, err := beacon.BeaconEntriesForBlock(ctx, cg.beacon, round, pts.Height(), prev)
     if err != nil {
         return nil, nil, nil, xerrors.Errorf("get beacon entries for block: %w", err)
     }
@@ -358,7 +358,7 @@ func (cg *ChainGen) nextBlockProof(ctx context.Context, pts *types.TipSet, m add
         return nil, nil, nil, xerrors.Errorf("failed to cbor marshal address: %w", err)
     }

-    if len(entries) == 0 {
+    if round > build.UpgradeSmokeHeight {
         buf.Write(pts.MinTicket().VRFProof)
     }

@@ -559,7 +559,7 @@ type mca struct {
     w   *wallet.Wallet
     sm  *stmgr.StateManager
     pv  ffiwrapper.Verifier
-    bcn beacon.RandomBeacon
+    bcn beacon.Schedule
 }

 func (mca mca) ChainGetRandomnessFromTickets(ctx context.Context, tsk types.TipSetKey, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) (abi.Randomness, error) {
|
|||||||
"fmt"
|
"fmt"
|
||||||
"math/rand"
|
"math/rand"
|
||||||
|
|
||||||
|
"github.com/filecoin-project/lotus/build"
|
||||||
|
|
||||||
|
"github.com/filecoin-project/go-state-types/network"
|
||||||
|
|
||||||
"github.com/filecoin-project/lotus/chain/state"
|
"github.com/filecoin-project/lotus/chain/state"
|
||||||
|
|
||||||
"github.com/ipfs/go-cid"
|
"github.com/ipfs/go-cid"
|
||||||
|
@@ -12,6 +12,7 @@ import (
     "time"

     "github.com/filecoin-project/go-state-types/abi"
+    "github.com/filecoin-project/go-state-types/big"
     "github.com/filecoin-project/go-state-types/crypto"
     "github.com/hashicorp/go-multierror"
     lru "github.com/hashicorp/golang-lru"
@@ -160,6 +161,22 @@ func ComputeMinRBF(curPrem abi.TokenAmount) abi.TokenAmount {
     return types.BigAdd(minPrice, types.NewInt(1))
 }

+func CapGasFee(msg *types.Message, maxFee abi.TokenAmount) {
+    if maxFee.Equals(big.Zero()) {
+        maxFee = types.NewInt(build.FilecoinPrecision / 10)
+    }
+
+    gl := types.NewInt(uint64(msg.GasLimit))
+    totalFee := types.BigMul(msg.GasFeeCap, gl)
+
+    if totalFee.LessThanEqual(maxFee) {
+        return
+    }
+
+    msg.GasFeeCap = big.Div(maxFee, gl)
+    msg.GasPremium = big.Min(msg.GasFeeCap, msg.GasPremium) // cap premium at FeeCap
+}
+
 func (ms *msgSet) add(m *types.SignedMessage, mp *MessagePool, strict bool) (bool, error) {
     nextNonce := ms.nextNonce
     nonceGap := false
@@ -399,7 +416,7 @@ func (mp *MessagePool) verifyMsgBeforeAdd(m *types.SignedMessage, curTs *types.T
     publish := local
     if strictBaseFeeValidation && len(curTs.Blocks()) > 0 {
         baseFee := curTs.Blocks()[0].ParentBaseFee
-        baseFeeLowerBound := types.BigDiv(baseFee, baseFeeLowerBoundFactor)
+        baseFeeLowerBound := getBaseFeeLowerBound(baseFee)
         if m.Message.GasFeeCap.LessThan(baseFeeLowerBound) {
             if local {
                 log.Warnf("local message will not be immediately published because GasFeeCap doesn't meet the lower bound for inclusion in the next 20 blocks (GasFeeCap: %s, baseFeeLowerBound: %s)",
@@ -590,7 +607,7 @@ func (mp *MessagePool) addTs(m *types.SignedMessage, curTs *types.TipSet, local
         return false, err
     }

-    return publish, mp.addLocked(m, true)
+    return publish, mp.addLocked(m, !local)
 }

 func (mp *MessagePool) addLoaded(m *types.SignedMessage) error {
@@ -812,7 +829,7 @@ func (mp *MessagePool) PushWithNonce(ctx context.Context, addr address.Address,
         return nil, err
     }

-    if err := mp.addLocked(msg, true); err != nil {
+    if err := mp.addLocked(msg, false); err != nil {
         return nil, xerrors.Errorf("add locked failed: %w", err)
     }
     if err := mp.addLocal(msg, msgb); err != nil {
@@ -1282,3 +1299,12 @@ func (mp *MessagePool) Clear(local bool) {
         delete(mp.pending, a)
     }
 }
+
+func getBaseFeeLowerBound(baseFee types.BigInt) types.BigInt {
+    baseFeeLowerBound := types.BigDiv(baseFee, baseFeeLowerBoundFactor)
+    if baseFeeLowerBound.LessThan(minimumBaseFee) {
+        baseFeeLowerBound = minimumBaseFee
+    }
+
+    return baseFeeLowerBound
+}
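The effect of the new `CapGasFee` helper is easiest to see with concrete numbers: the fee cap is lowered so that GasFeeCap × GasLimit never exceeds the budget, and the premium is then clamped to the (possibly lowered) cap. The values below are illustrative only, not from the diff:

```go
package main

import (
	"fmt"

	"github.com/filecoin-project/lotus/chain/messagepool"
	"github.com/filecoin-project/lotus/chain/types"
)

func main() {
	msg := &types.Message{
		GasLimit:   10_000_000,
		GasFeeCap:  types.NewInt(500_000_000_000), // would cost 5 FIL at the gas limit
		GasPremium: types.NewInt(200_000_000_000),
	}

	// Cap the total fee at 1 FIL (10^18 attoFIL).
	messagepool.CapGasFee(msg, types.NewInt(1_000_000_000_000_000_000))

	// GasFeeCap becomes 1e18 / 1e7 = 1e11; GasPremium is clamped to the same value.
	fmt.Println(msg.GasFeeCap, msg.GasPremium)
}
```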
@@ -7,6 +7,7 @@ import (
     "testing"

     "github.com/filecoin-project/go-address"
+    "github.com/filecoin-project/go-state-types/abi"
     "github.com/filecoin-project/go-state-types/crypto"
     "github.com/filecoin-project/lotus/chain/messagepool/gasguess"
     "github.com/filecoin-project/lotus/chain/types"
@@ -56,6 +57,13 @@ func (tma *testMpoolAPI) nextBlock() *types.BlockHeader {
     return newBlk
 }

+func (tma *testMpoolAPI) nextBlockWithHeight(height uint64) *types.BlockHeader {
+    newBlk := mock.MkBlock(tma.tipsets[len(tma.tipsets)-1], 1, 1)
+    newBlk.Height = abi.ChainEpoch(height)
+    tma.tipsets = append(tma.tipsets, mock.TipSet(newBlk))
+    return newBlk
+}
+
 func (tma *testMpoolAPI) applyBlock(t *testing.T, b *types.BlockHeader) {
     t.Helper()
     if err := tma.cb(nil, []*types.TipSet{mock.TipSet(b)}); err != nil {
@ -46,13 +46,21 @@ func (mp *MessagePool) pruneMessages(ctx context.Context, ts *types.TipSet) erro
if err != nil {
return xerrors.Errorf("computing basefee: %w", err)
}
baseFeeLowerBound := getBaseFeeLowerBound(baseFee)

pending, _ := mp.getPendingMessages(ts, ts)

// priority actors -- not pruned
// protected actors -- not pruned
priority := make(map[address.Address]struct{})
protected := make(map[address.Address]struct{})

// we never prune priority addresses
for _, actor := range mp.cfg.PriorityAddrs {
priority[actor] = struct{}{}
protected[actor] = struct{}{}
}

// we also never prune locally published messages
for actor := range mp.localAddrs {
protected[actor] = struct{}{}
}

// Collect all messages to track which ones to remove and create chains for block inclusion
@ -61,18 +69,18 @@ func (mp *MessagePool) pruneMessages(ctx context.Context, ts *types.TipSet) erro

var chains []*msgChain
for actor, mset := range pending {
// we never prune priority actors
// we never prune protected actors
_, keep := priority[actor]
_, keep := protected[actor]
if keep {
keepCount += len(mset)
continue
}

// not a priority actor, track the messages and create chains
// not a protected actor, track the messages and create chains
for _, m := range mset {
pruneMsgs[m.Message.Cid()] = m
}
actorChains := mp.createMessageChains(actor, mset, baseFee, ts)
actorChains := mp.createMessageChains(actor, mset, baseFeeLowerBound, ts)
chains = append(chains, actorChains...)
}

@ -27,11 +27,7 @@ func (mp *MessagePool) republishPendingMessages() error {
mp.curTsLk.Unlock()
return xerrors.Errorf("computing basefee: %w", err)
}
baseFeeLowerBound := getBaseFeeLowerBound(baseFee)
baseFeeLowerBound := types.BigDiv(baseFee, baseFeeLowerBoundFactor)
if baseFeeLowerBoundFactor.LessThan(minimumBaseFee) {
baseFeeLowerBound = minimumBaseFee
}

pending := make(map[address.Address]map[uint64]*types.SignedMessage)
mp.lk.Lock()
@ -199,9 +199,11 @@ func (mp *MessagePool) selectMessagesOptimal(curTs, ts *types.TipSet, tq float64
gasLimit -= chainGasLimit

// resort to account for already merged chains and effective performance adjustments
sort.Slice(chains[i+1:], func(i, j int) bool {
// the sort *must* be stable or we end up getting negative gasPerfs pushed up.
sort.SliceStable(chains[i+1:], func(i, j int) bool {
return chains[i].BeforeEffective(chains[j])
})

continue
}

@ -307,8 +309,10 @@ tailLoop:

// if we have gasLimit to spare, pick some random (non-negative) chains to fill the block
// we pick randomly so that we minimize the probability of duplication among all miners
startRandom := time.Now()
if gasLimit >= minGas {
randomCount := 0

startRandom := time.Now()
shuffleChains(chains)

for _, chain := range chains {
@ -357,15 +361,23 @@ tailLoop:
curChain := chainDeps[i]
curChain.merged = true
result = append(result, curChain.msgs...)
randomCount += len(curChain.msgs)
}

chain.merged = true
result = append(result, chain.msgs...)
randomCount += len(chain.msgs)
gasLimit -= chainGasLimit
}
}
if dt := time.Since(startRandom); dt > time.Millisecond {
log.Infow("pack random tail chains done", "took", dt)
}

if randomCount > 0 {
log.Warnf("optimal selection failed to pack a block; picked %d messages with random selection",
randomCount)
}
}

return result, nil
@ -912,7 +924,9 @@ func (mc *msgChain) SetNullEffectivePerf() {

func (mc *msgChain) BeforeEffective(other *msgChain) bool {
// move merged chains to the front so we can discard them earlier
return (mc.merged && !other.merged) || mc.effPerf > other.effPerf ||
return (mc.merged && !other.merged) ||
(mc.gasPerf >= 0 && other.gasPerf < 0) ||
mc.effPerf > other.effPerf ||
(mc.effPerf == other.effPerf && mc.gasPerf > other.gasPerf) ||
(mc.effPerf == other.effPerf && mc.gasPerf == other.gasPerf && mc.gasReward.Cmp(other.gasReward) > 0)
}
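The stable sort introduced above matters because chains that compare equal under BeforeEffective must keep their existing order; the commit notes that an unstable sort can push negative gasPerfs up. A cut-down, standalone illustration (the chain struct and values here are invented for the example):

package main

import (
    "fmt"
    "sort"
)

// chain is a cut-down stand-in for msgChain: only the fields the comparator reads.
type chain struct {
    name             string
    merged           bool
    effPerf, gasPerf float64
}

// before mirrors the updated ordering: merged chains first, then chains with
// non-negative gasPerf ahead of negative ones, then by effective performance.
func before(a, b chain) bool {
    return (a.merged && !b.merged) ||
        (a.gasPerf >= 0 && b.gasPerf < 0) ||
        a.effPerf > b.effPerf
}

func main() {
    chains := []chain{
        {name: "c1", gasPerf: 5},
        {name: "c2", gasPerf: -1},
        {name: "c3", gasPerf: 3},
    }
    // c1 and c3 compare equal under `before`; SliceStable keeps their original
    // order, while an unstable sort would be free to swap them.
    sort.SliceStable(chains, func(i, j int) bool { return before(chains[i], chains[j]) })
    fmt.Println(chains) // [{c1 false 0 5} {c3 false 0 3} {c2 false 0 -1}]
}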
@ -1,11 +1,16 @@
package messagepool

import (
"compress/gzip"
"context"
"encoding/json"
"fmt"
"io"
"math"
"math/big"
"math/rand"
"os"
"sort"
"testing"

"github.com/filecoin-project/go-address"
@ -1281,3 +1286,177 @@ func TestGasReward(t *testing.T) {
})
}
}

func TestRealWorldSelection(t *testing.T) {
// load test-messages.json.gz and rewrite the messages so that
// 1) we map each real actor to a test actor so that we can sign the messages
// 2) adjust the nonces so that they start from 0
file, err := os.Open("test-messages.json.gz")
if err != nil {
t.Fatal(err)
}

gzr, err := gzip.NewReader(file)
if err != nil {
t.Fatal(err)
}

dec := json.NewDecoder(gzr)

var msgs []*types.SignedMessage
baseNonces := make(map[address.Address]uint64)

readLoop:
for {
m := new(types.SignedMessage)
err := dec.Decode(m)
switch err {
case nil:
msgs = append(msgs, m)
nonce, ok := baseNonces[m.Message.From]
if !ok || m.Message.Nonce < nonce {
baseNonces[m.Message.From] = m.Message.Nonce
}

case io.EOF:
break readLoop

default:
t.Fatal(err)
}
}

actorMap := make(map[address.Address]address.Address)
actorWallets := make(map[address.Address]*wallet.Wallet)

for _, m := range msgs {
baseNonce := baseNonces[m.Message.From]

localActor, ok := actorMap[m.Message.From]
if !ok {
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}

a, err := w.GenerateKey(crypto.SigTypeSecp256k1)
if err != nil {
t.Fatal(err)
}

actorMap[m.Message.From] = a
actorWallets[a] = w
localActor = a
}

w, ok := actorWallets[localActor]
if !ok {
t.Fatalf("failed to lookup wallet for actor %s", localActor)
}

m.Message.From = localActor
m.Message.Nonce -= baseNonce

sig, err := w.Sign(context.TODO(), localActor, m.Message.Cid().Bytes())
if err != nil {
t.Fatal(err)
}

m.Signature = *sig
}

mp, tma := makeTestMpool()

block := tma.nextBlockWithHeight(build.UpgradeBreezeHeight + 10)
ts := mock.TipSet(block)
tma.applyBlock(t, block)

for _, a := range actorMap {
tma.setBalance(a, 1000000)
}

tma.baseFee = types.NewInt(800_000_000)

sort.Slice(msgs, func(i, j int) bool {
return msgs[i].Message.Nonce < msgs[j].Message.Nonce
})

// add the messages
for _, m := range msgs {
mustAdd(t, mp, m)
}

// do message selection and check block packing
minGasLimit := int64(0.9 * float64(build.BlockGasLimit))

// greedy first
selected, err := mp.SelectMessages(ts, 1.0)
if err != nil {
t.Fatal(err)
}

gasLimit := int64(0)
for _, m := range selected {
gasLimit += m.Message.GasLimit
}
if gasLimit < minGasLimit {
t.Fatalf("failed to pack with tq=1.0; packed %d, minimum packing: %d", gasLimit, minGasLimit)
}

// high quality ticket
selected, err = mp.SelectMessages(ts, .8)
if err != nil {
t.Fatal(err)
}

gasLimit = int64(0)
for _, m := range selected {
gasLimit += m.Message.GasLimit
}
if gasLimit < minGasLimit {
t.Fatalf("failed to pack with tq=0.8; packed %d, minimum packing: %d", gasLimit, minGasLimit)
}

// mid quality ticket
selected, err = mp.SelectMessages(ts, .4)
if err != nil {
t.Fatal(err)
}

gasLimit = int64(0)
for _, m := range selected {
gasLimit += m.Message.GasLimit
}
if gasLimit < minGasLimit {
t.Fatalf("failed to pack with tq=0.4; packed %d, minimum packing: %d", gasLimit, minGasLimit)
}

// low quality ticket
selected, err = mp.SelectMessages(ts, .1)
if err != nil {
t.Fatal(err)
}

gasLimit = int64(0)
for _, m := range selected {
gasLimit += m.Message.GasLimit
}
if gasLimit < minGasLimit {
t.Fatalf("failed to pack with tq=0.1; packed %d, minimum packing: %d", gasLimit, minGasLimit)
}

// very low quality ticket
selected, err = mp.SelectMessages(ts, .01)
if err != nil {
t.Fatal(err)
}

gasLimit = int64(0)
for _, m := range selected {
gasLimit += m.Message.GasLimit
}
if gasLimit < minGasLimit {
t.Fatalf("failed to pack with tq=0.01; packed %d, minimum packing: %d", gasLimit, minGasLimit)
}

}
BIN chain/messagepool/test-messages.json.gz (new file)
Binary file not shown.
@ -15,7 +15,6 @@ import (
"github.com/filecoin-project/specs-actors/actors/builtin/power"
"github.com/filecoin-project/specs-actors/actors/builtin/verifreg"
"github.com/filecoin-project/specs-actors/actors/runtime"
"github.com/filecoin-project/specs-actors/actors/util/adt"
"golang.org/x/xerrors"

"github.com/filecoin-project/lotus/chain/actors"
@ -73,15 +72,15 @@ func (ta *testActor) Exports() []interface{} {
}
}

func (ta *testActor) Constructor(rt runtime.Runtime, params *adt.EmptyValue) *adt.EmptyValue {
func (ta *testActor) Constructor(rt runtime.Runtime, params *abi.EmptyValue) *abi.EmptyValue {
rt.ValidateImmediateCallerAcceptAny()
rt.State().Create(&testActorState{11})
fmt.Println("NEW ACTOR ADDRESS IS: ", rt.Message().Receiver())

return adt.Empty
return abi.Empty
}

func (ta *testActor) TestMethod(rt runtime.Runtime, params *adt.EmptyValue) *adt.EmptyValue {
func (ta *testActor) TestMethod(rt runtime.Runtime, params *abi.EmptyValue) *abi.EmptyValue {
rt.ValidateImmediateCallerAcceptAny()
var st testActorState
rt.State().Readonly(&st)
@ -96,7 +95,7 @@ func (ta *testActor) TestMethod(rt runtime.Runtime, params *adt.EmptyValue) *adt
}
}

return adt.Empty
return abi.Empty
}

func TestForkHeightTriggers(t *testing.T) {
@ -1131,13 +1131,17 @@ func (sm *StateManager) GetCirculatingSupply(ctx context.Context, height abi.Cha
func (sm *StateManager) GetNtwkVersion(ctx context.Context, height abi.ChainEpoch) network.Version {
// TODO: move hard fork epoch checks to a schedule defined in build/

if build.UpgradeBreezeHeight == 0 {
if build.UseNewestNetwork() {
return network.Version1
return build.NewestNetworkVersion
}

if height <= build.UpgradeBreezeHeight {
return network.Version0
}

return network.Version1
if height <= build.UpgradeSmokeHeight {
return network.Version1
}

return build.NewestNetworkVersion
}
@ -479,7 +479,7 @@ func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.
return lbts, nil
}

func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcn beacon.RandomBeacon, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
ts, err := sm.ChainStore().LoadTipSet(tsk)
if err != nil {
return nil, xerrors.Errorf("failed to load tipset for mining base: %w", err)
@ -494,7 +494,7 @@ func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcn beacon.RandomBe
prev = &types.BeaconEntry{}
}

entries, err := beacon.BeaconEntriesForBlock(ctx, bcn, round, *prev)
entries, err := beacon.BeaconEntriesForBlock(ctx, bcs, round, ts.Height(), *prev)
if err != nil {
return nil, err
}
@ -601,8 +601,8 @@ func init() {
// Explicitly add send, it's special.
methods[builtin.MethodSend] = MethodMeta{
Name: "Send",
Params: reflect.TypeOf(new(adt.EmptyValue)),
Params: reflect.TypeOf(new(abi.EmptyValue)),
Ret: reflect.TypeOf(new(adt.EmptyValue)),
Ret: reflect.TypeOf(new(abi.EmptyValue)),
}

// Learn method names from the builtin.Methods* structs.
@ -11,14 +11,20 @@ import (
"golang.org/x/xerrors"
)

func computeNextBaseFee(baseFee types.BigInt, gasLimitUsed int64, noOfBlocks int) types.BigInt {
func ComputeNextBaseFee(baseFee types.BigInt, gasLimitUsed int64, noOfBlocks int, epoch abi.ChainEpoch) types.BigInt {
// delta := 1/PackingEfficiency * gasLimitUsed/noOfBlocks - build.BlockGasTarget
// delta := gasLimitUsed/noOfBlocks - build.BlockGasTarget
// change := baseFee * delta / BlockGasTarget / BaseFeeMaxChangeDenom
// change := baseFee * delta / BlockGasTarget
// nextBaseFee = baseFee + change
// nextBaseFee = max(nextBaseFee, build.MinimumBaseFee)

delta := build.PackingEfficiencyDenom * gasLimitUsed / (int64(noOfBlocks) * build.PackingEfficiencyNum)
var delta int64
delta -= build.BlockGasTarget
if epoch > build.UpgradeSmokeHeight {
delta = gasLimitUsed / int64(noOfBlocks)
delta -= build.BlockGasTarget
} else {
delta = build.PackingEfficiencyDenom * gasLimitUsed / (int64(noOfBlocks) * build.PackingEfficiencyNum)
delta -= build.BlockGasTarget
}

// cap change at 12.5% (BaseFeeMaxChangeDenom) by capping delta
if delta > build.BlockGasTarget {
|
|||||||
}
|
}
|
||||||
parentBaseFee := ts.Blocks()[0].ParentBaseFee
|
parentBaseFee := ts.Blocks()[0].ParentBaseFee
|
||||||
|
|
||||||
return computeNextBaseFee(parentBaseFee, totalLimit, len(ts.Blocks())), nil
|
return ComputeNextBaseFee(parentBaseFee, totalLimit, len(ts.Blocks()), ts.Height()), nil
|
||||||
}
|
}
|
||||||
|
@ -27,7 +27,7 @@ func TestBaseFee(t *testing.T) {
|
|||||||
for _, test := range tests {
|
for _, test := range tests {
|
||||||
test := test
|
test := test
|
||||||
t.Run(fmt.Sprintf("%v", test), func(t *testing.T) {
|
t.Run(fmt.Sprintf("%v", test), func(t *testing.T) {
|
||||||
output := computeNextBaseFee(types.NewInt(test.basefee), test.limitUsed, test.noOfBlocks)
|
output := ComputeNextBaseFee(types.NewInt(test.basefee), test.limitUsed, test.noOfBlocks, 0)
|
||||||
assert.Equal(t, fmt.Sprintf("%d", test.output), output.String())
|
assert.Equal(t, fmt.Sprintf("%d", test.output), output.String())
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
@ -473,7 +473,7 @@ func (cs *ChainStore) IsAncestorOf(a, b *types.TipSet) (bool, error) {
|
|||||||
|
|
||||||
cur := b
|
cur := b
|
||||||
for !a.Equals(cur) && cur.Height() > a.Height() {
|
for !a.Equals(cur) && cur.Height() > a.Height() {
|
||||||
next, err := cs.LoadTipSet(b.Parents())
|
next, err := cs.LoadTipSet(cur.Parents())
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return false, err
|
return false, err
|
||||||
}
|
}
|
||||||
@ -1179,7 +1179,7 @@ func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.
|
|||||||
return in, rerr
|
return in, rerr
|
||||||
}
|
}
|
||||||
|
|
||||||
func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, w io.Writer) error {
|
func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs bool, w io.Writer) error {
|
||||||
if ts == nil {
|
if ts == nil {
|
||||||
ts = cs.GetHeaviestTipSet()
|
ts = cs.GetHeaviestTipSet()
|
||||||
}
|
}
|
||||||
@ -1217,9 +1217,13 @@ func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRo
|
|||||||
return xerrors.Errorf("unmarshaling block header (cid=%s): %w", blk, err)
|
return xerrors.Errorf("unmarshaling block header (cid=%s): %w", blk, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
cids, err := recurseLinks(cs.bs, walked, b.Messages, []cid.Cid{b.Messages})
|
var cids []cid.Cid
|
||||||
if err != nil {
|
if !skipOldMsgs || b.Height > ts.Height()-inclRecentRoots {
|
||||||
return xerrors.Errorf("recursing messages failed: %w", err)
|
mcids, err := recurseLinks(cs.bs, walked, b.Messages, []cid.Cid{b.Messages})
|
||||||
|
if err != nil {
|
||||||
|
return xerrors.Errorf("recursing messages failed: %w", err)
|
||||||
|
}
|
||||||
|
cids = mcids
|
||||||
}
|
}
|
||||||
|
|
||||||
if b.Height > 0 {
|
if b.Height > 0 {
|
||||||
|
@ -96,7 +96,7 @@ func TestChainExportImport(t *testing.T) {
|
|||||||
}
|
}
|
||||||
|
|
||||||
buf := new(bytes.Buffer)
|
buf := new(bytes.Buffer)
|
||||||
if err := cg.ChainStore().Export(context.TODO(), last, 0, buf); err != nil {
|
if err := cg.ChainStore().Export(context.TODO(), last, 0, false, buf); err != nil {
|
||||||
t.Fatal(err)
|
t.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -9,8 +9,11 @@ import (
"sort"
"strconv"
"strings"
"sync"
"time"

"github.com/filecoin-project/lotus/node/modules/dtypes"

"github.com/filecoin-project/specs-actors/actors/runtime/proof"

"github.com/Gurpartap/async"
@ -100,7 +103,7 @@ type Syncer struct {
store *store.ChainStore

// handle to the random beacon for verification
beacon beacon.RandomBeacon
beacon beacon.Schedule

// the state manager handles making state queries
sm *stmgr.StateManager
@ -129,10 +132,16 @@ type Syncer struct {
windowSize int

tickerCtxCancel context.CancelFunc

checkptLk sync.Mutex

checkpt types.TipSetKey

ds dtypes.MetadataDS
}

// NewSyncer creates a new Syncer object.
func NewSyncer(sm *stmgr.StateManager, exchange exchange.Client, connmgr connmgr.ConnManager, self peer.ID, beacon beacon.RandomBeacon, verifier ffiwrapper.Verifier) (*Syncer, error) {
func NewSyncer(ds dtypes.MetadataDS, sm *stmgr.StateManager, exchange exchange.Client, connmgr connmgr.ConnManager, self peer.ID, beacon beacon.Schedule, verifier ffiwrapper.Verifier) (*Syncer, error) {
gen, err := sm.ChainStore().GetGenesis()
if err != nil {
return nil, xerrors.Errorf("getting genesis block: %w", err)
@ -143,7 +152,14 @@ func NewSyncer(sm *stmgr.StateManager, exchange exchange.Client, connmgr connmgr
return nil, err
}

cp, err := loadCheckpoint(ds)
if err != nil {
return nil, xerrors.Errorf("error loading mpool config: %w", err)
}

s := &Syncer{
ds: ds,
checkpt: cp,
beacon: beacon,
bad: NewBadBlockCache(),
Genesis: gent,
@ -863,7 +879,7 @@ func (syncer *Syncer) ValidateBlock(ctx context.Context, b *types.FullBlock) (er
return nil
}

if err := beacon.ValidateBlockValues(syncer.beacon, h, *prevBeacon); err != nil {
if err := beacon.ValidateBlockValues(syncer.beacon, h, baseTs.Height(), *prevBeacon); err != nil {
return xerrors.Errorf("failed to validate blocks random beacon values: %w", err)
}
return nil
@ -875,10 +891,12 @@ func (syncer *Syncer) ValidateBlock(ctx context.Context, b *types.FullBlock) (er
return xerrors.Errorf("failed to marshal miner address to cbor: %w", err)
}

beaconBase := *prevBeacon
if h.Height > build.UpgradeSmokeHeight {
if len(h.BeaconEntries) == 0 {
buf.Write(baseTs.MinTicket().VRFProof)
} else {
}

beaconBase := *prevBeacon
if len(h.BeaconEntries) != 0 {
beaconBase = h.BeaconEntries[len(h.BeaconEntries)-1]
}

@ -1362,7 +1380,7 @@ loop:
log.Warnf("(fork detected) synced header chain (%s - %d) does not link to our best block (%s - %d)", incoming.Cids(), incoming.Height(), known.Cids(), known.Height())
fork, err := syncer.syncFork(ctx, base, known)
if err != nil {
if xerrors.Is(err, ErrForkTooLong) {
if xerrors.Is(err, ErrForkTooLong) || xerrors.Is(err, ErrForkCheckpoint) {
// TODO: we're marking this block bad in the same way that we mark invalid blocks bad. Maybe distinguish?
log.Warn("adding forked chain to our bad tipset cache")
for _, b := range incoming.Blocks() {
@ -1378,14 +1396,23 @@ loop:
}

var ErrForkTooLong = fmt.Errorf("fork longer than threshold")
var ErrForkCheckpoint = fmt.Errorf("fork would require us to diverge from checkpointed block")

// syncFork tries to obtain the chain fragment that links a fork into a common
// ancestor in our view of the chain.
//
// If the fork is too long (build.ForkLengthThreshold), we add the entire subchain to the
// If the fork is too long (build.ForkLengthThreshold), or would cause us to diverge from the checkpoint (ErrForkCheckpoint),
// denylist. Else, we find the common ancestor, and add the missing chain
// we add the entire subchain to the denylist. Else, we find the common ancestor, and add the missing chain
// fragment until the fork point to the returned []TipSet.
func (syncer *Syncer) syncFork(ctx context.Context, incoming *types.TipSet, known *types.TipSet) ([]*types.TipSet, error) {

chkpt := syncer.GetCheckpoint()
if known.Key() == chkpt {
return nil, ErrForkCheckpoint
}

// TODO: Does this mean we always ask for ForkLengthThreshold blocks from the network, even if we just need, like, 2?
// Would it not be better to ask in smaller chunks, given that an ~ForkLengthThreshold is very rare?
tips, err := syncer.Exchange.GetBlocks(ctx, incoming.Parents(), int(build.ForkLengthThreshold))
if err != nil {
return nil, err
@ -1411,12 +1438,18 @@ func (syncer *Syncer) syncFork(ctx context.Context, incoming *types.TipSet, know
if nts.Height() < tips[cur].Height() {
cur++
} else {
// We will be forking away from nts, check that it isn't checkpointed
if nts.Key() == chkpt {
return nil, ErrForkCheckpoint
}

nts, err = syncer.store.LoadTipSet(nts.Parents())
if err != nil {
return nil, xerrors.Errorf("loading next local tipset: %w", err)
}
}
}

return nil, ErrForkTooLong
}

@ -1645,6 +1678,11 @@ func (syncer *Syncer) MarkBad(blk cid.Cid) {
syncer.bad.Add(blk, NewBadBlockReason([]cid.Cid{blk}, "manually marked bad"))
}

// UnmarkBad manually removes a block from the "bad blocks" cache.
func (syncer *Syncer) UnmarkBad(blk cid.Cid) {
syncer.bad.Remove(blk)
}

func (syncer *Syncer) CheckBadBlockCache(blk cid.Cid) (string, bool) {
bbr, ok := syncer.bad.Has(blk)
return bbr.String(), ok
@ -333,6 +333,36 @@ func (tu *syncTestUtil) compareSourceState(with int) {
}
}

func (tu *syncTestUtil) assertBad(node int, ts *types.TipSet) {
for _, blk := range ts.Cids() {
rsn, err := tu.nds[node].SyncCheckBad(context.TODO(), blk)
require.NoError(tu.t, err)
require.True(tu.t, len(rsn) != 0)
}
}

func (tu *syncTestUtil) getHead(node int) *types.TipSet {
ts, err := tu.nds[node].ChainHead(context.TODO())
require.NoError(tu.t, err)
return ts
}

func (tu *syncTestUtil) checkpointTs(node int, tsk types.TipSetKey) {
require.NoError(tu.t, tu.nds[node].SyncCheckpoint(context.TODO(), tsk))
}

func (tu *syncTestUtil) waitUntilNodeHasTs(node int, tsk types.TipSetKey) {
for {
_, err := tu.nds[node].ChainGetTipSet(context.TODO(), tsk)
if err != nil {
break
}
}

// Time to allow for syncing and validation
time.Sleep(2 * time.Second)
}

func (tu *syncTestUtil) waitUntilSync(from, to int) {
target, err := tu.nds[from].ChainHead(tu.ctx)
if err != nil {
@ -678,3 +708,87 @@ func TestSyncInputs(t *testing.T) {
t.Fatal("should error on block with nil election proof")
}
}

func TestSyncCheckpointHead(t *testing.T) {
H := 10
tu := prepSyncTest(t, H)

p1 := tu.addClientNode()
p2 := tu.addClientNode()

fmt.Println("GENESIS: ", tu.g.Genesis().Cid())
tu.loadChainToNode(p1)
tu.loadChainToNode(p2)

base := tu.g.CurTipset
fmt.Println("Mining base: ", base.TipSet().Cids(), base.TipSet().Height())

// The two nodes fork at this point into 'a' and 'b'
a1 := tu.mineOnBlock(base, p1, []int{0}, true, false, nil)
a := tu.mineOnBlock(a1, p1, []int{0}, true, false, nil)
a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil)

tu.waitUntilSyncTarget(p1, a.TipSet())
tu.checkpointTs(p1, a.TipSet().Key())

require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet()))
// chain B will now be heaviest
b := tu.mineOnBlock(base, p2, []int{1}, true, false, nil)
b = tu.mineOnBlock(b, p2, []int{1}, true, false, nil)
b = tu.mineOnBlock(b, p2, []int{1}, true, false, nil)
b = tu.mineOnBlock(b, p2, []int{1}, true, false, nil)

fmt.Println("A: ", a.Cids(), a.TipSet().Height())
fmt.Println("B: ", b.Cids(), b.TipSet().Height())

// Now for the fun part!! p1 should mark p2's head as BAD.

require.NoError(t, tu.mn.LinkAll())
tu.connect(p1, p2)
tu.waitUntilNodeHasTs(p1, b.TipSet().Key())
p1Head := tu.getHead(p1)
require.Equal(tu.t, p1Head, a.TipSet())
tu.assertBad(p1, b.TipSet())
}

func TestSyncCheckpointEarlierThanHead(t *testing.T) {
H := 10
tu := prepSyncTest(t, H)

p1 := tu.addClientNode()
p2 := tu.addClientNode()

fmt.Println("GENESIS: ", tu.g.Genesis().Cid())
tu.loadChainToNode(p1)
tu.loadChainToNode(p2)

base := tu.g.CurTipset
fmt.Println("Mining base: ", base.TipSet().Cids(), base.TipSet().Height())

// The two nodes fork at this point into 'a' and 'b'
a1 := tu.mineOnBlock(base, p1, []int{0}, true, false, nil)
a := tu.mineOnBlock(a1, p1, []int{0}, true, false, nil)
a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil)

tu.waitUntilSyncTarget(p1, a.TipSet())
tu.checkpointTs(p1, a1.TipSet().Key())

require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet()))
// chain B will now be heaviest
b := tu.mineOnBlock(base, p2, []int{1}, true, false, nil)
b = tu.mineOnBlock(b, p2, []int{1}, true, false, nil)
b = tu.mineOnBlock(b, p2, []int{1}, true, false, nil)
b = tu.mineOnBlock(b, p2, []int{1}, true, false, nil)

fmt.Println("A: ", a.Cids(), a.TipSet().Height())
fmt.Println("B: ", b.Cids(), b.TipSet().Height())

// Now for the fun part!! p1 should mark p2's head as BAD.

require.NoError(t, tu.mn.LinkAll())
tu.connect(p1, p2)
tu.waitUntilNodeHasTs(p1, b.TipSet().Key())
p1Head := tu.getHead(p1)
require.Equal(tu.t, p1Head, a.TipSet())
tu.assertBad(p1, b.TipSet())
}
@ -5,6 +5,8 @@ import (
"io"
"testing"

"github.com/filecoin-project/go-state-types/abi"

cbor "github.com/ipfs/go-ipld-cbor"
"github.com/stretchr/testify/assert"
cbg "github.com/whyrusleeping/cbor-gen"
@ -13,7 +15,6 @@ import (
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/aerrors"
"github.com/filecoin-project/specs-actors/actors/runtime"
"github.com/filecoin-project/specs-actors/actors/util/adt"
)

type basicContract struct{}
@ -60,17 +61,17 @@ func (b basicContract) Exports() []interface{} {
}
}

func (basicContract) InvokeSomething0(rt runtime.Runtime, params *basicParams) *adt.EmptyValue {
func (basicContract) InvokeSomething0(rt runtime.Runtime, params *basicParams) *abi.EmptyValue {
rt.Abortf(exitcode.ExitCode(params.B), "params.B")
return nil
}

func (basicContract) BadParam(rt runtime.Runtime, params *basicParams) *adt.EmptyValue {
func (basicContract) BadParam(rt runtime.Runtime, params *basicParams) *abi.EmptyValue {
rt.Abortf(255, "bad params")
return nil
}

func (basicContract) InvokeSomething10(rt runtime.Runtime, params *basicParams) *adt.EmptyValue {
func (basicContract) InvokeSomething10(rt runtime.Runtime, params *basicParams) *abi.EmptyValue {
rt.Abortf(exitcode.ExitCode(params.B+10), "params.B")
return nil
}
30 cli/chain.go
@ -337,25 +337,6 @@ var chainSetHeadCmd = &cli.Command{
},
}

func parseTipSet(ctx context.Context, api api.FullNode, vals []string) (*types.TipSet, error) {
var headers []*types.BlockHeader
for _, c := range vals {
blkc, err := cid.Decode(c)
if err != nil {
return nil, err
}

bh, err := api.ChainGetBlock(ctx, blkc)
if err != nil {
return nil, err
}

headers = append(headers, bh)
}

return types.NewTipSet(headers)
}

var chainListCmd = &cli.Command{
Name: "list",
Usage: "View a segment of the chain",
@ -863,6 +844,9 @@ var chainExportCmd = &cli.Command{
Name: "recent-stateroots",
Usage: "specify the number of recent state roots to include in the export",
},
&cli.BoolFlag{
Name: "skip-old-msgs",
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := GetFullNodeAPI(cctx)
@ -897,7 +881,13 @@ var chainExportCmd = &cli.Command{
return err
}

stream, err := api.ChainExport(ctx, rsrs, ts.Key())
skipold := cctx.Bool("skip-old-msgs")

if rsrs == 0 && skipold {
return fmt.Errorf("must pass recent stateroots along with skip-old-msgs")
}

stream, err := api.ChainExport(ctx, rsrs, skipold, ts.Key())
if err != nil {
return err
}
109 cli/mpool.go
@ -3,6 +3,7 @@ package cli
import (
"encoding/json"
"fmt"
stdbig "math/big"
"sort"
"strconv"

@ -14,6 +15,7 @@ import (
"github.com/filecoin-project/go-state-types/big"

lapi "github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/messagepool"
"github.com/filecoin-project/lotus/chain/types"
)
@ -29,6 +31,7 @@ var mpoolCmd = &cli.Command{
mpoolReplaceCmd,
mpoolFindCmd,
mpoolConfig,
mpoolGasPerfCmd,
},
}

@ -300,6 +303,10 @@ var mpoolReplaceCmd = &cli.Command{
Name: "auto",
Usage: "automatically reprice the specified message",
},
&cli.StringFlag{
Name: "max-fee",
Usage: "Spend up to X FIL for this message (applicable for auto mode)",
},
},
ArgsUsage: "[from] [nonce]",
Action: func(cctx *cli.Context) error {
@ -350,17 +357,30 @@ var mpoolReplaceCmd = &cli.Command{
msg := found.Message

if cctx.Bool("auto") {
minRBF := messagepool.ComputeMinRBF(msg.GasPremium)

var mss *lapi.MessageSendSpec
if cctx.IsSet("max-fee") {
maxFee, err := types.BigFromString(cctx.String("max-fee"))
if err != nil {
return fmt.Errorf("parsing max-spend: %w", err)
}
mss = &lapi.MessageSendSpec{
MaxFee: maxFee,
}
}

// msg.GasLimit = 0 // TODO: need to fix the way we estimate gas limits to account for the messages already being in the mempool
msg.GasFeeCap = abi.NewTokenAmount(0)
msg.GasPremium = abi.NewTokenAmount(0)
retm, err := api.GasEstimateMessageGas(ctx, &msg, &lapi.MessageSendSpec{}, types.EmptyTSK)
retm, err := api.GasEstimateMessageGas(ctx, &msg, mss, types.EmptyTSK)
if err != nil {
return fmt.Errorf("failed to estimate gas values: %w", err)
}
msg.GasFeeCap = retm.GasFeeCap

minRBF := messagepool.ComputeMinRBF(msg.GasPremium)

msg.GasPremium = big.Max(retm.GasPremium, minRBF)
msg.GasFeeCap = big.Max(retm.GasFeeCap, msg.GasPremium)
messagepool.CapGasFee(&msg, mss.Get().MaxFee)
} else {
msg.GasLimit = cctx.Int64("gas-limit")
msg.GasPremium, err = types.BigFromString(cctx.String("gas-premium"))
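The `--auto` path above bumps the premium to at least ComputeMinRBF and then caps the total fee with CapGasFee. A standalone sketch of a replace-by-fee bump, assuming a 25% ratio plus one attoFIL as an illustrative stand-in for the pool's actual replace-by-fee parameters:

package main

import (
    "fmt"

    "github.com/filecoin-project/go-state-types/big"
)

// minRBF computes the smallest premium that beats the current one by
// bumpPercent, plus one attoFIL so equal bids never tie.
func minRBF(curPremium big.Int, bumpPercent int64) big.Int {
    bump := big.Div(big.Mul(curPremium, big.NewInt(bumpPercent)), big.NewInt(100))
    return big.Add(big.Add(curPremium, bump), big.NewInt(1))
}

func main() {
    fmt.Println(minRBF(big.NewInt(100_000), 25)) // 125001
}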
@ -516,3 +536,86 @@ var mpoolConfig = &cli.Command{
return nil
},
}

var mpoolGasPerfCmd = &cli.Command{
Name: "gas-perf",
Usage: "Check gas performance of messages in mempool",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "all",
Usage: "print gas performance for all mempool messages (default only prints for local)",
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer closer()

ctx := ReqContext(cctx)

msgs, err := api.MpoolPending(ctx, types.EmptyTSK)
if err != nil {
return err
}

var filter map[address.Address]struct{}
if !cctx.Bool("all") {
filter = map[address.Address]struct{}{}

addrss, err := api.WalletList(ctx)
if err != nil {
return xerrors.Errorf("getting local addresses: %w", err)
}

for _, a := range addrss {
filter[a] = struct{}{}
}

var filtered []*types.SignedMessage
for _, msg := range msgs {
if _, has := filter[msg.Message.From]; !has {
continue
}
filtered = append(filtered, msg)
}
msgs = filtered
}

ts, err := api.ChainHead(ctx)
if err != nil {
return xerrors.Errorf("failed to get chain head: %w", err)
}

baseFee := ts.Blocks()[0].ParentBaseFee

bigBlockGasLimit := big.NewInt(build.BlockGasLimit)

getGasReward := func(msg *types.SignedMessage) big.Int {
maxPremium := types.BigSub(msg.Message.GasFeeCap, baseFee)
if types.BigCmp(maxPremium, msg.Message.GasPremium) < 0 {
maxPremium = msg.Message.GasPremium
}
return types.BigMul(maxPremium, types.NewInt(uint64(msg.Message.GasLimit)))
}

getGasPerf := func(gasReward big.Int, gasLimit int64) float64 {
// gasPerf = gasReward * build.BlockGasLimit / gasLimit
a := new(stdbig.Rat).SetInt(new(stdbig.Int).Mul(gasReward.Int, bigBlockGasLimit.Int))
b := stdbig.NewRat(1, gasLimit)
c := new(stdbig.Rat).Mul(a, b)
r, _ := c.Float64()
return r
}

for _, m := range msgs {
gasReward := getGasReward(m)
gasPerf := getGasPerf(gasReward, m.Message.GasLimit)

fmt.Printf("%s\t%d\t%s\t%f\n", m.Message.From, m.Message.Nonce, gasReward, gasPerf)
}

return nil
},
}
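A standalone sketch of the gasPerf calculation used above (the reward scaled to a full block so messages with different gas limits are comparable); the 10e9 block gas limit is an assumed stand-in for build.BlockGasLimit:

package main

import (
    "fmt"
    "math/big"
)

// gasPerf computes gasReward * blockGasLimit / gasLimit with exact rational
// arithmetic before converting to float64, mirroring the closure above.
func gasPerf(gasReward *big.Int, gasLimit, blockGasLimit int64) float64 {
    a := new(big.Rat).SetInt(new(big.Int).Mul(gasReward, big.NewInt(blockGasLimit)))
    b := big.NewRat(1, gasLimit)
    r, _ := new(big.Rat).Mul(a, b).Float64()
    return r
}

func main() {
    // A reward of 2 attoFIL per gas unit, scaled to a full block: 2 * 10e9 = 2e10.
    fmt.Println(gasPerf(big.NewInt(2_000_000), 1_000_000, 10_000_000_000)) // 2e+10
}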
300
cli/multisig.go
300
cli/multisig.go
@ -37,9 +37,13 @@ var multisigCmd = &cli.Command{
|
|||||||
msigInspectCmd,
|
msigInspectCmd,
|
||||||
msigProposeCmd,
|
msigProposeCmd,
|
||||||
msigApproveCmd,
|
msigApproveCmd,
|
||||||
|
msigAddProposeCmd,
|
||||||
|
msigAddApproveCmd,
|
||||||
|
msigAddCancelCmd,
|
||||||
msigSwapProposeCmd,
|
msigSwapProposeCmd,
|
||||||
msigSwapApproveCmd,
|
msigSwapApproveCmd,
|
||||||
msigSwapCancelCmd,
|
msigSwapCancelCmd,
|
||||||
|
msigVestedCmd,
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -506,6 +510,236 @@ var msigApproveCmd = &cli.Command{
|
|||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
var msigAddProposeCmd = &cli.Command{
|
||||||
|
Name: "add-propose",
|
||||||
|
Usage: "Propose to add a signer",
|
||||||
|
ArgsUsage: "[multisigAddress signer]",
|
||||||
|
Flags: []cli.Flag{
|
||||||
|
&cli.BoolFlag{
|
||||||
|
Name: "increase-threshold",
|
||||||
|
Usage: "whether the number of required signers should be increased",
|
||||||
|
},
|
||||||
|
&cli.StringFlag{
|
||||||
|
Name: "from",
|
||||||
|
Usage: "account to send the propose message from",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
Action: func(cctx *cli.Context) error {
|
||||||
|
if cctx.Args().Len() != 2 {
|
||||||
|
return ShowHelp(cctx, fmt.Errorf("must pass multisig address and signer address"))
|
||||||
|
}
|
||||||
|
|
||||||
|
api, closer, err := GetFullNodeAPI(cctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer closer()
|
||||||
|
ctx := ReqContext(cctx)
|
||||||
|
|
||||||
|
msig, err := address.NewFromString(cctx.Args().Get(0))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
addr, err := address.NewFromString(cctx.Args().Get(1))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
var from address.Address
|
||||||
|
if cctx.IsSet("from") {
|
||||||
|
f, err := address.NewFromString(cctx.String("from"))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
from = f
|
||||||
|
} else {
|
||||||
|
defaddr, err := api.WalletDefaultAddress(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
from = defaddr
|
||||||
|
}
|
||||||
|
|
||||||
|
msgCid, err := api.MsigAddPropose(ctx, msig, from, addr, cctx.Bool("increase-threshold"))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Println("sent add proposal in message: ", msgCid)
|
||||||
|
|
||||||
|
wait, err := api.StateWaitMsg(ctx, msgCid, build.MessageConfidence)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if wait.Receipt.ExitCode != 0 {
|
||||||
|
return fmt.Errorf("add proposal returned exit %d", wait.Receipt.ExitCode)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
var msigAddApproveCmd = &cli.Command{
|
||||||
|
Name: "add-approve",
|
||||||
|
Usage: "Approve a message to add a signer",
|
||||||
|
ArgsUsage: "[multisigAddress proposerAddress txId newAddress increaseThreshold]",
|
||||||
|
Flags: []cli.Flag{
|
||||||
|
&cli.StringFlag{
|
||||||
|
Name: "from",
|
||||||
|
Usage: "account to send the approve message from",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
Action: func(cctx *cli.Context) error {
|
||||||
|
if cctx.Args().Len() != 5 {
|
||||||
|
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, proposer address, transaction id, new signer address, whether to increase threshold"))
|
||||||
|
}
|
||||||
|
|
||||||
|
api, closer, err := GetFullNodeAPI(cctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer closer()
|
||||||
|
ctx := ReqContext(cctx)
|
||||||
|
|
||||||
|
msig, err := address.NewFromString(cctx.Args().Get(0))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
prop, err := address.NewFromString(cctx.Args().Get(1))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
txid, err := strconv.ParseUint(cctx.Args().Get(2), 10, 64)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
newAdd, err := address.NewFromString(cctx.Args().Get(3))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
inc, err := strconv.ParseBool(cctx.Args().Get(4))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
var from address.Address
|
||||||
|
if cctx.IsSet("from") {
|
||||||
|
f, err := address.NewFromString(cctx.String("from"))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
from = f
|
||||||
|
} else {
|
||||||
|
defaddr, err := api.WalletDefaultAddress(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
from = defaddr
|
||||||
|
}
|
||||||
|
|
||||||
|
msgCid, err := api.MsigAddApprove(ctx, msig, from, txid, prop, newAdd, inc)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Println("sent add approval in message: ", msgCid)
|
||||||
|
|
||||||
|
wait, err := api.StateWaitMsg(ctx, msgCid, build.MessageConfidence)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if wait.Receipt.ExitCode != 0 {
|
||||||
|
return fmt.Errorf("add approval returned exit %d", wait.Receipt.ExitCode)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
var msigAddCancelCmd = &cli.Command{
|
||||||
|
Name: "add-cancel",
|
||||||
|
Usage: "Cancel a message to add a signer",
|
||||||
|
ArgsUsage: "[multisigAddress txId newAddress increaseThreshold]",
|
||||||
|
Flags: []cli.Flag{
|
||||||
|
&cli.StringFlag{
|
||||||
|
Name: "from",
|
||||||
|
Usage: "account to send the approve message from",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
Action: func(cctx *cli.Context) error {
|
||||||
|
if cctx.Args().Len() != 4 {
|
||||||
|
return ShowHelp(cctx, fmt.Errorf("must pass multisig address, transaction id, new signer address, whether to increase threshold"))
|
||||||
|
}
|
||||||
|
|
||||||
|
api, closer, err := GetFullNodeAPI(cctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer closer()
|
||||||
|
ctx := ReqContext(cctx)
|
||||||
|
|
||||||
|
msig, err := address.NewFromString(cctx.Args().Get(0))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
txid, err := strconv.ParseUint(cctx.Args().Get(1), 10, 64)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
newAdd, err := address.NewFromString(cctx.Args().Get(2))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
inc, err := strconv.ParseBool(cctx.Args().Get(3))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
var from address.Address
|
||||||
|
if cctx.IsSet("from") {
|
||||||
|
f, err := address.NewFromString(cctx.String("from"))
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
from = f
|
||||||
|
} else {
|
||||||
|
defaddr, err := api.WalletDefaultAddress(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
from = defaddr
|
||||||
|
}
|
||||||
|
|
||||||
|
msgCid, err := api.MsigAddCancel(ctx, msig, from, txid, newAdd, inc)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Println("sent add cancellation in message: ", msgCid)
|
||||||
|
|
||||||
|
wait, err := api.StateWaitMsg(ctx, msgCid, build.MessageConfidence)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if wait.Receipt.ExitCode != 0 {
|
||||||
|
return fmt.Errorf("add cancellation returned exit %d", wait.Receipt.ExitCode)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
var msigSwapProposeCmd = &cli.Command{
	Name:  "swap-propose",
	Usage: "Propose to swap signers",

@@ -722,7 +956,7 @@ var msigSwapCancelCmd = &cli.Command{
			return err
		}

-		fmt.Println("sent swap approval in message: ", msgCid)
+		fmt.Println("sent swap cancellation in message: ", msgCid)

		wait, err := api.StateWaitMsg(ctx, msgCid, build.MessageConfidence)
		if err != nil {
@@ -730,9 +964,71 @@ var msigSwapCancelCmd = &cli.Command{
		}

		if wait.Receipt.ExitCode != 0 {
-			return fmt.Errorf("swap approval returned exit %d", wait.Receipt.ExitCode)
+			return fmt.Errorf("swap cancellation returned exit %d", wait.Receipt.ExitCode)
		}

		return nil
	},
}
var msigVestedCmd = &cli.Command{
	Name:      "vested",
	Usage:     "Gets the amount vested in an msig between two epochs",
	ArgsUsage: "[multisigAddress]",
	Flags: []cli.Flag{
		&cli.Int64Flag{
			Name:  "start-epoch",
			Usage: "start epoch to measure vesting from",
			Value: 0,
		},
		&cli.Int64Flag{
			Name:  "end-epoch",
			Usage: "end epoch to stop measure vesting at",
			Value: -1,
		},
	},
	Action: func(cctx *cli.Context) error {
		if cctx.Args().Len() != 1 {
			return ShowHelp(cctx, fmt.Errorf("must pass multisig address"))
		}

		api, closer, err := GetFullNodeAPI(cctx)
		if err != nil {
			return err
		}
		defer closer()
		ctx := ReqContext(cctx)

		msig, err := address.NewFromString(cctx.Args().Get(0))
		if err != nil {
			return err
		}

		start, err := api.ChainGetTipSetByHeight(ctx, abi.ChainEpoch(cctx.Int64("start-epoch")), types.EmptyTSK)
		if err != nil {
			return err
		}

		var end *types.TipSet
		if cctx.Int64("end-epoch") < 0 {
			end, err = LoadTipSet(ctx, cctx, api)
			if err != nil {
				return err
			}
		} else {
			end, err = api.ChainGetTipSetByHeight(ctx, abi.ChainEpoch(cctx.Int64("end-epoch")), types.EmptyTSK)
			if err != nil {
				return err
			}
		}

		ret, err := api.MsigGetVested(ctx, msig, start.Key(), end.Key())
		if err != nil {
			return err
		}

		fmt.Printf("Vested: %s between %d and %d\n", types.FIL(ret), start.Height(), end.Height())

		return nil
	},
}
@@ -28,6 +28,7 @@ var paychCmd = &cli.Command{
		paychListCmd,
		paychVoucherCmd,
		paychSettleCmd,
+		paychStatusCmd,
		paychCloseCmd,
	},
}
cli/state.go | 24

@@ -27,6 +27,7 @@ import (
	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
+	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/go-state-types/exitcode"
	"github.com/filecoin-project/specs-actors/actors/builtin"
	"github.com/filecoin-project/specs-actors/actors/builtin/exported"
@@ -122,7 +123,7 @@ var stateMinerInfo = &cli.Command{
	},
}

-func parseTipSetString(ts string) ([]cid.Cid, error) {
+func ParseTipSetString(ts string) ([]cid.Cid, error) {
	strs := strings.Split(ts, ",")

	var cids []cid.Cid
@@ -160,7 +161,7 @@ func ParseTipSetRef(ctx context.Context, api api.FullNode, tss string) (*types.T
		return api.ChainGetTipSetByHeight(ctx, abi.ChainEpoch(h), types.EmptyTSK)
	}

-	cids, err := parseTipSetString(tss)
+	cids, err := ParseTipSetString(tss)
	if err != nil {
		return nil, err
	}
@@ -1384,7 +1385,7 @@ var stateCallCmd = &cli.Command{
	}

	if ret.MsgRct.ExitCode != 0 {
-		return fmt.Errorf("invocation failed (exit: %d): %s", ret.MsgRct.ExitCode, ret.Error)
+		return fmt.Errorf("invocation failed (exit: %d, gasUsed: %d): %s", ret.MsgRct.ExitCode, ret.MsgRct.GasUsed, ret.Error)
	}

	s, err := formatOutput(cctx.String("ret"), ret.MsgRct.Return)
@@ -1392,6 +1393,7 @@ var stateCallCmd = &cli.Command{
		return fmt.Errorf("failed to format output: %s", err)
	}

+	fmt.Printf("gas used: %d\n", ret.MsgRct.GasUsed)
	fmt.Printf("return: %s\n", s)

	return nil
@@ -1465,11 +1467,11 @@ func parseParamsForMethod(act cid.Cid, method uint64, args []string) ([]byte, er
	f := methods[method]

	rf := reflect.TypeOf(f)
-	if rf.NumIn() != 3 {
+	if rf.NumIn() != 2 {
		return nil, fmt.Errorf("expected referenced method to have three arguments")
	}

-	paramObj := rf.In(2).Elem()
+	paramObj := rf.In(1).Elem()
	if paramObj.NumField() != len(args) {
		return nil, fmt.Errorf("not enough arguments given to call that method (expecting %d)", paramObj.NumField())
	}
@@ -1489,6 +1491,18 @@ func parseParamsForMethod(act cid.Cid, method uint64, args []string) ([]byte, er
			return nil, err
		}
		p.Elem().Field(i).Set(reflect.ValueOf(val))
+	case reflect.TypeOf(abi.ChainEpoch(0)):
+		val, err := strconv.ParseInt(args[i], 10, 64)
+		if err != nil {
+			return nil, err
+		}
+		p.Elem().Field(i).Set(reflect.ValueOf(abi.ChainEpoch(val)))
+	case reflect.TypeOf(big.Int{}):
+		val, err := big.FromString(args[i])
+		if err != nil {
+			return nil, err
+		}
+		p.Elem().Field(i).Set(reflect.ValueOf(val))
	case reflect.TypeOf(peer.ID("")):
		pid, err := peer.Decode(args[i])
		if err != nil {
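The new `abi.ChainEpoch` and `big.Int` cases above extend the reflection-driven argument parsing in `parseParamsForMethod`. As a standalone illustration of that pattern (hypothetical `Epoch` and `params` types, not Lotus code), a minimal sketch looks like this:

```go
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

type Epoch int64 // stands in for abi.ChainEpoch in this sketch

type params struct {
	Start Epoch
}

func main() {
	args := []string{"10101"}

	// Build a pointer to the params struct so its fields are settable.
	p := reflect.New(reflect.TypeOf(params{}))
	for i := 0; i < p.Elem().NumField(); i++ {
		field := p.Elem().Field(i)
		switch field.Type() {
		case reflect.TypeOf(Epoch(0)):
			// Parse the CLI string and wrap it in the named type before Set,
			// the same shape as the new abi.ChainEpoch case above.
			v, err := strconv.ParseInt(args[i], 10, 64)
			if err != nil {
				panic(err)
			}
			field.Set(reflect.ValueOf(Epoch(v)))
		}
	}

	fmt.Printf("%+v\n", p.Elem().Interface()) // {Start:10101}
}
```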
cli/sync.go | 71

@@ -5,6 +5,8 @@ import (
	"fmt"
	"time"

+	"github.com/filecoin-project/lotus/chain/types"
+
	"github.com/filecoin-project/go-state-types/abi"
	cid "github.com/ipfs/go-cid"
	"github.com/urfave/cli/v2"
@@ -20,7 +22,9 @@ var syncCmd = &cli.Command{
		syncStatusCmd,
		syncWaitCmd,
		syncMarkBadCmd,
+		syncUnmarkBadCmd,
		syncCheckBadCmd,
+		syncCheckpointCmd,
	},
}

@@ -117,6 +121,31 @@ var syncMarkBadCmd = &cli.Command{
	},
}

+var syncUnmarkBadCmd = &cli.Command{
+	Name:      "unmark-bad",
+	Usage:     "Unmark the given block as bad, makes it possible to sync to a chain containing it",
+	ArgsUsage: "[blockCid]",
+	Action: func(cctx *cli.Context) error {
+		napi, closer, err := GetFullNodeAPI(cctx)
+		if err != nil {
+			return err
+		}
+		defer closer()
+		ctx := ReqContext(cctx)
+
+		if !cctx.Args().Present() {
+			return fmt.Errorf("must specify block cid to unmark")
+		}
+
+		bcid, err := cid.Decode(cctx.Args().First())
+		if err != nil {
+			return fmt.Errorf("failed to decode input as a cid: %s", err)
+		}
+
+		return napi.SyncUnmarkBad(ctx, bcid)
+	},
+}
+
var syncCheckBadCmd = &cli.Command{
	Name:  "check-bad",
	Usage: "check if the given block was marked bad, and for what reason",
@@ -153,6 +182,48 @@ var syncCheckBadCmd = &cli.Command{
	},
}

+var syncCheckpointCmd = &cli.Command{
+	Name:      "checkpoint",
+	Usage:     "mark a certain tipset as checkpointed; the node will never fork away from this tipset",
+	ArgsUsage: "[tipsetKey]",
+	Flags: []cli.Flag{
+		&cli.Uint64Flag{
+			Name:  "epoch",
+			Usage: "checkpoint the tipset at the given epoch",
+		},
+	},
+	Action: func(cctx *cli.Context) error {
+		napi, closer, err := GetFullNodeAPI(cctx)
+		if err != nil {
+			return err
+		}
+		defer closer()
+		ctx := ReqContext(cctx)
+
+		var ts *types.TipSet
+
+		if cctx.IsSet("epoch") {
+			ts, err = napi.ChainGetTipSetByHeight(ctx, abi.ChainEpoch(cctx.Uint64("epoch")), types.EmptyTSK)
+		}
+		if ts == nil {
+			ts, err = parseTipSet(ctx, napi, cctx.Args().Slice())
+		}
+		if err != nil {
+			return err
+		}
+
+		if ts == nil {
+			return fmt.Errorf("must pass cids for tipset to set as head, or specify epoch flag")
+		}
+
+		if err := napi.SyncCheckpoint(ctx, ts.Key()); err != nil {
+			return err
+		}
+
+		return nil
+	},
+}
+
func SyncWait(ctx context.Context, napi api.FullNode) error {
	for {
		state, err := napi.SyncState(ctx)
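For completeness, here is a hedged sketch of driving the same `SyncCheckpoint` API programmatically rather than through the CLI. It assumes an already-connected `api.FullNode` client and uses only calls that appear elsewhere in this diff (`types.NewTipSetKey`, `SyncCheckpoint`); the `checkpointTipset` helper name is hypothetical.

```go
package example

import (
	"context"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/ipfs/go-cid"
	"golang.org/x/xerrors"
)

// checkpointTipset mirrors what `lotus sync checkpoint` does under the hood:
// build a TipSetKey from the block CIDs and tell the node never to fork away
// from that tipset. napi is assumed to be a connected full-node client.
func checkpointTipset(ctx context.Context, napi api.FullNode, blockCids []cid.Cid) error {
	tsk := types.NewTipSetKey(blockCids...)
	if err := napi.SyncCheckpoint(ctx, tsk); err != nil {
		return xerrors.Errorf("checkpointing %s: %w", tsk, err)
	}
	return nil
}
```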
cli/util.go | 28 (new file)

@@ -0,0 +1,28 @@
package cli

import (
	"context"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/ipfs/go-cid"
)

func parseTipSet(ctx context.Context, api api.FullNode, vals []string) (*types.TipSet, error) {
	var headers []*types.BlockHeader
	for _, c := range vals {
		blkc, err := cid.Decode(c)
		if err != nil {
			return nil, err
		}

		bh, err := api.ChainGetBlock(ctx, blkc)
		if err != nil {
			return nil, err
		}

		headers = append(headers, bh)
	}

	return types.NewTipSet(headers)
}
@@ -694,7 +694,7 @@ func (p *Processor) getMinerSectorChanges(ctx context.Context, m minerActorInfo)
}

func (p *Processor) diffMinerPartitions(ctx context.Context, m minerActorInfo, events chan<- *MinerSectorsEvent) error {
-	prevMiner, err := p.getMinerStateAt(ctx, m.common.addr, m.common.tsKey)
+	prevMiner, err := p.getMinerStateAt(ctx, m.common.addr, m.common.parentTsKey)
	if err != nil {
		return err
	}
@@ -45,6 +45,8 @@ const FlagWorkerRepo = "worker-repo"
const FlagWorkerRepoDeprecation = "workerrepo"

func main() {
+	build.RunningNodeType = build.NodeWorker
+
	lotuslog.SetupLogLevels()

	local := []*cli.Command{
@@ -187,8 +189,8 @@ var runCmd = &cli.Command{
		if err != nil {
			return err
		}
-		if v.APIVersion != build.APIVersion {
-			return xerrors.Errorf("lotus-miner API version doesn't match: local: %s", api.Version{APIVersion: build.APIVersion})
+		if v.APIVersion != build.MinerAPIVersion {
+			return xerrors.Errorf("lotus-miner API version doesn't match: expected: %s", api.Version{APIVersion: build.MinerAPIVersion})
		}
		log.Infof("Remote version %s", v)

@@ -21,7 +21,7 @@ type worker struct {
}

func (w *worker) Version(context.Context) (build.Version, error) {
-	return build.APIVersion, nil
+	return build.WorkerAPIVersion, nil
}

func (w *worker) StorageAddLocal(ctx context.Context, path string) error {
cmd/lotus-shed/export.go | 123 (new file)

@@ -0,0 +1,123 @@
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/urfave/cli/v2"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/types"
	lcli "github.com/filecoin-project/lotus/cli"
	"github.com/filecoin-project/lotus/lib/blockstore"
	"github.com/filecoin-project/lotus/node/repo"
)

var exportChainCmd = &cli.Command{
	Name:        "export",
	Description: "Export chain from repo (requires node to be offline)",
	Flags: []cli.Flag{
		&cli.StringFlag{
			Name:  "repo",
			Value: "~/.lotus",
		},
		&cli.StringFlag{
			Name:  "tipset",
			Usage: "tipset to export from",
		},
		&cli.Int64Flag{
			Name: "recent-stateroots",
		},
		&cli.BoolFlag{
			Name: "full-state",
		},
		&cli.BoolFlag{
			Name: "skip-old-msgs",
		},
	},
	Action: func(cctx *cli.Context) error {
		if !cctx.Args().Present() {
			return lcli.ShowHelp(cctx, fmt.Errorf("must specify file name to write export to"))
		}

		ctx := context.TODO()

		r, err := repo.NewFS(cctx.String("repo"))
		if err != nil {
			return xerrors.Errorf("opening fs repo: %w", err)
		}

		exists, err := r.Exists()
		if err != nil {
			return err
		}
		if !exists {
			return xerrors.Errorf("lotus repo doesn't exist")
		}

		lr, err := r.Lock(repo.FullNode)
		if err != nil {
			return err
		}
		defer lr.Close() //nolint:errcheck

		fi, err := os.Create(cctx.Args().First())
		if err != nil {
			return xerrors.Errorf("opening the output file: %w", err)
		}

		defer fi.Close() //nolint:errcheck

		ds, err := lr.Datastore("/chain")
		if err != nil {
			return err
		}

		mds, err := lr.Datastore("/metadata")
		if err != nil {
			return err
		}

		bs := blockstore.NewBlockstore(ds)

		cs := store.NewChainStore(bs, mds, nil)
		if err := cs.Load(); err != nil {
			return err
		}

		nroots := abi.ChainEpoch(cctx.Int64("recent-stateroots"))
		fullstate := cctx.Bool("full-state")
		skipoldmsgs := cctx.Bool("skip-old-msgs")

		var ts *types.TipSet
		if tss := cctx.String("tipset"); tss != "" {
			cids, err := lcli.ParseTipSetString(tss)
			if err != nil {
				return xerrors.Errorf("failed to parse tipset (%q): %w", tss, err)
			}

			tsk := types.NewTipSetKey(cids...)

			selts, err := cs.LoadTipSet(tsk)
			if err != nil {
				return xerrors.Errorf("loading tipset: %w", err)
			}
			ts = selts
		} else {
			ts = cs.GetHeaviestTipSet()
		}

		if fullstate {
			nroots = ts.Height() + 1
		}

		if err := cs.Export(ctx, ts, nroots, skipoldmsgs, fi); err != nil {
			return xerrors.Errorf("export failed: %w", err)
		}

		return nil
	},
}
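`lotus-shed export` works against a locked, offline repo. The online counterpart is the `ChainExport` API described later in this diff; a hedged sketch of using it (assuming it returns a channel of CAR byte chunks, and that `types.EmptyTSK` resolves to the current head as in other Chain APIs; the `exportHead` helper name is hypothetical) could look like this:

```go
package example

import (
	"context"
	"os"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/types"
)

// exportHead streams a CAR snapshot from a running node instead of reading
// the repo directly; napi is assumed to be a connected full-node client.
func exportHead(ctx context.Context, napi api.FullNode, path string, nroots abi.ChainEpoch, skipOldMsgs bool) error {
	fi, err := os.Create(path)
	if err != nil {
		return err
	}
	defer fi.Close() //nolint:errcheck

	// Assumption: ChainExport yields the CAR file as a stream of byte chunks.
	stream, err := napi.ChainExport(ctx, nroots, skipOldMsgs, types.EmptyTSK)
	if err != nil {
		return err
	}
	for chunk := range stream {
		if _, err := fi.Write(chunk); err != nil {
			return err
		}
	}
	return nil
}
```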
@@ -33,6 +33,8 @@ func main() {
		mpoolCmd,
		genesisVerifyCmd,
		mathCmd,
+		mpoolStatsCmd,
+		exportChainCmd,
	}

	app := &cli.App{
273
cmd/lotus-shed/mempool-stats.go
Normal file
273
cmd/lotus-shed/mempool-stats.go
Normal file
@ -0,0 +1,273 @@
|
|||||||
|
package main
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"net/http"
|
||||||
|
"sort"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"contrib.go.opencensus.io/exporter/prometheus"
|
||||||
|
"github.com/ipfs/go-cid"
|
||||||
|
logging "github.com/ipfs/go-log"
|
||||||
|
"github.com/urfave/cli/v2"
|
||||||
|
"go.opencensus.io/stats"
|
||||||
|
"go.opencensus.io/stats/view"
|
||||||
|
"go.opencensus.io/tag"
|
||||||
|
|
||||||
|
"github.com/filecoin-project/go-address"
|
||||||
|
lapi "github.com/filecoin-project/lotus/api"
|
||||||
|
"github.com/filecoin-project/lotus/chain/types"
|
||||||
|
lcli "github.com/filecoin-project/lotus/cli"
|
||||||
|
"github.com/filecoin-project/specs-actors/actors/builtin"
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
MpoolAge = stats.Float64("mpoolage", "Age of messages in the mempool", stats.UnitSeconds)
|
||||||
|
MpoolSize = stats.Int64("mpoolsize", "Number of messages in mempool", stats.UnitDimensionless)
|
||||||
|
MpoolInboundRate = stats.Int64("inbound", "Counter for inbound messages", stats.UnitDimensionless)
|
||||||
|
BlockInclusionRate = stats.Int64("inclusion", "Counter for message included in blocks", stats.UnitDimensionless)
|
||||||
|
MsgWaitTime = stats.Float64("msg-wait-time", "Wait time of messages to make it into a block", stats.UnitSeconds)
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
LeTag, _ = tag.NewKey("quantile")
|
||||||
|
MTTag, _ = tag.NewKey("msg_type")
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
AgeView = &view.View{
|
||||||
|
Name: "mpool-age",
|
||||||
|
Measure: MpoolAge,
|
||||||
|
TagKeys: []tag.Key{LeTag, MTTag},
|
||||||
|
Aggregation: view.LastValue(),
|
||||||
|
}
|
||||||
|
SizeView = &view.View{
|
||||||
|
Name: "mpool-size",
|
||||||
|
Measure: MpoolSize,
|
||||||
|
TagKeys: []tag.Key{MTTag},
|
||||||
|
Aggregation: view.LastValue(),
|
||||||
|
}
|
||||||
|
InboundRate = &view.View{
|
||||||
|
Name: "msg-inbound",
|
||||||
|
Measure: MpoolInboundRate,
|
||||||
|
TagKeys: []tag.Key{MTTag},
|
||||||
|
Aggregation: view.Count(),
|
||||||
|
}
|
||||||
|
InclusionRate = &view.View{
|
||||||
|
Name: "msg-inclusion",
|
||||||
|
Measure: BlockInclusionRate,
|
||||||
|
TagKeys: []tag.Key{MTTag},
|
||||||
|
Aggregation: view.Count(),
|
||||||
|
}
|
||||||
|
MsgWait = &view.View{
|
||||||
|
Name: "msg-wait",
|
||||||
|
Measure: MsgWaitTime,
|
||||||
|
TagKeys: []tag.Key{MTTag},
|
||||||
|
Aggregation: view.Distribution(10, 30, 60, 120, 240, 600, 1800, 3600),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
type msgInfo struct {
|
||||||
|
msg *types.SignedMessage
|
||||||
|
seen time.Time
|
||||||
|
}
|
||||||
|
|
||||||
|
var mpoolStatsCmd = &cli.Command{
|
||||||
|
Name: "mpool-stats",
|
||||||
|
Action: func(cctx *cli.Context) error {
|
||||||
|
logging.SetLogLevel("rpc", "ERROR")
|
||||||
|
|
||||||
|
if err := view.Register(AgeView, SizeView, InboundRate, InclusionRate, MsgWait); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
expo, err := prometheus.NewExporter(prometheus.Options{
|
||||||
|
Namespace: "lotusmpool",
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
http.Handle("/debug/metrics", expo)
|
||||||
|
|
||||||
|
go func() {
|
||||||
|
if err := http.ListenAndServe(":10555", nil); err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
}()
|
||||||
|
|
||||||
|
api, closer, err := lcli.GetFullNodeAPI(cctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
defer closer()
|
||||||
|
ctx := lcli.ReqContext(cctx)
|
||||||
|
|
||||||
|
updates, err := api.MpoolSub(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
mcache := make(map[address.Address]bool)
|
||||||
|
isMiner := func(addr address.Address) (bool, error) {
|
||||||
|
cache, ok := mcache[addr]
|
||||||
|
if ok {
|
||||||
|
return cache, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
act, err := api.StateGetActor(ctx, addr, types.EmptyTSK)
|
||||||
|
if err != nil {
|
||||||
|
return false, err
|
||||||
|
}
|
||||||
|
|
||||||
|
ism := act.Code == builtin.StorageMinerActorCodeID
|
||||||
|
mcache[addr] = ism
|
||||||
|
return ism, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
wpostTracker := make(map[cid.Cid]*msgInfo)
|
||||||
|
tracker := make(map[cid.Cid]*msgInfo)
|
||||||
|
tick := time.Tick(time.Second)
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case u, ok := <-updates:
|
||||||
|
if !ok {
|
||||||
|
return fmt.Errorf("connection with lotus node broke")
|
||||||
|
}
|
||||||
|
switch u.Type {
|
||||||
|
case lapi.MpoolAdd:
|
||||||
|
stats.Record(ctx, MpoolInboundRate.M(1))
|
||||||
|
tracker[u.Message.Cid()] = &msgInfo{
|
||||||
|
msg: u.Message,
|
||||||
|
seen: time.Now(),
|
||||||
|
}
|
||||||
|
|
||||||
|
if u.Message.Message.Method == builtin.MethodsMiner.SubmitWindowedPoSt {
|
||||||
|
|
||||||
|
miner, err := isMiner(u.Message.Message.To)
|
||||||
|
if err != nil {
|
||||||
|
log.Warnf("failed to determine if message target was to a miner: %s", err)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if miner {
|
||||||
|
wpostTracker[u.Message.Cid()] = &msgInfo{
|
||||||
|
msg: u.Message,
|
||||||
|
seen: time.Now(),
|
||||||
|
}
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(MTTag, "wpost")}, MpoolInboundRate.M(1))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
case lapi.MpoolRemove:
|
||||||
|
mi, ok := tracker[u.Message.Cid()]
|
||||||
|
if ok {
|
||||||
|
fmt.Printf("%s was in the mempool for %s (feecap=%s, prem=%s)\n", u.Message.Cid(), time.Since(mi.seen), u.Message.Message.GasFeeCap, u.Message.Message.GasPremium)
|
||||||
|
stats.Record(ctx, BlockInclusionRate.M(1))
|
||||||
|
stats.Record(ctx, MsgWaitTime.M(time.Since(mi.seen).Seconds()))
|
||||||
|
delete(tracker, u.Message.Cid())
|
||||||
|
}
|
||||||
|
|
||||||
|
wm, ok := wpostTracker[u.Message.Cid()]
|
||||||
|
if ok {
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(MTTag, "wpost")}, BlockInclusionRate.M(1))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(MTTag, "wpost")}, MsgWaitTime.M(time.Since(wm.seen).Seconds()))
|
||||||
|
delete(wpostTracker, u.Message.Cid())
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
return fmt.Errorf("unrecognized mpool update state: %d", u.Type)
|
||||||
|
}
|
||||||
|
case <-tick:
|
||||||
|
var ages []time.Duration
|
||||||
|
if len(tracker) > 0 {
|
||||||
|
for _, v := range tracker {
|
||||||
|
age := time.Since(v.seen)
|
||||||
|
ages = append(ages, age)
|
||||||
|
}
|
||||||
|
|
||||||
|
st := ageStats(ages)
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "40")}, MpoolAge.M(st.Perc40.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "50")}, MpoolAge.M(st.Perc50.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "60")}, MpoolAge.M(st.Perc60.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "70")}, MpoolAge.M(st.Perc70.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "80")}, MpoolAge.M(st.Perc80.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "90")}, MpoolAge.M(st.Perc90.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "95")}, MpoolAge.M(st.Perc95.Seconds()))
|
||||||
|
|
||||||
|
stats.Record(ctx, MpoolSize.M(int64(len(tracker))))
|
||||||
|
fmt.Printf("%d messages in mempool for average of %s, (%s / %s / %s)\n", st.Count, st.Average, st.Perc50, st.Perc80, st.Perc95)
|
||||||
|
}
|
||||||
|
|
||||||
|
var wpages []time.Duration
|
||||||
|
if len(wpostTracker) > 0 {
|
||||||
|
for _, v := range wpostTracker {
|
||||||
|
age := time.Since(v.seen)
|
||||||
|
wpages = append(wpages, age)
|
||||||
|
}
|
||||||
|
|
||||||
|
st := ageStats(wpages)
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "40"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc40.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "50"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc50.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "60"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc60.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "70"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc70.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "80"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc80.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "90"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc90.Seconds()))
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(LeTag, "95"), tag.Upsert(MTTag, "wpost")}, MpoolAge.M(st.Perc95.Seconds()))
|
||||||
|
|
||||||
|
_ = stats.RecordWithTags(ctx, []tag.Mutator{tag.Upsert(MTTag, "wpost")}, MpoolSize.M(int64(len(wpostTracker))))
|
||||||
|
fmt.Printf("%d wpost messages in mempool for average of %s, (%s / %s / %s)\n", st.Count, st.Average, st.Perc50, st.Perc80, st.Perc95)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
type ageStat struct {
|
||||||
|
Average time.Duration
|
||||||
|
Max time.Duration
|
||||||
|
Perc40 time.Duration
|
||||||
|
Perc50 time.Duration
|
||||||
|
Perc60 time.Duration
|
||||||
|
Perc70 time.Duration
|
||||||
|
Perc80 time.Duration
|
||||||
|
Perc90 time.Duration
|
||||||
|
Perc95 time.Duration
|
||||||
|
Count int
|
||||||
|
}
|
||||||
|
|
||||||
|
func ageStats(ages []time.Duration) *ageStat {
|
||||||
|
sort.Slice(ages, func(i, j int) bool {
|
||||||
|
return ages[i] < ages[j]
|
||||||
|
})
|
||||||
|
|
||||||
|
st := ageStat{
|
||||||
|
Count: len(ages),
|
||||||
|
}
|
||||||
|
var sum time.Duration
|
||||||
|
for _, a := range ages {
|
||||||
|
sum += a
|
||||||
|
if a > st.Max {
|
||||||
|
st.Max = a
|
||||||
|
}
|
||||||
|
}
|
||||||
|
st.Average = sum / time.Duration(len(ages))
|
||||||
|
|
||||||
|
p40 := (4 * len(ages)) / 10
|
||||||
|
p50 := len(ages) / 2
|
||||||
|
p60 := (6 * len(ages)) / 10
|
||||||
|
p70 := (7 * len(ages)) / 10
|
||||||
|
p80 := (4 * len(ages)) / 5
|
||||||
|
p90 := (9 * len(ages)) / 10
|
||||||
|
p95 := (19 * len(ages)) / 20
|
||||||
|
|
||||||
|
st.Perc40 = ages[p40]
|
||||||
|
st.Perc50 = ages[p50]
|
||||||
|
st.Perc60 = ages[p60]
|
||||||
|
st.Perc70 = ages[p70]
|
||||||
|
st.Perc80 = ages[p80]
|
||||||
|
st.Perc90 = ages[p90]
|
||||||
|
st.Perc95 = ages[p95]
|
||||||
|
|
||||||
|
return &st
|
||||||
|
}
|
@@ -179,8 +179,8 @@ var initCmd = &cli.Command{
			return err
		}

-		if !v.APIVersion.EqMajorMinor(build.APIVersion) {
-			return xerrors.Errorf("Remote API version didn't match (local %s, remote %s)", build.APIVersion, v.APIVersion)
+		if !v.APIVersion.EqMajorMinor(build.FullAPIVersion) {
+			return xerrors.Errorf("Remote API version didn't match (expected %s, remote %s)", build.FullAPIVersion, v.APIVersion)
		}

		log.Info("Initializing repo")
@@ -26,6 +26,8 @@ const FlagMinerRepo = "miner-repo"
const FlagMinerRepoDeprecation = "storagerepo"

func main() {
+	build.RunningNodeType = build.NodeMiner
+
	lotuslog.SetupLogLevels()

	local := []*cli.Command{
@@ -77,8 +77,8 @@ var runCmd = &cli.Command{
			}
		}

-		if v.APIVersion != build.APIVersion {
-			return xerrors.Errorf("lotus-daemon API version doesn't match: local: %s", api.Version{APIVersion: build.APIVersion})
+		if v.APIVersion != build.FullAPIVersion {
+			return xerrors.Errorf("lotus-daemon API version doesn't match: expected: %s", api.Version{APIVersion: build.FullAPIVersion})
		}

		log.Info("Checking full node sync status")
@@ -46,6 +46,7 @@ func init() {
			return xerrors.Errorf("StateMinerWorker: %w", err)
		}

+		// XXX: This can't be right
		rand, err := api.ChainGetRandomnessFromTickets(ctx, head.Key(), crypto.DomainSeparationTag_TicketProduction, head.Height(), addr.Bytes())
		if err != nil {
			return xerrors.Errorf("failed to get randomness: %w", err)
@@ -16,6 +16,8 @@ import (
var AdvanceBlockCmd *cli.Command

func main() {
+	build.RunningNodeType = build.NodeFull
+
	lotuslog.SetupLogLevels()

	local := []*cli.Command{
|
@ -6,7 +6,6 @@ import (
|
|||||||
"github.com/filecoin-project/go-state-types/exitcode"
|
"github.com/filecoin-project/go-state-types/exitcode"
|
||||||
"github.com/filecoin-project/specs-actors/actors/builtin"
|
"github.com/filecoin-project/specs-actors/actors/builtin"
|
||||||
"github.com/filecoin-project/specs-actors/actors/runtime"
|
"github.com/filecoin-project/specs-actors/actors/runtime"
|
||||||
"github.com/filecoin-project/specs-actors/actors/util/adt"
|
|
||||||
"github.com/ipfs/go-cid"
|
"github.com/ipfs/go-cid"
|
||||||
|
|
||||||
typegen "github.com/whyrusleeping/cbor-gen"
|
typegen "github.com/whyrusleeping/cbor-gen"
|
||||||
@ -62,6 +61,9 @@ const (
|
|||||||
// MethodMutateState is the identifier for the method that attempts to mutate
|
// MethodMutateState is the identifier for the method that attempts to mutate
|
||||||
// a state value in the actor.
|
// a state value in the actor.
|
||||||
MethodMutateState
|
MethodMutateState
|
||||||
|
// MethodAbortWith is the identifier for the method that panics optionally with
|
||||||
|
// a passed exit code.
|
||||||
|
MethodAbortWith
|
||||||
)
|
)
|
||||||
|
|
||||||
// Exports defines the methods this actor exposes publicly.
|
// Exports defines the methods this actor exposes publicly.
|
||||||
@ -74,6 +76,7 @@ func (a Actor) Exports() []interface{} {
|
|||||||
MethodDeleteActor: a.DeleteActor,
|
MethodDeleteActor: a.DeleteActor,
|
||||||
MethodSend: a.Send,
|
MethodSend: a.Send,
|
||||||
MethodMutateState: a.MutateState,
|
MethodMutateState: a.MutateState,
|
||||||
|
MethodAbortWith: a.AbortWith,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -116,7 +119,7 @@ func (a Actor) Send(rt runtime.Runtime, args *SendArgs) *SendReturn {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Constructor will panic because the Chaos actor is a singleton.
|
// Constructor will panic because the Chaos actor is a singleton.
|
||||||
func (a Actor) Constructor(_ runtime.Runtime, _ *adt.EmptyValue) *adt.EmptyValue {
|
func (a Actor) Constructor(_ runtime.Runtime, _ *abi.EmptyValue) *abi.EmptyValue {
|
||||||
panic("constructor should not be called; the Chaos actor is a singleton actor")
|
panic("constructor should not be called; the Chaos actor is a singleton actor")
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -127,7 +130,7 @@ func (a Actor) Constructor(_ runtime.Runtime, _ *adt.EmptyValue) *adt.EmptyValue
|
|||||||
// CallerValidationBranchAddrNilSet validates against an empty caller
|
// CallerValidationBranchAddrNilSet validates against an empty caller
|
||||||
// address set.
|
// address set.
|
||||||
// CallerValidationBranchTypeNilSet validates against an empty caller type set.
|
// CallerValidationBranchTypeNilSet validates against an empty caller type set.
|
||||||
func (a Actor) CallerValidation(rt runtime.Runtime, branch *typegen.CborInt) *adt.EmptyValue {
|
func (a Actor) CallerValidation(rt runtime.Runtime, branch *typegen.CborInt) *abi.EmptyValue {
|
||||||
switch CallerValidationBranch(*branch) {
|
switch CallerValidationBranch(*branch) {
|
||||||
case CallerValidationBranchNone:
|
case CallerValidationBranchNone:
|
||||||
case CallerValidationBranchTwice:
|
case CallerValidationBranchTwice:
|
||||||
@ -157,7 +160,7 @@ type CreateActorArgs struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// CreateActor creates an actor with the supplied CID and Address.
|
// CreateActor creates an actor with the supplied CID and Address.
|
||||||
func (a Actor) CreateActor(rt runtime.Runtime, args *CreateActorArgs) *adt.EmptyValue {
|
func (a Actor) CreateActor(rt runtime.Runtime, args *CreateActorArgs) *abi.EmptyValue {
|
||||||
rt.ValidateImmediateCallerAcceptAny()
|
rt.ValidateImmediateCallerAcceptAny()
|
||||||
|
|
||||||
var (
|
var (
|
||||||
@ -195,7 +198,7 @@ func (a Actor) ResolveAddress(rt runtime.Runtime, args *address.Address) *Resolv
|
|||||||
|
|
||||||
// DeleteActor deletes the executing actor from the state tree, transferring any
|
// DeleteActor deletes the executing actor from the state tree, transferring any
|
||||||
// balance to beneficiary.
|
// balance to beneficiary.
|
||||||
func (a Actor) DeleteActor(rt runtime.Runtime, beneficiary *address.Address) *adt.EmptyValue {
|
func (a Actor) DeleteActor(rt runtime.Runtime, beneficiary *address.Address) *abi.EmptyValue {
|
||||||
rt.ValidateImmediateCallerAcceptAny()
|
rt.ValidateImmediateCallerAcceptAny()
|
||||||
rt.DeleteActor(*beneficiary)
|
rt.DeleteActor(*beneficiary)
|
||||||
return nil
|
return nil
|
||||||
@ -209,7 +212,7 @@ type MutateStateArgs struct {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// MutateState attempts to mutate a state value in the actor.
|
// MutateState attempts to mutate a state value in the actor.
|
||||||
func (a Actor) MutateState(rt runtime.Runtime, args *MutateStateArgs) *adt.EmptyValue {
|
func (a Actor) MutateState(rt runtime.Runtime, args *MutateStateArgs) *abi.EmptyValue {
|
||||||
rt.ValidateImmediateCallerAcceptAny()
|
rt.ValidateImmediateCallerAcceptAny()
|
||||||
var st State
|
var st State
|
||||||
switch args.Branch {
|
switch args.Branch {
|
||||||
@ -230,3 +233,21 @@ func (a Actor) MutateState(rt runtime.Runtime, args *MutateStateArgs) *adt.Empty
|
|||||||
}
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// AbortWithArgs are the arguments to the Actor.AbortWith method, specifying the
|
||||||
|
// exit code to (optionally) abort with and the message.
|
||||||
|
type AbortWithArgs struct {
|
||||||
|
Code exitcode.ExitCode
|
||||||
|
Message string
|
||||||
|
Uncontrolled bool
|
||||||
|
}
|
||||||
|
|
||||||
|
// AbortWith simply causes a panic with the passed exit code.
|
||||||
|
func (a Actor) AbortWith(rt runtime.Runtime, args *AbortWithArgs) *abi.EmptyValue {
|
||||||
|
if args.Uncontrolled { // uncontrolled abort: directly panic
|
||||||
|
panic(args.Message)
|
||||||
|
} else {
|
||||||
|
rt.Abortf(args.Code, args.Message)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
153
conformance/chaos/actor_test.go
Normal file
153
conformance/chaos/actor_test.go
Normal file
@ -0,0 +1,153 @@
|
|||||||
|
package chaos
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/filecoin-project/go-state-types/abi"
|
||||||
|
"github.com/filecoin-project/go-state-types/exitcode"
|
||||||
|
"github.com/filecoin-project/specs-actors/support/mock"
|
||||||
|
atesting "github.com/filecoin-project/specs-actors/support/testing"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestSingleton(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
msg := "constructor should not be called; the Chaos actor is a singleton actor"
|
||||||
|
rt.ExpectAssertionFailure(msg, func() {
|
||||||
|
rt.Call(a.Constructor, abi.Empty)
|
||||||
|
})
|
||||||
|
rt.Verify()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDeleteActor(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
beneficiary := atesting.NewIDAddr(t, 101)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
rt.ExpectValidateCallerAny()
|
||||||
|
rt.ExpectDeleteActor(beneficiary)
|
||||||
|
rt.Call(a.DeleteActor, &beneficiary)
|
||||||
|
rt.Verify()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestMutateStateInTransaction(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
rt.ExpectValidateCallerAny()
|
||||||
|
rt.Create(&State{})
|
||||||
|
|
||||||
|
val := "__mutstat test"
|
||||||
|
rt.Call(a.MutateState, &MutateStateArgs{
|
||||||
|
Value: val,
|
||||||
|
Branch: MutateInTransaction,
|
||||||
|
})
|
||||||
|
|
||||||
|
var st State
|
||||||
|
rt.GetState(&st)
|
||||||
|
|
||||||
|
if st.Value != val {
|
||||||
|
t.Fatal("state was not updated")
|
||||||
|
}
|
||||||
|
|
||||||
|
rt.Verify()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestMutateStateAfterTransaction(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
rt.ExpectValidateCallerAny()
|
||||||
|
rt.Create(&State{})
|
||||||
|
|
||||||
|
val := "__mutstat test"
|
||||||
|
rt.Call(a.MutateState, &MutateStateArgs{
|
||||||
|
Value: val,
|
||||||
|
Branch: MutateAfterTransaction,
|
||||||
|
})
|
||||||
|
|
||||||
|
var st State
|
||||||
|
rt.GetState(&st)
|
||||||
|
|
||||||
|
// state should be updated successfully _in_ the transaction but not outside
|
||||||
|
if st.Value != val+"-in" {
|
||||||
|
t.Fatal("state was not updated")
|
||||||
|
}
|
||||||
|
|
||||||
|
rt.Verify()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestMutateStateReadonly(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
rt.ExpectValidateCallerAny()
|
||||||
|
rt.Create(&State{})
|
||||||
|
|
||||||
|
val := "__mutstat test"
|
||||||
|
rt.Call(a.MutateState, &MutateStateArgs{
|
||||||
|
Value: val,
|
||||||
|
Branch: MutateReadonly,
|
||||||
|
})
|
||||||
|
|
||||||
|
var st State
|
||||||
|
rt.GetState(&st)
|
||||||
|
|
||||||
|
if st.Value != "" {
|
||||||
|
t.Fatal("state was not expected to be updated")
|
||||||
|
}
|
||||||
|
|
||||||
|
rt.Verify()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestAbortWith(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
msg := "__test forbidden"
|
||||||
|
rt.ExpectAbortContainsMessage(exitcode.ErrForbidden, msg, func() {
|
||||||
|
rt.Call(a.AbortWith, &AbortWithArgs{
|
||||||
|
Code: exitcode.ErrForbidden,
|
||||||
|
Message: msg,
|
||||||
|
Uncontrolled: false,
|
||||||
|
})
|
||||||
|
})
|
||||||
|
rt.Verify()
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestAbortWithUncontrolled(t *testing.T) {
|
||||||
|
receiver := atesting.NewIDAddr(t, 100)
|
||||||
|
builder := mock.NewBuilder(context.Background(), receiver)
|
||||||
|
|
||||||
|
rt := builder.Build(t)
|
||||||
|
var a Actor
|
||||||
|
|
||||||
|
msg := "__test uncontrolled panic"
|
||||||
|
rt.ExpectAssertionFailure(msg, func() {
|
||||||
|
rt.Call(a.AbortWith, &AbortWithArgs{
|
||||||
|
Message: msg,
|
||||||
|
Uncontrolled: true,
|
||||||
|
})
|
||||||
|
})
|
||||||
|
rt.Verify()
|
||||||
|
}
|
@ -614,3 +614,119 @@ func (t *MutateStateArgs) UnmarshalCBOR(r io.Reader) error {
|
|||||||
}
|
}
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
var lengthBufAbortWithArgs = []byte{131}
|
||||||
|
|
||||||
|
func (t *AbortWithArgs) MarshalCBOR(w io.Writer) error {
|
||||||
|
if t == nil {
|
||||||
|
_, err := w.Write(cbg.CborNull)
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if _, err := w.Write(lengthBufAbortWithArgs); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
scratch := make([]byte, 9)
|
||||||
|
|
||||||
|
// t.Code (exitcode.ExitCode) (int64)
|
||||||
|
if t.Code >= 0 {
|
||||||
|
if err := cbg.WriteMajorTypeHeaderBuf(scratch, w, cbg.MajUnsignedInt, uint64(t.Code)); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
if err := cbg.WriteMajorTypeHeaderBuf(scratch, w, cbg.MajNegativeInt, uint64(-t.Code-1)); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// t.Message (string) (string)
|
||||||
|
if len(t.Message) > cbg.MaxLength {
|
||||||
|
return xerrors.Errorf("Value in field t.Message was too long")
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := cbg.WriteMajorTypeHeaderBuf(scratch, w, cbg.MajTextString, uint64(len(t.Message))); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if _, err := io.WriteString(w, string(t.Message)); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// t.Uncontrolled (bool) (bool)
|
||||||
|
if err := cbg.WriteBool(w, t.Uncontrolled); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (t *AbortWithArgs) UnmarshalCBOR(r io.Reader) error {
|
||||||
|
*t = AbortWithArgs{}
|
||||||
|
|
||||||
|
br := cbg.GetPeeker(r)
|
||||||
|
scratch := make([]byte, 8)
|
||||||
|
|
||||||
|
maj, extra, err := cbg.CborReadHeaderBuf(br, scratch)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if maj != cbg.MajArray {
|
||||||
|
return fmt.Errorf("cbor input should be of type array")
|
||||||
|
}
|
||||||
|
|
||||||
|
if extra != 3 {
|
||||||
|
return fmt.Errorf("cbor input had wrong number of fields")
|
||||||
|
}
|
||||||
|
|
||||||
|
// t.Code (exitcode.ExitCode) (int64)
|
||||||
|
{
|
||||||
|
maj, extra, err := cbg.CborReadHeaderBuf(br, scratch)
|
||||||
|
var extraI int64
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
switch maj {
|
||||||
|
case cbg.MajUnsignedInt:
|
||||||
|
extraI = int64(extra)
|
||||||
|
if extraI < 0 {
|
||||||
|
return fmt.Errorf("int64 positive overflow")
|
||||||
|
}
|
||||||
|
case cbg.MajNegativeInt:
|
||||||
|
extraI = int64(extra)
|
||||||
|
if extraI < 0 {
|
||||||
|
return fmt.Errorf("int64 negative oveflow")
|
||||||
|
}
|
||||||
|
extraI = -1 - extraI
|
||||||
|
default:
|
||||||
|
return fmt.Errorf("wrong type for int64 field: %d", maj)
|
||||||
|
}
|
||||||
|
|
||||||
|
t.Code = exitcode.ExitCode(extraI)
|
||||||
|
}
|
||||||
|
// t.Message (string) (string)
|
||||||
|
|
||||||
|
{
|
||||||
|
sval, err := cbg.ReadStringBuf(br, scratch)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
t.Message = string(sval)
|
||||||
|
}
|
||||||
|
// t.Uncontrolled (bool) (bool)
|
||||||
|
|
||||||
|
maj, extra, err = cbg.CborReadHeaderBuf(br, scratch)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if maj != cbg.MajOther {
|
||||||
|
return fmt.Errorf("booleans must be major type 7")
|
||||||
|
}
|
||||||
|
switch extra {
|
||||||
|
case 20:
|
||||||
|
t.Uncontrolled = false
|
||||||
|
case 21:
|
||||||
|
t.Uncontrolled = true
|
||||||
|
default:
|
||||||
|
return fmt.Errorf("booleans are either major type 7, value 20 or 21 (got %d)", extra)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
@@ -14,6 +14,7 @@ func main() {
		chaos.SendArgs{},
		chaos.SendReturn{},
		chaos.MutateStateArgs{},
+		chaos.AbortWithArgs{},
	); err != nil {
		panic(err)
	}
@ -75,10 +75,14 @@
|
|||||||
* [MpoolSetConfig](#MpoolSetConfig)
|
* [MpoolSetConfig](#MpoolSetConfig)
|
||||||
* [MpoolSub](#MpoolSub)
|
* [MpoolSub](#MpoolSub)
|
||||||
* [Msig](#Msig)
|
* [Msig](#Msig)
|
||||||
|
* [MsigAddApprove](#MsigAddApprove)
|
||||||
|
* [MsigAddCancel](#MsigAddCancel)
|
||||||
|
* [MsigAddPropose](#MsigAddPropose)
|
||||||
* [MsigApprove](#MsigApprove)
|
* [MsigApprove](#MsigApprove)
|
||||||
* [MsigCancel](#MsigCancel)
|
* [MsigCancel](#MsigCancel)
|
||||||
* [MsigCreate](#MsigCreate)
|
* [MsigCreate](#MsigCreate)
|
||||||
* [MsigGetAvailableBalance](#MsigGetAvailableBalance)
|
* [MsigGetAvailableBalance](#MsigGetAvailableBalance)
|
||||||
|
* [MsigGetVested](#MsigGetVested)
|
||||||
* [MsigPropose](#MsigPropose)
|
* [MsigPropose](#MsigPropose)
|
||||||
* [MsigSwapApprove](#MsigSwapApprove)
|
* [MsigSwapApprove](#MsigSwapApprove)
|
||||||
* [MsigSwapCancel](#MsigSwapCancel)
|
* [MsigSwapCancel](#MsigSwapCancel)
|
||||||
@ -156,10 +160,12 @@
|
|||||||
* [StateWaitMsg](#StateWaitMsg)
|
* [StateWaitMsg](#StateWaitMsg)
|
||||||
* [Sync](#Sync)
|
* [Sync](#Sync)
|
||||||
* [SyncCheckBad](#SyncCheckBad)
|
* [SyncCheckBad](#SyncCheckBad)
|
||||||
|
* [SyncCheckpoint](#SyncCheckpoint)
|
||||||
* [SyncIncomingBlocks](#SyncIncomingBlocks)
|
* [SyncIncomingBlocks](#SyncIncomingBlocks)
|
||||||
* [SyncMarkBad](#SyncMarkBad)
|
* [SyncMarkBad](#SyncMarkBad)
|
||||||
* [SyncState](#SyncState)
|
* [SyncState](#SyncState)
|
||||||
* [SyncSubmitBlock](#SyncSubmitBlock)
|
* [SyncSubmitBlock](#SyncSubmitBlock)
|
||||||
|
* [SyncUnmarkBad](#SyncUnmarkBad)
|
||||||
* [Wallet](#Wallet)
|
* [Wallet](#Wallet)
|
||||||
* [WalletBalance](#WalletBalance)
|
* [WalletBalance](#WalletBalance)
|
||||||
* [WalletDefaultAddress](#WalletDefaultAddress)
|
* [WalletDefaultAddress](#WalletDefaultAddress)
|
||||||
@ -278,6 +284,7 @@ ChainExport returns a stream of bytes with CAR dump of chain data.
|
|||||||
The exported chain data includes the header chain from the given tipset
|
The exported chain data includes the header chain from the given tipset
|
||||||
back to genesis, the entire genesis state, and the most recent 'nroots'
|
back to genesis, the entire genesis state, and the most recent 'nroots'
|
||||||
state trees.
|
state trees.
|
||||||
|
If oldmsgskip is set, messages from before the requested roots are also not included.
|
||||||
|
|
||||||
|
|
||||||
Perms: read
|
Perms: read
|
||||||
@ -286,6 +293,7 @@ Inputs:
|
|||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
10101,
|
10101,
|
||||||
|
true,
|
||||||
[
|
[
|
||||||
{
|
{
|
||||||
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
@ -1837,6 +1845,84 @@ The Msig methods are used to interact with multisig wallets on the
|
|||||||
filecoin network
|
filecoin network
|
||||||
|
|
||||||
|
|
||||||
|
### MsigAddApprove
|
||||||
|
MsigAddApprove approves a previously proposed AddSigner message
|
||||||
|
It takes the following params: <multisig address>, <sender address of the approve msg>, <proposed message ID>,
|
||||||
|
<proposer address>, <new signer>, <whether the number of required signers should be increased>
|
||||||
|
|
||||||
|
|
||||||
|
Perms: sign
|
||||||
|
|
||||||
|
Inputs:
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
"t01234",
|
||||||
|
"t01234",
|
||||||
|
42,
|
||||||
|
"t01234",
|
||||||
|
"t01234",
|
||||||
|
true
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
Response:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### MsigAddCancel
|
||||||
|
MsigAddCancel cancels a previously proposed AddSigner message
|
||||||
|
It takes the following params: <multisig address>, <sender address of the cancel msg>, <proposed message ID>,
|
||||||
|
<new signer>, <whether the number of required signers should be increased>
|
||||||
|
|
||||||
|
|
||||||
|
Perms: sign
|
||||||
|
|
||||||
|
Inputs:
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
"t01234",
|
||||||
|
"t01234",
|
||||||
|
42,
|
||||||
|
"t01234",
|
||||||
|
true
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
Response:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### MsigAddPropose
|
||||||
|
MsigAddPropose proposes adding a signer in the multisig
|
||||||
|
It takes the following params: <multisig address>, <sender address of the propose msg>,
|
||||||
|
<new signer>, <whether the number of required signers should be increased>
|
||||||
|
|
||||||
|
|
||||||
|
Perms: sign
|
||||||
|
|
||||||
|
Inputs:
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
"t01234",
|
||||||
|
"t01234",
|
||||||
|
"t01234",
|
||||||
|
true
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
Response:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
### MsigApprove
|
### MsigApprove
|
||||||
MsigApprove approves a previously-proposed multisig message
|
MsigApprove approves a previously-proposed multisig message
|
||||||
It takes the following params: <multisig address>, <proposed message ID>, <proposer address>, <recipient address>, <value to transfer>,
|
It takes the following params: <multisig address>, <proposed message ID>, <proposer address>, <recipient address>, <value to transfer>,
|
||||||
@ -1944,6 +2030,38 @@ Inputs:
|
|||||||
|
|
||||||
Response: `"0"`
|
Response: `"0"`
|
||||||
|
|
||||||
|
### MsigGetVested
|
||||||
|
MsigGetVested returns the amount of FIL that vested in a multisig in a certain period.
|
||||||
|
It takes the following params: <multisig address>, <start epoch>, <end epoch>
|
||||||
|
|
||||||
|
|
||||||
|
Perms: read
|
||||||
|
|
||||||
|
Inputs:
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
"t01234",
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacebp3shtrn43k7g3unredz7fxn4gj533d3o43tqn2p2ipxxhrvchve"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacebp3shtrn43k7g3unredz7fxn4gj533d3o43tqn2p2ipxxhrvchve"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
Response: `"0"`
|
||||||
|
|
||||||
### MsigPropose
|
### MsigPropose
|
||||||
MsigPropose proposes a multisig message
|
MsigPropose proposes a multisig message
|
||||||
It takes the following params: <multisig address>, <recipient address>, <value to transfer>,
|
It takes the following params: <multisig address>, <recipient address>, <value to transfer>,
|
||||||
@ -1974,7 +2092,7 @@ Response:
|
|||||||
### MsigSwapApprove
|
### MsigSwapApprove
|
||||||
MsigSwapApprove approves a previously proposed SwapSigner
|
MsigSwapApprove approves a previously proposed SwapSigner
|
||||||
It takes the following params: <multisig address>, <sender address of the approve msg>, <proposed message ID>,
|
It takes the following params: <multisig address>, <sender address of the approve msg>, <proposed message ID>,
|
||||||
<proposer address>, <old signer> <new signer>
|
<proposer address>, <old signer>, <new signer>
|
||||||
|
|
||||||
|
|
||||||
Perms: sign
|
Perms: sign
|
||||||
@ -2001,7 +2119,7 @@ Response:
|
|||||||
### MsigSwapCancel
|
### MsigSwapCancel
|
||||||
MsigSwapCancel cancels a previously proposed SwapSigner message
|
MsigSwapCancel cancels a previously proposed SwapSigner message
|
||||||
It takes the following params: <multisig address>, <sender address of the cancel msg>, <proposed message ID>,
|
It takes the following params: <multisig address>, <sender address of the cancel msg>, <proposed message ID>,
|
||||||
<old signer> <new signer>
|
<old signer>, <new signer>
|
||||||
|
|
||||||
|
|
||||||
Perms: sign
|
Perms: sign
|
||||||
@ -2027,7 +2145,7 @@ Response:
|
|||||||
### MsigSwapPropose
|
### MsigSwapPropose
|
||||||
MsigSwapPropose proposes swapping 2 signers in the multisig
|
MsigSwapPropose proposes swapping 2 signers in the multisig
|
||||||
It takes the following params: <multisig address>, <sender address of the propose msg>,
|
It takes the following params: <multisig address>, <sender address of the propose msg>,
|
||||||
<old signer> <new signer>
|
<old signer>, <new signer>
|
||||||
|
|
||||||
|
|
||||||
Perms: sign
|
Perms: sign
|
||||||
@ -3517,7 +3635,12 @@ Response:
|
|||||||
"Open": 10101,
|
"Open": 10101,
|
||||||
"Close": 10101,
|
"Close": 10101,
|
||||||
"Challenge": 10101,
|
"Challenge": 10101,
|
||||||
"FaultCutoff": 10101
|
"FaultCutoff": 10101,
|
||||||
|
"WPoStPeriodDeadlines": 42,
|
||||||
|
"WPoStProvingPeriod": 10101,
|
||||||
|
"WPoStChallengeWindow": 10101,
|
||||||
|
"WPoStChallengeLookback": 10101,
|
||||||
|
"FaultDeclarationCutoff": 10101
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -3995,6 +4118,28 @@ Inputs:
|
|||||||
|
|
||||||
Response: `"string value"`
|
Response: `"string value"`
|
||||||
|
|
||||||
|
### SyncCheckpoint
|
||||||
|
SyncCheckpoint marks a blocks as checkpointed, meaning that it won't ever fork away from it.
|
||||||
|
|
||||||
|
|
||||||
|
Perms: admin
|
||||||
|
|
||||||
|
Inputs:
|
||||||
|
```json
|
||||||
|
[
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"/": "bafy2bzacebp3shtrn43k7g3unredz7fxn4gj533d3o43tqn2p2ipxxhrvchve"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
]
|
||||||
|
```
|
||||||
|
|
||||||
|
Response: `{}`
|
||||||
|
|
||||||
### SyncIncomingBlocks
|
### SyncIncomingBlocks
|
||||||
SyncIncomingBlocks returns a channel streaming incoming, potentially not
|
SyncIncomingBlocks returns a channel streaming incoming, potentially not
|
||||||
yet synced block headers.
|
yet synced block headers.
|
||||||
@@ -4130,6 +4275,23 @@ Inputs:
 
 Response: `{}`
 
+### SyncUnmarkBad
+SyncUnmarkBad unmarks a block as bad, making it possible for it to be validated and synced again.
+
+
+Perms: admin
+
+Inputs:
+```json
+[
+  {
+    "/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
+  }
+]
+```
+
+Response: `{}`
+
 ## Wallet
 
 
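The two admin endpoints documented above correspond to the `SyncAPI` methods added later in this diff. A usage sketch over the `FullNode` API (both calls require an admin token on the target node):

```go
package main

import (
	"context"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/types"
)

// pinCheckpointAndForgive checkpoints a known-good tipset so the node will
// never fork away from it, and clears a block that was previously marked bad
// so it can be validated and synced again. Sketch only.
func pinCheckpointAndForgive(ctx context.Context, node api.FullNode, good types.TipSetKey, wronglyBanned cid.Cid) error {
	if err := node.SyncCheckpoint(ctx, good); err != nil {
		return err
	}
	return node.SyncUnmarkBad(ctx, wronglyBanned)
}
```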
@@ -259,7 +259,7 @@ When we launch a Lotus node with the command `./lotus daemon`
 (see [here](https://github.com/filecoin-project/lotus/blob/master/cmd/lotus/daemon.go) for more),
 the node is created through [dependency injection](https://godoc.org/go.uber.org/fx).
 This relies on reflection, which makes some of the references hard to follow.
-The node sets up all of the subsystems it needs to run, such as the repository, the network connections, thechain sync
+The node sets up all of the subsystems it needs to run, such as the repository, the network connections, the chain sync
 service, etc.
 This setup is orchestrated through calls to the `node.Override` function.
 The structure of each call indicates the type of component it will set up
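The `node.Override` calls changed elsewhere in this diff (for example `Override(new(beacon.Schedule), modules.RandomSchedule)`) all follow the pattern this paragraph describes: a constructor is registered against the Go type it produces, and fx resolves the dependency graph by reflection. A toy, self-contained sketch of that mechanism, with made-up `Repo` and `ChainSync` types standing in for the real subsystems:

```go
package main

import (
	"context"
	"fmt"

	"go.uber.org/fx"
)

// Stand-ins for node subsystems; the real lotus components are wired the same
// way, keyed by the type named in each node.Override call.
type Repo struct{ Path string }
type ChainSync struct{ Repo *Repo }

func newRepo() *Repo                  { return &Repo{Path: "~/.lotus"} }
func newChainSync(r *Repo) *ChainSync { return &ChainSync{Repo: r} }

func main() {
	app := fx.New(
		// Roughly what node.Override(new(*Repo), newRepo) boils down to:
		// provide a constructor for the requested type and let fx inject it
		// wherever that type is needed.
		fx.Provide(newRepo, newChainSync),
		fx.Invoke(func(cs *ChainSync) {
			fmt.Println("chain sync service using repo at", cs.Repo.Path)
		}),
	)
	if err := app.Start(context.Background()); err != nil {
		panic(err)
	}
	_ = app.Stop(context.Background())
}
```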
extern/fil-blst (vendored submodule, 1 line changed)
@@ -0,0 +1 @@
+Subproject commit 5f93488fc0dbfb450f2355269f18fc67010d59bb
extern/filecoin-ffi (vendored submodule, 2 lines changed)
@@ -1 +1 @@
-Subproject commit 40569104603407c999d6c9e4c3f1228cbd4d0e5c
+Subproject commit f640612a1a1f7a2dd8b3a49e1531db0aa0f63447
extern/sector-storage/ffiwrapper/sealer_test.go (vendored, 97 lines changed)
@@ -168,50 +168,34 @@ func (s *seal) unseal(t *testing.T, sb *Sealer, sp *basicfs.Provider, si abi.Sec
 	}
 }
 
-func post(t *testing.T, sealer *Sealer, seals ...seal) time.Time {
-	/*randomness := abi.PoStRandomness{0, 9, 2, 7, 6, 5, 4, 3, 2, 1, 0, 9, 8, 7, 6, 45, 3, 2, 1, 0, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 9, 7}
+func post(t *testing.T, sealer *Sealer, skipped []abi.SectorID, seals ...seal) {
+	randomness := abi.PoStRandomness{0, 9, 2, 7, 6, 5, 4, 3, 2, 1, 0, 9, 8, 7, 6, 45, 3, 2, 1, 0, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 9, 7}
 
-	sis := make([]abi.SectorInfo, len(seals))
+	sis := make([]saproof.SectorInfo, len(seals))
 	for i, s := range seals {
-		sis[i] = abi.SectorInfo{
-			RegisteredProof: sealProofType,
+		sis[i] = saproof.SectorInfo{
+			SealProof:    sealProofType,
 			SectorNumber: s.id.Number,
 			SealedCID:    s.cids.Sealed,
 		}
 	}
 
-	candidates, err := sealer.GenerateEPostCandidates(context.TODO(), seals[0].id.Miner, sis, randomness, []abi.SectorNumber{})
-	if err != nil {
-		t.Fatalf("%+v", err)
-	}*/
-
-	fmt.Println("skipping post")
-
-	genCandidates := time.Now()
-
-	/*if len(candidates) != 1 {
-		t.Fatal("expected 1 candidate")
+	proofs, skp, err := sealer.GenerateWindowPoSt(context.TODO(), seals[0].id.Miner, sis, randomness)
+	if len(skipped) > 0 {
+		require.Error(t, err)
+		require.EqualValues(t, skipped, skp)
+		return
 	}
 
-	candidatesPrime := make([]abi.PoStCandidate, len(candidates))
-	for idx := range candidatesPrime {
-		candidatesPrime[idx] = candidates[idx].Candidate
-	}
-
-	proofs, err := sealer.ComputeElectionPoSt(context.TODO(), seals[0].id.Miner, sis, randomness, candidatesPrime)
 	if err != nil {
 		t.Fatalf("%+v", err)
 	}
 
-	ePoStChallengeCount := ElectionPostChallengeCount(uint64(len(sis)), 0)
-
-	ok, err := ProofVerifier.VerifyElectionPost(context.TODO(), abi.PoStVerifyInfo{
-		Randomness:      randomness,
-		Candidates:      candidatesPrime,
-		Proofs:          proofs,
-		EligibleSectors: sis,
-		Prover:          seals[0].id.Miner,
-		ChallengeCount:  ePoStChallengeCount,
+	ok, err := ProofVerifier.VerifyWindowPoSt(context.TODO(), saproof.WindowPoStVerifyInfo{
+		Randomness:        randomness,
+		Proofs:            proofs,
+		ChallengedSectors: sis,
+		Prover:            seals[0].id.Miner,
 	})
 	if err != nil {
 		t.Fatalf("%+v", err)
@@ -219,8 +203,21 @@ func post(t *testing.T, sealer *Sealer, seals ...seal) time.Time {
 	if !ok {
 		t.Fatal("bad post")
 	}
-	*/
-	return genCandidates
+}
+
+func corrupt(t *testing.T, sealer *Sealer, id abi.SectorID) {
+	paths, done, err := sealer.sectors.AcquireSector(context.Background(), id, stores.FTSealed, 0, stores.PathStorage)
+	require.NoError(t, err)
+	defer done()
+
+	log.Infof("corrupt %s", paths.Sealed)
+	f, err := os.OpenFile(paths.Sealed, os.O_RDWR, 0664)
+	require.NoError(t, err)
+
+	_, err = f.WriteAt(bytes.Repeat([]byte{'d'}, 2048), 0)
+	require.NoError(t, err)
+
+	require.NoError(t, f.Close())
 }
 
 func getGrothParamFileAndVerifyingKeys(s abi.SectorSize) {
@@ -299,11 +296,11 @@ func TestSealAndVerify(t *testing.T) {
 
 	commit := time.Now()
 
-	genCandidiates := post(t, sb, s)
+	post(t, sb, nil, s)
 
 	epost := time.Now()
 
-	post(t, sb, s)
+	post(t, sb, nil, s)
 
 	if err := sb.FinalizeSector(context.TODO(), si, nil); err != nil {
 		t.Fatalf("%+v", err)
@@ -313,8 +310,7 @@ func TestSealAndVerify(t *testing.T) {
 
 	fmt.Printf("PreCommit: %s\n", precommit.Sub(start).String())
 	fmt.Printf("Commit: %s\n", commit.Sub(precommit).String())
-	fmt.Printf("GenCandidates: %s\n", genCandidiates.Sub(commit).String())
-	fmt.Printf("EPoSt: %s\n", epost.Sub(genCandidiates).String())
+	fmt.Printf("EPoSt: %s\n", epost.Sub(commit).String())
 }
 
 func TestSealPoStNoCommit(t *testing.T) {
@@ -370,16 +366,15 @@ func TestSealPoStNoCommit(t *testing.T) {
 		t.Fatal(err)
 	}
 
-	genCandidiates := post(t, sb, s)
+	post(t, sb, nil, s)
 
 	epost := time.Now()
 
 	fmt.Printf("PreCommit: %s\n", precommit.Sub(start).String())
-	fmt.Printf("GenCandidates: %s\n", genCandidiates.Sub(precommit).String())
-	fmt.Printf("EPoSt: %s\n", epost.Sub(genCandidiates).String())
+	fmt.Printf("EPoSt: %s\n", epost.Sub(precommit).String())
 }
 
-func TestSealAndVerify2(t *testing.T) {
+func TestSealAndVerify3(t *testing.T) {
 	defer requireFDsClosed(t, openFDs(t))
 
 	if runtime.NumCPU() < 10 && os.Getenv("CI") == "" { // don't bother on slow hardware
@@ -419,22 +414,32 @@ func TestSealAndVerify2(t *testing.T) {
 
 	si1 := abi.SectorID{Miner: miner, Number: 1}
 	si2 := abi.SectorID{Miner: miner, Number: 2}
+	si3 := abi.SectorID{Miner: miner, Number: 3}
 
 	s1 := seal{id: si1}
 	s2 := seal{id: si2}
+	s3 := seal{id: si3}
 
-	wg.Add(2)
+	wg.Add(3)
 	go s1.precommit(t, sb, si1, wg.Done) //nolint: staticcheck
 	time.Sleep(100 * time.Millisecond)
 	go s2.precommit(t, sb, si2, wg.Done) //nolint: staticcheck
+	time.Sleep(100 * time.Millisecond)
+	go s3.precommit(t, sb, si3, wg.Done) //nolint: staticcheck
 	wg.Wait()
 
-	wg.Add(2)
+	wg.Add(3)
 	go s1.commit(t, sb, wg.Done) //nolint: staticcheck
 	go s2.commit(t, sb, wg.Done) //nolint: staticcheck
+	go s3.commit(t, sb, wg.Done) //nolint: staticcheck
 	wg.Wait()
 
-	post(t, sb, s1, s2)
+	post(t, sb, nil, s1, s2, s3)
+
+	corrupt(t, sb, si1)
+	corrupt(t, sb, si2)
+
+	post(t, sb, []abi.SectorID{si1, si2}, s1, s2, s3)
 }
 
 func BenchmarkWriteWithAlignment(b *testing.B) {
extern/sector-storage/ffiwrapper/verifier_cgo.go (vendored, 17 lines changed)
@@ -40,8 +40,21 @@ func (sb *Sealer) GenerateWindowPoSt(ctx context.Context, minerID abi.ActorID, s
 	}
 	defer done()
 
-	proof, err := ffi.GenerateWindowPoSt(minerID, privsectors, randomness)
-	return proof, skipped, err
+	if len(skipped) > 0 {
+		return nil, skipped, xerrors.Errorf("pubSectorToPriv skipped some sectors")
+	}
+
+	proof, faulty, err := ffi.GenerateWindowPoSt(minerID, privsectors, randomness)
+
+	var faultyIDs []abi.SectorID
+	for _, f := range faulty {
+		faultyIDs = append(faultyIDs, abi.SectorID{
+			Miner:  minerID,
+			Number: f,
+		})
+	}
+
+	return proof, faultyIDs, err
 }
 
 func (sb *Sealer) pubSectorToPriv(ctx context.Context, mid abi.ActorID, sectorInfo []proof.SectorInfo, faults []abi.SectorNumber, rpt func(abi.RegisteredSealProof) (abi.RegisteredPoStProof, error)) (ffi.SortedPrivateSectorInfo, []abi.SectorID, func(), error) {
extern/sector-storage/mock/mock.go (vendored, 26 lines changed)
@@ -66,8 +66,9 @@ const (
 )
 
 type sectorState struct {
 	pieces []cid.Cid
 	failed bool
+	corrupted bool
 
 	state int
 
@@ -251,6 +252,18 @@ func (mgr *SectorMgr) MarkFailed(sid abi.SectorID, failed bool) error {
 	return nil
 }
 
+func (mgr *SectorMgr) MarkCorrupted(sid abi.SectorID, corrupted bool) error {
+	mgr.lk.Lock()
+	defer mgr.lk.Unlock()
+	ss, ok := mgr.sectors[sid]
+	if !ok {
+		return fmt.Errorf("no such sector in storage")
+	}
+
+	ss.corrupted = corrupted
+	return nil
+}
+
 func opFinishWait(ctx context.Context) {
 	val, ok := ctx.Value("opfinish").(chan struct{})
 	if !ok {
@@ -275,6 +288,8 @@ func (mgr *SectorMgr) GenerateWindowPoSt(ctx context.Context, minerID abi.ActorI
 	si := make([]proof.SectorInfo, 0, len(sectorInfo))
 	var skipped []abi.SectorID
 
+	var err error
+
 	for _, info := range sectorInfo {
 		sid := abi.SectorID{
 			Miner:  minerID,
@@ -283,13 +298,18 @@ func (mgr *SectorMgr) GenerateWindowPoSt(ctx context.Context, minerID abi.ActorI
 
 		_, found := mgr.sectors[sid]
 
-		if found && !mgr.sectors[sid].failed {
+		if found && !mgr.sectors[sid].failed && !mgr.sectors[sid].corrupted {
 			si = append(si, info)
 		} else {
 			skipped = append(skipped, sid)
+			err = xerrors.Errorf("skipped some sectors")
 		}
 	}
 
+	if err != nil {
+		return nil, skipped, err
+	}
+
 	return generateFakePoSt(si, abi.RegisteredSealProof.RegisteredWindowPoStProof, randomness), skipped, nil
 }
 
extern/storage-sealing/sealing.go (vendored, 4 lines changed)
@@ -28,6 +28,8 @@ import (
 
 const SectorStorePrefix = "/sectors"
 
+var ErrTooManySectorsSealing = xerrors.New("too many sectors sealing")
+
 var log = logging.Logger("sectors")
 
 type SectorLocation struct {
@@ -280,7 +282,7 @@ func (m *Sealing) newDealSector() (abi.SectorNumber, error) {
 
 	if cfg.MaxSealingSectorsForDeals > 0 {
 		if m.stats.curSealing() > cfg.MaxSealingSectorsForDeals {
-			return 0, xerrors.Errorf("too many sectors sealing")
+			return 0, ErrTooManySectorsSealing
 		}
 	}
 
extern/storage-sealing/states_failed.go (vendored, 2 lines changed)
@@ -62,7 +62,7 @@ func (m *Sealing) handleSealPrecommit2Failed(ctx statemachine.Context, sector Se
 		return err
 	}
 
-	if sector.PreCommit2Fails > 1 {
+	if sector.PreCommit2Fails > 3 {
 		return ctx.Send(SectorRetrySealPreCommit1{})
 	}
 
go.mod (18 lines changed)
@@ -2,8 +2,6 @@ module github.com/filecoin-project/lotus
 
 go 1.14
 
-replace github.com/supranational/blst => github.com/supranational/blst v0.1.2-alpha.1
-
 require (
 	contrib.go.opencensus.io/exporter/jaeger v0.1.0
 	contrib.go.opencensus.io/exporter/prometheus v0.1.0
@@ -15,10 +13,10 @@ require (
 	github.com/buger/goterm v0.0.0-20200322175922-2f3e71b85129
 	github.com/coreos/go-systemd/v22 v22.0.0
 	github.com/detailyang/go-fallocate v0.0.0-20180908115635-432fa640bd2e
-	github.com/dgraph-io/badger/v2 v2.0.3
+	github.com/dgraph-io/badger/v2 v2.2007.2
 	github.com/docker/go-units v0.4.0
-	github.com/drand/drand v1.0.3-0.20200714175734-29705eaf09d4
-	github.com/drand/kyber v1.1.1
+	github.com/drand/drand v1.1.2-0.20200905144319-79c957281b32
+	github.com/drand/kyber v1.1.2
 	github.com/dustin/go-humanize v1.0.0
 	github.com/elastic/go-sysinfo v1.3.0
 	github.com/fatih/color v1.8.0
@@ -29,7 +27,7 @@ require (
 	github.com/filecoin-project/go-crypto v0.0.0-20191218222705-effae4ea9f03
 	github.com/filecoin-project/go-data-transfer v0.6.3
 	github.com/filecoin-project/go-fil-commcid v0.0.0-20200716160307-8f644712406f
-	github.com/filecoin-project/go-fil-markets v0.6.0
+	github.com/filecoin-project/go-fil-markets v0.6.1-0.20200911011457-2959ccca6a3c
 	github.com/filecoin-project/go-jsonrpc v0.1.2-0.20200822201400-474f4fdccc52
 	github.com/filecoin-project/go-multistore v0.0.3
 	github.com/filecoin-project/go-padreader v0.0.0-20200903213702-ed5fae088b20
@@ -92,7 +90,7 @@ require (
 	github.com/libp2p/go-libp2p-mplex v0.2.4
 	github.com/libp2p/go-libp2p-noise v0.1.1
 	github.com/libp2p/go-libp2p-peerstore v0.2.6
-	github.com/libp2p/go-libp2p-pubsub v0.3.6-0.20200907103802-a3445b756fdb
+	github.com/libp2p/go-libp2p-pubsub v0.3.6-0.20200910093904-f7f33e10cc18
 	github.com/libp2p/go-libp2p-quic-transport v0.8.0
 	github.com/libp2p/go-libp2p-record v0.1.3
 	github.com/libp2p/go-libp2p-routing-helpers v0.2.3
@@ -135,6 +133,8 @@ replace github.com/golangci/golangci-lint => github.com/golangci/golangci-lint v
 
 replace github.com/filecoin-project/filecoin-ffi => ./extern/filecoin-ffi
 
-replace github.com/dgraph-io/badger/v2 => github.com/dgraph-io/badger/v2 v2.0.1-rc1.0.20200716180832-3ab515320794
-
 replace github.com/filecoin-project/test-vectors => ./extern/test-vectors
 
+replace github.com/supranational/blst => ./extern/fil-blst/blst
+
+replace github.com/filecoin-project/fil-blst => ./extern/fil-blst
go.sum (34 lines changed)
@@ -91,6 +91,7 @@ github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
 github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
 github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
 github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
+github.com/briandowns/spinner v1.11.1 h1:OixPqDEcX3juo5AjQZAnFPbeUA0jvkp2qzB5gOZJ/L0=
 github.com/briandowns/spinner v1.11.1/go.mod h1:QOuQk7x+EaDASo80FEXwlwiA+j/PPIcX3FScO+3/ZPQ=
 github.com/btcsuite/btcd v0.0.0-20190213025234-306aecffea32/go.mod h1:DrZx5ec/dmnfpw9KyYoQyYo7d0KEvTkk/5M/vbZjAr8=
 github.com/btcsuite/btcd v0.0.0-20190523000118-16327141da8c/go.mod h1:3J08xEfcugPacsc34/LKRU2yO7YmuT8yt28J8k2+rrI=
@@ -166,8 +167,10 @@ github.com/dgraph-io/badger v1.6.0-rc1/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhY
 github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
 github.com/dgraph-io/badger v1.6.1 h1:w9pSFNSdq/JPM1N12Fz/F/bzo993Is1W+Q7HjPzi7yg=
 github.com/dgraph-io/badger v1.6.1/go.mod h1:FRmFw3uxvcpa8zG3Rxs0th+hCLIuaQg8HlNV5bjgnuU=
-github.com/dgraph-io/badger/v2 v2.0.1-rc1.0.20200716180832-3ab515320794 h1:PIPH4SLjYXMMlX/cQqV7nIRatv7556yqUfWY+KBjrtQ=
-github.com/dgraph-io/badger/v2 v2.0.1-rc1.0.20200716180832-3ab515320794/go.mod h1:26P/7fbL4kUZVEVKLAKXkBXKOydDmM2p1e+NhhnBCAE=
+github.com/dgraph-io/badger/v2 v2.0.3/go.mod h1:3KY8+bsP8wI0OEnQJAKpd4wIJW/Mm32yw2j/9FUVnIM=
+github.com/dgraph-io/badger/v2 v2.2007.2 h1:EjjK0KqwaFMlPin1ajhP943VPENHJdEz1KLIegjaI3k=
+github.com/dgraph-io/badger/v2 v2.2007.2/go.mod h1:26P/7fbL4kUZVEVKLAKXkBXKOydDmM2p1e+NhhnBCAE=
+github.com/dgraph-io/ristretto v0.0.2-0.20200115201040-8f368f2f2ab3/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=
 github.com/dgraph-io/ristretto v0.0.2/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=
 github.com/dgraph-io/ristretto v0.0.3-0.20200630154024-f66de99634de h1:t0UHb5vdojIDUqktM6+xJAfScFBsVpXZmqC9dsgJmeA=
 github.com/dgraph-io/ristretto v0.0.3-0.20200630154024-f66de99634de/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=
@@ -179,12 +182,12 @@ github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw
 github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
 github.com/drand/bls12-381 v0.3.2 h1:RImU8Wckmx8XQx1tp1q04OV73J9Tj6mmpQLYDP7V1XE=
 github.com/drand/bls12-381 v0.3.2/go.mod h1:dtcLgPtYT38L3NO6mPDYH0nbpc5tjPassDqiniuAt4Y=
-github.com/drand/drand v1.0.3-0.20200714175734-29705eaf09d4 h1:+Rov3bfUriGWFR/lUVXnpimx+HMr9BXRC4by0BxuQ8k=
-github.com/drand/drand v1.0.3-0.20200714175734-29705eaf09d4/go.mod h1:SnqWL9jksIMK63UKkfmWI6f9PDN8ROoCgg+Z4zWk7hg=
+github.com/drand/drand v1.1.2-0.20200905144319-79c957281b32 h1:sU+51aQRaDxg0KnjQg19KuYRIxDBEUHffBAICSnBys8=
+github.com/drand/drand v1.1.2-0.20200905144319-79c957281b32/go.mod h1:0sQEVg+ngs1jaDPVIiEgY0lbENWJPaUlWxGHEaSmKVM=
 github.com/drand/kyber v1.0.1-0.20200110225416-8de27ed8c0e2/go.mod h1:UpXoA0Upd1N9l4TvRPHr1qAUBBERj6JQ/mnKI3BPEmw=
 github.com/drand/kyber v1.0.2/go.mod h1:x6KOpK7avKj0GJ4emhXFP5n7M7W7ChAPmnQh/OL6vRw=
-github.com/drand/kyber v1.1.1 h1:mwCY2XGRB+Qc1MPfrnRuVuXELkPhcq/r9yMoJIcDhHI=
-github.com/drand/kyber v1.1.1/go.mod h1:x6KOpK7avKj0GJ4emhXFP5n7M7W7ChAPmnQh/OL6vRw=
+github.com/drand/kyber v1.1.2 h1:faemqlaFyLrbBSjZGRzzu5SG/do+uTYpHlnrJIHbAhQ=
+github.com/drand/kyber v1.1.2/go.mod h1:x6KOpK7avKj0GJ4emhXFP5n7M7W7ChAPmnQh/OL6vRw=
 github.com/drand/kyber-bls12381 v0.1.0 h1:/P4C65VnyEwxzR5ZYYVMNzY1If+aYBrdUU5ukwh7LQw=
 github.com/drand/kyber-bls12381 v0.1.0/go.mod h1:N1emiHpm+jj7kMlxEbu3MUyOiooTgNySln564cgD9mk=
 github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -224,9 +227,11 @@ github.com/filecoin-project/go-data-transfer v0.6.3 h1:7TLwm8nuodHYD/uiwJjKc/PGR
 github.com/filecoin-project/go-data-transfer v0.6.3/go.mod h1:PmBKVXkhh67/tnEdJXQwDHl5mT+7Tbcwe1NPninqhnM=
 github.com/filecoin-project/go-fil-commcid v0.0.0-20200716160307-8f644712406f h1:GxJzR3oRIMTPtpZ0b7QF8FKPK6/iPAc7trhlL5k/g+s=
 github.com/filecoin-project/go-fil-commcid v0.0.0-20200716160307-8f644712406f/go.mod h1:Eaox7Hvus1JgPrL5+M3+h7aSPHc0cVqpSxA+TxIEpZQ=
-github.com/filecoin-project/go-fil-markets v0.6.0 h1:gfxMweUHo4u+2BZh2Q7/7+cV0/ttikuJfhkkxLRsE2Q=
-github.com/filecoin-project/go-fil-markets v0.6.0/go.mod h1:LhSFYLkjaoe0vFRKABGYyw1Jz+9jCpF1sPA7yOftLTw=
+github.com/filecoin-project/go-fil-markets v0.6.1-0.20200911011457-2959ccca6a3c h1:YGoyYmELQ0LHwDj/WcOvY3oYt+3iM0wdrAhqJQUAIy4=
+github.com/filecoin-project/go-fil-markets v0.6.1-0.20200911011457-2959ccca6a3c/go.mod h1:PLr9svZxsnHkae1Ky7+66g7fP9AlneVxIVu+oSMq56A=
 github.com/filecoin-project/go-hamt-ipld v0.1.5 h1:uoXrKbCQZ49OHpsTCkrThPNelC4W3LPEk0OrS/ytIBM=
+github.com/filecoin-project/go-hamt-ipld v0.1.5 h1:uoXrKbCQZ49OHpsTCkrThPNelC4W3LPEk0OrS/ytIBM=
+github.com/filecoin-project/go-hamt-ipld v0.1.5/go.mod h1:6Is+ONR5Cd5R6XZoCse1CWaXZc0Hdb/JeX+EQCQzX24=
 github.com/filecoin-project/go-hamt-ipld v0.1.5/go.mod h1:6Is+ONR5Cd5R6XZoCse1CWaXZc0Hdb/JeX+EQCQzX24=
 github.com/filecoin-project/go-jsonrpc v0.1.2-0.20200822201400-474f4fdccc52 h1:FXtCp0ybqdQL9knb3OGDpkNTaBbPxgkqPeWKotUwkH0=
 github.com/filecoin-project/go-jsonrpc v0.1.2-0.20200822201400-474f4fdccc52/go.mod h1:XBBpuKIMaXIIzeqzO1iucq4GvbF8CxmXRFoezRh+Cx4=
@@ -249,6 +254,11 @@ github.com/filecoin-project/go-statestore v0.1.0 h1:t56reH59843TwXHkMcwyuayStBIi
 github.com/filecoin-project/go-statestore v0.1.0/go.mod h1:LFc9hD+fRxPqiHiaqUEZOinUJB4WARkRfNl10O7kTnI=
 github.com/filecoin-project/go-storedcounter v0.0.0-20200421200003-1c99c62e8a5b h1:fkRZSPrYpk42PV3/lIXiL0LHetxde7vyYYvSsttQtfg=
 github.com/filecoin-project/go-storedcounter v0.0.0-20200421200003-1c99c62e8a5b/go.mod h1:Q0GQOBtKf1oE10eSXSlhN45kDBdGvEcVOqMiffqX+N8=
+github.com/filecoin-project/specs-actors v0.9.4/go.mod h1:BStZQzx5x7TmCkLv0Bpa07U6cPKol6fd3w9KjMPZ6Z4=
+github.com/filecoin-project/specs-actors v0.9.7 h1:7PAZ8kdqwBdmgf/23FCkQZLCXcVu02XJrkpkhBikiA8=
+github.com/filecoin-project/specs-actors v0.9.7/go.mod h1:wM2z+kwqYgXn5Z7scV1YHLyd1Q1cy0R8HfTIWQ0BFGU=
+github.com/filecoin-project/specs-actors v0.9.9-0.20200911231631-727cd8845d30 h1:6Kn6y3TpJbk5BsvhVha+3jr7C3gAAJq0rCnwUYOWRl0=
+github.com/filecoin-project/specs-actors v0.9.9-0.20200911231631-727cd8845d30/go.mod h1:czlvLQGEX0fjLLfdNHD7xLymy6L3n7aQzRWzsYGf+ys=
 github.com/filecoin-project/specs-storage v0.1.1-0.20200907031224-ed2e5cd13796 h1:dJsTPWpG2pcTeojO2pyn0c6l+x/3MZYCBgo/9d11JEk=
 github.com/filecoin-project/specs-storage v0.1.1-0.20200907031224-ed2e5cd13796/go.mod h1:nJRRM7Aa9XVvygr3W9k6xGF46RWzr2zxF/iGoAIfA/g=
 github.com/filecoin-project/test-vectors/schema v0.0.1 h1:5fNF76nl4qolEvcIsjc0kUADlTMVHO73tW4kXXPnsus=
@@ -498,6 +508,7 @@ github.com/ipfs/go-fs-lock v0.0.6/go.mod h1:OTR+Rj9sHiRubJh3dRhD15Juhd/+w6VPOY28
 github.com/ipfs/go-graphsync v0.1.0/go.mod h1:jMXfqIEDFukLPZHqDPp8tJMbHO9Rmeb9CEGevngQbmE=
 github.com/ipfs/go-graphsync v0.1.2 h1:25Ll9kIXCE+DY0dicvfS3KMw+U5sd01b/FJbA7KAbhg=
 github.com/ipfs/go-graphsync v0.1.2/go.mod h1:sLXVXm1OxtE2XYPw62MuXCdAuNwkAdsbnfrmos5odbA=
+github.com/ipfs/go-hamt-ipld v0.1.1/go.mod h1:1EZCr2v0jlCnhpa+aZ0JZYp8Tt2w16+JJOAVz17YcDk=
 github.com/ipfs/go-ipfs-blockstore v0.0.1/go.mod h1:d3WClOmRQKFnJ0Jz/jj/zmksX0ma1gROTlovZKBmN08=
 github.com/ipfs/go-ipfs-blockstore v0.1.0/go.mod h1:5aD0AvHPi7mZc6Ci1WCAhiBQu2IsfTduLl+422H6Rqw=
 github.com/ipfs/go-ipfs-blockstore v0.1.4/go.mod h1:Jxm3XMVjh6R17WvxFEiyKBLUGr86HgIYJW/D/MwqeYQ=
@@ -842,8 +853,8 @@ github.com/libp2p/go-libp2p-protocol v0.0.1/go.mod h1:Af9n4PiruirSDjHycM1QuiMi/1
 github.com/libp2p/go-libp2p-protocol v0.1.0/go.mod h1:KQPHpAabB57XQxGrXCNvbL6UEXfQqUgC/1adR2Xtflk=
 github.com/libp2p/go-libp2p-pubsub v0.1.1/go.mod h1:ZwlKzRSe1eGvSIdU5bD7+8RZN/Uzw0t1Bp9R1znpR/Q=
 github.com/libp2p/go-libp2p-pubsub v0.3.2-0.20200527132641-c0712c6e92cf/go.mod h1:TxPOBuo1FPdsTjFnv+FGZbNbWYsp74Culx+4ViQpato=
-github.com/libp2p/go-libp2p-pubsub v0.3.6-0.20200907103802-a3445b756fdb h1:0jm9ZSDkteX9XRjZqZwG5X0wuR+e0zAJ6ZEnqo2vcb0=
-github.com/libp2p/go-libp2p-pubsub v0.3.6-0.20200907103802-a3445b756fdb/go.mod h1:DTMSVmZZfXodB/pvdTGrY2eHPZ9W2ev7hzTH83OKHrI=
+github.com/libp2p/go-libp2p-pubsub v0.3.6-0.20200910093904-f7f33e10cc18 h1:+ae7vHSv/PJ4xGXwLV6LKGj32zjyB8ttJHtyV4TXal0=
+github.com/libp2p/go-libp2p-pubsub v0.3.6-0.20200910093904-f7f33e10cc18/go.mod h1:DTMSVmZZfXodB/pvdTGrY2eHPZ9W2ev7hzTH83OKHrI=
 github.com/libp2p/go-libp2p-quic-transport v0.1.1/go.mod h1:wqG/jzhF3Pu2NrhJEvE+IE0NTHNXslOPn9JQzyCAxzU=
 github.com/libp2p/go-libp2p-quic-transport v0.5.0/go.mod h1:IEcuC5MLxvZ5KuHKjRu+dr3LjCT1Be3rcD/4d8JrX8M=
 github.com/libp2p/go-libp2p-quic-transport v0.8.0 h1:mHA94K2+TD0e9XtjWx/P5jGGZn0GdQ4OFYwNllagv4E=
@@ -1315,8 +1326,6 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
 github.com/stretchr/testify v1.6.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/supranational/blst v0.1.2-alpha.1 h1:v0UqVlvbRNZIaSeMPr+T01kvTUq1h0EZuZ6gnDR1Mlg=
-github.com/supranational/blst v0.1.2-alpha.1/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
 github.com/syndtr/goleveldb v1.0.0 h1:fBdIW9lB4Iz0n9khmH8w27SJ3QEJ7+IgjPEwGSZiFdE=
 github.com/syndtr/goleveldb v1.0.0/go.mod h1:ZVVdQEZoIme9iO1Ch2Jdy24qqXrMMOU6lpPAyBWyWuQ=
 github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
@@ -1746,7 +1755,6 @@ google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8
 google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
 google.golang.org/grpc v1.29.1 h1:EC2SB8S04d2r73uptxphDSUG+kTKVgjRPF+N3xpxRB4=
 google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
-google.golang.org/grpc/cmd/protoc-gen-go-grpc v0.0.0-20200617041141-9a465503579e/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
 google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
 google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
 google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -501,12 +501,12 @@ func (c *ClientNodeAdapter) GetChainHead(ctx context.Context) (shared.TipSetToke
 	return head.Key().Bytes(), head.Height(), nil
 }
 
-func (c *ClientNodeAdapter) WaitForMessage(ctx context.Context, mcid cid.Cid, cb func(code exitcode.ExitCode, bytes []byte, err error) error) error {
+func (c *ClientNodeAdapter) WaitForMessage(ctx context.Context, mcid cid.Cid, cb func(code exitcode.ExitCode, bytes []byte, finalCid cid.Cid, err error) error) error {
 	receipt, err := c.StateWaitMsg(ctx, mcid, build.MessageConfidence)
 	if err != nil {
-		return cb(0, nil, err)
+		return cb(0, nil, cid.Undef, err)
 	}
-	return cb(receipt.Receipt.ExitCode, receipt.Receipt.Return, nil)
+	return cb(receipt.Receipt.ExitCode, receipt.Receipt.Return, receipt.Message, nil)
 }
 
 func (c *ClientNodeAdapter) GetMinerInfo(ctx context.Context, addr address.Address, encodedTs shared.TipSetToken) (*storagemarket.StorageProviderInfo, error) {
@@ -6,6 +6,7 @@ import (
 	"bytes"
 	"context"
 	"io"
+	"time"
 
 	"github.com/ipfs/go-cid"
 	logging "github.com/ipfs/go-log/v2"
@@ -35,6 +36,8 @@ import (
 	"github.com/filecoin-project/lotus/storage/sectorblocks"
 )
 
+var addPieceRetryWait = 5 * time.Minute
+var addPieceRetryTimeout = 6 * time.Hour
 var log = logging.Logger("storageadapter")
 
 type ProviderNodeAdapter struct {
@@ -91,7 +94,7 @@ func (n *ProviderNodeAdapter) OnDealComplete(ctx context.Context, deal storagema
 		return nil, xerrors.Errorf("deal.PublishCid can't be nil")
 	}
 
-	p, offset, err := n.secb.AddPiece(ctx, pieceSize, pieceData, sealing.DealInfo{
+	sdInfo := sealing.DealInfo{
 		DealID:     deal.DealID,
 		PublishCid: deal.PublishCid,
 		DealSchedule: sealing.DealSchedule{
@@ -99,7 +102,23 @@ func (n *ProviderNodeAdapter) OnDealComplete(ctx context.Context, deal storagema
 			EndEpoch:   deal.ClientDealProposal.Proposal.EndEpoch,
 		},
 		KeepUnsealed: deal.FastRetrieval,
-	})
+	}
+
+	p, offset, err := n.secb.AddPiece(ctx, pieceSize, pieceData, sdInfo)
+	curTime := time.Now()
+	for time.Since(curTime) < addPieceRetryTimeout {
+		if !xerrors.Is(err, sealing.ErrTooManySectorsSealing) {
+			log.Errorf("failed to addPiece for deal %d, err: %w", deal.DealID, err)
+			break
+		}
+		select {
+		case <-time.After(addPieceRetryWait):
+			p, offset, err = n.secb.AddPiece(ctx, pieceSize, pieceData, sdInfo)
+		case <-ctx.Done():
+			return nil, xerrors.New("context expired while waiting to retry AddPiece")
+		}
+	}
+
 	if err != nil {
 		return nil, xerrors.Errorf("AddPiece failed: %s", err)
 	}
@@ -355,12 +374,12 @@ func (n *ProviderNodeAdapter) GetChainHead(ctx context.Context) (shared.TipSetTo
 	return head.Key().Bytes(), head.Height(), nil
 }
 
-func (n *ProviderNodeAdapter) WaitForMessage(ctx context.Context, mcid cid.Cid, cb func(code exitcode.ExitCode, bytes []byte, err error) error) error {
+func (n *ProviderNodeAdapter) WaitForMessage(ctx context.Context, mcid cid.Cid, cb func(code exitcode.ExitCode, bytes []byte, finalCid cid.Cid, err error) error) error {
 	receipt, err := n.StateWaitMsg(ctx, mcid, 2*build.MessageConfidence)
 	if err != nil {
-		return cb(0, nil, err)
+		return cb(0, nil, cid.Undef, err)
 	}
-	return cb(receipt.Receipt.ExitCode, receipt.Receipt.Return, nil)
+	return cb(receipt.Receipt.ExitCode, receipt.Receipt.Return, receipt.Message, nil)
 }
 
 func (n *ProviderNodeAdapter) GetDataCap(ctx context.Context, addr address.Address, encodedTs shared.TipSetToken) (*verifreg.DataCap, error) {
@@ -362,7 +362,7 @@ func (m *Miner) mineOne(ctx context.Context, base *MiningBase) (*types.BlockMsg,
 		rbase = bvals[len(bvals)-1]
 	}
 
-	ticket, err := m.computeTicket(ctx, &rbase, base, len(bvals) > 0)
+	ticket, err := m.computeTicket(ctx, &rbase, base)
 	if err != nil {
 		return nil, xerrors.Errorf("scratching ticket failed: %w", err)
 	}
@@ -432,7 +432,7 @@ func (m *Miner) mineOne(ctx context.Context, base *MiningBase) (*types.BlockMsg,
 	return b, nil
 }
 
-func (m *Miner) computeTicket(ctx context.Context, brand *types.BeaconEntry, base *MiningBase, haveNewEntries bool) (*types.Ticket, error) {
+func (m *Miner) computeTicket(ctx context.Context, brand *types.BeaconEntry, base *MiningBase) (*types.Ticket, error) {
 	mi, err := m.api.StateMinerInfo(ctx, m.address, types.EmptyTSK)
 	if err != nil {
 		return nil, err
@@ -447,11 +447,12 @@ func (m *Miner) computeTicket(ctx context.Context, brand *types.BeaconEntry, bas
 		return nil, xerrors.Errorf("failed to marshal address to cbor: %w", err)
 	}
 
-	if !haveNewEntries {
+	round := base.TipSet.Height() + base.NullRounds + 1
+	if round > build.UpgradeSmokeHeight {
 		buf.Write(base.TipSet.MinTicket().VRFProof)
 	}
 
-	input, err := store.DrawRandomness(brand.Data, crypto.DomainSeparationTag_TicketProduction, base.TipSet.Height()+base.NullRounds+1-build.TicketRandomnessLookback, buf.Bytes())
+	input, err := store.DrawRandomness(brand.Data, crypto.DomainSeparationTag_TicketProduction, round-build.TicketRandomnessLookback, buf.Bytes())
 	if err != nil {
 		return nil, err
 	}
@@ -226,7 +226,7 @@ func Online() Option {
 
 			Override(new(dtypes.BootstrapPeers), modules.BuiltinBootstrap),
 			Override(new(dtypes.DrandBootstrap), modules.DrandBootstrap),
-			Override(new(dtypes.DrandConfig), modules.BuiltinDrandConfig),
+			Override(new(dtypes.DrandSchedule), modules.BuiltinDrandConfig),
 
 			Override(HandleIncomingMessagesKey, modules.HandleIncomingMessages),
 
@@ -272,7 +272,7 @@ func Online() Option {
 			Override(new(modules.ClientDealFunds), modules.NewClientDealFunds),
 			Override(new(storagemarket.StorageClient), modules.StorageClient),
 			Override(new(storagemarket.StorageClientNode), storageadapter.NewClientNodeAdapter),
-			Override(new(beacon.RandomBeacon), modules.RandomBeacon),
+			Override(new(beacon.Schedule), modules.RandomSchedule),
 
 			Override(new(*paychmgr.Store), paychmgr.NewStore),
 			Override(new(*paychmgr.Manager), paychmgr.NewManager),
@@ -535,6 +535,6 @@ func Test() Option {
 	return Options(
 		Unset(RunPeerMgrKey),
 		Unset(new(*peermgr.PeerMgr)),
-		Override(new(beacon.RandomBeacon), testing.RandomBeacon),
+		Override(new(beacon.Schedule), testing.RandomBeacon),
 	)
 }
@@ -6,6 +6,8 @@ import (
 	"io"
 	"os"
 
+	"github.com/filecoin-project/go-state-types/dline"
+
 	datatransfer "github.com/filecoin-project/go-data-transfer"
 	"github.com/filecoin-project/go-state-types/big"
 	"golang.org/x/xerrors"
@@ -80,7 +82,7 @@ type API struct {
 	Host host.Host
 }
 
-func calcDealExpiration(minDuration uint64, md *miner.DeadlineInfo, startEpoch abi.ChainEpoch) abi.ChainEpoch {
+func calcDealExpiration(minDuration uint64, md *dline.Info, startEpoch abi.ChainEpoch) abi.ChainEpoch {
 	// Make sure we give some time for the miner to seal
 	minExp := startEpoch + abi.ChainEpoch(minDuration)
 
@@ -170,9 +170,14 @@ func (a *CommonAPI) ID(context.Context) (peer.ID, error) {
 }
 
 func (a *CommonAPI) Version(context.Context) (api.Version, error) {
+	v, err := build.VersionForType(build.RunningNodeType)
+	if err != nil {
+		return api.Version{}, err
+	}
+
 	return api.Version{
 		Version:    build.UserVersion(),
-		APIVersion: build.APIVersion,
+		APIVersion: v,
 
 		BlockDelay: build.BlockDelaySecs,
 	}, nil
@@ -13,12 +13,13 @@ import (
 type BeaconAPI struct {
 	fx.In
 
-	Beacon beacon.RandomBeacon
+	Beacon beacon.Schedule
 }
 
 func (a *BeaconAPI) BeaconGetEntry(ctx context.Context, epoch abi.ChainEpoch) (*types.BeaconEntry, error) {
-	rr := a.Beacon.MaxBeaconRoundForEpoch(epoch, types.BeaconEntry{})
-	e := a.Beacon.Entry(ctx, rr)
+	b := a.Beacon.BeaconForEpoch(epoch)
+	rr := b.MaxBeaconRoundForEpoch(epoch)
+	e := b.Entry(ctx, rr)
 
 	select {
 	case be, ok := <-e:
@@ -495,7 +495,7 @@ func (a *ChainAPI) ChainGetMessage(ctx context.Context, mc cid.Cid) (*types.Mess
 	return cm.VMMessage(), nil
 }
 
-func (a *ChainAPI) ChainExport(ctx context.Context, nroots abi.ChainEpoch, tsk types.TipSetKey) (<-chan []byte, error) {
+func (a *ChainAPI) ChainExport(ctx context.Context, nroots abi.ChainEpoch, skipoldmsgs bool, tsk types.TipSetKey) (<-chan []byte, error) {
 	ts, err := a.Chain.GetTipSetFromKey(tsk)
 	if err != nil {
 		return nil, xerrors.Errorf("loading tipset %s: %w", tsk, err)
@@ -508,7 +508,7 @@ func (a *ChainAPI) ChainExport(ctx context.Context, nroots abi.ChainEpoch, tsk t
 		bw := bufio.NewWriterSize(w, 1<<20)
 		defer bw.Flush() //nolint:errcheck // it is a write to a pipe
 
-		if err := a.Chain.Export(ctx, ts, nroots, bw); err != nil {
+		if err := a.Chain.Export(ctx, ts, nroots, skipoldmsgs, bw); err != nil {
 			log.Errorf("chain export call failed: %s", err)
 			return
 		}
@@ -204,30 +204,14 @@ func (a *GasAPI) GasEstimateMessageGas(ctx context.Context, msg *types.Message,
 	}
 
 	if msg.GasFeeCap == types.EmptyInt || types.BigCmp(msg.GasFeeCap, types.NewInt(0)) == 0 {
-		feeCap, err := a.GasEstimateFeeCap(ctx, msg, 10, types.EmptyTSK)
+		feeCap, err := a.GasEstimateFeeCap(ctx, msg, 20, types.EmptyTSK)
 		if err != nil {
 			return nil, xerrors.Errorf("estimating fee cap: %w", err)
 		}
 		msg.GasFeeCap = feeCap
 	}
 
-	capGasFee(msg, spec.Get().MaxFee)
+	messagepool.CapGasFee(msg, spec.Get().MaxFee)
 
 	return msg, nil
 }
 
-func capGasFee(msg *types.Message, maxFee abi.TokenAmount) {
-	if maxFee.Equals(big.Zero()) {
-		maxFee = types.NewInt(build.FilecoinPrecision / 10)
-	}
-
-	gl := types.NewInt(uint64(msg.GasLimit))
-	totalFee := types.BigMul(msg.GasFeeCap, gl)
-
-	if totalFee.LessThanEqual(maxFee) {
-		return
-	}
-
-	msg.GasFeeCap = big.Div(maxFee, gl)
-	msg.GasPremium = big.Min(msg.GasFeeCap, msg.GasPremium) // cap premium at FeeCap
-}
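The removed helper's arithmetic now lives in `messagepool.CapGasFee`. A standalone sketch of the same computation with `math/big`, to make the capping rule explicit (the real code operates on `types.Message` and the lotus BigInt type):

```go
package main

import (
	"fmt"
	"math/big"
)

// capGasFee mirrors the logic of the helper deleted above: if
// feeCap*gasLimit would exceed maxFee, lower the fee cap to maxFee/gasLimit
// and clamp the premium to the new cap.
func capGasFee(feeCap, premium *big.Int, gasLimit int64, maxFee *big.Int) (*big.Int, *big.Int) {
	gl := big.NewInt(gasLimit)
	total := new(big.Int).Mul(feeCap, gl)
	if total.Cmp(maxFee) <= 0 {
		return feeCap, premium // total fee already within the budget
	}
	newCap := new(big.Int).Div(maxFee, gl)
	newPremium := premium
	if premium.Cmp(newCap) > 0 {
		newPremium = newCap // premium can never exceed the fee cap
	}
	return newCap, newPremium
}

func main() {
	// 1000 attoFIL/gas over 10M gas is 10^10 attoFIL, above a 5*10^9 budget,
	// so the cap drops to 500 and the premium is clamped with it.
	cappedFee, cappedPremium := capGasFee(big.NewInt(1000), big.NewInt(800), 10_000_000, big.NewInt(5_000_000_000))
	fmt.Println(cappedFee, cappedPremium) // 500 500
}
```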
@@ -130,6 +130,33 @@ func (a *MsigAPI) MsigPropose(ctx context.Context, msig address.Address, to addr
 	return smsg.Cid(), nil
 }
 
+func (a *MsigAPI) MsigAddPropose(ctx context.Context, msig address.Address, src address.Address, newAdd address.Address, inc bool) (cid.Cid, error) {
+	enc, actErr := serializeAddParams(newAdd, inc)
+	if actErr != nil {
+		return cid.Undef, actErr
+	}
+
+	return a.MsigPropose(ctx, msig, msig, big.Zero(), src, uint64(builtin.MethodsMultisig.AddSigner), enc)
+}
+
+func (a *MsigAPI) MsigAddApprove(ctx context.Context, msig address.Address, src address.Address, txID uint64, proposer address.Address, newAdd address.Address, inc bool) (cid.Cid, error) {
+	enc, actErr := serializeAddParams(newAdd, inc)
+	if actErr != nil {
+		return cid.Undef, actErr
+	}
+
+	return a.MsigApprove(ctx, msig, txID, proposer, msig, big.Zero(), src, uint64(builtin.MethodsMultisig.AddSigner), enc)
+}
+
+func (a *MsigAPI) MsigAddCancel(ctx context.Context, msig address.Address, src address.Address, txID uint64, newAdd address.Address, inc bool) (cid.Cid, error) {
+	enc, actErr := serializeAddParams(newAdd, inc)
+	if actErr != nil {
+		return cid.Undef, actErr
+	}
+
+	return a.MsigCancel(ctx, msig, txID, msig, big.Zero(), src, uint64(builtin.MethodsMultisig.AddSigner), enc)
+}
+
 func (a *MsigAPI) MsigSwapPropose(ctx context.Context, msig address.Address, src address.Address, oldAdd address.Address, newAdd address.Address) (cid.Cid, error) {
 	enc, actErr := serializeSwapParams(oldAdd, newAdd)
 	if actErr != nil {
@@ -244,6 +271,18 @@ func (a *MsigAPI) msigApproveOrCancel(ctx context.Context, operation api.MsigPro
 	return smsg.Cid(), nil
 }
 
+func serializeAddParams(new address.Address, inc bool) ([]byte, error) {
+	enc, actErr := actors.SerializeParams(&samsig.AddSignerParams{
+		Signer:   new,
+		Increase: inc,
+	})
+	if actErr != nil {
+		return nil, actErr
+	}
+
+	return enc, nil
+}
+
 func serializeSwapParams(old address.Address, new address.Address) ([]byte, error) {
 	enc, actErr := actors.SerializeParams(&samsig.SwapSignerParams{
 		From: old,
@ -7,6 +7,8 @@ import (
|
|||||||
"fmt"
|
"fmt"
|
||||||
"strconv"
|
"strconv"
|
||||||
|
|
||||||
|
"github.com/filecoin-project/go-state-types/dline"
|
||||||
|
|
||||||
cid "github.com/ipfs/go-cid"
|
cid "github.com/ipfs/go-cid"
|
||||||
cbor "github.com/ipfs/go-ipld-cbor"
|
cbor "github.com/ipfs/go-ipld-cbor"
|
||||||
cbg "github.com/whyrusleeping/cbor-gen"
|
cbg "github.com/whyrusleeping/cbor-gen"
|
||||||
@ -53,7 +55,7 @@ type StateAPI struct {
|
|||||||
ProofVerifier ffiwrapper.Verifier
|
ProofVerifier ffiwrapper.Verifier
|
||||||
StateManager *stmgr.StateManager
|
StateManager *stmgr.StateManager
|
||||||
Chain *store.ChainStore
|
Chain *store.ChainStore
|
||||||
Beacon beacon.RandomBeacon
|
Beacon beacon.Schedule
|
||||||
}
|
}
|
||||||
|
|
||||||
func (a *StateAPI) StateNetworkName(ctx context.Context) (dtypes.NetworkName, error) {
|
func (a *StateAPI) StateNetworkName(ctx context.Context) (dtypes.NetworkName, error) {
|
||||||
@ -158,7 +160,7 @@ func (a *StateAPI) StateMinerPartitions(ctx context.Context, m address.Address,
|
|||||||
}))))))
|
}))))))
|
||||||
}
|
}
|
||||||
|
|
||||||
func (a *StateAPI) StateMinerProvingDeadline(ctx context.Context, addr address.Address, tsk types.TipSetKey) (*miner.DeadlineInfo, error) {
|
func (a *StateAPI) StateMinerProvingDeadline(ctx context.Context, addr address.Address, tsk types.TipSetKey) (*dline.Info, error) {
|
||||||
ts, err := a.Chain.GetTipSetFromKey(tsk)
|
ts, err := a.Chain.GetTipSetFromKey(tsk)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, xerrors.Errorf("loading tipset %s: %w", tsk, err)
|
return nil, xerrors.Errorf("loading tipset %s: %w", tsk, err)
|
||||||
@@ -899,6 +901,48 @@ func (a *StateAPI) MsigGetAvailableBalance(ctx context.Context, addr address.Add
 	return types.BigSub(act.Balance, minBalance), nil
 }
 
+func (a *StateAPI) MsigGetVested(ctx context.Context, addr address.Address, start types.TipSetKey, end types.TipSetKey) (types.BigInt, error) {
+	startTs, err := a.Chain.GetTipSetFromKey(start)
+	if err != nil {
+		return types.EmptyInt, xerrors.Errorf("loading start tipset %s: %w", start, err)
+	}
+
+	endTs, err := a.Chain.GetTipSetFromKey(end)
+	if err != nil {
+		return types.EmptyInt, xerrors.Errorf("loading end tipset %s: %w", end, err)
+	}
+
+	if startTs.Height() > endTs.Height() {
+		return types.EmptyInt, xerrors.Errorf("start tipset %d is after end tipset %d", startTs.Height(), endTs.Height())
+	} else if startTs.Height() == endTs.Height() {
+		return big.Zero(), nil
+	}
+
+	var mst samsig.State
+	act, err := a.StateManager.LoadActorState(ctx, addr, &mst, endTs)
+	if err != nil {
+		return types.EmptyInt, xerrors.Errorf("failed to load multisig actor state at end epoch: %w", err)
+	}
+
+	if act.Code != builtin.MultisigActorCodeID {
+		return types.EmptyInt, fmt.Errorf("given actor was not a multisig")
+	}
+
+	if mst.UnlockDuration == 0 ||
+		mst.InitialBalance.IsZero() ||
+		mst.StartEpoch+mst.UnlockDuration <= startTs.Height() ||
+		mst.StartEpoch >= endTs.Height() {
+		return big.Zero(), nil
+	}
+
+	startLk := mst.InitialBalance
+	if startTs.Height() > mst.StartEpoch {
+		startLk = mst.AmountLocked(startTs.Height() - mst.StartEpoch)
+	}
+
+	return big.Sub(startLk, mst.AmountLocked(endTs.Height()-mst.StartEpoch)), nil
+}
+
 var initialPledgeNum = types.NewInt(110)
 var initialPledgeDen = types.NewInt(100)
 
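The new `MsigGetVested` call boils down to linear-vesting arithmetic: the amount vested between two tipsets is the locked balance at the start height minus the locked balance at the end height, with the initial balance unlocking evenly over `UnlockDuration` epochs from `StartEpoch`. A rough standalone sketch of that arithmetic (plain `math/big`, floor division; this is an illustration, not the specs-actors `AmountLocked` implementation):

```go
package main

import (
	"fmt"
	"math/big"
)

// amountLocked illustrates linear vesting: the initial balance unlocks
// evenly over unlockDuration epochs, starting when elapsed is 0.
func amountLocked(initial *big.Int, unlockDuration, elapsed int64) *big.Int {
	if elapsed <= 0 {
		return new(big.Int).Set(initial)
	}
	if elapsed >= unlockDuration {
		return big.NewInt(0)
	}
	remaining := big.NewInt(unlockDuration - elapsed)
	locked := new(big.Int).Mul(initial, remaining)
	return locked.Div(locked, big.NewInt(unlockDuration))
}

func main() {
	initial := big.NewInt(1_000_000)
	// Vested between two heights is the difference of the locked amounts.
	startLocked := amountLocked(initial, 100, 25)
	endLocked := amountLocked(initial, 100, 75)
	vested := new(big.Int).Sub(startLocked, endLocked)
	fmt.Println(vested) // 500000
}
```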
@@ -97,12 +97,23 @@ func (a *SyncAPI) SyncIncomingBlocks(ctx context.Context) (<-chan *types.BlockHe
 	return a.Syncer.IncomingBlocks(ctx)
 }
 
+func (a *SyncAPI) SyncCheckpoint(ctx context.Context, tsk types.TipSetKey) error {
+	log.Warnf("Marking tipset %s as checkpoint", tsk)
+	return a.Syncer.SetCheckpoint(tsk)
+}
+
 func (a *SyncAPI) SyncMarkBad(ctx context.Context, bcid cid.Cid) error {
 	log.Warnf("Marking block %s as bad", bcid)
 	a.Syncer.MarkBad(bcid)
 	return nil
 }
 
+func (a *SyncAPI) SyncUnmarkBad(ctx context.Context, bcid cid.Cid) error {
+	log.Warnf("Unmarking block %s as bad", bcid)
+	a.Syncer.UnmarkBad(bcid)
+	return nil
+}
+
 func (a *SyncAPI) SyncCheckBad(ctx context.Context, bcid cid.Cid) (string, error) {
 	reason, ok := a.Syncer.CheckBadBlockCache(bcid)
 	if !ok {
@@ -163,8 +163,8 @@ func NetworkName(mctx helpers.MetricsCtx, lc fx.Lifecycle, cs *store.ChainStore,
 	return netName, err
 }
 
-func NewSyncer(lc fx.Lifecycle, sm *stmgr.StateManager, exchange exchange.Client, h host.Host, beacon beacon.RandomBeacon, verifier ffiwrapper.Verifier) (*chain.Syncer, error) {
-	syncer, err := chain.NewSyncer(sm, exchange, h.ConnManager(), h.ID(), beacon, verifier)
+func NewSyncer(lc fx.Lifecycle, ds dtypes.MetadataDS, sm *stmgr.StateManager, exchange exchange.Client, h host.Host, beacon beacon.Schedule, verifier ffiwrapper.Verifier) (*chain.Syncer, error) {
+	syncer, err := chain.NewSyncer(ds, sm, exchange, h.ConnManager(), h.ID(), beacon, verifier)
 	if err != nil {
 		return nil, err
 	}
@@ -10,6 +10,7 @@ import (
 
 	"github.com/gbrlsnchs/jwt/v3"
 	logging "github.com/ipfs/go-log/v2"
+	"github.com/libp2p/go-libp2p-core/peer"
 	"github.com/libp2p/go-libp2p-core/peerstore"
 	record "github.com/libp2p/go-libp2p-record"
 	"golang.org/x/xerrors"
@@ -93,14 +94,18 @@ func BuiltinBootstrap() (dtypes.BootstrapPeers, error) {
 	return build.BuiltinBootstrap()
 }
 
-func DrandBootstrap(d dtypes.DrandConfig) (dtypes.DrandBootstrap, error) {
+func DrandBootstrap(ds dtypes.DrandSchedule) (dtypes.DrandBootstrap, error) {
 	// TODO: retry resolving, don't fail if at least one resolve succeeds
-	addrs, err := addrutil.ParseAddresses(context.TODO(), d.Relays)
-	if err != nil {
-		log.Errorf("reoslving drand relays addresses: %+v", err)
-		return nil, nil
+	res := []peer.AddrInfo{}
+	for _, d := range ds {
+		addrs, err := addrutil.ParseAddresses(context.TODO(), d.Config.Relays)
+		if err != nil {
+			log.Errorf("reoslving drand relays addresses: %+v", err)
+			return res, nil
+		}
+		res = append(res, addrs...)
 	}
-	return addrs, nil
+	return res, nil
 }
 
 func SetupJournal(lr repo.LockedRepo) error {
@@ -1,5 +1,14 @@
 package dtypes
 
+import "github.com/filecoin-project/go-state-types/abi"
+
+type DrandSchedule []DrandPoint
+
+type DrandPoint struct {
+	Start abi.ChainEpoch
+	Config DrandConfig
+}
+
 type DrandConfig struct {
 	Servers []string
 	Relays []string
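For illustration, a `DrandSchedule` is an ordered list of switch-over points, and the config that applies at a given epoch is the last point whose `Start` is at or below that epoch. A self-contained sketch with local stand-ins for the types (the `configAt` helper, the epoch values and the server URLs are made up for the example; the real `Start` field is an `abi.ChainEpoch`):

```go
package main

import "fmt"

// Minimal local mirrors of the dtypes above, for illustration only.
type DrandConfig struct {
	Servers       []string
	Relays        []string
	ChainInfoJSON string
}

type DrandPoint struct {
	Start  int64 // abi.ChainEpoch in the real type
	Config DrandConfig
}

type DrandSchedule []DrandPoint

// configAt returns the config of the last point whose Start is <= epoch.
// Hypothetical helper; the schedule is assumed to be sorted by Start.
func configAt(s DrandSchedule, epoch int64) DrandConfig {
	cfg := s[0].Config
	for _, p := range s {
		if p.Start <= epoch {
			cfg = p.Config
		}
	}
	return cfg
}

func main() {
	sched := DrandSchedule{
		{Start: 0, Config: DrandConfig{Servers: []string{"https://test-drand.example"}}},
		{Start: 64, Config: DrandConfig{Servers: []string{"https://mainnet-drand.example"}}},
	}
	fmt.Println(configAt(sched, 10).Servers)  // pre-upgrade network
	fmt.Println(configAt(sched, 200).Servers) // post-upgrade network
}
```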
@@ -49,7 +49,7 @@ type GossipIn struct {
 	Db dtypes.DrandBootstrap
 	Cfg *config.Pubsub
 	Sk *dtypes.ScoreKeeper
-	Dr dtypes.DrandConfig
+	Dr dtypes.DrandSchedule
 }
 
 func getDrandTopic(chainInfoJSON string) (string, error) {
@@ -74,9 +74,126 @@ func GossipSub(in GossipIn) (service *pubsub.PubSub, err error) {
 	}
 
 	isBootstrapNode := in.Cfg.Bootstrapper
-	drandTopic, err := getDrandTopic(in.Dr.ChainInfoJSON)
-	if err != nil {
-		return nil, err
+
+	drandTopicParams := &pubsub.TopicScoreParams{
+		// expected 2 beaconsn/min
+		TopicWeight: 0.5, // 5x block topic; max cap is 62.5
+
+		// 1 tick per second, maxes at 1 after 1 hour
+		TimeInMeshWeight: 0.00027, // ~1/3600
+		TimeInMeshQuantum: time.Second,
+		TimeInMeshCap: 1,
+
+		// deliveries decay after 1 hour, cap at 25 beacons
+		FirstMessageDeliveriesWeight: 5, // max value is 125
+		FirstMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
+		FirstMessageDeliveriesCap: 25, // the maximum expected in an hour is ~26, including the decay
+
+		// Mesh Delivery Failure is currently turned off for beacons
+		// This is on purpose as
+		// - the traffic is very low for meaningful distribution of incoming edges.
+		// - the reaction time needs to be very slow -- in the order of 10 min at least
+		// so we might as well let opportunistic grafting repair the mesh on its own
+		// pace.
+		// - the network is too small, so large asymmetries can be expected between mesh
+		// edges.
+		// We should revisit this once the network grows.
+
+		// invalid messages decay after 1 hour
+		InvalidMessageDeliveriesWeight: -1000,
+		InvalidMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
+	}
+
+	topicParams := map[string]*pubsub.TopicScoreParams{
+		build.BlocksTopic(in.Nn): {
+			// expected 10 blocks/min
+			TopicWeight: 0.1, // max cap is 50, max mesh penalty is -10, single invalid message is -100
+
+			// 1 tick per second, maxes at 1 after 1 hour
+			TimeInMeshWeight: 0.00027, // ~1/3600
+			TimeInMeshQuantum: time.Second,
+			TimeInMeshCap: 1,
+
+			// deliveries decay after 1 hour, cap at 100 blocks
+			FirstMessageDeliveriesWeight: 5, // max value is 500
+			FirstMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
+			FirstMessageDeliveriesCap: 100, // 100 blocks in an hour
+
+			// Mesh Delivery Failure is currently turned off for blocks
+			// This is on purpose as
+			// - the traffic is very low for meaningful distribution of incoming edges.
+			// - the reaction time needs to be very slow -- in the order of 10 min at least
+			// so we might as well let opportunistic grafting repair the mesh on its own
+			// pace.
+			// - the network is too small, so large asymmetries can be expected between mesh
+			// edges.
+			// We should revisit this once the network grows.
+			//
+			// // tracks deliveries in the last minute
+			// // penalty activates at 1 minute and expects ~0.4 blocks
+			// MeshMessageDeliveriesWeight: -576, // max penalty is -100
+			// MeshMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Minute),
+			// MeshMessageDeliveriesCap: 10, // 10 blocks in a minute
+			// MeshMessageDeliveriesThreshold: 0.41666, // 10/12/2 blocks/min
+			// MeshMessageDeliveriesWindow: 10 * time.Millisecond,
+			// MeshMessageDeliveriesActivation: time.Minute,
+			//
+			// // decays after 15 min
+			// MeshFailurePenaltyWeight: -576,
+			// MeshFailurePenaltyDecay: pubsub.ScoreParameterDecay(15 * time.Minute),
+
+			// invalid messages decay after 1 hour
+			InvalidMessageDeliveriesWeight: -1000,
+			InvalidMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
+		},
+		build.MessagesTopic(in.Nn): {
+			// expected > 1 tx/second
+			TopicWeight: 0.1, // max cap is 5, single invalid message is -100
+
+			// 1 tick per second, maxes at 1 hour
+			TimeInMeshWeight: 0.0002778, // ~1/3600
+			TimeInMeshQuantum: time.Second,
+			TimeInMeshCap: 1,
+
+			// deliveries decay after 10min, cap at 100 tx
+			FirstMessageDeliveriesWeight: 0.5, // max value is 50
+			FirstMessageDeliveriesDecay: pubsub.ScoreParameterDecay(10 * time.Minute),
+			FirstMessageDeliveriesCap: 100, // 100 messages in 10 minutes
+
+			// Mesh Delivery Failure is currently turned off for messages
+			// This is on purpose as the network is still too small, which results in
+			// asymmetries and potential unmeshing from negative scores.
+			// // tracks deliveries in the last minute
+			// // penalty activates at 1 min and expects 2.5 txs
+			// MeshMessageDeliveriesWeight: -16, // max penalty is -100
+			// MeshMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Minute),
+			// MeshMessageDeliveriesCap: 100, // 100 txs in a minute
+			// MeshMessageDeliveriesThreshold: 2.5, // 60/12/2 txs/minute
+			// MeshMessageDeliveriesWindow: 10 * time.Millisecond,
+			// MeshMessageDeliveriesActivation: time.Minute,
+
+			// // decays after 5min
+			// MeshFailurePenaltyWeight: -16,
+			// MeshFailurePenaltyDecay: pubsub.ScoreParameterDecay(5 * time.Minute),
+
+			// invalid messages decay after 1 hour
+			InvalidMessageDeliveriesWeight: -1000,
+			InvalidMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
+		},
+	}
+
+	pgTopicWeights := map[string]float64{
+		build.BlocksTopic(in.Nn): 10,
+		build.MessagesTopic(in.Nn): 1,
+	}
+
+	for _, d := range in.Dr {
+		topic, err := getDrandTopic(d.Config.ChainInfoJSON)
+		if err != nil {
+			return nil, err
+		}
+		topicParams[topic] = drandTopicParams
+		pgTopicWeights[topic] = 5
 	}
 
 	options := []pubsub.Option{
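As a quick sanity check of the "max cap" figures quoted in the comments above: they work out to the product of the topic weight, the first-message-delivery weight, and the delivery cap for each topic. A tiny standalone check:

```go
package main

import "fmt"

func main() {
	// max topic score from first deliveries =
	// TopicWeight * FirstMessageDeliveriesWeight * FirstMessageDeliveriesCap
	maxScore := func(topicWeight, deliveryWeight, deliveryCap float64) float64 {
		return topicWeight * deliveryWeight * deliveryCap
	}
	fmt.Println(maxScore(0.5, 5, 25))    // drand beacons: 62.5
	fmt.Println(maxScore(0.1, 5, 100))   // blocks: 50
	fmt.Println(maxScore(0.1, 0.5, 100)) // messages: 5
}
```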
@@ -124,111 +241,7 @@ func GossipSub(in GossipIn) (service *pubsub.PubSub, err error) {
 			RetainScore: 6 * time.Hour,
 
 			// topic parameters
-			Topics: map[string]*pubsub.TopicScoreParams{
-				drandTopic: {
-					// expected 2 beaconsn/min
-					TopicWeight: 0.5, // 5x block topic; max cap is 62.5
-
-					// 1 tick per second, maxes at 1 after 1 hour
-					TimeInMeshWeight: 0.00027, // ~1/3600
-					TimeInMeshQuantum: time.Second,
-					TimeInMeshCap: 1,
-
-					// deliveries decay after 1 hour, cap at 25 beacons
-					FirstMessageDeliveriesWeight: 5, // max value is 125
-					FirstMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
-					FirstMessageDeliveriesCap: 25, // the maximum expected in an hour is ~26, including the decay
-
-					// Mesh Delivery Failure is currently turned off for beacons
-					// This is on purpose as
-					// - the traffic is very low for meaningful distribution of incoming edges.
-					// - the reaction time needs to be very slow -- in the order of 10 min at least
-					// so we might as well let opportunistic grafting repair the mesh on its own
-					// pace.
-					// - the network is too small, so large asymmetries can be expected between mesh
-					// edges.
-					// We should revisit this once the network grows.
-
-					// invalid messages decay after 1 hour
-					InvalidMessageDeliveriesWeight: -1000,
-					InvalidMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
-				},
-				build.BlocksTopic(in.Nn): {
-					// expected 10 blocks/min
-					TopicWeight: 0.1, // max cap is 50, max mesh penalty is -10, single invalid message is -100
-
-					// 1 tick per second, maxes at 1 after 1 hour
-					TimeInMeshWeight: 0.00027, // ~1/3600
-					TimeInMeshQuantum: time.Second,
-					TimeInMeshCap: 1,
-
-					// deliveries decay after 1 hour, cap at 100 blocks
-					FirstMessageDeliveriesWeight: 5, // max value is 500
-					FirstMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
-					FirstMessageDeliveriesCap: 100, // 100 blocks in an hour
-
-					// Mesh Delivery Failure is currently turned off for blocks
-					// This is on purpose as
-					// - the traffic is very low for meaningful distribution of incoming edges.
-					// - the reaction time needs to be very slow -- in the order of 10 min at least
-					// so we might as well let opportunistic grafting repair the mesh on its own
-					// pace.
-					// - the network is too small, so large asymmetries can be expected between mesh
-					// edges.
-					// We should revisit this once the network grows.
-					//
-					// // tracks deliveries in the last minute
-					// // penalty activates at 1 minute and expects ~0.4 blocks
-					// MeshMessageDeliveriesWeight: -576, // max penalty is -100
-					// MeshMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Minute),
-					// MeshMessageDeliveriesCap: 10, // 10 blocks in a minute
-					// MeshMessageDeliveriesThreshold: 0.41666, // 10/12/2 blocks/min
-					// MeshMessageDeliveriesWindow: 10 * time.Millisecond,
-					// MeshMessageDeliveriesActivation: time.Minute,
-					//
-					// // decays after 15 min
-					// MeshFailurePenaltyWeight: -576,
-					// MeshFailurePenaltyDecay: pubsub.ScoreParameterDecay(15 * time.Minute),
-
-					// invalid messages decay after 1 hour
-					InvalidMessageDeliveriesWeight: -1000,
-					InvalidMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
-				},
-				build.MessagesTopic(in.Nn): {
-					// expected > 1 tx/second
-					TopicWeight: 0.1, // max cap is 5, single invalid message is -100
-
-					// 1 tick per second, maxes at 1 hour
-					TimeInMeshWeight: 0.0002778, // ~1/3600
-					TimeInMeshQuantum: time.Second,
-					TimeInMeshCap: 1,
-
-					// deliveries decay after 10min, cap at 100 tx
-					FirstMessageDeliveriesWeight: 0.5, // max value is 50
-					FirstMessageDeliveriesDecay: pubsub.ScoreParameterDecay(10 * time.Minute),
-					FirstMessageDeliveriesCap: 100, // 100 messages in 10 minutes
-
-					// Mesh Delivery Failure is currently turned off for messages
-					// This is on purpose as the network is still too small, which results in
-					// asymmetries and potential unmeshing from negative scores.
-					// // tracks deliveries in the last minute
-					// // penalty activates at 1 min and expects 2.5 txs
-					// MeshMessageDeliveriesWeight: -16, // max penalty is -100
-					// MeshMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Minute),
-					// MeshMessageDeliveriesCap: 100, // 100 txs in a minute
-					// MeshMessageDeliveriesThreshold: 2.5, // 60/12/2 txs/minute
-					// MeshMessageDeliveriesWindow: 10 * time.Millisecond,
-					// MeshMessageDeliveriesActivation: time.Minute,
-
-					// // decays after 5min
-					// MeshFailurePenaltyWeight: -16,
-					// MeshFailurePenaltyDecay: pubsub.ScoreParameterDecay(5 * time.Minute),
-
-					// invalid messages decay after 1 hour
-					InvalidMessageDeliveriesWeight: -1000,
-					InvalidMessageDeliveriesDecay: pubsub.ScoreParameterDecay(time.Hour),
-				},
-			},
+			Topics: topicParams,
 		},
 		&pubsub.PeerScoreThresholds{
 			GossipThreshold: -500,
|
|||||||
}
|
}
|
||||||
|
|
||||||
// validation queue RED
|
// validation queue RED
|
||||||
pgTopicWeights := map[string]float64{
|
|
||||||
drandTopic: 5,
|
|
||||||
build.BlocksTopic(in.Nn): 10,
|
|
||||||
build.MessagesTopic(in.Nn): 1,
|
|
||||||
}
|
|
||||||
var pgParams *pubsub.PeerGaterParams
|
var pgParams *pubsub.PeerGaterParams
|
||||||
|
|
||||||
if isBootstrapNode {
|
if isBootstrapNode {
|
||||||
|
@@ -126,19 +126,27 @@ type RandomBeaconParams struct {
 
 	PubSub *pubsub.PubSub `optional:"true"`
 	Cs *store.ChainStore
-	DrandConfig dtypes.DrandConfig
+	DrandConfig dtypes.DrandSchedule
 }
 
-func BuiltinDrandConfig() dtypes.DrandConfig {
-	return build.DrandConfig()
+func BuiltinDrandConfig() dtypes.DrandSchedule {
+	return build.DrandConfigSchedule()
 }
 
-func RandomBeacon(p RandomBeaconParams, _ dtypes.AfterGenesisSet) (beacon.RandomBeacon, error) {
+func RandomSchedule(p RandomBeaconParams, _ dtypes.AfterGenesisSet) (beacon.Schedule, error) {
 	gen, err := p.Cs.GetGenesis()
 	if err != nil {
 		return nil, err
 	}
 
-	//return beacon.NewMockBeacon(build.BlockDelaySecs * time.Second)
-	return drand.NewDrandBeacon(gen.Timestamp, build.BlockDelaySecs, p.PubSub, p.DrandConfig)
+	shd := beacon.Schedule{}
+	for _, dc := range p.DrandConfig {
+		bc, err := drand.NewDrandBeacon(gen.Timestamp, build.BlockDelaySecs, p.PubSub, dc.Config)
+		if err != nil {
+			return nil, xerrors.Errorf("creating drand beacon: %w", err)
+		}
+		shd = append(shd, beacon.BeaconPoint{Start: dc.Start, Beacon: bc})
+	}
+
+	return shd, nil
 }
@@ -7,6 +7,9 @@ import (
 	"github.com/filecoin-project/lotus/chain/beacon"
 )
 
-func RandomBeacon() (beacon.RandomBeacon, error) {
-	return beacon.NewMockBeacon(time.Duration(build.BlockDelaySecs) * time.Second), nil
+func RandomBeacon() (beacon.Schedule, error) {
+	return beacon.Schedule{
+		{Start: 0,
+			Beacon: beacon.NewMockBeacon(time.Duration(build.BlockDelaySecs) * time.Second),
+		}}, nil
 }
@@ -101,7 +101,7 @@ func (fsr *FsRepo) Init(t RepoType) error {
 	}
 
 	log.Infof("Initializing repo at '%s'", fsr.path)
-	err = os.Mkdir(fsr.path, 0755) //nolint: gosec
+	err = os.MkdirAll(fsr.path, 0755) //nolint: gosec
 	if err != nil && !os.IsExist(err) {
 		return err
 	}
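The switch from `os.Mkdir` to `os.MkdirAll` matters when the repo path still has missing parent directories: `Mkdir` creates only the final element and fails otherwise, while `MkdirAll` creates the whole chain. A small standalone illustration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base, _ := os.MkdirTemp("", "repo-example")
	defer os.RemoveAll(base)

	nested := filepath.Join(base, "a", "b", "repo")

	// os.Mkdir only creates the final element and fails if parents are missing.
	if err := os.Mkdir(nested, 0755); err != nil {
		fmt.Println("Mkdir:", err)
	}

	// os.MkdirAll creates any missing parents as well.
	if err := os.MkdirAll(nested, 0755); err != nil {
		fmt.Println("MkdirAll:", err)
	} else {
		fmt.Println("MkdirAll: created", nested)
	}
}
```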
@@ -7,4 +7,4 @@ export TRUST_PARAMS=1
 tag=${TAG:-debug}
 
 go run -tags=$tag ./cmd/lotus wallet import ~/.genesis-sectors/pre-seal-t01000.key
-go run -tags=$tag ./cmd/lotus-miner init --actor=t01000 --genesis-miner --pre-sealed-sectors=~/.genesis-sectors --pre-sealed-metadata=~/.genesis-sectors/pre-seal-t01000.json
+go run -tags=$tag ./cmd/lotus-storage-miner init --actor=t01000 --genesis-miner --pre-sealed-sectors=~/.genesis-sectors --pre-sealed-metadata=~/.genesis-sectors/pre-seal-t01000.json
@@ -5,6 +5,8 @@ import (
 	"errors"
 	"time"
 
+	"github.com/filecoin-project/go-state-types/dline"
+
 	"github.com/filecoin-project/go-bitfield"
 	"github.com/filecoin-project/specs-actors/actors/runtime/proof"
 
@@ -60,7 +62,7 @@ type storageMinerApi interface {
 	StateSectorGetInfo(context.Context, address.Address, abi.SectorNumber, types.TipSetKey) (*miner.SectorOnChainInfo, error)
 	StateSectorPartition(ctx context.Context, maddr address.Address, sectorNumber abi.SectorNumber, tok types.TipSetKey) (*api.SectorLocation, error)
 	StateMinerInfo(context.Context, address.Address, types.TipSetKey) (api.MinerInfo, error)
-	StateMinerProvingDeadline(context.Context, address.Address, types.TipSetKey) (*miner.DeadlineInfo, error)
+	StateMinerProvingDeadline(context.Context, address.Address, types.TipSetKey) (*dline.Info, error)
 	StateMinerPreCommitDepositForPower(context.Context, address.Address, miner.SectorPreCommitInfo, types.TipSetKey) (types.BigInt, error)
 	StateMinerInitialPledgeCollateral(context.Context, address.Address, miner.SectorPreCommitInfo, types.TipSetKey) (types.BigInt, error)
 	StateSearchMsg(context.Context, cid.Cid) (*api.MsgLookup, error)
@@ -376,94 +376,114 @@ func (s *WindowPoStScheduler) runPost(ctx context.Context, di dline.Info, ts *ty
 		Proofs: nil,
 	}
 
-	var sinfos []v0proof.SectorInfo
-	sidToPart := map[abi.SectorNumber]uint64{}
 	skipCount := uint64(0)
+	postSkipped := bitfield.New()
+	var postOut []proof.PoStProof
 
-	for partIdx, partition := range partitions {
-		// TODO: Can do this in parallel
-		toProve, err := partition.ActiveSectors()
-		if err != nil {
-			return nil, xerrors.Errorf("getting active sectors: %w", err)
-		}
-
-		toProve, err = bitfield.MergeBitFields(toProve, partition.Recoveries)
-		if err != nil {
-			return nil, xerrors.Errorf("adding recoveries to set of sectors to prove: %w", err)
-		}
-
-		good, err := s.checkSectors(ctx, toProve)
-		if err != nil {
-			return nil, xerrors.Errorf("checking sectors to skip: %w", err)
-		}
-
-		skipped, err := bitfield.SubtractBitField(toProve, good)
-		if err != nil {
-			return nil, xerrors.Errorf("toProve - good: %w", err)
-		}
-
-		sc, err := skipped.Count()
-		if err != nil {
-			return nil, xerrors.Errorf("getting skipped sector count: %w", err)
-		}
-
-		skipCount += sc
-
-		ssi, err := s.sectorsForProof(ctx, good, partition.Sectors, ts)
-		if err != nil {
-			return nil, xerrors.Errorf("getting sorted sector info: %w", err)
-		}
-
-		if len(ssi) == 0 {
-			continue
-		}
-
-		sinfos = append(sinfos, ssi...)
-		for _, si := range ssi {
-			sidToPart[si.SectorNumber] = uint64(partIdx)
-		}
-
-		params.Partitions = append(params.Partitions, v0miner.PoStPartition{
-			Index: uint64(partIdx),
-			Skipped: skipped,
-		})
-	}
-
-	if len(sinfos) == 0 {
-		// nothing to prove..
-		return nil, errNoPartitions
-	}
-
-	log.Infow("running windowPost",
-		"chain-random", rand,
-		"deadline", di,
-		"height", ts.Height(),
-		"skipped", skipCount)
-
-	tsStart := build.Clock.Now()
-
-	mid, err := address.IDFromAddress(s.actor)
-	if err != nil {
-		return nil, err
-	}
-
-	postOut, postSkipped, err := s.prover.GenerateWindowPoSt(ctx, abi.ActorID(mid), sinfos, abi.PoStRandomness(rand))
-	if err != nil {
-		return nil, xerrors.Errorf("running post failed: %w", err)
+	for retries := 0; retries < 5; retries++ {
+		var sinfos []proof.SectorInfo
+		sidToPart := map[abi.SectorNumber]int{}
+
+		for partIdx, partition := range partitions {
+			// TODO: Can do this in parallel
+			toProve, err := partition.ActiveSectors()
+			if err != nil {
+				return nil, xerrors.Errorf("getting active sectors: %w", err)
+			}
+
+			toProve, err = bitfield.MergeBitFields(toProve, partition.Recoveries)
+			if err != nil {
+				return nil, xerrors.Errorf("adding recoveries to set of sectors to prove: %w", err)
+			}
+
+			toProve, err = bitfield.SubtractBitField(toProve, postSkipped)
+			if err != nil {
+				return nil, xerrors.Errorf("toProve - postSkipped: %w", err)
+			}
+
+			good, err := s.checkSectors(ctx, toProve)
+			if err != nil {
+				return nil, xerrors.Errorf("checking sectors to skip: %w", err)
+			}
+
+			skipped, err := bitfield.SubtractBitField(toProve, good)
+			if err != nil {
+				return nil, xerrors.Errorf("toProve - good: %w", err)
+			}
+
+			sc, err := skipped.Count()
+			if err != nil {
+				return nil, xerrors.Errorf("getting skipped sector count: %w", err)
+			}
+
+			skipCount += sc
+
+			ssi, err := s.sectorsForProof(ctx, good, partition.Sectors, ts)
+			if err != nil {
+				return nil, xerrors.Errorf("getting sorted sector info: %w", err)
+			}
+
+			if len(ssi) == 0 {
+				continue
+			}
+
+			sinfos = append(sinfos, ssi...)
+			for _, si := range ssi {
+				sidToPart[si.SectorNumber] = partIdx
+			}
+
+			params.Partitions = append(params.Partitions, v0miner.PoStPartition{
+				Index: uint64(partIdx),
+				Skipped: skipped,
+			})
+		}
+
+		if len(sinfos) == 0 {
+			// nothing to prove..
+			return nil, errNoPartitions
+		}
+
+		log.Infow("running windowPost",
+			"chain-random", rand,
+			"deadline", di,
+			"height", ts.Height(),
+			"skipped", skipCount)
+
+		tsStart := build.Clock.Now()
+
+		mid, err := address.IDFromAddress(s.actor)
+		if err != nil {
+			return nil, err
+		}
+
+		var ps []abi.SectorID
+		postOut, ps, err = s.prover.GenerateWindowPoSt(ctx, abi.ActorID(mid), sinfos, abi.PoStRandomness(rand))
+		elapsed := time.Since(tsStart)
+
+		log.Infow("computing window PoSt", "elapsed", elapsed)
+
+		if err == nil {
+			break
+		}
+
+		if len(ps) == 0 {
+			return nil, xerrors.Errorf("running post failed: %w", err)
+		}
+
+		log.Warnw("generate window PoSt skipped sectors", "sectors", ps, "error", err, "try", retries)
+
+		skipCount += uint64(len(ps))
+		for _, sector := range ps {
+			postSkipped.Set(uint64(sector.Number))
+		}
 	}
 
 	if len(postOut) == 0 {
-		return nil, xerrors.Errorf("received proofs back from generate window post")
+		return nil, xerrors.Errorf("received no proofs back from generate window post")
 	}
 
 	params.Proofs = postOut
 
-	for _, sector := range postSkipped {
-		params.Partitions[sidToPart[sector.Number]].Skipped.Set(uint64(sector.Number))
-	}
-
-	elapsed := time.Since(tsStart)
-
 	commEpoch := di.Open
 	commRand, err := s.api.ChainGetRandomnessFromTickets(ctx, ts.Key(), crypto.DomainSeparationTag_PoStChainCommit, commEpoch, nil)
 	if err != nil {
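The reshaped `runPost` wraps proof generation in a bounded retry loop: when `GenerateWindowPoSt` fails and reports faulty sectors, those sectors go into a skip set and the proof is retried without them, up to five times. A stripped-down sketch of that pattern with hypothetical names (`proveFn`, `proveWithRetries`), not the scheduler code itself:

```go
package main

import (
	"errors"
	"fmt"
)

// proveFn stands in for the prover call: it either succeeds or reports
// which sectors it could not prove. Entirely hypothetical, for illustration.
type proveFn func(sectors []int) (proof string, faulty []int, err error)

// proveWithRetries retries the proof, excluding sectors reported as faulty
// on earlier attempts, up to maxRetries times.
func proveWithRetries(all []int, prove proveFn, maxRetries int) (string, error) {
	skip := map[int]bool{}
	var lastErr error
	for retries := 0; retries < maxRetries; retries++ {
		var toProve []int
		for _, s := range all {
			if !skip[s] {
				toProve = append(toProve, s)
			}
		}
		proof, faulty, err := prove(toProve)
		if err == nil {
			return proof, nil
		}
		if len(faulty) == 0 {
			return "", fmt.Errorf("proving failed: %w", err)
		}
		for _, s := range faulty {
			skip[s] = true // exclude reported sectors on the next attempt
		}
		lastErr = err
	}
	return "", fmt.Errorf("out of retries: %w", lastErr)
}

func main() {
	// Sector 3 is "broken": the prover fails while it is included.
	prove := func(sectors []int) (string, []int, error) {
		for _, s := range sectors {
			if s == 3 {
				return "", []int{3}, errors.New("sector 3 unreadable")
			}
		}
		return fmt.Sprintf("proof over %v", sectors), nil, nil
	}
	out, err := proveWithRetries([]int{1, 2, 3, 4}, prove, 5)
	fmt.Println(out, err)
}
```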
@@ -472,7 +492,7 @@ func (s *WindowPoStScheduler) runPost(ctx context.Context, di dline.Info, ts *ty
 	params.ChainCommitEpoch = commEpoch
 	params.ChainCommitRand = commRand
 
-	log.Infow("submitting window PoSt", "elapsed", elapsed)
+	log.Infow("submitting window PoSt")
 
 	return params, nil
 }
@@ -531,7 +551,7 @@ func (s *WindowPoStScheduler) submitPost(ctx context.Context, proof *v0miner.Sub
 		From: s.worker,
 		Method: builtin.MethodsMiner.SubmitWindowedPoSt,
 		Params: enc,
-		Value: types.NewInt(1000), // currently hard-coded late fee in actor, returned if not late
+		Value: types.NewInt(0),
 	}
 	spec := &api.MessageSendSpec{MaxFee: abi.TokenAmount(s.feeCfg.MaxWindowPoStGasFee)}
 	s.setSender(ctx, msg, spec)
@@ -4,6 +4,8 @@ import (
 	"context"
 	"time"
 
+	"github.com/filecoin-project/go-state-types/dline"
+
 	"golang.org/x/xerrors"
 
 	"github.com/filecoin-project/go-address"
@@ -5,6 +5,7 @@ import (
 	"context"
 	"encoding/json"
 	"fmt"
+	"math"
 	"math/big"
 	"strings"
 	"time"
@@ -12,6 +13,7 @@ import (
 	"github.com/filecoin-project/go-address"
 	"github.com/filecoin-project/lotus/api"
 	"github.com/filecoin-project/lotus/build"
+	"github.com/filecoin-project/lotus/chain/store"
 	"github.com/filecoin-project/lotus/chain/types"
 	"github.com/filecoin-project/specs-actors/actors/builtin"
 	"github.com/filecoin-project/specs-actors/actors/builtin/power"
@@ -131,12 +133,6 @@ func RecordTipsetPoints(ctx context.Context, api api.FullNode, pl *PointList, ti
 	p = NewPoint("chain.blocktime", tsTime.Unix())
 	pl.AddPoint(p)
 
-	baseFeeBig := tipset.Blocks()[0].ParentBaseFee.Copy()
-	baseFeeRat := new(big.Rat).SetFrac(baseFeeBig.Int, new(big.Int).SetUint64(build.FilecoinPrecision))
-	baseFeeFloat, _ := baseFeeRat.Float64()
-	p = NewPoint("chain.basefee", baseFeeFloat)
-	pl.AddPoint(p)
-
 	totalGasLimit := int64(0)
 	totalUniqGasLimit := int64(0)
 	seen := make(map[cid.Cid]struct{})
@@ -178,6 +174,30 @@ func RecordTipsetPoints(ctx context.Context, api api.FullNode, pl *PointList, ti
 	p = NewPoint("chain.gas_limit_uniq_total", totalUniqGasLimit)
 	pl.AddPoint(p)
 
+	{
+		baseFeeIn := tipset.Blocks()[0].ParentBaseFee
+		newBaseFee := store.ComputeNextBaseFee(baseFeeIn, totalUniqGasLimit, len(tipset.Blocks()), tipset.Height())
+
+		baseFeeRat := new(big.Rat).SetFrac(newBaseFee.Int, new(big.Int).SetUint64(build.FilecoinPrecision))
+		baseFeeFloat, _ := baseFeeRat.Float64()
+		p = NewPoint("chain.basefee", baseFeeFloat)
+		pl.AddPoint(p)
+
+		baseFeeChange := new(big.Rat).SetFrac(newBaseFee.Int, baseFeeIn.Int)
+		baseFeeChangeF, _ := baseFeeChange.Float64()
+		p = NewPoint("chain.basefee_change_log", math.Log(baseFeeChangeF)/math.Log(1.125))
+		pl.AddPoint(p)
+	}
+	{
+		blks := len(cids)
+		p = NewPoint("chain.gas_fill_ratio", float64(totalGasLimit)/float64(blks*build.BlockGasTarget))
+		pl.AddPoint(p)
+		p = NewPoint("chain.gas_capacity_ratio", float64(totalUniqGasLimit)/float64(blks*build.BlockGasTarget))
+		pl.AddPoint(p)
+		p = NewPoint("chain.gas_waste_ratio", float64(totalGasLimit-totalUniqGasLimit)/float64(blks*build.BlockGasTarget))
+		pl.AddPoint(p)
+	}
+
 	return nil
 }
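The new `chain.basefee_change_log` point divides by `math.Log(1.125)`, which reads as normalizing the base-fee ratio to units of the 12.5% maximum per-epoch step, so a value of +1.0 corresponds to the fee moving up by one full step. A tiny illustration with made-up numbers:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// log base 1.125 of (next base fee / parent base fee):
	// +1 means the fee rose by a full 12.5% step, -1 means it fell by one.
	parentBaseFee := 100.0
	nextBaseFee := 112.5 // illustrative values, not chain data

	change := math.Log(nextBaseFee/parentBaseFee) / math.Log(1.125)
	fmt.Printf("basefee change in 12.5%% steps: %.2f\n", change) // 1.00
}
```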