Merge branch 'master' into feat/cid-to-piece-idx

This commit is contained in:
zenground0 2022-02-28 12:57:11 -07:00
commit 77a954c7c3
140 changed files with 7286 additions and 767 deletions


@@ -44,13 +44,13 @@ commands:
     - restore_cache:
         name: Restore parameters cache
         keys:
-          - 'v25-2k-lotus-params'
+          - 'v26-2k-lotus-params'
         paths:
           - /var/tmp/filecoin-proof-parameters/
     - run: ./lotus fetch-params 2048
     - save_cache:
         name: Save parameters cache
-        key: 'v25-2k-lotus-params'
+        key: 'v26-2k-lotus-params'
         paths:
           - /var/tmp/filecoin-proof-parameters/
   install_ipfs:

@@ -855,6 +855,11 @@ workflows:
       suite: itest-get_messages_in_ts
       target: "./itests/get_messages_in_ts_test.go"
+  - test:
+      name: test-itest-mempool
+      suite: itest-mempool
+      target: "./itests/mempool_test.go"
   - test:
       name: test-itest-multisig
       suite: itest-multisig


@@ -44,13 +44,13 @@ commands:
     - restore_cache:
         name: Restore parameters cache
         keys:
-          - 'v25-2k-lotus-params'
+          - 'v26-2k-lotus-params'
         paths:
           - /var/tmp/filecoin-proof-parameters/
     - run: ./lotus fetch-params 2048
     - save_cache:
         name: Save parameters cache
-        key: 'v25-2k-lotus-params'
+        key: 'v26-2k-lotus-params'
         paths:
           - /var/tmp/filecoin-proof-parameters/
   install_ipfs:


@@ -1,5 +1,106 @@
# Lotus changelog
# 1.14.2 / 2022-02-24
This is an **optional** release of lotus that includes a couple more improvements to the Snap Deals experience for storage providers, in preparation for the [upcoming OhSnap upgrade](https://github.com/filecoin-project/community/discussions/74?sort=new#discussioncomment-1922550).
Note that the network is STILL scheduled to upgrade to v15 on March 1st at 2022-03-01T15:00:00Z. All node operators, including storage providers, must upgrade to at least Lotus v1.14.0 before that time. Storage providers must update their daemons, miners, and worker(s).
Wanna know how to Snap your deal? Check [this](https://github.com/filecoin-project/lotus/discussions/8141) out!
## Bug Fixes
- fix lotus-bench for sealing jobs (#8173)
- fix:sealing:really-do-it flag for abort upgrade (#8181)
- fix:proving:post check sector handles snap deals replica faults (#8177)
- fix: sealing: missing file type (#8180)
## Others
- Retract force-pushed v1.14.0 to work around stale gomod caches (#8159): We originally tagged v1.14.0 off the wrong
commit and fixed that with a force push, which is a really bad practice since it messes up the Go module caches. Therefore,
we are retracting it; users should use v1.14.1 and up.
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @zenground0 | 2 | +73/-58 | 12 |
| @eben.xie | 1 | +7/-0 | 1 |
| @jennijuju | 1 | +4/-0 | 1 |
| @jennijuju | 1 | +2/-1 | 1 |
| @ribasushi | 1 | +2/-0 | 1 |
# 1.14.1 / 2022-02-18
This is an **optional** release of lotus, that fixes the incorrect *comment* of network v15 OhSnap upgrade **date**. Note the actual upgrade epoch in [v1.14.0](https://github.com/filecoin-project/lotus/releases/tag/v1.14.0) was correct.
# 1.14.0 / 2022-02-17
This is a MANDATORY release of Lotus that introduces [Filecoin network v15,
codenamed the OhSnap upgrade](https://github.com/filecoin-project/community/discussions/74?sort=new#discussioncomment-1922550).
The network is scheduled to upgrade to v15 on March 1st at 2022-03-01T15:00:00Z. All node operators, including storage providers, must upgrade to this release (or a later release) before that time. Storage providers must update their daemons, miners, and worker(s).
The OhSnap upgrade introduces the following FIPs, delivered in [actors v7](https://github.com/filecoin-project/specs-actors/releases/tag/v7.0.0):
- [FIP-0019 Snap Deals](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0019.md)
- [FIP-0028 Remove Datacap from Verified clients](https://github.com/filecoin-project/FIPs/pull/226)
It is recommended that storage providers download the new params before updating their node, miner, and workers. To do so:
- Download Lotus v1.14.0 or later
- Run `make lotus-shed`
- Run `./lotus-shed fetch-params` with the appropriate `proving-params` flag
- Upgrade the Lotus daemon and miner **when the previous step is complete**
All node operators, including storage providers, should be aware that a pre-migration will begin at 2022-03-01T13:30:00Z (90 minutes before the real upgrade). The pre-migration will take between 20 and 50 minutes, depending on hardware specs. During this time, expect slower block validation times, increased CPU and memory usage, and longer delays for API queries.
## New Features and Changes
- Integrate actor v7-rc1:
- Integrate v7 actors ([#7617](https://github.com/filecoin-project/lotus/pull/7617))
- feat: state: Fast migration for v15 ([#7933](https://github.com/filecoin-project/lotus/pull/7933))
- fix: blockstore: Add missing locks to autobatch::Get() ([#7939](https://github.com/filecoin-project/lotus/pull/7939))
- correctness fixes for the autobatch blockstore ([#7940](https://github.com/filecoin-project/lotus/pull/7940))
- Implement and support [FIP-0019 Snap Deals](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0019.md)
- chore: deps: Integrate proof v11.0.0 ([#7923](https://github.com/filecoin-project/lotus/pull/7923))
- Snap Deals Lotus Integration: FSM Posting and integration test ([#7810](https://github.com/filecoin-project/lotus/pull/7810))
- Feat/sector storage unseal ([#7730](https://github.com/filecoin-project/lotus/pull/7730))
- Feat/snap deals storage ([#7615](https://github.com/filecoin-project/lotus/pull/7615))
- fix: sealing: Add more deal expiration checks during PRU pipeline ([#7871](https://github.com/filecoin-project/lotus/pull/7871))
- chore: deps: Update go-paramfetch ([#7917](https://github.com/filecoin-project/lotus/pull/7917))
- feat: #7880 gas: add gas charge for VerifyReplicaUpdate ([#7897](https://github.com/filecoin-project/lotus/pull/7897))
- enhancement: sectors: disable existing cc upgrade path 2 days before the upgrade epoch ([#7900](https://github.com/filecoin-project/lotus/pull/7900))
## Improvements
- updating to new datastore/blockstore code with contexts ([#7646](https://github.com/filecoin-project/lotus/pull/7646))
- reorder transfer checks so as to ensure sending 2B FIL to yourself fails if you don't have that amount ([#7637](https://github.com/filecoin-project/lotus/pull/7637))
- VM: Circ supply should be constant per epoch ([#7811](https://github.com/filecoin-project/lotus/pull/7811))
## Bug Fixes
- Fix: state: circsupply calc around null blocks ([#7890](https://github.com/filecoin-project/lotus/pull/7890))
- Mempool msg selection should respect block message limits ([#7321](https://github.com/filecoin-project/lotus/pull/7321))
- SplitStore: suppress compaction near upgrades ([#7734](https://github.com/filecoin-project/lotus/pull/7734))
## Others
- chore: create pull_request_template.md ([#7726](https://github.com/filecoin-project/lotus/pull/7726))
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| Aayush Rajasekaran | 41 | +5538/-1205 | 189 |
| zenground0 | 11 | +3316/-524 | 124 |
| Jennifer Wang | 29 | +714/-599 | 68 |
| ZenGround0 | 3 | +263/-25 | 11 |
| c r | 2 | +198/-30 | 6 |
| vyzo | 4 | +189/-7 | 7 |
| Aayush | 11 | +146/-48 | 49 |
| web3-bot | 10 | +99/-17 | 10 |
| Steven Allen | 1 | +55/-37 | 1 |
| Jiaying Wang | 5 | +30/-8 | 5 |
| Jakub Sztandera | 2 | +8/-3 | 3 |
| Łukasz Magiera | 1 | +3/-3 | 2 |
| Travis Person | 1 | +2/-2 | 2 |
| Rod Vagg | 1 | +2/-2 | 2 |
# v1.13.2 / 2022-01-09
Lotus v1.13.2 is a *highly recommended* feature release with remarkable retrieval improvements, new features like


@@ -345,6 +345,8 @@ gen: actors-gen type-gen method-gen cfgdoc-gen docsgen api-gen circleci
 	@echo ">>> IF YOU'VE MODIFIED THE CLI OR CONFIG, REMEMBER TO ALSO MAKE docsgen-cli"
 .PHONY: gen

+jen: gen
+
 snap: lotus lotus-miner lotus-worker
 	snapcraft
 	# snapcraft upload ./lotus_*.snap


@@ -45,8 +45,9 @@ type Gateway interface {
 	GasEstimateMessageGas(ctx context.Context, msg *types.Message, spec *MessageSendSpec, tsk types.TipSetKey) (*types.Message, error)
 	MpoolPush(ctx context.Context, sm *types.SignedMessage) (cid.Cid, error)
 	MsigGetAvailableBalance(ctx context.Context, addr address.Address, tsk types.TipSetKey) (types.BigInt, error)
-	MsigGetVested(ctx context.Context, addr address.Address, start types.TipSetKey, end types.TipSetKey) (types.BigInt, error)
 	MsigGetPending(context.Context, address.Address, types.TipSetKey) ([]*MsigTransaction, error)
+	MsigGetVested(ctx context.Context, addr address.Address, start types.TipSetKey, end types.TipSetKey) (types.BigInt, error)
+	MsigGetVestingSchedule(ctx context.Context, addr address.Address, tsk types.TipSetKey) (MsigVesting, error)
 	StateAccountKey(ctx context.Context, addr address.Address, tsk types.TipSetKey) (address.Address, error)
 	StateDealProviderCollateralBounds(ctx context.Context, size abi.PaddedPieceSize, verified bool, tsk types.TipSetKey) (DealCollateralBounds, error)
 	StateGetActor(ctx context.Context, actor address.Address, ts types.TipSetKey) (*types.Actor, error)


@@ -113,6 +113,8 @@ type StorageMiner interface {
 	// SectorCommitPending returns a list of pending Commit sectors to be sent in the next aggregate message
 	SectorCommitPending(ctx context.Context) ([]abi.SectorID, error) //perm:admin
 	SectorMatchPendingPiecesToOpenSectors(ctx context.Context) error //perm:admin
+	// SectorAbortUpgrade can be called on sectors that are in the process of being upgraded to abort it
+	SectorAbortUpgrade(context.Context, abi.SectorNumber) error //perm:admin

 	// WorkerConnect tells the node to connect to workers RPC
 	WorkerConnect(context.Context, string) error //perm:admin retry:true

@@ -130,6 +132,7 @@ type StorageMiner interface {
 	ReturnProveReplicaUpdate1(ctx context.Context, callID storiface.CallID, vanillaProofs storage.ReplicaVanillaProofs, err *storiface.CallError) error //perm:admin retry:true
 	ReturnProveReplicaUpdate2(ctx context.Context, callID storiface.CallID, proof storage.ReplicaUpdateProof, err *storiface.CallError) error //perm:admin retry:true
 	ReturnGenerateSectorKeyFromData(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
+	ReturnFinalizeReplicaUpdate(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
 	ReturnReleaseUnsealed(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
 	ReturnMoveStorage(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
 	ReturnUnsealPiece(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true

@@ -263,7 +266,7 @@ type StorageMiner interface {
 	// the path specified when calling CreateBackup is within the base path
 	CreateBackup(ctx context.Context, fpath string) error //perm:admin
-	CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, expensive bool) (map[abi.SectorNumber]string, error) //perm:admin
+	CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, update []bool, expensive bool) (map[abi.SectorNumber]string, error) //perm:admin
 	ComputeProof(ctx context.Context, ssi []builtin.ExtendedSectorInfo, rand abi.PoStRandomness, poStEpoch abi.ChainEpoch, nv abinetwork.Version) ([]builtin.PoStProof, error) //perm:read
 }


@@ -39,6 +39,7 @@ type Worker interface {
 	SealCommit1(ctx context.Context, sector storage.SectorRef, ticket abi.SealRandomness, seed abi.InteractiveSealRandomness, pieces []abi.PieceInfo, cids storage.SectorCids) (storiface.CallID, error) //perm:admin
 	SealCommit2(ctx context.Context, sector storage.SectorRef, c1o storage.Commit1Out) (storiface.CallID, error) //perm:admin
 	FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) //perm:admin
+	FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) //perm:admin
 	ReplicaUpdate(ctx context.Context, sector storage.SectorRef, pieces []abi.PieceInfo) (storiface.CallID, error) //perm:admin
 	ProveReplicaUpdate1(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid) (storiface.CallID, error) //perm:admin
 	ProveReplicaUpdate2(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid, vanillaProofs storage.ReplicaVanillaProofs) (storiface.CallID, error) //perm:admin


@@ -516,6 +516,8 @@ type GatewayStruct struct {
 		MsigGetVested func(p0 context.Context, p1 address.Address, p2 types.TipSetKey, p3 types.TipSetKey) (types.BigInt, error) ``

+		MsigGetVestingSchedule func(p0 context.Context, p1 address.Address, p2 types.TipSetKey) (MsigVesting, error) ``
+
 		StateAccountKey func(p0 context.Context, p1 address.Address, p2 types.TipSetKey) (address.Address, error) ``

 		StateDealProviderCollateralBounds func(p0 context.Context, p1 abi.PaddedPieceSize, p2 bool, p3 types.TipSetKey) (DealCollateralBounds, error) ``

@@ -631,7 +633,7 @@ type StorageMinerStruct struct {
 		ActorSectorSize func(p0 context.Context, p1 address.Address) (abi.SectorSize, error) `perm:"read"`

-		CheckProvable func(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 bool) (map[abi.SectorNumber]string, error) `perm:"admin"`
+		CheckProvable func(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 []bool, p4 bool) (map[abi.SectorNumber]string, error) `perm:"admin"`

 		ComputeProof func(p0 context.Context, p1 []builtin.ExtendedSectorInfo, p2 abi.PoStRandomness, p3 abi.ChainEpoch, p4 abinetwork.Version) ([]builtin.PoStProof, error) `perm:"read"`

@@ -735,6 +737,8 @@ type StorageMinerStruct struct {
 		ReturnFetch func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`

+		ReturnFinalizeReplicaUpdate func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
+
 		ReturnFinalizeSector func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`

 		ReturnGenerateSectorKeyFromData func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`

@@ -767,6 +771,8 @@ type StorageMinerStruct struct {
 		SealingSchedDiag func(p0 context.Context, p1 bool) (interface{}, error) `perm:"admin"`

+		SectorAbortUpgrade func(p0 context.Context, p1 abi.SectorNumber) error `perm:"admin"`
+
 		SectorAddPieceToAny func(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 storage.Data, p3 PieceDealInfo) (SectorOffset, error) `perm:"admin"`

 		SectorCommitFlush func(p0 context.Context) ([]sealiface.CommitBatchRes, error) `perm:"admin"`

@@ -884,6 +890,8 @@ type WorkerStruct struct {
 		Fetch func(p0 context.Context, p1 storage.SectorRef, p2 storiface.SectorFileType, p3 storiface.PathType, p4 storiface.AcquireMode) (storiface.CallID, error) `perm:"admin"`

+		FinalizeReplicaUpdate func(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) `perm:"admin"`
+
 		FinalizeSector func(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) `perm:"admin"`

 		GenerateSectorKeyFromData func(p0 context.Context, p1 storage.SectorRef, p2 cid.Cid) (storiface.CallID, error) `perm:"admin"`

@@ -3291,6 +3299,17 @@ func (s *GatewayStub) MsigGetVested(p0 context.Context, p1 address.Address, p2 t
 	return *new(types.BigInt), ErrNotSupported
 }

+func (s *GatewayStruct) MsigGetVestingSchedule(p0 context.Context, p1 address.Address, p2 types.TipSetKey) (MsigVesting, error) {
+	if s.Internal.MsigGetVestingSchedule == nil {
+		return *new(MsigVesting), ErrNotSupported
+	}
+	return s.Internal.MsigGetVestingSchedule(p0, p1, p2)
+}
+
+func (s *GatewayStub) MsigGetVestingSchedule(p0 context.Context, p1 address.Address, p2 types.TipSetKey) (MsigVesting, error) {
+	return *new(MsigVesting), ErrNotSupported
+}
+
 func (s *GatewayStruct) StateAccountKey(p0 context.Context, p1 address.Address, p2 types.TipSetKey) (address.Address, error) {
 	if s.Internal.StateAccountKey == nil {
 		return *new(address.Address), ErrNotSupported

@@ -3786,14 +3805,14 @@ func (s *StorageMinerStub) ActorSectorSize(p0 context.Context, p1 address.Addres
 	return *new(abi.SectorSize), ErrNotSupported
 }

-func (s *StorageMinerStruct) CheckProvable(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 bool) (map[abi.SectorNumber]string, error) {
+func (s *StorageMinerStruct) CheckProvable(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 []bool, p4 bool) (map[abi.SectorNumber]string, error) {
 	if s.Internal.CheckProvable == nil {
 		return *new(map[abi.SectorNumber]string), ErrNotSupported
 	}
-	return s.Internal.CheckProvable(p0, p1, p2, p3)
+	return s.Internal.CheckProvable(p0, p1, p2, p3, p4)
 }

-func (s *StorageMinerStub) CheckProvable(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 bool) (map[abi.SectorNumber]string, error) {
+func (s *StorageMinerStub) CheckProvable(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 []bool, p4 bool) (map[abi.SectorNumber]string, error) {
 	return *new(map[abi.SectorNumber]string), ErrNotSupported
 }

@@ -4358,6 +4377,17 @@ func (s *StorageMinerStub) ReturnFetch(p0 context.Context, p1 storiface.CallID,
 	return ErrNotSupported
 }

+func (s *StorageMinerStruct) ReturnFinalizeReplicaUpdate(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
+	if s.Internal.ReturnFinalizeReplicaUpdate == nil {
+		return ErrNotSupported
+	}
+	return s.Internal.ReturnFinalizeReplicaUpdate(p0, p1, p2)
+}
+
+func (s *StorageMinerStub) ReturnFinalizeReplicaUpdate(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
+	return ErrNotSupported
+}
+
 func (s *StorageMinerStruct) ReturnFinalizeSector(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
 	if s.Internal.ReturnFinalizeSector == nil {
 		return ErrNotSupported

@@ -4534,6 +4564,17 @@ func (s *StorageMinerStub) SealingSchedDiag(p0 context.Context, p1 bool) (interf
 	return nil, ErrNotSupported
 }

+func (s *StorageMinerStruct) SectorAbortUpgrade(p0 context.Context, p1 abi.SectorNumber) error {
+	if s.Internal.SectorAbortUpgrade == nil {
+		return ErrNotSupported
+	}
+	return s.Internal.SectorAbortUpgrade(p0, p1)
+}
+
+func (s *StorageMinerStub) SectorAbortUpgrade(p0 context.Context, p1 abi.SectorNumber) error {
+	return ErrNotSupported
+}
+
 func (s *StorageMinerStruct) SectorAddPieceToAny(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 storage.Data, p3 PieceDealInfo) (SectorOffset, error) {
 	if s.Internal.SectorAddPieceToAny == nil {
 		return *new(SectorOffset), ErrNotSupported

@@ -5084,6 +5125,17 @@ func (s *WorkerStub) Fetch(p0 context.Context, p1 storage.SectorRef, p2 storifac
 	return *new(storiface.CallID), ErrNotSupported
 }

+func (s *WorkerStruct) FinalizeReplicaUpdate(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) {
+	if s.Internal.FinalizeReplicaUpdate == nil {
+		return *new(storiface.CallID), ErrNotSupported
+	}
+	return s.Internal.FinalizeReplicaUpdate(p0, p1, p2)
+}
+
+func (s *WorkerStub) FinalizeReplicaUpdate(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) {
+	return *new(storiface.CallID), ErrNotSupported
+}
+
 func (s *WorkerStruct) FinalizeSector(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) {
 	if s.Internal.FinalizeSector == nil {
 		return *new(storiface.CallID), ErrNotSupported


@@ -57,7 +57,7 @@ var (
 	FullAPIVersion0 = newVer(1, 5, 0)
 	FullAPIVersion1 = newVer(2, 2, 0)

-	MinerAPIVersion0  = newVer(1, 3, 0)
+	MinerAPIVersion0  = newVer(1, 4, 0)
 	WorkerAPIVersion0 = newVer(1, 5, 0)
 )

blockstore/context.go (new file, 21 lines)

@@ -0,0 +1,21 @@
package blockstore

import (
	"context"
)

type hotViewKey struct{}

var hotView = hotViewKey{}

// WithHotView constructs a new context with an option that provides a hint to the blockstore
// (e.g. the splitstore) that the object (and its ipld references) should be kept hot.
func WithHotView(ctx context.Context) context.Context {
	return context.WithValue(ctx, hotView, struct{}{})
}

// IsHotView returns true if the hot view option is set in the context
func IsHotView(ctx context.Context) bool {
	v := ctx.Value(hotView)
	return v != nil
}
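For context, a minimal usage sketch of the new hint (illustrative, not part of the commit): a component that wants its writes, and their ipld references, kept hot wraps its context before handing it to the blockstore.

```go
package blockstore_test

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/blockstore"
)

// ExampleWithHotView shows the round trip of the hot-view hint through a context.
func ExampleWithHotView() {
	ctx := blockstore.WithHotView(context.Background())

	fmt.Println(blockstore.IsHotView(ctx))
	fmt.Println(blockstore.IsHotView(context.Background()))
	// Output:
	// true
	// false
}
```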


@@ -49,10 +49,11 @@ These are options in the `[Chainstore.Splitstore]` section of the configuration:
   blockstore and discards writes; this is necessary to support syncing from a snapshot.
 - `MarkSetType` -- specifies the type of markset to use during compaction.
   The markset is the data structure used by compaction/gc to track live objects.
-  The default value is `"map"`, which will use an in-memory map; if you are limited
-  in memory (or indeed see compaction run out of memory), you can also specify
-  `"badger"` which will use an disk backed markset, using badger. This will use
-  much less memory, but will also make compaction slower.
+  The default value is `"badger"`, which will use a disk-backed markset using badger.
+  If you have a lot of memory (48G or more) you can also use `"map"`, which will use
+  an in-memory markset, speeding up compaction at the cost of higher memory usage.
+  Note: if you are using a VPS with a network volume, you need to provision at least
+  3000 IOPS with the badger markset.
 - `HotStoreMessageRetention` -- specifies how many finalities, beyond the 4
   finalities maintained by default, to maintain messages and message receipts in the
   hotstore. This is useful for assistive nodes that want to support syncing for other
@@ -105,6 +106,12 @@ Compaction works transactionally with the following algorithm:
 - We delete in small batches taking a lock; each batch is checked again for marks, from the concurrent transactional mark, so as to never delete anything live
 - We then end the transaction and compact/gc the hotstore.

+As of [#8008](https://github.com/filecoin-project/lotus/pull/8008) the compaction algorithm has been
+modified to eliminate sorting and to maintain the cold object set on disk. This drastically reduces
+memory usage; in fact, when using badger as the markset, compaction uses very little memory, and
+it should now be possible to run splitstore with 32GB of RAM or less without danger of running out of
+memory during compaction.
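As an illustration of the write-then-replay pattern this paragraph describes, here is a sketch using the `ColdSetWriter`/`ColdSetReader` added by this commit (see the new `coldset.go` below); `exampleColdSet` is hypothetical and the surrounding compaction machinery is elided:

```go
package splitstore

import (
	cid "github.com/ipfs/go-cid"
)

// exampleColdSet streams cold cids to disk during marking, then replays them
// during the purge phase, instead of holding (and sorting) them in memory.
func exampleColdSet(path string, cold []cid.Cid) error {
	w, err := NewColdSetWriter(path)
	if err != nil {
		return err
	}
	for _, c := range cold {
		if err := w.Write(c); err != nil {
			return err
		}
	}
	if err := w.Close(); err != nil { // Close flushes the buffered writer
		return err
	}

	r, err := NewColdSetReader(path)
	if err != nil {
		return err
	}
	defer r.Close() //nolint:errcheck

	return r.ForEach(func(c cid.Cid) error {
		// move or delete the cold object here
		return nil
	})
}
```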
 ## Garbage Collection

 TBD -- see [#6577](https://github.com/filecoin-project/lotus/issues/6577)
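For illustration, the options discussed above live in the `[Chainstore.Splitstore]` section of the node's `config.toml`. A hypothetical snippet (the values are examples only, and the enclosing `EnableSplitstore` toggle is an assumption, not documented in this file):

```toml
[Chainstore]
  # assumed toggle that enables the splitstore
  EnableSplitstore = true
  [Chainstore.Splitstore]
    # "badger" (the default) keeps the markset on disk; "map" is faster but
    # needs plenty of memory (48G or more)
    MarkSetType = "badger"
    # example: keep 24 extra finalities of messages/receipts in the hotstore
    HotStoreMessageRetention = 24
```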


@@ -0,0 +1,118 @@
package splitstore

import (
	"bufio"
	"io"
	"os"

	"golang.org/x/xerrors"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

type Checkpoint struct {
	file *os.File
	buf  *bufio.Writer
}

func NewCheckpoint(path string) (*Checkpoint, error) {
	file, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY|os.O_SYNC, 0644)
	if err != nil {
		return nil, xerrors.Errorf("error creating checkpoint: %w", err)
	}
	buf := bufio.NewWriter(file)

	return &Checkpoint{
		file: file,
		buf:  buf,
	}, nil
}

func OpenCheckpoint(path string) (*Checkpoint, cid.Cid, error) {
	filein, err := os.Open(path)
	if err != nil {
		return nil, cid.Undef, xerrors.Errorf("error opening checkpoint for reading: %w", err)
	}
	defer filein.Close() //nolint:errcheck

	bufin := bufio.NewReader(filein)
	start, err := readRawCid(bufin, nil)
	if err != nil && err != io.EOF {
		return nil, cid.Undef, xerrors.Errorf("error reading cid from checkpoint: %w", err)
	}

	fileout, err := os.OpenFile(path, os.O_WRONLY|os.O_SYNC, 0644)
	if err != nil {
		return nil, cid.Undef, xerrors.Errorf("error opening checkpoint for writing: %w", err)
	}
	bufout := bufio.NewWriter(fileout)

	return &Checkpoint{
		file: fileout,
		buf:  bufout,
	}, start, nil
}

func (cp *Checkpoint) Set(c cid.Cid) error {
	if _, err := cp.file.Seek(0, io.SeekStart); err != nil {
		return xerrors.Errorf("error seeking beginning of checkpoint: %w", err)
	}

	if err := writeRawCid(cp.buf, c, true); err != nil {
		return xerrors.Errorf("error writing cid to checkpoint: %w", err)
	}

	return nil
}

func (cp *Checkpoint) Close() error {
	if cp.file == nil {
		return nil
	}

	err := cp.file.Close()
	cp.file = nil
	cp.buf = nil

	return err
}

func readRawCid(buf *bufio.Reader, hbuf []byte) (cid.Cid, error) {
	sz, err := buf.ReadByte()
	if err != nil {
		return cid.Undef, err // don't wrap EOF as it is not an error here
	}

	if hbuf == nil {
		hbuf = make([]byte, int(sz))
	} else {
		hbuf = hbuf[:int(sz)]
	}

	if _, err := io.ReadFull(buf, hbuf); err != nil {
		return cid.Undef, xerrors.Errorf("error reading hash: %w", err) // wrap EOF, it's corrupt
	}

	hash, err := mh.Cast(hbuf)
	if err != nil {
		return cid.Undef, xerrors.Errorf("error casting multihash: %w", err)
	}

	return cid.NewCidV1(cid.Raw, hash), nil
}

func writeRawCid(buf *bufio.Writer, c cid.Cid, flush bool) error {
	hash := c.Hash()
	if err := buf.WriteByte(byte(len(hash))); err != nil {
		return err
	}
	if _, err := buf.Write(hash); err != nil {
		return err
	}
	if flush {
		return buf.Flush()
	}

	return nil
}


@@ -0,0 +1,147 @@
package splitstore

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"

	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

func TestCheckpoint(t *testing.T) {
	dir, err := ioutil.TempDir("", "checkpoint.*")
	if err != nil {
		t.Fatal(err)
	}

	t.Cleanup(func() {
		_ = os.RemoveAll(dir)
	})

	path := filepath.Join(dir, "checkpoint")

	makeCid := func(key string) cid.Cid {
		h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
		if err != nil {
			t.Fatal(err)
		}
		return cid.NewCidV1(cid.Raw, h)
	}

	k1 := makeCid("a")
	k2 := makeCid("b")
	k3 := makeCid("c")
	k4 := makeCid("d")

	cp, err := NewCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}

	if err := cp.Set(k1); err != nil {
		t.Fatal(err)
	}
	if err := cp.Set(k2); err != nil {
		t.Fatal(err)
	}

	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}

	cp, start, err := OpenCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}
	if !start.Equals(k2) {
		t.Fatalf("expected start to be %s; got %s", k2, start)
	}

	if err := cp.Set(k3); err != nil {
		t.Fatal(err)
	}
	if err := cp.Set(k4); err != nil {
		t.Fatal(err)
	}

	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}

	cp, start, err = OpenCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}
	if !start.Equals(k4) {
		t.Fatalf("expected start to be %s; got %s", k4, start)
	}
	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}

	// also test correct operation with an empty checkpoint
	cp, err = NewCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}
	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}

	cp, start, err = OpenCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}
	if start.Defined() {
		t.Fatal("expected start to be undefined")
	}

	if err := cp.Set(k1); err != nil {
		t.Fatal(err)
	}
	if err := cp.Set(k2); err != nil {
		t.Fatal(err)
	}

	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}

	cp, start, err = OpenCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}
	if !start.Equals(k2) {
		t.Fatalf("expected start to be %s; got %s", k2, start)
	}

	if err := cp.Set(k3); err != nil {
		t.Fatal(err)
	}
	if err := cp.Set(k4); err != nil {
		t.Fatal(err)
	}

	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}

	cp, start, err = OpenCheckpoint(path)
	if err != nil {
		t.Fatal(err)
	}
	if !start.Equals(k4) {
		t.Fatalf("expected start to be %s; got %s", k4, start)
	}
	if err := cp.Close(); err != nil {
		t.Fatal(err)
	}
}


@@ -0,0 +1,102 @@
package splitstore

import (
	"bufio"
	"io"
	"os"

	"golang.org/x/xerrors"

	cid "github.com/ipfs/go-cid"
)

type ColdSetWriter struct {
	file *os.File
	buf  *bufio.Writer
}

type ColdSetReader struct {
	file *os.File
	buf  *bufio.Reader
}

func NewColdSetWriter(path string) (*ColdSetWriter, error) {
	file, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
	if err != nil {
		return nil, xerrors.Errorf("error creating coldset: %w", err)
	}
	buf := bufio.NewWriter(file)

	return &ColdSetWriter{
		file: file,
		buf:  buf,
	}, nil
}

func NewColdSetReader(path string) (*ColdSetReader, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, xerrors.Errorf("error opening coldset: %w", err)
	}
	buf := bufio.NewReader(file)

	return &ColdSetReader{
		file: file,
		buf:  buf,
	}, nil
}

func (s *ColdSetWriter) Write(c cid.Cid) error {
	return writeRawCid(s.buf, c, false)
}

func (s *ColdSetWriter) Close() error {
	if s.file == nil {
		return nil
	}

	err1 := s.buf.Flush()
	err2 := s.file.Close()
	s.buf = nil
	s.file = nil

	if err1 != nil {
		return err1
	}
	return err2
}

func (s *ColdSetReader) ForEach(f func(cid.Cid) error) error {
	hbuf := make([]byte, 256)
	for {
		next, err := readRawCid(s.buf, hbuf)
		if err != nil {
			if err == io.EOF {
				return nil
			}
			return xerrors.Errorf("error reading coldset: %w", err)
		}

		if err := f(next); err != nil {
			return err
		}
	}
}

func (s *ColdSetReader) Reset() error {
	_, err := s.file.Seek(0, io.SeekStart)
	return err
}

func (s *ColdSetReader) Close() error {
	if s.file == nil {
		return nil
	}

	err := s.file.Close()
	s.file = nil
	s.buf = nil

	return err
}


@@ -0,0 +1,99 @@
package splitstore

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"

	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

func TestColdSet(t *testing.T) {
	dir, err := ioutil.TempDir("", "coldset.*")
	if err != nil {
		t.Fatal(err)
	}

	t.Cleanup(func() {
		_ = os.RemoveAll(dir)
	})

	path := filepath.Join(dir, "coldset")

	makeCid := func(i int) cid.Cid {
		h, err := multihash.Sum([]byte(fmt.Sprintf("cid.%d", i)), multihash.SHA2_256, -1)
		if err != nil {
			t.Fatal(err)
		}
		return cid.NewCidV1(cid.Raw, h)
	}

	const count = 1000
	cids := make([]cid.Cid, 0, count)
	for i := 0; i < count; i++ {
		cids = append(cids, makeCid(i))
	}

	cw, err := NewColdSetWriter(path)
	if err != nil {
		t.Fatal(err)
	}

	for _, c := range cids {
		if err := cw.Write(c); err != nil {
			t.Fatal(err)
		}
	}

	if err := cw.Close(); err != nil {
		t.Fatal(err)
	}

	cr, err := NewColdSetReader(path)
	if err != nil {
		t.Fatal(err)
	}

	index := 0
	err = cr.ForEach(func(c cid.Cid) error {
		if index >= count {
			t.Fatal("too many cids")
		}

		if !c.Equals(cids[index]) {
			t.Fatalf("wrong cid %d; expected %s but got %s", index, cids[index], c)
		}

		index++
		return nil
	})
	if err != nil {
		t.Fatal(err)
	}

	if err := cr.Reset(); err != nil {
		t.Fatal(err)
	}

	index = 0
	err = cr.ForEach(func(c cid.Cid) error {
		if index >= count {
			t.Fatal("too many cids")
		}

		if !c.Equals(cids[index]) {
			t.Fatalf("wrong cid; expected %s but got %s", cids[index], c)
		}

		index++
		return nil
	})
	if err != nil {
		t.Fatal(err)
	}
}


@@ -14,15 +14,24 @@ var errMarkSetClosed = errors.New("markset closed")
 type MarkSet interface {
 	ObjectVisitor
 	Mark(cid.Cid) error
+	MarkMany([]cid.Cid) error
 	Has(cid.Cid) (bool, error)
 	Close() error
+
+	// BeginCriticalSection ensures that the markset is persisted to disk for recovery in case
+	// of abnormal termination during the critical section span.
+	BeginCriticalSection() error
+	// EndCriticalSection ends the critical section span.
+	EndCriticalSection()
 }

 type MarkSetEnv interface {
-	// Create creates a new markset within the environment.
-	// name is a unique name for this markset, mapped to the filesystem in disk-backed environments
+	// New creates a new markset within the environment.
+	// name is a unique name for this markset, mapped to the filesystem for on-disk persistence.
 	// sizeHint is a hint about the expected size of the markset
-	Create(name string, sizeHint int64) (MarkSet, error)
+	New(name string, sizeHint int64) (MarkSet, error)
+	// Recover recovers an existing markset persisted on-disk.
+	Recover(name string) (MarkSet, error)
 	// Close closes the markset
 	Close() error
 }

@@ -30,7 +39,7 @@ type MarkSetEnv interface {
 func OpenMarkSetEnv(path string, mtype string) (MarkSetEnv, error) {
 	switch mtype {
 	case "map":
-		return NewMapMarkSetEnv()
+		return NewMapMarkSetEnv(path)
 	case "badger":
 		return NewBadgerMarkSetEnv(path)
 	default:
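Taken together, the new methods imply a mark / persist / recover protocol. The following is an illustrative sketch (not part of this commit) of how a compaction-like caller would drive it; `exampleCriticalSection` and its arguments are hypothetical:

```go
package splitstore

import (
	cid "github.com/ipfs/go-cid"
)

// exampleCriticalSection marks live objects, enters the critical section so
// that marks are persisted, and relies on env.Recover to reload the set after
// an abnormal termination instead of rebuilding it from scratch.
func exampleCriticalSection(path string, live []cid.Cid) error {
	env, err := OpenMarkSetEnv(path, "badger")
	if err != nil {
		return err
	}
	defer env.Close() //nolint:errcheck

	ms, err := env.New("live", int64(len(live)))
	if err != nil {
		return err
	}
	defer ms.Close() //nolint:errcheck

	// Outside the critical section, marks may be buffered in memory only.
	if err := ms.MarkMany(live); err != nil {
		return err
	}

	// From here on marks are synced to disk; if the process dies before
	// EndCriticalSection, env.Recover("live") reloads the persisted set.
	if err := ms.BeginCriticalSection(); err != nil {
		return err
	}
	defer ms.EndCriticalSection()

	// ... purge cold objects, consulting ms.Has for liveness ...
	return nil
}
```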


@@ -3,6 +3,7 @@ package splitstore
 import (
 	"os"
 	"path/filepath"
+	"runtime"
 	"sync"

 	"golang.org/x/xerrors"

@@ -28,6 +29,7 @@ type BadgerMarkSet struct {
 	writers int
 	seqno   int
 	version int
+	persist bool

 	db   *badger.DB
 	path string

@@ -47,11 +49,10 @@ func NewBadgerMarkSetEnv(path string) (MarkSetEnv, error) {
 	return &BadgerMarkSetEnv{path: msPath}, nil
 }

-func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
-	name += ".tmp"
+func (e *BadgerMarkSetEnv) New(name string, sizeHint int64) (MarkSet, error) {
 	path := filepath.Join(e.path, name)

-	db, err := openTransientBadgerDB(path)
+	db, err := openBadgerDB(path, false)
 	if err != nil {
 		return nil, xerrors.Errorf("error creating badger db: %w", err)
 	}

@@ -67,8 +68,72 @@ func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error)
 	return ms, nil
 }

+func (e *BadgerMarkSetEnv) Recover(name string) (MarkSet, error) {
+	path := filepath.Join(e.path, name)
+
+	if _, err := os.Stat(path); err != nil {
+		return nil, xerrors.Errorf("error stating badger db path: %w", err)
+	}
+
+	db, err := openBadgerDB(path, true)
+	if err != nil {
+		return nil, xerrors.Errorf("error creating badger db: %w", err)
+	}
+
+	ms := &BadgerMarkSet{
+		pend:    make(map[string]struct{}),
+		writing: make(map[int]map[string]struct{}),
+		db:      db,
+		path:    path,
+		persist: true,
+	}
+	ms.cond.L = &ms.mx
+
+	return ms, nil
+}
+
 func (e *BadgerMarkSetEnv) Close() error {
-	return os.RemoveAll(e.path)
+	return nil
+}
+
+func (s *BadgerMarkSet) BeginCriticalSection() error {
+	s.mx.Lock()
+
+	if s.persist {
+		s.mx.Unlock()
+		return nil
+	}
+
+	var write bool
+	var seqno int
+	if len(s.pend) > 0 {
+		write = true
+		seqno = s.nextBatch()
+	}
+
+	s.persist = true
+	s.mx.Unlock()
+
+	if write {
+		// all writes sync once persist is true
+		return s.write(seqno)
+	}
+
+	// wait for any pending writes and sync
+	s.mx.Lock()
+	for s.writers > 0 {
+		s.cond.Wait()
+	}
+	s.mx.Unlock()
+
+	return s.db.Sync()
+}
+
+func (s *BadgerMarkSet) EndCriticalSection() {
+	s.mx.Lock()
+	defer s.mx.Unlock()
+
+	s.persist = false
 }

 func (s *BadgerMarkSet) Mark(c cid.Cid) error {

@@ -88,6 +153,23 @@ func (s *BadgerMarkSet) Mark(c cid.Cid) error {
 	return nil
 }

+func (s *BadgerMarkSet) MarkMany(batch []cid.Cid) error {
+	s.mx.Lock()
+	if s.pend == nil {
+		s.mx.Unlock()
+		return errMarkSetClosed
+	}
+
+	write, seqno := s.putMany(batch)
+	s.mx.Unlock()
+
+	if write {
+		return s.write(seqno)
+	}
+
+	return nil
+}
+
 func (s *BadgerMarkSet) Has(c cid.Cid) (bool, error) {
 	s.mx.RLock()
 	defer s.mx.RUnlock()

@@ -193,16 +275,34 @@ func (s *BadgerMarkSet) tryDB(key []byte) (has bool, err error) {
 // writer holds the exclusive lock
 func (s *BadgerMarkSet) put(key string) (write bool, seqno int) {
 	s.pend[key] = struct{}{}
-	if len(s.pend) < badgerMarkSetBatchSize {
+	if !s.persist && len(s.pend) < badgerMarkSetBatchSize {
 		return false, 0
 	}

-	seqno = s.seqno
+	seqno = s.nextBatch()
+	return true, seqno
+}
+
+func (s *BadgerMarkSet) putMany(batch []cid.Cid) (write bool, seqno int) {
+	for _, c := range batch {
+		key := string(c.Hash())
+		s.pend[key] = struct{}{}
+	}
+
+	if !s.persist && len(s.pend) < badgerMarkSetBatchSize {
+		return false, 0
+	}
+
+	seqno = s.nextBatch()
+	return true, seqno
+}
+
+func (s *BadgerMarkSet) nextBatch() int {
+	seqno := s.seqno
 	s.seqno++
+
 	s.writing[seqno] = s.pend
 	s.pend = make(map[string]struct{})

-	return true, seqno
+	return seqno
 }

 func (s *BadgerMarkSet) write(seqno int) (err error) {

@@ -247,6 +347,14 @@ func (s *BadgerMarkSet) write(seqno int) (err error) {
 		return xerrors.Errorf("error flushing batch to badger markset: %w", err)
 	}

+	s.mx.RLock()
+	persist := s.persist
+	s.mx.RUnlock()
+
+	if persist {
+		return s.db.Sync()
+	}
+
 	return nil
 }

@@ -266,13 +374,12 @@ func (s *BadgerMarkSet) Close() error {
 	db := s.db
 	s.db = nil

-	return closeTransientBadgerDB(db, s.path)
+	return closeBadgerDB(db, s.path, s.persist)
 }

-func (s *BadgerMarkSet) SetConcurrent() {}
-
-func openTransientBadgerDB(path string) (*badger.DB, error) {
-	// clean up first
+func openBadgerDB(path string, recover bool) (*badger.DB, error) {
+	// if it is not a recovery, clean up first
+	if !recover {
 	err := os.RemoveAll(path)
 	if err != nil {
 		return nil, xerrors.Errorf("error clearing markset directory: %w", err)

@@ -282,10 +389,14 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
 	if err != nil {
 		return nil, xerrors.Errorf("error creating markset directory: %w", err)
 	}
+	}

 	opts := badger.DefaultOptions(path)
+	// we manually sync when we are in critical section
 	opts.SyncWrites = false
+	// no need to do that
 	opts.CompactL0OnClose = false
+	// we store hashes, not much to gain by compression
 	opts.Compression = options.None
 	// Note: We use FileIO for loading modes to avoid memory thrashing and interference
 	// between the system blockstore and the markset.

@@ -294,6 +405,15 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
 	// exceeded 1GB in size.
 	opts.TableLoadingMode = options.FileIO
 	opts.ValueLogLoadingMode = options.FileIO
+
+	// We increase the number of L0 tables before compaction to make it unlikely to
+	// be necessary.
+	opts.NumLevelZeroTables = 20      // default is 5
+	opts.NumLevelZeroTablesStall = 30 // default is 10
+
+	// increase the number of compactors from default 2 so that if we ever have to
+	// compact, it is fast
+	if runtime.NumCPU()/2 > opts.NumCompactors {
+		opts.NumCompactors = runtime.NumCPU() / 2
+	}
 	opts.Logger = &badgerLogger{
 		SugaredLogger: log.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar(),
 		skip2:         log.Desugar().WithOptions(zap.AddCallerSkip(2)).Sugar(),

@@ -302,12 +422,16 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
 	return badger.Open(opts)
 }

-func closeTransientBadgerDB(db *badger.DB, path string) error {
+func closeBadgerDB(db *badger.DB, path string, persist bool) error {
 	err := db.Close()
 	if err != nil {
 		return xerrors.Errorf("error closing badger markset: %w", err)
 	}

+	if persist {
+		return nil
+	}
+
 	err = os.RemoveAll(path)
 	if err != nil {
 		return xerrors.Errorf("error deleting badger markset: %w", err)

@@ -1,37 +1,104 @@
 package splitstore

 import (
+	"bufio"
+	"io"
+	"os"
+	"path/filepath"
 	"sync"

+	"golang.org/x/xerrors"
+
 	cid "github.com/ipfs/go-cid"
 )

-type MapMarkSetEnv struct{}
+type MapMarkSetEnv struct {
+	path string
+}

 var _ MarkSetEnv = (*MapMarkSetEnv)(nil)

 type MapMarkSet struct {
 	mx  sync.RWMutex
 	set map[string]struct{}
+
+	persist bool
+	file    *os.File
+	buf     *bufio.Writer
+
+	path string
 }

 var _ MarkSet = (*MapMarkSet)(nil)

-func NewMapMarkSetEnv() (*MapMarkSetEnv, error) {
-	return &MapMarkSetEnv{}, nil
+func NewMapMarkSetEnv(path string) (*MapMarkSetEnv, error) {
+	msPath := filepath.Join(path, "markset.map")
+	err := os.MkdirAll(msPath, 0755) //nolint:gosec
+	if err != nil {
+		return nil, xerrors.Errorf("error creating markset directory: %w", err)
+	}
+
+	return &MapMarkSetEnv{path: msPath}, nil
 }

-func (e *MapMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
+func (e *MapMarkSetEnv) New(name string, sizeHint int64) (MarkSet, error) {
+	path := filepath.Join(e.path, name)
 	return &MapMarkSet{
 		set:  make(map[string]struct{}, sizeHint),
+		path: path,
 	}, nil
 }

+func (e *MapMarkSetEnv) Recover(name string) (MarkSet, error) {
+	path := filepath.Join(e.path, name)
+	s := &MapMarkSet{
+		set:  make(map[string]struct{}),
+		path: path,
+	}
+
+	in, err := os.Open(path)
+	if err != nil {
+		return nil, xerrors.Errorf("error opening markset file for read: %w", err)
+	}
+	defer in.Close() //nolint:errcheck
+
+	// wrap a buffered reader to make this faster
+	buf := bufio.NewReader(in)
+	for {
+		var sz byte
+		if sz, err = buf.ReadByte(); err != nil {
+			break
+		}
+
+		key := make([]byte, int(sz))
+		if _, err = io.ReadFull(buf, key); err != nil {
+			break
+		}
+
+		s.set[string(key)] = struct{}{}
+	}
+
+	if err != io.EOF {
+		return nil, xerrors.Errorf("error reading markset file: %w", err)
+	}
+
+	file, err := os.OpenFile(s.path, os.O_WRONLY|os.O_APPEND, 0)
+	if err != nil {
+		return nil, xerrors.Errorf("error opening markset file for write: %w", err)
+	}
+
+	s.persist = true
+	s.file = file
+	s.buf = bufio.NewWriter(file)
+
+	return s, nil
+}
+
 func (e *MapMarkSetEnv) Close() error {
 	return nil
 }

-func (s *MapMarkSet) Mark(cid cid.Cid) error {
+func (s *MapMarkSet) BeginCriticalSection() error {
 	s.mx.Lock()
 	defer s.mx.Unlock()

@@ -39,7 +106,104 @@ func (s *MapMarkSet) Mark(cid cid.Cid) error {
 		return errMarkSetClosed
 	}

-	s.set[string(cid.Hash())] = struct{}{}
+	if s.persist {
+		return nil
+	}
+
+	file, err := os.OpenFile(s.path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
+	if err != nil {
+		return xerrors.Errorf("error opening markset file: %w", err)
+	}
+
+	// wrap a buffered writer to make this faster
+	s.buf = bufio.NewWriter(file)
+	for key := range s.set {
+		if err := s.writeKey([]byte(key), false); err != nil {
+			_ = file.Close()
+			s.buf = nil
+			return err
+		}
+	}
+	if err := s.buf.Flush(); err != nil {
+		_ = file.Close()
+		s.buf = nil
+		return xerrors.Errorf("error flushing markset file buffer: %w", err)
+	}
+
+	s.file = file
+	s.persist = true
+
+	return nil
+}
+
+func (s *MapMarkSet) EndCriticalSection() {
+	s.mx.Lock()
+	defer s.mx.Unlock()
+
+	if !s.persist {
+		return
+	}
+
+	_ = s.file.Close()
+	_ = os.Remove(s.path)
+	s.file = nil
+	s.buf = nil
+	s.persist = false
+}
+
+func (s *MapMarkSet) Mark(c cid.Cid) error {
+	s.mx.Lock()
+	defer s.mx.Unlock()
+
+	if s.set == nil {
+		return errMarkSetClosed
+	}
+
+	hash := c.Hash()
+	s.set[string(hash)] = struct{}{}
+
+	if s.persist {
+		if err := s.writeKey(hash, true); err != nil {
+			return err
+		}
+
+		if err := s.file.Sync(); err != nil {
+			return xerrors.Errorf("error syncing markset: %w", err)
+		}
+	}
+
+	return nil
+}
+
+func (s *MapMarkSet) MarkMany(batch []cid.Cid) error {
+	s.mx.Lock()
+	defer s.mx.Unlock()
+
+	if s.set == nil {
+		return errMarkSetClosed
+	}
+
+	for _, c := range batch {
+		hash := c.Hash()
+		s.set[string(hash)] = struct{}{}
+
+		if s.persist {
+			if err := s.writeKey(hash, false); err != nil {
+				return err
+			}
+		}
+	}
+
+	if s.persist {
+		if err := s.buf.Flush(); err != nil {
+			return xerrors.Errorf("error flushing markset buffer to disk: %w", err)
+		}
+
+		if err := s.file.Sync(); err != nil {
+			return xerrors.Errorf("error syncing markset: %w", err)
+		}
+	}
+
 	return nil
 }

@@ -63,12 +227,23 @@ func (s *MapMarkSet) Visit(c cid.Cid) (bool, error) {
 		return false, errMarkSetClosed
 	}

-	key := string(c.Hash())
+	hash := c.Hash()
+	key := string(hash)
 	if _, ok := s.set[key]; ok {
 		return false, nil
 	}

 	s.set[key] = struct{}{}
+
+	if s.persist {
+		if err := s.writeKey(hash, true); err != nil {
+			return false, err
+		}
+		if err := s.file.Sync(); err != nil {
+			return false, xerrors.Errorf("error syncing markset: %w", err)
+		}
+	}
+
 	return true, nil
 }

@@ -76,6 +251,39 @@ func (s *MapMarkSet) Close() error {
 	s.mx.Lock()
 	defer s.mx.Unlock()

+	if s.set == nil {
+		return nil
+	}
+
 	s.set = nil
+
+	if s.file != nil {
+		if err := s.file.Close(); err != nil {
+			log.Warnf("error closing markset file: %s", err)
+		}
+
+		if !s.persist {
+			if err := os.Remove(s.path); err != nil {
+				log.Warnf("error removing markset file: %s", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (s *MapMarkSet) writeKey(k []byte, flush bool) error {
+	if err := s.buf.WriteByte(byte(len(k))); err != nil {
+		return xerrors.Errorf("error writing markset key length to disk: %w", err)
+	}
+	if _, err := s.buf.Write(k); err != nil {
+		return xerrors.Errorf("error writing markset key to disk: %w", err)
+	}
+	if flush {
+		if err := s.buf.Flush(); err != nil {
+			return xerrors.Errorf("error flushing markset buffer to disk: %w", err)
+		}
+	}
+
 	return nil
 }


@@ -11,7 +11,10 @@ import (

 func TestMapMarkSet(t *testing.T) {
 	testMarkSet(t, "map")
+	testMarkSetRecovery(t, "map")
+	testMarkSetMarkMany(t, "map")
 	testMarkSetVisitor(t, "map")
+	testMarkSetVisitorRecovery(t, "map")
 }

 func TestBadgerMarkSet(t *testing.T) {

@@ -21,12 +24,13 @@ func TestBadgerMarkSet(t *testing.T) {
 		badgerMarkSetBatchSize = bs
 	})
 	testMarkSet(t, "badger")
+	testMarkSetRecovery(t, "badger")
+	testMarkSetMarkMany(t, "badger")
 	testMarkSetVisitor(t, "badger")
+	testMarkSetVisitorRecovery(t, "badger")
 }

 func testMarkSet(t *testing.T, lsType string) {
-	t.Helper()
-
 	path, err := ioutil.TempDir("", "markset.*")
 	if err != nil {
 		t.Fatal(err)

@@ -42,12 +46,12 @@ func testMarkSet(t *testing.T, lsType string) {
 	}
 	defer env.Close() //nolint:errcheck

-	hotSet, err := env.Create("hot", 0)
+	hotSet, err := env.New("hot", 0)
 	if err != nil {
 		t.Fatal(err)
 	}

-	coldSet, err := env.Create("cold", 0)
+	coldSet, err := env.New("cold", 0)
 	if err != nil {
 		t.Fatal(err)
 	}

@@ -62,6 +66,7 @@ func testMarkSet(t *testing.T, lsType string) {
 	}

 	mustHave := func(s MarkSet, cid cid.Cid) {
+		t.Helper()
 		has, err := s.Has(cid)
 		if err != nil {
 			t.Fatal(err)

@@ -73,6 +78,7 @@ func testMarkSet(t *testing.T, lsType string) {
 	}

 	mustNotHave := func(s MarkSet, cid cid.Cid) {
+		t.Helper()
 		has, err := s.Has(cid)
 		if err != nil {
 			t.Fatal(err)

@@ -114,12 +120,12 @@ func testMarkSet(t *testing.T, lsType string) {
 		t.Fatal(err)
 	}

-	hotSet, err = env.Create("hot", 0)
+	hotSet, err = env.New("hot", 0)
 	if err != nil {
 		t.Fatal(err)
 	}

-	coldSet, err = env.Create("cold", 0)
+	coldSet, err = env.New("cold", 0)
 	if err != nil {
 		t.Fatal(err)
 	}

@@ -150,8 +156,6 @@ func testMarkSet(t *testing.T, lsType string) {
 }

 func testMarkSetVisitor(t *testing.T, lsType string) {
-	t.Helper()
-
 	path, err := ioutil.TempDir("", "markset.*")
 	if err != nil {
 		t.Fatal(err)

@@ -167,7 +171,7 @@ func testMarkSetVisitor(t *testing.T, lsType string) {
 	}
 	defer env.Close() //nolint:errcheck

-	visitor, err := env.Create("test", 0)
+	visitor, err := env.New("test", 0)
 	if err != nil {
 		t.Fatal(err)
 	}

@@ -219,3 +223,322 @@
 	mustNotVisit(visitor, k3)
 	mustNotVisit(visitor, k4)
 }
} }
func testMarkSetVisitorRecovery(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
visitor, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
defer visitor.Close() //nolint:errcheck
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if !visit {
t.Fatal("object should be visited")
}
}
mustNotVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if visit {
t.Fatal("unexpected visit")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
mustVisit(visitor, k1)
mustVisit(visitor, k2)
if err := visitor.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
mustVisit(visitor, k3)
mustVisit(visitor, k4)
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
if err := visitor.Close(); err != nil {
t.Fatal(err)
}
visitor, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
visitor.EndCriticalSection()
if err := visitor.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
func testMarkSetRecovery(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
markSet, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("mark not found")
}
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("unexpected mark")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
if err := markSet.Mark(k1); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k2); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustNotHave(markSet, k3)
mustNotHave(markSet, k4)
if err := markSet.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k3); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k4); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
markSet, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
markSet.EndCriticalSection()
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
func testMarkSetMarkMany(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
markSet, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("mark not found")
}
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("unexpected mark")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
if err := markSet.MarkMany([]cid.Cid{k1, k2}); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustNotHave(markSet, k3)
mustNotHave(markSet, k4)
if err := markSet.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
if err := markSet.MarkMany([]cid.Cid{k3, k4}); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
markSet, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
markSet.EndCriticalSection()
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
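Taken together, these tests pin down the recovery contract for persistent marksets: marks made inside a critical section must survive an interruption, Recover must restore them, and EndCriticalSection must invalidate the on-disk state so a later Recover fails. A minimal sketch of that lifecycle, assuming code inside the splitstore package and the API exercised above:

```go
// Sketch of the markset persistence lifecycle exercised by the tests above.
func markSetLifecycle(path string, k cid.Cid) error {
	env, err := OpenMarkSetEnv(path, "badger")
	if err != nil {
		return err
	}
	defer env.Close() //nolint:errcheck

	s, err := env.New("live", 0)
	if err != nil {
		return err
	}
	if err := s.BeginCriticalSection(); err != nil {
		return err
	}
	if err := s.Mark(k); err != nil { // marks are persisted from here on
		return err
	}
	if err := s.Close(); err != nil { // simulates an interruption
		return err
	}

	s, err = env.Recover("live") // marks from the critical section survive
	if err != nil {
		return err
	}
	s.EndCriticalSection() // done; a second Recover("live") now fails
	return s.Close()
}
```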

View File

@@ -129,8 +129,6 @@ type SplitStore struct {
headChangeMx sync.Mutex
- coldPurgeSize int
chain ChainAccessor
ds dstore.Datastore
cold bstore.Blockstore
@@ -158,6 +156,17 @@ type SplitStore struct {
txnRefsMx sync.Mutex
txnRefs map[cid.Cid]struct{}
txnMissing map[cid.Cid]struct{}
txnMarkSet MarkSet
txnSyncMx sync.Mutex
txnSyncCond sync.Cond
txnSync bool
// background cold object reification
reifyWorkers sync.WaitGroup
reifyMx sync.Mutex
reifyCond sync.Cond
reifyPend map[cid.Cid]struct{}
reifyInProgress map[cid.Cid]struct{}
// registered protectors
protectors []func(func(cid.Cid) error) error
@@ -194,13 +203,16 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
cold: cold,
hot: hots,
markSetEnv: markSetEnv,
- coldPurgeSize: defaultColdPurgeSize,
}
ss.txnViewsCond.L = &ss.txnViewsMx
ss.txnSyncCond.L = &ss.txnSyncMx
ss.ctx, ss.cancel = context.WithCancel(context.Background())
ss.reifyCond.L = &ss.reifyMx
ss.reifyPend = make(map[cid.Cid]struct{})
ss.reifyInProgress = make(map[cid.Cid]struct{})
if enableDebugLog {
ss.debug, err = openDebugLog(path)
if err != nil {
@@ -208,6 +220,14 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
}
}
if ss.checkpointExists() {
log.Info("found compaction checkpoint; resuming compaction")
if err := ss.completeCompaction(); err != nil {
markSetEnv.Close() //nolint:errcheck
return nil, xerrors.Errorf("error resuming compaction: %w", err)
}
}
return ss, nil
}
@@ -230,6 +250,20 @@ func (s *SplitStore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
s.txnLk.RLock()
defer s.txnLk.RUnlock()
// critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return false, err
}
if has {
return s.has(cid)
}
return s.cold.Has(ctx, cid)
}
has, err := s.hot.Has(ctx, cid)
if err != nil {
@@ -241,7 +275,13 @@ func (s *SplitStore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
return true, nil
}
- return s.cold.Has(ctx, cid)
has, err = s.cold.Has(ctx, cid)
if has && bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
return has, err
}
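The same critical-section dispatch recurs in Get, GetSize and View below: while txnMarkSet is set, the markset is authoritative, so marked objects are served through the splitstore's internal path and everything else goes straight to the coldstore. A condensed sketch of the shared shape (routeHas is a hypothetical helper, not part of the change):

```go
// Hypothetical helper condensing the dispatch shared by Has/Get/GetSize/View.
func (s *SplitStore) routeHas(ctx context.Context, c cid.Cid) (bool, error) {
	if s.txnMarkSet != nil { // compaction critical section is active
		marked, err := s.txnMarkSet.Has(c)
		if err != nil {
			return false, err
		}
		if marked {
			return s.has(c) // live object: check hot, then cold
		}
		return s.cold.Has(ctx, c) // not marked: only the coldstore may have it
	}
	// normal path: hot first, then cold, reifying hot-view misses
	has, err := s.hot.Has(ctx, c)
	if err != nil || has {
		return has, err
	}
	has, err = s.cold.Has(ctx, c)
	if has && bstore.IsHotView(ctx) {
		s.reifyColdObject(c)
	}
	return has, err
}
```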
func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
@@ -257,6 +297,20 @@ func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error)
s.txnLk.RLock()
defer s.txnLk.RUnlock()
// critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return nil, err
}
if has {
return s.get(cid)
}
return s.cold.Get(ctx, cid)
}
blk, err := s.hot.Get(ctx, cid)
switch err {
@@ -271,8 +325,11 @@ func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error)
blk, err = s.cold.Get(ctx, cid)
if err == nil {
if bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
}
return blk, err
@@ -294,6 +351,20 @@ func (s *SplitStore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
s.txnLk.RLock()
defer s.txnLk.RUnlock()
// critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return 0, err
}
if has {
return s.getSize(cid)
}
return s.cold.GetSize(ctx, cid)
}
size, err := s.hot.GetSize(ctx, cid)
switch err {
@@ -308,6 +379,10 @@ func (s *SplitStore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
size, err = s.cold.GetSize(ctx, cid)
if err == nil {
if bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
}
return size, err
@@ -332,6 +407,12 @@ func (s *SplitStore) Put(ctx context.Context, blk blocks.Block) error {
s.debug.LogWrite(blk)
// critical section
if s.txnMarkSet != nil {
s.markLiveRefs([]cid.Cid{blk.Cid()})
return nil
}
s.trackTxnRef(blk.Cid())
return nil
}
@@ -377,6 +458,12 @@ func (s *SplitStore) PutMany(ctx context.Context, blks []blocks.Block) error {
s.debug.LogWriteMany(blks)
// critical section
if s.txnMarkSet != nil {
s.markLiveRefs(batch)
return nil
}
s.trackTxnRefMany(batch)
return nil
}
@@ -436,6 +523,23 @@ func (s *SplitStore) View(ctx context.Context, cid cid.Cid, cb func([]byte) erro
return cb(data)
}
// critical section
s.txnLk.RLock() // the lock is released in protectView if we are not in critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
s.txnLk.RUnlock()
if err != nil {
return err
}
if has {
return s.view(cid, cb)
}
return s.cold.View(ctx, cid, cb)
}
// views are (optimistically) protected two-fold:
// - if there is an active transaction, then the reference is protected.
// - if there is no active transaction, active views are tracked in a
@@ -456,6 +560,10 @@ func (s *SplitStore) View(ctx context.Context, cid cid.Cid, cb func([]byte) erro
err = s.cold.View(ctx, cid, cb)
if err == nil {
if bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
}
return err
@@ -565,6 +673,9 @@ func (s *SplitStore) Start(chain ChainAccessor, us stmgr.UpgradeSchedule) error
}
}
// spawn the reifier
go s.reifyOrchestrator()
// watch the chain
chain.SubscribeHeadChanges(s.HeadChange)
@@ -585,12 +696,19 @@ func (s *SplitStore) Close() error {
}
if atomic.LoadInt32(&s.compacting) == 1 {
s.txnSyncMx.Lock()
s.txnSync = true
s.txnSyncCond.Broadcast()
s.txnSyncMx.Unlock()
log.Warn("close with ongoing compaction in progress; waiting for it to finish...")
for atomic.LoadInt32(&s.compacting) == 1 {
time.Sleep(time.Second)
}
}
s.reifyCond.Broadcast()
s.reifyWorkers.Wait()
s.cancel()
return multierr.Combine(s.markSetEnv.Close(), s.debug.Close())
}

View File

@@ -89,7 +89,7 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
coldCnt := new(int64)
missingCnt := new(int64)
- visitor, err := s.markSetEnv.Create("check", 0)
visitor, err := s.markSetEnv.New("check", 0)
if err != nil {
return xerrors.Errorf("error creating visitor: %w", err)
}

View File

@@ -3,8 +3,9 @@ package splitstore
import (
"bytes"
"errors"
"os"
"path/filepath"
"runtime"
- "sort"
"sync"
"sync/atomic"
"time"
@@ -48,6 +49,10 @@ var (
// SyncGapTime is the time delay from a tipset's min timestamp before we decide
// there is a sync gap
SyncGapTime = time.Minute
// SyncWaitTime is the time delay from a tipset's min timestamp before we decide
// we have synced.
SyncWaitTime = 30 * time.Second
)
var (
@@ -57,8 +62,6 @@ var (
const (
batchSize = 16384
- defaultColdPurgeSize = 7_000_000
)
func (s *SplitStore) HeadChange(_, apply []*types.TipSet) error {
@@ -141,9 +144,9 @@ func (s *SplitStore) isNearUpgrade(epoch abi.ChainEpoch) bool {
// transactionally protect incoming tipsets
func (s *SplitStore) protectTipSets(apply []*types.TipSet) {
s.txnLk.RLock()
- defer s.txnLk.RUnlock()
if !s.txnActive {
s.txnLk.RUnlock()
return
}
@@ -152,12 +155,115 @@ func (s *SplitStore) protectTipSets(apply []*types.TipSet) {
cids = append(cids, ts.Cids()...)
}
if len(cids) == 0 {
s.txnLk.RUnlock()
return
}
// critical section
if s.txnMarkSet != nil {
curTs := apply[len(apply)-1]
timestamp := time.Unix(int64(curTs.MinTimestamp()), 0)
doSync := time.Since(timestamp) < SyncWaitTime
go func() {
if doSync {
defer func() {
s.txnSyncMx.Lock()
defer s.txnSyncMx.Unlock()
s.txnSync = true
s.txnSyncCond.Broadcast()
}()
}
defer s.txnLk.RUnlock()
s.markLiveRefs(cids)
}()
return
}
s.trackTxnRefMany(cids)
s.txnLk.RUnlock()
}
func (s *SplitStore) markLiveRefs(cids []cid.Cid) {
log.Debugf("marking %d live refs", len(cids))
startMark := time.Now()
count := new(int32)
visitor := newConcurrentVisitor()
walkObject := func(c cid.Cid) error {
return s.walkObjectIncomplete(c, visitor,
func(c cid.Cid) error {
if isUnitaryObject(c) {
return errStopWalk
}
visit, err := s.txnMarkSet.Visit(c)
if err != nil {
return xerrors.Errorf("error visiting object: %w", err)
}
if !visit {
return errStopWalk
}
atomic.AddInt32(count, 1)
return nil
},
func(missing cid.Cid) error {
log.Warnf("missing object reference %s in %s", missing, c)
return errStopWalk
})
}
// optimize the common case of single put
if len(cids) == 1 {
if err := walkObject(cids[0]); err != nil {
log.Errorf("error marking tipset refs: %s", err)
}
log.Debugw("marking live refs done", "took", time.Since(startMark), "marked", *count)
return
}
workch := make(chan cid.Cid, len(cids))
for _, c := range cids {
workch <- c
}
close(workch)
worker := func() error {
for c := range workch {
if err := walkObject(c); err != nil {
return err
}
}
return nil
}
workers := runtime.NumCPU() / 2
if workers < 2 {
workers = 2
}
if workers > len(cids) {
workers = len(cids)
}
g := new(errgroup.Group)
for i := 0; i < workers; i++ {
g.Go(worker)
}
if err := g.Wait(); err != nil {
log.Errorf("error marking tipset refs: %s", err)
}
log.Debugw("marking live refs done", "took", time.Since(startMark), "marked", *count)
}
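markLiveRefs bounds its concurrency rather than spawning a goroutine per tipset cid: a buffered channel serves as the work queue, NumCPU/2 workers (minimum 2, capped at the queue length) drain it, and an errgroup joins them. The same pattern, extracted into a standalone helper for clarity (walkAll is illustrative, not part of the change):

```go
// Illustrative extraction of the worker-pool shape used by markLiveRefs.
// Assumes imports: runtime, golang.org/x/sync/errgroup, github.com/ipfs/go-cid.
func walkAll(cids []cid.Cid, walk func(cid.Cid) error) error {
	workch := make(chan cid.Cid, len(cids))
	for _, c := range cids {
		workch <- c
	}
	close(workch)

	workers := runtime.NumCPU() / 2
	if workers < 2 {
		workers = 2
	}
	if workers > len(cids) {
		workers = len(cids)
	}

	g := new(errgroup.Group)
	for i := 0; i < workers; i++ {
		g.Go(func() error {
			for c := range workch {
				if err := walk(c); err != nil {
					return err
				}
			}
			return nil
		})
	}
	return g.Wait()
}
```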
// transactionally protect a view
func (s *SplitStore) protectView(c cid.Cid) {
- s.txnLk.RLock()
// the txnLk is held for read
defer s.txnLk.RUnlock()
if s.txnActive {
@@ -387,6 +493,12 @@ func (s *SplitStore) compact(curTs *types.TipSet) {
}
func (s *SplitStore) doCompact(curTs *types.TipSet) error {
if s.checkpointExists() {
// this really shouldn't happen, but if it somehow does, it means that the hotstore
// might be potentially inconsistent; abort compaction and notify the user to intervene.
return xerrors.Errorf("checkpoint exists; aborting compaction")
}
currentEpoch := curTs.Height()
boundaryEpoch := currentEpoch - CompactionBoundary
@@ -398,7 +510,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Infow("running compaction", "currentEpoch", currentEpoch, "baseEpoch", s.baseEpoch, "boundaryEpoch", boundaryEpoch, "inclMsgsEpoch", inclMsgsEpoch, "compactionIndex", s.compactionIndex)
- markSet, err := s.markSetEnv.Create("live", s.markSetSize)
markSet, err := s.markSetEnv.New("live", s.markSetSize)
if err != nil {
return xerrors.Errorf("error creating mark set: %w", err)
}
@@ -409,9 +521,6 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return err
}
- // we are ready for concurrent marking
- s.beginTxnMarking(markSet)
// 0. track all protected references at beginning of compaction; anything added later should
// be transactionally protected by the write
log.Info("protecting references with registered protectors")
@@ -425,7 +534,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Info("marking reachable objects")
startMark := time.Now()
- var count int64
count := new(int64)
err = s.walkChain(curTs, boundaryEpoch, inclMsgsEpoch, &noopVisitor{},
func(c cid.Cid) error {
if isUnitaryObject(c) {
@@ -441,7 +550,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return errStopWalk
}
- count++
atomic.AddInt64(count, 1)
return nil
})
@@ -449,9 +558,9 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return xerrors.Errorf("error marking: %w", err)
}
- s.markSetSize = count + count>>2 // overestimate a bit
s.markSetSize = *count + *count>>2 // overestimate a bit
- log.Infow("marking done", "took", time.Since(startMark), "marked", count)
log.Infow("marking done", "took", time.Since(startMark), "marked", *count)
if err := s.checkClosing(); err != nil {
return err
}
@@ -471,10 +580,15 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Info("collecting cold objects")
startCollect := time.Now()
coldw, err := NewColdSetWriter(s.coldSetPath())
if err != nil {
return xerrors.Errorf("error creating coldset: %w", err)
}
defer coldw.Close() //nolint:errcheck
// some stats for logging
var hotCnt, coldCnt int
- cold := make([]cid.Cid, 0, s.coldPurgeSize)
err = s.hot.ForEachKey(func(c cid.Cid) error {
// was it marked?
mark, err := markSet.Has(c)
@@ -488,7 +602,9 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
}
// it's cold, mark it as candidate for move
- cold = append(cold, c)
if err := coldw.Write(c); err != nil {
return xerrors.Errorf("error writing cid to coldstore: %w", err)
}
coldCnt++
return nil
@@ -498,12 +614,12 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return xerrors.Errorf("error collecting cold objects: %w", err)
}
- log.Infow("cold collection done", "took", time.Since(startCollect))
- if coldCnt > 0 {
- s.coldPurgeSize = coldCnt + coldCnt>>2 // overestimate a bit
- }
if err := coldw.Close(); err != nil {
return xerrors.Errorf("error closing coldset: %w", err)
}
log.Infow("cold collection done", "took", time.Since(startCollect))
log.Infow("compaction stats", "hot", hotCnt, "cold", coldCnt)
stats.Record(s.ctx, metrics.SplitstoreCompactionHot.M(int64(hotCnt)))
stats.Record(s.ctx, metrics.SplitstoreCompactionCold.M(int64(coldCnt)))
@@ -521,11 +637,17 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return err
}
coldr, err := NewColdSetReader(s.coldSetPath())
if err != nil {
return xerrors.Errorf("error opening coldset: %w", err)
}
defer coldr.Close() //nolint:errcheck
// 3. copy the cold objects to the coldstore -- if we have one
if !s.cfg.DiscardColdBlocks {
log.Info("moving cold objects to the coldstore")
startMove := time.Now()
- err = s.moveColdBlocks(cold)
err = s.moveColdBlocks(coldr)
if err != nil {
return xerrors.Errorf("error moving cold objects: %w", err)
}
@@ -534,41 +656,64 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
if err := s.checkClosing(); err != nil {
return err
}
if err := coldr.Reset(); err != nil {
return xerrors.Errorf("error resetting coldset: %w", err)
}
}
- // 4. sort cold objects so that the dags with most references are deleted first
- // this ensures that we can't refer to a dag with its consituents already deleted, ie
- // we lave no dangling references.
- log.Info("sorting cold objects")
- startSort := time.Now()
- err = s.sortObjects(cold)
- if err != nil {
- return xerrors.Errorf("error sorting objects: %w", err)
- }
- log.Infow("sorting done", "took", time.Since(startSort))
- // 4.1 protect transactional refs once more
- // strictly speaking, this is not necessary as purge will do it before deleting each
- // batch. however, there is likely a largish number of references accumulated during
- // ths sort and this protects before entering pruge context.
- err = s.protectTxnRefs(markSet)
- if err != nil {
- return xerrors.Errorf("error protecting transactional refs: %w", err)
- }
// 4. Purge cold objects with checkpointing for recovery.
// This is the critical section of compaction, whereby any cold object not in the markSet is
// considered already deleted.
// We delete cold objects in batches, holding the transaction lock, where we check the markSet
// again for new references created by the VM.
// After each batch, we write a checkpoint to disk; if the process is interrupted before completion,
// the process will continue from the checkpoint in the next recovery.
if err := s.beginCriticalSection(markSet); err != nil {
return xerrors.Errorf("error beginning critical section: %w", err)
}
if err := s.checkClosing(); err != nil {
return err
}
// wait for the head to catch up so that the current tipset is marked
s.waitForSync()
if err := s.checkClosing(); err != nil {
return err
}
checkpoint, err := NewCheckpoint(s.checkpointPath())
if err != nil {
return xerrors.Errorf("error creating checkpoint: %w", err)
}
defer checkpoint.Close() //nolint:errcheck
// 5. purge cold objects from the hotstore, taking protected references into account
log.Info("purging cold objects from the hotstore")
startPurge := time.Now()
- err = s.purge(cold, markSet)
err = s.purge(coldr, checkpoint, markSet)
if err != nil {
- return xerrors.Errorf("error purging cold blocks: %w", err)
return xerrors.Errorf("error purging cold objects: %w", err)
}
log.Infow("purging cold objects from hotstore done", "took", time.Since(startPurge))
s.endCriticalSection()
if err := checkpoint.Close(); err != nil {
log.Warnf("error closing checkpoint: %s", err)
}
if err := os.Remove(s.checkpointPath()); err != nil {
log.Warnf("error removing checkpoint: %s", err)
}
if err := coldr.Close(); err != nil {
log.Warnf("error closing coldset: %s", err)
}
if err := os.Remove(s.coldSetPath()); err != nil {
log.Warnf("error removing coldset: %s", err)
}
// we are done; do some housekeeping
s.endTxnProtect()
s.gcHotstore()
@@ -599,12 +744,51 @@ func (s *SplitStore) beginTxnProtect() {
defer s.txnLk.Unlock()
s.txnActive = true
s.txnSync = false
s.txnRefs = make(map[cid.Cid]struct{})
s.txnMissing = make(map[cid.Cid]struct{})
}
- func (s *SplitStore) beginTxnMarking(markSet MarkSet) {
func (s *SplitStore) beginCriticalSection(markSet MarkSet) error {
- log.Info("beginning transactional marking")
log.Info("beginning critical section")
// do that once first to get the bulk before the markset is in critical section
if err := s.protectTxnRefs(markSet); err != nil {
return xerrors.Errorf("error protecting transactional references: %w", err)
}
if err := markSet.BeginCriticalSection(); err != nil {
return xerrors.Errorf("error beginning critical section for markset: %w", err)
}
s.txnLk.Lock()
defer s.txnLk.Unlock()
s.txnMarkSet = markSet
// and do it again while holding the lock to mark references that might have been created
// in the meantime and avoid races of the type Has->txnRef->enterCS->Get fails because
// it's not in the markset
if err := s.protectTxnRefs(markSet); err != nil {
return xerrors.Errorf("error protecting transactional references: %w", err)
}
return nil
}
func (s *SplitStore) waitForSync() {
log.Info("waiting for sync")
startWait := time.Now()
defer func() {
log.Infow("waiting for sync done", "took", time.Since(startWait))
}()
s.txnSyncMx.Lock()
defer s.txnSyncMx.Unlock()
for !s.txnSync {
s.txnSyncCond.Wait()
}
}
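waitForSync and the txnSync flag form a plain condition-variable gate: protectTipSets flips the flag (and broadcasts) once the marked head is within SyncWaitTime of wall-clock time, and Close flips it unconditionally so a shutdown cannot leave the purge deadlocked. Reduced to its essentials, the gate looks like this (syncGate is an illustrative standalone type):

```go
// Illustrative reduction of the txnSync gate to a standalone type.
type syncGate struct {
	mx   sync.Mutex
	cond sync.Cond
	done bool
}

func newSyncGate() *syncGate {
	g := &syncGate{}
	g.cond.L = &g.mx
	return g
}

// Wait blocks until Release is called; the loop absorbs spurious wakeups.
func (g *syncGate) Wait() {
	g.mx.Lock()
	defer g.mx.Unlock()
	for !g.done {
		g.cond.Wait()
	}
}

// Release opens the gate and wakes all waiters (cf. protectTipSets and Close).
func (g *syncGate) Release() {
	g.mx.Lock()
	defer g.mx.Unlock()
	g.done = true
	g.cond.Broadcast()
}
```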
func (s *SplitStore) endTxnProtect() {
@@ -616,8 +800,20 @@ func (s *SplitStore) endTxnProtect() {
}
s.txnActive = false
s.txnSync = false
s.txnRefs = nil s.txnRefs = nil
s.txnMissing = nil s.txnMissing = nil
s.txnMarkSet = nil
}
func (s *SplitStore) endCriticalSection() {
log.Info("ending critical section")
s.txnLk.Lock()
defer s.txnLk.Unlock()
s.txnMarkSet.EndCriticalSection()
s.txnMarkSet = nil
}
func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEpoch,
@@ -857,7 +1053,7 @@ func (s *SplitStore) walkObjectIncomplete(c cid.Cid, visitor ObjectVisitor, f, m
return nil
}
- // internal version used by walk
// internal version used during compaction and related operations
func (s *SplitStore) view(c cid.Cid, cb func([]byte) error) error {
if isIdentiyCid(c) {
data, err := decodeIdentityCid(c)
@@ -892,10 +1088,34 @@ func (s *SplitStore) has(c cid.Cid) (bool, error) {
return s.cold.Has(s.ctx, c)
}
- func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
func (s *SplitStore) get(c cid.Cid) (blocks.Block, error) {
blk, err := s.hot.Get(s.ctx, c)
switch err {
case nil:
return blk, nil
case bstore.ErrNotFound:
return s.cold.Get(s.ctx, c)
default:
return nil, err
}
}
func (s *SplitStore) getSize(c cid.Cid) (int, error) {
sz, err := s.hot.GetSize(s.ctx, c)
switch err {
case nil:
return sz, nil
case bstore.ErrNotFound:
return s.cold.GetSize(s.ctx, c)
default:
return 0, err
}
}
func (s *SplitStore) moveColdBlocks(coldr *ColdSetReader) error {
batch := make([]blocks.Block, 0, batchSize)
- for _, c := range cold {
err := coldr.ForEach(func(c cid.Cid) error {
if err := s.checkClosing(); err != nil {
return err
}
@@ -904,7 +1124,7 @@ func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
if err != nil {
if err == bstore.ErrNotFound {
log.Warnf("hotstore missing block %s", c)
- continue
return nil
}
return xerrors.Errorf("error retrieving block %s from hotstore: %w", c, err)
@@ -918,6 +1138,12 @@ func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
}
batch = batch[:0]
}
return nil
})
if err != nil {
return xerrors.Errorf("error iterating coldset: %w", err)
} }
if len(batch) > 0 {
@@ -930,160 +1156,60 @@ func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
return nil
}
- // sorts a slice of objects heaviest first -- it's a little expensive but worth the
- // guarantee that we don't leave dangling references behind, e.g. if we die in the middle
- // of a purge.
- func (s *SplitStore) sortObjects(cids []cid.Cid) error {
- // we cache the keys to avoid making a gazillion of strings
- keys := make(map[cid.Cid]string)
- key := func(c cid.Cid) string {
- s, ok := keys[c]
- if !ok {
- s = string(c.Hash())
- keys[c] = s
- }
- return s
- }
- // compute sorting weights as the cumulative number of DAG links
- weights := make(map[string]int)
- for _, c := range cids {
- // this can take quite a while, so check for shutdown with every opportunity
- if err := s.checkClosing(); err != nil {
- return err
- }
- w := s.getObjectWeight(c, weights, key)
- weights[key(c)] = w
- }
- // sort!
- sort.Slice(cids, func(i, j int) bool {
- wi := weights[key(cids[i])]
- wj := weights[key(cids[j])]
- if wi == wj {
- return bytes.Compare(cids[i].Hash(), cids[j].Hash()) > 0
- }
- return wi > wj
- })
- return nil
- }
- func (s *SplitStore) getObjectWeight(c cid.Cid, weights map[string]int, key func(cid.Cid) string) int {
- w, ok := weights[key(c)]
- if ok {
- return w
- }
- // we treat block headers specially to avoid walking the entire chain
- var hdr types.BlockHeader
- err := s.view(c, func(data []byte) error {
- return hdr.UnmarshalCBOR(bytes.NewBuffer(data))
- })
- if err == nil {
- w1 := s.getObjectWeight(hdr.ParentStateRoot, weights, key)
- weights[key(hdr.ParentStateRoot)] = w1
- w2 := s.getObjectWeight(hdr.Messages, weights, key)
- weights[key(hdr.Messages)] = w2
- return 1 + w1 + w2
- }
- var links []cid.Cid
- err = s.view(c, func(data []byte) error {
- return cbg.ScanForLinks(bytes.NewReader(data), func(c cid.Cid) {
- links = append(links, c)
- })
- })
- if err != nil {
- return 1
- }
- w = 1
- for _, c := range links {
- // these are internal refs, so dags will be dags
- if c.Prefix().Codec != cid.DagCBOR {
- w++
- continue
- }
- wc := s.getObjectWeight(c, weights, key)
- weights[key(c)] = wc
- w += wc
- }
- return w
- }
- func (s *SplitStore) purgeBatch(cids []cid.Cid, deleteBatch func([]cid.Cid) error) error {
- if len(cids) == 0 {
- return nil
- }
- // we don't delete one giant batch of millions of objects, but rather do smaller batches
- // so that we don't stop the world for an extended period of time
- done := false
- for i := 0; !done; i++ {
- start := i * batchSize
- end := start + batchSize
- if end >= len(cids) {
- end = len(cids)
- done = true
- }
- err := deleteBatch(cids[start:end])
- if err != nil {
- return xerrors.Errorf("error deleting batch: %w", err)
- }
- }
- return nil
- }
- func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSet) error {
- deadCids := make([]cid.Cid, 0, batchSize)
- var purgeCnt, liveCnt int
- defer func() {
- log.Infow("purged cold objects", "purged", purgeCnt, "live", liveCnt)
- }()
- return s.purgeBatch(cids,
- func(cids []cid.Cid) error {
- deadCids := deadCids[:0]
- for {
- if err := s.checkClosing(); err != nil {
- return err
- }
- s.txnLk.Lock()
- if len(s.txnRefs) == 0 {
- // keep the lock!
- break
- }
- // unlock and protect
- s.txnLk.Unlock()
- err := s.protectTxnRefs(markSet)
- if err != nil {
- return xerrors.Errorf("error protecting transactional refs: %w", err)
- }
- }
- defer s.txnLk.Unlock()
- for _, c := range cids {
- live, err := markSet.Has(c)
- if err != nil {
- return xerrors.Errorf("error checking for liveness: %w", err)
- }
- if live {
- liveCnt++
- continue
- }
func (s *SplitStore) purge(coldr *ColdSetReader, checkpoint *Checkpoint, markSet MarkSet) error {
batch := make([]cid.Cid, 0, batchSize)
deadCids := make([]cid.Cid, 0, batchSize)
var purgeCnt, liveCnt int
defer func() {
log.Infow("purged cold objects", "purged", purgeCnt, "live", liveCnt)
}()
deleteBatch := func() error {
pc, lc, err := s.purgeBatch(batch, deadCids, checkpoint, markSet)
purgeCnt += pc
liveCnt += lc
batch = batch[:0]
return err
}
err := coldr.ForEach(func(c cid.Cid) error {
batch = append(batch, c)
if len(batch) == batchSize {
return deleteBatch()
}
return nil
})
if err != nil {
return err
}
if len(batch) > 0 {
return deleteBatch()
}
return nil
}
func (s *SplitStore) purgeBatch(batch, deadCids []cid.Cid, checkpoint *Checkpoint, markSet MarkSet) (purgeCnt int, liveCnt int, err error) {
if err := s.checkClosing(); err != nil {
return 0, 0, err
}
s.txnLk.Lock()
defer s.txnLk.Unlock()
for _, c := range batch {
has, err := markSet.Has(c)
if err != nil {
return 0, 0, xerrors.Errorf("error checking markset for liveness: %w", err)
}
if has {
liveCnt++
continue
}
@@ -1091,16 +1217,141 @@ func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSet) error {
deadCids = append(deadCids, c)
}
- err := s.hot.DeleteMany(s.ctx, deadCids)
- if err != nil {
- return xerrors.Errorf("error purging cold objects: %w", err)
- }
if len(deadCids) == 0 {
if err := checkpoint.Set(batch[len(batch)-1]); err != nil {
return 0, 0, xerrors.Errorf("error setting checkpoint: %w", err)
}
return 0, liveCnt, nil
}
if err := s.hot.DeleteMany(s.ctx, deadCids); err != nil {
return 0, liveCnt, xerrors.Errorf("error purging cold objects: %w", err)
}
s.debug.LogDelete(deadCids)
purgeCnt = len(deadCids)
if err := checkpoint.Set(batch[len(batch)-1]); err != nil {
return purgeCnt, liveCnt, xerrors.Errorf("error setting checkpoint: %w", err)
}
return purgeCnt, liveCnt, nil
}
func (s *SplitStore) coldSetPath() string {
return filepath.Join(s.path, "coldset")
}
func (s *SplitStore) checkpointPath() string {
return filepath.Join(s.path, "checkpoint")
}
func (s *SplitStore) checkpointExists() bool {
_, err := os.Stat(s.checkpointPath())
return err == nil
}
func (s *SplitStore) completeCompaction() error {
checkpoint, last, err := OpenCheckpoint(s.checkpointPath())
if err != nil {
return xerrors.Errorf("error opening checkpoint: %w", err)
}
defer checkpoint.Close() //nolint:errcheck
coldr, err := NewColdSetReader(s.coldSetPath())
if err != nil {
return xerrors.Errorf("error opening coldset: %w", err)
}
defer coldr.Close() //nolint:errcheck
markSet, err := s.markSetEnv.Recover("live")
if err != nil {
return xerrors.Errorf("error recovering markset: %w", err)
}
defer markSet.Close() //nolint:errcheck
// PURGE
log.Info("purging cold objects from the hotstore")
startPurge := time.Now()
err = s.completePurge(coldr, checkpoint, last, markSet)
if err != nil {
return xerrors.Errorf("error purging cold objects: %w", err)
}
log.Infow("purging cold objects from hotstore done", "took", time.Since(startPurge))
markSet.EndCriticalSection()
if err := checkpoint.Close(); err != nil {
log.Warnf("error closing checkpoint: %s", err)
}
if err := os.Remove(s.checkpointPath()); err != nil {
log.Warnf("error removing checkpoint: %s", err)
}
if err := coldr.Close(); err != nil {
log.Warnf("error closing coldset: %s", err)
}
if err := os.Remove(s.coldSetPath()); err != nil {
log.Warnf("error removing coldset: %s", err)
}
// Note: at this point we can start the splitstore; a compaction should run on
// the first head change, which will trigger gc on the hotstore.
// We don't mind the second (back-to-back) compaction as the head will
// have advanced during marking and coldset accumulation.
return nil
}
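The checkpoint is effectively a cursor into the coldset: purgeBatch records the last cid of each completed batch, and completePurge below replays the coldset, skipping everything up to and including that cursor before resuming deletion. The seek idiom in isolation (resumeFrom is an illustrative helper, not part of the change):

```go
// Illustrative helper isolating the checkpoint-seek idiom used by completePurge.
func resumeFrom(start cid.Cid, forEach func(func(cid.Cid) error) error, handle func(cid.Cid) error) error {
	seeking := start.Defined()
	return forEach(func(c cid.Cid) error {
		if seeking {
			if start.Equals(c) {
				seeking = false // everything up to and including start was already purged
			}
			return nil
		}
		return handle(c)
	})
}
```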
func (s *SplitStore) completePurge(coldr *ColdSetReader, checkpoint *Checkpoint, start cid.Cid, markSet MarkSet) error {
if !start.Defined() {
return s.purge(coldr, checkpoint, markSet)
}
seeking := true
batch := make([]cid.Cid, 0, batchSize)
deadCids := make([]cid.Cid, 0, batchSize)
var purgeCnt, liveCnt int
defer func() {
log.Infow("purged cold objects", "purged", purgeCnt, "live", liveCnt)
}()
deleteBatch := func() error {
pc, lc, err := s.purgeBatch(batch, deadCids, checkpoint, markSet)
purgeCnt += pc
liveCnt += lc
batch = batch[:0]
return err
}
err := coldr.ForEach(func(c cid.Cid) error {
if seeking {
if start.Equals(c) {
seeking = false
}
return nil
}
batch = append(batch, c)
if len(batch) == batchSize {
return deleteBatch()
}
- purgeCnt += len(deadCids)
- return nil
- })
return nil
})
if err != nil {
return err
}
if len(batch) > 0 {
return deleteBatch()
}
return nil
- }
}
// I really don't like having this code, but we seem to have some occasional DAG references with

View File

@@ -0,0 +1,214 @@
package splitstore
import (
"errors"
"runtime"
"sync/atomic"
"golang.org/x/xerrors"
blocks "github.com/ipfs/go-block-format"
cid "github.com/ipfs/go-cid"
)
var (
errReifyLimit = errors.New("reification limit reached")
ReifyLimit = 16384
)
func (s *SplitStore) reifyColdObject(c cid.Cid) {
if !s.isWarm() {
return
}
if isUnitaryObject(c) {
return
}
s.reifyMx.Lock()
defer s.reifyMx.Unlock()
_, ok := s.reifyInProgress[c]
if ok {
return
}
s.reifyPend[c] = struct{}{}
s.reifyCond.Broadcast()
}
func (s *SplitStore) reifyOrchestrator() {
workers := runtime.NumCPU() / 4
if workers < 2 {
workers = 2
}
workch := make(chan cid.Cid, workers)
defer close(workch)
for i := 0; i < workers; i++ {
s.reifyWorkers.Add(1)
go s.reifyWorker(workch)
}
for {
s.reifyMx.Lock()
for len(s.reifyPend) == 0 && atomic.LoadInt32(&s.closing) == 0 {
s.reifyCond.Wait()
}
if atomic.LoadInt32(&s.closing) != 0 {
s.reifyMx.Unlock()
return
}
reifyPend := s.reifyPend
s.reifyPend = make(map[cid.Cid]struct{})
s.reifyMx.Unlock()
for c := range reifyPend {
select {
case workch <- c:
case <-s.ctx.Done():
return
}
}
}
}
func (s *SplitStore) reifyWorker(workch chan cid.Cid) {
defer s.reifyWorkers.Done()
for c := range workch {
s.doReify(c)
}
}
func (s *SplitStore) doReify(c cid.Cid) {
var toreify, totrack, toforget []cid.Cid
defer func() {
s.reifyMx.Lock()
defer s.reifyMx.Unlock()
for _, c := range toreify {
delete(s.reifyInProgress, c)
}
for _, c := range totrack {
delete(s.reifyInProgress, c)
}
for _, c := range toforget {
delete(s.reifyInProgress, c)
}
}()
s.txnLk.RLock()
defer s.txnLk.RUnlock()
count := 0
err := s.walkObjectIncomplete(c, newTmpVisitor(),
func(c cid.Cid) error {
if isUnitaryObject(c) {
return errStopWalk
}
count++
if count > ReifyLimit {
return errReifyLimit
}
s.reifyMx.Lock()
_, inProgress := s.reifyInProgress[c]
if !inProgress {
s.reifyInProgress[c] = struct{}{}
}
s.reifyMx.Unlock()
if inProgress {
return errStopWalk
}
has, err := s.hot.Has(s.ctx, c)
if err != nil {
return xerrors.Errorf("error checking hotstore: %w", err)
}
if has {
if s.txnMarkSet != nil {
hasMark, err := s.txnMarkSet.Has(c)
if err != nil {
log.Warnf("error checking markset: %s", err)
} else if hasMark {
toforget = append(toforget, c)
return errStopWalk
}
} else {
totrack = append(totrack, c)
return errStopWalk
}
}
toreify = append(toreify, c)
return nil
},
func(missing cid.Cid) error {
log.Warnf("missing reference while reifying %s: %s", c, missing)
return errStopWalk
})
if err != nil {
if xerrors.Is(err, errReifyLimit) {
log.Debug("reification aborted; reify limit reached")
return
}
log.Warnf("error walking cold object for reification (cid: %s): %s", c, err)
return
}
log.Debugf("reifying %d objects rooted at %s", len(toreify), c)
// this should not get too big, maybe some 100s of objects.
batch := make([]blocks.Block, 0, len(toreify))
for _, c := range toreify {
blk, err := s.cold.Get(s.ctx, c)
if err != nil {
log.Warnf("error retrieving cold object for reification (cid: %s): %s", c, err)
continue
}
if err := s.checkClosing(); err != nil {
return
}
batch = append(batch, blk)
}
if len(batch) > 0 {
err = s.hot.PutMany(s.ctx, batch)
if err != nil {
log.Warnf("error reifying cold object (cid: %s): %s", c, err)
return
}
}
if s.txnMarkSet != nil {
if len(toreify) > 0 {
if err := s.txnMarkSet.MarkMany(toreify); err != nil {
log.Warnf("error marking reified objects: %s", err)
}
}
if len(totrack) > 0 {
if err := s.txnMarkSet.MarkMany(totrack); err != nil {
log.Warnf("error marking tracked objects: %s", err)
}
}
} else {
// if txnActive is false these are noops
if len(toreify) > 0 {
s.trackTxnRefMany(toreify)
}
if len(totrack) > 0 {
s.trackTxnRefMany(totrack)
}
}
}
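The reifier is a small dedup-queue pipeline: reifyColdObject records a pending root (unless a walk over it is already in flight) and wakes the orchestrator, which drains the pending set into a channel feeding NumCPU/4 workers; doReify then walks the DAG, copies missing blocks back to the hotstore in one batch, and marks or tracks them so the active transaction stays consistent. The queue half of that, reduced to a standalone sketch (reifyQueue is illustrative, not part of the change):

```go
// Illustrative reduction of the pending/in-progress bookkeeping above.
type reifyQueue struct {
	mx         sync.Mutex
	cond       sync.Cond
	pend       map[cid.Cid]struct{}
	inProgress map[cid.Cid]struct{}
}

func newReifyQueue() *reifyQueue {
	q := &reifyQueue{
		pend:       make(map[cid.Cid]struct{}),
		inProgress: make(map[cid.Cid]struct{}),
	}
	q.cond.L = &q.mx
	return q
}

func (q *reifyQueue) enqueue(c cid.Cid) {
	q.mx.Lock()
	defer q.mx.Unlock()
	if _, ok := q.inProgress[c]; ok {
		return // a walk rooted at c is already reifying it
	}
	q.pend[c] = struct{}{}
	q.cond.Broadcast() // wake the orchestrator
}

// drain hands the accumulated batch to the orchestrator and resets the set.
func (q *reifyQueue) drain() map[cid.Cid]struct{} {
	q.mx.Lock()
	defer q.mx.Unlock()
	for len(q.pend) == 0 {
		q.cond.Wait()
	}
	batch := q.pend
	q.pend = make(map[cid.Cid]struct{})
	return batch
}
```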

View File

@@ -4,6 +4,9 @@ import (
"context"
"errors"
"fmt"
"io/ioutil"
"math/rand"
"os"
"sync"
"sync/atomic"
"testing"
@@ -20,12 +23,14 @@ import (
datastore "github.com/ipfs/go-datastore"
dssync "github.com/ipfs/go-datastore/sync"
logging "github.com/ipfs/go-log/v2"
mh "github.com/multiformats/go-multihash"
)
func init() {
CompactionThreshold = 5
CompactionBoundary = 2
WarmupBoundary = 0
SyncWaitTime = time.Millisecond
logging.SetLogLevel("splitstore", "DEBUG")
}
@@ -80,8 +85,17 @@ func testSplitStore(t *testing.T, cfg *Config) {
t.Fatal(err)
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
// open the splitstore
- ss, err := Open("", ds, hot, cold, cfg)
ss, err := Open(path, ds, hot, cold, cfg)
if err != nil {
t.Fatal(err)
}
@@ -125,6 +139,10 @@ func testSplitStore(t *testing.T, cfg *Config) {
}
waitForCompaction := func() {
ss.txnSyncMx.Lock()
ss.txnSync = true
ss.txnSyncCond.Broadcast()
ss.txnSyncMx.Unlock()
for atomic.LoadInt32(&ss.compacting) == 1 {
time.Sleep(100 * time.Millisecond)
}
@@ -259,8 +277,17 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
t.Fatal(err)
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
// open the splitstore
- ss, err := Open("", ds, hot, cold, &Config{MarkSetType: "map"})
ss, err := Open(path, ds, hot, cold, &Config{MarkSetType: "map"})
if err != nil {
t.Fatal(err)
}
@@ -305,6 +332,10 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
}
waitForCompaction := func() {
ss.txnSyncMx.Lock()
ss.txnSync = true
ss.txnSyncCond.Broadcast()
ss.txnSyncMx.Unlock()
for atomic.LoadInt32(&ss.compacting) == 1 {
time.Sleep(100 * time.Millisecond)
}
@@ -357,6 +388,235 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
}
}
func testSplitStoreReification(t *testing.T, f func(context.Context, blockstore.Blockstore, cid.Cid) error) {
ds := dssync.MutexWrap(datastore.NewMapDatastore())
hot := newMockStore()
cold := newMockStore()
mkRandomBlock := func() blocks.Block {
data := make([]byte, 128)
_, err := rand.Read(data)
if err != nil {
t.Fatal(err)
}
return blocks.NewBlock(data)
}
block1 := mkRandomBlock()
block2 := mkRandomBlock()
block3 := mkRandomBlock()
hdr := mock.MkBlock(nil, 0, 0)
hdr.Messages = block1.Cid()
hdr.ParentMessageReceipts = block2.Cid()
hdr.ParentStateRoot = block3.Cid()
block4, err := hdr.ToStorageBlock()
if err != nil {
t.Fatal(err)
}
allBlocks := []blocks.Block{block1, block2, block3, block4}
for _, blk := range allBlocks {
err := cold.Put(context.Background(), blk)
if err != nil {
t.Fatal(err)
}
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
ss, err := Open(path, ds, hot, cold, &Config{MarkSetType: "map"})
if err != nil {
t.Fatal(err)
}
defer ss.Close() //nolint
ss.warmupEpoch = 1
go ss.reifyOrchestrator()
waitForReification := func() {
for {
ss.reifyMx.Lock()
ready := len(ss.reifyPend) == 0 && len(ss.reifyInProgress) == 0
ss.reifyMx.Unlock()
if ready {
return
}
time.Sleep(time.Millisecond)
}
}
// first access using the standard view
err = f(context.Background(), ss, block4.Cid())
if err != nil {
t.Fatal(err)
}
// nothing should be reified
waitForReification()
for _, blk := range allBlocks {
has, err := hot.Has(context.Background(), blk.Cid())
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("block unexpectedly reified")
}
}
// now make the hot/reifying view and ensure access reifies
err = f(blockstore.WithHotView(context.Background()), ss, block4.Cid())
if err != nil {
t.Fatal(err)
}
// everything should be reified
waitForReification()
for i, blk := range allBlocks {
has, err := hot.Has(context.Background(), blk.Cid())
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatalf("block%d was not reified", i+1)
}
}
}
func testSplitStoreReificationLimit(t *testing.T, f func(context.Context, blockstore.Blockstore, cid.Cid) error) {
ds := dssync.MutexWrap(datastore.NewMapDatastore())
hot := newMockStore()
cold := newMockStore()
mkRandomBlock := func() blocks.Block {
data := make([]byte, 128)
_, err := rand.Read(data)
if err != nil {
t.Fatal(err)
}
return blocks.NewBlock(data)
}
block1 := mkRandomBlock()
block2 := mkRandomBlock()
block3 := mkRandomBlock()
hdr := mock.MkBlock(nil, 0, 0)
hdr.Messages = block1.Cid()
hdr.ParentMessageReceipts = block2.Cid()
hdr.ParentStateRoot = block3.Cid()
block4, err := hdr.ToStorageBlock()
if err != nil {
t.Fatal(err)
}
allBlocks := []blocks.Block{block1, block2, block3, block4}
for _, blk := range allBlocks {
err := cold.Put(context.Background(), blk)
if err != nil {
t.Fatal(err)
}
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
ss, err := Open(path, ds, hot, cold, &Config{MarkSetType: "map"})
if err != nil {
t.Fatal(err)
}
defer ss.Close() //nolint
ss.warmupEpoch = 1
go ss.reifyOrchestrator()
waitForReification := func() {
for {
ss.reifyMx.Lock()
ready := len(ss.reifyPend) == 0 && len(ss.reifyInProgress) == 0
ss.reifyMx.Unlock()
if ready {
return
}
time.Sleep(time.Millisecond)
}
}
// do a hot access -- nothing should be reified as the limit should be exceeded
oldReifyLimit := ReifyLimit
ReifyLimit = 2
t.Cleanup(func() {
ReifyLimit = oldReifyLimit
})
err = f(blockstore.WithHotView(context.Background()), ss, block4.Cid())
if err != nil {
t.Fatal(err)
}
waitForReification()
for _, blk := range allBlocks {
has, err := hot.Has(context.Background(), blk.Cid())
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("block unexpectedly reified")
}
}
}
func TestSplitStoreReification(t *testing.T) {
t.Log("test reification with Has")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.Has(ctx, c)
return err
})
t.Log("test reification with Get")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.Get(ctx, c)
return err
})
t.Log("test reification with GetSize")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.GetSize(ctx, c)
return err
})
t.Log("test reification with View")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
return s.View(ctx, c, func(_ []byte) error { return nil })
})
t.Log("test reification limit")
testSplitStoreReificationLimit(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.Has(ctx, c)
return err
})
}
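The reification tests drive the splitstore through blockstore.WithHotView, which is ordinary context plumbing: a marker value that IsHotView checks on the read path. A generic sketch of the pattern (the concrete key type in the blockstore package may differ):

```go
// Generic sketch of the hot-view context flag; the real implementation
// lives in the blockstore package and may use a different key type.
type hotViewKey struct{}

func WithHotView(ctx context.Context) context.Context {
	return context.WithValue(ctx, hotViewKey{}, struct{}{})
}

func IsHotView(ctx context.Context) bool {
	return ctx.Value(hotViewKey{}) != nil
}
```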
type mockChain struct {
t testing.TB
@@ -426,17 +686,25 @@ func (c *mockChain) SubscribeHeadChanges(change func(revert []*types.TipSet, app
type mockStore struct {
mx sync.Mutex
- set map[cid.Cid]blocks.Block
set map[string]blocks.Block
}
func newMockStore() *mockStore {
- return &mockStore{set: make(map[cid.Cid]blocks.Block)}
return &mockStore{set: make(map[string]blocks.Block)}
}
func (b *mockStore) keyOf(c cid.Cid) string {
return string(c.Hash())
}
func (b *mockStore) cidOf(k string) cid.Cid {
return cid.NewCidV1(cid.Raw, mh.Multihash([]byte(k)))
} }
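Keying the mock by multihash rather than by the whole CID makes lookups insensitive to codec and CID version, matching how real blockstores behave, and lets ForEachKey reconstruct a usable raw-codec CID via cidOf. A quick hypothetical check of the collision this buys:

```go
// Hypothetical check: two CIDs with different codecs share one mockStore key.
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	h, err := mh.Sum([]byte("data"), mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}
	a := cid.NewCidV1(cid.Raw, h)     // raw codec
	b := cid.NewCidV1(cid.DagCBOR, h) // dag-cbor codec, same hash
	fmt.Println(a.Equals(b))                          // false: distinct CIDs
	fmt.Println(string(a.Hash()) == string(b.Hash())) // true: same keyOf key
}
```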
func (b *mockStore) Has(_ context.Context, cid cid.Cid) (bool, error) {
b.mx.Lock()
defer b.mx.Unlock()
- _, ok := b.set[cid]
_, ok := b.set[b.keyOf(cid)]
return ok, nil
}
@@ -446,7 +714,7 @@ func (b *mockStore) Get(_ context.Context, cid cid.Cid) (blocks.Block, error) {
b.mx.Lock()
defer b.mx.Unlock()
- blk, ok := b.set[cid]
blk, ok := b.set[b.keyOf(cid)]
if !ok {
return nil, blockstore.ErrNotFound
}
@@ -474,7 +742,7 @@ func (b *mockStore) Put(_ context.Context, blk blocks.Block) error {
b.mx.Lock()
defer b.mx.Unlock()
- b.set[blk.Cid()] = blk
b.set[b.keyOf(blk.Cid())] = blk
return nil
}
@@ -483,7 +751,7 @@ func (b *mockStore) PutMany(_ context.Context, blks []blocks.Block) error {
defer b.mx.Unlock()
for _, blk := range blks {
- b.set[blk.Cid()] = blk
b.set[b.keyOf(blk.Cid())] = blk
}
return nil
}
@@ -492,7 +760,7 @@ func (b *mockStore) DeleteBlock(_ context.Context, cid cid.Cid) error {
b.mx.Lock()
defer b.mx.Unlock()
- delete(b.set, cid)
delete(b.set, b.keyOf(cid))
return nil
}
@@ -501,7 +769,7 @@ func (b *mockStore) DeleteMany(_ context.Context, cids []cid.Cid) error {
defer b.mx.Unlock()
for _, c := range cids {
- delete(b.set, c)
delete(b.set, b.keyOf(c))
}
return nil
}
@@ -515,7 +783,7 @@ func (b *mockStore) ForEachKey(f func(cid.Cid) error) error {
defer b.mx.Unlock()
for c := range b.set {
- err := f(c)
err := f(b.cidOf(c))
if err != nil {
return err
}

View File

@@ -62,7 +62,7 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
xcount := new(int64)
missing := new(int64)
- visitor, err := s.markSetEnv.Create("warmup", 0)
visitor, err := s.markSetEnv.New("warmup", 0)
if err != nil {
return xerrors.Errorf("error creating visitor: %w", err)
}

View File

@@ -26,6 +26,10 @@ type tmpVisitor struct {
var _ ObjectVisitor = (*tmpVisitor)(nil)
func (v *tmpVisitor) Visit(c cid.Cid) (bool, error) {
if isUnitaryObject(c) {
return false, nil
}
return v.set.Visit(c), nil
}
@@ -45,6 +49,10 @@ func newConcurrentVisitor() *concurrentVisitor {
}
func (v *concurrentVisitor) Visit(c cid.Cid) (bool, error) {
if isUnitaryObject(c) {
return false, nil
}
v.mx.Lock()
defer v.mx.Unlock()

View File

@@ -1,2 +1,2 @@
- /dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWBdRCBLUeKvoy22u5DcXs61adFn31v8WWCZgmBjDCjbsC
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWFHDtFx7CVTy4xoCDutVo1cScvSnQjDeaM8UzwVS1qwkh
- /dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWDUQJBA18njjXnG9RtLxoN3muvdU7PEy55QorUEsdAqdy
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWKt8cwpkiumkT8x32c3YFxsPRwhV5J8hCYPn9mhUmcAXt

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@@ -42,8 +42,7 @@ const UpgradeTurboHeight = -15
const UpgradeHyperdriveHeight = -16
const UpgradeChocolateHeight = -17
- // 2022-01-17T19:00:00Z
- const UpgradeOhSnapHeight = 30262
const UpgradeOhSnapHeight = 240
func init() {
policy.SetConsensusMinerMinPower(abi.NewStoragePower(2 << 30))

View File

@@ -54,7 +54,8 @@ const UpgradeHyperdriveHeight = 420
const UpgradeChocolateHeight = 312746
- const UpgradeOhSnapHeight = 99999999
// 2022-02-10T19:23:00Z
const UpgradeOhSnapHeight = 682006
func init() {
policy.SetConsensusMinerMinPower(abi.NewStoragePower(32 << 30))

View File

@@ -67,7 +67,8 @@ const UpgradeHyperdriveHeight = 892800
// 2021-10-26T13:30:00Z
const UpgradeChocolateHeight = 1231620
- var UpgradeOhSnapHeight = abi.ChainEpoch(999999999999)
// 2022-03-01T15:00:00Z
var UpgradeOhSnapHeight = abi.ChainEpoch(1594680)
func init() {
if os.Getenv("LOTUS_USE_TEST_ADDRESSES") != "1" {

View File

@@ -1,4 +1,54 @@
{
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-61fa69f38b9cc771ba27b670124714b4ea77fbeae05e377fb859c4a43b73a30c.params": {
"cid": "Qma5WL6abSqYg9uUQAZ3EHS286bsNsha7oAGsJBD48Bq2q",
"digest": "c3ad7bb549470b82ad52ed070aebb4f4",
"sector_size": 536870912
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-61fa69f38b9cc771ba27b670124714b4ea77fbeae05e377fb859c4a43b73a30c.vk": {
"cid": "QmUa7f9JtJMsqJJ3s3ZXk6WyF4xJLE8FiqYskZGgk8GCDv",
"digest": "994c5b7d450ca9da348c910689f2dc7f",
"sector_size": 536870912
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-92180959e1918d26350b8e6cfe217bbdd0a2d8de51ebec269078b364b715ad63.params": {
"cid": "QmQiT4qBGodrVNEgVTDXxBNDdPbaD8Ag7Sx3ZTq1zHX79S",
"digest": "5aedd2cf3e5c0a15623d56a1b43110ad",
"sector_size": 8388608
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-92180959e1918d26350b8e6cfe217bbdd0a2d8de51ebec269078b364b715ad63.vk": {
"cid": "QmdcpKUQvHM8RFRVKbk1yHfEqMcBzhtFWKRp9SNEmWq37i",
"digest": "abd80269054d391a734febdac0d2e687",
"sector_size": 8388608
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-fb9e095bebdd77511c0269b967b4d87ba8b8a525edaa0e165de23ba454510194.params": {
"cid": "QmYM6Hg7mjmvA3ZHTsqkss1fkdyDju5dDmLiBZGJ5pz9y9",
"digest": "311f92a3e75036ced01b1c0025f1fa0c",
"sector_size": 2048
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-fb9e095bebdd77511c0269b967b4d87ba8b8a525edaa0e165de23ba454510194.vk": {
"cid": "QmaQsTLL3nc5dw6wAvaioJSBfd1jhQrA2o6ucFf7XeV74P",
"digest": "eadad9784969890d30f2749708c79771",
"sector_size": 2048
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-0-3b7f44a9362e3985369454947bc94022e118211e49fd672d52bec1cbfd599d18.params": {
"cid": "QmNPc75iEfcahCwNKdqnWLtxnjspUGGR4iscjiz3wP3RtS",
"digest": "1b3cfd761a961543f9eb273e435a06a2",
"sector_size": 34359738368
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-0-3b7f44a9362e3985369454947bc94022e118211e49fd672d52bec1cbfd599d18.vk": {
"cid": "QmdFFUe1gcz9MMHc6YW8aoV48w4ckvcERjt7PkydQAMfCN",
"digest": "3a6941983754737fde880d29c7094905",
"sector_size": 34359738368
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-2-102e1444a7e9a97ebf1e3d6855dcc77e66c011ea66f936d9b2c508f87f2f83a7.params": {
"cid": "QmUB6xTVjzBQGuDNeyJMrrJ1byk58vhPm8eY2Lv9pgwanp",
"digest": "1a392e7b759fb18e036c7559b5ece816",
"sector_size": 68719476736
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-2-102e1444a7e9a97ebf1e3d6855dcc77e66c011ea66f936d9b2c508f87f2f83a7.vk": {
"cid": "Qmd794Jty7k26XJ8Eg4NDEks65Qk8G4GVfGkwqvymv8HAg",
"digest": "80e366df2f1011953c2d01c7b7c9ee8e",
"sector_size": 68719476736
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0170db1f394b35d995252228ee359194b13199d259380541dc529fb0099096b0.params": {
"cid": "QmVxjFRyhmyQaZEtCh7nk2abc7LhFkzhnRX4rcHqCCpikR",
"digest": "7610b9f82bfc88405b7a832b651ce2f6",

View File

@ -37,7 +37,7 @@ func BuildTypeString() string {
}

// BuildVersion is the local build version
-const BuildVersion = "1.15.0-dev"
+const BuildVersion = "1.15.1-dev"

func UserVersion() string {
    if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {

View File

@ -16,6 +16,7 @@ import (
"github.com/filecoin-project/lotus/chain/actors/builtin" "github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors" "github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/types" "github.com/filecoin-project/lotus/chain/types"
verifreg7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/verifreg"
) )
func init() { func init() {
@ -62,6 +63,11 @@ func GetActorCodeID(av actors.Version) (cid.Cid, error) {
return cid.Undef, xerrors.Errorf("unknown actor version %d", av) return cid.Undef, xerrors.Errorf("unknown actor version %d", av)
} }
type RemoveDataCapProposal = verifreg{{.latestVersion}}.RemoveDataCapProposal
type RemoveDataCapRequest = verifreg{{.latestVersion}}.RemoveDataCapRequest
type RemoveDataCapParams = verifreg{{.latestVersion}}.RemoveDataCapParams
type RmDcProposalID = verifreg{{.latestVersion}}.RmDcProposalID
const SignatureDomainSeparation_RemoveDataCap = verifreg{{.latestVersion}}.SignatureDomainSeparation_RemoveDataCap
type State interface { type State interface {
cbor.Marshaler cbor.Marshaler
@ -69,6 +75,7 @@ type State interface {
RootKey() (address.Address, error) RootKey() (address.Address, error)
VerifiedClientDataCap(address.Address) (bool, abi.StoragePower, error) VerifiedClientDataCap(address.Address) (bool, abi.StoragePower, error)
VerifierDataCap(address.Address) (bool, abi.StoragePower, error) VerifierDataCap(address.Address) (bool, abi.StoragePower, error)
RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error)
ForEachVerifier(func(addr address.Address, dcap abi.StoragePower) error) error ForEachVerifier(func(addr address.Address, dcap abi.StoragePower) error) error
ForEachClient(func(addr address.Address, dcap abi.StoragePower) error) error ForEachClient(func(addr address.Address, dcap abi.StoragePower) error) error
GetState() interface{} GetState() interface{}

View File

@ -61,6 +61,10 @@ func (s *state{{.v}}) VerifierDataCap(addr address.Address) (bool, abi.StoragePo
    return getDataCap(s.store, actors.Version{{.v}}, s.verifiers, addr)
}

func (s *state{{.v}}) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version{{.v}}, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state{{.v}}) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version{{.v}}, s.verifiers, cb)
}
@ -77,6 +81,11 @@ func (s *state{{.v}}) verifiers() (adt.Map, error) {
    return adt{{.v}}.AsMap(s.store, s.Verifiers{{if (ge .v 3)}}, builtin{{.v}}.DefaultHamtBitwidth{{end}})
}

func (s *state{{.v}}) removeDataCapProposalIDs() (adt.Map, error) {
    {{if le .v 6}}return nil, nil
    {{else}}return adt{{.v}}.AsMap(s.store, s.RemoveDataCapProposalIDs, builtin{{.v}}.DefaultHamtBitwidth){{end}}
}

func (s *state{{.v}}) GetState() interface{} {
    return &s.State
}

View File

@ -6,6 +6,7 @@ import (
"github.com/filecoin-project/go-state-types/big" "github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/actors" "github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/adt" "github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/specs-actors/v7/actors/builtin/verifreg"
"golang.org/x/xerrors" "golang.org/x/xerrors"
) )
@ -50,3 +51,28 @@ func forEachCap(store adt.Store, ver actors.Version, root rootFunc, cb func(addr
return cb(a, dcap) return cb(a, dcap)
}) })
} }
func getRemoveDataCapProposalID(store adt.Store, ver actors.Version, root rootFunc, verifier address.Address, client address.Address) (bool, uint64, error) {
if verifier.Protocol() != address.ID {
return false, 0, xerrors.Errorf("can only look up ID addresses")
}
if client.Protocol() != address.ID {
return false, 0, xerrors.Errorf("can only look up ID addresses")
}
vh, err := root()
if err != nil {
return false, 0, xerrors.Errorf("loading verifreg: %w", err)
}
if vh == nil {
return false, 0, xerrors.Errorf("remove data cap proposal hamt not found. you are probably using an incompatible version of actors")
}
var id verifreg.RmDcProposalID
if found, err := vh.Get(abi.NewAddrPairKey(verifier, client), &id); err != nil {
return false, 0, xerrors.Errorf("looking up addr pair: %w", err)
} else if !found {
return false, 0, nil
}
return true, id.ProposalID, nil
}
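For context, a sketch of how a caller might use the new lookup: the stored ID is the anti-replay counter embedded in a RemoveDataCapProposal before it is signed. Obtaining the verifreg State value is out of scope here, and this helper function is hypothetical, not part of the diff:

package verifregsketch

import (
    "github.com/filecoin-project/go-address"
    "github.com/filecoin-project/lotus/chain/actors/builtin/verifreg"
    "golang.org/x/xerrors"
)

// currentProposalID returns the proposal ID recorded for a
// (verifier, client) pair, or 0 when no prior proposal exists.
func currentProposalID(st verifreg.State, verifier, client address.Address) (uint64, error) {
    found, id, err := st.RemoveDataCapProposalID(verifier, client)
    if err != nil {
        return 0, xerrors.Errorf("looking up proposal id: %w", err)
    }
    if !found {
        return 0, nil // first proposal for this pair
    }
    return id, nil
}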

View File

@ -53,6 +53,10 @@ func (s *state0) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version0, s.verifiers, addr)
}

func (s *state0) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version0, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state0) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version0, s.verifiers, cb)
}
@ -69,6 +73,11 @@ func (s *state0) verifiers() (adt.Map, error) {
    return adt0.AsMap(s.store, s.Verifiers)
}

func (s *state0) removeDataCapProposalIDs() (adt.Map, error) {
    return nil, nil
}

func (s *state0) GetState() interface{} {
    return &s.State
}

View File

@ -53,6 +53,10 @@ func (s *state2) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version2, s.verifiers, addr)
}

func (s *state2) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version2, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state2) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version2, s.verifiers, cb)
}
@ -69,6 +73,11 @@ func (s *state2) verifiers() (adt.Map, error) {
    return adt2.AsMap(s.store, s.Verifiers)
}

func (s *state2) removeDataCapProposalIDs() (adt.Map, error) {
    return nil, nil
}

func (s *state2) GetState() interface{} {
    return &s.State
}

View File

@ -54,6 +54,10 @@ func (s *state3) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version3, s.verifiers, addr)
}

func (s *state3) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version3, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state3) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version3, s.verifiers, cb)
}
@ -70,6 +74,11 @@ func (s *state3) verifiers() (adt.Map, error) {
    return adt3.AsMap(s.store, s.Verifiers, builtin3.DefaultHamtBitwidth)
}

func (s *state3) removeDataCapProposalIDs() (adt.Map, error) {
    return nil, nil
}

func (s *state3) GetState() interface{} {
    return &s.State
}

View File

@ -54,6 +54,10 @@ func (s *state4) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version4, s.verifiers, addr)
}

func (s *state4) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version4, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state4) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version4, s.verifiers, cb)
}
@ -70,6 +74,11 @@ func (s *state4) verifiers() (adt.Map, error) {
    return adt4.AsMap(s.store, s.Verifiers, builtin4.DefaultHamtBitwidth)
}

func (s *state4) removeDataCapProposalIDs() (adt.Map, error) {
    return nil, nil
}

func (s *state4) GetState() interface{} {
    return &s.State
}

View File

@ -54,6 +54,10 @@ func (s *state5) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version5, s.verifiers, addr)
}

func (s *state5) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version5, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state5) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version5, s.verifiers, cb)
}
@ -70,6 +74,11 @@ func (s *state5) verifiers() (adt.Map, error) {
    return adt5.AsMap(s.store, s.Verifiers, builtin5.DefaultHamtBitwidth)
}

func (s *state5) removeDataCapProposalIDs() (adt.Map, error) {
    return nil, nil
}

func (s *state5) GetState() interface{} {
    return &s.State
}

View File

@ -54,6 +54,10 @@ func (s *state6) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version6, s.verifiers, addr)
}

func (s *state6) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version6, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state6) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version6, s.verifiers, cb)
}
@ -70,6 +74,11 @@ func (s *state6) verifiers() (adt.Map, error) {
    return adt6.AsMap(s.store, s.Verifiers, builtin6.DefaultHamtBitwidth)
}

func (s *state6) removeDataCapProposalIDs() (adt.Map, error) {
    return nil, nil
}

func (s *state6) GetState() interface{} {
    return &s.State
}

View File

@ -54,6 +54,10 @@ func (s *state7) VerifierDataCap(addr address.Address) (bool, abi.StoragePower,
    return getDataCap(s.store, actors.Version7, s.verifiers, addr)
}

func (s *state7) RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error) {
    return getRemoveDataCapProposalID(s.store, actors.Version7, s.removeDataCapProposalIDs, verifier, client)
}

func (s *state7) ForEachVerifier(cb func(addr address.Address, dcap abi.StoragePower) error) error {
    return forEachCap(s.store, actors.Version7, s.verifiers, cb)
}
@ -70,6 +74,10 @@ func (s *state7) verifiers() (adt.Map, error) {
    return adt7.AsMap(s.store, s.Verifiers, builtin7.DefaultHamtBitwidth)
}

func (s *state7) removeDataCapProposalIDs() (adt.Map, error) {
    return adt7.AsMap(s.store, s.RemoveDataCapProposalIDs, builtin7.DefaultHamtBitwidth)
}

func (s *state7) GetState() interface{} {
    return &s.State
}

View File

@ -27,6 +27,7 @@ import (
"github.com/filecoin-project/lotus/chain/actors/adt" "github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin" "github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/types" "github.com/filecoin-project/lotus/chain/types"
verifreg7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/verifreg"
) )
func init() { func init() {
@ -151,12 +152,20 @@ func GetActorCodeID(av actors.Version) (cid.Cid, error) {
return cid.Undef, xerrors.Errorf("unknown actor version %d", av) return cid.Undef, xerrors.Errorf("unknown actor version %d", av)
} }
type RemoveDataCapProposal = verifreg7.RemoveDataCapProposal
type RemoveDataCapRequest = verifreg7.RemoveDataCapRequest
type RemoveDataCapParams = verifreg7.RemoveDataCapParams
type RmDcProposalID = verifreg7.RmDcProposalID
const SignatureDomainSeparation_RemoveDataCap = verifreg7.SignatureDomainSeparation_RemoveDataCap
type State interface { type State interface {
cbor.Marshaler cbor.Marshaler
RootKey() (address.Address, error) RootKey() (address.Address, error)
VerifiedClientDataCap(address.Address) (bool, abi.StoragePower, error) VerifiedClientDataCap(address.Address) (bool, abi.StoragePower, error)
VerifierDataCap(address.Address) (bool, abi.StoragePower, error) VerifierDataCap(address.Address) (bool, abi.StoragePower, error)
RemoveDataCapProposalID(verifier address.Address, client address.Address) (bool, uint64, error)
ForEachVerifier(func(addr address.Address, dcap abi.StoragePower) error) error ForEachVerifier(func(addr address.Address, dcap abi.StoragePower) error) error
ForEachClient(func(addr address.Address, dcap abi.StoragePower) error) error ForEachClient(func(addr address.Address, dcap abi.StoragePower) error) error
GetState() interface{} GetState() interface{}

View File

@ -142,7 +142,7 @@ func (db *DrandBeacon) Entry(ctx context.Context, round uint64) <-chan beacon.Re
    go func() {
        start := build.Clock.Now()
-       log.Infow("start fetching randomness", "round", round)
+       log.Debugw("start fetching randomness", "round", round)
        resp, err := db.client.Get(ctx, round)

        var br beacon.Response
@ -152,7 +152,7 @@ func (db *DrandBeacon) Entry(ctx context.Context, round uint64) <-chan beacon.Re
            br.Entry.Round = resp.Round()
            br.Entry.Data = resp.Signature()
        }
-       log.Infow("done fetching randomness", "round", round, "took", build.Clock.Since(start))
+       log.Debugw("done fetching randomness", "round", round, "took", build.Clock.Since(start))
        out <- br
        close(out)
    }()

View File

@ -32,6 +32,7 @@ import (
    /* inline-gen end */

    "github.com/filecoin-project/lotus/blockstore"
    "github.com/filecoin-project/lotus/build"
    "github.com/filecoin-project/lotus/chain/actors"
    "github.com/filecoin-project/lotus/chain/actors/builtin"
@ -92,6 +93,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
        partDone()
    }()

    ctx = blockstore.WithHotView(ctx)
    makeVmWithBaseStateAndEpoch := func(base cid.Cid, e abi.ChainEpoch) (*vm.VM, error) {
        vmopt := &vm.VMOpts{
            StateBase: base,
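The WithHotView call tags the context before tipset execution so the splitstore can treat blocks read during execution as hot. The helper itself is not shown in this diff; a plausible minimal shape, assuming the usual context-value pattern, would be:

package blockstore

import "context"

type hotViewKey struct{}

// WithHotView marks the context: block reads made under it belong to a
// "hot" view that the splitstore should keep readily available.
func WithHotView(ctx context.Context) context.Context {
    return context.WithValue(ctx, hotViewKey{}, struct{}{})
}

// IsHotView reports whether the context was marked by WithHotView.
func IsHotView(ctx context.Context) bool {
    return ctx.Value(hotViewKey{}) != nil
}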

View File

@ -467,7 +467,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
    }

    nv := filec.sm.GetNetworkVersion(ctx, b.Header.Height)
-   pl := vm.PricelistByEpoch(baseTs.Height())
+   pl := vm.PricelistByEpoch(b.Header.Height)
    var sumGasLimit int64
    checkMsg := func(msg types.ChainMsg) error {
        m := msg.VMMessage()
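This fix prices block messages with the gas pricelist of the block's own epoch rather than the parent tipset's; the two differ exactly when a pricelist change activates at the block's height. A contrived sketch of the off-by-one, with hypothetical epoch values:

package gassketch

import (
    "github.com/filecoin-project/go-state-types/abi"
    "github.com/filecoin-project/lotus/chain/vm"
)

// Hypothetical: a pricelist change that activates at epoch 100.
// A block at height 100 built on a parent tipset at height 99 must be
// validated with the epoch-100 prices, which the parent-height lookup
// (the old code) would miss.
func pricelistForBlock(parentHeight, blockHeight abi.ChainEpoch) vm.Pricelist {
    _ = vm.PricelistByEpoch(parentHeight) // old lookup, stale across a boundary
    return vm.PricelistByEpoch(blockHeight)
}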

View File

@ -165,13 +165,8 @@ func DefaultUpgradeSchedule() stmgr.UpgradeSchedule {
            Migration: UpgradeActorsV7,
            PreMigrations: []stmgr.PreMigration{{
                PreMigration:    PreUpgradeActorsV7,
-               StartWithin:     120,
+               StartWithin:     180,
                DontStartWithin: 60,
-               StopWithin:      35,
-           }, {
-               PreMigration:    PreUpgradeActorsV7,
-               StartWithin:     30,
-               DontStartWithin: 15,
                StopWithin:      5,
            }},
            Expensive: true,
@ -1264,7 +1259,7 @@ func upgradeActorsV7Common(
    root cid.Cid, epoch abi.ChainEpoch, ts *types.TipSet,
    config nv15.Config,
) (cid.Cid, error) {
-   writeStore := blockstore.NewAutobatch(ctx, sm.ChainStore().StateBlockstore(), units.GiB)
+   writeStore := blockstore.NewAutobatch(ctx, sm.ChainStore().StateBlockstore(), units.GiB/4)
    // TODO: pretty sure we'd achieve nothing by doing this, confirm in review
    //buf := blockstore.NewTieredBstore(sm.ChainStore().StateBlockstore(), writeStore)
    store := store.ActorStore(ctx, writeStore)
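Two knobs move here: the v7 pre-migration is consolidated into a single pass starting 180 epochs (90 minutes at 30s per epoch) before the upgrade, and the migration's Autobatch write buffer drops to a quarter of a GiB, presumably trading some write batching for lower peak memory. The byte math, for reference:

package main

import "fmt"

func main() {
    const gib = 1 << 30  // units.GiB in the go-units library
    fmt.Println(gib)     // 1073741824 bytes: old buffer size
    fmt.Println(gib / 4) // 268435456 bytes, i.e. 256 MiB: new buffer size
}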

View File

@ -106,7 +106,7 @@ func (mp *MessagePool) checkMessages(ctx context.Context, msgs []*types.Message,
    curTs := mp.curTs
    mp.curTsLk.Unlock()

-   epoch := curTs.Height()
+   epoch := curTs.Height() + 1
    var baseFee big.Int
    if len(curTs.Blocks()) > 0 {
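Both this hunk and the matching one in verifyMsgBeforeAdd further below shift gas checks one epoch forward: a message sitting in the pool at tipset height H can be included at height H+1 at the earliest, so it must be priced against the next epoch's pricelist. A minimal sketch of the intent (PricelistByEpoch, OnChainMessage, and ChainLength are the APIs used in these hunks; the wrapper function is illustrative):

package mpoolsketch

import (
    "github.com/filecoin-project/lotus/chain/types"
    "github.com/filecoin-project/lotus/chain/vm"
)

// A pending message's earliest possible inclusion epoch is one past the
// current tipset height, so gas floors are computed against that epoch.
func minGasForInclusion(curTs *types.TipSet, m *types.SignedMessage) vm.GasCharge {
    return vm.PricelistByEpoch(curTs.Height() + 1).OnChainMessage(m.ChainLength())
}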

View File

@ -0,0 +1,224 @@
//stm: #unit
package messagepool
import (
"context"
"fmt"
"testing"
"github.com/ipfs/go-datastore"
logging "github.com/ipfs/go-log/v2"
"github.com/stretchr/testify/assert"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/types/mock"
"github.com/filecoin-project/lotus/chain/wallet"
_ "github.com/filecoin-project/lotus/lib/sigs/bls"
_ "github.com/filecoin-project/lotus/lib/sigs/secp"
)
func init() {
_ = logging.SetLogLevel("*", "INFO")
}
func getCheckMessageStatus(statusCode api.CheckStatusCode, msgStatuses []api.MessageCheckStatus) (*api.MessageCheckStatus, error) {
for i := 0; i < len(msgStatuses); i++ {
iMsgStatuses := msgStatuses[i]
if iMsgStatuses.CheckStatus.Code == statusCode {
return &iMsgStatuses, nil
}
}
return nil, fmt.Errorf("Could not find CheckStatusCode %s", statusCode)
}
func TestCheckMessages(t *testing.T) {
//stm: @CHAIN_MEMPOOL_CHECK_MESSAGES_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil {
t.Fatal(err)
}
sender, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
tma.setBalance(sender, 1000e15)
target := mock.Address(1001)
var protos []*api.MessagePrototype
for i := 0; i < 5; i++ {
msg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: uint64(i),
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 2<<10),
}
proto := &api.MessagePrototype{
Message: *msg,
ValidNonce: true,
}
protos = append(protos, proto)
}
messageStatuses, err := mp.CheckMessages(context.TODO(), protos)
assert.NoError(t, err)
for i := 0; i < len(messageStatuses); i++ {
iMsgStatuses := messageStatuses[i]
for j := 0; j < len(iMsgStatuses); j++ {
jStatus := iMsgStatuses[j]
assert.True(t, jStatus.OK)
}
}
}
func TestCheckPendingMessages(t *testing.T) {
//stm: @CHAIN_MEMPOOL_CHECK_PENDING_MESSAGES_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil {
t.Fatal(err)
}
sender, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
tma.setBalance(sender, 1000e15)
target := mock.Address(1001)
// add a valid message to the pool
msg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 2<<10),
}
sig, err := w.WalletSign(context.TODO(), sender, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
mustAdd(t, mp, sm)
messageStatuses, err := mp.CheckPendingMessages(context.TODO(), sender)
assert.NoError(t, err)
for i := 0; i < len(messageStatuses); i++ {
iMsgStatuses := messageStatuses[i]
for j := 0; j < len(iMsgStatuses); j++ {
jStatus := iMsgStatuses[j]
assert.True(t, jStatus.OK)
}
}
}
func TestCheckReplaceMessages(t *testing.T) {
//stm: @CHAIN_MEMPOOL_CHECK_REPLACE_MESSAGES_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil {
t.Fatal(err)
}
sender, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
tma.setBalance(sender, 1000e15)
target := mock.Address(1001)
// add a valid message to the pool
msg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 2<<10),
}
sig, err := w.WalletSign(context.TODO(), sender, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
mustAdd(t, mp, sm)
// create a new message with the same data, except that it is too big
var msgs []*types.Message
invalidmsg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 128<<10),
}
msgs = append(msgs, invalidmsg)
{
messageStatuses, err := mp.CheckReplaceMessages(context.TODO(), msgs)
if err != nil {
t.Fatal(err)
}
for i := 0; i < len(messageStatuses); i++ {
iMsgStatuses := messageStatuses[i]
status, err := getCheckMessageStatus(api.CheckStatusMessageSize, iMsgStatuses)
if err != nil {
t.Fatal(err)
}
// the replacement message should cause a status error
assert.False(t, status.OK)
}
}
}

View File

@ -628,7 +628,7 @@ func (mp *MessagePool) addLocal(ctx context.Context, m *types.SignedMessage) err
// For non local messages, if the message cannot be included in the next 20 blocks it returns
// a (soft) validation error.
func (mp *MessagePool) verifyMsgBeforeAdd(m *types.SignedMessage, curTs *types.TipSet, local bool) (bool, error) {
-   epoch := curTs.Height()
+   epoch := curTs.Height() + 1
    minGas := vm.PricelistByEpoch(epoch).OnChainMessage(m.ChainLength())

    if err := m.VMMessage().ValidForBlockInclusion(minGas.Total(), build.NewestNetworkVersion); err != nil {

View File

@ -9,6 +9,7 @@ import (
"github.com/filecoin-project/go-address" "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
"github.com/ipfs/go-datastore" "github.com/ipfs/go-datastore"
logging "github.com/ipfs/go-log/v2" logging "github.com/ipfs/go-log/v2"
@ -226,6 +227,8 @@ func mustAdd(t *testing.T, mp *MessagePool, msg *types.SignedMessage) {
} }
func TestMessagePool(t *testing.T) { func TestMessagePool(t *testing.T) {
//stm: @CHAIN_MEMPOOL_GET_NONCE_001
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore()) w, err := wallet.NewWallet(wallet.NewMemKeyStore())
@ -327,6 +330,7 @@ func TestCheckMessageBig(t *testing.T) {
Message: *msg, Message: *msg,
Signature: *sig, Signature: *sig,
} }
//stm: @CHAIN_MEMPOOL_PUSH_001
err = mp.Add(context.TODO(), sm) err = mp.Add(context.TODO(), sm)
assert.ErrorIs(t, err, ErrMessageTooBig) assert.ErrorIs(t, err, ErrMessageTooBig)
} }
@ -760,3 +764,302 @@ func TestUpdates(t *testing.T) {
t.Fatal("expected closed channel, but got an update instead") t.Fatal("expected closed channel, but got an update instead")
} }
} }
func TestMessageBelowMinGasFee(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
// fee is just below minimum gas fee
fee := minimumBaseFee.Uint64() - 1
{
msg := &types.Message{
To: to,
From: from,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(fee),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
sig, err := w.WalletSign(context.TODO(), from, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
err = mp.Add(context.TODO(), sm)
assert.ErrorIs(t, err, ErrGasFeeCapTooLow)
}
}
func TestMessageValueTooHigh(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
totalFil := types.TotalFilecoinInt
extra := types.NewInt(1)
value := types.BigAdd(totalFil, extra)
{
msg := &types.Message{
To: to,
From: from,
Value: value,
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
sig, err := w.WalletSign(context.TODO(), from, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
err = mp.Add(context.TODO(), sm)
assert.Error(t, err)
}
}
func TestMessageSignatureInvalid(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
msg := &types.Message{
To: to,
From: from,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
badSig := &crypto.Signature{
Type: crypto.SigTypeSecp256k1,
Data: make([]byte, 0),
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *badSig,
}
err = mp.Add(context.TODO(), sm)
// assert.Contains(t, err.Error(), "invalid signature length")
assert.Error(t, err)
}
}
func TestAddMessageTwice(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
// create a valid message
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// try to add it twice
err = mp.Add(context.TODO(), sm)
// assert.Contains(t, err.Error(), "with nonce 0 already in mpool")
assert.Error(t, err)
}
}
func TestAddMessageTwiceNonceGap(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
// create message with invalid nonce (1)
sm := makeTestMessage(w, from, to, 1, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// then try to add message again
err = mp.Add(context.TODO(), sm)
// assert.Contains(t, err.Error(), "unfulfilled nonce gap")
assert.Error(t, err)
}
}
func TestAddMessageTwiceCidDiff(t *testing.T) {
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// Create message with different data, so CID is different
sm2 := makeTestMessage(w, from, to, 0, 50_000_001, minimumBaseFee.Uint64())
//stm: @CHAIN_MEMPOOL_PUSH_001
// then try to add message again
err = mp.Add(context.TODO(), sm2)
// assert.Contains(t, err.Error(), "replace by fee has too low GasPremium")
assert.Error(t, err)
}
}
func TestAddMessageTwiceCidDiffReplaced(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// Create message with different data, so CID is different
sm2 := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64()*2)
mustAdd(t, mp, sm2)
}
}
func TestRemoveMessage(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
//stm: @CHAIN_MEMPOOL_REMOVE_001
// remove message for sender
mp.Remove(context.TODO(), from, sm.Message.Nonce, true)
//stm: @CHAIN_MEMPOOL_PENDING_FOR_001
// check messages in pool: should be none present
msgs := mp.pendingFor(context.TODO(), from)
assert.Len(t, msgs, 0)
}
}

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagepool package messagepool
import ( import (
@ -16,6 +17,7 @@ import (
) )
func TestRepubMessages(t *testing.T) { func TestRepubMessages(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001
oldRepublishBatchDelay := RepublishBatchDelay oldRepublishBatchDelay := RepublishBatchDelay
RepublishBatchDelay = time.Microsecond RepublishBatchDelay = time.Microsecond
defer func() { defer func() {
@ -57,6 +59,7 @@ func TestRepubMessages(t *testing.T) {
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1)) m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m) _, err := mp.Push(context.TODO(), m)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagepool

import (
@ -74,6 +75,8 @@ func makeTestMpool() (*MessagePool, *testMpoolAPI) {
}

func TestMessageChains(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001
    //stm: @CHAIN_MEMPOOL_CREATE_MSG_CHAINS_001
    mp, tma := makeTestMpool()

    // the actors
@ -310,6 +313,8 @@ func TestMessageChains(t *testing.T) {
}

func TestMessageChainSkipping(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_CREATE_MSG_CHAINS_001
    // regression test for chain skip bug
    mp, tma := makeTestMpool()
@ -382,6 +387,7 @@ func TestMessageChainSkipping(t *testing.T) {
}

func TestBasicMessageSelection(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    oldMaxNonceGap := MaxNonceGap
    MaxNonceGap = 1000
    defer func() {
@ -532,6 +538,7 @@ func TestBasicMessageSelection(t *testing.T) {
}

func TestMessageSelectionTrimmingGas(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -595,6 +602,7 @@ func TestMessageSelectionTrimmingGas(t *testing.T) {
}

func TestMessageSelectionTrimmingMsgsBasic(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -641,6 +649,7 @@ func TestMessageSelectionTrimmingMsgsBasic(t *testing.T) {
}

func TestMessageSelectionTrimmingMsgsTwoSendersBasic(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -707,6 +716,7 @@ func TestMessageSelectionTrimmingMsgsTwoSendersBasic(t *testing.T) {
}

func TestMessageSelectionTrimmingMsgsTwoSendersAdvanced(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -788,6 +798,7 @@ func TestMessageSelectionTrimmingMsgsTwoSendersAdvanced(t *testing.T) {
}

func TestPriorityMessageSelection(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -867,6 +878,7 @@ func TestPriorityMessageSelection(t *testing.T) {
}

func TestPriorityMessageSelection2(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -934,6 +946,7 @@ func TestPriorityMessageSelection2(t *testing.T) {
}

func TestPriorityMessageSelection3(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    mp, tma := makeTestMpool()

    // the actors
@ -1028,6 +1041,8 @@ func TestPriorityMessageSelection3(t *testing.T) {
}

func TestOptimalMessageSelection1(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    // this test uses just a single actor sending messages with a low tq
    // the chain dependent merging algorithm should pick messages from the actor
    // from the start
@ -1094,6 +1109,8 @@ func TestOptimalMessageSelection1(t *testing.T) {
}

func TestOptimalMessageSelection2(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    // this test uses two actors sending messages to each other, with the first
    // actor paying (much) higher gas premium than the second.
    // We select with a low ticket quality; the chain dependent merging algorithm should pick
@ -1173,6 +1190,8 @@ func TestOptimalMessageSelection2(t *testing.T) {
}

func TestOptimalMessageSelection3(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    // this test uses 10 actors sending a block of messages to each other, with the first
    // actors paying higher gas premium than the subsequent actors.
    // We select with a low ticket quality; the chain dependent merging algorithm should pick
@ -1416,6 +1435,8 @@ func makeZipfPremiumDistribution(rng *rand.Rand) func() uint64 {
}

func TestCompetitiveMessageSelectionExp(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    if testing.Short() {
        t.Skip("skipping in short mode")
    }
@ -1439,6 +1460,8 @@ func TestCompetitiveMessageSelectionExp(t *testing.T) {
}

func TestCompetitiveMessageSelectionZipf(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
    if testing.Short() {
        t.Skip("skipping in short mode")
    }
@ -1462,6 +1485,7 @@ func TestCompetitiveMessageSelectionZipf(t *testing.T) {
}

func TestGasReward(t *testing.T) {
    //stm: @CHAIN_MEMPOOL_GET_GAS_REWARD_001
    tests := []struct {
        Premium uint64
        FeeCap  uint64
@ -1494,6 +1518,8 @@ func TestGasReward(t *testing.T) {
}

func TestRealWorldSelection(t *testing.T) {
    //stm: @TOKEN_WALLET_NEW_001, @TOKEN_WALLET_SIGN_001, @CHAIN_MEMPOOL_SELECT_001
    // load test-messages.json.gz and rewrite the messages so that
    // 1) we map each real actor to a test actor so that we can sign the messages
    // 2) adjust the nonces so that they start from 0

View File

@ -3,6 +3,7 @@ package mock
import (
    "context"
    "fmt"
    "math/rand"

    "github.com/filecoin-project/go-address"
    "github.com/filecoin-project/go-state-types/abi"
@ -24,15 +25,7 @@ func Address(i uint64) address.Address {
}

func MkMessage(from, to address.Address, nonce uint64, w *wallet.LocalWallet) *types.SignedMessage {
-   msg := &types.Message{
-       To:         to,
-       From:       from,
-       Value:      types.NewInt(1),
-       Nonce:      nonce,
-       GasLimit:   1000000,
-       GasFeeCap:  types.NewInt(100),
-       GasPremium: types.NewInt(1),
-   }
+   msg := UnsignedMessage(from, to, nonce)

    sig, err := w.WalletSign(context.TODO(), from, msg.Cid().Bytes(), api.MsgMeta{})
    if err != nil {
@ -96,3 +89,35 @@ func TipSet(blks ...*types.BlockHeader) *types.TipSet {
    }
    return ts
}
// RandomActorAddresses generates count new actor addresses from the provided seed and returns them.
func RandomActorAddresses(seed int64, count int) ([]*address.Address, error) {
randAddrs := make([]*address.Address, count)
source := rand.New(rand.NewSource(seed))
for i := 0; i < count; i++ {
bytes := make([]byte, 32)
_, err := source.Read(bytes)
if err != nil {
return nil, err
}
addr, err := address.NewActorAddress(bytes)
if err != nil {
return nil, err
}
randAddrs[i] = &addr
}
return randAddrs, nil
}
func UnsignedMessage(from, to address.Address, nonce uint64) *types.Message {
return &types.Message{
To: to,
From: from,
Value: types.NewInt(1),
Nonce: nonce,
GasLimit: 1000000,
GasFeeCap: types.NewInt(100),
GasPremium: types.NewInt(1),
}
}
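A hedged usage sketch of the two new helpers together; this test body is hypothetical and not part of the diff:

package mock

import "testing"

// Hypothetical example exercising RandomActorAddresses and UnsignedMessage.
func TestMockHelpers(t *testing.T) {
    addrs, err := RandomActorAddresses(42, 2)
    if err != nil {
        t.Fatal(err)
    }
    // An unsigned message between two deterministically generated actors.
    msg := UnsignedMessage(*addrs[0], *addrs[1], 0)
    if msg.Nonce != 0 {
        t.Fatalf("unexpected nonce %d", msg.Nonce)
    }
}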

View File

@ -0,0 +1,73 @@
//stm: #unit
package wallet
import (
"context"
"testing"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/types"
)
func TestMultiWallet(t *testing.T) {
ctx := context.Background()
local, err := NewWallet(NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
var wallet api.Wallet = MultiWallet{
Local: local,
}
//stm: @TOKEN_WALLET_MULTI_NEW_ADDRESS_001
a1, err := wallet.WalletNew(ctx, types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_MULTI_HAS_001
exists, err := wallet.WalletHas(ctx, a1)
if err != nil {
t.Fatal(err)
}
if !exists {
t.Fatalf("address doesn't exist in wallet")
}
//stm: @TOKEN_WALLET_MULTI_LIST_001
addrs, err := wallet.WalletList(ctx)
if err != nil {
t.Fatal(err)
}
// the newly created address should be the only one listed
if len(addrs) != 1 {
    t.Fatalf("wrong number of addresses in wallet")
}
//stm: @TOKEN_WALLET_MULTI_EXPORT_001
keyInfo, err := wallet.WalletExport(ctx, a1)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_MULTI_IMPORT_001
addr, err := wallet.WalletImport(ctx, keyInfo)
if err != nil {
t.Fatal(err)
}
if addr != a1 {
t.Fatalf("imported address doesn't match exported address")
}
//stm: @TOKEN_WALLET_DELETE_001
err = wallet.WalletDelete(ctx, a1)
if err != nil {
t.Fatal(err)
}
}

chain/wallet/wallet_test.go (new file)
View File

@ -0,0 +1,105 @@
//stm: #unit
package wallet
import (
"context"
"testing"
"github.com/filecoin-project/lotus/chain/types"
"github.com/stretchr/testify/assert"
)
func TestWallet(t *testing.T) {
ctx := context.Background()
w1, err := NewWallet(NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_NEW_001
a1, err := w1.WalletNew(ctx, types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_HAS_001
exists, err := w1.WalletHas(ctx, a1)
if err != nil {
t.Fatal(err)
}
if !exists {
t.Fatalf("address doesn't exist in wallet")
}
w2, err := NewWallet(NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
a2, err := w2.WalletNew(ctx, types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
a3, err := w2.WalletNew(ctx, types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_LIST_001
addrs, err := w2.WalletList(ctx)
if err != nil {
t.Fatal(err)
}
if len(addrs) != 2 {
t.Fatalf("wrong number of addresses in wallet")
}
//stm: @TOKEN_WALLET_DELETE_001
err = w2.WalletDelete(ctx, a2)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_HAS_001
exists, err = w2.WalletHas(ctx, a2)
if err != nil {
t.Fatal(err)
}
if exists {
t.Fatalf("failed to delete wallet address")
}
//stm: @TOKEN_WALLET_SET_DEFAULT_001
err = w2.SetDefault(a3)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_DEFAULT_ADDRESS_001
def, err := w2.GetDefault()
if err != nil {
    t.Fatal(err)
}
assert.Equal(t, a3, def)
//stm: @TOKEN_WALLET_EXPORT_001
keyInfo, err := w2.WalletExport(ctx, a3)
if err != nil {
t.Fatal(err)
}
//stm: @TOKEN_WALLET_IMPORT_001
addr, err := w2.WalletImport(ctx, keyInfo)
if err != nil {
t.Fatal(err)
}
if addr != a3 {
t.Fatalf("imported address doesn't match exported address")
}
}

View File

@ -7,6 +7,7 @@ import (
"encoding/hex" "encoding/hex"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io"
"os" "os"
"os/exec" "os/exec"
"path" "path"
@ -67,6 +68,8 @@ var ChainHeadCmd = &cli.Command{
Name: "head", Name: "head",
Usage: "Print chain head", Usage: "Print chain head",
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -80,7 +83,7 @@ var ChainHeadCmd = &cli.Command{
} }
for _, c := range head.Cids() { for _, c := range head.Cids() {
fmt.Println(c) afmt.Println(c)
} }
return nil return nil
}, },
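All of the cli hunks that follow share one pattern: direct fmt output is routed through an AppFmt bound to the cli.App so that tests can capture command output. The AppFmt type itself is not shown in this diff; a minimal sketch of the shape these call sites assume, using the urfave/cli v2 App's Writer:

package cli

import (
    "fmt"

    ucli "github.com/urfave/cli/v2"
)

// Assumed shape, not from this diff: AppFmt forwards formatted output
// to the app's Writer instead of stdout, so tests can swap in a buffer.
type AppFmt struct {
    app *ucli.App
}

func NewAppFmt(a *ucli.App) *AppFmt {
    return &AppFmt{app: a}
}

func (a *AppFmt) Print(args ...interface{}) {
    fmt.Fprint(a.app.Writer, args...)
}

func (a *AppFmt) Println(args ...interface{}) {
    fmt.Fprintln(a.app.Writer, args...)
}

func (a *AppFmt) Printf(fmtStr string, args ...interface{}) {
    fmt.Fprintf(a.app.Writer, fmtStr, args...)
}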
@ -97,6 +100,8 @@ var ChainGetBlock = &cli.Command{
        },
    },
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)

        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -124,7 +129,7 @@ var ChainGetBlock = &cli.Command{
            return err
        }

-       fmt.Println(string(out))
+       afmt.Println(string(out))
        return nil
    }
@ -163,9 +168,8 @@ var ChainGetBlock = &cli.Command{
        return err
    }

-   fmt.Println(string(out))
+   afmt.Println(string(out))
    return nil
    },
}
@ -182,6 +186,8 @@ var ChainReadObjCmd = &cli.Command{
    Usage:     "Read the raw bytes of an object",
    ArgsUsage: "[objectCid]",
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)

        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -199,7 +205,7 @@ var ChainReadObjCmd = &cli.Command{
            return err
        }

-       fmt.Printf("%x\n", obj)
+       afmt.Printf("%x\n", obj)
        return nil
    },
}
@ -215,6 +221,8 @@ var ChainDeleteObjCmd = &cli.Command{
        },
    },
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)

        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -236,7 +244,7 @@ var ChainDeleteObjCmd = &cli.Command{
            return err
        }

-       fmt.Printf("Obj %s deleted\n", c.String())
+       afmt.Printf("Obj %s deleted\n", c.String())
        return nil
    },
}
@ -257,6 +265,7 @@ var ChainStatObjCmd = &cli.Command{
        },
    },
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)
        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -282,8 +291,8 @@ var ChainStatObjCmd = &cli.Command{
            return err
        }

-       fmt.Printf("Links: %d\n", stats.Links)
-       fmt.Printf("Size: %s (%d)\n", types.SizeStr(types.NewInt(stats.Size)), stats.Size)
+       afmt.Printf("Links: %d\n", stats.Links)
+       afmt.Printf("Size: %s (%d)\n", types.SizeStr(types.NewInt(stats.Size)), stats.Size)
        return nil
    },
}
@ -293,6 +302,8 @@ var ChainGetMsgCmd = &cli.Command{
    Usage:     "Get and print a message by its cid",
    ArgsUsage: "[messageCid]",
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)

        if !cctx.Args().Present() {
            return fmt.Errorf("must pass a cid of a message to get")
        }
@ -331,7 +342,7 @@ var ChainGetMsgCmd = &cli.Command{
            return err
        }

-       fmt.Println(string(enc))
+       afmt.Println(string(enc))
        return nil
    },
}
@ -406,6 +417,7 @@ var ChainInspectUsage = &cli.Command{
        },
    },
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)
        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -507,23 +519,23 @@ var ChainInspectUsage = &cli.Command{
        numRes := cctx.Int("num-results")

-       fmt.Printf("Total Gas Limit: %d\n", sum)
-       fmt.Printf("By Sender:\n")
+       afmt.Printf("Total Gas Limit: %d\n", sum)
+       afmt.Printf("By Sender:\n")
        for i := 0; i < numRes && i < len(senderVals); i++ {
            sv := senderVals[i]
-           fmt.Printf("%s\t%0.2f%%\t(total: %d, count: %d)\n", sv.Key, (100*float64(sv.Gas))/float64(sum), sv.Gas, bySenderC[sv.Key])
+           afmt.Printf("%s\t%0.2f%%\t(total: %d, count: %d)\n", sv.Key, (100*float64(sv.Gas))/float64(sum), sv.Gas, bySenderC[sv.Key])
        }

-       fmt.Println()
-       fmt.Printf("By Receiver:\n")
+       afmt.Println()
+       afmt.Printf("By Receiver:\n")
        for i := 0; i < numRes && i < len(destVals); i++ {
            sv := destVals[i]
-           fmt.Printf("%s\t%0.2f%%\t(total: %d, count: %d)\n", sv.Key, (100*float64(sv.Gas))/float64(sum), sv.Gas, byDestC[sv.Key])
+           afmt.Printf("%s\t%0.2f%%\t(total: %d, count: %d)\n", sv.Key, (100*float64(sv.Gas))/float64(sum), sv.Gas, byDestC[sv.Key])
        }

-       fmt.Println()
-       fmt.Printf("By Method:\n")
+       afmt.Println()
+       afmt.Printf("By Method:\n")
        for i := 0; i < numRes && i < len(methodVals); i++ {
            sv := methodVals[i]
-           fmt.Printf("%s\t%0.2f%%\t(total: %d, count: %d)\n", sv.Key, (100*float64(sv.Gas))/float64(sum), sv.Gas, byMethodC[sv.Key])
+           afmt.Printf("%s\t%0.2f%%\t(total: %d, count: %d)\n", sv.Key, (100*float64(sv.Gas))/float64(sum), sv.Gas, byMethodC[sv.Key])
        }

        return nil
@ -548,6 +560,7 @@ var ChainListCmd = &cli.Command{
        },
    },
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)
        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -595,7 +608,7 @@ var ChainListCmd = &cli.Command{
            tss = otss
            for i, ts := range tss {
                pbf := ts.Blocks()[0].ParentBaseFee
-               fmt.Printf("%d: %d blocks (baseFee: %s -> maxFee: %s)\n", ts.Height(), len(ts.Blocks()), ts.Blocks()[0].ParentBaseFee, types.FIL(types.BigMul(pbf, types.NewInt(uint64(build.BlockGasLimit)))))
+               afmt.Printf("%d: %d blocks (baseFee: %s -> maxFee: %s)\n", ts.Height(), len(ts.Blocks()), ts.Blocks()[0].ParentBaseFee, types.FIL(types.BigMul(pbf, types.NewInt(uint64(build.BlockGasLimit)))))

                for _, b := range ts.Blocks() {
                    msgs, err := api.ChainGetBlockMessages(ctx, b.Cid())
@ -621,7 +634,7 @@ var ChainListCmd = &cli.Command{
                        avgpremium = big.Div(psum, big.NewInt(int64(lenmsgs)))
                    }
-                   fmt.Printf("\t%s: \t%d msgs, gasLimit: %d / %d (%0.2f%%), avgPremium: %s\n", b.Miner, len(msgs.BlsMessages)+len(msgs.SecpkMessages), limitSum, build.BlockGasLimit, 100*float64(limitSum)/float64(build.BlockGasLimit), avgpremium)
+                   afmt.Printf("\t%s: \t%d msgs, gasLimit: %d / %d (%0.2f%%), avgPremium: %s\n", b.Miner, len(msgs.BlsMessages)+len(msgs.SecpkMessages), limitSum, build.BlockGasLimit, 100*float64(limitSum)/float64(build.BlockGasLimit), avgpremium)
                }
                if i < len(tss)-1 {
                    msgs, err := api.ChainGetParentMessages(ctx, tss[i+1].Blocks()[0].Cid())
@ -646,13 +659,13 @@ var ChainListCmd = &cli.Command{
                    gasEfficiency := 100 * float64(gasUsed) / float64(limitSum)
                    gasCapacity := 100 * float64(limitSum) / float64(build.BlockGasLimit)

-                   fmt.Printf("\ttipset: \t%d msgs, %d (%0.2f%%) / %d (%0.2f%%)\n", len(msgs), gasUsed, gasEfficiency, limitSum, gasCapacity)
+                   afmt.Printf("\ttipset: \t%d msgs, %d (%0.2f%%) / %d (%0.2f%%)\n", len(msgs), gasUsed, gasEfficiency, limitSum, gasCapacity)
                }
-               fmt.Println()
+               afmt.Println()
            }
        } else {
            for i := len(tss) - 1; i >= 0; i-- {
-               printTipSet(cctx.String("format"), tss[i])
+               printTipSet(cctx.String("format"), tss[i], afmt)
            }
        }
        return nil
@ -707,6 +720,8 @@ var ChainGetCmd = &cli.Command{
 - account-state
`,
    Action: func(cctx *cli.Context) error {
        afmt := NewAppFmt(cctx.App)

        api, closer, err := GetFullNodeAPI(cctx)
        if err != nil {
            return err
@ -725,7 +740,7 @@ var ChainGetCmd = &cli.Command{
            p = "/ipfs/" + ts.ParentState().String() + p
            if cctx.Bool("verbose") {
-               fmt.Println(p)
+               afmt.Println(p)
            }
        }
@ -740,7 +755,7 @@ var ChainGetCmd = &cli.Command{
            if err != nil {
                return err
            }
-           fmt.Println(string(b))
+           afmt.Println(string(b))
            return nil
        }
@ -782,7 +797,7 @@ var ChainGetCmd = &cli.Command{
        }

        if cbu == nil {
-           fmt.Printf("%x", raw)
+           afmt.Printf("%x", raw)
            return nil
        }
@ -794,7 +809,7 @@ var ChainGetCmd = &cli.Command{
        if err != nil {
            return err
        }
-       fmt.Println(string(b))
+       afmt.Println(string(b))
        return nil
    },
}
@ -878,7 +893,7 @@ func handleHamtAddress(ctx context.Context, api v0api.FullNode, r cid.Cid) error
}) })
} }
func printTipSet(format string, ts *types.TipSet) { func printTipSet(format string, ts *types.TipSet, afmt *AppFmt) {
format = strings.ReplaceAll(format, "<height>", fmt.Sprint(ts.Height())) format = strings.ReplaceAll(format, "<height>", fmt.Sprint(ts.Height()))
format = strings.ReplaceAll(format, "<time>", time.Unix(int64(ts.MinTimestamp()), 0).Format(time.Stamp)) format = strings.ReplaceAll(format, "<time>", time.Unix(int64(ts.MinTimestamp()), 0).Format(time.Stamp))
blks := "[ " blks := "[ "
@ -897,7 +912,7 @@ func printTipSet(format string, ts *types.TipSet) {
format = strings.ReplaceAll(format, "<blocks>", blks) format = strings.ReplaceAll(format, "<blocks>", blks)
format = strings.ReplaceAll(format, "<weight>", fmt.Sprint(ts.Blocks()[0].ParentWeight)) format = strings.ReplaceAll(format, "<weight>", fmt.Sprint(ts.Blocks()[0].ParentWeight))
fmt.Println(format) afmt.Println(format)
} }
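
printTipSet fills a caller-supplied template by plain string substitution: <height>, <time>, <blocks> and <weight> are each replaced with the corresponding value from the tipset. A minimal runnable sketch of the same substitution, using hypothetical stand-in values instead of a real *types.TipSet (the template shown is an assumed example of a --format value, not necessarily the flag's default):

package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	// Assumed example template; the real command takes this via --format.
	format := "<height>: (<time>) <blocks>"
	format = strings.ReplaceAll(format, "<height>", fmt.Sprint(123))
	format = strings.ReplaceAll(format, "<time>", time.Unix(1646146800, 0).Format(time.Stamp))
	format = strings.ReplaceAll(format, "<blocks>", "[ bafy2bzace... ]")
	fmt.Println(format) // "123: (Mar  1 15:00:00) [ bafy2bzace... ]" when run in UTC
}
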
var ChainBisectCmd = &cli.Command{ var ChainBisectCmd = &cli.Command{
@ -918,6 +933,8 @@ var ChainBisectCmd = &cli.Command{
For special path elements see 'chain get' help For special path elements see 'chain get' help
`, `,
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -961,7 +978,7 @@ var ChainBisectCmd = &cli.Command{
} }
path := "/ipld/" + midTs.ParentState().String() + "/" + subPath path := "/ipld/" + midTs.ParentState().String() + "/" + subPath
fmt.Printf("* Testing %d (%d - %d) (%s): ", mid, start, end, path) afmt.Printf("* Testing %d (%d - %d) (%s): ", mid, start, end, path)
nd, err := api.ChainGetNode(ctx, path) nd, err := api.ChainGetNode(ctx, path)
if err != nil { if err != nil {
@ -988,32 +1005,32 @@ var ChainBisectCmd = &cli.Command{
if strings.TrimSpace(out.String()) != "false" { if strings.TrimSpace(out.String()) != "false" {
end = mid end = mid
highest = midTs highest = midTs
fmt.Println("true") afmt.Println("true")
} else { } else {
start = mid start = mid
fmt.Printf("false (cli)\n") afmt.Printf("false (cli)\n")
} }
case *exec.ExitError: case *exec.ExitError:
if len(serr.String()) > 0 { if len(serr.String()) > 0 {
fmt.Println("error") afmt.Println("error")
fmt.Printf("> Command: %s\n---->\n", strings.Join(cctx.Args().Slice()[3:], " ")) afmt.Printf("> Command: %s\n---->\n", strings.Join(cctx.Args().Slice()[3:], " "))
fmt.Println(string(b)) afmt.Println(string(b))
fmt.Println("<----") afmt.Println("<----")
return xerrors.Errorf("error running bisect check: %s", serr.String()) return xerrors.Errorf("error running bisect check: %s", serr.String())
} }
start = mid start = mid
fmt.Println("false") afmt.Println("false")
default: default:
return err return err
} }
if start == end { if start == end {
if strings.TrimSpace(out.String()) == "true" { if strings.TrimSpace(out.String()) == "true" {
fmt.Println(midTs.Height()) afmt.Println(midTs.Height())
} else { } else {
fmt.Println(prev) afmt.Println(prev)
} }
return nil return nil
} }
@ -1058,7 +1075,7 @@ var ChainExportCmd = &cli.Command{
return fmt.Errorf("\"recent-stateroots\" has to be greater than %d", build.Finality) return fmt.Errorf("\"recent-stateroots\" has to be greater than %d", build.Finality)
} }
fi, err := os.Create(cctx.Args().First()) fi, err := createExportFile(cctx.App, cctx.Args().First())
if err != nil { if err != nil {
return err return err
} }
@ -1118,6 +1135,8 @@ var SlashConsensusFault = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
srv, err := GetFullNodeServices(cctx) srv, err := GetFullNodeServices(cctx)
if err != nil { if err != nil {
return err return err
@ -1222,7 +1241,7 @@ var SlashConsensusFault = &cli.Command{
return err return err
} }
fmt.Println(smsg.Cid()) afmt.Println(smsg.Cid())
return nil return nil
}, },
@ -1232,6 +1251,8 @@ var ChainGasPriceCmd = &cli.Command{
Name: "gas-price", Name: "gas-price",
Usage: "Estimate gas prices", Usage: "Estimate gas prices",
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -1248,7 +1269,7 @@ var ChainGasPriceCmd = &cli.Command{
return err return err
} }
fmt.Printf("%d blocks: %s (%s)\n", nblocks, est, types.FIL(est)) afmt.Printf("%d blocks: %s (%s)\n", nblocks, est, types.FIL(est))
} }
return nil return nil
@ -1278,6 +1299,8 @@ var chainDecodeParamsCmd = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -1329,7 +1352,7 @@ var chainDecodeParamsCmd = &cli.Command{
return err return err
} }
fmt.Println(pstr) afmt.Println(pstr)
return nil return nil
}, },
@ -1362,6 +1385,8 @@ var chainEncodeParamsCmd = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
if cctx.Args().Len() != 3 { if cctx.Args().Len() != 3 {
return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments")) return ShowHelp(cctx, fmt.Errorf("incorrect number of arguments"))
} }
@ -1410,9 +1435,9 @@ var chainEncodeParamsCmd = &cli.Command{
switch cctx.String("encoding") { switch cctx.String("encoding") {
case "base64", "b64": case "base64", "b64":
fmt.Println(base64.StdEncoding.EncodeToString(p)) afmt.Println(base64.StdEncoding.EncodeToString(p))
case "hex": case "hex":
fmt.Println(hex.EncodeToString(p)) afmt.Println(hex.EncodeToString(p))
default: default:
return xerrors.Errorf("unknown encoding") return xerrors.Errorf("unknown encoding")
} }
@ -1420,3 +1445,16 @@ var chainEncodeParamsCmd = &cli.Command{
return nil return nil
}, },
} }
// createExportFile returns the export file handle injected via app metadata, or creates a new file at the given path if none was injected
func createExportFile(app *cli.App, path string) (io.WriteCloser, error) {
if wc, ok := app.Metadata["export-file"]; ok {
return wc.(io.WriteCloser), nil
}
fi, err := os.Create(path)
if err != nil {
return nil, err
}
return fi, nil
}
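
createExportFile exists for testability: a test can seed app.Metadata["export-file"] with any io.WriteCloser and chain export will write there instead of touching the filesystem; cli/chain_test.go below does exactly that with its mockExportFile. A minimal sketch of the injection, using a hypothetical bufferCloser helper:

// bufferCloser is a hypothetical in-memory io.WriteCloser for tests.
type bufferCloser struct{ *bytes.Buffer }

func (b bufferCloser) Close() error { return nil }

// In a test, before app.Run:
//	sink := bufferCloser{new(bytes.Buffer)}
//	app.Metadata["export-file"] = sink
//	// run "chain export whatever.car", then assert on sink.Bytes()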

cli/chain_test.go (new file, +557 lines)

@ -0,0 +1,557 @@
//stm: #cli
package cli
import (
"bytes"
"context"
"encoding/json"
"fmt"
"regexp"
"strings"
"testing"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/api"
types "github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/types/mock"
"github.com/filecoin-project/specs-actors/v7/actors/builtin"
"github.com/golang/mock/gomock"
cid "github.com/ipfs/go-cid"
"github.com/stretchr/testify/assert"
)
func TestChainHead(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("chain", ChainHeadCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ts := mock.TipSet(mock.MkBlock(nil, 0, 0))
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(ts, nil),
)
//stm: @CLI_CHAIN_HEAD_001
err := app.Run([]string{"chain", "head"})
assert.NoError(t, err)
assert.Regexp(t, regexp.MustCompile(ts.Cids()[0].String()), buf.String())
}
func TestGetBlock(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("chain", ChainGetBlock))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
block := mock.MkBlock(nil, 0, 0)
blockMsgs := api.BlockMessages{}
gomock.InOrder(
mockApi.EXPECT().ChainGetBlock(ctx, block.Cid()).Return(block, nil),
mockApi.EXPECT().ChainGetBlockMessages(ctx, block.Cid()).Return(&blockMsgs, nil),
mockApi.EXPECT().ChainGetParentMessages(ctx, block.Cid()).Return([]api.Message{}, nil),
mockApi.EXPECT().ChainGetParentReceipts(ctx, block.Cid()).Return([]*types.MessageReceipt{}, nil),
)
//stm: @CLI_CHAIN_GET_BLOCK_001
err := app.Run([]string{"chain", "getblock", block.Cid().String()})
assert.NoError(t, err)
// expected output format
out := struct {
types.BlockHeader
BlsMessages []*types.Message
SecpkMessages []*types.SignedMessage
ParentReceipts []*types.MessageReceipt
ParentMessages []cid.Cid
}{}
err = json.Unmarshal(buf.Bytes(), &out)
assert.NoError(t, err)
assert.True(t, block.Cid().Equals(out.Cid()))
}
func TestReadObj(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("chain", ChainReadObjCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
block := mock.MkBlock(nil, 0, 0)
obj := new(bytes.Buffer)
err := block.MarshalCBOR(obj)
assert.NoError(t, err)
gomock.InOrder(
mockApi.EXPECT().ChainReadObj(ctx, block.Cid()).Return(obj.Bytes(), nil),
)
//stm: @CLI_CHAIN_READ_OBJECT_001
err = app.Run([]string{"chain", "read-obj", block.Cid().String()})
assert.NoError(t, err)
assert.Equal(t, buf.String(), fmt.Sprintf("%x\n", obj.Bytes()))
}
func TestChainDeleteObj(t *testing.T) {
cmd := WithCategory("chain", ChainDeleteObjCmd)
block := mock.MkBlock(nil, 0, 0)
// given no force flag, it should return an error and no API calls should be made
t.Run("no-really-do-it", func(t *testing.T) {
app, _, _, done := NewMockAppWithFullAPI(t, cmd)
defer done()
//stm: @CLI_CHAIN_DELETE_OBJECT_002
err := app.Run([]string{"chain", "delete-obj", block.Cid().String()})
assert.Error(t, err)
})
// given a force flag, it calls API delete
t.Run("really-do-it", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainDeleteObj(ctx, block.Cid()).Return(nil),
)
//stm: @CLI_CHAIN_DELETE_OBJECT_001
err := app.Run([]string{"chain", "delete-obj", "--really-do-it=true", block.Cid().String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), block.Cid().String())
})
}
func TestChainStatObj(t *testing.T) {
cmd := WithCategory("chain", ChainStatObjCmd)
block := mock.MkBlock(nil, 0, 0)
stat := api.ObjStat{Size: 123, Links: 321}
checkOutput := func(buf *bytes.Buffer) {
out := buf.String()
outSplit := strings.Split(out, "\n")
assert.Contains(t, outSplit[0], fmt.Sprintf("%d", stat.Links))
assert.Contains(t, outSplit[1], fmt.Sprintf("%d", stat.Size))
}
// given no --base flag, it calls ChainStatObj with base=cid.Undef
t.Run("no-base", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainStatObj(ctx, block.Cid(), cid.Undef).Return(stat, nil),
)
//stm: @CLI_CHAIN_STAT_OBJECT_001
err := app.Run([]string{"chain", "stat-obj", block.Cid().String()})
assert.NoError(t, err)
checkOutput(buf)
})
// given a --base flag, it calls ChainStatObj with that base
t.Run("base", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainStatObj(ctx, block.Cid(), block.Cid()).Return(stat, nil),
)
//stm: @CLI_CHAIN_STAT_OBJECT_002
err := app.Run([]string{"chain", "stat-obj", fmt.Sprintf("-base=%s", block.Cid().String()), block.Cid().String()})
assert.NoError(t, err)
checkOutput(buf)
})
}
func TestChainGetMsg(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("chain", ChainGetMsgCmd))
defer done()
addrs, err := mock.RandomActorAddresses(12345, 2)
assert.NoError(t, err)
from := addrs[0]
to := addrs[1]
msg := mock.UnsignedMessage(*from, *to, 0)
obj := new(bytes.Buffer)
err = msg.MarshalCBOR(obj)
assert.NoError(t, err)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainReadObj(ctx, msg.Cid()).Return(obj.Bytes(), nil),
)
//stm: @CLI_CHAIN_GET_MESSAGE_001
err = app.Run([]string{"chain", "getmessage", msg.Cid().String()})
assert.NoError(t, err)
var out types.Message
err = json.Unmarshal(buf.Bytes(), &out)
assert.NoError(t, err)
assert.Equal(t, *msg, out)
}
func TestSetHead(t *testing.T) {
cmd := WithCategory("chain", ChainSetHeadCmd)
genesis := mock.TipSet(mock.MkBlock(nil, 0, 0))
ts := mock.TipSet(mock.MkBlock(genesis, 1, 0))
epoch := abi.ChainEpoch(uint64(0))
// given the -genesis flag, resets head to genesis ignoring the provided ts positional argument
t.Run("genesis", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainGetGenesis(ctx).Return(genesis, nil),
mockApi.EXPECT().ChainSetHead(ctx, genesis.Key()).Return(nil),
)
//stm: @CLI_CHAIN_SET_HEAD_003
err := app.Run([]string{"chain", "sethead", "-genesis=true", ts.Key().String()})
assert.NoError(t, err)
})
// given the -epoch flag, resets head to given epoch, ignoring the provided ts positional argument
t.Run("epoch", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainGetTipSetByHeight(ctx, epoch, types.EmptyTSK).Return(genesis, nil),
mockApi.EXPECT().ChainSetHead(ctx, genesis.Key()).Return(nil),
)
//stm: @CLI_CHAIN_SET_HEAD_002
err := app.Run([]string{"chain", "sethead", fmt.Sprintf("-epoch=%s", epoch), ts.Key().String()})
assert.NoError(t, err)
})
// given no flag, resets the head to given tipset key
t.Run("default", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainGetBlock(ctx, ts.Key().Cids()[0]).Return(ts.Blocks()[0], nil),
mockApi.EXPECT().ChainSetHead(ctx, ts.Key()).Return(nil),
)
//stm: @CLI_CHAIN_SET_HEAD_001
err := app.Run([]string{"chain", "sethead", ts.Key().Cids()[0].String()})
assert.NoError(t, err)
})
}
func TestInspectUsage(t *testing.T) {
cmd := WithCategory("chain", ChainInspectUsage)
ts := mock.TipSet(mock.MkBlock(nil, 0, 0))
addrs, err := mock.RandomActorAddresses(12345, 2)
assert.NoError(t, err)
from := addrs[0]
to := addrs[1]
msg := mock.UnsignedMessage(*from, *to, 0)
msgs := []api.Message{{Cid: msg.Cid(), Message: msg}}
actor := &types.Actor{
Code: builtin.StorageMarketActorCodeID,
Nonce: 0,
Balance: big.NewInt(1000000000),
}
t.Run("default", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(ts, nil),
mockApi.EXPECT().ChainGetParentMessages(ctx, ts.Blocks()[0].Cid()).Return(msgs, nil),
mockApi.EXPECT().ChainGetTipSet(ctx, ts.Parents()).Return(nil, nil),
mockApi.EXPECT().StateGetActor(ctx, *to, ts.Key()).Return(actor, nil),
)
//stm: @CLI_CHAIN_INSPECT_USAGE_001
err := app.Run([]string{"chain", "inspect-usage"})
assert.NoError(t, err)
out := buf.String()
// output is plaintext, so we assert on substrings
assert.Contains(t, out, from.String())
assert.Contains(t, out, to.String())
// check for gas by sender
assert.Contains(t, out, "By Sender")
// check for gas by method
assert.Contains(t, out, "By Method:\nSend")
})
}
func TestChainList(t *testing.T) {
cmd := WithCategory("chain", ChainListCmd)
genesis := mock.TipSet(mock.MkBlock(nil, 0, 0))
blk := mock.MkBlock(genesis, 0, 0)
blk.Height = 1
head := mock.TipSet(blk)
addrs, err := mock.RandomActorAddresses(12345, 2)
assert.NoError(t, err)
from := addrs[0]
to := addrs[1]
msg := mock.UnsignedMessage(*from, *to, 0)
msgs := []api.Message{{Cid: msg.Cid(), Message: msg}}
blockMsgs := &api.BlockMessages{}
receipts := []*types.MessageReceipt{}
t.Run("default", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// the same mocked method is expected multiple times because it's called in a loop over all tipsets (2 in this case)
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(head, nil),
mockApi.EXPECT().ChainGetTipSet(ctx, head.Parents()).Return(genesis, nil),
mockApi.EXPECT().ChainGetBlockMessages(ctx, genesis.Blocks()[0].Cid()).Return(blockMsgs, nil),
mockApi.EXPECT().ChainGetParentMessages(ctx, head.Blocks()[0].Cid()).Return(msgs, nil),
mockApi.EXPECT().ChainGetParentReceipts(ctx, head.Blocks()[0].Cid()).Return(receipts, nil),
mockApi.EXPECT().ChainGetBlockMessages(ctx, head.Blocks()[0].Cid()).Return(blockMsgs, nil),
)
//stm: @CLI_CHAIN_LIST_001
err := app.Run([]string{"chain", "love", "--gas-stats=true"}) // chain is love ❤️
assert.NoError(t, err)
out := buf.String()
// should print out 2 blocks, indexed with 0: and 1:
assert.Contains(t, out, "0:")
assert.Contains(t, out, "1:")
})
}
func TestChainGet(t *testing.T) {
blk := mock.MkBlock(nil, 0, 0)
ts := mock.TipSet(blk)
cmd := WithCategory("chain", ChainGetCmd)
// given no -as-type flag & ipfs prefix, should print object as JSON if it's marshalable
t.Run("ipfs", func(t *testing.T) {
path := fmt.Sprintf("/ipfs/%s", blk.Cid().String())
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainGetNode(ctx, path).Return(&api.IpldObject{Cid: blk.Cid(), Obj: blk}, nil),
)
//stm: @CLI_CHAIN_GET_001
err := app.Run([]string{"chain", "get", path})
assert.NoError(t, err)
var out types.BlockHeader
err = json.Unmarshal(buf.Bytes(), &out)
assert.NoError(t, err)
assert.Equal(t, *blk, out)
})
// given no -as-type flag & pstate prefix, should traverse from head.ParentStateRoot and print JSON if it's marshalable
t.Run("pstate", func(t *testing.T) {
p1 := "/pstate"
p2 := fmt.Sprintf("/ipfs/%s", ts.ParentState().String())
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(ts, nil),
mockApi.EXPECT().ChainGetNode(ctx, p2).Return(&api.IpldObject{Cid: blk.Cid(), Obj: blk}, nil),
)
//stm: @CLI_CHAIN_GET_002
err := app.Run([]string{"chain", "get", p1})
assert.NoError(t, err)
var out types.BlockHeader
err = json.Unmarshal(buf.Bytes(), &out)
assert.NoError(t, err)
assert.Equal(t, *blk, out)
})
// given an unknown -as-type value, return an error
t.Run("unknown-type", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, cmd)
defer done()
path := fmt.Sprintf("/ipfs/%s", blk.Cid().String())
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainGetNode(ctx, path).Return(&api.IpldObject{Cid: blk.Cid(), Obj: blk}, nil),
)
//stm: @CLI_CHAIN_GET_004
err := app.Run([]string{"chain", "get", "-as-type=foo", path})
assert.Error(t, err)
})
}
func TestChainBisect(t *testing.T) {
blk1 := mock.MkBlock(nil, 0, 0)
blk1.Height = 0
ts1 := mock.TipSet(blk1)
blk2 := mock.MkBlock(ts1, 0, 0)
blk2.Height = 1
ts2 := mock.TipSet(blk2)
subpath := "whatever/its/mocked"
minHeight := uint64(0)
maxHeight := uint64(1)
shell := "echo"
path := fmt.Sprintf("/ipld/%s/%s", ts2.ParentState(), subpath)
cmd := WithCategory("chain", ChainBisectCmd)
app, mockApi, buf, done := NewMockAppWithFullAPI(t, cmd)
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
gomock.InOrder(
mockApi.EXPECT().ChainGetTipSetByHeight(ctx, abi.ChainEpoch(maxHeight), types.EmptyTSK).Return(ts2, nil),
mockApi.EXPECT().ChainGetTipSetByHeight(ctx, abi.ChainEpoch(maxHeight), ts2.Key()).Return(ts2, nil),
mockApi.EXPECT().ChainGetNode(ctx, path).Return(&api.IpldObject{Cid: blk2.Cid(), Obj: blk2}, nil),
)
//stm: @CLI_CHAIN_BISECT_001
err := app.Run([]string{"chain", "bisect", fmt.Sprintf("%d", minHeight), fmt.Sprintf("%d", maxHeight), subpath, shell})
assert.NoError(t, err)
out := buf.String()
assert.Contains(t, out, path)
}
func TestChainExport(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("chain", ChainExportCmd))
defer done()
// export writes to a file; we mock it here so there are no side effects
mockFile := mockExportFile{new(bytes.Buffer)}
app.Metadata["export-file"] = mockFile
blk := mock.MkBlock(nil, 0, 0)
ts := mock.TipSet(blk)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
export := make(chan []byte, 2)
expBytes := []byte("whatever")
export <- expBytes
export <- []byte{} // empty slice means export is complete
close(export)
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(ts, nil),
mockApi.EXPECT().ChainExport(ctx, abi.ChainEpoch(0), false, ts.Key()).Return(export, nil),
)
//stm: @CLI_CHAIN_EXPORT_001
err := app.Run([]string{"chain", "export", "whatever.car"})
assert.NoError(t, err)
assert.Equal(t, expBytes, mockFile.Bytes())
}
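
The two chunks queued above encode the streaming contract this test relies on: ChainExport returns a chan []byte, and an empty slice marks end-of-stream. A sketch of the consumer side, assuming the command simply drains the channel into the export file handle:

// Hypothetical drain loop matching the channel contract above
// (empty chunk = export complete).
for chunk := range export {
	if len(chunk) == 0 {
		break
	}
	if _, err := fi.Write(chunk); err != nil {
		return err
	}
}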
func TestChainGasPrice(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("chain", ChainGasPriceCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// the implementation calls GasEstimateGasPremium for a range of block counts,
// so we mock it, count the calls, and expect that many result lines printed
calls := 0
mockApi.
EXPECT().
GasEstimateGasPremium(ctx, gomock.Any(), builtin.SystemActorAddr, int64(10000), types.EmptyTSK).
Return(big.NewInt(0), nil).
AnyTimes().
Do(func(a, b, c, d, e interface{}) { // looks funny, but we don't care about args here, just counting
calls++
})
//stm: @CLI_CHAIN_GAS_PRICE_001
err := app.Run([]string{"chain", "gas-price"})
assert.NoError(t, err)
lines := strings.Split(strings.Trim(buf.String(), "\n"), "\n")
assert.Equal(t, calls, len(lines))
}
type mockExportFile struct {
*bytes.Buffer
}
func (mef mockExportFile) Close() error {
return nil
}


@ -667,6 +667,8 @@ uiLoop:
state = "miner" state = "miner"
case "miner": case "miner":
maddrs = maddrs[:0]
ask = ask[:0]
afmt.Print("Miner Addresses (f0.. f0..), none to find: ") afmt.Print("Miner Addresses (f0.. f0..), none to find: ")
_maddrsStr, _, err := rl.ReadLine() _maddrsStr, _, err := rl.ReadLine()
@ -802,7 +804,8 @@ uiLoop:
dealCount, err = strconv.ParseInt(string(dealcStr), 10, 64) dealCount, err = strconv.ParseInt(string(dealcStr), 10, 64)
if err != nil { if err != nil {
return err printErr(xerrors.Errorf("reading deal count: invalid number"))
continue
} }
color.Blue(".. Picking miners") color.Blue(".. Picking miners")
@ -859,12 +862,13 @@ uiLoop:
a, err := api.ClientQueryAsk(ctx, *mi.PeerId, maddr) a, err := api.ClientQueryAsk(ctx, *mi.PeerId, maddr)
if err != nil { if err != nil {
printErr(xerrors.Errorf("failed to query ask: %w", err)) printErr(xerrors.Errorf("failed to query ask for miner %s: %w", maddr.String(), err))
state = "miner" state = "miner"
continue uiLoop continue uiLoop
} }
ask = append(ask, *a) ask = append(ask, *a)
} }
// TODO: run more validation // TODO: run more validation


@ -1,7 +1,9 @@
package cli package cli
import ( import (
"bytes"
"context" "context"
"encoding/hex"
"fmt" "fmt"
verifreg4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/verifreg" verifreg4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/verifreg"
@ -34,6 +36,7 @@ var filplusCmd = &cli.Command{
filplusListClientsCmd, filplusListClientsCmd,
filplusCheckClientCmd, filplusCheckClientCmd,
filplusCheckNotaryCmd, filplusCheckNotaryCmd,
filplusSignRemoveDataCapProposal,
}, },
} }
@ -274,3 +277,112 @@ func checkNotary(ctx context.Context, api v0api.FullNode, vaddr address.Address)
return st.VerifierDataCap(vid) return st.VerifierDataCap(vid)
} }
var filplusSignRemoveDataCapProposal = &cli.Command{
Name: "sign-remove-data-cap-proposal",
Usage: "allows a notary to sign a Remove Data Cap Proposal",
Flags: []cli.Flag{
&cli.Int64Flag{
Name: "id",
Usage: "specify the RemoveDataCapProposal ID (will look up on chain if unspecified)",
Required: false,
},
},
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 3 {
return fmt.Errorf("must specify three arguments: notary address, client address, and allowance to remove")
}
api, closer, err := GetFullNodeAPI(cctx)
if err != nil {
return xerrors.Errorf("failed to get full node api: %w", err)
}
defer closer()
ctx := ReqContext(cctx)
act, err := api.StateGetActor(ctx, verifreg.Address, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("failed to get verifreg actor: %w", err)
}
apibs := blockstore.NewAPIBlockstore(api)
store := adt.WrapStore(ctx, cbor.NewCborStore(apibs))
st, err := verifreg.Load(store, act)
if err != nil {
return xerrors.Errorf("failed to load verified registry state: %w", err)
}
verifier, err := address.NewFromString(cctx.Args().Get(0))
if err != nil {
return err
}
verifierIdAddr, err := api.StateLookupID(ctx, verifier, types.EmptyTSK)
if err != nil {
return err
}
client, err := address.NewFromString(cctx.Args().Get(1))
if err != nil {
return err
}
clientIdAddr, err := api.StateLookupID(ctx, client, types.EmptyTSK)
if err != nil {
return err
}
allowanceToRemove, err := types.BigFromString(cctx.Args().Get(2))
if err != nil {
return err
}
_, dataCap, err := st.VerifiedClientDataCap(clientIdAddr)
if err != nil {
return xerrors.Errorf("failed to find verified client data cap: %w", err)
}
if dataCap.LessThan(allowanceToRemove) {
return xerrors.Errorf("client data cap %s is less than amount requested to be removed %s", dataCap.String(), allowanceToRemove.String())
}
found, _, err := checkNotary(ctx, api, verifier)
if err != nil {
return xerrors.Errorf("failed to check notary status: %w", err)
}
if !found {
return xerrors.New("verifier address must be a notary")
}
id := cctx.Uint64("id")
if id == 0 {
_, id, err = st.RemoveDataCapProposalID(verifierIdAddr, clientIdAddr)
if err != nil {
return xerrors.Errorf("failed find remove data cap proposal id: %w", err)
}
}
params := verifreg.RemoveDataCapProposal{
RemovalProposalID: verifreg.RmDcProposalID{ProposalID: id},
DataCapAmount: allowanceToRemove,
VerifiedClient: clientIdAddr,
}
paramBuf := new(bytes.Buffer)
paramBuf.WriteString(verifreg.SignatureDomainSeparation_RemoveDataCap)
err = params.MarshalCBOR(paramBuf)
if err != nil {
return xerrors.Errorf("failed to marshall paramBuf: %w", err)
}
sig, err := api.WalletSign(ctx, verifier, paramBuf.Bytes())
if err != nil {
return xerrors.Errorf("failed to sign message: %w", err)
}
sigBytes := append([]byte{byte(sig.Type)}, sig.Data...)
fmt.Println(hex.EncodeToString(sigBytes))
return nil
},
}
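
The signed payload is the RemoveDataCap domain-separation tag followed by the CBOR-encoded proposal, and the printed hex is one signature-type byte followed by the raw signature bytes. A sketch of how a consumer might decode that output back into a crypto.Signature (an assumed helper, not part of this change; it presumes imports of strings and go-state-types/crypto):

// decodeProposalSig is a hypothetical helper reversing
// append([]byte{byte(sig.Type)}, sig.Data...) above.
func decodeProposalSig(out string) (*crypto.Signature, error) {
	raw, err := hex.DecodeString(strings.TrimSpace(out))
	if err != nil {
		return nil, err
	}
	if len(raw) < 2 {
		return nil, xerrors.New("signature too short")
	}
	return &crypto.Signature{Type: crypto.SigType(raw[0]), Data: raw[1:]}, nil
}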

cli/mocks_test.go (new file, +32 lines)

@ -0,0 +1,32 @@
package cli
import (
"bytes"
"testing"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/api/mocks"
"github.com/golang/mock/gomock"
ucli "github.com/urfave/cli/v2"
)
// NewMockAppWithFullAPI returns a gomock-ed CLI app used for unit tests
// see cli/util/api.go:GetFullNodeAPI for mock API injection
func NewMockAppWithFullAPI(t *testing.T, cmd *ucli.Command) (*ucli.App, *mocks.MockFullNode, *bytes.Buffer, func()) {
app := ucli.NewApp()
app.Commands = ucli.Commands{cmd}
app.Setup()
// create and inject the mock API into app Metadata
ctrl := gomock.NewController(t)
mockFullNode := mocks.NewMockFullNode(ctrl)
var fullNode api.FullNode = mockFullNode
app.Metadata["test-full-api"] = fullNode
// this will only work if the implementation writes to app.Writer;
// if it uses fmt.* directly, it has to be refactored
buf := &bytes.Buffer{}
app.Writer = buf
return app, mockFullNode, buf, ctrl.Finish
}
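
The buffer capture above only works because the commands in this diff route their output through the app's Writer via AppFmt instead of calling fmt.* directly. AppFmt itself is not shown in this diff; a minimal sketch of the pattern it implies, assuming it simply wraps fmt over app.Writer:

// Hypothetical sketch of the AppFmt pattern: send all command output
// through the cli app's Writer so tests can capture it in a buffer.
type appFmt struct{ app *ucli.App }

func (a *appFmt) Print(args ...interface{})            { fmt.Fprint(a.app.Writer, args...) }
func (a *appFmt) Println(args ...interface{})          { fmt.Fprintln(a.app.Writer, args...) }
func (a *appFmt) Printf(f string, args ...interface{}) { fmt.Fprintf(a.app.Writer, f, args...) }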


@ -60,6 +60,8 @@ var MpoolPending = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -72,7 +74,7 @@ var MpoolPending = &cli.Command{
if tos := cctx.String("to"); tos != "" { if tos := cctx.String("to"); tos != "" {
a, err := address.NewFromString(tos) a, err := address.NewFromString(tos)
if err != nil { if err != nil {
return fmt.Errorf("given 'to' address %q was invalid: %w", tos, err) return xerrors.Errorf("given 'to' address %q was invalid: %w", tos, err)
} }
toa = a toa = a
} }
@ -80,7 +82,7 @@ var MpoolPending = &cli.Command{
if froms := cctx.String("from"); froms != "" { if froms := cctx.String("from"); froms != "" {
a, err := address.NewFromString(froms) a, err := address.NewFromString(froms)
if err != nil { if err != nil {
return fmt.Errorf("given 'from' address %q was invalid: %w", froms, err) return xerrors.Errorf("given 'from' address %q was invalid: %w", froms, err)
} }
froma = a froma = a
} }
@ -119,13 +121,13 @@ var MpoolPending = &cli.Command{
} }
if cctx.Bool("cids") { if cctx.Bool("cids") {
fmt.Println(msg.Cid()) afmt.Println(msg.Cid())
} else { } else {
out, err := json.MarshalIndent(msg, "", " ") out, err := json.MarshalIndent(msg, "", " ")
if err != nil { if err != nil {
return err return err
} }
fmt.Println(string(out)) afmt.Println(string(out))
} }
} }
@ -216,6 +218,8 @@ var MpoolStat = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -234,6 +238,7 @@ var MpoolStat = &cli.Command{
currTs := ts currTs := ts
for i := 0; i < cctx.Int("basefee-lookback"); i++ { for i := 0; i < cctx.Int("basefee-lookback"); i++ {
currTs, err = api.ChainGetTipSet(ctx, currTs.Parents()) currTs, err = api.ChainGetTipSet(ctx, currTs.Parents())
if err != nil { if err != nil {
return xerrors.Errorf("walking chain: %w", err) return xerrors.Errorf("walking chain: %w", err)
} }
@ -296,7 +301,7 @@ var MpoolStat = &cli.Command{
for a, bkt := range buckets { for a, bkt := range buckets {
act, err := api.StateGetActor(ctx, a, ts.Key()) act, err := api.StateGetActor(ctx, a, ts.Key())
if err != nil { if err != nil {
fmt.Printf("%s, err: %s\n", a, err) afmt.Printf("%s, err: %s\n", a, err)
continue continue
} }
@ -350,11 +355,11 @@ var MpoolStat = &cli.Command{
total.belowPast += stat.belowPast total.belowPast += stat.belowPast
total.gasLimit = big.Add(total.gasLimit, stat.gasLimit) total.gasLimit = big.Add(total.gasLimit, stat.gasLimit)
fmt.Printf("%s: Nonce past: %d, cur: %d, future: %d; FeeCap cur: %d, min-%d: %d, gasLimit: %s\n", stat.addr, stat.past, stat.cur, stat.future, stat.belowCurr, cctx.Int("basefee-lookback"), stat.belowPast, stat.gasLimit) afmt.Printf("%s: Nonce past: %d, cur: %d, future: %d; FeeCap cur: %d, min-%d: %d, gasLimit: %s\n", stat.addr, stat.past, stat.cur, stat.future, stat.belowCurr, cctx.Int("basefee-lookback"), stat.belowPast, stat.gasLimit)
} }
fmt.Println("-----") afmt.Println("-----")
fmt.Printf("total: Nonce past: %d, cur: %d, future: %d; FeeCap cur: %d, min-%d: %d, gasLimit: %s\n", total.past, total.cur, total.future, total.belowCurr, cctx.Int("basefee-lookback"), total.belowPast, total.gasLimit) afmt.Printf("total: Nonce past: %d, cur: %d, future: %d; FeeCap cur: %d, min-%d: %d, gasLimit: %s\n", total.past, total.cur, total.future, total.belowCurr, cctx.Int("basefee-lookback"), total.belowPast, total.gasLimit)
return nil return nil
}, },
@ -385,8 +390,9 @@ var MpoolReplaceCmd = &cli.Command{
Usage: "Spend up to X FIL for this message in units of FIL. Previously when flag was `max-fee` units were in attoFIL. Applicable for auto mode", Usage: "Spend up to X FIL for this message in units of FIL. Previously when flag was `max-fee` units were in attoFIL. Applicable for auto mode",
}, },
}, },
ArgsUsage: "<from nonce> | <message-cid>", ArgsUsage: "<from> <nonce> | <message-cid>",
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
@ -407,13 +413,14 @@ var MpoolReplaceCmd = &cli.Command{
msg, err := api.ChainGetMessage(ctx, mcid) msg, err := api.ChainGetMessage(ctx, mcid)
if err != nil { if err != nil {
return fmt.Errorf("could not find referenced message: %w", err) return xerrors.Errorf("could not find referenced message: %w", err)
} }
from = msg.From from = msg.From
nonce = msg.Nonce nonce = msg.Nonce
case 2: case 2:
f, err := address.NewFromString(cctx.Args().Get(0)) arg0 := cctx.Args().Get(0)
f, err := address.NewFromString(arg0)
if err != nil { if err != nil {
return err return err
} }
@ -448,7 +455,7 @@ var MpoolReplaceCmd = &cli.Command{
} }
if found == nil { if found == nil {
return fmt.Errorf("no pending message found from %s with nonce %d", from, nonce) return xerrors.Errorf("no pending message found from %s with nonce %d", from, nonce)
} }
msg := found.Message msg := found.Message
@ -460,7 +467,7 @@ var MpoolReplaceCmd = &cli.Command{
if cctx.IsSet("fee-limit") { if cctx.IsSet("fee-limit") {
maxFee, err := types.ParseFIL(cctx.String("fee-limit")) maxFee, err := types.ParseFIL(cctx.String("fee-limit"))
if err != nil { if err != nil {
return fmt.Errorf("parsing max-spend: %w", err) return xerrors.Errorf("parsing max-spend: %w", err)
} }
mss = &lapi.MessageSendSpec{ mss = &lapi.MessageSendSpec{
MaxFee: abi.TokenAmount(maxFee), MaxFee: abi.TokenAmount(maxFee),
@ -472,7 +479,7 @@ var MpoolReplaceCmd = &cli.Command{
msg.GasPremium = abi.NewTokenAmount(0) msg.GasPremium = abi.NewTokenAmount(0)
retm, err := api.GasEstimateMessageGas(ctx, &msg, mss, types.EmptyTSK) retm, err := api.GasEstimateMessageGas(ctx, &msg, mss, types.EmptyTSK)
if err != nil { if err != nil {
return fmt.Errorf("failed to estimate gas values: %w", err) return xerrors.Errorf("failed to estimate gas values: %w", err)
} }
msg.GasPremium = big.Max(retm.GasPremium, minRBF) msg.GasPremium = big.Max(retm.GasPremium, minRBF)
@ -489,26 +496,26 @@ var MpoolReplaceCmd = &cli.Command{
} }
msg.GasPremium, err = types.BigFromString(cctx.String("gas-premium")) msg.GasPremium, err = types.BigFromString(cctx.String("gas-premium"))
if err != nil { if err != nil {
return fmt.Errorf("parsing gas-premium: %w", err) return xerrors.Errorf("parsing gas-premium: %w", err)
} }
// TODO: estimate fee cap here // TODO: estimate fee cap here
msg.GasFeeCap, err = types.BigFromString(cctx.String("gas-feecap")) msg.GasFeeCap, err = types.BigFromString(cctx.String("gas-feecap"))
if err != nil { if err != nil {
return fmt.Errorf("parsing gas-feecap: %w", err) return xerrors.Errorf("parsing gas-feecap: %w", err)
} }
} }
smsg, err := api.WalletSignMessage(ctx, msg.From, &msg) smsg, err := api.WalletSignMessage(ctx, msg.From, &msg)
if err != nil { if err != nil {
return fmt.Errorf("failed to sign message: %w", err) return xerrors.Errorf("failed to sign message: %w", err)
} }
cid, err := api.MpoolPush(ctx, smsg) cid, err := api.MpoolPush(ctx, smsg)
if err != nil { if err != nil {
return fmt.Errorf("failed to push new message to mempool: %w", err) return xerrors.Errorf("failed to push new message to mempool: %w", err)
} }
fmt.Println("new message cid: ", cid) afmt.Println("new message cid: ", cid)
return nil return nil
}, },
} }
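
minRBF above is computed elsewhere in the file from the premium of the message being replaced; it enforces the mempool's replace-by-fee floor so a replacement cannot undercut the original bid. A sketch of that floor, assuming the default 1.25x replace-by-fee ratio (the exact ratio and rounding are assumptions here):

// Hypothetical sketch: a replacement must bid at least 25% more gas
// premium than the message it replaces, plus 1 attoFIL.
func minRBF(oldPremium big.Int) big.Int {
	bump := big.Div(big.Mul(oldPremium, big.NewInt(125)), big.NewInt(100))
	return big.Add(bump, big.NewInt(1))
}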
@ -531,6 +538,8 @@ var MpoolFindCmd = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -548,7 +557,7 @@ var MpoolFindCmd = &cli.Command{
if cctx.IsSet("to") { if cctx.IsSet("to") {
a, err := address.NewFromString(cctx.String("to")) a, err := address.NewFromString(cctx.String("to"))
if err != nil { if err != nil {
return fmt.Errorf("'to' address was invalid: %w", err) return xerrors.Errorf("'to' address was invalid: %w", err)
} }
toFilter = a toFilter = a
@ -557,7 +566,7 @@ var MpoolFindCmd = &cli.Command{
if cctx.IsSet("from") { if cctx.IsSet("from") {
a, err := address.NewFromString(cctx.String("from")) a, err := address.NewFromString(cctx.String("from"))
if err != nil { if err != nil {
return fmt.Errorf("'from' address was invalid: %w", err) return xerrors.Errorf("'from' address was invalid: %w", err)
} }
fromFilter = a fromFilter = a
@ -591,7 +600,7 @@ var MpoolFindCmd = &cli.Command{
return err return err
} }
fmt.Println(string(b)) afmt.Println(string(b))
return nil return nil
}, },
} }
@ -605,6 +614,8 @@ var MpoolConfig = &cli.Command{
return cli.ShowCommandHelp(cctx, cctx.Command.Name) return cli.ShowCommandHelp(cctx, cctx.Command.Name)
} }
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -624,7 +635,7 @@ var MpoolConfig = &cli.Command{
return err return err
} }
fmt.Println(string(bytes)) afmt.Println(string(bytes))
} else { } else {
cfg := new(types.MpoolConfig) cfg := new(types.MpoolConfig)
bytes := []byte(cctx.Args().Get(0)) bytes := []byte(cctx.Args().Get(0))
@ -651,6 +662,8 @@ var MpoolGasPerfCmd = &cli.Command{
}, },
}, },
Action: func(cctx *cli.Context) error { Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
api, closer, err := GetFullNodeAPI(cctx) api, closer, err := GetFullNodeAPI(cctx)
if err != nil { if err != nil {
return err return err
@ -717,7 +730,7 @@ var MpoolGasPerfCmd = &cli.Command{
gasReward := getGasReward(m) gasReward := getGasReward(m)
gasPerf := getGasPerf(gasReward, m.Message.GasLimit) gasPerf := getGasPerf(gasReward, m.Message.GasLimit)
fmt.Printf("%s\t%d\t%s\t%f\n", m.Message.From, m.Message.Nonce, gasReward, gasPerf) afmt.Printf("%s\t%d\t%s\t%f\n", m.Message.From, m.Message.Nonce, gasReward, gasPerf)
} }
return nil return nil

cli/mpool_test.go (new file, +582 lines)

@ -0,0 +1,582 @@
//stm: #cli
package cli
import (
"context"
"fmt"
"testing"
"encoding/json"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/types/mock"
"github.com/filecoin-project/lotus/chain/wallet"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
)
func TestStat(t *testing.T) {
t.Run("local", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolStat))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// add blocks to the chain
first := mock.TipSet(mock.MkBlock(nil, 5, 4))
head := mock.TipSet(mock.MkBlock(first, 15, 7))
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
// mock actor to return for the sender
actor := types.Actor{Nonce: 2, Balance: big.NewInt(200000)}
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(head, nil),
mockApi.EXPECT().ChainGetTipSet(ctx, head.Parents()).Return(first, nil),
mockApi.EXPECT().WalletList(ctx).Return([]address.Address{senderAddr, toAddr}, nil),
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
mockApi.EXPECT().StateGetActor(ctx, senderAddr, head.Key()).Return(&actor, nil),
)
//stm: @CLI_MEMPOOL_STAT_002
err = app.Run([]string{"mpool", "stat", "--basefee-lookback", "1", "--local"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), "Nonce past: 1")
})
t.Run("all", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolStat))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// add blocks to the chain
first := mock.TipSet(mock.MkBlock(nil, 5, 4))
head := mock.TipSet(mock.MkBlock(first, 15, 7))
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
// mock actor to return for the sender
actor := types.Actor{Nonce: 2, Balance: big.NewInt(200000)}
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(head, nil),
mockApi.EXPECT().ChainGetTipSet(ctx, head.Parents()).Return(first, nil),
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
mockApi.EXPECT().StateGetActor(ctx, senderAddr, head.Key()).Return(&actor, nil),
)
//stm: @CLI_MEMPOOL_STAT_001
err = app.Run([]string{"mpool", "stat", "--basefee-lookback", "1"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), "Nonce past: 1")
})
}
func TestPending(t *testing.T) {
t.Run("all", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolPending))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_PENDING_001
err = app.Run([]string{"mpool", "pending", "--cids"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("local", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolPending))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().WalletList(ctx).Return([]address.Address{senderAddr}, nil),
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_PENDING_002
err = app.Run([]string{"mpool", "pending", "--local"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("to", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolPending))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_PENDING_003
err = app.Run([]string{"mpool", "pending", "--to", sm.Message.To.String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("from", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolPending))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_PENDING_004
err = app.Run([]string{"mpool", "pending", "--from", sm.Message.From.String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
}
func TestReplace(t *testing.T) {
t.Run("manual", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolReplaceCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().ChainGetMessage(ctx, sm.Cid()).Return(&sm.Message, nil),
mockApi.EXPECT().ChainHead(ctx).Return(nil, nil),
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
mockApi.EXPECT().WalletSignMessage(ctx, sm.Message.From, &sm.Message).Return(sm, nil),
mockApi.EXPECT().MpoolPush(ctx, sm).Return(sm.Cid(), nil),
)
//stm: @CLI_MEMPOOL_REPLACE_002
err = app.Run([]string{"mpool", "replace", "--gas-premium", "1", "--gas-feecap", "100", sm.Cid().String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("auto", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolReplaceCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
// gas fee param should be equal to the one passed in the cli invocation (used below)
maxFee := "1000000"
parsedFee, err := types.ParseFIL(maxFee)
if err != nil {
t.Fatal(err)
}
mss := api.MessageSendSpec{MaxFee: abi.TokenAmount(parsedFee)}
gomock.InOrder(
mockApi.EXPECT().ChainGetMessage(ctx, sm.Cid()).Return(&sm.Message, nil),
mockApi.EXPECT().ChainHead(ctx).Return(nil, nil),
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
// use gomock.Any() to match the message in the expected api calls,
// since the replace function modifies the message between calls and matching the exact argument would be pointless
mockApi.EXPECT().GasEstimateMessageGas(ctx, gomock.Any(), &mss, types.EmptyTSK).Return(&sm.Message, nil),
mockApi.EXPECT().WalletSignMessage(ctx, sm.Message.From, gomock.Any()).Return(sm, nil),
mockApi.EXPECT().MpoolPush(ctx, sm).Return(sm.Cid(), nil),
)
//stm: @CLI_MEMPOOL_REPLACE_002
err = app.Run([]string{"mpool", "replace", "--auto", "--fee-limit", maxFee, sm.Cid().String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("sender / nonce", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolReplaceCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
// gas fee param should be equal to the one passed in the cli invocation (used below)
maxFee := "1000000"
parsedFee, err := types.ParseFIL(maxFee)
if err != nil {
t.Fatal(err)
}
mss := api.MessageSendSpec{MaxFee: abi.TokenAmount(parsedFee)}
gomock.InOrder(
mockApi.EXPECT().ChainHead(ctx).Return(nil, nil),
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
// use gomock.Any() to match the message in the expected api calls,
// since the replace function modifies the message between calls and matching the exact argument would be pointless
mockApi.EXPECT().GasEstimateMessageGas(ctx, gomock.Any(), &mss, types.EmptyTSK).Return(&sm.Message, nil),
mockApi.EXPECT().WalletSignMessage(ctx, sm.Message.From, gomock.Any()).Return(sm, nil),
mockApi.EXPECT().MpoolPush(ctx, sm).Return(sm.Cid(), nil),
)
//stm: @CLI_MEMPOOL_REPLACE_001
err = app.Run([]string{"mpool", "replace", "--auto", "--fee-limit", maxFee, sm.Message.From.String(), fmt.Sprint(sm.Message.Nonce)})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
}
func TestFindMsg(t *testing.T) {
t.Run("from", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolFindCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_FIND_001
err = app.Run([]string{"mpool", "find", "--from", sm.Message.From.String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("to", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolFindCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_FIND_002
err = app.Run([]string{"mpool", "find", "--to", sm.Message.To.String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
t.Run("method", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolFindCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 1, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
)
//stm: @CLI_MEMPOOL_FIND_003
err = app.Run([]string{"mpool", "find", "--method", sm.Message.Method.String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Cid().String())
})
}
func TestGasPerf(t *testing.T) {
t.Run("all", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolGasPerfCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// add blocks to the chain
first := mock.TipSet(mock.MkBlock(nil, 5, 4))
head := mock.TipSet(mock.MkBlock(first, 15, 7))
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 13, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
mockApi.EXPECT().ChainHead(ctx).Return(head, nil),
)
//stm: @CLI_MEMPOOL_GAS_PERF_002
err = app.Run([]string{"mpool", "gas-perf", "--all", "true"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Message.From.String())
assert.Contains(t, buf.String(), fmt.Sprint(sm.Message.Nonce))
})
t.Run("local", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolGasPerfCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// add blocks to the chain
first := mock.TipSet(mock.MkBlock(nil, 5, 4))
head := mock.TipSet(mock.MkBlock(first, 15, 7))
// create a signed message to be returned as a pending message
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
toAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
sm := mock.MkMessage(senderAddr, toAddr, 13, w)
gomock.InOrder(
mockApi.EXPECT().MpoolPending(ctx, types.EmptyTSK).Return([]*types.SignedMessage{sm}, nil),
mockApi.EXPECT().WalletList(ctx).Return([]address.Address{senderAddr}, nil),
mockApi.EXPECT().ChainHead(ctx).Return(head, nil),
)
//stm: @CLI_MEMPOOL_GAS_PERF_001
err = app.Run([]string{"mpool", "gas-perf"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), sm.Message.From.String())
assert.Contains(t, buf.String(), fmt.Sprint(sm.Message.Nonce))
})
}
func TestConfig(t *testing.T) {
t.Run("get", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolConfig))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
mpoolCfg := &types.MpoolConfig{PriorityAddrs: []address.Address{senderAddr}, SizeLimitHigh: 1234567, SizeLimitLow: 6, ReplaceByFeeRatio: 0.25}
gomock.InOrder(
mockApi.EXPECT().MpoolGetConfig(ctx).Return(mpoolCfg, nil),
)
//stm: @CLI_MEMPOOL_CONFIG_001
err = app.Run([]string{"mpool", "config"})
assert.NoError(t, err)
assert.Contains(t, buf.String(), mpoolCfg.PriorityAddrs[0].String())
assert.Contains(t, buf.String(), fmt.Sprint(mpoolCfg.SizeLimitHigh))
assert.Contains(t, buf.String(), fmt.Sprint(mpoolCfg.SizeLimitLow))
assert.Contains(t, buf.String(), fmt.Sprint(mpoolCfg.ReplaceByFeeRatio))
})
t.Run("set", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("mpool", MpoolConfig))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
w, _ := wallet.NewWallet(wallet.NewMemKeyStore())
senderAddr, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
mpoolCfg := &types.MpoolConfig{PriorityAddrs: []address.Address{senderAddr}, SizeLimitHigh: 234567, SizeLimitLow: 3, ReplaceByFeeRatio: 0.33}
gomock.InOrder(
mockApi.EXPECT().MpoolSetConfig(ctx, mpoolCfg).Return(nil),
)
bytes, err := json.Marshal(mpoolCfg)
if err != nil {
t.Fatal(err)
}
//stm: @CLI_MEMPOOL_CONFIG_002
err = app.Run([]string{"mpool", "config", string(bytes)})
assert.NoError(t, err)
})
}


@ -33,6 +33,8 @@ var SyncStatusCmd = &cli.Command{
 	Name:  "status",
 	Usage: "check sync status",
 	Action: func(cctx *cli.Context) error {
+		afmt := NewAppFmt(cctx.App)
+
 		apic, closer, err := GetFullNodeAPI(cctx)
 		if err != nil {
 			return err
@ -45,9 +47,9 @@ var SyncStatusCmd = &cli.Command{
 			return err
 		}
-		fmt.Println("sync status:")
+		afmt.Println("sync status:")
 		for _, ss := range state.ActiveSyncs {
-			fmt.Printf("worker %d:\n", ss.WorkerID)
+			afmt.Printf("worker %d:\n", ss.WorkerID)
 			var base, target []cid.Cid
 			var heightDiff int64
 			var theight abi.ChainEpoch
@ -62,20 +64,20 @@ var SyncStatusCmd = &cli.Command{
 			} else {
 				heightDiff = 0
 			}
-			fmt.Printf("\tBase:\t%s\n", base)
-			fmt.Printf("\tTarget:\t%s (%d)\n", target, theight)
-			fmt.Printf("\tHeight diff:\t%d\n", heightDiff)
-			fmt.Printf("\tStage: %s\n", ss.Stage)
-			fmt.Printf("\tHeight: %d\n", ss.Height)
+			afmt.Printf("\tBase:\t%s\n", base)
+			afmt.Printf("\tTarget:\t%s (%d)\n", target, theight)
+			afmt.Printf("\tHeight diff:\t%d\n", heightDiff)
+			afmt.Printf("\tStage: %s\n", ss.Stage)
+			afmt.Printf("\tHeight: %d\n", ss.Height)
 			if ss.End.IsZero() {
 				if !ss.Start.IsZero() {
-					fmt.Printf("\tElapsed: %s\n", time.Since(ss.Start))
+					afmt.Printf("\tElapsed: %s\n", time.Since(ss.Start))
 				}
 			} else {
-				fmt.Printf("\tElapsed: %s\n", ss.End.Sub(ss.Start))
+				afmt.Printf("\tElapsed: %s\n", ss.End.Sub(ss.Start))
 			}
 			if ss.Stage == api.StageSyncErrored {
-				fmt.Printf("\tError: %s\n", ss.Message)
+				afmt.Printf("\tError: %s\n", ss.Message)
 			}
 		}
 		return nil
@ -168,6 +170,8 @@ var SyncCheckBadCmd = &cli.Command{
 	Usage:     "check if the given block was marked bad, and for what reason",
 	ArgsUsage: "[blockCid]",
 	Action: func(cctx *cli.Context) error {
+		afmt := NewAppFmt(cctx.App)
+
 		napi, closer, err := GetFullNodeAPI(cctx)
 		if err != nil {
 			return err
@ -190,11 +194,11 @@ var SyncCheckBadCmd = &cli.Command{
 		}
 
 		if reason == "" {
-			fmt.Println("block was not marked as bad")
+			afmt.Println("block was not marked as bad")
 			return nil
 		}
 
-		fmt.Println(reason)
+		afmt.Println(reason)
 		return nil
 	},
 }
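
The fmt-to-afmt swap above is what makes the new cli/sync_test.go below possible: AppFmt routes output through the cli app's configured writer instead of the process stdout, so a test can point that writer at a buffer and assert on it. A rough sketch of the idea only, with names mirroring the diff (the real AppFmt lives in the lotus cli package and carries more machinery):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// appFmt mirrors the role of AppFmt above: every print goes through an
// injectable io.Writer rather than directly to os.Stdout.
type appFmt struct{ w io.Writer }

func (a appFmt) Println(args ...interface{})          { fmt.Fprintln(a.w, args...) }
func (a appFmt) Printf(f string, args ...interface{}) { fmt.Fprintf(a.w, f, args...) }

func main() {
	var buf bytes.Buffer
	af := appFmt{w: &buf}
	af.Printf("worker %d:\n", 1)
	fmt.Print(buf.String()) // a test can now assert on the captured output
}
```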

cli/sync_test.go (new file, 189 lines)

@ -0,0 +1,189 @@
package cli
import (
"context"
"fmt"
"testing"
"time"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/types/mock"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
)
func TestSyncStatus(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncStatusCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ts1 := mock.TipSet(mock.MkBlock(nil, 0, 0))
ts2 := mock.TipSet(mock.MkBlock(ts1, 0, 0))
start := time.Now()
end := start.Add(time.Minute)
state := &api.SyncState{
ActiveSyncs: []api.ActiveSync{{
WorkerID: 1,
Base: ts1,
Target: ts2,
Stage: api.StageMessages,
Height: abi.ChainEpoch(0),
Start: start,
End: end,
Message: "whatever",
}},
VMApplied: 0,
}
mockApi.EXPECT().SyncState(ctx).Return(state, nil)
//stm: @CLI_SYNC_STATUS_001
err := app.Run([]string{"sync", "status"})
assert.NoError(t, err)
out := buf.String()
// output is plain text, so we assert on substrings
assert.Contains(t, out, fmt.Sprintf("Base:\t[%s]", ts1.Blocks()[0].Cid().String()))
assert.Contains(t, out, fmt.Sprintf("Target:\t[%s]", ts2.Blocks()[0].Cid().String()))
assert.Contains(t, out, "Height diff:\t1")
assert.Contains(t, out, "Stage: message sync")
assert.Contains(t, out, "Height: 0")
assert.Contains(t, out, "Elapsed: 1m0s")
}
func TestSyncMarkBad(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncMarkBadCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
blk := mock.MkBlock(nil, 0, 0)
mockApi.EXPECT().SyncMarkBad(ctx, blk.Cid()).Return(nil)
//stm: @CLI_SYNC_MARK_BAD_001
err := app.Run([]string{"sync", "mark-bad", blk.Cid().String()})
assert.NoError(t, err)
}
func TestSyncUnmarkBad(t *testing.T) {
t.Run("one-block", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncUnmarkBadCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
blk := mock.MkBlock(nil, 0, 0)
mockApi.EXPECT().SyncUnmarkBad(ctx, blk.Cid()).Return(nil)
//stm: @CLI_SYNC_UNMARK_BAD_001
err := app.Run([]string{"sync", "unmark-bad", blk.Cid().String()})
assert.NoError(t, err)
})
t.Run("all", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncUnmarkBadCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
mockApi.EXPECT().SyncUnmarkAllBad(ctx).Return(nil)
//stm: @CLI_SYNC_UNMARK_BAD_002
err := app.Run([]string{"sync", "unmark-bad", "-all"})
assert.NoError(t, err)
})
}
func TestSyncCheckBad(t *testing.T) {
t.Run("not-bad", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncCheckBadCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
blk := mock.MkBlock(nil, 0, 0)
mockApi.EXPECT().SyncCheckBad(ctx, blk.Cid()).Return("", nil)
//stm: @CLI_SYNC_CHECK_BAD_002
err := app.Run([]string{"sync", "check-bad", blk.Cid().String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), "block was not marked as bad")
})
t.Run("bad", func(t *testing.T) {
app, mockApi, buf, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncCheckBadCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
blk := mock.MkBlock(nil, 0, 0)
reason := "whatever"
mockApi.EXPECT().SyncCheckBad(ctx, blk.Cid()).Return(reason, nil)
//stm: @CLI_SYNC_CHECK_BAD_001
err := app.Run([]string{"sync", "check-bad", blk.Cid().String()})
assert.NoError(t, err)
assert.Contains(t, buf.String(), reason)
})
}
func TestSyncCheckpoint(t *testing.T) {
t.Run("tipset", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncCheckpointCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
blk := mock.MkBlock(nil, 0, 0)
ts := mock.TipSet(blk)
gomock.InOrder(
mockApi.EXPECT().ChainGetBlock(ctx, blk.Cid()).Return(blk, nil),
mockApi.EXPECT().SyncCheckpoint(ctx, ts.Key()).Return(nil),
)
//stm: @CLI_SYNC_CHECKPOINT_001
err := app.Run([]string{"sync", "checkpoint", blk.Cid().String()})
assert.NoError(t, err)
})
t.Run("epoch", func(t *testing.T) {
app, mockApi, _, done := NewMockAppWithFullAPI(t, WithCategory("sync", SyncCheckpointCmd))
defer done()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
epoch := abi.ChainEpoch(0)
blk := mock.MkBlock(nil, 0, 0)
ts := mock.TipSet(blk)
gomock.InOrder(
mockApi.EXPECT().ChainGetTipSetByHeight(ctx, epoch, types.EmptyTSK).Return(ts, nil),
mockApi.EXPECT().SyncCheckpoint(ctx, ts.Key()).Return(nil),
)
//stm: @CLI_SYNC_CHECKPOINT_002
err := app.Run([]string{"sync", "checkpoint", fmt.Sprintf("-epoch=%d", epoch)})
assert.NoError(t, err)
})
}


@ -223,6 +223,11 @@ func GetCommonAPI(ctx *cli.Context) (api.CommonNet, jsonrpc.ClientCloser, error)
 }
 
 func GetFullNodeAPI(ctx *cli.Context) (v0api.FullNode, jsonrpc.ClientCloser, error) {
+	// use the mocked API in CLI unit tests, see cli/mocks_test.go for mock definition
+	if mock, ok := ctx.App.Metadata["test-full-api"]; ok {
+		return &v0api.WrapperV1Full{FullNode: mock.(v1api.FullNode)}, func() {}, nil
+	}
+
 	if tn, ok := ctx.App.Metadata["testnode-full"]; ok {
 		return &v0api.WrapperV1Full{FullNode: tn.(v1api.FullNode)}, func() {}, nil
 	}
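
With this hook in place, a unit test only needs to drop a mock implementation into the app metadata before calling app.Run. A minimal wiring sketch under stated assumptions (the real helper is NewMockAppWithFullAPI in cli/mocks_test.go; the metadata key must match the string checked above, and mockNode can be any v1api.FullNode such as a gomock stub):

```go
package cli

import (
	"bytes"
	"testing"

	ucli "github.com/urfave/cli/v2"

	"github.com/filecoin-project/lotus/api/v1api"
)

// newTestApp is a hypothetical helper: it wires a v1api.FullNode implementation
// into a cli app so GetFullNodeAPI above returns it instead of dialing a node,
// and captures command output in a buffer for assertions.
func newTestApp(t *testing.T, mockNode v1api.FullNode, cmds ...*ucli.Command) (*ucli.App, *bytes.Buffer) {
	app := ucli.NewApp()
	app.Commands = cmds
	app.Metadata = map[string]interface{}{"test-full-api": mockNode} // key checked in GetFullNodeAPI
	var buf bytes.Buffer
	app.Writer = &buf // AppFmt-style output lands here
	return app, &buf
}
```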


@ -276,6 +276,13 @@ var sealBenchCmd = &cli.Command{
 		if err != nil {
 			return xerrors.Errorf("failed to run seals: %w", err)
 		}
+
+		for _, s := range extendedSealedSectors {
+			sealedSectors = append(sealedSectors, proof.SectorInfo{
+				SealedCID:    s.SealedCID,
+				SectorNumber: s.SectorNumber,
+				SealProof:    s.SealProof,
+			})
+		}
 	} else {
 		// TODO: implement sbfs.List() and use that for all cases (preexisting sectorbuilder or not)


@ -126,7 +126,7 @@ func infoCmdAct(cctx *cli.Context) error {
 	alerts, err := minerApi.LogAlerts(ctx)
 	if err != nil {
-		return xerrors.Errorf("getting alerts: %w", err)
+		fmt.Printf("ERROR: getting alerts: %s\n", err)
 	}
 
 	activeAlerts := make([]alerting.Alert, 0)
@ -466,6 +466,7 @@ var stateOrder = map[sealing.SectorState]stateMeta{}
 var stateList = []stateMeta{
 	{col: 39, state: "Total"},
 	{col: color.FgGreen, state: sealing.Proving},
+	{col: color.FgGreen, state: sealing.UpdateActivating},
 
 	{col: color.FgBlue, state: sealing.Empty},
 	{col: color.FgBlue, state: sealing.WaitDeals},
@ -496,6 +497,7 @@ var stateList = []stateMeta{
 	{col: color.FgYellow, state: sealing.SubmitReplicaUpdate},
 	{col: color.FgYellow, state: sealing.ReplicaUpdateWait},
 	{col: color.FgYellow, state: sealing.FinalizeReplicaUpdate},
+	{col: color.FgYellow, state: sealing.ReleaseSectorKey},
 
 	{col: color.FgCyan, state: sealing.Terminating},
 	{col: color.FgCyan, state: sealing.TerminateWait},
@ -524,6 +526,7 @@ var stateList = []stateMeta{
 	{col: color.FgRed, state: sealing.SnapDealsAddPieceFailed},
 	{col: color.FgRed, state: sealing.SnapDealsDealsExpired},
 	{col: color.FgRed, state: sealing.ReplicaUpdateFailed},
+	{col: color.FgRed, state: sealing.ReleaseSectorKeyFailed},
 }
 
 func init() {


@ -96,6 +96,11 @@ var infoAllCmd = &cli.Command{
 			fmt.Println("ERROR: ", err)
 		}
 
+		fmt.Println("\n#: Storage Locks")
+		if err := storageLocks.Action(cctx); err != nil {
+			fmt.Println("ERROR: ", err)
+		}
+
 		fmt.Println("\n#: Sched Diag")
 		if err := sealingSchedDiagCmd.Action(cctx); err != nil {
 			fmt.Println("ERROR: ", err)
@ -192,6 +197,11 @@ var infoAllCmd = &cli.Command{
 			fmt.Println("ERROR: ", err)
 		}
 
+		fmt.Println("\n#: Storage Sector List")
+		if err := storageListSectorsCmd.Action(cctx); err != nil {
+			fmt.Println("ERROR: ", err)
+		}
+
 		fmt.Println("\n#: Expired Sectors")
 		if err := sectorsExpiredCmd.Action(cctx); err != nil {
 			fmt.Println("ERROR: ", err)


@ -473,6 +473,9 @@ func storageMinerInit(ctx context.Context, cctx *cli.Context, api v1api.FullNode
 			AllowPreCommit2:          true,
 			AllowCommit:              true,
 			AllowUnseal:              true,
+			AllowReplicaUpdate:       true,
+			AllowProveReplicaUpdate2: true,
+			AllowRegenSectorKey:      true,
 		}, wsts, smsts)
 		if err != nil {
 			return err


@ -437,6 +437,7 @@ var provingCheckProvableCmd = &cli.Command{
 		}
 
 		var tocheck []storage.SectorRef
+		var update []bool
 		for _, info := range sectorInfos {
 			si := abi.SectorID{
 				Miner: abi.ActorID(mid),
@ -454,9 +455,10 @@ var provingCheckProvableCmd = &cli.Command{
 				ProofType: info.SealProof,
 				ID:        si,
 			})
+			update = append(update, info.SectorKeyCID != nil)
 		}
 
-		bad, err := sapi.CheckProvable(ctx, info.WindowPoStProofType, tocheck, cctx.Bool("slow"))
+		bad, err := sapi.CheckProvable(ctx, info.WindowPoStProofType, tocheck, update, cctx.Bool("slow"))
 		if err != nil {
 			return err
 		}
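
The new `update` slice is parallel to `tocheck`: `update[i]` flags whether `tocheck[i]` is a snap-upgraded sector (a set SectorKeyCID is what distinguishes an upgraded replica), and CheckProvable further down indexes both in lockstep, so they must be appended together. A hypothetical alternative shape, not what the commit does, that would make the pairing harder to break:

```go
// checkTarget is a sketch only: carrying the flag with the ref means the
// sectors/update pairing cannot drift out of sync.
type checkTarget struct {
	ref    storage.SectorRef // as in the tocheck slice above
	update bool              // true when the sector was snap-upgraded (SectorKeyCID != nil)
}
```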


@ -39,9 +39,12 @@ func barString(total, y, g float64) string {
 	yBars := int(math.Round(y / total * barCols))
 	gBars := int(math.Round(g / total * barCols))
 	eBars := int(barCols) - yBars - gBars
-	return color.YellowString(strings.Repeat("|", yBars)) +
-		color.GreenString(strings.Repeat("|", gBars)) +
-		strings.Repeat(" ", eBars)
+	var barString = color.YellowString(strings.Repeat("|", yBars)) +
+		color.GreenString(strings.Repeat("|", gBars))
+	if eBars >= 0 {
+		barString += strings.Repeat(" ", eBars)
+	}
+	return barString
 }
 
 var sealingWorkersCmd = &cli.Command{
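
The new guard matters because math.Round rounds halves away from zero, so the yellow and green shares can each round up and together exceed barCols, leaving eBars negative, and strings.Repeat panics on a negative count. A standalone illustration of the overshoot (barCols = 64 is an assumption here; the real constant is defined alongside barString):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const barCols = 64.0 // assumed width for illustration
	total, y, g := 128.0, 5.0, 123.0
	yBars := int(math.Round(y / total * barCols)) // round(2.5)  -> 3
	gBars := int(math.Round(g / total * barCols)) // round(61.5) -> 62
	eBars := int(barCols) - yBars - gBars         // 64 - 65 = -1
	fmt.Println(yBars, gBars, eBars)              // strings.Repeat(" ", eBars) would panic here
}
```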


@ -55,6 +55,7 @@ var sectorsCmd = &cli.Command{
 		sectorsTerminateCmd,
 		sectorsRemoveCmd,
 		sectorsSnapUpCmd,
+		sectorsSnapAbortCmd,
 		sectorsMarkForUpgradeCmd,
 		sectorsStartSealCmd,
 		sectorsSealDelayCmd,
@ -160,7 +161,7 @@ var sectorsStatusCmd = &cli.Command{
 		fmt.Printf("Expiration:\t\t%v\n", status.Expiration)
 		fmt.Printf("DealWeight:\t\t%v\n", status.DealWeight)
 		fmt.Printf("VerifiedDealWeight:\t\t%v\n", status.VerifiedDealWeight)
-		fmt.Printf("InitialPledge:\t\t%v\n", status.InitialPledge)
+		fmt.Printf("InitialPledge:\t\t%v\n", types.FIL(status.InitialPledge))
 		fmt.Printf("\nExpiration Info\n")
 		fmt.Printf("OnTime:\t\t%v\n", status.OnTime)
 		fmt.Printf("Early:\t\t%v\n", status.Early)
@ -292,9 +293,15 @@ var sectorsListCmd = &cli.Command{
 			Usage:   "display number of events the sector has received",
 			Aliases: []string{"e"},
 		},
+		&cli.BoolFlag{
+			Name:    "initial-pledge",
+			Usage:   "display initial pledge",
+			Aliases: []string{"p"},
+		},
 		&cli.BoolFlag{
 			Name:    "seal-time",
 			Usage:   "display how long it took for the sector to be sealed",
+			Aliases: []string{"t"},
 		},
 		&cli.StringFlag{
 			Name: "states",
@ -404,6 +411,7 @@ var sectorsListCmd = &cli.Command{
 			tablewriter.Col("Deals"),
 			tablewriter.Col("DealWeight"),
 			tablewriter.Col("VerifiedPower"),
+			tablewriter.Col("Pledge"),
 			tablewriter.NewLineCol("Error"),
 			tablewriter.NewLineCol("RecoveryTimeout"))
@ -482,6 +490,9 @@ var sectorsListCmd = &cli.Command{
 					m["RecoveryTimeout"] = color.YellowString(lcli.EpochTime(head.Height(), st.Early))
 				}
 			}
+			if inSSet && cctx.Bool("initial-pledge") {
+				m["Pledge"] = types.FIL(st.InitialPledge).Short()
+			}
 		}
 
 		if !fast && deals > 0 {
@ -1520,6 +1531,43 @@ var sectorsSnapUpCmd = &cli.Command{
 	},
 }
 
+var sectorsSnapAbortCmd = &cli.Command{
+	Name:      "abort-upgrade",
+	Usage:     "Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to as before",
+	ArgsUsage: "<sectorNum>",
+	Flags: []cli.Flag{
+		&cli.BoolFlag{
+			Name:  "really-do-it",
+			Usage: "pass this flag if you know what you are doing",
+		},
+	},
+	Action: func(cctx *cli.Context) error {
+		if cctx.Args().Len() != 1 {
+			return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number"))
+		}
+
+		really := cctx.Bool("really-do-it")
+		if !really {
+			//nolint:golint
+			return fmt.Errorf("--really-do-it must be specified for this action to have an effect; you have been warned")
+		}
+
+		nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
+		if err != nil {
+			return err
+		}
+		defer closer()
+
+		ctx := lcli.ReqContext(cctx)
+
+		id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
+		if err != nil {
+			return xerrors.Errorf("could not parse sector number: %w", err)
+		}
+
+		return nodeApi.SectorAbortUpgrade(ctx, abi.SectorNumber(id))
+	},
+}
+
 var sectorsMarkForUpgradeCmd = &cli.Command{
 	Name:  "mark-for-upgrade",
 	Usage: "Mark a committed capacity sector for replacement by a sector with deals",


@ -368,6 +368,7 @@ type storedSector struct {
 	store stores.SectorStorageInfo
 
 	unsealed, sealed, cache bool
+	update, updatecache     bool
 }
 
 var storageFindCmd = &cli.Command{
@ -421,6 +422,16 @@ var storageFindCmd = &cli.Command{
 		return xerrors.Errorf("finding cache: %w", err)
 	}
 
+	us, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUpdate, 0, false)
+	if err != nil {
+		return xerrors.Errorf("finding sealed: %w", err)
+	}
+
+	uc, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUpdateCache, 0, false)
+	if err != nil {
+		return xerrors.Errorf("finding cache: %w", err)
+	}
+
 	byId := map[stores.ID]*storedSector{}
 	for _, info := range u {
 		sts, ok := byId[info.ID]
@ -455,6 +466,28 @@ var storageFindCmd = &cli.Command{
 		}
 		sts.cache = true
 	}
+	for _, info := range us {
+		sts, ok := byId[info.ID]
+		if !ok {
+			sts = &storedSector{
+				id:    info.ID,
+				store: info,
+			}
+			byId[info.ID] = sts
+		}
+		sts.update = true
+	}
+	for _, info := range uc {
+		sts, ok := byId[info.ID]
+		if !ok {
+			sts = &storedSector{
+				id:    info.ID,
+				store: info,
+			}
+			byId[info.ID] = sts
+		}
+		sts.updatecache = true
+	}
 
 	local, err := nodeApi.StorageLocal(ctx)
 	if err != nil {
@ -480,6 +513,12 @@ var storageFindCmd = &cli.Command{
 		if info.cache {
 			types += "Cache, "
 		}
+		if info.update {
+			types += "Update, "
+		}
+		if info.updatecache {
+			types += "UpdateCache, "
+		}
 
 		fmt.Printf("In %s (%s)\n", info.id, types[:len(types)-2])
 		fmt.Printf("\tSealing: %t; Storage: %t\n", info.store.CanSeal, info.store.CanStore)


@ -173,6 +173,11 @@ var runCmd = &cli.Command{
 			Usage: "enable prove replica update 2",
 			Value: true,
 		},
+		&cli.BoolFlag{
+			Name:  "regen-sector-key",
+			Usage: "enable regen sector key",
+			Value: true,
+		},
 		&cli.IntFlag{
 			Name:  "parallel-fetch-limit",
 			Usage: "maximum fetch operations to run in parallel",
@ -261,7 +266,7 @@ var runCmd = &cli.Command{
 		var taskTypes []sealtasks.TaskType
 
-		taskTypes = append(taskTypes, sealtasks.TTFetch, sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize)
+		taskTypes = append(taskTypes, sealtasks.TTFetch, sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize, sealtasks.TTFinalizeReplicaUpdate)
 
 		if cctx.Bool("addpiece") {
 			taskTypes = append(taskTypes, sealtasks.TTAddPiece)
@ -278,12 +283,15 @@ var runCmd = &cli.Command{
 		if cctx.Bool("commit") {
 			taskTypes = append(taskTypes, sealtasks.TTCommit2)
 		}
-		if cctx.Bool("replicaupdate") {
+		if cctx.Bool("replica-update") {
 			taskTypes = append(taskTypes, sealtasks.TTReplicaUpdate)
 		}
 		if cctx.Bool("prove-replica-update2") {
 			taskTypes = append(taskTypes, sealtasks.TTProveReplicaUpdate2)
 		}
+		if cctx.Bool("regen-sector-key") {
+			taskTypes = append(taskTypes, sealtasks.TTRegenSectorKey)
+		}
 
 		if len(taskTypes) == 0 {
 			return xerrors.Errorf("no task types specified")


@ -27,6 +27,9 @@ var allowSetting = map[sealtasks.TaskType]struct{}{
 	sealtasks.TTPreCommit2:          {},
 	sealtasks.TTCommit2:             {},
 	sealtasks.TTUnseal:              {},
+	sealtasks.TTReplicaUpdate:       {},
+	sealtasks.TTProveReplicaUpdate2: {},
+	sealtasks.TTRegenSectorKey:      {},
 }
 
 var settableStr = func() string {


@ -510,10 +510,17 @@ var genesisSetRemainderCmd = &cli.Command{
 var genesisSetActorVersionCmd = &cli.Command{
 	Name:  "set-network-version",
 	Usage: "Set the version that this network will start from",
-	ArgsUsage: "<genesisFile> <actorVersion>",
+	Flags: []cli.Flag{
+		&cli.IntFlag{
+			Name:  "network-version",
+			Usage: "network version to start genesis with",
+			Value: int(build.GenesisNetworkVersion),
+		},
+	},
+	ArgsUsage: "<genesisFile>",
 	Action: func(cctx *cli.Context) error {
-		if cctx.Args().Len() != 2 {
-			return fmt.Errorf("must specify genesis file and network version (e.g. '0'")
+		if cctx.Args().Len() != 1 {
+			return fmt.Errorf("must specify genesis file")
 		}
 
 		genf, err := homedir.Expand(cctx.Args().First())
@ -531,16 +538,12 @@ var genesisSetActorVersionCmd = &cli.Command{
 			return xerrors.Errorf("unmarshal genesis template: %w", err)
 		}
 
-		nv, err := strconv.ParseUint(cctx.Args().Get(1), 10, 64)
-		if err != nil {
-			return xerrors.Errorf("parsing network version: %w", err)
-		}
-
-		if nv > uint64(build.NewestNetworkVersion) {
+		nv := network.Version(cctx.Int("network-version"))
+		if nv > build.NewestNetworkVersion {
 			return xerrors.Errorf("invalid network version: %d", nv)
 		}
 
-		template.NetworkVersion = network.Version(nv)
+		template.NetworkVersion = nv
 
 		b, err = json.MarshalIndent(&template, "", "  ")
 		if err != nil {

cmd/lotus-shed/diff.go (new file, 104 lines)

@ -0,0 +1,104 @@
package main
import (
"fmt"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/chain/types"
lcli "github.com/filecoin-project/lotus/cli"
)
var diffCmd = &cli.Command{
Name: "diff",
Usage: "diff state objects",
Subcommands: []*cli.Command{diffStateTrees},
}
var diffStateTrees = &cli.Command{
Name: "state-trees",
Usage: "diff two state-trees",
ArgsUsage: "<state-tree-a> <state-tree-b>",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.NArg() != 2 {
return xerrors.Errorf("expected two state-tree roots")
}
argA := cctx.Args().Get(0)
rootA, err := cid.Parse(argA)
if err != nil {
return xerrors.Errorf("first state-tree root (%q) is not a CID: %w", argA, err)
}
argB := cctx.Args().Get(1)
rootB, err := cid.Parse(argB)
if err != nil {
return xerrors.Errorf("second state-tree root (%q) is not a CID: %w", argB, err)
}
if rootA == rootB {
fmt.Println("state trees do not differ")
return nil
}
changedB, err := api.StateChangedActors(ctx, rootA, rootB)
if err != nil {
return err
}
changedA, err := api.StateChangedActors(ctx, rootB, rootA)
if err != nil {
return err
}
diff := func(stateA, stateB types.Actor) {
if stateB.Code != stateA.Code {
fmt.Printf(" code: %s != %s\n", stateA.Code, stateB.Code)
}
if stateB.Head != stateA.Head {
fmt.Printf(" state: %s != %s\n", stateA.Head, stateB.Head)
}
if stateB.Nonce != stateA.Nonce {
fmt.Printf(" nonce: %d != %d\n", stateA.Nonce, stateB.Nonce)
}
if !stateB.Balance.Equals(stateA.Balance) {
fmt.Printf(" balance: %s != %s\n", stateA.Balance, stateB.Balance)
}
}
fmt.Printf("state differences between %s (first) and %s (second):\n\n", rootA, rootB)
for addr, stateA := range changedA {
fmt.Println(addr)
stateB, ok := changedB[addr]
if ok {
diff(stateA, stateB)
// drop the shared entry so the second pass below only reports
// actors that changed exclusively in the second state-tree
delete(changedB, addr)
} else {
fmt.Printf("  actor does not exist in second state-tree (%s)\n", rootB)
}
fmt.Println()
}
for addr, stateB := range changedB {
fmt.Println(addr)
if stateA, ok := changedA[addr]; ok {
diff(stateA, stateB)
} else {
fmt.Printf("  actor does not exist in first state-tree (%s)\n", rootA)
}
fmt.Println()
}
return nil
},
}


@ -68,6 +68,7 @@ func main() {
 		sendCsvCmd,
 		terminationsCmd,
 		migrationsCmd,
+		diffCmd,
 	}
 
 	app := &cli.App{


@ -4,10 +4,12 @@ import (
 	"encoding/json"
 	corebig "math/big"
 	"os"
+	"strconv"
 
 	"github.com/filecoin-project/go-address"
 	"github.com/filecoin-project/go-state-types/abi"
 	filbig "github.com/filecoin-project/go-state-types/big"
+	"github.com/filecoin-project/lotus/chain/types"
 	lcli "github.com/filecoin-project/lotus/cli"
 	"github.com/ipfs/go-cid"
 	"github.com/urfave/cli/v2"
@ -22,29 +24,50 @@ type networkTotalsOutput struct {
 	Payload networkTotals `json:"payload"`
 }
 
+type providerMeta struct {
+	nonidentifiable bool
+}
+
+// force formatting as decimal to aid human readers
+type humanFloat float64
+
+func (f humanFloat) MarshalJSON() ([]byte, error) {
+	// 'f' uses decimal digits without exponents.
+	// The bit size of 32 ensures we don't use too many decimal places.
+	return []byte(strconv.FormatFloat(float64(f), 'f', -1, 32)), nil
+}
+
+type Totals struct {
+	TotalDeals           int        `json:"total_num_deals"`
+	TotalBytes           int64      `json:"total_stored_data_size"`
+	PrivateTotalDeals    int        `json:"private_total_num_deals"`
+	PrivateTotalBytes    int64      `json:"private_total_stored_data_size"`
+	CapacityCarryingData humanFloat `json:"capacity_fraction_carrying_data"`
+}
+
 type networkTotals struct {
 	QaNetworkPower  filbig.Int `json:"total_qa_power"`
 	RawNetworkPower filbig.Int `json:"total_raw_capacity"`
-	CapacityCarryingData float64 `json:"capacity_fraction_carrying_data"`
 	UniqueCids      int        `json:"total_unique_cids"`
-	UniqueProviders int        `json:"total_unique_providers"`
+	UniqueBytes     int64      `json:"total_unique_data_size"`
 	UniqueClients   int        `json:"total_unique_clients"`
-	TotalDeals        int   `json:"total_num_deals"`
-	TotalBytes        int64 `json:"total_stored_data_size"`
-	FilplusTotalDeals int   `json:"filplus_total_num_deals"`
-	FilplusTotalBytes int64 `json:"filplus_total_stored_data_size"`
+	UniqueProviders        int `json:"total_unique_providers"`
+	UniquePrivateProviders int `json:"total_unique_private_providers"`
+	Totals
+	FilPlus Totals `json:"filecoin_plus_subset"`
 
-	seenClient   map[address.Address]bool
-	seenProvider map[address.Address]bool
-	seenPieceCid map[cid.Cid]bool
+	pieces    map[cid.Cid]struct{}
+	clients   map[address.Address]struct{}
+	providers map[address.Address]providerMeta
 }
 
 var storageStatsCmd = &cli.Command{
 	Name:  "storage-stats",
 	Usage: "Translates current lotus state into a json summary suitable for driving https://storage.filecoin.io/",
 	Flags: []cli.Flag{
-		&cli.Int64Flag{
-			Name: "height",
+		&cli.StringFlag{
+			Name:  "tipset",
+			Usage: "Comma separated array of cids, or @height",
 		},
 	},
 	Action: func(cctx *cli.Context) error {
@ -56,22 +79,24 @@ var storageStatsCmd = &cli.Command{
 		}
 		defer apiCloser()
 
-		head, err := api.ChainHead(ctx)
-		if err != nil {
-			return err
-		}
-
-		requestedHeight := cctx.Int64("height")
-		if requestedHeight > 0 {
-			head, err = api.ChainGetTipSetByHeight(ctx, abi.ChainEpoch(requestedHeight), head.Key())
+		var ts *types.TipSet
+		if cctx.String("tipset") == "" {
+			ts, err = api.ChainHead(ctx)
+			if err != nil {
+				return err
+			}
+			ts, err = api.ChainGetTipSetByHeight(ctx, ts.Height()-defaultEpochLookback, ts.Key())
+			if err != nil {
+				return err
+			}
 		} else {
-			head, err = api.ChainGetTipSetByHeight(ctx, head.Height()-defaultEpochLookback, head.Key())
-		}
-		if err != nil {
-			return err
+			ts, err = lcli.ParseTipSetRef(ctx, api, cctx.String("tipset"))
+			if err != nil {
+				return err
+			}
 		}
 
-		power, err := api.StateMinerPower(ctx, address.Address{}, head.Key())
+		power, err := api.StateMinerPower(ctx, address.Address{}, ts.Key())
 		if err != nil {
 			return err
 		}
@ -79,12 +104,12 @@ var storageStatsCmd = &cli.Command{
 		netTotals := networkTotals{
 			QaNetworkPower:  power.TotalPower.QualityAdjPower,
 			RawNetworkPower: power.TotalPower.RawBytePower,
-			seenClient:   make(map[address.Address]bool),
-			seenProvider: make(map[address.Address]bool),
-			seenPieceCid: make(map[cid.Cid]bool),
+			pieces:    make(map[cid.Cid]struct{}),
+			clients:   make(map[address.Address]struct{}),
+			providers: make(map[address.Address]providerMeta),
 		}
 
-		deals, err := api.StateMarketDeals(ctx, head.Key())
+		deals, err := api.StateMarketDeals(ctx, ts.Key())
 		if err != nil {
 			return err
 		}
@ -94,35 +119,76 @@ var storageStatsCmd = &cli.Command{
 			// Only count deals that have properly started, not past/future ones
 			// https://github.com/filecoin-project/specs-actors/blob/v0.9.9/actors/builtin/market/deal.go#L81-L85
 			// Bail on 0 as well in case SectorStartEpoch is uninitialized due to some bug
+			//
+			// Additionally if the SlashEpoch is set this means the underlying sector is
+			// terminated for whatever reason ( not just slashed ), and the deal record
+			// will soon be removed from the state entirely
 			if dealInfo.State.SectorStartEpoch <= 0 ||
-				dealInfo.State.SectorStartEpoch > head.Height() {
+				dealInfo.State.SectorStartEpoch > ts.Height() ||
+				dealInfo.State.SlashEpoch > -1 {
 				continue
 			}
 
-			netTotals.seenClient[dealInfo.Proposal.Client] = true
+			netTotals.clients[dealInfo.Proposal.Client] = struct{}{}
+
+			if _, seen := netTotals.providers[dealInfo.Proposal.Provider]; !seen {
+				pm := providerMeta{}
+
+				mi, err := api.StateMinerInfo(ctx, dealInfo.Proposal.Provider, ts.Key())
+				if err != nil {
+					return err
+				}
+
+				if mi.PeerId == nil || *mi.PeerId == "" {
+					log.Infof("private provider %s", dealInfo.Proposal.Provider)
+					pm.nonidentifiable = true
+					netTotals.UniquePrivateProviders++
+				}
+
+				netTotals.providers[dealInfo.Proposal.Provider] = pm
+				netTotals.UniqueProviders++
+			}
+
+			if _, seen := netTotals.pieces[dealInfo.Proposal.PieceCID]; !seen {
+				netTotals.pieces[dealInfo.Proposal.PieceCID] = struct{}{}
+				netTotals.UniqueBytes += int64(dealInfo.Proposal.PieceSize)
+				netTotals.UniqueCids++
+			}
+
 			netTotals.TotalBytes += int64(dealInfo.Proposal.PieceSize)
-			netTotals.seenProvider[dealInfo.Proposal.Provider] = true
-			netTotals.seenPieceCid[dealInfo.Proposal.PieceCID] = true
 			netTotals.TotalDeals++
+			if netTotals.providers[dealInfo.Proposal.Provider].nonidentifiable {
+				netTotals.PrivateTotalBytes += int64(dealInfo.Proposal.PieceSize)
+				netTotals.PrivateTotalDeals++
+			}
 
 			if dealInfo.Proposal.VerifiedDeal {
-				netTotals.FilplusTotalDeals++
-				netTotals.FilplusTotalBytes += int64(dealInfo.Proposal.PieceSize)
+				netTotals.FilPlus.TotalBytes += int64(dealInfo.Proposal.PieceSize)
+				netTotals.FilPlus.TotalDeals++
+				if netTotals.providers[dealInfo.Proposal.Provider].nonidentifiable {
+					netTotals.FilPlus.PrivateTotalBytes += int64(dealInfo.Proposal.PieceSize)
+					netTotals.FilPlus.PrivateTotalDeals++
+				}
 			}
 		}
 
-		netTotals.UniqueCids = len(netTotals.seenPieceCid)
-		netTotals.UniqueClients = len(netTotals.seenClient)
-		netTotals.UniqueProviders = len(netTotals.seenProvider)
+		netTotals.UniqueClients = len(netTotals.clients)
 
-		netTotals.CapacityCarryingData, _ = new(corebig.Rat).SetFrac(
+		ccd, _ := new(corebig.Rat).SetFrac(
 			corebig.NewInt(netTotals.TotalBytes),
 			netTotals.RawNetworkPower.Int,
 		).Float64()
+		netTotals.CapacityCarryingData = humanFloat(ccd)
+
+		ccdfp, _ := new(corebig.Rat).SetFrac(
+			corebig.NewInt(netTotals.FilPlus.TotalBytes),
+			netTotals.RawNetworkPower.Int,
+		).Float64()
+		netTotals.FilPlus.CapacityCarryingData = humanFloat(ccdfp)
 
 		return json.NewEncoder(os.Stdout).Encode(
 			networkTotalsOutput{
-				Epoch:    int64(head.Height()),
+				Epoch:    int64(ts.Height()),
 				Endpoint: "NETWORK_WIDE_TOTALS",
 				Payload:  netTotals,
 			},
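
The bit-size argument in humanFloat's MarshalJSON is doing the rounding work: FormatFloat with bitSize 32 emits the shortest decimal that round-trips through a float32, trimming the output to a handful of digits, while the 'f' verb keeps it exponent-free. A standalone illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	f := 0.123456789123456
	fmt.Println(strconv.FormatFloat(f, 'f', -1, 64)) // 0.123456789123456
	fmt.Println(strconv.FormatFloat(f, 'f', -1, 32)) // 0.12345679 (float32 precision, no exponent)
}
```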


@ -1,8 +1,13 @@
 package main
 
 import (
+	"encoding/hex"
 	"fmt"
 
+	"github.com/filecoin-project/lotus/chain/actors/builtin/multisig"
+
+	"github.com/filecoin-project/go-state-types/crypto"
+
 	"github.com/filecoin-project/go-state-types/big"
 
 	"github.com/urfave/cli/v2"
@ -35,6 +40,7 @@ var verifRegCmd = &cli.Command{
 		verifRegListClientsCmd,
 		verifRegCheckClientCmd,
 		verifRegCheckVerifierCmd,
+		verifRegRemoveVerifiedClientDataCapCmd,
 	},
 }
 
@ -409,3 +415,154 @@ var verifRegCheckVerifierCmd = &cli.Command{
 		return nil
 	},
 }
+
+var verifRegRemoveVerifiedClientDataCapCmd = &cli.Command{
+	Name:      "remove-verified-client-data-cap",
+	Usage:     "Remove data cap from verified client",
+	ArgsUsage: "<message sender> <client address> <allowance to remove> <verifier 1 address> <verifier 1 signature> <verifier 2 address> <verifier 2 signature>",
+	Action: func(cctx *cli.Context) error {
+		if cctx.Args().Len() != 7 {
+			return fmt.Errorf("must specify seven arguments: sender, client, allowance to remove, verifier 1 address, verifier 1 signature, verifier 2 address, verifier 2 signature")
+		}
+
+		srv, err := lcli.GetFullNodeServices(cctx)
+		if err != nil {
+			return err
+		}
+		defer srv.Close() //nolint:errcheck
+
+		api := srv.FullNodeAPI()
+		ctx := lcli.ReqContext(cctx)
+
+		sender, err := address.NewFromString(cctx.Args().Get(0))
+		if err != nil {
+			return err
+		}
+
+		client, err := address.NewFromString(cctx.Args().Get(1))
+		if err != nil {
+			return err
+		}
+
+		allowanceToRemove, err := types.BigFromString(cctx.Args().Get(2))
+		if err != nil {
+			return err
+		}
+
+		verifier1Addr, err := address.NewFromString(cctx.Args().Get(3))
+		if err != nil {
+			return err
+		}
+
+		verifier1Sig, err := hex.DecodeString(cctx.Args().Get(4))
+		if err != nil {
+			return err
+		}
+
+		verifier2Addr, err := address.NewFromString(cctx.Args().Get(5))
+		if err != nil {
+			return err
+		}
+
+		verifier2Sig, err := hex.DecodeString(cctx.Args().Get(6))
+		if err != nil {
+			return err
+		}
+
+		var sig1 crypto.Signature
+		if err := sig1.UnmarshalBinary(verifier1Sig); err != nil {
+			return xerrors.Errorf("couldn't unmarshal sig: %w", err)
+		}
+
+		var sig2 crypto.Signature
+		if err := sig2.UnmarshalBinary(verifier2Sig); err != nil {
+			return xerrors.Errorf("couldn't unmarshal sig: %w", err)
+		}
+
+		params, err := actors.SerializeParams(&verifreg.RemoveDataCapParams{
+			VerifiedClientToRemove: client,
+			DataCapAmountToRemove:  allowanceToRemove,
+			VerifierRequest1: verifreg.RemoveDataCapRequest{
+				Verifier:          verifier1Addr,
+				VerifierSignature: sig1,
+			},
+			VerifierRequest2: verifreg.RemoveDataCapRequest{
+				Verifier:          verifier2Addr,
+				VerifierSignature: sig2,
+			},
+		})
+		if err != nil {
+			return err
+		}
+
+		vrk, err := api.StateVerifiedRegistryRootKey(ctx, types.EmptyTSK)
+		if err != nil {
+			return err
+		}
+
+		vrkState, err := api.StateGetActor(ctx, vrk, types.EmptyTSK)
+		if err != nil {
+			return err
+		}
+
+		apibs := blockstore.NewAPIBlockstore(api)
+		store := adt.WrapStore(ctx, cbor.NewCborStore(apibs))
+
+		st, err := multisig.Load(store, vrkState)
+		if err != nil {
+			return err
+		}
+
+		signers, err := st.Signers()
+		if err != nil {
+			return err
+		}
+
+		senderIsSigner := false
+		senderIdAddr, err := address.IDFromAddress(sender)
+		if err != nil {
+			return err
+		}
+
+		for _, signer := range signers {
+			signerIdAddr, err := address.IDFromAddress(signer)
+			if err != nil {
+				return err
+			}
+			if signerIdAddr == senderIdAddr {
+				senderIsSigner = true
+			}
+		}
+
+		if !senderIsSigner {
+			return fmt.Errorf("sender must be a vrk signer")
+		}
+
+		proto, err := api.MsigPropose(ctx, vrk, verifreg.Address, big.Zero(), sender, uint64(verifreg.Methods.RemoveVerifiedClientDataCap), params)
+		if err != nil {
+			return err
+		}
+
+		sm, _, err := srv.PublishMessage(ctx, proto, false)
+		if err != nil {
+			return err
+		}
+
+		msgCid := sm.Cid()
+
+		fmt.Printf("message sent, now waiting on cid: %s\n", msgCid)
+
+		mwait, err := api.StateWaitMsg(ctx, msgCid, uint64(cctx.Int("confidence")), build.Finality, true)
+		if err != nil {
+			return err
+		}
+
+		if mwait.Receipt.ExitCode != 0 {
+			return fmt.Errorf("failed to remove verified data cap: %d", mwait.Receipt.ExitCode)
+		}
+
+		//TODO: Internal msg might still have failed
+		return nil
+	},
+}
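
The two hex arguments must decode to crypto.Signature's binary encoding, one type byte (1 for secp256k1, 2 for BLS) followed by the raw signature bytes, which is what the UnmarshalBinary calls above parse and what the `lotus filplus sign-remove-data-cap-proposal` command documented further down presumably emits. A standalone round-trip sketch of that encoding:

```go
package main

import (
	"encoding/hex"
	"fmt"

	"github.com/filecoin-project/go-state-types/crypto"
)

func main() {
	// Illustrative payload only; a real signature comes from a notary's wallet.
	sig := crypto.Signature{Type: crypto.SigTypeBLS, Data: []byte{0xde, 0xad, 0xbe, 0xef}}

	b, err := sig.MarshalBinary() // type byte followed by the raw signature bytes
	if err != nil {
		panic(err)
	}
	fmt.Println(hex.EncodeToString(b)) // "02deadbeef": the hex shape this CLI expects

	var back crypto.Signature
	if err := back.UnmarshalBinary(b); err != nil { // what the command does with each argument
		panic(err)
	}
	fmt.Println(back.Type == crypto.SigTypeBLS) // true
}
```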


@ -104,6 +104,7 @@
 * [Return](#Return)
 * [ReturnAddPiece](#ReturnAddPiece)
 * [ReturnFetch](#ReturnFetch)
+* [ReturnFinalizeReplicaUpdate](#ReturnFinalizeReplicaUpdate)
 * [ReturnFinalizeSector](#ReturnFinalizeSector)
 * [ReturnGenerateSectorKeyFromData](#ReturnGenerateSectorKeyFromData)
 * [ReturnMoveStorage](#ReturnMoveStorage)
@ -123,6 +124,7 @@
 * [SealingAbort](#SealingAbort)
 * [SealingSchedDiag](#SealingSchedDiag)
 * [Sector](#Sector)
+* [SectorAbortUpgrade](#SectorAbortUpgrade)
 * [SectorAddPieceToAny](#SectorAddPieceToAny)
 * [SectorCommitFlush](#SectorCommitFlush)
 * [SectorCommitPending](#SectorCommitPending)
@ -343,6 +345,9 @@ Inputs:
       "ProofType": 8
     }
   ],
+  [
+    true
+  ],
   true
 ]
 ```
@ -2164,6 +2169,30 @@ Response: `{}`
 ### ReturnFetch
 
+Perms: admin
+
+Inputs:
+```json
+[
+  {
+    "Sector": {
+      "Miner": 1000,
+      "Number": 9
+    },
+    "ID": "07070707-0707-0707-0707-070707070707"
+  },
+  {
+    "Code": 0,
+    "Message": "string value"
+  }
+]
+```
+
+Response: `{}`
+
+### ReturnFinalizeReplicaUpdate
+
 Perms: admin
 
 Inputs:
@ -2584,6 +2613,21 @@ Response: `{}`
 ## Sector
 
+### SectorAbortUpgrade
+SectorAbortUpgrade can be called on sectors that are in the process of being upgraded to abort it
+
+Perms: admin
+
+Inputs:
+```json
+[
+  9
+]
+```
+
+Response: `{}`
+
 ### SectorAddPieceToAny
 Add piece to an open sector. If no sectors with enough space are open,
 either a new sector will be created, or this call will block until more


@ -10,6 +10,7 @@
 * [Add](#Add)
 * [AddPiece](#AddPiece)
 * [Finalize](#Finalize)
+* [FinalizeReplicaUpdate](#FinalizeReplicaUpdate)
 * [FinalizeSector](#FinalizeSector)
 * [Generate](#Generate)
 * [GenerateSectorKeyFromData](#GenerateSectorKeyFromData)
@ -1112,6 +1113,41 @@ Response:
 ## Finalize
 
+### FinalizeReplicaUpdate
+
+Perms: admin
+
+Inputs:
+```json
+[
+  {
+    "ID": {
+      "Miner": 1000,
+      "Number": 9
+    },
+    "ProofType": 8
+  },
+  [
+    {
+      "Offset": 1024,
+      "Size": 1024
+    }
+  ]
+]
+```
+
+Response:
+```json
+{
+  "Sector": {
+    "Miner": 1000,
+    "Number": 9
+  },
+  "ID": "07070707-0707-0707-0707-070707070707"
+}
+```
+
 ### FinalizeSector


@ -7,7 +7,7 @@ USAGE:
    lotus-miner [global options] command [command options] [arguments...]
 
 VERSION:
-   1.15.0-dev
+   1.15.1-dev
 
 COMMANDS:
    init      Initialize a lotus miner repo
@ -1663,6 +1663,7 @@ COMMANDS:
    terminate         Terminate sector on-chain then remove (WARNING: This means losing power and collateral for the removed sector)
    remove            Forcefully remove a sector (WARNING: This means losing power and collateral for the removed sector (use 'terminate' for lower penalty))
    snap-up           Mark a committed capacity sector to be filled with deals
+   abort-upgrade     Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to as before
    mark-for-upgrade  Mark a committed capacity sector for replacement by a sector with deals
    seal              Manually start sealing a sector (filling any unused space with junk)
    set-seal-delay    Set the time, in minutes, that a new sector waits for deals before sealing starts
@ -1706,7 +1707,8 @@ OPTIONS:
    --color, -c           use color in display output (default: depends on output being a TTY)
    --fast, -f            don't show on-chain info for better performance (default: false)
    --events, -e          display number of events the sector has received (default: false)
-   --seal-time           display how long it took for the sector to be sealed (default: false)
+   --initial-pledge, -p  display initial pledge (default: false)
+   --seal-time, -t       display how long it took for the sector to be sealed (default: false)
    --states value        filter sectors by a comma-separated list of states
    --unproven, -u        only show sectors which aren't in the 'Proving' state (default: false)
    --help, -h            show help (default: false)
@ -1896,6 +1898,20 @@ OPTIONS:
 ```
 
+### lotus-miner sectors abort-upgrade
+```
+NAME:
+   lotus-miner sectors abort-upgrade - Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to as before
+
+USAGE:
+   lotus-miner sectors abort-upgrade [command options] <sectorNum>
+
+OPTIONS:
+   --really-do-it  pass this flag if you know what you are doing (default: false)
+   --help, -h      show help (default: false)
+```
+
 ### lotus-miner sectors mark-for-upgrade
 ```
 NAME:


@ -7,7 +7,7 @@ USAGE:
    lotus-worker [global options] command [command options] [arguments...]
 
 VERSION:
-   1.15.0-dev
+   1.15.1-dev
 
 COMMANDS:
    run   Start lotus worker
@ -46,6 +46,7 @@ OPTIONS:
    --commit                      enable commit (32G sectors: all cores or GPUs, 128GiB Memory + 64GiB swap) (default: true)
    --replica-update              enable replica update (default: true)
    --prove-replica-update2       enable prove replica update 2 (default: true)
+   --regen-sector-key            enable regen sector key (default: true)
    --parallel-fetch-limit value  maximum fetch operations to run in parallel (default: 5)
    --timeout value               used when 'listen' is unspecified. must be a valid duration recognized by golang's time.ParseDuration function (default: "30m")
    --help, -h                    show help (default: false)
@ -168,7 +169,7 @@ NAME:
    lotus-worker tasks enable - Enable a task type
 
 USAGE:
-   lotus-worker tasks enable [command options] [UNS|C2|PC2|PC1|AP]
+   lotus-worker tasks enable [command options] [UNS|C2|PC2|PC1|PR2|RU|AP|GSK]
 
 OPTIONS:
    --help, -h  show help (default: false)
@ -181,7 +182,7 @@ NAME:
    lotus-worker tasks disable - Disable a task type
 
 USAGE:
-   lotus-worker tasks disable [command options] [UNS|C2|PC2|PC1|AP]
+   lotus-worker tasks disable [command options] [UNS|C2|PC2|PC1|PR2|RU|AP|GSK]
 
 OPTIONS:
    --help, -h  show help (default: false)


@ -7,7 +7,7 @@ USAGE:
    lotus [global options] command [command options] [arguments...]
 
 VERSION:
-   1.15.0-dev
+   1.15.1-dev
 
 COMMANDS:
    daemon  Start a lotus daemon process
@ -1232,6 +1232,7 @@ COMMANDS:
    list-clients                   list all verified clients
    check-client-datacap           check verified client remaining bytes
    check-notary-datacap           check a notary's remaining bytes
+   sign-remove-data-cap-proposal  allows a notary to sign a Remove Data Cap Proposal
    help, h                        Shows a list of commands or help for one command
 
 OPTIONS:
@ -1305,6 +1306,20 @@ OPTIONS:
 ```
 
+### lotus filplus sign-remove-data-cap-proposal
+```
+NAME:
+   lotus filplus sign-remove-data-cap-proposal - allows a notary to sign a Remove Data Cap Proposal
+
+USAGE:
+   lotus filplus sign-remove-data-cap-proposal [command options] [arguments...]
+
+OPTIONS:
+   --id value  specify the RemoveDataCapProposal ID (will look up on chain if unspecified) (default: 0)
+   --help, -h  show help (default: false)
+```
+
 ## lotus paych
 ```
 NAME:
@ -1631,7 +1646,7 @@ NAME:
    lotus mpool replace - replace a message in the mempool
 
 USAGE:
-   lotus mpool replace [command options] <from nonce> | <message-cid>
+   lotus mpool replace [command options] <from> <nonce> | <message-cid>
 
 OPTIONS:
    --gas-feecap value  gas feecap for new message (burn and pay to miner, attoFIL/GasUnit)


@ -163,11 +163,11 @@
 #HotStoreType = "badger"
 
 # MarkSetType specifies the type of the markset.
-# It can be "map" (default) for in memory marking or "badger" for on-disk marking.
+# It can be "map" for in memory marking or "badger" (default) for on-disk marking.
 #
 # type: string
 # env var: LOTUS_CHAINSTORE_SPLITSTORE_MARKSETTYPE
-#MarkSetType = "map"
+#MarkSetType = "badger"
 
 # HotStoreMessageRetention specifies the retention policy for messages, in finalities beyond
 # the compaction boundary; default is 0.


@ -167,6 +167,14 @@
 # env var: LOTUS_DEALMAKING_EXPECTEDSEALDURATION
 #ExpectedSealDuration = "24h0m0s"
 
+# Whether new sectors are created to pack incoming deals
+# When this is set to false no new sectors will be created for sealing incoming deals
+# This is useful for forcing all deals to be assigned as snap deals to sectors marked for upgrade
+#
+# type: bool
+# env var: LOTUS_DEALMAKING_MAKENEWSECTORFORDEALS
+#MakeNewSectorForDeals = true
+
 # Maximum amount of time proposed deal StartEpoch can be in future
 #
 # type: Duration
@ -451,6 +459,9 @@
 # env var: LOTUS_STORAGE_ALLOWPROVEREPLICAUPDATE2
 #AllowProveReplicaUpdate2 = true
 
+# env var: LOTUS_STORAGE_ALLOWREGENSECTORKEY
+#AllowRegenSectorKey = true
+
 # env var: LOTUS_STORAGE_RESOURCEFILTERING
 #ResourceFiltering = "hardware"

extern/filecoin-ffi (vendored submodule)

@ -1 +1 @@
-Subproject commit e660df5616e397b2d8ac316f45ddfa7a44637971
+Subproject commit 5ec5d805c01ea85224f6448dd6c6fa0a2a73c028


@ -19,11 +19,11 @@ import (
 // FaultTracker TODO: Track things more actively
 type FaultTracker interface {
-	CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, rg storiface.RGetter) (map[abi.SectorID]string, error)
+	CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, update []bool, rg storiface.RGetter) (map[abi.SectorID]string, error)
 }
 
 // CheckProvable returns unprovable sectors
-func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, rg storiface.RGetter) (map[abi.SectorID]string, error) {
+func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, update []bool, rg storiface.RGetter) (map[abi.SectorID]string, error) {
 	var bad = make(map[abi.SectorID]string)
 
 	ssize, err := pp.SectorSize()
@ -32,11 +32,31 @@ func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof,
 	}
 
 	// TODO: More better checks
-	for _, sector := range sectors {
+	for i, sector := range sectors {
 		err := func() error {
 			ctx, cancel := context.WithCancel(ctx)
 			defer cancel()
 
+			var fReplica string
+			var fCache string
+
+			if update[i] {
+				lockedUpdate, err := m.index.StorageTryLock(ctx, sector.ID, storiface.FTUpdate|storiface.FTUpdateCache, storiface.FTNone)
+				if err != nil {
+					return xerrors.Errorf("acquiring sector lock: %w", err)
+				}
+				if !lockedUpdate {
+					log.Warnw("CheckProvable Sector FAULT: can't acquire read lock on update replica", "sector", sector)
+					bad[sector.ID] = fmt.Sprint("can't acquire read lock")
+					return nil
+				}
+				lp, _, err := m.localStore.AcquireSector(ctx, sector, storiface.FTUpdate|storiface.FTUpdateCache, storiface.FTNone, storiface.PathStorage, storiface.AcquireMove)
+				if err != nil {
+					log.Warnw("CheckProvable Sector FAULT: acquire sector update replica in checkProvable", "sector", sector, "error", err)
+					bad[sector.ID] = fmt.Sprintf("acquire sector failed: %s", err)
+					return nil
+				}
+				fReplica, fCache = lp.Update, lp.UpdateCache
+			} else {
 				locked, err := m.index.StorageTryLock(ctx, sector.ID, storiface.FTSealed|storiface.FTCache, storiface.FTNone)
 				if err != nil {
 					return xerrors.Errorf("acquiring sector lock: %w", err)
@ -54,31 +74,34 @@ func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof,
 					bad[sector.ID] = fmt.Sprintf("acquire sector failed: %s", err)
 					return nil
 				}
+				fReplica, fCache = lp.Sealed, lp.Cache
+			}
 
-			if lp.Sealed == "" || lp.Cache == "" {
-				log.Warnw("CheckProvable Sector FAULT: cache and/or sealed paths not found", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache)
-				bad[sector.ID] = fmt.Sprintf("cache and/or sealed paths not found, cache %q, sealed %q", lp.Cache, lp.Sealed)
+			if fReplica == "" || fCache == "" {
+				log.Warnw("CheckProvable Sector FAULT: cache and/or sealed paths not found", "sector", sector, "sealed", fReplica, "cache", fCache)
+				bad[sector.ID] = fmt.Sprintf("cache and/or sealed paths not found, cache %q, sealed %q", fCache, fReplica)
 				return nil
 			}
 
 			toCheck := map[string]int64{
-				lp.Sealed:                        1,
-				filepath.Join(lp.Cache, "p_aux"): 0,
+				fReplica:                        1,
+				filepath.Join(fCache, "p_aux"): 0,
 			}
 
-			addCachePathsForSectorSize(toCheck, lp.Cache, ssize)
+			addCachePathsForSectorSize(toCheck, fCache, ssize)
 
 			for p, sz := range toCheck {
 				st, err := os.Stat(p)
 				if err != nil {
-					log.Warnw("CheckProvable Sector FAULT: sector file stat error", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache, "file", p, "err", err)
+					log.Warnw("CheckProvable Sector FAULT: sector file stat error", "sector", sector, "sealed", fReplica, "cache", fCache, "file", p, "err", err)
 					bad[sector.ID] = fmt.Sprintf("%s", err)
 					return nil
 				}
 
 				if sz != 0 {
 					if st.Size() != int64(ssize)*sz {
-						log.Warnw("CheckProvable Sector FAULT: sector file is wrong size", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache, "file", p, "size", st.Size(), "expectSize", int64(ssize)*sz)
+						log.Warnw("CheckProvable Sector FAULT: sector file is wrong size", "sector", sector, "sealed", fReplica, "cache", fCache, "file", p, "size", st.Size(), "expectSize", int64(ssize)*sz)
 						bad[sector.ID] = fmt.Sprintf("%s is wrong size (got %d, expect %d)", p, st.Size(), int64(ssize)*sz)
 						return nil
 					}
@ -99,14 +122,14 @@ func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof,
 				sector.ID.Number,
 			})
 			if err != nil {
-				log.Warnw("CheckProvable Sector FAULT: generating challenges", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache, "err", err)
+				log.Warnw("CheckProvable Sector FAULT: generating challenges", "sector", sector, "sealed", fReplica, "cache", fCache, "err", err)
 				bad[sector.ID] = fmt.Sprintf("generating fallback challenges: %s", err)
 				return nil
 			}
 
 			commr, err := rg(ctx, sector.ID)
 			if err != nil {
-				log.Warnw("CheckProvable Sector FAULT: getting commR", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache, "err", err)
+				log.Warnw("CheckProvable Sector FAULT: getting commR", "sector", sector, "sealed", fReplica, "cache", fCache, "err", err)
 				bad[sector.ID] = fmt.Sprintf("getting commR: %s", err)
 				return nil
 			}
@ -117,12 +140,12 @@ func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof,
 					SectorNumber: sector.ID.Number,
 					SealedCID:    commr,
 				},
-				CacheDirPath:     lp.Cache,
+				CacheDirPath:     fCache,
 				PoStProofType:    wpp,
-				SealedSectorPath: lp.Sealed,
+				SealedSectorPath: fReplica,
 			}, ch.Challenges[sector.ID.Number])
 			if err != nil {
-				log.Warnw("CheckProvable Sector FAULT: generating vanilla proof", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache, "err", err)
+				log.Warnw("CheckProvable Sector FAULT: generating vanilla proof", "sector", sector, "sealed", fReplica, "cache", fCache, "err", err)
 				bad[sector.ID] = fmt.Sprintf("generating vanilla proof: %s", err)
 				return nil
 			}


@@ -769,7 +769,7 @@ func (sb *Sealer) ReleaseSealed(ctx context.Context, sector storage.SectorRef) e
 	return xerrors.Errorf("not supported at this layer")
 }

-func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
+func (sb *Sealer) freeUnsealed(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
 	ssize, err := sector.ProofType.SectorSize()
 	if err != nil {
 		return err
@@ -834,6 +834,19 @@ func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef,
 	}

+	return nil
+}
+
+func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
+	ssize, err := sector.ProofType.SectorSize()
+	if err != nil {
+		return err
+	}
+
+	if err := sb.freeUnsealed(ctx, sector, keepUnsealed); err != nil {
+		return err
+	}
+
 	paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTCache, 0, storiface.PathStorage)
 	if err != nil {
 		return xerrors.Errorf("acquiring sector cache path: %w", err)
@@ -843,6 +856,43 @@ func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef,
 	return ffi.ClearCache(uint64(ssize), paths.Cache)
 }

+func (sb *Sealer) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
+	ssize, err := sector.ProofType.SectorSize()
+	if err != nil {
+		return err
+	}
+
+	if err := sb.freeUnsealed(ctx, sector, keepUnsealed); err != nil {
+		return err
+	}
+
+	{
+		paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTCache, 0, storiface.PathStorage)
+		if err != nil {
+			return xerrors.Errorf("acquiring sector cache path: %w", err)
+		}
+		defer done()
+
+		if err := ffi.ClearCache(uint64(ssize), paths.Cache); err != nil {
+			return xerrors.Errorf("clear cache: %w", err)
+		}
+	}
+
+	{
+		paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTUpdateCache, 0, storiface.PathStorage)
+		if err != nil {
+			return xerrors.Errorf("acquiring sector cache path: %w", err)
+		}
+		defer done()
+
+		if err := ffi.ClearCache(uint64(ssize), paths.UpdateCache); err != nil {
+			return xerrors.Errorf("clear cache: %w", err)
+		}
+	}
+
+	return nil
+}
+
 func (sb *Sealer) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) error {
 	// This call is meant to mark storage as 'freeable'. Given that unsealing is
 	// very expensive, we don't remove data as soon as we can - instead we only
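`FinalizeReplicaUpdate` above clears the sealing cache and the update cache in two scoped blocks, each acquiring its own path handle. Worth noting: Go runs `defer` at function return, not at block exit, so both `done()` releases are held until the whole finalize completes. A standalone demonstration of that semantic (toy code, unrelated to lotus):

```go
package main

import "fmt"

func demo() {
	{
		defer fmt.Println("release A") // fires when demo returns, not at this block's closing brace
	}
	{
		defer fmt.Println("release B")
	}
	fmt.Println("both handles still held here")
}

func main() {
	demo()
	// Deferred calls run LIFO at function return, so the output is:
	//   both handles still held here
	//   release B
	//   release A
}
```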

View File

@@ -105,6 +105,7 @@ type SealerConfig struct {
 	AllowUnseal              bool
 	AllowReplicaUpdate       bool
 	AllowProveReplicaUpdate2 bool
+	AllowRegenSectorKey      bool

 	// ResourceFiltering instructs the system which resource filtering strategy
 	// to use when evaluating tasks against this worker. An empty value defaults
@@ -146,7 +147,7 @@ func New(ctx context.Context, lstor *stores.Local, stor *stores.Remote, ls store
 	go m.sched.runSched()

 	localTasks := []sealtasks.TaskType{
-		sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize, sealtasks.TTFetch,
+		sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize, sealtasks.TTFetch, sealtasks.TTFinalizeReplicaUpdate,
 	}
 	if sc.AllowAddPiece {
 		localTasks = append(localTasks, sealtasks.TTAddPiece)
@@ -169,6 +170,9 @@ func New(ctx context.Context, lstor *stores.Local, stor *stores.Remote, ls store
 	if sc.AllowProveReplicaUpdate2 {
 		localTasks = append(localTasks, sealtasks.TTProveReplicaUpdate2)
 	}
+	if sc.AllowRegenSectorKey {
+		localTasks = append(localTasks, sealtasks.TTRegenSectorKey)
+	}

 	wcfg := WorkerConfig{
 		IgnoreResourceFiltering: sc.ResourceFiltering == ResourceFilteringDisabled,
@@ -577,6 +581,74 @@ func (m *Manager) FinalizeSector(ctx context.Context, sector storage.SectorRef,
 	return nil
 }

+func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
+	ctx, cancel := context.WithCancel(ctx)
+	defer cancel()
+
+	if err := m.index.StorageLock(ctx, sector.ID, storiface.FTNone, storiface.FTSealed|storiface.FTUnsealed|storiface.FTCache|storiface.FTUpdate|storiface.FTUpdateCache); err != nil {
+		return xerrors.Errorf("acquiring sector lock: %w", err)
+	}
+
+	fts := storiface.FTUnsealed
+	{
+		unsealedStores, err := m.index.StorageFindSector(ctx, sector.ID, storiface.FTUnsealed, 0, false)
+		if err != nil {
+			return xerrors.Errorf("finding unsealed sector: %w", err)
+		}
+
+		if len(unsealedStores) == 0 { // In some edge-cases the unsealed sector may not exist already, that's fine
+			fts = storiface.FTNone
+		}
+	}
+
+	pathType := storiface.PathStorage
+	{
+		sealedStores, err := m.index.StorageFindSector(ctx, sector.ID, storiface.FTUpdate, 0, false)
+		if err != nil {
+			return xerrors.Errorf("finding sealed sector: %w", err)
+		}
+
+		for _, store := range sealedStores {
+			if store.CanSeal {
+				pathType = storiface.PathSealing
+				break
+			}
+		}
+	}
+
+	selector := newExistingSelector(m.index, sector.ID, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache, false)
+
+	err := m.sched.Schedule(ctx, sector, sealtasks.TTFinalizeReplicaUpdate, selector,
+		m.schedFetch(sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|fts, pathType, storiface.AcquireMove),
+		func(ctx context.Context, w Worker) error {
+			_, err := m.waitSimpleCall(ctx)(w.FinalizeReplicaUpdate(ctx, sector, keepUnsealed))
+			return err
+		})
+	if err != nil {
+		return err
+	}
+
+	fetchSel := newAllocSelector(m.index, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathStorage)
+	moveUnsealed := fts
+	{
+		if len(keepUnsealed) == 0 {
+			moveUnsealed = storiface.FTNone
+		}
+	}
+
+	err = m.sched.Schedule(ctx, sector, sealtasks.TTFetch, fetchSel,
+		m.schedFetch(sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|moveUnsealed, storiface.PathStorage, storiface.AcquireMove),
+		func(ctx context.Context, w Worker) error {
+			_, err := m.waitSimpleCall(ctx)(w.MoveStorage(ctx, sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|moveUnsealed))
+			return err
+		})
+	if err != nil {
+		return xerrors.Errorf("moving sector to storage: %w", err)
+	}
+
+	return nil
+}
+
 func (m *Manager) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) error {
 	return nil
 }
@@ -875,6 +947,10 @@ func (m *Manager) ReturnProveReplicaUpdate2(ctx context.Context, callID storifac
 	return m.returnResult(ctx, callID, proof, err)
 }

+func (m *Manager) ReturnFinalizeReplicaUpdate(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error {
+	return m.returnResult(ctx, callID, nil, err)
+}
+
 func (m *Manager) ReturnGenerateSectorKeyFromData(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error {
 	return m.returnResult(ctx, callID, nil, err)
 }
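The new manager method schedules finalization in two phases: a `TTFinalizeReplicaUpdate` task on a worker already holding the files, then a `TTFetch`/`MoveStorage` pass into long-term storage, carrying the unsealed file only when it exists and `keepUnsealed` is non-empty. The file-type sets it builds with `|` work because `storiface.SectorFileType` values are bit flags; a self-contained illustration, assuming the usual `1 << iota` layout (the local type below is a stand-in, not the real storiface definitions):

```go
package main

import "fmt"

// Stand-ins mirroring the assumed storiface flag layout.
type SectorFileType int

const (
	FTUnsealed SectorFileType = 1 << iota
	FTSealed
	FTCache
	FTUpdate
	FTUpdateCache

	FTNone SectorFileType = 0
)

func main() {
	all := FTCache | FTSealed | FTUpdate | FTUpdateCache

	fmt.Println(all&FTUpdate != 0)   // true: the update replica is in the set
	fmt.Println(all&FTUnsealed != 0) // false: the unsealed file was left out

	// FTNone is the zero value, so OR-ing it in is a no-op; this is how
	// moveUnsealed = FTNone silently drops the unsealed file from the move set.
	fmt.Println(all|FTNone == all) // true
}
```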

View File

@@ -477,6 +477,10 @@ func (mgr *SectorMgr) FinalizeSector(context.Context, storage.SectorRef, []stora
 	return nil
 }

+func (mgr *SectorMgr) FinalizeReplicaUpdate(context.Context, storage.SectorRef, []storage.Range) error {
+	return nil
+}
+
 func (mgr *SectorMgr) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) error {
 	return nil
 }
@@ -501,7 +505,7 @@ func (mgr *SectorMgr) Remove(ctx context.Context, sector storage.SectorRef) erro
 	return nil
 }

-func (mgr *SectorMgr) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, ids []storage.SectorRef, rg storiface.RGetter) (map[abi.SectorID]string, error) {
+func (mgr *SectorMgr) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, ids []storage.SectorRef, update []bool, rg storiface.RGetter) (map[abi.SectorID]string, error) {
 	bad := map[abi.SectorID]string{}

 	for _, sid := range ids {
@@ -577,6 +581,10 @@ func (mgr *SectorMgr) ReturnGenerateSectorKeyFromData(ctx context.Context, callI
 	panic("not supported")
 }

+func (mgr *SectorMgr) ReturnFinalizeReplicaUpdate(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error {
+	panic("not supported")
+}
+
 func (m mockVerifProver) VerifySeal(svi proof.SealVerifyInfo) (bool, error) {
 	plen, err := svi.SealProof.ProofSize()
 	if err != nil {

View File

@@ -119,6 +119,10 @@ func (s *schedTestWorker) GenerateSectorKeyFromData(ctx context.Context, sector
 	panic("implement me")
 }

+func (s *schedTestWorker) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) {
+	panic("implement me")
+}
+
 func (s *schedTestWorker) MoveStorage(ctx context.Context, sector storage.SectorRef, types storiface.SectorFileType) (storiface.CallID, error) {
 	panic("implement me")
 }

View File

@@ -18,6 +18,7 @@ const (
 	TTProveReplicaUpdate1 TaskType = "seal/v0/provereplicaupdate/1"
 	TTProveReplicaUpdate2 TaskType = "seal/v0/provereplicaupdate/2"
 	TTRegenSectorKey      TaskType = "seal/v0/regensectorkey"
+	TTFinalizeReplicaUpdate TaskType = "seal/v0/finalize/replicaupdate"
 )

 var order = map[TaskType]int{
@@ -52,6 +53,7 @@ var shortNames = map[TaskType]string{
 	TTProveReplicaUpdate1: "PR1",
 	TTProveReplicaUpdate2: "PR2",
 	TTRegenSectorKey:      "GSK",
+	TTFinalizeReplicaUpdate: "FRU",
 }

 func (a TaskType) MuchLess(b TaskType) (bool, bool) {

View File

@@ -294,6 +294,10 @@ func ftFromString(t string) (storiface.SectorFileType, error) {
 		return storiface.FTSealed, nil
 	case storiface.FTCache.String():
 		return storiface.FTCache, nil
+	case storiface.FTUpdate.String():
+		return storiface.FTUpdate, nil
+	case storiface.FTUpdateCache.String():
+		return storiface.FTUpdateCache, nil
 	default:
 		return 0, xerrors.Errorf("unknown sector file type: '%s'", t)
 	}
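`ftFromString` has to remain the inverse of `SectorFileType.String()`, so extending the stringer without this parser (or vice versa) would silently break path handling for the new update file types. A round-trip test sketch, assuming it lives in the same package as `ftFromString` and that the string forms match the stringer:

```go
func TestFileTypeRoundTrip(t *testing.T) {
	for _, ft := range []storiface.SectorFileType{
		storiface.FTUnsealed, storiface.FTSealed, storiface.FTCache,
		storiface.FTUpdate, storiface.FTUpdateCache,
	} {
		got, err := ftFromString(ft.String())
		if err != nil {
			t.Fatalf("parsing %q: %s", ft.String(), err)
		}
		if got != ft {
			t.Fatalf("round-trip mismatch for %q: got %d, want %d", ft.String(), got, ft)
		}
	}
}
```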

View File

@@ -120,6 +120,7 @@ type WorkerCalls interface {
 	SealCommit1(ctx context.Context, sector storage.SectorRef, ticket abi.SealRandomness, seed abi.InteractiveSealRandomness, pieces []abi.PieceInfo, cids storage.SectorCids) (CallID, error)
 	SealCommit2(ctx context.Context, sector storage.SectorRef, c1o storage.Commit1Out) (CallID, error)
 	FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (CallID, error)
+	FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (CallID, error)
 	ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) (CallID, error)
 	ReplicaUpdate(ctx context.Context, sector storage.SectorRef, pieces []abi.PieceInfo) (CallID, error)
 	ProveReplicaUpdate1(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid) (CallID, error)
@@ -182,6 +183,7 @@ type WorkerReturn interface {
 	ReturnProveReplicaUpdate1(ctx context.Context, callID CallID, proofs storage.ReplicaVanillaProofs, err *CallError) error
 	ReturnProveReplicaUpdate2(ctx context.Context, callID CallID, proof storage.ReplicaUpdateProof, err *CallError) error
 	ReturnGenerateSectorKeyFromData(ctx context.Context, callID CallID, err *CallError) error
+	ReturnFinalizeReplicaUpdate(ctx context.Context, callID CallID, err *CallError) error
 	ReturnMoveStorage(ctx context.Context, callID CallID, err *CallError) error
 	ReturnUnsealPiece(ctx context.Context, callID CallID, err *CallError) error
 	ReturnReadPiece(ctx context.Context, callID CallID, ok bool, err *CallError) error
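Both interface additions follow the existing async convention: a `WorkerCalls` method returns a `CallID` immediately, and the worker later reports the outcome through the matching `WorkerReturn` method; finalization carries no result payload beyond the possible error. A condensed sketch of the pairing, assuming the usual imports (`context`, `storage` from specs-storage, `storiface`) and collapsing the out-of-band plumbing into a direct call:

```go
// Sketch only: in lotus the return travels back asynchronously via RPC;
// here the two halves are wired together directly for illustration.
func finalizeUpgrade(ctx context.Context, w storiface.WorkerCalls, ret storiface.WorkerReturn,
	sector storage.SectorRef, keepUnsealed []storage.Range) error {
	callID, err := w.FinalizeReplicaUpdate(ctx, sector, keepUnsealed)
	if err != nil {
		return err
	}
	// Pretend the work finished immediately; a nil *CallError signals success,
	// and finalize has no payload, matching ReturnFinalizeReplicaUpdate above.
	return ret.ReturnFinalizeReplicaUpdate(ctx, callID, nil)
}
```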

Some files were not shown because too many files have changed in this diff.