Merge branch 'master' into digitalocean-use-snap

Cory Schwartz 2022-02-18 15:26:13 -08:00
commit 6ec82b7446
172 changed files with 7520 additions and 741 deletions


@ -44,13 +44,13 @@ commands:
- restore_cache:
name: Restore parameters cache
keys:
- 'v25-2k-lotus-params'
- 'v26-2k-lotus-params'
paths:
- /var/tmp/filecoin-proof-parameters/
- run: ./lotus fetch-params 2048
- save_cache:
name: Save parameters cache
key: 'v25-2k-lotus-params'
key: 'v26-2k-lotus-params'
paths:
- /var/tmp/filecoin-proof-parameters/
install_ipfs:
@ -861,6 +861,11 @@ workflows:
suite: itest-get_messages_in_ts
target: "./itests/get_messages_in_ts_test.go"
- test:
name: test-itest-mempool
suite: itest-mempool
target: "./itests/mempool_test.go"
- test:
name: test-itest-multisig
suite: itest-multisig


@ -44,13 +44,13 @@ commands:
- restore_cache:
name: Restore parameters cache
keys:
- 'v25-2k-lotus-params'
- 'v26-2k-lotus-params'
paths:
- /var/tmp/filecoin-proof-parameters/
- run: ./lotus fetch-params 2048
- save_cache:
name: Save parameters cache
key: 'v25-2k-lotus-params'
key: 'v26-2k-lotus-params'
paths:
- /var/tmp/filecoin-proof-parameters/
install_ipfs:


@ -14,8 +14,8 @@ Before you mark the PR ready for review, please make sure that:
- [ ] All commits have a clear commit message.
- [ ] The PR title is in the form of `<PR type>: <area>: <change being made>`
- example: ` fix: mempool: Introduce a cache for valid signatures`
- `PR type`: _fix_, _feat_, _INTERFACE BREAKING CHANGE_, _CONSENSUS BREAKING_, _build_, _chore_, _ci_, _docs_, _misc_, _perf_, _refactor_, _revert_, _style_, _test_
- `area`: _api_, _chain_, _state_, _vm_, _data transfer_, _market_, _mempool_, _message_, _block production_, _multisig_, _networking_, _paychan_, _proving_, _sealing_, _wallet_
- `PR type`: _fix_, _feat_, _INTERFACE BREAKING CHANGE_, _CONSENSUS BREAKING_, _build_, _chore_, _ci_, _docs_, _perf_, _refactor_, _revert_, _style_, _test_
- `area`: _api_, _chain_, _state_, _vm_, _data transfer_, _market_, _mempool_, _message_, _block production_, _multisig_, _networking_, _paychan_, _proving_, _sealing_, _wallet_, _deps_
- [ ] This PR has tests for new functionality or change in behaviour
- [ ] If new user-facing features are introduced, clear usage guidelines and/or documentation updates should be included in https://lotus.filecoin.io or [Discussion Tutorials](https://github.com/filecoin-project/lotus/discussions/categories/tutorials)
- [ ] CI is green


@ -1,5 +1,77 @@
# Lotus changelog
# 1.14.1 / 2022-02-18
This is an **optional** release of Lotus that fixes an incorrect *comment* about the network v15 OhSnap upgrade **date**. Note that the actual upgrade epoch in [v1.14.0](https://github.com/filecoin-project/lotus/releases/tag/v1.14.0) was correct.
# 1.14.0 / 2022-02-17
This is a MANDATORY release of Lotus that introduces [Filecoin network v15,
codenamed the OhSnap upgrade](https://github.com/filecoin-project/community/discussions/74?sort=new#discussioncomment-1922550).
The network is scheduled to upgrade to v15 at 2022-03-01T15:00:00Z (March 1st). All node operators, including storage providers, must upgrade to this release (or a later release) before that time. Storage providers must update their daemons, miners, and workers.
The OhSnap upgrade introduces the following FIPs, delivered in [actors v7](https://github.com/filecoin-project/specs-actors/releases/tag/v7.0.0):
- [FIP-0019 Snap Deals](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0019.md)
- [FIP-0028 Remove Datacap from Verified clients](https://github.com/filecoin-project/FIPs/pull/226)
It is recommended that storage providers download the new params before updating their node, miner, and workers. To do so:
- Download Lotus v1.14.0 or later
- Run `make lotus-shed`
- Run `./lotus-shed fetch-params` with the appropriate `proving-params` flag
- Upgrade the Lotus daemon and miner **when the previous step is complete**
All node operators, including storage providers, should be aware that a pre-migration will begin at 2022-03-01T13:30:00Z (90 minutes before the upgrade). The pre-migration will take between 20 and 50 minutes, depending on hardware specs. During this time, expect slower block validation times, increased CPU and memory usage, and longer delays for API queries.
## New Features and Changes
- Integrate actor v7-rc1:
- Integrate v7 actors ([#7617](https://github.com/filecoin-project/lotus/pull/7617))
- feat: state: Fast migration for v15 ([#7933](https://github.com/filecoin-project/lotus/pull/7933))
- fix: blockstore: Add missing locks to autobatch::Get() ([#7939](https://github.com/filecoin-project/lotus/pull/7939))
- correctness fixes for the autobatch blockstore ([#7940](https://github.com/filecoin-project/lotus/pull/7940))
- Implement and support [FIP-0019 Snap Deals](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0019.md)
- chore: deps: Integrate proof v11.0.0 ([#7923](https://github.com/filecoin-project/lotus/pull/7923))
- Snap Deals Lotus Integration: FSM Posting and integration test ([#7810](https://github.com/filecoin-project/lotus/pull/7810))
- Feat/sector storage unseal ([#7730](https://github.com/filecoin-project/lotus/pull/7730))
- Feat/snap deals storage ([#7615](https://github.com/filecoin-project/lotus/pull/7615))
- fix: sealing: Add more deal expiration checks during PRU pipeline ([#7871](https://github.com/filecoin-project/lotus/pull/7871))
- chore: deps: Update go-paramfetch ([#7917](https://github.com/filecoin-project/lotus/pull/7917))
- feat: #7880 gas: add gas charge for VerifyReplicaUpdate ([#7897](https://github.com/filecoin-project/lotus/pull/7897))
- enhancement: sectors: disable existing cc upgrade path 2 days before the upgrade epoch ([#7900](https://github.com/filecoin-project/lotus/pull/7900))
## Improvements
- updating to new datastore/blockstore code with contexts ([#7646](https://github.com/filecoin-project/lotus/pull/7646))
- reorder transfer checks so as to ensure sending 2B FIL to yourself fails if you don't have that amount ([#7637](https://github.com/filecoin-project/lotus/pull/7637))
- VM: Circ supply should be constant per epoch ([#7811](https://github.com/filecoin-project/lotus/pull/7811))
## Bug Fixes
- Fix: state: circsupply calc around null blocks ([#7890](https://github.com/filecoin-project/lotus/pull/7890))
- Mempool msg selection should respect block message limits ([#7321](https://github.com/filecoin-project/lotus/pull/7321))
- SplitStore: suppress compaction near upgrades ([#7734](https://github.com/filecoin-project/lotus/pull/7734))
## Others
- chore: create pull_request_template.md ([#7726](https://github.com/filecoin-project/lotus/pull/7726))
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| Aayush Rajasekaran | 41 | +5538/-1205 | 189 |
| zenground0 | 11 | +3316/-524 | 124 |
| Jennifer Wang | 29 | +714/-599 | 68 |
| ZenGround0 | 3 | +263/-25 | 11 |
| c r | 2 | +198/-30 | 6 |
| vyzo | 4 | +189/-7 | 7 |
| Aayush | 11 | +146/-48 | 49 |
| web3-bot | 10 | +99/-17 | 10 |
| Steven Allen | 1 | +55/-37 | 1 |
| Jiaying Wang | 5 | +30/-8 | 5 |
| Jakub Sztandera | 2 | +8/-3 | 3 |
| Łukasz Magiera | 1 | +3/-3 | 2 |
| Travis Person | 1 | +2/-2 | 2 |
| Rod Vagg | 1 | +2/-2 | 2 |
# v1.13.2 / 2022-01-09
Lotus v1.13.2 is a *highly recommended* feature release with remarkable retrieval improvements, new features like


@ -345,6 +345,8 @@ gen: actors-gen type-gen method-gen cfgdoc-gen docsgen api-gen circleci
@echo ">>> IF YOU'VE MODIFIED THE CLI OR CONFIG, REMEMBER TO ALSO MAKE docsgen-cli"
.PHONY: gen
jen: gen
snap: lotus lotus-miner lotus-worker
snapcraft
# snapcraft upload ./lotus_*.snap


@ -51,6 +51,11 @@ type Net interface {
NetBlockRemove(ctx context.Context, acl NetBlockList) error //perm:admin
NetBlockList(ctx context.Context) (NetBlockList, error) //perm:read
// ResourceManager API
NetStat(ctx context.Context, scope string) (NetStat, error) //perm:read
NetLimit(ctx context.Context, scope string) (NetLimit, error) //perm:read
NetSetLimit(ctx context.Context, scope string, limit NetLimit) error //perm:admin
// ID returns peerID of libp2p node backing this API
ID(context.Context) (peer.ID, error) //perm:read
}
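The three new methods expose the libp2p resource manager through the node API. Below is a minimal usage sketch, not part of this diff: it assumes a connected client implementing `api.Net`, and the `"system"` scope name is an assumption for illustration.

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/api"
)

// doubleInboundConns is a sketch only: it reads the stats and limits for a
// scope and bumps the inbound connection limit. The "system" scope name used
// by callers is an assumption, not taken from this diff.
func doubleInboundConns(ctx context.Context, node api.Net, scope string) error {
	stat, err := node.NetStat(ctx, scope) // perm:read
	if err != nil {
		return err
	}
	fmt.Printf("scope %s stats: %+v\n", scope, stat)

	limit, err := node.NetLimit(ctx, scope) // perm:read
	if err != nil {
		return err
	}
	limit.ConnsInbound *= 2
	return node.NetSetLimit(ctx, scope, limit) // perm:admin
}
```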


@ -113,6 +113,8 @@ type StorageMiner interface {
// SectorCommitPending returns a list of pending Commit sectors to be sent in the next aggregate message
SectorCommitPending(ctx context.Context) ([]abi.SectorID, error) //perm:admin
SectorMatchPendingPiecesToOpenSectors(ctx context.Context) error //perm:admin
// SectorAbortUpgrade can be called on sectors that are in the process of being upgraded to abort the upgrade
SectorAbortUpgrade(context.Context, abi.SectorNumber) error //perm:admin
// WorkerConnect tells the node to connect to workers RPC
WorkerConnect(context.Context, string) error //perm:admin retry:true
@ -130,6 +132,7 @@ type StorageMiner interface {
ReturnProveReplicaUpdate1(ctx context.Context, callID storiface.CallID, vanillaProofs storage.ReplicaVanillaProofs, err *storiface.CallError) error //perm:admin retry:true
ReturnProveReplicaUpdate2(ctx context.Context, callID storiface.CallID, proof storage.ReplicaUpdateProof, err *storiface.CallError) error //perm:admin retry:true
ReturnGenerateSectorKeyFromData(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnFinalizeReplicaUpdate(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnReleaseUnsealed(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnMoveStorage(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true
ReturnUnsealPiece(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error //perm:admin retry:true


@ -39,6 +39,7 @@ type Worker interface {
SealCommit1(ctx context.Context, sector storage.SectorRef, ticket abi.SealRandomness, seed abi.InteractiveSealRandomness, pieces []abi.PieceInfo, cids storage.SectorCids) (storiface.CallID, error) //perm:admin
SealCommit2(ctx context.Context, sector storage.SectorRef, c1o storage.Commit1Out) (storiface.CallID, error) //perm:admin
FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) //perm:admin
FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) //perm:admin
ReplicaUpdate(ctx context.Context, sector storage.SectorRef, pieces []abi.PieceInfo) (storiface.CallID, error) //perm:admin
ProveReplicaUpdate1(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid) (storiface.CallID, error) //perm:admin
ProveReplicaUpdate2(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid, vanillaProofs storage.ReplicaVanillaProofs) (storiface.CallID, error) //perm:admin


@ -300,6 +300,34 @@ func init() {
Error: "<error>",
})
addExample(storiface.ResourceTable)
addExample(network.ScopeStat{
Memory: 123,
NumStreamsInbound: 1,
NumStreamsOutbound: 2,
NumConnsInbound: 3,
NumConnsOutbound: 4,
NumFD: 5,
})
addExample(map[string]network.ScopeStat{
"abc": {
Memory: 123,
NumStreamsInbound: 1,
NumStreamsOutbound: 2,
NumConnsInbound: 3,
NumConnsOutbound: 4,
NumFD: 5,
}})
addExample(api.NetLimit{
Memory: 123,
StreamsInbound: 1,
StreamsOutbound: 2,
Streams: 3,
ConnsInbound: 3,
ConnsOutbound: 4,
Conns: 4,
FD: 5,
})
}
func GetAPIType(name, pkg string) (i interface{}, t reflect.Type, permStruct []reflect.Type) {


@ -1811,6 +1811,21 @@ func (mr *MockFullNodeMockRecorder) NetFindPeer(arg0, arg1 interface{}) *gomock.
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetFindPeer", reflect.TypeOf((*MockFullNode)(nil).NetFindPeer), arg0, arg1)
}
// NetLimit mocks base method.
func (m *MockFullNode) NetLimit(arg0 context.Context, arg1 string) (api.NetLimit, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetLimit", arg0, arg1)
ret0, _ := ret[0].(api.NetLimit)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetLimit indicates an expected call of NetLimit.
func (mr *MockFullNodeMockRecorder) NetLimit(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetLimit", reflect.TypeOf((*MockFullNode)(nil).NetLimit), arg0, arg1)
}
// NetPeerInfo mocks base method.
func (m *MockFullNode) NetPeerInfo(arg0 context.Context, arg1 peer.ID) (*api.ExtendedPeerInfo, error) {
m.ctrl.T.Helper()
@ -1856,6 +1871,35 @@ func (mr *MockFullNodeMockRecorder) NetPubsubScores(arg0 interface{}) *gomock.Ca
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetPubsubScores", reflect.TypeOf((*MockFullNode)(nil).NetPubsubScores), arg0)
}
// NetSetLimit mocks base method.
func (m *MockFullNode) NetSetLimit(arg0 context.Context, arg1 string, arg2 api.NetLimit) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetSetLimit", arg0, arg1, arg2)
ret0, _ := ret[0].(error)
return ret0
}
// NetSetLimit indicates an expected call of NetSetLimit.
func (mr *MockFullNodeMockRecorder) NetSetLimit(arg0, arg1, arg2 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetSetLimit", reflect.TypeOf((*MockFullNode)(nil).NetSetLimit), arg0, arg1, arg2)
}
// NetStat mocks base method.
func (m *MockFullNode) NetStat(arg0 context.Context, arg1 string) (api.NetStat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetStat", arg0, arg1)
ret0, _ := ret[0].(api.NetStat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetStat indicates an expected call of NetStat.
func (mr *MockFullNodeMockRecorder) NetStat(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetStat", reflect.TypeOf((*MockFullNode)(nil).NetStat), arg0, arg1)
}
// NodeStatus mocks base method.
func (m *MockFullNode) NodeStatus(arg0 context.Context, arg1 bool) (api.NodeStatus, error) {
m.ctrl.T.Helper()
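These stubs are generated by mockgen. For orientation, here is a hedged sketch of how a test might exercise the new `NetStat` mock; the mocks import path, the `NewMockFullNode` constructor, and the `"system"` scope string are assumptions based on typical mockgen output, not taken from this diff.

```go
package example

import (
	"context"
	"testing"

	"github.com/golang/mock/gomock"

	"github.com/filecoin-project/lotus/api"
	mocks "github.com/filecoin-project/lotus/api/mocks"
)

// TestNetStatMocked is a sketch only: it programs the generated mock to return
// an empty NetStat and then calls the method as a client would.
func TestNetStatMocked(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	node := mocks.NewMockFullNode(ctrl)
	node.EXPECT().
		NetStat(gomock.Any(), "system").
		Return(api.NetStat{}, nil)

	if _, err := node.NetStat(context.Background(), "system"); err != nil {
		t.Fatal(err)
	}
}
```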


@ -587,11 +587,17 @@ type NetStruct struct {
NetFindPeer func(p0 context.Context, p1 peer.ID) (peer.AddrInfo, error) `perm:"read"`
NetLimit func(p0 context.Context, p1 string) (NetLimit, error) `perm:"read"`
NetPeerInfo func(p0 context.Context, p1 peer.ID) (*ExtendedPeerInfo, error) `perm:"read"`
NetPeers func(p0 context.Context) ([]peer.AddrInfo, error) `perm:"read"`
NetPubsubScores func(p0 context.Context) ([]PubsubScore, error) `perm:"read"`
NetSetLimit func(p0 context.Context, p1 string, p2 NetLimit) error `perm:"admin"`
NetStat func(p0 context.Context, p1 string) (NetStat, error) `perm:"read"`
}
}
@ -717,6 +723,8 @@ type StorageMinerStruct struct {
ReturnFetch func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
ReturnFinalizeReplicaUpdate func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
ReturnFinalizeSector func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
ReturnGenerateSectorKeyFromData func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
@ -749,6 +757,8 @@ type StorageMinerStruct struct {
SealingSchedDiag func(p0 context.Context, p1 bool) (interface{}, error) `perm:"admin"`
SectorAbortUpgrade func(p0 context.Context, p1 abi.SectorNumber) error `perm:"admin"`
SectorAddPieceToAny func(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 storage.Data, p3 PieceDealInfo) (SectorOffset, error) `perm:"admin"`
SectorCommitFlush func(p0 context.Context) ([]sealiface.CommitBatchRes, error) `perm:"admin"`
@ -866,6 +876,8 @@ type WorkerStruct struct {
Fetch func(p0 context.Context, p1 storage.SectorRef, p2 storiface.SectorFileType, p3 storiface.PathType, p4 storiface.AcquireMode) (storiface.CallID, error) `perm:"admin"`
FinalizeReplicaUpdate func(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) `perm:"admin"`
FinalizeSector func(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) `perm:"admin"`
GenerateSectorKeyFromData func(p0 context.Context, p1 storage.SectorRef, p2 cid.Cid) (storiface.CallID, error) `perm:"admin"`
@ -3625,6 +3637,17 @@ func (s *NetStub) NetFindPeer(p0 context.Context, p1 peer.ID) (peer.AddrInfo, er
return *new(peer.AddrInfo), ErrNotSupported
}
func (s *NetStruct) NetLimit(p0 context.Context, p1 string) (NetLimit, error) {
if s.Internal.NetLimit == nil {
return *new(NetLimit), ErrNotSupported
}
return s.Internal.NetLimit(p0, p1)
}
func (s *NetStub) NetLimit(p0 context.Context, p1 string) (NetLimit, error) {
return *new(NetLimit), ErrNotSupported
}
func (s *NetStruct) NetPeerInfo(p0 context.Context, p1 peer.ID) (*ExtendedPeerInfo, error) {
if s.Internal.NetPeerInfo == nil {
return nil, ErrNotSupported
@ -3658,6 +3681,28 @@ func (s *NetStub) NetPubsubScores(p0 context.Context) ([]PubsubScore, error) {
return *new([]PubsubScore), ErrNotSupported
}
func (s *NetStruct) NetSetLimit(p0 context.Context, p1 string, p2 NetLimit) error {
if s.Internal.NetSetLimit == nil {
return ErrNotSupported
}
return s.Internal.NetSetLimit(p0, p1, p2)
}
func (s *NetStub) NetSetLimit(p0 context.Context, p1 string, p2 NetLimit) error {
return ErrNotSupported
}
func (s *NetStruct) NetStat(p0 context.Context, p1 string) (NetStat, error) {
if s.Internal.NetStat == nil {
return *new(NetStat), ErrNotSupported
}
return s.Internal.NetStat(p0, p1)
}
func (s *NetStub) NetStat(p0 context.Context, p1 string) (NetStat, error) {
return *new(NetStat), ErrNotSupported
}
func (s *SignableStruct) Sign(p0 context.Context, p1 SignFunc) error {
if s.Internal.Sign == nil {
return ErrNotSupported
@ -4241,6 +4286,17 @@ func (s *StorageMinerStub) ReturnFetch(p0 context.Context, p1 storiface.CallID,
return ErrNotSupported
}
func (s *StorageMinerStruct) ReturnFinalizeReplicaUpdate(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
if s.Internal.ReturnFinalizeReplicaUpdate == nil {
return ErrNotSupported
}
return s.Internal.ReturnFinalizeReplicaUpdate(p0, p1, p2)
}
func (s *StorageMinerStub) ReturnFinalizeReplicaUpdate(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
return ErrNotSupported
}
func (s *StorageMinerStruct) ReturnFinalizeSector(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error {
if s.Internal.ReturnFinalizeSector == nil {
return ErrNotSupported
@ -4417,6 +4473,17 @@ func (s *StorageMinerStub) SealingSchedDiag(p0 context.Context, p1 bool) (interf
return nil, ErrNotSupported
}
func (s *StorageMinerStruct) SectorAbortUpgrade(p0 context.Context, p1 abi.SectorNumber) error {
if s.Internal.SectorAbortUpgrade == nil {
return ErrNotSupported
}
return s.Internal.SectorAbortUpgrade(p0, p1)
}
func (s *StorageMinerStub) SectorAbortUpgrade(p0 context.Context, p1 abi.SectorNumber) error {
return ErrNotSupported
}
func (s *StorageMinerStruct) SectorAddPieceToAny(p0 context.Context, p1 abi.UnpaddedPieceSize, p2 storage.Data, p3 PieceDealInfo) (SectorOffset, error) {
if s.Internal.SectorAddPieceToAny == nil {
return *new(SectorOffset), ErrNotSupported
@ -4967,6 +5034,17 @@ func (s *WorkerStub) Fetch(p0 context.Context, p1 storage.SectorRef, p2 storifac
return *new(storiface.CallID), ErrNotSupported
}
func (s *WorkerStruct) FinalizeReplicaUpdate(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) {
if s.Internal.FinalizeReplicaUpdate == nil {
return *new(storiface.CallID), ErrNotSupported
}
return s.Internal.FinalizeReplicaUpdate(p0, p1, p2)
}
func (s *WorkerStub) FinalizeReplicaUpdate(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) {
return *new(storiface.CallID), ErrNotSupported
}
func (s *WorkerStruct) FinalizeSector(p0 context.Context, p1 storage.SectorRef, p2 []storage.Range) (storiface.CallID, error) {
if s.Internal.FinalizeSector == nil {
return *new(storiface.CallID), ErrNotSupported


@ -12,6 +12,7 @@ import (
"github.com/ipfs/go-cid"
"github.com/ipfs/go-graphsync"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
ma "github.com/multiformats/go-multiaddr"
@ -129,6 +130,28 @@ type NetBlockList struct {
IPSubnets []string
}
type NetStat struct {
System *network.ScopeStat `json:",omitempty"`
Transient *network.ScopeStat `json:",omitempty"`
Services map[string]network.ScopeStat `json:",omitempty"`
Protocols map[string]network.ScopeStat `json:",omitempty"`
Peers map[string]network.ScopeStat `json:",omitempty"`
}
type NetLimit struct {
Dynamic bool `json:",omitempty"`
// set if Dynamic is false
Memory int64 `json:",omitempty"`
// set if Dynamic is true
MemoryFraction float64 `json:",omitempty"`
MinMemory int64 `json:",omitempty"`
MaxMemory int64 `json:",omitempty"`
Streams, StreamsInbound, StreamsOutbound int
Conns, ConnsInbound, ConnsOutbound int
FD int
}
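To make the `Dynamic` split concrete, here is a small illustrative sketch (not part of this diff; the numbers are made up, and the clamping role of `MinMemory`/`MaxMemory` is an assumption about the underlying resource manager):

```go
package example

import "github.com/filecoin-project/lotus/api"

// exampleLimits is a sketch only; the values are illustrative, not defaults
// from this diff.
func exampleLimits() (static, dynamic api.NetLimit) {
	// Static limit: Memory is an absolute byte cap (Dynamic left false).
	static = api.NetLimit{
		Memory:          512 << 20, // 512 MiB
		Streams:         512,
		StreamsInbound:  256,
		StreamsOutbound: 256,
		Conns:           64,
		ConnsInbound:    32,
		ConnsOutbound:   32,
		FD:              128,
	}
	// Dynamic limit: memory is derived from MemoryFraction, presumably bounded
	// by MinMemory/MaxMemory (Dynamic set to true).
	dynamic = api.NetLimit{
		Dynamic:         true,
		MemoryFraction:  0.125,
		MinMemory:       128 << 20,
		MaxMemory:       1 << 30,
		Streams:         512,
		StreamsInbound:  256,
		StreamsOutbound: 256,
		Conns:           64,
		ConnsInbound:    32,
		ConnsOutbound:   32,
		FD:              128,
	}
	return
}
```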
type ExtendedPeerInfo struct {
ID peer.ID
Agent string


@ -1724,6 +1724,21 @@ func (mr *MockFullNodeMockRecorder) NetFindPeer(arg0, arg1 interface{}) *gomock.
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetFindPeer", reflect.TypeOf((*MockFullNode)(nil).NetFindPeer), arg0, arg1)
}
// NetLimit mocks base method.
func (m *MockFullNode) NetLimit(arg0 context.Context, arg1 string) (api.NetLimit, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetLimit", arg0, arg1)
ret0, _ := ret[0].(api.NetLimit)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetLimit indicates an expected call of NetLimit.
func (mr *MockFullNodeMockRecorder) NetLimit(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetLimit", reflect.TypeOf((*MockFullNode)(nil).NetLimit), arg0, arg1)
}
// NetPeerInfo mocks base method.
func (m *MockFullNode) NetPeerInfo(arg0 context.Context, arg1 peer.ID) (*api.ExtendedPeerInfo, error) {
m.ctrl.T.Helper()
@ -1769,6 +1784,35 @@ func (mr *MockFullNodeMockRecorder) NetPubsubScores(arg0 interface{}) *gomock.Ca
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetPubsubScores", reflect.TypeOf((*MockFullNode)(nil).NetPubsubScores), arg0)
}
// NetSetLimit mocks base method.
func (m *MockFullNode) NetSetLimit(arg0 context.Context, arg1 string, arg2 api.NetLimit) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetSetLimit", arg0, arg1, arg2)
ret0, _ := ret[0].(error)
return ret0
}
// NetSetLimit indicates an expected call of NetSetLimit.
func (mr *MockFullNodeMockRecorder) NetSetLimit(arg0, arg1, arg2 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetSetLimit", reflect.TypeOf((*MockFullNode)(nil).NetSetLimit), arg0, arg1, arg2)
}
// NetStat mocks base method.
func (m *MockFullNode) NetStat(arg0 context.Context, arg1 string) (api.NetStat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetStat", arg0, arg1)
ret0, _ := ret[0].(api.NetStat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetStat indicates an expected call of NetStat.
func (mr *MockFullNodeMockRecorder) NetStat(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetStat", reflect.TypeOf((*MockFullNode)(nil).NetStat), arg0, arg1)
}
// PaychAllocateLane mocks base method.
func (m *MockFullNode) PaychAllocateLane(arg0 context.Context, arg1 address.Address) (uint64, error) {
m.ctrl.T.Helper()

blockstore/context.go (new file, 21 lines)

@ -0,0 +1,21 @@
package blockstore
import (
"context"
)
type hotViewKey struct{}
var hotView = hotViewKey{}
// WithHotView constructs a new context with an option that provides a hint to the blockstore
// (e.g. the splitstore) that the object (and its ipld references) should be kept hot.
func WithHotView(ctx context.Context) context.Context {
return context.WithValue(ctx, hotView, struct{}{})
}
// IsHotView returns true if the hot view option is set in the context
func IsHotView(ctx context.Context) bool {
v := ctx.Value(hotView)
return v != nil
}
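For context, here is a minimal caller-side sketch (not part of this diff) of how the hint is meant to be used; the context-aware `Blockstore` interface is assumed from the blockstore changes referenced in the changelog above.

```go
package example

import (
	"context"

	blocks "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/blockstore"
)

// fetchHot is a sketch only: wrapping the context marks the read as a "hot
// view", so a splitstore that only finds the object in the cold store can
// reify it back into the hot store (see the SplitStore.Get changes later in
// this commit).
func fetchHot(ctx context.Context, bs blockstore.Blockstore, c cid.Cid) (blocks.Block, error) {
	return bs.Get(blockstore.WithHotView(ctx), c)
}
```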


@ -49,10 +49,11 @@ These are options in the `[Chainstore.Splitstore]` section of the configuration:
blockstore and discards writes; this is necessary to support syncing from a snapshot.
- `MarkSetType` -- specifies the type of markset to use during compaction.
The markset is the data structure used by compaction/gc to track live objects.
The default value is `"map"`, which will use an in-memory map; if you are limited
in memory (or indeed see compaction run out of memory), you can also specify
`"badger"` which will use an disk backed markset, using badger. This will use
much less memory, but will also make compaction slower.
The default value is `"badger"`, which will use a disk-backed markset, using badger.
If you have a lot of memory (48G or more) you can also use `"map"`, which will use
an in-memory markset, speeding up compaction at the cost of higher memory usage.
Note: if you are using a VPS with a network volume, you need to provision at least
3000 IOPS for the badger markset.
- `HotStoreMessageRetention` -- specifies how many finalities, beyond the 4
finalities maintained by default, to maintain messages and message receipts in the
hotstore. This is useful for assistive nodes that want to support syncing for other
@ -105,6 +106,12 @@ Compaction works transactionally with the following algorithm:
- We delete in small batches taking a lock; each batch is checked again for marks, from the concurrent transactional mark, so as to never delete anything live
- We then end the transaction and compact/gc the hotstore.
As of [#8008](https://github.com/filecoin-project/lotus/pull/8008) the compaction algorithm has been
modified to eliminate sorting and maintain the cold object set on disk. This drastically reduces
memory usage; in fact, when using badger as the markset, compaction uses very little memory, and
it should now be possible to run splitstore with 32GB of RAM or less without danger of running out of
memory during compaction.
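For orientation, here is a minimal sketch (not part of this diff) of the on-disk cold set flow, using the `ColdSetWriter`/`ColdSetReader` added later in this commit; the path and the purge callback are hypothetical.

```go
package example

import (
	cid "github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/blockstore/splitstore"
)

// writeAndReplayColdSet is a sketch only: cold CIDs are appended to a file
// during the mark phase and then streamed back, in insertion order, for the
// purge, so nothing has to be held (or sorted) in memory.
func writeAndReplayColdSet(path string, cold []cid.Cid, purge func(cid.Cid) error) error {
	cw, err := splitstore.NewColdSetWriter(path)
	if err != nil {
		return err
	}
	for _, c := range cold {
		if err := cw.Write(c); err != nil {
			return err
		}
	}
	if err := cw.Close(); err != nil { // flushes the buffered writer
		return err
	}

	cr, err := splitstore.NewColdSetReader(path)
	if err != nil {
		return err
	}
	defer cr.Close() //nolint:errcheck

	return cr.ForEach(purge)
}
```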
## Garbage Collection
TBD -- see [#6577](https://github.com/filecoin-project/lotus/issues/6577)


@ -0,0 +1,118 @@
package splitstore
import (
"bufio"
"io"
"os"
"golang.org/x/xerrors"
cid "github.com/ipfs/go-cid"
mh "github.com/multiformats/go-multihash"
)
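// Checkpoint records a single CID on disk. Set overwrites the stored value in
// place (seek to the start of the file, write, flush), so the file always holds
// the most recently checkpointed object.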
type Checkpoint struct {
file *os.File
buf *bufio.Writer
}
func NewCheckpoint(path string) (*Checkpoint, error) {
file, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY|os.O_SYNC, 0644)
if err != nil {
return nil, xerrors.Errorf("error creating checkpoint: %w", err)
}
buf := bufio.NewWriter(file)
return &Checkpoint{
file: file,
buf: buf,
}, nil
}
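// OpenCheckpoint reopens an existing checkpoint file, returning the last CID
// recorded by Set (cid.Undef if the file is empty) together with a Checkpoint
// that is ready for further updates.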
func OpenCheckpoint(path string) (*Checkpoint, cid.Cid, error) {
filein, err := os.Open(path)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("error opening checkpoint for reading: %w", err)
}
defer filein.Close() //nolint:errcheck
bufin := bufio.NewReader(filein)
start, err := readRawCid(bufin, nil)
if err != nil && err != io.EOF {
return nil, cid.Undef, xerrors.Errorf("error reading cid from checkpoint: %w", err)
}
fileout, err := os.OpenFile(path, os.O_WRONLY|os.O_SYNC, 0644)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("error opening checkpoint for writing: %w", err)
}
bufout := bufio.NewWriter(fileout)
return &Checkpoint{
file: fileout,
buf: bufout,
}, start, nil
}
func (cp *Checkpoint) Set(c cid.Cid) error {
if _, err := cp.file.Seek(0, io.SeekStart); err != nil {
return xerrors.Errorf("error seeking beginning of checkpoint: %w", err)
}
if err := writeRawCid(cp.buf, c, true); err != nil {
return xerrors.Errorf("error writing cid to checkpoint: %w", err)
}
return nil
}
func (cp *Checkpoint) Close() error {
if cp.file == nil {
return nil
}
err := cp.file.Close()
cp.file = nil
cp.buf = nil
return err
}
func readRawCid(buf *bufio.Reader, hbuf []byte) (cid.Cid, error) {
sz, err := buf.ReadByte()
if err != nil {
return cid.Undef, err // don't wrap EOF as it is not an error here
}
if hbuf == nil {
hbuf = make([]byte, int(sz))
} else {
hbuf = hbuf[:int(sz)]
}
if _, err := io.ReadFull(buf, hbuf); err != nil {
return cid.Undef, xerrors.Errorf("error reading hash: %w", err) // wrap EOF, it's corrupt
}
hash, err := mh.Cast(hbuf)
if err != nil {
return cid.Undef, xerrors.Errorf("error casting multihash: %w", err)
}
return cid.NewCidV1(cid.Raw, hash), nil
}
func writeRawCid(buf *bufio.Writer, c cid.Cid, flush bool) error {
hash := c.Hash()
if err := buf.WriteByte(byte(len(hash))); err != nil {
return err
}
if _, err := buf.Write(hash); err != nil {
return err
}
if flush {
return buf.Flush()
}
return nil
}


@ -0,0 +1,147 @@
package splitstore
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/ipfs/go-cid"
"github.com/multiformats/go-multihash"
)
func TestCheckpoint(t *testing.T) {
dir, err := ioutil.TempDir("", "checkpoint.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(dir)
})
path := filepath.Join(dir, "checkpoint")
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
cp, err := NewCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if err := cp.Set(k1); err != nil {
t.Fatal(err)
}
if err := cp.Set(k2); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err := OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k2) {
t.Fatalf("expected start to be %s; got %s", k2, start)
}
if err := cp.Set(k3); err != nil {
t.Fatal(err)
}
if err := cp.Set(k4); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k4) {
t.Fatalf("expected start to be %s; got %s", k4, start)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
// also test correct operation with an empty checkpoint
cp, err = NewCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if start.Defined() {
t.Fatal("expected start to be undefined")
}
if err := cp.Set(k1); err != nil {
t.Fatal(err)
}
if err := cp.Set(k2); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k2) {
t.Fatalf("expected start to be %s; got %s", k2, start)
}
if err := cp.Set(k3); err != nil {
t.Fatal(err)
}
if err := cp.Set(k4); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k4) {
t.Fatalf("expected start to be %s; got %s", k4, start)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
}


@ -0,0 +1,102 @@
package splitstore
import (
"bufio"
"io"
"os"
"golang.org/x/xerrors"
cid "github.com/ipfs/go-cid"
)
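// ColdSetWriter appends CIDs to an on-disk cold set; ColdSetReader streams
// them back in insertion order via ForEach and can be Reset to replay from
// the beginning.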
type ColdSetWriter struct {
file *os.File
buf *bufio.Writer
}
type ColdSetReader struct {
file *os.File
buf *bufio.Reader
}
func NewColdSetWriter(path string) (*ColdSetWriter, error) {
file, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
return nil, xerrors.Errorf("error creating coldset: %w", err)
}
buf := bufio.NewWriter(file)
return &ColdSetWriter{
file: file,
buf: buf,
}, nil
}
func NewColdSetReader(path string) (*ColdSetReader, error) {
file, err := os.Open(path)
if err != nil {
return nil, xerrors.Errorf("error opening coldset: %w", err)
}
buf := bufio.NewReader(file)
return &ColdSetReader{
file: file,
buf: buf,
}, nil
}
func (s *ColdSetWriter) Write(c cid.Cid) error {
return writeRawCid(s.buf, c, false)
}
func (s *ColdSetWriter) Close() error {
if s.file == nil {
return nil
}
err1 := s.buf.Flush()
err2 := s.file.Close()
s.buf = nil
s.file = nil
if err1 != nil {
return err1
}
return err2
}
func (s *ColdSetReader) ForEach(f func(cid.Cid) error) error {
hbuf := make([]byte, 256)
for {
next, err := readRawCid(s.buf, hbuf)
if err != nil {
if err == io.EOF {
return nil
}
return xerrors.Errorf("error reading coldset: %w", err)
}
if err := f(next); err != nil {
return err
}
}
}
func (s *ColdSetReader) Reset() error {
_, err := s.file.Seek(0, io.SeekStart)
return err
}
func (s *ColdSetReader) Close() error {
if s.file == nil {
return nil
}
err := s.file.Close()
s.file = nil
s.buf = nil
return err
}


@ -0,0 +1,99 @@
package splitstore
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/ipfs/go-cid"
"github.com/multiformats/go-multihash"
)
func TestColdSet(t *testing.T) {
dir, err := ioutil.TempDir("", "coldset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(dir)
})
path := filepath.Join(dir, "coldset")
makeCid := func(i int) cid.Cid {
h, err := multihash.Sum([]byte(fmt.Sprintf("cid.%d", i)), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
const count = 1000
cids := make([]cid.Cid, 0, count)
for i := 0; i < count; i++ {
cids = append(cids, makeCid(i))
}
cw, err := NewColdSetWriter(path)
if err != nil {
t.Fatal(err)
}
for _, c := range cids {
if err := cw.Write(c); err != nil {
t.Fatal(err)
}
}
if err := cw.Close(); err != nil {
t.Fatal(err)
}
cr, err := NewColdSetReader(path)
if err != nil {
t.Fatal(err)
}
index := 0
err = cr.ForEach(func(c cid.Cid) error {
if index >= count {
t.Fatal("too many cids")
}
if !c.Equals(cids[index]) {
t.Fatalf("wrong cid %d; expected %s but got %s", index, cids[index], c)
}
index++
return nil
})
if err != nil {
t.Fatal(err)
}
if err := cr.Reset(); err != nil {
t.Fatal(err)
}
index = 0
err = cr.ForEach(func(c cid.Cid) error {
if index >= count {
t.Fatal("too many cids")
}
if !c.Equals(cids[index]) {
t.Fatalf("wrong cid; expected %s but got %s", cids[index], c)
}
index++
return nil
})
if err != nil {
t.Fatal(err)
}
}


@ -10,39 +10,36 @@ import (
var errMarkSetClosed = errors.New("markset closed")
// MarkSet is a utility to keep track of seen CID, and later query for them.
//
// * If the expected dataset is large, it can be backed by a datastore (e.g. bbolt).
// * If a probabilistic result is acceptable, it can be backed by a bloom filter
// MarkSet is an interface for tracking CIDs during chain and object walks
type MarkSet interface {
ObjectVisitor
Mark(cid.Cid) error
MarkMany([]cid.Cid) error
Has(cid.Cid) (bool, error)
Close() error
SetConcurrent()
}
type MarkSetVisitor interface {
MarkSet
ObjectVisitor
// BeginCriticalSection ensures that the markset is persisted to disk for recovery in case
// of abnormal termination during the critical section span.
BeginCriticalSection() error
// EndCriticalSection ends the critical section span.
EndCriticalSection()
}
type MarkSetEnv interface {
// Create creates a new markset within the environment.
// name is a unique name for this markset, mapped to the filesystem in disk-backed environments
// New creates a new markset within the environment.
// name is a unique name for this markset, mapped to the filesystem for on-disk persistence.
// sizeHint is a hint about the expected size of the markset
Create(name string, sizeHint int64) (MarkSet, error)
// CreateVisitor is like Create, but returns a wider interface that supports atomic visits.
// It may not be supported by some markset types (e.g. bloom).
CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error)
// SupportsVisitor returns true if the marksets created by this environment support the visitor interface.
SupportsVisitor() bool
New(name string, sizeHint int64) (MarkSet, error)
// Recover recovers an existing markset persisted on-disk.
Recover(name string) (MarkSet, error)
// Close closes the markset
Close() error
}
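To illustrate how the reworked interface is meant to be driven (derived from the interface comments above and the recovery tests later in this commit), here is a minimal sketch; the markset name, path, and backend string are illustrative.

```go
package example

import (
	cid "github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/blockstore/splitstore"
)

// markCriticalSection is a sketch only. Marks made while the critical section
// is open are persisted to disk, so after an abnormal termination the caller
// can reopen the same markset with env.Recover(name) instead of env.New and
// carry on from the persisted state.
func markCriticalSection(path string, liveRoots []cid.Cid) error {
	env, err := splitstore.OpenMarkSetEnv(path, "badger") // or "map"
	if err != nil {
		return err
	}
	defer env.Close() //nolint:errcheck

	ms, err := env.New("live", int64(len(liveRoots)))
	if err != nil {
		return err
	}
	defer ms.Close() //nolint:errcheck

	if err := ms.BeginCriticalSection(); err != nil { // start persisting marks
		return err
	}
	defer ms.EndCriticalSection() // drop persistence once the walk is done

	return ms.MarkMany(liveRoots)
}
```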
func OpenMarkSetEnv(path string, mtype string) (MarkSetEnv, error) {
switch mtype {
case "map":
return NewMapMarkSetEnv()
return NewMapMarkSetEnv(path)
case "badger":
return NewBadgerMarkSetEnv(path)
default:


@ -3,6 +3,7 @@ package splitstore
import (
"os"
"path/filepath"
"runtime"
"sync"
"golang.org/x/xerrors"
@ -28,13 +29,13 @@ type BadgerMarkSet struct {
writers int
seqno int
version int
persist bool
db *badger.DB
path string
}
var _ MarkSet = (*BadgerMarkSet)(nil)
var _ MarkSetVisitor = (*BadgerMarkSet)(nil)
var badgerMarkSetBatchSize = 16384
@ -48,11 +49,10 @@ func NewBadgerMarkSetEnv(path string) (MarkSetEnv, error) {
return &BadgerMarkSetEnv{path: msPath}, nil
}
func (e *BadgerMarkSetEnv) create(name string, sizeHint int64) (*BadgerMarkSet, error) {
name += ".tmp"
func (e *BadgerMarkSetEnv) New(name string, sizeHint int64) (MarkSet, error) {
path := filepath.Join(e.path, name)
db, err := openTransientBadgerDB(path)
db, err := openBadgerDB(path, false)
if err != nil {
return nil, xerrors.Errorf("error creating badger db: %w", err)
}
@ -68,18 +68,72 @@ func (e *BadgerMarkSetEnv) create(name string, sizeHint int64) (*BadgerMarkSet,
return ms, nil
}
func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
return e.create(name, sizeHint)
func (e *BadgerMarkSetEnv) Recover(name string) (MarkSet, error) {
path := filepath.Join(e.path, name)
if _, err := os.Stat(path); err != nil {
return nil, xerrors.Errorf("error stating badger db path: %w", err)
}
func (e *BadgerMarkSetEnv) CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) {
return e.create(name, sizeHint)
db, err := openBadgerDB(path, true)
if err != nil {
return nil, xerrors.Errorf("error creating badger db: %w", err)
}
func (e *BadgerMarkSetEnv) SupportsVisitor() bool { return true }
ms := &BadgerMarkSet{
pend: make(map[string]struct{}),
writing: make(map[int]map[string]struct{}),
db: db,
path: path,
persist: true,
}
ms.cond.L = &ms.mx
return ms, nil
}
func (e *BadgerMarkSetEnv) Close() error {
return os.RemoveAll(e.path)
return nil
}
func (s *BadgerMarkSet) BeginCriticalSection() error {
s.mx.Lock()
if s.persist {
s.mx.Unlock()
return nil
}
var write bool
var seqno int
if len(s.pend) > 0 {
write = true
seqno = s.nextBatch()
}
s.persist = true
s.mx.Unlock()
if write {
// all writes sync once persist is true
return s.write(seqno)
}
// wait for any pending writes and sync
s.mx.Lock()
for s.writers > 0 {
s.cond.Wait()
}
s.mx.Unlock()
return s.db.Sync()
}
func (s *BadgerMarkSet) EndCriticalSection() {
s.mx.Lock()
defer s.mx.Unlock()
s.persist = false
}
func (s *BadgerMarkSet) Mark(c cid.Cid) error {
@ -99,6 +153,23 @@ func (s *BadgerMarkSet) Mark(c cid.Cid) error {
return nil
}
func (s *BadgerMarkSet) MarkMany(batch []cid.Cid) error {
s.mx.Lock()
if s.pend == nil {
s.mx.Unlock()
return errMarkSetClosed
}
write, seqno := s.putMany(batch)
s.mx.Unlock()
if write {
return s.write(seqno)
}
return nil
}
func (s *BadgerMarkSet) Has(c cid.Cid) (bool, error) {
s.mx.RLock()
defer s.mx.RUnlock()
@ -204,16 +275,34 @@ func (s *BadgerMarkSet) tryDB(key []byte) (has bool, err error) {
// writer holds the exclusive lock
func (s *BadgerMarkSet) put(key string) (write bool, seqno int) {
s.pend[key] = struct{}{}
if len(s.pend) < badgerMarkSetBatchSize {
if !s.persist && len(s.pend) < badgerMarkSetBatchSize {
return false, 0
}
seqno = s.seqno
seqno = s.nextBatch()
return true, seqno
}
func (s *BadgerMarkSet) putMany(batch []cid.Cid) (write bool, seqno int) {
for _, c := range batch {
key := string(c.Hash())
s.pend[key] = struct{}{}
}
if !s.persist && len(s.pend) < badgerMarkSetBatchSize {
return false, 0
}
seqno = s.nextBatch()
return true, seqno
}
func (s *BadgerMarkSet) nextBatch() int {
seqno := s.seqno
s.seqno++
s.writing[seqno] = s.pend
s.pend = make(map[string]struct{})
return true, seqno
return seqno
}
func (s *BadgerMarkSet) write(seqno int) (err error) {
@ -258,6 +347,14 @@ func (s *BadgerMarkSet) write(seqno int) (err error) {
return xerrors.Errorf("error flushing batch to badger markset: %w", err)
}
s.mx.RLock()
persist := s.persist
s.mx.RUnlock()
if persist {
return s.db.Sync()
}
return nil
}
@ -277,13 +374,12 @@ func (s *BadgerMarkSet) Close() error {
db := s.db
s.db = nil
return closeTransientBadgerDB(db, s.path)
return closeBadgerDB(db, s.path, s.persist)
}
func (s *BadgerMarkSet) SetConcurrent() {}
func openTransientBadgerDB(path string) (*badger.DB, error) {
// clean up first
func openBadgerDB(path string, recover bool) (*badger.DB, error) {
// if it is not a recovery, clean up first
if !recover {
err := os.RemoveAll(path)
if err != nil {
return nil, xerrors.Errorf("error clearing markset directory: %w", err)
@ -293,10 +389,14 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
if err != nil {
return nil, xerrors.Errorf("error creating markset directory: %w", err)
}
}
opts := badger.DefaultOptions(path)
// we manually sync when we are in critical section
opts.SyncWrites = false
// no need to do that
opts.CompactL0OnClose = false
// we store hashes, not much to gain by compression
opts.Compression = options.None
// Note: We use FileIO for loading modes to avoid memory thrashing and interference
// between the system blockstore and the markset.
@ -305,6 +405,15 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
// exceeded 1GB in size.
opts.TableLoadingMode = options.FileIO
opts.ValueLogLoadingMode = options.FileIO
// We increase the number of L0 tables before compaction to make it unlikely to
// be necessary.
opts.NumLevelZeroTables = 20 // default is 5
opts.NumLevelZeroTablesStall = 30 // default is 10
// increase the number of compactors from default 2 so that if we ever have to
// compact, it is fast
if runtime.NumCPU()/2 > opts.NumCompactors {
opts.NumCompactors = runtime.NumCPU() / 2
}
opts.Logger = &badgerLogger{
SugaredLogger: log.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar(),
skip2: log.Desugar().WithOptions(zap.AddCallerSkip(2)).Sugar(),
@ -313,12 +422,16 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
return badger.Open(opts)
}
func closeTransientBadgerDB(db *badger.DB, path string) error {
func closeBadgerDB(db *badger.DB, path string, persist bool) error {
err := db.Close()
if err != nil {
return xerrors.Errorf("error closing badger markset: %w", err)
}
if persist {
return nil
}
err = os.RemoveAll(path)
if err != nil {
return xerrors.Errorf("error deleting badger markset: %w", err)


@ -1,12 +1,20 @@
package splitstore
import (
"bufio"
"io"
"os"
"path/filepath"
"sync"
"golang.org/x/xerrors"
cid "github.com/ipfs/go-cid"
)
type MapMarkSetEnv struct{}
type MapMarkSetEnv struct {
path string
}
var _ MarkSetEnv = (*MapMarkSetEnv)(nil)
@ -14,55 +22,194 @@ type MapMarkSet struct {
mx sync.RWMutex
set map[string]struct{}
ts bool
persist bool
file *os.File
buf *bufio.Writer
path string
}
var _ MarkSet = (*MapMarkSet)(nil)
var _ MarkSetVisitor = (*MapMarkSet)(nil)
func NewMapMarkSetEnv() (*MapMarkSetEnv, error) {
return &MapMarkSetEnv{}, nil
func NewMapMarkSetEnv(path string) (*MapMarkSetEnv, error) {
msPath := filepath.Join(path, "markset.map")
err := os.MkdirAll(msPath, 0755) //nolint:gosec
if err != nil {
return nil, xerrors.Errorf("error creating markset directory: %w", err)
}
func (e *MapMarkSetEnv) create(name string, sizeHint int64) (*MapMarkSet, error) {
return &MapMarkSetEnv{path: msPath}, nil
}
func (e *MapMarkSetEnv) New(name string, sizeHint int64) (MarkSet, error) {
path := filepath.Join(e.path, name)
return &MapMarkSet{
set: make(map[string]struct{}, sizeHint),
path: path,
}, nil
}
func (e *MapMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
return e.create(name, sizeHint)
func (e *MapMarkSetEnv) Recover(name string) (MarkSet, error) {
path := filepath.Join(e.path, name)
s := &MapMarkSet{
set: make(map[string]struct{}),
path: path,
}
func (e *MapMarkSetEnv) CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) {
return e.create(name, sizeHint)
in, err := os.Open(path)
if err != nil {
return nil, xerrors.Errorf("error opening markset file for read: %w", err)
}
defer in.Close() //nolint:errcheck
// wrap a buffered reader to make this faster
buf := bufio.NewReader(in)
for {
var sz byte
if sz, err = buf.ReadByte(); err != nil {
break
}
func (e *MapMarkSetEnv) SupportsVisitor() bool { return true }
key := make([]byte, int(sz))
if _, err = io.ReadFull(buf, key); err != nil {
break
}
s.set[string(key)] = struct{}{}
}
if err != io.EOF {
return nil, xerrors.Errorf("error reading markset file: %w", err)
}
file, err := os.OpenFile(s.path, os.O_WRONLY|os.O_APPEND, 0)
if err != nil {
return nil, xerrors.Errorf("error opening markset file for write: %w", err)
}
s.persist = true
s.file = file
s.buf = bufio.NewWriter(file)
return s, nil
}
func (e *MapMarkSetEnv) Close() error {
return nil
}
func (s *MapMarkSet) Mark(cid cid.Cid) error {
if s.ts {
func (s *MapMarkSet) BeginCriticalSection() error {
s.mx.Lock()
defer s.mx.Unlock()
}
if s.set == nil {
return errMarkSetClosed
}
s.set[string(cid.Hash())] = struct{}{}
if s.persist {
return nil
}
file, err := os.OpenFile(s.path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
return xerrors.Errorf("error opening markset file: %w", err)
}
// wrap a buffered writer to make this faster
s.buf = bufio.NewWriter(file)
for key := range s.set {
if err := s.writeKey([]byte(key), false); err != nil {
_ = file.Close()
s.buf = nil
return err
}
}
if err := s.buf.Flush(); err != nil {
_ = file.Close()
s.buf = nil
return xerrors.Errorf("error flushing markset file buffer: %w", err)
}
s.file = file
s.persist = true
return nil
}
func (s *MapMarkSet) EndCriticalSection() {
s.mx.Lock()
defer s.mx.Unlock()
if !s.persist {
return
}
_ = s.file.Close()
_ = os.Remove(s.path)
s.file = nil
s.buf = nil
s.persist = false
}
func (s *MapMarkSet) Mark(c cid.Cid) error {
s.mx.Lock()
defer s.mx.Unlock()
if s.set == nil {
return errMarkSetClosed
}
hash := c.Hash()
s.set[string(hash)] = struct{}{}
if s.persist {
if err := s.writeKey(hash, true); err != nil {
return err
}
if err := s.file.Sync(); err != nil {
return xerrors.Errorf("error syncing markset: %w", err)
}
}
return nil
}
func (s *MapMarkSet) MarkMany(batch []cid.Cid) error {
s.mx.Lock()
defer s.mx.Unlock()
if s.set == nil {
return errMarkSetClosed
}
for _, c := range batch {
hash := c.Hash()
s.set[string(hash)] = struct{}{}
if s.persist {
if err := s.writeKey(hash, false); err != nil {
return err
}
}
}
if s.persist {
if err := s.buf.Flush(); err != nil {
return xerrors.Errorf("error flushing markset buffer to disk: %w", err)
}
if err := s.file.Sync(); err != nil {
return xerrors.Errorf("error syncing markset: %w", err)
}
}
return nil
}
func (s *MapMarkSet) Has(cid cid.Cid) (bool, error) {
if s.ts {
s.mx.RLock()
defer s.mx.RUnlock()
}
if s.set == nil {
return false, errMarkSetClosed
@ -73,33 +220,70 @@ func (s *MapMarkSet) Has(cid cid.Cid) (bool, error) {
}
func (s *MapMarkSet) Visit(c cid.Cid) (bool, error) {
if s.ts {
s.mx.Lock()
defer s.mx.Unlock()
}
if s.set == nil {
return false, errMarkSetClosed
}
key := string(c.Hash())
hash := c.Hash()
key := string(hash)
if _, ok := s.set[key]; ok {
return false, nil
}
s.set[key] = struct{}{}
if s.persist {
if err := s.writeKey(hash, true); err != nil {
return false, err
}
if err := s.file.Sync(); err != nil {
return false, xerrors.Errorf("error syncing markset: %w", err)
}
}
return true, nil
}
func (s *MapMarkSet) Close() error {
if s.ts {
s.mx.Lock()
defer s.mx.Unlock()
}
s.set = nil
if s.set == nil {
return nil
}
func (s *MapMarkSet) SetConcurrent() {
s.ts = true
s.set = nil
if s.file != nil {
if err := s.file.Close(); err != nil {
log.Warnf("error closing markset file: %s", err)
}
if !s.persist {
if err := os.Remove(s.path); err != nil {
log.Warnf("error removing markset file: %s", err)
}
}
}
return nil
}
func (s *MapMarkSet) writeKey(k []byte, flush bool) error {
if err := s.buf.WriteByte(byte(len(k))); err != nil {
return xerrors.Errorf("error writing markset key length to disk: %w", err)
}
if _, err := s.buf.Write(k); err != nil {
return xerrors.Errorf("error writing markset key to disk: %w", err)
}
if flush {
if err := s.buf.Flush(); err != nil {
return xerrors.Errorf("error flushing markset buffer to disk: %w", err)
}
}
return nil
}


@ -11,7 +11,10 @@ import (
func TestMapMarkSet(t *testing.T) {
testMarkSet(t, "map")
testMarkSetRecovery(t, "map")
testMarkSetMarkMany(t, "map")
testMarkSetVisitor(t, "map")
testMarkSetVisitorRecovery(t, "map")
}
func TestBadgerMarkSet(t *testing.T) {
@ -21,12 +24,13 @@ func TestBadgerMarkSet(t *testing.T) {
badgerMarkSetBatchSize = bs
})
testMarkSet(t, "badger")
testMarkSetRecovery(t, "badger")
testMarkSetMarkMany(t, "badger")
testMarkSetVisitor(t, "badger")
testMarkSetVisitorRecovery(t, "badger")
}
func testMarkSet(t *testing.T, lsType string) {
t.Helper()
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
@ -42,12 +46,12 @@ func testMarkSet(t *testing.T, lsType string) {
}
defer env.Close() //nolint:errcheck
hotSet, err := env.Create("hot", 0)
hotSet, err := env.New("hot", 0)
if err != nil {
t.Fatal(err)
}
coldSet, err := env.Create("cold", 0)
coldSet, err := env.New("cold", 0)
if err != nil {
t.Fatal(err)
}
@ -62,6 +66,7 @@ func testMarkSet(t *testing.T, lsType string) {
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
@ -73,6 +78,7 @@ func testMarkSet(t *testing.T, lsType string) {
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
@ -114,12 +120,12 @@ func testMarkSet(t *testing.T, lsType string) {
t.Fatal(err)
}
hotSet, err = env.Create("hot", 0)
hotSet, err = env.New("hot", 0)
if err != nil {
t.Fatal(err)
}
coldSet, err = env.Create("cold", 0)
coldSet, err = env.New("cold", 0)
if err != nil {
t.Fatal(err)
}
@ -150,8 +156,6 @@ func testMarkSet(t *testing.T, lsType string) {
}
func testMarkSetVisitor(t *testing.T, lsType string) {
t.Helper()
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
@ -167,7 +171,7 @@ func testMarkSetVisitor(t *testing.T, lsType string) {
}
defer env.Close() //nolint:errcheck
visitor, err := env.CreateVisitor("test", 0)
visitor, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
@ -219,3 +223,322 @@ func testMarkSetVisitor(t *testing.T, lsType string) {
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
}
func testMarkSetVisitorRecovery(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
visitor, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
defer visitor.Close() //nolint:errcheck
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if !visit {
t.Fatal("object should be visited")
}
}
mustNotVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if visit {
t.Fatal("unexpected visit")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
mustVisit(visitor, k1)
mustVisit(visitor, k2)
if err := visitor.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
mustVisit(visitor, k3)
mustVisit(visitor, k4)
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
if err := visitor.Close(); err != nil {
t.Fatal(err)
}
visitor, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
visitor.EndCriticalSection()
if err := visitor.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
func testMarkSetRecovery(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
markSet, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("mark not found")
}
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("unexpected mark")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
if err := markSet.Mark(k1); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k2); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustNotHave(markSet, k3)
mustNotHave(markSet, k4)
if err := markSet.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k3); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k4); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
markSet, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
markSet.EndCriticalSection()
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
func testMarkSetMarkMany(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
markSet, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("mark not found")
}
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("unexpected mark")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
if err := markSet.MarkMany([]cid.Cid{k1, k2}); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustNotHave(markSet, k3)
mustNotHave(markSet, k4)
if err := markSet.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
if err := markSet.MarkMany([]cid.Cid{k3, k4}); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
markSet, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
markSet.EndCriticalSection()
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}


@ -129,8 +129,6 @@ type SplitStore struct {
headChangeMx sync.Mutex
coldPurgeSize int
chain ChainAccessor
ds dstore.Datastore
cold bstore.Blockstore
@ -158,6 +156,17 @@ type SplitStore struct {
txnRefsMx sync.Mutex
txnRefs map[cid.Cid]struct{}
txnMissing map[cid.Cid]struct{}
txnMarkSet MarkSet
txnSyncMx sync.Mutex
txnSyncCond sync.Cond
txnSync bool
// background cold object reification
reifyWorkers sync.WaitGroup
reifyMx sync.Mutex
reifyCond sync.Cond
reifyPend map[cid.Cid]struct{}
reifyInProgress map[cid.Cid]struct{}
// registered protectors
protectors []func(func(cid.Cid) error) error
@ -186,10 +195,6 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
return nil, err
}
if !markSetEnv.SupportsVisitor() {
return nil, xerrors.Errorf("markset type does not support atomic visitors")
}
// and now we can make a SplitStore
ss := &SplitStore{
cfg: cfg,
@ -198,13 +203,16 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
cold: cold,
hot: hots,
markSetEnv: markSetEnv,
coldPurgeSize: defaultColdPurgeSize,
}
ss.txnViewsCond.L = &ss.txnViewsMx
ss.txnSyncCond.L = &ss.txnSyncMx
ss.ctx, ss.cancel = context.WithCancel(context.Background())
ss.reifyCond.L = &ss.reifyMx
ss.reifyPend = make(map[cid.Cid]struct{})
ss.reifyInProgress = make(map[cid.Cid]struct{})
if enableDebugLog {
ss.debug, err = openDebugLog(path)
if err != nil {
@ -212,6 +220,14 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
}
}
if ss.checkpointExists() {
log.Info("found compaction checkpoint; resuming compaction")
if err := ss.completeCompaction(); err != nil {
markSetEnv.Close() //nolint:errcheck
return nil, xerrors.Errorf("error resuming compaction: %w", err)
}
}
return ss, nil
}
@ -234,6 +250,20 @@ func (s *SplitStore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
s.txnLk.RLock()
defer s.txnLk.RUnlock()
// critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return false, err
}
if has {
return s.has(cid)
}
return s.cold.Has(ctx, cid)
}
has, err := s.hot.Has(ctx, cid)
if err != nil {
@ -245,7 +275,13 @@ func (s *SplitStore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
return true, nil
}
return s.cold.Has(ctx, cid)
has, err = s.cold.Has(ctx, cid)
if has && bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
return has, err
}
func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
@ -261,6 +297,20 @@ func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error)
s.txnLk.RLock()
defer s.txnLk.RUnlock()
// critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return nil, err
}
if has {
return s.get(cid)
}
return s.cold.Get(ctx, cid)
}
blk, err := s.hot.Get(ctx, cid)
switch err {
@ -275,8 +325,11 @@ func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error)
blk, err = s.cold.Get(ctx, cid)
if err == nil {
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
if bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
}
return blk, err
@ -298,6 +351,20 @@ func (s *SplitStore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
s.txnLk.RLock()
defer s.txnLk.RUnlock()
// critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return 0, err
}
if has {
return s.getSize(cid)
}
return s.cold.GetSize(ctx, cid)
}
size, err := s.hot.GetSize(ctx, cid)
switch err {
@ -312,6 +379,10 @@ func (s *SplitStore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
size, err = s.cold.GetSize(ctx, cid)
if err == nil {
if bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
}
return size, err
@ -336,6 +407,12 @@ func (s *SplitStore) Put(ctx context.Context, blk blocks.Block) error {
s.debug.LogWrite(blk)
// critical section
if s.txnMarkSet != nil {
s.markLiveRefs([]cid.Cid{blk.Cid()})
return nil
}
s.trackTxnRef(blk.Cid())
return nil
}
@ -381,6 +458,12 @@ func (s *SplitStore) PutMany(ctx context.Context, blks []blocks.Block) error {
s.debug.LogWriteMany(blks)
// critical section
if s.txnMarkSet != nil {
s.markLiveRefs(batch)
return nil
}
s.trackTxnRefMany(batch)
return nil
}
@ -440,6 +523,23 @@ func (s *SplitStore) View(ctx context.Context, cid cid.Cid, cb func([]byte) erro
return cb(data)
}
// critical section
s.txnLk.RLock() // the lock is released in protectView if we are not in critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
s.txnLk.RUnlock()
if err != nil {
return err
}
if has {
return s.view(cid, cb)
}
return s.cold.View(ctx, cid, cb)
}
// views are (optimistically) protected two-fold:
// - if there is an active transaction, then the reference is protected.
// - if there is no active transaction, active views are tracked in a
@ -460,6 +560,10 @@ func (s *SplitStore) View(ctx context.Context, cid cid.Cid, cb func([]byte) erro
err = s.cold.View(ctx, cid, cb)
if err == nil {
if bstore.IsHotView(ctx) {
s.reifyColdObject(cid)
}
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
}
return err
@ -569,6 +673,9 @@ func (s *SplitStore) Start(chain ChainAccessor, us stmgr.UpgradeSchedule) error
}
}
// spawn the reifier
go s.reifyOrchestrator()
// watch the chain
chain.SubscribeHeadChanges(s.HeadChange)
@ -589,12 +696,19 @@ func (s *SplitStore) Close() error {
}
if atomic.LoadInt32(&s.compacting) == 1 {
s.txnSyncMx.Lock()
s.txnSync = true
s.txnSyncCond.Broadcast()
s.txnSyncMx.Unlock()
log.Warn("close with ongoing compaction in progress; waiting for it to finish...")
for atomic.LoadInt32(&s.compacting) == 1 {
time.Sleep(time.Second)
}
}
s.reifyCond.Broadcast()
s.reifyWorkers.Wait()
s.cancel()
return multierr.Combine(s.markSetEnv.Close(), s.debug.Close())
}
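
The Has/Get/GetSize/View hunks above all follow the same dispatch rule while the compaction critical section is active; a condensed sketch (the helper name is illustrative, the fields and helpers are those shown in this diff):

// Sketch of the read path while txnMarkSet is installed: objects marked live are
// served by the splitstore (hot first, cold as fallback via s.get), anything
// unmarked is treated as already cold and read from the coldstore directly.
func (s *SplitStore) getDuringCriticalSection(ctx context.Context, c cid.Cid) (blocks.Block, error) {
	has, err := s.txnMarkSet.Has(c)
	if err != nil {
		return nil, err
	}
	if has {
		return s.get(c)
	}
	return s.cold.Get(ctx, c)
}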

View File

@ -4,6 +4,7 @@ import (
"fmt"
"os"
"path/filepath"
"sync"
"sync/atomic"
"time"
@ -67,7 +68,10 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
}
defer output.Close() //nolint:errcheck
var mx sync.Mutex
write := func(format string, args ...interface{}) {
mx.Lock()
defer mx.Unlock()
_, err := fmt.Fprintf(output, format+"\n", args...)
if err != nil {
log.Warnf("error writing check output: %s", err)
@ -82,9 +86,10 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
write("compaction index: %d", s.compactionIndex)
write("--")
var coldCnt, missingCnt int64
coldCnt := new(int64)
missingCnt := new(int64)
visitor, err := s.markSetEnv.CreateVisitor("check", 0)
visitor, err := s.markSetEnv.New("check", 0)
if err != nil {
return xerrors.Errorf("error creating visitor: %w", err)
}
@ -111,10 +116,10 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
}
if has {
coldCnt++
atomic.AddInt64(coldCnt, 1)
write("cold object reference: %s", c)
} else {
missingCnt++
atomic.AddInt64(missingCnt, 1)
write("missing object reference: %s", c)
return errStopWalk
}
@ -128,9 +133,9 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
return err
}
log.Infow("check done", "cold", coldCnt, "missing", missingCnt)
log.Infow("check done", "cold", *coldCnt, "missing", *missingCnt)
write("--")
write("cold: %d missing: %d", coldCnt, missingCnt)
write("cold: %d missing: %d", *coldCnt, *missingCnt)
write("DONE")
return nil

View File

@ -3,8 +3,10 @@ package splitstore
import (
"bytes"
"errors"
"os"
"path/filepath"
"runtime"
"sort"
"sync"
"sync/atomic"
"time"
@ -47,6 +49,10 @@ var (
// SyncGapTime is the time delay from a tipset's min timestamp before we decide
// there is a sync gap
SyncGapTime = time.Minute
// SyncWaitTime is the time delay from a tipset's min timestamp before we decide
// we have synced.
SyncWaitTime = 30 * time.Second
)
var (
@ -56,8 +62,6 @@ var (
const (
batchSize = 16384
defaultColdPurgeSize = 7_000_000
)
func (s *SplitStore) HeadChange(_, apply []*types.TipSet) error {
@ -140,9 +144,9 @@ func (s *SplitStore) isNearUpgrade(epoch abi.ChainEpoch) bool {
// transactionally protect incoming tipsets
func (s *SplitStore) protectTipSets(apply []*types.TipSet) {
s.txnLk.RLock()
defer s.txnLk.RUnlock()
if !s.txnActive {
s.txnLk.RUnlock()
return
}
@ -151,12 +155,115 @@ func (s *SplitStore) protectTipSets(apply []*types.TipSet) {
cids = append(cids, ts.Cids()...)
}
if len(cids) == 0 {
s.txnLk.RUnlock()
return
}
// critical section
if s.txnMarkSet != nil {
curTs := apply[len(apply)-1]
timestamp := time.Unix(int64(curTs.MinTimestamp()), 0)
doSync := time.Since(timestamp) < SyncWaitTime
go func() {
if doSync {
defer func() {
s.txnSyncMx.Lock()
defer s.txnSyncMx.Unlock()
s.txnSync = true
s.txnSyncCond.Broadcast()
}()
}
defer s.txnLk.RUnlock()
s.markLiveRefs(cids)
}()
return
}
s.trackTxnRefMany(cids)
s.txnLk.RUnlock()
}
func (s *SplitStore) markLiveRefs(cids []cid.Cid) {
log.Debugf("marking %d live refs", len(cids))
startMark := time.Now()
count := new(int32)
visitor := newConcurrentVisitor()
walkObject := func(c cid.Cid) error {
return s.walkObjectIncomplete(c, visitor,
func(c cid.Cid) error {
if isUnitaryObject(c) {
return errStopWalk
}
visit, err := s.txnMarkSet.Visit(c)
if err != nil {
return xerrors.Errorf("error visiting object: %w", err)
}
if !visit {
return errStopWalk
}
atomic.AddInt32(count, 1)
return nil
},
func(missing cid.Cid) error {
log.Warnf("missing object reference %s in %s", missing, c)
return errStopWalk
})
}
// optimize the common case of single put
if len(cids) == 1 {
if err := walkObject(cids[0]); err != nil {
log.Errorf("error marking tipset refs: %s", err)
}
log.Debugw("marking live refs done", "took", time.Since(startMark), "marked", *count)
return
}
workch := make(chan cid.Cid, len(cids))
for _, c := range cids {
workch <- c
}
close(workch)
worker := func() error {
for c := range workch {
if err := walkObject(c); err != nil {
return err
}
}
return nil
}
workers := runtime.NumCPU() / 2
if workers < 2 {
workers = 2
}
if workers > len(cids) {
workers = len(cids)
}
g := new(errgroup.Group)
for i := 0; i < workers; i++ {
g.Go(worker)
}
if err := g.Wait(); err != nil {
log.Errorf("error marking tipset refs: %s", err)
}
log.Debugw("marking live refs done", "took", time.Since(startMark), "marked", *count)
}
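
markLiveRefs above (and the reworked walkChain later in this file) share the same bounded fan-out: a buffered work channel, a worker count derived from NumCPU and clamped between 2 and the amount of work, and an errgroup. A generic sketch of that pattern with an illustrative name (assumes "runtime", cid "github.com/ipfs/go-cid" and "golang.org/x/sync/errgroup"):

func forEachConcurrently(cids []cid.Cid, do func(cid.Cid) error) error {
	workch := make(chan cid.Cid, len(cids))
	for _, c := range cids {
		workch <- c
	}
	close(workch)

	// clamp the worker count: half the CPUs, at least 2, at most len(cids)
	workers := runtime.NumCPU() / 2
	if workers < 2 {
		workers = 2
	}
	if workers > len(cids) {
		workers = len(cids)
	}

	g := new(errgroup.Group)
	for i := 0; i < workers; i++ {
		g.Go(func() error {
			for c := range workch {
				if err := do(c); err != nil {
					return err
				}
			}
			return nil
		})
	}
	return g.Wait()
}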
// transactionally protect a view
func (s *SplitStore) protectView(c cid.Cid) {
s.txnLk.RLock()
// the txnLk is held for read
defer s.txnLk.RUnlock()
if s.txnActive {
@ -227,7 +334,7 @@ func (s *SplitStore) trackTxnRefMany(cids []cid.Cid) {
}
// protect all pending transactional references
func (s *SplitStore) protectTxnRefs(markSet MarkSetVisitor) error {
func (s *SplitStore) protectTxnRefs(markSet MarkSet) error {
for {
var txnRefs map[cid.Cid]struct{}
@ -299,14 +406,14 @@ func (s *SplitStore) protectTxnRefs(markSet MarkSetVisitor) error {
// transactionally protect a reference by walking the object and marking.
// concurrent markings are short circuited by checking the markset.
func (s *SplitStore) doTxnProtect(root cid.Cid, markSet MarkSetVisitor) error {
func (s *SplitStore) doTxnProtect(root cid.Cid, markSet MarkSet) error {
if err := s.checkClosing(); err != nil {
return err
}
// Note: cold objects are deleted heaviest first, so the constituents of an object
// cannot be deleted before the object itself.
return s.walkObjectIncomplete(root, tmpVisitor(),
return s.walkObjectIncomplete(root, newTmpVisitor(),
func(c cid.Cid) error {
if isUnitaryObject(c) {
return errStopWalk
@ -386,6 +493,12 @@ func (s *SplitStore) compact(curTs *types.TipSet) {
}
func (s *SplitStore) doCompact(curTs *types.TipSet) error {
if s.checkpointExists() {
// this really shouldn't happen, but if it somehow does, it means that the hotstore
// may be inconsistent; abort compaction and notify the user to intervene.
return xerrors.Errorf("checkpoint exists; aborting compaction")
}
currentEpoch := curTs.Height()
boundaryEpoch := currentEpoch - CompactionBoundary
@ -397,7 +510,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Infow("running compaction", "currentEpoch", currentEpoch, "baseEpoch", s.baseEpoch, "boundaryEpoch", boundaryEpoch, "inclMsgsEpoch", inclMsgsEpoch, "compactionIndex", s.compactionIndex)
markSet, err := s.markSetEnv.CreateVisitor("live", s.markSetSize)
markSet, err := s.markSetEnv.New("live", s.markSetSize)
if err != nil {
return xerrors.Errorf("error creating mark set: %w", err)
}
@ -408,9 +521,6 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return err
}
// we are ready for concurrent marking
s.beginTxnMarking(markSet)
// 0. track all protected references at beginning of compaction; anything added later should
// be transactionally protected by the write
log.Info("protecting references with registered protectors")
@ -424,7 +534,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Info("marking reachable objects")
startMark := time.Now()
var count int64
count := new(int64)
err = s.walkChain(curTs, boundaryEpoch, inclMsgsEpoch, &noopVisitor{},
func(c cid.Cid) error {
if isUnitaryObject(c) {
@ -440,7 +550,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return errStopWalk
}
count++
atomic.AddInt64(count, 1)
return nil
})
@ -448,9 +558,9 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return xerrors.Errorf("error marking: %w", err)
}
s.markSetSize = count + count>>2 // overestimate a bit
s.markSetSize = *count + *count>>2 // overestimate a bit
log.Infow("marking done", "took", time.Since(startMark), "marked", count)
log.Infow("marking done", "took", time.Since(startMark), "marked", *count)
if err := s.checkClosing(); err != nil {
return err
@ -470,10 +580,15 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Info("collecting cold objects")
startCollect := time.Now()
coldw, err := NewColdSetWriter(s.coldSetPath())
if err != nil {
return xerrors.Errorf("error creating coldset: %w", err)
}
defer coldw.Close() //nolint:errcheck
// some stats for logging
var hotCnt, coldCnt int
cold := make([]cid.Cid, 0, s.coldPurgeSize)
err = s.hot.ForEachKey(func(c cid.Cid) error {
// was it marked?
mark, err := markSet.Has(c)
@ -487,7 +602,9 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
}
// it's cold, mark it as candidate for move
cold = append(cold, c)
if err := coldw.Write(c); err != nil {
return xerrors.Errorf("error writing cid to coldstore: %w", err)
}
coldCnt++
return nil
@ -497,12 +614,12 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return xerrors.Errorf("error collecting cold objects: %w", err)
}
log.Infow("cold collection done", "took", time.Since(startCollect))
if coldCnt > 0 {
s.coldPurgeSize = coldCnt + coldCnt>>2 // overestimate a bit
if err := coldw.Close(); err != nil {
return xerrors.Errorf("error closing coldset: %w", err)
}
log.Infow("cold collection done", "took", time.Since(startCollect))
log.Infow("compaction stats", "hot", hotCnt, "cold", coldCnt)
stats.Record(s.ctx, metrics.SplitstoreCompactionHot.M(int64(hotCnt)))
stats.Record(s.ctx, metrics.SplitstoreCompactionCold.M(int64(coldCnt)))
@ -520,11 +637,17 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
return err
}
coldr, err := NewColdSetReader(s.coldSetPath())
if err != nil {
return xerrors.Errorf("error opening coldset: %w", err)
}
defer coldr.Close() //nolint:errcheck
// 3. copy the cold objects to the coldstore -- if we have one
if !s.cfg.DiscardColdBlocks {
log.Info("moving cold objects to the coldstore")
startMove := time.Now()
err = s.moveColdBlocks(cold)
err = s.moveColdBlocks(coldr)
if err != nil {
return xerrors.Errorf("error moving cold objects: %w", err)
}
@ -533,41 +656,64 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
if err := s.checkClosing(); err != nil {
return err
}
if err := coldr.Reset(); err != nil {
return xerrors.Errorf("error resetting coldset: %w", err)
}
}
// 4. sort cold objects so that the dags with most references are deleted first
// this ensures that we can't refer to a dag with its constituents already deleted, i.e.
// we leave no dangling references.
log.Info("sorting cold objects")
startSort := time.Now()
err = s.sortObjects(cold)
if err != nil {
return xerrors.Errorf("error sorting objects: %w", err)
}
log.Infow("sorting done", "took", time.Since(startSort))
// 4.1 protect transactional refs once more
// strictly speaking, this is not necessary as purge will do it before deleting each
// batch. however, there is likely a largish number of references accumulated during
// the sort and this protects before entering purge context.
err = s.protectTxnRefs(markSet)
if err != nil {
return xerrors.Errorf("error protecting transactional refs: %w", err)
// 4. Purge cold objects with checkpointing for recovery.
// This is the critical section of compaction, whereby any cold object not in the markSet is
// considered already deleted.
// We delete cold objects in batches, holding the transaction lock, where we check the markSet
// again for new references created by the VM.
// After each batch, we write a checkpoint to disk; if the process is interrupted before completion,
// the process will continue from the checkpoint in the next recovery.
if err := s.beginCriticalSection(markSet); err != nil {
return xerrors.Errorf("error beginning critical section: %w", err)
}
if err := s.checkClosing(); err != nil {
return err
}
// wait for the head to catch up so that the current tipset is marked
s.waitForSync()
if err := s.checkClosing(); err != nil {
return err
}
checkpoint, err := NewCheckpoint(s.checkpointPath())
if err != nil {
return xerrors.Errorf("error creating checkpoint: %w", err)
}
defer checkpoint.Close() //nolint:errcheck
// 5. purge cold objects from the hotstore, taking protected references into account
log.Info("purging cold objects from the hotstore")
startPurge := time.Now()
err = s.purge(cold, markSet)
err = s.purge(coldr, checkpoint, markSet)
if err != nil {
return xerrors.Errorf("error purging cold blocks: %w", err)
return xerrors.Errorf("error purging cold objects: %w", err)
}
log.Infow("purging cold objects from hotstore done", "took", time.Since(startPurge))
s.endCriticalSection()
if err := checkpoint.Close(); err != nil {
log.Warnf("error closing checkpoint: %s", err)
}
if err := os.Remove(s.checkpointPath()); err != nil {
log.Warnf("error removing checkpoint: %s", err)
}
if err := coldr.Close(); err != nil {
log.Warnf("error closing coldset: %s", err)
}
if err := os.Remove(s.coldSetPath()); err != nil {
log.Warnf("error removing coldset: %s", err)
}
// we are done; do some housekeeping
s.endTxnProtect()
s.gcHotstore()
@ -598,12 +744,51 @@ func (s *SplitStore) beginTxnProtect() {
defer s.txnLk.Unlock()
s.txnActive = true
s.txnSync = false
s.txnRefs = make(map[cid.Cid]struct{})
s.txnMissing = make(map[cid.Cid]struct{})
}
func (s *SplitStore) beginTxnMarking(markSet MarkSetVisitor) {
markSet.SetConcurrent()
func (s *SplitStore) beginCriticalSection(markSet MarkSet) error {
log.Info("beginning critical section")
// do that once first to get the bulk before the markset is in critical section
if err := s.protectTxnRefs(markSet); err != nil {
return xerrors.Errorf("error protecting transactional references: %w", err)
}
if err := markSet.BeginCriticalSection(); err != nil {
return xerrors.Errorf("error beginning critical section for markset: %w", err)
}
s.txnLk.Lock()
defer s.txnLk.Unlock()
s.txnMarkSet = markSet
// and do it again while holding the lock to mark references that might have been created
// in the meantime; this avoids races of the form Has->txnRef->enterCS->Get, where the Get
// fails because the object is not yet in the markset
if err := s.protectTxnRefs(markSet); err != nil {
return xerrors.Errorf("error protecting transactional references: %w", err)
}
return nil
}
func (s *SplitStore) waitForSync() {
log.Info("waiting for sync")
startWait := time.Now()
defer func() {
log.Infow("waiting for sync done", "took", time.Since(startWait))
}()
s.txnSyncMx.Lock()
defer s.txnSyncMx.Unlock()
for !s.txnSync {
s.txnSyncCond.Wait()
}
}
func (s *SplitStore) endTxnProtect() {
@ -615,32 +800,51 @@ func (s *SplitStore) endTxnProtect() {
}
s.txnActive = false
s.txnSync = false
s.txnRefs = nil
s.txnMissing = nil
s.txnMarkSet = nil
}
func (s *SplitStore) endCriticalSection() {
log.Info("ending critical section")
s.txnLk.Lock()
defer s.txnLk.Unlock()
s.txnMarkSet.EndCriticalSection()
s.txnMarkSet = nil
}
func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEpoch,
visitor ObjectVisitor, f func(cid.Cid) error) error {
var walked *cid.Set
toWalk := ts.Cids()
walkCnt := 0
scanCnt := 0
var walked ObjectVisitor
var mx sync.Mutex
// we copy the tipset first into a new slice, which allows us to reuse it in every epoch.
toWalk := make([]cid.Cid, len(ts.Cids()))
copy(toWalk, ts.Cids())
walkCnt := new(int64)
scanCnt := new(int64)
stopWalk := func(_ cid.Cid) error { return errStopWalk }
walkBlock := func(c cid.Cid) error {
if !walked.Visit(c) {
visit, err := walked.Visit(c)
if err != nil {
return err
}
if !visit {
return nil
}
walkCnt++
atomic.AddInt64(walkCnt, 1)
if err := f(c); err != nil {
return err
}
var hdr types.BlockHeader
err := s.view(c, func(data []byte) error {
err = s.view(c, func(data []byte) error {
return hdr.UnmarshalCBOR(bytes.NewBuffer(data))
})
@ -676,11 +880,13 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
if err := s.walkObject(hdr.ParentStateRoot, visitor, f); err != nil {
return xerrors.Errorf("error walking state root (cid: %s): %w", hdr.ParentStateRoot, err)
}
scanCnt++
atomic.AddInt64(scanCnt, 1)
}
if hdr.Height > 0 {
mx.Lock()
toWalk = append(toWalk, hdr.Parents...)
mx.Unlock()
}
return nil
@ -692,20 +898,43 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
return err
}
workers := len(toWalk)
if workers > runtime.NumCPU()/2 {
workers = runtime.NumCPU() / 2
}
if workers < 2 {
workers = 2
}
// the walk is BFS, so we can reset the walked set in every iteration and avoid building up
// a set that contains all blocks (1M epochs -> 5M blocks -> 200MB worth of memory and growing
// over time)
walked = cid.NewSet()
walking := toWalk
toWalk = nil
for _, c := range walking {
walked = newConcurrentVisitor()
workch := make(chan cid.Cid, len(toWalk))
for _, c := range toWalk {
workch <- c
}
close(workch)
toWalk = toWalk[:0]
g := new(errgroup.Group)
for i := 0; i < workers; i++ {
g.Go(func() error {
for c := range workch {
if err := walkBlock(c); err != nil {
return xerrors.Errorf("error walking block (cid: %s): %w", c, err)
}
}
return nil
})
}
log.Infow("chain walk done", "walked", walkCnt, "scanned", scanCnt)
if err := g.Wait(); err != nil {
return err
}
}
log.Infow("chain walk done", "walked", *walkCnt, "scanned", *scanCnt)
return nil
}
@ -824,7 +1053,7 @@ func (s *SplitStore) walkObjectIncomplete(c cid.Cid, visitor ObjectVisitor, f, m
return nil
}
// internal version used by walk
// internal version used during compaction and related operations
func (s *SplitStore) view(c cid.Cid, cb func([]byte) error) error {
if isIdentiyCid(c) {
data, err := decodeIdentityCid(c)
@ -859,10 +1088,34 @@ func (s *SplitStore) has(c cid.Cid) (bool, error) {
return s.cold.Has(s.ctx, c)
}
func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
func (s *SplitStore) get(c cid.Cid) (blocks.Block, error) {
blk, err := s.hot.Get(s.ctx, c)
switch err {
case nil:
return blk, nil
case bstore.ErrNotFound:
return s.cold.Get(s.ctx, c)
default:
return nil, err
}
}
func (s *SplitStore) getSize(c cid.Cid) (int, error) {
sz, err := s.hot.GetSize(s.ctx, c)
switch err {
case nil:
return sz, nil
case bstore.ErrNotFound:
return s.cold.GetSize(s.ctx, c)
default:
return 0, err
}
}
func (s *SplitStore) moveColdBlocks(coldr *ColdSetReader) error {
batch := make([]blocks.Block, 0, batchSize)
for _, c := range cold {
err := coldr.ForEach(func(c cid.Cid) error {
if err := s.checkClosing(); err != nil {
return err
}
@ -871,7 +1124,7 @@ func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
if err != nil {
if err == bstore.ErrNotFound {
log.Warnf("hotstore missing block %s", c)
continue
return nil
}
return xerrors.Errorf("error retrieving block %s from hotstore: %w", c, err)
@ -885,6 +1138,12 @@ func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
}
batch = batch[:0]
}
return nil
})
if err != nil {
return xerrors.Errorf("error iterating coldset: %w", err)
}
if len(batch) > 0 {
@ -897,160 +1156,60 @@ func (s *SplitStore) moveColdBlocks(cold []cid.Cid) error {
return nil
}
// sorts a slice of objects heaviest first -- it's a little expensive but worth the
// guarantee that we don't leave dangling references behind, e.g. if we die in the middle
// of a purge.
func (s *SplitStore) sortObjects(cids []cid.Cid) error {
// we cache the keys to avoid making a gazillion of strings
keys := make(map[cid.Cid]string)
key := func(c cid.Cid) string {
s, ok := keys[c]
if !ok {
s = string(c.Hash())
keys[c] = s
}
return s
}
// compute sorting weights as the cumulative number of DAG links
weights := make(map[string]int)
for _, c := range cids {
// this can take quite a while, so check for shutdown with every opportunity
if err := s.checkClosing(); err != nil {
return err
}
w := s.getObjectWeight(c, weights, key)
weights[key(c)] = w
}
// sort!
sort.Slice(cids, func(i, j int) bool {
wi := weights[key(cids[i])]
wj := weights[key(cids[j])]
if wi == wj {
return bytes.Compare(cids[i].Hash(), cids[j].Hash()) > 0
}
return wi > wj
})
return nil
}
func (s *SplitStore) getObjectWeight(c cid.Cid, weights map[string]int, key func(cid.Cid) string) int {
w, ok := weights[key(c)]
if ok {
return w
}
// we treat block headers specially to avoid walking the entire chain
var hdr types.BlockHeader
err := s.view(c, func(data []byte) error {
return hdr.UnmarshalCBOR(bytes.NewBuffer(data))
})
if err == nil {
w1 := s.getObjectWeight(hdr.ParentStateRoot, weights, key)
weights[key(hdr.ParentStateRoot)] = w1
w2 := s.getObjectWeight(hdr.Messages, weights, key)
weights[key(hdr.Messages)] = w2
return 1 + w1 + w2
}
var links []cid.Cid
err = s.view(c, func(data []byte) error {
return cbg.ScanForLinks(bytes.NewReader(data), func(c cid.Cid) {
links = append(links, c)
})
})
if err != nil {
return 1
}
w = 1
for _, c := range links {
// these are internal refs, so dags will be dags
if c.Prefix().Codec != cid.DagCBOR {
w++
continue
}
wc := s.getObjectWeight(c, weights, key)
weights[key(c)] = wc
w += wc
}
return w
}
func (s *SplitStore) purgeBatch(cids []cid.Cid, deleteBatch func([]cid.Cid) error) error {
if len(cids) == 0 {
return nil
}
// we don't delete one giant batch of millions of objects, but rather do smaller batches
// so that we don't stop the world for an extended period of time
done := false
for i := 0; !done; i++ {
start := i * batchSize
end := start + batchSize
if end >= len(cids) {
end = len(cids)
done = true
}
err := deleteBatch(cids[start:end])
if err != nil {
return xerrors.Errorf("error deleting batch: %w", err)
}
}
return nil
}
func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSetVisitor) error {
func (s *SplitStore) purge(coldr *ColdSetReader, checkpoint *Checkpoint, markSet MarkSet) error {
batch := make([]cid.Cid, 0, batchSize)
deadCids := make([]cid.Cid, 0, batchSize)
var purgeCnt, liveCnt int
defer func() {
log.Infow("purged cold objects", "purged", purgeCnt, "live", liveCnt)
}()
return s.purgeBatch(cids,
func(cids []cid.Cid) error {
deadCids := deadCids[:0]
deleteBatch := func() error {
pc, lc, err := s.purgeBatch(batch, deadCids, checkpoint, markSet)
purgeCnt += pc
liveCnt += lc
batch = batch[:0]
for {
if err := s.checkClosing(); err != nil {
return err
}
s.txnLk.Lock()
if len(s.txnRefs) == 0 {
// keep the lock!
break
err := coldr.ForEach(func(c cid.Cid) error {
batch = append(batch, c)
if len(batch) == batchSize {
return deleteBatch()
}
// unlock and protect
s.txnLk.Unlock()
return nil
})
err := s.protectTxnRefs(markSet)
if err != nil {
return xerrors.Errorf("error protecting transactional refs: %w", err)
}
return err
}
if len(batch) > 0 {
return deleteBatch()
}
return nil
}
func (s *SplitStore) purgeBatch(batch, deadCids []cid.Cid, checkpoint *Checkpoint, markSet MarkSet) (purgeCnt int, liveCnt int, err error) {
if err := s.checkClosing(); err != nil {
return 0, 0, err
}
s.txnLk.Lock()
defer s.txnLk.Unlock()
for _, c := range cids {
live, err := markSet.Has(c)
for _, c := range batch {
has, err := markSet.Has(c)
if err != nil {
return xerrors.Errorf("error checking for liveness: %w", err)
return 0, 0, xerrors.Errorf("error checking markset for liveness: %w", err)
}
if live {
if has {
liveCnt++
continue
}
@ -1058,16 +1217,141 @@ func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSetVisitor) error {
deadCids = append(deadCids, c)
}
err := s.hot.DeleteMany(s.ctx, deadCids)
if err != nil {
return xerrors.Errorf("error purging cold objects: %w", err)
if len(deadCids) == 0 {
if err := checkpoint.Set(batch[len(batch)-1]); err != nil {
return 0, 0, xerrors.Errorf("error setting checkpoint: %w", err)
}
return 0, liveCnt, nil
}
if err := s.hot.DeleteMany(s.ctx, deadCids); err != nil {
return 0, liveCnt, xerrors.Errorf("error purging cold objects: %w", err)
}
s.debug.LogDelete(deadCids)
purgeCnt = len(deadCids)
if err := checkpoint.Set(batch[len(batch)-1]); err != nil {
return purgeCnt, liveCnt, xerrors.Errorf("error setting checkpoint: %w", err)
}
return purgeCnt, liveCnt, nil
}
func (s *SplitStore) coldSetPath() string {
return filepath.Join(s.path, "coldset")
}
func (s *SplitStore) checkpointPath() string {
return filepath.Join(s.path, "checkpoint")
}
func (s *SplitStore) checkpointExists() bool {
_, err := os.Stat(s.checkpointPath())
return err == nil
}
func (s *SplitStore) completeCompaction() error {
checkpoint, last, err := OpenCheckpoint(s.checkpointPath())
if err != nil {
return xerrors.Errorf("error opening checkpoint: %w", err)
}
defer checkpoint.Close() //nolint:errcheck
coldr, err := NewColdSetReader(s.coldSetPath())
if err != nil {
return xerrors.Errorf("error opening coldset: %w", err)
}
defer coldr.Close() //nolint:errcheck
markSet, err := s.markSetEnv.Recover("live")
if err != nil {
return xerrors.Errorf("error recovering markset: %w", err)
}
defer markSet.Close() //nolint:errcheck
// PURGE
log.Info("purging cold objects from the hotstore")
startPurge := time.Now()
err = s.completePurge(coldr, checkpoint, last, markSet)
if err != nil {
return xerrors.Errorf("error purging cold objects: %w", err)
}
log.Infow("purging cold objects from hotstore done", "took", time.Since(startPurge))
markSet.EndCriticalSection()
if err := checkpoint.Close(); err != nil {
log.Warnf("error closing checkpoint: %s", err)
}
if err := os.Remove(s.checkpointPath()); err != nil {
log.Warnf("error removing checkpoint: %s", err)
}
if err := coldr.Close(); err != nil {
log.Warnf("error closing coldset: %s", err)
}
if err := os.Remove(s.coldSetPath()); err != nil {
log.Warnf("error removing coldset: %s", err)
}
// Note: at this point we can start the splitstore; a compaction should run on
// the first head change, which will trigger gc on the hotstore.
// We don't mind the second (back-to-back) compaction as the head will
// have advanced during marking and coldset accumulation.
return nil
}
func (s *SplitStore) completePurge(coldr *ColdSetReader, checkpoint *Checkpoint, start cid.Cid, markSet MarkSet) error {
if !start.Defined() {
return s.purge(coldr, checkpoint, markSet)
}
seeking := true
batch := make([]cid.Cid, 0, batchSize)
deadCids := make([]cid.Cid, 0, batchSize)
var purgeCnt, liveCnt int
defer func() {
log.Infow("purged cold objects", "purged", purgeCnt, "live", liveCnt)
}()
deleteBatch := func() error {
pc, lc, err := s.purgeBatch(batch, deadCids, checkpoint, markSet)
purgeCnt += pc
liveCnt += lc
batch = batch[:0]
return err
}
err := coldr.ForEach(func(c cid.Cid) error {
if seeking {
if start.Equals(c) {
seeking = false
}
return nil
}
batch = append(batch, c)
if len(batch) == batchSize {
return deleteBatch()
}
purgeCnt += len(deadCids)
return nil
})
if err != nil {
return err
}
if len(batch) > 0 {
return deleteBatch()
}
return nil
}
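
For illustration (values made up), with batchSize 3, a coldset of c1..c10, and a checkpoint recording c6 as the last entry of the last fully processed batch before an interruption: on restart, completeCompaction recovers the markset, reopens the coldset and checkpoint, and completePurge skips c1..c5 while seeking, flips seeking off at c6 without re-processing it, and then batches and purges c7..c10 exactly as the normal purge path does, advancing the checkpoint after each batch.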
// I really don't like having this code, but we seem to have some occasional DAG references with
@ -1077,7 +1361,7 @@ func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSetVisitor) error {
// have this gem[TM].
// My best guess is that they are parent message receipts or yet to be computed state roots; magik
// thinks the cause may be block validation.
func (s *SplitStore) waitForMissingRefs(markSet MarkSetVisitor) {
func (s *SplitStore) waitForMissingRefs(markSet MarkSet) {
s.txnLk.Lock()
missing := s.txnMissing
s.txnMissing = nil
@ -1106,7 +1390,7 @@ func (s *SplitStore) waitForMissingRefs(markSet MarkSetVisitor) {
}
towalk := missing
visitor := tmpVisitor()
visitor := newTmpVisitor()
missing = make(map[cid.Cid]struct{})
for c := range towalk {

View File

@ -0,0 +1,203 @@
package splitstore
import (
"runtime"
"sync/atomic"
"golang.org/x/xerrors"
blocks "github.com/ipfs/go-block-format"
cid "github.com/ipfs/go-cid"
)
var EnableReification = false
func (s *SplitStore) reifyColdObject(c cid.Cid) {
if !EnableReification {
return
}
if !s.isWarm() {
return
}
if isUnitaryObject(c) {
return
}
s.reifyMx.Lock()
defer s.reifyMx.Unlock()
_, ok := s.reifyInProgress[c]
if ok {
return
}
s.reifyPend[c] = struct{}{}
s.reifyCond.Broadcast()
}
func (s *SplitStore) reifyOrchestrator() {
workers := runtime.NumCPU() / 4
if workers < 2 {
workers = 2
}
workch := make(chan cid.Cid, workers)
defer close(workch)
for i := 0; i < workers; i++ {
s.reifyWorkers.Add(1)
go s.reifyWorker(workch)
}
for {
s.reifyMx.Lock()
for len(s.reifyPend) == 0 && atomic.LoadInt32(&s.closing) == 0 {
s.reifyCond.Wait()
}
if atomic.LoadInt32(&s.closing) != 0 {
s.reifyMx.Unlock()
return
}
reifyPend := s.reifyPend
s.reifyPend = make(map[cid.Cid]struct{})
s.reifyMx.Unlock()
for c := range reifyPend {
select {
case workch <- c:
case <-s.ctx.Done():
return
}
}
}
}
func (s *SplitStore) reifyWorker(workch chan cid.Cid) {
defer s.reifyWorkers.Done()
for c := range workch {
s.doReify(c)
}
}
func (s *SplitStore) doReify(c cid.Cid) {
var toreify, totrack, toforget []cid.Cid
defer func() {
s.reifyMx.Lock()
defer s.reifyMx.Unlock()
for _, c := range toreify {
delete(s.reifyInProgress, c)
}
for _, c := range totrack {
delete(s.reifyInProgress, c)
}
for _, c := range toforget {
delete(s.reifyInProgress, c)
}
}()
s.txnLk.RLock()
defer s.txnLk.RUnlock()
err := s.walkObjectIncomplete(c, newTmpVisitor(),
func(c cid.Cid) error {
if isUnitaryObject(c) {
return errStopWalk
}
s.reifyMx.Lock()
_, inProgress := s.reifyInProgress[c]
if !inProgress {
s.reifyInProgress[c] = struct{}{}
}
s.reifyMx.Unlock()
if inProgress {
return errStopWalk
}
has, err := s.hot.Has(s.ctx, c)
if err != nil {
return xerrors.Errorf("error checking hotstore: %w", err)
}
if has {
if s.txnMarkSet != nil {
hasMark, err := s.txnMarkSet.Has(c)
if err != nil {
log.Warnf("error checking markset: %s", err)
} else if hasMark {
toforget = append(toforget, c)
return errStopWalk
}
} else {
totrack = append(totrack, c)
return errStopWalk
}
}
toreify = append(toreify, c)
return nil
},
func(missing cid.Cid) error {
log.Warnf("missing reference while reifying %s: %s", c, missing)
return errStopWalk
})
if err != nil {
log.Warnf("error walking cold object for reification (cid: %s): %s", c, err)
return
}
log.Debugf("reifying %d objects rooted at %s", len(toreify), c)
// this should not get too big, maybe some 100s of objects.
batch := make([]blocks.Block, 0, len(toreify))
for _, c := range toreify {
blk, err := s.cold.Get(s.ctx, c)
if err != nil {
log.Warnf("error retrieving cold object for reification (cid: %s): %s", c, err)
continue
}
if err := s.checkClosing(); err != nil {
return
}
batch = append(batch, blk)
}
if len(batch) > 0 {
err = s.hot.PutMany(s.ctx, batch)
if err != nil {
log.Warnf("error reifying cold object (cid: %s): %s", c, err)
return
}
}
if s.txnMarkSet != nil {
if len(toreify) > 0 {
if err := s.txnMarkSet.MarkMany(toreify); err != nil {
log.Warnf("error marking reified objects: %s", err)
}
}
if len(totrack) > 0 {
if err := s.txnMarkSet.MarkMany(totrack); err != nil {
log.Warnf("error marking tracked objects: %s", err)
}
}
} else {
// if txnActive is false these are noops
if len(toreify) > 0 {
s.trackTxnRefMany(toreify)
}
if len(totrack) > 0 {
s.trackTxnRefMany(totrack)
}
}
}
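
Reification is only attempted when bstore.IsHotView(ctx) reports a hot view; the flag is attached to the request context (see the ApplyBlocks hunk further below, which wraps the VM context with blockstore.WithHotView). A minimal sketch of the usual context-value pattern behind such a pair of helpers; the real implementations live in the lotus blockstore package and may differ:

// Illustrative only -- not the actual lotus blockstore code.
type hotViewKey struct{}

func WithHotView(ctx context.Context) context.Context {
	return context.WithValue(ctx, hotViewKey{}, struct{}{})
}

func IsHotView(ctx context.Context) bool {
	return ctx.Value(hotViewKey{}) != nil
}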

View File

@ -4,6 +4,9 @@ import (
"context"
"errors"
"fmt"
"io/ioutil"
"math/rand"
"os"
"sync"
"sync/atomic"
"testing"
@ -20,12 +23,14 @@ import (
datastore "github.com/ipfs/go-datastore"
dssync "github.com/ipfs/go-datastore/sync"
logging "github.com/ipfs/go-log/v2"
mh "github.com/multiformats/go-multihash"
)
func init() {
CompactionThreshold = 5
CompactionBoundary = 2
WarmupBoundary = 0
SyncWaitTime = time.Millisecond
logging.SetLogLevel("splitstore", "DEBUG")
}
@ -80,8 +85,17 @@ func testSplitStore(t *testing.T, cfg *Config) {
t.Fatal(err)
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
// open the splitstore
ss, err := Open("", ds, hot, cold, cfg)
ss, err := Open(path, ds, hot, cold, cfg)
if err != nil {
t.Fatal(err)
}
@ -125,6 +139,10 @@ func testSplitStore(t *testing.T, cfg *Config) {
}
waitForCompaction := func() {
ss.txnSyncMx.Lock()
ss.txnSync = true
ss.txnSyncCond.Broadcast()
ss.txnSyncMx.Unlock()
for atomic.LoadInt32(&ss.compacting) == 1 {
time.Sleep(100 * time.Millisecond)
}
@ -259,8 +277,17 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
t.Fatal(err)
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
// open the splitstore
ss, err := Open("", ds, hot, cold, &Config{MarkSetType: "map"})
ss, err := Open(path, ds, hot, cold, &Config{MarkSetType: "map"})
if err != nil {
t.Fatal(err)
}
@ -305,6 +332,10 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
}
waitForCompaction := func() {
ss.txnSyncMx.Lock()
ss.txnSync = true
ss.txnSyncCond.Broadcast()
ss.txnSyncMx.Unlock()
for atomic.LoadInt32(&ss.compacting) == 1 {
time.Sleep(100 * time.Millisecond)
}
@ -357,6 +388,136 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
}
}
func testSplitStoreReification(t *testing.T, f func(context.Context, blockstore.Blockstore, cid.Cid) error) {
ds := dssync.MutexWrap(datastore.NewMapDatastore())
hot := newMockStore()
cold := newMockStore()
mkRandomBlock := func() blocks.Block {
data := make([]byte, 128)
_, err := rand.Read(data)
if err != nil {
t.Fatal(err)
}
return blocks.NewBlock(data)
}
block1 := mkRandomBlock()
block2 := mkRandomBlock()
block3 := mkRandomBlock()
hdr := mock.MkBlock(nil, 0, 0)
hdr.Messages = block1.Cid()
hdr.ParentMessageReceipts = block2.Cid()
hdr.ParentStateRoot = block3.Cid()
block4, err := hdr.ToStorageBlock()
if err != nil {
t.Fatal(err)
}
allBlocks := []blocks.Block{block1, block2, block3, block4}
for _, blk := range allBlocks {
err := cold.Put(context.Background(), blk)
if err != nil {
t.Fatal(err)
}
}
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
ss, err := Open(path, ds, hot, cold, &Config{MarkSetType: "map"})
if err != nil {
t.Fatal(err)
}
defer ss.Close() //nolint
ss.warmupEpoch = 1
go ss.reifyOrchestrator()
waitForReification := func() {
for {
ss.reifyMx.Lock()
ready := len(ss.reifyPend) == 0 && len(ss.reifyInProgress) == 0
ss.reifyMx.Unlock()
if ready {
return
}
time.Sleep(time.Millisecond)
}
}
// first access using the standard view
err = f(context.Background(), ss, block4.Cid())
if err != nil {
t.Fatal(err)
}
// nothing should be reified
waitForReification()
for _, blk := range allBlocks {
has, err := hot.Has(context.Background(), blk.Cid())
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("block unexpectedly reified")
}
}
// now make the hot/reifying view and ensure access reifies
err = f(blockstore.WithHotView(context.Background()), ss, block4.Cid())
if err != nil {
t.Fatal(err)
}
// everything should be reified
waitForReification()
for i, blk := range allBlocks {
has, err := hot.Has(context.Background(), blk.Cid())
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatalf("block%d was not reified", i+1)
}
}
}
func TestSplitStoreReification(t *testing.T) {
EnableReification = true
t.Log("test reification with Has")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.Has(ctx, c)
return err
})
t.Log("test reification with Get")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.Get(ctx, c)
return err
})
t.Log("test reification with GetSize")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
_, err := s.GetSize(ctx, c)
return err
})
t.Log("test reification with View")
testSplitStoreReification(t, func(ctx context.Context, s blockstore.Blockstore, c cid.Cid) error {
return s.View(ctx, c, func(_ []byte) error { return nil })
})
}
type mockChain struct {
t testing.TB
@ -426,17 +587,25 @@ func (c *mockChain) SubscribeHeadChanges(change func(revert []*types.TipSet, app
type mockStore struct {
mx sync.Mutex
set map[cid.Cid]blocks.Block
set map[string]blocks.Block
}
func newMockStore() *mockStore {
return &mockStore{set: make(map[cid.Cid]blocks.Block)}
return &mockStore{set: make(map[string]blocks.Block)}
}
func (b *mockStore) keyOf(c cid.Cid) string {
return string(c.Hash())
}
func (b *mockStore) cidOf(k string) cid.Cid {
return cid.NewCidV1(cid.Raw, mh.Multihash([]byte(k)))
}
func (b *mockStore) Has(_ context.Context, cid cid.Cid) (bool, error) {
b.mx.Lock()
defer b.mx.Unlock()
_, ok := b.set[cid]
_, ok := b.set[b.keyOf(cid)]
return ok, nil
}
@ -446,7 +615,7 @@ func (b *mockStore) Get(_ context.Context, cid cid.Cid) (blocks.Block, error) {
b.mx.Lock()
defer b.mx.Unlock()
blk, ok := b.set[cid]
blk, ok := b.set[b.keyOf(cid)]
if !ok {
return nil, blockstore.ErrNotFound
}
@ -474,7 +643,7 @@ func (b *mockStore) Put(_ context.Context, blk blocks.Block) error {
b.mx.Lock()
defer b.mx.Unlock()
b.set[blk.Cid()] = blk
b.set[b.keyOf(blk.Cid())] = blk
return nil
}
@ -483,7 +652,7 @@ func (b *mockStore) PutMany(_ context.Context, blks []blocks.Block) error {
defer b.mx.Unlock()
for _, blk := range blks {
b.set[blk.Cid()] = blk
b.set[b.keyOf(blk.Cid())] = blk
}
return nil
}
@ -492,7 +661,7 @@ func (b *mockStore) DeleteBlock(_ context.Context, cid cid.Cid) error {
b.mx.Lock()
defer b.mx.Unlock()
delete(b.set, cid)
delete(b.set, b.keyOf(cid))
return nil
}
@ -501,7 +670,7 @@ func (b *mockStore) DeleteMany(_ context.Context, cids []cid.Cid) error {
defer b.mx.Unlock()
for _, c := range cids {
delete(b.set, c)
delete(b.set, b.keyOf(c))
}
return nil
}
@ -515,7 +684,7 @@ func (b *mockStore) ForEachKey(f func(cid.Cid) error) error {
defer b.mx.Unlock()
for c := range b.set {
err := f(c)
err := f(b.cidOf(c))
if err != nil {
return err
}

View File

@ -1,6 +1,7 @@
package splitstore
import (
"sync"
"sync/atomic"
"time"
@ -55,12 +56,13 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
if WarmupBoundary < epoch {
boundaryEpoch = epoch - WarmupBoundary
}
var mx sync.Mutex
batchHot := make([]blocks.Block, 0, batchSize)
count := int64(0)
xcount := int64(0)
missing := int64(0)
count := new(int64)
xcount := new(int64)
missing := new(int64)
visitor, err := s.markSetEnv.CreateVisitor("warmup", 0)
visitor, err := s.markSetEnv.New("warmup", 0)
if err != nil {
return xerrors.Errorf("error creating visitor: %w", err)
}
@ -73,7 +75,7 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
return errStopWalk
}
count++
atomic.AddInt64(count, 1)
has, err := s.hot.Has(s.ctx, c)
if err != nil {
@ -87,22 +89,25 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
blk, err := s.cold.Get(s.ctx, c)
if err != nil {
if err == bstore.ErrNotFound {
missing++
atomic.AddInt64(missing, 1)
return errStopWalk
}
return err
}
xcount++
atomic.AddInt64(xcount, 1)
mx.Lock()
batchHot = append(batchHot, blk)
if len(batchHot) == batchSize {
err = s.hot.PutMany(s.ctx, batchHot)
if err != nil {
mx.Unlock()
return err
}
batchHot = batchHot[:0]
}
mx.Unlock()
return nil
})
@ -118,9 +123,9 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
}
}
log.Infow("warmup stats", "visited", count, "warm", xcount, "missing", missing)
log.Infow("warmup stats", "visited", *count, "warm", *xcount, "missing", *missing)
s.markSetSize = count + count>>2 // overestimate a bit
s.markSetSize = *count + *count>>2 // overestimate a bit
err = s.ds.Put(s.ctx, markSetSizeKey, int64ToBytes(s.markSetSize))
if err != nil {
log.Warnf("error saving mark set size: %s", err)

View File

@ -1,6 +1,8 @@
package splitstore
import (
"sync"
cid "github.com/ipfs/go-cid"
)
@ -17,16 +19,34 @@ func (v *noopVisitor) Visit(_ cid.Cid) (bool, error) {
return true, nil
}
type cidSetVisitor struct {
type tmpVisitor struct {
set *cid.Set
}
var _ ObjectVisitor = (*cidSetVisitor)(nil)
var _ ObjectVisitor = (*tmpVisitor)(nil)
func (v *cidSetVisitor) Visit(c cid.Cid) (bool, error) {
func (v *tmpVisitor) Visit(c cid.Cid) (bool, error) {
return v.set.Visit(c), nil
}
func tmpVisitor() ObjectVisitor {
return &cidSetVisitor{set: cid.NewSet()}
func newTmpVisitor() ObjectVisitor {
return &tmpVisitor{set: cid.NewSet()}
}
type concurrentVisitor struct {
mx sync.Mutex
set *cid.Set
}
var _ ObjectVisitor = (*concurrentVisitor)(nil)
func newConcurrentVisitor() *concurrentVisitor {
return &concurrentVisitor{set: cid.NewSet()}
}
func (v *concurrentVisitor) Visit(c cid.Cid) (bool, error) {
v.mx.Lock()
defer v.mx.Unlock()
return v.set.Visit(c), nil
}

View File

@ -1,2 +1,2 @@
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWBdRCBLUeKvoy22u5DcXs61adFn31v8WWCZgmBjDCjbsC
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWDUQJBA18njjXnG9RtLxoN3muvdU7PEy55QorUEsdAqdy
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWFHDtFx7CVTy4xoCDutVo1cScvSnQjDeaM8UzwVS1qwkh
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWKt8cwpkiumkT8x32c3YFxsPRwhV5J8hCYPn9mhUmcAXt

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -90,6 +90,7 @@ func init() {
UpgradeTurboHeight = getUpgradeHeight("LOTUS_ACTORSV4_HEIGHT", UpgradeTurboHeight)
UpgradeHyperdriveHeight = getUpgradeHeight("LOTUS_HYPERDRIVE_HEIGHT", UpgradeHyperdriveHeight)
UpgradeChocolateHeight = getUpgradeHeight("LOTUS_CHOCOLATE_HEIGHT", UpgradeChocolateHeight)
UpgradeOhSnapHeight = getUpgradeHeight("LOTUS_OHSNAP_HEIGHT", UpgradeOhSnapHeight)
BuildType |= Build2k

View File

@ -42,8 +42,7 @@ const UpgradeTurboHeight = -15
const UpgradeHyperdriveHeight = -16
const UpgradeChocolateHeight = -17
// 2022-01-17T19:00:00Z
const UpgradeOhSnapHeight = 30262
const UpgradeOhSnapHeight = 240
func init() {
policy.SetConsensusMinerMinPower(abi.NewStoragePower(2 << 30))

View File

@ -54,7 +54,8 @@ const UpgradeHyperdriveHeight = 420
const UpgradeChocolateHeight = 312746
const UpgradeOhSnapHeight = 99999999
// 2022-02-10T19:23:00Z
const UpgradeOhSnapHeight = 682006
func init() {
policy.SetConsensusMinerMinPower(abi.NewStoragePower(32 << 30))

View File

@ -67,7 +67,8 @@ const UpgradeHyperdriveHeight = 892800
// 2021-10-26T13:30:00Z
const UpgradeChocolateHeight = 1231620
var UpgradeOhSnapHeight = abi.ChainEpoch(999999999999)
// 2022-03-01T15:00:00Z
var UpgradeOhSnapHeight = abi.ChainEpoch(1594680)
func init() {
if os.Getenv("LOTUS_USE_TEST_ADDRESSES") != "1" {

View File

@ -1,4 +1,54 @@
{
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-61fa69f38b9cc771ba27b670124714b4ea77fbeae05e377fb859c4a43b73a30c.params": {
"cid": "Qma5WL6abSqYg9uUQAZ3EHS286bsNsha7oAGsJBD48Bq2q",
"digest": "c3ad7bb549470b82ad52ed070aebb4f4",
"sector_size": 536870912
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-61fa69f38b9cc771ba27b670124714b4ea77fbeae05e377fb859c4a43b73a30c.vk": {
"cid": "QmUa7f9JtJMsqJJ3s3ZXk6WyF4xJLE8FiqYskZGgk8GCDv",
"digest": "994c5b7d450ca9da348c910689f2dc7f",
"sector_size": 536870912
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-92180959e1918d26350b8e6cfe217bbdd0a2d8de51ebec269078b364b715ad63.params": {
"cid": "QmQiT4qBGodrVNEgVTDXxBNDdPbaD8Ag7Sx3ZTq1zHX79S",
"digest": "5aedd2cf3e5c0a15623d56a1b43110ad",
"sector_size": 8388608
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-92180959e1918d26350b8e6cfe217bbdd0a2d8de51ebec269078b364b715ad63.vk": {
"cid": "QmdcpKUQvHM8RFRVKbk1yHfEqMcBzhtFWKRp9SNEmWq37i",
"digest": "abd80269054d391a734febdac0d2e687",
"sector_size": 8388608
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-fb9e095bebdd77511c0269b967b4d87ba8b8a525edaa0e165de23ba454510194.params": {
"cid": "QmYM6Hg7mjmvA3ZHTsqkss1fkdyDju5dDmLiBZGJ5pz9y9",
"digest": "311f92a3e75036ced01b1c0025f1fa0c",
"sector_size": 2048
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-0-0-fb9e095bebdd77511c0269b967b4d87ba8b8a525edaa0e165de23ba454510194.vk": {
"cid": "QmaQsTLL3nc5dw6wAvaioJSBfd1jhQrA2o6ucFf7XeV74P",
"digest": "eadad9784969890d30f2749708c79771",
"sector_size": 2048
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-0-3b7f44a9362e3985369454947bc94022e118211e49fd672d52bec1cbfd599d18.params": {
"cid": "QmNPc75iEfcahCwNKdqnWLtxnjspUGGR4iscjiz3wP3RtS",
"digest": "1b3cfd761a961543f9eb273e435a06a2",
"sector_size": 34359738368
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-0-3b7f44a9362e3985369454947bc94022e118211e49fd672d52bec1cbfd599d18.vk": {
"cid": "QmdFFUe1gcz9MMHc6YW8aoV48w4ckvcERjt7PkydQAMfCN",
"digest": "3a6941983754737fde880d29c7094905",
"sector_size": 34359738368
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-2-102e1444a7e9a97ebf1e3d6855dcc77e66c011ea66f936d9b2c508f87f2f83a7.params": {
"cid": "QmUB6xTVjzBQGuDNeyJMrrJ1byk58vhPm8eY2Lv9pgwanp",
"digest": "1a392e7b759fb18e036c7559b5ece816",
"sector_size": 68719476736
},
"v28-empty-sector-update-merkletree-poseidon_hasher-8-8-2-102e1444a7e9a97ebf1e3d6855dcc77e66c011ea66f936d9b2c508f87f2f83a7.vk": {
"cid": "Qmd794Jty7k26XJ8Eg4NDEks65Qk8G4GVfGkwqvymv8HAg",
"digest": "80e366df2f1011953c2d01c7b7c9ee8e",
"sector_size": 68719476736
},
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0170db1f394b35d995252228ee359194b13199d259380541dc529fb0099096b0.params": {
"cid": "QmVxjFRyhmyQaZEtCh7nk2abc7LhFkzhnRX4rcHqCCpikR",
"digest": "7610b9f82bfc88405b7a832b651ce2f6",

View File

@ -37,7 +37,7 @@ func BuildTypeString() string {
}
// BuildVersion is the local build version
const BuildVersion = "1.15.0-dev"
const BuildVersion = "1.15.1-dev"
func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {

View File

@ -142,7 +142,7 @@ func (db *DrandBeacon) Entry(ctx context.Context, round uint64) <-chan beacon.Re
go func() {
start := build.Clock.Now()
log.Infow("start fetching randomness", "round", round)
log.Debugw("start fetching randomness", "round", round)
resp, err := db.client.Get(ctx, round)
var br beacon.Response
@ -152,7 +152,7 @@ func (db *DrandBeacon) Entry(ctx context.Context, round uint64) <-chan beacon.Re
br.Entry.Round = resp.Round()
br.Entry.Data = resp.Signature()
}
log.Infow("done fetching randomness", "round", round, "took", build.Clock.Since(start))
log.Debugw("done fetching randomness", "round", round, "took", build.Clock.Since(start))
out <- br
close(out)
}()

View File

@ -32,6 +32,7 @@ import (
/* inline-gen end */
"github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/builtin"
@ -92,6 +93,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
partDone()
}()
ctx = blockstore.WithHotView(ctx)
makeVmWithBaseStateAndEpoch := func(base cid.Cid, e abi.ChainEpoch) (*vm.VM, error) {
vmopt := &vm.VMOpts{
StateBase: base,

View File

@ -182,7 +182,7 @@ func (filec *FilecoinEC) ValidateBlock(ctx context.Context, b *types.FullBlock)
}
}
return xerrors.Errorf("parent state root did not match computed state (%s != %s)", stateroot, h.ParentStateRoot)
return xerrors.Errorf("parent state root did not match computed state (%s != %s)", h.ParentStateRoot, stateroot)
}
if precp != h.ParentMessageReceipts {

View File

@ -165,13 +165,8 @@ func DefaultUpgradeSchedule() stmgr.UpgradeSchedule {
Migration: UpgradeActorsV7,
PreMigrations: []stmgr.PreMigration{{
PreMigration: PreUpgradeActorsV7,
StartWithin: 120,
StartWithin: 180,
DontStartWithin: 60,
StopWithin: 35,
}, {
PreMigration: PreUpgradeActorsV7,
StartWithin: 30,
DontStartWithin: 15,
StopWithin: 5,
}},
Expensive: true,
@ -1264,7 +1259,7 @@ func upgradeActorsV7Common(
root cid.Cid, epoch abi.ChainEpoch, ts *types.TipSet,
config nv15.Config,
) (cid.Cid, error) {
writeStore := blockstore.NewAutobatch(ctx, sm.ChainStore().StateBlockstore(), units.GiB)
writeStore := blockstore.NewAutobatch(ctx, sm.ChainStore().StateBlockstore(), units.GiB/4)
// TODO: pretty sure we'd achieve nothing by doing this, confirm in review
//buf := blockstore.NewTieredBstore(sm.ChainStore().StateBlockstore(), writeStore)
store := store.ActorStore(ctx, writeStore)

View File

@ -0,0 +1,224 @@
//stm: #unit
package messagepool
import (
"context"
"fmt"
"testing"
"github.com/ipfs/go-datastore"
logging "github.com/ipfs/go-log/v2"
"github.com/stretchr/testify/assert"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/types/mock"
"github.com/filecoin-project/lotus/chain/wallet"
_ "github.com/filecoin-project/lotus/lib/sigs/bls"
_ "github.com/filecoin-project/lotus/lib/sigs/secp"
)
func init() {
_ = logging.SetLogLevel("*", "INFO")
}
func getCheckMessageStatus(statusCode api.CheckStatusCode, msgStatuses []api.MessageCheckStatus) (*api.MessageCheckStatus, error) {
for i := 0; i < len(msgStatuses); i++ {
iMsgStatuses := msgStatuses[i]
if iMsgStatuses.CheckStatus.Code == statusCode {
return &iMsgStatuses, nil
}
}
return nil, fmt.Errorf("Could not find CheckStatusCode %s", statusCode)
}
func TestCheckMessages(t *testing.T) {
//stm: @CHAIN_MEMPOOL_CHECK_MESSAGES_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil {
t.Fatal(err)
}
sender, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
tma.setBalance(sender, 1000e15)
target := mock.Address(1001)
var protos []*api.MessagePrototype
for i := 0; i < 5; i++ {
msg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: uint64(i),
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 2<<10),
}
proto := &api.MessagePrototype{
Message: *msg,
ValidNonce: true,
}
protos = append(protos, proto)
}
messageStatuses, err := mp.CheckMessages(context.TODO(), protos)
assert.NoError(t, err)
for i := 0; i < len(messageStatuses); i++ {
iMsgStatuses := messageStatuses[i]
for j := 0; j < len(iMsgStatuses); j++ {
jStatus := iMsgStatuses[j]
assert.True(t, jStatus.OK)
}
}
}
func TestCheckPendingMessages(t *testing.T) {
//stm: @CHAIN_MEMPOOL_CHECK_PENDING_MESSAGES_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil {
t.Fatal(err)
}
sender, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
tma.setBalance(sender, 1000e15)
target := mock.Address(1001)
// add a valid message to the pool
msg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 2<<10),
}
sig, err := w.WalletSign(context.TODO(), sender, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
mustAdd(t, mp, sm)
messageStatuses, err := mp.CheckPendingMessages(context.TODO(), sender)
assert.NoError(t, err)
for i := 0; i < len(messageStatuses); i++ {
iMsgStatuses := messageStatuses[i]
for j := 0; j < len(iMsgStatuses); j++ {
jStatus := iMsgStatuses[j]
assert.True(t, jStatus.OK)
}
}
}
func TestCheckReplaceMessages(t *testing.T) {
//stm: @CHAIN_MEMPOOL_CHECK_REPLACE_MESSAGES_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
if err != nil {
t.Fatal(err)
}
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil {
t.Fatal(err)
}
sender, err := w.WalletNew(context.Background(), types.KTSecp256k1)
if err != nil {
t.Fatal(err)
}
tma.setBalance(sender, 1000e15)
target := mock.Address(1001)
// add a valid message to the pool
msg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 2<<10),
}
sig, err := w.WalletSign(context.TODO(), sender, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
mustAdd(t, mp, sm)
// create a new message with the same data, except that it is too big
var msgs []*types.Message
invalidmsg := &types.Message{
To: target,
From: sender,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 128<<10),
}
msgs = append(msgs, invalidmsg)
{
messageStatuses, err := mp.CheckReplaceMessages(context.TODO(), msgs)
if err != nil {
t.Fatal(err)
}
for i := 0; i < len(messageStatuses); i++ {
iMsgStatuses := messageStatuses[i]
status, err := getCheckMessageStatus(api.CheckStatusMessageSize, iMsgStatuses)
if err != nil {
t.Fatal(err)
}
// the replacement message should cause a status error
assert.False(t, status.OK)
}
}
}

View File

@ -173,10 +173,17 @@ type MessagePool struct {
sigValCache *lru.TwoQueueCache
nonceCache *lru.Cache
evtTypes [3]journal.EventType
journal journal.Journal
}
type nonceCacheKey struct {
tsk types.TipSetKey
addr address.Address
}
type msgSet struct {
msgs map[uint64]*types.SignedMessage
nextNonce uint64
@ -361,6 +368,7 @@ func (ms *msgSet) toSlice() []*types.SignedMessage {
func New(ctx context.Context, api Provider, ds dtypes.MetadataDS, us stmgr.UpgradeSchedule, netName dtypes.NetworkName, j journal.Journal) (*MessagePool, error) {
cache, _ := lru.New2Q(build.BlsSignatureCacheSize)
verifcache, _ := lru.New2Q(build.VerifSigCacheSize)
noncecache, _ := lru.New(256)
cfg, err := loadConfig(ctx, ds)
if err != nil {
@ -386,6 +394,7 @@ func New(ctx context.Context, api Provider, ds dtypes.MetadataDS, us stmgr.Upgra
pruneCooldown: make(chan struct{}, 1),
blsSigCache: cache,
sigValCache: verifcache,
nonceCache: noncecache,
changes: lps.New(50),
localMsgs: namespace.Wrap(ds, datastore.NewKey(localMsgsDs)),
api: api,
@ -1016,11 +1025,23 @@ func (mp *MessagePool) getStateNonce(ctx context.Context, addr address.Address,
done := metrics.Timer(ctx, metrics.MpoolGetNonceDuration)
defer done()
nk := nonceCacheKey{
tsk: ts.Key(),
addr: addr,
}
n, ok := mp.nonceCache.Get(nk)
if ok {
return n.(uint64), nil
}
act, err := mp.api.GetActorAfter(addr, ts)
if err != nil {
return 0, err
}
mp.nonceCache.Add(nk, act.Nonce)
return act.Nonce, nil
}
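
The hunks above show the nonce cache only in fragments, so here is a minimal, self-contained sketch of the pattern being introduced: getStateNonce now consults an LRU keyed by (tipset key, sender address) before falling back to the state lookup. The string keys and the `lookup` callback are illustrative stand-ins for `types.TipSetKey`, `address.Address` and `GetActorAfter`, not the actual Lotus types.

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

// nonceKey mirrors the role of nonceCacheKey: one cache entry per (tipset, sender).
type nonceKey struct {
	tsk  string // stand-in for types.TipSetKey
	addr string // stand-in for address.Address
}

// nonceSource stands in for the expensive GetActorAfter state lookup.
type nonceSource func(tsk, addr string) (uint64, error)

type nonceCache struct {
	cache  *lru.Cache
	lookup nonceSource
}

func newNonceCache(size int, lookup nonceSource) (*nonceCache, error) {
	c, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &nonceCache{cache: c, lookup: lookup}, nil
}

// Get returns the cached nonce for (tsk, addr), computing and storing it on a
// miss - the same lookup-or-compute flow as the getStateNonce hunk above.
func (nc *nonceCache) Get(tsk, addr string) (uint64, error) {
	k := nonceKey{tsk: tsk, addr: addr}
	if v, ok := nc.cache.Get(k); ok {
		return v.(uint64), nil
	}
	n, err := nc.lookup(tsk, addr)
	if err != nil {
		return 0, err
	}
	nc.cache.Add(k, n)
	return n, nil
}

func main() {
	calls := 0
	nc, _ := newNonceCache(256, func(tsk, addr string) (uint64, error) {
		calls++
		return 42, nil // pretend state lookup
	})
	n1, _ := nc.Get("tipset-1", "t0100")
	n2, _ := nc.Get("tipset-1", "t0100")
	fmt.Println(n1, n2, "state lookups:", calls) // 42 42 state lookups: 1
}
```

Keying by tipset means an entry never goes stale (the nonce at a given tipset is immutable), so the only bound needed is the LRU size; old tipsets simply age out.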

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagepool
import (
@ -8,6 +9,7 @@ import (
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/ipfs/go-cid"
"github.com/ipfs/go-datastore"
logging "github.com/ipfs/go-log/v2"
@ -206,6 +208,7 @@ func (tma *testMpoolAPI) ChainComputeBaseFee(ctx context.Context, ts *types.TipS
func assertNonce(t *testing.T, mp *MessagePool, addr address.Address, val uint64) {
t.Helper()
//stm: @CHAIN_MEMPOOL_GET_NONCE_001
n, err := mp.GetNonce(context.TODO(), addr, types.EmptyTSK)
if err != nil {
t.Fatal(err)
@ -224,6 +227,8 @@ func mustAdd(t *testing.T, mp *MessagePool, msg *types.SignedMessage) {
}
func TestMessagePool(t *testing.T) {
//stm: @CHAIN_MEMPOOL_GET_NONCE_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
@ -325,6 +330,7 @@ func TestCheckMessageBig(t *testing.T) {
Message: *msg,
Signature: *sig,
}
//stm: @CHAIN_MEMPOOL_PUSH_001
err = mp.Add(context.TODO(), sm)
assert.ErrorIs(t, err, ErrMessageTooBig)
}
@ -366,8 +372,10 @@ func TestMessagePoolMessagesInEachBlock(t *testing.T) {
tma.applyBlock(t, a)
tsa := mock.TipSet(a)
//stm: @CHAIN_MEMPOOL_PENDING_001
_, _ = mp.Pending(context.TODO())
//stm: @CHAIN_MEMPOOL_SELECT_001
selm, _ := mp.SelectMessages(context.Background(), tsa, 1)
if len(selm) == 0 {
t.Fatal("should have returned the rest of the messages")
@ -428,6 +436,7 @@ func TestRevertMessages(t *testing.T) {
assertNonce(t, mp, sender, 4)
//stm: @CHAIN_MEMPOOL_PENDING_001
p, _ := mp.Pending(context.TODO())
fmt.Printf("%+v\n", p)
if len(p) != 3 {
@ -486,6 +495,7 @@ func TestPruningSimple(t *testing.T) {
mp.Prune()
//stm: @CHAIN_MEMPOOL_PENDING_001
msgs, _ := mp.Pending(context.TODO())
if len(msgs) != 5 {
t.Fatal("expected only 5 messages in pool, got: ", len(msgs))
@ -528,6 +538,7 @@ func TestLoadLocal(t *testing.T) {
msgs := make(map[cid.Cid]struct{})
for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
cid, err := mp.Push(context.TODO(), m)
if err != nil {
t.Fatal(err)
@ -544,6 +555,7 @@ func TestLoadLocal(t *testing.T) {
t.Fatal(err)
}
//stm: @CHAIN_MEMPOOL_PENDING_001
pmsgs, _ := mp.Pending(context.TODO())
if len(msgs) != len(pmsgs) {
t.Fatalf("expected %d messages, but got %d", len(msgs), len(pmsgs))
@ -599,6 +611,7 @@ func TestClearAll(t *testing.T) {
gasLimit := gasguess.Costs[gasguess.CostKey{Code: builtin2.StorageMarketActorCodeID, M: 2}]
for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m)
if err != nil {
t.Fatal(err)
@ -610,8 +623,10 @@ func TestClearAll(t *testing.T) {
mustAdd(t, mp, m)
}
//stm: @CHAIN_MEMPOOL_CLEAR_001
mp.Clear(context.Background(), true)
//stm: @CHAIN_MEMPOOL_PENDING_001
pending, _ := mp.Pending(context.TODO())
if len(pending) > 0 {
t.Fatalf("cleared the mpool, but got %d pending messages", len(pending))
@ -654,6 +669,7 @@ func TestClearNonLocal(t *testing.T) {
gasLimit := gasguess.Costs[gasguess.CostKey{Code: builtin2.StorageMarketActorCodeID, M: 2}]
for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m)
if err != nil {
t.Fatal(err)
@ -665,8 +681,10 @@ func TestClearNonLocal(t *testing.T) {
mustAdd(t, mp, m)
}
//stm: @CHAIN_MEMPOOL_CLEAR_001
mp.Clear(context.Background(), false)
//stm: @CHAIN_MEMPOOL_PENDING_001
pending, _ := mp.Pending(context.TODO())
if len(pending) != 10 {
t.Fatalf("expected 10 pending messages, but got %d instead", len(pending))
@ -724,6 +742,7 @@ func TestUpdates(t *testing.T) {
for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m)
if err != nil {
t.Fatal(err)
@ -745,3 +764,302 @@ func TestUpdates(t *testing.T) {
t.Fatal("expected closed channel, but got an update instead")
}
}
func TestMessageBelowMinGasFee(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
// fee is just below minimum gas fee
fee := minimumBaseFee.Uint64() - 1
{
msg := &types.Message{
To: to,
From: from,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(fee),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
sig, err := w.WalletSign(context.TODO(), from, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
err = mp.Add(context.TODO(), sm)
assert.ErrorIs(t, err, ErrGasFeeCapTooLow)
}
}
func TestMessageValueTooHigh(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
totalFil := types.TotalFilecoinInt
extra := types.NewInt(1)
value := types.BigAdd(totalFil, extra)
{
msg := &types.Message{
To: to,
From: from,
Value: value,
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
sig, err := w.WalletSign(context.TODO(), from, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
err = mp.Add(context.TODO(), sm)
assert.Error(t, err)
}
}
func TestMessageSignatureInvalid(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
msg := &types.Message{
To: to,
From: from,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
badSig := &crypto.Signature{
Type: crypto.SigTypeSecp256k1,
Data: make([]byte, 0),
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *badSig,
}
err = mp.Add(context.TODO(), sm)
// assert.Contains(t, err.Error(), "invalid signature length")
assert.Error(t, err)
}
}
func TestAddMessageTwice(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
// create a valid message
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// try to add it twice
err = mp.Add(context.TODO(), sm)
// assert.Contains(t, err.Error(), "with nonce 0 already in mpool")
assert.Error(t, err)
}
}
func TestAddMessageTwiceNonceGap(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
// create message with invalid nonce (1)
sm := makeTestMessage(w, from, to, 1, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// then try to add message again
err = mp.Add(context.TODO(), sm)
// assert.Contains(t, err.Error(), "unfulfilled nonce gap")
assert.Error(t, err)
}
}
func TestAddMessageTwiceCidDiff(t *testing.T) {
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// Create message with different data, so CID is different
sm2 := makeTestMessage(w, from, to, 0, 50_000_001, minimumBaseFee.Uint64())
//stm: @CHAIN_MEMPOOL_PUSH_001
// then try to add message again
err = mp.Add(context.TODO(), sm2)
// assert.Contains(t, err.Error(), "replace by fee has too low GasPremium")
assert.Error(t, err)
}
}
func TestAddMessageTwiceCidDiffReplaced(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
// Create message with different data, so CID is different
sm2 := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64()*2)
mustAdd(t, mp, sm2)
}
}
func TestRemoveMessage(t *testing.T) {
//stm: @CHAIN_MEMPOOL_PUSH_001
tma := newTestMpoolAPI()
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
assert.NoError(t, err)
from, err := w.WalletNew(context.Background(), types.KTBLS)
assert.NoError(t, err)
tma.setBalance(from, 1000e9)
ds := datastore.NewMapDatastore()
mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err)
to := mock.Address(1001)
{
sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
mustAdd(t, mp, sm)
//stm: @CHAIN_MEMPOOL_REMOVE_001
// remove message for sender
mp.Remove(context.TODO(), from, sm.Message.Nonce, true)
//stm: @CHAIN_MEMPOOL_PENDING_FOR_001
// check messages in pool: should be none present
msgs := mp.pendingFor(context.TODO(), from)
assert.Len(t, msgs, 0)
}
}

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagepool
import (
@ -16,6 +17,7 @@ import (
)
func TestRepubMessages(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001
oldRepublishBatchDelay := RepublishBatchDelay
RepublishBatchDelay = time.Microsecond
defer func() {
@ -57,6 +59,7 @@ func TestRepubMessages(t *testing.T) {
for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m)
if err != nil {
t.Fatal(err)

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagepool
import (
@ -74,6 +75,8 @@ func makeTestMpool() (*MessagePool, *testMpoolAPI) {
}
func TestMessageChains(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001
//stm: @CHAIN_MEMPOOL_CREATE_MSG_CHAINS_001
mp, tma := makeTestMpool()
// the actors
@ -310,6 +313,8 @@ func TestMessageChains(t *testing.T) {
}
func TestMessageChainSkipping(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_CREATE_MSG_CHAINS_001
// regression test for chain skip bug
mp, tma := makeTestMpool()
@ -382,6 +387,7 @@ func TestMessageChainSkipping(t *testing.T) {
}
func TestBasicMessageSelection(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
oldMaxNonceGap := MaxNonceGap
MaxNonceGap = 1000
defer func() {
@ -532,6 +538,7 @@ func TestBasicMessageSelection(t *testing.T) {
}
func TestMessageSelectionTrimmingGas(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -595,6 +602,7 @@ func TestMessageSelectionTrimmingGas(t *testing.T) {
}
func TestMessageSelectionTrimmingMsgsBasic(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -641,6 +649,7 @@ func TestMessageSelectionTrimmingMsgsBasic(t *testing.T) {
}
func TestMessageSelectionTrimmingMsgsTwoSendersBasic(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -707,6 +716,7 @@ func TestMessageSelectionTrimmingMsgsTwoSendersBasic(t *testing.T) {
}
func TestMessageSelectionTrimmingMsgsTwoSendersAdvanced(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -788,6 +798,7 @@ func TestMessageSelectionTrimmingMsgsTwoSendersAdvanced(t *testing.T) {
}
func TestPriorityMessageSelection(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -867,6 +878,7 @@ func TestPriorityMessageSelection(t *testing.T) {
}
func TestPriorityMessageSelection2(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -934,6 +946,7 @@ func TestPriorityMessageSelection2(t *testing.T) {
}
func TestPriorityMessageSelection3(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
mp, tma := makeTestMpool()
// the actors
@ -1028,6 +1041,8 @@ func TestPriorityMessageSelection3(t *testing.T) {
}
func TestOptimalMessageSelection1(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
// this test uses just a single actor sending messages with a low tq
// the chain dependent merging algorithm should pick messages from the actor
// from the start
@ -1094,6 +1109,8 @@ func TestOptimalMessageSelection1(t *testing.T) {
}
func TestOptimalMessageSelection2(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
// this test uses two actors sending messages to each other, with the first
// actor paying (much) higher gas premium than the second.
// We select with a low ticket quality; the chain dependent merging algorithm should pick
@ -1173,6 +1190,8 @@ func TestOptimalMessageSelection2(t *testing.T) {
}
func TestOptimalMessageSelection3(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
// this test uses 10 actors sending a block of messages to each other, with the first
// actors paying higher gas premium than the subsequent actors.
// We select with a low ticket quality; the chain dependent merging algorithm should pick
@ -1416,6 +1435,8 @@ func makeZipfPremiumDistribution(rng *rand.Rand) func() uint64 {
}
func TestCompetitiveMessageSelectionExp(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
if testing.Short() {
t.Skip("skipping in short mode")
}
@ -1439,6 +1460,8 @@ func TestCompetitiveMessageSelectionExp(t *testing.T) {
}
func TestCompetitiveMessageSelectionZipf(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @CHAIN_MEMPOOL_SELECT_001
if testing.Short() {
t.Skip("skipping in short mode")
}
@ -1462,6 +1485,7 @@ func TestCompetitiveMessageSelectionZipf(t *testing.T) {
}
func TestGasReward(t *testing.T) {
//stm: @CHAIN_MEMPOOL_GET_GAS_REWARD_001
tests := []struct {
Premium uint64
FeeCap uint64
@ -1494,6 +1518,8 @@ func TestGasReward(t *testing.T) {
}
func TestRealWorldSelection(t *testing.T) {
//stm: @TOKEN_WALLET_NEW_001, @TOKEN_WALLET_SIGN_001, @CHAIN_MEMPOOL_SELECT_001
// load test-messages.json.gz and rewrite the messages so that
// 1) we map each real actor to a test actor so that we can sign the messages
// 2) adjust the nonces so that they start from 0

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagesigner
import (
@ -60,6 +61,7 @@ func TestMessageSignerSignMessage(t *testing.T) {
to2, err := w.WalletNew(ctx, types.KTSecp256k1)
require.NoError(t, err)
//stm: @CHAIN_MESSAGE_SIGNER_NEW_SIGNER_001, @CHAIN_MESSAGE_SIGNER_SIGN_MESSAGE_001, @CHAIN_MESSAGE_SIGNER_SIGN_MESSAGE_005
type msgSpec struct {
msg *types.Message
mpoolNonce [1]uint64

View File

@ -1,3 +1,4 @@
//stm:#unit
package rand_test
import (
@ -55,11 +56,13 @@ func TestNullRandomnessV1(t *testing.T) {
randEpoch := ts.TipSet.TipSet().Height() - 2
//stm: @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_V1_01, @BLOCKCHAIN_RAND_EXTRACT_BEACON_ENTRY_FOR_EPOCH_01, @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_TIPSET_02
rand1, err := cg.StateManager().GetRandomnessFromBeacon(ctx, pers, randEpoch, entropy, ts.TipSet.TipSet().Key())
if err != nil {
t.Fatal(err)
}
//stm: @BLOCKCHAIN_BEACON_GET_BEACON_FOR_EPOCH_01
bch := cg.BeaconSchedule().BeaconForEpoch(randEpoch).Entry(ctx, uint64(beforeNullHeight)+offset)
select {
@ -68,6 +71,7 @@ func TestNullRandomnessV1(t *testing.T) {
t.Fatal(resp.Err)
}
//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01
rand2, err := rand.DrawRandomness(resp.Entry.Data, pers, randEpoch, entropy)
if err != nil {
t.Fatal(err)
@ -131,11 +135,13 @@ func TestNullRandomnessV2(t *testing.T) {
randEpoch := ts.TipSet.TipSet().Height() - 2
//stm: @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_V2_01
rand1, err := cg.StateManager().GetRandomnessFromBeacon(ctx, pers, randEpoch, entropy, ts.TipSet.TipSet().Key())
if err != nil {
t.Fatal(err)
}
//stm: @BLOCKCHAIN_BEACON_GET_BEACON_FOR_EPOCH_01
bch := cg.BeaconSchedule().BeaconForEpoch(randEpoch).Entry(ctx, uint64(ts.TipSet.TipSet().Height())+offset)
select {
@ -144,6 +150,7 @@ func TestNullRandomnessV2(t *testing.T) {
t.Fatal(resp.Err)
}
//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01, @BLOCKCHAIN_RAND_EXTRACT_BEACON_ENTRY_FOR_EPOCH_01, @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_TIPSET_03
// note that the randEpoch passed to DrawRandomness is still randEpoch (not the latest ts height)
rand2, err := rand.DrawRandomness(resp.Entry.Data, pers, randEpoch, entropy)
if err != nil {
@ -212,11 +219,13 @@ func TestNullRandomnessV3(t *testing.T) {
randEpoch := ts.TipSet.TipSet().Height() - 2
//stm: @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_V3_01, @BLOCKCHAIN_RAND_EXTRACT_BEACON_ENTRY_FOR_EPOCH_01
rand1, err := cg.StateManager().GetRandomnessFromBeacon(ctx, pers, randEpoch, entropy, ts.TipSet.TipSet().Key())
if err != nil {
t.Fatal(err)
}
//stm: @BLOCKCHAIN_BEACON_GET_BEACON_FOR_EPOCH_01
bch := cg.BeaconSchedule().BeaconForEpoch(randEpoch).Entry(ctx, uint64(randEpoch)+offset)
select {
@ -225,6 +234,7 @@ func TestNullRandomnessV3(t *testing.T) {
t.Fatal(resp.Err)
}
//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01
rand2, err := rand.DrawRandomness(resp.Entry.Data, pers, randEpoch, entropy)
if err != nil {
t.Fatal(err)

View File

@ -1,3 +1,4 @@
//stm: #unit
package sub
import (
@ -49,6 +50,7 @@ func TestFetchCidsWithDedup(t *testing.T) {
}
g := &getter{msgs}
//stm: @CHAIN_INCOMING_FETCH_MESSAGES_BY_CID_001
// the cids have a duplicate
res, err := FetchMessagesByCids(context.TODO(), g, append(cids, cids[0]))

View File

@ -1,3 +1,4 @@
//stm: #unit
package chain_test
import (
@ -103,7 +104,7 @@ func prepSyncTest(t testing.TB, h int) *syncTestUtil {
ctx: ctx,
cancel: cancel,
mn: mocknet.New(ctx),
mn: mocknet.New(),
g: g,
us: filcns.DefaultUpgradeSchedule(),
}
@ -157,7 +158,7 @@ func prepSyncTestWithV5Height(t testing.TB, h int, v5height abi.ChainEpoch) *syn
ctx: ctx,
cancel: cancel,
mn: mocknet.New(ctx),
mn: mocknet.New(),
g: g,
us: sched,
}
@ -462,6 +463,8 @@ func (tu *syncTestUtil) waitUntilSyncTarget(to int, target *types.TipSet) {
}
func TestSyncSimple(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50
tu := prepSyncTest(t, H)
@ -478,6 +481,8 @@ func TestSyncSimple(t *testing.T) {
}
func TestSyncMining(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50
tu := prepSyncTest(t, H)
@ -500,6 +505,8 @@ func TestSyncMining(t *testing.T) {
}
func TestSyncBadTimestamp(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50
tu := prepSyncTest(t, H)
@ -554,6 +561,8 @@ func (wpp badWpp) ComputeProof(context.Context, []proof7.ExtendedSectorInfo, abi
}
func TestSyncBadWinningPoSt(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 15
tu := prepSyncTest(t, H)
@ -583,6 +592,9 @@ func (tu *syncTestUtil) loadChainToNode(to int) {
}
func TestSyncFork(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 10
tu := prepSyncTest(t, H)
@ -650,6 +662,9 @@ func TestSyncFork(t *testing.T) {
// A and B both include _different_ messages from sender X with nonce N (where N is the correct nonce for X).
// We can confirm that the state can be correctly computed, and that `MessagesForTipset` behaves as expected.
func TestDuplicateNonce(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 10
tu := prepSyncTest(t, H)
@ -704,6 +719,7 @@ func TestDuplicateNonce(t *testing.T) {
var includedMsg cid.Cid
var skippedMsg cid.Cid
//stm: @CHAIN_STATE_SEARCH_MSG_001
r0, err0 := tu.nds[0].StateSearchMsg(context.TODO(), ts2.TipSet().Key(), msgs[0][0].Cid(), api.LookbackNoLimit, true)
r1, err1 := tu.nds[0].StateSearchMsg(context.TODO(), ts2.TipSet().Key(), msgs[1][0].Cid(), api.LookbackNoLimit, true)
@ -745,6 +761,9 @@ func TestDuplicateNonce(t *testing.T) {
// This test asserts that a block that includes a message with bad nonce can't be synced. A nonce is "bad" if it can't
// be applied on the parent state.
func TestBadNonce(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
H := 10
tu := prepSyncTest(t, H)
@ -792,6 +811,9 @@ func TestBadNonce(t *testing.T) {
// One of the messages uses the sender's robust address, the other uses the ID address.
// Such a block is invalid and should not sync.
func TestMismatchedNoncesRobustID(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
v5h := abi.ChainEpoch(4)
tu := prepSyncTestWithV5Height(t, int(v5h+5), v5h)
@ -804,6 +826,7 @@ func TestMismatchedNoncesRobustID(t *testing.T) {
require.NoError(t, err)
// Produce a message from the banker
//stm: @CHAIN_STATE_LOOKUP_ID_001
makeMsg := func(id bool) *types.SignedMessage {
sender := tu.g.Banker()
if id {
@ -846,6 +869,9 @@ func TestMismatchedNoncesRobustID(t *testing.T) {
// One of the messages uses the sender's robust address, the other uses the ID address.
// Such a block is valid and should sync.
func TestMatchedNoncesRobustID(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
v5h := abi.ChainEpoch(4)
tu := prepSyncTestWithV5Height(t, int(v5h+5), v5h)
@ -858,6 +884,7 @@ func TestMatchedNoncesRobustID(t *testing.T) {
require.NoError(t, err)
// Produce a message from the banker with specified nonce
//stm: @CHAIN_STATE_LOOKUP_ID_001
makeMsg := func(n uint64, id bool) *types.SignedMessage {
sender := tu.g.Banker()
if id {
@ -917,6 +944,8 @@ func runSyncBenchLength(b *testing.B, l int) {
}
func TestSyncInputs(t *testing.T) {
//stm: @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_VALIDATE_BLOCK_001,
//stm: @CHAIN_SYNCER_START_001, @CHAIN_SYNCER_STOP_001
H := 10
tu := prepSyncTest(t, H)
@ -944,6 +973,9 @@ func TestSyncInputs(t *testing.T) {
}
func TestSyncCheckpointHead(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
H := 10
tu := prepSyncTest(t, H)
@ -963,6 +995,7 @@ func TestSyncCheckpointHead(t *testing.T) {
a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil, 0, true)
tu.waitUntilSyncTarget(p1, a.TipSet())
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, a.TipSet().Key())
require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet()))
@ -982,15 +1015,20 @@ func TestSyncCheckpointHead(t *testing.T) {
tu.waitUntilNodeHasTs(p1, b.TipSet().Key())
p1Head := tu.getHead(p1)
require.True(tu.t, p1Head.Equals(a.TipSet()))
//stm: @CHAIN_SYNCER_CHECK_BAD_001
tu.assertBad(p1, b.TipSet())
// Should be able to switch forks.
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, b.TipSet().Key())
p1Head = tu.getHead(p1)
require.True(tu.t, p1Head.Equals(b.TipSet()))
}
func TestSyncCheckpointEarlierThanHead(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
H := 10
tu := prepSyncTest(t, H)
@ -1010,6 +1048,7 @@ func TestSyncCheckpointEarlierThanHead(t *testing.T) {
a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil, 0, true)
tu.waitUntilSyncTarget(p1, a.TipSet())
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, a1.TipSet().Key())
require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet()))
@ -1029,15 +1068,19 @@ func TestSyncCheckpointEarlierThanHead(t *testing.T) {
tu.waitUntilNodeHasTs(p1, b.TipSet().Key())
p1Head := tu.getHead(p1)
require.True(tu.t, p1Head.Equals(a.TipSet()))
//stm: @CHAIN_SYNCER_CHECK_BAD_001
tu.assertBad(p1, b.TipSet())
// Should be able to switch forks.
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, b.TipSet().Key())
p1Head = tu.getHead(p1)
require.True(tu.t, p1Head.Equals(b.TipSet()))
}
func TestInvalidHeight(t *testing.T) {
//stm: @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50
tu := prepSyncTest(t, H)

View File

@ -36,6 +36,8 @@ var NetCmd = &cli.Command{
NetReachability,
NetBandwidthCmd,
NetBlockCmd,
NetStatCmd,
NetLimitCmd,
},
}
@ -606,3 +608,103 @@ var NetBlockListCmd = &cli.Command{
return nil
},
}
var NetStatCmd = &cli.Command{
Name: "stat",
Usage: "Report resource usage for a scope",
ArgsUsage: "scope",
Description: `Report resource usage for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
- all -- reports the resource usage for all currently active scopes.
`,
Action: func(cctx *cli.Context) error {
api, closer, err := GetAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := ReqContext(cctx)
args := cctx.Args().Slice()
if len(args) != 1 {
return xerrors.Errorf("must specify exactly one scope")
}
scope := args[0]
result, err := api.NetStat(ctx, scope)
if err != nil {
return err
}
enc := json.NewEncoder(os.Stdout)
return enc.Encode(result)
},
}
var NetLimitCmd = &cli.Command{
Name: "limit",
Usage: "Get or set resource limits for a scope",
ArgsUsage: "scope [limit]",
Description: `Get or set resource limits for a scope.
The scope can be one of the following:
- system -- limits for the system aggregate resource usage.
- transient -- limits for the transient resource usage.
- svc:<service> -- limits for the resource usage of a specific service.
- proto:<proto> -- limits for the resource usage of a specific protocol.
- peer:<peer> -- limits for the resource usage of a specific peer.
The limit is json-formatted, with the same structure as the limits file.
`,
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "set",
Usage: "set the limit for a scope",
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := GetAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := ReqContext(cctx)
args := cctx.Args().Slice()
if cctx.Bool("set") {
if len(args) != 2 {
return xerrors.Errorf("must specify exactly a scope and a limit")
}
scope := args[0]
limitStr := args[1]
var limit atypes.NetLimit
err := json.Unmarshal([]byte(limitStr), &limit)
if err != nil {
return xerrors.Errorf("error decoding limit: %w", err)
}
return api.NetSetLimit(ctx, scope, limit)
}
if len(args) != 1 {
return xerrors.Errorf("must specify exactly one scope")
}
scope := args[0]
result, err := api.NetLimit(ctx, scope)
if err != nil {
return err
}
enc := json.NewEncoder(os.Stdout)
return enc.Encode(result)
},
}
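
The description above says the limit argument is JSON with the same structure as the limits file; here is a hedged sketch of building such a payload. The `netLimit` struct simply mirrors the fields shown in the NetLimit API documentation further down in this document and is an illustrative stand-in, not the real `atypes.NetLimit` definition; the numbers are arbitrary.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// netLimit mirrors the documented NetLimit fields (Memory, Streams,
// StreamsInbound, StreamsOutbound, Conns, ConnsInbound, ConnsOutbound, FD).
// It is a stand-in for illustration only.
type netLimit struct {
	Memory          int64 `json:"Memory"`
	Streams         int   `json:"Streams"`
	StreamsInbound  int   `json:"StreamsInbound"`
	StreamsOutbound int   `json:"StreamsOutbound"`
	Conns           int   `json:"Conns"`
	ConnsInbound    int   `json:"ConnsInbound"`
	ConnsOutbound   int   `json:"ConnsOutbound"`
	FD              int   `json:"FD"`
}

func main() {
	limit := netLimit{
		Memory:          512 << 20, // 512 MiB
		Streams:         512,
		StreamsInbound:  256,
		StreamsOutbound: 256,
		Conns:           128,
		ConnsInbound:    64,
		ConnsOutbound:   64,
		FD:              512,
	}
	b, err := json.Marshal(limit)
	if err != nil {
		panic(err)
	}
	// The resulting JSON string would be passed as the [limit] argument, e.g.
	//   lotus net limit --set system '<json>'
	fmt.Println(string(b))
}
```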

View File

@ -126,7 +126,7 @@ func infoCmdAct(cctx *cli.Context) error {
alerts, err := minerApi.LogAlerts(ctx)
if err != nil {
return xerrors.Errorf("getting alerts: %w", err)
fmt.Printf("ERROR: getting alerts: %s\n", err)
}
activeAlerts := make([]alerting.Alert, 0)
@ -466,6 +466,7 @@ var stateOrder = map[sealing.SectorState]stateMeta{}
var stateList = []stateMeta{
{col: 39, state: "Total"},
{col: color.FgGreen, state: sealing.Proving},
{col: color.FgGreen, state: sealing.UpdateActivating},
{col: color.FgBlue, state: sealing.Empty},
{col: color.FgBlue, state: sealing.WaitDeals},
@ -496,6 +497,7 @@ var stateList = []stateMeta{
{col: color.FgYellow, state: sealing.SubmitReplicaUpdate},
{col: color.FgYellow, state: sealing.ReplicaUpdateWait},
{col: color.FgYellow, state: sealing.FinalizeReplicaUpdate},
{col: color.FgYellow, state: sealing.ReleaseSectorKey},
{col: color.FgCyan, state: sealing.Terminating},
{col: color.FgCyan, state: sealing.TerminateWait},
@ -524,6 +526,7 @@ var stateList = []stateMeta{
{col: color.FgRed, state: sealing.SnapDealsAddPieceFailed},
{col: color.FgRed, state: sealing.SnapDealsDealsExpired},
{col: color.FgRed, state: sealing.ReplicaUpdateFailed},
{col: color.FgRed, state: sealing.ReleaseSectorKeyFailed},
}
func init() {

View File

@ -96,6 +96,11 @@ var infoAllCmd = &cli.Command{
fmt.Println("ERROR: ", err)
}
fmt.Println("\n#: Storage Locks")
if err := storageLocks.Action(cctx); err != nil {
fmt.Println("ERROR: ", err)
}
fmt.Println("\n#: Sched Diag")
if err := sealingSchedDiagCmd.Action(cctx); err != nil {
fmt.Println("ERROR: ", err)
@ -192,6 +197,11 @@ var infoAllCmd = &cli.Command{
fmt.Println("ERROR: ", err)
}
fmt.Println("\n#: Storage Sector List")
if err := storageListSectorsCmd.Action(cctx); err != nil {
fmt.Println("ERROR: ", err)
}
fmt.Println("\n#: Expired Sectors")
if err := sectorsExpiredCmd.Action(cctx); err != nil {
fmt.Println("ERROR: ", err)

View File

@ -473,6 +473,9 @@ func storageMinerInit(ctx context.Context, cctx *cli.Context, api v1api.FullNode
AllowPreCommit2: true,
AllowCommit: true,
AllowUnseal: true,
AllowReplicaUpdate: true,
AllowProveReplicaUpdate2: true,
AllowRegenSectorKey: true,
}, wsts, smsts)
if err != nil {
return err

View File

@ -55,6 +55,7 @@ var sectorsCmd = &cli.Command{
sectorsTerminateCmd,
sectorsRemoveCmd,
sectorsSnapUpCmd,
sectorsSnapAbortCmd,
sectorsMarkForUpgradeCmd,
sectorsStartSealCmd,
sectorsSealDelayCmd,
@ -160,7 +161,7 @@ var sectorsStatusCmd = &cli.Command{
fmt.Printf("Expiration:\t\t%v\n", status.Expiration)
fmt.Printf("DealWeight:\t\t%v\n", status.DealWeight)
fmt.Printf("VerifiedDealWeight:\t\t%v\n", status.VerifiedDealWeight)
fmt.Printf("InitialPledge:\t\t%v\n", status.InitialPledge)
fmt.Printf("InitialPledge:\t\t%v\n", types.FIL(status.InitialPledge))
fmt.Printf("\nExpiration Info\n")
fmt.Printf("OnTime:\t\t%v\n", status.OnTime)
fmt.Printf("Early:\t\t%v\n", status.Early)
@ -292,9 +293,15 @@ var sectorsListCmd = &cli.Command{
Usage: "display number of events the sector has received",
Aliases: []string{"e"},
},
&cli.BoolFlag{
Name: "initial-pledge",
Usage: "display initial pledge",
Aliases: []string{"p"},
},
&cli.BoolFlag{
Name: "seal-time",
Usage: "display how long it took for the sector to be sealed",
Aliases: []string{"t"},
},
&cli.StringFlag{
Name: "states",
@ -404,6 +411,7 @@ var sectorsListCmd = &cli.Command{
tablewriter.Col("Deals"),
tablewriter.Col("DealWeight"),
tablewriter.Col("VerifiedPower"),
tablewriter.Col("Pledge"),
tablewriter.NewLineCol("Error"),
tablewriter.NewLineCol("RecoveryTimeout"))
@ -482,6 +490,9 @@ var sectorsListCmd = &cli.Command{
m["RecoveryTimeout"] = color.YellowString(lcli.EpochTime(head.Height(), st.Early))
}
}
if inSSet && cctx.Bool("initial-pledge") {
m["Pledge"] = types.FIL(st.InitialPledge).Short()
}
}
if !fast && deals > 0 {
@ -1520,6 +1531,31 @@ var sectorsSnapUpCmd = &cli.Command{
},
}
var sectorsSnapAbortCmd = &cli.Command{
Name: "abort-upgrade",
Usage: "Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to as before",
ArgsUsage: "<sectorNum>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number"))
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
if err != nil {
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorAbortUpgrade(ctx, abi.SectorNumber(id))
},
}
var sectorsMarkForUpgradeCmd = &cli.Command{
Name: "mark-for-upgrade",
Usage: "Mark a committed capacity sector for replacement by a sector with deals",

View File

@ -368,6 +368,7 @@ type storedSector struct {
store stores.SectorStorageInfo
unsealed, sealed, cache bool
update, updatecache bool
}
var storageFindCmd = &cli.Command{
@ -421,6 +422,16 @@ var storageFindCmd = &cli.Command{
return xerrors.Errorf("finding cache: %w", err)
}
us, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUpdate, 0, false)
if err != nil {
return xerrors.Errorf("finding sealed: %w", err)
}
uc, err := nodeApi.StorageFindSector(ctx, sid, storiface.FTUpdateCache, 0, false)
if err != nil {
return xerrors.Errorf("finding cache: %w", err)
}
byId := map[stores.ID]*storedSector{}
for _, info := range u {
sts, ok := byId[info.ID]
@ -455,6 +466,28 @@ var storageFindCmd = &cli.Command{
}
sts.cache = true
}
for _, info := range us {
sts, ok := byId[info.ID]
if !ok {
sts = &storedSector{
id: info.ID,
store: info,
}
byId[info.ID] = sts
}
sts.update = true
}
for _, info := range uc {
sts, ok := byId[info.ID]
if !ok {
sts = &storedSector{
id: info.ID,
store: info,
}
byId[info.ID] = sts
}
sts.updatecache = true
}
local, err := nodeApi.StorageLocal(ctx)
if err != nil {
@ -480,6 +513,12 @@ var storageFindCmd = &cli.Command{
if info.cache {
types += "Cache, "
}
if info.update {
types += "Update, "
}
if info.updatecache {
types += "UpdateCache, "
}
fmt.Printf("In %s (%s)\n", info.id, types[:len(types)-2])
fmt.Printf("\tSealing: %t; Storage: %t\n", info.store.CanSeal, info.store.CanStore)

View File

@ -173,6 +173,11 @@ var runCmd = &cli.Command{
Usage: "enable prove replica update 2",
Value: true,
},
&cli.BoolFlag{
Name: "regen-sector-key",
Usage: "enable regen sector key",
Value: true,
},
&cli.IntFlag{
Name: "parallel-fetch-limit",
Usage: "maximum fetch operations to run in parallel",
@ -261,7 +266,7 @@ var runCmd = &cli.Command{
var taskTypes []sealtasks.TaskType
taskTypes = append(taskTypes, sealtasks.TTFetch, sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize)
taskTypes = append(taskTypes, sealtasks.TTFetch, sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize, sealtasks.TTFinalizeReplicaUpdate)
if cctx.Bool("addpiece") {
taskTypes = append(taskTypes, sealtasks.TTAddPiece)
@ -278,12 +283,15 @@ var runCmd = &cli.Command{
if cctx.Bool("commit") {
taskTypes = append(taskTypes, sealtasks.TTCommit2)
}
if cctx.Bool("replicaupdate") {
if cctx.Bool("replica-update") {
taskTypes = append(taskTypes, sealtasks.TTReplicaUpdate)
}
if cctx.Bool("prove-replica-update2") {
taskTypes = append(taskTypes, sealtasks.TTProveReplicaUpdate2)
}
if cctx.Bool("regen-sector-key") {
taskTypes = append(taskTypes, sealtasks.TTRegenSectorKey)
}
if len(taskTypes) == 0 {
return xerrors.Errorf("no task types specified")

View File

@ -27,6 +27,9 @@ var allowSetting = map[sealtasks.TaskType]struct{}{
sealtasks.TTPreCommit2: {},
sealtasks.TTCommit2: {},
sealtasks.TTUnseal: {},
sealtasks.TTReplicaUpdate: {},
sealtasks.TTProveReplicaUpdate2: {},
sealtasks.TTRegenSectorKey: {},
}
var settableStr = func() string {

View File

@ -510,10 +510,17 @@ var genesisSetRemainderCmd = &cli.Command{
var genesisSetActorVersionCmd = &cli.Command{
Name: "set-network-version",
Usage: "Set the version that this network will start from",
ArgsUsage: "<genesisFile> <actorVersion>",
Flags: []cli.Flag{
&cli.IntFlag{
Name: "network-version",
Usage: "network version to start genesis with",
Value: int(build.GenesisNetworkVersion),
},
},
ArgsUsage: "<genesisFile>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 2 {
return fmt.Errorf("must specify genesis file and network version (e.g. '0'")
if cctx.Args().Len() != 1 {
return fmt.Errorf("must specify genesis file")
}
genf, err := homedir.Expand(cctx.Args().First())
@ -531,16 +538,12 @@ var genesisSetActorVersionCmd = &cli.Command{
return xerrors.Errorf("unmarshal genesis template: %w", err)
}
nv, err := strconv.ParseUint(cctx.Args().Get(1), 10, 64)
if err != nil {
return xerrors.Errorf("parsing network version: %w", err)
}
if nv > uint64(build.NewestNetworkVersion) {
nv := network.Version(cctx.Int("network-version"))
if nv > build.NewestNetworkVersion {
return xerrors.Errorf("invalid network version: %d", nv)
}
template.NetworkVersion = network.Version(nv)
template.NetworkVersion = nv
b, err = json.MarshalIndent(&template, "", " ")
if err != nil {

cmd/lotus-shed/diff.go (new file, 104 lines)
View File

@ -0,0 +1,104 @@
package main
import (
"fmt"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/chain/types"
lcli "github.com/filecoin-project/lotus/cli"
)
var diffCmd = &cli.Command{
Name: "diff",
Usage: "diff state objects",
Subcommands: []*cli.Command{diffStateTrees},
}
var diffStateTrees = &cli.Command{
Name: "state-trees",
Usage: "diff two state-trees",
ArgsUsage: "<state-tree-a> <state-tree-b>",
Action: func(cctx *cli.Context) error {
api, closer, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if cctx.NArg() != 2 {
return xerrors.Errorf("expected two state-tree roots")
}
argA := cctx.Args().Get(0)
rootA, err := cid.Parse(argA)
if err != nil {
return xerrors.Errorf("first state-tree root (%q) is not a CID: %w", argA, err)
}
argB := cctx.Args().Get(1)
rootB, err := cid.Parse(argB)
if err != nil {
return xerrors.Errorf("second state-tree root (%q) is not a CID: %w", argB, err)
}
if rootA == rootB {
fmt.Println("state trees do not differ")
return nil
}
changedB, err := api.StateChangedActors(ctx, rootA, rootB)
if err != nil {
return err
}
changedA, err := api.StateChangedActors(ctx, rootB, rootA)
if err != nil {
return err
}
diff := func(stateA, stateB types.Actor) {
if stateB.Code != stateA.Code {
fmt.Printf(" code: %s != %s\n", stateA.Code, stateB.Code)
}
if stateB.Head != stateA.Head {
fmt.Printf(" state: %s != %s\n", stateA.Head, stateB.Head)
}
if stateB.Nonce != stateA.Nonce {
fmt.Printf(" nonce: %d != %d\n", stateA.Nonce, stateB.Nonce)
}
if !stateB.Balance.Equals(stateA.Balance) {
fmt.Printf(" balance: %s != %s\n", stateA.Balance, stateB.Balance)
}
}
fmt.Printf("state differences between %s (first) and %s (second):\n\n", rootA, rootB)
for addr, stateA := range changedA {
fmt.Println(addr)
stateB, ok := changedB[addr]
if ok {
diff(stateA, stateB)
// delete here (rather than after a continue) so the second pass doesn't report this actor again
delete(changedB, addr)
} else {
fmt.Printf(" actor does not exist in second state-tree (%s)\n", rootB)
}
fmt.Println()
}
for addr, stateB := range changedB {
fmt.Println(addr)
stateA, ok := changedA[addr]
if ok {
diff(stateA, stateB)
continue
} else {
fmt.Printf(" actor does not exist in first state-tree (%s)\n", rootA)
}
fmt.Println()
}
return nil
},
}

View File

@ -68,6 +68,7 @@ func main() {
sendCsvCmd,
terminationsCmd,
migrationsCmd,
diffCmd,
}
app := &cli.App{

View File

@ -4,10 +4,12 @@ import (
"encoding/json"
corebig "math/big"
"os"
"strconv"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
filbig "github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/types"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/ipfs/go-cid"
"github.com/urfave/cli/v2"
@ -22,29 +24,50 @@ type networkTotalsOutput struct {
Payload networkTotals `json:"payload"`
}
type providerMeta struct {
nonidentifiable bool
}
// force formatting as decimal to aid human readers
type humanFloat float64
func (f humanFloat) MarshalJSON() ([]byte, error) {
// 'f' uses decimal digits without exponents.
// The bit size of 32 ensures we don't use too many decimal places.
return []byte(strconv.FormatFloat(float64(f), 'f', -1, 32)), nil
}
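
As a concrete illustration of the comment above, the sketch below compares the default float64 JSON encoding with humanFloat's 'f'/bitSize-32 formatting; the type is copied from the hunk so the example is self-contained, and the sample value is arbitrary.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// humanFloat reproduced from the hunk above.
type humanFloat float64

func (f humanFloat) MarshalJSON() ([]byte, error) {
	return []byte(strconv.FormatFloat(float64(f), 'f', -1, 32)), nil
}

func main() {
	frac := 1.0 / 3.0
	plain, _ := json.Marshal(frac)             // default float64 encoding
	human, _ := json.Marshal(humanFloat(frac)) // plain decimal, float32 precision
	fmt.Println(string(plain)) // 0.3333333333333333
	fmt.Println(string(human)) // 0.33333334
}
```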
type Totals struct {
TotalDeals int `json:"total_num_deals"`
TotalBytes int64 `json:"total_stored_data_size"`
PrivateTotalDeals int `json:"private_total_num_deals"`
PrivateTotalBytes int64 `json:"private_total_stored_data_size"`
CapacityCarryingData humanFloat `json:"capacity_fraction_carrying_data"`
}
type networkTotals struct {
QaNetworkPower filbig.Int `json:"total_qa_power"`
RawNetworkPower filbig.Int `json:"total_raw_capacity"`
CapacityCarryingData float64 `json:"capacity_fraction_carrying_data"`
UniqueCids int `json:"total_unique_cids"`
UniqueProviders int `json:"total_unique_providers"`
UniqueBytes int64 `json:"total_unique_data_size"`
UniqueClients int `json:"total_unique_clients"`
TotalDeals int `json:"total_num_deals"`
TotalBytes int64 `json:"total_stored_data_size"`
FilplusTotalDeals int `json:"filplus_total_num_deals"`
FilplusTotalBytes int64 `json:"filplus_total_stored_data_size"`
UniqueProviders int `json:"total_unique_providers"`
UniquePrivateProviders int `json:"total_unique_private_providers"`
Totals
FilPlus Totals `json:"filecoin_plus_subset"`
seenClient map[address.Address]bool
seenProvider map[address.Address]bool
seenPieceCid map[cid.Cid]bool
pieces map[cid.Cid]struct{}
clients map[address.Address]struct{}
providers map[address.Address]providerMeta
}
var storageStatsCmd = &cli.Command{
Name: "storage-stats",
Usage: "Translates current lotus state into a json summary suitable for driving https://storage.filecoin.io/",
Flags: []cli.Flag{
&cli.Int64Flag{
Name: "height",
&cli.StringFlag{
Name: "tipset",
Usage: "Comma separated array of cids, or @height",
},
},
Action: func(cctx *cli.Context) error {
@ -56,22 +79,24 @@ var storageStatsCmd = &cli.Command{
}
defer apiCloser()
head, err := api.ChainHead(ctx)
var ts *types.TipSet
if cctx.String("tipset") == "" {
ts, err = api.ChainHead(ctx)
if err != nil {
return err
}
ts, err = api.ChainGetTipSetByHeight(ctx, ts.Height()-defaultEpochLookback, ts.Key())
if err != nil {
return err
}
requestedHeight := cctx.Int64("height")
if requestedHeight > 0 {
head, err = api.ChainGetTipSetByHeight(ctx, abi.ChainEpoch(requestedHeight), head.Key())
} else {
head, err = api.ChainGetTipSetByHeight(ctx, head.Height()-defaultEpochLookback, head.Key())
}
ts, err = lcli.ParseTipSetRef(ctx, api, cctx.String("tipset"))
if err != nil {
return err
}
}
power, err := api.StateMinerPower(ctx, address.Address{}, head.Key())
power, err := api.StateMinerPower(ctx, address.Address{}, ts.Key())
if err != nil {
return err
}
@ -79,12 +104,12 @@ var storageStatsCmd = &cli.Command{
netTotals := networkTotals{
QaNetworkPower: power.TotalPower.QualityAdjPower,
RawNetworkPower: power.TotalPower.RawBytePower,
seenClient: make(map[address.Address]bool),
seenProvider: make(map[address.Address]bool),
seenPieceCid: make(map[cid.Cid]bool),
pieces: make(map[cid.Cid]struct{}),
clients: make(map[address.Address]struct{}),
providers: make(map[address.Address]providerMeta),
}
deals, err := api.StateMarketDeals(ctx, head.Key())
deals, err := api.StateMarketDeals(ctx, ts.Key())
if err != nil {
return err
}
@ -94,35 +119,76 @@ var storageStatsCmd = &cli.Command{
// Only count deals that have properly started, not past/future ones
// https://github.com/filecoin-project/specs-actors/blob/v0.9.9/actors/builtin/market/deal.go#L81-L85
// Bail on 0 as well in case SectorStartEpoch is uninitialized due to some bug
//
// Additionally if the SlashEpoch is set this means the underlying sector is
// terminated for whatever reason ( not just slashed ), and the deal record
// will soon be removed from the state entirely
if dealInfo.State.SectorStartEpoch <= 0 ||
dealInfo.State.SectorStartEpoch > head.Height() {
dealInfo.State.SectorStartEpoch > ts.Height() ||
dealInfo.State.SlashEpoch > -1 {
continue
}
netTotals.seenClient[dealInfo.Proposal.Client] = true
netTotals.clients[dealInfo.Proposal.Client] = struct{}{}
if _, seen := netTotals.providers[dealInfo.Proposal.Provider]; !seen {
pm := providerMeta{}
mi, err := api.StateMinerInfo(ctx, dealInfo.Proposal.Provider, ts.Key())
if err != nil {
return err
}
if mi.PeerId == nil || *mi.PeerId == "" {
log.Infof("private provider %s", dealInfo.Proposal.Provider)
pm.nonidentifiable = true
netTotals.UniquePrivateProviders++
}
netTotals.providers[dealInfo.Proposal.Provider] = pm
netTotals.UniqueProviders++
}
if _, seen := netTotals.pieces[dealInfo.Proposal.PieceCID]; !seen {
netTotals.pieces[dealInfo.Proposal.PieceCID] = struct{}{}
netTotals.UniqueBytes += int64(dealInfo.Proposal.PieceSize)
netTotals.UniqueCids++
}
netTotals.TotalBytes += int64(dealInfo.Proposal.PieceSize)
netTotals.seenProvider[dealInfo.Proposal.Provider] = true
netTotals.seenPieceCid[dealInfo.Proposal.PieceCID] = true
netTotals.TotalDeals++
if netTotals.providers[dealInfo.Proposal.Provider].nonidentifiable {
netTotals.PrivateTotalBytes += int64(dealInfo.Proposal.PieceSize)
netTotals.PrivateTotalDeals++
}
if dealInfo.Proposal.VerifiedDeal {
netTotals.FilplusTotalDeals++
netTotals.FilplusTotalBytes += int64(dealInfo.Proposal.PieceSize)
netTotals.FilPlus.TotalBytes += int64(dealInfo.Proposal.PieceSize)
netTotals.FilPlus.TotalDeals++
if netTotals.providers[dealInfo.Proposal.Provider].nonidentifiable {
netTotals.FilPlus.PrivateTotalBytes += int64(dealInfo.Proposal.PieceSize)
netTotals.FilPlus.PrivateTotalDeals++
}
}
}
netTotals.UniqueCids = len(netTotals.seenPieceCid)
netTotals.UniqueClients = len(netTotals.seenClient)
netTotals.UniqueProviders = len(netTotals.seenProvider)
netTotals.UniqueClients = len(netTotals.clients)
netTotals.CapacityCarryingData, _ = new(corebig.Rat).SetFrac(
ccd, _ := new(corebig.Rat).SetFrac(
corebig.NewInt(netTotals.TotalBytes),
netTotals.RawNetworkPower.Int,
).Float64()
netTotals.CapacityCarryingData = humanFloat(ccd)
ccdfp, _ := new(corebig.Rat).SetFrac(
corebig.NewInt(netTotals.FilPlus.TotalBytes),
netTotals.RawNetworkPower.Int,
).Float64()
netTotals.FilPlus.CapacityCarryingData = humanFloat(ccdfp)
return json.NewEncoder(os.Stdout).Encode(
networkTotalsOutput{
Epoch: int64(head.Height()),
Epoch: int64(ts.Height()),
Endpoint: "NETWORK_WIDE_TOTALS",
Payload: netTotals,
},

View File

@ -24,6 +24,15 @@ var ProtocolCodenames = []struct {
{build.UpgradeTapeHeight + 1, "tape"},
{build.UpgradeLiftoffHeight + 1, "liftoff"},
{build.UpgradeKumquatHeight + 1, "postliftoff"},
{build.UpgradeCalicoHeight + 1, "calico"},
{build.UpgradePersianHeight + 1, "persian"},
{build.UpgradeOrangeHeight + 1, "orange"},
{build.UpgradeTrustHeight + 1, "trust"},
{build.UpgradeNorwegianHeight + 1, "norwegian"},
{build.UpgradeTurboHeight + 1, "turbo"},
{build.UpgradeHyperdriveHeight + 1, "hyperdrive"},
{build.UpgradeChocolateHeight + 1, "chocolate"},
{build.UpgradeOhSnapHeight + 1, "ohsnap"},
}
// GetProtocolCodename gets the protocol codename associated with a height.

View File

@ -8,12 +8,11 @@ import (
"io"
"log"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/fatih/color"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/filecoin-project/lotus/chain/actors/builtin"
init_ "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
@ -43,6 +42,15 @@ func doExtractMessage(opts extractOpts) error {
return fmt.Errorf("failed to resolve message and tipsets from chain: %w", err)
}
// Assumes that the desired message isn't at the boundary of network versions.
// Otherwise this will be inaccurate. But it's such a tiny edge case that
// it's not worth spending the time to support boundary messages unless
// actually needed.
nv, err := FullAPI.StateNetworkVersion(ctx, incTs.Key())
if err != nil {
return fmt.Errorf("failed to resolve network version from inclusion height: %w", err)
}
// get the circulating supply before the message was executed.
circSupplyDetail, err := FullAPI.StateVMCirculatingSupplyInternal(ctx, incTs.Key())
if err != nil {
@ -53,6 +61,7 @@ func doExtractMessage(opts extractOpts) error {
log.Printf("message was executed in tipset: %s", execTs.Key())
log.Printf("message was included in tipset: %s", incTs.Key())
log.Printf("network version at inclusion: %d", nv)
log.Printf("circulating supply at inclusion tipset: %d", circSupply)
log.Printf("finding precursor messages using mode: %s", opts.precursor)
@ -111,6 +120,7 @@ func doExtractMessage(opts extractOpts) error {
BaseFee: basefee,
// recorded randomness will be discarded.
Rand: conformance.NewRecordingRand(new(conformance.LogReporter), FullAPI),
NetworkVersion: nv,
})
if err != nil {
return fmt.Errorf("failed to execute precursor message: %w", err)
@ -146,6 +156,7 @@ func doExtractMessage(opts extractOpts) error {
CircSupply: circSupplyDetail.FilCirculating,
BaseFee: basefee,
Rand: recordingRand,
NetworkVersion: nv,
})
if err != nil {
return fmt.Errorf("failed to execute message: %w", err)
@ -263,11 +274,6 @@ func doExtractMessage(opts extractOpts) error {
return err
}
nv, err := FullAPI.StateNetworkVersion(ctx, execTs.Key())
if err != nil {
return err
}
codename := GetProtocolCodename(execTs.Height())
// Write out the test vector.

View File

@ -129,6 +129,7 @@ func runSimulateCmd(_ *cli.Context) error {
CircSupply: circSupply.FilCirculating,
BaseFee: baseFee,
Rand: rand,
// TODO NetworkVersion
})
if err != nil {
return fmt.Errorf("failed to apply message: %w", err)

View File

@ -81,9 +81,12 @@
* [NetConnectedness](#NetConnectedness)
* [NetDisconnect](#NetDisconnect)
* [NetFindPeer](#NetFindPeer)
* [NetLimit](#NetLimit)
* [NetPeerInfo](#NetPeerInfo)
* [NetPeers](#NetPeers)
* [NetPubsubScores](#NetPubsubScores)
* [NetSetLimit](#NetSetLimit)
* [NetStat](#NetStat)
* [Pieces](#Pieces)
* [PiecesGetCIDInfo](#PiecesGetCIDInfo)
* [PiecesGetPieceInfo](#PiecesGetPieceInfo)
@ -94,6 +97,7 @@
* [Return](#Return)
* [ReturnAddPiece](#ReturnAddPiece)
* [ReturnFetch](#ReturnFetch)
* [ReturnFinalizeReplicaUpdate](#ReturnFinalizeReplicaUpdate)
* [ReturnFinalizeSector](#ReturnFinalizeSector)
* [ReturnGenerateSectorKeyFromData](#ReturnGenerateSectorKeyFromData)
* [ReturnMoveStorage](#ReturnMoveStorage)
@ -113,6 +117,7 @@
* [SealingAbort](#SealingAbort)
* [SealingSchedDiag](#SealingSchedDiag)
* [Sector](#Sector)
* [SectorAbortUpgrade](#SectorAbortUpgrade)
* [SectorAddPieceToAny](#SectorAddPieceToAny)
* [SectorCommitFlush](#SectorCommitFlush)
* [SectorCommitPending](#SectorCommitPending)
@ -1698,6 +1703,32 @@ Response:
}
```
### NetLimit
Perms: read
Inputs:
```json
[
"string value"
]
```
Response:
```json
{
"Memory": 123,
"Streams": 3,
"StreamsInbound": 1,
"StreamsOutbound": 2,
"Conns": 4,
"ConnsInbound": 3,
"ConnsOutbound": 4,
"FD": 5
}
```
### NetPeerInfo
@ -1783,6 +1814,94 @@ Response:
]
```
### NetSetLimit
Perms: admin
Inputs:
```json
[
"string value",
{
"Memory": 123,
"Streams": 3,
"StreamsInbound": 1,
"StreamsOutbound": 2,
"Conns": 4,
"ConnsInbound": 3,
"ConnsOutbound": 4,
"FD": 5
}
]
```
Response: `{}`
### NetStat
Perms: read
Inputs:
```json
[
"string value"
]
```
Response:
```json
{
"System": {
"NumStreamsInbound": 123,
"NumStreamsOutbound": 123,
"NumConnsInbound": 123,
"NumConnsOutbound": 123,
"NumFD": 123,
"Memory": 9
},
"Transient": {
"NumStreamsInbound": 123,
"NumStreamsOutbound": 123,
"NumConnsInbound": 123,
"NumConnsOutbound": 123,
"NumFD": 123,
"Memory": 9
},
"Services": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
},
"Protocols": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
},
"Peers": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
}
}
```
## Pieces
@ -1937,6 +2056,30 @@ Response: `{}`
### ReturnFetch
Perms: admin
Inputs:
```json
[
{
"Sector": {
"Miner": 1000,
"Number": 9
},
"ID": "07070707-0707-0707-0707-070707070707"
},
{
"Code": 0,
"Message": "string value"
}
]
```
Response: `{}`
### ReturnFinalizeReplicaUpdate
Perms: admin
Inputs:
@ -2357,6 +2500,21 @@ Response: `{}`
## Sector
### SectorAbortUpgrade
SectorAbortUpgrade can be called on a sector that is in the process of being upgraded, aborting the upgrade
Perms: admin
Inputs:
```json
[
9
]
```
Response: `{}`
### SectorAddPieceToAny
Add piece to an open sector. If no sectors with enough space are open,
either a new sector will be created, or this call will block until more
@ -3768,6 +3926,334 @@ Response:
"BaseMinMemory": 1073741824
}
},
"seal/v0/provereplicaupdate/1": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/provereplicaupdate/2": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1610612736,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 10737418240
},
"3": {
"MinMemory": 32212254720,
"MaxMemory": 161061273600,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 34359738368
},
"4": {
"MinMemory": 64424509440,
"MaxMemory": 204010946560,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 68719476736
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1610612736,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 10737418240
},
"8": {
"MinMemory": 32212254720,
"MaxMemory": 161061273600,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 34359738368
},
"9": {
"MinMemory": 64424509440,
"MaxMemory": 204010946560,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 68719476736
}
},
"seal/v0/regensectorkey": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/replicaupdate": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/unseal": {
"0": {
"MinMemory": 2048,

View File

@ -10,6 +10,7 @@
* [Add](#Add)
* [AddPiece](#AddPiece)
* [Finalize](#Finalize)
* [FinalizeReplicaUpdate](#FinalizeReplicaUpdate)
* [FinalizeSector](#FinalizeSector)
* [Generate](#Generate)
* [GenerateSectorKeyFromData](#GenerateSectorKeyFromData)
@ -599,6 +600,334 @@ Response:
"BaseMinMemory": 1073741824
}
},
"seal/v0/provereplicaupdate/1": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/provereplicaupdate/2": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1610612736,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 10737418240
},
"3": {
"MinMemory": 32212254720,
"MaxMemory": 161061273600,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 34359738368
},
"4": {
"MinMemory": 64424509440,
"MaxMemory": 204010946560,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 68719476736
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1610612736,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 10737418240
},
"8": {
"MinMemory": 32212254720,
"MaxMemory": 161061273600,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 34359738368
},
"9": {
"MinMemory": 64424509440,
"MaxMemory": 204010946560,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 68719476736
}
},
"seal/v0/regensectorkey": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/replicaupdate": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/unseal": {
"0": {
"MinMemory": 2048,
@ -784,6 +1113,41 @@ Response:
## Finalize
### FinalizeReplicaUpdate
Perms: admin
Inputs:
```json
[
{
"ID": {
"Miner": 1000,
"Number": 9
},
"ProofType": 8
},
[
{
"Offset": 1024,
"Size": 1024
}
]
]
```
Response:
```json
{
"Sector": {
"Miner": 1000,
"Number": 9
},
"ID": "07070707-0707-0707-0707-070707070707"
}
```
### FinalizeSector

View File

@ -128,9 +128,12 @@
* [NetConnectedness](#NetConnectedness)
* [NetDisconnect](#NetDisconnect)
* [NetFindPeer](#NetFindPeer)
* [NetLimit](#NetLimit)
* [NetPeerInfo](#NetPeerInfo)
* [NetPeers](#NetPeers)
* [NetPubsubScores](#NetPubsubScores)
* [NetSetLimit](#NetSetLimit)
* [NetStat](#NetStat)
* [Paych](#Paych)
* [PaychAllocateLane](#PaychAllocateLane)
* [PaychAvailableFunds](#PaychAvailableFunds)
@ -3821,6 +3824,32 @@ Response:
}
```
### NetLimit
Perms: read
Inputs:
```json
[
"string value"
]
```
Response:
```json
{
"Memory": 123,
"Streams": 3,
"StreamsInbound": 1,
"StreamsOutbound": 2,
"Conns": 4,
"ConnsInbound": 3,
"ConnsOutbound": 4,
"FD": 5
}
```
### NetPeerInfo
@ -3906,6 +3935,94 @@ Response:
]
```
### NetSetLimit
Perms: admin
Inputs:
```json
[
"string value",
{
"Memory": 123,
"Streams": 3,
"StreamsInbound": 1,
"StreamsOutbound": 2,
"Conns": 4,
"ConnsInbound": 3,
"ConnsOutbound": 4,
"FD": 5
}
]
```
Response: `{}`
### NetStat
Perms: read
Inputs:
```json
[
"string value"
]
```
Response:
```json
{
"System": {
"NumStreamsInbound": 123,
"NumStreamsOutbound": 123,
"NumConnsInbound": 123,
"NumConnsOutbound": 123,
"NumFD": 123,
"Memory": 9
},
"Transient": {
"NumStreamsInbound": 123,
"NumStreamsOutbound": 123,
"NumConnsInbound": 123,
"NumConnsOutbound": 123,
"NumFD": 123,
"Memory": 9
},
"Services": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
},
"Protocols": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
},
"Peers": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
}
}
```
## Paych
The Paych methods are for interacting with and managing payment channels

View File

@ -134,9 +134,12 @@
* [NetConnectedness](#NetConnectedness)
* [NetDisconnect](#NetDisconnect)
* [NetFindPeer](#NetFindPeer)
* [NetLimit](#NetLimit)
* [NetPeerInfo](#NetPeerInfo)
* [NetPeers](#NetPeers)
* [NetPubsubScores](#NetPubsubScores)
* [NetSetLimit](#NetSetLimit)
* [NetStat](#NetStat)
* [Node](#Node)
* [NodeStatus](#NodeStatus)
* [Paych](#Paych)
@ -4182,6 +4185,32 @@ Response:
}
```
### NetLimit
Perms: read
Inputs:
```json
[
"string value"
]
```
Response:
```json
{
"Memory": 123,
"Streams": 3,
"StreamsInbound": 1,
"StreamsOutbound": 2,
"Conns": 4,
"ConnsInbound": 3,
"ConnsOutbound": 4,
"FD": 5
}
```
### NetPeerInfo
@ -4267,6 +4296,94 @@ Response:
]
```
### NetSetLimit
Perms: admin
Inputs:
```json
[
"string value",
{
"Memory": 123,
"Streams": 3,
"StreamsInbound": 1,
"StreamsOutbound": 2,
"Conns": 4,
"ConnsInbound": 3,
"ConnsOutbound": 4,
"FD": 5
}
]
```
Response: `{}`
### NetStat
Perms: read
Inputs:
```json
[
"string value"
]
```
Response:
```json
{
"System": {
"NumStreamsInbound": 123,
"NumStreamsOutbound": 123,
"NumConnsInbound": 123,
"NumConnsOutbound": 123,
"NumFD": 123,
"Memory": 9
},
"Transient": {
"NumStreamsInbound": 123,
"NumStreamsOutbound": 123,
"NumConnsInbound": 123,
"NumConnsOutbound": 123,
"NumFD": 123,
"Memory": 9
},
"Services": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
},
"Protocols": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
},
"Peers": {
"abc": {
"NumStreamsInbound": 1,
"NumStreamsOutbound": 2,
"NumConnsInbound": 3,
"NumConnsOutbound": 4,
"NumFD": 5,
"Memory": 123
}
}
}
```
## Node
These methods are general node management and status commands

View File

@ -7,7 +7,7 @@ USAGE:
lotus-miner [global options] command [command options] [arguments...]
VERSION:
1.15.0-dev
1.15.1-dev
COMMANDS:
init Initialize a lotus miner repo
@ -1160,6 +1160,8 @@ COMMANDS:
reachability Print information about reachability from the internet
bandwidth Print bandwidth usage information
block Manage network connection gating rules
stat Report resource usage for a scope
limit Get or set resource limits for a scope
help, h Shows a list of commands or help for one command
OPTIONS:
@ -1428,6 +1430,58 @@ OPTIONS:
```
### lotus-miner net stat
```
NAME:
lotus-miner net stat - Report resource usage for a scope
USAGE:
lotus-miner net stat [command options] scope
DESCRIPTION:
Report resource usage for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
- all -- reports the resource usage for all currently active scopes.
OPTIONS:
--help, -h show help (default: false)
```
### lotus-miner net limit
```
NAME:
lotus-miner net limit - Get or set resource limits for a scope
USAGE:
lotus-miner net limit [command options] scope [limit]
DESCRIPTION:
Get or set resource limits for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
The limit is json-formatted, with the same structure as the limits file.
OPTIONS:
--set set the limit for a scope (default: false)
--help, -h show help (default: false)
```
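The limit argument is the same JSON object that the NetSetLimit API above accepts. A minimal sketch of producing such an object from Go follows; the field names mirror the NetSetLimit input documented earlier in this diff, and the concrete values are illustrative, not recommendations.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// limit mirrors the JSON limit object shown in the NetSetLimit docs above.
type limit struct {
	Memory          int64
	Streams         int
	StreamsInbound  int
	StreamsOutbound int
	Conns           int
	ConnsInbound    int
	ConnsOutbound   int
	FD              int
}

func main() {
	l := limit{
		Memory:          512 << 20, // 512 MiB, illustrative only
		Streams:         256,
		StreamsInbound:  128,
		StreamsOutbound: 128,
		Conns:           64,
		ConnsInbound:    32,
		ConnsOutbound:   32,
		FD:              128,
	}
	out, _ := json.Marshal(l)
	// Pass the resulting JSON as the [limit] argument together with --set.
	fmt.Println(string(out))
}
```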
## lotus-miner pieces
```
NAME:
@ -1526,6 +1580,7 @@ COMMANDS:
terminate Terminate sector on-chain then remove (WARNING: This means losing power and collateral for the removed sector)
remove Forcefully remove a sector (WARNING: This means losing power and collateral for the removed sector (use 'terminate' for lower penalty))
snap-up Mark a committed capacity sector to be filled with deals
abort-upgrade Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to its pre-upgrade state
mark-for-upgrade Mark a committed capacity sector for replacement by a sector with deals
seal Manually start sealing a sector (filling any unused space with junk)
set-seal-delay Set the time, in minutes, that a new sector waits for deals before sealing starts
@ -1570,7 +1625,8 @@ OPTIONS:
--color, -c use color in display output (default: depends on output being a TTY)
--fast, -f don't show on-chain info for better performance (default: false)
--events, -e display number of events the sector has received (default: false)
--seal-time display how long it took for the sector to be sealed (default: false)
--initial-pledge, -p display initial pledge (default: false)
--seal-time, -t display how long it took for the sector to be sealed (default: false)
--states value filter sectors by a comma-separated list of states
--unproven, -u only show sectors which aren't in the 'Proving' state (default: false)
--help, -h show help (default: false)
@ -1761,6 +1817,19 @@ OPTIONS:
```
### lotus-miner sectors abort-upgrade
```
NAME:
lotus-miner sectors abort-upgrade - Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to its pre-upgrade state
USAGE:
lotus-miner sectors abort-upgrade [command options] <sectorNum>
OPTIONS:
--help, -h show help (default: false)
```
### lotus-miner sectors mark-for-upgrade
```
NAME:

View File

@ -7,7 +7,7 @@ USAGE:
lotus-worker [global options] command [command options] [arguments...]
VERSION:
1.15.0-dev
1.15.1-dev
COMMANDS:
run Start lotus worker
@ -46,6 +46,7 @@ OPTIONS:
--commit enable commit (32G sectors: all cores or GPUs, 128GiB Memory + 64GiB swap) (default: true)
--replica-update enable replica update (default: true)
--prove-replica-update2 enable prove replica update 2 (default: true)
--regen-sector-key enable regen sector key (default: true)
--parallel-fetch-limit value maximum fetch operations to run in parallel (default: 5)
--timeout value used when 'listen' is unspecified. must be a valid duration recognized by golang's time.ParseDuration function (default: "30m")
--help, -h show help (default: false)
@ -170,7 +171,7 @@ NAME:
lotus-worker tasks enable - Enable a task type
USAGE:
lotus-worker tasks enable [command options] [UNS|C2|PC2|PC1|AP]
lotus-worker tasks enable [command options] [UNS|C2|PC2|PC1|PR2|RU|AP|GSK]
OPTIONS:
--help, -h show help (default: false)
@ -183,7 +184,7 @@ NAME:
lotus-worker tasks disable - Disable a task type
USAGE:
lotus-worker tasks disable [command options] [UNS|C2|PC2|PC1|AP]
lotus-worker tasks disable [command options] [UNS|C2|PC2|PC1|PR2|RU|AP|GSK]
OPTIONS:
--help, -h show help (default: false)

View File

@ -7,7 +7,7 @@ USAGE:
lotus [global options] command [command options] [arguments...]
VERSION:
1.15.0-dev
1.15.1-dev
COMMANDS:
daemon Start a lotus daemon process
@ -2616,6 +2616,8 @@ COMMANDS:
reachability Print information about reachability from the internet
bandwidth Print bandwidth usage information
block Manage network connection gating rules
stat Report resource usage for a scope
limit Get or set resource limits for a scope
help, h Shows a list of commands or help for one command
OPTIONS:
@ -2884,6 +2886,58 @@ OPTIONS:
```
### lotus net stat
```
NAME:
lotus net stat - Report resource usage for a scope
USAGE:
lotus net stat [command options] scope
DESCRIPTION:
Report resource usage for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
- all -- reports the resource usage for all currently active scopes.
OPTIONS:
--help, -h show help (default: false)
```
### lotus net limit
```
NAME:
lotus net limit - Get or set resource limits for a scope
USAGE:
lotus net limit [command options] scope [limit]
DESCRIPTION:
Get or set resource limits for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
The limit is json-formatted, with the same structure as the limits file.
OPTIONS:
--set set the limit for a scope (default: false)
--help, -h show help (default: false)
```
## lotus sync
```
NAME:

View File

@ -163,11 +163,11 @@
#HotStoreType = "badger"
# MarkSetType specifies the type of the markset.
# It can be "map" (default) for in memory marking or "badger" for on-disk marking.
# It can be "map" for in memory marking or "badger" (default) for on-disk marking.
#
# type: string
# env var: LOTUS_CHAINSTORE_SPLITSTORE_MARKSETTYPE
#MarkSetType = "map"
#MarkSetType = "badger"
# HotStoreMessageRetention specifies the retention policy for messages, in finalities beyond
# the compaction boundary; default is 0.

View File

@ -167,6 +167,14 @@
# env var: LOTUS_DEALMAKING_EXPECTEDSEALDURATION
#ExpectedSealDuration = "24h0m0s"
# Whether new sectors are created to pack incoming deals
# When this is set to false no new sectors will be created for sealing incoming deals
# This is useful for forcing all deals to be assigned as snap deals to sectors marked for upgrade
#
# type: bool
# env var: LOTUS_DEALMAKING_MAKENEWSECTORFORDEALS
#MakeNewSectorForDeals = true
# Maximum amount of time proposed deal StartEpoch can be in future
#
# type: Duration
@ -430,6 +438,9 @@
# env var: LOTUS_STORAGE_ALLOWPROVEREPLICAUPDATE2
#AllowProveReplicaUpdate2 = true
# env var: LOTUS_STORAGE_ALLOWREGENSECTORKEY
#AllowRegenSectorKey = true
# env var: LOTUS_STORAGE_RESOURCEFILTERING
#ResourceFiltering = "hardware"

extern/filecoin-ffi vendored

@ -1 +1 @@
Subproject commit e660df5616e397b2d8ac316f45ddfa7a44637971
Subproject commit 5ec5d805c01ea85224f6448dd6c6fa0a2a73c028

View File

@ -55,6 +55,25 @@ func (m *Manager) CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof,
return nil
}
// temporary hack to make the check work with snapdeals
// will go away in https://github.com/filecoin-project/lotus/pull/7971
if lp.Sealed == "" || lp.Cache == "" {
// maybe it's update
lockedUpdate, err := m.index.StorageTryLock(ctx, sector.ID, storiface.FTUpdate|storiface.FTUpdateCache, storiface.FTNone)
if err != nil {
return xerrors.Errorf("acquiring sector lock: %w", err)
}
if lockedUpdate {
lp, _, err = m.localStore.AcquireSector(ctx, sector, storiface.FTUpdate|storiface.FTUpdateCache, storiface.FTNone, storiface.PathStorage, storiface.AcquireMove)
if err != nil {
log.Warnw("CheckProvable Sector FAULT: acquire sector in checkProvable", "sector", sector, "error", err)
bad[sector.ID] = fmt.Sprintf("acquire sector failed: %s", err)
return nil
}
lp.Sealed, lp.Cache = lp.Update, lp.UpdateCache
}
}
if lp.Sealed == "" || lp.Cache == "" {
log.Warnw("CheckProvable Sector FAULT: cache and/or sealed paths not found", "sector", sector, "sealed", lp.Sealed, "cache", lp.Cache)
bad[sector.ID] = fmt.Sprintf("cache and/or sealed paths not found, cache %q, sealed %q", lp.Cache, lp.Sealed)

View File

@ -34,6 +34,12 @@ func (b *Provider) AcquireSector(ctx context.Context, id storage.SectorRef, exis
if err := os.Mkdir(filepath.Join(b.Root, storiface.FTCache.String()), 0755); err != nil && !os.IsExist(err) { // nolint
return storiface.SectorPaths{}, nil, err
}
if err := os.Mkdir(filepath.Join(b.Root, storiface.FTUpdate.String()), 0755); err != nil && !os.IsExist(err) { // nolint
return storiface.SectorPaths{}, nil, err
}
if err := os.Mkdir(filepath.Join(b.Root, storiface.FTUpdateCache.String()), 0755); err != nil && !os.IsExist(err) { // nolint
return storiface.SectorPaths{}, nil, err
}
done := func() {}

View File

@ -669,7 +669,7 @@ func (sb *Sealer) SealCommit2(ctx context.Context, sector storage.SectorRef, pha
func (sb *Sealer) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, pieces []abi.PieceInfo) (storage.ReplicaUpdateOut, error) {
empty := storage.ReplicaUpdateOut{}
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTSealed|storiface.FTUnsealed|storiface.FTCache, storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathSealing)
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTUnsealed|storiface.FTSealed|storiface.FTCache, storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathSealing)
if err != nil {
return empty, xerrors.Errorf("failed to acquire sector paths: %w", err)
}
@ -697,10 +697,10 @@ func (sb *Sealer) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, p
if err := os.Mkdir(paths.UpdateCache, 0755); err != nil { // nolint
if os.IsExist(err) {
log.Warnf("existing cache in %s; removing", paths.Cache)
log.Warnf("existing cache in %s; removing", paths.UpdateCache)
if err := os.RemoveAll(paths.UpdateCache); err != nil {
return empty, xerrors.Errorf("remove existing sector cache from %s (sector %d): %w", paths.Cache, sector, err)
return empty, xerrors.Errorf("remove existing sector cache from %s (sector %d): %w", paths.UpdateCache, sector, err)
}
if err := os.Mkdir(paths.UpdateCache, 0755); err != nil { // nolint:gosec
@ -718,7 +718,7 @@ func (sb *Sealer) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, p
}
func (sb *Sealer) ProveReplicaUpdate1(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid) (storage.ReplicaVanillaProofs, error) {
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTSealed|storiface.FTCache|storiface.FTUpdateCache|storiface.FTUpdate, storiface.FTNone, storiface.PathSealing)
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTSealed|storiface.FTCache|storiface.FTUpdate|storiface.FTUpdateCache, storiface.FTNone, storiface.PathSealing)
if err != nil {
return nil, xerrors.Errorf("failed to acquire sector paths: %w", err)
}
@ -769,7 +769,7 @@ func (sb *Sealer) ReleaseSealed(ctx context.Context, sector storage.SectorRef) e
return xerrors.Errorf("not supported at this layer")
}
func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
func (sb *Sealer) freeUnsealed(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
ssize, err := sector.ProofType.SectorSize()
if err != nil {
return err
@ -834,6 +834,19 @@ func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef,
}
return nil
}
func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
ssize, err := sector.ProofType.SectorSize()
if err != nil {
return err
}
if err := sb.freeUnsealed(ctx, sector, keepUnsealed); err != nil {
return err
}
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTCache, 0, storiface.PathStorage)
if err != nil {
return xerrors.Errorf("acquiring sector cache path: %w", err)
@ -843,6 +856,43 @@ func (sb *Sealer) FinalizeSector(ctx context.Context, sector storage.SectorRef,
return ffi.ClearCache(uint64(ssize), paths.Cache)
}
func (sb *Sealer) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
ssize, err := sector.ProofType.SectorSize()
if err != nil {
return err
}
if err := sb.freeUnsealed(ctx, sector, keepUnsealed); err != nil {
return err
}
{
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTCache, 0, storiface.PathStorage)
if err != nil {
return xerrors.Errorf("acquiring sector cache path: %w", err)
}
defer done()
if err := ffi.ClearCache(uint64(ssize), paths.Cache); err != nil {
return xerrors.Errorf("clear cache: %w", err)
}
}
{
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTUpdateCache, 0, storiface.PathStorage)
if err != nil {
return xerrors.Errorf("acquiring sector cache path: %w", err)
}
defer done()
if err := ffi.ClearCache(uint64(ssize), paths.UpdateCache); err != nil {
return xerrors.Errorf("clear cache: %w", err)
}
}
return nil
}
func (sb *Sealer) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) error {
// This call is meant to mark storage as 'freeable'. Given that unsealing is
// very expensive, we don't remove data as soon as we can - instead we only

View File

@ -105,6 +105,7 @@ type SealerConfig struct {
AllowUnseal bool
AllowReplicaUpdate bool
AllowProveReplicaUpdate2 bool
AllowRegenSectorKey bool
// ResourceFiltering instructs the system which resource filtering strategy
// to use when evaluating tasks against this worker. An empty value defaults
@ -146,7 +147,7 @@ func New(ctx context.Context, lstor *stores.Local, stor *stores.Remote, ls store
go m.sched.runSched()
localTasks := []sealtasks.TaskType{
sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize, sealtasks.TTFetch,
sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize, sealtasks.TTFetch, sealtasks.TTFinalizeReplicaUpdate,
}
if sc.AllowAddPiece {
localTasks = append(localTasks, sealtasks.TTAddPiece)
@ -169,6 +170,9 @@ func New(ctx context.Context, lstor *stores.Local, stor *stores.Remote, ls store
if sc.AllowProveReplicaUpdate2 {
localTasks = append(localTasks, sealtasks.TTProveReplicaUpdate2)
}
if sc.AllowRegenSectorKey {
localTasks = append(localTasks, sealtasks.TTRegenSectorKey)
}
wcfg := WorkerConfig{
IgnoreResourceFiltering: sc.ResourceFiltering == ResourceFilteringDisabled,
@ -577,6 +581,74 @@ func (m *Manager) FinalizeSector(ctx context.Context, sector storage.SectorRef,
return nil
}
func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
if err := m.index.StorageLock(ctx, sector.ID, storiface.FTNone, storiface.FTSealed|storiface.FTUnsealed|storiface.FTCache|storiface.FTUpdate|storiface.FTUpdateCache); err != nil {
return xerrors.Errorf("acquiring sector lock: %w", err)
}
fts := storiface.FTUnsealed
{
unsealedStores, err := m.index.StorageFindSector(ctx, sector.ID, storiface.FTUnsealed, 0, false)
if err != nil {
return xerrors.Errorf("finding unsealed sector: %w", err)
}
if len(unsealedStores) == 0 { // In some edge cases the unsealed sector may not exist anymore; that's fine
fts = storiface.FTNone
}
}
pathType := storiface.PathStorage
{
sealedStores, err := m.index.StorageFindSector(ctx, sector.ID, storiface.FTUpdate, 0, false)
if err != nil {
return xerrors.Errorf("finding sealed sector: %w", err)
}
for _, store := range sealedStores {
if store.CanSeal {
pathType = storiface.PathSealing
break
}
}
}
selector := newExistingSelector(m.index, sector.ID, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache, false)
err := m.sched.Schedule(ctx, sector, sealtasks.TTFinalizeReplicaUpdate, selector,
m.schedFetch(sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|fts, pathType, storiface.AcquireMove),
func(ctx context.Context, w Worker) error {
_, err := m.waitSimpleCall(ctx)(w.FinalizeReplicaUpdate(ctx, sector, keepUnsealed))
return err
})
if err != nil {
return err
}
fetchSel := newAllocSelector(m.index, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathStorage)
moveUnsealed := fts
{
if len(keepUnsealed) == 0 {
moveUnsealed = storiface.FTNone
}
}
err = m.sched.Schedule(ctx, sector, sealtasks.TTFetch, fetchSel,
m.schedFetch(sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|moveUnsealed, storiface.PathStorage, storiface.AcquireMove),
func(ctx context.Context, w Worker) error {
_, err := m.waitSimpleCall(ctx)(w.MoveStorage(ctx, sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|moveUnsealed))
return err
})
if err != nil {
return xerrors.Errorf("moving sector to storage: %w", err)
}
return nil
}
func (m *Manager) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) error {
return nil
}
@ -715,14 +787,13 @@ func (m *Manager) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, p
return out, waitErr
}
if err := m.index.StorageLock(ctx, sector.ID, storiface.FTSealed|storiface.FTCache, storiface.FTUpdate|storiface.FTUpdateCache); err != nil {
if err := m.index.StorageLock(ctx, sector.ID, storiface.FTUnsealed|storiface.FTSealed|storiface.FTCache, storiface.FTUpdate|storiface.FTUpdateCache); err != nil {
return storage.ReplicaUpdateOut{}, xerrors.Errorf("acquiring sector lock: %w", err)
}
selector := newAllocSelector(m.index, storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathSealing)
err = m.sched.Schedule(ctx, sector, sealtasks.TTReplicaUpdate, selector, m.schedFetch(sector, storiface.FTSealed, storiface.PathSealing, storiface.AcquireCopy), func(ctx context.Context, w Worker) error {
log.Errorf("scheduled work for replica update")
err = m.sched.Schedule(ctx, sector, sealtasks.TTReplicaUpdate, selector, m.schedFetch(sector, storiface.FTUnsealed|storiface.FTSealed|storiface.FTCache, storiface.PathSealing, storiface.AcquireCopy), func(ctx context.Context, w Worker) error {
err := m.startWork(ctx, w, wk)(w.ReplicaUpdate(ctx, sector, pieces))
if err != nil {
return xerrors.Errorf("startWork: %w", err)
@ -768,9 +839,12 @@ func (m *Manager) ProveReplicaUpdate1(ctx context.Context, sector storage.Sector
return nil, xerrors.Errorf("acquiring sector lock: %w", err)
}
selector := newExistingSelector(m.index, sector.ID, storiface.FTUpdate|storiface.FTUpdateCache|storiface.FTSealed|storiface.FTCache, true)
// NOTE: We set allowFetch to false so that we always execute on a worker
// with direct access to the data. We want to do that because this step is
// generally very cheap / fast, and transferring data is not worth the effort
selector := newExistingSelector(m.index, sector.ID, storiface.FTUpdate|storiface.FTUpdateCache|storiface.FTSealed|storiface.FTCache, false)
err = m.sched.Schedule(ctx, sector, sealtasks.TTProveReplicaUpdate1, selector, m.schedFetch(sector, storiface.FTSealed, storiface.PathSealing, storiface.AcquireCopy), func(ctx context.Context, w Worker) error {
err = m.sched.Schedule(ctx, sector, sealtasks.TTProveReplicaUpdate1, selector, m.schedFetch(sector, storiface.FTSealed|storiface.FTCache|storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathSealing, storiface.AcquireCopy), func(ctx context.Context, w Worker) error {
err := m.startWork(ctx, w, wk)(w.ProveReplicaUpdate1(ctx, sector, sectorKey, newSealed, newUnsealed))
if err != nil {
@ -873,6 +947,10 @@ func (m *Manager) ReturnProveReplicaUpdate2(ctx context.Context, callID storifac
return m.returnResult(ctx, callID, proof, err)
}
func (m *Manager) ReturnFinalizeReplicaUpdate(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error {
return m.returnResult(ctx, callID, nil, err)
}
func (m *Manager) ReturnGenerateSectorKeyFromData(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error {
return m.returnResult(ctx, callID, nil, err)
}

View File

@ -1,3 +1,4 @@
//stm: #unit
package sectorstorage
import (
@ -363,6 +364,7 @@ func TestRedoPC1(t *testing.T) {
// Manager restarts in the middle of a task, restarts it, it completes
func TestRestartManager(t *testing.T) {
//stm: @WORKER_JOBS_001
test := func(returnBeforeCall bool) func(*testing.T) {
return func(t *testing.T) {
logging.SetAllLoggers(logging.LevelDebug)
@ -507,6 +509,7 @@ func TestRestartWorker(t *testing.T) {
<-arch
require.NoError(t, w.Close())
//stm: @WORKER_STATS_001
for {
if len(m.WorkerStats()) == 0 {
break
@ -569,6 +572,7 @@ func TestReenableWorker(t *testing.T) {
// disable
atomic.StoreInt64(&w.testDisable, 1)
//stm: @WORKER_STATS_001
for i := 0; i < 100; i++ {
if !m.WorkerStats()[w.session].Enabled {
break

View File

@ -477,6 +477,10 @@ func (mgr *SectorMgr) FinalizeSector(context.Context, storage.SectorRef, []stora
return nil
}
func (mgr *SectorMgr) FinalizeReplicaUpdate(context.Context, storage.SectorRef, []storage.Range) error {
return nil
}
func (mgr *SectorMgr) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) error {
return nil
}
@ -577,6 +581,10 @@ func (mgr *SectorMgr) ReturnGenerateSectorKeyFromData(ctx context.Context, callI
panic("not supported")
}
func (mgr *SectorMgr) ReturnFinalizeReplicaUpdate(ctx context.Context, callID storiface.CallID, err *storiface.CallError) error {
panic("not supported")
}
func (m mockVerifProver) VerifySeal(svi proof.SealVerifyInfo) (bool, error) {
plen, err := svi.SealProof.ProofSize()
if err != nil {

View File

@ -1,3 +1,4 @@
//stm: #unit
package sectorstorage
import (
@ -118,6 +119,10 @@ func (s *schedTestWorker) GenerateSectorKeyFromData(ctx context.Context, sector
panic("implement me")
}
func (s *schedTestWorker) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) {
panic("implement me")
}
func (s *schedTestWorker) MoveStorage(ctx context.Context, sector storage.SectorRef, types storiface.SectorFileType) (storiface.CallID, error) {
panic("implement me")
}
@ -206,6 +211,7 @@ func TestSchedStartStop(t *testing.T) {
}
func TestSched(t *testing.T) {
//stm: @WORKER_JOBS_001
storiface.ParallelNum = 1
storiface.ParallelDenom = 1

View File

@ -6,7 +6,7 @@ const (
TTAddPiece TaskType = "seal/v0/addpiece"
TTPreCommit1 TaskType = "seal/v0/precommit/1"
TTPreCommit2 TaskType = "seal/v0/precommit/2"
TTCommit1 TaskType = "seal/v0/commit/1" // NOTE: We use this to transfer the sector into miner-local storage for now; Don't use on workers!
TTCommit1 TaskType = "seal/v0/commit/1"
TTCommit2 TaskType = "seal/v0/commit/2"
TTFinalize TaskType = "seal/v0/finalize"
@ -18,6 +18,7 @@ const (
TTProveReplicaUpdate1 TaskType = "seal/v0/provereplicaupdate/1"
TTProveReplicaUpdate2 TaskType = "seal/v0/provereplicaupdate/2"
TTRegenSectorKey TaskType = "seal/v0/regensectorkey"
TTFinalizeReplicaUpdate TaskType = "seal/v0/finalize/replicaupdate"
)
var order = map[TaskType]int{
@ -52,6 +53,7 @@ var shortNames = map[TaskType]string{
TTProveReplicaUpdate1: "PR1",
TTProveReplicaUpdate2: "PR2",
TTRegenSectorKey: "GSK",
TTFinalizeReplicaUpdate: "FRU",
}
func (a TaskType) MuchLess(b TaskType) (bool, bool) {

View File

@ -1,3 +1,4 @@
//stm: #unit
package stores_test
import (
@ -154,6 +155,7 @@ func TestMoveShared(t *testing.T) {
}
func TestReader(t *testing.T) {
//stm: @STORAGE_INFO_001
logging.SetAllLoggers(logging.LevelDebug)
bz := []byte("Hello World")

View File

@ -331,10 +331,146 @@ var ResourceTable = map[sealtasks.TaskType]map[abi.RegisteredSealProof]Resources
BaseMinMemory: 0,
},
},
// TODO: this should ideally be the actual replica update proof types
// TODO: actually measure this (and all the other replica update work)
sealtasks.TTReplicaUpdate: { // copied from addpiece
abi.RegisteredSealProof_StackedDrg64GiBV1: Resources{
MaxMemory: 8 << 30,
MinMemory: 8 << 30,
MaxParallelism: 1,
BaseMinMemory: 1 << 30,
},
abi.RegisteredSealProof_StackedDrg32GiBV1: Resources{
MaxMemory: 4 << 30,
MinMemory: 4 << 30,
MaxParallelism: 1,
BaseMinMemory: 1 << 30,
},
abi.RegisteredSealProof_StackedDrg512MiBV1: Resources{
MaxMemory: 1 << 30,
MinMemory: 1 << 30,
MaxParallelism: 1,
BaseMinMemory: 1 << 30,
},
abi.RegisteredSealProof_StackedDrg2KiBV1: Resources{
MaxMemory: 2 << 10,
MinMemory: 2 << 10,
MaxParallelism: 1,
BaseMinMemory: 2 << 10,
},
abi.RegisteredSealProof_StackedDrg8MiBV1: Resources{
MaxMemory: 8 << 20,
MinMemory: 8 << 20,
MaxParallelism: 1,
BaseMinMemory: 8 << 20,
},
},
sealtasks.TTProveReplicaUpdate1: { // copied from commit1
abi.RegisteredSealProof_StackedDrg64GiBV1: Resources{
MaxMemory: 1 << 30,
MinMemory: 1 << 30,
MaxParallelism: 0,
BaseMinMemory: 1 << 30,
},
abi.RegisteredSealProof_StackedDrg32GiBV1: Resources{
MaxMemory: 1 << 30,
MinMemory: 1 << 30,
MaxParallelism: 0,
BaseMinMemory: 1 << 30,
},
abi.RegisteredSealProof_StackedDrg512MiBV1: Resources{
MaxMemory: 1 << 30,
MinMemory: 1 << 30,
MaxParallelism: 0,
BaseMinMemory: 1 << 30,
},
abi.RegisteredSealProof_StackedDrg2KiBV1: Resources{
MaxMemory: 2 << 10,
MinMemory: 2 << 10,
MaxParallelism: 0,
BaseMinMemory: 2 << 10,
},
abi.RegisteredSealProof_StackedDrg8MiBV1: Resources{
MaxMemory: 8 << 20,
MinMemory: 8 << 20,
MaxParallelism: 0,
BaseMinMemory: 8 << 20,
},
},
sealtasks.TTProveReplicaUpdate2: { // copied from commit2
abi.RegisteredSealProof_StackedDrg64GiBV1: Resources{
MaxMemory: 190 << 30, // TODO: Confirm
MinMemory: 60 << 30,
MaxParallelism: -1,
MaxParallelismGPU: 6,
GPUUtilization: 1.0,
BaseMinMemory: 64 << 30, // params
},
abi.RegisteredSealProof_StackedDrg32GiBV1: Resources{
MaxMemory: 150 << 30, // TODO: ~30G of this should really be BaseMaxMemory
MinMemory: 30 << 30,
MaxParallelism: -1,
MaxParallelismGPU: 6,
GPUUtilization: 1.0,
BaseMinMemory: 32 << 30, // params
},
abi.RegisteredSealProof_StackedDrg512MiBV1: Resources{
MaxMemory: 3 << 29, // 1.5G
MinMemory: 1 << 30,
MaxParallelism: 1, // This is fine
GPUUtilization: 1.0,
BaseMinMemory: 10 << 30,
},
abi.RegisteredSealProof_StackedDrg2KiBV1: Resources{
MaxMemory: 2 << 10,
MinMemory: 2 << 10,
MaxParallelism: 1,
GPUUtilization: 1.0,
BaseMinMemory: 2 << 10,
},
abi.RegisteredSealProof_StackedDrg8MiBV1: Resources{
MaxMemory: 8 << 20,
MinMemory: 8 << 20,
MaxParallelism: 1,
GPUUtilization: 1.0,
BaseMinMemory: 8 << 20,
},
},
}
func init() {
ResourceTable[sealtasks.TTUnseal] = ResourceTable[sealtasks.TTPreCommit1] // TODO: measure accurately
ResourceTable[sealtasks.TTRegenSectorKey] = ResourceTable[sealtasks.TTReplicaUpdate]
// V1_1 is the same as V1
for _, m := range ResourceTable {
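For reference (not part of the change): the shift expressions added to ResourceTable above are the same byte counts that appear in the seal/v0/replicaupdate, seal/v0/regensectorkey and seal/v0/provereplicaupdate JSON tables earlier in this diff. A self-contained arithmetic check:

```go
package main

import "fmt"

func main() {
	const (
		KiB = int64(1) << 10
		MiB = int64(1) << 20
		GiB = int64(1) << 30
	)
	fmt.Println(2 * KiB)     // 2048          (2 KiB proof entries)
	fmt.Println(8 * MiB)     // 8388608       (8 MiB proof entries)
	fmt.Println(1 * GiB)     // 1073741824    (1 GiB)
	fmt.Println(3 * GiB / 2) // 1610612736    (1.5 GiB MaxMemory, 512 MiB proof, PR2)
	fmt.Println(10 * GiB)    // 10737418240   (BaseMinMemory, 512 MiB proof, PR2)
	fmt.Println(4 * GiB)     // 4294967296    (32 GiB proof, replicaupdate/regensectorkey)
	fmt.Println(8 * GiB)     // 8589934592    (64 GiB proof, replicaupdate/regensectorkey)
	fmt.Println(30 * GiB)    // 32212254720   (MinMemory, 32 GiB proof, PR2)
	fmt.Println(150 * GiB)   // 161061273600  (MaxMemory, 32 GiB proof, PR2)
	fmt.Println(60 * GiB)    // 64424509440   (MinMemory, 64 GiB proof, PR2)
	fmt.Println(190 * GiB)   // 204010946560  (MaxMemory, 64 GiB proof, PR2)
}
```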

View File

@ -120,6 +120,7 @@ type WorkerCalls interface {
SealCommit1(ctx context.Context, sector storage.SectorRef, ticket abi.SealRandomness, seed abi.InteractiveSealRandomness, pieces []abi.PieceInfo, cids storage.SectorCids) (CallID, error)
SealCommit2(ctx context.Context, sector storage.SectorRef, c1o storage.Commit1Out) (CallID, error)
FinalizeSector(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (CallID, error)
FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (CallID, error)
ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) (CallID, error)
ReplicaUpdate(ctx context.Context, sector storage.SectorRef, pieces []abi.PieceInfo) (CallID, error)
ProveReplicaUpdate1(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid) (CallID, error)
@ -182,6 +183,7 @@ type WorkerReturn interface {
ReturnProveReplicaUpdate1(ctx context.Context, callID CallID, proofs storage.ReplicaVanillaProofs, err *CallError) error
ReturnProveReplicaUpdate2(ctx context.Context, callID CallID, proof storage.ReplicaUpdateProof, err *CallError) error
ReturnGenerateSectorKeyFromData(ctx context.Context, callID CallID, err *CallError) error
ReturnFinalizeReplicaUpdate(ctx context.Context, callID CallID, err *CallError) error
ReturnMoveStorage(ctx context.Context, callID CallID, err *CallError) error
ReturnUnsealPiece(ctx context.Context, callID CallID, err *CallError) error
ReturnReadPiece(ctx context.Context, callID CallID, ok bool, err *CallError) error

View File

@ -87,6 +87,10 @@ func (t *testExec) GenerateSectorKeyFromData(ctx context.Context, sector storage
panic("implement me")
}
func (t *testExec) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) error {
panic("implement me")
}
func (t *testExec) NewSector(ctx context.Context, sector storage.SectorRef) error {
panic("implement me")
}

View File

@ -168,6 +168,7 @@ const (
SealCommit1 ReturnType = "SealCommit1"
SealCommit2 ReturnType = "SealCommit2"
FinalizeSector ReturnType = "FinalizeSector"
FinalizeReplicaUpdate ReturnType = "FinalizeReplicaUpdate"
ReplicaUpdate ReturnType = "ReplicaUpdate"
ProveReplicaUpdate1 ReturnType = "ProveReplicaUpdate1"
ProveReplicaUpdate2 ReturnType = "ProveReplicaUpdate2"
@ -224,6 +225,7 @@ var returnFunc = map[ReturnType]func(context.Context, storiface.CallID, storifac
ProveReplicaUpdate1: rfunc(storiface.WorkerReturn.ReturnProveReplicaUpdate1),
ProveReplicaUpdate2: rfunc(storiface.WorkerReturn.ReturnProveReplicaUpdate2),
GenerateSectorKey: rfunc(storiface.WorkerReturn.ReturnGenerateSectorKeyFromData),
FinalizeReplicaUpdate: rfunc(storiface.WorkerReturn.ReturnFinalizeReplicaUpdate),
MoveStorage: rfunc(storiface.WorkerReturn.ReturnMoveStorage),
UnsealPiece: rfunc(storiface.WorkerReturn.ReturnUnsealPiece),
Fetch: rfunc(storiface.WorkerReturn.ReturnFetch),
@ -456,6 +458,27 @@ func (l *LocalWorker) FinalizeSector(ctx context.Context, sector storage.SectorR
})
}
func (l *LocalWorker) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) {
sb, err := l.executor()
if err != nil {
return storiface.UndefCall, err
}
return l.asyncCall(ctx, sector, FinalizeReplicaUpdate, func(ctx context.Context, ci storiface.CallID) (interface{}, error) {
if err := sb.FinalizeReplicaUpdate(ctx, sector, keepUnsealed); err != nil {
return nil, xerrors.Errorf("finalizing sector: %w", err)
}
if len(keepUnsealed) == 0 {
if err := l.storage.Remove(ctx, sector.ID, storiface.FTUnsealed, true, nil); err != nil {
return nil, xerrors.Errorf("removing unsealed data: %w", err)
}
}
return nil, err
})
}
func (l *LocalWorker) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef, safeToFree []storage.Range) (storiface.CallID, error) {
return storiface.UndefCall, xerrors.Errorf("implement me")
}

View File

@ -215,4 +215,8 @@ func (t *trackedWorker) ProveReplicaUpdate2(ctx context.Context, sector storage.
})
}
func (t *trackedWorker) FinalizeReplicaUpdate(ctx context.Context, sector storage.SectorRef, keepUnsealed []storage.Range) (storiface.CallID, error) {
return t.tracker.track(ctx, t.execute, t.wid, t.workerInfo, sector, sealtasks.TTFinalizeReplicaUpdate, func() (storiface.CallID, error) { return t.Worker.FinalizeReplicaUpdate(ctx, sector, keepUnsealed) })
}
var _ Worker = &trackedWorker{}

View File

@ -20,6 +20,7 @@ import (
// We should implement some wait-for-api logic
type ErrApi struct{ error }
type ErrNoDeals struct{ error }
type ErrInvalidDeals struct{ error }
type ErrInvalidPiece struct{ error }
type ErrExpiredDeals struct{ error }
@ -38,12 +39,14 @@ type ErrCommitWaitFailed struct{ error }
type ErrBadRU struct{ error }
type ErrBadPR struct{ error }
func checkPieces(ctx context.Context, maddr address.Address, si SectorInfo, api SealingAPI) error {
func checkPieces(ctx context.Context, maddr address.Address, si SectorInfo, api SealingAPI, mustHaveDeals bool) error {
tok, height, err := api.ChainHead(ctx)
if err != nil {
return &ErrApi{xerrors.Errorf("getting chain head: %w", err)}
}
dealCount := 0
for i, p := range si.Pieces {
// if no deal is associated with the piece, ensure that we added it as
// filler (i.e. ensure that it has a zero PieceCID)
@ -55,6 +58,8 @@ func checkPieces(ctx context.Context, maddr address.Address, si SectorInfo, api
continue
}
dealCount++
proposal, err := api.StateMarketStorageDealProposal(ctx, p.DealInfo.DealID, tok)
if err != nil {
return &ErrInvalidDeals{xerrors.Errorf("getting deal %d for piece %d: %w", p.DealInfo.DealID, i, err)}
@ -77,13 +82,17 @@ func checkPieces(ctx context.Context, maddr address.Address, si SectorInfo, api
}
}
if mustHaveDeals && dealCount <= 0 {
return &ErrNoDeals{xerrors.Errorf("sector %d must have deals, but does not", si.SectorNumber)}
}
return nil
}
// checkPrecommit checks that data commitment generated in the sealing process
// matches pieces, and that the seal ticket isn't expired
func checkPrecommit(ctx context.Context, maddr address.Address, si SectorInfo, tok TipSetToken, height abi.ChainEpoch, api SealingAPI) (err error) {
if err := checkPieces(ctx, maddr, si, api); err != nil {
if err := checkPieces(ctx, maddr, si, api, false); err != nil {
return err
}
@ -184,7 +193,7 @@ func (m *Sealing) checkCommit(ctx context.Context, si SectorInfo, proof []byte,
return &ErrInvalidProof{xerrors.New("invalid proof (compute error?)")}
}
if err := checkPieces(ctx, m.maddr, si, m.Api); err != nil {
if err := checkPieces(ctx, m.maddr, si, m.Api, false); err != nil {
return err
}
@ -194,7 +203,7 @@ func (m *Sealing) checkCommit(ctx context.Context, si SectorInfo, proof []byte,
// check that sector info is good after running a replica update
func checkReplicaUpdate(ctx context.Context, maddr address.Address, si SectorInfo, tok TipSetToken, api SealingAPI) error {
if err := checkPieces(ctx, maddr, si, api); err != nil {
if err := checkPieces(ctx, maddr, si, api, true); err != nil {
return err
}
if !si.CCUpdate {
@ -205,15 +214,20 @@ func checkReplicaUpdate(ctx context.Context, maddr address.Address, si SectorInf
if err != nil {
return &ErrApi{xerrors.Errorf("calling StateComputeDataCommitment: %w", err)}
}
if si.UpdateUnsealed == nil || !commD.Equals(*si.UpdateUnsealed) {
return &ErrBadRU{xerrors.Errorf("on chain CommD differs from sector: %s != %s", commD, si.CommD)}
if si.UpdateUnsealed == nil {
return &ErrBadRU{xerrors.New("nil UpdateUnsealed cid after replica update")}
}
if !commD.Equals(*si.UpdateUnsealed) {
return &ErrBadRU{xerrors.Errorf("calculated CommD differs from updated replica: %s != %s", commD, *si.UpdateUnsealed)}
}
if si.UpdateSealed == nil {
return &ErrBadRU{xerrors.Errorf("nil sealed cid")}
}
if si.ReplicaUpdateProof == nil {
return ErrBadPR{xerrors.Errorf("nil PR2 proof")}
return &ErrBadPR{xerrors.Errorf("nil PR2 proof")}
}
return nil

View File

@ -1,3 +1,4 @@
//stm: #unit
package sealing_test
import (
@ -28,6 +29,7 @@ import (
)
func TestCommitBatcher(t *testing.T) {
//stm: @CHAIN_STATE_MINER_PRE_COM_INFO_001, @CHAIN_STATE_MINER_INFO_001, @CHAIN_STATE_NETWORK_VERSION_001
t0123, err := address.NewFromString("t0123")
require.NoError(t, err)
@ -147,6 +149,7 @@ func TestCommitBatcher(t *testing.T) {
}
}
//stm: @CHAIN_STATE_MINER_INFO_001, @CHAIN_STATE_NETWORK_VERSION_001, @CHAIN_STATE_MINER_GET_COLLATERAL_001
expectSend := func(expect []abi.SectorNumber, aboveBalancer, failOnePCI bool) action {
return func(t *testing.T, s *mocks.MockCommitBatcherApi, pcb *sealing.CommitBatcher) promise {
s.EXPECT().StateMinerInfo(gomock.Any(), gomock.Any(), gomock.Any()).Return(miner.MinerInfo{Owner: t0123, Worker: t0123}, nil)

View File

@ -137,27 +137,32 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
SnapDealsWaitDeals: planOne(
on(SectorAddPiece{}, SnapDealsAddPiece),
on(SectorStartPacking{}, SnapDealsPacking),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
SnapDealsAddPiece: planOne(
on(SectorPieceAdded{}, SnapDealsWaitDeals),
apply(SectorStartPacking{}),
apply(SectorAddPiece{}),
on(SectorAddPieceFailed{}, SnapDealsAddPieceFailed),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
SnapDealsPacking: planOne(
on(SectorPacked{}, UpdateReplica),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
UpdateReplica: planOne(
on(SectorReplicaUpdate{}, ProveReplicaUpdate),
on(SectorUpdateReplicaFailed{}, ReplicaUpdateFailed),
on(SectorDealsExpired{}, SnapDealsDealsExpired),
on(SectorInvalidDealIDs{}, SnapDealsRecoverDealIDs),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
ProveReplicaUpdate: planOne(
on(SectorProveReplicaUpdate{}, SubmitReplicaUpdate),
on(SectorProveReplicaUpdateFailed{}, ReplicaUpdateFailed),
on(SectorDealsExpired{}, SnapDealsDealsExpired),
on(SectorInvalidDealIDs{}, SnapDealsRecoverDealIDs),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
SubmitReplicaUpdate: planOne(
on(SectorReplicaUpdateSubmitted{}, ReplicaUpdateWait),
@ -169,7 +174,14 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
on(SectorAbortUpgrade{}, AbortUpgrade),
),
FinalizeReplicaUpdate: planOne(
on(SectorFinalized{}, Proving),
on(SectorFinalized{}, UpdateActivating),
),
UpdateActivating: planOne(
on(SectorUpdateActive{}, ReleaseSectorKey),
),
ReleaseSectorKey: planOne(
on(SectorKeyReleased{}, Proving),
on(SectorReleaseKeyFailed{}, ReleaseSectorKeyFailed),
),
// Sealing errors
@ -231,6 +243,7 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
on(SectorRetryWaitDeals{}, SnapDealsWaitDeals),
apply(SectorStartPacking{}),
apply(SectorAddPiece{}),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
SnapDealsDealsExpired: planOne(
on(SectorAbortUpgrade{}, AbortUpgrade),
@ -249,6 +262,10 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
on(SectorRetryProveReplicaUpdate{}, ProveReplicaUpdate),
on(SectorInvalidDealIDs{}, SnapDealsRecoverDealIDs),
on(SectorDealsExpired{}, SnapDealsDealsExpired),
on(SectorAbortUpgrade{}, AbortUpgrade),
),
ReleaseSectorKeyFailed: planOne(
on(SectorUpdateActive{}, ReleaseSectorKey),
),
// Post-seal
@ -477,6 +494,10 @@ func (m *Sealing) plan(events []statemachine.Event, state *SectorInfo) (func(sta
return m.handleReplicaUpdateWait, processed, nil
case FinalizeReplicaUpdate:
return m.handleFinalizeReplicaUpdate, processed, nil
case UpdateActivating:
return m.handleUpdateActivating, processed, nil
case ReleaseSectorKey:
return m.handleReleaseSectorKey, processed, nil
// Handled failure modes
case AddPieceFailed:
@ -513,6 +534,8 @@ func (m *Sealing) plan(events []statemachine.Event, state *SectorInfo) (func(sta
return m.handleSnapDealsRecoverDealIDs, processed, nil
case ReplicaUpdateFailed:
return m.handleSubmitReplicaUpdateFailed, processed, nil
case ReleaseSectorKeyFailed:
return m.handleReleaseSectorKeyFailed, 0, err
case AbortUpgrade:
return m.handleAbortUpgrade, processed, nil

View File

@ -335,6 +335,14 @@ type SectorReplicaUpdateLanded struct{}
func (evt SectorReplicaUpdateLanded) apply(state *SectorInfo) {}
type SectorUpdateActive struct{}
func (evt SectorUpdateActive) apply(state *SectorInfo) {}
type SectorKeyReleased struct{}
func (evt SectorKeyReleased) apply(state *SectorInfo) {}
// Failed state recovery
type SectorRetrySealPreCommit1 struct{}
@ -445,6 +453,13 @@ type SectorSubmitReplicaUpdateFailed struct{}
func (evt SectorSubmitReplicaUpdateFailed) apply(state *SectorInfo) {}
type SectorReleaseKeyFailed struct{ error }
func (evt SectorReleaseKeyFailed) FormatError(xerrors.Printer) (next error) {
return evt.error
}
func (evt SectorReleaseKeyFailed) apply(state *SectorInfo) {}
// Faults
type SectorFaulty struct{}
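
The failure events above follow a recurring pattern in this file: an event struct embeds an error and surfaces it through xerrors' FormatError, while apply records whatever the state machine needs on SectorInfo. The following is a hedged, self-contained illustration of that pattern; exampleKeyReleaseFailed and exampleState are invented names for this sketch, and SectorInfo's real fields differ.

package main

import (
	"fmt"

	"golang.org/x/xerrors"
)

// exampleState is a stand-in for SectorInfo in this illustration.
type exampleState struct {
	LastErr string
}

// exampleKeyReleaseFailed mimics the shape of SectorReleaseKeyFailed:
// it embeds an error and exposes it via the xerrors Formatter hook.
type exampleKeyReleaseFailed struct{ error }

func (evt exampleKeyReleaseFailed) FormatError(p xerrors.Printer) (next error) {
	return evt.error
}

// apply records the failure on the (simplified) sector state.
func (evt exampleKeyReleaseFailed) apply(state *exampleState) {
	state.LastErr = evt.error.Error()
}

func main() {
	st := &exampleState{}
	evt := exampleKeyReleaseFailed{xerrors.New("release key message failed on chain")}
	evt.apply(st)
	fmt.Println("recorded failure:", st.LastErr)
}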

View File

@ -16,6 +16,7 @@ import (
"github.com/filecoin-project/specs-storage/storage"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
sectorstorage "github.com/filecoin-project/lotus/extern/sector-storage"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
"github.com/filecoin-project/lotus/extern/storage-sealing/sealiface"
@ -117,9 +118,25 @@ func (m *Sealing) maybeStartSealing(ctx statemachine.Context, sector SectorInfo,
return false, xerrors.Errorf("getting storage config: %w", err)
}
// todo check deal age, start sealing if any deal has less than X (configurable) to start deadline
sealTime := time.Unix(sector.CreationTime, 0).Add(cfg.WaitDealsDelay)
// check deal age, start sealing when the deal closest to starting is within slack time
_, current, err := m.Api.ChainHead(ctx.Context())
blockTime := time.Second * time.Duration(build.BlockDelaySecs)
if err != nil {
return false, xerrors.Errorf("API error getting head: %w", err)
}
for _, piece := range sector.Pieces {
if piece.DealInfo == nil {
continue
}
dealSafeSealEpoch := piece.DealInfo.DealProposal.StartEpoch - cfg.StartEpochSealingBuffer
dealSafeSealTime := time.Now().Add(time.Duration(dealSafeSealEpoch-current) * blockTime)
if dealSafeSealTime.Before(sealTime) {
sealTime = dealSafeSealTime
}
}
if now.After(sealTime) {
log.Infow("starting to seal deal sector", "sector", sector.SectorNumber, "trigger", "wait-timeout")
return true, ctx.Send(SectorStartPacking{})
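
The loop above chooses the earliest safe time to start sealing: the default is CreationTime plus WaitDealsDelay, and each deal piece can pull that forward to its StartEpoch minus StartEpochSealingBuffer, converted to wall-clock time via the chain's block delay. Below is a standalone worked sketch of that arithmetic; the epoch numbers, buffer and block delay are made-up illustration values rather than values read from the config and chain head as in the real code.

package main

import (
	"fmt"
	"time"
)

func main() {
	const blockDelaySecs = 30 // hypothetical block time in seconds
	blockTime := time.Second * time.Duration(blockDelaySecs)

	var (
		currentEpoch            int64 = 1_000_000 // hypothetical chain head epoch
		waitDealsDelay                = time.Hour
		startEpochSealingBuffer int64 = 480 // hypothetical buffer in epochs
		creationTime                  = time.Now().Add(-10 * time.Minute)
	)

	// Default: wait out WaitDealsDelay from sector creation.
	sealTime := creationTime.Add(waitDealsDelay)

	// One hypothetical deal whose StartEpoch is close to the current head.
	dealStartEpochs := []int64{1_000_300}

	for _, startEpoch := range dealStartEpochs {
		dealSafeSealEpoch := startEpoch - startEpochSealingBuffer
		dealSafeSealTime := time.Now().Add(time.Duration(dealSafeSealEpoch-currentEpoch) * blockTime)
		if dealSafeSealTime.Before(sealTime) {
			// This deal needs sealing to start earlier than the default delay.
			sealTime = dealSafeSealTime
		}
	}

	fmt.Println("sector should start sealing no later than:", sealTime)
}

With these illustration values the deal's safe epoch is already in the past, so the computed sealTime lands before now and the sector would start packing immediately, which is exactly the "trigger: wait-timeout" path above.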
@ -475,6 +492,10 @@ func (m *Sealing) tryCreateDealSector(ctx context.Context, sp abi.RegisteredSeal
return xerrors.Errorf("getting storage config: %w", err)
}
if !cfg.MakeNewSectorForDeals {
return nil
}
if cfg.MaxSealingSectorsForDeals > 0 && m.stats.curSealing() >= cfg.MaxSealingSectorsForDeals {
return nil
}
@ -524,6 +545,13 @@ func (m *Sealing) StartPacking(sid abi.SectorNumber) error {
return m.sectors.Send(uint64(sid), SectorStartPacking{})
}
func (m *Sealing) AbortUpgrade(sid abi.SectorNumber) error {
m.startupWait.Wait()
log.Infow("aborting upgrade of sector", "sector", sid, "trigger", "user")
return m.sectors.Send(uint64(sid), SectorAbortUpgrade{xerrors.New("triggered by user")})
}
func proposalCID(deal api.PieceDealInfo) cid.Cid {
pc, err := deal.DealProposal.Cid()
if err != nil {

View File

@ -1,3 +1,4 @@
//stm: #unit
package sealing_test
import (
@ -38,6 +39,7 @@ var fc = config.MinerFeeConfig{
}
func TestPrecommitBatcher(t *testing.T) {
//stm: @CHAIN_STATE_MINER_CALCULATE_DEADLINE_001
t0123, err := address.NewFromString("t0123")
require.NoError(t, err)
@ -151,6 +153,7 @@ func TestPrecommitBatcher(t *testing.T) {
}
}
//stm: @CHAIN_STATE_MINER_INFO_001, @CHAIN_STATE_NETWORK_VERSION_001
expectSend := func(expect []abi.SectorNumber) action {
return func(t *testing.T, s *mocks.MockPreCommitBatcherApi, pcb *sealing.PreCommitBatcher) promise {
s.EXPECT().ChainHead(gomock.Any()).Return(nil, abi.ChainEpoch(1), nil)
@ -171,6 +174,7 @@ func TestPrecommitBatcher(t *testing.T) {
}
}
//stm: @CHAIN_STATE_MINER_INFO_001, @CHAIN_STATE_NETWORK_VERSION_001
expectSendsSingle := func(expect []abi.SectorNumber) action {
return func(t *testing.T, s *mocks.MockPreCommitBatcherApi, pcb *sealing.PreCommitBatcher) promise {
s.EXPECT().ChainHead(gomock.Any()).Return(nil, abi.ChainEpoch(1), nil)

View File

@ -18,6 +18,8 @@ type Config struct {
// includes failed, 0 = no limit
MaxSealingSectorsForDeals uint64
MakeNewSectorForDeals bool
WaitDealsDelay time.Duration
CommittedCapacitySectorLifetime time.Duration
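
For reference, these are the sealing knobs the input.go changes consult. The snippet below is only an illustrative sketch: localSealingConfig mirrors just the fields visible in this hunk, and the values are made up for the example, not recommended or default settings.

package main

import (
	"fmt"
	"time"
)

// localSealingConfig mirrors only the fields of sealiface.Config shown above.
type localSealingConfig struct {
	MaxSealingSectorsForDeals       uint64
	MakeNewSectorForDeals           bool
	WaitDealsDelay                  time.Duration
	CommittedCapacitySectorLifetime time.Duration
}

func main() {
	cfg := localSealingConfig{
		MaxSealingSectorsForDeals:       8,    // 0 means no limit
		MakeNewSectorForDeals:           true, // auto-create sectors for incoming deals
		WaitDealsDelay:                  6 * time.Hour,
		CommittedCapacitySectorLifetime: 540 * 24 * time.Hour,
	}
	fmt.Printf("%+v\n", cfg)
}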

View File

@ -52,11 +52,14 @@ var ExistSectorStateList = map[SectorState]struct{}{
ProveReplicaUpdate: {},
SubmitReplicaUpdate: {},
ReplicaUpdateWait: {},
UpdateActivating: {},
ReleaseSectorKey: {},
FinalizeReplicaUpdate: {},
SnapDealsAddPieceFailed: {},
SnapDealsDealsExpired: {},
SnapDealsRecoverDealIDs: {},
ReplicaUpdateFailed: {},
ReleaseSectorKeyFailed: {},
AbortUpgrade: {},
}
@ -104,6 +107,8 @@ const (
SubmitReplicaUpdate SectorState = "SubmitReplicaUpdate"
ReplicaUpdateWait SectorState = "ReplicaUpdateWait"
FinalizeReplicaUpdate SectorState = "FinalizeReplicaUpdate"
UpdateActivating SectorState = "UpdateActivating"
ReleaseSectorKey SectorState = "ReleaseSectorKey"
// error modes
FailedUnrecoverable SectorState = "FailedUnrecoverable"
@ -124,6 +129,7 @@ const (
SnapDealsRecoverDealIDs SectorState = "SnapDealsRecoverDealIDs"
AbortUpgrade SectorState = "AbortUpgrade"
ReplicaUpdateFailed SectorState = "ReplicaUpdateFailed"
ReleaseSectorKeyFailed SectorState = "ReleaseSectorKeyFailed"
Faulty SectorState = "Faulty" // sector is corrupted or gone for some reason
FaultReported SectorState = "FaultReported" // sector has been declared as a fault on chain
@ -153,7 +159,7 @@ func toStatState(st SectorState, finEarly bool) statSectorState {
return sstProving
}
return sstSealing
case Proving, Removed, Removing, Terminating, TerminateWait, TerminateFinality, TerminateFailed:
case Proving, UpdateActivating, ReleaseSectorKey, Removed, Removing, Terminating, TerminateWait, TerminateFinality, TerminateFailed:
return sstProving
}
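
With the planner, event and state changes in this diff taken together, the snap-deal upgrade happy path gains two post-finalize states before the sector returns to Proving, and both are bucketed as proving time by toStatState above. The snippet below is an informal summary of that sequence, not generated from the code; the wait-deals loop and the failure and abort branches are omitted.

package main

import "fmt"

func main() {
	// Informal happy-path summary of a snap-deal upgrade after this change.
	happyPath := []string{
		"SnapDealsWaitDeals",
		"SnapDealsAddPiece",
		"SnapDealsPacking",
		"UpdateReplica",
		"ProveReplicaUpdate",
		"SubmitReplicaUpdate",
		"ReplicaUpdateWait",
		"FinalizeReplicaUpdate",
		"UpdateActivating", // new: wait for the replica update to become active on chain
		"ReleaseSectorKey", // new: release the old sector key data once it is safe
		"Proving",
	}
	for i, st := range happyPath {
		fmt.Printf("%2d. %s\n", i+1, st)
	}
}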

Some files were not shown because too many files have changed in this diff