Merge remote-tracking branch 'upstream/master' into bloxico/system-test-matrix

Darko Brdareski 2022-01-27 10:57:56 +01:00
commit e51ce5c508
164 changed files with 9391 additions and 1060 deletions

View File

@ -950,7 +950,7 @@ workflows:
codecov-upload: false
suite: conformance-bleeding-edge
target: "./conformance"
vectors-branch: master
vectors-branch: specs-actors-v7
- trigger-testplans:
filters:
branches:

View File

@ -785,7 +785,7 @@ workflows:
codecov-upload: false
suite: conformance-bleeding-edge
target: "./conformance"
vectors-branch: master
vectors-branch: specs-actors-v7
- trigger-testplans:
filters:
branches:

View File

@ -12,10 +12,10 @@
Before you mark the PR ready for review, please make sure that:
- [ ] All commits have a clear commit message.
- [ ] The PR title is in the form of `<PR type>: <#issue number> <area>: <change being made>`
- example: ` fix: #1234 mempool: Introduce a cache for valid signatures`
- `PR type`: _fix_, _feat_, _INTERFACE BREAKING CHANGE_, _CONSENSUS BREAKING_, _build_, _chore_, _ci_, _docs_, _misc_, _perf_, _refactor_, _revert_, _style_, _test_
- `area`: _api_, _chain_, _state_, _vm_, _data transfer_, _market_, _mempool_, _message_, _block production_, _multisig_, _networking_, _paychan_, _proving_, _sealing_, _wallet_
- [ ] The PR title is in the form of `<PR type>: <area>: <change being made>`
- example: ` fix: mempool: Introduce a cache for valid signatures`
- `PR type`: _fix_, _feat_, _INTERFACE BREAKING CHANGE_, _CONSENSUS BREAKING_, _build_, _chore_, _ci_, _docs_, _perf_, _refactor_, _revert_, _style_, _test_
- `area`: _api_, _chain_, _state_, _vm_, _data transfer_, _market_, _mempool_, _message_, _block production_, _multisig_, _networking_, _paychan_, _proving_, _sealing_, _wallet_, _deps_
- [ ] This PR has tests for new functionality or changes in behaviour
- [ ] If new user-facing features are introduced, clear usage guidelines and/or documentation updates should be included in https://lotus.filecoin.io or [Discussion Tutorials](https://github.com/filecoin-project/lotus/discussions/categories/tutorials)
- [ ] CI is green

View File

@ -1,5 +1,107 @@
# Lotus changelog
# v1.13.2 / 2022-01-09

Lotus v1.13.2 is a *highly recommended* feature release with remarkable retrieval improvements and new features such as
worker management and scheduler enhancements.

## Highlights
- 🚀🚀🚀 Improve retrieval deal experience
  - Testing results with MinerX.3 show that the retrieval deal success rate has increased dramatically, with faster transfer speed; you can join or follow along with further performance testing [here](https://github.com/filecoin-project/lotus/discussions/7874). We recommend that application developers integrate with the new retrieval APIs to provide a better client experience.
  - 🌟🌟🌟 Reduce retrieval Time-To-First-Byte by over 100x ([#7693](https://github.com/filecoin-project/lotus/pull/7693))
    - This change makes most free, small retrievals sub-second
  - 🌟🌟🌟 Partial retrieval UX improvements ([#7610](https://github.com/filecoin-project/lotus/pull/7610))
    - New retrieval commands for clients:
      - `lotus client ls`: retrieve and list desired object links
      - `lotus client cat`: retrieve and print the data from the network
    - 🌟🌟 The monolithic `ClientRetrieve` method was broken into three methods (see the usage sketch after this Highlights section):
      - `ClientRetrieve`, which retrieves data into the local repo (or into an IPFS node if IPFS integration is enabled)
      - `ClientRetrieveWait`, which waits for the retrieval to complete
      - `ClientExport`, which exports data from the local node
      - Note: this change applies only to the v1 API; the v0 API remains unchanged.
    - 🌟 Support for full ipld selectors was added (for example, making it possible to retrieve only the list of directories in a deal, without fetching any file data)
      - To learn more, see [here](https://github.com/filecoin-project/lotus/blob/0523c946f984b22b3f5de8cc3003cc791389527e/api/types.go#L230-L264)
- 🚀🚀 Sealing scheduler enhancements ([#7703](https://github.com/filecoin-project/lotus/pull/7703), [#7269](https://github.com/filecoin-project/lotus/pull/7269), [#7714](https://github.com/filecoin-project/lotus/pull/7714))
  - Workers are now aware of cgroup memory limits
  - Multiple tasks which use a GPU can be scheduled on a single worker
  - Workers can override the default resource table through env vars
    - Default value list: https://gist.github.com/magik6k/c0e1c7cd73c1241a9acabc30bf469a43
- 🚀🚀 Sector storage groups ([#7453](https://github.com/filecoin-project/lotus/pull/7453))
  - Storage groups allow for better control of data flow between workers; for example, they make it possible to require that data sealed through PC1 on a given worker has its PC2 step executed on the same worker
  - To set this up, follow the instructions under the `Sector Storage Group` section [here](https://lotus.filecoin.io/docs/storage-providers/seal-workers/#lotus-worker-co-location)
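As a rough illustration of the new retrieval flow (not taken from this release; the parameter types `RetrievalOrder`, `ExportRef`, and `FileRef` and their fields are assumptions that should be checked against `api/api_full.go`), the three methods compose roughly like this for a caller holding a v1 `FullNode` client:

```go
package example

import (
	"context"

	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/api"
)

// RetrieveToFile sketches the new three-step flow: start the retrieval,
// wait for it to complete, then export the data out of the local node.
// Parameter shapes here are assumptions based on the changelog, not verified code.
func RetrieveToFile(ctx context.Context, node api.FullNode, order api.RetrievalOrder, root cid.Cid, outPath string) error {
	res, err := node.ClientRetrieve(ctx, order) // data lands in the local repo (or IPFS node)
	if err != nil {
		return err
	}
	if err := node.ClientRetrieveWait(ctx, res.DealID); err != nil { // block until the transfer finishes
		return err
	}
	// export the retrieved DAG to a plain file on disk
	return node.ClientExport(ctx, api.ExportRef{Root: root, DealID: res.DealID}, api.FileRef{Path: outPath})
}
```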
## New Features
- Add RLE dump code ([#7691](https://github.com/filecoin-project/lotus/pull/7691))
- Shed: Add a util to list miner faults ([#7605](https://github.com/filecoin-project/lotus/pull/7605))
- lotus-shed msg: Decode submessages/msig proposals ([#7639](https://github.com/filecoin-project/lotus/pull/7639))
- CLI: Add a lotus multisig cancel command ([#7645](https://github.com/filecoin-project/lotus/pull/7645))
- shed: simple wallet balancer util ([#7414](https://github.com/filecoin-project/lotus/pull/7414))
- balancing token balance between multiple accounts
## Improvements
- Add verbose mode to `lotus-miner pieces list-cids` ([#7699](https://github.com/filecoin-project/lotus/pull/7699))
- retrieval: Only output matching nodes, MatchPath dagspec ([#7706](https://github.com/filecoin-project/lotus/pull/7706))
- Cleanup partial retrieval codepaths (zero functional changes) ([#7688](https://github.com/filecoin-project/lotus/pull/7688))
- storage: Use 1M buffers for Tar transfers ([#7681](https://github.com/filecoin-project/lotus/pull/7681))
- Chore/dm level tests plus merkle proof cars ([#7673](https://github.com/filecoin-project/lotus/pull/7673))
- Shed: Add a util to create miners more easily ([#7595](https://github.com/filecoin-project/lotus/pull/7595))
- add timeout flag to wait-api command ([#7592](https://github.com/filecoin-project/lotus/pull/7592))
- add log for restart windows post scheduler ([#7613](https://github.com/filecoin-project/lotus/pull/7613))
- remove jaeger envvars ([#7631](https://github.com/filecoin-project/lotus/pull/7631))
- remove api and jaeger env from docker file ([#7624](https://github.com/filecoin-project/lotus/pull/7624))
- Wdpost worker: Reduce challenge confidence to 1 epoch ([#7572](https://github.com/filecoin-project/lotus/pull/7572))
- add additional methods to lotus gateway ([#7644](https://github.com/filecoin-project/lotus/pull/7644))
- Add caches to lotus-stats and splitcode ([#7329](https://github.com/filecoin-project/lotus/pull/7329))
- remote store: Remove debug printf ([#7664](https://github.com/filecoin-project/lotus/pull/7664))
- docsgen-cli: Handle commands with no description correctly ([#7659](https://github.com/filecoin-project/lotus/pull/7659))
## Bug Fixes
- fix docker logic error ([#7709](https://github.com/filecoin-project/lotus/pull/7709))
- add missing NodeType tag ([#7559](https://github.com/filecoin-project/lotus/pull/7559))
- checkCommit should return SectorCommitFailed ([#7555](https://github.com/filecoin-project/lotus/pull/7555))
- ffiwrapper: Validate PC2 by calling C1 with random seeds ([#7710](https://github.com/filecoin-project/lotus/pull/7710))
## Dependency Updates
- Update go-graphsync v0.10.6 ([#7708](https://github.com/filecoin-project/lotus/pull/7708))
- update go-libp2p-pubsub to v0.5.6 ([#7581](https://github.com/filecoin-project/lotus/pull/7581))
- Update go-state-types ([#7591](https://github.com/filecoin-project/lotus/pull/7591))
- disable mplex stream muxer ([#7689](https://github.com/filecoin-project/lotus/pull/7689))
- Bump ws from 5.2.2 to 5.2.3 in /lotuspond/front ([#7660](https://github.com/filecoin-project/lotus/pull/7660))
- Bump color-string from 1.5.3 to 1.6.0 in /lotuspond/front ([#7658](https://github.com/filecoin-project/lotus/pull/7658))
- Bump postcss from 7.0.17 to 7.0.39 in /lotuspond/front ([#7657](https://github.com/filecoin-project/lotus/pull/7657))
- Bump path-parse from 1.0.6 to 1.0.7 in /lotuspond/front ([#7656](https://github.com/filecoin-project/lotus/pull/7656))
- Bump tmpl from 1.0.4 to 1.0.5 in /lotuspond/front ([#7655](https://github.com/filecoin-project/lotus/pull/7655))
- Bump url-parse from 1.4.7 to 1.5.3 in /lotuspond/front ([#7654](https://github.com/filecoin-project/lotus/pull/7654))
- github.com/filecoin-project/go-state-types (v0.1.1-0.20210915140513-d354ccf10379 -> v0.1.1):
## Others
- Update archive script ([#7690](https://github.com/filecoin-project/lotus/pull/7690))
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @magik6k | 89 | +5200/-1818 | 232 |
| Travis Person | 5 | +1473/-953 | 38 |
| @arajasek | 6 | +550/-38 | 19 |
| @clinta | 4 | +393/-123 | 26 |
| @ribasushi | 3 | +334/-68 | 7 |
| @jennijuju | 13 | +197/-120 | 67 |
| @Kubuxu | 10 | +153/-30 | 10 |
| @coryschwartz | 6 | +18/-26 | 6 |
| Marten Seemann | 2 | +6/-34 | 5 |
| @vyzo | 1 | +3/-3 | 2 |
| @hannahhoward | 1 | +3/-3 | 2 |
| @zenground0 | 2 | +2/-2 | 2 |
| @yaohcn | 2 | +2/-2 | 2 |
| @jennijuju | 1 | +1/-1 | 1 |
| @hunjixin | 1 | +1/-0 | 1 |
# v1.13.1 / 2021-11-26
This is an optional Lotus v1.13.1 release.

View File

@ -900,6 +900,7 @@ type QueryOffer struct {
Size uint64
MinPrice types.BigInt
UnsealPrice types.BigInt
PricePerByte abi.TokenAmount
PaymentInterval uint64
PaymentIntervalIncrease uint64
Miner address.Address
@ -1083,7 +1084,7 @@ type CirculatingSupply struct {
type MiningBaseInfo struct {
MinerPower types.BigInt
NetworkPower types.BigInt
Sectors []builtin.SectorInfo
Sectors []builtin.ExtendedSectorInfo
WorkerKey address.Address
SectorSize abi.SectorSize
PrevBeaconEntry types.BeaconEntry

View File

@ -51,6 +51,11 @@ type Net interface {
NetBlockRemove(ctx context.Context, acl NetBlockList) error //perm:admin
NetBlockList(ctx context.Context) (NetBlockList, error) //perm:read
// ResourceManager API
NetStat(ctx context.Context, scope string) (NetStat, error) //perm:read
NetLimit(ctx context.Context, scope string) (NetLimit, error) //perm:read
NetSetLimit(ctx context.Context, scope string, limit NetLimit) error //perm:admin
// ID returns peerID of libp2p node backing this API
ID(context.Context) (peer.ID, error) //perm:read
}
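For context (not part of this interface file), a minimal sketch of how these resource-manager methods compose for a caller holding a v1 `FullNode` client; the scope name `"system"` and the 2 GiB value are illustrative assumptions:

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/api"
)

// bumpSystemMemoryLimit reads the current stats and limit for the "system"
// scope, then raises the memory limit using the methods added above.
func bumpSystemMemoryLimit(ctx context.Context, node api.FullNode) error {
	stat, err := node.NetStat(ctx, "system")
	if err != nil {
		return err
	}
	if stat.System != nil {
		fmt.Printf("system scope: %d bytes reserved, %d inbound conns\n",
			stat.System.Memory, stat.System.NumConnsInbound)
	}

	limit, err := node.NetLimit(ctx, "system")
	if err != nil {
		return err
	}
	limit.Memory = 2 << 30 // illustrative value: 2 GiB
	return node.NetSetLimit(ctx, "system", limit)
}
```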

View File

@ -14,6 +14,7 @@ import (
"github.com/filecoin-project/go-address"
datatransfer "github.com/filecoin-project/go-data-transfer"
"github.com/filecoin-project/go-state-types/abi"
abinetwork "github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/specs-actors/v2/actors/builtin/market"
"github.com/filecoin-project/specs-storage/storage"
@ -99,8 +100,8 @@ type StorageMiner interface {
// Returns null if message wasn't sent
SectorTerminateFlush(ctx context.Context) (*cid.Cid, error) //perm:admin
// SectorTerminatePending returns a list of pending sector terminations to be sent in the next batch message
SectorTerminatePending(ctx context.Context) ([]abi.SectorID, error) //perm:admin
SectorMarkForUpgrade(ctx context.Context, id abi.SectorNumber) error //perm:admin
SectorTerminatePending(ctx context.Context) ([]abi.SectorID, error) //perm:admin
SectorMarkForUpgrade(ctx context.Context, id abi.SectorNumber, snap bool) error //perm:admin
// SectorPreCommitFlush immediately sends a PreCommit message with sectors batched for PreCommit.
// Returns null if message wasn't sent
SectorPreCommitFlush(ctx context.Context) ([]sealiface.PreCommitBatchRes, error) //perm:admin
@ -111,6 +112,7 @@ type StorageMiner interface {
SectorCommitFlush(ctx context.Context) ([]sealiface.CommitBatchRes, error) //perm:admin
// SectorCommitPending returns a list of pending Commit sectors to be sent in the next aggregate message
SectorCommitPending(ctx context.Context) ([]abi.SectorID, error) //perm:admin
SectorMatchPendingPiecesToOpenSectors(ctx context.Context) error //perm:admin
// WorkerConnect tells the node to connect to workers RPC
WorkerConnect(context.Context, string) error //perm:admin retry:true
@ -165,6 +167,8 @@ type StorageMiner interface {
MarketGetRetrievalAsk(ctx context.Context) (*retrievalmarket.Ask, error) //perm:read
MarketListDataTransfers(ctx context.Context) ([]DataTransferChannel, error) //perm:write
MarketDataTransferUpdates(ctx context.Context) (<-chan DataTransferChannel, error) //perm:write
// MarketDataTransferDiagnostics generates debugging information about current data transfers over graphsync
MarketDataTransferDiagnostics(ctx context.Context, p peer.ID) (*TransferDiagnostics, error) //perm:write
// MarketRestartDataTransfer attempts to restart a data transfer with the given transfer ID and other peer
MarketRestartDataTransfer(ctx context.Context, transferID datatransfer.TransferID, otherPeer peer.ID, isInitiator bool) error //perm:write
// MarketCancelDataTransfer cancels a data transfer with the given transfer ID and other peer
@ -251,7 +255,7 @@ type StorageMiner interface {
CheckProvable(ctx context.Context, pp abi.RegisteredPoStProof, sectors []storage.SectorRef, expensive bool) (map[abi.SectorNumber]string, error) //perm:admin
ComputeProof(ctx context.Context, ssi []builtin.SectorInfo, rand abi.PoStRandomness) ([]builtin.PoStProof, error) //perm:read
ComputeProof(ctx context.Context, ssi []builtin.ExtendedSectorInfo, rand abi.PoStRandomness, poStEpoch abi.ChainEpoch, nv abinetwork.Version) ([]builtin.PoStProof, error) //perm:read
}
var _ storiface.WorkerReturn = *new(StorageMiner)

View File

@ -1,6 +1,7 @@
package docgen
import (
"encoding/json"
"fmt"
"go/ast"
"go/parser"
@ -15,6 +16,7 @@ import (
"github.com/filecoin-project/go-bitfield"
"github.com/google/uuid"
"github.com/ipfs/go-cid"
"github.com/ipfs/go-graphsync"
"github.com/libp2p/go-libp2p-core/metrics"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
@ -120,6 +122,7 @@ func init() {
addExample(api.FullAPIVersion1)
addExample(api.PCHInbound)
addExample(time.Minute)
addExample(graphsync.RequestID(4))
addExample(datatransfer.TransferID(3))
addExample(datatransfer.Ongoing)
addExample(storeIDExample)
@ -250,6 +253,7 @@ func init() {
addExample(map[abi.SectorNumber]string{
123: "can't acquire read lock",
})
addExample(json.RawMessage(`"json raw message"`))
addExample(map[api.SectorState]int{
api.SectorState(sealing.Proving): 120,
})
@ -296,6 +300,34 @@ func init() {
Error: "<error>",
})
addExample(storiface.ResourceTable)
addExample(network.ScopeStat{
Memory: 123,
NumStreamsInbound: 1,
NumStreamsOutbound: 2,
NumConnsInbound: 3,
NumConnsOutbound: 4,
NumFD: 5,
})
addExample(map[string]network.ScopeStat{
"abc": {
Memory: 123,
NumStreamsInbound: 1,
NumStreamsOutbound: 2,
NumConnsInbound: 3,
NumConnsOutbound: 4,
NumFD: 5,
}})
addExample(api.NetLimit{
Memory: 123,
StreamsInbound: 1,
StreamsOutbound: 2,
Streams: 3,
ConnsInbound: 3,
ConnsOutbound: 4,
Conns: 4,
FD: 5,
})
}
func GetAPIType(name, pkg string) (i interface{}, t reflect.Type, permStruct []reflect.Type) {
@ -346,7 +378,7 @@ func ExampleValue(method string, t, parent reflect.Type) interface{} {
switch t.Kind() {
case reflect.Slice:
out := reflect.New(t).Elem()
reflect.Append(out, reflect.ValueOf(ExampleValue(method, t.Elem(), t)))
out = reflect.Append(out, reflect.ValueOf(ExampleValue(method, t.Elem(), t)))
return out.Interface()
case reflect.Chan:
return ExampleValue(method, t.Elem(), nil)

View File

@ -1811,6 +1811,21 @@ func (mr *MockFullNodeMockRecorder) NetFindPeer(arg0, arg1 interface{}) *gomock.
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetFindPeer", reflect.TypeOf((*MockFullNode)(nil).NetFindPeer), arg0, arg1)
}
// NetLimit mocks base method.
func (m *MockFullNode) NetLimit(arg0 context.Context, arg1 string) (api.NetLimit, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetLimit", arg0, arg1)
ret0, _ := ret[0].(api.NetLimit)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetLimit indicates an expected call of NetLimit.
func (mr *MockFullNodeMockRecorder) NetLimit(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetLimit", reflect.TypeOf((*MockFullNode)(nil).NetLimit), arg0, arg1)
}
// NetPeerInfo mocks base method.
func (m *MockFullNode) NetPeerInfo(arg0 context.Context, arg1 peer.ID) (*api.ExtendedPeerInfo, error) {
m.ctrl.T.Helper()
@ -1856,6 +1871,35 @@ func (mr *MockFullNodeMockRecorder) NetPubsubScores(arg0 interface{}) *gomock.Ca
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetPubsubScores", reflect.TypeOf((*MockFullNode)(nil).NetPubsubScores), arg0)
}
// NetSetLimit mocks base method.
func (m *MockFullNode) NetSetLimit(arg0 context.Context, arg1 string, arg2 api.NetLimit) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetSetLimit", arg0, arg1, arg2)
ret0, _ := ret[0].(error)
return ret0
}
// NetSetLimit indicates an expected call of NetSetLimit.
func (mr *MockFullNodeMockRecorder) NetSetLimit(arg0, arg1, arg2 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetSetLimit", reflect.TypeOf((*MockFullNode)(nil).NetSetLimit), arg0, arg1, arg2)
}
// NetStat mocks base method.
func (m *MockFullNode) NetStat(arg0 context.Context, arg1 string) (api.NetStat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetStat", arg0, arg1)
ret0, _ := ret[0].(api.NetStat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetStat indicates an expected call of NetStat.
func (mr *MockFullNodeMockRecorder) NetStat(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetStat", reflect.TypeOf((*MockFullNode)(nil).NetStat), arg0, arg1)
}
// NodeStatus mocks base method.
func (m *MockFullNode) NodeStatus(arg0 context.Context, arg1 bool) (api.NodeStatus, error) {
m.ctrl.T.Helper()

View File

@ -17,6 +17,7 @@ import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/dline"
abinetwork "github.com/filecoin-project/go-state-types/network"
apitypes "github.com/filecoin-project/lotus/api/types"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
@ -586,11 +587,17 @@ type NetStruct struct {
NetFindPeer func(p0 context.Context, p1 peer.ID) (peer.AddrInfo, error) `perm:"read"`
NetLimit func(p0 context.Context, p1 string) (NetLimit, error) `perm:"read"`
NetPeerInfo func(p0 context.Context, p1 peer.ID) (*ExtendedPeerInfo, error) `perm:"read"`
NetPeers func(p0 context.Context) ([]peer.AddrInfo, error) `perm:"read"`
NetPubsubScores func(p0 context.Context) ([]PubsubScore, error) `perm:"read"`
NetSetLimit func(p0 context.Context, p1 string, p2 NetLimit) error `perm:"admin"`
NetStat func(p0 context.Context, p1 string) (NetStat, error) `perm:"read"`
}
}
@ -620,7 +627,7 @@ type StorageMinerStruct struct {
CheckProvable func(p0 context.Context, p1 abi.RegisteredPoStProof, p2 []storage.SectorRef, p3 bool) (map[abi.SectorNumber]string, error) `perm:"admin"`
ComputeProof func(p0 context.Context, p1 []builtin.SectorInfo, p2 abi.PoStRandomness) ([]builtin.PoStProof, error) `perm:"read"`
ComputeProof func(p0 context.Context, p1 []builtin.ExtendedSectorInfo, p2 abi.PoStRandomness, p3 abi.ChainEpoch, p4 abinetwork.Version) ([]builtin.PoStProof, error) `perm:"read"`
CreateBackup func(p0 context.Context, p1 string) error `perm:"admin"`
@ -668,6 +675,8 @@ type StorageMinerStruct struct {
MarketCancelDataTransfer func(p0 context.Context, p1 datatransfer.TransferID, p2 peer.ID, p3 bool) error `perm:"write"`
MarketDataTransferDiagnostics func(p0 context.Context, p1 peer.ID) (*TransferDiagnostics, error) `perm:"write"`
MarketDataTransferUpdates func(p0 context.Context) (<-chan DataTransferChannel, error) `perm:"write"`
MarketGetAsk func(p0 context.Context) (*storagemarket.SignedStorageAsk, error) `perm:"read"`
@ -756,7 +765,9 @@ type StorageMinerStruct struct {
SectorGetSealDelay func(p0 context.Context) (time.Duration, error) `perm:"read"`
SectorMarkForUpgrade func(p0 context.Context, p1 abi.SectorNumber) error `perm:"admin"`
SectorMarkForUpgrade func(p0 context.Context, p1 abi.SectorNumber, p2 bool) error `perm:"admin"`
SectorMatchPendingPiecesToOpenSectors func(p0 context.Context) error `perm:"admin"`
SectorPreCommitFlush func(p0 context.Context) ([]sealiface.PreCommitBatchRes, error) `perm:"admin"`
@ -3620,6 +3631,17 @@ func (s *NetStub) NetFindPeer(p0 context.Context, p1 peer.ID) (peer.AddrInfo, er
return *new(peer.AddrInfo), ErrNotSupported
}
func (s *NetStruct) NetLimit(p0 context.Context, p1 string) (NetLimit, error) {
if s.Internal.NetLimit == nil {
return *new(NetLimit), ErrNotSupported
}
return s.Internal.NetLimit(p0, p1)
}
func (s *NetStub) NetLimit(p0 context.Context, p1 string) (NetLimit, error) {
return *new(NetLimit), ErrNotSupported
}
func (s *NetStruct) NetPeerInfo(p0 context.Context, p1 peer.ID) (*ExtendedPeerInfo, error) {
if s.Internal.NetPeerInfo == nil {
return nil, ErrNotSupported
@ -3653,6 +3675,28 @@ func (s *NetStub) NetPubsubScores(p0 context.Context) ([]PubsubScore, error) {
return *new([]PubsubScore), ErrNotSupported
}
func (s *NetStruct) NetSetLimit(p0 context.Context, p1 string, p2 NetLimit) error {
if s.Internal.NetSetLimit == nil {
return ErrNotSupported
}
return s.Internal.NetSetLimit(p0, p1, p2)
}
func (s *NetStub) NetSetLimit(p0 context.Context, p1 string, p2 NetLimit) error {
return ErrNotSupported
}
func (s *NetStruct) NetStat(p0 context.Context, p1 string) (NetStat, error) {
if s.Internal.NetStat == nil {
return *new(NetStat), ErrNotSupported
}
return s.Internal.NetStat(p0, p1)
}
func (s *NetStub) NetStat(p0 context.Context, p1 string) (NetStat, error) {
return *new(NetStat), ErrNotSupported
}
func (s *SignableStruct) Sign(p0 context.Context, p1 SignFunc) error {
if s.Internal.Sign == nil {
return ErrNotSupported
@ -3708,14 +3752,14 @@ func (s *StorageMinerStub) CheckProvable(p0 context.Context, p1 abi.RegisteredPo
return *new(map[abi.SectorNumber]string), ErrNotSupported
}
func (s *StorageMinerStruct) ComputeProof(p0 context.Context, p1 []builtin.SectorInfo, p2 abi.PoStRandomness) ([]builtin.PoStProof, error) {
func (s *StorageMinerStruct) ComputeProof(p0 context.Context, p1 []builtin.ExtendedSectorInfo, p2 abi.PoStRandomness, p3 abi.ChainEpoch, p4 abinetwork.Version) ([]builtin.PoStProof, error) {
if s.Internal.ComputeProof == nil {
return *new([]builtin.PoStProof), ErrNotSupported
}
return s.Internal.ComputeProof(p0, p1, p2)
return s.Internal.ComputeProof(p0, p1, p2, p3, p4)
}
func (s *StorageMinerStub) ComputeProof(p0 context.Context, p1 []builtin.SectorInfo, p2 abi.PoStRandomness) ([]builtin.PoStProof, error) {
func (s *StorageMinerStub) ComputeProof(p0 context.Context, p1 []builtin.ExtendedSectorInfo, p2 abi.PoStRandomness, p3 abi.ChainEpoch, p4 abinetwork.Version) ([]builtin.PoStProof, error) {
return *new([]builtin.PoStProof), ErrNotSupported
}
@ -3972,6 +4016,17 @@ func (s *StorageMinerStub) MarketCancelDataTransfer(p0 context.Context, p1 datat
return ErrNotSupported
}
func (s *StorageMinerStruct) MarketDataTransferDiagnostics(p0 context.Context, p1 peer.ID) (*TransferDiagnostics, error) {
if s.Internal.MarketDataTransferDiagnostics == nil {
return nil, ErrNotSupported
}
return s.Internal.MarketDataTransferDiagnostics(p0, p1)
}
func (s *StorageMinerStub) MarketDataTransferDiagnostics(p0 context.Context, p1 peer.ID) (*TransferDiagnostics, error) {
return nil, ErrNotSupported
}
func (s *StorageMinerStruct) MarketDataTransferUpdates(p0 context.Context) (<-chan DataTransferChannel, error) {
if s.Internal.MarketDataTransferUpdates == nil {
return nil, ErrNotSupported
@ -4456,14 +4511,25 @@ func (s *StorageMinerStub) SectorGetSealDelay(p0 context.Context) (time.Duration
return *new(time.Duration), ErrNotSupported
}
func (s *StorageMinerStruct) SectorMarkForUpgrade(p0 context.Context, p1 abi.SectorNumber) error {
func (s *StorageMinerStruct) SectorMarkForUpgrade(p0 context.Context, p1 abi.SectorNumber, p2 bool) error {
if s.Internal.SectorMarkForUpgrade == nil {
return ErrNotSupported
}
return s.Internal.SectorMarkForUpgrade(p0, p1)
return s.Internal.SectorMarkForUpgrade(p0, p1, p2)
}
func (s *StorageMinerStub) SectorMarkForUpgrade(p0 context.Context, p1 abi.SectorNumber) error {
func (s *StorageMinerStub) SectorMarkForUpgrade(p0 context.Context, p1 abi.SectorNumber, p2 bool) error {
return ErrNotSupported
}
func (s *StorageMinerStruct) SectorMatchPendingPiecesToOpenSectors(p0 context.Context) error {
if s.Internal.SectorMatchPendingPiecesToOpenSectors == nil {
return ErrNotSupported
}
return s.Internal.SectorMatchPendingPiecesToOpenSectors(p0)
}
func (s *StorageMinerStub) SectorMatchPendingPiecesToOpenSectors(p0 context.Context) error {
return ErrNotSupported
}

View File

@ -10,7 +10,9 @@ import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/lotus/chain/types"
"github.com/ipfs/go-cid"
"github.com/ipfs/go-graphsync"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
ma "github.com/multiformats/go-multiaddr"
@ -53,6 +55,30 @@ type MessageSendSpec struct {
MaxFee abi.TokenAmount
}
// GraphSyncDataTransfer provides diagnostics on a data transfer happening over graphsync
type GraphSyncDataTransfer struct {
// GraphSync request id for this transfer
RequestID graphsync.RequestID
// Graphsync state for this transfer
RequestState string
// If a channel ID is present, indicates whether this is the current graphsync request for this channel
// (could have changed in a restart)
IsCurrentChannelRequest bool
// Data transfer channel ID for this transfer
ChannelID *datatransfer.ChannelID
// Data transfer state for this transfer
ChannelState *DataTransferChannel
// Diagnostic information about this request -- and unexpected inconsistencies in
// request state
Diagnostics []string
}
// TransferDiagnostics give current information about transfers going over graphsync that may be helpful for debugging
type TransferDiagnostics struct {
ReceivingTransfers []*GraphSyncDataTransfer
SendingTransfers []*GraphSyncDataTransfer
}
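As an aside (not part of this file), a small sketch of how these diagnostics might be consumed through the new `MarketDataTransferDiagnostics` method on the StorageMiner API; only fields defined above are used:

```go
package example

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p-core/peer"

	"github.com/filecoin-project/lotus/api"
)

// printTransferDiagnostics fetches graphsync transfer diagnostics for a peer
// and prints each transfer's request state and any recorded inconsistencies.
func printTransferDiagnostics(ctx context.Context, miner api.StorageMiner, p peer.ID) error {
	diag, err := miner.MarketDataTransferDiagnostics(ctx, p)
	if err != nil {
		return err
	}
	for _, t := range append(diag.ReceivingTransfers, diag.SendingTransfers...) {
		fmt.Printf("request %v: state=%s currentForChannel=%v\n",
			t.RequestID, t.RequestState, t.IsCurrentChannelRequest)
		for _, d := range t.Diagnostics {
			fmt.Println("  ", d)
		}
	}
	return nil
}
```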
type DataTransferChannel struct {
TransferID datatransfer.TransferID
Status datatransfer.Status
@ -104,6 +130,28 @@ type NetBlockList struct {
IPSubnets []string
}
type NetStat struct {
System *network.ScopeStat `json:",omitempty"`
Transient *network.ScopeStat `json:",omitempty"`
Services map[string]network.ScopeStat `json:",omitempty"`
Protocols map[string]network.ScopeStat `json:",omitempty"`
Peers map[string]network.ScopeStat `json:",omitempty"`
}
type NetLimit struct {
Dynamic bool `json:",omitempty"`
// set if Dynamic is false
Memory int64 `json:",omitempty"`
// set if Dynamic is true
MemoryFraction float64 `json:",omitempty"`
MinMemory int64 `json:",omitempty"`
MaxMemory int64 `json:",omitempty"`
Streams, StreamsInbound, StreamsOutbound int
Conns, ConnsInbound, ConnsOutbound int
FD int
}
type ExtendedPeerInfo struct {
ID peer.ID
Agent string

View File

@ -8,7 +8,7 @@ import (
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/dline"
"github.com/filecoin-project/go-state-types/network"
abinetwork "github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
@ -57,7 +57,7 @@ type Gateway interface {
StateMinerInfo(ctx context.Context, actor address.Address, tsk types.TipSetKey) (miner.MinerInfo, error)
StateMinerProvingDeadline(ctx context.Context, addr address.Address, tsk types.TipSetKey) (*dline.Info, error)
StateMinerPower(context.Context, address.Address, types.TipSetKey) (*api.MinerPower, error)
StateNetworkVersion(context.Context, types.TipSetKey) (network.Version, error)
StateNetworkVersion(context.Context, types.TipSetKey) (abinetwork.Version, error)
StateSearchMsg(ctx context.Context, msg cid.Cid) (*api.MsgLookup, error)
StateSectorGetInfo(ctx context.Context, maddr address.Address, n abi.SectorNumber, tsk types.TipSetKey) (*miner.SectorOnChainInfo, error)
StateVerifiedClientStatus(ctx context.Context, addr address.Address, tsk types.TipSetKey) (*abi.StoragePower, error)

View File

@ -13,7 +13,7 @@ import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/dline"
"github.com/filecoin-project/go-state-types/network"
abinetwork "github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/api"
apitypes "github.com/filecoin-project/lotus/api/types"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
@ -451,7 +451,7 @@ type GatewayStruct struct {
StateMinerProvingDeadline func(p0 context.Context, p1 address.Address, p2 types.TipSetKey) (*dline.Info, error) ``
StateNetworkVersion func(p0 context.Context, p1 types.TipSetKey) (network.Version, error) ``
StateNetworkVersion func(p0 context.Context, p1 types.TipSetKey) (abinetwork.Version, error) ``
StateSearchMsg func(p0 context.Context, p1 cid.Cid) (*api.MsgLookup, error) ``
@ -2703,15 +2703,15 @@ func (s *GatewayStub) StateMinerProvingDeadline(p0 context.Context, p1 address.A
return nil, ErrNotSupported
}
func (s *GatewayStruct) StateNetworkVersion(p0 context.Context, p1 types.TipSetKey) (network.Version, error) {
func (s *GatewayStruct) StateNetworkVersion(p0 context.Context, p1 types.TipSetKey) (abinetwork.Version, error) {
if s.Internal.StateNetworkVersion == nil {
return *new(network.Version), ErrNotSupported
return *new(abinetwork.Version), ErrNotSupported
}
return s.Internal.StateNetworkVersion(p0, p1)
}
func (s *GatewayStub) StateNetworkVersion(p0 context.Context, p1 types.TipSetKey) (network.Version, error) {
return *new(network.Version), ErrNotSupported
func (s *GatewayStub) StateNetworkVersion(p0 context.Context, p1 types.TipSetKey) (abinetwork.Version, error) {
return *new(abinetwork.Version), ErrNotSupported
}
func (s *GatewayStruct) StateSearchMsg(p0 context.Context, p1 cid.Cid) (*api.MsgLookup, error) {

View File

@ -1724,6 +1724,21 @@ func (mr *MockFullNodeMockRecorder) NetFindPeer(arg0, arg1 interface{}) *gomock.
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetFindPeer", reflect.TypeOf((*MockFullNode)(nil).NetFindPeer), arg0, arg1)
}
// NetLimit mocks base method.
func (m *MockFullNode) NetLimit(arg0 context.Context, arg1 string) (api.NetLimit, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetLimit", arg0, arg1)
ret0, _ := ret[0].(api.NetLimit)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetLimit indicates an expected call of NetLimit.
func (mr *MockFullNodeMockRecorder) NetLimit(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetLimit", reflect.TypeOf((*MockFullNode)(nil).NetLimit), arg0, arg1)
}
// NetPeerInfo mocks base method.
func (m *MockFullNode) NetPeerInfo(arg0 context.Context, arg1 peer.ID) (*api.ExtendedPeerInfo, error) {
m.ctrl.T.Helper()
@ -1769,6 +1784,35 @@ func (mr *MockFullNodeMockRecorder) NetPubsubScores(arg0 interface{}) *gomock.Ca
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetPubsubScores", reflect.TypeOf((*MockFullNode)(nil).NetPubsubScores), arg0)
}
// NetSetLimit mocks base method.
func (m *MockFullNode) NetSetLimit(arg0 context.Context, arg1 string, arg2 api.NetLimit) error {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetSetLimit", arg0, arg1, arg2)
ret0, _ := ret[0].(error)
return ret0
}
// NetSetLimit indicates an expected call of NetSetLimit.
func (mr *MockFullNodeMockRecorder) NetSetLimit(arg0, arg1, arg2 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetSetLimit", reflect.TypeOf((*MockFullNode)(nil).NetSetLimit), arg0, arg1, arg2)
}
// NetStat mocks base method.
func (m *MockFullNode) NetStat(arg0 context.Context, arg1 string) (api.NetStat, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "NetStat", arg0, arg1)
ret0, _ := ret[0].(api.NetStat)
ret1, _ := ret[1].(error)
return ret0, ret1
}
// NetStat indicates an expected call of NetStat.
func (mr *MockFullNodeMockRecorder) NetStat(arg0, arg1 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "NetStat", reflect.TypeOf((*MockFullNode)(nil).NetStat), arg0, arg1)
}
// PaychAllocateLane mocks base method.
func (m *MockFullNode) PaychAllocateLane(arg0 context.Context, arg1 address.Address) (uint64, error) {
m.ctrl.T.Helper()

View File

@ -54,10 +54,10 @@ func VersionForType(nodeType NodeType) (Version, error) {
// semver versions of the rpc api exposed
var (
FullAPIVersion0 = newVer(1, 4, 0)
FullAPIVersion1 = newVer(2, 1, 0)
FullAPIVersion0 = newVer(1, 5, 0)
FullAPIVersion1 = newVer(2, 2, 0)
MinerAPIVersion0 = newVer(1, 2, 0)
MinerAPIVersion0 = newVer(1, 3, 0)
WorkerAPIVersion0 = newVer(1, 5, 0)
)

blockstore/autobatch.go (new file, 262 lines added)
View File

@ -0,0 +1,262 @@
package blockstore
import (
"context"
"sync"
"time"
"golang.org/x/xerrors"
block "github.com/ipfs/go-block-format"
"github.com/ipfs/go-cid"
)
// autolog is a logger for the autobatching blockstore. It is subscoped from the
// blockstore logger.
var autolog = log.Named("auto")
// contains the same set of blocks twice: once as an ordered list for flushing, and once as a map for fast access
type blockBatch struct {
blockList []block.Block
blockMap map[cid.Cid]block.Block
}
type AutobatchBlockstore struct {
// TODO: drop if memory consumption is too high
addedCids map[cid.Cid]struct{}
stateLock sync.Mutex
bufferedBatch blockBatch
flushingBatch blockBatch
flushErr error
flushCh chan struct{}
doFlushLock sync.Mutex
flushRetryDelay time.Duration
doneCh chan struct{}
shutdown context.CancelFunc
backingBs Blockstore
bufferCapacity int
bufferSize int
}
func NewAutobatch(ctx context.Context, backingBs Blockstore, bufferCapacity int) *AutobatchBlockstore {
ctx, cancel := context.WithCancel(ctx)
bs := &AutobatchBlockstore{
addedCids: make(map[cid.Cid]struct{}),
backingBs: backingBs,
bufferCapacity: bufferCapacity,
flushCh: make(chan struct{}, 1),
doneCh: make(chan struct{}),
// could be made configurable
flushRetryDelay: time.Millisecond * 100,
shutdown: cancel,
}
bs.bufferedBatch.blockMap = make(map[cid.Cid]block.Block)
go bs.flushWorker(ctx)
return bs
}
func (bs *AutobatchBlockstore) Put(ctx context.Context, blk block.Block) error {
bs.stateLock.Lock()
defer bs.stateLock.Unlock()
_, ok := bs.addedCids[blk.Cid()]
if !ok {
bs.addedCids[blk.Cid()] = struct{}{}
bs.bufferedBatch.blockList = append(bs.bufferedBatch.blockList, blk)
bs.bufferedBatch.blockMap[blk.Cid()] = blk
bs.bufferSize += len(blk.RawData())
if bs.bufferSize >= bs.bufferCapacity {
// signal that a flush is appropriate, may be ignored
select {
case bs.flushCh <- struct{}{}:
default:
// do nothing
}
}
}
return nil
}
func (bs *AutobatchBlockstore) flushWorker(ctx context.Context) {
defer close(bs.doneCh)
for {
select {
case <-bs.flushCh:
// TODO: check if we _should_ actually flush. We could get a spurious wakeup
// here.
putErr := bs.doFlush(ctx, false)
for putErr != nil {
select {
case <-ctx.Done():
return
case <-time.After(bs.flushRetryDelay):
autolog.Errorf("FLUSH ERRORED: %w, retrying after %v", putErr, bs.flushRetryDelay)
putErr = bs.doFlush(ctx, true)
}
}
case <-ctx.Done():
// Do one last flush.
_ = bs.doFlush(ctx, false)
return
}
}
}
// caller must NOT hold stateLock
// set retryOnly to true to only retry a failed flush and not flush anything new.
func (bs *AutobatchBlockstore) doFlush(ctx context.Context, retryOnly bool) error {
bs.doFlushLock.Lock()
defer bs.doFlushLock.Unlock()
// If we failed to flush last time, try flushing again.
if bs.flushErr != nil {
bs.flushErr = bs.backingBs.PutMany(ctx, bs.flushingBatch.blockList)
}
// If we failed, or we're _only_ retrying, bail.
if retryOnly || bs.flushErr != nil {
return bs.flushErr
}
// Then take the current batch...
bs.stateLock.Lock()
// We do NOT clear addedCids here, because its purpose is to expedite Puts
bs.flushingBatch = bs.bufferedBatch
bs.bufferedBatch.blockList = make([]block.Block, 0, len(bs.flushingBatch.blockList))
bs.bufferedBatch.blockMap = make(map[cid.Cid]block.Block, len(bs.flushingBatch.blockMap))
bs.stateLock.Unlock()
// And try to flush it.
bs.flushErr = bs.backingBs.PutMany(ctx, bs.flushingBatch.blockList)
// If we succeeded, reset the batch. Otherwise, we'll try again next time.
if bs.flushErr == nil {
bs.stateLock.Lock()
bs.flushingBatch = blockBatch{}
bs.stateLock.Unlock()
}
return bs.flushErr
}
// caller must NOT hold stateLock
func (bs *AutobatchBlockstore) Flush(ctx context.Context) error {
return bs.doFlush(ctx, false)
}
func (bs *AutobatchBlockstore) Shutdown(ctx context.Context) error {
// TODO: Prevent puts after we call this to avoid losing data.
bs.shutdown()
select {
case <-bs.doneCh:
case <-ctx.Done():
return ctx.Err()
}
bs.doFlushLock.Lock()
defer bs.doFlushLock.Unlock()
return bs.flushErr
}
func (bs *AutobatchBlockstore) Get(ctx context.Context, c cid.Cid) (block.Block, error) {
// may seem backward to check the backingBs first, but that is the likeliest case
blk, err := bs.backingBs.Get(ctx, c)
if err == nil {
return blk, nil
}
if err != ErrNotFound {
return blk, err
}
bs.stateLock.Lock()
defer bs.stateLock.Unlock()
v, ok := bs.flushingBatch.blockMap[c]
if ok {
return v, nil
}
v, ok = bs.bufferedBatch.blockMap[c]
if ok {
return v, nil
}
// not in the backing store or in either batch: report not-found rather than
// recursing (a recursive call would try to re-acquire stateLock and deadlock)
return nil, ErrNotFound
}
func (bs *AutobatchBlockstore) DeleteBlock(context.Context, cid.Cid) error {
// if we wanted to support this, we would have to:
// - flush
// - delete from the backingBs (if present)
// - remove from addedCids (if present)
// - if present in addedCids, also walk the ordered lists and remove if present
return xerrors.New("deletion is unsupported")
}
func (bs *AutobatchBlockstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
// see note in DeleteBlock()
return xerrors.New("deletion is unsupported")
}
func (bs *AutobatchBlockstore) Has(ctx context.Context, c cid.Cid) (bool, error) {
_, err := bs.Get(ctx, c)
if err == nil {
return true, nil
}
if err == ErrNotFound {
return false, nil
}
return false, err
}
func (bs *AutobatchBlockstore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
blk, err := bs.Get(ctx, c)
if err != nil {
return 0, err
}
return len(blk.RawData()), nil
}
func (bs *AutobatchBlockstore) PutMany(ctx context.Context, blks []block.Block) error {
for _, blk := range blks {
if err := bs.Put(ctx, blk); err != nil {
return err
}
}
return nil
}
func (bs *AutobatchBlockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) {
if err := bs.Flush(ctx); err != nil {
return nil, err
}
return bs.backingBs.AllKeysChan(ctx)
}
func (bs *AutobatchBlockstore) HashOnRead(enabled bool) {
bs.backingBs.HashOnRead(enabled)
}
func (bs *AutobatchBlockstore) View(ctx context.Context, cid cid.Cid, callback func([]byte) error) error {
blk, err := bs.Get(ctx, cid)
if err != nil {
return err
}
return callback(blk.RawData())
}

View File

@ -0,0 +1,34 @@
package blockstore
import (
"context"
"testing"
"github.com/stretchr/testify/require"
)
func TestAutobatchBlockstore(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ab := NewAutobatch(ctx, NewMemory(), len(b0.RawData())+len(b1.RawData())-1)
require.NoError(t, ab.Put(ctx, b0))
require.NoError(t, ab.Put(ctx, b1))
require.NoError(t, ab.Put(ctx, b2))
v0, err := ab.Get(ctx, b0.Cid())
require.NoError(t, err)
require.Equal(t, b0.RawData(), v0.RawData())
v1, err := ab.Get(ctx, b1.Cid())
require.NoError(t, err)
require.Equal(t, b1.RawData(), v1.RawData())
v2, err := ab.Get(ctx, b2.Cid())
require.NoError(t, err)
require.Equal(t, b2.RawData(), v2.RawData())
require.NoError(t, ab.Flush(ctx))
require.NoError(t, ab.Shutdown(ctx))
}

View File

@ -1,2 +1,2 @@
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWBzv5sf4eTyo8cjJGfGnpxo6QkEPkRShG9GqjE2A5QaW5
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWBo9TSD4XXRFtu6snv6QNYvXgRaSaVb116YiYEsDWgKtq
/dns4/bootstrap-0.butterfly.fildev.network/tcp/1347/p2p/12D3KooWBdRCBLUeKvoy22u5DcXs61adFn31v8WWCZgmBjDCjbsC
/dns4/bootstrap-1.butterfly.fildev.network/tcp/1347/p2p/12D3KooWDUQJBA18njjXnG9RtLxoN3muvdU7PEy55QorUEsdAqdy

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -17,7 +17,7 @@ import (
const BootstrappersFile = ""
const GenesisFile = ""
const GenesisNetworkVersion = network.Version14
const GenesisNetworkVersion = network.Version15
var UpgradeBreezeHeight = abi.ChainEpoch(-1)
@ -47,7 +47,7 @@ var UpgradeHyperdriveHeight = abi.ChainEpoch(-16)
var UpgradeChocolateHeight = abi.ChainEpoch(-17)
var UpgradeSnapDealsHeight = abi.ChainEpoch(-18)
var UpgradeOhSnapHeight = abi.ChainEpoch(-18)
var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,
@ -90,6 +90,7 @@ func init() {
UpgradeTurboHeight = getUpgradeHeight("LOTUS_ACTORSV4_HEIGHT", UpgradeTurboHeight)
UpgradeHyperdriveHeight = getUpgradeHeight("LOTUS_HYPERDRIVE_HEIGHT", UpgradeHyperdriveHeight)
UpgradeChocolateHeight = getUpgradeHeight("LOTUS_CHOCOLATE_HEIGHT", UpgradeChocolateHeight)
UpgradeOhSnapHeight = getUpgradeHeight("LOTUS_OHSNAP_HEIGHT", UpgradeOhSnapHeight)
BuildType |= Build2k

View File

@ -16,7 +16,7 @@ var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,
}
const GenesisNetworkVersion = network.Version13
const GenesisNetworkVersion = network.Version14
const BootstrappersFile = "butterflynet.pi"
const GenesisFile = "butterflynet.car"
@ -40,13 +40,17 @@ const UpgradeTrustHeight = -13
const UpgradeNorwegianHeight = -14
const UpgradeTurboHeight = -15
const UpgradeHyperdriveHeight = -16
const UpgradeChocolateHeight = 6360
const UpgradeSnapDealsHeight = 99999999
const UpgradeChocolateHeight = -17
// 2022-01-17T19:00:00Z
const UpgradeOhSnapHeight = 30262
func init() {
policy.SetConsensusMinerMinPower(abi.NewStoragePower(2 << 30))
policy.SetSupportedProofTypes(
abi.RegisteredSealProof_StackedDrg512MiBV1,
abi.RegisteredSealProof_StackedDrg32GiBV1,
abi.RegisteredSealProof_StackedDrg64GiBV1,
)
SetAddressNetwork(address.Testnet)

View File

@ -54,7 +54,7 @@ const UpgradeHyperdriveHeight = 420
const UpgradeChocolateHeight = 312746
const UpgradeSnapDealsHeight = 99999999
const UpgradeOhSnapHeight = 99999999
func init() {
policy.SetConsensusMinerMinPower(abi.NewStoragePower(32 << 30))

View File

@ -47,7 +47,7 @@ var UpgradeTurboHeight = abi.ChainEpoch(-15)
var UpgradeHyperdriveHeight = abi.ChainEpoch(-16)
var UpgradeChocolateHeight = abi.ChainEpoch(-17)
var UpgradeSnapDealsHeight = abi.ChainEpoch(-18)
var UpgradeOhSnapHeight = abi.ChainEpoch(-18)
var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,

View File

@ -67,7 +67,7 @@ const UpgradeHyperdriveHeight = 892800
// 2021-10-26T13:30:00Z
const UpgradeChocolateHeight = 1231620
var UpgradeSnapDealsHeight = abi.ChainEpoch(999999999999)
var UpgradeOhSnapHeight = abi.ChainEpoch(999999999999)
func init() {
if os.Getenv("LOTUS_USE_TEST_ADDRESSES") != "1" {
@ -75,7 +75,7 @@ func init() {
}
if os.Getenv("LOTUS_DISABLE_SNAPDEALS") == "1" {
UpgradeSnapDealsHeight = math.MaxInt64
UpgradeOhSnapHeight = math.MaxInt64
}
Devnet = false

View File

@ -99,7 +99,7 @@ var (
UpgradeTurboHeight abi.ChainEpoch = -14
UpgradeHyperdriveHeight abi.ChainEpoch = -15
UpgradeChocolateHeight abi.ChainEpoch = -16
UpgradeSnapDealsHeight abi.ChainEpoch = -17
UpgradeOhSnapHeight abi.ChainEpoch = -17
DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,
@ -107,8 +107,8 @@ var (
GenesisNetworkVersion = network.Version0
NewestNetworkVersion = network.Version14
ActorUpgradeNetworkVersion = network.Version4
NewestNetworkVersion = network.Version15
ActorUpgradeNetworkVersion = network.Version15
Devnet = true
ZeroAddress = MustParseAddress("f3yaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaby2smx7a")

View File

@ -37,7 +37,7 @@ func BuildTypeString() string {
}
// BuildVersion is the local build version
const BuildVersion = "1.13.3-dev"
const BuildVersion = "1.15.0-dev"
func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {

View File

@ -61,6 +61,7 @@ const (
// These are all just type aliases across actor versions. In the future, that might change
// and we might need to do something fancier.
type SectorInfo = proof7.SectorInfo
type ExtendedSectorInfo = proof7.ExtendedSectorInfo
type PoStProof = proof7.PoStProof
type FilterEstimate = smoothing0.FilterEstimate

View File

@ -45,6 +45,7 @@ const (
// These are all just type aliases across actor versions. In the future, that might change
// and we might need to do something fancier.
type SectorInfo = proof{{.latestVersion}}.SectorInfo
type ExtendedSectorInfo = proof{{.latestVersion}}.ExtendedSectorInfo
type PoStProof = proof{{.latestVersion}}.PoStProof
type FilterEstimate = smoothing0.FilterEstimate

View File

@ -23,6 +23,7 @@ import (
miner2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/miner"
miner3 "github.com/filecoin-project/specs-actors/v3/actors/builtin/miner"
miner5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/miner"
miner7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/miner"
{{range .versions}}
builtin{{.}} "github.com/filecoin-project/specs-actors{{import .}}actors/builtin"
{{end}}
@ -193,6 +194,7 @@ type SectorPreCommitOnChainInfo struct {
type PoStPartition = miner0.PoStPartition
type RecoveryDeclaration = miner0.RecoveryDeclaration
type FaultDeclaration = miner0.FaultDeclaration
type ReplicaUpdate = miner7.ReplicaUpdate
// Params
type DeclareFaultsParams = miner0.DeclareFaultsParams
@ -201,6 +203,7 @@ type SubmitWindowedPoStParams = miner0.SubmitWindowedPoStParams
type ProveCommitSectorParams = miner0.ProveCommitSectorParams
type DisputeWindowedPoStParams = miner3.DisputeWindowedPoStParams
type ProveCommitAggregateParams = miner5.ProveCommitAggregateParams
type ProveReplicaUpdatesParams = miner7.ProveReplicaUpdatesParams
func PreferredSealProofTypeFromWindowPoStType(nver network.Version, proof abi.RegisteredPoStProof) (abi.RegisteredSealProof, error) {
// We added support for the new proofs in network version 7, and removed support for the old

View File

@ -23,6 +23,7 @@ import (
miner2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/miner"
miner3 "github.com/filecoin-project/specs-actors/v3/actors/builtin/miner"
miner5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/miner"
miner7 "github.com/filecoin-project/specs-actors/v7/actors/builtin/miner"
builtin0 "github.com/filecoin-project/specs-actors/actors/builtin"
@ -282,6 +283,7 @@ type SectorPreCommitOnChainInfo struct {
type PoStPartition = miner0.PoStPartition
type RecoveryDeclaration = miner0.RecoveryDeclaration
type FaultDeclaration = miner0.FaultDeclaration
type ReplicaUpdate = miner7.ReplicaUpdate
// Params
type DeclareFaultsParams = miner0.DeclareFaultsParams
@ -290,6 +292,7 @@ type SubmitWindowedPoStParams = miner0.SubmitWindowedPoStParams
type ProveCommitSectorParams = miner0.ProveCommitSectorParams
type DisputeWindowedPoStParams = miner3.DisputeWindowedPoStParams
type ProveCommitAggregateParams = miner5.ProveCommitAggregateParams
type ProveReplicaUpdatesParams = miner7.ProveReplicaUpdatesParams
func PreferredSealProofTypeFromWindowPoStType(nver network.Version, proof abi.RegisteredPoStProof) (abi.RegisteredSealProof, error) {
// We added support for the new proofs in network version 7, and removed support for the old

View File

@ -39,7 +39,11 @@ func (m message{{.v}}) Create(to address.Address, initialAmount abi.TokenAmount)
func (m message{{.v}}) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych{{.v}}.UpdateChannelStateParams{
{{if (ge .v 7)}}
Sv: toV{{.v}}SignedVoucher(*sv),
{{else}}
Sv: *sv,
{{end}}
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message0) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message0) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych0.UpdateChannelStateParams{
Sv: *sv,
Sv: *sv,
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message2) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message2) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych2.UpdateChannelStateParams{
Sv: *sv,
Sv: *sv,
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message3) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message3) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych3.UpdateChannelStateParams{
Sv: *sv,
Sv: *sv,
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message4) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message4) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych4.UpdateChannelStateParams{
Sv: *sv,
Sv: *sv,
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message5) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message5) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych5.UpdateChannelStateParams{
Sv: *sv,
Sv: *sv,
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message6) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message6) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych6.UpdateChannelStateParams{
Sv: *sv,
Sv: *sv,
Secret: secret,
})
if aerr != nil {

View File

@ -39,7 +39,9 @@ func (m message7) Create(to address.Address, initialAmount abi.TokenAmount) (*ty
func (m message7) Update(paych address.Address, sv *SignedVoucher, secret []byte) (*types.Message, error) {
params, aerr := actors.SerializeParams(&paych7.UpdateChannelStateParams{
Sv: *sv,
Sv: toV7SignedVoucher(*sv),
Secret: secret,
})
if aerr != nil {

View File

@ -112,3 +112,21 @@ func (ls *laneState{{.v}}) Redeemed() (big.Int, error) {
func (ls *laneState{{.v}}) Nonce() (uint64, error) {
return ls.LaneState.Nonce, nil
}
{{if (ge .v 7)}}
func toV{{.v}}SignedVoucher(sv SignedVoucher) paych{{.v}}.SignedVoucher {
return paych{{.v}}.SignedVoucher{
ChannelAddr: sv.ChannelAddr,
TimeLockMin: sv.TimeLockMin,
TimeLockMax: sv.TimeLockMax,
SecretHash: sv.SecretPreimage,
Extra: sv.Extra,
Lane: sv.Lane,
Nonce: sv.Nonce,
Amount: sv.Amount,
MinSettleHeight: sv.MinSettleHeight,
Merges: sv.Merges,
Signature: sv.Signature,
}
}
{{end}}

View File

@ -112,3 +112,19 @@ func (ls *laneState7) Redeemed() (big.Int, error) {
func (ls *laneState7) Nonce() (uint64, error) {
return ls.LaneState.Nonce, nil
}
func toV7SignedVoucher(sv SignedVoucher) paych7.SignedVoucher {
return paych7.SignedVoucher{
ChannelAddr: sv.ChannelAddr,
TimeLockMin: sv.TimeLockMin,
TimeLockMax: sv.TimeLockMax,
SecretHash: sv.SecretPreimage,
Extra: sv.Extra,
Lane: sv.Lane,
Nonce: sv.Nonce,
Amount: sv.Amount,
MinSettleHeight: sv.MinSettleHeight,
Merges: sv.Merges,
Signature: sv.Signature,
}
}

View File

@ -92,16 +92,16 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
partDone()
}()
makeVmWithBaseState := func(base cid.Cid) (*vm.VM, error) {
makeVmWithBaseStateAndEpoch := func(base cid.Cid, e abi.ChainEpoch) (*vm.VM, error) {
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: epoch,
Epoch: e,
Rand: r,
Bstore: sm.ChainStore().StateBlockstore(),
Actors: NewActorRegistry(),
Syscalls: sm.Syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
NetworkVersion: sm.GetNetworkVersion(ctx, e),
BaseFee: baseFee,
LookbackState: stmgr.LookbackStateGetterForTipset(sm, ts),
}
@ -109,12 +109,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
return sm.VMConstructor()(ctx, vmopt)
}
vmi, err := makeVmWithBaseState(pstate)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
runCron := func(epoch abi.ChainEpoch) error {
runCron := func(vmCron *vm.VM, epoch abi.ChainEpoch) error {
cronMsg := &types.Message{
To: cron.Address,
From: builtin.SystemActorAddr,
@ -126,56 +121,58 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
Method: cron.Methods.EpochTick,
Params: nil,
}
ret, err := vmi.ApplyImplicitMessage(ctx, cronMsg)
ret, err := vmCron.ApplyImplicitMessage(ctx, cronMsg)
if err != nil {
return err
return xerrors.Errorf("running cron: %w", err)
}
if em != nil {
if err := em.MessageApplied(ctx, ts, cronMsg.Cid(), cronMsg, ret, true); err != nil {
return xerrors.Errorf("callback failed on cron message: %w", err)
}
}
if ret.ExitCode != 0 {
return xerrors.Errorf("CheckProofSubmissions exit was non-zero: %d", ret.ExitCode)
return xerrors.Errorf("cron exit was non-zero: %d", ret.ExitCode)
}
return nil
}
for i := parentEpoch; i < epoch; i++ {
var err error
if i > parentEpoch {
// run cron for null rounds if any
if err := runCron(i); err != nil {
return cid.Undef, cid.Undef, err
vmCron, err := makeVmWithBaseStateAndEpoch(pstate, i)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making cron vm: %w", err)
}
pstate, err = vmi.Flush(ctx)
// run cron for null rounds if any
if err = runCron(vmCron, i); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("running cron: %w", err)
}
pstate, err = vmCron.Flush(ctx)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("flushing vm: %w", err)
return cid.Undef, cid.Undef, xerrors.Errorf("flushing cron vm: %w", err)
}
}
// handle state forks
// XXX: The state tree
newState, err := sm.HandleStateForks(ctx, pstate, i, em, ts)
pstate, err = sm.HandleStateForks(ctx, pstate, i, em, ts)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("error handling state forks: %w", err)
}
if pstate != newState {
vmi, err = makeVmWithBaseState(newState)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
}
vmi.SetBlockHeight(i + 1)
pstate = newState
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyMessages)
vmi, err := makeVmWithBaseStateAndEpoch(pstate, epoch)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
var receipts []cbg.CBORMarshaler
processedMsgs := make(map[cid.Cid]struct{})
for _, b := range bms {
@ -243,7 +240,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyCron)
if err := runCron(epoch); err != nil {
if err := runCron(vmi, epoch); err != nil {
return cid.Cid{}, cid.Cid{}, err
}
@ -299,7 +296,7 @@ func (t *TipSetExecutor) ExecuteTipSet(ctx context.Context, sm *stmgr.StateManag
parentEpoch = parent.Height
}
r := rand.NewStateRand(sm.ChainStore(), ts.Cids(), sm.Beacon())
r := rand.NewStateRand(sm.ChainStore(), ts.Cids(), sm.Beacon(), sm.GetNetworkVersion)
blkmsgs, err := sm.ChainStore().BlockMsgsForTipset(ctx, ts)
if err != nil {

View File

@ -26,7 +26,7 @@ import (
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/network"
blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
proof2 "github.com/filecoin-project/specs-actors/v2/actors/runtime/proof"
"github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
@ -95,7 +95,7 @@ func (filec *FilecoinEC) ValidateBlock(ctx context.Context, b *types.FullBlock)
return xerrors.Errorf("load parent tipset failed (%s): %w", h.Parents, err)
}
winPoStNv := filec.sm.GetNtwkVersion(ctx, baseTs.Height())
winPoStNv := filec.sm.GetNetworkVersion(ctx, baseTs.Height())
lbts, lbst, err := stmgr.GetLookbackTipSetForRound(ctx, filec.sm, baseTs, h.Height)
if err != nil {
@ -182,7 +182,7 @@ func (filec *FilecoinEC) ValidateBlock(ctx context.Context, b *types.FullBlock)
}
}
return xerrors.Errorf("parent state root did not match computed state (%s != %s)", stateroot, h.ParentStateRoot)
return xerrors.Errorf("parent state root did not match computed state (%s != %s)", h.ParentStateRoot, stateroot)
}
if precp != h.ParentMessageReceipts {
@ -400,12 +400,21 @@ func (filec *FilecoinEC) VerifyWinningPoStProof(ctx context.Context, nv network.
return xerrors.Errorf("failed to get ID from miner address %s: %w", h.Miner, err)
}
sectors, err := stmgr.GetSectorsForWinningPoSt(ctx, nv, filec.verifier, filec.sm, lbst, h.Miner, rand)
xsectors, err := stmgr.GetSectorsForWinningPoSt(ctx, nv, filec.verifier, filec.sm, lbst, h.Miner, rand)
if err != nil {
return xerrors.Errorf("getting winning post sector set: %w", err)
}
ok, err := ffiwrapper.ProofVerifier.VerifyWinningPoSt(ctx, proof2.WinningPoStVerifyInfo{
sectors := make([]proof.SectorInfo, len(xsectors))
for i, xsi := range xsectors {
sectors[i] = proof.SectorInfo{
SealProof: xsi.SealProof,
SectorNumber: xsi.SectorNumber,
SealedCID: xsi.SealedCID,
}
}
ok, err := ffiwrapper.ProofVerifier.VerifyWinningPoSt(ctx, proof.WinningPoStVerifyInfo{
Randomness: rand,
Proofs: h.WinPoStProof,
ChallengedSectors: sectors,
@ -449,7 +458,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
stateroot, _, err := filec.sm.TipSetState(ctx, baseTs)
if err != nil {
return err
return xerrors.Errorf("failed to compute tipsettate for %s: %w", baseTs.Key(), err)
}
st, err := state.LoadStateTree(filec.store.ActorStore(ctx), stateroot)
@ -457,7 +466,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
return xerrors.Errorf("failed to load base state tree: %w", err)
}
nv := filec.sm.GetNtwkVersion(ctx, b.Header.Height)
nv := filec.sm.GetNetworkVersion(ctx, b.Header.Height)
pl := vm.PricelistByEpoch(baseTs.Height())
var sumGasLimit int64
checkMsg := func(msg types.ChainMsg) error {
@ -466,7 +475,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
// Phase 1: syntactic validation, as defined in the spec
minGas := pl.OnChainMessage(msg.ChainLength())
if err := m.ValidForBlockInclusion(minGas.Total(), nv); err != nil {
return err
return xerrors.Errorf("msg %s invalid for block inclusion: %w", m.Cid(), err)
}
// ValidForBlockInclusion checks if any single message does not exceed BlockGasLimit
@ -479,10 +488,10 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
// Phase 2: (Partial) semantic validation:
// the sender exists and is an account actor, and the nonces make sense
var sender address.Address
if filec.sm.GetNtwkVersion(ctx, b.Header.Height) >= network.Version13 {
if filec.sm.GetNetworkVersion(ctx, b.Header.Height) >= network.Version13 {
sender, err = st.LookupID(m.From)
if err != nil {
return err
return xerrors.Errorf("failed to lookup sender %s: %w", m.From, err)
}
} else {
sender = m.From
@ -532,7 +541,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
smArr := blockadt.MakeEmptyArray(tmpstore)
for i, m := range b.SecpkMessages {
if filec.sm.GetNtwkVersion(ctx, b.Header.Height) >= network.Version14 {
if filec.sm.GetNetworkVersion(ctx, b.Header.Height) >= network.Version14 {
if m.Signature.Type != crypto.SigTypeSecp256k1 {
return xerrors.Errorf("block had invalid secpk message at index %d: %w", i, err)
}
@ -565,12 +574,13 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
bmroot, err := bmArr.Root()
if err != nil {
return err
return xerrors.Errorf("failed to root bls msgs: %w", err)
}
smroot, err := smArr.Root()
if err != nil {
return err
return xerrors.Errorf("failed to root secp msgs: %w", err)
}
mrcid, err := tmpstore.Put(ctx, &types.MsgMeta{
@ -578,7 +588,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
SecpkMessages: smroot,
})
if err != nil {
return err
return xerrors.Errorf("failed to put msg meta: %w", err)
}
if b.Header.Messages != mrcid {
@ -586,7 +596,12 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
}
// Finally, flush.
return vm.Copy(ctx, tmpbs, filec.store.ChainBlockstore(), mrcid)
err = vm.Copy(ctx, tmpbs, filec.store.ChainBlockstore(), mrcid)
if err != nil {
return xerrors.Errorf("failed to flush:%w", err)
}
return nil
}
func (filec *FilecoinEC) IsEpochBeyondCurrMax(epoch abi.ChainEpoch) bool {

View File

@ -5,6 +5,8 @@ import (
"runtime"
"time"
"github.com/docker/go-units"
"github.com/filecoin-project/specs-actors/v6/actors/migration/nv14"
"github.com/filecoin-project/specs-actors/v7/actors/migration/nv15"
@ -158,7 +160,7 @@ func DefaultUpgradeSchedule() stmgr.UpgradeSchedule {
}},
Expensive: true,
}, {
Height: build.UpgradeSnapDealsHeight,
Height: build.UpgradeOhSnapHeight,
Network: network.Version15,
Migration: UpgradeActorsV7,
PreMigrations: []stmgr.PreMigration{{
@ -1245,8 +1247,15 @@ func PreUpgradeActorsV7(ctx context.Context, sm *stmgr.StateManager, cache stmgr
workerCount /= 2
}
config := nv15.Config{MaxWorkers: uint(workerCount)}
_, err := upgradeActorsV7Common(ctx, sm, cache, root, epoch, ts, config)
lbts, lbRoot, err := stmgr.GetLookbackTipSetForRound(ctx, sm, ts, epoch)
if err != nil {
return xerrors.Errorf("error getting lookback ts for premigration: %w", err)
}
config := nv15.Config{MaxWorkers: uint(workerCount),
ProgressLogPeriod: time.Minute * 5}
_, err = upgradeActorsV7Common(ctx, sm, cache, lbRoot, epoch, lbts, config)
return err
}
@ -1255,9 +1264,10 @@ func upgradeActorsV7Common(
root cid.Cid, epoch abi.ChainEpoch, ts *types.TipSet,
config nv15.Config,
) (cid.Cid, error) {
buf := blockstore.NewTieredBstore(sm.ChainStore().StateBlockstore(), blockstore.NewMemorySync())
store := store.ActorStore(ctx, buf)
writeStore := blockstore.NewAutobatch(ctx, sm.ChainStore().StateBlockstore(), units.GiB)
// TODO: pretty sure we'd achieve nothing by doing this, confirm in review
//buf := blockstore.NewTieredBstore(sm.ChainStore().StateBlockstore(), writeStore)
store := store.ActorStore(ctx, writeStore)
// Load the state root.
var stateRoot types.StateRoot
if err := store.Get(ctx, root, &stateRoot); err != nil {
@ -1287,15 +1297,13 @@ func upgradeActorsV7Common(
return cid.Undef, xerrors.Errorf("failed to persist new state root: %w", err)
}
// Persist the new tree.
// Persist the new tree and shut down the flush worker
if err := writeStore.Flush(ctx); err != nil {
return cid.Undef, xerrors.Errorf("writeStore flush failed: %w", err)
}
{
from := buf
to := buf.Read()
if err := vm.Copy(ctx, from, to, newRoot); err != nil {
return cid.Undef, xerrors.Errorf("copying migrated tree: %w", err)
}
if err := writeStore.Shutdown(ctx); err != nil {
return cid.Undef, xerrors.Errorf("writeStore shutdown failed: %w", err)
}
return newRoot, nil

View File

@ -461,7 +461,7 @@ func (cg *ChainGen) NextTipSetFromMinersWithMessagesAndNulls(base *types.TipSet,
if et != nil {
// TODO: maybe think about passing in more real parameters to this?
wpost, err := cg.eppProvs[m].ComputeProof(context.TODO(), nil, nil)
wpost, err := cg.eppProvs[m].ComputeProof(context.TODO(), nil, nil, round, network.Version0)
if err != nil {
return nil, err
}
@ -620,7 +620,7 @@ func (mca mca) WalletSign(ctx context.Context, a address.Address, v []byte) (*cr
type WinningPoStProver interface {
GenerateCandidates(context.Context, abi.PoStRandomness, uint64) ([]uint64, error)
ComputeProof(context.Context, []proof5.SectorInfo, abi.PoStRandomness) ([]proof5.PoStProof, error)
ComputeProof(context.Context, []proof7.ExtendedSectorInfo, abi.PoStRandomness, abi.ChainEpoch, network.Version) ([]proof5.PoStProof, error)
}
type wppProvider struct{}
@ -629,7 +629,7 @@ func (wpp *wppProvider) GenerateCandidates(ctx context.Context, _ abi.PoStRandom
return []uint64{0}, nil
}
func (wpp *wppProvider) ComputeProof(context.Context, []proof5.SectorInfo, abi.PoStRandomness) ([]proof5.PoStProof, error) {
func (wpp *wppProvider) ComputeProof(context.Context, []proof7.ExtendedSectorInfo, abi.PoStRandomness, abi.ChainEpoch, network.Version) ([]proof5.PoStProof, error) {
return ValidWpostForTesting, nil
}
@ -692,11 +692,11 @@ func (m genFakeVerifier) VerifyReplicaUpdate(update proof7.ReplicaUpdateInfo) (b
panic("not supported")
}
func (m genFakeVerifier) VerifyWinningPoSt(ctx context.Context, info proof5.WinningPoStVerifyInfo) (bool, error) {
func (m genFakeVerifier) VerifyWinningPoSt(ctx context.Context, info proof7.WinningPoStVerifyInfo) (bool, error) {
panic("not supported")
}
func (m genFakeVerifier) VerifyWindowPoSt(ctx context.Context, info proof5.WindowPoStVerifyInfo) (bool, error) {
func (m genFakeVerifier) VerifyWindowPoSt(ctx context.Context, info proof7.WindowPoStVerifyInfo) (bool, error) {
panic("not supported")
}

View File

@ -491,10 +491,8 @@ func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, sys vm.Sysca
Actors: filcns.NewActorRegistry(),
Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version {
return nv
},
BaseFee: types.NewInt(0),
NetworkVersion: nv,
BaseFee: types.NewInt(0),
}
vm, err := vm.NewVM(ctx, &vmopt)
if err != nil {

View File

@ -94,10 +94,8 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.Syscal
Actors: filcns.NewActorRegistry(),
Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version {
return nv
},
BaseFee: types.NewInt(0),
NetworkVersion: nv,
BaseFee: types.NewInt(0),
}
vm, err := vm.NewVM(ctx, vmopt)
@ -510,13 +508,13 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.Syscal
// TODO: copied from actors test harness, deduplicate or remove from here
type fakeRand struct{}
func (fr *fakeRand) GetChainRandomness(ctx context.Context, rnv network.Version, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (fr *fakeRand) GetChainRandomness(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch * 1000))).Read(out) //nolint
return out, nil
}
func (fr *fakeRand) GetBeaconRandomness(ctx context.Context, rnv network.Version, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (fr *fakeRand) GetBeaconRandomness(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch))).Read(out) //nolint
return out, nil

View File

@ -173,10 +173,17 @@ type MessagePool struct {
sigValCache *lru.TwoQueueCache
nonceCache *lru.Cache
evtTypes [3]journal.EventType
journal journal.Journal
}
type nonceCacheKey struct {
tsk types.TipSetKey
addr address.Address
}
type msgSet struct {
msgs map[uint64]*types.SignedMessage
nextNonce uint64
@ -196,10 +203,10 @@ func ComputeMinRBF(curPrem abi.TokenAmount) abi.TokenAmount {
return types.BigAdd(minPrice, types.NewInt(1))
}
func CapGasFee(mff dtypes.DefaultMaxFeeFunc, msg *types.Message, sendSepc *api.MessageSendSpec) {
func CapGasFee(mff dtypes.DefaultMaxFeeFunc, msg *types.Message, sendSpec *api.MessageSendSpec) {
var maxFee abi.TokenAmount
if sendSepc != nil {
maxFee = sendSepc.MaxFee
if sendSpec != nil {
maxFee = sendSpec.MaxFee
}
if maxFee.Int == nil || maxFee.Equals(big.Zero()) {
mf, err := mff()
@ -361,6 +368,7 @@ func (ms *msgSet) toSlice() []*types.SignedMessage {
func New(ctx context.Context, api Provider, ds dtypes.MetadataDS, us stmgr.UpgradeSchedule, netName dtypes.NetworkName, j journal.Journal) (*MessagePool, error) {
cache, _ := lru.New2Q(build.BlsSignatureCacheSize)
verifcache, _ := lru.New2Q(build.VerifSigCacheSize)
noncecache, _ := lru.New(256)
cfg, err := loadConfig(ctx, ds)
if err != nil {
@ -386,6 +394,7 @@ func New(ctx context.Context, api Provider, ds dtypes.MetadataDS, us stmgr.Upgra
pruneCooldown: make(chan struct{}, 1),
blsSigCache: cache,
sigValCache: verifcache,
nonceCache: noncecache,
changes: lps.New(50),
localMsgs: namespace.Wrap(ds, datastore.NewKey(localMsgsDs)),
api: api,
@ -1016,11 +1025,23 @@ func (mp *MessagePool) getStateNonce(ctx context.Context, addr address.Address,
done := metrics.Timer(ctx, metrics.MpoolGetNonceDuration)
defer done()
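// The nonce cache below is keyed by (tipset key, sender address): a hit skips
// loading the actor from state, and entries for old heads simply stop being
// queried, so the small LRU never needs explicit invalidation.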
nk := nonceCacheKey{
tsk: ts.Key(),
addr: addr,
}
n, ok := mp.nonceCache.Get(nk)
if ok {
return n.(uint64), nil
}
act, err := mp.api.GetActorAfter(addr, ts)
if err != nil {
return 0, err
}
mp.nonceCache.Add(nk, act.Nonce)
return act.Nonce, nil
}

View File

@ -103,17 +103,21 @@ func (sr *stateRand) getChainRandomness(ctx context.Context, pers crypto.DomainS
return DrawRandomness(mtb.Ticket.VRFProof, pers, round, entropy)
}
type NetworkVersionGetter func(context.Context, abi.ChainEpoch) network.Version
type stateRand struct {
cs *store.ChainStore
blks []cid.Cid
beacon beacon.Schedule
cs *store.ChainStore
blks []cid.Cid
beacon beacon.Schedule
networkVersionGetter NetworkVersionGetter
}
func NewStateRand(cs *store.ChainStore, blks []cid.Cid, b beacon.Schedule) vm.Rand {
func NewStateRand(cs *store.ChainStore, blks []cid.Cid, b beacon.Schedule, networkVersionGetter NetworkVersionGetter) vm.Rand {
return &stateRand{
cs: cs,
blks: blks,
beacon: b,
cs: cs,
blks: blks,
beacon: b,
networkVersionGetter: networkVersionGetter,
}
}
@ -166,7 +170,9 @@ func (sr *stateRand) getBeaconRandomnessV3(ctx context.Context, pers crypto.Doma
return DrawRandomness(be.Data, pers, filecoinEpoch, entropy)
}
func (sr *stateRand) GetChainRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (sr *stateRand) GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
nv := sr.networkVersionGetter(ctx, filecoinEpoch)
if nv >= network.Version13 {
return sr.getChainRandomness(ctx, pers, filecoinEpoch, entropy, false)
}
@ -174,7 +180,9 @@ func (sr *stateRand) GetChainRandomness(ctx context.Context, nv network.Version,
return sr.getChainRandomness(ctx, pers, filecoinEpoch, entropy, true)
}
func (sr *stateRand) GetBeaconRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (sr *stateRand) GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
nv := sr.networkVersionGetter(ctx, filecoinEpoch)
if nv >= network.Version14 {
return sr.getBeaconRandomnessV3(ctx, pers, filecoinEpoch, entropy)
} else if nv == network.Version13 {

View File

@ -117,7 +117,7 @@ func MinerSectorInfo(ctx context.Context, sm *StateManager, maddr address.Addres
return mas.GetSector(sid)
}
func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwrapper.Verifier, sm *StateManager, st cid.Cid, maddr address.Address, rand abi.PoStRandomness) ([]builtin.SectorInfo, error) {
func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwrapper.Verifier, sm *StateManager, st cid.Cid, maddr address.Address, rand abi.PoStRandomness) ([]builtin.ExtendedSectorInfo, error) {
act, err := sm.LoadActorRaw(ctx, maddr, st)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
@ -203,12 +203,13 @@ func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwra
return nil, xerrors.Errorf("loading proving sectors: %w", err)
}
out := make([]builtin.SectorInfo, len(sectors))
out := make([]builtin.ExtendedSectorInfo, len(sectors))
for i, sinfo := range sectors {
out[i] = builtin.SectorInfo{
out[i] = builtin.ExtendedSectorInfo{
SealProof: sinfo.SealProof,
SectorNumber: sinfo.SectorNumber,
SealedCID: sinfo.SealedCID,
SectorKey: sinfo.SectorKeyCID,
}
}
@ -358,7 +359,7 @@ func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule
return nil, xerrors.Errorf("failed to get randomness for winning post: %w", err)
}
nv := sm.GetNtwkVersion(ctx, ts.Height())
nv := sm.GetNetworkVersion(ctx, ts.Height())
sectors, err := GetSectorsForWinningPoSt(ctx, nv, pv, sm, lbst, maddr, prand)
if err != nil {
@ -420,7 +421,7 @@ func MinerEligibleToMine(ctx context.Context, sm *StateManager, addr address.Add
hmp, err := minerHasMinPower(ctx, sm, addr, lookbackTs)
// TODO: We're blurring the lines between a "runtime network version" and a "Lotus upgrade epoch", is that unavoidable?
if sm.GetNtwkVersion(ctx, baseTs.Height()) <= network.Version3 {
if sm.GetNetworkVersion(ctx, baseTs.Height()) <= network.Version3 {
return hmp, err
}

View File

@ -75,12 +75,12 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
vmopt := &vm.VMOpts{
StateBase: bstate,
Epoch: pheight + 1,
Rand: rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon),
Rand: rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion),
Bstore: sm.cs.StateBlockstore(),
Actors: sm.tsExec.NewActorRegistry(),
Syscalls: sm.Syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
NetworkVersion: sm.GetNetworkVersion(ctx, pheight+1),
BaseFee: types.NewInt(0),
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
@ -186,7 +186,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
return nil, fmt.Errorf("failed to handle fork: %w", err)
}
r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon)
r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion)
if span.IsRecordingEvents() {
span.AddAttributes(
@ -204,7 +204,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
Actors: sm.tsExec.NewActorRegistry(),
Syscalls: sm.Syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
NetworkVersion: sm.GetNetworkVersion(ctx, ts.Height()+1),
BaseFee: ts.Blocks()[0].ParentBaseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}

View File

@ -8,6 +8,8 @@ import (
"sync"
"time"
"github.com/filecoin-project/specs-actors/v7/actors/migration/nv15"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
@ -15,8 +17,6 @@ import (
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/specs-actors/v3/actors/migration/nv10"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
init_ "github.com/filecoin-project/lotus/chain/actors/builtin/init"
@ -211,7 +211,7 @@ func (sm *StateManager) hasExpensiveFork(height abi.ChainEpoch) bool {
return ok
}
func runPreMigration(ctx context.Context, sm *StateManager, fn PreMigrationFunc, cache *nv10.MemMigrationCache, ts *types.TipSet) {
func runPreMigration(ctx context.Context, sm *StateManager, fn PreMigrationFunc, cache *nv15.MemMigrationCache, ts *types.TipSet) {
height := ts.Height()
parent := ts.ParentState()

View File

@ -4,6 +4,8 @@ import (
"context"
"sync"
"github.com/filecoin-project/specs-actors/v7/actors/migration/nv15"
"github.com/filecoin-project/lotus/chain/rand"
"github.com/filecoin-project/lotus/chain/beacon"
@ -18,10 +20,6 @@ import (
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/network"
// Used for genesis.
msig0 "github.com/filecoin-project/specs-actors/actors/builtin/multisig"
"github.com/filecoin-project/specs-actors/v3/actors/migration/nv10"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/builtin/paych"
@ -30,6 +28,9 @@ import (
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
// Used for genesis.
msig0 "github.com/filecoin-project/specs-actors/actors/builtin/multisig"
)
const LookbackNoLimit = api.LookbackNoLimit
@ -53,7 +54,7 @@ type versionSpec struct {
type migration struct {
upgrade MigrationFunc
preMigrations []PreMigration
cache *nv10.MemMigrationCache
cache *nv15.MemMigrationCache
}
type Executor interface {
@ -121,7 +122,7 @@ func NewStateManager(cs *store.ChainStore, exec Executor, sys vm.SyscallBuilder,
migration := &migration{
upgrade: upgrade.Migration,
preMigrations: upgrade.PreMigrations,
cache: nv10.NewMemMigrationCache(),
cache: nv15.NewMemMigrationCache(),
}
stateMigrations[upgrade.Height] = migration
}
@ -356,7 +357,7 @@ func (sm *StateManager) VMConstructor() func(context.Context, *vm.VMOpts) (*vm.V
}
}
func (sm *StateManager) GetNtwkVersion(ctx context.Context, height abi.ChainEpoch) network.Version {
func (sm *StateManager) GetNetworkVersion(ctx context.Context, height abi.ChainEpoch) network.Version {
// The epochs here are the _last_ epoch for every version, or -1 if the
// version is disabled.
for _, spec := range sm.networkVersions {
@ -377,10 +378,9 @@ func (sm *StateManager) GetRandomnessFromBeacon(ctx context.Context, personaliza
return nil, xerrors.Errorf("loading tipset %s: %w", tsk, err)
}
r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon)
rnv := sm.GetNtwkVersion(ctx, randEpoch)
r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon, sm.GetNetworkVersion)
return r.GetBeaconRandomness(ctx, rnv, personalization, randEpoch, entropy)
return r.GetBeaconRandomness(ctx, personalization, randEpoch, entropy)
}
@ -390,8 +390,7 @@ func (sm *StateManager) GetRandomnessFromTickets(ctx context.Context, personaliz
return nil, xerrors.Errorf("loading tipset key: %w", err)
}
r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon)
rnv := sm.GetNtwkVersion(ctx, randEpoch)
r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon, sm.GetNetworkVersion)
return r.GetChainRandomness(ctx, rnv, personalization, randEpoch, entropy)
return r.GetChainRandomness(ctx, personalization, randEpoch, entropy)
}

View File

@ -79,7 +79,7 @@ func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch,
// future. It's not guaranteed to be accurate... but that's fine.
}
r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon)
r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion)
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: height,
@ -88,7 +88,7 @@ func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch,
Actors: sm.tsExec.NewActorRegistry(),
Syscalls: sm.Syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
NetworkVersion: sm.GetNetworkVersion(ctx, height),
BaseFee: ts.Blocks()[0].ParentBaseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
@ -128,7 +128,7 @@ func LookbackStateGetterForTipset(sm *StateManager, ts *types.TipSet) vm.Lookbac
func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.TipSet, round abi.ChainEpoch) (*types.TipSet, cid.Cid, error) {
var lbr abi.ChainEpoch
lb := policy.GetWinningPoStSectorSetLookback(sm.GetNtwkVersion(ctx, round))
lb := policy.GetWinningPoStSectorSetLookback(sm.GetNetworkVersion(ctx, round))
if round > lb {
lbr = round - lb
}

View File

@ -23,6 +23,7 @@ import (
"github.com/filecoin-project/go-state-types/abi"
proof2 "github.com/filecoin-project/specs-actors/v2/actors/runtime/proof"
proof7 "github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
@ -103,7 +104,7 @@ func prepSyncTest(t testing.TB, h int) *syncTestUtil {
ctx: ctx,
cancel: cancel,
mn: mocknet.New(ctx),
mn: mocknet.New(),
g: g,
us: filcns.DefaultUpgradeSchedule(),
}
@ -157,7 +158,7 @@ func prepSyncTestWithV5Height(t testing.TB, h int, v5height abi.ChainEpoch) *syn
ctx: ctx,
cancel: cancel,
mn: mocknet.New(ctx),
mn: mocknet.New(),
g: g,
us: sched,
}
@ -550,7 +551,7 @@ func (wpp badWpp) GenerateCandidates(context.Context, abi.PoStRandomness, uint64
return []uint64{1}, nil
}
func (wpp badWpp) ComputeProof(context.Context, []proof2.SectorInfo, abi.PoStRandomness) ([]proof2.PoStProof, error) {
func (wpp badWpp) ComputeProof(context.Context, []proof7.ExtendedSectorInfo, abi.PoStRandomness, abi.ChainEpoch, network.Version) ([]proof2.PoStProof, error) {
return []proof2.PoStProof{
{
PoStProof: abi.RegisteredPoStProof_StackedDrgWinning2KiBV1,

View File

@ -10,8 +10,9 @@ import (
addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/lotus/build"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/build"
)
type GasCharge struct {
@ -81,7 +82,9 @@ type Pricelist interface {
OnVerifyConsensusFault() GasCharge
}
var prices = map[abi.ChainEpoch]Pricelist{
// Prices are the price lists per starting epoch. Public for testing purposes
// (concretely to allow the test vector runner to rebase prices).
var Prices = map[abi.ChainEpoch]Pricelist{
abi.ChainEpoch(0): &pricelistV0{
computeGasMulti: 1,
storageGasMulti: 1000,
@ -206,6 +209,8 @@ var prices = map[abi.ChainEpoch]Pricelist{
},
verifyPostDiscount: false,
verifyConsensusFault: 495422,
verifyReplicaUpdate: 36316136,
},
}
@ -214,8 +219,8 @@ func PricelistByEpoch(epoch abi.ChainEpoch) Pricelist {
// since we are storing the prices as map or epoch to price
// we need to get the price with the highest epoch that is lower or equal to the `epoch` arg
bestEpoch := abi.ChainEpoch(0)
bestPrice := prices[bestEpoch]
for e, pl := range prices {
bestPrice := Prices[bestEpoch]
for e, pl := range Prices {
// if `e` happened after `bestEpoch` and `e` is earlier or equal to the target `epoch`
if e > bestEpoch && e <= epoch {
bestEpoch = e

View File

@ -120,6 +120,8 @@ type pricelistV0 struct {
verifyPostLookup map[abi.RegisteredPoStProof]scalingCost
verifyPostDiscount bool
verifyConsensusFault int64
verifyReplicaUpdate int64
}
var _ Pricelist = (*pricelistV0)(nil)
@ -229,8 +231,7 @@ func (pl *pricelistV0) OnVerifyAggregateSeals(aggregate proof7.AggregateSealVeri
// OnVerifyReplicaUpdate
func (pl *pricelistV0) OnVerifyReplicaUpdate(update proof7.ReplicaUpdateInfo) GasCharge {
// TODO: do the thing
return GasCharge{}
return newGasCharge("OnVerifyReplicaUpdate", pl.verifyReplicaUpdate, 0)
}
// OnVerifyPost

View File

@ -1,7 +1,6 @@
package vm
import (
"context"
"fmt"
"io"
"testing"
@ -136,9 +135,7 @@ func TestInvokerBasic(t *testing.T) {
{
_, aerr := code[1](&Runtime{
vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version {
return network.Version0
}},
vm: &VM{networkVersion: network.Version0},
Message: &basicRtMessage{},
}, []byte{99})
if aerrors.IsFatal(aerr) {
@ -149,9 +146,7 @@ func TestInvokerBasic(t *testing.T) {
{
_, aerr := code[1](&Runtime{
vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version {
return network.Version7
}},
vm: &VM{networkVersion: network.Version7},
Message: &basicRtMessage{},
}, []byte{99})
if aerrors.IsFatal(aerr) {

View File

@ -92,7 +92,7 @@ func (rt *Runtime) BaseFee() abi.TokenAmount {
}
func (rt *Runtime) NetworkVersion() network.Version {
return rt.vm.GetNtwkVersion(rt.ctx, rt.CurrEpoch())
return rt.vm.networkVersion
}
func (rt *Runtime) TotalFilCircSupply() abi.TokenAmount {
@ -224,8 +224,7 @@ func (rt *Runtime) GetActorCodeCID(addr address.Address) (ret cid.Cid, ok bool)
}
func (rt *Runtime) GetRandomnessFromTickets(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness {
rnv := rt.vm.ntwkVersion(rt.ctx, randEpoch)
res, err := rt.vm.rand.GetChainRandomness(rt.ctx, rnv, personalization, randEpoch, entropy)
res, err := rt.vm.rand.GetChainRandomness(rt.ctx, personalization, randEpoch, entropy)
if err != nil {
panic(aerrors.Fatalf("could not get ticket randomness: %s", err))
@ -234,8 +233,7 @@ func (rt *Runtime) GetRandomnessFromTickets(personalization crypto.DomainSeparat
}
func (rt *Runtime) GetRandomnessFromBeacon(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness {
rnv := rt.vm.ntwkVersion(rt.ctx, randEpoch)
res, err := rt.vm.rand.GetBeaconRandomness(rt.ctx, rnv, personalization, randEpoch, entropy)
res, err := rt.vm.rand.GetBeaconRandomness(rt.ctx, personalization, randEpoch, entropy)
if err != nil {
panic(aerrors.Fatalf("could not get beacon randomness: %s", err))

View File

@ -245,8 +245,8 @@ func (ss *syscallShim) workerKeyAtLookback(height abi.ChainEpoch) (address.Addre
return ResolveToKeyAddr(ss.cstate, ss.cst, info.Worker)
}
func (ss *syscallShim) VerifyPoSt(proof proof5.WindowPoStVerifyInfo) error {
ok, err := ss.verifier.VerifyWindowPoSt(context.TODO(), proof)
func (ss *syscallShim) VerifyPoSt(info proof5.WindowPoStVerifyInfo) error {
ok, err := ss.verifier.VerifyWindowPoSt(context.TODO(), info)
if err != nil {
return err
}

View File

@ -169,7 +169,7 @@ func (vm *VM) makeRuntime(ctx context.Context, msg *types.Message, parent *Runti
}
vmm.From = resF
if vm.ntwkVersion(ctx, vm.blockHeight) <= network.Version3 {
if vm.networkVersion <= network.Version3 {
rt.Message = &vmm
} else {
resT, _ := rt.ResolveAddress(msg.To)
@ -209,7 +209,7 @@ type VM struct {
areg *ActorRegistry
rand Rand
circSupplyCalc CircSupplyCalculator
ntwkVersion NtwkVersionGetter
networkVersion network.Version
baseFee abi.TokenAmount
lbStateGet LookbackStateGetter
baseCircSupply abi.TokenAmount
@ -225,7 +225,7 @@ type VMOpts struct {
Actors *ActorRegistry
Syscalls SyscallBuilder
CircSupplyCalc CircSupplyCalculator
NtwkVersion NtwkVersionGetter // TODO: stebalien: In what cases do we actually need this? It seems like even when creating new networks we want to use the 'global'/build-default version getter
NetworkVersion network.Version
BaseFee abi.TokenAmount
LookbackState LookbackStateGetter
}
@ -251,7 +251,7 @@ func NewVM(ctx context.Context, opts *VMOpts) (*VM, error) {
areg: opts.Actors,
rand: opts.Rand, // TODO: Probably should be a syscall
circSupplyCalc: opts.CircSupplyCalc,
ntwkVersion: opts.NtwkVersion,
networkVersion: opts.NetworkVersion,
Syscalls: opts.Syscalls,
baseFee: opts.BaseFee,
baseCircSupply: baseCirc,
@ -260,8 +260,8 @@ func NewVM(ctx context.Context, opts *VMOpts) (*VM, error) {
}
type Rand interface {
GetChainRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
GetBeaconRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
}
type ApplyRet struct {
@ -313,7 +313,7 @@ func (vm *VM) send(ctx context.Context, msg *types.Message, parent *Runtime,
return nil, aerrors.Wrapf(err, "could not create account")
}
toActor = a
if vm.ntwkVersion(ctx, vm.blockHeight) <= network.Version3 {
if vm.networkVersion <= network.Version3 {
// Leave the rt.Message as is
} else {
nmsg := Message{
@ -340,7 +340,7 @@ func (vm *VM) send(ctx context.Context, msg *types.Message, parent *Runtime,
defer rt.chargeGasSafe(newGasCharge("OnMethodInvocationDone", 0, 0))
if types.BigCmp(msg.Value, types.NewInt(0)) != 0 {
if err := vm.transfer(msg.From, msg.To, msg.Value, vm.ntwkVersion(ctx, vm.blockHeight)); err != nil {
if err := vm.transfer(msg.From, msg.To, msg.Value, vm.networkVersion); err != nil {
return nil, aerrors.Wrap(err, "failed to transfer funds")
}
}
@ -617,7 +617,7 @@ func (vm *VM) ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet,
}
func (vm *VM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Message, errcode exitcode.ExitCode) (bool, error) {
if vm.ntwkVersion(ctx, vm.blockHeight) <= network.Version12 {
if vm.networkVersion <= network.Version12 {
// Check to see if we should burn funds. We avoid burning on successful
// window post. This won't catch _indirect_ window post calls, but this
// is the best we can get for now.
@ -824,10 +824,6 @@ func (vm *VM) StateTree() types.StateTree {
return vm.cstate
}
func (vm *VM) SetBlockHeight(h abi.ChainEpoch) {
vm.blockHeight = h
}
func (vm *VM) Invoke(act *types.Actor, rt *Runtime, method abi.MethodNum, params []byte) ([]byte, aerrors.ActorError) {
ctx, span := trace.StartSpan(rt.ctx, "vm.Invoke")
defer span.End()
@ -855,13 +851,9 @@ func (vm *VM) SetInvoker(i *ActorRegistry) {
vm.areg = i
}
func (vm *VM) GetNtwkVersion(ctx context.Context, ce abi.ChainEpoch) network.Version {
return vm.ntwkVersion(ctx, ce)
}
func (vm *VM) GetCircSupply(ctx context.Context) (abi.TokenAmount, error) {
// Before v15, this was recalculated on each invocation as the state tree was mutated
if vm.GetNtwkVersion(ctx, vm.blockHeight) <= network.Version14 {
if vm.networkVersion <= network.Version14 {
return vm.circSupplyCalc(ctx, vm.blockHeight, vm.cstate)
}

View File

@ -92,6 +92,7 @@ var clientCmd = &cli.Command{
WithCategory("data", clientLocalCmd),
WithCategory("data", clientStat),
WithCategory("retrieval", clientFindCmd),
WithCategory("retrieval", clientQueryRetrievalAskCmd),
WithCategory("retrieval", clientRetrieveCmd),
WithCategory("retrieval", clientRetrieveCatCmd),
WithCategory("retrieval", clientRetrieveLsCmd),
@ -1030,6 +1031,67 @@ var clientFindCmd = &cli.Command{
},
}
var clientQueryRetrievalAskCmd = &cli.Command{
Name: "retrieval-ask",
Usage: "Get a miner's retrieval ask",
ArgsUsage: "[minerAddress] [data CID]",
Flags: []cli.Flag{
&cli.Int64Flag{
Name: "size",
Usage: "data size in bytes",
},
},
Action: func(cctx *cli.Context) error {
afmt := NewAppFmt(cctx.App)
if cctx.NArg() != 2 {
afmt.Println("Usage: retrieval-ask [minerAddress] [data CID]")
return nil
}
maddr, err := address.NewFromString(cctx.Args().First())
if err != nil {
return err
}
dataCid, err := cid.Parse(cctx.Args().Get(1))
if err != nil {
return fmt.Errorf("parsing data cid: %w", err)
}
api, closer, err := GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := ReqContext(cctx)
ask, err := api.ClientMinerQueryOffer(ctx, maddr, dataCid, nil)
if err != nil {
return err
}
afmt.Printf("Ask: %s\n", maddr)
afmt.Printf("Unseal price: %s\n", types.FIL(ask.UnsealPrice))
afmt.Printf("Price per byte: %s\n", types.FIL(ask.PricePerByte))
afmt.Printf("Payment interval: %s\n", types.SizeStr(types.NewInt(ask.PaymentInterval)))
afmt.Printf("Payment interval increase: %s\n", types.SizeStr(types.NewInt(ask.PaymentIntervalIncrease)))
size := cctx.Uint64("size")
if size == 0 {
if ask.Size == 0 {
return nil
}
size = ask.Size
afmt.Printf("Size: %s\n", types.SizeStr(types.NewInt(ask.Size)))
}
transferPrice := types.BigMul(ask.PricePerByte, types.NewInt(size))
totalPrice := types.BigAdd(ask.UnsealPrice, transferPrice)
afmt.Printf("Total price for %d bytes: %s\n", size, types.FIL(totalPrice))
return nil
},
}
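A rough worked example of the quote arithmetic above, with hypothetical numbers: for an unseal price of 0 FIL, a price per byte of 2 attoFIL and --size 1048576 (1 MiB), the transfer price is 2 × 1,048,576 = 2,097,152 attoFIL, so the total printed is 2,097,152 attoFIL (about 2.1e-12 FIL).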
var clientListRetrievalsCmd = &cli.Command{
Name: "list-retrievals",
Usage: "List retrieval market deals",

View File

@ -36,6 +36,8 @@ var NetCmd = &cli.Command{
NetReachability,
NetBandwidthCmd,
NetBlockCmd,
NetStatCmd,
NetLimitCmd,
},
}
@ -606,3 +608,103 @@ var NetBlockListCmd = &cli.Command{
return nil
},
}
var NetStatCmd = &cli.Command{
Name: "stat",
Usage: "Report resource usage for a scope",
ArgsUsage: "scope",
Description: `Report resource usage for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
- all -- reports the resource usage for all currently active scopes.
`,
Action: func(cctx *cli.Context) error {
api, closer, err := GetAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := ReqContext(cctx)
args := cctx.Args().Slice()
if len(args) != 1 {
return xerrors.Errorf("must specify exactly one scope")
}
scope := args[0]
result, err := api.NetStat(ctx, scope)
if err != nil {
return err
}
enc := json.NewEncoder(os.Stdout)
return enc.Encode(result)
},
}
var NetLimitCmd = &cli.Command{
Name: "limit",
Usage: "Get or set resource limits for a scope",
ArgsUsage: "scope [limit]",
Description: `Get or set resource limits for a scope.
The scope can be one of the following:
- system -- limits for the system aggregate.
- transient -- limits for the transient scope.
- svc:<service> -- limits for a specific service.
- proto:<proto> -- limits for a specific protocol.
- peer:<peer> -- limits for a specific peer.
The limit is json-formatted, with the same structure as the limits file.
`,
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "set",
Usage: "set the limit for a scope",
},
},
Action: func(cctx *cli.Context) error {
api, closer, err := GetAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := ReqContext(cctx)
args := cctx.Args().Slice()
if cctx.Bool("set") {
if len(args) != 2 {
return xerrors.Errorf("must specify exactly a scope and a limit")
}
scope := args[0]
limitStr := args[1]
var limit atypes.NetLimit
err := json.Unmarshal([]byte(limitStr), &limit)
if err != nil {
return xerrors.Errorf("error decoding limit: %w", err)
}
return api.NetSetLimit(ctx, scope, limit)
}
if len(args) != 1 {
return xerrors.Errorf("must specify exactly one scope")
}
scope := args[0]
result, err := api.NetLimit(ctx, scope)
if err != nil {
return err
}
enc := json.NewEncoder(os.Stdout)
return enc.Encode(result)
},
}
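A minimal usage sketch for the two commands above, assuming they are mounted under the usual `net` command group; the limit JSON below is a placeholder, since the exact atypes.NetLimit fields are not shown in this diff:
lotus net stat system
lotus net limit peer:<peerID>
lotus net limit --set peer:<peerID> '<limit JSON>'
Both commands write their result to stdout as a single JSON document.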

View File

@ -86,9 +86,10 @@ func (cv *cachingVerifier) VerifySeal(svi proof2.SealVerifyInfo) (bool, error) {
}, &svi)
}
func (cv *cachingVerifier) VerifyWinningPoSt(ctx context.Context, info proof2.WinningPoStVerifyInfo) (bool, error) {
func (cv *cachingVerifier) VerifyWinningPoSt(ctx context.Context, info proof7.WinningPoStVerifyInfo) (bool, error) {
return cv.backend.VerifyWinningPoSt(ctx, info)
}
func (cv *cachingVerifier) VerifyWindowPoSt(ctx context.Context, info proof2.WindowPoStVerifyInfo) (bool, error) {
return cv.withCache(func() (bool, error) {
return cv.backend.VerifyWindowPoSt(ctx, info)

View File

@ -12,6 +12,8 @@ import (
"time"
saproof2 "github.com/filecoin-project/specs-actors/v2/actors/runtime/proof"
"github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
saproof7 "github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
"github.com/docker/go-units"
logging "github.com/ipfs/go-log/v2"
@ -260,7 +262,8 @@ var sealBenchCmd = &cli.Command{
sectorNumber := c.Int("num-sectors")
var sealTimings []SealingResult
var sealedSectors []saproof2.SectorInfo
var extendedSealedSectors []saproof7.ExtendedSectorInfo
var sealedSectors []saproof7.SectorInfo
if robench == "" {
var err error
@ -269,7 +272,7 @@ var sealBenchCmd = &cli.Command{
PreCommit2: 1,
Commit: 1,
}
sealTimings, sealedSectors, err = runSeals(sb, sbfs, sectorNumber, parCfg, mid, sectorSize, []byte(c.String("ticket-preimage")), c.String("save-commit2-input"), skipc2, c.Bool("skip-unseal"))
sealTimings, extendedSealedSectors, err = runSeals(sb, sbfs, sectorNumber, parCfg, mid, sectorSize, []byte(c.String("ticket-preimage")), c.String("save-commit2-input"), skipc2, c.Bool("skip-unseal"))
if err != nil {
return xerrors.Errorf("failed to run seals: %w", err)
}
@ -296,7 +299,13 @@ var sealBenchCmd = &cli.Command{
}
for _, s := range genm.Sectors {
sealedSectors = append(sealedSectors, saproof2.SectorInfo{
extendedSealedSectors = append(extendedSealedSectors, saproof7.ExtendedSectorInfo{
SealedCID: s.CommR,
SectorNumber: s.SectorID,
SealProof: s.ProofType,
SectorKey: nil,
})
sealedSectors = append(sealedSectors, proof.SectorInfo{
SealedCID: s.CommR,
SectorNumber: s.SectorID,
SealProof: s.ProofType,
@ -325,20 +334,20 @@ var sealBenchCmd = &cli.Command{
return err
}
fcandidates, err := ffiwrapper.ProofVerifier.GenerateWinningPoStSectorChallenge(context.TODO(), wipt, mid, challenge[:], uint64(len(sealedSectors)))
fcandidates, err := ffiwrapper.ProofVerifier.GenerateWinningPoStSectorChallenge(context.TODO(), wipt, mid, challenge[:], uint64(len(extendedSealedSectors)))
if err != nil {
return err
}
candidates := make([]saproof2.SectorInfo, len(fcandidates))
xcandidates := make([]saproof7.ExtendedSectorInfo, len(fcandidates))
for i, fcandidate := range fcandidates {
candidates[i] = sealedSectors[fcandidate]
xcandidates[i] = extendedSealedSectors[fcandidate]
}
gencandidates := time.Now()
log.Info("computing winning post snark (cold)")
proof1, err := sb.GenerateWinningPoSt(context.TODO(), mid, candidates, challenge[:])
proof1, err := sb.GenerateWinningPoSt(context.TODO(), mid, xcandidates, challenge[:])
if err != nil {
return err
}
@ -346,14 +355,23 @@ var sealBenchCmd = &cli.Command{
winningpost1 := time.Now()
log.Info("computing winning post snark (hot)")
proof2, err := sb.GenerateWinningPoSt(context.TODO(), mid, candidates, challenge[:])
proof2, err := sb.GenerateWinningPoSt(context.TODO(), mid, xcandidates, challenge[:])
if err != nil {
return err
}
candidates := make([]saproof7.SectorInfo, len(xcandidates))
for i, xsi := range xcandidates {
candidates[i] = saproof7.SectorInfo{
SealedCID: xsi.SealedCID,
SectorNumber: xsi.SectorNumber,
SealProof: xsi.SealProof,
}
}
winnningpost2 := time.Now()
pvi1 := saproof2.WinningPoStVerifyInfo{
pvi1 := saproof7.WinningPoStVerifyInfo{
Randomness: abi.PoStRandomness(challenge[:]),
Proofs: proof1,
ChallengedSectors: candidates,
@ -369,7 +387,7 @@ var sealBenchCmd = &cli.Command{
verifyWinningPost1 := time.Now()
pvi2 := saproof2.WinningPoStVerifyInfo{
pvi2 := saproof7.WinningPoStVerifyInfo{
Randomness: abi.PoStRandomness(challenge[:]),
Proofs: proof2,
ChallengedSectors: candidates,
@ -386,7 +404,7 @@ var sealBenchCmd = &cli.Command{
verifyWinningPost2 := time.Now()
log.Info("computing window post snark (cold)")
wproof1, _, err := sb.GenerateWindowPoSt(context.TODO(), mid, sealedSectors, challenge[:])
wproof1, _, err := sb.GenerateWindowPoSt(context.TODO(), mid, extendedSealedSectors, challenge[:])
if err != nil {
return err
}
@ -394,7 +412,7 @@ var sealBenchCmd = &cli.Command{
windowpost1 := time.Now()
log.Info("computing window post snark (hot)")
wproof2, _, err := sb.GenerateWindowPoSt(context.TODO(), mid, sealedSectors, challenge[:])
wproof2, _, err := sb.GenerateWindowPoSt(context.TODO(), mid, extendedSealedSectors, challenge[:])
if err != nil {
return err
}
@ -502,10 +520,10 @@ type ParCfg struct {
Commit int
}
func runSeals(sb *ffiwrapper.Sealer, sbfs *basicfs.Provider, numSectors int, par ParCfg, mid abi.ActorID, sectorSize abi.SectorSize, ticketPreimage []byte, saveC2inp string, skipc2, skipunseal bool) ([]SealingResult, []saproof2.SectorInfo, error) {
func runSeals(sb *ffiwrapper.Sealer, sbfs *basicfs.Provider, numSectors int, par ParCfg, mid abi.ActorID, sectorSize abi.SectorSize, ticketPreimage []byte, saveC2inp string, skipc2, skipunseal bool) ([]SealingResult, []saproof7.ExtendedSectorInfo, error) {
var pieces []abi.PieceInfo
sealTimings := make([]SealingResult, numSectors)
sealedSectors := make([]saproof2.SectorInfo, numSectors)
sealedSectors := make([]saproof7.ExtendedSectorInfo, numSectors)
preCommit2Sema := make(chan struct{}, par.PreCommit2)
commitSema := make(chan struct{}, par.Commit)
@ -579,10 +597,11 @@ func runSeals(sb *ffiwrapper.Sealer, sbfs *basicfs.Provider, numSectors int, par
precommit2 := time.Now()
<-preCommit2Sema
sealedSectors[i] = saproof2.SectorInfo{
sealedSectors[i] = saproof7.ExtendedSectorInfo{
SealProof: sid.ProofType,
SectorNumber: i,
SealedCID: cids.Sealed,
SectorKey: nil,
}
seed := lapi.SealSeed{

View File

@ -470,6 +470,8 @@ var stateList = []stateMeta{
{col: color.FgBlue, state: sealing.Empty},
{col: color.FgBlue, state: sealing.WaitDeals},
{col: color.FgBlue, state: sealing.AddPiece},
{col: color.FgBlue, state: sealing.SnapDealsWaitDeals},
{col: color.FgBlue, state: sealing.SnapDealsAddPiece},
{col: color.FgRed, state: sealing.UndefinedSectorState},
{col: color.FgYellow, state: sealing.Packing},
@ -488,6 +490,12 @@ var stateList = []stateMeta{
{col: color.FgYellow, state: sealing.SubmitCommitAggregate},
{col: color.FgYellow, state: sealing.CommitAggregateWait},
{col: color.FgYellow, state: sealing.FinalizeSector},
{col: color.FgYellow, state: sealing.SnapDealsPacking},
{col: color.FgYellow, state: sealing.UpdateReplica},
{col: color.FgYellow, state: sealing.ProveReplicaUpdate},
{col: color.FgYellow, state: sealing.SubmitReplicaUpdate},
{col: color.FgYellow, state: sealing.ReplicaUpdateWait},
{col: color.FgYellow, state: sealing.FinalizeReplicaUpdate},
{col: color.FgCyan, state: sealing.Terminating},
{col: color.FgCyan, state: sealing.TerminateWait},
@ -495,6 +503,7 @@ var stateList = []stateMeta{
{col: color.FgCyan, state: sealing.TerminateFailed},
{col: color.FgCyan, state: sealing.Removing},
{col: color.FgCyan, state: sealing.Removed},
{col: color.FgCyan, state: sealing.AbortUpgrade},
{col: color.FgRed, state: sealing.FailedUnrecoverable},
{col: color.FgRed, state: sealing.AddPieceFailed},
@ -512,6 +521,9 @@ var stateList = []stateMeta{
{col: color.FgRed, state: sealing.RemoveFailed},
{col: color.FgRed, state: sealing.DealsExpired},
{col: color.FgRed, state: sealing.RecoverDealIDs},
{col: color.FgRed, state: sealing.SnapDealsAddPieceFailed},
{col: color.FgRed, state: sealing.SnapDealsDealsExpired},
{col: color.FgRed, state: sealing.ReplicaUpdateFailed},
}
func init() {

View File

@ -638,6 +638,7 @@ var dataTransfersCmd = &cli.Command{
transfersListCmd,
marketRestartTransfer,
marketCancelTransfer,
transfersDiagnosticsCmd,
},
}
@ -857,6 +858,38 @@ var transfersListCmd = &cli.Command{
},
}
var transfersDiagnosticsCmd = &cli.Command{
Name: "diagnostics",
Usage: "Get detailed diagnostics on active transfers with a specific peer",
Flags: []cli.Flag{},
Action: func(cctx *cli.Context) error {
if !cctx.Args().Present() {
return cli.ShowCommandHelp(cctx, cctx.Command.Name)
}
api, closer, err := lcli.GetMarketsAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
targetPeer, err := peer.Decode(cctx.Args().First())
if err != nil {
return err
}
diagnostics, err := api.MarketDataTransferDiagnostics(ctx, targetPeer)
if err != nil {
return err
}
out, err := json.MarshalIndent(diagnostics, "", "\t")
if err != nil {
return err
}
fmt.Println(string(out))
return nil
},
}
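Usage sketch, with the parent command group name assumed from the surrounding dataTransfersCmd definition:
lotus-miner data-transfers diagnostics <peerID>
The diagnostics returned by MarketDataTransferDiagnostics are printed as indented JSON.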
var dealsPendingPublish = &cli.Command{
Name: "pending-publish",
Usage: "list deals waiting in publish queue",

View File

@ -11,6 +11,9 @@ import (
"strings"
"time"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/docker/go-units"
"github.com/fatih/color"
cbor "github.com/ipfs/go-ipld-cbor"
@ -20,6 +23,7 @@ import (
"github.com/filecoin-project/go-bitfield"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/network"
miner5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/miner"
"github.com/filecoin-project/lotus/api"
@ -50,11 +54,13 @@ var sectorsCmd = &cli.Command{
sectorsExtendCmd,
sectorsTerminateCmd,
sectorsRemoveCmd,
sectorsSnapUpCmd,
sectorsMarkForUpgradeCmd,
sectorsStartSealCmd,
sectorsSealDelayCmd,
sectorsCapacityCollateralCmd,
sectorsBatching,
sectorsRefreshPieceMatchingCmd,
},
}
@ -1476,6 +1482,44 @@ var sectorsRemoveCmd = &cli.Command{
},
}
var sectorsSnapUpCmd = &cli.Command{
Name: "snap-up",
Usage: "Mark a committed capacity sector to be filled with deals",
ArgsUsage: "<sectorNum>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number"))
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
api, nCloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer nCloser()
ctx := lcli.ReqContext(cctx)
nv, err := api.StateNetworkVersion(ctx, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("failed to get network version: %w", err)
}
if nv < network.Version15 {
return xerrors.Errorf("snap deals upgrades enabled in network v15")
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
if err != nil {
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorMarkForUpgrade(ctx, abi.SectorNumber(id), true)
},
}
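Usage sketch for the new command, assuming the standard miner CLI binary:
lotus-miner sectors snap-up <sectorNum>
It refuses to run before network version 15 and otherwise calls SectorMarkForUpgrade with the snap flag set to true, queueing the CC sector for the snap-deals pipeline.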
var sectorsMarkForUpgradeCmd = &cli.Command{
Name: "mark-for-upgrade",
Usage: "Mark a committed capacity sector for replacement by a sector with deals",
@ -1490,14 +1534,40 @@ var sectorsMarkForUpgradeCmd = &cli.Command{
return err
}
defer closer()
api, nCloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer nCloser()
ctx := lcli.ReqContext(cctx)
nv, err := api.StateNetworkVersion(ctx, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("failed to get network version: %w", err)
}
if nv >= network.Version15 {
return xerrors.Errorf("classic cc upgrades disabled v15 and beyond, use `snap-up`")
}
// disable mark for upgrade two days before the ntwk v15 upgrade
// TODO: remove the following block in v1.15.1
head, err := api.ChainHead(ctx)
if err != nil {
return xerrors.Errorf("failed to get chain head: %w", err)
}
twoDays := abi.ChainEpoch(2 * builtin.EpochsInDay)
if head.Height() > (build.UpgradeOhSnapHeight - twoDays) {
return xerrors.Errorf("OhSnap is coming soon, " +
"please use `snap-up` to upgrade your cc sectors after the network v15 upgrade!")
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
if err != nil {
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorMarkForUpgrade(ctx, abi.SectorNumber(id))
return nodeApi.SectorMarkForUpgrade(ctx, abi.SectorNumber(id), false)
},
}
@ -2000,6 +2070,25 @@ var sectorsBatchingPendingPreCommit = &cli.Command{
},
}
var sectorsRefreshPieceMatchingCmd = &cli.Command{
Name: "match-pending-pieces",
Usage: "force a refreshed match of pending pieces to open sectors without manually waiting for more deals",
Action: func(cctx *cli.Context) error {
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
if err := nodeApi.SectorMatchPendingPiecesToOpenSectors(ctx); err != nil {
return err
}
return nil
},
}
func yesno(b bool) string {
if b {
return color.GreenString("YES")

View File

@ -163,6 +163,16 @@ var runCmd = &cli.Command{
Usage: "enable commit (32G sectors: all cores or GPUs, 128GiB Memory + 64GiB swap)",
Value: true,
},
&cli.BoolFlag{
Name: "replica-update",
Usage: "enable replica update",
Value: true,
},
&cli.BoolFlag{
Name: "prove-replica-update2",
Usage: "enable prove replica update 2",
Value: true,
},
&cli.IntFlag{
Name: "parallel-fetch-limit",
Usage: "maximum fetch operations to run in parallel",
@ -251,7 +261,7 @@ var runCmd = &cli.Command{
var taskTypes []sealtasks.TaskType
taskTypes = append(taskTypes, sealtasks.TTFetch, sealtasks.TTCommit1, sealtasks.TTFinalize)
taskTypes = append(taskTypes, sealtasks.TTFetch, sealtasks.TTCommit1, sealtasks.TTProveReplicaUpdate1, sealtasks.TTFinalize)
if cctx.Bool("addpiece") {
taskTypes = append(taskTypes, sealtasks.TTAddPiece)
@ -268,6 +278,12 @@ var runCmd = &cli.Command{
if cctx.Bool("commit") {
taskTypes = append(taskTypes, sealtasks.TTCommit2)
}
if cctx.Bool("replicaupdate") {
taskTypes = append(taskTypes, sealtasks.TTReplicaUpdate)
}
if cctx.Bool("prove-replica-update2") {
taskTypes = append(taskTypes, sealtasks.TTProveReplicaUpdate2)
}
if len(taskTypes) == 0 {
return xerrors.Errorf("no task types specified")

View File

@ -65,7 +65,9 @@ func main() {
fr32Cmd,
chainCmd,
balancerCmd,
sendCsvCmd,
terminationsCmd,
migrationsCmd,
}
app := &cli.App{

View File

@ -0,0 +1,127 @@
package main
import (
"context"
"fmt"
"io"
"time"
"github.com/filecoin-project/lotus/chain/stmgr"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
"github.com/filecoin-project/specs-actors/v7/actors/migration/nv15"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/node/repo"
"github.com/ipfs/go-cid"
"github.com/urfave/cli/v2"
)
var migrationsCmd = &cli.Command{
Name: "migrate-nv15",
Description: "Run the specified migration",
ArgsUsage: "[block to look back from]",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Value: "~/.lotus",
},
},
Action: func(cctx *cli.Context) error {
ctx := context.TODO()
if cctx.NArg() != 1 {
return fmt.Errorf("must pass block cid")
}
blkCid, err := cid.Decode(cctx.Args().First())
if err != nil {
return fmt.Errorf("failed to parse input: %w", err)
}
fsrepo, err := repo.NewFS(cctx.String("repo"))
if err != nil {
return err
}
lkrepo, err := fsrepo.Lock(repo.FullNode)
if err != nil {
return err
}
defer lkrepo.Close() //nolint:errcheck
bs, err := lkrepo.Blockstore(ctx, repo.UniversalBlockstore)
if err != nil {
return fmt.Errorf("failed to open blockstore: %w", err)
}
defer func() {
if c, ok := bs.(io.Closer); ok {
if err := c.Close(); err != nil {
log.Warnf("failed to close blockstore: %s", err)
}
}
}()
mds, err := lkrepo.Datastore(context.Background(), "/metadata")
if err != nil {
return err
}
cs := store.NewChainStore(bs, bs, mds, filcns.Weight, nil)
defer cs.Close() //nolint:errcheck
sm, err := stmgr.NewStateManager(cs, filcns.NewTipSetExecutor(), vm.Syscalls(ffiwrapper.ProofVerifier), filcns.DefaultUpgradeSchedule(), nil)
if err != nil {
return err
}
cache := nv15.NewMemMigrationCache()
blk, err := cs.GetBlock(ctx, blkCid)
if err != nil {
return err
}
migrationTs, err := cs.LoadTipSet(ctx, types.NewTipSetKey(blk.Parents...))
if err != nil {
return err
}
ts1, err := cs.GetTipsetByHeight(ctx, blk.Height-240, migrationTs, false)
if err != nil {
return err
}
startTime := time.Now()
err = filcns.PreUpgradeActorsV7(ctx, sm, cache, ts1.ParentState(), ts1.Height()-1, ts1)
if err != nil {
return err
}
fmt.Println("completed round 1, took ", time.Since(startTime))
startTime = time.Now()
newCid1, err := filcns.UpgradeActorsV7(ctx, sm, cache, nil, blk.ParentStateRoot, blk.Height-1, migrationTs)
if err != nil {
return err
}
fmt.Println("completed round actual (with cache), took ", time.Since(startTime))
fmt.Println("new cid", newCid1)
startTime = time.Now()
newCid2, err := filcns.UpgradeActorsV7(ctx, sm, nv15.NewMemMigrationCache(), nil, blk.ParentStateRoot, blk.Height-1, migrationTs)
if err != nil {
return err
}
fmt.Println("completed round actual (without cache), took ", time.Since(startTime))
fmt.Println("new cid", newCid2)
return nil
},
}
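Usage sketch, assuming the lotus-shed binary and a default repo path:
lotus-shed migrate-nv15 --repo ~/.lotus <blockCID>
As written above, the tool runs PreUpgradeActorsV7 against the state 240 epochs before the given block, then runs the real nv15 migration twice, once reusing the warmed cache and once with a fresh one, printing the resulting state roots and timings.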

cmd/lotus-shed/send-csv.go (new file, 152 lines)
View File

@ -0,0 +1,152 @@
package main
import (
"encoding/csv"
"encoding/hex"
"fmt"
"os"
"strconv"
"strings"
"github.com/ipfs/go-cid"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/exitcode"
lapi "github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/types"
lcli "github.com/filecoin-project/lotus/cli"
)
var sendCsvCmd = &cli.Command{
Name: "send-csv",
Usage: "Utility for sending a batch of balance transfers",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "from",
Usage: "specify the account to send funds from",
Required: true,
},
},
ArgsUsage: "[csvfile]",
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 1 {
return xerrors.New("must supply path to csv file")
}
api, closer, err := lcli.GetFullNodeAPIV1(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
srv, err := lcli.GetFullNodeServices(cctx)
if err != nil {
return err
}
defer srv.Close() //nolint:errcheck
sender, err := address.NewFromString(cctx.String("from"))
if err != nil {
return err
}
fileReader, err := os.Open(cctx.Args().First())
if err != nil {
return xerrors.Errorf("read csv: %w", err)
}
defer fileReader.Close() //nolint:errcheck
r := csv.NewReader(fileReader)
records, err := r.ReadAll()
if err != nil {
return xerrors.Errorf("read csv: %w", err)
}
if strings.TrimSpace(records[0][0]) != "Recipient" ||
strings.TrimSpace(records[0][1]) != "FIL" ||
strings.TrimSpace(records[0][2]) != "Method" ||
strings.TrimSpace(records[0][3]) != "Params" {
return xerrors.Errorf("expected header row to be \"Recipient, FIL, Method, Params\"")
}
var msgs []*types.Message
for i, e := range records[1:] {
addr, err := address.NewFromString(e[0])
if err != nil {
return xerrors.Errorf("failed to parse address in row %d: %w", i, err)
}
value, err := types.ParseFIL(strings.TrimSpace(e[1]))
if err != nil {
return xerrors.Errorf("failed to parse value balance: %w", err)
}
method, err := strconv.Atoi(strings.TrimSpace(e[2]))
if err != nil {
return xerrors.Errorf("failed to parse method number: %w", err)
}
var params []byte
if strings.TrimSpace(e[3]) != "nil" {
params, err = hex.DecodeString(strings.TrimSpace(e[3]))
if err != nil {
return xerrors.Errorf("failed to parse hexparams: %w", err)
}
}
msgs = append(msgs, &types.Message{
To: addr,
From: sender,
Value: abi.TokenAmount(value),
Method: abi.MethodNum(method),
Params: params,
})
}
if len(msgs) == 0 {
return nil
}
var msgCids []cid.Cid
for i, msg := range msgs {
smsg, err := api.MpoolPushMessage(ctx, msg, nil)
if err != nil {
fmt.Printf("%d, ERROR %s\n", i, err)
continue
}
fmt.Printf("%d, %s\n", i, smsg.Cid())
if i > 0 && i%100 == 0 {
fmt.Printf("catching up until latest message lands")
_, err := api.StateWaitMsg(ctx, smsg.Cid(), 1, lapi.LookbackNoLimit, true)
if err != nil {
return err
}
}
msgCids = append(msgCids, smsg.Cid())
}
fmt.Println("waiting on messages...")
for _, msgCid := range msgCids {
ml, err := api.StateWaitMsg(ctx, msgCid, 5, lapi.LookbackNoLimit, true)
if err != nil {
return err
}
if ml.Receipt.ExitCode != exitcode.Ok {
fmt.Printf("MSG %s NON-ZERO EXITCODE: %s\n", msgCid, ml.Receipt.ExitCode)
}
}
fmt.Println("all sent messages succeeded")
return nil
},
}
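For reference, `send-csv` expects a four-column file whose header row must read `Recipient, FIL, Method, Params`; the `Params` column is either the literal `nil` or hex-encoded bytes, and method `0` is a plain balance transfer. A minimal illustrative input (the recipient addresses are placeholders, not real addresses):

```
Recipient, FIL, Method, Params
<recipient-address-1>, 1.5, 0, nil
<recipient-address-2>, 0.25, 0, nil
```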

View File

@ -79,7 +79,7 @@ func NewBlockBuilder(ctx context.Context, logger *zap.SugaredLogger, sm *stmgr.S
// 1. We don't charge a fee.
// 2. The runtime has "fake" proof logic.
// 3. We don't actually save any of the results.
r := lrand.NewStateRand(sm.ChainStore(), parentTs.Cids(), sm.Beacon())
r := lrand.NewStateRand(sm.ChainStore(), parentTs.Cids(), sm.Beacon(), sm.GetNetworkVersion)
vmopt := &vm.VMOpts{
StateBase: parentState,
Epoch: parentTs.Height() + 1,
@ -88,7 +88,7 @@ func NewBlockBuilder(ctx context.Context, logger *zap.SugaredLogger, sm *stmgr.S
Actors: filcns.NewActorRegistry(),
Syscalls: sm.VMSys(),
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
NetworkVersion: sm.GetNetworkVersion(ctx, parentTs.Height()+1),
BaseFee: abi.NewTokenAmount(0),
LookbackState: stmgr.LookbackStateGetterForTipset(sm, parentTs),
}
@ -265,7 +265,7 @@ func (bb *BlockBuilder) Height() abi.ChainEpoch {
// NetworkVersion returns the network version for the target block.
func (bb *BlockBuilder) NetworkVersion() network.Version {
return bb.sm.GetNtwkVersion(bb.ctx, bb.Height())
return bb.sm.GetNetworkVersion(bb.ctx, bb.Height())
}
// StateManager returns the stmgr.StateManager.

View File

@ -78,7 +78,7 @@ func (mockVerifier) VerifyReplicaUpdate(update proof7.ReplicaUpdateInfo) (bool,
return false, nil
}
func (mockVerifier) VerifyWinningPoSt(ctx context.Context, info proof5.WinningPoStVerifyInfo) (bool, error) {
func (mockVerifier) VerifyWinningPoSt(ctx context.Context, info proof7.WinningPoStVerifyInfo) (bool, error) {
panic("should not be called")
}
func (mockVerifier) VerifyWindowPoSt(ctx context.Context, info proof5.WindowPoStVerifyInfo) (bool, error) {

View File

@ -159,7 +159,7 @@ func (sim *Simulation) GetStart() *types.TipSet {
// GetNetworkVersion returns the current network version for the simulation.
func (sim *Simulation) GetNetworkVersion() network.Version {
return sim.StateManager.GetNtwkVersion(context.TODO(), sim.head.Height())
return sim.StateManager.GetNetworkVersion(context.TODO(), sim.head.Height())
}
// SetHead updates the current head of the simulation and stores it in the metadata store. This is

View File

@ -41,8 +41,8 @@ func (sim *Simulation) popNextMessages(ctx context.Context) ([]*types.Message, e
// This isn't what the network does, but it makes things easier. Otherwise, we'd need to run
// migrations before this epoch and I'd rather not deal with that.
nextHeight := parentTs.Height() + 1
prevVer := sim.StateManager.GetNtwkVersion(ctx, nextHeight-1)
nextVer := sim.StateManager.GetNtwkVersion(ctx, nextHeight)
prevVer := sim.StateManager.GetNetworkVersion(ctx, nextHeight-1)
nextVer := sim.StateManager.GetNetworkVersion(ctx, nextHeight)
if nextVer != prevVer {
log.Warnw("packing no messages for version upgrade block",
"old", prevVer,

View File

@ -24,6 +24,15 @@ var ProtocolCodenames = []struct {
{build.UpgradeTapeHeight + 1, "tape"},
{build.UpgradeLiftoffHeight + 1, "liftoff"},
{build.UpgradeKumquatHeight + 1, "postliftoff"},
{build.UpgradeCalicoHeight + 1, "calico"},
{build.UpgradePersianHeight + 1, "persian"},
{build.UpgradeOrangeHeight + 1, "orange"},
{build.UpgradeTrustHeight + 1, "trust"},
{build.UpgradeNorwegianHeight + 1, "norwegian"},
{build.UpgradeTurboHeight + 1, "turbo"},
{build.UpgradeHyperdriveHeight + 1, "hyperdrive"},
{build.UpgradeChocolateHeight + 1, "chocolate"},
{build.UpgradeOhSnapHeight + 1, "ohsnap"},
}
// GetProtocolCodename gets the protocol codename associated with a height.

View File

@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"io"
"io/fs"
"log"
"os"
"path/filepath"
@ -136,25 +137,31 @@ func processTipsetOpts() error {
}
func execVectorDir(path string, outdir string) error {
files, err := filepath.Glob(filepath.Join(path, "*"))
if err != nil {
return fmt.Errorf("failed to glob input directory %s: %w", path, err)
}
for _, f := range files {
outfile := strings.TrimSuffix(filepath.Base(f), filepath.Ext(f)) + ".out"
return filepath.WalkDir(path, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return fmt.Errorf("failed while visiting path %s: %w", path, err)
}
if d.IsDir() || !strings.HasSuffix(path, "json") {
return nil
}
// Create an output file to capture the output from the run of the vector.
outfile := strings.TrimSuffix(filepath.Base(path), filepath.Ext(path)) + ".out"
outpath := filepath.Join(outdir, outfile)
outw, err := os.Create(outpath)
if err != nil {
return fmt.Errorf("failed to create file %s: %w", outpath, err)
}
log.Printf("processing vector %s; sending output to %s", f, outpath)
log.Printf("processing vector %s; sending output to %s", path, outpath)
// Actually run the vector.
log.SetOutput(io.MultiWriter(os.Stderr, outw)) // tee the output.
_, _ = execVectorFile(new(conformance.LogReporter), f)
_, _ = execVectorFile(new(conformance.LogReporter), path)
log.SetOutput(os.Stderr)
_ = outw.Close()
}
return nil
return nil
})
}
func execVectorsStdin() error {

View File

@ -8,12 +8,11 @@ import (
"io"
"log"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/fatih/color"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/filecoin-project/lotus/chain/actors/builtin"
init_ "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
@ -43,6 +42,15 @@ func doExtractMessage(opts extractOpts) error {
return fmt.Errorf("failed to resolve message and tipsets from chain: %w", err)
}
// Assumes that the desired message isn't at the boundary of network versions.
// Otherwise this will be inaccurate. But it's such a tiny edge case that
// it's not worth spending the time to support boundary messages unless
// actually needed.
nv, err := FullAPI.StateNetworkVersion(ctx, incTs.Key())
if err != nil {
return fmt.Errorf("failed to resolve network version from inclusion height: %w", err)
}
// get the circulating supply before the message was executed.
circSupplyDetail, err := FullAPI.StateVMCirculatingSupplyInternal(ctx, incTs.Key())
if err != nil {
@ -53,6 +61,7 @@ func doExtractMessage(opts extractOpts) error {
log.Printf("message was executed in tipset: %s", execTs.Key())
log.Printf("message was included in tipset: %s", incTs.Key())
log.Printf("network version at inclusion: %d", nv)
log.Printf("circulating supply at inclusion tipset: %d", circSupply)
log.Printf("finding precursor messages using mode: %s", opts.precursor)
@ -110,7 +119,8 @@ func doExtractMessage(opts extractOpts) error {
CircSupply: circSupplyDetail.FilCirculating,
BaseFee: basefee,
// recorded randomness will be discarded.
Rand: conformance.NewRecordingRand(new(conformance.LogReporter), FullAPI),
Rand: conformance.NewRecordingRand(new(conformance.LogReporter), FullAPI),
NetworkVersion: nv,
})
if err != nil {
return fmt.Errorf("failed to execute precursor message: %w", err)
@ -140,12 +150,13 @@ func doExtractMessage(opts extractOpts) error {
preroot = root
applyret, postroot, err = driver.ExecuteMessage(pst.Blockstore, conformance.ExecuteMessageParams{
Preroot: preroot,
Epoch: execTs.Height(),
Message: msg,
CircSupply: circSupplyDetail.FilCirculating,
BaseFee: basefee,
Rand: recordingRand,
Preroot: preroot,
Epoch: execTs.Height(),
Message: msg,
CircSupply: circSupplyDetail.FilCirculating,
BaseFee: basefee,
Rand: recordingRand,
NetworkVersion: nv,
})
if err != nil {
return fmt.Errorf("failed to execute message: %w", err)
@ -263,11 +274,6 @@ func doExtractMessage(opts extractOpts) error {
return err
}
nv, err := FullAPI.StateNetworkVersion(ctx, execTs.Key())
if err != nil {
return err
}
codename := GetProtocolCodename(execTs.Height())
// Write out the test vector.

View File

@ -129,6 +129,7 @@ func runSimulateCmd(_ *cli.Context) error {
CircSupply: circSupply.FilCirculating,
BaseFee: baseFee,
Rand: rand,
// TODO NetworkVersion
})
if err != nil {
return fmt.Errorf("failed to apply message: %w", err)

View File

@ -5,6 +5,8 @@ import (
gobig "math/big"
"os"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/state"
@ -187,11 +189,12 @@ func (d *Driver) ExecuteTipset(bs blockstore.Blockstore, ds ds.Batching, params
}
type ExecuteMessageParams struct {
Preroot cid.Cid
Epoch abi.ChainEpoch
Message *types.Message
CircSupply abi.TokenAmount
BaseFee abi.TokenAmount
Preroot cid.Cid
Epoch abi.ChainEpoch
Message *types.Message
CircSupply abi.TokenAmount
BaseFee abi.TokenAmount
NetworkVersion network.Version
// Rand is an optional vm.Rand implementation to use. If nil, the driver
// will use a vm.Rand that returns a fixed value for all calls.
@ -210,13 +213,6 @@ func (d *Driver) ExecuteMessage(bs blockstore.Blockstore, params ExecuteMessageP
params.Rand = NewFixedRand()
}
// dummy state manager; only to reference the GetNetworkVersion method,
// which does not depend on state.
sm, err := stmgr.NewStateManager(nil, filcns.NewTipSetExecutor(), nil, filcns.DefaultUpgradeSchedule(), nil)
if err != nil {
return nil, cid.Cid{}, err
}
vmOpts := &vm.VMOpts{
StateBase: params.Preroot,
Epoch: params.Epoch,
@ -225,9 +221,9 @@ func (d *Driver) ExecuteMessage(bs blockstore.Blockstore, params ExecuteMessageP
CircSupplyCalc: func(_ context.Context, _ abi.ChainEpoch, _ *state.StateTree) (abi.TokenAmount, error) {
return params.CircSupply, nil
},
Rand: params.Rand,
BaseFee: params.BaseFee,
NtwkVersion: sm.GetNtwkVersion,
Rand: params.Rand,
BaseFee: params.BaseFee,
NetworkVersion: params.NetworkVersion,
}
lvm, err := vm.NewVM(context.TODO(), vmOpts)

View File

@ -3,8 +3,6 @@ package conformance
import (
"context"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
@ -21,10 +19,10 @@ func NewFixedRand() vm.Rand {
return &fixedRand{}
}
func (r *fixedRand) GetChainRandomness(_ context.Context, _ network.Version, _ crypto.DomainSeparationTag, _ abi.ChainEpoch, _ []byte) ([]byte, error) {
func (r *fixedRand) GetChainRandomness(_ context.Context, _ crypto.DomainSeparationTag, _ abi.ChainEpoch, _ []byte) ([]byte, error) {
return []byte("i_am_random_____i_am_random_____"), nil // 32 bytes.
}
func (r *fixedRand) GetBeaconRandomness(_ context.Context, _ network.Version, _ crypto.DomainSeparationTag, _ abi.ChainEpoch, _ []byte) ([]byte, error) {
func (r *fixedRand) GetBeaconRandomness(_ context.Context, _ crypto.DomainSeparationTag, _ abi.ChainEpoch, _ []byte) ([]byte, error) {
return []byte("i_am_random_____i_am_random_____"), nil // 32 bytes.
}

View File

@ -5,8 +5,6 @@ import (
"fmt"
"sync"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
@ -47,7 +45,7 @@ func (r *RecordingRand) loadHead() {
r.head = head.Key()
}
func (r *RecordingRand) GetChainRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (r *RecordingRand) GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
r.once.Do(r.loadHead)
// FullNode's v0 ChainGetRandomnessFromTickets handles whether we should be looking forward or back
ret, err := r.api.ChainGetRandomnessFromTickets(ctx, r.head, pers, round, entropy)
@ -73,7 +71,7 @@ func (r *RecordingRand) GetChainRandomness(ctx context.Context, nv network.Versi
return ret, err
}
func (r *RecordingRand) GetBeaconRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (r *RecordingRand) GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
r.once.Do(r.loadHead)
ret, err := r.api.StateGetRandomnessFromBeacon(ctx, pers, round, entropy, r.head)
if err != nil {

View File

@ -4,8 +4,6 @@ import (
"bytes"
"context"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
@ -45,7 +43,7 @@ func (r *ReplayingRand) match(requested schema.RandomnessRule) ([]byte, bool) {
return nil, false
}
func (r *ReplayingRand) GetChainRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (r *ReplayingRand) GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
rule := schema.RandomnessRule{
Kind: schema.RandomnessChain,
DomainSeparationTag: int64(pers),
@ -60,10 +58,10 @@ func (r *ReplayingRand) GetChainRandomness(ctx context.Context, nv network.Versi
r.reporter.Logf("returning fallback chain randomness: dst=%d, epoch=%d, entropy=%x", pers, round, entropy)
return r.fallback.GetChainRandomness(ctx, nv, pers, round, entropy)
return r.fallback.GetChainRandomness(ctx, pers, round, entropy)
}
func (r *ReplayingRand) GetBeaconRandomness(ctx context.Context, nv network.Version, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
func (r *ReplayingRand) GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
rule := schema.RandomnessRule{
Kind: schema.RandomnessBeacon,
DomainSeparationTag: int64(pers),
@ -78,5 +76,5 @@ func (r *ReplayingRand) GetBeaconRandomness(ctx context.Context, nv network.Vers
r.reporter.Logf("returning fallback beacon randomness: dst=%d, epoch=%d, entropy=%x", pers, round, entropy)
return r.fallback.GetBeaconRandomness(ctx, nv, pers, round, entropy)
return r.fallback.GetBeaconRandomness(ctx, pers, round, entropy)
}

View File

@ -7,6 +7,7 @@ import (
"encoding/base64"
"fmt"
"io/ioutil"
"math"
"os"
"os/exec"
"strconv"
@ -14,6 +15,7 @@ import (
"github.com/fatih/color"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/exitcode"
"github.com/filecoin-project/go-state-types/network"
"github.com/hashicorp/go-multierror"
blocks "github.com/ipfs/go-block-format"
"github.com/ipfs/go-blockservice"
@ -27,6 +29,7 @@ import (
"github.com/filecoin-project/test-vectors/schema"
"github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
)
@ -50,11 +53,58 @@ var TipsetVectorOpts struct {
OnTipsetApplied []func(bs blockstore.Blockstore, params *ExecuteTipsetParams, res *ExecuteTipsetResult)
}
type GasPricingRestoreFn func()
// adjustGasPricing adjusts the global gas price mapping to make sure that the
// gas pricelist for the vector's network version is used at the vector's epoch.
// Because it manipulates a global, it returns a function that reverts the
// change. The caller MUST invoke this function or the test vector runner will
// become invalid.
func adjustGasPricing(vectorEpoch abi.ChainEpoch, vectorNv network.Version) GasPricingRestoreFn {
// Stash the current pricing mapping.
// Ok to take a reference instead of a copy, because we override the map
// with a new one below.
var old = vm.Prices
// Resolve the epoch at which the vector network version kicks in.
var epoch abi.ChainEpoch = math.MaxInt64
if vectorNv == network.Version0 {
// genesis is not an upgrade.
epoch = 0
} else {
for _, u := range filcns.DefaultUpgradeSchedule() {
if u.Network == vectorNv {
epoch = u.Height
break
}
}
}
if epoch == math.MaxInt64 {
panic(fmt.Sprintf("could not resolve network version %d to height", vectorNv))
}
// Find the right pricelist for this network version.
pricelist := vm.PricelistByEpoch(epoch)
// Override the pricing mapping by setting the relevant pricelist for the
// network version at the epoch where the vector runs.
vm.Prices = map[abi.ChainEpoch]vm.Pricelist{
vectorEpoch: pricelist,
}
// Return a function to restore the original mapping.
return func() {
vm.Prices = old
}
}
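The comment above spells out the contract: `adjustGasPricing` patches the global `vm.Prices` map and returns a function that must be called to restore it. A minimal sketch of the intended call pattern, mirroring the real call site in `ExecuteMessageVector` below (the epoch and network version here are placeholder values):

```go
// Placeholder values; in practice these come from the vector's variant.
revertFn := adjustGasPricing(abi.ChainEpoch(100), network.Version14)
defer revertFn() // must run, or vm.Prices stays monkey-patched for later vectors
```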
// ExecuteMessageVector executes a message-class test vector.
func ExecuteMessageVector(r Reporter, vector *schema.TestVector, variant *schema.Variant) (diffs []string, err error) {
var (
ctx = context.Background()
baseEpoch = variant.Epoch
baseEpoch = abi.ChainEpoch(variant.Epoch)
nv = network.Version(variant.NetworkVersion)
root = vector.Pre.StateTree.RootCID
)
@ -67,6 +117,10 @@ func ExecuteMessageVector(r Reporter, vector *schema.TestVector, variant *schema
// Create a new Driver.
driver := NewDriver(ctx, vector.Selector, DriverOpts{DisableVMFlush: true})
// Monkey patch the gas pricing.
revertFn := adjustGasPricing(baseEpoch, nv)
defer revertFn()
// Apply every message.
for i, m := range vector.ApplyMessages {
msg, err := types.DecodeMessage(m.Bytes)
@ -76,18 +130,19 @@ func ExecuteMessageVector(r Reporter, vector *schema.TestVector, variant *schema
// add the epoch offset if one is set.
if m.EpochOffset != nil {
baseEpoch += *m.EpochOffset
baseEpoch += abi.ChainEpoch(*m.EpochOffset)
}
// Execute the message.
var ret *vm.ApplyRet
ret, root, err = driver.ExecuteMessage(bs, ExecuteMessageParams{
Preroot: root,
Epoch: abi.ChainEpoch(baseEpoch),
Message: msg,
BaseFee: BaseFeeOrDefault(vector.Pre.BaseFee),
CircSupply: CircSupplyOrDefault(vector.Pre.CircSupply),
Rand: NewReplayingRand(r, vector.Randomness),
Preroot: root,
Epoch: baseEpoch,
Message: msg,
BaseFee: BaseFeeOrDefault(vector.Pre.BaseFee),
CircSupply: CircSupplyOrDefault(vector.Pre.CircSupply),
Rand: NewReplayingRand(r, vector.Randomness),
NetworkVersion: nv,
})
if err != nil {
r.Fatalf("fatal failure when executing message: %s", err)
@ -184,8 +239,10 @@ func ExecuteTipsetVector(r Reporter, vector *schema.TestVector, variant *schema.
func AssertMsgResult(r Reporter, expected *schema.Receipt, actual *vm.ApplyRet, label string) {
r.Helper()
applyret := actual
if expected, actual := exitcode.ExitCode(expected.ExitCode), actual.ExitCode; expected != actual {
r.Errorf("exit code of msg %s did not match; expected: %s, got: %s", label, expected, actual)
r.Errorf("\t\\==> actor error: %s", applyret.ActorErr)
}
if expected, actual := expected.GasUsed, actual.GasUsed; expected != actual {
r.Errorf("gas used of msg %s did not match; expected: %d, got: %d", label, expected, actual)

File diff suppressed because it is too large

View File

@ -103,7 +103,9 @@ Response:
"MemSwap": 42,
"MemSwapUsed": 42,
"CPUs": 42,
"GPUs": null,
"GPUs": [
"string value"
],
"Resources": {
"seal/v0/addpiece": {
"0": {
@ -597,6 +599,334 @@ Response:
"BaseMinMemory": 1073741824
}
},
"seal/v0/provereplicaupdate/1": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 0,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/provereplicaupdate/2": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1610612736,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 10737418240
},
"3": {
"MinMemory": 32212254720,
"MaxMemory": 161061273600,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 34359738368
},
"4": {
"MinMemory": 64424509440,
"MaxMemory": 204010946560,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 68719476736
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1610612736,
"GPUUtilization": 1,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 10737418240
},
"8": {
"MinMemory": 32212254720,
"MaxMemory": 161061273600,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 34359738368
},
"9": {
"MinMemory": 64424509440,
"MaxMemory": 204010946560,
"GPUUtilization": 1,
"MaxParallelism": -1,
"MaxParallelismGPU": 6,
"BaseMinMemory": 68719476736
}
},
"seal/v0/regensectorkey": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/replicaupdate": {
"0": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"1": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"2": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"3": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"4": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"5": {
"MinMemory": 2048,
"MaxMemory": 2048,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 2048
},
"6": {
"MinMemory": 8388608,
"MaxMemory": 8388608,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 8388608
},
"7": {
"MinMemory": 1073741824,
"MaxMemory": 1073741824,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"8": {
"MinMemory": 4294967296,
"MaxMemory": 4294967296,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
},
"9": {
"MinMemory": 8589934592,
"MaxMemory": 8589934592,
"GPUUtilization": 0,
"MaxParallelism": 1,
"MaxParallelismGPU": 0,
"BaseMinMemory": 1073741824
}
},
"seal/v0/unseal": {
"0": {
"MinMemory": 2048,
@ -691,7 +1021,18 @@ Perms: admin
Inputs: `null`
Response: `null`
Response:
```json
[
{
"ID": "76f1988b-ef30-4d7e-b3ec-9a627f4ba5a8",
"Weight": 42,
"LocalPath": "string value",
"CanSeal": true,
"CanStore": true
}
]
```
### Remove
Storage / Other
@ -728,7 +1069,7 @@ Perms: admin
Inputs: `null`
Response: `131328`
Response: `131584`
## Add
@ -749,7 +1090,9 @@ Inputs:
},
"ProofType": 8
},
null,
[
1024
],
1024,
{}
]
@ -784,7 +1127,12 @@ Inputs:
},
"ProofType": 8
},
null
[
{
"Offset": 1024,
"Size": 1024
}
]
]
```
@ -946,7 +1294,9 @@ Inputs:
{
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
},
null
[
"Ynl0ZSBhcnJheQ=="
]
]
```
@ -979,7 +1329,12 @@ Inputs:
},
"ProofType": 8
},
null
[
{
"Offset": 1024,
"Size": 1024
}
]
]
```
@ -1012,7 +1367,14 @@ Inputs:
},
"ProofType": 8
},
null
[
{
"Size": 1032,
"PieceCID": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
}
}
]
]
```
@ -1045,9 +1407,16 @@ Inputs:
},
"ProofType": 8
},
null,
null,
null,
"Bw==",
"Bw==",
[
{
"Size": 1032,
"PieceCID": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
}
}
],
{
"Unsealed": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
@ -1085,7 +1454,7 @@ Inputs:
},
"ProofType": 8
},
null
"Bw=="
]
```
@ -1115,8 +1484,15 @@ Inputs:
},
"ProofType": 8
},
null,
null
"Bw==",
[
{
"Size": 1032,
"PieceCID": {
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
}
}
]
]
```
@ -1146,7 +1522,7 @@ Inputs:
},
"ProofType": 8
},
null
"Bw=="
]
```
@ -1263,7 +1639,7 @@ Inputs:
},
1040384,
1024,
null,
"Bw==",
{
"/": "bafy2bzacea3wsdh6y3a36tb3skempjoxqpuyompjbmfeyf34fi3uy6uue42v4"
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -7,7 +7,7 @@ USAGE:
lotus-miner [global options] command [command options] [arguments...]
VERSION:
1.13.3-dev
1.15.0-dev
COMMANDS:
init Initialize a lotus miner repo
@ -974,10 +974,11 @@ USAGE:
lotus-miner data-transfers command [command options] [arguments...]
COMMANDS:
list List ongoing data transfers for this miner
restart Force restart a stalled data transfer
cancel Force cancel a data transfer
help, h Shows a list of commands or help for one command
list List ongoing data transfers for this miner
restart Force restart a stalled data transfer
cancel Force cancel a data transfer
diagnostics Get detailed diagnostics on active transfers with a specific peer
help, h Shows a list of commands or help for one command
OPTIONS:
--help, -h show help (default: false)
@ -1034,6 +1035,19 @@ OPTIONS:
```
### lotus-miner data-transfers diagnostics
```
NAME:
lotus-miner data-transfers diagnostics - Get detailed diagnostics on active transfers with a specific peer
USAGE:
lotus-miner data-transfers diagnostics [command options] [arguments...]
OPTIONS:
--help, -h show help (default: false)
```
## lotus-miner dagstore
```
NAME:
@ -1146,6 +1160,8 @@ COMMANDS:
reachability Print information about reachability from the internet
bandwidth Print bandwidth usage information
block Manage network connection gating rules
stat Report resource usage for a scope
limit Get or set resource limits for a scope
help, h Shows a list of commands or help for one command
OPTIONS:
@ -1414,6 +1430,58 @@ OPTIONS:
```
### lotus-miner net stat
```
NAME:
lotus-miner net stat - Report resource usage for a scope
USAGE:
lotus-miner net stat [command options] scope
DESCRIPTION:
Report resource usage for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
- all -- reports the resource usage for all currently active scopes.
OPTIONS:
--help, -h show help (default: false)
```
### lotus-miner net limit
```
NAME:
lotus-miner net limit - Get or set resource limits for a scope
USAGE:
lotus-miner net limit [command options] scope [limit]
DESCRIPTION:
Get or set resource limits for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
The limit is json-formatted, with the same structure as the limits file.
OPTIONS:
--set set the limit for a scope (default: false)
--help, -h show help (default: false)
```
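Both subcommands take one of the scope names described above. A couple of illustrative invocations (the peer ID is a placeholder):

```
lotus-miner net stat system
lotus-miner net stat all
lotus-miner net limit peer:<peer-id>
```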
## lotus-miner pieces
```
NAME:
@ -1500,23 +1568,25 @@ USAGE:
lotus-miner sectors command [command options] [arguments...]
COMMANDS:
status Get the seal status of a sector by its number
list List sectors
refs List References to sectors
update-state ADVANCED: manually update the state of a sector, this may aid in error recovery
pledge store random data in a sector
check-expire Inspect expiring sectors
expired Get or cleanup expired sectors
renew Renew expiring sectors while not exceeding each sector's max life
extend Extend sector expiration
terminate Terminate sector on-chain then remove (WARNING: This means losing power and collateral for the removed sector)
remove Forcefully remove a sector (WARNING: This means losing power and collateral for the removed sector (use 'terminate' for lower penalty))
mark-for-upgrade Mark a committed capacity sector for replacement by a sector with deals
seal Manually start sealing a sector (filling any unused space with junk)
set-seal-delay Set the time, in minutes, that a new sector waits for deals before sealing starts
get-cc-collateral Get the collateral required to pledge a committed capacity sector
batching manage batch sector operations
help, h Shows a list of commands or help for one command
status Get the seal status of a sector by its number
list List sectors
refs List References to sectors
update-state ADVANCED: manually update the state of a sector, this may aid in error recovery
pledge store random data in a sector
check-expire Inspect expiring sectors
expired Get or cleanup expired sectors
renew Renew expiring sectors while not exceeding each sector's max life
extend Extend sector expiration
terminate Terminate sector on-chain then remove (WARNING: This means losing power and collateral for the removed sector)
remove Forcefully remove a sector (WARNING: This means losing power and collateral for the removed sector (use 'terminate' for lower penalty))
snap-up Mark a committed capacity sector to be filled with deals
mark-for-upgrade Mark a committed capacity sector for replacement by a sector with deals
seal Manually start sealing a sector (filling any unused space with junk)
set-seal-delay Set the time, in minutes, that a new sector waits for deals before sealing starts
get-cc-collateral Get the collateral required to pledge a committed capacity sector
batching manage batch sector operations
match-pending-pieces force a refreshed match of pending pieces to open sectors without manually waiting for more deals
help, h Shows a list of commands or help for one command
OPTIONS:
--help, -h show help (default: false)
@ -1732,6 +1802,19 @@ OPTIONS:
```
### lotus-miner sectors snap-up
```
NAME:
lotus-miner sectors snap-up - Mark a committed capacity sector to be filled with deals
USAGE:
lotus-miner sectors snap-up [command options] <sectorNum>
OPTIONS:
--help, -h show help (default: false)
```
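For example, to mark committed capacity sector 1000 (an arbitrary sector number) to be filled with deals:

```
lotus-miner sectors snap-up 1000
```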
### lotus-miner sectors mark-for-upgrade
```
NAME:
@ -1832,6 +1915,19 @@ OPTIONS:
```
### lotus-miner sectors match-pending-pieces
```
NAME:
lotus-miner sectors match-pending-pieces - force a refreshed match of pending pieces to open sectors without manually waiting for more deals
USAGE:
lotus-miner sectors match-pending-pieces [command options] [arguments...]
OPTIONS:
--help, -h show help (default: false)
```
## lotus-miner proving
```
NAME:

View File

@ -7,7 +7,7 @@ USAGE:
lotus-worker [global options] command [command options] [arguments...]
VERSION:
1.13.3-dev
1.15.0-dev
COMMANDS:
run Start lotus worker
@ -44,6 +44,8 @@ OPTIONS:
--unseal enable unsealing (32G sectors: 1 core, 128GiB Memory) (default: true)
--precommit2 enable precommit2 (32G sectors: all cores, 96GiB Memory) (default: true)
--commit enable commit (32G sectors: all cores or GPUs, 128GiB Memory + 64GiB swap) (default: true)
--replica-update enable replica update (default: true)
--prove-replica-update2 enable prove replica update 2 (default: true)
--parallel-fetch-limit value maximum fetch operations to run in parallel (default: 5)
--timeout value used when 'listen' is unspecified. must be a valid duration recognized by golang's time.ParseDuration function (default: "30m")
--help, -h show help (default: false)
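These are ordinary boolean flags, so a worker can opt in or out of specific task types; for example, an illustrative worker that skips unsealing but accepts the new replica-update tasks:

```
lotus-worker run --unseal=false --replica-update=true --prove-replica-update2=true
```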

View File

@ -7,7 +7,7 @@ USAGE:
lotus [global options] command [command options] [arguments...]
VERSION:
1.13.3-dev
1.15.0-dev
COMMANDS:
daemon Start a lotus daemon process
@ -425,6 +425,7 @@ COMMANDS:
stat Print information about a locally stored file (piece size, etc)
RETRIEVAL:
find Find data in the network
retrieval-ask Get a miner's retrieval ask
retrieve Retrieve data from network
cat Show data from network
ls List object links
@ -535,6 +536,23 @@ OPTIONS:
```
### lotus client retrieval-ask
```
NAME:
lotus client retrieval-ask - Get a miner's retrieval ask
USAGE:
lotus client retrieval-ask [command options] [minerAddress] [data CID]
CATEGORY:
RETRIEVAL
OPTIONS:
--size value data size in bytes (default: 0)
--help, -h show help (default: false)
```
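For example, to query a miner's retrieval ask for a 1 MiB payload (the miner address and data CID are placeholders):

```
lotus client retrieval-ask --size 1048576 <minerAddress> <dataCID>
```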
### lotus client retrieve
```
NAME:
@ -2598,6 +2616,8 @@ COMMANDS:
reachability Print information about reachability from the internet
bandwidth Print bandwidth usage information
block Manage network connection gating rules
stat Report resource usage for a scope
limit Get or set resource limits for a scope
help, h Shows a list of commands or help for one command
OPTIONS:
@ -2866,6 +2886,58 @@ OPTIONS:
```
### lotus net stat
```
NAME:
lotus net stat - Report resource usage for a scope
USAGE:
lotus net stat [command options] scope
DESCRIPTION:
Report resource usage for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
- all -- reports the resource usage for all currently active scopes.
OPTIONS:
--help, -h show help (default: false)
```
### lotus net limit
```
NAME:
lotus net limit - Get or set resource limits for a scope
USAGE:
lotus net limit [command options] scope [limit]
DESCRIPTION:
Get or set resource limits for a scope.
The scope can be one of the following:
- system -- reports the system aggregate resource usage.
- transient -- reports the transient resource usage.
- svc:<service> -- reports the resource usage of a specific service.
- proto:<proto> -- reports the resource usage of a specific protocol.
- peer:<peer> -- reports the resource usage of a specific peer.
The limit is json-formatted, with the same structure as the limits file.
OPTIONS:
--set set the limit for a scope (default: false)
--help, -h show help (default: false)
```
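On the full node the same scope names apply; reading and then overriding a limit might look like this (the limit JSON is a placeholder and must follow the limits-file structure mentioned above):

```
lotus net limit system
lotus net limit --set system '<limit JSON>'
```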
## lotus sync
```
NAME:

View File

@ -424,6 +424,12 @@
# env var: LOTUS_STORAGE_ALLOWUNSEAL
#AllowUnseal = true
# env var: LOTUS_STORAGE_ALLOWREPLICAUPDATE
#AllowReplicaUpdate = true
# env var: LOTUS_STORAGE_ALLOWPROVEREPLICAUPDATE2
#AllowProveReplicaUpdate2 = true
# env var: LOTUS_STORAGE_RESOURCEFILTERING
#ResourceFiltering = "hardware"
@ -544,6 +550,14 @@
# env var: LOTUS_DAGSTORE_MAXCONCURRENTREADYFETCHES
#MaxConcurrentReadyFetches = 0
# The maximum amount of unseals that can be processed simultaneously
# from the storage subsystem. 0 means unlimited.
# Default value: 0 (unlimited).
#
# type: int
# env var: LOTUS_DAGSTORE_MAXCONCURRENTUNSEALS
#MaxConcurrentUnseals = 5
# The maximum number of simultaneous inflight API calls to the storage
# subsystem.
# Default value: 100.
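Taken together, a miner enabling the new SnapDeals tasks on this storage path and capping concurrent unseals could set something like the following (section names are inferred from the LOTUS_STORAGE_* and LOTUS_DAGSTORE_* env-var prefixes above; values are illustrative):

```
[Storage]
  AllowReplicaUpdate = true
  AllowProveReplicaUpdate2 = true

[DAGStore]
  MaxConcurrentUnseals = 5
```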

extern/filecoin-ffi (vendored submodule, 2 changed lines)

@ -1 +1 @@
Subproject commit 52d80081bfdd8a30bc44bcfe44cb0f299615b9f3
Subproject commit e660df5616e397b2d8ac316f45ddfa7a44637971

View File

@ -34,6 +34,12 @@ func (b *Provider) AcquireSector(ctx context.Context, id storage.SectorRef, exis
if err := os.Mkdir(filepath.Join(b.Root, storiface.FTCache.String()), 0755); err != nil && !os.IsExist(err) { // nolint
return storiface.SectorPaths{}, nil, err
}
if err := os.Mkdir(filepath.Join(b.Root, storiface.FTUpdate.String()), 0755); err != nil && !os.IsExist(err) { // nolint
return storiface.SectorPaths{}, nil, err
}
if err := os.Mkdir(filepath.Join(b.Root, storiface.FTUpdateCache.String()), 0755); err != nil && !os.IsExist(err) { // nolint
return storiface.SectorPaths{}, nil, err
}
done := func() {}

View File

@ -669,7 +669,7 @@ func (sb *Sealer) SealCommit2(ctx context.Context, sector storage.SectorRef, pha
func (sb *Sealer) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, pieces []abi.PieceInfo) (storage.ReplicaUpdateOut, error) {
empty := storage.ReplicaUpdateOut{}
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTSealed|storiface.FTUnsealed|storiface.FTCache, storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathSealing)
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTUnsealed|storiface.FTSealed|storiface.FTCache, storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathSealing)
if err != nil {
return empty, xerrors.Errorf("failed to acquire sector paths: %w", err)
}
@ -697,10 +697,10 @@ func (sb *Sealer) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, p
if err := os.Mkdir(paths.UpdateCache, 0755); err != nil { // nolint
if os.IsExist(err) {
log.Warnf("existing cache in %s; removing", paths.Cache)
log.Warnf("existing cache in %s; removing", paths.UpdateCache)
if err := os.RemoveAll(paths.UpdateCache); err != nil {
return empty, xerrors.Errorf("remove existing sector cache from %s (sector %d): %w", paths.Cache, sector, err)
return empty, xerrors.Errorf("remove existing sector cache from %s (sector %d): %w", paths.UpdateCache, sector, err)
}
if err := os.Mkdir(paths.UpdateCache, 0755); err != nil { // nolint:gosec
@ -714,12 +714,11 @@ func (sb *Sealer) ReplicaUpdate(ctx context.Context, sector storage.SectorRef, p
if err != nil {
return empty, xerrors.Errorf("failed to update replica %d with new deal data: %w", sector.ID.Number, err)
}
return storage.ReplicaUpdateOut{NewSealed: sealed, NewUnsealed: unsealed}, nil
}
func (sb *Sealer) ProveReplicaUpdate1(ctx context.Context, sector storage.SectorRef, sectorKey, newSealed, newUnsealed cid.Cid) (storage.ReplicaVanillaProofs, error) {
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTSealed|storiface.FTCache|storiface.FTUpdateCache|storiface.FTUpdate, storiface.FTNone, storiface.PathSealing)
paths, done, err := sb.sectors.AcquireSector(ctx, sector, storiface.FTSealed|storiface.FTCache|storiface.FTUpdate|storiface.FTUpdateCache, storiface.FTNone, storiface.PathSealing)
if err != nil {
return nil, xerrors.Errorf("failed to acquire sector paths: %w", err)
}
@ -854,6 +853,14 @@ func (sb *Sealer) ReleaseUnsealed(ctx context.Context, sector storage.SectorRef,
return xerrors.Errorf("not supported at this layer")
}
func (sb *Sealer) ReleaseReplicaUpgrade(ctx context.Context, sector storage.SectorRef) error {
return xerrors.Errorf("not supported at this layer")
}
func (sb *Sealer) ReleaseSectorKey(ctx context.Context, sector storage.SectorRef) error {
return xerrors.Errorf("not supported at this layer")
}
func (sb *Sealer) Remove(ctx context.Context, sector storage.SectorRef) error {
return xerrors.Errorf("not supported at this layer") // happens in localworker
}

View File

@ -19,6 +19,7 @@ import (
proof2 "github.com/filecoin-project/specs-actors/v2/actors/runtime/proof"
proof5 "github.com/filecoin-project/specs-actors/v5/actors/runtime/proof"
proof7 "github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
"github.com/ipfs/go-cid"
@ -180,16 +181,16 @@ func (s *seal) unseal(t *testing.T, sb *Sealer, sp *basicfs.Provider, si storage
func post(t *testing.T, sealer *Sealer, skipped []abi.SectorID, seals ...seal) {
randomness := abi.PoStRandomness{0, 9, 2, 7, 6, 5, 4, 3, 2, 1, 0, 9, 8, 7, 6, 45, 3, 2, 1, 0, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, 9, 7}
sis := make([]proof2.SectorInfo, len(seals))
xsis := make([]proof7.ExtendedSectorInfo, len(seals))
for i, s := range seals {
sis[i] = proof2.SectorInfo{
xsis[i] = proof7.ExtendedSectorInfo{
SealProof: s.ref.ProofType,
SectorNumber: s.ref.ID.Number,
SealedCID: s.cids.Sealed,
}
}
proofs, skp, err := sealer.GenerateWindowPoSt(context.TODO(), seals[0].ref.ID.Miner, sis, randomness)
proofs, skp, err := sealer.GenerateWindowPoSt(context.TODO(), seals[0].ref.ID.Miner, xsis, randomness)
if len(skipped) > 0 {
require.Error(t, err)
require.EqualValues(t, skipped, skp)
@ -200,7 +201,16 @@ func post(t *testing.T, sealer *Sealer, skipped []abi.SectorID, seals ...seal) {
t.Fatalf("%+v", err)
}
ok, err := ProofVerifier.VerifyWindowPoSt(context.TODO(), proof2.WindowPoStVerifyInfo{
sis := make([]proof7.SectorInfo, len(seals))
for i, xsi := range xsis {
sis[i] = proof7.SectorInfo{
SealProof: xsi.SealProof,
SectorNumber: xsi.SectorNumber,
SealedCID: xsi.SealedCID,
}
}
ok, err := ProofVerifier.VerifyWindowPoSt(context.TODO(), proof7.WindowPoStVerifyInfo{
Randomness: randomness,
Proofs: proofs,
ChallengedSectors: sis,

View File

@ -4,9 +4,7 @@ import (
"context"
"io"
proof7 "github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
proof5 "github.com/filecoin-project/specs-actors/v5/actors/runtime/proof"
"github.com/filecoin-project/specs-actors/v7/actors/runtime/proof"
"github.com/ipfs/go-cid"
@ -36,11 +34,11 @@ type Storage interface {
}
type Verifier interface {
VerifySeal(proof5.SealVerifyInfo) (bool, error)
VerifyAggregateSeals(aggregate proof5.AggregateSealVerifyProofAndInfos) (bool, error)
VerifyReplicaUpdate(update proof7.ReplicaUpdateInfo) (bool, error)
VerifyWinningPoSt(ctx context.Context, info proof5.WinningPoStVerifyInfo) (bool, error)
VerifyWindowPoSt(ctx context.Context, info proof5.WindowPoStVerifyInfo) (bool, error)
VerifySeal(proof.SealVerifyInfo) (bool, error)
VerifyAggregateSeals(aggregate proof.AggregateSealVerifyProofAndInfos) (bool, error)
VerifyReplicaUpdate(update proof.ReplicaUpdateInfo) (bool, error)
VerifyWinningPoSt(ctx context.Context, info proof.WinningPoStVerifyInfo) (bool, error)
VerifyWindowPoSt(ctx context.Context, info proof.WindowPoStVerifyInfo) (bool, error)
GenerateWinningPoStSectorChallenge(context.Context, abi.RegisteredPoStProof, abi.ActorID, abi.PoStRandomness, uint64) ([]uint64, error)
}
@ -49,7 +47,7 @@ type Verifier interface {
type Prover interface {
// TODO: move GenerateWinningPoStSectorChallenge from the Verifier interface to here
AggregateSealProofs(aggregateInfo proof5.AggregateSealVerifyProofAndInfos, proofs [][]byte) ([]byte, error)
AggregateSealProofs(aggregateInfo proof.AggregateSealVerifyProofAndInfos, proofs [][]byte) ([]byte, error)
}
type SectorProvider interface {

Some files were not shown because too many files have changed in this diff.