Merge branch 'master' into sbansal/nonce-coordination-and-consensus-for-chain-nodes

Shrenuj Bansal 2022-10-17 22:55:48 -04:00
commit 15ed1ee33c
53 changed files with 1238 additions and 663 deletions


@@ -140,7 +140,7 @@ commands:
      - run:
          name: "Run a packer build"
          command: packer build << parameters.args >> << parameters.template >>
-          no_output_timeout: 30m
+          no_output_timeout: 1h
 jobs:
   mod-tidy-check:
@@ -424,6 +424,12 @@ jobs:
      - checkout
      - attach_workspace:
          at: "."
+      - run:
+          name: Update Go
+          command: |
+            curl -L https://golang.org/dl/go1.18.1.linux-amd64.tar.gz -o /tmp/go.tar.gz && \
+            sudo tar -C /usr/local -xvf /tmp/go.tar.gz
+      - run: go version
      - run:
          name: install appimage-builder
          command: |
@@ -1269,10 +1275,10 @@ workflows:
      - publish-dockerhub:
          name: publish-dockerhub-nightly
          tag: nightly
-  monthly:
+  biweekly:
    triggers:
      - schedule:
-          cron: "0 0 1 * *"
+          cron: "0 0 1,15 * *"
          filters:
            branches:
              only:


@@ -140,7 +140,7 @@ commands:
      - run:
          name: "Run a packer build"
          command: packer build << parameters.args >> << parameters.template >>
-          no_output_timeout: 30m
+          no_output_timeout: 1h
 jobs:
   mod-tidy-check:
@@ -424,6 +424,12 @@ jobs:
      - checkout
      - attach_workspace:
          at: "."
+      - run:
+          name: Update Go
+          command: |
+            curl -L https://golang.org/dl/go1.18.1.linux-amd64.tar.gz -o /tmp/go.tar.gz && \
+            sudo tar -C /usr/local -xvf /tmp/go.tar.gz
+      - run: go version
      - run:
          name: install appimage-builder
          command: |
@@ -984,10 +990,10 @@ workflows:
      - publish-dockerhub:
          name: publish-dockerhub-nightly
          tag: nightly
-  monthly:
+  biweekly:
    triggers:
      - schedule:
-          cron: "0 0 1 * *"
+          cron: "0 0 1,15 * *"
          filters:
            branches:
              only:


@@ -1,21 +1,23 @@
 ## Related Issues
-<!-- link all issues that this PR might resolve/fix. If an issue doesn't exist, include a brief motivation for the change being made.-->
+<!-- Link issues that this PR might resolve/fix. If an issue doesn't exist, include a brief motivation for the change being made -->
 ## Proposed Changes
-<!-- provide a clear list of the changes being made-->
+<!-- A clear list of the changes being made -->
 ## Additional Info
-<!-- callouts, links to documentation, and etc-->
+<!-- Callouts, links to documentation, and etc -->
 ## Checklist
 Before you mark the PR ready for review, please make sure that:
-- [ ] All commits have a clear commit message.
-- [ ] The PR title is in the form of of `<PR type>: <area>: <change being made>`
-  - example: ` fix: mempool: Introduce a cache for valid signatures`
-  - `PR type`: _fix_, _feat_, _INTERFACE BREAKING CHANGE_, _CONSENSUS BREAKING_, _build_, _chore_, _ci_, _docs_,_perf_, _refactor_, _revert_, _style_, _test_
-  - `area`: _api_, _chain_, _state_, _vm_, _data transfer_, _market_, _mempool_, _message_, _block production_, _multisig_, _networking_, _paychan_, _proving_, _sealing_, _wallet_, _deps_
-- [ ] This PR has tests for new functionality or change in behaviour
-- [ ] If new user-facing features are introduced, clear usage guidelines and / or documentation updates should be included in https://lotus.filecoin.io or [Discussion Tutorials.](https://github.com/filecoin-project/lotus/discussions/categories/tutorials)
+- [ ] Commits have a clear commit message.
+- [ ] PR title is in the form of `<PR type>: <area>: <change being made>`
+  - example: ` fix: mempool: Introduce a cache for valid signatures`
+  - `PR type`: fix, feat, build, chore, ci, docs, perf, refactor, revert, style, test
+  - `area`, e.g. api, chain, state, market, mempool, multisig, networking, paych, proving, sealing, wallet, deps
+- [ ] New features have usage guidelines and / or documentation updates in
+  - [ ] [Lotus Documentation](https://lotus.filecoin.io)
+  - [ ] [Discussion Tutorials](https://github.com/filecoin-project/lotus/discussions/categories/tutorials)
+- [ ] Tests exist for new functionality or change in behavior
 - [ ] CI is green


@@ -1,5 +1,143 @@
# Lotus changelog
# v1.17.2 / 2022-10-05
This is an OPTIONAL release of Lotus. This feature release introduces new sector number management APIs that enable all of the Sealing-as-a-Service and Lotus interactions needed for such services to function. The default propagation delay setting for storage providers has also been changed, along with numerous other features and enhancements. See the sub-bullet points in the features and enhancements sections below for a short description of each change.
### Highlights
🦭 **SaaS** 🦭
New sector management APIs make it possible to import partially sealed sectors into Lotus. This release implements all of the SaaS<->Lotus interactions needed for such services to work. Deep dive into the new APIs here: https://github.com/filecoin-project/lotus/discussions/9079#discussioncomment-3652044
**Propagation delay** ⌛️
In v1.17.2 the default PropagationDelay has been raised from 6 seconds to 10 seconds, and you can tune it yourself with the `PROPAGATION_DELAY_SECS` environment variable. This means the node will now wait 10 seconds for other blocks to arrive from the network before computing a winningPoSt (if eligible). In your `lotus-miner` logs this shows up as `"baseDeltaSeconds": 10` by default.
⚠️ **Please note that Lotus v1.17.2 requires Go v1.18.1 or higher!**
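The override mechanism is simple to picture. Below is a minimal sketch, not the actual lotus wiring, of how an environment variable like `PROPAGATION_DELAY_SECS` can replace a compiled-in default; `PropagationDelaySecs` and `applyPropagationDelayOverride` are illustrative names only:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// PropagationDelaySecs mirrors the compiled-in default described above (10s in v1.17.2).
var PropagationDelaySecs = uint64(10)

// applyPropagationDelayOverride shows how an env override like PROPAGATION_DELAY_SECS
// is typically wired: if the variable is set and parses as an integer, it replaces the default.
func applyPropagationDelayOverride() {
	if v := os.Getenv("PROPAGATION_DELAY_SECS"); v != "" {
		if secs, err := strconv.ParseUint(v, 10, 64); err == nil {
			PropagationDelaySecs = secs
		}
	}
}

func main() {
	applyPropagationDelayOverride()
	fmt.Println(`"baseDeltaSeconds":`, PropagationDelaySecs)
}
```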
## New features
- feat: sealing: Partially sealed sector import ([filecoin-project/lotus#9210](https://github.com/filecoin-project/lotus/pull/9210))
- Implements support for importing (partially) sealed sectors which is needed for enabling external sealing services.
- feat: sealing: Use bitfields to manage sector numbers ([filecoin-project/lotus#9183](https://github.com/filecoin-project/lotus/pull/9183))
- Needed for Sealing-as-a-Service. Moves the sector number assignment logic to use stored bitfields instead of stored counters, which makes it possible to reserve ranges of sector numbers, inspect the sector assigner state and view sector number reservations (see the short sketch at the end of this section).
- feat: env: propagation delay ([filecoin-project/lotus#9290](https://github.com/filecoin-project/lotus/pull/9290))
- The default propagation delay is raised from 6 seconds to 10 seconds. It can also be set manually with the `PROPAGATION_DELAY_SECS` environment variable.
- feat: cli: lotus info cmd ([filecoin-project/lotus#9233](https://github.com/filecoin-project/lotus/pull/9233))
- A new `lotus info` command that prints useful node information in one place.
- feat: proving: Introduce manual sector fault recovery (#9144) ([filecoin-project/lotus#9144](https://github.com/filecoin-project/lotus/pull/9144))
- Allow users to declare fault recovery messages manually with the `lotus-miner proving recover-faults` command, rather than waiting for it to happen automatically before windowPost.
- feat: api: Reintroduce StateActorManifestCID ([filecoin-project/lotus#9201](https://github.com/filecoin-project/lotus/pull/9201))
- Adds the ability to retrieve the Actor Manifest CID through the API.
- feat: message: Add uuid to mpool message sent to chain node from miner ([filecoin-project/lotus#9174](https://github.com/filecoin-project/lotus/pull/9174))
- Adds a UUID to each message sent by the `lotus-miner` to the daemon, a requirement for https://github.com/filecoin-project/lotus/issues/9130
- feat: message: Add retries to mpool push message from lotus-miner ([filecoin-project/lotus#9177](https://github.com/filecoin-project/lotus/pull/9177))
- Adds retries to the mpool push message API in case the lotus chain node is unavailable (see the sketch below).
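Taken together, the two message changes above enable a simple pattern: attach a stable UUID to each logical message and retry the push when the chain node is briefly unreachable, so a re-send can be de-duplicated on the daemon side. The helper below is a hypothetical sketch of that pattern, not the lotus-miner code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/google/uuid"
)

// pushWithRetry attaches one UUID to a logical message and retries the push a few
// times with a simple linear backoff. Because the UUID is stable across retries,
// a receiver can recognise and drop duplicate submissions.
func pushWithRetry(ctx context.Context, push func(ctx context.Context, id uuid.UUID) error) error {
	id := uuid.New() // one UUID per logical message, reused for every retry
	var err error
	for attempt := 1; attempt <= 5; attempt++ {
		if err = push(ctx, id); err == nil {
			return nil
		}
		if attempt == 5 {
			break
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Duration(attempt) * time.Second):
		}
	}
	return fmt.Errorf("mpool push failed after retries: %w", err)
}

func main() {
	calls := 0
	err := pushWithRetry(context.Background(), func(ctx context.Context, id uuid.UUID) error {
		calls++
		if calls < 3 {
			return fmt.Errorf("chain node unavailable (attempt %d, msg %s)", calls, id)
		}
		return nil
	})
	fmt.Println("pushed after", calls, "attempts, err =", err)
}
```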
**Network 17 related features:**
- feat: network: add nv17 and integrate the corresponding go state type ([filecoin-project/lotus#9267](https://github.com/filecoin-project/lotus/pull/9267))
- feat: cli: print beneficiary info in state miner-info ([filecoin-project/lotus#9308](https://github.com/filecoin-project/lotus/pull/9308))
- feat: api/cli: change beneficiary propose and confirm for actors and multisigs. ([filecoin-project/lotus#9307](https://github.com/filecoin-project/lotus/pull/9307))
- feat: api/cli: beneficiary withdraw api and cli ([filecoin-project/lotus#9296](https://github.com/filecoin-project/lotus/pull/9296))
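As a companion to the sector-number bitfield feature listed above, the sketch below illustrates the underlying idea (reserve a range of sector numbers, then hand out the next free one) using only the standard library. `sectorNumAllocator` is a toy type for illustration; the real feature persists bitfields in the miner's metadata store.

```go
package main

import (
	"fmt"
	"math/big"
)

// sectorNumAllocator is a toy illustration of bitfield-backed sector number
// management: bit i set means sector number i is already used or reserved.
type sectorNumAllocator struct {
	used big.Int
}

// ReserveRange marks [from, to] as unavailable, e.g. for an external sealing service.
func (a *sectorNumAllocator) ReserveRange(from, to uint64) {
	for i := from; i <= to; i++ {
		a.used.SetBit(&a.used, int(i), 1)
	}
}

// Next hands out the lowest free sector number and marks it as used.
func (a *sectorNumAllocator) Next() uint64 {
	for i := 0; ; i++ {
		if a.used.Bit(i) == 0 {
			a.used.SetBit(&a.used, i, 1)
			return uint64(i)
		}
	}
}

func main() {
	var a sectorNumAllocator
	a.ReserveRange(0, 9)            // reserve sector numbers 0-9 for an external sealer
	fmt.Println(a.Next(), a.Next()) // 10 11
}
```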
## Enhancements
- feat: sectors renew --only-cc ([filecoin-project/lotus#9184](https://github.com/filecoin-project/lotus/pull/9184))
- Exclude deal-related sectors from extension with the `--only-cc` option when using `lotus-miner sectors renew`
- feat: miner: display updated & update-cache for storage list ([filecoin-project/lotus#9323](https://github.com/filecoin-project/lotus/pull/9323))
- Show the number of `updated` & `update-cache` sectors in each storage path in the `lotus-miner storage list` output
- feat: add descriptive errors to markets event handler ([filecoin-project/lotus#9326](https://github.com/filecoin-project/lotus/pull/9326))
- More descriptive market error logs
- feat: cli: Add option to terminate sectors from worker address ([filecoin-project/lotus#9291](https://github.com/filecoin-project/lotus/pull/9291))
- Adds a flag that allows either the owner address or the worker address to send the terminate sectors message.
- fix: cli: actor-cids cli command now defaults to current network ([filecoin-project/lotus#9321](https://github.com/filecoin-project/lotus/pull/9321))
- Makes the command default to the current network.
- fix: ux: Output bytes in `lotus client commP` cmd ([filecoin-project/lotus#9189](https://github.com/filecoin-project/lotus/pull/9189))
- Adds an additional line that outputs bytes in the `lotus client commP` command.
- fix: sealing: Add information on what worker a job was assigned to in logs ([filecoin-project/lotus#9151](https://github.com/filecoin-project/lotus/pull/9151))
- Adds the worker hostname into the assignment logs.
- refactor: sealing pipeline: Remove useless storage adapter code ([filecoin-project/lotus#9142](https://github.com/filecoin-project/lotus/pull/9142))
- Remove proxy code in `storage/miner.go` / `storage/miner_sealing.go`, call the pipeline directly instead.
## Bug fixes
- fix: ffiwrapper: Close readers in AddPiece ([filecoin-project/lotus#9328](https://github.com/filecoin-project/lotus/pull/9328))
- fix: sealing: Drop unused PreCommitInfo from pipeline.SectorInfo ([filecoin-project/lotus#9325](https://github.com/filecoin-project/lotus/pull/9325))
- fix: cli: fix panic in `lotus-miner actor control list` ([filecoin-project/lotus#9241](https://github.com/filecoin-project/lotus/pull/9241))
- fix: sealing: Abort upgrades in sectors with no deals ([filecoin-project/lotus#9310](https://github.com/filecoin-project/lotus/pull/9310))
- fix: sealing: Make DataCid resource env vars make more sense ([filecoin-project/lotus#9231](https://github.com/filecoin-project/lotus/pull/9231))
- fix: cli: Option to specify --from msg sender ([filecoin-project/lotus#9237](https://github.com/filecoin-project/lotus/pull/9237))
- fix: ux: better ledger rejection error ([filecoin-project/lotus#9242](https://github.com/filecoin-project/lotus/pull/9242))
- fix: ux: msg receipt for actor withdrawal ([filecoin-project/lotus#9155](https://github.com/filecoin-project/lotus/pull/9155))
- fix: ux: exclude negative available balance from spendable amount ([filecoin-project/lotus#9182](https://github.com/filecoin-project/lotus/pull/9182))
- fix: sealing: Avoid panicking in handleUpdateActivating on startup ([filecoin-project/lotus#9331](https://github.com/filecoin-project/lotus/pull/9331))
- fix: api: DataCid - ensure reader is closed ([filecoin-project/lotus#9230](https://github.com/filecoin-project/lotus/pull/9230))
- fix: verifreg: serialize RmDcProposalID as int, not tuple ([filecoin-project/lotus#9206](https://github.com/filecoin-project/lotus/pull/9206))
- fix: api: Ignore uuid check for messages with uuid not set ([filecoin-project/lotus#9303](https://github.com/filecoin-project/lotus/pull/9303))
- fix: cgroupV1: memory.memsw.usage_in_bytes: no such file or directory ([filecoin-project/lotus#9202](https://github.com/filecoin-project/lotus/pull/9202))
- fix: miner: init miner's with 32GiB sectors by default ([filecoin-project/lotus#9364](https://github.com/filecoin-project/lotus/pull/9364))
- fix: worker: Close all storage paths on worker shutdown ([filecoin-project/lotus#9153](https://github.com/filecoin-project/lotus/pull/9153))
- fix: build: set PropagationDelaySecs correctly ([filecoin-project/lotus#9358](https://github.com/filecoin-project/lotus/pull/9358))
- fix: renew --only-cc with sectorfile ([filecoin-project/lotus#9428](https://github.com/filecoin-project/lotus/pull/9428))
## Dependency updates
- github.com/filecoin-project/go-fil-markets (v1.23.1 -> v1.24.0)
- github.com/filecoin-project/go-jsonrpc (v0.1.5 -> v0.1.7)
- github.com/filecoin-project/go-state-types (v0.1.10 -> v0.1.12-beta)
- github.com/filecoin-project/go-commp-utils/nonffi (null -> v0.0.0-20220905160352-62059082a837)
- deps: go-libp2p-pubsub v0.8.0 ([filecoin-project/lotus#9229](https://github.com/filecoin-project/lotus/pull/9229))
- deps: libp2p v0.22 ([filecoin-project/lotus#9216](https://github.com/filecoin-project/lotus/pull/9216))
- deps: Use latest cbor-gen ([filecoin-project/lotus#9335](https://github.com/filecoin-project/lotus/pull/9335))
- chore: update bitswap and some libp2p packages ([filecoin-project/lotus#9279](https://github.com/filecoin-project/lotus/pull/9279))
## Others
- chore: merge releases into master after v1.17.1 release ([filecoin-project/lotus#9283](https://github.com/filecoin-project/lotus/pull/9283))
- chore: docs: Fix dead links to docs.filecoin.io ([filecoin-project/lotus#9304](https://github.com/filecoin-project/lotus/pull/9304))
- chore: deps: update FFI ([filecoin-project/lotus#9330](https://github.com/filecoin-project/lotus/pull/9330))
- chore: seed: add cmd for adding signers to rkh to genesis ([filecoin-project/lotus#9198](https://github.com/filecoin-project/lotus/pull/9198))
- chore: fix typo in comment ([filecoin-project/lotus#9161](https://github.com/filecoin-project/lotus/pull/9161))
- chore: cli: cleanup and standardize cli ([filecoin-project/lotus#9317](https://github.com/filecoin-project/lotus/pull/9317))
- chore: versioning: Bump version to v1.17.2-dev ([filecoin-project/lotus#9147](https://github.com/filecoin-project/lotus/pull/9147))
- chore: release: v1.17.2-rc1 ([filecoin-project/lotus#9339](https://github.com/filecoin-project/lotus/pull/9339))
- feat: shed: add a --max-size flag to vlog2car ([filecoin-project/lotus#9212](https://github.com/filecoin-project/lotus/pull/9212))
- fix: docsgen: revert rename of API Name to Num ([filecoin-project/lotus#9315](https://github.com/filecoin-project/lotus/pull/9315))
- fix: ffi: Revert accidental filecoin-ffi downgrade from #9144 ([filecoin-project/lotus#9277](https://github.com/filecoin-project/lotus/pull/9277))
- fix: miner: Call SyncBasefeeCheck from `lotus info` ([filecoin-project/lotus#9281](https://github.com/filecoin-project/lotus/pull/9281))
- fix: mock sealer: grab lock in ReadPiece ([filecoin-project/lotus#9207](https://github.com/filecoin-project/lotus/pull/9207))
- refactor: use `os.ReadDir` for lightweight directory reading ([filecoin-project/lotus#9282](https://github.com/filecoin-project/lotus/pull/9282))
- tests: cli: Don't panic with no providers in client retrieve ([filecoin-project/lotus#9232](https://github.com/filecoin-project/lotus/pull/9232))
- build: artifacts: butterfly ([filecoin-project/lotus#9027](https://github.com/filecoin-project/lotus/pull/9027))
- build: Use lotus snap (and fix typo) for packer builds ([filecoin-project/lotus#9152](https://github.com/filecoin-project/lotus/pull/9152))
- build: Update xcode version for macos builds ([filecoin-project/lotus#9170](https://github.com/filecoin-project/lotus/pull/9170))
- ci: build: Snap daemon autorun disable ([filecoin-project/lotus#9167](https://github.com/filecoin-project/lotus/pull/9167))
- ci: Use golang 1.18.1 to build appimage ([filecoin-project/lotus#9389](https://github.com/filecoin-project/lotus/pull/9389))
- ci: Don't publish new homebrew releases for RC builds ([filecoin-project/lotus#9350](https://github.com/filecoin-project/lotus/pull/9350))
- Merge branch 'deps/go-libp2p-v0.21'
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| Aayush Rajasekaran | 8 | +23010/-2122 | 109 |
| Aayush | 15 | +6168/-2679 | 360 |
| Łukasz Magiera | 69 | +6462/-2137 | 606 |
| Geoff Stuart | 19 | +3080/-1177 | 342 |
| Marco Munizaga | 16 | +543/-424 | 41 |
| Shrenuj Bansal | 30 | +485/-419 | 88 |
| LexLuthr | 3 | +498/-12 | 19 |
| Phi | 15 | +330/-70 | 17 |
| Jennifer Wang | 7 | +132/-12 | 11 |
| TippyFlitsUK | 1 | +43/-45 | 12 |
| Steven Allen | 1 | +18/-28 | 2 |
| Frrist | 1 | +19/-11 | 2 |
| Eng Zer Jun | 1 | +14/-11 | 6 |
| Dirk McCormick | 2 | +23/-1 | 3 |
| Ian Davis | 3 | +7/-9 | 3 |
| Masih H. Derkani | 1 | +11/-0 | 1 |
| Anton Evangelatov | 1 | +11/-0 | 1 |
| Yu | 2 | +4/-4 | 5 |
| Hannah Howard | 1 | +4/-4 | 1 |
| Phi-rjan | 1 | +1/-2 | 1 |
| Jiaying Wang | 1 | +3/-0 | 1 |
| nujz | 1 | +1/-1 | 1 |
| Rob Quist | 1 | +1/-1 | 1 |
# v1.17.1 / 2022-09-06

This is an optional release of Lotus. This release introduces the [Splitstore v2 - beta](https://github.com/filecoin-project/lotus/blob/master/blockstore/splitstore/README.md)(beta). Splitstore aims to reduce the node performance impact that's caused by the Filecoin's very large, and continuously growing datastore. Splitstore v2 introduces the coldstore auto prune/GC feature & some improvements for the hotstore. We welcome all lotus users to join the early testers and try the new Splitstore out, you can leave any feedback or report issues in [this discussion](https://github.com/filecoin-project/lotus/discussions/9179) or create an issue. As always, multiple small bug fixes, new features & improvements are also included in this release.


@@ -3,6 +3,7 @@ package api
 import (
 	"context"
 	"fmt"
+	"time"

 	"github.com/google/uuid"
@@ -49,6 +50,9 @@ type Common interface {
 	// trigger graceful shutdown
 	Shutdown(context.Context) error //perm:admin

+	// StartTime returns node start time
+	StartTime(context.Context) (time.Time, error) //perm:read
+
 	// Session returns a random UUID of api provider session
 	Session(context.Context) (uuid.UUID, error) //perm:read
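The new method is read-permissioned and simply reports when the node came up; callers derive uptime by subtracting from the current time, which is what the `lotus info` and `lotus-miner info` changes later in this commit do. A small self-contained sketch (the `startTimeAPI` interface and `fakeNode` type below are stand-ins for illustration, not lotus types):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// startTimeAPI is a narrow stand-in for the part of the Common API used here.
type startTimeAPI interface {
	StartTime(context.Context) (time.Time, error)
}

// uptime derives how long a node has been running from its StartTime,
// mirroring what the `lotus info` command in this changeset prints.
func uptime(ctx context.Context, node startTimeAPI) (time.Duration, error) {
	start, err := node.StartTime(ctx)
	if err != nil {
		return 0, err
	}
	return time.Since(start).Truncate(time.Second), nil
}

// fakeNode is a trivial in-memory implementation used only for this example.
type fakeNode struct{ started time.Time }

func (f fakeNode) StartTime(_ context.Context) (time.Time, error) { return f.started, nil }

func main() {
	up, _ := uptime(context.Background(), fakeNode{started: time.Now().Add(-90 * time.Second)})
	fmt.Println("uptime:", up) // ~1m30s
}
```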


@@ -2302,6 +2302,21 @@ func (mr *MockFullNodeMockRecorder) Shutdown(arg0 interface{}) *gomock.Call {
 	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Shutdown", reflect.TypeOf((*MockFullNode)(nil).Shutdown), arg0)
 }

+// StartTime mocks base method.
+func (m *MockFullNode) StartTime(arg0 context.Context) (time.Time, error) {
+	m.ctrl.T.Helper()
+	ret := m.ctrl.Call(m, "StartTime", arg0)
+	ret0, _ := ret[0].(time.Time)
+	ret1, _ := ret[1].(error)
+	return ret0, ret1
+}
+
+// StartTime indicates an expected call of StartTime.
+func (mr *MockFullNodeMockRecorder) StartTime(arg0 interface{}) *gomock.Call {
+	mr.mock.ctrl.T.Helper()
+	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StartTime", reflect.TypeOf((*MockFullNode)(nil).StartTime), arg0)
+}
+
 // StateAccountKey mocks base method.
 func (m *MockFullNode) StateAccountKey(arg0 context.Context, arg1 address.Address, arg2 types.TipSetKey) (address.Address, error) {
 	m.ctrl.T.Helper()


@@ -79,6 +79,8 @@ type CommonStruct struct {
 		Shutdown func(p0 context.Context) error `perm:"admin"`

+		StartTime func(p0 context.Context) (time.Time, error) `perm:"read"`
+
 		Version func(p0 context.Context) (APIVersion, error) `perm:"read"`
 	}
 }
@@ -1166,6 +1168,17 @@ func (s *CommonStub) Shutdown(p0 context.Context) error {
 	return ErrNotSupported
 }

+func (s *CommonStruct) StartTime(p0 context.Context) (time.Time, error) {
+	if s.Internal.StartTime == nil {
+		return *new(time.Time), ErrNotSupported
+	}
+	return s.Internal.StartTime(p0)
+}
+
+func (s *CommonStub) StartTime(p0 context.Context) (time.Time, error) {
+	return *new(time.Time), ErrNotSupported
+}
+
 func (s *CommonStruct) Version(p0 context.Context) (APIVersion, error) {
 	if s.Internal.Version == nil {
 		return *new(APIVersion), ErrNotSupported


@@ -2157,6 +2157,21 @@ func (mr *MockFullNodeMockRecorder) Shutdown(arg0 interface{}) *gomock.Call {
 	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Shutdown", reflect.TypeOf((*MockFullNode)(nil).Shutdown), arg0)
 }

+// StartTime mocks base method.
+func (m *MockFullNode) StartTime(arg0 context.Context) (time.Time, error) {
+	m.ctrl.T.Helper()
+	ret := m.ctrl.Call(m, "StartTime", arg0)
+	ret0, _ := ret[0].(time.Time)
+	ret1, _ := ret[1].(error)
+	return ret0, ret1
+}
+
+// StartTime indicates an expected call of StartTime.
+func (mr *MockFullNodeMockRecorder) StartTime(arg0 interface{}) *gomock.Call {
+	mr.mock.ctrl.T.Helper()
+	return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "StartTime", reflect.TypeOf((*MockFullNode)(nil).StartTime), arg0)
+}
+
 // StateAccountKey mocks base method.
 func (m *MockFullNode) StateAccountKey(arg0 context.Context, arg1 address.Address, arg2 types.TipSetKey) (address.Address, error) {
 	m.ctrl.T.Helper()


@@ -47,6 +47,9 @@ func (m MemBlockstore) Get(ctx context.Context, k cid.Cid) (blocks.Block, error)
 	if !ok {
 		return nil, ipld.ErrNotFound{Cid: k}
 	}
+	if b.Cid().Prefix().Codec != k.Prefix().Codec {
+		return blocks.NewBlockWithCid(b.RawData(), k)
+	}
 	return b, nil
 }

blockstore/mem_test.go (new file, 45 lines)

@@ -0,0 +1,45 @@
package blockstore

import (
	"context"
	"testing"

	blocks "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
	"github.com/stretchr/testify/require"
)

func TestMemGetCodec(t *testing.T) {
	ctx := context.Background()
	bs := NewMemory()

	cborArr := []byte{0x82, 1, 2}

	h, err := mh.Sum(cborArr, mh.SHA2_256, -1)
	require.NoError(t, err)

	rawCid := cid.NewCidV1(cid.Raw, h)
	rawBlk, err := blocks.NewBlockWithCid(cborArr, rawCid)
	require.NoError(t, err)

	err = bs.Put(ctx, rawBlk)
	require.NoError(t, err)

	cborCid := cid.NewCidV1(cid.DagCBOR, h)

	cborBlk, err := bs.Get(ctx, cborCid)
	require.NoError(t, err)

	require.Equal(t, cborCid.Prefix(), cborBlk.Cid().Prefix())
	require.EqualValues(t, cborArr, cborBlk.RawData())

	// was allocated
	require.NotEqual(t, cborBlk, rawBlk)

	gotRawBlk, err := bs.Get(ctx, rawCid)
	require.NoError(t, err)

	// not allocated
	require.Equal(t, rawBlk, gotRawBlk)
}

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -113,7 +113,7 @@ type MessagePoolEvtMessage struct {
 func init() {
 	// if the republish interval is too short compared to the pubsub timecache, adjust it
-	minInterval := pubsub.TimeCacheDuration + time.Duration(build.PropagationDelaySecs)
+	minInterval := pubsub.TimeCacheDuration + time.Duration(build.PropagationDelaySecs)*time.Second
 	if RepublishInterval < minInterval {
 		RepublishInterval = minInterval
 	}
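The fix above addresses a classic `time.Duration` unit bug: `build.PropagationDelaySecs` is a plain integer, and `time.Duration(n)` on its own interprets it as nanoseconds, so the old expression added roughly 10 nanoseconds instead of 10 seconds to the pubsub time-cache duration. A standalone illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const propagationDelaySecs = 10

	wrong := time.Duration(propagationDelaySecs)               // 10 nanoseconds
	right := time.Duration(propagationDelaySecs) * time.Second // 10 seconds

	fmt.Println(wrong, right) // 10ns 10s
}
```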


@@ -181,12 +181,14 @@ func retrieve(ctx context.Context, cctx *cli.Context, fapi lapi.FullNode, sel *l
 			event = retrievalmarket.ClientEvents[*evt.Event]
 		}

-		printf("Recv %s, Paid %s, %s (%s), %s\n",
+		printf("Recv %s, Paid %s, %s (%s), %s [%d|%d]\n",
 			types.SizeStr(types.NewInt(evt.BytesReceived)),
 			types.FIL(evt.TotalPaid),
 			strings.TrimPrefix(event, "ClientEvent"),
 			strings.TrimPrefix(retrievalmarket.DealStatuses[evt.Status], "DealStatus"),
 			time.Now().Sub(start).Truncate(time.Millisecond),
+			evt.ID,
+			types.NewInt(evt.BytesReceived),
 		)

 		switch evt.Status {


@@ -41,7 +41,13 @@ func infoCmdAct(cctx *cli.Context) error {
 		return err
 	}

+	start, err := fullapi.StartTime(ctx)
+	if err != nil {
+		return err
+	}
+
 	fmt.Printf("Network: %s\n", network.NetworkName)
+	fmt.Printf("StartTime: %s (started at %s)\n", time.Now().Sub(start).Truncate(time.Second), start.Truncate(time.Second))
 	fmt.Print("Chain: ")
 	err = SyncBasefeeCheck(ctx, fullapi)
 	if err != nil {


@@ -124,8 +124,9 @@ var NetPeers = &cli.Command{
 }

 var NetPing = &cli.Command{
 	Name:      "ping",
 	Usage:     "Ping peers",
+	ArgsUsage: "[peerMultiaddr]",
 	Flags: []cli.Flag{
 		&cli.IntFlag{
 			Name: "count",


@@ -44,6 +44,7 @@ import (
 	"github.com/filecoin-project/lotus/chain/consensus/filcns"
 	"github.com/filecoin-project/lotus/chain/state"
 	"github.com/filecoin-project/lotus/chain/stmgr"
+	"github.com/filecoin-project/lotus/chain/store"
 	"github.com/filecoin-project/lotus/chain/types"
 )
@@ -294,6 +295,28 @@ func ParseTipSetRef(ctx context.Context, api v0api.FullNode, tss string) (*types
 	return ts, nil
 }

+func ParseTipSetRefOffline(ctx context.Context, cs *store.ChainStore, tss string) (*types.TipSet, error) {
+	switch {
+	case tss == "" || tss == "@head":
+		return cs.GetHeaviestTipSet(), nil
+	case tss[0] != '@':
+		cids, err := ParseTipSetString(tss)
+		if err != nil {
+			return nil, xerrors.Errorf("failed to parse tipset (%q): %w", tss, err)
+		}
+		return cs.LoadTipSet(ctx, types.NewTipSetKey(cids...))
+	default:
+		var h uint64
+		if _, err := fmt.Sscanf(tss, "@%d", &h); err != nil {
+			return nil, xerrors.Errorf("parsing height tipset ref: %w", err)
+		}
+		return cs.GetTipsetByHeight(ctx, abi.ChainEpoch(h), cs.GetHeaviestTipSet(), true)
+	}
+}
+
 var StatePowerCmd = &cli.Command{
 	Name:  "power",
 	Usage: "Query network or miner power",
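For reference, the offline parser accepts the same forms as its online counterpart: an empty string or `@head` for the heaviest tipset, `@<height>` for a lookup by epoch, and otherwise a comma-separated list of block CIDs. The sketch below shows only that classification step, with the ChainStore resolution omitted; `classifyTipSetRef` is an illustrative helper, not part of lotus:

```go
package main

import (
	"fmt"
	"strings"
)

// classifyTipSetRef distinguishes the string forms accepted by ParseTipSetRefOffline;
// the real function then resolves each form against a ChainStore.
func classifyTipSetRef(tss string) string {
	switch {
	case tss == "" || tss == "@head":
		return "heaviest tipset"
	case tss[0] != '@':
		return fmt.Sprintf("tipset key from CIDs: %v", strings.Split(tss, ","))
	default:
		var h uint64
		if _, err := fmt.Sscanf(tss, "@%d", &h); err != nil {
			return "invalid height ref"
		}
		return fmt.Sprintf("tipset at height %d", h)
	}
}

func main() {
	for _, ref := range []string{"", "@head", "@123456", "cid1,cid2"} {
		fmt.Printf("%q -> %s\n", ref, classifyTipSetRef(ref))
	}
}
```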


@@ -260,6 +260,7 @@ var walletSetDefault = &cli.Command{
 			return err
 		}

+		fmt.Println("Default address set to:", addr)
 		return api.WalletSetDefault(ctx, addr)
 	},
 }
@@ -517,6 +518,8 @@ var walletDelete = &cli.Command{
 			return err
 		}

+		fmt.Println("Soft deleting address:", addr)
+		fmt.Println("Hard deletion of the address in `~/.lotus/keystore` is needed for permanent removal")
 		return api.WalletDelete(ctx, addr)
 	},
 }


@@ -92,6 +92,12 @@ func infoCmdAct(cctx *cli.Context) error {
 	fmt.Println("Enabled subsystems (from markets API):", subsystems)

+	start, err := fullapi.StartTime(ctx)
+	if err != nil {
+		return err
+	}
+
+	fmt.Printf("StartTime: %s (started at %s)\n", time.Now().Sub(start).Truncate(time.Second), start.Truncate(time.Second))
 	fmt.Print("Chain: ")

 	err = lcli.SyncBasefeeCheck(ctx, fullapi)


@@ -918,6 +918,9 @@ var sectorsRenewCmd = &cli.Command{
 			}

 			si, found := activeSectorsInfo[abi.SectorNumber(id)]
+			if len(si.DealIDs) > 0 && cctx.Bool("only-cc") {
+				continue
+			}
 			if !found {
 				return xerrors.Errorf("sector %d is not active", id)
 			}


@@ -32,7 +32,6 @@ import (
 	"github.com/filecoin-project/lotus/blockstore"
 	"github.com/filecoin-project/lotus/chain/store"
-	"github.com/filecoin-project/lotus/chain/types"
 	lcli "github.com/filecoin-project/lotus/cli"
 	"github.com/filecoin-project/lotus/cmd/lotus-shed/shedgen"
 	"github.com/filecoin-project/lotus/node/repo"
@@ -125,22 +124,9 @@ var exportChainCmd = &cli.Command{
 		fullstate := cctx.Bool("full-state")
 		skipoldmsgs := cctx.Bool("skip-old-msgs")

-		var ts *types.TipSet
-		if tss := cctx.String("tipset"); tss != "" {
-			cids, err := lcli.ParseTipSetString(tss)
-			if err != nil {
-				return xerrors.Errorf("failed to parse tipset (%q): %w", tss, err)
-			}
-			tsk := types.NewTipSetKey(cids...)
-			selts, err := cs.LoadTipSet(context.Background(), tsk)
-			if err != nil {
-				return xerrors.Errorf("loading tipset: %w", err)
-			}
-			ts = selts
-		} else {
-			ts = cs.GetHeaviestTipSet()
+		ts, err := lcli.ParseTipSetRefOffline(ctx, cs, cctx.String("tipset"))
+		if err != nil {
+			return err
 		}

 		if fullstate {

cmd/lotus-shed/fip-0036.go (new file, 554 lines)

@@ -0,0 +1,554 @@
package main
import (
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"sort"
"strconv"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
"github.com/mitchellh/go-homedir"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/multisig"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/node/repo"
)
type Option uint64
const (
Approve Option = 49
Reject = 50
)
type Vote struct {
ID uint64
OptionID Option
SignerAddress address.Address
}
type msigVote struct {
Multisig msigBriefInfo
ApproveCount uint64
RejectCount uint64
}
// https://filpoll.io/poll/16
// snapshot height: 2162760
// state root: bafy2bzacebdnzh43hw66bmvguk65wiwr5ssaejlq44fpdei2ysfh3eefpdlqs
var fip36PollCmd = &cli.Command{
Name: "fip36poll",
Usage: "Process the FIP0036 FilPoll result",
ArgsUsage: "[state root, votes]",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Value: "~/.lotus",
},
},
Subcommands: []*cli.Command{
finalResultCmd,
},
}
var finalResultCmd = &cli.Command{
Name: "results",
Usage: "get poll results",
ArgsUsage: "[state root] [height] [votes json]",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Value: "~/.lotus",
},
},
Action: func(cctx *cli.Context) error {
if cctx.NArg() != 3 {
return xerrors.New("filpoll0036 results [state root] [height] [votes.json]")
}
ctx := context.TODO()
if !cctx.Args().Present() {
return fmt.Errorf("must pass state root")
}
sroot, err := cid.Decode(cctx.Args().First())
if err != nil {
return fmt.Errorf("failed to parse input: %w", err)
}
fsrepo, err := repo.NewFS(cctx.String("repo"))
if err != nil {
return err
}
lkrepo, err := fsrepo.Lock(repo.FullNode)
if err != nil {
return err
}
defer lkrepo.Close() //nolint:errcheck
bs, err := lkrepo.Blockstore(ctx, repo.UniversalBlockstore)
if err != nil {
return fmt.Errorf("failed to open blockstore: %w", err)
}
defer func() {
if c, ok := bs.(io.Closer); ok {
if err := c.Close(); err != nil {
log.Warnf("failed to close blockstore: %s", err)
}
}
}()
mds, err := lkrepo.Datastore(context.Background(), "/metadata")
if err != nil {
return err
}
cs := store.NewChainStore(bs, bs, mds, filcns.Weight, nil)
defer cs.Close() //nolint:errcheck
cst := cbor.NewCborStore(bs)
store := adt.WrapStore(ctx, cst)
st, err := state.LoadStateTree(cst, sroot)
if err != nil {
return err
}
height, err := strconv.Atoi(cctx.Args().Get(1))
if err != nil {
return err
}
//get all the votes' signer ID address && their vote
vj, err := homedir.Expand(cctx.Args().Get(2))
if err != nil {
return xerrors.Errorf("fail to get votes json")
}
votes, err := getVotesMap(vj)
if err != nil {
return xerrors.Errorf("failed to get voters: ", err)
}
type minerBriefInfo struct {
rawBytePower abi.StoragePower
dealPower abi.StoragePower
balance abi.TokenAmount
}
// power actor
pa, err := st.GetActor(power.Address)
if err != nil {
return xerrors.Errorf("failed to get power actor: \n", err)
}
powerState, err := power.Load(store, pa)
if err != nil {
return xerrors.Errorf("failed to get power state: \n", err)
}
//market actor
ma, err := st.GetActor(market.Address)
if err != nil {
return xerrors.Errorf("fail to get market actor: ", err)
}
marketState, err := market.Load(store, ma)
if err != nil {
return xerrors.Errorf("fail to load market state: ", err)
}
lookupId := func(addr address.Address) address.Address {
ret, err := st.LookupID(addr)
if err != nil {
panic(err)
}
return ret
}
// we need to build several pieces of information, as we traverse the state tree:
// a map of accounts to every msig that they are a signer of
accountsToMultisigs := make(map[address.Address][]address.Address)
// a map of multisigs to some info about them for quick lookup
msigActorsInfo := make(map[address.Address]msigBriefInfo)
// a map of actors (accounts+multisigs) to every miner that they are an owner of
ownerMap := make(map[address.Address][]address.Address)
// a map of accounts to every miner that they are a worker of
workerMap := make(map[address.Address][]address.Address)
// a map of miners to some info about them for quick lookup
minerActorsInfo := make(map[address.Address]minerBriefInfo)
// a map of client addresses to deal data stored in proposals
clientToDealStorage := make(map[address.Address]abi.StoragePower)
fmt.Println("iterating over all actors")
count := 0
err = st.ForEach(func(addr address.Address, act *types.Actor) error {
if count%200000 == 0 {
fmt.Println("processed ", count, " actors building maps")
}
count++
if builtin.IsMultisigActor(act.Code) {
ms, err := multisig.Load(store, act)
if err != nil {
return fmt.Errorf("load msig failed %v", err)
}
// TODO: Confirm that these are always ID addresses
signers, err := ms.Signers()
if err != nil {
return xerrors.Errorf("fail to get msig signers", err)
}
for _, s := range signers {
signerId := lookupId(s)
accountsToMultisigs[signerId] = append(accountsToMultisigs[signerId], addr)
}
locked, err := ms.LockedBalance(abi.ChainEpoch(height))
if err != nil {
return xerrors.Errorf("failed to compute locked multisig balance: %w", err)
}
threshold, _ := ms.Threshold()
info := msigBriefInfo{
ID: addr,
Signer: signers,
Balance: big.Max(big.Zero(), types.BigSub(act.Balance, locked)),
Threshold: threshold,
}
msigActorsInfo[addr] = info
}
if builtin.IsStorageMinerActor(act.Code) {
m, err := miner.Load(store, act)
if err != nil {
return xerrors.Errorf("fail to load miner actor: \n", err)
}
info, err := m.Info()
if err != nil {
return xerrors.Errorf("fail to get miner info: \n", err)
}
ownerId := lookupId(info.Owner)
ownerMap[ownerId] = append(ownerMap[ownerId], addr)
workerId := lookupId(info.Worker)
workerMap[workerId] = append(workerMap[workerId], addr)
lockedFunds, err := m.LockedFunds()
if err != nil {
return err
}
bal := big.Sub(act.Balance, lockedFunds.TotalLockedFunds())
bal = big.Max(big.Zero(), bal)
pow, ok, err := powerState.MinerPower(addr)
if err != nil {
return err
}
if !ok {
pow.RawBytePower = big.Zero()
}
minerActorsInfo[addr] = minerBriefInfo{
rawBytePower: pow.RawBytePower,
// gets added up outside this loop
dealPower: big.Zero(),
balance: bal,
}
}
return nil
})
if err != nil {
return err
}
fmt.Println("iterating over proposals")
dealProposals, err := marketState.Proposals()
if err != nil {
return err
}
dealStates, err := marketState.States()
if err != nil {
return err
}
if err := dealProposals.ForEach(func(dealID abi.DealID, d market.DealProposal) error {
dealState, ok, err := dealStates.Get(dealID)
if err != nil {
return err
}
if !ok || dealState.SectorStartEpoch == -1 {
// effectively a continue
return nil
}
clientId := lookupId(d.Client)
if cd, found := clientToDealStorage[clientId]; found {
clientToDealStorage[clientId] = big.Add(cd, big.NewInt(int64(d.PieceSize)))
} else {
clientToDealStorage[clientId] = big.NewInt(int64(d.PieceSize))
}
providerId := lookupId(d.Provider)
mai, found := minerActorsInfo[providerId]
if !found {
return xerrors.Errorf("didn't find miner %s", providerId)
}
mai.dealPower = big.Add(mai.dealPower, big.NewInt(int64(d.PieceSize)))
minerActorsInfo[providerId] = mai
return nil
}); err != nil {
return xerrors.Errorf("fail to get deals")
}
// now tabulate votes
approveBalance := abi.NewTokenAmount(0)
rejectionBalance := abi.NewTokenAmount(0)
clientApproveBytes := big.Zero()
clientRejectBytes := big.Zero()
msigPendingVotes := make(map[address.Address]msigVote) //map[msig ID]msigVote
msigVotes := make(map[address.Address]Option)
minerVotes := make(map[address.Address]Option)
fmt.Println("counting account and multisig votes")
for _, vote := range votes {
signerId, err := st.LookupID(vote.SignerAddress)
if err != nil {
fmt.Println("voter ", vote.SignerAddress, " not found in state tree, skipping")
continue
}
//process votes for regular accounts
accountActor, err := st.GetActor(signerId)
if err != nil {
return xerrors.Errorf("fail to get account account for signer: ", err)
}
clientBytes, ok := clientToDealStorage[signerId]
if !ok {
clientBytes = big.Zero()
}
if vote.OptionID == Approve {
approveBalance = types.BigAdd(approveBalance, accountActor.Balance)
clientApproveBytes = big.Add(clientApproveBytes, clientBytes)
} else {
rejectionBalance = types.BigAdd(rejectionBalance, accountActor.Balance)
clientRejectBytes = big.Add(clientRejectBytes, clientBytes)
}
if minerInfos, found := ownerMap[signerId]; found {
for _, minerInfo := range minerInfos {
minerVotes[minerInfo] = vote.OptionID
}
}
if minerInfos, found := workerMap[signerId]; found {
for _, minerInfo := range minerInfos {
if _, ok := minerVotes[minerInfo]; !ok {
minerVotes[minerInfo] = vote.OptionID
}
}
}
//process msigs
// There is a possibility that enough signers have voted for BOTH options in the poll to be above the threshold
// Because we are iterating over votes in order they arrived, the first option to go over the threshold will win
// This is in line with onchain behaviour (consider a case where signers are competing to withdraw all the funds
// in an msig into 2 different accounts)
if mss, found := accountsToMultisigs[signerId]; found {
for _, ms := range mss { //get all the msig signer has
if _, ok := msigVotes[ms]; ok {
// msig has already voted, skip
continue
}
if mpv, found := msigPendingVotes[ms]; found { //other signers of the multisig have voted, yet the threshold has not met
if vote.OptionID == Approve {
if mpv.ApproveCount+1 == mpv.Multisig.Threshold { //met threshold
approveBalance = types.BigAdd(approveBalance, mpv.Multisig.Balance)
delete(msigPendingVotes, ms) //threshold, can skip later signer votes
msigVotes[ms] = vote.OptionID
} else {
mpv.ApproveCount++
msigPendingVotes[ms] = mpv
}
} else {
if mpv.RejectCount+1 == mpv.Multisig.Threshold { //met threshold
rejectionBalance = types.BigAdd(rejectionBalance, mpv.Multisig.Balance)
delete(msigPendingVotes, ms) //threshold, can skip later signer votes
msigVotes[ms] = vote.OptionID
} else {
mpv.RejectCount++
msigPendingVotes[ms] = mpv
}
}
} else { //first vote received from one of the signers of the msig
msi, ok := msigActorsInfo[ms]
if !ok {
return xerrors.Errorf("didn't find msig %s in msig map", ms)
}
if msi.Threshold == 1 { //met threshold with this signer's single vote
if vote.OptionID == Approve {
approveBalance = types.BigAdd(approveBalance, msi.Balance)
msigVotes[ms] = Approve
} else {
rejectionBalance = types.BigAdd(rejectionBalance, msi.Balance)
msigVotes[ms] = Reject
}
} else { //threshold not met, add to pending vote
if vote.OptionID == Approve {
msigPendingVotes[ms] = msigVote{
Multisig: msi,
ApproveCount: 1,
}
} else {
msigPendingVotes[ms] = msigVote{
Multisig: msi,
RejectCount: 1,
}
}
}
}
}
}
}
for s, v := range msigVotes {
if minerInfos, found := ownerMap[s]; found {
for _, minerInfo := range minerInfos {
minerVotes[minerInfo] = v
}
}
if minerInfos, found := workerMap[s]; found {
for _, minerInfo := range minerInfos {
if _, ok := minerVotes[minerInfo]; !ok {
minerVotes[minerInfo] = v
}
}
}
}
approveRBP := big.Zero()
approveDealPower := big.Zero()
rejectionRBP := big.Zero()
rejectionDealPower := big.Zero()
fmt.Println("adding up miner votes")
for minerAddr, vote := range minerVotes {
mbi, ok := minerActorsInfo[minerAddr]
if !ok {
return xerrors.Errorf("failed to find miner info for %s", minerAddr)
}
if vote == Approve {
approveBalance = big.Add(approveBalance, mbi.balance)
approveRBP = big.Add(approveRBP, mbi.rawBytePower)
approveDealPower = big.Add(approveDealPower, mbi.dealPower)
} else {
rejectionBalance = big.Add(rejectionBalance, mbi.balance)
rejectionRBP = big.Add(rejectionRBP, mbi.rawBytePower)
rejectionDealPower = big.Add(rejectionDealPower, mbi.dealPower)
}
}
fmt.Println("Total acceptance token: ", approveBalance)
fmt.Println("Total rejection token: ", rejectionBalance)
fmt.Println("Total acceptance SP deal power: ", approveDealPower)
fmt.Println("Total rejection SP deal power: ", rejectionDealPower)
fmt.Println("Total acceptance SP rb power: ", approveRBP)
fmt.Println("Total rejection SP rb power: ", rejectionRBP)
fmt.Println("Total acceptance Client rb power: ", clientApproveBytes)
fmt.Println("Total rejection Client rb power: ", clientRejectBytes)
fmt.Println("\n\nFinal results **drumroll**")
if rejectionBalance.GreaterThanEqual(big.Mul(approveBalance, big.NewInt(3))) {
fmt.Println("token holders VETO FIP-0036!")
} else if approveBalance.LessThanEqual(rejectionBalance) {
fmt.Println("token holders REJECT FIP-0036")
} else {
fmt.Println("token holders ACCEPT FIP-0036")
}
if rejectionDealPower.GreaterThanEqual(big.Mul(approveDealPower, big.NewInt(3))) {
fmt.Println("SPs by deal data stored VETO FIP-0036!")
} else if approveDealPower.LessThanEqual(rejectionDealPower) {
fmt.Println("SPs by deal data stored REJECT FIP-0036")
} else {
fmt.Println("SPs by deal data stored ACCEPT FIP-0036")
}
if rejectionRBP.GreaterThanEqual(big.Mul(approveRBP, big.NewInt(3))) {
fmt.Println("SPs by total raw byte power VETO FIP-0036!")
} else if approveRBP.LessThanEqual(rejectionRBP) {
fmt.Println("SPs by total raw byte power REJECT FIP-0036")
} else {
fmt.Println("SPs by total raw byte power ACCEPT FIP-0036")
}
if clientRejectBytes.GreaterThanEqual(big.Mul(clientApproveBytes, big.NewInt(3))) {
fmt.Println("Storage Clients VETO FIP-0036!")
} else if clientApproveBytes.LessThanEqual(clientRejectBytes) {
fmt.Println("Storage Clients REJECT FIP-0036")
} else {
fmt.Println("Storage Clients ACCEPT FIP-0036")
}
return nil
},
}
// Returns voted sorted by votes from earliest to latest
func getVotesMap(file string) ([]Vote, error) {
var votes []Vote
vb, err := ioutil.ReadFile(file)
if err != nil {
return nil, xerrors.Errorf("read vote: %w", err)
}
if err := json.Unmarshal(vb, &votes); err != nil {
return nil, xerrors.Errorf("unmarshal vote: %w", err)
}
sort.SliceStable(votes, func(i, j int) bool {
return votes[i].ID < votes[j].ID
})
return votes, nil
}
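The multisig tallying above counts signer votes in arrival order and credits the multisig's balance to whichever option reaches the approval threshold first; later votes for that multisig are ignored. A compact standalone sketch of that rule (the types and names below are illustrative, not the shed tool's):

```go
package main

import "fmt"

// msigOutcome applies the threshold rule described above: votes are processed in
// arrival order, and the first option to accumulate `threshold` signer votes
// decides the multisig; anything after that is ignored.
func msigOutcome(votes []string, threshold uint64) string {
	counts := map[string]uint64{}
	for _, v := range votes {
		counts[v]++
		if counts[v] == threshold {
			return v
		}
	}
	return "no quorum"
}

func main() {
	// threshold 2 of 3: the second "approve" arrives before a second "reject"
	fmt.Println(msigOutcome([]string{"approve", "reject", "approve"}, 2)) // approve
	fmt.Println(msigOutcome([]string{"reject"}, 2))                       // no quorum
}
```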


@@ -74,6 +74,7 @@ func main() {
 		diffCmd,
 		itestdCmd,
 		msigCmd,
+		fip36PollCmd,
 	}

 	app := &cli.App{


@@ -25,7 +25,7 @@ import (
 type msigBriefInfo struct {
 	ID        address.Address
-	Signer    interface{}
+	Signer    []address.Address
 	Balance   abi.TokenAmount
 	Threshold uint64
 }


@@ -31,12 +31,7 @@ var createSimCommand = &cli.Command{
 		}
 		ts = node.Chainstore.GetHeaviestTipSet()
 	case 1:
-		cids, err := lcli.ParseTipSetString(cctx.Args().Get(1))
-		if err != nil {
-			return err
-		}
-		tsk := types.NewTipSetKey(cids...)
-		ts, err = node.Chainstore.LoadTipSet(cctx.Context, tsk)
+		ts, err = lcli.ParseTipSetRefOffline(cctx.Context, node.Chainstore, cctx.Args().Get(1))
 		if err != nil {
 			return err
 		}


@@ -167,6 +167,8 @@
 * [SectorsSummary](#SectorsSummary)
 * [SectorsUnsealPiece](#SectorsUnsealPiece)
 * [SectorsUpdate](#SectorsUpdate)
+* [Start](#Start)
+* [StartTime](#StartTime)
 * [Storage](#Storage)
 * [StorageAddLocal](#StorageAddLocal)
 * [StorageAttach](#StorageAttach)
@@ -3621,6 +3623,18 @@ Inputs:
 Response: `{}`

+## Start
+
+### StartTime
+
+Perms: read
+
+Inputs: `null`
+
+Response: `"0001-01-01T00:00:00Z"`
+
 ## Storage


@@ -156,6 +156,8 @@
 * [PaychVoucherCreate](#PaychVoucherCreate)
 * [PaychVoucherList](#PaychVoucherList)
 * [PaychVoucherSubmit](#PaychVoucherSubmit)
+* [Start](#Start)
+* [StartTime](#StartTime)
 * [State](#State)
 * [StateAccountKey](#StateAccountKey)
 * [StateActorCodeCIDs](#StateActorCodeCIDs)
@@ -4615,6 +4617,18 @@ Response:
 }
 ```

+## Start
+
+### StartTime
+
+Perms: read
+
+Inputs: `null`
+
+Response: `"0001-01-01T00:00:00Z"`
+
 ## State
 The State methods are used to query, inspect, and interact with chain state.
 Most methods take a TipSetKey as a parameter. The state looked up is the parent state of the tipset.


@@ -167,6 +167,8 @@
 * [Raft](#Raft)
 * [RaftLeader](#RaftLeader)
 * [RaftState](#RaftState)
+* [Start](#Start)
+* [StartTime](#StartTime)
 * [State](#State)
 * [StateAccountKey](#StateAccountKey)
 * [StateActorCodeCIDs](#StateActorCodeCIDs)
@@ -5077,6 +5079,18 @@ Response:
 }
 ```

+## Start
+
+### StartTime
+
+Perms: read
+
+Inputs: `null`
+
+Response: `"0001-01-01T00:00:00Z"`
+
 ## State
 The State methods are used to query, inspect, and interact with chain state.
 Most methods take a TipSetKey as a parameter. The state looked up is the parent state of the tipset.


@@ -1241,7 +1241,7 @@ NAME:
    lotus-miner net ping - Ping peers

 USAGE:
-   lotus-miner net ping [command options] [arguments...]
+   lotus-miner net ping [command options] [peerMultiaddr]

 OPTIONS:
    --count value, -c value  specify the number of times it should ping (default: 10)


@@ -2542,7 +2542,7 @@ NAME:
    lotus net ping - Ping peers

 USAGE:
-   lotus net ping [command options] [arguments...]
+   lotus net ping [command options] [peerMultiaddr]

 OPTIONS:
    --count value, -c value  specify the number of times it should ping (default: 10)


@@ -380,11 +380,9 @@
 # A single partition may contain up to 2349 32GiB sectors, or 2300 64GiB sectors.
 #
 # The maximum number of sectors which can be proven in a single PoSt message is 25000 in network version 16, which
-# means that a single message can prove at most 10 partinions
+# means that a single message can prove at most 10 partitions
 #
-# In some cases when submitting PoSt messages which are recovering sectors, the default network limit may still be
-# too high to fit in the block gas limit; In those cases it may be necessary to set this value to something lower
-# than 10; Note that setting this value lower may result in less efficient gas use - more messages will be sent,
+# Note that setting this value lower may result in less efficient gas use - more messages will be sent,
 # to prove each deadline, resulting in more total gas use (but each message will have lower gas limit)
 #
 # Setting this value above the network limit has no effect
@@ -403,6 +401,19 @@
 # env var: LOTUS_PROVING_MAXPARTITIONSPERRECOVERYMESSAGE
 #MaxPartitionsPerRecoveryMessage = 0

+# Enable single partition per PoSt Message for partitions containing recovery sectors
+#
+# In cases when submitting PoSt messages which contain recovering sectors, the default network limit may still be
+# too high to fit in the block gas limit. In those cases, it becomes useful to only house the single partition
+# with recovering sectors in the post message
+#
+# Note that setting this value lower may result in less efficient gas use - more messages will be sent,
+# to prove each deadline, resulting in more total gas use (but each message will have lower gas limit)
+#
+# type: bool
+# env var: LOTUS_PROVING_SINGLERECOVERINGPARTITIONPERPOSTMESSAGE
+#SingleRecoveringPartitionPerPostMessage = false
+
 [Sealing]

 # Upper bound on how many sectors can be waiting for more deals to be packed in it before it begins sealing at any given time.


@@ -26,6 +26,7 @@ First steps:
 Prepping an RC:

 - [ ] version string in `build/version.go` has been updated (in the `release/vX.Y.Z` branch).
+- [ ] run `make gen && make docsgen-cli`
 - [ ] tag commit with `vX.Y.Z-rcN`
 - [ ] cut a pre-release [here](https://github.com/filecoin-project/lotus/releases/new?prerelease=true)
@@ -66,14 +67,14 @@ Testing an RC:
   - [ ] Update the [CHANGELOG.md](https://github.com/filecoin-project/lotus/blob/master/CHANGELOG.md) to the state that can be used as release note.
   - [ ] Invite the wider community through (link to the release issue)

-- [ ] **Stage 4 - Release**
+- [ ] **Stage 4 - Stable Release**
   - [ ] Final preparation
-    - [ ] Verify that version string in [`version.go`](https://github.com/ipfs/go-ipfs/tree/master/version.go) has been updated.
-    - [ ] Prep the changelog using `scripts/mkreleaselog`, and add it to `CHANGELOG.md`. Ensure that [CHANGELOG.md](https://github.com/filecoin-project/lotus/blob/master/CHANGELOG.md) is up to date
+    - [ ] Verify that version string in [`version.go`](https://github.com/filecoin-project/lotus/blob/master/build/version.go) has been updated.
+    - [ ] Verify that codegen is up to date (`make gen && make docsgen-cli`)
+    - [ ] Ensure that [CHANGELOG.md](https://github.com/filecoin-project/lotus/blob/master/CHANGELOG.md) is up to date
     - [ ] Merge `release-vX.Y.Z` into the `releases` branch.
     - [ ] Tag this merge commit (on the `releases` branch) with `vX.Y.Z`
-    - [ ] Cut the release [here](https://github.com/filecoin-project/lotus/releases/new?prerelease=true&target=releases).
+    - [ ] Cut the release [here](https://github.com/filecoin-project/lotus/releases/new?prerelease=false&target=releases).
+      - [ ] Check `Create a discussion for this release`

 - [ ] **Post-Release**

go.mod (4 changes)

@@ -97,7 +97,7 @@ require (
 	github.com/ipfs/go-ipld-cbor v0.0.6
 	github.com/ipfs/go-ipld-format v0.4.0
 	github.com/ipfs/go-log/v2 v2.5.1
-	github.com/ipfs/go-merkledag v0.6.0
+	github.com/ipfs/go-merkledag v0.8.0
 	github.com/ipfs/go-metrics-interface v0.0.1
 	github.com/ipfs/go-metrics-prometheus v0.0.2
 	github.com/ipfs/go-unixfs v0.3.1
@@ -323,7 +323,7 @@ require (
 	github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
 	github.com/whyrusleeping/timecache v0.0.0-20160911033111-cfcb2f1abfee // indirect
 	github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
-	github.com/zondax/hid v0.9.0 // indirect
+	github.com/zondax/hid v0.9.1-0.20220302062450-5552068d2266 // indirect
 	github.com/zondax/ledger-go v0.12.1 // indirect
 	go.opentelemetry.io/otel/metric v0.25.0 // indirect
 	go.opentelemetry.io/otel/sdk/export/metric v0.25.0 // indirect

go.sum (6 changes)

@@ -828,8 +828,9 @@ github.com/ipfs/go-merkledag v0.2.3/go.mod h1:SQiXrtSts3KGNmgOzMICy5c0POOpUNQLvB
 github.com/ipfs/go-merkledag v0.2.4/go.mod h1:SQiXrtSts3KGNmgOzMICy5c0POOpUNQLvB3ClKnBAlk=
 github.com/ipfs/go-merkledag v0.3.2/go.mod h1:fvkZNNZixVW6cKSZ/JfLlON5OlgTXNdRLz0p6QG/I2M=
 github.com/ipfs/go-merkledag v0.5.1/go.mod h1:cLMZXx8J08idkp5+id62iVftUQV+HlYJ3PIhDfZsjA4=
-github.com/ipfs/go-merkledag v0.6.0 h1:oV5WT2321tS4YQVOPgIrWHvJ0lJobRTerU+i9nmUCuA=
 github.com/ipfs/go-merkledag v0.6.0/go.mod h1:9HSEwRd5sV+lbykiYP+2NC/3o6MZbKNaa4hfNcH5iH0=
+github.com/ipfs/go-merkledag v0.8.0 h1:ZUda+sh/MGZX4Z13DE/VQT4GmKWm4H95Nje4qcL/yPE=
+github.com/ipfs/go-merkledag v0.8.0/go.mod h1:/RmH1kOs7qDMNtGKPh4d/UErNMVuAMpPS/tP57a3aoY=
 github.com/ipfs/go-metrics-interface v0.0.1 h1:j+cpbjYvu4R8zbleSs36gvB7jR+wsL2fGD6n0jO4kdg=
 github.com/ipfs/go-metrics-interface v0.0.1/go.mod h1:6s6euYU4zowdslK0GKHmqaIZ3j/b/tL7HTWtJ4VPgWY=
 github.com/ipfs/go-metrics-prometheus v0.0.2 h1:9i2iljLg12S78OhC6UAiXi176xvQGiZaGVF1CUVdE+s=
@@ -1829,8 +1830,9 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
 github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
-github.com/zondax/hid v0.9.0 h1:eiT3P6vNxAEVxXMw66eZUAAnU2zD33JBkfG/EnfAKl8=
 github.com/zondax/hid v0.9.0/go.mod h1:l5wttcP0jwtdLjqjMMWFVEE7d1zO0jvSPA9OPZxWpEM=
+github.com/zondax/hid v0.9.1-0.20220302062450-5552068d2266 h1:O9XLFXGkVswDFmH9LaYpqu+r/AAFWqr0DL6V00KEVFg=
+github.com/zondax/hid v0.9.1-0.20220302062450-5552068d2266/go.mod h1:l5wttcP0jwtdLjqjMMWFVEE7d1zO0jvSPA9OPZxWpEM=
 github.com/zondax/ledger-go v0.12.1 h1:hYRcyznPRJp+5mzF2sazTLP2nGvGjYDD2VzhHhFomLU=
 github.com/zondax/ledger-go v0.12.1/go.mod h1:KatxXrVDzgWwbssUWsF5+cOJHXPvzQ09YSlzGNuhOEo=
 go.dedis.ch/fixbuf v1.0.3 h1:hGcV9Cd/znUxlusJ64eAlExS+5cJDIyTyEG+otu5wQs=

View File

@ -3,14 +3,15 @@ package kit
import ( import (
"context" "context"
"fmt" "fmt"
"github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
"github.com/stretchr/testify/require"
"net" "net"
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
"testing" "testing"
"github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
"github.com/stretchr/testify/require"
"github.com/filecoin-project/lotus/api/client" "github.com/filecoin-project/lotus/api/client"
"github.com/filecoin-project/lotus/cmd/lotus-worker/sealworker" "github.com/filecoin-project/lotus/cmd/lotus-worker/sealworker"
"github.com/filecoin-project/lotus/node" "github.com/filecoin-project/lotus/node"

View File

@ -156,6 +156,7 @@ func defaults() []Option {
Override(new(journal.DisabledEvents), journal.EnvDisabledEvents), Override(new(journal.DisabledEvents), journal.EnvDisabledEvents),
Override(new(journal.Journal), modules.OpenFilesystemJournal), Override(new(journal.Journal), modules.OpenFilesystemJournal),
Override(new(*alerting.Alerting), alerting.NewAlertingSystem), Override(new(*alerting.Alerting), alerting.NewAlertingSystem),
Override(new(dtypes.NodeStartTime), FromVal(dtypes.NodeStartTime(time.Now()))),
Override(CheckFDLimit, modules.CheckFdLimit(build.DefaultFDLimit)), Override(CheckFDLimit, modules.CheckFdLimit(build.DefaultFDLimit)),

View File

@ -94,6 +94,10 @@ func ConfigStorageMiner(c interface{}) Option {
Override(new(paths.Store), From(new(*paths.Remote))), Override(new(paths.Store), From(new(*paths.Remote))),
Override(new(dtypes.RetrievalPricingFunc), modules.RetrievalPricingFunc(cfg.Dealmaking)), Override(new(dtypes.RetrievalPricingFunc), modules.RetrievalPricingFunc(cfg.Dealmaking)),
If(cfg.Subsystems.EnableMining || cfg.Subsystems.EnableSealing,
Override(GetParamsKey, modules.GetParams(!cfg.Proving.DisableBuiltinWindowPoSt || !cfg.Proving.DisableBuiltinWinningPoSt || cfg.Storage.AllowCommit || cfg.Storage.AllowProveReplicaUpdate2)),
),
If(!cfg.Subsystems.EnableMining, If(!cfg.Subsystems.EnableMining,
If(cfg.Subsystems.EnableSealing, Error(xerrors.Errorf("sealing can only be enabled on a mining node"))), If(cfg.Subsystems.EnableSealing, Error(xerrors.Errorf("sealing can only be enabled on a mining node"))),
If(cfg.Subsystems.EnableSectorStorage, Error(xerrors.Errorf("sealing can only be enabled on a mining node"))), If(cfg.Subsystems.EnableSectorStorage, Error(xerrors.Errorf("sealing can only be enabled on a mining node"))),
@ -107,9 +111,6 @@ func ConfigStorageMiner(c interface{}) Option {
Override(new(storiface.Prover), ffiwrapper.ProofProver), Override(new(storiface.Prover), ffiwrapper.ProofProver),
Override(new(storiface.ProverPoSt), From(new(sectorstorage.SectorManager))), Override(new(storiface.ProverPoSt), From(new(sectorstorage.SectorManager))),
// Sealing (todo should be under EnableSealing, but storagefsm is currently bundled with storage.Miner)
Override(GetParamsKey, modules.GetParams),
Override(new(dtypes.SetSealingConfigFunc), modules.NewSetSealConfigFunc), Override(new(dtypes.SetSealingConfigFunc), modules.NewSetSealConfigFunc),
Override(new(dtypes.GetSealingConfigFunc), modules.NewGetSealConfigFunc), Override(new(dtypes.GetSealingConfigFunc), modules.NewGetSealConfigFunc),

View File

@ -704,11 +704,9 @@ After changing this option, confirm that the new value works in your setup by in
A single partition may contain up to 2349 32GiB sectors, or 2300 64GiB sectors. A single partition may contain up to 2349 32GiB sectors, or 2300 64GiB sectors.
The maximum number of sectors which can be proven in a single PoSt message is 25000 in network version 16, which The maximum number of sectors which can be proven in a single PoSt message is 25000 in network version 16, which
means that a single message can prove at most 10 partinions means that a single message can prove at most 10 partitions
In some cases when submitting PoSt messages which are recovering sectors, the default network limit may still be Note that setting this value lower may result in less efficient gas use - more messages will be sent,
too high to fit in the block gas limit; In those cases it may be necessary to set this value to something lower
than 10; Note that setting this value lower may result in less efficient gas use - more messages will be sent,
to prove each deadline, resulting in more total gas use (but each message will have lower gas limit) to prove each deadline, resulting in more total gas use (but each message will have lower gas limit)
Setting this value above the network limit has no effect`, Setting this value above the network limit has no effect`,
@ -723,6 +721,19 @@ In those cases it may be necessary to set this value to something low (eg 1);
Note that setting this value lower may result in less efficient gas use - more messages will be sent than needed, Note that setting this value lower may result in less efficient gas use - more messages will be sent than needed,
resulting in more total gas use (but each message will have lower gas limit)`, resulting in more total gas use (but each message will have lower gas limit)`,
}, },
{
Name: "SingleRecoveringPartitionPerPostMessage",
Type: "bool",
Comment: `Enable single partition per PoSt Message for partitions containing recovery sectors
In cases when submitting PoSt messages which contain recovering sectors, the default network limit may still be
too high to fit in the block gas limit. In those cases, it becomes useful to put the single partition
with recovering sectors into its own PoSt message
Note that enabling this option may result in less efficient gas use - more messages will be sent
to prove each deadline, resulting in more total gas use (but each message will have a lower gas limit)`,
},
}, },
"Pubsub": []DocField{ "Pubsub": []DocField{
{ {

View File

@ -277,11 +277,9 @@ type ProvingConfig struct {
// A single partition may contain up to 2349 32GiB sectors, or 2300 64GiB sectors. // A single partition may contain up to 2349 32GiB sectors, or 2300 64GiB sectors.
// //
// The maximum number of sectors which can be proven in a single PoSt message is 25000 in network version 16, which // The maximum number of sectors which can be proven in a single PoSt message is 25000 in network version 16, which
// means that a single message can prove at most 10 partinions // means that a single message can prove at most 10 partitions
// //
// In some cases when submitting PoSt messages which are recovering sectors, the default network limit may still be // Note that setting this value lower may result in less efficient gas use - more messages will be sent,
// too high to fit in the block gas limit; In those cases it may be necessary to set this value to something lower
// than 10; Note that setting this value lower may result in less efficient gas use - more messages will be sent,
// to prove each deadline, resulting in more total gas use (but each message will have lower gas limit) // to prove each deadline, resulting in more total gas use (but each message will have lower gas limit)
// //
// Setting this value above the network limit has no effect // Setting this value above the network limit has no effect
@ -295,6 +293,16 @@ type ProvingConfig struct {
// Note that setting this value lower may result in less efficient gas use - more messages will be sent than needed, // Note that setting this value lower may result in less efficient gas use - more messages will be sent than needed,
// resulting in more total gas use (but each message will have lower gas limit) // resulting in more total gas use (but each message will have lower gas limit)
MaxPartitionsPerRecoveryMessage int MaxPartitionsPerRecoveryMessage int
// Enable single partition per PoSt Message for partitions containing recovery sectors
//
// In cases when submitting PoSt messages which contain recovering sectors, the default network limit may still be
// too high to fit in the block gas limit. In those cases, it becomes useful to put the single partition
// with recovering sectors into its own PoSt message
//
// Note that enabling this option may result in less efficient gas use - more messages will be sent
// to prove each deadline, resulting in more total gas use (but each message will have a lower gas limit)
SingleRecoveringPartitionPerPostMessage bool
} }
type SealingConfig struct { type SealingConfig struct {

View File

@ -2,6 +2,7 @@ package common
import ( import (
"context" "context"
"time"
"github.com/gbrlsnchs/jwt/v3" "github.com/gbrlsnchs/jwt/v3"
"github.com/google/uuid" "github.com/google/uuid"
@ -26,6 +27,8 @@ type CommonAPI struct {
Alerting *alerting.Alerting Alerting *alerting.Alerting
APISecret *dtypes.APIAlg APISecret *dtypes.APIAlg
ShutdownChan dtypes.ShutdownChan ShutdownChan dtypes.ShutdownChan
Start dtypes.NodeStartTime
} }
type jwtPayload struct { type jwtPayload struct {
@ -91,3 +94,7 @@ func (a *CommonAPI) Session(ctx context.Context) (uuid.UUID, error) {
func (a *CommonAPI) Closing(ctx context.Context) (<-chan struct{}, error) { func (a *CommonAPI) Closing(ctx context.Context) (<-chan struct{}, error) {
return make(chan struct{}), nil // relies on jsonrpc closing return make(chan struct{}), nil // relies on jsonrpc closing
} }
func (a *CommonAPI) StartTime(context.Context) (time.Time, error) {
return time.Time(a.Start), nil
}
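
The new `StartTime` endpoint simply reports the injected `NodeStartTime`. A minimal, hypothetical sketch of how a caller could turn it into an uptime figure (the concrete API client is assumed; only the `StartTime` signature added in this diff is relied upon):

```go
package example

import (
	"context"
	"fmt"
	"time"
)

// printUptime reports how long the node has been running, based on the new
// StartTime endpoint. The api parameter stands in for a connected Lotus client.
func printUptime(ctx context.Context, api interface {
	StartTime(context.Context) (time.Time, error)
}) error {
	start, err := api.StartTime(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("node up for %s\n", time.Since(start).Truncate(time.Second))
	return nil
}
```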

View File

@ -1,6 +1,8 @@
package dtypes package dtypes
import ( import (
"time"
"github.com/gbrlsnchs/jwt/v3" "github.com/gbrlsnchs/jwt/v3"
"github.com/multiformats/go-multiaddr" "github.com/multiformats/go-multiaddr"
) )
@ -8,3 +10,5 @@ import (
type APIAlg jwt.HMACSHA type APIAlg jwt.HMACSHA
type APIEndpoint multiaddr.Multiaddr type APIEndpoint multiaddr.Multiaddr
type NodeStartTime time.Time

View File

@ -103,24 +103,31 @@ func minerAddrFromDS(ds dtypes.MetadataDS) (address.Address, error) {
return address.NewFromBytes(maddrb) return address.NewFromBytes(maddrb)
} }
func GetParams(spt abi.RegisteredSealProof) error { func GetParams(prover bool) func(spt abi.RegisteredSealProof) error {
ssize, err := spt.SectorSize() return func(spt abi.RegisteredSealProof) error {
if err != nil { ssize, err := spt.SectorSize()
return err if err != nil {
} return err
}
// If built-in assets are disabled, we expect the user to have placed the right
// parameters in the right location on the filesystem (/var/tmp/filecoin-proof-parameters).
if build.DisableBuiltinAssets {
return nil
}
var provingSize uint64
if prover {
provingSize = uint64(ssize)
}
// TODO: We should fetch the params for the actual proof type, not just based on the size.
if err := paramfetch.GetParams(context.TODO(), build.ParametersJSON(), build.SrsJSON(), provingSize); err != nil {
return xerrors.Errorf("fetching proof parameters: %w", err)
}
// If built-in assets are disabled, we expect the user to have placed the right
// parameters in the right location on the filesystem (/var/tmp/filecoin-proof-parameters).
if build.DisableBuiltinAssets {
return nil return nil
} }
// TODO: We should fetch the params for the actual proof type, not just based on the size.
if err := paramfetch.GetParams(context.TODO(), build.ParametersJSON(), build.SrsJSON(), uint64(ssize)); err != nil {
return xerrors.Errorf("fetching proof parameters: %w", err)
}
return nil
} }
func MinerAddress(ds dtypes.MetadataDS) (dtypes.MinerAddress, error) { func MinerAddress(ds dtypes.MetadataDS) (dtypes.MinerAddress, error) {
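
`GetParams` is now a constructor factory: the `prover` flag decides whether the (large) proving parameters for the sector size are fetched, and the DI graph supplies that flag from the miner subsystem/proving config, as in the `ConfigStorageMiner` hunk above. A rough sketch of this shape, using simplified stand-in types rather than the real Lotus/paramfetch ones:

```go
package example

// sealProof and fetchParams are simplified stand-ins for
// abi.RegisteredSealProof and paramfetch.GetParams.
type sealProof int

func (sealProof) SectorSize() (uint64, error) { return 32 << 30, nil }

func fetchParams(provingSize uint64) error { return nil } // placeholder

// getParams mirrors the factory pattern: the outer call captures the flag,
// the returned closure is what gets registered as the constructor.
func getParams(prover bool) func(spt sealProof) error {
	return func(spt sealProof) error {
		ssize, err := spt.SectorSize()
		if err != nil {
			return err
		}
		var provingSize uint64
		if prover {
			// only nodes that actually prove need the proving parameters
			provingSize = ssize
		}
		return fetchParams(provingSize)
	}
}
```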

View File

@ -93,6 +93,12 @@ func From(typ interface{}) interface{} {
}).Interface() }).Interface()
} }
func FromVal[T any](v T) func() T {
return func() T {
return v
}
}
// from go-ipfs // from go-ipfs
// as casts input constructor to a given interface (if a value is given, it // as casts input constructor to a given interface (if a value is given, it
// wraps it into a constructor). // wraps it into a constructor).
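
`FromVal` wraps a plain value into a zero-argument constructor so it can be handed to `Override`, which is how the node start time is injected in the `defaults()` hunk earlier in this diff. A minimal illustration of the generic helper on its own:

```go
package main

import (
	"fmt"
	"time"
)

// FromVal turns a value into a constructor, mirroring the helper added above.
func FromVal[T any](v T) func() T {
	return func() T { return v }
}

func main() {
	// Analogous to Override(new(dtypes.NodeStartTime), FromVal(dtypes.NodeStartTime(time.Now())))
	startTime := FromVal(time.Now())
	fmt.Println(startTime())
}
```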

View File

@ -169,6 +169,12 @@ func (ps *poStScheduler) watch(wid storiface.WorkerID, worker *WorkerHandle) {
}() }()
for { for {
select {
case <-heartbeatTimer.C:
case <-worker.closingMgr:
return
}
sctx, scancel := context.WithTimeout(ctx, paths.HeartbeatInterval/2) sctx, scancel := context.WithTimeout(ctx, paths.HeartbeatInterval/2)
curSes, err := worker.workerRpc.Session(sctx) curSes, err := worker.workerRpc.Session(sctx)
scancel() scancel()
@ -177,12 +183,7 @@ func (ps *poStScheduler) watch(wid storiface.WorkerID, worker *WorkerHandle) {
log.Warnw("failed to check worker session", "error", err) log.Warnw("failed to check worker session", "error", err)
ps.disable(wid) ps.disable(wid)
select { continue
case <-heartbeatTimer.C:
continue
case <-worker.closingMgr:
return
}
} }
if storiface.WorkerID(curSes) != wid { if storiface.WorkerID(curSes) != wid {
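
The heartbeat wait moves to the top of the loop, so every iteration first blocks on either the heartbeat timer or the worker's closing channel, and error paths can simply `continue` to the next tick instead of carrying their own select. A stripped-down sketch of the resulting loop shape (all names are placeholders for the real scheduler fields):

```go
package example

// watch sketches the restructured loop: wait first, then run the session check.
func watch(heartbeat, closing <-chan struct{}, checkSession func() error) {
	for {
		select {
		case <-heartbeat:
		case <-closing:
			return
		}
		if err := checkSession(); err != nil {
			// e.g. disable the worker, then wait for the next heartbeat
			continue
		}
		// session is valid: re-enable the worker / verify its ID here
	}
}
```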

View File

@ -286,7 +286,7 @@ func (s *WindowPoStScheduler) runPoStCycle(ctx context.Context, manual bool, di
// Split partitions into batches, so as not to exceed the number of sectors // Split partitions into batches, so as not to exceed the number of sectors
// allowed in a single message // allowed in a single message
partitionBatches, err := s.batchPartitions(partitions, nv) partitionBatches, err := s.BatchPartitions(partitions, nv)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -492,7 +492,9 @@ func (s *WindowPoStScheduler) runPoStCycle(ctx context.Context, manual bool, di
return posts, nil return posts, nil
} }
func (s *WindowPoStScheduler) batchPartitions(partitions []api.Partition, nv network.Version) ([][]api.Partition, error) { // Note: Partition order within batches must match original partition order in order
// for downstream code that consumes the batches to work
func (s *WindowPoStScheduler) BatchPartitions(partitions []api.Partition, nv network.Version) ([][]api.Partition, error) {
// We don't want to exceed the number of sectors allowed in a message. // We don't want to exceed the number of sectors allowed in a message.
// So given the number of sectors in a partition, work out the number of // So given the number of sectors in a partition, work out the number of
// partitions that can be in a message without exceeding sectors per // partitions that can be in a message without exceeding sectors per
@ -524,21 +526,33 @@ func (s *WindowPoStScheduler) batchPartitions(partitions []api.Partition, nv net
} }
} }
// The number of messages will be: batches := [][]api.Partition{}
// ceiling(number of partitions / partitions per message)
batchCount := len(partitions) / partitionsPerMsg
if len(partitions)%partitionsPerMsg != 0 {
batchCount++
}
// Split the partitions into batches currBatch := []api.Partition{}
batches := make([][]api.Partition, 0, batchCount) for _, partition := range partitions {
for i := 0; i < len(partitions); i += partitionsPerMsg { recSectors, err := partition.RecoveringSectors.Count()
end := i + partitionsPerMsg if err != nil {
if end > len(partitions) { return nil, err
end = len(partitions)
} }
batches = append(batches, partitions[i:end])
// Only add single partition to a batch if it contains recovery sectors
// and has the below user config set
if s.singleRecoveringPartitionPerPostMessage && recSectors > 0 {
if len(currBatch) > 0 {
batches = append(batches, currBatch)
currBatch = []api.Partition{}
}
batches = append(batches, []api.Partition{partition})
} else {
if len(currBatch) >= partitionsPerMsg {
batches = append(batches, currBatch)
currBatch = []api.Partition{}
}
currBatch = append(currBatch, partition)
}
}
if len(currBatch) > 0 {
batches = append(batches, currBatch)
} }
return batches, nil return batches, nil
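
The new batching rule is: fill batches up to the per-message partition limit, but when `SingleRecoveringPartitionPerPostMessage` is set and a partition has any recovering sectors, flush the current batch and emit that partition as a batch of its own, preserving the original partition order throughout. A self-contained sketch of the same grouping logic over plain recovery counts (not the real `api.Partition` type):

```go
package main

import "fmt"

// batch groups partitions (represented only by their recovering-sector counts)
// the same way BatchPartitions does: up to `limit` per batch, with any
// recovering partition isolated in its own batch when isolateRecovering is set.
func batch(recovering []int, limit int, isolateRecovering bool) [][]int {
	var batches [][]int
	var cur []int
	for _, rec := range recovering {
		if isolateRecovering && rec > 0 {
			if len(cur) > 0 {
				batches = append(batches, cur)
				cur = nil
			}
			batches = append(batches, []int{rec})
			continue
		}
		if len(cur) >= limit {
			batches = append(batches, cur)
			cur = nil
		}
		cur = append(cur, rec)
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	// Mirrors the layout used by the test below: 4 healthy, 2 recovering,
	// 6 healthy, 1 recovering, with a user limit of 4 partitions per message.
	parts := []int{0, 0, 0, 0, 10, 10, 0, 0, 0, 0, 0, 0, 10}
	for _, b := range batch(parts, 4, true) {
		fmt.Println(len(b), b)
	}
	// Expected batch sizes: 4 1 1 4 2 1
}
```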

View File

@ -177,6 +177,26 @@ func (m mockFaultTracker) CheckProvable(ctx context.Context, pp abi.RegisteredPo
return map[abi.SectorID]string{}, nil return map[abi.SectorID]string{}, nil
} }
func generatePartition(sectorCount uint64, recoverySectorCount uint64) api.Partition {
var partition api.Partition
sectors := bitfield.New()
recoverySectors := bitfield.New()
for s := uint64(0); s < sectorCount; s++ {
sectors.Set(s)
}
for s := uint64(0); s < recoverySectorCount; s++ {
recoverySectors.Set(s)
}
partition = api.Partition{
AllSectors: sectors,
FaultySectors: bitfield.New(),
RecoveringSectors: recoverySectors,
LiveSectors: sectors,
ActiveSectors: sectors,
}
return partition
}
// TestWDPostDoPost verifies that doPost will send the correct number of window // TestWDPostDoPost verifies that doPost will send the correct number of window
// PoST messages for a given number of partitions // PoST messages for a given number of partitions
func TestWDPostDoPost(t *testing.T) { func TestWDPostDoPost(t *testing.T) {
@ -368,6 +388,55 @@ func TestWDPostDoPostPartLimitConfig(t *testing.T) {
} }
} }
// TestBatchPartitionsRecoverySectors tests if the batches with recovery sectors
// contain only single partitions while keeping all the partitions in order
func TestBatchPartitionsRecoverySectors(t *testing.T) {
proofType := abi.RegisteredPoStProof_StackedDrgWindow2KiBV1
postAct := tutils.NewIDAddr(t, 100)
mockStgMinerAPI := newMockStorageMinerAPI()
userPartLimit := 4
scheduler := &WindowPoStScheduler{
api: mockStgMinerAPI,
prover: &mockProver{},
verifier: &mockVerif{},
faultTracker: &mockFaultTracker{},
proofType: proofType,
actor: postAct,
journal: journal.NilJournal(),
addrSel: &ctladdr.AddressSelector{},
maxPartitionsPerPostMessage: userPartLimit,
singleRecoveringPartitionPerPostMessage: true,
}
var partitions []api.Partition
for p := 0; p < 4; p++ {
partitions = append(partitions, generatePartition(100, 0))
}
for p := 0; p < 2; p++ {
partitions = append(partitions, generatePartition(100, 10))
}
for p := 0; p < 6; p++ {
partitions = append(partitions, generatePartition(100, 0))
}
partitions = append(partitions, generatePartition(100, 10))
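// With a user limit of 4 partitions per message and recovering partitions
// isolated, the first 4 healthy partitions form one batch, each of the 2
// recovering partitions gets its own batch, the next 6 healthy partitions
// split into batches of 4 and 2, and the final recovering partition is alone.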
expectedBatchLens := []int{4, 1, 1, 4, 2, 1}
batches, err := scheduler.BatchPartitions(partitions, network.Version16)
require.NoError(t, err)
require.Equal(t, len(batches), 6)
for i, batch := range batches {
require.Equal(t, len(batch), expectedBatchLens[i])
}
}
// TestWDPostDeclareRecoveriesPartLimitConfig verifies that declareRecoveries will send the correct number of // TestWDPostDeclareRecoveriesPartLimitConfig verifies that declareRecoveries will send the correct number of
// DeclareFaultsRecovered messages for a given number of partitions based on user config // DeclareFaultsRecovered messages for a given number of partitions based on user config
func TestWDPostDeclareRecoveriesPartLimitConfig(t *testing.T) { func TestWDPostDeclareRecoveriesPartLimitConfig(t *testing.T) {

View File

@ -64,18 +64,19 @@ type NodeAPI interface {
// WindowPoStScheduler watches the chain through the changeHandler, which in // WindowPoStScheduler watches the chain through the changeHandler, which in
// turn calls the scheduler when the time arrives to do work. // turn calls the scheduler when the time arrives to do work.
type WindowPoStScheduler struct { type WindowPoStScheduler struct {
api NodeAPI api NodeAPI
feeCfg config.MinerFeeConfig feeCfg config.MinerFeeConfig
addrSel *ctladdr.AddressSelector addrSel *ctladdr.AddressSelector
prover storiface.ProverPoSt prover storiface.ProverPoSt
verifier storiface.Verifier verifier storiface.Verifier
faultTracker sealer.FaultTracker faultTracker sealer.FaultTracker
proofType abi.RegisteredPoStProof proofType abi.RegisteredPoStProof
partitionSectors uint64 partitionSectors uint64
disablePreChecks bool disablePreChecks bool
maxPartitionsPerPostMessage int maxPartitionsPerPostMessage int
maxPartitionsPerRecoveryMessage int maxPartitionsPerRecoveryMessage int
ch *changeHandler singleRecoveringPartitionPerPostMessage bool
ch *changeHandler
actor address.Address actor address.Address
@ -102,18 +103,19 @@ func NewWindowedPoStScheduler(api NodeAPI,
} }
return &WindowPoStScheduler{ return &WindowPoStScheduler{
api: api, api: api,
feeCfg: cfg, feeCfg: cfg,
addrSel: as, addrSel: as,
prover: sp, prover: sp,
verifier: verif, verifier: verif,
faultTracker: ft, faultTracker: ft,
proofType: mi.WindowPoStProofType, proofType: mi.WindowPoStProofType,
partitionSectors: mi.WindowPoStPartitionSectors, partitionSectors: mi.WindowPoStPartitionSectors,
disablePreChecks: pcfg.DisableWDPoStPreChecks, disablePreChecks: pcfg.DisableWDPoStPreChecks,
maxPartitionsPerPostMessage: pcfg.MaxPartitionsPerPoStMessage, maxPartitionsPerPostMessage: pcfg.MaxPartitionsPerPoStMessage,
maxPartitionsPerRecoveryMessage: pcfg.MaxPartitionsPerRecoveryMessage, maxPartitionsPerRecoveryMessage: pcfg.MaxPartitionsPerRecoveryMessage,
actor: actor, singleRecoveringPartitionPerPostMessage: pcfg.SingleRecoveringPartitionPerPostMessage,
actor: actor,
evtTypes: [...]journal.EventType{ evtTypes: [...]journal.EventType{
evtTypeWdPoStScheduler: j.RegisterEventType("wdpost", "scheduler"), evtTypeWdPoStScheduler: j.RegisterEventType("wdpost", "scheduler"),
evtTypeWdPoStProofs: j.RegisterEventType("wdpost", "proofs_processed"), evtTypeWdPoStProofs: j.RegisterEventType("wdpost", "proofs_processed"),

View File

@ -10,8 +10,8 @@ require (
github.com/filecoin-project/go-address v1.0.0 github.com/filecoin-project/go-address v1.0.0
github.com/filecoin-project/go-data-transfer v1.15.2 github.com/filecoin-project/go-data-transfer v1.15.2
github.com/filecoin-project/go-fil-markets v1.24.0 github.com/filecoin-project/go-fil-markets v1.24.0
github.com/filecoin-project/go-jsonrpc v0.1.5 github.com/filecoin-project/go-jsonrpc v0.1.7
github.com/filecoin-project/go-state-types v0.1.11-0.20220823184028-73c63d4127a4 github.com/filecoin-project/go-state-types v0.1.11
github.com/filecoin-project/go-storedcounter v0.1.0 github.com/filecoin-project/go-storedcounter v0.1.0
github.com/filecoin-project/lotus v0.0.0-00010101000000-000000000000 github.com/filecoin-project/lotus v0.0.0-00010101000000-000000000000
github.com/filecoin-project/specs-actors v0.9.15 github.com/filecoin-project/specs-actors v0.9.15
@ -27,13 +27,12 @@ require (
github.com/ipfs/go-unixfs v0.3.1 github.com/ipfs/go-unixfs v0.3.1
github.com/ipld/go-car v0.4.0 github.com/ipld/go-car v0.4.0
github.com/kpacha/opencensus-influxdb v0.0.0-20181102202715-663e2683a27c github.com/kpacha/opencensus-influxdb v0.0.0-20181102202715-663e2683a27c
github.com/libp2p/go-libp2p v0.21.0 github.com/libp2p/go-libp2p v0.22.0
github.com/libp2p/go-libp2p-core v0.19.1
github.com/libp2p/go-libp2p-pubsub-tracer v0.0.0-20200626141350-e730b32bf1e6 github.com/libp2p/go-libp2p-pubsub-tracer v0.0.0-20200626141350-e730b32bf1e6
github.com/multiformats/go-multiaddr v0.6.0 github.com/multiformats/go-multiaddr v0.6.0
github.com/testground/sdk-go v0.2.6 github.com/testground/sdk-go v0.2.6
go.opencensus.io v0.23.0 go.opencensus.io v0.23.0
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4
) )
require ( require (
@ -52,7 +51,6 @@ require (
github.com/benbjohnson/clock v1.3.0 // indirect github.com/benbjohnson/clock v1.3.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect github.com/beorn7/perks v1.0.1 // indirect
github.com/bep/debounce v1.2.0 // indirect github.com/bep/debounce v1.2.0 // indirect
github.com/btcsuite/btcd/btcec/v2 v2.2.0 // indirect
github.com/buger/goterm v1.0.3 // indirect github.com/buger/goterm v1.0.3 // indirect
github.com/cespare/xxhash v1.1.0 // indirect github.com/cespare/xxhash v1.1.0 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect github.com/cespare/xxhash/v2 v2.1.2 // indirect
@ -67,7 +65,7 @@ require (
github.com/cskr/pubsub v1.0.2 // indirect github.com/cskr/pubsub v1.0.2 // indirect
github.com/daaku/go.zipexe v1.0.0 // indirect github.com/daaku/go.zipexe v1.0.0 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 // indirect github.com/decred/dcrd/dcrec/secp256k1/v4 v4.1.0 // indirect
github.com/detailyang/go-fallocate v0.0.0-20180908115635-432fa640bd2e // indirect github.com/detailyang/go-fallocate v0.0.0-20180908115635-432fa640bd2e // indirect
github.com/dgraph-io/badger/v2 v2.2007.3 // indirect github.com/dgraph-io/badger/v2 v2.2007.3 // indirect
github.com/dgraph-io/ristretto v0.1.0 // indirect github.com/dgraph-io/ristretto v0.1.0 // indirect
@ -173,7 +171,7 @@ require (
github.com/ipfs/go-ipfs-util v0.0.2 // indirect github.com/ipfs/go-ipfs-util v0.0.2 // indirect
github.com/ipfs/go-ipld-cbor v0.0.6 // indirect github.com/ipfs/go-ipld-cbor v0.0.6 // indirect
github.com/ipfs/go-ipld-legacy v0.1.1 // indirect github.com/ipfs/go-ipld-legacy v0.1.1 // indirect
github.com/ipfs/go-ipns v0.1.2 // indirect github.com/ipfs/go-ipns v0.1.3-0.20220819140646-0d8e99ba180a // indirect
github.com/ipfs/go-log v1.0.5 // indirect github.com/ipfs/go-log v1.0.5 // indirect
github.com/ipfs/go-metrics-interface v0.0.1 // indirect github.com/ipfs/go-metrics-interface v0.0.1 // indirect
github.com/ipfs/go-path v0.3.0 // indirect github.com/ipfs/go-path v0.3.0 // indirect
@ -196,43 +194,43 @@ require (
github.com/kelseyhightower/envconfig v1.4.0 // indirect github.com/kelseyhightower/envconfig v1.4.0 // indirect
github.com/kilic/bls12-381 v0.0.0-20200820230200-6b2c19996391 // indirect github.com/kilic/bls12-381 v0.0.0-20200820230200-6b2c19996391 // indirect
github.com/klauspost/compress v1.15.1 // indirect github.com/klauspost/compress v1.15.1 // indirect
github.com/klauspost/cpuid/v2 v2.0.14 // indirect github.com/klauspost/cpuid/v2 v2.1.0 // indirect
github.com/koron/go-ssdp v0.0.3 // indirect github.com/koron/go-ssdp v0.0.3 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-eventbus v0.2.1 // indirect github.com/libp2p/go-eventbus v0.3.0 // indirect
github.com/libp2p/go-flow-metrics v0.0.3 // indirect github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.2.0 // indirect github.com/libp2p/go-libp2p-asn-util v0.2.0 // indirect
github.com/libp2p/go-libp2p-connmgr v0.4.0 // indirect github.com/libp2p/go-libp2p-connmgr v0.4.0 // indirect
github.com/libp2p/go-libp2p-discovery v0.7.0 // indirect github.com/libp2p/go-libp2p-core v0.20.0 // indirect
github.com/libp2p/go-libp2p-gostream v0.4.0 // indirect github.com/libp2p/go-libp2p-gostream v0.4.0 // indirect
github.com/libp2p/go-libp2p-kad-dht v0.17.0 // indirect github.com/libp2p/go-libp2p-kad-dht v0.17.0 // indirect
github.com/libp2p/go-libp2p-kbucket v0.4.7 // indirect github.com/libp2p/go-libp2p-kbucket v0.4.7 // indirect
github.com/libp2p/go-libp2p-loggables v0.1.0 // indirect github.com/libp2p/go-libp2p-loggables v0.1.0 // indirect
github.com/libp2p/go-libp2p-noise v0.5.0 // indirect github.com/libp2p/go-libp2p-noise v0.5.0 // indirect
github.com/libp2p/go-libp2p-peerstore v0.7.1 // indirect github.com/libp2p/go-libp2p-peerstore v0.7.1 // indirect
github.com/libp2p/go-libp2p-pubsub v0.7.1 // indirect github.com/libp2p/go-libp2p-pubsub v0.8.0 // indirect
github.com/libp2p/go-libp2p-record v0.1.3 // indirect github.com/libp2p/go-libp2p-record v0.1.3 // indirect
github.com/libp2p/go-libp2p-resource-manager v0.5.3 // indirect
github.com/libp2p/go-libp2p-routing-helpers v0.2.3 // indirect github.com/libp2p/go-libp2p-routing-helpers v0.2.3 // indirect
github.com/libp2p/go-libp2p-tls v0.5.0 // indirect github.com/libp2p/go-libp2p-tls v0.5.0 // indirect
github.com/libp2p/go-maddr-filter v0.1.0 // indirect github.com/libp2p/go-maddr-filter v0.1.0 // indirect
github.com/libp2p/go-msgio v0.2.0 // indirect github.com/libp2p/go-msgio v0.2.0 // indirect
github.com/libp2p/go-nat v0.1.0 // indirect github.com/libp2p/go-nat v0.1.0 // indirect
github.com/libp2p/go-netroute v0.2.0 // indirect github.com/libp2p/go-netroute v0.2.0 // indirect
github.com/libp2p/go-openssl v0.0.7 // indirect github.com/libp2p/go-openssl v0.1.0 // indirect
github.com/libp2p/go-reuseport v0.2.0 // indirect github.com/libp2p/go-reuseport v0.2.0 // indirect
github.com/libp2p/go-yamux/v3 v3.1.2 // indirect github.com/libp2p/go-yamux/v3 v3.1.2 // indirect
github.com/lucas-clemente/quic-go v0.28.0 // indirect github.com/lucas-clemente/quic-go v0.28.1 // indirect
github.com/lucasb-eyer/go-colorful v1.0.3 // indirect github.com/lucasb-eyer/go-colorful v1.0.3 // indirect
github.com/magefile/mage v1.9.0 // indirect github.com/magefile/mage v1.9.0 // indirect
github.com/marten-seemann/qtls-go1-16 v0.1.5 // indirect github.com/marten-seemann/qtls-go1-16 v0.1.5 // indirect
github.com/marten-seemann/qtls-go1-17 v0.1.2 // indirect github.com/marten-seemann/qtls-go1-17 v0.1.2 // indirect
github.com/marten-seemann/qtls-go1-18 v0.1.2 // indirect github.com/marten-seemann/qtls-go1-18 v0.1.2 // indirect
github.com/marten-seemann/qtls-go1-19 v0.1.0-beta.1 // indirect github.com/marten-seemann/qtls-go1-19 v0.1.0 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/mattn/go-colorable v0.1.9 // indirect github.com/mattn/go-colorable v0.1.9 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect github.com/mattn/go-isatty v0.0.16 // indirect
github.com/mattn/go-pointer v0.0.1 // indirect
github.com/mattn/go-runewidth v0.0.10 // indirect github.com/mattn/go-runewidth v0.0.10 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/miekg/dns v1.1.50 // indirect github.com/miekg/dns v1.1.50 // indirect
@ -248,7 +246,7 @@ require (
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.1.1 // indirect github.com/multiformats/go-multibase v0.1.1 // indirect
github.com/multiformats/go-multicodec v0.5.0 // indirect github.com/multiformats/go-multicodec v0.5.0 // indirect
github.com/multiformats/go-multihash v0.2.0 // indirect github.com/multiformats/go-multihash v0.2.1 // indirect
github.com/multiformats/go-multistream v0.3.3 // indirect github.com/multiformats/go-multistream v0.3.3 // indirect
github.com/multiformats/go-varint v0.0.6 // indirect github.com/multiformats/go-varint v0.0.6 // indirect
github.com/nikkolasg/hexjson v0.0.0-20181101101858-78e39397e00c // indirect github.com/nikkolasg/hexjson v0.0.0-20181101101858-78e39397e00c // indirect
@ -266,8 +264,8 @@ require (
github.com/polydawn/refmt v0.0.0-20201211092308-30ac6d18308e // indirect github.com/polydawn/refmt v0.0.0-20201211092308-30ac6d18308e // indirect
github.com/prometheus/client_golang v1.12.1 // indirect github.com/prometheus/client_golang v1.12.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.35.0 // indirect github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect github.com/prometheus/procfs v0.8.0 // indirect
github.com/prometheus/statsd_exporter v0.21.0 // indirect github.com/prometheus/statsd_exporter v0.21.0 // indirect
github.com/raulk/clock v1.1.0 // indirect github.com/raulk/clock v1.1.0 // indirect
github.com/raulk/go-watchdog v1.3.0 // indirect github.com/raulk/go-watchdog v1.3.0 // indirect
@ -298,32 +296,32 @@ require (
github.com/whyrusleeping/multiaddr-filter v0.0.0-20160516205228-e903e4adabd7 // indirect github.com/whyrusleeping/multiaddr-filter v0.0.0-20160516205228-e903e4adabd7 // indirect
github.com/whyrusleeping/timecache v0.0.0-20160911033111-cfcb2f1abfee // indirect github.com/whyrusleeping/timecache v0.0.0-20160911033111-cfcb2f1abfee // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
github.com/zondax/hid v0.9.0 // indirect github.com/zondax/hid v0.9.1-0.20220302062450-5552068d2266 // indirect
github.com/zondax/ledger-go v0.12.1 // indirect github.com/zondax/ledger-go v0.12.1 // indirect
go.dedis.ch/fixbuf v1.0.3 // indirect go.dedis.ch/fixbuf v1.0.3 // indirect
go.dedis.ch/protobuf v1.0.11 // indirect go.dedis.ch/protobuf v1.0.11 // indirect
go.etcd.io/bbolt v1.3.4 // indirect go.etcd.io/bbolt v1.3.4 // indirect
go.opentelemetry.io/otel v1.7.0 // indirect go.opentelemetry.io/otel v1.7.0 // indirect
go.opentelemetry.io/otel/trace v1.7.0 // indirect go.opentelemetry.io/otel/trace v1.7.0 // indirect
go.uber.org/atomic v1.9.0 // indirect go.uber.org/atomic v1.10.0 // indirect
go.uber.org/dig v1.12.0 // indirect go.uber.org/dig v1.12.0 // indirect
go.uber.org/fx v1.15.0 // indirect go.uber.org/fx v1.15.0 // indirect
go.uber.org/multierr v1.8.0 // indirect go.uber.org/multierr v1.8.0 // indirect
go.uber.org/zap v1.21.0 // indirect go.uber.org/zap v1.22.0 // indirect
go4.org v0.0.0-20200411211856-f5505b9728dd // indirect go4.org v0.0.0-20200411211856-f5505b9728dd // indirect
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e // indirect golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e // indirect
golang.org/x/exp v0.0.0-20220426173459-3bcf042a4bf5 // indirect golang.org/x/exp v0.0.0-20220426173459-3bcf042a4bf5 // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/net v0.0.0-20220630215102-69896b714898 // indirect golang.org/x/net v0.0.0-20220812174116-3211cb980234 // indirect
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a // indirect golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
golang.org/x/tools v0.1.11 // indirect golang.org/x/tools v0.1.12 // indirect
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f // indirect
google.golang.org/genproto v0.0.0-20210917145530-b395a37504d4 // indirect google.golang.org/genproto v0.0.0-20210917145530-b395a37504d4 // indirect
google.golang.org/grpc v1.45.0 // indirect google.golang.org/grpc v1.45.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/cheggaaa/pb.v1 v1.0.28 // indirect gopkg.in/cheggaaa/pb.v1 v1.0.28 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect

File diff suppressed because it is too large Load Diff

View File

@ -56,6 +56,11 @@ source "amazon-ebs" "lotus" {
owners = ["099720109477"] owners = ["099720109477"]
} }
ssh_username = "ubuntu" ssh_username = "ubuntu"
aws_polling {
delay_seconds = 60
max_attempts = 60
}
} }
source "digitalocean" "lotus" { source "digitalocean" "lotus" {
@ -82,3 +87,4 @@ build {
script = "./tools/packer/setup-snap.sh" script = "./tools/packer/setup-snap.sh"
} }
} }

View File

@ -23,13 +23,10 @@ MANAGED_FILES=(
) )
# this is required on digitalocean, which does not have snap seeded correctly at this phase. # this is required on digitalocean, which does not have snap seeded correctly at this phase.
apt update apt-get -y update && apt-get -y reinstall snapd
apt reinstall snapd
snap install lotus snap install lotus
snap alias lotus.lotus lotus
snap alias lotus.lotus-daemon lotus-daemon
snap alias lotus.lotus-miner lotus-miner snap alias lotus.lotus-miner lotus-miner
snap alias lotus.lotus-worker lotus-worker snap alias lotus.lotus-worker lotus-worker