Merge branch 'master' into test/t.TempDir

Łukasz Magiera 2022-03-17 12:06:52 +01:00 committed by GitHub
commit 1c055fe83b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
162 changed files with 2004 additions and 765 deletions


@@ -1,5 +1,134 @@
# Lotus changelog
# 1.15.0 / 2022-03-09
This is an optional release with retrieval improvements (client side), SP UX improvements for unsealing, snap deals, regular deal making, and many other new features, improvements, and bug fixes.
## Highlights
- feat:sealing: StartEpochSealingBuffer triggers packing on time ([filecoin-project/lotus#7905](https://github.com/filecoin-project/lotus/pull/7905))
- use the `StartEpochSealingBuffer` configuration variable to force sectors to be packed for sealing/updating, regardless of how many deals they hold, once the nearest deal start date is close enough to the present (see the config sketch after this list).
- feat: #6017 market: retrieval ask CLI command ([filecoin-project/lotus#7814](https://github.com/filecoin-project/lotus/pull/7814))
- feat(graphsync): allow setting of per-peer incoming requests for miners ([filecoin-project/lotus#7578](https://github.com/filecoin-project/lotus/pull/7578))
- by setting `SimultaneousTransfersForStoragePerClient` in the deal-making configuration.
- Make retrieval even faster ([filecoin-project/lotus#7746](https://github.com/filecoin-project/lotus/pull/7746))
- feat: #7747 sealing: Adding conf variable for capping number of concurrent unsealing jobs (#7884) ([filecoin-project/lotus#7884](https://github.com/filecoin-project/lotus/pull/7884))
- by setting `MaxConcurrentUnseals` in `DAGStoreConfig`
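The three knobs called out above are plain miner configuration values. A minimal sketch of what they might look like in the miner's `config.toml` (the section placement and example values here are assumptions for illustration, not taken from this commit; `lotus-miner config default` prints the authoritative layout):

```toml
[Sealing]
  # Assumed section: start sealing/updating a sector this many epochs before
  # its earliest deal start, even if the sector is not yet full.
  StartEpochSealingBuffer = 480

[Dealmaking]
  # Assumed section: cap simultaneous online storage transfers per client.
  SimultaneousTransfersForStoragePerClient = 20

[DAGStore]
  # Assumed section: cap concurrent unsealing jobs triggered by the DAG store.
  MaxConcurrentUnseals = 5
```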
## New Features
- feat: mpool: Cache state nonces ([filecoin-project/lotus#8005](https://github.com/filecoin-project/lotus/pull/8005))
- chore: build: make the OhSnap epoch configurable by an envvar for devnets ([filecoin-project/lotus#7995](https://github.com/filecoin-project/lotus/pull/7995))
- Shed: Add a util to send a batch of messages ([filecoin-project/lotus#7667](https://github.com/filecoin-project/lotus/pull/7667))
- Add api for transfer diagnostics ([filecoin-project/lotus#7759](https://github.com/filecoin-project/lotus/pull/7759))
- Shed: Add a util to list terminated deals ([filecoin-project/lotus#7774](https://github.com/filecoin-project/lotus/pull/7774))
- Expose EnableGasTracing as an env_var ([filecoin-project/lotus#7750](https://github.com/filecoin-project/lotus/pull/7750))
- Command to list active sector locks ([filecoin-project/lotus#7735](https://github.com/filecoin-project/lotus/pull/7735))
- Initial switch to OpenTelemetry ([filecoin-project/lotus#7725](https://github.com/filecoin-project/lotus/pull/7725))
## Improvements
- splitstore sortless compaction ([filecoin-project/lotus#8008](https://github.com/filecoin-project/lotus/pull/8008))
- perf: chain: Make drand logs in daemon less noisy (#7955) ([filecoin-project/lotus#7955](https://github.com/filecoin-project/lotus/pull/7955))
- chore: shed: storage stats 2.0 ([filecoin-project/lotus#7941](https://github.com/filecoin-project/lotus/pull/7941))
- misc: api: Annotate lotus tests according to listed behaviors ([filecoin-project/lotus#7835](https://github.com/filecoin-project/lotus/pull/7835))
- some basic splitstore refactors ([filecoin-project/lotus#7999](https://github.com/filecoin-project/lotus/pull/7999))
- chore: sealer: quieten a log ([filecoin-project/lotus#7998](https://github.com/filecoin-project/lotus/pull/7998))
- tvx: supply network version when extracting messages. ([filecoin-project/lotus#7996](https://github.com/filecoin-project/lotus/pull/7996))
- chore: remove inaccurate comment in sealtasks ([filecoin-project/lotus#7977](https://github.com/filecoin-project/lotus/pull/7977))
- Refactor: VM: Remove the NetworkVersionGetter ([filecoin-project/lotus#7818](https://github.com/filecoin-project/lotus/pull/7818))
- refactor: state: Move randomness versioning out of the VM ([filecoin-project/lotus#7816](https://github.com/filecoin-project/lotus/pull/7816))
- updating to new datastore/blockstore code with contexts ([filecoin-project/lotus#7646](https://github.com/filecoin-project/lotus/pull/7646))
- Mempool msg selection should respect block message limits ([filecoin-project/lotus#7321](https://github.com/filecoin-project/lotus/pull/7321))
- Minor improvement for OpenTelemetry ([filecoin-project/lotus#7760](https://github.com/filecoin-project/lotus/pull/7760))
- Sort lotus-miner retrieval-deals by dealId ([filecoin-project/lotus#7749](https://github.com/filecoin-project/lotus/pull/7749))
- dagstore pieceReader: Always read full in ReadAt ([filecoin-project/lotus#7737](https://github.com/filecoin-project/lotus/pull/7737))
## Bug Fixes
- fix: sealing: Stop recovery attempts after fault ([filecoin-project/lotus#8014](https://github.com/filecoin-project/lotus/pull/8014))
- fix:snap: pay for the collateral difference needed if the miner available balance is insufficient ([filecoin-project/lotus#8234](https://github.com/filecoin-project/lotus/pull/8234))
- sealer: fix error message ([filecoin-project/lotus#8136](https://github.com/filecoin-project/lotus/pull/8136))
- typo in variable name ([filecoin-project/lotus#8134](https://github.com/filecoin-project/lotus/pull/8134))
- fix: sealer: allow enable/disabling ReplicaUpdate tasks ([filecoin-project/lotus#8093](https://github.com/filecoin-project/lotus/pull/8093))
- chore: chain: fix log ([filecoin-project/lotus#7993](https://github.com/filecoin-project/lotus/pull/7993))
- Fix: chain: create a new VM for each epoch ([filecoin-project/lotus#7966](https://github.com/filecoin-project/lotus/pull/7966))
- fix: doc generation struct slice example value ([filecoin-project/lotus#7851](https://github.com/filecoin-project/lotus/pull/7851))
- fix: returned error not being accepted correctly ([filecoin-project/lotus#7852](https://github.com/filecoin-project/lotus/pull/7852))
- fix: #7577 markets: When retrying Add Piece, first seek to start of reader ([filecoin-project/lotus#7812](https://github.com/filecoin-project/lotus/pull/7812))
- misc: n/a sealing: Fix grammatical error in a log warning message ([filecoin-project/lotus#7831](https://github.com/filecoin-project/lotus/pull/7831))
- sectors update-state checks if sector exists before changing its state ([filecoin-project/lotus#7762](https://github.com/filecoin-project/lotus/pull/7762))
- SplitStore: suppress compaction near upgrades ([filecoin-project/lotus#7734](https://github.com/filecoin-project/lotus/pull/7734))
## Dependency Updates
- github.com/filecoin-project/go-commp-utils (v0.1.2 -> v0.1.3):
- github.com/filecoin-project/dagstore (v0.4.3 -> v0.4.4):
- github.com/filecoin-project/go-fil-markets (v1.13.4 -> v1.19.2):
- github.com/filecoin-project/go-statestore (v0.1.1 -> v0.2.0):
- github.com/filecoin-project/go-storedcounter (v0.0.0-20200421200003-1c99c62e8a5b -> v0.1.0):
- github.com/filecoin-project/specs-actors/v2 (v2.3.5 -> v2.3.6):
- feat(deps): update markets stack ([filecoin-project/lotus#7959](https://github.com/filecoin-project/lotus/pull/7959))
- Use go-libp2p-connmgr v0.3.1 ([filecoin-project/lotus#7957](https://github.com/filecoin-project/lotus/pull/7957))
- dep/fix 7701 Dependency: update to ipld-legacy to v0.1.1 ([filecoin-project/lotus#7751](https://github.com/filecoin-project/lotus/pull/7751))
## Others
- chore: backport: release ([filecoin-project/lotus#8245](https://github.com/filecoin-project/lotus/pull/8245))
- Lotus release v1.15.0-rc3 ([filecoin-project/lotus#8236](https://github.com/filecoin-project/lotus/pull/8236))
- Lotus release v1.15.0-rc2 ([filecoin-project/lotus#8211](https://github.com/filecoin-project/lotus/pull/8211))
- Merge branch 'releases' into release/v1.15.0
- chore: build: backport releases ([filecoin-project/lotus#8193](https://github.com/filecoin-project/lotus/pull/8193))
- Merge branch 'releases' into release/v1.15.0
- bump the version to v1.15.0-rc1
- chore: build: v1.14.0 -> master ([filecoin-project/lotus#8053](https://github.com/filecoin-project/lotus/pull/8053))
- chore: merge release/v1.14.0 PRs into master ([filecoin-project/lotus#7979](https://github.com/filecoin-project/lotus/pull/7979))
- chore: update PR template ([filecoin-project/lotus#7918](https://github.com/filecoin-project/lotus/pull/7918))
- build: release: bump master version to v1.15.0-dev ([filecoin-project/lotus#7922](https://github.com/filecoin-project/lotus/pull/7922))
- misc: docs: remove issue number from the pr title ([filecoin-project/lotus#7902](https://github.com/filecoin-project/lotus/pull/7902))
- Snapcraft grade no develgrade ([filecoin-project/lotus#7802](https://github.com/filecoin-project/lotus/pull/7802))
- chore: create pull_request_template.md ([filecoin-project/lotus#7726](https://github.com/filecoin-project/lotus/pull/7726))
- Disable appimage ([filecoin-project/lotus#7707](https://github.com/filecoin-project/lotus/pull/7707))
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @arajasek | 73 | +7232/-2778 | 386 |
| @zenground0 | 27 | +5604/-1049 | 219 |
| @vyzo | 118 | +4356/-1470 | 253 |
| @zl | 1 | +3725/-309 | 8 |
| @dirkmc | 7 | +1392/-1110 | 61 |
| arajasek | 37 | +221/-1329 | 90 |
| @magik6k | 33 | +1138/-336 | 101 |
| @whyrusleeping | 2 | +483/-585 | 28 |
| Darko Brdareski | 14 | +725/-276 | 154 |
| @rvagg | 2 | +43/-947 | 10 |
| @hannahhoward | 5 | +436/-335 | 31 |
| @hannahhoward | 12 | +507/-133 | 37 |
| @jennijuju | 27 | +333/-178 | 54 |
| @TheMenko | 8 | +237/-179 | 17 |
| c r | 2 | +227/-45 | 12 |
| @dirkmck | 12 | +188/-40 | 27 |
| @ribasushi | 3 | +128/-62 | 3 |
| @raulk | 6 | +128/-49 | 9 |
| @Whyrusleeping | 1 | +76/-70 | 8 |
| @Stebalien | 1 | +55/-37 | 1 |
| @jennijuju | 11 | +29/-16 | 11 |
| @aarshkshah1992 | 1 | +23/-19 | 5 |
| @travisperson | 1 | +0/-18 | 2 |
| @gstuart | 3 | +12/-1 | 3 |
| @coryschwartz | 4 | +5/-6 | 4 |
| @pefish | 1 | +4/-3 | 1 |
| @Kubuxu | 1 | +5/-2 | 2 |
| Colin Kennedy | 1 | +4/-2 | 1 |
| Rob Quist | 1 | +2/-2 | 1 |
| @shotcollin | 1 | +1/-1 | 1 |
# 1.14.4 / 2022-03-03
This is a *highly recommended* optional release for storage providers that are doing snap deals. It fixes the bug
that caused some snap deal sectors to get stuck in `FinalizeReplicaUpdate`. In addition, SPs should now be able to
force-update sector state without getting blocked by `normal shutdown of state machine`.
# v1.14.3 / 2022-02-28
This is an **optional** release, that includes a fix to properly register the `--really-do-it` flag for abort-upgrade.
# 1.14.2 / 2022-02-24
This is an **optional** release of lotus with a couple more improvements to the Snap experience for storage providers, in preparation for the [upcoming OhSnap upgrade](https://github.com/filecoin-project/community/discussions/74?sort=new#discussioncomment-1922550).
@@ -326,7 +455,7 @@ This feature release includes the latest functionalities and improvements, like
- Prep retrieval for selectors: no functional changes ([filecoin-project/lotus#7306](https://github.com/filecoin-project/lotus/pull/7306))
- Seed: improve helptext ([filecoin-project/lotus#7304](https://github.com/filecoin-project/lotus/pull/7304))
- Mempool: reduce size of sigValCache ([filecoin-project/lotus#7305](https://github.com/filecoin-project/lotus/pull/7305))
- Stop indirectly depending on deprecated github.com/prometheus/common ([filecoin-project/lotus#7474](https://github.com/filecoin-project/lotus/pull/7474))
## Bug Fixes
- StateSearchMsg: Correct usage of the allowReplaced flag ([filecoin-project/lotus#7450](https://github.com/filecoin-project/lotus/pull/7450))
@@ -733,8 +862,8 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
- You can now preview the default and updated node config by running `lotus/lotus-miner config default/updated`
## New Features
- ⭐️⭐️⭐️ Support standalone miner-market process ([filecoin-project/lotus#6356](https://github.com/filecoin-project/lotus/pull/6356))
- **⭐️⭐️ Experimental** [Splitstore](https://github.com/filecoin-project/lotus/blob/master/blockstore/splitstore/README.md) (more details coming in v1.11.2! Stay tuned! Join the discussion [here](https://github.com/filecoin-project/lotus/discussions/5788) if you have questions!):
- Improve splitstore warmup ([filecoin-project/lotus#6867](https://github.com/filecoin-project/lotus/pull/6867))
- Moving GC for badger ([filecoin-project/lotus#6854](https://github.com/filecoin-project/lotus/pull/6854))
- splitstore shed utils ([filecoin-project/lotus#6811](https://github.com/filecoin-project/lotus/pull/6811))
@@ -748,95 +877,95 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
- Splitstore code reorg ([filecoin-project/lotus#6756](https://github.com/filecoin-project/lotus/pull/6756))
- Splitstore: Some small fixes ([filecoin-project/lotus#6754](https://github.com/filecoin-project/lotus/pull/6754))
- Splitstore Enhancements ([filecoin-project/lotus#6474](https://github.com/filecoin-project/lotus/pull/6474))
- lotus-shed: initial export cmd for markets related metadata ([filecoin-project/lotus#6840](https://github.com/filecoin-project/lotus/pull/6840))
- add a very verbose -vv flag to lotus and lotus-miner. ([filecoin-project/lotus#6888](https://github.com/filecoin-project/lotus/pull/6888))
- Add allocated sectorid vis ([filecoin-project/lotus#4638](https://github.com/filecoin-project/lotus/pull/4638))
- add a command for compacting sector numbers bitfield ([filecoin-project/lotus#4640](https://github.com/filecoin-project/lotus/pull/4640))
- Run `lotus-miner actor compact-allocated` to compact sector number allocations to reduce the size of the allocated sector number bitfield.
- Add ChainGetMessagesInTipset API ([filecoin-project/lotus#6642](https://github.com/filecoin-project/lotus/pull/6642))
- Handle the --color flag via proper global state ([filecoin-project/lotus#6743](https://github.com/filecoin-project/lotus/pull/6743))
- Enable color by default only if os.Stdout is a TTY ([filecoin-project/lotus#6696](https://github.com/filecoin-project/lotus/pull/6696))
- Stop outputting ANSI color on non-TTY ([filecoin-project/lotus#6694](https://github.com/filecoin-project/lotus/pull/6694))
- Envvar to disable slash filter ([filecoin-project/lotus#6620](https://github.com/filecoin-project/lotus/pull/6620))
- commit batch: AggregateAboveBaseFee config ([filecoin-project/lotus#6650](https://github.com/filecoin-project/lotus/pull/6650))
- shed tool to estimate aggregate network fees ([filecoin-project/lotus#6631](https://github.com/filecoin-project/lotus/pull/6631))
## Bug Fixes
- Fix padding of deals, which only partially shipped in #5988 ([filecoin-project/lotus#6683](https://github.com/filecoin-project/lotus/pull/6683))
- fix deal concurrency test failures by upgrading graphsync and others ([filecoin-project/lotus#6724](https://github.com/filecoin-project/lotus/pull/6724))
- fix: on randomness change, use new rand ([filecoin-project/lotus#6805](https://github.com/filecoin-project/lotus/pull/6805))
- fix: always check if StateSearchMessage returns nil ([filecoin-project/lotus#6802](https://github.com/filecoin-project/lotus/pull/6802))
- test: fix flaky window post tests ([filecoin-project/lotus#6804](https://github.com/filecoin-project/lotus/pull/6804))
- wrap close(wait) with sync.Once to avoid panic ([filecoin-project/lotus#6800](https://github.com/filecoin-project/lotus/pull/6800))
- fixes #6786 segfault ([filecoin-project/lotus#6787](https://github.com/filecoin-project/lotus/pull/6787))
- ClientRetrieve stops on cancel ([filecoin-project/lotus#6739](https://github.com/filecoin-project/lotus/pull/6739))
- Fix bugs in sectors extend --v1-sectors ([filecoin-project/lotus#6066](https://github.com/filecoin-project/lotus/pull/6066))
- fix "lotus-seed genesis car" error "merkledag: not found" ([filecoin-project/lotus#6688](https://github.com/filecoin-project/lotus/pull/6688))
- Get retrieval pricing input should not error out on a deal state fetch ([filecoin-project/lotus#6679](https://github.com/filecoin-project/lotus/pull/6679))
- Fix more CID double-encoding as hex ([filecoin-project/lotus#6680](https://github.com/filecoin-project/lotus/pull/6680))
- storage: Fix FinalizeSector with sectors in storage paths ([filecoin-project/lotus#6653](https://github.com/filecoin-project/lotus/pull/6653))
- Fix tiny error in check-client-datacap ([filecoin-project/lotus#6664](https://github.com/filecoin-project/lotus/pull/6664))
- Fix: precommit_batch method used the wrong cfg.CommitBatchWait ([filecoin-project/lotus#6658](https://github.com/filecoin-project/lotus/pull/6658))
- fix ticket expiration check ([filecoin-project/lotus#6635](https://github.com/filecoin-project/lotus/pull/6635))
- remove precommit check in handleCommitFailed ([filecoin-project/lotus#6634](https://github.com/filecoin-project/lotus/pull/6634))
- fix prove commit aggregate send token amount ([filecoin-project/lotus#6625](https://github.com/filecoin-project/lotus/pull/6625))
## Improvements
- Eliminate inefficiency in markets logging ([filecoin-project/lotus#6895](https://github.com/filecoin-project/lotus/pull/6895))
- rename `cmd/lotus{-storage=>}-miner` to match binary. ([filecoin-project/lotus#6886](https://github.com/filecoin-project/lotus/pull/6886))
- fix racy TestSimultanenousTransferLimit. ([filecoin-project/lotus#6862](https://github.com/filecoin-project/lotus/pull/6862))
- ValidateBlock: Assert that block header heights are greater than parents ([filecoin-project/lotus#6872](https://github.com/filecoin-project/lotus/pull/6872))
- feat: Don't panic when api impl is nil ([filecoin-project/lotus#6857](https://github.com/filecoin-project/lotus/pull/6857))
- add docker-compose file ([filecoin-project/lotus#6544](https://github.com/filecoin-project/lotus/pull/6544))
- easy way to make install app ([filecoin-project/lotus#5183](https://github.com/filecoin-project/lotus/pull/5183))
- api: Separate the Net interface from Common ([filecoin-project/lotus#6627](https://github.com/filecoin-project/lotus/pull/6627))
- add StateReadState to gateway api ([filecoin-project/lotus#6818](https://github.com/filecoin-project/lotus/pull/6818))
- add SealProof in SectorBuilder ([filecoin-project/lotus#6815](https://github.com/filecoin-project/lotus/pull/6815))
- sealing: Handle preCommitParams errors more correctly ([filecoin-project/lotus#6763](https://github.com/filecoin-project/lotus/pull/6763))
- ClientFindData: always fetch peer id from chain ([filecoin-project/lotus#6807](https://github.com/filecoin-project/lotus/pull/6807))
- test: handle null blocks in TestForkRefuseCall ([filecoin-project/lotus#6758](https://github.com/filecoin-project/lotus/pull/6758))
- Add more deal details to lotus-miner info ([filecoin-project/lotus#6708](https://github.com/filecoin-project/lotus/pull/6708))
- add election backtest ([filecoin-project/lotus#5950](https://github.com/filecoin-project/lotus/pull/5950))
- add dollar sign ([filecoin-project/lotus#6690](https://github.com/filecoin-project/lotus/pull/6690))
- get-actor cli spelling fix ([filecoin-project/lotus#6681](https://github.com/filecoin-project/lotus/pull/6681))
- polish(statetree): accept a context in statetree diff for timeouts ([filecoin-project/lotus#6639](https://github.com/filecoin-project/lotus/pull/6639))
- Add helptext to lotus chain export ([filecoin-project/lotus#6672](https://github.com/filecoin-project/lotus/pull/6672))
- add an incremental nonce itest. ([filecoin-project/lotus#6663](https://github.com/filecoin-project/lotus/pull/6663))
- commit batch: Initialize the FailedSectors map ([filecoin-project/lotus#6647](https://github.com/filecoin-project/lotus/pull/6647))
- Fast-path retry submitting commit aggregate if commit is still valid ([filecoin-project/lotus#6638](https://github.com/filecoin-project/lotus/pull/6638))
- Reuse timers in sealing batch logic ([filecoin-project/lotus#6636](https://github.com/filecoin-project/lotus/pull/6636))
## Dependency Updates
- Update to proof v8.0.3 ([filecoin-project/lotus#6890](https://github.com/filecoin-project/lotus/pull/6890))
- update to go-fil-market v1.6.0 ([filecoin-project/lotus#6885](https://github.com/filecoin-project/lotus/pull/6885))
- Bump go-multihash, adjust test for supported version ([filecoin-project/lotus#6674](https://github.com/filecoin-project/lotus/pull/6674))
- github.com/filecoin-project/go-data-transfer (v1.6.0 -> v1.7.2):
- github.com/filecoin-project/go-fil-markets (v1.5.0 -> v1.6.2):
- github.com/filecoin-project/go-padreader (v0.0.0-20200903213702-ed5fae088b20 -> v0.0.0-20210723183308-812a16dc01b1)
- github.com/filecoin-project/go-state-types (v0.1.1-0.20210506134452-99b279731c48 -> v0.1.1-0.20210810190654-139e0e79e69e)
- github.com/filecoin-project/go-statemachine (v0.0.0-20200925024713-05bd7c71fbfe -> v1.0.1)
- update go-libp2p-pubsub to v0.5.0 ([filecoin-project/lotus#6764](https://github.com/filecoin-project/lotus/pull/6764))
## Others
- Master->v1.11.1 ([filecoin-project/lotus#7051](https://github.com/filecoin-project/lotus/pull/7051))
- v1.11.1-rc2 ([filecoin-project/lotus#6966](https://github.com/filecoin-project/lotus/pull/6966))
- Backport master -> v1.11.1 ([filecoin-project/lotus#6965](https://github.com/filecoin-project/lotus/pull/6965))
- Fixes in master -> release ([filecoin-project/lotus#6933](https://github.com/filecoin-project/lotus/pull/6933))
- Add changelog for v1.11.1-rc1 and bump the version ([filecoin-project/lotus#6900](https://github.com/filecoin-project/lotus/pull/6900))
- Fix merge release -> v1.11.1 ([filecoin-project/lotus#6897](https://github.com/filecoin-project/lotus/pull/6897))
- Update RELEASE_ISSUE_TEMPLATE.md ([filecoin-project/lotus#6880](https://github.com/filecoin-project/lotus/pull/6880))
- Add github actions for staled pr ([filecoin-project/lotus#6879](https://github.com/filecoin-project/lotus/pull/6879))
- Update issue templates and add templates for M1 ([filecoin-project/lotus#6856](https://github.com/filecoin-project/lotus/pull/6856))
- Fix links in issue templates
- Update issue templates to forms ([filecoin-project/lotus#6798](https://github.com/filecoin-project/lotus/pull/6798))
- Nerpa v13 upgrade ([filecoin-project/lotus#6837](https://github.com/filecoin-project/lotus/pull/6837))
- add docker-compose file ([filecoin-project/lotus#6544](https://github.com/filecoin-project/lotus/pull/6544))
- release -> master ([filecoin-project/lotus#6828](https://github.com/filecoin-project/lotus/pull/6828))
- Resurrect CODEOWNERS, but for maintainers group ([filecoin-project/lotus#6773](https://github.com/filecoin-project/lotus/pull/6773))
- Master disclaimer ([filecoin-project/lotus#6757](https://github.com/filecoin-project/lotus/pull/6757))
- Create stale.yml ([filecoin-project/lotus#6747](https://github.com/filecoin-project/lotus/pull/6747))
- Release template: Update all testnet infra at once ([filecoin-project/lotus#6710](https://github.com/filecoin-project/lotus/pull/6710))
- Release Template: remove binary validation step ([filecoin-project/lotus#6709](https://github.com/filecoin-project/lotus/pull/6709))
- Reset of the interop network ([filecoin-project/lotus#6689](https://github.com/filecoin-project/lotus/pull/6689))
- Update version.go to 1.11.1 ([filecoin-project/lotus#6621](https://github.com/filecoin-project/lotus/pull/6621))
## Contributors
@@ -878,7 +1007,7 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
| dependabot[bot] | 1 | +3/-3 | 1 |
| zhoutian527 | 1 | +2/-2 | 1 |
| xloem | 1 | +4/-0 | 1 |
| @travisperson| 2 | +2/-2 | 3 |
| | 2 | +2/-2 | 3 |
| Liviu Damian | 2 | +2/-2 | 2 |
| @jimpick | 2 | +2/-2 | 2 |
| Frank | 1 | +3/-0 | 1 |
@@ -890,6 +1019,7 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
This is a **highly recommended** release of Lotus that has many bug fixes, improvements and new features.
## Highlights
- Miner SimultaneousTransfers config ([filecoin-project/lotus#6612](https://github.com/filecoin-project/lotus/pull/6612))
- Miner SimultaneousTransfers config ([filecoin-project/lotus#6612](https://github.com/filecoin-project/lotus/pull/6612))
- Set `SimultaneousTransfers` in lotus miner config to configure the maximum number of parallel online data transfers, including both storage and retrieval deals.
- Dynamic Retrieval pricing ([filecoin-project/lotus#6175](https://github.com/filecoin-project/lotus/pull/6175))
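The `SimultaneousTransfers` option described above is a single miner config value; a minimal sketch of it in `config.toml` (its section placement and the example value are assumptions for illustration):

```toml
[Dealmaking]
  # Maximum number of parallel online data transfers (storage and retrieval).
  SimultaneousTransfers = 20
```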
@@ -1044,7 +1174,7 @@ This is a **highly recommended** release of Lotus that have many bug fixes, impr
| @Stebalien | 106 | +7653/-2718 | 273 |
| dirkmc | 11 | +2580/-1371 | 77 |
| @dirkmc | 39 | +1865/-1194 | 79 |
| @Kubuxu | 19 | +1973/-485 | 81 |
| | 19 | +1973/-485 | 81 |
| @vyzo | 4 | +1748/-330 | 50 |
| @aarshkshah1992 | 5 | +1462/-213 | 27 |
| @coryschwartz | 35 | +568/-206 | 59 |
@@ -1159,9 +1289,9 @@ FIPs [0008](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0008.m
**Check out the documentation [here](https://docs.filecoin.io/mine/lotus/miner-configuration/#precommitsectorsbatch) for details on the new Lotus miner sealing config options, [here](https://docs.filecoin.io/mine/lotus/miner-configuration/#fees-section) for fee config options, and explanations of the new features.**
Note:
- We recommend to keep `PreCommitSectorsBatch` as 1.
- We recommend miners to set `PreCommitBatchWait` lower than 30 hours.
- We recommend miners to set a longer `CommitBatchSlack` and `PreCommitBatchSlack` to prevent message failures
due to expirations.
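Expressed as miner config values, those recommendations might look like the following sketch (the field names match the batching options above; their placement under `[Sealing]` and the slack durations are assumptions for illustration, not values from this commit):

```toml
[Sealing]
  # Keep precommit batching effectively off, per the recommendation above.
  PreCommitSectorsBatch = 1
  # Flush any pending precommit batch well before the 30-hour mark.
  PreCommitBatchWait = "24h0m0s"
  # Generous slack so batched messages land before tickets and deals expire.
  PreCommitBatchSlack = "6h0m0s"
  CommitBatchSlack = "6h0m0s"
```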
### Projected state tree growth


@@ -53,8 +53,9 @@ COPY --from=builder /usr/lib/x86_64-linux-gnu/libnuma.so.1 /lib/
COPY --from=builder /usr/lib/x86_64-linux-gnu/libhwloc.so.5 /lib/
COPY --from=builder /usr/lib/x86_64-linux-gnu/libOpenCL.so.1 /lib/
RUN useradd -r -u 532 -U fc
RUN useradd -r -u 532 -U fc \
&& mkdir -p /etc/OpenCL/vendors \
&& echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
###
FROM base AS lotus


@@ -93,6 +93,7 @@ type StorageMiner interface {
// SectorRemove removes the sector from storage. It doesn't terminate it on-chain, which can
// be done with SectorTerminate. Removing and not terminating live sectors will cause additional penalties.
SectorRemove(context.Context, abi.SectorNumber) error //perm:admin
SectorMarkForUpgrade(ctx context.Context, id abi.SectorNumber, snap bool) error //perm:admin
// SectorTerminate terminates the sector on-chain (adding it to a termination batch first), then
// automatically removes it from storage
SectorTerminate(context.Context, abi.SectorNumber) error //perm:admin
@@ -101,7 +102,6 @@ type StorageMiner interface {
SectorTerminateFlush(ctx context.Context) (*cid.Cid, error) //perm:admin
// SectorTerminatePending returns a list of pending sector terminations to be sent in the next batch message
SectorTerminatePending(ctx context.Context) ([]abi.SectorID, error) //perm:admin
SectorMarkForUpgrade(ctx context.Context, id abi.SectorNumber, snap bool) error //perm:admin
// SectorPreCommitFlush immediately sends a PreCommit message with sectors batched for PreCommit.
// Returns null if message wasn't sent
SectorPreCommitFlush(ctx context.Context) ([]sealiface.PreCommitBatchRes, error) //perm:admin


@@ -1,3 +1,4 @@
//stm: #unit
package api
import (
@@ -26,6 +27,7 @@ func goCmd() string {
}
func TestDoesntDependOnFFI(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_FFI_DEPENDENCE_001
deps, err := exec.Command(goCmd(), "list", "-deps", "github.com/filecoin-project/lotus/api").Output()
if err != nil {
t.Fatal(err)
@@ -38,6 +40,7 @@ func TestDoesntDependOnFFI(t *testing.T) {
}
func TestDoesntDependOnBuild(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_FFI_DEPENDENCE_002
deps, err := exec.Command(goCmd(), "list", "-deps", "github.com/filecoin-project/lotus/api").Output()
if err != nil {
t.Fatal(err)
@@ -50,6 +53,7 @@ func TestDoesntDependOnBuild(t *testing.T) {
}
func TestReturnTypes(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_001
errType := reflect.TypeOf(new(error)).Elem()
bareIface := reflect.TypeOf(new(interface{})).Elem()
jmarsh := reflect.TypeOf(new(json.Marshaler)).Elem()
@@ -115,6 +119,7 @@ func TestReturnTypes(t *testing.T) {
}
func TestPermTags(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_PERM_TAGS_001
_ = PermissionedFullAPI(&FullNodeStruct{})
_ = PermissionedStorMinerAPI(&StorageMinerStruct{})
_ = PermissionedWorkerAPI(&WorkerStruct{})


@@ -1,3 +1,4 @@
//stm: #unit
package api
import (
@@ -29,6 +30,7 @@ type StrC struct {
}
func TestGetInternalStructs(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_API_STRUCTS_001
var proxy StrA
sts := GetInternalStructs(&proxy)
@@ -44,6 +46,7 @@ func TestGetInternalStructs(t *testing.T) {
}
func TestNestedInternalStructs(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_API_STRUCTS_001
var proxy StrC
// check that only the top-level internal struct gets picked up


@@ -5,6 +5,8 @@ import (
"fmt"
"time"
"github.com/libp2p/go-libp2p-core/network"
datatransfer "github.com/filecoin-project/go-data-transfer"
"github.com/filecoin-project/go-fil-markets/retrievalmarket"
"github.com/filecoin-project/go-state-types/abi"
@@ -12,7 +14,6 @@ import (
"github.com/ipfs/go-cid"
"github.com/ipfs/go-graphsync"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
ma "github.com/multiformats/go-multiaddr"
@@ -124,12 +125,6 @@ func NewDataTransferChannel(hostID peer.ID, channelState datatransfer.ChannelSta
return channel
}
type NetBlockList struct {
Peers []peer.ID
IPAddrs []string
IPSubnets []string
}
type NetStat struct {
System *network.ScopeStat `json:",omitempty"`
Transient *network.ScopeStat `json:",omitempty"`
@@ -152,6 +147,12 @@ type NetLimit struct {
FD int
}
type NetBlockList struct {
Peers []peer.ID
IPAddrs []string
IPSubnets []string
}
type ExtendedPeerInfo struct {
ID peer.ID
Agent string


@@ -57,8 +57,8 @@ var (
FullAPIVersion0 = newVer(1, 5, 0)
FullAPIVersion1 = newVer(2, 2, 0)
MinerAPIVersion0 = newVer(1, 4, 0)
MinerAPIVersion0 = newVer(1, 5, 0)
WorkerAPIVersion0 = newVer(1, 5, 0)
WorkerAPIVersion0 = newVer(1, 6, 0)
)
//nolint:varcheck,deadcode


@@ -1,3 +1,4 @@
//stm: #unit
package badgerbs
import (
@@ -19,6 +20,8 @@ import (
)
func TestBadgerBlockstore(t *testing.T) {
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
(&Suite{
NewBlockstore: newBlockstore(DefaultOptions),
OpenBlockstore: openBlockstore(DefaultOptions),
@@ -37,6 +40,8 @@ func TestBadgerBlockstore(t *testing.T) {
}
func TestStorageKey(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_STORAGE_KEY_001
bs, _ := newBlockstore(DefaultOptions)(t)
bbs := bs.(*Blockstore)
defer bbs.Close() //nolint:errcheck
@@ -250,10 +255,16 @@ func testMove(t *testing.T, optsF func(string) Options) {
}
func TestMoveNoPrefix(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_DELETE_001, @SPLITSTORE_BADGER_COLLECT_GARBAGE_001
testMove(t, DefaultOptions)
}
func TestMoveWithPrefix(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_DELETE_001, @SPLITSTORE_BADGER_COLLECT_GARBAGE_001
testMove(t, func(path string) Options {
opts := DefaultOptions(path)
opts.Prefix = "/prefixed/"


@@ -1,3 +1,4 @@
//stm: #unit
package badgerbs
import (
@@ -44,6 +45,8 @@ func (s *Suite) RunTests(t *testing.T, prefix string) {
}
func (s *Suite) TestGetWhenKeyNotPresent(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_GET_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
@@ -57,6 +60,8 @@ func (s *Suite) TestGetWhenKeyNotPresent(t *testing.T) {
}
func (s *Suite) TestGetWhenKeyIsNil(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_GET_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
@@ -68,6 +73,9 @@ func (s *Suite) TestGetWhenKeyIsNil(t *testing.T) {
}
func (s *Suite) TestPutThenGetBlock(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_GET_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
@@ -85,6 +93,8 @@ func (s *Suite) TestPutThenGetBlock(t *testing.T) {
}
func (s *Suite) TestHas(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_HAS_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
@@ -106,6 +116,9 @@ func (s *Suite) TestHas(t *testing.T) {
}
func (s *Suite) TestCidv0v1(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_GET_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
@@ -123,6 +136,9 @@ func (s *Suite) TestCidv0v1(t *testing.T) {
}
func (s *Suite) TestPutThenGetSizeBlock(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_GET_SIZE_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
@@ -154,6 +170,8 @@ func (s *Suite) TestPutThenGetSizeBlock(t *testing.T) {
}
func (s *Suite) TestAllKeysSimple(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
defer func() { require.NoError(t, c.Close()) }()
@@ -170,6 +188,9 @@ func (s *Suite) TestAllKeysSimple(t *testing.T) {
}
func (s *Suite) TestAllKeysRespectsContext(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_ALL_KEYS_CHAN_001
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
defer func() { require.NoError(t, c.Close()) }()
@@ -200,6 +221,7 @@ func (s *Suite) TestAllKeysRespectsContext(t *testing.T) {
}
func (s *Suite) TestDoubleClose(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
bs, _ := s.NewBlockstore(t)
c, ok := bs.(io.Closer)
if !ok {
@@ -210,6 +232,9 @@ func (s *Suite) TestDoubleClose(t *testing.T) {
}
func (s *Suite) TestReopenPutGet(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_GET_001
ctx := context.Background()
bs, path := s.NewBlockstore(t)
c, ok := bs.(io.Closer)
@@ -236,6 +261,10 @@ func (s *Suite) TestReopenPutGet(t *testing.T) {
}
func (s *Suite) TestPutMany(t *testing.T) {
//stm: @SPLITSTORE_BADGER_OPEN_001, @SPLITSTORE_BADGER_CLOSE_001
//stm: @SPLITSTORE_BADGER_HAS_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_GET_001, @SPLITSTORE_BADGER_PUT_MANY_001
//stm: @SPLITSTORE_BADGER_ALL_KEYS_CHAN_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {
@@ -268,6 +297,11 @@ func (s *Suite) TestPutMany(t *testing.T) {
}
func (s *Suite) TestDelete(t *testing.T) {
//stm: @SPLITSTORE_BADGER_PUT_001, @SPLITSTORE_BADGER_POOLED_STORAGE_KEY_001
//stm: @SPLITSTORE_BADGER_DELETE_001, @SPLITSTORE_BADGER_POOLED_STORAGE_HAS_001
//stm: @SPLITSTORE_BADGER_ALL_KEYS_CHAN_001, @SPLITSTORE_BADGER_HAS_001
//stm: @SPLITSTORE_BADGER_PUT_MANY_001
ctx := context.Background()
bs, _ := s.NewBlockstore(t)
if c, ok := bs.(io.Closer); ok {


@@ -1,3 +1,4 @@
//stm: #unit
package splitstore
import (
@@ -8,6 +9,8 @@ import (
)
func TestMapMarkSet(t *testing.T) {
//stm: @SPLITSTORE_MARKSET_CREATE_001, @SPLITSTORE_MARKSET_HAS_001, @@SPLITSTORE_MARKSET_MARK_001
//stm: @SPLITSTORE_MARKSET_CLOSE_001, @SPLITSTORE_MARKSET_CREATE_VISITOR_001
testMarkSet(t, "map")
testMarkSetRecovery(t, "map")
testMarkSetMarkMany(t, "map")
@@ -16,6 +19,8 @@ func TestMapMarkSet(t *testing.T) {
}
func TestBadgerMarkSet(t *testing.T) {
//stm: @SPLITSTORE_MARKSET_CREATE_001, @SPLITSTORE_MARKSET_HAS_001, @@SPLITSTORE_MARKSET_MARK_001
//stm: @SPLITSTORE_MARKSET_CLOSE_001, @SPLITSTORE_MARKSET_CREATE_VISITOR_001
bs := badgerMarkSetBatchSize
badgerMarkSetBatchSize = 1
t.Cleanup(func() {
@@ -37,6 +42,7 @@ func testMarkSet(t *testing.T, lsType string) {
}
defer env.Close() //nolint:errcheck
// stm: @SPLITSTORE_MARKSET_CREATE_001
hotSet, err := env.New("hot", 0)
if err != nil {
t.Fatal(err)
@@ -56,6 +62,7 @@ func testMarkSet(t *testing.T, lsType string) {
return cid.NewCidV1(cid.Raw, h)
}
// stm: @SPLITSTORE_MARKSET_HAS_001
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
@@ -85,6 +92,7 @@ func testMarkSet(t *testing.T, lsType string) {
k3 := makeCid("c")
k4 := makeCid("d")
// stm: @SPLITSTORE_MARKSET_MARK_001
hotSet.Mark(k1) //nolint
hotSet.Mark(k2) //nolint
coldSet.Mark(k3) //nolint
@@ -135,6 +143,7 @@ func testMarkSet(t *testing.T, lsType string) {
mustNotHave(coldSet, k3)
mustNotHave(coldSet, k4)
//stm: @SPLITSTORE_MARKSET_CLOSE_001
err = hotSet.Close()
if err != nil {
t.Fatal(err)
@@ -155,6 +164,7 @@ func testMarkSet(t *testing.T, lsType string) {
}
defer env.Close() //nolint:errcheck
//stm: @SPLITSTORE_MARKSET_CREATE_VISITOR_001
visitor, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
@@ -1,3 +1,4 @@
//stm: #unit
package splitstore
import (
@@ -219,10 +220,16 @@ func testSplitStore(t *testing.T, cfg *Config) {
}
func TestSplitStoreCompaction(t *testing.T) {
//stm: @SPLITSTORE_SPLITSTORE_OPEN_001, @SPLITSTORE_SPLITSTORE_CLOSE_001
//stm: @SPLITSTORE_SPLITSTORE_PUT_001, @SPLITSTORE_SPLITSTORE_ADD_PROTECTOR_001
//stm: @SPLITSTORE_SPLITSTORE_CLOSE_001
testSplitStore(t, &Config{MarkSetType: "map"})
}
func TestSplitStoreCompactionWithBadger(t *testing.T) {
//stm: @SPLITSTORE_SPLITSTORE_OPEN_001, @SPLITSTORE_SPLITSTORE_CLOSE_001
//stm: @SPLITSTORE_SPLITSTORE_PUT_001, @SPLITSTORE_SPLITSTORE_ADD_PROTECTOR_001
//stm: @SPLITSTORE_SPLITSTORE_CLOSE_001
bs := badgerMarkSetBatchSize
badgerMarkSetBatchSize = 1
t.Cleanup(func() {
@@ -232,6 +239,9 @@ func TestSplitStoreCompactionWithBadger(t *testing.T) {
}
func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
//stm: @SPLITSTORE_SPLITSTORE_OPEN_001, @SPLITSTORE_SPLITSTORE_CLOSE_001
//stm: @SPLITSTORE_SPLITSTORE_PUT_001, @SPLITSTORE_SPLITSTORE_ADD_PROTECTOR_001
//stm: @SPLITSTORE_SPLITSTORE_CLOSE_001
ctx := context.Background()
chain := &mockChain{t: t}
@@ -1,3 +1,4 @@
//stm: #unit
package blockstore
import (
@@ -13,6 +14,9 @@ import (
)
func TestTimedCacheBlockstoreSimple(t *testing.T) {
//stm: @SPLITSTORE_TIMED_BLOCKSTORE_START_001
//stm: @SPLITSTORE_TIMED_BLOCKSTORE_PUT_001, @SPLITSTORE_TIMED_BLOCKSTORE_HAS_001, @SPLITSTORE_TIMED_BLOCKSTORE_GET_001
//stm: @SPLITSTORE_TIMED_BLOCKSTORE_ALL_KEYS_CHAN_001
tc := NewTimedCacheBlockstore(10 * time.Millisecond)
mClock := clock.NewMock()
mClock.Set(time.Now())
@@ -1,3 +1,4 @@
//stm: #unit
package blockstore
import (
@@ -15,6 +16,7 @@ var (
)
func TestUnionBlockstore_Get(t *testing.T) {
//stm: @SPLITSTORE_UNION_BLOCKSTORE_GET_001
ctx := context.Background()
m1 := NewMemory()
m2 := NewMemory()
@@ -34,6 +36,9 @@ func TestUnionBlockstore_Get(t *testing.T) {
}
func TestUnionBlockstore_Put_PutMany_Delete_AllKeysChan(t *testing.T) {
//stm: @SPLITSTORE_UNION_BLOCKSTORE_PUT_001, @SPLITSTORE_UNION_BLOCKSTORE_HAS_001
//stm: @SPLITSTORE_UNION_BLOCKSTORE_PUT_MANY_001, @SPLITSTORE_UNION_BLOCKSTORE_DELETE_001
//stm: @SPLITSTORE_UNION_BLOCKSTORE_ALL_KEYS_CHAN_001
ctx := context.Background()
m1 := NewMemory()
m2 := NewMemory()
@@ -1,3 +1,4 @@
//stm: #unit
package build
import (
@@ -7,6 +8,7 @@ import (
)
func TestOpenRPCDiscoverJSON_Version(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_OPENRPC_VERSION_001
// openRPCDocVersion is the current OpenRPC version of the API docs.
openRPCDocVersion := "1.2.6"
@@ -37,7 +37,7 @@ func BuildTypeString() string {
}
// BuildVersion is the local build version
-const BuildVersion = "1.15.1-dev"
+const BuildVersion = "1.15.2-dev"
func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {
@@ -1,3 +1,4 @@
//stm: #unit
package adt
import (
@@ -44,6 +45,7 @@ func TestDiffAdtArray(t *testing.T) {
changes := new(TestDiffArray)
//stm: @CHAIN_ADT_ARRAY_DIFF_001
assert.NoError(t, DiffAdtArray(arrA, arrB, changes))
assert.NotNil(t, changes)
@@ -98,6 +100,7 @@ func TestDiffAdtMap(t *testing.T) {
changes := new(TestDiffMap)
//stm: @CHAIN_ADT_MAP_DIFF_001
assert.NoError(t, DiffAdtMap(mapA, mapB, changes))
assert.NotNil(t, changes)
@@ -1,3 +1,4 @@
//stm: #unit
package aerrors_test
import (
@@ -11,6 +12,7 @@ import (
)
func TestFatalError(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_ACTOR_ERRORS_001
e1 := xerrors.New("out of disk space")
e2 := xerrors.Errorf("could not put node: %w", e1)
e3 := xerrors.Errorf("could not save head: %w", e2)
@@ -24,6 +26,7 @@ func TestFatalError(t *testing.T) {
assert.True(t, IsFatal(aw4), "should be fatal")
}
func TestAbsorbeError(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_ACTOR_ERRORS_001
e1 := xerrors.New("EOF")
e2 := xerrors.Errorf("could not decode: %w", e1)
ae := Absorb(e2, 35, "failed to decode CBOR")
@@ -1,3 +1,4 @@
//stm: #unit
package policy
import (
@@ -22,6 +23,7 @@ func TestSupportedProofTypes(t *testing.T) {
for t := range miner0.SupportedProofTypes {
oldTypes = append(oldTypes, t)
}
//stm: @BLOCKCHAIN_POLICY_SET_MAX_SUPPORTED_PROOF_TYPES_001
t.Cleanup(func() {
SetSupportedProofTypes(oldTypes...)
})
@@ -33,6 +35,7 @@ func TestSupportedProofTypes(t *testing.T) {
abi.RegisteredSealProof_StackedDrg2KiBV1: {},
},
)
//stm: @BLOCKCHAIN_POLICY_ADD_MAX_SUPPORTED_PROOF_TYPES_001
AddSupportedProofTypes(abi.RegisteredSealProof_StackedDrg8MiBV1)
require.EqualValues(t,
miner0.SupportedProofTypes,
@@ -45,6 +48,7 @@ func TestSupportedProofTypes(t *testing.T) {
// Tests assumptions about policies being the same between actor versions.
func TestAssumptions(t *testing.T) {
//stm: @BLOCKCHAIN_POLICY_ASSUMPTIONS_001
require.EqualValues(t, miner0.SupportedProofTypes, miner2.PreCommitSealProofTypesV0)
require.Equal(t, miner0.PreCommitChallengeDelay, miner2.PreCommitChallengeDelay)
require.Equal(t, miner0.MaxSectorExpirationExtension, miner2.MaxSectorExpirationExtension)
@@ -58,6 +62,7 @@ func TestAssumptions(t *testing.T) {
}
func TestPartitionSizes(t *testing.T) {
//stm: @CHAIN_ACTOR_PARTITION_SIZES_001
for _, p := range abi.SealProofInfos {
sizeNew, err := builtin2.PoStProofWindowPoStPartitionSectors(p.WindowPoStProof)
require.NoError(t, err)
@@ -71,6 +76,7 @@ func TestPartitionSizes(t *testing.T) {
}
func TestPoStSize(t *testing.T) {
//stm: @BLOCKCHAIN_POLICY_GET_MAX_POST_PARTITIONS_001
v12PoStSize, err := GetMaxPoStPartitions(network.Version12, abi.RegisteredPoStProof_StackedDrgWindow64GiBV1)
require.Equal(t, 4, v12PoStSize)
require.NoError(t, err)
@@ -1,3 +1,5 @@
//stm: ignore
//Only tests external library behavior, therefore it should not be annotated
package drand
import (
@@ -2,6 +2,7 @@ package filcns
import (
"context"
"os"
"sync/atomic"
"github.com/filecoin-project/lotus/chain/rand"
@@ -94,7 +95,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
}()
ctx = blockstore.WithHotView(ctx)
-makeVmWithBaseStateAndEpoch := func(base cid.Cid, e abi.ChainEpoch) (*vm.VM, error) {
+makeVmWithBaseStateAndEpoch := func(base cid.Cid, e abi.ChainEpoch) (vm.Interface, error) {
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: e,
@@ -108,10 +109,23 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
LookbackState: stmgr.LookbackStateGetterForTipset(sm, ts),
}
if os.Getenv("LOTUS_USE_FVM_EXPERIMENTAL") == "1" {
// This is needed so that the FVM does not have to duplicate the genesis vesting schedule, one
// of the components of the circ supply calc.
// This field is NOT needed by the LegacyVM, and also NOT needed by the FVM from v15 onwards.
filVested, err := sm.GetFilVested(ctx, e)
if err != nil {
return nil, err
}
vmopt.FilVested = filVested
return vm.NewFVM(ctx, vmopt)
}
return sm.VMConstructor()(ctx, vmopt)
}
-runCron := func(vmCron *vm.VM, epoch abi.ChainEpoch) error {
+runCron := func(vmCron vm.Interface, epoch abi.ChainEpoch) error {
cronMsg := &types.Message{
To: cron.Address,
From: builtin.SystemActorAddr,
@@ -1,3 +1,4 @@
//stm: #unit
package events
import (
@@ -358,6 +359,7 @@ func (fcs *fakeCS) advance(rev, app, drop int, msgs map[int]cid.Cid, nulls ...in
var _ EventAPI = &fakeCS{}
func TestAt(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
require.NoError(t, err)
@@ -418,6 +420,7 @@ func TestAt(t *testing.T) {
}
func TestAtNullTrigger(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
require.NoError(t, err)
@@ -447,6 +450,7 @@ func TestAtNullTrigger(t *testing.T) {
}
func TestAtNullConf(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001, @EVENTS_HEIGHT_REVERT_001
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -485,6 +489,7 @@ func TestAtNullConf(t *testing.T) {
}
func TestAtStart(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -515,6 +520,7 @@ func TestAtStart(t *testing.T) {
}
func TestAtStartConfidence(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -541,6 +547,7 @@ func TestAtStartConfidence(t *testing.T) {
}
func TestAtChained(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -571,6 +578,7 @@ func TestAtChained(t *testing.T) {
}
func TestAtChainedConfidence(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -601,6 +609,7 @@ func TestAtChainedConfidence(t *testing.T) {
}
func TestAtChainedConfidenceNull(t *testing.T) {
//stm: @EVENTS_HEIGHT_CHAIN_AT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -632,6 +641,7 @@ func matchAddrMethod(to address.Address, m abi.MethodNum) func(msg *types.Messag
}
func TestCalled(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -837,6 +847,7 @@ func TestCalled(t *testing.T) {
}
func TestCalledTimeout(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -897,6 +908,7 @@ func TestCalledTimeout(t *testing.T) {
}
func TestCalledOrder(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -953,6 +965,7 @@ func TestCalledOrder(t *testing.T) {
}
func TestCalledNull(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -1011,6 +1024,7 @@ func TestCalledNull(t *testing.T) {
}
func TestRemoveTriggersOnMessage(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -1094,6 +1108,7 @@ type testStateChange struct {
}
func TestStateChanged(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -1179,6 +1194,7 @@ func TestStateChanged(t *testing.T) {
}
func TestStateChangedRevert(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -1255,6 +1271,7 @@ func TestStateChangedRevert(t *testing.T) {
}
func TestStateChangedTimeout(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
timeoutHeight := abi.ChainEpoch(20)
confidence := 3
@@ -1332,6 +1349,7 @@ func TestStateChangedTimeout(t *testing.T) {
}
func TestCalledMultiplePerEpoch(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
events, err := NewEvents(context.Background(), fcs)
@@ -1384,6 +1402,7 @@ func TestCalledMultiplePerEpoch(t *testing.T) {
}
func TestCachedSameBlock(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
fcs := newFakeCS(t)
_, err := NewEvents(context.Background(), fcs)
@@ -1418,6 +1437,7 @@ func (t *testObserver) Revert(_ context.Context, from, to *types.TipSet) error {
}
func TestReconnect(t *testing.T) {
//stm: @EVENTS_EVENTS_CALLED_001, @EVENTS_HEIGHT_REVERT_001
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
@@ -1,3 +1,4 @@
//stm: #unit
package state
import (
@@ -35,6 +36,12 @@ func init() {
}
func TestMarketPredicates(t *testing.T) {
//stm: @EVENTS_PREDICATES_ON_ACTOR_STATE_CHANGED_001, @EVENTS_PREDICATES_DEAL_STATE_CHANGED_001
//stm: @EVENTS_PREDICATES_DEAL_CHANGED_FOR_IDS
//stm: @EVENTS_PREDICATES_ON_BALANCE_CHANGED_001, @EVENTS_PREDICATES_BALANCE_CHANGED_FOR_ADDRESS_001
//stm: @EVENTS_PREDICATES_ON_DEAL_PROPOSAL_CHANGED_001, @EVENTS_PREDICATES_PROPOSAL_AMT_CHANGED_001
//stm: @EVENTS_PREDICATES_DEAL_STATE_CHANGED_001, @EVENTS_PREDICATES_DEAL_AMT_CHANGED_001
ctx := context.Background()
bs := bstore.NewMemorySync()
store := adt2.WrapStore(ctx, cbornode.NewCborStore(bs))
@@ -333,6 +340,8 @@ func TestMarketPredicates(t *testing.T) {
}
func TestMinerSectorChange(t *testing.T) {
//stm: @EVENTS_PREDICATES_ON_ACTOR_STATE_CHANGED_001, @EVENTS_PREDICATES_MINER_ACTOR_CHANGE_001
//stm: @EVENTS_PREDICATES_MINER_SECTOR_CHANGE_001
ctx := context.Background()
bs := bstore.NewMemorySync()
store := adt2.WrapStore(ctx, cbornode.NewCborStore(bs))
@@ -1,3 +1,4 @@
//stm: #unit
package events
import (
@@ -92,6 +93,7 @@ func (h *cacheHarness) skip(n abi.ChainEpoch) {
}
func TestTsCache(t *testing.T) {
//stm: @EVENTS_CACHE_GET_CHAIN_HEAD_001, @EVENTS_CACHE_GET_001, @EVENTS_CACHE_ADD_001
h := newCacheharness(t)
for i := 0; i < 9000; i++ {
@@ -104,6 +106,8 @@ func TestTsCache(t *testing.T) {
}
func TestTsCacheNulls(t *testing.T) {
//stm: @EVENTS_CACHE_GET_CHAIN_HEAD_001, @EVENTS_CACHE_GET_CHAIN_TIPSET_BEFORE_001, @EVENTS_CACHE_GET_CHAIN_TIPSET_AFTER_001
//stm: @EVENTS_CACHE_GET_001, @EVENTS_CACHE_ADD_001
ctx := context.Background()
h := newCacheharness(t)
@@ -182,6 +186,7 @@ func (tc *tsCacheAPIStorageCallCounter) ChainGetTipSet(ctx context.Context, tsk
}
func TestTsCacheEmpty(t *testing.T) {
//stm: @EVENTS_CACHE_GET_CHAIN_HEAD_001
// Calling best on an empty cache should just call out to the chain API
callCounter := &tsCacheAPIStorageCallCounter{t: t}
tsc := newTSCache(callCounter, 50)
@@ -191,6 +196,7 @@ func TestTsCacheEmpty(t *testing.T) {
}
func TestTsCacheSkip(t *testing.T) {
//stm: @EVENTS_CACHE_GET_CHAIN_HEAD_001, @EVENTS_CACHE_GET_001, @EVENTS_CACHE_ADD_001
h := newCacheharness(t)
ts, err := types.NewTipSet([]*types.BlockHeader{{
@@ -1,3 +1,4 @@
//stm: #unit
package gen
import (
@@ -34,6 +35,7 @@ func testGeneration(t testing.TB, n int, msgs int, sectors int) {
}
func TestChainGeneration(t *testing.T) {
//stm: @CHAIN_GEN_NEW_GEN_WITH_SECTORS_001, @CHAIN_GEN_NEXT_TIPSET_001
t.Run("10-20-1", func(t *testing.T) { testGeneration(t, 10, 20, 1) })
t.Run("10-20-25", func(t *testing.T) { testGeneration(t, 10, 20, 25) })
}
@@ -491,12 +491,13 @@ func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, sys vm.Sysca
Actors: filcns.NewActorRegistry(),
Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc,
FilVested: big.Zero(),
NetworkVersion: nv,
-BaseFee: types.NewInt(0),
+BaseFee: big.Zero(),
}
-vm, err := vm.NewVM(ctx, &vmopt)
+vm, err := vm.NewLegacyVM(ctx, &vmopt)
if err != nil {
-return cid.Undef, xerrors.Errorf("failed to create NewVM: %w", err)
+return cid.Undef, xerrors.Errorf("failed to create NewLegacyVM: %w", err)
}
for mi, m := range template.Miners {
@@ -95,12 +95,13 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.Syscal
Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc,
NetworkVersion: nv,
-BaseFee: types.NewInt(0),
+BaseFee: big.Zero(),
FilVested: big.Zero(),
}
-vm, err := vm.NewVM(ctx, vmopt)
+vm, err := vm.NewLegacyVM(ctx, vmopt)
if err != nil {
-return cid.Undef, xerrors.Errorf("failed to create NewVM: %w", err)
+return cid.Undef, xerrors.Errorf("failed to create NewLegacyVM: %w", err)
}
if len(miners) == 0 {
@@ -520,7 +521,7 @@ func (fr *fakeRand) GetBeaconRandomness(ctx context.Context, personalization cry
return out, nil
}
-func currentTotalPower(ctx context.Context, vm *vm.VM, maddr address.Address) (*power0.CurrentTotalPowerReturn, error) {
+func currentTotalPower(ctx context.Context, vm *vm.LegacyVM, maddr address.Address) (*power0.CurrentTotalPowerReturn, error) {
pwret, err := doExecValue(ctx, vm, power.Address, maddr, big.Zero(), builtin0.MethodsPower.CurrentTotalPower, nil)
if err != nil {
return nil, err
@@ -533,7 +534,7 @@ func currentTotalPower(ctx context.Context, vm *vm.VM, maddr address.Address) (*
return &pwr, nil
}
-func dealWeight(ctx context.Context, vm *vm.VM, maddr address.Address, dealIDs []abi.DealID, sectorStart, sectorExpiry abi.ChainEpoch, av actors.Version) (abi.DealWeight, abi.DealWeight, error) {
+func dealWeight(ctx context.Context, vm *vm.LegacyVM, maddr address.Address, dealIDs []abi.DealID, sectorStart, sectorExpiry abi.ChainEpoch, av actors.Version) (abi.DealWeight, abi.DealWeight, error) {
// TODO: This hack should move to market actor wrapper
if av <= actors.Version2 {
params := &market0.VerifyDealsForActivationParams{
@@ -593,7 +594,7 @@ func dealWeight(ctx context.Context, vm *vm.VM, maddr address.Address, dealIDs [
return dealWeights.Sectors[0].DealWeight, dealWeights.Sectors[0].VerifiedDealWeight, nil
}
-func currentEpochBlockReward(ctx context.Context, vm *vm.VM, maddr address.Address, av actors.Version) (abi.StoragePower, builtin.FilterEstimate, error) {
+func currentEpochBlockReward(ctx context.Context, vm *vm.LegacyVM, maddr address.Address, av actors.Version) (abi.StoragePower, builtin.FilterEstimate, error) {
rwret, err := doExecValue(ctx, vm, reward.Address, maddr, big.Zero(), reward.Methods.ThisEpochReward, nil)
if err != nil {
return big.Zero(), builtin.FilterEstimate{}, err
@@ -628,7 +629,7 @@ func currentEpochBlockReward(ctx context.Context, vm *vm.VM, maddr address.Addre
return epochReward.ThisEpochBaselinePower, builtin.FilterEstimate(epochReward.ThisEpochRewardSmoothed), nil
}
-func circSupply(ctx context.Context, vmi *vm.VM, maddr address.Address) abi.TokenAmount {
+func circSupply(ctx context.Context, vmi *vm.LegacyVM, maddr address.Address) abi.TokenAmount {
unsafeVM := &vm.UnsafeVM{VM: vmi}
rt := unsafeVM.MakeRuntime(ctx, &types.Message{
GasLimit: 1_000_000_000,
@@ -21,7 +21,7 @@ func mustEnc(i cbg.CBORMarshaler) []byte {
return enc
}
-func doExecValue(ctx context.Context, vm *vm.VM, to, from address.Address, value types.BigInt, method abi.MethodNum, params []byte) ([]byte, error) {
+func doExecValue(ctx context.Context, vm *vm.LegacyVM, to, from address.Address, value types.BigInt, method abi.MethodNum, params []byte) ([]byte, error) {
act, err := vm.StateTree().GetActor(from)
if err != nil {
return nil, xerrors.Errorf("doExec failed to get from actor (%s): %w", from, err)
@@ -1,3 +1,4 @@
//stm: #unit
package market
import (
@@ -22,6 +23,7 @@ import (
// TestFundManagerBasic verifies that the basic fund manager operations work
func TestFundManagerBasic(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001, @MARKET_RELEASE_FUNDS_001, @MARKET_WITHDRAW_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -106,6 +108,7 @@ func TestFundManagerBasic(t *testing.T) {
// TestFundManagerParallel verifies that operations can be run in parallel
func TestFundManagerParallel(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001, @MARKET_RELEASE_FUNDS_001, @MARKET_WITHDRAW_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -197,6 +200,7 @@ func TestFundManagerParallel(t *testing.T) {
// TestFundManagerReserveByWallet verifies that reserve requests are grouped by wallet
func TestFundManagerReserveByWallet(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -290,6 +294,7 @@ func TestFundManagerReserveByWallet(t *testing.T) {
// TestFundManagerWithdrawal verifies that as many withdraw operations as
// possible are processed
func TestFundManagerWithdrawalLimit(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001, @MARKET_RELEASE_FUNDS_001, @MARKET_WITHDRAW_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -384,6 +389,7 @@ func TestFundManagerWithdrawalLimit(t *testing.T) {
// TestFundManagerWithdrawByWallet verifies that withdraw requests are grouped by wallet
func TestFundManagerWithdrawByWallet(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001, @MARKET_RELEASE_FUNDS_001, @MARKET_WITHDRAW_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -493,6 +499,7 @@ func TestFundManagerWithdrawByWallet(t *testing.T) {
// TestFundManagerRestart verifies that waiting for incomplete requests resumes
// on restart
func TestFundManagerRestart(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -559,6 +566,7 @@ func TestFundManagerRestart(t *testing.T) {
// 3. Deal B completes, reducing addr1 by 7: reserved 12 available 12 -> 5
// 4. Deal A releases 5 from addr1: reserved 12 -> 7 available 5
func TestFundManagerReleaseAfterPublish(t *testing.T) {
//stm: @MARKET_RESERVE_FUNDS_001, @MARKET_RELEASE_FUNDS_001
s := setup(t)
defer s.fm.Stop()
@@ -1,3 +1,4 @@
//stm: #unit
package messagepool
import (
@@ -8,6 +9,7 @@ import (
)
func TestBlockProbability(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_BLOCK_PROB_001
mp := &MessagePool{}
bp := mp.blockProbabilities(1 - 0.15)
t.Logf("%+v\n", bp)
@@ -20,6 +22,7 @@ func TestBlockProbability(t *testing.T) {
}
func TestWinnerProba(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_BLOCK_PROB_002
rand.Seed(time.Now().UnixNano())
const N = 1000000
winnerProba := noWinnersProb()
@@ -854,7 +854,6 @@ func TestMessageValueTooHigh(t *testing.T) {
Message: *msg,
Signature: *sig,
}
err = mp.Add(context.TODO(), sm)
assert.Error(t, err)
}
@@ -901,8 +900,7 @@ func TestMessageSignatureInvalid(t *testing.T) {
}
err = mp.Add(context.TODO(), sm)
assert.Error(t, err)
-// assert.Contains(t, err.Error(), "invalid signature length")
+assert.Contains(t, err.Error(), "invalid signature length")
assert.Error(t, err)
}
}
@@ -926,14 +924,29 @@ func TestAddMessageTwice(t *testing.T) {
to := mock.Address(1001)
{
-// create a valid messages
-sm := makeTestMessage(w, from, to, 0, 50_000_000, minimumBaseFee.Uint64())
+msg := &types.Message{
+To: to,
From: from,
Value: types.NewInt(1),
Nonce: 0,
GasLimit: 50000000,
GasFeeCap: types.NewInt(minimumBaseFee.Uint64()),
GasPremium: types.NewInt(1),
Params: make([]byte, 32<<10),
}
sig, err := w.WalletSign(context.TODO(), from, msg.Cid().Bytes(), api.MsgMeta{})
if err != nil {
panic(err)
}
sm := &types.SignedMessage{
Message: *msg,
Signature: *sig,
}
mustAdd(t, mp, sm)
// try to add it twice
err = mp.Add(context.TODO(), sm)
-// assert.Contains(t, err.Error(), "with nonce 0 already in mpool")
+assert.Contains(t, err.Error(), "with nonce 0 already in mpool")
assert.Error(t, err)
}
}
@@ -963,8 +976,7 @@ func TestAddMessageTwiceNonceGap(t *testing.T) {
// then try to add message again
err = mp.Add(context.TODO(), sm)
-// assert.Contains(t, err.Error(), "unfulfilled nonce gap")
+assert.Contains(t, err.Error(), "unfulfilled nonce gap")
assert.Error(t, err)
}
}
@@ -1,3 +1,4 @@
//stm: #unit
package state
import (
@@ -18,6 +19,7 @@ import (
)
func BenchmarkStateTreeSet(b *testing.B) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001
cst := cbor.NewMemCborStore()
st, err := NewStateTree(cst, types.StateTreeVersion1)
if err != nil {
@@ -45,6 +47,7 @@ func BenchmarkStateTreeSet(b *testing.B) {
}
func BenchmarkStateTreeSetFlush(b *testing.B) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001
cst := cbor.NewMemCborStore()
sv, err := VersionForNetwork(build.NewestNetworkVersion)
if err != nil {
@@ -80,6 +83,8 @@ func BenchmarkStateTreeSetFlush(b *testing.B) {
}
func TestResolveCache(t *testing.T) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001, @CHAIN_STATETREE_GET_ACTOR_001, @CHAIN_STATETREE_VERSION_FOR_NETWORK_001
//stm: @CHAIN_STATETREE_SNAPSHOT_001, @CHAIN_STATETREE_SNAPSHOT_CLEAR_001
cst := cbor.NewMemCborStore()
sv, err := VersionForNetwork(build.NewestNetworkVersion)
if err != nil {
@@ -182,6 +187,8 @@ func TestResolveCache(t *testing.T) {
}
func BenchmarkStateTree10kGetActor(b *testing.B) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001, @CHAIN_STATETREE_GET_ACTOR_001, @CHAIN_STATETREE_VERSION_FOR_NETWORK_001
//stm: @CHAIN_STATETREE_FLUSH_001
cst := cbor.NewMemCborStore()
sv, err := VersionForNetwork(build.NewestNetworkVersion)
if err != nil {
@@ -229,6 +236,7 @@ func BenchmarkStateTree10kGetActor(b *testing.B) {
}
func TestSetCache(t *testing.T) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001, @CHAIN_STATETREE_GET_ACTOR_001, @CHAIN_STATETREE_VERSION_FOR_NETWORK_001
cst := cbor.NewMemCborStore()
sv, err := VersionForNetwork(build.NewestNetworkVersion)
if err != nil {
@@ -270,6 +278,8 @@ func TestSetCache(t *testing.T) {
}
func TestSnapshots(t *testing.T) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001, @CHAIN_STATETREE_GET_ACTOR_001, @CHAIN_STATETREE_VERSION_FOR_NETWORK_001
//stm: @CHAIN_STATETREE_FLUSH_001, @CHAIN_STATETREE_SNAPSHOT_REVERT_001, CHAIN_STATETREE_SNAPSHOT_CLEAR_001
ctx := context.Background()
cst := cbor.NewMemCborStore()
@@ -360,6 +370,7 @@ func assertNotHas(t *testing.T, st *StateTree, addr address.Address) {
}
func TestStateTreeConsistency(t *testing.T) {
//stm: @CHAIN_STATETREE_SET_ACTOR_001, @CHAIN_STATETREE_VERSION_FOR_NETWORK_001, @CHAIN_STATETREE_FLUSH_001
cst := cbor.NewMemCborStore()
// TODO: ActorUpgrade: this test tests pre actors v2
@@ -5,6 +5,12 @@ import (
"errors"
"fmt"
cbor "github.com/ipfs/go-ipld-cbor"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/chain/rand"
"github.com/filecoin-project/go-address"
@@ -64,6 +70,8 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
pheight = ts.Height() - 1
}
// Since we're simulating a future message, pretend we're applying it in the "next" tipset
vmHeight := pheight + 1
bstate := ts.ParentState()
// Run the (not expensive) migration.
@@ -72,9 +80,14 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
return nil, fmt.Errorf("failed to handle fork: %w", err)
}
filVested, err := sm.GetFilVested(ctx, vmHeight)
if err != nil {
return nil, err
}
vmopt := &vm.VMOpts{
StateBase: bstate,
-Epoch: pheight + 1,
+Epoch: vmHeight,
Rand: rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion),
Bstore: sm.cs.StateBlockstore(),
Actors: sm.tsExec.NewActorRegistry(),
@@ -82,6 +95,7 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
CircSupplyCalc: sm.GetVMCirculatingSupply,
NetworkVersion: sm.GetNetworkVersion(ctx, pheight+1),
BaseFee: types.NewInt(0),
FilVested: filVested,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
@@ -112,7 +126,12 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
)
}
-fromActor, err := vmi.StateTree().GetActor(msg.From)
+stTree, err := sm.StateTree(bstate)
if err != nil {
return nil, xerrors.Errorf("failed to load state tree: %w", err)
}
fromActor, err := stTree.GetActor(msg.From)
if err != nil {
return nil, xerrors.Errorf("call raw get actor: %s", err)
}
@@ -175,13 +194,16 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
}
}
-state, _, err := sm.TipSetState(ctx, ts)
+// Since we're simulating a future message, pretend we're applying it in the "next" tipset
vmHeight := ts.Height() + 1
stateCid, _, err := sm.TipSetState(ctx, ts)
if err != nil {
return nil, xerrors.Errorf("computing tipset state: %w", err)
}
// Technically, the tipset we're passing in here should be ts+1, but that may not exist.
-state, err = sm.HandleStateForks(ctx, state, ts.Height(), nil, ts)
+stateCid, err = sm.HandleStateForks(ctx, stateCid, ts.Height(), nil, ts)
if err != nil {
return nil, fmt.Errorf("failed to handle fork: %w", err)
}
@@ -196,16 +218,23 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
)
}
filVested, err := sm.GetFilVested(ctx, vmHeight)
if err != nil {
return nil, err
}
buffStore := blockstore.NewBuffered(sm.cs.StateBlockstore())
vmopt := &vm.VMOpts{
-StateBase: state,
+StateBase: stateCid,
-Epoch: ts.Height() + 1,
+Epoch: vmHeight,
Rand: r,
-Bstore: sm.cs.StateBlockstore(),
+Bstore: buffStore,
Actors: sm.tsExec.NewActorRegistry(),
Syscalls: sm.Syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NetworkVersion: sm.GetNetworkVersion(ctx, ts.Height()+1),
BaseFee: ts.Blocks()[0].ParentBaseFee,
FilVested: filVested,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
vmi, err := sm.newVM(ctx, vmopt)
@@ -219,7 +248,19 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
}
}
-fromActor, err := vmi.StateTree().GetActor(msg.From)
+// We flush to get the VM's view of the state tree after applying the above messages
// This is needed to get the correct nonce from the actor state to match the VM
stateCid, err = vmi.Flush(ctx)
if err != nil {
return nil, xerrors.Errorf("flushing vm: %w", err)
}
stTree, err := state.LoadStateTree(cbor.NewCborStore(buffStore), stateCid)
if err != nil {
return nil, xerrors.Errorf("loading state tree: %w", err)
}
fromActor, err := stTree.GetActor(msg.From)
if err != nil {
return nil, xerrors.Errorf("call raw get actor: %s", err)
}
@@ -1,3 +1,4 @@
//stm: #integration
package stmgr_test
import (
@@ -106,6 +107,9 @@ func (ta *testActor) TestMethod(rt rt2.Runtime, params *abi.EmptyValue) *abi.Emp
}
func TestForkHeightTriggers(t *testing.T) {
//stm: @CHAIN_STATETREE_GET_ACTOR_001, @CHAIN_STATETREE_FLUSH_001, @TOKEN_WALLET_SIGN_001
//stm: @CHAIN_GEN_NEXT_TIPSET_001
//stm: @CHAIN_STATE_RESOLVE_TO_KEY_ADDR_001, @CHAIN_STATE_SET_VM_CONSTRUCTOR_001
logging.SetAllLoggers(logging.LevelInfo)
ctx := context.TODO()
@@ -166,8 +170,8 @@ func TestForkHeightTriggers(t *testing.T) {
inv := filcns.NewActorRegistry()
inv.Register(nil, testActor{})
-sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (*vm.VM, error) {
+sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
-nvm, err := vm.NewVM(ctx, vmopt)
+nvm, err := vm.NewLegacyVM(ctx, vmopt)
if err != nil {
return nil, err
}
@@ -241,6 +245,8 @@ func TestForkHeightTriggers(t *testing.T) {
}
func TestForkRefuseCall(t *testing.T) {
//stm: @CHAIN_GEN_NEXT_TIPSET_001, @CHAIN_GEN_NEXT_TIPSET_FROM_MINERS_001
//stm: @CHAIN_STATE_RESOLVE_TO_KEY_ADDR_001, @CHAIN_STATE_SET_VM_CONSTRUCTOR_001, @CHAIN_STATE_CALL_001
logging.SetAllLoggers(logging.LevelInfo)
for after := 0; after < 3; after++ {
@@ -281,8 +287,8 @@ func testForkRefuseCall(t *testing.T, nullsBefore, nullsAfter int) {
inv := filcns.NewActorRegistry()
inv.Register(nil, testActor{})
-sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (*vm.VM, error) {
+sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
-nvm, err := vm.NewVM(ctx, vmopt)
+nvm, err := vm.NewLegacyVM(ctx, vmopt)
if err != nil {
return nil, err
}
@@ -360,6 +366,8 @@ func testForkRefuseCall(t *testing.T, nullsBefore, nullsAfter int) {
}
func TestForkPreMigration(t *testing.T) {
//stm: @CHAIN_GEN_NEXT_TIPSET_001,
//stm: @CHAIN_STATE_RESOLVE_TO_KEY_ADDR_001, @CHAIN_STATE_SET_VM_CONSTRUCTOR_001
logging.SetAllLoggers(logging.LevelInfo)
cg, err := gen.NewGenerator()
@@ -500,8 +508,8 @@ func TestForkPreMigration(t *testing.T) {
inv := filcns.NewActorRegistry()
inv.Register(nil, testActor{})
-sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (*vm.VM, error) {
+sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
-nvm, err := vm.NewVM(ctx, vmopt)
+nvm, err := vm.NewLegacyVM(ctx, vmopt)
if err != nil {
return nil, err
}
@@ -1,3 +1,4 @@
//stm: #unit
package stmgr_test
import (
@@ -12,6 +13,8 @@ import (
)
func TestSearchForMessageReplacements(t *testing.T) {
//stm: @CHAIN_GEN_NEXT_TIPSET_001
//stm: @CHAIN_STATE_SEARCH_MSG_001
ctx := context.Background()
cg, err := gen.NewGenerator()
if err != nil {
@@ -84,7 +84,7 @@ type StateManager struct {
compWait map[string]chan struct{}
stlk sync.Mutex
genesisMsigLk sync.Mutex
-newVM func(context.Context, *vm.VMOpts) (*vm.VM, error)
+newVM func(context.Context, *vm.VMOpts) (vm.Interface, error)
Syscalls vm.SyscallBuilder
preIgnitionVesting []msig0.State
postIgnitionVesting []msig0.State
@@ -347,12 +347,12 @@ func (sm *StateManager) ValidateChain(ctx context.Context, ts *types.TipSet) err
return nil
}
-func (sm *StateManager) SetVMConstructor(nvm func(context.Context, *vm.VMOpts) (*vm.VM, error)) {
+func (sm *StateManager) SetVMConstructor(nvm func(context.Context, *vm.VMOpts) (vm.Interface, error)) {
sm.newVM = nvm
}
-func (sm *StateManager) VMConstructor() func(context.Context, *vm.VMOpts) (*vm.VM, error) {
+func (sm *StateManager) VMConstructor() func(context.Context, *vm.VMOpts) (vm.Interface, error) {
-return func(ctx context.Context, opts *vm.VMOpts) (*vm.VM, error) {
+return func(ctx context.Context, opts *vm.VMOpts) (vm.Interface, error) {
return sm.newVM(ctx, opts)
}
}
@@ -196,8 +196,32 @@ func (sm *StateManager) setupPostCalicoVesting(ctx context.Context) error {
// GetVestedFunds returns all funds that have "left" actors that are in the genesis state:
// - For Multisigs, it counts the actual amounts that have vested at the given epoch
// - For Accounts, it counts max(currentBalance - genesisBalance, 0).
-func (sm *StateManager) GetFilVested(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
+func (sm *StateManager) GetFilVested(ctx context.Context, height abi.ChainEpoch) (abi.TokenAmount, error) {
vf := big.Zero()
sm.genesisMsigLk.Lock()
defer sm.genesisMsigLk.Unlock()
// TODO: combine all this?
if sm.preIgnitionVesting == nil || sm.genesisPledge.IsZero() || sm.genesisMarketFunds.IsZero() {
err := sm.setupGenesisVestingSchedule(ctx)
if err != nil {
return vf, xerrors.Errorf("failed to setup pre-ignition vesting schedule: %w", err)
}
}
if sm.postIgnitionVesting == nil {
err := sm.setupPostIgnitionVesting(ctx)
if err != nil {
return vf, xerrors.Errorf("failed to setup post-ignition vesting schedule: %w", err)
}
}
if sm.postCalicoVesting == nil {
err := sm.setupPostCalicoVesting(ctx)
if err != nil {
return vf, xerrors.Errorf("failed to setup post-calico vesting schedule: %w", err)
}
}
if height <= build.UpgradeIgnitionHeight {
for _, v := range sm.preIgnitionVesting {
au := big.Sub(v.InitialBalance, v.AmountLocked(height))
@@ -282,7 +306,7 @@ func getFilPowerLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmoun
return pst.TotalLocked()
}
-func (sm *StateManager) GetFilLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
+func GetFilLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
filMarketLocked, err := getFilMarketLocked(ctx, st)
if err != nil {
@@ -316,28 +340,7 @@ func (sm *StateManager) GetVMCirculatingSupply(ctx context.Context, height abi.C
}
func (sm *StateManager) GetVMCirculatingSupplyDetailed(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (api.CirculatingSupply, error) {
-sm.genesisMsigLk.Lock()
+filVested, err := sm.GetFilVested(ctx, height)
defer sm.genesisMsigLk.Unlock()
if sm.preIgnitionVesting == nil || sm.genesisPledge.IsZero() || sm.genesisMarketFunds.IsZero() {
err := sm.setupGenesisVestingSchedule(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup pre-ignition vesting schedule: %w", err)
}
}
if sm.postIgnitionVesting == nil {
err := sm.setupPostIgnitionVesting(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup post-ignition vesting schedule: %w", err)
}
}
if sm.postCalicoVesting == nil {
err := sm.setupPostCalicoVesting(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup post-calico vesting schedule: %w", err)
}
}
filVested, err := sm.GetFilVested(ctx, height, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filVested: %w", err)
}
@@ -360,7 +363,7 @@ func (sm *StateManager) GetVMCirculatingSupplyDetailed(ctx context.Context, heig
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filBurnt: %w", err)
}
-filLocked, err := sm.GetFilLocked(ctx, st)
+filLocked, err := GetFilLocked(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filLocked: %w", err)
}
@@ -79,6 +79,11 @@ func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch,
// future. It's not guaranteed to be accurate... but that's fine.
}
filVested, err := sm.GetFilVested(ctx, height)
if err != nil {
return cid.Undef, nil, err
}
r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion)
vmopt := &vm.VMOpts{
StateBase: base,
@@ -90,6 +95,7 @@ func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NetworkVersion: sm.GetNetworkVersion(ctx, height),
BaseFee: ts.Blocks()[0].ParentBaseFee,
FilVested: filVested,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
vmi, err := sm.newVM(ctx, vmopt)
@@ -1,3 +1,5 @@
//stm: #unit
package store
import (
@@ -10,6 +12,7 @@ import (
)
func TestBaseFee(t *testing.T) {
//stm: @CHAIN_STORE_COMPUTE_NEXT_BASE_FEE_001
tests := []struct {
basefee uint64
limitUsed int64
@@ -1,3 +1,4 @@
//stm: #unit
package store_test
import (
@@ -10,6 +11,9 @@ import (
)
func TestChainCheckpoint(t *testing.T) {
//stm: @CHAIN_GEN_NEXT_TIPSET_FROM_MINERS_001
//stm: @CHAIN_STORE_GET_TIPSET_FROM_KEY_001, @CHAIN_STORE_SET_HEAD_001, @CHAIN_STORE_GET_HEAVIEST_TIPSET_001
//stm: @CHAIN_STORE_SET_CHECKPOINT_001, @CHAIN_STORE_MAYBE_TAKE_HEAVIER_TIPSET_001, @CHAIN_STORE_REMOVE_CHECKPOINT_001
ctx := context.Background()
cg, err := gen.NewGenerator()
@@ -1,3 +1,4 @@
//stm: #unit
package store
import (
@@ -9,6 +10,7 @@ import (
)
func TestHeadChangeCoalescer(t *testing.T) {
//stm: @CHAIN_STORE_COALESCE_HEAD_CHANGE_001
notif := make(chan headChange, 1)
c := NewHeadChangeCoalescer(func(revert, apply []*types.TipSet) error {
notif <- headChange{apply: apply, revert: revert}
@@ -1,3 +1,4 @@
//stm: #unit
package store_test
import (
@@ -17,6 +18,9 @@ import (
)
func TestIndexSeeks(t *testing.T) {
//stm: @CHAIN_STORE_IMPORT_001
//stm: @CHAIN_STORE_GET_TIPSET_BY_HEIGHT_001, @CHAIN_STORE_PUT_TIPSET_001, @CHAIN_STORE_SET_GENESIS_BLOCK_001
//stm: @CHAIN_STORE_CLOSE_001
cg, err := gen.NewGenerator()
if err != nil {
t.Fatal(err)
@@ -1,3 +1,4 @@
//stm: #unit
package store_test
import (
@@ -28,6 +29,8 @@ func init() {
}
func BenchmarkGetRandomness(b *testing.B) {
//stm: @CHAIN_GEN_NEXT_TIPSET_001
//stm: @CHAIN_STATE_GET_RANDOMNESS_FROM_TICKETS_001
cg, err := gen.NewGenerator()
if err != nil {
b.Fatal(err)
@@ -85,6 +88,8 @@ func BenchmarkGetRandomness(b *testing.B) {
}
func TestChainExportImport(t *testing.T) {
//stm: @CHAIN_GEN_NEXT_TIPSET_001
//stm: @CHAIN_STORE_IMPORT_001
cg, err := gen.NewGenerator()
if err != nil {
t.Fatal(err)
@@ -120,6 +125,9 @@ func TestChainExportImport(t *testing.T) {
}
func TestChainExportImportFull(t *testing.T) {
//stm: @CHAIN_GEN_NEXT_TIPSET_001
//stm: @CHAIN_STORE_IMPORT_001, @CHAIN_STORE_EXPORT_001, @CHAIN_STORE_SET_HEAD_001
//stm: @CHAIN_STORE_GET_TIPSET_BY_HEIGHT_001
cg, err := gen.NewGenerator()
if err != nil {
t.Fatal(err)
@@ -1,3 +1,4 @@
//stm: #unit
package chain
import (
@@ -78,6 +79,7 @@ func assertGetSyncOp(t *testing.T, c chan *syncOp, ts *types.TipSet) {
}
func TestSyncManagerEdgeCase(t *testing.T) {
//stm: @CHAIN_SYNCER_SET_PEER_HEAD_001
ctx := context.Background()
a := mock.TipSet(mock.MkBlock(genTs, 1, 1))
@@ -161,6 +163,7 @@ func TestSyncManagerEdgeCase(t *testing.T) {
}
func TestSyncManager(t *testing.T) {
//stm: @CHAIN_SYNCER_SET_PEER_HEAD_001
ctx := context.Background()
a := mock.TipSet(mock.MkBlock(genTs, 1, 1))
@@ -1,3 +1,4 @@
//stm: #unit
package types
import (
@@ -14,6 +15,7 @@ import (
)
func TestBigIntSerializationRoundTrip(t *testing.T) {
//stm: @CHAIN_TYPES_PARSE_BIGINT_001
testValues := []string{
"0", "1", "10", "-10", "9999", "12345678901234567891234567890123456789012345678901234567890",
}
@@ -42,6 +44,7 @@ func TestBigIntSerializationRoundTrip(t *testing.T) {
}
func TestFilRoundTrip(t *testing.T) {
//stm: @TYPES_FIL_PARSE_001
testValues := []string{
"0 FIL", "1 FIL", "1.001 FIL", "100.10001 FIL", "101100 FIL", "5000.01 FIL", "5000 FIL",
}
@@ -59,6 +62,7 @@ func TestFilRoundTrip(t *testing.T) {
}
func TestSizeStr(t *testing.T) {
//stm: @CHAIN_TYPES_SIZE_BIGINT_001
cases := []struct {
in uint64
out string
@@ -79,6 +83,7 @@ func TestSizeStr(t *testing.T) {
}
func TestSizeStrUnitsSymmetry(t *testing.T) {
//stm: @CHAIN_TYPES_SIZE_BIGINT_001
s := rand.NewSource(time.Now().UnixNano())
r := rand.New(s)
@@ -95,6 +100,7 @@ func TestSizeStrUnitsSymmetry(t *testing.T) {
}
func TestSizeStrBig(t *testing.T) {
//stm: @CHAIN_TYPES_SIZE_BIGINT_001
ZiB := big.NewInt(50000)
ZiB = ZiB.Lsh(ZiB, 70)

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (
@ -51,6 +52,7 @@ func testBlockHeader(t testing.TB) *BlockHeader {
} }
func TestBlockHeaderSerialization(t *testing.T) { func TestBlockHeaderSerialization(t *testing.T) {
//stm: @CHAIN_TYPES_BLOCK_HEADER_FROM_CBOR_001, @CHAIN_TYPES_BLOCK_HEADER_TO_CBOR_001
bh := testBlockHeader(t) bh := testBlockHeader(t)
buf := new(bytes.Buffer) buf := new(bytes.Buffer)
@ -71,6 +73,7 @@ func TestBlockHeaderSerialization(t *testing.T) {
} }
func TestInteropBH(t *testing.T) { func TestInteropBH(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_BLOCK_INTEROP_001
newAddr, err := address.NewSecp256k1Address([]byte("address0")) newAddr, err := address.NewSecp256k1Address([]byte("address0"))
if err != nil { if err != nil {

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (
@ -11,6 +12,7 @@ import (
) )
func TestPoissonFunction(t *testing.T) { func TestPoissonFunction(t *testing.T) {
//stm: @CHAIN_TYPES_POISSON_001
tests := []struct { tests := []struct {
lambdaBase uint64 lambdaBase uint64
lambdaShift uint lambdaShift uint
@ -47,6 +49,7 @@ func TestPoissonFunction(t *testing.T) {
} }
func TestLambdaFunction(t *testing.T) { func TestLambdaFunction(t *testing.T) {
//stm: @CHAIN_TYPES_LAMBDA_001
tests := []struct { tests := []struct {
power string power string
totalPower string totalPower string
@ -72,6 +75,7 @@ func TestLambdaFunction(t *testing.T) {
} }
func TestExpFunction(t *testing.T) { func TestExpFunction(t *testing.T) {
//stm: @CHAIN_TYPES_NEGATIVE_EXP_001
const N = 256 const N = 256
step := big.NewInt(5) step := big.NewInt(5)
@ -100,6 +104,7 @@ func q256ToF(x *big.Int) float64 {
} }
func TestElectionLam(t *testing.T) { func TestElectionLam(t *testing.T) {
//stm: @CHAIN_TYPES_LAMBDA_001
p := big.NewInt(64) p := big.NewInt(64)
tot := big.NewInt(128) tot := big.NewInt(128)
lam := lambda(p, tot) lam := lambda(p, tot)
@ -128,6 +133,7 @@ func BenchmarkWinCounts(b *testing.B) {
} }
func TestWinCounts(t *testing.T) { func TestWinCounts(t *testing.T) {
//stm: @TYPES_ELECTION_PROOF_COMPUTE_WIN_COUNT_001
totalPower := NewInt(100) totalPower := NewInt(100)
power := NewInt(20) power := NewInt(20)

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (
@ -7,6 +8,7 @@ import (
) )
func TestFilShort(t *testing.T) { func TestFilShort(t *testing.T) {
//stm: @TYPES_FIL_PARSE_001
for _, s := range []struct { for _, s := range []struct {
fil string fil string
expect string expect string

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (
@ -71,6 +72,7 @@ func TestEqualCall(t *testing.T) {
Params: []byte("hai"), Params: []byte("hai"),
} }
//stm: @TYPES_MESSAGE_EQUAL_CALL_001
require.True(t, m1.EqualCall(m2)) require.True(t, m1.EqualCall(m2))
require.True(t, m1.EqualCall(m3)) require.True(t, m1.EqualCall(m3))
require.False(t, m1.EqualCall(m4)) require.False(t, m1.EqualCall(m4))
@ -97,11 +99,13 @@ func TestMessageJson(t *testing.T) {
exp := []byte("{\"Version\":0,\"To\":\"f04\",\"From\":\"f00\",\"Nonce\":34,\"Value\":\"0\",\"GasLimit\":123,\"GasFeeCap\":\"234\",\"GasPremium\":\"234\",\"Method\":6,\"Params\":\"aGFp\",\"CID\":{\"/\":\"bafy2bzaced5rdpz57e64sc7mdwjn3blicglhpialnrph2dlbufhf6iha63dmc\"}}") exp := []byte("{\"Version\":0,\"To\":\"f04\",\"From\":\"f00\",\"Nonce\":34,\"Value\":\"0\",\"GasLimit\":123,\"GasFeeCap\":\"234\",\"GasPremium\":\"234\",\"Method\":6,\"Params\":\"aGFp\",\"CID\":{\"/\":\"bafy2bzaced5rdpz57e64sc7mdwjn3blicglhpialnrph2dlbufhf6iha63dmc\"}}")
fmt.Println(string(b)) fmt.Println(string(b))
//stm: @TYPES_MESSAGE_JSON_EQUAL_CALL_001
require.Equal(t, exp, b) require.Equal(t, exp, b)
var um Message var um Message
require.NoError(t, json.Unmarshal(b, &um)) require.NoError(t, json.Unmarshal(b, &um))
//stm: @TYPES_MESSAGE_JSON_EQUAL_CALL_002
require.EqualValues(t, *m, um) require.EqualValues(t, *m, um)
} }
@ -131,10 +135,12 @@ func TestSignedMessageJson(t *testing.T) {
exp := []byte("{\"Message\":{\"Version\":0,\"To\":\"f04\",\"From\":\"f00\",\"Nonce\":34,\"Value\":\"0\",\"GasLimit\":123,\"GasFeeCap\":\"234\",\"GasPremium\":\"234\",\"Method\":6,\"Params\":\"aGFp\",\"CID\":{\"/\":\"bafy2bzaced5rdpz57e64sc7mdwjn3blicglhpialnrph2dlbufhf6iha63dmc\"}},\"Signature\":{\"Type\":0,\"Data\":null},\"CID\":{\"/\":\"bafy2bzacea5ainifngxj3rygaw2hppnyz2cw72x5pysqty2x6dxmjs5qg2uus\"}}") exp := []byte("{\"Message\":{\"Version\":0,\"To\":\"f04\",\"From\":\"f00\",\"Nonce\":34,\"Value\":\"0\",\"GasLimit\":123,\"GasFeeCap\":\"234\",\"GasPremium\":\"234\",\"Method\":6,\"Params\":\"aGFp\",\"CID\":{\"/\":\"bafy2bzaced5rdpz57e64sc7mdwjn3blicglhpialnrph2dlbufhf6iha63dmc\"}},\"Signature\":{\"Type\":0,\"Data\":null},\"CID\":{\"/\":\"bafy2bzacea5ainifngxj3rygaw2hppnyz2cw72x5pysqty2x6dxmjs5qg2uus\"}}")
fmt.Println(string(b)) fmt.Println(string(b))
//stm: @TYPES_MESSAGE_JSON_EQUAL_CALL_001
require.Equal(t, exp, b) require.Equal(t, exp, b)
var um SignedMessage var um SignedMessage
require.NoError(t, json.Unmarshal(b, &um)) require.NoError(t, json.Unmarshal(b, &um))
//stm: @TYPES_MESSAGE_JSON_EQUAL_CALL_002
require.EqualValues(t, *sm, um) require.EqualValues(t, *sm, um)
} }

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (
@ -8,6 +9,7 @@ import (
) )
func TestSignatureSerializeRoundTrip(t *testing.T) { func TestSignatureSerializeRoundTrip(t *testing.T) {
//stm: @CHAIN_TYPES_SIGNATURE_SERIALIZATION_001
s := &crypto.Signature{ s := &crypto.Signature{
Data: []byte("foo bar cat dog"), Data: []byte("foo bar cat dog"),
Type: crypto.SigTypeBLS, Type: crypto.SigTypeBLS,

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (
@ -12,6 +13,7 @@ import (
) )
func TestTipSetKey(t *testing.T) { func TestTipSetKey(t *testing.T) {
//stm: @TYPES_TIPSETKEY_FROM_BYTES_001, @TYPES_TIPSETKEY_NEW_001
cb := cid.V1Builder{Codec: cid.DagCBOR, MhType: multihash.BLAKE2B_MIN + 31} cb := cid.V1Builder{Codec: cid.DagCBOR, MhType: multihash.BLAKE2B_MIN + 31}
c1, _ := cb.Sum([]byte("a")) c1, _ := cb.Sum([]byte("a"))
c2, _ := cb.Sum([]byte("b")) c2, _ := cb.Sum([]byte("b"))

View File

@ -1,3 +1,4 @@
//stm: #unit
package types package types
import ( import (

View File

@ -1,3 +1,4 @@
//stm: #unit
package chain package chain
import ( import (
@ -12,6 +13,7 @@ import (
) )
func TestSignedMessageJsonRoundtrip(t *testing.T) { func TestSignedMessageJsonRoundtrip(t *testing.T) {
//stm: @TYPES_MESSAGE_JSON_EQUAL_CALL_002
to, _ := address.NewIDAddress(5234623) to, _ := address.NewIDAddress(5234623)
from, _ := address.NewIDAddress(603911192) from, _ := address.NewIDAddress(603911192)
smsg := &types.SignedMessage{ smsg := &types.SignedMessage{
@ -40,6 +42,7 @@ func TestSignedMessageJsonRoundtrip(t *testing.T) {
} }
func TestAddressType(t *testing.T) { func TestAddressType(t *testing.T) {
//stm: @CHAIN_TYPES_ADDRESS_PREFIX_001
build.SetAddressNetwork(address.Testnet) build.SetAddressNetwork(address.Testnet)
addr, err := makeRandomAddress() addr, err := makeRandomAddress()
if err != nil { if err != nil {

View File

@ -1,3 +1,4 @@
//stm: #unit
package vectors package vectors
import ( import (
@ -26,6 +27,7 @@ func LoadVector(t *testing.T, f string, out interface{}) {
} }
func TestBlockHeaderVectors(t *testing.T) { func TestBlockHeaderVectors(t *testing.T) {
//stm: @CHAIN_TYPES_SERIALIZATION_BLOCK_001
var headers []HeaderVector var headers []HeaderVector
LoadVector(t, "block_headers.json", &headers) LoadVector(t, "block_headers.json", &headers)
@ -46,6 +48,7 @@ func TestBlockHeaderVectors(t *testing.T) {
} }
func TestMessageSigningVectors(t *testing.T) { func TestMessageSigningVectors(t *testing.T) {
//stm: @CHAIN_TYPES_SERIALIZATION_SIGNED_MESSAGE_001
var msvs []MessageSigningVector var msvs []MessageSigningVector
LoadVector(t, "message_signing.json", &msvs) LoadVector(t, "message_signing.json", &msvs)
@ -64,6 +67,7 @@ func TestMessageSigningVectors(t *testing.T) {
} }
func TestUnsignedMessageVectors(t *testing.T) { func TestUnsignedMessageVectors(t *testing.T) {
//stm: @CHAIN_TYPES_SERIALIZATION_MESSAGE_001
var msvs []UnsignedMessageVector var msvs []UnsignedMessageVector
LoadVector(t, "unsigned_messages.json", &msvs) LoadVector(t, "unsigned_messages.json", &msvs)

View File

@ -1,3 +1,4 @@
//stm: #unit
package vm package vm
import ( import (
@ -9,6 +10,7 @@ import (
) )
func TestGasBurn(t *testing.T) { func TestGasBurn(t *testing.T) {
//stm: @BURN_ESTIMATE_GAS_OVERESTIMATION_BURN_001
tests := []struct { tests := []struct {
used int64 used int64
limit int64 limit int64
@ -40,6 +42,7 @@ func TestGasBurn(t *testing.T) {
} }
func TestGasOutputs(t *testing.T) { func TestGasOutputs(t *testing.T) {
//stm: @BURN_ESTIMATE_GAS_OUTPUTS_001
baseFee := types.NewInt(10) baseFee := types.NewInt(10)
tests := []struct { tests := []struct {
used int64 used int64

chain/vm/fvm.go (new file, 312 lines)
View File

@ -0,0 +1,312 @@
package vm
import (
"bytes"
"context"
"time"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/state"
cbor "github.com/ipfs/go-ipld-cbor"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/exitcode"
"github.com/filecoin-project/lotus/lib/sigs"
"golang.org/x/xerrors"
"github.com/filecoin-project/lotus/blockstore"
ffi "github.com/filecoin-project/filecoin-ffi"
ffi_cgo "github.com/filecoin-project/filecoin-ffi/cgo"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/types"
"github.com/ipfs/go-cid"
)
var _ Interface = (*FVM)(nil)
var _ ffi_cgo.Externs = (*FvmExtern)(nil)
type FvmExtern struct {
Rand
blockstore.Blockstore
epoch abi.ChainEpoch
lbState LookbackStateGetter
base cid.Cid
}
// VerifyConsensusFault is similar to the one in syscalls.go used by the LegacyVM, except it never errors
// Errors are logged and "no fault" is returned, which is functionally what go-actors does anyway
func (x *FvmExtern) VerifyConsensusFault(ctx context.Context, a, b, extra []byte) (*ffi_cgo.ConsensusFault, int64) {
totalGas := int64(0)
ret := &ffi_cgo.ConsensusFault{
Type: ffi_cgo.ConsensusFaultNone,
}
// Note that block syntax is not validated. Any validly signed block will be accepted pursuant to the below conditions.
// Whether or not it could ever have been accepted in a chain is not checked/does not matter here.
// for that reason when checking block parent relationships, rather than instantiating a Tipset to do so
// (which runs a syntactic check), we do it directly on the CIDs.
// (0) cheap preliminary checks
// can blocks be decoded properly?
var blockA, blockB types.BlockHeader
if decodeErr := blockA.UnmarshalCBOR(bytes.NewReader(a)); decodeErr != nil {
log.Info("invalid consensus fault: cannot decode first block header: %w", decodeErr)
return ret, totalGas
}
if decodeErr := blockB.UnmarshalCBOR(bytes.NewReader(b)); decodeErr != nil {
log.Info("invalid consensus fault: cannot decode second block header: %w", decodeErr)
return ret, totalGas
}
// are blocks the same?
if blockA.Cid().Equals(blockB.Cid()) {
log.Info("invalid consensus fault: submitted blocks are the same")
return ret, totalGas
}
// (1) check conditions necessary to any consensus fault
// were blocks mined by same miner?
if blockA.Miner != blockB.Miner {
log.Info("invalid consensus fault: blocks not mined by the same miner")
return ret, totalGas
}
// block a must be earlier than or equal to block b, epoch-wise (i.e. at least as early in the chain).
if blockB.Height < blockA.Height {
log.Info("invalid consensus fault: first block must not be of higher height than second")
return ret, totalGas
}
ret.Epoch = blockB.Height
faultType := ffi_cgo.ConsensusFaultNone
// (2) check for the consensus faults themselves
// (a) double-fork mining fault
if blockA.Height == blockB.Height {
faultType = ffi_cgo.ConsensusFaultDoubleForkMining
}
// (b) time-offset mining fault
// strictly speaking no need to compare heights based on double fork mining check above,
// but at same height this would be a different fault.
if types.CidArrsEqual(blockA.Parents, blockB.Parents) && blockA.Height != blockB.Height {
faultType = ffi_cgo.ConsensusFaultTimeOffsetMining
}
// (c) parent-grinding fault
// Here extra is the "witness", a third block that shows the connection between A and B as
// A's sibling and B's parent.
// Specifically, since A is of lower height, it must be that B was mined omitting A from its tipset
//
// B
// |
// [A, C]
var blockC types.BlockHeader
if len(extra) > 0 {
if decodeErr := blockC.UnmarshalCBOR(bytes.NewReader(extra)); decodeErr != nil {
log.Info("invalid consensus fault: cannot decode extra: %w", decodeErr)
return ret, totalGas
}
if types.CidArrsEqual(blockA.Parents, blockC.Parents) && blockA.Height == blockC.Height &&
types.CidArrsContains(blockB.Parents, blockC.Cid()) && !types.CidArrsContains(blockB.Parents, blockA.Cid()) {
faultType = ffi_cgo.ConsensusFaultParentGrinding
}
}
// (3) return if no consensus fault by now
if faultType == ffi_cgo.ConsensusFaultNone {
log.Info("invalid consensus fault: no fault detected")
return ret, totalGas
}
// else
// (4) expensive final checks
// check blocks are properly signed by their respective miner
// note we do not need to check extra's: it is a parent to block b
// which itself is signed, so it was willingly included by the miner
gasA, sigErr := x.VerifyBlockSig(ctx, &blockA)
totalGas += gasA
if sigErr != nil {
log.Info("invalid consensus fault: cannot verify first block sig: %w", sigErr)
return ret, totalGas
}
gas2, sigErr := x.VerifyBlockSig(ctx, &blockB)
totalGas += gas2
if sigErr != nil {
log.Info("invalid consensus fault: cannot verify second block sig: %w", sigErr)
return ret, totalGas
}
ret.Type = faultType
ret.Target = blockA.Miner
return ret, totalGas
}
func (x *FvmExtern) VerifyBlockSig(ctx context.Context, blk *types.BlockHeader) (int64, error) {
waddr, gasUsed, err := x.workerKeyAtLookback(ctx, blk.Miner, blk.Height)
if err != nil {
return gasUsed, err
}
return gasUsed, sigs.CheckBlockSignature(ctx, blk, waddr)
}
func (x *FvmExtern) workerKeyAtLookback(ctx context.Context, minerId address.Address, height abi.ChainEpoch) (address.Address, int64, error) {
gasUsed := int64(0)
gasAdder := func(gc GasCharge) {
// technically not overflow safe, but that's fine
gasUsed += gc.Total()
}
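// Two CBOR stores over the same blockstore: cstWithGas meters reads through the
// gas-charging wrapper (cbb) below, while cstWithoutGas is used only to load the
// base state tree without charging gas.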
cstWithoutGas := cbor.NewCborStore(x.Blockstore)
cbb := &gasChargingBlocks{gasAdder, PricelistByEpoch(x.epoch), x.Blockstore}
cstWithGas := cbor.NewCborStore(cbb)
lbState, err := x.lbState(ctx, height)
if err != nil {
return address.Undef, gasUsed, err
}
// get appropriate miner actor
act, err := lbState.GetActor(minerId)
if err != nil {
return address.Undef, gasUsed, err
}
// use that to get the miner state
mas, err := miner.Load(adt.WrapStore(ctx, cstWithGas), act)
if err != nil {
return address.Undef, gasUsed, err
}
info, err := mas.Info()
if err != nil {
return address.Undef, gasUsed, err
}
stateTree, err := state.LoadStateTree(cstWithoutGas, x.base)
if err != nil {
return address.Undef, gasUsed, err
}
raddr, err := ResolveToKeyAddr(stateTree, cstWithGas, info.Worker)
if err != nil {
return address.Undef, gasUsed, err
}
return raddr, gasUsed, nil
}
type FVM struct {
fvm *ffi.FVM
}
func NewFVM(ctx context.Context, opts *VMOpts) (*FVM, error) {
circToReport := opts.FilVested
// For v14 (and earlier), we perform the FilVested portion of the calculation, and let the FVM dynamically do the rest
// v15 and after, the circ supply is always constant per epoch, so we calculate the base and report it at creation
if opts.NetworkVersion >= network.Version15 {
state, err := state.LoadStateTree(cbor.NewCborStore(opts.Bstore), opts.StateBase)
if err != nil {
return nil, err
}
circToReport, err = opts.CircSupplyCalc(ctx, opts.Epoch, state)
if err != nil {
return nil, err
}
}
fvm, err := ffi.CreateFVM(0,
&FvmExtern{Rand: opts.Rand, Blockstore: opts.Bstore, lbState: opts.LookbackState, base: opts.StateBase, epoch: opts.Epoch},
opts.Epoch, opts.BaseFee, circToReport, opts.NetworkVersion, opts.StateBase,
)
if err != nil {
return nil, err
}
return &FVM{
fvm: fvm,
}, nil
}
func (vm *FVM) ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet, error) {
start := build.Clock.Now()
msgBytes, err := cmsg.VMMessage().Serialize()
if err != nil {
return nil, xerrors.Errorf("serializing msg: %w", err)
}
ret, err := vm.fvm.ApplyMessage(msgBytes, uint(cmsg.ChainLength()))
if err != nil {
return nil, xerrors.Errorf("applying msg: %w", err)
}
return &ApplyRet{
MessageReceipt: types.MessageReceipt{
Return: ret.Return,
ExitCode: exitcode.ExitCode(ret.ExitCode),
GasUsed: ret.GasUsed,
},
GasCosts: &GasOutputs{
// TODO: do the other optional fields eventually
BaseFeeBurn: big.Zero(),
OverEstimationBurn: big.Zero(),
MinerPenalty: ret.MinerPenalty,
MinerTip: ret.MinerTip,
Refund: big.Zero(),
GasRefund: 0,
GasBurned: 0,
},
// TODO: do these eventually, not consensus critical
// https://github.com/filecoin-project/ref-fvm/issues/318
ActorErr: nil,
ExecutionTrace: types.ExecutionTrace{},
Duration: time.Since(start),
}, nil
}
func (vm *FVM) ApplyImplicitMessage(ctx context.Context, cmsg *types.Message) (*ApplyRet, error) {
start := build.Clock.Now()
msgBytes, err := cmsg.VMMessage().Serialize()
if err != nil {
return nil, xerrors.Errorf("serializing msg: %w", err)
}
ret, err := vm.fvm.ApplyImplicitMessage(msgBytes)
if err != nil {
return nil, xerrors.Errorf("applying msg: %w", err)
}
return &ApplyRet{
MessageReceipt: types.MessageReceipt{
Return: ret.Return,
ExitCode: exitcode.ExitCode(ret.ExitCode),
GasUsed: ret.GasUsed,
},
GasCosts: nil,
// TODO: do these eventually, not consensus critical
// https://github.com/filecoin-project/ref-fvm/issues/318
ActorErr: nil,
ExecutionTrace: types.ExecutionTrace{},
Duration: time.Since(start),
}, nil
}
func (vm *FVM) Flush(ctx context.Context) (cid.Cid, error) {
return vm.fvm.Flush()
}
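The new engine only has to satisfy the small vm.Interface introduced in chain/vm/vmi.go further down in this commit: ApplyMessage, ApplyImplicitMessage and Flush. As a rough illustration of why that surface is enough for callers, here is a sketch; the helper name, package name and wiring are ours, not part of the commit:

```go
package example

import (
	"context"

	"github.com/filecoin-project/lotus/chain/types"
	"github.com/filecoin-project/lotus/chain/vm"
	"github.com/ipfs/go-cid"
)

// applyAndFlush only depends on vm.Interface, so it behaves the same whether it
// is handed the new *vm.FVM or the renamed *vm.LegacyVM.
func applyAndFlush(ctx context.Context, v vm.Interface, msg types.ChainMsg) (*vm.ApplyRet, cid.Cid, error) {
	ret, err := v.ApplyMessage(ctx, msg)
	if err != nil {
		return nil, cid.Undef, err
	}
	root, err := v.Flush(ctx)
	if err != nil {
		return nil, cid.Undef, err
	}
	return ret, root, nil
}
```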

View File

@ -50,7 +50,7 @@ func newGasCharge(name string, computeGas int64, storageGas int64) GasCharge {
}
}
-// Pricelist provides prices for operations in the VM.
+// Pricelist provides prices for operations in the LegacyVM.
//
// Note: this interface should be APPEND ONLY since last chain checkpoint
type Pricelist interface {

View File

@ -50,7 +50,7 @@ type pricelistV0 struct {
// whether it succeeds or fails in application) is given by:
// OnChainMessageBase + len(serialized message)*OnChainMessagePerByte
// Together, these account for the cost of message propagation and validation,
-// up to but excluding any actual processing by the VM.
+// up to but excluding any actual processing by the LegacyVM.
// This is the cost a block producer burns when including an invalid message.
onChainMessageComputeBase int64
onChainMessageStorageBase int64
@ -83,11 +83,11 @@ type pricelistV0 struct {
sendInvokeMethod int64
// Gas cost for any Get operation to the IPLD store
-// in the runtime VM context.
+// in the runtime LegacyVM context.
ipldGetBase int64
// Gas cost (Base + len*PerByte) for any Put operation to the IPLD store
-// in the runtime VM context.
+// in the runtime LegacyVM context.
//
// Note: these costs should be significantly higher than the costs for Get
// operations, since they reflect not only serialization/deserialization

View File

@ -1,3 +1,4 @@
//stm: #unit
package vm package vm
import ( import (

View File

@ -1,3 +1,4 @@
//stm: #unit
package vm package vm
import ( import (
@ -106,6 +107,7 @@ func (*basicRtMessage) ValueReceived() abi.TokenAmount {
} }
func TestInvokerBasic(t *testing.T) { func TestInvokerBasic(t *testing.T) {
//stm: @INVOKER_TRANSFORM_001
inv := ActorRegistry{} inv := ActorRegistry{}
code, err := inv.transform(basicContract{}) code, err := inv.transform(basicContract{})
assert.NoError(t, err) assert.NoError(t, err)
@ -135,7 +137,7 @@ func TestInvokerBasic(t *testing.T) {
{ {
_, aerr := code[1](&Runtime{ _, aerr := code[1](&Runtime{
vm: &VM{networkVersion: network.Version0}, vm: &LegacyVM{networkVersion: network.Version0},
Message: &basicRtMessage{}, Message: &basicRtMessage{},
}, []byte{99}) }, []byte{99})
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {
@ -146,7 +148,7 @@ func TestInvokerBasic(t *testing.T) {
{ {
_, aerr := code[1](&Runtime{ _, aerr := code[1](&Runtime{
vm: &VM{networkVersion: network.Version7}, vm: &LegacyVM{networkVersion: network.Version7},
Message: &basicRtMessage{}, Message: &basicRtMessage{},
}, []byte{99}) }, []byte{99})
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {

View File

@ -65,7 +65,7 @@ type Runtime struct {
ctx context.Context
-vm *VM
+vm *LegacyVM
state *state.StateTree
height abi.ChainEpoch
cst ipldcbor.IpldStore
@ -158,7 +158,7 @@ func (rt *Runtime) shimCall(f func() interface{}) (rval []byte, aerr aerrors.Act
defer func() {
if r := recover(); r != nil {
if ar, ok := r.(aerrors.ActorError); ok {
-log.Warnf("VM.Call failure in call from: %s to %s: %+v", rt.Caller(), rt.Receiver(), ar)
+log.Warnf("LegacyVM.Call failure in call from: %s to %s: %+v", rt.Caller(), rt.Receiver(), ar)
aerr = ar
return
}

View File

@ -1,3 +1,4 @@
//stm: #unit
package vm package vm
import ( import (
@ -22,6 +23,7 @@ func (*NotAVeryGoodMarshaler) MarshalCBOR(writer io.Writer) error {
var _ cbg.CBORMarshaler = &NotAVeryGoodMarshaler{} var _ cbg.CBORMarshaler = &NotAVeryGoodMarshaler{}
func TestRuntimePutErrors(t *testing.T) { func TestRuntimePutErrors(t *testing.T) {
//stm: @CHAIN_VM_STORE_PUT_002
defer func() { defer func() {
err := recover() err := recover()
if err == nil { if err == nil {

View File

@ -122,7 +122,7 @@ func (bs *gasChargingBlocks) Put(ctx context.Context, blk block.Block) error {
return nil return nil
} }
func (vm *VM) makeRuntime(ctx context.Context, msg *types.Message, parent *Runtime) *Runtime { func (vm *LegacyVM) makeRuntime(ctx context.Context, msg *types.Message, parent *Runtime) *Runtime {
rt := &Runtime{ rt := &Runtime{
ctx: ctx, ctx: ctx,
vm: vm, vm: vm,
@ -188,7 +188,7 @@ func (vm *VM) makeRuntime(ctx context.Context, msg *types.Message, parent *Runti
} }
type UnsafeVM struct { type UnsafeVM struct {
VM *VM VM *LegacyVM
} }
func (vm *UnsafeVM) MakeRuntime(ctx context.Context, msg *types.Message) *Runtime { func (vm *UnsafeVM) MakeRuntime(ctx context.Context, msg *types.Message) *Runtime {
@ -201,7 +201,9 @@ type (
LookbackStateGetter func(context.Context, abi.ChainEpoch) (*state.StateTree, error)
)
-type VM struct {
+var _ Interface = (*LegacyVM)(nil)
+type LegacyVM struct {
cstate *state.StateTree
cst *cbor.BasicIpldStore
buf *blockstore.BufferedBlockstore
@ -225,12 +227,14 @@ type VMOpts struct {
Actors *ActorRegistry
Syscalls SyscallBuilder
CircSupplyCalc CircSupplyCalculator
// Amount of FIL vested from genesis actors.
FilVested abi.TokenAmount
NetworkVersion network.Version
BaseFee abi.TokenAmount
LookbackState LookbackStateGetter
}
-func NewVM(ctx context.Context, opts *VMOpts) (*VM, error) {
+func NewLegacyVM(ctx context.Context, opts *VMOpts) (*LegacyVM, error) {
buf := blockstore.NewBuffered(opts.Bstore)
cst := cbor.NewCborStore(buf)
state, err := state.LoadStateTree(cst, opts.StateBase)
@ -243,7 +247,7 @@ func NewVM(ctx context.Context, opts *VMOpts) (*VM, error) {
return nil, err
}
-return &VM{
+return &LegacyVM{
cstate: state,
cst: cst,
buf: buf,
@ -272,7 +276,7 @@ type ApplyRet struct {
GasCosts *GasOutputs GasCosts *GasOutputs
} }
func (vm *VM) send(ctx context.Context, msg *types.Message, parent *Runtime, func (vm *LegacyVM) send(ctx context.Context, msg *types.Message, parent *Runtime,
gasCharge *GasCharge, start time.Time) ([]byte, aerrors.ActorError, *Runtime) { gasCharge *GasCharge, start time.Time) ([]byte, aerrors.ActorError, *Runtime) {
defer atomic.AddUint64(&StatSends, 1) defer atomic.AddUint64(&StatSends, 1)
@ -391,7 +395,7 @@ func checkMessage(msg *types.Message) error {
return nil return nil
} }
func (vm *VM) ApplyImplicitMessage(ctx context.Context, msg *types.Message) (*ApplyRet, error) { func (vm *LegacyVM) ApplyImplicitMessage(ctx context.Context, msg *types.Message) (*ApplyRet, error) {
start := build.Clock.Now() start := build.Clock.Now()
defer atomic.AddUint64(&StatApplied, 1) defer atomic.AddUint64(&StatApplied, 1)
ret, actorErr, rt := vm.send(ctx, msg, nil, nil, start) ret, actorErr, rt := vm.send(ctx, msg, nil, nil, start)
@ -409,7 +413,7 @@ func (vm *VM) ApplyImplicitMessage(ctx context.Context, msg *types.Message) (*Ap
}, actorErr }, actorErr
} }
func (vm *VM) ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet, error) { func (vm *LegacyVM) ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet, error) {
start := build.Clock.Now() start := build.Clock.Now()
ctx, span := trace.StartSpan(ctx, "vm.ApplyMessage") ctx, span := trace.StartSpan(ctx, "vm.ApplyMessage")
defer span.End() defer span.End()
@ -616,7 +620,7 @@ func (vm *VM) ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet,
}, nil }, nil
} }
func (vm *VM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Message, errcode exitcode.ExitCode) (bool, error) { func (vm *LegacyVM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Message, errcode exitcode.ExitCode) (bool, error) {
if vm.networkVersion <= network.Version12 { if vm.networkVersion <= network.Version12 {
// Check to see if we should burn funds. We avoid burning on successful // Check to see if we should burn funds. We avoid burning on successful
// window post. This won't catch _indirect_ window post calls, but this // window post. This won't catch _indirect_ window post calls, but this
@ -646,7 +650,7 @@ func (vm *VM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Me
type vmFlushKey struct{} type vmFlushKey struct{}
func (vm *VM) Flush(ctx context.Context) (cid.Cid, error) { func (vm *LegacyVM) Flush(ctx context.Context) (cid.Cid, error) {
_, span := trace.StartSpan(ctx, "vm.Flush") _, span := trace.StartSpan(ctx, "vm.Flush")
defer span.End() defer span.End()
@ -665,9 +669,9 @@ func (vm *VM) Flush(ctx context.Context) (cid.Cid, error) {
return root, nil return root, nil
} }
// Get the buffered blockstore associated with the VM. This includes any temporary blocks produced // Get the buffered blockstore associated with the LegacyVM. This includes any temporary blocks produced
// during this VM's execution. // during this LegacyVM's execution.
func (vm *VM) ActorStore(ctx context.Context) adt.Store { func (vm *LegacyVM) ActorStore(ctx context.Context) adt.Store {
return adt.WrapStore(ctx, vm.cst) return adt.WrapStore(ctx, vm.cst)
} }
@ -820,11 +824,11 @@ func copyRec(ctx context.Context, from, to blockstore.Blockstore, root cid.Cid,
return nil return nil
} }
func (vm *VM) StateTree() types.StateTree { func (vm *LegacyVM) StateTree() types.StateTree {
return vm.cstate return vm.cstate
} }
func (vm *VM) Invoke(act *types.Actor, rt *Runtime, method abi.MethodNum, params []byte) ([]byte, aerrors.ActorError) { func (vm *LegacyVM) Invoke(act *types.Actor, rt *Runtime, method abi.MethodNum, params []byte) ([]byte, aerrors.ActorError) {
ctx, span := trace.StartSpan(rt.ctx, "vm.Invoke") ctx, span := trace.StartSpan(rt.ctx, "vm.Invoke")
defer span.End() defer span.End()
if span.IsRecordingEvents() { if span.IsRecordingEvents() {
@ -847,11 +851,11 @@ func (vm *VM) Invoke(act *types.Actor, rt *Runtime, method abi.MethodNum, params
return ret, nil return ret, nil
} }
func (vm *VM) SetInvoker(i *ActorRegistry) { func (vm *LegacyVM) SetInvoker(i *ActorRegistry) {
vm.areg = i vm.areg = i
} }
func (vm *VM) GetCircSupply(ctx context.Context) (abi.TokenAmount, error) { func (vm *LegacyVM) GetCircSupply(ctx context.Context) (abi.TokenAmount, error) {
// Before v15, this was recalculated on each invocation as the state tree was mutated // Before v15, this was recalculated on each invocation as the state tree was mutated
if vm.networkVersion <= network.Version14 { if vm.networkVersion <= network.Version14 {
return vm.circSupplyCalc(ctx, vm.blockHeight, vm.cstate) return vm.circSupplyCalc(ctx, vm.blockHeight, vm.cstate)
@ -860,14 +864,14 @@ func (vm *VM) GetCircSupply(ctx context.Context) (abi.TokenAmount, error) {
return vm.baseCircSupply, nil return vm.baseCircSupply, nil
} }
func (vm *VM) incrementNonce(addr address.Address) error { func (vm *LegacyVM) incrementNonce(addr address.Address) error {
return vm.cstate.MutateActor(addr, func(a *types.Actor) error { return vm.cstate.MutateActor(addr, func(a *types.Actor) error {
a.Nonce++ a.Nonce++
return nil return nil
}) })
} }
func (vm *VM) transfer(from, to address.Address, amt types.BigInt, networkVersion network.Version) aerrors.ActorError { func (vm *LegacyVM) transfer(from, to address.Address, amt types.BigInt, networkVersion network.Version) aerrors.ActorError {
var f *types.Actor var f *types.Actor
var fromID, toID address.Address var fromID, toID address.Address
var err error var err error
@ -955,7 +959,7 @@ func (vm *VM) transfer(from, to address.Address, amt types.BigInt, networkVersio
return nil return nil
} }
func (vm *VM) transferToGasHolder(addr address.Address, gasHolder *types.Actor, amt types.BigInt) error { func (vm *LegacyVM) transferToGasHolder(addr address.Address, gasHolder *types.Actor, amt types.BigInt) error {
if amt.LessThan(types.NewInt(0)) { if amt.LessThan(types.NewInt(0)) {
return xerrors.Errorf("attempted to transfer negative value to gas holder") return xerrors.Errorf("attempted to transfer negative value to gas holder")
} }
@ -969,7 +973,7 @@ func (vm *VM) transferToGasHolder(addr address.Address, gasHolder *types.Actor,
}) })
} }
func (vm *VM) transferFromGasHolder(addr address.Address, gasHolder *types.Actor, amt types.BigInt) error { func (vm *LegacyVM) transferFromGasHolder(addr address.Address, gasHolder *types.Actor, amt types.BigInt) error {
if amt.LessThan(types.NewInt(0)) { if amt.LessThan(types.NewInt(0)) {
return xerrors.Errorf("attempted to transfer negative value from gas holder") return xerrors.Errorf("attempted to transfer negative value from gas holder")
} }

chain/vm/vmi.go (new file, 27 lines)
View File

@ -0,0 +1,27 @@
package vm
import (
"context"
"os"
"github.com/filecoin-project/lotus/chain/types"
"github.com/ipfs/go-cid"
)
type Interface interface {
// Applies the given message onto the VM's current state, returning the result of the execution
ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet, error)
// Same as above but for system messages (the Cron invocation and block reward payments).
// Must NEVER fail.
ApplyImplicitMessage(ctx context.Context, msg *types.Message) (*ApplyRet, error)
// Flush all buffered objects into the state store provided to the VM at construction.
Flush(ctx context.Context) (cid.Cid, error)
}
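// NewVM returns the LegacyVM by default; setting LOTUS_USE_FVM_EXPERIMENTAL=1
// in the environment selects the experimental FVM implementation instead.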
func NewVM(ctx context.Context, opts *VMOpts) (Interface, error) {
if os.Getenv("LOTUS_USE_FVM_EXPERIMENTAL") == "1" {
return NewFVM(ctx, opts)
}
return NewLegacyVM(ctx, opts)
}
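For illustration only, a minimal sketch of opting into the experimental engine from Go code before constructing a VM; only vm.NewVM, vm.VMOpts, vm.Interface and the LOTUS_USE_FVM_EXPERIMENTAL variable come from this commit, the wrapper itself is an assumption about how a caller might wire it up:

```go
package example

import (
	"context"
	"os"

	"github.com/filecoin-project/lotus/chain/vm"
)

// newExperimentalVM forces the FVM path of vm.NewVM for this process.
// Leaving LOTUS_USE_FVM_EXPERIMENTAL unset keeps the LegacyVM behaviour.
func newExperimentalVM(ctx context.Context, opts *vm.VMOpts) (vm.Interface, error) {
	if err := os.Setenv("LOTUS_USE_FVM_EXPERIMENTAL", "1"); err != nil {
		return nil, err
	}
	return vm.NewVM(ctx, opts)
}
```

In practice the variable would normally be set in the daemon's environment rather than from code.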

View File

@ -1,4 +1,4 @@
-//stm: #cli
+//stm: #unit
package cli
import (

View File

@ -1,4 +1,4 @@
-//stm: #cli
+//stm: #unit
package cli
import (

View File

@ -1,3 +1,5 @@
//stm: ignore
//stm: #unit
package cli package cli
import ( import (

View File

@ -1,3 +1,5 @@
//stm: ignore
//stm: #unit
package cli package cli
import ( import (

View File

@ -1,3 +1,4 @@
//stm: #unit
package cli package cli
import ( import (

View File

@ -1,4 +1,4 @@
-//stm: #cli
+//stm: #unit
package cli
import (

View File

@ -1,3 +1,4 @@
//stm: #unit
package main package main
import ( import (

View File

@ -1,3 +1,4 @@
//stm: #unit
package main package main
import ( import (
@ -8,6 +9,7 @@ import (
) )
func TestRateLimit(t *testing.T) { func TestRateLimit(t *testing.T) {
//stm: @CMD_LIMITER_GET_IP_LIMITER_001, @CMD_LIMITER_GET_WALLET_LIMITER_001
limiter := NewLimiter(LimiterConfig{ limiter := NewLimiter(LimiterConfig{
TotalRate: time.Second, TotalRate: time.Second,
TotalBurst: 20, TotalBurst: 20,

View File

@ -1,3 +1,4 @@
//stm: #unit
package main package main
import ( import (
@ -9,6 +10,7 @@ import (
) )
func TestAppendCIDsToWindow(t *testing.T) { func TestAppendCIDsToWindow(t *testing.T) {
//stm: @CMD_HEALTH_APPEND_CIDS_001
assert := assert.New(t) assert := assert.New(t)
var window CidWindow var window CidWindow
threshold := 3 threshold := 3
@ -27,6 +29,7 @@ func TestAppendCIDsToWindow(t *testing.T) {
} }
func TestCheckWindow(t *testing.T) { func TestCheckWindow(t *testing.T) {
//stm: @CMD_HEALTH_APPEND_CIDS_001, @CMD_HEALTH_CHECK_WINDOW_001
assert := assert.New(t) assert := assert.New(t)
threshold := 3 threshold := 3

View File

@ -1,3 +1,4 @@
//stm: #unit
package main package main
import ( import (
@ -23,6 +24,7 @@ import (
) )
func TestWorkerKeyChange(t *testing.T) { func TestWorkerKeyChange(t *testing.T) {
//stm: @OTHER_WORKER_KEY_CHANGE_001
if testing.Short() { if testing.Short() {
t.Skip("skipping test in short mode") t.Skip("skipping test in short mode")
} }

View File

@ -1,3 +1,4 @@
//stm: #integration
package main package main
import ( import (
@ -49,6 +50,7 @@ func TestMinerAllInfo(t *testing.T) {
t.Run("pre-info-all", run) t.Run("pre-info-all", run)
//stm: @CLIENT_DATA_IMPORT_001, @CLIENT_STORAGE_DEALS_GET_001
dh := kit.NewDealHarness(t, client, miner, miner) dh := kit.NewDealHarness(t, client, miner, miner)
deal, res, inPath := dh.MakeOnlineDeal(context.Background(), kit.MakeFullDealParams{Rseed: 6}) deal, res, inPath := dh.MakeOnlineDeal(context.Background(), kit.MakeFullDealParams{Rseed: 6})
outPath := dh.PerformRetrieval(context.Background(), deal, res.Root, false) outPath := dh.PerformRetrieval(context.Background(), deal, res.Root, false)

View File

@ -466,6 +466,7 @@ var stateOrder = map[sealing.SectorState]stateMeta{}
var stateList = []stateMeta{
{col: 39, state: "Total"},
{col: color.FgGreen, state: sealing.Proving},
{col: color.FgGreen, state: sealing.Available},
{col: color.FgGreen, state: sealing.UpdateActivating},
{col: color.FgBlue, state: sealing.Empty},

View File

@ -11,9 +11,6 @@ import (
"strings" "strings"
"time" "time"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/docker/go-units" "github.com/docker/go-units"
"github.com/fatih/color" "github.com/fatih/color"
cbor "github.com/ipfs/go-ipld-cbor" cbor "github.com/ipfs/go-ipld-cbor"
@ -56,7 +53,6 @@ var sectorsCmd = &cli.Command{
sectorsRemoveCmd, sectorsRemoveCmd,
sectorsSnapUpCmd, sectorsSnapUpCmd,
sectorsSnapAbortCmd, sectorsSnapAbortCmd,
sectorsMarkForUpgradeCmd,
sectorsStartSealCmd, sectorsStartSealCmd,
sectorsSealDelayCmd, sectorsSealDelayCmd,
sectorsCapacityCollateralCmd, sectorsCapacityCollateralCmd,
@ -351,7 +347,7 @@ var sectorsListCmd = &cli.Command{
if cctx.Bool("unproven") { if cctx.Bool("unproven") {
for state := range sealing.ExistSectorStateList { for state := range sealing.ExistSectorStateList {
if state == sealing.Proving { if state == sealing.Proving || state == sealing.Available {
continue continue
} }
states = append(states, api.SectorState(state)) states = append(states, api.SectorState(state))
@ -1568,57 +1564,6 @@ var sectorsSnapAbortCmd = &cli.Command{
}, },
} }
var sectorsMarkForUpgradeCmd = &cli.Command{
Name: "mark-for-upgrade",
Usage: "Mark a committed capacity sector for replacement by a sector with deals",
ArgsUsage: "<sectorNum>",
Action: func(cctx *cli.Context) error {
if cctx.Args().Len() != 1 {
return lcli.ShowHelp(cctx, xerrors.Errorf("must pass sector number"))
}
nodeApi, closer, err := lcli.GetStorageMinerAPI(cctx)
if err != nil {
return err
}
defer closer()
api, nCloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
defer nCloser()
ctx := lcli.ReqContext(cctx)
nv, err := api.StateNetworkVersion(ctx, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("failed to get network version: %w", err)
}
if nv >= network.Version15 {
return xerrors.Errorf("classic cc upgrades disabled v15 and beyond, use `snap-up`")
}
// disable mark for upgrade two days before the ntwk v15 upgrade
// TODO: remove the following block in v1.15.1
head, err := api.ChainHead(ctx)
if err != nil {
return xerrors.Errorf("failed to get chain head: %w", err)
}
twoDays := abi.ChainEpoch(2 * builtin.EpochsInDay)
if head.Height() > (build.UpgradeOhSnapHeight - twoDays) {
return xerrors.Errorf("OhSnap is coming soon, " +
"please use `snap-up` to upgrade your cc sectors after the network v15 upgrade!")
}
id, err := strconv.ParseUint(cctx.Args().Get(0), 10, 64)
if err != nil {
return xerrors.Errorf("could not parse sector number: %w", err)
}
return nodeApi.SectorMarkForUpgrade(ctx, abi.SectorNumber(id), false)
},
}
var sectorsStartSealCmd = &cli.Command{ var sectorsStartSealCmd = &cli.Command{
Name: "seal", Name: "seal",
Usage: "Manually start sealing a sector (filling any unused space with junk)", Usage: "Manually start sealing a sector (filling any unused space with junk)",

View File

@ -598,7 +598,7 @@ var storageListSectorsCmd = &cli.Command{
ft storiface.SectorFileType ft storiface.SectorFileType
urls string urls string
-primary, seal, store bool
+primary, copy, main, seal, store bool
state api.SectorState state api.SectorState
} }
@ -626,6 +626,9 @@ var storageListSectorsCmd = &cli.Command{
urls: strings.Join(info.URLs, ";"),
primary: info.Primary,
copy: !info.Primary && len(si) > 1,
main: !info.Primary && len(si) == 1, // only copy, but not primary
seal: info.CanSeal,
store: info.CanStore,
@ -680,7 +683,7 @@ var storageListSectorsCmd = &cli.Command{
"Sector": e.id, "Sector": e.id,
"Type": e.ft.String(), "Type": e.ft.String(),
"State": color.New(stateOrder[sealing.SectorState(e.state)].col).Sprint(e.state), "State": color.New(stateOrder[sealing.SectorState(e.state)].col).Sprint(e.state),
"Primary": maybeStr(e.seal, color.FgGreen, "primary"), "Primary": maybeStr(e.primary, color.FgGreen, "primary") + maybeStr(e.copy, color.FgBlue, "copy") + maybeStr(e.main, color.FgRed, "main"),
"Path use": maybeStr(e.seal, color.FgMagenta, "seal ") + maybeStr(e.store, color.FgCyan, "store"), "Path use": maybeStr(e.seal, color.FgMagenta, "seal ") + maybeStr(e.store, color.FgCyan, "store"),
"URLs": e.urls, "URLs": e.urls,
} }

View File

@ -0,0 +1,118 @@
package main
import (
"context"
"fmt"
"io"
"unicode/utf8"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/consensus/filcns"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/node/repo"
"github.com/filecoin-project/specs-actors/v4/actors/util/adt"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
"github.com/urfave/cli/v2"
)
var dealLabelCmd = &cli.Command{
Name: "deal-label",
Usage: "Scrape state to report on how many deals have non UTF-8 labels",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Value: "~/.lotus",
},
},
Action: func(cctx *cli.Context) error {
ctx := context.TODO()
if !cctx.Args().Present() {
return fmt.Errorf("must pass state root")
}
sroot, err := cid.Decode(cctx.Args().First())
if err != nil {
return fmt.Errorf("failed to parse input: %w", err)
}
fsrepo, err := repo.NewFS(cctx.String("repo"))
if err != nil {
return err
}
lkrepo, err := fsrepo.Lock(repo.FullNode)
if err != nil {
return err
}
defer lkrepo.Close() //nolint:errcheck
bs, err := lkrepo.Blockstore(ctx, repo.UniversalBlockstore)
if err != nil {
return fmt.Errorf("failed to open blockstore: %w", err)
}
defer func() {
if c, ok := bs.(io.Closer); ok {
if err := c.Close(); err != nil {
log.Warnf("failed to close blockstore: %s", err)
}
}
}()
mds, err := lkrepo.Datastore(context.Background(), "/metadata")
if err != nil {
return err
}
cs := store.NewChainStore(bs, bs, mds, filcns.Weight, nil)
defer cs.Close() //nolint:errcheck
cst := cbor.NewCborStore(bs)
store := adt.WrapStore(ctx, cst)
tree, err := state.LoadStateTree(cst, sroot)
if err != nil {
return err
}
ma, err := tree.GetActor(market.Address)
if err != nil {
return err
}
ms, err := market.Load(store, ma)
if err != nil {
return err
}
ps, err := ms.Proposals()
if err != nil {
return err
}
var deals []abi.DealID
if err = ps.ForEach(func(id abi.DealID, dp market.DealProposal) error {
if !utf8.Valid([]byte(dp.Label)) {
deals = append(deals, id)
}
return nil
}); err != nil {
return err
}
fmt.Println("there are ", len(deals), " bad labels")
for _, d := range deals {
fmt.Print(d, " ")
}
return nil
},
}

View File

@ -22,6 +22,7 @@ func main() {
bitFieldCmd, bitFieldCmd,
cronWcCmd, cronWcCmd,
frozenMinersCmd, frozenMinersCmd,
dealLabelCmd,
keyinfoCmd, keyinfoCmd,
jwtCmd, jwtCmd,
noncefix, noncefix,

View File

@ -44,7 +44,7 @@ type BlockBuilder struct {
parentTs *types.TipSet parentTs *types.TipSet
parentSt *state.StateTree parentSt *state.StateTree
vm *vm.VM vm *vm.LegacyVM
sm *stmgr.StateManager sm *stmgr.StateManager
gasTotal int64 gasTotal int64
@ -73,9 +73,9 @@ func NewBlockBuilder(ctx context.Context, logger *zap.SugaredLogger, sm *stmgr.S
parentSt: parentSt, parentSt: parentSt,
} }
// Then we construct a VM to execute messages for gas estimation. // Then we construct a LegacyVM to execute messages for gas estimation.
// //
// Most parts of this VM are "real" except: // Most parts of this LegacyVM are "real" except:
// 1. We don't charge a fee. // 1. We don't charge a fee.
// 2. The runtime has "fake" proof logic. // 2. The runtime has "fake" proof logic.
// 3. We don't actually save any of the results. // 3. We don't actually save any of the results.
@ -92,7 +92,7 @@ func NewBlockBuilder(ctx context.Context, logger *zap.SugaredLogger, sm *stmgr.S
BaseFee: abi.NewTokenAmount(0), BaseFee: abi.NewTokenAmount(0),
LookbackState: stmgr.LookbackStateGetterForTipset(sm, parentTs), LookbackState: stmgr.LookbackStateGetterForTipset(sm, parentTs),
} }
bb.vm, err = vm.NewVM(bb.ctx, vmopt) bb.vm, err = vm.NewLegacyVM(bb.ctx, vmopt)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -190,12 +190,12 @@ func (bb *BlockBuilder) PushMessage(msg *types.Message) (*types.MessageReceipt,
return &ret.MessageReceipt, nil return &ret.MessageReceipt, nil
} }
// ActorStore returns the VM's current (pending) blockstore. // ActorStore returns the LegacyVM's current (pending) blockstore.
func (bb *BlockBuilder) ActorStore() adt.Store { func (bb *BlockBuilder) ActorStore() adt.Store {
return bb.vm.ActorStore(bb.ctx) return bb.vm.ActorStore(bb.ctx)
} }
// StateTree returns the VM's current (pending) state-tree. This includes any changes made by // StateTree returns the LegacyVM's current (pending) state-tree. This includes any changes made by
// successfully pushed messages. // successfully pushed messages.
// //
// You probably want ParentStateTree // You probably want ParentStateTree

View File

@ -1,3 +1,4 @@
//stm: #unit
package stages package stages
import ( import (
@ -13,6 +14,7 @@ import (
) )
func TestCommitQueue(t *testing.T) { func TestCommitQueue(t *testing.T) {
//stm: @CMD_COMMIT_Q_ENQUEUE_COMMIT_001
var q commitQueue var q commitQueue
addr1, err := address.NewIDAddress(1000) addr1, err := address.NewIDAddress(1000)
require.NoError(t, err) require.NoError(t, err)
@ -46,6 +48,7 @@ func TestCommitQueue(t *testing.T) {
SectorNumber: 6, SectorNumber: 6,
})) }))
//stm: @CMD_COMMIT_Q_ADVANCE_EPOCH_001, @CMD_COMMIT_Q_NEXT_MINER_001
epoch := abi.ChainEpoch(0) epoch := abi.ChainEpoch(0)
q.advanceEpoch(epoch) q.advanceEpoch(epoch)
_, _, ok := q.nextMiner() _, _, ok := q.nextMiner()

View File

@ -1,3 +1,4 @@
//stm: #unit
package main package main
import ( import (
@ -10,6 +11,7 @@ import (
) )
func TestProtocolCodenames(t *testing.T) { func TestProtocolCodenames(t *testing.T) {
//stm: @OTHER_IMPLEMENTATION_EPOCH_CODENAMES_001
if height := abi.ChainEpoch(100); GetProtocolCodename(height) != "genesis" { if height := abi.ChainEpoch(100); GetProtocolCodename(height) != "genesis" {
t.Fatal("expected genesis codename") t.Fatal("expected genesis codename")
} }

View File

@ -1,3 +1,4 @@
//stm: #chaos
package chaos package chaos
import ( import (
@ -15,6 +16,7 @@ import (
) )
func TestSingleton(t *testing.T) { func TestSingleton(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_BUILDER_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -29,6 +31,7 @@ func TestSingleton(t *testing.T) {
} }
func TestCallerValidationNone(t *testing.T) { func TestCallerValidationNone(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CALLER_VALIDATION_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -40,6 +43,7 @@ func TestCallerValidationNone(t *testing.T) {
} }
func TestCallerValidationIs(t *testing.T) { func TestCallerValidationIs(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CALLER_VALIDATION_001
caller := atesting2.NewIDAddr(t, 100) caller := atesting2.NewIDAddr(t, 100)
receiver := atesting2.NewIDAddr(t, 101) receiver := atesting2.NewIDAddr(t, 101)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -69,6 +73,7 @@ func TestCallerValidationIs(t *testing.T) {
} }
func TestCallerValidationType(t *testing.T) { func TestCallerValidationType(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CALLER_VALIDATION_001
caller := atesting2.NewIDAddr(t, 100) caller := atesting2.NewIDAddr(t, 100)
receiver := atesting2.NewIDAddr(t, 101) receiver := atesting2.NewIDAddr(t, 101)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -95,6 +100,7 @@ func TestCallerValidationType(t *testing.T) {
} }
func TestCallerValidationInvalidBranch(t *testing.T) { func TestCallerValidationInvalidBranch(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CALLER_VALIDATION_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -108,6 +114,7 @@ func TestCallerValidationInvalidBranch(t *testing.T) {
} }
func TestDeleteActor(t *testing.T) { func TestDeleteActor(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CREATE_ACTOR_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
beneficiary := atesting2.NewIDAddr(t, 101) beneficiary := atesting2.NewIDAddr(t, 101)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -122,6 +129,7 @@ func TestDeleteActor(t *testing.T) {
} }
func TestMutateStateInTransaction(t *testing.T) { func TestMutateStateInTransaction(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CREATE_STATE_001, @CHAIN_ACTOR_CHAOS_MUTATE_STATE_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -149,6 +157,7 @@ func TestMutateStateInTransaction(t *testing.T) {
} }
func TestMutateStateAfterTransaction(t *testing.T) { func TestMutateStateAfterTransaction(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CREATE_STATE_001, @CHAIN_ACTOR_CHAOS_MUTATE_STATE_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -183,6 +192,7 @@ func TestMutateStateAfterTransaction(t *testing.T) {
} }
func TestMutateStateReadonly(t *testing.T) { func TestMutateStateReadonly(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_CREATE_STATE_001, @CHAIN_ACTOR_CHAOS_MUTATE_STATE_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -217,6 +227,7 @@ func TestMutateStateReadonly(t *testing.T) {
} }
func TestMutateStateInvalidBranch(t *testing.T) { func TestMutateStateInvalidBranch(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_MUTATE_STATE_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -231,6 +242,7 @@ func TestMutateStateInvalidBranch(t *testing.T) {
} }
func TestAbortWith(t *testing.T) { func TestAbortWith(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_ABORT_WITH_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -249,6 +261,7 @@ func TestAbortWith(t *testing.T) {
} }
func TestAbortWithUncontrolled(t *testing.T) { func TestAbortWithUncontrolled(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_ABORT_WITH_001
receiver := atesting2.NewIDAddr(t, 100) receiver := atesting2.NewIDAddr(t, 100)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)
@ -266,6 +279,7 @@ func TestAbortWithUncontrolled(t *testing.T) {
} }
func TestInspectRuntime(t *testing.T) { func TestInspectRuntime(t *testing.T) {
//stm: @CHAIN_ACTOR_CHAOS_INSPECT_RUNTIME_001, @CHAIN_ACTOR_CHAOS_CREATE_STATE_001
caller := atesting2.NewIDAddr(t, 100) caller := atesting2.NewIDAddr(t, 100)
receiver := atesting2.NewIDAddr(t, 101) receiver := atesting2.NewIDAddr(t, 101)
builder := mock2.NewBuilder(context.Background(), receiver) builder := mock2.NewBuilder(context.Background(), receiver)

View File

@ -1,3 +1,6 @@
//stm: ignore
// This file does not test any behaviors by itself; rather, it runs other test files
// Therefore, this file should not be annotated.
package conformance package conformance
import ( import (

View File

@ -155,12 +155,12 @@ func (d *Driver) ExecuteTipset(bs blockstore.Blockstore, ds ds.Batching, params
results: []*vm.ApplyRet{},
}
-sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (*vm.VM, error) {
+sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
vmopt.CircSupplyCalc = func(context.Context, abi.ChainEpoch, *state.StateTree) (abi.TokenAmount, error) {
return big.Zero(), nil
}
-return vm.NewVM(ctx, vmopt)
+return vm.NewLegacyVM(ctx, vmopt)
})
postcid, receiptsroot, err := tse.ApplyBlocks(context.Background(),
@ -226,7 +226,7 @@ func (d *Driver) ExecuteMessage(bs blockstore.Blockstore, params ExecuteMessageP
NetworkVersion: params.NetworkVersion,
}
-lvm, err := vm.NewVM(context.TODO(), vmOpts)
+lvm, err := vm.NewLegacyVM(context.TODO(), vmOpts)
if err != nil {
return nil, cid.Undef, err
}

View File

@ -7,7 +7,7 @@ USAGE:
lotus-miner [global options] command [command options] [arguments...]
VERSION:
-1.15.1-dev
+1.15.2-dev
COMMANDS:
init Initialize a lotus miner repo
@ -1664,7 +1664,6 @@ COMMANDS:
remove Forcefully remove a sector (WARNING: This means losing power and collateral for the removed sector (use 'terminate' for lower penalty)) remove Forcefully remove a sector (WARNING: This means losing power and collateral for the removed sector (use 'terminate' for lower penalty))
snap-up Mark a committed capacity sector to be filled with deals snap-up Mark a committed capacity sector to be filled with deals
abort-upgrade Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to as before abort-upgrade Abort the attempted (SnapDeals) upgrade of a CC sector, reverting it to as before
mark-for-upgrade Mark a committed capacity sector for replacement by a sector with deals
seal Manually start sealing a sector (filling any unused space with junk) seal Manually start sealing a sector (filling any unused space with junk)
set-seal-delay Set the time, in minutes, that a new sector waits for deals before sealing starts set-seal-delay Set the time, in minutes, that a new sector waits for deals before sealing starts
get-cc-collateral Get the collateral required to pledge a committed capacity sector get-cc-collateral Get the collateral required to pledge a committed capacity sector
@ -1912,19 +1911,6 @@ OPTIONS:
``` ```
### lotus-miner sectors mark-for-upgrade
```
NAME:
lotus-miner sectors mark-for-upgrade - Mark a committed capacity sector for replacement by a sector with deals
USAGE:
lotus-miner sectors mark-for-upgrade [command options] <sectorNum>
OPTIONS:
--help, -h show help (default: false)
```
### lotus-miner sectors seal ### lotus-miner sectors seal
``` ```
NAME: NAME:

View File

@ -7,7 +7,7 @@ USAGE:
lotus-worker [global options] command [command options] [arguments...] lotus-worker [global options] command [command options] [arguments...]
VERSION: VERSION:
1.15.1-dev 1.15.2-dev
COMMANDS: COMMANDS:
run Start lotus worker run Start lotus worker

View File

@ -7,7 +7,7 @@ USAGE:
lotus [global options] command [command options] [arguments...]
VERSION:
-1.15.1-dev
+1.15.2-dev
COMMANDS:
daemon Start a lotus daemon process

View File

@ -365,6 +365,12 @@
# env var: LOTUS_SEALING_FINALIZEEARLY
#FinalizeEarly = false
# After sealing CC sectors, make them available for upgrading with deals
#
# type: bool
# env var: LOTUS_SEALING_MAKECCSECTORSAVAILABLE
#MakeCCSectorsAvailable = false
# Whether to use available miner balance for sector collateral instead of sending it with each message
#
# type: bool

extern/filecoin-ffi (vendored submodule)

@ -1 +1 @@
-Subproject commit 5ec5d805c01ea85224f6448dd6c6fa0a2a73c028
+Subproject commit c2668aa67ec589a773153022348b9c0ed6ed4d5d

View File

@ -12,6 +12,7 @@ import (
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2" logging "github.com/ipfs/go-log/v2"
"github.com/mitchellh/go-homedir" "github.com/mitchellh/go-homedir"
"go.uber.org/multierr"
"golang.org/x/xerrors" "golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
@ -589,7 +590,7 @@ func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.Sect
return xerrors.Errorf("acquiring sector lock: %w", err) return xerrors.Errorf("acquiring sector lock: %w", err)
} }
fts := storiface.FTUnsealed moveUnsealed := storiface.FTUnsealed
{ {
unsealedStores, err := m.index.StorageFindSector(ctx, sector.ID, storiface.FTUnsealed, 0, false) unsealedStores, err := m.index.StorageFindSector(ctx, sector.ID, storiface.FTUnsealed, 0, false)
if err != nil { if err != nil {
@ -597,7 +598,7 @@ func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.Sect
} }
if len(unsealedStores) == 0 { // In some edge-cases the unsealed sector may not exist already, that's fine if len(unsealedStores) == 0 { // In some edge-cases the unsealed sector may not exist already, that's fine
fts = storiface.FTNone moveUnsealed = storiface.FTNone
} }
} }
@ -616,10 +617,10 @@ func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.Sect
} }
} }
selector := newExistingSelector(m.index, sector.ID, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache, false) selector := newExistingSelector(m.index, sector.ID, storiface.FTCache|storiface.FTUpdateCache, false)
err := m.sched.Schedule(ctx, sector, sealtasks.TTFinalizeReplicaUpdate, selector, err := m.sched.Schedule(ctx, sector, sealtasks.TTFinalizeReplicaUpdate, selector,
m.schedFetch(sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|fts, pathType, storiface.AcquireMove), m.schedFetch(sector, storiface.FTCache|storiface.FTUpdateCache|moveUnsealed, pathType, storiface.AcquireMove),
func(ctx context.Context, w Worker) error { func(ctx context.Context, w Worker) error {
_, err := m.waitSimpleCall(ctx)(w.FinalizeReplicaUpdate(ctx, sector, keepUnsealed)) _, err := m.waitSimpleCall(ctx)(w.FinalizeReplicaUpdate(ctx, sector, keepUnsealed))
return err return err
@ -628,8 +629,8 @@ func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.Sect
return err return err
} }
fetchSel := newAllocSelector(m.index, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache, storiface.PathStorage) move := func(types storiface.SectorFileType) error {
moveUnsealed := fts fetchSel := newAllocSelector(m.index, types, storiface.PathStorage)
{ {
if len(keepUnsealed) == 0 { if len(keepUnsealed) == 0 {
moveUnsealed = storiface.FTNone moveUnsealed = storiface.FTNone
@ -637,14 +638,22 @@ func (m *Manager) FinalizeReplicaUpdate(ctx context.Context, sector storage.Sect
} }
err = m.sched.Schedule(ctx, sector, sealtasks.TTFetch, fetchSel, err = m.sched.Schedule(ctx, sector, sealtasks.TTFetch, fetchSel,
m.schedFetch(sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|moveUnsealed, storiface.PathStorage, storiface.AcquireMove), m.schedFetch(sector, types, storiface.PathStorage, storiface.AcquireMove),
func(ctx context.Context, w Worker) error { func(ctx context.Context, w Worker) error {
_, err := m.waitSimpleCall(ctx)(w.MoveStorage(ctx, sector, storiface.FTCache|storiface.FTSealed|storiface.FTUpdate|storiface.FTUpdateCache|moveUnsealed)) _, err := m.waitSimpleCall(ctx)(w.MoveStorage(ctx, sector, types))
return err return err
}) })
if err != nil { if err != nil {
return xerrors.Errorf("moving sector to storage: %w", err) return xerrors.Errorf("moving sector to storage: %w", err)
} }
return nil
}
err = multierr.Append(move(storiface.FTUpdate|storiface.FTUpdateCache), move(storiface.FTCache))
err = multierr.Append(err, move(storiface.FTSealed)) // Sealed separate from cache just in case ReleaseSectorKey was already called
if moveUnsealed != storiface.FTNone {
err = multierr.Append(err, move(moveUnsealed))
}
return nil return nil
} }

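For context on the FinalizeReplicaUpdate change above: the long-term-storage move is now done per file-type group, with failures folded together via go.uber.org/multierr rather than aborting on the first error. A minimal, self-contained sketch of that pattern follows; the FileType constants and the moveOne callback are illustrative stand-ins, not the real storiface types.

```go
package main

import (
	"fmt"

	"go.uber.org/multierr"
)

// FileType mirrors the bitmask style of storiface.SectorFileType; the
// concrete values here are illustrative only.
type FileType uint64

const (
	FTSealed FileType = 1 << iota
	FTCache
	FTUpdate
	FTUpdateCache
	FTNone FileType = 0
)

// moveGroups moves related file types in separate passes and aggregates
// any failures, so one broken path does not stop the remaining moves.
func moveGroups(moveOne func(FileType) error, moveUnsealed FileType) error {
	err := multierr.Append(moveOne(FTUpdate|FTUpdateCache), moveOne(FTCache))
	// Sealed is moved separately from the cache in case the sector key
	// was already released.
	err = multierr.Append(err, moveOne(FTSealed))
	if moveUnsealed != FTNone {
		err = multierr.Append(err, moveOne(moveUnsealed))
	}
	return err
}

func main() {
	err := moveGroups(func(t FileType) error {
		fmt.Printf("moving file types %b\n", t)
		return nil
	}, FTNone)
	fmt.Println("aggregate error:", err)
}
```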
View File

@ -426,11 +426,19 @@ func generateFakePoSt(sectorInfo []proof.SectorInfo, rpt func(abi.RegisteredSeal
} }
func (mgr *SectorMgr) ReadPiece(ctx context.Context, sector storage.SectorRef, offset storiface.UnpaddedByteIndex, size abi.UnpaddedPieceSize, ticket abi.SealRandomness, unsealed cid.Cid) (mount.Reader, bool, error) { func (mgr *SectorMgr) ReadPiece(ctx context.Context, sector storage.SectorRef, offset storiface.UnpaddedByteIndex, size abi.UnpaddedPieceSize, ticket abi.SealRandomness, unsealed cid.Cid) (mount.Reader, bool, error) {
if uint64(offset) != 0 { off := storiface.UnpaddedByteIndex(0)
panic("implme") var piece cid.Cid
for _, c := range mgr.sectors[sector.ID].pieces {
piece = c
if off >= offset {
break
} }
off += storiface.UnpaddedByteIndex(len(mgr.pieces[piece]))
br := bytes.NewReader(mgr.pieces[mgr.sectors[sector.ID].pieces[0]][:size]) }
if off > offset {
panic("non-aligned offset todo")
}
br := bytes.NewReader(mgr.pieces[piece][:size])
return struct { return struct {
io.ReadCloser io.ReadCloser

View File

@ -172,7 +172,7 @@ func (handler *FetchHandler) remoteDeleteSector(w http.ResponseWriter, r *http.R
return return
} }
if err := handler.Local.Remove(r.Context(), id, ft, false, []ID{ID(r.FormValue("keep"))}); err != nil { if err := handler.Local.Remove(r.Context(), id, ft, false, ParseIDList(r.FormValue("keep"))); err != nil {
log.Errorf("%+v", err) log.Errorf("%+v", err)
w.WriteHeader(500) w.WriteHeader(500)
return return

View File

@ -7,6 +7,7 @@ import (
"net/url" "net/url"
gopath "path" gopath "path"
"sort" "sort"
"strings"
"sync" "sync"
"time" "time"
@ -29,6 +30,27 @@ var SkippedHeartbeatThresh = HeartbeatInterval * 5
// filesystem, local or networked / shared by multiple machines // filesystem, local or networked / shared by multiple machines
type ID string type ID string
const IDSep = "."
type IDList []ID
func (il IDList) String() string {
l := make([]string, len(il))
for i, id := range il {
l[i] = string(id)
}
return strings.Join(l, IDSep)
}
func ParseIDList(s string) IDList {
strs := strings.Split(s, IDSep)
out := make([]ID, len(strs))
for i, str := range strs {
out[i] = ID(str)
}
return out
}
type Group = string type Group = string
type StorageInfo struct { type StorageInfo struct {

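The IDList helpers added above let several storage IDs travel in one query-string value (the `keep` parameter consumed by the delete handler earlier in this diff). A small usage sketch, restating only the String/ParseIDList behaviour shown above; the concrete IDs are made up:

```go
package main

import (
	"fmt"
	"strings"
)

// ID, IDSep and IDList mirror the declarations added above.
type ID string

const IDSep = "."

type IDList []ID

func (il IDList) String() string {
	l := make([]string, len(il))
	for i, id := range il {
		l[i] = string(id)
	}
	return strings.Join(l, IDSep)
}

func ParseIDList(s string) IDList {
	strs := strings.Split(s, IDSep)
	out := make([]ID, len(strs))
	for i, str := range strs {
		out[i] = ID(str)
	}
	return out
}

func main() {
	keep := IDList{"store-a", "store-b"}

	// Encoded form as it would appear in a "?keep=" query parameter.
	encoded := keep.String()
	fmt.Println(encoded) // store-a.store-b

	// Round-trip back into an IDList on the receiving side.
	decoded := ParseIDList(encoded)
	fmt.Println(len(decoded), decoded[0], decoded[1])
}
```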
View File

@ -44,12 +44,36 @@ type Remote struct {
pfHandler PartialFileHandler pfHandler PartialFileHandler
} }
func (r *Remote) RemoveCopies(ctx context.Context, s abi.SectorID, types storiface.SectorFileType) error { func (r *Remote) RemoveCopies(ctx context.Context, s abi.SectorID, typ storiface.SectorFileType) error {
// TODO: do this on remotes too if bits.OnesCount(uint(typ)) != 1 {
// (not that we really need to do that since it's always called by the return xerrors.New("RemoveCopies expects one file type")
// worker which pulled the copy) }
return r.local.RemoveCopies(ctx, s, types) if err := r.local.RemoveCopies(ctx, s, typ); err != nil {
return xerrors.Errorf("removing local copies: %w", err)
}
si, err := r.index.StorageFindSector(ctx, s, typ, 0, false)
if err != nil {
return xerrors.Errorf("finding existing sector %d(t:%d) failed: %w", s, typ, err)
}
var hasPrimary bool
var keep []ID
for _, info := range si {
if info.Primary {
hasPrimary = true
keep = append(keep, info.ID)
break
}
}
if !hasPrimary {
log.Warnf("remote RemoveCopies: no primary copies of sector %v (%s), not removing anything", s, typ)
return nil
}
return r.Remove(ctx, s, typ, true, keep)
} }
func NewRemote(local Store, index SectorIndex, auth http.Header, fetchLimit int, pfHandler PartialFileHandler) *Remote { func NewRemote(local Store, index SectorIndex, auth http.Header, fetchLimit int, pfHandler PartialFileHandler) *Remote {
@ -156,7 +180,7 @@ func (r *Remote) AcquireSector(ctx context.Context, s storage.SectorRef, existin
if op == storiface.AcquireMove { if op == storiface.AcquireMove {
id := ID(storageID) id := ID(storageID)
if err := r.deleteFromRemote(ctx, url, &id); err != nil { if err := r.deleteFromRemote(ctx, url, []ID{id}); err != nil {
log.Warnf("deleting sector %v from %s (delete %s): %+v", s, storageID, url, err) log.Warnf("deleting sector %v from %s (delete %s): %+v", s, storageID, url, err)
} }
} }
@ -355,7 +379,7 @@ storeLoop:
} }
} }
for _, url := range info.URLs { for _, url := range info.URLs {
if err := r.deleteFromRemote(ctx, url, nil); err != nil { if err := r.deleteFromRemote(ctx, url, keepIn); err != nil {
log.Warnf("remove %s: %+v", url, err) log.Warnf("remove %s: %+v", url, err)
continue continue
} }
@ -366,9 +390,9 @@ storeLoop:
return nil return nil
} }
func (r *Remote) deleteFromRemote(ctx context.Context, url string, keepIn *ID) error { func (r *Remote) deleteFromRemote(ctx context.Context, url string, keepIn IDList) error {
if keepIn != nil { if keepIn != nil {
url = url + "?keep=" + string(*keepIn) url = url + "?keep=" + keepIn.String()
} }
log.Infof("Delete %s", url) log.Infof("Delete %s", url)

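The reworked Remote.RemoveCopies above first requires exactly one file type (a single set bit in the SectorFileType bitmask) before deleting every copy except the primary one. A tiny sketch of that single-bit guard using math/bits; the SectorFileType values here are illustrative stand-ins:

```go
package main

import (
	"errors"
	"fmt"
	"math/bits"
)

// SectorFileType is a stand-in for the storiface bitmask used above.
type SectorFileType uint

const (
	FTUnsealed SectorFileType = 1 << iota
	FTSealed
	FTCache
)

// requireSingleType rejects combined masks, mirroring the check that
// RemoveCopies now performs before touching any copies.
func requireSingleType(typ SectorFileType) error {
	if bits.OnesCount(uint(typ)) != 1 {
		return errors.New("expected exactly one file type")
	}
	return nil
}

func main() {
	fmt.Println(requireSingleType(FTSealed))           // <nil>
	fmt.Println(requireSingleType(FTSealed | FTCache)) // error: combined mask
}
```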
View File

@ -516,7 +516,20 @@ func (l *LocalWorker) Remove(ctx context.Context, sector abi.SectorID) error {
func (l *LocalWorker) MoveStorage(ctx context.Context, sector storage.SectorRef, types storiface.SectorFileType) (storiface.CallID, error) { func (l *LocalWorker) MoveStorage(ctx context.Context, sector storage.SectorRef, types storiface.SectorFileType) (storiface.CallID, error) {
return l.asyncCall(ctx, sector, MoveStorage, func(ctx context.Context, ci storiface.CallID) (interface{}, error) { return l.asyncCall(ctx, sector, MoveStorage, func(ctx context.Context, ci storiface.CallID) (interface{}, error) {
return nil, l.storage.MoveStorage(ctx, sector, types) if err := l.storage.MoveStorage(ctx, sector, types); err != nil {
return nil, xerrors.Errorf("move to storage: %w", err)
}
for _, fileType := range storiface.PathTypes {
if fileType&types == 0 {
continue
}
if err := l.storage.RemoveCopies(ctx, sector.ID, fileType); err != nil {
return nil, xerrors.Errorf("rm copies (t:%s, s:%v): %w", fileType, sector, err)
}
}
return nil, nil
}) })
} }

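The LocalWorker.MoveStorage change above follows the move with a per-type RemoveCopies pass, iterating the known path types and skipping any not present in the requested bitmask. A reduced sketch of that loop; PathTypes, the constants, and the removeCopies callback are hypothetical stand-ins for the storiface equivalents:

```go
package main

import "fmt"

type SectorFileType uint

const (
	FTUnsealed SectorFileType = 1 << iota
	FTSealed
	FTCache
	FTUpdate
	FTUpdateCache
)

// PathTypes lists every individual file type, mirroring storiface.PathTypes.
var PathTypes = []SectorFileType{FTUnsealed, FTSealed, FTCache, FTUpdate, FTUpdateCache}

// cleanupAfterMove removes redundant copies for each file type that was
// part of the move, leaving types outside the mask untouched.
func cleanupAfterMove(types SectorFileType, removeCopies func(SectorFileType) error) error {
	for _, fileType := range PathTypes {
		if fileType&types == 0 {
			continue // this type was not moved, nothing to clean up
		}
		if err := removeCopies(fileType); err != nil {
			return fmt.Errorf("rm copies (t:%d): %w", fileType, err)
		}
	}
	return nil
}

func main() {
	_ = cleanupAfterMove(FTSealed|FTCache, func(t SectorFileType) error {
		fmt.Printf("removing extra copies of type %b\n", t)
		return nil
	})
}
```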
View File

@ -111,6 +111,7 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
Committing: planCommitting, Committing: planCommitting,
CommitFinalize: planOne( CommitFinalize: planOne(
on(SectorFinalized{}, SubmitCommit), on(SectorFinalized{}, SubmitCommit),
on(SectorFinalizedAvailable{}, SubmitCommit),
on(SectorFinalizeFailed{}, CommitFinalizeFailed), on(SectorFinalizeFailed{}, CommitFinalizeFailed),
), ),
SubmitCommit: planOne( SubmitCommit: planOne(
@ -136,6 +137,7 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
FinalizeSector: planOne( FinalizeSector: planOne(
on(SectorFinalized{}, Proving), on(SectorFinalized{}, Proving),
on(SectorFinalizedAvailable{}, Available),
on(SectorFinalizeFailed{}, FinalizeFailed), on(SectorFinalizeFailed{}, FinalizeFailed),
), ),
@ -283,7 +285,11 @@ var fsmPlanners = map[SectorState]func(events []statemachine.Event, state *Secto
Proving: planOne( Proving: planOne(
on(SectorFaultReported{}, FaultReported), on(SectorFaultReported{}, FaultReported),
on(SectorFaulty{}, Faulty), on(SectorFaulty{}, Faulty),
on(SectorMarkForUpdate{}, Available),
),
Available: planOne(
on(SectorStartCCUpdate{}, SnapDealsWaitDeals), on(SectorStartCCUpdate{}, SnapDealsWaitDeals),
on(SectorAbortUpgrade{}, Proving),
), ),
Terminating: planOne( Terminating: planOne(
on(SectorTerminating{}, TerminateWait), on(SectorTerminating{}, TerminateWait),
@ -558,6 +564,8 @@ func (m *Sealing) plan(events []statemachine.Event, state *SectorInfo) (func(sta
// Post-seal // Post-seal
case Proving: case Proving:
return m.handleProvingSector, processed, nil return m.handleProvingSector, processed, nil
case Available:
return m.handleAvailableSector, processed, nil
case Terminating: case Terminating:
return m.handleTerminating, processed, nil return m.handleTerminating, processed, nil
case TerminateWait: case TerminateWait:

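The sealing FSM changes above add an Available state: a Proving sector enters it on SectorMarkForUpdate (or directly from finalize via SectorFinalizedAvailable), can begin a SnapDeals upgrade from it, or can fall back to Proving on SectorAbortUpgrade. A compact sketch of just those transitions as a plain map, reusing the state and event names from the diff; the real planner handles many more states and side effects:

```go
package main

import "fmt"

type SectorState string
type Event string

const (
	Proving            SectorState = "Proving"
	Available          SectorState = "Available"
	SnapDealsWaitDeals SectorState = "SnapDealsWaitDeals"
)

// transitions captures only the new Available-related edges added in this
// diff; everything else is omitted.
var transitions = map[SectorState]map[Event]SectorState{
	Proving: {
		"SectorMarkForUpdate": Available,
	},
	Available: {
		"SectorStartCCUpdate": SnapDealsWaitDeals,
		"SectorAbortUpgrade":  Proving,
	},
}

func main() {
	state := Proving
	for _, ev := range []Event{"SectorMarkForUpdate", "SectorStartCCUpdate"} {
		next, ok := transitions[state][ev]
		if !ok {
			fmt.Printf("no transition from %s on %s\n", state, ev)
			break
		}
		fmt.Printf("%s --%s--> %s\n", state, ev, next)
		state = next
	}
}
```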
Some files were not shown because too many files have changed in this diff.