Merge pull request #8279 from filecoin-project/release/v1.15.0

build: release: v1.15.0
This commit is contained in:
Jiaying Wang 2022-03-11 00:33:18 -05:00 committed by GitHub
commit 0ac1bbc7ae
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
267 changed files with 9282 additions and 2764 deletions

View File

@@ -12,10 +12,10 @@
Before you mark the PR ready for review, please make sure that:
- [ ] All commits have a clear commit message.
- [ ] The PR title is in the form of `<PR type>: <area>: <change being made>`
  - example: ` fix: mempool: Introduce a cache for valid signatures`
  - `PR type`: _fix_, _feat_, _INTERFACE BREAKING CHANGE_, _CONSENSUS BREAKING_, _build_, _chore_, _ci_, _docs_, _perf_, _refactor_, _revert_, _style_, _test_
  - `area`: _api_, _chain_, _state_, _vm_, _data transfer_, _market_, _mempool_, _message_, _block production_, _multisig_, _networking_, _paychan_, _proving_, _sealing_, _wallet_, _deps_
- [ ] This PR has tests for new functionality or change in behaviour
- [ ] If new user-facing features are introduced, clear usage guidelines and / or documentation updates should be included in https://lotus.filecoin.io or [Discussion Tutorials](https://github.com/filecoin-project/lotus/discussions/categories/tutorials)
- [ ] CI is green

View File

@@ -1,5 +1,124 @@
# Lotus changelog
# 1.15.0 / 2022-03-09
This is an optional release with retrieval improvements (client side), storage provider UX improvements for unsealing, snap deals and regular deal making, and many other new features, improvements, and bug fixes.
## Highlights
- feat:sealing: StartEpochSealingBuffer triggers packing on time ([filecoin-project/lotus#7905](https://github.com/filecoin-project/lotus/pull/7905))
  - Set the `StartEpochSealingBuffer` configuration variable to force sectors to be packed for sealing/updating, regardless of how many deals they hold, once the nearest deal start epoch is close enough to the present (see the configuration sketch after this list).
- feat: #6017 market: retrieval ask CLI command ([filecoin-project/lotus#7814](https://github.com/filecoin-project/lotus/pull/7814))
- feat(graphsync): allow setting of per-peer incoming requests for miners ([filecoin-project/lotus#7578](https://github.com/filecoin-project/lotus/pull/7578))
  - by setting `SimultaneousTransfersForStoragePerClient` in the deal-making configuration.
- Make retrieval even faster ([filecoin-project/lotus#7746](https://github.com/filecoin-project/lotus/pull/7746))
- feat: #7747 sealing: Adding conf variable for capping number of concurrent unsealing jobs (#7884) ([filecoin-project/lotus#7884](https://github.com/filecoin-project/lotus/pull/7884))
  - by setting `MaxConcurrentUnseals` in `DAGStoreConfig`
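
The three knobs above live in the lotus-miner configuration. Below is a minimal sketch of setting them programmatically via the `node/config` package; the exact section placement (`Sealing`, `Dealmaking`, `DAGStore`) is an assumption based on the field names cited above, and the values are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/filecoin-project/lotus/node/config"
)

func main() {
	cfg := config.DefaultStorageMiner()

	// Force sector packing once the nearest deal start epoch is within
	// this many epochs of the present (assumed to live under Sealing).
	cfg.Sealing.StartEpochSealingBuffer = 480

	// Cap simultaneous incoming storage-deal transfers per client peer
	// (assumed to live under Dealmaking).
	cfg.Dealmaking.SimultaneousTransfersForStoragePerClient = 10

	// Cap concurrent unsealing jobs performed by the DAG store
	// (assumed to live under DAGStore).
	cfg.DAGStore.MaxConcurrentUnseals = 5

	fmt.Printf("sealing buffer: %d epochs\n", cfg.Sealing.StartEpochSealingBuffer)
}
```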
## New Features
- feat: mpool: Cache state nonces ([filecoin-project/lotus#8005](https://github.com/filecoin-project/lotus/pull/8005))
- chore: build: make the OhSnap epoch configurable by an envvar for devnets ([filecoin-project/lotus#7995](https://github.com/filecoin-project/lotus/pull/7995))
- Shed: Add a util to send a batch of messages ([filecoin-project/lotus#7667](https://github.com/filecoin-project/lotus/pull/7667))
- Add api for transfer diagnostics ([filecoin-project/lotus#7759](https://github.com/filecoin-project/lotus/pull/7759))
- Shed: Add a util to list terminated deals ([filecoin-project/lotus#7774](https://github.com/filecoin-project/lotus/pull/7774))
- Expose EnableGasTracing as an env_var ([filecoin-project/lotus#7750](https://github.com/filecoin-project/lotus/pull/7750))
- Command to list active sector locks ([filecoin-project/lotus#7735](https://github.com/filecoin-project/lotus/pull/7735))
- Initial switch to OpenTelemetry ([filecoin-project/lotus#7725](https://github.com/filecoin-project/lotus/pull/7725))
## Improvements
- splitstore sortless compaction ([filecoin-project/lotus#8008](https://github.com/filecoin-project/lotus/pull/8008))
- perf: chain: Make drand logs in daemon less noisy (#7955) ([filecoin-project/lotus#7955](https://github.com/filecoin-project/lotus/pull/7955))
- chore: shed: storage stats 2.0 ([filecoin-project/lotus#7941](https://github.com/filecoin-project/lotus/pull/7941))
- misc: api: Annotate lotus tests according to listed behaviors ([filecoin-project/lotus#7835](https://github.com/filecoin-project/lotus/pull/7835))
- some basic splitstore refactors ([filecoin-project/lotus#7999](https://github.com/filecoin-project/lotus/pull/7999))
- chore: sealer: quieten a log ([filecoin-project/lotus#7998](https://github.com/filecoin-project/lotus/pull/7998))
- tvx: supply network version when extracting messages. ([filecoin-project/lotus#7996](https://github.com/filecoin-project/lotus/pull/7996))
- chore: remove inaccurate comment in sealtasks ([filecoin-project/lotus#7977](https://github.com/filecoin-project/lotus/pull/7977))
- Refactor: VM: Remove the NetworkVersionGetter ([filecoin-project/lotus#7818](https://github.com/filecoin-project/lotus/pull/7818))
- refactor: state: Move randomness versioning out of the VM ([filecoin-project/lotus#7816](https://github.com/filecoin-project/lotus/pull/7816))
- updating to new datastore/blockstore code with contexts ([filecoin-project/lotus#7646](https://github.com/filecoin-project/lotus/pull/7646))
- Mempool msg selection should respect block message limits ([filecoin-project/lotus#7321](https://github.com/filecoin-project/lotus/pull/7321))
- Minor improvement for OpenTelemetry ([filecoin-project/lotus#7760](https://github.com/filecoin-project/lotus/pull/7760))
- Sort lotus-miner retrieval-deals by dealId ([filecoin-project/lotus#7749](https://github.com/filecoin-project/lotus/pull/7749))
- dagstore pieceReader: Always read full in ReadAt ([filecoin-project/lotus#7737](https://github.com/filecoin-project/lotus/pull/7737))
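
On the last item: the `io.ReaderAt` contract requires `ReadAt` to read exactly `len(p)` bytes or return an error, and the dagstore fix brings `pieceReader` in line with that contract. A generic sketch of the full-read pattern (not lotus's actual implementation), assuming an underlying `io.ReadSeeker`:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// readFullAt seeks to off and reads exactly len(p) bytes, as the
// io.ReaderAt contract demands. Generic sketch, not lotus's pieceReader.
func readFullAt(r io.ReadSeeker, p []byte, off int64) (int, error) {
	if _, err := r.Seek(off, io.SeekStart); err != nil {
		return 0, err
	}
	// io.ReadFull keeps reading until the buffer is full or an error occurs,
	// so short reads from the underlying reader are retried, not returned.
	return io.ReadFull(r, p)
}

func main() {
	buf := make([]byte, 5)
	n, err := readFullAt(strings.NewReader("hello world"), buf, 6)
	fmt.Println(n, err, string(buf)) // 5 <nil> world
}
```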
## Bug Fixes
- fix: sealing: Stop recovery attempts after fault ([filecoin-project/lotus#8014](https://github.com/filecoin-project/lotus/pull/8014))
- fix:snap: pay for the collateral difference needed if the miner available balance is insufficient ([filecoin-project/lotus#8234](https://github.com/filecoin-project/lotus/pull/8234))
- sealer: fix error message ([filecoin-project/lotus#8136](https://github.com/filecoin-project/lotus/pull/8136))
- typo in variable name ([filecoin-project/lotus#8134](https://github.com/filecoin-project/lotus/pull/8134))
- fix: sealer: allow enable/disabling ReplicaUpdate tasks ([filecoin-project/lotus#8093](https://github.com/filecoin-project/lotus/pull/8093))
- chore: chain: fix log ([filecoin-project/lotus#7993](https://github.com/filecoin-project/lotus/pull/7993))
- Fix: chain: create a new VM for each epoch ([filecoin-project/lotus#7966](https://github.com/filecoin-project/lotus/pull/7966))
- fix: doc generation struct slice example value ([filecoin-project/lotus#7851](https://github.com/filecoin-project/lotus/pull/7851))
- fix: returned errors not being accepted correctly ([filecoin-project/lotus#7852](https://github.com/filecoin-project/lotus/pull/7852))
- fix: #7577 markets: When retrying Add Piece, first seek to start of reader ([filecoin-project/lotus#7812](https://github.com/filecoin-project/lotus/pull/7812))
- misc: n/a sealing: Fix grammatical error in a log warning message ([filecoin-project/lotus#7831](https://github.com/filecoin-project/lotus/pull/7831))
- sectors update-state checks if sector exists before changing its state ([filecoin-project/lotus#7762](https://github.com/filecoin-project/lotus/pull/7762))
- SplitStore: suppress compaction near upgrades ([filecoin-project/lotus#7734](https://github.com/filecoin-project/lotus/pull/7734))
## Dependency Updates
- github.com/filecoin-project/go-commp-utils (v0.1.2 -> v0.1.3):
- github.com/filecoin-project/dagstore (v0.4.3 -> v0.4.4):
- github.com/filecoin-project/go-fil-markets (v1.13.4 -> v1.19.2):
- github.com/filecoin-project/go-statestore (v0.1.1 -> v0.2.0):
- github.com/filecoin-project/go-storedcounter (v0.0.0-20200421200003-1c99c62e8a5b -> v0.1.0):
- github.com/filecoin-project/specs-actors/v2 (v2.3.5 -> v2.3.6):
- feat(deps): update markets stack ([filecoin-project/lotus#7959](https://github.com/filecoin-project/lotus/pull/7959))
- Use go-libp2p-connmgr v0.3.1 ([filecoin-project/lotus#7957](https://github.com/filecoin-project/lotus/pull/7957))
- dep/fix 7701 Dependency: update to ipld-legacy to v0.1.1 ([filecoin-project/lotus#7751](https://github.com/filecoin-project/lotus/pull/7751))
## Others
- chore: backport: release ([filecoin-project/lotus#8245](https://github.com/filecoin-project/lotus/pull/8245))
- Lotus release v1.15.0-rc3 ([filecoin-project/lotus#8236](https://github.com/filecoin-project/lotus/pull/8236))
- Lotus release v1.15.0-rc2 ([filecoin-project/lotus#8211](https://github.com/filecoin-project/lotus/pull/8211))
- Merge branch 'releases' into release/v1.15.0
- chore: build: backport releases ([filecoin-project/lotus#8193](https://github.com/filecoin-project/lotus/pull/8193))
- Merge branch 'releases' into release/v1.15.0
- bump the version to v1.15.0-rc1
- chore: build: v1.14.0 -> master ([filecoin-project/lotus#8053](https://github.com/filecoin-project/lotus/pull/8053))
- chore: merge release/v1.14.0 PRs into master ([filecoin-project/lotus#7979](https://github.com/filecoin-project/lotus/pull/7979))
- chore: update PR template ([filecoin-project/lotus#7918](https://github.com/filecoin-project/lotus/pull/7918))
- build: release: bump master version to v1.15.0-dev ([filecoin-project/lotus#7922](https://github.com/filecoin-project/lotus/pull/7922))
- misc: docs: remove issue number from the pr title ([filecoin-project/lotus#7902](https://github.com/filecoin-project/lotus/pull/7902))
- Snapcraft grade no develgrade ([filecoin-project/lotus#7802](https://github.com/filecoin-project/lotus/pull/7802))
- chore: create pull_request_template.md ([filecoin-project/lotus#7726](https://github.com/filecoin-project/lotus/pull/7726))
- Disable appimage ([filecoin-project/lotus#7707](https://github.com/filecoin-project/lotus/pull/7707))
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @arajasek | 73 | +7232/-2778 | 386 |
| @zenground0 | 27 | +5604/-1049 | 219 |
| @vyzo | 118 | +4356/-1470 | 253 |
| @zl | 1 | +3725/-309 | 8 |
| @dirkmc | 7 | +1392/-1110 | 61 |
| arajasek | 37 | +221/-1329 | 90 |
| @magik6k | 33 | +1138/-336 | 101 |
| @whyrusleeping | 2 | +483/-585 | 28 |
| Darko Brdareski | 14 | +725/-276 | 154 |
| @rvagg | 2 | +43/-947 | 10 |
| @hannahhoward | 5 | +436/-335 | 31 |
| @hannahhoward | 12 | +507/-133 | 37 |
| @jennijuju | 27 | +333/-178 | 54 |
| @TheMenko | 8 | +237/-179 | 17 |
| c r | 2 | +227/-45 | 12 |
| @dirkmck | 12 | +188/-40 | 27 |
| @ribasushi | 3 | +128/-62 | 3 |
| @raulk | 6 | +128/-49 | 9 |
| @Whyrusleeping | 1 | +76/-70 | 8 |
| @Stebalien | 1 | +55/-37 | 1 |
| @jennijuju | 11 | +29/-16 | 11 |
| @aarshkshah1992 | 1 | +23/-19 | 5 |
| @travisperson | 1 | +0/-18 | 2 |
| @gstuart | 3 | +12/-1 | 3 |
| @coryschwartz | 4 | +5/-6 | 4 |
| @pefish | 1 | +4/-3 | 1 |
| @Kubuxu | 1 | +5/-2 | 2 |
| Colin Kennedy | 1 | +4/-2 | 1 |
| Rob Quist | 1 | +2/-2 | 1 |
| @shotcollin | 1 | +1/-1 | 1 |
# 1.14.4 / 2022-03-03
This is a *highly recommended* optional release for storage providers that are doing snap deals. This fixes the bug
@@ -111,8 +230,6 @@ All node operators, including storage providers, should be aware that a pre-migr
| Travis Person | 1 | +2/-2 | 2 |
| Rod Vagg | 1 | +2/-2 | 2 |
# v1.13.2 / 2022-01-09
Lotus v1.13.2 is a *highly recommended* feature release with remarkable retrieval improvements, new features like
@@ -890,7 +1007,7 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
| dependabot[bot] | 1 | +3/-3 | 1 |
| zhoutian527 | 1 | +2/-2 | 1 |
| xloem | 1 | +4/-0 | 1 |
| @travisperson | 2 | +2/-2 | 3 |
| Liviu Damian | 2 | +2/-2 | 2 |
| @jimpick | 2 | +2/-2 | 2 |
| Frank | 1 | +3/-0 | 1 |
@@ -902,6 +1019,7 @@ This is a **highly recommended** release of Lotus that have many bug fixes, improd
This is a **highly recommended** release of Lotus that has many bug fixes, improvements and new features.
## Highlights
- Miner SimultaneousTransfers config ([filecoin-project/lotus#6612](https://github.com/filecoin-project/lotus/pull/6612))
  - Set `SimultaneousTransfers` in lotus miner config to configure the maximum number of parallel online data transfers, including both storage and retrieval deals.
- Dynamic Retrieval pricing ([filecoin-project/lotus#6175](https://github.com/filecoin-project/lotus/pull/6175))
@@ -1056,7 +1174,7 @@ This is a **highly recommended** release of Lotus that have many bug fixes, impr
| @Stebalien | 106 | +7653/-2718 | 273 |
| dirkmc | 11 | +2580/-1371 | 77 |
| @dirkmc | 39 | +1865/-1194 | 79 |
| @Kubuxu | 19 | +1973/-485 | 81 |
| @vyzo | 4 | +1748/-330 | 50 |
| @aarshkshah1992 | 5 | +1462/-213 | 27 |
| @coryschwartz | 35 | +568/-206 | 59 |

View File

@@ -900,6 +900,7 @@ type QueryOffer struct {
	Size                    uint64
	MinPrice                types.BigInt
	UnsealPrice             types.BigInt
	PricePerByte            abi.TokenAmount
	PaymentInterval         uint64
	PaymentIntervalIncrease uint64
	Miner                   address.Address

View File

@@ -154,6 +154,7 @@ type StorageMiner interface {
	StorageLock(ctx context.Context, sector abi.SectorID, read storiface.SectorFileType, write storiface.SectorFileType) error //perm:admin
	StorageTryLock(ctx context.Context, sector abi.SectorID, read storiface.SectorFileType, write storiface.SectorFileType) (bool, error) //perm:admin
	StorageList(ctx context.Context) (map[stores.ID][]stores.Decl, error) //perm:admin
	StorageGetLocks(ctx context.Context) (storiface.SectorLocks, error) //perm:admin
	StorageLocal(ctx context.Context) (map[stores.ID]string, error) //perm:admin
	StorageStat(ctx context.Context, id stores.ID) (fsutil.FsStat, error) //perm:admin
@@ -169,6 +170,8 @@ type StorageMiner interface {
	MarketGetRetrievalAsk(ctx context.Context) (*retrievalmarket.Ask, error) //perm:read
	MarketListDataTransfers(ctx context.Context) ([]DataTransferChannel, error) //perm:write
	MarketDataTransferUpdates(ctx context.Context) (<-chan DataTransferChannel, error) //perm:write
	// MarketDataTransferDiagnostics generates debugging information about current data transfers over graphsync
	MarketDataTransferDiagnostics(ctx context.Context, p peer.ID) (*TransferDiagnostics, error) //perm:write
	// MarketRestartDataTransfer attempts to restart a data transfer with the given transfer ID and other peer
	MarketRestartDataTransfer(ctx context.Context, transferID datatransfer.TransferID, otherPeer peer.ID, isInitiator bool) error //perm:write
	// MarketCancelDataTransfer cancels a data transfer with the given transfer ID and other peer
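
For orientation, here is a minimal client-side sketch exercising the two additions above, `StorageGetLocks` and `MarketDataTransferDiagnostics`. The endpoint address, token, and peer ID are placeholders supplied via environment variables, and error handling is abbreviated; this is illustrative, not prescriptive:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"os"

	"github.com/filecoin-project/lotus/api/client"
	"github.com/libp2p/go-libp2p-core/peer"
)

func main() {
	ctx := context.Background()
	headers := http.Header{"Authorization": []string{"Bearer " + os.Getenv("MINER_API_TOKEN")}}

	// Connect to the lotus-miner API (address is an assumption for this sketch).
	sm, closer, err := client.NewStorageMinerRPCV0(ctx, "ws://127.0.0.1:2345/rpc/v0", headers)
	if err != nil {
		panic(err)
	}
	defer closer()

	// New in this release: list currently held sector file locks.
	locks, err := sm.StorageGetLocks(ctx)
	if err != nil {
		panic(err)
	}
	for _, l := range locks.Locks {
		fmt.Printf("sector %d-%d read=%v write=%v\n", l.Sector.Miner, l.Sector.Number, l.Read, l.Write)
	}

	// New in this release: per-peer graphsync transfer diagnostics.
	p, err := peer.Decode(os.Getenv("OTHER_PEER")) // peer ID of the counterparty
	if err != nil {
		panic(err)
	}
	diag, err := sm.MarketDataTransferDiagnostics(ctx, p)
	if err != nil {
		panic(err)
	}
	fmt.Println("receiving:", len(diag.ReceivingTransfers), "sending:", len(diag.SendingTransfers))
}
```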

View File

@@ -1,6 +1,7 @@
package docgen
import (
	"encoding/json"
	"fmt"
	"go/ast"
	"go/parser"
@@ -15,6 +16,7 @@ import (
	"github.com/filecoin-project/go-bitfield"
	"github.com/google/uuid"
	"github.com/ipfs/go-cid"
	"github.com/ipfs/go-graphsync"
	"github.com/libp2p/go-libp2p-core/metrics"
	"github.com/libp2p/go-libp2p-core/network"
	"github.com/libp2p/go-libp2p-core/peer"
@@ -120,6 +122,7 @@ func init() {
	addExample(api.FullAPIVersion1)
	addExample(api.PCHInbound)
	addExample(time.Minute)
	addExample(graphsync.RequestID(4))
	addExample(datatransfer.TransferID(3))
	addExample(datatransfer.Ongoing)
	addExample(storeIDExample)
@@ -250,10 +253,18 @@ func init() {
	addExample(map[abi.SectorNumber]string{
		123: "can't acquire read lock",
	})
	addExample(json.RawMessage(`"json raw message"`))
	addExample(map[api.SectorState]int{
		api.SectorState(sealing.Proving): 120,
	})
	addExample([]abi.SectorNumber{123, 124})
	addExample([]storiface.SectorLock{
		{
			Sector: abi.SectorID{Number: 123, Miner: 1000},
			Write:  [storiface.FileTypes]uint{0, 0, 1},
			Read:   [storiface.FileTypes]uint{2, 3, 0},
		},
	})
	// worker specific
	addExample(storiface.AcquireMove)
@@ -339,7 +350,7 @@ func ExampleValue(method string, t, parent reflect.Type) interface{} {
	switch t.Kind() {
	case reflect.Slice:
		out := reflect.New(t).Elem()
		out = reflect.Append(out, reflect.ValueOf(ExampleValue(method, t.Elem(), t)))
		return out.Interface()
	case reflect.Chan:
		return ExampleValue(method, t.Elem(), nil)

View File

@@ -669,6 +669,8 @@ type StorageMinerStruct struct {
	MarketCancelDataTransfer func(p0 context.Context, p1 datatransfer.TransferID, p2 peer.ID, p3 bool) error `perm:"write"`

	MarketDataTransferDiagnostics func(p0 context.Context, p1 peer.ID) (*TransferDiagnostics, error) `perm:"write"`

	MarketDataTransferUpdates func(p0 context.Context) (<-chan DataTransferChannel, error) `perm:"write"`

	MarketGetAsk func(p0 context.Context) (*storagemarket.SignedStorageAsk, error) `perm:"read"`
@@ -809,6 +811,8 @@ type StorageMinerStruct struct {
	StorageFindSector func(p0 context.Context, p1 abi.SectorID, p2 storiface.SectorFileType, p3 abi.SectorSize, p4 bool) ([]stores.SectorStorageInfo, error) `perm:"admin"`

	StorageGetLocks func(p0 context.Context) (storiface.SectorLocks, error) `perm:"admin"`

	StorageInfo func(p0 context.Context, p1 stores.ID) (stores.StorageInfo, error) `perm:"admin"`

	StorageList func(p0 context.Context) (map[stores.ID][]stores.Decl, error) `perm:"admin"`
@@ -3979,6 +3983,17 @@ func (s *StorageMinerStub) MarketCancelDataTransfer(p0 context.Context, p1 datat
	return ErrNotSupported
}

func (s *StorageMinerStruct) MarketDataTransferDiagnostics(p0 context.Context, p1 peer.ID) (*TransferDiagnostics, error) {
	if s.Internal.MarketDataTransferDiagnostics == nil {
		return nil, ErrNotSupported
	}
	return s.Internal.MarketDataTransferDiagnostics(p0, p1)
}

func (s *StorageMinerStub) MarketDataTransferDiagnostics(p0 context.Context, p1 peer.ID) (*TransferDiagnostics, error) {
	return nil, ErrNotSupported
}

func (s *StorageMinerStruct) MarketDataTransferUpdates(p0 context.Context) (<-chan DataTransferChannel, error) {
	if s.Internal.MarketDataTransferUpdates == nil {
		return nil, ErrNotSupported
@@ -4749,6 +4764,17 @@ func (s *StorageMinerStub) StorageFindSector(p0 context.Context, p1 abi.SectorID
	return *new([]stores.SectorStorageInfo), ErrNotSupported
}

func (s *StorageMinerStruct) StorageGetLocks(p0 context.Context) (storiface.SectorLocks, error) {
	if s.Internal.StorageGetLocks == nil {
		return *new(storiface.SectorLocks), ErrNotSupported
	}
	return s.Internal.StorageGetLocks(p0)
}

func (s *StorageMinerStub) StorageGetLocks(p0 context.Context) (storiface.SectorLocks, error) {
	return *new(storiface.SectorLocks), ErrNotSupported
}

func (s *StorageMinerStruct) StorageInfo(p0 context.Context, p1 stores.ID) (stores.StorageInfo, error) {
	if s.Internal.StorageInfo == nil {
		return *new(stores.StorageInfo), ErrNotSupported

View File

@@ -10,6 +10,7 @@ import (
	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/ipfs/go-cid"
	"github.com/ipfs/go-graphsync"
	"github.com/libp2p/go-libp2p-core/peer"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
@@ -53,6 +54,30 @@ type MessageSendSpec struct {
	MaxFee abi.TokenAmount
}

// GraphSyncDataTransfer provides diagnostics on a data transfer happening over graphsync
type GraphSyncDataTransfer struct {
	// GraphSync request id for this transfer
	RequestID graphsync.RequestID
	// Graphsync state for this transfer
	RequestState string
	// If a channel ID is present, indicates whether this is the current graphsync request for this channel
	// (could have changed in a restart)
	IsCurrentChannelRequest bool
	// Data transfer channel ID for this transfer
	ChannelID *datatransfer.ChannelID
	// Data transfer state for this transfer
	ChannelState *DataTransferChannel
	// Diagnostic information about this request -- and unexpected inconsistencies in
	// request state
	Diagnostics []string
}

// TransferDiagnostics give current information about transfers going over graphsync that may be helpful for debugging
type TransferDiagnostics struct {
	ReceivingTransfers []*GraphSyncDataTransfer
	SendingTransfers   []*GraphSyncDataTransfer
}

type DataTransferChannel struct {
	TransferID datatransfer.TransferID
	Status     datatransfer.Status

View File

@@ -25,35 +25,35 @@ func NewAPIBlockstore(cio ChainIO) Blockstore {
	return Adapt(bs) // return an adapted blockstore.
}

func (a *apiBlockstore) DeleteBlock(context.Context, cid.Cid) error {
	return xerrors.New("not supported")
}

func (a *apiBlockstore) Has(ctx context.Context, c cid.Cid) (bool, error) {
	return a.api.ChainHasObj(ctx, c)
}

func (a *apiBlockstore) Get(ctx context.Context, c cid.Cid) (blocks.Block, error) {
	bb, err := a.api.ChainReadObj(ctx, c)
	if err != nil {
		return nil, err
	}
	return blocks.NewBlockWithCid(bb, c)
}

func (a *apiBlockstore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
	bb, err := a.api.ChainReadObj(ctx, c)
	if err != nil {
		return 0, err
	}
	return len(bb), nil
}

func (a *apiBlockstore) Put(context.Context, blocks.Block) error {
	return xerrors.New("not supported")
}

func (a *apiBlockstore) PutMany(context.Context, []blocks.Block) error {
	return xerrors.New("not supported")
}
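
The pattern in this file repeats across the rest of the diff: the IPFS blockstore interfaces gained `context.Context` parameters, so the caller's context now flows into `Has`/`Get`/`Put` instead of the store reaching for `context.TODO()` internally. A minimal caller-side sketch against the lotus `blockstore` package (the in-memory store and helper are just for illustration):

```go
package main

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/blockstore"
	blocks "github.com/ipfs/go-block-format"
)

// fetchLen is a hypothetical helper showing the context-aware call pattern:
// the caller's ctx now flows into Put/Get rather than being synthesized inside the store.
func fetchLen(ctx context.Context, bs blockstore.Blockstore, blk blocks.Block) (int, error) {
	if err := bs.Put(ctx, blk); err != nil {
		return 0, err
	}
	got, err := bs.Get(ctx, blk.Cid())
	if err != nil {
		return 0, err
	}
	return len(got.RawData()), nil
}

func main() {
	ctx := context.Background()
	bs := blockstore.NewMemory() // in-memory store from the lotus blockstore package
	n, _ := fetchLen(ctx, bs, blocks.NewBlock([]byte("hello")))
	fmt.Println(n)
}
```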

View File

@@ -64,7 +64,7 @@ func NewAutobatch(ctx context.Context, backingBs Blockstore, bufferCapacity int)
	return bs
}

func (bs *AutobatchBlockstore) Put(ctx context.Context, blk block.Block) error {
	bs.stateLock.Lock()
	defer bs.stateLock.Unlock()
@@ -94,19 +94,19 @@ func (bs *AutobatchBlockstore) flushWorker(ctx context.Context) {
		case <-bs.flushCh:
			// TODO: check if we _should_ actually flush. We could get a spurious wakeup
			// here.
			putErr := bs.doFlush(ctx, false)
			for putErr != nil {
				select {
				case <-ctx.Done():
					return
				case <-time.After(bs.flushRetryDelay):
					autolog.Errorf("FLUSH ERRORED: %w, retrying after %v", putErr, bs.flushRetryDelay)
					putErr = bs.doFlush(ctx, true)
				}
			}
		case <-ctx.Done():
			// Do one last flush.
			_ = bs.doFlush(ctx, false)
			return
		}
	}
@@ -114,13 +114,13 @@ func (bs *AutobatchBlockstore) flushWorker(ctx context.Context) {
// caller must NOT hold stateLock
// set retryOnly to true to only retry a failed flush and not flush anything new.
func (bs *AutobatchBlockstore) doFlush(ctx context.Context, retryOnly bool) error {
	bs.doFlushLock.Lock()
	defer bs.doFlushLock.Unlock()

	// If we failed to flush last time, try flushing again.
	if bs.flushErr != nil {
		bs.flushErr = bs.backingBs.PutMany(ctx, bs.flushingBatch.blockList)
	}

	// If we failed, or we're _only_ retrying, bail.
@@ -137,7 +137,7 @@ func (bs *AutobatchBlockstore) doFlush(retryOnly bool) error {
	bs.stateLock.Unlock()

	// And try to flush it.
	bs.flushErr = bs.backingBs.PutMany(ctx, bs.flushingBatch.blockList)

	// If we succeeded, reset the batch. Otherwise, we'll try again next time.
	if bs.flushErr == nil {
@@ -151,7 +151,7 @@ func (bs *AutobatchBlockstore) doFlush(retryOnly bool) error {
// caller must NOT hold stateLock
func (bs *AutobatchBlockstore) Flush(ctx context.Context) error {
	return bs.doFlush(ctx, false)
}

func (bs *AutobatchBlockstore) Shutdown(ctx context.Context) error {
@@ -169,9 +169,9 @@ func (bs *AutobatchBlockstore) Shutdown(ctx context.Context) error {
	return bs.flushErr
}

func (bs *AutobatchBlockstore) Get(ctx context.Context, c cid.Cid) (block.Block, error) {
	// may seem backward to check the backingBs first, but that is the likeliest case
	blk, err := bs.backingBs.Get(ctx, c)
	if err == nil {
		return blk, nil
	}
@@ -192,10 +192,10 @@ func (bs *AutobatchBlockstore) Get(c cid.Cid) (block.Block, error) {
		return v, nil
	}

	return bs.Get(ctx, c)
}

func (bs *AutobatchBlockstore) DeleteBlock(context.Context, cid.Cid) error {
	// if we wanted to support this, we would have to:
	// - flush
	// - delete from the backingBs (if present)
@@ -204,13 +204,13 @@ func (bs *AutobatchBlockstore) DeleteBlock(cid.Cid) error {
	return xerrors.New("deletion is unsupported")
}

func (bs *AutobatchBlockstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
	// see note in DeleteBlock()
	return xerrors.New("deletion is unsupported")
}

func (bs *AutobatchBlockstore) Has(ctx context.Context, c cid.Cid) (bool, error) {
	_, err := bs.Get(ctx, c)
	if err == nil {
		return true, nil
	}
@@ -221,8 +221,8 @@ func (bs *AutobatchBlockstore) Has(c cid.Cid) (bool, error) {
	return false, err
}

func (bs *AutobatchBlockstore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
	blk, err := bs.Get(ctx, c)
	if err != nil {
		return 0, err
	}
@@ -230,9 +230,9 @@ func (bs *AutobatchBlockstore) GetSize(c cid.Cid) (int, error) {
	return len(blk.RawData()), nil
}

func (bs *AutobatchBlockstore) PutMany(ctx context.Context, blks []block.Block) error {
	for _, blk := range blks {
		if err := bs.Put(ctx, blk); err != nil {
			return err
		}
	}
@@ -252,8 +252,8 @@ func (bs *AutobatchBlockstore) HashOnRead(enabled bool) {
	bs.backingBs.HashOnRead(enabled)
}

func (bs *AutobatchBlockstore) View(ctx context.Context, cid cid.Cid, callback func([]byte) error) error {
	blk, err := bs.Get(ctx, cid)
	if err != nil {
		return err
	}

View File

@@ -13,19 +13,19 @@ func TestAutobatchBlockstore(t *testing.T) {
	ab := NewAutobatch(ctx, NewMemory(), len(b0.RawData())+len(b1.RawData())-1)

	require.NoError(t, ab.Put(ctx, b0))
	require.NoError(t, ab.Put(ctx, b1))
	require.NoError(t, ab.Put(ctx, b2))

	v0, err := ab.Get(ctx, b0.Cid())
	require.NoError(t, err)
	require.Equal(t, b0.RawData(), v0.RawData())

	v1, err := ab.Get(ctx, b1.Cid())
	require.NoError(t, err)
	require.Equal(t, b1.RawData(), v1.RawData())

	v2, err := ab.Get(ctx, b2.Cid())
	require.NoError(t, err)
	require.Equal(t, b2.RawData(), v2.RawData())

View File

@@ -525,7 +525,7 @@ func (b *Blockstore) Size() (int64, error) {
// View implements blockstore.Viewer, which leverages zero-copy read-only
// access to values.
func (b *Blockstore) View(ctx context.Context, cid cid.Cid, fn func([]byte) error) error {
	if err := b.access(); err != nil {
		return err
	}
@@ -552,7 +552,7 @@ func (b *Blockstore) View(cid cid.Cid, fn func([]byte) error) error {
}

// Has implements Blockstore.Has.
func (b *Blockstore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
	if err := b.access(); err != nil {
		return false, err
	}
@@ -582,7 +582,7 @@ func (b *Blockstore) Has(cid cid.Cid) (bool, error) {
}

// Get implements Blockstore.Get.
func (b *Blockstore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
	if !cid.Defined() {
		return nil, blockstore.ErrNotFound
	}
@@ -619,7 +619,7 @@ func (b *Blockstore) Get(cid cid.Cid) (blocks.Block, error) {
}

// GetSize implements Blockstore.GetSize.
func (b *Blockstore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
	if err := b.access(); err != nil {
		return 0, err
	}
@@ -652,7 +652,7 @@ func (b *Blockstore) GetSize(cid cid.Cid) (int, error) {
}

// Put implements Blockstore.Put.
func (b *Blockstore) Put(ctx context.Context, block blocks.Block) error {
	if err := b.access(); err != nil {
		return err
	}
@@ -691,7 +691,7 @@ func (b *Blockstore) Put(block blocks.Block) error {
}

// PutMany implements Blockstore.PutMany.
func (b *Blockstore) PutMany(ctx context.Context, blocks []blocks.Block) error {
	if err := b.access(); err != nil {
		return err
	}
@@ -755,7 +755,7 @@ func (b *Blockstore) PutMany(blocks []blocks.Block) error {
}

// DeleteBlock implements Blockstore.DeleteBlock.
func (b *Blockstore) DeleteBlock(ctx context.Context, cid cid.Cid) error {
	if err := b.access(); err != nil {
		return err
	}
@@ -774,7 +774,7 @@ func (b *Blockstore) DeleteBlock(cid cid.Cid) error {
	})
}

func (b *Blockstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
	if err := b.access(); err != nil {
		return err
	}

View File

@@ -2,6 +2,7 @@ package badgerbs
import (
	"bytes"
	"context"
	"fmt"
	"io/ioutil"
	"os"
@@ -98,6 +99,7 @@ func openBlockstore(optsSupplier func(path string) Options) func(tb testing.TB,
}

func testMove(t *testing.T, optsF func(string) Options) {
	ctx := context.Background()
	basePath, err := ioutil.TempDir("", "")
	if err != nil {
		t.Fatal(err)
	}
@@ -122,7 +124,7 @@ func testMove(t *testing.T, optsF func(string) Options) {
	// add some blocks
	for i := 0; i < 10; i++ {
		blk := blocks.NewBlock([]byte(fmt.Sprintf("some data %d", i)))
		err := db.Put(ctx, blk)
		if err != nil {
			t.Fatal(err)
		}
@@ -132,7 +134,7 @@ func testMove(t *testing.T, optsF func(string) Options) {
	// delete some of them
	for i := 5; i < 10; i++ {
		c := have[i].Cid()
		err := db.DeleteBlock(ctx, c)
		if err != nil {
			t.Fatal(err)
		}
@@ -145,7 +147,7 @@ func testMove(t *testing.T, optsF func(string) Options) {
	g.Go(func() error {
		for i := 10; i < 1000; i++ {
			blk := blocks.NewBlock([]byte(fmt.Sprintf("some data %d", i)))
			err := db.Put(ctx, blk)
			if err != nil {
				return err
			}
@@ -165,7 +167,7 @@ func testMove(t *testing.T, optsF func(string) Options) {
	// now check that we have all the blocks in have and none in the deleted lists
	checkBlocks := func() {
		for _, blk := range have {
			has, err := db.Has(ctx, blk.Cid())
			if err != nil {
				t.Fatal(err)
			}
@@ -174,7 +176,7 @@ func testMove(t *testing.T, optsF func(string) Options) {
				t.Fatal("missing block")
			}

			blk2, err := db.Get(ctx, blk.Cid())
			if err != nil {
				t.Fatal(err)
			}
@@ -185,7 +187,7 @@ func testMove(t *testing.T, optsF func(string) Options) {
		}

		for _, c := range deleted {
			has, err := db.Has(ctx, c)
			if err != nil {
				t.Fatal(err)
			}

View File

@@ -44,28 +44,31 @@ func (s *Suite) RunTests(t *testing.T, prefix string) {
}

func (s *Suite) TestGetWhenKeyNotPresent(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
	}

	c := cid.NewCidV0(u.Hash([]byte("stuff")))
	bl, err := bs.Get(ctx, c)
	require.Nil(t, bl)
	require.Equal(t, blockstore.ErrNotFound, err)
}

func (s *Suite) TestGetWhenKeyIsNil(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
	}

	_, err := bs.Get(ctx, cid.Undef)
	require.Equal(t, blockstore.ErrNotFound, err)
}

func (s *Suite) TestPutThenGetBlock(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
@@ -73,15 +76,16 @@ func (s *Suite) TestPutThenGetBlock(t *testing.T) {
	orig := blocks.NewBlock([]byte("some data"))

	err := bs.Put(ctx, orig)
	require.NoError(t, err)

	fetched, err := bs.Get(ctx, orig.Cid())
	require.NoError(t, err)
	require.Equal(t, orig.RawData(), fetched.RawData())
}

func (s *Suite) TestHas(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
@@ -89,19 +93,20 @@ func (s *Suite) TestHas(t *testing.T) {
	orig := blocks.NewBlock([]byte("some data"))

	err := bs.Put(ctx, orig)
	require.NoError(t, err)

	ok, err := bs.Has(ctx, orig.Cid())
	require.NoError(t, err)
	require.True(t, ok)

	ok, err = bs.Has(ctx, blocks.NewBlock([]byte("another thing")).Cid())
	require.NoError(t, err)
	require.False(t, ok)
}

func (s *Suite) TestCidv0v1(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
@@ -109,15 +114,17 @@ func (s *Suite) TestCidv0v1(t *testing.T) {
	orig := blocks.NewBlock([]byte("some data"))

	err := bs.Put(ctx, orig)
	require.NoError(t, err)

	fetched, err := bs.Get(ctx, cid.NewCidV1(cid.DagProtobuf, orig.Cid().Hash()))
	require.NoError(t, err)
	require.Equal(t, orig.RawData(), fetched.RawData())
}

func (s *Suite) TestPutThenGetSizeBlock(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
@@ -127,21 +134,21 @@ func (s *Suite) TestPutThenGetSizeBlock(t *testing.T) {
	missingBlock := blocks.NewBlock([]byte("missingBlock"))
	emptyBlock := blocks.NewBlock([]byte{})

	err := bs.Put(ctx, block)
	require.NoError(t, err)

	blockSize, err := bs.GetSize(ctx, block.Cid())
	require.NoError(t, err)
	require.Len(t, block.RawData(), blockSize)

	err = bs.Put(ctx, emptyBlock)
	require.NoError(t, err)

	emptySize, err := bs.GetSize(ctx, emptyBlock.Cid())
	require.NoError(t, err)
	require.Zero(t, emptySize)

	missingSize, err := bs.GetSize(ctx, missingBlock.Cid())
	require.Equal(t, blockstore.ErrNotFound, err)
	require.Equal(t, -1, missingSize)
}
@@ -203,6 +210,7 @@ func (s *Suite) TestDoubleClose(t *testing.T) {
}

func (s *Suite) TestReopenPutGet(t *testing.T) {
	ctx := context.Background()
	bs, path := s.NewBlockstore(t)
	c, ok := bs.(io.Closer)
	if !ok {
@@ -210,7 +218,7 @@ func (s *Suite) TestReopenPutGet(t *testing.T) {
	}

	orig := blocks.NewBlock([]byte("some data"))
	err := bs.Put(ctx, orig)
	require.NoError(t, err)

	err = c.Close()
@@ -219,7 +227,7 @@ func (s *Suite) TestReopenPutGet(t *testing.T) {
	bs, err = s.OpenBlockstore(t, path)
	require.NoError(t, err)

	fetched, err := bs.Get(ctx, orig.Cid())
	require.NoError(t, err)
	require.Equal(t, orig.RawData(), fetched.RawData())
@@ -228,6 +236,7 @@ func (s *Suite) TestReopenPutGet(t *testing.T) {
}

func (s *Suite) TestPutMany(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
@@ -238,15 +247,15 @@ func (s *Suite) TestPutMany(t *testing.T) {
		blocks.NewBlock([]byte("foo2")),
		blocks.NewBlock([]byte("foo3")),
	}
	err := bs.PutMany(ctx, blks)
	require.NoError(t, err)

	for _, blk := range blks {
		fetched, err := bs.Get(ctx, blk.Cid())
		require.NoError(t, err)
		require.Equal(t, blk.RawData(), fetched.RawData())

		ok, err := bs.Has(ctx, blk.Cid())
		require.NoError(t, err)
		require.True(t, ok)
	}
@@ -259,6 +268,7 @@ func (s *Suite) TestPutMany(t *testing.T) {
}

func (s *Suite) TestDelete(t *testing.T) {
	ctx := context.Background()
	bs, _ := s.NewBlockstore(t)
	if c, ok := bs.(io.Closer); ok {
		defer func() { require.NoError(t, c.Close()) }()
@@ -269,10 +279,10 @@ func (s *Suite) TestDelete(t *testing.T) {
		blocks.NewBlock([]byte("foo2")),
		blocks.NewBlock([]byte("foo3")),
	}
	err := bs.PutMany(ctx, blks)
	require.NoError(t, err)

	err = bs.DeleteBlock(ctx, blks[1].Cid())
	require.NoError(t, err)

	ch, err := bs.AllKeysChan(context.Background())
@@ -285,17 +295,17 @@ func (s *Suite) TestDelete(t *testing.T) {
		cid.NewCidV1(cid.Raw, blks[2].Cid().Hash()),
	})

	has, err := bs.Has(ctx, blks[1].Cid())
	require.NoError(t, err)
	require.False(t, has)
}

func insertBlocks(t *testing.T, bs blockstore.BasicBlockstore, count int) []cid.Cid {
	ctx := context.Background()
	keys := make([]cid.Cid, count)
	for i := 0; i < count; i++ {
		block := blocks.NewBlock([]byte(fmt.Sprintf("some data %d", i)))
		err := bs.Put(ctx, block)
		require.NoError(t, err)
		// NewBlock assigns a CIDv0; we convert it to CIDv1 because that's what
		// the store returns.

View File

@@ -1,6 +1,8 @@
package blockstore
import (
	"context"

	cid "github.com/ipfs/go-cid"
	ds "github.com/ipfs/go-datastore"
	logging "github.com/ipfs/go-log/v2"
@@ -27,7 +29,7 @@ type BasicBlockstore = blockstore.Blockstore
type Viewer = blockstore.Viewer

type BatchDeleter interface {
	DeleteMany(ctx context.Context, cids []cid.Cid) error
}

// BlockstoreIterator is a trait for efficient iteration
@@ -93,17 +95,17 @@ type adaptedBlockstore struct {
var _ Blockstore = (*adaptedBlockstore)(nil)

func (a *adaptedBlockstore) View(ctx context.Context, cid cid.Cid, callback func([]byte) error) error {
	blk, err := a.Get(ctx, cid)
	if err != nil {
		return err
	}

	return callback(blk.RawData())
}

func (a *adaptedBlockstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
	for _, cid := range cids {
		err := a.DeleteBlock(ctx, cid)
		if err != nil {
			return err
		}

View File

@@ -88,34 +88,34 @@ func (bs *BufferedBlockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid,
	return out, nil
}

func (bs *BufferedBlockstore) DeleteBlock(ctx context.Context, c cid.Cid) error {
	if err := bs.read.DeleteBlock(ctx, c); err != nil {
		return err
	}

	return bs.write.DeleteBlock(ctx, c)
}

func (bs *BufferedBlockstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
	if err := bs.read.DeleteMany(ctx, cids); err != nil {
		return err
	}

	return bs.write.DeleteMany(ctx, cids)
}

func (bs *BufferedBlockstore) View(ctx context.Context, c cid.Cid, callback func([]byte) error) error {
	// both stores are viewable.
	if err := bs.write.View(ctx, c, callback); err == ErrNotFound {
		// not found in write blockstore; fall through.
	} else {
		return err // propagate errors, or nil, i.e. found.
	}

	return bs.read.View(ctx, c, callback)
}

func (bs *BufferedBlockstore) Get(ctx context.Context, c cid.Cid) (block.Block, error) {
	if out, err := bs.write.Get(ctx, c); err != nil {
		if err != ErrNotFound {
			return nil, err
		}
@@ -123,20 +123,20 @@ func (bs *BufferedBlockstore) Get(c cid.Cid) (block.Block, error) {
		return out, nil
	}

	return bs.read.Get(ctx, c)
}

func (bs *BufferedBlockstore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
	s, err := bs.read.GetSize(ctx, c)
	if err == ErrNotFound || s == 0 {
		return bs.write.GetSize(ctx, c)
	}

	return s, err
}

func (bs *BufferedBlockstore) Put(ctx context.Context, blk block.Block) error {
	has, err := bs.read.Has(ctx, blk.Cid()) // TODO: consider dropping this check
	if err != nil {
		return err
	}
@@ -145,11 +145,11 @@ func (bs *BufferedBlockstore) Put(blk block.Block) error {
		return nil
	}

	return bs.write.Put(ctx, blk)
}

func (bs *BufferedBlockstore) Has(ctx context.Context, c cid.Cid) (bool, error) {
	has, err := bs.write.Has(ctx, c)
	if err != nil {
		return false, err
	}
@@ -157,7 +157,7 @@ func (bs *BufferedBlockstore) Has(c cid.Cid) (bool, error) {
		return true, nil
	}

	return bs.read.Has(ctx, c)
}

func (bs *BufferedBlockstore) HashOnRead(hor bool) {
@@ -165,8 +165,8 @@ func (bs *BufferedBlockstore) HashOnRead(hor bool) {
	bs.write.HashOnRead(hor)
}

func (bs *BufferedBlockstore) PutMany(ctx context.Context, blks []block.Block) error {
	return bs.write.PutMany(ctx, blks)
}

func (bs *BufferedBlockstore) Read() Blockstore {

View File

@@ -18,39 +18,39 @@ func NewDiscardStore(bs Blockstore) Blockstore {
	return &discardstore{bs: bs}
}

func (b *discardstore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
	return b.bs.Has(ctx, cid)
}

func (b *discardstore) HashOnRead(hor bool) {
	b.bs.HashOnRead(hor)
}

func (b *discardstore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
	return b.bs.Get(ctx, cid)
}

func (b *discardstore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
	return b.bs.GetSize(ctx, cid)
}

func (b *discardstore) View(ctx context.Context, cid cid.Cid, f func([]byte) error) error {
	return b.bs.View(ctx, cid, f)
}

func (b *discardstore) Put(ctx context.Context, blk blocks.Block) error {
	return nil
}

func (b *discardstore) PutMany(ctx context.Context, blks []blocks.Block) error {
	return nil
}

func (b *discardstore) DeleteBlock(ctx context.Context, cid cid.Cid) error {
	return nil
}

func (b *discardstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
	return nil
}

View File

@ -71,14 +71,14 @@ func (fbs *FallbackStore) getFallback(c cid.Cid) (blocks.Block, error) {
// chain bitswap puts blocks in temp blockstore which is cleaned up // chain bitswap puts blocks in temp blockstore which is cleaned up
// every few min (to drop any messages we fetched but don't want) // every few min (to drop any messages we fetched but don't want)
// in this case we want to keep this block around // in this case we want to keep this block around
if err := fbs.Put(b); err != nil { if err := fbs.Put(ctx, b); err != nil {
return nil, xerrors.Errorf("persisting fallback-fetched block: %w", err) return nil, xerrors.Errorf("persisting fallback-fetched block: %w", err)
} }
return b, nil return b, nil
} }
func (fbs *FallbackStore) Get(c cid.Cid) (blocks.Block, error) { func (fbs *FallbackStore) Get(ctx context.Context, c cid.Cid) (blocks.Block, error) {
b, err := fbs.Blockstore.Get(c) b, err := fbs.Blockstore.Get(ctx, c)
switch err { switch err {
case nil: case nil:
return b, nil return b, nil
@ -89,8 +89,8 @@ func (fbs *FallbackStore) Get(c cid.Cid) (blocks.Block, error) {
} }
} }
func (fbs *FallbackStore) GetSize(c cid.Cid) (int, error) { func (fbs *FallbackStore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
sz, err := fbs.Blockstore.GetSize(c) sz, err := fbs.Blockstore.GetSize(ctx, c)
switch err { switch err {
case nil: case nil:
return sz, nil return sz, nil

View File

@ -38,7 +38,7 @@ func decodeCid(cid cid.Cid) (inline bool, data []byte, err error) {
return false, nil, err return false, nil, err
} }
func (b *idstore) Has(cid cid.Cid) (bool, error) { func (b *idstore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
inline, _, err := decodeCid(cid) inline, _, err := decodeCid(cid)
if err != nil { if err != nil {
return false, xerrors.Errorf("error decoding Cid: %w", err) return false, xerrors.Errorf("error decoding Cid: %w", err)
@ -48,10 +48,10 @@ func (b *idstore) Has(cid cid.Cid) (bool, error) {
return true, nil return true, nil
} }
return b.bs.Has(cid) return b.bs.Has(ctx, cid)
} }
func (b *idstore) Get(cid cid.Cid) (blocks.Block, error) { func (b *idstore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
inline, data, err := decodeCid(cid) inline, data, err := decodeCid(cid)
if err != nil { if err != nil {
return nil, xerrors.Errorf("error decoding Cid: %w", err) return nil, xerrors.Errorf("error decoding Cid: %w", err)
@ -61,10 +61,10 @@ func (b *idstore) Get(cid cid.Cid) (blocks.Block, error) {
return blocks.NewBlockWithCid(data, cid) return blocks.NewBlockWithCid(data, cid)
} }
return b.bs.Get(cid) return b.bs.Get(ctx, cid)
} }
func (b *idstore) GetSize(cid cid.Cid) (int, error) { func (b *idstore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
inline, data, err := decodeCid(cid) inline, data, err := decodeCid(cid)
if err != nil { if err != nil {
return 0, xerrors.Errorf("error decoding Cid: %w", err) return 0, xerrors.Errorf("error decoding Cid: %w", err)
@ -74,10 +74,10 @@ func (b *idstore) GetSize(cid cid.Cid) (int, error) {
return len(data), err return len(data), err
} }
return b.bs.GetSize(cid) return b.bs.GetSize(ctx, cid)
} }
func (b *idstore) View(cid cid.Cid, cb func([]byte) error) error { func (b *idstore) View(ctx context.Context, cid cid.Cid, cb func([]byte) error) error {
inline, data, err := decodeCid(cid) inline, data, err := decodeCid(cid)
if err != nil { if err != nil {
return xerrors.Errorf("error decoding Cid: %w", err) return xerrors.Errorf("error decoding Cid: %w", err)
@ -87,10 +87,10 @@ func (b *idstore) View(cid cid.Cid, cb func([]byte) error) error {
return cb(data) return cb(data)
} }
return b.bs.View(cid, cb) return b.bs.View(ctx, cid, cb)
} }
func (b *idstore) Put(blk blocks.Block) error { func (b *idstore) Put(ctx context.Context, blk blocks.Block) error {
inline, _, err := decodeCid(blk.Cid()) inline, _, err := decodeCid(blk.Cid())
if err != nil { if err != nil {
return xerrors.Errorf("error decoding Cid: %w", err) return xerrors.Errorf("error decoding Cid: %w", err)
@ -100,10 +100,10 @@ func (b *idstore) Put(blk blocks.Block) error {
return nil return nil
} }
return b.bs.Put(blk) return b.bs.Put(ctx, blk)
} }
func (b *idstore) PutMany(blks []blocks.Block) error { func (b *idstore) PutMany(ctx context.Context, blks []blocks.Block) error {
toPut := make([]blocks.Block, 0, len(blks)) toPut := make([]blocks.Block, 0, len(blks))
for _, blk := range blks { for _, blk := range blks {
inline, _, err := decodeCid(blk.Cid()) inline, _, err := decodeCid(blk.Cid())
@ -118,13 +118,13 @@ func (b *idstore) PutMany(blks []blocks.Block) error {
} }
if len(toPut) > 0 { if len(toPut) > 0 {
return b.bs.PutMany(toPut) return b.bs.PutMany(ctx, toPut)
} }
return nil return nil
} }
func (b *idstore) DeleteBlock(cid cid.Cid) error { func (b *idstore) DeleteBlock(ctx context.Context, cid cid.Cid) error {
inline, _, err := decodeCid(cid) inline, _, err := decodeCid(cid)
if err != nil { if err != nil {
return xerrors.Errorf("error decoding Cid: %w", err) return xerrors.Errorf("error decoding Cid: %w", err)
@ -134,10 +134,10 @@ func (b *idstore) DeleteBlock(cid cid.Cid) error {
return nil return nil
} }
return b.bs.DeleteBlock(cid) return b.bs.DeleteBlock(ctx, cid)
} }
func (b *idstore) DeleteMany(cids []cid.Cid) error { func (b *idstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
toDelete := make([]cid.Cid, 0, len(cids)) toDelete := make([]cid.Cid, 0, len(cids))
for _, cid := range cids { for _, cid := range cids {
inline, _, err := decodeCid(cid) inline, _, err := decodeCid(cid)
@ -152,7 +152,7 @@ func (b *idstore) DeleteMany(cids []cid.Cid) error {
} }
if len(toDelete) > 0 { if len(toDelete) > 0 {
return b.bs.DeleteMany(toDelete) return b.bs.DeleteMany(ctx, toDelete)
} }
return nil return nil

View File

@ -79,12 +79,12 @@ func NewRemoteIPFSBlockstore(ctx context.Context, maddr multiaddr.Multiaddr, onl
return Adapt(bs), nil return Adapt(bs), nil
} }
func (i *IPFSBlockstore) DeleteBlock(cid cid.Cid) error { func (i *IPFSBlockstore) DeleteBlock(ctx context.Context, cid cid.Cid) error {
return xerrors.Errorf("not supported") return xerrors.Errorf("not supported")
} }
func (i *IPFSBlockstore) Has(cid cid.Cid) (bool, error) { func (i *IPFSBlockstore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
_, err := i.offlineAPI.Block().Stat(i.ctx, path.IpldPath(cid)) _, err := i.offlineAPI.Block().Stat(ctx, path.IpldPath(cid))
if err != nil { if err != nil {
// The underlying client is running in Offline mode. // The underlying client is running in Offline mode.
// Stat() will fail with an err if the block isn't in the // Stat() will fail with an err if the block isn't in the
@ -99,8 +99,8 @@ func (i *IPFSBlockstore) Has(cid cid.Cid) (bool, error) {
return true, nil return true, nil
} }
func (i *IPFSBlockstore) Get(cid cid.Cid) (blocks.Block, error) { func (i *IPFSBlockstore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
rd, err := i.api.Block().Get(i.ctx, path.IpldPath(cid)) rd, err := i.api.Block().Get(ctx, path.IpldPath(cid))
if err != nil { if err != nil {
return nil, xerrors.Errorf("getting ipfs block: %w", err) return nil, xerrors.Errorf("getting ipfs block: %w", err)
} }
@ -113,8 +113,8 @@ func (i *IPFSBlockstore) Get(cid cid.Cid) (blocks.Block, error) {
return blocks.NewBlockWithCid(data, cid) return blocks.NewBlockWithCid(data, cid)
} }
func (i *IPFSBlockstore) GetSize(cid cid.Cid) (int, error) { func (i *IPFSBlockstore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
st, err := i.api.Block().Stat(i.ctx, path.IpldPath(cid)) st, err := i.api.Block().Stat(ctx, path.IpldPath(cid))
if err != nil { if err != nil {
return 0, xerrors.Errorf("getting ipfs block: %w", err) return 0, xerrors.Errorf("getting ipfs block: %w", err)
} }
@ -122,23 +122,23 @@ func (i *IPFSBlockstore) GetSize(cid cid.Cid) (int, error) {
return st.Size(), nil return st.Size(), nil
} }
func (i *IPFSBlockstore) Put(block blocks.Block) error { func (i *IPFSBlockstore) Put(ctx context.Context, block blocks.Block) error {
mhd, err := multihash.Decode(block.Cid().Hash()) mhd, err := multihash.Decode(block.Cid().Hash())
if err != nil { if err != nil {
return err return err
} }
_, err = i.api.Block().Put(i.ctx, bytes.NewReader(block.RawData()), _, err = i.api.Block().Put(ctx, bytes.NewReader(block.RawData()),
options.Block.Hash(mhd.Code, mhd.Length), options.Block.Hash(mhd.Code, mhd.Length),
options.Block.Format(cid.CodecToStr[block.Cid().Type()])) options.Block.Format(cid.CodecToStr[block.Cid().Type()]))
return err return err
} }
func (i *IPFSBlockstore) PutMany(blocks []blocks.Block) error { func (i *IPFSBlockstore) PutMany(ctx context.Context, blocks []blocks.Block) error {
// TODO: could be done in parallel // TODO: could be done in parallel
for _, block := range blocks { for _, block := range blocks {
if err := i.Put(block); err != nil { if err := i.Put(ctx, block); err != nil {
return err return err
} }
} }

View File

@ -15,24 +15,24 @@ func NewMemory() MemBlockstore {
// MemBlockstore is a terminal blockstore that keeps blocks in memory. // MemBlockstore is a terminal blockstore that keeps blocks in memory.
type MemBlockstore map[cid.Cid]blocks.Block type MemBlockstore map[cid.Cid]blocks.Block
func (m MemBlockstore) DeleteBlock(k cid.Cid) error { func (m MemBlockstore) DeleteBlock(ctx context.Context, k cid.Cid) error {
delete(m, k) delete(m, k)
return nil return nil
} }
func (m MemBlockstore) DeleteMany(ks []cid.Cid) error { func (m MemBlockstore) DeleteMany(ctx context.Context, ks []cid.Cid) error {
for _, k := range ks { for _, k := range ks {
delete(m, k) delete(m, k)
} }
return nil return nil
} }
func (m MemBlockstore) Has(k cid.Cid) (bool, error) { func (m MemBlockstore) Has(ctx context.Context, k cid.Cid) (bool, error) {
_, ok := m[k] _, ok := m[k]
return ok, nil return ok, nil
} }
func (m MemBlockstore) View(k cid.Cid, callback func([]byte) error) error { func (m MemBlockstore) View(ctx context.Context, k cid.Cid, callback func([]byte) error) error {
b, ok := m[k] b, ok := m[k]
if !ok { if !ok {
return ErrNotFound return ErrNotFound
@ -40,7 +40,7 @@ func (m MemBlockstore) View(k cid.Cid, callback func([]byte) error) error {
return callback(b.RawData()) return callback(b.RawData())
} }
func (m MemBlockstore) Get(k cid.Cid) (blocks.Block, error) { func (m MemBlockstore) Get(ctx context.Context, k cid.Cid) (blocks.Block, error) {
b, ok := m[k] b, ok := m[k]
if !ok { if !ok {
return nil, ErrNotFound return nil, ErrNotFound
@ -49,7 +49,7 @@ func (m MemBlockstore) Get(k cid.Cid) (blocks.Block, error) {
} }
// GetSize returns the CIDs mapped BlockSize // GetSize returns the CIDs mapped BlockSize
func (m MemBlockstore) GetSize(k cid.Cid) (int, error) { func (m MemBlockstore) GetSize(ctx context.Context, k cid.Cid) (int, error) {
b, ok := m[k] b, ok := m[k]
if !ok { if !ok {
return 0, ErrNotFound return 0, ErrNotFound
@ -58,7 +58,7 @@ func (m MemBlockstore) GetSize(k cid.Cid) (int, error) {
} }
// Put puts a given block to the underlying datastore // Put puts a given block to the underlying datastore
func (m MemBlockstore) Put(b blocks.Block) error { func (m MemBlockstore) Put(ctx context.Context, b blocks.Block) error {
// Convert to a basic block for safety, but try to reuse the existing // Convert to a basic block for safety, but try to reuse the existing
// block if it's already a basic block. // block if it's already a basic block.
k := b.Cid() k := b.Cid()
@ -76,9 +76,9 @@ func (m MemBlockstore) Put(b blocks.Block) error {
// PutMany puts a slice of blocks at the same time using batching // PutMany puts a slice of blocks at the same time using batching
// capabilities of the underlying datastore whenever possible. // capabilities of the underlying datastore whenever possible.
func (m MemBlockstore) PutMany(bs []blocks.Block) error { func (m MemBlockstore) PutMany(ctx context.Context, bs []blocks.Block) error {
for _, b := range bs { for _, b := range bs {
_ = m.Put(b) // can't fail _ = m.Put(ctx, b) // can't fail
} }
return nil return nil
} }

View File

@ -49,10 +49,11 @@ These are options in the `[Chainstore.Splitstore]` section of the configuration:
blockstore and discards writes; this is necessary to support syncing from a snapshot. blockstore and discards writes; this is necessary to support syncing from a snapshot.
- `MarkSetType` -- specifies the type of markset to use during compaction. - `MarkSetType` -- specifies the type of markset to use during compaction.
The markset is the data structure used by compaction/gc to track live objects. The markset is the data structure used by compaction/gc to track live objects.
The default value is `"map"`, which will use an in-memory map; if you are limited
in memory (or indeed see compaction run out of memory), you can also specify
`"badger"`, which will use a disk-backed markset using badger. This will use
much less memory, but will also make compaction slower. The default value is `"badger"`, which will use a disk-backed markset using badger.
If you have a lot of memory (48G or more) you can also use `"map"`, which will use
an in-memory markset, speeding up compaction at the cost of higher memory usage
(see the config sketch after this list).
Note: If you are using a VPS with a network volume, you need to provision at least
3000 IOPs with the badger markset.
- `HotStoreMessageRetention` -- specifies how many finalities, beyond the 4 - `HotStoreMessageRetention` -- specifies how many finalities, beyond the 4
finalities maintained by default, to maintain messages and message receipts in the finalities maintained by default, to maintain messages and message receipts in the
hotstore. This is useful for assistive nodes that want to support syncing for other hotstore. This is useful for assistive nodes that want to support syncing for other
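For reference, a minimal sketch of how these options might look in the Lotus `config.toml`. The `MarkSetType` and `HotStoreMessageRetention` keys come from the descriptions above; `EnableSplitstore` and `ColdStoreType` are assumed surrounding keys, and all values are illustrative, not recommendations:

```toml
[Chainstore]
  # assumed key: the splitstore is only active when enabled here
  EnableSplitstore = true

  [Chainstore.Splitstore]
    # assumed key: "universal" keeps a full coldstore; "discard" drops cold
    # writes, which is what supports syncing from a snapshot
    ColdStoreType = "universal"
    # "badger" (the default) uses a disk-backed markset; "map" trades higher
    # memory usage for faster compaction
    MarkSetType = "badger"
    # extra finalities of messages/receipts to keep in the hotstore, beyond
    # the 4 finalities maintained by default (illustrative value)
    HotStoreMessageRetention = 24
```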
@ -105,6 +106,12 @@ Compaction works transactionally with the following algorithm:
- We delete in small batches taking a lock; each batch is checked again for marks, from the concurrent transactional mark, so as to never delete anything live - We delete in small batches taking a lock; each batch is checked again for marks, from the concurrent transactional mark, so as to never delete anything live
- We then end the transaction and compact/gc the hotstore. - We then end the transaction and compact/gc the hotstore.
As of [#8008](https://github.com/filecoin-project/lotus/pull/8008) the compaction algorithm has been
modified to eliminate sorting and maintain the cold object set on disk. This drastically reduces
memory usage; in fact, when using badger as the markset, compaction uses very little memory, and
it should now be possible to run splitstore with 32GB of RAM or less without danger of running out of
memory during compaction.
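To make the batched purge concrete, here is a hedged Go sketch of the delete step described above: each small batch is re-checked against the concurrent transactional markset while holding the lock, so nothing marked live is ever deleted. The names (`liveSet`, `purgeBatch`, `del`) are illustrative stand-ins, not the actual splitstore internals:

```go
package sketch

import (
	"sync"

	cid "github.com/ipfs/go-cid"
)

// liveSet is a minimal stand-in for the MarkSet used during compaction.
type liveSet interface {
	Has(cid.Cid) (bool, error)
}

// purgeBatch deletes one small batch of cold cids while holding the lock,
// skipping anything the concurrent transactional mark has marked live since
// the batch was collected.
func purgeBatch(batch []cid.Cid, live liveSet, mx *sync.Mutex, del func([]cid.Cid) error) error {
	mx.Lock()
	defer mx.Unlock()

	dead := make([]cid.Cid, 0, len(batch))
	for _, c := range batch {
		marked, err := live.Has(c)
		if err != nil {
			return err
		}
		if !marked {
			dead = append(dead, c)
		}
	}
	if len(dead) == 0 {
		return nil
	}
	return del(dead)
}
```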
## Garbage Collection ## Garbage Collection
TBD -- see [#6577](https://github.com/filecoin-project/lotus/issues/6577) TBD -- see [#6577](https://github.com/filecoin-project/lotus/issues/6577)

View File

@ -0,0 +1,118 @@
package splitstore
import (
"bufio"
"io"
"os"
"golang.org/x/xerrors"
cid "github.com/ipfs/go-cid"
mh "github.com/multiformats/go-multihash"
)
type Checkpoint struct {
file *os.File
buf *bufio.Writer
}
func NewCheckpoint(path string) (*Checkpoint, error) {
file, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY|os.O_SYNC, 0644)
if err != nil {
return nil, xerrors.Errorf("error creating checkpoint: %w", err)
}
buf := bufio.NewWriter(file)
return &Checkpoint{
file: file,
buf: buf,
}, nil
}
func OpenCheckpoint(path string) (*Checkpoint, cid.Cid, error) {
filein, err := os.Open(path)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("error opening checkpoint for reading: %w", err)
}
defer filein.Close() //nolint:errcheck
bufin := bufio.NewReader(filein)
start, err := readRawCid(bufin, nil)
if err != nil && err != io.EOF {
return nil, cid.Undef, xerrors.Errorf("error reading cid from checkpoint: %w", err)
}
fileout, err := os.OpenFile(path, os.O_WRONLY|os.O_SYNC, 0644)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("error opening checkpoint for writing: %w", err)
}
bufout := bufio.NewWriter(fileout)
return &Checkpoint{
file: fileout,
buf: bufout,
}, start, nil
}
func (cp *Checkpoint) Set(c cid.Cid) error {
if _, err := cp.file.Seek(0, io.SeekStart); err != nil {
return xerrors.Errorf("error seeking beginning of checkpoint: %w", err)
}
if err := writeRawCid(cp.buf, c, true); err != nil {
return xerrors.Errorf("error writing cid to checkpoint: %w", err)
}
return nil
}
func (cp *Checkpoint) Close() error {
if cp.file == nil {
return nil
}
err := cp.file.Close()
cp.file = nil
cp.buf = nil
return err
}
func readRawCid(buf *bufio.Reader, hbuf []byte) (cid.Cid, error) {
sz, err := buf.ReadByte()
if err != nil {
return cid.Undef, err // don't wrap EOF as it is not an error here
}
if hbuf == nil {
hbuf = make([]byte, int(sz))
} else {
hbuf = hbuf[:int(sz)]
}
if _, err := io.ReadFull(buf, hbuf); err != nil {
return cid.Undef, xerrors.Errorf("error reading hash: %w", err) // wrap EOF, it's corrupt
}
hash, err := mh.Cast(hbuf)
if err != nil {
return cid.Undef, xerrors.Errorf("error casting multihash: %w", err)
}
return cid.NewCidV1(cid.Raw, hash), nil
}
func writeRawCid(buf *bufio.Writer, c cid.Cid, flush bool) error {
hash := c.Hash()
if err := buf.WriteByte(byte(len(hash))); err != nil {
return err
}
if _, err := buf.Write(hash); err != nil {
return err
}
if flush {
return buf.Flush()
}
return nil
}
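A short usage sketch of the checkpoint API defined above: `Set` records the last processed cid (seeking to the start of the file and flushing), and `OpenCheckpoint` returns it after a restart so work can resume. The import path, file path, and function name are assumptions for illustration:

```go
package sketch

import (
	"fmt"

	cid "github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/blockstore/splitstore"
)

// resumeExample shows the intended crash-recovery round trip.
func resumeExample(path string, work []cid.Cid) error {
	cp, err := splitstore.NewCheckpoint(path)
	if err != nil {
		return err
	}
	for _, c := range work {
		// ... process c ...
		if err := cp.Set(c); err != nil {
			return err
		}
	}
	if err := cp.Close(); err != nil {
		return err
	}

	// After an abnormal termination, recover the last flushed cid.
	cp, start, err := splitstore.OpenCheckpoint(path)
	if err != nil {
		return err
	}
	defer cp.Close() //nolint:errcheck
	fmt.Println("resuming after", start) // cid.Undef if the file was empty
	return nil
}
```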

View File

@ -0,0 +1,147 @@
package splitstore
import (
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/ipfs/go-cid"
"github.com/multiformats/go-multihash"
)
func TestCheckpoint(t *testing.T) {
dir, err := ioutil.TempDir("", "checkpoint.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(dir)
})
path := filepath.Join(dir, "checkpoint")
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
cp, err := NewCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if err := cp.Set(k1); err != nil {
t.Fatal(err)
}
if err := cp.Set(k2); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err := OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k2) {
t.Fatalf("expected start to be %s; got %s", k2, start)
}
if err := cp.Set(k3); err != nil {
t.Fatal(err)
}
if err := cp.Set(k4); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k4) {
t.Fatalf("expected start to be %s; got %s", k4, start)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
// also test correct operation with an empty checkpoint
cp, err = NewCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if start.Defined() {
t.Fatal("expected start to be undefined")
}
if err := cp.Set(k1); err != nil {
t.Fatal(err)
}
if err := cp.Set(k2); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k2) {
t.Fatalf("expected start to be %s; got %s", k2, start)
}
if err := cp.Set(k3); err != nil {
t.Fatal(err)
}
if err := cp.Set(k4); err != nil {
t.Fatal(err)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
cp, start, err = OpenCheckpoint(path)
if err != nil {
t.Fatal(err)
}
if !start.Equals(k4) {
t.Fatalf("expected start to be %s; got %s", k4, start)
}
if err := cp.Close(); err != nil {
t.Fatal(err)
}
}

View File

@ -0,0 +1,102 @@
package splitstore
import (
"bufio"
"io"
"os"
"golang.org/x/xerrors"
cid "github.com/ipfs/go-cid"
)
type ColdSetWriter struct {
file *os.File
buf *bufio.Writer
}
type ColdSetReader struct {
file *os.File
buf *bufio.Reader
}
func NewColdSetWriter(path string) (*ColdSetWriter, error) {
file, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
return nil, xerrors.Errorf("error creating coldset: %w", err)
}
buf := bufio.NewWriter(file)
return &ColdSetWriter{
file: file,
buf: buf,
}, nil
}
func NewColdSetReader(path string) (*ColdSetReader, error) {
file, err := os.Open(path)
if err != nil {
return nil, xerrors.Errorf("error opening coldset: %w", err)
}
buf := bufio.NewReader(file)
return &ColdSetReader{
file: file,
buf: buf,
}, nil
}
func (s *ColdSetWriter) Write(c cid.Cid) error {
return writeRawCid(s.buf, c, false)
}
func (s *ColdSetWriter) Close() error {
if s.file == nil {
return nil
}
err1 := s.buf.Flush()
err2 := s.file.Close()
s.buf = nil
s.file = nil
if err1 != nil {
return err1
}
return err2
}
func (s *ColdSetReader) ForEach(f func(cid.Cid) error) error {
hbuf := make([]byte, 256)
for {
next, err := readRawCid(s.buf, hbuf)
if err != nil {
if err == io.EOF {
return nil
}
return xerrors.Errorf("error reading coldset: %w", err)
}
if err := f(next); err != nil {
return err
}
}
}
func (s *ColdSetReader) Reset() error {
_, err := s.file.Seek(0, io.SeekStart)
return err
}
func (s *ColdSetReader) Close() error {
if s.file == nil {
return nil
}
err := s.file.Close()
s.file = nil
s.buf = nil
return err
}
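An illustrative round trip through the on-disk coldset above: write the cold cids once, then read them back twice, mirroring the two compaction passes (move to the coldstore, then purge from the hotstore) without holding the set in memory. The import path and function name are assumptions:

```go
package sketch

import (
	cid "github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/blockstore/splitstore"
)

func coldsetRoundTrip(path string, cold []cid.Cid) error {
	cw, err := splitstore.NewColdSetWriter(path)
	if err != nil {
		return err
	}
	for _, c := range cold {
		if err := cw.Write(c); err != nil {
			return err
		}
	}
	if err := cw.Close(); err != nil { // Close flushes the write buffer
		return err
	}

	cr, err := splitstore.NewColdSetReader(path)
	if err != nil {
		return err
	}
	defer cr.Close() //nolint:errcheck

	// First pass, e.g. moving cold objects into the coldstore.
	if err := cr.ForEach(func(c cid.Cid) error { return nil }); err != nil {
		return err
	}
	// Rewind and iterate again, e.g. purging the same objects from the hotstore.
	if err := cr.Reset(); err != nil {
		return err
	}
	return cr.ForEach(func(c cid.Cid) error { return nil })
}
```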

View File

@ -0,0 +1,99 @@
package splitstore
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/ipfs/go-cid"
"github.com/multiformats/go-multihash"
)
func TestColdSet(t *testing.T) {
dir, err := ioutil.TempDir("", "coldset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(dir)
})
path := filepath.Join(dir, "coldset")
makeCid := func(i int) cid.Cid {
h, err := multihash.Sum([]byte(fmt.Sprintf("cid.%d", i)), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
const count = 1000
cids := make([]cid.Cid, 0, count)
for i := 0; i < count; i++ {
cids = append(cids, makeCid(i))
}
cw, err := NewColdSetWriter(path)
if err != nil {
t.Fatal(err)
}
for _, c := range cids {
if err := cw.Write(c); err != nil {
t.Fatal(err)
}
}
if err := cw.Close(); err != nil {
t.Fatal(err)
}
cr, err := NewColdSetReader(path)
if err != nil {
t.Fatal(err)
}
index := 0
err = cr.ForEach(func(c cid.Cid) error {
if index >= count {
t.Fatal("too many cids")
}
if !c.Equals(cids[index]) {
t.Fatalf("wrong cid %d; expected %s but got %s", index, cids[index], c)
}
index++
return nil
})
if err != nil {
t.Fatal(err)
}
if err := cr.Reset(); err != nil {
t.Fatal(err)
}
index = 0
err = cr.ForEach(func(c cid.Cid) error {
if index >= count {
t.Fatal("too many cids")
}
if !c.Equals(cids[index]) {
t.Fatalf("wrong cid; expected %s but got %s", cids[index], c)
}
index++
return nil
})
if err != nil {
t.Fatal(err)
}
}

View File

@ -10,39 +10,36 @@ import (
var errMarkSetClosed = errors.New("markset closed") var errMarkSetClosed = errors.New("markset closed")
// MarkSet is a utility to keep track of seen CID, and later query for them. // MarkSet is an interface for tracking CIDs during chain and object walks
//
// * If the expected dataset is large, it can be backed by a datastore (e.g. bbolt).
// * If a probabilistic result is acceptable, it can be backed by a bloom filter
type MarkSet interface { type MarkSet interface {
ObjectVisitor
Mark(cid.Cid) error Mark(cid.Cid) error
MarkMany([]cid.Cid) error
Has(cid.Cid) (bool, error) Has(cid.Cid) (bool, error)
Close() error Close() error
SetConcurrent()
}
type MarkSetVisitor interface { // BeginCriticalSection ensures that the markset is persisted to disk for recovery in case
MarkSet // of abnormal termination during the critical section span.
ObjectVisitor BeginCriticalSection() error
// EndCriticalSection ends the critical section span.
EndCriticalSection()
} }
type MarkSetEnv interface { type MarkSetEnv interface {
// Create creates a new markset within the environment. // New creates a new markset within the environment.
// name is a unique name for this markset, mapped to the filesystem in disk-backed environments // name is a unique name for this markset, mapped to the filesystem for on-disk persistence.
// sizeHint is a hint about the expected size of the markset // sizeHint is a hint about the expected size of the markset
Create(name string, sizeHint int64) (MarkSet, error) New(name string, sizeHint int64) (MarkSet, error)
// CreateVisitor is like Create, but returns a wider interface that supports atomic visits. // Recover recovers an existing markset persisted on-disk.
// It may not be supported by some markset types (e.g. bloom). Recover(name string) (MarkSet, error)
CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) // Close closes the markset
// SupportsVisitor returns true if the marksets created by this environment support the visitor interface.
SupportsVisitor() bool
Close() error Close() error
} }
func OpenMarkSetEnv(path string, mtype string) (MarkSetEnv, error) { func OpenMarkSetEnv(path string, mtype string) (MarkSetEnv, error) {
switch mtype { switch mtype {
case "map": case "map":
return NewMapMarkSetEnv() return NewMapMarkSetEnv(path)
case "badger": case "badger":
return NewBadgerMarkSetEnv(path) return NewBadgerMarkSetEnv(path)
default: default:

View File

@ -3,6 +3,7 @@ package splitstore
import ( import (
"os" "os"
"path/filepath" "path/filepath"
"runtime"
"sync" "sync"
"golang.org/x/xerrors" "golang.org/x/xerrors"
@ -28,13 +29,13 @@ type BadgerMarkSet struct {
writers int writers int
seqno int seqno int
version int version int
persist bool
db *badger.DB db *badger.DB
path string path string
} }
var _ MarkSet = (*BadgerMarkSet)(nil) var _ MarkSet = (*BadgerMarkSet)(nil)
var _ MarkSetVisitor = (*BadgerMarkSet)(nil)
var badgerMarkSetBatchSize = 16384 var badgerMarkSetBatchSize = 16384
@ -48,11 +49,10 @@ func NewBadgerMarkSetEnv(path string) (MarkSetEnv, error) {
return &BadgerMarkSetEnv{path: msPath}, nil return &BadgerMarkSetEnv{path: msPath}, nil
} }
func (e *BadgerMarkSetEnv) create(name string, sizeHint int64) (*BadgerMarkSet, error) { func (e *BadgerMarkSetEnv) New(name string, sizeHint int64) (MarkSet, error) {
name += ".tmp"
path := filepath.Join(e.path, name) path := filepath.Join(e.path, name)
db, err := openTransientBadgerDB(path) db, err := openBadgerDB(path, false)
if err != nil { if err != nil {
return nil, xerrors.Errorf("error creating badger db: %w", err) return nil, xerrors.Errorf("error creating badger db: %w", err)
} }
@ -68,18 +68,72 @@ func (e *BadgerMarkSetEnv) create(name string, sizeHint int64) (*BadgerMarkSet,
return ms, nil return ms, nil
} }
func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) { func (e *BadgerMarkSetEnv) Recover(name string) (MarkSet, error) {
return e.create(name, sizeHint) path := filepath.Join(e.path, name)
}
func (e *BadgerMarkSetEnv) CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) { if _, err := os.Stat(path); err != nil {
return e.create(name, sizeHint) return nil, xerrors.Errorf("error stating badger db path: %w", err)
} }
func (e *BadgerMarkSetEnv) SupportsVisitor() bool { return true } db, err := openBadgerDB(path, true)
if err != nil {
return nil, xerrors.Errorf("error creating badger db: %w", err)
}
ms := &BadgerMarkSet{
pend: make(map[string]struct{}),
writing: make(map[int]map[string]struct{}),
db: db,
path: path,
persist: true,
}
ms.cond.L = &ms.mx
return ms, nil
}
func (e *BadgerMarkSetEnv) Close() error { func (e *BadgerMarkSetEnv) Close() error {
return os.RemoveAll(e.path) return nil
}
func (s *BadgerMarkSet) BeginCriticalSection() error {
s.mx.Lock()
if s.persist {
s.mx.Unlock()
return nil
}
var write bool
var seqno int
if len(s.pend) > 0 {
write = true
seqno = s.nextBatch()
}
s.persist = true
s.mx.Unlock()
if write {
// all writes sync once persist is true
return s.write(seqno)
}
// wait for any pending writes and sync
s.mx.Lock()
for s.writers > 0 {
s.cond.Wait()
}
s.mx.Unlock()
return s.db.Sync()
}
func (s *BadgerMarkSet) EndCriticalSection() {
s.mx.Lock()
defer s.mx.Unlock()
s.persist = false
} }
func (s *BadgerMarkSet) Mark(c cid.Cid) error { func (s *BadgerMarkSet) Mark(c cid.Cid) error {
@ -99,6 +153,23 @@ func (s *BadgerMarkSet) Mark(c cid.Cid) error {
return nil return nil
} }
func (s *BadgerMarkSet) MarkMany(batch []cid.Cid) error {
s.mx.Lock()
if s.pend == nil {
s.mx.Unlock()
return errMarkSetClosed
}
write, seqno := s.putMany(batch)
s.mx.Unlock()
if write {
return s.write(seqno)
}
return nil
}
func (s *BadgerMarkSet) Has(c cid.Cid) (bool, error) { func (s *BadgerMarkSet) Has(c cid.Cid) (bool, error) {
s.mx.RLock() s.mx.RLock()
defer s.mx.RUnlock() defer s.mx.RUnlock()
@ -204,16 +275,34 @@ func (s *BadgerMarkSet) tryDB(key []byte) (has bool, err error) {
// writer holds the exclusive lock // writer holds the exclusive lock
func (s *BadgerMarkSet) put(key string) (write bool, seqno int) { func (s *BadgerMarkSet) put(key string) (write bool, seqno int) {
s.pend[key] = struct{}{} s.pend[key] = struct{}{}
if len(s.pend) < badgerMarkSetBatchSize { if !s.persist && len(s.pend) < badgerMarkSetBatchSize {
return false, 0 return false, 0
} }
seqno = s.seqno seqno = s.nextBatch()
return true, seqno
}
func (s *BadgerMarkSet) putMany(batch []cid.Cid) (write bool, seqno int) {
for _, c := range batch {
key := string(c.Hash())
s.pend[key] = struct{}{}
}
if !s.persist && len(s.pend) < badgerMarkSetBatchSize {
return false, 0
}
seqno = s.nextBatch()
return true, seqno
}
func (s *BadgerMarkSet) nextBatch() int {
seqno := s.seqno
s.seqno++ s.seqno++
s.writing[seqno] = s.pend s.writing[seqno] = s.pend
s.pend = make(map[string]struct{}) s.pend = make(map[string]struct{})
return seqno
return true, seqno
} }
func (s *BadgerMarkSet) write(seqno int) (err error) { func (s *BadgerMarkSet) write(seqno int) (err error) {
@ -258,6 +347,14 @@ func (s *BadgerMarkSet) write(seqno int) (err error) {
return xerrors.Errorf("error flushing batch to badger markset: %w", err) return xerrors.Errorf("error flushing batch to badger markset: %w", err)
} }
s.mx.RLock()
persist := s.persist
s.mx.RUnlock()
if persist {
return s.db.Sync()
}
return nil return nil
} }
@ -277,26 +374,29 @@ func (s *BadgerMarkSet) Close() error {
db := s.db db := s.db
s.db = nil s.db = nil
return closeTransientBadgerDB(db, s.path) return closeBadgerDB(db, s.path, s.persist)
} }
func (s *BadgerMarkSet) SetConcurrent() {} func openBadgerDB(path string, recover bool) (*badger.DB, error) {
// if it is not a recovery, clean up first
if !recover {
err := os.RemoveAll(path)
if err != nil {
return nil, xerrors.Errorf("error clearing markset directory: %w", err)
}
func openTransientBadgerDB(path string) (*badger.DB, error) { err = os.MkdirAll(path, 0755) //nolint:gosec
// clean up first if err != nil {
err := os.RemoveAll(path) return nil, xerrors.Errorf("error creating markset directory: %w", err)
if err != nil { }
return nil, xerrors.Errorf("error clearing markset directory: %w", err)
}
err = os.MkdirAll(path, 0755) //nolint:gosec
if err != nil {
return nil, xerrors.Errorf("error creating markset directory: %w", err)
} }
opts := badger.DefaultOptions(path) opts := badger.DefaultOptions(path)
// we manually sync when we are in critical section
opts.SyncWrites = false opts.SyncWrites = false
// no need to do that
opts.CompactL0OnClose = false opts.CompactL0OnClose = false
// we store hashes, not much to gain by compression
opts.Compression = options.None opts.Compression = options.None
// Note: We use FileIO for loading modes to avoid memory thrashing and interference // Note: We use FileIO for loading modes to avoid memory thrashing and interference
// between the system blockstore and the markset. // between the system blockstore and the markset.
@ -305,6 +405,15 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
// exceeded 1GB in size. // exceeded 1GB in size.
opts.TableLoadingMode = options.FileIO opts.TableLoadingMode = options.FileIO
opts.ValueLogLoadingMode = options.FileIO opts.ValueLogLoadingMode = options.FileIO
// We increase the number of L0 tables before compaction to make it unlikely to
// be necessary.
opts.NumLevelZeroTables = 20 // default is 5
opts.NumLevelZeroTablesStall = 30 // default is 10
// increase the number of compactors from default 2 so that if we ever have to
// compact, it is fast
if runtime.NumCPU()/2 > opts.NumCompactors {
opts.NumCompactors = runtime.NumCPU() / 2
}
opts.Logger = &badgerLogger{ opts.Logger = &badgerLogger{
SugaredLogger: log.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar(), SugaredLogger: log.Desugar().WithOptions(zap.AddCallerSkip(1)).Sugar(),
skip2: log.Desugar().WithOptions(zap.AddCallerSkip(2)).Sugar(), skip2: log.Desugar().WithOptions(zap.AddCallerSkip(2)).Sugar(),
@ -313,12 +422,16 @@ func openTransientBadgerDB(path string) (*badger.DB, error) {
return badger.Open(opts) return badger.Open(opts)
} }
func closeTransientBadgerDB(db *badger.DB, path string) error { func closeBadgerDB(db *badger.DB, path string, persist bool) error {
err := db.Close() err := db.Close()
if err != nil { if err != nil {
return xerrors.Errorf("error closing badger markset: %w", err) return xerrors.Errorf("error closing badger markset: %w", err)
} }
if persist {
return nil
}
err = os.RemoveAll(path) err = os.RemoveAll(path)
if err != nil { if err != nil {
return xerrors.Errorf("error deleting badger markset: %w", err) return xerrors.Errorf("error deleting badger markset: %w", err)

View File

@ -1,12 +1,20 @@
package splitstore package splitstore
import ( import (
"bufio"
"io"
"os"
"path/filepath"
"sync" "sync"
"golang.org/x/xerrors"
cid "github.com/ipfs/go-cid" cid "github.com/ipfs/go-cid"
) )
type MapMarkSetEnv struct{} type MapMarkSetEnv struct {
path string
}
var _ MarkSetEnv = (*MapMarkSetEnv)(nil) var _ MarkSetEnv = (*MapMarkSetEnv)(nil)
@ -14,55 +22,194 @@ type MapMarkSet struct {
mx sync.RWMutex mx sync.RWMutex
set map[string]struct{} set map[string]struct{}
ts bool persist bool
file *os.File
buf *bufio.Writer
path string
} }
var _ MarkSet = (*MapMarkSet)(nil) var _ MarkSet = (*MapMarkSet)(nil)
var _ MarkSetVisitor = (*MapMarkSet)(nil)
func NewMapMarkSetEnv() (*MapMarkSetEnv, error) { func NewMapMarkSetEnv(path string) (*MapMarkSetEnv, error) {
return &MapMarkSetEnv{}, nil msPath := filepath.Join(path, "markset.map")
err := os.MkdirAll(msPath, 0755) //nolint:gosec
if err != nil {
return nil, xerrors.Errorf("error creating markset directory: %w", err)
}
return &MapMarkSetEnv{path: msPath}, nil
} }
func (e *MapMarkSetEnv) create(name string, sizeHint int64) (*MapMarkSet, error) { func (e *MapMarkSetEnv) New(name string, sizeHint int64) (MarkSet, error) {
path := filepath.Join(e.path, name)
return &MapMarkSet{ return &MapMarkSet{
set: make(map[string]struct{}, sizeHint), set: make(map[string]struct{}, sizeHint),
path: path,
}, nil }, nil
} }
func (e *MapMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) { func (e *MapMarkSetEnv) Recover(name string) (MarkSet, error) {
return e.create(name, sizeHint) path := filepath.Join(e.path, name)
} s := &MapMarkSet{
set: make(map[string]struct{}),
path: path,
}
func (e *MapMarkSetEnv) CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) { in, err := os.Open(path)
return e.create(name, sizeHint) if err != nil {
} return nil, xerrors.Errorf("error opening markset file for read: %w", err)
}
defer in.Close() //nolint:errcheck
func (e *MapMarkSetEnv) SupportsVisitor() bool { return true } // wrap a buffered reader to make this faster
buf := bufio.NewReader(in)
for {
var sz byte
if sz, err = buf.ReadByte(); err != nil {
break
}
key := make([]byte, int(sz))
if _, err = io.ReadFull(buf, key); err != nil {
break
}
s.set[string(key)] = struct{}{}
}
if err != io.EOF {
return nil, xerrors.Errorf("error reading markset file: %w", err)
}
file, err := os.OpenFile(s.path, os.O_WRONLY|os.O_APPEND, 0)
if err != nil {
return nil, xerrors.Errorf("error opening markset file for write: %w", err)
}
s.persist = true
s.file = file
s.buf = bufio.NewWriter(file)
return s, nil
}
func (e *MapMarkSetEnv) Close() error { func (e *MapMarkSetEnv) Close() error {
return nil return nil
} }
func (s *MapMarkSet) Mark(cid cid.Cid) error { func (s *MapMarkSet) BeginCriticalSection() error {
if s.ts { s.mx.Lock()
s.mx.Lock() defer s.mx.Unlock()
defer s.mx.Unlock()
}
if s.set == nil { if s.set == nil {
return errMarkSetClosed return errMarkSetClosed
} }
s.set[string(cid.Hash())] = struct{}{} if s.persist {
return nil
}
file, err := os.OpenFile(s.path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0644)
if err != nil {
return xerrors.Errorf("error opening markset file: %w", err)
}
// wrap a buffered writer to make this faster
s.buf = bufio.NewWriter(file)
for key := range s.set {
if err := s.writeKey([]byte(key), false); err != nil {
_ = file.Close()
s.buf = nil
return err
}
}
if err := s.buf.Flush(); err != nil {
_ = file.Close()
s.buf = nil
return xerrors.Errorf("error flushing markset file buffer: %w", err)
}
s.file = file
s.persist = true
return nil
}
func (s *MapMarkSet) EndCriticalSection() {
s.mx.Lock()
defer s.mx.Unlock()
if !s.persist {
return
}
_ = s.file.Close()
_ = os.Remove(s.path)
s.file = nil
s.buf = nil
s.persist = false
}
func (s *MapMarkSet) Mark(c cid.Cid) error {
s.mx.Lock()
defer s.mx.Unlock()
if s.set == nil {
return errMarkSetClosed
}
hash := c.Hash()
s.set[string(hash)] = struct{}{}
if s.persist {
if err := s.writeKey(hash, true); err != nil {
return err
}
if err := s.file.Sync(); err != nil {
return xerrors.Errorf("error syncing markset: %w", err)
}
}
return nil
}
func (s *MapMarkSet) MarkMany(batch []cid.Cid) error {
s.mx.Lock()
defer s.mx.Unlock()
if s.set == nil {
return errMarkSetClosed
}
for _, c := range batch {
hash := c.Hash()
s.set[string(hash)] = struct{}{}
if s.persist {
if err := s.writeKey(hash, false); err != nil {
return err
}
}
}
if s.persist {
if err := s.buf.Flush(); err != nil {
return xerrors.Errorf("error flushing markset buffer to disk: %w", err)
}
if err := s.file.Sync(); err != nil {
return xerrors.Errorf("error syncing markset: %w", err)
}
}
return nil return nil
} }
func (s *MapMarkSet) Has(cid cid.Cid) (bool, error) { func (s *MapMarkSet) Has(cid cid.Cid) (bool, error) {
if s.ts { s.mx.RLock()
s.mx.RLock() defer s.mx.RUnlock()
defer s.mx.RUnlock()
}
if s.set == nil { if s.set == nil {
return false, errMarkSetClosed return false, errMarkSetClosed
@ -73,33 +220,70 @@ func (s *MapMarkSet) Has(cid cid.Cid) (bool, error) {
} }
func (s *MapMarkSet) Visit(c cid.Cid) (bool, error) { func (s *MapMarkSet) Visit(c cid.Cid) (bool, error) {
if s.ts { s.mx.Lock()
s.mx.Lock() defer s.mx.Unlock()
defer s.mx.Unlock()
}
if s.set == nil { if s.set == nil {
return false, errMarkSetClosed return false, errMarkSetClosed
} }
key := string(c.Hash()) hash := c.Hash()
key := string(hash)
if _, ok := s.set[key]; ok { if _, ok := s.set[key]; ok {
return false, nil return false, nil
} }
s.set[key] = struct{}{} s.set[key] = struct{}{}
if s.persist {
if err := s.writeKey(hash, true); err != nil {
return false, err
}
if err := s.file.Sync(); err != nil {
return false, xerrors.Errorf("error syncing markset: %w", err)
}
}
return true, nil return true, nil
} }
func (s *MapMarkSet) Close() error { func (s *MapMarkSet) Close() error {
if s.ts { s.mx.Lock()
s.mx.Lock() defer s.mx.Unlock()
defer s.mx.Unlock()
if s.set == nil {
return nil
} }
s.set = nil s.set = nil
if s.file != nil {
if err := s.file.Close(); err != nil {
log.Warnf("error closing markset file: %s", err)
}
if !s.persist {
if err := os.Remove(s.path); err != nil {
log.Warnf("error removing markset file: %s", err)
}
}
}
return nil return nil
} }
func (s *MapMarkSet) SetConcurrent() { func (s *MapMarkSet) writeKey(k []byte, flush bool) error {
s.ts = true if err := s.buf.WriteByte(byte(len(k))); err != nil {
return xerrors.Errorf("error writing markset key length to disk: %w", err)
}
if _, err := s.buf.Write(k); err != nil {
return xerrors.Errorf("error writing markset key to disk: %w", err)
}
if flush {
if err := s.buf.Flush(); err != nil {
return xerrors.Errorf("error flushing markset buffer to disk: %w", err)
}
}
return nil
} }

View File

@ -11,7 +11,10 @@ import (
func TestMapMarkSet(t *testing.T) { func TestMapMarkSet(t *testing.T) {
testMarkSet(t, "map") testMarkSet(t, "map")
testMarkSetRecovery(t, "map")
testMarkSetMarkMany(t, "map")
testMarkSetVisitor(t, "map") testMarkSetVisitor(t, "map")
testMarkSetVisitorRecovery(t, "map")
} }
func TestBadgerMarkSet(t *testing.T) { func TestBadgerMarkSet(t *testing.T) {
@ -21,12 +24,13 @@ func TestBadgerMarkSet(t *testing.T) {
badgerMarkSetBatchSize = bs badgerMarkSetBatchSize = bs
}) })
testMarkSet(t, "badger") testMarkSet(t, "badger")
testMarkSetRecovery(t, "badger")
testMarkSetMarkMany(t, "badger")
testMarkSetVisitor(t, "badger") testMarkSetVisitor(t, "badger")
testMarkSetVisitorRecovery(t, "badger")
} }
func testMarkSet(t *testing.T, lsType string) { func testMarkSet(t *testing.T, lsType string) {
t.Helper()
path, err := ioutil.TempDir("", "markset.*") path, err := ioutil.TempDir("", "markset.*")
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -42,12 +46,12 @@ func testMarkSet(t *testing.T, lsType string) {
} }
defer env.Close() //nolint:errcheck defer env.Close() //nolint:errcheck
hotSet, err := env.Create("hot", 0) hotSet, err := env.New("hot", 0)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
coldSet, err := env.Create("cold", 0) coldSet, err := env.New("cold", 0)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -62,6 +66,7 @@ func testMarkSet(t *testing.T, lsType string) {
} }
mustHave := func(s MarkSet, cid cid.Cid) { mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid) has, err := s.Has(cid)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -73,6 +78,7 @@ func testMarkSet(t *testing.T, lsType string) {
} }
mustNotHave := func(s MarkSet, cid cid.Cid) { mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid) has, err := s.Has(cid)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -114,12 +120,12 @@ func testMarkSet(t *testing.T, lsType string) {
t.Fatal(err) t.Fatal(err)
} }
hotSet, err = env.Create("hot", 0) hotSet, err = env.New("hot", 0)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
coldSet, err = env.Create("cold", 0) coldSet, err = env.New("cold", 0)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -150,8 +156,6 @@ func testMarkSet(t *testing.T, lsType string) {
} }
func testMarkSetVisitor(t *testing.T, lsType string) { func testMarkSetVisitor(t *testing.T, lsType string) {
t.Helper()
path, err := ioutil.TempDir("", "markset.*") path, err := ioutil.TempDir("", "markset.*")
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -167,7 +171,7 @@ func testMarkSetVisitor(t *testing.T, lsType string) {
} }
defer env.Close() //nolint:errcheck defer env.Close() //nolint:errcheck
visitor, err := env.CreateVisitor("test", 0) visitor, err := env.New("test", 0)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -219,3 +223,322 @@ func testMarkSetVisitor(t *testing.T, lsType string) {
mustNotVisit(visitor, k3) mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4) mustNotVisit(visitor, k4)
} }
func testMarkSetVisitorRecovery(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
visitor, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
defer visitor.Close() //nolint:errcheck
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if !visit {
t.Fatal("object should be visited")
}
}
mustNotVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if visit {
t.Fatal("unexpected visit")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
mustVisit(visitor, k1)
mustVisit(visitor, k2)
if err := visitor.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
mustVisit(visitor, k3)
mustVisit(visitor, k4)
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
if err := visitor.Close(); err != nil {
t.Fatal(err)
}
visitor, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
visitor.EndCriticalSection()
if err := visitor.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
func testMarkSetRecovery(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
markSet, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("mark not found")
}
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("unexpected mark")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
if err := markSet.Mark(k1); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k2); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustNotHave(markSet, k3)
mustNotHave(markSet, k4)
if err := markSet.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k3); err != nil {
t.Fatal(err)
}
if err := markSet.Mark(k4); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
markSet, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
markSet.EndCriticalSection()
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}
func testMarkSetMarkMany(t *testing.T, lsType string) {
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
markSet, err := env.New("test", 0)
if err != nil {
t.Fatal(err)
}
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("mark not found")
}
}
mustNotHave := func(s MarkSet, cid cid.Cid) {
t.Helper()
has, err := s.Has(cid)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("unexpected mark")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
if err := markSet.MarkMany([]cid.Cid{k1, k2}); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustNotHave(markSet, k3)
mustNotHave(markSet, k4)
if err := markSet.BeginCriticalSection(); err != nil {
t.Fatal(err)
}
if err := markSet.MarkMany([]cid.Cid{k3, k4}); err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
markSet, err = env.Recover("test")
if err != nil {
t.Fatal(err)
}
mustHave(markSet, k1)
mustHave(markSet, k2)
mustHave(markSet, k3)
mustHave(markSet, k4)
markSet.EndCriticalSection()
if err := markSet.Close(); err != nil {
t.Fatal(err)
}
_, err = env.Recover("test")
if err == nil {
t.Fatal("expected recovery to fail")
}
}

View File

@ -129,8 +129,6 @@ type SplitStore struct {
headChangeMx sync.Mutex headChangeMx sync.Mutex
coldPurgeSize int
chain ChainAccessor chain ChainAccessor
ds dstore.Datastore ds dstore.Datastore
cold bstore.Blockstore cold bstore.Blockstore
@ -158,6 +156,10 @@ type SplitStore struct {
txnRefsMx sync.Mutex txnRefsMx sync.Mutex
txnRefs map[cid.Cid]struct{} txnRefs map[cid.Cid]struct{}
txnMissing map[cid.Cid]struct{} txnMissing map[cid.Cid]struct{}
txnMarkSet MarkSet
txnSyncMx sync.Mutex
txnSyncCond sync.Cond
txnSync bool
// registered protectors // registered protectors
protectors []func(func(cid.Cid) error) error protectors []func(func(cid.Cid) error) error
@ -186,10 +188,6 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
return nil, err return nil, err
} }
if !markSetEnv.SupportsVisitor() {
return nil, xerrors.Errorf("markset type does not support atomic visitors")
}
// and now we can make a SplitStore // and now we can make a SplitStore
ss := &SplitStore{ ss := &SplitStore{
cfg: cfg, cfg: cfg,
@ -198,11 +196,10 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
cold: cold, cold: cold,
hot: hots, hot: hots,
markSetEnv: markSetEnv, markSetEnv: markSetEnv,
coldPurgeSize: defaultColdPurgeSize,
} }
ss.txnViewsCond.L = &ss.txnViewsMx ss.txnViewsCond.L = &ss.txnViewsMx
ss.txnSyncCond.L = &ss.txnSyncMx
ss.ctx, ss.cancel = context.WithCancel(context.Background()) ss.ctx, ss.cancel = context.WithCancel(context.Background())
if enableDebugLog { if enableDebugLog {
@ -212,21 +209,29 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
} }
} }
if ss.checkpointExists() {
log.Info("found compaction checkpoint; resuming compaction")
if err := ss.completeCompaction(); err != nil {
markSetEnv.Close() //nolint:errcheck
return nil, xerrors.Errorf("error resuming compaction: %w", err)
}
}
return ss, nil return ss, nil
} }
// Blockstore interface // Blockstore interface
func (s *SplitStore) DeleteBlock(_ cid.Cid) error { func (s *SplitStore) DeleteBlock(_ context.Context, _ cid.Cid) error {
// afaict we don't seem to be using this method, so it's not implemented // afaict we don't seem to be using this method, so it's not implemented
return errors.New("DeleteBlock not implemented on SplitStore; don't do this Luke!") //nolint return errors.New("DeleteBlock not implemented on SplitStore; don't do this Luke!") //nolint
} }
func (s *SplitStore) DeleteMany(_ []cid.Cid) error { func (s *SplitStore) DeleteMany(_ context.Context, _ []cid.Cid) error {
// afaict we don't seem to be using this method, so it's not implemented // afaict we don't seem to be using this method, so it's not implemented
return errors.New("DeleteMany not implemented on SplitStore; don't do this Luke!") //nolint return errors.New("DeleteMany not implemented on SplitStore; don't do this Luke!") //nolint
} }
func (s *SplitStore) Has(cid cid.Cid) (bool, error) { func (s *SplitStore) Has(ctx context.Context, cid cid.Cid) (bool, error) {
if isIdentiyCid(cid) { if isIdentiyCid(cid) {
return true, nil return true, nil
} }
@ -234,7 +239,21 @@ func (s *SplitStore) Has(cid cid.Cid) (bool, error) {
s.txnLk.RLock() s.txnLk.RLock()
defer s.txnLk.RUnlock() defer s.txnLk.RUnlock()
has, err := s.hot.Has(cid) // critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return false, err
}
if has {
return s.has(cid)
}
return s.cold.Has(ctx, cid)
}
has, err := s.hot.Has(ctx, cid)
if err != nil { if err != nil {
return has, err return has, err
@ -245,10 +264,10 @@ func (s *SplitStore) Has(cid cid.Cid) (bool, error) {
return true, nil return true, nil
} }
return s.cold.Has(cid) return s.cold.Has(ctx, cid)
} }
func (s *SplitStore) Get(cid cid.Cid) (blocks.Block, error) { func (s *SplitStore) Get(ctx context.Context, cid cid.Cid) (blocks.Block, error) {
if isIdentiyCid(cid) { if isIdentiyCid(cid) {
data, err := decodeIdentityCid(cid) data, err := decodeIdentityCid(cid)
if err != nil { if err != nil {
@ -261,7 +280,21 @@ func (s *SplitStore) Get(cid cid.Cid) (blocks.Block, error) {
s.txnLk.RLock() s.txnLk.RLock()
defer s.txnLk.RUnlock() defer s.txnLk.RUnlock()
blk, err := s.hot.Get(cid) // critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return nil, err
}
if has {
return s.get(cid)
}
return s.cold.Get(ctx, cid)
}
blk, err := s.hot.Get(ctx, cid)
switch err { switch err {
case nil: case nil:
@ -273,7 +306,7 @@ func (s *SplitStore) Get(cid cid.Cid) (blocks.Block, error) {
s.debug.LogReadMiss(cid) s.debug.LogReadMiss(cid)
} }
blk, err = s.cold.Get(cid) blk, err = s.cold.Get(ctx, cid)
if err == nil { if err == nil {
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1)) stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
@ -285,7 +318,7 @@ func (s *SplitStore) Get(cid cid.Cid) (blocks.Block, error) {
} }
} }
func (s *SplitStore) GetSize(cid cid.Cid) (int, error) { func (s *SplitStore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
if isIdentiyCid(cid) { if isIdentiyCid(cid) {
data, err := decodeIdentityCid(cid) data, err := decodeIdentityCid(cid)
if err != nil { if err != nil {
@ -298,7 +331,21 @@ func (s *SplitStore) GetSize(cid cid.Cid) (int, error) {
s.txnLk.RLock() s.txnLk.RLock()
defer s.txnLk.RUnlock() defer s.txnLk.RUnlock()
size, err := s.hot.GetSize(cid) // critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
if err != nil {
return 0, err
}
if has {
return s.getSize(cid)
}
return s.cold.GetSize(ctx, cid)
}
size, err := s.hot.GetSize(ctx, cid)
switch err { switch err {
case nil: case nil:
@ -310,7 +357,7 @@ func (s *SplitStore) GetSize(cid cid.Cid) (int, error) {
s.debug.LogReadMiss(cid) s.debug.LogReadMiss(cid)
} }
size, err = s.cold.GetSize(cid) size, err = s.cold.GetSize(ctx, cid)
if err == nil { if err == nil {
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1)) stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
} }
@ -321,7 +368,7 @@ func (s *SplitStore) GetSize(cid cid.Cid) (int, error) {
} }
} }
func (s *SplitStore) Put(blk blocks.Block) error { func (s *SplitStore) Put(ctx context.Context, blk blocks.Block) error {
if isIdentiyCid(blk.Cid()) { if isIdentiyCid(blk.Cid()) {
return nil return nil
} }
@ -329,18 +376,24 @@ func (s *SplitStore) Put(blk blocks.Block) error {
s.txnLk.RLock() s.txnLk.RLock()
defer s.txnLk.RUnlock() defer s.txnLk.RUnlock()
err := s.hot.Put(blk) err := s.hot.Put(ctx, blk)
if err != nil { if err != nil {
return err return err
} }
s.debug.LogWrite(blk) s.debug.LogWrite(blk)
// critical section
if s.txnMarkSet != nil {
s.markLiveRefs([]cid.Cid{blk.Cid()})
return nil
}
s.trackTxnRef(blk.Cid()) s.trackTxnRef(blk.Cid())
return nil return nil
} }
func (s *SplitStore) PutMany(blks []blocks.Block) error { func (s *SplitStore) PutMany(ctx context.Context, blks []blocks.Block) error {
// filter identities	// filter identities
idcids := 0 idcids := 0
for _, blk := range blks { for _, blk := range blks {
@ -374,13 +427,19 @@ func (s *SplitStore) PutMany(blks []blocks.Block) error {
s.txnLk.RLock() s.txnLk.RLock()
defer s.txnLk.RUnlock() defer s.txnLk.RUnlock()
err := s.hot.PutMany(blks) err := s.hot.PutMany(ctx, blks)
if err != nil { if err != nil {
return err return err
} }
s.debug.LogWriteMany(blks) s.debug.LogWriteMany(blks)
// critical section
if s.txnMarkSet != nil {
s.markLiveRefs(batch)
return nil
}
s.trackTxnRefMany(batch) s.trackTxnRefMany(batch)
return nil return nil
} }
@ -430,7 +489,7 @@ func (s *SplitStore) HashOnRead(enabled bool) {
s.cold.HashOnRead(enabled) s.cold.HashOnRead(enabled)
} }
func (s *SplitStore) View(cid cid.Cid, cb func([]byte) error) error { func (s *SplitStore) View(ctx context.Context, cid cid.Cid, cb func([]byte) error) error {
if isIdentiyCid(cid) { if isIdentiyCid(cid) {
data, err := decodeIdentityCid(cid) data, err := decodeIdentityCid(cid)
if err != nil { if err != nil {
@ -440,6 +499,23 @@ func (s *SplitStore) View(cid cid.Cid, cb func([]byte) error) error {
return cb(data) return cb(data)
} }
// critical section
s.txnLk.RLock() // the lock is released in protectView if we are not in critical section
if s.txnMarkSet != nil {
has, err := s.txnMarkSet.Has(cid)
s.txnLk.RUnlock()
if err != nil {
return err
}
if has {
return s.view(cid, cb)
}
return s.cold.View(ctx, cid, cb)
}
// views are (optimistically) protected two-fold: // views are (optimistically) protected two-fold:
// - if there is an active transaction, then the reference is protected. // - if there is an active transaction, then the reference is protected.
// - if there is no active transaction, active views are tracked in a // - if there is no active transaction, active views are tracked in a
@ -451,14 +527,14 @@ func (s *SplitStore) View(cid cid.Cid, cb func([]byte) error) error {
s.protectView(cid) s.protectView(cid)
defer s.viewDone() defer s.viewDone()
err := s.hot.View(cid, cb) err := s.hot.View(ctx, cid, cb)
switch err { switch err {
case bstore.ErrNotFound: case bstore.ErrNotFound:
if s.isWarm() { if s.isWarm() {
s.debug.LogReadMiss(cid) s.debug.LogReadMiss(cid)
} }
err = s.cold.View(cid, cb) err = s.cold.View(ctx, cid, cb)
if err == nil { if err == nil {
stats.Record(s.ctx, metrics.SplitstoreMiss.M(1)) stats.Record(s.ctx, metrics.SplitstoreMiss.M(1))
} }
@ -502,7 +578,7 @@ func (s *SplitStore) Start(chain ChainAccessor, us stmgr.UpgradeSchedule) error
// load base epoch from metadata ds // load base epoch from metadata ds
// if none, then use current epoch because it's a fresh start // if none, then use current epoch because it's a fresh start
bs, err := s.ds.Get(baseEpochKey) bs, err := s.ds.Get(s.ctx, baseEpochKey)
switch err { switch err {
case nil: case nil:
s.baseEpoch = bytesToEpoch(bs) s.baseEpoch = bytesToEpoch(bs)
@ -523,7 +599,7 @@ func (s *SplitStore) Start(chain ChainAccessor, us stmgr.UpgradeSchedule) error
} }
// load warmup epoch from metadata ds // load warmup epoch from metadata ds
bs, err = s.ds.Get(warmupEpochKey) bs, err = s.ds.Get(s.ctx, warmupEpochKey)
switch err { switch err {
case nil: case nil:
s.warmupEpoch = bytesToEpoch(bs) s.warmupEpoch = bytesToEpoch(bs)
@ -536,7 +612,7 @@ func (s *SplitStore) Start(chain ChainAccessor, us stmgr.UpgradeSchedule) error
} }
// load markSetSize from metadata ds to provide a size hint for marksets // load markSetSize from metadata ds to provide a size hint for marksets
bs, err = s.ds.Get(markSetSizeKey) bs, err = s.ds.Get(s.ctx, markSetSizeKey)
switch err { switch err {
case nil: case nil:
s.markSetSize = bytesToInt64(bs) s.markSetSize = bytesToInt64(bs)
@ -547,7 +623,7 @@ func (s *SplitStore) Start(chain ChainAccessor, us stmgr.UpgradeSchedule) error
} }
// load compactionIndex from metadata ds to provide a hint as to when to perform moving gc // load compactionIndex from metadata ds to provide a hint as to when to perform moving gc
bs, err = s.ds.Get(compactionIndexKey) bs, err = s.ds.Get(s.ctx, compactionIndexKey)
switch err { switch err {
case nil: case nil:
s.compactionIndex = bytesToInt64(bs) s.compactionIndex = bytesToInt64(bs)
@ -589,6 +665,11 @@ func (s *SplitStore) Close() error {
} }
if atomic.LoadInt32(&s.compacting) == 1 { if atomic.LoadInt32(&s.compacting) == 1 {
s.txnSyncMx.Lock()
s.txnSync = true
s.txnSyncCond.Broadcast()
s.txnSyncMx.Unlock()
log.Warn("close with ongoing compaction in progress; waiting for it to finish...") log.Warn("close with ongoing compaction in progress; waiting for it to finish...")
for atomic.LoadInt32(&s.compacting) == 1 { for atomic.LoadInt32(&s.compacting) == 1 {
time.Sleep(time.Second) time.Sleep(time.Second)
@ -609,5 +690,5 @@ func (s *SplitStore) checkClosing() error {
func (s *SplitStore) setBaseEpoch(epoch abi.ChainEpoch) error { func (s *SplitStore) setBaseEpoch(epoch abi.ChainEpoch) error {
s.baseEpoch = epoch s.baseEpoch = epoch
return s.ds.Put(baseEpochKey, epochToBytes(epoch)) return s.ds.Put(s.ctx, baseEpochKey, epochToBytes(epoch))
} }
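The `Get`/`GetSize`/`Put`/`View` hunks above all add the same guard: while compaction is inside its critical section (`s.txnMarkSet != nil`), reads are routed by the mark set instead of falling through hot → cold. A minimal sketch of that rule, using stand-in `MarkSet`/`Blockstore` interfaces rather than the real splitstore types:

```go
package sketch

import (
	"context"
	"errors"

	blocks "github.com/ipfs/go-block-format"
	cid "github.com/ipfs/go-cid"
)

var errNotFound = errors.New("block not found")

// Stand-ins for the real interfaces; only the calls used below are declared.
type MarkSet interface {
	Has(c cid.Cid) (bool, error)
}

type Blockstore interface {
	Get(ctx context.Context, c cid.Cid) (blocks.Block, error)
}

// getRouted: inside the critical section a marked (live) object must be in
// the hotstore and an unmarked one is headed to the coldstore; outside it,
// reads fall through hot -> cold as before.
func getRouted(ctx context.Context, markSet MarkSet, hot, cold Blockstore, c cid.Cid) (blocks.Block, error) {
	if markSet != nil { // critical section active
		has, err := markSet.Has(c)
		if err != nil {
			return nil, err
		}
		if has {
			return hot.Get(ctx, c)
		}
		return cold.Get(ctx, c)
	}
	blk, err := hot.Get(ctx, c)
	if errors.Is(err, errNotFound) {
		return cold.Get(ctx, c)
	}
	return blk, err
}
```

The same reasoning explains the `Put`/`PutMany` branches: blocks written during the critical section are marked live on the spot (`markLiveRefs`) so a concurrent purge cannot collect them.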


@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"os" "os"
"path/filepath" "path/filepath"
"sync"
"sync/atomic" "sync/atomic"
"time" "time"
@ -67,7 +68,10 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
} }
defer output.Close() //nolint:errcheck defer output.Close() //nolint:errcheck
var mx sync.Mutex
write := func(format string, args ...interface{}) { write := func(format string, args ...interface{}) {
mx.Lock()
defer mx.Unlock()
_, err := fmt.Fprintf(output, format+"\n", args...) _, err := fmt.Fprintf(output, format+"\n", args...)
if err != nil { if err != nil {
log.Warnf("error writing check output: %s", err) log.Warnf("error writing check output: %s", err)
@ -82,9 +86,10 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
write("compaction index: %d", s.compactionIndex) write("compaction index: %d", s.compactionIndex)
write("--") write("--")
var coldCnt, missingCnt int64 coldCnt := new(int64)
missingCnt := new(int64)
visitor, err := s.markSetEnv.CreateVisitor("check", 0) visitor, err := s.markSetEnv.New("check", 0)
if err != nil { if err != nil {
return xerrors.Errorf("error creating visitor: %w", err) return xerrors.Errorf("error creating visitor: %w", err)
} }
@ -96,7 +101,7 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
return errStopWalk return errStopWalk
} }
has, err := s.hot.Has(c) has, err := s.hot.Has(s.ctx, c)
if err != nil { if err != nil {
return xerrors.Errorf("error checking hotstore: %w", err) return xerrors.Errorf("error checking hotstore: %w", err)
} }
@ -105,16 +110,16 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
return nil return nil
} }
has, err = s.cold.Has(c) has, err = s.cold.Has(s.ctx, c)
if err != nil { if err != nil {
return xerrors.Errorf("error checking coldstore: %w", err) return xerrors.Errorf("error checking coldstore: %w", err)
} }
if has { if has {
coldCnt++ atomic.AddInt64(coldCnt, 1)
write("cold object reference: %s", c) write("cold object reference: %s", c)
} else { } else {
missingCnt++ atomic.AddInt64(missingCnt, 1)
write("missing object reference: %s", c) write("missing object reference: %s", c)
return errStopWalk return errStopWalk
} }
@ -128,9 +133,9 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
return err return err
} }
log.Infow("check done", "cold", coldCnt, "missing", missingCnt) log.Infow("check done", "cold", *coldCnt, "missing", *missingCnt)
write("--") write("--")
write("cold: %d missing: %d", coldCnt, missingCnt) write("cold: %d missing: %d", *coldCnt, *missingCnt)
write("DONE") write("DONE")
return nil return nil
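The counters in `doCheck` become heap-allocated `*int64` bumped via `sync/atomic`, and the output writer gains a mutex, because the walk now runs its visitor from multiple goroutines. A self-contained sketch of the pattern (the worker loop stands in for the parallel walk):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	coldCnt := new(int64) // heap-allocated so goroutines share one counter
	var mx sync.Mutex
	var wg sync.WaitGroup

	// write serializes formatted output, like the mutex-guarded helper above.
	write := func(format string, args ...interface{}) {
		mx.Lock()
		defer mx.Unlock()
		fmt.Printf(format+"\n", args...)
	}

	for i := 0; i < 4; i++ { // stand-in for concurrent walk workers
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			atomic.AddInt64(coldCnt, 1) // race-free counter bump
			write("worker %d saw a cold object", i)
		}(i)
	}
	wg.Wait()
	fmt.Printf("cold: %d\n", atomic.LoadInt64(coldCnt))
}
```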

File diff suppressed because it is too large


@ -20,28 +20,28 @@ func (s *SplitStore) Expose() bstore.Blockstore {
return &exposedSplitStore{s: s} return &exposedSplitStore{s: s}
} }
func (es *exposedSplitStore) DeleteBlock(_ cid.Cid) error { func (es *exposedSplitStore) DeleteBlock(_ context.Context, _ cid.Cid) error {
return errors.New("DeleteBlock: operation not supported") return errors.New("DeleteBlock: operation not supported")
} }
func (es *exposedSplitStore) DeleteMany(_ []cid.Cid) error { func (es *exposedSplitStore) DeleteMany(_ context.Context, _ []cid.Cid) error {
return errors.New("DeleteMany: operation not supported") return errors.New("DeleteMany: operation not supported")
} }
func (es *exposedSplitStore) Has(c cid.Cid) (bool, error) { func (es *exposedSplitStore) Has(ctx context.Context, c cid.Cid) (bool, error) {
if isIdentiyCid(c) { if isIdentiyCid(c) {
return true, nil return true, nil
} }
has, err := es.s.hot.Has(c) has, err := es.s.hot.Has(ctx, c)
if has || err != nil { if has || err != nil {
return has, err return has, err
} }
return es.s.cold.Has(c) return es.s.cold.Has(ctx, c)
} }
func (es *exposedSplitStore) Get(c cid.Cid) (blocks.Block, error) { func (es *exposedSplitStore) Get(ctx context.Context, c cid.Cid) (blocks.Block, error) {
if isIdentiyCid(c) { if isIdentiyCid(c) {
data, err := decodeIdentityCid(c) data, err := decodeIdentityCid(c)
if err != nil { if err != nil {
@ -51,16 +51,16 @@ func (es *exposedSplitStore) Get(c cid.Cid) (blocks.Block, error) {
return blocks.NewBlockWithCid(data, c) return blocks.NewBlockWithCid(data, c)
} }
blk, err := es.s.hot.Get(c) blk, err := es.s.hot.Get(ctx, c)
switch err { switch err {
case bstore.ErrNotFound: case bstore.ErrNotFound:
return es.s.cold.Get(c) return es.s.cold.Get(ctx, c)
default: default:
return blk, err return blk, err
} }
} }
func (es *exposedSplitStore) GetSize(c cid.Cid) (int, error) { func (es *exposedSplitStore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
if isIdentiyCid(c) { if isIdentiyCid(c) {
data, err := decodeIdentityCid(c) data, err := decodeIdentityCid(c)
if err != nil { if err != nil {
@ -70,21 +70,21 @@ func (es *exposedSplitStore) GetSize(c cid.Cid) (int, error) {
return len(data), nil return len(data), nil
} }
size, err := es.s.hot.GetSize(c) size, err := es.s.hot.GetSize(ctx, c)
switch err { switch err {
case bstore.ErrNotFound: case bstore.ErrNotFound:
return es.s.cold.GetSize(c) return es.s.cold.GetSize(ctx, c)
default: default:
return size, err return size, err
} }
} }
func (es *exposedSplitStore) Put(blk blocks.Block) error { func (es *exposedSplitStore) Put(ctx context.Context, blk blocks.Block) error {
return es.s.Put(blk) return es.s.Put(ctx, blk)
} }
func (es *exposedSplitStore) PutMany(blks []blocks.Block) error { func (es *exposedSplitStore) PutMany(ctx context.Context, blks []blocks.Block) error {
return es.s.PutMany(blks) return es.s.PutMany(ctx, blks)
} }
func (es *exposedSplitStore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) { func (es *exposedSplitStore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) {
@ -93,7 +93,7 @@ func (es *exposedSplitStore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, e
func (es *exposedSplitStore) HashOnRead(enabled bool) {} func (es *exposedSplitStore) HashOnRead(enabled bool) {}
func (es *exposedSplitStore) View(c cid.Cid, f func([]byte) error) error { func (es *exposedSplitStore) View(ctx context.Context, c cid.Cid, f func([]byte) error) error {
if isIdentiyCid(c) { if isIdentiyCid(c) {
data, err := decodeIdentityCid(c) data, err := decodeIdentityCid(c)
if err != nil { if err != nil {
@ -103,10 +103,10 @@ func (es *exposedSplitStore) View(c cid.Cid, f func([]byte) error) error {
return f(data) return f(data)
} }
err := es.s.hot.View(c, f) err := es.s.hot.View(ctx, c, f)
switch err { switch err {
case bstore.ErrNotFound: case bstore.ErrNotFound:
return es.s.cold.View(c, f) return es.s.cold.View(ctx, c, f)
default: default:
return err return err


@ -4,6 +4,8 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"io/ioutil"
"os"
"sync" "sync"
"sync/atomic" "sync/atomic"
"testing" "testing"
@ -20,16 +22,19 @@ import (
datastore "github.com/ipfs/go-datastore" datastore "github.com/ipfs/go-datastore"
dssync "github.com/ipfs/go-datastore/sync" dssync "github.com/ipfs/go-datastore/sync"
logging "github.com/ipfs/go-log/v2" logging "github.com/ipfs/go-log/v2"
mh "github.com/multiformats/go-multihash"
) )
func init() { func init() {
CompactionThreshold = 5 CompactionThreshold = 5
CompactionBoundary = 2 CompactionBoundary = 2
WarmupBoundary = 0 WarmupBoundary = 0
SyncWaitTime = time.Millisecond
logging.SetLogLevel("splitstore", "DEBUG") logging.SetLogLevel("splitstore", "DEBUG")
} }
func testSplitStore(t *testing.T, cfg *Config) { func testSplitStore(t *testing.T, cfg *Config) {
ctx := context.Background()
chain := &mockChain{t: t} chain := &mockChain{t: t}
// the myriads of stores // the myriads of stores
@ -39,7 +44,7 @@ func testSplitStore(t *testing.T, cfg *Config) {
// this is necessary to avoid the garbage mock puts in the blocks // this is necessary to avoid the garbage mock puts in the blocks
garbage := blocks.NewBlock([]byte{1, 2, 3}) garbage := blocks.NewBlock([]byte{1, 2, 3})
err := cold.Put(garbage) err := cold.Put(ctx, garbage)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -60,27 +65,36 @@ func testSplitStore(t *testing.T, cfg *Config) {
t.Fatal(err) t.Fatal(err)
} }
err = cold.Put(blk) err = cold.Put(ctx, blk)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
// create a garbage block that is protected with a registered protector // create a garbage block that is protected with a registered protector
protected := blocks.NewBlock([]byte("protected!")) protected := blocks.NewBlock([]byte("protected!"))
err = hot.Put(protected) err = hot.Put(ctx, protected)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
// and another one that is not protected // and another one that is not protected
unprotected := blocks.NewBlock([]byte("unprotected!")) unprotected := blocks.NewBlock([]byte("unprotected!"))
err = hot.Put(unprotected) err = hot.Put(ctx, unprotected)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
// open the splitstore // open the splitstore
ss, err := Open("", ds, hot, cold, cfg) ss, err := Open(path, ds, hot, cold, cfg)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -109,11 +123,11 @@ func testSplitStore(t *testing.T, cfg *Config) {
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
err = ss.Put(stateRoot) err = ss.Put(ctx, stateRoot)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
err = ss.Put(sblk) err = ss.Put(ctx, sblk)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -124,6 +138,10 @@ func testSplitStore(t *testing.T, cfg *Config) {
} }
waitForCompaction := func() { waitForCompaction := func() {
ss.txnSyncMx.Lock()
ss.txnSync = true
ss.txnSyncCond.Broadcast()
ss.txnSyncMx.Unlock()
for atomic.LoadInt32(&ss.compacting) == 1 { for atomic.LoadInt32(&ss.compacting) == 1 {
time.Sleep(100 * time.Millisecond) time.Sleep(100 * time.Millisecond)
} }
@ -176,7 +194,7 @@ func testSplitStore(t *testing.T, cfg *Config) {
} }
// ensure our protected block is still there // ensure our protected block is still there
has, err := hot.Has(protected.Cid()) has, err := hot.Has(ctx, protected.Cid())
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -186,7 +204,7 @@ func testSplitStore(t *testing.T, cfg *Config) {
} }
// ensure our unprotected block is in the coldstore now // ensure our unprotected block is in the coldstore now
has, err = hot.Has(unprotected.Cid()) has, err = hot.Has(ctx, unprotected.Cid())
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -195,7 +213,7 @@ func testSplitStore(t *testing.T, cfg *Config) {
t.Fatal("unprotected block is still in hotstore") t.Fatal("unprotected block is still in hotstore")
} }
has, err = cold.Has(unprotected.Cid()) has, err = cold.Has(ctx, unprotected.Cid())
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -222,6 +240,7 @@ func TestSplitStoreCompactionWithBadger(t *testing.T) {
} }
func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) { func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
ctx := context.Background()
chain := &mockChain{t: t} chain := &mockChain{t: t}
// the myriads of stores // the myriads of stores
@ -231,7 +250,7 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
// this is necessary to avoid the garbage mock puts in the blocks // this is necessary to avoid the garbage mock puts in the blocks
garbage := blocks.NewBlock([]byte{1, 2, 3}) garbage := blocks.NewBlock([]byte{1, 2, 3})
err := cold.Put(garbage) err := cold.Put(ctx, garbage)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -252,13 +271,22 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
err = cold.Put(blk) err = cold.Put(ctx, blk)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
path, err := ioutil.TempDir("", "splitstore.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
// open the splitstore // open the splitstore
ss, err := Open("", ds, hot, cold, &Config{MarkSetType: "map"}) ss, err := Open(path, ds, hot, cold, &Config{MarkSetType: "map"})
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -288,11 +316,11 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
err = ss.Put(stateRoot) err = ss.Put(ctx, stateRoot)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
err = ss.Put(sblk) err = ss.Put(ctx, sblk)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -303,6 +331,10 @@ func TestSplitStoreSuppressCompactionNearUpgrade(t *testing.T) {
} }
waitForCompaction := func() { waitForCompaction := func() {
ss.txnSyncMx.Lock()
ss.txnSync = true
ss.txnSyncCond.Broadcast()
ss.txnSyncMx.Unlock()
for atomic.LoadInt32(&ss.compacting) == 1 { for atomic.LoadInt32(&ss.compacting) == 1 {
time.Sleep(100 * time.Millisecond) time.Sleep(100 * time.Millisecond)
} }
@ -424,35 +456,43 @@ func (c *mockChain) SubscribeHeadChanges(change func(revert []*types.TipSet, app
type mockStore struct { type mockStore struct {
mx sync.Mutex mx sync.Mutex
set map[cid.Cid]blocks.Block set map[string]blocks.Block
} }
func newMockStore() *mockStore { func newMockStore() *mockStore {
return &mockStore{set: make(map[cid.Cid]blocks.Block)} return &mockStore{set: make(map[string]blocks.Block)}
} }
func (b *mockStore) Has(cid cid.Cid) (bool, error) { func (b *mockStore) keyOf(c cid.Cid) string {
return string(c.Hash())
}
func (b *mockStore) cidOf(k string) cid.Cid {
return cid.NewCidV1(cid.Raw, mh.Multihash([]byte(k)))
}
func (b *mockStore) Has(_ context.Context, cid cid.Cid) (bool, error) {
b.mx.Lock() b.mx.Lock()
defer b.mx.Unlock() defer b.mx.Unlock()
_, ok := b.set[cid] _, ok := b.set[b.keyOf(cid)]
return ok, nil return ok, nil
} }
func (b *mockStore) HashOnRead(hor bool) {} func (b *mockStore) HashOnRead(hor bool) {}
func (b *mockStore) Get(cid cid.Cid) (blocks.Block, error) { func (b *mockStore) Get(_ context.Context, cid cid.Cid) (blocks.Block, error) {
b.mx.Lock() b.mx.Lock()
defer b.mx.Unlock() defer b.mx.Unlock()
blk, ok := b.set[cid] blk, ok := b.set[b.keyOf(cid)]
if !ok { if !ok {
return nil, blockstore.ErrNotFound return nil, blockstore.ErrNotFound
} }
return blk, nil return blk, nil
} }
func (b *mockStore) GetSize(cid cid.Cid) (int, error) { func (b *mockStore) GetSize(ctx context.Context, cid cid.Cid) (int, error) {
blk, err := b.Get(cid) blk, err := b.Get(ctx, cid)
if err != nil { if err != nil {
return 0, err return 0, err
} }
@ -460,46 +500,46 @@ func (b *mockStore) GetSize(cid cid.Cid) (int, error) {
return len(blk.RawData()), nil return len(blk.RawData()), nil
} }
func (b *mockStore) View(cid cid.Cid, f func([]byte) error) error { func (b *mockStore) View(ctx context.Context, cid cid.Cid, f func([]byte) error) error {
blk, err := b.Get(cid) blk, err := b.Get(ctx, cid)
if err != nil { if err != nil {
return err return err
} }
return f(blk.RawData()) return f(blk.RawData())
} }
func (b *mockStore) Put(blk blocks.Block) error { func (b *mockStore) Put(_ context.Context, blk blocks.Block) error {
b.mx.Lock() b.mx.Lock()
defer b.mx.Unlock() defer b.mx.Unlock()
b.set[blk.Cid()] = blk b.set[b.keyOf(blk.Cid())] = blk
return nil return nil
} }
func (b *mockStore) PutMany(blks []blocks.Block) error { func (b *mockStore) PutMany(_ context.Context, blks []blocks.Block) error {
b.mx.Lock() b.mx.Lock()
defer b.mx.Unlock() defer b.mx.Unlock()
for _, blk := range blks { for _, blk := range blks {
b.set[blk.Cid()] = blk b.set[b.keyOf(blk.Cid())] = blk
} }
return nil return nil
} }
func (b *mockStore) DeleteBlock(cid cid.Cid) error { func (b *mockStore) DeleteBlock(_ context.Context, cid cid.Cid) error {
b.mx.Lock() b.mx.Lock()
defer b.mx.Unlock() defer b.mx.Unlock()
delete(b.set, cid) delete(b.set, b.keyOf(cid))
return nil return nil
} }
func (b *mockStore) DeleteMany(cids []cid.Cid) error { func (b *mockStore) DeleteMany(_ context.Context, cids []cid.Cid) error {
b.mx.Lock() b.mx.Lock()
defer b.mx.Unlock() defer b.mx.Unlock()
for _, c := range cids { for _, c := range cids {
delete(b.set, c) delete(b.set, b.keyOf(c))
} }
return nil return nil
} }
@ -513,7 +553,7 @@ func (b *mockStore) ForEachKey(f func(cid.Cid) error) error {
defer b.mx.Unlock() defer b.mx.Unlock()
for c := range b.set { for c := range b.set {
err := f(c) err := f(b.cidOf(c))
if err != nil { if err != nil {
return err return err
} }
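The mock now keys its map by the raw multihash (`keyOf`) instead of by `cid.Cid`, so lookups become version- and codec-agnostic; `cidOf` merely reconstructs a synthetic CIDv1 for `ForEachKey`. A small sketch of why the key choice matters:

```go
package main

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	mh "github.com/multiformats/go-multihash"
)

// keyOf mirrors the mock's keying: index by raw multihash bytes.
func keyOf(c cid.Cid) string { return string(c.Hash()) }

func main() {
	h, _ := mh.Sum([]byte("blk"), mh.SHA2_256, -1)
	raw := cid.NewCidV1(cid.Raw, h)
	dag := cid.NewCidV1(cid.DagCBOR, h) // different codec, same hash

	set := map[string]bool{keyOf(raw): true}
	fmt.Println(set[keyOf(dag)]) // true: lookup ignores codec and version
}
```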


@ -1,6 +1,7 @@
package splitstore package splitstore
import ( import (
"sync"
"sync/atomic" "sync/atomic"
"time" "time"
@ -55,12 +56,13 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
if WarmupBoundary < epoch { if WarmupBoundary < epoch {
boundaryEpoch = epoch - WarmupBoundary boundaryEpoch = epoch - WarmupBoundary
} }
var mx sync.Mutex
batchHot := make([]blocks.Block, 0, batchSize) batchHot := make([]blocks.Block, 0, batchSize)
count := int64(0) count := new(int64)
xcount := int64(0) xcount := new(int64)
missing := int64(0) missing := new(int64)
visitor, err := s.markSetEnv.CreateVisitor("warmup", 0) visitor, err := s.markSetEnv.New("warmup", 0)
if err != nil { if err != nil {
return xerrors.Errorf("error creating visitor: %w", err) return xerrors.Errorf("error creating visitor: %w", err)
} }
@ -73,9 +75,9 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
return errStopWalk return errStopWalk
} }
count++ atomic.AddInt64(count, 1)
has, err := s.hot.Has(c) has, err := s.hot.Has(s.ctx, c)
if err != nil { if err != nil {
return err return err
} }
@ -84,25 +86,28 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
return nil return nil
} }
blk, err := s.cold.Get(c) blk, err := s.cold.Get(s.ctx, c)
if err != nil { if err != nil {
if err == bstore.ErrNotFound { if err == bstore.ErrNotFound {
missing++ atomic.AddInt64(missing, 1)
return errStopWalk return errStopWalk
} }
return err return err
} }
xcount++ atomic.AddInt64(xcount, 1)
mx.Lock()
batchHot = append(batchHot, blk) batchHot = append(batchHot, blk)
if len(batchHot) == batchSize { if len(batchHot) == batchSize {
err = s.hot.PutMany(batchHot) err = s.hot.PutMany(s.ctx, batchHot)
if err != nil { if err != nil {
mx.Unlock()
return err return err
} }
batchHot = batchHot[:0] batchHot = batchHot[:0]
} }
mx.Unlock()
return nil return nil
}) })
@ -112,22 +117,22 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
} }
if len(batchHot) > 0 { if len(batchHot) > 0 {
err = s.hot.PutMany(batchHot) err = s.hot.PutMany(s.ctx, batchHot)
if err != nil { if err != nil {
return err return err
} }
} }
log.Infow("warmup stats", "visited", count, "warm", xcount, "missing", missing) log.Infow("warmup stats", "visited", *count, "warm", *xcount, "missing", *missing)
s.markSetSize = count + count>>2 // overestimate a bit s.markSetSize = *count + *count>>2 // overestimate a bit
err = s.ds.Put(markSetSizeKey, int64ToBytes(s.markSetSize)) err = s.ds.Put(s.ctx, markSetSizeKey, int64ToBytes(s.markSetSize))
if err != nil { if err != nil {
log.Warnf("error saving mark set size: %s", err) log.Warnf("error saving mark set size: %s", err)
} }
// save the warmup epoch // save the warmup epoch
err = s.ds.Put(warmupEpochKey, epochToBytes(epoch)) err = s.ds.Put(s.ctx, warmupEpochKey, epochToBytes(epoch))
if err != nil { if err != nil {
return xerrors.Errorf("error saving warm up epoch: %w", err) return xerrors.Errorf("error saving warm up epoch: %w", err)
} }
@ -136,7 +141,7 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
s.mx.Unlock() s.mx.Unlock()
// also save the compactionIndex, as this is used as an indicator of warmup for upgraded nodes // also save the compactionIndex, as this is used as an indicator of warmup for upgraded nodes
err = s.ds.Put(compactionIndexKey, int64ToBytes(s.compactionIndex)) err = s.ds.Put(s.ctx, compactionIndexKey, int64ToBytes(s.compactionIndex))
if err != nil { if err != nil {
return xerrors.Errorf("error saving compaction index: %w", err) return xerrors.Errorf("error saving compaction index: %w", err)
} }


@ -1,6 +1,8 @@
package splitstore package splitstore
import ( import (
"sync"
cid "github.com/ipfs/go-cid" cid "github.com/ipfs/go-cid"
) )
@ -17,16 +19,34 @@ func (v *noopVisitor) Visit(_ cid.Cid) (bool, error) {
return true, nil return true, nil
} }
type cidSetVisitor struct { type tmpVisitor struct {
set *cid.Set set *cid.Set
} }
var _ ObjectVisitor = (*cidSetVisitor)(nil) var _ ObjectVisitor = (*tmpVisitor)(nil)
func (v *cidSetVisitor) Visit(c cid.Cid) (bool, error) { func (v *tmpVisitor) Visit(c cid.Cid) (bool, error) {
return v.set.Visit(c), nil return v.set.Visit(c), nil
} }
func tmpVisitor() ObjectVisitor { func newTmpVisitor() ObjectVisitor {
return &cidSetVisitor{set: cid.NewSet()} return &tmpVisitor{set: cid.NewSet()}
}
type concurrentVisitor struct {
mx sync.Mutex
set *cid.Set
}
var _ ObjectVisitor = (*concurrentVisitor)(nil)
func newConcurrentVisitor() *concurrentVisitor {
return &concurrentVisitor{set: cid.NewSet()}
}
func (v *concurrentVisitor) Visit(c cid.Cid) (bool, error) {
v.mx.Lock()
defer v.mx.Unlock()
return v.set.Visit(c), nil
} }
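`tmpVisitor` stays single-threaded, while the new `concurrentVisitor` serializes `Visit` under a mutex so one deduplicating set can be shared by parallel walkers. A usage sketch against the `ObjectVisitor` interface in this file (`walkDeduped` and `process` are hypothetical names):

```go
// walkDeduped visits each cid at most once across all callers sharing v.
func walkDeduped(v ObjectVisitor, cids []cid.Cid, process func(cid.Cid) error) error {
	for _, c := range cids {
		fresh, err := v.Visit(c)
		if err != nil {
			return err
		}
		if !fresh {
			continue // already handled by this or another walker
		}
		if err := process(c); err != nil {
			return err
		}
	}
	return nil
}
```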


@ -20,53 +20,53 @@ type SyncBlockstore struct {
bs MemBlockstore // specifically use a memStore to save indirection overhead. bs MemBlockstore // specifically use a memStore to save indirection overhead.
} }
func (m *SyncBlockstore) DeleteBlock(k cid.Cid) error { func (m *SyncBlockstore) DeleteBlock(ctx context.Context, k cid.Cid) error {
m.mu.Lock() m.mu.Lock()
defer m.mu.Unlock() defer m.mu.Unlock()
return m.bs.DeleteBlock(k) return m.bs.DeleteBlock(ctx, k)
} }
func (m *SyncBlockstore) DeleteMany(ks []cid.Cid) error { func (m *SyncBlockstore) DeleteMany(ctx context.Context, ks []cid.Cid) error {
m.mu.Lock() m.mu.Lock()
defer m.mu.Unlock() defer m.mu.Unlock()
return m.bs.DeleteMany(ks) return m.bs.DeleteMany(ctx, ks)
} }
func (m *SyncBlockstore) Has(k cid.Cid) (bool, error) { func (m *SyncBlockstore) Has(ctx context.Context, k cid.Cid) (bool, error) {
m.mu.RLock() m.mu.RLock()
defer m.mu.RUnlock() defer m.mu.RUnlock()
return m.bs.Has(k) return m.bs.Has(ctx, k)
} }
func (m *SyncBlockstore) View(k cid.Cid, callback func([]byte) error) error { func (m *SyncBlockstore) View(ctx context.Context, k cid.Cid, callback func([]byte) error) error {
m.mu.RLock() m.mu.RLock()
defer m.mu.RUnlock() defer m.mu.RUnlock()
return m.bs.View(k, callback) return m.bs.View(ctx, k, callback)
} }
func (m *SyncBlockstore) Get(k cid.Cid) (blocks.Block, error) { func (m *SyncBlockstore) Get(ctx context.Context, k cid.Cid) (blocks.Block, error) {
m.mu.RLock() m.mu.RLock()
defer m.mu.RUnlock() defer m.mu.RUnlock()
return m.bs.Get(k) return m.bs.Get(ctx, k)
} }
func (m *SyncBlockstore) GetSize(k cid.Cid) (int, error) { func (m *SyncBlockstore) GetSize(ctx context.Context, k cid.Cid) (int, error) {
m.mu.RLock() m.mu.RLock()
defer m.mu.RUnlock() defer m.mu.RUnlock()
return m.bs.GetSize(k) return m.bs.GetSize(ctx, k)
} }
func (m *SyncBlockstore) Put(b blocks.Block) error { func (m *SyncBlockstore) Put(ctx context.Context, b blocks.Block) error {
m.mu.Lock() m.mu.Lock()
defer m.mu.Unlock() defer m.mu.Unlock()
return m.bs.Put(b) return m.bs.Put(ctx, b)
} }
func (m *SyncBlockstore) PutMany(bs []blocks.Block) error { func (m *SyncBlockstore) PutMany(ctx context.Context, bs []blocks.Block) error {
m.mu.Lock() m.mu.Lock()
defer m.mu.Unlock() defer m.mu.Unlock()
return m.bs.PutMany(bs) return m.bs.PutMany(ctx, bs)
} }
func (m *SyncBlockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) { func (m *SyncBlockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) {
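This file, like most of the diffs in this release, is mechanical fallout of the blockstore interface gaining a `context.Context` parameter on every method. The shape of the change, abridged to three methods (the `BlockstoreV0`/`BlockstoreV1` names are illustrative only):

```go
package sketch

import (
	"context"

	blocks "github.com/ipfs/go-block-format"
	cid "github.com/ipfs/go-cid"
)

// Old-style interface: no way to cancel or trace a store operation.
type BlockstoreV0 interface {
	Has(c cid.Cid) (bool, error)
	Get(c cid.Cid) (blocks.Block, error)
	Put(b blocks.Block) error
}

// New-style interface: every call takes a context, so deadlines and
// cancellation propagate into the storage layer.
type BlockstoreV1 interface {
	Has(ctx context.Context, c cid.Cid) (bool, error)
	Get(ctx context.Context, c cid.Cid) (blocks.Block, error)
	Put(ctx context.Context, b blocks.Block) error
}
```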


@ -92,28 +92,28 @@ func (t *TimedCacheBlockstore) rotate() {
t.mu.Unlock() t.mu.Unlock()
} }
func (t *TimedCacheBlockstore) Put(b blocks.Block) error { func (t *TimedCacheBlockstore) Put(ctx context.Context, b blocks.Block) error {
// Don't check the inactive set here. We want to keep this block for at // Don't check the inactive set here. We want to keep this block for at
// least one interval. // least one interval.
t.mu.Lock() t.mu.Lock()
defer t.mu.Unlock() defer t.mu.Unlock()
return t.active.Put(b) return t.active.Put(ctx, b)
} }
func (t *TimedCacheBlockstore) PutMany(bs []blocks.Block) error { func (t *TimedCacheBlockstore) PutMany(ctx context.Context, bs []blocks.Block) error {
t.mu.Lock() t.mu.Lock()
defer t.mu.Unlock() defer t.mu.Unlock()
return t.active.PutMany(bs) return t.active.PutMany(ctx, bs)
} }
func (t *TimedCacheBlockstore) View(k cid.Cid, callback func([]byte) error) error { func (t *TimedCacheBlockstore) View(ctx context.Context, k cid.Cid, callback func([]byte) error) error {
// The underlying blockstore is always a "mem" blockstore so there's no difference, // The underlying blockstore is always a "mem" blockstore so there's no difference,
// from a performance perspective, between view & get. So we call Get to avoid // from a performance perspective, between view & get. So we call Get to avoid
// calling an arbitrary callback while holding a lock. // calling an arbitrary callback while holding a lock.
t.mu.RLock() t.mu.RLock()
block, err := t.active.Get(k) block, err := t.active.Get(ctx, k)
if err == ErrNotFound { if err == ErrNotFound {
block, err = t.inactive.Get(k) block, err = t.inactive.Get(ctx, k)
} }
t.mu.RUnlock() t.mu.RUnlock()
@ -123,51 +123,51 @@ func (t *TimedCacheBlockstore) View(k cid.Cid, callback func([]byte) error) erro
return callback(block.RawData()) return callback(block.RawData())
} }
func (t *TimedCacheBlockstore) Get(k cid.Cid) (blocks.Block, error) { func (t *TimedCacheBlockstore) Get(ctx context.Context, k cid.Cid) (blocks.Block, error) {
t.mu.RLock() t.mu.RLock()
defer t.mu.RUnlock() defer t.mu.RUnlock()
b, err := t.active.Get(k) b, err := t.active.Get(ctx, k)
if err == ErrNotFound { if err == ErrNotFound {
b, err = t.inactive.Get(k) b, err = t.inactive.Get(ctx, k)
} }
return b, err return b, err
} }
func (t *TimedCacheBlockstore) GetSize(k cid.Cid) (int, error) { func (t *TimedCacheBlockstore) GetSize(ctx context.Context, k cid.Cid) (int, error) {
t.mu.RLock() t.mu.RLock()
defer t.mu.RUnlock() defer t.mu.RUnlock()
size, err := t.active.GetSize(k) size, err := t.active.GetSize(ctx, k)
if err == ErrNotFound { if err == ErrNotFound {
size, err = t.inactive.GetSize(k) size, err = t.inactive.GetSize(ctx, k)
} }
return size, err return size, err
} }
func (t *TimedCacheBlockstore) Has(k cid.Cid) (bool, error) { func (t *TimedCacheBlockstore) Has(ctx context.Context, k cid.Cid) (bool, error) {
t.mu.RLock() t.mu.RLock()
defer t.mu.RUnlock() defer t.mu.RUnlock()
if has, err := t.active.Has(k); err != nil { if has, err := t.active.Has(ctx, k); err != nil {
return false, err return false, err
} else if has { } else if has {
return true, nil return true, nil
} }
return t.inactive.Has(k) return t.inactive.Has(ctx, k)
} }
func (t *TimedCacheBlockstore) HashOnRead(_ bool) { func (t *TimedCacheBlockstore) HashOnRead(_ bool) {
// no-op // no-op
} }
func (t *TimedCacheBlockstore) DeleteBlock(k cid.Cid) error { func (t *TimedCacheBlockstore) DeleteBlock(ctx context.Context, k cid.Cid) error {
t.mu.Lock() t.mu.Lock()
defer t.mu.Unlock() defer t.mu.Unlock()
return multierr.Combine(t.active.DeleteBlock(k), t.inactive.DeleteBlock(k)) return multierr.Combine(t.active.DeleteBlock(ctx, k), t.inactive.DeleteBlock(ctx, k))
} }
func (t *TimedCacheBlockstore) DeleteMany(ks []cid.Cid) error { func (t *TimedCacheBlockstore) DeleteMany(ctx context.Context, ks []cid.Cid) error {
t.mu.Lock() t.mu.Lock()
defer t.mu.Unlock() defer t.mu.Unlock()
return multierr.Combine(t.active.DeleteMany(ks), t.inactive.DeleteMany(ks)) return multierr.Combine(t.active.DeleteMany(ctx, ks), t.inactive.DeleteMany(ctx, ks))
} }
func (t *TimedCacheBlockstore) AllKeysChan(_ context.Context) (<-chan cid.Cid, error) { func (t *TimedCacheBlockstore) AllKeysChan(_ context.Context) (<-chan cid.Cid, error) {


@ -19,6 +19,8 @@ func TestTimedCacheBlockstoreSimple(t *testing.T) {
tc.clock = mClock tc.clock = mClock
tc.doneRotatingCh = make(chan struct{}) tc.doneRotatingCh = make(chan struct{})
ctx := context.Background()
_ = tc.Start(context.Background()) _ = tc.Start(context.Background())
mClock.Add(1) // IDK why it is needed but it makes it work mClock.Add(1) // IDK why it is needed but it makes it work
@ -27,18 +29,18 @@ func TestTimedCacheBlockstoreSimple(t *testing.T) {
}() }()
b1 := blocks.NewBlock([]byte("foo")) b1 := blocks.NewBlock([]byte("foo"))
require.NoError(t, tc.Put(b1)) require.NoError(t, tc.Put(ctx, b1))
b2 := blocks.NewBlock([]byte("bar")) b2 := blocks.NewBlock([]byte("bar"))
require.NoError(t, tc.Put(b2)) require.NoError(t, tc.Put(ctx, b2))
b3 := blocks.NewBlock([]byte("baz")) b3 := blocks.NewBlock([]byte("baz"))
b1out, err := tc.Get(b1.Cid()) b1out, err := tc.Get(ctx, b1.Cid())
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, b1.RawData(), b1out.RawData()) require.Equal(t, b1.RawData(), b1out.RawData())
has, err := tc.Has(b1.Cid()) has, err := tc.Has(ctx, b1.Cid())
require.NoError(t, err) require.NoError(t, err)
require.True(t, has) require.True(t, has)
@ -46,17 +48,17 @@ func TestTimedCacheBlockstoreSimple(t *testing.T) {
<-tc.doneRotatingCh <-tc.doneRotatingCh
// We should still have everything. // We should still have everything.
has, err = tc.Has(b1.Cid()) has, err = tc.Has(ctx, b1.Cid())
require.NoError(t, err) require.NoError(t, err)
require.True(t, has) require.True(t, has)
has, err = tc.Has(b2.Cid()) has, err = tc.Has(ctx, b2.Cid())
require.NoError(t, err) require.NoError(t, err)
require.True(t, has) require.True(t, has)
// extend b2, add b3. // extend b2, add b3.
require.NoError(t, tc.Put(b2)) require.NoError(t, tc.Put(ctx, b2))
require.NoError(t, tc.Put(b3)) require.NoError(t, tc.Put(ctx, b3))
// all keys once. // all keys once.
allKeys, err := tc.AllKeysChan(context.Background()) allKeys, err := tc.AllKeysChan(context.Background())
@ -71,15 +73,15 @@ func TestTimedCacheBlockstoreSimple(t *testing.T) {
<-tc.doneRotatingCh <-tc.doneRotatingCh
// should still have b2, and b3, but not b1 // should still have b2, and b3, but not b1
has, err = tc.Has(b1.Cid()) has, err = tc.Has(ctx, b1.Cid())
require.NoError(t, err) require.NoError(t, err)
require.False(t, has) require.False(t, has)
has, err = tc.Has(b2.Cid()) has, err = tc.Has(ctx, b2.Cid())
require.NoError(t, err) require.NoError(t, err)
require.True(t, has) require.True(t, has)
has, err = tc.Has(b3.Cid()) has, err = tc.Has(ctx, b3.Cid())
require.NoError(t, err) require.NoError(t, err)
require.True(t, has) require.True(t, has)
} }


@ -19,72 +19,72 @@ func Union(stores ...Blockstore) Blockstore {
return unionBlockstore(stores) return unionBlockstore(stores)
} }
func (m unionBlockstore) Has(cid cid.Cid) (has bool, err error) { func (m unionBlockstore) Has(ctx context.Context, cid cid.Cid) (has bool, err error) {
for _, bs := range m { for _, bs := range m {
if has, err = bs.Has(cid); has || err != nil { if has, err = bs.Has(ctx, cid); has || err != nil {
break break
} }
} }
return has, err return has, err
} }
func (m unionBlockstore) Get(cid cid.Cid) (blk blocks.Block, err error) { func (m unionBlockstore) Get(ctx context.Context, cid cid.Cid) (blk blocks.Block, err error) {
for _, bs := range m { for _, bs := range m {
if blk, err = bs.Get(cid); err == nil || err != ErrNotFound { if blk, err = bs.Get(ctx, cid); err == nil || err != ErrNotFound {
break break
} }
} }
return blk, err return blk, err
} }
func (m unionBlockstore) View(cid cid.Cid, callback func([]byte) error) (err error) { func (m unionBlockstore) View(ctx context.Context, cid cid.Cid, callback func([]byte) error) (err error) {
for _, bs := range m { for _, bs := range m {
if err = bs.View(cid, callback); err == nil || err != ErrNotFound { if err = bs.View(ctx, cid, callback); err == nil || err != ErrNotFound {
break break
} }
} }
return err return err
} }
func (m unionBlockstore) GetSize(cid cid.Cid) (size int, err error) { func (m unionBlockstore) GetSize(ctx context.Context, cid cid.Cid) (size int, err error) {
for _, bs := range m { for _, bs := range m {
if size, err = bs.GetSize(cid); err == nil || err != ErrNotFound { if size, err = bs.GetSize(ctx, cid); err == nil || err != ErrNotFound {
break break
} }
} }
return size, err return size, err
} }
func (m unionBlockstore) Put(block blocks.Block) (err error) { func (m unionBlockstore) Put(ctx context.Context, block blocks.Block) (err error) {
for _, bs := range m { for _, bs := range m {
if err = bs.Put(block); err != nil { if err = bs.Put(ctx, block); err != nil {
break break
} }
} }
return err return err
} }
func (m unionBlockstore) PutMany(blks []blocks.Block) (err error) { func (m unionBlockstore) PutMany(ctx context.Context, blks []blocks.Block) (err error) {
for _, bs := range m { for _, bs := range m {
if err = bs.PutMany(blks); err != nil { if err = bs.PutMany(ctx, blks); err != nil {
break break
} }
} }
return err return err
} }
func (m unionBlockstore) DeleteBlock(cid cid.Cid) (err error) { func (m unionBlockstore) DeleteBlock(ctx context.Context, cid cid.Cid) (err error) {
for _, bs := range m { for _, bs := range m {
if err = bs.DeleteBlock(cid); err != nil { if err = bs.DeleteBlock(ctx, cid); err != nil {
break break
} }
} }
return err return err
} }
func (m unionBlockstore) DeleteMany(cids []cid.Cid) (err error) { func (m unionBlockstore) DeleteMany(ctx context.Context, cids []cid.Cid) (err error) {
for _, bs := range m { for _, bs := range m {
if err = bs.DeleteMany(cids); err != nil { if err = bs.DeleteMany(ctx, cids); err != nil {
break break
} }
} }


@ -15,79 +15,81 @@ var (
) )
func TestUnionBlockstore_Get(t *testing.T) { func TestUnionBlockstore_Get(t *testing.T) {
ctx := context.Background()
m1 := NewMemory() m1 := NewMemory()
m2 := NewMemory() m2 := NewMemory()
_ = m1.Put(b1) _ = m1.Put(ctx, b1)
_ = m2.Put(b2) _ = m2.Put(ctx, b2)
u := Union(m1, m2) u := Union(m1, m2)
v1, err := u.Get(b1.Cid()) v1, err := u.Get(ctx, b1.Cid())
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, b1.RawData(), v1.RawData()) require.Equal(t, b1.RawData(), v1.RawData())
v2, err := u.Get(b2.Cid()) v2, err := u.Get(ctx, b2.Cid())
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, b2.RawData(), v2.RawData()) require.Equal(t, b2.RawData(), v2.RawData())
} }
func TestUnionBlockstore_Put_PutMany_Delete_AllKeysChan(t *testing.T) { func TestUnionBlockstore_Put_PutMany_Delete_AllKeysChan(t *testing.T) {
ctx := context.Background()
m1 := NewMemory() m1 := NewMemory()
m2 := NewMemory() m2 := NewMemory()
u := Union(m1, m2) u := Union(m1, m2)
err := u.Put(b0) err := u.Put(ctx, b0)
require.NoError(t, err) require.NoError(t, err)
var has bool var has bool
// write was broadcasted to all stores. // write was broadcasted to all stores.
has, _ = m1.Has(b0.Cid()) has, _ = m1.Has(ctx, b0.Cid())
require.True(t, has) require.True(t, has)
has, _ = m2.Has(b0.Cid()) has, _ = m2.Has(ctx, b0.Cid())
require.True(t, has) require.True(t, has)
has, _ = u.Has(b0.Cid()) has, _ = u.Has(ctx, b0.Cid())
require.True(t, has) require.True(t, has)
// put many. // put many.
err = u.PutMany([]blocks.Block{b1, b2}) err = u.PutMany(ctx, []blocks.Block{b1, b2})
require.NoError(t, err) require.NoError(t, err)
// write was broadcasted to all stores. // write was broadcasted to all stores.
has, _ = m1.Has(b1.Cid()) has, _ = m1.Has(ctx, b1.Cid())
require.True(t, has) require.True(t, has)
has, _ = m1.Has(b2.Cid()) has, _ = m1.Has(ctx, b2.Cid())
require.True(t, has) require.True(t, has)
has, _ = m2.Has(b1.Cid()) has, _ = m2.Has(ctx, b1.Cid())
require.True(t, has) require.True(t, has)
has, _ = m2.Has(b2.Cid()) has, _ = m2.Has(ctx, b2.Cid())
require.True(t, has) require.True(t, has)
// also in the union store. // also in the union store.
has, _ = u.Has(b1.Cid()) has, _ = u.Has(ctx, b1.Cid())
require.True(t, has) require.True(t, has)
has, _ = u.Has(b2.Cid()) has, _ = u.Has(ctx, b2.Cid())
require.True(t, has) require.True(t, has)
// deleted from all stores. // deleted from all stores.
err = u.DeleteBlock(b1.Cid()) err = u.DeleteBlock(ctx, b1.Cid())
require.NoError(t, err) require.NoError(t, err)
has, _ = u.Has(b1.Cid()) has, _ = u.Has(ctx, b1.Cid())
require.False(t, has) require.False(t, has)
has, _ = m1.Has(b1.Cid()) has, _ = m1.Has(ctx, b1.Cid())
require.False(t, has) require.False(t, has)
has, _ = m2.Has(b1.Cid()) has, _ = m2.Has(ctx, b1.Cid())
require.False(t, has) require.False(t, has)
// check that AllKeysChan returns b0 and b2, twice (once per backing store) // check that AllKeysChan returns b0 and b2, twice (once per backing store)



@ -37,7 +37,7 @@ func BuildTypeString() string {
} }
// BuildVersion is the local build version // BuildVersion is the local build version
const BuildVersion = "1.14.4" const BuildVersion = "1.15.0"
func UserVersion() string { func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" { if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {


@ -142,7 +142,7 @@ func (db *DrandBeacon) Entry(ctx context.Context, round uint64) <-chan beacon.Re
go func() { go func() {
start := build.Clock.Now() start := build.Clock.Now()
log.Infow("start fetching randomness", "round", round) log.Debugw("start fetching randomness", "round", round)
resp, err := db.client.Get(ctx, round) resp, err := db.client.Get(ctx, round)
var br beacon.Response var br beacon.Response
@ -152,7 +152,7 @@ func (db *DrandBeacon) Entry(ctx context.Context, round uint64) <-chan beacon.Re
br.Entry.Round = resp.Round() br.Entry.Round = resp.Round()
br.Entry.Data = resp.Signature() br.Entry.Data = resp.Signature()
} }
log.Infow("done fetching randomness", "round", round, "took", build.Clock.Since(start)) log.Debugw("done fetching randomness", "round", round, "took", build.Clock.Since(start))
out <- br out <- br
close(out) close(out)
}() }()


@ -13,7 +13,7 @@ func (syncer *Syncer) SyncCheckpoint(ctx context.Context, tsk types.TipSetKey) e
return xerrors.Errorf("called with empty tsk") return xerrors.Errorf("called with empty tsk")
} }
ts, err := syncer.ChainStore().LoadTipSet(tsk) ts, err := syncer.ChainStore().LoadTipSet(ctx, tsk)
if err != nil { if err != nil {
tss, err := syncer.Exchange.GetBlocks(ctx, tsk, 1) tss, err := syncer.Exchange.GetBlocks(ctx, tsk, 1)
if err != nil { if err != nil {
@ -28,7 +28,7 @@ func (syncer *Syncer) SyncCheckpoint(ctx context.Context, tsk types.TipSetKey) e
return xerrors.Errorf("failed to switch chain when syncing checkpoint: %w", err) return xerrors.Errorf("failed to switch chain when syncing checkpoint: %w", err)
} }
if err := syncer.ChainStore().SetCheckpoint(ts); err != nil { if err := syncer.ChainStore().SetCheckpoint(ctx, ts); err != nil {
return xerrors.Errorf("failed to set the chain checkpoint: %w", err) return xerrors.Errorf("failed to set the chain checkpoint: %w", err)
} }
@ -41,7 +41,7 @@ func (syncer *Syncer) switchChain(ctx context.Context, ts *types.TipSet) error {
return nil return nil
} }
if anc, err := syncer.store.IsAncestorOf(ts, hts); err == nil && anc { if anc, err := syncer.store.IsAncestorOf(ctx, ts, hts); err == nil && anc {
return nil return nil
} }
@ -50,7 +50,7 @@ func (syncer *Syncer) switchChain(ctx context.Context, ts *types.TipSet) error {
return xerrors.Errorf("failed to collect chain for checkpoint: %w", err) return xerrors.Errorf("failed to collect chain for checkpoint: %w", err)
} }
if err := syncer.ChainStore().SetHead(ts); err != nil { if err := syncer.ChainStore().SetHead(ctx, ts); err != nil {
return xerrors.Errorf("failed to set the chain head: %w", err) return xerrors.Errorf("failed to set the chain head: %w", err)
} }
return nil return nil


@ -92,16 +92,16 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
partDone() partDone()
}() }()
makeVmWithBaseState := func(base cid.Cid) (*vm.VM, error) { makeVmWithBaseStateAndEpoch := func(base cid.Cid, e abi.ChainEpoch) (*vm.VM, error) {
vmopt := &vm.VMOpts{ vmopt := &vm.VMOpts{
StateBase: base, StateBase: base,
Epoch: epoch, Epoch: e,
Rand: r, Rand: r,
Bstore: sm.ChainStore().StateBlockstore(), Bstore: sm.ChainStore().StateBlockstore(),
Actors: NewActorRegistry(), Actors: NewActorRegistry(),
Syscalls: sm.Syscalls, Syscalls: sm.Syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply, CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion, NetworkVersion: sm.GetNetworkVersion(ctx, e),
BaseFee: baseFee, BaseFee: baseFee,
LookbackState: stmgr.LookbackStateGetterForTipset(sm, ts), LookbackState: stmgr.LookbackStateGetterForTipset(sm, ts),
} }
@ -109,12 +109,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
return sm.VMConstructor()(ctx, vmopt) return sm.VMConstructor()(ctx, vmopt)
} }
vmi, err := makeVmWithBaseState(pstate) runCron := func(vmCron *vm.VM, epoch abi.ChainEpoch) error {
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
runCron := func(epoch abi.ChainEpoch) error {
cronMsg := &types.Message{ cronMsg := &types.Message{
To: cron.Address, To: cron.Address,
From: builtin.SystemActorAddr, From: builtin.SystemActorAddr,
@ -126,59 +121,58 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
Method: cron.Methods.EpochTick, Method: cron.Methods.EpochTick,
Params: nil, Params: nil,
} }
ret, err := vmi.ApplyImplicitMessage(ctx, cronMsg) ret, err := vmCron.ApplyImplicitMessage(ctx, cronMsg)
if err != nil { if err != nil {
return err return xerrors.Errorf("running cron: %w", err)
} }
if em != nil { if em != nil {
if err := em.MessageApplied(ctx, ts, cronMsg.Cid(), cronMsg, ret, true); err != nil { if err := em.MessageApplied(ctx, ts, cronMsg.Cid(), cronMsg, ret, true); err != nil {
return xerrors.Errorf("callback failed on cron message: %w", err) return xerrors.Errorf("callback failed on cron message: %w", err)
} }
} }
if ret.ExitCode != 0 { if ret.ExitCode != 0 {
return xerrors.Errorf("CheckProofSubmissions exit was non-zero: %d", ret.ExitCode) return xerrors.Errorf("cron exit was non-zero: %d", ret.ExitCode)
} }
return nil return nil
} }
for i := parentEpoch; i < epoch; i++ { for i := parentEpoch; i < epoch; i++ {
var err error
if i > parentEpoch { if i > parentEpoch {
// run cron for null rounds if any vmCron, err := makeVmWithBaseStateAndEpoch(pstate, i)
if err := runCron(i); err != nil { if err != nil {
return cid.Undef, cid.Undef, err return cid.Undef, cid.Undef, xerrors.Errorf("making cron vm: %w", err)
} }
pstate, err = vmi.Flush(ctx) // run cron for null rounds if any
if err = runCron(vmCron, i); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("running cron: %w", err)
}
pstate, err = vmCron.Flush(ctx)
if err != nil { if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("flushing vm: %w", err) return cid.Undef, cid.Undef, xerrors.Errorf("flushing cron vm: %w", err)
} }
} }
// handle state forks // handle state forks
// XXX: The state tree // XXX: The state tree
newState, err := sm.HandleStateForks(ctx, pstate, i, em, ts) pstate, err = sm.HandleStateForks(ctx, pstate, i, em, ts)
if err != nil { if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("error handling state forks: %w", err) return cid.Undef, cid.Undef, xerrors.Errorf("error handling state forks: %w", err)
} }
if pstate != newState {
vmi, err = makeVmWithBaseState(newState)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
}
if err = vmi.SetBlockHeight(ctx, i+1); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("error advancing vm an epoch: %w", err)
}
pstate = newState
} }
partDone() partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyMessages) partDone = metrics.Timer(ctx, metrics.VMApplyMessages)
vmi, err := makeVmWithBaseStateAndEpoch(pstate, epoch)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
var receipts []cbg.CBORMarshaler var receipts []cbg.CBORMarshaler
processedMsgs := make(map[cid.Cid]struct{}) processedMsgs := make(map[cid.Cid]struct{})
for _, b := range bms { for _, b := range bms {
@ -246,7 +240,7 @@ func (t *TipSetExecutor) ApplyBlocks(ctx context.Context, sm *stmgr.StateManager
partDone() partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyCron) partDone = metrics.Timer(ctx, metrics.VMApplyCron)
if err := runCron(epoch); err != nil { if err := runCron(vmi, epoch); err != nil {
return cid.Cid{}, cid.Cid{}, err return cid.Cid{}, cid.Cid{}, err
} }
@ -294,7 +288,7 @@ func (t *TipSetExecutor) ExecuteTipSet(ctx context.Context, sm *stmgr.StateManag
var parentEpoch abi.ChainEpoch var parentEpoch abi.ChainEpoch
pstate := blks[0].ParentStateRoot pstate := blks[0].ParentStateRoot
if blks[0].Height > 0 { if blks[0].Height > 0 {
parent, err := sm.ChainStore().GetBlock(blks[0].Parents[0]) parent, err := sm.ChainStore().GetBlock(ctx, blks[0].Parents[0])
if err != nil { if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("getting parent block: %w", err) return cid.Undef, cid.Undef, xerrors.Errorf("getting parent block: %w", err)
} }
@ -302,9 +296,9 @@ func (t *TipSetExecutor) ExecuteTipSet(ctx context.Context, sm *stmgr.StateManag
parentEpoch = parent.Height parentEpoch = parent.Height
} }
r := rand.NewStateRand(sm.ChainStore(), ts.Cids(), sm.Beacon()) r := rand.NewStateRand(sm.ChainStore(), ts.Cids(), sm.Beacon(), sm.GetNetworkVersion)
blkmsgs, err := sm.ChainStore().BlockMsgsForTipset(ts) blkmsgs, err := sm.ChainStore().BlockMsgsForTipset(ctx, ts)
if err != nil { if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("getting block messages for tipset: %w", err) return cid.Undef, cid.Undef, xerrors.Errorf("getting block messages for tipset: %w", err)
} }
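The `ApplyBlocks` refactor above drops the single long-lived VM whose height was advanced with `SetBlockHeight` in favour of a fresh VM per epoch (`makeVmWithBaseStateAndEpoch`), so the network version is fixed at construction for both null-round cron and message application. A stripped-down sketch of the new control flow, with hypothetical stand-ins (`VM`, `Epoch`, `makeVM`) for the lotus types:

```go
package sketch

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for abi.ChainEpoch and vm.VM.
type Epoch int64

type VM interface {
	RunCron(ctx context.Context) error // the implicit cron message
	Flush(ctx context.Context) (stateRoot string, err error)
}

// applyNullRounds runs cron for every null round between parent and target,
// building a fresh VM per round so epoch-dependent rules (network version,
// gas pricing) are evaluated at that round's epoch. Per-epoch state-fork
// handling, which the real loop also performs, is omitted here.
func applyNullRounds(ctx context.Context, state string, parent, target Epoch,
	makeVM func(base string, e Epoch) (VM, error)) (string, error) {
	for i := parent + 1; i < target; i++ {
		v, err := makeVM(state, i)
		if err != nil {
			return "", fmt.Errorf("making cron vm: %w", err)
		}
		if err := v.RunCron(ctx); err != nil {
			return "", fmt.Errorf("running cron: %w", err)
		}
		if state, err = v.Flush(ctx); err != nil {
			return "", fmt.Errorf("flushing cron vm: %w", err)
		}
	}
	return state, nil
}
```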


@ -90,19 +90,19 @@ func (filec *FilecoinEC) ValidateBlock(ctx context.Context, b *types.FullBlock)
h := b.Header h := b.Header
baseTs, err := filec.store.LoadTipSet(types.NewTipSetKey(h.Parents...)) baseTs, err := filec.store.LoadTipSet(ctx, types.NewTipSetKey(h.Parents...))
if err != nil { if err != nil {
return xerrors.Errorf("load parent tipset failed (%s): %w", h.Parents, err) return xerrors.Errorf("load parent tipset failed (%s): %w", h.Parents, err)
} }
winPoStNv := filec.sm.GetNtwkVersion(ctx, baseTs.Height()) winPoStNv := filec.sm.GetNetworkVersion(ctx, baseTs.Height())
lbts, lbst, err := stmgr.GetLookbackTipSetForRound(ctx, filec.sm, baseTs, h.Height) lbts, lbst, err := stmgr.GetLookbackTipSetForRound(ctx, filec.sm, baseTs, h.Height)
if err != nil { if err != nil {
return xerrors.Errorf("failed to get lookback tipset for block: %w", err) return xerrors.Errorf("failed to get lookback tipset for block: %w", err)
} }
prevBeacon, err := filec.store.GetLatestBeaconEntry(baseTs) prevBeacon, err := filec.store.GetLatestBeaconEntry(ctx, baseTs)
if err != nil { if err != nil {
return xerrors.Errorf("failed to get latest beacon entry: %w", err) return xerrors.Errorf("failed to get latest beacon entry: %w", err)
} }
@ -171,7 +171,7 @@ func (filec *FilecoinEC) ValidateBlock(ctx context.Context, b *types.FullBlock)
} }
if stateroot != h.ParentStateRoot { if stateroot != h.ParentStateRoot {
msgs, err := filec.store.MessagesForTipset(baseTs) msgs, err := filec.store.MessagesForTipset(ctx, baseTs)
if err != nil { if err != nil {
log.Error("failed to load messages for tipset during tipset state mismatch error: ", err) log.Error("failed to load messages for tipset during tipset state mismatch error: ", err)
} else { } else {
@ -182,7 +182,7 @@ func (filec *FilecoinEC) ValidateBlock(ctx context.Context, b *types.FullBlock)
} }
} }
return xerrors.Errorf("parent state root did not match computed state (%s != %s)", stateroot, h.ParentStateRoot) return xerrors.Errorf("parent state root did not match computed state (%s != %s)", h.ParentStateRoot, stateroot)
} }
if precp != h.ParentMessageReceipts { if precp != h.ParentMessageReceipts {
@ -458,7 +458,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
stateroot, _, err := filec.sm.TipSetState(ctx, baseTs) stateroot, _, err := filec.sm.TipSetState(ctx, baseTs)
if err != nil { if err != nil {
return err return xerrors.Errorf("failed to compute tipsettate for %s: %w", baseTs.Key(), err)
} }
st, err := state.LoadStateTree(filec.store.ActorStore(ctx), stateroot) st, err := state.LoadStateTree(filec.store.ActorStore(ctx), stateroot)
@ -466,7 +466,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
return xerrors.Errorf("failed to load base state tree: %w", err) return xerrors.Errorf("failed to load base state tree: %w", err)
} }
nv := filec.sm.GetNtwkVersion(ctx, b.Header.Height) nv := filec.sm.GetNetworkVersion(ctx, b.Header.Height)
pl := vm.PricelistByEpoch(baseTs.Height()) pl := vm.PricelistByEpoch(baseTs.Height())
var sumGasLimit int64 var sumGasLimit int64
checkMsg := func(msg types.ChainMsg) error { checkMsg := func(msg types.ChainMsg) error {
@ -475,7 +475,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
// Phase 1: syntactic validation, as defined in the spec // Phase 1: syntactic validation, as defined in the spec
minGas := pl.OnChainMessage(msg.ChainLength()) minGas := pl.OnChainMessage(msg.ChainLength())
if err := m.ValidForBlockInclusion(minGas.Total(), nv); err != nil { if err := m.ValidForBlockInclusion(minGas.Total(), nv); err != nil {
return err return xerrors.Errorf("msg %s invalid for block inclusion: %w", m.Cid(), err)
} }
// ValidForBlockInclusion checks if any single message does not exceed BlockGasLimit // ValidForBlockInclusion checks if any single message does not exceed BlockGasLimit
@ -488,10 +488,10 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
// Phase 2: (Partial) semantic validation: // Phase 2: (Partial) semantic validation:
// the sender exists and is an account actor, and the nonces make sense // the sender exists and is an account actor, and the nonces make sense
var sender address.Address var sender address.Address
if filec.sm.GetNtwkVersion(ctx, b.Header.Height) >= network.Version13 { if filec.sm.GetNetworkVersion(ctx, b.Header.Height) >= network.Version13 {
sender, err = st.LookupID(m.From) sender, err = st.LookupID(m.From)
if err != nil { if err != nil {
return err return xerrors.Errorf("failed to lookup sender %s: %w", m.From, err)
} }
} else { } else {
sender = m.From sender = m.From
@ -528,7 +528,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
return xerrors.Errorf("block had invalid bls message at index %d: %w", i, err) return xerrors.Errorf("block had invalid bls message at index %d: %w", i, err)
} }
c, err := store.PutMessage(tmpbs, m) c, err := store.PutMessage(ctx, tmpbs, m)
if err != nil { if err != nil {
return xerrors.Errorf("failed to store message %s: %w", m.Cid(), err) return xerrors.Errorf("failed to store message %s: %w", m.Cid(), err)
} }
@ -541,7 +541,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
smArr := blockadt.MakeEmptyArray(tmpstore) smArr := blockadt.MakeEmptyArray(tmpstore)
for i, m := range b.SecpkMessages { for i, m := range b.SecpkMessages {
if filec.sm.GetNtwkVersion(ctx, b.Header.Height) >= network.Version14 { if filec.sm.GetNetworkVersion(ctx, b.Header.Height) >= network.Version14 {
if m.Signature.Type != crypto.SigTypeSecp256k1 { if m.Signature.Type != crypto.SigTypeSecp256k1 {
return xerrors.Errorf("block had invalid secpk message at index %d: %w", i, err) return xerrors.Errorf("block had invalid secpk message at index %d: %w", i, err)
} }
@ -562,7 +562,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
return xerrors.Errorf("secpk message %s has invalid signature: %w", m.Cid(), err) return xerrors.Errorf("secpk message %s has invalid signature: %w", m.Cid(), err)
} }
c, err := store.PutMessage(tmpbs, m) c, err := store.PutMessage(ctx, tmpbs, m)
if err != nil { if err != nil {
return xerrors.Errorf("failed to store message %s: %w", m.Cid(), err) return xerrors.Errorf("failed to store message %s: %w", m.Cid(), err)
} }
@ -574,12 +574,13 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
bmroot, err := bmArr.Root() bmroot, err := bmArr.Root()
if err != nil { if err != nil {
return err return xerrors.Errorf("failed to root bls msgs: %w", err)
} }
smroot, err := smArr.Root() smroot, err := smArr.Root()
if err != nil { if err != nil {
return err return xerrors.Errorf("failed to root secp msgs: %w", err)
} }
mrcid, err := tmpstore.Put(ctx, &types.MsgMeta{ mrcid, err := tmpstore.Put(ctx, &types.MsgMeta{
@ -587,7 +588,7 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
SecpkMessages: smroot, SecpkMessages: smroot,
}) })
if err != nil { if err != nil {
return err return xerrors.Errorf("failed to put msg meta: %w", err)
} }
if b.Header.Messages != mrcid { if b.Header.Messages != mrcid {
@ -595,7 +596,12 @@ func (filec *FilecoinEC) checkBlockMessages(ctx context.Context, b *types.FullBl
} }
// Finally, flush. // Finally, flush.
return vm.Copy(ctx, tmpbs, filec.store.ChainBlockstore(), mrcid) err = vm.Copy(ctx, tmpbs, filec.store.ChainBlockstore(), mrcid)
if err != nil {
return xerrors.Errorf("failed to flush:%w", err)
}
return nil
} }
func (filec *FilecoinEC) IsEpochBeyondCurrMax(epoch abi.ChainEpoch) bool { func (filec *FilecoinEC) IsEpochBeyondCurrMax(epoch abi.ChainEpoch) bool {
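
The replacements above swap bare `return err` for `xerrors.Errorf("...: %w", err)` so block-validation failures say where they came from. A minimal sketch, with hypothetical helper names, showing that `%w` wrapping keeps the original cause inspectable:

```go
package main

import (
	"errors"
	"fmt"

	"golang.org/x/xerrors"
)

var errNotFound = errors.New("not found")

// loadRoot is a hypothetical stand-in for a call like bmArr.Root().
func loadRoot() error { return errNotFound }

func checkMessages() error {
	if err := loadRoot(); err != nil {
		// Wrapping with %w adds context while keeping the cause inspectable,
		// matching the style used in checkBlockMessages above.
		return xerrors.Errorf("failed to root bls msgs: %w", err)
	}
	return nil
}

func main() {
	err := checkMessages()
	fmt.Println(errors.Is(err, errNotFound)) // true: the cause survives wrapping
}
```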

View File

@ -15,7 +15,7 @@ import (
) )
func (filec *FilecoinEC) CreateBlock(ctx context.Context, w api.Wallet, bt *api.BlockTemplate) (*types.FullBlock, error) { func (filec *FilecoinEC) CreateBlock(ctx context.Context, w api.Wallet, bt *api.BlockTemplate) (*types.FullBlock, error) {
pts, err := filec.sm.ChainStore().LoadTipSet(bt.Parents) pts, err := filec.sm.ChainStore().LoadTipSet(ctx, bt.Parents)
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to load parent tipset: %w", err) return nil, xerrors.Errorf("failed to load parent tipset: %w", err)
} }
@ -59,14 +59,14 @@ func (filec *FilecoinEC) CreateBlock(ctx context.Context, w api.Wallet, bt *api.
blsSigs = append(blsSigs, msg.Signature) blsSigs = append(blsSigs, msg.Signature)
blsMessages = append(blsMessages, &msg.Message) blsMessages = append(blsMessages, &msg.Message)
c, err := filec.sm.ChainStore().PutMessage(&msg.Message) c, err := filec.sm.ChainStore().PutMessage(ctx, &msg.Message)
if err != nil { if err != nil {
return nil, err return nil, err
} }
blsMsgCids = append(blsMsgCids, c) blsMsgCids = append(blsMsgCids, c)
} else if msg.Signature.Type == crypto.SigTypeSecp256k1 { } else if msg.Signature.Type == crypto.SigTypeSecp256k1 {
c, err := filec.sm.ChainStore().PutMessage(msg) c, err := filec.sm.ChainStore().PutMessage(ctx, msg)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View File

@ -639,7 +639,7 @@ func splitGenesisMultisig0(ctx context.Context, em stmgr.ExecMonitor, addr addre
// TODO: After the Liftoff epoch, refactor this to use resetMultisigVesting // TODO: After the Liftoff epoch, refactor this to use resetMultisigVesting
func resetGenesisMsigs0(ctx context.Context, sm *stmgr.StateManager, store adt0.Store, tree *state.StateTree, startEpoch abi.ChainEpoch) error { func resetGenesisMsigs0(ctx context.Context, sm *stmgr.StateManager, store adt0.Store, tree *state.StateTree, startEpoch abi.ChainEpoch) error {
gb, err := sm.ChainStore().GetGenesis() gb, err := sm.ChainStore().GetGenesis(ctx)
if err != nil { if err != nil {
return xerrors.Errorf("getting genesis block: %w", err) return xerrors.Errorf("getting genesis block: %w", err)
} }

View File

@ -87,7 +87,7 @@ func (fcs *fakeCS) ChainGetPath(ctx context.Context, from, to types.TipSetKey) (
} }
// copied from the chainstore // copied from the chainstore
revert, apply, err := store.ReorgOps(func(tsk types.TipSetKey) (*types.TipSet, error) { revert, apply, err := store.ReorgOps(ctx, func(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error) {
return fcs.ChainGetTipSet(ctx, tsk) return fcs.ChainGetTipSet(ctx, tsk)
}, fromTs, toTs) }, fromTs, toTs)
if err != nil { if err != nil {

View File

@ -27,11 +27,11 @@ func NewMockAPI(bs blockstore.Blockstore) *MockAPI {
} }
func (m *MockAPI) ChainHasObj(ctx context.Context, c cid.Cid) (bool, error) { func (m *MockAPI) ChainHasObj(ctx context.Context, c cid.Cid) (bool, error) {
return m.bs.Has(c) return m.bs.Has(ctx, c)
} }
func (m *MockAPI) ChainReadObj(ctx context.Context, c cid.Cid) ([]byte, error) { func (m *MockAPI) ChainReadObj(ctx context.Context, c cid.Cid) ([]byte, error) {
blk, err := m.bs.Get(c) blk, err := m.bs.Get(ctx, c)
if err != nil { if err != nil {
return nil, xerrors.Errorf("blockstore get: %w", err) return nil, xerrors.Errorf("blockstore get: %w", err)
} }

View File

@ -136,7 +136,7 @@ func (s *server) serviceRequest(ctx context.Context, req *validatedRequest) (*Re
_, span := trace.StartSpan(ctx, "chainxchg.ServiceRequest") _, span := trace.StartSpan(ctx, "chainxchg.ServiceRequest")
defer span.End() defer span.End()
chain, err := collectChainSegment(s.cs, req) chain, err := collectChainSegment(ctx, s.cs, req)
if err != nil { if err != nil {
log.Warn("block sync request: collectChainSegment failed: ", err) log.Warn("block sync request: collectChainSegment failed: ", err)
return &Response{ return &Response{
@ -156,13 +156,13 @@ func (s *server) serviceRequest(ctx context.Context, req *validatedRequest) (*Re
}, nil }, nil
} }
func collectChainSegment(cs *store.ChainStore, req *validatedRequest) ([]*BSTipSet, error) { func collectChainSegment(ctx context.Context, cs *store.ChainStore, req *validatedRequest) ([]*BSTipSet, error) {
var bstips []*BSTipSet var bstips []*BSTipSet
cur := req.head cur := req.head
for { for {
var bst BSTipSet var bst BSTipSet
ts, err := cs.LoadTipSet(cur) ts, err := cs.LoadTipSet(ctx, cur)
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed loading tipset %s: %w", cur, err) return nil, xerrors.Errorf("failed loading tipset %s: %w", cur, err)
} }
@ -172,7 +172,7 @@ func collectChainSegment(cs *store.ChainStore, req *validatedRequest) ([]*BSTipS
} }
if req.options.IncludeMessages { if req.options.IncludeMessages {
bmsgs, bmincl, smsgs, smincl, err := gatherMessages(cs, ts) bmsgs, bmincl, smsgs, smincl, err := gatherMessages(ctx, cs, ts)
if err != nil { if err != nil {
return nil, xerrors.Errorf("gather messages failed: %w", err) return nil, xerrors.Errorf("gather messages failed: %w", err)
} }
@ -197,14 +197,14 @@ func collectChainSegment(cs *store.ChainStore, req *validatedRequest) ([]*BSTipS
} }
} }
func gatherMessages(cs *store.ChainStore, ts *types.TipSet) ([]*types.Message, [][]uint64, []*types.SignedMessage, [][]uint64, error) { func gatherMessages(ctx context.Context, cs *store.ChainStore, ts *types.TipSet) ([]*types.Message, [][]uint64, []*types.SignedMessage, [][]uint64, error) {
blsmsgmap := make(map[cid.Cid]uint64) blsmsgmap := make(map[cid.Cid]uint64)
secpkmsgmap := make(map[cid.Cid]uint64) secpkmsgmap := make(map[cid.Cid]uint64)
var secpkincl, blsincl [][]uint64 var secpkincl, blsincl [][]uint64
var blscids, secpkcids []cid.Cid var blscids, secpkcids []cid.Cid
for _, block := range ts.Blocks() { for _, block := range ts.Blocks() {
bc, sc, err := cs.ReadMsgMetaCids(block.Messages) bc, sc, err := cs.ReadMsgMetaCids(ctx, block.Messages)
if err != nil { if err != nil {
return nil, nil, nil, nil, err return nil, nil, nil, nil, err
} }
@ -237,12 +237,12 @@ func gatherMessages(cs *store.ChainStore, ts *types.TipSet) ([]*types.Message, [
secpkincl = append(secpkincl, smi) secpkincl = append(secpkincl, smi)
} }
blsmsgs, err := cs.LoadMessagesFromCids(blscids) blsmsgs, err := cs.LoadMessagesFromCids(ctx, blscids)
if err != nil { if err != nil {
return nil, nil, nil, nil, err return nil, nil, nil, nil, err
} }
secpkmsgs, err := cs.LoadSignedMessagesFromCids(secpkcids) secpkmsgs, err := cs.LoadSignedMessagesFromCids(ctx, secpkcids)
if err != nil { if err != nil {
return nil, nil, nil, nil, err return nil, nil, nil, nil, err
} }

View File

@ -241,7 +241,7 @@ func NewGeneratorWithSectorsAndUpgradeSchedule(numSectors int, us stmgr.UpgradeS
genfb := &types.FullBlock{Header: genb.Genesis} genfb := &types.FullBlock{Header: genb.Genesis}
gents := store.NewFullTipSet([]*types.FullBlock{genfb}) gents := store.NewFullTipSet([]*types.FullBlock{genfb})
if err := cs.SetGenesis(genb.Genesis); err != nil { if err := cs.SetGenesis(context.TODO(), genb.Genesis); err != nil {
return nil, xerrors.Errorf("set genesis failed: %w", err) return nil, xerrors.Errorf("set genesis failed: %w", err)
} }
@ -471,7 +471,7 @@ func (cg *ChainGen) NextTipSetFromMinersWithMessagesAndNulls(base *types.TipSet,
return nil, xerrors.Errorf("making a block for next tipset failed: %w", err) return nil, xerrors.Errorf("making a block for next tipset failed: %w", err)
} }
if err := cg.cs.PersistBlockHeaders(fblk.Header); err != nil { if err := cg.cs.PersistBlockHeaders(context.TODO(), fblk.Header); err != nil {
return nil, xerrors.Errorf("chainstore AddBlock: %w", err) return nil, xerrors.Errorf("chainstore AddBlock: %w", err)
} }

View File

@ -491,10 +491,8 @@ func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, sys vm.Sysca
Actors: filcns.NewActorRegistry(), Actors: filcns.NewActorRegistry(),
Syscalls: mkFakedSigSyscalls(sys), Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc, CircSupplyCalc: csc,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version { NetworkVersion: nv,
return nv BaseFee: types.NewInt(0),
},
BaseFee: types.NewInt(0),
} }
vm, err := vm.NewVM(ctx, &vmopt) vm, err := vm.NewVM(ctx, &vmopt)
if err != nil { if err != nil {
@ -595,7 +593,7 @@ func MakeGenesisBlock(ctx context.Context, j journal.Journal, bs bstore.Blocksto
if err != nil { if err != nil {
return nil, xerrors.Errorf("serializing msgmeta failed: %w", err) return nil, xerrors.Errorf("serializing msgmeta failed: %w", err)
} }
if err := bs.Put(mmb); err != nil { if err := bs.Put(ctx, mmb); err != nil {
return nil, xerrors.Errorf("putting msgmeta block to blockstore: %w", err) return nil, xerrors.Errorf("putting msgmeta block to blockstore: %w", err)
} }
@ -625,7 +623,7 @@ func MakeGenesisBlock(ctx context.Context, j journal.Journal, bs bstore.Blocksto
return nil, xerrors.Errorf("filecoinGenesisCid != gblk.Cid") return nil, xerrors.Errorf("filecoinGenesisCid != gblk.Cid")
} }
if err := bs.Put(gblk); err != nil { if err := bs.Put(ctx, gblk); err != nil {
return nil, xerrors.Errorf("failed writing filecoin genesis block to blockstore: %w", err) return nil, xerrors.Errorf("failed writing filecoin genesis block to blockstore: %w", err)
} }
@ -656,7 +654,7 @@ func MakeGenesisBlock(ctx context.Context, j journal.Journal, bs bstore.Blocksto
return nil, xerrors.Errorf("serializing block header failed: %w", err) return nil, xerrors.Errorf("serializing block header failed: %w", err)
} }
if err := bs.Put(sb); err != nil { if err := bs.Put(ctx, sb); err != nil {
return nil, xerrors.Errorf("putting header to blockstore: %w", err) return nil, xerrors.Errorf("putting header to blockstore: %w", err)
} }

View File

@ -94,10 +94,8 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.Syscal
Actors: filcns.NewActorRegistry(), Actors: filcns.NewActorRegistry(),
Syscalls: mkFakedSigSyscalls(sys), Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc, CircSupplyCalc: csc,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version { NetworkVersion: nv,
return nv BaseFee: types.NewInt(0),
},
BaseFee: types.NewInt(0),
} }
vm, err := vm.NewVM(ctx, vmopt) vm, err := vm.NewVM(ctx, vmopt)
@ -510,31 +508,13 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.Syscal
// TODO: copied from actors test harness, deduplicate or remove from here // TODO: copied from actors test harness, deduplicate or remove from here
type fakeRand struct{} type fakeRand struct{}
func (fr *fakeRand) GetChainRandomnessV2(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) { func (fr *fakeRand) GetChainRandomness(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32) out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch * 1000))).Read(out) //nolint _, _ = rand.New(rand.NewSource(int64(randEpoch * 1000))).Read(out) //nolint
return out, nil return out, nil
} }
func (fr *fakeRand) GetChainRandomnessV1(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) { func (fr *fakeRand) GetBeaconRandomness(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch * 1000))).Read(out) //nolint
return out, nil
}
func (fr *fakeRand) GetBeaconRandomnessV3(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch))).Read(out) //nolint
return out, nil
}
func (fr *fakeRand) GetBeaconRandomnessV2(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch))).Read(out) //nolint
return out, nil
}
func (fr *fakeRand) GetBeaconRandomnessV1(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
out := make([]byte, 32) out := make([]byte, 32)
_, _ = rand.New(rand.NewSource(int64(randEpoch))).Read(out) //nolint _, _ = rand.New(rand.NewSource(int64(randEpoch))).Read(out) //nolint
return out, nil return out, nil
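
With the V1/V2/V3 variants collapsed, a test double now only has to satisfy two methods. A sketch of the reduced interface as inferred from the `fakeRand` signatures above; the authoritative definition lives in lotus's `vm` package, so treat the package name and exact shape here as assumptions:

```go
package vmrand

import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/crypto"
)

// Rand is the randomness surface the VM needs after this refactor: the
// per-network-version variants are gone, and version dispatch happens
// inside the implementation (see the stateRand changes further down).
type Rand interface {
	GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
	GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
}
```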

View File

@ -1,6 +1,7 @@
package slashfilter package slashfilter
import ( import (
"context"
"fmt" "fmt"
"github.com/filecoin-project/lotus/build" "github.com/filecoin-project/lotus/build"
@ -27,7 +28,7 @@ func New(dstore ds.Batching) *SlashFilter {
} }
} }
func (f *SlashFilter) MinedBlock(bh *types.BlockHeader, parentEpoch abi.ChainEpoch) error { func (f *SlashFilter) MinedBlock(ctx context.Context, bh *types.BlockHeader, parentEpoch abi.ChainEpoch) error {
if build.IsNearUpgrade(bh.Height, build.UpgradeOrangeHeight) { if build.IsNearUpgrade(bh.Height, build.UpgradeOrangeHeight) {
return nil return nil
} }
@ -35,7 +36,7 @@ func (f *SlashFilter) MinedBlock(bh *types.BlockHeader, parentEpoch abi.ChainEpo
epochKey := ds.NewKey(fmt.Sprintf("/%s/%d", bh.Miner, bh.Height)) epochKey := ds.NewKey(fmt.Sprintf("/%s/%d", bh.Miner, bh.Height))
{ {
// double-fork mining (2 blocks at one epoch) // double-fork mining (2 blocks at one epoch)
if err := checkFault(f.byEpoch, epochKey, bh, "double-fork mining faults"); err != nil { if err := checkFault(ctx, f.byEpoch, epochKey, bh, "double-fork mining faults"); err != nil {
return err return err
} }
} }
@ -43,7 +44,7 @@ func (f *SlashFilter) MinedBlock(bh *types.BlockHeader, parentEpoch abi.ChainEpo
parentsKey := ds.NewKey(fmt.Sprintf("/%s/%x", bh.Miner, types.NewTipSetKey(bh.Parents...).Bytes())) parentsKey := ds.NewKey(fmt.Sprintf("/%s/%x", bh.Miner, types.NewTipSetKey(bh.Parents...).Bytes()))
{ {
// time-offset mining faults (2 blocks with the same parents) // time-offset mining faults (2 blocks with the same parents)
if err := checkFault(f.byParents, parentsKey, bh, "time-offset mining faults"); err != nil { if err := checkFault(ctx, f.byParents, parentsKey, bh, "time-offset mining faults"); err != nil {
return err return err
} }
} }
@ -53,14 +54,14 @@ func (f *SlashFilter) MinedBlock(bh *types.BlockHeader, parentEpoch abi.ChainEpo
// First check if we have mined a block on the parent epoch // First check if we have mined a block on the parent epoch
parentEpochKey := ds.NewKey(fmt.Sprintf("/%s/%d", bh.Miner, parentEpoch)) parentEpochKey := ds.NewKey(fmt.Sprintf("/%s/%d", bh.Miner, parentEpoch))
have, err := f.byEpoch.Has(parentEpochKey) have, err := f.byEpoch.Has(ctx, parentEpochKey)
if err != nil { if err != nil {
return err return err
} }
if have { if have {
// If we had, make sure it's in our parent tipset // If we had, make sure it's in our parent tipset
cidb, err := f.byEpoch.Get(parentEpochKey) cidb, err := f.byEpoch.Get(ctx, parentEpochKey)
if err != nil { if err != nil {
return xerrors.Errorf("getting other block cid: %w", err) return xerrors.Errorf("getting other block cid: %w", err)
} }
@ -83,25 +84,25 @@ func (f *SlashFilter) MinedBlock(bh *types.BlockHeader, parentEpoch abi.ChainEpo
} }
} }
if err := f.byParents.Put(parentsKey, bh.Cid().Bytes()); err != nil { if err := f.byParents.Put(ctx, parentsKey, bh.Cid().Bytes()); err != nil {
return xerrors.Errorf("putting byEpoch entry: %w", err) return xerrors.Errorf("putting byEpoch entry: %w", err)
} }
if err := f.byEpoch.Put(epochKey, bh.Cid().Bytes()); err != nil { if err := f.byEpoch.Put(ctx, epochKey, bh.Cid().Bytes()); err != nil {
return xerrors.Errorf("putting byEpoch entry: %w", err) return xerrors.Errorf("putting byEpoch entry: %w", err)
} }
return nil return nil
} }
func checkFault(t ds.Datastore, key ds.Key, bh *types.BlockHeader, faultType string) error { func checkFault(ctx context.Context, t ds.Datastore, key ds.Key, bh *types.BlockHeader, faultType string) error {
fault, err := t.Has(key) fault, err := t.Has(ctx, key)
if err != nil { if err != nil {
return err return err
} }
if fault { if fault {
cidb, err := t.Get(key) cidb, err := t.Get(ctx, key)
if err != nil { if err != nil {
return xerrors.Errorf("getting other block cid: %w", err) return xerrors.Errorf("getting other block cid: %w", err)
} }
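
The slash filter catches consensus faults by writing one datastore key per (miner, epoch) for double-fork mining and one per (miner, parents) for time-offset mining; finding either key already present is a fault. A condensed sketch of that idea, using a single store for brevity where the code above keeps separate `byEpoch` and `byParents` stores:

```go
package slashsketch

import (
	"context"
	"fmt"

	ds "github.com/ipfs/go-datastore"
	"golang.org/x/xerrors"
)

// recordBlock sketches the SlashFilter idea: one key per (miner, epoch)
// and one per (miner, parents), so a second block under either key is a fault.
func recordBlock(ctx context.Context, store ds.Datastore, miner string, epoch int64, parents string, blkCid []byte) error {
	for _, key := range []ds.Key{
		ds.NewKey(fmt.Sprintf("/%s/%d", miner, epoch)),   // double-fork check
		ds.NewKey(fmt.Sprintf("/%s/%s", miner, parents)), // time-offset check
	} {
		have, err := store.Has(ctx, key)
		if err != nil {
			return err
		}
		if have {
			return xerrors.Errorf("mining fault: already mined a block under %s", key)
		}
		if err := store.Put(ctx, key, blkCid); err != nil {
			return err
		}
	}
	return nil
}
```

Note how every `Has`/`Get`/`Put` takes the context threaded in by this release's datastore API migration.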

View File

@ -89,7 +89,7 @@ func (fm *FundManager) Start() error {
// - in State() only load addresses with in-progress messages // - in State() only load addresses with in-progress messages
// - load the others just-in-time from getFundedAddress // - load the others just-in-time from getFundedAddress
// - delete(fm.fundedAddrs, addr) when the queue has been processed // - delete(fm.fundedAddrs, addr) when the queue has been processed
return fm.str.forEach(func(state *FundedAddressState) { return fm.str.forEach(fm.ctx, func(state *FundedAddressState) {
fa := newFundedAddress(fm, state.Addr) fa := newFundedAddress(fm, state.Addr)
fa.state = state fa.state = state
fm.fundedAddrs[fa.state.Addr] = fa fm.fundedAddrs[fa.state.Addr] = fa
@ -322,7 +322,7 @@ func (a *fundedAddress) clearWaitState() {
// Save state to datastore // Save state to datastore
func (a *fundedAddress) saveState() { func (a *fundedAddress) saveState() {
// Not much we can do if saving to the datastore fails, just log // Not much we can do if saving to the datastore fails, just log
err := a.str.save(a.state) err := a.str.save(a.ctx, a.state)
if err != nil { if err != nil {
log.Errorf("saving state to store for addr %s: %v", a.state.Addr, err) log.Errorf("saving state to store for addr %s: %v", a.state.Addr, err)
} }

View File

@ -2,6 +2,7 @@ package market
import ( import (
"bytes" "bytes"
"context"
cborrpc "github.com/filecoin-project/go-cbor-util" cborrpc "github.com/filecoin-project/go-cbor-util"
"github.com/ipfs/go-datastore" "github.com/ipfs/go-datastore"
@ -27,7 +28,7 @@ func newStore(ds dtypes.MetadataDS) *Store {
} }
// save the state to the datastore // save the state to the datastore
func (ps *Store) save(state *FundedAddressState) error { func (ps *Store) save(ctx context.Context, state *FundedAddressState) error {
k := dskeyForAddr(state.Addr) k := dskeyForAddr(state.Addr)
b, err := cborrpc.Dump(state) b, err := cborrpc.Dump(state)
@ -35,14 +36,14 @@ func (ps *Store) save(state *FundedAddressState) error {
return err return err
} }
return ps.ds.Put(k, b) return ps.ds.Put(ctx, k, b)
} }
// get the state for the given address // get the state for the given address
func (ps *Store) get(addr address.Address) (*FundedAddressState, error) { func (ps *Store) get(ctx context.Context, addr address.Address) (*FundedAddressState, error) {
k := dskeyForAddr(addr) k := dskeyForAddr(addr)
data, err := ps.ds.Get(k) data, err := ps.ds.Get(ctx, k)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -56,8 +57,8 @@ func (ps *Store) get(addr address.Address) (*FundedAddressState, error) {
} }
// forEach calls iter with each address in the datastore // forEach calls iter with each address in the datastore
func (ps *Store) forEach(iter func(*FundedAddressState)) error { func (ps *Store) forEach(ctx context.Context, iter func(*FundedAddressState)) error {
res, err := ps.ds.Query(dsq.Query{Prefix: dsKeyAddr}) res, err := ps.ds.Query(ctx, dsq.Query{Prefix: dsKeyAddr})
if err != nil { if err != nil {
return err return err
} }
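
`forEach` above walks every persisted `FundedAddressState` by querying the datastore under a key prefix. A generic sketch of that iteration pattern against `go-datastore`, with the prefix and decode step left abstract:

```go
package fundsketch

import (
	"context"

	ds "github.com/ipfs/go-datastore"
	dsq "github.com/ipfs/go-datastore/query"
)

// forEachUnderPrefix visits every value stored under prefix, mirroring the
// Store.forEach pattern above; decoding into FundedAddressState is elided.
func forEachUnderPrefix(ctx context.Context, store ds.Datastore, prefix string, visit func(key string, val []byte) error) error {
	res, err := store.Query(ctx, dsq.Query{Prefix: prefix})
	if err != nil {
		return err
	}
	defer res.Close() //nolint:errcheck

	for r := range res.Next() {
		if r.Error != nil {
			return r.Error
		}
		if err := visit(r.Key, r.Value); err != nil {
			return err
		}
	}
	return nil
}
```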

View File

@ -1,6 +1,7 @@
package messagepool package messagepool
import ( import (
"context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"time" "time"
@ -20,8 +21,8 @@ var (
ConfigKey = datastore.NewKey("/mpool/config") ConfigKey = datastore.NewKey("/mpool/config")
) )
func loadConfig(ds dtypes.MetadataDS) (*types.MpoolConfig, error) { func loadConfig(ctx context.Context, ds dtypes.MetadataDS) (*types.MpoolConfig, error) {
haveCfg, err := ds.Has(ConfigKey) haveCfg, err := ds.Has(ctx, ConfigKey)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -30,7 +31,7 @@ func loadConfig(ds dtypes.MetadataDS) (*types.MpoolConfig, error) {
return DefaultConfig(), nil return DefaultConfig(), nil
} }
cfgBytes, err := ds.Get(ConfigKey) cfgBytes, err := ds.Get(ctx, ConfigKey)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -39,12 +40,12 @@ func loadConfig(ds dtypes.MetadataDS) (*types.MpoolConfig, error) {
return cfg, err return cfg, err
} }
func saveConfig(cfg *types.MpoolConfig, ds dtypes.MetadataDS) error { func saveConfig(ctx context.Context, cfg *types.MpoolConfig, ds dtypes.MetadataDS) error {
cfgBytes, err := json.Marshal(cfg) cfgBytes, err := json.Marshal(cfg)
if err != nil { if err != nil {
return err return err
} }
return ds.Put(ConfigKey, cfgBytes) return ds.Put(ctx, ConfigKey, cfgBytes)
} }
func (mp *MessagePool) GetConfig() *types.MpoolConfig { func (mp *MessagePool) GetConfig() *types.MpoolConfig {
@ -68,7 +69,7 @@ func validateConfg(cfg *types.MpoolConfig) error {
return nil return nil
} }
func (mp *MessagePool) SetConfig(cfg *types.MpoolConfig) error { func (mp *MessagePool) SetConfig(ctx context.Context, cfg *types.MpoolConfig) error {
if err := validateConfg(cfg); err != nil { if err := validateConfg(cfg); err != nil {
return err return err
} }
@ -76,7 +77,7 @@ func (mp *MessagePool) SetConfig(cfg *types.MpoolConfig) error {
mp.cfgLk.Lock() mp.cfgLk.Lock()
mp.cfg = cfg mp.cfg = cfg
err := saveConfig(cfg, mp.ds) err := saveConfig(ctx, cfg, mp.ds)
if err != nil { if err != nil {
log.Warnf("error persisting mpool config: %s", err) log.Warnf("error persisting mpool config: %s", err)
} }
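
`loadConfig` treats a missing key as "use defaults" and only then round-trips JSON, so a fresh node never needs a config write before its first read. A compact sketch of that load-or-default pattern; the `Config` fields here are invented for illustration:

```go
package mpoolcfg

import (
	"context"
	"encoding/json"

	ds "github.com/ipfs/go-datastore"
)

var configKey = ds.NewKey("/mpool/config")

type Config struct {
	PriorityAddrs []string `json:"priority_addrs"`
}

func defaultConfig() *Config { return &Config{} }

// loadConfig mirrors the pattern above: a missing key means "use defaults",
// anything else round-trips through JSON.
func loadConfig(ctx context.Context, store ds.Datastore) (*Config, error) {
	have, err := store.Has(ctx, configKey)
	if err != nil {
		return nil, err
	}
	if !have {
		return defaultConfig(), nil
	}
	raw, err := store.Get(ctx, configKey)
	if err != nil {
		return nil, err
	}
	cfg := new(Config)
	err = json.Unmarshal(raw, cfg)
	return cfg, err
}
```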

View File

@ -173,10 +173,17 @@ type MessagePool struct {
sigValCache *lru.TwoQueueCache sigValCache *lru.TwoQueueCache
nonceCache *lru.Cache
evtTypes [3]journal.EventType evtTypes [3]journal.EventType
journal journal.Journal journal journal.Journal
} }
type nonceCacheKey struct {
tsk types.TipSetKey
addr address.Address
}
type msgSet struct { type msgSet struct {
msgs map[uint64]*types.SignedMessage msgs map[uint64]*types.SignedMessage
nextNonce uint64 nextNonce uint64
@ -196,10 +203,10 @@ func ComputeMinRBF(curPrem abi.TokenAmount) abi.TokenAmount {
return types.BigAdd(minPrice, types.NewInt(1)) return types.BigAdd(minPrice, types.NewInt(1))
} }
func CapGasFee(mff dtypes.DefaultMaxFeeFunc, msg *types.Message, sendSepc *api.MessageSendSpec) { func CapGasFee(mff dtypes.DefaultMaxFeeFunc, msg *types.Message, sendSpec *api.MessageSendSpec) {
var maxFee abi.TokenAmount var maxFee abi.TokenAmount
if sendSepc != nil { if sendSpec != nil {
maxFee = sendSepc.MaxFee maxFee = sendSpec.MaxFee
} }
if maxFee.Int == nil || maxFee.Equals(big.Zero()) { if maxFee.Int == nil || maxFee.Equals(big.Zero()) {
mf, err := mff() mf, err := mff()
@ -358,11 +365,12 @@ func (ms *msgSet) toSlice() []*types.SignedMessage {
return set return set
} }
func New(api Provider, ds dtypes.MetadataDS, us stmgr.UpgradeSchedule, netName dtypes.NetworkName, j journal.Journal) (*MessagePool, error) { func New(ctx context.Context, api Provider, ds dtypes.MetadataDS, us stmgr.UpgradeSchedule, netName dtypes.NetworkName, j journal.Journal) (*MessagePool, error) {
cache, _ := lru.New2Q(build.BlsSignatureCacheSize) cache, _ := lru.New2Q(build.BlsSignatureCacheSize)
verifcache, _ := lru.New2Q(build.VerifSigCacheSize) verifcache, _ := lru.New2Q(build.VerifSigCacheSize)
noncecache, _ := lru.New(256)
cfg, err := loadConfig(ds) cfg, err := loadConfig(ctx, ds)
if err != nil { if err != nil {
return nil, xerrors.Errorf("error loading mpool config: %w", err) return nil, xerrors.Errorf("error loading mpool config: %w", err)
} }
@ -386,6 +394,7 @@ func New(api Provider, ds dtypes.MetadataDS, us stmgr.UpgradeSchedule, netName d
pruneCooldown: make(chan struct{}, 1), pruneCooldown: make(chan struct{}, 1),
blsSigCache: cache, blsSigCache: cache,
sigValCache: verifcache, sigValCache: verifcache,
nonceCache: noncecache,
changes: lps.New(50), changes: lps.New(50),
localMsgs: namespace.Wrap(ds, datastore.NewKey(localMsgsDs)), localMsgs: namespace.Wrap(ds, datastore.NewKey(localMsgsDs)),
api: api, api: api,
@ -601,7 +610,7 @@ func (mp *MessagePool) addLocal(ctx context.Context, m *types.SignedMessage) err
return xerrors.Errorf("error serializing message: %w", err) return xerrors.Errorf("error serializing message: %w", err)
} }
if err := mp.localMsgs.Put(datastore.NewKey(string(m.Cid().Bytes())), msgb); err != nil { if err := mp.localMsgs.Put(ctx, datastore.NewKey(string(m.Cid().Bytes())), msgb); err != nil {
return xerrors.Errorf("persisting local message: %w", err) return xerrors.Errorf("persisting local message: %w", err)
} }
@ -909,12 +918,12 @@ func (mp *MessagePool) addLocked(ctx context.Context, m *types.SignedMessage, st
mp.blsSigCache.Add(m.Cid(), m.Signature) mp.blsSigCache.Add(m.Cid(), m.Signature)
} }
if _, err := mp.api.PutMessage(m); err != nil { if _, err := mp.api.PutMessage(ctx, m); err != nil {
log.Warnf("mpooladd cs.PutMessage failed: %s", err) log.Warnf("mpooladd cs.PutMessage failed: %s", err)
return err return err
} }
if _, err := mp.api.PutMessage(&m.Message); err != nil { if _, err := mp.api.PutMessage(ctx, &m.Message); err != nil {
log.Warnf("mpooladd cs.PutMessage failed: %s", err) log.Warnf("mpooladd cs.PutMessage failed: %s", err)
return err return err
} }
@ -1016,11 +1025,23 @@ func (mp *MessagePool) getStateNonce(ctx context.Context, addr address.Address,
done := metrics.Timer(ctx, metrics.MpoolGetNonceDuration) done := metrics.Timer(ctx, metrics.MpoolGetNonceDuration)
defer done() defer done()
nk := nonceCacheKey{
tsk: ts.Key(),
addr: addr,
}
n, ok := mp.nonceCache.Get(nk)
if ok {
return n.(uint64), nil
}
act, err := mp.api.GetActorAfter(addr, ts) act, err := mp.api.GetActorAfter(addr, ts)
if err != nil { if err != nil {
return 0, err return 0, err
} }
mp.nonceCache.Add(nk, act.Nonce)
return act.Nonce, nil return act.Nonce, nil
} }
@ -1207,7 +1228,7 @@ func (mp *MessagePool) HeadChange(ctx context.Context, revert []*types.TipSet, a
var merr error var merr error
for _, ts := range revert { for _, ts := range revert {
pts, err := mp.api.LoadTipSet(ts.Parents()) pts, err := mp.api.LoadTipSet(ctx, ts.Parents())
if err != nil { if err != nil {
log.Errorf("error loading reverted tipset parent: %s", err) log.Errorf("error loading reverted tipset parent: %s", err)
merr = multierror.Append(merr, err) merr = multierror.Append(merr, err)
@ -1216,7 +1237,7 @@ func (mp *MessagePool) HeadChange(ctx context.Context, revert []*types.TipSet, a
mp.curTs = pts mp.curTs = pts
msgs, err := mp.MessagesForBlocks(ts.Blocks()) msgs, err := mp.MessagesForBlocks(ctx, ts.Blocks())
if err != nil { if err != nil {
log.Errorf("error retrieving messages for reverted block: %s", err) log.Errorf("error retrieving messages for reverted block: %s", err)
merr = multierror.Append(merr, err) merr = multierror.Append(merr, err)
@ -1232,7 +1253,7 @@ func (mp *MessagePool) HeadChange(ctx context.Context, revert []*types.TipSet, a
mp.curTs = ts mp.curTs = ts
for _, b := range ts.Blocks() { for _, b := range ts.Blocks() {
bmsgs, smsgs, err := mp.api.MessagesForBlock(b) bmsgs, smsgs, err := mp.api.MessagesForBlock(ctx, b)
if err != nil { if err != nil {
xerr := xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err) xerr := xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err)
log.Errorf("error retrieving messages for block: %s", xerr) log.Errorf("error retrieving messages for block: %s", xerr)
@ -1338,7 +1359,7 @@ func (mp *MessagePool) HeadChange(ctx context.Context, revert []*types.TipSet, a
return merr return merr
} }
func (mp *MessagePool) runHeadChange(from *types.TipSet, to *types.TipSet, rmsgs map[address.Address]map[uint64]*types.SignedMessage) error { func (mp *MessagePool) runHeadChange(ctx context.Context, from *types.TipSet, to *types.TipSet, rmsgs map[address.Address]map[uint64]*types.SignedMessage) error {
add := func(m *types.SignedMessage) { add := func(m *types.SignedMessage) {
s, ok := rmsgs[m.Message.From] s, ok := rmsgs[m.Message.From]
if !ok { if !ok {
@ -1360,7 +1381,7 @@ func (mp *MessagePool) runHeadChange(from *types.TipSet, to *types.TipSet, rmsgs
} }
revert, apply, err := store.ReorgOps(mp.api.LoadTipSet, from, to) revert, apply, err := store.ReorgOps(ctx, mp.api.LoadTipSet, from, to)
if err != nil { if err != nil {
return xerrors.Errorf("failed to compute reorg ops for mpool pending messages: %w", err) return xerrors.Errorf("failed to compute reorg ops for mpool pending messages: %w", err)
} }
@ -1368,7 +1389,7 @@ func (mp *MessagePool) runHeadChange(from *types.TipSet, to *types.TipSet, rmsgs
var merr error var merr error
for _, ts := range revert { for _, ts := range revert {
msgs, err := mp.MessagesForBlocks(ts.Blocks()) msgs, err := mp.MessagesForBlocks(ctx, ts.Blocks())
if err != nil { if err != nil {
log.Errorf("error retrieving messages for reverted block: %s", err) log.Errorf("error retrieving messages for reverted block: %s", err)
merr = multierror.Append(merr, err) merr = multierror.Append(merr, err)
@ -1382,7 +1403,7 @@ func (mp *MessagePool) runHeadChange(from *types.TipSet, to *types.TipSet, rmsgs
for _, ts := range apply { for _, ts := range apply {
for _, b := range ts.Blocks() { for _, b := range ts.Blocks() {
bmsgs, smsgs, err := mp.api.MessagesForBlock(b) bmsgs, smsgs, err := mp.api.MessagesForBlock(ctx, b)
if err != nil { if err != nil {
xerr := xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err) xerr := xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err)
log.Errorf("error retrieving messages for block: %s", xerr) log.Errorf("error retrieving messages for block: %s", xerr)
@ -1407,11 +1428,11 @@ type statBucket struct {
msgs map[uint64]*types.SignedMessage msgs map[uint64]*types.SignedMessage
} }
func (mp *MessagePool) MessagesForBlocks(blks []*types.BlockHeader) ([]*types.SignedMessage, error) { func (mp *MessagePool) MessagesForBlocks(ctx context.Context, blks []*types.BlockHeader) ([]*types.SignedMessage, error) {
out := make([]*types.SignedMessage, 0) out := make([]*types.SignedMessage, 0)
for _, b := range blks { for _, b := range blks {
bmsgs, smsgs, err := mp.api.MessagesForBlock(b) bmsgs, smsgs, err := mp.api.MessagesForBlock(ctx, b)
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err) return nil, xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err)
} }
@ -1477,7 +1498,7 @@ func (mp *MessagePool) Updates(ctx context.Context) (<-chan api.MpoolUpdate, err
} }
func (mp *MessagePool) loadLocal(ctx context.Context) error { func (mp *MessagePool) loadLocal(ctx context.Context) error {
res, err := mp.localMsgs.Query(query.Query{}) res, err := mp.localMsgs.Query(ctx, query.Query{})
if err != nil { if err != nil {
return xerrors.Errorf("query local messages: %w", err) return xerrors.Errorf("query local messages: %w", err)
} }
@ -1525,7 +1546,7 @@ func (mp *MessagePool) Clear(ctx context.Context, local bool) {
if ok { if ok {
for _, m := range mset.msgs { for _, m := range mset.msgs {
err := mp.localMsgs.Delete(datastore.NewKey(string(m.Cid().Bytes()))) err := mp.localMsgs.Delete(ctx, datastore.NewKey(string(m.Cid().Bytes())))
if err != nil { if err != nil {
log.Warnf("error deleting local message: %s", err) log.Warnf("error deleting local message: %s", err)
} }
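
The message pool changes above add an LRU that memoizes `getStateNonce` by (tipset key, address). That is sound because an actor's nonce at a fixed tipset never changes, and it spares a state lookup on every `GetNonce` call. A standalone sketch of the same idea over `hashicorp/golang-lru`, with key types simplified to strings:

```go
package noncecache

import (
	lru "github.com/hashicorp/golang-lru"
)

type nonceKey struct {
	tsk  string // types.TipSetKey in lotus; a string here keeps the key comparable
	addr string
}

type nonceCache struct {
	cache *lru.Cache
}

func newNonceCache(size int) (*nonceCache, error) {
	c, err := lru.New(size)
	if err != nil {
		return nil, err
	}
	return &nonceCache{cache: c}, nil
}

// get returns the cached nonce for (tipset, addr), falling back to the
// supplied state lookup and memoizing the result -- safe because the nonce
// for an address at a fixed tipset never changes.
func (nc *nonceCache) get(tsk, addr string, lookup func() (uint64, error)) (uint64, error) {
	k := nonceKey{tsk: tsk, addr: addr}
	if v, ok := nc.cache.Get(k); ok {
		return v.(uint64), nil
	}
	n, err := lookup()
	if err != nil {
		return 0, err
	}
	nc.cache.Add(k, n)
	return n, nil
}
```

In the diff the key is the `nonceCacheKey{tsk, addr}` struct and the cache is created with `lru.New(256)`.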

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagepool package messagepool
import ( import (
@ -103,7 +104,7 @@ func (tma *testMpoolAPI) SubscribeHeadChanges(cb func(rev, app []*types.TipSet)
return tma.tipsets[0] return tma.tipsets[0]
} }
func (tma *testMpoolAPI) PutMessage(m types.ChainMsg) (cid.Cid, error) { func (tma *testMpoolAPI) PutMessage(ctx context.Context, m types.ChainMsg) (cid.Cid, error) {
return cid.Undef, nil return cid.Undef, nil
} }
@ -164,16 +165,16 @@ func (tma *testMpoolAPI) StateAccountKeyAtFinality(ctx context.Context, addr add
return addr, nil return addr, nil
} }
func (tma *testMpoolAPI) MessagesForBlock(h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) { func (tma *testMpoolAPI) MessagesForBlock(ctx context.Context, h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
return nil, tma.bmsgs[h.Cid()], nil return nil, tma.bmsgs[h.Cid()], nil
} }
func (tma *testMpoolAPI) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) { func (tma *testMpoolAPI) MessagesForTipset(ctx context.Context, ts *types.TipSet) ([]types.ChainMsg, error) {
if len(ts.Blocks()) != 1 { if len(ts.Blocks()) != 1 {
panic("cant deal with multiblock tipsets in this test") panic("cant deal with multiblock tipsets in this test")
} }
bm, sm, err := tma.MessagesForBlock(ts.Blocks()[0]) bm, sm, err := tma.MessagesForBlock(ctx, ts.Blocks()[0])
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -190,7 +191,7 @@ func (tma *testMpoolAPI) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg,
return out, nil return out, nil
} }
func (tma *testMpoolAPI) LoadTipSet(tsk types.TipSetKey) (*types.TipSet, error) { func (tma *testMpoolAPI) LoadTipSet(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error) {
for _, ts := range tma.tipsets { for _, ts := range tma.tipsets {
if types.CidArrsEqual(tsk.Cids(), ts.Cids()) { if types.CidArrsEqual(tsk.Cids(), ts.Cids()) {
return ts, nil return ts, nil
@ -206,6 +207,7 @@ func (tma *testMpoolAPI) ChainComputeBaseFee(ctx context.Context, ts *types.TipS
func assertNonce(t *testing.T, mp *MessagePool, addr address.Address, val uint64) { func assertNonce(t *testing.T, mp *MessagePool, addr address.Address, val uint64) {
t.Helper() t.Helper()
//stm: @CHAIN_MEMPOOL_GET_NONCE_001
n, err := mp.GetNonce(context.TODO(), addr, types.EmptyTSK) n, err := mp.GetNonce(context.TODO(), addr, types.EmptyTSK)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -233,7 +235,7 @@ func TestMessagePool(t *testing.T) {
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -277,7 +279,7 @@ func TestCheckMessageBig(t *testing.T) {
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
assert.NoError(t, err) assert.NoError(t, err)
to := mock.Address(1001) to := mock.Address(1001)
@ -340,7 +342,7 @@ func TestMessagePoolMessagesInEachBlock(t *testing.T) {
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -366,8 +368,10 @@ func TestMessagePoolMessagesInEachBlock(t *testing.T) {
tma.applyBlock(t, a) tma.applyBlock(t, a)
tsa := mock.TipSet(a) tsa := mock.TipSet(a)
//stm: @CHAIN_MEMPOOL_PENDING_001
_, _ = mp.Pending(context.TODO()) _, _ = mp.Pending(context.TODO())
//stm: @CHAIN_MEMPOOL_SELECT_001
selm, _ := mp.SelectMessages(context.Background(), tsa, 1) selm, _ := mp.SelectMessages(context.Background(), tsa, 1)
if len(selm) == 0 { if len(selm) == 0 {
t.Fatal("should have returned the rest of the messages") t.Fatal("should have returned the rest of the messages")
@ -389,7 +393,7 @@ func TestRevertMessages(t *testing.T) {
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -428,6 +432,7 @@ func TestRevertMessages(t *testing.T) {
assertNonce(t, mp, sender, 4) assertNonce(t, mp, sender, 4)
//stm: @CHAIN_MEMPOOL_PENDING_001
p, _ := mp.Pending(context.TODO()) p, _ := mp.Pending(context.TODO())
fmt.Printf("%+v\n", p) fmt.Printf("%+v\n", p)
if len(p) != 3 { if len(p) != 3 {
@ -452,7 +457,7 @@ func TestPruningSimple(t *testing.T) {
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -486,6 +491,7 @@ func TestPruningSimple(t *testing.T) {
mp.Prune() mp.Prune()
//stm: @CHAIN_MEMPOOL_PENDING_001
msgs, _ := mp.Pending(context.TODO()) msgs, _ := mp.Pending(context.TODO())
if len(msgs) != 5 { if len(msgs) != 5 {
t.Fatal("expected only 5 messages in pool, got: ", len(msgs)) t.Fatal("expected only 5 messages in pool, got: ", len(msgs))
@ -496,7 +502,7 @@ func TestLoadLocal(t *testing.T) {
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -528,6 +534,7 @@ func TestLoadLocal(t *testing.T) {
msgs := make(map[cid.Cid]struct{}) msgs := make(map[cid.Cid]struct{})
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1)) m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
cid, err := mp.Push(context.TODO(), m) cid, err := mp.Push(context.TODO(), m)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -539,11 +546,12 @@ func TestLoadLocal(t *testing.T) {
t.Fatal(err) t.Fatal(err)
} }
mp, err = New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err = New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
//stm: @CHAIN_MEMPOOL_PENDING_001
pmsgs, _ := mp.Pending(context.TODO()) pmsgs, _ := mp.Pending(context.TODO())
if len(msgs) != len(pmsgs) { if len(msgs) != len(pmsgs) {
t.Fatalf("expected %d messages, but got %d", len(msgs), len(pmsgs)) t.Fatalf("expected %d messages, but got %d", len(msgs), len(pmsgs))
@ -568,7 +576,7 @@ func TestClearAll(t *testing.T) {
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -599,6 +607,7 @@ func TestClearAll(t *testing.T) {
gasLimit := gasguess.Costs[gasguess.CostKey{Code: builtin2.StorageMarketActorCodeID, M: 2}] gasLimit := gasguess.Costs[gasguess.CostKey{Code: builtin2.StorageMarketActorCodeID, M: 2}]
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1)) m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m) _, err := mp.Push(context.TODO(), m)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -610,8 +619,10 @@ func TestClearAll(t *testing.T) {
mustAdd(t, mp, m) mustAdd(t, mp, m)
} }
//stm: @CHAIN_MEMPOOL_CLEAR_001
mp.Clear(context.Background(), true) mp.Clear(context.Background(), true)
//stm: @CHAIN_MEMPOOL_PENDING_001
pending, _ := mp.Pending(context.TODO()) pending, _ := mp.Pending(context.TODO())
if len(pending) > 0 { if len(pending) > 0 {
t.Fatalf("cleared the mpool, but got %d pending messages", len(pending)) t.Fatalf("cleared the mpool, but got %d pending messages", len(pending))
@ -622,7 +633,7 @@ func TestClearNonLocal(t *testing.T) {
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -654,6 +665,7 @@ func TestClearNonLocal(t *testing.T) {
gasLimit := gasguess.Costs[gasguess.CostKey{Code: builtin2.StorageMarketActorCodeID, M: 2}] gasLimit := gasguess.Costs[gasguess.CostKey{Code: builtin2.StorageMarketActorCodeID, M: 2}]
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1)) m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m) _, err := mp.Push(context.TODO(), m)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -665,8 +677,10 @@ func TestClearNonLocal(t *testing.T) {
mustAdd(t, mp, m) mustAdd(t, mp, m)
} }
//stm: @CHAIN_MEMPOOL_CLEAR_001
mp.Clear(context.Background(), false) mp.Clear(context.Background(), false)
//stm: @CHAIN_MEMPOOL_PENDING_001
pending, _ := mp.Pending(context.TODO()) pending, _ := mp.Pending(context.TODO())
if len(pending) != 10 { if len(pending) != 10 {
t.Fatalf("expected 10 pending messages, but got %d instead", len(pending)) t.Fatalf("expected 10 pending messages, but got %d instead", len(pending))
@ -683,7 +697,7 @@ func TestUpdates(t *testing.T) {
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -724,6 +738,7 @@ func TestUpdates(t *testing.T) {
for i := 0; i < 10; i++ { for i := 0; i < 10; i++ {
m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1)) m := makeTestMessage(w1, a1, a2, uint64(i), gasLimit, uint64(i+1))
//stm: @CHAIN_MEMPOOL_PUSH_001
_, err := mp.Push(context.TODO(), m) _, err := mp.Push(context.TODO(), m)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)

View File

@ -23,13 +23,13 @@ var (
type Provider interface { type Provider interface {
SubscribeHeadChanges(func(rev, app []*types.TipSet) error) *types.TipSet SubscribeHeadChanges(func(rev, app []*types.TipSet) error) *types.TipSet
PutMessage(m types.ChainMsg) (cid.Cid, error) PutMessage(ctx context.Context, m types.ChainMsg) (cid.Cid, error)
PubSubPublish(string, []byte) error PubSubPublish(string, []byte) error
GetActorAfter(address.Address, *types.TipSet) (*types.Actor, error) GetActorAfter(address.Address, *types.TipSet) (*types.Actor, error)
StateAccountKeyAtFinality(context.Context, address.Address, *types.TipSet) (address.Address, error) StateAccountKeyAtFinality(context.Context, address.Address, *types.TipSet) (address.Address, error)
MessagesForBlock(*types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) MessagesForBlock(context.Context, *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error)
MessagesForTipset(*types.TipSet) ([]types.ChainMsg, error) MessagesForTipset(context.Context, *types.TipSet) ([]types.ChainMsg, error)
LoadTipSet(tsk types.TipSetKey) (*types.TipSet, error) LoadTipSet(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error)
ChainComputeBaseFee(ctx context.Context, ts *types.TipSet) (types.BigInt, error) ChainComputeBaseFee(ctx context.Context, ts *types.TipSet) (types.BigInt, error)
IsLite() bool IsLite() bool
} }
@ -66,8 +66,8 @@ func (mpp *mpoolProvider) SubscribeHeadChanges(cb func(rev, app []*types.TipSet)
return mpp.sm.ChainStore().GetHeaviestTipSet() return mpp.sm.ChainStore().GetHeaviestTipSet()
} }
func (mpp *mpoolProvider) PutMessage(m types.ChainMsg) (cid.Cid, error) { func (mpp *mpoolProvider) PutMessage(ctx context.Context, m types.ChainMsg) (cid.Cid, error) {
return mpp.sm.ChainStore().PutMessage(m) return mpp.sm.ChainStore().PutMessage(ctx, m)
} }
func (mpp *mpoolProvider) PubSubPublish(k string, v []byte) error { func (mpp *mpoolProvider) PubSubPublish(k string, v []byte) error {
@ -103,16 +103,16 @@ func (mpp *mpoolProvider) StateAccountKeyAtFinality(ctx context.Context, addr ad
return mpp.sm.ResolveToKeyAddressAtFinality(ctx, addr, ts) return mpp.sm.ResolveToKeyAddressAtFinality(ctx, addr, ts)
} }
func (mpp *mpoolProvider) MessagesForBlock(h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) { func (mpp *mpoolProvider) MessagesForBlock(ctx context.Context, h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
return mpp.sm.ChainStore().MessagesForBlock(h) return mpp.sm.ChainStore().MessagesForBlock(ctx, h)
} }
func (mpp *mpoolProvider) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) { func (mpp *mpoolProvider) MessagesForTipset(ctx context.Context, ts *types.TipSet) ([]types.ChainMsg, error) {
return mpp.sm.ChainStore().MessagesForTipset(ts) return mpp.sm.ChainStore().MessagesForTipset(ctx, ts)
} }
func (mpp *mpoolProvider) LoadTipSet(tsk types.TipSetKey) (*types.TipSet, error) { func (mpp *mpoolProvider) LoadTipSet(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error) {
return mpp.sm.ChainStore().LoadTipSet(tsk) return mpp.sm.ChainStore().LoadTipSet(ctx, tsk)
} }
func (mpp *mpoolProvider) ChainComputeBaseFee(ctx context.Context, ts *types.TipSet) (types.BigInt, error) { func (mpp *mpoolProvider) ChainComputeBaseFee(ctx context.Context, ts *types.TipSet) (types.BigInt, error) {
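
Most of this file's change, like dozens of others in this release, is mechanical: chainstore and datastore accessors gain a leading `context.Context`, and thin wrappers such as `mpoolProvider` simply forward it. A toy sketch of the forwarding shape; all types here are placeholders, not lotus's:

```go
package ctxthread

import "context"

// store stands in for the ChainStore: after the migration every accessor
// takes ctx so cancellation and tracing reach the underlying datastore.
type store struct{}

func (s *store) PutMessage(ctx context.Context, msg []byte) (string, error) {
	// ... write through to a ctx-aware blockstore ...
	return "bafy-placeholder", nil
}

// provider mirrors mpoolProvider: no logic of its own, it just threads ctx.
type provider struct{ cs *store }

func (p *provider) PutMessage(ctx context.Context, msg []byte) (string, error) {
	return p.cs.PutMessage(ctx, msg)
}
```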

View File

@ -49,7 +49,7 @@ func (mp *MessagePool) pruneMessages(ctx context.Context, ts *types.TipSet) erro
} }
baseFeeLowerBound := getBaseFeeLowerBound(baseFee, baseFeeLowerBoundFactor) baseFeeLowerBound := getBaseFeeLowerBound(baseFee, baseFeeLowerBoundFactor)
pending, _ := mp.getPendingMessages(ts, ts) pending, _ := mp.getPendingMessages(ctx, ts, ts)
// protected actors -- not pruned // protected actors -- not pruned
protected := make(map[address.Address]struct{}) protected := make(map[address.Address]struct{})

View File

@ -25,7 +25,7 @@ func TestRepubMessages(t *testing.T) {
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "mptest", nil)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }

View File

@ -210,7 +210,7 @@ func (mp *MessagePool) selectMessagesOptimal(ctx context.Context, curTs, ts *typ
// 0. Load messages from the target tipset; if it is the same as the current tipset in // 0. Load messages from the target tipset; if it is the same as the current tipset in
// the mpool, then this is just the pending messages // the mpool, then this is just the pending messages
pending, err := mp.getPendingMessages(curTs, ts) pending, err := mp.getPendingMessages(ctx, curTs, ts)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -458,7 +458,7 @@ func (mp *MessagePool) selectMessagesGreedy(ctx context.Context, curTs, ts *type
// 0. Load messages for the target tipset; if it is the same as the current tipset in the mpool // 0. Load messages for the target tipset; if it is the same as the current tipset in the mpool
// then this is just the pending messages // then this is just the pending messages
pending, err := mp.getPendingMessages(curTs, ts) pending, err := mp.getPendingMessages(ctx, curTs, ts)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -695,7 +695,7 @@ tailLoop:
return result return result
} }
func (mp *MessagePool) getPendingMessages(curTs, ts *types.TipSet) (map[address.Address]map[uint64]*types.SignedMessage, error) { func (mp *MessagePool) getPendingMessages(ctx context.Context, curTs, ts *types.TipSet) (map[address.Address]map[uint64]*types.SignedMessage, error) {
start := time.Now() start := time.Now()
result := make(map[address.Address]map[uint64]*types.SignedMessage) result := make(map[address.Address]map[uint64]*types.SignedMessage)
@ -731,7 +731,7 @@ func (mp *MessagePool) getPendingMessages(curTs, ts *types.TipSet) (map[address.
return result, nil return result, nil
} }
if err := mp.runHeadChange(curTs, ts, result); err != nil { if err := mp.runHeadChange(ctx, curTs, ts, result); err != nil {
return nil, xerrors.Errorf("failed to process difference between mpool head and given head: %w", err) return nil, xerrors.Errorf("failed to process difference between mpool head and given head: %w", err)
} }

View File

@ -65,7 +65,7 @@ func makeTestMessage(w *wallet.LocalWallet, from, to address.Address, nonce uint
func makeTestMpool() (*MessagePool, *testMpoolAPI) { func makeTestMpool() (*MessagePool, *testMpoolAPI) {
tma := newTestMpoolAPI() tma := newTestMpoolAPI()
ds := datastore.NewMapDatastore() ds := datastore.NewMapDatastore()
mp, err := New(tma, ds, filcns.DefaultUpgradeSchedule(), "test", nil) mp, err := New(context.Background(), tma, ds, filcns.DefaultUpgradeSchedule(), "test", nil)
if err != nil { if err != nil {
panic(err) panic(err)
} }

View File

@ -84,7 +84,7 @@ func (ms *MessageSigner) SignMessage(ctx context.Context, msg *types.Message, cb
} }
// If the callback executed successfully, write the nonce to the datastore // If the callback executed successfully, write the nonce to the datastore
if err := ms.saveNonce(msg.From, nonce); err != nil { if err := ms.saveNonce(ctx, msg.From, nonce); err != nil {
return nil, xerrors.Errorf("failed to save nonce: %w", err) return nil, xerrors.Errorf("failed to save nonce: %w", err)
} }
@ -105,7 +105,7 @@ func (ms *MessageSigner) nextNonce(ctx context.Context, addr address.Address) (u
// Get the next nonce for this address from the datastore // Get the next nonce for this address from the datastore
addrNonceKey := ms.dstoreKey(addr) addrNonceKey := ms.dstoreKey(addr)
dsNonceBytes, err := ms.ds.Get(addrNonceKey) dsNonceBytes, err := ms.ds.Get(ctx, addrNonceKey)
switch { switch {
case xerrors.Is(err, datastore.ErrNotFound): case xerrors.Is(err, datastore.ErrNotFound):
@ -139,7 +139,7 @@ func (ms *MessageSigner) nextNonce(ctx context.Context, addr address.Address) (u
// saveNonce increments the nonce for this address and writes it to the // saveNonce increments the nonce for this address and writes it to the
// datastore // datastore
func (ms *MessageSigner) saveNonce(addr address.Address, nonce uint64) error { func (ms *MessageSigner) saveNonce(ctx context.Context, addr address.Address, nonce uint64) error {
// Increment the nonce // Increment the nonce
nonce++ nonce++
@ -150,7 +150,7 @@ func (ms *MessageSigner) saveNonce(addr address.Address, nonce uint64) error {
if err != nil { if err != nil {
return xerrors.Errorf("failed to marshall nonce: %w", err) return xerrors.Errorf("failed to marshall nonce: %w", err)
} }
err = ms.ds.Put(addrNonceKey, buf.Bytes()) err = ms.ds.Put(ctx, addrNonceKey, buf.Bytes())
if err != nil { if err != nil {
return xerrors.Errorf("failed to write nonce to datastore: %w", err) return xerrors.Errorf("failed to write nonce to datastore: %w", err)
} }

View File

@ -1,3 +1,4 @@
//stm: #unit
package messagesigner package messagesigner
import ( import (
@ -60,6 +61,7 @@ func TestMessageSignerSignMessage(t *testing.T) {
to2, err := w.WalletNew(ctx, types.KTSecp256k1) to2, err := w.WalletNew(ctx, types.KTSecp256k1)
require.NoError(t, err) require.NoError(t, err)
//stm: @CHAIN_MESSAGE_SIGNER_NEW_SIGNER_001, @CHAIN_MESSAGE_SIGNER_SIGN_MESSAGE_001, @CHAIN_MESSAGE_SIGNER_SIGN_MESSAGE_005
type msgSpec struct { type msgSpec struct {
msg *types.Message msg *types.Message
mpoolNonce [1]uint64 mpoolNonce [1]uint64

View File

@ -4,6 +4,8 @@ import (
"context" "context"
"encoding/binary" "encoding/binary"
"github.com/filecoin-project/go-state-types/network"
logging "github.com/ipfs/go-log/v2" logging "github.com/ipfs/go-log/v2"
"github.com/filecoin-project/lotus/chain/beacon" "github.com/filecoin-project/lotus/chain/beacon"
@ -48,7 +50,7 @@ func (sr *stateRand) GetBeaconRandomnessTipset(ctx context.Context, round abi.Ch
defer span.End() defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round))) span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := sr.cs.LoadTipSet(types.NewTipSetKey(sr.blks...)) ts, err := sr.cs.LoadTipSet(ctx, types.NewTipSetKey(sr.blks...))
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -70,12 +72,12 @@ func (sr *stateRand) GetBeaconRandomnessTipset(ctx context.Context, round abi.Ch
 	return randTs, nil
 }
 
-func (sr *stateRand) GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
+func (sr *stateRand) getChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
 	_, span := trace.StartSpan(ctx, "store.GetChainRandomness")
 	defer span.End()
 	span.AddAttributes(trace.Int64Attribute("round", int64(round)))
 
-	ts, err := sr.cs.LoadTipSet(types.NewTipSetKey(sr.blks...))
+	ts, err := sr.cs.LoadTipSet(ctx, types.NewTipSetKey(sr.blks...))
 	if err != nil {
 		return nil, err
 	}
@@ -101,38 +103,32 @@ func (sr *stateRand) GetChainRandomness(ctx context.Context, pers crypto.DomainS
 	return DrawRandomness(mtb.Ticket.VRFProof, pers, round, entropy)
 }
 
+type NetworkVersionGetter func(context.Context, abi.ChainEpoch) network.Version
+
 type stateRand struct {
 	cs     *store.ChainStore
 	blks   []cid.Cid
 	beacon beacon.Schedule
+	networkVersionGetter NetworkVersionGetter
 }
 
-func NewStateRand(cs *store.ChainStore, blks []cid.Cid, b beacon.Schedule) vm.Rand {
+func NewStateRand(cs *store.ChainStore, blks []cid.Cid, b beacon.Schedule, networkVersionGetter NetworkVersionGetter) vm.Rand {
 	return &stateRand{
 		cs:     cs,
 		blks:   blks,
 		beacon: b,
+		networkVersionGetter: networkVersionGetter,
 	}
 }
 
 // network v0-12
-func (sr *stateRand) GetChainRandomnessV1(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
-	return sr.GetChainRandomness(ctx, pers, round, entropy, true)
-}
-
-// network v13 and on
-func (sr *stateRand) GetChainRandomnessV2(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
-	return sr.GetChainRandomness(ctx, pers, round, entropy, false)
-}
-
-// network v0-12
-func (sr *stateRand) GetBeaconRandomnessV1(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
+func (sr *stateRand) getBeaconRandomnessV1(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
 	randTs, err := sr.GetBeaconRandomnessTipset(ctx, round, true)
 	if err != nil {
 		return nil, err
 	}
 
-	be, err := sr.cs.GetLatestBeaconEntry(randTs)
+	be, err := sr.cs.GetLatestBeaconEntry(ctx, randTs)
 	if err != nil {
 		return nil, err
 	}
@@ -143,13 +139,13 @@ func (sr *stateRand) GetBeaconRandomnessV1(ctx context.Context, pers crypto.Doma
 }
 
 // network v13
-func (sr *stateRand) GetBeaconRandomnessV2(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
+func (sr *stateRand) getBeaconRandomnessV2(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
 	randTs, err := sr.GetBeaconRandomnessTipset(ctx, round, false)
 	if err != nil {
 		return nil, err
 	}
 
-	be, err := sr.cs.GetLatestBeaconEntry(randTs)
+	be, err := sr.cs.GetLatestBeaconEntry(ctx, randTs)
 	if err != nil {
 		return nil, err
 	}
@@ -160,9 +156,9 @@ func (sr *stateRand) GetBeaconRandomnessV2(ctx context.Context, pers crypto.Doma
 }
 
 // network v14 and on
-func (sr *stateRand) GetBeaconRandomnessV3(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
+func (sr *stateRand) getBeaconRandomnessV3(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
 	if filecoinEpoch < 0 {
-		return sr.GetBeaconRandomnessV2(ctx, pers, filecoinEpoch, entropy)
+		return sr.getBeaconRandomnessV2(ctx, pers, filecoinEpoch, entropy)
 	}
 
 	be, err := sr.extractBeaconEntryForEpoch(ctx, filecoinEpoch)
@@ -174,6 +170,28 @@ func (sr *stateRand) GetBeaconRandomnessV3(ctx context.Context, pers crypto.Doma
 	return DrawRandomness(be.Data, pers, filecoinEpoch, entropy)
 }
 
+func (sr *stateRand) GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
+	nv := sr.networkVersionGetter(ctx, filecoinEpoch)
+
+	if nv >= network.Version13 {
+		return sr.getChainRandomness(ctx, pers, filecoinEpoch, entropy, false)
+	}
+
+	return sr.getChainRandomness(ctx, pers, filecoinEpoch, entropy, true)
+}
+
+func (sr *stateRand) GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error) {
+	nv := sr.networkVersionGetter(ctx, filecoinEpoch)
+
+	if nv >= network.Version14 {
+		return sr.getBeaconRandomnessV3(ctx, pers, filecoinEpoch, entropy)
+	} else if nv == network.Version13 {
+		return sr.getBeaconRandomnessV2(ctx, pers, filecoinEpoch, entropy)
+	} else {
+		return sr.getBeaconRandomnessV1(ctx, pers, filecoinEpoch, entropy)
+	}
+}
+
 func (sr *stateRand) extractBeaconEntryForEpoch(ctx context.Context, filecoinEpoch abi.ChainEpoch) (*types.BeaconEntry, error) {
 	randTs, err := sr.GetBeaconRandomnessTipset(ctx, filecoinEpoch, false)
 	if err != nil {
@@ -190,7 +208,7 @@ func (sr *stateRand) extractBeaconEntryForEpo
 		}
 	}
 
-	next, err := sr.cs.LoadTipSet(randTs.Parents())
+	next, err := sr.cs.LoadTipSet(ctx, randTs.Parents())
 	if err != nil {
 		return nil, xerrors.Errorf("failed to load parents when searching back for beacon entry: %w", err)
 	}
View File
@@ -1,3 +1,4 @@
+//stm:#unit
 package rand_test
 
 import (
@@ -55,11 +56,13 @@ func TestNullRandomnessV1(t *testing.T) {
 
 	randEpoch := ts.TipSet.TipSet().Height() - 2
 
+	//stm: @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_V1_01, @BLOCKCHAIN_RAND_EXTRACT_BEACON_ENTRY_FOR_EPOCH_01, @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_TIPSET_02
 	rand1, err := cg.StateManager().GetRandomnessFromBeacon(ctx, pers, randEpoch, entropy, ts.TipSet.TipSet().Key())
 	if err != nil {
 		t.Fatal(err)
 	}
 
+	//stm: @BLOCKCHAIN_BEACON_GET_BEACON_FOR_EPOCH_01
 	bch := cg.BeaconSchedule().BeaconForEpoch(randEpoch).Entry(ctx, uint64(beforeNullHeight)+offset)
 
 	select {
@@ -68,6 +71,7 @@ func TestNullRandomnessV1(t *testing.T) {
 			t.Fatal(resp.Err)
 		}
 
+		//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01
 		rand2, err := rand.DrawRandomness(resp.Entry.Data, pers, randEpoch, entropy)
 		if err != nil {
 			t.Fatal(err)
@@ -131,11 +135,13 @@ func TestNullRandomnessV2(t *testing.T) {
 
 	randEpoch := ts.TipSet.TipSet().Height() - 2
 
+	//stm: @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_V2_01
 	rand1, err := cg.StateManager().GetRandomnessFromBeacon(ctx, pers, randEpoch, entropy, ts.TipSet.TipSet().Key())
 	if err != nil {
 		t.Fatal(err)
 	}
 
+	//stm: @BLOCKCHAIN_BEACON_GET_BEACON_FOR_EPOCH_01
 	bch := cg.BeaconSchedule().BeaconForEpoch(randEpoch).Entry(ctx, uint64(ts.TipSet.TipSet().Height())+offset)
 
 	select {
@@ -144,6 +150,7 @@ func TestNullRandomnessV2(t *testing.T) {
 			t.Fatal(resp.Err)
 		}
 
+		//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01, @BLOCKCHAIN_RAND_EXTRACT_BEACON_ENTRY_FOR_EPOCH_01, @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_TIPSET_03
 		// note that the randEpoch passed to DrawRandomness is still randEpoch (not the latest ts height)
 		rand2, err := rand.DrawRandomness(resp.Entry.Data, pers, randEpoch, entropy)
 		if err != nil {
@@ -212,11 +219,13 @@ func TestNullRandomnessV3(t *testing.T) {
 
 	randEpoch := ts.TipSet.TipSet().Height() - 2
 
+	//stm: @BLOCKCHAIN_RAND_GET_BEACON_RANDOMNESS_V3_01, @BLOCKCHAIN_RAND_EXTRACT_BEACON_ENTRY_FOR_EPOCH_01
 	rand1, err := cg.StateManager().GetRandomnessFromBeacon(ctx, pers, randEpoch, entropy, ts.TipSet.TipSet().Key())
 	if err != nil {
 		t.Fatal(err)
 	}
 
+	//stm: @BLOCKCHAIN_BEACON_GET_BEACON_FOR_EPOCH_01
 	bch := cg.BeaconSchedule().BeaconForEpoch(randEpoch).Entry(ctx, uint64(randEpoch)+offset)
 
 	select {
@@ -225,6 +234,7 @@ func TestNullRandomnessV3(t *testing.T) {
 			t.Fatal(resp.Err)
 		}
 
+		//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01
 		rand2, err := rand.DrawRandomness(resp.Entry.Data, pers, randEpoch, entropy)
 		if err != nil {
 			t.Fatal(err)
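The `//stm:` comments added throughout this test file are inert to the compiler; they are machine-readable tags mapping assertions to behavior IDs in a system test matrix. A hypothetical tagged test, shown only for shape:

```go
package rand_test

import "testing"

func TestTaggedBehaviour(t *testing.T) {
	//stm: @BLOCKCHAIN_RAND_DRAW_RANDOMNESS_01
	// The tag above claims coverage of a behavior; the assertion below is
	// unrelated toy logic standing in for a real DrawRandomness check.
	if got := 2 + 2; got != 4 {
		t.Fatalf("got %d", got)
	}
}
```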
View File
@@ -14,6 +14,7 @@ import (
 	"github.com/filecoin-project/go-state-types/crypto"
 	"github.com/filecoin-project/go-state-types/network"
 	cid "github.com/ipfs/go-cid"
 	"golang.org/x/xerrors"
 
 	"github.com/filecoin-project/lotus/api"
@@ -301,12 +302,12 @@ func ListMinerActors(ctx context.Context, sm *StateManager, ts *types.TipSet) ([
 }
 
 func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
-	ts, err := sm.ChainStore().LoadTipSet(tsk)
+	ts, err := sm.ChainStore().LoadTipSet(ctx, tsk)
 	if err != nil {
 		return nil, xerrors.Errorf("failed to load tipset for mining base: %w", err)
 	}
 
-	prev, err := sm.ChainStore().GetLatestBeaconEntry(ts)
+	prev, err := sm.ChainStore().GetLatestBeaconEntry(ctx, ts)
 	if err != nil {
 		if os.Getenv("LOTUS_IGNORE_DRAND") != "_yes_" {
 			return nil, xerrors.Errorf("failed to get latest beacon entry: %w", err)
@@ -358,7 +359,7 @@ func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule
 		return nil, xerrors.Errorf("failed to get randomness for winning post: %w", err)
 	}
 
-	nv := sm.GetNtwkVersion(ctx, ts.Height())
+	nv := sm.GetNetworkVersion(ctx, ts.Height())
 
 	sectors, err := GetSectorsForWinningPoSt(ctx, nv, pv, sm, lbst, maddr, prand)
 	if err != nil {
@@ -420,7 +421,7 @@ func MinerEligibleToMine(ctx context.Context, sm *StateManager, addr address.Add
 	hmp, err := minerHasMinPower(ctx, sm, addr, lookbackTs)
 
 	// TODO: We're blurring the lines between a "runtime network version" and a "Lotus upgrade epoch", is that unavoidable?
-	if sm.GetNtwkVersion(ctx, baseTs.Height()) <= network.Version3 {
+	if sm.GetNetworkVersion(ctx, baseTs.Height()) <= network.Version3 {
 		return hmp, err
 	}
View File
@@ -40,7 +40,7 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
 		ts = sm.cs.GetHeaviestTipSet()
 
 		// Search back till we find a height with no fork, or we reach the beginning.
 		for ts.Height() > 0 {
-			pts, err := sm.cs.GetTipSetFromKey(ts.Parents())
+			pts, err := sm.cs.GetTipSetFromKey(ctx, ts.Parents())
 			if err != nil {
 				return nil, xerrors.Errorf("failed to find a non-forking epoch: %w", err)
 			}
@@ -51,7 +51,7 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
 			ts = pts
 		}
 	} else if ts.Height() > 0 {
-		pts, err := sm.cs.LoadTipSet(ts.Parents())
+		pts, err := sm.cs.LoadTipSet(ctx, ts.Parents())
 		if err != nil {
 			return nil, xerrors.Errorf("failed to load parent tipset: %w", err)
 		}
@@ -75,12 +75,12 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
 	vmopt := &vm.VMOpts{
 		StateBase:      bstate,
 		Epoch:          pheight + 1,
-		Rand:           rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon),
+		Rand:           rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion),
 		Bstore:         sm.cs.StateBlockstore(),
 		Actors:         sm.tsExec.NewActorRegistry(),
 		Syscalls:       sm.Syscalls,
 		CircSupplyCalc: sm.GetVMCirculatingSupply,
-		NtwkVersion:    sm.GetNtwkVersion,
+		NetworkVersion: sm.GetNetworkVersion(ctx, pheight+1),
 		BaseFee:        types.NewInt(0),
 		LookbackState:  LookbackStateGetterForTipset(sm, ts),
 	}
@@ -155,7 +155,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
 	// height to have no fork, because we'll run it inside this
 	// function before executing the given message.
 	for ts.Height() > 0 {
-		pts, err := sm.cs.GetTipSetFromKey(ts.Parents())
+		pts, err := sm.cs.GetTipSetFromKey(ctx, ts.Parents())
 		if err != nil {
 			return nil, xerrors.Errorf("failed to find a non-forking epoch: %w", err)
 		}
@@ -166,7 +166,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
 			ts = pts
 		}
 	} else if ts.Height() > 0 {
-		pts, err := sm.cs.GetTipSetFromKey(ts.Parents())
+		pts, err := sm.cs.GetTipSetFromKey(ctx, ts.Parents())
 		if err != nil {
 			return nil, xerrors.Errorf("failed to find a non-forking epoch: %w", err)
 		}
@@ -186,7 +186,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
 		return nil, fmt.Errorf("failed to handle fork: %w", err)
 	}
 
-	r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon)
+	r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion)
 
 	if span.IsRecordingEvents() {
 		span.AddAttributes(
@@ -204,7 +204,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
 		Actors:         sm.tsExec.NewActorRegistry(),
 		Syscalls:       sm.Syscalls,
 		CircSupplyCalc: sm.GetVMCirculatingSupply,
-		NtwkVersion:    sm.GetNtwkVersion,
+		NetworkVersion: sm.GetNetworkVersion(ctx, ts.Height()+1),
 		BaseFee:        ts.Blocks()[0].ParentBaseFee,
 		LookbackState:  LookbackStateGetterForTipset(sm, ts),
 	}
View File
@@ -13,8 +13,8 @@ import (
 	"github.com/filecoin-project/lotus/chain/types"
 )
 
-func (sm *StateManager) ParentStateTsk(tsk types.TipSetKey) (*state.StateTree, error) {
-	ts, err := sm.cs.GetTipSetFromKey(tsk)
+func (sm *StateManager) ParentStateTsk(ctx context.Context, tsk types.TipSetKey) (*state.StateTree, error) {
+	ts, err := sm.cs.GetTipSetFromKey(ctx, tsk)
 	if err != nil {
 		return nil, xerrors.Errorf("loading tipset %s: %w", tsk, err)
 	}
@@ -57,8 +57,8 @@ func (sm *StateManager) LoadActor(_ context.Context, addr address.Address, ts *t
 	return state.GetActor(addr)
 }
 
-func (sm *StateManager) LoadActorTsk(_ context.Context, addr address.Address, tsk types.TipSetKey) (*types.Actor, error) {
-	state, err := sm.ParentStateTsk(tsk)
+func (sm *StateManager) LoadActorTsk(ctx context.Context, addr address.Address, tsk types.TipSetKey) (*types.Actor, error) {
+	state, err := sm.ParentStateTsk(ctx, tsk)
 	if err != nil {
 		return nil, err
 	}
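The mechanical pattern repeated across this commit is threading `context.Context` as the first argument of store accessors, so long chain walks become cancellable. A minimal toy sketch of what that buys:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

type store struct{ data map[string]string }

// get honors ctx before doing work; with deep call chains, one timeout or
// cancellation can stop an entire walk.
func (s *store) get(ctx context.Context, key string) (string, error) {
	if err := ctx.Err(); err != nil {
		return "", err
	}
	v, ok := s.data[key]
	if !ok {
		return "", errors.New("not found")
	}
	return v, nil
}

func main() {
	s := &store{data: map[string]string{"head": "tipset-cid"}}

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	v, err := s.get(ctx, "head")
	fmt.Println(v, err) // tipset-cid <nil>

	cancelled, cancelNow := context.WithCancel(context.Background())
	cancelNow()
	_, err = s.get(cancelled, "head")
	fmt.Println(err) // context canceled
}
```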
View File
@@ -20,7 +20,7 @@ func (sm *StateManager) WaitForMessage(ctx context.Context, mcid cid.Cid, confid
 	ctx, cancel := context.WithCancel(ctx)
 	defer cancel()
 
-	msg, err := sm.cs.GetCMessage(mcid)
+	msg, err := sm.cs.GetCMessage(ctx, mcid)
 	if err != nil {
 		return nil, nil, cid.Undef, fmt.Errorf("failed to load message: %w", err)
 	}
@@ -40,7 +40,7 @@ func (sm *StateManager) WaitForMessage(ctx context.Context, mcid cid.Cid, confid
 		return nil, nil, cid.Undef, fmt.Errorf("expected current head on SHC stream (got %s)", head[0].Type)
 	}
 
-	r, foundMsg, err := sm.tipsetExecutedMessage(head[0].Val, mcid, msg.VMMessage(), allowReplaced)
+	r, foundMsg, err := sm.tipsetExecutedMessage(ctx, head[0].Val, mcid, msg.VMMessage(), allowReplaced)
 	if err != nil {
 		return nil, nil, cid.Undef, err
 	}
@@ -93,7 +93,7 @@ func (sm *StateManager) WaitForMessage(ctx context.Context, mcid cid.Cid, confid
 			if candidateTs != nil && val.Val.Height() >= candidateTs.Height()+abi.ChainEpoch(confidence) {
 				return candidateTs, candidateRcp, candidateFm, nil
 			}
-			r, foundMsg, err := sm.tipsetExecutedMessage(val.Val, mcid, msg.VMMessage(), allowReplaced)
+			r, foundMsg, err := sm.tipsetExecutedMessage(ctx, val.Val, mcid, msg.VMMessage(), allowReplaced)
 			if err != nil {
 				return nil, nil, cid.Undef, err
 			}
@@ -130,12 +130,12 @@ func (sm *StateManager) WaitForMessage(ctx context.Context, mcid cid.Cid, confid
 }
 
 func (sm *StateManager) SearchForMessage(ctx context.Context, head *types.TipSet, mcid cid.Cid, lookbackLimit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
-	msg, err := sm.cs.GetCMessage(mcid)
+	msg, err := sm.cs.GetCMessage(ctx, mcid)
 	if err != nil {
 		return nil, nil, cid.Undef, fmt.Errorf("failed to load message: %w", err)
 	}
 
-	r, foundMsg, err := sm.tipsetExecutedMessage(head, mcid, msg.VMMessage(), allowReplaced)
+	r, foundMsg, err := sm.tipsetExecutedMessage(ctx, head, mcid, msg.VMMessage(), allowReplaced)
 	if err != nil {
 		return nil, nil, cid.Undef, err
 	}
@@ -201,7 +201,7 @@ func (sm *StateManager) searchBackForMsg(ctx context.Context, from *types.TipSet
 			return nil, nil, cid.Undef, nil
 		}
 
-		pts, err := sm.cs.LoadTipSet(cur.Parents())
+		pts, err := sm.cs.LoadTipSet(ctx, cur.Parents())
 		if err != nil {
 			return nil, nil, cid.Undef, xerrors.Errorf("failed to load tipset during msg wait searchback: %w", err)
 		}
@@ -214,7 +214,7 @@ func (sm *StateManager) searchBackForMsg(ctx context.Context, from *types.TipSet
 		// check that between cur and parent tipset the nonce fell into range of our message
 		if actorNoExist || (curActor.Nonce > mNonce && act.Nonce <= mNonce) {
-			r, foundMsg, err := sm.tipsetExecutedMessage(cur, m.Cid(), m.VMMessage(), allowReplaced)
+			r, foundMsg, err := sm.tipsetExecutedMessage(ctx, cur, m.Cid(), m.VMMessage(), allowReplaced)
 			if err != nil {
 				return nil, nil, cid.Undef, xerrors.Errorf("checking for message execution during lookback: %w", err)
 			}
@@ -229,18 +229,18 @@ func (sm *StateManager) searchBackForMsg(ctx context.Context, from *types.TipSet
 	}
 }
 
-func (sm *StateManager) tipsetExecutedMessage(ts *types.TipSet, msg cid.Cid, vmm *types.Message, allowReplaced bool) (*types.MessageReceipt, cid.Cid, error) {
+func (sm *StateManager) tipsetExecutedMessage(ctx context.Context, ts *types.TipSet, msg cid.Cid, vmm *types.Message, allowReplaced bool) (*types.MessageReceipt, cid.Cid, error) {
 	// The genesis block did not execute any messages
 	if ts.Height() == 0 {
 		return nil, cid.Undef, nil
 	}
 
-	pts, err := sm.cs.LoadTipSet(ts.Parents())
+	pts, err := sm.cs.LoadTipSet(ctx, ts.Parents())
 	if err != nil {
 		return nil, cid.Undef, err
 	}
 
-	cm, err := sm.cs.MessagesForTipset(pts)
+	cm, err := sm.cs.MessagesForTipset(ctx, pts)
 	if err != nil {
 		return nil, cid.Undef, err
 	}
@@ -267,7 +267,7 @@ func (sm *StateManager) tipsetExecutedMessage(ts *types.TipSet, msg cid.Cid, vmm
 		}
 	}
 
-	pr, err := sm.cs.GetParentReceipt(ts.Blocks()[0], i)
+	pr, err := sm.cs.GetParentReceipt(ctx, ts.Blocks()[0], i)
 	if err != nil {
 		return nil, cid.Undef, err
 	}
View File
@@ -75,7 +75,7 @@ func TestSearchForMessageReplacements(t *testing.T) {
 		t.Fatal(err)
 	}
 
-	err = cg.Blockstore().Put(rmb)
+	err = cg.Blockstore().Put(ctx, rmb)
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -117,7 +117,7 @@ func TestSearchForMessageReplacements(t *testing.T) {
 		t.Fatal(err)
 	}
 
-	err = cg.Blockstore().Put(nrmb)
+	err = cg.Blockstore().Put(ctx, nrmb)
 	if err != nil {
 		t.Fatal(err)
 	}
View File
@@ -321,7 +321,7 @@ func (sm *StateManager) LookupID(ctx context.Context, addr address.Address, ts *
 func (sm *StateManager) ValidateChain(ctx context.Context, ts *types.TipSet) error {
 	tschain := []*types.TipSet{ts}
 	for ts.Height() != 0 {
-		next, err := sm.cs.LoadTipSet(ts.Parents())
+		next, err := sm.cs.LoadTipSet(ctx, ts.Parents())
 		if err != nil {
 			return err
 		}
@@ -357,7 +357,7 @@ func (sm *StateManager) VMConstructor() func(context.Context, *vm.VMOpts) (*vm.V
 	}
 }
 
-func (sm *StateManager) GetNtwkVersion(ctx context.Context, height abi.ChainEpoch) network.Version {
+func (sm *StateManager) GetNetworkVersion(ctx context.Context, height abi.ChainEpoch) network.Version {
 	// The epochs here are the _last_ epoch for every version, or -1 if the
 	// version is disabled.
 	for _, spec := range sm.networkVersions {
@@ -373,36 +373,24 @@ func (sm *StateManager) VMSys() vm.SyscallBuilder {
 }
 
 func (sm *StateManager) GetRandomnessFromBeacon(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte, tsk types.TipSetKey) (abi.Randomness, error) {
-	pts, err := sm.ChainStore().GetTipSetFromKey(tsk)
+	pts, err := sm.ChainStore().GetTipSetFromKey(ctx, tsk)
 	if err != nil {
 		return nil, xerrors.Errorf("loading tipset %s: %w", tsk, err)
 	}
 
-	r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon)
-	rnv := sm.GetNtwkVersion(ctx, randEpoch)
-
-	if rnv >= network.Version14 {
-		return r.GetBeaconRandomnessV3(ctx, personalization, randEpoch, entropy)
-	} else if rnv == network.Version13 {
-		return r.GetBeaconRandomnessV2(ctx, personalization, randEpoch, entropy)
-	}
-
-	return r.GetBeaconRandomnessV1(ctx, personalization, randEpoch, entropy)
+	r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon, sm.GetNetworkVersion)
+
+	return r.GetBeaconRandomness(ctx, personalization, randEpoch, entropy)
 }
 
 func (sm *StateManager) GetRandomnessFromTickets(ctx context.Context, personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte, tsk types.TipSetKey) (abi.Randomness, error) {
-	pts, err := sm.ChainStore().LoadTipSet(tsk)
+	pts, err := sm.ChainStore().LoadTipSet(ctx, tsk)
 	if err != nil {
 		return nil, xerrors.Errorf("loading tipset key: %w", err)
 	}
 
-	r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon)
-	rnv := sm.GetNtwkVersion(ctx, randEpoch)
-
-	if rnv >= network.Version13 {
-		return r.GetChainRandomnessV2(ctx, personalization, randEpoch, entropy)
-	}
-
-	return r.GetChainRandomnessV1(ctx, personalization, randEpoch, entropy)
+	r := rand.NewStateRand(sm.ChainStore(), pts.Cids(), sm.beacon, sm.GetNetworkVersion)
+
+	return r.GetChainRandomness(ctx, personalization, randEpoch, entropy)
 }
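For reference, the renamed `GetNetworkVersion` scans `sm.networkVersions`, where (per the comment above) each entry records the last epoch at which a version applies. A simplified sketch of that lookup with assumed epochs:

```go
package main

import "fmt"

type Version int

type versionSpec struct {
	version   Version
	lastEpoch int64 // last epoch at which this version applies
}

// Assumed schedule, ordered by epoch; real networks define their own.
var schedule = []versionSpec{
	{version: 13, lastEpoch: 999},
	{version: 14, lastEpoch: 1999},
}

const latestVersion Version = 15

// networkVersion returns the version governing a given height.
func networkVersion(height int64) Version {
	for _, spec := range schedule {
		if height <= spec.lastEpoch {
			return spec.version
		}
	}
	return latestVersion
}

func main() {
	fmt.Println(networkVersion(500))  // 13
	fmt.Println(networkVersion(1500)) // 14
	fmt.Println(networkVersion(5000)) // 15
}
```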
View File
@@ -31,7 +31,7 @@ import (
 
 // sets up information about the vesting schedule
 func (sm *StateManager) setupGenesisVestingSchedule(ctx context.Context) error {
-	gb, err := sm.cs.GetGenesis()
+	gb, err := sm.cs.GetGenesis(ctx)
 	if err != nil {
 		return xerrors.Errorf("getting genesis block: %w", err)
 	}
View File
@@ -79,7 +79,7 @@ func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch,
 		// future. It's not guaranteed to be accurate... but that's fine.
 	}
 
-	r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon)
+	r := rand.NewStateRand(sm.cs, ts.Cids(), sm.beacon, sm.GetNetworkVersion)
 	vmopt := &vm.VMOpts{
 		StateBase:      base,
 		Epoch:          height,
@@ -88,7 +88,7 @@ func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch,
 		Actors:         sm.tsExec.NewActorRegistry(),
 		Syscalls:       sm.Syscalls,
 		CircSupplyCalc: sm.GetVMCirculatingSupply,
-		NtwkVersion:    sm.GetNtwkVersion,
+		NetworkVersion: sm.GetNetworkVersion(ctx, height),
 		BaseFee:        ts.Blocks()[0].ParentBaseFee,
 		LookbackState:  LookbackStateGetterForTipset(sm, ts),
 	}
@@ -128,7 +128,7 @@ func LookbackStateGetterForTipset(sm *StateManager, ts *types.TipSet) vm.Lookbac
 func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.TipSet, round abi.ChainEpoch) (*types.TipSet, cid.Cid, error) {
 	var lbr abi.ChainEpoch
-	lb := policy.GetWinningPoStSectorSetLookback(sm.GetNtwkVersion(ctx, round))
+	lb := policy.GetWinningPoStSectorSetLookback(sm.GetNetworkVersion(ctx, round))
 	if round > lb {
 		lbr = round - lb
 	}
@@ -155,7 +155,7 @@ func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.
 	}
 
-	lbts, err := sm.ChainStore().GetTipSetFromKey(nextTs.Parents())
+	lbts, err := sm.ChainStore().GetTipSetFromKey(ctx, nextTs.Parents())
 	if err != nil {
 		return nil, cid.Undef, xerrors.Errorf("failed to resolve lookback tipset: %w", err)
 	}
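The lookback arithmetic in `GetLookbackTipSetForRound` clamps at zero so early rounds resolve to genesis. A small illustration (numbers arbitrary):

```go
package main

import "fmt"

// lookbackRound mirrors the `if round > lb { lbr = round - lb }` clamp above.
func lookbackRound(round, lookback int64) int64 {
	if round > lookback {
		return round - lookback
	}
	return 0 // earlier than the lookback window: fall back to genesis
}

func main() {
	fmt.Println(lookbackRound(2000, 900)) // 1100
	fmt.Println(lookbackRound(300, 900))  // 0
}
```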
View File
@@ -58,7 +58,7 @@ func (cs *ChainStore) ComputeBaseFee(ctx context.Context, ts *types.TipSet) (abi
 	seen := make(map[cid.Cid]struct{})
 
 	for _, b := range ts.Blocks() {
-		msg1, msg2, err := cs.MessagesForBlock(b)
+		msg1, msg2, err := cs.MessagesForBlock(ctx, b)
 		if err != nil {
 			return zero, xerrors.Errorf("error getting messages for: %s: %w", b.Cid(), err)
 		}
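The `seen` map above exists because a message can be included in more than one block of the same tipset; any per-tipset aggregate must count each message CID once. A toy illustration of that dedup:

```go
package main

import "fmt"

type msg struct {
	cid      string
	gasLimit int64
}

// totalUniqueGas sums gas limits across blocks, skipping repeated CIDs.
func totalUniqueGas(blocks [][]msg) int64 {
	seen := make(map[string]struct{})
	var total int64
	for _, b := range blocks {
		for _, m := range b {
			if _, ok := seen[m.cid]; ok {
				continue
			}
			seen[m.cid] = struct{}{}
			total += m.gasLimit
		}
	}
	return total
}

func main() {
	blocks := [][]msg{
		{{cid: "a", gasLimit: 100}, {cid: "b", gasLimit: 50}},
		{{cid: "a", gasLimit: 100}, {cid: "c", gasLimit: 25}}, // "a" repeated
	}
	fmt.Println(totalUniqueGas(blocks)) // 175
}
```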
View File
@@ -10,6 +10,8 @@ import (
 )
 
 func TestChainCheckpoint(t *testing.T) {
+	ctx := context.Background()
+
 	cg, err := gen.NewGenerator()
 	if err != nil {
 		t.Fatal(err)
@@ -27,11 +29,11 @@ func TestChainCheckpoint(t *testing.T) {
 	cs := cg.ChainStore()
 
 	checkpoint := last
-	checkpointParents, err := cs.GetTipSetFromKey(checkpoint.Parents())
+	checkpointParents, err := cs.GetTipSetFromKey(ctx, checkpoint.Parents())
 	require.NoError(t, err)
 
 	// Set the head to the block before the checkpoint.
-	err = cs.SetHead(checkpointParents)
+	err = cs.SetHead(ctx, checkpointParents)
 	require.NoError(t, err)
 
 	// Verify it worked.
@@ -39,11 +41,11 @@ func TestChainCheckpoint(t *testing.T) {
 	require.True(t, head.Equals(checkpointParents))
 
 	// Try to set the checkpoint in the future, it should fail.
-	err = cs.SetCheckpoint(checkpoint)
+	err = cs.SetCheckpoint(ctx, checkpoint)
 	require.Error(t, err)
 
 	// Then move the head back.
-	err = cs.SetHead(checkpoint)
+	err = cs.SetHead(ctx, checkpoint)
 	require.NoError(t, err)
 
 	// Verify it worked.
@@ -51,7 +53,7 @@ func TestChainCheckpoint(t *testing.T) {
 	require.True(t, head.Equals(checkpoint))
 
 	// And checkpoint it.
-	err = cs.SetCheckpoint(checkpoint)
+	err = cs.SetCheckpoint(ctx, checkpoint)
 	require.NoError(t, err)
 
 	// Let the second miner miner mine a fork
@@ -70,7 +72,7 @@ func TestChainCheckpoint(t *testing.T) {
 	require.True(t, head.Equals(checkpoint))
 
 	// Remove the checkpoint.
-	err = cs.RemoveCheckpoint()
+	err = cs.RemoveCheckpoint(ctx)
 	require.NoError(t, err)
 
 	// Now switch to the other fork.
@@ -80,10 +82,10 @@ func TestChainCheckpoint(t *testing.T) {
 	require.True(t, head.Equals(last))
 
 	// Setting a checkpoint on the other fork should fail.
-	err = cs.SetCheckpoint(checkpoint)
+	err = cs.SetCheckpoint(ctx, checkpoint)
 	require.Error(t, err)
 
 	// Setting a checkpoint on this fork should succeed.
-	err = cs.SetCheckpoint(checkpointParents)
+	err = cs.SetCheckpoint(ctx, checkpointParents)
 	require.NoError(t, err)
 }
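The rule this test exercises: a checkpoint must lie on the current chain and not ahead of the head. A self-contained sketch of that invariant over a toy chain model (not the real ChainStore API):

```go
package main

import (
	"errors"
	"fmt"
)

type tipset struct {
	name   string
	parent *tipset
}

type chainStore struct {
	head       *tipset
	checkpoint *tipset
}

// onChain reports whether ts is the head or one of its ancestors.
func onChain(head, ts *tipset) bool {
	for c := head; c != nil; c = c.parent {
		if c == ts {
			return true
		}
	}
	return false
}

func (cs *chainStore) SetCheckpoint(ts *tipset) error {
	if !onChain(cs.head, ts) {
		return errors.New("checkpoint is not on the current chain")
	}
	cs.checkpoint = ts
	return nil
}

func main() {
	genesis := &tipset{name: "genesis"}
	a := &tipset{name: "a", parent: genesis}
	b := &tipset{name: "b", parent: a}
	fork := &tipset{name: "fork", parent: genesis}

	cs := &chainStore{head: a}
	fmt.Println(cs.SetCheckpoint(b))    // error: b is ahead of head
	fmt.Println(cs.SetCheckpoint(a))    // <nil>
	fmt.Println(cs.SetCheckpoint(fork)) // error: competing fork
}
```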
View File
@@ -31,7 +31,7 @@ type ChainIndex struct {
 	skipLength abi.ChainEpoch
 }
 
-type loadTipSetFunc func(types.TipSetKey) (*types.TipSet, error)
+type loadTipSetFunc func(context.Context, types.TipSetKey) (*types.TipSet, error)
 
 func NewChainIndex(lts loadTipSetFunc) *ChainIndex {
 	sc, _ := lru.NewARC(DefaultChainIndexCacheSize)
@@ -49,12 +49,12 @@ type lbEntry struct {
 	target types.TipSetKey
 }
 
-func (ci *ChainIndex) GetTipsetByHeight(_ context.Context, from *types.TipSet, to abi.ChainEpoch) (*types.TipSet, error) {
+func (ci *ChainIndex) GetTipsetByHeight(ctx context.Context, from *types.TipSet, to abi.ChainEpoch) (*types.TipSet, error) {
 	if from.Height()-to <= ci.skipLength {
-		return ci.walkBack(from, to)
+		return ci.walkBack(ctx, from, to)
 	}
 
-	rounded, err := ci.roundDown(from)
+	rounded, err := ci.roundDown(ctx, from)
 	if err != nil {
 		return nil, err
 	}
@@ -63,7 +63,7 @@ func (ci *ChainIndex) GetTipsetByHeight(_ context.Context, from *types.TipSet, t
 	for {
 		cval, ok := ci.skipCache.Get(cur)
 		if !ok {
-			fc, err := ci.fillCache(cur)
+			fc, err := ci.fillCache(ctx, cur)
 			if err != nil {
 				return nil, err
 			}
@@ -74,19 +74,19 @@ func (ci *ChainIndex) GetTipsetByHeight(_ context.Context, from *types.TipSet, t
 		if lbe.ts.Height() == to || lbe.parentHeight < to {
 			return lbe.ts, nil
 		} else if to > lbe.targetHeight {
-			return ci.walkBack(lbe.ts, to)
+			return ci.walkBack(ctx, lbe.ts, to)
 		}
 
 		cur = lbe.target
 	}
 }
 
-func (ci *ChainIndex) GetTipsetByHeightWithoutCache(from *types.TipSet, to abi.ChainEpoch) (*types.TipSet, error) {
-	return ci.walkBack(from, to)
+func (ci *ChainIndex) GetTipsetByHeightWithoutCache(ctx context.Context, from *types.TipSet, to abi.ChainEpoch) (*types.TipSet, error) {
+	return ci.walkBack(ctx, from, to)
 }
 
-func (ci *ChainIndex) fillCache(tsk types.TipSetKey) (*lbEntry, error) {
-	ts, err := ci.loadTipSet(tsk)
+func (ci *ChainIndex) fillCache(ctx context.Context, tsk types.TipSetKey) (*lbEntry, error) {
+	ts, err := ci.loadTipSet(ctx, tsk)
 	if err != nil {
 		return nil, err
 	}
@@ -101,7 +101,7 @@ func (ci *ChainIndex) fillCache(tsk types.TipSetKey) (*lbEntry, error) {
 	// will either be equal to ts.Height, or at least > ts.Parent.Height()
 	rheight := ci.roundHeight(ts.Height())
 
-	parent, err := ci.loadTipSet(ts.Parents())
+	parent, err := ci.loadTipSet(ctx, ts.Parents())
 	if err != nil {
 		return nil, err
 	}
@@ -115,7 +115,7 @@ func (ci *ChainIndex) fillCache(tsk types.TipSetKey) (*lbEntry, error) {
 	if parent.Height() < rheight {
 		skipTarget = parent
 	} else {
-		skipTarget, err = ci.walkBack(parent, rheight)
+		skipTarget, err = ci.walkBack(ctx, parent, rheight)
 		if err != nil {
 			return nil, xerrors.Errorf("fillCache walkback: %w", err)
 		}
@@ -137,10 +137,10 @@ func (ci *ChainIndex) roundHeight(h abi.ChainEpoch) abi.ChainEpoch {
 	return (h / ci.skipLength) * ci.skipLength
 }
 
-func (ci *ChainIndex) roundDown(ts *types.TipSet) (*types.TipSet, error) {
+func (ci *ChainIndex) roundDown(ctx context.Context, ts *types.TipSet) (*types.TipSet, error) {
 	target := ci.roundHeight(ts.Height())
 
-	rounded, err := ci.walkBack(ts, target)
+	rounded, err := ci.walkBack(ctx, ts, target)
 	if err != nil {
 		return nil, err
 	}
@@ -148,7 +148,7 @@ func (ci *ChainIndex) roundDown(ts *types.TipSet) (*types.TipSet, error) {
 	return rounded, nil
 }
 
-func (ci *ChainIndex) walkBack(from *types.TipSet, to abi.ChainEpoch) (*types.TipSet, error) {
+func (ci *ChainIndex) walkBack(ctx context.Context, from *types.TipSet, to abi.ChainEpoch) (*types.TipSet, error) {
 	if to > from.Height() {
 		return nil, xerrors.Errorf("looking for tipset with height greater than start point")
 	}
@@ -160,7 +160,7 @@ func (ci *ChainIndex) walkBack(from *types.TipSet, to abi.ChainEpoch) (*types.Ti
 	ts := from
 
 	for {
-		pts, err := ci.loadTipSet(ts.Parents())
+		pts, err := ci.loadTipSet(ctx, ts.Parents())
 		if err != nil {
 			return nil, err
 		}
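`walkBack` is the primitive everything above bottoms out in: follow parent links until the target epoch, accounting for null rounds. A simplified, assumption-labeled sketch (toy tipset type; the real code's edge-case behavior may differ):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

type tipset struct {
	height int64
	parent *tipset
}

// loadTipSetFunc mirrors the context-aware loader type introduced above.
type loadTipSetFunc func(ctx context.Context, ts *tipset) (*tipset, error)

func walkBack(ctx context.Context, load loadTipSetFunc, from *tipset, to int64) (*tipset, error) {
	if to > from.height {
		return nil, errors.New("looking for tipset with height greater than start point")
	}
	ts := from
	for {
		if err := ctx.Err(); err != nil {
			return nil, err
		}
		if ts.height == to {
			return ts, nil
		}
		p, err := load(ctx, ts)
		if err != nil {
			return nil, err
		}
		if p.height < to {
			// `to` fell on null rounds; return the nearest tipset below it.
			return p, nil
		}
		ts = p
	}
}

func main() {
	g := &tipset{height: 0}
	a := &tipset{height: 1, parent: g}
	b := &tipset{height: 4, parent: a} // heights 2-3 are null rounds
	load := func(_ context.Context, ts *tipset) (*tipset, error) {
		if ts.parent == nil {
			return nil, errors.New("no parent")
		}
		return ts.parent, nil
	}
	ts, _ := walkBack(context.Background(), load, b, 2)
	fmt.Println(ts.height) // 1 (first non-null tipset below the null round)
}
```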
View File
@@ -35,7 +35,7 @@ func TestIndexSeeks(t *testing.T) {
 	cs := store.NewChainStore(nbs, nbs, syncds.MutexWrap(datastore.NewMapDatastore()), filcns.Weight, nil)
 	defer cs.Close() //nolint:errcheck
 
-	_, err = cs.Import(bytes.NewReader(gencar))
+	_, err = cs.Import(ctx, bytes.NewReader(gencar))
 	if err != nil {
 		t.Fatal(err)
 	}
@@ -44,7 +44,7 @@ func TestIndexSeeks(t *testing.T) {
 	if err := cs.PutTipSet(ctx, mock.TipSet(gen)); err != nil {
 		t.Fatal(err)
 	}
-	assert.NoError(t, cs.SetGenesis(gen))
+	assert.NoError(t, cs.SetGenesis(ctx, gen))
 
 	// Put 113 blocks from genesis
 	for i := 0; i < 113; i++ {
View File
@@ -23,25 +23,25 @@ type storable interface {
 	ToStorageBlock() (block.Block, error)
 }
 
-func PutMessage(bs bstore.Blockstore, m storable) (cid.Cid, error) {
+func PutMessage(ctx context.Context, bs bstore.Blockstore, m storable) (cid.Cid, error) {
 	b, err := m.ToStorageBlock()
 	if err != nil {
 		return cid.Undef, err
 	}
 
-	if err := bs.Put(b); err != nil {
+	if err := bs.Put(ctx, b); err != nil {
 		return cid.Undef, err
 	}
 
 	return b.Cid(), nil
 }
 
-func (cs *ChainStore) PutMessage(m storable) (cid.Cid, error) {
-	return PutMessage(cs.chainBlockstore, m)
+func (cs *ChainStore) PutMessage(ctx context.Context, m storable) (cid.Cid, error) {
+	return PutMessage(ctx, cs.chainBlockstore, m)
 }
 
-func (cs *ChainStore) GetCMessage(c cid.Cid) (types.ChainMsg, error) {
-	m, err := cs.GetMessage(c)
+func (cs *ChainStore) GetCMessage(ctx context.Context, c cid.Cid) (types.ChainMsg, error) {
+	m, err := cs.GetMessage(ctx, c)
 	if err == nil {
 		return m, nil
 	}
@@ -49,21 +49,21 @@ func (cs *ChainStore) GetCMessage(c cid.Cid) (types.ChainMsg, error) {
 		log.Warnf("GetCMessage: unexpected error getting unsigned message: %s", err)
 	}
 
-	return cs.GetSignedMessage(c)
+	return cs.GetSignedMessage(ctx, c)
 }
 
-func (cs *ChainStore) GetMessage(c cid.Cid) (*types.Message, error) {
+func (cs *ChainStore) GetMessage(ctx context.Context, c cid.Cid) (*types.Message, error) {
 	var msg *types.Message
-	err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
+	err := cs.chainLocalBlockstore.View(ctx, c, func(b []byte) (err error) {
 		msg, err = types.DecodeMessage(b)
 		return err
 	})
 	return msg, err
 }
 
-func (cs *ChainStore) GetSignedMessage(c cid.Cid) (*types.SignedMessage, error) {
+func (cs *ChainStore) GetSignedMessage(ctx context.Context, c cid.Cid) (*types.SignedMessage, error) {
 	var msg *types.SignedMessage
-	err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
+	err := cs.chainLocalBlockstore.View(ctx, c, func(b []byte) (err error) {
 		msg, err = types.DecodeSignedMessage(b)
 		return err
 	})
@@ -103,7 +103,7 @@ type BlockMessages struct {
 	SecpkMessages []types.ChainMsg
 }
 
-func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, error) {
+func (cs *ChainStore) BlockMsgsForTipset(ctx context.Context, ts *types.TipSet) ([]BlockMessages, error) {
 	// returned BlockMessages match block order in tipset
 
 	applied := make(map[address.Address]uint64)
@@ -142,7 +142,7 @@ func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, err
 	var out []BlockMessages
 	for _, b := range ts.Blocks() {
-		bms, sms, err := cs.MessagesForBlock(b)
+		bms, sms, err := cs.MessagesForBlock(ctx, b)
 		if err != nil {
 			return nil, xerrors.Errorf("failed to get messages for block: %w", err)
 		}
@@ -181,8 +181,8 @@ func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, err
 	return out, nil
 }
 
-func (cs *ChainStore) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) {
-	bmsgs, err := cs.BlockMsgsForTipset(ts)
+func (cs *ChainStore) MessagesForTipset(ctx context.Context, ts *types.TipSet) ([]types.ChainMsg, error) {
+	bmsgs, err := cs.BlockMsgsForTipset(ctx, ts)
 	if err != nil {
 		return nil, err
 	}
@@ -206,7 +206,7 @@ type mmCids struct {
 	secpk []cid.Cid
 }
 
-func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error) {
+func (cs *ChainStore) ReadMsgMetaCids(ctx context.Context, mmc cid.Cid) ([]cid.Cid, []cid.Cid, error) {
 	o, ok := cs.mmCache.Get(mmc)
 	if ok {
 		mmcids := o.(*mmCids)
@@ -215,7 +215,7 @@ func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error)
 	cst := cbor.NewCborStore(cs.chainLocalBlockstore)
 	var msgmeta types.MsgMeta
-	if err := cst.Get(context.TODO(), mmc, &msgmeta); err != nil {
+	if err := cst.Get(ctx, mmc, &msgmeta); err != nil {
 		return nil, nil, xerrors.Errorf("failed to load msgmeta (%s): %w", mmc, err)
 	}
 
@@ -237,18 +237,18 @@ func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error)
 	return blscids, secpkcids, nil
 }
 
-func (cs *ChainStore) MessagesForBlock(b *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
-	blscids, secpkcids, err := cs.ReadMsgMetaCids(b.Messages)
+func (cs *ChainStore) MessagesForBlock(ctx context.Context, b *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
+	blscids, secpkcids, err := cs.ReadMsgMetaCids(ctx, b.Messages)
 	if err != nil {
 		return nil, nil, err
 	}
 
-	blsmsgs, err := cs.LoadMessagesFromCids(blscids)
+	blsmsgs, err := cs.LoadMessagesFromCids(ctx, blscids)
 	if err != nil {
 		return nil, nil, xerrors.Errorf("loading bls messages for block: %w", err)
 	}
 
-	secpkmsgs, err := cs.LoadSignedMessagesFromCids(secpkcids)
+	secpkmsgs, err := cs.LoadSignedMessagesFromCids(ctx, secpkcids)
 	if err != nil {
 		return nil, nil, xerrors.Errorf("loading secpk messages for block: %w", err)
 	}
@@ -256,8 +256,7 @@ func (cs *ChainStore) MessagesForBlock(b *types.BlockHeader) ([]*types.Message,
 	return blsmsgs, secpkmsgs, nil
 }
 
-func (cs *ChainStore) GetParentReceipt(b *types.BlockHeader, i int) (*types.MessageReceipt, error) {
-	ctx := context.TODO()
+func (cs *ChainStore) GetParentReceipt(ctx context.Context, b *types.BlockHeader, i int) (*types.MessageReceipt, error) {
 	// block headers use adt0, for now.
 	a, err := blockadt.AsArray(cs.ActorStore(ctx), b.ParentMessageReceipts)
 	if err != nil {
@@ -274,10 +273,10 @@ func (cs *ChainStore) GetParentReceipt(b *types.BlockHeader, i int) (*types.Mess
 	return &r, nil
 }
 
-func (cs *ChainStore) LoadMessagesFromCids(cids []cid.Cid) ([]*types.Message, error) {
+func (cs *ChainStore) LoadMessagesFromCids(ctx context.Context, cids []cid.Cid) ([]*types.Message, error) {
 	msgs := make([]*types.Message, 0, len(cids))
 	for i, c := range cids {
-		m, err := cs.GetMessage(c)
+		m, err := cs.GetMessage(ctx, c)
 		if err != nil {
 			return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
 		}
@@ -288,10 +287,10 @@ func (cs *ChainStore) LoadMessagesFromCids(cids []cid.Cid) ([]*types.Message, er
 	return msgs, nil
 }
 
-func (cs *ChainStore) LoadSignedMessagesFromCids(cids []cid.Cid) ([]*types.SignedMessage, error) {
+func (cs *ChainStore) LoadSignedMessagesFromCids(ctx context.Context, cids []cid.Cid) ([]*types.SignedMessage, error) {
 	msgs := make([]*types.SignedMessage, 0, len(cids))
 	for i, c := range cids {
-		m, err := cs.GetSignedMessage(c)
+		m, err := cs.GetSignedMessage(ctx, c)
 		if err != nil {
 			return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
 		}
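`GetMessage`/`GetSignedMessage` above use the blockstore's `View`, which hands the raw block bytes to a callback rather than copying them out, and now takes a context. A toy sketch of that access pattern (not the real blockstore API):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

type viewBlockstore struct{ blocks map[string][]byte }

// View invokes fn with the stored bytes while they remain owned by the store.
func (bs *viewBlockstore) View(ctx context.Context, key string, fn func([]byte) error) error {
	if err := ctx.Err(); err != nil {
		return err
	}
	b, ok := bs.blocks[key]
	if !ok {
		return errors.New("block not found")
	}
	return fn(b)
}

func main() {
	bs := &viewBlockstore{blocks: map[string][]byte{"cid1": []byte("msg-bytes")}}
	var decoded string
	err := bs.View(context.Background(), "cid1", func(b []byte) error {
		decoded = string(b) // decode in place; DecodeMessage in the real code
		return nil
	})
	fmt.Println(decoded, err) // msg-bytes <nil>
}
```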
View File
@@ -30,7 +30,7 @@ func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRo
 	unionBs := bstore.Union(cs.stateBlockstore, cs.chainBlockstore)
 	return cs.WalkSnapshot(ctx, ts, inclRecentRoots, skipOldMsgs, true, func(c cid.Cid) error {
-		blk, err := unionBs.Get(c)
+		blk, err := unionBs.Get(ctx, c)
 		if err != nil {
 			return xerrors.Errorf("writing object to car, bs.Get: %w", err)
 		}
@@ -43,18 +43,18 @@ func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRo
 	})
 }
 
-func (cs *ChainStore) Import(r io.Reader) (*types.TipSet, error) {
+func (cs *ChainStore) Import(ctx context.Context, r io.Reader) (*types.TipSet, error) {
 	// TODO: writing only to the state blockstore is incorrect.
 	// At this time, both the state and chain blockstores are backed by the
 	// universal store. When we physically segregate the stores, we will need
 	// to route state objects to the state blockstore, and chain objects to
 	// the chain blockstore.
-	header, err := car.LoadCar(cs.StateBlockstore(), r)
+	header, err := car.LoadCar(ctx, cs.StateBlockstore(), r)
 	if err != nil {
 		return nil, xerrors.Errorf("loadcar failed: %w", err)
 	}
 
-	root, err := cs.LoadTipSet(types.NewTipSetKey(header.Roots...))
+	root, err := cs.LoadTipSet(ctx, types.NewTipSetKey(header.Roots...))
 	if err != nil {
 		return nil, xerrors.Errorf("failed to load root tipset from chainfile: %w", err)
 	}
@@ -82,7 +82,7 @@ func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRe
 			return err
 		}
 
-		data, err := cs.chainBlockstore.Get(blk)
+		data, err := cs.chainBlockstore.Get(ctx, blk)
 		if err != nil {
 			return xerrors.Errorf("getting block: %w", err)
 		}
@@ -102,7 +102,7 @@ func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRe
 		var cids []cid.Cid
 		if !skipOldMsgs || b.Height > ts.Height()-inclRecentRoots {
 			if walked.Visit(b.Messages) {
-				mcids, err := recurseLinks(cs.chainBlockstore, walked, b.Messages, []cid.Cid{b.Messages})
+				mcids, err := recurseLinks(ctx, cs.chainBlockstore, walked, b.Messages, []cid.Cid{b.Messages})
 				if err != nil {
 					return xerrors.Errorf("recursing messages failed: %w", err)
 				}
@@ -123,7 +123,7 @@ func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRe
 		if b.Height == 0 || b.Height > ts.Height()-inclRecentRoots {
 			if walked.Visit(b.ParentStateRoot) {
-				cids, err := recurseLinks(cs.stateBlockstore, walked, b.ParentStateRoot, []cid.Cid{b.ParentStateRoot})
+				cids, err := recurseLinks(ctx, cs.stateBlockstore, walked, b.ParentStateRoot, []cid.Cid{b.ParentStateRoot})
 				if err != nil {
 					return xerrors.Errorf("recursing genesis state failed: %w", err)
 				}
@@ -168,12 +168,12 @@ func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRe
 	return nil
 }
 
-func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.Cid) ([]cid.Cid, error) {
+func recurseLinks(ctx context.Context, bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.Cid) ([]cid.Cid, error) {
 	if root.Prefix().Codec != cid.DagCBOR {
 		return in, nil
 	}
 
-	data, err := bs.Get(root)
+	data, err := bs.Get(ctx, root)
 	if err != nil {
 		return nil, xerrors.Errorf("recurse links get (%s) failed: %w", root, err)
 	}
@@ -192,7 +192,7 @@ func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.
 		in = append(in, c)
 		var err error
-		in, err = recurseLinks(bs, walked, c, in)
+		in, err = recurseLinks(ctx, bs, walked, c, in)
 		if err != nil {
 			rerr = err
 		}
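`recurseLinks` is a depth-first walk over the DAG that dedups via the `walked` set, so shared subtrees are emitted once. A self-contained toy version over a map-based DAG:

```go
package main

import (
	"context"
	"fmt"
)

type node struct{ links []string }

// recurseLinks collects every key reachable from root exactly once.
func recurseLinks(ctx context.Context, dag map[string]node, walked map[string]bool, root string, in []string) ([]string, error) {
	if err := ctx.Err(); err != nil {
		return nil, err
	}
	for _, c := range dag[root].links {
		if walked[c] {
			continue // shared subtree already visited
		}
		walked[c] = true
		in = append(in, c)
		var err error
		in, err = recurseLinks(ctx, dag, walked, c, in)
		if err != nil {
			return nil, err
		}
	}
	return in, nil
}

func main() {
	dag := map[string]node{
		"root": {links: []string{"a", "b"}},
		"a":    {links: []string{"c"}},
		"b":    {links: []string{"c"}}, // shared link, visited once
	}
	out, _ := recurseLinks(context.Background(), dag, map[string]bool{"root": true}, "root", []string{"root"})
	fmt.Println(out) // [root a c b]
}
```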
View File
@@ -207,17 +207,17 @@ func (cs *ChainStore) Close() error {
 	return nil
 }
 
-func (cs *ChainStore) Load() error {
-	if err := cs.loadHead(); err != nil {
+func (cs *ChainStore) Load(ctx context.Context) error {
+	if err := cs.loadHead(ctx); err != nil {
 		return err
 	}
-	if err := cs.loadCheckpoint(); err != nil {
+	if err := cs.loadCheckpoint(ctx); err != nil {
 		return err
 	}
 	return nil
 }
 
-func (cs *ChainStore) loadHead() error {
-	head, err := cs.metadataDs.Get(chainHeadKey)
+func (cs *ChainStore) loadHead(ctx context.Context) error {
+	head, err := cs.metadataDs.Get(ctx, chainHeadKey)
 	if err == dstore.ErrNotFound {
 		log.Warn("no previous chain state found")
 		return nil
@@ -231,7 +231,7 @@ func (cs *ChainStore) loadHead() error {
 		return xerrors.Errorf("failed to unmarshal stored chain head: %w", err)
 	}
 
-	ts, err := cs.LoadTipSet(types.NewTipSetKey(tscids...))
+	ts, err := cs.LoadTipSet(ctx, types.NewTipSetKey(tscids...))
 	if err != nil {
 		return xerrors.Errorf("loading tipset: %w", err)
 	}
@@ -241,8 +241,8 @@ func (cs *ChainStore) loadHead() error {
 	return nil
 }
 
-func (cs *ChainStore) loadCheckpoint() error {
-	tskBytes, err := cs.metadataDs.Get(checkpointKey)
+func (cs *ChainStore) loadCheckpoint(ctx context.Context) error {
+	tskBytes, err := cs.metadataDs.Get(ctx, checkpointKey)
 	if err == dstore.ErrNotFound {
 		return nil
 	}
@@ -256,7 +256,7 @@ func (cs *ChainStore) loadCheckpoint() error {
 		return err
 	}
 
-	ts, err := cs.LoadTipSet(tsk)
+	ts, err := cs.LoadTipSet(ctx, tsk)
 	if err != nil {
 		return xerrors.Errorf("loading tipset: %w", err)
 	}
@@ -266,13 +266,13 @@ func (cs *ChainStore) loadCheckpoint() error {
 	return nil
 }
 
-func (cs *ChainStore) writeHead(ts *types.TipSet) error {
+func (cs *ChainStore) writeHead(ctx context.Context, ts *types.TipSet) error {
 	data, err := json.Marshal(ts.Cids())
 	if err != nil {
 		return xerrors.Errorf("failed to marshal tipset: %w", err)
 	}
 
-	if err := cs.metadataDs.Put(chainHeadKey, data); err != nil {
+	if err := cs.metadataDs.Put(ctx, chainHeadKey, data); err != nil {
 		return xerrors.Errorf("failed to write chain head to datastore: %w", err)
 	}
 
@@ -341,13 +341,13 @@ func (cs *ChainStore) SubscribeHeadChanges(f ReorgNotifee) {
 func (cs *ChainStore) IsBlockValidated(ctx context.Context, blkid cid.Cid) (bool, error) {
 	key := blockValidationCacheKeyPrefix.Instance(blkid.String())
 
-	return cs.metadataDs.Has(key)
+	return cs.metadataDs.Has(ctx, key)
 }
 
 func (cs *ChainStore) MarkBlockAsValidated(ctx context.Context, blkid cid.Cid) error {
 	key := blockValidationCacheKeyPrefix.Instance(blkid.String())
 
-	if err := cs.metadataDs.Put(key, []byte{0}); err != nil {
+	if err := cs.metadataDs.Put(ctx, key, []byte{0}); err != nil {
 		return xerrors.Errorf("cache block validation: %w", err)
 	}
 
@@ -357,34 +357,34 @@ func (cs *ChainStore) MarkBlockAsValidated(ctx context.Context, blkid cid.Cid) e
 func (cs *ChainStore) UnmarkBlockAsValidated(ctx context.Context, blkid cid.Cid) error {
 	key := blockValidationCacheKeyPrefix.Instance(blkid.String())
 
-	if err := cs.metadataDs.Delete(key); err != nil {
+	if err := cs.metadataDs.Delete(ctx, key); err != nil {
 		return xerrors.Errorf("removing from valid block cache: %w", err)
 	}
 
 	return nil
 }
 
-func (cs *ChainStore) SetGenesis(b *types.BlockHeader) error {
+func (cs *ChainStore) SetGenesis(ctx context.Context, b *types.BlockHeader) error {
 	ts, err := types.NewTipSet([]*types.BlockHeader{b})
 	if err != nil {
 		return err
 	}
 
-	if err := cs.PutTipSet(context.TODO(), ts); err != nil {
+	if err := cs.PutTipSet(ctx, ts); err != nil {
 		return err
 	}
 
-	return cs.metadataDs.Put(dstore.NewKey("0"), b.Cid().Bytes())
+	return cs.metadataDs.Put(ctx, dstore.NewKey("0"), b.Cid().Bytes())
 }
 
 func (cs *ChainStore) PutTipSet(ctx context.Context, ts *types.TipSet) error {
 	for _, b := range ts.Blocks() {
-		if err := cs.PersistBlockHeaders(b); err != nil {
+		if err := cs.PersistBlockHeaders(ctx, b); err != nil {
 			return err
 		}
 	}
 
-	expanded, err := cs.expandTipset(ts.Blocks()[0])
+	expanded, err := cs.expandTipset(ctx, ts.Blocks()[0])
 	if err != nil {
 		return xerrors.Errorf("errored while expanding tipset: %w", err)
 	}
@@ -435,7 +435,7 @@ func (cs *ChainStore) MaybeTakeHeavierTipSet(ctx context.Context, ts *types.TipS
 		// difference between 'bootstrap sync' and 'caught up' sync, we need
 		// some other heuristic.
 
-		exceeds, err := cs.exceedsForkLength(cs.heaviest, ts)
+		exceeds, err := cs.exceedsForkLength(ctx, cs.heaviest, ts)
 		if err != nil {
 			return err
 		}
@@ -458,7 +458,7 @@ func (cs *ChainStore) MaybeTakeHeavierTipSet(ctx context.Context, ts *types.TipS
 // FIXME: We may want to replace some of the logic in `syncFork()` with this.
 // `syncFork()` counts the length on both sides of the fork at the moment (we
 // need to settle on that) but here we just enforce it on the `synced` side.
-func (cs *ChainStore) exceedsForkLength(synced, external *types.TipSet) (bool, error) {
+func (cs *ChainStore) exceedsForkLength(ctx context.Context, synced, external *types.TipSet) (bool, error) {
 	if synced == nil || external == nil {
 		// FIXME: If `cs.heaviest` is nil we should just bypass the entire
 		// `MaybeTakeHeavierTipSet` logic (instead of each of the called
@@ -482,7 +482,7 @@ func (cs *ChainStore) exceedsForkLength(synced, external *types.TipSet) (bool, e
 			// length).
 			return true, nil
 		}
 
-		external, err = cs.LoadTipSet(external.Parents())
+		external, err = cs.LoadTipSet(ctx, external.Parents())
 		if err != nil {
 			return false, xerrors.Errorf("failed to load parent tipset in external chain: %w", err)
 		}
@@ -505,7 +505,7 @@ func (cs *ChainStore) exceedsForkLength(synced, external *types.TipSet) (bool, e
 			// there is no common ancestor.
 			return true, nil
 		}
 
-		synced, err = cs.LoadTipSet(synced.Parents())
+		synced, err = cs.LoadTipSet(ctx, synced.Parents())
 		if err != nil {
 			return false, xerrors.Errorf("failed to load parent tipset in synced chain: %w", err)
 		}
@@ -521,17 +521,17 @@ func (cs *ChainStore) exceedsForkLength(synced, external *types.TipSet) (bool, e
 // CAUTION: Use it only for testing, such as to teleport the chain to a
 // particular tipset to carry out a benchmark, verification, etc. on a chain
 // segment.
-func (cs *ChainStore) ForceHeadSilent(_ context.Context, ts *types.TipSet) error {
+func (cs *ChainStore) ForceHeadSilent(ctx context.Context, ts *types.TipSet) error {
 	log.Warnf("(!!!) forcing a new head silently; new head: %s", ts)
 
 	cs.heaviestLk.Lock()
 	defer cs.heaviestLk.Unlock()
-	if err := cs.removeCheckpoint(); err != nil {
+	if err := cs.removeCheckpoint(ctx); err != nil {
 		return err
 	}
 	cs.heaviest = ts
 
-	err := cs.writeHead(ts)
+	err := cs.writeHead(ctx, ts)
 	if err != nil {
 		err = xerrors.Errorf("failed to write chain head: %s", err)
 	}
@@ -561,7 +561,7 @@ func (cs *ChainStore) reorgWorker(ctx context.Context, initialNotifees []ReorgNo
 				notifees = append(notifees, n)
 
 			case r := <-out:
-				revert, apply, err := cs.ReorgOps(r.old, r.new)
+				revert, apply, err := cs.ReorgOps(ctx, r.old, r.new)
 				if err != nil {
 					log.Error("computing reorg ops failed: ", err)
 					continue
@@ -646,7 +646,7 @@ func (cs *ChainStore) takeHeaviestTipSet(ctx context.Context, ts *types.TipSet)
 	log.Infof("New heaviest tipset! %s (height=%d)", ts.Cids(), ts.Height())
 	cs.heaviest = ts
 
-	if err := cs.writeHead(ts); err != nil {
+	if err := cs.writeHead(ctx, ts); err != nil {
 		log.Errorf("failed to write chain head: %s", err)
 		return nil
 	}
@@ -656,14 +656,14 @@ func (cs *ChainStore) takeHeaviestTipSet(ctx context.Context, ts *types.TipSet)
 
 // FlushValidationCache removes all results of block validation from the
 // chain metadata store. Usually the first step after a new chain import.
-func (cs *ChainStore) FlushValidationCache() error {
-	return FlushValidationCache(cs.metadataDs)
+func (cs *ChainStore) FlushValidationCache(ctx context.Context) error {
+	return FlushValidationCache(ctx, cs.metadataDs)
 }
 
-func FlushValidationCache(ds dstore.Batching) error {
+func FlushValidationCache(ctx context.Context, ds dstore.Batching) error {
 	log.Infof("clearing block validation cache...")
 
-	dsWalk, err := ds.Query(query.Query{
+	dsWalk, err := ds.Query(ctx, query.Query{
 		// Potential TODO: the validation cache is not a namespace on its own
 		// but is rather constructed as prefixed-key `foo:bar` via .Instance(), which
 		// in turn does not work with the filter, which can match only on `foo/bar`
@ -683,7 +683,7 @@ func FlushValidationCache(ds dstore.Batching) error {
return xerrors.Errorf("failed to run key listing query: %w", err) return xerrors.Errorf("failed to run key listing query: %w", err)
} }
batch, err := ds.Batch() batch, err := ds.Batch(ctx)
if err != nil { if err != nil {
return xerrors.Errorf("failed to open a DS batch: %w", err) return xerrors.Errorf("failed to open a DS batch: %w", err)
} }
@ -692,11 +692,11 @@ func FlushValidationCache(ds dstore.Batching) error {
for _, k := range allKeys { for _, k := range allKeys {
if strings.HasPrefix(k.Key, blockValidationCacheKeyPrefix.String()) { if strings.HasPrefix(k.Key, blockValidationCacheKeyPrefix.String()) {
delCnt++ delCnt++
batch.Delete(dstore.RawKey(k.Key)) // nolint:errcheck batch.Delete(ctx, dstore.RawKey(k.Key)) // nolint:errcheck
} }
} }
if err := batch.Commit(); err != nil { if err := batch.Commit(ctx); err != nil {
return xerrors.Errorf("failed to commit the DS batch: %w", err) return xerrors.Errorf("failed to commit the DS batch: %w", err)
} }
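Editor's note: `FlushValidationCache` shows the new API shape well — the query, the batch, and each batched delete all take the context now. A hedged sketch of the same scan-and-batch-delete pattern against go-datastore v0.5+ (`deleteByPrefix` and its arguments are illustrative helpers, not Lotus code):

```go
package example

import (
	"context"
	"strings"

	dstore "github.com/ipfs/go-datastore"
	"github.com/ipfs/go-datastore/query"
)

// deleteByPrefix mirrors FlushValidationCache's shape: list keys with a
// context-aware Query, filter by prefix, and remove matches in one Batch.
func deleteByPrefix(ctx context.Context, ds dstore.Batching, prefix string) error {
	res, err := ds.Query(ctx, query.Query{KeysOnly: true})
	if err != nil {
		return err
	}
	defer res.Close() //nolint:errcheck

	batch, err := ds.Batch(ctx)
	if err != nil {
		return err
	}
	for e := range res.Next() {
		if e.Error != nil {
			return e.Error
		}
		if strings.HasPrefix(e.Key, prefix) {
			if err := batch.Delete(ctx, dstore.RawKey(e.Key)); err != nil {
				return err
			}
		}
	}
	return batch.Commit(ctx)
}
```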
@ -709,24 +709,24 @@ func FlushValidationCache(ds dstore.Batching) error {
// This should only be called if something is broken and needs fixing. // This should only be called if something is broken and needs fixing.
// //
// This function will bypass and remove any checkpoints. // This function will bypass and remove any checkpoints.
func (cs *ChainStore) SetHead(ts *types.TipSet) error { func (cs *ChainStore) SetHead(ctx context.Context, ts *types.TipSet) error {
cs.heaviestLk.Lock() cs.heaviestLk.Lock()
defer cs.heaviestLk.Unlock() defer cs.heaviestLk.Unlock()
if err := cs.removeCheckpoint(); err != nil { if err := cs.removeCheckpoint(ctx); err != nil {
return err return err
} }
return cs.takeHeaviestTipSet(context.TODO(), ts) return cs.takeHeaviestTipSet(context.TODO(), ts)
} }
// RemoveCheckpoint removes the current checkpoint. // RemoveCheckpoint removes the current checkpoint.
func (cs *ChainStore) RemoveCheckpoint() error { func (cs *ChainStore) RemoveCheckpoint(ctx context.Context) error {
cs.heaviestLk.Lock() cs.heaviestLk.Lock()
defer cs.heaviestLk.Unlock() defer cs.heaviestLk.Unlock()
return cs.removeCheckpoint() return cs.removeCheckpoint(ctx)
} }
func (cs *ChainStore) removeCheckpoint() error { func (cs *ChainStore) removeCheckpoint(ctx context.Context) error {
if err := cs.metadataDs.Delete(checkpointKey); err != nil { if err := cs.metadataDs.Delete(ctx, checkpointKey); err != nil {
return err return err
} }
cs.checkpoint = nil cs.checkpoint = nil
@ -736,7 +736,7 @@ func (cs *ChainStore) removeCheckpoint() error {
// SetCheckpoint will set a checkpoint past which the chainstore will not allow forks. // SetCheckpoint will set a checkpoint past which the chainstore will not allow forks.
// //
// NOTE: Checkpoints cannot be set beyond ForkLengthThreshold epochs in the past. // NOTE: Checkpoints cannot be set beyond ForkLengthThreshold epochs in the past.
func (cs *ChainStore) SetCheckpoint(ts *types.TipSet) error { func (cs *ChainStore) SetCheckpoint(ctx context.Context, ts *types.TipSet) error {
tskBytes, err := json.Marshal(ts.Key()) tskBytes, err := json.Marshal(ts.Key())
if err != nil { if err != nil {
return err return err
@ -755,7 +755,7 @@ func (cs *ChainStore) SetCheckpoint(ts *types.TipSet) error {
} }
if !ts.Equals(cs.heaviest) { if !ts.Equals(cs.heaviest) {
anc, err := cs.IsAncestorOf(ts, cs.heaviest) anc, err := cs.IsAncestorOf(ctx, ts, cs.heaviest)
if err != nil { if err != nil {
return xerrors.Errorf("cannot determine whether checkpoint tipset is in main-chain: %w", err) return xerrors.Errorf("cannot determine whether checkpoint tipset is in main-chain: %w", err)
} }
@ -764,7 +764,7 @@ func (cs *ChainStore) SetCheckpoint(ts *types.TipSet) error {
return xerrors.Errorf("cannot mark tipset as checkpoint, since it isn't in the main-chain: %w", err) return xerrors.Errorf("cannot mark tipset as checkpoint, since it isn't in the main-chain: %w", err)
} }
} }
err = cs.metadataDs.Put(checkpointKey, tskBytes) err = cs.metadataDs.Put(ctx, checkpointKey, tskBytes)
if err != nil { if err != nil {
return err return err
} }
@ -781,9 +781,9 @@ func (cs *ChainStore) GetCheckpoint() *types.TipSet {
} }
// Contains returns whether our BlockStore has all blocks in the supplied TipSet. // Contains returns whether our BlockStore has all blocks in the supplied TipSet.
func (cs *ChainStore) Contains(ts *types.TipSet) (bool, error) { func (cs *ChainStore) Contains(ctx context.Context, ts *types.TipSet) (bool, error) {
for _, c := range ts.Cids() { for _, c := range ts.Cids() {
has, err := cs.chainBlockstore.Has(c) has, err := cs.chainBlockstore.Has(ctx, c)
if err != nil { if err != nil {
return false, err return false, err
} }
@ -797,16 +797,16 @@ func (cs *ChainStore) Contains(ts *types.TipSet) (bool, error) {
// GetBlock fetches a BlockHeader with the supplied CID. It returns // GetBlock fetches a BlockHeader with the supplied CID. It returns
// blockstore.ErrNotFound if the block was not found in the BlockStore. // blockstore.ErrNotFound if the block was not found in the BlockStore.
func (cs *ChainStore) GetBlock(c cid.Cid) (*types.BlockHeader, error) { func (cs *ChainStore) GetBlock(ctx context.Context, c cid.Cid) (*types.BlockHeader, error) {
var blk *types.BlockHeader var blk *types.BlockHeader
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) { err := cs.chainLocalBlockstore.View(ctx, c, func(b []byte) (err error) {
blk, err = types.DecodeBlock(b) blk, err = types.DecodeBlock(b)
return err return err
}) })
return blk, err return blk, err
} }
func (cs *ChainStore) LoadTipSet(tsk types.TipSetKey) (*types.TipSet, error) { func (cs *ChainStore) LoadTipSet(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error) {
v, ok := cs.tsCache.Get(tsk) v, ok := cs.tsCache.Get(tsk)
if ok { if ok {
return v.(*types.TipSet), nil return v.(*types.TipSet), nil
@ -819,7 +819,7 @@ func (cs *ChainStore) LoadTipSet(tsk types.TipSetKey) (*types.TipSet, error) {
for i, c := range cids { for i, c := range cids {
i, c := i, c i, c := i, c
eg.Go(func() error { eg.Go(func() error {
b, err := cs.GetBlock(c) b, err := cs.GetBlock(ctx, c)
if err != nil { if err != nil {
return xerrors.Errorf("get block %s: %w", c, err) return xerrors.Errorf("get block %s: %w", c, err)
} }
@ -844,14 +844,14 @@ func (cs *ChainStore) LoadTipSet(tsk types.TipSetKey) (*types.TipSet, error) {
} }
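Editor's note: `LoadTipSet` fans out one goroutine per block header via `errgroup`, which is why the context now has to reach `GetBlock`. A self-contained sketch of that fan-out (`fetchAll` and the string keys are stand-ins for CIDs and headers):

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// fetchAll fetches every key concurrently and fails fast on the first error,
// mirroring LoadTipSet's per-block goroutines.
func fetchAll(ctx context.Context, keys []string,
	getBlock func(context.Context, string) (string, error)) ([]string, error) {
	out := make([]string, len(keys))
	eg, ctx := errgroup.WithContext(ctx)
	for i, k := range keys {
		i, k := i, k // capture loop variables (pre-Go 1.22 idiom, as in the diff)
		eg.Go(func() error {
			b, err := getBlock(ctx, k)
			if err != nil {
				return fmt.Errorf("get block %s: %w", k, err)
			}
			out[i] = b
			return nil
		})
	}
	if err := eg.Wait(); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	res, _ := fetchAll(context.Background(), []string{"a", "b"},
		func(_ context.Context, k string) (string, error) { return "hdr-" + k, nil })
	fmt.Println(res)
}
```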
// IsAncestorOf returns true if 'a' is an ancestor of 'b' // IsAncestorOf returns true if 'a' is an ancestor of 'b'
func (cs *ChainStore) IsAncestorOf(a, b *types.TipSet) (bool, error) { func (cs *ChainStore) IsAncestorOf(ctx context.Context, a, b *types.TipSet) (bool, error) {
if b.Height() <= a.Height() { if b.Height() <= a.Height() {
return false, nil return false, nil
} }
cur := b cur := b
for !a.Equals(cur) && cur.Height() > a.Height() { for !a.Equals(cur) && cur.Height() > a.Height() {
next, err := cs.LoadTipSet(cur.Parents()) next, err := cs.LoadTipSet(ctx, cur.Parents())
if err != nil { if err != nil {
return false, err return false, err
} }
@ -862,13 +862,13 @@ func (cs *ChainStore) IsAncestorOf(a, b *types.TipSet) (bool, error) {
return cur.Equals(a), nil return cur.Equals(a), nil
} }
func (cs *ChainStore) NearestCommonAncestor(a, b *types.TipSet) (*types.TipSet, error) { func (cs *ChainStore) NearestCommonAncestor(ctx context.Context, a, b *types.TipSet) (*types.TipSet, error) {
l, _, err := cs.ReorgOps(a, b) l, _, err := cs.ReorgOps(ctx, a, b)
if err != nil { if err != nil {
return nil, err return nil, err
} }
return cs.LoadTipSet(l[len(l)-1].Parents()) return cs.LoadTipSet(ctx, l[len(l)-1].Parents())
} }
// ReorgOps takes two tipsets (which can be at different heights), and walks // ReorgOps takes two tipsets (which can be at different heights), and walks
@ -879,11 +879,11 @@ func (cs *ChainStore) NearestCommonAncestor(a, b *types.TipSet) (*types.TipSet,
// ancestor. // ancestor.
// //
// If an error happens along the way, we return the error with nil slices. // If an error happens along the way, we return the error with nil slices.
func (cs *ChainStore) ReorgOps(a, b *types.TipSet) ([]*types.TipSet, []*types.TipSet, error) { func (cs *ChainStore) ReorgOps(ctx context.Context, a, b *types.TipSet) ([]*types.TipSet, []*types.TipSet, error) {
return ReorgOps(cs.LoadTipSet, a, b) return ReorgOps(ctx, cs.LoadTipSet, a, b)
} }
func ReorgOps(lts func(types.TipSetKey) (*types.TipSet, error), a, b *types.TipSet) ([]*types.TipSet, []*types.TipSet, error) { func ReorgOps(ctx context.Context, lts func(ctx context.Context, _ types.TipSetKey) (*types.TipSet, error), a, b *types.TipSet) ([]*types.TipSet, []*types.TipSet, error) {
left := a left := a
right := b right := b
@ -891,7 +891,7 @@ func ReorgOps(lts func(types.TipSetKey) (*types.TipSet, error), a, b *types.TipS
for !left.Equals(right) { for !left.Equals(right) {
if left.Height() > right.Height() { if left.Height() > right.Height() {
leftChain = append(leftChain, left) leftChain = append(leftChain, left)
par, err := lts(left.Parents()) par, err := lts(ctx, left.Parents())
if err != nil { if err != nil {
return nil, nil, err return nil, nil, err
} }
@ -899,7 +899,7 @@ func ReorgOps(lts func(types.TipSetKey) (*types.TipSet, error), a, b *types.TipS
left = par left = par
} else { } else {
rightChain = append(rightChain, right) rightChain = append(rightChain, right)
par, err := lts(right.Parents()) par, err := lts(ctx, right.Parents())
if err != nil { if err != nil {
log.Infof("failed to fetch right.Parents: %s", err) log.Infof("failed to fetch right.Parents: %s", err)
return nil, nil, err return nil, nil, err
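Editor's note: `ReorgOps` is the classic two-pointer walk — step whichever branch is higher down to its parent, collecting tipsets on each side, until both pointers meet at the common ancestor. A self-contained toy version (`Node` stands in for `*types.TipSet`, with parents resolved in memory rather than via `LoadTipSet(ctx, ...)`):

```go
package main

import "fmt"

type Node struct {
	ID     string
	Height int
	Parent *Node
}

// reorgOps returns (revert, apply): nodes to unwind from `left` and nodes
// to apply from `right`, each ordered from tip toward the common ancestor.
func reorgOps(left, right *Node) (revert, apply []*Node) {
	for left != right {
		if left.Height > right.Height {
			revert = append(revert, left)
			left = left.Parent
		} else {
			apply = append(apply, right)
			right = right.Parent
		}
	}
	return revert, apply
}

func main() {
	root := &Node{ID: "g", Height: 0}
	a1 := &Node{ID: "a1", Height: 1, Parent: root}
	a2 := &Node{ID: "a2", Height: 2, Parent: a1}
	b1 := &Node{ID: "b1", Height: 1, Parent: root}
	revert, apply := reorgOps(a2, b1)
	fmt.Println(len(revert), len(apply)) // 2 1
}
```

The `revert` slice is what the reorg worker unwinds from the old head, and `apply` is what it plays forward onto the new one.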
@ -921,7 +921,7 @@ func (cs *ChainStore) GetHeaviestTipSet() (ts *types.TipSet) {
return return
} }
func (cs *ChainStore) AddToTipSetTracker(b *types.BlockHeader) error { func (cs *ChainStore) AddToTipSetTracker(ctx context.Context, b *types.BlockHeader) error {
cs.tstLk.Lock() cs.tstLk.Lock()
defer cs.tstLk.Unlock() defer cs.tstLk.Unlock()
@ -931,7 +931,7 @@ func (cs *ChainStore) AddToTipSetTracker(b *types.BlockHeader) error {
log.Debug("tried to add block to tipset tracker that was already there") log.Debug("tried to add block to tipset tracker that was already there")
return nil return nil
} }
h, err := cs.GetBlock(oc) h, err := cs.GetBlock(ctx, oc)
if err == nil && h != nil { if err == nil && h != nil {
if h.Miner == b.Miner { if h.Miner == b.Miner {
log.Warnf("Have multiple blocks from miner %s at height %d in our tipset cache %s-%s", b.Miner, b.Height, b.Cid(), h.Cid()) log.Warnf("Have multiple blocks from miner %s at height %d in our tipset cache %s-%s", b.Miner, b.Height, b.Cid(), h.Cid())
@ -960,7 +960,7 @@ func (cs *ChainStore) AddToTipSetTracker(b *types.BlockHeader) error {
return nil return nil
} }
func (cs *ChainStore) PersistBlockHeaders(b ...*types.BlockHeader) error { func (cs *ChainStore) PersistBlockHeaders(ctx context.Context, b ...*types.BlockHeader) error {
sbs := make([]block.Block, len(b)) sbs := make([]block.Block, len(b))
for i, header := range b { for i, header := range b {
@ -982,13 +982,13 @@ func (cs *ChainStore) PersistBlockHeaders(b ...*types.BlockHeader) error {
end = len(b) end = len(b)
} }
err = multierr.Append(err, cs.chainLocalBlockstore.PutMany(sbs[start:end])) err = multierr.Append(err, cs.chainLocalBlockstore.PutMany(ctx, sbs[start:end]))
} }
return err return err
} }
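Editor's note: `PersistBlockHeaders` writes headers in fixed-size windows so a single `PutMany` call stays bounded. The chunking idiom in isolation (`batchSize` and the int payloads are illustrative):

```go
package main

import "fmt"

// putChunked writes items in windows of at most batchSize, as
// PersistBlockHeaders does with sbs[start:end] and PutMany.
func putChunked(items []int, batchSize int, putMany func([]int) error) error {
	for start := 0; start < len(items); start += batchSize {
		end := start + batchSize
		if end > len(items) {
			end = len(items)
		}
		if err := putMany(items[start:end]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = putChunked(make([]int, 10), 3, func(chunk []int) error {
		fmt.Println(len(chunk)) // 3 3 3 1
		return nil
	})
}
```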
func (cs *ChainStore) expandTipset(b *types.BlockHeader) (*types.TipSet, error) { func (cs *ChainStore) expandTipset(ctx context.Context, b *types.BlockHeader) (*types.TipSet, error) {
	// Hold the lock for the whole function for now; if it becomes a problem 	// Hold the lock for the whole function for now; if it becomes a problem
	// we can fix it pretty easily. 	// we can fix it pretty easily.
cs.tstLk.Lock() cs.tstLk.Lock()
@ -1007,7 +1007,7 @@ func (cs *ChainStore) expandTipset(b *types.BlockHeader) (*types.TipSet, error)
continue continue
} }
h, err := cs.GetBlock(bhc) h, err := cs.GetBlock(ctx, bhc)
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to load block (%s) for tipset expansion: %w", bhc, err) return nil, xerrors.Errorf("failed to load block (%s) for tipset expansion: %w", bhc, err)
} }
@ -1029,11 +1029,11 @@ func (cs *ChainStore) expandTipset(b *types.BlockHeader) (*types.TipSet, error)
} }
func (cs *ChainStore) AddBlock(ctx context.Context, b *types.BlockHeader) error { func (cs *ChainStore) AddBlock(ctx context.Context, b *types.BlockHeader) error {
if err := cs.PersistBlockHeaders(b); err != nil { if err := cs.PersistBlockHeaders(ctx, b); err != nil {
return err return err
} }
ts, err := cs.expandTipset(b) ts, err := cs.expandTipset(ctx, b)
if err != nil { if err != nil {
return err return err
} }
@ -1045,8 +1045,8 @@ func (cs *ChainStore) AddBlock(ctx context.Context, b *types.BlockHeader) error
return nil return nil
} }
func (cs *ChainStore) GetGenesis() (*types.BlockHeader, error) { func (cs *ChainStore) GetGenesis(ctx context.Context) (*types.BlockHeader, error) {
data, err := cs.metadataDs.Get(dstore.NewKey("0")) data, err := cs.metadataDs.Get(ctx, dstore.NewKey("0"))
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -1056,22 +1056,22 @@ func (cs *ChainStore) GetGenesis() (*types.BlockHeader, error) {
return nil, err return nil, err
} }
return cs.GetBlock(c) return cs.GetBlock(ctx, c)
} }
// GetPath returns the sequence of atomic head change operations that // GetPath returns the sequence of atomic head change operations that
// need to be applied in order to switch the head of the chain from the `from` // need to be applied in order to switch the head of the chain from the `from`
// tipset to the `to` tipset. // tipset to the `to` tipset.
func (cs *ChainStore) GetPath(ctx context.Context, from types.TipSetKey, to types.TipSetKey) ([]*api.HeadChange, error) { func (cs *ChainStore) GetPath(ctx context.Context, from types.TipSetKey, to types.TipSetKey) ([]*api.HeadChange, error) {
fts, err := cs.LoadTipSet(from) fts, err := cs.LoadTipSet(ctx, from)
if err != nil { if err != nil {
return nil, xerrors.Errorf("loading from tipset %s: %w", from, err) return nil, xerrors.Errorf("loading from tipset %s: %w", from, err)
} }
tts, err := cs.LoadTipSet(to) tts, err := cs.LoadTipSet(ctx, to)
if err != nil { if err != nil {
return nil, xerrors.Errorf("loading to tipset %s: %w", to, err) return nil, xerrors.Errorf("loading to tipset %s: %w", to, err)
} }
revert, apply, err := cs.ReorgOps(fts, tts) revert, apply, err := cs.ReorgOps(ctx, fts, tts)
if err != nil { if err != nil {
return nil, xerrors.Errorf("error getting tipset branches: %w", err) return nil, xerrors.Errorf("error getting tipset branches: %w", err)
} }
@ -1108,11 +1108,11 @@ func (cs *ChainStore) ActorStore(ctx context.Context) adt.Store {
return ActorStore(ctx, cs.stateBlockstore) return ActorStore(ctx, cs.stateBlockstore)
} }
func (cs *ChainStore) TryFillTipSet(ts *types.TipSet) (*FullTipSet, error) { func (cs *ChainStore) TryFillTipSet(ctx context.Context, ts *types.TipSet) (*FullTipSet, error) {
var out []*types.FullBlock var out []*types.FullBlock
for _, b := range ts.Blocks() { for _, b := range ts.Blocks() {
bmsgs, smsgs, err := cs.MessagesForBlock(b) bmsgs, smsgs, err := cs.MessagesForBlock(ctx, b)
if err != nil { if err != nil {
// TODO: check for 'not found' errors, and only return nil if this // TODO: check for 'not found' errors, and only return nil if this
// is actually a 'not found' error // is actually a 'not found' error
@ -1154,7 +1154,7 @@ func (cs *ChainStore) GetTipsetByHeight(ctx context.Context, h abi.ChainEpoch, t
if lbts.Height() < h { if lbts.Height() < h {
log.Warnf("chain index returned the wrong tipset at height %d, using slow retrieval", h) log.Warnf("chain index returned the wrong tipset at height %d, using slow retrieval", h)
lbts, err = cs.cindex.GetTipsetByHeightWithoutCache(ts, h) lbts, err = cs.cindex.GetTipsetByHeightWithoutCache(ctx, ts, h)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -1164,7 +1164,7 @@ func (cs *ChainStore) GetTipsetByHeight(ctx context.Context, h abi.ChainEpoch, t
return lbts, nil return lbts, nil
} }
return cs.LoadTipSet(lbts.Parents()) return cs.LoadTipSet(ctx, lbts.Parents())
} }
func (cs *ChainStore) Weight(ctx context.Context, hts *types.TipSet) (types.BigInt, error) { // todo remove func (cs *ChainStore) Weight(ctx context.Context, hts *types.TipSet) (types.BigInt, error) { // todo remove
@ -1190,14 +1190,14 @@ func breakWeightTie(ts1, ts2 *types.TipSet) bool {
return false return false
} }
func (cs *ChainStore) GetTipSetFromKey(tsk types.TipSetKey) (*types.TipSet, error) { func (cs *ChainStore) GetTipSetFromKey(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error) {
if tsk.IsEmpty() { if tsk.IsEmpty() {
return cs.GetHeaviestTipSet(), nil return cs.GetHeaviestTipSet(), nil
} }
return cs.LoadTipSet(tsk) return cs.LoadTipSet(ctx, tsk)
} }
func (cs *ChainStore) GetLatestBeaconEntry(ts *types.TipSet) (*types.BeaconEntry, error) { func (cs *ChainStore) GetLatestBeaconEntry(ctx context.Context, ts *types.TipSet) (*types.BeaconEntry, error) {
cur := ts cur := ts
for i := 0; i < 20; i++ { for i := 0; i < 20; i++ {
cbe := cur.Blocks()[0].BeaconEntries cbe := cur.Blocks()[0].BeaconEntries
@ -1209,7 +1209,7 @@ func (cs *ChainStore) GetLatestBeaconEntry(ts *types.TipSet) (*types.BeaconEntry
return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry") return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry")
} }
next, err := cs.LoadTipSet(cur.Parents()) next, err := cs.LoadTipSet(ctx, cur.Parents())
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err) return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err)
} }
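Editor's note: `GetLatestBeaconEntry` (and the syncer's twin further below) walks at most 20 tipsets back, taking the last beacon entry of the first tipset that has any. A toy version of that bounded walk (`tipSet` is a stand-in; the real code also special-cases the genesis height):

```go
package main

import (
	"errors"
	"fmt"
)

type tipSet struct {
	beaconEntries []string
	parent        *tipSet
}

// latestBeaconEntry returns the newest beacon entry within the last 20
// tipsets, walking parents exactly as the chainstore does.
func latestBeaconEntry(ts *tipSet) (string, error) {
	cur := ts
	for i := 0; i < 20; i++ {
		if n := len(cur.beaconEntries); n > 0 {
			return cur.beaconEntries[n-1], nil
		}
		if cur.parent == nil {
			return "", errors.New("made it back to genesis block without finding beacon entry")
		}
		cur = cur.parent
	}
	return "", errors.New("no beacon entries found in the 20 latest tipsets")
}

func main() {
	g := &tipSet{beaconEntries: []string{"drand:1"}}
	fmt.Println(latestBeaconEntry(&tipSet{parent: g}))
}
```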

View File

@ -109,7 +109,7 @@ func TestChainExportImport(t *testing.T) {
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), filcns.Weight, nil) cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), filcns.Weight, nil)
defer cs.Close() //nolint:errcheck defer cs.Close() //nolint:errcheck
root, err := cs.Import(buf) root, err := cs.Import(context.TODO(), buf)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -144,12 +144,12 @@ func TestChainExportImportFull(t *testing.T) {
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), filcns.Weight, nil) cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), filcns.Weight, nil)
defer cs.Close() //nolint:errcheck defer cs.Close() //nolint:errcheck
root, err := cs.Import(buf) root, err := cs.Import(context.TODO(), buf)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
err = cs.SetHead(last) err = cs.SetHead(context.Background(), last)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }

View File

@ -1,3 +1,4 @@
//stm: #unit
package sub package sub
import ( import (
@ -49,6 +50,7 @@ func TestFetchCidsWithDedup(t *testing.T) {
} }
g := &getter{msgs} g := &getter{msgs}
//stm: @CHAIN_INCOMING_FETCH_MESSAGES_BY_CID_001
// the cids have a duplicate // the cids have a duplicate
res, err := FetchMessagesByCids(context.TODO(), g, append(cids, cids[0])) res, err := FetchMessagesByCids(context.TODO(), g, append(cids, cids[0]))

View File

@ -119,8 +119,8 @@ type SyncManagerCtor func(syncFn SyncFunc) SyncManager
type Genesis *types.TipSet type Genesis *types.TipSet
func LoadGenesis(sm *stmgr.StateManager) (Genesis, error) { func LoadGenesis(ctx context.Context, sm *stmgr.StateManager) (Genesis, error) {
gen, err := sm.ChainStore().GetGenesis() gen, err := sm.ChainStore().GetGenesis(ctx)
if err != nil { if err != nil {
return nil, xerrors.Errorf("getting genesis block: %w", err) return nil, xerrors.Errorf("getting genesis block: %w", err)
} }
@ -227,7 +227,7 @@ func (syncer *Syncer) InformNewHead(from peer.ID, fts *store.FullTipSet) bool {
// TODO: IMPORTANT(GARBAGE) this needs to be put in the 'temporary' side of // TODO: IMPORTANT(GARBAGE) this needs to be put in the 'temporary' side of
// the blockstore // the blockstore
if err := syncer.store.PersistBlockHeaders(fts.TipSet().Blocks()...); err != nil { if err := syncer.store.PersistBlockHeaders(ctx, fts.TipSet().Blocks()...); err != nil {
log.Warn("failed to persist incoming block header: ", err) log.Warn("failed to persist incoming block header: ", err)
return false return false
} }
@ -298,11 +298,11 @@ func (syncer *Syncer) ValidateMsgMeta(fblk *types.FullBlock) error {
// into the blockstore. // into the blockstore.
blockstore := bstore.NewMemory() blockstore := bstore.NewMemory()
cst := cbor.NewCborStore(blockstore) cst := cbor.NewCborStore(blockstore)
ctx := context.Background()
var bcids, scids []cid.Cid var bcids, scids []cid.Cid
for _, m := range fblk.BlsMessages { for _, m := range fblk.BlsMessages {
c, err := store.PutMessage(blockstore, m) c, err := store.PutMessage(ctx, blockstore, m)
if err != nil { if err != nil {
return xerrors.Errorf("putting bls message to blockstore after msgmeta computation: %w", err) return xerrors.Errorf("putting bls message to blockstore after msgmeta computation: %w", err)
} }
@ -310,7 +310,7 @@ func (syncer *Syncer) ValidateMsgMeta(fblk *types.FullBlock) error {
} }
for _, m := range fblk.SecpkMessages { for _, m := range fblk.SecpkMessages {
c, err := store.PutMessage(blockstore, m) c, err := store.PutMessage(ctx, blockstore, m)
if err != nil { if err != nil {
return xerrors.Errorf("putting bls message to blockstore after msgmeta computation: %w", err) return xerrors.Errorf("putting bls message to blockstore after msgmeta computation: %w", err)
} }
@ -360,7 +360,7 @@ func copyBlockstore(ctx context.Context, from, to bstore.Blockstore) error {
// TODO: should probably expose better methods on the blockstore for this operation // TODO: should probably expose better methods on the blockstore for this operation
var blks []blocks.Block var blks []blocks.Block
for c := range cids { for c := range cids {
b, err := from.Get(c) b, err := from.Get(ctx, c)
if err != nil { if err != nil {
return err return err
} }
@ -368,7 +368,7 @@ func copyBlockstore(ctx context.Context, from, to bstore.Blockstore) error {
blks = append(blks, b) blks = append(blks, b)
} }
if err := to.PutMany(blks); err != nil { if err := to.PutMany(ctx, blks); err != nil {
return err return err
} }
@ -463,7 +463,7 @@ func computeMsgMeta(bs cbor.IpldStore, bmsgCids, smsgCids []cid.Cid) (cid.Cid, e
// {hint/usage} This is used from the HELLO protocol, to fetch the greeting // {hint/usage} This is used from the HELLO protocol, to fetch the greeting
// peer's heaviest tipset if we don't have it. // peer's heaviest tipset if we don't have it.
func (syncer *Syncer) FetchTipSet(ctx context.Context, p peer.ID, tsk types.TipSetKey) (*store.FullTipSet, error) { func (syncer *Syncer) FetchTipSet(ctx context.Context, p peer.ID, tsk types.TipSetKey) (*store.FullTipSet, error) {
if fts, err := syncer.tryLoadFullTipSet(tsk); err == nil { if fts, err := syncer.tryLoadFullTipSet(ctx, tsk); err == nil {
return fts, nil return fts, nil
} }
@ -474,15 +474,15 @@ func (syncer *Syncer) FetchTipSet(ctx context.Context, p peer.ID, tsk types.TipS
// tryLoadFullTipSet queries the tipset in the ChainStore, and returns a full // tryLoadFullTipSet queries the tipset in the ChainStore, and returns a full
// representation of it containing FullBlocks. If not all blocks are found // representation of it containing FullBlocks. If not all blocks are found
// locally, it errors entirely with blockstore.ErrNotFound. // locally, it errors entirely with blockstore.ErrNotFound.
func (syncer *Syncer) tryLoadFullTipSet(tsk types.TipSetKey) (*store.FullTipSet, error) { func (syncer *Syncer) tryLoadFullTipSet(ctx context.Context, tsk types.TipSetKey) (*store.FullTipSet, error) {
ts, err := syncer.store.LoadTipSet(tsk) ts, err := syncer.store.LoadTipSet(ctx, tsk)
if err != nil { if err != nil {
return nil, err return nil, err
} }
fts := &store.FullTipSet{} fts := &store.FullTipSet{}
for _, b := range ts.Blocks() { for _, b := range ts.Blocks() {
bmsgs, smsgs, err := syncer.store.MessagesForBlock(b) bmsgs, smsgs, err := syncer.store.MessagesForBlock(ctx, b)
if err != nil { if err != nil {
return nil, err return nil, err
} }
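Editor's note: `FetchTipSet` tries `tryLoadFullTipSet` first and only asks the peer when the local load fails. The pattern reduced to its skeleton (both function arguments are stand-ins):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// fetchLocalFirst returns the locally loaded value when possible and only
// falls back to the remote fetch on error; the local error is deliberately
// swallowed, exactly as in FetchTipSet above.
func fetchLocalFirst(
	ctx context.Context,
	loadLocal func(context.Context) (string, error),
	fetchRemote func(context.Context) (string, error),
) (string, error) {
	if v, err := loadLocal(ctx); err == nil {
		return v, nil
	}
	return fetchRemote(ctx)
}

func main() {
	miss := func(context.Context) (string, error) { return "", errors.New("not found") }
	hit := func(context.Context) (string, error) { return "tipset-from-peer", nil }
	fmt.Println(fetchLocalFirst(context.Background(), miss, hit))
}
```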
@ -583,7 +583,7 @@ func (syncer *Syncer) ValidateTipSet(ctx context.Context, fts *store.FullTipSet,
return xerrors.Errorf("validating block %s: %w", b.Cid(), err) return xerrors.Errorf("validating block %s: %w", b.Cid(), err)
} }
if err := syncer.sm.ChainStore().AddToTipSetTracker(b.Header); err != nil { if err := syncer.sm.ChainStore().AddToTipSetTracker(ctx, b.Header); err != nil {
return xerrors.Errorf("failed to add validated header to tipset tracker: %w", err) return xerrors.Errorf("failed to add validated header to tipset tracker: %w", err)
} }
return nil return nil
@ -755,7 +755,7 @@ loop:
} }
// If, for some reason, we have a suffix of the chain locally, handle that here // If, for some reason, we have a suffix of the chain locally, handle that here
ts, err := syncer.store.LoadTipSet(at) ts, err := syncer.store.LoadTipSet(ctx, at)
if err == nil { if err == nil {
acceptedBlocks = append(acceptedBlocks, at.Cids()...) acceptedBlocks = append(acceptedBlocks, at.Cids()...)
@ -838,7 +838,7 @@ loop:
return blockSet, nil return blockSet, nil
} }
knownParent, err := syncer.store.LoadTipSet(known.Parents()) knownParent, err := syncer.store.LoadTipSet(ctx, known.Parents())
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to load next local tipset: %w", err) return nil, xerrors.Errorf("failed to load next local tipset: %w", err)
} }
@ -892,7 +892,7 @@ func (syncer *Syncer) syncFork(ctx context.Context, incoming *types.TipSet, know
return nil, err return nil, err
} }
nts, err := syncer.store.LoadTipSet(known.Parents()) nts, err := syncer.store.LoadTipSet(ctx, known.Parents())
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to load next local tipset: %w", err) return nil, xerrors.Errorf("failed to load next local tipset: %w", err)
} }
@ -928,7 +928,7 @@ func (syncer *Syncer) syncFork(ctx context.Context, incoming *types.TipSet, know
return nil, ErrForkCheckpoint return nil, ErrForkCheckpoint
} }
nts, err = syncer.store.LoadTipSet(nts.Parents()) nts, err = syncer.store.LoadTipSet(ctx, nts.Parents())
if err != nil { if err != nil {
return nil, xerrors.Errorf("loading next local tipset: %w", err) return nil, xerrors.Errorf("loading next local tipset: %w", err)
} }
@ -965,7 +965,7 @@ func (syncer *Syncer) iterFullTipsets(ctx context.Context, headers []*types.TipS
span.AddAttributes(trace.Int64Attribute("num_headers", int64(len(headers)))) span.AddAttributes(trace.Int64Attribute("num_headers", int64(len(headers))))
for i := len(headers) - 1; i >= 0; { for i := len(headers) - 1; i >= 0; {
fts, err := syncer.store.TryFillTipSet(headers[i]) fts, err := syncer.store.TryFillTipSet(ctx, headers[i])
if err != nil { if err != nil {
return err return err
} }
@ -1138,7 +1138,7 @@ func persistMessages(ctx context.Context, bs bstore.Blockstore, bst *exchange.Co
for _, m := range bst.Bls { for _, m := range bst.Bls {
//log.Infof("putting BLS message: %s", m.Cid()) //log.Infof("putting BLS message: %s", m.Cid())
if _, err := store.PutMessage(bs, m); err != nil { if _, err := store.PutMessage(ctx, bs, m); err != nil {
log.Errorf("failed to persist messages: %+v", err) log.Errorf("failed to persist messages: %+v", err)
return xerrors.Errorf("BLS message processing failed: %w", err) return xerrors.Errorf("BLS message processing failed: %w", err)
} }
@ -1148,7 +1148,7 @@ func persistMessages(ctx context.Context, bs bstore.Blockstore, bst *exchange.Co
return xerrors.Errorf("unknown signature type on message %s: %q", m.Cid(), m.Signature.Type) return xerrors.Errorf("unknown signature type on message %s: %q", m.Cid(), m.Signature.Type)
} }
//log.Infof("putting secp256k1 message: %s", m.Cid()) //log.Infof("putting secp256k1 message: %s", m.Cid())
if _, err := store.PutMessage(bs, m); err != nil { if _, err := store.PutMessage(ctx, bs, m); err != nil {
log.Errorf("failed to persist messages: %+v", err) log.Errorf("failed to persist messages: %+v", err)
return xerrors.Errorf("secp256k1 message processing failed: %w", err) return xerrors.Errorf("secp256k1 message processing failed: %w", err)
} }
@ -1201,7 +1201,7 @@ func (syncer *Syncer) collectChain(ctx context.Context, ts *types.TipSet, hts *t
for _, ts := range headers { for _, ts := range headers {
toPersist = append(toPersist, ts.Blocks()...) toPersist = append(toPersist, ts.Blocks()...)
} }
if err := syncer.store.PersistBlockHeaders(toPersist...); err != nil { if err := syncer.store.PersistBlockHeaders(ctx, toPersist...); err != nil {
err = xerrors.Errorf("failed to persist synced blocks to the chainstore: %w", err) err = xerrors.Errorf("failed to persist synced blocks to the chainstore: %w", err)
ss.Error(err) ss.Error(err)
return err return err
@ -1245,7 +1245,7 @@ func (syncer *Syncer) CheckBadBlockCache(blk cid.Cid) (string, bool) {
return bbr.String(), ok return bbr.String(), ok
} }
func (syncer *Syncer) getLatestBeaconEntry(_ context.Context, ts *types.TipSet) (*types.BeaconEntry, error) { func (syncer *Syncer) getLatestBeaconEntry(ctx context.Context, ts *types.TipSet) (*types.BeaconEntry, error) {
cur := ts cur := ts
for i := 0; i < 20; i++ { for i := 0; i < 20; i++ {
cbe := cur.Blocks()[0].BeaconEntries cbe := cur.Blocks()[0].BeaconEntries
@ -1257,7 +1257,7 @@ func (syncer *Syncer) getLatestBeaconEntry(_ context.Context, ts *types.TipSet)
return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry") return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry")
} }
next, err := syncer.store.LoadTipSet(cur.Parents()) next, err := syncer.store.LoadTipSet(ctx, cur.Parents())
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err) return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err)
} }

View File

@ -1,3 +1,4 @@
//stm: #unit
package chain_test package chain_test
import ( import (
@ -206,20 +207,21 @@ func (tu *syncTestUtil) pushFtsAndWait(to int, fts *store.FullTipSet, wait bool)
} }
func (tu *syncTestUtil) pushTsExpectErr(to int, fts *store.FullTipSet, experr bool) { func (tu *syncTestUtil) pushTsExpectErr(to int, fts *store.FullTipSet, experr bool) {
ctx := context.TODO()
for _, fb := range fts.Blocks { for _, fb := range fts.Blocks {
var b types.BlockMsg var b types.BlockMsg
// -1 to match block.Height // -1 to match block.Height
b.Header = fb.Header b.Header = fb.Header
for _, msg := range fb.SecpkMessages { for _, msg := range fb.SecpkMessages {
c, err := tu.nds[to].(*impl.FullNodeAPI).ChainAPI.Chain.PutMessage(msg) c, err := tu.nds[to].(*impl.FullNodeAPI).ChainAPI.Chain.PutMessage(ctx, msg)
require.NoError(tu.t, err) require.NoError(tu.t, err)
b.SecpkMessages = append(b.SecpkMessages, c) b.SecpkMessages = append(b.SecpkMessages, c)
} }
for _, msg := range fb.BlsMessages { for _, msg := range fb.BlsMessages {
c, err := tu.nds[to].(*impl.FullNodeAPI).ChainAPI.Chain.PutMessage(msg) c, err := tu.nds[to].(*impl.FullNodeAPI).ChainAPI.Chain.PutMessage(ctx, msg)
require.NoError(tu.t, err) require.NoError(tu.t, err)
b.BlsMessages = append(b.BlsMessages, c) b.BlsMessages = append(b.BlsMessages, c)
@ -299,7 +301,7 @@ func (tu *syncTestUtil) addSourceNode(gen int) {
lastTs := blocks[len(blocks)-1].Blocks lastTs := blocks[len(blocks)-1].Blocks
for _, lastB := range lastTs { for _, lastB := range lastTs {
cs := out.(*impl.FullNodeAPI).ChainAPI.Chain cs := out.(*impl.FullNodeAPI).ChainAPI.Chain
require.NoError(tu.t, cs.AddToTipSetTracker(lastB.Header)) require.NoError(tu.t, cs.AddToTipSetTracker(context.Background(), lastB.Header))
err = cs.AddBlock(tu.ctx, lastB.Header) err = cs.AddBlock(tu.ctx, lastB.Header)
require.NoError(tu.t, err) require.NoError(tu.t, err)
} }
@ -461,6 +463,8 @@ func (tu *syncTestUtil) waitUntilSyncTarget(to int, target *types.TipSet) {
} }
func TestSyncSimple(t *testing.T) { func TestSyncSimple(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50 H := 50
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -477,6 +481,8 @@ func TestSyncSimple(t *testing.T) {
} }
func TestSyncMining(t *testing.T) { func TestSyncMining(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50 H := 50
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -499,6 +505,8 @@ func TestSyncMining(t *testing.T) {
} }
func TestSyncBadTimestamp(t *testing.T) { func TestSyncBadTimestamp(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50 H := 50
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -553,6 +561,8 @@ func (wpp badWpp) ComputeProof(context.Context, []proof7.ExtendedSectorInfo, abi
} }
func TestSyncBadWinningPoSt(t *testing.T) { func TestSyncBadWinningPoSt(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 15 H := 15
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -582,6 +592,9 @@ func (tu *syncTestUtil) loadChainToNode(to int) {
} }
func TestSyncFork(t *testing.T) { func TestSyncFork(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 10 H := 10
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -649,6 +662,9 @@ func TestSyncFork(t *testing.T) {
// A and B both include _different_ messages from sender X with nonce N (where N is the correct nonce for X). // A and B both include _different_ messages from sender X with nonce N (where N is the correct nonce for X).
// We can confirm that the state can be correctly computed, and that `MessagesForTipset` behaves as expected. // We can confirm that the state can be correctly computed, and that `MessagesForTipset` behaves as expected.
func TestDuplicateNonce(t *testing.T) { func TestDuplicateNonce(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001
//stm: @CHAIN_SYNCER_NEW_PEER_HEAD_001, @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 10 H := 10
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -703,6 +719,7 @@ func TestDuplicateNonce(t *testing.T) {
var includedMsg cid.Cid var includedMsg cid.Cid
var skippedMsg cid.Cid var skippedMsg cid.Cid
//stm: @CHAIN_STATE_SEARCH_MSG_001
r0, err0 := tu.nds[0].StateSearchMsg(context.TODO(), ts2.TipSet().Key(), msgs[0][0].Cid(), api.LookbackNoLimit, true) r0, err0 := tu.nds[0].StateSearchMsg(context.TODO(), ts2.TipSet().Key(), msgs[0][0].Cid(), api.LookbackNoLimit, true)
r1, err1 := tu.nds[0].StateSearchMsg(context.TODO(), ts2.TipSet().Key(), msgs[1][0].Cid(), api.LookbackNoLimit, true) r1, err1 := tu.nds[0].StateSearchMsg(context.TODO(), ts2.TipSet().Key(), msgs[1][0].Cid(), api.LookbackNoLimit, true)
@ -735,7 +752,7 @@ func TestDuplicateNonce(t *testing.T) {
t.Fatal("included message should be in exec trace") t.Fatal("included message should be in exec trace")
} }
mft, err := tu.g.ChainStore().MessagesForTipset(ts1.TipSet()) mft, err := tu.g.ChainStore().MessagesForTipset(context.TODO(), ts1.TipSet())
require.NoError(t, err) require.NoError(t, err)
require.True(t, len(mft) == 1, "only expecting one message for this tipset") require.True(t, len(mft) == 1, "only expecting one message for this tipset")
require.Equal(t, includedMsg, mft[0].VMMessage().Cid(), "messages for tipset didn't contain expected message") require.Equal(t, includedMsg, mft[0].VMMessage().Cid(), "messages for tipset didn't contain expected message")
@ -744,6 +761,9 @@ func TestDuplicateNonce(t *testing.T) {
// This test asserts that a block that includes a message with bad nonce can't be synced. A nonce is "bad" if it can't // This test asserts that a block that includes a message with bad nonce can't be synced. A nonce is "bad" if it can't
// be applied on the parent state. // be applied on the parent state.
func TestBadNonce(t *testing.T) { func TestBadNonce(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
H := 10 H := 10
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -791,6 +811,9 @@ func TestBadNonce(t *testing.T) {
// One of the messages uses the sender's robust address, the other uses the ID address. // One of the messages uses the sender's robust address, the other uses the ID address.
// Such a block is invalid and should not sync. // Such a block is invalid and should not sync.
func TestMismatchedNoncesRobustID(t *testing.T) { func TestMismatchedNoncesRobustID(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
v5h := abi.ChainEpoch(4) v5h := abi.ChainEpoch(4)
tu := prepSyncTestWithV5Height(t, int(v5h+5), v5h) tu := prepSyncTestWithV5Height(t, int(v5h+5), v5h)
@ -803,6 +826,7 @@ func TestMismatchedNoncesRobustID(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
// Produce a message from the banker // Produce a message from the banker
//stm: @CHAIN_STATE_LOOKUP_ID_001
makeMsg := func(id bool) *types.SignedMessage { makeMsg := func(id bool) *types.SignedMessage {
sender := tu.g.Banker() sender := tu.g.Banker()
if id { if id {
@ -845,6 +869,9 @@ func TestMismatchedNoncesRobustID(t *testing.T) {
// One of the messages uses the sender's robust address, the other uses the ID address. // One of the messages uses the sender's robust address, the other uses the ID address.
// Such a block is valid and should sync. // Such a block is valid and should sync.
func TestMatchedNoncesRobustID(t *testing.T) { func TestMatchedNoncesRobustID(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
v5h := abi.ChainEpoch(4) v5h := abi.ChainEpoch(4)
tu := prepSyncTestWithV5Height(t, int(v5h+5), v5h) tu := prepSyncTestWithV5Height(t, int(v5h+5), v5h)
@ -857,6 +884,7 @@ func TestMatchedNoncesRobustID(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
// Produce a message from the banker with specified nonce // Produce a message from the banker with specified nonce
//stm: @CHAIN_STATE_LOOKUP_ID_001
makeMsg := func(n uint64, id bool) *types.SignedMessage { makeMsg := func(n uint64, id bool) *types.SignedMessage {
sender := tu.g.Banker() sender := tu.g.Banker()
if id { if id {
@ -916,6 +944,8 @@ func runSyncBenchLength(b *testing.B, l int) {
} }
func TestSyncInputs(t *testing.T) { func TestSyncInputs(t *testing.T) {
//stm: @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_VALIDATE_BLOCK_001,
//stm: @CHAIN_SYNCER_START_001, @CHAIN_SYNCER_STOP_001
H := 10 H := 10
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -943,6 +973,9 @@ func TestSyncInputs(t *testing.T) {
} }
func TestSyncCheckpointHead(t *testing.T) { func TestSyncCheckpointHead(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
H := 10 H := 10
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -962,6 +995,7 @@ func TestSyncCheckpointHead(t *testing.T) {
a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil, 0, true) a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil, 0, true)
tu.waitUntilSyncTarget(p1, a.TipSet()) tu.waitUntilSyncTarget(p1, a.TipSet())
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, a.TipSet().Key()) tu.checkpointTs(p1, a.TipSet().Key())
require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet())) require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet()))
@ -981,15 +1015,20 @@ func TestSyncCheckpointHead(t *testing.T) {
tu.waitUntilNodeHasTs(p1, b.TipSet().Key()) tu.waitUntilNodeHasTs(p1, b.TipSet().Key())
p1Head := tu.getHead(p1) p1Head := tu.getHead(p1)
require.True(tu.t, p1Head.Equals(a.TipSet())) require.True(tu.t, p1Head.Equals(a.TipSet()))
//stm: @CHAIN_SYNCER_CHECK_BAD_001
tu.assertBad(p1, b.TipSet()) tu.assertBad(p1, b.TipSet())
// Should be able to switch forks. // Should be able to switch forks.
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, b.TipSet().Key()) tu.checkpointTs(p1, b.TipSet().Key())
p1Head = tu.getHead(p1) p1Head = tu.getHead(p1)
require.True(tu.t, p1Head.Equals(b.TipSet())) require.True(tu.t, p1Head.Equals(b.TipSet()))
} }
func TestSyncCheckpointEarlierThanHead(t *testing.T) { func TestSyncCheckpointEarlierThanHead(t *testing.T) {
//stm: @BLOCKCHAIN_BEACON_VALIDATE_BLOCK_VALUES_01, @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_SYNC_001, @CHAIN_SYNCER_COLLECT_CHAIN_001, @CHAIN_SYNCER_COLLECT_HEADERS_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_VALIDATE_TIPSET_001, @CHAIN_SYNCER_STOP_001
H := 10 H := 10
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)
@ -1009,6 +1048,7 @@ func TestSyncCheckpointEarlierThanHead(t *testing.T) {
a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil, 0, true) a = tu.mineOnBlock(a, p1, []int{0}, true, false, nil, 0, true)
tu.waitUntilSyncTarget(p1, a.TipSet()) tu.waitUntilSyncTarget(p1, a.TipSet())
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, a1.TipSet().Key()) tu.checkpointTs(p1, a1.TipSet().Key())
require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet())) require.NoError(t, tu.g.ResyncBankerNonce(a1.TipSet()))
@ -1028,15 +1068,19 @@ func TestSyncCheckpointEarlierThanHead(t *testing.T) {
tu.waitUntilNodeHasTs(p1, b.TipSet().Key()) tu.waitUntilNodeHasTs(p1, b.TipSet().Key())
p1Head := tu.getHead(p1) p1Head := tu.getHead(p1)
require.True(tu.t, p1Head.Equals(a.TipSet())) require.True(tu.t, p1Head.Equals(a.TipSet()))
//stm: @CHAIN_SYNCER_CHECK_BAD_001
tu.assertBad(p1, b.TipSet()) tu.assertBad(p1, b.TipSet())
// Should be able to switch forks. // Should be able to switch forks.
//stm: @CHAIN_SYNCER_CHECKPOINT_001
tu.checkpointTs(p1, b.TipSet().Key()) tu.checkpointTs(p1, b.TipSet().Key())
p1Head = tu.getHead(p1) p1Head = tu.getHead(p1)
require.True(tu.t, p1Head.Equals(b.TipSet())) require.True(tu.t, p1Head.Equals(b.TipSet()))
} }
func TestInvalidHeight(t *testing.T) { func TestInvalidHeight(t *testing.T) {
//stm: @CHAIN_SYNCER_LOAD_GENESIS_001, @CHAIN_SYNCER_FETCH_TIPSET_001, @CHAIN_SYNCER_START_001
//stm: @CHAIN_SYNCER_VALIDATE_MESSAGE_META_001, @CHAIN_SYNCER_STOP_001
H := 50 H := 50
tu := prepSyncTest(t, H) tu := prepSyncTest(t, H)

View File

@ -10,8 +10,9 @@ import (
addr "github.com/filecoin-project/go-address" addr "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto" "github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/lotus/build"
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/build"
) )
type GasCharge struct { type GasCharge struct {
@ -81,7 +82,9 @@ type Pricelist interface {
OnVerifyConsensusFault() GasCharge OnVerifyConsensusFault() GasCharge
} }
var prices = map[abi.ChainEpoch]Pricelist{ // Prices are the price lists per starting epoch. Public for testing purposes
// (concretely to allow the test vector runner to rebase prices).
var Prices = map[abi.ChainEpoch]Pricelist{
abi.ChainEpoch(0): &pricelistV0{ abi.ChainEpoch(0): &pricelistV0{
computeGasMulti: 1, computeGasMulti: 1,
storageGasMulti: 1000, storageGasMulti: 1000,
@ -216,8 +219,8 @@ func PricelistByEpoch(epoch abi.ChainEpoch) Pricelist {
	// since we are storing the prices as a map of epoch to price 	// since we are storing the prices as a map of epoch to price
	// we need to get the price with the highest epoch that is lower than or equal to the `epoch` arg 	// we need to get the price with the highest epoch that is lower than or equal to the `epoch` arg
bestEpoch := abi.ChainEpoch(0) bestEpoch := abi.ChainEpoch(0)
bestPrice := prices[bestEpoch] bestPrice := Prices[bestEpoch]
for e, pl := range prices { for e, pl := range Prices {
// if `e` happened after `bestEpoch` and `e` is earlier or equal to the target `epoch` // if `e` happened after `bestEpoch` and `e` is earlier or equal to the target `epoch`
if e > bestEpoch && e <= epoch { if e > bestEpoch && e <= epoch {
bestEpoch = e bestEpoch = e
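Editor's note: `PricelistByEpoch` selects, among the price-list start epochs, the greatest one that is at or below the target epoch. A self-contained toy version (string values stand in for `Pricelist`):

```go
package main

import "fmt"

// pricelistFor picks the entry whose start epoch is the highest one <= epoch.
func pricelistFor(prices map[int]string, epoch int) string {
	bestEpoch := 0
	bestPrice := prices[bestEpoch]
	for e, pl := range prices {
		if e > bestEpoch && e <= epoch {
			bestEpoch = e
			bestPrice = pl
		}
	}
	return bestPrice
}

func main() {
	prices := map[int]string{0: "v0", 100: "calico"}
	fmt.Println(pricelistFor(prices, 50))  // v0
	fmt.Println(pricelistFor(prices, 150)) // calico
}
```

The explicit `e > bestEpoch && e <= epoch` comparison keeps the result independent of Go's randomized map iteration order.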

View File

@ -1,7 +1,6 @@
package vm package vm
import ( import (
"context"
"fmt" "fmt"
"io" "io"
"testing" "testing"
@ -136,9 +135,7 @@ func TestInvokerBasic(t *testing.T) {
{ {
_, aerr := code[1](&Runtime{ _, aerr := code[1](&Runtime{
vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version { vm: &VM{networkVersion: network.Version0},
return network.Version0
}},
Message: &basicRtMessage{}, Message: &basicRtMessage{},
}, []byte{99}) }, []byte{99})
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {
@ -149,9 +146,7 @@ func TestInvokerBasic(t *testing.T) {
{ {
_, aerr := code[1](&Runtime{ _, aerr := code[1](&Runtime{
vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version { vm: &VM{networkVersion: network.Version7},
return network.Version7
}},
Message: &basicRtMessage{}, Message: &basicRtMessage{},
}, []byte{99}) }, []byte{99})
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {

View File

@ -5,6 +5,7 @@ import (
"context" "context"
"encoding/binary" "encoding/binary"
"fmt" "fmt"
"os"
gruntime "runtime" gruntime "runtime"
"time" "time"
@ -56,7 +57,7 @@ func (m *Message) ValueReceived() abi.TokenAmount {
} }
// EnableGasTracing, if true, outputs gas tracing in execution traces. // EnableGasTracing, if true, outputs gas tracing in execution traces.
var EnableGasTracing = false var EnableGasTracing = os.Getenv("LOTUS_VM_ENABLE_GAS_TRACING_VERY_SLOW") == "1"
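Editor's note: `EnableGasTracing` changes from a hard-coded `false` into an environment probe evaluated at package init, so the very slow per-charge tracing can be switched on without recompiling. The same idiom in isolation (aside from the variable name shown in the diff, this is generic Go, not Lotus-specific):

```go
package main

import (
	"fmt"
	"os"
)

// Evaluated once at program start; run with
// LOTUS_VM_ENABLE_GAS_TRACING_VERY_SLOW=1 to flip it on.
var enableGasTracing = os.Getenv("LOTUS_VM_ENABLE_GAS_TRACING_VERY_SLOW") == "1"

func main() {
	fmt.Println("gas tracing enabled:", enableGasTracing)
}
```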
type Runtime struct { type Runtime struct {
rt7.Message rt7.Message
@ -91,7 +92,7 @@ func (rt *Runtime) BaseFee() abi.TokenAmount {
} }
func (rt *Runtime) NetworkVersion() network.Version { func (rt *Runtime) NetworkVersion() network.Version {
return rt.vm.GetNtwkVersion(rt.ctx, rt.CurrEpoch()) return rt.vm.networkVersion
} }
func (rt *Runtime) TotalFilCircSupply() abi.TokenAmount { func (rt *Runtime) TotalFilCircSupply() abi.TokenAmount {
@ -223,16 +224,7 @@ func (rt *Runtime) GetActorCodeCID(addr address.Address) (ret cid.Cid, ok bool)
} }
func (rt *Runtime) GetRandomnessFromTickets(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness { func (rt *Runtime) GetRandomnessFromTickets(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness {
var err error res, err := rt.vm.rand.GetChainRandomness(rt.ctx, personalization, randEpoch, entropy)
var res []byte
rnv := rt.vm.ntwkVersion(rt.ctx, randEpoch)
if rnv >= network.Version13 {
res, err = rt.vm.rand.GetChainRandomnessV2(rt.ctx, personalization, randEpoch, entropy)
} else {
res, err = rt.vm.rand.GetChainRandomnessV1(rt.ctx, personalization, randEpoch, entropy)
}
if err != nil { if err != nil {
panic(aerrors.Fatalf("could not get ticket randomness: %s", err)) panic(aerrors.Fatalf("could not get ticket randomness: %s", err))
@ -241,17 +233,7 @@ func (rt *Runtime) GetRandomnessFromTickets(personalization crypto.DomainSeparat
} }
func (rt *Runtime) GetRandomnessFromBeacon(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness { func (rt *Runtime) GetRandomnessFromBeacon(personalization crypto.DomainSeparationTag, randEpoch abi.ChainEpoch, entropy []byte) abi.Randomness {
var err error res, err := rt.vm.rand.GetBeaconRandomness(rt.ctx, personalization, randEpoch, entropy)
var res []byte
rnv := rt.vm.ntwkVersion(rt.ctx, randEpoch)
if rnv >= network.Version14 {
res, err = rt.vm.rand.GetBeaconRandomnessV3(rt.ctx, personalization, randEpoch, entropy)
} else if rnv == network.Version13 {
res, err = rt.vm.rand.GetBeaconRandomnessV2(rt.ctx, personalization, randEpoch, entropy)
} else {
res, err = rt.vm.rand.GetBeaconRandomnessV1(rt.ctx, personalization, randEpoch, entropy)
}
if err != nil { if err != nil {
panic(aerrors.Fatalf("could not get beacon randomness: %s", err)) panic(aerrors.Fatalf("could not get beacon randomness: %s", err))

View File

@ -82,10 +82,10 @@ type gasChargingBlocks struct {
under cbor.IpldBlockstore under cbor.IpldBlockstore
} }
func (bs *gasChargingBlocks) View(c cid.Cid, cb func([]byte) error) error { func (bs *gasChargingBlocks) View(ctx context.Context, c cid.Cid, cb func([]byte) error) error {
if v, ok := bs.under.(blockstore.Viewer); ok { if v, ok := bs.under.(blockstore.Viewer); ok {
bs.chargeGas(bs.pricelist.OnIpldGet()) bs.chargeGas(bs.pricelist.OnIpldGet())
return v.View(c, func(b []byte) error { return v.View(ctx, c, func(b []byte) error {
// we have successfully retrieved the value; charge for it, even if the user-provided function fails. // we have successfully retrieved the value; charge for it, even if the user-provided function fails.
bs.chargeGas(newGasCharge("OnIpldViewEnd", 0, 0).WithExtra(len(b))) bs.chargeGas(newGasCharge("OnIpldViewEnd", 0, 0).WithExtra(len(b)))
bs.chargeGas(gasOnActorExec) bs.chargeGas(gasOnActorExec)
@ -93,16 +93,16 @@ func (bs *gasChargingBlocks) View(c cid.Cid, cb func([]byte) error) error {
}) })
} }
// the underlying blockstore doesn't implement the viewer interface, fall back to normal Get behaviour. // the underlying blockstore doesn't implement the viewer interface, fall back to normal Get behaviour.
blk, err := bs.Get(c) blk, err := bs.Get(ctx, c)
if err == nil && blk != nil { if err == nil && blk != nil {
return cb(blk.RawData()) return cb(blk.RawData())
} }
return err return err
} }
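Editor's note: `gasChargingBlocks.View` uses the optional-interface fast path — type-assert the underlying store to `blockstore.Viewer` for a zero-copy read, and fall back to `Get`-then-callback otherwise. A simplified, self-contained sketch of that structure (gas charging omitted; the interfaces are stand-ins for the blockstore API):

```go
package main

import (
	"context"
	"fmt"
)

type Getter interface {
	Get(ctx context.Context, key string) ([]byte, error)
}

// Viewer is the optional zero-copy fast path.
type Viewer interface {
	View(ctx context.Context, key string, cb func([]byte) error) error
}

// view prefers View when the store supports it, else falls back to Get.
func view(ctx context.Context, bs Getter, key string, cb func([]byte) error) error {
	if v, ok := bs.(Viewer); ok {
		return v.View(ctx, key, cb)
	}
	b, err := bs.Get(ctx, key)
	if err != nil {
		return err
	}
	return cb(b)
}

type mapStore map[string][]byte

func (m mapStore) Get(_ context.Context, key string) ([]byte, error) { return m[key], nil }

func main() {
	bs := mapStore{"k": []byte("value")}
	_ = view(context.Background(), bs, "k", func(b []byte) error {
		fmt.Println(string(b))
		return nil
	})
}
```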
func (bs *gasChargingBlocks) Get(c cid.Cid) (block.Block, error) { func (bs *gasChargingBlocks) Get(ctx context.Context, c cid.Cid) (block.Block, error) {
bs.chargeGas(bs.pricelist.OnIpldGet()) bs.chargeGas(bs.pricelist.OnIpldGet())
blk, err := bs.under.Get(c) blk, err := bs.under.Get(ctx, c)
if err != nil { if err != nil {
return nil, aerrors.Escalate(err, "failed to get block from blockstore") return nil, aerrors.Escalate(err, "failed to get block from blockstore")
} }
@ -112,10 +112,10 @@ func (bs *gasChargingBlocks) Get(c cid.Cid) (block.Block, error) {
return blk, nil return blk, nil
} }
func (bs *gasChargingBlocks) Put(blk block.Block) error { func (bs *gasChargingBlocks) Put(ctx context.Context, blk block.Block) error {
bs.chargeGas(bs.pricelist.OnIpldPut(len(blk.RawData()))) bs.chargeGas(bs.pricelist.OnIpldPut(len(blk.RawData())))
if err := bs.under.Put(blk); err != nil { if err := bs.under.Put(ctx, blk); err != nil {
return aerrors.Escalate(err, "failed to write data to disk") return aerrors.Escalate(err, "failed to write data to disk")
} }
bs.chargeGas(gasOnActorExec) bs.chargeGas(gasOnActorExec)
@@ -169,7 +169,7 @@ func (vm *VM) makeRuntime(ctx context.Context, msg *types.Message, parent *Runti
 	}
 	vmm.From = resF
 
-	if vm.ntwkVersion(ctx, vm.blockHeight) <= network.Version3 {
+	if vm.networkVersion <= network.Version3 {
 		rt.Message = &vmm
 	} else {
 		resT, _ := rt.ResolveAddress(msg.To)
@@ -209,7 +209,7 @@ type VM struct {
 	areg           *ActorRegistry
 	rand           Rand
 	circSupplyCalc CircSupplyCalculator
-	ntwkVersion    NtwkVersionGetter
+	networkVersion network.Version
 	baseFee        abi.TokenAmount
 	lbStateGet     LookbackStateGetter
 	baseCircSupply abi.TokenAmount
@@ -225,7 +225,7 @@ type VMOpts struct {
 	Actors         *ActorRegistry
 	Syscalls       SyscallBuilder
 	CircSupplyCalc CircSupplyCalculator
-	NtwkVersion    NtwkVersionGetter // TODO: stebalien: In what cases do we actually need this? It seems like even when creating new networks we want to use the 'global'/build-default version getter
+	NetworkVersion network.Version
 	BaseFee        abi.TokenAmount
 	LookbackState  LookbackStateGetter
 }
@@ -251,7 +251,7 @@ func NewVM(ctx context.Context, opts *VMOpts) (*VM, error) {
 		areg:           opts.Actors,
 		rand:           opts.Rand, // TODO: Probably should be a syscall
 		circSupplyCalc: opts.CircSupplyCalc,
-		ntwkVersion:    opts.NtwkVersion,
+		networkVersion: opts.NetworkVersion,
 		Syscalls:       opts.Syscalls,
 		baseFee:        opts.BaseFee,
 		baseCircSupply: baseCirc,
@@ -260,11 +260,8 @@ func NewVM(ctx context.Context, opts *VMOpts) (*VM, error) {
 }
 
 type Rand interface {
-	GetChainRandomnessV1(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
-	GetChainRandomnessV2(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
-	GetBeaconRandomnessV1(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
-	GetBeaconRandomnessV2(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
-	GetBeaconRandomnessV3(ctx context.Context, pers crypto.DomainSeparationTag, filecoinEpoch abi.ChainEpoch, entropy []byte) ([]byte, error)
+	GetChainRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
+	GetBeaconRandomness(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error)
 }
 
 type ApplyRet struct {
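
With `Rand` cut down to two methods, a deterministic stand-in for tests gets much lighter. A hypothetical stub; the name `fixedRand` and its canned 32-byte output are illustrative only:

```go
import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/crypto"
)

// fixedRand satisfies the two-method Rand interface with canned output,
// which is all a deterministic VM test usually needs.
type fixedRand struct{}

func (fixedRand) GetChainRandomness(_ context.Context, _ crypto.DomainSeparationTag, _ abi.ChainEpoch, _ []byte) ([]byte, error) {
	return []byte("fixed randomness 0123456789abcde"), nil // 32 bytes
}

func (fixedRand) GetBeaconRandomness(_ context.Context, _ crypto.DomainSeparationTag, _ abi.ChainEpoch, _ []byte) ([]byte, error) {
	return []byte("fixed randomness 0123456789abcde"), nil // 32 bytes
}
```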
@@ -316,7 +313,7 @@ func (vm *VM) send(ctx context.Context, msg *types.Message, parent *Runtime,
 			return nil, aerrors.Wrapf(err, "could not create account")
 		}
 		toActor = a
-		if vm.ntwkVersion(ctx, vm.blockHeight) <= network.Version3 {
+		if vm.networkVersion <= network.Version3 {
 			// Leave the rt.Message as is
 		} else {
 			nmsg := Message{
@@ -343,7 +340,7 @@ func (vm *VM) send(ctx context.Context, msg *types.Message, parent *Runtime,
 		defer rt.chargeGasSafe(newGasCharge("OnMethodInvocationDone", 0, 0))
 
 		if types.BigCmp(msg.Value, types.NewInt(0)) != 0 {
-			if err := vm.transfer(msg.From, msg.To, msg.Value, vm.ntwkVersion(ctx, vm.blockHeight)); err != nil {
+			if err := vm.transfer(msg.From, msg.To, msg.Value, vm.networkVersion); err != nil {
 				return nil, aerrors.Wrap(err, "failed to transfer funds")
 			}
 		}
@@ -620,7 +617,7 @@ func (vm *VM) ApplyMessage(ctx context.Context, cmsg types.ChainMsg) (*ApplyRet,
 }
 
 func (vm *VM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Message, errcode exitcode.ExitCode) (bool, error) {
-	if vm.ntwkVersion(ctx, vm.blockHeight) <= network.Version12 {
+	if vm.networkVersion <= network.Version12 {
 		// Check to see if we should burn funds. We avoid burning on successful
 		// window post. This won't catch _indirect_ window post calls, but this
 		// is the best we can get for now.
@@ -710,7 +707,7 @@ func Copy(ctx context.Context, from, to blockstore.Blockstore, root cid.Cid) err
 	go func() {
 		for b := range toFlush {
-			if err := to.PutMany(b); err != nil {
+			if err := to.PutMany(ctx, b); err != nil {
 				close(freeBufs)
 				errFlushChan <- xerrors.Errorf("batch put in copy: %w", err)
 				return
@@ -739,7 +736,7 @@ func Copy(ctx context.Context, from, to blockstore.Blockstore, root cid.Cid) err
 		return nil
 	}
 
-	if err := copyRec(from, to, root, batchCp); err != nil {
+	if err := copyRec(ctx, from, to, root, batchCp); err != nil {
 		return xerrors.Errorf("copyRec: %w", err)
 	}
 
@@ -764,13 +761,13 @@ func Copy(ctx context.Context, from, to blockstore.Blockstore, root cid.Cid) err
 	return nil
 }
 
-func copyRec(from, to blockstore.Blockstore, root cid.Cid, cp func(block.Block) error) error {
+func copyRec(ctx context.Context, from, to blockstore.Blockstore, root cid.Cid, cp func(block.Block) error) error {
 	if root.Prefix().MhType == 0 {
 		// identity cid, skip
 		return nil
 	}
 
-	blk, err := from.Get(root)
+	blk, err := from.Get(ctx, root)
 	if err != nil {
 		return xerrors.Errorf("get %s failed: %w", root, err)
 	}
@@ -795,7 +792,7 @@ func copyRec(from, to blockstore.Blockstore, root cid.Cid, cp func(block.Block)
 			}
 		} else {
 			// If we have an object, we already have its children, skip the object.
-			has, err := to.Has(link)
+			has, err := to.Has(ctx, link)
 			if err != nil {
 				lerr = xerrors.Errorf("has: %w", err)
 				return
@@ -805,7 +802,7 @@ func copyRec(from, to blockstore.Blockstore, root cid.Cid, cp func(block.Block)
 			}
 		}
 
-		if err := copyRec(from, to, link, cp); err != nil {
+		if err := copyRec(ctx, from, to, link, cp); err != nil {
 			lerr = err
 			return
 		}
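
`copyRec` now receives the caller's context, so every blockstore round-trip during a state copy can observe cancellation. A stripped-down sketch of the same traversal shape, with generic store interfaces and an explicit cancellation check; link extraction is elided because Lotus scans the raw block bytes for CIDs:

```go
import (
	"context"

	blocks "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-cid"
)

type simpleStore interface {
	Get(ctx context.Context, c cid.Cid) (blocks.Block, error)
	Has(ctx context.Context, c cid.Cid) (bool, error)
}

// copyDAG walks a DAG depth-first, honouring cancellation between nodes.
func copyDAG(ctx context.Context, from, to simpleStore, root cid.Cid, cp func(blocks.Block) error) error {
	if err := ctx.Err(); err != nil {
		return err // stop promptly if the caller gave up
	}
	blk, err := from.Get(ctx, root)
	if err != nil {
		return err
	}
	if err := cp(blk); err != nil {
		return err
	}
	var links []cid.Cid // link extraction elided; see note above
	for _, l := range links {
		if ok, err := to.Has(ctx, l); err != nil {
			return err
		} else if ok {
			continue // already copied, and its children with it
		}
		if err := copyDAG(ctx, from, to, l, cp); err != nil {
			return err
		}
	}
	return nil
}
```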
@@ -827,17 +824,6 @@ func (vm *VM) StateTree() types.StateTree {
 	return vm.cstate
 }
 
-func (vm *VM) SetBlockHeight(ctx context.Context, h abi.ChainEpoch) error {
-	vm.blockHeight = h
-	ncirc, err := vm.circSupplyCalc(ctx, vm.blockHeight, vm.cstate)
-	if err != nil {
-		return err
-	}
-	vm.baseCircSupply = ncirc
-	return nil
-}
-
 func (vm *VM) Invoke(act *types.Actor, rt *Runtime, method abi.MethodNum, params []byte) ([]byte, aerrors.ActorError) {
 	ctx, span := trace.StartSpan(rt.ctx, "vm.Invoke")
 	defer span.End()
@@ -865,13 +851,9 @@ func (vm *VM) SetInvoker(i *ActorRegistry) {
 	vm.areg = i
 }
 
-func (vm *VM) GetNtwkVersion(ctx context.Context, ce abi.ChainEpoch) network.Version {
-	return vm.ntwkVersion(ctx, ce)
-}
-
 func (vm *VM) GetCircSupply(ctx context.Context) (abi.TokenAmount, error) {
 	// Before v15, this was recalculated on each invocation as the state tree was mutated
-	if vm.GetNtwkVersion(ctx, vm.blockHeight) <= network.Version14 {
+	if vm.networkVersion <= network.Version14 {
 		return vm.circSupplyCalc(ctx, vm.blockHeight, vm.cstate)
 	}
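
A VM instance only ever executes at one epoch, which is why `SetBlockHeight` and the version-getter callback can go away: the caller resolves a single `network.Version` before construction. The sketch below captures that refactor in miniature; `versionGetter` mirrors the shape of the removed field and `resolveOnce` is a hypothetical helper:

```go
import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/network"
)

// versionGetter is the shape of the old VMOpts field: a callback the VM
// could query at arbitrary epochs.
type versionGetter func(ctx context.Context, e abi.ChainEpoch) network.Version

// resolveOnce shows the refactor in miniature: because a VM runs at a
// single epoch, the getter collapses to one value resolved up front,
// which the VM then stores as a plain network.Version.
func resolveOnce(ctx context.Context, get versionGetter, epoch abi.ChainEpoch) network.Version {
	return get(ctx, epoch)
}
```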

View File

@@ -39,7 +39,7 @@ type LedgerKeyInfo struct {
 var _ api.Wallet = (*LedgerWallet)(nil)
 
 func (lw LedgerWallet) WalletSign(ctx context.Context, signer address.Address, toSign []byte, meta api.MsgMeta) (*crypto.Signature, error) {
-	ki, err := lw.getKeyInfo(signer)
+	ki, err := lw.getKeyInfo(ctx, signer)
 	if err != nil {
 		return nil, err
 	}
@@ -80,8 +80,8 @@ func (lw LedgerWallet) WalletSign(ctx context.Context, signer address.Address, t
 	}, nil
 }
 
-func (lw LedgerWallet) getKeyInfo(addr address.Address) (*LedgerKeyInfo, error) {
-	kib, err := lw.ds.Get(keyForAddr(addr))
+func (lw LedgerWallet) getKeyInfo(ctx context.Context, addr address.Address) (*LedgerKeyInfo, error) {
+	kib, err := lw.ds.Get(ctx, keyForAddr(addr))
 	if err != nil {
 		return nil, err
 	}
@@ -95,7 +95,7 @@ func (lw LedgerWallet) getKeyInfo(addr address.Address) (*LedgerKeyInfo, error)
 }
 
 func (lw LedgerWallet) WalletDelete(ctx context.Context, k address.Address) error {
-	return lw.ds.Delete(keyForAddr(k))
+	return lw.ds.Delete(ctx, keyForAddr(k))
 }
 
 func (lw LedgerWallet) WalletExport(ctx context.Context, k address.Address) (*types.KeyInfo, error) {
@@ -103,7 +103,7 @@ func (lw LedgerWallet) WalletExport(ctx context.Context, k address.Address) (*ty
 }
 
 func (lw LedgerWallet) WalletHas(ctx context.Context, k address.Address) (bool, error) {
-	_, err := lw.ds.Get(keyForAddr(k))
+	_, err := lw.ds.Get(ctx, keyForAddr(k))
 	if err == nil {
 		return true, nil
 	}
@@ -118,10 +118,10 @@ func (lw LedgerWallet) WalletImport(ctx context.Context, kinfo *types.KeyInfo) (
 	if err := json.Unmarshal(kinfo.PrivateKey, &ki); err != nil {
 		return address.Undef, err
 	}
-	return lw.importKey(ki)
+	return lw.importKey(ctx, ki)
 }
 
-func (lw LedgerWallet) importKey(ki LedgerKeyInfo) (address.Address, error) {
+func (lw LedgerWallet) importKey(ctx context.Context, ki LedgerKeyInfo) (address.Address, error) {
 	if ki.Address == address.Undef {
 		return address.Undef, fmt.Errorf("no address given in imported key info")
 	}
@@ -133,7 +133,7 @@ func (lw LedgerWallet) importKey(ki LedgerKeyInfo) (address.Address, error) {
 		return address.Undef, xerrors.Errorf("marshaling key info: %w", err)
 	}
 
-	if err := lw.ds.Put(keyForAddr(ki.Address), bb); err != nil {
+	if err := lw.ds.Put(ctx, keyForAddr(ki.Address), bb); err != nil {
 		return address.Undef, err
 	}
@@ -141,7 +141,7 @@ func (lw LedgerWallet) importKey(ki LedgerKeyInfo) (address.Address, error) {
 }
 
 func (lw LedgerWallet) WalletList(ctx context.Context) ([]address.Address, error) {
-	res, err := lw.ds.Query(query.Query{Prefix: dsLedgerPrefix})
+	res, err := lw.ds.Query(ctx, query.Query{Prefix: dsLedgerPrefix})
 	if err != nil {
 		return nil, err
 	}
@@ -175,7 +175,7 @@ func (lw LedgerWallet) WalletNew(ctx context.Context, t types.KeyType) (address.
 			t, types.KTSecp256k1Ledger)
 	}
 
-	res, err := lw.ds.Query(query.Query{Prefix: dsLedgerPrefix})
+	res, err := lw.ds.Query(ctx, query.Query{Prefix: dsLedgerPrefix})
 	if err != nil {
 		return address.Undef, err
 	}
@@ -224,7 +224,7 @@ func (lw LedgerWallet) WalletNew(ctx context.Context, t types.KeyType) (address.
 	lki.Address = a
 	lki.Path = path
 
-	return lw.importKey(lki)
+	return lw.importKey(ctx, lki)
 }
 
 func (lw *LedgerWallet) Get() api.Wallet {
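
All of these edits follow from the go-datastore upgrade in this release, which added a leading `context.Context` parameter to every datastore method (as of go-datastore v0.5). A minimal, runnable example against an in-memory datastore; the key name is illustrative:

```go
package main

import (
	"context"
	"fmt"

	datastore "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
)

func main() {
	ctx := context.Background()
	ds := dssync.MutexWrap(datastore.NewMapDatastore())

	key := datastore.NewKey("/wallet/example") // illustrative key
	if err := ds.Put(ctx, key, []byte("value")); err != nil {
		panic(err)
	}
	v, err := ds.Get(ctx, key) // context-aware signature
	if err != nil {
		panic(err)
	}
	fmt.Println(string(v))
}
```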

View File

@@ -66,7 +66,7 @@ func BackupCmd(repoFlag string, rt repo.RepoType, getApi BackupApiFn) *cli.Comma
 		return xerrors.Errorf("opening backup file %s: %w", fpath, err)
 	}
 
-	if err := bds.Backup(out); err != nil {
+	if err := bds.Backup(cctx.Context, out); err != nil {
 		if cerr := out.Close(); cerr != nil {
 			log.Errorw("error closing backup file while handling backup error", "closeErr", cerr, "backupErr", err)
 		}

View File

@@ -92,6 +92,7 @@ var clientCmd = &cli.Command{
 		WithCategory("data", clientLocalCmd),
 		WithCategory("data", clientStat),
 		WithCategory("retrieval", clientFindCmd),
+		WithCategory("retrieval", clientQueryRetrievalAskCmd),
 		WithCategory("retrieval", clientRetrieveCmd),
 		WithCategory("retrieval", clientRetrieveCatCmd),
 		WithCategory("retrieval", clientRetrieveLsCmd),
@@ -1030,6 +1031,67 @@ var clientFindCmd = &cli.Command{
 	},
 }
 
+var clientQueryRetrievalAskCmd = &cli.Command{
+	Name:      "retrieval-ask",
+	Usage:     "Get a miner's retrieval ask",
+	ArgsUsage: "[minerAddress] [data CID]",
+	Flags: []cli.Flag{
+		&cli.Int64Flag{
+			Name:  "size",
+			Usage: "data size in bytes",
+		},
+	},
+	Action: func(cctx *cli.Context) error {
+		afmt := NewAppFmt(cctx.App)
+		if cctx.NArg() != 2 {
+			afmt.Println("Usage: retrieval-ask [minerAddress] [data CID]")
+			return nil
+		}
+
+		maddr, err := address.NewFromString(cctx.Args().First())
+		if err != nil {
+			return err
+		}
+
+		dataCid, err := cid.Parse(cctx.Args().Get(1))
+		if err != nil {
+			return fmt.Errorf("parsing data cid: %w", err)
+		}
+
+		api, closer, err := GetFullNodeAPI(cctx)
+		if err != nil {
+			return err
+		}
+		defer closer()
+		ctx := ReqContext(cctx)
+
+		ask, err := api.ClientMinerQueryOffer(ctx, maddr, dataCid, nil)
+		if err != nil {
+			return err
+		}
+
+		afmt.Printf("Ask: %s\n", maddr)
+		afmt.Printf("Unseal price: %s\n", types.FIL(ask.UnsealPrice))
+		afmt.Printf("Price per byte: %s\n", types.FIL(ask.PricePerByte))
+		afmt.Printf("Payment interval: %s\n", types.SizeStr(types.NewInt(ask.PaymentInterval)))
+		afmt.Printf("Payment interval increase: %s\n", types.SizeStr(types.NewInt(ask.PaymentIntervalIncrease)))
+
+		size := cctx.Uint64("size")
+		if size == 0 {
+			if ask.Size == 0 {
+				return nil
+			}
+			size = ask.Size
+			afmt.Printf("Size: %s\n", types.SizeStr(types.NewInt(ask.Size)))
+		}
+		transferPrice := types.BigMul(ask.PricePerByte, types.NewInt(size))
+		totalPrice := types.BigAdd(ask.UnsealPrice, transferPrice)
+		afmt.Printf("Total price for %d bytes: %s\n", size, types.FIL(totalPrice))
+
+		return nil
+	},
+}
+
 var clientListRetrievalsCmd = &cli.Command{
 	Name:     "list-retrievals",
 	Usage:    "List retrieval market deals",

View File

@@ -37,7 +37,7 @@ func (cv cachingVerifier) withCache(execute func() (bool, error), param cbg.CBOR
 	}
 	hash := hasher.Sum(nil)
 	key := datastore.NewKey(string(hash))
-	fromDs, err := cv.ds.Get(key)
+	fromDs, err := cv.ds.Get(context.Background(), key)
 	if err == nil {
 		switch fromDs[0] {
 		case 's':
@@ -67,7 +67,7 @@ func (cv cachingVerifier) withCache(execute func() (bool, error), param cbg.CBOR
 	}
 	if len(save) != 0 {
-		errSave := cv.ds.Put(key, save)
+		errSave := cv.ds.Put(context.Background(), key, save)
 		if errSave != nil {
 			log.Errorf("error saving result: %+v", errSave)
 		}
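
`withCache` keys the cache on a hash of the CBOR-encoded parameters and stores a one-byte verdict (`'s'` marks a cached success, judging from the switch above); `context.Background()` is used because no caller context reaches this helper. A generic sketch of the same memoization shape; the `'f'` failure byte and the helper name are assumptions, not the merged code:

```go
import (
	"context"
	"crypto/sha256"

	datastore "github.com/ipfs/go-datastore"
)

// cachedVerify memoizes a boolean verification result in a datastore,
// keyed by the hash of the input: 's' for a cached success, 'f' for failure.
func cachedVerify(ds datastore.Datastore, input []byte, execute func() (bool, error)) (bool, error) {
	h := sha256.Sum256(input)
	key := datastore.NewKey(string(h[:]))
	ctx := context.Background() // no caller context is available in this path

	if prev, err := ds.Get(ctx, key); err == nil && len(prev) > 0 {
		return prev[0] == 's', nil // cache hit
	}

	ok, err := execute()
	if err != nil {
		return ok, err // don't cache errors
	}
	verdict := []byte{'f'}
	if ok {
		verdict[0] = 's'
	}
	// best-effort cache write; a failure here only loses the memoization
	_ = ds.Put(ctx, key, verdict)
	return ok, nil
}
```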

Some files were not shown because too many files have changed in this diff.