package blockstore

import (
	"context"
	"sync"
	"time"

	block "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-cid"
	ipld "github.com/ipfs/go-ipld-format"
	"golang.org/x/xerrors"
)

// autolog is a logger for the autobatching blockstore. It is subscoped from the
// blockstore logger.
var autolog = log.Named("auto")

// blockBatch contains the same set of blocks twice: once as an ordered list for
// flushing, and once as a map for fast access.
type blockBatch struct {
	blockList []block.Block
	blockMap  map[cid.Cid]block.Block
}

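// AutobatchBlockstore wraps a backing Blockstore and buffers Puts in memory,
// flushing them to the backing store in the background once roughly
// bufferCapacity bytes have accumulated (or when Flush/Shutdown is called).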
type AutobatchBlockstore struct {
	// TODO: drop if memory consumption is too high
	addedCids map[cid.Cid]struct{}

	stateLock     sync.Mutex
	bufferedBatch blockBatch

	flushingBatch blockBatch
	flushErr      error

	flushCh chan struct{}

	doFlushLock     sync.Mutex
	flushRetryDelay time.Duration
	doneCh          chan struct{}
	shutdown        context.CancelFunc

	backingBs Blockstore

	bufferCapacity int
	bufferSize     int
}

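// NewAutobatch creates an autobatching blockstore on top of backingBs and starts a
// background worker that flushes the buffer once bufferCapacity bytes are queued.
// A minimal usage sketch (NewMemory as the backing store is just an assumption for
// illustration; any Blockstore works):
//
//	bs := NewAutobatch(ctx, NewMemory(), 1<<20) // flush roughly every 1 MiB of buffered blocks
//	_ = bs.Put(ctx, blk)                        // blk is any block.Block; buffered, flushed asynchronously
//	_ = bs.Flush(ctx)                           // force a synchronous flush
//	_ = bs.Shutdown(ctx)                        // stop the worker and surface any flush error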
func NewAutobatch(ctx context.Context, backingBs Blockstore, bufferCapacity int) *AutobatchBlockstore {
	ctx, cancel := context.WithCancel(ctx)
	bs := &AutobatchBlockstore{
		addedCids:      make(map[cid.Cid]struct{}),
		backingBs:      backingBs,
		bufferCapacity: bufferCapacity,
		flushCh:        make(chan struct{}, 1),
		doneCh:         make(chan struct{}),
		// could be made configurable
		flushRetryDelay: time.Millisecond * 100,
		shutdown:        cancel,
	}

	bs.bufferedBatch.blockMap = make(map[cid.Cid]block.Block)

	go bs.flushWorker(ctx)

	return bs
}

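// Put buffers the block in memory, deduplicating by CID, and signals the background
// worker once bufferCapacity bytes have accumulated.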
func (bs *AutobatchBlockstore) Put(ctx context.Context, blk block.Block) error {
	bs.stateLock.Lock()
	defer bs.stateLock.Unlock()

	_, ok := bs.addedCids[blk.Cid()]
	if !ok {
		bs.addedCids[blk.Cid()] = struct{}{}
		bs.bufferedBatch.blockList = append(bs.bufferedBatch.blockList, blk)
		bs.bufferedBatch.blockMap[blk.Cid()] = blk
		bs.bufferSize += len(blk.RawData())
		if bs.bufferSize >= bs.bufferCapacity {
			// signal that a flush is appropriate, may be ignored
			select {
			case bs.flushCh <- struct{}{}:
			default:
				// do nothing
			}
		}
	}

	return nil
}

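// flushWorker runs until the context is cancelled, flushing whenever signalled by Put
// and retrying failed flushes after flushRetryDelay. It performs one final flush on exit.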
func (bs *AutobatchBlockstore) flushWorker(ctx context.Context) {
	defer close(bs.doneCh)
	for {
		select {
		case <-bs.flushCh:
			// TODO: check if we _should_ actually flush. We could get a spurious wakeup
			// here.
			putErr := bs.doFlush(ctx, false)
			for putErr != nil {
				select {
				case <-ctx.Done():
					return
				case <-time.After(bs.flushRetryDelay):
					autolog.Errorf("FLUSH ERRORED: %v, retrying after %v", putErr, bs.flushRetryDelay)
					putErr = bs.doFlush(ctx, true)
				}
			}
		case <-ctx.Done():
			// Do one last flush.
			_ = bs.doFlush(ctx, false)
			return
		}
	}
}

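// doFlush retries any previously failed flush, then (unless retryOnly is set) swaps the
// buffered batch into flushingBatch and writes it to the backing blockstore.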
// caller must NOT hold stateLock
// set retryOnly to true to only retry a failed flush and not flush anything new.
func (bs *AutobatchBlockstore) doFlush(ctx context.Context, retryOnly bool) error {
	bs.doFlushLock.Lock()
	defer bs.doFlushLock.Unlock()

	// If we failed to flush last time, try flushing again.
	if bs.flushErr != nil {
		bs.flushErr = bs.backingBs.PutMany(ctx, bs.flushingBatch.blockList)
	}

	// If we failed, or we're _only_ retrying, bail.
	if retryOnly || bs.flushErr != nil {
		return bs.flushErr
	}

	// Then take the current batch...
	bs.stateLock.Lock()
	// We do NOT clear addedCids here, because its purpose is to expedite Puts
	bs.flushingBatch = bs.bufferedBatch
	bs.bufferedBatch.blockList = make([]block.Block, 0, len(bs.flushingBatch.blockList))
	bs.bufferedBatch.blockMap = make(map[cid.Cid]block.Block, len(bs.flushingBatch.blockMap))
	bs.stateLock.Unlock()

	// And try to flush it.
	bs.flushErr = bs.backingBs.PutMany(ctx, bs.flushingBatch.blockList)

	// If we succeeded, reset the batch. Otherwise, we'll try again next time.
	if bs.flushErr == nil {
		bs.stateLock.Lock()
		bs.flushingBatch = blockBatch{}
		bs.stateLock.Unlock()
	}

	return bs.flushErr
}

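// Flush synchronously writes any buffered blocks to the backing blockstore.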
// caller must NOT hold stateLock
func (bs *AutobatchBlockstore) Flush(ctx context.Context) error {
	return bs.doFlush(ctx, false)
}

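// Shutdown stops the background flush worker, waits for it to exit, and returns the
// most recent flush error, if any.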
func (bs *AutobatchBlockstore) Shutdown(ctx context.Context) error {
	// TODO: Prevent puts after we call this to avoid losing data.
	bs.shutdown()
	select {
	case <-bs.doneCh:
	case <-ctx.Done():
		return ctx.Err()
	}

	bs.doFlushLock.Lock()
	defer bs.doFlushLock.Unlock()

	return bs.flushErr
}

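// Get checks the backing blockstore first (the likeliest hit), then the in-flight and
// buffered batches, and finally the backing store again in case a flush raced ahead.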
func (bs *AutobatchBlockstore) Get(ctx context.Context, c cid.Cid) (block.Block, error) {
	// may seem backward to check the backingBs first, but that is the likeliest case
	blk, err := bs.backingBs.Get(ctx, c)
	if err == nil {
		return blk, nil
	}

	if !ipld.IsNotFound(err) {
		return blk, err
	}

	bs.stateLock.Lock()
	v, ok := bs.flushingBatch.blockMap[c]
	if ok {
		bs.stateLock.Unlock()
		return v, nil
	}

	v, ok = bs.bufferedBatch.blockMap[c]
	if ok {
		bs.stateLock.Unlock()
		return v, nil
	}
	bs.stateLock.Unlock()

	// We have to check the backing store one more time because it may have been flushed by the
	// time we were able to take the lock above.
	return bs.backingBs.Get(ctx, c)
}

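// DeleteBlock is not supported by the autobatching blockstore.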
func (bs *AutobatchBlockstore) DeleteBlock(context.Context, cid.Cid) error {
	// if we wanted to support this, we would have to:
	// - flush
	// - delete from the backingBs (if present)
	// - remove from addedCids (if present)
	// - if present in addedCids, also walk the ordered lists and remove if present
	return xerrors.New("deletion is unsupported")
}

func (bs *AutobatchBlockstore) DeleteMany(ctx context.Context, cids []cid.Cid) error {
	// see note in DeleteBlock()
	return xerrors.New("deletion is unsupported")
}

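// Has reports whether the block can be found in the buffers or the backing blockstore.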
func (bs *AutobatchBlockstore) Has(ctx context.Context, c cid.Cid) (bool, error) {
	_, err := bs.Get(ctx, c)
	if err == nil {
		return true, nil
	}
	if ipld.IsNotFound(err) {
		return false, nil
	}

	return false, err
}

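// GetSize returns the size in bytes of the block's raw data.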
func (bs *AutobatchBlockstore) GetSize(ctx context.Context, c cid.Cid) (int, error) {
	blk, err := bs.Get(ctx, c)
	if err != nil {
		return 0, err
	}

	return len(blk.RawData()), nil
}

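// PutMany buffers each block individually via Put.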
func (bs *AutobatchBlockstore) PutMany(ctx context.Context, blks []block.Block) error {
	for _, blk := range blks {
		if err := bs.Put(ctx, blk); err != nil {
			return err
		}
	}

	return nil
}

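// AllKeysChan flushes the buffers first so the backing blockstore can enumerate every key.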
func (bs *AutobatchBlockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) {
	if err := bs.Flush(ctx); err != nil {
		return nil, err
	}

	return bs.backingBs.AllKeysChan(ctx)
}

func (bs *AutobatchBlockstore) HashOnRead(enabled bool) {
	bs.backingBs.HashOnRead(enabled)
}

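// View retrieves the block via Get and invokes the callback on its raw data.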
func (bs *AutobatchBlockstore) View(ctx context.Context, cid cid.Cid, callback func([]byte) error) error {
	blk, err := bs.Get(ctx, cid)
	if err != nil {
		return err
	}

	return callback(blk.RawData())
}