lotus/markets/storageadapter/provider.go

package storageadapter

// this file implements storagemarket.StorageProviderNode

import (
	"context"
	"time"

	"github.com/ipfs/go-cid"
	logging "github.com/ipfs/go-log/v2"
	"go.uber.org/fx"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-fil-markets/shared"
	"github.com/filecoin-project/go-fil-markets/storagemarket"
	"github.com/filecoin-project/go-state-types/abi"
	markettypes "github.com/filecoin-project/go-state-types/builtin/v9/market"
	"github.com/filecoin-project/go-state-types/crypto"
	"github.com/filecoin-project/go-state-types/exitcode"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/api/v1api"
	"github.com/filecoin-project/lotus/build"
	"github.com/filecoin-project/lotus/chain/actors/builtin/market"
	"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
	"github.com/filecoin-project/lotus/chain/events"
	"github.com/filecoin-project/lotus/chain/events/state"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/filecoin-project/lotus/lib/sigs"
	"github.com/filecoin-project/lotus/markets/utils"
	"github.com/filecoin-project/lotus/node/config"
	"github.com/filecoin-project/lotus/node/modules/helpers"
	pipeline "github.com/filecoin-project/lotus/storage/pipeline"
	"github.com/filecoin-project/lotus/storage/sectorblocks"
)

var addPieceRetryWait = 5 * time.Minute
var addPieceRetryTimeout = 6 * time.Hour
var defaultMaxProviderCollateralMultiplier = uint64(2)
var log = logging.Logger("storageadapter")

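// ProviderNodeAdapter implements storagemarket.StorageProviderNode on top of
// the lotus full node API, bridging go-fil-markets storage deal operations to
// the chain, the deal publisher and the sealing pipeline.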
type ProviderNodeAdapter struct {
	v1api.FullNode

	secb *sectorblocks.SectorBlocks
	ev   *events.Events

	dealPublisher *DealPublisher

	addBalanceSpec              *api.MessageSendSpec
	maxDealCollateralMultiplier uint64
	dsMatcher                   *dealStateMatcher
	scMgr                       *SectorCommittedManager
}

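// NewProviderNodeAdapter returns an fx constructor that wires a
// ProviderNodeAdapter to the full node API, sector blocks store and deal
// publisher, applying the fee and dealmaking config when provided.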
func NewProviderNodeAdapter(fc *config.MinerFeeConfig, dc *config.DealmakingConfig) func(mctx helpers.MetricsCtx, lc fx.Lifecycle, secb *sectorblocks.SectorBlocks, full v1api.FullNode, dealPublisher *DealPublisher) (storagemarket.StorageProviderNode, error) {
	return func(mctx helpers.MetricsCtx, lc fx.Lifecycle, secb *sectorblocks.SectorBlocks, full v1api.FullNode, dealPublisher *DealPublisher) (storagemarket.StorageProviderNode, error) {
		ctx := helpers.LifecycleCtx(mctx, lc)

		ev, err := events.NewEvents(ctx, full)
		if err != nil {
			return nil, err
		}

		na := &ProviderNodeAdapter{
			FullNode: full,

			secb:          secb,
			ev:            ev,
			dealPublisher: dealPublisher,
			dsMatcher:     newDealStateMatcher(state.NewStatePredicates(state.WrapFastAPI(full))),
		}
		if fc != nil {
			na.addBalanceSpec = &api.MessageSendSpec{MaxFee: abi.TokenAmount(fc.MaxMarketBalanceAddFee)}
		}
		na.maxDealCollateralMultiplier = defaultMaxProviderCollateralMultiplier
		if dc != nil {
			na.maxDealCollateralMultiplier = dc.MaxProviderCollateralMultiplier
		}
		na.scMgr = NewSectorCommittedManager(ev, na, &apiWrapper{api: full})

		return na, nil
	}
}

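// PublishDeals hands the signed client deal proposal to the DealPublisher,
// which batches proposals into PublishStorageDeals messages; the returned CID
// is that of the publish message.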
func (n *ProviderNodeAdapter) PublishDeals(ctx context.Context, deal storagemarket.MinerDeal) (cid.Cid, error) {
	return n.dealPublisher.Publish(ctx, deal.ClientDealProposal)
}

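// OnDealComplete hands the deal's piece data to the sealing pipeline via
// AddPiece. If the pipeline reports too many sectors sealing, the call is
// retried every addPieceRetryWait (5m) until addPieceRetryTimeout (6h) or
// until the context is cancelled.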
func (n *ProviderNodeAdapter) OnDealComplete(ctx context.Context, deal storagemarket.MinerDeal, pieceSize abi.UnpaddedPieceSize, pieceData shared.ReadSeekStarter) (*storagemarket.PackingResult, error) {
	if deal.PublishCid == nil {
		return nil, xerrors.Errorf("deal.PublishCid can't be nil")
	}

	sdInfo := api.PieceDealInfo{
		DealID:       deal.DealID,
		DealProposal: &deal.Proposal,
		PublishCid:   deal.PublishCid,
		DealSchedule: api.DealSchedule{
			StartEpoch: deal.ClientDealProposal.Proposal.StartEpoch,
			EndEpoch:   deal.ClientDealProposal.Proposal.EndEpoch,
		},
		KeepUnsealed: deal.FastRetrieval,
	}

	// Attempt to add the piece to the sector
	p, offset, err := n.secb.AddPiece(ctx, pieceSize, pieceData, sdInfo)
	curTime := build.Clock.Now()
	for build.Clock.Since(curTime) < addPieceRetryTimeout {
		// Check if there was an error because of too many sectors being sealed
		if !xerrors.Is(err, pipeline.ErrTooManySectorsSealing) {
			if err != nil {
				log.Errorf("failed to addPiece for deal %d, err: %v", deal.DealID, err)
			}

			// There was either a fatal error or no error. In either case
			// don't retry AddPiece
			break
		}

		// The piece could not be added to the sector because there are too
		// many sectors being sealed, back-off for a while before trying again
		select {
		case <-build.Clock.After(addPieceRetryWait):
			// Reset the reader to the start
			err = pieceData.SeekStart()
			if err != nil {
				return nil, xerrors.Errorf("failed to reset piece reader to start before retrying AddPiece for deal %d: %w", deal.DealID, err)
			}

			// Attempt to add the piece again
			p, offset, err = n.secb.AddPiece(ctx, pieceSize, pieceData, sdInfo)
		case <-ctx.Done():
			return nil, xerrors.New("context expired while waiting to retry AddPiece")
		}
	}

	if err != nil {
		return nil, xerrors.Errorf("AddPiece failed: %s", err)
	}
	log.Warnf("New Deal: deal %d", deal.DealID)

	return &storagemarket.PackingResult{
		SectorNumber: p,
		Offset:       offset,
		Size:         pieceSize.Padded(),
	}, nil
}

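// VerifySignature resolves addr to its account key at the current head and
// verifies sig over input with that key.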
func (n *ProviderNodeAdapter) VerifySignature(ctx context.Context, sig crypto.Signature, addr address.Address, input []byte, encodedTs shared.TipSetToken) (bool, error) {
	addr, err := n.StateAccountKey(ctx, addr, types.EmptyTSK)
	if err != nil {
		return false, err
	}

	err = sigs.Verify(&sig, addr, input)
	return err == nil, err
}

func (n *ProviderNodeAdapter) GetMinerWorkerAddress(ctx context.Context, maddr address.Address, tok shared.TipSetToken) (address.Address, error) {
	tsk, err := types.TipSetKeyFromBytes(tok)
	if err != nil {
		return address.Undef, err
	}

	mi, err := n.StateMinerInfo(ctx, maddr, tsk)
	if err != nil {
		return address.Address{}, err
	}
	return mi.Worker, nil
}

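// GetProofType returns the seal proof type the miner should use, derived from
// its Window PoSt proof type and the network version at the given tipset.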
func (n *ProviderNodeAdapter) GetProofType(ctx context.Context, maddr address.Address, tok shared.TipSetToken) (abi.RegisteredSealProof, error) {
	tsk, err := types.TipSetKeyFromBytes(tok)
	if err != nil {
		return 0, err
	}

	mi, err := n.StateMinerInfo(ctx, maddr, tsk)
	if err != nil {
		return 0, err
	}

	nver, err := n.StateNetworkVersion(ctx, tsk)
	if err != nil {
		return 0, err
	}

	return miner.PreferredSealProofTypeFromWindowPoStType(nver, mi.WindowPoStProofType)
}

func (n *ProviderNodeAdapter) SignBytes(ctx context.Context, signer address.Address, b []byte) (*crypto.Signature, error) {
	signer, err := n.StateAccountKey(ctx, signer, types.EmptyTSK)
	if err != nil {
		return nil, err
	}

	localSignature, err := n.WalletSign(ctx, signer, b)
	if err != nil {
		return nil, err
	}
	return localSignature, nil
}

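// ReserveFunds and ReleaseFunds delegate to the node's MarketReserveFunds and
// MarketReleaseFunds APIs, which track funds set aside for deal collateral.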
func (n *ProviderNodeAdapter) ReserveFunds(ctx context.Context, wallet, addr address.Address, amt abi.TokenAmount) (cid.Cid, error) {
	return n.MarketReserveFunds(ctx, wallet, addr, amt)
}

func (n *ProviderNodeAdapter) ReleaseFunds(ctx context.Context, addr address.Address, amt abi.TokenAmount) error {
	return n.MarketReleaseFunds(ctx, addr, amt)
}

// AddFunds sends amount to the storage market actor's AddBalance method on
// behalf of addr, crediting its market escrow. Used by both providers and clients.
func (n *ProviderNodeAdapter) AddFunds(ctx context.Context, addr address.Address, amount abi.TokenAmount) (cid.Cid, error) {
	// (Provider Node API)
	smsg, err := n.MpoolPushMessage(ctx, &types.Message{
		To:     market.Address,
		From:   addr,
		Value:  amount,
		Method: market.Methods.AddBalance,
	}, n.addBalanceSpec)
	if err != nil {
		return cid.Undef, err
	}

	return smsg.Cid(), nil
}

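// GetBalance returns the address's storage market balance (escrow and locked
// funds) at the given tipset, converted to the go-fil-markets Balance type.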
func (n *ProviderNodeAdapter) GetBalance(ctx context.Context, addr address.Address, encodedTs shared.TipSetToken) (storagemarket.Balance, error) {
	tsk, err := types.TipSetKeyFromBytes(encodedTs)
	if err != nil {
		return storagemarket.Balance{}, err
	}

	bal, err := n.StateMarketBalance(ctx, addr, tsk)
	if err != nil {
		return storagemarket.Balance{}, err
	}

	return utils.ToSharedBalance(bal), nil
}

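// LocatePieceForDealWithinSector finds a sector containing the given deal and
// returns the piece's sector number, padded offset and padded size, preferring
// sectors that have reached the Proving state.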
// TODO: why doesn't this method take in a sector ID?
func (n *ProviderNodeAdapter) LocatePieceForDealWithinSector(ctx context.Context, dealID abi.DealID, encodedTs shared.TipSetToken) (sectorID abi.SectorNumber, offset abi.PaddedPieceSize, length abi.PaddedPieceSize, err error) {
	refs, err := n.secb.GetRefs(ctx, dealID)
	if err != nil {
		return 0, 0, 0, err
	}
	if len(refs) == 0 {
		return 0, 0, 0, xerrors.New("no sector information for deal ID")
	}

	// TODO: better strategy (e.g. look for already unsealed)
	var best api.SealedRef
	var bestSi api.SectorInfo
	for _, r := range refs {
		si, err := n.secb.SectorBuilder.SectorsStatus(ctx, r.SectorID, false)
		if err != nil {
			return 0, 0, 0, xerrors.Errorf("getting sector info: %w", err)
		}
		if si.State == api.SectorState(pipeline.Proving) {
			best = r
			bestSi = si
			break
		}
	}
	if bestSi.State == api.SectorState(pipeline.UndefinedSectorState) {
		return 0, 0, 0, xerrors.New("no sealed sector found")
	}
	return best.SectorID, best.Offset, best.Size.Padded(), nil
}

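// DealProviderCollateralBounds returns the chain's minimum provider collateral
// for a deal of the given size, and a maximum computed as that minimum
// multiplied by the configured MaxProviderCollateralMultiplier (default 2).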
func (n *ProviderNodeAdapter) DealProviderCollateralBounds(ctx context.Context, size abi.PaddedPieceSize, isVerified bool) (abi.TokenAmount, abi.TokenAmount, error) {
	bounds, err := n.StateDealProviderCollateralBounds(ctx, size, isVerified, types.EmptyTSK)
	if err != nil {
		return abi.TokenAmount{}, abi.TokenAmount{}, err
	}

	// The maximum amount of collateral that the provider will put into escrow
	// for a deal is calculated as a multiple of the minimum bounded amount
	max := types.BigMul(bounds.Min, types.NewInt(n.maxDealCollateralMultiplier))

	return bounds.Min, max, nil
}

// TODO: Remove dealID parameter, change publishCid to be cid.Cid (instead of pointer)
func (n *ProviderNodeAdapter) OnDealSectorPreCommitted(ctx context.Context, provider address.Address, dealID abi.DealID, proposal markettypes.DealProposal, publishCid *cid.Cid, cb storagemarket.DealSectorPreCommittedCallback) error {
	return n.scMgr.OnDealSectorPreCommitted(ctx, provider, proposal, *publishCid, cb)
}

// TODO: Remove dealID parameter, change publishCid to be cid.Cid (instead of pointer)
func (n *ProviderNodeAdapter) OnDealSectorCommitted(ctx context.Context, provider address.Address, dealID abi.DealID, sectorNumber abi.SectorNumber, proposal markettypes.DealProposal, publishCid *cid.Cid, cb storagemarket.DealSectorCommittedCallback) error {
	return n.scMgr.OnDealSectorCommitted(ctx, provider, sectorNumber, proposal, *publishCid, cb)
}

func (n *ProviderNodeAdapter) GetChainHead(ctx context.Context) (shared.TipSetToken, abi.ChainEpoch, error) {
	head, err := n.ChainHead(ctx)
	if err != nil {
		return nil, 0, err
	}

	return head.Key().Bytes(), head.Height(), nil
}

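// WaitForMessage waits until mcid has executed with 2*MessageConfidence epochs
// of confidence, then passes the receipt's exit code, return value and final
// message CID to cb; lookup errors are also forwarded to cb.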
func (n *ProviderNodeAdapter) WaitForMessage(ctx context.Context, mcid cid.Cid, cb func(code exitcode.ExitCode, bytes []byte, finalCid cid.Cid, err error) error) error {
	receipt, err := n.StateWaitMsg(ctx, mcid, 2*build.MessageConfidence, api.LookbackNoLimit, true)
	if err != nil {
		return cb(0, nil, cid.Undef, err)
	}
	return cb(receipt.Receipt.ExitCode, receipt.Receipt.Return, receipt.Message, nil)
}

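// WaitForPublishDeals waits for the PublishStorageDeals message to land on
// chain, then re-resolves the deal ID against the current head in case a reorg
// changed it, returning the on-chain deal ID and the final message CID.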
func (n *ProviderNodeAdapter) WaitForPublishDeals(ctx context.Context, publishCid cid.Cid, proposal markettypes.DealProposal) (*storagemarket.PublishDealsWaitResult, error) {
	// Wait for deal to be published (plus additional time for confidence)
	receipt, err := n.StateWaitMsg(ctx, publishCid, 2*build.MessageConfidence, api.LookbackNoLimit, true)
	if err != nil {
		return nil, xerrors.Errorf("WaitForPublishDeals errored: %w", err)
	}
	if receipt.Receipt.ExitCode != exitcode.Ok {
		return nil, xerrors.Errorf("WaitForPublishDeals exit code: %s", receipt.Receipt.ExitCode)
	}

	// The deal ID may have changed since publish if there was a reorg, so
	// get the current deal ID
	head, err := n.ChainHead(ctx)
	if err != nil {
		return nil, xerrors.Errorf("WaitForPublishDeals failed to get chain head: %w", err)
	}

	res, err := n.scMgr.dealInfo.GetCurrentDealInfo(ctx, head.Key(), &proposal, publishCid)
	if err != nil {
		return nil, xerrors.Errorf("WaitForPublishDeals getting deal info errored: %w", err)
	}

	return &storagemarket.PublishDealsWaitResult{DealID: res.DealID, FinalCid: receipt.Message}, nil
}

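// GetDataCap returns the verified-client datacap for addr at the given tipset
// via StateVerifiedClientStatus (nil when the address is not a verified client).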
func (n *ProviderNodeAdapter) GetDataCap(ctx context.Context, addr address.Address, encodedTs shared.TipSetToken) (*abi.StoragePower, error) {
	tsk, err := types.TipSetKeyFromBytes(encodedTs)
	if err != nil {
		return nil, err
	}

	sp, err := n.StateVerifiedClientStatus(ctx, addr, tsk)
	return sp, err
}

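// OnDealExpiredOrSlashed watches chain state for the given deal and invokes
// onDealExpired once the deal passes its end epoch, or onDealSlashed if the
// deal is removed from market state. It checks the current state immediately,
// then subscribes to deal-state changes with a timeout just past the end epoch.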
func (n *ProviderNodeAdapter) OnDealExpiredOrSlashed(ctx context.Context, dealID abi.DealID, onDealExpired storagemarket.DealExpiredCallback, onDealSlashed storagemarket.DealSlashedCallback) error {
	head, err := n.ChainHead(ctx)
	if err != nil {
		return xerrors.Errorf("client: failed to get chain head: %w", err)
	}

	sd, err := n.StateMarketStorageDeal(ctx, dealID, head.Key())
	if err != nil {
		return xerrors.Errorf("client: failed to look up deal %d on chain: %w", dealID, err)
	}

	// Called immediately to check if the deal has already expired or been slashed
	checkFunc := func(ctx context.Context, ts *types.TipSet) (done bool, more bool, err error) {
		if ts == nil {
			// keep listening for events
			return false, true, nil
		}

		// Check if the deal has already expired
		if sd.Proposal.EndEpoch <= ts.Height() {
			onDealExpired(nil)
			return true, false, nil
		}

		// If there is no deal assume it's already been slashed
		if sd.State.SectorStartEpoch < 0 {
			onDealSlashed(ts.Height(), nil)
			return true, false, nil
		}

		// No events have occurred yet, so return
		// done: false, more: true (keep listening for events)
		return false, true, nil
	}

	// Called when there was a match against the state change we're looking for
	// and the chain has advanced to the confidence height
	stateChanged := func(ts *types.TipSet, ts2 *types.TipSet, states events.StateChange, h abi.ChainEpoch) (more bool, err error) {
		// Check if the deal has already expired
		if ts2 == nil || sd.Proposal.EndEpoch <= ts2.Height() {
			onDealExpired(nil)
			return false, nil
		}

		// Timeout waiting for state change
		if states == nil {
			log.Error("timed out waiting for deal expiry")
			return false, nil
		}

		changedDeals, ok := states.(state.ChangedDeals)
		if !ok {
			panic("Expected state.ChangedDeals")
		}

		deal, ok := changedDeals[dealID]
		if !ok {
			// No change to deal
			return true, nil
		}

		// Deal was slashed
		if deal.To == nil {
			onDealSlashed(ts2.Height(), nil)
			return false, nil
		}

		return true, nil
	}

	// Called when there was a chain reorg and the state change was reverted
	revert := func(ctx context.Context, ts *types.TipSet) error {
		// TODO: Is it ok to just ignore this?
		log.Warn("deal state reverted; TODO: actually handle this!")
		return nil
	}

	// Watch for state changes to the deal
	match := n.dsMatcher.matcher(ctx, dealID)

	// Wait until after the end epoch for the deal and then timeout
	timeout := (sd.Proposal.EndEpoch - head.Height()) + 1
	if err := n.ev.StateChanged(checkFunc, stateChanged, revert, int(build.MessageConfidence)+1, timeout, match); err != nil {
		return xerrors.Errorf("failed to set up state changed handler: %w", err)
	}

	return nil
}

var _ storagemarket.StorageProviderNode = &ProviderNodeAdapter{}