// stm: #unit
package storageadapter

import (
	"bytes"
	"context"
	"testing"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/raulk/clock"
	"github.com/stretchr/testify/require"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	markettypes "github.com/filecoin-project/go-state-types/builtin/v9/market"
	"github.com/filecoin-project/go-state-types/crypto"
	"github.com/filecoin-project/go-state-types/exitcode"
	tutils "github.com/filecoin-project/specs-actors/v2/support/testing"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/build"
	"github.com/filecoin-project/lotus/chain/actors/builtin/market"
	"github.com/filecoin-project/lotus/chain/types"
)

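// TestDealPublisher drives the deal publisher with a mock clock and checks how
// queued deals are batched into PublishStorageDeals messages: deals queued
// within the publish period are grouped (up to MaxDealsPerMsg per message),
// while deals with cancelled contexts, expired deals, and deals that fail
// validation are excluded from the batch.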
func TestDealPublisher(t *testing.T) {
	//stm: @MARKET_DEAL_PUBLISHER_PUBLISH_001, @MARKET_DEAL_PUBLISHER_GET_PENDING_DEALS_001
	oldClock := build.Clock
	t.Cleanup(func() { build.Clock = oldClock })
	mc := clock.NewMock()
	build.Clock = mc

	testCases := []struct {
		name                            string
		publishPeriod                   time.Duration
		maxDealsPerMsg                  uint64
		dealCountWithinPublishPeriod    int
		ctxCancelledWithinPublishPeriod int
		expiredDeals                    int
		dealCountAfterPublishPeriod     int
		expectedDealsPerMsg             []int
		failOne                         bool
	}{{
		name:                         "publish one deal within publish period",
		publishPeriod:                10 * time.Millisecond,
		maxDealsPerMsg:               5,
		dealCountWithinPublishPeriod: 1,
		dealCountAfterPublishPeriod:  0,
		expectedDealsPerMsg:          []int{1},
	}, {
		name:                         "publish two deals within publish period",
		publishPeriod:                10 * time.Millisecond,
		maxDealsPerMsg:               5,
		dealCountWithinPublishPeriod: 2,
		dealCountAfterPublishPeriod:  0,
		expectedDealsPerMsg:          []int{2},
	}, {
		name:                         "publish one deal within publish period, and one after",
		publishPeriod:                10 * time.Millisecond,
		maxDealsPerMsg:               5,
		dealCountWithinPublishPeriod: 1,
		dealCountAfterPublishPeriod:  1,
		expectedDealsPerMsg:          []int{1, 1},
	}, {
		name:                         "publish deals that exceed max deals per message within publish period, and one after",
		publishPeriod:                10 * time.Millisecond,
		maxDealsPerMsg:               2,
		dealCountWithinPublishPeriod: 3,
		dealCountAfterPublishPeriod:  1,
		expectedDealsPerMsg:          []int{2, 1, 1},
	}, {
		name:                            "ignore deals with cancelled context",
		publishPeriod:                   10 * time.Millisecond,
		maxDealsPerMsg:                  5,
		dealCountWithinPublishPeriod:    2,
		ctxCancelledWithinPublishPeriod: 2,
		dealCountAfterPublishPeriod:     1,
		expectedDealsPerMsg:             []int{2, 1},
	}, {
		name:                         "ignore expired deals",
		publishPeriod:                10 * time.Millisecond,
		maxDealsPerMsg:               5,
		dealCountWithinPublishPeriod: 2,
		expiredDeals:                 2,
		dealCountAfterPublishPeriod:  1,
		expectedDealsPerMsg:          []int{2, 1},
	}, {
		name:                            "zero config",
		publishPeriod:                   0,
		maxDealsPerMsg:                  0,
		dealCountWithinPublishPeriod:    2,
		ctxCancelledWithinPublishPeriod: 0,
		dealCountAfterPublishPeriod:     2,
		expectedDealsPerMsg:             []int{1, 1, 1, 1},
	}, {
		name:                         "one deal failing doesn't fail the entire batch",
		publishPeriod:                10 * time.Millisecond,
		maxDealsPerMsg:               5,
		dealCountWithinPublishPeriod: 2,
		dealCountAfterPublishPeriod:  0,
		failOne:                      true,
		expectedDealsPerMsg:          []int{1},
	}}

	for _, tc := range testCases {
		tc := tc
		t.Run(tc.name, func(t *testing.T) {
			mc.Set(time.Now())
			dpapi := newDPAPI(t)

			// Create a deal publisher
			dp := newDealPublisher(dpapi, nil, PublishMsgConfig{
				Period:         tc.publishPeriod,
				MaxDealsPerMsg: tc.maxDealsPerMsg,
			}, &api.MessageSendSpec{MaxFee: abi.NewTokenAmount(1)})

			// Keep a record of the deals that were submitted to be published
			var dealsToPublish []markettypes.ClientDealProposal

			// Publish deals within publish period
			for i := 0; i < tc.dealCountWithinPublishPeriod; i++ {
				if tc.failOne && i == 1 {
					publishDeal(t, dp, i, false, false)
				} else {
					deal := publishDeal(t, dp, 0, false, false)
					dealsToPublish = append(dealsToPublish, deal)
				}
			}
			for i := 0; i < tc.ctxCancelledWithinPublishPeriod; i++ {
				publishDeal(t, dp, 0, true, false)
			}
			for i := 0; i < tc.expiredDeals; i++ {
				publishDeal(t, dp, 0, false, true)
			}

			// Wait until publish period has elapsed
			if tc.publishPeriod > 0 {
				// If we expect deals to get stuck in the queue, wait until that happens
				if tc.maxDealsPerMsg != 0 && tc.dealCountWithinPublishPeriod%int(tc.maxDealsPerMsg) != 0 {
					require.Eventually(t, func() bool {
						dp.lk.Lock()
						defer dp.lk.Unlock()
						return !dp.publishPeriodStart.IsZero()
					}, time.Second, time.Millisecond, "failed to queue deals")
				}

				// Then wait to send
				require.Eventually(t, func() bool {
					dp.lk.Lock()
					defer dp.lk.Unlock()

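					// Note: the lock is dropped around the clock advance below so
					// that the publisher goroutine can acquire it and send the
					// queued deals once its timer fires.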
					// Advance if necessary.
					if mc.Since(dp.publishPeriodStart) <= tc.publishPeriod {
						dp.lk.Unlock()
						mc.Set(dp.publishPeriodStart.Add(tc.publishPeriod + 1))
						dp.lk.Lock()
					}

					return len(dp.pending) == 0
				}, time.Second, time.Millisecond, "failed to send pending messages")
			}

			// Publish deals after publish period
			for i := 0; i < tc.dealCountAfterPublishPeriod; i++ {
				deal := publishDeal(t, dp, 0, false, false)
				dealsToPublish = append(dealsToPublish, deal)
			}

			if tc.publishPeriod > 0 && tc.dealCountAfterPublishPeriod > 0 {
				require.Eventually(t, func() bool {
					dp.lk.Lock()
					defer dp.lk.Unlock()
					if mc.Since(dp.publishPeriodStart) <= tc.publishPeriod {
						dp.lk.Unlock()
						mc.Set(dp.publishPeriodStart.Add(tc.publishPeriod + 1))
						dp.lk.Lock()
					}
					return len(dp.pending) == 0
				}, time.Second, time.Millisecond, "failed to send pending messages")
			}

			checkPublishedDeals(t, dpapi, dealsToPublish, tc.expectedDealsPerMsg)
		})
	}
}

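// TestForcePublish checks that ForcePublishPendingDeals sends all queued deals
// immediately, without waiting for the (one-hour) publish period to elapse.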
func TestForcePublish(t *testing.T) {
	//stm: @MARKET_DEAL_PUBLISHER_PUBLISH_001, @MARKET_DEAL_PUBLISHER_GET_PENDING_DEALS_001
	//stm: @MARKET_DEAL_PUBLISHER_FORCE_PUBLISH_ALL_001
	dpapi := newDPAPI(t)

	// Create a deal publisher
	start := build.Clock.Now()
	publishPeriod := time.Hour
	dp := newDealPublisher(dpapi, nil, PublishMsgConfig{
		Period:         publishPeriod,
		MaxDealsPerMsg: 10,
	}, &api.MessageSendSpec{MaxFee: abi.NewTokenAmount(1)})

	// Queue three deals for publishing, one with a cancelled context
	var dealsToPublish []markettypes.ClientDealProposal
	// 1. Regular deal
	deal := publishDeal(t, dp, 0, false, false)
	dealsToPublish = append(dealsToPublish, deal)
	// 2. Deal with cancelled context
	publishDeal(t, dp, 0, true, false)
	// 3. Regular deal
	deal = publishDeal(t, dp, 0, false, false)
	dealsToPublish = append(dealsToPublish, deal)

	// Allow a moment for them to be queued
	build.Clock.Sleep(10 * time.Millisecond)

	// Should be two deals in the pending deals list
	// (deal with cancelled context is ignored)
	pendingInfo := dp.PendingDeals()
	require.Len(t, pendingInfo.Deals, 2)
	require.Equal(t, publishPeriod, pendingInfo.PublishPeriod)
	require.True(t, pendingInfo.PublishPeriodStart.After(start))
	require.True(t, pendingInfo.PublishPeriodStart.Before(build.Clock.Now()))

	// Force publish all pending deals
	dp.ForcePublishPendingDeals()

	// Should be no pending deals
	pendingInfo = dp.PendingDeals()
	require.Len(t, pendingInfo.Deals, 0)

	// Make sure the expected deals were published
	checkPublishedDeals(t, dpapi, dealsToPublish, []int{2})
}

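// publishDeal submits a deal proposal to the publisher on a background
// goroutine and returns the proposal. The invalid argument is smuggled in as
// the piece size; the stubbed StateCall rejects deals with a piece size of 1,
// while ctxCancelled and expired exercise the cancelled-context and
// expired-start-epoch paths.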
func publishDeal(t *testing.T, dp *DealPublisher, invalid int, ctxCancelled bool, expired bool) markettypes.ClientDealProposal {
	ctx, cancel := context.WithCancel(context.Background())
	t.Cleanup(cancel)

	pctx := ctx
	if ctxCancelled {
		pctx, cancel = context.WithCancel(ctx)
		cancel()
	}

	startEpoch := abi.ChainEpoch(20)
	if expired {
		startEpoch = abi.ChainEpoch(5)
	}
	deal := markettypes.ClientDealProposal{
		Proposal: markettypes.DealProposal{
			PieceCID:   generateCids(1)[0],
			Client:     getClientActor(t),
			Provider:   getProviderActor(t),
			StartEpoch: startEpoch,
			EndEpoch:   abi.ChainEpoch(120),
			PieceSize:  abi.PaddedPieceSize(invalid), // pass invalid into StateCall below
		},
		ClientSignature: crypto.Signature{
			Type: crypto.SigTypeSecp256k1,
			Data: []byte("signature data"),
		},
	}

	go func() {
		_, err := dp.Publish(pctx, deal)

		// If the test has completed just bail out without checking for errors
		if ctx.Err() != nil {
			return
		}

		if ctxCancelled || expired || invalid == 1 {
			require.Error(t, err)
		} else {
			require.NoError(t, err)
		}
	}()

	return deal
}

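// checkPublishedDeals asserts that one PublishStorageDeals message was pushed
// per entry in expectedDealsPerMsg, with the expected number of deals in each,
// and that every submitted deal's piece CID appears in some published message.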
func checkPublishedDeals(t *testing.T, dpapi *dpAPI, dealsToPublish []markettypes.ClientDealProposal, expectedDealsPerMsg []int) {
	// For each message that was expected to be sent
	var publishedDeals []markettypes.ClientDealProposal
	for _, expectedDealsInMsg := range expectedDealsPerMsg {
		// Should have called StateMinerInfo with the provider address
		stateMinerInfoAddr := <-dpapi.stateMinerInfoCalls
		require.Equal(t, getProviderActor(t), stateMinerInfoAddr)

		// Check the fields of the message that was sent
		msg := <-dpapi.pushedMsgs
		require.Equal(t, getWorkerActor(t), msg.From)
		require.Equal(t, market.Address, msg.To)
		require.Equal(t, market.Methods.PublishStorageDeals, msg.Method)

		// Check that the expected number of deals was included in the message
		var params markettypes.PublishStorageDealsParams
		err := params.UnmarshalCBOR(bytes.NewReader(msg.Params))
		require.NoError(t, err)
		require.Len(t, params.Deals, expectedDealsInMsg)

		// Keep track of the deals that were sent
		for _, d := range params.Deals {
			publishedDeals = append(publishedDeals, d)
		}
	}

	// Verify that all deals that were submitted to be published were
	// sent out (we do this by ensuring all the piece CIDs are present)
	require.True(t, matchPieceCids(publishedDeals, dealsToPublish))
}

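// matchPieceCids reports whether the two slices of deals reference the same
// set of piece CIDs (order-insensitive, compared by length and set membership).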
func matchPieceCids(sent []markettypes.ClientDealProposal, exp []markettypes.ClientDealProposal) bool {
	cidsA := dealPieceCids(sent)
	cidsB := dealPieceCids(exp)

	if len(cidsA) != len(cidsB) {
		return false
	}

	s1 := cid.NewSet()
	for _, c := range cidsA {
		s1.Add(c)
	}

	for _, c := range cidsB {
		if !s1.Has(c) {
			return false
		}
	}

	return true
}

func dealPieceCids(deals []markettypes.ClientDealProposal) []cid.Cid {
	cids := make([]cid.Cid, 0, len(deals))
	for _, dl := range deals {
		cids = append(cids, dl.Proposal.PieceCID)
	}
	return cids
}

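// dpAPI is a test double for the node API consumed by the deal publisher. It
// records StateMinerInfo calls and pushed messages on buffered channels so the
// tests can assert on them, and panics on methods the tests don't expect.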
type dpAPI struct {
	t      *testing.T
	worker address.Address

	stateMinerInfoCalls chan address.Address
	pushedMsgs          chan *types.Message
}

func newDPAPI(t *testing.T) *dpAPI {
	return &dpAPI{
		t:                   t,
		worker:              getWorkerActor(t),
		stateMinerInfoCalls: make(chan address.Address, 128),
		pushedMsgs:          make(chan *types.Message, 128),
	}
}

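// ChainHead returns a minimal single-block tipset at height 10 built from
// placeholder CIDs and signatures.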
func (d *dpAPI) ChainHead(ctx context.Context) (*types.TipSet, error) {
	dummyCid, err := cid.Parse("bafkqaaa")
	require.NoError(d.t, err)
	return types.NewTipSet([]*types.BlockHeader{{
		Miner:                 tutils.NewActorAddr(d.t, "miner"),
		Height:                abi.ChainEpoch(10),
		ParentStateRoot:       dummyCid,
		Messages:              dummyCid,
		ParentMessageReceipts: dummyCid,
		BlockSig:              &crypto.Signature{Type: crypto.SigTypeBLS},
		BLSAggregate:          &crypto.Signature{Type: crypto.SigTypeBLS},
	}})
}

func (d *dpAPI) StateMinerInfo(ctx context.Context, address address.Address, key types.TipSetKey) (api.MinerInfo, error) {
	d.stateMinerInfoCalls <- address
	return api.MinerInfo{Worker: d.worker}, nil
}

func (d *dpAPI) MpoolPushMessage(ctx context.Context, msg *types.Message, spec *api.MessageSendSpec) (*types.SignedMessage, error) {
	d.pushedMsgs <- msg
	return &types.SignedMessage{Message: *msg}, nil
}

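// The remaining methods are required to satisfy the deal publisher's API
// dependency but should never be called in these tests.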
func (d *dpAPI) WalletBalance(ctx context.Context, a address.Address) (types.BigInt, error) {
	panic("don't call me")
}

func (d *dpAPI) WalletHas(ctx context.Context, a address.Address) (bool, error) {
	panic("don't call me")
}

func (d *dpAPI) StateAccountKey(ctx context.Context, a address.Address, key types.TipSetKey) (address.Address, error) {
	panic("don't call me")
}

func (d *dpAPI) StateLookupID(ctx context.Context, a address.Address, key types.TipSetKey) (address.Address, error) {
	panic("don't call me")
}

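// StateCall stands in for the validation call made before a deal is batched:
// it decodes the PublishStorageDeals params and returns ErrIllegalState when
// the first deal has a piece size of 1 (the sentinel used by publishDeal for
// invalid deals), and Ok otherwise.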
func (d *dpAPI) StateCall(ctx context.Context, message *types.Message, key types.TipSetKey) (*api.InvocResult, error) {
	var p markettypes.PublishStorageDealsParams
	if err := p.UnmarshalCBOR(bytes.NewReader(message.Params)); err != nil {
		return nil, xerrors.Errorf("unmarshal market params: %w", err)
	}

	exit := exitcode.Ok
	if p.Deals[0].Proposal.PieceSize == 1 {
		exit = exitcode.ErrIllegalState
	}
	return &api.InvocResult{MsgRct: &types.MessageReceipt{ExitCode: exit}}, nil
}

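// Test addresses used for the deal client, the miner worker and the provider.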
func getClientActor(t *testing.T) address.Address {
	return tutils.NewActorAddr(t, "client")
}

func getWorkerActor(t *testing.T) address.Address {
	return tutils.NewActorAddr(t, "worker")
}

func getProviderActor(t *testing.T) address.Address {
	return tutils.NewActorAddr(t, "provider")
}