lotus/chain/stmgr/forks_test.go

// stm: #integration
package stmgr_test

import (
	"context"
	"fmt"
	"io"
	"sync"
	"testing"

	"github.com/ipfs/go-cid"
	ipldcbor "github.com/ipfs/go-ipld-cbor"
	logging "github.com/ipfs/go-log/v2"
	"github.com/stretchr/testify/require"
	cbg "github.com/whyrusleeping/cbor-gen"
	"golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/abi"
	actorstypes "github.com/filecoin-project/go-state-types/actors"
	"github.com/filecoin-project/go-state-types/cbor"
	"github.com/filecoin-project/go-state-types/network"
	rtt "github.com/filecoin-project/go-state-types/rt"
	builtin0 "github.com/filecoin-project/specs-actors/actors/builtin"
	init2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/init"
	rt2 "github.com/filecoin-project/specs-actors/v2/actors/runtime"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/actors"
	"github.com/filecoin-project/lotus/chain/actors/aerrors"
	"github.com/filecoin-project/lotus/chain/actors/builtin"
	_init "github.com/filecoin-project/lotus/chain/actors/builtin/init"
	"github.com/filecoin-project/lotus/chain/actors/policy"
	"github.com/filecoin-project/lotus/chain/consensus/filcns"
	"github.com/filecoin-project/lotus/chain/gen"
	. "github.com/filecoin-project/lotus/chain/stmgr"
	"github.com/filecoin-project/lotus/chain/types"
	"github.com/filecoin-project/lotus/chain/vm"
	_ "github.com/filecoin-project/lotus/lib/sigs/bls"
	_ "github.com/filecoin-project/lotus/lib/sigs/secp"
)
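
// init lowers the proof-type, consensus-power, and verified-deal policy floors
// so the test chain generator can run with tiny 2KiB sectors.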
func init() {
	policy.SetSupportedProofTypes(abi.RegisteredSealProof_StackedDrg2KiBV1)
	policy.SetConsensusMinerMinPower(abi.NewStoragePower(2048))
	policy.SetMinVerifiedDealSize(abi.NewStoragePower(256))
}

const testForkHeight = 40
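
// testActor is a minimal actor used to observe the fork: its constructor
// stores 11 in state, and the migration registered by the tests rewrites that
// value to 55 at testForkHeight. TestMethod asserts that the rewrite lands
// exactly at the fork boundary.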
type testActor struct {
}

// Code must reuse an existing actor code CID that accounts are allowed to exec.
func (testActor) Code() cid.Cid  { return builtin0.PaymentChannelActorCodeID }
func (testActor) State() cbor.Er { return new(testActorState) }

type testActorState struct {
	HasUpgraded uint64
}

func (tas *testActorState) MarshalCBOR(w io.Writer) error {
	return cbg.CborWriteHeader(w, cbg.MajUnsignedInt, tas.HasUpgraded)
}

func (tas *testActorState) UnmarshalCBOR(r io.Reader) error {
	t, v, err := cbg.CborReadHeader(r)
	if err != nil {
		return err
	}
	if t != cbg.MajUnsignedInt {
		return fmt.Errorf("wrong type in test actor state (got %d)", t)
	}
	tas.HasUpgraded = v
	return nil
}
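
// Exports maps method numbers to implementations: method 1 is the constructor
// invoked via the init actor's Exec, method 2 is the assertion method the
// tests call against the deployed actor.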
func (ta testActor) Exports() []interface{} {
	return []interface{}{
		1: ta.Constructor,
		2: ta.TestMethod,
	}
}

func (ta *testActor) Constructor(rt rt2.Runtime, params *abi.EmptyValue) *abi.EmptyValue {
	rt.ValidateImmediateCallerAcceptAny()
	rt.StateCreate(&testActorState{11})
	//fmt.Println("NEW ACTOR ADDRESS IS: ", rt.Receiver())
	return abi.Empty
}

func (ta *testActor) TestMethod(rt rt2.Runtime, params *abi.EmptyValue) *abi.EmptyValue {
	rt.ValidateImmediateCallerAcceptAny()
	var st testActorState
	rt.StateReadonly(&st)

	if rt.CurrEpoch() > testForkHeight {
		if st.HasUpgraded != 55 {
			panic(aerrors.Fatal("fork updating applied in wrong order"))
		}
	} else {
		if st.HasUpgraded != 11 {
			panic(aerrors.Fatal("fork updating happened too early"))
		}
	}

	return abi.Empty
}
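
// TestForkHeightTriggers registers a migration at testForkHeight that mutates
// the test actor's state (11 -> 55), then drives the chain 50 tipsets while
// calling TestMethod in every round to verify the state change takes effect
// exactly at the fork height.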
func TestForkHeightTriggers(t *testing.T) {
	//stm: @CHAIN_STATETREE_GET_ACTOR_001, @CHAIN_STATETREE_FLUSH_001, @TOKEN_WALLET_SIGN_001
	//stm: @CHAIN_GEN_NEXT_TIPSET_001
	//stm: @CHAIN_STATE_RESOLVE_TO_KEY_ADDR_001, @CHAIN_STATE_SET_VM_CONSTRUCTOR_001
	logging.SetAllLoggers(logging.LevelInfo)

	ctx := context.TODO()

	cg, err := gen.NewGenerator()
	if err != nil {
		t.Fatal(err)
	}

	// predicting the address here... may break if other assumptions change
	taddr, err := address.NewIDAddress(1002)
	if err != nil {
		t.Fatal(err)
	}

	sm, err := NewStateManager(
		cg.ChainStore(), filcns.NewTipSetExecutor(), cg.StateManager().VMSys(), UpgradeSchedule{{
			Network: network.Version1,
			Height:  testForkHeight,
			Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,
				root cid.Cid, height abi.ChainEpoch, ts *types.TipSet) (cid.Cid, error) {
				cst := ipldcbor.NewCborStore(sm.ChainStore().StateBlockstore())

				st, err := sm.StateTree(root)
				if err != nil {
					return cid.Undef, xerrors.Errorf("getting state tree: %w", err)
				}

				act, err := st.GetActor(taddr)
				if err != nil {
					return cid.Undef, err
				}

				var tas testActorState
				if err := cst.Get(ctx, act.Head, &tas); err != nil {
					return cid.Undef, xerrors.Errorf("in fork handler, failed to run get: %w", err)
				}

				tas.HasUpgraded = 55

				ns, err := cst.Put(ctx, &tas)
				if err != nil {
					return cid.Undef, err
				}

				act.Head = ns

				if err := st.SetActor(taddr, act); err != nil {
					return cid.Undef, err
				}

				return st.Flush(ctx)
			}}}, cg.BeaconSchedule())
	if err != nil {
		t.Fatal(err)
	}
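
	// Swap in a legacy VM whose invoker knows about testActor; the stock
	// registry only contains the builtin actors.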
	inv := filcns.NewActorRegistry()
	registry := builtin.MakeRegistryLegacy([]rtt.VMActor{testActor{}})
	inv.Register(actorstypes.Version0, nil, registry)

	sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
		nvm, err := vm.NewLegacyVM(ctx, vmopt)
		if err != nil {
			return nil, err
		}
		nvm.SetInvoker(inv)
		return nvm, nil
	})

	cg.SetStateManager(sm)
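
	// Create the test actor by sending an Exec message to the init actor; the
	// new actor should land at the ID address (1002) predicted above.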
	var msgs []*types.SignedMessage

	enc, err := actors.SerializeParams(&init2.ExecParams{CodeCID: (testActor{}).Code()})
	if err != nil {
		t.Fatal(err)
	}

	m := &types.Message{
		From:     cg.Banker(),
		To:       _init.Address,
		Method:   _init.Methods.Exec,
		Params:   enc,
		GasLimit: types.TestGasLimit,
	}
	sig, err := cg.Wallet().WalletSign(ctx, cg.Banker(), m.Cid().Bytes(), api.MsgMeta{})
	if err != nil {
		t.Fatal(err)
	}

	msgs = append(msgs, &types.SignedMessage{
		Signature: *sig,
		Message:   *m,
	})
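
	// The first tipset carries the constructor message; every later tipset
	// carries a single TestMethod (method 2) call against the new actor.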
	nonce := uint64(1)
	cg.GetMessages = func(cg *gen.ChainGen) ([]*types.SignedMessage, error) {
		if len(msgs) > 0 {
			fmt.Println("added construct method")
			m := msgs
			msgs = nil
			return m, nil
		}

		m := &types.Message{
			From:     cg.Banker(),
			To:       taddr,
			Method:   2,
			Params:   nil,
			Nonce:    nonce,
			GasLimit: types.TestGasLimit,
		}
		nonce++

		sig, err := cg.Wallet().WalletSign(ctx, cg.Banker(), m.Cid().Bytes(), api.MsgMeta{})
		if err != nil {
			return nil, err
		}

		return []*types.SignedMessage{
			{
				Signature: *sig,
				Message:   *m,
			},
		}, nil
	}

	for i := 0; i < 50; i++ {
		_, err = cg.NextTipSet()
		if err != nil {
			t.Fatal(err)
		}
	}
}
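
// TestForkRefuseCall checks that Call and CallWithGas refuse to execute
// (returning ErrExpensiveFork) when the requested tipset sits on an expensive
// fork boundary, sweeping 0-2 null rounds on each side of the fork height to
// cover the edge cases.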
func TestForkRefuseCall(t *testing.T) {
	//stm: @CHAIN_GEN_NEXT_TIPSET_001, @CHAIN_GEN_NEXT_TIPSET_FROM_MINERS_001
	//stm: @CHAIN_STATE_RESOLVE_TO_KEY_ADDR_001, @CHAIN_STATE_SET_VM_CONSTRUCTOR_001, @CHAIN_STATE_CALL_001
	logging.SetAllLoggers(logging.LevelInfo)

	for after := 0; after < 3; after++ {
		for before := 0; before < 3; before++ {
			// Makes the lints happy...
			after := after
			before := before
			t.Run(fmt.Sprintf("after:%d,before:%d", after, before), func(t *testing.T) {
				testForkRefuseCall(t, before, after)
			})
		}
	}
}
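
// testForkRefuseCall installs a no-op migration marked Expensive at
// testForkHeight and counts its invocations, so the test can assert that the
// migration ran exactly once and was never triggered implicitly by a call.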
func testForkRefuseCall(t *testing.T, nullsBefore, nullsAfter int) {
	ctx := context.TODO()

	cg, err := gen.NewGenerator()
	if err != nil {
		t.Fatal(err)
	}

	var migrationCount int
	sm, err := NewStateManager(
		cg.ChainStore(), filcns.NewTipSetExecutor(), cg.StateManager().VMSys(), UpgradeSchedule{{
			Network:   network.Version1,
			Expensive: true,
			Height:    testForkHeight,
			Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,
				root cid.Cid, height abi.ChainEpoch, ts *types.TipSet) (cid.Cid, error) {
				migrationCount++
				return root, nil
			}}}, cg.BeaconSchedule())
	if err != nil {
		t.Fatal(err)
	}

	inv := filcns.NewActorRegistry()
	registry := builtin.MakeRegistryLegacy([]rtt.VMActor{testActor{}})
	inv.Register(actorstypes.Version0, nil, registry)

	sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
		nvm, err := vm.NewLegacyVM(ctx, vmopt)
		if err != nil {
			return nil, err
		}
		nvm.SetInvoker(inv)
		return nvm, nil
	})

	cg.SetStateManager(sm)

	enc, err := actors.SerializeParams(&init2.ExecParams{CodeCID: (testActor{}).Code()})
	if err != nil {
		t.Fatal(err)
	}

	m := &types.Message{
		From:       cg.Banker(),
		To:         _init.Address,
		Method:     _init.Methods.Exec,
		Params:     enc,
		GasLimit:   types.TestGasLimit,
		Value:      types.NewInt(0),
		GasPremium: types.NewInt(0),
		GasFeeCap:  types.NewInt(0),
	}

	nullStart := abi.ChainEpoch(testForkHeight - nullsBefore)
	nullLength := abi.ChainEpoch(nullsBefore + nullsAfter)
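
	// Walk the chain to twice the fork height, skipping nullLength rounds once
	// we reach nullStart so the fork boundary falls inside a run of null rounds.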
	for i := 0; i < testForkHeight*2; i++ {
		pts := cg.CurTipset.TipSet()
		skip := abi.ChainEpoch(0)
		if pts.Height() == nullStart {
			skip = nullLength
		}
		ts, err := cg.NextTipSetFromMiners(pts, cg.Miners, skip)
		if err != nil {
			t.Fatal(err)
		}

		parentHeight := pts.Height()
		currentHeight := ts.TipSet.TipSet().Height()

		// CallWithGas calls _at_ the current tipset.
		ret, err := sm.CallWithGas(ctx, m, nil, ts.TipSet.TipSet())
		if parentHeight <= testForkHeight && currentHeight >= testForkHeight {
			// If I had a fork, or I _will_ have a fork, it should fail.
			require.Equal(t, ErrExpensiveFork, err)
		} else {
			require.NoError(t, err)
			require.True(t, ret.MsgRct.ExitCode.IsSuccess())
		}

		// Call always applies the message to the "next block" after the tipset's parent state.
		ret, err = sm.Call(ctx, m, ts.TipSet.TipSet())
		if parentHeight == testForkHeight {
			require.Equal(t, ErrExpensiveFork, err)
		} else {
			require.NoError(t, err)
			require.True(t, ret.MsgRct.ExitCode.IsSuccess())
		}

		// Calls without a tipset should walk back to the last non-fork tipset.
		// We _verify_ that the migration wasn't run multiple times at the end of the
		// test.
		ret, err = sm.CallWithGas(ctx, m, nil, nil)
		require.NoError(t, err)
		require.True(t, ret.MsgRct.ExitCode.IsSuccess())

		ret, err = sm.Call(ctx, m, nil)
		require.NoError(t, err)
		require.True(t, ret.MsgRct.ExitCode.IsSuccess())
	}

	// Make sure we didn't execute the migration multiple times.
	require.Equal(t, migrationCount, 1)
}
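
// TestForkPreMigration exercises the pre-migration machinery: pre-migrations
// declared with StartWithin run ahead of the fork and stash intermediate
// results in the MigrationCache, entries written by a failing pre-migration
// are discarded, and a pre-migration still running within StopWithin epochs of
// the fork is canceled before the real migration executes.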
func TestForkPreMigration(t *testing.T) {
	//stm: @CHAIN_GEN_NEXT_TIPSET_001,
	//stm: @CHAIN_STATE_RESOLVE_TO_KEY_ADDR_001, @CHAIN_STATE_SET_VM_CONSTRUCTOR_001
	logging.SetAllLoggers(logging.LevelInfo)

	cg, err := gen.NewGenerator()
	if err != nil {
		t.Fatal(err)
	}

	fooCid, err := abi.CidBuilder.Sum([]byte("foo"))
	require.NoError(t, err)

	barCid, err := abi.CidBuilder.Sum([]byte("bar"))
	require.NoError(t, err)

	failCid, err := abi.CidBuilder.Sum([]byte("fail"))
	require.NoError(t, err)
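
	// wait20 gates the three StartWithin:20 pre-migrations so that all of them
	// are running concurrently before any of them writes to the cache.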
	var wait20 sync.WaitGroup
	wait20.Add(3)

	wasCanceled := make(chan struct{})

	checkCache := func(t *testing.T, cache MigrationCache) {
		found, value, err := cache.Read("foo")
		require.NoError(t, err)
		require.True(t, found)
		require.Equal(t, fooCid, value)

		found, value, err = cache.Read("bar")
		require.NoError(t, err)
		require.True(t, found)
		require.Equal(t, barCid, value)

		found, _, err = cache.Read("fail")
		require.NoError(t, err)
		require.False(t, found)
	}
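
	// Every pre-migration and the final migration pushes one token into
	// counter; the channel length at the end tells us how many actually ran.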
	counter := make(chan struct{}, 10)

	sm, err := NewStateManager(
		cg.ChainStore(), filcns.NewTipSetExecutor(), cg.StateManager().VMSys(), UpgradeSchedule{{
			Network: network.Version1,
			Height:  testForkHeight,
			Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,
				root cid.Cid, height abi.ChainEpoch, ts *types.TipSet) (cid.Cid, error) {

				// Make sure the pre-migration that should be canceled was in fact canceled.
				select {
				case <-wasCanceled:
				case <-ctx.Done():
					return cid.Undef, ctx.Err()
				}

				// The cache should be set up correctly.
				checkCache(t, cache)

				counter <- struct{}{}

				return root, nil
			},
			PreMigrations: []PreMigration{{
				StartWithin: 20,
				PreMigration: func(ctx context.Context, _ *StateManager, cache MigrationCache,
					_ cid.Cid, _ abi.ChainEpoch, _ *types.TipSet) error {
					wait20.Done()
					wait20.Wait()

					err := cache.Write("foo", fooCid)
					require.NoError(t, err)

					counter <- struct{}{}

					return nil
				},
			}, {
				StartWithin: 20,
				PreMigration: func(ctx context.Context, _ *StateManager, cache MigrationCache,
					_ cid.Cid, _ abi.ChainEpoch, _ *types.TipSet) error {
					wait20.Done()
					wait20.Wait()

					err := cache.Write("bar", barCid)
					require.NoError(t, err)

					counter <- struct{}{}

					return nil
				},
			}, {
				StartWithin: 20,
				PreMigration: func(ctx context.Context, _ *StateManager, cache MigrationCache,
					_ cid.Cid, _ abi.ChainEpoch, _ *types.TipSet) error {
					wait20.Done()
					wait20.Wait()

					err := cache.Write("fail", failCid)
					require.NoError(t, err)

					counter <- struct{}{}

					// Fail this migration. The cached entry should not be persisted.
					return fmt.Errorf("failed")
				},
			}, {
				StartWithin: 15,
				StopWithin:  5,
				PreMigration: func(ctx context.Context, _ *StateManager, cache MigrationCache,
					_ cid.Cid, _ abi.ChainEpoch, _ *types.TipSet) error {
					<-ctx.Done()
					close(wasCanceled)

					counter <- struct{}{}

					return nil
				},
			}, {
				StartWithin: 10,
				PreMigration: func(ctx context.Context, _ *StateManager, cache MigrationCache,
					_ cid.Cid, _ abi.ChainEpoch, _ *types.TipSet) error {
					checkCache(t, cache)

					counter <- struct{}{}

					return nil
				},
			}}},
		}, cg.BeaconSchedule())
	if err != nil {
		t.Fatal(err)
	}
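
	// Start the state manager so it begins scheduling the registered
	// pre-migrations as the chain approaches the fork; stop it when done.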
	require.NoError(t, sm.Start(context.Background()))
	defer func() {
		require.NoError(t, sm.Stop(context.Background()))
	}()

	inv := filcns.NewActorRegistry()
	registry := builtin.MakeRegistryLegacy([]rtt.VMActor{testActor{}})
	inv.Register(actorstypes.Version0, nil, registry)

	sm.SetVMConstructor(func(ctx context.Context, vmopt *vm.VMOpts) (vm.Interface, error) {
		nvm, err := vm.NewLegacyVM(ctx, vmopt)
		if err != nil {
			return nil, err
		}
		nvm.SetInvoker(inv)
		return nvm, nil
	})

	cg.SetStateManager(sm)

	for i := 0; i < 50; i++ {
		_, err := cg.NextTipSet()
		if err != nil {
			t.Fatal(err)
		}
	}

	// We have 5 pre-migration steps, and the migration. They should all have
	// written something to this channel.
	require.Equal(t, 6, len(counter))
}