v1.27.0-a #10

Closed
jonathanface wants to merge 473 commits from v1.27.0-a into master
19 changed files with 376 additions and 88 deletions
Showing only changes of commit 8206954bb0

View File

@ -6,8 +6,53 @@
## Improvements
# v1.26.2 / 2024-04-08
**This is a mandatory patch release for the Filecoin network version 22 mainnet upgrade, for all node operators.**
There is an update in the upgrade epoch for nv22, you can read the [full discussion in Slack here.](https://filecoinproject.slack.com/archives/C05P37R9KQD/p1712548103521969)
The new upgrade epoch is scheduled to be on **epoch `3855360 - 2024-04-24 - 14:00:00Z`**. That means:
- **All mainnet node operators that have upgraded to v1.26.x must upgrade to this patch release before 2024-04-11T14:00:00Z.**
- **All mainnet node operators that are on a version lower than v1.26.x must upgrade to this patch release before 2024-04-24T14:00:00Z.**
This patch also includes fixes for node operators who want to index built-in actor events after the nv22 upgrade. Specifically, it ensures that built-in actor event entries are returned in insertion order when selected ([#11834](https://github.com/filecoin-project/lotus/pull/11834)). It also includes a couple of Lotus-Miner patch fixes: ensuring that SnapDeals work properly and use the new ProveReplicaUpdate3 message after the network version 22 upgrade, that DDO sectors have the correct sector expirations, and that DDO sectors are visible in the `lotus-miner sectors list` cmd.
## Upgrade Warnings
For users currently on a version of Lotus lower than v1.26.0, please note that **this release requires a minimum Go version of v1.21.7 or higher to successfully build Lotus.**
## v1.26.x Inclusions
See the [v1.26.0](#v1260--2024-03-21) release notes below for inclusions and notes on the v1.26.x series.
* [v13 Builtin Actor Bundle](#v13-builtin-actor-bundle)
* [Migration](#migration)
* [New features](#new-features-1)
* [Tracing API](#tracing-api)
* [Ethereum Tracing API (`trace_block` and `trace_replayBlockTransactions`)](#ethereum-tracing-api-trace_block-and-trace_replayblocktransactions)
* [GetActorEventsRaw and SubscribeActorEventsRaw](#getactoreventsraw-and-subscribeactoreventsraw)
* [Events Configuration Changes](#events-configuration-changes)
* [GetAllClaims and GetAllAllocations](#getallclaims-and-getallalocations)
* [Lotus CLI](#lotus-cli)
# v1.26.1 / 2024-03-27
***RETRACTED: Due to a change in network version 22 upgrade epoch, Lotus v1.26.1 should not be used prior to the new upgrade epoch. See v1.26.2 release notes above.***
**This is a patch release for Calibration network users.** The Calibration network is scheduled for an upgrade to include the two additional built-in actor events to ease the transition and observability of DDO for the ecosystem ([#964](https://github.com/filecoin-project/FIPs/pull/964) and [#968](https://github.com/filecoin-project/FIPs/pull/968)).
The epoch agreed upon by the Filecoin implementation teams for the update is `1493854`, corresponding to `2024-04-03T11:00:00Z`. All Calibration network users need to upgrade to this patch release before that epoch.
**Lotus Mainnet Users**: For users on the Mainnet, the [Lotus v1.26.0](https://github.com/filecoin-project/lotus/releases/tag/v1.26.0) release already includes the aforementioned events in preparation for the Mainnet nv22 upgrade. Therefore, both v1.26.0 and v1.26.1 versions are suitable for use on the Mainnet for the coming network version 22 upgrade.
# v1.26.0 / 2024-03-21
***RETRACTED: Due to a change in network version 22 upgrade epoch, Lotus v1.26.0 should not be used prior to the new upgrade epoch. See v1.26.2 release notes above.***
This is the stable release for the upcoming MANDATORY Filecoin network upgrade v22, codenamed Dragon 🐉, at `epoch 3817920 - 2024-04-11 - 14:00:00Z`
The Filecoin network version 22 delivers the following FIPs:
@ -69,9 +114,7 @@ For certain node operators, such as full archival nodes or systems that need to
- feat: implement FIP-0063 ([filecoin-project/lotus#11572](https://github.com/filecoin-project/lotus/pull/11572))
- feat: events: Add Lotus APIs to consume smart contract and built-in actor events ([filecoin-project/lotus#11618](https://github.com/filecoin-project/lotus/pull/11618))
## Improvements ### Tracing API
## Tracing API
Replace the `CodeCid` field in the message trace (added in 1.23.4) with an `InvokedActor` field.

Binary file not shown.

View File

@ -48,6 +48,7 @@ func init() {
if NetworkBundle == "calibrationnet" {
actors.AddActorMeta("storageminer", cid.MustParse("bafk2bzacecnh2ouohmonvebq7uughh4h3ppmg4cjsk74dzxlbbtlcij4xbzxq"), actorstypes.Version12)
actors.AddActorMeta("storageminer", cid.MustParse("bafk2bzaced7emkbbnrewv5uvrokxpf5tlm4jslu2jsv77ofw2yqdglg657uie"), actorstypes.Version12)
actors.AddActorMeta("verifiedregistry", cid.MustParse("bafk2bzacednskl3bykz5qpo54z2j2p4q44t5of4ktd6vs6ymmg2zebsbxazkm"), actorstypes.Version13)
}
}
@ -194,7 +195,8 @@ func readEmbeddedBuiltinActorsMetadata(bundle string) ([]*BuiltinActorsMetadata,
// The following manifest cids existed temporarily on the calibnet testnet
// We include them in our builtin bundle, but intentionally omit from metadata
if root == cid.MustParse("bafy2bzacedrunxfqta5skb7q7x32lnp4efz2oq7fn226ffm7fu5iqs62jkmvs") ||
root == cid.MustParse("bafy2bzacebl4w5ptfvuw6746w7ev562idkbf5ppq72e6zub22435ws2rukzru") { root == cid.MustParse("bafy2bzacebl4w5ptfvuw6746w7ev562idkbf5ppq72e6zub22435ws2rukzru") ||
root == cid.MustParse("bafy2bzacea4firkyvt2zzdwqjrws5pyeluaesh6uaid246tommayr4337xpmi") {
continue
}
bundles = append(bundles, &BuiltinActorsMetadata{

View File

@ -249,8 +249,8 @@ var EmbeddedBuiltinActorsMetadata []*BuiltinActorsMetadata = []*BuiltinActorsMet
}, {
Network: "calibrationnet",
Version: 13,
BundleGitTag: "v13.0.0-rc.3", BundleGitTag: "v13.0.0",
ManifestCid: MustParseCid("bafy2bzacea4firkyvt2zzdwqjrws5pyeluaesh6uaid246tommayr4337xpmi"), ManifestCid: MustParseCid("bafy2bzacect4ktyujrwp6mjlsitnpvuw2pbuppz6w52sfljyo4agjevzm75qs"),
Actors: map[string]cid.Cid{
"account": MustParseCid("bafk2bzaceb3j36ri5y5mfklgp5emlvrms6g4733ss2j3l7jismrxq6ng3tcc6"),
"cron": MustParseCid("bafk2bzaceaz6rocamdxehgpwcbku6wlapwpgzyyvkrploj66mlqptsulf52bs"),
@ -267,7 +267,7 @@ var EmbeddedBuiltinActorsMetadata []*BuiltinActorsMetadata = []*BuiltinActorsMet
"storageminer": MustParseCid("bafk2bzaceckzw3v7wqliyggvjvihz4wywchnnsie4frfvkm3fm5znb64mofri"), "storageminer": MustParseCid("bafk2bzaceckzw3v7wqliyggvjvihz4wywchnnsie4frfvkm3fm5znb64mofri"),
"storagepower": MustParseCid("bafk2bzacea7t4wynzjajl442mpdqbnh3wusjusqtnzgpvefvweh4n2tgzgqhu"), "storagepower": MustParseCid("bafk2bzacea7t4wynzjajl442mpdqbnh3wusjusqtnzgpvefvweh4n2tgzgqhu"),
"system": MustParseCid("bafk2bzacedjnrb5glewazsxpcx6rwiuhl4kwrfcqolyprn6rrjtlzmthlhdq6"), "system": MustParseCid("bafk2bzacedjnrb5glewazsxpcx6rwiuhl4kwrfcqolyprn6rrjtlzmthlhdq6"),
"verifiedregistry": MustParseCid("bafk2bzacednskl3bykz5qpo54z2j2p4q44t5of4ktd6vs6ymmg2zebsbxazkm"), "verifiedregistry": MustParseCid("bafk2bzacebj2zdquagzy2xxn7up574oemg3w7ed3fe4aujkyhgdwj57voesn2"),
},
}, {
Network: "caterpillarnet",

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -77,6 +77,9 @@ const UpgradeWatermelonFixHeight = -100
// This fix upgrade only ran on calibrationnet
const UpgradeWatermelonFix2Height = -101
// This fix upgrade only ran on calibrationnet
const UpgradeCalibrationDragonFixHeight = -102
var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,
UpgradePhoenixHeight: DrandQuicknet,
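`UpgradeCalibrationDragonFixHeight` is added to every network's build parameters, but only calibrationnet gets a real (positive) epoch; the other networks use a unique negative value, which the comments mark as "only ran on calibrationnet" and which can never fire because chain epochs start at 0. A small standalone sketch of that convention; the `applyDue` helper is hypothetical and only illustrates why a negative height is inert:

```go
package main

import "fmt"

// ChainEpoch stands in for abi.ChainEpoch in this standalone sketch.
type ChainEpoch int64

const (
	// Calibrationnet gets a real epoch for the fix upgrade (value from this diff).
	UpgradeCalibrationDragonFixHeight ChainEpoch = 1493854
	// Other networks use a unique negative sentinel ("only ran on calibrationnet"),
	// which is never reached because chain epochs start at 0.
	UpgradeCalibrationDragonFixHeightDisabled ChainEpoch = -102
)

// applyDue is a hypothetical helper showing why a negative height is inert:
// only upgrades with 0 <= height <= current epoch ever fire.
func applyDue(current ChainEpoch, heights ...ChainEpoch) {
	for _, h := range heights {
		if h < 0 || h > current {
			continue // disabled (negative) or not yet reached
		}
		fmt.Println("upgrade at epoch", h, "applies")
	}
}

func main() {
	applyDue(1_500_000,
		UpgradeCalibrationDragonFixHeight,         // fires
		UpgradeCalibrationDragonFixHeightDisabled, // skipped
	)
}
```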

View File

@ -67,6 +67,9 @@ const UpgradeWatermelonFixHeight = -100
// This fix upgrade only ran on calibrationnet
const UpgradeWatermelonFix2Height = -101
// This fix upgrade only ran on calibrationnet
const UpgradeCalibrationDragonFixHeight = -102
var SupportedProofTypes = []abi.RegisteredSealProof{
abi.RegisteredSealProof_StackedDrg512MiBV1,
abi.RegisteredSealProof_StackedDrg32GiBV1,

View File

@ -95,6 +95,9 @@ const UpgradeDragonHeight = 1427974
// This epoch, 120 epochs after the "rest" of the nv22 upgrade, is when we switch to Drand quicknet
const UpgradePhoenixHeight = UpgradeDragonHeight + 120
// 2024-04-03T11:00:00Z
const UpgradeCalibrationDragonFixHeight = 1493854
var SupportedProofTypes = []abi.RegisteredSealProof{
abi.RegisteredSealProof_StackedDrg32GiBV1,
abi.RegisteredSealProof_StackedDrg64GiBV1,

View File

@ -65,6 +65,9 @@ const UpgradeWatermelonFixHeight = -1
// This fix upgrade only ran on calibrationnet
const UpgradeWatermelonFix2Height = -2
// This fix upgrade only ran on calibrationnet
const UpgradeCalibrationDragonFixHeight = -3
var DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,
UpgradePhoenixHeight: DrandQuicknet,

View File

@ -99,8 +99,8 @@ const UpgradeThunderHeight = UpgradeLightningHeight + 2880*21
// 2023-12-12T13:30:00Z
const UpgradeWatermelonHeight = 3469380
// 2024-04-11T14:00:00Z // 2024-04-24T14:00:00Z
var UpgradeDragonHeight = abi.ChainEpoch(3817920) var UpgradeDragonHeight = abi.ChainEpoch(3855360)
// This epoch, 120 epochs after the "rest" of the nv22 upgrade, is when we switch to Drand quicknet
// 2024-04-24T15:00:00Z
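The reschedule moves `UpgradeDragonHeight` from 3817920 to 3855360, i.e. 37440 epochs later; at 30 seconds per epoch that is exactly 13 days, matching the comment change from 2024-04-11T14:00:00Z to 2024-04-24T14:00:00Z. A small standalone check of that arithmetic; the 30-second block time and the mainnet genesis timestamp (2020-08-24T22:00:00Z) are assumptions stated here, not values taken from this diff:

```go
package main

import (
	"fmt"
	"time"
)

const (
	blockDelaySecs = 30                // assumed Filecoin block time in seconds
	mainnetGenesis = int64(1598306400) // assumed mainnet genesis: 2020-08-24T22:00:00Z
)

// epochTime converts a mainnet chain epoch to wall-clock time under the assumptions above.
func epochTime(epoch int64) time.Time {
	return time.Unix(mainnetGenesis+epoch*blockDelaySecs, 0).UTC()
}

func main() {
	oldHeight, newHeight := int64(3817920), int64(3855360)
	fmt.Println(epochTime(oldHeight)) // 2024-04-11 14:00:00 +0000 UTC
	fmt.Println(epochTime(newHeight)) // 2024-04-24 14:00:00 +0000 UTC
	// 37440 epochs * 30s = 1123200s = 312h = exactly 13 days
	fmt.Println(time.Duration(newHeight-oldHeight) * blockDelaySecs * time.Second)
}
```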
@ -112,6 +112,9 @@ const UpgradeWatermelonFixHeight = -1
// This fix upgrade only ran on calibrationnet
const UpgradeWatermelonFix2Height = -2
// This fix upgrade only ran on calibrationnet
const UpgradeCalibrationDragonFixHeight = -3
var SupportedProofTypes = []abi.RegisteredSealProof{
abi.RegisteredSealProof_StackedDrg32GiBV1,
abi.RegisteredSealProof_StackedDrg64GiBV1,
@ -129,6 +132,7 @@ func init() {
if os.Getenv("LOTUS_DISABLE_DRAGON") == "1" {
UpgradeDragonHeight = math.MaxInt64 - 1
delete(DrandSchedule, UpgradePhoenixHeight)
UpgradePhoenixHeight = math.MaxInt64
}
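The new `delete(DrandSchedule, UpgradePhoenixHeight)` line matters because `DrandSchedule` was built with the value `UpgradePhoenixHeight` held at init time as its key; reassigning the variable to `math.MaxInt64` afterwards does not touch the map, so without the delete a stale Quicknet entry would remain keyed at the old epoch. A standalone illustration of that Go map behaviour (the names mirror the diff, the values are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Values are illustrative; the names mirror build/params_mainnet.go.
	UpgradePhoenixHeight := int64(3855360 + 120)
	DrandSchedule := map[int64]string{
		0:                    "DrandMainnet",
		UpgradePhoenixHeight: "DrandQuicknet", // keyed by the value held *now*
	}

	// Order used in the diff: drop the entry via the old key first, then
	// disable the height. Reassigning first would strand a stale
	// "DrandQuicknet" entry at the old epoch, with no easy key to delete it by.
	delete(DrandSchedule, UpgradePhoenixHeight)
	UpgradePhoenixHeight = math.MaxInt64

	fmt.Println(len(DrandSchedule), UpgradePhoenixHeight) // 1 9223372036854775807
}
```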

View File

@ -87,33 +87,34 @@ var (
UpgradeBreezeHeight abi.ChainEpoch = -1
BreezeGasTampingDuration abi.ChainEpoch = 0
UpgradeSmokeHeight abi.ChainEpoch = -1
UpgradeIgnitionHeight abi.ChainEpoch = -2
UpgradeRefuelHeight abi.ChainEpoch = -3
UpgradeTapeHeight abi.ChainEpoch = -4
UpgradeAssemblyHeight abi.ChainEpoch = 10
UpgradeLiftoffHeight abi.ChainEpoch = -5
UpgradeKumquatHeight abi.ChainEpoch = -6
UpgradeCalicoHeight abi.ChainEpoch = -8
UpgradePersianHeight abi.ChainEpoch = -9
UpgradeOrangeHeight abi.ChainEpoch = -10
UpgradeClausHeight abi.ChainEpoch = -11
UpgradeTrustHeight abi.ChainEpoch = -12
UpgradeNorwegianHeight abi.ChainEpoch = -13
UpgradeTurboHeight abi.ChainEpoch = -14
UpgradeHyperdriveHeight abi.ChainEpoch = -15
UpgradeChocolateHeight abi.ChainEpoch = -16
UpgradeOhSnapHeight abi.ChainEpoch = -17
UpgradeSkyrHeight abi.ChainEpoch = -18
UpgradeSharkHeight abi.ChainEpoch = -19
UpgradeHyggeHeight abi.ChainEpoch = -20
UpgradeLightningHeight abi.ChainEpoch = -21
UpgradeThunderHeight abi.ChainEpoch = -22
UpgradeWatermelonHeight abi.ChainEpoch = -23
UpgradeWatermelonFixHeight abi.ChainEpoch = -24
UpgradeWatermelonFix2Height abi.ChainEpoch = -25
UpgradeDragonHeight abi.ChainEpoch = -26
UpgradePhoenixHeight abi.ChainEpoch = -27
UpgradeCalibrationDragonFixHeight abi.ChainEpoch = -28
DrandSchedule = map[abi.ChainEpoch]DrandEnum{
0: DrandMainnet,

View File

@ -23,7 +23,9 @@ import (
init11 "github.com/filecoin-project/go-state-types/builtin/v11/init"
nv19 "github.com/filecoin-project/go-state-types/builtin/v11/migration"
system11 "github.com/filecoin-project/go-state-types/builtin/v11/system"
init12 "github.com/filecoin-project/go-state-types/builtin/v12/init"
nv21 "github.com/filecoin-project/go-state-types/builtin/v12/migration"
system12 "github.com/filecoin-project/go-state-types/builtin/v12/system"
nv22 "github.com/filecoin-project/go-state-types/builtin/v13/migration"
nv17 "github.com/filecoin-project/go-state-types/builtin/v9/migration"
"github.com/filecoin-project/go-state-types/manifest"
@ -296,6 +298,10 @@ func DefaultUpgradeSchedule() stmgr.UpgradeSchedule {
StopWithin: 10,
}},
Expensive: true,
}, {
Height: build.UpgradeCalibrationDragonFixHeight,
Network: network.Version22,
Migration: upgradeActorsV13VerifregFix(calibnetv13BuggyVerifregCID1, calibnetv13CorrectManifestCID1),
},
}
@ -1902,6 +1908,13 @@ var (
calibnetv12BuggyManifestCID1 = cid.MustParse("bafy2bzacedrunxfqta5skb7q7x32lnp4efz2oq7fn226ffm7fu5iqs62jkmvs")
calibnetv12BuggyManifestCID2 = cid.MustParse("bafy2bzacebl4w5ptfvuw6746w7ev562idkbf5ppq72e6zub22435ws2rukzru")
calibnetv12CorrectManifestCID1 = cid.MustParse("bafy2bzacednzb3pkrfnbfhmoqtb3bc6dgvxszpqklf3qcc7qzcage4ewzxsca")
calibnetv13BuggyVerifregCID1 = cid.MustParse("bafk2bzacednskl3bykz5qpo54z2j2p4q44t5of4ktd6vs6ymmg2zebsbxazkm")
calibnetv13BuggyBundleSuffix1 = "calibrationnet-13-rc3"
calibnetv13BuggyManifestCID1 = cid.MustParse("bafy2bzacea4firkyvt2zzdwqjrws5pyeluaesh6uaid246tommayr4337xpmi")
calibnetv13CorrectManifestCID1 = cid.MustParse("bafy2bzacect4ktyujrwp6mjlsitnpvuw2pbuppz6w52sfljyo4agjevzm75qs")
)
func upgradeActorsV12Common(
@ -2227,16 +2240,53 @@ func upgradeActorsV13Common(
)
}
manifest, ok := actors.GetManifest(actorstypes.Version13) // check whether or not this is a calibnet upgrade
if !ok { // we do this because calibnet upgraded to a "wrong" actors bundle, which was then corrected
return cid.Undef, xerrors.Errorf("no manifest CID for v13 upgrade") // we thus upgrade to calibrationnet-buggy in this upgrade
actorsIn, err := state.LoadStateTree(adtStore, root)
if err != nil {
return cid.Undef, xerrors.Errorf("loading state tree: %w", err)
}
initActor, err := actorsIn.GetActor(builtin.InitActorAddr)
if err != nil {
return cid.Undef, xerrors.Errorf("failed to get system actor: %w", err)
}
var initState init12.State
if err := adtStore.Get(ctx, initActor.Head, &initState); err != nil {
return cid.Undef, xerrors.Errorf("failed to get system actor state: %w", err)
}
var manifestCid cid.Cid
if initState.NetworkName == "calibrationnet" {
embedded, ok := build.GetEmbeddedBuiltinActorsBundle(actorstypes.Version13, calibnetv13BuggyBundleSuffix1)
if !ok {
return cid.Undef, xerrors.Errorf("didn't find buggy calibrationnet bundle")
}
var err error
manifestCid, err = bundle.LoadBundle(ctx, writeStore, bytes.NewReader(embedded))
if err != nil {
return cid.Undef, xerrors.Errorf("failed to load buggy calibnet bundle: %w", err)
}
if manifestCid != calibnetv13BuggyManifestCID1 {
return cid.Undef, xerrors.Errorf("didn't find expected buggy calibnet bundle manifest: %s != %s", manifestCid, calibnetv12BuggyManifestCID1)
}
} else {
ok := false
manifestCid, ok = actors.GetManifest(actorstypes.Version13)
if !ok {
return cid.Undef, xerrors.Errorf("no manifest CID for v13 upgrade")
}
}
// Perform the migration
newHamtRoot, err := nv22.MigrateStateTree(ctx, adtStore, manifest, stateRoot.Actors, epoch, config, newHamtRoot, err := nv22.MigrateStateTree(ctx, adtStore, manifestCid, stateRoot.Actors, epoch, config,
migrationLogger{}, cache)
if err != nil {
return cid.Undef, xerrors.Errorf("upgrading to actors v11: %w", err) return cid.Undef, xerrors.Errorf("upgrading to actors v13: %w", err)
}
// Persist the result.
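The calibration branch above keys off the network name stored in the init actor's state. On a running node the same value is exposed through the `StateNetworkName` API, so a quick way to see which branch a given node would take is a small helper like the following. This is an editorial sketch, not part of the change; it assumes the caller already holds a connected `v1api.FullNode` client.

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/api/v1api"
)

// whichV13Manifest mirrors the calibrationnet check in upgradeActorsV13Common,
// but against a live node: StateNetworkName returns the same network name the
// migration reads out of the init actor's state. The connected client is
// assumed to be supplied by the caller.
func whichV13Manifest(ctx context.Context, node v1api.FullNode) error {
	name, err := node.StateNetworkName(ctx)
	if err != nil {
		return err
	}
	if name == "calibrationnet" {
		// nv22 first migrates calibration to the buggy rc3 bundle; the later
		// UpgradeCalibrationDragonFixHeight migration remaps to the corrected one.
		fmt.Println("calibration: buggy v13 bundle at Dragon, corrected at the fix height")
	} else {
		fmt.Println("other networks: the embedded v13.0.0 manifest is used directly")
	}
	return nil
}
```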
@ -2261,6 +2311,163 @@ func upgradeActorsV13Common(
return newRoot, nil
}
// ////////////////////
func upgradeActorsV13VerifregFix(oldBuggyVerifregCID, newManifestCID cid.Cid) func(ctx context.Context, sm *stmgr.StateManager, cache stmgr.MigrationCache, cb stmgr.ExecMonitor, root cid.Cid, epoch abi.ChainEpoch, ts *types.TipSet) (cid.Cid, error) {
return func(ctx context.Context, sm *stmgr.StateManager, cache stmgr.MigrationCache, cb stmgr.ExecMonitor, root cid.Cid, epoch abi.ChainEpoch, ts *types.TipSet) (cid.Cid, error) {
stateStore := sm.ChainStore().StateBlockstore()
adtStore := store.ActorStore(ctx, stateStore)
// ensure that the manifest is loaded in the blockstore
// this loads the "correct" bundle for UpgradeCalibrationDragonFixHeight
if err := bundle.LoadBundles(ctx, stateStore, actorstypes.Version13); err != nil {
return cid.Undef, xerrors.Errorf("failed to load manifest bundle: %w", err)
}
// this loads the buggy bundle, for UpgradeDragonHeight
embedded, ok := build.GetEmbeddedBuiltinActorsBundle(actorstypes.Version13, calibnetv13BuggyBundleSuffix1)
if !ok {
return cid.Undef, xerrors.Errorf("didn't find buggy calibrationnet bundle")
}
_, err := bundle.LoadBundle(ctx, stateStore, bytes.NewReader(embedded))
if err != nil {
return cid.Undef, xerrors.Errorf("failed to load buggy calibnet bundle: %w", err)
}
// now confirm we have the one we're migrating to
if haveManifest, err := stateStore.Has(ctx, newManifestCID); err != nil {
return cid.Undef, xerrors.Errorf("blockstore error when loading manifest %s: %w", newManifestCID, err)
} else if !haveManifest {
return cid.Undef, xerrors.Errorf("missing new manifest %s in blockstore", newManifestCID)
}
// Load input state tree
actorsIn, err := state.LoadStateTree(adtStore, root)
if err != nil {
return cid.Undef, xerrors.Errorf("loading state tree: %w", err)
}
// load old manifest data
systemActor, err := actorsIn.GetActor(builtin.SystemActorAddr)
if err != nil {
return cid.Undef, xerrors.Errorf("failed to get system actor: %w", err)
}
var systemState system12.State
if err := adtStore.Get(ctx, systemActor.Head, &systemState); err != nil {
return cid.Undef, xerrors.Errorf("failed to get system actor state: %w", err)
}
var oldManifestData manifest.ManifestData
if err := adtStore.Get(ctx, systemState.BuiltinActors, &oldManifestData); err != nil {
return cid.Undef, xerrors.Errorf("failed to get old manifest data: %w", err)
}
// load new manifest
var newManifest manifest.Manifest
if err := adtStore.Get(ctx, newManifestCID, &newManifest); err != nil {
return cid.Undef, xerrors.Errorf("error reading actor manifest: %w", err)
}
if err := newManifest.Load(ctx, adtStore); err != nil {
return cid.Undef, xerrors.Errorf("error loading actor manifest: %w", err)
}
// build the CID mapping
codeMapping := make(map[cid.Cid]cid.Cid, len(oldManifestData.Entries))
for _, oldEntry := range oldManifestData.Entries {
newCID, ok := newManifest.Get(oldEntry.Name)
if !ok {
return cid.Undef, xerrors.Errorf("missing manifest entry for %s", oldEntry.Name)
}
// Note: we expect newCID to be the same as oldEntry.Code for all actors except the verifreg actor
codeMapping[oldEntry.Code] = newCID
}
// Create empty actorsOut
actorsOut, err := state.NewStateTree(adtStore, actorsIn.Version())
if err != nil {
return cid.Undef, xerrors.Errorf("failed to create new tree: %w", err)
}
// Perform the migration
err = actorsIn.ForEach(func(a address.Address, actor *types.Actor) error {
newCid, ok := codeMapping[actor.Code]
if !ok {
return xerrors.Errorf("didn't find mapping for %s", actor.Code)
}
return actorsOut.SetActor(a, &types.ActorV5{
Code: newCid,
Head: actor.Head,
Nonce: actor.Nonce,
Balance: actor.Balance,
Address: actor.Address,
})
})
if err != nil {
return cid.Undef, xerrors.Errorf("failed to perform migration: %w", err)
}
systemState.BuiltinActors = newManifest.Data
newSystemHead, err := adtStore.Put(ctx, &systemState)
if err != nil {
return cid.Undef, xerrors.Errorf("failed to put new system state: %w", err)
}
systemActor.Head = newSystemHead
if err = actorsOut.SetActor(builtin.SystemActorAddr, systemActor); err != nil {
return cid.Undef, xerrors.Errorf("failed to put new system actor: %w", err)
}
// Sanity checking
err = actorsIn.ForEach(func(a address.Address, inActor *types.Actor) error {
outActor, err := actorsOut.GetActor(a)
if err != nil {
return xerrors.Errorf("failed to get actor in outTree: %w", err)
}
if inActor.Nonce != outActor.Nonce {
return xerrors.Errorf("mismatched nonce for actor %s", a)
}
if !inActor.Balance.Equals(outActor.Balance) {
return xerrors.Errorf("mismatched balance for actor %s: %d != %d", a, inActor.Balance, outActor.Balance)
}
if inActor.Address != outActor.Address && inActor.Address.String() != outActor.Address.String() {
return xerrors.Errorf("mismatched address for actor %s: %s != %s", a, inActor.Address, outActor.Address)
}
if inActor.Head != outActor.Head && a != builtin.SystemActorAddr {
return xerrors.Errorf("mismatched head for actor %s", a)
}
// Actor Codes are only expected to change for the verifreg actor
if inActor.Code != oldBuggyVerifregCID && inActor.Code != outActor.Code {
return xerrors.Errorf("unexpected change in code for actor %s", a)
}
return nil
})
if err != nil {
return cid.Undef, xerrors.Errorf("failed to sanity check migration: %w", err)
}
// Persist the result.
newRoot, err := actorsOut.Flush(ctx)
if err != nil {
return cid.Undef, xerrors.Errorf("failed to persist new state root: %w", err)
}
return newRoot, nil
}
}
////////////////////
// Example upgrade function if upgrade requires only code changes
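The fix migration above changes only the verified-registry actor's code CID, from the buggy calibration rc3 CID to the corrected v13.0.0 one (both listed in this diff). A hedged sketch for checking which code CID a calibration node currently reports for the verifreg actor via the public API; only the two CIDs are taken from the change itself, and the client wiring is assumed to exist:

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/go-state-types/builtin"
	"github.com/ipfs/go-cid"

	"github.com/filecoin-project/lotus/api/v1api"
	"github.com/filecoin-project/lotus/chain/types"
)

// Buggy and corrected calibrationnet v13 verifreg code CIDs, copied from this diff.
var (
	buggyVerifregCode = cid.MustParse("bafk2bzacednskl3bykz5qpo54z2j2p4q44t5of4ktd6vs6ymmg2zebsbxazkm")
	fixedVerifregCode = cid.MustParse("bafk2bzacebj2zdquagzy2xxn7up574oemg3w7ed3fe4aujkyhgdwj57voesn2")
)

// checkVerifregCode reports whether a calibration node still runs the buggy
// verifreg actor code or is already past UpgradeCalibrationDragonFixHeight.
// The connected client is assumed to be supplied by the caller.
func checkVerifregCode(ctx context.Context, node v1api.FullNode) error {
	act, err := node.StateGetActor(ctx, builtin.VerifiedRegistryActorAddr, types.EmptyTSK)
	if err != nil {
		return err
	}
	switch act.Code {
	case buggyVerifregCode:
		fmt.Println("verifreg (f06) still uses the buggy rc3 code CID")
	case fixedVerifregCode:
		fmt.Println("verifreg (f06) uses the corrected v13.0.0 code CID")
	default:
		fmt.Println("verifreg (f06) code CID:", act.Code)
	}
	return nil
}
```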

View File

@ -613,7 +613,8 @@ func (ei *EventIndex) prefillFilter(ctx context.Context, f *eventFilter, exclude
s = s + " WHERE " + strings.Join(clauses, " AND ") s = s + " WHERE " + strings.Join(clauses, " AND ")
} }
s += " ORDER BY event.height DESC" // retain insertion order of event_entry rows with the implicit _rowid_ column
s += " ORDER BY event.height DESC, event_entry._rowid_ ASC"
stmt, err := ei.db.Prepare(s)
if err != nil {
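The amended query sorts by `event.height DESC` and then by the implicit SQLite `_rowid_` of `event_entry`, so the entries of each event come back in the order they were inserted; this is what lets the integration tests below assert exact entry order with `require.Equal`. A standalone sketch of the idea against an in-memory SQLite database; the table here is a simplified stand-in for the real `event_entry` schema, and the `go-sqlite3` driver is an assumption:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver for this sketch
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Minimal stand-in for event_entry: no explicit ordering column, so
	// insertion order is only recoverable via the implicit _rowid_.
	if _, err := db.Exec(`CREATE TABLE event_entry (event_id INTEGER, key TEXT)`); err != nil {
		log.Fatal(err)
	}
	for _, k := range []string{"$type", "id", "client", "provider"} {
		if _, err := db.Exec(`INSERT INTO event_entry (event_id, key) VALUES (1, ?)`, k); err != nil {
			log.Fatal(err)
		}
	}

	// Same idea as the amended prefillFilter query: order by _rowid_ to get
	// entries back in insertion order.
	rows, err := db.Query(`SELECT key FROM event_entry WHERE event_id = 1 ORDER BY event_entry._rowid_ ASC`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var key string
		if err := rows.Scan(&key); err != nil {
			log.Fatal(err)
		}
		fmt.Println(key) // $type, id, client, provider, in insertion order
	}
}
```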

View File

@ -11,6 +11,7 @@ import (
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
"github.com/ipld/go-ipld-prime" "github.com/ipld/go-ipld-prime"
"github.com/ipld/go-ipld-prime/codec/dagcbor" "github.com/ipld/go-ipld-prime/codec/dagcbor"
cidlink "github.com/ipld/go-ipld-prime/linking/cid"
"github.com/ipld/go-ipld-prime/node/basicnode" "github.com/ipld/go-ipld-prime/node/basicnode"
"github.com/multiformats/go-multicodec" "github.com/multiformats/go-multicodec"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
@ -24,6 +25,7 @@ import (
"github.com/filecoin-project/go-state-types/exitcode" "github.com/filecoin-project/go-state-types/exitcode"
"github.com/filecoin-project/go-state-types/network" "github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/actors/builtin/market" "github.com/filecoin-project/lotus/chain/actors/builtin/market"
minertypes "github.com/filecoin-project/lotus/chain/actors/builtin/miner" minertypes "github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/consensus/filcns" "github.com/filecoin-project/lotus/chain/consensus/filcns"
@ -131,15 +133,17 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
var pieces []abi.PieceInfo
var dealID abi.DealID
// market ddo piece
var marketSector api.SectorOffset
var marketPiece abi.PieceInfo
marketPieceSize := abi.PaddedPieceSize(2048 / 2).Unpadded()
{
// market piece pieceData := make([]byte, marketPieceSize)
pieceSize := abi.PaddedPieceSize(2048 / 2).Unpadded()
pieceData := make([]byte, pieceSize)
_, _ = rand.Read(pieceData)
dc, err := miner.ComputeDataCid(ctx, pieceSize, bytes.NewReader(pieceData)) marketPiece, err = miner.ComputeDataCid(ctx, marketPieceSize, bytes.NewReader(pieceData))
require.NoError(t, err)
pieces = append(pieces, dc) pieces = append(pieces, marketPiece)
head, err := client.ChainHead(ctx)
require.NoError(t, err)
@ -148,7 +152,7 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
psdParams := market2.PublishStorageDealsParams{
Deals: []market2.ClientDealProposal{
makeMarketDealProposal(t, client, miner, dc.PieceCID, pieceSize.Padded(), head.Height()+2880*2, head.Height()+2880*400), makeMarketDealProposal(t, client, miner, marketPiece.PieceCID, marketPieceSize.Padded(), head.Height()+2880*2, head.Height()+2880*400),
},
}
@ -177,7 +181,7 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
mcid := smsg.Cid()
so, err := miner.SectorAddPieceToAny(ctx, pieceSize, bytes.NewReader(pieceData), piece.PieceDealInfo{ marketSector, err = miner.SectorAddPieceToAny(ctx, marketPieceSize, bytes.NewReader(pieceData), piece.PieceDealInfo{
PublishCid: &mcid, PublishCid: &mcid,
DealID: dealID, DealID: dealID,
DealProposal: &psdParams.Deals[0].Proposal, DealProposal: &psdParams.Deals[0].Proposal,
@ -190,25 +194,26 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
})
require.NoError(t, err)
require.Equal(t, abi.PaddedPieceSize(0), so.Offset) require.Equal(t, abi.PaddedPieceSize(0), marketSector.Offset)
require.Equal(t, abi.SectorNumber(2), so.Sector) require.Equal(t, abi.SectorNumber(2), marketSector.Sector)
}
// raw ddo piece
var rawSector api.SectorOffset
var rawPiece abi.PieceInfo
rawPieceSize := abi.PaddedPieceSize(2048 / 2).Unpadded()
{
// raw ddo piece pieceData := make([]byte, rawPieceSize)
pieceSize := abi.PaddedPieceSize(2048 / 2).Unpadded()
pieceData := make([]byte, pieceSize)
_, _ = rand.Read(pieceData)
dc, err := miner.ComputeDataCid(ctx, pieceSize, bytes.NewReader(pieceData)) rawPiece, err = miner.ComputeDataCid(ctx, rawPieceSize, bytes.NewReader(pieceData))
require.NoError(t, err)
pieces = append(pieces, dc) pieces = append(pieces, rawPiece)
head, err := client.ChainHead(ctx)
require.NoError(t, err)
so, err := miner.SectorAddPieceToAny(ctx, pieceSize, bytes.NewReader(pieceData), piece.PieceDealInfo{ rawSector, err = miner.SectorAddPieceToAny(ctx, rawPieceSize, bytes.NewReader(pieceData), piece.PieceDealInfo{
PublishCid: nil,
DealID: 0,
DealProposal: nil,
@ -218,18 +223,20 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
},
KeepUnsealed: false,
PieceActivationManifest: &minertypes.PieceActivationManifest{
CID: dc.PieceCID, CID: rawPiece.PieceCID,
Size: dc.Size, Size: rawPiece.Size,
VerifiedAllocationKey: nil,
Notify: nil,
},
})
require.NoError(t, err)
require.Equal(t, abi.PaddedPieceSize(1024), so.Offset) require.Equal(t, abi.PaddedPieceSize(1024), rawSector.Offset)
require.Equal(t, abi.SectorNumber(2), so.Sector) require.Equal(t, abi.SectorNumber(2), rawSector.Sector)
}
require.Equal(t, marketSector.Sector, rawSector.Sector) // sanity check same sector
toCheck := map[abi.SectorNumber]struct{}{
2: {},
}
@ -272,7 +279,7 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
fmt.Println("piece", piece.PieceCID, piece.Size) fmt.Println("piece", piece.PieceCID, piece.Size)
} }
// check "deal-published" actor event // check some actor events
var epochZero abi.ChainEpoch
allEvents, err := miner.FullNode.GetActorEventsRaw(ctx, &types.ActorEventFilter{
FromHeight: &epochZero,
@ -282,21 +289,32 @@ func TestOnboardMixedMarketDDO(t *testing.T) {
var found bool
keyBytes := must.One(ipld.Encode(basicnode.NewString(key), dagcbor.Encode))
for _, event := range allEvents {
for _, e := range event.Entries { require.True(t, len(event.Entries) > 0)
if e.Key == "$type" && bytes.Equal(e.Value, keyBytes) { if event.Entries[0].Key == "$type" && bytes.Equal(event.Entries[0].Value, keyBytes) {
found = true
switch key {
case "deal-published", "deal-activated":
expectedEntries := []types.EventEntry{
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: keyBytes},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "id", Value: must.One(ipld.Encode(basicnode.NewInt(2), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "client", Value: must.One(ipld.Encode(basicnode.NewInt(int64(clientId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
}
require.ElementsMatch(t, expectedEntries, event.Entries)
}
break require.Equal(t, expectedEntries, event.Entries)
case "sector-activated":
// only one sector, that has both our pieces in it
expectedEntries := []types.EventEntry{
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: must.One(ipld.Encode(basicnode.NewString("sector-activated"), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "sector", Value: must.One(ipld.Encode(basicnode.NewInt(int64(rawSector.Sector)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "unsealed-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: expectCommD}), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: marketPiece.PieceCID}), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(marketPieceSize.Padded())), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: rawPiece.PieceCID}), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(rawPieceSize.Padded())), dagcbor.Encode))},
}
require.Equal(t, expectedEntries, event.Entries)
}
break
}
}
require.True(t, found, "expected to find event %s", key)

View File

@ -176,11 +176,11 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
// first sector to start mining is CC
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "sector", Value: must.One(ipld.Encode(basicnode.NewInt(int64(so.Sector)-1), dagcbor.Encode))},
}
require.ElementsMatch(t, expectedEntries, precommitedEvents[0].Entries) require.Equal(t, expectedEntries, precommitedEvents[0].Entries)
// second sector has our piece
expectedEntries[1].Value = must.One(ipld.Encode(basicnode.NewInt(int64(so.Sector)), dagcbor.Encode))
require.ElementsMatch(t, expectedEntries, precommitedEvents[1].Entries) require.Equal(t, expectedEntries, precommitedEvents[1].Entries)
}
{
@ -193,7 +193,7 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "sector", Value: must.One(ipld.Encode(basicnode.NewInt(int64(so.Sector)-1), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "sector", Value: must.One(ipld.Encode(basicnode.NewInt(int64(so.Sector)-1), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "unsealed-cid", Value: must.One(ipld.Encode(datamodel.Null, dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "unsealed-cid", Value: must.One(ipld.Encode(datamodel.Null, dagcbor.Encode))},
} }
require.ElementsMatch(t, expectedEntries, activatedEvents[0].Entries) require.Equal(t, expectedEntries, activatedEvents[0].Entries)
// second sector has our piece, and only our piece, so unsealed-cid matches piece-cid,
// unfortunately we don't have a case with multiple pieces
@ -203,7 +203,7 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
types.EventEntry{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: dc.PieceCID}), dagcbor.Encode))},
types.EventEntry{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(pieceSize.Padded())), dagcbor.Encode))},
)
require.ElementsMatch(t, expectedEntries, activatedEvents[1].Entries) require.Equal(t, expectedEntries, activatedEvents[1].Entries)
}
{
@ -233,21 +233,21 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: must.One(ipld.Encode(basicnode.NewString("allocation"), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: must.One(ipld.Encode(basicnode.NewString("allocation"), dagcbor.Encode))},
// first, bogus, allocation // first, bogus, allocation
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "id", Value: must.One(ipld.Encode(basicnode.NewInt(int64(allocationId)-1), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "id", Value: must.One(ipld.Encode(basicnode.NewInt(int64(allocationId)-1), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "client", Value: must.One(ipld.Encode(basicnode.NewInt(int64(clientId)), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "client", Value: must.One(ipld.Encode(basicnode.NewInt(int64(clientId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: bogusPieceCid}), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: bogusPieceCid}), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(pieceSize.Padded())), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(pieceSize.Padded())), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-min", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MinimumVerifiedAllocationTerm), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-min", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MinimumVerifiedAllocationTerm), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-max", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MaximumVerifiedAllocationTerm), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-max", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MaximumVerifiedAllocationTerm), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "expiration", Value: must.One(ipld.Encode(basicnode.NewInt(int64(bogusAllocationExpiry)), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "expiration", Value: must.One(ipld.Encode(basicnode.NewInt(int64(bogusAllocationExpiry)), dagcbor.Encode))},
} }
require.ElementsMatch(t, expectedEntries, allocationEvents[0].Entries) require.Equal(t, expectedEntries, allocationEvents[0].Entries)
// the second, real allocation
expectedEntries[1].Value = must.One(ipld.Encode(basicnode.NewInt(int64(allocationId)), dagcbor.Encode)) // "id"
expectedEntries[4].Value = must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: dc.PieceCID}), dagcbor.Encode)) // "piece-cid"
expectedEntries[8].Value = must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MaximumVerifiedAllocationExpiration), dagcbor.Encode)) // "expiration"
require.ElementsMatch(t, expectedEntries, allocationEvents[1].Entries) require.Equal(t, expectedEntries, allocationEvents[1].Entries)
}
{
@ -258,15 +258,15 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
expectedEntries := []types.EventEntry{
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: must.One(ipld.Encode(basicnode.NewString("allocation-removed"), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "id", Value: must.One(ipld.Encode(basicnode.NewInt(int64(allocationId)-1), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "client", Value: must.One(ipld.Encode(basicnode.NewInt(int64(clientId)), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "client", Value: must.One(ipld.Encode(basicnode.NewInt(int64(clientId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: bogusPieceCid}), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: bogusPieceCid}), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(pieceSize.Padded())), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(pieceSize.Padded())), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-min", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MinimumVerifiedAllocationTerm), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-min", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MinimumVerifiedAllocationTerm), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-max", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MaximumVerifiedAllocationTerm), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-max", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MaximumVerifiedAllocationTerm), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "expiration", Value: must.One(ipld.Encode(basicnode.NewInt(int64(bogusAllocationExpiry)), dagcbor.Encode))}, {Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "expiration", Value: must.One(ipld.Encode(basicnode.NewInt(int64(bogusAllocationExpiry)), dagcbor.Encode))},
} }
require.ElementsMatch(t, expectedEntries, allocationEvents[0].Entries) require.Equal(t, expectedEntries, allocationEvents[0].Entries)
}
{
@ -277,8 +277,8 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: must.One(ipld.Encode(basicnode.NewString("claim"), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: must.One(ipld.Encode(basicnode.NewString("claim"), dagcbor.Encode))},
// claimId inherits from its original allocationId // claimId inherits from its original allocationId
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "id", Value: must.One(ipld.Encode(basicnode.NewInt(int64(allocationId)), dagcbor.Encode))}, {Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "id", Value: must.One(ipld.Encode(basicnode.NewInt(int64(allocationId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "client", Value: must.One(ipld.Encode(basicnode.NewInt(int64(clientId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "provider", Value: must.One(ipld.Encode(basicnode.NewInt(int64(minerId)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "piece-cid", Value: must.One(ipld.Encode(basicnode.NewLink(cidlink.Link{Cid: dc.PieceCID}), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "piece-size", Value: must.One(ipld.Encode(basicnode.NewInt(int64(pieceSize.Padded())), dagcbor.Encode))},
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-min", Value: must.One(ipld.Encode(basicnode.NewInt(verifregtypes13.MinimumVerifiedAllocationTerm), dagcbor.Encode))},
@ -286,7 +286,7 @@ func TestOnboardRawPieceVerified_WithActorEvents(t *testing.T) {
{Flags: 0x01, Codec: uint64(multicodec.Cbor), Key: "term-start", Value: must.One(ipld.Encode(basicnode.NewInt(int64(claimEvents[0].Height)), dagcbor.Encode))},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "sector", Value: must.One(ipld.Encode(basicnode.NewInt(int64(si.SectorID)), dagcbor.Encode))},
}
require.ElementsMatch(t, expectedEntries, claimEvents[0].Entries) require.Equal(t, expectedEntries, claimEvents[0].Entries)
}
// verify that we can trace a datacap allocation through to a claim with the events, since this

View File

@ -189,7 +189,7 @@ loop:
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "$type", Value: keyBytes},
{Flags: 0x03, Codec: uint64(multicodec.Cbor), Key: "sector", Value: must.One(ipld.Encode(basicnode.NewInt(int64(toTerminate)), dagcbor.Encode))},
}
require.ElementsMatch(t, expectedEntries, event.Entries) require.Equal(t, expectedEntries, event.Entries)
}
break
}
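Throughout these test files `require.ElementsMatch` is replaced with `require.Equal`: now that event entries are returned in insertion order (see the `ORDER BY ... _rowid_` change above), the assertions can check the exact order rather than just set membership. A toy standalone test showing the difference; it is illustrative only:

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestOrderSensitivity(t *testing.T) {
	got := []string{"$type", "id", "client", "provider"}

	// ElementsMatch ignores ordering: this passes even if the index shuffled entries.
	require.ElementsMatch(t, []string{"provider", "client", "id", "$type"}, got)

	// Equal asserts the exact insertion order the amended query now guarantees.
	require.Equal(t, []string{"$type", "id", "client", "provider"}, got)
}
```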