Merge branch 'master' into fix/rice-box

commit 2d1494314e
Jiaying Wang 2021-08-18 15:05:12 -04:00 committed by GitHub
225 changed files with 9280 additions and 8642 deletions


@@ -1,19 +1,29 @@
 comment: off
 ignore:
-- "**/cbor_gen.go"
-- "api/test/**/*"
-- "api/test/*"
-- "gen/**/*"
-- "gen/*"
-- "cmd/lotus-shed/*"
-- "cmd/tvx/*"
-- "cmd/lotus-pcr/*"
-- "cmd/tvx/*"
-- "cmd/lotus-chainwatch/*"
-- "cmd/lotus-health/*"
-- "cmd/lotus-fountain/*"
-- "cmd/lotus-townhall/*"
-- "cmd/lotus-stats/*"
-- "cmd/lotus-pcr/*"
+# Auto generated
+- "^.*_gen.go$"
+- "^.*/mock_full.go$"
+- "^chain/actors/builtin/[^/]*/(message|state|v)[0-4]\\.go$" # We test the latest version only.
+# Tests
+- "api/test/**"
+- "conformance/**"
+# Generators
+- "gen/**"
+- "chain/actors/agen/**"
+# Non-critical utilities
+- "cmd/lotus-shed/**"
+- "cmd/tvx/**"
+- "cmd/lotus-pcr/**"
+- "cmd/tvx/**"
+- "cmd/lotus-health/**"
+- "cmd/lotus-fountain/**"
+- "cmd/lotus-stats/**"
+- "cmd/lotus-pcr/**"
+- "cmd/lotus-sim/**"
+- "cmd/lotus-bench/**"
+- "cmd/lotus-seed/**"
+- "cmd/chain-noise/**"
+- "api/docgen/**"
+- "api/docgen-openrpc/**"
 github_checks:
 annotations: false


@@ -43,13 +43,14 @@ body:
 id: version
 attributes:
 label: Lotus Version
+render: text
 description: Enter the output of `lotus version` and `lotus-miner version` if applicable.
 placeholder: |
 e.g.
 Daemon:1.11.0-rc2+debug+git.0519cd371.dirty+api1.3.0
 Local: lotus version 1.11.0-rc2+debug+git.0519cd371.dirty
 validations:
-reuiqred: true
+required: true
 - type: textarea
 id: Description
 attributes:
@@ -62,19 +63,18 @@ body:
 * For sealing issues, include the output of `lotus-miner sectors status --log <sectorId>` for the failed sector(s).
 * For proving issues, include the output of `lotus-miner proving` info.
 * For deal making issues, include the output of `lotus client list-deals -v` and/or `lotus-miner storage-deals|retrieval-deals|data-transfers list [-v]` commands for the deal(s) in question.
-render: bash
 validations:
 required: true
 - type: textarea
 id: extraInfo
 attributes:
 label: Logging Information
+render: text
 description: |
 Please provide debug logs of the problem, remember you can get set log level control for:
 * lotus: use `lotus log list` to get all log systems available and set level by `lotus log set-level`. An example can be found [here](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/#log-level-control).
 * lotus-miner:`lotus-miner log list` to get all log systems available and set level by `lotus-miner log set-level
 If you don't provide detailed logs when you raise the issue it will almost certainly be the first request I make before furthur diagnosing the problem.
-render: bash
 validations:
 required: true
 - type: textarea
@@ -87,6 +87,5 @@ body:
 2. Do '...'
 3. See error '...'
 ...
-render: bash
 validations:
 required: false


@@ -35,10 +35,11 @@ body:
 - type: textarea
 id: version
 attributes:
+render: text
 label: Lotus Tag and Version
 description: Enter the lotus tag, output of `lotus version` and `lotus-miner version`.
 validations:
-reuiqred: true
+required: true
 - type: textarea
 id: Description
 attributes:
@@ -48,7 +49,6 @@ body:
 * What you were doding when you experienced the bug?
 * Any *error* messages you saw, *where* you saw them, and what you believe may have caused them (if you have any ideas).
 * What is the expected behaviour?
-render: bash
 validations:
 required: true
 - type: textarea
@@ -72,6 +72,7 @@ body:
 - type: textarea
 id: logging
 attributes:
+render: text
 label: Logging Information
 description: Please link to the whole of the miner logs on your side of the transaction. You can upload the logs to a [gist](https://gist.github.com).
 validations:
@@ -86,6 +87,5 @@ body:
 2. Do '...'
 3. See error '...'
 ...
-render: bash
 validations:
 required: false


@@ -36,10 +36,11 @@ body:
 - type: textarea
 id: version
 attributes:
+render: text
 label: Lotus Tag and Version
 description: Enter the lotus tag, output of `lotus version` and `lotus-miner version`.
 validations:
-reuiqred: true
+required: true
 - type: textarea
 id: Description
 attributes:
@@ -51,19 +52,18 @@ body:
 * What is the expected behaviour?
 * For sealing issues, include the output of `lotus-miner sectors status --log <sectorId>` for the failed sector(s).
 * For proving issues, include the output of `lotus-miner proving` info.
-render: bash
 validations:
 required: true
 - type: textarea
 id: extraInfo
 attributes:
 label: Logging Information
+render: text
 description: |
 Please provide debug logs of the problem, remember you can get set log level control for:
 * lotus: use `lotus log list` to get all log systems available and set level by `lotus log set-level`. An example can be found [here](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/#log-level-control).
 * lotus-miner:`lotus-miner log list` to get all log systems available and set level by `lotus-miner log set-level
 If you don't provide detailed logs when you raise the issue it will almost certainly be the first request I make before furthur diagnosing the problem.
-render: bash
 validations:
 required: true
 - type: textarea
@@ -76,6 +76,5 @@ body:
 2. Do '...'
 3. See error '...'
 ...
-render: bash
 validations:
 required: false


@@ -175,7 +175,6 @@ This is a **highly recommended** but optional Lotus v1.11.1 release that introd
 | turuslan | 1 | +1/-1 | 1 |
 | Kirk Baird | 1 | +0/-0 | 1 |
 # 1.11.0 / 2021-07-22
 This is a **highly recommended** release of Lotus that have many bug fixes, improvements and new features.
@@ -429,13 +428,6 @@ Contributors
 | @zhoutian527 | 1 | +2/-2 | 1 |
 | @ribasushi| 1 | +1/-1 | 1 |
-<<<<<<< HEAD
-||||||| merged common ancestors
->>>>>>>>> Temporary merge branch 2
-=======
->>>>>>> releases
->>>>>>> releases
 # 1.10.0 / 2021-06-23
 This is a mandatory release of Lotus that introduces Filecoin network v13, codenamed the HyperDrive upgrade. The

LOTUS_RELEASE_FLOW.md (new file, 142 lines)

@@ -0,0 +1,142 @@
<!-- TOC -->
- [`lotus` Release Flow](#lotus-release-flow)
- [Purpose](#purpose)
- [High-level Summary](#high-level-summary)
- [Motivation and Requirements](#motivation-and-requirements)
- [Adopted Conventions](#adopted-conventions)
- [Major Releases](#major-releases)
- [Mandatory Releases](#mandatory-releases)
- [Feature Releases](#feature-releases)
- [Examples Scenarios](#examples-scenarios)
- [Release Cycle](#release-cycle)
- [Patch Releases](#patch-releases)
- [Performing a Release](#performing-a-release)
- [Security Fix Policy](#security-fix-policy)
- [FAQ](#faq)
- [Why aren't Go major versions used more?](#why-arent-go-major-versions-used-more)
- [Related Items](#related-items)
<!-- /TOC -->
# `lotus` Release Flow
## Purpose
This document aims to describe how the Lotus team plans to ship releases of the Lotus implementation of the Filecoin network. Interested parties can expect new releases to come out as described in this document.
## High-level Summary
- **Major releases** (1.0.0, 2.0.0, etc.) are reserved for significant changes to the Filecoin protocol that transform the network and its use cases. Such changes could include the addition of new Virtual Machines to the protocol, major upgrades to the Proofs of Replication used by Filecoin, etc.
- Even minor releases (1.2.0, 1.4.0, etc.) of the Lotus software correspond to **mandatory releases** as they ship Filecoin network upgrades. Users **must** upgrade to these releases before a certain time in order to keep in sync with the Filecoin network. We aim to publish these releases at least one week before the upgrade deadline.
- Patch versions of even minor releases (1.2.1, 1.4.2, etc.) correspond to **hotfix releases**. Such releases will only be shipped when a critical fix is needed to be applied on top of a mandatory release.
- Odd minor releases (1.3.0, 1.5.0, etc.), as well as patch releases in these series (1.3.1, 1.5.2, etc.) correspond to **feature releases** with new development and bugfixes. These releases are not mandatory, but still highly recommended, **as they may contain critical security fixes**.
- We aim to ship a new feature release of the Lotus software every 3 weeks, so users can expect a regular cadence of Lotus feature releases. Note that mandatory releases for network upgrades may disrupt this schedule.
## Motivation and Requirements
Our primary motivation is for users of the Lotus software (storage providers, storage clients, developers, etc.) to have a clear idea about when they can expect Lotus releases, and what they can expect in a release.
In order to achieve this, we need the following from our release process and conventions:
- Lotus version conventions make it immediately obvious whether a new Lotus release is mandatory or not. A release is mandatory if it ships a network upgrade to the Filecoin protocol.
- The ability to ship critical fixes on top of mandatory releases, so as to avoid forcing users to consume larger unrelated changes.
- A regular cadence of feature releases, so that users can know when a new Lotus release will be available for consumption.
- The ability to ship any number of feature releases between two mandatory releases.
- A clear description of the various stages of testing that a Lotus Release Candidate (RC) goes through.
- Lotus Release issues will present a single source of truth for what may be contained in Lotus releases, including security fixes, and how they will be disclosed.
## Adopted Conventions
This section describes the conventions we have adopted. Users of Lotus are encouraged to review this in detail so as to be informed when new Lotus releases are shipped.
### Major Releases
Bumps to the Lotus major version number (1.0.0, 2.0.0, etc.) are reserved for significant changes to the Filecoin protocol that dramatically transform the network itself. Such changes could include the addition of new Virtual Machines to the protocol, major upgrades to the Proofs of Replication used by Filecoin, etc. These releases are expected to take lots of time to develop and will be rare. See also "Why aren't Go major versions used more?" below.
### Mandatory Releases
Even bumps to the Lotus minor version number (1.2.0, 1.4.0, etc.) are reserved for **mandatory releases** that ship Filecoin network upgrades. Users **must** upgrade to these releases before a certain time in order to keep in sync with the Filecoin network.
Depending on the scope of the upgrade, these releases may take up to several weeks to fully develop and test. We aim to ensure there are at least 2 weeks between the publication of the final Lotus release and the Filecoin network upgrade deadline.
These releases do not follow a regular cadence, as they are developed in lockstep with the other implementations of the Filecoin protocol. As of August 2021, the developers aim to ship 3-4 Filecoin network upgrades a year, though smaller security-critical upgrades may occur unpredictably.
Mandatory releases are somewhat sensitive since all Lotus users are forced to upgrade to them at the same time. As a result, they will be shipped on top of the most recent stable release of Lotus, which will generally be the latest Lotus release that has been in production for more than 2 weeks. (Note: given this rule, the basis of a mandatory release could be a mandatory release or a feature release depending on timing). Mandatory releases will not include any new feature development or bugfixes that haven't already baked in production for 2+ weeks, except for the changes needed for the network upgrade itself. Further, any critical fixes that are needed after the network upgrade will be shipped as patch version bumps to the mandatory release (1.2.1, 1.2.2, etc.) This prevents users from being forced to quickly digest unnecessary changes.
Users should generally aim to always upgrade to a new even minor version release since they either introduce a mandatory network upgrade or a critical fix.
### Feature Releases
All releases under an odd minor version number indicate **feature releases**. These could include releases such as 1.3.0, 1.3.1, 1.5.2, etc.
Feature releases include new development and bug fixes. They are not mandatory, but still highly recommended, **as they may contain critical security fixes**. Note that some of these releases may be very small patch releases that include critical hotfixes. There is no way to distinguish a hotfix release from a feature release by the version number alone; both cases bump the "patch" component.
We aim to ship a new feature release of the Lotus software from our development (master) branch every 3 weeks, so users can expect a regular cadence of Lotus feature releases. Note that mandatory releases for network upgrades may disrupt this schedule. For more, see the Release Cycle section (TODO: Link).
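To make the parity convention above concrete, here is a small illustrative sketch (not code from the Lotus repository) of how a consumer could classify a release from its version string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// releaseKind applies the convention described above: an even minor version
// marks the mandatory (network upgrade) series, an odd minor version marks the
// feature series. Purely illustrative; Lotus itself ships no such helper.
func releaseKind(version string) (string, error) {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 2 {
		return "", fmt.Errorf("unexpected version format: %q", version)
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return "", err
	}
	if minor%2 == 0 {
		return "mandatory (network upgrade) series", nil
	}
	return "feature series", nil
}

func main() {
	for _, v := range []string{"1.12.0", "1.12.1", "1.13.2"} {
		kind, err := releaseKind(v)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s\n", v, kind)
	}
}
```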
### Examples Scenarios
- **Scenario 1**: **Lotus 1.12.0 shipped a network upgrade, and no network upgrades are needed for a long while.**
In this case, the next feature release will be Lotus 1.13.0. In three-week intervals, we will ship Lotus 1.13.1, 1.13.2, and 1.13.3, all containing new features and bug fixes.
Let us assume that after the release of 1.13.3, a critical issue is discovered and a hotfix is quickly developed. This hotfix will then be shipped in **both** 1.12.1 and 1.13.4. Users who have already upgraded to the 1.13 series can simply upgrade to 1.13.4. Users who have chosen to still be on 1.12.0, however, can use 1.12.1 to patch the critical issue without being forced to consume all the changes in the 1.13 series.
- **Scenario 2**: **Lotus 1.12.0 shipped a network upgrade, but the need for an unexpected network upgrade soon arises**
In this case, the Lotus 1.13 series will be dropped entirely, including any RCs that may have been undergoing testing. Instead, the network upgrade will be shipped as Lotus 1.14.0, built on top of Lotus 1.12.0. It will thus include no unnecessary changes, only the work needed to support the new network upgrade.
Any changes that were being worked on in the 1.13.0 series will then get applied on top of Lotus 1.14.0 and get shipped as Lotus 1.15.0.
## Release Cycle
A mandatory release process should take about 3-6 weeks, depending on the amount and the overall complexity of new features being introduced to the network protocol. It may also be shorter if there is a network incident that requires an emergency upgrade. A feature release process should take about 2-3 weeks.
The start time of the mandatory release process is subject to the network upgrade timeline. We will start a new feature release process every 3 weeks on Tuesdays, regardless of when the previous release landed unless it's still ongoing.
### Patch Releases
**Mandatory Release**
If we encounter a serious bug in a mandatory release after a network upgrade, we will create a patch release based on that release. Only the fix for the bug will be included in the patch; the fix will also be backported to the master (dev) branch and to any ongoing feature release branch, if applicable.
The patch release process for mandatory releases follows a compressed release cycle, from hours to days, depending on the severity of the bug and its impact on the network. In a patch release:
1. Automated and internal testing (stage 0 and 1) will be compressed into a few hours.
2. Stage 2-3 will be skipped or shortened case by case.
Some patch releases, especially ones fixing one or more complex bugs that don't require a follow-up mandatory upgrade, may undergo the full release process.
**Feature Release**
Patch releases in an odd minor series (1.3.x, 1.5.x, etc.), such as 1.3.1 or 1.5.2, are themselves **feature releases** with new development and bugfixes. These releases are not mandatory, but still **highly** recommended, **as they may contain critical security fixes**.
---
### Performing a Release
At the beginning of each release cycle, we will generate our "Release tracking issue", which is populated with the content at [https://github.com/filecoin-project/lotus/blob/master/documentation/misc/RELEASE_ISSUE_TEMPLATE.md](https://github.com/filecoin-project/lotus/blob/master/documentation/misc/RELEASE_ISSUE_TEMPLATE.md)
This template will be used to track major goals we have, a planned shipping date, and a complete release checklist tied to a specific release.
### Security Fix Policy
Any release may contain security fixes. Unless the fix addresses a bug being exploited in the wild, the fix will *not* be called out in the release notes. Please make sure to update ASAP.
By policy, the team will usually wait until about 3 weeks after the final release to announce any fixed security issues. However, depending on the impact and ease of discovery of the issue, the team may wait more or less time.
It is important to always update to the latest version ASAP and file issues if you're unable to update for some reason.
Finally, unless a security issue is actively being exploited or a significant number of users are unable to update to the latest version (e.g., due to a difficult migration, breaking changes, etc.), security fixes will *not* be backported to previous releases.
## FAQ
### Why aren't Go major versions used more?
Golang tightly couples source code with versioning (major versions beyond v1 leak into import paths). This poses logistical difficulties to using major versions here. Concretely, if we were to pick a policy that bumped the major version on every network upgrade, we would disrupt every single downstream library/application that consumed the native Lotus API (e.g., libraries depending on the JSON-RPC client, testground tests). They would need to update their code every single time that we released a network breaking change, even if it brought on zero expectation of breakage for the Golang APIs that they depend on. In this scenario, we are signaling breakage on the wrong API surface! We're signaling breakage on the Go level, when what breaks is the network protocol.
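For concreteness, this is what that breakage would look like for a downstream consumer under Go's semantic import versioning (a hypothetical example; no such bump is planned):

```go
// Today, with Lotus on major version 1, a downstream client imports:
import lotusapi "github.com/filecoin-project/lotus/api"

// If every network upgrade bumped the major version, the same client would have
// to rewrite its imports on each upgrade, e.g. after a hypothetical v2 release:
//
//	import lotusapi "github.com/filecoin-project/lotus/v2/api"
```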
## Related Items
1. [Release Issue template](https://github.com/filecoin-project/lotus/blob/master/documentation/misc/RELEASE_ISSUE_TEMPLATE.md)
2. [Lotus Release Flow Discussion](https://github.com/filecoin-project/lotus/discussions/7053): Leave a comment if you have any questions or feedback with regard to the lotus release flow.


@@ -5,7 +5,9 @@ all: build
 unexport GOFLAGS
-GOVERSION:=$(shell go version | cut -d' ' -f 3 | sed 's/^go//' | awk -F. '{printf "%d%03d%03d", $$1, $$2, $$3}')
+GOCC?=go
+GOVERSION:=$(shell $(GOCC) version | tr ' ' '\n' | grep go1 | sed 's/^go//' | awk -F. '{printf "%d%03d%03d", $$1, $$2, $$3}')
 ifeq ($(shell expr $(GOVERSION) \< 1016000), 1)
 $(warning Your Golang version is go$(shell expr $(GOVERSION) / 1000000).$(shell expr $(GOVERSION) % 1000000 / 1000).$(shell expr $(GOVERSION) % 1000))
 $(error Update Golang to version to at least 1.16.0)
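Because `GOCC` is assigned with `?=`, it only defaults to `go` when the variable is not already set, so a specific toolchain can be selected per invocation (for example `make GOCC=go1.16.6 lotus`) without editing the Makefile.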
@@ -85,32 +87,32 @@ interopnet: build-devnets
 lotus: $(BUILD_DEPS)
 rm -f lotus
-go build $(GOFLAGS) -o lotus ./cmd/lotus
+$(GOCC) build $(GOFLAGS) -o lotus ./cmd/lotus
 .PHONY: lotus
 BINS+=lotus
 lotus-miner: $(BUILD_DEPS)
 rm -f lotus-miner
-go build $(GOFLAGS) -o lotus-miner ./cmd/lotus-miner
+$(GOCC) build $(GOFLAGS) -o lotus-miner ./cmd/lotus-miner
 .PHONY: lotus-miner
 BINS+=lotus-miner
 lotus-worker: $(BUILD_DEPS)
 rm -f lotus-worker
-go build $(GOFLAGS) -o lotus-worker ./cmd/lotus-seal-worker
+$(GOCC) build $(GOFLAGS) -o lotus-worker ./cmd/lotus-seal-worker
 .PHONY: lotus-worker
 BINS+=lotus-worker
 lotus-shed: $(BUILD_DEPS)
 rm -f lotus-shed
-go build $(GOFLAGS) -o lotus-shed ./cmd/lotus-shed
+$(GOCC) build $(GOFLAGS) -o lotus-shed ./cmd/lotus-shed
 .PHONY: lotus-shed
 BINS+=lotus-shed
 lotus-gateway: $(BUILD_DEPS)
 rm -f lotus-gateway
-go build $(GOFLAGS) -o lotus-gateway ./cmd/lotus-gateway
+$(GOCC) build $(GOFLAGS) -o lotus-gateway ./cmd/lotus-gateway
 .PHONY: lotus-gateway
 BINS+=lotus-gateway
@@ -138,19 +140,19 @@ install-app:
 lotus-seed: $(BUILD_DEPS)
 rm -f lotus-seed
-go build $(GOFLAGS) -o lotus-seed ./cmd/lotus-seed
+$(GOCC) build $(GOFLAGS) -o lotus-seed ./cmd/lotus-seed
 .PHONY: lotus-seed
 BINS+=lotus-seed
 benchmarks:
-go run github.com/whyrusleeping/bencher ./... > bench.json
+$(GOCC) run github.com/whyrusleeping/bencher ./... > bench.json
 @echo Submitting results
 @curl -X POST 'http://benchmark.kittyhawk.wtf/benchmark' -d '@bench.json' -u "${benchmark_http_cred}"
 .PHONY: benchmarks
 lotus-pond: 2k
-go build -o lotus-pond ./lotuspond
+$(GOCC) build -o lotus-pond ./lotuspond
 .PHONY: lotus-pond
 BINS+=lotus-pond
@@ -161,19 +163,6 @@ lotus-pond-front:
 lotus-pond-app: lotus-pond-front lotus-pond
 .PHONY: lotus-pond-app
-lotus-townhall:
-rm -f lotus-townhall
-go build -o lotus-townhall ./cmd/lotus-townhall
-.PHONY: lotus-townhall
-BINS+=lotus-townhall
-lotus-townhall-front:
-(cd ./cmd/lotus-townhall/townhall && npm i && npm run build)
-.PHONY: lotus-townhall-front
-lotus-townhall-app: lotus-touch lotus-townhall-front
-.PHONY: lotus-townhall-app
 lotus-fountain:
 rm -f lotus-fountain
 go build $(GOFLAGS) -o lotus-fountain ./cmd/lotus-fountain
@@ -181,66 +170,57 @@ lotus-fountain:
 .PHONY: lotus-fountain
 BINS+=lotus-fountain
-lotus-chainwatch:
-rm -f lotus-chainwatch
-go build $(GOFLAGS) -o lotus-chainwatch ./cmd/lotus-chainwatch
-.PHONY: lotus-chainwatch
-BINS+=lotus-chainwatch
 lotus-bench:
 rm -f lotus-bench
-go build -o lotus-bench ./cmd/lotus-bench
+$(GOCC) build -o lotus-bench ./cmd/lotus-bench
 .PHONY: lotus-bench
 BINS+=lotus-bench
 lotus-stats:
 rm -f lotus-stats
-go build $(GOFLAGS) -o lotus-stats ./cmd/lotus-stats
+$(GOCC) build $(GOFLAGS) -o lotus-stats ./cmd/lotus-stats
 .PHONY: lotus-stats
 BINS+=lotus-stats
 lotus-pcr:
 rm -f lotus-pcr
-go build $(GOFLAGS) -o lotus-pcr ./cmd/lotus-pcr
+$(GOCC) build $(GOFLAGS) -o lotus-pcr ./cmd/lotus-pcr
 .PHONY: lotus-pcr
 BINS+=lotus-pcr
 lotus-health:
 rm -f lotus-health
-go build -o lotus-health ./cmd/lotus-health
+$(GOCC) build -o lotus-health ./cmd/lotus-health
 .PHONY: lotus-health
 BINS+=lotus-health
 lotus-wallet:
 rm -f lotus-wallet
-go build -o lotus-wallet ./cmd/lotus-wallet
+$(GOCC) build -o lotus-wallet ./cmd/lotus-wallet
 .PHONY: lotus-wallet
 BINS+=lotus-wallet
 lotus-keygen:
 rm -f lotus-keygen
-go build -o lotus-keygen ./cmd/lotus-keygen
+$(GOCC) build -o lotus-keygen ./cmd/lotus-keygen
 .PHONY: lotus-keygen
 BINS+=lotus-keygen
 testground:
-go build -tags testground -o /dev/null ./cmd/lotus
+$(GOCC) build -tags testground -o /dev/null ./cmd/lotus
 .PHONY: testground
 BINS+=testground
 tvx:
 rm -f tvx
-go build -o tvx ./cmd/tvx
+$(GOCC) build -o tvx ./cmd/tvx
 .PHONY: tvx
 BINS+=tvx
-install-chainwatch: lotus-chainwatch
-install -C ./lotus-chainwatch /usr/local/bin/lotus-chainwatch
 lotus-sim: $(BUILD_DEPS)
 rm -f lotus-sim
-go build $(GOFLAGS) -o lotus-sim ./cmd/lotus-sim
+$(GOCC) build $(GOFLAGS) -o lotus-sim ./cmd/lotus-sim
 .PHONY: lotus-sim
 BINS+=lotus-sim
@@ -262,21 +242,13 @@ install-miner-service: install-miner install-daemon-service
 @echo
 @echo "lotus-miner service installed. Don't forget to run 'sudo systemctl start lotus-miner' to start it and 'sudo systemctl enable lotus-miner' for it to be enabled on startup."
-install-chainwatch-service: install-chainwatch install-daemon-service
-mkdir -p /etc/systemd/system
-mkdir -p /var/log/lotus
-install -C -m 0644 ./scripts/lotus-chainwatch.service /etc/systemd/system/lotus-chainwatch.service
-systemctl daemon-reload
-@echo
-@echo "chainwatch service installed. Don't forget to run 'sudo systemctl start lotus-chainwatch' to start it and 'sudo systemctl enable lotus-chainwatch' for it to be enabled on startup."
 install-main-services: install-miner-service
-install-all-services: install-main-services install-chainwatch-service
+install-all-services: install-main-services
 install-services: install-main-services
-clean-daemon-service: clean-miner-service clean-chainwatch-service
+clean-daemon-service: clean-miner-service
 -systemctl stop lotus-daemon
 -systemctl disable lotus-daemon
 rm -f /etc/systemd/system/lotus-daemon.service
@@ -288,12 +260,6 @@ clean-miner-service:
 rm -f /etc/systemd/system/lotus-miner.service
 systemctl daemon-reload
-clean-chainwatch-service:
--systemctl stop lotus-chainwatch
--systemctl disable lotus-chainwatch
-rm -f /etc/systemd/system/lotus-chainwatch.service
-systemctl daemon-reload
 clean-main-services: clean-daemon-service
 clean-all-services: clean-main-services
@@ -320,25 +286,25 @@ dist-clean:
 .PHONY: dist-clean
 type-gen: api-gen
-go run ./gen/main.go
-go generate -x ./...
+$(GOCC) run ./gen/main.go
+$(GOCC) generate -x ./...
 goimports -w api/
 method-gen: api-gen
-(cd ./lotuspond/front/src/chain && go run ./methodgen.go)
+(cd ./lotuspond/front/src/chain && $(GOCC) run ./methodgen.go)
 actors-gen:
-go run ./chain/actors/agen
-go fmt ./...
+$(GOCC) run ./chain/actors/agen
+$(GOCC) fmt ./...
 api-gen:
-go run ./gen/api
+$(GOCC) run ./gen/api
 goimports -w api
 goimports -w api
 .PHONY: api-gen
 cfgdoc-gen:
-go run ./node/config/cfgdocgen > ./node/config/doc_gen.go
+$(GOCC) run ./node/config/cfgdocgen > ./node/config/doc_gen.go
 appimage: lotus
 rm -rf appimage-builder-cache || true
@@ -352,9 +318,9 @@ appimage: lotus
 docsgen: docsgen-md docsgen-openrpc
 docsgen-md-bin: api-gen actors-gen
-go build $(GOFLAGS) -o docgen-md ./api/docgen/cmd
+$(GOCC) build $(GOFLAGS) -o docgen-md ./api/docgen/cmd
 docsgen-openrpc-bin: api-gen actors-gen
-go build $(GOFLAGS) -o docgen-openrpc ./api/docgen-openrpc/cmd
+$(GOCC) build $(GOFLAGS) -o docgen-openrpc ./api/docgen-openrpc/cmd
 docsgen-md: docsgen-md-full docsgen-md-storage docsgen-md-worker
@@ -394,4 +360,4 @@ print-%:
 @echo $*=$($*)
 circleci:
 go generate -x ./.circleci


@@ -12,14 +12,14 @@ import (
 "github.com/filecoin-project/go-address"
 "github.com/filecoin-project/go-bitfield"
 datatransfer "github.com/filecoin-project/go-data-transfer"
-"github.com/filecoin-project/go-fil-markets/retrievalmarket"
-"github.com/filecoin-project/go-fil-markets/storagemarket"
-"github.com/filecoin-project/go-multistore"
 "github.com/filecoin-project/go-state-types/abi"
 "github.com/filecoin-project/go-state-types/big"
 "github.com/filecoin-project/go-state-types/crypto"
 "github.com/filecoin-project/go-state-types/dline"
+"github.com/filecoin-project/go-fil-markets/retrievalmarket"
+"github.com/filecoin-project/go-fil-markets/storagemarket"
 apitypes "github.com/filecoin-project/lotus/api/types"
 "github.com/filecoin-project/lotus/chain/actors/builtin"
 "github.com/filecoin-project/lotus/chain/actors/builtin/market"
@@ -29,6 +29,7 @@ import (
 "github.com/filecoin-project/lotus/chain/types"
 marketevents "github.com/filecoin-project/lotus/markets/loggers"
 "github.com/filecoin-project/lotus/node/modules/dtypes"
+"github.com/filecoin-project/lotus/node/repo/imports"
 )
 //go:generate go run github.com/golang/mock/mockgen -destination=mocks/mock_full.go -package=mocks . FullNode
@@ -113,6 +114,11 @@ type FullNode interface {
 // will be returned.
 ChainGetTipSetByHeight(context.Context, abi.ChainEpoch, types.TipSetKey) (*types.TipSet, error) //perm:read
+// ChainGetTipSetAfterHeight looks back for a tipset at the specified epoch.
+// If there are no blocks at the specified epoch, the first non-nil tipset at a later epoch
+// will be returned.
+ChainGetTipSetAfterHeight(context.Context, abi.ChainEpoch, types.TipSetKey) (*types.TipSet, error) //perm:read
 // ChainReadObj reads ipld nodes referenced by the specified CID from chain
 // blockstore and returns raw bytes.
 ChainReadObj(context.Context, cid.Cid) ([]byte, error) //perm:read
@@ -331,7 +337,7 @@ type FullNode interface {
 // ClientImport imports file under the specified path into filestore.
 ClientImport(ctx context.Context, ref FileRef) (*ImportRes, error) //perm:admin
 // ClientRemoveImport removes file import
-ClientRemoveImport(ctx context.Context, importID multistore.StoreID) error //perm:admin
+ClientRemoveImport(ctx context.Context, importID imports.ID) error //perm:admin
 // ClientStartDeal proposes a deal with a miner.
 ClientStartDeal(ctx context.Context, params *StartDealParams) (*cid.Cid, error) //perm:admin
 // ClientStatelessDeal fire-and-forget-proposes an offline deal to a miner without subsequent tracking.
@@ -723,16 +729,28 @@ type MinerSectors struct {
 type ImportRes struct {
 Root cid.Cid
-ImportID multistore.StoreID
+ImportID imports.ID
 }
 type Import struct {
-Key multistore.StoreID
+Key imports.ID
 Err string
 Root *cid.Cid
-Source string
+// Source is the provenance of the import, e.g. "import", "unknown", else.
+// Currently useless but may be used in the future.
+Source string
+// FilePath is the path of the original file. It is important that the file
+// is retained at this path, because it will be referenced during
+// the transfer (when we do the UnixFS chunking, we don't duplicate the
+// leaves, but rather point to chunks of the original data through
+// positional references).
 FilePath string
+// CARPath is the path of the CAR file containing the DAG for this import.
+CARPath string
 }
 type DealInfo struct {
@@ -915,7 +933,7 @@ type RetrievalOrder struct {
 Piece *cid.Cid
 Size uint64
-LocalStore *multistore.StoreID // if specified, get data from local store
+FromLocalCAR string // if specified, get data from a local CARv2 file.
 // TODO: support offset
 Total types.BigInt
 UnsealPrice types.BigInt
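A minimal sketch of how a client might call the new `ChainGetTipSetAfterHeight` endpoint, assuming an already-connected v1 `FullNode` handle (the surrounding function is illustrative, not part of this change):

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/go-state-types/abi"

	lotusapi "github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/types"
)

// printTipSetAfter resolves an epoch to the first non-null tipset at or after
// it, per the ChainGetTipSetAfterHeight semantics documented above.
func printTipSetAfter(ctx context.Context, node lotusapi.FullNode, epoch abi.ChainEpoch) error {
	ts, err := node.ChainGetTipSetAfterHeight(ctx, epoch, types.EmptyTSK)
	if err != nil {
		return err
	}
	fmt.Printf("epoch %d resolved to tipset %s at height %d\n", epoch, ts.Key(), ts.Height())
	return nil
}
```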


@@ -35,6 +35,7 @@ type Gateway interface {
 ChainGetMessage(ctx context.Context, mc cid.Cid) (*types.Message, error)
 ChainGetTipSet(ctx context.Context, tsk types.TipSetKey) (*types.TipSet, error)
 ChainGetTipSetByHeight(ctx context.Context, h abi.ChainEpoch, tsk types.TipSetKey) (*types.TipSet, error)
+ChainGetTipSetAfterHeight(ctx context.Context, h abi.ChainEpoch, tsk types.TipSetKey) (*types.TipSet, error)
 ChainNotify(context.Context) (<-chan []*HeadChange, error)
 ChainReadObj(context.Context, cid.Cid) ([]byte, error)
 GasEstimateMessageGas(ctx context.Context, msg *types.Message, spec *MessageSendSpec, tsk types.TipSetKey) (*types.Message, error)


@@ -13,13 +13,14 @@ import (
 "github.com/filecoin-project/go-address"
 datatransfer "github.com/filecoin-project/go-data-transfer"
-"github.com/filecoin-project/go-fil-markets/piecestore"
-"github.com/filecoin-project/go-fil-markets/retrievalmarket"
-"github.com/filecoin-project/go-fil-markets/storagemarket"
 "github.com/filecoin-project/go-state-types/abi"
 "github.com/filecoin-project/specs-actors/v2/actors/builtin/market"
 "github.com/filecoin-project/specs-storage/storage"
+"github.com/filecoin-project/go-fil-markets/piecestore"
+"github.com/filecoin-project/go-fil-markets/retrievalmarket"
+"github.com/filecoin-project/go-fil-markets/storagemarket"
 "github.com/filecoin-project/lotus/chain/types"
 "github.com/filecoin-project/lotus/extern/sector-storage/fsutil"
 "github.com/filecoin-project/lotus/extern/sector-storage/stores"
@@ -166,6 +167,48 @@ type StorageMiner interface {
 MarketPendingDeals(ctx context.Context) (PendingDealInfo, error) //perm:write
 MarketPublishPendingDeals(ctx context.Context) error //perm:admin
+// DagstoreListShards returns information about all shards known to the
+// DAG store. Only available on nodes running the markets subsystem.
+DagstoreListShards(ctx context.Context) ([]DagstoreShardInfo, error) //perm:read
+// DagstoreInitializeShard initializes an uninitialized shard.
+//
+// Initialization consists of fetching the shard's data (deal payload) from
+// the storage subsystem, generating an index, and persisting the index
+// to facilitate later retrievals, and/or to publish to external sources.
+//
+// This operation is intended to complement the initial migration. The
+// migration registers a shard for every unique piece CID, with lazy
+// initialization. Thus, shards are not initialized immediately to avoid
+// IO activity competing with proving. Instead, shard are initialized
+// when first accessed. This method forces the initialization of a shard by
+// accessing it and immediately releasing it. This is useful to warm up the
+// cache to facilitate subsequent retrievals, and to generate the indexes
+// to publish them externally.
+//
+// This operation fails if the shard is not in ShardStateNew state.
+// It blocks until initialization finishes.
+DagstoreInitializeShard(ctx context.Context, key string) error //perm:write
+// DagstoreRecoverShard attempts to recover a failed shard.
+//
+// This operation fails if the shard is not in ShardStateErrored state.
+// It blocks until recovery finishes. If recovery failed, it returns the
+// error.
+DagstoreRecoverShard(ctx context.Context, key string) error //perm:write
+// DagstoreInitializeAll initializes all uninitialized shards in bulk,
+// according to the policy passed in the parameters.
+//
+// It is recommended to set a maximum concurrency to avoid extreme
+// IO pressure if the storage subsystem has a large amount of deals.
+//
+// It returns a stream of events to report progress.
+DagstoreInitializeAll(ctx context.Context, params DagstoreInitializeAllParams) (<-chan DagstoreInitializeAllEvent, error) //perm:write
+// DagstoreGC runs garbage collection on the DAG store.
+DagstoreGC(ctx context.Context) ([]DagstoreShardResult, error) //perm:admin
 // RuntimeSubsystems returns the subsystems that are enabled
 // in this instance.
 RuntimeSubsystems(ctx context.Context) (MinerSubsystems, error) //perm:read
@@ -336,3 +379,34 @@ type DealSchedule struct {
 StartEpoch abi.ChainEpoch
 EndEpoch abi.ChainEpoch
 }
+// DagstoreShardInfo is the serialized form of dagstore.DagstoreShardInfo that
+// we expose through JSON-RPC to avoid clients having to depend on the
+// dagstore lib.
+type DagstoreShardInfo struct {
+Key string
+State string
+Error string
+}
+// DagstoreShardResult enumerates results per shard.
+type DagstoreShardResult struct {
+Key string
+Success bool
+Error string
+}
+type DagstoreInitializeAllParams struct {
+MaxConcurrency int
+IncludeSealed bool
+}
+// DagstoreInitializeAllEvent represents an initialization event.
+type DagstoreInitializeAllEvent struct {
+Key string
+Event string // "start", "end"
+Success bool
+Error string
+Total int
+Current int
+}
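A sketch of how the bulk-initialization API above could be driven from a markets node, assuming a connected `api.StorageMiner` handle (the function name and parameter values are illustrative):

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/api"
)

// initializeAllShards kicks off bulk shard initialization and streams progress
// events until the channel is closed.
func initializeAllShards(ctx context.Context, miner api.StorageMiner) error {
	events, err := miner.DagstoreInitializeAll(ctx, api.DagstoreInitializeAllParams{
		MaxConcurrency: 4,     // bound parallel initializations to limit IO pressure
		IncludeSealed:  false, // illustrative choice; see the parameter docs above
	})
	if err != nil {
		return err
	}
	for evt := range events {
		fmt.Printf("[%d/%d] shard %s: %s success=%v %s\n",
			evt.Current, evt.Total, evt.Key, evt.Event, evt.Success, evt.Error)
	}
	return nil
}
```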


@@ -76,7 +76,7 @@ func TestReturnTypes(t *testing.T) {
 seen[typ] = struct{}{}
 if typ.Kind() == reflect.Interface && typ != bareIface && !typ.Implements(jmarsh) {
-t.Error("methods can't return interfaces", m.Name)
+t.Error("methods can't return interfaces or struct types not implementing json.Marshaller", m.Name)
 }
 switch typ.Kind() {


@@ -5,6 +5,7 @@ package api
 import (
 "fmt"
 "io"
+"math"
 "sort"
 abi "github.com/filecoin-project/go-state-types/abi"
@@ -17,6 +18,7 @@ import (
 var _ = xerrors.Errorf
 var _ = cid.Undef
+var _ = math.E
 var _ = sort.Sort
 func (t *PaymentInfo) MarshalCBOR(w io.Writer) error {


@@ -13,6 +13,7 @@ import (
 "github.com/filecoin-project/go-address"
 "github.com/filecoin-project/go-bitfield"
+"github.com/filecoin-project/go-multistore"
 "github.com/google/uuid"
 "github.com/ipfs/go-cid"
 "github.com/ipfs/go-filestore"
@@ -27,7 +28,6 @@ import (
 filestore2 "github.com/filecoin-project/go-fil-markets/filestore"
 "github.com/filecoin-project/go-fil-markets/retrievalmarket"
 "github.com/filecoin-project/go-jsonrpc/auth"
-"github.com/filecoin-project/go-multistore"
 "github.com/filecoin-project/go-state-types/abi"
 "github.com/filecoin-project/go-state-types/crypto"
@@ -43,6 +43,7 @@ import (
 "github.com/filecoin-project/lotus/extern/sector-storage/storiface"
 sealing "github.com/filecoin-project/lotus/extern/storage-sealing"
 "github.com/filecoin-project/lotus/node/modules/dtypes"
+"github.com/filecoin-project/lotus/node/repo/imports"
 )
 var ExampleValues = map[reflect.Type]interface{}{
@@ -90,6 +91,7 @@ func init() {
 addExample(&pid)
 multistoreIDExample := multistore.StoreID(50)
+storeIDExample := imports.ID(50)
 addExample(bitfield.NewFromSet([]uint64{5}))
 addExample(abi.RegisteredSealProof_StackedDrg32GiBV1_1)
@@ -120,6 +122,8 @@ func init() {
 addExample(time.Minute)
 addExample(datatransfer.TransferID(3))
 addExample(datatransfer.Ongoing)
+addExample(storeIDExample)
+addExample(&storeIDExample)
 addExample(multistoreIDExample)
 addExample(&multistoreIDExample)
 addExample(retrievalmarket.ClientEventDealAccepted)
@@ -176,7 +180,7 @@ func init() {
 // miner specific
 addExample(filestore2.Path(".lotusminer/fstmp123"))
-si := multistore.StoreID(12)
+si := uint64(12)
 addExample(&si)
 addExample(retrievalmarket.DealID(5))
 addExample(abi.ActorID(1000))
@@ -271,6 +275,15 @@ func init() {
 api.SubsystemSectorStorage,
 api.SubsystemMarkets,
 })
+addExample(api.DagstoreShardResult{
+Key: "baga6ea4seaqecmtz7iak33dsfshi627abz4i4665dfuzr3qfs4bmad6dx3iigdq",
+Error: "<error>",
+})
+addExample(api.DagstoreShardInfo{
+Key: "baga6ea4seaqecmtz7iak33dsfshi627abz4i4665dfuzr3qfs4bmad6dx3iigdq",
+State: "ShardStateAvailable",
+Error: "<error>",
+})
 }
 func GetAPIType(name, pkg string) (i interface{}, t reflect.Type, permStruct []reflect.Type) {


@@ -14,7 +14,6 @@ import (
 retrievalmarket "github.com/filecoin-project/go-fil-markets/retrievalmarket"
 storagemarket "github.com/filecoin-project/go-fil-markets/storagemarket"
 auth "github.com/filecoin-project/go-jsonrpc/auth"
-multistore "github.com/filecoin-project/go-multistore"
 abi "github.com/filecoin-project/go-state-types/abi"
 big "github.com/filecoin-project/go-state-types/big"
 crypto "github.com/filecoin-project/go-state-types/crypto"
@@ -26,6 +25,7 @@ import (
 types "github.com/filecoin-project/lotus/chain/types"
 marketevents "github.com/filecoin-project/lotus/markets/loggers"
 dtypes "github.com/filecoin-project/lotus/node/modules/dtypes"
+imports "github.com/filecoin-project/lotus/node/repo/imports"
 miner0 "github.com/filecoin-project/specs-actors/actors/builtin/miner"
 paych "github.com/filecoin-project/specs-actors/actors/builtin/paych"
 gomock "github.com/golang/mock/gomock"
@@ -343,6 +343,21 @@ func (mr *MockFullNodeMockRecorder) ChainGetTipSet(arg0, arg1 interface{}) *gomo
 return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ChainGetTipSet", reflect.TypeOf((*MockFullNode)(nil).ChainGetTipSet), arg0, arg1)
 }
+// ChainGetTipSetAfterHeight mocks base method.
+func (m *MockFullNode) ChainGetTipSetAfterHeight(arg0 context.Context, arg1 abi.ChainEpoch, arg2 types.TipSetKey) (*types.TipSet, error) {
+m.ctrl.T.Helper()
+ret := m.ctrl.Call(m, "ChainGetTipSetAfterHeight", arg0, arg1, arg2)
+ret0, _ := ret[0].(*types.TipSet)
+ret1, _ := ret[1].(error)
+return ret0, ret1
+}
+// ChainGetTipSetAfterHeight indicates an expected call of ChainGetTipSetAfterHeight.
+func (mr *MockFullNodeMockRecorder) ChainGetTipSetAfterHeight(arg0, arg1, arg2 interface{}) *gomock.Call {
+mr.mock.ctrl.T.Helper()
+return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "ChainGetTipSetAfterHeight", reflect.TypeOf((*MockFullNode)(nil).ChainGetTipSetAfterHeight), arg0, arg1, arg2)
+}
 // ChainGetTipSetByHeight mocks base method.
 func (m *MockFullNode) ChainGetTipSetByHeight(arg0 context.Context, arg1 abi.ChainEpoch, arg2 types.TipSetKey) (*types.TipSet, error) {
 m.ctrl.T.Helper()
@@ -760,7 +775,7 @@ func (mr *MockFullNodeMockRecorder) ClientQueryAsk(arg0, arg1, arg2 interface{})
 }
 // ClientRemoveImport mocks base method.
-func (m *MockFullNode) ClientRemoveImport(arg0 context.Context, arg1 multistore.StoreID) error {
+func (m *MockFullNode) ClientRemoveImport(arg0 context.Context, arg1 imports.ID) error {
 m.ctrl.T.Helper()
 ret := m.ctrl.Call(m, "ClientRemoveImport", arg0, arg1)
 ret0, _ := ret[0].(error)


@@ -13,7 +13,6 @@ import (
 "github.com/filecoin-project/go-fil-markets/retrievalmarket"
 "github.com/filecoin-project/go-fil-markets/storagemarket"
 "github.com/filecoin-project/go-jsonrpc/auth"
-"github.com/filecoin-project/go-multistore"
 "github.com/filecoin-project/go-state-types/abi"
 "github.com/filecoin-project/go-state-types/crypto"
 "github.com/filecoin-project/go-state-types/dline"
@@ -29,6 +28,7 @@ import (
 "github.com/filecoin-project/lotus/extern/storage-sealing/sealiface"
 marketevents "github.com/filecoin-project/lotus/markets/loggers"
 "github.com/filecoin-project/lotus/node/modules/dtypes"
+"github.com/filecoin-project/lotus/node/repo/imports"
 "github.com/filecoin-project/specs-storage/storage"
 "github.com/google/uuid"
 "github.com/ipfs/go-cid"
@@ -132,6 +132,8 @@ type FullNodeStruct struct {
 ChainGetTipSet func(p0 context.Context, p1 types.TipSetKey) (*types.TipSet, error) `perm:"read"`
+ChainGetTipSetAfterHeight func(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) `perm:"read"`
 ChainGetTipSetByHeight func(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) `perm:"read"`
 ChainHasObj func(p0 context.Context, p1 cid.Cid) (bool, error) `perm:"read"`
@@ -188,7 +190,7 @@ type FullNodeStruct struct {
 ClientQueryAsk func(p0 context.Context, p1 peer.ID, p2 address.Address) (*storagemarket.StorageAsk, error) `perm:"read"`
-ClientRemoveImport func(p0 context.Context, p1 multistore.StoreID) error `perm:"admin"`
+ClientRemoveImport func(p0 context.Context, p1 imports.ID) error `perm:"admin"`
 ClientRestartDataTransfer func(p0 context.Context, p1 datatransfer.TransferID, p2 peer.ID, p3 bool) error `perm:"write"`
@@ -474,6 +476,8 @@ type GatewayStruct struct {
 ChainGetTipSet func(p0 context.Context, p1 types.TipSetKey) (*types.TipSet, error) ``
+ChainGetTipSetAfterHeight func(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) ``
 ChainGetTipSetByHeight func(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) ``
 ChainHasObj func(p0 context.Context, p1 cid.Cid) (bool, error) ``
@@ -603,6 +607,16 @@ type StorageMinerStruct struct {
 CreateBackup func(p0 context.Context, p1 string) error `perm:"admin"`
+DagstoreGC func(p0 context.Context) ([]DagstoreShardResult, error) `perm:"admin"`
+DagstoreInitializeAll func(p0 context.Context, p1 DagstoreInitializeAllParams) (<-chan DagstoreInitializeAllEvent, error) `perm:"write"`
+DagstoreInitializeShard func(p0 context.Context, p1 string) error `perm:"write"`
+DagstoreListShards func(p0 context.Context) ([]DagstoreShardInfo, error) `perm:"read"`
+DagstoreRecoverShard func(p0 context.Context, p1 string) error `perm:"write"`
 DealsConsiderOfflineRetrievalDeals func(p0 context.Context) (bool, error) `perm:"admin"`
 DealsConsiderOfflineStorageDeals func(p0 context.Context) (bool, error) `perm:"admin"`
@@ -1171,6 +1185,17 @@ func (s *FullNodeStub) ChainGetTipSet(p0 context.Context, p1 types.TipSetKey) (*
 return nil, ErrNotSupported
 }
+func (s *FullNodeStruct) ChainGetTipSetAfterHeight(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) {
+if s.Internal.ChainGetTipSetAfterHeight == nil {
+return nil, ErrNotSupported
+}
+return s.Internal.ChainGetTipSetAfterHeight(p0, p1, p2)
+}
+func (s *FullNodeStub) ChainGetTipSetAfterHeight(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) {
+return nil, ErrNotSupported
+}
 func (s *FullNodeStruct) ChainGetTipSetByHeight(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) {
 if s.Internal.ChainGetTipSetByHeight == nil {
 return nil, ErrNotSupported
@@ -1479,14 +1504,14 @@ func (s *FullNodeStub) ClientQueryAsk(p0 context.Context, p1 peer.ID, p2 address
 return nil, ErrNotSupported
 }
-func (s *FullNodeStruct) ClientRemoveImport(p0 context.Context, p1 multistore.StoreID) error {
+func (s *FullNodeStruct) ClientRemoveImport(p0 context.Context, p1 imports.ID) error {
 if s.Internal.ClientRemoveImport == nil {
 return ErrNotSupported
 }
 return s.Internal.ClientRemoveImport(p0, p1)
 }
-func (s *FullNodeStub) ClientRemoveImport(p0 context.Context, p1 multistore.StoreID) error {
+func (s *FullNodeStub) ClientRemoveImport(p0 context.Context, p1 imports.ID) error {
 return ErrNotSupported
 }
@@ -2997,6 +3022,17 @@ func (s *GatewayStub) ChainGetTipSet(p0 context.Context, p1 types.TipSetKey) (*t
 return nil, ErrNotSupported
 }
+func (s *GatewayStruct) ChainGetTipSetAfterHeight(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) {
+if s.Internal.ChainGetTipSetAfterHeight == nil {
+return nil, ErrNotSupported
+}
+return s.Internal.ChainGetTipSetAfterHeight(p0, p1, p2)
+}
+func (s *GatewayStub) ChainGetTipSetAfterHeight(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) {
+return nil, ErrNotSupported
+}
 func (s *GatewayStruct) ChainGetTipSetByHeight(p0 context.Context, p1 abi.ChainEpoch, p2 types.TipSetKey) (*types.TipSet, error) {
 if s.Internal.ChainGetTipSetByHeight == nil {
 return nil, ErrNotSupported
@@ -3569,6 +3605,61 @@ func (s *StorageMinerStub) CreateBackup(p0 context.Context, p1 string) error {
 return ErrNotSupported
 }
+func (s *StorageMinerStruct) DagstoreGC(p0 context.Context) ([]DagstoreShardResult, error) {
+if s.Internal.DagstoreGC == nil {
+return *new([]DagstoreShardResult), ErrNotSupported
+}
+return s.Internal.DagstoreGC(p0)
+}
+func (s *StorageMinerStub) DagstoreGC(p0 context.Context) ([]DagstoreShardResult, error) {
+return *new([]DagstoreShardResult), ErrNotSupported
+}
+func (s *StorageMinerStruct) DagstoreInitializeAll(p0 context.Context, p1 DagstoreInitializeAllParams) (<-chan DagstoreInitializeAllEvent, error) {
+if s.Internal.DagstoreInitializeAll == nil {
+return nil, ErrNotSupported
+}
+return s.Internal.DagstoreInitializeAll(p0, p1)
+}
+func (s *StorageMinerStub) DagstoreInitializeAll(p0 context.Context, p1 DagstoreInitializeAllParams) (<-chan DagstoreInitializeAllEvent, error) {
+return nil, ErrNotSupported
+}
+func (s *StorageMinerStruct) DagstoreInitializeShard(p0 context.Context, p1 string) error {
+if s.Internal.DagstoreInitializeShard == nil {
+return ErrNotSupported
+}
+return s.Internal.DagstoreInitializeShard(p0, p1)
+}
+func (s *StorageMinerStub) DagstoreInitializeShard(p0 context.Context, p1 string) error {
+return ErrNotSupported
+}
+func (s *StorageMinerStruct) DagstoreListShards(p0 context.Context) ([]DagstoreShardInfo, error) {
if s.Internal.DagstoreListShards == nil {
return *new([]DagstoreShardInfo), ErrNotSupported
}
return s.Internal.DagstoreListShards(p0)
}
func (s *StorageMinerStub) DagstoreListShards(p0 context.Context) ([]DagstoreShardInfo, error) {
return *new([]DagstoreShardInfo), ErrNotSupported
}
func (s *StorageMinerStruct) DagstoreRecoverShard(p0 context.Context, p1 string) error {
if s.Internal.DagstoreRecoverShard == nil {
return ErrNotSupported
}
return s.Internal.DagstoreRecoverShard(p0, p1)
}
func (s *StorageMinerStub) DagstoreRecoverShard(p0 context.Context, p1 string) error {
return ErrNotSupported
}
func (s *StorageMinerStruct) DealsConsiderOfflineRetrievalDeals(p0 context.Context) (bool, error) { func (s *StorageMinerStruct) DealsConsiderOfflineRetrievalDeals(p0 context.Context) (bool, error) {
if s.Internal.DealsConsiderOfflineRetrievalDeals == nil { if s.Internal.DealsConsiderOfflineRetrievalDeals == nil {
return false, ErrNotSupported return false, ErrNotSupported

View File

@ -8,7 +8,6 @@ import (
datatransfer "github.com/filecoin-project/go-data-transfer" datatransfer "github.com/filecoin-project/go-data-transfer"
"github.com/filecoin-project/go-fil-markets/retrievalmarket" "github.com/filecoin-project/go-fil-markets/retrievalmarket"
"github.com/filecoin-project/go-fil-markets/storagemarket" "github.com/filecoin-project/go-fil-markets/storagemarket"
"github.com/filecoin-project/go-multistore"
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto" "github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/dline" "github.com/filecoin-project/go-state-types/dline"
@ -22,6 +21,7 @@ import (
"github.com/filecoin-project/lotus/chain/types" "github.com/filecoin-project/lotus/chain/types"
marketevents "github.com/filecoin-project/lotus/markets/loggers" marketevents "github.com/filecoin-project/lotus/markets/loggers"
"github.com/filecoin-project/lotus/node/modules/dtypes" "github.com/filecoin-project/lotus/node/modules/dtypes"
"github.com/filecoin-project/lotus/node/repo/imports"
) )
//go:generate go run github.com/golang/mock/mockgen -destination=v0mocks/mock_full.go -package=v0mocks . FullNode //go:generate go run github.com/golang/mock/mockgen -destination=v0mocks/mock_full.go -package=v0mocks . FullNode
@ -305,7 +305,7 @@ type FullNode interface {
// ClientImport imports file under the specified path into filestore. // ClientImport imports file under the specified path into filestore.
ClientImport(ctx context.Context, ref api.FileRef) (*api.ImportRes, error) //perm:admin ClientImport(ctx context.Context, ref api.FileRef) (*api.ImportRes, error) //perm:admin
// ClientRemoveImport removes file import // ClientRemoveImport removes file import
ClientRemoveImport(ctx context.Context, importID multistore.StoreID) error //perm:admin ClientRemoveImport(ctx context.Context, importID imports.ID) error //perm:admin
// ClientStartDeal proposes a deal with a miner. // ClientStartDeal proposes a deal with a miner.
ClientStartDeal(ctx context.Context, params *api.StartDealParams) (*cid.Cid, error) //perm:admin ClientStartDeal(ctx context.Context, params *api.StartDealParams) (*cid.Cid, error) //perm:admin
// ClientStatelessDeal fire-and-forget-proposes an offline deal to a miner without subsequent tracking. // ClientStatelessDeal fire-and-forget-proposes an offline deal to a miner without subsequent tracking.

View File

@ -10,7 +10,6 @@ import (
datatransfer "github.com/filecoin-project/go-data-transfer" datatransfer "github.com/filecoin-project/go-data-transfer"
"github.com/filecoin-project/go-fil-markets/retrievalmarket" "github.com/filecoin-project/go-fil-markets/retrievalmarket"
"github.com/filecoin-project/go-fil-markets/storagemarket" "github.com/filecoin-project/go-fil-markets/storagemarket"
"github.com/filecoin-project/go-multistore"
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto" "github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/dline" "github.com/filecoin-project/go-state-types/dline"
@ -22,6 +21,7 @@ import (
"github.com/filecoin-project/lotus/chain/types" "github.com/filecoin-project/lotus/chain/types"
marketevents "github.com/filecoin-project/lotus/markets/loggers" marketevents "github.com/filecoin-project/lotus/markets/loggers"
"github.com/filecoin-project/lotus/node/modules/dtypes" "github.com/filecoin-project/lotus/node/modules/dtypes"
"github.com/filecoin-project/lotus/node/repo/imports"
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
"github.com/libp2p/go-libp2p-core/peer" "github.com/libp2p/go-libp2p-core/peer"
"golang.org/x/xerrors" "golang.org/x/xerrors"
@ -121,7 +121,7 @@ type FullNodeStruct struct {
ClientQueryAsk func(p0 context.Context, p1 peer.ID, p2 address.Address) (*storagemarket.StorageAsk, error) `perm:"read"` ClientQueryAsk func(p0 context.Context, p1 peer.ID, p2 address.Address) (*storagemarket.StorageAsk, error) `perm:"read"`
ClientRemoveImport func(p0 context.Context, p1 multistore.StoreID) error `perm:"admin"` ClientRemoveImport func(p0 context.Context, p1 imports.ID) error `perm:"admin"`
ClientRestartDataTransfer func(p0 context.Context, p1 datatransfer.TransferID, p2 peer.ID, p3 bool) error `perm:"write"` ClientRestartDataTransfer func(p0 context.Context, p1 datatransfer.TransferID, p2 peer.ID, p3 bool) error `perm:"write"`
@ -939,14 +939,14 @@ func (s *FullNodeStub) ClientQueryAsk(p0 context.Context, p1 peer.ID, p2 address
return nil, ErrNotSupported return nil, ErrNotSupported
} }
func (s *FullNodeStruct) ClientRemoveImport(p0 context.Context, p1 multistore.StoreID) error { func (s *FullNodeStruct) ClientRemoveImport(p0 context.Context, p1 imports.ID) error {
if s.Internal.ClientRemoveImport == nil { if s.Internal.ClientRemoveImport == nil {
return ErrNotSupported return ErrNotSupported
} }
return s.Internal.ClientRemoveImport(p0, p1) return s.Internal.ClientRemoveImport(p0, p1)
} }
func (s *FullNodeStub) ClientRemoveImport(p0 context.Context, p1 multistore.StoreID) error { func (s *FullNodeStub) ClientRemoveImport(p0 context.Context, p1 imports.ID) error {
return ErrNotSupported return ErrNotSupported
} }

View File

@ -14,7 +14,6 @@ import (
retrievalmarket "github.com/filecoin-project/go-fil-markets/retrievalmarket" retrievalmarket "github.com/filecoin-project/go-fil-markets/retrievalmarket"
storagemarket "github.com/filecoin-project/go-fil-markets/storagemarket" storagemarket "github.com/filecoin-project/go-fil-markets/storagemarket"
auth "github.com/filecoin-project/go-jsonrpc/auth" auth "github.com/filecoin-project/go-jsonrpc/auth"
multistore "github.com/filecoin-project/go-multistore"
abi "github.com/filecoin-project/go-state-types/abi" abi "github.com/filecoin-project/go-state-types/abi"
big "github.com/filecoin-project/go-state-types/big" big "github.com/filecoin-project/go-state-types/big"
crypto "github.com/filecoin-project/go-state-types/crypto" crypto "github.com/filecoin-project/go-state-types/crypto"
@ -26,6 +25,7 @@ import (
types "github.com/filecoin-project/lotus/chain/types" types "github.com/filecoin-project/lotus/chain/types"
marketevents "github.com/filecoin-project/lotus/markets/loggers" marketevents "github.com/filecoin-project/lotus/markets/loggers"
dtypes "github.com/filecoin-project/lotus/node/modules/dtypes" dtypes "github.com/filecoin-project/lotus/node/modules/dtypes"
imports "github.com/filecoin-project/lotus/node/repo/imports"
miner0 "github.com/filecoin-project/specs-actors/actors/builtin/miner" miner0 "github.com/filecoin-project/specs-actors/actors/builtin/miner"
paych "github.com/filecoin-project/specs-actors/actors/builtin/paych" paych "github.com/filecoin-project/specs-actors/actors/builtin/paych"
gomock "github.com/golang/mock/gomock" gomock "github.com/golang/mock/gomock"
@ -731,7 +731,7 @@ func (mr *MockFullNodeMockRecorder) ClientQueryAsk(arg0, arg1, arg2 interface{})
} }
// ClientRemoveImport mocks base method. // ClientRemoveImport mocks base method.
func (m *MockFullNode) ClientRemoveImport(arg0 context.Context, arg1 multistore.StoreID) error { func (m *MockFullNode) ClientRemoveImport(arg0 context.Context, arg1 imports.ID) error {
m.ctrl.T.Helper() m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "ClientRemoveImport", arg0, arg1) ret := m.ctrl.Call(m, "ClientRemoveImport", arg0, arg1)
ret0, _ := ret[0].(error) ret0, _ := ret[0].(error)

View File

@ -21,15 +21,26 @@ type MarkSet interface {
SetConcurrent() SetConcurrent()
} }
type MarkSetVisitor interface {
MarkSet
ObjectVisitor
}
type MarkSetEnv interface { type MarkSetEnv interface {
// Create creates a new markset within the environment.
// name is a unique name for this markset, mapped to the filesystem in disk-backed environments
// sizeHint is a hint about the expected size of the markset
Create(name string, sizeHint int64) (MarkSet, error) Create(name string, sizeHint int64) (MarkSet, error)
// CreateVisitor is like Create, but returns a wider interface that supports atomic visits.
// It may not be supported by some markset types (e.g. bloom).
CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error)
// SupportsVisitor returns true if the marksets created by this environment support the visitor interface.
SupportsVisitor() bool
Close() error Close() error
} }
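Illustrative sketch (not part of this commit): a minimal example of how a caller might drive the extended environment, assuming the splitstore package's existing imports (cid, xerrors) and the "map" markset type handled by OpenMarkSetEnv below; the path, the name "live", and someCid are placeholders.
func exampleVisitorUsage(someCid cid.Cid) error {
	env, err := OpenMarkSetEnv("/tmp/markset-example", "map")
	if err != nil {
		return err
	}
	defer env.Close() //nolint:errcheck
	// Callers that need atomic check-and-mark should verify support first.
	if !env.SupportsVisitor() {
		return xerrors.Errorf("markset type does not support atomic visitors")
	}
	visitor, err := env.CreateVisitor("live", 0)
	if err != nil {
		return err
	}
	defer visitor.Close() //nolint:errcheck
	first, err := visitor.Visit(someCid)
	if err != nil {
		return err
	}
	_ = first // true the first time someCid is seen, false on subsequent calls
	return nil
}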
func OpenMarkSetEnv(path string, mtype string) (MarkSetEnv, error) { func OpenMarkSetEnv(path string, mtype string) (MarkSetEnv, error) {
switch mtype { switch mtype {
case "bloom":
return NewBloomMarkSetEnv()
case "map": case "map":
return NewMapMarkSetEnv() return NewMapMarkSetEnv()
case "badger": case "badger":

View File

@ -27,12 +27,14 @@ type BadgerMarkSet struct {
writing map[int]map[string]struct{} writing map[int]map[string]struct{}
writers int writers int
seqno int seqno int
version int
db *badger.DB db *badger.DB
path string path string
} }
var _ MarkSet = (*BadgerMarkSet)(nil) var _ MarkSet = (*BadgerMarkSet)(nil)
var _ MarkSetVisitor = (*BadgerMarkSet)(nil)
var badgerMarkSetBatchSize = 16384 var badgerMarkSetBatchSize = 16384
@ -46,9 +48,241 @@ func NewBadgerMarkSetEnv(path string) (MarkSetEnv, error) {
return &BadgerMarkSetEnv{path: msPath}, nil return &BadgerMarkSetEnv{path: msPath}, nil
} }
func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) { func (e *BadgerMarkSetEnv) create(name string, sizeHint int64) (*BadgerMarkSet, error) {
name += ".tmp"
path := filepath.Join(e.path, name) path := filepath.Join(e.path, name)
db, err := openTransientBadgerDB(path)
if err != nil {
return nil, xerrors.Errorf("error creating badger db: %w", err)
}
ms := &BadgerMarkSet{
pend: make(map[string]struct{}),
writing: make(map[int]map[string]struct{}),
db: db,
path: path,
}
ms.cond.L = &ms.mx
return ms, nil
}
func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
return e.create(name, sizeHint)
}
func (e *BadgerMarkSetEnv) CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) {
return e.create(name, sizeHint)
}
func (e *BadgerMarkSetEnv) SupportsVisitor() bool { return true }
func (e *BadgerMarkSetEnv) Close() error {
return os.RemoveAll(e.path)
}
func (s *BadgerMarkSet) Mark(c cid.Cid) error {
s.mx.Lock()
if s.pend == nil {
s.mx.Unlock()
return errMarkSetClosed
}
write, seqno := s.put(string(c.Hash()))
s.mx.Unlock()
if write {
return s.write(seqno)
}
return nil
}
func (s *BadgerMarkSet) Has(c cid.Cid) (bool, error) {
s.mx.RLock()
defer s.mx.RUnlock()
key := c.Hash()
pendKey := string(key)
has, err := s.tryPending(pendKey)
if has || err != nil {
return has, err
}
return s.tryDB(key)
}
func (s *BadgerMarkSet) Visit(c cid.Cid) (bool, error) {
key := c.Hash()
pendKey := string(key)
s.mx.RLock()
has, err := s.tryPending(pendKey)
if has || err != nil {
s.mx.RUnlock()
return false, err
}
has, err = s.tryDB(key)
if has || err != nil {
s.mx.RUnlock()
return false, err
}
// we need to upgrade the lock to exclusive in order to write; take the version count to see
// if there was another write while we were upgrading
version := s.version
s.mx.RUnlock()
s.mx.Lock()
// we have to do the check dance again
has, err = s.tryPending(pendKey)
if has || err != nil {
s.mx.Unlock()
return false, err
}
if version != s.version {
// something was written to the db, we need to check it
has, err = s.tryDB(key)
if has || err != nil {
s.mx.Unlock()
return false, err
}
}
write, seqno := s.put(pendKey)
s.mx.Unlock()
if write {
err = s.write(seqno)
}
return true, err
}
// reader holds the (r)lock
func (s *BadgerMarkSet) tryPending(key string) (has bool, err error) {
if s.pend == nil {
return false, errMarkSetClosed
}
if _, ok := s.pend[key]; ok {
return true, nil
}
for _, wr := range s.writing {
if _, ok := wr[key]; ok {
return true, nil
}
}
return false, nil
}
func (s *BadgerMarkSet) tryDB(key []byte) (has bool, err error) {
err = s.db.View(func(txn *badger.Txn) error {
_, err := txn.Get(key)
return err
})
switch err {
case nil:
return true, nil
case badger.ErrKeyNotFound:
return false, nil
default:
return false, err
}
}
// writer holds the exclusive lock
func (s *BadgerMarkSet) put(key string) (write bool, seqno int) {
s.pend[key] = struct{}{}
if len(s.pend) < badgerMarkSetBatchSize {
return false, 0
}
seqno = s.seqno
s.seqno++
s.writing[seqno] = s.pend
s.pend = make(map[string]struct{})
return true, seqno
}
func (s *BadgerMarkSet) write(seqno int) (err error) {
s.mx.Lock()
if s.pend == nil {
s.mx.Unlock()
return errMarkSetClosed
}
pend := s.writing[seqno]
s.writers++
s.mx.Unlock()
defer func() {
s.mx.Lock()
defer s.mx.Unlock()
if err == nil {
delete(s.writing, seqno)
s.version++
}
s.writers--
if s.writers == 0 {
s.cond.Broadcast()
}
}()
empty := []byte{} // not nil
batch := s.db.NewWriteBatch()
defer batch.Cancel()
for k := range pend {
if err = batch.Set([]byte(k), empty); err != nil {
return xerrors.Errorf("error setting batch: %w", err)
}
}
err = batch.Flush()
if err != nil {
return xerrors.Errorf("error flushing batch to badger markset: %w", err)
}
return nil
}
func (s *BadgerMarkSet) Close() error {
s.mx.Lock()
defer s.mx.Unlock()
if s.pend == nil {
return nil
}
for s.writers > 0 {
s.cond.Wait()
}
s.pend = nil
db := s.db
s.db = nil
return closeTransientBadgerDB(db, s.path)
}
func (s *BadgerMarkSet) SetConcurrent() {}
func openTransientBadgerDB(path string) (*badger.DB, error) {
// clean up first // clean up first
err := os.RemoveAll(path) err := os.RemoveAll(path)
if err != nil { if err != nil {
@ -76,140 +310,16 @@ func (e *BadgerMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error)
skip2: log.Desugar().WithOptions(zap.AddCallerSkip(2)).Sugar(), skip2: log.Desugar().WithOptions(zap.AddCallerSkip(2)).Sugar(),
} }
db, err := badger.Open(opts) return badger.Open(opts)
if err != nil {
return nil, xerrors.Errorf("error creating badger markset: %w", err)
}
ms := &BadgerMarkSet{
pend: make(map[string]struct{}),
writing: make(map[int]map[string]struct{}),
db: db,
path: path,
}
ms.cond.L = &ms.mx
return ms, nil
} }
func (e *BadgerMarkSetEnv) Close() error { func closeTransientBadgerDB(db *badger.DB, path string) error {
return os.RemoveAll(e.path)
}
func (s *BadgerMarkSet) Mark(c cid.Cid) error {
s.mx.Lock()
if s.pend == nil {
s.mx.Unlock()
return errMarkSetClosed
}
s.pend[string(c.Hash())] = struct{}{}
if len(s.pend) < badgerMarkSetBatchSize {
s.mx.Unlock()
return nil
}
pend := s.pend
seqno := s.seqno
s.seqno++
s.writing[seqno] = pend
s.pend = make(map[string]struct{})
s.writers++
s.mx.Unlock()
defer func() {
s.mx.Lock()
defer s.mx.Unlock()
delete(s.writing, seqno)
s.writers--
if s.writers == 0 {
s.cond.Broadcast()
}
}()
empty := []byte{} // not nil
batch := s.db.NewWriteBatch()
defer batch.Cancel()
for k := range pend {
if err := batch.Set([]byte(k), empty); err != nil {
return err
}
}
err := batch.Flush()
if err != nil {
return xerrors.Errorf("error flushing batch to badger markset: %w", err)
}
return nil
}
func (s *BadgerMarkSet) Has(c cid.Cid) (bool, error) {
s.mx.RLock()
defer s.mx.RUnlock()
if s.pend == nil {
return false, errMarkSetClosed
}
key := c.Hash()
pendKey := string(key)
_, ok := s.pend[pendKey]
if ok {
return true, nil
}
for _, wr := range s.writing {
_, ok := wr[pendKey]
if ok {
return true, nil
}
}
err := s.db.View(func(txn *badger.Txn) error {
_, err := txn.Get(key)
return err
})
switch err {
case nil:
return true, nil
case badger.ErrKeyNotFound:
return false, nil
default:
return false, xerrors.Errorf("error checking badger markset: %w", err)
}
}
func (s *BadgerMarkSet) Close() error {
s.mx.Lock()
defer s.mx.Unlock()
if s.pend == nil {
return nil
}
for s.writers > 0 {
s.cond.Wait()
}
s.pend = nil
db := s.db
s.db = nil
err := db.Close() err := db.Close()
if err != nil { if err != nil {
return xerrors.Errorf("error closing badger markset: %w", err) return xerrors.Errorf("error closing badger markset: %w", err)
} }
err = os.RemoveAll(s.path) err = os.RemoveAll(path)
if err != nil { if err != nil {
return xerrors.Errorf("error deleting badger markset: %w", err) return xerrors.Errorf("error deleting badger markset: %w", err)
} }
@ -217,8 +327,6 @@ func (s *BadgerMarkSet) Close() error {
return nil return nil
} }
func (s *BadgerMarkSet) SetConcurrent() {}
// badger logging through go-log // badger logging through go-log
type badgerLogger struct { type badgerLogger struct {
*zap.SugaredLogger *zap.SugaredLogger

View File

@ -1,107 +0,0 @@
package splitstore
import (
"crypto/rand"
"crypto/sha256"
"sync"
"golang.org/x/xerrors"
bbloom "github.com/ipfs/bbloom"
cid "github.com/ipfs/go-cid"
)
const (
BloomFilterMinSize = 10_000_000
BloomFilterProbability = 0.01
)
type BloomMarkSetEnv struct{}
var _ MarkSetEnv = (*BloomMarkSetEnv)(nil)
type BloomMarkSet struct {
salt []byte
mx sync.RWMutex
bf *bbloom.Bloom
ts bool
}
var _ MarkSet = (*BloomMarkSet)(nil)
func NewBloomMarkSetEnv() (*BloomMarkSetEnv, error) {
return &BloomMarkSetEnv{}, nil
}
func (e *BloomMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
size := int64(BloomFilterMinSize)
for size < sizeHint {
size += BloomFilterMinSize
}
salt := make([]byte, 4)
_, err := rand.Read(salt)
if err != nil {
return nil, xerrors.Errorf("error reading salt: %w", err)
}
bf, err := bbloom.New(float64(size), BloomFilterProbability)
if err != nil {
return nil, xerrors.Errorf("error creating bloom filter: %w", err)
}
return &BloomMarkSet{salt: salt, bf: bf}, nil
}
func (e *BloomMarkSetEnv) Close() error {
return nil
}
func (s *BloomMarkSet) saltedKey(cid cid.Cid) []byte {
hash := cid.Hash()
key := make([]byte, len(s.salt)+len(hash))
n := copy(key, s.salt)
copy(key[n:], hash)
rehash := sha256.Sum256(key)
return rehash[:]
}
func (s *BloomMarkSet) Mark(cid cid.Cid) error {
if s.ts {
s.mx.Lock()
defer s.mx.Unlock()
}
if s.bf == nil {
return errMarkSetClosed
}
s.bf.Add(s.saltedKey(cid))
return nil
}
func (s *BloomMarkSet) Has(cid cid.Cid) (bool, error) {
if s.ts {
s.mx.RLock()
defer s.mx.RUnlock()
}
if s.bf == nil {
return false, errMarkSetClosed
}
return s.bf.Has(s.saltedKey(cid)), nil
}
func (s *BloomMarkSet) Close() error {
if s.ts {
s.mx.Lock()
defer s.mx.Unlock()
}
s.bf = nil
return nil
}
func (s *BloomMarkSet) SetConcurrent() {
s.ts = true
}

View File

@ -18,17 +18,28 @@ type MapMarkSet struct {
} }
var _ MarkSet = (*MapMarkSet)(nil) var _ MarkSet = (*MapMarkSet)(nil)
var _ MarkSetVisitor = (*MapMarkSet)(nil)
func NewMapMarkSetEnv() (*MapMarkSetEnv, error) { func NewMapMarkSetEnv() (*MapMarkSetEnv, error) {
return &MapMarkSetEnv{}, nil return &MapMarkSetEnv{}, nil
} }
func (e *MapMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) { func (e *MapMarkSetEnv) create(name string, sizeHint int64) (*MapMarkSet, error) {
return &MapMarkSet{ return &MapMarkSet{
set: make(map[string]struct{}, sizeHint), set: make(map[string]struct{}, sizeHint),
}, nil }, nil
} }
func (e *MapMarkSetEnv) Create(name string, sizeHint int64) (MarkSet, error) {
return e.create(name, sizeHint)
}
func (e *MapMarkSetEnv) CreateVisitor(name string, sizeHint int64) (MarkSetVisitor, error) {
return e.create(name, sizeHint)
}
func (e *MapMarkSetEnv) SupportsVisitor() bool { return true }
func (e *MapMarkSetEnv) Close() error { func (e *MapMarkSetEnv) Close() error {
return nil return nil
} }
@ -61,6 +72,25 @@ func (s *MapMarkSet) Has(cid cid.Cid) (bool, error) {
return ok, nil return ok, nil
} }
func (s *MapMarkSet) Visit(c cid.Cid) (bool, error) {
if s.ts {
s.mx.Lock()
defer s.mx.Unlock()
}
if s.set == nil {
return false, errMarkSetClosed
}
key := string(c.Hash())
if _, ok := s.set[key]; ok {
return false, nil
}
s.set[key] = struct{}{}
return true, nil
}
func (s *MapMarkSet) Close() error { func (s *MapMarkSet) Close() error {
if s.ts { if s.ts {
s.mx.Lock() s.mx.Lock()

View File

@ -2,6 +2,7 @@ package splitstore
import ( import (
"io/ioutil" "io/ioutil"
"os"
"testing" "testing"
cid "github.com/ipfs/go-cid" cid "github.com/ipfs/go-cid"
@ -10,10 +11,7 @@ import (
func TestMapMarkSet(t *testing.T) { func TestMapMarkSet(t *testing.T) {
testMarkSet(t, "map") testMarkSet(t, "map")
} testMarkSetVisitor(t, "map")
func TestBloomMarkSet(t *testing.T) {
testMarkSet(t, "bloom")
} }
func TestBadgerMarkSet(t *testing.T) { func TestBadgerMarkSet(t *testing.T) {
@ -23,16 +21,21 @@ func TestBadgerMarkSet(t *testing.T) {
badgerMarkSetBatchSize = bs badgerMarkSetBatchSize = bs
}) })
testMarkSet(t, "badger") testMarkSet(t, "badger")
testMarkSetVisitor(t, "badger")
} }
func testMarkSet(t *testing.T, lsType string) { func testMarkSet(t *testing.T, lsType string) {
t.Helper() t.Helper()
path, err := ioutil.TempDir("", "sweep-test.*") path, err := ioutil.TempDir("", "markset.*")
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType) env, err := OpenMarkSetEnv(path, lsType)
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
@ -145,3 +148,74 @@ func testMarkSet(t *testing.T, lsType string) {
t.Fatal(err) t.Fatal(err)
} }
} }
func testMarkSetVisitor(t *testing.T, lsType string) {
t.Helper()
path, err := ioutil.TempDir("", "markset.*")
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
_ = os.RemoveAll(path)
})
env, err := OpenMarkSetEnv(path, lsType)
if err != nil {
t.Fatal(err)
}
defer env.Close() //nolint:errcheck
visitor, err := env.CreateVisitor("test", 0)
if err != nil {
t.Fatal(err)
}
defer visitor.Close() //nolint:errcheck
makeCid := func(key string) cid.Cid {
h, err := multihash.Sum([]byte(key), multihash.SHA2_256, -1)
if err != nil {
t.Fatal(err)
}
return cid.NewCidV1(cid.Raw, h)
}
mustVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if !visit {
t.Fatal("object should be visited")
}
}
mustNotVisit := func(v ObjectVisitor, cid cid.Cid) {
visit, err := v.Visit(cid)
if err != nil {
t.Fatal(err)
}
if visit {
t.Fatal("unexpected visit")
}
}
k1 := makeCid("a")
k2 := makeCid("b")
k3 := makeCid("c")
k4 := makeCid("d")
mustVisit(visitor, k1)
mustVisit(visitor, k2)
mustVisit(visitor, k3)
mustVisit(visitor, k4)
mustNotVisit(visitor, k1)
mustNotVisit(visitor, k2)
mustNotVisit(visitor, k3)
mustNotVisit(visitor, k4)
}

View File

@ -142,7 +142,6 @@ type SplitStore struct {
txnViews int txnViews int
txnViewsWaiting bool txnViewsWaiting bool
txnActive bool txnActive bool
txnProtect MarkSet
txnRefsMx sync.Mutex txnRefsMx sync.Mutex
txnRefs map[cid.Cid]struct{} txnRefs map[cid.Cid]struct{}
txnMissing map[cid.Cid]struct{} txnMissing map[cid.Cid]struct{}
@ -174,6 +173,10 @@ func Open(path string, ds dstore.Datastore, hot, cold bstore.Blockstore, cfg *Co
return nil, err return nil, err
} }
if !markSetEnv.SupportsVisitor() {
return nil, xerrors.Errorf("markset type does not support atomic visitors")
}
// and now we can make a SplitStore // and now we can make a SplitStore
ss := &SplitStore{ ss := &SplitStore{
cfg: cfg, cfg: cfg,

View File

@ -83,7 +83,14 @@ func (s *SplitStore) doCheck(curTs *types.TipSet) error {
write("--") write("--")
var coldCnt, missingCnt int64 var coldCnt, missingCnt int64
err = s.walkChain(curTs, boundaryEpoch, boundaryEpoch,
visitor, err := s.markSetEnv.CreateVisitor("check", 0)
if err != nil {
return xerrors.Errorf("error creating visitor: %w", err)
}
defer visitor.Close() //nolint
err = s.walkChain(curTs, boundaryEpoch, boundaryEpoch, visitor,
func(c cid.Cid) error { func(c cid.Cid) error {
if isUnitaryObject(c) { if isUnitaryObject(c) {
return errStopWalk return errStopWalk

View File

@ -211,7 +211,7 @@ func (s *SplitStore) trackTxnRefMany(cids []cid.Cid) {
} }
// protect all pending transactional references // protect all pending transactional references
func (s *SplitStore) protectTxnRefs(markSet MarkSet) error { func (s *SplitStore) protectTxnRefs(markSet MarkSetVisitor) error {
for { for {
var txnRefs map[cid.Cid]struct{} var txnRefs map[cid.Cid]struct{}
@ -283,30 +283,29 @@ func (s *SplitStore) protectTxnRefs(markSet MarkSet) error {
// transactionally protect a reference by walking the object and marking. // transactionally protect a reference by walking the object and marking.
// concurrent markings are short circuited by checking the markset. // concurrent markings are short circuited by checking the markset.
func (s *SplitStore) doTxnProtect(root cid.Cid, markSet MarkSet) error { func (s *SplitStore) doTxnProtect(root cid.Cid, markSet MarkSetVisitor) error {
if err := s.checkClosing(); err != nil { if err := s.checkClosing(); err != nil {
return err return err
} }
// Note: cold objects are deleted heaviest first, so the constituents of an object // Note: cold objects are deleted heaviest first, so the constituents of an object
// cannot be deleted before the object itself. // cannot be deleted before the object itself.
return s.walkObjectIncomplete(root, cid.NewSet(), return s.walkObjectIncomplete(root, tmpVisitor(),
func(c cid.Cid) error { func(c cid.Cid) error {
if isUnitaryObject(c) { if isUnitaryObject(c) {
return errStopWalk return errStopWalk
} }
mark, err := markSet.Has(c) visit, err := markSet.Visit(c)
if err != nil { if err != nil {
return xerrors.Errorf("error checking markset: %w", err) return xerrors.Errorf("error visiting object: %w", err)
} }
// it's marked, nothing to do if !visit {
if mark {
return errStopWalk return errStopWalk
} }
return markSet.Mark(c) return nil
}, },
func(c cid.Cid) error { func(c cid.Cid) error {
if s.txnMissing != nil { if s.txnMissing != nil {
@ -382,7 +381,7 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
log.Infow("running compaction", "currentEpoch", currentEpoch, "baseEpoch", s.baseEpoch, "boundaryEpoch", boundaryEpoch, "inclMsgsEpoch", inclMsgsEpoch, "compactionIndex", s.compactionIndex) log.Infow("running compaction", "currentEpoch", currentEpoch, "baseEpoch", s.baseEpoch, "boundaryEpoch", boundaryEpoch, "inclMsgsEpoch", inclMsgsEpoch, "compactionIndex", s.compactionIndex)
markSet, err := s.markSetEnv.Create("live", s.markSetSize) markSet, err := s.markSetEnv.CreateVisitor("live", s.markSetSize)
if err != nil { if err != nil {
return xerrors.Errorf("error creating mark set: %w", err) return xerrors.Errorf("error creating mark set: %w", err)
} }
@ -410,14 +409,23 @@ func (s *SplitStore) doCompact(curTs *types.TipSet) error {
startMark := time.Now() startMark := time.Now()
var count int64 var count int64
err = s.walkChain(curTs, boundaryEpoch, inclMsgsEpoch, err = s.walkChain(curTs, boundaryEpoch, inclMsgsEpoch, &noopVisitor{},
func(c cid.Cid) error { func(c cid.Cid) error {
if isUnitaryObject(c) { if isUnitaryObject(c) {
return errStopWalk return errStopWalk
} }
visit, err := markSet.Visit(c)
if err != nil {
return xerrors.Errorf("error visiting object: %w", err)
}
if !visit {
return errStopWalk
}
count++ count++
return markSet.Mark(c) return nil
}) })
if err != nil { if err != nil {
@ -578,12 +586,8 @@ func (s *SplitStore) beginTxnProtect() {
s.txnMissing = make(map[cid.Cid]struct{}) s.txnMissing = make(map[cid.Cid]struct{})
} }
func (s *SplitStore) beginTxnMarking(markSet MarkSet) { func (s *SplitStore) beginTxnMarking(markSet MarkSetVisitor) {
markSet.SetConcurrent() markSet.SetConcurrent()
s.txnLk.Lock()
s.txnProtect = markSet
s.txnLk.Unlock()
} }
func (s *SplitStore) endTxnProtect() { func (s *SplitStore) endTxnProtect() {
@ -594,21 +598,14 @@ func (s *SplitStore) endTxnProtect() {
return return
} }
// release markset memory
if s.txnProtect != nil {
_ = s.txnProtect.Close()
}
s.txnActive = false s.txnActive = false
s.txnProtect = nil
s.txnRefs = nil s.txnRefs = nil
s.txnMissing = nil s.txnMissing = nil
} }
func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEpoch, func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEpoch,
f func(cid.Cid) error) error { visitor ObjectVisitor, f func(cid.Cid) error) error {
visited := cid.NewSet() var walked *cid.Set
walked := cid.NewSet()
toWalk := ts.Cids() toWalk := ts.Cids()
walkCnt := 0 walkCnt := 0
scanCnt := 0 scanCnt := 0
@ -616,7 +613,7 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
stopWalk := func(_ cid.Cid) error { return errStopWalk } stopWalk := func(_ cid.Cid) error { return errStopWalk }
walkBlock := func(c cid.Cid) error { walkBlock := func(c cid.Cid) error {
if !visited.Visit(c) { if !walked.Visit(c) {
return nil return nil
} }
@ -640,19 +637,19 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
if inclMsgs < inclState { if inclMsgs < inclState {
// we need to use walkObjectIncomplete here, as messages/receipts may be missing early on if we // we need to use walkObjectIncomplete here, as messages/receipts may be missing early on if we
// synced from snapshot and have a long HotStoreMessageRetentionPolicy. // synced from snapshot and have a long HotStoreMessageRetentionPolicy.
if err := s.walkObjectIncomplete(hdr.Messages, walked, f, stopWalk); err != nil { if err := s.walkObjectIncomplete(hdr.Messages, visitor, f, stopWalk); err != nil {
return xerrors.Errorf("error walking messages (cid: %s): %w", hdr.Messages, err) return xerrors.Errorf("error walking messages (cid: %s): %w", hdr.Messages, err)
} }
if err := s.walkObjectIncomplete(hdr.ParentMessageReceipts, walked, f, stopWalk); err != nil { if err := s.walkObjectIncomplete(hdr.ParentMessageReceipts, visitor, f, stopWalk); err != nil {
return xerrors.Errorf("error walking messages receipts (cid: %s): %w", hdr.ParentMessageReceipts, err) return xerrors.Errorf("error walking messages receipts (cid: %s): %w", hdr.ParentMessageReceipts, err)
} }
} else { } else {
if err := s.walkObject(hdr.Messages, walked, f); err != nil { if err := s.walkObject(hdr.Messages, visitor, f); err != nil {
return xerrors.Errorf("error walking messages (cid: %s): %w", hdr.Messages, err) return xerrors.Errorf("error walking messages (cid: %s): %w", hdr.Messages, err)
} }
if err := s.walkObject(hdr.ParentMessageReceipts, walked, f); err != nil { if err := s.walkObject(hdr.ParentMessageReceipts, visitor, f); err != nil {
return xerrors.Errorf("error walking message receipts (cid: %s): %w", hdr.ParentMessageReceipts, err) return xerrors.Errorf("error walking message receipts (cid: %s): %w", hdr.ParentMessageReceipts, err)
} }
} }
@ -660,7 +657,7 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
// state is only retained if within the inclState boundary, with the exception of genesis // state is only retained if within the inclState boundary, with the exception of genesis
if hdr.Height >= inclState || hdr.Height == 0 { if hdr.Height >= inclState || hdr.Height == 0 {
if err := s.walkObject(hdr.ParentStateRoot, walked, f); err != nil { if err := s.walkObject(hdr.ParentStateRoot, visitor, f); err != nil {
return xerrors.Errorf("error walking state root (cid: %s): %w", hdr.ParentStateRoot, err) return xerrors.Errorf("error walking state root (cid: %s): %w", hdr.ParentStateRoot, err)
} }
scanCnt++ scanCnt++
@ -679,6 +676,10 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
return err return err
} }
// the walk is BFS, so we can reset the walked set in every iteration and avoid building up
// a set that contains all blocks (1M epochs -> 5M blocks -> 200MB worth of memory and growing
// over time)
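// (Illustrative arithmetic, not part of this commit: roughly 1,000,000 epochs x ~5 blocks
// per epoch = ~5,000,000 header CIDs; at ~40 bytes per multihash key plus set overhead
// that is on the order of 200MB held for the whole walk, whereas resetting per BFS round
// keeps only one round's worth of headers in memory.)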
walked = cid.NewSet()
walking := toWalk walking := toWalk
toWalk = nil toWalk = nil
for _, c := range walking { for _, c := range walking {
@ -693,8 +694,13 @@ func (s *SplitStore) walkChain(ts *types.TipSet, inclState, inclMsgs abi.ChainEp
return nil return nil
} }
func (s *SplitStore) walkObject(c cid.Cid, walked *cid.Set, f func(cid.Cid) error) error { func (s *SplitStore) walkObject(c cid.Cid, visitor ObjectVisitor, f func(cid.Cid) error) error {
if !walked.Visit(c) { visit, err := visitor.Visit(c)
if err != nil {
return xerrors.Errorf("error visiting object: %w", err)
}
if !visit {
return nil return nil
} }
@ -716,7 +722,7 @@ func (s *SplitStore) walkObject(c cid.Cid, walked *cid.Set, f func(cid.Cid) erro
} }
var links []cid.Cid var links []cid.Cid
err := s.view(c, func(data []byte) error { err = s.view(c, func(data []byte) error {
return cbg.ScanForLinks(bytes.NewReader(data), func(c cid.Cid) { return cbg.ScanForLinks(bytes.NewReader(data), func(c cid.Cid) {
links = append(links, c) links = append(links, c)
}) })
@ -727,7 +733,7 @@ func (s *SplitStore) walkObject(c cid.Cid, walked *cid.Set, f func(cid.Cid) erro
} }
for _, c := range links { for _, c := range links {
err := s.walkObject(c, walked, f) err := s.walkObject(c, visitor, f)
if err != nil { if err != nil {
return xerrors.Errorf("error walking link (cid: %s): %w", c, err) return xerrors.Errorf("error walking link (cid: %s): %w", c, err)
} }
@ -737,8 +743,13 @@ func (s *SplitStore) walkObject(c cid.Cid, walked *cid.Set, f func(cid.Cid) erro
} }
// like walkObject, but the object may be potentially incomplete (references missing) // like walkObject, but the object may be potentially incomplete (references missing)
func (s *SplitStore) walkObjectIncomplete(c cid.Cid, walked *cid.Set, f, missing func(cid.Cid) error) error { func (s *SplitStore) walkObjectIncomplete(c cid.Cid, visitor ObjectVisitor, f, missing func(cid.Cid) error) error {
if !walked.Visit(c) { visit, err := visitor.Visit(c)
if err != nil {
return xerrors.Errorf("error visiting object: %w", err)
}
if !visit {
return nil return nil
} }
@ -777,7 +788,7 @@ func (s *SplitStore) walkObjectIncomplete(c cid.Cid, walked *cid.Set, f, missing
} }
var links []cid.Cid var links []cid.Cid
err := s.view(c, func(data []byte) error { err = s.view(c, func(data []byte) error {
return cbg.ScanForLinks(bytes.NewReader(data), func(c cid.Cid) { return cbg.ScanForLinks(bytes.NewReader(data), func(c cid.Cid) {
links = append(links, c) links = append(links, c)
}) })
@ -788,7 +799,7 @@ func (s *SplitStore) walkObjectIncomplete(c cid.Cid, walked *cid.Set, f, missing
} }
for _, c := range links { for _, c := range links {
err := s.walkObjectIncomplete(c, walked, f, missing) err := s.walkObjectIncomplete(c, visitor, f, missing)
if err != nil { if err != nil {
return xerrors.Errorf("error walking link (cid: %s): %w", c, err) return xerrors.Errorf("error walking link (cid: %s): %w", c, err)
} }
@ -984,7 +995,7 @@ func (s *SplitStore) purgeBatch(cids []cid.Cid, deleteBatch func([]cid.Cid) erro
return nil return nil
} }
func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSet) error { func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSetVisitor) error {
deadCids := make([]cid.Cid, 0, batchSize) deadCids := make([]cid.Cid, 0, batchSize)
var purgeCnt, liveCnt int var purgeCnt, liveCnt int
defer func() { defer func() {
@ -1050,7 +1061,7 @@ func (s *SplitStore) purge(cids []cid.Cid, markSet MarkSet) error {
// have this gem[TM]. // have this gem[TM].
// My best guess is that they are parent message receipts or yet to be computed state roots; magik // My best guess is that they are parent message receipts or yet to be computed state roots; magik
// thinks the cause may be block validation. // thinks the cause may be block validation.
func (s *SplitStore) waitForMissingRefs(markSet MarkSet) { func (s *SplitStore) waitForMissingRefs(markSet MarkSetVisitor) {
s.txnLk.Lock() s.txnLk.Lock()
missing := s.txnMissing missing := s.txnMissing
s.txnMissing = nil s.txnMissing = nil
@ -1079,27 +1090,27 @@ func (s *SplitStore) waitForMissingRefs(markSet MarkSet) {
} }
towalk := missing towalk := missing
walked := cid.NewSet() visitor := tmpVisitor()
missing = make(map[cid.Cid]struct{}) missing = make(map[cid.Cid]struct{})
for c := range towalk { for c := range towalk {
err := s.walkObjectIncomplete(c, walked, err := s.walkObjectIncomplete(c, visitor,
func(c cid.Cid) error { func(c cid.Cid) error {
if isUnitaryObject(c) { if isUnitaryObject(c) {
return errStopWalk return errStopWalk
} }
mark, err := markSet.Has(c) visit, err := markSet.Visit(c)
if err != nil { if err != nil {
return xerrors.Errorf("error checking markset for %s: %w", c, err) return xerrors.Errorf("error visiting object: %w", err)
} }
if mark { if !visit {
return errStopWalk return errStopWalk
} }
count++ count++
return markSet.Mark(c) return nil
}, },
func(c cid.Cid) error { func(c cid.Cid) error {
missing[c] = struct{}{} missing[c] = struct{}{}

View File

@ -59,7 +59,15 @@ func (s *SplitStore) doWarmup(curTs *types.TipSet) error {
count := int64(0) count := int64(0)
xcount := int64(0) xcount := int64(0)
missing := int64(0) missing := int64(0)
err := s.walkChain(curTs, boundaryEpoch, epoch+1, // we don't load messages/receipts in warmup
visitor, err := s.markSetEnv.CreateVisitor("warmup", 0)
if err != nil {
return xerrors.Errorf("error creating visitor: %w", err)
}
defer visitor.Close() //nolint
err = s.walkChain(curTs, boundaryEpoch, epoch+1, // we don't load messages/receipts in warmup
visitor,
func(c cid.Cid) error { func(c cid.Cid) error {
if isUnitaryObject(c) { if isUnitaryObject(c) {
return errStopWalk return errStopWalk

View File

@ -0,0 +1,32 @@
package splitstore
import (
cid "github.com/ipfs/go-cid"
)
// ObjectVisitor is an interface for deduplicating objects during walks
type ObjectVisitor interface {
Visit(cid.Cid) (bool, error)
}
type noopVisitor struct{}
var _ ObjectVisitor = (*noopVisitor)(nil)
func (v *noopVisitor) Visit(_ cid.Cid) (bool, error) {
return true, nil
}
type cidSetVisitor struct {
set *cid.Set
}
var _ ObjectVisitor = (*cidSetVisitor)(nil)
func (v *cidSetVisitor) Visit(c cid.Cid) (bool, error) {
return v.set.Visit(c), nil
}
func tmpVisitor() ObjectVisitor {
return &cidSetVisitor{set: cid.NewSet()}
}
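Illustrative sketch (not part of this commit): how the two visitors divide the work in the walks above; here s is an in-scope *SplitStore, root is a placeholder cid.Cid, and the callback body is illustrative.
// With tmpVisitor the walk itself deduplicates through a throwaway CID set, as
// doTxnProtect and waitForMissingRefs do. doCompact passes &noopVisitor{} to
// walkChain instead, because its callback already deduplicates via markSet.Visit.
err := s.walkObject(root, tmpVisitor(), func(c cid.Cid) error {
	// with tmpVisitor this callback runs at most once per CID in this walk
	return nil
})
if err != nil {
	return xerrors.Errorf("error walking object: %w", err)
}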

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -46,6 +46,8 @@ func init() {
SetAddressNetwork(address.Testnet) SetAddressNetwork(address.Testnet)
Devnet = true Devnet = true
BuildType = BuildButterflynet
} }
const BlockDelaySecs = uint64(builtin2.EpochDurationSeconds) const BlockDelaySecs = uint64(builtin2.EpochDurationSeconds)

View File

@ -66,6 +66,8 @@ func init() {
//miner.WPoStChallengeLookback = abi.ChainEpoch(2) //miner.WPoStChallengeLookback = abi.ChainEpoch(2)
Devnet = false Devnet = false
BuildType = BuildNerpanet
} }
const BlockDelaySecs = uint64(builtin2.EpochDurationSeconds) const BlockDelaySecs = uint64(builtin2.EpochDurationSeconds)

View File

@ -26,7 +26,6 @@ const UnixfsLinksPerLevel = 1024
const AllowableClockDriftSecs = uint64(1) const AllowableClockDriftSecs = uint64(1)
const NewestNetworkVersion = network.Version13 const NewestNetworkVersion = network.Version13
const ActorUpgradeNetworkVersion = network.Version4
// Epochs // Epochs
const ForkLengthThreshold = Finality const ForkLengthThreshold = Finality

View File

@ -6,12 +6,14 @@ var CurrentCommit string
var BuildType int var BuildType int
const ( const (
BuildDefault = 0 BuildDefault = 0
BuildMainnet = 0x1 BuildMainnet = 0x1
Build2k = 0x2 Build2k = 0x2
BuildDebug = 0x3 BuildDebug = 0x3
BuildCalibnet = 0x4 BuildCalibnet = 0x4
BuildInteropnet = 0x5 BuildInteropnet = 0x5
BuildNerpanet = 0x6
BuildButterflynet = 0x7
) )
func buildType() string { func buildType() string {
@ -28,13 +30,17 @@ func buildType() string {
return "+calibnet" return "+calibnet"
case BuildInteropnet: case BuildInteropnet:
return "+interopnet" return "+interopnet"
case BuildNerpanet:
return "+nerpanet"
case BuildButterflynet:
return "+butterflynet"
default: default:
return "+huh?" return "+huh?"
} }
} }
// BuildVersion is the local build version // BuildVersion is the local build version
const BuildVersion = "1.11.1" const BuildVersion = "1.11.2-dev"
func UserVersion() string { func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" { if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {

View File

@ -38,7 +38,7 @@ func init() {
var Methods = builtin{{.latestVersion}}.MethodsMiner var Methods = builtin{{.latestVersion}}.MethodsMiner
// Unchanged between v0, v2, v3, and v4 actors // Unchanged between v0, v2, v3, v4, and v5 actors
var WPoStProvingPeriod = miner0.WPoStProvingPeriod var WPoStProvingPeriod = miner0.WPoStProvingPeriod
var WPoStPeriodDeadlines = miner0.WPoStPeriodDeadlines var WPoStPeriodDeadlines = miner0.WPoStPeriodDeadlines
var WPoStChallengeWindow = miner0.WPoStChallengeWindow var WPoStChallengeWindow = miner0.WPoStChallengeWindow

View File

@ -61,7 +61,7 @@ func init() {
var Methods = builtin5.MethodsMiner var Methods = builtin5.MethodsMiner
// Unchanged between v0, v2, v3, and v4 actors // Unchanged between v0, v2, v3, v4, and v5 actors
var WPoStProvingPeriod = miner0.WPoStProvingPeriod var WPoStProvingPeriod = miner0.WPoStProvingPeriod
var WPoStPeriodDeadlines = miner0.WPoStPeriodDeadlines var WPoStPeriodDeadlines = miner0.WPoStPeriodDeadlines
var WPoStChallengeWindow = miner0.WPoStChallengeWindow var WPoStChallengeWindow = miner0.WPoStChallengeWindow

View File

@ -479,12 +479,15 @@ func (s *state{{.v}}) EraseAllUnproven() error {
return dls.UpdateDeadline(s.store, dindx, dl) return dls.UpdateDeadline(s.store, dindx, dl)
}) })
if err != nil {
return err
}
return s.State.SaveDeadlines(s.store, dls) return s.State.SaveDeadlines(s.store, dls)
{{else}} {{else}}
// field doesn't exist until v2 // field doesn't exist until v2
return nil
{{end}} {{end}}
return nil
} }
func (d *deadline{{.v}}) LoadPartition(idx uint64) (Partition, error) { func (d *deadline{{.v}}) LoadPartition(idx uint64) (Partition, error) {

View File

@ -444,8 +444,8 @@ func (s *state0) decodeSectorPreCommitOnChainInfo(val *cbg.Deferred) (SectorPreC
func (s *state0) EraseAllUnproven() error { func (s *state0) EraseAllUnproven() error {
// field doesn't exist until v2 // field doesn't exist until v2
return nil return nil
} }
func (d *deadline0) LoadPartition(idx uint64) (Partition, error) { func (d *deadline0) LoadPartition(idx uint64) (Partition, error) {

View File

@ -470,10 +470,12 @@ func (s *state2) EraseAllUnproven() error {
return dls.UpdateDeadline(s.store, dindx, dl) return dls.UpdateDeadline(s.store, dindx, dl)
}) })
if err != nil {
return err
}
return s.State.SaveDeadlines(s.store, dls) return s.State.SaveDeadlines(s.store, dls)
return nil
} }
func (d *deadline2) LoadPartition(idx uint64) (Partition, error) { func (d *deadline2) LoadPartition(idx uint64) (Partition, error) {

View File

@ -467,10 +467,12 @@ func (s *state3) EraseAllUnproven() error {
return dls.UpdateDeadline(s.store, dindx, dl) return dls.UpdateDeadline(s.store, dindx, dl)
}) })
if err != nil {
return err
}
return s.State.SaveDeadlines(s.store, dls) return s.State.SaveDeadlines(s.store, dls)
return nil
} }
func (d *deadline3) LoadPartition(idx uint64) (Partition, error) { func (d *deadline3) LoadPartition(idx uint64) (Partition, error) {

View File

@ -467,10 +467,12 @@ func (s *state4) EraseAllUnproven() error {
return dls.UpdateDeadline(s.store, dindx, dl) return dls.UpdateDeadline(s.store, dindx, dl)
}) })
if err != nil {
return err
}
return s.State.SaveDeadlines(s.store, dls) return s.State.SaveDeadlines(s.store, dls)
return nil
} }
func (d *deadline4) LoadPartition(idx uint64) (Partition, error) { func (d *deadline4) LoadPartition(idx uint64) (Partition, error) {

View File

@ -467,10 +467,12 @@ func (s *state5) EraseAllUnproven() error {
return dls.UpdateDeadline(s.store, dindx, dl) return dls.UpdateDeadline(s.store, dindx, dl)
}) })
if err != nil {
return err
}
return s.State.SaveDeadlines(s.store, dls) return s.State.SaveDeadlines(s.store, dls)
return nil
} }
func (d *deadline5) LoadPartition(idx uint64) (Partition, error) { func (d *deadline5) LoadPartition(idx uint64) (Partition, error) {

View File

@ -315,6 +315,10 @@ func GetMaxSectorExpirationExtension() abi.ChainEpoch {
return miner5.MaxSectorExpirationExtension return miner5.MaxSectorExpirationExtension
} }
func GetMinSectorExpiration() abi.ChainEpoch {
return miner5.MinSectorExpiration
}
func GetMaxPoStPartitions(nv network.Version, p abi.RegisteredPoStProof) (int, error) { func GetMaxPoStPartitions(nv network.Version, p abi.RegisteredPoStProof) (int, error) {
sectorsPerPart, err := builtin5.PoStProofWindowPoStPartitionSectors(p) sectorsPerPart, err := builtin5.PoStProofWindowPoStPartitionSectors(p)
if err != nil { if err != nil {

View File

@ -203,6 +203,10 @@ func GetMaxSectorExpirationExtension() abi.ChainEpoch {
return miner{{.latestVersion}}.MaxSectorExpirationExtension return miner{{.latestVersion}}.MaxSectorExpirationExtension
} }
func GetMinSectorExpiration() abi.ChainEpoch {
return miner{{.latestVersion}}.MinSectorExpiration
}
func GetMaxPoStPartitions(nv network.Version, p abi.RegisteredPoStProof) (int, error) { func GetMaxPoStPartitions(nv network.Version, p abi.RegisteredPoStProof) (int, error) {
sectorsPerPart, err := builtin{{.latestVersion}}.PoStProofWindowPoStPartitionSectors(p) sectorsPerPart, err := builtin{{.latestVersion}}.PoStProofWindowPoStPartitionSectors(p)
if err != nil { if err != nil {

View File

@ -5,6 +5,7 @@ package exchange
import ( import (
"fmt" "fmt"
"io" "io"
"math"
"sort" "sort"
types "github.com/filecoin-project/lotus/chain/types" types "github.com/filecoin-project/lotus/chain/types"
@ -15,6 +16,7 @@ import (
var _ = xerrors.Errorf var _ = xerrors.Errorf
var _ = cid.Undef var _ = cid.Undef
var _ = math.E
var _ = sort.Sort var _ = sort.Sort
var lengthBufRequest = []byte{131} var lengthBufRequest = []byte{131}

View File

@ -233,7 +233,7 @@ func NewGeneratorWithSectorsAndUpgradeSchedule(numSectors int, us stmgr.UpgradeS
return nil, xerrors.Errorf("make genesis block failed: %w", err) return nil, xerrors.Errorf("make genesis block failed: %w", err)
} }
cs := store.NewChainStore(bs, bs, ds, sys, j) cs := store.NewChainStore(bs, bs, ds, j)
genfb := &types.FullBlock{Header: genb.Genesis} genfb := &types.FullBlock{Header: genb.Genesis}
gents := store.NewFullTipSet([]*types.FullBlock{genfb}) gents := store.NewFullTipSet([]*types.FullBlock{genfb})
@ -247,7 +247,7 @@ func NewGeneratorWithSectorsAndUpgradeSchedule(numSectors int, us stmgr.UpgradeS
mgen[genesis2.MinerAddress(uint64(i))] = &wppProvider{} mgen[genesis2.MinerAddress(uint64(i))] = &wppProvider{}
} }
sm, err := stmgr.NewStateManagerWithUpgradeSchedule(cs, us) sm, err := stmgr.NewStateManagerWithUpgradeSchedule(cs, sys, us)
if err != nil { if err != nil {
return nil, xerrors.Errorf("initing stmgr: %w", err) return nil, xerrors.Errorf("initing stmgr: %w", err)
} }

View File

@ -474,7 +474,7 @@ func createMultisigAccount(ctx context.Context, cst cbor.IpldStore, state *state
return nil return nil
} }
func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, stateroot cid.Cid, template genesis.Template, keyIDs map[address.Address]address.Address, nv network.Version) (cid.Cid, error) { func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, sys vm.SyscallBuilder, stateroot cid.Cid, template genesis.Template, keyIDs map[address.Address]address.Address, nv network.Version) (cid.Cid, error) {
verifNeeds := make(map[address.Address]abi.PaddedPieceSize) verifNeeds := make(map[address.Address]abi.PaddedPieceSize)
var sum abi.PaddedPieceSize var sum abi.PaddedPieceSize
@ -483,7 +483,7 @@ func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, stateroot ci
Epoch: 0, Epoch: 0,
Rand: &fakeRand{}, Rand: &fakeRand{},
Bstore: cs.StateBlockstore(), Bstore: cs.StateBlockstore(),
Syscalls: mkFakedSigSyscalls(cs.VMSys()), Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: nil, CircSupplyCalc: nil,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version { NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version {
return nv return nv
@ -562,15 +562,15 @@ func MakeGenesisBlock(ctx context.Context, j journal.Journal, bs bstore.Blocksto
} }
// temp chainstore // temp chainstore
cs := store.NewChainStore(bs, bs, datastore.NewMapDatastore(), sys, j) cs := store.NewChainStore(bs, bs, datastore.NewMapDatastore(), j)
// Verify PreSealed Data // Verify PreSealed Data
stateroot, err = VerifyPreSealedData(ctx, cs, stateroot, template, keyIDs, template.NetworkVersion) stateroot, err = VerifyPreSealedData(ctx, cs, sys, stateroot, template, keyIDs, template.NetworkVersion)
if err != nil { if err != nil {
return nil, xerrors.Errorf("failed to verify presealed data: %w", err) return nil, xerrors.Errorf("failed to verify presealed data: %w", err)
} }
stateroot, err = SetupStorageMiners(ctx, cs, stateroot, template.Miners, template.NetworkVersion) stateroot, err = SetupStorageMiners(ctx, cs, sys, stateroot, template.Miners, template.NetworkVersion)
if err != nil { if err != nil {
return nil, xerrors.Errorf("setup miners failed: %w", err) return nil, xerrors.Errorf("setup miners failed: %w", err)
} }

View File

@ -6,30 +6,6 @@ import (
"fmt" "fmt"
"math/rand" "math/rand"
power4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/power"
reward4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/reward"
market4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/policy"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/go-state-types/network"
market0 "github.com/filecoin-project/specs-actors/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor" cbor "github.com/ipfs/go-ipld-cbor"
cbg "github.com/whyrusleeping/cbor-gen" cbg "github.com/whyrusleeping/cbor-gen"
@ -39,12 +15,28 @@ import (
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big" "github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/crypto" "github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/network"
builtin0 "github.com/filecoin-project/specs-actors/actors/builtin" builtin0 "github.com/filecoin-project/specs-actors/actors/builtin"
market0 "github.com/filecoin-project/specs-actors/actors/builtin/market"
miner0 "github.com/filecoin-project/specs-actors/actors/builtin/miner" miner0 "github.com/filecoin-project/specs-actors/actors/builtin/miner"
power0 "github.com/filecoin-project/specs-actors/actors/builtin/power" power0 "github.com/filecoin-project/specs-actors/actors/builtin/power"
reward0 "github.com/filecoin-project/specs-actors/actors/builtin/reward" reward0 "github.com/filecoin-project/specs-actors/actors/builtin/reward"
market2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/market"
reward2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/reward"
market4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/market"
power4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/power"
reward4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/reward"
runtime5 "github.com/filecoin-project/specs-actors/v5/actors/runtime" runtime5 "github.com/filecoin-project/specs-actors/v5/actors/runtime"
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/actors/policy"
"github.com/filecoin-project/lotus/chain/state" "github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store" "github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types" "github.com/filecoin-project/lotus/chain/types"
@ -78,7 +70,7 @@ func mkFakedSigSyscalls(base vm.SyscallBuilder) vm.SyscallBuilder {
} }
// Note: Much of this is brittle; if the methodNum / param / return changes, it will break things // Note: Much of this is brittle; if the methodNum / param / return changes, it will break things
func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid, miners []genesis.Miner, nv network.Version) (cid.Cid, error) { func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.SyscallBuilder, sroot cid.Cid, miners []genesis.Miner, nv network.Version) (cid.Cid, error) {
cst := cbor.NewCborStore(cs.StateBlockstore()) cst := cbor.NewCborStore(cs.StateBlockstore())
av, err := actors.VersionForNetwork(nv) av, err := actors.VersionForNetwork(nv)
@ -95,7 +87,7 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid
Epoch: 0, Epoch: 0,
Rand: &fakeRand{}, Rand: &fakeRand{},
Bstore: cs.StateBlockstore(), Bstore: cs.StateBlockstore(),
Syscalls: mkFakedSigSyscalls(cs.VMSys()), Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc, CircSupplyCalc: csc,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version { NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version {
return nv return nv
@ -404,8 +396,8 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid
return cid.Undef, xerrors.Errorf("failed to confirm presealed sectors: %w", err) return cid.Undef, xerrors.Errorf("failed to confirm presealed sectors: %w", err)
} }
if av > actors.Version2 { if av >= actors.Version2 {
// post v2, we need to explicitly Claim this power since ConfirmSectorProofsValid doesn't do it anymore // post v0, we need to explicitly Claim this power since ConfirmSectorProofsValid doesn't do it anymore
claimParams := &power4.UpdateClaimedPowerParams{ claimParams := &power4.UpdateClaimedPowerParams{
RawByteDelta: types.NewInt(uint64(m.SectorSize)), RawByteDelta: types.NewInt(uint64(m.SectorSize)),
QualityAdjustedDelta: sectorWeight, QualityAdjustedDelta: sectorWeight,
@ -537,7 +529,6 @@ func dealWeight(ctx context.Context, vm *vm.VM, maddr address.Address, dealIDs [
SectorExpiry: sectorExpiry, SectorExpiry: sectorExpiry,
} }
var dealWeights market0.VerifyDealsForActivationReturn
ret, err := doExecValue(ctx, vm, ret, err := doExecValue(ctx, vm,
market.Address, market.Address,
maddr, maddr,
@ -548,11 +539,23 @@ func dealWeight(ctx context.Context, vm *vm.VM, maddr address.Address, dealIDs [
if err != nil { if err != nil {
return big.Zero(), big.Zero(), err return big.Zero(), big.Zero(), err
} }
if err := dealWeights.UnmarshalCBOR(bytes.NewReader(ret)); err != nil { var weight, verifiedWeight abi.DealWeight
if av < actors.Version2 {
var dealWeights market0.VerifyDealsForActivationReturn
err = dealWeights.UnmarshalCBOR(bytes.NewReader(ret))
weight = dealWeights.DealWeight
verifiedWeight = dealWeights.VerifiedDealWeight
} else {
var dealWeights market2.VerifyDealsForActivationReturn
err = dealWeights.UnmarshalCBOR(bytes.NewReader(ret))
weight = dealWeights.DealWeight
verifiedWeight = dealWeights.VerifiedDealWeight
}
if err != nil {
return big.Zero(), big.Zero(), err return big.Zero(), big.Zero(), err
} }
return dealWeights.DealWeight, dealWeights.VerifiedDealWeight, nil return weight, verifiedWeight, nil
} }
params := &market4.VerifyDealsForActivationParams{Sectors: []market4.SectorDeals{{ params := &market4.VerifyDealsForActivationParams{Sectors: []market4.SectorDeals{{
SectorExpiry: sectorExpiry, SectorExpiry: sectorExpiry,
@ -584,7 +587,8 @@ func currentEpochBlockReward(ctx context.Context, vm *vm.VM, maddr address.Addre
} }
// TODO: This hack should move to reward actor wrapper // TODO: This hack should move to reward actor wrapper
if av <= actors.Version2 { switch av {
case actors.Version0:
var epochReward reward0.ThisEpochRewardReturn var epochReward reward0.ThisEpochRewardReturn
if err := epochReward.UnmarshalCBOR(bytes.NewReader(rwret)); err != nil { if err := epochReward.UnmarshalCBOR(bytes.NewReader(rwret)); err != nil {
@ -592,6 +596,14 @@ func currentEpochBlockReward(ctx context.Context, vm *vm.VM, maddr address.Addre
} }
return epochReward.ThisEpochBaselinePower, *epochReward.ThisEpochRewardSmoothed, nil return epochReward.ThisEpochBaselinePower, *epochReward.ThisEpochRewardSmoothed, nil
case actors.Version2:
var epochReward reward2.ThisEpochRewardReturn
if err := epochReward.UnmarshalCBOR(bytes.NewReader(rwret)); err != nil {
return big.Zero(), builtin.FilterEstimate{}, err
}
return epochReward.ThisEpochBaselinePower, builtin.FilterEstimate(epochReward.ThisEpochRewardSmoothed), nil
} }
var epochReward reward4.ThisEpochRewardReturn var epochReward reward4.ThisEpochRewardReturn

View File

@ -5,6 +5,7 @@ package market
import ( import (
"fmt" "fmt"
"io" "io"
"math"
"sort" "sort"
cid "github.com/ipfs/go-cid" cid "github.com/ipfs/go-cid"
@ -14,6 +15,7 @@ import (
var _ = xerrors.Errorf var _ = xerrors.Errorf
var _ = cid.Undef var _ = cid.Undef
var _ = math.E
var _ = sort.Sort var _ = sort.Sort
var lengthBufFundedAddressState = []byte{131} var lengthBufFundedAddressState = []byte{131}

View File

@ -1,129 +0,0 @@
package metrics
import (
"context"
"encoding/json"
"github.com/filecoin-project/go-state-types/abi"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"go.uber.org/fx"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/node/impl/full"
"github.com/filecoin-project/lotus/node/modules/helpers"
)
var log = logging.Logger("metrics")
const baseTopic = "/fil/headnotifs/"
type Update struct {
Type string
}
func SendHeadNotifs(nickname string) func(mctx helpers.MetricsCtx, lc fx.Lifecycle, ps *pubsub.PubSub, chain full.ChainAPI) error {
return func(mctx helpers.MetricsCtx, lc fx.Lifecycle, ps *pubsub.PubSub, chain full.ChainAPI) error {
ctx := helpers.LifecycleCtx(mctx, lc)
lc.Append(fx.Hook{
OnStart: func(_ context.Context) error {
gen, err := chain.Chain.GetGenesis()
if err != nil {
return err
}
topic := baseTopic + gen.Cid().String()
go func() {
if err := sendHeadNotifs(ctx, ps, topic, chain, nickname); err != nil {
log.Error("consensus metrics error", err)
return
}
}()
go func() {
sub, err := ps.Subscribe(topic) //nolint
if err != nil {
return
}
defer sub.Cancel()
for {
if _, err := sub.Next(ctx); err != nil {
return
}
}
}()
return nil
},
})
return nil
}
}
type message struct {
// TipSet
Cids []cid.Cid
Blocks []*types.BlockHeader
Height abi.ChainEpoch
Weight types.BigInt
Time uint64
Nonce uint64
// Meta
NodeName string
}
func sendHeadNotifs(ctx context.Context, ps *pubsub.PubSub, topic string, chain full.ChainAPI, nickname string) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
notifs, err := chain.ChainNotify(ctx)
if err != nil {
return err
}
// using unix nano time makes very sure we pick a nonce higher than previous restart
nonce := uint64(build.Clock.Now().UnixNano())
for {
select {
case notif := <-notifs:
n := notif[len(notif)-1]
w, err := chain.ChainTipSetWeight(ctx, n.Val.Key())
if err != nil {
return err
}
m := message{
Cids: n.Val.Cids(),
Blocks: n.Val.Blocks(),
Height: n.Val.Height(),
Weight: w,
NodeName: nickname,
Time: uint64(build.Clock.Now().UnixNano() / 1000_000),
Nonce: nonce,
}
b, err := json.Marshal(m)
if err != nil {
return err
}
//nolint
if err := ps.Publish(topic, b); err != nil {
return err
}
case <-ctx.Done():
return nil
}
nonce++
}
}

View File

@ -273,7 +273,7 @@ func LoadStateTree(cst cbor.IpldStore, c cid.Cid) (*StateTree, error) {
} }
if err != nil { if err != nil {
log.Errorf("failed to load state tree: %s", err) log.Errorf("failed to load state tree: %s", err)
return nil, xerrors.Errorf("failed to load state tree: %w", err) return nil, xerrors.Errorf("failed to load state tree %s: %w", c, err)
} }
s := &StateTree{ s := &StateTree{

551
chain/stmgr/actors.go Normal file
View File

@ -0,0 +1,551 @@
package stmgr
import (
"bytes"
"context"
"os"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-bitfield"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/network"
cid "github.com/ipfs/go-cid"
"golang.org/x/xerrors"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/paych"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/beacon"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
)
func GetMinerWorkerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (address.Address, error) {
state, err := sm.StateTree(st)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load state tree: %w", err)
}
act, err := state.GetActor(maddr)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
info, err := mas.Info()
if err != nil {
return address.Undef, xerrors.Errorf("failed to load actor info: %w", err)
}
return vm.ResolveToKeyAddr(state, sm.cs.ActorStore(ctx), info.Worker)
}
func GetPower(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (power.Claim, power.Claim, bool, error) {
return GetPowerRaw(ctx, sm, ts.ParentState(), maddr)
}
func GetPowerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (power.Claim, power.Claim, bool, error) {
act, err := sm.LoadActorRaw(ctx, power.Address, st)
if err != nil {
return power.Claim{}, power.Claim{}, false, xerrors.Errorf("(get sset) failed to load power actor state: %w", err)
}
pas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
tpow, err := pas.TotalPower()
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
var mpow power.Claim
var minpow bool
if maddr != address.Undef {
var found bool
mpow, found, err = pas.MinerPower(maddr)
if err != nil || !found {
return power.Claim{}, tpow, false, err
}
minpow, err = pas.MinerNominalPowerMeetsConsensusMinimum(maddr)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
}
return mpow, tpow, minpow, nil
}
func PreCommitInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorPreCommitOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetPrecommittedSector(sid)
}
func MinerSectorInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetSector(sid)
}
func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwrapper.Verifier, sm *StateManager, st cid.Cid, maddr address.Address, rand abi.PoStRandomness) ([]builtin.SectorInfo, error) {
act, err := sm.LoadActorRaw(ctx, maddr, st)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
var provingSectors bitfield.BitField
if nv < network.Version7 {
allSectors, err := miner.AllPartSectors(mas, miner.Partition.AllSectors)
if err != nil {
return nil, xerrors.Errorf("get all sectors: %w", err)
}
faultySectors, err := miner.AllPartSectors(mas, miner.Partition.FaultySectors)
if err != nil {
return nil, xerrors.Errorf("get faulty sectors: %w", err)
}
provingSectors, err = bitfield.SubtractBitField(allSectors, faultySectors)
if err != nil {
return nil, xerrors.Errorf("calc proving sectors: %w", err)
}
} else {
provingSectors, err = miner.AllPartSectors(mas, miner.Partition.ActiveSectors)
if err != nil {
return nil, xerrors.Errorf("get active sectors sectors: %w", err)
}
}
numProvSect, err := provingSectors.Count()
if err != nil {
return nil, xerrors.Errorf("failed to count bits: %w", err)
}
// TODO(review): is this right? feels fishy to me
if numProvSect == 0 {
return nil, nil
}
info, err := mas.Info()
if err != nil {
return nil, xerrors.Errorf("getting miner info: %w", err)
}
mid, err := address.IDFromAddress(maddr)
if err != nil {
return nil, xerrors.Errorf("getting miner ID: %w", err)
}
proofType, err := miner.WinningPoStProofTypeFromWindowPoStProofType(nv, info.WindowPoStProofType)
if err != nil {
return nil, xerrors.Errorf("determining winning post proof type: %w", err)
}
ids, err := pv.GenerateWinningPoStSectorChallenge(ctx, proofType, abi.ActorID(mid), rand, numProvSect)
if err != nil {
return nil, xerrors.Errorf("generating winning post challenges: %w", err)
}
iter, err := provingSectors.BitIterator()
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
// Select winning sectors by _index_ in the all-sectors bitfield.
selectedSectors := bitfield.New()
prev := uint64(0)
for _, n := range ids {
sno, err := iter.Nth(n - prev)
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
selectedSectors.Set(sno)
prev = n
}
sectors, err := mas.LoadSectors(&selectedSectors)
if err != nil {
return nil, xerrors.Errorf("loading proving sectors: %w", err)
}
out := make([]builtin.SectorInfo, len(sectors))
for i, sinfo := range sectors {
out[i] = builtin.SectorInfo{
SealProof: sinfo.SealProof,
SectorNumber: sinfo.SectorNumber,
SealedCID: sinfo.SealedCID,
}
}
return out, nil
}
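A minimal, self-contained sketch (hypothetical numbers, plain slices standing in for the bitfield; not part of this change) of the index-to-sector mapping performed above: the proof system hands back challenge indices into the proving set, which have to be translated into actual sector numbers before loading sector infos.
package main

import "fmt"

func main() {
	// Stand-in for the proving-set bitfield: sector numbers in ascending order.
	provingSectors := []uint64{3, 7, 10, 42}

	// The winning PoSt challenge generator returns indices into that set.
	challengeIndices := []uint64{1, 3}

	for _, idx := range challengeIndices {
		fmt.Printf("challenge index %d selects sector %d\n", idx, provingSectors[idx])
	}
	// Output:
	// challenge index 1 selects sector 7
	// challenge index 3 selects sector 42
}
The real code walks a bit iterator over the proving bitfield instead of indexing a slice, but the mapping from sorted challenge index to set bit is the same idea.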
func GetMinerSlashed(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (bool, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return false, xerrors.Errorf("failed to load power actor: %w", err)
}
spas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return false, xerrors.Errorf("failed to load power actor state: %w", err)
}
_, ok, err := spas.MinerPower(maddr)
if err != nil {
return false, xerrors.Errorf("getting miner power: %w", err)
}
if !ok {
return true, nil
}
return false, nil
}
func GetStorageDeal(ctx context.Context, sm *StateManager, dealID abi.DealID, ts *types.TipSet) (*api.MarketDeal, error) {
act, err := sm.LoadActor(ctx, market.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor: %w", err)
}
state, err := market.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor state: %w", err)
}
proposals, err := state.Proposals()
if err != nil {
return nil, err
}
proposal, found, err := proposals.Get(dealID)
if err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf(
"deal %d not found "+
"- deal may not have completed sealing before deal proposal "+
"start epoch, or deal may have been slashed",
dealID)
}
states, err := state.States()
if err != nil {
return nil, err
}
st, found, err := states.Get(dealID)
if err != nil {
return nil, err
}
if !found {
st = market.EmptyDealState()
}
return &api.MarketDeal{
Proposal: *proposal,
State: *st,
}, nil
}
func ListMinerActors(ctx context.Context, sm *StateManager, ts *types.TipSet) ([]address.Address, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor: %w", err)
}
powState, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor state: %w", err)
}
return powState.ListAllMiners()
}
func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
ts, err := sm.ChainStore().LoadTipSet(tsk)
if err != nil {
return nil, xerrors.Errorf("failed to load tipset for mining base: %w", err)
}
prev, err := sm.ChainStore().GetLatestBeaconEntry(ts)
if err != nil {
if os.Getenv("LOTUS_IGNORE_DRAND") != "_yes_" {
return nil, xerrors.Errorf("failed to get latest beacon entry: %w", err)
}
prev = &types.BeaconEntry{}
}
entries, err := beacon.BeaconEntriesForBlock(ctx, bcs, round, ts.Height(), *prev)
if err != nil {
return nil, err
}
rbase := *prev
if len(entries) > 0 {
rbase = entries[len(entries)-1]
}
lbts, lbst, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, xerrors.Errorf("getting lookback miner actor state: %w", err)
}
act, err := sm.LoadActorRaw(ctx, maddr, lbst)
if xerrors.Is(err, types.ErrActorNotFound) {
_, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("loading miner in current state: %w", err)
}
return nil, nil
}
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
buf := new(bytes.Buffer)
if err := maddr.MarshalCBOR(buf); err != nil {
return nil, xerrors.Errorf("failed to marshal miner address: %w", err)
}
prand, err := store.DrawRandomness(rbase.Data, crypto.DomainSeparationTag_WinningPoStChallengeSeed, round, buf.Bytes())
if err != nil {
return nil, xerrors.Errorf("failed to get randomness for winning post: %w", err)
}
nv := sm.GetNtwkVersion(ctx, ts.Height())
sectors, err := GetSectorsForWinningPoSt(ctx, nv, pv, sm, lbst, maddr, prand)
if err != nil {
return nil, xerrors.Errorf("getting winning post proving set: %w", err)
}
if len(sectors) == 0 {
return nil, nil
}
mpow, tpow, _, err := GetPowerRaw(ctx, sm, lbst, maddr)
if err != nil {
return nil, xerrors.Errorf("failed to get power: %w", err)
}
info, err := mas.Info()
if err != nil {
return nil, err
}
worker, err := sm.ResolveToKeyAddress(ctx, info.Worker, ts)
if err != nil {
return nil, xerrors.Errorf("resolving worker address: %w", err)
}
// TODO: Not ideal performance...This method reloads miner and power state (already looked up here and in GetPowerRaw)
eligible, err := MinerEligibleToMine(ctx, sm, maddr, ts, lbts)
if err != nil {
return nil, xerrors.Errorf("determining miner eligibility: %w", err)
}
return &api.MiningBaseInfo{
MinerPower: mpow.QualityAdjPower,
NetworkPower: tpow.QualityAdjPower,
Sectors: sectors,
WorkerKey: worker,
SectorSize: info.SectorSize,
PrevBeaconEntry: *prev,
BeaconEntries: entries,
EligibleForMining: eligible,
}, nil
}
func minerHasMinPower(ctx context.Context, sm *StateManager, addr address.Address, ts *types.TipSet) (bool, error) {
pact, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return false, xerrors.Errorf("loading power actor state: %w", err)
}
ps, err := power.Load(sm.cs.ActorStore(ctx), pact)
if err != nil {
return false, err
}
return ps.MinerNominalPowerMeetsConsensusMinimum(addr)
}
func MinerEligibleToMine(ctx context.Context, sm *StateManager, addr address.Address, baseTs *types.TipSet, lookbackTs *types.TipSet) (bool, error) {
hmp, err := minerHasMinPower(ctx, sm, addr, lookbackTs)
// TODO: We're blurring the lines between a "runtime network version" and a "Lotus upgrade epoch", is that unavoidable?
if sm.GetNtwkVersion(ctx, baseTs.Height()) <= network.Version3 {
return hmp, err
}
if err != nil {
return false, err
}
if !hmp {
return false, nil
}
// Post actors v2, also check MinerEligibleForElection with base ts
pact, err := sm.LoadActor(ctx, power.Address, baseTs)
if err != nil {
return false, xerrors.Errorf("loading power actor state: %w", err)
}
pstate, err := power.Load(sm.cs.ActorStore(ctx), pact)
if err != nil {
return false, err
}
mact, err := sm.LoadActor(ctx, addr, baseTs)
if err != nil {
return false, xerrors.Errorf("loading miner actor state: %w", err)
}
mstate, err := miner.Load(sm.cs.ActorStore(ctx), mact)
if err != nil {
return false, err
}
// Non-empty power claim.
if claim, found, err := pstate.MinerPower(addr); err != nil {
return false, err
} else if !found {
return false, err
} else if claim.QualityAdjPower.LessThanEqual(big.Zero()) {
return false, err
}
// No fee debt.
if debt, err := mstate.FeeDebt(); err != nil {
return false, err
} else if !debt.IsZero() {
return false, err
}
// No active consensus faults.
if mInfo, err := mstate.Info(); err != nil {
return false, err
} else if baseTs.Height() <= mInfo.ConsensusFaultElapsed {
return false, nil
}
return true, nil
}
func (sm *StateManager) GetPaychState(ctx context.Context, addr address.Address, ts *types.TipSet) (*types.Actor, paych.State, error) {
st, err := sm.ParentState(ts)
if err != nil {
return nil, nil, err
}
act, err := st.GetActor(addr)
if err != nil {
return nil, nil, err
}
actState, err := paych.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, nil, err
}
return act, actState, nil
}
func (sm *StateManager) GetMarketState(ctx context.Context, ts *types.TipSet) (market.State, error) {
st, err := sm.ParentState(ts)
if err != nil {
return nil, err
}
act, err := st.GetActor(market.Address)
if err != nil {
return nil, err
}
actState, err := market.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, err
}
return actState, nil
}
func (sm *StateManager) MarketBalance(ctx context.Context, addr address.Address, ts *types.TipSet) (api.MarketBalance, error) {
mstate, err := sm.GetMarketState(ctx, ts)
if err != nil {
return api.MarketBalance{}, err
}
addr, err = sm.LookupID(ctx, addr, ts)
if err != nil {
return api.MarketBalance{}, err
}
var out api.MarketBalance
et, err := mstate.EscrowTable()
if err != nil {
return api.MarketBalance{}, err
}
out.Escrow, err = et.Get(addr)
if err != nil {
return api.MarketBalance{}, xerrors.Errorf("getting escrow balance: %w", err)
}
lt, err := mstate.LockedTable()
if err != nil {
return api.MarketBalance{}, err
}
out.Locked, err = lt.Get(addr)
if err != nil {
return api.MarketBalance{}, xerrors.Errorf("getting locked balance: %w", err)
}
return out, nil
}
var _ StateManagerAPI = (*StateManager)(nil)

View File

@ -64,7 +64,7 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
Epoch: pheight + 1, Epoch: pheight + 1,
Rand: store.NewChainRand(sm.cs, ts.Cids()), Rand: store.NewChainRand(sm.cs, ts.Cids()),
Bstore: sm.cs.StateBlockstore(), Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.cs.VMSys(), Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply, CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion, NtwkVersion: sm.GetNtwkVersion,
BaseFee: types.NewInt(0), BaseFee: types.NewInt(0),
@ -179,7 +179,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
Epoch: ts.Height() + 1, Epoch: ts.Height() + 1,
Rand: r, Rand: r,
Bstore: sm.cs.StateBlockstore(), Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.cs.VMSys(), Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply, CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion, NtwkVersion: sm.GetNtwkVersion,
BaseFee: ts.Blocks()[0].ParentBaseFee, BaseFee: ts.Blocks()[0].ParentBaseFee,

326
chain/stmgr/execute.go Normal file
View File

@ -0,0 +1,326 @@
package stmgr
import (
"context"
"fmt"
"sync/atomic"
"github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"go.opencensus.io/stats"
"go.opencensus.io/trace"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/cron"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/metrics"
)
func (sm *StateManager) ApplyBlocks(ctx context.Context, parentEpoch abi.ChainEpoch, pstate cid.Cid, bms []store.BlockMessages, epoch abi.ChainEpoch, r vm.Rand, em ExecMonitor, baseFee abi.TokenAmount, ts *types.TipSet) (cid.Cid, cid.Cid, error) {
done := metrics.Timer(ctx, metrics.VMApplyBlocksTotal)
defer done()
partDone := metrics.Timer(ctx, metrics.VMApplyEarly)
defer func() {
partDone()
}()
makeVmWithBaseState := func(base cid.Cid) (*vm.VM, error) {
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: epoch,
Rand: r,
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: baseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
return sm.newVM(ctx, vmopt)
}
vmi, err := makeVmWithBaseState(pstate)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
runCron := func(epoch abi.ChainEpoch) error {
cronMsg := &types.Message{
To: cron.Address,
From: builtin.SystemActorAddr,
Nonce: uint64(epoch),
Value: types.NewInt(0),
GasFeeCap: types.NewInt(0),
GasPremium: types.NewInt(0),
GasLimit: build.BlockGasLimit * 10000, // Make super sure this is never too little
Method: cron.Methods.EpochTick,
Params: nil,
}
ret, err := vmi.ApplyImplicitMessage(ctx, cronMsg)
if err != nil {
return err
}
if em != nil {
if err := em.MessageApplied(ctx, ts, cronMsg.Cid(), cronMsg, ret, true); err != nil {
return xerrors.Errorf("callback failed on cron message: %w", err)
}
}
if ret.ExitCode != 0 {
return xerrors.Errorf("CheckProofSubmissions exit was non-zero: %d", ret.ExitCode)
}
return nil
}
for i := parentEpoch; i < epoch; i++ {
if i > parentEpoch {
// run cron for null rounds if any
if err := runCron(i); err != nil {
return cid.Undef, cid.Undef, err
}
pstate, err = vmi.Flush(ctx)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("flushing vm: %w", err)
}
}
// handle state forks
// XXX: The state tree
newState, err := sm.handleStateForks(ctx, pstate, i, em, ts)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("error handling state forks: %w", err)
}
if pstate != newState {
vmi, err = makeVmWithBaseState(newState)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
}
vmi.SetBlockHeight(i + 1)
pstate = newState
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyMessages)
var receipts []cbg.CBORMarshaler
processedMsgs := make(map[cid.Cid]struct{})
for _, b := range bms {
penalty := types.NewInt(0)
gasReward := big.Zero()
for _, cm := range append(b.BlsMessages, b.SecpkMessages...) {
m := cm.VMMessage()
if _, found := processedMsgs[m.Cid()]; found {
continue
}
r, err := vmi.ApplyMessage(ctx, cm)
if err != nil {
return cid.Undef, cid.Undef, err
}
receipts = append(receipts, &r.MessageReceipt)
gasReward = big.Add(gasReward, r.GasCosts.MinerTip)
penalty = big.Add(penalty, r.GasCosts.MinerPenalty)
if em != nil {
if err := em.MessageApplied(ctx, ts, cm.Cid(), m, r, false); err != nil {
return cid.Undef, cid.Undef, err
}
}
processedMsgs[m.Cid()] = struct{}{}
}
params, err := actors.SerializeParams(&reward.AwardBlockRewardParams{
Miner: b.Miner,
Penalty: penalty,
GasReward: gasReward,
WinCount: b.WinCount,
})
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to serialize award params: %w", err)
}
rwMsg := &types.Message{
From: builtin.SystemActorAddr,
To: reward.Address,
Nonce: uint64(epoch),
Value: types.NewInt(0),
GasFeeCap: types.NewInt(0),
GasPremium: types.NewInt(0),
GasLimit: 1 << 30,
Method: reward.Methods.AwardBlockReward,
Params: params,
}
ret, actErr := vmi.ApplyImplicitMessage(ctx, rwMsg)
if actErr != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to apply reward message for miner %s: %w", b.Miner, actErr)
}
if em != nil {
if err := em.MessageApplied(ctx, ts, rwMsg.Cid(), rwMsg, ret, true); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("callback failed on reward message: %w", err)
}
}
if ret.ExitCode != 0 {
return cid.Undef, cid.Undef, xerrors.Errorf("reward application message failed (exit %d): %s", ret.ExitCode, ret.ActorErr)
}
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyCron)
if err := runCron(epoch); err != nil {
return cid.Cid{}, cid.Cid{}, err
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyFlush)
rectarr := blockadt.MakeEmptyArray(sm.cs.ActorStore(ctx))
for i, receipt := range receipts {
if err := rectarr.Set(uint64(i), receipt); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to build receipts amt: %w", err)
}
}
rectroot, err := rectarr.Root()
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to build receipts amt: %w", err)
}
st, err := vmi.Flush(ctx)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("vm flush failed: %w", err)
}
stats.Record(ctx, metrics.VMSends.M(int64(atomic.LoadUint64(&vm.StatSends))),
metrics.VMApplied.M(int64(atomic.LoadUint64(&vm.StatApplied))))
return st, rectroot, nil
}
func (sm *StateManager) TipSetState(ctx context.Context, ts *types.TipSet) (st cid.Cid, rec cid.Cid, err error) {
ctx, span := trace.StartSpan(ctx, "tipSetState")
defer span.End()
if span.IsRecordingEvents() {
span.AddAttributes(trace.StringAttribute("tipset", fmt.Sprint(ts.Cids())))
}
ck := cidsToKey(ts.Cids())
sm.stlk.Lock()
cw, cwok := sm.compWait[ck]
if cwok {
sm.stlk.Unlock()
span.AddAttributes(trace.BoolAttribute("waited", true))
select {
case <-cw:
sm.stlk.Lock()
case <-ctx.Done():
return cid.Undef, cid.Undef, ctx.Err()
}
}
cached, ok := sm.stCache[ck]
if ok {
sm.stlk.Unlock()
span.AddAttributes(trace.BoolAttribute("cache", true))
return cached[0], cached[1], nil
}
ch := make(chan struct{})
sm.compWait[ck] = ch
defer func() {
sm.stlk.Lock()
delete(sm.compWait, ck)
if st != cid.Undef {
sm.stCache[ck] = []cid.Cid{st, rec}
}
sm.stlk.Unlock()
close(ch)
}()
sm.stlk.Unlock()
if ts.Height() == 0 {
// NB: This is here because the process that executes blocks requires that the
// block miner reference a valid miner in the state tree. Unless we create some
// magical genesis miner, this won't work properly, so we short circuit here
// This avoids the question of 'who gets paid the genesis block reward'
return ts.Blocks()[0].ParentStateRoot, ts.Blocks()[0].ParentMessageReceipts, nil
}
st, rec, err = sm.computeTipSetState(ctx, ts, sm.tsExecMonitor)
if err != nil {
return cid.Undef, cid.Undef, err
}
return st, rec, nil
}
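For readers unfamiliar with the compWait/stCache dance above, here is a stand-alone sketch of the same dedup pattern (hypothetical types and names, not part of this change): concurrent callers asking for the same key wait on a channel while one of them computes, then everyone reads the cached result.
package main

import (
	"fmt"
	"sync"
)

type dedup struct {
	lk      sync.Mutex
	waiting map[string]chan struct{}
	cache   map[string]int
}

func (d *dedup) get(key string, compute func() int) int {
	d.lk.Lock()
	if ch, ok := d.waiting[key]; ok {
		// Someone else is already computing this key; wait for them to finish.
		d.lk.Unlock()
		<-ch
		d.lk.Lock()
	}
	if v, ok := d.cache[key]; ok {
		d.lk.Unlock()
		return v
	}
	ch := make(chan struct{})
	d.waiting[key] = ch
	d.lk.Unlock()

	v := compute() // expensive work happens outside the lock

	d.lk.Lock()
	d.cache[key] = v
	delete(d.waiting, key)
	d.lk.Unlock()
	close(ch)
	return v
}

func main() {
	d := &dedup{waiting: map[string]chan struct{}{}, cache: map[string]int{}}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(d.get("tipset", func() int { return 42 })) // computed once, read by all callers
		}()
	}
	wg.Wait()
}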
func (sm *StateManager) ExecutionTraceWithMonitor(ctx context.Context, ts *types.TipSet, em ExecMonitor) (cid.Cid, error) {
st, _, err := sm.computeTipSetState(ctx, ts, em)
return st, err
}
func (sm *StateManager) ExecutionTrace(ctx context.Context, ts *types.TipSet) (cid.Cid, []*api.InvocResult, error) {
var invocTrace []*api.InvocResult
st, err := sm.ExecutionTraceWithMonitor(ctx, ts, &InvocationTracer{trace: &invocTrace})
if err != nil {
return cid.Undef, nil, err
}
return st, invocTrace, nil
}
func (sm *StateManager) computeTipSetState(ctx context.Context, ts *types.TipSet, em ExecMonitor) (cid.Cid, cid.Cid, error) {
ctx, span := trace.StartSpan(ctx, "computeTipSetState")
defer span.End()
blks := ts.Blocks()
for i := 0; i < len(blks); i++ {
for j := i + 1; j < len(blks); j++ {
if blks[i].Miner == blks[j].Miner {
return cid.Undef, cid.Undef,
xerrors.Errorf("duplicate miner in a tipset (%s %s)",
blks[i].Miner, blks[j].Miner)
}
}
}
var parentEpoch abi.ChainEpoch
pstate := blks[0].ParentStateRoot
if blks[0].Height > 0 {
parent, err := sm.cs.GetBlock(blks[0].Parents[0])
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("getting parent block: %w", err)
}
parentEpoch = parent.Height
}
r := store.NewChainRand(sm.cs, ts.Cids())
blkmsgs, err := sm.cs.BlockMsgsForTipset(ts)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("getting block messages for tipset: %w", err)
}
baseFee := blks[0].ParentBaseFee
return sm.ApplyBlocks(ctx, parentEpoch, pstate, blkmsgs, blks[0].Height, r, em, baseFee, ts)
}

View File

@ -121,7 +121,7 @@ func TestForkHeightTriggers(t *testing.T) {
} }
sm, err := NewStateManagerWithUpgradeSchedule( sm, err := NewStateManagerWithUpgradeSchedule(
cg.ChainStore(), UpgradeSchedule{{ cg.ChainStore(), cg.StateManager().VMSys(), UpgradeSchedule{{
Network: network.Version1, Network: network.Version1,
Height: testForkHeight, Height: testForkHeight,
Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor, Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,
@ -250,7 +250,7 @@ func TestForkRefuseCall(t *testing.T) {
} }
sm, err := NewStateManagerWithUpgradeSchedule( sm, err := NewStateManagerWithUpgradeSchedule(
cg.ChainStore(), UpgradeSchedule{{ cg.ChainStore(), cg.StateManager().VMSys(), UpgradeSchedule{{
Network: network.Version1, Network: network.Version1,
Expensive: true, Expensive: true,
Height: testForkHeight, Height: testForkHeight,
@ -365,7 +365,7 @@ func TestForkPreMigration(t *testing.T) {
counter := make(chan struct{}, 10) counter := make(chan struct{}, 10)
sm, err := NewStateManagerWithUpgradeSchedule( sm, err := NewStateManagerWithUpgradeSchedule(
cg.ChainStore(), UpgradeSchedule{{ cg.ChainStore(), cg.StateManager().VMSys(), UpgradeSchedule{{
Network: network.Version1, Network: network.Version1,
Height: testForkHeight, Height: testForkHeight,
Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor, Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,

View File

@ -31,6 +31,14 @@ func (sm *StateManager) ParentState(ts *types.TipSet) (*state.StateTree, error)
return state, nil return state, nil
} }
func (sm *StateManager) parentState(ts *types.TipSet) cid.Cid {
if ts == nil {
ts = sm.cs.GetHeaviestTipSet()
}
return ts.ParentState()
}
func (sm *StateManager) StateTree(st cid.Cid) (*state.StateTree, error) { func (sm *StateManager) StateTree(st cid.Cid) (*state.StateTree, error) {
cst := cbor.NewCborStore(sm.cs.StateBlockstore()) cst := cbor.NewCborStore(sm.cs.StateBlockstore())
state, err := state.LoadStateTree(cst, st) state, err := state.LoadStateTree(cst, st)

279
chain/stmgr/searchwait.go Normal file
View File

@ -0,0 +1,279 @@
package stmgr
import (
"context"
"errors"
"fmt"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
)
// WaitForMessage blocks until a message appears on chain. It looks backwards in the chain to see if this has already
// happened, with an optional limit to how many epochs it will search. It guarantees that the message has been on
// chain for at least confidence epochs without being reverted before returning.
func (sm *StateManager) WaitForMessage(ctx context.Context, mcid cid.Cid, confidence uint64, lookbackLimit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
msg, err := sm.cs.GetCMessage(mcid)
if err != nil {
return nil, nil, cid.Undef, fmt.Errorf("failed to load message: %w", err)
}
tsub := sm.cs.SubHeadChanges(ctx)
head, ok := <-tsub
if !ok {
return nil, nil, cid.Undef, fmt.Errorf("SubHeadChanges stream was invalid")
}
if len(head) != 1 {
return nil, nil, cid.Undef, fmt.Errorf("SubHeadChanges first entry should have been one item")
}
if head[0].Type != store.HCCurrent {
return nil, nil, cid.Undef, fmt.Errorf("expected current head on SHC stream (got %s)", head[0].Type)
}
r, foundMsg, err := sm.tipsetExecutedMessage(head[0].Val, mcid, msg.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, err
}
if r != nil {
return head[0].Val, r, foundMsg, nil
}
var backTs *types.TipSet
var backRcp *types.MessageReceipt
var backFm cid.Cid
backSearchWait := make(chan struct{})
go func() {
fts, r, foundMsg, err := sm.searchBackForMsg(ctx, head[0].Val, msg, lookbackLimit, allowReplaced)
if err != nil {
log.Warnf("failed to look back through chain for message: %v", err)
return
}
backTs = fts
backRcp = r
backFm = foundMsg
close(backSearchWait)
}()
var candidateTs *types.TipSet
var candidateRcp *types.MessageReceipt
var candidateFm cid.Cid
heightOfHead := head[0].Val.Height()
reverts := map[types.TipSetKey]bool{}
for {
select {
case notif, ok := <-tsub:
if !ok {
return nil, nil, cid.Undef, ctx.Err()
}
for _, val := range notif {
switch val.Type {
case store.HCRevert:
if val.Val.Equals(candidateTs) {
candidateTs = nil
candidateRcp = nil
candidateFm = cid.Undef
}
if backSearchWait != nil {
reverts[val.Val.Key()] = true
}
case store.HCApply:
if candidateTs != nil && val.Val.Height() >= candidateTs.Height()+abi.ChainEpoch(confidence) {
return candidateTs, candidateRcp, candidateFm, nil
}
r, foundMsg, err := sm.tipsetExecutedMessage(val.Val, mcid, msg.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, err
}
if r != nil {
if confidence == 0 {
return val.Val, r, foundMsg, err
}
candidateTs = val.Val
candidateRcp = r
candidateFm = foundMsg
}
heightOfHead = val.Val.Height()
}
}
case <-backSearchWait:
// check if we found the message in the chain and that it hasn't been reverted since we started searching
if backTs != nil && !reverts[backTs.Key()] {
// if head is at or past confidence interval, return immediately
if heightOfHead >= backTs.Height()+abi.ChainEpoch(confidence) {
return backTs, backRcp, backFm, nil
}
// wait for confidence interval
candidateTs = backTs
candidateRcp = backRcp
candidateFm = backFm
}
reverts = nil
backSearchWait = nil
case <-ctx.Done():
return nil, nil, cid.Undef, ctx.Err()
}
}
}
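A hypothetical sketch (not part of this diff) of the confidence rule applied above: a receipt found in a tipset at height h is only returned once the head has advanced to at least h + confidence epochs.
package main

import "fmt"

func confirmed(foundHeight, headHeight, confidence uint64) bool {
	return headHeight >= foundHeight+confidence
}

func main() {
	fmt.Println(confirmed(100, 104, 5)) // false: the head is only 4 epochs past the execution tipset
	fmt.Println(confirmed(100, 105, 5)) // true: 5 epochs of confidence reached
}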
func (sm *StateManager) SearchForMessage(ctx context.Context, head *types.TipSet, mcid cid.Cid, lookbackLimit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
msg, err := sm.cs.GetCMessage(mcid)
if err != nil {
return nil, nil, cid.Undef, fmt.Errorf("failed to load message: %w", err)
}
r, foundMsg, err := sm.tipsetExecutedMessage(head, mcid, msg.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, err
}
if r != nil {
return head, r, foundMsg, nil
}
fts, r, foundMsg, err := sm.searchBackForMsg(ctx, head, msg, lookbackLimit, allowReplaced)
if err != nil {
log.Warnf("failed to look back through chain for message %s", mcid)
return nil, nil, cid.Undef, err
}
if fts == nil {
return nil, nil, cid.Undef, nil
}
return fts, r, foundMsg, nil
}
// searchBackForMsg searches up to limit tipsets backwards from the given
// tipset for a message receipt.
// If limit is
// - 0 then no tipsets are searched
// - 5 then five tipsets are searched
// - LookbackNoLimit then there is no limit
func (sm *StateManager) searchBackForMsg(ctx context.Context, from *types.TipSet, m types.ChainMsg, limit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
limitHeight := from.Height() - limit
noLimit := limit == LookbackNoLimit
cur := from
curActor, err := sm.LoadActor(ctx, m.VMMessage().From, cur)
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("failed to load initital tipset")
}
mFromId, err := sm.LookupID(ctx, m.VMMessage().From, from)
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("looking up From id address: %w", err)
}
mNonce := m.VMMessage().Nonce
for {
// If we've reached the genesis block, or we've reached the limit of
// how far back to look
if cur.Height() == 0 || !noLimit && cur.Height() <= limitHeight {
// it ain't here!
return nil, nil, cid.Undef, nil
}
select {
case <-ctx.Done():
return nil, nil, cid.Undef, nil
default:
}
// we either have no messages from the sender, or the latest message we found has a lower nonce than the one being searched for,
// either way, no reason to lookback, it ain't there
if curActor == nil || curActor.Nonce == 0 || curActor.Nonce < mNonce {
return nil, nil, cid.Undef, nil
}
pts, err := sm.cs.LoadTipSet(cur.Parents())
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("failed to load tipset during msg wait searchback: %w", err)
}
act, err := sm.LoadActor(ctx, mFromId, pts)
actorNoExist := errors.Is(err, types.ErrActorNotFound)
if err != nil && !actorNoExist {
return nil, nil, cid.Cid{}, xerrors.Errorf("failed to load the actor: %w", err)
}
// check that between cur and parent tipset the nonce fell into range of our message
if actorNoExist || (curActor.Nonce > mNonce && act.Nonce <= mNonce) {
r, foundMsg, err := sm.tipsetExecutedMessage(cur, m.Cid(), m.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("checking for message execution during lookback: %w", err)
}
if r != nil {
return cur, r, foundMsg, nil
}
}
cur = pts
curActor = act
}
}
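The nonce-window check above can be illustrated in isolation (hypothetical helper, not part of this change): the sender's message with nonce N must have executed going into tipset cur exactly when the sender's on-chain nonce was still at most N at cur's parent but already greater than N at cur.
package main

import "fmt"

// executedGoingInto reports whether a message with the given nonce was applied
// between a tipset's parent and the tipset itself, judging only by the sender's
// actor nonce at each point.
func executedGoingInto(parentNonce, curNonce, msgNonce uint64) bool {
	return curNonce > msgNonce && parentNonce <= msgNonce
}

func main() {
	fmt.Println(executedGoingInto(4, 6, 5)) // true: nonce 5 was consumed in this step
	fmt.Println(executedGoingInto(6, 8, 5)) // false: consumed further back, keep searching older tipsets
}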
func (sm *StateManager) tipsetExecutedMessage(ts *types.TipSet, msg cid.Cid, vmm *types.Message, allowReplaced bool) (*types.MessageReceipt, cid.Cid, error) {
// The genesis block did not execute any messages
if ts.Height() == 0 {
return nil, cid.Undef, nil
}
pts, err := sm.cs.LoadTipSet(ts.Parents())
if err != nil {
return nil, cid.Undef, err
}
cm, err := sm.cs.MessagesForTipset(pts)
if err != nil {
return nil, cid.Undef, err
}
for ii := range cm {
// iterate in reverse because we're going backwards through the chain
i := len(cm) - ii - 1
m := cm[i]
if m.VMMessage().From == vmm.From { // cheaper to just check origin first
if m.VMMessage().Nonce == vmm.Nonce {
if allowReplaced && m.VMMessage().EqualCall(vmm) {
if m.Cid() != msg {
log.Warnw("found message with equal nonce and call params but different CID",
"wanted", msg, "found", m.Cid(), "nonce", vmm.Nonce, "from", vmm.From)
}
pr, err := sm.cs.GetParentReceipt(ts.Blocks()[0], i)
if err != nil {
return nil, cid.Undef, err
}
return pr, m.Cid(), nil
}
// this should be that message
return nil, cid.Undef, xerrors.Errorf("found message with equal nonce as the one we are looking for (F:%s n %d, TS: %s n%d)",
msg, vmm.Nonce, m.Cid(), m.VMMessage().Nonce)
}
if m.VMMessage().Nonce < vmm.Nonce {
return nil, cid.Undef, nil // don't bother looking further
}
}
}
return nil, cid.Undef, nil
}

File diff suppressed because it is too large

473
chain/stmgr/supply.go Normal file
View File

@ -0,0 +1,473 @@
package stmgr
import (
"context"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
msig0 "github.com/filecoin-project/specs-actors/actors/builtin/multisig"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
_init "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/multisig"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/actors/builtin/verifreg"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/types"
)
// sets up information about the vesting schedule
func (sm *StateManager) setupGenesisVestingSchedule(ctx context.Context) error {
gb, err := sm.cs.GetGenesis()
if err != nil {
return xerrors.Errorf("getting genesis block: %w", err)
}
gts, err := types.NewTipSet([]*types.BlockHeader{gb})
if err != nil {
return xerrors.Errorf("getting genesis tipset: %w", err)
}
st, _, err := sm.TipSetState(ctx, gts)
if err != nil {
return xerrors.Errorf("getting genesis tipset state: %w", err)
}
cst := cbor.NewCborStore(sm.cs.StateBlockstore())
sTree, err := state.LoadStateTree(cst, st)
if err != nil {
return xerrors.Errorf("loading state tree: %w", err)
}
gmf, err := getFilMarketLocked(ctx, sTree)
if err != nil {
return xerrors.Errorf("setting up genesis market funds: %w", err)
}
gp, err := getFilPowerLocked(ctx, sTree)
if err != nil {
return xerrors.Errorf("setting up genesis pledge: %w", err)
}
sm.genesisMarketFunds = gmf
sm.genesisPledge = gp
totalsByEpoch := make(map[abi.ChainEpoch]abi.TokenAmount)
// 6 months
sixMonths := abi.ChainEpoch(183 * builtin.EpochsInDay)
totalsByEpoch[sixMonths] = big.NewInt(49_929_341)
totalsByEpoch[sixMonths] = big.Add(totalsByEpoch[sixMonths], big.NewInt(32_787_700))
// 1 year
oneYear := abi.ChainEpoch(365 * builtin.EpochsInDay)
totalsByEpoch[oneYear] = big.NewInt(22_421_712)
// 2 years
twoYears := abi.ChainEpoch(2 * 365 * builtin.EpochsInDay)
totalsByEpoch[twoYears] = big.NewInt(7_223_364)
// 3 years
threeYears := abi.ChainEpoch(3 * 365 * builtin.EpochsInDay)
totalsByEpoch[threeYears] = big.NewInt(87_637_883)
// 6 years
sixYears := abi.ChainEpoch(6 * 365 * builtin.EpochsInDay)
totalsByEpoch[sixYears] = big.NewInt(100_000_000)
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(300_000_000))
sm.preIgnitionVesting = make([]msig0.State, 0, len(totalsByEpoch))
for k, v := range totalsByEpoch {
ns := msig0.State{
InitialBalance: v,
UnlockDuration: k,
PendingTxns: cid.Undef,
}
sm.preIgnitionVesting = append(sm.preIgnitionVesting, ns)
}
return nil
}
// sets up information about the vesting schedule post the ignition upgrade
func (sm *StateManager) setupPostIgnitionVesting(ctx context.Context) error {
totalsByEpoch := make(map[abi.ChainEpoch]abi.TokenAmount)
// 6 months
sixMonths := abi.ChainEpoch(183 * builtin.EpochsInDay)
totalsByEpoch[sixMonths] = big.NewInt(49_929_341)
totalsByEpoch[sixMonths] = big.Add(totalsByEpoch[sixMonths], big.NewInt(32_787_700))
// 1 year
oneYear := abi.ChainEpoch(365 * builtin.EpochsInDay)
totalsByEpoch[oneYear] = big.NewInt(22_421_712)
// 2 years
twoYears := abi.ChainEpoch(2 * 365 * builtin.EpochsInDay)
totalsByEpoch[twoYears] = big.NewInt(7_223_364)
// 3 years
threeYears := abi.ChainEpoch(3 * 365 * builtin.EpochsInDay)
totalsByEpoch[threeYears] = big.NewInt(87_637_883)
// 6 years
sixYears := abi.ChainEpoch(6 * 365 * builtin.EpochsInDay)
totalsByEpoch[sixYears] = big.NewInt(100_000_000)
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(300_000_000))
sm.postIgnitionVesting = make([]msig0.State, 0, len(totalsByEpoch))
for k, v := range totalsByEpoch {
ns := msig0.State{
// In the pre-ignition logic, we incorrectly set this value in Fil, not attoFil, an off-by-10^18 error
InitialBalance: big.Mul(v, big.NewInt(int64(build.FilecoinPrecision))),
UnlockDuration: k,
PendingTxns: cid.Undef,
// In the pre-ignition logic, the start epoch was 0. This changes in the fork logic of the Ignition upgrade itself.
StartEpoch: build.UpgradeLiftoffHeight,
}
sm.postIgnitionVesting = append(sm.postIgnitionVesting, ns)
}
return nil
}
// sets up information about the vesting schedule post the calico upgrade
func (sm *StateManager) setupPostCalicoVesting(ctx context.Context) error {
totalsByEpoch := make(map[abi.ChainEpoch]abi.TokenAmount)
// 0 days
zeroDays := abi.ChainEpoch(0)
totalsByEpoch[zeroDays] = big.NewInt(10_632_000)
// 6 months
sixMonths := abi.ChainEpoch(183 * builtin.EpochsInDay)
totalsByEpoch[sixMonths] = big.NewInt(19_015_887)
totalsByEpoch[sixMonths] = big.Add(totalsByEpoch[sixMonths], big.NewInt(32_787_700))
// 1 year
oneYear := abi.ChainEpoch(365 * builtin.EpochsInDay)
totalsByEpoch[oneYear] = big.NewInt(22_421_712)
totalsByEpoch[oneYear] = big.Add(totalsByEpoch[oneYear], big.NewInt(9_400_000))
// 2 years
twoYears := abi.ChainEpoch(2 * 365 * builtin.EpochsInDay)
totalsByEpoch[twoYears] = big.NewInt(7_223_364)
// 3 years
threeYears := abi.ChainEpoch(3 * 365 * builtin.EpochsInDay)
totalsByEpoch[threeYears] = big.NewInt(87_637_883)
totalsByEpoch[threeYears] = big.Add(totalsByEpoch[threeYears], big.NewInt(898_958))
// 6 years
sixYears := abi.ChainEpoch(6 * 365 * builtin.EpochsInDay)
totalsByEpoch[sixYears] = big.NewInt(100_000_000)
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(300_000_000))
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(9_805_053))
sm.postCalicoVesting = make([]msig0.State, 0, len(totalsByEpoch))
for k, v := range totalsByEpoch {
ns := msig0.State{
InitialBalance: big.Mul(v, big.NewInt(int64(build.FilecoinPrecision))),
UnlockDuration: k,
PendingTxns: cid.Undef,
StartEpoch: build.UpgradeLiftoffHeight,
}
sm.postCalicoVesting = append(sm.postCalicoVesting, ns)
}
return nil
}
// GetVestedFunds returns all funds that have "left" actors that are in the genesis state:
// - For Multisigs, it counts the actual amounts that have vested at the given epoch
// - For Accounts, it counts max(currentBalance - genesisBalance, 0).
func (sm *StateManager) GetFilVested(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
vf := big.Zero()
if height <= build.UpgradeIgnitionHeight {
for _, v := range sm.preIgnitionVesting {
au := big.Sub(v.InitialBalance, v.AmountLocked(height))
vf = big.Add(vf, au)
}
} else if height <= build.UpgradeCalicoHeight {
for _, v := range sm.postIgnitionVesting {
// In the pre-ignition logic, we simply called AmountLocked(height), assuming startEpoch was 0.
// The start epoch changed in the Ignition upgrade.
au := big.Sub(v.InitialBalance, v.AmountLocked(height-v.StartEpoch))
vf = big.Add(vf, au)
}
} else {
for _, v := range sm.postCalicoVesting {
// In the pre-ignition logic, we simply called AmountLocked(height), assuming startEpoch was 0.
// The start epoch changed in the Ignition upgrade.
au := big.Sub(v.InitialBalance, v.AmountLocked(height-v.StartEpoch))
vf = big.Add(vf, au)
}
}
// After UpgradeAssemblyHeight these funds are accounted for in GetFilReserveDisbursed
if height <= build.UpgradeAssemblyHeight {
// continue to use preIgnitionGenInfos, nothing changed at the Ignition epoch
vf = big.Add(vf, sm.genesisPledge)
// continue to use preIgnitionGenInfos, nothing changed at the Ignition epoch
vf = big.Add(vf, sm.genesisMarketFunds)
}
return vf, nil
}
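As a rough worked example, each schedule contributes InitialBalance - AmountLocked(height - StartEpoch) to the vested total. The sketch below assumes simple linear vesting as a stand-in for the multisig actor's AmountLocked; that linearity is an illustrative simplification, not something asserted by this diff.
package main

import "fmt"

// amountLocked is a simplified, linear stand-in for msig0.State.AmountLocked.
func amountLocked(initial, unlockDuration, elapsed int64) int64 {
	if elapsed <= 0 {
		return initial
	}
	if elapsed >= unlockDuration {
		return 0
	}
	return initial * (unlockDuration - elapsed) / unlockDuration
}

func main() {
	initial := int64(1_000_000)
	unlockDuration := int64(1000) // epochs
	startEpoch := int64(100)
	height := int64(600)

	vested := initial - amountLocked(initial, unlockDuration, height-startEpoch)
	fmt.Println(vested) // 500000: halfway through the schedule, half has vested
}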
func GetFilReserveDisbursed(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
ract, err := st.GetActor(builtin.ReserveAddress)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to get reserve actor: %w", err)
}
// If money enters the reserve actor, this could lead to a negative term
return big.Sub(big.NewFromGo(build.InitialFilReserved), ract.Balance), nil
}
func GetFilMined(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
ractor, err := st.GetActor(reward.Address)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load reward actor state: %w", err)
}
rst, err := reward.Load(adt.WrapStore(ctx, st.Store), ractor)
if err != nil {
return big.Zero(), err
}
return rst.TotalStoragePowerReward()
}
func getFilMarketLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
act, err := st.GetActor(market.Address)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load market actor: %w", err)
}
mst, err := market.Load(adt.WrapStore(ctx, st.Store), act)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load market state: %w", err)
}
return mst.TotalLocked()
}
func getFilPowerLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
pactor, err := st.GetActor(power.Address)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load power actor: %w", err)
}
pst, err := power.Load(adt.WrapStore(ctx, st.Store), pactor)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load power state: %w", err)
}
return pst.TotalLocked()
}
func (sm *StateManager) GetFilLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
filMarketLocked, err := getFilMarketLocked(ctx, st)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to get filMarketLocked: %w", err)
}
filPowerLocked, err := getFilPowerLocked(ctx, st)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to get filPowerLocked: %w", err)
}
return types.BigAdd(filMarketLocked, filPowerLocked), nil
}
func GetFilBurnt(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
burnt, err := st.GetActor(builtin.BurntFundsActorAddr)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load burnt actor: %w", err)
}
return burnt.Balance, nil
}
func (sm *StateManager) GetVMCirculatingSupply(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
cs, err := sm.GetVMCirculatingSupplyDetailed(ctx, height, st)
if err != nil {
return types.EmptyInt, err
}
return cs.FilCirculating, err
}
func (sm *StateManager) GetVMCirculatingSupplyDetailed(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (api.CirculatingSupply, error) {
sm.genesisMsigLk.Lock()
defer sm.genesisMsigLk.Unlock()
if sm.preIgnitionVesting == nil || sm.genesisPledge.IsZero() || sm.genesisMarketFunds.IsZero() {
err := sm.setupGenesisVestingSchedule(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup pre-ignition vesting schedule: %w", err)
}
}
if sm.postIgnitionVesting == nil {
err := sm.setupPostIgnitionVesting(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup post-ignition vesting schedule: %w", err)
}
}
if sm.postCalicoVesting == nil {
err := sm.setupPostCalicoVesting(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup post-calico vesting schedule: %w", err)
}
}
filVested, err := sm.GetFilVested(ctx, height, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filVested: %w", err)
}
filReserveDisbursed := big.Zero()
if height > build.UpgradeAssemblyHeight {
filReserveDisbursed, err = GetFilReserveDisbursed(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filReserveDisbursed: %w", err)
}
}
filMined, err := GetFilMined(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filMined: %w", err)
}
filBurnt, err := GetFilBurnt(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filBurnt: %w", err)
}
filLocked, err := sm.GetFilLocked(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filLocked: %w", err)
}
ret := types.BigAdd(filVested, filMined)
ret = types.BigAdd(ret, filReserveDisbursed)
ret = types.BigSub(ret, filBurnt)
ret = types.BigSub(ret, filLocked)
if ret.LessThan(big.Zero()) {
ret = big.Zero()
}
return api.CirculatingSupply{
FilVested: filVested,
FilMined: filMined,
FilBurnt: filBurnt,
FilLocked: filLocked,
FilCirculating: ret,
FilReserveDisbursed: filReserveDisbursed,
}, nil
}
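The identity computed above is circulating = vested + mined + reserveDisbursed - burnt - locked, floored at zero. A tiny illustration with made-up numbers, using the same big-int helpers the function itself relies on:
package main

import (
	"fmt"

	"github.com/filecoin-project/go-state-types/big"
)

func main() {
	vested := big.NewInt(100)
	mined := big.NewInt(50)
	reserveDisbursed := big.NewInt(10)
	burnt := big.NewInt(5)
	locked := big.NewInt(30)

	circ := big.Add(vested, mined)
	circ = big.Add(circ, reserveDisbursed)
	circ = big.Sub(circ, burnt)
	circ = big.Sub(circ, locked)
	if circ.LessThan(big.Zero()) {
		circ = big.Zero() // supply can never be reported as negative
	}
	fmt.Println(circ) // 125
}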
func (sm *StateManager) GetCirculatingSupply(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
circ := big.Zero()
unCirc := big.Zero()
err := st.ForEach(func(a address.Address, actor *types.Actor) error {
switch {
case actor.Balance.IsZero():
// Do nothing for zero-balance actors
break
case a == _init.Address ||
a == reward.Address ||
a == verifreg.Address ||
// The power actor itself should never receive funds
a == power.Address ||
a == builtin.SystemActorAddr ||
a == builtin.CronActorAddr ||
a == builtin.BurntFundsActorAddr ||
a == builtin.SaftAddress ||
a == builtin.ReserveAddress:
unCirc = big.Add(unCirc, actor.Balance)
case a == market.Address:
mst, err := market.Load(sm.cs.ActorStore(ctx), actor)
if err != nil {
return err
}
lb, err := mst.TotalLocked()
if err != nil {
return err
}
circ = big.Add(circ, big.Sub(actor.Balance, lb))
unCirc = big.Add(unCirc, lb)
case builtin.IsAccountActor(actor.Code) || builtin.IsPaymentChannelActor(actor.Code):
circ = big.Add(circ, actor.Balance)
case builtin.IsStorageMinerActor(actor.Code):
mst, err := miner.Load(sm.cs.ActorStore(ctx), actor)
if err != nil {
return err
}
ab, err := mst.AvailableBalance(actor.Balance)
if err == nil {
circ = big.Add(circ, ab)
unCirc = big.Add(unCirc, big.Sub(actor.Balance, ab))
} else {
// Assume any error is because the miner state is "broken" (lower actor balance than locked funds)
// In this case, the actor's entire balance is considered "uncirculating"
unCirc = big.Add(unCirc, actor.Balance)
}
case builtin.IsMultisigActor(actor.Code):
mst, err := multisig.Load(sm.cs.ActorStore(ctx), actor)
if err != nil {
return err
}
lb, err := mst.LockedBalance(height)
if err != nil {
return err
}
ab := big.Sub(actor.Balance, lb)
circ = big.Add(circ, big.Max(ab, big.Zero()))
unCirc = big.Add(unCirc, big.Min(actor.Balance, lb))
default:
return xerrors.Errorf("unexpected actor: %s", a)
}
return nil
})
if err != nil {
return types.EmptyInt, err
}
total := big.Add(circ, unCirc)
if !total.Equals(types.TotalFilecoinInt) {
return types.EmptyInt, xerrors.Errorf("total filecoin didn't add to expected amount: %s != %s", total, types.TotalFilecoinInt)
}
return circ, nil
}


@ -1,540 +1,38 @@
package stmgr
import (
-"bytes"
"context"
"fmt"
-"os"
"reflect"
"runtime"
"strings"
-exported5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/exported"
-"github.com/filecoin-project/go-state-types/big"
-"github.com/filecoin-project/go-state-types/network"
-cid "github.com/ipfs/go-cid"
+"github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
-"github.com/filecoin-project/go-bitfield"
"github.com/filecoin-project/go-state-types/abi"
-"github.com/filecoin-project/go-state-types/crypto"
+"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/rt"
exported0 "github.com/filecoin-project/specs-actors/actors/builtin/exported"
exported2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/exported"
exported3 "github.com/filecoin-project/specs-actors/v3/actors/builtin/exported"
exported4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/exported"
+exported5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/exported"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/actors/builtin"
init_ "github.com/filecoin-project/lotus/chain/actors/builtin/init"
-"github.com/filecoin-project/lotus/chain/actors/builtin/market"
-"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
-"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/actors/policy"
-"github.com/filecoin-project/lotus/chain/beacon"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
-"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
"github.com/filecoin-project/lotus/node/modules/dtypes"
)
func GetNetworkName(ctx context.Context, sm *StateManager, st cid.Cid) (dtypes.NetworkName, error) {
act, err := sm.LoadActorRaw(ctx, init_.Address, st)
if err != nil {
return "", err
}
ias, err := init_.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return "", err
}
return ias.NetworkName()
}
func GetMinerWorkerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (address.Address, error) {
state, err := sm.StateTree(st)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load state tree: %w", err)
}
act, err := state.GetActor(maddr)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
info, err := mas.Info()
if err != nil {
return address.Undef, xerrors.Errorf("failed to load actor info: %w", err)
}
return vm.ResolveToKeyAddr(state, sm.cs.ActorStore(ctx), info.Worker)
}
func GetPower(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (power.Claim, power.Claim, bool, error) {
return GetPowerRaw(ctx, sm, ts.ParentState(), maddr)
}
func GetPowerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (power.Claim, power.Claim, bool, error) {
act, err := sm.LoadActorRaw(ctx, power.Address, st)
if err != nil {
return power.Claim{}, power.Claim{}, false, xerrors.Errorf("(get sset) failed to load power actor state: %w", err)
}
pas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
tpow, err := pas.TotalPower()
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
var mpow power.Claim
var minpow bool
if maddr != address.Undef {
var found bool
mpow, found, err = pas.MinerPower(maddr)
if err != nil || !found {
return power.Claim{}, tpow, false, err
}
minpow, err = pas.MinerNominalPowerMeetsConsensusMinimum(maddr)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
}
return mpow, tpow, minpow, nil
}
func PreCommitInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorPreCommitOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetPrecommittedSector(sid)
}
func MinerSectorInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetSector(sid)
}
func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwrapper.Verifier, sm *StateManager, st cid.Cid, maddr address.Address, rand abi.PoStRandomness) ([]builtin.SectorInfo, error) {
act, err := sm.LoadActorRaw(ctx, maddr, st)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
var provingSectors bitfield.BitField
if nv < network.Version7 {
allSectors, err := miner.AllPartSectors(mas, miner.Partition.AllSectors)
if err != nil {
return nil, xerrors.Errorf("get all sectors: %w", err)
}
faultySectors, err := miner.AllPartSectors(mas, miner.Partition.FaultySectors)
if err != nil {
return nil, xerrors.Errorf("get faulty sectors: %w", err)
}
provingSectors, err = bitfield.SubtractBitField(allSectors, faultySectors)
if err != nil {
return nil, xerrors.Errorf("calc proving sectors: %w", err)
}
} else {
provingSectors, err = miner.AllPartSectors(mas, miner.Partition.ActiveSectors)
if err != nil {
return nil, xerrors.Errorf("get active sectors sectors: %w", err)
}
}
numProvSect, err := provingSectors.Count()
if err != nil {
return nil, xerrors.Errorf("failed to count bits: %w", err)
}
// TODO(review): is this right? feels fishy to me
if numProvSect == 0 {
return nil, nil
}
info, err := mas.Info()
if err != nil {
return nil, xerrors.Errorf("getting miner info: %w", err)
}
mid, err := address.IDFromAddress(maddr)
if err != nil {
return nil, xerrors.Errorf("getting miner ID: %w", err)
}
proofType, err := miner.WinningPoStProofTypeFromWindowPoStProofType(nv, info.WindowPoStProofType)
if err != nil {
return nil, xerrors.Errorf("determining winning post proof type: %w", err)
}
ids, err := pv.GenerateWinningPoStSectorChallenge(ctx, proofType, abi.ActorID(mid), rand, numProvSect)
if err != nil {
return nil, xerrors.Errorf("generating winning post challenges: %w", err)
}
iter, err := provingSectors.BitIterator()
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
// Select winning sectors by _index_ in the all-sectors bitfield.
selectedSectors := bitfield.New()
prev := uint64(0)
for _, n := range ids {
sno, err := iter.Nth(n - prev)
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
selectedSectors.Set(sno)
prev = n
}
sectors, err := mas.LoadSectors(&selectedSectors)
if err != nil {
return nil, xerrors.Errorf("loading proving sectors: %w", err)
}
out := make([]builtin.SectorInfo, len(sectors))
for i, sinfo := range sectors {
out[i] = builtin.SectorInfo{
SealProof: sinfo.SealProof,
SectorNumber: sinfo.SectorNumber,
SealedCID: sinfo.SealedCID,
}
}
return out, nil
}
func GetMinerSlashed(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (bool, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return false, xerrors.Errorf("failed to load power actor: %w", err)
}
spas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return false, xerrors.Errorf("failed to load power actor state: %w", err)
}
_, ok, err := spas.MinerPower(maddr)
if err != nil {
return false, xerrors.Errorf("getting miner power: %w", err)
}
if !ok {
return true, nil
}
return false, nil
}
func GetStorageDeal(ctx context.Context, sm *StateManager, dealID abi.DealID, ts *types.TipSet) (*api.MarketDeal, error) {
act, err := sm.LoadActor(ctx, market.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor: %w", err)
}
state, err := market.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor state: %w", err)
}
proposals, err := state.Proposals()
if err != nil {
return nil, err
}
proposal, found, err := proposals.Get(dealID)
if err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf(
"deal %d not found "+
"- deal may not have completed sealing before deal proposal "+
"start epoch, or deal may have been slashed",
dealID)
}
states, err := state.States()
if err != nil {
return nil, err
}
st, found, err := states.Get(dealID)
if err != nil {
return nil, err
}
if !found {
st = market.EmptyDealState()
}
return &api.MarketDeal{
Proposal: *proposal,
State: *st,
}, nil
}
func ListMinerActors(ctx context.Context, sm *StateManager, ts *types.TipSet) ([]address.Address, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor: %w", err)
}
powState, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor state: %w", err)
}
return powState.ListAllMiners()
}
func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch, msgs []*types.Message, ts *types.TipSet) (cid.Cid, []*api.InvocResult, error) {
if ts == nil {
ts = sm.cs.GetHeaviestTipSet()
}
base, trace, err := sm.ExecutionTrace(ctx, ts)
if err != nil {
return cid.Undef, nil, err
}
for i := ts.Height(); i < height; i++ {
// handle state forks
base, err = sm.handleStateForks(ctx, base, i, &InvocationTracer{trace: &trace}, ts)
if err != nil {
return cid.Undef, nil, xerrors.Errorf("error handling state forks: %w", err)
}
// TODO: should we also run cron here?
}
r := store.NewChainRand(sm.cs, ts.Cids())
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: height,
Rand: r,
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.cs.VMSys(),
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: ts.Blocks()[0].ParentBaseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
vmi, err := sm.newVM(ctx, vmopt)
if err != nil {
return cid.Undef, nil, err
}
for i, msg := range msgs {
// TODO: Use the signed message length for secp messages
ret, err := vmi.ApplyMessage(ctx, msg)
if err != nil {
return cid.Undef, nil, xerrors.Errorf("applying message %s: %w", msg.Cid(), err)
}
if ret.ExitCode != 0 {
log.Infof("compute state apply message %d failed (exit: %d): %s", i, ret.ExitCode, ret.ActorErr)
}
}
root, err := vmi.Flush(ctx)
if err != nil {
return cid.Undef, nil, err
}
return root, trace, nil
}
func LookbackStateGetterForTipset(sm *StateManager, ts *types.TipSet) vm.LookbackStateGetter {
return func(ctx context.Context, round abi.ChainEpoch) (*state.StateTree, error) {
_, st, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, err
}
return sm.StateTree(st)
}
}
func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.TipSet, round abi.ChainEpoch) (*types.TipSet, cid.Cid, error) {
var lbr abi.ChainEpoch
lb := policy.GetWinningPoStSectorSetLookback(sm.GetNtwkVersion(ctx, round))
if round > lb {
lbr = round - lb
}
// more null blocks than our lookback
if lbr >= ts.Height() {
// This should never happen at this point, but may happen before
// network version 3 (where the lookback was only 10 blocks).
st, _, err := sm.TipSetState(ctx, ts)
if err != nil {
return nil, cid.Undef, err
}
return ts, st, nil
}
// Get the tipset after the lookback tipset, or the next non-null one.
nextTs, err := sm.ChainStore().GetTipsetByHeight(ctx, lbr+1, ts, false)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("failed to get lookback tipset+1: %w", err)
}
if lbr > nextTs.Height() {
return nil, cid.Undef, xerrors.Errorf("failed to find non-null tipset %s (%d) which is known to exist, found %s (%d)", ts.Key(), ts.Height(), nextTs.Key(), nextTs.Height())
}
lbts, err := sm.ChainStore().GetTipSetFromKey(nextTs.Parents())
if err != nil {
return nil, cid.Undef, xerrors.Errorf("failed to resolve lookback tipset: %w", err)
}
return lbts, nextTs.ParentState(), nil
}
func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
ts, err := sm.ChainStore().LoadTipSet(tsk)
if err != nil {
return nil, xerrors.Errorf("failed to load tipset for mining base: %w", err)
}
prev, err := sm.ChainStore().GetLatestBeaconEntry(ts)
if err != nil {
if os.Getenv("LOTUS_IGNORE_DRAND") != "_yes_" {
return nil, xerrors.Errorf("failed to get latest beacon entry: %w", err)
}
prev = &types.BeaconEntry{}
}
entries, err := beacon.BeaconEntriesForBlock(ctx, bcs, round, ts.Height(), *prev)
if err != nil {
return nil, err
}
rbase := *prev
if len(entries) > 0 {
rbase = entries[len(entries)-1]
}
lbts, lbst, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, xerrors.Errorf("getting lookback miner actor state: %w", err)
}
act, err := sm.LoadActorRaw(ctx, maddr, lbst)
if xerrors.Is(err, types.ErrActorNotFound) {
_, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("loading miner in current state: %w", err)
}
return nil, nil
}
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
buf := new(bytes.Buffer)
if err := maddr.MarshalCBOR(buf); err != nil {
return nil, xerrors.Errorf("failed to marshal miner address: %w", err)
}
prand, err := store.DrawRandomness(rbase.Data, crypto.DomainSeparationTag_WinningPoStChallengeSeed, round, buf.Bytes())
if err != nil {
return nil, xerrors.Errorf("failed to get randomness for winning post: %w", err)
}
nv := sm.GetNtwkVersion(ctx, ts.Height())
sectors, err := GetSectorsForWinningPoSt(ctx, nv, pv, sm, lbst, maddr, prand)
if err != nil {
return nil, xerrors.Errorf("getting winning post proving set: %w", err)
}
if len(sectors) == 0 {
return nil, nil
}
mpow, tpow, _, err := GetPowerRaw(ctx, sm, lbst, maddr)
if err != nil {
return nil, xerrors.Errorf("failed to get power: %w", err)
}
info, err := mas.Info()
if err != nil {
return nil, err
}
worker, err := sm.ResolveToKeyAddress(ctx, info.Worker, ts)
if err != nil {
return nil, xerrors.Errorf("resolving worker address: %w", err)
}
// TODO: Not ideal performance...This method reloads miner and power state (already looked up here and in GetPowerRaw)
eligible, err := MinerEligibleToMine(ctx, sm, maddr, ts, lbts)
if err != nil {
return nil, xerrors.Errorf("determining miner eligibility: %w", err)
}
return &api.MiningBaseInfo{
MinerPower: mpow.QualityAdjPower,
NetworkPower: tpow.QualityAdjPower,
Sectors: sectors,
WorkerKey: worker,
SectorSize: info.SectorSize,
PrevBeaconEntry: *prev,
BeaconEntries: entries,
EligibleForMining: eligible,
}, nil
}
type MethodMeta struct {
Name string
@ -621,86 +119,124 @@ func GetParamType(actCode cid.Cid, method abi.MethodNum) (cbg.CBORUnmarshaler, e
return reflect.New(m.Params.Elem()).Interface().(cbg.CBORUnmarshaler), nil
}
-func minerHasMinPower(ctx context.Context, sm *StateManager, addr address.Address, ts *types.TipSet) (bool, error) {
-pact, err := sm.LoadActor(ctx, power.Address, ts)
-if err != nil {
-return false, xerrors.Errorf("loading power actor state: %w", err)
-}
-ps, err := power.Load(sm.cs.ActorStore(ctx), pact)
-if err != nil {
-return false, err
-}
-return ps.MinerNominalPowerMeetsConsensusMinimum(addr)
-}
+func GetNetworkName(ctx context.Context, sm *StateManager, st cid.Cid) (dtypes.NetworkName, error) {
+act, err := sm.LoadActorRaw(ctx, init_.Address, st)
+if err != nil {
+return "", err
+}
+ias, err := init_.Load(sm.cs.ActorStore(ctx), act)
+if err != nil {
+return "", err
+}
+return ias.NetworkName()
+}
-func MinerEligibleToMine(ctx context.Context, sm *StateManager, addr address.Address, baseTs *types.TipSet, lookbackTs *types.TipSet) (bool, error) {
-hmp, err := minerHasMinPower(ctx, sm, addr, lookbackTs)
-// TODO: We're blurring the lines between a "runtime network version" and a "Lotus upgrade epoch", is that unavoidable?
-if sm.GetNtwkVersion(ctx, baseTs.Height()) <= network.Version3 {
-return hmp, err
-}
-if err != nil {
-return false, err
-}
-if !hmp {
-return false, nil
-}
-// Post actors v2, also check MinerEligibleForElection with base ts
-pact, err := sm.LoadActor(ctx, power.Address, baseTs)
-if err != nil {
-return false, xerrors.Errorf("loading power actor state: %w", err)
-}
-pstate, err := power.Load(sm.cs.ActorStore(ctx), pact)
-if err != nil {
-return false, err
-}
-mact, err := sm.LoadActor(ctx, addr, baseTs)
-if err != nil {
-return false, xerrors.Errorf("loading miner actor state: %w", err)
-}
-mstate, err := miner.Load(sm.cs.ActorStore(ctx), mact)
-if err != nil {
-return false, err
-}
-// Non-empty power claim.
-if claim, found, err := pstate.MinerPower(addr); err != nil {
-return false, err
-} else if !found {
-return false, err
-} else if claim.QualityAdjPower.LessThanEqual(big.Zero()) {
-return false, err
-}
-// No fee debt.
-if debt, err := mstate.FeeDebt(); err != nil {
-return false, err
-} else if !debt.IsZero() {
-return false, err
-}
-// No active consensus faults.
-if mInfo, err := mstate.Info(); err != nil {
-return false, err
-} else if baseTs.Height() <= mInfo.ConsensusFaultElapsed {
-return false, nil
-}
-return true, nil
-}
+func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch, msgs []*types.Message, ts *types.TipSet) (cid.Cid, []*api.InvocResult, error) {
+if ts == nil {
+ts = sm.cs.GetHeaviestTipSet()
+}
+base, trace, err := sm.ExecutionTrace(ctx, ts)
+if err != nil {
+return cid.Undef, nil, err
+}
+for i := ts.Height(); i < height; i++ {
+// handle state forks
+base, err = sm.handleStateForks(ctx, base, i, &InvocationTracer{trace: &trace}, ts)
+if err != nil {
+return cid.Undef, nil, xerrors.Errorf("error handling state forks: %w", err)
+}
+// TODO: should we also run cron here?
+}
+r := store.NewChainRand(sm.cs, ts.Cids())
+vmopt := &vm.VMOpts{
+StateBase: base,
+Epoch: height,
+Rand: r,
+Bstore: sm.cs.StateBlockstore(),
+Syscalls: sm.syscalls,
+CircSupplyCalc: sm.GetVMCirculatingSupply,
+NtwkVersion: sm.GetNtwkVersion,
+BaseFee: ts.Blocks()[0].ParentBaseFee,
+LookbackState: LookbackStateGetterForTipset(sm, ts),
+}
+vmi, err := sm.newVM(ctx, vmopt)
+if err != nil {
+return cid.Undef, nil, err
+}
+for i, msg := range msgs {
+// TODO: Use the signed message length for secp messages
+ret, err := vmi.ApplyMessage(ctx, msg)
+if err != nil {
+return cid.Undef, nil, xerrors.Errorf("applying message %s: %w", msg.Cid(), err)
+}
+if ret.ExitCode != 0 {
+log.Infof("compute state apply message %d failed (exit: %d): %s", i, ret.ExitCode, ret.ActorErr)
+}
+}
+root, err := vmi.Flush(ctx)
+if err != nil {
+return cid.Undef, nil, err
+}
+return root, trace, nil
+}
-func CheckTotalFIL(ctx context.Context, sm *StateManager, ts *types.TipSet) (abi.TokenAmount, error) {
-str, err := state.LoadStateTree(sm.ChainStore().ActorStore(ctx), ts.ParentState())
+func LookbackStateGetterForTipset(sm *StateManager, ts *types.TipSet) vm.LookbackStateGetter {
+return func(ctx context.Context, round abi.ChainEpoch) (*state.StateTree, error) {
+_, st, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
+if err != nil {
+return nil, err
+}
+return sm.StateTree(st)
+}
+}
+func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.TipSet, round abi.ChainEpoch) (*types.TipSet, cid.Cid, error) {
+var lbr abi.ChainEpoch
+lb := policy.GetWinningPoStSectorSetLookback(sm.GetNtwkVersion(ctx, round))
+if round > lb {
+lbr = round - lb
+}
+// more null blocks than our lookback
+if lbr >= ts.Height() {
+// This should never happen at this point, but may happen before
+// network version 3 (where the lookback was only 10 blocks).
+st, _, err := sm.TipSetState(ctx, ts)
+if err != nil {
+return nil, cid.Undef, err
+}
+return ts, st, nil
+}
+// Get the tipset after the lookback tipset, or the next non-null one.
+nextTs, err := sm.ChainStore().GetTipsetByHeight(ctx, lbr+1, ts, false)
+if err != nil {
+return nil, cid.Undef, xerrors.Errorf("failed to get lookback tipset+1: %w", err)
+}
+if lbr > nextTs.Height() {
+return nil, cid.Undef, xerrors.Errorf("failed to find non-null tipset %s (%d) which is known to exist, found %s (%d)", ts.Key(), ts.Height(), nextTs.Key(), nextTs.Height())
+}
+lbts, err := sm.ChainStore().GetTipSetFromKey(nextTs.Parents())
+if err != nil {
+return nil, cid.Undef, xerrors.Errorf("failed to resolve lookback tipset: %w", err)
+}
+return lbts, nextTs.ParentState(), nil
+}
+func CheckTotalFIL(ctx context.Context, cs *store.ChainStore, ts *types.TipSet) (abi.TokenAmount, error) {
+str, err := state.LoadStateTree(cs.ActorStore(ctx), ts.ParentState())
if err != nil {
return abi.TokenAmount{}, err
}
@ -729,3 +265,21 @@ func MakeMsgGasCost(msg *types.Message, ret *vm.ApplyRet) api.MsgGasCost {
TotalCost: big.Sub(msg.RequiredFunds(), ret.GasCosts.Refund),
}
}
func (sm *StateManager) ListAllActors(ctx context.Context, ts *types.TipSet) ([]address.Address, error) {
stateTree, err := sm.StateTree(sm.parentState(ts))
if err != nil {
return nil, err
}
var out []address.Address
err = stateTree.ForEach(func(addr address.Address, act *types.Actor) error {
out = append(out, addr)
return nil
})
if err != nil {
return nil, err
}
return out, nil
}
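// Illustrative sketch (hypothetical caller, not from the diff): combining the relocated
// helpers, enumerate the actors at a tipset and sanity-check that the state tree still
// accounts for the full FIL supply.
func sanityCheckState(ctx context.Context, sm *StateManager, cs *store.ChainStore, ts *types.TipSet) error {
	addrs, err := sm.ListAllActors(ctx, ts)
	if err != nil {
		return err
	}
	log.Infof("state tree at %s has %d actors", ts.Key(), len(addrs))

	total, err := CheckTotalFIL(ctx, cs, ts)
	if err != nil {
		return err
	}
	if !total.Equals(types.TotalFilecoinInt) {
		return xerrors.Errorf("unexpected total FIL: %s != %s", total, types.TotalFilecoinInt)
	}
	return nil
}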


@ -31,7 +31,7 @@ func TestIndexSeeks(t *testing.T) {
ctx := context.TODO()
nbs := blockstore.NewMemorySync()
-cs := store.NewChainStore(nbs, nbs, syncds.MutexWrap(datastore.NewMapDatastore()), nil, nil)
+cs := store.NewChainStore(nbs, nbs, syncds.MutexWrap(datastore.NewMapDatastore()), nil)
defer cs.Close() //nolint:errcheck
_, err = cs.Import(bytes.NewReader(gencar))

chain/store/messages.go (new file, 303 lines)

@ -0,0 +1,303 @@
package store
import (
"context"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
block "github.com/ipfs/go-block-format"
cbor "github.com/ipfs/go-ipld-cbor"
cbg "github.com/whyrusleeping/cbor-gen"
"github.com/filecoin-project/go-address"
blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/types"
)
type storable interface {
ToStorageBlock() (block.Block, error)
}
func PutMessage(bs bstore.Blockstore, m storable) (cid.Cid, error) {
b, err := m.ToStorageBlock()
if err != nil {
return cid.Undef, err
}
if err := bs.Put(b); err != nil {
return cid.Undef, err
}
return b.Cid(), nil
}
func (cs *ChainStore) PutMessage(m storable) (cid.Cid, error) {
return PutMessage(cs.chainBlockstore, m)
}
func (cs *ChainStore) GetCMessage(c cid.Cid) (types.ChainMsg, error) {
m, err := cs.GetMessage(c)
if err == nil {
return m, nil
}
if err != bstore.ErrNotFound {
log.Warnf("GetCMessage: unexpected error getting unsigned message: %s", err)
}
return cs.GetSignedMessage(c)
}
func (cs *ChainStore) GetMessage(c cid.Cid) (*types.Message, error) {
var msg *types.Message
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) GetSignedMessage(c cid.Cid) (*types.SignedMessage, error) {
var msg *types.SignedMessage
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeSignedMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) readAMTCids(root cid.Cid) ([]cid.Cid, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), root)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var (
cids []cid.Cid
cborCid cbg.CborCid
)
if err := a.ForEach(&cborCid, func(i int64) error {
c := cid.Cid(cborCid)
cids = append(cids, c)
return nil
}); err != nil {
return nil, xerrors.Errorf("failed to traverse amt: %w", err)
}
if uint64(len(cids)) != a.Length() {
return nil, xerrors.Errorf("found %d cids, expected %d", len(cids), a.Length())
}
return cids, nil
}
type BlockMessages struct {
Miner address.Address
BlsMessages []types.ChainMsg
SecpkMessages []types.ChainMsg
WinCount int64
}
func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, error) {
applied := make(map[address.Address]uint64)
cst := cbor.NewCborStore(cs.stateBlockstore)
st, err := state.LoadStateTree(cst, ts.Blocks()[0].ParentStateRoot)
if err != nil {
return nil, xerrors.Errorf("failed to load state tree at tipset %s: %w", ts, err)
}
selectMsg := func(m *types.Message) (bool, error) {
var sender address.Address
if ts.Height() >= build.UpgradeHyperdriveHeight {
sender, err = st.LookupID(m.From)
if err != nil {
return false, err
}
} else {
sender = m.From
}
// The first match for a sender is guaranteed to have correct nonce -- the block isn't valid otherwise
if _, ok := applied[sender]; !ok {
applied[sender] = m.Nonce
}
if applied[sender] != m.Nonce {
return false, nil
}
applied[sender]++
return true, nil
}
var out []BlockMessages
for _, b := range ts.Blocks() {
bms, sms, err := cs.MessagesForBlock(b)
if err != nil {
return nil, xerrors.Errorf("failed to get messages for block: %w", err)
}
bm := BlockMessages{
Miner: b.Miner,
BlsMessages: make([]types.ChainMsg, 0, len(bms)),
SecpkMessages: make([]types.ChainMsg, 0, len(sms)),
WinCount: b.ElectionProof.WinCount,
}
for _, bmsg := range bms {
b, err := selectMsg(bmsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.BlsMessages = append(bm.BlsMessages, bmsg)
}
}
for _, smsg := range sms {
b, err := selectMsg(smsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.SecpkMessages = append(bm.SecpkMessages, smsg)
}
}
out = append(out, bm)
}
return out, nil
}
func (cs *ChainStore) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) {
bmsgs, err := cs.BlockMsgsForTipset(ts)
if err != nil {
return nil, err
}
var out []types.ChainMsg
for _, bm := range bmsgs {
for _, blsm := range bm.BlsMessages {
out = append(out, blsm)
}
for _, secm := range bm.SecpkMessages {
out = append(out, secm)
}
}
return out, nil
}
type mmCids struct {
bls []cid.Cid
secpk []cid.Cid
}
func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error) {
o, ok := cs.mmCache.Get(mmc)
if ok {
mmcids := o.(*mmCids)
return mmcids.bls, mmcids.secpk, nil
}
cst := cbor.NewCborStore(cs.chainLocalBlockstore)
var msgmeta types.MsgMeta
if err := cst.Get(context.TODO(), mmc, &msgmeta); err != nil {
return nil, nil, xerrors.Errorf("failed to load msgmeta (%s): %w", mmc, err)
}
blscids, err := cs.readAMTCids(msgmeta.BlsMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls message cids for block: %w", err)
}
secpkcids, err := cs.readAMTCids(msgmeta.SecpkMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk message cids for block: %w", err)
}
cs.mmCache.Add(mmc, &mmCids{
bls: blscids,
secpk: secpkcids,
})
return blscids, secpkcids, nil
}
func (cs *ChainStore) MessagesForBlock(b *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
blscids, secpkcids, err := cs.ReadMsgMetaCids(b.Messages)
if err != nil {
return nil, nil, err
}
blsmsgs, err := cs.LoadMessagesFromCids(blscids)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls messages for block: %w", err)
}
secpkmsgs, err := cs.LoadSignedMessagesFromCids(secpkcids)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk messages for block: %w", err)
}
return blsmsgs, secpkmsgs, nil
}
func (cs *ChainStore) GetParentReceipt(b *types.BlockHeader, i int) (*types.MessageReceipt, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), b.ParentMessageReceipts)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var r types.MessageReceipt
if found, err := a.Get(uint64(i), &r); err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf("failed to find receipt %d", i)
}
return &r, nil
}
func (cs *ChainStore) LoadMessagesFromCids(cids []cid.Cid) ([]*types.Message, error) {
msgs := make([]*types.Message, 0, len(cids))
for i, c := range cids {
m, err := cs.GetMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
func (cs *ChainStore) LoadSignedMessagesFromCids(cids []cid.Cid) ([]*types.SignedMessage, error) {
msgs := make([]*types.SignedMessage, 0, len(cids))
for i, c := range cids {
m, err := cs.GetSignedMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
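// Illustrative usage sketch (hypothetical helper, not from the diff): pull the messages
// referenced by the heaviest tipset, after the per-sender nonce filtering that
// BlockMsgsForTipset applies, and log their routing.
func dumpHeadMessages(cs *ChainStore) error {
	ts := cs.GetHeaviestTipSet()
	msgs, err := cs.MessagesForTipset(ts)
	if err != nil {
		return err
	}
	for _, m := range msgs {
		vmm := m.VMMessage()
		log.Infof("%s: %s -> %s (nonce %d)", m.Cid(), vmm.From, vmm.To, vmm.Nonce)
	}
	return nil
}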

chain/store/rand.go (new file, 182 lines)

@ -0,0 +1,182 @@
package store
import (
"context"
"encoding/binary"
"os"
"github.com/ipfs/go-cid"
"github.com/minio/blake2b-simd"
"go.opencensus.io/trace"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
)
func DrawRandomness(rbase []byte, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
h := blake2b.New256()
if err := binary.Write(h, binary.BigEndian, int64(pers)); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
VRFDigest := blake2b.Sum256(rbase)
_, err := h.Write(VRFDigest[:])
if err != nil {
return nil, xerrors.Errorf("hashing VRFDigest: %w", err)
}
if err := binary.Write(h, binary.BigEndian, round); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
_, err = h.Write(entropy)
if err != nil {
return nil, xerrors.Errorf("hashing entropy: %w", err)
}
return h.Sum(nil), nil
}
func (cs *ChainStore) GetBeaconRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetBeaconRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetBeaconRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetBeaconRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
be, err := cs.GetLatestBeaconEntry(randTs)
if err != nil {
return nil, err
}
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(be.Data, pers, round, entropy)
}
func (cs *ChainStore) GetChainRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetChainRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetChainRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetChainRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
mtb := randTs.MinTicketBlock()
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(mtb.Ticket.VRFProof, pers, round, entropy)
}
func (cs *ChainStore) GetLatestBeaconEntry(ts *types.TipSet) (*types.BeaconEntry, error) {
cur := ts
for i := 0; i < 20; i++ {
cbe := cur.Blocks()[0].BeaconEntries
if len(cbe) > 0 {
return &cbe[len(cbe)-1], nil
}
if cur.Height() == 0 {
return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry")
}
next, err := cs.LoadTipSet(cur.Parents())
if err != nil {
return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err)
}
cur = next
}
if os.Getenv("LOTUS_IGNORE_DRAND") == "_yes_" {
return &types.BeaconEntry{
Data: []byte{9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9},
}, nil
}
return nil, xerrors.Errorf("found NO beacon entries in the 20 latest tipsets")
}
type chainRand struct {
cs *ChainStore
blks []cid.Cid
}
func NewChainRand(cs *ChainStore, blks []cid.Cid) vm.Rand {
return &chainRand{
cs: cs,
blks: blks,
}
}
func (cr *chainRand) GetChainRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetChainRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cs *ChainStore) GetTipSetFromKey(tsk types.TipSetKey) (*types.TipSet, error) {
if tsk.IsEmpty() {
return cs.GetHeaviestTipSet(), nil
}
return cs.LoadTipSet(tsk)
}
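// Illustrative sketch (hypothetical helper, not from the diff): one way to consume this
// file is through the vm.Rand adapter returned by NewChainRand, which routes to the
// beacon- or ticket-based draws defined above.
func exampleBeaconRandomness(ctx context.Context, cs *ChainStore, ts *types.TipSet, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
	r := NewChainRand(cs, ts.Cids())
	return r.GetBeaconRandomnessLookingForward(ctx, crypto.DomainSeparationTag_WinningPoStChallengeSeed, round, entropy)
}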

chain/store/snapshot.go (new file, 205 lines)

@ -0,0 +1,205 @@
package store
import (
"bytes"
"context"
"io"
"github.com/ipfs/go-cid"
"github.com/ipld/go-car"
carutil "github.com/ipld/go-car/util"
cbg "github.com/whyrusleeping/cbor-gen"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/types"
)
func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs bool, w io.Writer) error {
h := &car.CarHeader{
Roots: ts.Cids(),
Version: 1,
}
if err := car.WriteHeader(h, w); err != nil {
return xerrors.Errorf("failed to write car header: %s", err)
}
unionBs := bstore.Union(cs.stateBlockstore, cs.chainBlockstore)
return cs.WalkSnapshot(ctx, ts, inclRecentRoots, skipOldMsgs, true, func(c cid.Cid) error {
blk, err := unionBs.Get(c)
if err != nil {
return xerrors.Errorf("writing object to car, bs.Get: %w", err)
}
if err := carutil.LdWrite(w, c.Bytes(), blk.RawData()); err != nil {
return xerrors.Errorf("failed to write block to car output: %w", err)
}
return nil
})
}
func (cs *ChainStore) Import(r io.Reader) (*types.TipSet, error) {
// TODO: writing only to the state blockstore is incorrect.
// At this time, both the state and chain blockstores are backed by the
// universal store. When we physically segregate the stores, we will need
// to route state objects to the state blockstore, and chain objects to
// the chain blockstore.
header, err := car.LoadCar(cs.StateBlockstore(), r)
if err != nil {
return nil, xerrors.Errorf("loadcar failed: %w", err)
}
root, err := cs.LoadTipSet(types.NewTipSetKey(header.Roots...))
if err != nil {
return nil, xerrors.Errorf("failed to load root tipset from chainfile: %w", err)
}
return root, nil
}
func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs, skipMsgReceipts bool, cb func(cid.Cid) error) error {
if ts == nil {
ts = cs.GetHeaviestTipSet()
}
seen := cid.NewSet()
walked := cid.NewSet()
blocksToWalk := ts.Cids()
currentMinHeight := ts.Height()
walkChain := func(blk cid.Cid) error {
if !seen.Visit(blk) {
return nil
}
if err := cb(blk); err != nil {
return err
}
data, err := cs.chainBlockstore.Get(blk)
if err != nil {
return xerrors.Errorf("getting block: %w", err)
}
var b types.BlockHeader
if err := b.UnmarshalCBOR(bytes.NewBuffer(data.RawData())); err != nil {
return xerrors.Errorf("unmarshaling block header (cid=%s): %w", blk, err)
}
if currentMinHeight > b.Height {
currentMinHeight = b.Height
if currentMinHeight%builtin.EpochsInDay == 0 {
log.Infow("export", "height", currentMinHeight)
}
}
var cids []cid.Cid
if !skipOldMsgs || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.Messages) {
mcids, err := recurseLinks(cs.chainBlockstore, walked, b.Messages, []cid.Cid{b.Messages})
if err != nil {
return xerrors.Errorf("recursing messages failed: %w", err)
}
cids = mcids
}
}
if b.Height > 0 {
for _, p := range b.Parents {
blocksToWalk = append(blocksToWalk, p)
}
} else {
// include the genesis block
cids = append(cids, b.Parents...)
}
out := cids
if b.Height == 0 || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.ParentStateRoot) {
cids, err := recurseLinks(cs.stateBlockstore, walked, b.ParentStateRoot, []cid.Cid{b.ParentStateRoot})
if err != nil {
return xerrors.Errorf("recursing genesis state failed: %w", err)
}
out = append(out, cids...)
}
if !skipMsgReceipts && walked.Visit(b.ParentMessageReceipts) {
out = append(out, b.ParentMessageReceipts)
}
}
for _, c := range out {
if seen.Visit(c) {
if c.Prefix().Codec != cid.DagCBOR {
continue
}
if err := cb(c); err != nil {
return err
}
}
}
return nil
}
log.Infow("export started")
exportStart := build.Clock.Now()
for len(blocksToWalk) > 0 {
next := blocksToWalk[0]
blocksToWalk = blocksToWalk[1:]
if err := walkChain(next); err != nil {
return xerrors.Errorf("walk chain failed: %w", err)
}
}
log.Infow("export finished", "duration", build.Clock.Now().Sub(exportStart).Seconds())
return nil
}
func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.Cid) ([]cid.Cid, error) {
if root.Prefix().Codec != cid.DagCBOR {
return in, nil
}
data, err := bs.Get(root)
if err != nil {
return nil, xerrors.Errorf("recurse links get (%s) failed: %w", root, err)
}
var rerr error
err = cbg.ScanForLinks(bytes.NewReader(data.RawData()), func(c cid.Cid) {
if rerr != nil {
// No error return on ScanForLinks :(
return
}
// traversed this already...
if !walked.Visit(c) {
return
}
in = append(in, c)
var err error
in, err = recurseLinks(bs, walked, c, in)
if err != nil {
rerr = err
}
})
if err != nil {
return nil, xerrors.Errorf("scanning for links failed: %w", err)
}
return in, rerr
}
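// Illustrative usage sketch (hypothetical helper, not from the diff; needs "os" among
// the imports): write a CAR snapshot of the chain to disk, keeping full state roots for
// roughly the last day of epochs and skipping old messages.
func writeSnapshot(ctx context.Context, cs *ChainStore, ts *types.TipSet, path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close() //nolint:errcheck

	return cs.Export(ctx, ts, builtin.EpochsInDay, true, f)
}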


@ -1,36 +1,24 @@
package store
import (
-"bytes"
"context"
-"encoding/binary"
"encoding/json"
"errors"
-"io"
"os"
"strconv"
"strings"
"sync"
"time"
-"github.com/filecoin-project/lotus/chain/state"
"golang.org/x/sync/errgroup"
-"github.com/filecoin-project/go-state-types/crypto"
-"github.com/minio/blake2b-simd"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
-blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/lotus/api"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/adt"
-"github.com/filecoin-project/lotus/chain/actors/builtin"
-"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/journal"
"github.com/filecoin-project/lotus/metrics"
@ -48,9 +36,6 @@ import (
"github.com/ipfs/go-datastore/query"
cbor "github.com/ipfs/go-ipld-cbor"
logging "github.com/ipfs/go-log/v2"
-"github.com/ipld/go-car"
-carutil "github.com/ipld/go-car/util"
-cbg "github.com/whyrusleeping/cbor-gen"
"github.com/whyrusleeping/pubsub"
"golang.org/x/xerrors"
)
@ -134,11 +119,9 @@ type ChainStore struct {
reorgCh chan<- reorg
reorgNotifeeCh chan ReorgNotifee
-mmCache *lru.ARCCache
+mmCache *lru.ARCCache // msg meta cache (mh.Messages -> secp, bls []cid)
tsCache *lru.ARCCache
-vmcalls vm.SyscallBuilder
evtTypes [1]journal.EventType
journal journal.Journal
@ -146,7 +129,7 @@ type ChainStore struct {
wg sync.WaitGroup
}
-func NewChainStore(chainBs bstore.Blockstore, stateBs bstore.Blockstore, ds dstore.Batching, vmcalls vm.SyscallBuilder, j journal.Journal) *ChainStore {
+func NewChainStore(chainBs bstore.Blockstore, stateBs bstore.Blockstore, ds dstore.Batching, j journal.Journal) *ChainStore {
c, _ := lru.NewARC(DefaultMsgMetaCacheSize)
tsc, _ := lru.NewARC(DefaultTipSetCacheSize)
if j == nil {
@ -166,7 +149,6 @@ func NewChainStore(chainBs bstore.Blockstore, stateBs bstore.Blockstore, ds dsto
tipsets: make(map[abi.ChainEpoch][]cid.Cid),
mmCache: c,
tsCache: tsc,
-vmcalls: vmcalls,
cancelFn: cancel,
journal: j,
}
@ -988,27 +970,6 @@ func (cs *ChainStore) PersistBlockHeaders(b ...*types.BlockHeader) error {
return err
}
type storable interface {
ToStorageBlock() (block.Block, error)
}
func PutMessage(bs bstore.Blockstore, m storable) (cid.Cid, error) {
b, err := m.ToStorageBlock()
if err != nil {
return cid.Undef, err
}
if err := bs.Put(b); err != nil {
return cid.Undef, err
}
return b.Cid(), nil
}
func (cs *ChainStore) PutMessage(m storable) (cid.Cid, error) {
return PutMessage(cs.chainBlockstore, m)
}
func (cs *ChainStore) expandTipset(b *types.BlockHeader) (*types.TipSet, error) {
// Hold lock for the whole function for now, if it becomes a problem we can
// fix pretty easily
@ -1080,203 +1041,6 @@ func (cs *ChainStore) GetGenesis() (*types.BlockHeader, error) {
return cs.GetBlock(c)
}
func (cs *ChainStore) GetCMessage(c cid.Cid) (types.ChainMsg, error) {
m, err := cs.GetMessage(c)
if err == nil {
return m, nil
}
if err != bstore.ErrNotFound {
log.Warnf("GetCMessage: unexpected error getting unsigned message: %s", err)
}
return cs.GetSignedMessage(c)
}
func (cs *ChainStore) GetMessage(c cid.Cid) (*types.Message, error) {
var msg *types.Message
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) GetSignedMessage(c cid.Cid) (*types.SignedMessage, error) {
var msg *types.SignedMessage
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeSignedMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) readAMTCids(root cid.Cid) ([]cid.Cid, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), root)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var (
cids []cid.Cid
cborCid cbg.CborCid
)
if err := a.ForEach(&cborCid, func(i int64) error {
c := cid.Cid(cborCid)
cids = append(cids, c)
return nil
}); err != nil {
return nil, xerrors.Errorf("failed to traverse amt: %w", err)
}
if uint64(len(cids)) != a.Length() {
return nil, xerrors.Errorf("found %d cids, expected %d", len(cids), a.Length())
}
return cids, nil
}
type BlockMessages struct {
Miner address.Address
BlsMessages []types.ChainMsg
SecpkMessages []types.ChainMsg
WinCount int64
}
func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, error) {
applied := make(map[address.Address]uint64)
cst := cbor.NewCborStore(cs.stateBlockstore)
st, err := state.LoadStateTree(cst, ts.Blocks()[0].ParentStateRoot)
if err != nil {
return nil, xerrors.Errorf("failed to load state tree")
}
selectMsg := func(m *types.Message) (bool, error) {
var sender address.Address
if ts.Height() >= build.UpgradeHyperdriveHeight {
sender, err = st.LookupID(m.From)
if err != nil {
return false, err
}
} else {
sender = m.From
}
// The first match for a sender is guaranteed to have correct nonce -- the block isn't valid otherwise
if _, ok := applied[sender]; !ok {
applied[sender] = m.Nonce
}
if applied[sender] != m.Nonce {
return false, nil
}
applied[sender]++
return true, nil
}
var out []BlockMessages
for _, b := range ts.Blocks() {
bms, sms, err := cs.MessagesForBlock(b)
if err != nil {
return nil, xerrors.Errorf("failed to get messages for block: %w", err)
}
bm := BlockMessages{
Miner: b.Miner,
BlsMessages: make([]types.ChainMsg, 0, len(bms)),
SecpkMessages: make([]types.ChainMsg, 0, len(sms)),
WinCount: b.ElectionProof.WinCount,
}
for _, bmsg := range bms {
b, err := selectMsg(bmsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.BlsMessages = append(bm.BlsMessages, bmsg)
}
}
for _, smsg := range sms {
b, err := selectMsg(smsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.SecpkMessages = append(bm.SecpkMessages, smsg)
}
}
out = append(out, bm)
}
return out, nil
}
func (cs *ChainStore) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) {
bmsgs, err := cs.BlockMsgsForTipset(ts)
if err != nil {
return nil, err
}
var out []types.ChainMsg
for _, bm := range bmsgs {
for _, blsm := range bm.BlsMessages {
out = append(out, blsm)
}
for _, secm := range bm.SecpkMessages {
out = append(out, secm)
}
}
return out, nil
}
type mmCids struct {
bls []cid.Cid
secpk []cid.Cid
}
func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error) {
o, ok := cs.mmCache.Get(mmc)
if ok {
mmcids := o.(*mmCids)
return mmcids.bls, mmcids.secpk, nil
}
cst := cbor.NewCborStore(cs.chainLocalBlockstore)
var msgmeta types.MsgMeta
if err := cst.Get(context.TODO(), mmc, &msgmeta); err != nil {
return nil, nil, xerrors.Errorf("failed to load msgmeta (%s): %w", mmc, err)
}
blscids, err := cs.readAMTCids(msgmeta.BlsMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls message cids for block: %w", err)
}
secpkcids, err := cs.readAMTCids(msgmeta.SecpkMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk message cids for block: %w", err)
}
cs.mmCache.Add(mmc, &mmCids{
bls: blscids,
secpk: secpkcids,
})
return blscids, secpkcids, nil
}
// GetPath returns the sequence of atomic head change operations that
// need to be applied in order to switch the head of the chain from the `from`
// tipset to the `to` tipset.
@ -1304,71 +1068,6 @@ func (cs *ChainStore) GetPath(ctx context.Context, from types.TipSetKey, to type
return path, nil
}
func (cs *ChainStore) MessagesForBlock(b *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
blscids, secpkcids, err := cs.ReadMsgMetaCids(b.Messages)
if err != nil {
return nil, nil, err
}
blsmsgs, err := cs.LoadMessagesFromCids(blscids)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls messages for block: %w", err)
}
secpkmsgs, err := cs.LoadSignedMessagesFromCids(secpkcids)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk messages for block: %w", err)
}
return blsmsgs, secpkmsgs, nil
}
func (cs *ChainStore) GetParentReceipt(b *types.BlockHeader, i int) (*types.MessageReceipt, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), b.ParentMessageReceipts)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var r types.MessageReceipt
if found, err := a.Get(uint64(i), &r); err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf("failed to find receipt %d", i)
}
return &r, nil
}
func (cs *ChainStore) LoadMessagesFromCids(cids []cid.Cid) ([]*types.Message, error) {
msgs := make([]*types.Message, 0, len(cids))
for i, c := range cids {
m, err := cs.GetMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
func (cs *ChainStore) LoadSignedMessagesFromCids(cids []cid.Cid) ([]*types.SignedMessage, error) {
msgs := make([]*types.SignedMessage, 0, len(cids))
for i, c := range cids {
m, err := cs.GetSignedMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
// ChainBlockstore returns the chain blockstore. Currently the chain and state
// // stores are both backed by the same physical store, albeit with different
// // caching policies, but in the future they will segregate.
@ -1391,10 +1090,6 @@ func (cs *ChainStore) ActorStore(ctx context.Context) adt.Store {
return ActorStore(ctx, cs.stateBlockstore)
}
func (cs *ChainStore) VMSys() vm.SyscallBuilder {
return cs.vmcalls
}
func (cs *ChainStore) TryFillTipSet(ts *types.TipSet) (*FullTipSet, error) {
var out []*types.FullBlock
@ -1417,108 +1112,6 @@ func (cs *ChainStore) TryFillTipSet(ts *types.TipSet) (*FullTipSet, error) {
return NewFullTipSet(out), nil
}
func DrawRandomness(rbase []byte, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
h := blake2b.New256()
if err := binary.Write(h, binary.BigEndian, int64(pers)); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
VRFDigest := blake2b.Sum256(rbase)
_, err := h.Write(VRFDigest[:])
if err != nil {
return nil, xerrors.Errorf("hashing VRFDigest: %w", err)
}
if err := binary.Write(h, binary.BigEndian, round); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
_, err = h.Write(entropy)
if err != nil {
return nil, xerrors.Errorf("hashing entropy: %w", err)
}
return h.Sum(nil), nil
}
func (cs *ChainStore) GetBeaconRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetBeaconRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetBeaconRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetBeaconRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
be, err := cs.GetLatestBeaconEntry(randTs)
if err != nil {
return nil, err
}
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(be.Data, pers, round, entropy)
}
func (cs *ChainStore) GetChainRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetChainRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetChainRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetChainRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
mtb := randTs.MinTicketBlock()
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(mtb.Ticket.VRFProof, pers, round, entropy)
}
// GetTipsetByHeight returns the tipset on the chain behind 'ts' at the given // GetTipsetByHeight returns the tipset on the chain behind 'ts' at the given
// height. In the case that the given height is a null round, the 'prev' flag // height. In the case that the given height is a null round, the 'prev' flag
// selects the tipset before the null round if true, and the tipset following // selects the tipset before the null round if true, and the tipset following
@ -1555,252 +1148,3 @@ func (cs *ChainStore) GetTipsetByHeight(ctx context.Context, h abi.ChainEpoch, t
return cs.LoadTipSet(lbts.Parents()) return cs.LoadTipSet(lbts.Parents())
} }
func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.Cid) ([]cid.Cid, error) {
if root.Prefix().Codec != cid.DagCBOR {
return in, nil
}
data, err := bs.Get(root)
if err != nil {
return nil, xerrors.Errorf("recurse links get (%s) failed: %w", root, err)
}
var rerr error
err = cbg.ScanForLinks(bytes.NewReader(data.RawData()), func(c cid.Cid) {
if rerr != nil {
// No error return on ScanForLinks :(
return
}
// traversed this already...
if !walked.Visit(c) {
return
}
in = append(in, c)
var err error
in, err = recurseLinks(bs, walked, c, in)
if err != nil {
rerr = err
}
})
if err != nil {
return nil, xerrors.Errorf("scanning for links failed: %w", err)
}
return in, rerr
}
func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs bool, w io.Writer) error {
h := &car.CarHeader{
Roots: ts.Cids(),
Version: 1,
}
if err := car.WriteHeader(h, w); err != nil {
return xerrors.Errorf("failed to write car header: %s", err)
}
unionBs := bstore.Union(cs.stateBlockstore, cs.chainBlockstore)
return cs.WalkSnapshot(ctx, ts, inclRecentRoots, skipOldMsgs, true, func(c cid.Cid) error {
blk, err := unionBs.Get(c)
if err != nil {
return xerrors.Errorf("writing object to car, bs.Get: %w", err)
}
if err := carutil.LdWrite(w, c.Bytes(), blk.RawData()); err != nil {
return xerrors.Errorf("failed to write block to car output: %w", err)
}
return nil
})
}
func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs, skipMsgReceipts bool, cb func(cid.Cid) error) error {
if ts == nil {
ts = cs.GetHeaviestTipSet()
}
seen := cid.NewSet()
walked := cid.NewSet()
blocksToWalk := ts.Cids()
currentMinHeight := ts.Height()
walkChain := func(blk cid.Cid) error {
if !seen.Visit(blk) {
return nil
}
if err := cb(blk); err != nil {
return err
}
data, err := cs.chainBlockstore.Get(blk)
if err != nil {
return xerrors.Errorf("getting block: %w", err)
}
var b types.BlockHeader
if err := b.UnmarshalCBOR(bytes.NewBuffer(data.RawData())); err != nil {
return xerrors.Errorf("unmarshaling block header (cid=%s): %w", blk, err)
}
if currentMinHeight > b.Height {
currentMinHeight = b.Height
if currentMinHeight%builtin.EpochsInDay == 0 {
log.Infow("export", "height", currentMinHeight)
}
}
var cids []cid.Cid
if !skipOldMsgs || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.Messages) {
mcids, err := recurseLinks(cs.chainBlockstore, walked, b.Messages, []cid.Cid{b.Messages})
if err != nil {
return xerrors.Errorf("recursing messages failed: %w", err)
}
cids = mcids
}
}
if b.Height > 0 {
for _, p := range b.Parents {
blocksToWalk = append(blocksToWalk, p)
}
} else {
// include the genesis block
cids = append(cids, b.Parents...)
}
out := cids
if b.Height == 0 || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.ParentStateRoot) {
cids, err := recurseLinks(cs.stateBlockstore, walked, b.ParentStateRoot, []cid.Cid{b.ParentStateRoot})
if err != nil {
return xerrors.Errorf("recursing genesis state failed: %w", err)
}
out = append(out, cids...)
}
if !skipMsgReceipts && walked.Visit(b.ParentMessageReceipts) {
out = append(out, b.ParentMessageReceipts)
}
}
for _, c := range out {
if seen.Visit(c) {
if c.Prefix().Codec != cid.DagCBOR {
continue
}
if err := cb(c); err != nil {
return err
}
}
}
return nil
}
log.Infow("export started")
exportStart := build.Clock.Now()
for len(blocksToWalk) > 0 {
next := blocksToWalk[0]
blocksToWalk = blocksToWalk[1:]
if err := walkChain(next); err != nil {
return xerrors.Errorf("walk chain failed: %w", err)
}
}
log.Infow("export finished", "duration", build.Clock.Now().Sub(exportStart).Seconds())
return nil
}
func (cs *ChainStore) Import(r io.Reader) (*types.TipSet, error) {
// TODO: writing only to the state blockstore is incorrect.
// At this time, both the state and chain blockstores are backed by the
// universal store. When we physically segregate the stores, we will need
// to route state objects to the state blockstore, and chain objects to
// the chain blockstore.
header, err := car.LoadCar(cs.StateBlockstore(), r)
if err != nil {
return nil, xerrors.Errorf("loadcar failed: %w", err)
}
root, err := cs.LoadTipSet(types.NewTipSetKey(header.Roots...))
if err != nil {
return nil, xerrors.Errorf("failed to load root tipset from chainfile: %w", err)
}
return root, nil
}
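A hedged usage sketch of the Export and Import functions above, assuming the usual imports (context, io, os, plus the abi, store and types packages); the one-day recent-roots window (2880 epochs at 30s) and the file path are illustrative:

// Sketch only: export a snapshot ending at ts, keeping roughly one day of
// recent state roots, then re-import it into another ChainStore.
func snapshotRoundTrip(ctx context.Context, src, dst *store.ChainStore, ts *types.TipSet) (*types.TipSet, error) {
	f, err := os.Create("snapshot.car") // illustrative path
	if err != nil {
		return nil, err
	}
	defer f.Close() //nolint:errcheck
	// skipOldMsgs=true drops message trees older than the recent-roots window.
	if err := src.Export(ctx, ts, abi.ChainEpoch(2880), true, f); err != nil {
		return nil, err
	}
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return nil, err
	}
	return dst.Import(f)
}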
func (cs *ChainStore) GetLatestBeaconEntry(ts *types.TipSet) (*types.BeaconEntry, error) {
cur := ts
for i := 0; i < 20; i++ {
cbe := cur.Blocks()[0].BeaconEntries
if len(cbe) > 0 {
return &cbe[len(cbe)-1], nil
}
if cur.Height() == 0 {
return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry")
}
next, err := cs.LoadTipSet(cur.Parents())
if err != nil {
return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err)
}
cur = next
}
if os.Getenv("LOTUS_IGNORE_DRAND") == "_yes_" {
return &types.BeaconEntry{
Data: []byte{9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9},
}, nil
}
return nil, xerrors.Errorf("found NO beacon entries in the 20 latest tipsets")
}
type chainRand struct {
cs *ChainStore
blks []cid.Cid
}
func NewChainRand(cs *ChainStore, blks []cid.Cid) vm.Rand {
return &chainRand{
cs: cs,
blks: blks,
}
}
func (cr *chainRand) GetChainRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetChainRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
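NewChainRand above is how the VM obtains a randomness source anchored to a specific tipset. A minimal usage sketch, assuming the crypto package from go-state-types alongside the abi, store and types imports; the tag and nil entropy are illustrative choices:

// Sketch only: draw beacon randomness for an epoch via the vm.Rand interface.
func beaconRandAt(ctx context.Context, cs *store.ChainStore, ts *types.TipSet, epoch abi.ChainEpoch) ([]byte, error) {
	r := store.NewChainRand(cs, ts.Cids())
	return r.GetBeaconRandomnessLookingForward(ctx, crypto.DomainSeparationTag_TicketProduction, epoch, nil)
}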
func (cs *ChainStore) GetTipSetFromKey(tsk types.TipSetKey) (*types.TipSet, error) {
if tsk.IsEmpty() {
return cs.GetHeaviestTipSet(), nil
}
return cs.LoadTipSet(tsk)
}

View File

@ -70,7 +70,7 @@ func BenchmarkGetRandomness(b *testing.B) {
b.Fatal(err) b.Fatal(err)
} }
cs := store.NewChainStore(bs, bs, mds, nil, nil) cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck defer cs.Close() //nolint:errcheck
b.ResetTimer() b.ResetTimer()
@ -105,7 +105,7 @@ func TestChainExportImport(t *testing.T) {
} }
nbs := blockstore.NewMemory() nbs := blockstore.NewMemory()
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil, nil) cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil)
defer cs.Close() //nolint:errcheck defer cs.Close() //nolint:errcheck
root, err := cs.Import(buf) root, err := cs.Import(buf)
@ -140,7 +140,7 @@ func TestChainExportImportFull(t *testing.T) {
} }
nbs := blockstore.NewMemory() nbs := blockstore.NewMemory()
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil, nil) cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil)
defer cs.Close() //nolint:errcheck defer cs.Close() //nolint:errcheck
root, err := cs.Import(buf) root, err := cs.Import(buf)
@ -157,7 +157,7 @@ func TestChainExportImportFull(t *testing.T) {
t.Fatal("imported chain differed from exported chain") t.Fatal("imported chain differed from exported chain")
} }
sm := stmgr.NewStateManager(cs) sm := stmgr.NewStateManager(cs, nil)
for i := 0; i < 100; i++ { for i := 0; i < 100; i++ {
ts, err := cs.GetTipsetByHeight(context.TODO(), abi.ChainEpoch(i), nil, false) ts, err := cs.GetTipsetByHeight(context.TODO(), abi.ChainEpoch(i), nil, false)
if err != nil { if err != nil {
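The hunks above reflect a constructor change made throughout this diff: NewChainStore no longer takes a vm.SyscallBuilder, which is handed to NewStateManager instead. A minimal wiring sketch, with bs, mds and verifier assumed to come from the caller:

// Sketch only (new wiring): syscalls move from the ChainStore to the StateManager.
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
sm := stmgr.NewStateManager(cs, vm.Syscalls(verifier))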

View File

@ -5,6 +5,7 @@ package types
import ( import (
"fmt" "fmt"
"io" "io"
"math"
"sort" "sort"
abi "github.com/filecoin-project/go-state-types/abi" abi "github.com/filecoin-project/go-state-types/abi"
@ -18,6 +19,7 @@ import (
var _ = xerrors.Errorf var _ = xerrors.Errorf
var _ = cid.Undef var _ = cid.Undef
var _ = math.E
var _ = sort.Sort var _ = sort.Sort
var lengthBufBlockHeader = []byte{144} var lengthBufBlockHeader = []byte{144}

View File

@ -231,7 +231,7 @@ func DumpActorState(act *types.Actor, b []byte) (interface{}, error) {
return nil, nil return nil, nil
} }
i := NewActorRegistry() // TODO: register builtins in init block i := NewActorRegistry()
actInfo, ok := i.actors[act.Code] actInfo, ok := i.actors[act.Code]
if !ok { if !ok {

View File

@ -6,6 +6,8 @@ import (
"io" "io"
"testing" "testing"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/network" "github.com/filecoin-project/go-state-types/network"
cbor "github.com/ipfs/go-ipld-cbor" cbor "github.com/ipfs/go-ipld-cbor"
@ -80,6 +82,30 @@ func (basicContract) InvokeSomething10(rt runtime2.Runtime, params *basicParams)
return nil return nil
} }
type basicRtMessage struct{}
var _ runtime2.Message = (*basicRtMessage)(nil)
func (*basicRtMessage) Caller() address.Address {
a, err := address.NewIDAddress(0)
if err != nil {
panic(err)
}
return a
}
func (*basicRtMessage) Receiver() address.Address {
a, err := address.NewIDAddress(1)
if err != nil {
panic(err)
}
return a
}
func (*basicRtMessage) ValueReceived() abi.TokenAmount {
return big.NewInt(0)
}
func TestInvokerBasic(t *testing.T) { func TestInvokerBasic(t *testing.T) {
inv := ActorRegistry{} inv := ActorRegistry{}
code, err := inv.transform(basicContract{}) code, err := inv.transform(basicContract{})
@ -89,7 +115,7 @@ func TestInvokerBasic(t *testing.T) {
bParam, err := actors.SerializeParams(&basicParams{B: 1}) bParam, err := actors.SerializeParams(&basicParams{B: 1})
assert.NoError(t, err) assert.NoError(t, err)
_, aerr := code[0](&Runtime{}, bParam) _, aerr := code[0](&Runtime{Message: &basicRtMessage{}}, bParam)
assert.Equal(t, exitcode.ExitCode(1), aerrors.RetCode(aerr), "return code should be 1") assert.Equal(t, exitcode.ExitCode(1), aerrors.RetCode(aerr), "return code should be 1")
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {
@ -101,7 +127,7 @@ func TestInvokerBasic(t *testing.T) {
bParam, err := actors.SerializeParams(&basicParams{B: 2}) bParam, err := actors.SerializeParams(&basicParams{B: 2})
assert.NoError(t, err) assert.NoError(t, err)
_, aerr := code[10](&Runtime{}, bParam) _, aerr := code[10](&Runtime{Message: &basicRtMessage{}}, bParam)
assert.Equal(t, exitcode.ExitCode(12), aerrors.RetCode(aerr), "return code should be 12") assert.Equal(t, exitcode.ExitCode(12), aerrors.RetCode(aerr), "return code should be 12")
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {
t.Fatal("err should not be fatal") t.Fatal("err should not be fatal")
@ -113,6 +139,7 @@ func TestInvokerBasic(t *testing.T) {
vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version { vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version {
return network.Version0 return network.Version0
}}, }},
Message: &basicRtMessage{},
}, []byte{99}) }, []byte{99})
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {
t.Fatal("err should not be fatal") t.Fatal("err should not be fatal")
@ -125,6 +152,7 @@ func TestInvokerBasic(t *testing.T) {
vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version { vm: &VM{ntwkVersion: func(ctx context.Context, epoch abi.ChainEpoch) network.Version {
return network.Version7 return network.Version7
}}, }},
Message: &basicRtMessage{},
}, []byte{99}) }, []byte{99})
if aerrors.IsFatal(aerr) { if aerrors.IsFatal(aerr) {
t.Fatal("err should not be fatal") t.Fatal("err should not be fatal")

View File

@ -146,7 +146,7 @@ func (rt *Runtime) shimCall(f func() interface{}) (rval []byte, aerr aerrors.Act
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
if ar, ok := r.(aerrors.ActorError); ok { if ar, ok := r.(aerrors.ActorError); ok {
log.Warnf("VM.Call failure: %+v", ar) log.Warnf("VM.Call failure in call from: %s to %s: %+v", rt.Caller(), rt.Receiver(), ar)
aerr = ar aerr = ar
return return
} }
@ -391,7 +391,7 @@ func (rt *Runtime) Send(to address.Address, method abi.MethodNum, m cbor.Marshal
if err.IsFatal() { if err.IsFatal() {
panic(err) panic(err)
} }
log.Warnf("vmctx send failed: to: %s, method: %d: ret: %d, err: %s", to, method, ret, err) log.Warnf("vmctx send failed: from: %s to: %s, method: %d: err: %s", rt.Receiver(), to, method, err)
return err.RetCode() return err.RetCode()
} }
_ = rt.chargeGasSafe(gasOnActorExec) _ = rt.chargeGasSafe(gasOnActorExec)

View File

@ -641,15 +641,6 @@ func (vm *VM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Me
return true, nil return true, nil
} }
func (vm *VM) ActorBalance(addr address.Address) (types.BigInt, aerrors.ActorError) {
act, err := vm.cstate.GetActor(addr)
if err != nil {
return types.EmptyInt, aerrors.Absorb(err, 1, "failed to find actor")
}
return act.Balance, nil
}
type vmFlushKey struct{} type vmFlushKey struct{}
func (vm *VM) Flush(ctx context.Context) (cid.Cid, error) { func (vm *VM) Flush(ctx context.Context) (cid.Cid, error) {

View File

@ -123,7 +123,7 @@ var AuthApiInfoToken = &cli.Command{
ainfo, err := GetAPIInfo(cctx, t) ainfo, err := GetAPIInfo(cctx, t)
if err != nil { if err != nil {
return xerrors.Errorf("could not get API info: %w", err) return xerrors.Errorf("could not get API info for %s: %w", t, err)
} }
// TODO: Log in audit log when it is implemented // TODO: Log in audit log when it is implemented

View File

@ -24,7 +24,6 @@ import (
"github.com/docker/go-units" "github.com/docker/go-units"
"github.com/fatih/color" "github.com/fatih/color"
datatransfer "github.com/filecoin-project/go-data-transfer" datatransfer "github.com/filecoin-project/go-data-transfer"
"github.com/filecoin-project/go-fil-markets/retrievalmarket"
"github.com/ipfs/go-cid" "github.com/ipfs/go-cid"
"github.com/ipfs/go-cidutil/cidenc" "github.com/ipfs/go-cidutil/cidenc"
"github.com/libp2p/go-libp2p-core/peer" "github.com/libp2p/go-libp2p-core/peer"
@ -32,12 +31,14 @@ import (
"github.com/urfave/cli/v2" "github.com/urfave/cli/v2"
"golang.org/x/xerrors" "golang.org/x/xerrors"
"github.com/filecoin-project/go-fil-markets/retrievalmarket"
"github.com/filecoin-project/go-address" "github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-fil-markets/storagemarket"
"github.com/filecoin-project/go-multistore"
"github.com/filecoin-project/go-state-types/abi" "github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big" "github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-fil-markets/storagemarket"
"github.com/filecoin-project/lotus/api" "github.com/filecoin-project/lotus/api"
lapi "github.com/filecoin-project/lotus/api" lapi "github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/api/v0api" "github.com/filecoin-project/lotus/api/v0api"
@ -46,6 +47,7 @@ import (
"github.com/filecoin-project/lotus/chain/actors/builtin/market" "github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/types" "github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/lib/tablewriter" "github.com/filecoin-project/lotus/lib/tablewriter"
"github.com/filecoin-project/lotus/node/repo/imports"
) )
var CidBaseFlag = cli.StringFlag{ var CidBaseFlag = cli.StringFlag{
@ -174,18 +176,18 @@ var clientDropCmd = &cli.Command{
defer closer() defer closer()
ctx := ReqContext(cctx) ctx := ReqContext(cctx)
var ids []multistore.StoreID var ids []uint64
for i, s := range cctx.Args().Slice() { for i, s := range cctx.Args().Slice() {
id, err := strconv.ParseInt(s, 10, 0) id, err := strconv.ParseUint(s, 10, 64)
if err != nil { if err != nil {
return xerrors.Errorf("parsing %d-th import ID: %w", i, err) return xerrors.Errorf("parsing %d-th import ID: %w", i, err)
} }
ids = append(ids, multistore.StoreID(id)) ids = append(ids, id)
} }
for _, id := range ids { for _, id := range ids {
if err := api.ClientRemoveImport(ctx, id); err != nil { if err := api.ClientRemoveImport(ctx, imports.ID(id)); err != nil {
return xerrors.Errorf("removing import %d: %w", id, err) return xerrors.Errorf("removing import %d: %w", id, err)
} }
} }
@ -1104,8 +1106,8 @@ var clientRetrieveCmd = &cli.Command{
for _, i := range imports { for _, i := range imports {
if i.Root != nil && i.Root.Equals(file) { if i.Root != nil && i.Root.Equals(file) {
order = &lapi.RetrievalOrder{ order = &lapi.RetrievalOrder{
Root: file, Root: file,
LocalStore: &i.Key, FromLocalCAR: i.CARPath,
Total: big.Zero(), Total: big.Zero(),
UnsealPrice: big.Zero(), UnsealPrice: big.Zero(),

View File

@ -381,8 +381,8 @@ var MpoolReplaceCmd = &cli.Command{
Usage: "automatically reprice the specified message", Usage: "automatically reprice the specified message",
}, },
&cli.StringFlag{ &cli.StringFlag{
Name: "max-fee", Name: "fee-limit",
Usage: "Spend up to X attoFIL for this message (applicable for auto mode)", Usage: "Spend up to X FIL for this message in units of FIL. Previously when flag was `max-fee` units were in attoFIL. Applicable for auto mode",
}, },
}, },
ArgsUsage: "<from nonce> | <message-cid>", ArgsUsage: "<from nonce> | <message-cid>",
@ -457,13 +457,13 @@ var MpoolReplaceCmd = &cli.Command{
minRBF := messagepool.ComputeMinRBF(msg.GasPremium) minRBF := messagepool.ComputeMinRBF(msg.GasPremium)
var mss *lapi.MessageSendSpec var mss *lapi.MessageSendSpec
if cctx.IsSet("max-fee") { if cctx.IsSet("fee-limit") {
maxFee, err := types.BigFromString(cctx.String("max-fee")) maxFee, err := types.ParseFIL(cctx.String("fee-limit"))
if err != nil { if err != nil {
return fmt.Errorf("parsing max-spend: %w", err) return fmt.Errorf("parsing max-spend: %w", err)
} }
mss = &lapi.MessageSendSpec{ mss = &lapi.MessageSendSpec{
MaxFee: maxFee, MaxFee: abi.TokenAmount(maxFee),
} }
} }
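The renamed flag takes whole FIL where `max-fee` took attoFIL; the conversion factor is 10^18 attoFIL per FIL. types.ParseFIL performs this conversion internally; the standalone helper below only illustrates the arithmetic:

package main

import (
	"fmt"
	"math/big"
)

// filToAtto converts a decimal FIL string to attoFIL (1 FIL = 10^18 attoFIL).
func filToAtto(fil string) (*big.Int, error) {
	r, ok := new(big.Rat).SetString(fil)
	if !ok {
		return nil, fmt.Errorf("invalid FIL amount: %q", fil)
	}
	atto := new(big.Rat).Mul(r, new(big.Rat).SetInt64(1_000_000_000_000_000_000))
	if !atto.IsInt() {
		return nil, fmt.Errorf("amount %q has sub-attoFIL precision", fil)
	}
	return atto.Num(), nil
}

func main() {
	v, err := filToAtto("0.01")
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // 10000000000000000 attoFIL
}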

View File

@ -34,7 +34,7 @@ var PprofGoroutines = &cli.Command{
} }
ainfo, err := GetAPIInfo(cctx, t) ainfo, err := GetAPIInfo(cctx, t)
if err != nil { if err != nil {
return xerrors.Errorf("could not get API info: %w", err) return xerrors.Errorf("could not get API info for %s: %w", t, err)
} }
addr, err := ainfo.Host() addr, err := ainfo.Host()
if err != nil { if err != nil {

View File

@ -2,6 +2,7 @@ package cliutil
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"net/http" "net/http"
"net/url" "net/url"
@ -142,6 +143,15 @@ func GetAPIInfo(ctx *cli.Context, t repo.RepoType) (APIInfo, error) {
return APIInfo{}, xerrors.Errorf("could not open repo at path: %s; %w", p, err) return APIInfo{}, xerrors.Errorf("could not open repo at path: %s; %w", p, err)
} }
exists, err := r.Exists()
if err != nil {
return APIInfo{}, xerrors.Errorf("repo.Exists returned an error: %w", err)
}
if !exists {
return APIInfo{}, errors.New("repo directory does not exist. Make sure your configuration is correct")
}
ma, err := r.APIEndpoint() ma, err := r.APIEndpoint()
if err != nil { if err != nil {
return APIInfo{}, xerrors.Errorf("could not get api endpoint: %w", err) return APIInfo{}, xerrors.Errorf("could not get api endpoint: %w", err)
@ -171,7 +181,7 @@ func GetAPIInfo(ctx *cli.Context, t repo.RepoType) (APIInfo, error) {
func GetRawAPI(ctx *cli.Context, t repo.RepoType, version string) (string, http.Header, error) { func GetRawAPI(ctx *cli.Context, t repo.RepoType, version string) (string, http.Header, error) {
ainfo, err := GetAPIInfo(ctx, t) ainfo, err := GetAPIInfo(ctx, t)
if err != nil { if err != nil {
return "", nil, xerrors.Errorf("could not get API info: %w", err) return "", nil, xerrors.Errorf("could not get API info for %s: %w", t, err)
} }
addr, err := ainfo.DialArgs(version) addr, err := ainfo.DialArgs(version)
@ -243,7 +253,19 @@ func GetFullNodeAPIV1(ctx *cli.Context) (v1api.FullNode, jsonrpc.ClientCloser, e
_, _ = fmt.Fprintln(ctx.App.Writer, "using full node API v1 endpoint:", addr) _, _ = fmt.Fprintln(ctx.App.Writer, "using full node API v1 endpoint:", addr)
} }
return client.NewFullNodeRPCV1(ctx.Context, addr, headers) v1API, closer, err := client.NewFullNodeRPCV1(ctx.Context, addr, headers)
if err != nil {
return nil, nil, err
}
v, err := v1API.Version(ctx.Context)
if err != nil {
return nil, nil, err
}
if !v.APIVersion.EqMajorMinor(api.FullAPIVersion1) {
return nil, nil, xerrors.Errorf("Remote API version didn't match (expected %s, remote %s)", api.FullAPIVersion1, v.APIVersion)
}
return v1API, closer, nil
} }
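The added check refuses to proceed when the remote API's major.minor differs from api.FullAPIVersion1. Illustratively, an equal-major.minor comparison on a packed version value amounts to masking off the patch component; the packing below is an assumption made for this sketch, not the lotus definition:

// Illustrative only: compare major.minor of two packed versions, ignoring patch.
type apiVer uint32 // assumed packing: major<<16 | minor<<8 | patch

func eqMajorMinor(a, b apiVer) bool {
	const patchMask = 0xffff00
	return a&patchMask == b&patchMask
}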
type GetStorageMinerOptions struct { type GetStorageMinerOptions struct {

View File

@ -22,7 +22,7 @@ var WaitApiCmd = &cli.Command{
ctx := ReqContext(cctx) ctx := ReqContext(cctx)
_, err = api.ID(ctx) _, err = api.Version(ctx)
if err != nil { if err != nil {
return err return err
} }

View File

@ -253,10 +253,10 @@ var importBenchCmd = &cli.Command{
} }
metadataDs := datastore.NewMapDatastore() metadataDs := datastore.NewMapDatastore()
cs := store.NewChainStore(bs, bs, metadataDs, vm.Syscalls(verifier), nil) cs := store.NewChainStore(bs, bs, metadataDs, nil)
defer cs.Close() //nolint:errcheck defer cs.Close() //nolint:errcheck
stm := stmgr.NewStateManager(cs) stm := stmgr.NewStateManager(cs, vm.Syscalls(verifier))
var carFile *os.File var carFile *os.File
// open the CAR file if one is provided. // open the CAR file if one is provided.

View File

@ -1,131 +0,0 @@
package main
import (
"database/sql"
"fmt"
"hash/crc32"
"strconv"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
)
var dotCmd = &cli.Command{
Name: "dot",
Usage: "generate dot graphs",
ArgsUsage: "<minHeight> <toseeHeight>",
Action: func(cctx *cli.Context) error {
ll := cctx.String("log-level")
if err := logging.SetLogLevel("*", ll); err != nil {
return err
}
db, err := sql.Open("postgres", cctx.String("db"))
if err != nil {
return err
}
defer func() {
if err := db.Close(); err != nil {
log.Errorw("Failed to close database", "error", err)
}
}()
if err := db.Ping(); err != nil {
return xerrors.Errorf("Database failed to respond to ping (is it online?): %w", err)
}
minH, err := strconv.ParseInt(cctx.Args().Get(0), 10, 32)
if err != nil {
return err
}
tosee, err := strconv.ParseInt(cctx.Args().Get(1), 10, 32)
if err != nil {
return err
}
maxH := minH + tosee
res, err := db.Query(`select block, parent, b.miner, b.height, p.height from block_parents
inner join blocks b on block_parents.block = b.cid
inner join blocks p on block_parents.parent = p.cid
where b.height > $1 and b.height < $2`, minH, maxH)
if err != nil {
return err
}
fmt.Println("digraph D {")
hl, err := syncedBlocks(db)
if err != nil {
log.Fatal(err)
}
for res.Next() {
var block, parent, miner string
var height, ph uint64
if err := res.Scan(&block, &parent, &miner, &height, &ph); err != nil {
return err
}
bc, err := cid.Parse(block)
if err != nil {
return err
}
_, has := hl[bc]
col := crc32.Checksum([]byte(miner), crc32.MakeTable(crc32.Castagnoli))&0xc0c0c0c0 + 0x30303030
hasstr := ""
if !has {
//col = 0xffffffff
hasstr = " UNSYNCED"
}
nulls := height - ph - 1
for i := uint64(0); i < nulls; i++ {
name := block + "NP" + fmt.Sprint(i)
fmt.Printf("%s [label = \"NULL:%d\", fillcolor = \"#ffddff\", style=filled, forcelabels=true]\n%s -> %s\n",
name, height-nulls+i, name, parent)
parent = name
}
fmt.Printf("%s [label = \"%s:%d%s\", fillcolor = \"#%06x\", style=filled, forcelabels=true]\n%s -> %s\n", block, miner, height, hasstr, col, block, parent)
}
if res.Err() != nil {
return res.Err()
}
fmt.Println("}")
return nil
},
}
func syncedBlocks(db *sql.DB) (map[cid.Cid]struct{}, error) {
// timestamp is used to return a configurable amount of rows based on when they were last added.
rws, err := db.Query(`select cid FROM blocks_synced`)
if err != nil {
return nil, xerrors.Errorf("Failed to query blocks_synced: %w", err)
}
out := map[cid.Cid]struct{}{}
for rws.Next() {
var c string
if err := rws.Scan(&c); err != nil {
return nil, xerrors.Errorf("Failed to scan blocks_synced: %w", err)
}
ci, err := cid.Parse(c)
if err != nil {
return nil, xerrors.Errorf("Failed to parse blocks_synced: %w", err)
}
out[ci] = struct{}{}
}
return out, nil
}

View File

@ -1,54 +0,0 @@
package main
import (
"os"
"github.com/filecoin-project/lotus/build"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
)
var log = logging.Logger("chainwatch")
func main() {
if err := logging.SetLogLevel("*", "info"); err != nil {
log.Fatal(err)
}
log.Info("Starting chainwatch", " v", build.UserVersion())
app := &cli.App{
Name: "lotus-chainwatch",
Usage: "Devnet token distribution utility",
Version: build.UserVersion(),
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
EnvVars: []string{"LOTUS_PATH"},
Value: "~/.lotus", // TODO: Consider XDG_DATA_HOME
},
&cli.StringFlag{
Name: "api",
EnvVars: []string{"FULLNODE_API_INFO"},
Value: "",
},
&cli.StringFlag{
Name: "db",
EnvVars: []string{"LOTUS_DB"},
Value: "",
},
&cli.StringFlag{
Name: "log-level",
EnvVars: []string{"GOLOG_LOG_LEVEL"},
Value: "info",
},
},
Commands: []*cli.Command{
dotCmd,
runCmd,
},
}
if err := app.Run(os.Args); err != nil {
log.Fatal(err)
}
}

View File

@ -1,299 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/ipfs/go-cid"
builtin2 "github.com/filecoin-project/specs-actors/v2/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin"
_init "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/events/state"
"github.com/filecoin-project/lotus/chain/types"
cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)
func (p *Processor) setupCommonActors() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists id_address_map
(
id text not null,
address text not null,
constraint id_address_map_pk
primary key (id, address)
);
create unique index if not exists id_address_map_id_uindex
on id_address_map (id);
create unique index if not exists id_address_map_address_uindex
on id_address_map (address);
create table if not exists actors
(
id text not null
constraint id_address_map_actors_id_fk
references id_address_map (id),
code text not null,
head text not null,
nonce int not null,
balance text not null,
stateroot text
);
create index if not exists actors_id_index
on actors (id);
create index if not exists id_address_map_address_index
on id_address_map (address);
create index if not exists id_address_map_id_index
on id_address_map (id);
create or replace function actor_tips(epoch bigint)
returns table (id text,
code text,
head text,
nonce int,
balance text,
stateroot text,
height bigint,
parentstateroot text) as
$body$
select distinct on (id) * from actors
inner join state_heights sh on sh.parentstateroot = stateroot
where height < $1
order by id, height desc;
$body$ language sql;
create table if not exists actor_states
(
head text not null,
code text not null,
state json not null
);
create unique index if not exists actor_states_head_code_uindex
on actor_states (head, code);
create index if not exists actor_states_head_index
on actor_states (head);
create index if not exists actor_states_code_head_index
on actor_states (head, code);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandleCommonActorsChanges(ctx context.Context, actors map[cid.Cid]ActorTips) error {
if err := p.storeActorAddresses(ctx, actors); err != nil {
return err
}
grp, _ := errgroup.WithContext(ctx)
grp.Go(func() error {
if err := p.storeActorHeads(actors); err != nil {
return err
}
return nil
})
grp.Go(func() error {
if err := p.storeActorStates(actors); err != nil {
return err
}
return nil
})
return grp.Wait()
}
type UpdateAddresses struct {
Old state.AddressPair
New state.AddressPair
}
func (p Processor) storeActorAddresses(ctx context.Context, actors map[cid.Cid]ActorTips) error {
start := time.Now()
defer func() {
log.Debugw("Stored Actor Addresses", "duration", time.Since(start).String())
}()
addressToID := map[address.Address]address.Address{}
// HACK until genesis storage is figured out:
addressToID[builtin2.SystemActorAddr] = builtin2.SystemActorAddr
addressToID[builtin2.InitActorAddr] = builtin2.InitActorAddr
addressToID[builtin2.RewardActorAddr] = builtin2.RewardActorAddr
addressToID[builtin2.CronActorAddr] = builtin2.CronActorAddr
addressToID[builtin2.StoragePowerActorAddr] = builtin2.StoragePowerActorAddr
addressToID[builtin2.StorageMarketActorAddr] = builtin2.StorageMarketActorAddr
addressToID[builtin2.VerifiedRegistryActorAddr] = builtin2.VerifiedRegistryActorAddr
addressToID[builtin2.BurntFundsActorAddr] = builtin2.BurntFundsActorAddr
initActor, err := p.node.StateGetActor(ctx, builtin2.InitActorAddr, types.EmptyTSK)
if err != nil {
return err
}
initActorState, err := _init.Load(cw_util.NewAPIIpldStore(ctx, p.node), initActor)
if err != nil {
return err
}
// gross..
if err := initActorState.ForEachActor(func(id abi.ActorID, addr address.Address) error {
idAddr, err := address.NewIDAddress(uint64(id))
if err != nil {
return err
}
addressToID[addr] = idAddr
return nil
}); err != nil {
return err
}
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table iam (like id_address_map excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy iam (id, address) from STDIN `)
if err != nil {
return err
}
for a, i := range addressToID {
if i == address.Undef {
continue
}
if _, err := stmt.Exec(
i.String(),
a.String(),
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
// HACK until chain watch can handle reorgs we need to update this table when ID -> PubKey mappings change
if _, err := tx.Exec(`insert into id_address_map select * from iam on conflict (id) do update set address = EXCLUDED.address`); err != nil {
log.Warnw("Failed to update id_address_map table, this is a known issue")
return nil
}
return tx.Commit()
}
func (p *Processor) storeActorHeads(actors map[cid.Cid]ActorTips) error {
start := time.Now()
defer func() {
log.Debugw("Stored Actor Heads", "duration", time.Since(start).String())
}()
// Basic
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table a_tmp (like actors excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy a_tmp (id, code, head, nonce, balance, stateroot) from stdin `)
if err != nil {
return err
}
for code, actTips := range actors {
actorName := code.String()
if builtin.IsBuiltinActor(code) {
actorName = builtin.ActorNameByCode(code)
}
for _, actorInfo := range actTips {
for _, a := range actorInfo {
if _, err := stmt.Exec(a.addr.String(), actorName, a.act.Head.String(), a.act.Nonce, a.act.Balance.String(), a.stateroot.String()); err != nil {
return err
}
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into actors select * from a_tmp on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
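The store* helpers in this (now removed) file share one pattern: begin a transaction, create a temp table shaped like the target, bulk-load it with COPY ... FROM STDIN, then upsert into the real table with INSERT ... ON CONFLICT DO NOTHING. A condensed sketch of that pattern with database/sql and github.com/lib/pq; the table and column names are illustrative:

// Sketch only: temp table + COPY + upsert, as used throughout this file.
func bulkUpsert(db *sql.DB, rows [][]interface{}) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if _, err := tx.Exec(`create temp table tmp_t (like target_t excluding constraints) on commit drop`); err != nil {
		return err
	}
	stmt, err := tx.Prepare(pq.CopyIn("tmp_t", "id", "value")) // COPY tmp_t (id, value) FROM STDIN
	if err != nil {
		return err
	}
	for _, r := range rows {
		if _, err := stmt.Exec(r...); err != nil {
			return err
		}
	}
	if _, err := stmt.Exec(); err != nil { // an empty Exec flushes the COPY
		return err
	}
	if err := stmt.Close(); err != nil {
		return err
	}
	if _, err := tx.Exec(`insert into target_t select * from tmp_t on conflict do nothing`); err != nil {
		return err
	}
	return tx.Commit()
}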
func (p *Processor) storeActorStates(actors map[cid.Cid]ActorTips) error {
start := time.Now()
defer func() {
log.Debugw("Stored Actor States", "duration", time.Since(start).String())
}()
// States
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table as_tmp (like actor_states excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy as_tmp (head, code, state) from stdin `)
if err != nil {
return err
}
for code, actTips := range actors {
actorName := code.String()
if builtin.IsBuiltinActor(code) {
actorName = builtin.ActorNameByCode(code)
}
for _, actorInfo := range actTips {
for _, a := range actorInfo {
if _, err := stmt.Exec(a.act.Head.String(), actorName, a.state); err != nil {
return err
}
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into actor_states select * from as_tmp on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}

View File

@ -1,316 +0,0 @@
package processor
import (
"context"
"strconv"
"time"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/events/state"
)
func (p *Processor) setupMarket() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists market_deal_proposals
(
deal_id bigint not null,
state_root text not null,
piece_cid text not null,
padded_piece_size bigint not null,
unpadded_piece_size bigint not null,
is_verified bool not null,
client_id text not null,
provider_id text not null,
start_epoch bigint not null,
end_epoch bigint not null,
slashed_epoch bigint,
storage_price_per_epoch text not null,
provider_collateral text not null,
client_collateral text not null,
constraint market_deal_proposal_pk
primary key (deal_id)
);
create table if not exists market_deal_states
(
deal_id bigint not null,
sector_start_epoch bigint not null,
last_update_epoch bigint not null,
slash_epoch bigint not null,
state_root text not null,
unique (deal_id, sector_start_epoch, last_update_epoch, slash_epoch),
constraint market_deal_states_pk
primary key (deal_id, state_root)
);
create table if not exists minerid_dealid_sectorid
(
deal_id bigint not null
constraint sectors_sector_ids_id_fk
references market_deal_proposals(deal_id),
sector_id bigint not null,
miner_id text not null,
foreign key (sector_id, miner_id) references sector_precommit_info(sector_id, miner_id),
constraint miner_sector_deal_ids_pk
primary key (miner_id, sector_id, deal_id)
);
`); err != nil {
return err
}
return tx.Commit()
}
type marketActorInfo struct {
common actorInfo
}
func (p *Processor) HandleMarketChanges(ctx context.Context, marketTips ActorTips) error {
marketChanges, err := p.processMarket(ctx, marketTips)
if err != nil {
log.Fatalw("Failed to process market actors", "error", err)
}
if err := p.persistMarket(ctx, marketChanges); err != nil {
log.Fatalw("Failed to persist market actors", "error", err)
}
if err := p.updateMarket(ctx, marketChanges); err != nil {
log.Fatalw("Failed to update market actors", "error", err)
}
return nil
}
func (p *Processor) processMarket(ctx context.Context, marketTips ActorTips) ([]marketActorInfo, error) {
start := time.Now()
defer func() {
log.Debugw("Processed Market", "duration", time.Since(start).String())
}()
var out []marketActorInfo
for _, markets := range marketTips {
for _, mt := range markets {
// NB: here is where we can extract the market state when we need it.
out = append(out, marketActorInfo{common: mt})
}
}
return out, nil
}
func (p *Processor) persistMarket(ctx context.Context, info []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Persisted Market", "duration", time.Since(start).String())
}()
grp, ctx := errgroup.WithContext(ctx)
grp.Go(func() error {
if err := p.storeMarketActorDealProposals(ctx, info); err != nil {
return xerrors.Errorf("Failed to store marker deal proposals: %w", err)
}
return nil
})
grp.Go(func() error {
if err := p.storeMarketActorDealStates(info); err != nil {
return xerrors.Errorf("Failed to store marker deal states: %w", err)
}
return nil
})
return grp.Wait()
}
func (p *Processor) updateMarket(ctx context.Context, info []marketActorInfo) error {
if err := p.updateMarketActorDealProposals(ctx, info); err != nil {
return xerrors.Errorf("Failed to update market info: %w", err)
}
return nil
}
func (p *Processor) storeMarketActorDealStates(marketTips []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Stored Market Deal States", "duration", time.Since(start).String())
}()
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`create temp table mds (like market_deal_states excluding constraints) on commit drop;`); err != nil {
return err
}
stmt, err := tx.Prepare(`copy mds (deal_id, sector_start_epoch, last_update_epoch, slash_epoch, state_root) from STDIN`)
if err != nil {
return err
}
for _, mt := range marketTips {
dealStates, err := p.node.StateMarketDeals(context.TODO(), mt.common.tsKey)
if err != nil {
return err
}
for dealID, ds := range dealStates {
id, err := strconv.ParseUint(dealID, 10, 64)
if err != nil {
return err
}
if _, err := stmt.Exec(
id,
ds.State.SectorStartEpoch,
ds.State.LastUpdatedEpoch,
ds.State.SlashEpoch,
mt.common.stateroot.String(),
); err != nil {
return err
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into market_deal_states select * from mds on conflict do nothing`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) storeMarketActorDealProposals(ctx context.Context, marketTips []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Stored Market Deal Proposals", "duration", time.Since(start).String())
}()
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`create temp table mdp (like market_deal_proposals excluding constraints) on commit drop;`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy mdp (deal_id, state_root, piece_cid, padded_piece_size, unpadded_piece_size, is_verified, client_id, provider_id, start_epoch, end_epoch, slashed_epoch, storage_price_per_epoch, provider_collateral, client_collateral) from STDIN`)
if err != nil {
return err
}
// insert in sorted order (lowest height -> highest height) since dealid is pk of table.
for _, mt := range marketTips {
dealStates, err := p.node.StateMarketDeals(ctx, mt.common.tsKey)
if err != nil {
return err
}
for dealID, ds := range dealStates {
id, err := strconv.ParseUint(dealID, 10, 64)
if err != nil {
return err
}
if _, err := stmt.Exec(
id,
mt.common.stateroot.String(),
ds.Proposal.PieceCID.String(),
ds.Proposal.PieceSize,
ds.Proposal.PieceSize.Unpadded(),
ds.Proposal.VerifiedDeal,
ds.Proposal.Client.String(),
ds.Proposal.Provider.String(),
ds.Proposal.StartEpoch,
ds.Proposal.EndEpoch,
nil, // slashed_epoch
ds.Proposal.StoragePricePerEpoch.String(),
ds.Proposal.ProviderCollateral.String(),
ds.Proposal.ClientCollateral.String(),
); err != nil {
return err
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into market_deal_proposals select * from mdp on conflict do nothing`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) updateMarketActorDealProposals(ctx context.Context, marketTip []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Updated Market Deal Proposals", "duration", time.Since(start).String())
}()
pred := state.NewStatePredicates(p.node)
tx, err := p.db.Begin()
if err != nil {
return err
}
stmt, err := tx.Prepare(`update market_deal_proposals set slashed_epoch=$1 where deal_id=$2`)
if err != nil {
return err
}
for _, mt := range marketTip {
stateDiff := pred.OnStorageMarketActorChanged(pred.OnDealStateChanged(pred.OnDealStateAmtChanged()))
changed, val, err := stateDiff(ctx, mt.common.parentTsKey, mt.common.tsKey)
if err != nil {
log.Warnw("error getting market deal state diff", "error", err)
}
if !changed {
continue
}
changes, ok := val.(*market.DealStateChanges)
if !ok {
return xerrors.Errorf("Unknown type returned by Deal State AMT predicate: %T", val)
}
for _, modified := range changes.Modified {
if modified.From.SlashEpoch != modified.To.SlashEpoch {
if _, err := stmt.Exec(modified.To.SlashEpoch, modified.ID); err != nil {
return err
}
}
}
}
if err := stmt.Close(); err != nil {
return err
}
return tx.Commit()
}

View File

@ -1,318 +0,0 @@
package processor
import (
"context"
"sync"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/lib/parmap"
)
func (p *Processor) setupMessages() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists messages
(
cid text not null
constraint messages_pk
primary key,
"from" text not null,
"to" text not null,
size_bytes bigint not null,
nonce bigint not null,
value text not null,
gas_fee_cap text not null,
gas_premium text not null,
gas_limit bigint not null,
method bigint,
params bytea
);
create unique index if not exists messages_cid_uindex
on messages (cid);
create index if not exists messages_from_index
on messages ("from");
create index if not exists messages_to_index
on messages ("to");
create table if not exists block_messages
(
block text not null
constraint blocks_block_cids_cid_fk
references block_cids (cid),
message text not null,
constraint block_messages_pk
primary key (block, message)
);
create table if not exists mpool_messages
(
msg text not null
constraint mpool_messages_pk
primary key
constraint mpool_messages_messages_cid_fk
references messages,
add_ts int not null
);
create unique index if not exists mpool_messages_msg_uindex
on mpool_messages (msg);
create table if not exists receipts
(
msg text not null,
state text not null,
idx int not null,
exit int not null,
gas_used bigint not null,
return bytea,
constraint receipts_pk
primary key (msg, state)
);
create index if not exists receipts_msg_state_index
on receipts (msg, state);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandleMessageChanges(ctx context.Context, blocks map[cid.Cid]*types.BlockHeader) error {
if err := p.persistMessagesAndReceipts(ctx, blocks); err != nil {
return err
}
return nil
}
func (p *Processor) persistMessagesAndReceipts(ctx context.Context, blocks map[cid.Cid]*types.BlockHeader) error {
messages, inclusions := p.fetchMessages(ctx, blocks)
receipts := p.fetchParentReceipts(ctx, blocks)
grp, _ := errgroup.WithContext(ctx)
grp.Go(func() error {
return p.storeMessages(messages)
})
grp.Go(func() error {
return p.storeMsgInclusions(inclusions)
})
grp.Go(func() error {
return p.storeReceipts(receipts)
})
return grp.Wait()
}
func (p *Processor) storeReceipts(recs map[mrec]*types.MessageReceipt) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table recs (like receipts excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy recs (msg, state, idx, exit, gas_used, return) from stdin `)
if err != nil {
return err
}
for c, m := range recs {
if _, err := stmt.Exec(
c.msg.String(),
c.state.String(),
c.idx,
m.ExitCode,
m.GasUsed,
m.Return,
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into receipts select * from recs on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) storeMsgInclusions(incls map[cid.Cid][]cid.Cid) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table mi (like block_messages excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy mi (block, message) from STDIN `)
if err != nil {
return err
}
for b, msgs := range incls {
for _, msg := range msgs {
if _, err := stmt.Exec(
b.String(),
msg.String(),
); err != nil {
return err
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_messages select * from mi on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) storeMessages(msgs map[cid.Cid]*types.Message) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table msgs (like messages excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy msgs (cid, "from", "to", size_bytes, nonce, "value", gas_premium, gas_fee_cap, gas_limit, method, params) from stdin `)
if err != nil {
return err
}
for c, m := range msgs {
var msgBytes int
if b, err := m.Serialize(); err == nil {
msgBytes = len(b)
}
if _, err := stmt.Exec(
c.String(),
m.From.String(),
m.To.String(),
msgBytes,
m.Nonce,
m.Value.String(),
m.GasPremium.String(),
m.GasFeeCap.String(),
m.GasLimit,
m.Method,
m.Params,
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into messages select * from msgs on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) fetchMessages(ctx context.Context, blocks map[cid.Cid]*types.BlockHeader) (map[cid.Cid]*types.Message, map[cid.Cid][]cid.Cid) {
var lk sync.Mutex
messages := map[cid.Cid]*types.Message{}
inclusions := map[cid.Cid][]cid.Cid{} // block -> msgs
parmap.Par(50, parmap.MapArr(blocks), func(header *types.BlockHeader) {
msgs, err := p.node.ChainGetBlockMessages(ctx, header.Cid())
if err != nil {
log.Error(err)
log.Debugw("ChainGetBlockMessages", "header_cid", header.Cid())
return
}
vmm := make([]*types.Message, 0, len(msgs.Cids))
for _, m := range msgs.BlsMessages {
vmm = append(vmm, m)
}
for _, m := range msgs.SecpkMessages {
vmm = append(vmm, &m.Message)
}
lk.Lock()
for _, message := range vmm {
messages[message.Cid()] = message
inclusions[header.Cid()] = append(inclusions[header.Cid()], message.Cid())
}
lk.Unlock()
})
return messages, inclusions
}
type mrec struct {
msg cid.Cid
state cid.Cid
idx int
}
func (p *Processor) fetchParentReceipts(ctx context.Context, toSync map[cid.Cid]*types.BlockHeader) map[mrec]*types.MessageReceipt {
var lk sync.Mutex
out := map[mrec]*types.MessageReceipt{}
parmap.Par(50, parmap.MapArr(toSync), func(header *types.BlockHeader) {
recs, err := p.node.ChainGetParentReceipts(ctx, header.Cid())
if err != nil {
log.Error(err)
log.Debugw("ChainGetParentReceipts", "header_cid", header.Cid())
return
}
msgs, err := p.node.ChainGetParentMessages(ctx, header.Cid())
if err != nil {
log.Error(err)
log.Debugw("ChainGetParentMessages", "header_cid", header.Cid())
return
}
lk.Lock()
for i, r := range recs {
out[mrec{
msg: msgs[i].Cid,
state: header.ParentStateRoot,
idx: i,
}] = r
}
lk.Unlock()
})
return out
}

File diff suppressed because it is too large

View File

@ -1,100 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/types"
)
func (p *Processor) subMpool(ctx context.Context) {
sub, err := p.node.MpoolSub(ctx)
if err != nil {
return
}
for {
var updates []api.MpoolUpdate
select {
case update := <-sub:
updates = append(updates, update)
case <-ctx.Done():
return
}
loop:
for {
select {
case update := <-sub:
updates = append(updates, update)
case <-time.After(10 * time.Millisecond):
break loop
}
}
msgs := map[cid.Cid]*types.Message{}
for _, v := range updates {
if v.Type != api.MpoolAdd {
continue
}
msgs[v.Message.Message.Cid()] = &v.Message.Message
}
err := p.storeMessages(msgs)
if err != nil {
log.Error(err)
}
if err := p.storeMpoolInclusions(updates); err != nil {
log.Error(err)
}
}
}
func (p *Processor) storeMpoolInclusions(msgs []api.MpoolUpdate) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table mi (like mpool_messages excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy mi (msg, add_ts) from stdin `)
if err != nil {
return err
}
for _, msg := range msgs {
if msg.Type != api.MpoolAdd {
continue
}
if _, err := stmt.Exec(
msg.Message.Message.Cid().String(),
time.Now().Unix(),
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into mpool_messages select * from mi on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}

View File

@ -1,190 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/actors/builtin"
)
type powerActorInfo struct {
common actorInfo
totalRawBytes big.Int
totalRawBytesCommitted big.Int
totalQualityAdjustedBytes big.Int
totalQualityAdjustedBytesCommitted big.Int
totalPledgeCollateral big.Int
qaPowerSmoothed builtin.FilterEstimate
minerCount int64
minerCountAboveMinimumPower int64
}
func (p *Processor) setupPower() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists chain_power
(
state_root text not null
constraint power_smoothing_estimates_pk
primary key,
total_raw_bytes_power text not null,
total_raw_bytes_committed text not null,
total_qa_bytes_power text not null,
total_qa_bytes_committed text not null,
total_pledge_collateral text not null,
qa_smoothed_position_estimate text not null,
qa_smoothed_velocity_estimate text not null,
miner_count int not null,
minimum_consensus_miner_count int not null
);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandlePowerChanges(ctx context.Context, powerTips ActorTips) error {
powerChanges, err := p.processPowerActors(ctx, powerTips)
if err != nil {
return xerrors.Errorf("Failed to process power actors: %w", err)
}
if err := p.persistPowerActors(ctx, powerChanges); err != nil {
return err
}
return nil
}
func (p *Processor) processPowerActors(ctx context.Context, powerTips ActorTips) ([]powerActorInfo, error) {
start := time.Now()
defer func() {
log.Debugw("Processed Power Actors", "duration", time.Since(start).String())
}()
var out []powerActorInfo
for tipset, powerStates := range powerTips {
for _, act := range powerStates {
var pw powerActorInfo
pw.common = act
powerActorState, err := getPowerActorState(ctx, p.node, tipset)
if err != nil {
return nil, xerrors.Errorf("get power state (@ %s): %w", pw.common.stateroot.String(), err)
}
totalPower, err := powerActorState.TotalPower()
if err != nil {
return nil, xerrors.Errorf("failed to compute total power: %w", err)
}
totalCommitted, err := powerActorState.TotalCommitted()
if err != nil {
return nil, xerrors.Errorf("failed to compute total committed: %w", err)
}
totalLocked, err := powerActorState.TotalLocked()
if err != nil {
return nil, xerrors.Errorf("failed to compute total locked: %w", err)
}
powerSmoothed, err := powerActorState.TotalPowerSmoothed()
if err != nil {
return nil, xerrors.Errorf("failed to determine smoothed power: %w", err)
}
// NOTE: this doesn't set new* fields. Previously, we
// filled these using ThisEpoch* fields from the actor
// state, but these fields are effectively internal
// state and don't represent "new" power, as was
// assumed.
participatingMiners, totalMiners, err := powerActorState.MinerCounts()
if err != nil {
return nil, xerrors.Errorf("failed to count miners: %w", err)
}
pw.totalRawBytes = totalPower.RawBytePower
pw.totalQualityAdjustedBytes = totalPower.QualityAdjPower
pw.totalRawBytesCommitted = totalCommitted.RawBytePower
pw.totalQualityAdjustedBytesCommitted = totalCommitted.QualityAdjPower
pw.totalPledgeCollateral = totalLocked
pw.qaPowerSmoothed = powerSmoothed
pw.minerCountAboveMinimumPower = int64(participatingMiners)
pw.minerCount = int64(totalMiners)
out = append(out, pw)
}
}
return out, nil
}
func (p *Processor) persistPowerActors(ctx context.Context, powerStates []powerActorInfo) error {
// NB: use errgroup when there is more than a single store operation
return p.storePowerSmoothingEstimates(powerStates)
}
func (p *Processor) storePowerSmoothingEstimates(powerStates []powerActorInfo) error {
tx, err := p.db.Begin()
if err != nil {
return xerrors.Errorf("begin chain_power tx: %w", err)
}
if _, err := tx.Exec(`create temp table cp (like chain_power) on commit drop`); err != nil {
return xerrors.Errorf("prep chain_power: %w", err)
}
stmt, err := tx.Prepare(`copy cp (state_root, total_raw_bytes_power, total_raw_bytes_committed, total_qa_bytes_power, total_qa_bytes_committed, total_pledge_collateral, qa_smoothed_position_estimate, qa_smoothed_velocity_estimate, miner_count, minimum_consensus_miner_count) from stdin;`)
if err != nil {
return xerrors.Errorf("prepare tmp chain_power: %w", err)
}
for _, ps := range powerStates {
if _, err := stmt.Exec(
ps.common.stateroot.String(),
ps.totalRawBytes.String(),
ps.totalRawBytesCommitted.String(),
ps.totalQualityAdjustedBytes.String(),
ps.totalQualityAdjustedBytesCommitted.String(),
ps.totalPledgeCollateral.String(),
ps.qaPowerSmoothed.PositionEstimate.String(),
ps.qaPowerSmoothed.VelocityEstimate.String(),
ps.minerCount,
ps.minerCountAboveMinimumPower,
); err != nil {
return xerrors.Errorf("failed to store smoothing estimate: %w", err)
}
}
if err := stmt.Close(); err != nil {
return xerrors.Errorf("close prepared chain_power: %w", err)
}
if _, err := tx.Exec(`insert into chain_power select * from cp on conflict do nothing`); err != nil {
return xerrors.Errorf("insert chain_power from tmp: %w", err)
}
if err := tx.Commit(); err != nil {
return xerrors.Errorf("commit chain_power tx: %w", err)
}
return nil
}

View File

@ -1,420 +0,0 @@
package processor
import (
"context"
"database/sql"
"encoding/json"
"math"
"sync"
"time"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
"github.com/filecoin-project/go-state-types/abi"
builtin2 "github.com/filecoin-project/specs-actors/v2/actors/builtin"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/filecoin-project/lotus/chain/types"
cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
"github.com/filecoin-project/lotus/lib/parmap"
)
var log = logging.Logger("processor")
type Processor struct {
db *sql.DB
node v0api.FullNode
ctxStore *cw_util.APIIpldStore
genesisTs *types.TipSet
// number of blocks processed at a time
batch int
}
type ActorTips map[types.TipSetKey][]actorInfo
type actorInfo struct {
act types.Actor
stateroot cid.Cid
height abi.ChainEpoch // so that we can walk the actor changes in chronological order.
tsKey types.TipSetKey
parentTsKey types.TipSetKey
addr address.Address
state string
}
func NewProcessor(ctx context.Context, db *sql.DB, node v0api.FullNode, batch int) *Processor {
ctxStore := cw_util.NewAPIIpldStore(ctx, node)
return &Processor{
db: db,
ctxStore: ctxStore,
node: node,
batch: batch,
}
}
func (p *Processor) setupSchemas() error {
// maintain order, subsequent calls create tables with foreign keys.
if err := p.setupMiners(); err != nil {
return err
}
if err := p.setupMarket(); err != nil {
return err
}
if err := p.setupRewards(); err != nil {
return err
}
if err := p.setupMessages(); err != nil {
return err
}
if err := p.setupCommonActors(); err != nil {
return err
}
if err := p.setupPower(); err != nil {
return err
}
return nil
}
func (p *Processor) Start(ctx context.Context) {
log.Debug("Starting Processor")
if err := p.setupSchemas(); err != nil {
log.Fatalw("Failed to setup processor", "error", err)
}
var err error
p.genesisTs, err = p.node.ChainGetGenesis(ctx)
if err != nil {
log.Fatalw("Failed to get genesis state from lotus", "error", err.Error())
}
go p.subMpool(ctx)
// main processor loop
go func() {
for {
select {
case <-ctx.Done():
log.Info("Stopping Processor...")
return
default:
loopStart := time.Now()
toProcess, err := p.unprocessedBlocks(ctx, p.batch)
if err != nil {
log.Fatalw("Failed to get unprocessed blocks", "error", err)
}
if len(toProcess) == 0 {
log.Info("No unprocessed blocks. Wait then try again...")
time.Sleep(time.Second * 30)
continue
}
// TODO: special-case genesis state handling here, before doing "normal" processing,
// to avoid all the special cases that would otherwise be needed for it elsewhere.
actorChanges, nullRounds, err := p.collectActorChanges(ctx, toProcess)
if err != nil {
log.Fatalw("Failed to collect actor changes", "error", err)
}
log.Infow("Collected Actor Changes",
"MarketChanges", len(actorChanges[builtin2.StorageMarketActorCodeID]),
"MinerChanges", len(actorChanges[builtin2.StorageMinerActorCodeID]),
"RewardChanges", len(actorChanges[builtin2.RewardActorCodeID]),
"AccountChanges", len(actorChanges[builtin2.AccountActorCodeID]),
"nullRounds", len(nullRounds))
grp := sync.WaitGroup{}
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleMarketChanges(ctx, actorChanges[builtin2.StorageMarketActorCodeID]); err != nil {
log.Errorf("Failed to handle market changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleMinerChanges(ctx, actorChanges[builtin2.StorageMinerActorCodeID]); err != nil {
log.Errorf("Failed to handle miner changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleRewardChanges(ctx, actorChanges[builtin2.RewardActorCodeID], nullRounds); err != nil {
log.Errorf("Failed to handle reward changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandlePowerChanges(ctx, actorChanges[builtin2.StoragePowerActorCodeID]); err != nil {
log.Errorf("Failed to handle power actor changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleMessageChanges(ctx, toProcess); err != nil {
log.Errorf("Failed to handle message changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleCommonActorsChanges(ctx, actorChanges); err != nil {
log.Errorf("Failed to handle common actor changes: %v", err)
return
}
}()
grp.Wait()
if err := p.markBlocksProcessed(ctx, toProcess); err != nil {
log.Fatalw("Failed to mark blocks as processed", "error", err)
}
if err := p.refreshViews(); err != nil {
log.Errorw("Failed to refresh views", "error", err)
}
log.Infow("Processed Batch Complete", "duration", time.Since(loopStart).String())
}
}
}()
}
func (p *Processor) refreshViews() error {
if _, err := p.db.Exec(`refresh materialized view state_heights`); err != nil {
return err
}
return nil
}
func (p *Processor) collectActorChanges(ctx context.Context, toProcess map[cid.Cid]*types.BlockHeader) (map[cid.Cid]ActorTips, []types.TipSetKey, error) {
start := time.Now()
defer func() {
log.Debugw("Collected Actor Changes", "duration", time.Since(start).String())
}()
// ActorCode - > tipset->[]actorInfo
out := map[cid.Cid]ActorTips{}
var outMu sync.Mutex
// map of addresses to changed actors
var changes map[string]types.Actor
actorsSeen := map[cid.Cid]struct{}{}
var nullRounds []types.TipSetKey
var nullBlkMu sync.Mutex
// collect all actor state that has changes between block headers
paDone := 0
parmap.Par(50, parmap.MapArr(toProcess), func(bh *types.BlockHeader) {
paDone++
if paDone%100 == 0 {
log.Debugw("Collecting actor changes", "done", paDone, "percent", (paDone*100)/len(toProcess))
}
pts, err := p.node.ChainGetTipSet(ctx, types.NewTipSetKey(bh.Parents...))
if err != nil {
log.Error(err)
return
}
if pts.ParentState().Equals(bh.ParentStateRoot) {
nullBlkMu.Lock()
nullRounds = append(nullRounds, pts.Key())
nullBlkMu.Unlock()
}
// collect all actors whose state changed between the block header's parent state and its grandparent state.
// TODO: changes will contain deleted actors, which causes needless processing further down the pipeline;
// consider a separate strategy for deleted actors.
changes, err = p.node.StateChangedActors(ctx, pts.ParentState(), bh.ParentStateRoot)
if err != nil {
log.Error(err)
log.Debugw("StateChangedActors", "grandparent_state", pts.ParentState(), "parent_state", bh.ParentStateRoot)
return
}
// record the state of all actors that have changed
for a, act := range changes {
act := act
a := a
// ignore actors that were deleted.
has, err := p.node.ChainHasObj(ctx, act.Head)
if err != nil {
log.Error(err)
log.Debugw("ChanHasObj", "actor_head", act.Head)
return
}
if !has {
continue
}
addr, err := address.NewFromString(a)
if err != nil {
log.Error(err)
log.Debugw("NewFromString", "address_string", a)
return
}
ast, err := p.node.StateReadState(ctx, addr, pts.Key())
if err != nil {
log.Error(err)
log.Debugw("StateReadState", "address_string", a, "parent_tipset_key", pts.Key())
return
}
// TODO: look here for an empty state; maybe that's a sign the actor was deleted?
state, err := json.Marshal(ast.State)
if err != nil {
log.Error(err)
return
}
outMu.Lock()
if _, ok := actorsSeen[act.Head]; !ok {
_, ok := out[act.Code]
if !ok {
out[act.Code] = map[types.TipSetKey][]actorInfo{}
}
out[act.Code][pts.Key()] = append(out[act.Code][pts.Key()], actorInfo{
act: act,
stateroot: bh.ParentStateRoot,
height: bh.Height,
tsKey: pts.Key(),
parentTsKey: pts.Parents(),
addr: addr,
state: string(state),
})
}
actorsSeen[act.Head] = struct{}{}
outMu.Unlock()
}
})
return out, nullRounds, nil
}
func (p *Processor) unprocessedBlocks(ctx context.Context, batch int) (map[cid.Cid]*types.BlockHeader, error) {
start := time.Now()
defer func() {
log.Debugw("Gathered Blocks to process", "duration", time.Since(start).String())
}()
rows, err := p.db.Query(`
with toProcess as (
select b.cid, b.height, rank() over (order by height) as rnk
from blocks_synced bs
left join blocks b on bs.cid = b.cid
where bs.processed_at is null and b.height > 0
)
select cid
from toProcess
where rnk <= $1
`, batch)
if err != nil {
return nil, xerrors.Errorf("Failed to query for unprocessed blocks: %w", err)
}
out := map[cid.Cid]*types.BlockHeader{}
minBlock := abi.ChainEpoch(math.MaxInt64)
maxBlock := abi.ChainEpoch(0)
// TODO consider parallel execution here for getting the blocks from the api as is done in fetchMessages()
for rows.Next() {
if rows.Err() != nil {
return nil, rows.Err()
}
var c string
if err := rows.Scan(&c); err != nil {
log.Errorf("Failed to scan unprocessed blocks: %s", err.Error())
continue
}
ci, err := cid.Parse(c)
if err != nil {
log.Errorf("Failed to parse unprocessed blocks: %s", err.Error())
continue
}
bh, err := p.node.ChainGetBlock(ctx, ci)
if err != nil {
// this is a pretty serious issue.
log.Errorf("Failed to get block header %s: %s", ci.String(), err.Error())
continue
}
out[ci] = bh
if bh.Height < minBlock {
minBlock = bh.Height
}
if bh.Height > maxBlock {
maxBlock = bh.Height
}
}
if minBlock <= maxBlock {
log.Infow("Gathered Blocks to Process", "start", minBlock, "end", maxBlock)
}
return out, rows.Close()
}
func (p *Processor) markBlocksProcessed(ctx context.Context, processed map[cid.Cid]*types.BlockHeader) error {
start := time.Now()
processedHeight := abi.ChainEpoch(0)
defer func() {
log.Debugw("Marked blocks as Processed", "duration", time.Since(start).String())
log.Infow("Processed Blocks", "height", processedHeight)
}()
tx, err := p.db.Begin()
if err != nil {
return err
}
processedAt := time.Now().Unix()
stmt, err := tx.Prepare(`update blocks_synced set processed_at=$1 where cid=$2`)
if err != nil {
return err
}
for c, bh := range processed {
if bh.Height > processedHeight {
processedHeight = bh.Height
}
if _, err := stmt.Exec(processedAt, c.String()); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
return tx.Commit()
}


@ -1,234 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/types"
cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)
type rewardActorInfo struct {
common actorInfo
cumSumBaselinePower big.Int
cumSumRealizedPower big.Int
effectiveNetworkTime abi.ChainEpoch
effectiveBaselinePower big.Int
// NOTE: These variables are wrong. Talk to @ZX about fixing. These _do
// not_ represent "new" anything.
newBaselinePower big.Int
newBaseReward big.Int
newSmoothingEstimate builtin.FilterEstimate
totalMinedReward big.Int
}
func (rw *rewardActorInfo) set(s reward.State) (err error) {
rw.cumSumBaselinePower, err = s.CumsumBaseline()
if err != nil {
return xerrors.Errorf("getting cumsum baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.cumSumRealizedPower, err = s.CumsumRealized()
if err != nil {
return xerrors.Errorf("getting cumsum realized power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.effectiveNetworkTime, err = s.EffectiveNetworkTime()
if err != nil {
return xerrors.Errorf("getting effective network time (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.effectiveBaselinePower, err = s.EffectiveBaselinePower()
if err != nil {
return xerrors.Errorf("getting effective baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.totalMinedReward, err = s.TotalStoragePowerReward()
if err != nil {
return xerrors.Errorf("getting total mined (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.newBaselinePower, err = s.ThisEpochBaselinePower()
if err != nil {
return xerrors.Errorf("getting this epoch baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.newBaseReward, err = s.ThisEpochReward()
if err != nil {
return xerrors.Errorf("getting this epoch baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.newSmoothingEstimate, err = s.ThisEpochRewardSmoothed()
if err != nil {
return xerrors.Errorf("getting this epoch baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
return nil
}
func (p *Processor) setupRewards() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
/* captures chain-specific reward state for any given stateroot */
create table if not exists chain_reward
(
state_root text not null
constraint chain_reward_pk
primary key,
cum_sum_baseline text not null,
cum_sum_realized text not null,
effective_network_time int not null,
effective_baseline_power text not null,
new_baseline_power text not null,
new_reward numeric not null,
new_reward_smoothed_position_estimate text not null,
new_reward_smoothed_velocity_estimate text not null,
total_mined_reward text not null
);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandleRewardChanges(ctx context.Context, rewardTips ActorTips, nullRounds []types.TipSetKey) error {
rewardChanges, err := p.processRewardActors(ctx, rewardTips, nullRounds)
if err != nil {
return xerrors.Errorf("Failed to process reward actors: %w", err)
}
if err := p.persistRewardActors(ctx, rewardChanges); err != nil {
return err
}
return nil
}
func (p *Processor) processRewardActors(ctx context.Context, rewardTips ActorTips, nullRounds []types.TipSetKey) ([]rewardActorInfo, error) {
start := time.Now()
defer func() {
log.Debugw("Processed Reward Actors", "duration", time.Since(start).String())
}()
var out []rewardActorInfo
for tipset, rewards := range rewardTips {
for _, act := range rewards {
var rw rewardActorInfo
rw.common = act
// get reward actor states at each tipset once for all updates
rewardActor, err := p.node.StateGetActor(ctx, reward.Address, tipset)
if err != nil {
return nil, xerrors.Errorf("get reward state (@ %s): %w", rw.common.stateroot.String(), err)
}
rewardActorState, err := reward.Load(cw_util.NewAPIIpldStore(ctx, p.node), rewardActor)
if err != nil {
return nil, xerrors.Errorf("read state obj (@ %s): %w", rw.common.stateroot.String(), err)
}
if err := rw.set(rewardActorState); err != nil {
return nil, err
}
out = append(out, rw)
}
}
for _, tsKey := range nullRounds {
var rw rewardActorInfo
tipset, err := p.node.ChainGetTipSet(ctx, tsKey)
if err != nil {
return nil, err
}
rw.common.tsKey = tipset.Key()
rw.common.height = tipset.Height()
rw.common.stateroot = tipset.ParentState()
rw.common.parentTsKey = tipset.Parents()
// get reward actor states at each tipset once for all updates
rewardActor, err := p.node.StateGetActor(ctx, reward.Address, tsKey)
if err != nil {
return nil, err
}
rewardActorState, err := reward.Load(cw_util.NewAPIIpldStore(ctx, p.node), rewardActor)
if err != nil {
return nil, xerrors.Errorf("read state obj (@ %s): %w", rw.common.stateroot.String(), err)
}
if err := rw.set(rewardActorState); err != nil {
return nil, err
}
out = append(out, rw)
}
return out, nil
}
func (p *Processor) persistRewardActors(ctx context.Context, rewards []rewardActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Persisted Reward Actors", "duration", time.Since(start).String())
}()
tx, err := p.db.Begin()
if err != nil {
return xerrors.Errorf("begin chain_reward tx: %w", err)
}
if _, err := tx.Exec(`create temp table cr (like chain_reward excluding constraints) on commit drop`); err != nil {
return xerrors.Errorf("prep chain_reward temp: %w", err)
}
stmt, err := tx.Prepare(`copy cr ( state_root, cum_sum_baseline, cum_sum_realized, effective_network_time, effective_baseline_power, new_baseline_power, new_reward, new_reward_smoothed_position_estimate, new_reward_smoothed_velocity_estimate, total_mined_reward) from STDIN`)
if err != nil {
return xerrors.Errorf("prepare tmp chain_reward: %w", err)
}
for _, rewardState := range rewards {
if _, err := stmt.Exec(
rewardState.common.stateroot.String(),
rewardState.cumSumBaselinePower.String(),
rewardState.cumSumRealizedPower.String(),
uint64(rewardState.effectiveNetworkTime),
rewardState.effectiveBaselinePower.String(),
rewardState.newBaselinePower.String(),
rewardState.newBaseReward.String(),
rewardState.newSmoothingEstimate.PositionEstimate.String(),
rewardState.newSmoothingEstimate.VelocityEstimate.String(),
rewardState.totalMinedReward.String(),
); err != nil {
log.Errorw("failed to store chain power", "state_root", rewardState.common.stateroot, "error", err)
}
}
if err := stmt.Close(); err != nil {
return xerrors.Errorf("close prepared chain_reward: %w", err)
}
if _, err := tx.Exec(`insert into chain_reward select * from cr on conflict do nothing`); err != nil {
return xerrors.Errorf("insert chain_reward from tmp: %w", err)
}
if err := tx.Commit(); err != nil {
return xerrors.Errorf("commit chain_reward tx: %w", err)
}
return nil
}


@ -1,107 +0,0 @@
package main
import (
"database/sql"
"fmt"
"net/http"
_ "net/http/pprof"
"os"
"strings"
"github.com/filecoin-project/lotus/api/v0api"
_ "github.com/lib/pq"
"github.com/filecoin-project/go-jsonrpc"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/processor"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/scheduler"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/syncer"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)
var runCmd = &cli.Command{
Name: "run",
Usage: "Start lotus chainwatch",
Flags: []cli.Flag{
&cli.IntFlag{
Name: "max-batch",
Value: 50,
},
},
Action: func(cctx *cli.Context) error {
go func() {
http.ListenAndServe(":6060", nil) //nolint:errcheck
}()
ll := cctx.String("log-level")
if err := logging.SetLogLevel("*", ll); err != nil {
return err
}
if err := logging.SetLogLevel("rpc", "error"); err != nil {
return err
}
var api v0api.FullNode
var closer jsonrpc.ClientCloser
var err error
if tokenMaddr := cctx.String("api"); tokenMaddr != "" {
toks := strings.Split(tokenMaddr, ":")
if len(toks) != 2 {
return fmt.Errorf("invalid api tokens, expected <token>:<maddr>, got: %s", tokenMaddr)
}
api, closer, err = util.GetFullNodeAPIUsingCredentials(cctx.Context, toks[1], toks[0])
if err != nil {
return err
}
} else {
api, closer, err = lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
}
defer closer()
ctx := lcli.ReqContext(cctx)
v, err := api.Version(ctx)
if err != nil {
return err
}
log.Infof("Remote version: %s", v.Version)
maxBatch := cctx.Int("max-batch")
db, err := sql.Open("postgres", cctx.String("db"))
if err != nil {
return err
}
defer func() {
if err := db.Close(); err != nil {
log.Errorw("Failed to close database", "error", err)
}
}()
if err := db.Ping(); err != nil {
return xerrors.Errorf("Database failed to respond to ping (is it online?): %w", err)
}
db.SetMaxOpenConns(1350)
sync := syncer.NewSyncer(db, api, 1400)
sync.Start(ctx)
proc := processor.NewProcessor(ctx, db, api, maxBatch)
proc.Start(ctx)
sched := scheduler.PrepareScheduler(db)
sched.Start(ctx)
<-ctx.Done()
os.Exit(0)
return nil
},
}


@ -1,78 +0,0 @@
package scheduler
import (
"context"
"database/sql"
"golang.org/x/xerrors"
)
func setupTopMinerByBaseRewardSchema(ctx context.Context, db *sql.DB) error {
select {
case <-ctx.Done():
return nil
default:
}
tx, err := db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create materialized view if not exists top_miners_by_base_reward as
with total_rewards_by_miner as (
select
b.miner,
sum(cr.new_reward * b.win_count) as total_reward
from blocks b
inner join chain_reward cr on b.parentstateroot = cr.state_root
group by 1
) select
rank() over (order by total_reward desc),
miner,
total_reward
from total_rewards_by_miner
group by 2, 3;
create index if not exists top_miners_by_base_reward_miner_index
on top_miners_by_base_reward (miner);
create materialized view if not exists top_miners_by_base_reward_max_height as
select
b."timestamp"as current_timestamp,
max(b.height) as current_height
from blocks b
join chain_reward cr on b.parentstateroot = cr.state_root
where cr.new_reward is not null
group by 1
order by 1 desc
limit 1;
`); err != nil {
return xerrors.Errorf("create top_miners_by_base_reward views: %w", err)
}
if err := tx.Commit(); err != nil {
return xerrors.Errorf("committing top_miners_by_base_reward views; %w", err)
}
return nil
}
func refreshTopMinerByBaseReward(ctx context.Context, db *sql.DB) error {
select {
case <-ctx.Done():
return nil
default:
}
_, err := db.Exec("refresh materialized view top_miners_by_base_reward;")
if err != nil {
return xerrors.Errorf("refresh top_miners_by_base_reward: %w", err)
}
_, err = db.Exec("refresh materialized view top_miners_by_base_reward_max_height;")
if err != nil {
return xerrors.Errorf("refresh top_miners_by_base_reward_max_height: %w", err)
}
return nil
}


@ -1,60 +0,0 @@
package scheduler
import (
"context"
"database/sql"
"time"
logging "github.com/ipfs/go-log/v2"
"golang.org/x/xerrors"
)
var log = logging.Logger("scheduler")
// Scheduler manages the execution of jobs triggered
// by tickers. Not externally configurable at runtime.
type Scheduler struct {
db *sql.DB
}
// PrepareScheduler returns a ready-to-run Scheduler
func PrepareScheduler(db *sql.DB) *Scheduler {
return &Scheduler{db}
}
func (s *Scheduler) setupSchema(ctx context.Context) error {
if err := setupTopMinerByBaseRewardSchema(ctx, s.db); err != nil {
return xerrors.Errorf("setup top miners by reward schema: %w", err)
}
return nil
}
// Start the scheduler jobs at the defined intervals
func (s *Scheduler) Start(ctx context.Context) {
log.Debug("Starting Scheduler")
if err := s.setupSchema(ctx); err != nil {
log.Fatalw("applying scheduling schema", "error", err)
}
go func() {
// run once on start after schema has initialized
time.Sleep(1 * time.Minute)
if err := refreshTopMinerByBaseReward(ctx, s.db); err != nil {
log.Errorw("failed to refresh top miner", "error", err)
}
refreshTopMinerCh := time.NewTicker(30 * time.Second)
defer refreshTopMinerCh.Stop()
for {
select {
case <-refreshTopMinerCh.C:
if err := refreshTopMinerByBaseReward(ctx, s.db); err != nil {
log.Errorw("failed to refresh top miner", "error", err)
}
case <-ctx.Done():
return
}
}
}()
}


@ -1,27 +0,0 @@
package syncer
import (
"context"
"time"
"github.com/filecoin-project/lotus/chain/types"
"github.com/ipfs/go-cid"
)
func (s *Syncer) subBlocks(ctx context.Context) {
sub, err := s.node.SyncIncomingBlocks(ctx)
if err != nil {
log.Errorf("opening incoming block channel: %+v", err)
return
}
log.Infow("Capturing incoming blocks")
for bh := range sub {
err := s.storeHeaders(map[cid.Cid]*types.BlockHeader{
bh.Cid(): bh,
}, false, time.Now())
if err != nil {
log.Errorf("storing incoming block header: %+v", err)
}
}
}


@ -1,527 +0,0 @@
package syncer
import (
"container/list"
"context"
"database/sql"
"fmt"
"sync"
"time"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
)
var log = logging.Logger("syncer")
type Syncer struct {
db *sql.DB
lookbackLimit uint64
headerLk sync.Mutex
node v0api.FullNode
}
func NewSyncer(db *sql.DB, node v0api.FullNode, lookbackLimit uint64) *Syncer {
return &Syncer{
db: db,
node: node,
lookbackLimit: lookbackLimit,
}
}
func (s *Syncer) setupSchemas() error {
tx, err := s.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
/* tracks circulating fil available on the network at each tipset */
create table if not exists chain_economics
(
parent_state_root text not null
constraint chain_economics_pk primary key,
circulating_fil text not null,
vested_fil text not null,
mined_fil text not null,
burnt_fil text not null,
locked_fil text not null
);
create table if not exists block_cids
(
cid text not null
constraint block_cids_pk
primary key
);
create unique index if not exists block_cids_cid_uindex
on block_cids (cid);
create table if not exists blocks_synced
(
cid text not null
constraint blocks_synced_pk
primary key
constraint blocks_block_cids_cid_fk
references block_cids (cid),
synced_at int not null,
processed_at int
);
create unique index if not exists blocks_synced_cid_uindex
on blocks_synced (cid,processed_at);
create table if not exists block_parents
(
block text not null
constraint blocks_block_cids_cid_fk
references block_cids (cid),
parent text not null
);
create unique index if not exists block_parents_block_parent_uindex
on block_parents (block, parent);
create table if not exists drand_entries
(
round bigint not null
constraint drand_entries_pk
primary key,
data bytea not null
);
create unique index if not exists drand_entries_round_uindex
on drand_entries (round);
create table if not exists block_drand_entries
(
round bigint not null
constraint block_drand_entries_drand_entries_round_fk
references drand_entries (round),
block text not null
constraint blocks_block_cids_cid_fk
references block_cids (cid)
);
create unique index if not exists block_drand_entries_round_uindex
on block_drand_entries (round, block);
create table if not exists blocks
(
cid text not null
constraint blocks_pk
primary key
constraint blocks_block_cids_cid_fk
references block_cids (cid),
parentWeight numeric not null,
parentStateRoot text not null,
height bigint not null,
miner text not null,
timestamp bigint not null,
ticket bytea not null,
election_proof bytea,
win_count bigint,
parent_base_fee text not null,
forksig bigint not null
);
create unique index if not exists block_cid_uindex
on blocks (cid,height);
create materialized view if not exists state_heights
as select min(b.height) height, b.parentstateroot
from blocks b group by b.parentstateroot;
create index if not exists state_heights_height_index
on state_heights (height);
create index if not exists state_heights_parentstateroot_index
on state_heights (parentstateroot);
`); err != nil {
return err
}
return tx.Commit()
}
func (s *Syncer) Start(ctx context.Context) {
if err := logging.SetLogLevel("syncer", "info"); err != nil {
log.Fatal(err)
}
log.Debug("Starting Syncer")
if err := s.setupSchemas(); err != nil {
log.Fatal(err)
}
// capture all reported blocks
go s.subBlocks(ctx)
// we need to ensure that on a restart we don't reprocess the whole flarping chain
var sinceEpoch uint64
blkCID, height, err := s.mostRecentlySyncedBlockHeight()
if err != nil {
log.Fatalw("failed to find most recently synced block", "error", err)
} else {
if height > 0 {
log.Infow("Found starting point for syncing", "blockCID", blkCID.String(), "height", height)
sinceEpoch = uint64(height)
}
}
// continue to keep the block headers table up to date.
notifs, err := s.node.ChainNotify(ctx)
if err != nil {
log.Fatal(err)
}
go func() {
for notif := range notifs {
for _, change := range notif {
switch change.Type {
case store.HCCurrent:
// This case is important for capturing the initial state of a node
// which might be on a dead network with no new blocks being produced.
// It also allows a fresh Chainwatch instance to start walking the
// chain without waiting for a new block to come along.
fallthrough
case store.HCApply:
unsynced, err := s.unsyncedBlocks(ctx, change.Val, sinceEpoch)
if err != nil {
log.Errorw("failed to gather unsynced blocks", "error", err)
}
if err := s.storeCirculatingSupply(ctx, change.Val); err != nil {
log.Errorw("failed to store circulating supply", "error", err)
}
if len(unsynced) == 0 {
continue
}
if err := s.storeHeaders(unsynced, true, time.Now()); err != nil {
// this is pretty bad and needs some kind of retry;
// for now just log an error and the blocks will be attempted again on the next notification
log.Errorw("failed to store unsynced blocks", "error", err)
}
sinceEpoch = uint64(change.Val.Height())
case store.HCRevert:
log.Debug("revert todo")
}
}
}
}()
}
func (s *Syncer) unsyncedBlocks(ctx context.Context, head *types.TipSet, since uint64) (map[cid.Cid]*types.BlockHeader, error) {
hasList, err := s.syncedBlocks(since, s.lookbackLimit)
if err != nil {
return nil, err
}
// build a list of blocks that we have not synced.
toVisit := list.New()
for _, header := range head.Blocks() {
toVisit.PushBack(header)
}
toSync := map[cid.Cid]*types.BlockHeader{}
for toVisit.Len() > 0 {
bh := toVisit.Remove(toVisit.Back()).(*types.BlockHeader)
_, has := hasList[bh.Cid()]
if _, seen := toSync[bh.Cid()]; seen || has {
continue
}
toSync[bh.Cid()] = bh
if len(toSync)%500 == 10 {
log.Debugw("To visit", "toVisit", toVisit.Len(), "toSync", len(toSync), "current_height", bh.Height)
}
if bh.Height == 0 {
continue
}
pts, err := s.node.ChainGetTipSet(ctx, types.NewTipSetKey(bh.Parents...))
if err != nil {
log.Error(err)
continue
}
for _, header := range pts.Blocks() {
toVisit.PushBack(header)
}
}
log.Debugw("Gathered unsynced blocks", "count", len(toSync))
return toSync, nil
}
func (s *Syncer) syncedBlocks(since, limit uint64) (map[cid.Cid]struct{}, error) {
rws, err := s.db.Query(`select bs.cid FROM blocks_synced bs left join blocks b on b.cid = bs.cid where b.height <= $1 and bs.processed_at is not null limit $2`, since, limit)
if err != nil {
return nil, xerrors.Errorf("Failed to query blocks_synced: %w", err)
}
out := map[cid.Cid]struct{}{}
for rws.Next() {
var c string
if err := rws.Scan(&c); err != nil {
return nil, xerrors.Errorf("Failed to scan blocks_synced: %w", err)
}
ci, err := cid.Parse(c)
if err != nil {
return nil, xerrors.Errorf("Failed to parse blocks_synced: %w", err)
}
out[ci] = struct{}{}
}
return out, nil
}
func (s *Syncer) mostRecentlySyncedBlockHeight() (cid.Cid, int64, error) {
rw := s.db.QueryRow(`
select blocks_synced.cid, b.height
from blocks_synced
left join blocks b on blocks_synced.cid = b.cid
where processed_at is not null
order by height desc
limit 1
`)
var c string
var h int64
if err := rw.Scan(&c, &h); err != nil {
if err == sql.ErrNoRows {
return cid.Undef, 0, nil
}
return cid.Undef, -1, err
}
ci, err := cid.Parse(c)
if err != nil {
return cid.Undef, -1, err
}
return ci, h, nil
}
func (s *Syncer) storeCirculatingSupply(ctx context.Context, tipset *types.TipSet) error {
supply, err := s.node.StateVMCirculatingSupplyInternal(ctx, tipset.Key())
if err != nil {
return err
}
ceInsert := `insert into chain_economics (parent_state_root, circulating_fil, vested_fil, mined_fil, burnt_fil, locked_fil) ` +
`values ('%s', '%s', '%s', '%s', '%s', '%s') on conflict on constraint chain_economics_pk do ` +
`update set (circulating_fil, vested_fil, mined_fil, burnt_fil, locked_fil) = ('%[2]s', '%[3]s', '%[4]s', '%[5]s', '%[6]s') ` +
`where chain_economics.parent_state_root = '%[1]s';`
if _, err := s.db.Exec(fmt.Sprintf(ceInsert,
tipset.ParentState().String(),
supply.FilCirculating.String(),
supply.FilVested.String(),
supply.FilMined.String(),
supply.FilBurnt.String(),
supply.FilLocked.String(),
)); err != nil {
return xerrors.Errorf("insert circulating supply for tipset (%s): %w", tipset.Key().String(), err)
}
return nil
}
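storeCirculatingSupply builds its upsert with fmt.Sprintf, which is safe here only because every interpolated value is a numeric string produced by the node rather than user input. For reference, a placeholder-based variant of the same upsert would avoid quoting concerns entirely; this is a sketch (the helper name is illustrative), not the code used in this file:

package main

import (
	"database/sql"

	"golang.org/x/xerrors"
)

// upsertChainEconomics writes one circulating-supply row keyed by parent state root.
func upsertChainEconomics(db *sql.DB, stateRoot, circulating, vested, mined, burnt, locked string) error {
	_, err := db.Exec(`
		insert into chain_economics (parent_state_root, circulating_fil, vested_fil, mined_fil, burnt_fil, locked_fil)
		values ($1, $2, $3, $4, $5, $6)
		on conflict on constraint chain_economics_pk do update set
			circulating_fil = excluded.circulating_fil,
			vested_fil      = excluded.vested_fil,
			mined_fil       = excluded.mined_fil,
			burnt_fil       = excluded.burnt_fil,
			locked_fil      = excluded.locked_fil`,
		stateRoot, circulating, vested, mined, burnt, locked)
	if err != nil {
		return xerrors.Errorf("upsert chain_economics for %s: %w", stateRoot, err)
	}
	return nil
}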
func (s *Syncer) storeHeaders(bhs map[cid.Cid]*types.BlockHeader, sync bool, timestamp time.Time) error {
s.headerLk.Lock()
defer s.headerLk.Unlock()
if len(bhs) == 0 {
return nil
}
log.Debugw("Storing Headers", "count", len(bhs))
tx, err := s.db.Begin()
if err != nil {
return xerrors.Errorf("begin: %w", err)
}
if _, err := tx.Exec(`
create temp table bc (like block_cids excluding constraints) on commit drop;
create temp table de (like drand_entries excluding constraints) on commit drop;
create temp table bde (like block_drand_entries excluding constraints) on commit drop;
create temp table tbp (like block_parents excluding constraints) on commit drop;
create temp table bs (like blocks_synced excluding constraints) on commit drop;
create temp table b (like blocks excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
{
stmt, err := tx.Prepare(`copy bc (cid) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
if _, err := stmt.Exec(bh.Cid().String()); err != nil {
log.Error(err)
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_cids select * from bc on conflict do nothing `); err != nil {
return xerrors.Errorf("drand entries put: %w", err)
}
}
{
stmt, err := tx.Prepare(`copy de (round, data) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
for _, ent := range bh.BeaconEntries {
if _, err := stmt.Exec(ent.Round, ent.Data); err != nil {
log.Error(err)
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into drand_entries select * from de on conflict do nothing `); err != nil {
return xerrors.Errorf("drand entries put: %w", err)
}
}
{
stmt, err := tx.Prepare(`copy bde (round, block) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
for _, ent := range bh.BeaconEntries {
if _, err := stmt.Exec(ent.Round, bh.Cid().String()); err != nil {
log.Error(err)
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_drand_entries select * from bde on conflict do nothing `); err != nil {
return xerrors.Errorf("block drand entries put: %w", err)
}
}
{
stmt, err := tx.Prepare(`copy tbp (block, parent) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
for _, parent := range bh.Parents {
if _, err := stmt.Exec(bh.Cid().String(), parent.String()); err != nil {
log.Error(err)
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_parents select * from tbp on conflict do nothing `); err != nil {
return xerrors.Errorf("parent put: %w", err)
}
}
if sync {
stmt, err := tx.Prepare(`copy bs (cid, synced_at) from stdin `)
if err != nil {
return err
}
for _, bh := range bhs {
if _, err := stmt.Exec(bh.Cid().String(), timestamp.Unix()); err != nil {
log.Error(err)
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into blocks_synced select * from bs on conflict do nothing `); err != nil {
return xerrors.Errorf("syncd put: %w", err)
}
}
stmt2, err := tx.Prepare(`copy b (cid, parentWeight, parentStateRoot, height, miner, "timestamp", ticket, election_proof, win_count, parent_base_fee, forksig) from stdin`)
if err != nil {
return err
}
for _, bh := range bhs {
var eproof, winCount interface{}
if bh.ElectionProof != nil {
eproof = bh.ElectionProof.VRFProof
winCount = bh.ElectionProof.WinCount
}
if bh.Ticket == nil {
log.Warnf("got a block with nil ticket")
bh.Ticket = &types.Ticket{
VRFProof: []byte{},
}
}
if _, err := stmt2.Exec(
bh.Cid().String(),
bh.ParentWeight.String(),
bh.ParentStateRoot.String(),
bh.Height,
bh.Miner.String(),
bh.Timestamp,
bh.Ticket.VRFProof,
eproof,
winCount,
bh.ParentBaseFee.String(),
bh.ForkSignaling); err != nil {
log.Error(err)
}
}
if err := stmt2.Close(); err != nil {
return xerrors.Errorf("s2 close: %w", err)
}
if _, err := tx.Exec(`insert into blocks select * from b on conflict do nothing `); err != nil {
return xerrors.Errorf("blk put: %w", err)
}
return tx.Commit()
}


@ -1,34 +0,0 @@
package util
import (
"context"
"net/http"
"github.com/filecoin-project/go-jsonrpc"
"github.com/filecoin-project/lotus/api/client"
"github.com/filecoin-project/lotus/api/v0api"
ma "github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
)
func GetFullNodeAPIUsingCredentials(ctx context.Context, listenAddr, token string) (v0api.FullNode, jsonrpc.ClientCloser, error) {
parsedAddr, err := ma.NewMultiaddr(listenAddr)
if err != nil {
return nil, nil, err
}
_, addr, err := manet.DialArgs(parsedAddr)
if err != nil {
return nil, nil, err
}
return client.NewFullNodeRPCV0(ctx, apiURI(addr), apiHeaders(token))
}
func apiURI(addr string) string {
return "ws://" + addr + "/rpc/v0"
}
func apiHeaders(token string) http.Header {
headers := http.Header{}
headers.Add("Authorization", "Bearer "+token)
return headers
}


@ -1,51 +0,0 @@
package util
import (
"bytes"
"context"
"fmt"
"github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"github.com/filecoin-project/lotus/api/v0api"
)
// TODO extract this to a common location in lotus and reuse the code
// APIIpldStore is required for AMT and HAMT access.
type APIIpldStore struct {
ctx context.Context
api v0api.FullNode
}
func NewAPIIpldStore(ctx context.Context, api v0api.FullNode) *APIIpldStore {
return &APIIpldStore{
ctx: ctx,
api: api,
}
}
func (ht *APIIpldStore) Context() context.Context {
return ht.ctx
}
func (ht *APIIpldStore) Get(ctx context.Context, c cid.Cid, out interface{}) error {
raw, err := ht.api.ChainReadObj(ctx, c)
if err != nil {
return err
}
cu, ok := out.(cbg.CBORUnmarshaler)
if ok {
if err := cu.UnmarshalCBOR(bytes.NewReader(raw)); err != nil {
return err
}
return nil
}
return fmt.Errorf("Object does not implement CBORUnmarshaler: %T", out)
}
func (ht *APIIpldStore) Put(ctx context.Context, v interface{}) (cid.Cid, error) {
return cid.Undef, fmt.Errorf("Put is not implemented on APIIpldStore")
}
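APIIpldStore satisfies the store interface expected by the actor-state loaders used elsewhere in this diff (see the reward.Load call in the processor above). A short usage sketch, assuming an already-connected v0api.FullNode client and the chainwatch util package shown above (which this commit removes); the function name is illustrative:

package main

import (
	"context"

	"github.com/filecoin-project/lotus/api/v0api"
	"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
	"github.com/filecoin-project/lotus/chain/types"
	cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)

// loadRewardState reads the reward actor's state at the current chain head.
func loadRewardState(ctx context.Context, node v0api.FullNode) (reward.State, error) {
	act, err := node.StateGetActor(ctx, reward.Address, types.EmptyTSK)
	if err != nil {
		return nil, err
	}
	// The store resolves every IPLD load through ChainReadObj on the remote node.
	store := cw_util.NewAPIIpldStore(ctx, node)
	return reward.Load(store, act)
}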


@ -415,7 +415,7 @@ var actorControlList = &cli.Command{
 	ctx := lcli.ReqContext(cctx)
-	maddr, err := nodeApi.ActorAddress(ctx)
+	maddr, err := getActorAddress(ctx, cctx)
 	if err != nil {
 		return err
 	}


@ -10,7 +10,6 @@ import (
 	"testing"
 	"time"
-	"github.com/filecoin-project/go-state-types/network"
 	"github.com/stretchr/testify/require"
 	"github.com/urfave/cli/v2"
@ -34,9 +33,7 @@ func TestWorkerKeyChange(t *testing.T) {
 	kit.QuietMiningLogs()
 	blocktime := 1 * time.Millisecond
-	client1, client2, miner, ens := kit.EnsembleTwoOne(t, kit.MockProofs(),
-		kit.ConstructorOpts(kit.InstantaneousNetworkVersion(network.Version13)),
-	)
+	client1, client2, miner, ens := kit.EnsembleTwoOne(t, kit.MockProofs())
 	ens.InterconnectAll().BeginMining(blocktime)
 	output := bytes.NewBuffer(nil)

cmd/lotus-miner/dagstore.go (new file, 267 lines)

@ -0,0 +1,267 @@
package main
import (
"fmt"
"os"
"github.com/fatih/color"
"github.com/urfave/cli/v2"
"github.com/filecoin-project/lotus/api"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/lib/tablewriter"
)
var dagstoreCmd = &cli.Command{
Name: "dagstore",
Usage: "Manage the dagstore on the markets subsystem",
Subcommands: []*cli.Command{
dagstoreListShardsCmd,
dagstoreInitializeShardCmd,
dagstoreRecoverShardCmd,
dagstoreInitializeAllCmd,
dagstoreGcCmd,
},
}
var dagstoreListShardsCmd = &cli.Command{
Name: "list-shards",
Usage: "List all shards known to the dagstore, with their current status",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "color",
Usage: "use color in display output",
DefaultText: "depends on output being a TTY",
},
},
Action: func(cctx *cli.Context) error {
if cctx.IsSet("color") {
color.NoColor = !cctx.Bool("color")
}
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
shards, err := marketsApi.DagstoreListShards(ctx)
if err != nil {
return err
}
if len(shards) == 0 {
return nil
}
tw := tablewriter.New(
tablewriter.Col("Key"),
tablewriter.Col("State"),
tablewriter.Col("Error"),
)
colors := map[string]color.Attribute{
"ShardStateAvailable": color.FgGreen,
"ShardStateServing": color.FgBlue,
"ShardStateErrored": color.FgRed,
"ShardStateNew": color.FgYellow,
}
for _, s := range shards {
m := map[string]interface{}{
"Key": s.Key,
"State": func() string {
if c, ok := colors[s.State]; ok {
return color.New(c).Sprint(s.State)
}
return s.State
}(),
"Error": s.Error,
}
tw.Write(m)
}
return tw.Flush(os.Stdout)
},
}
var dagstoreInitializeShardCmd = &cli.Command{
Name: "initialize-shard",
ArgsUsage: "[key]",
Usage: "Initialize the specified shard",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "color",
Usage: "use color in display output",
DefaultText: "depends on output being a TTY",
},
},
Action: func(cctx *cli.Context) error {
if cctx.IsSet("color") {
color.NoColor = !cctx.Bool("color")
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide a single shard key")
}
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
return marketsApi.DagstoreInitializeShard(ctx, cctx.Args().First())
},
}
var dagstoreRecoverShardCmd = &cli.Command{
Name: "recover-shard",
ArgsUsage: "[key]",
Usage: "Attempt to recover a shard in errored state",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "color",
Usage: "use color in display output",
DefaultText: "depends on output being a TTY",
},
},
Action: func(cctx *cli.Context) error {
if cctx.IsSet("color") {
color.NoColor = !cctx.Bool("color")
}
if cctx.NArg() != 1 {
return fmt.Errorf("must provide a single shard key")
}
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
return marketsApi.DagstoreRecoverShard(ctx, cctx.Args().First())
},
}
var dagstoreInitializeAllCmd = &cli.Command{
Name: "initialize-all",
Usage: "Initialize all uninitialized shards, streaming results as they're produced; only shards for unsealed pieces are initialized by default",
Flags: []cli.Flag{
&cli.UintFlag{
Name: "concurrency",
Usage: "maximum shards to initialize concurrently at a time; use 0 for unlimited",
Required: true,
},
&cli.BoolFlag{
Name: "include-sealed",
Usage: "initialize sealed pieces as well",
},
&cli.BoolFlag{
Name: "color",
Usage: "use color in display output",
DefaultText: "depends on output being a TTY",
},
},
Action: func(cctx *cli.Context) error {
if cctx.IsSet("color") {
color.NoColor = !cctx.Bool("color")
}
concurrency := cctx.Uint("concurrency")
sealed := cctx.Bool("sealed")
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
params := api.DagstoreInitializeAllParams{
MaxConcurrency: int(concurrency),
IncludeSealed: sealed,
}
ch, err := marketsApi.DagstoreInitializeAll(ctx, params)
if err != nil {
return err
}
for {
select {
case evt, ok := <-ch:
if !ok {
return nil
}
_, _ = fmt.Fprint(os.Stdout, color.New(color.BgHiBlack).Sprintf("(%d/%d)", evt.Current, evt.Total))
_, _ = fmt.Fprint(os.Stdout, " ")
if evt.Event == "start" {
_, _ = fmt.Fprintln(os.Stdout, evt.Key, color.New(color.Reset).Sprint("STARTING"))
} else {
if evt.Success {
_, _ = fmt.Fprintln(os.Stdout, evt.Key, color.New(color.FgGreen).Sprint("SUCCESS"))
} else {
_, _ = fmt.Fprintln(os.Stdout, evt.Key, color.New(color.FgRed).Sprint("ERROR"), evt.Error)
}
}
case <-ctx.Done():
return fmt.Errorf("aborted")
}
}
},
}
var dagstoreGcCmd = &cli.Command{
Name: "gc",
Usage: "Garbage collect the dagstore",
Flags: []cli.Flag{
&cli.BoolFlag{
Name: "color",
Usage: "use color in display output",
DefaultText: "depends on output being a TTY",
},
},
Action: func(cctx *cli.Context) error {
if cctx.IsSet("color") {
color.NoColor = !cctx.Bool("color")
}
marketsApi, closer, err := lcli.GetMarketsAPI(cctx)
if err != nil {
return err
}
defer closer()
ctx := lcli.ReqContext(cctx)
collected, err := marketsApi.DagstoreGC(ctx)
if err != nil {
return err
}
if len(collected) == 0 {
_, _ = fmt.Fprintln(os.Stdout, "no shards collected")
return nil
}
for _, e := range collected {
if e.Error == "" {
_, _ = fmt.Fprintln(os.Stdout, e.Key, color.New(color.FgGreen).Sprint("SUCCESS"))
} else {
_, _ = fmt.Fprintln(os.Stdout, e.Key, color.New(color.FgRed).Sprint("ERROR"), e.Error)
}
}
return nil
},
}
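The new dagstore command group registers under the "market" category of lotus-miner (see the main.go hunk below). Illustrative invocations, using only the flags and argument shapes defined above:

	lotus-miner dagstore list-shards
	lotus-miner dagstore initialize-shard <key>
	lotus-miner dagstore recover-shard <key>
	lotus-miner dagstore initialize-all --concurrency 10 --include-sealed
	lotus-miner dagstore gc

initialize-all requires --concurrency; the shard keys for the per-shard commands are the values printed in the Key column of list-shards.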


@ -10,12 +10,13 @@ import (
 	"go.opencensus.io/trace"
 	"golang.org/x/xerrors"
-	cliutil "github.com/filecoin-project/lotus/cli/util"
 	"github.com/filecoin-project/go-address"
 	"github.com/filecoin-project/lotus/api"
 	"github.com/filecoin-project/lotus/build"
 	lcli "github.com/filecoin-project/lotus/cli"
+	cliutil "github.com/filecoin-project/lotus/cli/util"
 	"github.com/filecoin-project/lotus/lib/lotuslog"
 	"github.com/filecoin-project/lotus/lib/tracing"
 	"github.com/filecoin-project/lotus/node/repo"
@ -47,6 +48,7 @@ func main() {
 		lcli.WithCategory("market", storageDealsCmd),
 		lcli.WithCategory("market", retrievalDealsCmd),
 		lcli.WithCategory("market", dataTransfersCmd),
+		lcli.WithCategory("market", dagstoreCmd),
 		lcli.WithCategory("storage", sectorsCmd),
 		lcli.WithCategory("storage", provingCmd),
 		lcli.WithCategory("storage", storageCmd),

Some files were not shown because too many files have changed in this diff.