Merge branch 'master' into fix/vouchers

Steven Allen 2021-07-28 18:45:58 -07:00 committed by GitHub
commit cf94aed32e
112 changed files with 5542 additions and 8323 deletions


@ -43,13 +43,14 @@ body:
id: version
attributes:
label: Lotus Version
render: text
description: Enter the output of `lotus version` and `lotus-miner version` if applicable.
placeholder: |
e.g.
Daemon:1.11.0-rc2+debug+git.0519cd371.dirty+api1.3.0
Local: lotus version 1.11.0-rc2+debug+git.0519cd371.dirty
validations:
reuiqred: true
required: true
- type: textarea
id: Description
attributes:
@ -62,19 +63,18 @@ body:
* For sealing issues, include the output of `lotus-miner sectors status --log <sectorId>` for the failed sector(s).
* For proving issues, include the output of `lotus-miner proving info`.
* For deal making issues, include the output of `lotus client list-deals -v` and/or `lotus-miner storage-deals|retrieval-deals|data-transfers list [-v]` commands for the deal(s) in question.
render: bash
validations:
required: true
- type: textarea
id: extraInfo
attributes:
label: Logging Information
render: text
description: |
Please provide debug logs of the problem. Remember you can set the log level for:
* lotus: use `lotus log list` to get all log systems available and set the level by `lotus log set-level`. An example can be found [here](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/#log-level-control).
* lotus-miner: use `lotus-miner log list` to get all log systems available and set the level by `lotus-miner log set-level`.
If you don't provide detailed logs when you raise the issue, it will almost certainly be the first request we make before further diagnosing the problem.
render: bash
validations:
required: true
- type: textarea
@ -87,6 +87,5 @@ body:
2. Do '...'
3. See error '...'
...
render: bash
validations:
required: false
required: false


@ -35,10 +35,11 @@ body:
- type: textarea
id: version
attributes:
render: text
label: Lotus Tag and Version
description: Enter the lotus tag, output of `lotus version` and `lotus-miner version`.
validations:
reuiqred: true
required: true
- type: textarea
id: Description
attributes:
@ -48,7 +49,6 @@ body:
* What were you doing when you experienced the bug?
* Any *error* messages you saw, *where* you saw them, and what you believe may have caused them (if you have any ideas).
* What is the expected behaviour?
render: bash
validations:
required: true
- type: textarea
@ -72,6 +72,7 @@ body:
- type: textarea
id: logging
attributes:
render: text
label: Logging Information
description: Please link to the whole of the miner logs on your side of the transaction. You can upload the logs to a [gist](https://gist.github.com).
validations:
@ -86,6 +87,5 @@ body:
2. Do '...'
3. See error '...'
...
render: bash
validations:
required: false
required: false


@ -36,10 +36,11 @@ body:
- type: textarea
id: version
attributes:
render: text
label: Lotus Tag and Version
description: Enter the lotus tag, output of `lotus version` and `lotus-miner version`.
validations:
reuiqred: true
required: true
- type: textarea
id: Description
attributes:
@ -51,19 +52,18 @@ body:
* What is the expected behaviour?
* For sealing issues, include the output of `lotus-miner sectors status --log <sectorId>` for the failed sector(s).
* For proving issues, include the output of `lotus-miner proving info`.
render: bash
validations:
required: true
- type: textarea
id: extraInfo
attributes:
label: Logging Information
render: text
description: |
Please provide debug logs of the problem. Remember you can set the log level for:
* lotus: use `lotus log list` to get all log systems available and set the level by `lotus log set-level`. An example can be found [here](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/#log-level-control).
* lotus-miner: use `lotus-miner log list` to get all log systems available and set the level by `lotus-miner log set-level`.
If you don't provide detailed logs when you raise the issue, it will almost certainly be the first request we make before further diagnosing the problem.
render: bash
validations:
required: true
- type: textarea
@ -76,6 +76,5 @@ body:
2. Do '...'
3. See error '...'
...
render: bash
validations:
required: false
required: false


@ -2,7 +2,7 @@ name: Close and mark stale issue
on:
schedule:
- cron: '0 0 * * *'
- cron: '0 12 * * *'
jobs:
stale:
@ -18,10 +18,16 @@ jobs:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: 'Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 24 hours.'
close-issue-message: 'This issue was closed because it is missing author input.'
stale-pr-message: 'Thank you for submitting the PR and contributing to lotus! Lotus maintainers need more of your input before merging it, please address the suggested changes or reply to the comments or this PR will be closed in 48 hours. You are always more than welcome to reopen the PR later as well!'
close-pr-message: 'This PR was closed because it is missing author input. Please feel free to reopen the PR when you get to it! Thank you for your interest in contributing to lotus!'
stale-issue-label: 'kind/stale'
any-of-labels: 'hint/needs-author-input'
days-before-issue-stale: 5
stale-pr-label: 'kind/stale'
any-of-labels: 'need/author-input '
days-before-issue-stale: 3
days-before-issue-close: 1
days-before-pr-stale: 5
days-before-pr-close: 2
remove-stale-when-updated: true
enable-statistics: true


@ -1,5 +1,194 @@
# Lotus changelog
# 1.11.0 / 2021-07-22
This is a **highly recommended** release of Lotus that has many bug fixes, improvements and new features.
## Highlights
- Miner SimultaneousTransfers config ([filecoin-project/lotus#6612](https://github.com/filecoin-project/lotus/pull/6612))
- Set `SimultaneousTransfers` in the lotus miner config to configure the maximum number of parallel online data transfers, including both storage and retrieval deals (a concept sketch follows this list).
- Dynamic Retrieval pricing ([filecoin-project/lotus#6175](https://github.com/filecoin-project/lotus/pull/6175))
- Customize your retrieval ask price, see a quick tutorial [here](https://github.com/filecoin-project/lotus/discussions/6780).
- Robust message management ([filecoin-project/lotus#5822](https://github.com/filecoin-project/lotus/pull/5822))
- run `lotus mpool manage` and follow the instructions!
- Demo available at https://www.youtube.com/watch?v=QDocpLQjZgQ.
- Add utils to use multisigs as miner owners ([filecoin-project/lotus#6490](https://github.com/filecoin-project/lotus/pull/6490))
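The `SimultaneousTransfers` cap is easiest to picture as a counting semaphore around online data transfers. The standalone Go sketch below illustrates only that idea; it is not Lotus code, and the constant, names, and timings are invented for illustration.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// simultaneousTransfers stands in for the SimultaneousTransfers config value;
// the number is arbitrary and chosen only for this illustration.
const simultaneousTransfers = 2

func main() {
	// A buffered channel acts as a counting semaphore: at most
	// simultaneousTransfers transfers hold a slot at any one time.
	slots := make(chan struct{}, simultaneousTransfers)
	var wg sync.WaitGroup

	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go func(deal int) {
			defer wg.Done()
			slots <- struct{}{}        // block until a transfer slot frees up
			defer func() { <-slots }() // release the slot when done
			fmt.Printf("transfer %d running\n", deal)
			time.Sleep(100 * time.Millisecond) // pretend to move deal data
		}(i)
	}
	wg.Wait()
}
```

With a cap of 2, the five simulated transfers run at most two at a time; raising the config value widens that window at the cost of more concurrent bandwidth and memory.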
## More New Features
- feat: implement lotus-sim ([filecoin-project/lotus#6406](https://github.com/filecoin-project/lotus/pull/6406))
- implement a command to export a car ([filecoin-project/lotus#6405](https://github.com/filecoin-project/lotus/pull/6405))
- Add a command to get the fees of a deal ([filecoin-project/lotus#5307](https://github.com/filecoin-project/lotus/pull/5307))
- run `lotus-shed market get-deal-fees`
- Add a command to list retrievals ([filecoin-project/lotus#6337](https://github.com/filecoin-project/lotus/pull/6337))
- run `lotus client list-retrievals`
- lotus-gateway: add check command ([filecoin-project/lotus#6373](https://github.com/filecoin-project/lotus/pull/6373))
- lotus-wallet: JWT Support ([filecoin-project/lotus#6360](https://github.com/filecoin-project/lotus/pull/6360))
- Allow starting networks from arbitrary actor versions ([filecoin-project/lotus#6305](https://github.com/filecoin-project/lotus/pull/6305))
- oh, snap! ([filecoin-project/lotus#6202](https://github.com/filecoin-project/lotus/pull/6202))
- Add a shed util to count 64 GiB miner stats ([filecoin-project/lotus#6290](https://github.com/filecoin-project/lotus/pull/6290))
- Introduce stateless offline dealflow, bypassing the FSM/deallists ([filecoin-project/lotus#5961](https://github.com/filecoin-project/lotus/pull/5961))
- Transplant some useful commands to lotus-shed actor ([filecoin-project/lotus#5913](https://github.com/filecoin-project/lotus/pull/5913))
- run `lotus-shed actor`
- actor wrapper codegen ([filecoin-project/lotus#6108](https://github.com/filecoin-project/lotus/pull/6108))
- Add a shed util to count miners by post type ([filecoin-project/lotus#6169](https://github.com/filecoin-project/lotus/pull/6169))
- shed: command to list duplicate messages in tipsets (steb) ([filecoin-project/lotus#5847](https://github.com/filecoin-project/lotus/pull/5847))
- feat: allow checkpointing to forks ([filecoin-project/lotus#6107](https://github.com/filecoin-project/lotus/pull/6107))
- Add a CLI tool for miner proving deadline ([filecoin-project/lotus#6132](https://github.com/filecoin-project/lotus/pull/6132))
- run `lotus state miner-proving-deadline`
## Bug Fixes
- Fix wallet error messages ([filecoin-project/lotus#6594](https://github.com/filecoin-project/lotus/pull/6594))
- Fix CircleCI gen ([filecoin-project/lotus#6589](https://github.com/filecoin-project/lotus/pull/6589))
- Make query-ask CLI more graceful ([filecoin-project/lotus#6590](https://github.com/filecoin-project/lotus/pull/6590))
- scale up sector expiration to avoid sector expire in batch-pre-commit waiting ([filecoin-project/lotus#6566](https://github.com/filecoin-project/lotus/pull/6566))
- Fix an error in msigLockCancel ([filecoin-project/lotus#6582](https://github.com/filecoin-project/lotus/pull/6582))
- fix circleci being out of sync. ([filecoin-project/lotus#6573](https://github.com/filecoin-project/lotus/pull/6573))
- Fix helptext for ask price ([filecoin-project/lotus#6560](https://github.com/filecoin-project/lotus/pull/6560))
- fix commit finalize failed ([filecoin-project/lotus#6521](https://github.com/filecoin-project/lotus/pull/6521))
- Fix soup ([filecoin-project/lotus#6501](https://github.com/filecoin-project/lotus/pull/6501))
- fix: pick the correct partitions-per-post limit ([filecoin-project/lotus#6502](https://github.com/filecoin-project/lotus/pull/6502))
- sealing: Fix restartSectors race ([filecoin-project/lotus#6495](https://github.com/filecoin-project/lotus/pull/6495))
- Fix: correct the change of message size limit ([filecoin-project/lotus#6430](https://github.com/filecoin-project/lotus/pull/6430))
- Fix logging of stringified CIDs double-encoded in hex ([filecoin-project/lotus#6413](https://github.com/filecoin-project/lotus/pull/6413))
- Fix success handling in Retrieval ([filecoin-project/lotus#5921](https://github.com/filecoin-project/lotus/pull/5921))
- storagefsm: Fix batch deal packing behavior ([filecoin-project/lotus#6041](https://github.com/filecoin-project/lotus/pull/6041))
- events: Fix handling of multiple matched events per epoch ([filecoin-project/lotus#6355](https://github.com/filecoin-project/lotus/pull/6355))
- Fix logging around mineOne ([filecoin-project/lotus#6310](https://github.com/filecoin-project/lotus/pull/6310))
- Fix shell completions ([filecoin-project/lotus#6316](https://github.com/filecoin-project/lotus/pull/6316))
- Allow 8MB sectors in devnet ([filecoin-project/lotus#6312](https://github.com/filecoin-project/lotus/pull/6312))
- fix ticket expired ([filecoin-project/lotus#6304](https://github.com/filecoin-project/lotus/pull/6304))
- Revert "chore: update go-libp2p" ([filecoin-project/lotus#6306](https://github.com/filecoin-project/lotus/pull/6306))
- fix: wait-api should use GetAPI to acquire binary specific API ([filecoin-project/lotus#6246](https://github.com/filecoin-project/lotus/pull/6246))
- fix(ci): Updates to lotus CI build process ([filecoin-project/lotus#6256](https://github.com/filecoin-project/lotus/pull/6256))
- fix: use a consistent tipset in commands ([filecoin-project/lotus#6142](https://github.com/filecoin-project/lotus/pull/6142))
- go mod tidy for lotus-soup testplans ([filecoin-project/lotus#6124](https://github.com/filecoin-project/lotus/pull/6124))
- fix testground payment channel tests: use 1 miner ([filecoin-project/lotus#6126](https://github.com/filecoin-project/lotus/pull/6126))
- fix: use the parent state when listing actors ([filecoin-project/lotus#6143](https://github.com/filecoin-project/lotus/pull/6143))
- Speed up StateListMessages in some cases ([filecoin-project/lotus#6007](https://github.com/filecoin-project/lotus/pull/6007))
- fix(splitstore): fix a panic on revert-only head changes ([filecoin-project/lotus#6133](https://github.com/filecoin-project/lotus/pull/6133))
- drand: fix beacon cache ([filecoin-project/lotus#6164](https://github.com/filecoin-project/lotus/pull/6164))
## Improvements
- gateway: Add support for Version method ([filecoin-project/lotus#6618](https://github.com/filecoin-project/lotus/pull/6618))
- revamped integration test kit (aka. Operation Sparks Joy) ([filecoin-project/lotus#6329](https://github.com/filecoin-project/lotus/pull/6329))
- move with changed name ([filecoin-project/lotus#6587](https://github.com/filecoin-project/lotus/pull/6587))
- dynamic circleci config for streamlining test execution ([filecoin-project/lotus#6561](https://github.com/filecoin-project/lotus/pull/6561))
- extern/storage: add ability to ignore worker resources when scheduling. ([filecoin-project/lotus#6542](https://github.com/filecoin-project/lotus/pull/6542))
- Adjust various CLI display ratios to arbitrary precision ([filecoin-project/lotus#6309](https://github.com/filecoin-project/lotus/pull/6309))
- Test multicore SDR support ([filecoin-project/lotus#6479](https://github.com/filecoin-project/lotus/pull/6479))
- Unit tests for sector batchers ([filecoin-project/lotus#6432](https://github.com/filecoin-project/lotus/pull/6432))
- Update chain list with correct help instructions ([filecoin-project/lotus#6465](https://github.com/filecoin-project/lotus/pull/6465))
- clean failed sectors in batch commit ([filecoin-project/lotus#6451](https://github.com/filecoin-project/lotus/pull/6451))
- itests/kit: add guard to ensure imports from tests only. ([filecoin-project/lotus#6445](https://github.com/filecoin-project/lotus/pull/6445))
- consolidate integration tests into `itests` package; create test kit; cleanup ([filecoin-project/lotus#6311](https://github.com/filecoin-project/lotus/pull/6311))
- Fee config for sector batching ([filecoin-project/lotus#6420](https://github.com/filecoin-project/lotus/pull/6420))
- UX: lotus state power CLI should fail if called with a not-miner ([filecoin-project/lotus#6425](https://github.com/filecoin-project/lotus/pull/6425))
- Increase message size limit ([filecoin-project/lotus#6419](https://github.com/filecoin-project/lotus/pull/6419))
- polish(stmgr): define ExecMonitor for message application callback ([filecoin-project/lotus#6389](https://github.com/filecoin-project/lotus/pull/6389))
- upgrade testground action version ([filecoin-project/lotus#6403](https://github.com/filecoin-project/lotus/pull/6403))
- Bypass task scheduler for reading unsealed pieces ([filecoin-project/lotus#6280](https://github.com/filecoin-project/lotus/pull/6280))
- testplans: lotus-soup: use default WPoStChallengeWindow ([filecoin-project/lotus#6400](https://github.com/filecoin-project/lotus/pull/6400))
- Integration tests for offline deals ([filecoin-project/lotus#6081](https://github.com/filecoin-project/lotus/pull/6081))
- Fix some flaky tests ([filecoin-project/lotus#6397](https://github.com/filecoin-project/lotus/pull/6397))
- build appimage in CI ([filecoin-project/lotus#6384](https://github.com/filecoin-project/lotus/pull/6384))
- Generate AppImage ([filecoin-project/lotus#6208](https://github.com/filecoin-project/lotus/pull/6208))
- Add test for AddVerifiedClient ([filecoin-project/lotus#6317](https://github.com/filecoin-project/lotus/pull/6317))
- Typo fix in error message: "pubusb" -> "pubsub" ([filecoin-project/lotus#6365](https://github.com/filecoin-project/lotus/pull/6365))
- Improve the cli state call command ([filecoin-project/lotus#6226](https://github.com/filecoin-project/lotus/pull/6226))
- Upscale mineOne message to a WARN on unexpected ineligibility ([filecoin-project/lotus#6358](https://github.com/filecoin-project/lotus/pull/6358))
- Remove few useless variable assignments ([filecoin-project/lotus#6359](https://github.com/filecoin-project/lotus/pull/6359))
- Reduce noise from 'peer has different genesis' messages ([filecoin-project/lotus#6357](https://github.com/filecoin-project/lotus/pull/6357))
- Get current seal proof when necessary ([filecoin-project/lotus#6339](https://github.com/filecoin-project/lotus/pull/6339))
- Remove log line when tracing is not configured ([filecoin-project/lotus#6334](https://github.com/filecoin-project/lotus/pull/6334))
- separate tracing environment variables ([filecoin-project/lotus#6323](https://github.com/filecoin-project/lotus/pull/6323))
- feat: log dispute rate ([filecoin-project/lotus#6322](https://github.com/filecoin-project/lotus/pull/6322))
- Move verifreg shed utils to CLI ([filecoin-project/lotus#6135](https://github.com/filecoin-project/lotus/pull/6135))
- consider storiface.PathStorage when calculating storage requirements ([filecoin-project/lotus#6233](https://github.com/filecoin-project/lotus/pull/6233))
- `storage` module: add go docs and minor code quality refactors ([filecoin-project/lotus#6259](https://github.com/filecoin-project/lotus/pull/6259))
- Increase data transfer timeouts ([filecoin-project/lotus#6300](https://github.com/filecoin-project/lotus/pull/6300))
- gateway: spin off from cmd to package ([filecoin-project/lotus#6294](https://github.com/filecoin-project/lotus/pull/6294))
- Return total power when GetPowerRaw doesn't find miner claim ([filecoin-project/lotus#4938](https://github.com/filecoin-project/lotus/pull/4938))
- add flags to control gateway lookback parameters ([filecoin-project/lotus#6247](https://github.com/filecoin-project/lotus/pull/6247))
- chore(ci): Enable build on RC tags ([filecoin-project/lotus#6238](https://github.com/filecoin-project/lotus/pull/6238))
- cron-wc ([filecoin-project/lotus#6178](https://github.com/filecoin-project/lotus/pull/6178))
- Allow creation of state tree v3s ([filecoin-project/lotus#6167](https://github.com/filecoin-project/lotus/pull/6167))
- mpool: Cleanup pre-nv12 selection logic ([filecoin-project/lotus#6148](https://github.com/filecoin-project/lotus/pull/6148))
- attempt to do better padding on pieces being written into sectors ([filecoin-project/lotus#5988](https://github.com/filecoin-project/lotus/pull/5988))
- remove duplicate ask and calculate ping before lock ([filecoin-project/lotus#5968](https://github.com/filecoin-project/lotus/pull/5968))
- flaky tests improvement: separate TestBatchDealInput from TestAPIDealFlow ([filecoin-project/lotus#6141](https://github.com/filecoin-project/lotus/pull/6141))
- Testground checks on push ([filecoin-project/lotus#5887](https://github.com/filecoin-project/lotus/pull/5887))
- Use EmptyTSK where appropriate ([filecoin-project/lotus#6134](https://github.com/filecoin-project/lotus/pull/6134))
- upgrade `lotus-soup` testplans and reduce deals concurrency to a single miner ([filecoin-project/lotus#6122](https://github.com/filecoin-project/lotus/pull/6122))
## Dependency Updates
- downgrade libp2p/go-libp2p-yamux to v0.5.1. ([filecoin-project/lotus#6605](https://github.com/filecoin-project/lotus/pull/6605))
- Update libp2p to 0.14.2 ([filecoin-project/lotus#6404](https://github.com/filecoin-project/lotus/pull/6404))
- update to markets-v1.4.0 ([filecoin-project/lotus#6369](https://github.com/filecoin-project/lotus/pull/6369))
- Use new actor tags ([filecoin-project/lotus#6291](https://github.com/filecoin-project/lotus/pull/6291))
- chore: update go-libp2p ([filecoin-project/lotus#6231](https://github.com/filecoin-project/lotus/pull/6231))
- Update ffi to proofs v7 ([filecoin-project/lotus#6150](https://github.com/filecoin-project/lotus/pull/6150))
## Others
- Initial draft: basic build instructions on Readme ([filecoin-project/lotus#6498](https://github.com/filecoin-project/lotus/pull/6498))
- Remove rc changelog, compile the new changelog for final release only ([filecoin-project/lotus#6444](https://github.com/filecoin-project/lotus/pull/6444))
- updated configuration comments for docs ([filecoin-project/lotus#6440](https://github.com/filecoin-project/lotus/pull/6440))
- Set ntwk v13 HyperDrive Calibration upgrade epoch ([filecoin-project/lotus#6441](https://github.com/filecoin-project/lotus/pull/6441))
- build snapcraft ([filecoin-project/lotus#6388](https://github.com/filecoin-project/lotus/pull/6388))
- Fix the doc errors of the sealing config funcs ([filecoin-project/lotus#6399](https://github.com/filecoin-project/lotus/pull/6399))
- Add doc on gas balancing ([filecoin-project/lotus#6392](https://github.com/filecoin-project/lotus/pull/6392))
- Add interop network ([filecoin-project/lotus#6387](https://github.com/filecoin-project/lotus/pull/6387))
- Network version 13 (v1.11) ([filecoin-project/lotus#6342](https://github.com/filecoin-project/lotus/pull/6342))
- Add a warning to the release issue template ([filecoin-project/lotus#6374](https://github.com/filecoin-project/lotus/pull/6374))
- Update RELEASE_ISSUE_TEMPLATE.md ([filecoin-project/lotus#6236](https://github.com/filecoin-project/lotus/pull/6236))
- Delete CODEOWNERS ([filecoin-project/lotus#6289](https://github.com/filecoin-project/lotus/pull/6289))
- Feat/nerpa v4 ([filecoin-project/lotus#6248](https://github.com/filecoin-project/lotus/pull/6248))
- Introduce a release issue template ([filecoin-project/lotus#5826](https://github.com/filecoin-project/lotus/pull/5826))
- This is a 1:1 forward-port of PR#6183 from 1.9.x to master ([filecoin-project/lotus#6196](https://github.com/filecoin-project/lotus/pull/6196))
- Update cli gen ([filecoin-project/lotus#6155](https://github.com/filecoin-project/lotus/pull/6155))
- Generate CLI docs ([filecoin-project/lotus#6145](https://github.com/filecoin-project/lotus/pull/6145))
## Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @raulk | 118 | +11972/-10860 | 472 |
| @magik6k | 65 | +10824/-4158 | 353 |
| @aarshkshah1992 | 59 | +8057/-3355 | 224 |
| @arajasek | 41 | +8786/-1691 | 331 |
| @Stebalien | 106 | +7653/-2718 | 273 |
| dirkmc | 11 | +2580/-1371 | 77 |
| @dirkmc | 39 | +1865/-1194 | 79 |
| @Kubuxu | 19 | +1973/-485 | 81 |
| @vyzo | 4 | +1748/-330 | 50 |
| @aarshkshah1992 | 5 | +1462/-213 | 27 |
| @coryschwartz | 35 | +568/-206 | 59 |
| @chadwick2143 | 3 | +739/-1 | 4 |
| @ribasushi | 21 | +487/-164 | 36 |
| @hannahhoward | 5 | +544/-5 | 19 |
| @jennijuju | 9 | +241/-174 | 19 |
| @frrist | 1 | +137/-88 | 7 |
| @travisperson | 3 | +175/-6 | 7 |
| @wadeAlexC | 1 | +48/-129 | 1 |
| @whyrusleeping | 8 | +161/-13 | 11 |
| lotus | 1 | +114/-46 | 1 |
| @nonsense | 8 | +107/-53 | 20 |
| @rjan90 | 4 | +115/-33 | 4 |
| @ZenGround0 | 3 | +114/-1 | 4 |
| @Aloxaf | 1 | +43/-61 | 7 |
| @yaohcn | 4 | +89/-9 | 5 |
| @mitchellsoo | 1 | +51/-0 | 1 |
| @placer14 | 3 | +28/-18 | 4 |
| @jennijuju | 6 | +9/-14 | 6 |
| @Frank | 2 | +11/-10 | 2 |
| @wangchao | 3 | +5/-4 | 4 |
| @Steve Loeppky | 1 | +7/-1 | 1 |
| @Lion | 1 | +4/-2 | 1 |
| @Mimir | 1 | +2/-2 | 1 |
| @raulk | 1 | +1/-1 | 1 |
| @Jack Yao | 1 | +1/-1 | 1 |
| @IPFSUnion | 1 | +1/-1 | 1 |
# 1.10.1 / 2021-07-05
This is an optional but **highly recommended** release of Lotus for lotus miners that has many bug fixes and improvements based on the feedback we got from the community since HyperDrive.
@ -32,7 +221,269 @@ Contributors
| @zhoutian527 | 1 | +2/-2 | 1 |
| @ribasushi | 1 | +1/-1 | 1 |
# 1.11.0-rc1 / 2021-06-28
This is the first release candidate for the optional Lotus v1.11.0 release that introduces several months of bugfixes and feature development.
- github.com/filecoin-project/lotus:
- Lotus version 1.11.0
- gateway: Add support for Version method ([filecoin-project/lotus#6618](https://github.com/filecoin-project/lotus/pull/6618))
- Miner SimultaneousTransfers config ([filecoin-project/lotus#6612](https://github.com/filecoin-project/lotus/pull/6612))
- revamped integration test kit (aka. Operation Sparks Joy) ([filecoin-project/lotus#6329](https://github.com/filecoin-project/lotus/pull/6329))
- downgrade libp2p/go-libp2p-yamux to v0.5.1. ([filecoin-project/lotus#6605](https://github.com/filecoin-project/lotus/pull/6605))
- Fix wallet error messages ([filecoin-project/lotus#6594](https://github.com/filecoin-project/lotus/pull/6594))
- Fix CircleCI gen ([filecoin-project/lotus#6589](https://github.com/filecoin-project/lotus/pull/6589))
- Make query-ask CLI more graceful ([filecoin-project/lotus#6590](https://github.com/filecoin-project/lotus/pull/6590))
- ([filecoin-project/lotus#6406](https://github.com/filecoin-project/lotus/pull/6406))
- move with changed name ([filecoin-project/lotus#6587](https://github.com/filecoin-project/lotus/pull/6587))
- scale up sector expiration to avoid sector expire in batch-pre-commit waiting ([filecoin-project/lotus#6566](https://github.com/filecoin-project/lotus/pull/6566))
- Merge release branch into master ([filecoin-project/lotus#6583](https://github.com/filecoin-project/lotus/pull/6583))
- ([filecoin-project/lotus#6582](https://github.com/filecoin-project/lotus/pull/6582))
- fix circleci being out of sync. ([filecoin-project/lotus#6573](https://github.com/filecoin-project/lotus/pull/6573))
- dynamic circleci config for streamlining test execution ([filecoin-project/lotus#6561](https://github.com/filecoin-project/lotus/pull/6561))
- Merge 1.10 branch into master ([filecoin-project/lotus#6571](https://github.com/filecoin-project/lotus/pull/6571))
- Fix helptext ([filecoin-project/lotus#6560](https://github.com/filecoin-project/lotus/pull/6560))
- extern/storage: add ability to ignore worker resources when scheduling. ([filecoin-project/lotus#6542](https://github.com/filecoin-project/lotus/pull/6542))
- Merge 1.10 branch into master ([filecoin-project/lotus#6540](https://github.com/filecoin-project/lotus/pull/6540))
- Initial draft: basic build instructions on Readme ([filecoin-project/lotus#6498](https://github.com/filecoin-project/lotus/pull/6498))
- fix commit finalize failed ([filecoin-project/lotus#6521](https://github.com/filecoin-project/lotus/pull/6521))
- Dynamic Retrieval pricing ([filecoin-project/lotus#6175](https://github.com/filecoin-project/lotus/pull/6175))
- Fix soup ([filecoin-project/lotus#6501](https://github.com/filecoin-project/lotus/pull/6501))
- fix: pick the correct partitions-per-post limit ([filecoin-project/lotus#6502](https://github.com/filecoin-project/lotus/pull/6502))
- Fix the build
- Adjust various CLI display ratios to arbitrary precision ([filecoin-project/lotus#6309](https://github.com/filecoin-project/lotus/pull/6309))
- Add utils to use multisigs as miner owners ([filecoin-project/lotus#6490](https://github.com/filecoin-project/lotus/pull/6490))
- Test multicore SDR support ([filecoin-project/lotus#6479](https://github.com/filecoin-project/lotus/pull/6479))
- sealing: Fix restartSectors race ([filecoin-project/lotus#6495](https://github.com/filecoin-project/lotus/pull/6495))
- Merge 1.10 into master ([filecoin-project/lotus#6487](https://github.com/filecoin-project/lotus/pull/6487))
- Unit tests for sector batchers ([filecoin-project/lotus#6432](https://github.com/filecoin-project/lotus/pull/6432))
- Merge 1.10 changes into master ([filecoin-project/lotus#6466](https://github.com/filecoin-project/lotus/pull/6466))
- Update chain list with correct help instructions ([filecoin-project/lotus#6465](https://github.com/filecoin-project/lotus/pull/6465))
- clean failed sectors in batch commit ([filecoin-project/lotus#6451](https://github.com/filecoin-project/lotus/pull/6451))
- itests/kit: add guard to ensure imports from tests only. ([filecoin-project/lotus#6445](https://github.com/filecoin-project/lotus/pull/6445))
- consolidate integration tests into `itests` package; create test kit; cleanup ([filecoin-project/lotus#6311](https://github.com/filecoin-project/lotus/pull/6311))
- Remove rc changelog, compile the new changelog for final release only ([filecoin-project/lotus#6444](https://github.com/filecoin-project/lotus/pull/6444))
- updated configuration comments for docs ([filecoin-project/lotus#6440](https://github.com/filecoin-project/lotus/pull/6440))
- Set ntwk v13 HyperDrive Calibration upgrade epoch ([filecoin-project/lotus#6441](https://github.com/filecoin-project/lotus/pull/6441))
- Merge release/v1.10.10 into master ([filecoin-project/lotus#6439](https://github.com/filecoin-project/lotus/pull/6439))
- implement a command to export a car ([filecoin-project/lotus#6405](https://github.com/filecoin-project/lotus/pull/6405))
- Merge v1.10 release branch into master ([filecoin-project/lotus#6435](https://github.com/filecoin-project/lotus/pull/6435))
- Fee config for sector batching ([filecoin-project/lotus#6420](https://github.com/filecoin-project/lotus/pull/6420))
- Fix: correct the change of message size limit ([filecoin-project/lotus#6430](https://github.com/filecoin-project/lotus/pull/6430))
- UX: lotus state power CLI should fail if called with a not-miner ([filecoin-project/lotus#6425](https://github.com/filecoin-project/lotus/pull/6425))
- network reset friday
- Increase message size limit ([filecoin-project/lotus#6419](https://github.com/filecoin-project/lotus/pull/6419))
- polish(stmgr): define ExecMonitor for message application callback ([filecoin-project/lotus#6389](https://github.com/filecoin-project/lotus/pull/6389))
- upgrade testground action version ([filecoin-project/lotus#6403](https://github.com/filecoin-project/lotus/pull/6403))
- Fix logging of stringified CIDs double-encoded in hex ([filecoin-project/lotus#6413](https://github.com/filecoin-project/lotus/pull/6413))
- Update libp2p to 0.14.2 ([filecoin-project/lotus#6404](https://github.com/filecoin-project/lotus/pull/6404))
- Bypass task scheduler for reading unsealed pieces ([filecoin-project/lotus#6280](https://github.com/filecoin-project/lotus/pull/6280))
- testplans: lotus-soup: use default WPoStChallengeWindow ([filecoin-project/lotus#6400](https://github.com/filecoin-project/lotus/pull/6400))
- build snapcraft ([filecoin-project/lotus#6388](https://github.com/filecoin-project/lotus/pull/6388))
- Fix the doc errors of the sealing config funcs ([filecoin-project/lotus#6399](https://github.com/filecoin-project/lotus/pull/6399))
- Integration tests for offline deals ([filecoin-project/lotus#6081](https://github.com/filecoin-project/lotus/pull/6081))
- Fix success handling in Retrieval ([filecoin-project/lotus#5921](https://github.com/filecoin-project/lotus/pull/5921))
- Fix some flaky tests ([filecoin-project/lotus#6397](https://github.com/filecoin-project/lotus/pull/6397))
- build appimage in CI ([filecoin-project/lotus#6384](https://github.com/filecoin-project/lotus/pull/6384))
- Add doc on gas balancing ([filecoin-project/lotus#6392](https://github.com/filecoin-project/lotus/pull/6392))
- Add a command to list retrievals ([filecoin-project/lotus#6337](https://github.com/filecoin-project/lotus/pull/6337))
- Add interop network ([filecoin-project/lotus#6387](https://github.com/filecoin-project/lotus/pull/6387))
- Network version 13 (v1.11) ([filecoin-project/lotus#6342](https://github.com/filecoin-project/lotus/pull/6342))
- Generate AppImage ([filecoin-project/lotus#6208](https://github.com/filecoin-project/lotus/pull/6208))
- lotus-gateway: add check command ([filecoin-project/lotus#6373](https://github.com/filecoin-project/lotus/pull/6373))
- Add a warning to the release issue template ([filecoin-project/lotus#6374](https://github.com/filecoin-project/lotus/pull/6374))
- update to markets-v1.4.0 ([filecoin-project/lotus#6369](https://github.com/filecoin-project/lotus/pull/6369))
- Add test for AddVerifiedClient ([filecoin-project/lotus#6317](https://github.com/filecoin-project/lotus/pull/6317))
- Typo fix in error message: "pubusb" -> "pubsub" ([filecoin-project/lotus#6365](https://github.com/filecoin-project/lotus/pull/6365))
- Improve the cli state call command ([filecoin-project/lotus#6226](https://github.com/filecoin-project/lotus/pull/6226))
- Upscale mineOne message to a WARN on unexpected ineligibility ([filecoin-project/lotus#6358](https://github.com/filecoin-project/lotus/pull/6358))
- storagefsm: Fix batch deal packing behavior ([filecoin-project/lotus#6041](https://github.com/filecoin-project/lotus/pull/6041))
- Remove few useless variable assignments ([filecoin-project/lotus#6359](https://github.com/filecoin-project/lotus/pull/6359))
- lotus-wallet: JWT Support ([filecoin-project/lotus#6360](https://github.com/filecoin-project/lotus/pull/6360))
- Reduce noise from 'peer has different genesis' messages ([filecoin-project/lotus#6357](https://github.com/filecoin-project/lotus/pull/6357))
- events: Fix handling of multiple matched events per epoch ([filecoin-project/lotus#6355](https://github.com/filecoin-project/lotus/pull/6355))
- Update RELEASE_ISSUE_TEMPLATE.md ([filecoin-project/lotus#6236](https://github.com/filecoin-project/lotus/pull/6236))
- Get current seal proof when necessary ([filecoin-project/lotus#6339](https://github.com/filecoin-project/lotus/pull/6339))
- Allow starting networks from arbitrary actor versions ([filecoin-project/lotus#6333](https://github.com/filecoin-project/lotus/pull/6333))
- Remove log line when tracing is not configured ([filecoin-project/lotus#6334](https://github.com/filecoin-project/lotus/pull/6334))
- Revert "Allow starting networks from arbitrary actor versions" ([filecoin-project/lotus#6330](https://github.com/filecoin-project/lotus/pull/6330))
- separate tracing environment variables ([filecoin-project/lotus#6323](https://github.com/filecoin-project/lotus/pull/6323))
- Allow starting networks from arbitrary actor versions ([filecoin-project/lotus#6305](https://github.com/filecoin-project/lotus/pull/6305))
- feat: log dispute rate ([filecoin-project/lotus#6322](https://github.com/filecoin-project/lotus/pull/6322))
- Use new actor tags ([filecoin-project/lotus#6291](https://github.com/filecoin-project/lotus/pull/6291))
- Fix logging around mineOne ([filecoin-project/lotus#6310](https://github.com/filecoin-project/lotus/pull/6310))
- Fix shell completions ([filecoin-project/lotus#6316](https://github.com/filecoin-project/lotus/pull/6316))
- Allow 8MB sectors in devnet ([filecoin-project/lotus#6312](https://github.com/filecoin-project/lotus/pull/6312))
- fix ticket expired ([filecoin-project/lotus#6304](https://github.com/filecoin-project/lotus/pull/6304))
- oh, snap! ([filecoin-project/lotus#6202](https://github.com/filecoin-project/lotus/pull/6202))
- Move verifreg shed utils to CLI ([filecoin-project/lotus#6135](https://github.com/filecoin-project/lotus/pull/6135))
- consider storiface.PathStorage when calculating storage requirements ([filecoin-project/lotus#6233](https://github.com/filecoin-project/lotus/pull/6233))
- `storage` module: add go docs and minor code quality refactors ([filecoin-project/lotus#6259](https://github.com/filecoin-project/lotus/pull/6259))
- Revert "chore: update go-libp2p" ([filecoin-project/lotus#6306](https://github.com/filecoin-project/lotus/pull/6306))
- Increase data transfer timeouts ([filecoin-project/lotus#6300](https://github.com/filecoin-project/lotus/pull/6300))
- gateway: spin off from cmd to package ([filecoin-project/lotus#6294](https://github.com/filecoin-project/lotus/pull/6294))
- Update to markets 1.3 ([filecoin-project/lotus#6149](https://github.com/filecoin-project/lotus/pull/6149))
- Add a shed util to count 64 GiB miner stats ([filecoin-project/lotus#6290](https://github.com/filecoin-project/lotus/pull/6290))
- Delete CODEOWNERS ([filecoin-project/lotus#6289](https://github.com/filecoin-project/lotus/pull/6289))
- Merge v1.9.0 to master ([filecoin-project/lotus#6275](https://github.com/filecoin-project/lotus/pull/6275))
- Backport 6200 to master ([filecoin-project/lotus#6272](https://github.com/filecoin-project/lotus/pull/6272))
- Introduce stateless offline dealflow, bypassing the FSM/deallists ([filecoin-project/lotus#5961](https://github.com/filecoin-project/lotus/pull/5961))
- chore: update go-libp2p ([filecoin-project/lotus#6231](https://github.com/filecoin-project/lotus/pull/6231))
- fix: wait-api should use GetAPI to acquire binary specific API ([filecoin-project/lotus#6246](https://github.com/filecoin-project/lotus/pull/6246))
- Update RELEASE_ISSUE_TEMPLATE.md
- fix(ci): Updates to lotus CI build process ([filecoin-project/lotus#6256](https://github.com/filecoin-project/lotus/pull/6256))
- add flags to control gateway lookback parameters ([filecoin-project/lotus#6247](https://github.com/filecoin-project/lotus/pull/6247))
- Feat/nerpa v4 ([filecoin-project/lotus#6248](https://github.com/filecoin-project/lotus/pull/6248))
- chore(ci): Enable build on RC tags ([filecoin-project/lotus#6238](https://github.com/filecoin-project/lotus/pull/6238))
- Transplant some useful commands to lotus-shed actor ([filecoin-project/lotus#5913](https://github.com/filecoin-project/lotus/pull/5913))
- wip actor wrapper codegen ([filecoin-project/lotus#6108](https://github.com/filecoin-project/lotus/pull/6108))
- Robust message management ([filecoin-project/lotus#5822](https://github.com/filecoin-project/lotus/pull/5822))
- Add a shed util to count miners by post type ([filecoin-project/lotus#6169](https://github.com/filecoin-project/lotus/pull/6169))
- Introduce a release issue template ([filecoin-project/lotus#5826](https://github.com/filecoin-project/lotus/pull/5826))
- cron-wc ([filecoin-project/lotus#6178](https://github.com/filecoin-project/lotus/pull/6178))
- This is a 1:1 forward-port of PR#6183 from 1.9.x to master ([filecoin-project/lotus#6196](https://github.com/filecoin-project/lotus/pull/6196))
- Allow creation of state tree v3s ([filecoin-project/lotus#6167](https://github.com/filecoin-project/lotus/pull/6167))
- drand: fix beacon cache ([filecoin-project/lotus#6164](https://github.com/filecoin-project/lotus/pull/6164))
- Update cli gen ([filecoin-project/lotus#6155](https://github.com/filecoin-project/lotus/pull/6155))
- mpool: Cleanup pre-nv12 selection logic ([filecoin-project/lotus#6148](https://github.com/filecoin-project/lotus/pull/6148))
- Update ffi to proofs v7 ([filecoin-project/lotus#6150](https://github.com/filecoin-project/lotus/pull/6150))
- Generate CLI docs ([filecoin-project/lotus#6145](https://github.com/filecoin-project/lotus/pull/6145))
- feat: allow checkpointing to forks ([filecoin-project/lotus#6107](https://github.com/filecoin-project/lotus/pull/6107))
- attempt to do better padding on pieces being written into sectors ([filecoin-project/lotus#5988](https://github.com/filecoin-project/lotus/pull/5988))
- remove duplicate ask and calculate ping before lock ([filecoin-project/lotus#5968](https://github.com/filecoin-project/lotus/pull/5968))
- Add a command to get the fees of a deal ([filecoin-project/lotus#5307](https://github.com/filecoin-project/lotus/pull/5307))
- flaky tests improvement: separate TestBatchDealInput from TestAPIDealFlow ([filecoin-project/lotus#6141](https://github.com/filecoin-project/lotus/pull/6141))
- Testground checks on push ([filecoin-project/lotus#5887](https://github.com/filecoin-project/lotus/pull/5887))
- Add a CLI tool for miner proving deadline ([filecoin-project/lotus#6132](https://github.com/filecoin-project/lotus/pull/6132))
- Use EmptyTSK where appropriate ([filecoin-project/lotus#6134](https://github.com/filecoin-project/lotus/pull/6134))
- fix: use a consistent tipset in commands ([filecoin-project/lotus#6142](https://github.com/filecoin-project/lotus/pull/6142))
- go mod tidy for lotus-soup testplans ([filecoin-project/lotus#6124](https://github.com/filecoin-project/lotus/pull/6124))
- fix testground payment channel tests: use 1 miner ([filecoin-project/lotus#6126](https://github.com/filecoin-project/lotus/pull/6126))
- fix: use the parent state when listing actors ([filecoin-project/lotus#6143](https://github.com/filecoin-project/lotus/pull/6143))
- Speed up StateListMessages in some cases ([filecoin-project/lotus#6007](https://github.com/filecoin-project/lotus/pull/6007))
- Return total power when GetPowerRaw doesn't find miner claim ([filecoin-project/lotus#4938](https://github.com/filecoin-project/lotus/pull/4938))
- fix(splitstore): fix a panic on revert-only head changes ([filecoin-project/lotus#6133](https://github.com/filecoin-project/lotus/pull/6133))
- shed: command to list duplicate messages in tipsets (steb) ([filecoin-project/lotus#5847](https://github.com/filecoin-project/lotus/pull/5847))
- upgrade `lotus-soup` testplans and reduce deals concurrency to a single miner ([filecoin-project/lotus#6122](https://github.com/filecoin-project/lotus/pull/6122))
- Merge releases (1.8.0) into master ([filecoin-project/lotus#6118](https://github.com/filecoin-project/lotus/pull/6118))
- github.com/filecoin-project/go-commp-utils (v0.1.0 -> v0.1.1-0.20210427191551-70bf140d31c7):
- add a padding helper function ([filecoin-project/go-commp-utils#3](https://github.com/filecoin-project/go-commp-utils/pull/3))
- github.com/filecoin-project/go-data-transfer (v1.4.3 -> v1.6.0):
- release: v1.6.0
- fix: option to disable accept and complete timeouts
- fix: disable restart ack timeout
- release: v1.5.0
- Add isRestart param to validators (#197) ([filecoin-project/go-data-transfer#197](https://github.com/filecoin-project/go-data-transfer/pull/197))
- fix: flaky TestChannelMonitorAutoRestart (#198) ([filecoin-project/go-data-transfer#198](https://github.com/filecoin-project/go-data-transfer/pull/198))
- Channel monitor watches for errors instead of measuring data rate (#190) ([filecoin-project/go-data-transfer#190](https://github.com/filecoin-project/go-data-transfer/pull/190))
- fix: prevent concurrent restarts for same channel (#195) ([filecoin-project/go-data-transfer#195](https://github.com/filecoin-project/go-data-transfer/pull/195))
- fix: channel state machine event handling (#194) ([filecoin-project/go-data-transfer#194](https://github.com/filecoin-project/go-data-transfer/pull/194))
- Dont double count data sent (#185) ([filecoin-project/go-data-transfer#185](https://github.com/filecoin-project/go-data-transfer/pull/185))
- release: v1.4.3 (#189) ([filecoin-project/go-data-transfer#189](https://github.com/filecoin-project/go-data-transfer/pull/189))
- github.com/filecoin-project/go-fil-markets (v1.2.5 -> v1.5.0):
- release: v1.5.0
- Dynamic Retrieval Pricing (#542) ([filecoin-project/go-fil-markets#542](https://github.com/filecoin-project/go-fil-markets/pull/542))
- release: v1.4.0 (#551) ([filecoin-project/go-fil-markets#551](https://github.com/filecoin-project/go-fil-markets/pull/551))
- Update to go data transfer v1.6.0 (#550) ([filecoin-project/go-fil-markets#550](https://github.com/filecoin-project/go-fil-markets/pull/550))
- fix first make error (#548) ([filecoin-project/go-fil-markets#548](https://github.com/filecoin-project/go-fil-markets/pull/548))
- release: v1.3.0 (#544) ([filecoin-project/go-fil-markets#544](https://github.com/filecoin-project/go-fil-markets/pull/544))
- fix restarts during data transfer for a retrieval deal (#540) ([filecoin-project/go-fil-markets#540](https://github.com/filecoin-project/go-fil-markets/pull/540))
- Test Retrieval for offline deals (#541) ([filecoin-project/go-fil-markets#541](https://github.com/filecoin-project/go-fil-markets/pull/541))
- Allow anonymous submodule checkout (#535) ([filecoin-project/go-fil-markets#535](https://github.com/filecoin-project/go-fil-markets/pull/535))
- github.com/filecoin-project/specs-actors (v0.9.13 -> v0.9.14):
- Set ConsensusMinerMinPower to 10 TiB (#1427) ([filecoin-project/specs-actors#1427](https://github.com/filecoin-project/specs-actors/pull/1427))
- github.com/filecoin-project/specs-actors/v2 (v2.3.5-0.20210114162132-5b58b773f4fb -> v2.3.5):
- Set ConsensusMinerMinPower to 10 TiB (#1428) ([filecoin-project/specs-actors#1428](https://github.com/filecoin-project/specs-actors/pull/1428))
- v2 VM satisfies SimVM interface (#1355) ([filecoin-project/specs-actors#1355](https://github.com/filecoin-project/specs-actors/pull/1355))
- github.com/filecoin-project/specs-actors/v3 (v3.1.0 -> v3.1.1):
- Set ConsensusMinerMinPower to 10 TiB for all PoStProofPolicies (#1429) ([filecoin-project/specs-actors#1429](https://github.com/filecoin-project/specs-actors/pull/1429))
- github.com/filecoin-project/specs-actors/v4 (v4.0.0 -> v4.0.1):
- Set ConsensusMinerMinPower to 10 TiB for all PoStProofPolicies (#1430) ([filecoin-project/specs-actors#1430](https://github.com/filecoin-project/specs-actors/pull/1430))
Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| Raúl Kripalani | 118 | +11972/-10860 | 472 |
| Łukasz Magiera | 65 | +10824/-4158 | 353 |
| aarshkshah1992 | 59 | +8057/-3355 | 224 |
| Aayush Rajasekaran | 41 | +8786/-1691 | 331 |
| Steven Allen | 106 | +7653/-2718 | 273 |
| dirkmc | 11 | +2580/-1371 | 77 |
| Dirk McCormick | 39 | +1865/-1194 | 79 |
| Jakub Sztandera | 19 | +1973/-485 | 81 |
| vyzo | 4 | +1748/-330 | 50 |
| Aarsh Shah | 5 | +1462/-213 | 27 |
| Cory Schwartz | 35 | +568/-206 | 59 |
| chadwick2143 | 3 | +739/-1 | 4 |
| Peter Rabbitson | 21 | +487/-164 | 36 |
| hannahhoward | 5 | +544/-5 | 19 |
| Jennifer Wang | 8 | +206/-172 | 17 |
| frrist | 1 | +137/-88 | 7 |
| Travis Person | 3 | +175/-6 | 7 |
| Alex Wade | 1 | +48/-129 | 1 |
| whyrusleeping | 8 | +161/-13 | 11 |
| lotus | 1 | +114/-46 | 1 |
| Anton Evangelatov | 8 | +107/-53 | 20 |
| Rjan | 4 | +115/-33 | 4 |
| ZenGround0 | 3 | +114/-1 | 4 |
| Aloxaf | 1 | +43/-61 | 7 |
| yaohcn | 4 | +89/-9 | 5 |
| mitchellsoo | 1 | +51/-0 | 1 |
| Mike Greenberg | 3 | +28/-18 | 4 |
| Jennifer | 6 | +9/-14 | 6 |
| Frank | 2 | +11/-10 | 2 |
| wangchao | 3 | +5/-4 | 4 |
| Steve Loeppky | 1 | +7/-1 | 1 |
| Lion | 1 | +4/-2 | 1 |
| Mimir | 1 | +2/-2 | 1 |
| raulk | 1 | +1/-1 | 1 |
| Jack Yao | 1 | +1/-1 | 1 |
| IPFSUnion | 1 | +1/-1 | 1 |
||||||||| 764fa9dae
=========
=======
>>>>>>> master
# 1.10.1 / 2021-07-05
This is an optional but **highly recommended** release of Lotus for lotus miners that has many bug fixes and improvements based on the feedback we got from the community since HyperDrive.
## New Features
- commit batch: AggregateAboveBaseFee config #6650
- `AggregateAboveBaseFee` is added to the miner sealing configuration for setting the network base fee at which to start aggregating proofs. When the network base fee is lower than this value, prove commits will be submitted individually via `ProveCommitSector`. Per the [Batch Incentive Alignment](https://github.com/filecoin-project/FIPs/blob/master/FIPS/fip-0013.md#batch-incentive-alignment) introduced in FIP-0013, we recommend miners set this value to 0.15 nanoFIL (the default) to avoid unexpected aggregation fees being burned and to get the most benefit from aggregation.
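For intuition, the choice this setting drives is a plain base-fee threshold comparison: 0.15 nanoFIL is 150,000,000 attoFIL, and aggregation is used only while the current base fee sits at or above that level. The sketch below illustrates the comparison under those assumptions; it is not the Lotus implementation, and the function and variable names are invented.

```go
package main

import (
	"fmt"
	"math/big"
)

// 1 nanoFIL = 1e9 attoFIL; base fees on chain are denominated in attoFIL.
var attoPerNano = big.NewInt(1_000_000_000)

// aggregateAboveBaseFee mirrors the recommended default of 0.15 nanoFIL,
// expressed in attoFIL (150,000,000 attoFIL).
var aggregateAboveBaseFee = new(big.Int).Div(
	new(big.Int).Mul(big.NewInt(15), attoPerNano), big.NewInt(100))

// useAggregation reports whether prove-commits would be batched into a single
// ProveCommitAggregate message instead of being sent individually.
func useAggregation(currentBaseFee *big.Int) bool {
	return currentBaseFee.Cmp(aggregateAboveBaseFee) >= 0
}

func main() {
	low := big.NewInt(100_000_000)  // 0.1 nanoFIL: below the threshold
	high := big.NewInt(200_000_000) // 0.2 nanoFIL: above the threshold
	fmt.Println(useAggregation(low))  // false -> ProveCommitSector per sector
	fmt.Println(useAggregation(high)) // true  -> aggregate the batch
}
```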
## Bug Fixes
- storage: Fix FinalizeSector with sectors in storage paths #6652
- Fix tiny error in check-client-datacap #6664
- Fix: precommit_batch method used the wrong cfg.PreCommitBatchWait #6658
- to optimize the batchwait #6636
- fix getTicket: sector precommitted but expired case #6635
- handleSubmitCommitAggregate() exception handling #6595
- remove precommit check in handleCommitFailed #6634
- ensure agg fee is adequate
- fix: miner balance is not enough, so that ProveCommitAggregate msg exec failed #6623
- commit batch: Initialize the FailedSectors map #6647
Contributors
| Contributor | Commits | Lines ± | Files Changed |
|-------------|---------|---------|---------------|
| @magik6k | 7 | +151/-56 | 21 |
| @llifezou | 4 | +59/-20 | 4 |
| @johnli-helloworld | 2 | +45/-14 | 4 |
| @wangchao | 1 | +1/-27 | 1 |
| Jerry | 2 | +9/-4 | 2 |
| @zhoutian527 | 1 | +2/-2 | 1 |
| @ribasushi | 1 | +1/-1 | 1 |
# 1.10.0 / 2021-06-23
This is a mandatory release of Lotus that introduces Filecoin network v13, codenamed the HyperDrive upgrade. The


@ -17,6 +17,8 @@ MODULES:=
CLEAN:=
BINS:=
GOCC?=go
ldflags=-X=github.com/filecoin-project/lotus/build.CurrentCommit=+git.$(subst -,.,$(shell git describe --always --match=NeVeRmAtCh --dirty 2>/dev/null || git rev-parse --short HEAD 2>/dev/null))
ifneq ($(strip $(LDFLAGS)),)
ldflags+=-extldflags=$(LDFLAGS)
@ -85,32 +87,32 @@ interopnet: build-devnets
lotus: $(BUILD_DEPS)
rm -f lotus
go build $(GOFLAGS) -o lotus ./cmd/lotus
$(GOCC) build $(GOFLAGS) -o lotus ./cmd/lotus
.PHONY: lotus
BINS+=lotus
lotus-miner: $(BUILD_DEPS)
rm -f lotus-miner
go build $(GOFLAGS) -o lotus-miner ./cmd/lotus-miner
$(GOCC) build $(GOFLAGS) -o lotus-miner ./cmd/lotus-miner
.PHONY: lotus-miner
BINS+=lotus-miner
lotus-worker: $(BUILD_DEPS)
rm -f lotus-worker
go build $(GOFLAGS) -o lotus-worker ./cmd/lotus-seal-worker
$(GOCC) build $(GOFLAGS) -o lotus-worker ./cmd/lotus-seal-worker
.PHONY: lotus-worker
BINS+=lotus-worker
lotus-shed: $(BUILD_DEPS)
rm -f lotus-shed
go build $(GOFLAGS) -o lotus-shed ./cmd/lotus-shed
$(GOCC) build $(GOFLAGS) -o lotus-shed ./cmd/lotus-shed
.PHONY: lotus-shed
BINS+=lotus-shed
lotus-gateway: $(BUILD_DEPS)
rm -f lotus-gateway
go build $(GOFLAGS) -o lotus-gateway ./cmd/lotus-gateway
$(GOCC) build $(GOFLAGS) -o lotus-gateway ./cmd/lotus-gateway
.PHONY: lotus-gateway
BINS+=lotus-gateway
@ -138,19 +140,19 @@ install-app:
lotus-seed: $(BUILD_DEPS)
rm -f lotus-seed
go build $(GOFLAGS) -o lotus-seed ./cmd/lotus-seed
$(GOCC) build $(GOFLAGS) -o lotus-seed ./cmd/lotus-seed
.PHONY: lotus-seed
BINS+=lotus-seed
benchmarks:
go run github.com/whyrusleeping/bencher ./... > bench.json
$(GOCC) run github.com/whyrusleeping/bencher ./... > bench.json
@echo Submitting results
@curl -X POST 'http://benchmark.kittyhawk.wtf/benchmark' -d '@bench.json' -u "${benchmark_http_cred}"
.PHONY: benchmarks
lotus-pond: 2k
go build -o lotus-pond ./lotuspond
$(GOCC) build -o lotus-pond ./lotuspond
.PHONY: lotus-pond
BINS+=lotus-pond
@ -161,85 +163,63 @@ lotus-pond-front:
lotus-pond-app: lotus-pond-front lotus-pond
.PHONY: lotus-pond-app
lotus-townhall:
rm -f lotus-townhall
go build -o lotus-townhall ./cmd/lotus-townhall
.PHONY: lotus-townhall
BINS+=lotus-townhall
lotus-townhall-front:
(cd ./cmd/lotus-townhall/townhall && npm i && npm run build)
.PHONY: lotus-townhall-front
lotus-townhall-app: lotus-touch lotus-townhall-front
.PHONY: lotus-townhall-app
lotus-fountain:
rm -f lotus-fountain
go build -o lotus-fountain ./cmd/lotus-fountain
$(GOCC) build -o lotus-fountain ./cmd/lotus-fountain
.PHONY: lotus-fountain
BINS+=lotus-fountain
lotus-chainwatch:
rm -f lotus-chainwatch
go build $(GOFLAGS) -o lotus-chainwatch ./cmd/lotus-chainwatch
.PHONY: lotus-chainwatch
BINS+=lotus-chainwatch
lotus-bench:
rm -f lotus-bench
go build -o lotus-bench ./cmd/lotus-bench
$(GOCC) build -o lotus-bench ./cmd/lotus-bench
.PHONY: lotus-bench
BINS+=lotus-bench
lotus-stats:
rm -f lotus-stats
go build $(GOFLAGS) -o lotus-stats ./cmd/lotus-stats
$(GOCC) build $(GOFLAGS) -o lotus-stats ./cmd/lotus-stats
.PHONY: lotus-stats
BINS+=lotus-stats
lotus-pcr:
rm -f lotus-pcr
go build $(GOFLAGS) -o lotus-pcr ./cmd/lotus-pcr
$(GOCC) build $(GOFLAGS) -o lotus-pcr ./cmd/lotus-pcr
.PHONY: lotus-pcr
BINS+=lotus-pcr
lotus-health:
rm -f lotus-health
go build -o lotus-health ./cmd/lotus-health
$(GOCC) build -o lotus-health ./cmd/lotus-health
.PHONY: lotus-health
BINS+=lotus-health
lotus-wallet:
rm -f lotus-wallet
go build -o lotus-wallet ./cmd/lotus-wallet
$(GOCC) build -o lotus-wallet ./cmd/lotus-wallet
.PHONY: lotus-wallet
BINS+=lotus-wallet
lotus-keygen:
rm -f lotus-keygen
go build -o lotus-keygen ./cmd/lotus-keygen
$(GOCC) build -o lotus-keygen ./cmd/lotus-keygen
.PHONY: lotus-keygen
BINS+=lotus-keygen
testground:
go build -tags testground -o /dev/null ./cmd/lotus
$(GOCC) build -tags testground -o /dev/null ./cmd/lotus
.PHONY: testground
BINS+=testground
tvx:
rm -f tvx
go build -o tvx ./cmd/tvx
$(GOCC) build -o tvx ./cmd/tvx
.PHONY: tvx
BINS+=tvx
install-chainwatch: lotus-chainwatch
install -C ./lotus-chainwatch /usr/local/bin/lotus-chainwatch
lotus-sim: $(BUILD_DEPS)
rm -f lotus-sim
go build $(GOFLAGS) -o lotus-sim ./cmd/lotus-sim
$(GOCC) build $(GOFLAGS) -o lotus-sim ./cmd/lotus-sim
.PHONY: lotus-sim
BINS+=lotus-sim
@ -261,21 +241,13 @@ install-miner-service: install-miner install-daemon-service
@echo
@echo "lotus-miner service installed. Don't forget to run 'sudo systemctl start lotus-miner' to start it and 'sudo systemctl enable lotus-miner' for it to be enabled on startup."
install-chainwatch-service: install-chainwatch install-daemon-service
mkdir -p /etc/systemd/system
mkdir -p /var/log/lotus
install -C -m 0644 ./scripts/lotus-chainwatch.service /etc/systemd/system/lotus-chainwatch.service
systemctl daemon-reload
@echo
@echo "chainwatch service installed. Don't forget to run 'sudo systemctl start lotus-chainwatch' to start it and 'sudo systemctl enable lotus-chainwatch' for it to be enabled on startup."
install-main-services: install-miner-service
install-all-services: install-main-services install-chainwatch-service
install-all-services: install-main-services
install-services: install-main-services
clean-daemon-service: clean-miner-service clean-chainwatch-service
clean-daemon-service: clean-miner-service
-systemctl stop lotus-daemon
-systemctl disable lotus-daemon
rm -f /etc/systemd/system/lotus-daemon.service
@ -287,12 +259,6 @@ clean-miner-service:
rm -f /etc/systemd/system/lotus-miner.service
systemctl daemon-reload
clean-chainwatch-service:
-systemctl stop lotus-chainwatch
-systemctl disable lotus-chainwatch
rm -f /etc/systemd/system/lotus-chainwatch.service
systemctl daemon-reload
clean-main-services: clean-daemon-service
clean-all-services: clean-main-services
@ -319,25 +285,25 @@ dist-clean:
.PHONY: dist-clean
type-gen: api-gen
go run ./gen/main.go
go generate -x ./...
$(GOCC) run ./gen/main.go
$(GOCC) generate -x ./...
goimports -w api/
method-gen: api-gen
(cd ./lotuspond/front/src/chain && go run ./methodgen.go)
(cd ./lotuspond/front/src/chain && $(GOCC) run ./methodgen.go)
actors-gen:
go run ./chain/actors/agen
go fmt ./...
$(GOCC) run ./chain/actors/agen
$(GOCC) fmt ./...
api-gen:
go run ./gen/api
$(GOCC) run ./gen/api
goimports -w api
goimports -w api
.PHONY: api-gen
cfgdoc-gen:
go run ./node/config/cfgdocgen > ./node/config/doc_gen.go
$(GOCC) run ./node/config/cfgdocgen > ./node/config/doc_gen.go
appimage: lotus
rm -rf appimage-builder-cache || true
@ -351,9 +317,9 @@ appimage: lotus
docsgen: docsgen-md docsgen-openrpc
docsgen-md-bin: api-gen actors-gen
go build $(GOFLAGS) -o docgen-md ./api/docgen/cmd
$(GOCC) build $(GOFLAGS) -o docgen-md ./api/docgen/cmd
docsgen-openrpc-bin: api-gen actors-gen
go build $(GOFLAGS) -o docgen-openrpc ./api/docgen-openrpc/cmd
$(GOCC) build $(GOFLAGS) -o docgen-openrpc ./api/docgen-openrpc/cmd
docsgen-md: docsgen-md-full docsgen-md-storage docsgen-md-worker
@ -393,4 +359,4 @@ print-%:
@echo $*=$($*)
circleci:
go generate -x ./.circleci
$(GOCC) generate -x ./.circleci


@ -166,6 +166,10 @@ type StorageMiner interface {
MarketPendingDeals(ctx context.Context) (PendingDealInfo, error) //perm:write
MarketPublishPendingDeals(ctx context.Context) error //perm:admin
// RuntimeSubsystems returns the subsystems that are enabled
// in this instance.
RuntimeSubsystems(ctx context.Context) (MinerSubsystems, error) //perm:read
DealsImportData(ctx context.Context, dealPropCid cid.Cid, file string) error //perm:admin
DealsList(ctx context.Context) ([]MarketDeal, error) //perm:admin
DealsConsiderOnlineStorageDeals(context.Context) (bool, error) //perm:admin
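The `RuntimeSubsystems` endpoint added above lets a client discover which subsystems a miner instance is actually running. Below is a minimal sketch of a caller, assuming an `api.StorageMiner` handle has already been obtained (the JSON-RPC client and token setup are omitted) and a module that depends on lotus; the subsystem constants and the `Has` helper are the ones introduced in `api/miner_subsystems.go` further down.

```go
package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/api"
)

// printMarketsStatus asks a miner node for its enabled subsystems and reports
// whether the markets (deal-making) subsystem is running on that instance.
func printMarketsStatus(ctx context.Context, miner api.StorageMiner) error {
	subs, err := miner.RuntimeSubsystems(ctx) // requires read permission
	if err != nil {
		return err
	}
	fmt.Println("enabled subsystems:", subs)
	if subs.Has(api.SubsystemMarkets) {
		fmt.Println("markets subsystem is enabled")
	} else {
		fmt.Println("markets subsystem is disabled")
	}
	return nil
}
```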


@ -16,10 +16,10 @@ import (
"github.com/google/uuid"
"github.com/ipfs/go-cid"
"github.com/ipfs/go-filestore"
metrics "github.com/libp2p/go-libp2p-core/metrics"
"github.com/libp2p/go-libp2p-core/metrics"
"github.com/libp2p/go-libp2p-core/network"
"github.com/libp2p/go-libp2p-core/peer"
protocol "github.com/libp2p/go-libp2p-core/protocol"
"github.com/libp2p/go-libp2p-core/protocol"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/multiformats/go-multiaddr"
@ -46,11 +46,12 @@ import (
)
var ExampleValues = map[reflect.Type]interface{}{
reflect.TypeOf(auth.Permission("")): auth.Permission("write"),
reflect.TypeOf(""): "string value",
reflect.TypeOf(uint64(42)): uint64(42),
reflect.TypeOf(byte(7)): byte(7),
reflect.TypeOf([]byte{}): []byte("byte array"),
reflect.TypeOf(api.MinerSubsystem(0)): api.MinerSubsystem(1),
reflect.TypeOf(auth.Permission("")): auth.Permission("write"),
reflect.TypeOf(""): "string value",
reflect.TypeOf(uint64(42)): uint64(42),
reflect.TypeOf(byte(7)): byte(7),
reflect.TypeOf([]byte{}): []byte("byte array"),
}
func addExample(v interface{}) {
@ -264,6 +265,12 @@ func init() {
addExample(api.CheckStatusCode(0))
addExample(map[string]interface{}{"abc": 123})
addExample(api.MinerSubsystems{
api.SubsystemMining,
api.SubsystemSealing,
api.SubsystemSectorStorage,
api.SubsystemMarkets,
})
}
func GetAPIType(name, pkg string) (i interface{}, t reflect.Type, permStruct []reflect.Type) {

79
api/miner_subsystems.go Normal file
View File

@ -0,0 +1,79 @@
package api
import (
"encoding/json"
)
// MinerSubsystem represents a miner subsystem. Int and string values are not
// guaranteed to be stable over time.
type MinerSubsystem int
const (
// SubsystemUnknown is a placeholder for the zero value. It should never
// be used.
SubsystemUnknown MinerSubsystem = iota
// SubsystemMarkets signifies the storage and retrieval
// deal-making subsystem.
SubsystemMarkets
// SubsystemMining signifies the mining subsystem.
SubsystemMining
// SubsystemSealing signifies the sealing subsystem.
SubsystemSealing
// SubsystemSectorStorage signifies the sector storage subsystem.
SubsystemSectorStorage
)
var MinerSubsystemToString = map[MinerSubsystem]string{
SubsystemUnknown: "Unknown",
SubsystemMarkets: "Markets",
SubsystemMining: "Mining",
SubsystemSealing: "Sealing",
SubsystemSectorStorage: "SectorStorage",
}
var MinerSubsystemToID = map[string]MinerSubsystem{
"Unknown": SubsystemUnknown,
"Markets": SubsystemMarkets,
"Mining": SubsystemMining,
"Sealing": SubsystemSealing,
"SectorStorage": SubsystemSectorStorage,
}
func (ms MinerSubsystem) MarshalJSON() ([]byte, error) {
return json.Marshal(MinerSubsystemToString[ms])
}
func (ms *MinerSubsystem) UnmarshalJSON(b []byte) error {
var j string
err := json.Unmarshal(b, &j)
if err != nil {
return err
}
s, ok := MinerSubsystemToID[j]
if !ok {
*ms = SubsystemUnknown
} else {
*ms = s
}
return nil
}
type MinerSubsystems []MinerSubsystem
func (ms MinerSubsystems) Has(entry MinerSubsystem) bool {
for _, v := range ms {
if v == entry {
return true
}
}
return false
}
func (ms MinerSubsystem) String() string {
s, ok := MinerSubsystemToString[ms]
if !ok {
return MinerSubsystemToString[SubsystemUnknown]
}
return s
}
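// The helper below is an illustrative sketch, not part of the original file: it
// shows how a caller holding a StorageMiner handle (defined elsewhere in this
// package) might gate behaviour on the subsystems reported by RuntimeSubsystems.
// The function name is hypothetical and it assumes "context" and
// "golang.org/x/xerrors" are imported.
func requireMarketsExample(ctx context.Context, m StorageMiner) error {
	subs, err := m.RuntimeSubsystems(ctx)
	if err != nil {
		return xerrors.Errorf("querying runtime subsystems: %w", err)
	}
	if !subs.Has(SubsystemMarkets) {
		return xerrors.Errorf("the markets subsystem is not enabled on this miner")
	}
	return nil
}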

View File

@ -699,6 +699,8 @@ type StorageMinerStruct struct {
ReturnUnsealPiece func(p0 context.Context, p1 storiface.CallID, p2 *storiface.CallError) error `perm:"admin"`
RuntimeSubsystems func(p0 context.Context) (MinerSubsystems, error) `perm:"read"`
SealingAbort func(p0 context.Context, p1 storiface.CallID) error `perm:"admin"`
SealingSchedDiag func(p0 context.Context, p1 bool) (interface{}, error) `perm:"admin"`
@ -4095,6 +4097,17 @@ func (s *StorageMinerStub) ReturnUnsealPiece(p0 context.Context, p1 storiface.Ca
return ErrNotSupported
}
func (s *StorageMinerStruct) RuntimeSubsystems(p0 context.Context) (MinerSubsystems, error) {
if s.Internal.RuntimeSubsystems == nil {
return *new(MinerSubsystems), ErrNotSupported
}
return s.Internal.RuntimeSubsystems(p0)
}
func (s *StorageMinerStub) RuntimeSubsystems(p0 context.Context) (MinerSubsystems, error) {
return *new(MinerSubsystems), ErrNotSupported
}
func (s *StorageMinerStruct) SealingAbort(p0 context.Context, p1 storiface.CallID) error {
if s.Internal.SealingAbort == nil {
return ErrNotSupported

View File

@ -8,9 +8,11 @@ import (
"path/filepath"
"runtime"
"sync"
"time"
"github.com/dgraph-io/badger/v2"
"github.com/dgraph-io/badger/v2/options"
"github.com/dgraph-io/badger/v2/pb"
"github.com/multiformats/go-base32"
"go.uber.org/zap"
@ -74,20 +76,45 @@ func (b *badgerLogger) Warningf(format string, args ...interface{}) {
b.skip2.Warnf(format, args...)
}
// bsState is the current blockstore state
type bsState int
const (
stateOpen = iota
// stateOpen signifies an open blockstore
stateOpen bsState = iota
// stateClosing signifies a blockstore that is currently closing
stateClosing
// stateClosed signifies a blockstore that has been closed
stateClosed
)
type bsMoveState int
const (
// moveStateNone signifies that there is no move in progress
moveStateNone bsMoveState = iota
// moveStateMoving signifies that there is a move in progress
moveStateMoving
// moveStateCleanup signifies that a move has completed or aborted and we are cleaning up
moveStateCleanup
// moveStateLock signifies that an exclusive lock has been acquired
moveStateLock
)
// Blockstore is a badger-backed IPLD blockstore.
type Blockstore struct {
stateLk sync.RWMutex
state int
state bsState
viewers sync.WaitGroup
DB *badger.DB
opts Options
moveMx sync.Mutex
moveCond sync.Cond
moveState bsMoveState
rlock int
db *badger.DB
dbNext *badger.DB // when moving
opts Options
prefixing bool
prefix []byte
@ -113,13 +140,15 @@ func Open(opts Options) (*Blockstore, error) {
return nil, fmt.Errorf("failed to open badger blockstore: %w", err)
}
bs := &Blockstore{DB: db, opts: opts}
bs := &Blockstore{db: db, opts: opts}
if p := opts.Prefix; p != "" {
bs.prefixing = true
bs.prefix = []byte(p)
bs.prefixLen = len(bs.prefix)
}
bs.moveCond.L = &bs.moveMx
return bs, nil
}
@ -143,7 +172,7 @@ func (b *Blockstore) Close() error {
// wait for all accesses to complete
b.viewers.Wait()
return b.DB.Close()
return b.db.Close()
}
func (b *Blockstore) access() error {
@ -165,12 +194,225 @@ func (b *Blockstore) isOpen() bool {
return b.state == stateOpen
}
// CollectGarbage runs garbage collection on the value log
func (b *Blockstore) CollectGarbage() error {
if err := b.access(); err != nil {
return err
// lockDB/unlockDB implement a recursive lock contingent on move state
func (b *Blockstore) lockDB() {
b.moveMx.Lock()
defer b.moveMx.Unlock()
if b.rlock == 0 {
for b.moveState == moveStateLock {
b.moveCond.Wait()
}
}
defer b.viewers.Done()
b.rlock++
}
func (b *Blockstore) unlockDB() {
b.moveMx.Lock()
defer b.moveMx.Unlock()
b.rlock--
if b.rlock == 0 && b.moveState == moveStateLock {
b.moveCond.Broadcast()
}
}
// lockMove/unlockMove implement an exclusive lock of move state
func (b *Blockstore) lockMove() {
b.moveMx.Lock()
b.moveState = moveStateLock
for b.rlock > 0 {
b.moveCond.Wait()
}
}
func (b *Blockstore) unlockMove(state bsMoveState) {
b.moveState = state
b.moveCond.Broadcast()
b.moveMx.Unlock()
}
// movingGC moves the blockstore to a new path, adjacent to the current path, and creates
// a symlink from the current path to the new path; the old blockstore is deleted.
//
// The blockstore MUST accept new writes during the move and ensure that these
// are persisted to the new blockstore; if a failure occurs, aborting the move,
// then they must be persisted to the old blockstore.
// In short, the blockstore must not lose data from new writes during the move.
func (b *Blockstore) movingGC() error {
// this inlines moveLock/moveUnlock for the initial state check to prevent a second move
// while one is in progress without clobbering state
b.moveMx.Lock()
if b.moveState != moveStateNone {
b.moveMx.Unlock()
return fmt.Errorf("move in progress")
}
b.moveState = moveStateLock
for b.rlock > 0 {
b.moveCond.Wait()
}
b.moveState = moveStateMoving
b.moveCond.Broadcast()
b.moveMx.Unlock()
var path string
defer func() {
b.lockMove()
db2 := b.dbNext
b.dbNext = nil
var state bsMoveState
if db2 != nil {
state = moveStateCleanup
} else {
state = moveStateNone
}
b.unlockMove(state)
if db2 != nil {
err := db2.Close()
if err != nil {
log.Warnf("error closing badger db: %s", err)
}
b.deleteDB(path)
b.lockMove()
b.unlockMove(moveStateNone)
}
}()
// we resolve symlinks to create the new path adjacent to the old path.
// this allows the user to symlink the db directory into a separate filesystem.
basePath := b.opts.Dir
linkPath, err := filepath.EvalSymlinks(basePath)
if err != nil {
return fmt.Errorf("error resolving symlink %s: %w", basePath, err)
}
if basePath == linkPath {
path = basePath
} else {
name := filepath.Base(basePath)
dir := filepath.Dir(linkPath)
path = filepath.Join(dir, name)
}
path = fmt.Sprintf("%s.%d", path, time.Now().UnixNano())
log.Infof("moving blockstore from %s to %s", b.opts.Dir, path)
opts := b.opts
opts.Dir = path
opts.ValueDir = path
db2, err := badger.Open(opts.Options)
if err != nil {
return fmt.Errorf("failed to open badger blockstore in %s: %w", path, err)
}
b.lockMove()
b.dbNext = db2
b.unlockMove(moveStateMoving)
log.Info("copying blockstore")
err = b.doCopy(b.db, b.dbNext)
if err != nil {
return fmt.Errorf("error moving badger blockstore to %s: %w", path, err)
}
b.lockMove()
db1 := b.db
b.db = b.dbNext
b.dbNext = nil
b.unlockMove(moveStateCleanup)
err = db1.Close()
if err != nil {
log.Warnf("error closing old badger db: %s", err)
}
dbpath := b.opts.Dir
oldpath := fmt.Sprintf("%s.old.%d", dbpath, time.Now().Unix())
if err = os.Rename(dbpath, oldpath); err != nil {
// this is not catastrophic in the sense that we have not lost any data.
// but it is pretty bad, as the db path points to the old db, while we are now using the new
// db; we can't continue and leave a ticking bomb for the next restart.
// so a panic is appropriate and the user can fix it.
panic(fmt.Errorf("error renaming old badger db dir from %s to %s: %w; USER ACTION REQUIRED", dbpath, oldpath, err)) //nolint
}
if err = os.Symlink(path, dbpath); err != nil {
// same here; the db path is pointing to the void. panic and let the user fix.
panic(fmt.Errorf("error symlinking new badger db dir from %s to %s: %w; USER ACTION REQUIRED", path, dbpath, err)) //nolint
}
b.deleteDB(oldpath)
log.Info("moving blockstore done")
return nil
}
// doCopy copies the contents of one badger blockstore into another.
func (b *Blockstore) doCopy(from, to *badger.DB) error {
workers := runtime.NumCPU() / 2
if workers < 2 {
workers = 2
}
stream := from.NewStream()
stream.NumGo = workers
stream.LogPrefix = "doCopy"
stream.Send = func(list *pb.KVList) error {
batch := to.NewWriteBatch()
defer batch.Cancel()
for _, kv := range list.Kv {
if kv.Key == nil || kv.Value == nil {
continue
}
if err := batch.Set(kv.Key, kv.Value); err != nil {
return err
}
}
return batch.Flush()
}
return stream.Orchestrate(context.Background())
}
func (b *Blockstore) deleteDB(path string) {
// follow symbolic links, otherwise the data will be left behind
lpath, err := filepath.EvalSymlinks(path)
if err != nil {
log.Warnf("error resolving symlinks in %s", path)
return
}
log.Infof("removing data directory %s", lpath)
if err := os.RemoveAll(lpath); err != nil {
log.Warnf("error deleting db at %s: %s", lpath, err)
return
}
if path != lpath {
log.Infof("removing link %s", path)
if err := os.Remove(path); err != nil {
log.Warnf("error removing symbolic link %s", err)
}
}
}
func (b *Blockstore) onlineGC() error {
b.lockDB()
defer b.unlockDB()
// compact first to gather the necessary statistics for GC
nworkers := runtime.NumCPU() / 2
@ -178,13 +420,13 @@ func (b *Blockstore) CollectGarbage() error {
nworkers = 2
}
err := b.DB.Flatten(nworkers)
err := b.db.Flatten(nworkers)
if err != nil {
return err
}
for err == nil {
err = b.DB.RunValueLogGC(0.125)
err = b.db.RunValueLogGC(0.125)
}
if err == badger.ErrNoRewrite {
@ -195,6 +437,29 @@ func (b *Blockstore) CollectGarbage() error {
return err
}
// CollectGarbage compacts and runs garbage collection on the value log;
// implements the BlockstoreGC trait
func (b *Blockstore) CollectGarbage(opts ...blockstore.BlockstoreGCOption) error {
if err := b.access(); err != nil {
return err
}
defer b.viewers.Done()
var options blockstore.BlockstoreGCOptions
for _, opt := range opts {
err := opt(&options)
if err != nil {
return err
}
}
if options.FullGC {
return b.movingGC()
}
return b.onlineGC()
}
// Size returns the aggregate size of the blockstore
func (b *Blockstore) Size() (int64, error) {
if err := b.access(); err != nil {
@ -202,7 +467,10 @@ func (b *Blockstore) Size() (int64, error) {
}
defer b.viewers.Done()
lsm, vlog := b.DB.Size()
b.lockDB()
defer b.unlockDB()
lsm, vlog := b.db.Size()
size := lsm + vlog
if size == 0 {
@ -234,12 +502,15 @@ func (b *Blockstore) View(cid cid.Cid, fn func([]byte) error) error {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
k, pooled := b.PooledStorageKey(cid)
if pooled {
defer KeyPool.Put(k)
}
return b.DB.View(func(txn *badger.Txn) error {
return b.db.View(func(txn *badger.Txn) error {
switch item, err := txn.Get(k); err {
case nil:
return item.Value(fn)
@ -258,12 +529,15 @@ func (b *Blockstore) Has(cid cid.Cid) (bool, error) {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
k, pooled := b.PooledStorageKey(cid)
if pooled {
defer KeyPool.Put(k)
}
err := b.DB.View(func(txn *badger.Txn) error {
err := b.db.View(func(txn *badger.Txn) error {
_, err := txn.Get(k)
return err
})
@ -289,13 +563,16 @@ func (b *Blockstore) Get(cid cid.Cid) (blocks.Block, error) {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
k, pooled := b.PooledStorageKey(cid)
if pooled {
defer KeyPool.Put(k)
}
var val []byte
err := b.DB.View(func(txn *badger.Txn) error {
err := b.db.View(func(txn *badger.Txn) error {
switch item, err := txn.Get(k); err {
case nil:
val, err = item.ValueCopy(nil)
@ -319,13 +596,16 @@ func (b *Blockstore) GetSize(cid cid.Cid) (int, error) {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
k, pooled := b.PooledStorageKey(cid)
if pooled {
defer KeyPool.Put(k)
}
var size int
err := b.DB.View(func(txn *badger.Txn) error {
err := b.db.View(func(txn *badger.Txn) error {
switch item, err := txn.Get(k); err {
case nil:
size = int(item.ValueSize())
@ -349,18 +629,36 @@ func (b *Blockstore) Put(block blocks.Block) error {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
k, pooled := b.PooledStorageKey(block.Cid())
if pooled {
defer KeyPool.Put(k)
}
err := b.DB.Update(func(txn *badger.Txn) error {
return txn.Set(k, block.RawData())
})
if err != nil {
err = fmt.Errorf("failed to put block in badger blockstore: %w", err)
put := func(db *badger.DB) error {
err := db.Update(func(txn *badger.Txn) error {
return txn.Set(k, block.RawData())
})
if err != nil {
return fmt.Errorf("failed to put block in badger blockstore: %w", err)
}
return nil
}
return err
if err := put(b.db); err != nil {
return err
}
if b.dbNext != nil {
if err := put(b.dbNext); err != nil {
return err
}
}
return nil
}
// PutMany implements Blockstore.PutMany.
@ -370,6 +668,9 @@ func (b *Blockstore) PutMany(blocks []blocks.Block) error {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
// toReturn tracks the byte slices to return to the pool, if we're using key
// prefixing. we can't return each slice to the pool after each Set, because
// badger holds on to the slice.
@ -383,24 +684,45 @@ func (b *Blockstore) PutMany(blocks []blocks.Block) error {
}()
}
batch := b.DB.NewWriteBatch()
defer batch.Cancel()
keys := make([][]byte, 0, len(blocks))
for _, block := range blocks {
k, pooled := b.PooledStorageKey(block.Cid())
if pooled {
toReturn = append(toReturn, k)
}
if err := batch.Set(k, block.RawData()); err != nil {
keys = append(keys, k)
}
put := func(db *badger.DB) error {
batch := db.NewWriteBatch()
defer batch.Cancel()
for i, block := range blocks {
k := keys[i]
if err := batch.Set(k, block.RawData()); err != nil {
return err
}
}
err := batch.Flush()
if err != nil {
return fmt.Errorf("failed to put blocks in badger blockstore: %w", err)
}
return nil
}
if err := put(b.db); err != nil {
return err
}
if b.dbNext != nil {
if err := put(b.dbNext); err != nil {
return err
}
}
err := batch.Flush()
if err != nil {
err = fmt.Errorf("failed to put blocks in badger blockstore: %w", err)
}
return err
return nil
}
// DeleteBlock implements Blockstore.DeleteBlock.
@ -410,12 +732,15 @@ func (b *Blockstore) DeleteBlock(cid cid.Cid) error {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
k, pooled := b.PooledStorageKey(cid)
if pooled {
defer KeyPool.Put(k)
}
return b.DB.Update(func(txn *badger.Txn) error {
return b.db.Update(func(txn *badger.Txn) error {
return txn.Delete(k)
})
}
@ -426,6 +751,9 @@ func (b *Blockstore) DeleteMany(cids []cid.Cid) error {
}
defer b.viewers.Done()
b.lockDB()
defer b.unlockDB()
// toReturn tracks the byte slices to return to the pool, if we're using key
// prefixing. we can't return each slice to the pool after each Set, because
// badger holds on to the slice.
@ -439,7 +767,7 @@ func (b *Blockstore) DeleteMany(cids []cid.Cid) error {
}()
}
batch := b.DB.NewWriteBatch()
batch := b.db.NewWriteBatch()
defer batch.Cancel()
for _, cid := range cids {
@ -465,7 +793,10 @@ func (b *Blockstore) AllKeysChan(ctx context.Context) (<-chan cid.Cid, error) {
return nil, err
}
txn := b.DB.NewTransaction(false)
b.lockDB()
defer b.unlockDB()
txn := b.db.NewTransaction(false)
opts := badger.IteratorOptions{PrefetchSize: 100}
if b.prefixing {
opts.Prefix = b.prefix
@ -519,7 +850,10 @@ func (b *Blockstore) ForEachKey(f func(cid.Cid) error) error {
}
defer b.viewers.Done()
txn := b.DB.NewTransaction(false)
b.lockDB()
defer b.unlockDB()
txn := b.db.NewTransaction(false)
defer txn.Discard()
opts := badger.IteratorOptions{PrefetchSize: 100}
@ -614,3 +948,9 @@ func (b *Blockstore) StorageKey(dst []byte, cid cid.Cid) []byte {
}
return dst[:reqsize]
}
// This method is added for lotus-shed needs.
// WARNING: THIS IS COMPLETELY UNSAFE; DON'T USE THIS IN PRODUCTION CODE
func (b *Blockstore) DB() *badger.DB {
return b.db
}

View File

@ -1,12 +1,19 @@
package badgerbs
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
blocks "github.com/ipfs/go-block-format"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
blocks "github.com/ipfs/go-block-format"
cid "github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/blockstore"
)
@ -89,3 +96,165 @@ func openBlockstore(optsSupplier func(path string) Options) func(tb testing.TB,
return Open(optsSupplier(path))
}
}
func testMove(t *testing.T, optsF func(string) Options) {
basePath, err := ioutil.TempDir("", "")
if err != nil {
t.Fatal(err)
}
dbPath := filepath.Join(basePath, "db")
t.Cleanup(func() {
_ = os.RemoveAll(basePath)
})
db, err := Open(optsF(dbPath))
if err != nil {
t.Fatal(err)
}
defer db.Close() //nolint
var have []blocks.Block
var deleted []cid.Cid
// add some blocks
for i := 0; i < 10; i++ {
blk := blocks.NewBlock([]byte(fmt.Sprintf("some data %d", i)))
err := db.Put(blk)
if err != nil {
t.Fatal(err)
}
have = append(have, blk)
}
// delete some of them
for i := 5; i < 10; i++ {
c := have[i].Cid()
err := db.DeleteBlock(c)
if err != nil {
t.Fatal(err)
}
deleted = append(deleted, c)
}
have = have[:5]
// start a move concurrent with some more puts
g := new(errgroup.Group)
g.Go(func() error {
for i := 10; i < 1000; i++ {
blk := blocks.NewBlock([]byte(fmt.Sprintf("some data %d", i)))
err := db.Put(blk)
if err != nil {
return err
}
have = append(have, blk)
}
return nil
})
g.Go(func() error {
return db.CollectGarbage(blockstore.WithFullGC(true))
})
err = g.Wait()
if err != nil {
t.Fatal(err)
}
// now check that we have all the blocks in have and none in the deleted lists
checkBlocks := func() {
for _, blk := range have {
has, err := db.Has(blk.Cid())
if err != nil {
t.Fatal(err)
}
if !has {
t.Fatal("missing block")
}
blk2, err := db.Get(blk.Cid())
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(blk.RawData(), blk2.RawData()) {
t.Fatal("data mismatch")
}
}
for _, c := range deleted {
has, err := db.Has(c)
if err != nil {
t.Fatal(err)
}
if has {
t.Fatal("resurrected block")
}
}
}
checkBlocks()
// check the basePath -- it should contain a directory with name db.{timestamp}, soft-linked
// to db and nothing else
checkPath := func() {
entries, err := os.ReadDir(basePath)
if err != nil {
t.Fatal(err)
}
if len(entries) != 2 {
t.Fatalf("too many entries; expected %d but got %d", 2, len(entries))
}
var haveDB, haveDBLink bool
for _, e := range entries {
if e.Name() == "db" {
if (e.Type() & os.ModeSymlink) == 0 {
t.Fatal("found db, but it's not a symlink")
}
haveDBLink = true
continue
}
if strings.HasPrefix(e.Name(), "db.") {
if !e.Type().IsDir() {
t.Fatal("found db prefix, but it's not a directory")
}
haveDB = true
continue
}
}
if !haveDB {
t.Fatal("db directory is missing")
}
if !haveDBLink {
t.Fatal("db link is missing")
}
}
checkPath()
// now do another FullGC to test the double move and following of symlinks
if err := db.CollectGarbage(blockstore.WithFullGC(true)); err != nil {
t.Fatal(err)
}
checkBlocks()
checkPath()
}
func TestMoveNoPrefix(t *testing.T) {
testMove(t, DefaultOptions)
}
func TestMoveWithPrefix(t *testing.T) {
testMove(t, func(path string) Options {
opts := DefaultOptions(path)
opts.Prefix = "/prefixed/"
return opts
})
}

View File

@ -37,7 +37,22 @@ type BlockstoreIterator interface {
// BlockstoreGC is a trait for blockstores that support online garbage collection
type BlockstoreGC interface {
CollectGarbage() error
CollectGarbage(options ...BlockstoreGCOption) error
}
// BlockstoreGCOption is a functional interface for controlling blockstore GC options
type BlockstoreGCOption = func(*BlockstoreGCOptions) error
// BlockstoreGCOptions is a struct with GC options
type BlockstoreGCOptions struct {
FullGC bool
}
func WithFullGC(fullgc bool) BlockstoreGCOption {
return func(opts *BlockstoreGCOptions) error {
opts.FullGC = fullgc
return nil
}
}
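// The helper below is an illustrative sketch, not part of the original file: it
// shows how a caller drives garbage collection through the BlockstoreGC trait,
// optionally requesting a full (moving) GC via WithFullGC. The function name is
// hypothetical.
func collectGarbageExample(bs Blockstore, full bool) error {
	gc, ok := bs.(BlockstoreGC)
	if !ok {
		// not every blockstore supports garbage collection
		return nil
	}
	if full {
		// request a moving (full) GC where the implementation supports it
		return gc.CollectGarbage(WithFullGC(true))
	}
	// default: online GC only
	return gc.CollectGarbage()
}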
// BlockstoreSize is a trait for on-disk blockstores that can report their size

View File

@ -59,6 +59,15 @@ These are options in the `[Chainstore.Splitstore]` section of the configuration:
nodes beyond 4 finalities, while running with the discard coldstore option.
It is also useful for miners who accept deals and need to look back at messages beyond
the 4 finalities, which would otherwise hit the coldstore.
- `HotStoreFullGCFrequency` -- specifies how frequently to garbage collect the hotstore
using full (moving) GC.
The default value is 20, which uses full GC every 20 compactions (about once a week);
set to 0 to disable full GC altogether.
Rationale: badger supports online GC, and this is used by default. However, it has proven to
be ineffective in practice, with the hotstore size slowly creeping up. In order to address this,
we have added moving GC support in our badger wrapper, which can effectively reclaim all space.
The downside is that it takes a bit longer to perform a moving GC and you also need enough
space to house the new hotstore while the old one is still live.
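For illustration, the per-compaction decision can be sketched roughly as follows; the
variable names follow the implementation added in this change, and the sketch is only
meant to show how the frequency is interpreted:

```go
// simplified sketch of the per-compaction GC choice in the splitstore
fullGC := cfg.HotStoreFullGCFrequency > 0 &&
	compactionIndex%int64(cfg.HotStoreFullGCFrequency) == 0
if fullGC {
	// moving (full) GC: the hotstore is copied to a fresh badger instance and swapped in
} else {
	// online GC: badger value-log GC in place
}
```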
## Operation

View File

@ -81,6 +81,13 @@ type Config struct {
// - a positive integer indicates the number of finalities, outside the compaction boundary,
// for which messages will be retained in the hotstore.
HotStoreMessageRetention uint64
// HotStoreFullGCFrequency indicates how frequently (in terms of compactions) to garbage collect
// the hotstore using full (moving) GC if supported by the hotstore.
// A value of 0 disables full GC entirely.
// A positive value is the number of compactions before a full GC is performed;
// a value of 1 will perform full GC in every compaction.
HotStoreFullGCFrequency uint64
}
// ChainAccessor allows the Splitstore to access the chain. It will most likely

View File

@ -8,17 +8,22 @@ import (
)
func (s *SplitStore) gcHotstore() {
if err := s.gcBlockstoreOnline(s.hot); err != nil {
var opts []bstore.BlockstoreGCOption
if s.cfg.HotStoreFullGCFrequency > 0 && s.compactionIndex%int64(s.cfg.HotStoreFullGCFrequency) == 0 {
opts = append(opts, bstore.WithFullGC(true))
}
if err := s.gcBlockstore(s.hot, opts); err != nil {
log.Warnf("error garbage collecting hotstore: %s", err)
}
}
func (s *SplitStore) gcBlockstoreOnline(b bstore.Blockstore) error {
func (s *SplitStore) gcBlockstore(b bstore.Blockstore, opts []bstore.BlockstoreGCOption) error {
if gc, ok := b.(bstore.BlockstoreGC); ok {
log.Info("garbage collecting blockstore")
startGC := time.Now()
if err := gc.CollectGarbage(); err != nil {
if err := gc.CollectGarbage(opts...); err != nil {
return err
}
@ -26,5 +31,5 @@ func (s *SplitStore) gcBlockstoreOnline(b bstore.Blockstore) error {
return nil
}
return fmt.Errorf("blockstore doesn't support online gc: %T", b)
return fmt.Errorf("blockstore doesn't support garbage collection: %T", b)
}

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -34,7 +34,7 @@ func buildType() string {
}
// BuildVersion is the local build version, set by build system
const BuildVersion = "1.11.1-dev"
const BuildVersion = "1.11.2-dev"
func UserVersion() string {
if os.Getenv("LOTUS_VERSION_IGNORE_COMMIT") == "1" {

View File

@ -233,7 +233,7 @@ func NewGeneratorWithSectorsAndUpgradeSchedule(numSectors int, us stmgr.UpgradeS
return nil, xerrors.Errorf("make genesis block failed: %w", err)
}
cs := store.NewChainStore(bs, bs, ds, sys, j)
cs := store.NewChainStore(bs, bs, ds, j)
genfb := &types.FullBlock{Header: genb.Genesis}
gents := store.NewFullTipSet([]*types.FullBlock{genfb})
@ -247,7 +247,7 @@ func NewGeneratorWithSectorsAndUpgradeSchedule(numSectors int, us stmgr.UpgradeS
mgen[genesis2.MinerAddress(uint64(i))] = &wppProvider{}
}
sm, err := stmgr.NewStateManagerWithUpgradeSchedule(cs, us)
sm, err := stmgr.NewStateManagerWithUpgradeSchedule(cs, sys, us)
if err != nil {
return nil, xerrors.Errorf("initing stmgr: %w", err)
}

View File

@ -471,7 +471,7 @@ func createMultisigAccount(ctx context.Context, cst cbor.IpldStore, state *state
return nil
}
func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, stateroot cid.Cid, template genesis.Template, keyIDs map[address.Address]address.Address, nv network.Version) (cid.Cid, error) {
func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, sys vm.SyscallBuilder, stateroot cid.Cid, template genesis.Template, keyIDs map[address.Address]address.Address, nv network.Version) (cid.Cid, error) {
verifNeeds := make(map[address.Address]abi.PaddedPieceSize)
var sum abi.PaddedPieceSize
@ -480,7 +480,7 @@ func VerifyPreSealedData(ctx context.Context, cs *store.ChainStore, stateroot ci
Epoch: 0,
Rand: &fakeRand{},
Bstore: cs.StateBlockstore(),
Syscalls: mkFakedSigSyscalls(cs.VMSys()),
Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: nil,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version {
return nv
@ -559,15 +559,15 @@ func MakeGenesisBlock(ctx context.Context, j journal.Journal, bs bstore.Blocksto
}
// temp chainstore
cs := store.NewChainStore(bs, bs, datastore.NewMapDatastore(), sys, j)
cs := store.NewChainStore(bs, bs, datastore.NewMapDatastore(), j)
// Verify PreSealed Data
stateroot, err = VerifyPreSealedData(ctx, cs, stateroot, template, keyIDs, template.NetworkVersion)
stateroot, err = VerifyPreSealedData(ctx, cs, sys, stateroot, template, keyIDs, template.NetworkVersion)
if err != nil {
return nil, xerrors.Errorf("failed to verify presealed data: %w", err)
}
stateroot, err = SetupStorageMiners(ctx, cs, stateroot, template.Miners, template.NetworkVersion)
stateroot, err = SetupStorageMiners(ctx, cs, sys, stateroot, template.Miners, template.NetworkVersion)
if err != nil {
return nil, xerrors.Errorf("setup miners failed: %w", err)
}

View File

@ -78,7 +78,7 @@ func mkFakedSigSyscalls(base vm.SyscallBuilder) vm.SyscallBuilder {
}
// Note: Much of this is brittle, if the methodNum / param / return changes, it will break things
func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid, miners []genesis.Miner, nv network.Version) (cid.Cid, error) {
func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sys vm.SyscallBuilder, sroot cid.Cid, miners []genesis.Miner, nv network.Version) (cid.Cid, error) {
cst := cbor.NewCborStore(cs.StateBlockstore())
av := actors.VersionForNetwork(nv)
@ -92,7 +92,7 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid
Epoch: 0,
Rand: &fakeRand{},
Bstore: cs.StateBlockstore(),
Syscalls: mkFakedSigSyscalls(cs.VMSys()),
Syscalls: mkFakedSigSyscalls(sys),
CircSupplyCalc: csc,
NtwkVersion: func(_ context.Context, _ abi.ChainEpoch) network.Version {
return nv

View File

@ -1,129 +0,0 @@
package metrics
import (
"context"
"encoding/json"
"github.com/filecoin-project/go-state-types/abi"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"go.uber.org/fx"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/node/impl/full"
"github.com/filecoin-project/lotus/node/modules/helpers"
)
var log = logging.Logger("metrics")
const baseTopic = "/fil/headnotifs/"
type Update struct {
Type string
}
func SendHeadNotifs(nickname string) func(mctx helpers.MetricsCtx, lc fx.Lifecycle, ps *pubsub.PubSub, chain full.ChainAPI) error {
return func(mctx helpers.MetricsCtx, lc fx.Lifecycle, ps *pubsub.PubSub, chain full.ChainAPI) error {
ctx := helpers.LifecycleCtx(mctx, lc)
lc.Append(fx.Hook{
OnStart: func(_ context.Context) error {
gen, err := chain.Chain.GetGenesis()
if err != nil {
return err
}
topic := baseTopic + gen.Cid().String()
go func() {
if err := sendHeadNotifs(ctx, ps, topic, chain, nickname); err != nil {
log.Error("consensus metrics error", err)
return
}
}()
go func() {
sub, err := ps.Subscribe(topic) //nolint
if err != nil {
return
}
defer sub.Cancel()
for {
if _, err := sub.Next(ctx); err != nil {
return
}
}
}()
return nil
},
})
return nil
}
}
type message struct {
// TipSet
Cids []cid.Cid
Blocks []*types.BlockHeader
Height abi.ChainEpoch
Weight types.BigInt
Time uint64
Nonce uint64
// Meta
NodeName string
}
func sendHeadNotifs(ctx context.Context, ps *pubsub.PubSub, topic string, chain full.ChainAPI, nickname string) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
notifs, err := chain.ChainNotify(ctx)
if err != nil {
return err
}
// using unix nano time makes very sure we pick a nonce higher than previous restart
nonce := uint64(build.Clock.Now().UnixNano())
for {
select {
case notif := <-notifs:
n := notif[len(notif)-1]
w, err := chain.ChainTipSetWeight(ctx, n.Val.Key())
if err != nil {
return err
}
m := message{
Cids: n.Val.Cids(),
Blocks: n.Val.Blocks(),
Height: n.Val.Height(),
Weight: w,
NodeName: nickname,
Time: uint64(build.Clock.Now().UnixNano() / 1000_000),
Nonce: nonce,
}
b, err := json.Marshal(m)
if err != nil {
return err
}
//nolint
if err := ps.Publish(topic, b); err != nil {
return err
}
case <-ctx.Done():
return nil
}
nonce++
}
}

View File

@ -273,7 +273,7 @@ func LoadStateTree(cst cbor.IpldStore, c cid.Cid) (*StateTree, error) {
}
if err != nil {
log.Errorf("failed to load state tree: %s", err)
return nil, xerrors.Errorf("failed to load state tree: %w", err)
return nil, xerrors.Errorf("failed to load state tree %s: %w", c, err)
}
s := &StateTree{

551
chain/stmgr/actors.go Normal file
View File

@ -0,0 +1,551 @@
package stmgr
import (
"bytes"
"context"
"os"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-bitfield"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/network"
cid "github.com/ipfs/go-cid"
"golang.org/x/xerrors"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/paych"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/beacon"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
)
func GetMinerWorkerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (address.Address, error) {
state, err := sm.StateTree(st)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load state tree: %w", err)
}
act, err := state.GetActor(maddr)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
info, err := mas.Info()
if err != nil {
return address.Undef, xerrors.Errorf("failed to load actor info: %w", err)
}
return vm.ResolveToKeyAddr(state, sm.cs.ActorStore(ctx), info.Worker)
}
func GetPower(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (power.Claim, power.Claim, bool, error) {
return GetPowerRaw(ctx, sm, ts.ParentState(), maddr)
}
func GetPowerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (power.Claim, power.Claim, bool, error) {
act, err := sm.LoadActorRaw(ctx, power.Address, st)
if err != nil {
return power.Claim{}, power.Claim{}, false, xerrors.Errorf("(get sset) failed to load power actor state: %w", err)
}
pas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
tpow, err := pas.TotalPower()
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
var mpow power.Claim
var minpow bool
if maddr != address.Undef {
var found bool
mpow, found, err = pas.MinerPower(maddr)
if err != nil || !found {
return power.Claim{}, tpow, false, err
}
minpow, err = pas.MinerNominalPowerMeetsConsensusMinimum(maddr)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
}
return mpow, tpow, minpow, nil
}
func PreCommitInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorPreCommitOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetPrecommittedSector(sid)
}
func MinerSectorInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetSector(sid)
}
func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwrapper.Verifier, sm *StateManager, st cid.Cid, maddr address.Address, rand abi.PoStRandomness) ([]builtin.SectorInfo, error) {
act, err := sm.LoadActorRaw(ctx, maddr, st)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
var provingSectors bitfield.BitField
if nv < network.Version7 {
allSectors, err := miner.AllPartSectors(mas, miner.Partition.AllSectors)
if err != nil {
return nil, xerrors.Errorf("get all sectors: %w", err)
}
faultySectors, err := miner.AllPartSectors(mas, miner.Partition.FaultySectors)
if err != nil {
return nil, xerrors.Errorf("get faulty sectors: %w", err)
}
provingSectors, err = bitfield.SubtractBitField(allSectors, faultySectors)
if err != nil {
return nil, xerrors.Errorf("calc proving sectors: %w", err)
}
} else {
provingSectors, err = miner.AllPartSectors(mas, miner.Partition.ActiveSectors)
if err != nil {
return nil, xerrors.Errorf("get active sectors: %w", err)
}
}
numProvSect, err := provingSectors.Count()
if err != nil {
return nil, xerrors.Errorf("failed to count bits: %w", err)
}
// TODO(review): is this right? feels fishy to me
if numProvSect == 0 {
return nil, nil
}
info, err := mas.Info()
if err != nil {
return nil, xerrors.Errorf("getting miner info: %w", err)
}
mid, err := address.IDFromAddress(maddr)
if err != nil {
return nil, xerrors.Errorf("getting miner ID: %w", err)
}
proofType, err := miner.WinningPoStProofTypeFromWindowPoStProofType(nv, info.WindowPoStProofType)
if err != nil {
return nil, xerrors.Errorf("determining winning post proof type: %w", err)
}
ids, err := pv.GenerateWinningPoStSectorChallenge(ctx, proofType, abi.ActorID(mid), rand, numProvSect)
if err != nil {
return nil, xerrors.Errorf("generating winning post challenges: %w", err)
}
iter, err := provingSectors.BitIterator()
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
// Select winning sectors by _index_ in the all-sectors bitfield.
selectedSectors := bitfield.New()
prev := uint64(0)
for _, n := range ids {
sno, err := iter.Nth(n - prev)
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
selectedSectors.Set(sno)
prev = n
}
sectors, err := mas.LoadSectors(&selectedSectors)
if err != nil {
return nil, xerrors.Errorf("loading proving sectors: %w", err)
}
out := make([]builtin.SectorInfo, len(sectors))
for i, sinfo := range sectors {
out[i] = builtin.SectorInfo{
SealProof: sinfo.SealProof,
SectorNumber: sinfo.SectorNumber,
SealedCID: sinfo.SealedCID,
}
}
return out, nil
}
func GetMinerSlashed(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (bool, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return false, xerrors.Errorf("failed to load power actor: %w", err)
}
spas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return false, xerrors.Errorf("failed to load power actor state: %w", err)
}
_, ok, err := spas.MinerPower(maddr)
if err != nil {
return false, xerrors.Errorf("getting miner power: %w", err)
}
if !ok {
return true, nil
}
return false, nil
}
func GetStorageDeal(ctx context.Context, sm *StateManager, dealID abi.DealID, ts *types.TipSet) (*api.MarketDeal, error) {
act, err := sm.LoadActor(ctx, market.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor: %w", err)
}
state, err := market.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor state: %w", err)
}
proposals, err := state.Proposals()
if err != nil {
return nil, err
}
proposal, found, err := proposals.Get(dealID)
if err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf(
"deal %d not found "+
"- deal may not have completed sealing before deal proposal "+
"start epoch, or deal may have been slashed",
dealID)
}
states, err := state.States()
if err != nil {
return nil, err
}
st, found, err := states.Get(dealID)
if err != nil {
return nil, err
}
if !found {
st = market.EmptyDealState()
}
return &api.MarketDeal{
Proposal: *proposal,
State: *st,
}, nil
}
func ListMinerActors(ctx context.Context, sm *StateManager, ts *types.TipSet) ([]address.Address, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor: %w", err)
}
powState, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor state: %w", err)
}
return powState.ListAllMiners()
}
func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
ts, err := sm.ChainStore().LoadTipSet(tsk)
if err != nil {
return nil, xerrors.Errorf("failed to load tipset for mining base: %w", err)
}
prev, err := sm.ChainStore().GetLatestBeaconEntry(ts)
if err != nil {
if os.Getenv("LOTUS_IGNORE_DRAND") != "_yes_" {
return nil, xerrors.Errorf("failed to get latest beacon entry: %w", err)
}
prev = &types.BeaconEntry{}
}
entries, err := beacon.BeaconEntriesForBlock(ctx, bcs, round, ts.Height(), *prev)
if err != nil {
return nil, err
}
rbase := *prev
if len(entries) > 0 {
rbase = entries[len(entries)-1]
}
lbts, lbst, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, xerrors.Errorf("getting lookback miner actor state: %w", err)
}
act, err := sm.LoadActorRaw(ctx, maddr, lbst)
if xerrors.Is(err, types.ErrActorNotFound) {
_, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("loading miner in current state: %w", err)
}
return nil, nil
}
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
buf := new(bytes.Buffer)
if err := maddr.MarshalCBOR(buf); err != nil {
return nil, xerrors.Errorf("failed to marshal miner address: %w", err)
}
prand, err := store.DrawRandomness(rbase.Data, crypto.DomainSeparationTag_WinningPoStChallengeSeed, round, buf.Bytes())
if err != nil {
return nil, xerrors.Errorf("failed to get randomness for winning post: %w", err)
}
nv := sm.GetNtwkVersion(ctx, ts.Height())
sectors, err := GetSectorsForWinningPoSt(ctx, nv, pv, sm, lbst, maddr, prand)
if err != nil {
return nil, xerrors.Errorf("getting winning post proving set: %w", err)
}
if len(sectors) == 0 {
return nil, nil
}
mpow, tpow, _, err := GetPowerRaw(ctx, sm, lbst, maddr)
if err != nil {
return nil, xerrors.Errorf("failed to get power: %w", err)
}
info, err := mas.Info()
if err != nil {
return nil, err
}
worker, err := sm.ResolveToKeyAddress(ctx, info.Worker, ts)
if err != nil {
return nil, xerrors.Errorf("resolving worker address: %w", err)
}
// TODO: Not ideal performance...This method reloads miner and power state (already looked up here and in GetPowerRaw)
eligible, err := MinerEligibleToMine(ctx, sm, maddr, ts, lbts)
if err != nil {
return nil, xerrors.Errorf("determining miner eligibility: %w", err)
}
return &api.MiningBaseInfo{
MinerPower: mpow.QualityAdjPower,
NetworkPower: tpow.QualityAdjPower,
Sectors: sectors,
WorkerKey: worker,
SectorSize: info.SectorSize,
PrevBeaconEntry: *prev,
BeaconEntries: entries,
EligibleForMining: eligible,
}, nil
}
func minerHasMinPower(ctx context.Context, sm *StateManager, addr address.Address, ts *types.TipSet) (bool, error) {
pact, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return false, xerrors.Errorf("loading power actor state: %w", err)
}
ps, err := power.Load(sm.cs.ActorStore(ctx), pact)
if err != nil {
return false, err
}
return ps.MinerNominalPowerMeetsConsensusMinimum(addr)
}
func MinerEligibleToMine(ctx context.Context, sm *StateManager, addr address.Address, baseTs *types.TipSet, lookbackTs *types.TipSet) (bool, error) {
hmp, err := minerHasMinPower(ctx, sm, addr, lookbackTs)
// TODO: We're blurring the lines between a "runtime network version" and a "Lotus upgrade epoch", is that unavoidable?
if sm.GetNtwkVersion(ctx, baseTs.Height()) <= network.Version3 {
return hmp, err
}
if err != nil {
return false, err
}
if !hmp {
return false, nil
}
// Post actors v2, also check MinerEligibleForElection with base ts
pact, err := sm.LoadActor(ctx, power.Address, baseTs)
if err != nil {
return false, xerrors.Errorf("loading power actor state: %w", err)
}
pstate, err := power.Load(sm.cs.ActorStore(ctx), pact)
if err != nil {
return false, err
}
mact, err := sm.LoadActor(ctx, addr, baseTs)
if err != nil {
return false, xerrors.Errorf("loading miner actor state: %w", err)
}
mstate, err := miner.Load(sm.cs.ActorStore(ctx), mact)
if err != nil {
return false, err
}
// Non-empty power claim.
if claim, found, err := pstate.MinerPower(addr); err != nil {
return false, err
} else if !found {
return false, err
} else if claim.QualityAdjPower.LessThanEqual(big.Zero()) {
return false, err
}
// No fee debt.
if debt, err := mstate.FeeDebt(); err != nil {
return false, err
} else if !debt.IsZero() {
return false, err
}
// No active consensus faults.
if mInfo, err := mstate.Info(); err != nil {
return false, err
} else if baseTs.Height() <= mInfo.ConsensusFaultElapsed {
return false, nil
}
return true, nil
}
func (sm *StateManager) GetPaychState(ctx context.Context, addr address.Address, ts *types.TipSet) (*types.Actor, paych.State, error) {
st, err := sm.ParentState(ts)
if err != nil {
return nil, nil, err
}
act, err := st.GetActor(addr)
if err != nil {
return nil, nil, err
}
actState, err := paych.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, nil, err
}
return act, actState, nil
}
func (sm *StateManager) GetMarketState(ctx context.Context, ts *types.TipSet) (market.State, error) {
st, err := sm.ParentState(ts)
if err != nil {
return nil, err
}
act, err := st.GetActor(market.Address)
if err != nil {
return nil, err
}
actState, err := market.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, err
}
return actState, nil
}
func (sm *StateManager) MarketBalance(ctx context.Context, addr address.Address, ts *types.TipSet) (api.MarketBalance, error) {
mstate, err := sm.GetMarketState(ctx, ts)
if err != nil {
return api.MarketBalance{}, err
}
addr, err = sm.LookupID(ctx, addr, ts)
if err != nil {
return api.MarketBalance{}, err
}
var out api.MarketBalance
et, err := mstate.EscrowTable()
if err != nil {
return api.MarketBalance{}, err
}
out.Escrow, err = et.Get(addr)
if err != nil {
return api.MarketBalance{}, xerrors.Errorf("getting escrow balance: %w", err)
}
lt, err := mstate.LockedTable()
if err != nil {
return api.MarketBalance{}, err
}
out.Locked, err = lt.Get(addr)
if err != nil {
return api.MarketBalance{}, xerrors.Errorf("getting locked balance: %w", err)
}
return out, nil
}
var _ StateManagerAPI = (*StateManager)(nil)

View File

@ -64,7 +64,7 @@ func (sm *StateManager) Call(ctx context.Context, msg *types.Message, ts *types.
Epoch: pheight + 1,
Rand: store.NewChainRand(sm.cs, ts.Cids()),
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.cs.VMSys(),
Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: types.NewInt(0),
@ -179,7 +179,7 @@ func (sm *StateManager) CallWithGas(ctx context.Context, msg *types.Message, pri
Epoch: ts.Height() + 1,
Rand: r,
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.cs.VMSys(),
Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: ts.Blocks()[0].ParentBaseFee,

326
chain/stmgr/execute.go Normal file
View File

@ -0,0 +1,326 @@
package stmgr
import (
"context"
"fmt"
"sync/atomic"
"github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"go.opencensus.io/stats"
"go.opencensus.io/trace"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/cron"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/metrics"
)
func (sm *StateManager) ApplyBlocks(ctx context.Context, parentEpoch abi.ChainEpoch, pstate cid.Cid, bms []store.BlockMessages, epoch abi.ChainEpoch, r vm.Rand, em ExecMonitor, baseFee abi.TokenAmount, ts *types.TipSet) (cid.Cid, cid.Cid, error) {
done := metrics.Timer(ctx, metrics.VMApplyBlocksTotal)
defer done()
partDone := metrics.Timer(ctx, metrics.VMApplyEarly)
defer func() {
partDone()
}()
makeVmWithBaseState := func(base cid.Cid) (*vm.VM, error) {
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: epoch,
Rand: r,
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: baseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
return sm.newVM(ctx, vmopt)
}
vmi, err := makeVmWithBaseState(pstate)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
runCron := func(epoch abi.ChainEpoch) error {
cronMsg := &types.Message{
To: cron.Address,
From: builtin.SystemActorAddr,
Nonce: uint64(epoch),
Value: types.NewInt(0),
GasFeeCap: types.NewInt(0),
GasPremium: types.NewInt(0),
GasLimit: build.BlockGasLimit * 10000, // Make super sure this is never too little
Method: cron.Methods.EpochTick,
Params: nil,
}
ret, err := vmi.ApplyImplicitMessage(ctx, cronMsg)
if err != nil {
return err
}
if em != nil {
if err := em.MessageApplied(ctx, ts, cronMsg.Cid(), cronMsg, ret, true); err != nil {
return xerrors.Errorf("callback failed on cron message: %w", err)
}
}
if ret.ExitCode != 0 {
return xerrors.Errorf("CheckProofSubmissions exit was non-zero: %d", ret.ExitCode)
}
return nil
}
for i := parentEpoch; i < epoch; i++ {
if i > parentEpoch {
// run cron for null rounds if any
if err := runCron(i); err != nil {
return cid.Undef, cid.Undef, err
}
pstate, err = vmi.Flush(ctx)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("flushing vm: %w", err)
}
}
// handle state forks
// XXX: The state tree
newState, err := sm.handleStateForks(ctx, pstate, i, em, ts)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("error handling state forks: %w", err)
}
if pstate != newState {
vmi, err = makeVmWithBaseState(newState)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("making vm: %w", err)
}
}
vmi.SetBlockHeight(i + 1)
pstate = newState
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyMessages)
var receipts []cbg.CBORMarshaler
processedMsgs := make(map[cid.Cid]struct{})
for _, b := range bms {
penalty := types.NewInt(0)
gasReward := big.Zero()
for _, cm := range append(b.BlsMessages, b.SecpkMessages...) {
m := cm.VMMessage()
if _, found := processedMsgs[m.Cid()]; found {
continue
}
r, err := vmi.ApplyMessage(ctx, cm)
if err != nil {
return cid.Undef, cid.Undef, err
}
receipts = append(receipts, &r.MessageReceipt)
gasReward = big.Add(gasReward, r.GasCosts.MinerTip)
penalty = big.Add(penalty, r.GasCosts.MinerPenalty)
if em != nil {
if err := em.MessageApplied(ctx, ts, cm.Cid(), m, r, false); err != nil {
return cid.Undef, cid.Undef, err
}
}
processedMsgs[m.Cid()] = struct{}{}
}
params, err := actors.SerializeParams(&reward.AwardBlockRewardParams{
Miner: b.Miner,
Penalty: penalty,
GasReward: gasReward,
WinCount: b.WinCount,
})
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to serialize award params: %w", err)
}
rwMsg := &types.Message{
From: builtin.SystemActorAddr,
To: reward.Address,
Nonce: uint64(epoch),
Value: types.NewInt(0),
GasFeeCap: types.NewInt(0),
GasPremium: types.NewInt(0),
GasLimit: 1 << 30,
Method: reward.Methods.AwardBlockReward,
Params: params,
}
ret, actErr := vmi.ApplyImplicitMessage(ctx, rwMsg)
if actErr != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to apply reward message for miner %s: %w", b.Miner, actErr)
}
if em != nil {
if err := em.MessageApplied(ctx, ts, rwMsg.Cid(), rwMsg, ret, true); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("callback failed on reward message: %w", err)
}
}
if ret.ExitCode != 0 {
return cid.Undef, cid.Undef, xerrors.Errorf("reward application message failed (exit %d): %s", ret.ExitCode, ret.ActorErr)
}
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyCron)
if err := runCron(epoch); err != nil {
return cid.Cid{}, cid.Cid{}, err
}
partDone()
partDone = metrics.Timer(ctx, metrics.VMApplyFlush)
rectarr := blockadt.MakeEmptyArray(sm.cs.ActorStore(ctx))
for i, receipt := range receipts {
if err := rectarr.Set(uint64(i), receipt); err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to build receipts amt: %w", err)
}
}
rectroot, err := rectarr.Root()
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("failed to build receipts amt: %w", err)
}
st, err := vmi.Flush(ctx)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("vm flush failed: %w", err)
}
stats.Record(ctx, metrics.VMSends.M(int64(atomic.LoadUint64(&vm.StatSends))),
metrics.VMApplied.M(int64(atomic.LoadUint64(&vm.StatApplied))))
return st, rectroot, nil
}
func (sm *StateManager) TipSetState(ctx context.Context, ts *types.TipSet) (st cid.Cid, rec cid.Cid, err error) {
ctx, span := trace.StartSpan(ctx, "tipSetState")
defer span.End()
if span.IsRecordingEvents() {
span.AddAttributes(trace.StringAttribute("tipset", fmt.Sprint(ts.Cids())))
}
ck := cidsToKey(ts.Cids())
sm.stlk.Lock()
cw, cwok := sm.compWait[ck]
if cwok {
sm.stlk.Unlock()
span.AddAttributes(trace.BoolAttribute("waited", true))
select {
case <-cw:
sm.stlk.Lock()
case <-ctx.Done():
return cid.Undef, cid.Undef, ctx.Err()
}
}
cached, ok := sm.stCache[ck]
if ok {
sm.stlk.Unlock()
span.AddAttributes(trace.BoolAttribute("cache", true))
return cached[0], cached[1], nil
}
ch := make(chan struct{})
sm.compWait[ck] = ch
defer func() {
sm.stlk.Lock()
delete(sm.compWait, ck)
if st != cid.Undef {
sm.stCache[ck] = []cid.Cid{st, rec}
}
sm.stlk.Unlock()
close(ch)
}()
sm.stlk.Unlock()
if ts.Height() == 0 {
// NB: This is here because the process that executes blocks requires that the
// block miner reference a valid miner in the state tree. Unless we create some
// magical genesis miner, this won't work properly, so we short circuit here
// This avoids the question of 'who gets paid the genesis block reward'
return ts.Blocks()[0].ParentStateRoot, ts.Blocks()[0].ParentMessageReceipts, nil
}
st, rec, err = sm.computeTipSetState(ctx, ts, sm.tsExecMonitor)
if err != nil {
return cid.Undef, cid.Undef, err
}
return st, rec, nil
}
func (sm *StateManager) ExecutionTraceWithMonitor(ctx context.Context, ts *types.TipSet, em ExecMonitor) (cid.Cid, error) {
st, _, err := sm.computeTipSetState(ctx, ts, em)
return st, err
}
func (sm *StateManager) ExecutionTrace(ctx context.Context, ts *types.TipSet) (cid.Cid, []*api.InvocResult, error) {
var invocTrace []*api.InvocResult
st, err := sm.ExecutionTraceWithMonitor(ctx, ts, &InvocationTracer{trace: &invocTrace})
if err != nil {
return cid.Undef, nil, err
}
return st, invocTrace, nil
}
func (sm *StateManager) computeTipSetState(ctx context.Context, ts *types.TipSet, em ExecMonitor) (cid.Cid, cid.Cid, error) {
ctx, span := trace.StartSpan(ctx, "computeTipSetState")
defer span.End()
blks := ts.Blocks()
for i := 0; i < len(blks); i++ {
for j := i + 1; j < len(blks); j++ {
if blks[i].Miner == blks[j].Miner {
return cid.Undef, cid.Undef,
xerrors.Errorf("duplicate miner in a tipset (%s %s)",
blks[i].Miner, blks[j].Miner)
}
}
}
var parentEpoch abi.ChainEpoch
pstate := blks[0].ParentStateRoot
if blks[0].Height > 0 {
parent, err := sm.cs.GetBlock(blks[0].Parents[0])
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("getting parent block: %w", err)
}
parentEpoch = parent.Height
}
r := store.NewChainRand(sm.cs, ts.Cids())
blkmsgs, err := sm.cs.BlockMsgsForTipset(ts)
if err != nil {
return cid.Undef, cid.Undef, xerrors.Errorf("getting block messages for tipset: %w", err)
}
baseFee := blks[0].ParentBaseFee
return sm.ApplyBlocks(ctx, parentEpoch, pstate, blkmsgs, blks[0].Height, r, em, baseFee, ts)
}

File diff suppressed because it is too large

View File

@ -121,7 +121,7 @@ func TestForkHeightTriggers(t *testing.T) {
}
sm, err := NewStateManagerWithUpgradeSchedule(
cg.ChainStore(), UpgradeSchedule{{
cg.ChainStore(), cg.StateManager().VMSys(), UpgradeSchedule{{
Network: network.Version1,
Height: testForkHeight,
Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,
@ -250,7 +250,7 @@ func TestForkRefuseCall(t *testing.T) {
}
sm, err := NewStateManagerWithUpgradeSchedule(
cg.ChainStore(), UpgradeSchedule{{
cg.ChainStore(), cg.StateManager().VMSys(), UpgradeSchedule{{
Network: network.Version1,
Expensive: true,
Height: testForkHeight,
@ -365,7 +365,7 @@ func TestForkPreMigration(t *testing.T) {
counter := make(chan struct{}, 10)
sm, err := NewStateManagerWithUpgradeSchedule(
cg.ChainStore(), UpgradeSchedule{{
cg.ChainStore(), cg.StateManager().VMSys(), UpgradeSchedule{{
Network: network.Version1,
Height: testForkHeight,
Migration: func(ctx context.Context, sm *StateManager, cache MigrationCache, cb ExecMonitor,

View File

@ -31,6 +31,14 @@ func (sm *StateManager) ParentState(ts *types.TipSet) (*state.StateTree, error)
return state, nil
}
func (sm *StateManager) parentState(ts *types.TipSet) cid.Cid {
if ts == nil {
ts = sm.cs.GetHeaviestTipSet()
}
return ts.ParentState()
}
func (sm *StateManager) StateTree(st cid.Cid) (*state.StateTree, error) {
cst := cbor.NewCborStore(sm.cs.StateBlockstore())
state, err := state.LoadStateTree(cst, st)

279
chain/stmgr/searchwait.go Normal file
View File

@ -0,0 +1,279 @@
package stmgr
import (
"context"
"errors"
"fmt"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
)
// WaitForMessage blocks until a message appears on chain. It looks backwards in the chain to see if this has already
// happened, with an optional limit to how many epochs it will search. It guarantees that the message has been on
// chain for at least confidence epochs without being reverted before returning.
func (sm *StateManager) WaitForMessage(ctx context.Context, mcid cid.Cid, confidence uint64, lookbackLimit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
msg, err := sm.cs.GetCMessage(mcid)
if err != nil {
return nil, nil, cid.Undef, fmt.Errorf("failed to load message: %w", err)
}
tsub := sm.cs.SubHeadChanges(ctx)
head, ok := <-tsub
if !ok {
return nil, nil, cid.Undef, fmt.Errorf("SubHeadChanges stream was invalid")
}
if len(head) != 1 {
return nil, nil, cid.Undef, fmt.Errorf("SubHeadChanges first entry should have been one item")
}
if head[0].Type != store.HCCurrent {
return nil, nil, cid.Undef, fmt.Errorf("expected current head on SHC stream (got %s)", head[0].Type)
}
r, foundMsg, err := sm.tipsetExecutedMessage(head[0].Val, mcid, msg.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, err
}
if r != nil {
return head[0].Val, r, foundMsg, nil
}
var backTs *types.TipSet
var backRcp *types.MessageReceipt
var backFm cid.Cid
backSearchWait := make(chan struct{})
go func() {
fts, r, foundMsg, err := sm.searchBackForMsg(ctx, head[0].Val, msg, lookbackLimit, allowReplaced)
if err != nil {
log.Warnf("failed to look back through chain for message: %v", err)
return
}
backTs = fts
backRcp = r
backFm = foundMsg
close(backSearchWait)
}()
var candidateTs *types.TipSet
var candidateRcp *types.MessageReceipt
var candidateFm cid.Cid
heightOfHead := head[0].Val.Height()
reverts := map[types.TipSetKey]bool{}
for {
select {
case notif, ok := <-tsub:
if !ok {
return nil, nil, cid.Undef, ctx.Err()
}
for _, val := range notif {
switch val.Type {
case store.HCRevert:
if val.Val.Equals(candidateTs) {
candidateTs = nil
candidateRcp = nil
candidateFm = cid.Undef
}
if backSearchWait != nil {
reverts[val.Val.Key()] = true
}
case store.HCApply:
if candidateTs != nil && val.Val.Height() >= candidateTs.Height()+abi.ChainEpoch(confidence) {
return candidateTs, candidateRcp, candidateFm, nil
}
r, foundMsg, err := sm.tipsetExecutedMessage(val.Val, mcid, msg.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, err
}
if r != nil {
if confidence == 0 {
return val.Val, r, foundMsg, err
}
candidateTs = val.Val
candidateRcp = r
candidateFm = foundMsg
}
heightOfHead = val.Val.Height()
}
}
case <-backSearchWait:
// check if we found the message in the chain and that it hasn't been reverted since we started searching
if backTs != nil && !reverts[backTs.Key()] {
// if head is at or past confidence interval, return immediately
if heightOfHead >= backTs.Height()+abi.ChainEpoch(confidence) {
return backTs, backRcp, backFm, nil
}
// wait for confidence interval
candidateTs = backTs
candidateRcp = backRcp
candidateFm = backFm
}
reverts = nil
backSearchWait = nil
case <-ctx.Done():
return nil, nil, cid.Undef, ctx.Err()
}
}
}
func (sm *StateManager) SearchForMessage(ctx context.Context, head *types.TipSet, mcid cid.Cid, lookbackLimit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
msg, err := sm.cs.GetCMessage(mcid)
if err != nil {
return nil, nil, cid.Undef, fmt.Errorf("failed to load message: %w", err)
}
r, foundMsg, err := sm.tipsetExecutedMessage(head, mcid, msg.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, err
}
if r != nil {
return head, r, foundMsg, nil
}
fts, r, foundMsg, err := sm.searchBackForMsg(ctx, head, msg, lookbackLimit, allowReplaced)
if err != nil {
log.Warnf("failed to look back through chain for message %s", mcid)
return nil, nil, cid.Undef, err
}
if fts == nil {
return nil, nil, cid.Undef, nil
}
return fts, r, foundMsg, nil
}
// searchBackForMsg searches up to limit tipsets backwards from the given
// tipset for a message receipt.
// If limit is
// - 0 then no tipsets are searched
// - 5 then five tipsets are searched
// - LookbackNoLimit then there is no limit
func (sm *StateManager) searchBackForMsg(ctx context.Context, from *types.TipSet, m types.ChainMsg, limit abi.ChainEpoch, allowReplaced bool) (*types.TipSet, *types.MessageReceipt, cid.Cid, error) {
limitHeight := from.Height() - limit
noLimit := limit == LookbackNoLimit
cur := from
curActor, err := sm.LoadActor(ctx, m.VMMessage().From, cur)
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("failed to load initital tipset")
}
mFromId, err := sm.LookupID(ctx, m.VMMessage().From, from)
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("looking up From id address: %w", err)
}
mNonce := m.VMMessage().Nonce
for {
// If we've reached the genesis block, or we've reached the limit of
// how far back to look
if cur.Height() == 0 || !noLimit && cur.Height() <= limitHeight {
// it ain't here!
return nil, nil, cid.Undef, nil
}
select {
case <-ctx.Done():
return nil, nil, cid.Undef, nil
default:
}
// we either have no messages from the sender, or the latest message we found has a lower nonce than the one being searched for,
// either way, no reason to look back, it ain't there
if curActor == nil || curActor.Nonce == 0 || curActor.Nonce < mNonce {
return nil, nil, cid.Undef, nil
}
pts, err := sm.cs.LoadTipSet(cur.Parents())
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("failed to load tipset during msg wait searchback: %w", err)
}
act, err := sm.LoadActor(ctx, mFromId, pts)
actorNoExist := errors.Is(err, types.ErrActorNotFound)
if err != nil && !actorNoExist {
return nil, nil, cid.Cid{}, xerrors.Errorf("failed to load the actor: %w", err)
}
// check that between cur and parent tipset the nonce fell into range of our message
if actorNoExist || (curActor.Nonce > mNonce && act.Nonce <= mNonce) {
r, foundMsg, err := sm.tipsetExecutedMessage(cur, m.Cid(), m.VMMessage(), allowReplaced)
if err != nil {
return nil, nil, cid.Undef, xerrors.Errorf("checking for message execution during lookback: %w", err)
}
if r != nil {
return cur, r, foundMsg, nil
}
}
cur = pts
curActor = act
}
}
func (sm *StateManager) tipsetExecutedMessage(ts *types.TipSet, msg cid.Cid, vmm *types.Message, allowReplaced bool) (*types.MessageReceipt, cid.Cid, error) {
// The genesis block did not execute any messages
if ts.Height() == 0 {
return nil, cid.Undef, nil
}
pts, err := sm.cs.LoadTipSet(ts.Parents())
if err != nil {
return nil, cid.Undef, err
}
cm, err := sm.cs.MessagesForTipset(pts)
if err != nil {
return nil, cid.Undef, err
}
for ii := range cm {
// iterate in reverse because we're going backwards through the chain
i := len(cm) - ii - 1
m := cm[i]
if m.VMMessage().From == vmm.From { // cheaper to just check origin first
if m.VMMessage().Nonce == vmm.Nonce {
if allowReplaced && m.VMMessage().EqualCall(vmm) {
if m.Cid() != msg {
log.Warnw("found message with equal nonce and call params but different CID",
"wanted", msg, "found", m.Cid(), "nonce", vmm.Nonce, "from", vmm.From)
}
pr, err := sm.cs.GetParentReceipt(ts.Blocks()[0], i)
if err != nil {
return nil, cid.Undef, err
}
return pr, m.Cid(), nil
}
// this should be that message
return nil, cid.Undef, xerrors.Errorf("found message with equal nonce as the one we are looking for (F:%s n %d, TS: %s n%d)",
msg, vmm.Nonce, m.Cid(), m.VMMessage().Nonce)
}
if m.VMMessage().Nonce < vmm.Nonce {
return nil, cid.Undef, nil // don't bother looking further
}
}
}
return nil, cid.Undef, nil
}
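A minimal usage sketch for the helpers in this new file (illustrative only, not from the patch; it assumes a StateManager sm, a context ctx, and a message CID mcid). WaitForMessage blocks until the message has the requested confidence, while SearchForMessage only inspects what is already on chain:

// Wait for mcid with 5 epochs of confidence, searching back at most 100 epochs.
ts, rct, foundCid, err := sm.WaitForMessage(ctx, mcid, 5, abi.ChainEpoch(100), true)
if err != nil {
	return xerrors.Errorf("waiting for message %s: %w", mcid, err)
}
log.Infow("message landed", "tipset", ts.Key(), "exitcode", rct.ExitCode, "executed", foundCid)

// Non-blocking variant: search back from the current head without waiting.
head := sm.ChainStore().GetHeaviestTipSet()
if fts, frct, _, err := sm.SearchForMessage(ctx, head, mcid, LookbackNoLimit, true); err == nil && fts != nil {
	log.Infow("message already executed", "height", fts.Height(), "exitcode", frct.ExitCode)
}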

File diff suppressed because it is too large

chain/stmgr/supply.go (Normal file, 473 lines)

@@ -0,0 +1,473 @@
package stmgr
import (
"context"
"github.com/ipfs/go-cid"
cbor "github.com/ipfs/go-ipld-cbor"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
msig0 "github.com/filecoin-project/specs-actors/actors/builtin/multisig"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
_init "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/multisig"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/actors/builtin/verifreg"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/types"
)
// sets up information about the vesting schedule
func (sm *StateManager) setupGenesisVestingSchedule(ctx context.Context) error {
gb, err := sm.cs.GetGenesis()
if err != nil {
return xerrors.Errorf("getting genesis block: %w", err)
}
gts, err := types.NewTipSet([]*types.BlockHeader{gb})
if err != nil {
return xerrors.Errorf("getting genesis tipset: %w", err)
}
st, _, err := sm.TipSetState(ctx, gts)
if err != nil {
return xerrors.Errorf("getting genesis tipset state: %w", err)
}
cst := cbor.NewCborStore(sm.cs.StateBlockstore())
sTree, err := state.LoadStateTree(cst, st)
if err != nil {
return xerrors.Errorf("loading state tree: %w", err)
}
gmf, err := getFilMarketLocked(ctx, sTree)
if err != nil {
return xerrors.Errorf("setting up genesis market funds: %w", err)
}
gp, err := getFilPowerLocked(ctx, sTree)
if err != nil {
return xerrors.Errorf("setting up genesis pledge: %w", err)
}
sm.genesisMarketFunds = gmf
sm.genesisPledge = gp
totalsByEpoch := make(map[abi.ChainEpoch]abi.TokenAmount)
// 6 months
sixMonths := abi.ChainEpoch(183 * builtin.EpochsInDay)
totalsByEpoch[sixMonths] = big.NewInt(49_929_341)
totalsByEpoch[sixMonths] = big.Add(totalsByEpoch[sixMonths], big.NewInt(32_787_700))
// 1 year
oneYear := abi.ChainEpoch(365 * builtin.EpochsInDay)
totalsByEpoch[oneYear] = big.NewInt(22_421_712)
// 2 years
twoYears := abi.ChainEpoch(2 * 365 * builtin.EpochsInDay)
totalsByEpoch[twoYears] = big.NewInt(7_223_364)
// 3 years
threeYears := abi.ChainEpoch(3 * 365 * builtin.EpochsInDay)
totalsByEpoch[threeYears] = big.NewInt(87_637_883)
// 6 years
sixYears := abi.ChainEpoch(6 * 365 * builtin.EpochsInDay)
totalsByEpoch[sixYears] = big.NewInt(100_000_000)
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(300_000_000))
sm.preIgnitionVesting = make([]msig0.State, 0, len(totalsByEpoch))
for k, v := range totalsByEpoch {
ns := msig0.State{
InitialBalance: v,
UnlockDuration: k,
PendingTxns: cid.Undef,
}
sm.preIgnitionVesting = append(sm.preIgnitionVesting, ns)
}
return nil
}
// sets up information about the vesting schedule post the ignition upgrade
func (sm *StateManager) setupPostIgnitionVesting(ctx context.Context) error {
totalsByEpoch := make(map[abi.ChainEpoch]abi.TokenAmount)
// 6 months
sixMonths := abi.ChainEpoch(183 * builtin.EpochsInDay)
totalsByEpoch[sixMonths] = big.NewInt(49_929_341)
totalsByEpoch[sixMonths] = big.Add(totalsByEpoch[sixMonths], big.NewInt(32_787_700))
// 1 year
oneYear := abi.ChainEpoch(365 * builtin.EpochsInDay)
totalsByEpoch[oneYear] = big.NewInt(22_421_712)
// 2 years
twoYears := abi.ChainEpoch(2 * 365 * builtin.EpochsInDay)
totalsByEpoch[twoYears] = big.NewInt(7_223_364)
// 3 years
threeYears := abi.ChainEpoch(3 * 365 * builtin.EpochsInDay)
totalsByEpoch[threeYears] = big.NewInt(87_637_883)
// 6 years
sixYears := abi.ChainEpoch(6 * 365 * builtin.EpochsInDay)
totalsByEpoch[sixYears] = big.NewInt(100_000_000)
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(300_000_000))
sm.postIgnitionVesting = make([]msig0.State, 0, len(totalsByEpoch))
for k, v := range totalsByEpoch {
ns := msig0.State{
// In the pre-ignition logic, we incorrectly set this value in Fil, not attoFil, an off-by-10^18 error
InitialBalance: big.Mul(v, big.NewInt(int64(build.FilecoinPrecision))),
UnlockDuration: k,
PendingTxns: cid.Undef,
// In the pre-ignition logic, the start epoch was 0. This changes in the fork logic of the Ignition upgrade itself.
StartEpoch: build.UpgradeLiftoffHeight,
}
sm.postIgnitionVesting = append(sm.postIgnitionVesting, ns)
}
return nil
}
// sets up information about the vesting schedule post the calico upgrade
func (sm *StateManager) setupPostCalicoVesting(ctx context.Context) error {
totalsByEpoch := make(map[abi.ChainEpoch]abi.TokenAmount)
// 0 days
zeroDays := abi.ChainEpoch(0)
totalsByEpoch[zeroDays] = big.NewInt(10_632_000)
// 6 months
sixMonths := abi.ChainEpoch(183 * builtin.EpochsInDay)
totalsByEpoch[sixMonths] = big.NewInt(19_015_887)
totalsByEpoch[sixMonths] = big.Add(totalsByEpoch[sixMonths], big.NewInt(32_787_700))
// 1 year
oneYear := abi.ChainEpoch(365 * builtin.EpochsInDay)
totalsByEpoch[oneYear] = big.NewInt(22_421_712)
totalsByEpoch[oneYear] = big.Add(totalsByEpoch[oneYear], big.NewInt(9_400_000))
// 2 years
twoYears := abi.ChainEpoch(2 * 365 * builtin.EpochsInDay)
totalsByEpoch[twoYears] = big.NewInt(7_223_364)
// 3 years
threeYears := abi.ChainEpoch(3 * 365 * builtin.EpochsInDay)
totalsByEpoch[threeYears] = big.NewInt(87_637_883)
totalsByEpoch[threeYears] = big.Add(totalsByEpoch[threeYears], big.NewInt(898_958))
// 6 years
sixYears := abi.ChainEpoch(6 * 365 * builtin.EpochsInDay)
totalsByEpoch[sixYears] = big.NewInt(100_000_000)
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(300_000_000))
totalsByEpoch[sixYears] = big.Add(totalsByEpoch[sixYears], big.NewInt(9_805_053))
sm.postCalicoVesting = make([]msig0.State, 0, len(totalsByEpoch))
for k, v := range totalsByEpoch {
ns := msig0.State{
InitialBalance: big.Mul(v, big.NewInt(int64(build.FilecoinPrecision))),
UnlockDuration: k,
PendingTxns: cid.Undef,
StartEpoch: build.UpgradeLiftoffHeight,
}
sm.postCalicoVesting = append(sm.postCalicoVesting, ns)
}
return nil
}
// GetFilVested returns all funds that have "left" actors that are in the genesis state:
// - For Multisigs, it counts the actual amounts that have vested at the given epoch
// - For Accounts, it counts max(currentBalance - genesisBalance, 0).
func (sm *StateManager) GetFilVested(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
vf := big.Zero()
if height <= build.UpgradeIgnitionHeight {
for _, v := range sm.preIgnitionVesting {
au := big.Sub(v.InitialBalance, v.AmountLocked(height))
vf = big.Add(vf, au)
}
} else if height <= build.UpgradeCalicoHeight {
for _, v := range sm.postIgnitionVesting {
// In the pre-ignition logic, we simply called AmountLocked(height), assuming startEpoch was 0.
// The start epoch changed in the Ignition upgrade.
au := big.Sub(v.InitialBalance, v.AmountLocked(height-v.StartEpoch))
vf = big.Add(vf, au)
}
} else {
for _, v := range sm.postCalicoVesting {
// In the pre-ignition logic, we simply called AmountLocked(height), assuming startEpoch was 0.
// The start epoch changed in the Ignition upgrade.
au := big.Sub(v.InitialBalance, v.AmountLocked(height-v.StartEpoch))
vf = big.Add(vf, au)
}
}
// After UpgradeAssemblyHeight these funds are accounted for in GetFilReserveDisbursed
if height <= build.UpgradeAssemblyHeight {
// continue to use preIgnitionGenInfos, nothing changed at the Ignition epoch
vf = big.Add(vf, sm.genesisPledge)
// continue to use preIgnitionGenInfos, nothing changed at the Ignition epoch
vf = big.Add(vf, sm.genesisMarketFunds)
}
return vf, nil
}
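The StartEpoch handling called out in the comments above can be seen with a single vesting entry (hypothetical numbers, illustrative only): the pre-Ignition code measured elapsed time from epoch 0, while the post-Ignition and post-Calico schedules measure it from StartEpoch.

v := msig0.State{
	InitialBalance: big.Mul(big.NewInt(100), big.NewInt(int64(build.FilecoinPrecision))),
	UnlockDuration: abi.ChainEpoch(365 * builtin.EpochsInDay),
	StartEpoch:     build.UpgradeLiftoffHeight,
}
h := build.UpgradeLiftoffHeight + abi.ChainEpoch(180*builtin.EpochsInDay)
vestedPreIgnitionStyle := big.Sub(v.InitialBalance, v.AmountLocked(h))                // elapsed measured from epoch 0
vestedPostIgnitionStyle := big.Sub(v.InitialBalance, v.AmountLocked(h-v.StartEpoch))  // elapsed measured from StartEpoch
_, _ = vestedPreIgnitionStyle, vestedPostIgnitionStyle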
func GetFilReserveDisbursed(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
ract, err := st.GetActor(builtin.ReserveAddress)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to get reserve actor: %w", err)
}
// If money enters the reserve actor, this could lead to a negative term
return big.Sub(big.NewFromGo(build.InitialFilReserved), ract.Balance), nil
}
func GetFilMined(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
ractor, err := st.GetActor(reward.Address)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load reward actor state: %w", err)
}
rst, err := reward.Load(adt.WrapStore(ctx, st.Store), ractor)
if err != nil {
return big.Zero(), err
}
return rst.TotalStoragePowerReward()
}
func getFilMarketLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
act, err := st.GetActor(market.Address)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load market actor: %w", err)
}
mst, err := market.Load(adt.WrapStore(ctx, st.Store), act)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load market state: %w", err)
}
return mst.TotalLocked()
}
func getFilPowerLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
pactor, err := st.GetActor(power.Address)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load power actor: %w", err)
}
pst, err := power.Load(adt.WrapStore(ctx, st.Store), pactor)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load power state: %w", err)
}
return pst.TotalLocked()
}
func (sm *StateManager) GetFilLocked(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
filMarketLocked, err := getFilMarketLocked(ctx, st)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to get filMarketLocked: %w", err)
}
filPowerLocked, err := getFilPowerLocked(ctx, st)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to get filPowerLocked: %w", err)
}
return types.BigAdd(filMarketLocked, filPowerLocked), nil
}
func GetFilBurnt(ctx context.Context, st *state.StateTree) (abi.TokenAmount, error) {
burnt, err := st.GetActor(builtin.BurntFundsActorAddr)
if err != nil {
return big.Zero(), xerrors.Errorf("failed to load burnt actor: %w", err)
}
return burnt.Balance, nil
}
func (sm *StateManager) GetVMCirculatingSupply(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
cs, err := sm.GetVMCirculatingSupplyDetailed(ctx, height, st)
if err != nil {
return types.EmptyInt, err
}
return cs.FilCirculating, err
}
func (sm *StateManager) GetVMCirculatingSupplyDetailed(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (api.CirculatingSupply, error) {
sm.genesisMsigLk.Lock()
defer sm.genesisMsigLk.Unlock()
if sm.preIgnitionVesting == nil || sm.genesisPledge.IsZero() || sm.genesisMarketFunds.IsZero() {
err := sm.setupGenesisVestingSchedule(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup pre-ignition vesting schedule: %w", err)
}
}
if sm.postIgnitionVesting == nil {
err := sm.setupPostIgnitionVesting(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup post-ignition vesting schedule: %w", err)
}
}
if sm.postCalicoVesting == nil {
err := sm.setupPostCalicoVesting(ctx)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to setup post-calico vesting schedule: %w", err)
}
}
filVested, err := sm.GetFilVested(ctx, height, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filVested: %w", err)
}
filReserveDisbursed := big.Zero()
if height > build.UpgradeAssemblyHeight {
filReserveDisbursed, err = GetFilReserveDisbursed(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filReserveDisbursed: %w", err)
}
}
filMined, err := GetFilMined(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filMined: %w", err)
}
filBurnt, err := GetFilBurnt(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filBurnt: %w", err)
}
filLocked, err := sm.GetFilLocked(ctx, st)
if err != nil {
return api.CirculatingSupply{}, xerrors.Errorf("failed to calculate filLocked: %w", err)
}
ret := types.BigAdd(filVested, filMined)
ret = types.BigAdd(ret, filReserveDisbursed)
ret = types.BigSub(ret, filBurnt)
ret = types.BigSub(ret, filLocked)
if ret.LessThan(big.Zero()) {
ret = big.Zero()
}
return api.CirculatingSupply{
FilVested: filVested,
FilMined: filMined,
FilBurnt: filBurnt,
FilLocked: filLocked,
FilCirculating: ret,
FilReserveDisbursed: filReserveDisbursed,
}, nil
}
func (sm *StateManager) GetCirculatingSupply(ctx context.Context, height abi.ChainEpoch, st *state.StateTree) (abi.TokenAmount, error) {
circ := big.Zero()
unCirc := big.Zero()
err := st.ForEach(func(a address.Address, actor *types.Actor) error {
switch {
case actor.Balance.IsZero():
// Do nothing for zero-balance actors
break
case a == _init.Address ||
a == reward.Address ||
a == verifreg.Address ||
// The power actor itself should never receive funds
a == power.Address ||
a == builtin.SystemActorAddr ||
a == builtin.CronActorAddr ||
a == builtin.BurntFundsActorAddr ||
a == builtin.SaftAddress ||
a == builtin.ReserveAddress:
unCirc = big.Add(unCirc, actor.Balance)
case a == market.Address:
mst, err := market.Load(sm.cs.ActorStore(ctx), actor)
if err != nil {
return err
}
lb, err := mst.TotalLocked()
if err != nil {
return err
}
circ = big.Add(circ, big.Sub(actor.Balance, lb))
unCirc = big.Add(unCirc, lb)
case builtin.IsAccountActor(actor.Code) || builtin.IsPaymentChannelActor(actor.Code):
circ = big.Add(circ, actor.Balance)
case builtin.IsStorageMinerActor(actor.Code):
mst, err := miner.Load(sm.cs.ActorStore(ctx), actor)
if err != nil {
return err
}
ab, err := mst.AvailableBalance(actor.Balance)
if err == nil {
circ = big.Add(circ, ab)
unCirc = big.Add(unCirc, big.Sub(actor.Balance, ab))
} else {
// Assume any error is because the miner state is "broken" (lower actor balance than locked funds)
// In this case, the actor's entire balance is considered "uncirculating"
unCirc = big.Add(unCirc, actor.Balance)
}
case builtin.IsMultisigActor(actor.Code):
mst, err := multisig.Load(sm.cs.ActorStore(ctx), actor)
if err != nil {
return err
}
lb, err := mst.LockedBalance(height)
if err != nil {
return err
}
ab := big.Sub(actor.Balance, lb)
circ = big.Add(circ, big.Max(ab, big.Zero()))
unCirc = big.Add(unCirc, big.Min(actor.Balance, lb))
default:
return xerrors.Errorf("unexpected actor: %s", a)
}
return nil
})
if err != nil {
return types.EmptyInt, err
}
total := big.Add(circ, unCirc)
if !total.Equals(types.TotalFilecoinInt) {
return types.EmptyInt, xerrors.Errorf("total filecoin didn't add to expected amount: %s != %s", total, types.TotalFilecoinInt)
}
return circ, nil
}
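In equation form, the VM-visible supply computed above is FilCirculating = FilVested + FilMined + FilReserveDisbursed - FilBurnt - FilLocked, clamped at zero. A caller-side sketch (illustrative only; it assumes sm, ctx, a tipset ts and its loaded state tree st):

supply, err := sm.GetVMCirculatingSupplyDetailed(ctx, ts.Height(), st)
if err != nil {
	return xerrors.Errorf("getting circulating supply: %w", err)
}
log.Infow("supply breakdown",
	"vested", supply.FilVested,
	"mined", supply.FilMined,
	"reserveDisbursed", supply.FilReserveDisbursed,
	"burnt", supply.FilBurnt,
	"locked", supply.FilLocked,
	"circulating", supply.FilCirculating)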

chain/stmgr/upgrades.go (Normal file, 1094 lines)

File diff suppressed because it is too large


@@ -1,540 +1,38 @@
package stmgr
import (
"bytes"
"context"
"fmt"
"os"
"reflect"
"runtime"
"strings"
exported5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/exported"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/network"
cid "github.com/ipfs/go-cid"
"github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-bitfield"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/go-state-types/rt"
exported0 "github.com/filecoin-project/specs-actors/actors/builtin/exported"
exported2 "github.com/filecoin-project/specs-actors/v2/actors/builtin/exported"
exported3 "github.com/filecoin-project/specs-actors/v3/actors/builtin/exported"
exported4 "github.com/filecoin-project/specs-actors/v4/actors/builtin/exported"
exported5 "github.com/filecoin-project/specs-actors/v5/actors/builtin/exported"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/actors/builtin"
init_ "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/builtin/power"
"github.com/filecoin-project/lotus/chain/actors/policy"
"github.com/filecoin-project/lotus/chain/beacon"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
"github.com/filecoin-project/lotus/node/modules/dtypes"
)
func GetNetworkName(ctx context.Context, sm *StateManager, st cid.Cid) (dtypes.NetworkName, error) {
act, err := sm.LoadActorRaw(ctx, init_.Address, st)
if err != nil {
return "", err
}
ias, err := init_.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return "", err
}
return ias.NetworkName()
}
func GetMinerWorkerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (address.Address, error) {
state, err := sm.StateTree(st)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load state tree: %w", err)
}
act, err := state.GetActor(maddr)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return address.Undef, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
info, err := mas.Info()
if err != nil {
return address.Undef, xerrors.Errorf("failed to load actor info: %w", err)
}
return vm.ResolveToKeyAddr(state, sm.cs.ActorStore(ctx), info.Worker)
}
func GetPower(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (power.Claim, power.Claim, bool, error) {
return GetPowerRaw(ctx, sm, ts.ParentState(), maddr)
}
func GetPowerRaw(ctx context.Context, sm *StateManager, st cid.Cid, maddr address.Address) (power.Claim, power.Claim, bool, error) {
act, err := sm.LoadActorRaw(ctx, power.Address, st)
if err != nil {
return power.Claim{}, power.Claim{}, false, xerrors.Errorf("(get sset) failed to load power actor state: %w", err)
}
pas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
tpow, err := pas.TotalPower()
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
var mpow power.Claim
var minpow bool
if maddr != address.Undef {
var found bool
mpow, found, err = pas.MinerPower(maddr)
if err != nil || !found {
return power.Claim{}, tpow, false, err
}
minpow, err = pas.MinerNominalPowerMeetsConsensusMinimum(maddr)
if err != nil {
return power.Claim{}, power.Claim{}, false, err
}
}
return mpow, tpow, minpow, nil
}
func PreCommitInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorPreCommitOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetPrecommittedSector(sid)
}
func MinerSectorInfo(ctx context.Context, sm *StateManager, maddr address.Address, sid abi.SectorNumber, ts *types.TipSet) (*miner.SectorOnChainInfo, error) {
act, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("(get sset) failed to load miner actor state: %w", err)
}
return mas.GetSector(sid)
}
func GetSectorsForWinningPoSt(ctx context.Context, nv network.Version, pv ffiwrapper.Verifier, sm *StateManager, st cid.Cid, maddr address.Address, rand abi.PoStRandomness) ([]builtin.SectorInfo, error) {
act, err := sm.LoadActorRaw(ctx, maddr, st)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
var provingSectors bitfield.BitField
if nv < network.Version7 {
allSectors, err := miner.AllPartSectors(mas, miner.Partition.AllSectors)
if err != nil {
return nil, xerrors.Errorf("get all sectors: %w", err)
}
faultySectors, err := miner.AllPartSectors(mas, miner.Partition.FaultySectors)
if err != nil {
return nil, xerrors.Errorf("get faulty sectors: %w", err)
}
provingSectors, err = bitfield.SubtractBitField(allSectors, faultySectors)
if err != nil {
return nil, xerrors.Errorf("calc proving sectors: %w", err)
}
} else {
provingSectors, err = miner.AllPartSectors(mas, miner.Partition.ActiveSectors)
if err != nil {
return nil, xerrors.Errorf("get active sectors sectors: %w", err)
}
}
numProvSect, err := provingSectors.Count()
if err != nil {
return nil, xerrors.Errorf("failed to count bits: %w", err)
}
// TODO(review): is this right? feels fishy to me
if numProvSect == 0 {
return nil, nil
}
info, err := mas.Info()
if err != nil {
return nil, xerrors.Errorf("getting miner info: %w", err)
}
mid, err := address.IDFromAddress(maddr)
if err != nil {
return nil, xerrors.Errorf("getting miner ID: %w", err)
}
proofType, err := miner.WinningPoStProofTypeFromWindowPoStProofType(nv, info.WindowPoStProofType)
if err != nil {
return nil, xerrors.Errorf("determining winning post proof type: %w", err)
}
ids, err := pv.GenerateWinningPoStSectorChallenge(ctx, proofType, abi.ActorID(mid), rand, numProvSect)
if err != nil {
return nil, xerrors.Errorf("generating winning post challenges: %w", err)
}
iter, err := provingSectors.BitIterator()
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
// Select winning sectors by _index_ in the all-sectors bitfield.
selectedSectors := bitfield.New()
prev := uint64(0)
for _, n := range ids {
sno, err := iter.Nth(n - prev)
if err != nil {
return nil, xerrors.Errorf("iterating over proving sectors: %w", err)
}
selectedSectors.Set(sno)
prev = n
}
sectors, err := mas.LoadSectors(&selectedSectors)
if err != nil {
return nil, xerrors.Errorf("loading proving sectors: %w", err)
}
out := make([]builtin.SectorInfo, len(sectors))
for i, sinfo := range sectors {
out[i] = builtin.SectorInfo{
SealProof: sinfo.SealProof,
SectorNumber: sinfo.SectorNumber,
SealedCID: sinfo.SealedCID,
}
}
return out, nil
}
func GetMinerSlashed(ctx context.Context, sm *StateManager, ts *types.TipSet, maddr address.Address) (bool, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return false, xerrors.Errorf("failed to load power actor: %w", err)
}
spas, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return false, xerrors.Errorf("failed to load power actor state: %w", err)
}
_, ok, err := spas.MinerPower(maddr)
if err != nil {
return false, xerrors.Errorf("getting miner power: %w", err)
}
if !ok {
return true, nil
}
return false, nil
}
func GetStorageDeal(ctx context.Context, sm *StateManager, dealID abi.DealID, ts *types.TipSet) (*api.MarketDeal, error) {
act, err := sm.LoadActor(ctx, market.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor: %w", err)
}
state, err := market.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load market actor state: %w", err)
}
proposals, err := state.Proposals()
if err != nil {
return nil, err
}
proposal, found, err := proposals.Get(dealID)
if err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf(
"deal %d not found "+
"- deal may not have completed sealing before deal proposal "+
"start epoch, or deal may have been slashed",
dealID)
}
states, err := state.States()
if err != nil {
return nil, err
}
st, found, err := states.Get(dealID)
if err != nil {
return nil, err
}
if !found {
st = market.EmptyDealState()
}
return &api.MarketDeal{
Proposal: *proposal,
State: *st,
}, nil
}
func ListMinerActors(ctx context.Context, sm *StateManager, ts *types.TipSet) ([]address.Address, error) {
act, err := sm.LoadActor(ctx, power.Address, ts)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor: %w", err)
}
powState, err := power.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load power actor state: %w", err)
}
return powState.ListAllMiners()
}
func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch, msgs []*types.Message, ts *types.TipSet) (cid.Cid, []*api.InvocResult, error) {
if ts == nil {
ts = sm.cs.GetHeaviestTipSet()
}
base, trace, err := sm.ExecutionTrace(ctx, ts)
if err != nil {
return cid.Undef, nil, err
}
for i := ts.Height(); i < height; i++ {
// handle state forks
base, err = sm.handleStateForks(ctx, base, i, &InvocationTracer{trace: &trace}, ts)
if err != nil {
return cid.Undef, nil, xerrors.Errorf("error handling state forks: %w", err)
}
// TODO: should we also run cron here?
}
r := store.NewChainRand(sm.cs, ts.Cids())
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: height,
Rand: r,
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.cs.VMSys(),
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: ts.Blocks()[0].ParentBaseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
vmi, err := sm.newVM(ctx, vmopt)
if err != nil {
return cid.Undef, nil, err
}
for i, msg := range msgs {
// TODO: Use the signed message length for secp messages
ret, err := vmi.ApplyMessage(ctx, msg)
if err != nil {
return cid.Undef, nil, xerrors.Errorf("applying message %s: %w", msg.Cid(), err)
}
if ret.ExitCode != 0 {
log.Infof("compute state apply message %d failed (exit: %d): %s", i, ret.ExitCode, ret.ActorErr)
}
}
root, err := vmi.Flush(ctx)
if err != nil {
return cid.Undef, nil, err
}
return root, trace, nil
}
func LookbackStateGetterForTipset(sm *StateManager, ts *types.TipSet) vm.LookbackStateGetter {
return func(ctx context.Context, round abi.ChainEpoch) (*state.StateTree, error) {
_, st, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, err
}
return sm.StateTree(st)
}
}
func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.TipSet, round abi.ChainEpoch) (*types.TipSet, cid.Cid, error) {
var lbr abi.ChainEpoch
lb := policy.GetWinningPoStSectorSetLookback(sm.GetNtwkVersion(ctx, round))
if round > lb {
lbr = round - lb
}
// more null blocks than our lookback
if lbr >= ts.Height() {
// This should never happen at this point, but may happen before
// network version 3 (where the lookback was only 10 blocks).
st, _, err := sm.TipSetState(ctx, ts)
if err != nil {
return nil, cid.Undef, err
}
return ts, st, nil
}
// Get the tipset after the lookback tipset, or the next non-null one.
nextTs, err := sm.ChainStore().GetTipsetByHeight(ctx, lbr+1, ts, false)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("failed to get lookback tipset+1: %w", err)
}
if lbr > nextTs.Height() {
return nil, cid.Undef, xerrors.Errorf("failed to find non-null tipset %s (%d) which is known to exist, found %s (%d)", ts.Key(), ts.Height(), nextTs.Key(), nextTs.Height())
}
lbts, err := sm.ChainStore().GetTipSetFromKey(nextTs.Parents())
if err != nil {
return nil, cid.Undef, xerrors.Errorf("failed to resolve lookback tipset: %w", err)
}
return lbts, nextTs.ParentState(), nil
}
func MinerGetBaseInfo(ctx context.Context, sm *StateManager, bcs beacon.Schedule, tsk types.TipSetKey, round abi.ChainEpoch, maddr address.Address, pv ffiwrapper.Verifier) (*api.MiningBaseInfo, error) {
ts, err := sm.ChainStore().LoadTipSet(tsk)
if err != nil {
return nil, xerrors.Errorf("failed to load tipset for mining base: %w", err)
}
prev, err := sm.ChainStore().GetLatestBeaconEntry(ts)
if err != nil {
if os.Getenv("LOTUS_IGNORE_DRAND") != "_yes_" {
return nil, xerrors.Errorf("failed to get latest beacon entry: %w", err)
}
prev = &types.BeaconEntry{}
}
entries, err := beacon.BeaconEntriesForBlock(ctx, bcs, round, ts.Height(), *prev)
if err != nil {
return nil, err
}
rbase := *prev
if len(entries) > 0 {
rbase = entries[len(entries)-1]
}
lbts, lbst, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, xerrors.Errorf("getting lookback miner actor state: %w", err)
}
act, err := sm.LoadActorRaw(ctx, maddr, lbst)
if xerrors.Is(err, types.ErrActorNotFound) {
_, err := sm.LoadActor(ctx, maddr, ts)
if err != nil {
return nil, xerrors.Errorf("loading miner in current state: %w", err)
}
return nil, nil
}
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor: %w", err)
}
mas, err := miner.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return nil, xerrors.Errorf("failed to load miner actor state: %w", err)
}
buf := new(bytes.Buffer)
if err := maddr.MarshalCBOR(buf); err != nil {
return nil, xerrors.Errorf("failed to marshal miner address: %w", err)
}
prand, err := store.DrawRandomness(rbase.Data, crypto.DomainSeparationTag_WinningPoStChallengeSeed, round, buf.Bytes())
if err != nil {
return nil, xerrors.Errorf("failed to get randomness for winning post: %w", err)
}
nv := sm.GetNtwkVersion(ctx, ts.Height())
sectors, err := GetSectorsForWinningPoSt(ctx, nv, pv, sm, lbst, maddr, prand)
if err != nil {
return nil, xerrors.Errorf("getting winning post proving set: %w", err)
}
if len(sectors) == 0 {
return nil, nil
}
mpow, tpow, _, err := GetPowerRaw(ctx, sm, lbst, maddr)
if err != nil {
return nil, xerrors.Errorf("failed to get power: %w", err)
}
info, err := mas.Info()
if err != nil {
return nil, err
}
worker, err := sm.ResolveToKeyAddress(ctx, info.Worker, ts)
if err != nil {
return nil, xerrors.Errorf("resolving worker address: %w", err)
}
// TODO: Not ideal performance...This method reloads miner and power state (already looked up here and in GetPowerRaw)
eligible, err := MinerEligibleToMine(ctx, sm, maddr, ts, lbts)
if err != nil {
return nil, xerrors.Errorf("determining miner eligibility: %w", err)
}
return &api.MiningBaseInfo{
MinerPower: mpow.QualityAdjPower,
NetworkPower: tpow.QualityAdjPower,
Sectors: sectors,
WorkerKey: worker,
SectorSize: info.SectorSize,
PrevBeaconEntry: *prev,
BeaconEntries: entries,
EligibleForMining: eligible,
}, nil
}
type MethodMeta struct {
Name string
@@ -621,86 +119,124 @@ func GetParamType(actCode cid.Cid, method abi.MethodNum) (cbg.CBORUnmarshaler, e
return reflect.New(m.Params.Elem()).Interface().(cbg.CBORUnmarshaler), nil
}
func minerHasMinPower(ctx context.Context, sm *StateManager, addr address.Address, ts *types.TipSet) (bool, error) {
pact, err := sm.LoadActor(ctx, power.Address, ts)
func GetNetworkName(ctx context.Context, sm *StateManager, st cid.Cid) (dtypes.NetworkName, error) {
act, err := sm.LoadActorRaw(ctx, init_.Address, st)
if err != nil {
return false, xerrors.Errorf("loading power actor state: %w", err)
return "", err
}
ias, err := init_.Load(sm.cs.ActorStore(ctx), act)
if err != nil {
return "", err
}
ps, err := power.Load(sm.cs.ActorStore(ctx), pact)
if err != nil {
return false, err
}
return ps.MinerNominalPowerMeetsConsensusMinimum(addr)
return ias.NetworkName()
}
func MinerEligibleToMine(ctx context.Context, sm *StateManager, addr address.Address, baseTs *types.TipSet, lookbackTs *types.TipSet) (bool, error) {
hmp, err := minerHasMinPower(ctx, sm, addr, lookbackTs)
// TODO: We're blurring the lines between a "runtime network version" and a "Lotus upgrade epoch", is that unavoidable?
if sm.GetNtwkVersion(ctx, baseTs.Height()) <= network.Version3 {
return hmp, err
func ComputeState(ctx context.Context, sm *StateManager, height abi.ChainEpoch, msgs []*types.Message, ts *types.TipSet) (cid.Cid, []*api.InvocResult, error) {
if ts == nil {
ts = sm.cs.GetHeaviestTipSet()
}
base, trace, err := sm.ExecutionTrace(ctx, ts)
if err != nil {
return false, err
return cid.Undef, nil, err
}
if !hmp {
return false, nil
for i := ts.Height(); i < height; i++ {
// handle state forks
base, err = sm.handleStateForks(ctx, base, i, &InvocationTracer{trace: &trace}, ts)
if err != nil {
return cid.Undef, nil, xerrors.Errorf("error handling state forks: %w", err)
}
// TODO: should we also run cron here?
}
// Post actors v2, also check MinerEligibleForElection with base ts
pact, err := sm.LoadActor(ctx, power.Address, baseTs)
r := store.NewChainRand(sm.cs, ts.Cids())
vmopt := &vm.VMOpts{
StateBase: base,
Epoch: height,
Rand: r,
Bstore: sm.cs.StateBlockstore(),
Syscalls: sm.syscalls,
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: ts.Blocks()[0].ParentBaseFee,
LookbackState: LookbackStateGetterForTipset(sm, ts),
}
vmi, err := sm.newVM(ctx, vmopt)
if err != nil {
return false, xerrors.Errorf("loading power actor state: %w", err)
return cid.Undef, nil, err
}
pstate, err := power.Load(sm.cs.ActorStore(ctx), pact)
for i, msg := range msgs {
// TODO: Use the signed message length for secp messages
ret, err := vmi.ApplyMessage(ctx, msg)
if err != nil {
return cid.Undef, nil, xerrors.Errorf("applying message %s: %w", msg.Cid(), err)
}
if ret.ExitCode != 0 {
log.Infof("compute state apply message %d failed (exit: %d): %s", i, ret.ExitCode, ret.ActorErr)
}
}
root, err := vmi.Flush(ctx)
if err != nil {
return false, err
return cid.Undef, nil, err
}
mact, err := sm.LoadActor(ctx, addr, baseTs)
if err != nil {
return false, xerrors.Errorf("loading miner actor state: %w", err)
}
mstate, err := miner.Load(sm.cs.ActorStore(ctx), mact)
if err != nil {
return false, err
}
// Non-empty power claim.
if claim, found, err := pstate.MinerPower(addr); err != nil {
return false, err
} else if !found {
return false, err
} else if claim.QualityAdjPower.LessThanEqual(big.Zero()) {
return false, err
}
// No fee debt.
if debt, err := mstate.FeeDebt(); err != nil {
return false, err
} else if !debt.IsZero() {
return false, err
}
// No active consensus faults.
if mInfo, err := mstate.Info(); err != nil {
return false, err
} else if baseTs.Height() <= mInfo.ConsensusFaultElapsed {
return false, nil
}
return true, nil
return root, trace, nil
}
func CheckTotalFIL(ctx context.Context, sm *StateManager, ts *types.TipSet) (abi.TokenAmount, error) {
str, err := state.LoadStateTree(sm.ChainStore().ActorStore(ctx), ts.ParentState())
func LookbackStateGetterForTipset(sm *StateManager, ts *types.TipSet) vm.LookbackStateGetter {
return func(ctx context.Context, round abi.ChainEpoch) (*state.StateTree, error) {
_, st, err := GetLookbackTipSetForRound(ctx, sm, ts, round)
if err != nil {
return nil, err
}
return sm.StateTree(st)
}
}
func GetLookbackTipSetForRound(ctx context.Context, sm *StateManager, ts *types.TipSet, round abi.ChainEpoch) (*types.TipSet, cid.Cid, error) {
var lbr abi.ChainEpoch
lb := policy.GetWinningPoStSectorSetLookback(sm.GetNtwkVersion(ctx, round))
if round > lb {
lbr = round - lb
}
// more null blocks than our lookback
if lbr >= ts.Height() {
// This should never happen at this point, but may happen before
// network version 3 (where the lookback was only 10 blocks).
st, _, err := sm.TipSetState(ctx, ts)
if err != nil {
return nil, cid.Undef, err
}
return ts, st, nil
}
// Get the tipset after the lookback tipset, or the next non-null one.
nextTs, err := sm.ChainStore().GetTipsetByHeight(ctx, lbr+1, ts, false)
if err != nil {
return nil, cid.Undef, xerrors.Errorf("failed to get lookback tipset+1: %w", err)
}
if lbr > nextTs.Height() {
return nil, cid.Undef, xerrors.Errorf("failed to find non-null tipset %s (%d) which is known to exist, found %s (%d)", ts.Key(), ts.Height(), nextTs.Key(), nextTs.Height())
}
lbts, err := sm.ChainStore().GetTipSetFromKey(nextTs.Parents())
if err != nil {
return nil, cid.Undef, xerrors.Errorf("failed to resolve lookback tipset: %w", err)
}
return lbts, nextTs.ParentState(), nil
}
func CheckTotalFIL(ctx context.Context, cs *store.ChainStore, ts *types.TipSet) (abi.TokenAmount, error) {
str, err := state.LoadStateTree(cs.ActorStore(ctx), ts.ParentState())
if err != nil {
return abi.TokenAmount{}, err
}
@@ -729,3 +265,21 @@ func MakeMsgGasCost(msg *types.Message, ret *vm.ApplyRet) api.MsgGasCost {
TotalCost: big.Sub(msg.RequiredFunds(), ret.GasCosts.Refund),
}
}
func (sm *StateManager) ListAllActors(ctx context.Context, ts *types.TipSet) ([]address.Address, error) {
stateTree, err := sm.StateTree(sm.parentState(ts))
if err != nil {
return nil, err
}
var out []address.Address
err = stateTree.ForEach(func(addr address.Address, act *types.Actor) error {
out = append(out, addr)
return nil
})
if err != nil {
return nil, err
}
return out, nil
}


@@ -31,7 +31,7 @@ func TestIndexSeeks(t *testing.T) {
ctx := context.TODO()
nbs := blockstore.NewMemorySync()
cs := store.NewChainStore(nbs, nbs, syncds.MutexWrap(datastore.NewMapDatastore()), nil, nil)
cs := store.NewChainStore(nbs, nbs, syncds.MutexWrap(datastore.NewMapDatastore()), nil)
defer cs.Close() //nolint:errcheck
_, err = cs.Import(bytes.NewReader(gencar))

chain/store/messages.go (Normal file, 303 lines)

@@ -0,0 +1,303 @@
package store
import (
"context"
"github.com/ipfs/go-cid"
"golang.org/x/xerrors"
block "github.com/ipfs/go-block-format"
cbor "github.com/ipfs/go-ipld-cbor"
cbg "github.com/whyrusleeping/cbor-gen"
"github.com/filecoin-project/go-address"
blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/types"
)
type storable interface {
ToStorageBlock() (block.Block, error)
}
func PutMessage(bs bstore.Blockstore, m storable) (cid.Cid, error) {
b, err := m.ToStorageBlock()
if err != nil {
return cid.Undef, err
}
if err := bs.Put(b); err != nil {
return cid.Undef, err
}
return b.Cid(), nil
}
func (cs *ChainStore) PutMessage(m storable) (cid.Cid, error) {
return PutMessage(cs.chainBlockstore, m)
}
func (cs *ChainStore) GetCMessage(c cid.Cid) (types.ChainMsg, error) {
m, err := cs.GetMessage(c)
if err == nil {
return m, nil
}
if err != bstore.ErrNotFound {
log.Warnf("GetCMessage: unexpected error getting unsigned message: %s", err)
}
return cs.GetSignedMessage(c)
}
func (cs *ChainStore) GetMessage(c cid.Cid) (*types.Message, error) {
var msg *types.Message
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) GetSignedMessage(c cid.Cid) (*types.SignedMessage, error) {
var msg *types.SignedMessage
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeSignedMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) readAMTCids(root cid.Cid) ([]cid.Cid, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), root)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var (
cids []cid.Cid
cborCid cbg.CborCid
)
if err := a.ForEach(&cborCid, func(i int64) error {
c := cid.Cid(cborCid)
cids = append(cids, c)
return nil
}); err != nil {
return nil, xerrors.Errorf("failed to traverse amt: %w", err)
}
if uint64(len(cids)) != a.Length() {
return nil, xerrors.Errorf("found %d cids, expected %d", len(cids), a.Length())
}
return cids, nil
}
type BlockMessages struct {
Miner address.Address
BlsMessages []types.ChainMsg
SecpkMessages []types.ChainMsg
WinCount int64
}
func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, error) {
applied := make(map[address.Address]uint64)
cst := cbor.NewCborStore(cs.stateBlockstore)
st, err := state.LoadStateTree(cst, ts.Blocks()[0].ParentStateRoot)
if err != nil {
return nil, xerrors.Errorf("failed to load state tree at tipset %s: %w", ts, err)
}
selectMsg := func(m *types.Message) (bool, error) {
var sender address.Address
if ts.Height() >= build.UpgradeHyperdriveHeight {
sender, err = st.LookupID(m.From)
if err != nil {
return false, err
}
} else {
sender = m.From
}
// The first match for a sender is guaranteed to have correct nonce -- the block isn't valid otherwise
if _, ok := applied[sender]; !ok {
applied[sender] = m.Nonce
}
if applied[sender] != m.Nonce {
return false, nil
}
applied[sender]++
return true, nil
}
var out []BlockMessages
for _, b := range ts.Blocks() {
bms, sms, err := cs.MessagesForBlock(b)
if err != nil {
return nil, xerrors.Errorf("failed to get messages for block: %w", err)
}
bm := BlockMessages{
Miner: b.Miner,
BlsMessages: make([]types.ChainMsg, 0, len(bms)),
SecpkMessages: make([]types.ChainMsg, 0, len(sms)),
WinCount: b.ElectionProof.WinCount,
}
for _, bmsg := range bms {
b, err := selectMsg(bmsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.BlsMessages = append(bm.BlsMessages, bmsg)
}
}
for _, smsg := range sms {
b, err := selectMsg(smsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.SecpkMessages = append(bm.SecpkMessages, smsg)
}
}
out = append(out, bm)
}
return out, nil
}
func (cs *ChainStore) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) {
bmsgs, err := cs.BlockMsgsForTipset(ts)
if err != nil {
return nil, err
}
var out []types.ChainMsg
for _, bm := range bmsgs {
for _, blsm := range bm.BlsMessages {
out = append(out, blsm)
}
for _, secm := range bm.SecpkMessages {
out = append(out, secm)
}
}
return out, nil
}
type mmCids struct {
bls []cid.Cid
secpk []cid.Cid
}
func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error) {
o, ok := cs.mmCache.Get(mmc)
if ok {
mmcids := o.(*mmCids)
return mmcids.bls, mmcids.secpk, nil
}
cst := cbor.NewCborStore(cs.chainLocalBlockstore)
var msgmeta types.MsgMeta
if err := cst.Get(context.TODO(), mmc, &msgmeta); err != nil {
return nil, nil, xerrors.Errorf("failed to load msgmeta (%s): %w", mmc, err)
}
blscids, err := cs.readAMTCids(msgmeta.BlsMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls message cids for block: %w", err)
}
secpkcids, err := cs.readAMTCids(msgmeta.SecpkMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk message cids for block: %w", err)
}
cs.mmCache.Add(mmc, &mmCids{
bls: blscids,
secpk: secpkcids,
})
return blscids, secpkcids, nil
}
func (cs *ChainStore) MessagesForBlock(b *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
blscids, secpkcids, err := cs.ReadMsgMetaCids(b.Messages)
if err != nil {
return nil, nil, err
}
blsmsgs, err := cs.LoadMessagesFromCids(blscids)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls messages for block: %w", err)
}
secpkmsgs, err := cs.LoadSignedMessagesFromCids(secpkcids)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk messages for block: %w", err)
}
return blsmsgs, secpkmsgs, nil
}
func (cs *ChainStore) GetParentReceipt(b *types.BlockHeader, i int) (*types.MessageReceipt, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), b.ParentMessageReceipts)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var r types.MessageReceipt
if found, err := a.Get(uint64(i), &r); err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf("failed to find receipt %d", i)
}
return &r, nil
}
func (cs *ChainStore) LoadMessagesFromCids(cids []cid.Cid) ([]*types.Message, error) {
msgs := make([]*types.Message, 0, len(cids))
for i, c := range cids {
m, err := cs.GetMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
func (cs *ChainStore) LoadSignedMessagesFromCids(cids []cid.Cid) ([]*types.SignedMessage, error) {
msgs := make([]*types.SignedMessage, 0, len(cids))
for i, c := range cids {
m, err := cs.GetSignedMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
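The nonce-based selection in BlockMsgsForTipset above can be restated in isolation: for each sender the first message seen fixes the expected nonce, and only the strictly sequential run that follows is kept, so a message included by several blocks of the tipset is counted once. A self-contained sketch of that rule (illustrative only; it reimplements the loop rather than calling the ChainStore, and uses a made-up ID address):

sender, _ := address.NewIDAddress(1000)
msgs := []*types.Message{
	{From: sender, Nonce: 7},  // included by block A
	{From: sender, Nonce: 7},  // same message included again by block B: dropped as a duplicate
	{From: sender, Nonce: 8},  // next nonce in sequence: selected
	{From: sender, Nonce: 10}, // nonce gap (9 is missing): dropped
}
applied := map[address.Address]uint64{}
var selected []*types.Message
for _, m := range msgs {
	if _, ok := applied[m.From]; !ok {
		applied[m.From] = m.Nonce // first match fixes the starting nonce for this sender
	}
	if applied[m.From] != m.Nonce {
		continue
	}
	applied[m.From]++
	selected = append(selected, m)
}
// selected now holds only the nonce-7 and nonce-8 messages.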

chain/store/rand.go (Normal file, 182 lines)

@@ -0,0 +1,182 @@
package store
import (
"context"
"encoding/binary"
"os"
"github.com/ipfs/go-cid"
"github.com/minio/blake2b-simd"
"go.opencensus.io/trace"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
)
func DrawRandomness(rbase []byte, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
h := blake2b.New256()
if err := binary.Write(h, binary.BigEndian, int64(pers)); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
VRFDigest := blake2b.Sum256(rbase)
_, err := h.Write(VRFDigest[:])
if err != nil {
return nil, xerrors.Errorf("hashing VRFDigest: %w", err)
}
if err := binary.Write(h, binary.BigEndian, round); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
_, err = h.Write(entropy)
if err != nil {
return nil, xerrors.Errorf("hashing entropy: %w", err)
}
return h.Sum(nil), nil
}
func (cs *ChainStore) GetBeaconRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetBeaconRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetBeaconRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetBeaconRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
be, err := cs.GetLatestBeaconEntry(randTs)
if err != nil {
return nil, err
}
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(be.Data, pers, round, entropy)
}
func (cs *ChainStore) GetChainRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetChainRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetChainRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetChainRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
mtb := randTs.MinTicketBlock()
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(mtb.Ticket.VRFProof, pers, round, entropy)
}
func (cs *ChainStore) GetLatestBeaconEntry(ts *types.TipSet) (*types.BeaconEntry, error) {
cur := ts
for i := 0; i < 20; i++ {
cbe := cur.Blocks()[0].BeaconEntries
if len(cbe) > 0 {
return &cbe[len(cbe)-1], nil
}
if cur.Height() == 0 {
return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry")
}
next, err := cs.LoadTipSet(cur.Parents())
if err != nil {
return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err)
}
cur = next
}
if os.Getenv("LOTUS_IGNORE_DRAND") == "_yes_" {
return &types.BeaconEntry{
Data: []byte{9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9},
}, nil
}
return nil, xerrors.Errorf("found NO beacon entries in the 20 latest tipsets")
}
type chainRand struct {
cs *ChainStore
blks []cid.Cid
}
func NewChainRand(cs *ChainStore, blks []cid.Cid) vm.Rand {
return &chainRand{
cs: cs,
blks: blks,
}
}
func (cr *chainRand) GetChainRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetChainRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cs *ChainStore) GetTipSetFromKey(tsk types.TipSetKey) (*types.TipSet, error) {
if tsk.IsEmpty() {
return cs.GetHeaviestTipSet(), nil
}
return cs.LoadTipSet(tsk)
}
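A minimal usage sketch of the randomness helpers above, assuming an existing *store.ChainStore `cs` and a head tipset; the function name, the domain separation tag, the epoch, and the entropy bytes are arbitrary example values, not taken from the patch.

package example

import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"
	"github.com/filecoin-project/go-state-types/crypto"

	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/types"
)

// drawTicketRandomness is an illustrative sketch: it walks back from the head
// tipset to epoch 100 and derives chain (ticket) randomness there, mixing in
// caller-supplied entropy.
func drawTicketRandomness(ctx context.Context, cs *store.ChainStore, head *types.TipSet) ([]byte, error) {
	return cs.GetChainRandomnessLookingBack(
		ctx,
		head.Cids(),
		crypto.DomainSeparationTag_TicketProduction,
		abi.ChainEpoch(100),
		[]byte("example entropy"),
	)
}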

205
chain/store/snapshot.go Normal file
View File

@ -0,0 +1,205 @@
package store
import (
"bytes"
"context"
"io"
"github.com/ipfs/go-cid"
"github.com/ipld/go-car"
carutil "github.com/ipld/go-car/util"
cbg "github.com/whyrusleeping/cbor-gen"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/types"
)
func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs bool, w io.Writer) error {
h := &car.CarHeader{
Roots: ts.Cids(),
Version: 1,
}
if err := car.WriteHeader(h, w); err != nil {
return xerrors.Errorf("failed to write car header: %s", err)
}
unionBs := bstore.Union(cs.stateBlockstore, cs.chainBlockstore)
return cs.WalkSnapshot(ctx, ts, inclRecentRoots, skipOldMsgs, true, func(c cid.Cid) error {
blk, err := unionBs.Get(c)
if err != nil {
return xerrors.Errorf("writing object to car, bs.Get: %w", err)
}
if err := carutil.LdWrite(w, c.Bytes(), blk.RawData()); err != nil {
return xerrors.Errorf("failed to write block to car output: %w", err)
}
return nil
})
}
func (cs *ChainStore) Import(r io.Reader) (*types.TipSet, error) {
// TODO: writing only to the state blockstore is incorrect.
// At this time, both the state and chain blockstores are backed by the
// universal store. When we physically segregate the stores, we will need
// to route state objects to the state blockstore, and chain objects to
// the chain blockstore.
header, err := car.LoadCar(cs.StateBlockstore(), r)
if err != nil {
return nil, xerrors.Errorf("loadcar failed: %w", err)
}
root, err := cs.LoadTipSet(types.NewTipSetKey(header.Roots...))
if err != nil {
return nil, xerrors.Errorf("failed to load root tipset from chainfile: %w", err)
}
return root, nil
}
func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs, skipMsgReceipts bool, cb func(cid.Cid) error) error {
if ts == nil {
ts = cs.GetHeaviestTipSet()
}
seen := cid.NewSet()
walked := cid.NewSet()
blocksToWalk := ts.Cids()
currentMinHeight := ts.Height()
walkChain := func(blk cid.Cid) error {
if !seen.Visit(blk) {
return nil
}
if err := cb(blk); err != nil {
return err
}
data, err := cs.chainBlockstore.Get(blk)
if err != nil {
return xerrors.Errorf("getting block: %w", err)
}
var b types.BlockHeader
if err := b.UnmarshalCBOR(bytes.NewBuffer(data.RawData())); err != nil {
return xerrors.Errorf("unmarshaling block header (cid=%s): %w", blk, err)
}
if currentMinHeight > b.Height {
currentMinHeight = b.Height
if currentMinHeight%builtin.EpochsInDay == 0 {
log.Infow("export", "height", currentMinHeight)
}
}
var cids []cid.Cid
if !skipOldMsgs || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.Messages) {
mcids, err := recurseLinks(cs.chainBlockstore, walked, b.Messages, []cid.Cid{b.Messages})
if err != nil {
return xerrors.Errorf("recursing messages failed: %w", err)
}
cids = mcids
}
}
if b.Height > 0 {
for _, p := range b.Parents {
blocksToWalk = append(blocksToWalk, p)
}
} else {
// include the genesis block
cids = append(cids, b.Parents...)
}
out := cids
if b.Height == 0 || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.ParentStateRoot) {
cids, err := recurseLinks(cs.stateBlockstore, walked, b.ParentStateRoot, []cid.Cid{b.ParentStateRoot})
if err != nil {
return xerrors.Errorf("recursing genesis state failed: %w", err)
}
out = append(out, cids...)
}
if !skipMsgReceipts && walked.Visit(b.ParentMessageReceipts) {
out = append(out, b.ParentMessageReceipts)
}
}
for _, c := range out {
if seen.Visit(c) {
if c.Prefix().Codec != cid.DagCBOR {
continue
}
if err := cb(c); err != nil {
return err
}
}
}
return nil
}
log.Infow("export started")
exportStart := build.Clock.Now()
for len(blocksToWalk) > 0 {
next := blocksToWalk[0]
blocksToWalk = blocksToWalk[1:]
if err := walkChain(next); err != nil {
return xerrors.Errorf("walk chain failed: %w", err)
}
}
log.Infow("export finished", "duration", build.Clock.Now().Sub(exportStart).Seconds())
return nil
}
func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.Cid) ([]cid.Cid, error) {
if root.Prefix().Codec != cid.DagCBOR {
return in, nil
}
data, err := bs.Get(root)
if err != nil {
return nil, xerrors.Errorf("recurse links get (%s) failed: %w", root, err)
}
var rerr error
err = cbg.ScanForLinks(bytes.NewReader(data.RawData()), func(c cid.Cid) {
if rerr != nil {
// No error return on ScanForLinks :(
return
}
// traversed this already...
if !walked.Visit(c) {
return
}
in = append(in, c)
var err error
in, err = recurseLinks(bs, walked, c, in)
if err != nil {
rerr = err
}
})
if err != nil {
return nil, xerrors.Errorf("scanning for links failed: %w", err)
}
return in, rerr
}
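A minimal sketch of driving the Export/Import API defined in this new file, assuming an existing *store.ChainStore `cs` and a head tipset; the output file name, the 2880-epoch state window, and the helper name are illustrative choices, not part of the patch.

package example

import (
	"context"
	"os"

	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/types"
)

// exportAndReimport writes a snapshot of the chain rooted at head to a CAR
// file and loads it back, returning the root tipset of the imported chain.
func exportAndReimport(ctx context.Context, cs *store.ChainStore, head *types.TipSet) (*types.TipSet, error) {
	out, err := os.Create("snapshot.car")
	if err != nil {
		return nil, err
	}

	// Keep state roots for roughly the last day of epochs and skip messages
	// older than that window.
	if err := cs.Export(ctx, head, abi.ChainEpoch(2880), true, out); err != nil {
		_ = out.Close()
		return nil, err
	}
	if err := out.Close(); err != nil {
		return nil, err
	}

	in, err := os.Open("snapshot.car")
	if err != nil {
		return nil, err
	}
	defer in.Close() //nolint:errcheck

	return cs.Import(in)
}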

View File

@ -1,36 +1,24 @@
package store
import (
"bytes"
"context"
"encoding/binary"
"encoding/json"
"errors"
"io"
"os"
"strconv"
"strings"
"sync"
"time"
"github.com/filecoin-project/lotus/chain/state"
"golang.org/x/sync/errgroup"
"github.com/filecoin-project/go-state-types/crypto"
"github.com/minio/blake2b-simd"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
blockadt "github.com/filecoin-project/specs-actors/actors/util/adt"
"github.com/filecoin-project/lotus/api"
bstore "github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/adt"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/journal"
"github.com/filecoin-project/lotus/metrics"
@ -48,9 +36,6 @@ import (
"github.com/ipfs/go-datastore/query"
cbor "github.com/ipfs/go-ipld-cbor"
logging "github.com/ipfs/go-log/v2"
"github.com/ipld/go-car"
carutil "github.com/ipld/go-car/util"
cbg "github.com/whyrusleeping/cbor-gen"
"github.com/whyrusleeping/pubsub"
"golang.org/x/xerrors"
)
@ -134,11 +119,9 @@ type ChainStore struct {
reorgCh chan<- reorg
reorgNotifeeCh chan ReorgNotifee
mmCache *lru.ARCCache
mmCache *lru.ARCCache // msg meta cache (mh.Messages -> secp, bls []cid)
tsCache *lru.ARCCache
vmcalls vm.SyscallBuilder
evtTypes [1]journal.EventType
journal journal.Journal
@ -146,7 +129,7 @@ type ChainStore struct {
wg sync.WaitGroup
}
func NewChainStore(chainBs bstore.Blockstore, stateBs bstore.Blockstore, ds dstore.Batching, vmcalls vm.SyscallBuilder, j journal.Journal) *ChainStore {
func NewChainStore(chainBs bstore.Blockstore, stateBs bstore.Blockstore, ds dstore.Batching, j journal.Journal) *ChainStore {
c, _ := lru.NewARC(DefaultMsgMetaCacheSize)
tsc, _ := lru.NewARC(DefaultTipSetCacheSize)
if j == nil {
@ -166,7 +149,6 @@ func NewChainStore(chainBs bstore.Blockstore, stateBs bstore.Blockstore, ds dsto
tipsets: make(map[abi.ChainEpoch][]cid.Cid),
mmCache: c,
tsCache: tsc,
vmcalls: vmcalls,
cancelFn: cancel,
journal: j,
}
@ -988,27 +970,6 @@ func (cs *ChainStore) PersistBlockHeaders(b ...*types.BlockHeader) error {
return err
}
type storable interface {
ToStorageBlock() (block.Block, error)
}
func PutMessage(bs bstore.Blockstore, m storable) (cid.Cid, error) {
b, err := m.ToStorageBlock()
if err != nil {
return cid.Undef, err
}
if err := bs.Put(b); err != nil {
return cid.Undef, err
}
return b.Cid(), nil
}
func (cs *ChainStore) PutMessage(m storable) (cid.Cid, error) {
return PutMessage(cs.chainBlockstore, m)
}
func (cs *ChainStore) expandTipset(b *types.BlockHeader) (*types.TipSet, error) {
// Hold lock for the whole function for now, if it becomes a problem we can
// fix pretty easily
@ -1080,203 +1041,6 @@ func (cs *ChainStore) GetGenesis() (*types.BlockHeader, error) {
return cs.GetBlock(c)
}
func (cs *ChainStore) GetCMessage(c cid.Cid) (types.ChainMsg, error) {
m, err := cs.GetMessage(c)
if err == nil {
return m, nil
}
if err != bstore.ErrNotFound {
log.Warnf("GetCMessage: unexpected error getting unsigned message: %s", err)
}
return cs.GetSignedMessage(c)
}
func (cs *ChainStore) GetMessage(c cid.Cid) (*types.Message, error) {
var msg *types.Message
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) GetSignedMessage(c cid.Cid) (*types.SignedMessage, error) {
var msg *types.SignedMessage
err := cs.chainLocalBlockstore.View(c, func(b []byte) (err error) {
msg, err = types.DecodeSignedMessage(b)
return err
})
return msg, err
}
func (cs *ChainStore) readAMTCids(root cid.Cid) ([]cid.Cid, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), root)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var (
cids []cid.Cid
cborCid cbg.CborCid
)
if err := a.ForEach(&cborCid, func(i int64) error {
c := cid.Cid(cborCid)
cids = append(cids, c)
return nil
}); err != nil {
return nil, xerrors.Errorf("failed to traverse amt: %w", err)
}
if uint64(len(cids)) != a.Length() {
return nil, xerrors.Errorf("found %d cids, expected %d", len(cids), a.Length())
}
return cids, nil
}
type BlockMessages struct {
Miner address.Address
BlsMessages []types.ChainMsg
SecpkMessages []types.ChainMsg
WinCount int64
}
func (cs *ChainStore) BlockMsgsForTipset(ts *types.TipSet) ([]BlockMessages, error) {
applied := make(map[address.Address]uint64)
cst := cbor.NewCborStore(cs.stateBlockstore)
st, err := state.LoadStateTree(cst, ts.Blocks()[0].ParentStateRoot)
if err != nil {
return nil, xerrors.Errorf("failed to load state tree")
}
selectMsg := func(m *types.Message) (bool, error) {
var sender address.Address
if ts.Height() >= build.UpgradeHyperdriveHeight {
sender, err = st.LookupID(m.From)
if err != nil {
return false, err
}
} else {
sender = m.From
}
// The first match for a sender is guaranteed to have correct nonce -- the block isn't valid otherwise
if _, ok := applied[sender]; !ok {
applied[sender] = m.Nonce
}
if applied[sender] != m.Nonce {
return false, nil
}
applied[sender]++
return true, nil
}
var out []BlockMessages
for _, b := range ts.Blocks() {
bms, sms, err := cs.MessagesForBlock(b)
if err != nil {
return nil, xerrors.Errorf("failed to get messages for block: %w", err)
}
bm := BlockMessages{
Miner: b.Miner,
BlsMessages: make([]types.ChainMsg, 0, len(bms)),
SecpkMessages: make([]types.ChainMsg, 0, len(sms)),
WinCount: b.ElectionProof.WinCount,
}
for _, bmsg := range bms {
b, err := selectMsg(bmsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.BlsMessages = append(bm.BlsMessages, bmsg)
}
}
for _, smsg := range sms {
b, err := selectMsg(smsg.VMMessage())
if err != nil {
return nil, xerrors.Errorf("failed to decide whether to select message for block: %w", err)
}
if b {
bm.SecpkMessages = append(bm.SecpkMessages, smsg)
}
}
out = append(out, bm)
}
return out, nil
}
func (cs *ChainStore) MessagesForTipset(ts *types.TipSet) ([]types.ChainMsg, error) {
bmsgs, err := cs.BlockMsgsForTipset(ts)
if err != nil {
return nil, err
}
var out []types.ChainMsg
for _, bm := range bmsgs {
for _, blsm := range bm.BlsMessages {
out = append(out, blsm)
}
for _, secm := range bm.SecpkMessages {
out = append(out, secm)
}
}
return out, nil
}
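The selectMsg logic above de-duplicates messages that appear in more than one block of a tipset and enforces per-sender nonce ordering. A minimal sketch of consuming that behaviour through BlockMsgsForTipset, assuming an existing *store.ChainStore `cs`; the helper name is illustrative.

package example

import (
	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/types"
)

// countTipsetMessages returns the number of unique, correctly-nonced messages
// in a tipset, relying on the per-sender de-duplication performed by
// BlockMsgsForTipset.
func countTipsetMessages(cs *store.ChainStore, ts *types.TipSet) (int, error) {
	blkMsgs, err := cs.BlockMsgsForTipset(ts)
	if err != nil {
		return 0, err
	}

	total := 0
	for _, bm := range blkMsgs {
		total += len(bm.BlsMessages) + len(bm.SecpkMessages)
	}
	return total, nil
}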
type mmCids struct {
bls []cid.Cid
secpk []cid.Cid
}
func (cs *ChainStore) ReadMsgMetaCids(mmc cid.Cid) ([]cid.Cid, []cid.Cid, error) {
o, ok := cs.mmCache.Get(mmc)
if ok {
mmcids := o.(*mmCids)
return mmcids.bls, mmcids.secpk, nil
}
cst := cbor.NewCborStore(cs.chainLocalBlockstore)
var msgmeta types.MsgMeta
if err := cst.Get(context.TODO(), mmc, &msgmeta); err != nil {
return nil, nil, xerrors.Errorf("failed to load msgmeta (%s): %w", mmc, err)
}
blscids, err := cs.readAMTCids(msgmeta.BlsMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls message cids for block: %w", err)
}
secpkcids, err := cs.readAMTCids(msgmeta.SecpkMessages)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk message cids for block: %w", err)
}
cs.mmCache.Add(mmc, &mmCids{
bls: blscids,
secpk: secpkcids,
})
return blscids, secpkcids, nil
}
// GetPath returns the sequence of atomic head change operations that
// need to be applied in order to switch the head of the chain from the `from`
// tipset to the `to` tipset.
@ -1304,71 +1068,6 @@ func (cs *ChainStore) GetPath(ctx context.Context, from types.TipSetKey, to type
return path, nil
}
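A minimal sketch of walking the result of GetPath described above, assuming, as elsewhere in the codebase, that it returns a slice of api.HeadChange values whose Type marks a revert or an apply and whose Val is the affected tipset; the helper name is illustrative.

package example

import (
	"context"
	"fmt"

	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/types"
)

// printHeadChangePath prints the sequence of atomic head changes needed to
// move the chain head from one tipset key to another.
func printHeadChangePath(ctx context.Context, cs *store.ChainStore, from, to types.TipSetKey) error {
	path, err := cs.GetPath(ctx, from, to)
	if err != nil {
		return err
	}
	for _, hc := range path {
		// Each entry is a single revert of a tipset on the old branch or an
		// apply of a tipset on the new one.
		fmt.Println(hc.Type, hc.Val.Height())
	}
	return nil
}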
func (cs *ChainStore) MessagesForBlock(b *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
blscids, secpkcids, err := cs.ReadMsgMetaCids(b.Messages)
if err != nil {
return nil, nil, err
}
blsmsgs, err := cs.LoadMessagesFromCids(blscids)
if err != nil {
return nil, nil, xerrors.Errorf("loading bls messages for block: %w", err)
}
secpkmsgs, err := cs.LoadSignedMessagesFromCids(secpkcids)
if err != nil {
return nil, nil, xerrors.Errorf("loading secpk messages for block: %w", err)
}
return blsmsgs, secpkmsgs, nil
}
func (cs *ChainStore) GetParentReceipt(b *types.BlockHeader, i int) (*types.MessageReceipt, error) {
ctx := context.TODO()
// block headers use adt0, for now.
a, err := blockadt.AsArray(cs.ActorStore(ctx), b.ParentMessageReceipts)
if err != nil {
return nil, xerrors.Errorf("amt load: %w", err)
}
var r types.MessageReceipt
if found, err := a.Get(uint64(i), &r); err != nil {
return nil, err
} else if !found {
return nil, xerrors.Errorf("failed to find receipt %d", i)
}
return &r, nil
}
func (cs *ChainStore) LoadMessagesFromCids(cids []cid.Cid) ([]*types.Message, error) {
msgs := make([]*types.Message, 0, len(cids))
for i, c := range cids {
m, err := cs.GetMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
func (cs *ChainStore) LoadSignedMessagesFromCids(cids []cid.Cid) ([]*types.SignedMessage, error) {
msgs := make([]*types.SignedMessage, 0, len(cids))
for i, c := range cids {
m, err := cs.GetSignedMessage(c)
if err != nil {
return nil, xerrors.Errorf("failed to get message: (%s):%d: %w", c, i, err)
}
msgs = append(msgs, m)
}
return msgs, nil
}
// ChainBlockstore returns the chain blockstore. Currently the chain and state
// stores are both backed by the same physical store, albeit with different
// caching policies, but in the future they will segregate.
@ -1391,10 +1090,6 @@ func (cs *ChainStore) ActorStore(ctx context.Context) adt.Store {
return ActorStore(ctx, cs.stateBlockstore)
}
func (cs *ChainStore) VMSys() vm.SyscallBuilder {
return cs.vmcalls
}
func (cs *ChainStore) TryFillTipSet(ts *types.TipSet) (*FullTipSet, error) {
var out []*types.FullBlock
@ -1417,108 +1112,6 @@ func (cs *ChainStore) TryFillTipSet(ts *types.TipSet) (*FullTipSet, error) {
return NewFullTipSet(out), nil
}
func DrawRandomness(rbase []byte, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
h := blake2b.New256()
if err := binary.Write(h, binary.BigEndian, int64(pers)); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
VRFDigest := blake2b.Sum256(rbase)
_, err := h.Write(VRFDigest[:])
if err != nil {
return nil, xerrors.Errorf("hashing VRFDigest: %w", err)
}
if err := binary.Write(h, binary.BigEndian, round); err != nil {
return nil, xerrors.Errorf("deriving randomness: %w", err)
}
_, err = h.Write(entropy)
if err != nil {
return nil, xerrors.Errorf("hashing entropy: %w", err)
}
return h.Sum(nil), nil
}
func (cs *ChainStore) GetBeaconRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetBeaconRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetBeaconRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetBeaconRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetBeaconRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
be, err := cs.GetLatestBeaconEntry(randTs)
if err != nil {
return nil, err
}
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(be.Data, pers, round, entropy)
}
func (cs *ChainStore) GetChainRandomnessLookingBack(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, true)
}
func (cs *ChainStore) GetChainRandomnessLookingForward(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cs.GetChainRandomness(ctx, blks, pers, round, entropy, false)
}
func (cs *ChainStore) GetChainRandomness(ctx context.Context, blks []cid.Cid, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte, lookback bool) ([]byte, error) {
_, span := trace.StartSpan(ctx, "store.GetChainRandomness")
defer span.End()
span.AddAttributes(trace.Int64Attribute("round", int64(round)))
ts, err := cs.LoadTipSet(types.NewTipSetKey(blks...))
if err != nil {
return nil, err
}
if round > ts.Height() {
return nil, xerrors.Errorf("cannot draw randomness from the future")
}
searchHeight := round
if searchHeight < 0 {
searchHeight = 0
}
randTs, err := cs.GetTipsetByHeight(ctx, searchHeight, ts, lookback)
if err != nil {
return nil, err
}
mtb := randTs.MinTicketBlock()
// if at (or just past -- for null epochs) appropriate epoch
// or at genesis (works for negative epochs)
return DrawRandomness(mtb.Ticket.VRFProof, pers, round, entropy)
}
// GetTipsetByHeight returns the tipset on the chain behind 'ts' at the given
// height. In the case that the given height is a null round, the 'prev' flag
// selects the tipset before the null round if true, and the tipset following
@ -1555,252 +1148,3 @@ func (cs *ChainStore) GetTipsetByHeight(ctx context.Context, h abi.ChainEpoch, t
return cs.LoadTipSet(lbts.Parents())
}
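A small sketch of the `prev` flag documented above for GetTipsetByHeight, assuming an existing *store.ChainStore `cs`; the helper name and the target epoch are illustrative.

package example

import (
	"context"

	"github.com/filecoin-project/go-state-types/abi"

	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/types"
)

// tipsetsAroundEpoch shows the effect of the prev flag: when the target epoch
// is a null round, prev=true resolves to the tipset before it and prev=false
// to the tipset following it; otherwise both calls return the tipset at h.
func tipsetsAroundEpoch(ctx context.Context, cs *store.ChainStore, head *types.TipSet, h abi.ChainEpoch) (*types.TipSet, *types.TipSet, error) {
	before, err := cs.GetTipsetByHeight(ctx, h, head, true)
	if err != nil {
		return nil, nil, err
	}
	after, err := cs.GetTipsetByHeight(ctx, h, head, false)
	if err != nil {
		return nil, nil, err
	}
	return before, after, nil
}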
func recurseLinks(bs bstore.Blockstore, walked *cid.Set, root cid.Cid, in []cid.Cid) ([]cid.Cid, error) {
if root.Prefix().Codec != cid.DagCBOR {
return in, nil
}
data, err := bs.Get(root)
if err != nil {
return nil, xerrors.Errorf("recurse links get (%s) failed: %w", root, err)
}
var rerr error
err = cbg.ScanForLinks(bytes.NewReader(data.RawData()), func(c cid.Cid) {
if rerr != nil {
// No error return on ScanForLinks :(
return
}
// traversed this already...
if !walked.Visit(c) {
return
}
in = append(in, c)
var err error
in, err = recurseLinks(bs, walked, c, in)
if err != nil {
rerr = err
}
})
if err != nil {
return nil, xerrors.Errorf("scanning for links failed: %w", err)
}
return in, rerr
}
func (cs *ChainStore) Export(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs bool, w io.Writer) error {
h := &car.CarHeader{
Roots: ts.Cids(),
Version: 1,
}
if err := car.WriteHeader(h, w); err != nil {
return xerrors.Errorf("failed to write car header: %s", err)
}
unionBs := bstore.Union(cs.stateBlockstore, cs.chainBlockstore)
return cs.WalkSnapshot(ctx, ts, inclRecentRoots, skipOldMsgs, true, func(c cid.Cid) error {
blk, err := unionBs.Get(c)
if err != nil {
return xerrors.Errorf("writing object to car, bs.Get: %w", err)
}
if err := carutil.LdWrite(w, c.Bytes(), blk.RawData()); err != nil {
return xerrors.Errorf("failed to write block to car output: %w", err)
}
return nil
})
}
func (cs *ChainStore) WalkSnapshot(ctx context.Context, ts *types.TipSet, inclRecentRoots abi.ChainEpoch, skipOldMsgs, skipMsgReceipts bool, cb func(cid.Cid) error) error {
if ts == nil {
ts = cs.GetHeaviestTipSet()
}
seen := cid.NewSet()
walked := cid.NewSet()
blocksToWalk := ts.Cids()
currentMinHeight := ts.Height()
walkChain := func(blk cid.Cid) error {
if !seen.Visit(blk) {
return nil
}
if err := cb(blk); err != nil {
return err
}
data, err := cs.chainBlockstore.Get(blk)
if err != nil {
return xerrors.Errorf("getting block: %w", err)
}
var b types.BlockHeader
if err := b.UnmarshalCBOR(bytes.NewBuffer(data.RawData())); err != nil {
return xerrors.Errorf("unmarshaling block header (cid=%s): %w", blk, err)
}
if currentMinHeight > b.Height {
currentMinHeight = b.Height
if currentMinHeight%builtin.EpochsInDay == 0 {
log.Infow("export", "height", currentMinHeight)
}
}
var cids []cid.Cid
if !skipOldMsgs || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.Messages) {
mcids, err := recurseLinks(cs.chainBlockstore, walked, b.Messages, []cid.Cid{b.Messages})
if err != nil {
return xerrors.Errorf("recursing messages failed: %w", err)
}
cids = mcids
}
}
if b.Height > 0 {
for _, p := range b.Parents {
blocksToWalk = append(blocksToWalk, p)
}
} else {
// include the genesis block
cids = append(cids, b.Parents...)
}
out := cids
if b.Height == 0 || b.Height > ts.Height()-inclRecentRoots {
if walked.Visit(b.ParentStateRoot) {
cids, err := recurseLinks(cs.stateBlockstore, walked, b.ParentStateRoot, []cid.Cid{b.ParentStateRoot})
if err != nil {
return xerrors.Errorf("recursing genesis state failed: %w", err)
}
out = append(out, cids...)
}
if !skipMsgReceipts && walked.Visit(b.ParentMessageReceipts) {
out = append(out, b.ParentMessageReceipts)
}
}
for _, c := range out {
if seen.Visit(c) {
if c.Prefix().Codec != cid.DagCBOR {
continue
}
if err := cb(c); err != nil {
return err
}
}
}
return nil
}
log.Infow("export started")
exportStart := build.Clock.Now()
for len(blocksToWalk) > 0 {
next := blocksToWalk[0]
blocksToWalk = blocksToWalk[1:]
if err := walkChain(next); err != nil {
return xerrors.Errorf("walk chain failed: %w", err)
}
}
log.Infow("export finished", "duration", build.Clock.Now().Sub(exportStart).Seconds())
return nil
}
func (cs *ChainStore) Import(r io.Reader) (*types.TipSet, error) {
// TODO: writing only to the state blockstore is incorrect.
// At this time, both the state and chain blockstores are backed by the
// universal store. When we physically segregate the stores, we will need
// to route state objects to the state blockstore, and chain objects to
// the chain blockstore.
header, err := car.LoadCar(cs.StateBlockstore(), r)
if err != nil {
return nil, xerrors.Errorf("loadcar failed: %w", err)
}
root, err := cs.LoadTipSet(types.NewTipSetKey(header.Roots...))
if err != nil {
return nil, xerrors.Errorf("failed to load root tipset from chainfile: %w", err)
}
return root, nil
}
func (cs *ChainStore) GetLatestBeaconEntry(ts *types.TipSet) (*types.BeaconEntry, error) {
cur := ts
for i := 0; i < 20; i++ {
cbe := cur.Blocks()[0].BeaconEntries
if len(cbe) > 0 {
return &cbe[len(cbe)-1], nil
}
if cur.Height() == 0 {
return nil, xerrors.Errorf("made it back to genesis block without finding beacon entry")
}
next, err := cs.LoadTipSet(cur.Parents())
if err != nil {
return nil, xerrors.Errorf("failed to load parents when searching back for latest beacon entry: %w", err)
}
cur = next
}
if os.Getenv("LOTUS_IGNORE_DRAND") == "_yes_" {
return &types.BeaconEntry{
Data: []byte{9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9},
}, nil
}
return nil, xerrors.Errorf("found NO beacon entries in the 20 latest tipsets")
}
type chainRand struct {
cs *ChainStore
blks []cid.Cid
}
func NewChainRand(cs *ChainStore, blks []cid.Cid) vm.Rand {
return &chainRand{
cs: cs,
blks: blks,
}
}
func (cr *chainRand) GetChainRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetChainRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetChainRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingBack(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingBack(ctx, cr.blks, pers, round, entropy)
}
func (cr *chainRand) GetBeaconRandomnessLookingForward(ctx context.Context, pers crypto.DomainSeparationTag, round abi.ChainEpoch, entropy []byte) ([]byte, error) {
return cr.cs.GetBeaconRandomnessLookingForward(ctx, cr.blks, pers, round, entropy)
}
func (cs *ChainStore) GetTipSetFromKey(tsk types.TipSetKey) (*types.TipSet, error) {
if tsk.IsEmpty() {
return cs.GetHeaviestTipSet(), nil
}
return cs.LoadTipSet(tsk)
}

View File

@ -70,7 +70,7 @@ func BenchmarkGetRandomness(b *testing.B) {
b.Fatal(err)
}
cs := store.NewChainStore(bs, bs, mds, nil, nil)
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
b.ResetTimer()
@ -105,7 +105,7 @@ func TestChainExportImport(t *testing.T) {
}
nbs := blockstore.NewMemory()
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil, nil)
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil)
defer cs.Close() //nolint:errcheck
root, err := cs.Import(buf)
@ -140,7 +140,7 @@ func TestChainExportImportFull(t *testing.T) {
}
nbs := blockstore.NewMemory()
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil, nil)
cs := store.NewChainStore(nbs, nbs, datastore.NewMapDatastore(), nil)
defer cs.Close() //nolint:errcheck
root, err := cs.Import(buf)
@ -157,7 +157,7 @@ func TestChainExportImportFull(t *testing.T) {
t.Fatal("imported chain differed from exported chain")
}
sm := stmgr.NewStateManager(cs)
sm := stmgr.NewStateManager(cs, nil)
for i := 0; i < 100; i++ {
ts, err := cs.GetTipsetByHeight(context.TODO(), abi.ChainEpoch(i), nil, false)
if err != nil {

View File

@ -228,7 +228,7 @@ func DumpActorState(act *types.Actor, b []byte) (interface{}, error) {
return nil, nil
}
i := NewActorRegistry() // TODO: register builtins in init block
i := NewActorRegistry()
actInfo, ok := i.actors[act.Code]
if !ok {

View File

@ -641,15 +641,6 @@ func (vm *VM) ShouldBurn(ctx context.Context, st *state.StateTree, msg *types.Me
return true, nil
}
func (vm *VM) ActorBalance(addr address.Address) (types.BigInt, aerrors.ActorError) {
act, err := vm.cstate.GetActor(addr)
if err != nil {
return types.EmptyInt, aerrors.Absorb(err, 1, "failed to find actor")
}
return act.Balance, nil
}
type vmFlushKey struct{}
func (vm *VM) Flush(ctx context.Context) (cid.Cid, error) {

View File

@ -146,6 +146,10 @@ func GetRawAPI(ctx *cli.Context, t repo.RepoType, version string) (string, http.
return "", nil, xerrors.Errorf("could not get DialArgs: %w", err)
}
if IsVeryVerbose {
_, _ = fmt.Fprintf(ctx.App.Writer, "using raw API %s endpoint: %s\n", version, addr)
}
return addr, ainfo.AuthHeader(), nil
}
@ -185,6 +189,10 @@ func GetFullNodeAPI(ctx *cli.Context) (v0api.FullNode, jsonrpc.ClientCloser, err
return nil, nil, err
}
if IsVeryVerbose {
_, _ = fmt.Fprintln(ctx.App.Writer, "using full node API v0 endpoint:", addr)
}
return client.NewFullNodeRPCV0(ctx.Context, addr, headers)
}
@ -198,6 +206,10 @@ func GetFullNodeAPIV1(ctx *cli.Context) (v1api.FullNode, jsonrpc.ClientCloser, e
return nil, nil, err
}
if IsVeryVerbose {
_, _ = fmt.Fprintln(ctx.App.Writer, "using full node API v1 endpoint:", addr)
}
return client.NewFullNodeRPCV1(ctx.Context, addr, headers)
}
@ -242,6 +254,10 @@ func GetStorageMinerAPI(ctx *cli.Context, opts ...GetStorageMinerOption) (api.St
addr = u.String()
}
if IsVeryVerbose {
_, _ = fmt.Fprintln(ctx.App.Writer, "using miner API v0 endpoint:", addr)
}
return client.NewStorageMinerRPCV0(ctx.Context, addr, headers)
}
@ -251,6 +267,10 @@ func GetWorkerAPI(ctx *cli.Context) (api.Worker, jsonrpc.ClientCloser, error) {
return nil, nil, err
}
if IsVeryVerbose {
_, _ = fmt.Fprintln(ctx.App.Writer, "using worker API v0 endpoint:", addr)
}
return client.NewWorkerRPCV0(ctx.Context, addr, headers)
}
@ -260,6 +280,10 @@ func GetGatewayAPI(ctx *cli.Context) (api.Gateway, jsonrpc.ClientCloser, error)
return nil, nil, err
}
if IsVeryVerbose {
_, _ = fmt.Fprintln(ctx.App.Writer, "using gateway API v1 endpoint:", addr)
}
return client.NewGatewayRPCV1(ctx.Context, addr, headers)
}
@ -269,6 +293,10 @@ func GetGatewayAPIV0(ctx *cli.Context) (v0api.Gateway, jsonrpc.ClientCloser, err
return nil, nil, err
}
if IsVeryVerbose {
_, _ = fmt.Fprintln(ctx.App.Writer, "using gateway API v0 endpoint:", addr)
}
return client.NewGatewayRPCV0(ctx.Context, addr, headers)
}

16
cli/util/verbose.go Normal file
View File

@ -0,0 +1,16 @@
package cliutil
import "github.com/urfave/cli/v2"
// IsVeryVerbose is a global var signalling if the CLI is running in very
// verbose mode or not (default: false).
var IsVeryVerbose bool
// FlagVeryVerbose enables very verbose mode, which is useful when debugging
// the CLI itself. It should be included as a flag on the top-level command
// (e.g. lotus -vv, lotus-miner -vv).
var FlagVeryVerbose = &cli.BoolFlag{
Name: "vv",
Usage: "enables very verbose mode, useful for debugging the CLI",
Destination: &IsVeryVerbose,
}
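Per the comment above, FlagVeryVerbose is meant to be registered on the top-level command. A minimal sketch of that wiring with urfave/cli, assuming the cli/util import path used elsewhere in the repository; the app name and action are placeholders.

package main

import (
	"fmt"
	"os"

	"github.com/urfave/cli/v2"

	cliutil "github.com/filecoin-project/lotus/cli/util"
)

func main() {
	app := &cli.App{
		Name: "example-cli",
		// Register -vv on the top-level command so every subcommand can
		// consult cliutil.IsVeryVerbose.
		Flags: []cli.Flag{cliutil.FlagVeryVerbose},
		Action: func(cctx *cli.Context) error {
			if cliutil.IsVeryVerbose {
				fmt.Fprintln(cctx.App.Writer, "very verbose mode enabled")
			}
			return nil
		},
	}
	if err := app.Run(os.Args); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}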

View File

@ -253,10 +253,10 @@ var importBenchCmd = &cli.Command{
}
metadataDs := datastore.NewMapDatastore()
cs := store.NewChainStore(bs, bs, metadataDs, vm.Syscalls(verifier), nil)
cs := store.NewChainStore(bs, bs, metadataDs, nil)
defer cs.Close() //nolint:errcheck
stm := stmgr.NewStateManager(cs)
stm := stmgr.NewStateManager(cs, vm.Syscalls(verifier))
var carFile *os.File
// open the CAR file if one is provided.

View File

@ -1,131 +0,0 @@
package main
import (
"database/sql"
"fmt"
"hash/crc32"
"strconv"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
)
var dotCmd = &cli.Command{
Name: "dot",
Usage: "generate dot graphs",
ArgsUsage: "<minHeight> <toseeHeight>",
Action: func(cctx *cli.Context) error {
ll := cctx.String("log-level")
if err := logging.SetLogLevel("*", ll); err != nil {
return err
}
db, err := sql.Open("postgres", cctx.String("db"))
if err != nil {
return err
}
defer func() {
if err := db.Close(); err != nil {
log.Errorw("Failed to close database", "error", err)
}
}()
if err := db.Ping(); err != nil {
return xerrors.Errorf("Database failed to respond to ping (is it online?): %w", err)
}
minH, err := strconv.ParseInt(cctx.Args().Get(0), 10, 32)
if err != nil {
return err
}
tosee, err := strconv.ParseInt(cctx.Args().Get(1), 10, 32)
if err != nil {
return err
}
maxH := minH + tosee
res, err := db.Query(`select block, parent, b.miner, b.height, p.height from block_parents
inner join blocks b on block_parents.block = b.cid
inner join blocks p on block_parents.parent = p.cid
where b.height > $1 and b.height < $2`, minH, maxH)
if err != nil {
return err
}
fmt.Println("digraph D {")
hl, err := syncedBlocks(db)
if err != nil {
log.Fatal(err)
}
for res.Next() {
var block, parent, miner string
var height, ph uint64
if err := res.Scan(&block, &parent, &miner, &height, &ph); err != nil {
return err
}
bc, err := cid.Parse(block)
if err != nil {
return err
}
_, has := hl[bc]
col := crc32.Checksum([]byte(miner), crc32.MakeTable(crc32.Castagnoli))&0xc0c0c0c0 + 0x30303030
hasstr := ""
if !has {
//col = 0xffffffff
hasstr = " UNSYNCED"
}
nulls := height - ph - 1
for i := uint64(0); i < nulls; i++ {
name := block + "NP" + fmt.Sprint(i)
fmt.Printf("%s [label = \"NULL:%d\", fillcolor = \"#ffddff\", style=filled, forcelabels=true]\n%s -> %s\n",
name, height-nulls+i, name, parent)
parent = name
}
fmt.Printf("%s [label = \"%s:%d%s\", fillcolor = \"#%06x\", style=filled, forcelabels=true]\n%s -> %s\n", block, miner, height, hasstr, col, block, parent)
}
if res.Err() != nil {
return res.Err()
}
fmt.Println("}")
return nil
},
}
func syncedBlocks(db *sql.DB) (map[cid.Cid]struct{}, error) {
// timestamp is used to return a configurable amount of rows based on when they were last added.
rws, err := db.Query(`select cid FROM blocks_synced`)
if err != nil {
return nil, xerrors.Errorf("Failed to query blocks_synced: %w", err)
}
out := map[cid.Cid]struct{}{}
for rws.Next() {
var c string
if err := rws.Scan(&c); err != nil {
return nil, xerrors.Errorf("Failed to scan blocks_synced: %w", err)
}
ci, err := cid.Parse(c)
if err != nil {
return nil, xerrors.Errorf("Failed to parse blocks_synced: %w", err)
}
out[ci] = struct{}{}
}
return out, nil
}

View File

@ -1,54 +0,0 @@
package main
import (
"os"
"github.com/filecoin-project/lotus/build"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
)
var log = logging.Logger("chainwatch")
func main() {
if err := logging.SetLogLevel("*", "info"); err != nil {
log.Fatal(err)
}
log.Info("Starting chainwatch", " v", build.UserVersion())
app := &cli.App{
Name: "lotus-chainwatch",
Usage: "Devnet token distribution utility",
Version: build.UserVersion(),
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
EnvVars: []string{"LOTUS_PATH"},
Value: "~/.lotus", // TODO: Consider XDG_DATA_HOME
},
&cli.StringFlag{
Name: "api",
EnvVars: []string{"FULLNODE_API_INFO"},
Value: "",
},
&cli.StringFlag{
Name: "db",
EnvVars: []string{"LOTUS_DB"},
Value: "",
},
&cli.StringFlag{
Name: "log-level",
EnvVars: []string{"GOLOG_LOG_LEVEL"},
Value: "info",
},
},
Commands: []*cli.Command{
dotCmd,
runCmd,
},
}
if err := app.Run(os.Args); err != nil {
log.Fatal(err)
}
}

View File

@ -1,299 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/filecoin-project/go-state-types/abi"
"github.com/ipfs/go-cid"
builtin2 "github.com/filecoin-project/specs-actors/v2/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin"
_init "github.com/filecoin-project/lotus/chain/actors/builtin/init"
"github.com/filecoin-project/lotus/chain/events/state"
"github.com/filecoin-project/lotus/chain/types"
cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)
func (p *Processor) setupCommonActors() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists id_address_map
(
id text not null,
address text not null,
constraint id_address_map_pk
primary key (id, address)
);
create unique index if not exists id_address_map_id_uindex
on id_address_map (id);
create unique index if not exists id_address_map_address_uindex
on id_address_map (address);
create table if not exists actors
(
id text not null
constraint id_address_map_actors_id_fk
references id_address_map (id),
code text not null,
head text not null,
nonce int not null,
balance text not null,
stateroot text
);
create index if not exists actors_id_index
on actors (id);
create index if not exists id_address_map_address_index
on id_address_map (address);
create index if not exists id_address_map_id_index
on id_address_map (id);
create or replace function actor_tips(epoch bigint)
returns table (id text,
code text,
head text,
nonce int,
balance text,
stateroot text,
height bigint,
parentstateroot text) as
$body$
select distinct on (id) * from actors
inner join state_heights sh on sh.parentstateroot = stateroot
where height < $1
order by id, height desc;
$body$ language sql;
create table if not exists actor_states
(
head text not null,
code text not null,
state json not null
);
create unique index if not exists actor_states_head_code_uindex
on actor_states (head, code);
create index if not exists actor_states_head_index
on actor_states (head);
create index if not exists actor_states_code_head_index
on actor_states (head, code);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandleCommonActorsChanges(ctx context.Context, actors map[cid.Cid]ActorTips) error {
if err := p.storeActorAddresses(ctx, actors); err != nil {
return err
}
grp, _ := errgroup.WithContext(ctx)
grp.Go(func() error {
if err := p.storeActorHeads(actors); err != nil {
return err
}
return nil
})
grp.Go(func() error {
if err := p.storeActorStates(actors); err != nil {
return err
}
return nil
})
return grp.Wait()
}
type UpdateAddresses struct {
Old state.AddressPair
New state.AddressPair
}
func (p Processor) storeActorAddresses(ctx context.Context, actors map[cid.Cid]ActorTips) error {
start := time.Now()
defer func() {
log.Debugw("Stored Actor Addresses", "duration", time.Since(start).String())
}()
addressToID := map[address.Address]address.Address{}
// HACK until genesis storage is figured out:
addressToID[builtin2.SystemActorAddr] = builtin2.SystemActorAddr
addressToID[builtin2.InitActorAddr] = builtin2.InitActorAddr
addressToID[builtin2.RewardActorAddr] = builtin2.RewardActorAddr
addressToID[builtin2.CronActorAddr] = builtin2.CronActorAddr
addressToID[builtin2.StoragePowerActorAddr] = builtin2.StoragePowerActorAddr
addressToID[builtin2.StorageMarketActorAddr] = builtin2.StorageMarketActorAddr
addressToID[builtin2.VerifiedRegistryActorAddr] = builtin2.VerifiedRegistryActorAddr
addressToID[builtin2.BurntFundsActorAddr] = builtin2.BurntFundsActorAddr
initActor, err := p.node.StateGetActor(ctx, builtin2.InitActorAddr, types.EmptyTSK)
if err != nil {
return err
}
initActorState, err := _init.Load(cw_util.NewAPIIpldStore(ctx, p.node), initActor)
if err != nil {
return err
}
// gross..
if err := initActorState.ForEachActor(func(id abi.ActorID, addr address.Address) error {
idAddr, err := address.NewIDAddress(uint64(id))
if err != nil {
return err
}
addressToID[addr] = idAddr
return nil
}); err != nil {
return err
}
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table iam (like id_address_map excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy iam (id, address) from STDIN `)
if err != nil {
return err
}
for a, i := range addressToID {
if i == address.Undef {
continue
}
if _, err := stmt.Exec(
i.String(),
a.String(),
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
// HACK until chain watch can handle reorgs we need to update this table when ID -> PubKey mappings change
if _, err := tx.Exec(`insert into id_address_map select * from iam on conflict (id) do update set address = EXCLUDED.address`); err != nil {
log.Warnw("Failed to update id_address_map table, this is a known issue")
return nil
}
return tx.Commit()
}
func (p *Processor) storeActorHeads(actors map[cid.Cid]ActorTips) error {
start := time.Now()
defer func() {
log.Debugw("Stored Actor Heads", "duration", time.Since(start).String())
}()
// Basic
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table a_tmp (like actors excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy a_tmp (id, code, head, nonce, balance, stateroot) from stdin `)
if err != nil {
return err
}
for code, actTips := range actors {
actorName := code.String()
if builtin.IsBuiltinActor(code) {
actorName = builtin.ActorNameByCode(code)
}
for _, actorInfo := range actTips {
for _, a := range actorInfo {
if _, err := stmt.Exec(a.addr.String(), actorName, a.act.Head.String(), a.act.Nonce, a.act.Balance.String(), a.stateroot.String()); err != nil {
return err
}
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into actors select * from a_tmp on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) storeActorStates(actors map[cid.Cid]ActorTips) error {
start := time.Now()
defer func() {
log.Debugw("Stored Actor States", "duration", time.Since(start).String())
}()
// States
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table as_tmp (like actor_states excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy as_tmp (head, code, state) from stdin `)
if err != nil {
return err
}
for code, actTips := range actors {
actorName := code.String()
if builtin.IsBuiltinActor(code) {
actorName = builtin.ActorNameByCode(code)
}
for _, actorInfo := range actTips {
for _, a := range actorInfo {
if _, err := stmt.Exec(a.act.Head.String(), actorName, a.state); err != nil {
return err
}
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into actor_states select * from as_tmp on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}

View File

@ -1,316 +0,0 @@
package processor
import (
"context"
"strconv"
"time"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"github.com/filecoin-project/lotus/chain/actors/builtin/market"
"github.com/filecoin-project/lotus/chain/events/state"
)
func (p *Processor) setupMarket() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists market_deal_proposals
(
deal_id bigint not null,
state_root text not null,
piece_cid text not null,
padded_piece_size bigint not null,
unpadded_piece_size bigint not null,
is_verified bool not null,
client_id text not null,
provider_id text not null,
start_epoch bigint not null,
end_epoch bigint not null,
slashed_epoch bigint,
storage_price_per_epoch text not null,
provider_collateral text not null,
client_collateral text not null,
constraint market_deal_proposal_pk
primary key (deal_id)
);
create table if not exists market_deal_states
(
deal_id bigint not null,
sector_start_epoch bigint not null,
last_update_epoch bigint not null,
slash_epoch bigint not null,
state_root text not null,
unique (deal_id, sector_start_epoch, last_update_epoch, slash_epoch),
constraint market_deal_states_pk
primary key (deal_id, state_root)
);
create table if not exists minerid_dealid_sectorid
(
deal_id bigint not null
constraint sectors_sector_ids_id_fk
references market_deal_proposals(deal_id),
sector_id bigint not null,
miner_id text not null,
foreign key (sector_id, miner_id) references sector_precommit_info(sector_id, miner_id),
constraint miner_sector_deal_ids_pk
primary key (miner_id, sector_id, deal_id)
);
`); err != nil {
return err
}
return tx.Commit()
}
type marketActorInfo struct {
common actorInfo
}
func (p *Processor) HandleMarketChanges(ctx context.Context, marketTips ActorTips) error {
marketChanges, err := p.processMarket(ctx, marketTips)
if err != nil {
log.Fatalw("Failed to process market actors", "error", err)
}
if err := p.persistMarket(ctx, marketChanges); err != nil {
log.Fatalw("Failed to persist market actors", "error", err)
}
if err := p.updateMarket(ctx, marketChanges); err != nil {
log.Fatalw("Failed to update market actors", "error", err)
}
return nil
}
func (p *Processor) processMarket(ctx context.Context, marketTips ActorTips) ([]marketActorInfo, error) {
start := time.Now()
defer func() {
log.Debugw("Processed Market", "duration", time.Since(start).String())
}()
var out []marketActorInfo
for _, markets := range marketTips {
for _, mt := range markets {
// NB: here is where we can extract the market state when we need it.
out = append(out, marketActorInfo{common: mt})
}
}
return out, nil
}
func (p *Processor) persistMarket(ctx context.Context, info []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Persisted Market", "duration", time.Since(start).String())
}()
grp, ctx := errgroup.WithContext(ctx)
grp.Go(func() error {
if err := p.storeMarketActorDealProposals(ctx, info); err != nil {
return xerrors.Errorf("Failed to store marker deal proposals: %w", err)
}
return nil
})
grp.Go(func() error {
if err := p.storeMarketActorDealStates(info); err != nil {
return xerrors.Errorf("Failed to store marker deal states: %w", err)
}
return nil
})
return grp.Wait()
}
func (p *Processor) updateMarket(ctx context.Context, info []marketActorInfo) error {
if err := p.updateMarketActorDealProposals(ctx, info); err != nil {
return xerrors.Errorf("Failed to update market info: %w", err)
}
return nil
}
func (p *Processor) storeMarketActorDealStates(marketTips []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Stored Market Deal States", "duration", time.Since(start).String())
}()
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`create temp table mds (like market_deal_states excluding constraints) on commit drop;`); err != nil {
return err
}
stmt, err := tx.Prepare(`copy mds (deal_id, sector_start_epoch, last_update_epoch, slash_epoch, state_root) from STDIN`)
if err != nil {
return err
}
for _, mt := range marketTips {
dealStates, err := p.node.StateMarketDeals(context.TODO(), mt.common.tsKey)
if err != nil {
return err
}
for dealID, ds := range dealStates {
id, err := strconv.ParseUint(dealID, 10, 64)
if err != nil {
return err
}
if _, err := stmt.Exec(
id,
ds.State.SectorStartEpoch,
ds.State.LastUpdatedEpoch,
ds.State.SlashEpoch,
mt.common.stateroot.String(),
); err != nil {
return err
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into market_deal_states select * from mds on conflict do nothing`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) storeMarketActorDealProposals(ctx context.Context, marketTips []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Stored Market Deal Proposals", "duration", time.Since(start).String())
}()
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`create temp table mdp (like market_deal_proposals excluding constraints) on commit drop;`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy mdp (deal_id, state_root, piece_cid, padded_piece_size, unpadded_piece_size, is_verified, client_id, provider_id, start_epoch, end_epoch, slashed_epoch, storage_price_per_epoch, provider_collateral, client_collateral) from STDIN`)
if err != nil {
return err
}
// insert in sorted order (lowest height -> highest height) since dealid is pk of table.
for _, mt := range marketTips {
dealStates, err := p.node.StateMarketDeals(ctx, mt.common.tsKey)
if err != nil {
return err
}
for dealID, ds := range dealStates {
id, err := strconv.ParseUint(dealID, 10, 64)
if err != nil {
return err
}
if _, err := stmt.Exec(
id,
mt.common.stateroot.String(),
ds.Proposal.PieceCID.String(),
ds.Proposal.PieceSize,
ds.Proposal.PieceSize.Unpadded(),
ds.Proposal.VerifiedDeal,
ds.Proposal.Client.String(),
ds.Proposal.Provider.String(),
ds.Proposal.StartEpoch,
ds.Proposal.EndEpoch,
nil, // slashed_epoch
ds.Proposal.StoragePricePerEpoch.String(),
ds.Proposal.ProviderCollateral.String(),
ds.Proposal.ClientCollateral.String(),
); err != nil {
return err
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into market_deal_proposals select * from mdp on conflict do nothing`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) updateMarketActorDealProposals(ctx context.Context, marketTip []marketActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Updated Market Deal Proposals", "duration", time.Since(start).String())
}()
pred := state.NewStatePredicates(p.node)
tx, err := p.db.Begin()
if err != nil {
return err
}
stmt, err := tx.Prepare(`update market_deal_proposals set slashed_epoch=$1 where deal_id=$2`)
if err != nil {
return err
}
for _, mt := range marketTip {
stateDiff := pred.OnStorageMarketActorChanged(pred.OnDealStateChanged(pred.OnDealStateAmtChanged()))
changed, val, err := stateDiff(ctx, mt.common.parentTsKey, mt.common.tsKey)
if err != nil {
log.Warnw("error getting market deal state diff", "error", err)
}
if !changed {
continue
}
changes, ok := val.(*market.DealStateChanges)
if !ok {
return xerrors.Errorf("Unknown type returned by Deal State AMT predicate: %T", val)
}
for _, modified := range changes.Modified {
if modified.From.SlashEpoch != modified.To.SlashEpoch {
if _, err := stmt.Exec(modified.To.SlashEpoch, modified.ID); err != nil {
return err
}
}
}
}
if err := stmt.Close(); err != nil {
return err
}
return tx.Commit()
}

View File

@ -1,318 +0,0 @@
package processor
import (
"context"
"sync"
"golang.org/x/sync/errgroup"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/lib/parmap"
)
func (p *Processor) setupMessages() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists messages
(
cid text not null
constraint messages_pk
primary key,
"from" text not null,
"to" text not null,
size_bytes bigint not null,
nonce bigint not null,
value text not null,
gas_fee_cap text not null,
gas_premium text not null,
gas_limit bigint not null,
method bigint,
params bytea
);
create unique index if not exists messages_cid_uindex
on messages (cid);
create index if not exists messages_from_index
on messages ("from");
create index if not exists messages_to_index
on messages ("to");
create table if not exists block_messages
(
block text not null
constraint blocks_block_cids_cid_fk
references block_cids (cid),
message text not null,
constraint block_messages_pk
primary key (block, message)
);
create table if not exists mpool_messages
(
msg text not null
constraint mpool_messages_pk
primary key
constraint mpool_messages_messages_cid_fk
references messages,
add_ts int not null
);
create unique index if not exists mpool_messages_msg_uindex
on mpool_messages (msg);
create table if not exists receipts
(
msg text not null,
state text not null,
idx int not null,
exit int not null,
gas_used bigint not null,
return bytea,
constraint receipts_pk
primary key (msg, state)
);
create index if not exists receipts_msg_state_index
on receipts (msg, state);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandleMessageChanges(ctx context.Context, blocks map[cid.Cid]*types.BlockHeader) error {
if err := p.persistMessagesAndReceipts(ctx, blocks); err != nil {
return err
}
return nil
}
func (p *Processor) persistMessagesAndReceipts(ctx context.Context, blocks map[cid.Cid]*types.BlockHeader) error {
messages, inclusions := p.fetchMessages(ctx, blocks)
receipts := p.fetchParentReceipts(ctx, blocks)
grp, _ := errgroup.WithContext(ctx)
grp.Go(func() error {
return p.storeMessages(messages)
})
grp.Go(func() error {
return p.storeMsgInclusions(inclusions)
})
grp.Go(func() error {
return p.storeReceipts(receipts)
})
return grp.Wait()
}
func (p *Processor) storeReceipts(recs map[mrec]*types.MessageReceipt) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table recs (like receipts excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy recs (msg, state, idx, exit, gas_used, return) from stdin `)
if err != nil {
return err
}
for c, m := range recs {
if _, err := stmt.Exec(
c.msg.String(),
c.state.String(),
c.idx,
m.ExitCode,
m.GasUsed,
m.Return,
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into receipts select * from recs on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) storeMsgInclusions(incls map[cid.Cid][]cid.Cid) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table mi (like block_messages excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy mi (block, message) from STDIN `)
if err != nil {
return err
}
for b, msgs := range incls {
for _, msg := range msgs {
if _, err := stmt.Exec(
b.String(),
msg.String(),
); err != nil {
return err
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_messages select * from mi on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) storeMessages(msgs map[cid.Cid]*types.Message) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table msgs (like messages excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy msgs (cid, "from", "to", size_bytes, nonce, "value", gas_premium, gas_fee_cap, gas_limit, method, params) from stdin `)
if err != nil {
return err
}
for c, m := range msgs {
var msgBytes int
if b, err := m.Serialize(); err == nil {
msgBytes = len(b)
}
if _, err := stmt.Exec(
c.String(),
m.From.String(),
m.To.String(),
msgBytes,
m.Nonce,
m.Value.String(),
m.GasPremium.String(),
m.GasFeeCap.String(),
m.GasLimit,
m.Method,
m.Params,
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into messages select * from msgs on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}
func (p *Processor) fetchMessages(ctx context.Context, blocks map[cid.Cid]*types.BlockHeader) (map[cid.Cid]*types.Message, map[cid.Cid][]cid.Cid) {
var lk sync.Mutex
messages := map[cid.Cid]*types.Message{}
inclusions := map[cid.Cid][]cid.Cid{} // block -> msgs
parmap.Par(50, parmap.MapArr(blocks), func(header *types.BlockHeader) {
msgs, err := p.node.ChainGetBlockMessages(ctx, header.Cid())
if err != nil {
log.Error(err)
log.Debugw("ChainGetBlockMessages", "header_cid", header.Cid())
return
}
vmm := make([]*types.Message, 0, len(msgs.Cids))
for _, m := range msgs.BlsMessages {
vmm = append(vmm, m)
}
for _, m := range msgs.SecpkMessages {
vmm = append(vmm, &m.Message)
}
lk.Lock()
for _, message := range vmm {
messages[message.Cid()] = message
inclusions[header.Cid()] = append(inclusions[header.Cid()], message.Cid())
}
lk.Unlock()
})
return messages, inclusions
}
type mrec struct {
msg cid.Cid
state cid.Cid
idx int
}
func (p *Processor) fetchParentReceipts(ctx context.Context, toSync map[cid.Cid]*types.BlockHeader) map[mrec]*types.MessageReceipt {
var lk sync.Mutex
out := map[mrec]*types.MessageReceipt{}
parmap.Par(50, parmap.MapArr(toSync), func(header *types.BlockHeader) {
recs, err := p.node.ChainGetParentReceipts(ctx, header.Cid())
if err != nil {
log.Error(err)
log.Debugw("ChainGetParentReceipts", "header_cid", header.Cid())
return
}
msgs, err := p.node.ChainGetParentMessages(ctx, header.Cid())
if err != nil {
log.Error(err)
log.Debugw("ChainGetParentMessages", "header_cid", header.Cid())
return
}
lk.Lock()
for i, r := range recs {
out[mrec{
msg: msgs[i].Cid,
state: header.ParentStateRoot,
idx: i,
}] = r
}
lk.Unlock()
})
return out
}

File diff suppressed because it is too large

View File

@ -1,100 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/chain/types"
)
func (p *Processor) subMpool(ctx context.Context) {
sub, err := p.node.MpoolSub(ctx)
if err != nil {
return
}
for {
var updates []api.MpoolUpdate
select {
case update := <-sub:
updates = append(updates, update)
case <-ctx.Done():
return
}
loop:
for {
select {
case update := <-sub:
updates = append(updates, update)
case <-time.After(10 * time.Millisecond):
break loop
}
}
msgs := map[cid.Cid]*types.Message{}
for _, v := range updates {
if v.Type != api.MpoolAdd {
continue
}
msgs[v.Message.Message.Cid()] = &v.Message.Message
}
err := p.storeMessages(msgs)
if err != nil {
log.Error(err)
}
if err := p.storeMpoolInclusions(updates); err != nil {
log.Error(err)
}
}
}
func (p *Processor) storeMpoolInclusions(msgs []api.MpoolUpdate) error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create temp table mi (like mpool_messages excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
stmt, err := tx.Prepare(`copy mi (msg, add_ts) from stdin `)
if err != nil {
return err
}
for _, msg := range msgs {
if msg.Type != api.MpoolAdd {
continue
}
if _, err := stmt.Exec(
msg.Message.Message.Cid().String(),
time.Now().Unix(),
); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into mpool_messages select * from mi on conflict do nothing `); err != nil {
return xerrors.Errorf("actor put: %w", err)
}
return tx.Commit()
}

View File

@ -1,190 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/actors/builtin"
)
type powerActorInfo struct {
common actorInfo
totalRawBytes big.Int
totalRawBytesCommitted big.Int
totalQualityAdjustedBytes big.Int
totalQualityAdjustedBytesCommitted big.Int
totalPledgeCollateral big.Int
qaPowerSmoothed builtin.FilterEstimate
minerCount int64
minerCountAboveMinimumPower int64
}
func (p *Processor) setupPower() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create table if not exists chain_power
(
state_root text not null
constraint power_smoothing_estimates_pk
primary key,
total_raw_bytes_power text not null,
total_raw_bytes_committed text not null,
total_qa_bytes_power text not null,
total_qa_bytes_committed text not null,
total_pledge_collateral text not null,
qa_smoothed_position_estimate text not null,
qa_smoothed_velocity_estimate text not null,
miner_count int not null,
minimum_consensus_miner_count int not null
);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandlePowerChanges(ctx context.Context, powerTips ActorTips) error {
powerChanges, err := p.processPowerActors(ctx, powerTips)
if err != nil {
return xerrors.Errorf("Failed to process power actors: %w", err)
}
if err := p.persistPowerActors(ctx, powerChanges); err != nil {
return err
}
return nil
}
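// processPowerActors loads the power actor state at each tipset with a power
// actor change and extracts the network totals: raw and QA power, committed
// power, pledge collateral, the smoothed power estimate, and miner counts.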
func (p *Processor) processPowerActors(ctx context.Context, powerTips ActorTips) ([]powerActorInfo, error) {
start := time.Now()
defer func() {
log.Debugw("Processed Power Actors", "duration", time.Since(start).String())
}()
var out []powerActorInfo
for tipset, powerStates := range powerTips {
for _, act := range powerStates {
var pw powerActorInfo
pw.common = act
powerActorState, err := getPowerActorState(ctx, p.node, tipset)
if err != nil {
return nil, xerrors.Errorf("get power state (@ %s): %w", pw.common.stateroot.String(), err)
}
totalPower, err := powerActorState.TotalPower()
if err != nil {
return nil, xerrors.Errorf("failed to compute total power: %w", err)
}
totalCommitted, err := powerActorState.TotalCommitted()
if err != nil {
return nil, xerrors.Errorf("failed to compute total committed: %w", err)
}
totalLocked, err := powerActorState.TotalLocked()
if err != nil {
return nil, xerrors.Errorf("failed to compute total locked: %w", err)
}
powerSmoothed, err := powerActorState.TotalPowerSmoothed()
if err != nil {
return nil, xerrors.Errorf("failed to determine smoothed power: %w", err)
}
// NOTE: this doesn't set new* fields. Previously, we
// filled these using ThisEpoch* fields from the actor
// state, but these fields are effectively internal
// state and don't represent "new" power, as was
// assumed.
participatingMiners, totalMiners, err := powerActorState.MinerCounts()
if err != nil {
return nil, xerrors.Errorf("failed to count miners: %w", err)
}
pw.totalRawBytes = totalPower.RawBytePower
pw.totalQualityAdjustedBytes = totalPower.QualityAdjPower
pw.totalRawBytesCommitted = totalCommitted.RawBytePower
pw.totalQualityAdjustedBytesCommitted = totalCommitted.QualityAdjPower
pw.totalPledgeCollateral = totalLocked
pw.qaPowerSmoothed = powerSmoothed
pw.minerCountAboveMinimumPower = int64(participatingMiners)
pw.minerCount = int64(totalMiners)
out = append(out, pw)
}
}
return out, nil
}
func (p *Processor) persistPowerActors(ctx context.Context, powerStates []powerActorInfo) error {
// NB: use errgroup when there is more than a single store operation
return p.storePowerSmoothingEstimates(powerStates)
}
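// storePowerSmoothingEstimates bulk-inserts the collected chain power rows
// through a temporary table, ignoring state roots that are already recorded.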
func (p *Processor) storePowerSmoothingEstimates(powerStates []powerActorInfo) error {
tx, err := p.db.Begin()
if err != nil {
return xerrors.Errorf("begin chain_power tx: %w", err)
}
if _, err := tx.Exec(`create temp table cp (like chain_power) on commit drop`); err != nil {
return xerrors.Errorf("prep chain_power: %w", err)
}
stmt, err := tx.Prepare(`copy cp (state_root, total_raw_bytes_power, total_raw_bytes_committed, total_qa_bytes_power, total_qa_bytes_committed, total_pledge_collateral, qa_smoothed_position_estimate, qa_smoothed_velocity_estimate, miner_count, minimum_consensus_miner_count) from stdin;`)
if err != nil {
return xerrors.Errorf("prepare tmp chain_power: %w", err)
}
for _, ps := range powerStates {
if _, err := stmt.Exec(
ps.common.stateroot.String(),
ps.totalRawBytes.String(),
ps.totalRawBytesCommitted.String(),
ps.totalQualityAdjustedBytes.String(),
ps.totalQualityAdjustedBytesCommitted.String(),
ps.totalPledgeCollateral.String(),
ps.qaPowerSmoothed.PositionEstimate.String(),
ps.qaPowerSmoothed.VelocityEstimate.String(),
ps.minerCount,
ps.minerCountAboveMinimumPower,
); err != nil {
return xerrors.Errorf("failed to store smoothing estimate: %w", err)
}
}
if err := stmt.Close(); err != nil {
return xerrors.Errorf("close prepared chain_power: %w", err)
}
if _, err := tx.Exec(`insert into chain_power select * from cp on conflict do nothing`); err != nil {
return xerrors.Errorf("insert chain_power from tmp: %w", err)
}
if err := tx.Commit(); err != nil {
return xerrors.Errorf("commit chain_power tx: %w", err)
}
return nil
}

View File

@ -1,420 +0,0 @@
package processor
import (
"context"
"database/sql"
"encoding/json"
"math"
"sync"
"time"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-address"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
"github.com/filecoin-project/go-state-types/abi"
builtin2 "github.com/filecoin-project/specs-actors/v2/actors/builtin"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/filecoin-project/lotus/chain/types"
cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
"github.com/filecoin-project/lotus/lib/parmap"
)
var log = logging.Logger("processor")
type Processor struct {
db *sql.DB
node v0api.FullNode
ctxStore *cw_util.APIIpldStore
genesisTs *types.TipSet
// number of blocks processed at a time
batch int
}
type ActorTips map[types.TipSetKey][]actorInfo
type actorInfo struct {
act types.Actor
stateroot cid.Cid
height abi.ChainEpoch // so that we can walk the actor changes in chronological order.
tsKey types.TipSetKey
parentTsKey types.TipSetKey
addr address.Address
state string
}
func NewProcessor(ctx context.Context, db *sql.DB, node v0api.FullNode, batch int) *Processor {
ctxStore := cw_util.NewAPIIpldStore(ctx, node)
return &Processor{
db: db,
ctxStore: ctxStore,
node: node,
batch: batch,
}
}
func (p *Processor) setupSchemas() error {
// maintain order, subsequent calls create tables with foreign keys.
if err := p.setupMiners(); err != nil {
return err
}
if err := p.setupMarket(); err != nil {
return err
}
if err := p.setupRewards(); err != nil {
return err
}
if err := p.setupMessages(); err != nil {
return err
}
if err := p.setupCommonActors(); err != nil {
return err
}
if err := p.setupPower(); err != nil {
return err
}
return nil
}
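// Start initializes the schemas, begins following the mpool, and then loops:
// fetch a batch of unprocessed blocks, collect the actor changes between each
// block's parent and grandparent state, fan the changes out to the per-actor
// handlers concurrently, mark the blocks processed, and refresh the
// materialized views.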
func (p *Processor) Start(ctx context.Context) {
log.Debug("Starting Processor")
if err := p.setupSchemas(); err != nil {
log.Fatalw("Failed to setup processor", "error", err)
}
var err error
p.genesisTs, err = p.node.ChainGetGenesis(ctx)
if err != nil {
log.Fatalw("Failed to get genesis state from lotus", "error", err.Error())
}
go p.subMpool(ctx)
// main processor loop
go func() {
for {
select {
case <-ctx.Done():
log.Info("Stopping Processor...")
return
default:
loopStart := time.Now()
toProcess, err := p.unprocessedBlocks(ctx, p.batch)
if err != nil {
log.Fatalw("Failed to get unprocessed blocks", "error", err)
}
if len(toProcess) == 0 {
log.Info("No unprocessed blocks. Wait then try again...")
time.Sleep(time.Second * 30)
continue
}
// TODO special case genesis state handling here to avoid all the special cases that will be needed for it elsewhere
// before doing "normal" processing.
actorChanges, nullRounds, err := p.collectActorChanges(ctx, toProcess)
if err != nil {
log.Fatalw("Failed to collect actor changes", "error", err)
}
log.Infow("Collected Actor Changes",
"MarketChanges", len(actorChanges[builtin2.StorageMarketActorCodeID]),
"MinerChanges", len(actorChanges[builtin2.StorageMinerActorCodeID]),
"RewardChanges", len(actorChanges[builtin2.RewardActorCodeID]),
"AccountChanges", len(actorChanges[builtin2.AccountActorCodeID]),
"nullRounds", len(nullRounds))
grp := sync.WaitGroup{}
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleMarketChanges(ctx, actorChanges[builtin2.StorageMarketActorCodeID]); err != nil {
log.Errorf("Failed to handle market changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleMinerChanges(ctx, actorChanges[builtin2.StorageMinerActorCodeID]); err != nil {
log.Errorf("Failed to handle miner changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleRewardChanges(ctx, actorChanges[builtin2.RewardActorCodeID], nullRounds); err != nil {
log.Errorf("Failed to handle reward changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandlePowerChanges(ctx, actorChanges[builtin2.StoragePowerActorCodeID]); err != nil {
log.Errorf("Failed to handle power actor changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleMessageChanges(ctx, toProcess); err != nil {
log.Errorf("Failed to handle message changes: %v", err)
return
}
}()
grp.Add(1)
go func() {
defer grp.Done()
if err := p.HandleCommonActorsChanges(ctx, actorChanges); err != nil {
log.Errorf("Failed to handle common actor changes: %v", err)
return
}
}()
grp.Wait()
if err := p.markBlocksProcessed(ctx, toProcess); err != nil {
log.Fatalw("Failed to mark blocks as processed", "error", err)
}
if err := p.refreshViews(); err != nil {
log.Errorw("Failed to refresh views", "error", err)
}
log.Infow("Processed Batch Complete", "duration", time.Since(loopStart).String())
}
}
}()
}
func (p *Processor) refreshViews() error {
if _, err := p.db.Exec(`refresh materialized view state_heights`); err != nil {
return err
}
return nil
}
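// collectActorChanges walks the unprocessed block headers in parallel and, for
// each, diffs the block's parent state root against its grandparent state root
// to find actors whose state changed. Results are grouped by actor code and
// then by parent tipset key; tipsets whose state did not advance are reported
// as null rounds.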
func (p *Processor) collectActorChanges(ctx context.Context, toProcess map[cid.Cid]*types.BlockHeader) (map[cid.Cid]ActorTips, []types.TipSetKey, error) {
start := time.Now()
defer func() {
log.Debugw("Collected Actor Changes", "duration", time.Since(start).String())
}()
// ActorCode - > tipset->[]actorInfo
out := map[cid.Cid]ActorTips{}
var outMu sync.Mutex
actorsSeen := map[cid.Cid]struct{}{}
var nullRounds []types.TipSetKey
var nullBlkMu sync.Mutex
// collect all actor state that has changes between block headers
paDone := 0
parmap.Par(50, parmap.MapArr(toProcess), func(bh *types.BlockHeader) {
paDone++
if paDone%100 == 0 {
log.Debugw("Collecting actor changes", "done", paDone, "percent", (paDone*100)/len(toProcess))
}
pts, err := p.node.ChainGetTipSet(ctx, types.NewTipSetKey(bh.Parents...))
if err != nil {
log.Error(err)
return
}
if pts.ParentState().Equals(bh.ParentStateRoot) {
nullBlkMu.Lock()
nullRounds = append(nullRounds, pts.Key())
nullBlkMu.Unlock()
}
// collect all actors that had state changes between the blockheader parent-state and its grandparent-state.
// TODO: changes will contain deleted actors, this causes needless processing further down the pipeline, consider
// a separate strategy for deleted actors
// map of addresses to actors whose state changed between the two state roots
changes, err := p.node.StateChangedActors(ctx, pts.ParentState(), bh.ParentStateRoot)
if err != nil {
log.Error(err)
log.Debugw("StateChangedActors", "grandparent_state", pts.ParentState(), "parent_state", bh.ParentStateRoot)
return
}
// record the state of all actors that have changed
for a, act := range changes {
act := act
a := a
// ignore actors that were deleted.
has, err := p.node.ChainHasObj(ctx, act.Head)
if err != nil {
log.Error(err)
log.Debugw("ChanHasObj", "actor_head", act.Head)
return
}
if !has {
continue
}
addr, err := address.NewFromString(a)
if err != nil {
log.Error(err)
log.Debugw("NewFromString", "address_string", a)
return
}
ast, err := p.node.StateReadState(ctx, addr, pts.Key())
if err != nil {
log.Error(err)
log.Debugw("StateReadState", "address_string", a, "parent_tipset_key", pts.Key())
return
}
// TODO look here for an empty state, maybe that's a sign the actor was deleted?
state, err := json.Marshal(ast.State)
if err != nil {
log.Error(err)
return
}
outMu.Lock()
if _, ok := actorsSeen[act.Head]; !ok {
_, ok := out[act.Code]
if !ok {
out[act.Code] = map[types.TipSetKey][]actorInfo{}
}
out[act.Code][pts.Key()] = append(out[act.Code][pts.Key()], actorInfo{
act: act,
stateroot: bh.ParentStateRoot,
height: bh.Height,
tsKey: pts.Key(),
parentTsKey: pts.Parents(),
addr: addr,
state: string(state),
})
}
actorsSeen[act.Head] = struct{}{}
outMu.Unlock()
}
})
return out, nullRounds, nil
}
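// unprocessedBlocks returns up to batch block headers that have been synced
// into blocks_synced but not yet marked processed, fetching each header from
// the node by CID.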
func (p *Processor) unprocessedBlocks(ctx context.Context, batch int) (map[cid.Cid]*types.BlockHeader, error) {
start := time.Now()
defer func() {
log.Debugw("Gathered Blocks to process", "duration", time.Since(start).String())
}()
rows, err := p.db.Query(`
with toProcess as (
select b.cid, b.height, rank() over (order by height) as rnk
from blocks_synced bs
left join blocks b on bs.cid = b.cid
where bs.processed_at is null and b.height > 0
)
select cid
from toProcess
where rnk <= $1
`, batch)
if err != nil {
return nil, xerrors.Errorf("Failed to query for unprocessed blocks: %w", err)
}
out := map[cid.Cid]*types.BlockHeader{}
minBlock := abi.ChainEpoch(math.MaxInt64)
maxBlock := abi.ChainEpoch(0)
// TODO consider parallel execution here for getting the blocks from the api as is done in fetchMessages()
for rows.Next() {
if err := rows.Err(); err != nil {
return nil, err
}
var c string
if err := rows.Scan(&c); err != nil {
log.Errorf("Failed to scan unprocessed blocks: %s", err.Error())
continue
}
ci, err := cid.Parse(c)
if err != nil {
log.Errorf("Failed to parse unprocessed blocks: %s", err.Error())
continue
}
bh, err := p.node.ChainGetBlock(ctx, ci)
if err != nil {
// this is a pretty serious issue.
log.Errorf("Failed to get block header %s: %s", ci.String(), err.Error())
continue
}
out[ci] = bh
if bh.Height < minBlock {
minBlock = bh.Height
}
if bh.Height > maxBlock {
maxBlock = bh.Height
}
}
if minBlock <= maxBlock {
log.Infow("Gathered Blocks to Process", "start", minBlock, "end", maxBlock)
}
return out, rows.Close()
}
func (p *Processor) markBlocksProcessed(ctx context.Context, processed map[cid.Cid]*types.BlockHeader) error {
start := time.Now()
processedHeight := abi.ChainEpoch(0)
defer func() {
log.Debugw("Marked blocks as Processed", "duration", time.Since(start).String())
log.Infow("Processed Blocks", "height", processedHeight)
}()
tx, err := p.db.Begin()
if err != nil {
return err
}
processedAt := time.Now().Unix()
stmt, err := tx.Prepare(`update blocks_synced set processed_at=$1 where cid=$2`)
if err != nil {
return err
}
for c, bh := range processed {
if bh.Height > processedHeight {
processedHeight = bh.Height
}
if _, err := stmt.Exec(processedAt, c.String()); err != nil {
return err
}
}
if err := stmt.Close(); err != nil {
return err
}
return tx.Commit()
}

View File

@ -1,234 +0,0 @@
package processor
import (
"context"
"time"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/reward"
"github.com/filecoin-project/lotus/chain/types"
cw_util "github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)
type rewardActorInfo struct {
common actorInfo
cumSumBaselinePower big.Int
cumSumRealizedPower big.Int
effectiveNetworkTime abi.ChainEpoch
effectiveBaselinePower big.Int
// NOTE: These variables are wrong. Talk to @ZX about fixing. These _do
// not_ represent "new" anything.
newBaselinePower big.Int
newBaseReward big.Int
newSmoothingEstimate builtin.FilterEstimate
totalMinedReward big.Int
}
func (rw *rewardActorInfo) set(s reward.State) (err error) {
rw.cumSumBaselinePower, err = s.CumsumBaseline()
if err != nil {
return xerrors.Errorf("getting cumsum baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.cumSumRealizedPower, err = s.CumsumRealized()
if err != nil {
return xerrors.Errorf("getting cumsum realized power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.effectiveNetworkTime, err = s.EffectiveNetworkTime()
if err != nil {
return xerrors.Errorf("getting effective network time (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.effectiveBaselinePower, err = s.EffectiveBaselinePower()
if err != nil {
return xerrors.Errorf("getting effective baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.totalMinedReward, err = s.TotalStoragePowerReward()
if err != nil {
return xerrors.Errorf("getting total mined (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.newBaselinePower, err = s.ThisEpochBaselinePower()
if err != nil {
return xerrors.Errorf("getting this epoch baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.newBaseReward, err = s.ThisEpochReward()
if err != nil {
return xerrors.Errorf("getting this epoch baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
rw.newSmoothingEstimate, err = s.ThisEpochRewardSmoothed()
if err != nil {
return xerrors.Errorf("getting this epoch baseline power (@ %s): %w", rw.common.stateroot.String(), err)
}
return nil
}
func (p *Processor) setupRewards() error {
tx, err := p.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
/* captures chain-specific power state for any given stateroot */
create table if not exists chain_reward
(
state_root text not null
constraint chain_reward_pk
primary key,
cum_sum_baseline text not null,
cum_sum_realized text not null,
effective_network_time int not null,
effective_baseline_power text not null,
new_baseline_power text not null,
new_reward numeric not null,
new_reward_smoothed_position_estimate text not null,
new_reward_smoothed_velocity_estimate text not null,
total_mined_reward text not null
);
`); err != nil {
return err
}
return tx.Commit()
}
func (p *Processor) HandleRewardChanges(ctx context.Context, rewardTips ActorTips, nullRounds []types.TipSetKey) error {
rewardChanges, err := p.processRewardActors(ctx, rewardTips, nullRounds)
if err != nil {
return xerrors.Errorf("Failed to process reward actors: %w", err)
}
if err := p.persistRewardActors(ctx, rewardChanges); err != nil {
return err
}
return nil
}
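// processRewardActors loads the reward actor state for every tipset with a
// reward actor change, and additionally snapshots it for each null round.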
func (p *Processor) processRewardActors(ctx context.Context, rewardTips ActorTips, nullRounds []types.TipSetKey) ([]rewardActorInfo, error) {
start := time.Now()
defer func() {
log.Debugw("Processed Reward Actors", "duration", time.Since(start).String())
}()
var out []rewardActorInfo
for tipset, rewards := range rewardTips {
for _, act := range rewards {
var rw rewardActorInfo
rw.common = act
// get reward actor states at each tipset once for all updates
rewardActor, err := p.node.StateGetActor(ctx, reward.Address, tipset)
if err != nil {
return nil, xerrors.Errorf("get reward state (@ %s): %w", rw.common.stateroot.String(), err)
}
rewardActorState, err := reward.Load(cw_util.NewAPIIpldStore(ctx, p.node), rewardActor)
if err != nil {
return nil, xerrors.Errorf("read state obj (@ %s): %w", rw.common.stateroot.String(), err)
}
if err := rw.set(rewardActorState); err != nil {
return nil, err
}
out = append(out, rw)
}
}
for _, tsKey := range nullRounds {
var rw rewardActorInfo
tipset, err := p.node.ChainGetTipSet(ctx, tsKey)
if err != nil {
return nil, err
}
rw.common.tsKey = tipset.Key()
rw.common.height = tipset.Height()
rw.common.stateroot = tipset.ParentState()
rw.common.parentTsKey = tipset.Parents()
// get reward actor states at each tipset once for all updates
rewardActor, err := p.node.StateGetActor(ctx, reward.Address, tsKey)
if err != nil {
return nil, err
}
rewardActorState, err := reward.Load(cw_util.NewAPIIpldStore(ctx, p.node), rewardActor)
if err != nil {
return nil, xerrors.Errorf("read state obj (@ %s): %w", rw.common.stateroot.String(), err)
}
if err := rw.set(rewardActorState); err != nil {
return nil, err
}
out = append(out, rw)
}
return out, nil
}
func (p *Processor) persistRewardActors(ctx context.Context, rewards []rewardActorInfo) error {
start := time.Now()
defer func() {
log.Debugw("Persisted Reward Actors", "duration", time.Since(start).String())
}()
tx, err := p.db.Begin()
if err != nil {
return xerrors.Errorf("begin chain_reward tx: %w", err)
}
if _, err := tx.Exec(`create temp table cr (like chain_reward excluding constraints) on commit drop`); err != nil {
return xerrors.Errorf("prep chain_reward temp: %w", err)
}
stmt, err := tx.Prepare(`copy cr ( state_root, cum_sum_baseline, cum_sum_realized, effective_network_time, effective_baseline_power, new_baseline_power, new_reward, new_reward_smoothed_position_estimate, new_reward_smoothed_velocity_estimate, total_mined_reward) from STDIN`)
if err != nil {
return xerrors.Errorf("prepare tmp chain_reward: %w", err)
}
for _, rewardState := range rewards {
if _, err := stmt.Exec(
rewardState.common.stateroot.String(),
rewardState.cumSumBaselinePower.String(),
rewardState.cumSumRealizedPower.String(),
uint64(rewardState.effectiveNetworkTime),
rewardState.effectiveBaselinePower.String(),
rewardState.newBaselinePower.String(),
rewardState.newBaseReward.String(),
rewardState.newSmoothingEstimate.PositionEstimate.String(),
rewardState.newSmoothingEstimate.VelocityEstimate.String(),
rewardState.totalMinedReward.String(),
); err != nil {
log.Errorw("failed to store chain power", "state_root", rewardState.common.stateroot, "error", err)
}
}
if err := stmt.Close(); err != nil {
return xerrors.Errorf("close prepared chain_reward: %w", err)
}
if _, err := tx.Exec(`insert into chain_reward select * from cr on conflict do nothing`); err != nil {
return xerrors.Errorf("insert chain_reward from tmp: %w", err)
}
if err := tx.Commit(); err != nil {
return xerrors.Errorf("commit chain_reward tx: %w", err)
}
return nil
}

View File

@ -1,107 +0,0 @@
package main
import (
"database/sql"
"fmt"
"net/http"
_ "net/http/pprof"
"os"
"strings"
"github.com/filecoin-project/lotus/api/v0api"
_ "github.com/lib/pq"
"github.com/filecoin-project/go-jsonrpc"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
"golang.org/x/xerrors"
lcli "github.com/filecoin-project/lotus/cli"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/processor"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/scheduler"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/syncer"
"github.com/filecoin-project/lotus/cmd/lotus-chainwatch/util"
)
var runCmd = &cli.Command{
Name: "run",
Usage: "Start lotus chainwatch",
Flags: []cli.Flag{
&cli.IntFlag{
Name: "max-batch",
Value: 50,
},
},
Action: func(cctx *cli.Context) error {
go func() {
http.ListenAndServe(":6060", nil) //nolint:errcheck
}()
ll := cctx.String("log-level")
if err := logging.SetLogLevel("*", ll); err != nil {
return err
}
if err := logging.SetLogLevel("rpc", "error"); err != nil {
return err
}
var api v0api.FullNode
var closer jsonrpc.ClientCloser
var err error
if tokenMaddr := cctx.String("api"); tokenMaddr != "" {
toks := strings.Split(tokenMaddr, ":")
if len(toks) != 2 {
return fmt.Errorf("invalid api tokens, expected <token>:<maddr>, got: %s", tokenMaddr)
}
api, closer, err = util.GetFullNodeAPIUsingCredentials(cctx.Context, toks[1], toks[0])
if err != nil {
return err
}
} else {
api, closer, err = lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
}
defer closer()
ctx := lcli.ReqContext(cctx)
v, err := api.Version(ctx)
if err != nil {
return err
}
log.Infof("Remote version: %s", v.Version)
maxBatch := cctx.Int("max-batch")
db, err := sql.Open("postgres", cctx.String("db"))
if err != nil {
return err
}
defer func() {
if err := db.Close(); err != nil {
log.Errorw("Failed to close database", "error", err)
}
}()
if err := db.Ping(); err != nil {
return xerrors.Errorf("Database failed to respond to ping (is it online?): %w", err)
}
db.SetMaxOpenConns(1350)
sync := syncer.NewSyncer(db, api, 1400)
sync.Start(ctx)
proc := processor.NewProcessor(ctx, db, api, maxBatch)
proc.Start(ctx)
sched := scheduler.PrepareScheduler(db)
sched.Start(ctx)
<-ctx.Done()
os.Exit(0)
return nil
},
}

View File

@ -1,78 +0,0 @@
package scheduler
import (
"context"
"database/sql"
"golang.org/x/xerrors"
)
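// setupTopMinerByBaseRewardSchema creates materialized views that rank miners
// by the total block rewards they have won (new_reward weighted by win_count),
// plus a companion view tracking the most recent height covered.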
func setupTopMinerByBaseRewardSchema(ctx context.Context, db *sql.DB) error {
select {
case <-ctx.Done():
return nil
default:
}
tx, err := db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
create materialized view if not exists top_miners_by_base_reward as
with total_rewards_by_miner as (
select
b.miner,
sum(cr.new_reward * b.win_count) as total_reward
from blocks b
inner join chain_reward cr on b.parentstateroot = cr.state_root
group by 1
) select
rank() over (order by total_reward desc),
miner,
total_reward
from total_rewards_by_miner
group by 2, 3;
create index if not exists top_miners_by_base_reward_miner_index
on top_miners_by_base_reward (miner);
create materialized view if not exists top_miners_by_base_reward_max_height as
select
b."timestamp"as current_timestamp,
max(b.height) as current_height
from blocks b
join chain_reward cr on b.parentstateroot = cr.state_root
where cr.new_reward is not null
group by 1
order by 1 desc
limit 1;
`); err != nil {
return xerrors.Errorf("create top_miners_by_base_reward views: %w", err)
}
if err := tx.Commit(); err != nil {
return xerrors.Errorf("committing top_miners_by_base_reward views; %w", err)
}
return nil
}
func refreshTopMinerByBaseReward(ctx context.Context, db *sql.DB) error {
select {
case <-ctx.Done():
return nil
default:
}
_, err := db.Exec("refresh materialized view top_miners_by_base_reward;")
if err != nil {
return xerrors.Errorf("refresh top_miners_by_base_reward: %w", err)
}
_, err = db.Exec("refresh materialized view top_miners_by_base_reward_max_height;")
if err != nil {
return xerrors.Errorf("refresh top_miners_by_base_reward_max_height: %w", err)
}
return nil
}

View File

@ -1,60 +0,0 @@
package scheduler
import (
"context"
"database/sql"
"time"
logging "github.com/ipfs/go-log/v2"
"golang.org/x/xerrors"
)
var log = logging.Logger("scheduler")
// Scheduler manages the execution of jobs triggered
// by tickers. Not externally configurable at runtime.
type Scheduler struct {
db *sql.DB
}
// PrepareScheduler returns a ready-to-run Scheduler
func PrepareScheduler(db *sql.DB) *Scheduler {
return &Scheduler{db}
}
func (s *Scheduler) setupSchema(ctx context.Context) error {
if err := setupTopMinerByBaseRewardSchema(ctx, s.db); err != nil {
return xerrors.Errorf("setup top miners by reward schema: %w", err)
}
return nil
}
// Start the scheduler jobs at the defined intervals
func (s *Scheduler) Start(ctx context.Context) {
log.Debug("Starting Scheduler")
if err := s.setupSchema(ctx); err != nil {
log.Fatalw("applying scheduling schema", "error", err)
}
go func() {
// run once on start after schema has initialized
time.Sleep(1 * time.Minute)
if err := refreshTopMinerByBaseReward(ctx, s.db); err != nil {
log.Errorw("failed to refresh top miner", "error", err)
}
refreshTopMinerCh := time.NewTicker(30 * time.Second)
defer refreshTopMinerCh.Stop()
for {
select {
case <-refreshTopMinerCh.C:
if err := refreshTopMinerByBaseReward(ctx, s.db); err != nil {
log.Errorw("failed to refresh top miner", "error", err)
}
case <-ctx.Done():
return
}
}
}()
}

View File

@ -1,27 +0,0 @@
package syncer
import (
"context"
"time"
"github.com/filecoin-project/lotus/chain/types"
"github.com/ipfs/go-cid"
)
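// subBlocks subscribes to incoming gossip blocks and stores each header as it
// arrives, without marking it synced; the main sync loop picks it up later.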
func (s *Syncer) subBlocks(ctx context.Context) {
sub, err := s.node.SyncIncomingBlocks(ctx)
if err != nil {
log.Errorf("opening incoming block channel: %+v", err)
return
}
log.Infow("Capturing incoming blocks")
for bh := range sub {
err := s.storeHeaders(map[cid.Cid]*types.BlockHeader{
bh.Cid(): bh,
}, false, time.Now())
if err != nil {
log.Errorf("storing incoming block header: %+v", err)
}
}
}

View File

@ -1,527 +0,0 @@
package syncer
import (
"container/list"
"context"
"database/sql"
"fmt"
"sync"
"time"
"golang.org/x/xerrors"
"github.com/ipfs/go-cid"
logging "github.com/ipfs/go-log/v2"
"github.com/filecoin-project/lotus/api/v0api"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
)
var log = logging.Logger("syncer")
type Syncer struct {
db *sql.DB
lookbackLimit uint64
headerLk sync.Mutex
node v0api.FullNode
}
func NewSyncer(db *sql.DB, node v0api.FullNode, lookbackLimit uint64) *Syncer {
return &Syncer{
db: db,
node: node,
lookbackLimit: lookbackLimit,
}
}
func (s *Syncer) setupSchemas() error {
tx, err := s.db.Begin()
if err != nil {
return err
}
if _, err := tx.Exec(`
/* tracks circulating fil available on the network at each tipset */
create table if not exists chain_economics
(
parent_state_root text not null
constraint chain_economics_pk primary key,
circulating_fil text not null,
vested_fil text not null,
mined_fil text not null,
burnt_fil text not null,
locked_fil text not null
);
create table if not exists block_cids
(
cid text not null
constraint block_cids_pk
primary key
);
create unique index if not exists block_cids_cid_uindex
on block_cids (cid);
create table if not exists blocks_synced
(
cid text not null
constraint blocks_synced_pk
primary key
constraint blocks_block_cids_cid_fk
references block_cids (cid),
synced_at int not null,
processed_at int
);
create unique index if not exists blocks_synced_cid_uindex
on blocks_synced (cid,processed_at);
create table if not exists block_parents
(
block text not null
constraint blocks_block_cids_cid_fk
references block_cids (cid),
parent text not null
);
create unique index if not exists block_parents_block_parent_uindex
on block_parents (block, parent);
create table if not exists drand_entries
(
round bigint not null
constraint drand_entries_pk
primary key,
data bytea not null
);
create unique index if not exists drand_entries_round_uindex
on drand_entries (round);
create table if not exists block_drand_entries
(
round bigint not null
constraint block_drand_entries_drand_entries_round_fk
references drand_entries (round),
block text not null
constraint blocks_block_cids_cid_fk
references block_cids (cid)
);
create unique index if not exists block_drand_entries_round_uindex
on block_drand_entries (round, block);
create table if not exists blocks
(
cid text not null
constraint blocks_pk
primary key
constraint blocks_block_cids_cid_fk
references block_cids (cid),
parentWeight numeric not null,
parentStateRoot text not null,
height bigint not null,
miner text not null,
timestamp bigint not null,
ticket bytea not null,
election_proof bytea,
win_count bigint,
parent_base_fee text not null,
forksig bigint not null
);
create unique index if not exists block_cid_uindex
on blocks (cid,height);
create materialized view if not exists state_heights
as select min(b.height) height, b.parentstateroot
from blocks b group by b.parentstateroot;
create index if not exists state_heights_height_index
on state_heights (height);
create index if not exists state_heights_parentstateroot_index
on state_heights (parentstateroot);
`); err != nil {
return err
}
return tx.Commit()
}
func (s *Syncer) Start(ctx context.Context) {
if err := logging.SetLogLevel("syncer", "info"); err != nil {
log.Fatal(err)
}
log.Debug("Starting Syncer")
if err := s.setupSchemas(); err != nil {
log.Fatal(err)
}
// capture all reported blocks
go s.subBlocks(ctx)
// we need to ensure that on a restart we don't reprocess the whole chain
var sinceEpoch uint64
blkCID, height, err := s.mostRecentlySyncedBlockHeight()
if err != nil {
log.Fatalw("failed to find most recently synced block", "error", err)
} else {
if height > 0 {
log.Infow("Found starting point for syncing", "blockCID", blkCID.String(), "height", height)
sinceEpoch = uint64(height)
}
}
// continue to keep the block headers table up to date.
notifs, err := s.node.ChainNotify(ctx)
if err != nil {
log.Fatal(err)
}
go func() {
for notif := range notifs {
for _, change := range notif {
switch change.Type {
case store.HCCurrent:
// This case is important for capturing the initial state of a node
// which might be on a dead network with no new blocks being produced.
// It also allows a fresh Chainwatch instance to start walking the
// chain without waiting for a new block to come along.
fallthrough
case store.HCApply:
unsynced, err := s.unsyncedBlocks(ctx, change.Val, sinceEpoch)
if err != nil {
log.Errorw("failed to gather unsynced blocks", "error", err)
}
if err := s.storeCirculatingSupply(ctx, change.Val); err != nil {
log.Errorw("failed to store circulating supply", "error", err)
}
if len(unsynced) == 0 {
continue
}
if err := s.storeHeaders(unsynced, true, time.Now()); err != nil {
// this is pretty bad and needs some kind of retry;
// for now just log an error and the blocks will be attempted again on the next notification
log.Errorw("failed to store unsynced blocks", "error", err)
}
sinceEpoch = uint64(change.Val.Height())
case store.HCRevert:
log.Debug("revert todo")
}
}
}
}()
}
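// unsyncedBlocks walks backwards from the given head, following parent links,
// and collects every block header not already recorded as processed, stopping
// at genesis or at blocks that have already been seen.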
func (s *Syncer) unsyncedBlocks(ctx context.Context, head *types.TipSet, since uint64) (map[cid.Cid]*types.BlockHeader, error) {
hasList, err := s.syncedBlocks(since, s.lookbackLimit)
if err != nil {
return nil, err
}
// build a list of blocks that we have not synced.
toVisit := list.New()
for _, header := range head.Blocks() {
toVisit.PushBack(header)
}
toSync := map[cid.Cid]*types.BlockHeader{}
for toVisit.Len() > 0 {
bh := toVisit.Remove(toVisit.Back()).(*types.BlockHeader)
_, has := hasList[bh.Cid()]
if _, seen := toSync[bh.Cid()]; seen || has {
continue
}
toSync[bh.Cid()] = bh
if len(toSync)%500 == 10 {
log.Debugw("To visit", "toVisit", toVisit.Len(), "toSync", len(toSync), "current_height", bh.Height)
}
if bh.Height == 0 {
continue
}
pts, err := s.node.ChainGetTipSet(ctx, types.NewTipSetKey(bh.Parents...))
if err != nil {
log.Error(err)
continue
}
for _, header := range pts.Blocks() {
toVisit.PushBack(header)
}
}
log.Debugw("Gathered unsynced blocks", "count", len(toSync))
return toSync, nil
}
func (s *Syncer) syncedBlocks(since, limit uint64) (map[cid.Cid]struct{}, error) {
rws, err := s.db.Query(`select bs.cid FROM blocks_synced bs left join blocks b on b.cid = bs.cid where b.height <= $1 and bs.processed_at is not null limit $2`, since, limit)
if err != nil {
return nil, xerrors.Errorf("Failed to query blocks_synced: %w", err)
}
out := map[cid.Cid]struct{}{}
for rws.Next() {
var c string
if err := rws.Scan(&c); err != nil {
return nil, xerrors.Errorf("Failed to scan blocks_synced: %w", err)
}
ci, err := cid.Parse(c)
if err != nil {
return nil, xerrors.Errorf("Failed to parse blocks_synced: %w", err)
}
out[ci] = struct{}{}
}
return out, nil
}
func (s *Syncer) mostRecentlySyncedBlockHeight() (cid.Cid, int64, error) {
rw := s.db.QueryRow(`
select blocks_synced.cid, b.height
from blocks_synced
left join blocks b on blocks_synced.cid = b.cid
where processed_at is not null
order by height desc
limit 1
`)
var c string
var h int64
if err := rw.Scan(&c, &h); err != nil {
if err == sql.ErrNoRows {
return cid.Undef, 0, nil
}
return cid.Undef, -1, err
}
ci, err := cid.Parse(c)
if err != nil {
return cid.Undef, -1, err
}
return ci, h, nil
}
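// storeCirculatingSupply upserts the circulating supply breakdown (circulating,
// vested, mined, burnt and locked FIL) for the tipset's parent state root into
// chain_economics.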
func (s *Syncer) storeCirculatingSupply(ctx context.Context, tipset *types.TipSet) error {
supply, err := s.node.StateVMCirculatingSupplyInternal(ctx, tipset.Key())
if err != nil {
return err
}
ceInsert := `insert into chain_economics (parent_state_root, circulating_fil, vested_fil, mined_fil, burnt_fil, locked_fil) ` +
`values ('%s', '%s', '%s', '%s', '%s', '%s') on conflict on constraint chain_economics_pk do ` +
`update set (circulating_fil, vested_fil, mined_fil, burnt_fil, locked_fil) = ('%[2]s', '%[3]s', '%[4]s', '%[5]s', '%[6]s') ` +
`where chain_economics.parent_state_root = '%[1]s';`
if _, err := s.db.Exec(fmt.Sprintf(ceInsert,
tipset.ParentState().String(),
supply.FilCirculating.String(),
supply.FilVested.String(),
supply.FilMined.String(),
supply.FilBurnt.String(),
supply.FilLocked.String(),
)); err != nil {
return xerrors.Errorf("insert circulating supply for tipset (%s): %w", tipset.Key().String(), err)
}
return nil
}
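// storeHeaders bulk-inserts block headers and their related rows (block CIDs,
// drand entries, parent links and, when sync is true, blocks_synced entries)
// in a single transaction, staging each set in a temporary table first.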
func (s *Syncer) storeHeaders(bhs map[cid.Cid]*types.BlockHeader, sync bool, timestamp time.Time) error {
s.headerLk.Lock()
defer s.headerLk.Unlock()
if len(bhs) == 0 {
return nil
}
log.Debugw("Storing Headers", "count", len(bhs))
tx, err := s.db.Begin()
if err != nil {
return xerrors.Errorf("begin: %w", err)
}
if _, err := tx.Exec(`
create temp table bc (like block_cids excluding constraints) on commit drop;
create temp table de (like drand_entries excluding constraints) on commit drop;
create temp table bde (like block_drand_entries excluding constraints) on commit drop;
create temp table tbp (like block_parents excluding constraints) on commit drop;
create temp table bs (like blocks_synced excluding constraints) on commit drop;
create temp table b (like blocks excluding constraints) on commit drop;
`); err != nil {
return xerrors.Errorf("prep temp: %w", err)
}
{
stmt, err := tx.Prepare(`copy bc (cid) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
if _, err := stmt.Exec(bh.Cid().String()); err != nil {
log.Error(err)
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_cids select * from bc on conflict do nothing `); err != nil {
return xerrors.Errorf("drand entries put: %w", err)
}
}
{
stmt, err := tx.Prepare(`copy de (round, data) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
for _, ent := range bh.BeaconEntries {
if _, err := stmt.Exec(ent.Round, ent.Data); err != nil {
log.Error(err)
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into drand_entries select * from de on conflict do nothing `); err != nil {
return xerrors.Errorf("drand entries put: %w", err)
}
}
{
stmt, err := tx.Prepare(`copy bde (round, block) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
for _, ent := range bh.BeaconEntries {
if _, err := stmt.Exec(ent.Round, bh.Cid().String()); err != nil {
log.Error(err)
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_drand_entries select * from bde on conflict do nothing `); err != nil {
return xerrors.Errorf("block drand entries put: %w", err)
}
}
{
stmt, err := tx.Prepare(`copy tbp (block, parent) from STDIN`)
if err != nil {
return err
}
for _, bh := range bhs {
for _, parent := range bh.Parents {
if _, err := stmt.Exec(bh.Cid().String(), parent.String()); err != nil {
log.Error(err)
}
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into block_parents select * from tbp on conflict do nothing `); err != nil {
return xerrors.Errorf("parent put: %w", err)
}
}
if sync {
stmt, err := tx.Prepare(`copy bs (cid, synced_at) from stdin `)
if err != nil {
return err
}
for _, bh := range bhs {
if _, err := stmt.Exec(bh.Cid().String(), timestamp.Unix()); err != nil {
log.Error(err)
}
}
if err := stmt.Close(); err != nil {
return err
}
if _, err := tx.Exec(`insert into blocks_synced select * from bs on conflict do nothing `); err != nil {
return xerrors.Errorf("syncd put: %w", err)
}
}
stmt2, err := tx.Prepare(`copy b (cid, parentWeight, parentStateRoot, height, miner, "timestamp", ticket, election_proof, win_count, parent_base_fee, forksig) from stdin`)
if err != nil {
return err
}
for _, bh := range bhs {
var eproof, winCount interface{}
if bh.ElectionProof != nil {
eproof = bh.ElectionProof.VRFProof
winCount = bh.ElectionProof.WinCount
}
if bh.Ticket == nil {
log.Warnf("got a block with nil ticket")
bh.Ticket = &types.Ticket{
VRFProof: []byte{},
}
}
if _, err := stmt2.Exec(
bh.Cid().String(),
bh.ParentWeight.String(),
bh.ParentStateRoot.String(),
bh.Height,
bh.Miner.String(),
bh.Timestamp,
bh.Ticket.VRFProof,
eproof,
winCount,
bh.ParentBaseFee.String(),
bh.ForkSignaling); err != nil {
log.Error(err)
}
}
if err := stmt2.Close(); err != nil {
return xerrors.Errorf("s2 close: %w", err)
}
if _, err := tx.Exec(`insert into blocks select * from b on conflict do nothing `); err != nil {
return xerrors.Errorf("blk put: %w", err)
}
return tx.Commit()
}

View File

@ -1,34 +0,0 @@
package util
import (
"context"
"net/http"
"github.com/filecoin-project/go-jsonrpc"
"github.com/filecoin-project/lotus/api/client"
"github.com/filecoin-project/lotus/api/v0api"
ma "github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
)
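// GetFullNodeAPIUsingCredentials dials a full node's JSON-RPC websocket
// endpoint derived from the given multiaddr, authenticating with the provided
// bearer token.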
func GetFullNodeAPIUsingCredentials(ctx context.Context, listenAddr, token string) (v0api.FullNode, jsonrpc.ClientCloser, error) {
parsedAddr, err := ma.NewMultiaddr(listenAddr)
if err != nil {
return nil, nil, err
}
_, addr, err := manet.DialArgs(parsedAddr)
if err != nil {
return nil, nil, err
}
return client.NewFullNodeRPCV0(ctx, apiURI(addr), apiHeaders(token))
}
func apiURI(addr string) string {
return "ws://" + addr + "/rpc/v0"
}
func apiHeaders(token string) http.Header {
headers := http.Header{}
headers.Add("Authorization", "Bearer "+token)
return headers
}

View File

@ -1,51 +0,0 @@
package util
import (
"bytes"
"context"
"fmt"
"github.com/ipfs/go-cid"
cbg "github.com/whyrusleeping/cbor-gen"
"github.com/filecoin-project/lotus/api/v0api"
)
// TODO extract this to a common location in lotus and reuse the code
// APIIpldStore is required for AMT and HAMT access.
type APIIpldStore struct {
ctx context.Context
api v0api.FullNode
}
func NewAPIIpldStore(ctx context.Context, api v0api.FullNode) *APIIpldStore {
return &APIIpldStore{
ctx: ctx,
api: api,
}
}
func (ht *APIIpldStore) Context() context.Context {
return ht.ctx
}
func (ht *APIIpldStore) Get(ctx context.Context, c cid.Cid, out interface{}) error {
raw, err := ht.api.ChainReadObj(ctx, c)
if err != nil {
return err
}
cu, ok := out.(cbg.CBORUnmarshaler)
if ok {
if err := cu.UnmarshalCBOR(bytes.NewReader(raw)); err != nil {
return err
}
return nil
}
return fmt.Errorf("Object does not implement CBORUnmarshaler: %T", out)
}
func (ht *APIIpldStore) Put(ctx context.Context, v interface{}) (cid.Cid, error) {
return cid.Undef, fmt.Errorf("Put is not implemented on APIIpldStore")
}

View File

@ -21,6 +21,7 @@ import (
"github.com/filecoin-project/go-fil-markets/storagemarket"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/big"
"github.com/filecoin-project/lotus/api/v0api"
sealing "github.com/filecoin-project/lotus/extern/storage-sealing"
"github.com/filecoin-project/lotus/api"
@ -55,7 +56,7 @@ func infoCmdAct(cctx *cli.Context) error {
}
defer closer()
api, acloser, err := lcli.GetFullNodeAPI(cctx)
fullapi, acloser, err := lcli.GetFullNodeAPI(cctx)
if err != nil {
return err
}
@ -63,9 +64,16 @@ func infoCmdAct(cctx *cli.Context) error {
ctx := lcli.ReqContext(cctx)
subsystems, err := nodeApi.RuntimeSubsystems(ctx)
if err != nil {
return err
}
fmt.Println("Enabled subsystems:", subsystems)
fmt.Print("Chain: ")
head, err := api.ChainHead(ctx)
head, err := fullapi.ChainHead(ctx)
if err != nil {
return err
}
@ -95,24 +103,42 @@ func infoCmdAct(cctx *cli.Context) error {
fmt.Println()
if subsystems.Has(api.SubsystemSectorStorage) {
err := handleMiningInfo(ctx, cctx, fullapi, nodeApi)
if err != nil {
return err
}
}
if subsystems.Has(api.SubsystemMarkets) {
err := handleMarketsInfo(ctx, nodeApi)
if err != nil {
return err
}
}
return nil
}
func handleMiningInfo(ctx context.Context, cctx *cli.Context, fullapi v0api.FullNode, nodeApi api.StorageMiner) error {
maddr, err := getActorAddress(ctx, cctx)
if err != nil {
return err
}
mact, err := api.StateGetActor(ctx, maddr, types.EmptyTSK)
mact, err := fullapi.StateGetActor(ctx, maddr, types.EmptyTSK)
if err != nil {
return err
}
tbs := blockstore.NewTieredBstore(blockstore.NewAPIBlockstore(api), blockstore.NewMemory())
tbs := blockstore.NewTieredBstore(blockstore.NewAPIBlockstore(fullapi), blockstore.NewMemory())
mas, err := miner.Load(adt.WrapStore(ctx, cbor.NewCborStore(tbs)), mact)
if err != nil {
return err
}
// Sector size
mi, err := api.StateMinerInfo(ctx, maddr, types.EmptyTSK)
mi, err := fullapi.StateMinerInfo(ctx, maddr, types.EmptyTSK)
if err != nil {
return err
}
@ -120,7 +146,7 @@ func infoCmdAct(cctx *cli.Context) error {
ssize := types.SizeStr(types.NewInt(uint64(mi.SectorSize)))
fmt.Printf("Miner: %s (%s sectors)\n", color.BlueString("%s", maddr), ssize)
pow, err := api.StateMinerPower(ctx, maddr, types.EmptyTSK)
pow, err := fullapi.StateMinerPower(ctx, maddr, types.EmptyTSK)
if err != nil {
return err
}
@ -142,7 +168,7 @@ func infoCmdAct(cctx *cli.Context) error {
pow.TotalPower.RawBytePower,
),
)
secCounts, err := api.StateMinerSectorCount(ctx, maddr, types.EmptyTSK)
secCounts, err := fullapi.StateMinerSectorCount(ctx, maddr, types.EmptyTSK)
if err != nil {
return err
}
@ -219,6 +245,75 @@ func infoCmdAct(cctx *cli.Context) error {
fmt.Println()
spendable := big.Zero()
// NOTE: there's no need to unlock anything here. Funds only
// vest on deadline boundaries, and they're unlocked by cron.
lockedFunds, err := mas.LockedFunds()
if err != nil {
return xerrors.Errorf("getting locked funds: %w", err)
}
availBalance, err := mas.AvailableBalance(mact.Balance)
if err != nil {
return xerrors.Errorf("getting available balance: %w", err)
}
spendable = big.Add(spendable, availBalance)
fmt.Printf("Miner Balance: %s\n", color.YellowString("%s", types.FIL(mact.Balance).Short()))
fmt.Printf(" PreCommit: %s\n", types.FIL(lockedFunds.PreCommitDeposits).Short())
fmt.Printf(" Pledge: %s\n", types.FIL(lockedFunds.InitialPledgeRequirement).Short())
fmt.Printf(" Vesting: %s\n", types.FIL(lockedFunds.VestingFunds).Short())
colorTokenAmount(" Available: %s\n", availBalance)
mb, err := fullapi.StateMarketBalance(ctx, maddr, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("getting market balance: %w", err)
}
spendable = big.Add(spendable, big.Sub(mb.Escrow, mb.Locked))
fmt.Printf("Market Balance: %s\n", types.FIL(mb.Escrow).Short())
fmt.Printf(" Locked: %s\n", types.FIL(mb.Locked).Short())
colorTokenAmount(" Available: %s\n", big.Sub(mb.Escrow, mb.Locked))
wb, err := fullapi.WalletBalance(ctx, mi.Worker)
if err != nil {
return xerrors.Errorf("getting worker balance: %w", err)
}
spendable = big.Add(spendable, wb)
color.Cyan("Worker Balance: %s", types.FIL(wb).Short())
if len(mi.ControlAddresses) > 0 {
cbsum := big.Zero()
for _, ca := range mi.ControlAddresses {
b, err := fullapi.WalletBalance(ctx, ca)
if err != nil {
return xerrors.Errorf("getting control address balance: %w", err)
}
cbsum = big.Add(cbsum, b)
}
spendable = big.Add(spendable, cbsum)
fmt.Printf(" Control: %s\n", types.FIL(cbsum).Short())
}
colorTokenAmount("Total Spendable: %s\n", spendable)
fmt.Println()
if !cctx.Bool("hide-sectors-info") {
fmt.Println("Sectors:")
err = sectorsInfo(ctx, nodeApi)
if err != nil {
return err
}
}
// TODO: grab actr state / info
// * Sealed sectors (count / bytes)
// * Power
return nil
}
func handleMarketsInfo(ctx context.Context, nodeApi api.StorageMiner) error {
deals, err := nodeApi.MarketListIncompleteDeals(ctx)
if err != nil {
return err
@ -309,70 +404,6 @@ func infoCmdAct(cctx *cli.Context) error {
fmt.Println()
spendable := big.Zero()
// NOTE: there's no need to unlock anything here. Funds only
// vest on deadline boundaries, and they're unlocked by cron.
lockedFunds, err := mas.LockedFunds()
if err != nil {
return xerrors.Errorf("getting locked funds: %w", err)
}
availBalance, err := mas.AvailableBalance(mact.Balance)
if err != nil {
return xerrors.Errorf("getting available balance: %w", err)
}
spendable = big.Add(spendable, availBalance)
fmt.Printf("Miner Balance: %s\n", color.YellowString("%s", types.FIL(mact.Balance).Short()))
fmt.Printf(" PreCommit: %s\n", types.FIL(lockedFunds.PreCommitDeposits).Short())
fmt.Printf(" Pledge: %s\n", types.FIL(lockedFunds.InitialPledgeRequirement).Short())
fmt.Printf(" Vesting: %s\n", types.FIL(lockedFunds.VestingFunds).Short())
colorTokenAmount(" Available: %s\n", availBalance)
mb, err := api.StateMarketBalance(ctx, maddr, types.EmptyTSK)
if err != nil {
return xerrors.Errorf("getting market balance: %w", err)
}
spendable = big.Add(spendable, big.Sub(mb.Escrow, mb.Locked))
fmt.Printf("Market Balance: %s\n", types.FIL(mb.Escrow).Short())
fmt.Printf(" Locked: %s\n", types.FIL(mb.Locked).Short())
colorTokenAmount(" Available: %s\n", big.Sub(mb.Escrow, mb.Locked))
wb, err := api.WalletBalance(ctx, mi.Worker)
if err != nil {
return xerrors.Errorf("getting worker balance: %w", err)
}
spendable = big.Add(spendable, wb)
color.Cyan("Worker Balance: %s", types.FIL(wb).Short())
if len(mi.ControlAddresses) > 0 {
cbsum := big.Zero()
for _, ca := range mi.ControlAddresses {
b, err := api.WalletBalance(ctx, ca)
if err != nil {
return xerrors.Errorf("getting control address balance: %w", err)
}
cbsum = big.Add(cbsum, b)
}
spendable = big.Add(spendable, cbsum)
fmt.Printf(" Control: %s\n", types.FIL(cbsum).Short())
}
colorTokenAmount("Total Spendable: %s\n", spendable)
fmt.Println()
if !cctx.Bool("hide-sectors-info") {
fmt.Println("Sectors:")
err = sectorsInfo(ctx, nodeApi)
if err != nil {
return err
}
}
// TODO: grab actr state / info
// * Sealed sectors (count / bytes)
// * Power
return nil
}

View File

@ -5,6 +5,7 @@ import (
"fmt"
"github.com/fatih/color"
cliutil "github.com/filecoin-project/lotus/cli/util"
logging "github.com/ipfs/go-log/v2"
"github.com/urfave/cli/v2"
"go.opencensus.io/trace"
@ -105,6 +106,7 @@ func main() {
Value: "~/.lotusminer", // TODO: Consider XDG_DATA_HOME
Usage: fmt.Sprintf("Specify miner repo path. flag(%s) and env(LOTUS_STORAGE_PATH) are DEPRECATION, will REMOVE SOON", FlagMinerRepoDeprecation),
},
cliutil.FlagVeryVerbose,
},
Commands: append(local, lcli.CommonCommands...),

View File

@ -510,13 +510,13 @@ var chainBalanceStateCmd = &cli.Command{
return err
}
cs := store.NewChainStore(bs, bs, mds, vm.Syscalls(ffiwrapper.ProofVerifier), nil)
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
cst := cbor.NewCborStore(bs)
store := adt.WrapStore(ctx, cst)
sm := stmgr.NewStateManager(cs)
sm := stmgr.NewStateManager(cs, vm.Syscalls(ffiwrapper.ProofVerifier))
tree, err := state.LoadStateTree(cst, sroot)
if err != nil {
@ -731,13 +731,13 @@ var chainPledgeCmd = &cli.Command{
return err
}
cs := store.NewChainStore(bs, bs, mds, vm.Syscalls(ffiwrapper.ProofVerifier), nil)
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
cst := cbor.NewCborStore(bs)
store := adt.WrapStore(ctx, cst)
sm := stmgr.NewStateManager(cs)
sm := stmgr.NewStateManager(cs, vm.Syscalls(ffiwrapper.ProofVerifier))
state, err := state.LoadStateTree(cst, sroot)
if err != nil {

View File

@ -90,7 +90,7 @@ var exportChainCmd = &cli.Command{
return err
}
cs := store.NewChainStore(bs, bs, mds, nil, nil)
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
if err := cs.Load(); err != nil {

View File

@ -52,7 +52,7 @@ var genesisVerifyCmd = &cli.Command{
}
bs := blockstore.FromDatastore(datastore.NewMapDatastore())
cs := store.NewChainStore(bs, bs, datastore.NewMapDatastore(), nil, nil)
cs := store.NewChainStore(bs, bs, datastore.NewMapDatastore(), nil)
defer cs.Close() //nolint:errcheck
cf := cctx.Args().Get(0)
@ -66,9 +66,7 @@ var genesisVerifyCmd = &cli.Command{
return err
}
sm := stmgr.NewStateManager(cs)
total, err := stmgr.CheckTotalFIL(context.TODO(), sm, ts)
total, err := stmgr.CheckTotalFIL(context.TODO(), cs, ts)
if err != nil {
return err
}

View File

@ -2,6 +2,18 @@ package main
import (
"fmt"
"os"
"path"
levelds "github.com/ipfs/go-ds-leveldb"
ldbopts "github.com/syndtr/goleveldb/leveldb/opt"
"github.com/filecoin-project/lotus/lib/backupds"
"github.com/filecoin-project/lotus/node/repo"
"github.com/ipfs/go-datastore"
dsq "github.com/ipfs/go-datastore/query"
logging "github.com/ipfs/go-log/v2"
lcli "github.com/filecoin-project/lotus/cli"
@ -18,6 +30,8 @@ var marketCmd = &cli.Command{
Flags: []cli.Flag{},
Subcommands: []*cli.Command{
marketDealFeesCmd,
marketExportDatastoreCmd,
marketImportDatastoreCmd,
},
}
@ -100,3 +114,196 @@ var marketDealFeesCmd = &cli.Command{
return xerrors.New("must provide either --provider or --dealId flag")
},
}
const mktsMetadataNamespace = "metadata"
var marketExportDatastoreCmd = &cli.Command{
Name: "export-datastore",
Description: "export markets datastore key/values to a file",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Usage: "path to the repo",
},
&cli.StringFlag{
Name: "backup-dir",
Usage: "path to the backup directory",
},
},
Action: func(cctx *cli.Context) error {
logging.SetLogLevel("badger", "ERROR") // nolint:errcheck
// If the backup dir is not specified, just use the OS temp dir
backupDir := cctx.String("backup-dir")
if backupDir == "" {
backupDir = os.TempDir()
}
// Open the repo at the repo path
repoPath := cctx.String("repo")
lr, err := openLockedRepo(repoPath)
if err != nil {
return err
}
defer lr.Close() //nolint:errcheck
// Open the metadata datastore on the repo
ds, err := lr.Datastore(cctx.Context, datastore.NewKey(mktsMetadataNamespace).String())
if err != nil {
return xerrors.Errorf("opening datastore %s on repo %s: %w", mktsMetadataNamespace, repoPath, err)
}
// Create a tmp datastore that we'll add the exported key / values to
// and then backup
backupDsDir := path.Join(backupDir, "markets-backup-datastore")
if err := os.MkdirAll(backupDsDir, 0775); err != nil { //nolint:gosec
return xerrors.Errorf("creating tmp datastore directory: %w", err)
}
defer os.RemoveAll(backupDsDir) //nolint:errcheck
backupDs, err := levelds.NewDatastore(backupDsDir, &levelds.Options{
Compression: ldbopts.NoCompression,
NoSync: false,
Strict: ldbopts.StrictAll,
ReadOnly: false,
})
if err != nil {
return xerrors.Errorf("opening backup datastore at %s: %w", backupDir, err)
}
// Export the key / values
prefixes := []string{
"/deals/provider",
"/retrievals/provider",
"/storagemarket",
}
for _, prefix := range prefixes {
err := exportPrefix(prefix, ds, backupDs)
if err != nil {
return err
}
}
// Wrap the datastore in a backup datastore
bds, err := backupds.Wrap(backupDs, "")
if err != nil {
return xerrors.Errorf("opening backupds: %w", err)
}
// Create a file for the backup
fpath := path.Join(backupDir, "markets.datastore.backup")
out, err := os.OpenFile(fpath, os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return xerrors.Errorf("opening backup file %s: %w", fpath, err)
}
// Write the backup to the file
if err := bds.Backup(out); err != nil {
if cerr := out.Close(); cerr != nil {
log.Errorw("error closing backup file while handling backup error", "closeErr", cerr, "backupErr", err)
}
return xerrors.Errorf("backup error: %w", err)
}
if err := out.Close(); err != nil {
return xerrors.Errorf("closing backup file: %w", err)
}
fmt.Println("Wrote backup file to " + fpath)
return nil
},
}
func exportPrefix(prefix string, ds datastore.Batching, backupDs datastore.Batching) error {
q, err := ds.Query(dsq.Query{
Prefix: prefix,
})
if err != nil {
return xerrors.Errorf("datastore query: %w", err)
}
defer q.Close() //nolint:errcheck
for res := range q.Next() {
fmt.Println("Exporting key " + res.Key)
err := backupDs.Put(datastore.NewKey(res.Key), res.Value)
if err != nil {
return xerrors.Errorf("putting %s to backup datastore: %w", res.Key, err)
}
}
return nil
}
var marketImportDatastoreCmd = &cli.Command{
Name: "import-datastore",
Description: "import markets datastore key/values from a backup file",
Flags: []cli.Flag{
&cli.StringFlag{
Name: "repo",
Usage: "path to the repo",
},
&cli.StringFlag{
Name: "backup-path",
Usage: "path to the backup file",
Required: true,
},
},
Action: func(cctx *cli.Context) error {
logging.SetLogLevel("badger", "ERROR") // nolint:errcheck
backupPath := cctx.String("backup-path")
// Open the repo at the repo path
lr, err := openLockedRepo(cctx.String("repo"))
if err != nil {
return err
}
defer lr.Close() //nolint:errcheck
// Open the metadata datastore on the repo
repoDs, err := lr.Datastore(cctx.Context, datastore.NewKey(mktsMetadataNamespace).String())
if err != nil {
return err
}
r, err := os.Open(backupPath)
if err != nil {
return xerrors.Errorf("opening backup path %s: %w", backupPath, err)
}
fmt.Println("Importing from backup file " + backupPath)
err = backupds.RestoreInto(r, repoDs)
if err != nil {
return xerrors.Errorf("restoring backup from path %s: %w", backupPath, err)
}
fmt.Println("Completed importing from backup file " + backupPath)
return nil
},
}
func openLockedRepo(path string) (repo.LockedRepo, error) {
// Open the repo at the repo path
rpo, err := repo.NewFS(path)
if err != nil {
return nil, xerrors.Errorf("could not open repo %s: %w", path, err)
}
// Make sure the repo exists
exists, err := rpo.Exists()
if err != nil {
return nil, xerrors.Errorf("checking repo %s exists: %w", path, err)
}
if !exists {
return nil, xerrors.Errorf("repo does not exist: %s", path)
}
// Lock the repo
lr, err := rpo.Lock(repo.StorageMiner)
if err != nil {
return nil, xerrors.Errorf("locking repo %s: %w", path, err)
}
return lr, nil
}
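
The export command above copies only the three market-related key prefixes (`/deals/provider`, `/retrievals/provider`, `/storagemarket`) into a temporary LevelDB store, wraps it with `backupds`, and serializes it to `markets.datastore.backup`; `import-datastore` then restores that file into the repo's markets metadata namespace. As a rough sanity check after an import, something like the following sketch could count the restored keys per prefix. It reuses the same go-datastore query API as `exportPrefix`; the package and function names here are made up for illustration and are not part of lotus.

```go
// Illustrative only (not part of lotus): count the keys under a prefix with
// the same go-datastore query API that exportPrefix uses above. Running this
// against the repo's markets metadata datastore after import-datastore is one
// way to confirm that the expected prefixes were actually restored.
package marketsbackupcheck

import (
	"fmt"

	"github.com/ipfs/go-datastore"
	dsq "github.com/ipfs/go-datastore/query"
)

func countKeys(ds datastore.Batching, prefix string) (int, error) {
	q, err := ds.Query(dsq.Query{Prefix: prefix, KeysOnly: true})
	if err != nil {
		return 0, fmt.Errorf("datastore query: %w", err)
	}
	defer q.Close() //nolint:errcheck

	count := 0
	for res := range q.Next() {
		if res.Error != nil {
			return 0, res.Error
		}
		count++
	}
	return count, nil
}

// ReportMarketKeys prints a key count for each prefix the export command
// copies, so a before/after comparison is easy to eyeball.
func ReportMarketKeys(ds datastore.Batching) {
	for _, prefix := range []string{"/deals/provider", "/retrievals/provider", "/storagemarket"} {
		n, err := countKeys(ds, prefix)
		if err != nil {
			fmt.Printf("%s: error: %s\n", prefix, err)
			continue
		}
		fmt.Printf("%s: %d keys\n", prefix, n)
	}
}
```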

View File

@ -15,8 +15,6 @@ import (
"github.com/filecoin-project/lotus/chain/state"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
"github.com/filecoin-project/lotus/node/repo"
builtin4 "github.com/filecoin-project/specs-actors/v4/actors/builtin"
"github.com/filecoin-project/specs-actors/v4/actors/util/adt"
@ -76,7 +74,7 @@ var minerTypesCmd = &cli.Command{
return err
}
cs := store.NewChainStore(bs, bs, mds, vm.Syscalls(ffiwrapper.ProofVerifier), nil)
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
cst := cbor.NewCborStore(bs)

View File

@ -13,8 +13,6 @@ import (
badgerbs "github.com/filecoin-project/lotus/blockstore/badger"
"github.com/filecoin-project/lotus/chain/store"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
"github.com/filecoin-project/lotus/node/repo"
)
@ -161,7 +159,7 @@ var stateTreePruneCmd = &cli.Command{
if cctx.Bool("only-ds-gc") {
fmt.Println("running datastore gc....")
for i := 0; i < cctx.Int("gc-count"); i++ {
if err := badgbs.DB.RunValueLogGC(DiscardRatio); err != nil {
if err := badgbs.DB().RunValueLogGC(DiscardRatio); err != nil {
return xerrors.Errorf("datastore GC failed: %w", err)
}
}
@ -169,7 +167,7 @@ var stateTreePruneCmd = &cli.Command{
return nil
}
cs := store.NewChainStore(bs, bs, mds, vm.Syscalls(ffiwrapper.ProofVerifier), nil)
cs := store.NewChainStore(bs, bs, mds, nil)
defer cs.Close() //nolint:errcheck
if err := cs.Load(); err != nil {
@ -208,7 +206,7 @@ var stateTreePruneCmd = &cli.Command{
return nil
}
b := badgbs.DB.NewWriteBatch()
b := badgbs.DB().NewWriteBatch()
defer b.Cancel()
markForRemoval := func(c cid.Cid) error {
@ -249,7 +247,7 @@ var stateTreePruneCmd = &cli.Command{
fmt.Println("running datastore gc....")
for i := 0; i < cctx.Int("gc-count"); i++ {
if err := badgbs.DB.RunValueLogGC(DiscardRatio); err != nil {
if err := badgbs.DB().RunValueLogGC(DiscardRatio); err != nil {
return xerrors.Errorf("datastore GC failed: %w", err)
}
}

View File

@ -83,7 +83,7 @@ func NewBlockBuilder(ctx context.Context, logger *zap.SugaredLogger, sm *stmgr.S
Epoch: parentTs.Height() + 1,
Rand: r,
Bstore: sm.ChainStore().StateBlockstore(),
Syscalls: sm.ChainStore().VMSys(),
Syscalls: sm.VMSys(),
CircSupplyCalc: sm.GetVMCirculatingSupply,
NtwkVersion: sm.GetNtwkVersion,
BaseFee: abi.NewTokenAmount(0),

View File

@ -61,7 +61,7 @@ func NewNode(ctx context.Context, r repo.Repo) (nd *Node, _err error) {
}
return &Node{
repo: lr,
Chainstore: store.NewChainStore(bs, bs, ds, vm.Syscalls(mock.Verifier), nil),
Chainstore: store.NewChainStore(bs, bs, ds, nil),
MetadataDS: ds,
Blockstore: bs,
}, err
@ -105,7 +105,7 @@ func (nd *Node) LoadSim(ctx context.Context, name string) (*Simulation, error) {
if err != nil {
return nil, xerrors.Errorf("failed to create upgrade schedule for simulation %s: %w", name, err)
}
sim.StateManager, err = stmgr.NewStateManagerWithUpgradeSchedule(nd.Chainstore, us)
sim.StateManager, err = stmgr.NewStateManagerWithUpgradeSchedule(nd.Chainstore, vm.Syscalls(mock.Verifier), us)
if err != nil {
return nil, xerrors.Errorf("failed to create state manager for simulation %s: %w", name, err)
}
@ -127,7 +127,7 @@ func (nd *Node) CreateSim(ctx context.Context, name string, head *types.TipSet)
sim := &Simulation{
name: name,
Node: nd,
StateManager: stmgr.NewStateManager(nd.Chainstore),
StateManager: stmgr.NewStateManager(nd.Chainstore, vm.Syscalls(mock.Verifier)),
stages: stages,
}
if has, err := nd.MetadataDS.Has(sim.key("head")); err != nil {
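
The recurring edit in these hunks is a constructor refactor: `store.NewChainStore` no longer takes a syscall builder, which now goes to `stmgr.NewStateManager` / `NewStateManagerWithUpgradeSchedule` instead. A minimal sketch of the call-site migration follows; the argument order comes from the diff itself, while the parameter types and the helper name are my assumptions for illustration, not lotus code.

```go
// Sketch of the call-site migration implied by these hunks. The constructor
// argument order is taken from the diff; the parameter types and the helper
// name are assumptions for illustration.
package chainsetup

import (
	"github.com/filecoin-project/lotus/blockstore"
	"github.com/filecoin-project/lotus/chain/stmgr"
	"github.com/filecoin-project/lotus/chain/store"
	"github.com/filecoin-project/lotus/chain/vm"
	"github.com/filecoin-project/lotus/extern/sector-storage/ffiwrapper"
	"github.com/filecoin-project/lotus/journal"
	"github.com/ipfs/go-datastore"
)

func newChainAndStateManager(bs blockstore.Blockstore, mds datastore.Batching, j journal.Journal) (*store.ChainStore, *stmgr.StateManager) {
	syscalls := vm.Syscalls(ffiwrapper.ProofVerifier)

	// Before: store.NewChainStore(bs, bs, mds, syscalls, j)
	cs := store.NewChainStore(bs, bs, mds, j)

	// Before: stmgr.NewStateManager(cs); the syscall builder moves here.
	sm := stmgr.NewStateManager(cs, syscalls)

	return cs, sm
}
```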

View File

@ -18,6 +18,8 @@ import (
"github.com/filecoin-project/lotus/chain/stmgr"
"github.com/filecoin-project/lotus/chain/types"
"github.com/filecoin-project/lotus/chain/vm"
"github.com/filecoin-project/lotus/cmd/lotus-sim/simulation/mock"
"github.com/filecoin-project/lotus/cmd/lotus-sim/simulation/stages"
)
@ -198,7 +200,7 @@ func (sim *Simulation) SetUpgradeHeight(nv network.Version, epoch abi.ChainEpoch
if err != nil {
return err
}
sm, err := stmgr.NewStateManagerWithUpgradeSchedule(sim.Node.Chainstore, newUpgradeSchedule)
sm, err := stmgr.NewStateManagerWithUpgradeSchedule(sim.Node.Chainstore, vm.Syscalls(mock.Verifier), newUpgradeSchedule)
if err != nil {
return err
}

View File

@ -1,134 +0,0 @@
package main
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"time"
rice "github.com/GeertJohan/go.rice"
"github.com/gorilla/websocket"
"github.com/ipld/go-car"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p-core/peer"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/filecoin-project/lotus/blockstore"
"github.com/filecoin-project/lotus/build"
)
var topic = "/fil/headnotifs/"
func init() {
genBytes := build.MaybeGenesis()
if len(genBytes) == 0 {
topic = ""
return
}
bs := blockstore.NewMemory()
c, err := car.LoadCar(bs, bytes.NewReader(genBytes))
if err != nil {
panic(err)
}
if len(c.Roots) != 1 {
panic("expected genesis file to have one root")
}
fmt.Printf("Genesis CID: %s\n", c.Roots[0])
topic = topic + c.Roots[0].String()
}
var upgrader = websocket.Upgrader{
WriteBufferSize: 1024,
CheckOrigin: func(r *http.Request) bool {
return true
},
}
func main() {
if topic == "" {
fmt.Println("FATAL: No genesis found")
return
}
ctx := context.Background()
host, err := libp2p.New(
ctx,
libp2p.Defaults,
)
if err != nil {
panic(err)
}
ps, err := pubsub.NewGossipSub(ctx, host)
if err != nil {
panic(err)
}
pi, err := build.BuiltinBootstrap()
if err != nil {
panic(err)
}
if err := host.Connect(ctx, pi[0]); err != nil {
panic(err)
}
http.HandleFunc("/sub", handler(ps))
http.Handle("/", http.FileServer(rice.MustFindBox("townhall/build").HTTPBox()))
fmt.Println("listening on http://localhost:2975")
if err := http.ListenAndServe("0.0.0.0:2975", nil); err != nil {
panic(err)
}
}
type update struct {
From peer.ID
Update json.RawMessage
Time uint64
}
func handler(ps *pubsub.PubSub) func(w http.ResponseWriter, r *http.Request) {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
if r.Header.Get("Sec-WebSocket-Protocol") != "" {
w.Header().Set("Sec-WebSocket-Protocol", r.Header.Get("Sec-WebSocket-Protocol"))
}
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
return
}
sub, err := ps.Subscribe(topic) //nolint
if err != nil {
return
}
defer sub.Cancel() //nolint:errcheck
fmt.Println("new conn")
for {
msg, err := sub.Next(r.Context())
if err != nil {
return
}
//fmt.Println(msg)
if err := conn.WriteJSON(update{
From: peer.ID(msg.From),
Update: msg.Data,
Time: uint64(time.Now().UnixNano() / 1000_000),
}); err != nil {
return
}
}
}
}

View File

@ -1,23 +0,0 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
# dependencies
/node_modules
/.pnp
.pnp.js
# testing
/coverage
# production
/build
# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

View File

@ -1,31 +0,0 @@
{
"name": "townhall",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.10.2",
"react-dom": "^16.10.2",
"react-scripts": "3.2.0"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test",
"eject": "react-scripts eject"
},
"eslintConfig": {
"extends": "react-app"
},
"browserslist": {
"production": [
">0.2%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}

View File

@ -1,13 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="theme-color" content="#1a1a1a" />
<title>Lotus TownHall</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
</body>
</html>

View File

@ -1,2 +0,0 @@
# https://www.robotstxt.org/robotstxt.html
User-agent: *

View File

@ -1 +0,0 @@

View File

@ -1,87 +0,0 @@
import React from 'react';
import './App.css';
function colForH(besth, height) {
const diff = besth - height
if(diff === 0) return '#6f6'
if(diff === 1) return '#df4'
if(diff < 4) return '#ff0'
if(diff < 10) return '#f60'
return '#f00'
}
function colLag(lag) {
if(lag < 100) return '#6f6'
if(lag < 400) return '#df4'
if(lag < 1000) return '#ff0'
if(lag < 4000) return '#f60'
return '#f00'
}
function lagCol(lag, good) {
return <span>
<span style={{color: colLag(lag)}}>{lag}</span>
<span style={{color: good ? '#f0f0f0' : '#f60'}}>ms</span>
</span>
}
class App extends React.Component {
constructor(props) {
super(props);
let ws = new WebSocket("ws://" + window.location.host + "/sub")
//let ws = new WebSocket("ws://127.0.0.1:2975/sub")
ws.onmessage = (ev) => {
console.log(ev)
let update = JSON.parse(ev.data)
update.Update.Weight = Number(update.Update.Weight)
let wdiff = update.Update.Weight - (this.state[update.From] || {Weight: update.Update.Weight}).Weight
wdiff = <span style={{color: wdiff < 0 ? '#f00' : '#f0f0f0'}}>{wdiff}</span>
let utDiff = update.Time - (this.state[update.From] || {utime: update.Time}).utime
utDiff = <span style={{color: utDiff < 0 ? '#f00' : '#f0f0f0'}}>{utDiff}ms</span>
this.setState( prev => ({
...prev, [update.From]: {...update.Update, utime: update.Time, wdiff: wdiff, utDiff: utDiff},
}))
}
ws.onclose = () => {
this.setState({disconnected: true})
}
this.state = {}
}
render() {
if(this.state.disconnected) {
return <span>Error: disconnected</span>
}
let besth = Object.keys(this.state).map(k => this.state[k]).reduce((p, n) => p > n.Height ? p : n.Height, -1)
let bestw = Object.keys(this.state).map(k => this.state[k]).reduce((p, n) => p > n.Weight ? p : n.Weight, -1)
return <table>
<tr><td>PeerID</td><td>Nickname</td><td>Lag</td><td>Weight(best, prev)</td><td>Height</td><td>Blocks</td></tr>
{Object.keys(this.state).map(k => [k, this.state[k]]).map(([k, v]) => {
let mnrs = v.Blocks.map(b => <td>&nbsp;m:{b.Miner}({lagCol(v.Time ? v.Time - (b.Timestamp*1000) : v.utime - (b.Timestamp*1000), v.Time)})</td>)
let l = [
<td>{k}</td>,
<td>{v.NodeName}</td>,
<td>{v.Time ? lagCol(v.utime - v.Time, true) : ""}(Δ{v.utDiff})</td>,
<td style={{color: bestw !== v.Weight ? '#f00' : '#afa'}}>{v.Weight}({bestw - v.Weight}, {v.wdiff})</td>,
<td style={{color: colForH(besth, v.Height)}}>{v.Height}({besth - v.Height})</td>,
...mnrs,
]
l = <tr>{l}</tr>
return l
})
}
</table>
}
}
export default App;

View File

@ -1,9 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';
it('renders without crashing', () => {
const div = document.createElement('div');
ReactDOM.render(<App />, div);
ReactDOM.unmountComponentAtNode(div);
});

View File

@ -1,6 +0,0 @@
body {
margin: 0;
font-family: monospace;
background: #1f1f1f;
color: #f0f0f0;
}

View File

@ -1,6 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
ReactDOM.render(<App />, document.getElementById('root'));

View File

@ -481,7 +481,7 @@ func ImportChain(ctx context.Context, r repo.Repo, fname string, snapshot bool)
return xerrors.Errorf("failed to open journal: %w", err)
}
cst := store.NewChainStore(bs, bs, mds, vm.Syscalls(ffiwrapper.ProofVerifier), j)
cst := store.NewChainStore(bs, bs, mds, j)
defer cst.Close() //nolint:errcheck
log.Infof("importing chain from %s...", fname)
@ -517,7 +517,7 @@ func ImportChain(ctx context.Context, r repo.Repo, fname string, snapshot bool)
return err
}
stm := stmgr.NewStateManager(cst)
stm := stmgr.NewStateManager(cst, vm.Syscalls(ffiwrapper.ProofVerifier))
if !snapshot {
log.Infof("validating imported chain...")

View File

@ -12,6 +12,7 @@ import (
"github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
lcli "github.com/filecoin-project/lotus/cli"
cliutil "github.com/filecoin-project/lotus/cli/util"
"github.com/filecoin-project/lotus/lib/lotuslog"
"github.com/filecoin-project/lotus/lib/tracing"
"github.com/filecoin-project/lotus/node/repo"
@ -81,6 +82,7 @@ func main() {
Name: "force-send",
Usage: "if true, will ignore pre-send checks",
},
cliutil.FlagVeryVerbose,
},
Commands: append(local, lcli.Commands...),

View File

@ -101,8 +101,8 @@ func (d *Driver) ExecuteTipset(bs blockstore.Blockstore, ds ds.Batching, params
tipset = params.Tipset
syscalls = vm.Syscalls(ffiwrapper.ProofVerifier)
cs = store.NewChainStore(bs, bs, ds, syscalls, nil)
sm = stmgr.NewStateManager(cs)
cs = store.NewChainStore(bs, bs, ds, nil)
sm = stmgr.NewStateManager(cs, syscalls)
)
if params.Rand == nil {
@ -196,7 +196,7 @@ func (d *Driver) ExecuteMessage(bs blockstore.Blockstore, params ExecuteMessageP
// dummy state manager; only to reference the GetNetworkVersion method,
// which does not depend on state.
sm := stmgr.NewStateManager(nil)
sm := stmgr.NewStateManager(nil, nil)
vmOpts := &vm.VMOpts{
StateBase: params.Preroot,

View File

@ -94,6 +94,8 @@
* [ReturnSealPreCommit1](#ReturnSealPreCommit1)
* [ReturnSealPreCommit2](#ReturnSealPreCommit2)
* [ReturnUnsealPiece](#ReturnUnsealPiece)
* [Runtime](#Runtime)
* [RuntimeSubsystems](#RuntimeSubsystems)
* [Sealing](#Sealing)
* [SealingAbort](#SealingAbort)
* [SealingSchedDiag](#SealingSchedDiag)
@ -1522,6 +1524,28 @@ Inputs:
Response: `{}`
## Runtime
### RuntimeSubsystems
RuntimeSubsystems returns the subsystems that are enabled
in this instance.
Perms: read
Inputs: `null`
Response:
```json
[
"Mining",
"Sealing",
"SectorStorage",
"Markets"
]
```
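
Since this is a read-only call with no parameters, it can be exercised with a bare JSON-RPC request. The sketch below assumes the usual lotus conventions (the `Filecoin.` method namespace, the default miner API endpoint `127.0.0.1:2345/rpc/v0`, and a read-permission token in an environment variable), none of which are stated on this page; treat them as placeholders.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Minimal sketch: call the new RuntimeSubsystems endpoint over raw JSON-RPC.
// The URL, token variable and "Filecoin." method prefix follow common lotus
// conventions and are assumptions here, not taken from this page.
func main() {
	body := []byte(`{"jsonrpc":"2.0","method":"Filecoin.RuntimeSubsystems","params":[],"id":1}`)

	req, err := http.NewRequest("POST", "http://127.0.0.1:2345/rpc/v0", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("MINER_API_TOKEN"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close() //nolint:errcheck

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The result field should list the enabled subsystems, e.g.
	// ["Mining","Sealing","SectorStorage","Markets"] as documented above.
	fmt.Println(string(out))
}
```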
## Sealing

View File

@ -7,7 +7,7 @@ USAGE:
lotus-miner [global options] command [command options] [arguments...]
VERSION:
1.11.1-dev
1.11.2-dev
COMMANDS:
init Initialize a lotus miner repo
@ -43,6 +43,7 @@ GLOBAL OPTIONS:
--actor value, -a value specify other actor to check state for (read only)
--color use color in display output (default: depends on output being a TTY)
--miner-repo value, --storagerepo value Specify miner repo path. flag(storagerepo) and env(LOTUS_STORAGE_PATH) are DEPRECATION, will REMOVE SOON (default: "~/.lotusminer") [$LOTUS_MINER_PATH, $LOTUS_STORAGE_PATH]
--vv enables very verbose mode, useful for debugging the CLI (default: false)
--help, -h show help (default: false)
--version, -v print the version (default: false)
```

View File

@ -7,7 +7,7 @@ USAGE:
lotus-worker [global options] command [command options] [arguments...]
VERSION:
1.11.1-dev
1.11.2-dev
COMMANDS:
run Start lotus worker

View File

@ -7,7 +7,7 @@ USAGE:
lotus [global options] command [command options] [arguments...]
VERSION:
1.11.1-dev
1.11.2-dev
COMMANDS:
daemon Start a lotus daemon process
@ -39,6 +39,7 @@ COMMANDS:
GLOBAL OPTIONS:
--interactive setting to false will disable interactive functionality of commands (default: false)
--force-send if true, will ignore pre-send checks (default: false)
--vv enables very verbose mode, useful for debugging the CLI (default: false)
--help, -h show help (default: false)
--version, -v print the version (default: false)
```

View File

@ -63,24 +63,15 @@ Testing an RC:
- [ ] (optional) let a sector go faulty, and see it be recovered
- [ ] **Stage 2 - Community Testing**
- [ ] Inform beta miners (@lotus-early-testers-miner in Filecoin Slack #fil-lotus)
- [ ] Ask close ecosystem partners to test their projects (@lotus-early-testers-eco-dev in Filecoin slack #fil-lotus)
- [ ] Powergate
- [ ] Glif
- [ ] Zondax
- [ ] Stats dashboard
- [ ] Community dashboards
- [ ] Infura
- [ ] Sentinel
- [ ] Protofire
- [ ] Fleek
- [ ] Inform beta lotus users (@lotus-early-testers in Filecoin Slack #fil-lotus)
- [ ] **Stage 3 - Community Prod Testing**
- [ ] Documentation
- [ ] Ensure that [CHANGELOG.md](https://github.com/filecoin-project/lotus/blob/master/CHANGELOG.md) is up to date
- [ ] Check if any [config](https://docs.filecoin.io/get-started/lotus/configuration-and-advanced-usage/#configuration) updates are needed
- [ ] Invite the wider community through (link to the release issue):
- [ ] Check `Create a discussion for this release` when tagging for the major rcs(new features, hot-fixes) release
- [ ] Check `Create a discussion for this release` when tagging for the major/close-to-final rcs(new features, hot-fixes) release
- [ ] Link the discussion in #fil-lotus on Filecoin slack
- [ ] **Stage 4 - Release**
@ -91,11 +82,10 @@ Testing an RC:
- [ ] Merge `release-vX.Y.Z` into the `releases` branch.
- [ ] Tag this merge commit (on the `releases` branch) with `vX.Y.Z`
- [ ] Cut the release [here](https://github.com/filecoin-project/lotus/releases/new?prerelease=true&target=releases).
- [ ] Check `Create a discussion for this release` when tagging the release
- [ ] Final announcements
- [ ] Update network.filecoin.io for mainnet, calib and nerpa.
- [ ] repost in #fil-lotus in filecoin slack
- [ ] Inform node provides (Protofire, Digital Ocean..)
- [ ] repost in #fil-lotus-announcement in filecoin slack
- [ ] Inform node providers (Protofire, Digital Ocean..)
- [ ] **Post-Release**
- [ ] Merge the `releases` branch back into `master`, ignoring the changes to `version.go` (keep the `-dev` version from master). Do NOT delete the `releases` branch when doing so!
@ -104,11 +94,7 @@ Testing an RC:
## ❤️ Contributors
< list generated by scripts/mkreleaselog >
Would you like to contribute to Lotus and don't know how? Well, there are a few places you can get started:
- TODO
See the final release notes!
## ⁉️ Do you have questions?

extern/filecoin-ffi vendored

@ -1 +1 @@
Subproject commit d60fc680aa8abeafba698f738fed5b94c9bda33d
Subproject commit a7b3c2e695393fd716e9265ff8cba932a3e38dd4

View File

@ -93,27 +93,29 @@ func checkPrecommit(ctx context.Context, maddr address.Address, si SectorInfo, t
return &ErrBadCommD{xerrors.Errorf("on chain CommD differs from sector: %s != %s", commD, si.CommD)}
}
ticketEarliest := height - policy.MaxPreCommitRandomnessLookback
if si.TicketEpoch < ticketEarliest {
return &ErrExpiredTicket{xerrors.Errorf("ticket expired: seal height: %d, head: %d", si.TicketEpoch+policy.SealRandomnessLookback, height)}
}
pci, err := api.StateSectorPreCommitInfo(ctx, maddr, si.SectorNumber, tok)
if err != nil {
if err == ErrSectorAllocated {
// the P2 message was committed but the C2 commit message was too late; pci should be null in this case
return &ErrSectorNumberAllocated{err}
}
return &ErrApi{xerrors.Errorf("getting precommit info: %w", err)}
}
if pci != nil {
// committed P2 message
if pci.Info.SealRandEpoch != si.TicketEpoch {
return &ErrBadTicket{xerrors.Errorf("bad ticket epoch: %d != %d", pci.Info.SealRandEpoch, si.TicketEpoch)}
}
return &ErrPrecommitOnChain{xerrors.Errorf("precommit already on chain")}
}
// no P2 message was ever committed before, so check ticket expiration
ticketEarliest := height - policy.MaxPreCommitRandomnessLookback
if si.TicketEpoch < ticketEarliest {
return &ErrExpiredTicket{xerrors.Errorf("ticket expired: seal height: %d, head: %d", si.TicketEpoch+policy.SealRandomnessLookback, height)}
}
return nil
}
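
This hunk moves the ticket-expiration check below the `StateSectorPreCommitInfo` lookup, so a sector whose precommit message already landed is reported as `ErrPrecommitOnChain` (or as a bad ticket / already-allocated sector number) rather than as an expired ticket. Because the interleaved before/after lines above are hard to read, here is a minimal self-contained sketch of the resulting decision order, with the lotus types flattened to plain values; the function and parameter names are made up for illustration.

```go
package main

import "fmt"

// Illustrative only: the decision order after this change. precommitOnChain
// corresponds to a non-nil pci returned by StateSectorPreCommitInfo.
func checkTicketAfterPrecommitLookup(precommitOnChain bool, onChainSealRandEpoch, ticketEpoch, head, maxLookback int64) error {
	if precommitOnChain {
		// The P2 message already landed: validate the ticket it actually
		// used; there is nothing left to expire.
		if onChainSealRandEpoch != ticketEpoch {
			return fmt.Errorf("bad ticket epoch: %d != %d", onChainSealRandEpoch, ticketEpoch)
		}
		return fmt.Errorf("precommit already on chain")
	}
	// No precommit on chain yet: the ticket must still be within the allowed
	// randomness lookback.
	if ticketEpoch < head-maxLookback {
		return fmt.Errorf("ticket expired: ticket epoch %d, head %d", ticketEpoch, head)
	}
	return nil
}

func main() {
	fmt.Println(checkTicketAfterPrecommitLookup(false, 0, 100, 2000, 900))   // ticket expired
	fmt.Println(checkTicketAfterPrecommitLookup(true, 100, 100, 2000, 900)) // precommit already on chain
}
```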

View File

@ -3,11 +3,13 @@ package sealing
import (
"context"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/go-state-types/network"
"golang.org/x/xerrors"
"github.com/filecoin-project/go-state-types/abi"
"github.com/filecoin-project/go-state-types/network"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/policy"
)
type PreCommitPolicy interface {
@ -34,21 +36,23 @@ type Chain interface {
// If we're in Mode 2: The pre-commit expiration epoch will be set to the
// current epoch + the provided default duration.
type BasicPreCommitPolicy struct {
api Chain
api Chain
getSealingConfig GetSealingConfigFunc
provingBoundary abi.ChainEpoch
duration abi.ChainEpoch
provingBuffer abi.ChainEpoch
}
// NewBasicPreCommitPolicy produces a BasicPreCommitPolicy.
//
// The provided duration is used as the default sector expiry when the sector
// contains no deals. The proving boundary is used to adjust/align the sector's expiration.
func NewBasicPreCommitPolicy(api Chain, duration abi.ChainEpoch, provingBoundary abi.ChainEpoch) BasicPreCommitPolicy {
func NewBasicPreCommitPolicy(api Chain, cfgGetter GetSealingConfigFunc, provingBoundary abi.ChainEpoch, provingBuffer abi.ChainEpoch) BasicPreCommitPolicy {
return BasicPreCommitPolicy{
api: api,
provingBoundary: provingBoundary,
duration: duration,
api: api,
getSealingConfig: cfgGetter,
provingBoundary: provingBoundary,
provingBuffer: provingBuffer,
}
}
@ -79,7 +83,13 @@ func (p *BasicPreCommitPolicy) Expiration(ctx context.Context, ps ...Piece) (abi
}
if end == nil {
tmp := epoch + p.duration
// no deal pieces, get expiration for committed capacity sector
expirationDuration, err := p.getCCSectorLifetime()
if err != nil {
return 0, err
}
tmp := epoch + expirationDuration
end = &tmp
}
@ -87,3 +97,27 @@ func (p *BasicPreCommitPolicy) Expiration(ctx context.Context, ps ...Piece) (abi
return *end, nil
}
func (p *BasicPreCommitPolicy) getCCSectorLifetime() (abi.ChainEpoch, error) {
c, err := p.getSealingConfig()
if err != nil {
return 0, xerrors.Errorf("sealing config load error: %w", err)
}
var ccLifetimeEpochs = abi.ChainEpoch(uint64(c.CommittedCapacitySectorLifetime.Seconds()) / builtin.EpochDurationSeconds)
// if zero value in config, assume maximum sector extension
if ccLifetimeEpochs == 0 {
ccLifetimeEpochs = policy.GetMaxSectorExpirationExtension()
}
if minExpiration := abi.ChainEpoch(miner.MinSectorExpiration); ccLifetimeEpochs < minExpiration {
log.Warnf("value for CommittedCapacitySectorLiftime is too short, using default minimum (%d epochs)", minExpiration)
return minExpiration, nil
}
if maxExpiration := policy.GetMaxSectorExpirationExtension(); ccLifetimeEpochs > maxExpiration {
log.Warnf("value for CommittedCapacitySectorLiftime is too long, using default maximum (%d epochs)", maxExpiration)
return maxExpiration, nil
}
return (ccLifetimeEpochs - p.provingBuffer), nil
}
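
`getCCSectorLifetime` converts the configured duration to epochs by dividing its seconds by the chain's epoch duration, clamps the result between the minimum sector expiration and the maximum extension, and finally subtracts the proving buffer. The small self-contained sketch below reproduces that arithmetic under assumed mainnet constants (30-second epochs, 180-day minimum, 540-day maximum); these values come from the builtin/policy packages rather than from this diff, and the function name is illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// Assumed mainnet constants (not stated in this diff).
const (
	epochDurationSeconds = 30                                            // builtin.EpochDurationSeconds
	maxExtensionEpochs   = 540 * 24 * 60 * 60 / epochDurationSeconds     // 1_555_200 (540 days)
	minExpirationEpochs  = 518_400                                       // miner.MinSectorExpiration (180 days)
)

// ccSectorLifetimeEpochs mirrors the clamping logic of getCCSectorLifetime.
func ccSectorLifetimeEpochs(configured time.Duration, provingBuffer int64) int64 {
	epochs := int64(configured.Seconds()) / epochDurationSeconds
	if epochs == 0 {
		epochs = maxExtensionEpochs // zero config value means "use the maximum"
	}
	if epochs < minExpirationEpochs {
		return minExpirationEpochs // too short: fall back to the minimum
	}
	if epochs > maxExtensionEpochs {
		return maxExtensionEpochs // too long: cap at the maximum extension
	}
	return epochs - provingBuffer
}

func main() {
	// 200 days, as in the test below: 200*24*3600/30 = 576000 epochs,
	// minus the 2-epoch proving buffer.
	fmt.Println(ccSectorLifetimeEpochs(200*24*time.Hour, 2)) // 575998
}
```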

View File

@ -3,10 +3,16 @@ package sealing_test
import (
"context"
"testing"
"time"
"github.com/filecoin-project/go-state-types/network"
api "github.com/filecoin-project/lotus/api"
"github.com/filecoin-project/lotus/build"
"github.com/filecoin-project/lotus/chain/actors/builtin"
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
"github.com/filecoin-project/lotus/chain/actors/policy"
sealing "github.com/filecoin-project/lotus/extern/storage-sealing"
"github.com/filecoin-project/lotus/extern/storage-sealing/sealiface"
"github.com/ipfs/go-cid"
"github.com/stretchr/testify/assert"
@ -14,14 +20,28 @@ import (
commcid "github.com/filecoin-project/go-fil-commcid"
"github.com/filecoin-project/go-state-types/abi"
sealing "github.com/filecoin-project/lotus/extern/storage-sealing"
)
type fakeChain struct {
h abi.ChainEpoch
}
type fakeConfigStub struct {
CCSectorLifetime time.Duration
}
func fakeConfigGetter(stub *fakeConfigStub) sealing.GetSealingConfigFunc {
return func() (sealiface.Config, error) {
if stub == nil {
return sealiface.Config{}, nil
}
return sealiface.Config{
CommittedCapacitySectorLifetime: stub.CCSectorLifetime,
}, nil
}
}
func (f *fakeChain) StateNetworkVersion(ctx context.Context, tok sealing.TipSetToken) (network.Version, error) {
return build.NewestNetworkVersion, nil
}
@ -38,21 +58,49 @@ func fakePieceCid(t *testing.T) cid.Cid {
}
func TestBasicPolicyEmptySector(t *testing.T) {
policy := sealing.NewBasicPreCommitPolicy(&fakeChain{
h: abi.ChainEpoch(55),
}, 10, 0)
cfg := fakeConfigGetter(nil)
h := abi.ChainEpoch(55)
pBoundary := abi.ChainEpoch(0)
pBuffer := abi.ChainEpoch(2)
pcp := sealing.NewBasicPreCommitPolicy(&fakeChain{h: h}, cfg, pBoundary, pBuffer)
exp, err := pcp.Expiration(context.Background())
exp, err := policy.Expiration(context.Background())
require.NoError(t, err)
assert.Equal(t, 2879, int(exp))
// as set when there are no deal pieces
expected := h + policy.GetMaxSectorExpirationExtension() - (pBuffer * 2)
// as set just before returning within Expiration()
expected += miner.WPoStProvingPeriod - (expected % miner.WPoStProvingPeriod) + pBoundary - 1
assert.Equal(t, int(expected), int(exp))
}
func TestCustomCCSectorConfig(t *testing.T) {
customLifetime := 200 * 24 * time.Hour
customLifetimeEpochs := abi.ChainEpoch(int64(customLifetime.Seconds()) / builtin.EpochDurationSeconds)
cfgStub := fakeConfigStub{CCSectorLifetime: customLifetime}
cfg := fakeConfigGetter(&cfgStub)
h := abi.ChainEpoch(55)
pBoundary := abi.ChainEpoch(0)
pBuffer := abi.ChainEpoch(2)
pcp := sealing.NewBasicPreCommitPolicy(&fakeChain{h: h}, cfg, pBoundary, pBuffer)
exp, err := pcp.Expiration(context.Background())
require.NoError(t, err)
// as set when there are no deal pieces
expected := h + customLifetimeEpochs - (pBuffer * 2)
// as set just before returning within Expiration()
expected += miner.WPoStProvingPeriod - (expected % miner.WPoStProvingPeriod) + pBoundary - 1
assert.Equal(t, int(expected), int(exp))
}
func TestBasicPolicyMostConstrictiveSchedule(t *testing.T) {
cfg := fakeConfigGetter(nil)
pPeriod := abi.ChainEpoch(11)
policy := sealing.NewBasicPreCommitPolicy(&fakeChain{
h: abi.ChainEpoch(55),
}, 100, 11)
}, cfg, pPeriod, 2)
longestDealEpochEnd := abi.ChainEpoch(100)
pieces := []sealing.Piece{
{
Piece: abi.PieceInfo{
@ -76,7 +124,7 @@ func TestBasicPolicyMostConstrictiveSchedule(t *testing.T) {
DealID: abi.DealID(43),
DealSchedule: api.DealSchedule{
StartEpoch: abi.ChainEpoch(80),
EndEpoch: abi.ChainEpoch(100),
EndEpoch: longestDealEpochEnd,
},
},
},
@ -85,13 +133,15 @@ func TestBasicPolicyMostConstrictiveSchedule(t *testing.T) {
exp, err := policy.Expiration(context.Background(), pieces...)
require.NoError(t, err)
assert.Equal(t, 2890, int(exp))
expected := longestDealEpochEnd + miner.WPoStProvingPeriod - (longestDealEpochEnd % miner.WPoStProvingPeriod) + pPeriod - 1
assert.Equal(t, int(expected), int(exp))
}
func TestBasicPolicyIgnoresExistingScheduleIfExpired(t *testing.T) {
cfg := fakeConfigGetter(nil)
policy := sealing.NewBasicPreCommitPolicy(&fakeChain{
h: abi.ChainEpoch(55),
}, 100, 0)
}, cfg, 0, 0)
pieces := []sealing.Piece{
{
@ -112,13 +162,14 @@ func TestBasicPolicyIgnoresExistingScheduleIfExpired(t *testing.T) {
exp, err := policy.Expiration(context.Background(), pieces...)
require.NoError(t, err)
assert.Equal(t, 2879, int(exp))
assert.Equal(t, 1558079, int(exp))
}
func TestMissingDealIsIgnored(t *testing.T) {
cfg := fakeConfigGetter(nil)
policy := sealing.NewBasicPreCommitPolicy(&fakeChain{
h: abi.ChainEpoch(55),
}, 100, 11)
}, cfg, 11, 0)
pieces := []sealing.Piece{
{
@ -146,5 +197,5 @@ func TestMissingDealIsIgnored(t *testing.T) {
exp, err := policy.Expiration(context.Background(), pieces...)
require.NoError(t, err)
assert.Equal(t, 2890, int(exp))
assert.Equal(t, 1558090, int(exp))
}
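
The updated assertions no longer hard-code expiration heights pulled from a fixture; they re-derive them the same way `Expiration` does, taking the raw end epoch (deal end, or current height plus the CC lifetime minus buffers), rounding it up past the next proving-period boundary, and adding the proving boundary offset minus one. The sketch below re-checks the literal values that remain in the tests under assumed mainnet constants (2880-epoch proving period, 540-day / 1,555,200-epoch maximum extension, taken from the actors packages rather than from this diff).

```go
package main

import "fmt"

// Assumed mainnet constants (from the actors packages, not this diff).
const (
	wpostProvingPeriod = 2880      // one day of 30-second epochs
	maxExtension       = 1_555_200 // 540 days of 30-second epochs
)

// alignToProvingPeriod mirrors the tail of Expiration(): round the raw end
// epoch up past the next proving-period boundary, offset by provingBoundary-1.
func alignToProvingPeriod(end, provingBoundary int64) int64 {
	return end + wpostProvingPeriod - (end % wpostProvingPeriod) + provingBoundary - 1
}

func main() {
	// TestBasicPolicyIgnoresExistingScheduleIfExpired: h=55, buffer=0, boundary=0.
	fmt.Println(alignToProvingPeriod(55+maxExtension, 0)) // 1558079

	// TestMissingDealIsIgnored: h=55, buffer=0, boundary=11.
	fmt.Println(alignToProvingPeriod(55+maxExtension, 11)) // 1558090

	// TestBasicPolicyMostConstrictiveSchedule: deal ends at 100, boundary=11.
	fmt.Println(alignToProvingPeriod(100, 11)) // 2890
}
```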

View File

@ -20,6 +20,8 @@ type Config struct {
WaitDealsDelay time.Duration
CommittedCapacitySectorLifetime time.Duration
AlwaysKeepUnsealedCopy bool
FinalizeEarly bool

go.mod
View File

@ -33,16 +33,16 @@ require (
github.com/filecoin-project/go-cbor-util v0.0.0-20191219014500-08c40a1e63a2
github.com/filecoin-project/go-commp-utils v0.1.1-0.20210427191551-70bf140d31c7
github.com/filecoin-project/go-crypto v0.0.0-20191218222705-effae4ea9f03
github.com/filecoin-project/go-data-transfer v1.7.0
github.com/filecoin-project/go-data-transfer v1.7.1
github.com/filecoin-project/go-fil-commcid v0.1.0
github.com/filecoin-project/go-fil-commp-hashhash v0.1.0
github.com/filecoin-project/go-fil-markets v1.6.0
github.com/filecoin-project/go-fil-markets v1.6.1
github.com/filecoin-project/go-jsonrpc v0.1.4-0.20210217175800-45ea43ac2bec
github.com/filecoin-project/go-multistore v0.0.3
github.com/filecoin-project/go-padreader v0.0.0-20210723183308-812a16dc01b1
github.com/filecoin-project/go-paramfetch v0.0.2-0.20210614165157-25a6c7769498
github.com/filecoin-project/go-state-types v0.1.1-0.20210722133031-ad9bfe54c124
github.com/filecoin-project/go-statemachine v0.0.0-20200925024713-05bd7c71fbfe
github.com/filecoin-project/go-statemachine v1.0.0
github.com/filecoin-project/go-statestore v0.1.1
github.com/filecoin-project/go-storedcounter v0.0.0-20200421200003-1c99c62e8a5b
github.com/filecoin-project/specs-actors v0.9.14
@ -56,7 +56,7 @@ require (
github.com/gdamore/tcell/v2 v2.2.0
github.com/go-kit/kit v0.10.0
github.com/go-ole/go-ole v1.2.4 // indirect
github.com/golang/mock v1.5.0
github.com/golang/mock v1.6.0
github.com/google/uuid v1.1.2
github.com/gorilla/mux v1.7.4
github.com/gorilla/websocket v1.4.2
@ -78,7 +78,7 @@ require (
github.com/ipfs/go-ds-pebble v0.0.2-0.20200921225637-ce220f8ac459
github.com/ipfs/go-filestore v1.0.0
github.com/ipfs/go-fs-lock v0.0.6
github.com/ipfs/go-graphsync v0.6.5
github.com/ipfs/go-graphsync v0.6.6
github.com/ipfs/go-ipfs-blockstore v1.0.3
github.com/ipfs/go-ipfs-chunker v0.0.5
github.com/ipfs/go-ipfs-ds-help v1.0.0
@ -100,22 +100,21 @@ require (
github.com/ipld/go-car v0.1.1-0.20201119040415-11b6074b6d4d
github.com/ipld/go-ipld-prime v0.5.1-0.20201021195245-109253e8a018
github.com/kelseyhightower/envconfig v1.4.0
github.com/lib/pq v1.7.0
github.com/libp2p/go-buffer-pool v0.0.2
github.com/libp2p/go-eventbus v0.2.1
github.com/libp2p/go-libp2p v0.14.2
github.com/libp2p/go-libp2p-connmgr v0.2.4
github.com/libp2p/go-libp2p-core v0.8.5
github.com/libp2p/go-libp2p-discovery v0.5.0
github.com/libp2p/go-libp2p-core v0.8.6
github.com/libp2p/go-libp2p-discovery v0.5.1
github.com/libp2p/go-libp2p-kad-dht v0.11.0
github.com/libp2p/go-libp2p-mplex v0.4.1
github.com/libp2p/go-libp2p-noise v0.2.0
github.com/libp2p/go-libp2p-peerstore v0.2.7
github.com/libp2p/go-libp2p-pubsub v0.5.0
github.com/libp2p/go-libp2p-quic-transport v0.10.0
github.com/libp2p/go-libp2p-peerstore v0.2.8
github.com/libp2p/go-libp2p-pubsub v0.5.3
github.com/libp2p/go-libp2p-quic-transport v0.11.2
github.com/libp2p/go-libp2p-record v0.1.3
github.com/libp2p/go-libp2p-routing-helpers v0.2.3
github.com/libp2p/go-libp2p-swarm v0.5.0
github.com/libp2p/go-libp2p-swarm v0.5.3
github.com/libp2p/go-libp2p-tls v0.1.3
github.com/libp2p/go-libp2p-yamux v0.5.4
github.com/libp2p/go-maddr-filter v0.1.0
@ -124,14 +123,14 @@ require (
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1
github.com/mitchellh/go-homedir v1.1.0
github.com/multiformats/go-base32 v0.0.3
github.com/multiformats/go-multiaddr v0.3.1
github.com/multiformats/go-multiaddr v0.3.3
github.com/multiformats/go-multiaddr-dns v0.3.1
github.com/multiformats/go-multibase v0.0.3
github.com/multiformats/go-multihash v0.0.15
github.com/open-rpc/meta-schema v0.0.0-20201029221707-1b72ef2ea333
github.com/opentracing/opentracing-go v1.2.0
github.com/polydawn/refmt v0.0.0-20190809202753-05966cbd336a
github.com/prometheus/client_golang v1.6.0
github.com/prometheus/client_golang v1.10.0
github.com/raulk/clock v1.1.0
github.com/raulk/go-watchdog v1.0.1
github.com/streadway/quantile v0.0.0-20150917103942-b0c588724d25
@ -150,9 +149,9 @@ require (
go.uber.org/fx v1.9.0
go.uber.org/multierr v1.6.0
go.uber.org/zap v1.16.0
golang.org/x/net v0.0.0-20210423184538-5f58ad60dda6
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sys v0.0.0-20210510120138-977fb7262007
golang.org/x/sys v0.0.0-20210511113859-b0526f3d8744
golang.org/x/time v0.0.0-20191024005414-555d28b269f0
golang.org/x/tools v0.1.5
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1

go.sum
View File

@ -27,8 +27,9 @@ dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/AndreasBriese/bbloom v0.0.0-20180913140656-343706a395b7/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9 h1:HD8gA2tkByhMAwYaFAX9w2l7vxvBQ5NMoxDrkhqhtn4=
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 h1:cTp8I5+VIoKjsnZuH8vjyaysT/ses3EvZeaV/1UkF2M=
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
@ -199,8 +200,9 @@ github.com/detailyang/go-fallocate v0.0.0-20180908115635-432fa640bd2e/go.mod h1:
github.com/dgraph-io/badger v1.5.5-0.20190226225317-8115aed38f8f/go.mod h1:VZxzAIRPHRVNRKRo6AXrX9BJegn6il06VMTZVJYCIjQ=
github.com/dgraph-io/badger v1.6.0-rc1/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
github.com/dgraph-io/badger v1.6.1 h1:w9pSFNSdq/JPM1N12Fz/F/bzo993Is1W+Q7HjPzi7yg=
github.com/dgraph-io/badger v1.6.1/go.mod h1:FRmFw3uxvcpa8zG3Rxs0th+hCLIuaQg8HlNV5bjgnuU=
github.com/dgraph-io/badger v1.6.2 h1:mNw0qs90GVgGGWylh0umH5iag1j6n/PeJtNvL6KY/x8=
github.com/dgraph-io/badger v1.6.2/go.mod h1:JW2yswe3V058sS0kZ2h/AXeDSqFjxnZcRrVH//y2UQE=
github.com/dgraph-io/badger/v2 v2.0.3/go.mod h1:3KY8+bsP8wI0OEnQJAKpd4wIJW/Mm32yw2j/9FUVnIM=
github.com/dgraph-io/badger/v2 v2.2007.2 h1:EjjK0KqwaFMlPin1ajhP943VPENHJdEz1KLIegjaI3k=
github.com/dgraph-io/badger/v2 v2.2007.2/go.mod h1:26P/7fbL4kUZVEVKLAKXkBXKOydDmM2p1e+NhhnBCAE=
@ -274,8 +276,9 @@ github.com/filecoin-project/go-commp-utils v0.1.1-0.20210427191551-70bf140d31c7/
github.com/filecoin-project/go-crypto v0.0.0-20191218222705-effae4ea9f03 h1:2pMXdBnCiXjfCYx/hLqFxccPoqsSveQFxVLvNxy9bus=
github.com/filecoin-project/go-crypto v0.0.0-20191218222705-effae4ea9f03/go.mod h1:+viYnvGtUTgJRdy6oaeF4MTFKAfatX071MPDPBL11EQ=
github.com/filecoin-project/go-data-transfer v1.0.1/go.mod h1:UxvfUAY9v3ub0a21BSK9u3pB2aq30Y0KMsG+w9/ysyo=
github.com/filecoin-project/go-data-transfer v1.7.0 h1:mFRn+UuTdPROmhplLSekzd4rAs9ug8ubtSY4nw9wYkU=
github.com/filecoin-project/go-data-transfer v1.7.0/go.mod h1:GLRr5BmLEqsLwXfiRDG7uJvph22KGL2M4iOuF8EINaU=
github.com/filecoin-project/go-data-transfer v1.7.1 h1:Co4bTenvCc3WnOhQWyXRt59FLZvxwH8UeF0ZCOc1ik0=
github.com/filecoin-project/go-data-transfer v1.7.1/go.mod h1:GLRr5BmLEqsLwXfiRDG7uJvph22KGL2M4iOuF8EINaU=
github.com/filecoin-project/go-ds-versioning v0.1.0 h1:y/X6UksYTsK8TLCI7rttCKEvl8btmWxyFMEeeWGUxIQ=
github.com/filecoin-project/go-ds-versioning v0.1.0/go.mod h1:mp16rb4i2QPmxBnmanUx8i/XANp+PFCCJWiAb+VW4/s=
github.com/filecoin-project/go-fil-commcid v0.0.0-20200716160307-8f644712406f/go.mod h1:Eaox7Hvus1JgPrL5+M3+h7aSPHc0cVqpSxA+TxIEpZQ=
@ -285,8 +288,8 @@ github.com/filecoin-project/go-fil-commcid v0.1.0/go.mod h1:Eaox7Hvus1JgPrL5+M3+
github.com/filecoin-project/go-fil-commp-hashhash v0.1.0 h1:imrrpZWEHRnNqqv0tN7LXep5bFEVOVmQWHJvl2mgsGo=
github.com/filecoin-project/go-fil-commp-hashhash v0.1.0/go.mod h1:73S8WSEWh9vr0fDJVnKADhfIv/d6dCbAGaAGWbdJEI8=
github.com/filecoin-project/go-fil-markets v1.0.5-0.20201113164554-c5eba40d5335/go.mod h1:AJySOJC00JRWEZzRG2KsfUnqEf5ITXxeX09BE9N4f9c=
github.com/filecoin-project/go-fil-markets v1.6.0 h1:+1usyX7rXz6Ey6hbHd/Fhx616ZvGCI94rW7wneMcptU=
github.com/filecoin-project/go-fil-markets v1.6.0/go.mod h1:ZuFDagROUV6GfvBU//KReTQDw+EZci4rH7jMYTD10vs=
github.com/filecoin-project/go-fil-markets v1.6.1 h1:8xdFyWrELfOzwcGa229bLu/olD+1l4sEWFIsZR7oz5U=
github.com/filecoin-project/go-fil-markets v1.6.1/go.mod h1:ZuFDagROUV6GfvBU//KReTQDw+EZci4rH7jMYTD10vs=
github.com/filecoin-project/go-hamt-ipld v0.1.5 h1:uoXrKbCQZ49OHpsTCkrThPNelC4W3LPEk0OrS/ytIBM=
github.com/filecoin-project/go-hamt-ipld v0.1.5/go.mod h1:6Is+ONR5Cd5R6XZoCse1CWaXZc0Hdb/JeX+EQCQzX24=
github.com/filecoin-project/go-hamt-ipld/v2 v2.0.0 h1:b3UDemBYN2HNfk3KOXNuxgTTxlWi3xVvbQP0IT38fvM=
@ -311,8 +314,9 @@ github.com/filecoin-project/go-state-types v0.1.0/go.mod h1:ezYnPf0bNkTsDibL/psS
github.com/filecoin-project/go-state-types v0.1.1-0.20210506134452-99b279731c48/go.mod h1:ezYnPf0bNkTsDibL/psSz5dy4B5awOJ/E7P2Saeep8g=
github.com/filecoin-project/go-state-types v0.1.1-0.20210722133031-ad9bfe54c124 h1:veGrNABg/9I7prngrowkhwbvW5d5JN55MNKmbsr5FqA=
github.com/filecoin-project/go-state-types v0.1.1-0.20210722133031-ad9bfe54c124/go.mod h1:ezYnPf0bNkTsDibL/psSz5dy4B5awOJ/E7P2Saeep8g=
github.com/filecoin-project/go-statemachine v0.0.0-20200925024713-05bd7c71fbfe h1:dF8u+LEWeIcTcfUcCf3WFVlc81Fr2JKg8zPzIbBDKDw=
github.com/filecoin-project/go-statemachine v0.0.0-20200925024713-05bd7c71fbfe/go.mod h1:FGwQgZAt2Gh5mjlwJUlVB62JeYdo+if0xWxSEfBD9ig=
github.com/filecoin-project/go-statemachine v1.0.0 h1:b8FpFewPSklyAIUqH0oHt4nvKf03bU7asop1bJpjAtQ=
github.com/filecoin-project/go-statemachine v1.0.0/go.mod h1:FGwQgZAt2Gh5mjlwJUlVB62JeYdo+if0xWxSEfBD9ig=
github.com/filecoin-project/go-statestore v0.1.0/go.mod h1:LFc9hD+fRxPqiHiaqUEZOinUJB4WARkRfNl10O7kTnI=
github.com/filecoin-project/go-statestore v0.1.1 h1:ufMFq00VqnT2CAuDpcGnwLnCX1I/c3OROw/kXVNSTZk=
github.com/filecoin-project/go-statestore v0.1.1/go.mod h1:LFc9hD+fRxPqiHiaqUEZOinUJB4WARkRfNl10O7kTnI=
@ -397,6 +401,8 @@ github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 h1:p104kn46Q8WdvHunIJ9dAyjPVtrBPhSr3KT2yUst43I=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus v0.0.0-20190402143921-271e53dc4968 h1:s+PDl6lozQ+dEUtUtQnO7+A2iPG3sK1pI4liU+jxn90=
github.com/godbus/dbus v0.0.0-20190402143921-271e53dc4968/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
github.com/godbus/dbus/v5 v5.0.3 h1:ZqHaoEF7TBzh4jzPmqVhE/5A1z9of6orkAe5uHoAeME=
@ -429,8 +435,8 @@ github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFU
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0 h1:jlYHihg//f7RRwuPfptm04yp4s7O6Kw8EZiVYIGcH0g=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.0/go.mod h1:Qd/q+1AKNOZr9uGQzbzCmRO6sUih6GTPZv6a1/R87v0=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@ -443,8 +449,10 @@ github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:W
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3 h1:JjCZWpVbqXDqFVmTfYWEVTMIYrL/NPdPSCHPJ0T/raM=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.2-0.20190904063534-ff6b7dc882cf h1:gFVkHXmVAhEbxZVDln5V9GKrLaluNoFHDbrZwAWZgws=
@ -457,8 +465,10 @@ github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMyw
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3 h1:x95R7cp+rSeeqAMI2knLtQ0DKlaBhv2NrtrOvafPHRo=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@ -609,8 +619,9 @@ github.com/ipfs/go-ds-badger v0.0.2/go.mod h1:Y3QpeSFWQf6MopLTiZD+VT6IC1yZqaGmjv
github.com/ipfs/go-ds-badger v0.0.5/go.mod h1:g5AuuCGmr7efyzQhLL8MzwqcauPojGPUaHzfGTzuE3s=
github.com/ipfs/go-ds-badger v0.0.7/go.mod h1:qt0/fWzZDoPW6jpQeqUjR5kBfhDNB65jd9YlmAvpQBk=
github.com/ipfs/go-ds-badger v0.2.1/go.mod h1:Tx7l3aTph3FMFrRS838dcSJh+jjA7cX9DrGVwx/NOwE=
github.com/ipfs/go-ds-badger v0.2.3 h1:J27YvAcpuA5IvZUbeBxOcQgqnYHUPxoygc6QxxkodZ4=
github.com/ipfs/go-ds-badger v0.2.3/go.mod h1:pEYw0rgg3FIrywKKnL+Snr+w/LjJZVMTBRn4FS6UHUk=
github.com/ipfs/go-ds-badger v0.2.7 h1:ju5REfIm+v+wgVnQ19xGLYPHYHbYLR6qJfmMbCDSK1I=
github.com/ipfs/go-ds-badger v0.2.7/go.mod h1:02rnztVKA4aZwDuaRPTf8mpqcKmXP7mLl6JPxd14JHA=
github.com/ipfs/go-ds-badger2 v0.1.0/go.mod h1:pbR1p817OZbdId9EvLOhKBgUVTM3BMCSTan78lDDVaw=
github.com/ipfs/go-ds-badger2 v0.1.1-0.20200708190120-187fc06f714e h1:Xi1nil8K2lBOorBS6Ys7+hmUCzH8fr3U9ipdL/IrcEI=
github.com/ipfs/go-ds-badger2 v0.1.1-0.20200708190120-187fc06f714e/go.mod h1:lJnws7amT9Ehqzta0gwMrRsURU04caT0iRPr1W8AsOU=
@ -631,8 +642,8 @@ github.com/ipfs/go-graphsync v0.1.0/go.mod h1:jMXfqIEDFukLPZHqDPp8tJMbHO9Rmeb9CE
github.com/ipfs/go-graphsync v0.4.2/go.mod h1:/VmbZTUdUMTbNkgzAiCEucIIAU3BkLE2cZrDCVUhyi0=
github.com/ipfs/go-graphsync v0.4.3/go.mod h1:mPOwDYv128gf8gxPFgXnz4fNrSYPsWyqisJ7ych+XDY=
github.com/ipfs/go-graphsync v0.6.4/go.mod h1:5WyaeigpNdpiYQuW2vwpuecOoEfB4h747ZGEOKmAGTg=
github.com/ipfs/go-graphsync v0.6.5 h1:YAJl6Yit23PQcaawzb1rPK9PSnbbq2jjMRPpRpJ0Y5U=
github.com/ipfs/go-graphsync v0.6.5/go.mod h1:GdHT8JeuIZ0R4lSjFR16Oe4zPi5dXwKi9zR9ADVlcdk=
github.com/ipfs/go-graphsync v0.6.6 h1:In7jjzvSXlrAUz4OjN41lxYf/dzkf1bVeVxLpwKMRo8=
github.com/ipfs/go-graphsync v0.6.6/go.mod h1:GdHT8JeuIZ0R4lSjFR16Oe4zPi5dXwKi9zR9ADVlcdk=
github.com/ipfs/go-hamt-ipld v0.1.1/go.mod h1:1EZCr2v0jlCnhpa+aZ0JZYp8Tt2w16+JJOAVz17YcDk=
github.com/ipfs/go-ipfs-blockstore v0.0.1/go.mod h1:d3WClOmRQKFnJ0Jz/jj/zmksX0ma1gROTlovZKBmN08=
github.com/ipfs/go-ipfs-blockstore v0.1.0/go.mod h1:5aD0AvHPi7mZc6Ci1WCAhiBQu2IsfTduLl+422H6Rqw=
@ -698,8 +709,9 @@ github.com/ipfs/go-log v1.0.0/go.mod h1:JO7RzlMK6rA+CIxFMLOuB6Wf5b81GDiKElL7UPSI
github.com/ipfs/go-log v1.0.1/go.mod h1:HuWlQttfN6FWNHRhlY5yMk/lW7evQC0HHGOxEwMRR8I=
github.com/ipfs/go-log v1.0.2/go.mod h1:1MNjMxe0u6xvJZgeqbJ8vdo2TKaGwZ1a0Bpza+sr2Sk=
github.com/ipfs/go-log v1.0.3/go.mod h1:OsLySYkwIbiSUR/yBTdv1qPtcE4FW3WPWk/ewz9Ru+A=
github.com/ipfs/go-log v1.0.4 h1:6nLQdX4W8P9yZZFH7mO+X/PzjN8Laozm/lMJ6esdgzY=
github.com/ipfs/go-log v1.0.4/go.mod h1:oDCg2FkjogeFOhqqb+N39l2RpTNPL6F/StPkB3kPgcs=
github.com/ipfs/go-log v1.0.5 h1:2dOuUCB1Z7uoczMWgAyDck5JLb72zHzrMnGnCNNbvY8=
github.com/ipfs/go-log v1.0.5/go.mod h1:j0b8ZoR+7+R99LD9jZ6+AJsrzkPbSXbZfGakb5JPtIo=
github.com/ipfs/go-log/v2 v2.0.1/go.mod h1:O7P1lJt27vWHhOwQmcFEvlmo49ry2VY2+JfBWFaa9+0=
github.com/ipfs/go-log/v2 v2.0.2/go.mod h1:O7P1lJt27vWHhOwQmcFEvlmo49ry2VY2+JfBWFaa9+0=
github.com/ipfs/go-log/v2 v2.0.3/go.mod h1:O7P1lJt27vWHhOwQmcFEvlmo49ry2VY2+JfBWFaa9+0=
@ -800,12 +812,14 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kabukky/httpscerts v0.0.0-20150320125433-617593d7dcb3/go.mod h1:BYpt4ufZiIGv2nXn4gMxnfKV306n3mWXgNu/d2TqdTU=
github.com/kami-zh/go-capturer v0.0.0-20171211120116-e492ea43421d/go.mod h1:P2viExyCEfeWGU259JnaQ34Inuec4R38JCyBx2edgD0=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
@ -819,6 +833,8 @@ github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQL
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4=
github.com/klauspost/compress v1.11.7 h1:0hzRabrMN4tSTvMfnL3SCv1ZGeAP23ynzodBgaHeMeg=
github.com/klauspost/compress v1.11.7/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/klauspost/cpuid/v2 v2.0.4 h1:g0I61F2K2DjRHz1cnxlkNSBIaePVoJIjjnHui8QHbiw=
github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@ -835,11 +851,10 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.7.0 h1:h93mCPfUSkaul3Ka/VG8uZdmW1uMHDGxzu0NWHuJmHY=
github.com/lib/pq v1.7.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/libp2p/go-addr-util v0.0.1/go.mod h1:4ac6O7n9rIAKB1dnd+s8IbbMXkt+oBpzX4/+RACcnlQ=
github.com/libp2p/go-addr-util v0.0.2 h1:7cWK5cdA5x72jX0g8iLrQWm5TRJZ6CzGdPEhWj7plWU=
github.com/libp2p/go-addr-util v0.0.2/go.mod h1:Ecd6Fb3yIuLzq4bD7VcywcVSBtefcAwnUISBM3WG15E=
github.com/libp2p/go-addr-util v0.1.0 h1:acKsntI33w2bTU7tC9a0SaPimJGfSI0bFKC18ChxeVI=
github.com/libp2p/go-addr-util v0.1.0/go.mod h1:6I3ZYuFr2O/9D+SoyM0zEw0EF3YkldtTX406BpdQMqw=
github.com/libp2p/go-buffer-pool v0.0.1/go.mod h1:xtyIz9PMobb13WaxR6Zo1Pd1zXJKYg0a8KiIvDp3TzQ=
github.com/libp2p/go-buffer-pool v0.0.2 h1:QNK2iAFa8gjAe1SPz6mHSMuCcjs+X1wlHzeOSqcmlfs=
github.com/libp2p/go-buffer-pool v0.0.2/go.mod h1:MvaB6xw5vOrDl8rYZGLFdKAuk/hRoRZd1Vi32+RXyFM=
@ -940,9 +955,9 @@ github.com/libp2p/go-libp2p-core v0.7.0/go.mod h1:FfewUH/YpvWbEB+ZY9AQRQ4TAD8sJB
github.com/libp2p/go-libp2p-core v0.8.0/go.mod h1:FfewUH/YpvWbEB+ZY9AQRQ4TAD8sJBt/G1rVvhz5XT8=
github.com/libp2p/go-libp2p-core v0.8.1/go.mod h1:FfewUH/YpvWbEB+ZY9AQRQ4TAD8sJBt/G1rVvhz5XT8=
github.com/libp2p/go-libp2p-core v0.8.2/go.mod h1:FfewUH/YpvWbEB+ZY9AQRQ4TAD8sJBt/G1rVvhz5XT8=
github.com/libp2p/go-libp2p-core v0.8.3/go.mod h1:FfewUH/YpvWbEB+ZY9AQRQ4TAD8sJBt/G1rVvhz5XT8=
github.com/libp2p/go-libp2p-core v0.8.5 h1:aEgbIcPGsKy6zYcC+5AJivYFedhYa4sW7mIpWpUaLKw=
github.com/libp2p/go-libp2p-core v0.8.5/go.mod h1:FfewUH/YpvWbEB+ZY9AQRQ4TAD8sJBt/G1rVvhz5XT8=
github.com/libp2p/go-libp2p-core v0.8.6 h1:3S8g006qG6Tjpj1JdRK2S+TWc2DJQKX/RG9fdLeiLSU=
github.com/libp2p/go-libp2p-core v0.8.6/go.mod h1:dgHr0l0hIKfWpGpqAMbpo19pen9wJfdCGv51mTmdpmM=
github.com/libp2p/go-libp2p-crypto v0.0.1/go.mod h1:yJkNyDmO341d5wwXxDUGO0LykUVT72ImHNUqh5D/dBE=
github.com/libp2p/go-libp2p-crypto v0.0.2/go.mod h1:eETI5OUfBnvARGOHrJz2eWNyTUxEGZnBxMcbUjfIj4I=
github.com/libp2p/go-libp2p-crypto v0.1.0 h1:k9MFy+o2zGDNGsaoZl0MA3iZ75qXxr9OOoAZF+sD5OQ=
@ -954,8 +969,9 @@ github.com/libp2p/go-libp2p-discovery v0.1.0/go.mod h1:4F/x+aldVHjHDHuX85x1zWoFT
github.com/libp2p/go-libp2p-discovery v0.2.0/go.mod h1:s4VGaxYMbw4+4+tsoQTqh7wfxg97AEdo4GYBt6BadWg=
github.com/libp2p/go-libp2p-discovery v0.3.0/go.mod h1:o03drFnz9BVAZdzC/QUQ+NeQOu38Fu7LJGEOK2gQltw=
github.com/libp2p/go-libp2p-discovery v0.4.0/go.mod h1:bZ0aJSrFc/eX2llP0ryhb1kpgkPyTo23SJ5b7UQCMh4=
github.com/libp2p/go-libp2p-discovery v0.5.0 h1:Qfl+e5+lfDgwdrXdu4YNCWyEo3fWuP+WgN9mN0iWviQ=
github.com/libp2p/go-libp2p-discovery v0.5.0/go.mod h1:+srtPIU9gDaBNu//UHvcdliKBIcr4SfDcm0/PfPJLug=
github.com/libp2p/go-libp2p-discovery v0.5.1 h1:CJylx+h2+4+s68GvrM4pGNyfNhOYviWBPtVv5PA7sfo=
github.com/libp2p/go-libp2p-discovery v0.5.1/go.mod h1:+srtPIU9gDaBNu//UHvcdliKBIcr4SfDcm0/PfPJLug=
github.com/libp2p/go-libp2p-host v0.0.1/go.mod h1:qWd+H1yuU0m5CwzAkvbSjqKairayEHdR5MMl7Cwa7Go=
github.com/libp2p/go-libp2p-host v0.0.3/go.mod h1:Y/qPyA6C8j2coYyos1dfRm0I8+nvd4TGrDGt4tA7JR8=
github.com/libp2p/go-libp2p-interface-connmgr v0.0.1/go.mod h1:GarlRLH0LdeWcLnYM/SaBykKFl9U5JFnbBGruAk/D5k=
@ -1009,20 +1025,22 @@ github.com/libp2p/go-libp2p-peerstore v0.2.2/go.mod h1:NQxhNjWxf1d4w6PihR8btWIRj
github.com/libp2p/go-libp2p-peerstore v0.2.3/go.mod h1:K8ljLdFn590GMttg/luh4caB/3g0vKuY01psze0upRw=
github.com/libp2p/go-libp2p-peerstore v0.2.4/go.mod h1:ss/TWTgHZTMpsU/oKVVPQCGuDHItOpf2W8RxAi50P2s=
github.com/libp2p/go-libp2p-peerstore v0.2.6/go.mod h1:ss/TWTgHZTMpsU/oKVVPQCGuDHItOpf2W8RxAi50P2s=
github.com/libp2p/go-libp2p-peerstore v0.2.7 h1:83JoLxyR9OYTnNfB5vvFqvMUv/xDNa6NoPHnENhBsGw=
github.com/libp2p/go-libp2p-peerstore v0.2.7/go.mod h1:ss/TWTgHZTMpsU/oKVVPQCGuDHItOpf2W8RxAi50P2s=
github.com/libp2p/go-libp2p-peerstore v0.2.8 h1:nJghUlUkFVvyk7ccsM67oFA6kqUkwyCM1G4WPVMCWYA=
github.com/libp2p/go-libp2p-peerstore v0.2.8/go.mod h1:gGiPlXdz7mIHd2vfAsHzBNAMqSDkt2UBFwgcITgw1lA=
github.com/libp2p/go-libp2p-pnet v0.2.0 h1:J6htxttBipJujEjz1y0a5+eYoiPcFHhSYHH6na5f0/k=
github.com/libp2p/go-libp2p-pnet v0.2.0/go.mod h1:Qqvq6JH/oMZGwqs3N1Fqhv8NVhrdYcO0BW4wssv21LA=
github.com/libp2p/go-libp2p-protocol v0.0.1/go.mod h1:Af9n4PiruirSDjHycM1QuiMi/1VZNHYcK8cLgFJLZ4s=
github.com/libp2p/go-libp2p-protocol v0.1.0/go.mod h1:KQPHpAabB57XQxGrXCNvbL6UEXfQqUgC/1adR2Xtflk=
github.com/libp2p/go-libp2p-pubsub v0.1.1/go.mod h1:ZwlKzRSe1eGvSIdU5bD7+8RZN/Uzw0t1Bp9R1znpR/Q=
github.com/libp2p/go-libp2p-pubsub v0.3.2-0.20200527132641-c0712c6e92cf/go.mod h1:TxPOBuo1FPdsTjFnv+FGZbNbWYsp74Culx+4ViQpato=
github.com/libp2p/go-libp2p-pubsub v0.5.0 h1:OzcIuCWyJpOrWH0PTOfvxTzqFur4tiXpY5jXC8OxjyE=
github.com/libp2p/go-libp2p-pubsub v0.5.0/go.mod h1:MKnrsQkFgPcrQs1KVmOXy6Uz2RDQ1xO7dQo/P0Ba+ig=
github.com/libp2p/go-libp2p-pubsub v0.5.3 h1:XCn5xvgA/AKpbbaeqbomfKtQCbT9QsU39tYsVj0IndQ=
github.com/libp2p/go-libp2p-pubsub v0.5.3/go.mod h1:gVOzwebXVdSMDQBTfH8ACO5EJ4SQrvsHqCmYsCZpD0E=
github.com/libp2p/go-libp2p-quic-transport v0.1.1/go.mod h1:wqG/jzhF3Pu2NrhJEvE+IE0NTHNXslOPn9JQzyCAxzU=
github.com/libp2p/go-libp2p-quic-transport v0.5.0/go.mod h1:IEcuC5MLxvZ5KuHKjRu+dr3LjCT1Be3rcD/4d8JrX8M=
github.com/libp2p/go-libp2p-quic-transport v0.10.0 h1:koDCbWD9CCHwcHZL3/WEvP2A+e/o5/W5L3QS/2SPMA0=
github.com/libp2p/go-libp2p-quic-transport v0.10.0/go.mod h1:RfJbZ8IqXIhxBRm5hqUEJqjiiY8xmEuq3HUDS993MkA=
github.com/libp2p/go-libp2p-quic-transport v0.11.2 h1:p1YQDZRHH4Cv2LPtHubqlQ9ggz4CKng/REZuXZbZMhM=
github.com/libp2p/go-libp2p-quic-transport v0.11.2/go.mod h1:wlanzKtIh6pHrq+0U3p3DY9PJfGqxMgPaGKaK5LifwQ=
github.com/libp2p/go-libp2p-record v0.0.1/go.mod h1:grzqg263Rug/sRex85QrDOLntdFAymLDLm7lxMgU79Q=
github.com/libp2p/go-libp2p-record v0.1.0/go.mod h1:ujNc8iuE5dlKWVy6wuL6dd58t0n7xI4hAIl8pE6wu5Q=
github.com/libp2p/go-libp2p-record v0.1.1/go.mod h1:VRgKajOyMVgP/F0L5g3kH7SVskp17vFi2xheb5uMJtg=
@ -1050,9 +1068,9 @@ github.com/libp2p/go-libp2p-swarm v0.2.7/go.mod h1:ZSJ0Q+oq/B1JgfPHJAT2HTall+xYR
github.com/libp2p/go-libp2p-swarm v0.2.8/go.mod h1:JQKMGSth4SMqonruY0a8yjlPVIkb0mdNSwckW7OYziM=
github.com/libp2p/go-libp2p-swarm v0.3.0/go.mod h1:hdv95GWCTmzkgeJpP+GK/9D9puJegb7H57B5hWQR5Kk=
github.com/libp2p/go-libp2p-swarm v0.3.1/go.mod h1:hdv95GWCTmzkgeJpP+GK/9D9puJegb7H57B5hWQR5Kk=
github.com/libp2p/go-libp2p-swarm v0.4.3/go.mod h1:mmxP1pGBSc1Arw4F5DIjcpjFAmsRzA1KADuMtMuCT4g=
github.com/libp2p/go-libp2p-swarm v0.5.0 h1:HIK0z3Eqoo8ugmN8YqWAhD2RORgR+3iNXYG4U2PFd1E=
github.com/libp2p/go-libp2p-swarm v0.5.0/go.mod h1:sU9i6BoHE0Ve5SKz3y9WfKrh8dUat6JknzUehFx8xW4=
github.com/libp2p/go-libp2p-swarm v0.5.3 h1:hsYaD/y6+kZff1o1Mc56NcuwSg80lIphTS/zDk3mO4M=
github.com/libp2p/go-libp2p-swarm v0.5.3/go.mod h1:NBn7eNW2lu568L7Ns9wdFrOhgRlkRnIDg0FLKbuu3i8=
github.com/libp2p/go-libp2p-testing v0.0.1/go.mod h1:gvchhf3FQOtBdr+eFUABet5a4MBLK8jM3V4Zghvmi+E=
github.com/libp2p/go-libp2p-testing v0.0.2/go.mod h1:gvchhf3FQOtBdr+eFUABet5a4MBLK8jM3V4Zghvmi+E=
github.com/libp2p/go-libp2p-testing v0.0.3/go.mod h1:gvchhf3FQOtBdr+eFUABet5a4MBLK8jM3V4Zghvmi+E=
@ -1061,8 +1079,9 @@ github.com/libp2p/go-libp2p-testing v0.1.0/go.mod h1:xaZWMJrPUM5GlDBxCeGUi7kI4eq
github.com/libp2p/go-libp2p-testing v0.1.1/go.mod h1:xaZWMJrPUM5GlDBxCeGUi7kI4eqnjVyavGroI2nxEM0=
github.com/libp2p/go-libp2p-testing v0.1.2-0.20200422005655-8775583591d8/go.mod h1:Qy8sAncLKpwXtS2dSnDOP8ktexIAHKu+J+pnZOFZLTc=
github.com/libp2p/go-libp2p-testing v0.3.0/go.mod h1:efZkql4UZ7OVsEfaxNHZPzIehtsBXMrXnCfJIgDti5g=
github.com/libp2p/go-libp2p-testing v0.4.0 h1:PrwHRi0IGqOwVQWR3xzgigSlhlLfxgfXgkHxr77EghQ=
github.com/libp2p/go-libp2p-testing v0.4.0/go.mod h1:Q+PFXYoiYFN5CAEG2w3gLPEzotlKsNSbKQ/lImlOWF0=
github.com/libp2p/go-libp2p-testing v0.4.2 h1:IOiA5mMigi+eEjf4J+B7fepDhsjtsoWA9QbsCqbNp5U=
github.com/libp2p/go-libp2p-testing v0.4.2/go.mod h1:Q+PFXYoiYFN5CAEG2w3gLPEzotlKsNSbKQ/lImlOWF0=
github.com/libp2p/go-libp2p-tls v0.1.3 h1:twKMhMu44jQO+HgQK9X8NHO5HkeJu2QbhLzLJpa8oNM=
github.com/libp2p/go-libp2p-tls v0.1.3/go.mod h1:wZfuewxOndz5RTnCAxFliGjvYSDA40sKitV4c50uI1M=
github.com/libp2p/go-libp2p-transport v0.0.1/go.mod h1:UzbUs9X+PHOSw7S3ZmeOxfnwaQY5vGDzZmKPod3N3tk=
@ -1073,8 +1092,9 @@ github.com/libp2p/go-libp2p-transport-upgrader v0.0.4/go.mod h1:RGq+tupk+oj7PzL2
github.com/libp2p/go-libp2p-transport-upgrader v0.1.1/go.mod h1:IEtA6or8JUbsV07qPW4r01GnTenLW4oi3lOPbUMGJJA=
github.com/libp2p/go-libp2p-transport-upgrader v0.2.0/go.mod h1:mQcrHj4asu6ArfSoMuyojOdjx73Q47cYD7s5+gZOlns=
github.com/libp2p/go-libp2p-transport-upgrader v0.3.0/go.mod h1:i+SKzbRnvXdVbU3D1dwydnTmKRPXiAR/fyvi1dXuL4o=
github.com/libp2p/go-libp2p-transport-upgrader v0.4.2 h1:4JsnbfJzgZeRS9AWN7B9dPqn/LY/HoQTlO9gtdJTIYM=
github.com/libp2p/go-libp2p-transport-upgrader v0.4.2/go.mod h1:NR8ne1VwfreD5VIWIU62Agt/J18ekORFU/j1i2y8zvk=
github.com/libp2p/go-libp2p-transport-upgrader v0.4.6 h1:SHt3g0FslnqIkEWF25YOB8UCOCTpGAVvHRWQYJ+veiI=
github.com/libp2p/go-libp2p-transport-upgrader v0.4.6/go.mod h1:JE0WQuQdy+uLZ5zOaI3Nw9dWGYJIA7mywEtP2lMvnyk=
github.com/libp2p/go-libp2p-yamux v0.5.1 h1:sX4WQPHMhRxJE5UZTfjEuBvlQWXB5Bo3A2JK9ZJ9EM0=
github.com/libp2p/go-libp2p-yamux v0.5.1/go.mod h1:dowuvDu8CRWmr0iqySMiSxK+W0iL5cMVO9S94Y6gkv4=
github.com/libp2p/go-maddr-filter v0.0.1/go.mod h1:6eT12kSQMA9x2pvFQa+xesMKUBlj9VImZbj3B9FBH/Q=
@ -1103,6 +1123,7 @@ github.com/libp2p/go-nat v0.0.5 h1:qxnwkco8RLKqVh1NmjQ+tJ8p8khNLFxuElYG/TwqW4Q=
github.com/libp2p/go-nat v0.0.5/go.mod h1:B7NxsVNPZmRLvMOwiEO1scOSyjA56zxYAGv1yQgRkEU=
github.com/libp2p/go-netroute v0.1.2/go.mod h1:jZLDV+1PE8y5XxBySEBgbuVAXbhtuHSdmLPL2n9MKbk=
github.com/libp2p/go-netroute v0.1.3/go.mod h1:jZLDV+1PE8y5XxBySEBgbuVAXbhtuHSdmLPL2n9MKbk=
github.com/libp2p/go-netroute v0.1.5/go.mod h1:V1SR3AaECRkEQCoFFzYwVYWvYIEtlxx89+O3qcpCl4A=
github.com/libp2p/go-netroute v0.1.6 h1:ruPJStbYyXVYGQ81uzEDzuvbYRLKRrLvTYd33yomC38=
github.com/libp2p/go-netroute v0.1.6/go.mod h1:AqhkMh0VuWmfgtxKPp3Oc1LdU5QSWS7wl0QLhSZqXxQ=
github.com/libp2p/go-openssl v0.0.2/go.mod h1:v8Zw2ijCSWBQi8Pq5GAixw6DbFfa9u6VIYDXnvOXkc0=
@ -1117,8 +1138,9 @@ github.com/libp2p/go-reuseport v0.0.2/go.mod h1:SPD+5RwGC7rcnzngoYC86GjPzjSywuQy
github.com/libp2p/go-reuseport-transport v0.0.1/go.mod h1:YkbSDrvjUVDL6b8XqriyA20obEtsW9BLkuOUyQAOCbs=
github.com/libp2p/go-reuseport-transport v0.0.2/go.mod h1:YkbSDrvjUVDL6b8XqriyA20obEtsW9BLkuOUyQAOCbs=
github.com/libp2p/go-reuseport-transport v0.0.3/go.mod h1:Spv+MPft1exxARzP2Sruj2Wb5JSyHNncjf1Oi2dEbzM=
github.com/libp2p/go-reuseport-transport v0.0.4 h1:OZGz0RB620QDGpv300n1zaOcKGGAoGVf8h9txtt/1uM=
github.com/libp2p/go-reuseport-transport v0.0.4/go.mod h1:trPa7r/7TJK/d+0hdBLOCGvpQQVOU74OXbNCIMkufGw=
github.com/libp2p/go-reuseport-transport v0.0.5 h1:lJzi+vSYbyJj2faPKLxNGWEIBcaV/uJmyvsUxXy2mLw=
github.com/libp2p/go-reuseport-transport v0.0.5/go.mod h1:TC62hhPc8qs5c/RoXDZG6YmjK+/YWUPC0yYmeUecbjc=
github.com/libp2p/go-sockaddr v0.0.2/go.mod h1:syPvOmNs24S3dFVGJA1/mrqdeijPxLV2Le3BRLKd68k=
github.com/libp2p/go-sockaddr v0.1.0/go.mod h1:syPvOmNs24S3dFVGJA1/mrqdeijPxLV2Le3BRLKd68k=
github.com/libp2p/go-sockaddr v0.1.1 h1:yD80l2ZOdGksnOyHrhxDdTDFrf7Oy+v3FMVArIRgZxQ=
@ -1134,8 +1156,9 @@ github.com/libp2p/go-tcp-transport v0.0.4/go.mod h1:+E8HvC8ezEVOxIo3V5vCK9l1y/19
github.com/libp2p/go-tcp-transport v0.1.0/go.mod h1:oJ8I5VXryj493DEJ7OsBieu8fcg2nHGctwtInJVpipc=
github.com/libp2p/go-tcp-transport v0.1.1/go.mod h1:3HzGvLbx6etZjnFlERyakbaYPdfjg2pWP97dFZworkY=
github.com/libp2p/go-tcp-transport v0.2.0/go.mod h1:vX2U0CnWimU4h0SGSEsg++AzvBcroCGYw28kh94oLe0=
github.com/libp2p/go-tcp-transport v0.2.1 h1:ExZiVQV+h+qL16fzCWtd1HSzPsqWottJ8KXwWaVi8Ns=
github.com/libp2p/go-tcp-transport v0.2.1/go.mod h1:zskiJ70MEfWz2MKxvFB/Pv+tPIB1PpPUrHIWQ8aFw7M=
github.com/libp2p/go-tcp-transport v0.2.7 h1:Z8Kc/Kb8tD84WiaH55xAlaEnkqzrp88jSEySCKV4+gg=
github.com/libp2p/go-tcp-transport v0.2.7/go.mod h1:lue9p1b3VmZj1MhhEGB/etmvF/nBQ0X9CW2DutBT3MM=
github.com/libp2p/go-testutil v0.0.1/go.mod h1:iAcJc/DKJQanJ5ws2V+u5ywdL2n12X1WbbEG+Jjy69I=
github.com/libp2p/go-testutil v0.1.0/go.mod h1:81b2n5HypcVyrCg/MJx4Wgfp/VHojytjVe/gLzZ2Ehc=
github.com/libp2p/go-ws-transport v0.0.1/go.mod h1:p3bKjDWHEgtuKKj+2OdPYs5dAPIjtpQGHF2tJfGz7Ww=
@ -1156,8 +1179,9 @@ github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-b
github.com/lightstep/lightstep-tracer-go v0.18.1/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/lucas-clemente/quic-go v0.11.2/go.mod h1:PpMmPfPKO9nKJ/psF49ESTAGQSdfXxlg1otPbEB2nOw=
github.com/lucas-clemente/quic-go v0.16.0/go.mod h1:I0+fcNTdb9eS1ZcjQZbDVPGchJ86chcIxPALn9lEJqE=
github.com/lucas-clemente/quic-go v0.19.3 h1:eCDQqvGBB+kCTkA0XrAFtNe81FMa0/fn4QSoeAbmiF4=
github.com/lucas-clemente/quic-go v0.19.3/go.mod h1:ADXpNbTQjq1hIzCpB+y/k5iz4n4z4IwqoLb94Kh5Hu8=
github.com/lucas-clemente/quic-go v0.21.2 h1:8LqqL7nBQFDUINadW0fHV/xSaCQJgmJC0Gv+qUnjd78=
github.com/lucas-clemente/quic-go v0.21.2/go.mod h1:vF5M1XqhBAHgbjKcJOXY3JZz3GP0T3FQhz/uyOUS38Q=
github.com/lucasb-eyer/go-colorful v1.0.3 h1:QIbQXiugsb+q10B+MI+7DI1oQLdmnep86tWFlaaUAac=
github.com/lucasb-eyer/go-colorful v1.0.3/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/lufia/iostat v1.1.0/go.mod h1:rEPNA0xXgjHQjuI5Cy05sLlS2oRcSlWHRLrvh/AQ+Pg=
@ -1175,10 +1199,17 @@ github.com/marten-seemann/qpack v0.1.0/go.mod h1:LFt1NU/Ptjip0C2CPkhimBz5CGE3WGD
github.com/marten-seemann/qpack v0.2.1/go.mod h1:F7Gl5L1jIgN1D11ucXefiuJS9UMVP2opoCp2jDKb7wc=
github.com/marten-seemann/qtls v0.2.3/go.mod h1:xzjG7avBwGGbdZ8dTGxlBnLArsVKLvwmjgmPuiQEcYk=
github.com/marten-seemann/qtls v0.9.1/go.mod h1:T1MmAdDPyISzxlK6kjRr0pcZFBVd1OZbBb/j3cvzHhk=
github.com/marten-seemann/qtls v0.10.0 h1:ECsuYUKalRL240rRD4Ri33ISb7kAQ3qGDlrrl55b2pc=
github.com/marten-seemann/qtls v0.10.0/go.mod h1:UvMd1oaYDACI99/oZUYLzMCkBXQVT0aGm99sJhbT8hs=
github.com/marten-seemann/qtls-go1-15 v0.1.1 h1:LIH6K34bPVttyXnUWixk0bzH6/N07VxbSabxn5A5gZQ=
github.com/marten-seemann/qtls-go1-15 v0.1.1/go.mod h1:GyFwywLKkRt+6mfU99csTEY1joMZz5vmB1WNZH3P81I=
github.com/marten-seemann/qtls-go1-15 v0.1.4/go.mod h1:GyFwywLKkRt+6mfU99csTEY1joMZz5vmB1WNZH3P81I=
github.com/marten-seemann/qtls-go1-15 v0.1.5 h1:Ci4EIUN6Rlb+D6GmLdej/bCQ4nPYNtVXQB+xjiXE1nk=
github.com/marten-seemann/qtls-go1-15 v0.1.5/go.mod h1:GyFwywLKkRt+6mfU99csTEY1joMZz5vmB1WNZH3P81I=
github.com/marten-seemann/qtls-go1-16 v0.1.4 h1:xbHbOGGhrenVtII6Co8akhLEdrawwB2iHl5yhJRpnco=
github.com/marten-seemann/qtls-go1-16 v0.1.4/go.mod h1:gNpI2Ol+lRS3WwSOtIUUtRwZEQMXjYK+dQSBFbethAk=
github.com/marten-seemann/qtls-go1-17 v0.1.0-rc.1 h1:/rpmWuGvceLwwWuaKPdjpR4JJEUH0tq64/I3hvzaNLM=
github.com/marten-seemann/qtls-go1-17 v0.1.0-rc.1/go.mod h1:fz4HIxByo+LlWcreM4CZOYNuz3taBQ8rN2X6FqvaWo8=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd/go.mod h1:QuCEs1Nt24+FYQEqAAncTDPJIuGs+LxK1MCiFL25pMU=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
@ -1215,6 +1246,12 @@ github.com/miekg/dns v1.1.28/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7
github.com/miekg/dns v1.1.31/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
github.com/miekg/dns v1.1.41 h1:WMszZWJG0XmzbK9FEmzH2TVcqYzFesusSIB41b8KHxY=
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c h1:bzE/A84HN25pxAuk9Eej1Kz9OUelF97nAc82bDquQI8=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c/go.mod h1:0SQS9kMwD2VsyFEB++InYyBJroV/FRmBgcydeSUcJms=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b h1:z78hV3sbSMAUoyUMM0I83AUIT6Hu17AWfgjzIbtrYFc=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b/go.mod h1:lxPUiZwKoFL8DUUmalo2yJJUCxbPKtm8OKfqr2/FTNU=
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc h1:PTfri+PuQmWDqERdnNMiD9ZejrlswWrCpBEZgWOiTrc=
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc/go.mod h1:cGKTAVKx4SxOuR/czcZ/E2RSJ3sfHs8FpHhQ5CWMf9s=
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1 h1:lYpkrQH5ajf0OXOcUbGjvZxxijuBwbbmlSxLiuofa+g=
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1/go.mod h1:pD8RvIylQ358TN4wwqatJ8rNavkEINozVn9DtGI3dfQ=
github.com/minio/sha256-simd v0.0.0-20190131020904-2d45a736cd16/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
@ -1256,8 +1293,9 @@ github.com/multiformats/go-multiaddr v0.2.0/go.mod h1:0nO36NvPpyV4QzvTLi/lafl2y9
github.com/multiformats/go-multiaddr v0.2.1/go.mod h1:s/Apk6IyxfvMjDafnhJgJ3/46z7tZ04iMk5wP4QMGGE=
github.com/multiformats/go-multiaddr v0.2.2/go.mod h1:NtfXiOtHvghW9KojvtySjH5y0u0xW5UouOmQQrn6a3Y=
github.com/multiformats/go-multiaddr v0.3.0/go.mod h1:dF9kph9wfJ+3VLAaeBqo9Of8x4fJxp6ggJGteB8HQTI=
github.com/multiformats/go-multiaddr v0.3.1 h1:1bxa+W7j9wZKTZREySx1vPMs2TqrYWjVZ7zE6/XLG1I=
github.com/multiformats/go-multiaddr v0.3.1/go.mod h1:uPbspcUPd5AfaP6ql3ujFY+QWzmBD8uLLL4bXW0XfGc=
github.com/multiformats/go-multiaddr v0.3.3 h1:vo2OTSAqnENB2rLk79pLtr+uhj+VAzSe3uef5q0lRSs=
github.com/multiformats/go-multiaddr v0.3.3/go.mod h1:lCKNGP1EQ1eZ35Za2wlqnabm9xQkib3fyB+nZXHLag0=
github.com/multiformats/go-multiaddr-dns v0.0.1/go.mod h1:9kWcqw/Pj6FwxAwW38n/9403szc57zJPs45fmnznu3Q=
github.com/multiformats/go-multiaddr-dns v0.0.2/go.mod h1:9kWcqw/Pj6FwxAwW38n/9403szc57zJPs45fmnznu3Q=
github.com/multiformats/go-multiaddr-dns v0.0.3/go.mod h1:9kWcqw/Pj6FwxAwW38n/9403szc57zJPs45fmnznu3Q=
@ -1305,6 +1343,7 @@ github.com/multiformats/go-varint v0.0.5/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXS
github.com/multiformats/go-varint v0.0.6 h1:gk85QWKxh3TazbLxED/NlDVv8+q+ReFJk7Y2W/KhfNY=
github.com/multiformats/go-varint v0.0.6/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/jwt v0.3.0/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
github.com/nats-io/jwt v0.3.2/go.mod h1:/euKqTS1ZD+zzjYrY7pseZrTtWQSjujC7xjPc8wL6eU=
github.com/nats-io/nats-server/v2 v2.1.2/go.mod h1:Afk+wRZqkMQs/p45uXdrVLuab3gwv3Z8C4HTBu8GD/k=
@ -1319,8 +1358,9 @@ github.com/nikkolasg/hexjson v0.0.0-20181101101858-78e39397e00c h1:5bFTChQxSKNwy
github.com/nikkolasg/hexjson v0.0.0-20181101101858-78e39397e00c/go.mod h1:7qN3Y0BvzRUf4LofcoJplQL10lsFDb4PYlePTVwrP28=
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229 h1:E2B8qYyeSgv5MXpmzZXRNp8IAQ4vjxIjhpAf5hv/tAg=
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229/go.mod h1:0aYXnNPJ8l7uZxf45rWW1a/uME32OF0rhiYGNQ2oF2E=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/oklog/oklog v0.3.2/go.mod h1:FCV+B7mhrz4o+ueLpx+KqkyXRGMWOYEvfiXtdGtbWGs=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
@ -1330,16 +1370,19 @@ github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.2/go.mod h1:CObGmKUOKaSC0RjmoAK7tKyn4Azo5P2IWuoMnvwxz1E=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/gomega v1.4.1/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.8.1/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/onsi/gomega v1.9.0/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.13.0 h1:7lLHu94wT9Ij0o6EWWclhu0aOh32VxhkwEJvzuWPeak=
github.com/onsi/gomega v1.13.0/go.mod h1:lRk9szgn8TxENtWd0Tp4c3wjlRfMTMH27I+3Je41yGY=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/open-rpc/meta-schema v0.0.0-20201029221707-1b72ef2ea333 h1:CznVS40zms0Dj5he4ERo+fRPtO0qxUk8lA8Xu3ddet0=
github.com/open-rpc/meta-schema v0.0.0-20201029221707-1b72ef2ea333/go.mod h1:Ag6rSXkHIckQmjFBCweJEEt1mrTPBv8b9W4aU/NQWfI=
@ -1390,8 +1433,11 @@ github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5Fsn
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_golang v1.4.1/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.6.0 h1:YVPodQOcK15POxhgARIvnDRVpLcuK8mglnMrWfyrw6A=
github.com/prometheus/client_golang v1.6.0/go.mod h1:ZLOG9ck3JLRdB5MgO8f+lLTe83AXG6ro35rLTxvnIl4=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.9.0/go.mod h1:FqZLKOZnGdFAhOK4nqGHa7D66IdsO+O441Eve7ptJDU=
github.com/prometheus/client_golang v1.10.0 h1:/o0BDeWzLWXNZ+4q5gXltUvaMpJqckTa+jTNoB+z4cg=
github.com/prometheus/client_golang v1.10.0/go.mod h1:WJM3cc3yu7XKBKa/I8WeZm+V3eltZnBwfENSU7mdogU=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@ -1406,8 +1452,10 @@ github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y8
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.15.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/common v0.18.0 h1:WCVKW7aL6LEe1uryfI9dnEc2ZqNB1Fn0ok930v0iL1Y=
github.com/prometheus/common v0.18.0/go.mod h1:U+gB1OBLb1lF3O42bTCL+FK18tX9Oar16Clt/msog/s=
github.com/prometheus/node_exporter v1.0.0-rc.0.0.20200428091818-01054558c289/go.mod h1:FGbBv5OPKjch+jNUJmEQpMZytIdyW0NdBtWFcfSKusc=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@ -1418,8 +1466,11 @@ github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsT
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.1.0 h1:jhMy6QXfi3y2HEzFoyuCj40z4OZIIHHPtFyCMftmvKA=
github.com/prometheus/procfs v0.1.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/raulk/clock v1.1.0 h1:dpb29+UKMbLqiU/jqIJptgLR1nn23HLgMY0sTCDza5Y=
github.com/raulk/clock v1.1.0/go.mod h1:3MpVxdZ/ODBQDxbN+kzshf5OSZwPjtMDx6BBXBmOeY0=
github.com/raulk/go-watchdog v1.0.1 h1:qgm3DIJAeb+2byneLrQJ7kvmDLGxN2vy3apXyGaDKN4=
@ -1661,8 +1712,9 @@ go.uber.org/dig v1.10.0 h1:yLmDDj9/zuDjv3gz8GQGviXMs9TfysIUMUilCpgzUJY=
go.uber.org/dig v1.10.0/go.mod h1:X34SnWGr8Fyla9zQNO2GSO2D+TIuqB14OS8JhYocIyw=
go.uber.org/fx v1.9.0 h1:7OAz8ucp35AU8eydejpYG7QrbE8rLKzGhHbZlJi5LYY=
go.uber.org/fx v1.9.0/go.mod h1:mFdUyAUuJ3w4jAckiKSKbldsxy1ojpAMJ+dVZg5Y0Aw=
go.uber.org/goleak v1.0.0 h1:qsup4IcBdlmsnGfqyLl4Ntn3C2XCCuKAE7DwHpScyUo=
go.uber.org/goleak v1.0.0/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.4.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
@ -1714,8 +1766,9 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 h1:It14KIkyBFYkHkwZ7k45minvA9aorojkyjGk9KJ5B/w=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210506145944-38f3c27a63bf h1:B2n+Zi5QeYRDAEodEu72OS36gmTWjgpXr2+cWcBW90o=
golang.org/x/crypto v0.0.0-20210506145944-38f3c27a63bf/go.mod h1:P+XmwS30IXTQdn5tA2iutPOUgjI07+tq3H3K9MVA1s8=
golang.org/x/exp v0.0.0-20181106170214-d68db9428509/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@ -1798,10 +1851,13 @@ golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81R
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201022231255-08b38378de70/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210423184538-5f58ad60dda6 h1:0PC75Fz/kyMGhL0e1QnypqK2kQMqKt9csD1GnMJR+Zk=
golang.org/x/net v0.0.0-20210423184538-5f58ad60dda6/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781 h1:DzZ89McO9/gWPsQXS/FVKAlG02ZjaQ6AlZRBimEYOd0=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@ -1819,6 +1875,7 @@ golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180202135801-37707fdb30a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -1884,17 +1941,25 @@ golang.org/x/sys v0.0.0-20200420163511-1957bb5e6d1f/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200812155832-6a926be9bd1d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200926100807-9d91bd62050c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201214210602-f9fddec55a1e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210309074719-68d13333faf2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210317225723-c4fcb01b228e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210426080607-c94f62235c83/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007 h1:gG67DSER+11cZvqIMb8S8bt0vZtiN6xWYARwirrOSfE=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210511113859-b0526f3d8744 h1:yhBbb4IRs2HS9PPlAg6DMC6mUOKexJBNsLf4Z+6En1Q=
golang.org/x/sys v0.0.0-20210511113859-b0526f3d8744/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf h1:MZ2shdL+ZM/XzY3ZGOnh4Nlpnxz5GSOhOmtHo3iPU6M=
@ -1953,7 +2018,9 @@ golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roY
golang.org/x/tools v0.0.0-20200711155855-7342f9734a7d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200827010519-17fd2f27a9e3/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20201112185108-eeaa07dd7696/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5 h1:ouewzE6p+/VEB31YYnTbEJdi8pFqKp4P4n85vwo3DHA=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@ -2035,8 +2102,10 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@ -2063,8 +2132,9 @@ gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=