merge filecoin-project/oni subtree in testplans/

commit 1f678c17d3

testplans/.circleci/config.yml (new file, 86 lines)
---
version: 2.1

parameters:
  workspace-dir:
    type: string
    default: "/home/circleci"

commands:
  setup:
    description: "install go, checkout and restore cache"
    steps:
      - checkout
      - run: sudo apt-get update
      - run: sudo apt-get install ocl-icd-opencl-dev
      - run: git submodule sync
      - run: git submodule update --init
      - run: cd extra/filecoin-ffi && make

executors:
  golang:
    docker:
      - image: circleci/golang:1.14.6
    resource_class: 2xlarge

workflows:
  version: 2
  main:
    jobs:
      - build-soup-linux
      - build-graphsync-linux
      - trigger-testplans
  nightly:
    triggers:
      - schedule:
          cron: "45 * * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - trigger-testplans

jobs:

  build-soup-linux:
    executor: golang
    steps:
      - setup
      - run:
          name: "build lotus-soup"
          command: pushd lotus-soup && go build -tags=testground .

  build-graphsync-linux:
    executor: golang
    steps:
      - setup
      - run:
          name: "build graphsync"
          command: pushd graphsync && go build .

  trigger-testplans:
    executor: golang
    steps:
      - setup
      - run:
          name: "download testground"
          command: wget https://gist.github.com/nonsense/5fbf3167cac79945f658771aed32fc44/raw/2e17eb0debf7ec6bdf027c1bdafc2c92dd97273b/testground-d3e9603 -O ~/testground-cli && chmod +x ~/testground-cli
      - run:
          name: "prepare .env.toml"
          command: pushd lotus-soup && mkdir -p $HOME/testground && cp env-ci.toml $HOME/testground/.env.toml && echo 'endpoint="'$endpoint'"' >> $HOME/testground/.env.toml && echo 'token="'$token'"' >> $HOME/testground/.env.toml && echo 'user="circleci"' >> $HOME/testground/.env.toml
      - run:
          name: "prepare testground home dir"
          command: mkdir -p $HOME/testground/plans && mv lotus-soup $HOME/testground/plans/ && mv graphsync $HOME/testground/plans/
      - run:
          name: "trigger baseline test plan on testground ci"
          command: ~/testground-cli run composition -f $HOME/testground/plans/lotus-soup/_compositions/baseline-k8s-3-1.toml --metadata-commit=$CIRCLE_SHA1 --metadata-repo=filecoin-project/oni --metadata-branch=$CIRCLE_BRANCH
      - run:
          name: "trigger payment channel stress test plan on testground ci"
          command: ~/testground-cli run composition -f $HOME/testground/plans/lotus-soup/_compositions/paych-stress-k8s.toml --metadata-commit=$CIRCLE_SHA1 --metadata-repo=filecoin-project/oni --metadata-branch=$CIRCLE_BRANCH
      - run:
          name: "trigger deals stress concurrent test plan on testground ci"
          command: ~/testground-cli run composition -f $HOME/testground/plans/lotus-soup/_compositions/deals-stress-concurrent-natural-k8s.toml --metadata-commit=$CIRCLE_SHA1 --metadata-repo=filecoin-project/oni --metadata-branch=$CIRCLE_BRANCH
      - run:
          name: "trigger graphsync stress test plan on testground ci"
          command: ~/testground-cli run composition -f $HOME/testground/plans/graphsync/_compositions/stress-k8s.toml --metadata-commit=$CIRCLE_SHA1 --metadata-repo=filecoin-project/oni --metadata-branch=$CIRCLE_BRANCH
testplans/.github/ISSUE_TEMPLATE/test-scenario-request.md (new file, vendored, 28 lines)
---
name: 👯‍♀️ Test scenario
about: Suggest a test scenario for Project Oni to consider
title: "[test scenario] "
labels: hint/needs-analysis, hint/needs-triage, kind/test-scenario

---

**Describe the test scenario.**

<!-- A clear and concise description of what the test scenario looks like, e.g. I'd like you to test that/if/how [...] -->

**Provide any background and technical implementation details.**

<!-- Provide any info and technical insight that would help us implement such a scenario. Think about the setup, choreography between nodes, operations involved, etc. -->

**What should we measure?**

<!-- Indicate what we should be measuring for comparison/analytical purposes. -->

**Discomfort factor (0-10).**

<!-- How uncomfortable would you be if we launched mainnet without our Oni tests covering this system process / test scenario? -->
<!-- Rubric in https://github.com/filecoin-project/oni/labels?q=discomfort. -->

**Additional remarks.**

<!-- Anything else that Project Oni should know. -->
testplans/.github/labels.yml (new file, vendored, 277 lines)
## Color palette

# yellows: dba355 d8a038 d8bd36 edd17d fbca04
# greens: 92ef92 6bbf3b 1cef5c 75b72d 9fea8f c6e84e c1f45a b8d613 fcf0b5
# reds: dd362a
# blues: 5b91c6 2a7d93 0bb1ed
# pinks: bf0f73 c619b5
# oranges: ba500e ce8048
# teals: 40c491
#
# Tailwind CSS colors: https://tailwindcss.com/docs/customizing-colors/

###
### Special magic GitHub labels
###
- description: "Good for newcomers"
  name: "good first issue"
  color: 7057ff
- description: "Extra attention is needed"
  name: "help wanted"
  color: 008672

## Work streams
- description: "Workstream: End-to-end Tests"
  name: "workstream/e2e-tests"
  color: fbca04
- description: "Workstream: VM Conformance Tests"
  name: "workstream/vm-conformance-tests"
  color: fbca04

###
### Topics
###
- description: "Topic: Slashing"
  name: topic/slashing
  color: c619b5
- description: "Topic: Sector proving"
  name: topic/sector-proving
  color: c619b5
- description: "Topic: Sync / fork selection"
  name: topic/sync-forks
  color: c619b5
- description: "Topic: Present-time mining / tipset assembly"
  name: topic/mining-present
  color: c619b5
- description: "Topic: Catch-up / rush mining"
  name: topic/mining-rush
  color: c619b5
- description: "Topic: Payment channels"
  name: topic/paych
  color: c619b5
- description: "Topic: Drand faults"
  name: topic/drand
  color: c619b5
- description: "Topic: Mempool message selection"
  name: topic/mempool
  color: c619b5
- description: "Topic: Presealing"
  name: topic/presealing
  color: c619b5
- description: "Topic: Deals"
  name: topic/deals
  color: c619b5

###
### Kinds
###
- description: "Kind: Bug"
  name: kind/bug
  color: c92712
- description: "Kind: Problem"
  name: kind/problem
  color: c92712
- description: "Kind: Investigation"
  name: kind/investigation
  color: fcf0b5
- description: "Kind: Chore"
  name: kind/chore
  color: fcf0b5
- description: "Kind: Feature"
  name: kind/feature
  color: fcf0b5
- description: "Kind: Improvement"
  name: kind/improvement
  color: fcf0b5
- description: "Kind: Test Scenario"
  name: kind/test-scenario
  color: feb95e
- description: "Kind: Tracking Issue"
  name: kind/tracking-issue
  color: fcf0b5
- description: "Kind: Question"
  name: kind/question
  color: fcf0b5
- description: "Kind: Enhancement"
  name: kind/enhancement
  color: fcf0b5
- description: "Kind: Discussion"
  name: kind/discussion
  color: fcf0b5
- description: "Kind: Spike"
  name: kind/spike
  color: fcf0b5
- description: "Kind: System Process"
  name: kind/system-process
  color: ff4782

###
### Difficulties
###
- description: "Difficulty: Trivial"
  name: dif/trivial
  color: b2b7ff
- description: "Difficulty: Easy"
  name: dif/easy
  color: 7886d7
- description: "Difficulty: Medium"
  name: dif/medium
  color: 6574cd
- description: "Difficulty: Hard"
  name: dif/hard
  color: 5661b3
- description: "Difficulty: Expert"
  name: dif/expert
  color: 2f365f

###
### Efforts
###
- description: "Effort: Minutes"
  name: effort/minutes
  color: e8fffe
- description: "Effort: One or Multiple Hours."
  name: effort/hours
  color: a0f0ed
- description: "Effort: One Day."
  name: effort/day
  color: 64d5ca
- description: "Effort: Multiple Days."
  name: effort/days
  color: 4dc0b5
- description: "Effort: One Week."
  name: effort/week
  color: 38a89d
- description: "Effort: Multiple Weeks."
  name: effort/weeks
  color: 20504f

###
### Discomfort Factor
###
- description: "Discomfort factor: I wake up in the middle of the night with nightmares, sweats, and chills."
  name: discomfort-factor/10
  color: c53030
- description: "Discomfort factor: Wakes me up in the middle of the night, but if I breathe deep, I can sleep again."
  name: discomfort-factor/9
  color: e53e3e
- description: "Discomfort factor: I touched my eyes after picking up a Jalapeño (10,000 SHU)."
  name: discomfort-factor/8
  color: f56565
- description: "Discomfort factor: Sitting next to a sweaty gentleman in a transatlantic flight."
  name: discomfort-factor/7
  color: dd6b20
- description: "Discomfort factor: Agonizing smalltalk."
  name: discomfort-factor/6
  color: ed8936
- description: "Discomfort factor: Watching tourists wear white socks with flip-flops."
  name: discomfort-factor/5
  color: f6ad55
- description: "Discomfort factor: An itchy jumper label the entire night."
  name: discomfort-factor/4
  color: ecc94b
- description: "Discomfort factor: A pebble in my shoe."
  name: discomfort-factor/3
  color: f6e05e
- description: "Discomfort factor: A sneeze that just won't come out."
  name: discomfort-factor/2
  color: faf089
- description: "Discomfort factor: Opening a can with a supposedly 'easy open lid' whose ring has snapped."
  name: discomfort-factor/1
  color: c6f6d5
- description: "Discomfort factor: Don't worry, chill, we're cool!"
  name: discomfort-factor/0
  color: f0fff4

###
### Priorities
###
- description: "P0: Critical. This is a blocker. Drop everything else."
  name: P0
  color: dd362a
- description: "P1: Must be fixed."
  name: P1
  color: ce8048
- description: "P2: Should be fixed."
  name: P2
  color: dbd81a
- description: "P3: Might get fixed."
  name: P3
  color: 9fea8f

###
### Hints
###
- description: "Hint: Good First Issue"
  name: hint/good-first-issue
  color: 0623cc
- description: "Hint: Needs Contributor"
  name: hint/needs-contributor
  color: 0623cc
- description: "Hint: Needs Scoring"
  name: hint/needs-scoring
  color: 0623cc
- description: "Hint: Needs Decision"
  name: hint/needs-decision
  color: 0623cc
- description: "Hint: Needs Triage"
  name: hint/needs-triage
  color: 0623cc
- description: "Hint: Needs Analysis"
  name: hint/needs-analysis
  color: 0623cc
- description: "Hint: Needs Author Input"
  name: hint/needs-author-input
  color: 0623cc
- description: "Hint: Needs Team Input"
  name: hint/needs-team-input
  color: 0623cc
- description: "Hint: Needs Community Input"
  name: hint/needs-community-input
  color: 0623cc
- description: "Hint: Needs Review"
  name: hint/needs-review
  color: 0623cc
- description: "Hint: Needs Help"
  name: hint/needs-help
  color: 0623cc
- description: "Hint: Description outdated"
  name: hint/desc-outdated
  color: 0623cc

###
### Statuses
###
- description: "Status: Done"
  name: status/done
  color: edb3a6
- description: "Status: Approved and Awaiting Merge"
  name: status/approved-waiting
  color: edb3a6
- description: "Status: Changes Requested in Review"
  name: status/changes-requested
  color: edb3a6
- description: "Status: Waiting for Review"
  name: status/awaiting-review
  color: edb3a6
- description: "Status: Deferred"
  name: status/deferred
  color: edb3a6
- description: "Status: In Progress"
  name: status/in-progress
  color: edb3a6
- description: "Status: Blocked"
  name: status/blocked
  color: edb3a6
- description: "Status: Inactive"
  name: status/inactive
  color: edb3a6
- description: "Status: Waiting"
  name: status/waiting
  color: edb3a6
- description: "Status: Rotten"
  name: status/rotten
  color: edb3a6
- description: "Status: Discarded / Won't fix"
  name: status/discarded
  color: a0aec0
testplans/.github/workflows/label-syncer.yml (new file, vendored, 16 lines)
name: Label syncer
on:
  push:
    paths:
      - '.github/labels.yml'
    branches:
      - master
jobs:
  build:
    name: Sync labels
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1.0.0
      - uses: micnncim/action-label-syncer@v0.4.0
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
testplans/.gitignore (new file, vendored, 4 lines)
venv/
__pycache__/
.ipynb_checkpoints/
tvx/tvx
testplans/.gitmodules (new file, vendored, 6 lines)
[submodule "extra/filecoin-ffi"]
	path = extra/filecoin-ffi
	url = https://github.com/filecoin-project/filecoin-ffi.git
[submodule "extra/fil-blst"]
	path = extra/fil-blst
	url = https://github.com/filecoin-project/fil-blst.git
testplans/DELVING.md (new file, 193 lines)
# Delving into the unknown

This write-up summarises how to debug what appears to be a mischievous Lotus
instance during our Testground tests. It also enumerates which assets are
useful when reporting suspicious behaviours upstream, so that the reports are
actionable.

## Querying the Lotus RPC API

The `local:docker` and `cluster:k8s` runners map the ports that you specify in
the composition.toml, so you can access them externally.

All our compositions should carry this fragment:

```toml
[global.run_config]
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
```

This tells Testground to expose the following ports:

* `6060` => Go pprof.
* `1234` => Lotus full node RPC.
* `2345` => Lotus storage miner RPC.

### `local:docker`

1. Install the `lotus` binary on your host.
2. Find the container that you want to connect to in `docker ps`.
   * Note that our _container names_ are slightly long, and they're the last
     field on every line, so if your terminal is wrapping text, the port
     numbers will end up ABOVE the friendly/recognizable container name (e.g. `tg-lotus-soup-deals-e2e-acfc60bc1727-miners-1`).
   * The testground output displays the _container ID_ inside coloured angle
     brackets, so if you spot something spurious in a particular node, you can
     hone in on that one, e.g. `<< 54dd5ad916b2 >>`.

```
⟩ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54dd5ad916b2 be3c18d7f0d4 "/testplan" 10 seconds ago Up 8 seconds 0.0.0.0:32788->1234/tcp, 0.0.0.0:32783->2345/tcp, 0.0.0.0:32773->6060/tcp, 0.0.0.0:32777->6060/tcp tg-lotus-soup-deals-e2e-acfc60bc1727-clients-2
53757489ce71 be3c18d7f0d4 "/testplan" 10 seconds ago Up 8 seconds 0.0.0.0:32792->1234/tcp, 0.0.0.0:32790->2345/tcp, 0.0.0.0:32781->6060/tcp, 0.0.0.0:32786->6060/tcp tg-lotus-soup-deals-e2e-acfc60bc1727-clients-1
9d3e83b71087 be3c18d7f0d4 "/testplan" 10 seconds ago Up 8 seconds 0.0.0.0:32791->1234/tcp, 0.0.0.0:32789->2345/tcp, 0.0.0.0:32779->6060/tcp, 0.0.0.0:32784->6060/tcp tg-lotus-soup-deals-e2e-acfc60bc1727-clients-0
7bd60e75ed0e be3c18d7f0d4 "/testplan" 10 seconds ago Up 8 seconds 0.0.0.0:32787->1234/tcp, 0.0.0.0:32782->2345/tcp, 0.0.0.0:32772->6060/tcp, 0.0.0.0:32776->6060/tcp tg-lotus-soup-deals-e2e-acfc60bc1727-miners-1
dff229d7b342 be3c18d7f0d4 "/testplan" 10 seconds ago Up 9 seconds 0.0.0.0:32778->1234/tcp, 0.0.0.0:32774->2345/tcp, 0.0.0.0:32769->6060/tcp, 0.0.0.0:32770->6060/tcp tg-lotus-soup-deals-e2e-acfc60bc1727-miners-0
4cd67690e3b8 be3c18d7f0d4 "/testplan" 11 seconds ago Up 8 seconds 0.0.0.0:32785->1234/tcp, 0.0.0.0:32780->2345/tcp, 0.0.0.0:32771->6060/tcp, 0.0.0.0:32775->6060/tcp tg-lotus-soup-deals-e2e-acfc60bc1727-bootstrapper-0
aeb334adf88d iptestground/sidecar:edge "testground sidecar …" 43 hours ago Up About an hour 0.0.0.0:32768->6060/tcp testground-sidecar
c1157500282b influxdb:1.8 "/entrypoint.sh infl…" 43 hours ago Up 25 seconds 0.0.0.0:8086->8086/tcp testground-influxdb
99ca4c07fecc redis "docker-entrypoint.s…" 43 hours ago Up About an hour 0.0.0.0:6379->6379/tcp testground-redis
bf25c87488a5 bitnami/grafana "/run.sh" 43 hours ago Up 26 seconds 0.0.0.0:3000->3000/tcp testground-grafana
cd1d6383eff7 goproxy/goproxy "/goproxy" 45 hours ago Up About a minute 8081/tcp testground-goproxy
```

3. Take note of the port mapping. Imagine that, in the output above, we want to query
   `54dd5ad916b2`. We'd use `localhost:32788`, as it forwards to the container's
   1234 port (Lotus full node RPC).
4. Run your Lotus CLI command setting the `FULLNODE_API_INFO` env variable,
   which is a multiaddr:

```sh
$ FULLNODE_API_INFO=":/ip4/127.0.0.1/tcp/$port/http" lotus chain list
[...]
```
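
If you would rather script against the node than shell out to the `lotus` CLI, the same mapped port also serves Lotus's JSON-RPC API. A minimal Python sketch, assuming the standard Lotus JSON-RPC path (`/rpc/v0`) and the `Filecoin.ChainHead` method are available on your build, and that the test nodes accept unauthenticated reads (as the empty token in the example above suggests); substitute the host port you noted in step 3:

```python
import json
import urllib.request

port = 32788  # host port that maps to the container's 1234 (full node RPC) -- adjust

payload = {
    "jsonrpc": "2.0",
    "method": "Filecoin.ChainHead",  # any read-only method works for a smoke test
    "params": [],
    "id": 1,
}

req = urllib.request.Request(
    f"http://127.0.0.1:{port}/rpc/v0",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    head = json.load(resp)

# Height of the node's current head; comparing this value across containers is
# a quick "are these nodes on the same chain?" check.
print(head["result"]["Height"])
```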

---

Alternatively, you could install gawk and set up a script in your `.bashrc` or `.zshrc` similar to:

```
# lprt <name> <port>: print the host port that maps to <port> on the first
# container whose name matches <name>.
lprt() {
  NAME=$1
  PORT=$2

  docker ps --format "table {{.Names}}" | grep $NAME | xargs -I {} docker port {} $PORT | gawk --field-separator=":" '{print $2}'
}

# envs <name>: export FULLNODE_API_INFO / STORAGE_API_INFO for that container.
envs() {
  NAME=$1

  local REMOTE_PORT_1234=$(lprt $NAME 1234)
  local REMOTE_PORT_2345=$(lprt $NAME 2345)

  export FULLNODE_API_INFO=":/ip4/127.0.0.1/tcp/$REMOTE_PORT_1234/http"
  export STORAGE_API_INFO=":/ip4/127.0.0.1/tcp/$REMOTE_PORT_2345/http"

  echo "Setting \$FULLNODE_API_INFO to $FULLNODE_API_INFO"
  echo "Setting \$STORAGE_API_INFO to $STORAGE_API_INFO"
}
```

Then call commands like:
```
envs miners-0
lotus chain list
```

### `cluster:k8s`

Similar to `local:docker`, you pick a pod that you want to connect to and port-forward ports 1234 and 2345 to that specific pod, for example:

```
export PODNAME="tg-lotus-soup-ae620dfb2e19-miners-0"
kubectl port-forward pods/$PODNAME 1234:1234 2345:2345

export FULLNODE_API_INFO=":/ip4/127.0.0.1/tcp/1234/http"
export STORAGE_API_INFO=":/ip4/127.0.0.1/tcp/2345/http"
lotus-storage-miner storage-deals list
lotus-storage-miner storage-deals get-ask
```

### Useful commands / checks

* **Making sure miners are on the same chain:** compare outputs of `lotus chain list`.
* **Checking deals:** `lotus client list-deals`.
* **Sector queries:** `lotus-storage-miner info`, `lotus-storage-miner proving info`.
* **Sector sealing errors:**
  * `STORAGE_API_INFO=":/ip4/127.0.0.1/tcp/53624/http" FULLNODE_API_INFO=":/ip4/127.0.0.1/tcp/53623/http" lotus-storage-miner sector info`
  * `STORAGE_API_INFO=":/ip4/127.0.0.1/tcp/53624/http" FULLNODE_API_INFO=":/ip4/127.0.0.1/tcp/53623/http" lotus-storage-miner sector status <sector_no>`
  * `STORAGE_API_INFO=":/ip4/127.0.0.1/tcp/53624/http" FULLNODE_API_INFO=":/ip4/127.0.0.1/tcp/53623/http" lotus-storage-miner sector status --log <sector_no>`

## Viewing logs of a particular container (`local:docker`)

This works for both started and stopped containers. Just get the container ID
(in double angle brackets in Testground output, on every log line), and run:

```shell script
$ docker logs $container_id
```

## Accessing the golang instrumentation

Testground exposes a pprof endpoint under local port 6060, which both
`local:docker` and `cluster:k8s` map.

For `local:docker`, see above to figure out which host port maps to the
container's 6060 port.

## Acquiring a goroutine dump

When things appear to be stuck, get a goroutine dump.

```shell script
$ wget -O goroutine.out "http://localhost:${pprof_port}/debug/pprof/goroutine?debug=2"
```

You can use whyrusleeping/stackparse to extract a summary:

```shell script
$ go get github.com/whyrusleeping/stackparse
$ stackparse --summary goroutine.out
```
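
If a node stays stuck for a while, it can also help to capture several dumps over time and diff consecutive snapshots. A small sketch that does this with the Python standard library (point `PPROF_PORT` at whichever host port maps to the container's 6060):

```python
import time
import urllib.request

PPROF_PORT = 32773  # host port mapped to the container's 6060 -- adjust
URL = f"http://127.0.0.1:{PPROF_PORT}/debug/pprof/goroutine?debug=2"

# Take a goroutine dump every 30 seconds; goroutines that look unchanged
# across snapshots are good candidates for the source of the blockage.
for _ in range(5):
    with urllib.request.urlopen(URL) as resp:
        data = resp.read()
    fname = f"goroutine-{int(time.time())}.out"
    with open(fname, "wb") as f:
        f.write(data)
    print(f"wrote {fname} ({len(data)} bytes)")
    time.sleep(30)
```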

## Acquiring a CPU profile

When the CPU appears to be spiking/rallying, grab a CPU profile.

```shell script
$ wget -O profile.out "http://localhost:${pprof_port}/debug/pprof/profile"
```

Analyse it using `go tool pprof`. Usually, generating a `png` graph is useful:

```shell script
$ go tool pprof profile.out
File: testground
Type: cpu
Time: Jul 3, 2020 at 12:00am (WEST)
Duration: 30.07s, Total samples = 2.81s ( 9.34%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) png
Generating report in profile003.png
```

## Submitting actionable reports / findings

This is useful both internally (within the Oni team, so that peers can help) and
externally (when submitting a finding upstream).

We don't need to play the full bug-hunting game on Lotus, but it's tremendously
useful to provide the necessary data so that any reports are actionable.

These include:

* test outputs (use `testground collect`).
* stack traces that appear in logs (whether panics or not).
* output of relevant Lotus CLI commands.
* if this is some kind of blockage / deadlock, goroutine dumps.
* if this is a CPU hotspot, a CPU profile would be useful.
* if this is a memory issue, a heap dump would be useful.

**When submitting bugs upstream (Lotus), make sure to indicate:**

* Lotus commit.
* FFI commit.
testplans/Makefile (new file, 23 lines)
SHELL = /bin/bash

.DEFAULT_GOAL := download-proofs

download-proofs:
	go run github.com/filecoin-project/go-paramfetch/paramfetch 2048 ./docker-images/proof-parameters.json

build-images:
	docker build -t "iptestground/oni-buildbase:v8" -f "docker-images/Dockerfile.oni-buildbase" "docker-images"
	docker build -t "iptestground/oni-runtime:v3" -f "docker-images/Dockerfile.oni-runtime" "docker-images"
	docker build -t "iptestground/oni-runtime:v4-debug" -f "docker-images/Dockerfile.oni-runtime-debug" "docker-images"

push-images:
	docker push iptestground/oni-buildbase:v9
	docker push iptestground/oni-runtime:v3
	docker push iptestground/oni-runtime:v4-debug

pull-images:
	docker pull iptestground/oni-buildbase:v9
	docker pull iptestground/oni-runtime:v3
	docker pull iptestground/oni-runtime:v4-debug

.PHONY: download-proofs build-images push-images pull-images
testplans/README.md (new file, 254 lines)
# Project Oni 👹

Our mandate is:

> To verify the successful end-to-end outcome of the filecoin protocol and filecoin implementations, under a variety of real-world and simulated scenarios.

➡️ Find out more about our goals, requirements, execution plan, and team culture, in our [Project Description](https://docs.google.com/document/d/16jYL--EWYpJhxT9bakYq7ZBGLQ9SB940Wd1lTDOAbNE).

## Table of Contents

- [Testing topics](#testing-topics)
- [Repository contents](#repository-contents)
- [Running the test cases](#running-the-test-cases)
- [Catalog](#catalog)
- [Debugging](#debugging)
- [Dependencies](#dependencies)
- [Docker images changelog](#docker-images-changelog)
- [Team](#team)

## Testing topics

These are the topics we are currently centering our testing efforts on. Our testing efforts include fault induction, stress tests, and end-to-end testing.

* **slashing:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fslashing)
  * We are recreating the scenarios that lead to slashing, as they are not readily seen in mono-client testnets.
  * Context: slashing is the negative economic consequence of penalising a miner that has breached protocol, by deducting FIL and/or removing their power from the network.
* **windowed PoSt/sector proving faults:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fsector-proving)
  * We are recreating the proving fault scenarios and triggering them in an accelerated fashion (by modifying the system configuration), so that we're able to verify that the sector state transitions properly through the different milestones (temporary faults, termination, etc.), and under chain fork conditions.
  * Context: every 24 hours there are 36 windows where miners need to submit their proofs of sector liveness, correctness, and validity. Failure to do so will mark a sector as faulted, and will eventually terminate the sector, triggering slashing consequences for the miner.
* **syncing/fork selection:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fsync-forks)
  * Newly bootstrapped clients, and paused-then-resumed clients, are able to latch on to the correct chain even in the presence of a large number of forks in the network, either in the present, or throughout history.
* **present-time mining/tipset assembly:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fmining-present)
  * Induce forks in the network, create network partitions, simulate chain halts, long-range forks, etc. Stage many kinds of convoluted chain shapes and network partitions, and ensure that miners are always able to arrive at consensus when disruptions subside.
* **catch-up/rush mining:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fmining-rush)
  * Induce network-wide or partition-wide arrests, and investigate what the resulting chain is after the system is allowed to recover.
  * Context: catch-up/rush mining is a dedicated pathway in the mining logic that brings the chain up to speed with present time, in order to recover from network halts. Basically it entails producing backdated blocks in a hot loop. Imagine all miners recover in unison from a network-wide disruption; miners will produce blocks for their winning rounds, and will label losing rounds as _null rounds_. In the current implementation, there is no time for block propagation, so miners will produce solo chains, and the assumption is that when all these chains hit the network, the _fork choice rule_ will pick the heaviest one. Unfortunately this process is brittle and unbalanced, as it favours the miner that held the highest power before the disruption commenced.
* **storage and retrieval deals:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fdeals)
  * End-to-end flows where clients store and retrieve pieces from miners, including stress testing the system.
* **payment channels:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fpaych)
  * Stress testing payment channels via excessive lane creation, excessive payment voucher atomisation, and redemption.
* **drand incidents and impact on the filecoin network/protocol/chain:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fdrand)
  * Drand total unavailabilities, drand catch-ups, drand slowness, etc.
* **mempool message selection:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fmempool)
  * Soundness of message selection logic; potentially targeted attacks against miners by flooding their message pools with different kinds of messages.
* **presealing:** [_(view test scenarios)_](https://github.com/filecoin-project/oni/issues?q=is%3Aissue+sort%3Aupdated-desc+label%3Atopic%2Fpresealing)
  * TBD, anything related to this worth testing?

## Repository contents

This repository consists of [test plans](https://docs.testground.ai/concepts-and-architecture/test-structure) built to be run on [Testground](https://github.com/testground/testground).

The source code for the various test cases can be found in the [`lotus-soup` directory](https://github.com/filecoin-project/oni/tree/master/lotus-soup).

## Running the test cases

If you are unfamiliar with Testground, we strongly suggest you read the Testground [Getting Started guide](https://docs.testground.ai/getting-started) in order to learn how to install Testground and how to use it.

You can find [composition files](https://docs.testground.ai/running-test-plans#composition-runs) describing the various test scenarios built as part of Project Oni in the [`lotus-soup/_compositions` directory](https://github.com/filecoin-project/oni/tree/master/lotus-soup/_compositions).

We've designed the test cases so that you can run them via the `local:exec`, `local:docker` and `cluster:k8s` runners. Note that Lotus miners are quite resource intensive, requiring gigabytes of memory. Hence you would have to run these test cases on a beefy machine (when using `local:docker` and `local:exec`), or on a Kubernetes cluster (when using `cluster:k8s`).

Here are the basics of how to run the baseline deals end-to-end test case:

### Running the baseline deals end-to-end test case

1. Compile and install Testground from source code.
   * See the [Getting Started](https://github.com/testground/testground#getting-started) section of the README for instructions.

2. Run a Testground daemon

```
testground daemon
```

3. Download the required Docker images for the `lotus-soup` test plan

```
make pull-images
```

Alternatively you can build them locally with

```
make build-images
```

4. Import the `lotus-soup` test plan into your Testground home directory

```
testground plan import --from ./lotus-soup
```

5. Init the `filecoin-ffi` Git submodule in the `extra` folder.

```
git submodule update --init --recursive
```

6. Compile the `filecoin-ffi` libraries locally (necessary if you use `local:exec`)

```
cd extra/filecoin-ffi
make
```

7. Run a composition for the baseline deals end-to-end test case

```
testground run composition -f ./lotus-soup/_compositions/baseline-docker-5-1.toml
```

## Batch-running randomised test cases

The Oni testkit supports [range parameters](https://github.com/filecoin-project/oni/blob/master/lotus-soup/testkit/testenv_ranges.go),
which test cases can use to generate random values, either at the instance level
(each instance computes a random value within range), or at the run level (one
instance computes the values, and propagates them to all other instances via the
sync service).

For example:

```toml
latency_range = '["20ms", "500ms"]'
loss_range = '[0, 0.2]'
```

could pick a random latency between 20ms and 500ms, and a packet loss
probability between 0 and 0.2. We could apply those values through the
`netclient.ConfigureNetwork` Testground SDK API.
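
Conceptually, each range is just an interval from which a value is drawn uniformly. The real implementation lives in the Go testkit (`testenv_ranges.go`); the following Python sketch only illustrates the draw for the two example parameters above, and its duration-parsing helper is hypothetical and handles the `ms` suffix only:

```python
import json
import random

def parse_ms(s: str) -> float:
    """Hypothetical helper: parse a duration string like '20ms' into milliseconds."""
    assert s.endswith("ms")
    return float(s[:-2])

latency_range = json.loads('["20ms", "500ms"]')
loss_range = json.loads('[0, 0.2]')

# Instance-level randomisation: every instance draws its own value within range.
latency_ms = random.uniform(parse_ms(latency_range[0]), parse_ms(latency_range[1]))
loss = random.uniform(loss_range[0], loss_range[1])

# In a real test case these values would then be handed to
# netclient.ConfigureNetwork to shape the instance's network link.
print(f"latency={latency_ms:.0f}ms loss={loss:.2f}")
```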

Randomised range-based parameters are especially interesting when combined with
batch runs, as they enable Monte Carlo approaches to testing.

The Oni codebase includes a batch test run driver in package `lotus-soup/runner`.
You can point it at a composition file that uses range parameters and tell it to
run N iterations of the test:

```shell script
$ go run ./runner -runs 5 _compositions/net-chaos/latency.toml
```

This will run the test as many times as instructed, and will place all outputs
in a temporary directory. You can pass a concrete output directory with
the `-output` flag.

## Catalog

### Test cases part of `lotus-soup`

* `deals-e2e` - Deals end-to-end test case. Clients pick a miner at random, start a deal, wait for it to be sealed, and try to retrieve it from another random miner who offers back the data.
* `drand-halting` - Test case that instructs Drand with a sequence of halt/resume/wait events, while running deals between clients and miners at the same time.
* `deals-stress` - Deals stress test case. Clients pick a miner and send multiple deals (concurrently or serially) in order to test how many deals miners can handle.
* `paych-stress` - A test case exercising various payment channel stress tests.

### Compositions part of `lotus-soup`

* `baseline-docker-5-1.toml` - Runs a `baseline` test (deals e2e test) with a network of 5 clients and 1 miner, targeting `local:docker`
* `baseline-k8s-10-3.toml` - Runs a `baseline` test (deals e2e test) with a network of 10 clients and 3 miners, targeting `cluster:k8s`
* `baseline-k8s-3-1.toml` - Runs a `baseline` test (deals e2e test) with a network of 3 clients and 1 miner, targeting `cluster:k8s`
* `baseline-k8s-3-2.toml` - Runs a `baseline` test (deals e2e test) with a network of 3 clients and 2 miners, targeting `cluster:k8s`
* `baseline.toml` - Runs a `baseline` test (deals e2e test) with a network of 3 clients and 2 miners, targeting `local:exec`. You have to manually download the proof parameters and place them in `/var/tmp`.
* `deals-stress-concurrent-natural-k8s.toml`
* `deals-stress-concurrent-natural.toml`
* `deals-stress-concurrent.toml`
* `deals-stress-serial-natural.toml`
* `deals-stress-serial.toml`
* `drand-halt.toml`
* `local-drand.toml`
* `natural.toml`
* `paych-stress.toml`
* `pubsub-tracer.toml`

## Debugging

Find commands and how-to guides on debugging test plans in [DELVING.md](https://github.com/filecoin-project/oni/blob/master/DELVING.md):

1. Querying the Lotus RPC API

2. Useful commands / checks
   * Making sure miners are on the same chain
   * Checking deals
   * Sector queries
   * Sector sealing errors

## Dependencies

Our current test plan, `lotus-soup`, builds the Lotus filecoin implementation programmatically and therefore requires all of its dependencies. The build process is slightly more complicated than a normal Go project, because we are binding a bit of Rust code. The Lotus codebase is in Go, however its `proofs` and `crypto` libraries are in Rust (BLS signatures, SNARK verification, etc.).

Depending on the runner you want to use to run the test plan, these dependencies are included in the build process in a different way, which you should be aware of should you need to use the test plan with a newer version of Lotus:

### Filecoin FFI libraries

* `local:docker`

  The Rust libraries are included in the Filecoin FFI Git submodule, which is part of the `iptestground/oni-buildbase` image. If the FFI changes on Lotus, we have to rebuild this image with the `make build-images` command, where X is the next version (see [Docker images changelog](#docker-images-changelog) below).

* `local:exec`

  The Rust libraries are included via the `extra` directory. Make sure that the test plan's reference to Lotus in `go.mod` and the `extra` directory are pointing to the same commit of the FFI git submodule. You also need to compile the `extra/filecoin-ffi` libraries with `make`.

* `cluster:k8s`

  The same process as for `local:docker`, however you need to make sure that the respective `iptestground/oni-buildbase` image is available as a public Docker image, so that the Kubernetes cluster can download it.

### Proof parameters

In addition to the Filecoin FFI Git submodules, we also bundle the `proof parameters` in the `iptestground/oni-runtime` image. If these change, you will need to rebuild that image with the `make build-images` command, where X is the next version.

## Docker images changelog

### oni-buildbase

* `v1` => initial image locking in Filecoin FFI commit ca281af0b6c00314382a75ae869e5cb22c83655b.
* `v2` => no changes; released only for aligning both images to aesthetically please @nonsense :D
* `v3` => locking in Filecoin FFI commit 5342c7c97d1a1df4650629d14f2823d52889edd9.
* `v4` => locking in Filecoin FFI commit 6a143e06f923f3a4f544c7a652e8b4df420a3d28.
* `v5` => locking in Filecoin FFI commit cddc56607e1d851ea6d09d49404bd7db70cb3c2e.
* `v6` => locking in Filecoin FFI commit 40569104603407c999d6c9e4c3f1228cbd4d0e5c.
* `v7` => add Filecoin-BLST repo to buildbase.
* `v8` => locking in Filecoin FFI commit f640612a1a1f7a2d.
* `v9` => locking in Filecoin FFI commit 57e38efe4943f09d3127dcf6f0edd614e6acf68e and Filecoin-BLST commit 8609119cf4595d1741139c24378fcd8bc4f1c475.

### oni-runtime

* `v1` => initial image with 2048 parameters.
* `v2` => adds auxiliary tools: `net-tools netcat traceroute iputils-ping wget vim curl telnet iproute2 dnsutils`.
* `v3` => bump proof parameters from v27 to v28

### oni-runtime-debug

* `v1` => initial image
* `v2` => locking in Lotus commit e21ea53
* `v3` => locking in Lotus commit d557c40
* `v4` => bump proof parameters from v27 to v28
* `v5` => locking in Lotus commit 1a170e18a

## Team

* [@raulk](https://github.com/raulk) (Captain + TL)
* [@nonsense](https://github.com/nonsense) (Testground TG + engineer)
* [@yusefnapora](https://github.com/yusefnapora) (engineer and technical writer)
* [@vyzo](https://github.com/vyzo) (engineer)
* [@schomatis](https://github.com/schomatis) (advisor)
* [@willscott](https://github.com/willscott) (engineer)
* [@alanshaw](https://github.com/alanshaw) (engineer)
testplans/composer/Dockerfile (new file, 29 lines)
FROM golang:1.14.4-buster as tg-build

ARG TESTGROUND_REF="oni"
WORKDIR /usr/src
RUN git clone https://github.com/testground/testground.git
RUN cd testground && git checkout $TESTGROUND_REF && go build .

FROM python:3.8-buster

WORKDIR /usr/src/app

COPY --from=tg-build /usr/src/testground/testground /usr/bin/testground

RUN mkdir /composer && chmod 777 /composer
RUN mkdir /testground && chmod 777 /testground

ENV HOME /composer
ENV TESTGROUND_HOME /testground
ENV LISTEN_PORT 5006
ENV TESTGROUND_DAEMON_HOST host.docker.internal

VOLUME /testground/plans

COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .

CMD panel serve --address 0.0.0.0 --port $LISTEN_PORT composer.ipynb
testplans/composer/Makefile (new file, 4 lines)
all: docker

docker:
	docker build -t "iptestground/composer:latest" .
testplans/composer/README.md (new file, 63 lines)
# Testground Composer

This is a work-in-progress UI for configuring and running testground compositions.

The app code lives in [./app](./app), and there's a thin Jupyter notebook shell in [composer.ipynb](./composer.ipynb).

## Running

You can either run the app in docker, or in a local python virtualenv. Docker is recommended unless you're hacking
on the code for Composer itself.

### Running with docker

Run the `./composer.sh` script to build a container with the latest source and run it. The first build
will take a little while since it needs to build testground and fetch a bunch of python dependencies.

You can skip the build if you set `SKIP_BUILD=true` when running `composer.sh`, and you can rebuild
manually with `make docker`.

The contents of `$TESTGROUND_HOME/plans` will be sync'd to a temporary directory and read-only mounted
into the container.

After building and starting the container, the script will open a browser to the composer UI.

You should be able to load an existing composition or create a new one from one of the plans in
`$TESTGROUND_HOME/plans`.

Right now docker only supports the standalone webapp UI; to run the UI in a Jupyter notebook, see below.

### Running with local python

To run without docker, make a python3 virtual environment somewhere and activate it:

```shell
# make a virtualenv called "venv" in the current directory
python3 -m venv ./venv

# activate (bash/zsh):
source ./venv/bin/activate

# activate (fish):
source ./venv/bin/activate.fish
```

Then install the python dependencies:

```shell
pip install -r requirements.txt
```

And start the UI:

```shell
panel serve composer.ipynb
```

That will start the standalone webapp UI. If you want a Jupyter notebook instead, run:

```
jupyter notebook
```

and open `composer.ipynb` in the Jupyter file picker.
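
The `Composition` model that backs the UI can also be driven headlessly from a script or notebook. A sketch, assuming it runs from the `composer` directory (so the `app` package and its requirements resolve), that the referenced plan has been imported under `$TESTGROUND_HOME/plans`, and that one of the `lotus-soup/_compositions` files is available locally:

```python
from app.composition import Composition

# Load an existing composition; the plan manifest is looked up under
# $TESTGROUND_HOME/plans based on the composition's [global] plan field.
comp = Composition.from_toml_file("baseline-k8s-3-1.toml")

# Bump the first group's instance count and write out a new variant.
comp.groups[0].instances.count = 5
comp.write_to_file("baseline-k8s-5-1.toml")
```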
testplans/composer/app/app.py (new file, 94 lines)
@ -0,0 +1,94 @@
|
||||
import param
|
||||
import panel as pn
|
||||
import toml
|
||||
from .util import get_plans, get_manifest
|
||||
from .composition import Composition
|
||||
from .runner import TestRunner
|
||||
|
||||
STAGE_WELCOME = 'Welcome'
|
||||
STAGE_CONFIG_COMPOSITION = 'Configure'
|
||||
STAGE_RUN_TEST = 'Run'
|
||||
|
||||
|
||||
class Welcome(param.Parameterized):
|
||||
composition = param.Parameter()
|
||||
composition_picker = pn.widgets.FileInput(accept='.toml')
|
||||
plan_picker = param.Selector()
|
||||
ready = param.Boolean()
|
||||
|
||||
def __init__(self, **params):
|
||||
super().__init__(**params)
|
||||
self.composition_picker.param.watch(self._composition_updated, 'value')
|
||||
self.param.watch(self._plan_selected, 'plan_picker')
|
||||
self.param['plan_picker'].objects = ['Select a Plan'] + get_plans()
|
||||
|
||||
def panel(self):
|
||||
tabs = pn.Tabs(
|
||||
('New Composition', self.param['plan_picker']),
|
||||
('Existing Composition', self.composition_picker),
|
||||
)
|
||||
|
||||
return pn.Column(
|
||||
"Either choose an existing composition or select a plan to create a new composition:",
|
||||
tabs,
|
||||
)
|
||||
|
||||
def _composition_updated(self, *args):
|
||||
print('composition updated')
|
||||
content = self.composition_picker.value.decode('utf8')
|
||||
comp_toml = toml.loads(content)
|
||||
manifest = get_manifest(comp_toml['global']['plan'])
|
||||
self.composition = Composition.from_dict(comp_toml, manifest=manifest)
|
||||
print('existing composition: {}'.format(self.composition))
|
||||
self.ready = True
|
||||
|
||||
def _plan_selected(self, evt):
|
||||
if evt.new == 'Select a Plan':
|
||||
return
|
||||
print('plan selected: {}'.format(evt.new))
|
||||
manifest = get_manifest(evt.new)
|
||||
self.composition = Composition(manifest=manifest, add_default_group=True)
|
||||
print('new composition: ', self.composition)
|
||||
self.ready = True
|
||||
|
||||
|
||||
class ConfigureComposition(param.Parameterized):
|
||||
composition = param.Parameter()
|
||||
|
||||
@param.depends('composition')
|
||||
def panel(self):
|
||||
if self.composition is None:
|
||||
return pn.Pane("no composition :(")
|
||||
print('composition: ', self.composition)
|
||||
return self.composition.panel()
|
||||
|
||||
|
||||
class WorkflowPipeline(object):
|
||||
def __init__(self):
|
||||
stages = [
|
||||
(STAGE_WELCOME, Welcome(), dict(ready_parameter='ready')),
|
||||
(STAGE_CONFIG_COMPOSITION, ConfigureComposition()),
|
||||
(STAGE_RUN_TEST, TestRunner()),
|
||||
]
|
||||
|
||||
self.pipeline = pn.pipeline.Pipeline(debug=True, stages=stages)
|
||||
|
||||
def panel(self):
|
||||
return pn.Column(
|
||||
pn.Row(
|
||||
self.pipeline.title,
|
||||
self.pipeline.network,
|
||||
self.pipeline.prev_button,
|
||||
self.pipeline.next_button,
|
||||
),
|
||||
self.pipeline.stage,
|
||||
sizing_mode='stretch_width',
|
||||
)
|
||||
|
||||
|
||||
class App(object):
|
||||
def __init__(self):
|
||||
self.workflow = WorkflowPipeline()
|
||||
|
||||
def ui(self):
|
||||
return self.workflow.panel().servable("Testground Composer")
|
testplans/composer/app/composition.py (new file, 328 lines)
@ -0,0 +1,328 @@
|
||||
import param
|
||||
import panel as pn
|
||||
import toml
|
||||
from .util import get_manifest, print_err
|
||||
|
||||
|
||||
def value_dict(parameterized, renames=None, stringify=False):
|
||||
d = dict()
|
||||
if renames is None:
|
||||
renames = dict()
|
||||
for name, p in parameterized.param.objects().items():
|
||||
if name == 'name':
|
||||
continue
|
||||
if name in renames:
|
||||
name = renames[name]
|
||||
val = p.__get__(parameterized, type(p))
|
||||
if isinstance(val, param.Parameterized):
|
||||
try:
|
||||
val = val.to_dict()
|
||||
except:
|
||||
val = value_dict(val, renames=renames)
|
||||
if stringify:
|
||||
val = str(val)
|
||||
d[name] = val
|
||||
return d
|
||||
|
||||
|
||||
def make_group_params_class(testcase):
|
||||
"""Returns a subclass of param.Parameterized whose params are defined by the
|
||||
'params' dict inside of the given testcase dict"""
|
||||
tc_params = dict()
|
||||
for name, p in testcase.get('params', {}).items():
|
||||
tc_params[name] = make_param(p)
|
||||
|
||||
name = 'Test Params for testcase {}'.format(testcase.get('name', ''))
|
||||
cls = param.parameterized_class(name, tc_params, GroupParamsBase)
|
||||
return cls
|
||||
|
||||
|
||||
def make_param(pdef):
|
||||
"""
|
||||
:param pdef: a parameter definition dict from a testground plan manifest
|
||||
:return: a param.Parameter that has the type, bounds, default value, etc from the definition
|
||||
"""
|
||||
typ = pdef['type'].lower()
|
||||
if typ == 'int':
|
||||
return num_param(pdef, cls=param.Integer)
|
||||
elif typ == 'float':
|
||||
return num_param(pdef)
|
||||
elif typ.startswith('bool'):
|
||||
return bool_param(pdef)
|
||||
else:
|
||||
return str_param(pdef)
|
||||
|
||||
|
||||
def num_param(pdef, cls=param.Number):
|
||||
lo = pdef.get('min', None)
|
||||
hi = pdef.get('max', None)
|
||||
bounds = (lo, hi)
|
||||
if lo == hi and lo is not None:
|
||||
bounds = None
|
||||
|
||||
default_val = pdef.get('default', None)
|
||||
if default_val is not None:
|
||||
if cls == param.Integer:
|
||||
default_val = int(default_val)
|
||||
else:
|
||||
default_val = float(default_val)
|
||||
return cls(default=default_val, bounds=bounds, doc=pdef.get('desc', ''))
|
||||
|
||||
|
||||
def bool_param(pdef):
|
||||
default_val = str(pdef.get('default', 'false')).lower() == 'true'
|
||||
return param.Boolean(
|
||||
doc=pdef.get('desc', ''),
|
||||
default=default_val
|
||||
)
|
||||
|
||||
|
||||
def str_param(pdef):
|
||||
return param.String(
|
||||
default=pdef.get('default', ''),
|
||||
doc=pdef.get('desc', ''),
|
||||
)
|
||||
|
||||
|
||||
class Base(param.Parameterized):
|
||||
@classmethod
|
||||
def from_dict(cls, d):
|
||||
return cls(**d)
|
||||
|
||||
def to_dict(self):
|
||||
return value_dict(self)
|
||||
|
||||
|
||||
class GroupParamsBase(Base):
|
||||
def to_dict(self):
|
||||
return value_dict(self, stringify=True)
|
||||
|
||||
|
||||
class Metadata(Base):
|
||||
composition_name = param.String()
|
||||
author = param.String()
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, d):
|
||||
d['composition_name'] = d.get('name', '')
|
||||
del d['name']
|
||||
return Metadata(**d)
|
||||
|
||||
def to_dict(self):
|
||||
return value_dict(self, {'composition_name': 'name'})
|
||||
|
||||
|
||||
class Global(Base):
|
||||
plan = param.String()
|
||||
case = param.Selector()
|
||||
builder = param.String()
|
||||
runner = param.String()
|
||||
|
||||
# TODO: link to instance counts in groups
|
||||
total_instances = param.Integer()
|
||||
# TODO: add ui widget for key/value maps instead of using Dict param type
|
||||
build_config = param.Dict(default={}, allow_None=True)
|
||||
run_config = param.Dict(default={}, allow_None=True)
|
||||
|
||||
def set_manifest(self, manifest):
|
||||
if manifest is None:
|
||||
return
|
||||
print('manifest:', manifest)
|
||||
self.plan = manifest['name']
|
||||
cases = [tc['name'] for tc in manifest['testcases']]
|
||||
self.param['case'].objects = cases
|
||||
print('global config updated manifest. cases:', self.param['case'].objects)
|
||||
if len(cases) != 0:
|
||||
self.case = cases[0]
|
||||
|
||||
if 'defaults' in manifest:
|
||||
print('manifest defaults', manifest['defaults'])
|
||||
if self.builder == '':
|
||||
self.builder = manifest['defaults'].get('builder', '')
|
||||
if self.runner == '':
|
||||
self.runner = manifest['defaults'].get('runner', '')
|
||||
|
||||
|
||||
class Resources(Base):
|
||||
memory = param.String(allow_None=True)
|
||||
cpu = param.String(allow_None=True)
|
||||
|
||||
|
||||
class Instances(Base):
|
||||
count = param.Integer(allow_None=True)
|
||||
percentage = param.Number(allow_None=True)
|
||||
|
||||
|
||||
class Dependency(Base):
|
||||
module = param.String()
|
||||
version = param.String()
|
||||
|
||||
|
||||
class Build(Base):
|
||||
selectors = param.List(class_=str, allow_None=True)
|
||||
dependencies = param.List(allow_None=True)
|
||||
|
||||
|
||||
class Run(Base):
|
||||
artifact = param.String(allow_None=True)
|
||||
test_params = param.Parameter(instantiate=True)
|
||||
|
||||
def __init__(self, params_class=None, **params):
|
||||
super().__init__(**params)
|
||||
if params_class is not None:
|
||||
self.test_params = params_class()
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, d, params_class=None):
|
||||
return Run(artifact=d.get('artifact', None), params_class=params_class)
|
||||
|
||||
def panel(self):
|
||||
return pn.Column(
|
||||
self.param['artifact'],
|
||||
pn.Param(self.test_params)
|
||||
)
|
||||
|
||||
|
||||
class Group(Base):
|
||||
id = param.String()
|
||||
instances = param.Parameter(Instances(), instantiate=True)
|
||||
resources = param.Parameter(Resources(), allow_None=True, instantiate=True)
|
||||
build = param.Parameter(Build(), instantiate=True)
|
||||
run = param.Parameter(Run(), instantiate=True)
|
||||
|
||||
def __init__(self, params_class=None, **params):
|
||||
super().__init__(**params)
|
||||
if params_class is not None:
|
||||
self.run = Run(params_class=params_class)
|
||||
self._set_name(self.id)
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, d, params_class=None):
|
||||
return Group(
|
||||
id=d['id'],
|
||||
resources=Resources.from_dict(d.get('resources', {})),
|
||||
instances=Instances.from_dict(d.get('instances', {})),
|
||||
build=Build.from_dict(d.get('build', {})),
|
||||
run=Run.from_dict(d.get('params', {}), params_class=params_class),
|
||||
)
|
||||
|
||||
def panel(self):
|
||||
print('rendering groups panel for ' + self.id)
|
||||
return pn.Column(
|
||||
"**Group: {}**".format(self.id),
|
||||
self.param['id'],
|
||||
self.instances,
|
||||
self.resources,
|
||||
self.build,
|
||||
self.run.panel(),
|
||||
)
|
||||
|
||||
|
||||
class Composition(param.Parameterized):
|
||||
metadata = param.Parameter(Metadata(), instantiate=True)
|
||||
global_config = param.Parameter(Global(), instantiate=True)
|
||||
|
||||
groups = param.List(precedence=-1)
|
||||
group_tabs = pn.Tabs()
|
||||
groups_ui = None
|
||||
|
||||
def __init__(self, manifest=None, add_default_group=False, **params):
|
||||
super(Composition, self).__init__(**params)
|
||||
self.manifest = manifest
|
||||
self.testcase_param_classes = dict()
|
||||
self._set_manifest(manifest)
|
||||
if add_default_group:
|
||||
self._add_group()
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, d, manifest=None):
|
||||
if manifest is None:
|
||||
try:
|
||||
manifest = get_manifest(d['global']['plan'])
|
||||
except FileNotFoundError:
|
||||
print_err("Unable to find manifest for test plan {}. Please import into $TESTGROUND_HOME/plans and try again".format(d['global']['plan']))
|
||||
|
||||
c = Composition(
|
||||
manifest=manifest,
|
||||
metadata=Metadata.from_dict(d.get('metadata', {})),
|
||||
global_config=Global.from_dict(d.get('global', {})),
|
||||
)
|
||||
params_class = c._params_class_for_current_testcase()
|
||||
c.groups = [Group.from_dict(g, params_class=params_class) for g in d.get('groups', [])]
|
||||
|
||||
return c
|
||||
|
||||
@classmethod
|
||||
def from_toml_file(cls, filename, manifest=None):
|
||||
with open(filename, 'rt') as f:
|
||||
d = toml.load(f)
|
||||
return cls.from_dict(d, manifest=manifest)
|
||||
|
||||
@param.depends('groups', watch=True)
|
||||
def panel(self):
|
||||
add_group_button = pn.widgets.Button(name='Add Group')
|
||||
add_group_button.on_click(self._add_group)
|
||||
|
||||
self._refresh_tabs()
|
||||
|
||||
if self.groups_ui is None:
|
||||
self.groups_ui = pn.Column(
|
||||
add_group_button,
|
||||
self.group_tabs,
|
||||
)
|
||||
|
||||
return pn.Row(
|
||||
pn.Column(self.metadata, self.global_config),
|
||||
self.groups_ui,
|
||||
)
|
||||
|
||||
def _set_manifest(self, manifest):
|
||||
if manifest is None:
|
||||
return
|
||||
|
||||
g = self.global_config
|
||||
print('global config: ', g)
|
||||
g.set_manifest(manifest)
|
||||
for tc in manifest.get('testcases', []):
|
||||
self.testcase_param_classes[tc['name']] = make_group_params_class(tc)
|
||||
|
||||
def _params_class_for_current_testcase(self):
|
||||
case = self.global_config.case
|
||||
cls = self.testcase_param_classes.get(case, None)
|
||||
if cls is None:
|
||||
print_err("No testcase found in manifest named " + case)
|
||||
return cls
|
||||
|
||||
def _add_group(self, *args):
|
||||
group_id = 'group-{}'.format(len(self.groups) + 1)
|
||||
g = Group(id=group_id, params_class=self._params_class_for_current_testcase())
|
||||
g.param.watch(self._refresh_tabs, 'id')
|
||||
groups = self.groups
|
||||
groups.append(g)
|
||||
self.groups = groups
|
||||
self.group_tabs.active = len(groups)-1
|
||||
|
||||
@param.depends("global_config.case", watch=True)
|
||||
def _test_case_changed(self):
|
||||
print('test case changed', self.global_config.case)
|
||||
cls = self._params_class_for_current_testcase()
|
||||
for g in self.groups:
|
||||
g.run.test_params = cls()
|
||||
self._refresh_tabs()
|
||||
|
||||
def _refresh_tabs(self, *args):
|
||||
self.group_tabs[:] = [(g.id, g.panel()) for g in self.groups]
|
||||
|
||||
def to_dict(self):
|
||||
return {
|
||||
'metadata': value_dict(self.metadata, renames={'composition_name': 'name'}),
|
||||
'global': value_dict(self.global_config),
|
||||
'groups': [g.to_dict() for g in self.groups]
|
||||
}
|
||||
|
||||
def to_toml(self):
|
||||
return toml.dumps(self.to_dict())
|
||||
|
||||
def write_to_file(self, filename):
|
||||
with open(filename, 'wt') as f:
|
||||
toml.dump(self.to_dict(), f)
|
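A minimal usage sketch (not part of this diff) for the Composition class above: it round-trips a composition TOML through from_toml_file() and write_to_file(). It assumes the working directory is testplans/composer (so app.composition is importable and the fixture path resolves) and that the plan named in the composition's [global] section has already been imported into $TESTGROUND_HOME/plans, which get_manifest() needs in order to resolve the manifest.

# Hypothetical usage, assuming cwd is testplans/composer and the plan
# referenced by the fixture is present under $TESTGROUND_HOME/plans.
from app.composition import Composition

comp = Composition.from_toml_file('fixtures/ping-pong-local.toml')
print(comp.to_toml())                                # serialize back to TOML text
comp.write_to_file('/tmp/ping-pong-roundtrip.toml')  # or persist it to disk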
111
testplans/composer/app/runner.py
Normal file
111
testplans/composer/app/runner.py
Normal file
@ -0,0 +1,111 @@
|
||||
import os
|
||||
import panel as pn
|
||||
import param
|
||||
from panel.io.server import unlocked
|
||||
from tornado.ioloop import IOLoop, PeriodicCallback
|
||||
from tornado.process import Subprocess
|
||||
from subprocess import STDOUT
|
||||
from bokeh.models.widgets import Div
|
||||
from ansi2html import Ansi2HTMLConverter
|
||||
|
||||
from .composition import Composition
|
||||
|
||||
TESTGROUND = 'testground'
|
||||
|
||||
|
||||
class AnsiColorText(pn.widgets.Widget):
|
||||
style = param.Dict(default=None, doc="""
|
||||
Dictionary of CSS property:value pairs to apply to this Div.""")
|
||||
|
||||
value = param.Parameter(default=None)
|
||||
|
||||
_format = '<div>{value}</div>'
|
||||
|
||||
_rename = {'name': None, 'value': 'text'}
|
||||
|
||||
# _target_transforms = {'value': 'target.text.split(": ")[0]+": "+value'}
|
||||
#
|
||||
# _source_transforms = {'value': 'value.split(": ")[1]'}
|
||||
|
||||
_widget_type = Div
|
||||
|
||||
_converter = Ansi2HTMLConverter(inline=True)
|
||||
|
||||
def _process_param_change(self, msg):
|
||||
msg = super(AnsiColorText, self)._process_property_change(msg)
|
||||
if 'value' in msg:
|
||||
text = str(msg.pop('value'))
|
||||
text = self._converter.convert(text)
|
||||
msg['text'] = text
|
||||
return msg
|
||||
|
||||
def scroll_down(self):
|
||||
# TODO: figure out how to automatically scroll down as text is added
|
||||
pass
|
||||
|
||||
|
||||
class CommandRunner(param.Parameterized):
|
||||
command_output = param.String()
|
||||
|
||||
def __init__(self, **params):
|
||||
super().__init__(**params)
|
||||
self._output_lines = []
|
||||
self.proc = None
|
||||
self._updater = PeriodicCallback(self._refresh_output, callback_time=1000)
|
||||
|
||||
@pn.depends('command_output')
|
||||
def panel(self):
|
||||
return pn.Param(self.param, show_name=False, sizing_mode='stretch_width', widgets={
|
||||
'command_output': dict(
|
||||
type=AnsiColorText,
|
||||
sizing_mode='stretch_width',
|
||||
height=800)
|
||||
})
|
||||
|
||||
def run(self, *cmd):
|
||||
self.command_output = ''
|
||||
self._output_lines = []
|
||||
self.proc = Subprocess(cmd, stdout=Subprocess.STREAM, stderr=STDOUT)
|
||||
self._get_next_line()
|
||||
self._updater.start()
|
||||
|
||||
def _get_next_line(self):
|
||||
if self.proc is None:
|
||||
return
|
||||
loop = IOLoop.current()
|
||||
loop.add_future(self.proc.stdout.read_until(bytes('\n', encoding='utf8')), self._append_output)
|
||||
|
||||
def _append_output(self, future):
|
||||
self._output_lines.append(future.result().decode('utf8'))
|
||||
self._get_next_line()
|
||||
|
||||
def _refresh_output(self):
|
||||
text = ''.join(self._output_lines)
|
||||
if len(text) != len(self.command_output):
|
||||
with unlocked():
|
||||
self.command_output = text
|
||||
|
||||
|
||||
class TestRunner(param.Parameterized):
|
||||
composition = param.ClassSelector(class_=Composition, precedence=-1)
|
||||
testground_daemon_endpoint = param.String(default="{}:8042".format(os.environ.get('TESTGROUND_DAEMON_HOST', 'localhost')))
|
||||
run_test = param.Action(lambda self: self.run())
|
||||
runner = CommandRunner()
|
||||
|
||||
def __init__(self, **params):
|
||||
super().__init__(**params)
|
||||
|
||||
def run(self):
|
||||
# TODO: temp file management - maybe we should mount a volume and save there?
|
||||
filename = '/tmp/composition.toml'
|
||||
self.composition.write_to_file(filename)
|
||||
|
||||
self.runner.run(TESTGROUND, '--endpoint', self.testground_daemon_endpoint, 'run', 'composition', '-f', filename)
|
||||
|
||||
def panel(self):
|
||||
return pn.Column(
|
||||
self.param['testground_daemon_endpoint'],
|
||||
self.param['run_test'],
|
||||
self.runner.panel(),
|
||||
sizing_mode='stretch_width',
|
||||
)
|
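A rough wiring sketch (not part of this diff) for the TestRunner above, assuming a composition has already been loaded and the code runs under a Panel/Bokeh server, so the tornado IOLoop required by CommandRunner's PeriodicCallback is available:

# Hypothetical wiring, assuming app.composition / app.runner are importable
# and a testground daemon is reachable at TESTGROUND_DAEMON_HOST:8042.
import panel as pn
from app.composition import Composition
from app.runner import TestRunner

composition = Composition.from_toml_file('fixtures/ping-pong-local.toml')
runner = TestRunner(composition=composition)

# Serves the endpoint field, the run_test button and the streaming output pane.
pn.serve(runner.panel(), port=5006)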
26
testplans/composer/app/util.py
Normal file
26
testplans/composer/app/util.py
Normal file
@ -0,0 +1,26 @@
|
||||
import toml
|
||||
import os
|
||||
import sys
|
||||
|
||||
|
||||
def parse_manifest(manifest_path):
|
||||
with open(manifest_path, 'rt') as f:
|
||||
return toml.load(f)
|
||||
|
||||
|
||||
def tg_home():
|
||||
return os.environ.get('TESTGROUND_HOME',
|
||||
os.path.join(os.environ['HOME'], 'testground'))
|
||||
|
||||
|
||||
def get_plans():
|
||||
return list(os.listdir(os.path.join(tg_home(), 'plans')))
|
||||
|
||||
|
||||
def get_manifest(plan_name):
|
||||
manifest_path = os.path.join(tg_home(), 'plans', plan_name, 'manifest.toml')
|
||||
return parse_manifest(manifest_path)
|
||||
|
||||
|
||||
def print_err(*args):
|
||||
print(*args, file=sys.stderr)
|
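A small usage sketch (not part of this diff) for the helpers above, assuming $TESTGROUND_HOME/plans has been populated (composer.sh below rsyncs plans into the container for exactly this purpose):

# Hypothetical usage: list imported plans and the test cases each manifest declares.
from app.util import get_plans, get_manifest, print_err

for plan in get_plans():
    try:
        manifest = get_manifest(plan)
    except FileNotFoundError:
        print_err('plan {} has no manifest.toml'.format(plan))
        continue
    print(plan, [tc['name'] for tc in manifest.get('testcases', [])])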
174
testplans/composer/chain-state.ipynb
Normal file
174
testplans/composer/chain-state.ipynb
Normal file
@ -0,0 +1,174 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import json\n",
|
||||
"import pandas as pd\n",
|
||||
"import matplotlib.pyplot as plt\n",
|
||||
"import hvplot.pandas\n",
|
||||
"import panel as pn\n",
|
||||
"\n",
|
||||
"STATE_FILE = './chain-state.ndjson'\n",
|
||||
"\n",
|
||||
"MINER_STATE_COL_RENAMES = {\n",
|
||||
" 'Info.MinerAddr': 'Miner',\n",
|
||||
" 'Info.MinerPower.MinerPower.RawBytePower': 'Info.MinerPowerRaw',\n",
|
||||
" 'Info.MinerPower.MinerPower.QualityAdjPower': 'Info.MinerPowerQualityAdj',\n",
|
||||
" 'Info.MinerPower.TotalPower.RawBytePower': 'Info.TotalPowerRaw',\n",
|
||||
" 'Info.MinerPower.TotalPower.QualityAdjPower': 'Info.TotalPowerQualityAdj',\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"MINER_NUMERIC_COLS = [\n",
|
||||
" 'Info.MinerPowerRaw',\n",
|
||||
" 'Info.MinerPowerQualityAdj',\n",
|
||||
" 'Info.TotalPowerRaw',\n",
|
||||
" 'Info.TotalPowerQualityAdj',\n",
|
||||
" 'Info.Balance',\n",
|
||||
" 'Info.CommittedBytes',\n",
|
||||
" 'Info.ProvingBytes',\n",
|
||||
" 'Info.FaultyBytes',\n",
|
||||
" 'Info.FaultyPercentage',\n",
|
||||
" 'Info.PreCommitDeposits',\n",
|
||||
" 'Info.LockedFunds',\n",
|
||||
" 'Info.AvailableFunds',\n",
|
||||
" 'Info.WorkerBalance',\n",
|
||||
" 'Info.MarketEscrow',\n",
|
||||
" 'Info.MarketLocked',\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"DERIVED_COLS = [\n",
|
||||
" 'CommittedSectors',\n",
|
||||
" 'ProvingSectors',\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"ATTO_FIL_COLS = [\n",
|
||||
" 'Info.Balance',\n",
|
||||
" 'Info.PreCommitDeposits',\n",
|
||||
" 'Info.LockedFunds',\n",
|
||||
" 'Info.AvailableFunds',\n",
|
||||
" 'Info.WorkerBalance',\n",
|
||||
" 'Info.MarketEscrow',\n",
|
||||
" 'Info.MarketLocked',\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"def atto_to_fil(x):\n",
|
||||
" return float(x) * pow(10, -18)\n",
|
||||
"\n",
|
||||
"def chain_state_to_pandas(statefile):\n",
|
||||
" chain = None\n",
|
||||
" \n",
|
||||
" with open(statefile, 'rt') as f:\n",
|
||||
" for line in f.readlines():\n",
|
||||
" j = json.loads(line)\n",
|
||||
" chain_height = j['Height']\n",
|
||||
" \n",
|
||||
" miners = j['MinerStates']\n",
|
||||
" for m in miners.values():\n",
|
||||
" df = pd.json_normalize(m)\n",
|
||||
" df['Height'] = chain_height\n",
|
||||
" df.rename(columns=MINER_STATE_COL_RENAMES, inplace=True)\n",
|
||||
" if chain is None:\n",
|
||||
" chain = df\n",
|
||||
" else:\n",
|
||||
" chain = chain.append(df, ignore_index=True)\n",
|
||||
" chain.fillna(0, inplace=True)\n",
|
||||
" chain.set_index('Height', inplace=True)\n",
|
||||
" \n",
|
||||
" for c in ATTO_FIL_COLS:\n",
|
||||
" chain[c] = chain[c].apply(atto_to_fil)\n",
|
||||
" \n",
|
||||
" for c in MINER_NUMERIC_COLS:\n",
|
||||
" chain[c] = chain[c].apply(pd.to_numeric)\n",
|
||||
" \n",
|
||||
" # the Sectors.* fields are lists of sector ids, but we want to plot counts, so\n",
|
||||
" # we pull the length of each list into a new column\n",
|
||||
" chain['CommittedSectors'] = chain['Sectors.Committed'].apply(lambda x: len(x))\n",
|
||||
" chain['ProvingSectors'] = chain['Sectors.Proving'].apply(lambda x: len(x))\n",
|
||||
" return chain\n",
|
||||
" \n",
|
||||
"cs = chain_state_to_pandas(STATE_FILE)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# choose which col to plot using a widget\n",
|
||||
"\n",
|
||||
"cols_to_plot = MINER_NUMERIC_COLS + DERIVED_COLS\n",
|
||||
"\n",
|
||||
"col_selector = pn.widgets.Select(name='Field', options=cols_to_plot)\n",
|
||||
"cols = ['Miner'] + cols_to_plot\n",
|
||||
"plot = cs[cols].hvplot(by='Miner', y=col_selector)\n",
|
||||
"pn.Column(pn.WidgetBox(col_selector), plot)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# plot all line charts in a vertical stack\n",
|
||||
"\n",
|
||||
"plots = []\n",
|
||||
"for c in cols_to_plot:\n",
|
||||
" title = c.split('.')[-1]\n",
|
||||
" p = cs[['Miner', c]].hvplot(by='Miner', y=c, title=title)\n",
|
||||
" plots.append(p)\n",
|
||||
"pn.Column(*plots)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# miner power area chart\n",
|
||||
"\n",
|
||||
"mp = cs[['Miner', 'Info.MinerPowerRaw']].rename(columns={'Info.MinerPowerRaw': 'Power'})\n",
|
||||
"mp = mp.pivot_table(values=['Power'], index=cs.index, columns='Miner', aggfunc='sum')\n",
|
||||
"mp = mp.div(mp.sum(1), axis=0)\n",
|
||||
"mp.columns = mp.columns.get_level_values(1)\n",
|
||||
"mp.hvplot.area(title='Miner Power Distribution')"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
45
testplans/composer/composer.ipynb
Normal file
45
testplans/composer/composer.ipynb
Normal file
@ -0,0 +1,45 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"scrolled": true
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import param\n",
|
||||
"import panel as pn\n",
|
||||
"import app.app as app\n",
|
||||
"import importlib\n",
|
||||
"importlib.reload(app)\n",
|
||||
"\n",
|
||||
"pn.extension()\n",
|
||||
"\n",
|
||||
"a = app.App()\n",
|
||||
"a.ui()"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.8.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 4
|
||||
}
|
134
testplans/composer/composer.sh
Executable file
134
testplans/composer/composer.sh
Executable file
@ -0,0 +1,134 @@
|
||||
#!/bin/bash
|
||||
|
||||
# this script runs jupyter inside a docker container and copies
|
||||
# plan manifests from the user's local filesystem into a temporary
|
||||
# directory that's bind-mounted into the container.
|
||||
|
||||
set -o errexit
|
||||
set -o pipefail
|
||||
|
||||
set -e
|
||||
|
||||
err_report() {
|
||||
echo "Error on line $1"
|
||||
}
|
||||
|
||||
trap 'err_report $LINENO' ERR
|
||||
|
||||
|
||||
image_name="iptestground/composer"
|
||||
image_tag="latest"
|
||||
image_full_name="$image_name:$image_tag"
|
||||
tg_home=${TESTGROUND_HOME:-$HOME/testground}
|
||||
container_plans_dir="/testground/plans"
|
||||
jupyter_port=${JUPYTER_PORT:-8888}
|
||||
panel_port=${PANEL_PORT:-5006}
|
||||
|
||||
poll_interval=30
|
||||
|
||||
exists() {
|
||||
command -v "$1" >/dev/null 2>&1
|
||||
}
|
||||
|
||||
require_cmds() {
|
||||
for cmd in $@; do
|
||||
exists $cmd || { echo "This script requires the $cmd command. Please install it and try again." >&2; exit 1; }
|
||||
done
|
||||
}
|
||||
|
||||
update_plans() {
|
||||
local dest_dir=$1
|
||||
rsync -avzh --quiet --copy-links "${tg_home}/plans/" ${dest_dir}
|
||||
}
|
||||
|
||||
watch_plans() {
|
||||
local plans_dest=$1
|
||||
while true; do
|
||||
update_plans ${plans_dest}
|
||||
sleep $poll_interval
|
||||
done
|
||||
}
|
||||
|
||||
open_url() {
|
||||
local url=$1
|
||||
if exists cmd.exe; then
|
||||
cmd.exe /c start ${url} >/dev/null 2>&1
|
||||
elif exists xdg-open; then
|
||||
xdg-open ${url} >/dev/null 2>&1 &
|
||||
elif exists open; then
|
||||
open ${url}
|
||||
else
|
||||
echo "unable to automatically open url. copy/paste this into a browser: $url"
|
||||
fi
|
||||
}
|
||||
|
||||
# delete temp dir and stop docker container
|
||||
cleanup () {
|
||||
if [[ "$container_id" != "" ]]; then
|
||||
docker stop ${container_id} >/dev/null
|
||||
fi
|
||||
|
||||
if [[ -d "$temp_plans_dir" ]]; then
|
||||
rm -rf ${temp_plans_dir}
|
||||
fi
|
||||
}
|
||||
|
||||
get_host_ip() {
|
||||
# get interface of default route
|
||||
local net_if=$(netstat -rn | awk '/^0.0.0.0/ {thif=substr($0,74,10); print thif;} /^default.*UG/ {thif=substr($0,65,10); print thif;}')
|
||||
# use ifconfig to get addr of that interface
|
||||
detected_host_ip=`ifconfig ${net_if} | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'`
|
||||
|
||||
if [ -z "$detected_host_ip" ]
|
||||
then
|
||||
detected_host_ip="host.docker.internal"
|
||||
fi
|
||||
|
||||
echo $detected_host_ip
|
||||
}
|
||||
|
||||
# run cleanup on exit
|
||||
trap "{ cleanup; }" EXIT
|
||||
|
||||
# make sure we have the commands we need
|
||||
require_cmds jq docker rsync
|
||||
|
||||
if [[ "$SKIP_BUILD" == "" ]]; then
|
||||
echo "Building latest docker image. Set SKIP_BUILD env var to any value to bypass."
|
||||
require_cmds make
|
||||
make docker
|
||||
fi
|
||||
|
||||
# make temp dir for manifests
|
||||
temp_base="/tmp"
|
||||
if [[ "$TEMP" != "" ]]; then
|
||||
temp_base=$TEMP
|
||||
fi
|
||||
|
||||
temp_plans_dir="$(mktemp -d ${temp_base}/testground-composer-XXXX)"
|
||||
echo "temp plans dir: $temp_plans_dir"
|
||||
|
||||
# copy testplans from $TESTGROUND_HOME/plans to the temp dir
|
||||
update_plans ${temp_plans_dir}
|
||||
|
||||
# run the container in detached mode and grab the id
|
||||
container_id=$(docker run -d \
|
||||
-e TESTGROUND_DAEMON_HOST=$(get_host_ip) \
|
||||
--user $(id -u):$(id -g) \
|
||||
-p ${panel_port}:5006 \
|
||||
-v ${temp_plans_dir}:${container_plans_dir}:ro \
|
||||
$image_full_name)
|
||||
|
||||
echo "container $container_id started"
|
||||
# print the log output
|
||||
docker logs -f ${container_id} &
|
||||
|
||||
# sleep for a couple seconds to let the server start up
|
||||
sleep 2
|
||||
|
||||
# open a browser to the app url
|
||||
panel_url="http://localhost:${panel_port}"
|
||||
open_url $panel_url
|
||||
|
||||
# poll & sync testplan changes every few seconds
|
||||
watch_plans ${temp_plans_dir}
|
214
testplans/composer/fixtures/all-both-k8s.toml
Normal file
214
testplans/composer/fixtures/all-both-k8s.toml
Normal file
@ -0,0 +1,214 @@
|
||||
[metadata]
|
||||
name = "all-both"
|
||||
author = "adin"
|
||||
|
||||
[global]
|
||||
plan = "dht"
|
||||
case = "all"
|
||||
total_instances = 1000
|
||||
builder = "docker:go"
|
||||
runner = "cluster:k8s"
|
||||
[global.build_config]
|
||||
push_registry = true
|
||||
registry_type = "aws"
|
||||
|
||||
[[groups]]
|
||||
id = "balsam-undialable-provider"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["balsam"]
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:701251a63b92"
|
||||
[groups.run.test_params]
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
expect_dht = "false"
|
||||
group_order = "4"
|
||||
latency = "100"
|
||||
record_count = "1"
|
||||
timeout_secs = "600"
|
||||
undialable = "true"
|
||||
|
||||
[[groups]]
|
||||
id = "balsam-undialable-searcher"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["balsam"]
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:701251a63b92"
|
||||
[groups.run.test_params]
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
expect_dht = "false"
|
||||
group_order = "5"
|
||||
latency = "100"
|
||||
search_records = "true"
|
||||
timeout_secs = "600"
|
||||
undialable = "true"
|
||||
|
||||
[[groups]]
|
||||
id = "balsam-dialable-passive"
|
||||
[groups.instances]
|
||||
count = 780
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["balsam"]
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:701251a63b92"
|
||||
[groups.run.test_params]
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
expect_dht = "false"
|
||||
group_order = "6"
|
||||
latency = "100"
|
||||
timeout_secs = "600"
|
||||
undialable = "false"
|
||||
|
||||
[[groups]]
|
||||
id = "balsam-dialable-provider"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["balsam"]
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:701251a63b92"
|
||||
[groups.run.test_params]
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
expect_dht = "false"
|
||||
group_order = "7"
|
||||
latency = "100"
|
||||
record_count = "1"
|
||||
timeout_secs = "600"
|
||||
undialable = "false"
|
||||
|
||||
[[groups]]
|
||||
id = "balsam-dialable-searcher"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["balsam"]
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:701251a63b92"
|
||||
[groups.run.test_params]
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
expect_dht = "false"
|
||||
group_order = "8"
|
||||
latency = "100"
|
||||
search_records = "true"
|
||||
timeout_secs = "600"
|
||||
undialable = "false"
|
||||
|
||||
[[groups]]
|
||||
id = "cypress-passive"
|
||||
[groups.instances]
|
||||
count = 185
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["cypress"]
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-kad-dht"
|
||||
version = "180be07b8303d536e39809bc39c58be5407fedd9"
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-xor"
|
||||
version = "df24f5b04bcbdc0059b27989163a6090f4f6dc7a"
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:ca78473d669d"
|
||||
[groups.run.test_params]
|
||||
alpha = "6"
|
||||
beta = "3"
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
group_order = "1"
|
||||
latency = "100"
|
||||
timeout_secs = "600"
|
||||
|
||||
[[groups]]
|
||||
id = "cypress-provider"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["cypress"]
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-kad-dht"
|
||||
version = "180be07b8303d536e39809bc39c58be5407fedd9"
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-xor"
|
||||
version = "df24f5b04bcbdc0059b27989163a6090f4f6dc7a"
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:ca78473d669d"
|
||||
[groups.run.test_params]
|
||||
alpha = "6"
|
||||
beta = "3"
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
group_order = "2"
|
||||
latency = "100"
|
||||
record_count = "1"
|
||||
timeout_secs = "600"
|
||||
|
||||
[[groups]]
|
||||
id = "cypress-searcher"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["cypress"]
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-kad-dht"
|
||||
version = "180be07b8303d536e39809bc39c58be5407fedd9"
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-xor"
|
||||
version = "df24f5b04bcbdc0059b27989163a6090f4f6dc7a"
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:ca78473d669d"
|
||||
[groups.run.test_params]
|
||||
alpha = "6"
|
||||
beta = "3"
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
group_order = "3"
|
||||
latency = "100"
|
||||
search_records = "true"
|
||||
timeout_secs = "600"
|
||||
|
||||
[[groups]]
|
||||
id = "cypress-bs"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.build]
|
||||
selectors = ["cypress"]
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-kad-dht"
|
||||
version = "180be07b8303d536e39809bc39c58be5407fedd9"
|
||||
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/libp2p/go-libp2p-xor"
|
||||
version = "df24f5b04bcbdc0059b27989163a6090f4f6dc7a"
|
||||
[groups.run]
|
||||
artifact = "909427826938.dkr.ecr.us-east-1.amazonaws.com/testground-us-east-1-dht:ca78473d669d"
|
||||
[groups.run.test_params]
|
||||
alpha = "6"
|
||||
beta = "3"
|
||||
bootstrapper = "true"
|
||||
bs_strategy = "7"
|
||||
bucket_size = "10"
|
||||
group_order = "0"
|
||||
latency = "100"
|
||||
timeout_secs = "600"
|
14
testplans/composer/fixtures/ping-pong-local.toml
Normal file
14
testplans/composer/fixtures/ping-pong-local.toml
Normal file
@ -0,0 +1,14 @@
|
||||
[metadata]
|
||||
name = "ping-pong-local"
|
||||
author = "yusef"
|
||||
|
||||
[global]
|
||||
plan = "network"
|
||||
case = "ping-pong"
|
||||
total_instances = 2
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[[groups]]
|
||||
id = "nodes"
|
||||
instances = { count = 2 }
|
8
testplans/composer/requirements.txt
Normal file
8
testplans/composer/requirements.txt
Normal file
@ -0,0 +1,8 @@
|
||||
param
|
||||
toml
|
||||
jupyter
|
||||
panel
|
||||
holoviews
|
||||
ansi2html
|
||||
matplotlib
|
||||
hvplot
|
2106
testplans/dashboards/baseline.json
Normal file
2106
testplans/dashboards/baseline.json
Normal file
File diff suppressed because it is too large
2748
testplans/dashboards/chain.json
Normal file
2748
testplans/dashboards/chain.json
Normal file
File diff suppressed because it is too large
22
testplans/docker-images/Dockerfile.oni-buildbase
Normal file
22
testplans/docker-images/Dockerfile.oni-buildbase
Normal file
@ -0,0 +1,22 @@
|
||||
ARG GO_VERSION=1.14.2
|
||||
|
||||
FROM golang:${GO_VERSION}-buster
|
||||
|
||||
RUN apt-get update && apt-get install -y ca-certificates llvm clang mesa-opencl-icd ocl-icd-opencl-dev jq gcc git pkg-config bzr
|
||||
|
||||
ARG FILECOIN_FFI_COMMIT=57e38efe4943f09d3127dcf6f0edd614e6acf68e
|
||||
ARG FFI_DIR=/extra/filecoin-ffi
|
||||
|
||||
RUN mkdir -p ${FFI_DIR} \
|
||||
&& git clone https://github.com/filecoin-project/filecoin-ffi.git ${FFI_DIR} \
|
||||
&& cd ${FFI_DIR} \
|
||||
&& git checkout ${FILECOIN_FFI_COMMIT} \
|
||||
&& make
|
||||
|
||||
ARG FIL_BLST_COMMIT=8609119cf4595d1741139c24378fcd8bc4f1c475
|
||||
ARG BLST_DIR=/extra/fil-blst
|
||||
|
||||
RUN mkdir -p ${BLST_DIR} \
|
||||
&& git clone https://github.com/filecoin-project/fil-blst.git ${BLST_DIR} \
|
||||
&& cd ${BLST_DIR} \
|
||||
&& git checkout ${FIL_BLST_COMMIT}
|
18
testplans/docker-images/Dockerfile.oni-runtime
Normal file
18
testplans/docker-images/Dockerfile.oni-runtime
Normal file
@ -0,0 +1,18 @@
|
||||
ARG GO_VERSION=1.14.2
|
||||
|
||||
FROM golang:${GO_VERSION}-buster as downloader
|
||||
|
||||
## Fetch the proof parameters.
|
||||
## 1. Install the paramfetch binary first, so it can be cached over builds.
|
||||
## 2. Then copy over the parameters (which could change).
|
||||
## 3. Trigger the download.
|
||||
## Output will be in /var/tmp/filecoin-proof-parameters.
|
||||
|
||||
RUN go get github.com/filecoin-project/go-paramfetch/paramfetch
|
||||
COPY /proof-parameters.json /
|
||||
RUN paramfetch 2048 /proof-parameters.json
|
||||
|
||||
FROM ubuntu:18.04
|
||||
|
||||
RUN apt-get update && apt-get install -y ca-certificates llvm clang mesa-opencl-icd ocl-icd-opencl-dev jq gcc pkg-config net-tools netcat traceroute iputils-ping wget vim curl telnet iproute2 dnsutils
|
||||
COPY --from=downloader /var/tmp/filecoin-proof-parameters /var/tmp/filecoin-proof-parameters
|
30
testplans/docker-images/Dockerfile.oni-runtime-debug
Normal file
30
testplans/docker-images/Dockerfile.oni-runtime-debug
Normal file
@ -0,0 +1,30 @@
|
||||
ARG GO_VERSION=1.14.2
|
||||
|
||||
FROM golang:${GO_VERSION}-buster as downloader
|
||||
|
||||
## Fetch the proof parameters.
|
||||
## 1. Install the paramfetch binary first, so it can be cached over builds.
|
||||
## 2. Then copy over the parameters (which could change).
|
||||
## 3. Trigger the download.
|
||||
## Output will be in /var/tmp/filecoin-proof-parameters.
|
||||
|
||||
RUN go get github.com/filecoin-project/go-paramfetch/paramfetch
|
||||
COPY /proof-parameters.json /
|
||||
RUN paramfetch 2048 /proof-parameters.json
|
||||
|
||||
ARG LOTUS_COMMIT=1a170e18a
|
||||
|
||||
## for debug purposes
|
||||
RUN apt update && apt install -y mesa-opencl-icd ocl-icd-opencl-dev gcc git bzr jq pkg-config curl && git clone https://github.com/filecoin-project/lotus.git && cd lotus/ && git checkout ${LOTUS_COMMIT} && make clean && make all && make install
|
||||
|
||||
FROM ubuntu:18.04
|
||||
|
||||
RUN apt-get update && apt-get install -y ca-certificates llvm clang mesa-opencl-icd ocl-icd-opencl-dev jq gcc pkg-config net-tools netcat traceroute iputils-ping wget vim curl telnet iproute2 dnsutils
|
||||
COPY --from=downloader /var/tmp/filecoin-proof-parameters /var/tmp/filecoin-proof-parameters
|
||||
|
||||
## for debug purposes
|
||||
COPY --from=downloader /usr/local/bin/lotus /usr/local/bin/lll
|
||||
COPY --from=downloader /usr/local/bin/lotus-miner /usr/local/bin/lm
|
||||
|
||||
ENV FULLNODE_API_INFO="dummytoken:/ip4/127.0.0.1/tcp/1234/http"
|
||||
ENV MINER_API_INFO="dummytoken:/ip4/127.0.0.1/tcp/2345/http"
|
152
testplans/docker-images/proof-parameters.json
Normal file
152
testplans/docker-images/proof-parameters.json
Normal file
@ -0,0 +1,152 @@
|
||||
{
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0170db1f394b35d995252228ee359194b13199d259380541dc529fb0099096b0.params": {
|
||||
"cid": "QmVxjFRyhmyQaZEtCh7nk2abc7LhFkzhnRX4rcHqCCpikR",
|
||||
"digest": "7610b9f82bfc88405b7a832b651ce2f6",
|
||||
"sector_size": 2048
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0170db1f394b35d995252228ee359194b13199d259380541dc529fb0099096b0.vk": {
|
||||
"cid": "QmcS5JZs8X3TdtkEBpHAdUYjdNDqcL7fWQFtQz69mpnu2X",
|
||||
"digest": "0e0958009936b9d5e515ec97b8cb792d",
|
||||
"sector_size": 2048
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0cfb4f178bbb71cf2ecfcd42accce558b27199ab4fb59cb78f2483fe21ef36d9.params": {
|
||||
"cid": "QmUiRx71uxfmUE8V3H9sWAsAXoM88KR4eo1ByvvcFNeTLR",
|
||||
"digest": "1a7d4a9c8a502a497ed92a54366af33f",
|
||||
"sector_size": 536870912
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-0cfb4f178bbb71cf2ecfcd42accce558b27199ab4fb59cb78f2483fe21ef36d9.vk": {
|
||||
"cid": "QmfCeddjFpWtavzfEzZpJfzSajGNwfL4RjFXWAvA9TSnTV",
|
||||
"digest": "4dae975de4f011f101f5a2f86d1daaba",
|
||||
"sector_size": 536870912
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-3ea05428c9d11689f23529cde32fd30aabd50f7d2c93657c1d3650bca3e8ea9e.params": {
|
||||
"cid": "QmcSTqDcFVLGGVYz1njhUZ7B6fkKtBumsLUwx4nkh22TzS",
|
||||
"digest": "82c88066be968bb550a05e30ff6c2413",
|
||||
"sector_size": 2048
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-3ea05428c9d11689f23529cde32fd30aabd50f7d2c93657c1d3650bca3e8ea9e.vk": {
|
||||
"cid": "QmSTCXF2ipGA3f6muVo6kHc2URSx6PzZxGUqu7uykaH5KU",
|
||||
"digest": "ffd79788d614d27919ae5bd2d94eacb6",
|
||||
"sector_size": 2048
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-50c7368dea9593ed0989e70974d28024efa9d156d585b7eea1be22b2e753f331.params": {
|
||||
"cid": "QmU9SBzJNrcjRFDiFc4GcApqdApN6z9X7MpUr66mJ2kAJP",
|
||||
"digest": "700171ecf7334e3199437c930676af82",
|
||||
"sector_size": 8388608
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-50c7368dea9593ed0989e70974d28024efa9d156d585b7eea1be22b2e753f331.vk": {
|
||||
"cid": "QmbmUMa3TbbW3X5kFhExs6WgC4KeWT18YivaVmXDkB6ANG",
|
||||
"digest": "79ebb55f56fda427743e35053edad8fc",
|
||||
"sector_size": 8388608
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-5294475db5237a2e83c3e52fd6c2b03859a1831d45ed08c4f35dbf9a803165a9.params": {
|
||||
"cid": "QmdNEL2RtqL52GQNuj8uz6mVj5Z34NVnbaJ1yMyh1oXtBx",
|
||||
"digest": "c49499bb76a0762884896f9683403f55",
|
||||
"sector_size": 8388608
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-5294475db5237a2e83c3e52fd6c2b03859a1831d45ed08c4f35dbf9a803165a9.vk": {
|
||||
"cid": "QmUiVYCQUgr6Y13pZFr8acWpSM4xvTXUdcvGmxyuHbKhsc",
|
||||
"digest": "34d4feeacd9abf788d69ef1bb4d8fd00",
|
||||
"sector_size": 8388608
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-7d739b8cf60f1b0709eeebee7730e297683552e4b69cab6984ec0285663c5781.params": {
|
||||
"cid": "QmVgCsJFRXKLuuUhT3aMYwKVGNA9rDeR6DCrs7cAe8riBT",
|
||||
"digest": "827359440349fe8f5a016e7598993b79",
|
||||
"sector_size": 536870912
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-0-0-7d739b8cf60f1b0709eeebee7730e297683552e4b69cab6984ec0285663c5781.vk": {
|
||||
"cid": "QmfA31fbCWojSmhSGvvfxmxaYCpMoXP95zEQ9sLvBGHNaN",
|
||||
"digest": "bd2cd62f65c1ab84f19ca27e97b7c731",
|
||||
"sector_size": 536870912
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-0377ded656c6f524f1618760bffe4e0a1c51d5a70c4509eedae8a27555733edc.params": {
|
||||
"cid": "QmaUmfcJt6pozn8ndq1JVBzLRjRJdHMTPd4foa8iw5sjBZ",
|
||||
"digest": "2cf49eb26f1fee94c85781a390ddb4c8",
|
||||
"sector_size": 34359738368
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-0377ded656c6f524f1618760bffe4e0a1c51d5a70c4509eedae8a27555733edc.vk": {
|
||||
"cid": "QmR9i9KL3vhhAqTBGj1bPPC7LvkptxrH9RvxJxLN1vvsBE",
|
||||
"digest": "0f8ec542485568fa3468c066e9fed82b",
|
||||
"sector_size": 34359738368
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-559e581f022bb4e4ec6e719e563bf0e026ad6de42e56c18714a2c692b1b88d7e.params": {
|
||||
"cid": "Qmdtczp7p4wrbDofmHdGhiixn9irAcN77mV9AEHZBaTt1i",
|
||||
"digest": "d84f79a16fe40e9e25a36e2107bb1ba0",
|
||||
"sector_size": 34359738368
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-0-559e581f022bb4e4ec6e719e563bf0e026ad6de42e56c18714a2c692b1b88d7e.vk": {
|
||||
"cid": "QmZCvxKcKP97vDAk8Nxs9R1fWtqpjQrAhhfXPoCi1nkDoF",
|
||||
"digest": "fc02943678dd119e69e7fab8420e8819",
|
||||
"sector_size": 34359738368
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-2627e4006b67f99cef990c0a47d5426cb7ab0a0ad58fc1061547bf2d28b09def.params": {
|
||||
"cid": "QmeAN4vuANhXsF8xP2Lx5j2L6yMSdogLzpcvqCJThRGK1V",
|
||||
"digest": "3810b7780ac0e299b22ae70f1f94c9bc",
|
||||
"sector_size": 68719476736
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-2627e4006b67f99cef990c0a47d5426cb7ab0a0ad58fc1061547bf2d28b09def.vk": {
|
||||
"cid": "QmWV8rqZLxs1oQN9jxNWmnT1YdgLwCcscv94VARrhHf1T7",
|
||||
"digest": "59d2bf1857adc59a4f08fcf2afaa916b",
|
||||
"sector_size": 68719476736
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-b62098629d07946e9028127e70295ed996fe3ed25b0f9f88eb610a0ab4385a3c.params": {
|
||||
"cid": "QmVkrXc1SLcpgcudK5J25HH93QvR9tNsVhVTYHm5UymXAz",
|
||||
"digest": "2170a91ad5bae22ea61f2ea766630322",
|
||||
"sector_size": 68719476736
|
||||
},
|
||||
"v28-proof-of-spacetime-fallback-merkletree-poseidon_hasher-8-8-2-b62098629d07946e9028127e70295ed996fe3ed25b0f9f88eb610a0ab4385a3c.vk": {
|
||||
"cid": "QmbfQjPD7EpzjhWGmvWAsyN2mAZ4PcYhsf3ujuhU9CSuBm",
|
||||
"digest": "6d3789148fb6466d07ee1e24d6292fd6",
|
||||
"sector_size": 68719476736
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-032d3138d22506ec0082ed72b2dcba18df18477904e35bafee82b3793b06832f.params": {
|
||||
"cid": "QmWceMgnWYLopMuM4AoGMvGEau7tNe5UK83XFjH5V9B17h",
|
||||
"digest": "434fb1338ecfaf0f59256f30dde4968f",
|
||||
"sector_size": 2048
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-032d3138d22506ec0082ed72b2dcba18df18477904e35bafee82b3793b06832f.vk": {
|
||||
"cid": "QmamahpFCstMUqHi2qGtVoDnRrsXhid86qsfvoyCTKJqHr",
|
||||
"digest": "dc1ade9929ade1708238f155343044ac",
|
||||
"sector_size": 2048
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-6babf46ce344ae495d558e7770a585b2382d54f225af8ed0397b8be7c3fcd472.params": {
|
||||
"cid": "QmYBpTt7LWNAWr1JXThV5VxX7wsQFLd1PHrGYVbrU1EZjC",
|
||||
"digest": "6c77597eb91ab936c1cef4cf19eba1b3",
|
||||
"sector_size": 536870912
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-6babf46ce344ae495d558e7770a585b2382d54f225af8ed0397b8be7c3fcd472.vk": {
|
||||
"cid": "QmWionkqH2B6TXivzBSQeSyBxojaiAFbzhjtwYRrfwd8nH",
|
||||
"digest": "065179da19fbe515507267677f02823e",
|
||||
"sector_size": 536870912
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-ecd683648512ab1765faa2a5f14bab48f676e633467f0aa8aad4b55dcb0652bb.params": {
|
||||
"cid": "QmPXAPPuQtuQz7Zz3MHMAMEtsYwqM1o9H1csPLeiMUQwZH",
|
||||
"digest": "09e612e4eeb7a0eb95679a88404f960c",
|
||||
"sector_size": 8388608
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-0-0-sha256_hasher-ecd683648512ab1765faa2a5f14bab48f676e633467f0aa8aad4b55dcb0652bb.vk": {
|
||||
"cid": "QmYCuipFyvVW1GojdMrjK1JnMobXtT4zRCZs1CGxjizs99",
|
||||
"digest": "b687beb9adbd9dabe265a7e3620813e4",
|
||||
"sector_size": 8388608
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-0-sha256_hasher-82a357d2f2ca81dc61bb45f4a762807aedee1b0a53fd6c4e77b46a01bfef7820.params": {
|
||||
"cid": "QmengpM684XLQfG8754ToonszgEg2bQeAGUan5uXTHUQzJ",
|
||||
"digest": "6a388072a518cf46ebd661f5cc46900a",
|
||||
"sector_size": 34359738368
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-0-sha256_hasher-82a357d2f2ca81dc61bb45f4a762807aedee1b0a53fd6c4e77b46a01bfef7820.vk": {
|
||||
"cid": "Qmf93EMrADXAK6CyiSfE8xx45fkMfR3uzKEPCvZC1n2kzb",
|
||||
"digest": "0c7b4aac1c40fdb7eb82bc355b41addf",
|
||||
"sector_size": 34359738368
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-2-sha256_hasher-96f1b4a04c5c51e4759bbf224bbc2ef5a42c7100f16ec0637123f16a845ddfb2.params": {
|
||||
"cid": "QmS7ye6Ri2MfFzCkcUJ7FQ6zxDKuJ6J6B8k5PN7wzSR9sX",
|
||||
"digest": "1801f8a6e1b00bceb00cc27314bb5ce3",
|
||||
"sector_size": 68719476736
|
||||
},
|
||||
"v28-stacked-proof-of-replication-merkletree-poseidon_hasher-8-8-2-sha256_hasher-96f1b4a04c5c51e4759bbf224bbc2ef5a42c7100f16ec0637123f16a845ddfb2.vk": {
|
||||
"cid": "QmehSmC6BhrgRZakPDta2ewoH9nosNzdjCqQRXsNFNUkLN",
|
||||
"digest": "a89884252c04c298d0b3c81bfd884164",
|
||||
"sector_size": 68719476736
|
||||
}
|
||||
}
|
1
testplans/extra/fil-blst
Submodule
1
testplans/extra/fil-blst
Submodule
@ -0,0 +1 @@
|
||||
Subproject commit 5f93488fc0dbfb450f2355269f18fc67010d59bb
|
1
testplans/extra/filecoin-ffi
Submodule
1
testplans/extra/filecoin-ffi
Submodule
@ -0,0 +1 @@
|
||||
Subproject commit f640612a1a1f7a2dd8b3a49e1531db0aa0f63447
|
35
testplans/graphsync/_compositions/stress-k8s.toml
Normal file
35
testplans/graphsync/_compositions/stress-k8s.toml
Normal file
@ -0,0 +1,35 @@
|
||||
[metadata]
|
||||
name = "stress"
|
||||
|
||||
[global]
|
||||
plan = "graphsync"
|
||||
case = "stress"
|
||||
total_instances = 2
|
||||
builder = "docker:go"
|
||||
runner = "cluster:k8s"
|
||||
|
||||
[global.build_config]
|
||||
push_registry=true
|
||||
go_proxy_mode="remote"
|
||||
go_proxy_url="http://localhost:8081"
|
||||
registry_type="aws"
|
||||
|
||||
[global.run.test_params]
|
||||
size = "10MB"
|
||||
latencies = '["50ms", "100ms", "200ms"]'
|
||||
bandwidths = '["32MiB", "16MiB", "8MiB", "4MiB", "1MiB"]'
|
||||
concurrency = "10"
|
||||
|
||||
[[groups]]
|
||||
id = "providers"
|
||||
instances = { count = 1 }
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
|
||||
[[groups]]
|
||||
id = "requestors"
|
||||
instances = { count = 1 }
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
23
testplans/graphsync/_compositions/stress.toml
Normal file
23
testplans/graphsync/_compositions/stress.toml
Normal file
@ -0,0 +1,23 @@
|
||||
[metadata]
|
||||
name = "stress"
|
||||
|
||||
[global]
|
||||
plan = "graphsync"
|
||||
case = "stress"
|
||||
total_instances = 2
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.run.test_params]
|
||||
size = "10MB"
|
||||
latencies = '["50ms", "100ms", "200ms"]'
|
||||
bandwidths = '["32MiB", "16MiB", "8MiB", "4MiB", "1MiB"]'
|
||||
concurrency = "10"
|
||||
|
||||
[[groups]]
|
||||
id = "providers"
|
||||
instances = { count = 1 }
|
||||
|
||||
[[groups]]
|
||||
id = "requestors"
|
||||
instances = { count = 1 }
|
34
testplans/graphsync/_compositions/version_compat.toml
Normal file
34
testplans/graphsync/_compositions/version_compat.toml
Normal file
@ -0,0 +1,34 @@
|
||||
[metadata]
|
||||
name = "version_compat"
|
||||
|
||||
[global]
|
||||
plan = "graphsync"
|
||||
case = "stress"
|
||||
total_instances = 2
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.run.test_params]
|
||||
size = "10MB"
|
||||
latencies = '["50ms"]'
|
||||
bandwidths = '["4MiB"]'
|
||||
concurrency = "1"
|
||||
|
||||
[[groups]]
|
||||
id = "providers"
|
||||
instances = { count = 1 }
|
||||
[groups.build]
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/ipfs/go-graphsync"
|
||||
version = "v0.2.1"
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/hannahhoward/all-selector"
|
||||
version = "v0.2.0"
|
||||
|
||||
[[groups]]
|
||||
id = "requestors"
|
||||
instances = { count = 1 }
|
||||
[groups.build]
|
||||
[[groups.build.dependencies]]
|
||||
module = "github.com/ipfs/go-graphsync"
|
||||
version = "v0.1.2"
|
32
testplans/graphsync/go.mod
Normal file
32
testplans/graphsync/go.mod
Normal file
@ -0,0 +1,32 @@
|
||||
module github.com/libp2p/test-plans/ping
|
||||
|
||||
go 1.14
|
||||
|
||||
require (
|
||||
github.com/hannahhoward/all-selector v0.1.0
|
||||
github.com/ipfs/go-blockservice v0.1.3
|
||||
github.com/ipfs/go-cid v0.0.6
|
||||
github.com/ipfs/go-datastore v0.4.4
|
||||
github.com/ipfs/go-graphsync v0.1.2
|
||||
github.com/ipfs/go-ipfs-blockstore v0.1.4
|
||||
github.com/ipfs/go-ipfs-chunker v0.0.5
|
||||
github.com/ipfs/go-ipfs-exchange-offline v0.0.1
|
||||
github.com/ipfs/go-ipfs-files v0.0.8
|
||||
github.com/ipfs/go-ipld-format v0.2.0
|
||||
github.com/ipfs/go-merkledag v0.3.1
|
||||
github.com/ipfs/go-unixfs v0.2.4
|
||||
github.com/ipld/go-ipld-prime v0.4.0
|
||||
github.com/kr/text v0.2.0 // indirect
|
||||
github.com/libp2p/go-libp2p v0.10.0
|
||||
github.com/libp2p/go-libp2p-core v0.6.0
|
||||
github.com/libp2p/go-libp2p-noise v0.1.1
|
||||
github.com/libp2p/go-libp2p-secio v0.2.2
|
||||
github.com/libp2p/go-libp2p-tls v0.1.3
|
||||
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
|
||||
github.com/testground/sdk-go v0.2.6-0.20201016180515-1e40e1b0ec3a
|
||||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
|
||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae // indirect
|
||||
google.golang.org/protobuf v1.25.0 // indirect
|
||||
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
|
||||
gopkg.in/yaml.v2 v2.2.8 // indirect
|
||||
)
|
972
testplans/graphsync/go.sum
Normal file
972
testplans/graphsync/go.sum
Normal file
@ -0,0 +1,972 @@
|
||||
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go v0.37.0/go.mod h1:TS1dMSSfndXH133OKGwekG838Om/cQT0BUHV3HcBgoo=
|
||||
dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl+fi1br7+Rr3LqpNJf1/uxUdtRUV+Tnj0o93V2B9MU=
|
||||
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
|
||||
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
|
||||
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
|
||||
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20180913140656-343706a395b7/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9 h1:HD8gA2tkByhMAwYaFAX9w2l7vxvBQ5NMoxDrkhqhtn4=
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190306092124-e2d15f34fcf9/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
|
||||
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/Kubuxu/go-os-helper v0.0.1/go.mod h1:N8B+I7vPCT80IcP58r50u4+gEEcsZETFUpAzWW2ep1Y=
|
||||
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
||||
github.com/Stebalien/go-bitfield v0.0.1 h1:X3kbSSPUaJK60wV2hjOPZwmpljr6VGCqdq4cBLhbQBo=
|
||||
github.com/Stebalien/go-bitfield v0.0.1/go.mod h1:GNjFpasyUVkHMsfEOk8EFLJ9syQ6SI+XWrX9Wf2XH0s=
|
||||
github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
|
||||
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
|
||||
github.com/avast/retry-go v2.6.0+incompatible h1:FelcMrm7Bxacr1/RM8+/eqkDkmVN7tjlsy51dOzB3LI=
|
||||
github.com/avast/retry-go v2.6.0+incompatible/go.mod h1:XtSnn+n/sHqQIpZ10K1qAevBhOOCWBLXXy3hyiqqBrY=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
|
||||
github.com/btcsuite/btcd v0.0.0-20190213025234-306aecffea32/go.mod h1:DrZx5ec/dmnfpw9KyYoQyYo7d0KEvTkk/5M/vbZjAr8=
|
||||
github.com/btcsuite/btcd v0.0.0-20190523000118-16327141da8c/go.mod h1:3J08xEfcugPacsc34/LKRU2yO7YmuT8yt28J8k2+rrI=
|
||||
github.com/btcsuite/btcd v0.0.0-20190605094302-a0d1e3e36d50/go.mod h1:3J08xEfcugPacsc34/LKRU2yO7YmuT8yt28J8k2+rrI=
|
||||
github.com/btcsuite/btcd v0.0.0-20190824003749-130ea5bddde3/go.mod h1:3J08xEfcugPacsc34/LKRU2yO7YmuT8yt28J8k2+rrI=
|
||||
github.com/btcsuite/btcd v0.20.1-beta h1:Ik4hyJqN8Jfyv3S4AGBOmyouMsYE3EdYODkMbQjwPGw=
|
||||
github.com/btcsuite/btcd v0.20.1-beta/go.mod h1:wVuoA8VJLEcwgqHBwHmzLRazpKxTv13Px/pDuV7OomQ=
|
||||
github.com/btcsuite/btclog v0.0.0-20170628155309-84c8d2346e9f/go.mod h1:TdznJufoqS23FtqVCzL0ZqgP5MqXbb4fg/WgDys70nA=
|
||||
github.com/btcsuite/btcutil v0.0.0-20190207003914-4c204d697803/go.mod h1:+5NJ2+qvTyV9exUAL/rxXi3DcLg2Ts+ymUAY5y4NvMg=
|
||||
github.com/btcsuite/btcutil v0.0.0-20190425235716-9e5f4b9a998d/go.mod h1:+5NJ2+qvTyV9exUAL/rxXi3DcLg2Ts+ymUAY5y4NvMg=
|
||||
github.com/btcsuite/go-socks v0.0.0-20170105172521-4720035b7bfd/go.mod h1:HHNXQzUsZCxOoE+CPiyCTO6x34Zs86zZUiwtpXoGdtg=
|
||||
github.com/btcsuite/goleveldb v0.0.0-20160330041536-7834afc9e8cd/go.mod h1:F+uVaaLLH7j4eDXPRvw78tMflu7Ie2bzYOH4Y8rRKBY=
|
||||
github.com/btcsuite/snappy-go v0.0.0-20151229074030-0bdef8d06723/go.mod h1:8woku9dyThutzjeg+3xrA5iCpBRH8XEEg3lh6TiUghc=
|
||||
github.com/btcsuite/websocket v0.0.0-20150119174127-31079b680792/go.mod h1:ghJtEyQwv5/p4Mg4C0fgbePVuGr935/5ddU9Z3TmDRY=
|
||||
github.com/btcsuite/winsvc v1.0.0/go.mod h1:jsenWakMcC0zFBFurPLEAyrnc/teJEM1O46fmI40EZs=
|
||||
github.com/buger/jsonparser v0.0.0-20181115193947-bf1c66bbce23/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
|
||||
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
|
||||
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
|
||||
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
|
||||
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cheekybits/genny v1.0.0 h1:uGGa4nei+j20rOSeDeP5Of12XVm7TGUd4dJA9RDitfE=
|
||||
github.com/cheekybits/genny v1.0.0/go.mod h1:+tQajlRqAUrPI7DOSpB0XAqZYtQakVtB7wXkRAgjxjQ=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
|
||||
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
||||
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
|
||||
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
|
||||
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
|
||||
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
|
||||
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3 h1:HVTnpeuvF6Owjd5mniCL8DEXo7uYXdQEmOP4FJbV5tg=
|
||||
github.com/crackcomm/go-gitignore v0.0.0-20170627025303-887ab5e44cc3/go.mod h1:p1d6YEZWvFzEh4KLyvBcVSnrfNDDvK2zfK/4x2v/4pE=
|
||||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/cskr/pubsub v1.0.2 h1:vlOzMhl6PFn60gRlTQQsIfVwaPB/B/8MziK8FhEPt/0=
|
||||
github.com/cskr/pubsub v1.0.2/go.mod h1:/8MzYXk/NJAz782G8RPkFzXTZVu63VotefPnR9TIRis=
|
||||
github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davidlazar/go-crypto v0.0.0-20170701192655-dcfb0a7ac018/go.mod h1:rQYf4tfk5sSwFsnDg3qYaBxSjsD9S8+59vW0dKUgme4=
|
||||
github.com/davidlazar/go-crypto v0.0.0-20190912175916-7055855a373f h1:BOaYiTvg8p9vBUXpklC22XSK/mifLF7lG9jtmYYi3Tc=
|
||||
github.com/davidlazar/go-crypto v0.0.0-20190912175916-7055855a373f/go.mod h1:rQYf4tfk5sSwFsnDg3qYaBxSjsD9S8+59vW0dKUgme4=
|
||||
github.com/dgraph-io/badger v1.5.5-0.20190226225317-8115aed38f8f/go.mod h1:VZxzAIRPHRVNRKRo6AXrX9BJegn6il06VMTZVJYCIjQ=
|
||||
github.com/dgraph-io/badger v1.6.0-rc1/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
|
||||
github.com/dgraph-io/badger v1.6.0/go.mod h1:zwt7syl517jmP8s94KqSxTlM6IMsdhYy6psNgSztDR4=
|
||||
github.com/dgraph-io/badger v1.6.1 h1:w9pSFNSdq/JPM1N12Fz/F/bzo993Is1W+Q7HjPzi7yg=
|
||||
github.com/dgraph-io/badger v1.6.1/go.mod h1:FRmFw3uxvcpa8zG3Rxs0th+hCLIuaQg8HlNV5bjgnuU=
|
||||
github.com/dgraph-io/ristretto v0.0.2 h1:a5WaUrDa0qm0YrAAS1tUykT5El3kt62KNZZeMxQn3po=
|
||||
github.com/dgraph-io/ristretto v0.0.2/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=
|
||||
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
|
||||
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
|
||||
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
|
||||
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
|
||||
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
|
||||
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
|
||||
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
|
||||
github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6 h1:u/UEqS66A5ckRmS4yNpjmVH56sVtS/RfclBAYocb4as=
|
||||
github.com/flynn/noise v0.0.0-20180327030543-2492fe189ae6/go.mod h1:1i71OnUq3iUe1ma7Lr6yG6/rjvM3emb6yoL7xLFzcVQ=
|
||||
github.com/francoispqt/gojay v1.2.13 h1:d2m3sFjloqoIUQU3TsHBgj6qg/BVGlTBeHDUmyJnXKk=
|
||||
github.com/francoispqt/gojay v1.2.13/go.mod h1:ehT5mTG4ua4581f1++1WLG0vPdaA9HaiDsoyrBGkyDY=
|
||||
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
|
||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
|
||||
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
|
||||
github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclKsC9YodN5RgxqK/VD9HM9JsCSh7rNhMZE98=
|
||||
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||
github.com/go-redis/redis/v7 v7.4.0 h1:7obg6wUoj05T0EpY0o8B59S9w5yeMWql7sw2kwNW1x4=
|
||||
github.com/go-redis/redis/v7 v7.4.0/go.mod h1:JDNMw23GTyLNC4GZu9njt15ctBQVn7xjRfnwdHj/Dcg=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
|
||||
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6 h1:ZgQEtGgCBiWRM39fZuwSd1LwSqqSW0hOdXCYYDX0R3I=
|
||||
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/mock v1.4.0 h1:Rd1kQnQu0Hq3qvJppYSG0HtP+f5LPPUiDswTLiEegLg=
|
||||
github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.0/go.mod h1:Qd/q+1AKNOZr9uGQzbzCmRO6sUih6GTPZv6a1/R87v0=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
|
||||
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
|
||||
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
|
||||
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
|
||||
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
|
||||
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
|
||||
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
|
||||
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
|
||||
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0 h1:/QaMHBdZ26BB3SSst0Iwl10Epc+xhTquomWX0oZEB6w=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gopacket v1.1.17 h1:rMrlX2ZY2UbvT+sdz3+6J+pp2z+msCq9MxTU6ymxbBY=
github.com/google/gopacket v1.1.17/go.mod h1:UdDNZ1OO62aGYVnPhxT1U6aI7ukYtA/kB8vaU0diBUM=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.3/go.mod h1:LLvjysVCY1JZeum8Z6l8qUty8fiNwE08qbEPm1M08qg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190812055157-5d271430af9f h1:KMlcu9X58lhTA/KrfX8Bi1LQSO4pzoVjTiL3h4Jk+Zk=
github.com/gopherjs/gopherjs v0.0.0-20190812055157-5d271430af9f/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/gxed/hashland/keccakpg v0.0.1/go.mod h1:kRzw3HkwxFU1mpmPP8v1WyQzwdGfmKFJ6tItnhQ67kU=
github.com/gxed/hashland/murmur3 v0.0.1/go.mod h1:KjXop02n4/ckmZSnY2+HKcLud/tcmvhST0bie/0lS48=
github.com/hannahhoward/all-selector v0.1.0 h1:B+hMG/8Vb0+XB3eHK2Cz6hYpSZWVZuSz401ebRvfGtk=
github.com/hannahhoward/all-selector v0.1.0/go.mod h1:2wbwlpJCyAaTfpSYqKqqA5Xe0YPvJmyjylxKs6+PIvA=
github.com/hannahhoward/go-pubsub v0.0.0-20200423002714-8d62886cc36e h1:3YKHER4nmd7b5qy5t0GWDTwSn4OyRgfAXSmo6VnryBY=
github.com/hannahhoward/go-pubsub v0.0.0-20200423002714-8d62886cc36e/go.mod h1:I8h3MITA53gN9OnWGCgaMa0JWVRdXthWw4M3CPM54OY=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.0 h1:B9UzwGQJehnUY1yNrnwREHc3fGbC2xefo8g4TbElacI=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huin/goupnp v1.0.0 h1:wg75sLpL6DZqwHQN6E1Cfk6mtfzS45z8OV+ic+DtHRo=
github.com/huin/goupnp v1.0.0/go.mod h1:n9v9KO1tAxYH82qOn+UTIFQDmx5n1Zxd/ClZDMX7Bnc=
github.com/huin/goutil v0.0.0-20170803182201-1ca381bf3150/go.mod h1:PpLOETDnJ0o3iZrZfqZzyLl6l7F3c6L1oWn7OICBi6o=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20200515024757-02f0bf5dbca3 h1:k3/6a1Shi7GGCp9QpyYuXsMM6ncTOjCzOE9Fd6CDA+Q=
github.com/influxdata/influxdb1-client v0.0.0-20200515024757-02f0bf5dbca3/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/ipfs/bbloom v0.0.1/go.mod h1:oqo8CVWsJFMOZqTglBG4wydCE4IQA/G2/SEofB0rjUI=
github.com/ipfs/bbloom v0.0.4 h1:Gi+8EGJ2y5qiD5FbsbpX/TMNcJw8gSqr7eyjHa4Fhvs=
github.com/ipfs/bbloom v0.0.4/go.mod h1:cS9YprKXpoZ9lT0n/Mw/a6/aFV6DTjTLYHeA+gyqMG0=
github.com/ipfs/go-bitswap v0.1.0/go.mod h1:FFJEf18E9izuCqUtHxbWEvq+reg7o4CW5wSAE1wsxj0=
github.com/ipfs/go-bitswap v0.1.2/go.mod h1:qxSWS4NXGs7jQ6zQvoPY3+NmOfHHG47mhkiLzBpJQIs=
github.com/ipfs/go-bitswap v0.1.8 h1:38X1mKXkiU6Nzw4TOSWD8eTVY5eX3slQunv3QEWfXKg=
github.com/ipfs/go-bitswap v0.1.8/go.mod h1:TOWoxllhccevbWFUR2N7B1MTSVVge1s6XSMiCSA4MzM=
github.com/ipfs/go-block-format v0.0.1/go.mod h1:DK/YYcsSUIVAFNwo/KZCdIIbpN0ROH/baNLgayt4pFc=
github.com/ipfs/go-block-format v0.0.2 h1:qPDvcP19izTjU8rgo6p7gTXZlkMkF5bz5G3fqIsSCPE=
github.com/ipfs/go-block-format v0.0.2/go.mod h1:AWR46JfpcObNfg3ok2JHDUfdiHRgWhJgCQF+KIgOPJY=
github.com/ipfs/go-blockservice v0.1.0/go.mod h1:hzmMScl1kXHg3M2BjTymbVPjv627N7sYcvYaKbop39M=
github.com/ipfs/go-blockservice v0.1.3 h1:9XgsPMwwWJSC9uVr2pMDsW2qFTBSkxpGMhmna8mIjPM=
github.com/ipfs/go-blockservice v0.1.3/go.mod h1:OTZhFpkgY48kNzbgyvcexW9cHrpjBYIjSR0KoDOFOLU=
github.com/ipfs/go-cid v0.0.1/go.mod h1:GHWU/WuQdMPmIosc4Yn1bcCT7dSeX4lBafM7iqUPQvM=
github.com/ipfs/go-cid v0.0.2/go.mod h1:GHWU/WuQdMPmIosc4Yn1bcCT7dSeX4lBafM7iqUPQvM=
github.com/ipfs/go-cid v0.0.3/go.mod h1:GHWU/WuQdMPmIosc4Yn1bcCT7dSeX4lBafM7iqUPQvM=
github.com/ipfs/go-cid v0.0.4/go.mod h1:4LLaPOQwmk5z9LBgQnpkivrx8BJjUyGwTXCd5Xfj6+M=
github.com/ipfs/go-cid v0.0.5/go.mod h1:plgt+Y5MnOey4vO4UlUazGqdbEXuFYitED67FexhXog=
github.com/ipfs/go-cid v0.0.6 h1:go0y+GcDOGeJIV01FeBsta4FHngoA4Wz7KMeLkXAhMs=
github.com/ipfs/go-cid v0.0.6/go.mod h1:6Ux9z5e+HpkQdckYoX1PG/6xqKspzlEIR5SDmgqgC/I=
github.com/ipfs/go-datastore v0.0.1/go.mod h1:d4KVXhMt913cLBEI/PXAy6ko+W7e9AhyAKBGh803qeE=
github.com/ipfs/go-datastore v0.0.5/go.mod h1:d4KVXhMt913cLBEI/PXAy6ko+W7e9AhyAKBGh803qeE=
github.com/ipfs/go-datastore v0.1.0/go.mod h1:d4KVXhMt913cLBEI/PXAy6ko+W7e9AhyAKBGh803qeE=
github.com/ipfs/go-datastore v0.1.1/go.mod h1:w38XXW9kVFNp57Zj5knbKWM2T+KOZCGDRVNdgPHtbHw=
github.com/ipfs/go-datastore v0.3.1/go.mod h1:w38XXW9kVFNp57Zj5knbKWM2T+KOZCGDRVNdgPHtbHw=
github.com/ipfs/go-datastore v0.4.0/go.mod h1:SX/xMIKoCszPqp+z9JhPYCmoOoXTvaa13XEbGtsFUhA=
github.com/ipfs/go-datastore v0.4.1/go.mod h1:SX/xMIKoCszPqp+z9JhPYCmoOoXTvaa13XEbGtsFUhA=
github.com/ipfs/go-datastore v0.4.4 h1:rjvQ9+muFaJ+QZ7dN5B1MSDNQ0JVZKkkES/rMZmA8X8=
github.com/ipfs/go-datastore v0.4.4/go.mod h1:SX/xMIKoCszPqp+z9JhPYCmoOoXTvaa13XEbGtsFUhA=
github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=
github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps=
github.com/ipfs/go-ds-badger v0.0.2/go.mod h1:Y3QpeSFWQf6MopLTiZD+VT6IC1yZqaGmjvRcKeSGij8=
github.com/ipfs/go-ds-badger v0.0.5/go.mod h1:g5AuuCGmr7efyzQhLL8MzwqcauPojGPUaHzfGTzuE3s=
github.com/ipfs/go-ds-badger v0.2.1/go.mod h1:Tx7l3aTph3FMFrRS838dcSJh+jjA7cX9DrGVwx/NOwE=
github.com/ipfs/go-ds-badger v0.2.3 h1:J27YvAcpuA5IvZUbeBxOcQgqnYHUPxoygc6QxxkodZ4=
github.com/ipfs/go-ds-badger v0.2.3/go.mod h1:pEYw0rgg3FIrywKKnL+Snr+w/LjJZVMTBRn4FS6UHUk=
github.com/ipfs/go-ds-leveldb v0.0.1/go.mod h1:feO8V3kubwsEF22n0YRQCffeb79OOYIykR4L04tMOYc=
github.com/ipfs/go-ds-leveldb v0.4.1/go.mod h1:jpbku/YqBSsBc1qgME8BkWS4AxzF2cEu1Ii2r79Hh9s=
github.com/ipfs/go-ds-leveldb v0.4.2/go.mod h1:jpbku/YqBSsBc1qgME8BkWS4AxzF2cEu1Ii2r79Hh9s=
github.com/ipfs/go-graphsync v0.1.2 h1:25Ll9kIXCE+DY0dicvfS3KMw+U5sd01b/FJbA7KAbhg=
github.com/ipfs/go-graphsync v0.1.2/go.mod h1:sLXVXm1OxtE2XYPw62MuXCdAuNwkAdsbnfrmos5odbA=
github.com/ipfs/go-ipfs-blockstore v0.0.1/go.mod h1:d3WClOmRQKFnJ0Jz/jj/zmksX0ma1gROTlovZKBmN08=
github.com/ipfs/go-ipfs-blockstore v0.1.0/go.mod h1:5aD0AvHPi7mZc6Ci1WCAhiBQu2IsfTduLl+422H6Rqw=
github.com/ipfs/go-ipfs-blockstore v0.1.4 h1:2SGI6U1B44aODevza8Rde3+dY30Pb+lbcObe1LETxOQ=
github.com/ipfs/go-ipfs-blockstore v0.1.4/go.mod h1:Jxm3XMVjh6R17WvxFEiyKBLUGr86HgIYJW/D/MwqeYQ=
github.com/ipfs/go-ipfs-blocksutil v0.0.1 h1:Eh/H4pc1hsvhzsQoMEP3Bke/aW5P5rVM1IWFJMcGIPQ=
github.com/ipfs/go-ipfs-blocksutil v0.0.1/go.mod h1:Yq4M86uIOmxmGPUHv/uI7uKqZNtLb449gwKqXjIsnRk=
github.com/ipfs/go-ipfs-chunker v0.0.1/go.mod h1:tWewYK0we3+rMbOh7pPFGDyypCtvGcBFymgY4rSDLAw=
github.com/ipfs/go-ipfs-chunker v0.0.5 h1:ojCf7HV/m+uS2vhUGWcogIIxiO5ubl5O57Q7NapWLY8=
github.com/ipfs/go-ipfs-chunker v0.0.5/go.mod h1:jhgdF8vxRHycr00k13FM8Y0E+6BoalYeobXmUyTreP8=
github.com/ipfs/go-ipfs-delay v0.0.0-20181109222059-70721b86a9a8/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
github.com/ipfs/go-ipfs-delay v0.0.1 h1:r/UXYyRcddO6thwOnhiznIAiSvxMECGgtv35Xs1IeRQ=
github.com/ipfs/go-ipfs-delay v0.0.1/go.mod h1:8SP1YXK1M1kXuc4KJZINY3TQQ03J2rwBG9QfXmbRPrw=
github.com/ipfs/go-ipfs-ds-help v0.0.1/go.mod h1:gtP9xRaZXqIQRh1HRpp595KbBEdgqWFxefeVKOV8sxo=
github.com/ipfs/go-ipfs-ds-help v0.1.1 h1:IW/bXGeaAZV2VH0Kuok+Ohva/zHkHmeLFBxC1k7mNPc=
github.com/ipfs/go-ipfs-ds-help v0.1.1/go.mod h1:SbBafGJuGsPI/QL3j9Fc5YPLeAu+SzOkI0gFwAg+mOs=
github.com/ipfs/go-ipfs-exchange-interface v0.0.1 h1:LJXIo9W7CAmugqI+uofioIpRb6rY30GUu7G6LUfpMvM=
github.com/ipfs/go-ipfs-exchange-interface v0.0.1/go.mod h1:c8MwfHjtQjPoDyiy9cFquVtVHkO9b9Ob3FG91qJnWCM=
github.com/ipfs/go-ipfs-exchange-offline v0.0.1 h1:P56jYKZF7lDDOLx5SotVh5KFxoY6C81I1NSHW1FxGew=
github.com/ipfs/go-ipfs-exchange-offline v0.0.1/go.mod h1:WhHSFCVYX36H/anEKQboAzpUws3x7UeEGkzQc3iNkM0=
github.com/ipfs/go-ipfs-files v0.0.3/go.mod h1:INEFm0LL2LWXBhNJ2PMIIb2w45hpXgPjNoE7yA8Y1d4=
github.com/ipfs/go-ipfs-files v0.0.8 h1:8o0oFJkJ8UkO/ABl8T6ac6tKF3+NIpj67aAB6ZpusRg=
github.com/ipfs/go-ipfs-files v0.0.8/go.mod h1:wiN/jSG8FKyk7N0WyctKSvq3ljIa2NNTiZB55kpTdOs=
github.com/ipfs/go-ipfs-posinfo v0.0.1 h1:Esoxj+1JgSjX0+ylc0hUmJCOv6V2vFoZiETLR6OtpRs=
github.com/ipfs/go-ipfs-posinfo v0.0.1/go.mod h1:SwyeVP+jCwiDu0C313l/8jg6ZxM0qqtlt2a0vILTc1A=
github.com/ipfs/go-ipfs-pq v0.0.1/go.mod h1:LWIqQpqfRG3fNc5XsnIhz/wQ2XXGyugQwls7BgUmUfY=
github.com/ipfs/go-ipfs-pq v0.0.2 h1:e1vOOW6MuOwG2lqxcLA+wEn93i/9laCY8sXAw76jFOY=
github.com/ipfs/go-ipfs-pq v0.0.2/go.mod h1:LWIqQpqfRG3fNc5XsnIhz/wQ2XXGyugQwls7BgUmUfY=
github.com/ipfs/go-ipfs-routing v0.1.0 h1:gAJTT1cEeeLj6/DlLX6t+NxD9fQe2ymTO6qWRDI/HQQ=
github.com/ipfs/go-ipfs-routing v0.1.0/go.mod h1:hYoUkJLyAUKhF58tysKpids8RNDPO42BVMgK5dNsoqY=
github.com/ipfs/go-ipfs-util v0.0.1/go.mod h1:spsl5z8KUnrve+73pOhSVZND1SIxPW5RyBCNzQxlJBc=
github.com/ipfs/go-ipfs-util v0.0.2 h1:59Sswnk1MFaiq+VcaknX7aYEyGyGDAA73ilhEK2POp8=
github.com/ipfs/go-ipfs-util v0.0.2/go.mod h1:CbPtkWJzjLdEcezDns2XYaehFVNXG9zrdrtMecczcsQ=
github.com/ipfs/go-ipld-cbor v0.0.2/go.mod h1:wTBtrQZA3SoFKMVkp6cn6HMRteIB1VsmHA0AQFOn7Nc=
github.com/ipfs/go-ipld-cbor v0.0.3/go.mod h1:wTBtrQZA3SoFKMVkp6cn6HMRteIB1VsmHA0AQFOn7Nc=
github.com/ipfs/go-ipld-cbor v0.0.4 h1:Aw3KPOKXjvrm6VjwJvFf1F1ekR/BH3jdof3Bk7OTiSA=
github.com/ipfs/go-ipld-cbor v0.0.4/go.mod h1:BkCduEx3XBCO6t2Sfo5BaHzuok7hbhdMm9Oh8B2Ftq4=
github.com/ipfs/go-ipld-format v0.0.1/go.mod h1:kyJtbkDALmFHv3QR6et67i35QzO3S0dCDnkOJhcZkms=
github.com/ipfs/go-ipld-format v0.0.2/go.mod h1:4B6+FM2u9OJ9zCV+kSbgFAZlOrv1Hqbf0INGQgiKf9k=
github.com/ipfs/go-ipld-format v0.2.0 h1:xGlJKkArkmBvowr+GMCX0FEZtkro71K1AwiKnL37mwA=
github.com/ipfs/go-ipld-format v0.2.0/go.mod h1:3l3C1uKoadTPbeNfrDi+xMInYKlx2Cvg1BuydPSdzQs=
github.com/ipfs/go-log v0.0.1/go.mod h1:kL1d2/hzSpI0thNYjiKfjanbVNU+IIGA/WnNESY9leM=
github.com/ipfs/go-log v1.0.2/go.mod h1:1MNjMxe0u6xvJZgeqbJ8vdo2TKaGwZ1a0Bpza+sr2Sk=
github.com/ipfs/go-log v1.0.3/go.mod h1:OsLySYkwIbiSUR/yBTdv1qPtcE4FW3WPWk/ewz9Ru+A=
github.com/ipfs/go-log v1.0.4 h1:6nLQdX4W8P9yZZFH7mO+X/PzjN8Laozm/lMJ6esdgzY=
github.com/ipfs/go-log v1.0.4/go.mod h1:oDCg2FkjogeFOhqqb+N39l2RpTNPL6F/StPkB3kPgcs=
github.com/ipfs/go-log/v2 v2.0.2/go.mod h1:O7P1lJt27vWHhOwQmcFEvlmo49ry2VY2+JfBWFaa9+0=
github.com/ipfs/go-log/v2 v2.0.3/go.mod h1:O7P1lJt27vWHhOwQmcFEvlmo49ry2VY2+JfBWFaa9+0=
github.com/ipfs/go-log/v2 v2.0.5 h1:fL4YI+1g5V/b1Yxr1qAiXTMg1H8z9vx/VmJxBuQMHvU=
github.com/ipfs/go-log/v2 v2.0.5/go.mod h1:eZs4Xt4ZUJQFM3DlanGhy7TkwwawCZcSByscwkWG+dw=
github.com/ipfs/go-merkledag v0.2.3/go.mod h1:SQiXrtSts3KGNmgOzMICy5c0POOpUNQLvB3ClKnBAlk=
github.com/ipfs/go-merkledag v0.3.1 h1:3UqWINBEr3/N+r6OwgFXAddDP/8zpQX/8J7IGVOCqRQ=
github.com/ipfs/go-merkledag v0.3.1/go.mod h1:fvkZNNZixVW6cKSZ/JfLlON5OlgTXNdRLz0p6QG/I2M=
github.com/ipfs/go-metrics-interface v0.0.1 h1:j+cpbjYvu4R8zbleSs36gvB7jR+wsL2fGD6n0jO4kdg=
github.com/ipfs/go-metrics-interface v0.0.1/go.mod h1:6s6euYU4zowdslK0GKHmqaIZ3j/b/tL7HTWtJ4VPgWY=
github.com/ipfs/go-peertaskqueue v0.1.0/go.mod h1:Jmk3IyCcfl1W3jTW3YpghSwSEC6IJ3Vzz/jUmWw8Z0U=
github.com/ipfs/go-peertaskqueue v0.1.1/go.mod h1:Jmk3IyCcfl1W3jTW3YpghSwSEC6IJ3Vzz/jUmWw8Z0U=
github.com/ipfs/go-peertaskqueue v0.2.0 h1:2cSr7exUGKYyDeUyQ7P/nHPs9P7Ht/B+ROrpN1EJOjc=
github.com/ipfs/go-peertaskqueue v0.2.0/go.mod h1:5/eNrBEbtSKWCG+kQK8K8fGNixoYUnr+P7jivavs9lY=
github.com/ipfs/go-unixfs v0.2.4 h1:6NwppOXefWIyysZ4LR/qUBPvXd5//8J3jiMdvpbw6Lo=
github.com/ipfs/go-unixfs v0.2.4/go.mod h1:SUdisfUjNoSDzzhGVxvCL9QO/nKdwXdr+gbMUdqcbYw=
github.com/ipfs/go-verifcid v0.0.1 h1:m2HI7zIuR5TFyQ1b79Da5N9dnnCP1vcu2QqawmWlK2E=
github.com/ipfs/go-verifcid v0.0.1/go.mod h1:5Hrva5KBeIog4A+UpqlaIU+DEstipcJYQQZc0g37pY0=
github.com/ipld/go-ipld-prime v0.0.2-0.20200428162820-8b59dc292b8e/go.mod h1:uVIwe/u0H4VdKv3kaN1ck7uCb6yD9cFLS9/ELyXbsw8=
github.com/ipld/go-ipld-prime v0.0.4-0.20200828224805-5ff8c8b0b6ef h1:/yPelt/0CuzZsmRkYzBBnJ499JnAOGaIaAXHujx96ic=
github.com/ipld/go-ipld-prime v0.0.4-0.20200828224805-5ff8c8b0b6ef/go.mod h1:uVIwe/u0H4VdKv3kaN1ck7uCb6yD9cFLS9/ELyXbsw8=
github.com/ipld/go-ipld-prime v0.4.0 h1:ySDtWeWl+TDMokXlwGANSMeD5TN618cZp9NnxqZ452M=
github.com/ipld/go-ipld-prime v0.4.0/go.mod h1:uVIwe/u0H4VdKv3kaN1ck7uCb6yD9cFLS9/ELyXbsw8=
github.com/ipld/go-ipld-prime-proto v0.0.0-20200828231332-ae0aea07222b h1:ZtlW6pubN17TDaStlxgrwEXXwwUfJaXu9RobwczXato=
github.com/ipld/go-ipld-prime-proto v0.0.0-20200828231332-ae0aea07222b/go.mod h1:OAV6xBmuTLsPZ+epzKkPB1e25FHk/vCtyatkdHcArLs=
github.com/jackpal/gateway v1.0.5/go.mod h1:lTpwd4ACLXmpyiCTRtfiNyVnUmqT9RivzCDQetPfnjA=
github.com/jackpal/go-nat-pmp v1.0.1/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jbenet/go-cienv v0.0.0-20150120210510-1bb1476777ec/go.mod h1:rGaEvXB4uRSZMmzKNLoXvTu1sfx+1kv/DojUlPrSZGs=
github.com/jbenet/go-cienv v0.1.0 h1:Vc/s0QbQtoxX8MwwSLWWh+xNNZvM3Lw7NsTcHrvvhMc=
github.com/jbenet/go-cienv v0.1.0/go.mod h1:TqNnHUmJgXau0nCzC7kXWeotg3J9W34CUv5Djy1+FlA=
github.com/jbenet/go-random v0.0.0-20190219211222-123a90aedc0c h1:uUx61FiAa1GI6ZmVd2wf2vULeQZIKG66eybjNXKYCz4=
github.com/jbenet/go-random v0.0.0-20190219211222-123a90aedc0c/go.mod h1:sdx1xVM9UuLw1tXnhJWN3piypTUO3vCIHYmG15KE/dU=
github.com/jbenet/go-temp-err-catcher v0.0.0-20150120210811-aac704a3f4f2/go.mod h1:8GXXJV31xl8whumTzdZsTt3RnUIiPqzkyf7mxToRCMs=
github.com/jbenet/go-temp-err-catcher v0.1.0 h1:zpb3ZH6wIE8Shj2sKS+khgRvf7T7RABoLk/+KKHggpk=
github.com/jbenet/go-temp-err-catcher v0.1.0/go.mod h1:0kJRvmDZXNMIiJirNPEYfhpPwbGVtZVWC34vc5WLsDk=
github.com/jbenet/goprocess v0.0.0-20160826012719-b497e2f366b8/go.mod h1:Ly/wlsjFq/qrU3Rar62tu1gASgGw6chQbSh/XgIIXCY=
github.com/jbenet/goprocess v0.1.3/go.mod h1:5yspPrukOVuOLORacaBi858NqyClJPQxYZlqdZVfqY4=
github.com/jbenet/goprocess v0.1.4 h1:DRGOFReOMqqDNXwW70QkacFW0YN9QnwLV0Vqk+3oU0o=
github.com/jbenet/goprocess v0.1.4/go.mod h1:5yspPrukOVuOLORacaBi858NqyClJPQxYZlqdZVfqY4=
github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jrick/logrotate v1.0.0/go.mod h1:LNinyqDIJnpAur+b8yyulnQw/wDuN1+BYKlTRt3OuAQ=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jtolds/gls v4.2.1+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kami-zh/go-capturer v0.0.0-20171211120116-e492ea43421d/go.mod h1:P2viExyCEfeWGU259JnaQ34Inuec4R38JCyBx2edgD0=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/koron/go-ssdp v0.0.0-20180514024734-4a0ed625a78b/go.mod h1:5Ky9EC2xfoUKUor0Hjgi2BJhCSXJfMOFlmyYrVKGQMk=
github.com/koron/go-ssdp v0.0.0-20191105050749-2e1c40ed0b5d h1:68u9r4wEvL3gYg2jvAOgROwZ3H+Y3hIDk4tbbmIjcYQ=
github.com/koron/go-ssdp v0.0.0-20191105050749-2e1c40ed0b5d/go.mod h1:5Ky9EC2xfoUKUor0Hjgi2BJhCSXJfMOFlmyYrVKGQMk=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/libp2p/go-addr-util v0.0.1/go.mod h1:4ac6O7n9rIAKB1dnd+s8IbbMXkt+oBpzX4/+RACcnlQ=
github.com/libp2p/go-addr-util v0.0.2 h1:7cWK5cdA5x72jX0g8iLrQWm5TRJZ6CzGdPEhWj7plWU=
github.com/libp2p/go-addr-util v0.0.2/go.mod h1:Ecd6Fb3yIuLzq4bD7VcywcVSBtefcAwnUISBM3WG15E=
github.com/libp2p/go-buffer-pool v0.0.1/go.mod h1:xtyIz9PMobb13WaxR6Zo1Pd1zXJKYg0a8KiIvDp3TzQ=
github.com/libp2p/go-buffer-pool v0.0.2 h1:QNK2iAFa8gjAe1SPz6mHSMuCcjs+X1wlHzeOSqcmlfs=
github.com/libp2p/go-buffer-pool v0.0.2/go.mod h1:MvaB6xw5vOrDl8rYZGLFdKAuk/hRoRZd1Vi32+RXyFM=
github.com/libp2p/go-conn-security-multistream v0.1.0/go.mod h1:aw6eD7LOsHEX7+2hJkDxw1MteijaVcI+/eP2/x3J1xc=
github.com/libp2p/go-conn-security-multistream v0.2.0 h1:uNiDjS58vrvJTg9jO6bySd1rMKejieG7v45ekqHbZ1M=
github.com/libp2p/go-conn-security-multistream v0.2.0/go.mod h1:hZN4MjlNetKD3Rq5Jb/P5ohUnFLNzEAR4DLSzpn2QLU=
github.com/libp2p/go-eventbus v0.1.0/go.mod h1:vROgu5cs5T7cv7POWlWxBaVLxfSegC5UGQf8A2eEmx4=
github.com/libp2p/go-eventbus v0.2.1 h1:VanAdErQnpTioN2TowqNcOijf6YwhuODe4pPKSDpxGc=
github.com/libp2p/go-eventbus v0.2.1/go.mod h1:jc2S4SoEVPP48H9Wpzm5aiGwUCBMfGhVhhBjyhhCJs8=
github.com/libp2p/go-flow-metrics v0.0.1/go.mod h1:Iv1GH0sG8DtYN3SVJ2eG221wMiNpZxBdp967ls1g+k8=
github.com/libp2p/go-flow-metrics v0.0.3 h1:8tAs/hSdNvUiLgtlSy3mxwxWP4I9y/jlkPFT7epKdeM=
github.com/libp2p/go-flow-metrics v0.0.3/go.mod h1:HeoSNUrOJVK1jEpDqVEiUOIXqhbnS27omG0uWU5slZs=
github.com/libp2p/go-libp2p v0.1.0/go.mod h1:6D/2OBauqLUoqcADOJpn9WbKqvaM07tDw68qHM0BxUM=
github.com/libp2p/go-libp2p v0.1.1/go.mod h1:I00BRo1UuUSdpuc8Q2mN7yDF/oTUTRAX6JWpTiK9Rp8=
github.com/libp2p/go-libp2p v0.6.0/go.mod h1:mfKWI7Soz3ABX+XEBR61lGbg+ewyMtJHVt043oWeqwg=
github.com/libp2p/go-libp2p v0.6.1/go.mod h1:CTFnWXogryAHjXAKEbOf1OWY+VeAP3lDMZkfEI5sT54=
github.com/libp2p/go-libp2p v0.7.0/go.mod h1:hZJf8txWeCduQRDC/WSqBGMxaTHCOYHt2xSU1ivxn0k=
github.com/libp2p/go-libp2p v0.7.4/go.mod h1:oXsBlTLF1q7pxr+9w6lqzS1ILpyHsaBPniVO7zIHGMw=
github.com/libp2p/go-libp2p v0.8.1/go.mod h1:QRNH9pwdbEBpx5DTJYg+qxcVaDMAz3Ee/qDKwXujH5o=
github.com/libp2p/go-libp2p v0.8.3/go.mod h1:EsH1A+8yoWK+L4iKcbPYu6MPluZ+CHWI9El8cTaefiM=
github.com/libp2p/go-libp2p v0.10.0 h1:7ooOvK1wi8eLpyTppy8TeH43UHy5uI75GAHGJxenUi0=
github.com/libp2p/go-libp2p v0.10.0/go.mod h1:yBJNpb+mGJdgrwbKAKrhPU0u3ogyNFTfjJ6bdM+Q/G8=
github.com/libp2p/go-libp2p-autonat v0.1.0/go.mod h1:1tLf2yXxiE/oKGtDwPYWTSYG3PtvYlJmg7NeVtPRqH8=
github.com/libp2p/go-libp2p-autonat v0.1.1/go.mod h1:OXqkeGOY2xJVWKAGV2inNF5aKN/djNA3fdpCWloIudE=
github.com/libp2p/go-libp2p-autonat v0.2.0/go.mod h1:DX+9teU4pEEoZUqR1PiMlqliONQdNbfzE1C718tcViI=
github.com/libp2p/go-libp2p-autonat v0.2.1/go.mod h1:MWtAhV5Ko1l6QBsHQNSuM6b1sRkXrpk0/LqCr+vCVxI=
github.com/libp2p/go-libp2p-autonat v0.2.2/go.mod h1:HsM62HkqZmHR2k1xgX34WuWDzk/nBwNHoeyyT4IWV6A=
github.com/libp2p/go-libp2p-autonat v0.2.3 h1:w46bKK3KTOUWDe5mDYMRjJu1uryqBp8HCNDp/TWMqKw=
github.com/libp2p/go-libp2p-autonat v0.2.3/go.mod h1:2U6bNWCNsAG9LEbwccBDQbjzQ8Krdjge1jLTE9rdoMM=
github.com/libp2p/go-libp2p-blankhost v0.1.1/go.mod h1:pf2fvdLJPsC1FsVrNP3DUUvMzUts2dsLLBEpo1vW1ro=
github.com/libp2p/go-libp2p-blankhost v0.1.4/go.mod h1:oJF0saYsAXQCSfDq254GMNmLNz6ZTHTOvtF4ZydUvwU=
github.com/libp2p/go-libp2p-blankhost v0.1.6 h1:CkPp1/zaCrCnBo0AdsQA0O1VkUYoUOtyHOnoa8gKIcE=
github.com/libp2p/go-libp2p-blankhost v0.1.6/go.mod h1:jONCAJqEP+Z8T6EQviGL4JsQcLx1LgTGtVqFNY8EMfQ=
github.com/libp2p/go-libp2p-circuit v0.1.0/go.mod h1:Ahq4cY3V9VJcHcn1SBXjr78AbFkZeIRmfunbA7pmFh8=
github.com/libp2p/go-libp2p-circuit v0.1.4/go.mod h1:CY67BrEjKNDhdTk8UgBX1Y/H5c3xkAcs3gnksxY7osU=
github.com/libp2p/go-libp2p-circuit v0.2.1/go.mod h1:BXPwYDN5A8z4OEY9sOfr2DUQMLQvKt/6oku45YUmjIo=
github.com/libp2p/go-libp2p-circuit v0.2.2/go.mod h1:nkG3iE01tR3FoQ2nMm06IUrCpCyJp1Eo4A1xYdpjfs4=
github.com/libp2p/go-libp2p-circuit v0.2.3 h1:3Uw1fPHWrp1tgIhBz0vSOxRUmnKL8L/NGUyEd5WfSGM=
github.com/libp2p/go-libp2p-circuit v0.2.3/go.mod h1:nkG3iE01tR3FoQ2nMm06IUrCpCyJp1Eo4A1xYdpjfs4=
github.com/libp2p/go-libp2p-core v0.0.1/go.mod h1:g/VxnTZ/1ygHxH3dKok7Vno1VfpvGcGip57wjTU4fco=
github.com/libp2p/go-libp2p-core v0.0.2/go.mod h1:9dAcntw/n46XycV4RnlBq3BpgrmyUi9LuoTNdPrbUco=
github.com/libp2p/go-libp2p-core v0.0.3/go.mod h1:j+YQMNz9WNSkNezXOsahp9kwZBKBvxLpKD316QWSJXE=
github.com/libp2p/go-libp2p-core v0.0.4/go.mod h1:jyuCQP356gzfCFtRKyvAbNkyeuxb7OlyhWZ3nls5d2I=
github.com/libp2p/go-libp2p-core v0.2.0/go.mod h1:X0eyB0Gy93v0DZtSYbEM7RnMChm9Uv3j7yRXjO77xSI=
github.com/libp2p/go-libp2p-core v0.2.2/go.mod h1:8fcwTbsG2B+lTgRJ1ICZtiM5GWCWZVoVrLaDRvIRng0=
github.com/libp2p/go-libp2p-core v0.2.4/go.mod h1:STh4fdfa5vDYr0/SzYYeqnt+E6KfEV5VxfIrm0bcI0g=
github.com/libp2p/go-libp2p-core v0.3.0/go.mod h1:ACp3DmS3/N64c2jDzcV429ukDpicbL6+TrrxANBjPGw=
github.com/libp2p/go-libp2p-core v0.3.1/go.mod h1:thvWy0hvaSBhnVBaW37BvzgVV68OUhgJJLAa6almrII=
github.com/libp2p/go-libp2p-core v0.4.0/go.mod h1:49XGI+kc38oGVwqSBhDEwytaAxgZasHhFfQKibzTls0=
github.com/libp2p/go-libp2p-core v0.5.0/go.mod h1:49XGI+kc38oGVwqSBhDEwytaAxgZasHhFfQKibzTls0=
github.com/libp2p/go-libp2p-core v0.5.1/go.mod h1:uN7L2D4EvPCvzSH5SrhR72UWbnSGpt5/a35Sm4upn4Y=
github.com/libp2p/go-libp2p-core v0.5.2/go.mod h1:uN7L2D4EvPCvzSH5SrhR72UWbnSGpt5/a35Sm4upn4Y=
github.com/libp2p/go-libp2p-core v0.5.3/go.mod h1:uN7L2D4EvPCvzSH5SrhR72UWbnSGpt5/a35Sm4upn4Y=
github.com/libp2p/go-libp2p-core v0.5.4/go.mod h1:uN7L2D4EvPCvzSH5SrhR72UWbnSGpt5/a35Sm4upn4Y=
github.com/libp2p/go-libp2p-core v0.5.5/go.mod h1:vj3awlOr9+GMZJFH9s4mpt9RHHgGqeHCopzbYKZdRjM=
github.com/libp2p/go-libp2p-core v0.5.6/go.mod h1:txwbVEhHEXikXn9gfC7/UDDw7rkxuX0bJvM49Ykaswo=
github.com/libp2p/go-libp2p-core v0.5.7/go.mod h1:txwbVEhHEXikXn9gfC7/UDDw7rkxuX0bJvM49Ykaswo=
github.com/libp2p/go-libp2p-core v0.6.0 h1:u03qofNYTBN+yVg08PuAKylZogVf0xcTEeM8skGf+ak=
github.com/libp2p/go-libp2p-core v0.6.0/go.mod h1:txwbVEhHEXikXn9gfC7/UDDw7rkxuX0bJvM49Ykaswo=
github.com/libp2p/go-libp2p-crypto v0.1.0/go.mod h1:sPUokVISZiy+nNuTTH/TY+leRSxnFj/2GLjtOTW90hI=
github.com/libp2p/go-libp2p-discovery v0.1.0/go.mod h1:4F/x+aldVHjHDHuX85x1zWoFTGElt8HnoDzwkFZm29g=
github.com/libp2p/go-libp2p-discovery v0.2.0/go.mod h1:s4VGaxYMbw4+4+tsoQTqh7wfxg97AEdo4GYBt6BadWg=
github.com/libp2p/go-libp2p-discovery v0.3.0/go.mod h1:o03drFnz9BVAZdzC/QUQ+NeQOu38Fu7LJGEOK2gQltw=
github.com/libp2p/go-libp2p-discovery v0.4.0 h1:dK78UhopBk48mlHtRCzbdLm3q/81g77FahEBTjcqQT8=
github.com/libp2p/go-libp2p-discovery v0.4.0/go.mod h1:bZ0aJSrFc/eX2llP0ryhb1kpgkPyTo23SJ5b7UQCMh4=
github.com/libp2p/go-libp2p-loggables v0.1.0 h1:h3w8QFfCt2UJl/0/NW4K829HX/0S4KD31PQ7m8UXXO8=
github.com/libp2p/go-libp2p-loggables v0.1.0/go.mod h1:EyumB2Y6PrYjr55Q3/tiJ/o3xoDasoRYM7nOzEpoa90=
github.com/libp2p/go-libp2p-mplex v0.2.0/go.mod h1:Ejl9IyjvXJ0T9iqUTE1jpYATQ9NM3g+OtR+EMMODbKo=
github.com/libp2p/go-libp2p-mplex v0.2.1/go.mod h1:SC99Rxs8Vuzrf/6WhmH41kNn13TiYdAWNYHrwImKLnE=
github.com/libp2p/go-libp2p-mplex v0.2.2/go.mod h1:74S9eum0tVQdAfFiKxAyKzNdSuLqw5oadDq7+L/FELo=
github.com/libp2p/go-libp2p-mplex v0.2.3 h1:2zijwaJvpdesST2MXpI5w9wWFRgYtMcpRX7rrw0jmOo=
github.com/libp2p/go-libp2p-mplex v0.2.3/go.mod h1:CK3p2+9qH9x+7ER/gWWDYJ3QW5ZxWDkm+dVvjfuG3ek=
github.com/libp2p/go-libp2p-nat v0.0.4/go.mod h1:N9Js/zVtAXqaeT99cXgTV9e75KpnWCvVOiGzlcHmBbY=
github.com/libp2p/go-libp2p-nat v0.0.5/go.mod h1:1qubaE5bTZMJE+E/uu2URroMbzdubFz1ChgiN79yKPE=
github.com/libp2p/go-libp2p-nat v0.0.6 h1:wMWis3kYynCbHoyKLPBEMu4YRLltbm8Mk08HGSfvTkU=
github.com/libp2p/go-libp2p-nat v0.0.6/go.mod h1:iV59LVhB3IkFvS6S6sauVTSOrNEANnINbI/fkaLimiw=
github.com/libp2p/go-libp2p-netutil v0.1.0 h1:zscYDNVEcGxyUpMd0JReUZTrpMfia8PmLKcKF72EAMQ=
github.com/libp2p/go-libp2p-netutil v0.1.0/go.mod h1:3Qv/aDqtMLTUyQeundkKsA+YCThNdbQD54k3TqjpbFU=
github.com/libp2p/go-libp2p-noise v0.1.1 h1:vqYQWvnIcHpIoWJKC7Al4D6Hgj0H012TuXRhPwSMGpQ=
github.com/libp2p/go-libp2p-noise v0.1.1/go.mod h1:QDFLdKX7nluB7DEnlVPbz7xlLHdwHFA9HiohJRr3vwM=
github.com/libp2p/go-libp2p-peer v0.2.0/go.mod h1:RCffaCvUyW2CJmG2gAWVqwePwW7JMgxjsHm7+J5kjWY=
github.com/libp2p/go-libp2p-peerstore v0.1.0/go.mod h1:2CeHkQsr8svp4fZ+Oi9ykN1HBb6u0MOvdJ7YIsmcwtY=
github.com/libp2p/go-libp2p-peerstore v0.1.3/go.mod h1:BJ9sHlm59/80oSkpWgr1MyY1ciXAXV397W6h1GH/uKI=
github.com/libp2p/go-libp2p-peerstore v0.2.0/go.mod h1:N2l3eVIeAitSg3Pi2ipSrJYnqhVnMNQZo9nkSCuAbnQ=
github.com/libp2p/go-libp2p-peerstore v0.2.1/go.mod h1:NQxhNjWxf1d4w6PihR8btWIRjwRLBr4TYKfNgrUkOPA=
github.com/libp2p/go-libp2p-peerstore v0.2.2/go.mod h1:NQxhNjWxf1d4w6PihR8btWIRjwRLBr4TYKfNgrUkOPA=
github.com/libp2p/go-libp2p-peerstore v0.2.3/go.mod h1:K8ljLdFn590GMttg/luh4caB/3g0vKuY01psze0upRw=
github.com/libp2p/go-libp2p-peerstore v0.2.4/go.mod h1:ss/TWTgHZTMpsU/oKVVPQCGuDHItOpf2W8RxAi50P2s=
github.com/libp2p/go-libp2p-peerstore v0.2.6 h1:2ACefBX23iMdJU9Ke+dcXt3w86MIryes9v7In4+Qq3U=
github.com/libp2p/go-libp2p-peerstore v0.2.6/go.mod h1:ss/TWTgHZTMpsU/oKVVPQCGuDHItOpf2W8RxAi50P2s=
github.com/libp2p/go-libp2p-pnet v0.2.0 h1:J6htxttBipJujEjz1y0a5+eYoiPcFHhSYHH6na5f0/k=
github.com/libp2p/go-libp2p-pnet v0.2.0/go.mod h1:Qqvq6JH/oMZGwqs3N1Fqhv8NVhrdYcO0BW4wssv21LA=
github.com/libp2p/go-libp2p-quic-transport v0.5.0 h1:BUN1lgYNUrtv4WLLQ5rQmC9MCJ6uEXusezGvYRNoJXE=
github.com/libp2p/go-libp2p-quic-transport v0.5.0/go.mod h1:IEcuC5MLxvZ5KuHKjRu+dr3LjCT1Be3rcD/4d8JrX8M=
github.com/libp2p/go-libp2p-record v0.1.0/go.mod h1:ujNc8iuE5dlKWVy6wuL6dd58t0n7xI4hAIl8pE6wu5Q=
github.com/libp2p/go-libp2p-record v0.1.1 h1:ZJK2bHXYUBqObHX+rHLSNrM3M8fmJUlUHrodDPPATmY=
github.com/libp2p/go-libp2p-record v0.1.1/go.mod h1:VRgKajOyMVgP/F0L5g3kH7SVskp17vFi2xheb5uMJtg=
github.com/libp2p/go-libp2p-secio v0.1.0/go.mod h1:tMJo2w7h3+wN4pgU2LSYeiKPrfqBgkOsdiKK77hE7c8=
github.com/libp2p/go-libp2p-secio v0.2.0/go.mod h1:2JdZepB8J5V9mBp79BmwsaPQhRPNN2NrnB2lKQcdy6g=
github.com/libp2p/go-libp2p-secio v0.2.1/go.mod h1:cWtZpILJqkqrSkiYcDBh5lA3wbT2Q+hz3rJQq3iftD8=
github.com/libp2p/go-libp2p-secio v0.2.2 h1:rLLPvShPQAcY6eNurKNZq3eZjPWfU9kXF2eI9jIYdrg=
github.com/libp2p/go-libp2p-secio v0.2.2/go.mod h1:wP3bS+m5AUnFA+OFO7Er03uO1mncHG0uVwGrwvjYlNY=
github.com/libp2p/go-libp2p-swarm v0.1.0/go.mod h1:wQVsCdjsuZoc730CgOvh5ox6K8evllckjebkdiY5ta4=
github.com/libp2p/go-libp2p-swarm v0.2.2/go.mod h1:fvmtQ0T1nErXym1/aa1uJEyN7JzaTNyBcHImCxRpPKU=
github.com/libp2p/go-libp2p-swarm v0.2.3/go.mod h1:P2VO/EpxRyDxtChXz/VPVXyTnszHvokHKRhfkEgFKNM=
github.com/libp2p/go-libp2p-swarm v0.2.7 h1:4lV/sf7f0NuVqunOpt1I11+Z54+xp+m0eeAvxj/LyRc=
github.com/libp2p/go-libp2p-swarm v0.2.7/go.mod h1:ZSJ0Q+oq/B1JgfPHJAT2HTall+xYRNYp1xs4S2FBWKA=
github.com/libp2p/go-libp2p-testing v0.0.2/go.mod h1:gvchhf3FQOtBdr+eFUABet5a4MBLK8jM3V4Zghvmi+E=
github.com/libp2p/go-libp2p-testing v0.0.3/go.mod h1:gvchhf3FQOtBdr+eFUABet5a4MBLK8jM3V4Zghvmi+E=
github.com/libp2p/go-libp2p-testing v0.0.4/go.mod h1:gvchhf3FQOtBdr+eFUABet5a4MBLK8jM3V4Zghvmi+E=
github.com/libp2p/go-libp2p-testing v0.1.0/go.mod h1:xaZWMJrPUM5GlDBxCeGUi7kI4eqnjVyavGroI2nxEM0=
github.com/libp2p/go-libp2p-testing v0.1.1 h1:U03z3HnGI7Ni8Xx6ONVZvUFOAzWYmolWf5W5jAOPNmU=
github.com/libp2p/go-libp2p-testing v0.1.1/go.mod h1:xaZWMJrPUM5GlDBxCeGUi7kI4eqnjVyavGroI2nxEM0=
github.com/libp2p/go-libp2p-tls v0.1.3 h1:twKMhMu44jQO+HgQK9X8NHO5HkeJu2QbhLzLJpa8oNM=
github.com/libp2p/go-libp2p-tls v0.1.3/go.mod h1:wZfuewxOndz5RTnCAxFliGjvYSDA40sKitV4c50uI1M=
github.com/libp2p/go-libp2p-transport-upgrader v0.1.1/go.mod h1:IEtA6or8JUbsV07qPW4r01GnTenLW4oi3lOPbUMGJJA=
github.com/libp2p/go-libp2p-transport-upgrader v0.2.0/go.mod h1:mQcrHj4asu6ArfSoMuyojOdjx73Q47cYD7s5+gZOlns=
github.com/libp2p/go-libp2p-transport-upgrader v0.3.0 h1:q3ULhsknEQ34eVDhv4YwKS8iet69ffs9+Fir6a7weN4=
github.com/libp2p/go-libp2p-transport-upgrader v0.3.0/go.mod h1:i+SKzbRnvXdVbU3D1dwydnTmKRPXiAR/fyvi1dXuL4o=
github.com/libp2p/go-libp2p-yamux v0.2.0/go.mod h1:Db2gU+XfLpm6E4rG5uGCFX6uXA8MEXOxFcRoXUODaK8=
github.com/libp2p/go-libp2p-yamux v0.2.1/go.mod h1:1FBXiHDk1VyRM1C0aez2bCfHQ4vMZKkAQzZbkSQt5fI=
github.com/libp2p/go-libp2p-yamux v0.2.2/go.mod h1:lIohaR0pT6mOt0AZ0L2dFze9hds9Req3OfS+B+dv4qw=
github.com/libp2p/go-libp2p-yamux v0.2.5/go.mod h1:Zpgj6arbyQrmZ3wxSZxfBmbdnWtbZ48OpsfmQVTErwA=
github.com/libp2p/go-libp2p-yamux v0.2.7/go.mod h1:X28ENrBMU/nm4I3Nx4sZ4dgjZ6VhLEn0XhIoZ5viCwU=
github.com/libp2p/go-libp2p-yamux v0.2.8 h1:0s3ELSLu2O7hWKfX1YjzudBKCP0kZ+m9e2+0veXzkn4=
github.com/libp2p/go-libp2p-yamux v0.2.8/go.mod h1:/t6tDqeuZf0INZMTgd0WxIRbtK2EzI2h7HbFm9eAKI4=
github.com/libp2p/go-maddr-filter v0.0.4/go.mod h1:6eT12kSQMA9x2pvFQa+xesMKUBlj9VImZbj3B9FBH/Q=
github.com/libp2p/go-maddr-filter v0.0.5/go.mod h1:Jk+36PMfIqCJhAnaASRH83bdAvfDRp/w6ENFaC9bG+M=
github.com/libp2p/go-mplex v0.0.3/go.mod h1:pK5yMLmOoBR1pNCqDlA2GQrdAVTMkqFalaTWe7l4Yd0=
github.com/libp2p/go-mplex v0.1.0/go.mod h1:SXgmdki2kwCUlCCbfGLEgHjC4pFqhTp0ZoV6aiKgxDU=
github.com/libp2p/go-mplex v0.1.1/go.mod h1:Xgz2RDCi3co0LeZfgjm4OgUF15+sVR8SRcu3SFXI1lk=
github.com/libp2p/go-mplex v0.1.2 h1:qOg1s+WdGLlpkrczDqmhYzyk3vCfsQ8+RxRTQjOZWwI=
github.com/libp2p/go-mplex v0.1.2/go.mod h1:Xgz2RDCi3co0LeZfgjm4OgUF15+sVR8SRcu3SFXI1lk=
github.com/libp2p/go-msgio v0.0.2/go.mod h1:63lBBgOTDKQL6EWazRMCwXsEeEeK9O2Cd+0+6OOuipQ=
github.com/libp2p/go-msgio v0.0.3/go.mod h1:63lBBgOTDKQL6EWazRMCwXsEeEeK9O2Cd+0+6OOuipQ=
github.com/libp2p/go-msgio v0.0.4 h1:agEFehY3zWJFUHK6SEMR7UYmk2z6kC3oeCM7ybLhguA=
github.com/libp2p/go-msgio v0.0.4/go.mod h1:63lBBgOTDKQL6EWazRMCwXsEeEeK9O2Cd+0+6OOuipQ=
github.com/libp2p/go-nat v0.0.3/go.mod h1:88nUEt0k0JD45Bk93NIwDqjlhiOwOoV36GchpcVc1yI=
github.com/libp2p/go-nat v0.0.4/go.mod h1:Nmw50VAvKuk38jUBcmNh6p9lUJLoODbJRvYAa/+KSDo=
github.com/libp2p/go-nat v0.0.5 h1:qxnwkco8RLKqVh1NmjQ+tJ8p8khNLFxuElYG/TwqW4Q=
github.com/libp2p/go-nat v0.0.5/go.mod h1:B7NxsVNPZmRLvMOwiEO1scOSyjA56zxYAGv1yQgRkEU=
github.com/libp2p/go-netroute v0.1.2 h1:UHhB35chwgvcRI392znJA3RCBtZ3MpE3ahNCN5MR4Xg=
github.com/libp2p/go-netroute v0.1.2/go.mod h1:jZLDV+1PE8y5XxBySEBgbuVAXbhtuHSdmLPL2n9MKbk=
github.com/libp2p/go-openssl v0.0.2/go.mod h1:v8Zw2ijCSWBQi8Pq5GAixw6DbFfa9u6VIYDXnvOXkc0=
github.com/libp2p/go-openssl v0.0.3/go.mod h1:unDrJpgy3oFr+rqXsarWifmJuNnJR4chtO1HmaZjggc=
github.com/libp2p/go-openssl v0.0.4/go.mod h1:unDrJpgy3oFr+rqXsarWifmJuNnJR4chtO1HmaZjggc=
github.com/libp2p/go-openssl v0.0.5 h1:pQkejVhF0xp08D4CQUcw8t+BFJeXowja6RVcb5p++EA=
github.com/libp2p/go-openssl v0.0.5/go.mod h1:unDrJpgy3oFr+rqXsarWifmJuNnJR4chtO1HmaZjggc=
github.com/libp2p/go-reuseport v0.0.1 h1:7PhkfH73VXfPJYKQ6JwS5I/eVcoyYi9IMNGc6FWpFLw=
github.com/libp2p/go-reuseport v0.0.1/go.mod h1:jn6RmB1ufnQwl0Q1f+YxAj8isJgDCQzaaxIFYDhcYEA=
github.com/libp2p/go-reuseport-transport v0.0.2/go.mod h1:YkbSDrvjUVDL6b8XqriyA20obEtsW9BLkuOUyQAOCbs=
github.com/libp2p/go-reuseport-transport v0.0.3 h1:zzOeXnTooCkRvoH+bSXEfXhn76+LAiwoneM0gnXjF2M=
github.com/libp2p/go-reuseport-transport v0.0.3/go.mod h1:Spv+MPft1exxARzP2Sruj2Wb5JSyHNncjf1Oi2dEbzM=
github.com/libp2p/go-sockaddr v0.0.2/go.mod h1:syPvOmNs24S3dFVGJA1/mrqdeijPxLV2Le3BRLKd68k=
github.com/libp2p/go-sockaddr v0.1.0 h1:Y4s3/jNoryVRKEBrkJ576F17CPOaMIzUeCsg7dlTDj0=
github.com/libp2p/go-sockaddr v0.1.0/go.mod h1:syPvOmNs24S3dFVGJA1/mrqdeijPxLV2Le3BRLKd68k=
github.com/libp2p/go-stream-muxer v0.0.1/go.mod h1:bAo8x7YkSpadMTbtTaxGVHWUQsR/l5MEaHbKaliuT14=
github.com/libp2p/go-stream-muxer-multistream v0.2.0/go.mod h1:j9eyPol/LLRqT+GPLSxvimPhNph4sfYfMoDPd7HkzIc=
github.com/libp2p/go-stream-muxer-multistream v0.3.0 h1:TqnSHPJEIqDEO7h1wZZ0p3DXdvDSiLHQidKKUGZtiOY=
github.com/libp2p/go-stream-muxer-multistream v0.3.0/go.mod h1:yDh8abSIzmZtqtOt64gFJUXEryejzNb0lisTt+fAMJA=
github.com/libp2p/go-tcp-transport v0.1.0/go.mod h1:oJ8I5VXryj493DEJ7OsBieu8fcg2nHGctwtInJVpipc=
github.com/libp2p/go-tcp-transport v0.1.1/go.mod h1:3HzGvLbx6etZjnFlERyakbaYPdfjg2pWP97dFZworkY=
github.com/libp2p/go-tcp-transport v0.2.0 h1:YoThc549fzmNJIh7XjHVtMIFaEDRtIrtWciG5LyYAPo=
github.com/libp2p/go-tcp-transport v0.2.0/go.mod h1:vX2U0CnWimU4h0SGSEsg++AzvBcroCGYw28kh94oLe0=
github.com/libp2p/go-testutil v0.1.0/go.mod h1:81b2n5HypcVyrCg/MJx4Wgfp/VHojytjVe/gLzZ2Ehc=
github.com/libp2p/go-ws-transport v0.1.0/go.mod h1:rjw1MG1LU9YDC6gzmwObkPd/Sqwhw7yT74kj3raBFuo=
github.com/libp2p/go-ws-transport v0.2.0/go.mod h1:9BHJz/4Q5A9ludYWKoGCFC5gUElzlHoKzu0yY9p/klM=
github.com/libp2p/go-ws-transport v0.3.0/go.mod h1:bpgTJmRZAvVHrgHybCVyqoBmyLQ1fiZuEaBYusP5zsk=
github.com/libp2p/go-ws-transport v0.3.1 h1:ZX5rWB8nhRRJVaPO6tmkGI/Xx8XNboYX20PW5hXIscw=
github.com/libp2p/go-ws-transport v0.3.1/go.mod h1:bpgTJmRZAvVHrgHybCVyqoBmyLQ1fiZuEaBYusP5zsk=
github.com/libp2p/go-yamux v1.2.2/go.mod h1:FGTiPvoV/3DVdgWpX+tM0OW3tsM+W5bSE3gZwqQTcow=
github.com/libp2p/go-yamux v1.2.3/go.mod h1:FGTiPvoV/3DVdgWpX+tM0OW3tsM+W5bSE3gZwqQTcow=
github.com/libp2p/go-yamux v1.3.0/go.mod h1:FGTiPvoV/3DVdgWpX+tM0OW3tsM+W5bSE3gZwqQTcow=
github.com/libp2p/go-yamux v1.3.3/go.mod h1:FGTiPvoV/3DVdgWpX+tM0OW3tsM+W5bSE3gZwqQTcow=
github.com/libp2p/go-yamux v1.3.5/go.mod h1:FGTiPvoV/3DVdgWpX+tM0OW3tsM+W5bSE3gZwqQTcow=
github.com/libp2p/go-yamux v1.3.7 h1:v40A1eSPJDIZwz2AvrV3cxpTZEGDP11QJbukmEhYyQI=
github.com/libp2p/go-yamux v1.3.7/go.mod h1:fr7aVgmdNGJK+N1g+b6DW6VxzbRCjCOejR/hkmpooHE=
github.com/lucas-clemente/quic-go v0.16.0 h1:jJw36wfzGJhmOhAOaOC2lS36WgeqXQszH47A7spo1LI=
github.com/lucas-clemente/quic-go v0.16.0/go.mod h1:I0+fcNTdb9eS1ZcjQZbDVPGchJ86chcIxPALn9lEJqE=
github.com/lunixbochs/vtclean v1.0.0/go.mod h1:pHhQNgMf3btfWnGBVipUOjRYhoOsdGqdm/+2c2E2WMI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/marten-seemann/qpack v0.1.0/go.mod h1:LFt1NU/Ptjip0C2CPkhimBz5CGE3WGDAUWqna+CNTrI=
github.com/marten-seemann/qtls v0.9.1 h1:O0YKQxNVPaiFgMng0suWEOY2Sb4LT2sRn9Qimq3Z1IQ=
github.com/marten-seemann/qtls v0.9.1/go.mod h1:T1MmAdDPyISzxlK6kjRr0pcZFBVd1OZbBb/j3cvzHhk=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/microcosm-cc/bluemonday v1.0.1/go.mod h1:hsXNsILzKxV+sX77C5b8FSuKF00vh2OMYv+xgHpAMF4=
github.com/miekg/dns v1.1.12/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.28/go.mod h1:KNUDUusw/aVsxyTYZM1oqvCicbwhgbNgztCETuNZ7xM=
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1 h1:lYpkrQH5ajf0OXOcUbGjvZxxijuBwbbmlSxLiuofa+g=
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1/go.mod h1:pD8RvIylQ358TN4wwqatJ8rNavkEINozVn9DtGI3dfQ=
github.com/minio/sha256-simd v0.0.0-20190131020904-2d45a736cd16/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/minio/sha256-simd v0.0.0-20190328051042-05b4dd3047e5/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/minio/sha256-simd v0.1.0/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
github.com/minio/sha256-simd v0.1.1-0.20190913151208-6de447530771/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
github.com/minio/sha256-simd v0.1.1 h1:5QHSlgo3nt5yKOJrC7W8w7X+NFl8cMPZm96iu8kKUJU=
github.com/minio/sha256-simd v0.1.1/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mr-tron/base58 v1.1.0/go.mod h1:xcD2VGqlgYjBdcBLw+TuYLr8afG+Hj8g2eTVqeSzSU8=
github.com/mr-tron/base58 v1.1.1/go.mod h1:xcD2VGqlgYjBdcBLw+TuYLr8afG+Hj8g2eTVqeSzSU8=
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/mr-tron/base58 v1.1.3 h1:v+sk57XuaCKGXpWtVBX8YJzO7hMGx4Aajh4TQbdEFdc=
github.com/mr-tron/base58 v1.1.3/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/multiformats/go-base32 v0.0.3 h1:tw5+NhuwaOjJCC5Pp82QuXbrmLzWg7uxlMFp8Nq/kkI=
github.com/multiformats/go-base32 v0.0.3/go.mod h1:pLiuGC8y0QR3Ue4Zug5UzK9LjgbkL8NSQj0zQ5Nz/AA=
github.com/multiformats/go-base36 v0.1.0 h1:JR6TyF7JjGd3m6FbLU2cOxhC0Li8z8dLNGQ89tUg4F4=
github.com/multiformats/go-base36 v0.1.0/go.mod h1:kFGE83c6s80PklsHO9sRn2NCoffoRdUUOENyW/Vv6sM=
github.com/multiformats/go-multiaddr v0.0.1/go.mod h1:xKVEak1K9cS1VdmPZW3LSIb6lgmoS58qz/pzqmAxV44=
github.com/multiformats/go-multiaddr v0.0.2/go.mod h1:xKVEak1K9cS1VdmPZW3LSIb6lgmoS58qz/pzqmAxV44=
github.com/multiformats/go-multiaddr v0.0.4/go.mod h1:xKVEak1K9cS1VdmPZW3LSIb6lgmoS58qz/pzqmAxV44=
github.com/multiformats/go-multiaddr v0.1.0/go.mod h1:xKVEak1K9cS1VdmPZW3LSIb6lgmoS58qz/pzqmAxV44=
github.com/multiformats/go-multiaddr v0.1.1/go.mod h1:aMKBKNEYmzmDmxfX88/vz+J5IU55txyt0p4aiWVohjo=
github.com/multiformats/go-multiaddr v0.2.0/go.mod h1:0nO36NvPpyV4QzvTLi/lafl2y95ncPj0vFwVF6k6wJ4=
github.com/multiformats/go-multiaddr v0.2.1/go.mod h1:s/Apk6IyxfvMjDafnhJgJ3/46z7tZ04iMk5wP4QMGGE=
github.com/multiformats/go-multiaddr v0.2.2 h1:XZLDTszBIJe6m0zF6ITBrEcZR73OPUhCBBS9rYAuUzI=
github.com/multiformats/go-multiaddr v0.2.2/go.mod h1:NtfXiOtHvghW9KojvtySjH5y0u0xW5UouOmQQrn6a3Y=
github.com/multiformats/go-multiaddr-dns v0.0.1/go.mod h1:9kWcqw/Pj6FwxAwW38n/9403szc57zJPs45fmnznu3Q=
github.com/multiformats/go-multiaddr-dns v0.0.2/go.mod h1:9kWcqw/Pj6FwxAwW38n/9403szc57zJPs45fmnznu3Q=
github.com/multiformats/go-multiaddr-dns v0.2.0 h1:YWJoIDwLePniH7OU5hBnDZV6SWuvJqJ0YtN6pLeH9zA=
github.com/multiformats/go-multiaddr-dns v0.2.0/go.mod h1:TJ5pr5bBO7Y1B18djPuRsVkduhQH2YqYSbxWJzYGdK0=
github.com/multiformats/go-multiaddr-fmt v0.0.1/go.mod h1:aBYjqL4T/7j4Qx+R73XSv/8JsgnRFlf0w2KGLCmXl3Q=
github.com/multiformats/go-multiaddr-fmt v0.1.0 h1:WLEFClPycPkp4fnIzoFoV9FVd49/eQsuaL3/CWe167E=
github.com/multiformats/go-multiaddr-fmt v0.1.0/go.mod h1:hGtDIW4PU4BqJ50gW2quDuPVjyWNZxToGUh/HwTZYJo=
github.com/multiformats/go-multiaddr-net v0.0.1/go.mod h1:nw6HSxNmCIQH27XPGBuX+d1tnvM7ihcFwHMSstNAVUU=
github.com/multiformats/go-multiaddr-net v0.1.0/go.mod h1:5JNbcfBOP4dnhoZOv10JJVkJO0pCCEf8mTnipAo2UZQ=
github.com/multiformats/go-multiaddr-net v0.1.1/go.mod h1:5JNbcfBOP4dnhoZOv10JJVkJO0pCCEf8mTnipAo2UZQ=
github.com/multiformats/go-multiaddr-net v0.1.2/go.mod h1:QsWt3XK/3hwvNxZJp92iMQKME1qHfpYmyIjFVsSOY6Y=
github.com/multiformats/go-multiaddr-net v0.1.3/go.mod h1:ilNnaM9HbmVFqsb/qcNysjCu4PVONlrBZpHIrw/qQuA=
github.com/multiformats/go-multiaddr-net v0.1.4/go.mod h1:ilNnaM9HbmVFqsb/qcNysjCu4PVONlrBZpHIrw/qQuA=
github.com/multiformats/go-multiaddr-net v0.1.5 h1:QoRKvu0xHN1FCFJcMQLbG/yQE2z441L5urvG3+qyz7g=
github.com/multiformats/go-multiaddr-net v0.1.5/go.mod h1:ilNnaM9HbmVFqsb/qcNysjCu4PVONlrBZpHIrw/qQuA=
github.com/multiformats/go-multibase v0.0.1/go.mod h1:bja2MqRZ3ggyXtZSEDKpl0uO/gviWFaSteVbWT51qgs=
github.com/multiformats/go-multibase v0.0.3 h1:l/B6bJDQjvQ5G52jw4QGSYeOTZoAwIO77RblWplfIqk=
github.com/multiformats/go-multibase v0.0.3/go.mod h1:5+1R4eQrT3PkYZ24C3W2Ue2tPwIdYQD509ZjSb5y9Oc=
github.com/multiformats/go-multihash v0.0.1/go.mod h1:w/5tugSrLEbWqlcgJabL3oHFKTwfvkofsjW2Qa1ct4U=
github.com/multiformats/go-multihash v0.0.5/go.mod h1:lt/HCbqlQwlPBz7lv0sQCdtfcMtlJvakRUn/0Ual8po=
github.com/multiformats/go-multihash v0.0.8/go.mod h1:YSLudS+Pi8NHE7o6tb3D8vrpKa63epEDmG8nTduyAew=
github.com/multiformats/go-multihash v0.0.10/go.mod h1:YSLudS+Pi8NHE7o6tb3D8vrpKa63epEDmG8nTduyAew=
github.com/multiformats/go-multihash v0.0.13 h1:06x+mk/zj1FoMsgNejLpy6QTvJqlSt/BhLEy87zidlc=
github.com/multiformats/go-multihash v0.0.13/go.mod h1:VdAWLKTwram9oKAatUcLxBNUjdtcVwxObEQBtRfuyjc=
github.com/multiformats/go-multistream v0.1.0/go.mod h1:fJTiDfXJVmItycydCnNx4+wSzZ5NwG2FEVAI30fiovg=
github.com/multiformats/go-multistream v0.1.1 h1:JlAdpIFhBhGRLxe9W6Om0w++Gd6KMWoFPZL/dEnm9nI=
github.com/multiformats/go-multistream v0.1.1/go.mod h1:KmHZ40hzVxiaiwlj3MEbYgK9JFk2/9UktWZAF54Du38=
github.com/multiformats/go-varint v0.0.1/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/multiformats/go-varint v0.0.2/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/multiformats/go-varint v0.0.5 h1:XVZwSo04Cs3j/jS0uAEPpT3JY6DzMcVLLoWOSnCxOjg=
github.com/multiformats/go-varint v0.0.5/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/neelance/astrewrite v0.0.0-20160511093645-99348263ae86/go.mod h1:kHJEU3ofeGjhHklVoIGuVj85JJwZ6kWPaJwCIxgnFmo=
github.com/neelance/sourcemap v0.0.0-20151028013722-8c68805598ab/go.mod h1:Qr6/a/Q4r9LP1IltGz7tA7iOK1WonHEYhu1HRBA7ZiM=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo v1.12.1 h1:mFwc4LvZ0xpSvDZ3E+k8Yte0hLOMxXUlP+yXtJqkYfQ=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.8.1/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/onsi/gomega v1.9.0 h1:R1uwffexN6Pr340GtYRIdZmAiN4J+iw6WG4wog1DUXg=
github.com/onsi/gomega v1.9.0/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/polydawn/refmt v0.0.0-20190221155625-df39d6c2d992/go.mod h1:uIp+gprXxxrWSjjklXD+mN4wed/tMfjMMmN/9+JsA9o=
github.com/polydawn/refmt v0.0.0-20190408063855-01bf1e26dd14/go.mod h1:uIp+gprXxxrWSjjklXD+mN4wed/tMfjMMmN/9+JsA9o=
github.com/polydawn/refmt v0.0.0-20190807091052-3d65705ee9f1/go.mod h1:uIp+gprXxxrWSjjklXD+mN4wed/tMfjMMmN/9+JsA9o=
github.com/polydawn/refmt v0.0.0-20190809202753-05966cbd336a h1:hjZfReYVLbqFkAtr2us7vdy04YWz3LVAirzP7reh8+M=
github.com/polydawn/refmt v0.0.0-20190809202753-05966cbd336a/go.mod h1:uIp+gprXxxrWSjjklXD+mN4wed/tMfjMMmN/9+JsA9o=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1 h1:NTGy1Ja9pByO+xAeH/qiWnLrKtr3hJPNjaVUwnjpdpA=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3 h1:F0+tqvhOksq22sc6iCHF5WGlWjdwj92p0udFh1VFBS8=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 h1:MkV+77GLUNo5oJ0jf870itWm3D0Sjh7+Za9gazKc5LQ=
github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/component v0.0.0-20170202220835-f88ec8f54cc4/go.mod h1:XhFIlyj5a1fBNx5aJTbKoIq0mNaPvOagO+HjB3EtxrY=
github.com/shurcooL/events v0.0.0-20181021180414-410e4ca65f48/go.mod h1:5u70Mqkb5O5cxEA8nxTsgrgLehJeAw6Oc4Ab1c/P1HM=
github.com/shurcooL/github_flavored_markdown v0.0.0-20181002035957-2122de532470/go.mod h1:2dOwnU2uBioM+SGy2aZoq1f/Sd1l9OkAeAUvjSyvgU0=
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
github.com/shurcooL/gofontwoff v0.0.0-20180329035133-29b52fc0a18d/go.mod h1:05UtEgK5zq39gLST6uB0cf3NEHjETfB4Fgr3Gx5R9Vw=
github.com/shurcooL/gopherjslib v0.0.0-20160914041154-feb6d3990c2c/go.mod h1:8d3azKNyqcHP1GaQE/c6dDgjkgSx2BZ4IoEi4F1reUI=
github.com/shurcooL/highlight_diff v0.0.0-20170515013008-09bb4053de1b/go.mod h1:ZpfEhSmds4ytuByIcDnOLkTHGUI6KNqRNPDLHDk+mUU=
github.com/shurcooL/highlight_go v0.0.0-20181028180052-98c3abbbae20/go.mod h1:UDKB5a1T23gOMUJrI+uSuH0VRDStOiUVSjBTRDVBVag=
github.com/shurcooL/home v0.0.0-20181020052607-80b7ffcb30f9/go.mod h1:+rgNQw2P9ARFAs37qieuu7ohDNQ3gds9msbT2yn85sg=
github.com/shurcooL/htmlg v0.0.0-20170918183704-d01228ac9e50/go.mod h1:zPn1wHpTIePGnXSHpsVPWEktKXHr6+SS6x/IKRb7cpw=
github.com/shurcooL/httperror v0.0.0-20170206035902-86b7830d14cc/go.mod h1:aYMfkZ6DWSJPJ6c4Wwz3QtW22G7mf/PEgaB9k/ik5+Y=
github.com/shurcooL/httpfs v0.0.0-20171119174359-809beceb2371/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/httpgzip v0.0.0-20180522190206-b1c53ac65af9/go.mod h1:919LwcH0M7/W4fcZ0/jy0qGght1GIhqyS/EgWGH2j5Q=
github.com/shurcooL/issues v0.0.0-20181008053335-6292fdc1e191/go.mod h1:e2qWDig5bLteJ4fwvDAc2NHzqFEthkqn7aOZAOpj+PQ=
github.com/shurcooL/issuesapp v0.0.0-20180602232740-048589ce2241/go.mod h1:NPpHK2TI7iSaM0buivtFUc9offApnI0Alt/K8hcHy0I=
github.com/shurcooL/notifications v0.0.0-20181007000457-627ab5aea122/go.mod h1:b5uSkrEVM1jQUspwbixRBhaIjIzL2xazXp6kntxYle0=
github.com/shurcooL/octicon v0.0.0-20181028054416-fa4f57f9efb2/go.mod h1:eWdoE5JD4R5UVWDucdOPg1g2fqQRq78IQa9zlOV1vpQ=
github.com/shurcooL/reactions v0.0.0-20181006231557-f2e0b4ca5b82/go.mod h1:TCR1lToEk4d2s07G3XGfz2QrgHXg4RJBvjrOozvoWfk=
github.com/shurcooL/sanitized_anchor_name v0.0.0-20170918181015-86672fcb3f95/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
|
||||
github.com/shurcooL/users v0.0.0-20180125191416-49c67e49c537/go.mod h1:QJTqeLYEDaXHZDBsXlPCDqdhQuJkuw4NOtaxYe3xii4=
|
||||
github.com/shurcooL/webdavfs v0.0.0-20170829043945-18c3829fa133/go.mod h1:hKmq5kWdCj2z2KEozexVbfEZIWiTjhE0+UjmZgPqehw=
|
||||
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
|
||||
github.com/smartystreets/assertions v1.0.0/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
|
||||
github.com/smartystreets/assertions v1.0.1 h1:voD4ITNjPL5jjBfgR/r8fPIIBrliWrWHeiJApdr3r4w=
|
||||
github.com/smartystreets/assertions v1.0.1/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
|
||||
github.com/smartystreets/goconvey v0.0.0-20190222223459-a17d461953aa/go.mod h1:2RVY1rIf+2J2o/IM9+vPq9RzmHDSseB7FoXiSNIUsoU=
|
||||
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
|
||||
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337 h1:WN9BUFbdyOsSH/XohnWpXOlq9NBD5sGAB2FciQMUEe8=
|
||||
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
|
||||
github.com/smartystreets/goconvey v1.6.4 h1:fv0U8FUIMPNf1L9lnHLvLhgicrIVChEkdzIKYqbNC9s=
|
||||
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
|
||||
github.com/smola/gocompat v0.2.0/go.mod h1:1B0MlxbmoZNo3h8guHp8HztB3BSYR5itql9qtVc0ypY=
|
||||
github.com/sourcegraph/annotate v0.0.0-20160123013949-f4cad6c6324d/go.mod h1:UdhH50NIW0fCiwBSr0co2m7BnFLdv4fQTgdqdJTHFeE=
|
||||
github.com/sourcegraph/syntaxhighlight v0.0.0-20170531221838-bd320f5d308e/go.mod h1:HuIsMU8RRBOtsCgI77wP899iHVBQpCmg4ErYMZB+2IA=
|
||||
github.com/spacemonkeygo/openssl v0.0.0-20181017203307-c2dcc5cca94a/go.mod h1:7AyxJNCJ7SBZ1MfVQCWD6Uqo2oubI2Eq2y2eqf+A5r0=
|
||||
github.com/spacemonkeygo/spacelog v0.0.0-20180420211403-2296661a0572 h1:RC6RW7j+1+HkWaX/Yh71Ee5ZHaHYt7ZP4sQgUrm6cDU=
|
||||
github.com/spacemonkeygo/spacelog v0.0.0-20180420211403-2296661a0572/go.mod h1:w0SWMsp6j9O/dk4/ZpIhL+3CkG8ofA2vuv7k+ltqUMc=
|
||||
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
|
||||
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
|
||||
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
|
||||
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
|
||||
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
|
||||
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
|
||||
github.com/src-d/envconfig v1.0.0/go.mod h1:Q9YQZ7BKITldTBnoxsE5gOeB5y66RyPXeue/R4aaNBc=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
|
||||
github.com/stretchr/testify v1.6.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
|
||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/syndtr/goleveldb v1.0.0/go.mod h1:ZVVdQEZoIme9iO1Ch2Jdy24qqXrMMOU6lpPAyBWyWuQ=
|
||||
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
|
||||
github.com/testground/sdk-go v0.2.6-0.20201016180515-1e40e1b0ec3a h1:iQDLQpTGtdfatdQtGqQBuoXFrl2AQ0n3Q8mNKkqbmnw=
|
||||
github.com/testground/sdk-go v0.2.6-0.20201016180515-1e40e1b0ec3a/go.mod h1:Q4dnWsUBH+dZ1u7aEGDBHWGUaLfhitjUq3UJQqxeTmk=
|
||||
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
|
||||
github.com/viant/assertly v0.4.8/go.mod h1:aGifi++jvCrUaklKEKT0BU95igDNaqkvz+49uaYMPRU=
|
||||
github.com/viant/toolbox v0.24.0/go.mod h1:OxMCG57V0PXuIP2HNQrtJf2CjqdmbrOx5EkMILuUhzM=
|
||||
github.com/warpfork/go-wish v0.0.0-20180510122957-5ad1f5abf436/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
|
||||
github.com/warpfork/go-wish v0.0.0-20190328234359-8b3e70f8e830/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
|
||||
github.com/warpfork/go-wish v0.0.0-20200122115046-b9ea61034e4a h1:G++j5e0OC488te356JvdhaM8YS6nMsjLAYF7JxCv07w=
|
||||
github.com/warpfork/go-wish v0.0.0-20200122115046-b9ea61034e4a/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
|
||||
github.com/whyrusleeping/cbor-gen v0.0.0-20200123233031-1cdf64d27158/go.mod h1:Xj/M2wWU+QdTdRbu/L/1dIZY8/Wb2K9pAhtroQuxJJI=
|
||||
github.com/whyrusleeping/cbor-gen v0.0.0-20200402171437-3d27c146c105 h1:Sh6UG5dW5xW8Ek2CtRGq4ipdEvvx9hOyBJjEGyTYDl0=
|
||||
github.com/whyrusleeping/cbor-gen v0.0.0-20200402171437-3d27c146c105/go.mod h1:Xj/M2wWU+QdTdRbu/L/1dIZY8/Wb2K9pAhtroQuxJJI=
|
||||
github.com/whyrusleeping/chunker v0.0.0-20181014151217-fe64bd25879f h1:jQa4QT2UP9WYv2nzyawpKMOCl+Z/jW7djv2/J50lj9E=
|
||||
github.com/whyrusleeping/chunker v0.0.0-20181014151217-fe64bd25879f/go.mod h1:p9UJB6dDgdPgMJZs7UjUOdulKyRr9fqkS+6JKAInPy8=
|
||||
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1/go.mod h1:8UvriyWtv5Q5EOgjHaSseUEdkQfvwFv1I/In/O2M9gc=
|
||||
github.com/whyrusleeping/go-logging v0.0.0-20170515211332-0457bb6b88fc/go.mod h1:bopw91TMyo8J3tvftk8xmU2kPmlrt4nScJQZU2hE5EM=
|
||||
github.com/whyrusleeping/go-logging v0.0.1/go.mod h1:lDPYj54zutzG1XYfHAhcc7oNXEburHQBn+Iqd4yS4vE=
|
||||
github.com/whyrusleeping/go-notifier v0.0.0-20170827234753-097c5d47330f/go.mod h1:cZNvX9cFybI01GriPRMXDtczuvUhgbcYr9iCGaNlRv8=
|
||||
github.com/whyrusleeping/mafmt v1.2.8/go.mod h1:faQJFPbLSxzD9xpA02ttW/tS9vZykNvXwGvqIpk20FA=
|
||||
github.com/whyrusleeping/mdns v0.0.0-20180901202407-ef14215e6b30/go.mod h1:j4l84WPFclQPj320J9gp0XwNKBb3U0zt5CBqjPp22G4=
|
||||
github.com/whyrusleeping/mdns v0.0.0-20190826153040-b9b60ed33aa9/go.mod h1:j4l84WPFclQPj320J9gp0XwNKBb3U0zt5CBqjPp22G4=
|
||||
github.com/whyrusleeping/multiaddr-filter v0.0.0-20160516205228-e903e4adabd7 h1:E9S12nwJwEOXe2d6gT6qxdvqMnNq+VnSsKPgm2ZZNds=
|
||||
github.com/whyrusleeping/multiaddr-filter v0.0.0-20160516205228-e903e4adabd7/go.mod h1:X2c0RVCI1eSUFI8eLcY3c0423ykwiUdxLJtkDvruhjI=
|
||||
github.com/x-cray/logrus-prefixed-formatter v0.5.2/go.mod h1:2duySbKsL6M18s5GU7VPsoEPHyzalCE06qoARUCeBBE=
|
||||
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
|
||||
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
|
||||
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
|
||||
go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA=
|
||||
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
||||
go.opencensus.io v0.22.3 h1:8sGtKOrtQqkN1bp2AtX+misvLIlOmsEsNd+9NIcPEm8=
|
||||
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
||||
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
||||
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
|
||||
go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk=
|
||||
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
|
||||
go.uber.org/goleak v1.0.0 h1:qsup4IcBdlmsnGfqyLl4Ntn3C2XCCuKAE7DwHpScyUo=
|
||||
go.uber.org/goleak v1.0.0/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
|
||||
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
|
||||
go.uber.org/multierr v1.4.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
|
||||
go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A=
|
||||
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4=
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go.uber.org/zap v1.14.1/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
|
||||
go.uber.org/zap v1.15.0 h1:ZZCA22JRF2gQE5FoNmhmrf7jeJJ2uhqDUNRYKm8dvmM=
|
||||
go.uber.org/zap v1.15.0/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
|
||||
go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
|
||||
golang.org/x/build v0.0.0-20190111050920-041ab4dc3f9d/go.mod h1:OWs+y06UdEOHN4y+MfF/py+xQ/tYqIWW03b70/CG9Rw=
|
||||
golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181030102418-4d3f4d9ffa16/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190225124518-7f87c0fbb88b/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190513172903-22d7a77e9e5f/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190618222545-ea8f1a30c443/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200117160349-530e935923ad/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200221231518-2aa609cf4a9d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200423211502-4bdfaf469ed5 h1:Q7tZBpemrlsc2I7IyODzhtallWRSm4Q0d09pL6XbQtU=
|
||||
golang.org/x/crypto v0.0.0-20200423211502-4bdfaf469ed5/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
|
||||
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
|
||||
golang.org/x/mod v0.3.0 h1:RM4zey1++hCTbCVQfnWeKs9/IEsaBLA8vTkd0WVtmH4=
|
||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181029044818-c44066c5c816/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190227160552-c95aed5357e7/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190228165749-92fc7df08ae7/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190313220215-9f648a60d977/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190611141213-3f473d35a33a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200625001655-4c5254603344 h1:vGXIOMxbNfDTk/aXCmfdLgkrSV+Z2tcbze+pEc3v5W4=
|
||||
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208 h1:qwRHBd0NqMbJxfbotnDhm2ByMI1Shq4Y6oRJo21SGJA=
|
||||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181029174526-d69651ed3497/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190219092855-153ac476189d/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190228124157-a34e9553db1e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190302025703-b6889370fb10/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190316082340-a2f829d7f35f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190405154228-4b34438f7a67/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190524122548-abf6ff778158/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190526052359-791d8a0f4d09/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190610200419-93c9922d18ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae h1:Ih9Yo4hSPImZOpfGuA4bR/ORKTAbhZo2AbWNRCnevdo=
|
||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181030000716-a0a13e073c7b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181130052023-1c3d964395ce/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191216052735-49a3e744a425 h1:VvQyQJN0tSuecqgcIxMWnnfG5kSmgy9KZR9sW3W5QeA=
|
||||
golang.org/x/tools v0.0.0-20191216052735-49a3e744a425/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200827010519-17fd2f27a9e3 h1:r3P/5xOq/dK1991B65Oy6E1fRF/2d/fSYZJ/fXGVfJc=
|
||||
golang.org/x/tools v0.0.0-20200827010519-17fd2f27a9e3/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
google.golang.org/api v0.1.0/go.mod h1:UGEZY7KEX120AnNLIHFMKIo4obdJhkp2tPbaPlQx13Y=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20181029155118-b69ba1387ce2/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20181202183823-bd91e49a0898/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
|
||||
google.golang.org/genproto v0.0.0-20190306203927-b5d61aea6440/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
||||
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
|
||||
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
||||
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
|
||||
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
||||
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
|
||||
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
|
||||
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
|
||||
google.golang.org/protobuf v1.25.0 h1:Ejskq+SyPohKW+1uil0JJMtmHCgJPJ/qWTxr8qp+R4c=
|
||||
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
|
||||
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||
gopkg.in/src-d/go-cli.v0 v0.0.0-20181105080154-d492247bbc0d/go.mod h1:z+K8VcOYVYcSwSjGebuDL6176A1XskgbtNl64NSg+n8=
|
||||
gopkg.in/src-d/go-log.v1 v1.0.1/go.mod h1:GN34hKP0g305ysm2/hctJ0Y8nWP3zxXXJ8GFabTyABE=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c=
|
||||
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
|
||||
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o=
|
||||
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
|
||||
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
|
||||
sourcegraph.com/sourcegraph/go-diff v0.5.0/go.mod h1:kuch7UrkMzY0X+p9CRK03kfuPQ2zzQcaEFbx8wA8rck=
|
||||
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=
|
383
testplans/graphsync/main.go
Normal file
383
testplans/graphsync/main.go
Normal file
@ -0,0 +1,383 @@
package main

import (
	"context"
	"crypto/rand"
	"fmt"
	"io"
	goruntime "runtime"
	"time"

	allselector "github.com/hannahhoward/all-selector"
	"github.com/ipfs/go-blockservice"
	"github.com/ipfs/go-cid"
	ds "github.com/ipfs/go-datastore"
	dss "github.com/ipfs/go-datastore/sync"
	"github.com/ipfs/go-graphsync/storeutil"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
	chunk "github.com/ipfs/go-ipfs-chunker"
	offline "github.com/ipfs/go-ipfs-exchange-offline"
	files "github.com/ipfs/go-ipfs-files"
	format "github.com/ipfs/go-ipld-format"
	"github.com/ipfs/go-merkledag"
	"github.com/ipfs/go-unixfs/importer/balanced"
	ihelper "github.com/ipfs/go-unixfs/importer/helpers"
	cidlink "github.com/ipld/go-ipld-prime/linking/cid"
	"github.com/libp2p/go-libp2p-core/metrics"
	"github.com/testground/sdk-go/network"
	"golang.org/x/sync/errgroup"

	gs "github.com/ipfs/go-graphsync"
	gsi "github.com/ipfs/go-graphsync/impl"
	gsnet "github.com/ipfs/go-graphsync/network"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/peer"
	noise "github.com/libp2p/go-libp2p-noise"
	secio "github.com/libp2p/go-libp2p-secio"
	tls "github.com/libp2p/go-libp2p-tls"

	"github.com/testground/sdk-go/run"
	"github.com/testground/sdk-go/runtime"
	"github.com/testground/sdk-go/sync"
)

var testcases = map[string]interface{}{
	"stress": run.InitializedTestCaseFn(runStress),
}

func main() {
	run.InvokeMap(testcases)
}

type networkParams struct {
	latency   time.Duration
	bandwidth uint64
}

func (p networkParams) String() string {
	return fmt.Sprintf("<lat: %s, bandwidth: %d>", p.latency, p.bandwidth)
}

func runStress(runenv *runtime.RunEnv, initCtx *run.InitContext) error {
	var (
		size        = runenv.SizeParam("size")
		concurrency = runenv.IntParam("concurrency")

		networkParams = parseNetworkConfig(runenv)
	)
	runenv.RecordMessage("started test instance")
	runenv.RecordMessage("network params: %v", networkParams)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Minute)
	defer cancel()

	initCtx.MustWaitAllInstancesInitialized(ctx)

	host, peers, _ := makeHost(ctx, runenv, initCtx)
	defer host.Close()

	var (
		// make datastore, blockstore, dag service, graphsync
		bs     = blockstore.NewBlockstore(dss.MutexWrap(ds.NewMapDatastore()))
		dagsrv = merkledag.NewDAGService(blockservice.New(bs, offline.Exchange(bs)))
		gsync  = gsi.New(ctx,
			gsnet.NewFromLibp2pHost(host),
			storeutil.LoaderForBlockstore(bs),
			storeutil.StorerForBlockstore(bs),
		)
	)

	defer initCtx.SyncClient.MustSignalAndWait(ctx, "done", runenv.TestInstanceCount)

	switch runenv.TestGroupID {
	case "providers":
		if runenv.TestGroupInstanceCount > 1 {
			panic("test case only supports one provider")
		}

		runenv.RecordMessage("we are the provider")
		defer runenv.RecordMessage("done provider")

		gsync.RegisterIncomingRequestHook(func(p peer.ID, request gs.RequestData, hookActions gs.IncomingRequestHookActions) {
			hookActions.ValidateRequest()
		})

		return runProvider(ctx, runenv, initCtx, dagsrv, size, networkParams, concurrency)

	case "requestors":
		runenv.RecordMessage("we are the requestor")
		defer runenv.RecordMessage("done requestor")

		p := *peers[0]
		if err := host.Connect(ctx, p); err != nil {
			return err
		}
		runenv.RecordMessage("done dialling provider")
		return runRequestor(ctx, runenv, initCtx, gsync, p, dagsrv, networkParams, concurrency, size)

	default:
		panic("unsupported group ID")
	}
}

func parseNetworkConfig(runenv *runtime.RunEnv) []networkParams {
	var (
		bandwidths = runenv.SizeArrayParam("bandwidths")
		latencies  []time.Duration
	)

	lats := runenv.StringArrayParam("latencies")
	for _, l := range lats {
		d, err := time.ParseDuration(l)
		if err != nil {
			panic(err)
		}
		latencies = append(latencies, d)
	}

	// prepend bandwidth=0 and latency=0 zero values; the first iteration will
	// be a control iteration. The sidecar interprets zero values as no
	// limitation on that attribute.
	bandwidths = append([]uint64{0}, bandwidths...)
	latencies = append([]time.Duration{0}, latencies...)

	var ret []networkParams
	for _, bandwidth := range bandwidths {
		for _, latency := range latencies {
			ret = append(ret, networkParams{
				latency:   latency,
				bandwidth: bandwidth,
			})
		}
	}
	return ret
}

func runRequestor(ctx context.Context, runenv *runtime.RunEnv, initCtx *run.InitContext, gsync gs.GraphExchange, p peer.AddrInfo, dagsrv format.DAGService, networkParams []networkParams, concurrency int, size uint64) error {
	var (
		cids []cid.Cid
		// create a selector for the whole UnixFS dag
		sel = allselector.AllSelector
	)

	for round, np := range networkParams {
		var (
			topicCid  = sync.NewTopic(fmt.Sprintf("cid-%d", round), []cid.Cid{})
			stateNext = sync.State(fmt.Sprintf("next-%d", round))
			stateNet  = sync.State(fmt.Sprintf("network-configured-%d", round))
		)

		// wait for all instances to be ready for the next state.
		initCtx.SyncClient.MustSignalAndWait(ctx, stateNext, runenv.TestInstanceCount)

		// clean up previous CIDs to attempt to free memory
		// TODO does this work?
		_ = dagsrv.RemoveMany(ctx, cids)

		runenv.RecordMessage("===== ROUND %d: latency=%s, bandwidth=%d =====", round, np.latency, np.bandwidth)

		sctx, scancel := context.WithCancel(ctx)
		cidCh := make(chan []cid.Cid, 1)
		initCtx.SyncClient.MustSubscribe(sctx, topicCid, cidCh)
		cids = <-cidCh
		scancel()

		// run GC to get accurate-ish stats.
		goruntime.GC()
		goruntime.GC()

		<-initCtx.SyncClient.MustBarrier(ctx, stateNet, 1).C

		errgrp, grpctx := errgroup.WithContext(ctx)
		for _, c := range cids {
			c := c   // capture
			np := np // capture

			errgrp.Go(func() error {
				// make a go-ipld-prime link for the root UnixFS node
				clink := cidlink.Link{Cid: c}

				// execute the traversal.
				runenv.RecordMessage("\t>>> requesting CID %s", c)

				start := time.Now()
				_, errCh := gsync.Request(grpctx, p.ID, clink, sel)
				for err := range errCh {
					return err
				}
				dur := time.Since(start)

				runenv.RecordMessage("\t<<< request complete with no errors")
				runenv.RecordMessage("***** ROUND %d observed duration (lat=%s,bw=%d): %s", round, np.latency, np.bandwidth, dur)
				runenv.R().RecordPoint(fmt.Sprintf("duration,lat=%s,bw=%d,concurrency=%d,size=%d", np.latency, np.bandwidth, concurrency, size), float64(dur))

				// verify that we have the CID now.
				if node, err := dagsrv.Get(grpctx, c); err != nil {
					return err
				} else if node == nil {
					return fmt.Errorf("finished graphsync request, but CID not in store")
				}

				return nil
			})
		}

		if err := errgrp.Wait(); err != nil {
			return err
		}
	}

	return nil
}

func runProvider(ctx context.Context, runenv *runtime.RunEnv, initCtx *run.InitContext, dagsrv format.DAGService, size uint64, networkParams []networkParams, concurrency int) error {
	var (
		cids       []cid.Cid
		bufferedDS = format.NewBufferedDAG(ctx, dagsrv)
	)

	for round, np := range networkParams {
		var (
			topicCid  = sync.NewTopic(fmt.Sprintf("cid-%d", round), []cid.Cid{})
			stateNext = sync.State(fmt.Sprintf("next-%d", round))
			stateNet  = sync.State(fmt.Sprintf("network-configured-%d", round))
		)

		// wait for all instances to be ready for the next state.
		initCtx.SyncClient.MustSignalAndWait(ctx, stateNext, runenv.TestInstanceCount)

		// remove the previous CIDs from the dag service; hopefully this
		// will delete them from the store and free up memory.
		for _, c := range cids {
			_ = dagsrv.Remove(ctx, c)
		}
		cids = cids[:0]

		runenv.RecordMessage("===== ROUND %d: latency=%s, bandwidth=%d =====", round, np.latency, np.bandwidth)

		// generate as many random files as the concurrency level.
		for i := 0; i < concurrency; i++ {
			// file with random data
			file := files.NewReaderFile(io.LimitReader(rand.Reader, int64(size)))

			const unixfsChunkSize uint64 = 1 << 20
			const unixfsLinksPerLevel = 1024

			params := ihelper.DagBuilderParams{
				Maxlinks:   unixfsLinksPerLevel,
				RawLeaves:  true,
				CidBuilder: nil,
				Dagserv:    bufferedDS,
			}

			db, err := params.New(chunk.NewSizeSplitter(file, int64(unixfsChunkSize)))
			if err != nil {
				return fmt.Errorf("unable to setup dag builder: %w", err)
			}

			node, err := balanced.Layout(db)
			if err != nil {
				return fmt.Errorf("unable to create unix fs node: %w", err)
			}

			cids = append(cids, node.Cid())
		}

		if err := bufferedDS.Commit(); err != nil {
			return fmt.Errorf("unable to commit unix fs node: %w", err)
		}

		// run GC to get accurate-ish stats.
		goruntime.GC()
		goruntime.GC()

		runenv.RecordMessage("\tCIDs are: %v", cids)
		initCtx.SyncClient.MustPublish(ctx, topicCid, cids)

		runenv.RecordMessage("\tconfiguring network for round %d", round)
		initCtx.NetClient.MustConfigureNetwork(ctx, &network.Config{
			Network: "default",
			Enable:  true,
			Default: network.LinkShape{
				Latency:   np.latency,
				Bandwidth: np.bandwidth * 8, // bps
			},
			CallbackState:  stateNet,
			CallbackTarget: 1,
		})
		runenv.RecordMessage("\tnetwork configured for round %d", round)
	}

	return nil
}

func makeHost(ctx context.Context, runenv *runtime.RunEnv, initCtx *run.InitContext) (host.Host, []*peer.AddrInfo, *metrics.BandwidthCounter) {
	secureChannel := runenv.StringParam("secure_channel")

	var security libp2p.Option
	switch secureChannel {
	case "noise":
		security = libp2p.Security(noise.ID, noise.New)
	case "secio":
		security = libp2p.Security(secio.ID, secio.New)
	case "tls":
		security = libp2p.Security(tls.ID, tls.New)
	}

	// ☎️  Let's construct the libp2p node.
	ip := initCtx.NetClient.MustGetDataNetworkIP()
	listenAddr := fmt.Sprintf("/ip4/%s/tcp/0", ip)
	bwcounter := metrics.NewBandwidthCounter()
	host, err := libp2p.New(ctx,
		security,
		libp2p.ListenAddrStrings(listenAddr),
		libp2p.BandwidthReporter(bwcounter),
	)
	if err != nil {
		panic(fmt.Sprintf("failed to instantiate libp2p instance: %s", err))
	}

	// Record our listen addrs.
	runenv.RecordMessage("my listen addrs: %v", host.Addrs())

	// Obtain our own address info, and use the sync service to publish it to a
	// 'peersTopic' topic, where others will read from.
	var (
		id = host.ID()
		ai = &peer.AddrInfo{ID: id, Addrs: host.Addrs()}

		// the peers topic where all instances will advertise their AddrInfo.
		peersTopic = sync.NewTopic("peers", new(peer.AddrInfo))

		// initialize a slice to store the AddrInfos of all other peers in the run.
		peers = make([]*peer.AddrInfo, 0, runenv.TestInstanceCount-1)
	)

	// Publish our own.
	initCtx.SyncClient.MustPublish(ctx, peersTopic, ai)

	// Now subscribe to the peers topic and consume all addresses, storing them
	// in the peers slice.
	peersCh := make(chan *peer.AddrInfo)
	sctx, scancel := context.WithCancel(ctx)
	defer scancel()

	sub := initCtx.SyncClient.MustSubscribe(sctx, peersTopic, peersCh)

	// Receive the expected number of AddrInfos.
	for len(peers) < cap(peers) {
		select {
		case ai := <-peersCh:
			if ai.ID == id {
				continue // skip over ourselves.
			}
			peers = append(peers, ai)
		case err := <-sub.Done():
			panic(err)
		}
	}

	return host, peers, bwcounter
}
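Editor's note: a minimal, illustrative sketch (not part of main.go) of how the rounds expand. parseNetworkConfig crosses every bandwidth with every latency after prepending zero "control" values in both dimensions, so the manifest defaults below (three latencies, three bandwidths, values assumed from manifest.toml) yield 16 rounds, the first one fully unshaped.

// Illustrative only: reproduces the round expansion done by parseNetworkConfig
// for the assumed manifest defaults (100ms/200ms/300ms, 10M/1M/512kb).
package main

import (
	"fmt"
	"time"
)

func main() {
	// index 0 is the unshaped control value in both dimensions.
	bandwidths := []uint64{0, 10_000_000, 1_000_000, 512_000}
	latencies := []time.Duration{0, 100 * time.Millisecond, 200 * time.Millisecond, 300 * time.Millisecond}

	round := 0
	for _, bw := range bandwidths {
		for _, lat := range latencies {
			fmt.Printf("round %d: latency=%s bandwidth=%d\n", round, lat, bw)
			round++
		}
	}
	// 4 x 4 = 16 rounds; round 0 runs with no traffic shaping at all.
}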
24
testplans/graphsync/manifest.toml
Normal file
24
testplans/graphsync/manifest.toml
Normal file
@ -0,0 +1,24 @@
name = "graphsync"

[builders]
"docker:go" = { enabled = true, enable_go_build_cache = true }
"exec:go" = { enabled = true }

[runners]
"local:docker" = { enabled = true }
"local:exec" = { enabled = true }
"cluster:k8s" = { enabled = true }

[global.build_config]
enable_go_build_cache = true

[[testcases]]
name = "stress"
instances = { min = 2, max = 10000, default = 2 }

  [testcases.params]
  size = { type = "int", desc = "size of file to transfer, in human-friendly form", default = "1MiB" }
  secure_channel = { type = "enum", desc = "secure channel used", values = ["secio", "noise", "tls"], default = "noise" }
  latencies = { type = "string", desc = "latencies to try with; comma-separated list of durations", default = '["100ms", "200ms", "300ms"]' }
  bandwidths = { type = "string", desc = "bandwidths (egress bytes/s) to try with; comma-separated list of humanized sizes", default = '["10M", "1M", "512kb"]' }
  concurrency = { type = "int", desc = "concurrency level", default = "1" }
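Editor's note: a short sketch of how the parameters declared above surface inside the plan. The accessor calls mirror the ones already used in main.go; the wrapper function itself is illustrative, not part of the plan.

// Sketch: consuming the manifest parameters via the testground SDK accessors.
package main

import "github.com/testground/sdk-go/runtime"

func readParams(runenv *runtime.RunEnv) {
	size := runenv.SizeParam("size")                       // "1MiB" -> bytes
	concurrency := runenv.IntParam("concurrency")          // "1" -> 1
	secureChannel := runenv.StringParam("secure_channel")  // "noise" | "secio" | "tls"
	latencies := runenv.StringArrayParam("latencies")      // durations, parsed with time.ParseDuration
	bandwidths := runenv.SizeArrayParam("bandwidths")      // humanized sizes -> bytes/s

	runenv.RecordMessage("size=%d concurrency=%d channel=%s lats=%v bws=%v",
		size, concurrency, secureChannel, latencies, bandwidths)
}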
1
testplans/lotus-soup/.gitignore
vendored
Normal file
1
testplans/lotus-soup/.gitignore
vendored
Normal file
@ -0,0 +1 @@
lotus-soup
55
testplans/lotus-soup/_compositions/baseline-docker-5-1.toml
Normal file
55
testplans/lotus-soup/_compositions/baseline-docker-5-1.toml
Normal file
@ -0,0 +1,55 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-e2e"
  total_instances = 7
  builder = "docker:go"
  runner = "local:docker"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  enable_go_build_cache = true

[global.run.test_params]
  clients = "5"
  miners = "1"
  genesis_timestamp_offset = "0"
  balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  sectors = "5"
  random_beacon_type = "mock"
  mining_mode = "natural"

[[groups]]
  id = "bootstrapper"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"

[[groups]]
  id = "clients"
  [groups.instances]
    count = 5
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
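Editor's note: the inline remark about balances can be made concrete with a one-line check. The 2B figure is the TotalFilecoin supply mentioned in the comment; the per-node allocation and node cap are taken from the composition above.

// Sketch: why "maximum 100 nodes" holds for a 20M per-node genesis balance.
package main

import "fmt"

func main() {
	const balancePerNode = 20_000_000    // FIL allocated per node at genesis
	const maxNodes = 100                 // node cap quoted in the comment
	const totalFilecoin = 2_000_000_000  // 2B total supply

	fmt.Println(balancePerNode*maxNodes == totalFilecoin) // true
}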
80
testplans/lotus-soup/_compositions/baseline-k8s-10-3.toml
Normal file
80
testplans/lotus-soup/_compositions/baseline-k8s-10-3.toml
Normal file
@ -0,0 +1,80 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-e2e"
  total_instances = 14
  builder = "docker:go"
  runner = "cluster:k8s"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  push_registry=true
  go_proxy_mode="remote"
  go_proxy_url="http://localhost:8081"
  registry_type="aws"

[global.run.test_params]
  clients = "10"
  miners = "3"
  genesis_timestamp_offset = "0"
  balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  random_beacon_type = "mock"
  mining_mode = "natural"

[[groups]]
  id = "bootstrapper"
  [groups.resources]
    memory = "512Mi"
    cpu = "1000m"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners-weak"
  [groups.resources]
    memory = "8192Mi"
    cpu = "1000m"
  [groups.instances]
    count = 2
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"
      sectors = "8"

[[groups]]
  id = "miners-strong"
  [groups.resources]
    memory = "8192Mi"
    cpu = "1000m"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"
      sectors = "24"

[[groups]]
  id = "clients"
  [groups.resources]
    memory = "1024Mi"
    cpu = "1000m"
  [groups.instances]
    count = 10
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
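Editor's note: a rough cluster-sizing sketch for the composition above, simply summing the per-group resource requests (numbers copied from the groups; this is an aid for picking node sizes, not part of the plan).

// Sketch: total Kubernetes requests implied by baseline-k8s-10-3.toml.
package main

import "fmt"

func main() {
	type group struct {
		count    int
		memoryMi int
		cpuMilli int
	}
	groups := []group{
		{1, 512, 1000},   // bootstrapper
		{2, 8192, 1000},  // miners-weak
		{1, 8192, 1000},  // miners-strong
		{10, 1024, 1000}, // clients
	}
	var memMi, cpuM int
	for _, g := range groups {
		memMi += g.count * g.memoryMi
		cpuM += g.count * g.cpuMilli
	}
	fmt.Printf("total requests: %d Mi memory (~%.1f Gi), %d millicores\n",
		memMi, float64(memMi)/1024, cpuM)
	// => 35328 Mi (~34.5 Gi) and 14000 millicores across the 14 instances.
}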
67
testplans/lotus-soup/_compositions/baseline-k8s-3-1.toml
Normal file
67
testplans/lotus-soup/_compositions/baseline-k8s-3-1.toml
Normal file
@ -0,0 +1,67 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-e2e"
  total_instances = 5
  builder = "docker:go"
  runner = "cluster:k8s"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  push_registry=true
  go_proxy_mode="remote"
  go_proxy_url="http://localhost:8081"
  registry_type="aws"

[global.run.test_params]
  clients = "3"
  miners = "1"
  genesis_timestamp_offset = "0"
  balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  sectors = "10"
  random_beacon_type = "mock"
  mining_mode = "natural"

[[groups]]
  id = "bootstrapper"
  [groups.resources]
    memory = "512Mi"
    cpu = "1000m"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners"
  [groups.resources]
    memory = "4096Mi"
    cpu = "1000m"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"

[[groups]]
  id = "clients"
  [groups.resources]
    memory = "1024Mi"
    cpu = "1000m"
  [groups.instances]
    count = 3
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
67
testplans/lotus-soup/_compositions/baseline-k8s-3-2.toml
Normal file
67
testplans/lotus-soup/_compositions/baseline-k8s-3-2.toml
Normal file
@ -0,0 +1,67 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-e2e"
  total_instances = 6
  builder = "docker:go"
  runner = "cluster:k8s"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  push_registry=true
  go_proxy_mode="remote"
  go_proxy_url="http://localhost:8081"
  registry_type="aws"

[global.run.test_params]
  clients = "3"
  miners = "2"
  genesis_timestamp_offset = "0"
  balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  sectors = "10"
  random_beacon_type = "mock"
  mining_mode = "natural"

[[groups]]
  id = "bootstrapper"
  [groups.resources]
    memory = "512Mi"
    cpu = "1000m"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners"
  [groups.resources]
    memory = "4096Mi"
    cpu = "1000m"
  [groups.instances]
    count = 2
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"

[[groups]]
  id = "clients"
  [groups.resources]
    memory = "1024Mi"
    cpu = "1000m"
  [groups.instances]
    count = 3
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
55
testplans/lotus-soup/_compositions/baseline.toml
Normal file
55
testplans/lotus-soup/_compositions/baseline.toml
Normal file
@ -0,0 +1,55 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-e2e"
  total_instances = 6
  builder = "exec:go"
  runner = "local:exec"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  enable_go_build_cache = true

[global.run.test_params]
  clients = "3"
  miners = "2"
  genesis_timestamp_offset = "0"
  balance = "20000000.5" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  sectors = "10"
  random_beacon_type = "mock"
  mining_mode = "natural"

[[groups]]
  id = "bootstrapper"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners"
  [groups.instances]
    count = 2
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"

[[groups]]
  id = "clients"
  [groups.instances]
    count = 3
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
@ -0,0 +1,69 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-stress"
  total_instances = 6
  builder = "docker:go"
  runner = "cluster:k8s"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  push_registry=true
  go_proxy_mode="remote"
  go_proxy_url="http://localhost:8081"
  registry_type="aws"

[global.run.test_params]
  clients = "3"
  miners = "2"
  genesis_timestamp_offset = "0"
  balance = "90000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  sectors = "10"
  random_beacon_type = "mock"
  mining_mode = "natural"

[[groups]]
  id = "bootstrapper"
  [groups.resources]
    memory = "512Mi"
    cpu = "100m"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners"
  [groups.resources]
    memory = "14000Mi"
    cpu = "1000m"
  [groups.instances]
    count = 2
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"

[[groups]]
  id = "clients"
  [groups.resources]
    memory = "2048Mi"
    cpu = "100m"
  [groups.instances]
    count = 3
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
      deals = "3"
      deal_mode = "concurrent"
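Editor's note: a tiny sketch of the deal load implied by the stress compositions. The figures come from the clients/deals parameters in this file and in the local:docker variants that follow; it is only an aid for reading the configs.

// Sketch: total concurrent deals per composition (clients x deals per client).
package main

import "fmt"

func main() {
	fmt.Println("k8s deals-stress:", 3*3, "concurrent deals across 2 miners")            // 9
	fmt.Println("local:docker deals-stress:", 3*300, "concurrent deals across 2 miners") // 900
}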
@ -0,0 +1,57 @@
[metadata]
  name = "lotus-soup"
  author = ""

[global]
  plan = "lotus-soup"
  case = "deals-stress"
  total_instances = 6
  builder = "docker:go"
  runner = "local:docker"

[global.build]
  selectors = ["testground"]

[global.run_config]
  exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }

[global.build_config]
  enable_go_build_cache = true

[global.run.test_params]
  clients = "3"
  miners = "2"
  genesis_timestamp_offset = "0"
  balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
  sectors = "1000"
  random_beacon_type = "mock"

[[groups]]
  id = "bootstrapper"
  [groups.instances]
    count = 1
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "bootstrapper"

[[groups]]
  id = "miners"
  [groups.instances]
    count = 2
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "miner"
      mining_mode = "natural"

[[groups]]
  id = "clients"
  [groups.instances]
    count = 3
    percentage = 0.0
  [groups.run]
    [groups.run.test_params]
      role = "client"
      deals = "300"
      deal_mode = "concurrent"
@ -0,0 +1,56 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-stress"
|
||||
total_instances = 6
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "100000"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "1000"
|
||||
random_beacon_type = "mock"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
deals = "300"
|
||||
deal_mode = "concurrent"
|
@ -0,0 +1,57 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-stress"
|
||||
total_instances = 6
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "1000"
|
||||
random_beacon_type = "mock"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
deals = "300"
|
||||
deal_mode = "serial"
|
56
testplans/lotus-soup/_compositions/deals-stress-serial.toml
Normal file
@ -0,0 +1,56 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-stress"
|
||||
total_instances = 6
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "100000"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "1000"
|
||||
random_beacon_type = "mock"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
deals = "300"
|
||||
deal_mode = "serial"
|
79
testplans/lotus-soup/_compositions/drand-halt.toml
Normal file
@ -0,0 +1,79 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "drand-halting"
|
||||
total_instances = 6
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "1"
|
||||
miners = "1"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "10"
|
||||
random_beacon_type = "local-drand"
|
||||
genesis_timestamp_offset = "0"
|
||||
# mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.resources]
|
||||
memory = "120Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.resources]
|
||||
memory = "120Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.resources]
|
||||
memory = "120Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
|
||||
|
||||
[[groups]]
|
||||
id = "drand"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "drand"
|
||||
drand_period = "1s"
|
||||
drand_log_level = "none"
|
||||
suspend_events = "wait 20s -> halt -> wait 1m -> resume -> wait 2s -> halt -> wait 1m -> resume"
|
71
testplans/lotus-soup/_compositions/drand-outage-k8s.toml
Normal file
@ -0,0 +1,71 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "drand-outage"
|
||||
total_instances = 7
|
||||
builder = "docker:go"
|
||||
runner = "cluster:k8s"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
push_registry=true
|
||||
go_proxy_mode="remote"
|
||||
go_proxy_url="http://localhost:8081"
|
||||
registry_type="aws"
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "0"
|
||||
miners = "3"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "100"
|
||||
random_beacon_type = "local-drand"
|
||||
genesis_timestamp_offset = "0"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "drand"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "drand"
|
||||
drand_period = "30s"
|
||||
drand_catchup_period = "10s"
|
||||
drand_log_level = "debug"
|
||||
suspend_events = "wait 5m -> halt -> wait 45m -> resume -> wait 15m -> halt -> wait 5m -> resume"
|
59
testplans/lotus-soup/_compositions/drand-outage-local.toml
Normal file
@ -0,0 +1,59 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "drand-outage"
|
||||
total_instances = 7
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "0"
|
||||
miners = "3"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "10"
|
||||
random_beacon_type = "local-drand"
|
||||
genesis_timestamp_offset = "0"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "drand"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "drand"
|
||||
drand_period = "30s"
|
||||
drand_catchup_period = "10s"
|
||||
drand_log_level = "debug"
|
||||
suspend_events = "wait 3m -> halt -> wait 3m -> resume -> wait 3m -> halt -> wait 3m -> resume"
|
68
testplans/lotus-soup/_compositions/fast-k8s-3-1.toml
Normal file
@ -0,0 +1,68 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-e2e"
|
||||
total_instances = 5
|
||||
builder = "docker:go"
|
||||
runner = "cluster:k8s"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
push_registry=true
|
||||
go_proxy_mode="remote"
|
||||
go_proxy_url="http://localhost:8081"
|
||||
registry_type="aws"
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "1"
|
||||
fast_retrieval = "true"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "10"
|
||||
random_beacon_type = "mock"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.resources]
|
||||
memory = "512Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
72
testplans/lotus-soup/_compositions/local-drand.toml
Normal file
@ -0,0 +1,72 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-e2e"
|
||||
total_instances = 6
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "1"
|
||||
miners = "1"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "10"
|
||||
random_beacon_type = "local-drand"
|
||||
genesis_timestamp_offset = "0"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.resources]
|
||||
memory = "120Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.resources]
|
||||
memory = "120Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.resources]
|
||||
memory = "120Mi"
|
||||
cpu = "10m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
|
||||
[[groups]]
|
||||
id = "drand"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "drand"
|
55
testplans/lotus-soup/_compositions/natural.toml
Normal file
@ -0,0 +1,55 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-e2e"
|
||||
total_instances = 6
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "100000"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "10"
|
||||
random_beacon_type = "mock"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
57
testplans/lotus-soup/_compositions/net-chaos/latency.toml
Normal file
@ -0,0 +1,57 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-e2e"
|
||||
total_instances = 7
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "5"
|
||||
miners = "1"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "5"
|
||||
random_beacon_type = "mock"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
latency_range = '["20ms", "300ms"]'
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.instances]
|
||||
count = 5
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
latency_range = '["100ms", "1500ms"]'
|
62
testplans/lotus-soup/_compositions/paych-stress-k8s.toml
Normal file
@ -0,0 +1,62 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = "raulk"
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "paych-stress"
|
||||
total_instances = 5 # 2 clients + 2 miners + 1 bootstrapper
|
||||
builder = "docker:go"
|
||||
runner = "cluster:k8s"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
push_registry=true
|
||||
go_proxy_mode="remote"
|
||||
go_proxy_url="http://localhost:8081"
|
||||
registry_type="aws"
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "2"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "100" ## be careful, this is in FIL.
|
||||
sectors = "10"
|
||||
random_beacon_type = "mock"
|
||||
mining_mode = "natural"
|
||||
# number of lanes to send vouchers on
|
||||
lane_count = "8"
|
||||
# number of vouchers on each lane
|
||||
vouchers_per_lane = "3"
|
||||
# amount to increase voucher by each time (per lane)
|
||||
increments = "3" ## in FIL
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
instances = { count = 1 }
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
instances = { count = 2 }
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
[groups.resources]
|
||||
memory = "2048Mi"
|
||||
cpu = "100m"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
# the first client will be on the receiving end; all others will be on the sending end.
|
||||
instances = { count = 2 }
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "100m"
|
53
testplans/lotus-soup/_compositions/paych-stress.toml
Normal file
@ -0,0 +1,53 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = "raulk"
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "paych-stress"
|
||||
total_instances = 5 # 2 clients + 2 miners + 1 bootstrapper
|
||||
builder = "exec:go"
|
||||
runner = "local:exec"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "2"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "100" ## be careful, this is in FIL.
|
||||
sectors = "10"
|
||||
random_beacon_type = "mock"
|
||||
mining_mode = "natural"
|
||||
# number of lanes to send vouchers on
|
||||
lane_count = "8"
|
||||
# number of vouchers on each lane
|
||||
vouchers_per_lane = "3"
|
||||
# amount to increase voucher by each time (per lane)
|
||||
increments = "3" ## in FIL
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
instances = { count = 1 }
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
instances = { count = 2 }
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
# the first client will be on the receiving end; all others will be on the sending end.
|
||||
instances = { count = 2 }
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
64
testplans/lotus-soup/_compositions/pubsub-tracer.toml
Normal file
@ -0,0 +1,64 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "deals-e2e"
|
||||
total_instances = 7
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
enable_go_build_cache = true
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "2"
|
||||
genesis_timestamp_offset = "100000"
|
||||
balance = "20000000" # These balances will work for maximum 100 nodes, as TotalFilecoin is 2B
|
||||
sectors = "10"
|
||||
random_beacon_type = "mock"
|
||||
enable_pubsub_tracer = "true"
|
||||
|
||||
[[groups]]
|
||||
id = "pubsub-tracer"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "pubsub-tracer"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
80
testplans/lotus-soup/_compositions/recovery-exec.toml
Normal file
@ -0,0 +1,80 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "recovery-failed-windowed-post"
|
||||
total_instances = 7
|
||||
builder = "exec:go"
|
||||
runner = "local:exec"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
|
||||
[global.build_config]
|
||||
push_registry=true
|
||||
go_proxy_mode="remote"
|
||||
go_proxy_url="http://localhost:8081"
|
||||
registry_type="aws"
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "3"
|
||||
miners = "3"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "20000000"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.resources]
|
||||
memory = "512Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
sectors = "10"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "miners-biserk"
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner-biserk"
|
||||
sectors = "5"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 3
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
95
testplans/lotus-soup/_compositions/recovery-k8s.toml
Normal file
@ -0,0 +1,95 @@
|
||||
[metadata]
|
||||
name = "lotus-soup"
|
||||
author = ""
|
||||
|
||||
[global]
|
||||
plan = "lotus-soup"
|
||||
case = "recovery-failed-windowed-post"
|
||||
total_instances = 9
|
||||
builder = "docker:go"
|
||||
runner = "cluster:k8s"
|
||||
|
||||
[global.build]
|
||||
selectors = ["testground"]
|
||||
|
||||
[global.run_config]
|
||||
exposed_ports = { pprof = "6060", node_rpc = "1234", miner_rpc = "2345" }
|
||||
keep_service=true
|
||||
|
||||
[global.build_config]
|
||||
push_registry=true
|
||||
go_proxy_mode="remote"
|
||||
go_proxy_url="http://localhost:8081"
|
||||
registry_type="aws"
|
||||
|
||||
[global.run.test_params]
|
||||
clients = "4"
|
||||
miners = "4"
|
||||
genesis_timestamp_offset = "0"
|
||||
balance = "20000000"
|
||||
|
||||
[[groups]]
|
||||
id = "bootstrapper"
|
||||
[groups.resources]
|
||||
memory = "512Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "bootstrapper"
|
||||
|
||||
[[groups]]
|
||||
id = "miners"
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 2
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner"
|
||||
sectors = "10"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "miners-full-slash"
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner-full-slash"
|
||||
sectors = "10"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "miners-partial-slash"
|
||||
[groups.resources]
|
||||
memory = "4096Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 1
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "miner-partial-slash"
|
||||
sectors = "10"
|
||||
mining_mode = "natural"
|
||||
|
||||
[[groups]]
|
||||
id = "clients"
|
||||
[groups.resources]
|
||||
memory = "1024Mi"
|
||||
cpu = "1000m"
|
||||
[groups.instances]
|
||||
count = 4
|
||||
percentage = 0.0
|
||||
[groups.run]
|
||||
[groups.run.test_params]
|
||||
role = "client"
|
176
testplans/lotus-soup/deals_e2e.go
Normal file
@ -0,0 +1,176 @@
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"math/rand"
	"os"
	"time"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/lotus/api"
	"github.com/testground/sdk-go/sync"

	mbig "math/big"

	"github.com/filecoin-project/lotus/build"

	"github.com/filecoin-project/oni/lotus-soup/testkit"
)

// This is the baseline test; Filecoin 101.
//
// A network with a bootstrapper, a number of miners, and a number of clients/full nodes
// is constructed and connected through the bootstrapper.
// Some funds are allocated to each node and a number of sectors are presealed in the genesis block.
//
// The test plan:
// One or more clients store content to one or more miners, testing storage deals.
// The plan ensures that the storage deals hit the blockchain and measure the time it took.
// Verification: one or more clients retrieve and verify the hashes of stored content.
// The plan ensures that all (previously) published content can be correctly retrieved
// and measures the time it took.
//
// Preparation of the genesis block: this is the responsibility of the bootstrapper.
// In order to compute the genesis block, we need to collect identities and presealed
// sectors from each node.
// Then we create a genesis block that allocates some funds to each node and collects
// the presealed sectors.
func dealsE2E(t *testkit.TestEnvironment) error {
	// Dispatch/forward non-client roles to defaults.
	if t.Role != "client" {
		return testkit.HandleDefaultRole(t)
	}

	// This is a client role
	fastRetrieval := t.BooleanParam("fast_retrieval")
	t.RecordMessage("running client, with fast retrieval set to: %v", fastRetrieval)

	cl, err := testkit.PrepareClient(t)
	if err != nil {
		return err
	}

	ctx := context.Background()
	client := cl.FullApi

	// select a random miner
	minerAddr := cl.MinerAddrs[rand.Intn(len(cl.MinerAddrs))]
	if err := client.NetConnect(ctx, minerAddr.MinerNetAddrs); err != nil {
		return err
	}
	t.D().Counter(fmt.Sprintf("send-data-to,miner=%s", minerAddr.MinerActorAddr)).Inc(1)

	t.RecordMessage("selected %s as the miner", minerAddr.MinerActorAddr)

	if fastRetrieval {
		err = initPaymentChannel(t, ctx, cl, minerAddr)
		if err != nil {
			return err
		}
	}

	time.Sleep(12 * time.Second)

	// generate 1600 bytes of random data
	data := make([]byte, 1600)
	rand.New(rand.NewSource(time.Now().UnixNano())).Read(data)

	file, err := ioutil.TempFile("/tmp", "data")
	if err != nil {
		return err
	}
	defer os.Remove(file.Name())

	_, err = file.Write(data)
	if err != nil {
		return err
	}

	fcid, err := client.ClientImport(ctx, api.FileRef{Path: file.Name(), IsCAR: false})
	if err != nil {
		return err
	}
	t.RecordMessage("file cid: %s", fcid)

	// start deal
	t1 := time.Now()
	deal := testkit.StartDeal(ctx, minerAddr.MinerActorAddr, client, fcid.Root, fastRetrieval)
	t.RecordMessage("started deal: %s", deal)

	// TODO: this sleep is only necessary because deals don't immediately get logged in the dealstore, we should fix this
	time.Sleep(2 * time.Second)

	t.RecordMessage("waiting for deal to be sealed")
	testkit.WaitDealSealed(t, ctx, client, deal)
	t.D().ResettingHistogram("deal.sealed").Update(int64(time.Since(t1)))

	// wait for all client deals to be sealed before trying to retrieve
	t.SyncClient.MustSignalAndWait(ctx, sync.State("done-sealing"), t.IntParam("clients"))

	carExport := true

	t.RecordMessage("trying to retrieve %s", fcid)
	t1 = time.Now()
	_ = testkit.RetrieveData(t, ctx, client, fcid.Root, nil, carExport, data)
	t.D().ResettingHistogram("deal.retrieved").Update(int64(time.Since(t1)))

	t.SyncClient.MustSignalEntry(ctx, testkit.StateStopMining)

	time.Sleep(10 * time.Second) // wait for metrics to be emitted

	// TODO broadcast published content CIDs to other clients
	// TODO select a random piece of content published by some other client and retrieve it

	t.SyncClient.MustSignalAndWait(ctx, testkit.StateDone, t.TestInstanceCount)
	return nil
}

// filToAttoFil converts a fractional filecoin value into AttoFIL, rounding if necessary
func filToAttoFil(f float64) big.Int {
	a := mbig.NewFloat(f)
	a.Mul(a, mbig.NewFloat(float64(build.FilecoinPrecision)))
	i, _ := a.Int(nil)
	return big.Int{Int: i}
}
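// For example, with build.FilecoinPrecision = 10^18, filToAttoFil(10) yields
// 10 × 10^18 attoFIL, and a fractional input such as 0.5 FIL becomes 5 × 10^17
// attoFIL; any sub-atto remainder is truncated by the big.Float -> big.Int
// conversion above.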

func initPaymentChannel(t *testkit.TestEnvironment, ctx context.Context, cl *testkit.LotusClient, minerAddr testkit.MinerAddressesMsg) error {
	recv := minerAddr
	balance := filToAttoFil(10)
	t.RecordMessage("my balance: %d", balance)
	t.RecordMessage("creating payment channel; from=%s, to=%s, funds=%d", cl.Wallet.Address, recv.WalletAddr, balance)

	channel, err := cl.FullApi.PaychGet(ctx, cl.Wallet.Address, recv.WalletAddr, balance)
	if err != nil {
		return fmt.Errorf("failed to create payment channel: %w", err)
	}

	if addr := channel.Channel; addr != address.Undef {
		return fmt.Errorf("expected an Undef channel address, got: %s", addr)
	}

	t.RecordMessage("payment channel created; msg_cid=%s", channel.WaitSentinel)
	t.RecordMessage("waiting for payment channel message to appear on chain")

	// wait for the channel creation message to appear on chain.
	_, err = cl.FullApi.StateWaitMsg(ctx, channel.WaitSentinel, 2)
	if err != nil {
		return fmt.Errorf("failed while waiting for payment channel creation msg to appear on chain: %w", err)
	}

	// need to wait so that the channel is tracked.
	// the full API waits for build.MessageConfidence (=1 in tests) before tracking the channel.
	// we wait for 2 confirmations, so we have the assurance the channel is tracked.

	t.RecordMessage("reloading paych; now it should have an address")
	channel, err = cl.FullApi.PaychGet(ctx, cl.Wallet.Address, recv.WalletAddr, big.Zero())
	if err != nil {
		return fmt.Errorf("failed to reload payment channel: %w", err)
	}

	t.RecordMessage("channel address: %s", channel.Channel)

	return nil
}
147
testplans/lotus-soup/deals_stress.go
Normal file
@ -0,0 +1,147 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"math/rand"
|
||||
"os"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/ipfs/go-cid"
|
||||
|
||||
"github.com/filecoin-project/oni/lotus-soup/testkit"
|
||||
)
|
||||
|
||||
func dealsStress(t *testkit.TestEnvironment) error {
|
||||
// Dispatch/forward non-client roles to defaults.
|
||||
if t.Role != "client" {
|
||||
return testkit.HandleDefaultRole(t)
|
||||
}
|
||||
|
||||
t.RecordMessage("running client")
|
||||
|
||||
cl, err := testkit.PrepareClient(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
client := cl.FullApi
|
||||
|
||||
// select a random miner
|
||||
minerAddr := cl.MinerAddrs[rand.Intn(len(cl.MinerAddrs))]
|
||||
if err := client.NetConnect(ctx, minerAddr.MinerNetAddrs); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.RecordMessage("selected %s as the miner", minerAddr.MinerActorAddr)
|
||||
|
||||
time.Sleep(12 * time.Second)
|
||||
|
||||
// prepare a number of concurrent data points
|
||||
deals := t.IntParam("deals")
|
||||
data := make([][]byte, 0, deals)
|
||||
files := make([]*os.File, 0, deals)
|
||||
cids := make([]cid.Cid, 0, deals)
|
||||
rng := rand.NewSource(time.Now().UnixNano())
|
||||
|
||||
for i := 0; i < deals; i++ {
|
||||
dealData := make([]byte, 1600)
|
||||
rand.New(rng).Read(dealData)
|
||||
|
||||
dealFile, err := ioutil.TempFile("/tmp", "data")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer os.Remove(dealFile.Name())
|
||||
|
||||
_, err = dealFile.Write(dealData)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
dealCid, err := client.ClientImport(ctx, api.FileRef{Path: dealFile.Name(), IsCAR: false})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.RecordMessage("deal %d file cid: %s", i, dealCid)
|
||||
|
||||
data = append(data, dealData)
|
||||
files = append(files, dealFile)
|
||||
cids = append(cids, dealCid.Root)
|
||||
}
|
||||
|
||||
concurrentDeals := true
|
||||
if t.StringParam("deal_mode") == "serial" {
|
||||
concurrentDeals = false
|
||||
}
|
||||
|
||||
// this to avoid failure to get block
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
t.RecordMessage("starting storage deals")
|
||||
if concurrentDeals {
|
||||
|
||||
var wg1 sync.WaitGroup
|
||||
for i := 0; i < deals; i++ {
|
||||
wg1.Add(1)
|
||||
go func(i int) {
|
||||
defer wg1.Done()
|
||||
t1 := time.Now()
|
||||
deal := testkit.StartDeal(ctx, minerAddr.MinerActorAddr, client, cids[i], false)
|
||||
t.RecordMessage("started storage deal %d -> %s", i, deal)
|
||||
time.Sleep(2 * time.Second)
|
||||
t.RecordMessage("waiting for deal %d to be sealed", i)
|
||||
testkit.WaitDealSealed(t, ctx, client, deal)
|
||||
t.D().ResettingHistogram(fmt.Sprintf("deal.sealed,miner=%s", minerAddr.MinerActorAddr)).Update(int64(time.Since(t1)))
|
||||
}(i)
|
||||
}
|
||||
t.RecordMessage("waiting for all deals to be sealed")
|
||||
wg1.Wait()
|
||||
t.RecordMessage("all deals sealed; starting retrieval")
|
||||
|
||||
var wg2 sync.WaitGroup
|
||||
for i := 0; i < deals; i++ {
|
||||
wg2.Add(1)
|
||||
go func(i int) {
|
||||
defer wg2.Done()
|
||||
t.RecordMessage("retrieving data for deal %d", i)
|
||||
t1 := time.Now()
|
||||
_ = testkit.RetrieveData(t, ctx, client, cids[i], nil, true, data[i])
|
||||
|
||||
t.RecordMessage("retrieved data for deal %d", i)
|
||||
t.D().ResettingHistogram("deal.retrieved").Update(int64(time.Since(t1)))
|
||||
}(i)
|
||||
}
|
||||
t.RecordMessage("waiting for all retrieval deals to complete")
|
||||
wg2.Wait()
|
||||
t.RecordMessage("all retrieval deals successful")
|
||||
|
||||
} else {
|
||||
|
||||
for i := 0; i < deals; i++ {
|
||||
deal := testkit.StartDeal(ctx, minerAddr.MinerActorAddr, client, cids[i], false)
|
||||
t.RecordMessage("started storage deal %d -> %s", i, deal)
|
||||
time.Sleep(2 * time.Second)
|
||||
t.RecordMessage("waiting for deal %d to be sealed", i)
|
||||
testkit.WaitDealSealed(t, ctx, client, deal)
|
||||
}
|
||||
|
||||
for i := 0; i < deals; i++ {
|
||||
t.RecordMessage("retrieving data for deal %d", i)
|
||||
_ = testkit.RetrieveData(t, ctx, client, cids[i], nil, true, data[i])
|
||||
t.RecordMessage("retrieved data for deal %d", i)
|
||||
}
|
||||
}
|
||||
|
||||
t.SyncClient.MustSignalEntry(ctx, testkit.StateStopMining)
|
||||
t.SyncClient.MustSignalAndWait(ctx, testkit.StateDone, t.TestInstanceCount)
|
||||
|
||||
time.Sleep(15 * time.Second) // wait for metrics to be emitted
|
||||
|
||||
return nil
|
||||
}
|
1
testplans/lotus-soup/env-ci.toml
Normal file
@ -0,0 +1 @@
[client]
57
testplans/lotus-soup/go.mod
Normal file
@ -0,0 +1,57 @@
|
||||
module github.com/filecoin-project/oni/lotus-soup
|
||||
|
||||
go 1.14
|
||||
|
||||
require (
|
||||
contrib.go.opencensus.io/exporter/prometheus v0.1.0
|
||||
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 // indirect
|
||||
github.com/codeskyblue/go-sh v0.0.0-20200712050446-30169cf553fe
|
||||
github.com/davecgh/go-spew v1.1.1
|
||||
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
|
||||
github.com/drand/drand v1.1.2-0.20200905144319-79c957281b32
|
||||
github.com/filecoin-project/go-address v0.0.4
|
||||
github.com/filecoin-project/go-fil-markets v0.7.1
|
||||
github.com/filecoin-project/go-jsonrpc v0.1.2-0.20201008195726-68c6a2704e49
|
||||
github.com/filecoin-project/go-state-types v0.0.0-20200928172055-2df22083d8ab
|
||||
github.com/filecoin-project/go-storedcounter v0.0.0-20200421200003-1c99c62e8a5b
|
||||
github.com/filecoin-project/lotus v0.9.2-0.20201012041700-a2e0832a12f2
|
||||
github.com/filecoin-project/specs-actors v0.9.12
|
||||
github.com/google/uuid v1.1.1
|
||||
github.com/gorilla/mux v1.7.4
|
||||
github.com/hashicorp/go-multierror v1.1.0
|
||||
github.com/influxdata/influxdb v1.8.0 // indirect
|
||||
github.com/ipfs/go-cid v0.0.7
|
||||
github.com/ipfs/go-datastore v0.4.5
|
||||
github.com/ipfs/go-ipfs-files v0.0.8
|
||||
github.com/ipfs/go-ipld-format v0.2.0
|
||||
github.com/ipfs/go-log/v2 v2.1.2-0.20200626104915-0016c0b4b3e4
|
||||
github.com/ipfs/go-merkledag v0.3.2
|
||||
github.com/ipfs/go-unixfs v0.2.4
|
||||
github.com/ipld/go-car v0.1.1-0.20200923150018-8cdef32e2da4
|
||||
github.com/kpacha/opencensus-influxdb v0.0.0-20181102202715-663e2683a27c
|
||||
github.com/kr/text v0.2.0 // indirect
|
||||
github.com/libp2p/go-libp2p v0.11.0
|
||||
github.com/libp2p/go-libp2p-core v0.6.1
|
||||
github.com/libp2p/go-libp2p-pubsub-tracer v0.0.0-20200626141350-e730b32bf1e6
|
||||
github.com/multiformats/go-multiaddr v0.3.1
|
||||
github.com/multiformats/go-multiaddr-net v0.2.0
|
||||
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
|
||||
github.com/stretchr/objx v0.2.0 // indirect
|
||||
github.com/testground/sdk-go v0.2.6-0.20201016180515-1e40e1b0ec3a
|
||||
go.opencensus.io v0.22.4
|
||||
golang.org/x/lint v0.0.0-20200302205851-738671d3881b // indirect
|
||||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
|
||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae // indirect
|
||||
google.golang.org/protobuf v1.25.0 // indirect
|
||||
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
|
||||
honnef.co/go/tools v0.0.1-2020.1.3 // indirect
|
||||
)
|
||||
|
||||
// This will work in all build modes: docker:go, exec:go, and local go build.
|
||||
// On docker:go and exec:go, it maps to /extra/filecoin-ffi, as it's picked up
|
||||
// as an "extra source" in the manifest.
|
||||
replace github.com/filecoin-project/filecoin-ffi => ../extra/filecoin-ffi
|
||||
|
||||
replace github.com/supranational/blst => ../extra/fil-blst/blst
|
||||
|
||||
replace github.com/filecoin-project/fil-blst => ../extra/fil-blst
|
1978
testplans/lotus-soup/go.sum
Normal file
File diff suppressed because it is too large
52
testplans/lotus-soup/init.go
Normal file
@ -0,0 +1,52 @@
package main

import (
	"os"

	"github.com/filecoin-project/lotus/build"
	"github.com/filecoin-project/lotus/chain/actors/policy"

	"github.com/filecoin-project/go-state-types/abi"

	"github.com/ipfs/go-log/v2"
)

func init() {
	build.BlockDelaySecs = 2
	build.PropagationDelaySecs = 1

	_ = log.SetLogLevel("*", "WARN")
	_ = log.SetLogLevel("dht/RtRefreshManager", "ERROR") // noisy
	_ = log.SetLogLevel("bitswap", "ERROR")              // noisy

	_ = os.Setenv("BELLMAN_NO_GPU", "1")

	build.InsecurePoStValidation = true
	build.DisableBuiltinAssets = true

	// MessageConfidence is the amount of tipsets we wait after a message is
	// mined, e.g. payment channel creation, to be considered committed.
	build.MessageConfidence = 1

	// The duration of a deadline's challenge window, the period before a
	// deadline when the challenge is available.
	//
	// This will auto-scale the proving period.
	policy.SetWPoStChallengeWindow(abi.ChainEpoch(5))
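	// With the default 48 deadlines per proving period (WPoStPeriodDeadlines),
	// a 5-epoch challenge window scales the proving period to 48 × 5 = 240
	// epochs, i.e. roughly 8 minutes at the 2-second block delay set above.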
|
||||
// Number of epochs between publishing the precommit and when the challenge for interactive PoRep is drawn
|
||||
// used to ensure it is not predictable by miner.
|
||||
policy.SetPreCommitChallengeDelay(abi.ChainEpoch(10))
|
||||
|
||||
policy.SetConsensusMinerMinPower(abi.NewTokenAmount(2048))
|
||||
policy.SetSupportedProofTypes(abi.RegisteredSealProof_StackedDrg2KiBV1)
|
||||
policy.SetMinVerifiedDealSize(abi.NewTokenAmount(256))
|
||||
|
||||
// Disable upgrades.
|
||||
build.UpgradeSmokeHeight = -1
|
||||
build.UpgradeIgnitionHeight = -2
|
||||
build.UpgradeLiftoffHeight = -3
|
||||
// We need to _run_ this upgrade because genesis doesn't support v2, so
|
||||
// we run it at height 0.
|
||||
build.UpgradeActorsV2Height = 0
|
||||
}
|
24
testplans/lotus-soup/main.go
Normal file
@ -0,0 +1,24 @@
package main

import (
	"github.com/filecoin-project/oni/lotus-soup/paych"
	"github.com/filecoin-project/oni/lotus-soup/rfwp"
	"github.com/filecoin-project/oni/lotus-soup/testkit"

	"github.com/testground/sdk-go/run"
)

var cases = map[string]interface{}{
	"deals-e2e":                     testkit.WrapTestEnvironment(dealsE2E),
	"recovery-failed-windowed-post": testkit.WrapTestEnvironment(rfwp.RecoveryFromFailedWindowedPoStE2E),
	"deals-stress":                  testkit.WrapTestEnvironment(dealsStress),
	"drand-halting":                 testkit.WrapTestEnvironment(dealsE2E),
	"drand-outage":                  testkit.WrapTestEnvironment(dealsE2E),
	"paych-stress":                  testkit.WrapTestEnvironment(paych.Stress),
}

func main() {
	sanityCheck()

	run.InvokeMap(cases)
}
216
testplans/lotus-soup/manifest.toml
Normal file
@ -0,0 +1,216 @@
|
||||
name = "lotus-soup"
|
||||
extra_sources = { "exec:go" = ["../extra/filecoin-ffi", "../extra/fil-blst"] }
|
||||
|
||||
[defaults]
|
||||
builder = "docker:go"
|
||||
runner = "local:docker"
|
||||
|
||||
[builders."exec:go"]
|
||||
enabled = true
|
||||
|
||||
[builders."docker:go"]
|
||||
enabled = true
|
||||
build_base_image = "iptestground/oni-buildbase:v9"
|
||||
runtime_image = "iptestground/oni-runtime:v5-debug"
|
||||
|
||||
[runners."local:exec"]
|
||||
enabled = true
|
||||
|
||||
[runners."local:docker"]
|
||||
enabled = true
|
||||
|
||||
[runners."cluster:k8s"]
|
||||
enabled = true
|
||||
|
||||
######################
|
||||
##
|
||||
## Testcases
|
||||
##
|
||||
######################
|
||||
|
||||
[[testcases]]
|
||||
name = "deals-e2e"
|
||||
instances = { min = 1, max = 100, default = 5 }
|
||||
|
||||
[testcases.params]
|
||||
clients = { type = "int", default = 1 }
|
||||
miners = { type = "int", default = 1 }
|
||||
balance = { type = "float", default = 1 }
|
||||
sectors = { type = "int", default = 1 }
|
||||
role = { type = "string" }
|
||||
|
||||
genesis_timestamp_offset = { type = "int", default = 0 }
|
||||
|
||||
random_beacon_type = { type = "enum", default = "mock", options = ["mock", "local-drand", "external-drand"] }
|
||||
|
||||
# Params relevant to drand nodes. drand nodes should have role="drand", and must all be
|
||||
# in the same composition group. There must be at least threshold drand nodes.
|
||||
# To get lotus nodes to actually use the drand nodes, you must set random_beacon_type="local-drand"
|
||||
# for the lotus node groups.
|
||||
drand_period = { type = "duration", default="10s" }
|
||||
drand_threshold = { type = "int", default = 2 }
|
||||
drand_gossip_relay = { type = "bool", default = true }
|
||||
drand_log_level = { type = "string", default="info" }
|
||||
|
||||
# Params relevant to pubsub tracing
|
||||
enable_pubsub_tracer = { type = "bool", default = false }
|
||||
mining_mode = { type = "enum", default = "synchronized", options = ["synchronized", "natural"] }
|
||||
|
||||
# Fast retrieval
|
||||
fast_retrieval = { type = "bool", default = false }
|
||||
|
||||
|
||||
[[testcases]]
|
||||
name = "drand-halting"
|
||||
instances = { min = 1, max = 100, default = 5 }
|
||||
|
||||
[testcases.params]
|
||||
clients = { type = "int", default = 1 }
|
||||
miners = { type = "int", default = 1 }
|
||||
balance = { type = "float", default = 1 }
|
||||
sectors = { type = "int", default = 1 }
|
||||
role = { type = "string" }
|
||||
genesis_timestamp_offset = { type = "int", default = 0 }
|
||||
|
||||
|
||||
random_beacon_type = { type = "enum", default = "local-drand", options = ["mock", "local-drand", "external-drand"] }
|
||||
|
||||
# Params relevant to drand nodes. drand nodes should have role="drand", and must all be
|
||||
# in the same composition group. There must be at least threshold drand nodes.
|
||||
# To get lotus nodes to actually use the drand nodes, you must set random_beacon_type="local-drand"
|
||||
# for the lotus node groups.
|
||||
drand_period = { type = "duration", default="10s" }
|
||||
drand_threshold = { type = "int", default = 2 }
|
||||
drand_gossip_relay = { type = "bool", default = true }
|
||||
drand_log_level = { type = "string", default="info" }
|
||||
suspend_events = { type = "string", default="", desc = "a sequence of halt/resume/wait events separated by '->'" }
|
||||
|
||||
# Params relevant to pubsub tracing
|
||||
enable_pubsub_tracer = { type = "bool", default = false } # Mining Mode: synchronized -vs- natural time
|
||||
mining_mode = { type = "enum", default = "synchronized", options = ["synchronized", "natural"] }
|
||||
|
||||
|
||||
[[testcases]]
|
||||
name = "drand-outage"
|
||||
instances = { min = 1, max = 100, default = 5 }
|
||||
|
||||
[testcases.params]
|
||||
clients = { type = "int", default = 0 }
|
||||
miners = { type = "int", default = 3 }
|
||||
balance = { type = "float", default = 1 }
|
||||
sectors = { type = "int", default = 1 }
|
||||
role = { type = "string" }
|
||||
genesis_timestamp_offset = { type = "int", default = 0 }
|
||||
|
||||
|
||||
random_beacon_type = { type = "enum", default = "local-drand", options = ["mock", "local-drand", "external-drand"] }
|
||||
|
||||
# Params relevant to drand nodes. drand nodes should have role="drand", and must all be
|
||||
# in the same composition group. There must be at least threshold drand nodes.
|
||||
# To get lotus nodes to actually use the drand nodes, you must set random_beacon_type="local-drand"
|
||||
# for the lotus node groups.
|
||||
drand_period = { type = "duration", default="30s" }
|
||||
drand_catchup_period = { type = "duration", default="10s" }
|
||||
drand_threshold = { type = "int", default = 2 }
|
||||
drand_gossip_relay = { type = "bool", default = true }
|
||||
drand_log_level = { type = "string", default="info" }
|
||||
suspend_events = { type = "string", default="", desc = "a sequence of halt/resume/wait events separated by '->'" }
|
||||
|
||||
# Params relevant to pubsub tracing
|
||||
enable_pubsub_tracer = { type = "bool", default = false } # Mining Mode: synchronized -vs- natural time
|
||||
mining_mode = { type = "enum", default = "synchronized", options = ["synchronized", "natural"] }
|
||||
|
||||
|
||||
[[testcases]]
|
||||
name = "deals-stress"
|
||||
instances = { min = 1, max = 100, default = 5 }
|
||||
|
||||
[testcases.params]
|
||||
clients = { type = "int", default = 1 }
|
||||
miners = { type = "int", default = 1 }
|
||||
balance = { type = "float", default = 1 }
|
||||
sectors = { type = "int", default = 1 }
|
||||
role = { type = "string" }
|
||||
|
||||
genesis_timestamp_offset = { type = "int", default = 0 }
|
||||
|
||||
random_beacon_type = { type = "enum", default = "mock", options = ["mock", "local-drand", "external-drand"] }
|
||||
|
||||
# Params relevant to drand nodes. drand nodes should have role="drand", and must all be
|
||||
# in the same composition group. There must be at least threshold drand nodes.
|
||||
# To get lotus nodes to actually use the drand nodes, you must set random_beacon_type="local-drand"
|
||||
# for the lotus node groups.
|
||||
drand_period = { type = "duration", default="10s" }
|
||||
drand_threshold = { type = "int", default = 2 }
|
||||
drand_gossip_relay = { type = "bool", default = true }
|
||||
|
||||
# Params relevant to pubsub tracing
|
||||
enable_pubsub_tracer = { type = "bool", default = false }
|
||||
|
||||
# Mining Mode: synchronized -vs- natural time
|
||||
mining_mode = { type = "enum", default = "synchronized", options = ["synchronized", "natural"] }
|
||||
|
||||
deals = { type = "int", default = 1 }
|
||||
deal_mode = { type = "enum", default = "serial", options = ["serial", "concurrent"] }
|
||||
|
||||
|
||||
[[testcases]]
|
||||
name = "paych-stress"
|
||||
instances = { min = 1, max = 100, default = 5 }
|
||||
|
||||
[testcases.params]
|
||||
clients = { type = "int", default = 1 }
|
||||
miners = { type = "int", default = 1 }
|
||||
balance = { type = "float", default = 1 }
|
||||
sectors = { type = "int", default = 1 }
|
||||
role = { type = "string" }
|
||||
genesis_timestamp_offset = { type = "int", default = 0 }
|
||||
|
||||
random_beacon_type = { type = "enum", default = "local-drand", options = ["mock", "local-drand", "external-drand"] }
|
||||
|
||||
# Params relevant to drand nodes. drand nodes should have role="drand", and must all be
|
||||
# in the same composition group. There must be at least threshold drand nodes.
|
||||
# To get lotus nodes to actually use the drand nodes, you must set random_beacon_type="local-drand"
|
||||
# for the lotus node groups.
|
||||
drand_period = { type = "duration", default="10s" }
|
||||
drand_threshold = { type = "int", default = 2 }
|
||||
drand_gossip_relay = { type = "bool", default = true }
|
||||
drand_log_level = { type = "string", default="info" }
|
||||
suspend_events = { type = "string", default="", desc = "a sequence of halt/resume/wait events separated by '->'" }
|
||||
|
||||
# Params relevant to pubsub tracing
|
||||
enable_pubsub_tracer = { type = "bool", default = false } # Mining Mode: synchronized -vs- natural time
|
||||
mining_mode = { type = "enum", default = "synchronized", options = ["synchronized", "natural"] }
|
||||
|
||||
# ********** Test-case specific **********
|
||||
increments = { type = "int", default = "100", desc = "increments in which to send payment vouchers" }
|
||||
lane_count = { type = "int", default = "256", desc = "lanes to open; vouchers will be distributed across these lanes in round-robin fashion" }
|
||||
|
||||
|
||||
[[testcases]]
|
||||
name = "recovery-failed-windowed-post"
|
||||
instances = { min = 1, max = 100, default = 5 }
|
||||
|
||||
[testcases.params]
|
||||
clients = { type = "int", default = 1 }
|
||||
miners = { type = "int", default = 1 }
|
||||
balance = { type = "int", default = 1 }
|
||||
sectors = { type = "int", default = 1 }
|
||||
role = { type = "string" }
|
||||
|
||||
genesis_timestamp_offset = { type = "int", default = 0 }
|
||||
|
||||
random_beacon_type = { type = "enum", default = "mock", options = ["mock", "local-drand", "external-drand"] }
|
||||
|
||||
# Params relevant to drand nodes. drand nodes should have role="drand", and must all be
|
||||
# in the same composition group. There must be at least threshold drand nodes.
|
||||
# To get lotus nodes to actually use the drand nodes, you must set random_beacon_type="local-drand"
|
||||
# for the lotus node groups.
|
||||
drand_period = { type = "duration", default="10s" }
|
||||
drand_threshold = { type = "int", default = 2 }
|
||||
drand_gossip_relay = { type = "bool", default = true }
|
||||
drand_log_level = { type = "string", default="info" }
|
||||
|
||||
# Params relevant to pubsub tracing
|
||||
enable_pubsub_tracer = { type = "bool", default = false }
|
||||
mining_mode = { type = "enum", default = "synchronized", options = ["synchronized", "natural"] }
|
32
testplans/lotus-soup/paych/README.md
Normal file
@ -0,0 +1,32 @@
# Payment channels end-to-end tests

This package contains the following test cases, each of which is described
further below.

- Payment channels stress test case (`stress.go`).

## Payment channels stress test case (`stress.go`)

***WIP | blocked due to https://github.com/filecoin-project/lotus/issues/2297***

This test case turns all clients into payment receivers and senders.
The first member of the group to start becomes the _receiver_.
All other members become _senders_.

The _senders_ open a single payment channel to the _receiver_ and wait for the
message to be posted on-chain. We set `build.MessageConfidence=1` to accelerate
the test, so we only wait for a single tipset confirmation once we witness the
message.

Once the message is posted, we load the payment channel actor address and create
as many lanes as the `lane_count` test parameter dictates.

We then fetch our total balance and start sending it over the payment channel,
round-robinning across all lanes, until our balance is extinguished.
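
For orientation, here is a minimal sketch of the sender's voucher loop. It mirrors the round-robin logic in `stress.go` below; the `sendVouchers` helper and its signature are illustrative only, and the sync-service hand-off of the signed vouchers to the receiver is omitted.

```go
package sketch

import (
	"context"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-state-types/big"
	"github.com/filecoin-project/lotus/api"
)

// sendVouchers creates vouchersPerLane vouchers on every lane, round-robin.
// Vouchers on a lane are cumulative, so the i-th voucher on a lane is worth
// (i+1)*increments. Sketch only; the real logic lives in stress.go.
func sendVouchers(ctx context.Context, node api.FullNode, ch address.Address,
	lanes []uint64, vouchersPerLane int, increments big.Int) error {
	for i := 0; i < vouchersPerLane; i++ {
		for _, lane := range lanes {
			// cumulative amount covered by this lane's i-th voucher
			amt := big.Mul(big.NewInt(int64(i+1)), increments)
			if _, err := node.PaychVoucherCreate(ctx, ch, amt, lane); err != nil {
				return err
			}
		}
	}
	return nil
}
```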

**TODO:**

- [ ] Assertions, metrics, etc. Actually gather statistics. Right now this is
  just a smoke test, and it fails.
- [ ] Implement the _receiver_ logic.
- [ ] Model the test lifetime by signalling the end of the run.
313
testplans/lotus-soup/paych/stress.go
Normal file
@ -0,0 +1,313 @@
|
||||
package paych
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/ipfs/go-cid"
|
||||
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"github.com/filecoin-project/specs-actors/actors/builtin/paych"
|
||||
|
||||
"github.com/filecoin-project/go-address"
|
||||
"github.com/filecoin-project/go-state-types/big"
|
||||
"github.com/testground/sdk-go/sync"
|
||||
|
||||
"github.com/filecoin-project/oni/lotus-soup/testkit"
|
||||
)
|
||||
|
||||
var SendersDoneState = sync.State("senders-done")
|
||||
var ReceiverReadyState = sync.State("receiver-ready")
|
||||
var ReceiverAddedVouchersState = sync.State("receiver-added-vouchers")
|
||||
|
||||
var VoucherTopic = sync.NewTopic("voucher", &paych.SignedVoucher{})
|
||||
var SettleTopic = sync.NewTopic("settle", cid.Cid{})
|
||||
|
||||
type ClientMode uint64
|
||||
|
||||
const (
|
||||
ModeSender ClientMode = iota
|
||||
ModeReceiver
|
||||
)
|
||||
|
||||
func (cm ClientMode) String() string {
|
||||
return [...]string{"Sender", "Receiver"}[cm]
|
||||
}
|
||||
|
||||
func getClientMode(groupSeq int64) ClientMode {
|
||||
if groupSeq == 1 {
|
||||
return ModeReceiver
|
||||
}
|
||||
return ModeSender
|
||||
}
|
||||
|
||||
// TODO Stress is currently WIP. We found blockers in Lotus that prevent us from
|
||||
// making progress. See https://github.com/filecoin-project/lotus/issues/2297.
|
||||
func Stress(t *testkit.TestEnvironment) error {
|
||||
// Dispatch/forward non-client roles to defaults.
|
||||
if t.Role != "client" {
|
||||
return testkit.HandleDefaultRole(t)
|
||||
}
|
||||
|
||||
// This is a client role.
|
||||
t.RecordMessage("running payments client")
|
||||
|
||||
ctx := context.Background()
|
||||
cl, err := testkit.PrepareClient(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// are we the receiver or a sender?
|
||||
mode := getClientMode(t.GroupSeq)
|
||||
t.RecordMessage("acting as %s", mode)
|
||||
|
||||
var clients []*testkit.ClientAddressesMsg
|
||||
sctx, cancel := context.WithCancel(ctx)
|
||||
clientsCh := make(chan *testkit.ClientAddressesMsg)
|
||||
t.SyncClient.MustSubscribe(sctx, testkit.ClientsAddrsTopic, clientsCh)
|
||||
for i := 0; i < t.TestGroupInstanceCount; i++ {
|
||||
clients = append(clients, <-clientsCh)
|
||||
}
|
||||
cancel()
|
||||
|
||||
switch mode {
|
||||
case ModeReceiver:
|
||||
err := runReceiver(t, ctx, cl)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
case ModeSender:
|
||||
err := runSender(ctx, t, clients, cl)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// Signal that the client is done
|
||||
t.SyncClient.MustSignalEntry(ctx, testkit.StateDone)
|
||||
|
||||
// Signal to the miners to stop mining
|
||||
t.SyncClient.MustSignalEntry(ctx, testkit.StateStopMining)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func runSender(ctx context.Context, t *testkit.TestEnvironment, clients []*testkit.ClientAddressesMsg, cl *testkit.LotusClient) error {
|
||||
var (
|
||||
// lanes to open; vouchers will be distributed across these lanes in round-robin fashion
|
||||
laneCount = t.IntParam("lane_count")
|
||||
// number of vouchers to send on each lane
|
||||
vouchersPerLane = t.IntParam("vouchers_per_lane")
|
||||
// increments in which to send payment vouchers
|
||||
increments = big.Mul(big.NewInt(int64(t.IntParam("increments"))), big.NewInt(int64(build.FilecoinPrecision)))
|
||||
// channel amount should be enough to cover all vouchers
|
||||
channelAmt = big.Mul(big.NewInt(int64(laneCount*vouchersPerLane)), increments)
|
||||
)
|
||||
|
||||
// Lock up funds in the payment channel.
|
||||
recv := findReceiver(clients)
|
||||
balance, err := cl.FullApi.WalletBalance(ctx, cl.Wallet.Address)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to acquire wallet balance: %w", err)
|
||||
}
|
||||
|
||||
t.RecordMessage("my balance: %d", balance)
|
||||
t.RecordMessage("creating payment channel; from=%s, to=%s, funds=%d", cl.Wallet.Address, recv.WalletAddr, channelAmt)
|
||||
|
||||
pid := os.Getpid()
|
||||
t.RecordMessage("sender pid: %d", pid)
|
||||
|
||||
time.Sleep(20 * time.Second)
|
||||
|
||||
channel, err := cl.FullApi.PaychGet(ctx, cl.Wallet.Address, recv.WalletAddr, channelAmt)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create payment channel: %w", err)
|
||||
}
|
||||
|
||||
if addr := channel.Channel; addr != address.Undef {
|
||||
return fmt.Errorf("expected an Undef channel address, got: %s", addr)
|
||||
}
|
||||
|
||||
t.RecordMessage("payment channel created; msg_cid=%s", channel.WaitSentinel)
|
||||
t.RecordMessage("waiting for payment channel message to appear on chain")
|
||||
|
||||
// wait for the channel creation message to appear on chain.
|
||||
_, err = cl.FullApi.StateWaitMsg(ctx, channel.WaitSentinel, 2)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed while waiting for payment channel creation msg to appear on chain: %w", err)
|
||||
}
|
||||
|
||||
// need to wait so that the channel is tracked.
|
||||
// the full API waits for build.MessageConfidence (=1 in tests) before tracking the channel.
|
||||
// we wait for 2 confirmations, so we have the assurance the channel is tracked.
|
||||
|
||||
t.RecordMessage("get payment channel address")
|
||||
channelAddr, err := cl.FullApi.PaychGetWaitReady(ctx, channel.WaitSentinel)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to get payment channel address: %w", err)
|
||||
}
|
||||
|
||||
t.RecordMessage("channel address: %s", channelAddr)
|
||||
t.RecordMessage("allocating lanes; count=%d", laneCount)
|
||||
|
||||
// allocate as many lanes as required
|
||||
var lanes []uint64
|
||||
for i := 0; i < laneCount; i++ {
|
||||
lane, err := cl.FullApi.PaychAllocateLane(ctx, channelAddr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to allocate lane: %w", err)
|
||||
}
|
||||
lanes = append(lanes, lane)
|
||||
}
|
||||
|
||||
t.RecordMessage("lanes allocated; count=%d", laneCount)
|
||||
|
||||
<-t.SyncClient.MustBarrier(ctx, ReceiverReadyState, 1).C
|
||||
|
||||
t.RecordMessage("sending payments in round-robin fashion across lanes; increments=%d", increments)
|
||||
|
||||
// create vouchers
|
||||
remaining := channelAmt
|
||||
for i := 0; i < vouchersPerLane; i++ {
|
||||
for _, lane := range lanes {
|
||||
voucherAmt := big.Mul(big.NewInt(int64(i+1)), increments)
|
||||
voucher, err := cl.FullApi.PaychVoucherCreate(ctx, channelAddr, voucherAmt, lane)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create voucher: %w", err)
|
||||
}
|
||||
t.RecordMessage("payment voucher created; lane=%d, nonce=%d, amount=%d", voucher.Voucher.Lane, voucher.Voucher.Nonce, voucher.Voucher.Amount)
|
||||
|
||||
_, err = t.SyncClient.Publish(ctx, VoucherTopic, voucher.Voucher)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to publish voucher: %w", err)
|
||||
}
|
||||
|
||||
remaining = big.Sub(remaining, increments)
|
||||
t.RecordMessage("remaining balance: %d", remaining)
|
||||
}
|
||||
}
|
||||
|
||||
t.RecordMessage("finished sending all payment vouchers")
|
||||
|
||||
// Inform the receiver that all vouchers have been created
|
||||
t.SyncClient.MustSignalEntry(ctx, SendersDoneState)
|
||||
|
||||
// Wait for the receiver to add all vouchers
|
||||
<-t.SyncClient.MustBarrier(ctx, ReceiverAddedVouchersState, 1).C
|
||||
|
||||
t.RecordMessage("settle channel")
|
||||
|
||||
// Settle the channel. When the receiver sees the settle message, they
|
||||
// should automatically submit all vouchers.
|
||||
settleMsgCid, err := cl.FullApi.PaychSettle(ctx, channelAddr)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to settle payment channel: %w", err)
|
||||
}
|
||||
|
||||
_, err = t.SyncClient.Publish(ctx, SettleTopic, settleMsgCid)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to publish settle message cid: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func findReceiver(clients []*testkit.ClientAddressesMsg) *testkit.ClientAddressesMsg {
|
||||
for _, c := range clients {
|
||||
if getClientMode(c.GroupSeq) == ModeReceiver {
|
||||
return c
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func runReceiver(t *testkit.TestEnvironment, ctx context.Context, cl *testkit.LotusClient) error {
|
||||
// lanes to open; vouchers will be distributed across these lanes in round-robin fashion
|
||||
laneCount := t.IntParam("lane_count")
|
||||
// number of vouchers to send on each lane
|
||||
vouchersPerLane := t.IntParam("vouchers_per_lane")
|
||||
totalVouchers := laneCount * vouchersPerLane
|
||||
|
||||
vouchers := make(chan *paych.SignedVoucher)
|
||||
vouchersSub, err := t.SyncClient.Subscribe(ctx, VoucherTopic, vouchers)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to subscribe to voucher topic: %w", err)
|
||||
}
|
||||
|
||||
settleMsgChan := make(chan cid.Cid)
|
||||
settleSub, err := t.SyncClient.Subscribe(ctx, SettleTopic, settleMsgChan)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to subscribe to settle topic: %w", err)
|
||||
}
|
||||
|
||||
// inform the clients that the receiver is ready for incoming vouchers
|
||||
t.SyncClient.MustSignalEntry(ctx, ReceiverReadyState)
|
||||
|
||||
t.RecordMessage("adding %d payment vouchers", totalVouchers)
|
||||
|
||||
// Add each of the vouchers
|
||||
var addedVouchers []*paych.SignedVoucher
|
||||
for i := 0; i < totalVouchers; i++ {
|
||||
v := <-vouchers
|
||||
addedVouchers = append(addedVouchers, v)
|
||||
|
||||
_, err := cl.FullApi.PaychVoucherAdd(ctx, v.ChannelAddr, v, nil, big.NewInt(0))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to add voucher: %w", err)
|
||||
}
|
||||
spendable, err := cl.FullApi.PaychVoucherCheckSpendable(ctx, v.ChannelAddr, v, nil, nil)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to check voucher spendable: %w", err)
|
||||
}
|
||||
if !spendable {
|
||||
return fmt.Errorf("expected voucher %d to be spendable", i)
|
||||
}
|
||||
|
||||
t.RecordMessage("payment voucher added; lane=%d, nonce=%d, amount=%d", v.Lane, v.Nonce, v.Amount)
|
||||
}
|
||||
|
||||
vouchersSub.Done()
|
||||
|
||||
t.RecordMessage("finished adding all payment vouchers")
|
||||
|
||||
// Inform the clients that the receiver has added all vouchers
|
||||
t.SyncClient.MustSignalEntry(ctx, ReceiverAddedVouchersState)
|
||||
|
||||
// Wait for the settle message (put on chain by the sender)
|
||||
t.RecordMessage("waiting for client to put settle message on chain")
|
||||
settleMsgCid := <-settleMsgChan
|
||||
settleSub.Done()
|
||||
|
||||
time.Sleep(5 * time.Second)
|
||||
|
||||
t.RecordMessage("waiting for confirmation of settle message on chain: %s", settleMsgCid)
|
||||
_, err = cl.FullApi.StateWaitMsg(ctx, settleMsgCid, 10)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to wait for settle message: %w", err)
|
||||
}
|
||||
|
||||
// Note: Once the receiver sees the settle message on chain, it will
|
||||
// automatically call submit voucher with the best vouchers
|
||||
|
||||
// TODO: Uncomment this section once this PR is merged:
|
||||
// https://github.com/filecoin-project/lotus/pull/3197
|
||||
//t.RecordMessage("checking that all %d vouchers are no longer spendable", len(addedVouchers))
|
||||
//for i, v := range addedVouchers {
|
||||
// spendable, err := cl.FullApi.PaychVoucherCheckSpendable(ctx, v.ChannelAddr, v, nil, nil)
|
||||
// if err != nil {
|
||||
// return fmt.Errorf("failed to check voucher spendable: %w", err)
|
||||
// }
|
||||
// // Should no longer be spendable because the best voucher has been submitted
|
||||
// if spendable {
|
||||
// return fmt.Errorf("expected voucher %d to no longer be spendable", i)
|
||||
// }
|
||||
//}
|
||||
|
||||
t.RecordMessage("all vouchers were submitted successfully")
|
||||
|
||||
return nil
|
||||
}
|
796
testplans/lotus-soup/rfwp/chain_state.go
Normal file
@ -0,0 +1,796 @@
|
||||
package rfwp
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"sort"
|
||||
"text/tabwriter"
|
||||
"time"
|
||||
|
||||
"github.com/filecoin-project/go-address"
|
||||
"github.com/filecoin-project/go-state-types/big"
|
||||
"github.com/filecoin-project/lotus/api/apibstore"
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/chain/store"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
|
||||
"github.com/filecoin-project/oni/lotus-soup/testkit"
|
||||
|
||||
"github.com/filecoin-project/go-state-types/abi"
|
||||
sealing "github.com/filecoin-project/lotus/extern/storage-sealing"
|
||||
|
||||
"github.com/filecoin-project/lotus/chain/actors/builtin/miner"
|
||||
tstats "github.com/filecoin-project/lotus/tools/stats"
|
||||
)
|
||||
|
||||
func UpdateChainState(t *testkit.TestEnvironment, m *testkit.LotusMiner) error {
|
||||
height := 0
|
||||
headlag := 3
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
tipsetsCh, err := tstats.GetTips(ctx, m.FullApi, abi.ChainEpoch(height), headlag)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
jsonFilename := fmt.Sprintf("%s%cchain-state.ndjson", t.TestOutputsPath, os.PathSeparator)
|
||||
jsonFile, err := os.Create(jsonFilename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer jsonFile.Close()
|
||||
jsonEncoder := json.NewEncoder(jsonFile)
|
||||
|
||||
for tipset := range tipsetsCh {
|
||||
maddrs, err := m.FullApi.StateListMiners(ctx, tipset.Key())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
snapshot := ChainSnapshot{
|
||||
Height: tipset.Height(),
|
||||
MinerStates: make(map[string]*MinerStateSnapshot),
|
||||
}
|
||||
|
||||
err = func() error {
|
||||
cs.Lock()
|
||||
defer cs.Unlock()
|
||||
|
||||
for _, maddr := range maddrs {
|
||||
err := func() error {
|
||||
filename := fmt.Sprintf("%s%cstate-%s-%d", t.TestOutputsPath, os.PathSeparator, maddr, tipset.Height())
|
||||
|
||||
f, err := os.Create(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
w := bufio.NewWriter(f)
|
||||
defer w.Flush()
|
||||
|
||||
minerInfo, err := info(t, m, maddr, w, tipset.Height())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
writeText(w, minerInfo)
|
||||
|
||||
if tipset.Height()%100 == 0 {
|
||||
printDiff(t, minerInfo, tipset.Height())
|
||||
}
|
||||
|
||||
faultState, err := provingFaults(t, m, maddr, tipset.Height())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
writeText(w, faultState)
|
||||
|
||||
provState, err := provingInfo(t, m, maddr, tipset.Height())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
writeText(w, provState)
|
||||
|
||||
// record diff
|
||||
recordDiff(minerInfo, provState, tipset.Height())
|
||||
|
||||
deadlines, err := provingDeadlines(t, m, maddr, tipset.Height())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
writeText(w, deadlines)
|
||||
|
||||
sectorInfo, err := sectorsList(t, m, maddr, w, tipset.Height())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
writeText(w, sectorInfo)
|
||||
|
||||
snapshot.MinerStates[maddr.String()] = &MinerStateSnapshot{
|
||||
Info: minerInfo,
|
||||
Faults: faultState,
|
||||
ProvingInfo: provState,
|
||||
Deadlines: deadlines,
|
||||
Sectors: sectorInfo,
|
||||
}
|
||||
|
||||
return jsonEncoder.Encode(snapshot)
|
||||
}()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
cs.PrevHeight = tipset.Height()
|
||||
|
||||
return nil
|
||||
}()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
type ChainSnapshot struct {
|
||||
Height abi.ChainEpoch
|
||||
|
||||
MinerStates map[string]*MinerStateSnapshot
|
||||
}
|
||||
|
||||
type MinerStateSnapshot struct {
|
||||
Info *MinerInfo
|
||||
Faults *ProvingFaultState
|
||||
ProvingInfo *ProvingInfoState
|
||||
Deadlines *ProvingDeadlines
|
||||
Sectors *SectorInfo
|
||||
}
|
||||
|
||||
// writeText marshals m to text and writes to w, swallowing any errors along the way.
|
||||
func writeText(w io.Writer, m plainTextMarshaler) {
|
||||
b, err := m.MarshalPlainText()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
_, _ = w.Write(b)
|
||||
}
|
||||
|
||||
// if we make our structs `encoding.TextMarshaler`s, they all get stringified when marshaling to JSON
|
||||
// instead of just using the default struct marshaler.
|
||||
// so here's encoding.TextMarshaler with a different name, so that doesn't happen.
|
||||
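// For example: if ProvingInfoState implemented encoding.TextMarshaler, then
// json.Marshal (and the ndjson encoder above) would emit its text rendering
// as one JSON string rather than an object with per-field keys. Naming the
// method MarshalPlainText keeps the default JSON struct encoding intact.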
type plainTextMarshaler interface {
|
||||
MarshalPlainText() ([]byte, error)
|
||||
}
|
||||
|
||||
type ProvingFaultState struct {
|
||||
// FaultedSectors is a per-deadline slice of faulty sector numbers. If the miner
// has no faulty sectors, this will be nil.
|
||||
FaultedSectors [][]uint64
|
||||
}
|
||||
|
||||
func (s *ProvingFaultState) MarshalPlainText() ([]byte, error) {
|
||||
w := &bytes.Buffer{}
|
||||
|
||||
if len(s.FaultedSectors) == 0 {
|
||||
fmt.Fprintf(w, "no faulty sectors\n")
|
||||
return w.Bytes(), nil
|
||||
}
|
||||
|
||||
tw := tabwriter.NewWriter(w, 2, 4, 2, ' ', 0)
|
||||
_, _ = fmt.Fprintf(tw, "deadline\tsectors")
|
||||
for deadline, sectors := range s.FaultedSectors {
|
||||
for _, num := range sectors {
|
||||
_, _ = fmt.Fprintf(tw, "%d\t%d\n", deadline, num)
|
||||
}
|
||||
}
|
||||
|
||||
tw.Flush()
return w.Bytes(), nil
|
||||
}
|
||||
|
||||
func provingFaults(t *testkit.TestEnvironment, m *testkit.LotusMiner, maddr address.Address, height abi.ChainEpoch) (*ProvingFaultState, error) {
|
||||
api := m.FullApi
|
||||
ctx := context.Background()
|
||||
|
||||
head, err := api.ChainHead(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
deadlines, err := api.StateMinerDeadlines(ctx, maddr, head.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
faultedSectors := make([][]uint64, len(deadlines))
|
||||
hasFaults := false
|
||||
for dlIdx := range deadlines {
|
||||
partitions, err := api.StateMinerPartitions(ctx, maddr, uint64(dlIdx), types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, partition := range partitions {
|
||||
faulty, err := partition.FaultySectors.All(10000000)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if len(faulty) > 0 {
|
||||
hasFaults = true
|
||||
}
|
||||
|
||||
faultedSectors[dlIdx] = append(faultedSectors[dlIdx], faulty...)
|
||||
}
|
||||
}
|
||||
result := new(ProvingFaultState)
|
||||
if hasFaults {
|
||||
result.FaultedSectors = faultedSectors
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
type ProvingInfoState struct {
|
||||
CurrentEpoch abi.ChainEpoch
|
||||
|
||||
ProvingPeriodStart abi.ChainEpoch
|
||||
|
||||
Faults uint64
|
||||
ProvenSectors uint64
|
||||
FaultPercent float64
|
||||
Recoveries uint64
|
||||
|
||||
DeadlineIndex uint64
|
||||
DeadlineSectors uint64
|
||||
DeadlineOpen abi.ChainEpoch
|
||||
DeadlineClose abi.ChainEpoch
|
||||
DeadlineChallenge abi.ChainEpoch
|
||||
DeadlineFaultCutoff abi.ChainEpoch
|
||||
|
||||
WPoStProvingPeriod abi.ChainEpoch
|
||||
}
|
||||
|
||||
func (s *ProvingInfoState) MarshalPlainText() ([]byte, error) {
|
||||
w := &bytes.Buffer{}
|
||||
fmt.Fprintf(w, "Current Epoch: %d\n", s.CurrentEpoch)
|
||||
fmt.Fprintf(w, "Chain Period: %d\n", s.CurrentEpoch/s.WPoStProvingPeriod)
|
||||
fmt.Fprintf(w, "Chain Period Start: %s\n", epochTime(s.CurrentEpoch, (s.CurrentEpoch/s.WPoStProvingPeriod)*s.WPoStProvingPeriod))
|
||||
fmt.Fprintf(w, "Chain Period End: %s\n\n", epochTime(s.CurrentEpoch, (s.CurrentEpoch/s.WPoStProvingPeriod+1)*s.WPoStProvingPeriod))
|
||||
|
||||
fmt.Fprintf(w, "Proving Period Boundary: %d\n", s.ProvingPeriodStart%s.WPoStProvingPeriod)
|
||||
fmt.Fprintf(w, "Proving Period Start: %s\n", epochTime(s.CurrentEpoch, s.ProvingPeriodStart))
|
||||
fmt.Fprintf(w, "Next Period Start: %s\n\n", epochTime(s.CurrentEpoch, s.ProvingPeriodStart+s.WPoStProvingPeriod))
|
||||
|
||||
fmt.Fprintf(w, "Faults: %d (%.2f%%)\n", s.Faults, s.FaultPercent)
|
||||
fmt.Fprintf(w, "Recovering: %d\n", s.Recoveries)
|
||||
//fmt.Fprintf(w, "New Sectors: %d\n\n", s.NewSectors)
|
||||
|
||||
fmt.Fprintf(w, "Deadline Index: %d\n", s.DeadlineIndex)
|
||||
fmt.Fprintf(w, "Deadline Sectors: %d\n", s.DeadlineSectors)
|
||||
|
||||
fmt.Fprintf(w, "Deadline Open: %s\n", epochTime(s.CurrentEpoch, s.DeadlineOpen))
|
||||
fmt.Fprintf(w, "Deadline Close: %s\n", epochTime(s.CurrentEpoch, s.DeadlineClose))
|
||||
fmt.Fprintf(w, "Deadline Challenge: %s\n", epochTime(s.CurrentEpoch, s.DeadlineChallenge))
|
||||
fmt.Fprintf(w, "Deadline FaultCutoff: %s\n", epochTime(s.CurrentEpoch, s.DeadlineFaultCutoff))
|
||||
|
||||
return w.Bytes(), nil
|
||||
}
|
||||
|
||||
func provingInfo(t *testkit.TestEnvironment, m *testkit.LotusMiner, maddr address.Address, height abi.ChainEpoch) (*ProvingInfoState, error) {
|
||||
lapi := m.FullApi
|
||||
ctx := context.Background()
|
||||
|
||||
head, err := lapi.ChainHead(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cd, err := lapi.StateMinerProvingDeadline(ctx, maddr, head.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
deadlines, err := lapi.StateMinerDeadlines(ctx, maddr, head.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
parts := map[uint64][]api.Partition{}
|
||||
for dlIdx := range deadlines {
|
||||
part, err := lapi.StateMinerPartitions(ctx, maddr, uint64(dlIdx), types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
parts[uint64(dlIdx)] = part
|
||||
}
|
||||
|
||||
proving := uint64(0)
|
||||
faults := uint64(0)
|
||||
recovering := uint64(0)
|
||||
|
||||
for _, partitions := range parts {
|
||||
for _, partition := range partitions {
|
||||
sc, err := partition.LiveSectors.Count()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
proving += sc
|
||||
|
||||
fc, err := partition.FaultySectors.Count()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
faults += fc
|
||||
|
||||
rc, err := partition.RecoveringSectors.Count()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
recovering += rc
|
||||
}
|
||||
}
|
||||
|
||||
var faultPerc float64
|
||||
if proving > 0 {
|
||||
faultPerc = float64(faults*10000/proving) / 100
|
||||
}
|
||||
|
||||
s := ProvingInfoState{
|
||||
CurrentEpoch: cd.CurrentEpoch,
|
||||
ProvingPeriodStart: cd.PeriodStart,
|
||||
Faults: faults,
|
||||
ProvenSectors: proving,
|
||||
FaultPercent: faultPerc,
|
||||
Recoveries: recovering,
|
||||
DeadlineIndex: cd.Index,
|
||||
DeadlineOpen: cd.Open,
|
||||
DeadlineClose: cd.Close,
|
||||
DeadlineChallenge: cd.Challenge,
|
||||
DeadlineFaultCutoff: cd.FaultCutoff,
|
||||
WPoStProvingPeriod: cd.WPoStProvingPeriod,
|
||||
}
|
||||
|
||||
if cd.Index < cd.WPoStPeriodDeadlines {
|
||||
for _, partition := range parts[cd.Index] {
|
||||
sc, err := partition.LiveSectors.Count()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s.DeadlineSectors += sc
|
||||
}
|
||||
}
|
||||
|
||||
return &s, nil
|
||||
}
|
||||
|
||||
func epochTime(curr, e abi.ChainEpoch) string {
|
||||
switch {
|
||||
case curr > e:
|
||||
return fmt.Sprintf("%d (%s ago)", e, time.Second*time.Duration(int64(build.BlockDelaySecs)*int64(curr-e)))
|
||||
case curr == e:
|
||||
return fmt.Sprintf("%d (now)", e)
|
||||
case curr < e:
|
||||
return fmt.Sprintf("%d (in %s)", e, time.Second*time.Duration(int64(build.BlockDelaySecs)*int64(e-curr)))
|
||||
}
|
||||
|
||||
panic("math broke")
|
||||
}
|
||||
|
||||
type ProvingDeadlines struct {
|
||||
Deadlines []DeadlineInfo
|
||||
}
|
||||
|
||||
type DeadlineInfo struct {
|
||||
Sectors uint64
|
||||
Partitions int
|
||||
Proven uint64
|
||||
Current bool
|
||||
}
|
||||
|
||||
func (d *ProvingDeadlines) MarshalPlainText() ([]byte, error) {
|
||||
w := new(bytes.Buffer)
|
||||
tw := tabwriter.NewWriter(w, 2, 4, 2, ' ', 0)
|
||||
_, _ = fmt.Fprintln(tw, "deadline\tsectors\tpartitions\tproven")
|
||||
|
||||
for i, di := range d.Deadlines {
|
||||
var cur string
|
||||
if di.Current {
|
||||
cur += "\t(current)"
|
||||
}
|
||||
_, _ = fmt.Fprintf(tw, "%d\t%d\t%d\t%d%s\n", i, di.Sectors, di.Partitions, di.Proven, cur)
|
||||
}
|
||||
tw.Flush()
|
||||
return w.Bytes(), nil
|
||||
}
|
||||
|
||||
func provingDeadlines(t *testkit.TestEnvironment, m *testkit.LotusMiner, maddr address.Address, height abi.ChainEpoch) (*ProvingDeadlines, error) {
|
||||
lapi := m.FullApi
|
||||
ctx := context.Background()
|
||||
|
||||
deadlines, err := lapi.StateMinerDeadlines(ctx, maddr, types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
di, err := lapi.StateMinerProvingDeadline(ctx, maddr, types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
infos := make([]DeadlineInfo, 0, len(deadlines))
|
||||
for dlIdx, deadline := range deadlines {
|
||||
partitions, err := lapi.StateMinerPartitions(ctx, maddr, uint64(dlIdx), types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
provenPartitions, err := deadline.PostSubmissions.Count()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var cur string
|
||||
if di.Index == uint64(dlIdx) {
|
||||
cur += "\t(current)"
|
||||
}
|
||||
|
||||
outInfo := DeadlineInfo{
|
||||
//Sectors: c,
|
||||
Partitions: len(partitions),
|
||||
Proven: provenPartitions,
|
||||
Current: di.Index == uint64(dlIdx),
|
||||
}
|
||||
infos = append(infos, outInfo)
|
||||
//_, _ = fmt.Fprintf(tw, "%d\t%d\t%d%s\n", dlIdx, len(partitions), provenPartitions, cur)
|
||||
}
|
||||
|
||||
return &ProvingDeadlines{Deadlines: infos}, nil
|
||||
}
|
||||
|
||||
type SectorInfo struct {
|
||||
Sectors []abi.SectorNumber
|
||||
SectorStates map[abi.SectorNumber]api.SectorInfo
|
||||
Committed []abi.SectorNumber
|
||||
Proving []abi.SectorNumber
|
||||
}
|
||||
|
||||
func (i *SectorInfo) MarshalPlainText() ([]byte, error) {
|
||||
provingIDs := make(map[abi.SectorNumber]struct{}, len(i.Proving))
|
||||
for _, id := range i.Proving {
|
||||
provingIDs[id] = struct{}{}
|
||||
}
|
||||
commitedIDs := make(map[abi.SectorNumber]struct{}, len(i.Committed))
|
||||
for _, id := range i.Committed {
|
||||
commitedIDs[id] = struct{}{}
|
||||
}
|
||||
|
||||
w := new(bytes.Buffer)
|
||||
tw := tabwriter.NewWriter(w, 8, 4, 1, ' ', 0)
|
||||
|
||||
for _, s := range i.Sectors {
|
||||
_, inSSet := commitedIDs[s]
|
||||
_, inPSet := provingIDs[s]
|
||||
|
||||
st, ok := i.SectorStates[s]
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
fmt.Fprintf(tw, "%d: %s\tsSet: %s\tpSet: %s\ttktH: %d\tseedH: %d\tdeals: %v\n",
|
||||
s,
|
||||
st.State,
|
||||
yesno(inSSet),
|
||||
yesno(inPSet),
|
||||
st.Ticket.Epoch,
|
||||
st.Seed.Epoch,
|
||||
st.Deals,
|
||||
)
|
||||
}
|
||||
|
||||
if err := tw.Flush(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return w.Bytes(), nil
|
||||
}
|
||||
|
||||
func sectorsList(t *testkit.TestEnvironment, m *testkit.LotusMiner, maddr address.Address, w io.Writer, height abi.ChainEpoch) (*SectorInfo, error) {
|
||||
node := m.FullApi
|
||||
ctx := context.Background()
|
||||
|
||||
list, err := m.MinerApi.SectorsList(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
activeSet, err := node.StateMinerActiveSectors(ctx, maddr, types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
activeIDs := make(map[abi.SectorNumber]struct{}, len(activeSet))
|
||||
for _, info := range activeSet {
|
||||
activeIDs[info.SectorNumber] = struct{}{}
|
||||
}
|
||||
|
||||
sset, err := node.StateMinerSectors(ctx, maddr, nil, types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
commitedIDs := make(map[abi.SectorNumber]struct{}, len(activeSet))
|
||||
for _, info := range sset {
|
||||
commitedIDs[info.SectorNumber] = struct{}{}
|
||||
}
|
||||
|
||||
sort.Slice(list, func(i, j int) bool {
|
||||
return list[i] < list[j]
|
||||
})
|
||||
|
||||
i := SectorInfo{Sectors: list, SectorStates: make(map[abi.SectorNumber]api.SectorInfo, len(list))}
|
||||
|
||||
for _, s := range list {
|
||||
st, err := m.MinerApi.SectorsStatus(ctx, s, true)
|
||||
if err != nil {
|
||||
fmt.Fprintf(w, "%d:\tError: %s\n", s, err)
|
||||
continue
|
||||
}
|
||||
i.SectorStates[s] = st
|
||||
}
|
||||
return &i, nil
|
||||
}
|
||||
|
||||
func yesno(b bool) string {
|
||||
if b {
|
||||
return "YES"
|
||||
}
|
||||
return "NO"
|
||||
}
|
||||
|
||||
type MinerInfo struct {
|
||||
MinerAddr address.Address
|
||||
SectorSize string
|
||||
|
||||
MinerPower *api.MinerPower
|
||||
|
||||
CommittedBytes big.Int
|
||||
ProvingBytes big.Int
|
||||
FaultyBytes big.Int
|
||||
FaultyPercentage float64
|
||||
|
||||
Balance big.Int
|
||||
PreCommitDeposits big.Int
|
||||
LockedFunds big.Int
|
||||
AvailableFunds big.Int
|
||||
WorkerBalance big.Int
|
||||
MarketEscrow big.Int
|
||||
MarketLocked big.Int
|
||||
|
||||
SectorStateCounts map[sealing.SectorState]int
|
||||
}
|
||||
|
||||
func (i *MinerInfo) MarshalPlainText() ([]byte, error) {
|
||||
w := new(bytes.Buffer)
|
||||
fmt.Fprintf(w, "Miner: %s\n", i.MinerAddr)
|
||||
fmt.Fprintf(w, "Sector Size: %s\n", i.SectorSize)
|
||||
|
||||
pow := i.MinerPower
|
||||
rpercI := types.BigDiv(types.BigMul(pow.MinerPower.RawBytePower, types.NewInt(1000000)), pow.TotalPower.RawBytePower)
|
||||
qpercI := types.BigDiv(types.BigMul(pow.MinerPower.QualityAdjPower, types.NewInt(1000000)), pow.TotalPower.QualityAdjPower)
|
||||
|
||||
fmt.Fprintf(w, "Byte Power: %s / %s (%0.4f%%)\n",
|
||||
types.SizeStr(pow.MinerPower.RawBytePower),
|
||||
types.SizeStr(pow.TotalPower.RawBytePower),
|
||||
float64(rpercI.Int64())/10000)
|
||||
|
||||
fmt.Fprintf(w, "Actual Power: %s / %s (%0.4f%%)\n",
|
||||
types.DeciStr(pow.MinerPower.QualityAdjPower),
|
||||
types.DeciStr(pow.TotalPower.QualityAdjPower),
|
||||
float64(qpercI.Int64())/10000)
|
||||
|
||||
fmt.Fprintf(w, "\tCommitted: %s\n", types.SizeStr(i.CommittedBytes))
|
||||
|
||||
if i.FaultyBytes.Int == nil || i.FaultyBytes.IsZero() {
|
||||
fmt.Fprintf(w, "\tProving: %s\n", types.SizeStr(i.ProvingBytes))
|
||||
} else {
|
||||
fmt.Fprintf(w, "\tProving: %s (%s Faulty, %.2f%%)\n",
|
||||
types.SizeStr(i.ProvingBytes),
|
||||
types.SizeStr(i.FaultyBytes),
|
||||
i.FaultyPercentage)
|
||||
}
|
||||
|
||||
if !i.MinerPower.HasMinPower {
|
||||
fmt.Fprintf(w, "Below minimum power threshold, no blocks will be won\n")
|
||||
} else {
|
||||
expWinChance := float64(types.BigMul(qpercI, types.NewInt(build.BlocksPerEpoch)).Int64()) / 1000000
|
||||
if expWinChance > 0 {
|
||||
if expWinChance > 1 {
|
||||
expWinChance = 1
|
||||
}
|
||||
winRate := time.Duration(float64(time.Second*time.Duration(build.BlockDelaySecs)) / expWinChance)
|
||||
winPerDay := float64(time.Hour*24) / float64(winRate)
|
||||
|
||||
fmt.Fprintln(w, "Expected block win rate: ")
|
||||
fmt.Fprintf(w, "%.4f/day (every %s)\n", winPerDay, winRate.Truncate(time.Second))
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Fprintf(w, "Miner Balance: %s\n", types.FIL(i.Balance))
|
||||
fmt.Fprintf(w, "\tPreCommit: %s\n", types.FIL(i.PreCommitDeposits))
|
||||
fmt.Fprintf(w, "\tLocked: %s\n", types.FIL(i.LockedFunds))
|
||||
fmt.Fprintf(w, "\tAvailable: %s\n", types.FIL(i.AvailableFunds))
|
||||
fmt.Fprintf(w, "Worker Balance: %s\n", types.FIL(i.WorkerBalance))
|
||||
fmt.Fprintf(w, "Market (Escrow): %s\n", types.FIL(i.MarketEscrow))
|
||||
fmt.Fprintf(w, "Market (Locked): %s\n\n", types.FIL(i.MarketLocked))
|
||||
|
||||
buckets := i.SectorStateCounts
|
||||
|
||||
var sorted []stateMeta
|
||||
for state, i := range buckets {
|
||||
sorted = append(sorted, stateMeta{i: i, state: state})
|
||||
}
|
||||
|
||||
sort.Slice(sorted, func(i, j int) bool {
|
||||
return stateOrder[sorted[i].state].i < stateOrder[sorted[j].state].i
|
||||
})
|
||||
|
||||
for _, s := range sorted {
|
||||
_, _ = fmt.Fprintf(w, "\t%s: %d\n", s.state, s.i)
|
||||
}
|
||||
|
||||
return w.Bytes(), nil
|
||||
}
|
||||
|
||||
func info(t *testkit.TestEnvironment, m *testkit.LotusMiner, maddr address.Address, w io.Writer, height abi.ChainEpoch) (*MinerInfo, error) {
|
||||
api := m.FullApi
|
||||
ctx := context.Background()
|
||||
|
||||
ts, err := api.ChainHead(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
mact, err := api.StateGetActor(ctx, maddr, ts.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
i := MinerInfo{MinerAddr: maddr}
|
||||
|
||||
// Sector size
|
||||
mi, err := api.StateMinerInfo(ctx, maddr, ts.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
i.SectorSize = types.SizeStr(types.NewInt(uint64(mi.SectorSize)))
|
||||
|
||||
i.MinerPower, err = api.StateMinerPower(ctx, maddr, ts.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
secCounts, err := api.StateMinerSectorCount(ctx, maddr, ts.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
faults, err := api.StateMinerFaults(ctx, maddr, ts.Key())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
nfaults, err := faults.Count()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
i.CommittedBytes = types.BigMul(types.NewInt(secCounts.Live), types.NewInt(uint64(mi.SectorSize)))
|
||||
i.ProvingBytes = types.BigMul(types.NewInt(secCounts.Active), types.NewInt(uint64(mi.SectorSize)))
|
||||
|
||||
if nfaults != 0 {
|
||||
if secCounts.Live != 0 {
|
||||
i.FaultyPercentage = float64(10000*nfaults/secCounts.Live) / 100.
|
||||
}
|
||||
i.FaultyBytes = types.BigMul(types.NewInt(nfaults), types.NewInt(uint64(mi.SectorSize)))
|
||||
}
|
||||
|
||||
stor := store.ActorStore(ctx, apibstore.NewAPIBlockstore(api))
|
||||
mas, err := miner.Load(stor, mact)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
funds, err := mas.LockedFunds()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
i.Balance = mact.Balance
|
||||
i.PreCommitDeposits = funds.PreCommitDeposits
|
||||
i.LockedFunds = funds.VestingFunds
|
||||
i.AvailableFunds, err = mas.AvailableBalance(mact.Balance)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
wb, err := api.WalletBalance(ctx, mi.Worker)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
i.WorkerBalance = wb
|
||||
|
||||
mb, err := api.StateMarketBalance(ctx, maddr, types.EmptyTSK)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
i.MarketEscrow = mb.Escrow
|
||||
i.MarketLocked = mb.Locked
|
||||
|
||||
sectors, err := m.MinerApi.SectorsList(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
buckets := map[sealing.SectorState]int{
|
||||
"Total": len(sectors),
|
||||
}
|
||||
for _, s := range sectors {
|
||||
st, err := m.MinerApi.SectorsStatus(ctx, s, true)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
buckets[sealing.SectorState(st.State)]++
|
||||
}
|
||||
i.SectorStateCounts = buckets
|
||||
|
||||
return &i, nil
|
||||
}
|
||||
|
||||
type stateMeta struct {
|
||||
i int
|
||||
state sealing.SectorState
|
||||
}
|
||||
|
||||
var stateOrder = map[sealing.SectorState]stateMeta{}
|
||||
var stateList = []stateMeta{
|
||||
{state: "Total"},
|
||||
{state: sealing.Proving},
|
||||
|
||||
{state: sealing.UndefinedSectorState},
|
||||
{state: sealing.Empty},
|
||||
{state: sealing.Packing},
|
||||
{state: sealing.PreCommit1},
|
||||
{state: sealing.PreCommit2},
|
||||
{state: sealing.PreCommitting},
|
||||
{state: sealing.PreCommitWait},
|
||||
{state: sealing.WaitSeed},
|
||||
{state: sealing.Committing},
|
||||
{state: sealing.CommitWait},
|
||||
{state: sealing.FinalizeSector},
|
||||
|
||||
{state: sealing.FailedUnrecoverable},
|
||||
{state: sealing.SealPreCommit1Failed},
|
||||
{state: sealing.SealPreCommit2Failed},
|
||||
{state: sealing.PreCommitFailed},
|
||||
{state: sealing.ComputeProofFailed},
|
||||
{state: sealing.CommitFailed},
|
||||
{state: sealing.PackingFailed},
|
||||
{state: sealing.FinalizeFailed},
|
||||
{state: sealing.Faulty},
|
||||
{state: sealing.FaultReported},
|
||||
{state: sealing.FaultedFinal},
|
||||
}
|
||||
|
||||
func init() {
|
||||
for i, state := range stateList {
|
||||
stateOrder[state.state] = stateMeta{
|
||||
i: i,
|
||||
}
|
||||
}
|
||||
}
|
295
testplans/lotus-soup/rfwp/diffs.go
Normal file
@ -0,0 +1,295 @@
|
||||
package rfwp
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"os"
|
||||
"sort"
|
||||
"sync"
|
||||
|
||||
"github.com/filecoin-project/go-state-types/abi"
|
||||
"github.com/filecoin-project/go-state-types/big"
|
||||
"github.com/filecoin-project/oni/lotus-soup/testkit"
|
||||
)
|
||||
|
||||
type ChainState struct {
|
||||
sync.Mutex
|
||||
|
||||
PrevHeight abi.ChainEpoch
|
||||
DiffHeight map[string]map[string]map[abi.ChainEpoch]big.Int // height -> value
|
||||
DiffValue map[string]map[string]map[string][]abi.ChainEpoch // value -> []height
|
||||
DiffCmp map[string]map[string]map[string][]abi.ChainEpoch // difference (height, height-1) -> []height
|
||||
valueTypes []string
|
||||
}
|
||||
|
||||
func NewChainState() *ChainState {
|
||||
cs := &ChainState{}
|
||||
cs.PrevHeight = abi.ChainEpoch(-1)
|
||||
cs.DiffHeight = make(map[string]map[string]map[abi.ChainEpoch]big.Int) // height -> value
|
||||
cs.DiffValue = make(map[string]map[string]map[string][]abi.ChainEpoch) // value -> []height
|
||||
cs.DiffCmp = make(map[string]map[string]map[string][]abi.ChainEpoch) // difference (height, height-1) -> []height
|
||||
cs.valueTypes = []string{"MinerPower", "CommittedBytes", "ProvingBytes", "Balance", "PreCommitDeposits", "LockedFunds", "AvailableFunds", "WorkerBalance", "MarketEscrow", "MarketLocked", "Faults", "ProvenSectors", "Recoveries"}
|
||||
return cs
|
||||
}
|
||||
|
||||
var (
|
||||
cs *ChainState
|
||||
)
|
||||
|
||||
func init() {
|
||||
cs = NewChainState()
|
||||
}
|
||||
|
||||
func printDiff(t *testkit.TestEnvironment, mi *MinerInfo, height abi.ChainEpoch) {
|
||||
maddr := mi.MinerAddr.String()
|
||||
filename := fmt.Sprintf("%s%cdiff-%s-%d", t.TestOutputsPath, os.PathSeparator, maddr, height)
|
||||
|
||||
f, err := os.Create(filename)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
w := bufio.NewWriter(f)
|
||||
defer w.Flush()
|
||||
|
||||
keys := make([]string, 0, len(cs.DiffCmp[maddr]))
|
||||
for k := range cs.DiffCmp[maddr] {
|
||||
keys = append(keys, k)
|
||||
}
|
||||
sort.Strings(keys)
|
||||
|
||||
fmt.Fprintln(w, "=====", maddr, "=====")
|
||||
for i, valueName := range keys {
|
||||
fmt.Fprintln(w, toCharStr(i), "=====", valueName, "=====")
|
||||
if len(cs.DiffCmp[maddr][valueName]) > 0 {
|
||||
fmt.Fprintf(w, "%s diff of |\n", toCharStr(i))
|
||||
}
|
||||
|
||||
for difference, heights := range cs.DiffCmp[maddr][valueName] {
|
||||
fmt.Fprintf(w, "%s diff of %30v at heights %v\n", toCharStr(i), difference, heights)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func recordDiff(mi *MinerInfo, ps *ProvingInfoState, height abi.ChainEpoch) {
|
||||
maddr := mi.MinerAddr.String()
|
||||
if _, ok := cs.DiffHeight[maddr]; !ok {
|
||||
cs.DiffHeight[maddr] = make(map[string]map[abi.ChainEpoch]big.Int)
|
||||
cs.DiffValue[maddr] = make(map[string]map[string][]abi.ChainEpoch)
|
||||
cs.DiffCmp[maddr] = make(map[string]map[string][]abi.ChainEpoch)
|
||||
|
||||
for _, v := range cs.valueTypes {
|
||||
cs.DiffHeight[maddr][v] = make(map[abi.ChainEpoch]big.Int)
|
||||
cs.DiffValue[maddr][v] = make(map[string][]abi.ChainEpoch)
|
||||
cs.DiffCmp[maddr][v] = make(map[string][]abi.ChainEpoch)
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.MinerPower.MinerPower.RawBytePower)
|
||||
cs.DiffHeight[maddr]["MinerPower"][height] = value
|
||||
cs.DiffValue[maddr]["MinerPower"][value.String()] = append(cs.DiffValue[maddr]["MinerPower"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["MinerPower"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["MinerPower"][cmp.String()] = append(cs.DiffCmp[maddr]["MinerPower"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.CommittedBytes)
|
||||
cs.DiffHeight[maddr]["CommittedBytes"][height] = value
|
||||
cs.DiffValue[maddr]["CommittedBytes"][value.String()] = append(cs.DiffValue[maddr]["CommittedBytes"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["CommittedBytes"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["CommittedBytes"][cmp.String()] = append(cs.DiffCmp[maddr]["CommittedBytes"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.ProvingBytes)
|
||||
cs.DiffHeight[maddr]["ProvingBytes"][height] = value
|
||||
cs.DiffValue[maddr]["ProvingBytes"][value.String()] = append(cs.DiffValue[maddr]["ProvingBytes"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["ProvingBytes"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["ProvingBytes"][cmp.String()] = append(cs.DiffCmp[maddr]["ProvingBytes"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.Balance)
|
||||
roundBalance(&value)
|
||||
cs.DiffHeight[maddr]["Balance"][height] = value
|
||||
cs.DiffValue[maddr]["Balance"][value.String()] = append(cs.DiffValue[maddr]["Balance"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["Balance"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["Balance"][cmp.String()] = append(cs.DiffCmp[maddr]["Balance"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.PreCommitDeposits)
|
||||
cs.DiffHeight[maddr]["PreCommitDeposits"][height] = value
|
||||
cs.DiffValue[maddr]["PreCommitDeposits"][value.String()] = append(cs.DiffValue[maddr]["PreCommitDeposits"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["PreCommitDeposits"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["PreCommitDeposits"][cmp.String()] = append(cs.DiffCmp[maddr]["PreCommitDeposits"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.LockedFunds)
|
||||
roundBalance(&value)
|
||||
cs.DiffHeight[maddr]["LockedFunds"][height] = value
|
||||
cs.DiffValue[maddr]["LockedFunds"][value.String()] = append(cs.DiffValue[maddr]["LockedFunds"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["LockedFunds"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["LockedFunds"][cmp.String()] = append(cs.DiffCmp[maddr]["LockedFunds"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.AvailableFunds)
|
||||
roundBalance(&value)
|
||||
cs.DiffHeight[maddr]["AvailableFunds"][height] = value
|
||||
cs.DiffValue[maddr]["AvailableFunds"][value.String()] = append(cs.DiffValue[maddr]["AvailableFunds"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["AvailableFunds"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["AvailableFunds"][cmp.String()] = append(cs.DiffCmp[maddr]["AvailableFunds"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.WorkerBalance)
|
||||
cs.DiffHeight[maddr]["WorkerBalance"][height] = value
|
||||
cs.DiffValue[maddr]["WorkerBalance"][value.String()] = append(cs.DiffValue[maddr]["WorkerBalance"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["WorkerBalance"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["WorkerBalance"][cmp.String()] = append(cs.DiffCmp[maddr]["WorkerBalance"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.MarketEscrow)
|
||||
cs.DiffHeight[maddr]["MarketEscrow"][height] = value
|
||||
cs.DiffValue[maddr]["MarketEscrow"][value.String()] = append(cs.DiffValue[maddr]["MarketEscrow"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["MarketEscrow"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["MarketEscrow"][cmp.String()] = append(cs.DiffCmp[maddr]["MarketEscrow"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.Int(mi.MarketLocked)
|
||||
cs.DiffHeight[maddr]["MarketLocked"][height] = value
|
||||
cs.DiffValue[maddr]["MarketLocked"][value.String()] = append(cs.DiffValue[maddr]["MarketLocked"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["MarketLocked"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["MarketLocked"][cmp.String()] = append(cs.DiffCmp[maddr]["MarketLocked"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.NewInt(int64(ps.Faults))
|
||||
cs.DiffHeight[maddr]["Faults"][height] = value
|
||||
cs.DiffValue[maddr]["Faults"][value.String()] = append(cs.DiffValue[maddr]["Faults"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["Faults"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["Faults"][cmp.String()] = append(cs.DiffCmp[maddr]["Faults"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.NewInt(int64(ps.ProvenSectors))
|
||||
cs.DiffHeight[maddr]["ProvenSectors"][height] = value
|
||||
cs.DiffValue[maddr]["ProvenSectors"][value.String()] = append(cs.DiffValue[maddr]["ProvenSectors"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["ProvenSectors"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["ProvenSectors"][cmp.String()] = append(cs.DiffCmp[maddr]["ProvenSectors"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{
|
||||
value := big.NewInt(int64(ps.Recoveries))
|
||||
cs.DiffHeight[maddr]["Recoveries"][height] = value
|
||||
cs.DiffValue[maddr]["Recoveries"][value.String()] = append(cs.DiffValue[maddr]["Recoveries"][value.String()], height)
|
||||
|
||||
if cs.PrevHeight != -1 {
|
||||
prevValue := cs.DiffHeight[maddr]["Recoveries"][cs.PrevHeight]
|
||||
cmp := big.Zero()
|
||||
cmp.Sub(value.Int, prevValue.Int) // value - prevValue
|
||||
if big.Cmp(cmp, big.Zero()) != 0 {
|
||||
cs.DiffCmp[maddr]["Recoveries"][cmp.String()] = append(cs.DiffCmp[maddr]["Recoveries"][cmp.String()], height)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
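// roundBalance truncates i to 10^15 attoFIL (0.001 FIL) precision, so that
// sub-milliFIL fluctuations do not show up in the recorded balance diffs.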
func roundBalance(i *big.Int) {
|
||||
*i = big.Div(*i, big.NewInt(1000000000000000))
|
||||
*i = big.Mul(*i, big.NewInt(1000000000000000))
|
||||
}
|
||||
|
||||
func toCharStr(i int) string {
|
||||
return string(rune('a' + i))
|
||||
}
|
347
testplans/lotus-soup/rfwp/e2e.go
Normal file
@ -0,0 +1,347 @@
|
||||
package rfwp
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"math/rand"
|
||||
"os"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/filecoin-project/go-state-types/abi"
|
||||
"github.com/filecoin-project/go-state-types/big"
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/oni/lotus-soup/testkit"
|
||||
"golang.org/x/sync/errgroup"
|
||||
)
|
||||
|
||||
func RecoveryFromFailedWindowedPoStE2E(t *testkit.TestEnvironment) error {
|
||||
switch t.Role {
|
||||
case "bootstrapper":
|
||||
return testkit.HandleDefaultRole(t)
|
||||
case "client":
|
||||
return handleClient(t)
|
||||
case "miner":
|
||||
return handleMiner(t)
|
||||
case "miner-full-slash":
|
||||
return handleMinerFullSlash(t)
|
||||
case "miner-partial-slash":
|
||||
return handleMinerPartialSlash(t)
|
||||
}
|
||||
|
||||
return fmt.Errorf("unknown role: %s", t.Role)
|
||||
}
|
||||
|
||||
func handleMiner(t *testkit.TestEnvironment) error {
|
||||
m, err := testkit.PrepareMiner(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
myActorAddr, err := m.MinerApi.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.RecordMessage("running miner: %s", myActorAddr)
|
||||
|
||||
if t.GroupSeq == 1 {
|
||||
go FetchChainState(t, m)
|
||||
}
|
||||
|
||||
go UpdateChainState(t, m)
|
||||
|
||||
minersToBeSlashed := 2
|
||||
ch := make(chan testkit.SlashedMinerMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, testkit.SlashedMinerTopic, ch)
|
||||
var eg errgroup.Group
|
||||
|
||||
for i := 0; i < minersToBeSlashed; i++ {
|
||||
select {
|
||||
case slashedMiner := <-ch:
|
||||
// wait for slash
|
||||
eg.Go(func() error {
|
||||
select {
|
||||
case <-waitForSlash(t, slashedMiner):
|
||||
case err = <-t.SyncClient.MustBarrier(ctx, testkit.StateAbortTest, 1).C:
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return errors.New("got abort signal, exitting")
|
||||
}
|
||||
return nil
|
||||
})
|
||||
case err := <-sub.Done():
|
||||
return fmt.Errorf("got error while waiting for slashed miners: %w", err)
|
||||
case err := <-t.SyncClient.MustBarrier(ctx, testkit.StateAbortTest, 1).C:
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return errors.New("got abort signal, exitting")
|
||||
}
|
||||
}
|
||||
|
||||
errc := make(chan error)
|
||||
go func() {
|
||||
errc <- eg.Wait()
|
||||
}()
|
||||
|
||||
select {
|
||||
case err := <-errc:
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
case err := <-t.SyncClient.MustBarrier(ctx, testkit.StateAbortTest, 1).C:
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return errors.New("got abort signal, exitting")
|
||||
}
|
||||
|
||||
t.SyncClient.MustSignalAndWait(ctx, testkit.StateDone, t.TestInstanceCount)
|
||||
return nil
|
||||
}
|
||||
|
||||
func waitForSlash(t *testkit.TestEnvironment, msg testkit.SlashedMinerMsg) chan error {
|
||||
// assert that the balance was reduced by the sector fee 5 times
// assert that the balance was reduced by the termination fee 2 times
// assert that the balance was increased by the block reward 10 times
// assert that power increased accordingly once (after the sector is sealed)
// assert that power decreased accordingly once (after the sector is announced faulty)
|
||||
slashedMiner := msg.MinerActorAddr
|
||||
|
||||
errc := make(chan error)
|
||||
go func() {
|
||||
foundSlashConditions := false
|
||||
for range time.Tick(10 * time.Second) {
|
||||
if foundSlashConditions {
|
||||
close(errc)
|
||||
return
|
||||
}
|
||||
t.RecordMessage("wait for slashing, tick")
|
||||
func() {
|
||||
cs.Lock()
|
||||
defer cs.Unlock()
|
||||
|
||||
negativeAmounts := []big.Int{}
|
||||
negativeDiffs := make(map[big.Int][]abi.ChainEpoch)
|
||||
|
||||
for am, heights := range cs.DiffCmp[slashedMiner.String()]["LockedFunds"] {
|
||||
amount, err := big.FromString(am)
|
||||
if err != nil {
|
||||
errc <- fmt.Errorf("cannot parse LockedFunds amount: %w:", err)
|
||||
return
|
||||
}
|
||||
|
||||
// amount is negative => slash condition
|
||||
if big.Cmp(amount, big.Zero()) < 0 {
|
||||
negativeDiffs[amount] = heights
|
||||
negativeAmounts = append(negativeAmounts, amount)
|
||||
}
|
||||
}
|
||||
|
||||
t.RecordMessage("negative diffs: %d", len(negativeDiffs))
|
||||
if len(negativeDiffs) < 3 {
|
||||
return
|
||||
}
|
||||
|
||||
sort.Slice(negativeAmounts, func(i, j int) bool { return big.Cmp(negativeAmounts[i], negativeAmounts[j]) > 0 })
|
||||
|
||||
// TODO: confirm the largest is > 18 filecoin
|
||||
// TODO: confirm the next largest is > 9 filecoin
|
||||
foundSlashConditions = true
|
||||
}()
|
||||
}
|
||||
}()
|
||||
|
||||
return errc
|
||||
}
|
||||
|
||||
func handleMinerFullSlash(t *testkit.TestEnvironment) error {
|
||||
m, err := testkit.PrepareMiner(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
myActorAddr, err := m.MinerApi.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.RecordMessage("running miner, full slash: %s", myActorAddr)
|
||||
|
||||
// TODO: wait until we have sealed a deal for a client
|
||||
time.Sleep(240 * time.Second)
|
||||
|
||||
t.RecordMessage("shutting down miner, full slash: %s", myActorAddr)
|
||||
|
||||
ctxt, cancel := context.WithTimeout(ctx, 10*time.Second)
|
||||
defer cancel()
|
||||
err = m.StopFn(ctxt)
|
||||
if err != nil {
|
||||
//return err
|
||||
t.RecordMessage("err from StopFn: %s", err.Error()) // TODO: expect this to be fixed on Lotus
|
||||
}
|
||||
|
||||
t.RecordMessage("shutdown miner, full slash: %s", myActorAddr)
|
||||
|
||||
t.SyncClient.MustPublish(ctx, testkit.SlashedMinerTopic, testkit.SlashedMinerMsg{
|
||||
MinerActorAddr: myActorAddr,
|
||||
})
|
||||
|
||||
t.SyncClient.MustSignalAndWait(ctx, testkit.StateDone, t.TestInstanceCount)
|
||||
return nil
|
||||
}
|
||||
|
||||
func handleMinerPartialSlash(t *testkit.TestEnvironment) error {
|
||||
m, err := testkit.PrepareMiner(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
myActorAddr, err := m.MinerApi.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.RecordMessage("running miner, partial slash: %s", myActorAddr)
|
||||
|
||||
// TODO: wait until we have sealed a deal for a client
|
||||
time.Sleep(185 * time.Second)
|
||||
|
||||
t.RecordMessage("shutting down miner, partial slash: %s", myActorAddr)
|
||||
|
||||
ctxt, cancel := context.WithTimeout(ctx, 10*time.Second)
|
||||
defer cancel()
|
||||
err = m.StopFn(ctxt)
|
||||
if err != nil {
|
||||
//return err
|
||||
t.RecordMessage("err from StopFn: %s", err.Error()) // TODO: expect this to be fixed on Lotus
|
||||
}
|
||||
|
||||
t.RecordMessage("shutdown miner, partial slash: %s", myActorAddr)
|
||||
|
||||
t.SyncClient.MustPublish(ctx, testkit.SlashedMinerTopic, testkit.SlashedMinerMsg{
|
||||
MinerActorAddr: myActorAddr,
|
||||
})
|
||||
|
||||
time.Sleep(300 * time.Second)
|
||||
|
||||
rm, err := testkit.RestoreMiner(t, m)
|
||||
if err != nil {
|
||||
t.RecordMessage("got err: %s", err.Error())
|
||||
return err
|
||||
}
|
||||
|
||||
myActorAddr, err = rm.MinerApi.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
t.RecordMessage("got err: %s", err.Error())
|
||||
return err
|
||||
}
|
||||
|
||||
t.RecordMessage("running miner again, partial slash: %s", myActorAddr)
|
||||
|
||||
time.Sleep(3600 * time.Second)
|
||||
|
||||
//t.SyncClient.MustSignalAndWait(ctx, testkit.StateDone, t.TestInstanceCount)
|
||||
return nil
|
||||
}
|
||||
|
||||
func handleClient(t *testkit.TestEnvironment) error {
|
||||
cl, err := testkit.PrepareClient(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// This is a client role
|
||||
t.RecordMessage("running client")
|
||||
|
||||
ctx := context.Background()
|
||||
client := cl.FullApi
|
||||
|
||||
time.Sleep(10 * time.Second)
|
||||
|
||||
// select a miner based on our GroupSeq (client 1 -> miner 1 ; client 2 -> miner 2)
|
||||
// this assumes that all client instances receive the same sorted MinerAddrs slice
|
||||
minerAddr := cl.MinerAddrs[t.InitContext.GroupSeq-1]
|
||||
if err := client.NetConnect(ctx, minerAddr.MinerNetAddrs); err != nil {
|
||||
return err
|
||||
}
|
||||
t.D().Counter(fmt.Sprintf("send-data-to,miner=%s", minerAddr.MinerActorAddr)).Inc(1)
|
||||
|
||||
t.RecordMessage("selected %s as the miner", minerAddr.MinerActorAddr)
|
||||
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
// generate 1800 bytes of random data
|
||||
data := make([]byte, 1800)
|
||||
rand.New(rand.NewSource(time.Now().UnixNano())).Read(data)
|
||||
|
||||
file, err := ioutil.TempFile("/tmp", "data")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer os.Remove(file.Name())
|
||||
|
||||
_, err = file.Write(data)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fcid, err := client.ClientImport(ctx, api.FileRef{Path: file.Name(), IsCAR: false})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
t.RecordMessage("file cid: %s", fcid)
|
||||
|
||||
// start deal
|
||||
t1 := time.Now()
|
||||
fastRetrieval := false
|
||||
deal := testkit.StartDeal(ctx, minerAddr.MinerActorAddr, client, fcid.Root, fastRetrieval)
|
||||
t.RecordMessage("started deal: %s", deal)
|
||||
|
||||
// this sleep is only necessary because deals don't immediately get logged in the dealstore; we should fix this
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
t.RecordMessage("waiting for deal to be sealed")
|
||||
testkit.WaitDealSealed(t, ctx, client, deal)
|
||||
t.D().ResettingHistogram("deal.sealed").Update(int64(time.Since(t1)))
|
||||
|
||||
// TODO: wait to stop miner (ideally get a signal, rather than sleep)
|
||||
time.Sleep(180 * time.Second)
|
||||
|
||||
t.RecordMessage("trying to retrieve %s", fcid)
|
||||
info, err := client.ClientGetDealInfo(ctx, *deal)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
carExport := true
|
||||
err = testkit.RetrieveData(t, ctx, client, fcid.Root, &info.PieceCID, carExport, data)
|
||||
if err != nil && strings.Contains(err.Error(), "cannot make retrieval deal for zero bytes") {
|
||||
t.D().Counter("deal.expect-slashing").Inc(1)
|
||||
} else if err != nil {
|
||||
// unknown error => fail test
|
||||
t.RecordFailure(err)
|
||||
|
||||
// send signal to abort test
|
||||
t.SyncClient.MustSignalEntry(ctx, testkit.StateAbortTest)
|
||||
|
||||
t.D().ResettingHistogram("deal.retrieved.err").Update(int64(time.Since(t1)))
|
||||
time.Sleep(10 * time.Second) // wait for metrics to be emitted
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
t.D().ResettingHistogram("deal.retrieved").Update(int64(time.Since(t1)))
|
||||
time.Sleep(10 * time.Second) // wait for metrics to be emitted
|
||||
|
||||
t.SyncClient.MustSignalAndWait(ctx, testkit.StateDone, t.TestInstanceCount) // TODO: not sure about this
|
||||
return nil
|
||||
}
|
66
testplans/lotus-soup/rfwp/html_chain_state.go
Normal file
66
testplans/lotus-soup/rfwp/html_chain_state.go
Normal file
@ -0,0 +1,66 @@
|
||||
package rfwp
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
|
||||
"github.com/filecoin-project/oni/lotus-soup/testkit"
|
||||
|
||||
"github.com/filecoin-project/go-address"
|
||||
"github.com/filecoin-project/go-state-types/abi"
|
||||
"github.com/filecoin-project/lotus/cli"
|
||||
tstats "github.com/filecoin-project/lotus/tools/stats"
|
||||
"github.com/ipfs/go-cid"
|
||||
)
|
||||
|
||||
func FetchChainState(t *testkit.TestEnvironment, m *testkit.LotusMiner) error {
|
||||
height := 0
|
||||
headlag := 3
|
||||
|
||||
ctx := context.Background()
|
||||
api := m.FullApi
|
||||
|
||||
tipsetsCh, err := tstats.GetTips(ctx, m.FullApi, abi.ChainEpoch(height), headlag)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for tipset := range tipsetsCh {
|
||||
err := func() error {
|
||||
filename := fmt.Sprintf("%s%cchain-state-%d.html", t.TestOutputsPath, os.PathSeparator, tipset.Height())
|
||||
file, err := os.Create(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
stout, err := api.StateCompute(ctx, tipset.Height(), nil, tipset.Key())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
codeCache := map[address.Address]cid.Cid{}
|
||||
getCode := func(addr address.Address) (cid.Cid, error) {
|
||||
if c, found := codeCache[addr]; found {
|
||||
return c, nil
|
||||
}
|
||||
|
||||
c, err := api.StateGetActor(ctx, addr, tipset.Key())
|
||||
if err != nil {
|
||||
return cid.Cid{}, err
|
||||
}
|
||||
|
||||
codeCache[addr] = c.Code
|
||||
return c.Code, nil
|
||||
}
|
||||
|
||||
return cli.ComputeStateHTMLTempl(file, tipset, stout, getCode)
|
||||
}()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
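FetchChainState blocks on the tipset channel and writes one HTML dump per epoch, so a plan will normally run it concurrently with the rest of the test. Below is a hedged sketch of how a miner role might kick it off; the `t` and `m` variables and the error handling are illustrative, not part of this file, and the import path for the rfwp package is assumed to match the one used elsewhere in this diff.

```go
// Hypothetical usage from the lotus-soup plan (package main), assuming
// a prepared *testkit.LotusMiner `m` and the plan's *testkit.TestEnvironment `t`,
// with the rfwp package imported as "github.com/filecoin-project/oni/lotus-soup/rfwp".
go func() {
	if err := rfwp.FetchChainState(t, m); err != nil {
		t.RecordMessage("error while fetching chain state: %s", err)
	}
}()
```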
120
testplans/lotus-soup/runner/main.go
Normal file
120
testplans/lotus-soup/runner/main.go
Normal file
@ -0,0 +1,120 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"os"
|
||||
"path"
|
||||
|
||||
"github.com/codeskyblue/go-sh"
|
||||
)
|
||||
|
||||
type jobDefinition struct {
|
||||
runNumber int
|
||||
compositionPath string
|
||||
outputDir string
|
||||
skipStdout bool
|
||||
}
|
||||
|
||||
type jobResult struct {
|
||||
job jobDefinition
|
||||
runError error
|
||||
}
|
||||
|
||||
func runComposition(job jobDefinition) jobResult {
|
||||
outputArchive := path.Join(job.outputDir, "test-outputs.tgz")
|
||||
cmd := sh.Command("testground", "run", "composition", "-f", job.compositionPath, "--collect", "-o", outputArchive)
|
||||
if err := os.MkdirAll(job.outputDir, os.ModePerm); err != nil {
|
||||
return jobResult{runError: fmt.Errorf("unable to make output directory: %w", err)}
|
||||
}
|
||||
|
||||
outPath := path.Join(job.outputDir, "run.out")
|
||||
outFile, err := os.Create(outPath)
|
||||
if err != nil {
|
||||
return jobResult{runError: fmt.Errorf("unable to create output file %s: %w", outPath, err)}
|
||||
}
|
||||
if job.skipStdout {
|
||||
cmd.Stdout = outFile
|
||||
} else {
|
||||
cmd.Stdout = io.MultiWriter(os.Stdout, outFile)
|
||||
}
|
||||
log.Printf("starting test run %d. writing testground client output to %s\n", job.runNumber, outPath)
|
||||
if err = cmd.Run(); err != nil {
|
||||
return jobResult{job: job, runError: err}
|
||||
}
|
||||
return jobResult{job: job}
|
||||
}
|
||||
|
||||
func worker(id int, jobs <-chan jobDefinition, results chan<- jobResult) {
|
||||
log.Printf("started worker %d\n", id)
|
||||
for j := range jobs {
|
||||
log.Printf("worker %d started test run %d\n", id, j.runNumber)
|
||||
results <- runComposition(j)
|
||||
}
|
||||
}
|
||||
|
||||
func buildComposition(compositionPath string, outputDir string) (string, error) {
|
||||
outComp := path.Join(outputDir, "composition.toml")
|
||||
err := sh.Command("cp", compositionPath, outComp).Run()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return outComp, sh.Command("testground", "build", "composition", "-w", "-f", outComp).Run()
|
||||
}
|
||||
|
||||
func main() {
|
||||
runs := flag.Int("runs", 1, "number of times to run composition")
|
||||
parallelism := flag.Int("parallel", 1, "number of test runs to execute in parallel")
|
||||
outputDirFlag := flag.String("output", "", "path to output directory (will use temp dir if unset)")
|
||||
flag.Parse()
|
||||
|
||||
if len(flag.Args()) != 1 {
|
||||
log.Fatal("must provide a single composition file path argument")
|
||||
}
|
||||
|
||||
outdir := *outputDirFlag
|
||||
if outdir == "" {
|
||||
var err error
|
||||
outdir, err = ioutil.TempDir(os.TempDir(), "oni-batch-run-")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
}
|
||||
if err := os.MkdirAll(outdir, os.ModePerm); err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
compositionPath := flag.Args()[0]
|
||||
|
||||
// first build the composition and write out the artifacts.
|
||||
// we copy to a temp file first to avoid modifying the original
|
||||
log.Printf("building composition %s\n", compositionPath)
|
||||
compositionPath, err := buildComposition(compositionPath, outdir)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
jobs := make(chan jobDefinition, *runs)
|
||||
results := make(chan jobResult, *runs)
|
||||
for w := 1; w <= *parallelism; w++ {
|
||||
go worker(w, jobs, results)
|
||||
}
|
||||
|
||||
for j := 1; j <= *runs; j++ {
|
||||
dir := path.Join(outdir, fmt.Sprintf("run-%d", j))
|
||||
skipStdout := *parallelism != 1
|
||||
jobs <- jobDefinition{runNumber: j, compositionPath: compositionPath, outputDir: dir, skipStdout: skipStdout}
|
||||
}
|
||||
close(jobs)
|
||||
|
||||
for i := 0; i < *runs; i++ {
|
||||
r := <-results
|
||||
if r.runError != nil {
|
||||
log.Printf("error running job %d: %s\n", r.job.runNumber, r.runError)
|
||||
}
|
||||
}
|
||||
}
|
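For orientation, the flags above mean a batch of runs would be launched with something like `runner -runs 10 -parallel 2 -output /tmp/oni-runs path/to/composition.toml` (the exact binary name depends on how this package is built). Each run then gets its own `run-N` directory under the output directory, holding the collected `test-outputs.tgz` archive and the captured `run.out` client output.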
35
testplans/lotus-soup/sanity.go
Normal file
35
testplans/lotus-soup/sanity.go
Normal file
@ -0,0 +1,35 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
)
|
||||
|
||||
func sanityCheck() {
|
||||
enhanceMsg := func(msg string, a ...interface{}) string {
|
||||
return fmt.Sprintf("sanity check: "+msg+"; if running on local:exec, make sure to run `make` from the root of the oni repo", a...)
|
||||
}
|
||||
|
||||
dir := "/var/tmp/filecoin-proof-parameters"
|
||||
stat, err := os.Stat(dir)
|
||||
if os.IsNotExist(err) {
|
||||
panic(enhanceMsg("proofs parameters not available in /var/tmp/filecoin-proof-parameters"))
|
||||
}
|
||||
if err != nil {
|
||||
panic(enhanceMsg("failed to stat /var/tmp/filecoin-proof-parameters: %s", err))
|
||||
}
|
||||
|
||||
if !stat.IsDir() {
|
||||
panic(enhanceMsg("/var/tmp/filecoin-proof-parameters is not a directory; aborting"))
|
||||
}
|
||||
|
||||
files, err := ioutil.ReadDir(dir)
|
||||
if err != nil {
|
||||
panic(enhanceMsg("failed list directory /var/tmp/filecoin-proof-parameters: %s", err))
|
||||
}
|
||||
|
||||
if len(files) == 0 {
|
||||
panic(enhanceMsg("no files in /var/tmp/filecoin-proof-parameters"))
|
||||
}
|
||||
}
|
108
testplans/lotus-soup/statemachine/statemachine.go
Normal file
108
testplans/lotus-soup/statemachine/statemachine.go
Normal file
@ -0,0 +1,108 @@
|
||||
package statemachine
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// This code has been shamelessly lifted from this blog post:
|
||||
// https://venilnoronha.io/a-simple-state-machine-framework-in-go
|
||||
// Many thanks to the author, Venil Noronha
|
||||
|
||||
// ErrEventRejected is the error returned when the state machine cannot process
|
||||
// an event in the state that it is in.
|
||||
var ErrEventRejected = errors.New("event rejected")
|
||||
|
||||
const (
|
||||
// Default represents the default state of the system.
|
||||
Default StateType = ""
|
||||
|
||||
// NoOp represents a no-op event.
|
||||
NoOp EventType = "NoOp"
|
||||
)
|
||||
|
||||
// StateType represents an extensible state type in the state machine.
|
||||
type StateType string
|
||||
|
||||
// EventType represents an extensible event type in the state machine.
|
||||
type EventType string
|
||||
|
||||
// EventContext represents the context to be passed to the action implementation.
|
||||
type EventContext interface{}
|
||||
|
||||
// Action represents the action to be executed in a given state.
|
||||
type Action interface {
|
||||
Execute(eventCtx EventContext) EventType
|
||||
}
|
||||
|
||||
// Events represents a mapping of events and states.
|
||||
type Events map[EventType]StateType
|
||||
|
||||
// State binds a state with an action and a set of events it can handle.
|
||||
type State struct {
|
||||
Action Action
|
||||
Events Events
|
||||
}
|
||||
|
||||
// States represents a mapping of states and their implementations.
|
||||
type States map[StateType]State
|
||||
|
||||
// StateMachine represents the state machine.
|
||||
type StateMachine struct {
|
||||
// Previous represents the previous state.
|
||||
Previous StateType
|
||||
|
||||
// Current represents the current state.
|
||||
Current StateType
|
||||
|
||||
// States holds the configuration of states and events handled by the state machine.
|
||||
States States
|
||||
|
||||
// mutex ensures that only 1 event is processed by the state machine at any given time.
|
||||
mutex sync.Mutex
|
||||
}
|
||||
|
||||
// getNextState returns the next state for the event given the machine's current
|
||||
// state, or an error if the event can't be handled in the given state.
|
||||
func (s *StateMachine) getNextState(event EventType) (StateType, error) {
|
||||
if state, ok := s.States[s.Current]; ok {
|
||||
if state.Events != nil {
|
||||
if next, ok := state.Events[event]; ok {
|
||||
return next, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
return Default, ErrEventRejected
|
||||
}
|
||||
|
||||
// SendEvent sends an event to the state machine.
|
||||
func (s *StateMachine) SendEvent(event EventType, eventCtx EventContext) error {
|
||||
s.mutex.Lock()
|
||||
defer s.mutex.Unlock()
|
||||
|
||||
for {
|
||||
// Determine the next state for the event given the machine's current state.
|
||||
nextState, err := s.getNextState(event)
|
||||
if err != nil {
|
||||
return ErrEventRejected
|
||||
}
|
||||
|
||||
// Identify the state definition for the next state.
|
||||
state, ok := s.States[nextState]
|
||||
if !ok || state.Action == nil {
|
||||
// configuration error: the next state has no action defined
return ErrEventRejected
|
||||
}
|
||||
|
||||
// Transition over to the next state.
|
||||
s.Previous = s.Current
|
||||
s.Current = nextState
|
||||
|
||||
// Execute the next state's action and loop over again if the event returned
|
||||
// is not a no-op.
|
||||
nextEvent := state.Action.Execute(eventCtx)
|
||||
if nextEvent == NoOp {
|
||||
return nil
|
||||
}
|
||||
event = nextEvent
|
||||
}
|
||||
}
|
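One design point worth calling out: the Action attached to a State runs when the machine transitions into that state, and SendEvent keeps looping until an action returns NoOp. suspend.go below relies on exactly this: entering Suspended fires HaltAction and entering Running fires ResumeAction, and both return NoOp, so each SendEvent call performs a single halt or resume transition.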
128
testplans/lotus-soup/statemachine/suspend.go
Normal file
128
testplans/lotus-soup/statemachine/suspend.go
Normal file
@ -0,0 +1,128 @@
|
||||
package statemachine
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
Running StateType = "running"
|
||||
Suspended StateType = "suspended"
|
||||
|
||||
Halt EventType = "halt"
|
||||
Resume EventType = "resume"
|
||||
)
|
||||
|
||||
type Suspendable interface {
|
||||
Halt()
|
||||
Resume()
|
||||
}
|
||||
|
||||
type HaltAction struct{}
|
||||
|
||||
func (a *HaltAction) Execute(ctx EventContext) EventType {
|
||||
s, ok := ctx.(*Suspender)
|
||||
if !ok {
|
||||
fmt.Println("unable to halt, event context is not Suspendable")
|
||||
return NoOp
|
||||
}
|
||||
s.target.Halt()
|
||||
return NoOp
|
||||
}
|
||||
|
||||
type ResumeAction struct{}
|
||||
|
||||
func (a *ResumeAction) Execute(ctx EventContext) EventType {
|
||||
s, ok := ctx.(*Suspender)
|
||||
if !ok {
|
||||
fmt.Println("unable to resume, event context is not Suspendable")
|
||||
return NoOp
|
||||
}
|
||||
s.target.Resume()
|
||||
return NoOp
|
||||
}
|
||||
|
||||
type Suspender struct {
|
||||
StateMachine
|
||||
target Suspendable
|
||||
log LogFn
|
||||
}
|
||||
|
||||
type LogFn func(fmt string, args ...interface{})
|
||||
|
||||
func NewSuspender(target Suspendable, log LogFn) *Suspender {
|
||||
return &Suspender{
|
||||
target: target,
|
||||
log: log,
|
||||
StateMachine: StateMachine{
|
||||
Current: Running,
|
||||
States: States{
|
||||
Running: State{
|
||||
Action: &ResumeAction{},
|
||||
Events: Events{
|
||||
Halt: Suspended,
|
||||
},
|
||||
},
|
||||
|
||||
Suspended: State{
|
||||
Action: &HaltAction{},
|
||||
Events: Events{
|
||||
Resume: Running,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Suspender) RunEvents(eventSpec string) {
|
||||
s.log("running event spec: %s", eventSpec)
|
||||
for _, et := range parseEventSpec(eventSpec, s.log) {
|
||||
if et.delay != 0 {
|
||||
//s.log("waiting %s", et.delay.String())
|
||||
time.Sleep(et.delay)
|
||||
continue
|
||||
}
|
||||
if et.event == "" {
|
||||
s.log("ignoring empty event")
|
||||
continue
|
||||
}
|
||||
s.log("sending event %s", et.event)
|
||||
err := s.SendEvent(et.event, s)
|
||||
if err != nil {
|
||||
s.log("error sending event %s: %s", et.event, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
type eventTiming struct {
|
||||
delay time.Duration
|
||||
event EventType
|
||||
}
|
||||
|
||||
func parseEventSpec(spec string, log LogFn) []eventTiming {
|
||||
fields := strings.Split(spec, "->")
|
||||
out := make([]eventTiming, 0, len(fields))
|
||||
for _, f := range fields {
|
||||
f = strings.TrimSpace(f)
|
||||
words := strings.Split(f, " ")
|
||||
|
||||
// TODO: try to implement a "waiting" state instead of special casing like this
|
||||
if words[0] == "wait" {
|
||||
if len(words) != 2 {
|
||||
log("expected 'wait' to be followed by duration, e.g. 'wait 30s'. ignoring.")
|
||||
continue
|
||||
}
|
||||
d, err := time.ParseDuration(words[1])
|
||||
if err != nil {
|
||||
log("bad argument for 'wait': %s", err)
|
||||
continue
|
||||
}
|
||||
out = append(out, eventTiming{delay: d})
|
||||
} else {
|
||||
out = append(out, eventTiming{event: EventType(words[0])})
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
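To make the spec syntax parsed above concrete: events are separated by "->" and "wait <duration>" steps are interleaved between them. A hedged usage sketch follows; the target, logger, and timings are placeholders (the drand role later in this diff drives its DrandInstance the same way via the `suspend_events` parameter).

```go
// Sketch (illustrative values, conceptually in package statemachine): drive any
// Suspendable through one halt/resume cycle using the spec syntax parseEventSpec understands.
func suspendAndResume(target Suspendable, log LogFn) {
	s := NewSuspender(target, log)
	s.RunEvents("wait 2m -> halt -> wait 90s -> resume")
}
```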
74
testplans/lotus-soup/testkit/deals.go
Normal file
74
testplans/lotus-soup/testkit/deals.go
Normal file
@ -0,0 +1,74 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/filecoin-project/go-address"
|
||||
"github.com/filecoin-project/go-fil-markets/storagemarket"
|
||||
"github.com/filecoin-project/go-state-types/abi"
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/ipfs/go-cid"
|
||||
|
||||
tstats "github.com/filecoin-project/lotus/tools/stats"
|
||||
)
|
||||
|
||||
func StartDeal(ctx context.Context, minerActorAddr address.Address, client api.FullNode, fcid cid.Cid, fastRetrieval bool) *cid.Cid {
|
||||
addr, err := client.WalletDefaultAddress(ctx)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
deal, err := client.ClientStartDeal(ctx, &api.StartDealParams{
|
||||
Data: &storagemarket.DataRef{
|
||||
TransferType: storagemarket.TTGraphsync,
|
||||
Root: fcid,
|
||||
},
|
||||
Wallet: addr,
|
||||
Miner: minerActorAddr,
|
||||
EpochPrice: types.NewInt(1000),
|
||||
MinBlocksDuration: 640000,
|
||||
DealStartEpoch: 200,
|
||||
FastRetrieval: fastRetrieval,
|
||||
})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return deal
|
||||
}
|
||||
|
||||
func WaitDealSealed(t *TestEnvironment, ctx context.Context, client api.FullNode, deal *cid.Cid) {
|
||||
height := 0
|
||||
headlag := 3
|
||||
|
||||
cctx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
|
||||
tipsetsCh, err := tstats.GetTips(cctx, client, abi.ChainEpoch(height), headlag)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
for tipset := range tipsetsCh {
|
||||
t.RecordMessage("got tipset: height %d", tipset.Height())
|
||||
|
||||
di, err := client.ClientGetDealInfo(ctx, *deal)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
switch di.State {
|
||||
case storagemarket.StorageDealProposalRejected:
|
||||
panic("deal rejected")
|
||||
case storagemarket.StorageDealFailing:
|
||||
panic("deal failed")
|
||||
case storagemarket.StorageDealError:
|
||||
panic(fmt.Sprintf("deal errored %s", di.Message))
|
||||
case storagemarket.StorageDealActive:
|
||||
t.RecordMessage("completed deal: %s", di)
|
||||
return
|
||||
}
|
||||
|
||||
t.RecordMessage("deal state: %s", storagemarket.DealStates[di.State])
|
||||
}
|
||||
}
|
55
testplans/lotus-soup/testkit/defaults.go
Normal file
55
testplans/lotus-soup/testkit/defaults.go
Normal file
@ -0,0 +1,55 @@
|
||||
package testkit
|
||||
|
||||
import "fmt"
|
||||
|
||||
type RoleName = string
|
||||
|
||||
var DefaultRoles = map[RoleName]func(*TestEnvironment) error{
|
||||
"bootstrapper": func(t *TestEnvironment) error {
|
||||
b, err := PrepareBootstrapper(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return b.RunDefault()
|
||||
},
|
||||
"miner": func(t *TestEnvironment) error {
|
||||
m, err := PrepareMiner(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return m.RunDefault()
|
||||
},
|
||||
"client": func(t *TestEnvironment) error {
|
||||
c, err := PrepareClient(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return c.RunDefault()
|
||||
},
|
||||
"drand": func(t *TestEnvironment) error {
|
||||
d, err := PrepareDrandInstance(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return d.RunDefault()
|
||||
},
|
||||
"pubsub-tracer": func(t *TestEnvironment) error {
|
||||
tr, err := PreparePubsubTracer(t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return tr.RunDefault()
|
||||
},
|
||||
}
|
||||
|
||||
// HandleDefaultRole handles a role by running its default behaviour.
|
||||
//
|
||||
// This function is suitable to forward to when a test case doesn't need to
|
||||
// explicitly handle/alter a role.
|
||||
func HandleDefaultRole(t *TestEnvironment) error {
|
||||
f, ok := DefaultRoles[t.Role]
|
||||
if !ok {
|
||||
panic(fmt.Sprintf("unrecognized role: %s", t.Role))
|
||||
}
|
||||
return f(t)
|
||||
}
|
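The intended pattern is that a test case only spells out the roles whose behaviour it changes and forwards everything else here. A minimal hedged sketch follows; the case name and the custom client handler are made up for illustration.

```go
// Hypothetical test case entry point inside the lotus-soup plan.
func myTestCase(t *testkit.TestEnvironment) error {
	switch t.Role {
	case "client":
		return myCustomClientBehaviour(t) // custom handler defined elsewhere in the plan
	default:
		return testkit.HandleDefaultRole(t)
	}
}
```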
67
testplans/lotus-soup/testkit/lotus_opts.go
Normal file
67
testplans/lotus-soup/testkit/lotus_opts.go
Normal file
@ -0,0 +1,67 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/filecoin-project/lotus/node"
|
||||
"github.com/filecoin-project/lotus/node/config"
|
||||
"github.com/filecoin-project/lotus/node/modules"
|
||||
"github.com/filecoin-project/lotus/node/modules/dtypes"
|
||||
"github.com/filecoin-project/lotus/node/modules/lp2p"
|
||||
"github.com/filecoin-project/lotus/node/repo"
|
||||
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
ma "github.com/multiformats/go-multiaddr"
|
||||
)
|
||||
|
||||
func withGenesis(gb []byte) node.Option {
|
||||
return node.Override(new(modules.Genesis), modules.LoadGenesis(gb))
|
||||
}
|
||||
|
||||
func withBootstrapper(ab []byte) node.Option {
|
||||
return node.Override(new(dtypes.BootstrapPeers),
|
||||
func() (dtypes.BootstrapPeers, error) {
|
||||
if ab == nil {
|
||||
return dtypes.BootstrapPeers{}, nil
|
||||
}
|
||||
|
||||
a, err := ma.NewMultiaddrBytes(ab)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
ai, err := peer.AddrInfoFromP2pAddr(a)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dtypes.BootstrapPeers{*ai}, nil
|
||||
})
|
||||
}
|
||||
|
||||
func withPubsubConfig(bootstrapper bool, pubsubTracer string) node.Option {
|
||||
return node.Override(new(*config.Pubsub), func() *config.Pubsub {
|
||||
return &config.Pubsub{
|
||||
Bootstrapper: bootstrapper,
|
||||
RemoteTracer: pubsubTracer,
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func withListenAddress(ip string) node.Option {
|
||||
addrs := []string{fmt.Sprintf("/ip4/%s/tcp/0", ip)}
|
||||
return node.Override(node.StartListeningKey, lp2p.StartListening(addrs))
|
||||
}
|
||||
|
||||
func withMinerListenAddress(ip string) node.Option {
|
||||
addrs := []string{fmt.Sprintf("/ip4/%s/tcp/0", ip)}
|
||||
return node.Override(node.StartListeningKey, lp2p.StartListening(addrs))
|
||||
}
|
||||
|
||||
func withApiEndpoint(addr string) node.Option {
|
||||
return node.Override(node.SetApiEndpointKey, func(lr repo.LockedRepo) error {
|
||||
apima, err := ma.NewMultiaddr(addr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return lr.SetAPIEndpoint(apima)
|
||||
})
|
||||
}
|
87
testplans/lotus-soup/testkit/net.go
Normal file
87
testplans/lotus-soup/testkit/net.go
Normal file
@ -0,0 +1,87 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
"github.com/testground/sdk-go/network"
|
||||
"github.com/testground/sdk-go/sync"
|
||||
)
|
||||
|
||||
func ApplyNetworkParameters(t *TestEnvironment) {
|
||||
if !t.TestSidecar {
|
||||
t.RecordMessage("no test sidecar, skipping network config")
|
||||
return
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
|
||||
defer cancel()
|
||||
|
||||
ls := network.LinkShape{}
|
||||
|
||||
if t.IsParamSet("latency_range") {
|
||||
r := t.DurationRangeParam("latency_range")
|
||||
ls.Latency = r.ChooseRandom()
|
||||
t.D().RecordPoint("latency_ms", float64(ls.Latency.Milliseconds()))
|
||||
}
|
||||
|
||||
if t.IsParamSet("jitter_range") {
|
||||
r := t.DurationRangeParam("jitter_range")
|
||||
ls.Jitter = r.ChooseRandom()
|
||||
t.D().RecordPoint("jitter_ms", float64(ls.Jitter.Milliseconds()))
|
||||
}
|
||||
|
||||
if t.IsParamSet("loss_range") {
|
||||
r := t.FloatRangeParam("loss_range")
|
||||
ls.Loss = r.ChooseRandom()
|
||||
t.D().RecordPoint("packet_loss", float64(ls.Loss))
|
||||
}
|
||||
|
||||
if t.IsParamSet("corrupt_range") {
|
||||
r := t.FloatRangeParam("corrupt_range")
|
||||
ls.Corrupt = r.ChooseRandom()
|
||||
t.D().RecordPoint("corrupt_packet_probability", float64(ls.Corrupt))
|
||||
}
|
||||
|
||||
if t.IsParamSet("corrupt_corr_range") {
|
||||
r := t.FloatRangeParam("corrupt_corr_range")
|
||||
ls.CorruptCorr = r.ChooseRandom()
|
||||
t.D().RecordPoint("corrupt_packet_correlation", float64(ls.CorruptCorr))
|
||||
}
|
||||
|
||||
if t.IsParamSet("reorder_range") {
|
||||
r := t.FloatRangeParam("reorder_range")
|
||||
ls.Reorder = r.ChooseRandom()
|
||||
t.D().RecordPoint("reordered_packet_probability", float64(ls.Reorder))
|
||||
}
|
||||
|
||||
if t.IsParamSet("reorder_corr_range") {
|
||||
r := t.FloatRangeParam("reorder_corr_range")
|
||||
ls.ReorderCorr = r.ChooseRandom()
|
||||
t.D().RecordPoint("reordered_packet_correlation", float64(ls.ReorderCorr))
|
||||
}
|
||||
|
||||
if t.IsParamSet("duplicate_range") {
|
||||
r := t.FloatRangeParam("duplicate_range")
|
||||
ls.Duplicate = r.ChooseRandom()
|
||||
t.D().RecordPoint("duplicate_packet_probability", float64(ls.Duplicate))
|
||||
}
|
||||
|
||||
if t.IsParamSet("duplicate_corr_range") {
|
||||
r := t.FloatRangeParam("duplicate_corr_range")
|
||||
ls.DuplicateCorr = r.ChooseRandom()
|
||||
t.D().RecordPoint("duplicate_packet_correlation", float64(ls.DuplicateCorr))
|
||||
}
|
||||
|
||||
t.NetClient.MustConfigureNetwork(ctx, &network.Config{
|
||||
Network: "default",
|
||||
Enable: true,
|
||||
Default: ls,
|
||||
CallbackState: sync.State(fmt.Sprintf("latency-configured-%s", t.TestGroupID)),
|
||||
CallbackTarget: t.TestGroupInstanceCount,
|
||||
RoutingPolicy: network.AllowAll,
|
||||
})
|
||||
|
||||
t.DumpJSON("network-link-shape.json", ls)
|
||||
}
|
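Note that each `*_range` parameter is sampled independently per instance via ChooseRandom, so a single composition can produce heterogeneous link shapes across the group; the chosen values are recorded as metric points and the final LinkShape is dumped to network-link-shape.json for later inspection.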
252
testplans/lotus-soup/testkit/node.go
Normal file
252
testplans/lotus-soup/testkit/node.go
Normal file
@ -0,0 +1,252 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/chain/beacon"
|
||||
"github.com/filecoin-project/lotus/chain/wallet"
|
||||
"github.com/filecoin-project/lotus/metrics"
|
||||
"github.com/filecoin-project/lotus/miner"
|
||||
"github.com/filecoin-project/lotus/node"
|
||||
"github.com/filecoin-project/lotus/node/modules/dtypes"
|
||||
modtest "github.com/filecoin-project/lotus/node/modules/testing"
|
||||
tstats "github.com/filecoin-project/lotus/tools/stats"
|
||||
|
||||
influxdb "github.com/kpacha/opencensus-influxdb"
|
||||
ma "github.com/multiformats/go-multiaddr"
|
||||
manet "github.com/multiformats/go-multiaddr-net"
|
||||
"go.opencensus.io/stats"
|
||||
"go.opencensus.io/stats/view"
|
||||
)
|
||||
|
||||
var PrepareNodeTimeout = 3 * time.Minute
|
||||
|
||||
type LotusNode struct {
|
||||
FullApi api.FullNode
|
||||
MinerApi api.StorageMiner
|
||||
StopFn node.StopFunc
|
||||
Wallet *wallet.Key
|
||||
MineOne func(context.Context, miner.MineReq) error
|
||||
}
|
||||
|
||||
func (n *LotusNode) setWallet(ctx context.Context, walletKey *wallet.Key) error {
|
||||
_, err := n.FullApi.WalletImport(ctx, &walletKey.KeyInfo)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = n.FullApi.WalletSetDefault(ctx, walletKey.Address)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
n.Wallet = walletKey
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func WaitForBalances(t *TestEnvironment, ctx context.Context, nodes int) ([]*InitialBalanceMsg, error) {
|
||||
ch := make(chan *InitialBalanceMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, BalanceTopic, ch)
|
||||
|
||||
balances := make([]*InitialBalanceMsg, 0, nodes)
|
||||
for i := 0; i < nodes; i++ {
|
||||
select {
|
||||
case m := <-ch:
|
||||
balances = append(balances, m)
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("got error while waiting for balances: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return balances, nil
|
||||
}
|
||||
|
||||
func CollectPreseals(t *TestEnvironment, ctx context.Context, miners int) ([]*PresealMsg, error) {
|
||||
ch := make(chan *PresealMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, PresealTopic, ch)
|
||||
|
||||
preseals := make([]*PresealMsg, 0, miners)
|
||||
for i := 0; i < miners; i++ {
|
||||
select {
|
||||
case m := <-ch:
|
||||
preseals = append(preseals, m)
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("got error while waiting for preseals: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
sort.Slice(preseals, func(i, j int) bool {
|
||||
return preseals[i].Seqno < preseals[j].Seqno
|
||||
})
|
||||
|
||||
return preseals, nil
|
||||
}
|
||||
|
||||
func WaitForGenesis(t *TestEnvironment, ctx context.Context) (*GenesisMsg, error) {
|
||||
genesisCh := make(chan *GenesisMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, GenesisTopic, genesisCh)
|
||||
|
||||
select {
|
||||
case genesisMsg := <-genesisCh:
|
||||
return genesisMsg, nil
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("error while waiting for genesis msg: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
func CollectMinerAddrs(t *TestEnvironment, ctx context.Context, miners int) ([]MinerAddressesMsg, error) {
|
||||
ch := make(chan MinerAddressesMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, MinersAddrsTopic, ch)
|
||||
|
||||
addrs := make([]MinerAddressesMsg, 0, miners)
|
||||
for i := 0; i < miners; i++ {
|
||||
select {
|
||||
case a := <-ch:
|
||||
addrs = append(addrs, a)
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("got error while waiting for miners addrs: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return addrs, nil
|
||||
}
|
||||
|
||||
func CollectClientAddrs(t *TestEnvironment, ctx context.Context, clients int) ([]*ClientAddressesMsg, error) {
|
||||
ch := make(chan *ClientAddressesMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, ClientsAddrsTopic, ch)
|
||||
|
||||
addrs := make([]*ClientAddressesMsg, 0, clients)
|
||||
for i := 0; i < clients; i++ {
|
||||
select {
|
||||
case a := <-ch:
|
||||
addrs = append(addrs, a)
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("got error while waiting for clients addrs: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return addrs, nil
|
||||
}
|
||||
|
||||
func GetPubsubTracerMaddr(ctx context.Context, t *TestEnvironment) (string, error) {
|
||||
if !t.BooleanParam("enable_pubsub_tracer") {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
ch := make(chan *PubsubTracerMsg)
|
||||
sub := t.SyncClient.MustSubscribe(ctx, PubsubTracerTopic, ch)
|
||||
|
||||
select {
|
||||
case m := <-ch:
|
||||
return m.Multiaddr, nil
|
||||
case err := <-sub.Done():
|
||||
return "", fmt.Errorf("got error while waiting for pubsub tracer config: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
func GetRandomBeaconOpts(ctx context.Context, t *TestEnvironment) (node.Option, error) {
|
||||
beaconType := t.StringParam("random_beacon_type")
|
||||
switch beaconType {
|
||||
case "external-drand":
|
||||
noop := func(settings *node.Settings) error {
|
||||
return nil
|
||||
}
|
||||
return noop, nil
|
||||
|
||||
case "local-drand":
|
||||
cfg, err := waitForDrandConfig(ctx, t.SyncClient)
|
||||
if err != nil {
|
||||
t.RecordMessage("error getting drand config: %w", err)
|
||||
return nil, err
|
||||
|
||||
}
|
||||
t.RecordMessage("setting drand config: %v", cfg)
|
||||
return node.Options(
|
||||
node.Override(new(dtypes.DrandConfig), cfg.Config),
|
||||
node.Override(new(dtypes.DrandBootstrap), cfg.GossipBootstrap),
|
||||
), nil
|
||||
|
||||
case "mock":
|
||||
return node.Options(
|
||||
node.Override(new(beacon.RandomBeacon), modtest.RandomBeacon),
|
||||
node.Override(new(dtypes.DrandConfig), dtypes.DrandConfig{
|
||||
ChainInfoJSON: "{\"Hash\":\"wtf\"}",
|
||||
}),
|
||||
node.Override(new(dtypes.DrandBootstrap), dtypes.DrandBootstrap{}),
|
||||
), nil
|
||||
|
||||
default:
|
||||
return nil, fmt.Errorf("unknown random_beacon_type: %s", beaconType)
|
||||
}
|
||||
}
|
||||
|
||||
func startServer(endpoint ma.Multiaddr, srv *http.Server) (listenAddr string, err error) {
|
||||
lst, err := manet.Listen(endpoint)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("could not listen: %w", err)
|
||||
}
|
||||
|
||||
go func() {
|
||||
_ = srv.Serve(manet.NetListener(lst))
|
||||
}()
|
||||
|
||||
return lst.Addr().String(), nil
|
||||
}
|
||||
|
||||
func registerAndExportMetrics(instanceName string) {
|
||||
// Register all Lotus metric views
|
||||
err := view.Register(metrics.DefaultViews...)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// Set the metric to one so it is published to the exporter
|
||||
stats.Record(context.Background(), metrics.LotusInfo.M(1))
|
||||
|
||||
// Register our custom exporter to opencensus
|
||||
e, err := influxdb.NewExporter(context.Background(), influxdb.Options{
|
||||
Database: "testground",
|
||||
Address: os.Getenv("INFLUXDB_URL"),
|
||||
Username: "",
|
||||
Password: "",
|
||||
InstanceName: instanceName,
|
||||
})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
view.RegisterExporter(e)
|
||||
view.SetReportingPeriod(5 * time.Second)
|
||||
}
|
||||
|
||||
func collectStats(t *TestEnvironment, ctx context.Context, api api.FullNode) error {
|
||||
t.RecordMessage("collecting blockchain stats")
|
||||
|
||||
influxAddr := os.Getenv("INFLUXDB_URL")
|
||||
influxUser := ""
|
||||
influxPass := ""
|
||||
influxDb := "testground"
|
||||
|
||||
influx, err := tstats.InfluxClient(influxAddr, influxUser, influxPass)
|
||||
if err != nil {
|
||||
t.RecordMessage(err.Error())
|
||||
return err
|
||||
}
|
||||
|
||||
height := int64(0)
|
||||
headlag := 1
|
||||
|
||||
go func() {
|
||||
time.Sleep(15 * time.Second)
|
||||
t.RecordMessage("calling tstats.Collect")
|
||||
tstats.Collect(context.Background(), api, influx, influxDb, height, headlag)
|
||||
}()
|
||||
|
||||
return nil
|
||||
}
|
106
testplans/lotus-soup/testkit/retrieval.go
Normal file
106
testplans/lotus-soup/testkit/retrieval.go
Normal file
@ -0,0 +1,106 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"time"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/ipfs/go-cid"
|
||||
files "github.com/ipfs/go-ipfs-files"
|
||||
ipld "github.com/ipfs/go-ipld-format"
|
||||
dag "github.com/ipfs/go-merkledag"
|
||||
dstest "github.com/ipfs/go-merkledag/test"
|
||||
unixfile "github.com/ipfs/go-unixfs/file"
|
||||
"github.com/ipld/go-car"
|
||||
)
|
||||
|
||||
func RetrieveData(t *TestEnvironment, ctx context.Context, client api.FullNode, fcid cid.Cid, _ *cid.Cid, carExport bool, data []byte) error {
|
||||
t1 := time.Now()
|
||||
offers, err := client.ClientFindData(ctx, fcid, nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
for _, o := range offers {
|
||||
t.D().Counter(fmt.Sprintf("find-data.offer,miner=%s", o.Miner)).Inc(1)
|
||||
}
|
||||
t.D().ResettingHistogram("find-data").Update(int64(time.Since(t1)))
|
||||
|
||||
if len(offers) < 1 {
|
||||
panic("no offers")
|
||||
}
|
||||
|
||||
rpath, err := ioutil.TempDir("", "lotus-retrieve-test-")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer os.RemoveAll(rpath)
|
||||
|
||||
caddr, err := client.WalletDefaultAddress(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ref := &api.FileRef{
|
||||
Path: filepath.Join(rpath, "ret"),
|
||||
IsCAR: carExport,
|
||||
}
|
||||
t1 = time.Now()
|
||||
err = client.ClientRetrieve(ctx, offers[0].Order(caddr), ref)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
t.D().ResettingHistogram("retrieve-data").Update(int64(time.Since(t1)))
|
||||
|
||||
rdata, err := ioutil.ReadFile(filepath.Join(rpath, "ret"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if carExport {
|
||||
rdata = ExtractCarData(ctx, rdata, rpath)
|
||||
}
|
||||
|
||||
if !bytes.Equal(rdata, data) {
|
||||
return errors.New("wrong data retrieved")
|
||||
}
|
||||
|
||||
t.RecordMessage("retrieved successfully")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func ExtractCarData(ctx context.Context, rdata []byte, rpath string) []byte {
|
||||
bserv := dstest.Bserv()
|
||||
ch, err := car.LoadCar(bserv.Blockstore(), bytes.NewReader(rdata))
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
b, err := bserv.GetBlock(ctx, ch.Roots[0])
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
nd, err := ipld.Decode(b)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
dserv := dag.NewDAGService(bserv)
|
||||
fil, err := unixfile.NewUnixfsFile(ctx, dserv, nd)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
outPath := filepath.Join(rpath, "retLoadedCAR")
|
||||
if err := files.WriteTo(fil, outPath); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
rdata, err = ioutil.ReadFile(outPath)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return rdata
|
||||
}
|
202
testplans/lotus-soup/testkit/role_bootstrapper.go
Normal file
202
testplans/lotus-soup/testkit/role_bootstrapper.go
Normal file
@ -0,0 +1,202 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
mbig "math/big"
|
||||
"time"
|
||||
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"github.com/filecoin-project/lotus/chain/gen"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/genesis"
|
||||
"github.com/filecoin-project/lotus/node"
|
||||
"github.com/filecoin-project/lotus/node/modules"
|
||||
modtest "github.com/filecoin-project/lotus/node/modules/testing"
|
||||
"github.com/filecoin-project/lotus/node/repo"
|
||||
"github.com/google/uuid"
|
||||
|
||||
"github.com/filecoin-project/go-state-types/big"
|
||||
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
ma "github.com/multiformats/go-multiaddr"
|
||||
)
|
||||
|
||||
// Bootstrapper is a special kind of process that produces a genesis block with
|
||||
// the initial wallet balances and preseals for all enlisted miners and clients.
|
||||
type Bootstrapper struct {
|
||||
*LotusNode
|
||||
|
||||
t *TestEnvironment
|
||||
}
|
||||
|
||||
func PrepareBootstrapper(t *TestEnvironment) (*Bootstrapper, error) {
|
||||
var (
|
||||
clients = t.IntParam("clients")
|
||||
miners = t.IntParam("miners")
|
||||
nodes = clients + miners
|
||||
)
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), PrepareNodeTimeout)
|
||||
defer cancel()
|
||||
|
||||
pubsubTracerMaddr, err := GetPubsubTracerMaddr(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
randomBeaconOpt, err := GetRandomBeaconOpts(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// the first duty of the bootstrapper is to construct the genesis block
|
||||
// first collect all client and miner balances to assign initial funds
|
||||
balances, err := WaitForBalances(t, ctx, nodes)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
totalBalance := big.Zero()
|
||||
for _, b := range balances {
|
||||
totalBalance = big.Add(filToAttoFil(b.Balance), totalBalance)
|
||||
}
|
||||
|
||||
totalBalanceFil := attoFilToFil(totalBalance)
|
||||
t.RecordMessage("TOTAL BALANCE: %s AttoFIL (%s FIL)", totalBalance, totalBalanceFil)
|
||||
if max := types.TotalFilecoinInt; totalBalanceFil.GreaterThanEqual(max) {
|
||||
panic(fmt.Sprintf("total sum of balances is greater than max Filecoin ever; sum=%s, max=%s", totalBalance, max))
|
||||
}
|
||||
|
||||
// then collect all preseals from miners
|
||||
preseals, err := CollectPreseals(t, ctx, miners)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// now construct the genesis block
|
||||
var genesisActors []genesis.Actor
|
||||
var genesisMiners []genesis.Miner
|
||||
|
||||
for _, bm := range balances {
|
||||
balance := filToAttoFil(bm.Balance)
|
||||
t.RecordMessage("balance assigned to actor %s: %s AttoFIL", bm.Addr, balance)
|
||||
genesisActors = append(genesisActors,
|
||||
genesis.Actor{
|
||||
Type: genesis.TAccount,
|
||||
Balance: balance,
|
||||
Meta: (&genesis.AccountMeta{Owner: bm.Addr}).ActorMeta(),
|
||||
})
|
||||
}
|
||||
|
||||
for _, pm := range preseals {
|
||||
genesisMiners = append(genesisMiners, pm.Miner)
|
||||
}
|
||||
|
||||
genesisTemplate := genesis.Template{
|
||||
Accounts: genesisActors,
|
||||
Miners: genesisMiners,
|
||||
Timestamp: uint64(time.Now().Unix()) - uint64(t.IntParam("genesis_timestamp_offset")),
|
||||
VerifregRootKey: gen.DefaultVerifregRootkeyActor,
|
||||
RemainderAccount: gen.DefaultRemainderAccountActor,
|
||||
NetworkName: "testground-local-" + uuid.New().String(),
|
||||
}
|
||||
|
||||
// dump the genesis block
|
||||
// var jsonBuf bytes.Buffer
|
||||
// jsonEnc := json.NewEncoder(&jsonBuf)
|
||||
// err := jsonEnc.Encode(genesisTemplate)
|
||||
// if err != nil {
|
||||
// panic(err)
|
||||
// }
|
||||
// runenv.RecordMessage(fmt.Sprintf("Genesis template: %s", string(jsonBuf.Bytes())))
|
||||
|
||||
// this is horrendously disgusting, we use this contraption to side effect the construction
|
||||
// of the genesis block in the buffer -- yes, a side effect of dependency injection.
|
||||
// I remember when software was straightforward...
|
||||
var genesisBuffer bytes.Buffer
|
||||
|
||||
bootstrapperIP := t.NetClient.MustGetDataNetworkIP().String()
|
||||
|
||||
n := &LotusNode{}
|
||||
stop, err := node.New(context.Background(),
|
||||
node.FullAPI(&n.FullApi),
|
||||
node.Online(),
|
||||
node.Repo(repo.NewMemory(nil)),
|
||||
node.Override(new(modules.Genesis), modtest.MakeGenesisMem(&genesisBuffer, genesisTemplate)),
|
||||
withApiEndpoint(fmt.Sprintf("/ip4/0.0.0.0/tcp/%s", t.PortNumber("node_rpc", "0"))),
|
||||
withListenAddress(bootstrapperIP),
|
||||
withBootstrapper(nil),
|
||||
withPubsubConfig(true, pubsubTracerMaddr),
|
||||
randomBeaconOpt,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n.StopFn = stop
|
||||
|
||||
var bootstrapperAddr ma.Multiaddr
|
||||
|
||||
bootstrapperAddrs, err := n.FullApi.NetAddrsListen(ctx)
|
||||
if err != nil {
|
||||
stop(context.TODO())
|
||||
return nil, err
|
||||
}
|
||||
for _, a := range bootstrapperAddrs.Addrs {
|
||||
ip, err := a.ValueForProtocol(ma.P_IP4)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
if ip != bootstrapperIP {
|
||||
continue
|
||||
}
|
||||
addrs, err := peer.AddrInfoToP2pAddrs(&peer.AddrInfo{
|
||||
ID: bootstrapperAddrs.ID,
|
||||
Addrs: []ma.Multiaddr{a},
|
||||
})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
bootstrapperAddr = addrs[0]
|
||||
break
|
||||
}
|
||||
|
||||
if bootstrapperAddr == nil {
|
||||
panic("failed to determine bootstrapper address")
|
||||
}
|
||||
|
||||
genesisMsg := &GenesisMsg{
|
||||
Genesis: genesisBuffer.Bytes(),
|
||||
Bootstrapper: bootstrapperAddr.Bytes(),
|
||||
}
|
||||
t.SyncClient.MustPublish(ctx, GenesisTopic, genesisMsg)
|
||||
|
||||
t.RecordMessage("waiting for all nodes to be ready")
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateReady, t.TestInstanceCount)
|
||||
|
||||
return &Bootstrapper{n, t}, nil
|
||||
}
|
||||
|
||||
// RunDefault runs a default bootstrapper.
|
||||
func (b *Bootstrapper) RunDefault() error {
|
||||
b.t.RecordMessage("running bootstrapper")
|
||||
ctx := context.Background()
|
||||
b.t.SyncClient.MustSignalAndWait(ctx, StateDone, b.t.TestInstanceCount)
|
||||
return nil
|
||||
}
|
||||
|
||||
// filToAttoFil converts a fractional filecoin value into AttoFIL, rounding if necessary
|
||||
func filToAttoFil(f float64) big.Int {
|
||||
a := mbig.NewFloat(f)
|
||||
a.Mul(a, mbig.NewFloat(float64(build.FilecoinPrecision)))
|
||||
i, _ := a.Int(nil)
|
||||
return big.Int{Int: i}
|
||||
}
|
||||
|
||||
func attoFilToFil(atto big.Int) big.Int {
|
||||
i := big.NewInt(0)
|
||||
i.Add(i.Int, atto.Int)
|
||||
i.Div(i.Int, big.NewIntUnsigned(build.FilecoinPrecision).Int)
|
||||
return i
|
||||
}
|
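As a quick sanity check on the conversion helpers above, assuming build.FilecoinPrecision is 10^18 attoFIL per FIL (the value used by Lotus), the expected round-trip behaviour is sketched below; the function and printed values are illustrative only.

```go
// Illustrative only; assumes build.FilecoinPrecision == 10^18 attoFIL per FIL.
func exampleConversions() {
	half := filToAttoFil(0.5)                // 500000000000000000 attoFIL
	whole := attoFilToFil(filToAttoFil(1.5)) // 1 FIL: integer division truncates the fraction
	fmt.Println(half, whole)
}
```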
198
testplans/lotus-soup/testkit/role_client.go
Normal file
198
testplans/lotus-soup/testkit/role_client.go
Normal file
@ -0,0 +1,198 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"time"
|
||||
|
||||
"contrib.go.opencensus.io/exporter/prometheus"
|
||||
"github.com/filecoin-project/go-jsonrpc"
|
||||
"github.com/filecoin-project/go-jsonrpc/auth"
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/api/apistruct"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/chain/wallet"
|
||||
"github.com/filecoin-project/lotus/node"
|
||||
"github.com/filecoin-project/lotus/node/repo"
|
||||
"github.com/gorilla/mux"
|
||||
"github.com/hashicorp/go-multierror"
|
||||
)
|
||||
|
||||
type LotusClient struct {
|
||||
*LotusNode
|
||||
|
||||
t *TestEnvironment
|
||||
MinerAddrs []MinerAddressesMsg
|
||||
}
|
||||
|
||||
func PrepareClient(t *TestEnvironment) (*LotusClient, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), PrepareNodeTimeout)
|
||||
defer cancel()
|
||||
|
||||
ApplyNetworkParameters(t)
|
||||
|
||||
pubsubTracer, err := GetPubsubTracerMaddr(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
drandOpt, err := GetRandomBeaconOpts(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// first create a wallet
|
||||
walletKey, err := wallet.GenerateKey(types.KTBLS)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// publish the account ID/balance
|
||||
balance := t.FloatParam("balance")
|
||||
balanceMsg := &InitialBalanceMsg{Addr: walletKey.Address, Balance: balance}
|
||||
t.SyncClient.Publish(ctx, BalanceTopic, balanceMsg)
|
||||
|
||||
// then collect the genesis block and bootstrapper address
|
||||
genesisMsg, err := WaitForGenesis(t, ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
clientIP := t.NetClient.MustGetDataNetworkIP().String()
|
||||
|
||||
nodeRepo := repo.NewMemory(nil)
|
||||
|
||||
// create the node
|
||||
n := &LotusNode{}
|
||||
stop, err := node.New(context.Background(),
|
||||
node.FullAPI(&n.FullApi),
|
||||
node.Online(),
|
||||
node.Repo(nodeRepo),
|
||||
withApiEndpoint(fmt.Sprintf("/ip4/0.0.0.0/tcp/%s", t.PortNumber("node_rpc", "0"))),
|
||||
withGenesis(genesisMsg.Genesis),
|
||||
withListenAddress(clientIP),
|
||||
withBootstrapper(genesisMsg.Bootstrapper),
|
||||
withPubsubConfig(false, pubsubTracer),
|
||||
drandOpt,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// set the wallet
|
||||
err = n.setWallet(ctx, walletKey)
|
||||
if err != nil {
|
||||
_ = stop(context.TODO())
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fullSrv, err := startFullNodeAPIServer(t, nodeRepo, n.FullApi)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
n.StopFn = func(ctx context.Context) error {
|
||||
var err *multierror.Error
|
||||
err = multierror.Append(err, fullSrv.Shutdown(ctx))
|
||||
err = multierror.Append(err, stop(ctx))
|
||||
return err.ErrorOrNil()
|
||||
}
|
||||
|
||||
registerAndExportMetrics(fmt.Sprintf("client_%d", t.GroupSeq))
|
||||
|
||||
t.RecordMessage("publish our address to the clients addr topic")
|
||||
addrinfo, err := n.FullApi.NetAddrsListen(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
t.SyncClient.MustPublish(ctx, ClientsAddrsTopic, &ClientAddressesMsg{
|
||||
PeerNetAddr: addrinfo,
|
||||
WalletAddr: walletKey.Address,
|
||||
GroupSeq: t.GroupSeq,
|
||||
})
|
||||
|
||||
t.RecordMessage("waiting for all nodes to be ready")
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateReady, t.TestInstanceCount)
|
||||
|
||||
// collect miner addresses.
|
||||
addrs, err := CollectMinerAddrs(t, ctx, t.IntParam("miners"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
t.RecordMessage("got %v miner addrs", len(addrs))
|
||||
|
||||
// densely connect the client to the full node and the miners themselves.
|
||||
for _, miner := range addrs {
|
||||
if err := n.FullApi.NetConnect(ctx, miner.FullNetAddrs); err != nil {
|
||||
return nil, fmt.Errorf("client failed to connect to full node of miner: %w", err)
|
||||
}
|
||||
if err := n.FullApi.NetConnect(ctx, miner.MinerNetAddrs); err != nil {
|
||||
return nil, fmt.Errorf("client failed to connect to storage miner node node of miner: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
// wait for all clients to have completed identify, pubsub negotiation with miners.
|
||||
time.Sleep(1 * time.Second)
|
||||
|
||||
peers, err := n.FullApi.NetPeers(ctx)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to query connected peers: %w", err)
|
||||
}
|
||||
|
||||
t.RecordMessage("connected peers: %d", len(peers))
|
||||
|
||||
cl := &LotusClient{
|
||||
t: t,
|
||||
LotusNode: n,
|
||||
MinerAddrs: addrs,
|
||||
}
|
||||
return cl, nil
|
||||
}
|
||||
|
||||
func (c *LotusClient) RunDefault() error {
|
||||
// run forever
|
||||
c.t.RecordMessage("running default client forever")
|
||||
c.t.WaitUntilAllDone()
|
||||
return nil
|
||||
}
|
||||
|
||||
func startFullNodeAPIServer(t *TestEnvironment, repo repo.Repo, api api.FullNode) (*http.Server, error) {
|
||||
mux := mux.NewRouter()
|
||||
|
||||
rpcServer := jsonrpc.NewServer()
|
||||
rpcServer.Register("Filecoin", api)
|
||||
|
||||
mux.Handle("/rpc/v0", rpcServer)
|
||||
|
||||
exporter, err := prometheus.NewExporter(prometheus.Options{
|
||||
Namespace: "lotus",
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
mux.Handle("/debug/metrics", exporter)
|
||||
|
||||
ah := &auth.Handler{
|
||||
Verify: func(ctx context.Context, token string) ([]auth.Permission, error) {
|
||||
return apistruct.AllPermissions, nil
|
||||
},
|
||||
Next: mux.ServeHTTP,
|
||||
}
|
||||
|
||||
srv := &http.Server{Handler: ah}
|
||||
|
||||
endpoint, err := repo.APIEndpoint()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("no API endpoint in repo: %w", err)
|
||||
}
|
||||
|
||||
listenAddr, err := startServer(endpoint, srv)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to start client API endpoint: %w", err)
|
||||
}
|
||||
|
||||
t.RecordMessage("started node API server at %s", listenAddr)
|
||||
return srv, nil
|
||||
}
|
391
testplans/lotus-soup/testkit/role_drand.go
Normal file
391
testplans/lotus-soup/testkit/role_drand.go
Normal file
@ -0,0 +1,391 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"os"
|
||||
"path"
|
||||
"time"
|
||||
|
||||
"github.com/drand/drand/chain"
|
||||
"github.com/drand/drand/client"
|
||||
hclient "github.com/drand/drand/client/http"
|
||||
"github.com/drand/drand/core"
|
||||
"github.com/drand/drand/key"
|
||||
"github.com/drand/drand/log"
|
||||
"github.com/drand/drand/lp2p"
|
||||
dnet "github.com/drand/drand/net"
|
||||
"github.com/drand/drand/protobuf/drand"
|
||||
dtest "github.com/drand/drand/test"
|
||||
"github.com/filecoin-project/lotus/node/modules/dtypes"
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
ma "github.com/multiformats/go-multiaddr"
|
||||
"github.com/testground/sdk-go/sync"
|
||||
|
||||
"github.com/filecoin-project/oni/lotus-soup/statemachine"
|
||||
)
|
||||
|
||||
var (
|
||||
PrepareDrandTimeout = 3 * time.Minute
|
||||
secretDKG = "dkgsecret"
|
||||
)
|
||||
|
||||
type DrandInstance struct {
|
||||
daemon *core.Drand
|
||||
httpClient client.Client
|
||||
ctrlClient *dnet.ControlClient
|
||||
gossipRelay *lp2p.GossipRelayNode
|
||||
|
||||
t *TestEnvironment
|
||||
stateDir string
|
||||
priv *key.Pair
|
||||
pubAddr string
|
||||
privAddr string
|
||||
ctrlAddr string
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) Start() error {
|
||||
opts := []core.ConfigOption{
|
||||
core.WithLogLevel(getLogLevel(dr.t)),
|
||||
core.WithConfigFolder(dr.stateDir),
|
||||
core.WithPublicListenAddress(dr.pubAddr),
|
||||
core.WithPrivateListenAddress(dr.privAddr),
|
||||
core.WithControlPort(dr.ctrlAddr),
|
||||
core.WithInsecure(),
|
||||
}
|
||||
conf := core.NewConfig(opts...)
|
||||
fs := key.NewFileStore(conf.ConfigFolder())
|
||||
fs.SaveKeyPair(dr.priv)
|
||||
key.Save(path.Join(dr.stateDir, "public.toml"), dr.priv.Public, false)
|
||||
if dr.daemon == nil {
|
||||
drand, err := core.NewDrand(fs, conf)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
dr.daemon = drand
|
||||
} else {
|
||||
drand, err := core.LoadDrand(fs, conf)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
drand.StartBeacon(true)
|
||||
dr.daemon = drand
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) Ping() bool {
|
||||
cl := dr.ctrl()
|
||||
if err := cl.Ping(); err != nil {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) Close() error {
|
||||
dr.gossipRelay.Shutdown()
|
||||
dr.daemon.Stop(context.Background())
|
||||
return os.RemoveAll(dr.stateDir)
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) ctrl() *dnet.ControlClient {
|
||||
if dr.ctrlClient != nil {
|
||||
return dr.ctrlClient
|
||||
}
|
||||
cl, err := dnet.NewControlClient(dr.ctrlAddr)
|
||||
if err != nil {
|
||||
dr.t.RecordMessage("drand can't instantiate control client: %w", err)
|
||||
return nil
|
||||
}
|
||||
dr.ctrlClient = cl
|
||||
return cl
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) RunDKG(nodes, thr int, timeout string, leader bool, leaderAddr string, beaconOffset int) *key.Group {
|
||||
cl := dr.ctrl()
|
||||
p := dr.t.DurationParam("drand_period")
|
||||
catchupPeriod := dr.t.DurationParam("drand_catchup_period")
|
||||
t, _ := time.ParseDuration(timeout)
|
||||
var grp *drand.GroupPacket
|
||||
var err error
|
||||
if leader {
|
||||
grp, err = cl.InitDKGLeader(nodes, thr, p, catchupPeriod, t, nil, secretDKG, beaconOffset)
|
||||
} else {
|
||||
leader := dnet.CreatePeer(leaderAddr, false)
|
||||
grp, err = cl.InitDKG(leader, nil, secretDKG)
|
||||
}
|
||||
if err != nil {
|
||||
dr.t.RecordMessage("drand dkg run failed: %w", err)
|
||||
return nil
|
||||
}
|
||||
kg, _ := key.GroupFromProto(grp)
|
||||
return kg
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) Halt() {
|
||||
dr.t.RecordMessage("drand node #%d halting", dr.t.GroupSeq)
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
dr.daemon.Stop(ctx)
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) Resume() {
|
||||
dr.t.RecordMessage("drand node #%d resuming", dr.t.GroupSeq)
|
||||
dr.Start()
|
||||
// block until we can fetch the round corresponding to the current time
|
||||
startTime := time.Now()
|
||||
round := dr.httpClient.RoundAt(startTime)
|
||||
timeout := 120 * time.Second
|
||||
ctx, cancel := context.WithTimeout(context.Background(), timeout)
|
||||
defer cancel()
|
||||
|
||||
done := make(chan struct{}, 1)
|
||||
go func() {
|
||||
for {
|
||||
res, err := dr.httpClient.Get(ctx, round)
|
||||
if err == nil {
|
||||
dr.t.RecordMessage("drand chain caught up to round %d", res.Round())
|
||||
done <- struct{}{}
|
||||
return
|
||||
}
|
||||
time.Sleep(2 * time.Second)
|
||||
}
|
||||
}()
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
dr.t.RecordMessage("drand chain failed to catch up after %s", timeout.String())
|
||||
case <-done:
|
||||
dr.t.RecordMessage("drand chain resumed after %s catchup time", time.Since(startTime))
|
||||
}
|
||||
}
|
||||
|
||||
func (dr *DrandInstance) RunDefault() error {
|
||||
dr.t.RecordMessage("running drand node")
|
||||
|
||||
if dr.t.IsParamSet("suspend_events") {
|
||||
suspender := statemachine.NewSuspender(dr, dr.t.RecordMessage)
|
||||
suspender.RunEvents(dr.t.StringParam("suspend_events"))
|
||||
}
|
||||
|
||||
dr.t.WaitUntilAllDone()
|
||||
return nil
|
||||
}
|
||||
|
||||
// PrepareDrandInstance starts a drand instance and runs a DKG with the other members of the composition group.
|
||||
// Once the chain is running, the leader publishes the chain info needed by lotus nodes on
|
||||
// DrandConfigTopic.
|
||||
func PrepareDrandInstance(t *TestEnvironment) (*DrandInstance, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), PrepareDrandTimeout)
|
||||
defer cancel()
|
||||
|
||||
ApplyNetworkParameters(t)
|
||||
|
||||
startTime := time.Now()
|
||||
|
||||
seq := t.GroupSeq
|
||||
isLeader := seq == 1
|
||||
nNodes := t.TestGroupInstanceCount
|
||||
|
||||
myAddr := t.NetClient.MustGetDataNetworkIP()
|
||||
threshold := t.IntParam("drand_threshold")
|
||||
runGossipRelay := t.BooleanParam("drand_gossip_relay")
|
||||
|
||||
beaconOffset := 3
|
||||
|
||||
stateDir, err := ioutil.TempDir("/tmp", fmt.Sprintf("drand-%d", t.GroupSeq))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
dr := DrandInstance{
|
||||
t: t,
|
||||
stateDir: stateDir,
|
||||
pubAddr: dtest.FreeBind(myAddr.String()),
|
||||
privAddr: dtest.FreeBind(myAddr.String()),
|
||||
ctrlAddr: dtest.FreeBind("localhost"),
|
||||
}
|
||||
dr.priv = key.NewKeyPair(dr.privAddr)
|
||||
|
||||
// share the node addresses with other nodes
|
||||
// TODO: if we implement TLS, this is where we'd share public TLS keys
|
||||
type NodeAddr struct {
|
||||
PrivateAddr string
|
||||
PublicAddr string
|
||||
IsLeader bool
|
||||
}
|
||||
addrTopic := sync.NewTopic("drand-addrs", &NodeAddr{})
|
||||
var publicAddrs []string
|
||||
var leaderAddr string
|
||||
ch := make(chan *NodeAddr)
|
||||
_, sub := t.SyncClient.MustPublishSubscribe(ctx, addrTopic, &NodeAddr{
|
||||
PrivateAddr: dr.privAddr,
|
||||
PublicAddr: dr.pubAddr,
|
||||
IsLeader: isLeader,
|
||||
}, ch)
|
||||
for i := 0; i < nNodes; i++ {
|
||||
select {
|
||||
case msg := <-ch:
|
||||
publicAddrs = append(publicAddrs, fmt.Sprintf("http://%s", msg.PublicAddr))
|
||||
if msg.IsLeader {
|
||||
leaderAddr = msg.PrivateAddr
|
||||
}
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("unable to read drand addrs from sync service: %w", err)
|
||||
}
|
||||
}
|
||||
if leaderAddr == "" {
|
||||
return nil, fmt.Errorf("got %d drand addrs, but no leader", len(publicAddrs))
|
||||
}
|
||||
|
||||
t.SyncClient.MustSignalAndWait(ctx, "drand-start", nNodes)
|
||||
t.RecordMessage("Starting drand sharing ceremony")
|
||||
if err := dr.Start(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
alive := false
|
||||
waitSecs := 10
|
||||
for i := 0; i < waitSecs; i++ {
|
||||
if !dr.Ping() {
|
||||
time.Sleep(time.Second)
|
||||
continue
|
||||
}
|
||||
t.R().RecordPoint("drand_first_ping", time.Now().Sub(startTime).Seconds())
|
||||
alive = true
|
||||
break
|
||||
}
|
||||
if !alive {
|
||||
return nil, fmt.Errorf("drand node %d failed to start after %d seconds", t.GroupSeq, waitSecs)
|
||||
}
|
||||
|
||||
// run DKG
|
||||
t.SyncClient.MustSignalAndWait(ctx, "drand-dkg-start", nNodes)
|
||||
if !isLeader {
|
||||
time.Sleep(3 * time.Second)
|
||||
}
|
||||
grp := dr.RunDKG(nNodes, threshold, "10s", isLeader, leaderAddr, beaconOffset)
|
||||
if grp == nil {
|
||||
return nil, fmt.Errorf("drand dkg failed")
|
||||
}
|
||||
t.R().RecordPoint("drand_dkg_complete", time.Now().Sub(startTime).Seconds())
|
||||
|
||||
t.RecordMessage("drand dkg complete, waiting for chain start: %v", time.Until(time.Unix(grp.GenesisTime, 0).Add(grp.Period)))
|
||||
|
||||
// wait for chain to begin
|
||||
to := time.Until(time.Unix(grp.GenesisTime, 0).Add(5 * time.Second).Add(grp.Period))
|
||||
time.Sleep(to)
|
||||
|
||||
t.RecordMessage("drand beacon chain started, fetching initial round via http")
|
||||
// verify that we can get a round of randomness from the chain using an http client
|
||||
info := chain.NewChainInfo(grp)
|
||||
myPublicAddr := fmt.Sprintf("http://%s", dr.pubAddr)
|
||||
dr.httpClient, err = hclient.NewWithInfo(myPublicAddr, info, nil)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to create drand http client: %w", err)
|
||||
}
|
||||
|
||||
_, err = dr.httpClient.Get(ctx, 1)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to get initial drand round: %w", err)
|
||||
}
|
||||
|
||||
// start gossip relay (unless disabled via testplan parameter)
|
||||
var relayAddrs []peer.AddrInfo
|
||||
|
||||
if runGossipRelay {
|
||||
gossipDir := path.Join(stateDir, "gossip-relay")
|
||||
listenAddr := fmt.Sprintf("/ip4/%s/tcp/7777", myAddr.String())
|
||||
relayCfg := lp2p.GossipRelayConfig{
|
||||
ChainHash: hex.EncodeToString(info.Hash()),
|
||||
Addr: listenAddr,
|
||||
DataDir: gossipDir,
|
||||
IdentityPath: path.Join(gossipDir, "identity.key"),
|
||||
Insecure: true,
|
||||
Client: dr.httpClient,
|
||||
}
|
||||
t.RecordMessage("starting drand gossip relay")
|
||||
dr.gossipRelay, err = lp2p.NewGossipRelayNode(log.NewLogger(nil, getLogLevel(t)), &relayCfg)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to construct drand gossip relay: %w", err)
|
||||
}
|
||||
|
||||
t.RecordMessage("sharing gossip relay addrs")
|
||||
// share the gossip relay addrs so we can publish them in DrandRuntimeInfo
|
||||
relayInfo, err := relayAddrInfo(dr.gossipRelay.Multiaddrs(), myAddr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
infoCh := make(chan *peer.AddrInfo, nNodes)
|
||||
infoTopic := sync.NewTopic("drand-gossip-addrs", &peer.AddrInfo{})
|
||||
|
||||
_, sub := t.SyncClient.MustPublishSubscribe(ctx, infoTopic, relayInfo, infoCh)
|
||||
for i := 0; i < nNodes; i++ {
|
||||
select {
|
||||
case ai := <-infoCh:
|
||||
relayAddrs = append(relayAddrs, *ai)
|
||||
case err := <-sub.Done():
|
||||
return nil, fmt.Errorf("unable to get drand relay addr from sync service: %w", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// if we're the leader, publish the config to the sync service
|
||||
if isLeader {
|
||||
buf := bytes.Buffer{}
|
||||
if err := info.ToJSON(&buf); err != nil {
|
||||
return nil, fmt.Errorf("error marshaling chain info: %w", err)
|
||||
}
|
||||
cfg := DrandRuntimeInfo{
|
||||
Config: dtypes.DrandConfig{
|
||||
Servers: publicAddrs,
|
||||
ChainInfoJSON: buf.String(),
|
||||
},
|
||||
GossipBootstrap: relayAddrs,
|
||||
}
|
||||
t.DebugSpew("publishing drand config on sync topic: %v", cfg)
|
||||
t.SyncClient.MustPublish(ctx, DrandConfigTopic, &cfg)
|
||||
}
|
||||
|
||||
// signal ready state
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateReady, t.TestInstanceCount)
|
||||
return &dr, nil
|
||||
}
|
||||
|
||||
// waitForDrandConfig should be called by filecoin instances before constructing the lotus node.
// The returned dtypes.DrandConfig can be used to override the default production config.
|
||||
func waitForDrandConfig(ctx context.Context, client sync.Client) (*DrandRuntimeInfo, error) {
|
||||
ch := make(chan *DrandRuntimeInfo, 1)
|
||||
sub := client.MustSubscribe(ctx, DrandConfigTopic, ch)
|
||||
select {
|
||||
case cfg := <-ch:
|
||||
return cfg, nil
|
||||
case err := <-sub.Done():
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
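// Illustrative usage sketch (not part of the original file): node roles consume the
// runtime info published by the drand leader before building their lotus node.
// role_miner.go below does this indirectly via GetRandomBeaconOpts; a direct call
// would look roughly like:
//
//	info, err := waitForDrandConfig(ctx, t.SyncClient)
//	if err != nil {
//		return nil, fmt.Errorf("waiting for drand config: %w", err)
//	}
//	t.RecordMessage("got drand servers: %v", info.Config.Servers)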
|
||||
func relayAddrInfo(addrs []ma.Multiaddr, dataIP net.IP) (*peer.AddrInfo, error) {
|
||||
for _, a := range addrs {
|
||||
if ip, _ := a.ValueForProtocol(ma.P_IP4); ip != dataIP.String() {
|
||||
continue
|
||||
}
|
||||
return peer.AddrInfoFromP2pAddr(a)
|
||||
}
|
||||
return nil, fmt.Errorf("no addr found with data ip %s in addrs: %v", dataIP, addrs)
|
||||
}
|
||||
|
||||
func getLogLevel(t *TestEnvironment) int {
|
||||
switch t.StringParam("drand_log_level") {
|
||||
case "info":
|
||||
return log.LogInfo
|
||||
case "debug":
|
||||
return log.LogDebug
|
||||
default:
|
||||
return log.LogNone
|
||||
}
|
||||
}
|
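// Illustrative sketch (not part of the original file): a test case dispatching on the
// "role" parameter would wire the drand role defined above roughly like this; the
// surrounding switch and case name are hypothetical.
//
//	case "drand":
//		dr, err := PrepareDrandInstance(t)
//		if err != nil {
//			return err
//		}
//		defer dr.Close()
//		return dr.RunDefault()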
632
testplans/lotus-soup/testkit/role_miner.go
Normal file
@ -0,0 +1,632 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"path/filepath"
|
||||
"time"
|
||||
|
||||
"contrib.go.opencensus.io/exporter/prometheus"
|
||||
"github.com/filecoin-project/go-address"
|
||||
"github.com/filecoin-project/go-jsonrpc"
|
||||
"github.com/filecoin-project/go-jsonrpc/auth"
|
||||
"github.com/filecoin-project/go-state-types/abi"
|
||||
"github.com/filecoin-project/go-storedcounter"
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/api/apistruct"
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"github.com/filecoin-project/lotus/chain/actors"
|
||||
genesis_chain "github.com/filecoin-project/lotus/chain/gen/genesis"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/chain/wallet"
|
||||
"github.com/filecoin-project/lotus/cmd/lotus-seed/seed"
|
||||
"github.com/filecoin-project/lotus/extern/sector-storage/stores"
|
||||
"github.com/filecoin-project/lotus/miner"
|
||||
"github.com/filecoin-project/lotus/node"
|
||||
"github.com/filecoin-project/lotus/node/impl"
|
||||
"github.com/filecoin-project/lotus/node/modules"
|
||||
"github.com/filecoin-project/lotus/node/repo"
|
||||
"github.com/filecoin-project/specs-actors/actors/builtin"
|
||||
saminer "github.com/filecoin-project/specs-actors/actors/builtin/miner"
|
||||
"github.com/google/uuid"
|
||||
"github.com/gorilla/mux"
|
||||
"github.com/hashicorp/go-multierror"
|
||||
"github.com/ipfs/go-datastore"
|
||||
libp2pcrypto "github.com/libp2p/go-libp2p-core/crypto"
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
"github.com/testground/sdk-go/sync"
|
||||
)
|
||||
|
||||
const (
|
||||
sealDelay = 30 * time.Second
|
||||
)
|
||||
|
||||
type LotusMiner struct {
|
||||
*LotusNode
|
||||
|
||||
MinerRepo repo.Repo
|
||||
NodeRepo repo.Repo
|
||||
FullNetAddrs []peer.AddrInfo
|
||||
GenesisMsg *GenesisMsg
|
||||
|
||||
t *TestEnvironment
|
||||
}
|
||||
|
||||
func PrepareMiner(t *TestEnvironment) (*LotusMiner, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), PrepareNodeTimeout)
|
||||
defer cancel()
|
||||
|
||||
ApplyNetworkParameters(t)
|
||||
|
||||
pubsubTracer, err := GetPubsubTracerMaddr(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
drandOpt, err := GetRandomBeaconOpts(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// first create a wallet
|
||||
walletKey, err := wallet.GenerateKey(types.KTBLS)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// publish the account ID/balance
|
||||
balance := t.FloatParam("balance")
|
||||
balanceMsg := &InitialBalanceMsg{Addr: walletKey.Address, Balance: balance}
|
||||
t.SyncClient.Publish(ctx, BalanceTopic, balanceMsg)
|
||||
|
||||
// create and publish the preseal commitment
|
||||
priv, _, err := libp2pcrypto.GenerateEd25519Key(rand.Reader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerID, err := peer.IDFromPrivateKey(priv)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// pick unique sequence number for each miner, no matter in which group they are
|
||||
seq := t.SyncClient.MustSignalAndWait(ctx, StateMinerPickSeqNum, t.IntParam("miners"))
|
||||
|
||||
minerAddr, err := address.NewIDAddress(genesis_chain.MinerStart + uint64(seq-1))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
presealDir, err := ioutil.TempDir("", "preseal")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
sectors := t.IntParam("sectors")
|
||||
genMiner, _, err := seed.PreSeal(minerAddr, abi.RegisteredSealProof_StackedDrg2KiBV1, 0, sectors, presealDir, []byte("TODO: randomize this"), &walletKey.KeyInfo, false)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
genMiner.PeerId = minerID
|
||||
|
||||
t.RecordMessage("Miner Info: Owner: %s Worker: %s", genMiner.Owner, genMiner.Worker)
|
||||
|
||||
presealMsg := &PresealMsg{Miner: *genMiner, Seqno: seq}
|
||||
t.SyncClient.Publish(ctx, PresealTopic, presealMsg)
|
||||
|
||||
// then collect the genesis block and bootstrapper address
|
||||
genesisMsg, err := WaitForGenesis(t, ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// prepare the repo
|
||||
minerRepoDir, err := ioutil.TempDir("", "miner-repo-dir")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerRepo, err := repo.NewFS(minerRepoDir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = minerRepo.Init(repo.StorageMiner)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
{
|
||||
lr, err := minerRepo.Lock(repo.StorageMiner)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ks, err := lr.KeyStore()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
kbytes, err := priv.Bytes()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = ks.Put("libp2p-host", types.KeyInfo{
|
||||
Type: "libp2p-host",
|
||||
PrivateKey: kbytes,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ds, err := lr.Datastore("/metadata")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = ds.Put(datastore.NewKey("miner-address"), minerAddr.Bytes())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
nic := storedcounter.New(ds, datastore.NewKey(modules.StorageCounterDSPrefix))
|
||||
for i := 0; i < (sectors + 1); i++ {
|
||||
_, err = nic.Next()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
var localPaths []stores.LocalPath
|
||||
|
||||
b, err := json.MarshalIndent(&stores.LocalStorageMeta{
|
||||
ID: stores.ID(uuid.New().String()),
|
||||
Weight: 10,
|
||||
CanSeal: true,
|
||||
CanStore: true,
|
||||
}, "", " ")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("marshaling storage config: %w", err)
|
||||
}
|
||||
|
||||
if err := ioutil.WriteFile(filepath.Join(lr.Path(), "sectorstore.json"), b, 0644); err != nil {
|
||||
return nil, fmt.Errorf("persisting storage metadata (%s): %w", filepath.Join(lr.Path(), "sectorstore.json"), err)
|
||||
}
|
||||
|
||||
localPaths = append(localPaths, stores.LocalPath{
|
||||
Path: lr.Path(),
|
||||
})
|
||||
|
||||
if err := lr.SetStorage(func(sc *stores.StorageConfig) {
|
||||
sc.StoragePaths = append(sc.StoragePaths, localPaths...)
|
||||
}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = lr.Close()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
minerIP := t.NetClient.MustGetDataNetworkIP().String()
|
||||
|
||||
// create the node
|
||||
// we need both a full node _and_ a storage miner node
|
||||
n := &LotusNode{}
|
||||
|
||||
// prepare the repo
|
||||
nodeRepoDir, err := ioutil.TempDir("", "node-repo-dir")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
nodeRepo, err := repo.NewFS(nodeRepoDir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = nodeRepo.Init(repo.FullNode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
stop1, err := node.New(context.Background(),
|
||||
node.FullAPI(&n.FullApi),
|
||||
node.Online(),
|
||||
node.Repo(nodeRepo),
|
||||
withGenesis(genesisMsg.Genesis),
|
||||
withApiEndpoint(fmt.Sprintf("/ip4/0.0.0.0/tcp/%s", t.PortNumber("node_rpc", "0"))),
|
||||
withListenAddress(minerIP),
|
||||
withBootstrapper(genesisMsg.Bootstrapper),
|
||||
withPubsubConfig(false, pubsubTracer),
|
||||
drandOpt,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("node node.new error: %w", err)
|
||||
}
|
||||
|
||||
// set the wallet
|
||||
err = n.setWallet(ctx, walletKey)
|
||||
if err != nil {
|
||||
stop1(context.TODO())
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerOpts := []node.Option{
|
||||
node.StorageMiner(&n.MinerApi),
|
||||
node.Online(),
|
||||
node.Repo(minerRepo),
|
||||
node.Override(new(api.FullNode), n.FullApi),
|
||||
withApiEndpoint(fmt.Sprintf("/ip4/0.0.0.0/tcp/%s", t.PortNumber("miner_rpc", "0"))),
|
||||
withMinerListenAddress(minerIP),
|
||||
}
|
||||
|
||||
if t.StringParam("mining_mode") != "natural" {
|
||||
mineBlock := make(chan miner.MineReq)
|
||||
|
||||
minerOpts = append(minerOpts,
|
||||
node.Override(new(*miner.Miner), miner.NewTestMiner(mineBlock, minerAddr)))
|
||||
|
||||
n.MineOne = func(ctx context.Context, cb miner.MineReq) error {
|
||||
select {
|
||||
case mineBlock <- cb:
|
||||
return nil
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
stop2, err := node.New(context.Background(), minerOpts...)
|
||||
if err != nil {
|
||||
stop1(context.TODO())
|
||||
return nil, fmt.Errorf("miner node.new error: %w", err)
|
||||
}
|
||||
|
||||
registerAndExportMetrics(minerAddr.String())
|
||||
|
||||
// collect stats based on blockchain from first instance of `miner` role
|
||||
if t.InitContext.GroupSeq == 1 && t.Role == "miner" {
|
||||
go collectStats(t, ctx, n.FullApi)
|
||||
}
|
||||
|
||||
// Start listening on the full node.
|
||||
fullNodeNetAddrs, err := n.FullApi.NetAddrsListen(ctx)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// set seal delay to lower value than 1 hour
|
||||
err = n.MinerApi.SectorSetSealDelay(ctx, sealDelay)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// set expected seal duration to 1 minute
|
||||
err = n.MinerApi.SectorSetExpectedSealDuration(ctx, 1*time.Minute)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// print out the admin auth token
|
||||
token, err := n.MinerApi.AuthNew(ctx, apistruct.AllPermissions)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t.RecordMessage("Auth token: %s", string(token))
|
||||
|
||||
// add local storage for presealed sectors
|
||||
err = n.MinerApi.StorageAddLocal(ctx, presealDir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// set the miner PeerID
|
||||
minerIDEncoded, err := actors.SerializeParams(&saminer.ChangePeerIDParams{NewID: abi.PeerID(minerID)})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
changeMinerID := &types.Message{
|
||||
To: minerAddr,
|
||||
From: genMiner.Worker,
|
||||
Method: builtin.MethodsMiner.ChangePeerID,
|
||||
Params: minerIDEncoded,
|
||||
Value: types.NewInt(0),
|
||||
}
|
||||
|
||||
_, err = n.FullApi.MpoolPushMessage(ctx, changeMinerID, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t.RecordMessage("publish our address to the miners addr topic")
|
||||
minerActor, err := n.MinerApi.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerNetAddrs, err := n.MinerApi.NetAddrsListen(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
t.SyncClient.MustPublish(ctx, MinersAddrsTopic, MinerAddressesMsg{
|
||||
FullNetAddrs: fullNodeNetAddrs,
|
||||
MinerNetAddrs: minerNetAddrs,
|
||||
MinerActorAddr: minerActor,
|
||||
WalletAddr: walletKey.Address,
|
||||
})
|
||||
|
||||
t.RecordMessage("connecting to all other miners")
|
||||
|
||||
// densely connect the miner's full nodes.
|
||||
minerCh := make(chan *MinerAddressesMsg, 16)
|
||||
sctx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
t.SyncClient.MustSubscribe(sctx, MinersAddrsTopic, minerCh)
|
||||
var fullNetAddrs []peer.AddrInfo
|
||||
for i := 0; i < t.IntParam("miners"); i++ {
|
||||
m := <-minerCh
|
||||
if m.MinerActorAddr == minerActor {
|
||||
// once I find myself, I stop connecting to others, to avoid a simopen problem.
|
||||
break
|
||||
}
|
||||
err := n.FullApi.NetConnect(ctx, m.FullNetAddrs)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to connect to miner %s on: %v", m.MinerActorAddr, m.FullNetAddrs)
|
||||
}
|
||||
t.RecordMessage("connected to full node of miner %s on %v", m.MinerActorAddr, m.FullNetAddrs)
|
||||
|
||||
fullNetAddrs = append(fullNetAddrs, m.FullNetAddrs)
|
||||
}
|
||||
|
||||
t.RecordMessage("waiting for all nodes to be ready")
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateReady, t.TestInstanceCount)
|
||||
|
||||
fullSrv, err := startFullNodeAPIServer(t, nodeRepo, n.FullApi)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerSrv, err := startStorageMinerAPIServer(t, minerRepo, n.MinerApi)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
n.StopFn = func(ctx context.Context) error {
|
||||
var err *multierror.Error
err = multierror.Append(err, fullSrv.Shutdown(ctx))
err = multierror.Append(err, minerSrv.Shutdown(ctx))
err = multierror.Append(err, stop2(ctx))
err = multierror.Append(err, stop1(ctx))
return err.ErrorOrNil()
|
||||
}
|
||||
|
||||
m := &LotusMiner{n, minerRepo, nodeRepo, fullNetAddrs, genesisMsg, t}
|
||||
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func RestoreMiner(t *TestEnvironment, m *LotusMiner) (*LotusMiner, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), PrepareNodeTimeout)
|
||||
defer cancel()
|
||||
|
||||
minerRepo := m.MinerRepo
|
||||
nodeRepo := m.NodeRepo
|
||||
fullNetAddrs := m.FullNetAddrs
|
||||
genesisMsg := m.GenesisMsg
|
||||
|
||||
minerIP := t.NetClient.MustGetDataNetworkIP().String()
|
||||
|
||||
drandOpt, err := GetRandomBeaconOpts(ctx, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// create the node
|
||||
// we need both a full node _and_ a storage miner node
|
||||
n := &LotusNode{}
|
||||
|
||||
stop1, err := node.New(context.Background(),
|
||||
node.FullAPI(&n.FullApi),
|
||||
node.Online(),
|
||||
node.Repo(nodeRepo),
|
||||
//withGenesis(genesisMsg.Genesis),
|
||||
withApiEndpoint(fmt.Sprintf("/ip4/0.0.0.0/tcp/%s", t.PortNumber("node_rpc", "0"))),
|
||||
withListenAddress(minerIP),
|
||||
withBootstrapper(genesisMsg.Bootstrapper),
|
||||
//withPubsubConfig(false, pubsubTracer),
|
||||
drandOpt,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerOpts := []node.Option{
|
||||
node.StorageMiner(&n.MinerApi),
|
||||
node.Online(),
|
||||
node.Repo(minerRepo),
|
||||
node.Override(new(api.FullNode), n.FullApi),
|
||||
withApiEndpoint(fmt.Sprintf("/ip4/0.0.0.0/tcp/%s", t.PortNumber("miner_rpc", "0"))),
|
||||
withMinerListenAddress(minerIP),
|
||||
}
|
||||
|
||||
stop2, err := node.New(context.Background(), minerOpts...)
|
||||
if err != nil {
|
||||
stop1(context.TODO())
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fullSrv, err := startFullNodeAPIServer(t, nodeRepo, n.FullApi)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
minerSrv, err := startStorageMinerAPIServer(t, minerRepo, n.MinerApi)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
n.StopFn = func(ctx context.Context) error {
|
||||
var err *multierror.Error
err = multierror.Append(err, fullSrv.Shutdown(ctx))
err = multierror.Append(err, minerSrv.Shutdown(ctx))
err = multierror.Append(err, stop2(ctx))
err = multierror.Append(err, stop1(ctx))
return err.ErrorOrNil()
|
||||
}
|
||||
|
||||
for i := 0; i < len(fullNetAddrs); i++ {
|
||||
err := n.FullApi.NetConnect(ctx, fullNetAddrs[i])
|
||||
if err != nil {
|
||||
// we expect failures here, since we also shut down another miner
t.RecordMessage("failed to connect to miner %d on %v: %s", i, fullNetAddrs[i], err)
|
||||
continue
|
||||
}
|
||||
t.RecordMessage("connected to full node of miner %d on %v", i, fullNetAddrs[i])
|
||||
}
|
||||
|
||||
pm := &LotusMiner{n, minerRepo, nodeRepo, fullNetAddrs, genesisMsg, t}
|
||||
|
||||
return pm, err
|
||||
}
|
||||
|
||||
func (m *LotusMiner) RunDefault() error {
|
||||
var (
|
||||
t = m.t
|
||||
clients = t.IntParam("clients")
|
||||
miners = t.IntParam("miners")
|
||||
)
|
||||
|
||||
t.RecordMessage("running miner")
|
||||
t.RecordMessage("block delay: %v", build.BlockDelaySecs)
|
||||
t.D().Gauge("miner.block-delay").Update(float64(build.BlockDelaySecs))
|
||||
|
||||
ctx := context.Background()
|
||||
myActorAddr, err := m.MinerApi.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// mine / stop mining
|
||||
mine := true
|
||||
done := make(chan struct{})
|
||||
|
||||
if m.MineOne != nil {
|
||||
go func() {
|
||||
defer t.RecordMessage("shutting down mining")
|
||||
defer close(done)
|
||||
|
||||
var i int
|
||||
for i = 0; mine; i++ {
|
||||
// synchronize all miners to mine the next block
|
||||
t.RecordMessage("synchronizing all miners to mine next block [%d]", i)
|
||||
stateMineNext := sync.State(fmt.Sprintf("mine-block-%d", i))
|
||||
t.SyncClient.MustSignalAndWait(ctx, stateMineNext, miners)
|
||||
|
||||
ch := make(chan error)
|
||||
const maxRetries = 100
|
||||
success := false
|
||||
for retries := 0; retries < maxRetries; retries++ {
|
||||
f := func(mined bool, epoch abi.ChainEpoch, err error) {
|
||||
if mined {
|
||||
t.D().Counter(fmt.Sprintf("block.mine,miner=%s", myActorAddr)).Inc(1)
|
||||
}
|
||||
ch <- err
|
||||
}
|
||||
req := miner.MineReq{
|
||||
Done: f,
|
||||
}
|
||||
err := m.MineOne(ctx, req)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
miningErr := <-ch
|
||||
if miningErr == nil {
|
||||
success = true
|
||||
break
|
||||
}
|
||||
t.D().Counter("block.mine.err").Inc(1)
|
||||
t.RecordMessage("retrying block [%d] after %d attempts due to mining error: %s",
|
||||
i, retries, miningErr)
|
||||
}
|
||||
if !success {
|
||||
panic(fmt.Errorf("failed to mine block %d after %d retries", i, maxRetries))
|
||||
}
|
||||
}
|
||||
|
||||
// signal the last block to make sure no miners are left stuck waiting for the next block signal
|
||||
// while the others have stopped
|
||||
stateMineLast := sync.State(fmt.Sprintf("mine-block-%d", i))
|
||||
t.SyncClient.MustSignalEntry(ctx, stateMineLast)
|
||||
}()
|
||||
} else {
|
||||
close(done)
|
||||
}
|
||||
|
||||
// wait for a signal from all clients to stop mining
|
||||
err = <-t.SyncClient.MustBarrier(ctx, StateStopMining, clients).C
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
mine = false
|
||||
<-done
|
||||
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateDone, t.TestInstanceCount)
|
||||
return nil
|
||||
}
|
||||
|
||||
func startStorageMinerAPIServer(t *TestEnvironment, repo repo.Repo, minerApi api.StorageMiner) (*http.Server, error) {
|
||||
mux := mux.NewRouter()
|
||||
|
||||
rpcServer := jsonrpc.NewServer()
|
||||
rpcServer.Register("Filecoin", minerApi)
|
||||
|
||||
mux.Handle("/rpc/v0", rpcServer)
|
||||
mux.PathPrefix("/remote").HandlerFunc(minerApi.(*impl.StorageMinerAPI).ServeRemote)
|
||||
mux.PathPrefix("/").Handler(http.DefaultServeMux) // pprof
|
||||
|
||||
exporter, err := prometheus.NewExporter(prometheus.Options{
|
||||
Namespace: "lotus",
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
mux.Handle("/debug/metrics", exporter)
|
||||
|
||||
ah := &auth.Handler{
|
||||
Verify: func(ctx context.Context, token string) ([]auth.Permission, error) {
|
||||
return apistruct.AllPermissions, nil
|
||||
},
|
||||
Next: mux.ServeHTTP,
|
||||
}
|
||||
|
||||
endpoint, err := repo.APIEndpoint()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("no API endpoint in repo: %w", err)
|
||||
}
|
||||
|
||||
srv := &http.Server{Handler: ah}
|
||||
|
||||
listenAddr, err := startServer(endpoint, srv)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to start storage miner API endpoint: %w", err)
|
||||
}
|
||||
|
||||
t.RecordMessage("started storage miner API server at %s", listenAddr)
|
||||
return srv, nil
|
||||
}
|
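// Illustrative sketch (not part of the original file): the miner role would be wired
// into a test case much like the drand role, e.g.:
//
//	miner, err := PrepareMiner(t)
//	if err != nil {
//		return err
//	}
//	return miner.RunDefault()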
79
testplans/lotus-soup/testkit/role_pubsub_tracer.go
Normal file
@ -0,0 +1,79 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"fmt"
|
||||
|
||||
"github.com/libp2p/go-libp2p"
|
||||
"github.com/libp2p/go-libp2p-core/crypto"
|
||||
"github.com/libp2p/go-libp2p-core/host"
|
||||
"github.com/libp2p/go-libp2p-pubsub-tracer/traced"
|
||||
|
||||
ma "github.com/multiformats/go-multiaddr"
|
||||
)
|
||||
|
||||
type PubsubTracer struct {
|
||||
t *TestEnvironment
|
||||
host host.Host
|
||||
traced *traced.TraceCollector
|
||||
}
|
||||
|
||||
func PreparePubsubTracer(t *TestEnvironment) (*PubsubTracer, error) {
|
||||
ctx := context.Background()
|
||||
|
||||
privk, _, err := crypto.GenerateEd25519Key(rand.Reader)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tracedIP := t.NetClient.MustGetDataNetworkIP().String()
|
||||
tracedAddr := fmt.Sprintf("/ip4/%s/tcp/4001", tracedIP)
|
||||
|
||||
host, err := libp2p.New(ctx,
|
||||
libp2p.Identity(privk),
|
||||
libp2p.ListenAddrStrings(tracedAddr),
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tracedDir := t.TestOutputsPath + "/traced.logs"
|
||||
traced, err := traced.NewTraceCollector(host, tracedDir)
|
||||
if err != nil {
|
||||
host.Close()
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tracedMultiaddrStr := fmt.Sprintf("%s/p2p/%s", tracedAddr, host.ID())
|
||||
t.RecordMessage("I am %s", tracedMultiaddrStr)
|
||||
|
||||
_ = ma.StringCast(tracedMultiaddrStr)
|
||||
tracedMsg := &PubsubTracerMsg{Multiaddr: tracedMultiaddrStr}
|
||||
t.SyncClient.MustPublish(ctx, PubsubTracerTopic, tracedMsg)
|
||||
|
||||
t.RecordMessage("waiting for all nodes to be ready")
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateReady, t.TestInstanceCount)
|
||||
|
||||
tracer := &PubsubTracer{t: t, host: host, traced: traced}
|
||||
return tracer, nil
|
||||
}
|
||||
|
||||
func (tr *PubsubTracer) RunDefault() error {
|
||||
tr.t.RecordMessage("running pubsub tracer")
|
||||
|
||||
defer func() {
|
||||
err := tr.Stop()
|
||||
if err != nil {
|
||||
tr.t.RecordMessage("error stoping tracer: %s", err)
|
||||
}
|
||||
}()
|
||||
|
||||
tr.t.WaitUntilAllDone()
|
||||
return nil
|
||||
}
|
||||
|
||||
func (tr *PubsubTracer) Stop() error {
|
||||
tr.traced.Stop()
|
||||
return tr.host.Close()
|
||||
}
|
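// Illustrative sketch (not part of the original file): node roles pick the published
// tracer address back up from the sync service and feed it into their pubsub
// configuration, as role_miner.go above does:
//
//	pubsubTracer, err := GetPubsubTracerMaddr(ctx, t)
//	if err != nil {
//		return nil, err
//	}
//	// ... later passed to the node constructor via withPubsubConfig(false, pubsubTracer)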
69
testplans/lotus-soup/testkit/sync.go
Normal file
@ -0,0 +1,69 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"github.com/filecoin-project/go-address"
|
||||
"github.com/filecoin-project/lotus/genesis"
|
||||
"github.com/filecoin-project/lotus/node/modules/dtypes"
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
"github.com/testground/sdk-go/sync"
|
||||
)
|
||||
|
||||
var (
|
||||
GenesisTopic = sync.NewTopic("genesis", &GenesisMsg{})
|
||||
BalanceTopic = sync.NewTopic("balance", &InitialBalanceMsg{})
|
||||
PresealTopic = sync.NewTopic("preseal", &PresealMsg{})
|
||||
ClientsAddrsTopic = sync.NewTopic("clients_addrs", &ClientAddressesMsg{})
|
||||
MinersAddrsTopic = sync.NewTopic("miners_addrs", &MinerAddressesMsg{})
|
||||
SlashedMinerTopic = sync.NewTopic("slashed_miner", &SlashedMinerMsg{})
|
||||
PubsubTracerTopic = sync.NewTopic("pubsub_tracer", &PubsubTracerMsg{})
|
||||
DrandConfigTopic = sync.NewTopic("drand_config", &DrandRuntimeInfo{})
|
||||
)
|
||||
|
||||
var (
|
||||
StateReady = sync.State("ready")
|
||||
StateDone = sync.State("done")
|
||||
StateStopMining = sync.State("stop-mining")
|
||||
StateMinerPickSeqNum = sync.State("miner-pick-seq-num")
|
||||
StateAbortTest = sync.State("abort-test")
|
||||
)
|
||||
|
||||
type InitialBalanceMsg struct {
|
||||
Addr address.Address
|
||||
Balance float64
|
||||
}
|
||||
|
||||
type PresealMsg struct {
|
||||
Miner genesis.Miner
|
||||
Seqno int64
|
||||
}
|
||||
|
||||
type GenesisMsg struct {
|
||||
Genesis []byte
|
||||
Bootstrapper []byte
|
||||
}
|
||||
|
||||
type ClientAddressesMsg struct {
|
||||
PeerNetAddr peer.AddrInfo
|
||||
WalletAddr address.Address
|
||||
GroupSeq int64
|
||||
}
|
||||
|
||||
type MinerAddressesMsg struct {
|
||||
FullNetAddrs peer.AddrInfo
|
||||
MinerNetAddrs peer.AddrInfo
|
||||
MinerActorAddr address.Address
|
||||
WalletAddr address.Address
|
||||
}
|
||||
|
||||
type SlashedMinerMsg struct {
|
||||
MinerActorAddr address.Address
|
||||
}
|
||||
|
||||
type PubsubTracerMsg struct {
|
||||
Multiaddr string
|
||||
}
|
||||
|
||||
type DrandRuntimeInfo struct {
|
||||
Config dtypes.DrandConfig
|
||||
GossipBootstrap dtypes.DrandBootstrap
|
||||
}
|
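// Illustrative sketch (not part of the original file): every topic above is used with
// the same publish/subscribe pattern, e.g. a client role announcing its addresses
// (mirroring what role_miner.go does on MinersAddrsTopic); addrInfo and walletAddr
// are placeholders.
//
//	t.SyncClient.MustPublish(ctx, ClientsAddrsTopic, &ClientAddressesMsg{
//		PeerNetAddr: addrInfo,
//		WalletAddr:  walletAddr,
//		GroupSeq:    t.GroupSeq,
//	})
//
//	ch := make(chan *ClientAddressesMsg, 16)
//	t.SyncClient.MustSubscribe(ctx, ClientsAddrsTopic, ch)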
88
testplans/lotus-soup/testkit/testenv.go
Normal file
@ -0,0 +1,88 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
"github.com/testground/sdk-go/run"
|
||||
"github.com/testground/sdk-go/runtime"
|
||||
)
|
||||
|
||||
type TestEnvironment struct {
|
||||
*runtime.RunEnv
|
||||
*run.InitContext
|
||||
|
||||
Role string
|
||||
}
|
||||
|
||||
// workaround for default params being wrapped in quote chars
|
||||
func (t *TestEnvironment) StringParam(name string) string {
|
||||
return strings.Trim(t.RunEnv.StringParam(name), "\"")
|
||||
}
|
||||
|
||||
func (t *TestEnvironment) DurationParam(name string) time.Duration {
|
||||
d, err := time.ParseDuration(t.StringParam(name))
|
||||
if err != nil {
|
||||
panic(fmt.Errorf("invalid duration value for param '%s': %w", name, err))
|
||||
}
|
||||
return d
|
||||
}
|
||||
|
||||
func (t *TestEnvironment) DurationRangeParam(name string) DurationRange {
|
||||
var r DurationRange
|
||||
t.JSONParam(name, &r)
|
||||
return r
|
||||
}
|
||||
|
||||
func (t *TestEnvironment) FloatRangeParam(name string) FloatRange {
|
||||
r := FloatRange{}
|
||||
t.JSONParam(name, &r)
|
||||
return r
|
||||
}
|
||||
|
||||
func (t *TestEnvironment) DebugSpew(format string, args ...interface{}) {
|
||||
t.RecordMessage(spew.Sprintf(format, args...))
|
||||
}
|
||||
|
||||
func (t *TestEnvironment) DumpJSON(filename string, v interface{}) {
|
||||
b, err := json.Marshal(v)
|
||||
if err != nil {
|
||||
t.RecordMessage("unable to marshal object to JSON: %s", err)
|
||||
return
|
||||
}
|
||||
f, err := t.CreateRawAsset(filename)
|
||||
if err != nil {
|
||||
t.RecordMessage("unable to create asset file: %s", err)
|
||||
return
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
_, err = f.Write(b)
|
||||
if err != nil {
|
||||
t.RecordMessage("error writing json object dump: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
// WaitUntilAllDone waits until all instances in the test case are done.
|
||||
func (t *TestEnvironment) WaitUntilAllDone() {
|
||||
ctx := context.Background()
|
||||
t.SyncClient.MustSignalAndWait(ctx, StateDone, t.TestInstanceCount)
|
||||
}
|
||||
|
||||
// WrapTestEnvironment takes a test case function that accepts a
|
||||
// *TestEnvironment, and adapts it to the original unwrapped SDK style
|
||||
// (run.InitializedTestCaseFn).
|
||||
func WrapTestEnvironment(f func(t *TestEnvironment) error) run.InitializedTestCaseFn {
|
||||
return func(runenv *runtime.RunEnv, initCtx *run.InitContext) error {
|
||||
t := &TestEnvironment{RunEnv: runenv, InitContext: initCtx}
|
||||
t.Role = t.StringParam("role")
|
||||
|
||||
t.DumpJSON("test-parameters.json", t.TestInstanceParams)
|
||||
|
||||
return f(t)
|
||||
}
|
||||
}
|
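// Illustrative sketch (not part of the original file): a plan entrypoint would
// typically register wrapped role handlers with the Testground SDK; the test case
// name and handler below are hypothetical.
//
//	func main() {
//		run.InvokeMap(map[string]interface{}{
//			"deals-e2e": WrapTestEnvironment(dealsE2E),
//		})
//	}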
77
testplans/lotus-soup/testkit/testenv_ranges.go
Normal file
@ -0,0 +1,77 @@
|
||||
package testkit
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"time"
|
||||
|
||||
"github.com/testground/sdk-go/ptypes"
|
||||
)
|
||||
|
||||
// DurationRange is a Testground parameter type that represents a duration
|
||||
// range, suitable for use in randomized tests. This type is encoded as a JSON array
|
||||
// of length 2 of element type ptypes.Duration, e.g. ["10s", "10m"].
|
||||
type DurationRange struct {
|
||||
Min time.Duration
|
||||
Max time.Duration
|
||||
}
|
||||
|
||||
func (r *DurationRange) ChooseRandom() time.Duration {
if r.Max <= r.Min {
return r.Min // avoid rand.Int63n(0) panicking on an empty range
}
i := int64(r.Min) + rand.Int63n(int64(r.Max)-int64(r.Min))
return time.Duration(i)
|
||||
}
|
||||
|
||||
func (r *DurationRange) UnmarshalJSON(b []byte) error {
|
||||
var s []ptypes.Duration
|
||||
if err := json.Unmarshal(b, &s); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(s) != 2 {
|
||||
return fmt.Errorf("expected two-element array of duration strings, got array of length %d", len(s))
|
||||
}
|
||||
if s[0].Duration > s[1].Duration {
|
||||
return fmt.Errorf("expected first element to be <= second element")
|
||||
}
|
||||
r.Min = s[0].Duration
|
||||
r.Max = s[1].Duration
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *DurationRange) MarshalJSON() ([]byte, error) {
|
||||
s := []ptypes.Duration{{r.Min}, {r.Max}}
|
||||
return json.Marshal(s)
|
||||
}
|
||||
|
||||
// FloatRange is a Testground parameter type that represents a float
|
||||
// range, suitable for use in randomized tests. This type is encoded as a JSON array
|
||||
// of length 2 of element type float32, e.g. [1.45, 10.675].
|
||||
type FloatRange struct {
|
||||
Min float32
|
||||
Max float32
|
||||
}
|
||||
|
||||
func (r *FloatRange) ChooseRandom() float32 {
|
||||
return r.Min + rand.Float32()*(r.Max-r.Min)
|
||||
}
|
||||
|
||||
func (r *FloatRange) UnmarshalJSON(b []byte) error {
|
||||
var s []float32
|
||||
if err := json.Unmarshal(b, &s); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(s) != 2 {
|
||||
return fmt.Errorf("expected two-element array of floats, got array of length %d", len(s))
|
||||
}
|
||||
if s[0] > s[1] {
|
||||
return fmt.Errorf("expected first element to be <= second element")
|
||||
}
|
||||
r.Min = s[0]
|
||||
r.Max = s[1]
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *FloatRange) MarshalJSON() ([]byte, error) {
|
||||
s := []float32{r.Min, r.Max}
|
||||
return json.Marshal(s)
|
||||
}
|
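// Illustrative sketch (not part of the original file): a composition would set a range
// parameter as a JSON array of durations, and a test case would then draw a random
// value; the parameter name is hypothetical.
//
//	// in the composition: deal_start_delay = '["10s", "1m"]'
//	r := t.DurationRangeParam("deal_start_delay")
//	delay := r.ChooseRandom()
//	time.Sleep(delay)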
0
testplans/notes/.empty
Normal file
55
testplans/notes/raulk.md
Normal file
@ -0,0 +1,55 @@
|
||||
# Raúl's notes
|
||||
|
||||
## Storage mining
|
||||
|
||||
The Storage Mining System is the part of the Filecoin Protocol that deals with
|
||||
storing Client’s data, producing proof artifacts that demonstrate correct
|
||||
storage behavior, and managing the work involved.
|
||||
|
||||
## Preseals
|
||||
|
||||
In the Filecoin consensus protocol, the miners' probability of being eligible
|
||||
to mine a block in a given epoch is directly correlated with their power in the
|
||||
network. This creates a chicken-and-egg problem at genesis. Since there are no
|
||||
miners, there is no power in the network, therefore no miner is eligible to mine
|
||||
and advance the chain.
|
||||
|
||||
Preseals are sealed sectors that are blessed at genesis, thus conferring
|
||||
their miners the possibility to win round elections and successfully mine a
|
||||
block. Without preseals, the chain would be dead on arrival.
|
||||
|
||||
Preseals work with fauxrep and faux sealing, which are special-case
|
||||
implementations of PoRep and the sealing logic that do not depend on slow
|
||||
sealing.
|
||||
|
||||
### Not implemented things
|
||||
|
||||
**Sector Resealing:** Miners should be able to 're-seal' sectors, to allow them
|
||||
to take a set of sectors with mostly expired pieces, and combine the
|
||||
not-yet-expired pieces into a single (or multiple) sectors.
|
||||
|
||||
**Sector Transfer:** Miners should be able to re-delegate the responsibility of
|
||||
storing data to another miner. This is tricky for many reasons, and will not be
|
||||
implemented in the initial release of Filecoin, but could provide interesting
|
||||
capabilities down the road.
|
||||
|
||||
## Catch-up/rush mining
|
||||
|
||||
In catch-up or rush mining, miners make up for chain history that does not
|
||||
exist. It's a recovery/healing procedure. The chain runs at a constant
25-second epoch time. When mining in the network halts for some reason
|
||||
(consensus/liveness bug, drand availability issues, etc.), upon a restart miners
|
||||
will go and backfill the chain history by mining backdated blocks in
|
||||
the appropriate timestamps.
|
||||
|
||||
There are a few things worth highlighting:
|
||||
* mining runs in a hot loop, and there is no time for miners to gossip about
|
||||
their blocks; therefore they end up building the chain solo, as they can't
|
||||
incorporate other blocks into tipsets.
|
||||
* the miner with most power will mine most blocks.
|
||||
* presumably, as many forks in the network will appear as miners who mined a
|
||||
block + a fork filled with null rounds only (for miners that didn't win a
|
||||
round).
|
||||
* at the end of the catch-up, the heaviest fork will win the race, and it may
|
||||
be possible for the most powerful miner pre-disruption to affect the
|
||||
outcome by choosing the messages that go in their blocks.
|