Merge branch 'master' into feat/remote-workers
commit 160e11ce8c
@ -52,6 +52,8 @@ jobs:
|
||||
- install-deps
|
||||
- prepare
|
||||
- go/mod-download
|
||||
- run: sudo apt-get update
|
||||
- run: sudo apt-get install npm
|
||||
- run:
|
||||
command: make buildall
|
||||
|
||||
|
2
.gitignore
vendored
@ -15,6 +15,8 @@
|
||||
**/*.h
|
||||
**/*.a
|
||||
**/*.pc
|
||||
/**/*/.DS_STORE
|
||||
.DS_STORE
|
||||
build/.*
|
||||
build/paramfetch.sh
|
||||
/vendor
|
||||
|
3
Makefile
@ -11,6 +11,7 @@ endif
|
||||
MODULES:=
|
||||
|
||||
CLEAN:=
|
||||
BINS:=
|
||||
|
||||
## FFI
|
||||
|
||||
@ -94,7 +95,7 @@ benchmarks:
|
||||
|
||||
pond: build
|
||||
go build -o pond ./lotuspond
|
||||
(cd lotuspond/front && npm i && npm run build)
|
||||
(cd lotuspond/front && npm i && CI=false npm run build)
|
||||
.PHONY: pond
|
||||
BINS+=pond
|
||||
|
||||
|
301
README.md
@ -1,6 +1,6 @@
|
||||
![Lotus](docs/images/lotus_logo_h.png)
|
||||
![Lotus](documentation/images/lotus_logo_h.png)
|
||||
|
||||
# project lotus - 莲
|
||||
# Project Lotus - 莲
|
||||
|
||||
Lotus is an experimental implementation of the Filecoin Distributed Storage Network. For more details about Filecoin, check out the [Filecoin Spec](https://github.com/filecoin-project/specs).
|
||||
|
||||
@ -8,301 +8,10 @@ Lotus is an experimental implementation of the Filecoin Distributed Storage Netw
|
||||
|
||||
All work is tracked via issues. An attempt at keeping an up-to-date view on remaining work is in the [lotus testnet github project board](https://github.com/filecoin-project/lotus/projects/1).
|
||||
|
||||
## Building & Documentation
|
||||
|
||||
## Building
|
||||
|
||||
We currently only provide the option to build lotus from source. Binary installation options are coming soon!
|
||||
|
||||
In order to run lotus, please do the following:
|
||||
1. Make sure you have these dependencies installed:
|
||||
- go (1.13 or higher)
|
||||
- gcc (7.4.0 or higher)
|
||||
- git (version 2 or higher)
|
||||
- bzr (some go dependency needs this)
|
||||
- jq
|
||||
- pkg-config
|
||||
- opencl-icd-loader
|
||||
- opencl driver (like nvidia-opencl on arch) (for GPU acceleration)
|
||||
- opencl-headers (build)
|
||||
- rustup (proofs build)
|
||||
- llvm (proofs build)
|
||||
- clang (proofs build)
|
||||
|
||||
Arch (run):
|
||||
```sh
|
||||
sudo pacman -Syu opencl-icd-loader
|
||||
```
|
||||
|
||||
Arch (build):
|
||||
```sh
|
||||
sudo pacman -Syu go gcc git bzr jq pkg-config opencl-icd-loader opencl-headers
|
||||
```
|
||||
|
||||
Ubuntu / Debian (run):
|
||||
```sh
|
||||
sudo apt update
|
||||
sudo apt install mesa-opencl-icd ocl-icd-opencl-dev
|
||||
```
|
||||
|
||||
Ubuntu (build):
|
||||
```sh
|
||||
sudo add-apt-repository ppa:longsleep/golang-backports
|
||||
sudo apt update
|
||||
sudo apt install golang-go gcc git bzr jq pkg-config mesa-opencl-icd ocl-icd-opencl-dev
|
||||
```
|
||||
|
||||
2. Clone this repo & `cd` into it
|
||||
```
|
||||
$ git clone https://github.com/filecoin-project/lotus.git
|
||||
$ cd lotus/
|
||||
```
|
||||
|
||||
3. Build and install the source code
|
||||
```
|
||||
$ make clean all
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
You should now be able to run the commands listed below.
|
||||
|
||||
## Devnet
|
||||
|
||||
### Node setup
|
||||
|
||||
If you have run lotus before and want to remove all previous data: `rm -rf ~/.lotus ~/.lotusstorage`
|
||||
|
||||
The following sections describe how to use the lotus CLI. Alternately you can run lotus nodes and miners using the [Pond GUI](#pond).
|
||||
|
||||
### Genesis & Bootstrap
|
||||
|
||||
The current lotus build will automatically join the lotus Devnet using the genesis and bootstrap files in the `build/` directory. No configuration is needed.
|
||||
|
||||
### Start Daemon
|
||||
|
||||
```sh
|
||||
$ lotus daemon
|
||||
```
|
||||
|
||||
In another window check that you are connected to the network:
|
||||
```sh
|
||||
$ lotus net peers | wc -l
|
||||
2 # number of peers
|
||||
```
|
||||
|
||||
Wait for the chain to finish syncing:
|
||||
```sh
|
||||
$ lotus sync wait
|
||||
```
|
||||
|
||||
You can view the latest block height along with other network metrics at https://lotus-metrics.kittyhawk.wtf/chain.
|
||||
|
||||
### Basics
|
||||
|
||||
Create a new address:
|
||||
```sh
|
||||
$ lotus wallet new bls
|
||||
t3...
|
||||
```
|
||||
|
||||
Grab some funds from the faucet: go to https://lotus-faucet.kittyhawk.wtf/, paste the address
|
||||
you just created, and press Send.
|
||||
|
||||
Check the wallet balance (balance is listed in attoFIL, where 1 attoFIL = 10^-18 FIL):
|
||||
```sh
|
||||
$ lotus wallet balance [optional address (t3...)]
|
||||
```
|
||||
|
||||
(NOTE: If you see an error like `actor not found` after executing this command, it means that either your node isn't fully synced or there are no transactions to this address on chain yet. In the latter case, using the faucet should 'fix' this.)
|
||||
|
||||
### Mining
|
||||
|
||||
Ensure that at least one BLS address (`t3..`) exists in your wallet:
|
||||
```sh
|
||||
$ lotus wallet list
|
||||
t3...
|
||||
```
|
||||
With this address, go to https://lotus-faucet.kittyhawk.wtf/miner.html, and
|
||||
click `Create Miner`
|
||||
|
||||
Wait for a page telling you the address of the newly created storage miner to
|
||||
appear. It should say: `New storage miners address is: t0..`
|
||||
|
||||
Initialize storage miner:
|
||||
```sh
|
||||
$ lotus-storage-miner init --actor=t01.. --owner=t3....
|
||||
```
|
||||
This command should return successfully after the miner is set up on-chain (30-60s).
|
||||
|
||||
Start mining:
|
||||
```sh
|
||||
$ lotus-storage-miner run
|
||||
```
|
||||
|
||||
To view the miner id used for deals:
|
||||
|
||||
```sh
|
||||
$ lotus-storage-miner info
|
||||
```
|
||||
|
||||
e.g. miner id `t0111`
|
||||
|
||||
Seal random data to start producing PoSts:
|
||||
|
||||
```sh
|
||||
$ lotus-storage-miner store-garbage
|
||||
```
|
||||
|
||||
You can check miner power and sector usage with the miner id:
|
||||
|
||||
```sh
|
||||
# Total power of the network
|
||||
$ lotus-storage-miner state power
|
||||
|
||||
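# Power of a particular miner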
$ lotus-storage-miner state power <miner>
|
||||
|
||||
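# Sectors of a particular miner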
$ lotus-storage-miner state sectors <miner>
|
||||
```
|
||||
|
||||
### Stage Data
|
||||
|
||||
Import some data:
|
||||
|
||||
```sh
|
||||
# Create a simple file
|
||||
$ echo "Hi my name is $USER" > hello.txt
|
||||
|
||||
# Import the file into lotus & get a Data CID
|
||||
$ lotus client import ./hello.txt
|
||||
<Data CID>
|
||||
|
||||
# List imported files by CID, name, size, status
|
||||
$ lotus client local
|
||||
```
|
||||
|
||||
(CID is short for Content Identifier, a self-describing content address used throughout the IPFS ecosystem. It is a cryptographic hash that uniquely maps to the data and verifies it has not changed.)
|
||||
|
||||
### Make a deal
|
||||
|
||||
(It is possible for a Client to make a deal with a Miner on the same lotus Node.)
|
||||
|
||||
```sh
|
||||
# List all miners in the system. Choose one to make a deal with.
|
||||
$ lotus state list-miners
|
||||
|
||||
# List asks proposed by a miner
|
||||
$ lotus client query-ask <miner>
|
||||
|
||||
# Propose a deal with a miner. Price is in attoFIL/byte/block. Duration is # of blocks.
|
||||
$ lotus client deal <Data CID> <miner> <price> <duration>
|
||||
```
|
||||
|
||||
For example `$ lotus client deal bafkre...qvtjsi t0111 36000 12` proposes a deal to store CID `bafkre...qvtjsi` with miner `t0111` at price `36000` for a duration of `12` blocks. If successful, the `client deal` command will return a deal CID.
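As a rough sanity check on the units (taking the stated attoFIL/byte/block pricing at face value and ignoring piece padding), storing a 20-byte file under that example deal would cost about 36000 × 20 × 12 = 8,640,000 attoFIL, i.e. roughly 8.64×10⁻¹² FIL.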
|
||||
|
||||
### Search & Retrieval
|
||||
|
||||
If you've stored data with a miner in the network, you can search for it by CID:
|
||||
|
||||
```sh
|
||||
# Search for data by CID
|
||||
$ lotus client find <Data CID>
|
||||
LOCAL
|
||||
RETRIEVAL <miner>@<miner peerId>-<deal funds>-<size>
|
||||
```
|
||||
|
||||
To retrieve data from a miner:
|
||||
|
||||
```sh
|
||||
$ lotus client retrieve <Data CID> <outfile>
|
||||
```
|
||||
|
||||
This will initiate a retrieval deal and write the data to the outfile. (This process may take some time.)
|
||||
|
||||
### Monitoring Dashboard
|
||||
|
||||
To see the latest network activity, including chain block height, blocktime, total network power, largest miners, and more, check out the monitoring dashboard at https://lotus-metrics.kittyhawk.wtf.
|
||||
|
||||
### Pond UI
|
||||
|
||||
-----
|
||||
|
||||
As an alternative to the CLI you can use Pond, a graphical testbed for lotus. It can be used to spin up nodes, connect them in a given topology, start them mining, and observe how they function over time.
|
||||
|
||||
Build:
|
||||
|
||||
```
|
||||
$ make pond
|
||||
```
|
||||
|
||||
Run:
|
||||
```
|
||||
$ ./pond run
|
||||
Listening on http://127.0.0.1:2222
|
||||
```
|
||||
|
||||
Now go to http://127.0.0.1:2222.
|
||||
|
||||
**Things to try:**
|
||||
|
||||
- The `Spawn Node` button starts a new lotus Node in a new draggable window.
|
||||
- Click `[Spawn Storage Miner]` to start mining (make sure the Node's wallet has funds).
|
||||
- Click on `[Client]` to open the Node's client interface and propose a deal with an existing Miner. If successful you'll see a payment channel open up with that Miner.
|
||||
|
||||
> Note: Don't leave Pond unattended for long periods of time (10h+), as the web UI tends to
|
||||
> eventually consume all the available RAM.
|
||||
|
||||
### Troubleshooting
|
||||
|
||||
* Turn it off and on again - start from the top of this list
|
||||
* `rm -rf ~/.lotus ~/.lotusstorage/`
|
||||
* Verify you have the correct versions of dependencies
|
||||
* If stuck on a bad fork, try `lotus chain sethead --genesis`
|
||||
* If that didn't help, open a new issue, ask in the [Community forum](https://discuss.filecoin.io) or reach out via [Community chat](https://github.com/filecoin-project/community#chat).
|
||||
|
||||
|
||||
|
||||
## Architecture
|
||||
|
||||
Lotus has a modular architecture and aims to keep clean API boundaries between components, even when they live in the same process. Notably, the 'lotus full node' software and the 'lotus storage miner' software are two separate programs.
|
||||
|
||||
The lotus storage miner is intended to run on the machine that manages a single storage miner instance, and communicates with the full node via the WebSocket JSON-RPC API for all of its chain-interaction needs. This way, a mining operation may easily run one or many storage miners, connected to one or many full node instances.
|
||||
|
||||
## Notable Modules
|
||||
|
||||
### API
|
||||
The system's APIs are defined here. The RPC maps directly onto these API interfaces using the JSON-RPC package in `lib/jsonrpc`. Initial API documentation is in [docs/API.md](docs/API.md).
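As an illustrative sketch (not part of this diff) of exposing an API struct over `lib/jsonrpc`; the `NewServer`/`Register` names and the handler behaviour are assumptions about that package's surface, not something documented here:

```go
package main

import (
	"net/http"

	"github.com/filecoin-project/lotus/lib/jsonrpc"
)

// DemoAPI is a stand-in for the real API structs; every exported method
// becomes an RPC method under the registered namespace.
type DemoAPI struct{}

func (a *DemoAPI) Version() (string, error) { return "lotus-sketch", nil }

func main() {
	// Assumed surface: NewServer returns an http.Handler and Register binds
	// a receiver's methods under a namespace (e.g. "Filecoin.Version").
	rpcServer := jsonrpc.NewServer()
	rpcServer.Register("Filecoin", &DemoAPI{})

	http.Handle("/rpc/v0", rpcServer)
	_ = http.ListenAndServe("127.0.0.1:1234", nil)
}
```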
|
||||
|
||||
### Chain/Types
|
||||
Implementation of data structures used by Filecoin and their serializations.
|
||||
|
||||
### Chain/Store
|
||||
The chainstore manages all local chain state, including block headers, messages, and state.
|
||||
|
||||
### Chain/State
|
||||
A package for dealing with the Filecoin state tree. Wraps the [HAMT](https://github.com/ipfs/go-hamt-ipld).
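As an illustrative sketch (not part of this diff) of reading an actor out of a state tree; `LoadStateTree` and `GetActor` are assumed helper names, while `hamt.CSTFromBstore` and `actors.StoragePowerAddress` appear elsewhere in this diff:

```go
package statetreedemo

import (
	"fmt"

	cid "github.com/ipfs/go-cid"
	hamt "github.com/ipfs/go-hamt-ipld"
	bstore "github.com/ipfs/go-ipfs-blockstore"

	"github.com/filecoin-project/lotus/chain/actors"
	"github.com/filecoin-project/lotus/chain/state"
)

// loadPowerActor looks up the storage power actor in the tree rooted at stateRoot.
func loadPowerActor(bs bstore.Blockstore, stateRoot cid.Cid) error {
	cst := hamt.CSTFromBstore(bs)                  // same helper the genesis code in this diff uses
	st, err := state.LoadStateTree(cst, stateRoot) // assumed constructor name
	if err != nil {
		return err
	}
	act, err := st.GetActor(actors.StoragePowerAddress) // assumed accessor name
	if err != nil {
		return err
	}
	fmt.Println("power actor head:", act.Head)
	return nil
}
```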
|
||||
|
||||
### Chain/Actors
|
||||
Implementations of the builtin Filecoin network actors.
|
||||
|
||||
### Chain/Vm
|
||||
The Filecoin state machine 'vm'. Implemented here are utilities to invoke Filecoin actor methods.
|
||||
|
||||
|
||||
### Miner
|
||||
The block producer logic. This package interfaces with the full node through the API, despite currently being implemented in the same process (very likely to be extracted as its own separate process in the near future).
|
||||
|
||||
### Storage
|
||||
The storage miner logic. This package also interfaces with the full node through a subset of the API. This code is used to implement the `lotus-storage-miner` process.
|
||||
|
||||
## Pond
|
||||
Pond is a graphical testbed for lotus. It can be used to spin up nodes, connect them in a given topology, start them mining, and observe how they function over time.
|
||||
|
||||
To try it out, run `make pond`, then run `./pond run`.
|
||||
Once it is running, visit localhost:2222 in your browser.
|
||||
|
||||
## Tracing
|
||||
Lotus has tracing built into many of its internals. To view the traces, first download [jaeger](https://www.jaegertracing.io/download/) (choose the 'all-in-one' binary). Then run it somewhere, start up the lotus daemon, and open up localhost:16686 in your browser.
|
||||
|
||||
For more details, see [this document](./docs/tracing.md).
|
||||
For instructions on how to build lotus from source, please visit [https://docs.lotu.sh](https://docs.lotu.sh) or read the source [here](https://github.com/filecoin-project/lotus/tree/master/documentation).
|
||||
|
||||
## License
|
||||
|
||||
Dual-licensed under [MIT](https://github.com/filecoin-project/lotus/blob/master/LICENSE-MIT) + [Apache 2.0](https://github.com/filecoin-project/lotus/blob/master/LICENSE-APACHE)
|
||||
|
@ -226,12 +226,14 @@ type QueryOffer struct {
|
||||
MinerPeerID peer.ID
|
||||
}
|
||||
|
||||
func (o *QueryOffer) Order() RetrievalOrder {
|
||||
func (o *QueryOffer) Order(client address.Address) RetrievalOrder {
|
||||
return RetrievalOrder{
|
||||
Root: o.Root,
|
||||
Size: o.Size,
|
||||
Total: o.MinPrice,
|
||||
|
||||
Client: client,
|
||||
|
||||
Miner: o.Miner,
|
||||
MinerPeerID: o.MinerPeerID,
|
||||
}
|
||||
|
@ -86,9 +86,9 @@ type SectorInfo struct {
|
||||
}
|
||||
|
||||
type SealedRef struct {
|
||||
Piece string
|
||||
Offset uint64
|
||||
Size uint64
|
||||
SectorID uint64
|
||||
Offset uint64
|
||||
Size uint64
|
||||
}
|
||||
|
||||
type SealedRefs struct {
|
||||
|
@ -137,11 +137,8 @@ func (t *SealedRef) MarshalCBOR(w io.Writer) error {
|
||||
return err
|
||||
}
|
||||
|
||||
// t.t.Piece (string) (string)
|
||||
if _, err := w.Write(cbg.CborEncodeMajorType(cbg.MajTextString, uint64(len(t.Piece)))); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := w.Write([]byte(t.Piece)); err != nil {
|
||||
// t.t.SectorID (uint64) (uint64)
|
||||
if _, err := w.Write(cbg.CborEncodeMajorType(cbg.MajUnsignedInt, uint64(t.SectorID))); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@ -172,16 +169,16 @@ func (t *SealedRef) UnmarshalCBOR(r io.Reader) error {
|
||||
return fmt.Errorf("cbor input had wrong number of fields")
|
||||
}
|
||||
|
||||
// t.t.Piece (string) (string)
|
||||
// t.t.SectorID (uint64) (uint64)
|
||||
|
||||
{
|
||||
sval, err := cbg.ReadString(br)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.Piece = string(sval)
|
||||
maj, extra, err = cbg.CborReadHeader(br)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if maj != cbg.MajUnsignedInt {
|
||||
return fmt.Errorf("wrong type for uint64 field")
|
||||
}
|
||||
t.SectorID = uint64(extra)
|
||||
// t.t.Offset (uint64) (uint64)
|
||||
|
||||
maj, extra, err = cbg.CborReadHeader(br)
|
||||
|
@ -1,26 +1,32 @@
|
||||
package test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"math/rand"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
logging "github.com/ipfs/go-log"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/node/impl"
|
||||
)
|
||||
|
||||
func init() {
|
||||
logging.SetAllLoggers(logging.LevelInfo)
|
||||
build.InsecurePoStValidation = true
|
||||
}
|
||||
|
||||
func TestDealFlow(t *testing.T, b APIBuilder) {
|
||||
os.Setenv("BELLMAN_NO_GPU", "1")
|
||||
|
||||
logging.SetAllLoggers(logging.LevelInfo)
|
||||
ctx := context.Background()
|
||||
n, sn := b(t, 1, []int{0})
|
||||
client := n[0].FullNode.(*impl.FullNodeAPI)
|
||||
@ -36,13 +42,16 @@ func TestDealFlow(t *testing.T, b APIBuilder) {
|
||||
}
|
||||
time.Sleep(time.Second)
|
||||
|
||||
r := io.LimitReader(rand.New(rand.NewSource(17)), 1000)
|
||||
data := make([]byte, 1000)
|
||||
rand.New(rand.NewSource(5)).Read(data)
|
||||
|
||||
r := bytes.NewReader(data)
|
||||
fcid, err := client.ClientImportLocal(ctx, r)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
maddr, err := address.NewFromString("t0102")
|
||||
maddr, err := miner.ActorAddress(ctx)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
@ -57,7 +66,7 @@ func TestDealFlow(t *testing.T, b APIBuilder) {
|
||||
for mine {
|
||||
time.Sleep(time.Second)
|
||||
fmt.Println("mining a block now")
|
||||
if err := n[0].MineOne(ctx); err != nil {
|
||||
if err := sn[0].MineOne(ctx); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
@ -90,6 +99,42 @@ loop:
|
||||
time.Sleep(time.Second / 2)
|
||||
}
|
||||
|
||||
// Retrieval
|
||||
|
||||
offers, err := client.ClientFindData(ctx, fcid)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if len(offers) < 1 {
|
||||
t.Fatal("no offers")
|
||||
}
|
||||
|
||||
rpath, err := ioutil.TempDir("", "lotus-retrieve-test-")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer os.RemoveAll(rpath)
|
||||
|
||||
caddr, err := client.WalletDefaultAddress(ctx)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
err = client.ClientRetrieve(ctx, offers[0].Order(caddr), filepath.Join(rpath, "ret"))
|
||||
if err != nil {
|
||||
t.Fatalf("%+v", err)
|
||||
}
|
||||
|
||||
rdata, err := ioutil.ReadFile(filepath.Join(rpath, "ret"))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if !bytes.Equal(rdata, data) {
|
||||
t.Fatal("wrong data retrieved")
|
||||
}
|
||||
|
||||
mine = false
|
||||
fmt.Println("shutting down mining")
|
||||
<-done
|
||||
|
@ -9,7 +9,7 @@ import (
|
||||
|
||||
func (ts *testSuite) testMining(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
apis, _ := ts.makeNodes(t, 1, []int{0})
|
||||
apis, sn := ts.makeNodes(t, 1, []int{0})
|
||||
api := apis[0]
|
||||
|
||||
h1, err := api.ChainHead(ctx)
|
||||
@ -20,7 +20,7 @@ func (ts *testSuite) testMining(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
<-newHeads
|
||||
|
||||
err = api.MineOne(ctx)
|
||||
err = sn[0].MineOne(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
<-newHeads
|
||||
|
@ -11,12 +11,12 @@ import (
|
||||
|
||||
type TestNode struct {
|
||||
api.FullNode
|
||||
|
||||
MineOne func(context.Context) error
|
||||
}
|
||||
|
||||
type TestStorageNode struct {
|
||||
api.StorageMiner
|
||||
|
||||
MineOne func(context.Context) error
|
||||
}
|
||||
|
||||
// APIBuilder is a function which is invoked in test suite to provide
|
||||
@ -43,7 +43,7 @@ func TestApis(t *testing.T, b APIBuilder) {
|
||||
|
||||
func (ts *testSuite) testVersion(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
apis, _ := ts.makeNodes(t, 1, []int{})
|
||||
apis, _ := ts.makeNodes(t, 1, []int{0})
|
||||
api := apis[0]
|
||||
|
||||
v, err := api.Version(ctx)
|
||||
@ -57,7 +57,7 @@ func (ts *testSuite) testVersion(t *testing.T) {
|
||||
|
||||
func (ts *testSuite) testID(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
apis, _ := ts.makeNodes(t, 1, []int{})
|
||||
apis, _ := ts.makeNodes(t, 1, []int{0})
|
||||
api := apis[0]
|
||||
|
||||
id, err := api.ID(ctx)
|
||||
@ -69,7 +69,7 @@ func (ts *testSuite) testID(t *testing.T) {
|
||||
|
||||
func (ts *testSuite) testConnectTwo(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
apis, _ := ts.makeNodes(t, 2, []int{})
|
||||
apis, _ := ts.makeNodes(t, 2, []int{0})
|
||||
|
||||
p, err := apis[0].NetPeers(ctx)
|
||||
if err != nil {
|
||||
|
3
build/bootstrap/bootstrappers.pi
Normal file
@ -0,0 +1,3 @@
|
||||
/ip4/147.75.80.17/tcp/1347/p2p/12D3KooWPWCCqUN3gPEaFAMpAwfh5a6SryBEsFt5R2oK8oW86a4C
|
||||
/ip6/2604:1380:2000:f400::1/tcp/36137/p2p/12D3KooWNL1fJPBArhsoqwg2wbXgCDTByMyg4ZGp6HjgWr9bgnaJ
|
||||
/ip4/147.75.80.29/tcp/44397/p2p/12D3KooWNL1fJPBArhsoqwg2wbXgCDTByMyg4ZGp6HjgWr9bgnaJ
|
Binary file not shown.
@ -3,19 +3,19 @@
|
||||
package build
|
||||
|
||||
// Seconds
|
||||
const BlockDelay = 12
|
||||
const BlockDelay = 30
|
||||
|
||||
// FallbackPoStDelay is the number of epochs the miner needs to wait after
|
||||
// ElectionPeriodStart before starting fallback post computation
|
||||
//
|
||||
// Epochs
|
||||
const FallbackPoStDelay = 1000
|
||||
const FallbackPoStDelay = 30
|
||||
|
||||
// SlashablePowerDelay is the number of epochs after ElectionPeriodStart, after
|
||||
// which the miner is slashed
|
||||
//
|
||||
// Epochs
|
||||
const SlashablePowerDelay = 2000
|
||||
const SlashablePowerDelay = 200
|
||||
|
||||
// Epochs
|
||||
const InteractivePoRepDelay = 10
|
||||
const InteractivePoRepDelay = 8
|
||||
|
@ -13,7 +13,6 @@ const UnixfsChunkSize uint64 = 1 << 20
|
||||
const UnixfsLinksPerLevel = 1024
|
||||
|
||||
var SectorSizes = []uint64{
|
||||
1 << 10,
|
||||
16 << 20,
|
||||
256 << 20,
|
||||
1 << 30,
|
||||
|
48
chain/actors/actor_cron.go
Normal file
@ -0,0 +1,48 @@
|
||||
package actors
|
||||
|
||||
import (
|
||||
"github.com/filecoin-project/lotus/chain/actors/aerrors"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
)
|
||||
|
||||
type CronActor struct{}
|
||||
|
||||
type callTuple struct {
|
||||
addr address.Address
|
||||
method uint64
|
||||
}
|
||||
|
||||
var CronActors = []callTuple{
|
||||
{StoragePowerAddress, SPAMethods.CheckProofSubmissions},
|
||||
}
|
||||
|
||||
type CronActorState struct{}
|
||||
|
||||
type cAMethods struct {
|
||||
EpochTick uint64
|
||||
}
|
||||
|
||||
var CAMethods = cAMethods{2}
|
||||
|
||||
func (ca CronActor) Exports() []interface{} {
|
||||
return []interface{}{
|
||||
1: nil,
|
||||
2: ca.EpochTick,
|
||||
}
|
||||
}
|
||||
|
||||
func (ca CronActor) EpochTick(act *types.Actor, vmctx types.VMContext, params *struct{}) ([]byte, ActorError) {
|
||||
if vmctx.Message().From != CronAddress {
|
||||
return nil, aerrors.New(1, "EpochTick is only callable as a part of tipset state computation")
|
||||
}
|
||||
|
||||
for _, call := range CronActors {
|
||||
_, err := vmctx.Send(call.addr, call.method, types.NewInt(0), nil)
|
||||
if err != nil {
|
||||
return nil, err // todo: this very bad?
|
||||
}
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
}
|
@ -169,7 +169,7 @@ func IsBuiltinActor(code cid.Cid) bool {
|
||||
}
|
||||
|
||||
func IsSingletonActor(code cid.Cid) bool {
|
||||
return code == StoragePowerCodeCid || code == StorageMarketCodeCid || code == InitCodeCid
|
||||
return code == StoragePowerCodeCid || code == StorageMarketCodeCid || code == InitCodeCid || code == CronCodeCid
|
||||
}
|
||||
|
||||
func (ias *InitActorState) AddActor(cst *hamt.CborIpldStore, addr address.Address) (address.Address, error) {
|
||||
|
@ -335,7 +335,7 @@ func (sma StorageMinerActor) ProveCommitSector(act *types.Actor, vmctx types.VMC
|
||||
|
||||
commD, err := vmctx.Send(StorageMarketAddress, SMAMethods.ComputeDataCommitment, types.NewInt(0), enc)
|
||||
if err != nil {
|
||||
return nil, aerrors.Wrap(err, "failed to compute data commitment")
|
||||
return nil, aerrors.Wrapf(err, "failed to compute data commitment (sector %d, deals: %v)", params.SectorID, params.DealIDs)
|
||||
}
|
||||
|
||||
if ok, err := ValidatePoRep(ctx, maddr, mi.SectorSize, commD, us.Info.CommR, ticket, params.Proof, seed, params.SectorID); err != nil {
|
||||
|
@ -611,18 +611,22 @@ func (sma StorageMarketActor) ComputeDataCommitment(act *types.Actor, vmctx type
|
||||
return nil, aerrors.HandleExternalError(err, "loading deals amt")
|
||||
}
|
||||
|
||||
if len(params.DealIDs) == 0 {
|
||||
return nil, aerrors.New(3, "no deal IDs")
|
||||
}
|
||||
|
||||
var pieces []sectorbuilder.PublicPieceInfo
|
||||
for _, deal := range params.DealIDs {
|
||||
var dealInfo OnChainDeal
|
||||
if err := deals.Get(deal, &dealInfo); err != nil {
|
||||
if _, is := err.(*amt.ErrNotFound); is {
|
||||
return nil, aerrors.New(3, "deal not found")
|
||||
return nil, aerrors.New(4, "deal not found")
|
||||
}
|
||||
return nil, aerrors.HandleExternalError(err, "getting deal info failed")
|
||||
}
|
||||
|
||||
if dealInfo.Deal.Proposal.Provider != vmctx.Message().From {
|
||||
return nil, aerrors.New(4, "referenced deal was not from caller")
|
||||
return nil, aerrors.New(5, "referenced deal was not from caller")
|
||||
}
|
||||
|
||||
var commP [32]byte
|
||||
@ -636,7 +640,7 @@ func (sma StorageMarketActor) ComputeDataCommitment(act *types.Actor, vmctx type
|
||||
|
||||
commd, err := sectorbuilder.GenerateDataCommitment(params.SectorSize, pieces)
|
||||
if err != nil {
|
||||
return nil, aerrors.Absorb(err, 5, "failed to generate data commitment from pieces")
|
||||
return nil, aerrors.Absorb(err, 6, "failed to generate data commitment from pieces")
|
||||
}
|
||||
|
||||
return commd[:], nil
|
||||
|
@ -567,8 +567,8 @@ func pledgeCollateralForSize(vmctx types.VMContext, size, totalStorage types.Big
|
||||
}
|
||||
|
||||
func (spa StoragePowerActor) CheckProofSubmissions(act *types.Actor, vmctx types.VMContext, param *struct{}) ([]byte, ActorError) {
|
||||
if vmctx.Message().From != StoragePowerAddress {
|
||||
return nil, aerrors.New(1, "CheckProofSubmissions is only callable as a part of tipset state computation")
|
||||
if vmctx.Message().From != CronAddress {
|
||||
return nil, aerrors.New(1, "CheckProofSubmissions is only callable from the cron actor")
|
||||
}
|
||||
|
||||
var self StoragePowerState
|
||||
|
@ -8,6 +8,7 @@ import (
|
||||
)
|
||||
|
||||
var AccountCodeCid cid.Cid
|
||||
var CronCodeCid cid.Cid
|
||||
var StoragePowerCodeCid cid.Cid
|
||||
var StorageMarketCodeCid cid.Cid
|
||||
var StorageMinerCodeCid cid.Cid
|
||||
@ -19,6 +20,7 @@ var InitAddress = mustIDAddress(0)
|
||||
var NetworkAddress = mustIDAddress(1)
|
||||
var StoragePowerAddress = mustIDAddress(2)
|
||||
var StorageMarketAddress = mustIDAddress(3) // TODO: missing from spec
|
||||
var CronAddress = mustIDAddress(4)
|
||||
var BurntFundsAddress = mustIDAddress(99)
|
||||
|
||||
func mustIDAddress(i uint64) address.Address {
|
||||
@ -40,6 +42,7 @@ func init() {
|
||||
}
|
||||
|
||||
AccountCodeCid = mustSum("fil/1/account") // TODO: spec
|
||||
CronCodeCid = mustSum("fil/1/cron")
|
||||
StoragePowerCodeCid = mustSum("fil/1/power")
|
||||
StorageMarketCodeCid = mustSum("fil/1/market")
|
||||
StorageMinerCodeCid = mustSum("fil/1/miner")
|
||||
|
@ -3984,3 +3984,32 @@ func (t *CheckMinerParams) UnmarshalCBOR(r io.Reader) error {
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *CronActorState) MarshalCBOR(w io.Writer) error {
|
||||
if t == nil {
|
||||
_, err := w.Write(cbg.CborNull)
|
||||
return err
|
||||
}
|
||||
if _, err := w.Write([]byte{128}); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *CronActorState) UnmarshalCBOR(r io.Reader) error {
|
||||
br := cbg.GetPeeker(r)
|
||||
|
||||
maj, extra, err := cbg.CborReadHeader(br)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if maj != cbg.MajArray {
|
||||
return fmt.Errorf("cbor input should be of type array")
|
||||
}
|
||||
|
||||
if extra != 0 {
|
||||
return fmt.Errorf("cbor input had wrong number of fields")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
@ -8,9 +8,9 @@ import (
|
||||
"strconv"
|
||||
|
||||
bls "github.com/filecoin-project/filecoin-ffi"
|
||||
"github.com/filecoin-project/go-leb128"
|
||||
cbor "github.com/ipfs/go-ipld-cbor"
|
||||
"github.com/minio/blake2b-simd"
|
||||
"github.com/multiformats/go-varint"
|
||||
"github.com/polydawn/refmt/obj/atlas"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
@ -166,7 +166,7 @@ func (a *Address) Scan(value interface{}) error {
|
||||
|
||||
// NewIDAddress returns an address using the ID protocol.
|
||||
func NewIDAddress(id uint64) (Address, error) {
|
||||
return newAddress(ID, leb128.FromUInt64(id))
|
||||
return newAddress(ID, varint.ToUvarint(id))
|
||||
}
|
||||
|
||||
// NewSecp256k1Address returns an address using the SECP256K1 protocol.
|
||||
@ -218,6 +218,14 @@ func addressHash(ingest []byte) []byte {
|
||||
func newAddress(protocol Protocol, payload []byte) (Address, error) {
|
||||
switch protocol {
|
||||
case ID:
|
||||
_, n, err := varint.FromUvarint(payload)
|
||||
if err != nil {
|
||||
return Undef, xerrors.Errorf("could not decode: %v: %w", err, ErrInvalidPayload)
|
||||
}
|
||||
if n != len(payload) {
|
||||
return Undef, xerrors.Errorf("different varint length (v:%d != p:%d): %w",
|
||||
n, len(payload), ErrInvalidPayload)
|
||||
}
|
||||
case SECP256K1, Actor:
|
||||
if len(payload) != PayloadHashLength {
|
||||
return Undef, ErrInvalidPayload
|
||||
@ -258,7 +266,14 @@ func encode(network Network, addr Address) (string, error) {
|
||||
cksm := Checksum(append([]byte{addr.Protocol()}, addr.Payload()...))
|
||||
strAddr = ntwk + fmt.Sprintf("%d", addr.Protocol()) + AddressEncoding.WithPadding(-1).EncodeToString(append(addr.Payload(), cksm[:]...))
|
||||
case ID:
|
||||
strAddr = ntwk + fmt.Sprintf("%d", addr.Protocol()) + fmt.Sprintf("%d", leb128.ToUInt64(addr.Payload()))
|
||||
i, n, err := varint.FromUvarint(addr.Payload())
|
||||
if err != nil {
|
||||
return UndefAddressString, xerrors.Errorf("could not decode varint: %w", err)
|
||||
}
|
||||
if n != len(addr.Payload()) {
|
||||
return UndefAddressString, xerrors.Errorf("payload contains additional bytes")
|
||||
}
|
||||
strAddr = fmt.Sprintf("%s%d%d", ntwk, addr.Protocol(), i)
|
||||
default:
|
||||
return UndefAddressString, ErrUnknownProtocol
|
||||
}
|
||||
@ -304,7 +319,7 @@ func decode(a string) (Address, error) {
|
||||
if err != nil {
|
||||
return Undef, ErrInvalidPayload
|
||||
}
|
||||
return newAddress(protocol, leb128.FromUInt64(id))
|
||||
return newAddress(protocol, varint.ToUvarint(id))
|
||||
}
|
||||
|
||||
payloadcksm, err := AddressEncoding.WithPadding(-1).DecodeString(raw)
|
||||
@ -395,5 +410,6 @@ func IDFromAddress(addr Address) (uint64, error) {
|
||||
return 0, xerrors.Errorf("cannot get id from non id address")
|
||||
}
|
||||
|
||||
return leb128.ToUInt64(addr.Payload()), nil
|
||||
i, _, err := varint.FromUvarint(addr.Payload())
|
||||
return i, err
|
||||
}
|
||||
|
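As an illustrative round-trip of the new ID-address encoding above (a sketch, not part of this diff; it only uses the helpers shown in this hunk):

```go
package main

import (
	"fmt"

	"github.com/filecoin-project/lotus/chain/address"
)

func main() {
	// NewIDAddress now encodes the ID as an unsigned varint payload.
	a, err := address.NewIDAddress(101)
	if err != nil {
		panic(err)
	}
	fmt.Println(a.String()) // textual form, e.g. "t0101" on the testnet network

	// IDFromAddress decodes the varint payload back into the numeric ID.
	id, err := address.IDFromAddress(a)
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // 101
}
```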
@ -11,7 +11,7 @@ import (
|
||||
"time"
|
||||
|
||||
ffi "github.com/filecoin-project/filecoin-ffi"
|
||||
"github.com/filecoin-project/go-leb128"
|
||||
"github.com/multiformats/go-varint"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
@ -94,7 +94,9 @@ func TestVectorsIDAddress(t *testing.T) {
|
||||
maybeAddr, err := NewFromString(tc.expected)
|
||||
assert.NoError(err)
|
||||
assert.Equal(ID, maybeAddr.Protocol())
|
||||
assert.Equal(tc.input, leb128.ToUInt64(maybeAddr.Payload()))
|
||||
id, _, err := varint.FromUvarint(maybeAddr.Payload())
|
||||
assert.NoError(err)
|
||||
assert.Equal(tc.input, id)
|
||||
|
||||
// Round trip to and from bytes
|
||||
maybeAddrBytes, err := NewFromBytes(maybeAddr.Bytes())
|
||||
@ -532,3 +534,9 @@ func BenchmarkCborUnmarshal(b *testing.B) {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestIDEdgeCase(t *testing.T) {
|
||||
a, err := NewFromBytes([]byte{0, 0x80})
|
||||
_ = a.String()
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
@ -156,8 +156,9 @@ func (bs *BlockSync) GetChainMessages(ctx context.Context, h *types.TipSet, coun
|
||||
|
||||
var err error
|
||||
for _, p := range perm {
|
||||
res, err := bs.sendRequestToPeer(ctx, peers[p], req)
|
||||
if err != nil {
|
||||
res, rerr := bs.sendRequestToPeer(ctx, peers[p], req)
|
||||
if rerr != nil {
|
||||
err = rerr
|
||||
log.Warnf("BlockSync request failed for peer %s: %s", peers[p].String(), err)
|
||||
continue
|
||||
}
|
||||
@ -172,6 +173,10 @@ func (bs *BlockSync) GetChainMessages(ctx context.Context, h *types.TipSet, coun
|
||||
}
|
||||
}
|
||||
|
||||
if err == nil {
|
||||
return nil, xerrors.Errorf("GetChainMessages failed, no peers connected")
|
||||
}
|
||||
|
||||
// TODO: What if we have no peers (and err is nil)?
|
||||
return nil, xerrors.Errorf("GetChainMessages failed with all peers(%d): %w", len(peers), err)
|
||||
}
|
||||
|
@ -857,7 +857,7 @@ func (t *StorageDataTransferVoucher) MarshalCBOR(w io.Writer) error {
|
||||
_, err := w.Write(cbg.CborNull)
|
||||
return err
|
||||
}
|
||||
if _, err := w.Write([]byte{129}); err != nil {
|
||||
if _, err := w.Write([]byte{130}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@ -867,6 +867,10 @@ func (t *StorageDataTransferVoucher) MarshalCBOR(w io.Writer) error {
|
||||
return xerrors.Errorf("failed to write cid field t.Proposal: %w", err)
|
||||
}
|
||||
|
||||
// t.t.DealID (uint64) (uint64)
|
||||
if _, err := w.Write(cbg.CborEncodeMajorType(cbg.MajUnsignedInt, uint64(t.DealID))); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -881,7 +885,7 @@ func (t *StorageDataTransferVoucher) UnmarshalCBOR(r io.Reader) error {
|
||||
return fmt.Errorf("cbor input should be of type array")
|
||||
}
|
||||
|
||||
if extra != 1 {
|
||||
if extra != 2 {
|
||||
return fmt.Errorf("cbor input had wrong number of fields")
|
||||
}
|
||||
|
||||
@ -897,5 +901,15 @@ func (t *StorageDataTransferVoucher) UnmarshalCBOR(r io.Reader) error {
|
||||
t.Proposal = c
|
||||
|
||||
}
|
||||
// t.t.DealID (uint64) (uint64)
|
||||
|
||||
maj, extra, err = cbg.CborReadHeader(br)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if maj != cbg.MajUnsignedInt {
|
||||
return fmt.Errorf("wrong type for uint64 field")
|
||||
}
|
||||
t.DealID = uint64(extra)
|
||||
return nil
|
||||
}
|
||||
|
@ -224,10 +224,13 @@ func (p *Provider) onDataTransferEvent(event datatransfer.Event, channelState da
|
||||
// data transfer events for opening and progress do not affect deal state
|
||||
var next api.DealState
|
||||
var err error
|
||||
var mut func(*MinerDeal)
|
||||
switch event {
|
||||
case datatransfer.Complete:
|
||||
next = api.DealStaged
|
||||
err = nil
|
||||
mut = func(deal *MinerDeal) {
|
||||
deal.DealID = voucher.DealID
|
||||
}
|
||||
case datatransfer.Error:
|
||||
next = api.DealFailed
|
||||
err = ErrDataTransferFailed
|
||||
@ -241,7 +244,7 @@ func (p *Provider) onDataTransferEvent(event datatransfer.Event, channelState da
|
||||
newState: next,
|
||||
id: voucher.Proposal,
|
||||
err: err,
|
||||
mut: nil,
|
||||
mut: mut,
|
||||
}:
|
||||
case <-p.stop:
|
||||
}
|
||||
|
@ -132,10 +132,10 @@ func (p *Provider) accept(ctx context.Context, deal MinerDeal) (func(*MinerDeal)
|
||||
return nil, err
|
||||
}
|
||||
if len(resp.DealIDs) != 1 {
|
||||
return nil, xerrors.Errorf("got unexpected number of DealIDs from")
|
||||
return nil, xerrors.Errorf("got unexpected number of DealIDs from SMA")
|
||||
}
|
||||
|
||||
log.Info("fetching data for a deal")
|
||||
log.Infof("fetching data for a deal %d", resp.DealIDs[0])
|
||||
mcid := smsg.Cid()
|
||||
err = p.sendSignedResponse(&Response{
|
||||
State: api.DealAccepted,
|
||||
@ -164,14 +164,12 @@ func (p *Provider) accept(ctx context.Context, deal MinerDeal) (func(*MinerDeal)
|
||||
// (see onDataTransferEvent)
|
||||
_, err = p.dataTransfer.OpenPullDataChannel(ctx,
|
||||
deal.Client,
|
||||
&StorageDataTransferVoucher{Proposal: deal.ProposalCid},
|
||||
&StorageDataTransferVoucher{Proposal: deal.ProposalCid, DealID: resp.DealIDs[0]},
|
||||
deal.Ref,
|
||||
allSelector,
|
||||
)
|
||||
|
||||
return func(deal *MinerDeal) {
|
||||
deal.DealID = resp.DealIDs[0]
|
||||
}, nil
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// STAGED
|
||||
@ -204,11 +202,11 @@ func (p *Provider) staged(ctx context.Context, deal MinerDeal) (func(*MinerDeal)
|
||||
return nil, xerrors.Errorf("deal.Proposal.PieceSize didn't match padded unixfs file size")
|
||||
}
|
||||
|
||||
sectorID, err := p.secb.AddUnixfsPiece(ctx, deal.Ref, uf, deal.DealID)
|
||||
sectorID, err := p.secb.AddUnixfsPiece(ctx, uf, deal.DealID)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("AddPiece failed: %s", err)
|
||||
}
|
||||
log.Warnf("New Sector: %d", sectorID)
|
||||
log.Warnf("New Sector: %d (deal %d)", sectorID, deal.DealID)
|
||||
|
||||
return func(deal *MinerDeal) {
|
||||
deal.SectorID = sectorID
|
||||
|
@ -127,7 +127,7 @@ func TestClientRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{proposalNd.Cid()}, pieceRef, nil), deals.ErrNoDeal) {
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{proposalNd.Cid(), 1}, pieceRef, nil), deals.ErrNoDeal) {
|
||||
t.Fatal("Pull should fail if there is no deal stored")
|
||||
}
|
||||
})
|
||||
@ -144,7 +144,7 @@ func TestClientRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid}, pieceRef, nil), deals.ErrWrongPeer) {
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, pieceRef, nil), deals.ErrWrongPeer) {
|
||||
t.Fatal("Pull should fail if miner address is incorrect")
|
||||
}
|
||||
})
|
||||
@ -156,7 +156,7 @@ func TestClientRequestValidation(t *testing.T) {
|
||||
if err := state.Begin(clientDeal.ProposalCid, &clientDeal); err != nil {
|
||||
t.Fatal("deal tracking failed")
|
||||
}
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid}, blockGenerator.Next().Cid(), nil), deals.ErrWrongPiece) {
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, blockGenerator.Next().Cid(), nil), deals.ErrWrongPiece) {
|
||||
t.Fatal("Pull should fail if piece ref is incorrect")
|
||||
}
|
||||
})
|
||||
@ -172,7 +172,7 @@ func TestClientRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid}, pieceRef, nil), deals.ErrInacceptableDealState) {
|
||||
if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, pieceRef, nil), deals.ErrInacceptableDealState) {
|
||||
t.Fatal("Pull should fail if deal is in a state that cannot be data transferred")
|
||||
}
|
||||
})
|
||||
@ -188,7 +188,7 @@ func TestClientRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid}, pieceRef, nil) != nil {
|
||||
if crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, pieceRef, nil) != nil {
|
||||
t.Fatal("Pull should should succeed when all parameters are correct")
|
||||
}
|
||||
})
|
||||
@ -220,7 +220,7 @@ func TestProviderRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{proposalNd.Cid()}, pieceRef, nil), deals.ErrNoDeal) {
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{proposalNd.Cid(), 1}, pieceRef, nil), deals.ErrNoDeal) {
|
||||
t.Fatal("Push should fail if there is no deal stored")
|
||||
}
|
||||
})
|
||||
@ -237,7 +237,7 @@ func TestProviderRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid}, pieceRef, nil), deals.ErrWrongPeer) {
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, pieceRef, nil), deals.ErrWrongPeer) {
|
||||
t.Fatal("Push should fail if miner address is incorrect")
|
||||
}
|
||||
})
|
||||
@ -249,7 +249,7 @@ func TestProviderRequestValidation(t *testing.T) {
|
||||
if err := state.Begin(minerDeal.ProposalCid, &minerDeal); err != nil {
|
||||
t.Fatal("deal tracking failed")
|
||||
}
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid}, blockGenerator.Next().Cid(), nil), deals.ErrWrongPiece) {
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, blockGenerator.Next().Cid(), nil), deals.ErrWrongPiece) {
|
||||
t.Fatal("Push should fail if piece ref is incorrect")
|
||||
}
|
||||
})
|
||||
@ -265,7 +265,7 @@ func TestProviderRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid}, pieceRef, nil), deals.ErrInacceptableDealState) {
|
||||
if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, pieceRef, nil), deals.ErrInacceptableDealState) {
|
||||
t.Fatal("Push should fail if deal is in a state that cannot be data transferred")
|
||||
}
|
||||
})
|
||||
@ -281,7 +281,7 @@ func TestProviderRequestValidation(t *testing.T) {
|
||||
if err != nil {
|
||||
t.Fatal("unable to construct piece cid")
|
||||
}
|
||||
if mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid}, pieceRef, nil) != nil {
|
||||
if mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, pieceRef, nil) != nil {
|
||||
t.Fatal("Push should should succeed when all parameters are correct")
|
||||
}
|
||||
})
|
||||
|
@ -90,6 +90,7 @@ type AskResponse struct {
|
||||
// used by the storage market
|
||||
type StorageDataTransferVoucher struct {
|
||||
Proposal cid.Cid
|
||||
DealID uint64
|
||||
}
|
||||
|
||||
// ToBytes converts the StorageDataTransferVoucher to raw bytes
|
||||
|
@ -55,12 +55,12 @@ func (fcs *fakeCS) ChainGetTipSetByHeight(context.Context, uint64, *types.TipSet
|
||||
func makeTs(t *testing.T, h uint64, msgcid cid.Cid) *types.TipSet {
|
||||
a, _ := address.NewFromString("t00")
|
||||
b, _ := address.NewFromString("t02")
|
||||
ts, err := types.NewTipSet([]*types.BlockHeader{
|
||||
var ts, err = types.NewTipSet([]*types.BlockHeader{
|
||||
{
|
||||
Height: h,
|
||||
Miner: a,
|
||||
|
||||
Ticket: &types.Ticket{[]byte{byte(h % 2)}},
|
||||
Ticket: &types.Ticket{VRFProof: []byte{byte(h % 2)}},
|
||||
|
||||
ParentStateRoot: dummyCid,
|
||||
Messages: msgcid,
|
||||
@ -73,7 +73,7 @@ func makeTs(t *testing.T, h uint64, msgcid cid.Cid) *types.TipSet {
|
||||
Height: h,
|
||||
Miner: b,
|
||||
|
||||
Ticket: &types.Ticket{[]byte{byte((h + 1) % 2)}},
|
||||
Ticket: &types.Ticket{VRFProof: []byte{byte((h + 1) % 2)}},
|
||||
|
||||
ParentStateRoot: dummyCid,
|
||||
Messages: msgcid,
|
||||
|
@ -61,7 +61,6 @@ type ChainGen struct {
|
||||
|
||||
eppProvs map[address.Address]ElectionPoStProver
|
||||
Miners []address.Address
|
||||
mworkers []address.Address
|
||||
receivers []address.Address
|
||||
banker address.Address
|
||||
bankerNonce uint64
|
||||
@ -125,8 +124,6 @@ func NewGenerator() (*ChainGen, error) {
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: this is really weird, we have to guess the miner addresses that
|
||||
// will be created in order to preseal data for them
|
||||
maddr1, err := address.NewFromString("t0300")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@ -212,9 +209,8 @@ func NewGenerator() (*ChainGen, error) {
|
||||
genesis: genb.Genesis,
|
||||
w: w,
|
||||
|
||||
Miners: minercfg.MinerAddrs,
|
||||
eppProvs: mgen,
|
||||
//mworkers: minercfg.Workers,
|
||||
Miners: minercfg.MinerAddrs,
|
||||
eppProvs: mgen,
|
||||
banker: banker,
|
||||
receivers: receievers,
|
||||
|
||||
@ -256,7 +252,6 @@ func (cg *ChainGen) nextBlockProof(ctx context.Context, pts *types.TipSet, m add
|
||||
return nil, nil, xerrors.Errorf("get miner worker: %w", err)
|
||||
}
|
||||
|
||||
log.Warnf("compute VRF ROUND: %d %s %s", round, m, worker)
|
||||
vrfout, err := ComputeVRF(ctx, cg.w.Sign, worker, m, DSepTicket, lastTicket.VRFProof)
|
||||
if err != nil {
|
||||
return nil, nil, xerrors.Errorf("compute VRF: %w", err)
|
||||
@ -308,7 +303,6 @@ func (cg *ChainGen) NextTipSetFromMiners(base *types.TipSet, miners []address.Ad
|
||||
}
|
||||
|
||||
if proof != nil {
|
||||
log.Warn("making block, ticket: ", t.VRFProof)
|
||||
fblk, err := cg.makeBlock(base, m, proof, t, uint64(round), msgs)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("making a block for next tipset failed: %w", err)
|
||||
@ -454,13 +448,11 @@ type ElectionPoStProver interface {
|
||||
ComputeProof(context.Context, sectorbuilder.SortedPublicSectorInfo, []byte, []sectorbuilder.EPostCandidate) ([]byte, error)
|
||||
}
|
||||
|
||||
type eppProvider struct {
|
||||
sectors []ffi.PublicSectorInfo
|
||||
}
|
||||
type eppProvider struct{}
|
||||
|
||||
func (epp *eppProvider) GenerateCandidates(ctx context.Context, _ sectorbuilder.SortedPublicSectorInfo, eprand []byte) ([]sectorbuilder.EPostCandidate, error) {
|
||||
return []sectorbuilder.EPostCandidate{
|
||||
sectorbuilder.EPostCandidate{
|
||||
{
|
||||
SectorID: 1,
|
||||
PartialTicket: [32]byte{},
|
||||
Ticket: [32]byte{},
|
||||
@ -510,7 +502,6 @@ func IsRoundWinner(ctx context.Context, ts *types.TipSet, round int64, miner add
|
||||
sectors := sectorbuilder.NewSortedPublicSectorInfo(sinfos)
|
||||
|
||||
hvrf := sha256.Sum256(vrfout)
|
||||
log.Info("Replicas: ", sectors)
|
||||
candidates, err := epp.GenerateCandidates(ctx, sectors, hvrf[:])
|
||||
if err != nil {
|
||||
return false, nil, xerrors.Errorf("failed to generate electionPoSt candidates: %w", err)
|
||||
@ -528,7 +519,7 @@ func IsRoundWinner(ctx context.Context, ts *types.TipSet, round int64, miner add
|
||||
|
||||
var winners []sectorbuilder.EPostCandidate
|
||||
for _, c := range candidates {
|
||||
if types.IsTicketWinner(c.PartialTicket[:], ssize, pow.TotalPower, 1) {
|
||||
if types.IsTicketWinner(c.PartialTicket[:], ssize, pow.TotalPower) {
|
||||
winners = append(winners, c)
|
||||
}
|
||||
}
|
||||
@ -584,7 +575,7 @@ func hashVRFBase(personalization uint64, miner address.Address, input []byte) ([
|
||||
}
|
||||
|
||||
func VerifyVRF(ctx context.Context, worker, miner address.Address, p uint64, input, vrfproof []byte) error {
|
||||
ctx, span := trace.StartSpan(ctx, "VerifyVRF")
|
||||
_, span := trace.StartSpan(ctx, "VerifyVRF")
|
||||
defer span.End()
|
||||
|
||||
vrfBase, err := hashVRFBase(p, miner, input)
|
||||
@ -614,7 +605,6 @@ func ComputeVRF(ctx context.Context, sign SignFunc, worker, miner address.Addres
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Warnf("making ticket: %x %s %s %x %x", sig.Data, worker, miner, input, sigInput)
|
||||
|
||||
if sig.Type != types.KTBLS {
|
||||
return nil, fmt.Errorf("miner worker address was not a BLS key")
|
||||
@ -622,16 +612,3 @@ func ComputeVRF(ctx context.Context, sign SignFunc, worker, miner address.Addres
|
||||
|
||||
return sig.Data, nil
|
||||
}
|
||||
|
||||
func TicketHash(t *types.Ticket, addr address.Address) []byte {
|
||||
h := sha256.New()
|
||||
|
||||
h.Write(t.VRFProof)
|
||||
|
||||
// Field Delimeter
|
||||
h.Write([]byte{0})
|
||||
|
||||
h.Write(addr.Bytes())
|
||||
|
||||
return h.Sum(nil)
|
||||
}
|
||||
|
@ -2,8 +2,14 @@ package gen
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
)
|
||||
|
||||
func init() {
|
||||
build.SectorSizes = []uint64{1024}
|
||||
}
|
||||
|
||||
func testGeneration(t testing.TB, n int, msgs int) {
|
||||
g, err := NewGenerator()
|
||||
if err != nil {
|
||||
|
@ -25,14 +25,6 @@ import (
|
||||
"github.com/filecoin-project/lotus/genesis"
|
||||
)
|
||||
|
||||
var validSsizes = map[uint64]struct{}{}
|
||||
|
||||
func init() {
|
||||
for _, size := range build.SectorSizes {
|
||||
validSsizes[size] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
type GenesisBootstrap struct {
|
||||
Genesis *types.BlockHeader
|
||||
}
|
||||
@ -100,6 +92,15 @@ func MakeInitialStateTree(bs bstore.Blockstore, actmap map[address.Address]types
|
||||
return nil, xerrors.Errorf("set init actor: %w", err)
|
||||
}
|
||||
|
||||
cronact, err := SetupCronActor(bs)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("setup cron actor: %w", err)
|
||||
}
|
||||
|
||||
if err := state.SetActor(actors.CronAddress, cronact); err != nil {
|
||||
return nil, xerrors.Errorf("set cron actor: %w", err)
|
||||
}
|
||||
|
||||
spact, err := SetupStoragePowerActor(bs)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("setup storage market actor: %w", err)
|
||||
@ -146,6 +147,23 @@ func MakeInitialStateTree(bs bstore.Blockstore, actmap map[address.Address]types
|
||||
return state, nil
|
||||
}
|
||||
|
||||
func SetupCronActor(bs bstore.Blockstore) (*types.Actor, error) {
|
||||
cst := hamt.CSTFromBstore(bs)
|
||||
cas := &actors.CronActorState{}
|
||||
|
||||
stcid, err := cst.Put(context.TODO(), cas)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &types.Actor{
|
||||
Code: actors.CronCodeCid,
|
||||
Head: stcid,
|
||||
Nonce: 0,
|
||||
Balance: types.NewInt(0),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func SetupStoragePowerActor(bs bstore.Blockstore) (*types.Actor, error) {
|
||||
cst := hamt.CSTFromBstore(bs)
|
||||
nd := hamt.NewNode(cst)
|
||||
@ -205,7 +223,7 @@ func SetupStorageMarketActor(bs bstore.Blockstore, sroot cid.Cid, deals []actors
|
||||
sms := &actors.StorageMarketState{
|
||||
Balances: emptyHAMT,
|
||||
Deals: dealAmt,
|
||||
NextDealID: 0,
|
||||
NextDealID: uint64(len(deals)),
|
||||
}
|
||||
|
||||
stcid, err := cst.Put(context.TODO(), sms)
|
||||
@ -255,6 +273,10 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid
|
||||
return cid.Undef, nil, xerrors.Errorf("failed to create NewVM: %w", err)
|
||||
}
|
||||
|
||||
if len(gmcfg.MinerAddrs) == 0 {
|
||||
return cid.Undef, nil, xerrors.New("no genesis miners")
|
||||
}
|
||||
|
||||
if len(gmcfg.MinerAddrs) != len(gmcfg.PreSeals) {
|
||||
return cid.Undef, nil, xerrors.Errorf("miner address list, and preseal count doesn't match (%d != %d)", len(gmcfg.MinerAddrs), len(gmcfg.PreSeals))
|
||||
}
|
||||
@ -322,7 +344,7 @@ func SetupStorageMiners(ctx context.Context, cs *store.ChainStore, sroot cid.Cid
|
||||
if err := cst.Get(ctx, mact.Head, &mstate); err != nil {
|
||||
return cid.Undef, nil, xerrors.Errorf("getting miner actor state failed: %w", err)
|
||||
}
|
||||
mstate.Power = types.NewInt(build.SectorSizes[0])
|
||||
mstate.Power = types.BigMul(types.NewInt(build.SectorSizes[0]), types.NewInt(uint64(len(ps.Sectors))))
|
||||
|
||||
blks := amt.WrapBlockstore(cs.Blockstore())
|
||||
|
||||
|
@ -1,4 +1,4 @@
|
||||
package chain
|
||||
package messagepool
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
@ -9,9 +9,11 @@ import (
|
||||
"time"
|
||||
|
||||
lru "github.com/hashicorp/golang-lru"
|
||||
"github.com/ipfs/go-cid"
|
||||
"github.com/ipfs/go-datastore"
|
||||
"github.com/ipfs/go-datastore/namespace"
|
||||
"github.com/ipfs/go-datastore/query"
|
||||
logging "github.com/ipfs/go-log"
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
lps "github.com/whyrusleeping/pubsub"
|
||||
"go.uber.org/multierr"
|
||||
@ -19,12 +21,16 @@ import (
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"github.com/filecoin-project/lotus/chain"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/stmgr"
|
||||
"github.com/filecoin-project/lotus/chain/store"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/node/modules/dtypes"
|
||||
)
|
||||
|
||||
var log = logging.Logger("messagepool")
|
||||
|
||||
var (
|
||||
ErrMessageTooBig = errors.New("message too big")
|
||||
|
||||
@ -56,9 +62,10 @@ type MessagePool struct {
|
||||
pending map[address.Address]*msgSet
|
||||
pendingCount int
|
||||
|
||||
sm *stmgr.StateManager
|
||||
curTsLk sync.Mutex
|
||||
curTs *types.TipSet
|
||||
|
||||
ps *pubsub.PubSub
|
||||
api Provider
|
||||
|
||||
minGasPrice types.BigInt
|
||||
|
||||
@ -98,20 +105,66 @@ func (ms *msgSet) add(m *types.SignedMessage) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func NewMessagePool(sm *stmgr.StateManager, ps *pubsub.PubSub, ds dtypes.MetadataDS) (*MessagePool, error) {
|
||||
type Provider interface {
|
||||
SubscribeHeadChanges(func(rev, app []*types.TipSet) error)
|
||||
PutMessage(m store.ChainMsg) (cid.Cid, error)
|
||||
PubSubPublish(string, []byte) error
|
||||
StateGetActor(address.Address, *types.TipSet) (*types.Actor, error)
|
||||
MessagesForBlock(*types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error)
|
||||
MessagesForTipset(*types.TipSet) ([]store.ChainMsg, error)
|
||||
LoadTipSet(cids []cid.Cid) (*types.TipSet, error)
|
||||
}
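// --- Illustrative sketch, not part of this diff: a stub Provider for tests. ---
// The Provider indirection above exists so the message pool can be exercised
// without a real StateManager or pubsub; a hypothetical stub (assuming the
// same imports as this file) might look like this.
type fakeProvider struct {
	actors map[address.Address]*types.Actor
	ts     *types.TipSet
}

func (f *fakeProvider) SubscribeHeadChanges(cb func(rev, app []*types.TipSet) error) {}

func (f *fakeProvider) PutMessage(m store.ChainMsg) (cid.Cid, error) { return cid.Undef, nil }

func (f *fakeProvider) PubSubPublish(k string, v []byte) error { return nil }

func (f *fakeProvider) StateGetActor(addr address.Address, ts *types.TipSet) (*types.Actor, error) {
	return f.actors[addr], nil
}

func (f *fakeProvider) MessagesForBlock(h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
	return nil, nil, nil
}

func (f *fakeProvider) MessagesForTipset(ts *types.TipSet) ([]store.ChainMsg, error) { return nil, nil }

func (f *fakeProvider) LoadTipSet(cids []cid.Cid) (*types.TipSet, error) { return f.ts, nil }
// --- end sketch ---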
|
||||
|
||||
type mpoolProvider struct {
|
||||
sm *stmgr.StateManager
|
||||
ps *pubsub.PubSub
|
||||
}
|
||||
|
||||
func NewProvider(sm *stmgr.StateManager, ps *pubsub.PubSub) Provider {
|
||||
return &mpoolProvider{sm, ps}
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) SubscribeHeadChanges(cb func(rev, app []*types.TipSet) error) {
|
||||
mpp.sm.ChainStore().SubscribeHeadChanges(cb)
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) PutMessage(m store.ChainMsg) (cid.Cid, error) {
|
||||
return mpp.sm.ChainStore().PutMessage(m)
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) PubSubPublish(k string, v []byte) error {
|
||||
return mpp.ps.Publish(k, v)
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) StateGetActor(addr address.Address, ts *types.TipSet) (*types.Actor, error) {
|
||||
return mpp.sm.GetActor(addr, ts)
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) MessagesForBlock(h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
|
||||
return mpp.sm.ChainStore().MessagesForBlock(h)
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) MessagesForTipset(ts *types.TipSet) ([]store.ChainMsg, error) {
|
||||
return mpp.sm.ChainStore().MessagesForTipset(ts)
|
||||
}
|
||||
|
||||
func (mpp *mpoolProvider) LoadTipSet(cids []cid.Cid) (*types.TipSet, error) {
|
||||
return mpp.sm.ChainStore().LoadTipSet(cids)
|
||||
}
|
||||
|
||||
func New(api Provider, ds dtypes.MetadataDS) (*MessagePool, error) {
|
||||
cache, _ := lru.New2Q(build.BlsSignatureCacheSize)
|
||||
mp := &MessagePool{
|
||||
closer: make(chan struct{}),
|
||||
repubTk: time.NewTicker(build.BlockDelay * 10 * time.Second),
|
||||
localAddrs: make(map[address.Address]struct{}),
|
||||
pending: make(map[address.Address]*msgSet),
|
||||
sm: sm,
|
||||
ps: ps,
|
||||
minGasPrice: types.NewInt(0),
|
||||
maxTxPoolSize: 5000,
|
||||
blsSigCache: cache,
|
||||
changes: lps.New(50),
|
||||
localMsgs: namespace.Wrap(ds, datastore.NewKey(localMsgsDs)),
|
||||
api: api,
|
||||
}
|
||||
|
||||
if err := mp.loadLocal(); err != nil {
|
||||
@ -120,7 +173,7 @@ func NewMessagePool(sm *stmgr.StateManager, ps *pubsub.PubSub, ds dtypes.Metadat
|
||||
|
||||
go mp.repubLocal()
|
||||
|
||||
sm.ChainStore().SubscribeHeadChanges(func(rev, app []*types.TipSet) error {
|
||||
api.SubscribeHeadChanges(func(rev, app []*types.TipSet) error {
|
||||
err := mp.HeadChange(rev, app)
|
||||
if err != nil {
|
||||
log.Errorf("mpool head notif handler error: %+v", err)
|
||||
@ -155,7 +208,7 @@ func (mp *MessagePool) repubLocal() {
|
||||
continue
|
||||
}
|
||||
|
||||
err = mp.ps.Publish(msgTopic, msgb)
|
||||
err = mp.api.PubSubPublish(msgTopic, msgb)
|
||||
if err != nil {
|
||||
errout = multierr.Append(errout, xerrors.Errorf("could not publish: %w", err))
|
||||
continue
|
||||
@ -200,7 +253,7 @@ func (mp *MessagePool) Push(m *types.SignedMessage) error {
|
||||
}
|
||||
mp.lk.Unlock()
|
||||
|
||||
return mp.ps.Publish(msgTopic, msgb)
|
||||
return mp.api.PubSubPublish(msgTopic, msgb)
|
||||
}
|
||||
|
||||
func (mp *MessagePool) Add(m *types.SignedMessage) error {
|
||||
@ -252,12 +305,12 @@ func (mp *MessagePool) addLocked(m *types.SignedMessage) error {
|
||||
mp.blsSigCache.Add(m.Cid(), m.Signature)
|
||||
}
|
||||
|
||||
if _, err := mp.sm.ChainStore().PutMessage(m); err != nil {
|
||||
if _, err := mp.api.PutMessage(m); err != nil {
|
||||
log.Warnf("mpooladd cs.PutMessage failed: %s", err)
|
||||
return err
|
||||
}
|
||||
|
||||
if _, err := mp.sm.ChainStore().PutMessage(&m.Message); err != nil {
|
||||
if _, err := mp.api.PutMessage(&m.Message); err != nil {
|
||||
log.Warnf("mpooladd cs.PutMessage failed: %s", err)
|
||||
return err
|
||||
}
|
||||
@ -306,17 +359,56 @@ func (mp *MessagePool) getNonceLocked(addr address.Address) (uint64, error) {
|
||||
return stateNonce, nil
|
||||
}
|
||||
|
||||
func (mp *MessagePool) setCurTipset(ts *types.TipSet) {
|
||||
mp.curTsLk.Lock()
|
||||
defer mp.curTsLk.Unlock()
|
||||
mp.curTs = ts
|
||||
}
|
||||
|
||||
func (mp *MessagePool) getCurTipset() *types.TipSet {
|
||||
mp.curTsLk.Lock()
|
||||
defer mp.curTsLk.Unlock()
|
||||
return mp.curTs
|
||||
}
|
||||
|
||||
func (mp *MessagePool) getStateNonce(addr address.Address) (uint64, error) {
|
||||
act, err := mp.sm.GetActor(addr, nil)
|
||||
// TODO: this method probably should be cached
|
||||
|
||||
curTs := mp.getCurTipset()
|
||||
act, err := mp.api.StateGetActor(addr, curTs)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
return act.Nonce, nil
|
||||
baseNonce := act.Nonce
|
||||
|
||||
// TODO: the correct thing to do here is probably to set curTs to chain.head
|
||||
// but since we have an accurate view of the world until a head change occurs,
|
||||
// this should be fine
|
||||
if curTs == nil {
|
||||
return baseNonce, nil
|
||||
}
|
||||
|
||||
msgs, err := mp.api.MessagesForTipset(curTs)
|
||||
if err != nil {
|
||||
return 0, xerrors.Errorf("failed to check messages for tipset: %w", err)
|
||||
}
|
||||
|
||||
for _, m := range msgs {
|
||||
msg := m.VMMessage()
|
||||
if msg.From == addr {
|
||||
if msg.Nonce != baseNonce {
|
||||
return 0, xerrors.Errorf("tipset %s has bad nonce ordering (%d != %d)", curTs.Cids(), msg.Nonce, baseNonce)
|
||||
}
|
||||
baseNonce++
|
||||
}
|
||||
}
|
||||
|
||||
return baseNonce, nil
|
||||
}
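The nonce derivation above boils down to: take the actor's state nonce, then bump it once for each message from the same sender already included in the current tipset, failing if those nonces are not consecutive. A minimal, self-contained sketch of that logic (a hypothetical helper over plain integers, not the Lotus types):

```go
package main

import "fmt"

// nextNonce mirrors getStateNonce: start from the on-chain (state) nonce and
// advance it for each message the sender already has in the current tipset,
// requiring those nonces to be consecutive.
func nextNonce(stateNonce uint64, tipsetNonces []uint64) (uint64, error) {
	n := stateNonce
	for _, got := range tipsetNonces {
		if got != n {
			return 0, fmt.Errorf("bad nonce ordering in tipset: got %d, expected %d", got, n)
		}
		n++
	}
	return n, nil
}

func main() {
	n, err := nextNonce(3, []uint64{3, 4})
	fmt.Println(n, err) // 5 <nil>
}
```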
|
||||
|
||||
func (mp *MessagePool) getStateBalance(addr address.Address) (types.BigInt, error) {
|
||||
act, err := mp.sm.GetActor(addr, nil)
|
||||
act, err := mp.api.StateGetActor(addr, nil)
|
||||
if err != nil {
|
||||
return types.EmptyInt, err
|
||||
}
|
||||
@ -327,6 +419,9 @@ func (mp *MessagePool) getStateBalance(addr address.Address) (types.BigInt, erro
|
||||
func (mp *MessagePool) PushWithNonce(addr address.Address, cb func(uint64) (*types.SignedMessage, error)) (*types.SignedMessage, error) {
|
||||
mp.lk.Lock()
|
||||
defer mp.lk.Unlock()
|
||||
if addr.Protocol() == address.ID {
|
||||
log.Warnf("Called pushWithNonce with ID address (%s) this might not be handled properly yet", addr)
|
||||
}
|
||||
|
||||
nonce, err := mp.getNonceLocked(addr)
|
||||
if err != nil {
|
||||
@ -350,7 +445,7 @@ func (mp *MessagePool) PushWithNonce(addr address.Address, cb func(uint64) (*typ
|
||||
log.Errorf("addLocal failed: %+v", err)
|
||||
}
|
||||
|
||||
return msg, mp.ps.Publish(msgTopic, msgb)
|
||||
return msg, mp.api.PubSubPublish(msgTopic, msgb)
|
||||
}
|
||||
|
||||
func (mp *MessagePool) Remove(from address.Address, nonce uint64) {
|
||||
@ -421,9 +516,16 @@ func (mp *MessagePool) pendingFor(a address.Address) []*types.SignedMessage {
|
||||
}
|
||||
|
||||
func (mp *MessagePool) HeadChange(revert []*types.TipSet, apply []*types.TipSet) error {
|
||||
|
||||
for _, ts := range revert {
|
||||
pts, err := mp.api.LoadTipSet(ts.Parents())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
mp.setCurTipset(pts)
|
||||
for _, b := range ts.Blocks() {
|
||||
bmsgs, smsgs, err := mp.sm.ChainStore().MessagesForBlock(b)
|
||||
bmsgs, smsgs, err := mp.api.MessagesForBlock(b)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("failed to get messages for revert block %s(height %d): %w", b.Cid(), b.Height, err)
|
||||
}
|
||||
@ -448,7 +550,7 @@ func (mp *MessagePool) HeadChange(revert []*types.TipSet, apply []*types.TipSet)
|
||||
|
||||
for _, ts := range apply {
|
||||
for _, b := range ts.Blocks() {
|
||||
bmsgs, smsgs, err := mp.sm.ChainStore().MessagesForBlock(b)
|
||||
bmsgs, smsgs, err := mp.api.MessagesForBlock(b)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("failed to get messages for apply block %s(height %d) (msgroot = %s): %w", b.Cid(), b.Height, b.Messages, err)
|
||||
}
|
||||
@ -460,6 +562,7 @@ func (mp *MessagePool) HeadChange(revert []*types.TipSet, apply []*types.TipSet)
|
||||
mp.Remove(msg.From, msg.Nonce)
|
||||
}
|
||||
}
|
||||
mp.setCurTipset(ts)
|
||||
}
|
||||
|
||||
return nil
|
||||
@ -487,7 +590,7 @@ func (mp *MessagePool) Updates(ctx context.Context) (<-chan api.MpoolUpdate, err
|
||||
sub := mp.changes.Sub(localUpdates)
|
||||
|
||||
go func() {
|
||||
defer mp.changes.Unsub(sub, localIncoming)
|
||||
defer mp.changes.Unsub(sub, chain.LocalIncoming)
|
||||
|
||||
for {
|
||||
select {
|
223
chain/messagepool/messagepool_test.go
Normal file
223
chain/messagepool/messagepool_test.go
Normal file
@ -0,0 +1,223 @@
|
||||
package messagepool
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/store"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/chain/types/mock"
|
||||
"github.com/filecoin-project/lotus/chain/wallet"
|
||||
"github.com/ipfs/go-cid"
|
||||
"github.com/ipfs/go-datastore"
|
||||
)
|
||||
|
||||
type testMpoolApi struct {
|
||||
cb func(rev, app []*types.TipSet) error
|
||||
|
||||
bmsgs map[cid.Cid][]*types.SignedMessage
|
||||
statenonce map[address.Address]uint64
|
||||
|
||||
tipsets []*types.TipSet
|
||||
}
|
||||
|
||||
func newTestMpoolApi() *testMpoolApi {
|
||||
return &testMpoolApi{
|
||||
bmsgs: make(map[cid.Cid][]*types.SignedMessage),
|
||||
statenonce: make(map[address.Address]uint64),
|
||||
}
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) applyBlock(t *testing.T, b *types.BlockHeader) {
|
||||
t.Helper()
|
||||
if err := tma.cb(nil, []*types.TipSet{mock.TipSet(b)}); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) revertBlock(t *testing.T, b *types.BlockHeader) {
|
||||
t.Helper()
|
||||
if err := tma.cb([]*types.TipSet{mock.TipSet(b)}, nil); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) setStateNonce(addr address.Address, v uint64) {
|
||||
tma.statenonce[addr] = v
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) setBlockMessages(h *types.BlockHeader, msgs ...*types.SignedMessage) {
|
||||
tma.bmsgs[h.Cid()] = msgs
|
||||
tma.tipsets = append(tma.tipsets, mock.TipSet(h))
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) SubscribeHeadChanges(cb func(rev, app []*types.TipSet) error) {
|
||||
tma.cb = cb
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) PutMessage(m store.ChainMsg) (cid.Cid, error) {
|
||||
return cid.Undef, nil
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) PubSubPublish(string, []byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) StateGetActor(addr address.Address, ts *types.TipSet) (*types.Actor, error) {
|
||||
return &types.Actor{
|
||||
Nonce: tma.statenonce[addr],
|
||||
Balance: types.NewInt(90000000),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) MessagesForBlock(h *types.BlockHeader) ([]*types.Message, []*types.SignedMessage, error) {
|
||||
return nil, tma.bmsgs[h.Cid()], nil
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) MessagesForTipset(ts *types.TipSet) ([]store.ChainMsg, error) {
|
||||
if len(ts.Blocks()) != 1 {
|
||||
panic("cant deal with multiblock tipsets in this test")
|
||||
}
|
||||
|
||||
bm, sm, err := tma.MessagesForBlock(ts.Blocks()[0])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var out []store.ChainMsg
|
||||
for _, m := range bm {
|
||||
out = append(out, m)
|
||||
}
|
||||
|
||||
for _, m := range sm {
|
||||
out = append(out, m)
|
||||
}
|
||||
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (tma *testMpoolApi) LoadTipSet(cids []cid.Cid) (*types.TipSet, error) {
|
||||
for _, ts := range tma.tipsets {
|
||||
if types.CidArrsEqual(cids, ts.Cids()) {
|
||||
return ts, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("tipset not found")
|
||||
}
|
||||
|
||||
func assertNonce(t *testing.T, mp *MessagePool, addr address.Address, val uint64) {
|
||||
t.Helper()
|
||||
n, err := mp.GetNonce(addr)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if n != val {
|
||||
t.Fatalf("expected nonce of %d, got %d", val, n)
|
||||
}
|
||||
}
|
||||
|
||||
func mustAdd(t *testing.T, mp *MessagePool, msg *types.SignedMessage) {
|
||||
t.Helper()
|
||||
if err := mp.Add(msg); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMessagePool(t *testing.T) {
|
||||
tma := newTestMpoolApi()
|
||||
|
||||
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ds := datastore.NewMapDatastore()
|
||||
|
||||
mp, err := New(tma, ds)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
a := mock.MkBlock(nil, 1, 1)
|
||||
|
||||
sender, err := w.GenerateKey(types.KTBLS)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
target := mock.Address(1001)
|
||||
|
||||
var msgs []*types.SignedMessage
|
||||
for i := 0; i < 5; i++ {
|
||||
msgs = append(msgs, mock.MkMessage(sender, target, uint64(i), w))
|
||||
}
|
||||
|
||||
tma.setStateNonce(sender, 0)
|
||||
assertNonce(t, mp, sender, 0)
|
||||
mustAdd(t, mp, msgs[0])
|
||||
assertNonce(t, mp, sender, 1)
|
||||
mustAdd(t, mp, msgs[1])
|
||||
assertNonce(t, mp, sender, 2)
|
||||
|
||||
tma.setBlockMessages(a, msgs[0], msgs[1])
|
||||
tma.applyBlock(t, a)
|
||||
|
||||
assertNonce(t, mp, sender, 2)
|
||||
}
|
||||
|
||||
func TestRevertMessages(t *testing.T) {
|
||||
tma := newTestMpoolApi()
|
||||
|
||||
w, err := wallet.NewWallet(wallet.NewMemKeyStore())
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
ds := datastore.NewMapDatastore()
|
||||
|
||||
mp, err := New(tma, ds)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
a := mock.MkBlock(nil, 1, 1)
|
||||
b := mock.MkBlock(mock.TipSet(a), 1, 1)
|
||||
|
||||
sender, err := w.GenerateKey(types.KTBLS)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
target := mock.Address(1001)
|
||||
|
||||
var msgs []*types.SignedMessage
|
||||
for i := 0; i < 5; i++ {
|
||||
msgs = append(msgs, mock.MkMessage(sender, target, uint64(i), w))
|
||||
}
|
||||
|
||||
tma.setBlockMessages(a, msgs[0])
|
||||
tma.setBlockMessages(b, msgs[1], msgs[2], msgs[3])
|
||||
|
||||
mustAdd(t, mp, msgs[0])
|
||||
mustAdd(t, mp, msgs[1])
|
||||
mustAdd(t, mp, msgs[2])
|
||||
mustAdd(t, mp, msgs[3])
|
||||
|
||||
tma.setStateNonce(sender, 0)
|
||||
tma.applyBlock(t, a)
|
||||
assertNonce(t, mp, sender, 4)
|
||||
|
||||
tma.setStateNonce(sender, 1)
|
||||
tma.applyBlock(t, b)
|
||||
assertNonce(t, mp, sender, 4)
|
||||
tma.setStateNonce(sender, 0)
|
||||
tma.revertBlock(t, b)
|
||||
|
||||
assertNonce(t, mp, sender, 4)
|
||||
|
||||
if len(mp.Pending()) != 3 {
|
||||
t.Fatal("expected three messages in mempool")
|
||||
}
|
||||
|
||||
}
|
@ -219,21 +219,20 @@ func (sm *StateManager) computeTipSetState(ctx context.Context, blks []*types.Bl
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: this nonce-getting is a ting bit ugly
|
||||
spa, err := vmi.StateTree().GetActor(actors.StoragePowerAddress)
|
||||
// TODO: this nonce-getting is a tiny bit ugly
|
||||
ca, err := vmi.StateTree().GetActor(actors.CronAddress)
|
||||
if err != nil {
|
||||
return cid.Undef, cid.Undef, err
|
||||
}
|
||||
|
||||
// TODO: cron actor
|
||||
ret, err := vmi.ApplyMessage(ctx, &types.Message{
|
||||
To: actors.StoragePowerAddress,
|
||||
From: actors.StoragePowerAddress,
|
||||
Nonce: spa.Nonce,
|
||||
To: actors.CronAddress,
|
||||
From: actors.CronAddress,
|
||||
Nonce: ca.Nonce,
|
||||
Value: types.NewInt(0),
|
||||
GasPrice: types.NewInt(0),
|
||||
GasLimit: types.NewInt(1 << 30), // Make super sure this is never too little
|
||||
Method: actors.SPAMethods.CheckProofSubmissions,
|
||||
Method: actors.CAMethods.EpochTick,
|
||||
Params: nil,
|
||||
})
|
||||
if err != nil {
|
||||
|
@ -601,6 +601,7 @@ func (cs *ChainStore) readAMTCids(root cid.Cid) ([]cid.Cid, error) {
|
||||
type ChainMsg interface {
|
||||
Cid() cid.Cid
|
||||
VMMessage() *types.Message
|
||||
ToStorageBlock() (block.Block, error)
|
||||
}
|
||||
|
||||
func (cs *ChainStore) MessagesForTipset(ts *types.TipSet) ([]ChainMsg, error) {
|
||||
@ -800,7 +801,7 @@ func drawRandomness(t *types.Ticket, round int64) []byte {
|
||||
}
|
||||
|
||||
func (cs *ChainStore) GetRandomness(ctx context.Context, blks []cid.Cid, round int64) ([]byte, error) {
|
||||
ctx, span := trace.StartSpan(ctx, "store.GetRandomness")
|
||||
_, span := trace.StartSpan(ctx, "store.GetRandomness")
|
||||
defer span.End()
|
||||
span.AddAttributes(trace.Int64Attribute("round", round))
|
||||
|
||||
|
@ -40,7 +40,6 @@ func (cs *ChainStore) Weight(ctx context.Context, ts *types.TipSet) (types.BigIn
|
||||
log2P = int64(tpow.BitLen() - 1)
|
||||
} else {
|
||||
// Not really expect to be here ...
|
||||
panic("where are we")
|
||||
return types.EmptyInt, xerrors.Errorf("All power in the net is gone. You network might be disconnected, or the net is dead!")
|
||||
}
|
||||
|
||||
|
@ -7,6 +7,7 @@ import (
|
||||
pubsub "github.com/libp2p/go-libp2p-pubsub"
|
||||
|
||||
"github.com/filecoin-project/lotus/chain"
|
||||
"github.com/filecoin-project/lotus/chain/messagepool"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
)
|
||||
|
||||
@ -54,7 +55,7 @@ func HandleIncomingBlocks(ctx context.Context, bsub *pubsub.Subscription, s *cha
|
||||
}
|
||||
}
|
||||
|
||||
func HandleIncomingMessages(ctx context.Context, mpool *chain.MessagePool, msub *pubsub.Subscription) {
|
||||
func HandleIncomingMessages(ctx context.Context, mpool *messagepool.MessagePool, msub *pubsub.Subscription) {
|
||||
for {
|
||||
msg, err := msub.Next(ctx)
|
||||
if err != nil {
|
||||
|
@ -39,7 +39,7 @@ import (
|
||||
|
||||
var log = logging.Logger("chain")
|
||||
|
||||
var localIncoming = "incoming"
|
||||
var LocalIncoming = "incoming"
|
||||
|
||||
type Syncer struct {
|
||||
// The heaviest known tipset in the network.
|
||||
@ -119,7 +119,7 @@ func (syncer *Syncer) InformNewHead(from peer.ID, fts *store.FullTipSet) {
|
||||
}
|
||||
}
|
||||
|
||||
syncer.incoming.Pub(fts.TipSet().Blocks(), localIncoming)
|
||||
syncer.incoming.Pub(fts.TipSet().Blocks(), LocalIncoming)
|
||||
|
||||
if from == syncer.self {
|
||||
// TODO: this is kindof a hack...
|
||||
@ -152,11 +152,11 @@ func (syncer *Syncer) InformNewHead(from peer.ID, fts *store.FullTipSet) {
|
||||
}
|
||||
|
||||
func (syncer *Syncer) IncomingBlocks(ctx context.Context) (<-chan *types.BlockHeader, error) {
|
||||
sub := syncer.incoming.Sub(localIncoming)
|
||||
sub := syncer.incoming.Sub(LocalIncoming)
|
||||
out := make(chan *types.BlockHeader, 10)
|
||||
|
||||
go func() {
|
||||
defer syncer.incoming.Unsub(sub, localIncoming)
|
||||
defer syncer.incoming.Unsub(sub, LocalIncoming)
|
||||
|
||||
for {
|
||||
select {
|
||||
@ -556,7 +556,7 @@ func (syncer *Syncer) ValidateBlock(ctx context.Context, b *types.FullBlock) err
|
||||
}
|
||||
|
||||
for _, t := range h.EPostProof.Candidates {
|
||||
if !types.IsTicketWinner(t.Partial, ssize, tpow, 1) {
|
||||
if !types.IsTicketWinner(t.Partial, ssize, tpow) {
|
||||
return xerrors.Errorf("miner created a block but was not a winner")
|
||||
}
|
||||
}
|
||||
@ -620,7 +620,6 @@ func (syncer *Syncer) ValidateBlock(ctx context.Context, b *types.FullBlock) err
|
||||
err := gen.VerifyVRF(ctx, waddr, h.Miner, gen.DSepTicket, vrfBase, h.Ticket.VRFProof)
|
||||
|
||||
if err != nil {
|
||||
log.Warnf("BAD TICKET: %d %x %x %s %s %x", h.Height, h.Ticket.VRFProof, vrfBase, waddr, h.Miner, baseTs.MinTicket().VRFProof)
|
||||
return xerrors.Errorf("validating block tickets failed: %w", err)
|
||||
}
|
||||
return nil
|
||||
|
@ -3,6 +3,7 @@ package chain_test
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@ -25,6 +26,8 @@ import (
|
||||
|
||||
func init() {
|
||||
build.InsecurePoStValidation = true
|
||||
os.Setenv("TRUST_PARAMS", "1")
|
||||
build.SectorSizes = []uint64{1024}
|
||||
}
|
||||
|
||||
const source = 0
|
||||
@ -346,7 +349,7 @@ func (tu *syncTestUtil) waitUntilSyncTarget(to int, target *types.TipSet) {
|
||||
}
|
||||
|
||||
func TestSyncSimple(t *testing.T) {
|
||||
H := 2
|
||||
H := 50
|
||||
tu := prepSyncTest(t, H)
|
||||
|
||||
client := tu.addClientNode()
|
||||
|
@ -172,7 +172,9 @@ func CidArrsEqual(a, b []cid.Cid) bool {
|
||||
|
||||
var blocksPerEpoch = NewInt(build.BlocksPerEpoch)
|
||||
|
||||
func IsTicketWinner(partialTicket []byte, ssizeI uint64, totpow BigInt, sampleRate int64) bool {
|
||||
const sha256bits = 256
|
||||
|
||||
func IsTicketWinner(partialTicket []byte, ssizeI uint64, totpow BigInt) bool {
|
||||
ssize := NewInt(ssizeI)
|
||||
|
||||
/*
|
||||
@ -189,15 +191,15 @@ func IsTicketWinner(partialTicket []byte, ssizeI uint64, totpow BigInt, sampleRa
|
||||
|
||||
lhs := BigFromBytes(h[:]).Int
|
||||
lhs = lhs.Mul(lhs, totpow.Int)
|
||||
lhs = lhs.Mul(lhs, big.NewInt(sampleRate))
|
||||
|
||||
// rhs = sectorSize * 2^256
|
||||
// rhs = sectorSize << 256
|
||||
rhs := new(big.Int).Lsh(ssize.Int, 256)
|
||||
rhs := new(big.Int).Lsh(ssize.Int, sha256bits)
|
||||
rhs = rhs.Mul(rhs, big.NewInt(build.SectorChallengeRatioDiv))
|
||||
rhs = rhs.Mul(rhs, blocksPerEpoch.Int)
|
||||
|
||||
// h(vrfout) * totalPower < e * sectorSize * 2^256?
|
||||
return lhs.Cmp(rhs) == -1
|
||||
return lhs.Cmp(rhs) < 0
|
||||
}
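The updated winner check compares `h(partialTicket) * totalPower` against `sectorSize * 2^256` scaled by the challenge ratio and the expected blocks per epoch. A rough, self-contained sketch of that inequality follows; the hashing step and the two constants are assumptions for illustration, since the real values come from the elided code and the `build` package.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"math/big"
)

// Assumed values for illustration only; the real constants live in the build package.
const (
	blocksPerEpoch          = 1
	sectorChallengeRatioDiv = 25
)

// isWinner checks: h(partialTicket) * totalPower < sectorSize * 2^256 * ratio * blocksPerEpoch.
func isWinner(partialTicket []byte, sectorSize uint64, totalPower *big.Int) bool {
	h := sha256.Sum256(partialTicket) // hash function assumed; the diff elides this step

	lhs := new(big.Int).SetBytes(h[:])
	lhs.Mul(lhs, totalPower)

	rhs := new(big.Int).Lsh(new(big.Int).SetUint64(sectorSize), 256)
	rhs.Mul(rhs, big.NewInt(sectorChallengeRatioDiv))
	rhs.Mul(rhs, big.NewInt(blocksPerEpoch))

	return lhs.Cmp(rhs) < 0
}

func main() {
	fmt.Println(isWinner([]byte("partial-ticket"), 1024, big.NewInt(1<<30)))
}
```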
|
||||
|
||||
func (t *Ticket) Equals(ot *Ticket) bool {
|
||||
|
@ -1,10 +1,12 @@
|
||||
package mock
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
"github.com/filecoin-project/lotus/chain/wallet"
|
||||
"github.com/ipfs/go-cid"
|
||||
)
|
||||
|
||||
@ -16,6 +18,26 @@ func Address(i uint64) address.Address {
|
||||
return a
|
||||
}
|
||||
|
||||
func MkMessage(from, to address.Address, nonce uint64, w *wallet.Wallet) *types.SignedMessage {
|
||||
msg := &types.Message{
|
||||
To: to,
|
||||
From: from,
|
||||
Value: types.NewInt(1),
|
||||
Nonce: nonce,
|
||||
GasLimit: types.NewInt(1),
|
||||
GasPrice: types.NewInt(0),
|
||||
}
|
||||
|
||||
sig, err := w.Sign(context.TODO(), from, msg.Cid().Bytes())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return &types.SignedMessage{
|
||||
Message: *msg,
|
||||
Signature: *sig,
|
||||
}
|
||||
}
|
||||
|
||||
func MkBlock(parents *types.TipSet, weightInc uint64, ticketNonce uint64) *types.BlockHeader {
|
||||
addr := Address(123561)
|
||||
|
||||
|
@ -31,6 +31,7 @@ func newInvoker() *invoker {
|
||||
|
||||
// add builtInCode using: register(cid, singleton)
|
||||
inv.register(actors.InitCodeCid, actors.InitActor{}, actors.InitActorState{})
|
||||
inv.register(actors.CronCodeCid, actors.CronActor{}, actors.CronActorState{})
|
||||
inv.register(actors.StoragePowerCodeCid, actors.StoragePowerActor{}, actors.StoragePowerState{})
|
||||
inv.register(actors.StorageMarketCodeCid, actors.StorageMarketActor{}, actors.StorageMarketState{})
|
||||
inv.register(actors.StorageMinerCodeCid, actors.StorageMinerActor{}, actors.StorageMinerActorState{})
|
||||
|
@ -234,10 +234,8 @@ var clientRetrieveCmd = &cli.Command{
|
||||
fmt.Println("Failed to find file")
|
||||
return nil
|
||||
}
|
||||
order := offers[0].Order()
|
||||
order.Client = payer
|
||||
|
||||
if err := api.ClientRetrieve(ctx, order, cctx.Args().Get(1)); err != nil {
|
||||
if err := api.ClientRetrieve(ctx, offers[0].Order(payer), cctx.Args().Get(1)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
|
@ -42,6 +42,7 @@ type SealingResult struct {
|
||||
PreCommit time.Duration
|
||||
Commit time.Duration
|
||||
Verify time.Duration
|
||||
Unseal time.Duration
|
||||
}
|
||||
|
||||
func main() {
|
||||
@ -63,8 +64,15 @@ func main() {
|
||||
Name: "sector-size",
|
||||
Value: 1024,
|
||||
},
|
||||
&cli.BoolFlag{
|
||||
Name: "no-gpu",
|
||||
Usage: "disable gpu usage for the benchmark run",
|
||||
},
|
||||
},
|
||||
Action: func(c *cli.Context) error {
|
||||
if c.Bool("no-gpu") {
|
||||
os.Setenv("BELLMAN_NO_GPU", "1")
|
||||
}
|
||||
sdir, err := homedir.Expand(c.String("storage-dir"))
|
||||
if err != nil {
|
||||
return err
|
||||
@ -97,9 +105,9 @@ func main() {
|
||||
CacheDir: filepath.Join(tsdir, "cache"),
|
||||
SealedDir: filepath.Join(tsdir, "sealed"),
|
||||
StagedDir: filepath.Join(tsdir, "staged"),
|
||||
MetadataDir: filepath.Join(tsdir, "meta"),
|
||||
UnsealedDir: filepath.Join(tsdir, "unsealed"),
|
||||
}
|
||||
for _, d := range []string{cfg.CacheDir, cfg.SealedDir, cfg.StagedDir, cfg.MetadataDir} {
|
||||
for _, d := range []string{cfg.CacheDir, cfg.SealedDir, cfg.StagedDir, cfg.UnsealedDir} {
|
||||
if err := os.MkdirAll(d, 0775); err != nil {
|
||||
return err
|
||||
}
|
||||
@ -113,8 +121,7 @@ func main() {
|
||||
return err
|
||||
}
|
||||
|
||||
r := rand.New(rand.NewSource(101))
|
||||
size := sectorbuilder.UserBytesForSectorSize(sectorSize)
|
||||
dataSize := sectorbuilder.UserBytesForSectorSize(sectorSize)
|
||||
|
||||
var sealTimings []SealingResult
|
||||
var sealedSectors []ffi.PublicSectorInfo
|
||||
@ -122,7 +129,10 @@ func main() {
|
||||
for i := uint64(1); i <= numSectors; i++ {
|
||||
start := time.Now()
|
||||
log.Info("Writing piece into sector...")
|
||||
pi, err := sb.AddPiece(size, i, r, nil)
|
||||
|
||||
r := rand.New(rand.NewSource(100 + int64(i)))
|
||||
|
||||
pi, err := sb.AddPiece(dataSize, i, r, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@ -171,11 +181,24 @@ func main() {
|
||||
|
||||
verifySeal := time.Now()
|
||||
|
||||
log.Info("Unsealing sector")
|
||||
rc, err := sb.ReadPieceFromSealedSector(1, 0, dataSize, ticket.TicketBytes[:], commD[:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
unseal := time.Now()
|
||||
|
||||
if err := rc.Close(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
sealTimings = append(sealTimings, SealingResult{
|
||||
AddPiece: addpiece.Sub(start),
|
||||
PreCommit: precommit.Sub(addpiece),
|
||||
Commit: sealcommit.Sub(precommit),
|
||||
Verify: verifySeal.Sub(sealcommit),
|
||||
Unseal: unseal.Sub(verifySeal),
|
||||
})
|
||||
}
|
||||
|
||||
@ -248,6 +271,7 @@ func main() {
|
||||
fmt.Printf("seal: preCommit: %s\n", benchout.SealingResults[0].PreCommit)
|
||||
fmt.Printf("seal: Commit: %s\n", benchout.SealingResults[0].Commit)
|
||||
fmt.Printf("seal: Verify: %s\n", benchout.SealingResults[0].Verify)
|
||||
fmt.Printf("unseal: %s\n", benchout.SealingResults[0].Unseal)
|
||||
fmt.Printf("generate candidates: %s\n", benchout.PostGenerateCandidates)
|
||||
fmt.Printf("compute epost proof (cold): %s\n", benchout.PostEProofCold)
|
||||
fmt.Printf("compute epost proof (hot): %s\n", benchout.PostEProofHot)
|
||||
|
@ -43,7 +43,7 @@ func acceptJobs(ctx context.Context, api api.StorageMiner, endpoint string, auth
|
||||
CacheDir: filepath.Join(repo, "cache"),
|
||||
SealedDir: filepath.Join(repo, "sealed"),
|
||||
StagedDir: filepath.Join(repo, "staged"),
|
||||
MetadataDir: filepath.Join(repo, "meta"),
|
||||
UnsealedDir: filepath.Join(repo, "unsealed"),
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
|
@ -61,7 +61,7 @@ var preSealCmd = &cli.Command{
|
||||
Value: "lotus is fire",
|
||||
Usage: "set the ticket preimage for sealing randomness",
|
||||
},
|
||||
&cli.Uint64Flag{
|
||||
&cli.IntFlag{
|
||||
Name: "num-sectors",
|
||||
Value: 1,
|
||||
Usage: "select number of sectors to pre-seal",
|
||||
@ -79,7 +79,7 @@ var preSealCmd = &cli.Command{
|
||||
return err
|
||||
}
|
||||
|
||||
gm, err := seed.PreSeal(maddr, c.Uint64("sector-size"), c.Uint64("num-sectors"), sbroot, []byte(c.String("ticket-preimage")))
|
||||
gm, err := seed.PreSeal(maddr, c.Uint64("sector-size"), c.Int("num-sectors"), sbroot, []byte(c.String("ticket-preimage")))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -11,6 +11,7 @@ import (
|
||||
"path/filepath"
|
||||
|
||||
badger "github.com/ipfs/go-ds-badger"
|
||||
logging "github.com/ipfs/go-log"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
@ -23,18 +24,20 @@ import (
|
||||
"github.com/filecoin-project/lotus/lib/sectorbuilder"
|
||||
)
|
||||
|
||||
func PreSeal(maddr address.Address, ssize uint64, sectors uint64, sbroot string, preimage []byte) (*genesis.GenesisMiner, error) {
|
||||
var log = logging.Logger("preseal")
|
||||
|
||||
func PreSeal(maddr address.Address, ssize uint64, sectors int, sbroot string, preimage []byte) (*genesis.GenesisMiner, error) {
|
||||
cfg := &sectorbuilder.Config{
|
||||
Miner: maddr,
|
||||
SectorSize: ssize,
|
||||
CacheDir: filepath.Join(sbroot, "cache"),
|
||||
SealedDir: filepath.Join(sbroot, "sealed"),
|
||||
StagedDir: filepath.Join(sbroot, "staging"),
|
||||
MetadataDir: filepath.Join(sbroot, "meta"),
|
||||
UnsealedDir: filepath.Join(sbroot, "unsealed"),
|
||||
WorkerThreads: 2,
|
||||
}
|
||||
|
||||
for _, d := range []string{cfg.CacheDir, cfg.SealedDir, cfg.StagedDir, cfg.MetadataDir} {
|
||||
for _, d := range []string{cfg.CacheDir, cfg.SealedDir, cfg.StagedDir, cfg.UnsealedDir} {
|
||||
if err := os.MkdirAll(d, 0775); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -58,7 +61,7 @@ func PreSeal(maddr address.Address, ssize uint64, sectors uint64, sbroot string,
|
||||
size := sectorbuilder.UserBytesForSectorSize(ssize)
|
||||
|
||||
var sealedSectors []*genesis.PreSeal
|
||||
for i := uint64(1); i <= sectors; i++ {
|
||||
for i := 0; i < sectors; i++ {
|
||||
sid, err := sb.AcquireSectorId()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@ -81,6 +84,7 @@ func PreSeal(maddr address.Address, ssize uint64, sectors uint64, sbroot string,
|
||||
return nil, xerrors.Errorf("commit: %w", err)
|
||||
}
|
||||
|
||||
log.Warn("PreCommitOutput: ", sid, pco)
|
||||
sealedSectors = append(sealedSectors, &genesis.PreSeal{
|
||||
CommR: pco.CommR,
|
||||
CommD: pco.CommD,
|
||||
@ -104,7 +108,7 @@ func PreSeal(maddr address.Address, ssize uint64, sectors uint64, sbroot string,
|
||||
Key: minerAddr.KeyInfo,
|
||||
}
|
||||
|
||||
if err := createDeals(miner, minerAddr, ssize); err != nil {
|
||||
if err := createDeals(miner, minerAddr, maddr, ssize); err != nil {
|
||||
return nil, xerrors.Errorf("creating deals: %w", err)
|
||||
}
|
||||
|
||||
@ -132,14 +136,14 @@ func WriteGenesisMiner(maddr address.Address, sbroot string, gm *genesis.Genesis
|
||||
return nil
|
||||
}
|
||||
|
||||
func createDeals(m *genesis.GenesisMiner, k *wallet.Key, ssize uint64) error {
|
||||
func createDeals(m *genesis.GenesisMiner, k *wallet.Key, maddr address.Address, ssize uint64) error {
|
||||
for _, sector := range m.Sectors {
|
||||
proposal := &actors.StorageDealProposal{
|
||||
PieceRef: sector.CommD[:], // just one deal so this == CommP
|
||||
PieceSize: ssize,
|
||||
PieceSize: sectorbuilder.UserBytesForSectorSize(ssize),
|
||||
PieceSerialization: actors.SerializationUnixFSv0,
|
||||
Client: k.Address,
|
||||
Provider: k.Address,
|
||||
Provider: maddr,
|
||||
ProposalExpiration: 9000, // TODO: allow setting
|
||||
Duration: 9000,
|
||||
StoragePricePerEpoch: types.NewInt(0),
|
||||
|
@ -63,18 +63,29 @@ var infoCmd = &cli.Command{
|
||||
fmt.Printf("\tLocal: %d / %d (+%d reserved)\n", wstat.LocalTotal-wstat.LocalReserved-wstat.LocalFree, wstat.LocalTotal-wstat.LocalReserved, wstat.LocalReserved)
|
||||
fmt.Printf("\tRemote: %d / %d\n", wstat.RemotesTotal-wstat.RemotesFree, wstat.RemotesTotal)
|
||||
|
||||
ppe, err := api.StateMinerElectionPeriodStart(ctx, maddr, nil)
|
||||
eps, err := api.StateMinerElectionPeriodStart(ctx, maddr, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if ppe != 0 {
|
||||
if eps != 0 {
|
||||
head, err := api.ChainHead(ctx)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
pdiff := int64(ppe - head.Height())
|
||||
pdifft := pdiff * build.BlockDelay
|
||||
fmt.Printf("Proving Period: %d, in %d Blocks (~%dm %ds)\n", ppe, pdiff, pdifft/60, pdifft%60)
|
||||
lastEps := int64(head.Height() - eps)
|
||||
lastEpsS := lastEps * build.BlockDelay
|
||||
|
||||
fallback := lastEps + build.FallbackPoStDelay
|
||||
fallbackS := fallback * build.BlockDelay
|
||||
|
||||
next := lastEps + build.SlashablePowerDelay
|
||||
nextS := next * build.BlockDelay
|
||||
|
||||
fmt.Printf("PoSt Submissions:\n")
|
||||
fmt.Printf("\tPrevious: Epoch %d (%d block(s), ~%dm %ds ago)\n", eps, lastEps, lastEpsS/60, lastEpsS%60)
|
||||
fmt.Printf("\tFallback: Epoch %d (in %d blocks, ~%dm %ds)\n", eps+build.FallbackPoStDelay, fallback, fallbackS/60, fallbackS%60)
|
||||
fmt.Printf("\tDeadline: Epoch %d (in %d blocks, ~%dm %ds)\n", eps+build.SlashablePowerDelay, next, nextS/60, nextS%60)
|
||||
|
||||
} else {
|
||||
fmt.Printf("Proving Period: Not Proving\n")
|
||||
}
|
||||
|
@ -123,6 +123,11 @@ var initCmd = &cli.Command{
|
||||
}
|
||||
|
||||
if pssb := cctx.String("pre-sealed-sectors"); pssb != "" {
|
||||
pssb, err := homedir.Expand(pssb)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.Infof("moving pre-sealed-sectors from %s into newly created storage miner repo", pssb)
|
||||
lr, err := r.Lock(repo.StorageMiner)
|
||||
if err != nil {
|
||||
@ -133,7 +138,36 @@ var initCmd = &cli.Command{
|
||||
return err
|
||||
}
|
||||
|
||||
if err := migratePreSealedSectors(pssb, repoPath, mds); err != nil {
|
||||
oldmds, err := badger.NewDatastore(filepath.Join(pssb, "badger"), nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
oldsb, err := sectorbuilder.New(&sectorbuilder.Config{
|
||||
SectorSize: 1024,
|
||||
WorkerThreads: 2,
|
||||
SealedDir: filepath.Join(pssb, "sealed"),
|
||||
CacheDir: filepath.Join(pssb, "cache"),
|
||||
StagedDir: filepath.Join(pssb, "staging"),
|
||||
UnsealedDir: filepath.Join(pssb, "unsealed"),
|
||||
}, oldmds)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("failed to open up preseal sectorbuilder: %w", err)
|
||||
}
|
||||
|
||||
nsb, err := sectorbuilder.New(&sectorbuilder.Config{
|
||||
SectorSize: 1024,
|
||||
WorkerThreads: 2,
|
||||
SealedDir: filepath.Join(lr.Path(), "sealed"),
|
||||
CacheDir: filepath.Join(lr.Path(), "cache"),
|
||||
StagedDir: filepath.Join(lr.Path(), "staging"),
|
||||
UnsealedDir: filepath.Join(lr.Path(), "unsealed"),
|
||||
}, mds)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("failed to open up sectorbuilder: %w", err)
|
||||
}
|
||||
|
||||
if err := nsb.ImportFrom(oldsb); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := lr.Close(); err != nil {
|
||||
@ -161,51 +195,6 @@ var initCmd = &cli.Command{
|
||||
},
|
||||
}
|
||||
|
||||
// TODO: this method should be a lot more robust for mainnet. For testnet, its
|
||||
// fine if we mess things up a few times
|
||||
// Also probably makes sense for this method to be in the sectorbuilder package
|
||||
func migratePreSealedSectors(presealsb string, repoPath string, mds dtypes.MetadataDS) error {
|
||||
pspath, err := homedir.Expand(presealsb)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
srcds, err := badger.NewDatastore(filepath.Join(pspath, "badger"), nil)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("openning presealed sectors datastore: %w", err)
|
||||
}
|
||||
|
||||
expRepo, err := homedir.Expand(repoPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if stat, err := os.Stat(pspath); err != nil {
|
||||
return xerrors.Errorf("failed to stat presealed sectors directory: %w", err)
|
||||
} else if !stat.IsDir() {
|
||||
return xerrors.Errorf("given presealed sectors path was not a directory: %w", err)
|
||||
}
|
||||
|
||||
for _, dir := range []string{"meta", "sealed", "staging", "cache"} {
|
||||
from := filepath.Join(pspath, dir)
|
||||
to := filepath.Join(expRepo, dir)
|
||||
if err := os.Rename(from, to); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
val, err := srcds.Get(sectorbuilder.LastSectorIdKey)
|
||||
if err != nil {
|
||||
return xerrors.Errorf("getting source last sector ID: %w", err)
|
||||
}
|
||||
|
||||
if err := mds.Put(sectorbuilder.LastSectorIdKey, val); err != nil {
|
||||
return xerrors.Errorf("failed to write last sector ID key to target datastore: %w", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func migratePreSealMeta(ctx context.Context, api lapi.FullNode, presealDir string, maddr address.Address, mds dtypes.MetadataDS) error {
|
||||
presealDir, err := homedir.Expand(presealDir)
|
||||
if err != nil {
|
||||
@ -242,7 +231,6 @@ func migratePreSealMeta(ctx context.Context, api lapi.FullNode, presealDir strin
|
||||
Pieces: []storage.Piece{
|
||||
{
|
||||
DealID: dealID,
|
||||
Ref: fmt.Sprintf("preseal-%d", sector.SectorID),
|
||||
Size: meta.SectorSize,
|
||||
CommP: sector.CommD[:],
|
||||
},
|
||||
@ -384,18 +372,23 @@ func storageMinerInit(ctx context.Context, cctx *cli.Context, api lapi.FullNode,
|
||||
}
|
||||
|
||||
if pssb := cctx.String("pre-sealed-sectors"); pssb != "" {
|
||||
pssb, err := homedir.Expand(pssb)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.Infof("Importing pre-sealed sector metadata for %s", a)
|
||||
|
||||
if err := migratePreSealMeta(ctx, api, cctx.String("pre-sealed-sectors"), a, mds); err != nil {
|
||||
if err := migratePreSealMeta(ctx, api, pssb, a, mds); err != nil {
|
||||
return xerrors.Errorf("migrating presealed sector metadata: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
} else {
|
||||
if err := configureStorageMiner(ctx, api, a, peerid); err != nil {
|
||||
return xerrors.Errorf("failed to configure storage miner: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if err := configureStorageMiner(ctx, api, a, peerid); err != nil {
|
||||
return xerrors.Errorf("failed to configure storage miner: %w", err)
|
||||
}
|
||||
|
||||
addr = a
|
||||
|
@ -163,7 +163,7 @@ var sectorsRefsCmd = &cli.Command{
|
||||
for name, refs := range refs {
|
||||
fmt.Printf("Block %s:\n", name)
|
||||
for _, ref := range refs {
|
||||
fmt.Printf("\t%s+%d %d bytes\n", ref.Piece, ref.Offset, ref.Size)
|
||||
fmt.Printf("\t%d+%d %d bytes\n", ref.SectorID, ref.Offset, ref.Size)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
|
32
docs/API.md
32
docs/API.md
@ -1,32 +0,0 @@
|
||||
TODO: make this into a nicer doc
|
||||
|
||||
### Endpoints
|
||||
|
||||
By default `127.0.0.1:1234` - daemon stores the api endpoint multiaddr in `~/.lotus/api`
|
||||
|
||||
* `http://[api:port]/rpc/v0` - JsonRPC http endpoint
|
||||
* `ws://[api:port]/rpc/v0` - JsonRPC websocket endpoint
|
||||
* `PUT http://[api:port]/rest/v0/import` - import file to the node repo
|
||||
* Requires write permission
|
||||
|
||||
For JsonRPC interface definition see `api/api.go`. Required permissions are
|
||||
defined in `api/struct.go`
|
||||
|
||||
### Auth:
|
||||
|
||||
JWT in the `Authorization: Bearer <token>` http header
|
||||
|
||||
Permissions:
|
||||
* `read` - Read node state, no private data
|
||||
* `write` - Write to local store / chain, read private data
|
||||
* `sign` - Use private keys stored in wallet for signing
|
||||
* `admin` - Manage permissions
|
||||
|
||||
Payload:
|
||||
```json
|
||||
{
|
||||
"Allow": ["read", "write", ...]
|
||||
}
|
||||
```
|
||||
|
||||
Admin token is stored in `~/.lotus/token`
|
35
documentation/api.md
Normal file
35
documentation/api.md
Normal file
@ -0,0 +1,35 @@
|
||||
# API
|
||||
|
||||
The system's API is defined here. The RPC maps directly to this API using the [JSON RPC package](https://github.com/filecoin-project/lotus/tree/master/lib/jsonrpc).
|
||||
|
||||
## Overview
|
||||
|
||||
By default `127.0.0.1:1234` - daemon stores the api endpoint multiaddr in `~/.lotus/api`
|
||||
|
||||
- `http://[api:port]/rpc/v0` - JsonRPC http endpoint
|
||||
- `ws://[api:port]/rpc/v0` - JsonRPC websocket endpoint
|
||||
- `PUT http://[api:port]/rest/v0/import` - import a file to the node repo; requires write permission.
|
||||
|
||||
For JsonRPC interface definition see [api/api.go](https://github.com/filecoin-project/lotus/blob/master/api/api_full.go). Required permissions are
|
||||
defined in [api/struct.go](https://github.com/filecoin-project/lotus/blob/master/api/struct.go)
|
||||
|
||||
## Auth
|
||||
|
||||
JWT in the `Authorization: Bearer <token>` http header
|
||||
|
||||
Permissions
|
||||
|
||||
- `read` - Read node state, no private data
|
||||
- `write` - Write to local store / chain, read private data
|
||||
- `sign` - Use private keys stored in wallet for signing
|
||||
- `admin` - Manage permissions
|
||||
|
||||
Payload
|
||||
|
||||
```json
|
||||
{
|
||||
"Allow": ["read", "write", ...]
|
||||
}
|
||||
```
|
||||
|
||||
Admin token is stored in `~/.lotus/token`
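As a concrete illustration, a raw JSON-RPC call against the endpoint above can be made with the admin token from `~/.lotus/token`. This is only a sketch: the method name `Filecoin.ChainHead` is an assumption here, used to show the shape of an authenticated request.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
	"os"
)

func main() {
	// Token written by the daemon (see above); path assumed to be under $HOME.
	token, err := ioutil.ReadFile(os.Getenv("HOME") + "/.lotus/token")
	if err != nil {
		panic(err)
	}

	// Hypothetical request body: method name chosen for illustration.
	body := []byte(`{"jsonrpc":"2.0","method":"Filecoin.ChainHead","params":[],"id":1}`)

	req, err := http.NewRequest("POST", "http://127.0.0.1:1234/rpc/v0", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+string(bytes.TrimSpace(token)))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```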
|
18
documentation/getting-started.md
Normal file
18
documentation/getting-started.md
Normal file
@ -0,0 +1,18 @@
|
||||
# Lotus
|
||||
|
||||
Lotus is an experimental implementation of the **Filecoin Distributed Storage Network**. For more details about Filecoin, check out the [Filecoin Spec](https://github.com/filecoin-project/specs).
|
||||
|
||||
## What can I learn here?
|
||||
|
||||
- If you want to [store](https://docs.lotu.sh/storing-data) or [retrieve](https://docs.lotu.sh/retrieving-data) data, you can install Lotus on [Arch Linux](https://docs.lotu.sh/install-lotus-arch), [Ubuntu](https://docs.lotu.sh/install-lotus-ubuntu), or [MacOS](https://docs.lotu.sh/install-lotus-macos) and get started.
|
||||
- Learn how to join the DevNet with the [command line interface](https://docs.lotu.sh/join-devnet-cli).
|
||||
- Learn how to test Lotus in a separate local network using a [GUI](https://docs.lotu.sh/testing-with-gui).
|
||||
- Learn how to mine Filecoin using the `lotus-storage-miner` in the [CLI](https://docs.lotu.sh/mining)
|
||||
|
||||
## What makes Lotus different?
|
||||
|
||||
Lotus is architected modularly to keep clean API boundaries while using the same process. The **Lotus Full Node** and the **Lotus Storage Miner** are two separate programs.
|
||||
|
||||
The **Lotus Storage Miner** is intended to be run on the machine that manages a single storage miner instance, and is meant to communicate with the full node via the websockets JSON RPC API for all of its chain interaction needs.
|
||||
|
||||
This way, a mining operation may easily run one or many storage miners, connected to one or many full node instances.
|
33
documentation/glossary.md
Normal file
33
documentation/glossary.md
Normal file
@ -0,0 +1,33 @@
|
||||
# Glossary
|
||||
|
||||
**Chain: Types**
|
||||
|
||||
Implementation of data structures used by Filecoin and their serializations.
|
||||
|
||||
**Chain: Store**
|
||||
|
||||
The chainstore manages all local chain state, including block headers, messages, and state.
|
||||
|
||||
**Chain: State**
|
||||
|
||||
A package for dealing with the Filecoin state tree. Wraps the [HAMT](https://github.com/ipfs/go-hamt-ipld).
|
||||
|
||||
**Chain: Actors**
|
||||
|
||||
Implementations of the builtin Filecoin network actors.
|
||||
|
||||
**Chain: VM**
|
||||
|
||||
The Filecoin state machine 'vm'. Implemented here are utilities to invoke Filecoin actor methods.
|
||||
|
||||
**Miner**
|
||||
|
||||
The block producer logic. This package interfaces with the full node through the API, despite currently being implemented in the same process (very likely to be extracted as its own separate process in the near future).
|
||||
|
||||
**Storage**
|
||||
|
||||
The storage miner logic. This package also interfaces with the full node through a subset of the api. This code is used to implement the `lotus-storage-miner` process.
|
||||
|
||||
**Pond**
|
||||
|
||||
[Pond](https://docs.lotu.sh/testing-with-gui) is a graphical testbed for lotus. It can be used to spin up nodes, connect them in a given topology, start them mining, and observe how they function over time.
|
5
documentation/hardware.md
Normal file
5
documentation/hardware.md
Normal file
@ -0,0 +1,5 @@
|
||||
# Hardware Requirements
|
||||
|
||||
Lotus can build and run on most [Linux](https://ubuntu.com/) and [MacOS](https://www.apple.com/macos) systems with at least 8GB of RAM.
|
||||
|
||||
Windows is not yet supported.
|
Before Width: | Height: | Size: 5.8 KiB After Width: | Height: | Size: 5.8 KiB |
Before Width: | Height: | Size: 4.8 KiB After Width: | Height: | Size: 4.8 KiB |
44
documentation/install-lotus-arch.md
Normal file
44
documentation/install-lotus-arch.md
Normal file
@ -0,0 +1,44 @@
|
||||
# Installing Lotus on Arch Linux
|
||||
|
||||
Install these dependencies for Arch Linux.
|
||||
|
||||
- go (1.13 or higher)
|
||||
- gcc (7.4.0 or higher)
|
||||
- git (version 2 or higher)
|
||||
- bzr (some go dependency needs this)
|
||||
- jq
|
||||
- pkg-config
|
||||
- opencl-icd-loader
|
||||
- opencl driver (like nvidia-opencl on arch) (for GPU acceleration)
|
||||
- opencl-headers (build)
|
||||
- rustup (proofs build)
|
||||
- llvm (proofs build)
|
||||
- clang (proofs build)
|
||||
|
||||
Arch (run):
|
||||
|
||||
```sh
|
||||
sudo pacman -Syu opencl-icd-loader
|
||||
```
|
||||
|
||||
Arch (build):
|
||||
|
||||
```sh
|
||||
sudo pacman -Syu go gcc git bzr jq pkg-config opencl-icd-loader opencl-headers
|
||||
```
|
||||
|
||||
Clone
|
||||
|
||||
```sh
|
||||
$ git clone https://github.com/filecoin-project/lotus.git
|
||||
$ cd lotus/
|
||||
```
|
||||
|
||||
Install
|
||||
|
||||
```sh
|
||||
$ make clean all
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
Now you can use the command `lotus` in the command line.
|
32
documentation/install-lotus-macos.md
Normal file
32
documentation/install-lotus-macos.md
Normal file
@ -0,0 +1,32 @@
|
||||
# Installing Lotus on MacOS
|
||||
|
||||
Install these dependencies for MacOS Catalina.
|
||||
|
||||
- go (1.13 or higher)
|
||||
- gcc (7.4.0 or higher)
|
||||
- git (version 2 or higher)
|
||||
- bzr (some go dependency needs this)
|
||||
- jq
|
||||
- pkg-config
|
||||
- opencl-icd-loader
|
||||
- opencl driver (like nvidia-opencl on arch) (for GPU acceleration)
|
||||
- opencl-headers (build)
|
||||
- rustup (proofs build)
|
||||
- llvm (proofs build)
|
||||
- clang (proofs build)
|
||||
|
||||
Clone
|
||||
|
||||
```sh
|
||||
$ git clone https://github.com/filecoin-project/lotus.git
|
||||
$ cd lotus/
|
||||
```
|
||||
|
||||
Install
|
||||
|
||||
```sh
|
||||
$ make clean all
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
Now you can use the command `lotus` in the command line.
|
47
documentation/install-lotus-ubuntu.md
Normal file
47
documentation/install-lotus-ubuntu.md
Normal file
@ -0,0 +1,47 @@
|
||||
# Installing Lotus on Ubuntu
|
||||
|
||||
Install these dependencies for Ubuntu.
|
||||
|
||||
- go (1.13 or higher)
|
||||
- gcc (7.4.0 or higher)
|
||||
- git (version 2 or higher)
|
||||
- bzr (some go dependency needs this)
|
||||
- jq
|
||||
- pkg-config
|
||||
- opencl-icd-loader
|
||||
- opencl driver (like nvidia-opencl on arch) (for GPU acceleration)
|
||||
- opencl-headers (build)
|
||||
- rustup (proofs build)
|
||||
- llvm (proofs build)
|
||||
- clang (proofs build)
|
||||
|
||||
Ubuntu / Debian (run):
|
||||
|
||||
```sh
|
||||
sudo apt update
|
||||
sudo apt install mesa-opencl-icd ocl-icd-opencl-dev
|
||||
```
|
||||
|
||||
Ubuntu (build):
|
||||
|
||||
```sh
|
||||
sudo add-apt-repository ppa:longsleep/golang-backports
|
||||
sudo apt update
|
||||
sudo apt install golang-go gcc git bzr jq pkg-config mesa-opencl-icd ocl-icd-opencl-dev
|
||||
```
|
||||
|
||||
Clone
|
||||
|
||||
```sh
|
||||
$ git clone https://github.com/filecoin-project/lotus.git
|
||||
$ cd lotus/
|
||||
```
|
||||
|
||||
Install
|
||||
|
||||
```sh
|
||||
$ make clean all
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
Now you can use the command `lotus` in the command line.
|
74
documentation/join-devnet-cli.md
Normal file
74
documentation/join-devnet-cli.md
Normal file
@ -0,0 +1,74 @@
|
||||
# Join Lotus Devnet
|
||||
|
||||
## Node CLI setup
|
||||
|
||||
If you have run lotus before and want to remove all previous data: `rm -rf ~/.lotus ~/.lotusstorage`
|
||||
|
||||
## Genesis & Bootstrap
|
||||
|
||||
The current lotus build will automatically join the lotus Devnet using the genesis and bootstrap files in the `build/` directory. No configuration is needed.
|
||||
|
||||
## Start Daemon
|
||||
|
||||
```sh
|
||||
$ lotus daemon
|
||||
```
|
||||
|
||||
In another window check that you are connected to the network:
|
||||
|
||||
```sh
|
||||
$ lotus net peers | wc -l
|
||||
2 # number of peers
|
||||
```
|
||||
|
||||
Wait for the chain to finish syncing:
|
||||
|
||||
```sh
|
||||
$ lotus sync wait
|
||||
```
|
||||
|
||||
You can view latest block height along with other network metrics at the https://lotus-metrics.kittyhawk.wtf/chain.
|
||||
|
||||
## Basics
|
||||
|
||||
Create a new address:
|
||||
|
||||
```sh
|
||||
$ lotus wallet new bls
|
||||
t3...
|
||||
```
|
||||
|
||||
Grab some funds from the faucet - go to https://lotus-faucet.kittyhawk.wtf/, paste the address
|
||||
you just created, and press Send.
|
||||
|
||||
Check the wallet balance (balance is listed in attoFIL, where 1 attoFIL = 10^-18 FIL):
|
||||
|
||||
```sh
|
||||
$ lotus wallet balance [optional address (t3...)]
|
||||
```
|
||||
|
||||
If you see an error like `actor not found` after executing this command, it means that either your node isn't fully synced or there are no transactions to this address yet on chain. If the latter, using the faucet should 'fix' this.
|
||||
|
||||
## Make a deal
|
||||
|
||||
It is possible for a Client to make a deal with a Miner on the same lotus Node.
|
||||
|
||||
```sh
|
||||
# List all miners in the system. Choose one to make a deal with.
|
||||
|
||||
$ lotus state list-miners
|
||||
|
||||
# List asks proposed by a miner
|
||||
|
||||
$ lotus client query-ask <miner>
|
||||
|
||||
# Propose a deal with a miner. Price is in attoFIL/byte/block. Duration is # of blocks.
|
||||
|
||||
$ lotus client deal <Data CID> <miner> <price> <duration>
|
||||
```
|
||||
|
||||
For example `$ lotus client deal bafkre...qvtjsi t0111 36000 12` proposes a deal to store CID `bafkre...qvtjsi` with miner `t0111` at price `36000` for a duration of `12` blocks. If successful, the `client deal` command will return a deal CID.
|
||||
|
||||
## Monitoring Dashboard
|
||||
|
||||
To see the latest network activity, including chain block height, blocktime, total network power, largest miners, and more, check out the [monitoring dashboard](https://lotus-metrics.kittyhawk.wtf).
|
55
documentation/mining.md
Normal file
55
documentation/mining.md
Normal file
@ -0,0 +1,55 @@
|
||||
# Getting started
|
||||
|
||||
Ensure that at least one BLS address (`t3..`) exists in your wallet:
|
||||
|
||||
```sh
|
||||
$ lotus wallet list
|
||||
t3...
|
||||
```
|
||||
|
||||
With this address, go to [the faucet](https://lotus-faucet.kittyhawk.wtf/miner.html), and
|
||||
click `Create Miner`
|
||||
|
||||
Wait for a page telling you the address of the newly created storage miner to
|
||||
appear.
|
||||
|
||||
The screen should show: `New storage miners address is: t0..`
|
||||
|
||||
## Initialize
|
||||
|
||||
```sh
|
||||
$ lotus-storage-miner init --actor=t01.. --owner=t3....
|
||||
```
|
||||
|
||||
This command should return successfully after the miner is set up on-chain (30-60s).
|
||||
|
||||
## Start mining
|
||||
|
||||
```sh
|
||||
$ lotus-storage-miner run
|
||||
```
|
||||
|
||||
To view the miner id used for deals:
|
||||
|
||||
```sh
|
||||
$ lotus-storage-miner info
|
||||
```
|
||||
|
||||
e.g. miner id `t0111`
|
||||
|
||||
Seal random data to start producing PoSts:
|
||||
|
||||
```sh
|
||||
$ lotus-storage-miner store-garbage
|
||||
```
|
||||
|
||||
You can check miner power and sector usage with the miner id:
|
||||
|
||||
```sh
|
||||
# Total power of the network
|
||||
$ lotus-storage-miner state power
|
||||
|
||||
$ lotus-storage-miner state power <miner>
|
||||
|
||||
$ lotus-storage-miner state sectors <miner>
|
||||
```
|
17
documentation/retrieving-data.md
Normal file
17
documentation/retrieving-data.md
Normal file
@ -0,0 +1,17 @@
|
||||
# Retrieving Data
|
||||
|
||||
If you have stored data with a miner in the network, you can search for it by CID
|
||||
|
||||
```sh
|
||||
$ lotus client find <Data CID>
|
||||
LOCAL
|
||||
RETRIEVAL <miner>@<miner peerId>-<deal funds>-<size>
|
||||
```
|
||||
|
||||
Retrieve data from a miner
|
||||
|
||||
```sh
|
||||
$ lotus client retrieve <Data CID> <outfile>
|
||||
```
|
||||
|
||||
This will initiate a retrieval deal and write the data to the outfile. This process may take some time.
|
20
documentation/storing-data.md
Normal file
20
documentation/storing-data.md
Normal file
@ -0,0 +1,20 @@
|
||||
# Storing Data
|
||||
|
||||
Start by creating a file. In this example we will use the command line to create `hello.txt`.
|
||||
|
||||
```sh
|
||||
$ echo "Hi my name is $USER" > hello.txt
|
||||
```
|
||||
|
||||
Afterwards you can import the file into lotus and get a **Data CID** as output.
|
||||
|
||||
```sh
|
||||
$ lotus client import ./hello.txt
|
||||
<Data CID>
|
||||
```
|
||||
|
||||
To see a list of files by `CID`, `name`, `size`, and `status`:
|
||||
|
||||
```sh
|
||||
$ lotus client local
|
||||
```
|
33
documentation/testing-with-gui.md
Normal file
33
documentation/testing-with-gui.md
Normal file
@ -0,0 +1,33 @@
|
||||
# Pond UI
|
||||
|
||||
Pond is a graphical testbed for [Lotus](https://docs.lotu.sh). Using it will set up a separate local network, which is helpful for debugging. Pond will spin up nodes, connect them in a given topology, start them mining, and observe how they function over time.
|
||||
|
||||
## Build
|
||||
|
||||
```sh
|
||||
$ make pond
|
||||
```
|
||||
|
||||
## Run
|
||||
|
||||
```sh
|
||||
$ ./pond run
|
||||
```
|
||||
|
||||
Now go to http://127.0.0.1:2222.
|
||||
|
||||
## What can I test?
|
||||
|
||||
- The `Spawn Node` button starts a new lotus Node in a new draggable window.
|
||||
- Click `[Spawn Storage Miner]` to start mining. This requires the node's wallet to have funds.
|
||||
- Click on `[Client]` to open the Node's client interface and propose a deal with an existing Miner. If successful you'll see a payment channel open up with that Miner.
|
||||
|
||||
Don't leave Pond unattended for long periods of time (10h+); the web UI tends to eventually consume all the available RAM.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
- Turn it off and on - Start at the top
|
||||
- `rm -rf ~/.lotus ~/.lotusstorage/`
|
||||
- Verify you have the correct versions of dependencies
|
||||
- If stuck on a bad fork, try `lotus chain sethead --genesis`
|
||||
- If that didn't help, open a new issue, ask in the [Community forum](https://discuss.filecoin.io) or reach out via [Community chat](https://github.com/filecoin-project/community#chat).
|
@ -1,4 +1,8 @@
|
||||
## Tracing
|
||||
# Tracing
|
||||
|
||||
Lotus has tracing built into many of its internals. To view the traces, first download [jaeger](https://www.jaegertracing.io/download/) (Choose the 'all-in-one' binary). Then run it somewhere, start up the lotus daemon, and open up localhost:16686 in your browser.
|
||||
|
||||
## Open Census
|
||||
|
||||
Lotus uses [OpenCensus](https://opencensus.io/) for tracing application flow.
|
||||
This generates spans
|
||||
@ -10,8 +14,7 @@ fairly easy to swap in.
|
||||
## Running Locally
|
||||
|
||||
To easily run and view tracing locally, first, install jaeger. The easiest way
|
||||
to do this is to download the binaries from
|
||||
https://www.jaegertracing.io/download/ and then run the `jaeger-all-in-one`
|
||||
to do this is to [download the binaries](https://www.jaegertracing.io/download/) and then run the `jaeger-all-in-one`
|
||||
binary. This will start up jaeger, listen for spans on `localhost:6831`, and
|
||||
expose a web UI for viewing traces on `http://localhost:16686/`.
|
||||
|
||||
@ -22,6 +25,7 @@ Now, to view any generated traces, open up `http://localhost:16686/` in your
|
||||
browser.
|
||||
|
||||
## Adding Spans
|
||||
|
||||
To annotate a new codepath with spans, add the following lines to the top of the function you wish to trace:
|
||||
|
||||
```go
|
2
extern/filecoin-ffi
vendored
2
extern/filecoin-ffi
vendored
@ -1 +1 @@
|
||||
Subproject commit 0e71b164cf4b2e1c0f53ca25145e3ea57cc53e90
|
||||
Subproject commit 9faf00cb536fd86559440a09de9131520ae1ca0e
|
@ -131,6 +131,7 @@ func main() {
|
||||
actors.ComputeDataCommitmentParams{},
|
||||
actors.SectorProveCommitInfo{},
|
||||
actors.CheckMinerParams{},
|
||||
actors.CronActorState{},
|
||||
)
|
||||
if err != nil {
|
||||
fmt.Println(err)
|
||||
|
3
go.mod
3
go.mod
@ -12,7 +12,6 @@ require (
|
||||
github.com/filecoin-project/chain-validation v0.0.0-20191106200742-11986803c0f7
|
||||
github.com/filecoin-project/filecoin-ffi v0.0.0-00010101000000-000000000000
|
||||
github.com/filecoin-project/go-amt-ipld v0.0.0-20191122035745-59b9dfc0efc7
|
||||
github.com/filecoin-project/go-leb128 v0.0.0-20190212224330-8d79a5489543
|
||||
github.com/gbrlsnchs/jwt/v3 v3.0.0-beta.1
|
||||
github.com/go-ole/go-ole v1.2.4 // indirect
|
||||
github.com/google/go-cmp v0.3.1 // indirect
|
||||
@ -71,7 +70,6 @@ require (
|
||||
github.com/mattn/go-isatty v0.0.9 // indirect
|
||||
github.com/mattn/go-runewidth v0.0.4 // indirect
|
||||
github.com/mattn/go-sqlite3 v1.12.0
|
||||
github.com/miekg/dns v1.1.16 // indirect
|
||||
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1
|
||||
github.com/minio/sha256-simd v0.1.1
|
||||
github.com/mitchellh/go-homedir v1.1.0
|
||||
@ -80,6 +78,7 @@ require (
|
||||
github.com/multiformats/go-multiaddr-dns v0.2.0
|
||||
github.com/multiformats/go-multiaddr-net v0.1.0
|
||||
github.com/multiformats/go-multihash v0.0.9
|
||||
github.com/multiformats/go-varint v0.0.1
|
||||
github.com/onsi/ginkgo v1.9.0 // indirect
|
||||
github.com/onsi/gomega v1.6.0 // indirect
|
||||
github.com/opentracing/opentracing-go v1.1.0
|
||||
|
7
go.sum
7
go.sum
@ -451,8 +451,6 @@ github.com/mattn/go-sqlite3 v1.12.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsO
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
|
||||
github.com/miekg/dns v1.1.12/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
|
||||
github.com/miekg/dns v1.1.16 h1:iMEQ/IVHxPTtx2Q07JP/k4CKRvSjiAZjZ0hnhgYEDmE=
|
||||
github.com/miekg/dns v1.1.16/go.mod h1:YNV562EiewvSmpCB6/W4c6yqjK7Z+M/aIS1JHsIVeg8=
|
||||
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1 h1:lYpkrQH5ajf0OXOcUbGjvZxxijuBwbbmlSxLiuofa+g=
|
||||
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1/go.mod h1:pD8RvIylQ358TN4wwqatJ8rNavkEINozVn9DtGI3dfQ=
|
||||
github.com/minio/sha256-simd v0.0.0-20190131020904-2d45a736cd16/go.mod h1:2FMWW+8GMoPweT6+pI63m9YE3Lmw4J71hV56Chs1E/U=
|
||||
@ -500,6 +498,8 @@ github.com/multiformats/go-multihash v0.0.9/go.mod h1:YSLudS+Pi8NHE7o6tb3D8vrpKa
|
||||
github.com/multiformats/go-multistream v0.0.1/go.mod h1:fJTiDfXJVmItycydCnNx4+wSzZ5NwG2FEVAI30fiovg=
|
||||
github.com/multiformats/go-multistream v0.1.0 h1:UpO6jrsjqs46mqAK3n6wKRYFhugss9ArzbyUzU+4wkQ=
|
||||
github.com/multiformats/go-multistream v0.1.0/go.mod h1:fJTiDfXJVmItycydCnNx4+wSzZ5NwG2FEVAI30fiovg=
|
||||
github.com/multiformats/go-varint v0.0.1 h1:TR/0rdQtnNxuN2IhiB639xC3tWM4IUi7DkTBVTdGW/M=
|
||||
github.com/multiformats/go-varint v0.0.1/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
|
||||
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
|
||||
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229 h1:E2B8qYyeSgv5MXpmzZXRNp8IAQ4vjxIjhpAf5hv/tAg=
|
||||
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229/go.mod h1:0aYXnNPJ8l7uZxf45rWW1a/uME32OF0rhiYGNQ2oF2E=
|
||||
@ -630,7 +630,6 @@ go4.org v0.0.0-20190313082347-94abd6928b1d h1:JkRdGP3zvTtTbabWSAC6n67ka30y7gOzWA
|
||||
go4.org v0.0.0-20190313082347-94abd6928b1d/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
|
||||
golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181001203147-e3636079e1a4/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190225124518-7f87c0fbb88b/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
@ -653,7 +652,6 @@ golang.org/x/net v0.0.0-20180524181706-dfa909b99c79/go.mod h1:mL1N/T3taQHkDXs73r
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180926154720-4dfa2610cdf3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
@ -679,7 +677,6 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180928133829-e4b3c5e90611/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
|
@ -18,6 +18,10 @@ func (sb *SectorBuilder) StagedSectorPath(sectorID uint64) string {
	return filepath.Join(sb.stagedDir, sb.SectorName(sectorID))
}

func (sb *SectorBuilder) unsealedSectorPath(sectorID uint64) string {
	return filepath.Join(sb.unsealedDir, sb.SectorName(sectorID))
}

func (sb *SectorBuilder) stagedSectorFile(sectorID uint64) (*os.File, error) {
	return os.OpenFile(sb.StagedSectorPath(sectorID), os.O_RDWR|os.O_CREATE, 0644)
}
@ -1,35 +1,19 @@
package sectorbuilder

import (
	"io/ioutil"
	"os"
	"path/filepath"

	"github.com/filecoin-project/lotus/chain/address"
	"github.com/filecoin-project/lotus/node/modules/dtypes"
)

func TempSectorbuilder(sectorSize uint64, ds dtypes.MetadataDS) (*SectorBuilder, func(), error) {
	dir, err := ioutil.TempDir("", "sbtest")
	if err != nil {
		return nil, nil, err
	}

	sb, err := TempSectorbuilderDir(dir, sectorSize, ds)
	return sb, func() {
		if err := os.RemoveAll(dir); err != nil {
			log.Warn("failed to clean up temp sectorbuilder: ", err)
		}
	}, err
}

func TempSectorbuilderDir(dir string, sectorSize uint64, ds dtypes.MetadataDS) (*SectorBuilder, error) {
	addr, err := address.NewFromString("t3vfxagwiegrywptkbmyohqqbfzd7xzbryjydmxso4hfhgsnv6apddyihltsbiikjf3lm7x2myiaxhuc77capq")
	if err != nil {
		return nil, err
	}

	metadata := filepath.Join(dir, "meta")
	unsealed := filepath.Join(dir, "unsealed")
	sealed := filepath.Join(dir, "sealed")
	staging := filepath.Join(dir, "staging")
	cache := filepath.Join(dir, "cache")
@ -39,7 +23,7 @@ func TempSectorbuilderDir(dir string, sectorSize uint64, ds dtypes.MetadataDS) (

	SealedDir: sealed,
	StagedDir: staging,
	MetadataDir: metadata,
	UnsealedDir: unsealed,
	CacheDir: cache,

	WorkerThreads: 2,
@ -4,6 +4,7 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
@ -21,7 +22,7 @@ import (
|
||||
const PoStReservedWorkers = 1
|
||||
const PoRepProofPartitions = 2
|
||||
|
||||
var LastSectorIdKey = datastore.NewKey("/sectorbuilder/last")
|
||||
var lastSectorIdKey = datastore.NewKey("/sectorbuilder/last")
|
||||
|
||||
var log = logging.Logger("sectorbuilder")
|
||||
|
||||
@ -53,9 +54,12 @@ type SectorBuilder struct {
|
||||
|
||||
Miner address.Address
|
||||
|
||||
stagedDir string
|
||||
sealedDir string
|
||||
cacheDir string
|
||||
stagedDir string
|
||||
sealedDir string
|
||||
cacheDir string
|
||||
unsealedDir string
|
||||
|
||||
unsealLk sync.Mutex
|
||||
|
||||
sealLocal bool
|
||||
rateLimit chan struct{}
|
||||
@ -120,7 +124,7 @@ type Config struct {
|
||||
CacheDir string
|
||||
SealedDir string
|
||||
StagedDir string
|
||||
MetadataDir string
|
||||
UnsealedDir string
|
||||
}
|
||||
|
||||
func New(cfg *Config, ds dtypes.MetadataDS) (*SectorBuilder, error) {
|
||||
@ -128,7 +132,7 @@ func New(cfg *Config, ds dtypes.MetadataDS) (*SectorBuilder, error) {
|
||||
return nil, xerrors.Errorf("minimum worker threads is %d, specified %d", PoStReservedWorkers, cfg.WorkerThreads)
|
||||
}
|
||||
|
||||
for _, dir := range []string{cfg.StagedDir, cfg.SealedDir, cfg.CacheDir, cfg.MetadataDir} {
|
||||
for _, dir := range []string{cfg.StagedDir, cfg.SealedDir, cfg.CacheDir, cfg.UnsealedDir} {
|
||||
if err := os.Mkdir(dir, 0755); err != nil {
|
||||
if os.IsExist(err) {
|
||||
continue
|
||||
@ -138,7 +142,7 @@ func New(cfg *Config, ds dtypes.MetadataDS) (*SectorBuilder, error) {
|
||||
}
|
||||
|
||||
var lastUsedID uint64
|
||||
b, err := ds.Get(LastSectorIdKey)
|
||||
b, err := ds.Get(lastSectorIdKey)
|
||||
switch err {
|
||||
case nil:
|
||||
i, err := strconv.ParseInt(string(b), 10, 64)
|
||||
@ -165,9 +169,10 @@ func New(cfg *Config, ds dtypes.MetadataDS) (*SectorBuilder, error) {
|
||||
ssize: cfg.SectorSize,
|
||||
lastID: lastUsedID,
|
||||
|
||||
stagedDir: cfg.StagedDir,
|
||||
sealedDir: cfg.SealedDir,
|
||||
cacheDir: cfg.CacheDir,
|
||||
stagedDir: cfg.StagedDir,
|
||||
sealedDir: cfg.SealedDir,
|
||||
cacheDir: cfg.CacheDir,
|
||||
unsealedDir: cfg.UnsealedDir,
|
||||
|
||||
Miner: cfg.Miner,
|
||||
|
||||
@ -186,7 +191,7 @@ func New(cfg *Config, ds dtypes.MetadataDS) (*SectorBuilder, error) {
}

func NewStandalone(cfg *Config) (*SectorBuilder, error) {
	for _, dir := range []string{cfg.StagedDir, cfg.SealedDir, cfg.CacheDir, cfg.MetadataDir} {
	for _, dir := range []string{cfg.StagedDir, cfg.SealedDir, cfg.CacheDir, cfg.UnsealedDir} {
		if err := os.MkdirAll(dir, 0755); err != nil {
			if os.IsExist(err) {
				continue
@ -204,6 +209,7 @@ func NewStandalone(cfg *Config) (*SectorBuilder, error) {
	stagedDir: cfg.StagedDir,
	sealedDir: cfg.SealedDir,
	cacheDir: cfg.CacheDir,
	unsealedDir: cfg.UnsealedDir,

	sealLocal: true,
	taskCtr: 1,
@ -271,7 +277,7 @@ func (sb *SectorBuilder) AcquireSectorId() (uint64, error) {
|
||||
sb.lastID++
|
||||
id := sb.lastID
|
||||
|
||||
err := sb.ds.Put(LastSectorIdKey, []byte(fmt.Sprint(id)))
|
||||
err := sb.ds.Put(lastSectorIdKey, []byte(fmt.Sprint(id)))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
@ -311,13 +317,72 @@ func (sb *SectorBuilder) AddPiece(pieceSize uint64, sectorId uint64, file io.Rea
	}, werr()
}

// TODO: should *really really* return an io.ReadCloser
func (sb *SectorBuilder) ReadPieceFromSealedSector(pieceKey string) ([]byte, error) {
	ret := sb.RateLimit()
func (sb *SectorBuilder) ReadPieceFromSealedSector(sectorID uint64, offset uint64, size uint64, ticket []byte, commD []byte) (io.ReadCloser, error) {
	ret := sb.RateLimit() // TODO: check perf, consider remote unseal worker
	defer ret()

	panic("fixme")
	//return sectorbuilder.Unseal(sb.handle, pieceKey)
	sb.unsealLk.Lock() // TODO: allow unsealing unrelated sectors in parallel
	defer sb.unsealLk.Unlock()

	cacheDir, err := sb.sectorCacheDir(sectorID)
	if err != nil {
		return nil, err
	}

	sealedPath, err := sb.SealedSectorPath(sectorID)
	if err != nil {
		return nil, err
	}

	unsealedPath := sb.unsealedSectorPath(sectorID)

	// TODO: GC for those
	// (Probably configurable count of sectors to be kept unsealed, and just
	// remove last used one (or use whatever other cache policy makes sense))
	f, err := os.OpenFile(unsealedPath, os.O_RDONLY, 0644)
	if err != nil {
		if !os.IsNotExist(err) {
			return nil, err
		}

		var commd [CommLen]byte
		copy(commd[:], commD)

		var tkt [CommLen]byte
		copy(tkt[:], ticket)

		err = sectorbuilder.Unseal(sb.ssize,
			PoRepProofPartitions,
			cacheDir,
			sealedPath,
			unsealedPath,
			sectorID,
			addressToProverID(sb.Miner),
			tkt,
			commd)
		if err != nil {
			return nil, xerrors.Errorf("unseal failed: %w", err)
		}

		f, err = os.OpenFile(unsealedPath, os.O_RDONLY, 0644)
		if err != nil {
			return nil, err
		}
	}

	if _, err := f.Seek(int64(offset), io.SeekStart); err != nil {
		return nil, xerrors.Errorf("seek: %w", err)
	}

	lr := io.LimitReader(f, int64(size))

	return &struct {
		io.Reader
		io.Closer
	}{
		Reader: lr,
		Closer: f,
	}, nil
}
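For readers skimming the hunk above: the function now returns an io.ReadCloser built from an anonymous struct that embeds an io.LimitReader over the unsealed sector file together with the file itself as the Closer. Below is a small, self-contained sketch of that same pattern; the offset, size, and piece bytes are made-up placeholders, not values taken from this diff:

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"strings"
)

// readCloserOver combines an arbitrary reader with an independent closer into
// one io.ReadCloser, mirroring the anonymous-struct return in the hunk above.
func readCloserOver(r io.Reader, c io.Closer) io.ReadCloser {
	return struct {
		io.Reader
		io.Closer
	}{Reader: r, Closer: c}
}

func main() {
	// Stand-in for the unsealed sector file (the real code opens the file at
	// unsealedSectorPath and Seeks to the piece offset).
	f := ioutil.NopCloser(strings.NewReader("....piece bytes....padding"))

	const offset, size = 4, 11 // hypothetical piece offset and size

	// Skip to the piece offset; with a plain reader we discard instead of Seek.
	if _, err := io.CopyN(ioutil.Discard, f, offset); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	piece := readCloserOver(io.LimitReader(f, size), f)
	defer piece.Close()

	data, err := ioutil.ReadAll(piece)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("read %d bytes: %q\n", len(data), data) // read 11 bytes: "piece bytes"
}
```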
|
||||
func (sb *SectorBuilder) sealPreCommitRemote(call workerCall) (RawSealPreCommitOutput, error) {
|
||||
@ -495,6 +560,7 @@ func (sb *SectorBuilder) ComputeElectionPoSt(sectorInfo SortedPublicSectorInfo,
|
||||
}
|
||||
|
||||
proverID := addressToProverID(sb.Miner)
|
||||
|
||||
return sectorbuilder.GeneratePoSt(sb.ssize, proverID, privsects, cseed, winners)
|
||||
}
|
||||
|
||||
@ -504,7 +570,7 @@ func (sb *SectorBuilder) GenerateEPostCandidates(sectorInfo SortedPublicSectorIn
|
||||
return nil, err
|
||||
}
|
||||
|
||||
challengeCount := electionPostChallengeCount(uint64(len(sectorInfo.Values())))
|
||||
challengeCount := ElectionPostChallengeCount(uint64(len(sectorInfo.Values())))
|
||||
|
||||
proverID := addressToProverID(sb.Miner)
|
||||
return sectorbuilder.GenerateCandidates(sb.ssize, proverID, challengeSeed, challengeCount, privsectors)
|
||||
@ -555,15 +621,61 @@ func (sb *SectorBuilder) Stop() {
	close(sb.stopping)
}

func electionPostChallengeCount(sectors uint64) uint64 {
func ElectionPostChallengeCount(sectors uint64) uint64 {
	// ceil(sectors / build.SectorChallengeRatioDiv)
	return (sectors + build.SectorChallengeRatioDiv - 1) / build.SectorChallengeRatioDiv
}

func fallbackPostChallengeCount(sectors uint64) uint64 {
	challengeCount := electionPostChallengeCount(sectors)
	challengeCount := ElectionPostChallengeCount(sectors)
	if challengeCount > build.MaxFallbackPostChallengeCount {
		return build.MaxFallbackPostChallengeCount
	}
	return challengeCount
}

func (sb *SectorBuilder) ImportFrom(osb *SectorBuilder) error {
	if err := moveAllFiles(osb.cacheDir, sb.cacheDir); err != nil {
		return err
	}

	if err := moveAllFiles(osb.sealedDir, sb.sealedDir); err != nil {
		return err
	}

	if err := moveAllFiles(osb.stagedDir, sb.stagedDir); err != nil {
		return err
	}

	val, err := osb.ds.Get(lastSectorIdKey)
	if err != nil {
		return err
	}

	if err := sb.ds.Put(lastSectorIdKey, val); err != nil {
		return err
	}

	sb.lastID = osb.lastID

	return nil
}

func moveAllFiles(from, to string) error {
	dir, err := os.Open(from)
	if err != nil {
		return err
	}

	names, err := dir.Readdirnames(0)
	if err != nil {
		return xerrors.Errorf("failed to list items in dir: %w", err)
	}
	for _, n := range names {
		if err := os.Rename(filepath.Join(from, n), filepath.Join(to, n)); err != nil {
			return xerrors.Errorf("moving file failed: %w", err)
		}
	}

	return nil
}
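As a quick sanity check of the ceil-division used by ElectionPostChallengeCount in the hunk above, here is a minimal standalone sketch. The two constants are placeholders standing in for build.SectorChallengeRatioDiv and build.MaxFallbackPostChallengeCount; the real values live in the lotus build package and may differ.

```go
package main

import "fmt"

// Placeholder values; the real constants come from the lotus build package.
const (
	sectorChallengeRatioDiv       = 25
	maxFallbackPostChallengeCount = 10
)

// electionPostChallengeCount mirrors the integer ceil-division shown above:
// ceil(sectors / sectorChallengeRatioDiv) without floating point.
func electionPostChallengeCount(sectors uint64) uint64 {
	return (sectors + sectorChallengeRatioDiv - 1) / sectorChallengeRatioDiv
}

// fallbackPostChallengeCount applies the fallback cap, as in the diff above.
func fallbackPostChallengeCount(sectors uint64) uint64 {
	c := electionPostChallengeCount(sectors)
	if c > maxFallbackPostChallengeCount {
		return maxFallbackPostChallengeCount
	}
	return c
}

func main() {
	for _, n := range []uint64{1, 24, 25, 26, 1000} {
		fmt.Printf("sectors=%4d election=%2d fallback=%2d\n",
			n, electionPostChallengeCount(n), fallbackPostChallengeCount(n))
	}
}
```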
|
@ -122,8 +122,7 @@ func TestSealAndVerify(t *testing.T) {
|
||||
if runtime.NumCPU() < 10 && os.Getenv("CI") == "" { // don't bother on slow hardware
|
||||
t.Skip("this is slow")
|
||||
}
|
||||
os.Setenv("BELLMAN_NO_GPU", "1")
|
||||
os.Setenv("RUST_LOG", "info")
|
||||
_ = os.Setenv("RUST_LOG", "info")
|
||||
|
||||
build.SectorSizes = []uint64{sectorSize}
|
||||
|
||||
@ -192,8 +191,7 @@ func TestSealPoStNoCommit(t *testing.T) {
|
||||
if runtime.NumCPU() < 10 && os.Getenv("CI") == "" { // don't bother on slow hardware
|
||||
t.Skip("this is slow")
|
||||
}
|
||||
os.Setenv("BELLMAN_NO_GPU", "1")
|
||||
os.Setenv("RUST_LOG", "info")
|
||||
_ = os.Setenv("RUST_LOG", "info")
|
||||
|
||||
build.SectorSizes = []uint64{sectorSize}
|
||||
|
||||
@ -255,8 +253,7 @@ func TestSealAndVerify2(t *testing.T) {
|
||||
if runtime.NumCPU() < 10 && os.Getenv("CI") == "" { // don't bother on slow hardware
|
||||
t.Skip("this is slow")
|
||||
}
|
||||
os.Setenv("BELLMAN_NO_GPU", "1")
|
||||
os.Setenv("RUST_LOG", "info")
|
||||
_ = os.Setenv("RUST_LOG", "info")
|
||||
|
||||
build.SectorSizes = []uint64{sectorSize}
|
||||
|
||||
|
@ -36,7 +36,7 @@ func NewSortedPublicSectorInfo(sectors []sectorbuilder.PublicSectorInfo) SortedP
|
||||
}
|
||||
|
||||
func VerifyElectionPost(ctx context.Context, sectorSize uint64, sectorInfo SortedPublicSectorInfo, challengeSeed []byte, proof []byte, candidates []EPostCandidate, proverID address.Address) (bool, error) {
|
||||
challengeCount := electionPostChallengeCount(uint64(len(sectorInfo.Values())))
|
||||
challengeCount := ElectionPostChallengeCount(uint64(len(sectorInfo.Values())))
|
||||
return verifyPost(ctx, sectorSize, sectorInfo, challengeCount, challengeSeed, proof, candidates, proverID)
|
||||
}
|
||||
|
||||
|
@ -107,7 +107,8 @@ class MarketState extends React.Component {
|
||||
const tipset = await this.props.client.call("Filecoin.ChainHead", []) // TODO: from props
|
||||
const participants = await this.props.client.call("Filecoin.StateMarketParticipants", [tipset])
|
||||
const deals = await this.props.client.call("Filecoin.StateMarketDeals", [tipset])
|
||||
this.setState({participants, deals})
|
||||
const state = await this.props.client.call('Filecoin.StateReadState', [this.props.actor, tipset])
|
||||
this.setState({participants, deals, nextDeal: state.State.NextDealID})
|
||||
}
|
||||
|
||||
render() {
|
||||
@ -125,7 +126,7 @@ class MarketState extends React.Component {
|
||||
</div>
|
||||
<div>
|
||||
<div>---</div>
|
||||
<div>Deals:</div>
|
||||
<div>Deals ({this.state.nextDeal} Total):</div>
|
||||
<table>
|
||||
<tr><td>id</td><td>Active</td><td>Client</td><td>Provider</td><td>Size</td><td>Price</td><td>Duration</td></tr>
|
||||
{Object.keys(this.state.deals).map(d => <tr>
|
||||
|
@ -153,7 +153,7 @@ eventLoop:
|
||||
continue
|
||||
}
|
||||
if base.ts.Equals(lastBase.ts) && lastBase.nullRounds == base.nullRounds {
|
||||
log.Errorf("BestMiningCandidate from the previous round: %s (nulls:%d)", lastBase.ts.Cids(), lastBase.nullRounds)
|
||||
log.Warnf("BestMiningCandidate from the previous round: %s (nulls:%d)", lastBase.ts.Cids(), lastBase.nullRounds)
|
||||
time.Sleep(build.BlockDelay * time.Second)
|
||||
continue
|
||||
}
|
||||
@ -237,7 +237,7 @@ func (m *Miner) GetBestMiningCandidate(ctx context.Context) (*MiningBase, error)
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (m *Miner) isSlashed(ctx context.Context, addr address.Address, ts *types.TipSet) (bool, error) {
|
||||
func (m *Miner) hasPower(ctx context.Context, addr address.Address, ts *types.TipSet) (bool, error) {
|
||||
power, err := m.api.StateMinerPower(ctx, addr, ts)
|
||||
if err != nil {
|
||||
return false, err
|
||||
@ -250,12 +250,12 @@ func (m *Miner) mineOne(ctx context.Context, addr address.Address, base *MiningB
|
||||
log.Debugw("attempting to mine a block", "tipset", types.LogCids(base.ts.Cids()))
|
||||
start := time.Now()
|
||||
|
||||
slashed, err := m.isSlashed(ctx, addr, base.ts)
|
||||
hasPower, err := m.hasPower(ctx, addr, base.ts)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("checking if miner is slashed: %w", err)
|
||||
}
|
||||
if slashed {
|
||||
log.Warnf("Slashed at epoch %d, not attempting to mine a block", base.ts.Height()+base.nullRounds)
|
||||
if hasPower {
|
||||
// slashed or just have no power yet
|
||||
base.nullRounds++
|
||||
return nil, nil
|
||||
}
|
||||
@ -279,10 +279,13 @@ func (m *Miner) mineOne(ctx context.Context, addr address.Address, base *MiningB
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("failed to create block: %w", err)
|
||||
}
|
||||
log.Infow("mined new block", "cid", b.Cid())
|
||||
log.Infow("mined new block", "cid", b.Cid(), "height", b.Header.Height)
|
||||
|
||||
dur := time.Now().Sub(start)
|
||||
log.Infof("Creating block took %s", dur)
|
||||
if dur > time.Second*build.BlockDelay {
|
||||
log.Warn("CAUTION: block production took longer than the block delay. Your computer may not be fast enough to keep up")
|
||||
}
|
||||
|
||||
return b, nil
|
||||
}
|
||||
|
@ -4,14 +4,22 @@ import (
|
||||
"context"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/gen"
|
||||
)
|
||||
|
||||
func NewTestMiner(nextCh <-chan struct{}) func(api api.FullNode) *Miner {
|
||||
return func(api api.FullNode) *Miner {
|
||||
return &Miner{
|
||||
func NewTestMiner(nextCh <-chan struct{}, addr address.Address) func(api.FullNode, gen.ElectionPoStProver) *Miner {
|
||||
return func(api api.FullNode, epp gen.ElectionPoStProver) *Miner {
|
||||
m := &Miner{
|
||||
api: api,
|
||||
waitFunc: chanWaiter(nextCh),
|
||||
epp: epp,
|
||||
}
|
||||
|
||||
if err := m.Register(addr); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return m
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -24,6 +24,7 @@ import (
|
||||
"github.com/filecoin-project/lotus/chain/deals"
|
||||
"github.com/filecoin-project/lotus/chain/gen"
|
||||
"github.com/filecoin-project/lotus/chain/market"
|
||||
"github.com/filecoin-project/lotus/chain/messagepool"
|
||||
"github.com/filecoin-project/lotus/chain/metrics"
|
||||
"github.com/filecoin-project/lotus/chain/stmgr"
|
||||
"github.com/filecoin-project/lotus/chain/store"
|
||||
@ -202,7 +203,7 @@ func Online() Option {
|
||||
// Filecoin services
|
||||
Override(new(*chain.Syncer), modules.NewSyncer),
|
||||
Override(new(*blocksync.BlockSync), blocksync.NewBlockSyncClient),
|
||||
Override(new(*chain.MessagePool), modules.MessagePool),
|
||||
Override(new(*messagepool.MessagePool), modules.MessagePool),
|
||||
|
||||
Override(new(modules.Genesis), modules.ErrorGenesis),
|
||||
Override(SetGenesisKey, modules.SetGenesis),
|
||||
|
@ -7,8 +7,8 @@ import (
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/chain"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/messagepool"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
)
|
||||
|
||||
@ -17,7 +17,7 @@ type MpoolAPI struct {
|
||||
|
||||
WalletAPI
|
||||
|
||||
Mpool *chain.MessagePool
|
||||
Mpool *messagepool.MessagePool
|
||||
}
|
||||
|
||||
func (a *MpoolAPI) MpoolPending(ctx context.Context, ts *types.TipSet) ([]*types.SignedMessage, error) {
|
||||
|
@ -19,6 +19,7 @@ import (
|
||||
|
||||
"github.com/filecoin-project/lotus/chain"
|
||||
"github.com/filecoin-project/lotus/chain/blocksync"
|
||||
"github.com/filecoin-project/lotus/chain/messagepool"
|
||||
"github.com/filecoin-project/lotus/chain/stmgr"
|
||||
"github.com/filecoin-project/lotus/chain/store"
|
||||
"github.com/filecoin-project/lotus/chain/types"
|
||||
@ -41,8 +42,9 @@ func ChainExchange(mctx helpers.MetricsCtx, lc fx.Lifecycle, host host.Host, rt
|
||||
return exch
|
||||
}
|
||||
|
||||
func MessagePool(lc fx.Lifecycle, sm *stmgr.StateManager, ps *pubsub.PubSub, ds dtypes.MetadataDS) (*chain.MessagePool, error) {
|
||||
mp, err := chain.NewMessagePool(sm, ps, ds)
|
||||
func MessagePool(lc fx.Lifecycle, sm *stmgr.StateManager, ps *pubsub.PubSub, ds dtypes.MetadataDS) (*messagepool.MessagePool, error) {
|
||||
mpp := messagepool.NewProvider(sm, ps)
|
||||
mp, err := messagepool.New(mpp, ds)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("constructing mpool: %w", err)
|
||||
}
|
||||
|
@ -6,7 +6,6 @@ import (
|
||||
|
||||
"github.com/libp2p/go-libp2p-core/host"
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
"github.com/libp2p/go-libp2p/p2p/discovery"
|
||||
"go.uber.org/fx"
|
||||
|
||||
"github.com/filecoin-project/lotus/node/modules/helpers"
|
||||
@ -34,20 +33,3 @@ func DiscoveryHandler(mctx helpers.MetricsCtx, lc fx.Lifecycle, host host.Host)
|
||||
host: host,
|
||||
}
|
||||
}
|
||||
|
||||
func SetupDiscovery(mdns bool, mdnsInterval int) func(helpers.MetricsCtx, fx.Lifecycle, host.Host, *discoveryHandler) error {
|
||||
return func(mctx helpers.MetricsCtx, lc fx.Lifecycle, host host.Host, handler *discoveryHandler) error {
|
||||
if mdns {
|
||||
if mdnsInterval == 0 {
|
||||
mdnsInterval = 5
|
||||
}
|
||||
service, err := discovery.NewMdnsService(helpers.LifecycleCtx(mctx, lc), host, time.Duration(mdnsInterval)*time.Second, discovery.ServiceTag)
|
||||
if err != nil {
|
||||
log.Errorw("mdns error", "error", err)
|
||||
return nil
|
||||
}
|
||||
service.RegisterNotifee(handler)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
@ -11,6 +11,7 @@ import (
|
||||
"github.com/filecoin-project/lotus/chain"
|
||||
"github.com/filecoin-project/lotus/chain/blocksync"
|
||||
"github.com/filecoin-project/lotus/chain/deals"
|
||||
"github.com/filecoin-project/lotus/chain/messagepool"
|
||||
"github.com/filecoin-project/lotus/chain/sub"
|
||||
"github.com/filecoin-project/lotus/node/hello"
|
||||
"github.com/filecoin-project/lotus/node/modules/helpers"
|
||||
@ -53,7 +54,7 @@ func HandleIncomingBlocks(mctx helpers.MetricsCtx, lc fx.Lifecycle, pubsub *pubs
|
||||
go sub.HandleIncomingBlocks(ctx, blocksub, s)
|
||||
}
|
||||
|
||||
func HandleIncomingMessages(mctx helpers.MetricsCtx, lc fx.Lifecycle, pubsub *pubsub.PubSub, mpool *chain.MessagePool) {
|
||||
func HandleIncomingMessages(mctx helpers.MetricsCtx, lc fx.Lifecycle, pubsub *pubsub.PubSub, mpool *messagepool.MessagePool) {
|
||||
ctx := helpers.LifecycleCtx(mctx, lc)
|
||||
|
||||
msgsub, err := pubsub.Subscribe("/fil/messages")
|
||||
|
@ -66,7 +66,7 @@ func SectorBuilderConfig(storagePath string, threads uint) func(dtypes.MetadataD
|
||||
}
|
||||
|
||||
cache := filepath.Join(sp, "cache")
|
||||
metadata := filepath.Join(sp, "meta")
|
||||
unsealed := filepath.Join(sp, "unsealed")
|
||||
sealed := filepath.Join(sp, "sealed")
|
||||
staging := filepath.Join(sp, "staging")
|
||||
|
||||
@ -76,7 +76,7 @@ func SectorBuilderConfig(storagePath string, threads uint) func(dtypes.MetadataD
|
||||
WorkerThreads: uint8(threads),
|
||||
|
||||
CacheDir: cache,
|
||||
MetadataDir: metadata,
|
||||
UnsealedDir: unsealed,
|
||||
SealedDir: sealed,
|
||||
StagedDir: staging,
|
||||
}
|
||||
|
@ -9,16 +9,15 @@ import (
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/ipfs/go-blockservice"
|
||||
"github.com/ipfs/go-car"
|
||||
"github.com/ipfs/go-cid"
|
||||
offline "github.com/ipfs/go-ipfs-exchange-offline"
|
||||
logging "github.com/ipfs/go-log"
|
||||
"github.com/ipfs/go-merkledag"
|
||||
peer "github.com/libp2p/go-libp2p-peer"
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
"github.com/mitchellh/go-homedir"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/gen"
|
||||
@ -69,16 +68,16 @@ func MakeGenesisMem(out io.Writer, gmc *gen.GenMinerCfg) func(bs dtypes.ChainBlo
|
||||
}
|
||||
}
|
||||
|
||||
func MakeGenesis(outFile, preseal string) func(bs dtypes.ChainBlockstore, w *wallet.Wallet) modules.Genesis {
|
||||
func MakeGenesis(outFile, presealInfo string) func(bs dtypes.ChainBlockstore, w *wallet.Wallet) modules.Genesis {
|
||||
return func(bs dtypes.ChainBlockstore, w *wallet.Wallet) modules.Genesis {
|
||||
return func() (*types.BlockHeader, error) {
|
||||
glog.Warn("Generating new random genesis block, note that this SHOULD NOT happen unless you are setting up new network")
|
||||
preseal, err := homedir.Expand(preseal)
|
||||
presealInfo, err := homedir.Expand(presealInfo)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("expanding preseals json path: %w", err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
fdata, err := ioutil.ReadFile(preseal)
|
||||
fdata, err := ioutil.ReadFile(presealInfo)
|
||||
if err != nil {
|
||||
return nil, xerrors.Errorf("reading preseals json: %w", err)
|
||||
}
|
||||
|
@ -4,13 +4,17 @@ import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"io/ioutil"
|
||||
"net/http/httptest"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/libp2p/go-libp2p-core/crypto"
|
||||
|
||||
"github.com/ipfs/go-datastore"
|
||||
badger "github.com/ipfs/go-ds-badger"
|
||||
logging "github.com/ipfs/go-log"
|
||||
"github.com/libp2p/go-libp2p-core/peer"
|
||||
mocknet "github.com/libp2p/go-libp2p/p2p/net/mock"
|
||||
"github.com/stretchr/testify/require"
|
||||
@ -25,13 +29,21 @@ import (
|
||||
"github.com/filecoin-project/lotus/cmd/lotus-seed/seed"
|
||||
"github.com/filecoin-project/lotus/genesis"
|
||||
"github.com/filecoin-project/lotus/lib/jsonrpc"
|
||||
"github.com/filecoin-project/lotus/lib/sectorbuilder"
|
||||
"github.com/filecoin-project/lotus/miner"
|
||||
"github.com/filecoin-project/lotus/node"
|
||||
"github.com/filecoin-project/lotus/node/impl"
|
||||
"github.com/filecoin-project/lotus/node/modules"
|
||||
modtest "github.com/filecoin-project/lotus/node/modules/testing"
|
||||
"github.com/filecoin-project/lotus/node/repo"
|
||||
)
|
||||
|
||||
func init() {
|
||||
_ = logging.SetLogLevel("*", "INFO")
|
||||
|
||||
build.SectorSizes = []uint64{1024}
|
||||
}
|
||||
|
||||
func testStorageNode(ctx context.Context, t *testing.T, waddr address.Address, act address.Address, pk crypto.PrivKey, tnd test.TestNode, mn mocknet.Mocknet) test.TestStorageNode {
|
||||
r := repo.NewMemory(nil)
|
||||
|
||||
@ -80,6 +92,7 @@ func testStorageNode(ctx context.Context, t *testing.T, waddr address.Address, a
|
||||
// start node
|
||||
var minerapi api.StorageMiner
|
||||
|
||||
mineBlock := make(chan struct{})
|
||||
// TODO: use stop
|
||||
_, err = node.New(ctx,
|
||||
node.StorageMiner(&minerapi),
|
||||
@ -90,6 +103,7 @@ func testStorageNode(ctx context.Context, t *testing.T, waddr address.Address, a
|
||||
node.MockHost(mn),
|
||||
|
||||
node.Override(new(api.FullNode), tnd),
|
||||
node.Override(new(*miner.Miner), miner.NewTestMiner(mineBlock, act)),
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to construct node: %v", err)
|
||||
@ -101,8 +115,16 @@ func testStorageNode(ctx context.Context, t *testing.T, waddr address.Address, a
|
||||
|
||||
err = minerapi.NetConnect(ctx, remoteAddrs)
|
||||
require.NoError(t, err)*/
|
||||
mineOne := func(ctx context.Context) error {
|
||||
select {
|
||||
case mineBlock <- struct{}{}:
|
||||
return nil
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
}
|
||||
}
|
||||
|
||||
return test.TestStorageNode{minerapi}
|
||||
return test.TestStorageNode{StorageMiner: minerapi, MineOne: mineOne}
|
||||
}
|
||||
|
||||
func builder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test.TestStorageNode) {
|
||||
@ -129,6 +151,8 @@ func builder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test.Te
|
||||
PeerIDs: []peer.ID{minerPid}, // TODO: if we have more miners, need more peer IDs
|
||||
PreSeals: map[string]genesis.GenesisMiner{},
|
||||
}
|
||||
|
||||
var presealDirs []string
|
||||
for i := 0; i < len(storage); i++ {
|
||||
maddr, err := address.NewIDAddress(300 + uint64(i))
|
||||
if err != nil {
|
||||
@ -142,6 +166,8 @@ func builder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test.Te
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
presealDirs = append(presealDirs, tdir)
|
||||
gmc.MinerAddrs = append(gmc.MinerAddrs, maddr)
|
||||
gmc.PreSeals[maddr.String()] = *genm
|
||||
}
|
||||
@ -156,8 +182,6 @@ func builder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test.Te
|
||||
genesis = node.Override(new(modules.Genesis), modules.LoadGenesis(genbuf.Bytes()))
|
||||
}
|
||||
|
||||
mineBlock := make(chan struct{})
|
||||
|
||||
var err error
|
||||
// TODO: Don't ignore stop
|
||||
_, err = node.New(ctx,
|
||||
@ -167,22 +191,12 @@ func builder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test.Te
|
||||
node.MockHost(mn),
|
||||
node.Test(),
|
||||
|
||||
node.Override(new(*miner.Miner), miner.NewTestMiner(mineBlock)),
|
||||
|
||||
genesis,
|
||||
)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
fulls[i].MineOne = func(ctx context.Context) error {
|
||||
select {
|
||||
case mineBlock <- struct{}{}:
|
||||
return nil
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for i, full := range storage {
|
||||
@ -196,13 +210,36 @@ func builder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test.Te
|
||||
|
||||
f := fulls[full]
|
||||
|
||||
wa, err := f.WalletDefaultAddress(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
genMiner, err := address.NewFromString("t0102")
|
||||
require.NoError(t, err)
|
||||
genMiner := gmc.MinerAddrs[i]
|
||||
wa := gmc.PreSeals[genMiner.String()].Worker
|
||||
|
||||
storers[i] = testStorageNode(ctx, t, wa, genMiner, pk, f, mn)
|
||||
|
||||
sma := storers[i].StorageMiner.(*impl.StorageMinerAPI)
|
||||
|
||||
psd := presealDirs[i]
|
||||
mds, err := badger.NewDatastore(filepath.Join(psd, "badger"), nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
osb, err := sectorbuilder.New(§orbuilder.Config{
|
||||
SectorSize: 1024,
|
||||
WorkerThreads: 2,
|
||||
Miner: genMiner,
|
||||
CacheDir: filepath.Join(psd, "cache"),
|
||||
StagedDir: filepath.Join(psd, "staging"),
|
||||
SealedDir: filepath.Join(psd, "sealed"),
|
||||
UnsealedDir: filepath.Join(psd, "unsealed"),
|
||||
}, mds)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
if err := sma.SectorBuilder.ImportFrom(osb); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
if err := mn.LinkAll(); err != nil {
|
||||
@ -231,7 +268,6 @@ func rpcBuilder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
fulls[i].MineOne = a.MineOne
|
||||
}
|
||||
|
||||
for i, a := range storaApis {
|
||||
@ -244,6 +280,7 @@ func rpcBuilder(t *testing.T, nFull int, storage []int) ([]test.TestNode, []test
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
storers[i].MineOne = a.MineOne
|
||||
}
|
||||
|
||||
return fulls, storers
|
||||
|
@ -47,6 +47,6 @@ type LockedRepo interface {
|
||||
// KeyStore returns store of private keys for Filecoin transactions
|
||||
KeyStore() (types.KeyStore, error)
|
||||
|
||||
// Path returns absolute path of the repo (or empty string if in-memory)
|
||||
// Path returns absolute path of the repo
|
||||
Path() string
|
||||
}
|
||||
|
@ -5,6 +5,7 @@ import (
|
||||
"fmt"
|
||||
|
||||
"github.com/ipfs/go-cid"
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/filecoin-project/lotus/chain/actors"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
@ -37,7 +38,7 @@ func (pm *Manager) createPaych(ctx context.Context, from, to address.Address, am
|
||||
|
||||
smsg, err := pm.mpool.MpoolPushMessage(ctx, msg)
|
||||
if err != nil {
|
||||
return address.Undef, cid.Undef, err
|
||||
return address.Undef, cid.Undef, xerrors.Errorf("initializing paych actor: %w", err)
|
||||
}
|
||||
|
||||
mcid := smsg.Cid()
|
||||
@ -46,7 +47,7 @@ func (pm *Manager) createPaych(ctx context.Context, from, to address.Address, am
|
||||
// (tricky because we need to setup channel tracking before we know it's address)
|
||||
mwait, err := pm.state.StateWaitMsg(ctx, mcid)
|
||||
if err != nil {
|
||||
return address.Undef, cid.Undef, err
|
||||
return address.Undef, cid.Undef, xerrors.Errorf("wait msg: %w", err)
|
||||
}
|
||||
|
||||
if mwait.Receipt.ExitCode != 0 {
|
||||
@ -60,11 +61,11 @@ func (pm *Manager) createPaych(ctx context.Context, from, to address.Address, am
|
||||
|
||||
ci, err := pm.loadOutboundChannelInfo(ctx, paychaddr)
|
||||
if err != nil {
|
||||
return address.Undef, cid.Undef, err
|
||||
return address.Undef, cid.Undef, xerrors.Errorf("loading channel info: %w", err)
|
||||
}
|
||||
|
||||
if err := pm.store.trackChannel(ci); err != nil {
|
||||
return address.Undef, cid.Undef, err
|
||||
return address.Undef, cid.Undef, xerrors.Errorf("tracking channel: %w", err)
|
||||
}
|
||||
|
||||
return paychaddr, mcid, nil
|
||||
@ -108,7 +109,7 @@ func (pm *Manager) GetPaych(ctx context.Context, from, to address.Address, ensur
|
||||
return ci.Control == from && ci.Target == to
|
||||
})
|
||||
if err != nil {
|
||||
return address.Undef, cid.Undef, err
|
||||
return address.Undef, cid.Undef, xerrors.Errorf("findChan: %w", err)
|
||||
}
|
||||
if ch != address.Undef {
|
||||
// TODO: Track available funds
|
||||
|
@ -41,7 +41,7 @@ func NewMiner(sblks *sectorblocks.SectorBlocks, full api.FullNode) *Miner {
|
||||
}
|
||||
|
||||
func writeErr(stream network.Stream, err error) {
|
||||
log.Errorf("Retrieval deal error: %s", err)
|
||||
log.Errorf("Retrieval deal error: %+v", err)
|
||||
_ = cborutil.WriteCborRPC(stream, &DealResponse{
|
||||
Status: Error,
|
||||
Message: err.Error(),
|
||||
|
@ -155,7 +155,7 @@ func (t *Piece) MarshalCBOR(w io.Writer) error {
|
||||
_, err := w.Write(cbg.CborNull)
|
||||
return err
|
||||
}
|
||||
if _, err := w.Write([]byte{132}); err != nil {
|
||||
if _, err := w.Write([]byte{131}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@ -164,14 +164,6 @@ func (t *Piece) MarshalCBOR(w io.Writer) error {
|
||||
return err
|
||||
}
|
||||
|
||||
// t.t.Ref (string) (string)
|
||||
if _, err := w.Write(cbg.CborEncodeMajorType(cbg.MajTextString, uint64(len(t.Ref)))); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := w.Write([]byte(t.Ref)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// t.t.Size (uint64) (uint64)
|
||||
if _, err := w.Write(cbg.CborEncodeMajorType(cbg.MajUnsignedInt, uint64(t.Size))); err != nil {
|
||||
return err
|
||||
@ -198,7 +190,7 @@ func (t *Piece) UnmarshalCBOR(r io.Reader) error {
|
||||
return fmt.Errorf("cbor input should be of type array")
|
||||
}
|
||||
|
||||
if extra != 4 {
|
||||
if extra != 3 {
|
||||
return fmt.Errorf("cbor input had wrong number of fields")
|
||||
}
|
||||
|
||||
@ -212,16 +204,6 @@ func (t *Piece) UnmarshalCBOR(r io.Reader) error {
|
||||
return fmt.Errorf("wrong type for uint64 field")
|
||||
}
|
||||
t.DealID = uint64(extra)
|
||||
// t.t.Ref (string) (string)
|
||||
|
||||
{
|
||||
sval, err := cbg.ReadString(br)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
t.Ref = string(sval)
|
||||
}
|
||||
// t.t.Size (uint64) (uint64)
|
||||
|
||||
maj, extra, err = cbg.CborReadHeader(br)
|
||||
|
@ -3,7 +3,6 @@ package storage
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"math/rand"
|
||||
@ -95,7 +94,6 @@ func (m *Miner) storeGarbage(ctx context.Context, sectorID uint64, existingPiece
|
||||
out := make([]Piece, len(sizes))
|
||||
|
||||
for i, size := range sizes {
|
||||
name := fmt.Sprintf("fake-file-%d", rand.Intn(100000000))
|
||||
ppi, err := m.sb.AddPiece(size, sectorID, io.LimitReader(rand.New(rand.NewSource(42)), int64(size)), existingPieceSizes)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@ -105,7 +103,6 @@ func (m *Miner) storeGarbage(ctx context.Context, sectorID uint64, existingPiece
|
||||
|
||||
out[i] = Piece{
|
||||
DealID: resp.DealIDs[i],
|
||||
Ref: name,
|
||||
Size: ppi.Size,
|
||||
CommP: ppi.CommP[:],
|
||||
}
|
||||
@ -134,7 +131,7 @@ func (m *Miner) StoreGarbageData() error {
|
||||
return
|
||||
}
|
||||
|
||||
if err := m.newSector(context.TODO(), sid, pieces[0].DealID, pieces[0].Ref, pieces[0].ppi()); err != nil {
|
||||
if err := m.newSector(context.TODO(), sid, pieces[0].DealID, pieces[0].ppi()); err != nil {
|
||||
log.Errorf("%+v", err)
|
||||
return
|
||||
}
|
||||
|
@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
"errors"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ipfs/go-cid"
|
||||
"github.com/ipfs/go-datastore"
|
||||
@ -13,6 +14,7 @@ import (
|
||||
"golang.org/x/xerrors"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/build"
|
||||
"github.com/filecoin-project/lotus/chain/address"
|
||||
"github.com/filecoin-project/lotus/chain/events"
|
||||
"github.com/filecoin-project/lotus/chain/gen"
|
||||
@ -142,24 +144,40 @@ func (m *Miner) runPreflightChecks(ctx context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
type sectorBuilderEpp struct {
|
||||
type SectorBuilderEpp struct {
|
||||
sb *sectorbuilder.SectorBuilder
|
||||
}
|
||||
|
||||
func NewElectionPoStProver(sb *sectorbuilder.SectorBuilder) *sectorBuilderEpp {
|
||||
return §orBuilderEpp{sb}
|
||||
func NewElectionPoStProver(sb *sectorbuilder.SectorBuilder) *SectorBuilderEpp {
|
||||
return &SectorBuilderEpp{sb}
|
||||
}
|
||||
|
||||
var _ gen.ElectionPoStProver = (*sectorBuilderEpp)(nil)
|
||||
var _ gen.ElectionPoStProver = (*SectorBuilderEpp)(nil)
|
||||
|
||||
func (epp *sectorBuilderEpp) GenerateCandidates(ctx context.Context, ssi sectorbuilder.SortedPublicSectorInfo, rand []byte) ([]sectorbuilder.EPostCandidate, error) {
|
||||
func (epp *SectorBuilderEpp) GenerateCandidates(ctx context.Context, ssi sectorbuilder.SortedPublicSectorInfo, rand []byte) ([]sectorbuilder.EPostCandidate, error) {
|
||||
start := time.Now()
|
||||
var faults []uint64 // TODO
|
||||
|
||||
var randbuf [32]byte
|
||||
copy(randbuf[:], rand)
|
||||
return epp.sb.GenerateEPostCandidates(ssi, randbuf, faults)
|
||||
cds, err := epp.sb.GenerateEPostCandidates(ssi, randbuf, faults)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Infof("Generate candidates took %s", time.Since(start))
|
||||
return cds, nil
|
||||
}
|
||||
|
||||
func (epp *sectorBuilderEpp) ComputeProof(ctx context.Context, ssi sectorbuilder.SortedPublicSectorInfo, rand []byte, winners []sectorbuilder.EPostCandidate) ([]byte, error) {
|
||||
return epp.sb.ComputeElectionPoSt(ssi, rand, winners)
|
||||
func (epp *SectorBuilderEpp) ComputeProof(ctx context.Context, ssi sectorbuilder.SortedPublicSectorInfo, rand []byte, winners []sectorbuilder.EPostCandidate) ([]byte, error) {
|
||||
if build.InsecurePoStValidation {
|
||||
log.Warn("Generating fake EPost proof! You should only see this while running tests!")
|
||||
return []byte("valid proof"), nil
|
||||
}
|
||||
start := time.Now()
|
||||
proof, err := epp.sb.ComputeElectionPoSt(ssi, rand, winners)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
log.Infof("ComputeElectionPost took %s", time.Since(start))
|
||||
return proof, nil
|
||||
}
|
||||
|
@ -8,6 +8,7 @@ import (
|
||||
xerrors "golang.org/x/xerrors"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/lib/padreader"
|
||||
"github.com/filecoin-project/lotus/lib/sectorbuilder"
|
||||
)
|
||||
|
||||
@ -37,7 +38,6 @@ func (t *SealSeed) SB() sectorbuilder.SealSeed {
|
||||
|
||||
type Piece struct {
|
||||
DealID uint64
|
||||
Ref string
|
||||
|
||||
Size uint64
|
||||
CommP []byte
|
||||
@ -97,14 +97,6 @@ func (t *SectorInfo) deals() []uint64 {
|
||||
return out
|
||||
}
|
||||
|
||||
func (t *SectorInfo) refs() []string {
|
||||
out := make([]string, len(t.Pieces))
|
||||
for i, piece := range t.Pieces {
|
||||
out[i] = piece.Ref
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func (t *SectorInfo) existingPieces() []uint64 {
|
||||
out := make([]uint64, len(t.Pieces))
|
||||
for i, piece := range t.Pieces {
|
||||
@ -150,7 +142,8 @@ func (m *Miner) sectorStateLoop(ctx context.Context) error {
|
||||
// verify on-chain state
|
||||
trackedByID := map[uint64]*SectorInfo{}
|
||||
for _, si := range trackedSectors {
|
||||
trackedByID[si.SectorID] = &si
|
||||
i := si
|
||||
trackedByID[si.SectorID] = &i
|
||||
}
|
||||
|
||||
curTs, err := m.api.ChainHead(ctx)
|
||||
@ -265,30 +258,38 @@ func (m *Miner) failSector(id uint64, err error) {
|
||||
log.Errorf("sector %d error: %+v", id, err)
|
||||
}
|
||||
|
||||
func (m *Miner) SealPiece(ctx context.Context, ref string, size uint64, r io.Reader, dealID uint64) (uint64, error) {
|
||||
log.Infof("Seal piece for deal %d", dealID)
|
||||
func (m *Miner) AllocatePiece(size uint64) (sectorID uint64, offset uint64, err error) {
|
||||
if padreader.PaddedSize(size) != size {
|
||||
return 0, 0, xerrors.Errorf("cannot allocate unpadded piece")
|
||||
}
|
||||
|
||||
sid, err := m.sb.AcquireSectorId() // TODO: Put more than one thing in a sector
|
||||
if err != nil {
|
||||
return 0, xerrors.Errorf("acquiring sector ID: %w", err)
|
||||
return 0, 0, xerrors.Errorf("acquiring sector ID: %w", err)
|
||||
}
|
||||
|
||||
ppi, err := m.sb.AddPiece(size, sid, r, []uint64{})
|
||||
if err != nil {
|
||||
return 0, xerrors.Errorf("adding piece to sector: %w", err)
|
||||
}
|
||||
|
||||
return sid, m.newSector(ctx, sid, dealID, ref, ppi)
|
||||
// offset hard-coded to 0 since we only put one thing in a sector for now
|
||||
return sid, 0, nil
|
||||
}
|
||||
|
||||
func (m *Miner) newSector(ctx context.Context, sid uint64, dealID uint64, ref string, ppi sectorbuilder.PublicPieceInfo) error {
|
||||
func (m *Miner) SealPiece(ctx context.Context, size uint64, r io.Reader, sectorID uint64, dealID uint64) error {
|
||||
log.Infof("Seal piece for deal %d", dealID)
|
||||
|
||||
ppi, err := m.sb.AddPiece(size, sectorID, r, []uint64{})
|
||||
if err != nil {
|
||||
return xerrors.Errorf("adding piece to sector: %w", err)
|
||||
}
|
||||
|
||||
return m.newSector(ctx, sectorID, dealID, ppi)
|
||||
}
|
||||
|
||||
func (m *Miner) newSector(ctx context.Context, sid uint64, dealID uint64, ppi sectorbuilder.PublicPieceInfo) error {
|
||||
si := &SectorInfo{
|
||||
SectorID: sid,
|
||||
|
||||
Pieces: []Piece{
|
||||
{
|
||||
DealID: dealID,
|
||||
Ref: ref,
|
||||
|
||||
Size: ppi.Size,
|
||||
CommP: ppi.CommP[:],
|
||||
|
@ -39,31 +39,24 @@ var ErrNotFound = errors.New("not found")
|
||||
|
||||
type SectorBlocks struct {
|
||||
*storage.Miner
|
||||
sb *sectorbuilder.SectorBuilder
|
||||
|
||||
intermediate blockstore.Blockstore // holds intermediate nodes TODO: consider combining with the staging blockstore
|
||||
|
||||
unsealed *unsealedBlocks
|
||||
keys datastore.Batching
|
||||
keyLk sync.Mutex
|
||||
keys datastore.Batching
|
||||
keyLk sync.Mutex
|
||||
}
|
||||
|
||||
func NewSectorBlocks(miner *storage.Miner, ds dtypes.MetadataDS, sb *sectorbuilder.SectorBuilder) *SectorBlocks {
|
||||
sbc := &SectorBlocks{
|
||||
Miner: miner,
|
||||
sb: sb,
|
||||
|
||||
intermediate: blockstore.NewBlockstore(namespace.Wrap(ds, imBlocksPrefix)),
|
||||
|
||||
keys: namespace.Wrap(ds, dsPrefix),
|
||||
}
|
||||
|
||||
unsealed := &unsealedBlocks{ // TODO: untangle this
|
||||
sb: sb,
|
||||
|
||||
unsealed: map[string][]byte{},
|
||||
unsealing: map[string]chan struct{}{},
|
||||
}
|
||||
|
||||
sbc.unsealed = unsealed
|
||||
return sbc
|
||||
}
|
||||
|
||||
@ -77,14 +70,13 @@ type UnixfsReader interface {
|
||||
|
||||
type refStorer struct {
|
||||
blockReader UnixfsReader
|
||||
writeRef func(cid cid.Cid, pieceRef string, offset uint64, size uint64) error
|
||||
writeRef func(cid cid.Cid, offset uint64, size uint64) error
|
||||
intermediate blockstore.Blockstore
|
||||
|
||||
pieceRef string
|
||||
remaining []byte
|
||||
}
|
||||
|
||||
func (st *SectorBlocks) writeRef(cid cid.Cid, pieceRef string, offset uint64, size uint64) error {
|
||||
func (st *SectorBlocks) writeRef(cid cid.Cid, sectorID uint64, offset uint64, size uint64) error {
|
||||
st.keyLk.Lock() // TODO: make this multithreaded
|
||||
defer st.keyLk.Unlock()
|
||||
|
||||
@ -104,9 +96,9 @@ func (st *SectorBlocks) writeRef(cid cid.Cid, pieceRef string, offset uint64, si
|
||||
}
|
||||
|
||||
refs.Refs = append(refs.Refs, api.SealedRef{
|
||||
Piece: pieceRef,
|
||||
Offset: offset,
|
||||
Size: size,
|
||||
SectorID: sectorID,
|
||||
Offset: offset,
|
||||
Size: size,
|
||||
})
|
||||
|
||||
newRef, err := cborutil.Dump(&refs)
|
||||
@ -147,7 +139,7 @@ func (r *refStorer) Read(p []byte) (n int, err error) {
|
||||
continue
|
||||
}
|
||||
|
||||
if err := r.writeRef(nd.Cid(), r.pieceRef, offset, uint64(len(data))); err != nil {
|
||||
if err := r.writeRef(nd.Cid(), offset, uint64(len(data))); err != nil {
|
||||
return 0, xerrors.Errorf("writing ref: %w", err)
|
||||
}
|
||||
|
||||
@ -160,22 +152,30 @@ func (r *refStorer) Read(p []byte) (n int, err error) {
|
||||
}
|
||||
}
|
||||
|
||||
func (st *SectorBlocks) AddUnixfsPiece(ctx context.Context, ref cid.Cid, r UnixfsReader, dealID uint64) (sectorID uint64, err error) {
|
||||
func (st *SectorBlocks) AddUnixfsPiece(ctx context.Context, r UnixfsReader, dealID uint64) (sectorID uint64, err error) {
|
||||
size, err := r.Size()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
sectorID, pieceOffset, err := st.Miner.AllocatePiece(padreader.PaddedSize(uint64(size)))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
refst := &refStorer{
|
||||
blockReader: r,
|
||||
pieceRef: string(SerializationUnixfs0) + ref.String(),
|
||||
writeRef: st.writeRef,
|
||||
blockReader: r,
|
||||
writeRef: func(cid cid.Cid, offset uint64, size uint64) error {
|
||||
offset += pieceOffset
|
||||
|
||||
return st.writeRef(cid, sectorID, offset, size)
|
||||
},
|
||||
intermediate: st.intermediate,
|
||||
}
|
||||
|
||||
pr, psize := padreader.New(r, uint64(size))
|
||||
pr, psize := padreader.New(refst, uint64(size))
|
||||
|
||||
return st.Miner.SealPiece(ctx, refst.pieceRef, psize, pr, dealID)
|
||||
return sectorID, st.Miner.SealPiece(ctx, psize, pr, sectorID, dealID)
|
||||
}
|
||||
|
||||
func (st *SectorBlocks) List() (map[cid.Cid][]api.SealedRef, error) {
|
||||
|
@ -2,12 +2,17 @@ package sectorblocks
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io/ioutil"
|
||||
|
||||
blocks "github.com/ipfs/go-block-format"
|
||||
"github.com/ipfs/go-cid"
|
||||
blockstore "github.com/ipfs/go-ipfs-blockstore"
|
||||
logging "github.com/ipfs/go-log"
|
||||
"golang.org/x/xerrors"
|
||||
)
|
||||
|
||||
var log = logging.Logger("sectorblocks")
|
||||
|
||||
type SectorBlockStore struct {
|
||||
intermediate blockstore.Blockstore
|
||||
sectorBlocks *SectorBlocks
|
||||
@ -67,12 +72,41 @@ func (s *SectorBlockStore) Get(c cid.Cid) (blocks.Block, error) {
		return nil, blockstore.ErrNotFound
	}

	data, err := s.sectorBlocks.unsealed.getRef(context.TODO(), refs, s.approveUnseal)
	best := refs[0] // TODO: better strategy (e.g. look for already unsealed)

	si, err := s.sectorBlocks.Miner.GetSectorInfo(best.SectorID)
	if err != nil {
		return nil, err
		return nil, xerrors.Errorf("getting sector info: %w", err)
	}

	return blocks.NewBlockWithCid(data, c)
	log.Infof("reading block %s from sector %d(+%d;%d)", c, best.SectorID, best.Offset, best.Size)

	r, err := s.sectorBlocks.sb.ReadPieceFromSealedSector(
		best.SectorID,
		best.Offset,
		best.Size,
		si.Ticket.TicketBytes,
		si.CommD,
	)
	if err != nil {
		return nil, xerrors.Errorf("unsealing block: %w", err)
	}
	defer r.Close()

	data, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, xerrors.Errorf("reading block data: %w", err)
	}
	if uint64(len(data)) != best.Size {
		return nil, xerrors.Errorf("got wrong amount of data: %d != %d", len(data), best.Size)
	}

	b, err := blocks.NewBlockWithCid(data, c)
	if err != nil {
		return nil, xerrors.Errorf("sbs get (%d[%d:%d]): %w", best.SectorID, best.Offset, best.Offset+best.Size, err)
	}

	return b, nil
}

var _ blockstore.Blockstore = &SectorBlockStore{}
|
@ -1,99 +0,0 @@
|
||||
package sectorblocks
|
||||
|
||||
import (
|
||||
"context"
|
||||
"sync"
|
||||
|
||||
logging "github.com/ipfs/go-log"
|
||||
|
||||
"github.com/filecoin-project/lotus/api"
|
||||
"github.com/filecoin-project/lotus/lib/sectorbuilder"
|
||||
)
|
||||
|
||||
var log = logging.Logger("sectorblocks")
|
||||
|
||||
type unsealedBlocks struct {
|
||||
lk sync.Mutex
|
||||
sb *sectorbuilder.SectorBuilder
|
||||
|
||||
// TODO: Treat this as some sort of cache, one with rather aggressive GC
|
||||
// TODO: This REALLY, REALLY needs to be on-disk
|
||||
unsealed map[string][]byte
|
||||
|
||||
unsealing map[string]chan struct{}
|
||||
}
|
||||
|
||||
func (ub *unsealedBlocks) getRef(ctx context.Context, refs []api.SealedRef, approveUnseal func() error) ([]byte, error) {
|
||||
var best api.SealedRef
|
||||
|
||||
ub.lk.Lock()
|
||||
for _, ref := range refs {
|
||||
b, ok := ub.unsealed[ref.Piece]
|
||||
if ok {
|
||||
ub.lk.Unlock()
|
||||
return b[ref.Offset : ref.Offset+uint64(ref.Size)], nil
|
||||
}
|
||||
// TODO: pick unsealing based on how long it's running (or just select all relevant, usually it'll be just one)
|
||||
_, ok = ub.unsealing[ref.Piece]
|
||||
if ok {
|
||||
best = ref
|
||||
break
|
||||
}
|
||||
best = ref
|
||||
}
|
||||
ub.lk.Unlock()
|
||||
|
||||
b, err := ub.maybeUnseal(ctx, best.Piece, approveUnseal)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return b[best.Offset : best.Offset+uint64(best.Size)], nil
|
||||
}
|
||||
|
||||
func (ub *unsealedBlocks) maybeUnseal(ctx context.Context, pieceKey string, approveUnseal func() error) ([]byte, error) {
|
||||
ub.lk.Lock()
|
||||
defer ub.lk.Unlock()
|
||||
|
||||
out, ok := ub.unsealed[pieceKey]
|
||||
if ok {
|
||||
return out, nil
|
||||
}
|
||||
|
||||
wait, ok := ub.unsealing[pieceKey]
|
||||
if ok {
|
||||
ub.lk.Unlock()
|
||||
select {
|
||||
case <-wait:
|
||||
ub.lk.Lock()
|
||||
// TODO: make sure this is not racy with gc when it's implemented
|
||||
return ub.unsealed[pieceKey], nil
|
||||
case <-ctx.Done():
|
||||
ub.lk.Lock()
|
||||
return nil, ctx.Err()
|
||||
}
|
||||
}
|
||||
|
||||
// TODO: doing this under a lock is suboptimal.. but simpler
|
||||
err := approveUnseal()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ub.unsealing[pieceKey] = make(chan struct{})
|
||||
ub.lk.Unlock()
|
||||
|
||||
log.Infof("Unsealing piece '%s'", pieceKey)
|
||||
data, err := ub.sb.ReadPieceFromSealedSector(pieceKey)
|
||||
ub.lk.Lock()
|
||||
|
||||
if err != nil {
|
||||
// TODO: tell subs
|
||||
log.Error(err)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
ub.unsealed[pieceKey] = data
|
||||
close(ub.unsealing[pieceKey])
|
||||
return data, nil
|
||||
}
|