lotus/chain/deals/client_utils.go
hannahhoward 61e2568b8d feat(datatransfer): implement and extract
feat(datatransfer): setup implementation path

Sets up a path to implementation, offering both the dagservice implementation and a future graphsync
implementation; establishes message interfaces and the network layer, and isolates the datatransfer module
from the app
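
As a rough illustration of the interfaces this establishes, here is a minimal sketch inferred from the ClientRequestValidator implemented at the bottom of this file (not a verbatim copy of the module's own definition; the Voucher type is the module's, and the method shapes are taken from the implementation below):

	// Sketch of the request validator interface the datatransfer module exposes.
	// Assumes go-cid, go-libp2p-core/peer and go-ipld-prime for the parameter types.
	type RequestValidator interface {
		ValidatePush(sender peer.ID, voucher Voucher, baseCid cid.Cid, selector ipld.Node) error
		ValidatePull(receiver peer.ID, voucher Voucher, baseCid cid.Cid, selector ipld.Node) error
	}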

WIP using CBOR encoding for dataxfermsg

* Bring cbor-gen stuff into datatransfer package
* make transferRequest private struct
* add transferResponse + funcs
* Rename VoucherID to VoucherType
* more tests passing

WIP trying out some stuff
* Embed request/response in message so all the interfaces work AND the CBOR unmarshaling works: this is more like the spec anyway (see the sketch after this list)
* get rid of pb stuff
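
A minimal sketch of the embedding pattern described above, with hypothetical field names: one envelope struct wraps either a request or a response, so a single cbor-gen type handles the (un)marshaling while still satisfying both message interfaces.

	// Hypothetical envelope: exactly one of Request/Response is set, and IsRq
	// says which, so the CBOR decoder works on one concrete type.
	type transferMessage struct {
		IsRq     bool
		Request  *transferRequest
		Response *transferResponse
	}

	// Interface methods can then delegate to whichever side is populated.
	func (tm *transferMessage) IsRequest() bool { return tm.IsRq }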

all message tests passing, some others in datatransfer

Some cleanup for PR

Cleanup for PR, clarifying and additional comments

mod tidy

Respond to PR comments:
* Make DataTransferRequest/Response be returned from Net
* Regenerate cbor_gen and fix the generator caller so it works better
* Please the linters

Fix tests

Initiate push and pull requests (#536)

* add issue link for data TransferID generation
* comment out failing but not relevant tests
* finish voucher rename from Identifier --> Type

tests passing

cleanup for PR

remove unused fmt import in graphsync_test

a better reflection

send data transfer response

other tests passing

feat(datatransfer): milestone 2 infrastructure

Set up the test path for all tickets for milestone 2

responses alert subscribers when request is not accepted (#607)

Graphsync response is scheduled when a valid push request is received (#625)

fix(datatransfer): fix tests

fix an error with read buffers in tests

fix(deps): fix go.sum

Feat/dt graphsync pullreqs (#627)

* graphsync responses to pull requests

Feat/dt initiator cleanup (#645)

* ChannelID.To --> ChannelID.Initiator
* We now store our peer ID (from host.ID()) so it can be used when creating ChannelIDs (see the sketch after this list).
* InProgressChannels returns all of impl.channels, currently just for testing
* Implements go-data-transfer issue
* Some assertions were changed based on the above.
* Renamed some variables and added some assertions based on the new understanding
* Updated SHA for graphsync module
* Updated fakeGraphSync test structs to use new interfaces from new SHA above
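
A hedged sketch of what the renamed type might look like after this change (field names assumed from the bullets above):

	// ChannelID after the rename: a channel is keyed by the peer that
	// initiated the transfer plus a transfer ID, rather than by a "To" peer.
	type ChannelID struct {
		Initiator peer.ID
		ID        TransferID
	}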

Techdebt/dt split graphsync impl receiver (#651)

* Split up graphsyncImpl and graphsyncReceiver
* rename graphsync to utils

DTM sends data over graphsync for validated push requests (#665)

* create channels when a request is received. register push request hook with graphsync. fix tests.
* better NewReaders
* use a mutex lock around impl.channels access (see the sketch below)
* fix(datatransfer): fix test uncertainty
* fix a data race and also don't use random bytes in basic block which can fail
* privatize 3 funcs

with @hannahhoward
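
A hedged sketch of the locking described above, with hypothetical struct and field names (ChannelID, ChannelState and the sync import are assumed):

	// All access to impl.channels takes channelsLk first; InProgressChannels
	// copies the map under the lock rather than exposing it directly.
	type graphsyncImpl struct {
		channelsLk sync.Mutex
		channels   map[ChannelID]ChannelState
	}

	func (impl *graphsyncImpl) InProgressChannels() map[ChannelID]ChannelState {
		impl.channelsLk.Lock()
		defer impl.channelsLk.Unlock()
		out := make(map[ChannelID]ChannelState, len(impl.channels))
		for id, state := range impl.channels {
			out[id] = state
		}
		return out
	}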

Feat/dt gs pullrequests (#693)

* Implements DTM Sends Data Over Graphsync For Validated Pull Requests
* rename a field in a test struct
* refactor a couple of private functions (one was refactored out of existence)

Feat/dt subscribe, file Xfer round trip (#720)

Implements the rest of Subscriber Is Notified When Request Completed #24:
* send a graphsync message within a go func and consume responses until error or transfer is complete.
* notify subscribers of results.
* Rename datatransfer.Event to EventCode.
* datatransfer.Event is now a struct that includes a message and a timestamp as well as the Event.Code int (formerly the whole Event); update all uses (see the sketch after this list)
* Add extension data to graphsync request hook, gsReq
* rename sendRequest to sendDtRequest, to distinguish it from sendGsRequest, where Dt = datatransfer, Gs = graphsync
* use a mutex lock for last transfer ID
* obey the linter
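
A sketch of the reworked event type described above (the Timestamp field name and the standard time package are assumptions):

	// Event carries a message and timestamp alongside the code; the code alone
	// was formerly the entire Event.
	type Event struct {
		Code      EventCode
		Message   string
		Timestamp time.Time
	}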

Don't respond with error in gsReqRcdHook when we can't find the datatransfer extension. (#754)

* update to correct graphsync version, update tests & code to call the new graphsync hooks
* getExtensionData returns an empty struct + nil if we can't find our extension (see the sketch after this list)
* Don't respond with error when we can't find the extension.
* Test for same
* mod tidy
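
A hedged sketch of the hook behavior described above, with hypothetical helper and type names; graphsync's RequestData.Extension returns the raw extension bytes plus a found flag:

	// getExtensionData returns a zero-value struct and a nil error when the
	// datatransfer extension is absent, so the request hook can simply ignore
	// non-datatransfer graphsync requests instead of responding with an error.
	func getExtensionData(req graphsync.RequestData) (*ExtensionDataTransferData, error) {
		data, ok := req.Extension(ExtensionDataTransfer)
		if !ok {
			return &ExtensionDataTransferData{}, nil
		}
		var extData ExtensionDataTransferData
		if err := extData.UnmarshalCBOR(bytes.NewReader(data)); err != nil {
			return nil, err
		}
		return &extData, nil
	}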

minor fix to go.sum

feat(datatransfer): switch to graphsync implementation

Move over to real graphsync implementation of data transfer, add constructors for graphsync
instances on client and miner side

fix(datatransfer): Fix validators

Validators were checking the payload CID against CommP, which are no longer the same. Added a
PayloadCid field to the client deal to maintain the record, and fixed the validator logic
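
Roughly, the shape of the fix (field placement assumed; the PayloadCid check itself appears in ValidatePull below):

	// ClientDeal now records the payload CID separately from the piece
	// commitment, so validators compare the transfer's base CID against the
	// payload root rather than against CommP.
	type ClientDeal struct {
		ProposalCid cid.Cid
		PayloadCid  cid.Cid // new: root CID of the data being transferred
		// ... remaining fields unchanged ...
	}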

Feat/dt extraction use go-fil-components/datatransfer (#770)

* Initial commit after changing to go-fil-components/datatransfer
* blow away the datatransfer dir
* use go-fil-components master after its PR #1 was merged
* go mod tidy

use a package

updates after rebase with master

package deals

import (
	"context"
	"runtime"

	sectorbuilder "github.com/filecoin-project/go-sectorbuilder"
	"github.com/ipfs/go-cid"
	files "github.com/ipfs/go-ipfs-files"
	unixfile "github.com/ipfs/go-unixfs/file"
	"github.com/ipld/go-ipld-prime"
	"github.com/libp2p/go-libp2p-core/peer"
	"golang.org/x/xerrors"

	cborutil "github.com/filecoin-project/go-cbor-util"
	"github.com/filecoin-project/go-fil-components/datatransfer"
	"github.com/filecoin-project/go-statestore"

	"github.com/filecoin-project/lotus/lib/padreader"
	"github.com/filecoin-project/lotus/node/modules/dtypes"
)

// failDeal marks the deal identified by id as failed, resetting and dropping
// any open connection to the miner.
func (c *Client) failDeal(id cid.Cid, cerr error) {
	if cerr == nil {
		_, f, l, _ := runtime.Caller(1)
		cerr = xerrors.Errorf("unknown error (fail called at %s:%d)", f, l)
	}

	s, ok := c.conns[id]
	if ok {
		_ = s.Reset()
		delete(c.conns, id)
	}

	// TODO: store in some sort of audit log
	log.Errorf("deal %s failed: %+v", id, cerr)
}

// commP computes the piece commitment (CommP) and padded piece size for the
// unixfs file rooted at data.
func (c *Client) commP(ctx context.Context, data cid.Cid) ([]byte, uint64, error) {
	root, err := c.dag.Get(ctx, data)
	if err != nil {
		log.Errorf("failed to get file root for deal: %s", err)
		return nil, 0, err
	}

	n, err := unixfile.NewUnixfsFile(ctx, c.dag, root)
	if err != nil {
		log.Errorf("cannot open unixfs file: %s", err)
		return nil, 0, err
	}

	uf, ok := n.(files.File)
	if !ok {
		// TODO: we probably got a directory; how should we handle this in unixfs mode?
		return nil, 0, xerrors.New("unsupported unixfs type")
	}

	s, err := uf.Size()
	if err != nil {
		return nil, 0, err
	}

	pr, psize := padreader.New(uf, uint64(s))

	commp, err := sectorbuilder.GeneratePieceCommitment(pr, psize)
	if err != nil {
		return nil, 0, xerrors.Errorf("generating CommP: %w", err)
	}

	return commp[:], psize, nil
}

// readStorageDealResp reads and verifies the miner's signed response to a
// storage deal proposal over the open deal stream.
func (c *Client) readStorageDealResp(deal ClientDeal) (*Response, error) {
	s, ok := c.conns[deal.ProposalCid]
	if !ok {
		// TODO: Try to re-establish the connection using query protocol
		return nil, xerrors.Errorf("no connection to miner")
	}

	var resp SignedResponse
	if err := cborutil.ReadCborRPC(s, &resp); err != nil {
		log.Errorw("failed to read Response message", "error", err)
		return nil, err
	}

	if err := resp.Verify(deal.MinerWorker); err != nil {
		return nil, xerrors.Errorf("verifying response signature failed: %w", err)
	}

	if resp.Response.Proposal != deal.ProposalCid {
		return nil, xerrors.Errorf("miner responded to a wrong proposal: %s != %s", resp.Response.Proposal, deal.ProposalCid)
	}

	return &resp.Response, nil
}

// disconnect closes and forgets the connection associated with the given deal,
// if one exists.
func (c *Client) disconnect(deal ClientDeal) error {
	s, ok := c.conns[deal.ProposalCid]
	if !ok {
		return nil
	}

	err := s.Close()
	delete(c.conns, deal.ProposalCid)
	return err
}

var _ datatransfer.RequestValidator = &ClientRequestValidator{}

// ClientRequestValidator validates data transfer requests for the client
// in a storage market
type ClientRequestValidator struct {
	deals *statestore.StateStore
}

// NewClientRequestValidator returns a new client request validator for the
// given deal store
func NewClientRequestValidator(deals dtypes.ClientDealStore) *ClientRequestValidator {
	return &ClientRequestValidator{
		deals: deals,
	}
}
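
// A hedged usage sketch (not part of this file): the validator is expected to
// be registered with a data transfer manager for the storage voucher type,
// roughly as below, assuming a manager dtm exposing the go-fil-components
// RegisterVoucherType(reflect.Type, RequestValidator) API:
//
//	err := dtm.RegisterVoucherType(reflect.TypeOf(&StorageDataTransferVoucher{}), NewClientRequestValidator(deals))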

// ValidatePush validates a push request received from the peer that will send data.
// It always errors because clients should not accept push requests from a provider
// in a storage deal (i.e. send data to the client).
func (c *ClientRequestValidator) ValidatePush(
	sender peer.ID,
	voucher datatransfer.Voucher,
	baseCid cid.Cid,
	selector ipld.Node) error {
	return ErrNoPushAccepted
}

// ValidatePull validates a pull request received from the peer that will receive data.
// It succeeds only if:
// - the voucher has the correct type
// - the voucher references an active deal
// - the referenced deal matches the receiver (miner)
// - the referenced deal matches the given base CID
// - the referenced deal is in an acceptable state
func (c *ClientRequestValidator) ValidatePull(
	receiver peer.ID,
	voucher datatransfer.Voucher,
	baseCid cid.Cid,
	selector ipld.Node) error {
	dealVoucher, ok := voucher.(*StorageDataTransferVoucher)
	if !ok {
		return xerrors.Errorf("voucher type %s: %w", voucher.Type(), ErrWrongVoucherType)
	}

	var deal ClientDeal
	err := c.deals.Get(dealVoucher.Proposal, &deal)
	if err != nil {
		return xerrors.Errorf("Proposal CID %s: %w", dealVoucher.Proposal.String(), ErrNoDeal)
	}
	if deal.Miner != receiver {
		return xerrors.Errorf("Deal Peer %s, Data Transfer Peer %s: %w", deal.Miner.String(), receiver.String(), ErrWrongPeer)
	}
	if !deal.PayloadCid.Equals(baseCid) {
		return xerrors.Errorf("Deal Payload CID %s, Data Transfer CID %s: %w", deal.PayloadCid.String(), baseCid.String(), ErrWrongPiece)
	}
	for _, state := range DataTransferStates {
		if deal.State == state {
			return nil
		}
	}
	return xerrors.Errorf("Deal State %s: %w", deal.State, ErrInacceptableDealState)
}