lotus/chain/deals/request_validation_test.go
hannahhoward 61e2568b8d feat(datatransfer): implement and extract
feat(datatransfer): setup implementation path

Sets up a path to implementation, offering both the dagservice implementation and a future graphsync
implementation, establishes the message interfaces and network layer, and isolates the datatransfer
module from the app

WIP using CBOR encoding for dataxfermsg

* Bring cbor-gen stuff into datatransfer package
* make transferRequest private struct
* add transferResponse + funcs
* Rename VoucherID to VoucherType
* more tests passing

WIP trying out some stuff
* Embed request/response in message so all the interfaces work AND the CBOR unmarshaling works: this is more like the spec anyway
* get rid of pb stuff

all message tests passing, some others in datatransfer

Some cleanup for PR

Cleanup for PR, clarifying and additional comments

mod tidy

Respond to PR comments:
* Make DataTransferRequest/Response be returned from Net
* Regenerate cbor_gen and fix the generator caller so it works better
* Please the linters

Fix tests

Initiate push and pull requests (#536)

* add issue link for data TransferID generation
* comment out failing but not relevant tests
* finish voucher rename from Identifier --> Type

tests passing

cleanup for PR

remove unused fmt import in graphsync_test

a better reflection

send data transfer response

other tests passing

feat(datatransfer): milestone 2 infrastructure

Setup test path for all tickets for milestone 2

responses alert subscribers when request is not accepted (#607)

Graphsync response is scheduled when a valid push request is received (#625)

fix(datatransfer): fix tests

fix an error with read buffers in tests

fix(deps): fix go.sum

Feat/dt graphsync pullreqs (#627)

* graphsync responses to pull requests

Feat/dt initiator cleanup (#645)

* ChannelID.To --> ChannelID.Initiator
* We now store our peer ID (from host.ID()) so it can be used when creating ChannelIDs.
* InProgressChannels returns all of impl.channels, currently just for testing
* Implements go-data-transfer issue
* Some assertions were changed based on the above.
* Renamed some variables and added some assertions based on the new understanding
* Updated SHA for graphsync module
* Updated fakeGraphSync test structs to use new interfaces from new SHA above

Techdebt/dt split graphsync impl receiver (#651)

* Split up graphsyncImpl and graphsyncReceiver
* rename graphsync to utils

DTM sends data over graphsync for validated push requests (#665)

* create channels when a request is received. register push request hook with graphsync. fix tests.
* better NewReaders
* use a mutex lock around impl.channels access (see the sketch below)
* fix(datatransfer): fix test uncertainty
* fix a data race and also don't use random bytes in basic block which can fail
* privatize 3 funcs

with @hannahhoward
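
The channel bookkeeping mentioned above (and the InProgressChannels accessor from the earlier commit) reduces to a map guarded by a mutex. A minimal sketch follows, with illustrative names; channelID, channelState and impl here are stand-ins, not the actual go-data-transfer types.

package datatransfer

import "sync"

// channelID and channelState are illustrative stand-ins for the real types.
type channelID struct {
	Initiator string // peer that opened the channel
	ID        uint64 // transfer ID
}

type channelState struct {
	Sent, Received uint64
}

type impl struct {
	channelsLk sync.RWMutex
	channels   map[channelID]channelState
}

// createChannel registers a channel when a push or pull request is received.
func (m *impl) createChannel(id channelID, cs channelState) {
	m.channelsLk.Lock()
	defer m.channelsLk.Unlock()
	m.channels[id] = cs
}

// inProgressChannels returns a copy of all channels, currently used in tests.
func (m *impl) inProgressChannels() map[channelID]channelState {
	m.channelsLk.RLock()
	defer m.channelsLk.RUnlock()
	out := make(map[channelID]channelState, len(m.channels))
	for id, cs := range m.channels {
		out[id] = cs
	}
	return out
}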

Feat/dt gs pullrequests (#693)

* Implements DTM Sends Data Over Graphsync For Validated Pull Requests
* rename a field in a test struct
* refactor a couple of private functions (one was refactored out of existence)

Feat/dt subscribe, file Xfer round trip (#720)

Implements the rest of Subscriber Is Notified When Request Completed #24:
* send a graphsync message within a go func and consume responses until error or transfer is complete.
* notify subscribers of results.
* Rename datatransfer.Event to EventCode.
* datatransfer.Event is now a struct that includes a message and a timestamp as well as the Event.Code int (formerly the whole of Event); all uses updated (see the sketch after this list)
* Add extension data to graphsync request hook, gsReq
* rename sendRequest to sendDtRequest, to distinguish it from sendGsRequest, where Dt = datatransfer, Gs = graphsync
* use a mutex lock for last transfer ID
* obey the linter
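
A minimal sketch of the reworked event type described in the list above. The field names follow the commit description (code, message, timestamp); the event codes shown are illustrative, not the package's actual list.

package datatransfer

import "time"

// EventCode is the integer code that was formerly the whole Event type.
type EventCode int

// Illustrative codes only; the real set lives in the datatransfer package.
const (
	Open EventCode = iota
	Progress
	Error
	Complete
)

// Event now carries a human-readable message and a timestamp alongside the code,
// so subscribers get context with every notification.
type Event struct {
	Code      EventCode
	Message   string
	Timestamp time.Time
}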

Don't respond with error in gsReqRcdHook when we can't find the datatransfer extension. (#754)

* update to correct graphsync version, update tests & code to call the new graphsync hooks
* getExtensionData returns an empty struct + nil if we can't find our extension
* Don't respond with an error when we can't find the extension (see the sketch after this list)
* Test for same
* mod tidy
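
A minimal sketch of that behaviour, with hypothetical names (the extension name, the extensionData type and the function signature are illustrative): a request without our extension yields a zero value and a nil error, so the graphsync hook simply ignores it instead of responding with an error.

package datatransfer

const extensionName = "fil/data-transfer" // illustrative extension name

// extensionData is a stand-in for the decoded data-transfer extension payload.
type extensionData struct {
	TransferID uint64
	IsPull     bool
}

// getExtensionData returns an empty struct and a nil error when the extension is
// absent; the caller then treats the request as a plain graphsync request.
func getExtensionData(extensions map[string][]byte) (extensionData, bool, error) {
	raw, ok := extensions[extensionName]
	if !ok {
		return extensionData{}, false, nil
	}
	var ed extensionData
	_ = raw // the real code CBOR-decodes raw into ed; elided here
	return ed, true, nil
}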

minor fix to go.sum

feat(datatransfer): switch to graphsync implementation

Move over to the real graphsync implementation of data transfer, and add constructors for graphsync
instances on the client and miner side

fix(datatransfer): Fix validators

Validators were checking the payload CID against the commP, which are no longer the same. Added a
PayloadCid field to the client deal to maintain that record, and fixed the validator logic
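
The resulting check, as pinned down by the test file below, looks roughly like this. The helper name and signature are hypothetical; the real logic lives in ClientRequestValidator.ValidatePull in chain/deals/request_validation.go.

package deals_test

import (
	"github.com/ipfs/go-cid"
	"github.com/libp2p/go-libp2p-core/peer"

	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/deals"
)

// validateClientPull sketches the order of checks the tests below exercise: the deal
// must be tracked, the peer must be the deal's miner, the requested CID must be the
// deal's payload CID (not the commP), and the deal must be in a transferable state.
func validateClientPull(deal deals.ClientDeal, found bool, miner peer.ID, payloadCid cid.Cid) error {
	if !found {
		return deals.ErrNoDeal
	}
	if deal.Miner != miner {
		return deals.ErrWrongPeer
	}
	if !deal.PayloadCid.Equals(payloadCid) {
		return deals.ErrWrongPiece
	}
	if deal.State != api.DealAccepted { // illustrative; the real check accepts a set of states
		return deals.ErrInacceptableDealState
	}
	return nil
}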

Feat/dt extraction use go-fil-components/datatransfer (#770)

* Initial commit after changing to go-fil-components/datatransfer
* blow away the datatransfer dir
* use go-fil-components master after its PR #1 was merged
* go mod tidy

use a package

updates after rebase with master
2020-01-08 14:19:28 +01:00


package deals_test

import (
	"fmt"
	"math/rand"
	"testing"

	"github.com/ipfs/go-cid"
	"github.com/ipfs/go-datastore"
	"github.com/ipfs/go-datastore/namespace"
	dss "github.com/ipfs/go-datastore/sync"
	blocksutil "github.com/ipfs/go-ipfs-blocksutil"
	"github.com/libp2p/go-libp2p-core/peer"
	xerrors "golang.org/x/xerrors"

	"github.com/filecoin-project/go-address"
	"github.com/filecoin-project/go-cbor-util"
	"github.com/filecoin-project/go-statestore"
	"github.com/filecoin-project/lotus/api"
	"github.com/filecoin-project/lotus/chain/actors"
	"github.com/filecoin-project/lotus/chain/deals"
	"github.com/filecoin-project/lotus/chain/types"
)
var blockGenerator = blocksutil.NewBlockGenerator()

// wrongDTType is a voucher of a type the storage deal validators do not recognize,
// used to verify that such requests are rejected outright.
type wrongDTType struct{}

func (wrongDTType) ToBytes() ([]byte, error) {
	return []byte{}, nil
}

func (wrongDTType) FromBytes([]byte) error {
	return fmt.Errorf("not implemented")
}

func (wrongDTType) Type() string {
	return "WrongDTTYPE"
}
// uniqueStorageDealProposal builds a storage deal proposal with freshly generated
// client and provider addresses and a new piece CID.
func uniqueStorageDealProposal() (actors.StorageDealProposal, error) {
	clientAddr, err := address.NewIDAddress(uint64(rand.Int()))
	if err != nil {
		return actors.StorageDealProposal{}, err
	}
	providerAddr, err := address.NewIDAddress(uint64(rand.Int()))
	if err != nil {
		return actors.StorageDealProposal{}, err
	}
	return actors.StorageDealProposal{
		PieceRef: blockGenerator.Next().Cid().Bytes(),
		Client:   clientAddr,
		Provider: providerAddr,
		ProposerSignature: &types.Signature{
			Data: []byte("foo bar cat dog"),
			Type: types.KTBLS,
		},
	}, nil
}
// newClientDeal creates a client-side deal record against the given miner in the
// given state.
func newClientDeal(minerID peer.ID, state api.DealState) (deals.ClientDeal, error) {
	newProposal, err := uniqueStorageDealProposal()
	if err != nil {
		return deals.ClientDeal{}, err
	}
	proposalNd, err := cborutil.AsIpld(&newProposal)
	if err != nil {
		return deals.ClientDeal{}, err
	}
	minerAddr, err := address.NewIDAddress(uint64(rand.Int()))
	if err != nil {
		return deals.ClientDeal{}, err
	}
	return deals.ClientDeal{
		Proposal:    newProposal,
		ProposalCid: proposalNd.Cid(),
		PayloadCid:  blockGenerator.Next().Cid(),
		Miner:       minerID,
		MinerWorker: minerAddr,
		State:       state,
	}, nil
}
// newMinerDeal creates a provider-side deal record for the given client in the
// given state.
func newMinerDeal(clientID peer.ID, state api.DealState) (deals.MinerDeal, error) {
	newProposal, err := uniqueStorageDealProposal()
	if err != nil {
		return deals.MinerDeal{}, err
	}
	proposalNd, err := cborutil.AsIpld(&newProposal)
	if err != nil {
		return deals.MinerDeal{}, err
	}
	ref := blockGenerator.Next().Cid()
	return deals.MinerDeal{
		Proposal:    newProposal,
		ProposalCid: proposalNd.Cid(),
		Client:      clientID,
		State:       state,
		Ref:         ref,
	}, nil
}
func TestClientRequestValidation(t *testing.T) {
	ds := dss.MutexWrap(datastore.NewMapDatastore())
	state := statestore.New(namespace.Wrap(ds, datastore.NewKey("/deals/client")))
	crv := deals.NewClientRequestValidator(state)
	minerID := peer.ID("fakepeerid")
	block := blockGenerator.Next()
	t.Run("ValidatePush fails", func(t *testing.T) {
		if !xerrors.Is(crv.ValidatePush(minerID, wrongDTType{}, block.Cid(), nil), deals.ErrNoPushAccepted) {
			t.Fatal("Push should fail for the client request validator for storage deals")
		}
	})
	t.Run("ValidatePull fails deal not found", func(t *testing.T) {
		proposal, err := uniqueStorageDealProposal()
		if err != nil {
			t.Fatal("error creating proposal")
		}
		proposalNd, err := cborutil.AsIpld(&proposal)
		if err != nil {
			t.Fatal("error serializing proposal")
		}
		pieceRef, err := cid.Cast(proposal.PieceRef)
		if err != nil {
			t.Fatal("unable to construct piece cid")
		}
		if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{proposalNd.Cid(), 1}, pieceRef, nil), deals.ErrNoDeal) {
			t.Fatal("Pull should fail if there is no deal stored")
		}
	})
	t.Run("ValidatePull fails wrong client", func(t *testing.T) {
		otherMiner := peer.ID("otherminer")
		clientDeal, err := newClientDeal(otherMiner, api.DealAccepted)
		if err != nil {
			t.Fatal("error creating client deal")
		}
		if err := state.Begin(clientDeal.ProposalCid, &clientDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		payloadCid := clientDeal.PayloadCid
		if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, payloadCid, nil), deals.ErrWrongPeer) {
			t.Fatal("Pull should fail if miner address is incorrect")
		}
	})
	t.Run("ValidatePull fails wrong piece ref", func(t *testing.T) {
		clientDeal, err := newClientDeal(minerID, api.DealAccepted)
		if err != nil {
			t.Fatal("error creating client deal")
		}
		if err := state.Begin(clientDeal.ProposalCid, &clientDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, blockGenerator.Next().Cid(), nil), deals.ErrWrongPiece) {
			t.Fatal("Pull should fail if piece ref is incorrect")
		}
	})
	t.Run("ValidatePull fails wrong deal state", func(t *testing.T) {
		clientDeal, err := newClientDeal(minerID, api.DealComplete)
		if err != nil {
			t.Fatal("error creating client deal")
		}
		if err := state.Begin(clientDeal.ProposalCid, &clientDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		payloadCid := clientDeal.PayloadCid
		if !xerrors.Is(crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, payloadCid, nil), deals.ErrInacceptableDealState) {
			t.Fatal("Pull should fail if deal is in a state that cannot be data transferred")
		}
	})
	t.Run("ValidatePull succeeds", func(t *testing.T) {
		clientDeal, err := newClientDeal(minerID, api.DealAccepted)
		if err != nil {
			t.Fatal("error creating client deal")
		}
		if err := state.Begin(clientDeal.ProposalCid, &clientDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		payloadCid := clientDeal.PayloadCid
		if crv.ValidatePull(minerID, &deals.StorageDataTransferVoucher{clientDeal.ProposalCid, 1}, payloadCid, nil) != nil {
			t.Fatal("Pull should succeed when all parameters are correct")
		}
	})
}
func TestProviderRequestValidation(t *testing.T) {
	ds := dss.MutexWrap(datastore.NewMapDatastore())
	state := statestore.New(namespace.Wrap(ds, datastore.NewKey("/deals/client")))
	mrv := deals.NewProviderRequestValidator(state)
	clientID := peer.ID("fakepeerid")
	block := blockGenerator.Next()
	t.Run("ValidatePull fails", func(t *testing.T) {
		if !xerrors.Is(mrv.ValidatePull(clientID, wrongDTType{}, block.Cid(), nil), deals.ErrNoPullAccepted) {
			t.Fatal("Pull should fail for the provider request validator for storage deals")
		}
	})
	t.Run("ValidatePush fails deal not found", func(t *testing.T) {
		proposal, err := uniqueStorageDealProposal()
		if err != nil {
			t.Fatal("error creating proposal")
		}
		proposalNd, err := cborutil.AsIpld(&proposal)
		if err != nil {
			t.Fatal("error serializing proposal")
		}
		pieceRef, err := cid.Cast(proposal.PieceRef)
		if err != nil {
			t.Fatal("unable to construct piece cid")
		}
		if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{proposalNd.Cid(), 1}, pieceRef, nil), deals.ErrNoDeal) {
			t.Fatal("Push should fail if there is no deal stored")
		}
	})
	t.Run("ValidatePush fails wrong miner", func(t *testing.T) {
		otherClient := peer.ID("otherclient")
		minerDeal, err := newMinerDeal(otherClient, api.DealAccepted)
		if err != nil {
			t.Fatal("error creating miner deal")
		}
		if err := state.Begin(minerDeal.ProposalCid, &minerDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		ref := minerDeal.Ref
		if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, ref, nil), deals.ErrWrongPeer) {
			t.Fatal("Push should fail if miner address is incorrect")
		}
	})
	t.Run("ValidatePush fails wrong piece ref", func(t *testing.T) {
		minerDeal, err := newMinerDeal(clientID, api.DealAccepted)
		if err != nil {
			t.Fatal("error creating miner deal")
		}
		if err := state.Begin(minerDeal.ProposalCid, &minerDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, blockGenerator.Next().Cid(), nil), deals.ErrWrongPiece) {
			t.Fatal("Push should fail if piece ref is incorrect")
		}
	})
	t.Run("ValidatePush fails wrong deal state", func(t *testing.T) {
		minerDeal, err := newMinerDeal(clientID, api.DealComplete)
		if err != nil {
			t.Fatal("error creating miner deal")
		}
		if err := state.Begin(minerDeal.ProposalCid, &minerDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		ref := minerDeal.Ref
		if !xerrors.Is(mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, ref, nil), deals.ErrInacceptableDealState) {
			t.Fatal("Push should fail if deal is in a state that cannot be data transferred")
		}
	})
	t.Run("ValidatePush succeeds", func(t *testing.T) {
		minerDeal, err := newMinerDeal(clientID, api.DealAccepted)
		if err != nil {
			t.Fatal("error creating miner deal")
		}
		if err := state.Begin(minerDeal.ProposalCid, &minerDeal); err != nil {
			t.Fatal("deal tracking failed")
		}
		ref := minerDeal.Ref
		if mrv.ValidatePush(clientID, &deals.StorageDataTransferVoucher{minerDeal.ProposalCid, 1}, ref, nil) != nil {
			t.Fatal("Push should succeed when all parameters are correct")
		}
	})
}