Commit Graph

14 Commits

Łukasz Magiera
d2e9d21031 Gather graphsync metrics on provider side as well 2021-10-19 19:45:25 +02:00
Łukasz Magiera
32a855b984 Fix lint 2021-10-19 19:22:32 +02:00
Łukasz Magiera
9a993d25d0 Collect and expose graphsync metrics 2021-10-19 19:20:00 +02:00
Rod Vagg
43f7fd5e10 traversals: limit maximum number of DAG links to traverse
Impacts CommP and graphsync transfers
2021-10-05 10:47:49 +02:00
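
A cap like this is typically enforced through go-ipld-prime's traversal budget, which both graphsync walks and CommP generation can drive. A minimal sketch assuming that mechanism (the helper name and budget values are illustrative, not Lotus's actual code):

```go
// A minimal sketch, assuming go-ipld-prime's traversal.Budget. The helper
// name walkWithLinkLimit and the node allowance are illustrative.
package main

import (
	"context"

	"github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/datamodel"
	"github.com/ipld/go-ipld-prime/linking"
	"github.com/ipld/go-ipld-prime/node/basicnode"
	"github.com/ipld/go-ipld-prime/traversal"
	"github.com/ipld/go-ipld-prime/traversal/selector"
)

// walkWithLinkLimit walks root under sel, erroring once more than maxLinks
// links have been loaded.
func walkWithLinkLimit(ctx context.Context, lsys ipld.LinkSystem, root datamodel.Node, sel selector.Selector, maxLinks int64) error {
	prog := traversal.Progress{
		Cfg: &traversal.Config{
			Ctx:        ctx,
			LinkSystem: lsys,
			LinkTargetNodePrototypeChooser: func(datamodel.Link, linking.LinkContext) (datamodel.NodePrototype, error) {
				return basicnode.Prototype.Any, nil
			},
			Budget: &traversal.Budget{
				NodeBudget: maxLinks * 1024, // generous node allowance
				LinkBudget: maxLinks,        // hard cap on links traversed
			},
		},
	}
	return prog.WalkMatching(root, sel, func(traversal.Progress, datamodel.Node) error {
		return nil // visit hook; real code would do work per matched node
	})
}
```
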
hannahhoward
368d72ebfe
feat(graphsync): update to v0.10.0-rc1
also add config changes
2021-10-05 14:13:58 +11:00
hannahhoward
b4e9bc50f8 feat(deps): update go-graphsync v0.8.0
Update to go-graphsync v0.8.0 with go-ipld-prime linksystem branch & trusted store.
2021-09-03 16:21:55 +02:00
Aarsh Shah
d7076778e2
integrate DAG store and CARv2 in deal-making ()
This commit removes badger from the deal-making processes, and
moves to a new architecture with the dagstore as the central
component on the miner side, and CARv2s on the client side.

Every deal that has been handed off to the sealing subsystem becomes
a shard in the dagstore. Shards are mounted via the LotusMount, which
teaches the dagstore how to load the related piece when serving
retrievals.

When the miner starts Lotus for the first time with this patch,
we perform a one-time migration of all active deals into the
dagstore. This is a lightweight process: it consists simply
of registering the shards in the dagstore.
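
A rough sketch of what registering one deal's shard could look like, based on the filecoin-project/dagstore API (signatures approximate; migrateDeal and the option values are illustrative):

```go
// Hedged sketch: register one active deal as a dagstore shard keyed by its
// piece CID, then wait for the async result. Signatures follow the
// filecoin-project/dagstore API; migrateDeal itself is illustrative.
package main

import (
	"context"

	"github.com/filecoin-project/dagstore"
	"github.com/filecoin-project/dagstore/mount"
	"github.com/filecoin-project/dagstore/shard"
	"github.com/ipfs/go-cid"
)

func migrateDeal(ctx context.Context, ds *dagstore.DAGStore, pieceCid cid.Cid, mnt mount.Mount) error {
	res := make(chan dagstore.ShardResult, 1)
	key := shard.KeyFromCID(pieceCid)
	if err := ds.RegisterShard(ctx, key, mnt, res, dagstore.RegisterOpts{}); err != nil {
		return err
	}
	return (<-res).Error // registration completes asynchronously
}
```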

Shards are backed by the unsealed copy of the piece. This is currently
a CARv1. However, the dagstore keeps CARv2 indices for all pieces, so
when it's time to acquire a shard to serve a retrieval, the unsealed
CARv1 is joined with its index (safeguarded by the dagstore) to form
a read-only blockstore, taking the place of the monolithic badger.
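
A hedged sketch of that join, using go-car/v2's blockstore and index packages (the paths and helper name are placeholders):

```go
// Hedged sketch: join an unsealed CARv1 with the CARv2 index the dagstore
// safeguards for it, yielding a random-access read-only blockstore.
// openShardBlockstore and the two paths are placeholders.
package main

import (
	"os"

	"github.com/ipld/go-car/v2/blockstore"
	"github.com/ipld/go-car/v2/index"
)

func openShardBlockstore(carV1Path, indexPath string) (*blockstore.ReadOnly, error) {
	carFile, err := os.Open(carV1Path) // unsealed CARv1 copy of the piece
	if err != nil {
		return nil, err
	}
	idxFile, err := os.Open(indexPath) // index kept by the dagstore
	if err != nil {
		carFile.Close()
		return nil, err
	}
	defer idxFile.Close()

	idx, err := index.ReadFrom(idxFile)
	if err != nil {
		carFile.Close()
		return nil, err
	}
	// The blockstore keeps reading from carFile, so it stays open here.
	return blockstore.NewReadOnly(carFile, idx)
}
```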

Data transfers have been adjusted to interface directly with CARv2 files.
On inbound transfers (client retrievals, miner storage deals), we stream
the received data into a CARv2 ReadWrite blockstore. On outbound transfers
(client storage deals, miner retrievals), we serve the data off a CARv2
ReadOnly blockstore.
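
A minimal sketch of both directions, using go-car/v2's blockstore package (paths and the root CID are placeholders):

```go
// A minimal sketch of both transfer directions using go-car/v2.
// The paths and root CID are placeholders.
package main

import (
	"github.com/ipfs/go-cid"
	"github.com/ipld/go-car/v2/blockstore"
)

// inboundStore receives blocks from the wire; call Finalize() on it once
// the transfer completes to flatten the index into the CARv2 file.
func inboundStore(path string, root cid.Cid) (*blockstore.ReadWrite, error) {
	return blockstore.OpenReadWrite(path, []cid.Cid{root})
}

// outboundStore serves reads directly off a finalized CARv2 file.
func outboundStore(path string) (*blockstore.ReadOnly, error) {
	return blockstore.OpenReadOnly(path)
}
```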

Client-side imports are managed by the refactored *imports.Manager
component (when not using IPFS integration). Just as before, we use
the go-filestore library to avoid duplicating the data from the original
file in the resulting UnixFS DAG (concretely the leaves). However, the
target of those imports are what we call "ref-CARv2s": CARv2 files placed
under the `$LOTUS_PATH/imports` directory, containing the intermediate
nodes in full, and the leaves as positional references to the original file
on disk.

Client-side retrievals are placed into CARv2 files in the location:
`$LOTUS_PATH/retrievals`.

A new set of `Dagstore*` JSON-RPC operations and `lotus-miner dagstore`
subcommands have been introduced on the miner-side to inspect and manage
the dagstore.

Despite moving to a CARv2-backed system, the IPFS integration has been
respected, and it continues to be possible to make storage deals with data
held in an IPFS node, and to perform retrievals directly into an IPFS node.

NOTE: because the "staging" and "client" Badger blockstores are no longer
used, existing imports on the client will be rendered useless. On startup,
Lotus will enumerate all imports and print WARN statements in the log for
each import that needs to be reimported. These log lines contain these
messages:

- import lacks carv2 path; import will not work; please reimport
- import has missing/broken carv2; please reimport

At the end, we will print a "sanity check completed" message indicating
the count of imports found, and how many were deemed broken.

Co-authored-by: Aarsh Shah <aarshkshah1992@gmail.com>
Co-authored-by: Dirk McCormick <dirkmdev@gmail.com>
Co-authored-by: Raúl Kripalani <raul@protocol.ai>
2021-08-16 23:34:32 +01:00
Raúl Kripalani
3795cc2bd2 segregate chain and state blockstores.
This paves the way for better object lifetime management.

Concretely, it makes it possible to:
- have different stores backing chain and state data.
- use the same datastore library, but with different parameters.
- attach different caching layers/policies to each class of data, e.g.
  sizing caches differently.
- specify different retention policies for chain and state data.

This separation is important because:
- access patterns and frequency differ between chain and state data.
- state is derivable from chain, so the chain store can never be expunged,
  while the state store can be pruned down to the state objects reachable
  from the last finality.
2021-02-28 22:49:44 +00:00
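
An illustrative sketch of such a split, assuming go-ds-badger2 and go-ipfs-blockstore (hypothetical wiring, not Lotus's actual code):

```go
// Illustrative wiring only, not Lotus's actual code: two independent badger
// datastores so parameters and lifetimes can differ per class of data.
package main

import (
	badger "github.com/ipfs/go-ds-badger2"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
)

type splitStores struct {
	chain blockstore.Blockstore // must be retained in full
	state blockstore.Blockstore // prunable to objects reachable from last finality
}

func openSplitStores(chainPath, statePath string) (*splitStores, error) {
	chainOpts := badger.DefaultOptions // same library...
	stateOpts := badger.DefaultOptions // ...independently tunable parameters
	// e.g. stateOpts could be given a larger block cache than chainOpts.

	chainDS, err := badger.NewDatastore(chainPath, &chainOpts)
	if err != nil {
		return nil, err
	}
	stateDS, err := badger.NewDatastore(statePath, &stateOpts)
	if err != nil {
		return nil, err
	}
	return &splitStores{
		chain: blockstore.NewBlockstore(chainDS),
		state: blockstore.NewBlockstore(stateDS),
	}, nil
}
```
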
Łukasz Magiera
2e544e3e6a configure SimultaneousTransfers in node/builder 2020-11-25 19:57:38 +01:00
hannahhoward
694834e8d5 feat(graphsync): configure simultaneous requests
allow configuring how many requests graphsync will process simultaneously
2020-11-24 14:32:30 -08:00
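
A minimal sketch of the knob, assuming the post-linksystem go-graphsync constructor (v0.8+) and the MaxInProgressRequests option of this era (option names shifted across graphsync versions, so treat this as approximate):

```go
// Approximate sketch: cap concurrent graphsync requests with a single
// option. Assumes the post-linksystem constructor (go-graphsync v0.8+);
// newGraphsync and the parameter plumbing are illustrative.
package main

import (
	"context"

	graphsync "github.com/ipfs/go-graphsync"
	gsimpl "github.com/ipfs/go-graphsync/impl"
	gsnet "github.com/ipfs/go-graphsync/network"
	"github.com/ipld/go-ipld-prime"
	"github.com/libp2p/go-libp2p-core/host"
)

func newGraphsync(ctx context.Context, h host.Host, lsys ipld.LinkSystem, simultaneousTransfers uint64) graphsync.GraphExchange {
	net := gsnet.NewFromLibp2pHost(h)
	return gsimpl.New(ctx, net, lsys,
		// the value plumbed from the SimultaneousTransfers config field
		gsimpl.MaxInProgressRequests(simultaneousTransfers),
	)
}
```
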
Steven Allen
5733c71c50 Lint everything
We were ignoring quite a few error cases, and had one case where we weren't
actually updating state when we intended to. Unfortunately, if the linter doesn't
pass, nobody has any reason to actually check lint failures in CI.

There are three remaining XXXs marked in the code for lint.
2020-08-20 20:46:36 -07:00
hannahhoward
00cd89750d feat(deps): update fil-markets, graphsync
Updates dependencies for graphsync, fil-markets, and data-transfer. Moves to the new graphsync
blockstore-swapping capabilities, and locks down the graphsync impl so it does not accept arbitrary requests
2020-04-07 23:25:29 -07:00
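
With the lockdown, incoming requests are rejected unless a registered hook explicitly validates them. A hedged sketch of such a hook (the extension name is made up for illustration):

```go
// Hedged sketch: with the locked-down impl, an incoming request is only
// serviced if a hook calls ValidateRequest. "myapp/auth" is a made-up
// extension name for illustration.
package main

import (
	graphsync "github.com/ipfs/go-graphsync"
	"github.com/libp2p/go-libp2p-core/peer"
)

func lockDown(gs graphsync.GraphExchange) {
	gs.RegisterIncomingRequestHook(func(p peer.ID, req graphsync.RequestData, ha graphsync.IncomingRequestHookActions) {
		if _, ok := req.Extension(graphsync.ExtensionName("myapp/auth")); ok {
			ha.ValidateRequest() // explicitly accept this request
		}
		// requests not validated here are rejected by default
	})
}
```
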
hannahhoward
34f755b2b9 feat(chainsync): fixes to make graphsync work for chain fetching
- store to chain blockstore (ok for now, since the storage provider is a separate process)
- simplify request fetching and processing
2020-03-20 21:30:24 -07:00
hannahhoward
f259bc6a09 feat(graphsync): unified graphsync instance
setup a single graphsync that loads from both the chainstore & client blockstore
2020-03-17 17:25:12 -07:00
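
An illustrative read path for such a unified instance (not Lotus's actual wiring): consult the chain blockstore first, then fall back to the client blockstore, using the pre-context go-ipfs-blockstore API of this era:

```go
// Illustrative only: one read path over both stores, using the pre-context
// go-ipfs-blockstore API of this era. unifiedGet is a made-up helper.
package main

import (
	blocks "github.com/ipfs/go-block-format"
	"github.com/ipfs/go-cid"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
)

func unifiedGet(chainBS, clientBS blockstore.Blockstore, c cid.Cid) (blocks.Block, error) {
	if blk, err := chainBS.Get(c); err == nil {
		return blk, nil // found in the chain blockstore
	}
	return clientBS.Get(c) // fall back to the client blockstore
}
```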