Commit Graph

94 Commits

Author SHA1 Message Date
Alfonso de la Rocha
075216d9da Merge remote-tracking branch 'upstream/master' into adlrocha/cns-iface-master 2022-12-05 18:16:14 +01:00
Ian Davis
9f85d3dca7 Address simple linter issues 2022-11-24 16:32:27 +00:00
Alfonso de la Rocha
0f92bced9d Merge branch 'master' into adlrocha/cns-iface-master 2022-11-22 10:28:18 +01:00
Shrenuj Bansal
2fa21ff091 Merge branch 'master' into sbansal/nonce-coordination-and-consensus-for-chain-nodes 2022-11-11 14:41:38 -05:00
Łukasz Magiera
2c89b3240f retrieval: Support retrievals into remote stores 2022-11-08 09:37:34 +00:00
raulk
653af01235 Eth JSON-RPC API: implement eth_getCode and eth_getStorageAt (#9397) 2022-10-13 16:41:35 +02:00
Geoff Stuart
e2d5d12e7f Add accessors for allocations and claims maps 2022-10-07 16:41:59 -04:00
Geoff Stuart
f55dc46a32 Add api for getting allocation 2022-10-06 11:06:21 -04:00
Geoff Stuart
b4c04ad927 update markets 2022-10-06 11:06:21 -04:00
Shrenuj Bansal
f89a682d98 Add Mpool ref to raft state and rearrange some APIs 2022-09-29 10:56:57 +00:00
Shrenuj Bansal
559c2c6d34 Merge branch 'master' into sbansal/nonce-coordination-and-consensus-for-chain-nodes 2022-09-27 16:29:03 +00:00
Shrenuj Bansal
99e7c322eb More wip 2022-09-27 16:08:04 +00:00
Aayush
32670e810c chore: refactor: rename NewestNetworkVersion 2022-09-21 15:48:51 -04:00
Łukasz Magiera
859c2606f0 sealing: Address review 2022-09-19 12:13:06 +02:00
Łukasz Magiera
fbb487ae2b sector import: Plumbing for DownloadSectorData in the sealing system 2022-09-19 12:13:05 +02:00
Łukasz Magiera
29135aa77c sector import: Initial api scaffolding 2022-09-19 12:13:03 +02:00
Łukasz Magiera
45d1bd61ce Merge pull request #9183 from filecoin-project/feat/sectornum-mgmt
feat: sealing: Use bitfields to manage sector numbers
2022-08-26 10:59:24 -04:00
Łukasz Magiera
2086b219d2 Don't use go-libp2p-core 2022-08-25 14:20:41 -04:00
Łukasz Magiera
ef2080a800 cli for managing sector reservations 2022-08-22 16:55:41 -04:00
Łukasz Magiera
8cff52aef6 Storage detach/attach in lotus-miner, cli commands 2022-08-01 15:58:06 +02:00
Jennifer Wang
c3f3eb0812 Merge branch 'releases' into jen/masterbp 2022-06-27 15:13:12 -04:00
Geoff Stuart
e684248f48 Added api call to get actors cids 2022-06-23 14:07:23 -04:00
Łukasz Magiera
05cdeb80c3 chore: remove redundant import prefixes 2022-06-15 12:06:22 +02:00
Łukasz Magiera
a9600b8a6f storage: Move extern/sector-storage to storage/sealer 2022-06-14 20:03:38 +02:00
Łukasz Magiera
98a48a47f8 storage: Move extern/storage-sealing to storage/pipeline 2022-06-14 19:41:59 +02:00
Łukasz Magiera
e65fae28de chore: fix imports 2022-06-14 17:00:51 +02:00
Geoff Stuart
b7010c9e60 Implement function to migrate actors with only code changes 2022-06-10 15:52:32 -04:00
Geoff Stuart
5c0f2c8ae6 Add putObj and putMany to apiBlockstore 2022-06-09 15:13:42 -04:00
Aayush
b28c11a57d Merge branch 'feat/nv16' 2022-06-03 14:01:49 -04:00
Łukasz Magiera
032e598962 feat: gateway: OpenRPC support 2022-05-27 17:03:56 +02:00
vyzo
b29a182da7 fix docgen 2022-04-27 17:57:04 +03:00
Łukasz Magiera
135aef78d7 Merge remote-tracking branch 'origin/master' into feat/post-worker 2022-03-11 17:04:58 +01:00
hannahhoward
49742f8fdc feat(deps): update to graphsync v0.13.0 with 2.0 protocol 2022-03-09 18:06:35 +00:00
Łukasz Magiera
e476cf7968 Merge remote-tracking branch 'origin/master' into feat/post-worker 2022-01-20 13:15:48 +01:00
vyzo
39bf59d372 add examples to docgen 2022-01-20 11:36:11 +02:00
Łukasz Magiera
b38141601c Untangle ffi from api 2022-01-18 11:57:04 +01:00
Łukasz Magiera
4a874eff70 post workers: Cleanup, tests 2022-01-14 14:17:52 +01:00
Łukasz Magiera
e216aefd23 fix make gen 2022-01-10 18:24:00 +01:00
zl
4172a3c8b7 ExampleValue for a slice is nil 2022-01-04 14:27:10 +08:00
hannahhoward
cddf63efe9 feat(storageminer): add api for transfer diagnostics
Add API + CLI for inspecting in depth diagnostics on graphsync transfers with a given peer
2021-12-22 13:41:29 -08:00
Łukasz Magiera
727765b248 Command to list active sector locks 2021-12-03 12:33:23 +01:00
Łukasz Magiera
6d52d8552b Fix docsgen 2021-11-30 02:06:58 +01:00
Clint Armstrong
93e4656a27 Use a float to represent GPU utilization
Before this change, workers could only be allocated one GPU task,
regardless of how much of the GPU's resources that task uses or how
many GPUs are in the system.

This makes GPUUtilization a float that can represent a task needing a
portion of a GPU, or multiple GPUs. GPUs are accounted for like RAM and
CPUs, so workers with more GPUs can be allocated more tasks.

A known issue is that PC2 cannot use multiple GPUs. And even if the
worker has multiple GPUs and is allocated multiple PC2 tasks, those
tasks will only run on the first GPU.

This could result in unexpected behavior when a worker with multiple
GPUs is assigned multiple PC2 tasks. But it should not surprise
existing users who upgrade, since anyone running workers with multiple
GPUs should already know this and be running one worker per GPU for
PC2. Those users now have the freedom to set the GPU utilization of
PC2 to less than one and effectively run multiple PC2 processes in a
single worker.

C2 is capable of utilizing multiple GPUs, and now workers can be
customized for C2 accordingly.
2021-11-30 02:06:58 +01:00
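
Editorial note: a minimal sketch of how fractional GPU accounting of the kind
described in the commit above could be checked by a scheduler. All names
(Resources, WorkerResources, canSchedule) are hypothetical, not the actual
Lotus sealer scheduler API.

  // Hypothetical sketch of fractional GPU accounting; not the Lotus scheduler API.
  package main

  // Resources describes what a task needs. GPUUtilization is a float so a task
  // can claim a fraction of one GPU (e.g. 0.5) or several GPUs (e.g. 2.0).
  type Resources struct {
      MinMemory      uint64  // bytes
      GPUUtilization float64 // fraction of a GPU, or a count of GPUs
  }

  // WorkerResources describes what a worker offers.
  type WorkerResources struct {
      MemPhysical uint64  // bytes of RAM
      GPUs        float64 // total GPU capacity, e.g. 2.0 for two GPUs
  }

  // canSchedule reports whether a task fits on a worker given what is already
  // in use, treating GPUs like RAM/CPU: capacity minus current usage.
  func canSchedule(task Resources, w WorkerResources, memInUse uint64, gpuInUse float64) bool {
      return memInUse+task.MinMemory <= w.MemPhysical &&
          gpuInUse+task.GPUUtilization <= w.GPUs
  }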
Clint Armstrong
c4f46171ae Report memory used and swap used in worker res
Reporting "memory used by other processes" in the MemReserved field
fails to take into account that the system's reported memory usage
already includes memory used by ongoing tasks.

To properly account for this, the worker should report the memory and
swap actually used; the scheduler, which knows each task's memory
requirements, can then determine whether sufficient memory is
available for a task.
2021-11-30 02:06:58 +01:00
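
Editorial note: a minimal sketch (hypothetical names, not the Lotus
worker-resources API) of the accounting described above: the worker reports
raw memory usage, and the scheduler adds back what it already attributes to
tasks it placed on that worker.

  // Hypothetical sketch; not the actual Lotus worker-resources API.
  package main

  type workerMemStats struct {
      MemPhysical uint64 // total RAM on the worker
      MemUsed     uint64 // RAM used by all processes on the host
      SwapTotal   uint64
      SwapUsed    uint64
  }

  // memAvailable estimates the RAM free for new tasks. The host's reported usage
  // includes memory consumed by tasks the scheduler already placed, so that
  // amount is subtracted first to avoid double-counting.
  func memAvailable(s workerMemStats, attributedToTasks uint64) uint64 {
      usedByOthers := uint64(0)
      if s.MemUsed > attributedToTasks {
          usedByOthers = s.MemUsed - attributedToTasks
      }
      if usedByOthers >= s.MemPhysical {
          return 0
      }
      return s.MemPhysical - usedByOthers
  }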
Łukasz Magiera
ec2bfb99bb make gen 2021-11-22 12:46:25 +01:00
Łukasz Magiera
b868769ec8 more retrieval api work 2021-11-22 12:46:02 +01:00
Peter Rabbitson
c4a7de9d37 Remove dead example code + dep 2021-10-07 09:44:50 +02:00
Peter Rabbitson
0444435589 Expose basic text-based datamodel selector on retrieval
The syntax of the selector is documented at
https://pkg.go.dev/github.com/ipld/go-ipld-selector-text-lite#SelectorSpecFromPath

Example use, assuming that:
  - The root of the deal is a plain dag-pb unixfs directory
  - The directory is not sharded
  - The user wants to retrieve the first entry in that directory

  lotus client retrieve --miner f0XXXXX --datamodel-path-selector 'Links/0/Hash' bafyROOTCID ~/output

For a much more elaborate example see the top of ./itests/deals_partial_retrieval_test.go
2021-09-10 09:44:11 +02:00
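
Editorial note: as a companion to the CLI example above, a minimal Go sketch
that builds the same 'Links/0/Hash' selector with the package linked in the
commit. The function name comes from that documentation URL; the exact
argument list shown here is an assumption and may differ between versions of
go-ipld-selector-text-lite.

  // Sketch only: the argument list of SelectorSpecFromPath is assumed, not verified.
  package main

  import (
      "fmt"

      textselector "github.com/ipld/go-ipld-selector-text-lite"
  )

  func main() {
      // Selects the first link's Hash in a plain, non-sharded dag-pb directory,
      // mirroring --datamodel-path-selector 'Links/0/Hash'.
      spec, err := textselector.SelectorSpecFromPath("Links/0/Hash", false, nil)
      if err != nil {
          panic(err)
      }
      fmt.Println(spec.Node()) // the compiled IPLD selector node
  }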
hannahhoward
91804d5746 feat(deps): update go-graphsync v0.9.0 2021-09-03 16:22:51 +02:00
Aarsh Shah
d7076778e2 integrate DAG store and CARv2 in deal-making (#6671)
This commit removes badger from the deal-making processes and moves to
a new architecture with the dagstore as the central component on the
miner side, and CARv2s on the client side.

Every deal that has been handed off to the sealing subsystem becomes
a shard in the dagstore. Shards are mounted via the LotusMount, which
teaches the dagstore how to load the related piece when serving
retrievals.

When the miner starts Lotus for the first time with this patch,
we will perform a one-time migration of all active deals into the
dagstore. This is a lightweight process, and it consists simply
of registering the shards in the dagstore.

Shards are backed by the unsealed copy of the piece. This is currently
a CARv1. However, the dagstore keeps CARv2 indices for all pieces, so
when it's time to acquire a shard to serve a retrieval, the unsealed
CARv1 is joined with its index (safeguarded by the dagstore), to form
a read-only blockstore, thus taking the place of the monolithic
badger.

Data transfers have been adjusted to interface directly with CARv2 files.
On inbound transfers (client retrievals, miner storage deals), we stream
the received data into a CARv2 ReadWrite blockstore. On outbound transfers
(client storage deals, miner retrievals), we serve the data off a CARv2
ReadOnly blockstore.

Client-side imports are managed by the refactored *imports.Manager
component (when not using IPFS integration). Just as before, we use
the go-filestore library to avoid duplicating the data from the original
file in the resulting UnixFS DAG (concretely the leaves). However, the
target of those imports are what we call "ref-CARv2s": CARv2 files placed
under the `$LOTUS_PATH/imports` directory, containing the intermediate
nodes in full, and the leaves as positional references to the original file
on disk.

Client-side retrievals are placed into CARv2 files in the location:
`$LOTUS_PATH/retrievals`.

A new set of `Dagstore*` JSON-RPC operations and `lotus-miner dagstore`
subcommands have been introduced on the miner-side to inspect and manage
the dagstore.

Despite moving to a CARv2-backed system, the IPFS integration has been
respected, and it continues to be possible to make storage deals with data
held in an IPFS node, and to perform retrievals directly into an IPFS node.

NOTE: because the "staging" and "client" Badger blockstores are no longer
used, existing imports on the client will be rendered useless. On startup,
Lotus will enumerate all imports and print WARN statements on the log for
each import that needs to be reimported. These log lines contain these
messages:

- import lacks carv2 path; import will not work; please reimport
- import has missing/broken carv2; please reimport

At the end, we will print a "sanity check completed" message indicating
the count of imports found, and how many were deemed broken.

Co-authored-by: Aarsh Shah <aarshkshah1992@gmail.com>
Co-authored-by: Dirk McCormick <dirkmdev@gmail.com>
Co-authored-by: Raúl Kripalani <raul@protocol.ai>
2021-08-16 23:34:32 +01:00
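
Editorial note: a rough sketch of the one-time migration described above,
enumerating active deals and registering each as a shard keyed by piece CID,
with a mount that knows how to load the unsealed piece. All types here
(Mount, DagStore, Deal, migrateDeals) are hypothetical stand-ins, not the
real dagstore or Lotus APIs.

  // Hypothetical stand-ins; the real dagstore/LotusMount APIs differ.
  package main

  import (
      "context"
      "fmt"

      "github.com/ipfs/go-cid"
  )

  // Mount abstracts how a shard's data is fetched. In Lotus terms, a LotusMount
  // would load the unsealed copy of the piece from the miner's storage subsystem.
  type Mount interface {
      Fetch(ctx context.Context) ([]byte, error)
  }

  // DagStore is a stand-in for the shard registry described in the commit.
  type DagStore interface {
      RegisterShard(ctx context.Context, key string, m Mount) error
  }

  // Deal is a stand-in for a storage deal handed off to the sealing subsystem.
  type Deal struct {
      PieceCID cid.Cid
  }

  // migrateDeals performs the lightweight one-time migration: every active deal
  // becomes a shard, keyed by its piece CID.
  func migrateDeals(ctx context.Context, ds DagStore, deals []Deal, mountFor func(cid.Cid) Mount) error {
      for _, d := range deals {
          if err := ds.RegisterShard(ctx, d.PieceCID.String(), mountFor(d.PieceCID)); err != nil {
              return fmt.Errorf("registering shard for piece %s: %w", d.PieceCID, err)
          }
      }
      return nil
  }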