vyzo
4bed3161f0
fix broken purge count log
2021-07-04 18:38:28 +03:00
vyzo
dec61fa333
deduplicate stack logs and optionally trace write stacks
2021-07-04 18:38:28 +03:00
vyzo
7ebef6d838
better log message
2021-07-04 18:38:28 +03:00
vyzo
40ff5bf164
log put errors in splitstore log
2021-07-04 18:38:28 +03:00
vyzo
9fda61abec
fix error check for unreachable cids
2021-07-04 18:38:28 +03:00
vyzo
4a71c68e06
move code around for better readability
2021-07-04 18:38:28 +03:00
vyzo
31497f4bd3
use internal get during walk to avoid blowing the compaction txn
...
otherwise the walk itself ends up protecting the blocks it visits, which precludes purging them.
2021-07-04 18:38:28 +03:00
vyzo
6af3a23dd4
use a map for txn protection mark set
2021-07-04 18:38:28 +03:00
vyzo
65ccc99e79
minor tweaks in purge
...
- allocate once
- log purge count
2021-07-04 18:38:28 +03:00
vyzo
cb665d07e0
fix transactional race during compaction
...
It is possible for an object to be written or recreated (and checked with Has)
after the mark completes and during the purge; if this happens we will purge
a live block.
2021-07-04 18:38:28 +03:00
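A minimal sketch of the kind of transactional protection the commit above implies: blocks written or Has-checked after the mark phase are recorded, and the purge loop skips anything so protected. All names here are assumptions for illustration, not the actual lotus splitstore code.

```go
package sketch

import "sync"

// txnProtect records blocks touched after marking completes, so the purge
// loop does not delete a block that became live again mid-compaction.
// Illustrative only; not the real splitstore implementation.
type txnProtect struct {
	mu    sync.Mutex
	marks map[string]struct{} // keyed by cid string
}

func newTxnProtect() *txnProtect {
	return &txnProtect{marks: make(map[string]struct{})}
}

// Protect is called on the write (and Has) path while a compaction txn is open.
func (t *txnProtect) Protect(key string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.marks[key] = struct{}{}
}

// IsProtected is consulted by the purge loop before deleting a block.
func (t *txnProtect) IsProtected(key string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	_, ok := t.marks[key]
	return ok
}
```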
vyzo
50ebaf25aa
don't log read misses before warmup
2021-07-04 18:38:28 +03:00
vyzo
b187b5c301
fix lint
2021-07-04 18:38:28 +03:00
vyzo
a53c4e1597
implement debug log
2021-07-04 18:38:28 +03:00
vyzo
fce7b8dc9b
flush move log when cold collection is done
2021-07-04 18:38:28 +03:00
vyzo
fc247e4223
add debug log skeleton
2021-07-04 18:38:28 +03:00
vyzo
0390285c4e
always do full walks, not only when there is a sync gap
2021-07-04 18:38:28 +03:00
vyzo
30dbe4978b
adjust compaction range
2021-07-04 18:38:28 +03:00
vyzo
a21f55919b
CompactionThreshold should be 4 finalities
...
otherwise, once the slack is accounted for, there is too little headroom and we end up in continuous compaction.
2021-07-04 18:38:28 +03:00
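A sketch of the trigger condition implied by the commit above; the 900-epoch Filecoin finality is a known constant, everything else is assumed for illustration.

```go
package sketch

const (
	Finality            = 900 // epochs per Filecoin finality
	CompactionThreshold = 4 * Finality
)

// shouldCompact reports whether the chain head has advanced far enough past
// the splitstore's base epoch for another compaction to be worthwhile.
func shouldCompact(currentEpoch, baseEpoch int64) bool {
	return currentEpoch-baseEpoch > CompactionThreshold
}
```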
vyzo
a25ac80777
reintroduce compaction slack
2021-07-04 18:38:28 +03:00
vyzo
c4d95de987
coalesce back-to-back compactions
...
get rid of the CompactionCold construct; run a single compaction on catch-up
2021-07-04 18:38:28 +03:00
vyzo
b7897595eb
augment current epoch by +1
...
to account for off-by-one conditions
2021-07-04 18:38:28 +03:00
vyzo
933c786421
update write epoch in the background every second
2021-07-04 18:38:28 +03:00
vyzo
66f1630f14
fix lint issue
2021-07-04 18:38:28 +03:00
vyzo
bb17608ae0
track writeEpoch relative to current wall clock time
...
The issue: head change notifications are not emitted until after catching up,
so all writes made during a catch-up period were being tracked at the base epoch.
2021-07-04 18:38:28 +03:00
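A rough sketch of deriving the write epoch from the wall clock, as the commit above describes, rather than from HeadChange notifications that lag during catch-up; the names and the 30-second block time are assumptions.

```go
package sketch

import "time"

const blockTime = 30 * time.Second // Filecoin epoch duration

// epochClock estimates the current epoch from wall-clock time, so writes made
// while the node is still catching up are not all tagged with the base epoch.
type epochClock struct {
	baseEpoch int64     // epoch of the last applied head
	baseTime  time.Time // wall-clock timestamp of that epoch
}

// writeEpoch returns the epoch to record for a write happening at time now;
// the +1 mirrors the off-by-one adjustment noted a few entries above.
func (c *epochClock) writeEpoch(now time.Time) int64 {
	return c.baseEpoch + int64(now.Sub(c.baseTime)/blockTime) + 1
}
```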
vyzo
421f05eab9
save the warm up epoch only if successful in warming up
2021-07-04 18:38:28 +03:00
vyzo
9b6448518c
refactor warmup to trigger at startup and not wait for sync
2021-07-04 18:38:28 +03:00
vyzo
3fe4261f12
don't attempt compaction while still syncing
2021-07-04 18:38:28 +03:00
vyzo
7b02673620
don't try to visit genesis parent blocks
2021-07-04 18:38:28 +03:00
vyzo
997f2c098b
keep headers hot when running with a noop splitstore
2021-07-04 18:38:28 +03:00
vyzo
7c814cd2e3
refactor genesis state loading code into its own method
2021-07-04 18:38:28 +03:00
vyzo
41573f1fb2
also walk parent message receipts when including messages in the walk
2021-07-04 18:38:28 +03:00
vyzo
fa6481401d
reduce SyncGapTime to 1 minute
...
for maximal safety.
2021-07-04 18:38:28 +03:00
vyzo
d33a44e67f
first visit the cid, then short-circuit non-dagcbor objects
2021-07-04 18:38:28 +03:00
vyzo
bdb97d6186
more robust handling of sync gap walks
2021-07-04 18:38:28 +03:00
vyzo
7cf75e667d
keep genesis-linked state hot
2021-07-04 18:38:28 +03:00
Raúl Kripalani
b2b7eb2ded
metrics: increment misses in View().
2021-07-04 18:38:28 +03:00
vyzo
3a9b7c592d
mark from current epoch to boundary epoch when necessary
...
this is necessary to avoid excessive work when the node stays
offline for an extended period of time (more than 1 finality).
The full 2-finality walk is quite slow, so we try to avoid it unless necessary.
A full walk is only necessary when there is a sync gap (most likely
because the node was offline), during which the tracking of writes is
inaccurate because the HeadChange notification has not yet been
delivered. In that case, genuinely hot blocks may be tracked before
the boundary and fail to be marked accordingly. So when we detect a
sync gap, we do the full walk; if there is no sync gap, we can just
use the much faster boundary epoch walk.
2021-07-04 18:38:28 +03:00
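An illustrative sketch of the decision described above: walk only at the boundary epoch on the fast path, or from the current epoch when a sync gap means write tracking cannot be trusted. The 1-minute SyncGapTime follows a commit above; the sync-gap check and names are assumptions.

```go
package sketch

import "time"

const SyncGapTime = time.Minute // per the "reduce SyncGapTime" commit above

// markStartEpoch picks where the mark walk begins: the boundary epoch on the
// fast path, or the current epoch when a sync gap (node likely offline) means
// writes before the boundary may have been tracked inaccurately.
func markStartEpoch(currentEpoch, boundaryEpoch int64, lastHeadChange, now time.Time) int64 {
	if now.Sub(lastHeadChange) > SyncGapTime {
		// sync gap detected: do the full walk from the current epoch
		return currentEpoch
	}
	return boundaryEpoch
}
```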
vyzo
d7ceef104e
decrease CompactionThreshold to 3 finalities
2021-07-04 18:38:28 +03:00
vyzo
e3cbeec6ee
implement chain walking
2021-07-04 18:38:28 +03:00
vyzo
04f2e102a1
kill full splitstore compaction, simplify splitstore configuration
2021-07-04 18:38:28 +03:00
Steven Allen
63db9e1633
fix(splitstore): fix a panic on revert-only head changes
...
Calling, e.g., `lotus chain sethead` on an ancestor tipset won't apply
any new blocks, it'll just revert a bunch. This will lead to HeadChange
calls with no new blocks to apply.
fixes #6125
2021-04-28 20:35:30 -07:00
vyzo
1b1d3606cd
make linter happy
2021-03-11 13:10:44 +02:00
vyzo
353bb1881f
compact hotstore if it provides the method
2021-03-11 11:45:19 +02:00
vyzo
3bd77701d8
deduplicate code
2021-03-08 19:46:21 +02:00
vyzo
3d1b855f20
rename GC to CollectGarbage, ignore badger.ErrNoRewrite
2021-03-08 19:42:38 +02:00
vyzo
52de95d344
also gc in compactFull, not just compactSimple
2021-03-08 18:30:09 +02:00
vyzo
8562a9bb82
garbage collect hotstore after compaction
2021-03-08 18:12:09 +02:00
vyzo
09f5ba177a
add splitstore unit test
2021-03-05 19:55:32 +02:00
vyzo
0a2f2cf00d
use the right condition for triggering the miss metric
2021-03-05 14:48:59 +02:00
vyzo
2b32c2e597
add some metrics
2021-03-05 14:48:57 +02:00