statediff: Use a worker pool #43
Reference: cerc-io/go-ethereum#43
Changes the statediff service to spawn a pool of workers, each running an independent WriteLoop, with the pool size set by the statediff.workers CLI flag.

Receipts are deep-copied to work around a data race seen in receipts.DeriveFields() when used during indexing, though I didn't debug this deeply enough to find out why the same receipts data would ever be touched by goroutines which should only be working on separate blocks.

The trie node errors are probably due to the state.Database hitting its cache limit (though a cache miss shouldn't cause this).

Resolves https://github.com/vulcanize/go-ethereum/issues/41
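For illustration, the fan-out pattern described above looks roughly like the following minimal, self-contained sketch. It is not the actual service code: the Event and Block types and the runPool function are stand-ins for core.ChainEvent, the blocks it carries, and the worker pool running WriteLoop.

package main

import (
	"fmt"
	"sync"
)

// Event stands in for core.ChainEvent; Block for the block it carries.
type Block struct{ Number uint64 }
type Event struct{ Block Block }

// runPool fans events from a single subscription channel out to `workers`
// goroutines, each running an independent write loop, and blocks until quit
// is closed and every worker has returned. A real implementation would also
// need to decide whether to drain pending events on shutdown.
func runPool(events <-chan Event, quit <-chan struct{}, workers int, write func(Block)) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for {
				select {
				case ev := <-events:
					write(ev.Block) // each worker indexes its own blocks
				case <-quit:
					return
				}
			}
		}()
	}
	wg.Wait()
}

func main() {
	events := make(chan Event)
	quit := make(chan struct{})
	go func() {
		for n := uint64(1); n <= 8; n++ {
			events <- Event{Block{Number: n}}
		}
		close(quit)
	}()
	// Roughly the equivalent of running with statediff.workers=4.
	runPool(events, quit, 4, func(b Block) { fmt.Println("indexed block", b.Number) })
}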
Running this on a fresh full chain, I'm not able to reproduce the trie node errors I was seeing before, so they may not be directly related - removing the WIP tag.
Looks great! A few comments but only one or two changes. Running locally, everything seems to be working (ran with 4 and 8 workers, around block 80,000). I do see some errors on shutdown that I think are new but also probably inconsequential:
WARN [11-24|02:40:55.972] Error from chain event subscription error=nil
WARN [11-24|02:40:56.031] Error from chain event subscription error=nil
WARN [11-24|02:40:56.032] Error from chain event subscription error=nil
Are you still seeing a Trie data race issue on your end?
I haven't benchmarked; it would be good to get a rough estimate of the performance impact.
I know you can't do it with RLP encoding/decoding since it only encodes the consensus fields, but I wonder if you could copy by marshaling the receipt to JSON bytes and then unmarshalling into a new types.Receipt.
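For reference, the suggested JSON round-trip copy would look something like this sketch (a hypothetical helper, not code from this PR; note that types.Receipt's generated UnmarshalJSON treats several fields such as logs and gasUsed as required, so this assumes a fully populated receipt):

package receiptcopy

import (
	"encoding/json"

	"github.com/ethereum/go-ethereum/core/types"
)

// copyReceipt deep-copies a receipt by round-tripping it through its JSON
// encoding which, unlike the RLP consensus encoding, also carries derived
// fields such as TxHash, GasUsed, and BlockNumber. Sketch only: error cases
// and partially populated receipts would need more careful handling.
func copyReceipt(r *types.Receipt) (*types.Receipt, error) {
	data, err := json.Marshal(r)
	if err != nil {
		return nil, err
	}
	cpy := new(types.Receipt)
	if err := json.Unmarshal(data, cpy); err != nil {
		return nil, err
	}
	return cpy, nil
}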
@@ -105,3 +116,3 @@
 SubscriptionTypes map[common.Hash]Params
 // Cache the last block so that we can avoid having to lookup the next block's parent
-lastBlock lastBlockCache
+BlockCache blockCache
I like the params getting passed together like this!
I'm wondering if the cache is still beneficial with the concurrent approach given the additional complexity.
I may be missing it, but I think we still need to cache the parentBlock returned by bc.GetBlockByHash?

When integrating with the metrics, we might want to have an intermediate select loop that logs the metric off the primary core.ChainEvent channel before passing the event along to a secondary worker queue channel here, if we want to avoid logging metrics from within worker goroutines. A rough sketch of that idea follows below.

Brackets and comment can go, I think.
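The sketch referenced in the select-loop comment above (hypothetical names, not code from this PR): metrics are recorded in a single goroutine off the primary channel, and each event is then handed to the worker queue.

package metricsloop

import "log"

// Event stands in for core.ChainEvent.
type Event struct{ BlockNumber uint64 }

// forwardEvents records metrics off the primary subscription channel in this
// single goroutine, then passes each event along to a secondary worker-queue
// channel, so metrics are never updated from inside worker goroutines.
func forwardEvents(primary <-chan Event, workerQueue chan<- Event, quit <-chan struct{}) {
	blocksSeen := 0 // stand-in for a real metrics counter
	for {
		select {
		case ev := <-primary:
			blocksSeen++
			log.Printf("chain events received: %d (latest block %d)", blocksSeen, ev.BlockNumber)
			select {
			case workerQueue <- ev:
			case <-quit:
				return
			}
		case <-quit:
			return
		}
	}
}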
This is simpler and appears to be working when I run locally:
I checked and the hit/access ratio is very close to 1, so it does still seem to be helping; I can verify further with benchmarks.
My reasoning was that two blocks shouldn't be processed with the same parent block, so we just cache the current one so it's available when its child block gets processed. I did some logging to verify this - blocks are accessed no more than once. Also, the number of misses is so low that hardly any parents would get added to the cache (since we would only add them on a miss).
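To make that concrete, here is a simplified, hypothetical sketch of the caching scheme being described (not the code in this PR): cache the block that was just processed, and when handling its child, check the cache before falling back to the chain lookup (the lookup parameter stands in for bc.GetBlockByHash).

package blockcache

import "sync"

// Hash and Block stand in for common.Hash and *types.Block.
type Hash [32]byte

type Block struct {
	hash       Hash
	parentHash Hash
}

func (b *Block) Hash() Hash       { return b.hash }
func (b *Block) ParentHash() Hash { return b.parentHash }

// blockCache caches recently processed blocks so that a worker handling the
// child block can usually find its parent without a database lookup.
type blockCache struct {
	mu     sync.Mutex
	blocks map[Hash]*Block
	max    int
}

func newBlockCache(max int) *blockCache {
	return &blockCache{blocks: make(map[Hash]*Block), max: max}
}

// getParent returns the parent of block, preferring the cache and falling
// back to lookup on a miss.
func (c *blockCache) getParent(block *Block, lookup func(Hash) *Block) *Block {
	c.mu.Lock()
	parent, ok := c.blocks[block.ParentHash()]
	if ok {
		// Per the reasoning above, each cached block is read at most once,
		// so it can be dropped after a hit.
		delete(c.blocks, block.ParentHash())
	}
	// Cache the current block so it is available when its child is processed.
	if len(c.blocks) < c.max {
		c.blocks[block.Hash()] = block
	}
	c.mu.Unlock()
	if !ok {
		return lookup(block.ParentHash())
	}
	return parent
}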
oh yeah, meant to clean it up
It's odd - I was getting a data race detected at various points in this loop, but now I can't reproduce it even on that commit where it was happening. That was the reason for the deep copy before, but now it seems even the shallow copy is not necessary, so I can revert those changes.
Done, after rebasing on this branch
Oh that's awesome, thanks for checking that!
Ah yeah, that makes sense, and is generally true. The only time it wouldn't hold is when there are reorgs while syncing at the head of the chain (we'll never see that happen during chain import, e.g. during any of our local tests). In that case I think it's still better to occasionally reach past the cache to retrieve the block than to waste cache space on that off chance.
LGTM! Feel free to merge when you are ready.
The chain event subscription error seems to be from the workers receiving the close of the error channel before the quit channel. Like you said, it doesn't seem to indicate a real problem, though.
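For anyone else seeing those warnings, a minimal sketch of the shutdown ordering that produces them (hypothetical names; errCh stands in for the subscription's error channel):

package shutdown

import "log"

// Event stands in for core.ChainEvent.
type Event struct{}

// workerLoop illustrates the ordering issue described above: if errCh is
// closed before quit, the receive on errCh yields the zero value (nil),
// producing the spurious "Error from chain event subscription error=nil"
// warnings seen on shutdown.
func workerLoop(events <-chan Event, errCh <-chan error, quit <-chan struct{}, process func(Event)) {
	for {
		select {
		case ev := <-events:
			process(ev)
		case err := <-errCh:
			// err is nil when errCh has simply been closed on unsubscribe.
			log.Printf("WARN Error from chain event subscription error=%v", err)
			return
		case <-quit:
			return
		}
	}
}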