When we map a file for generating the DAG, we do a simple truncate to e.g. 1GB. This is fine even if we have nowhere near 1GB of disk available, as the actual file doesn't take up the full 1GB, merely a few bytes. When we start generating into it, however, it eventually crashes with an unexpected fault address.
This change fixes it (on Linux systems) by using the Fallocate syscall, which preallocates sufficient space on disk to avoid that situation.
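For illustration, a minimal sketch of the approach (the file name and size here are made up, and geth wraps this in its own helper):

```go
//go:build linux

package main

import (
	"fmt"
	"os"
	"syscall"
)

// preallocate reserves the full target size on disk up front, so that writes
// into a memory mapping of the file cannot later fault on an out-of-space
// condition.
func preallocate(path string, size int64) error {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	// mode 0: allocate and zero-fill the range, extending the file size.
	return syscall.Fallocate(int(f.Fd()), 0, 0, size)
}

func main() {
	if err := preallocate("dag.tmp", 1<<30); err != nil {
		fmt.Println("fallocate failed:", err)
	}
}
```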
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR offers two more database subcommands for exporting and importing data.
Two exporters are implemented, for preimage data and snapshot data respectively.
The import command is generic: it can take any data export and import it into leveldb.
The data format has a 'magic' for disambiguation, and a version field for future compatibility.
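For illustration, a minimal sketch of such a header (the magic bytes, version number, and layout below are assumptions, not the actual export format):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Illustrative values only: the real commands use their own magic and version.
var exportMagic = []byte("gexp")

const exportVersion uint64 = 0

// writeHeader prefixes an export stream with a magic for disambiguation and a
// version field so that future readers can detect incompatible formats.
func writeHeader(w *bytes.Buffer) {
	w.Write(exportMagic)
	var v [8]byte
	binary.BigEndian.PutUint64(v[:], exportVersion)
	w.Write(v[:])
}

func main() {
	var buf bytes.Buffer
	writeHeader(&buf)
	fmt.Printf("export header: %x\n", buf.Bytes())
}
```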
This is because writing a known block only checks the block and the state, not the snapshot, which could lead to a gap between the newest snapshot and the newest block state. New blocks that would have caused the snapshot to be fixed up were ignored, since their state was already known.
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Some benchmarks in eth/filters were not good: they weren't reproducible, as they relied on geth chaindata being present.
Another one was rejected because the receipt was lacking a backing transaction.
The p2p simulation benchmark produced a lot of the warnings below, due to the framework calling both
Stop() and Close(). Apparently, the simulated adapter is the only implementation which has a Close(),
and there is no need to call both Stop and Close on it.
We had been assuming that the `item` returned from batch.commit()
was the item committed, but it's actually the next item to be added
to the freezer, and multiple items can be committed in a single batch.
This commit finds the smallest item in the freezer and iterates from
that to the number returned by commit(), passing any tracked blocks
in that range to plugins.
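For illustration, a minimal sketch of that iteration (all names here are hypothetical, not the actual plugin hook API):

```go
package main

import "fmt"

// Block stands in for whatever payload is tracked per item; the real plugin
// hook types are different.
type Block struct{ Number uint64 }

// notifyFrozen walks the committed range [oldestItem, nextItem), where
// nextItem is the value returned by batch.commit(), and hands every tracked
// block in that range to the plugin callback.
func notifyFrozen(oldestItem, nextItem uint64, tracked map[uint64]*Block, plugin func(*Block)) {
	for n := oldestItem; n < nextItem; n++ {
		if blk, ok := tracked[n]; ok {
			plugin(blk)
		}
	}
}

func main() {
	tracked := map[uint64]*Block{10: {10}, 11: {11}, 12: {12}}
	// Suppose the smallest item in the freezer was 10 and commit() returned 13:
	// items 10, 11 and 12 were all committed in this one batch.
	notifyFrozen(10, 13, tracked, func(b *Block) { fmt.Println("frozen block", b.Number) })
}
```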
* core: write test showing that TD is not stored properly at genesis
The ToBlock method applies a default value for an empty
difficulty value. This default is not carried over through the Commit
method, because the TotalDifficulty database write stores the
original difficulty value (nil) instead of the default value
present on the genesis Block.
Date: 2021-10-22 08:25:32-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* core: write TD value from Block, not original genesis value
This fixes an issue where a default TD value was not written to
the database, resulting in a 0-value TD at genesis.
A test for this issue was provided at 90e3ffd393
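For illustration, a hedged sketch of the gist (assuming the 1.10-era geth helpers Genesis.ToBlock(db) and rawdb.WriteTd; not the exact diff):

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// writeGenesisTd illustrates the corrected write: the TD comes from the
// assembled genesis block, whose difficulty ToBlock has already defaulted,
// rather than from the raw Genesis config value, which may still be nil.
func writeGenesisTd(db ethdb.Database, genesis *core.Genesis) {
	block := genesis.ToBlock(db) // applies params.GenesisDifficulty when Difficulty is nil
	rawdb.WriteTd(db, block.Hash(), block.NumberU64(), block.Difficulty())
}
```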
Date: 2021-10-22 08:28:00-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* core: fix tests by adding GenesisDifficulty to expected result
See prior two commits.
Date: 2021-10-22 09:16:01-07:00
Signed-off-by: meows <b5c6@protonmail.com>
* les: fix test with genesis change
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR adds a new accessor method to the freezer database. This new view offers a consistent interface, guaranteeing that all individual tables (headers, bodies, etc.) are on the same item number, and that this number does not change (via append or truncation) while the operation is in progress.
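For illustration, a minimal sketch of the idea (the types and method names are illustrative, not the actual freezer API):

```go
package main

import (
	"fmt"
	"sync"
)

// The point of the accessor is that it takes a callback and holds the read
// lock for its whole duration, so every table is seen at the same item count
// and no append or truncation can happen midway.
type freezer struct {
	lock   sync.RWMutex
	frozen uint64              // number of items present in every table
	tables map[string][][]byte // header, body, receipt tables keyed by name
}

// view runs fn under the read lock, giving it a consistent view of all tables.
func (f *freezer) view(fn func(f *freezer) error) error {
	f.lock.RLock()
	defer f.lock.RUnlock()
	return fn(f)
}

func main() {
	f := &freezer{frozen: 3, tables: map[string][][]byte{"headers": nil, "bodies": nil}}
	_ = f.view(func(f *freezer) error {
		fmt.Println("all tables frozen at item count", f.frozen)
		return nil
	})
}
```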
* core/state/snapshot: fix BAD BLOCK error when snapshot is generating
* core/state/snapshot: alternative fix for the snapshot generator
* add comments and minor update
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core: fix warning flagging the use of DeepEqual on error (sketched below)
* apply the same change everywhere possible
* revert change that was committed by mistake
* fix build error
* Update config.go
* revert changes to ConfigCompatError
* review feedback
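The warning referenced in the first item concerns comparing errors with reflect.DeepEqual, which misses wrapped errors; errors.Is is the idiomatic replacement. A hedged illustration of the general pattern (not the exact geth diff):

```go
package main

import (
	"errors"
	"fmt"
)

var errSentinel = errors.New("example sentinel error")

func main() {
	err := fmt.Errorf("wrapped: %w", errSentinel)

	// Flagged pattern: reflect.DeepEqual(err, errSentinel) compares concrete
	// values and misses wrapped errors entirely.
	// Preferred pattern: errors.Is unwraps the chain and matches the sentinel.
	if errors.Is(err, errSentinel) {
		fmt.Println("matched the sentinel via errors.Is")
	}
}
```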
Co-authored-by: Felix Lange <fjl@twurst.com>