Removes the yolov2 definition, adds yolov3, including EIP-2565. This PR also disables some of the erroneously generated blockchain and statetests, and adds the new genesis hash + alloc for yolov3.
This PR disables the CLI switches for yolo, since it's not complete until we merge support for 2930.
This PR implements the following modifications:
- Don't shortcut-check whether the block is already present, thus avoiding a disk lookup
- Don't check hash ancestry in the early check (it's still done in the parallel checker)
- Don't call time.Now for every single header
Charts and background info can be found here: https://github.com/holiman/headerimport/blob/main/README.md
With these changes, writing 1M headers goes down from 80s to 62s.
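A minimal sketch of the last point, assuming a hypothetical verifyTimes helper: the future-timestamp cutoff is computed once per batch instead of calling time.Now for every header.

```go
package sketch

import (
	"errors"
	"time"
)

type header struct{ Time uint64 }

// verifyTimes checks header timestamps against a single cutoff computed once,
// instead of calling time.Now() for every header in the batch.
func verifyTimes(headers []*header, maxFutureSeconds uint64) error {
	cutoff := uint64(time.Now().Unix()) + maxFutureSeconds
	for _, h := range headers {
		if h.Time > cutoff {
			return errors.New("header timestamp too far in the future")
		}
	}
	return nil
}
```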
Squashed from the following commits:
core/state: lazily init snapshot storage map
core/state: fix flawed meter on storage reads
core/state: make statedb/stateobjects reuse a hasher
core/blockchain, core/state: implement new trie prefetcher
core: make trie prefetcher deliver tries to statedb
core/state: refactor trie_prefetcher, export storage tries
blockchain: re-enable the next-block-prefetcher
state: remove panics in trie prefetcher
core/state/trie_prefetcher: address some review concerns
This commit splits the eth package, separating the handling of eth and snap protocols. It also includes the capability to run snap sync (https://github.com/ethereum/devp2p/blob/master/caps/snap.md), but does not enable it by default.
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
Co-authored-by: Martin Holst Swende <martin@swende.se>
This PR implements an unclean shutdown marker. Every time geth boots, it adds a timestamp to a list of timestamps in the database. This list is capped at 10 entries. On a clean shutdown, the timestamp is removed again.
Thus, when geth exits uncleanly, the marker remains, and at boot we show the most recent unclean shutdowns to the user, which makes it easier to diagnose the root causes of certain problems.
Co-authored-by: Nagy Salem <me@muhnagy.com>
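A rough, in-memory illustration of the bookkeeping described above (the real implementation stores the list in the database; the helper names here are made up):

```go
package sketch

import "time"

const crashesToKeep = 10

// pushShutdownMarker records a boot timestamp, capping the list at ten entries.
func pushShutdownMarker(markers []uint64) []uint64 {
	markers = append(markers, uint64(time.Now().Unix()))
	if len(markers) > crashesToKeep {
		markers = markers[len(markers)-crashesToKeep:]
	}
	return markers
}

// popShutdownMarker removes the marker added at boot. It runs on a clean
// shutdown, so any marker left behind indicates an unclean exit.
func popShutdownMarker(markers []uint64) []uint64 {
	if len(markers) > 0 {
		markers = markers[:len(markers)-1]
	}
	return markers
}
```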
This commit fixes a flaw in two testcases and brings the exec time of trie/TestIncompleteSync down from ~40s to ~8s.
The checkConsistency check was performed over and over again on the complete set of nodes, not just the recently added ones, turning the test into a quadratic-time operation.
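The shape of the fix, as a hedged sketch with a hypothetical check callback: only the recently added nodes are verified after each insertion, instead of re-walking the complete set.

```go
package sketch

// verifyAfterInsert checks consistency for only the nodes added in this step.
// Re-checking the complete set after every insertion (the old behaviour) makes
// the whole test quadratic in the number of nodes.
func verifyAfterInsert(added []string, check func(string) error) error {
	for _, node := range added {
		if err := check(node); err != nil {
			return err
		}
	}
	return nil
}
```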
* core: add test for headerchain inserts
* core, light: write headerchains in batches
* core: change to one callback per batch of inserted headers + review concerns
* core: error-check on batch write
* core: unexport writeHeaders
* core: remove callback parameter in InsertHeaderChain
The semantics of InsertHeaderChain are now much simpler: it is now an
all-or-nothing operation. The new WriteStatus return value allows
callers to check for the canonicality of the insertion. This change
simplifies use of HeaderChain in package les, where the callback was
previously used to post chain events. A hedged caller sketch follows
the list below.
* core: skip some hashing when writing headers
* core: less hashing in header validation
* core: fix headerchain flaw regarding blacklisted hashes
Co-authored-by: Felix Lange <fjl@twurst.com>
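A hedged caller sketch of the new all-or-nothing API, assuming a signature along the lines of InsertHeaderChain(chain, start) returning (WriteStatus, error):

```go
package sketch

import (
	"time"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
)

// insertHeaders shows how a caller such as les can react to the WriteStatus
// return value instead of relying on the removed per-header callback.
func insertHeaders(hc *core.HeaderChain, headers []*types.Header) error {
	status, err := hc.InsertHeaderChain(headers, time.Now())
	if err != nil {
		return err // all-or-nothing: nothing was written
	}
	if status == core.CanonStatTy {
		// The batch extended the canonical chain; post chain events here.
	}
	return nil
}
```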
A lot of the time when we hit 'core' errors, for example an invalid tx, the information provided is
insufficient. We miss several pieces of information: which account has a nonce that is too high,
and which transaction in the block was the offending one?
This PR adds that information, using the new style of wrapped errors.
It also adds a testcase which (partly) verifies the output from the errors.
The first commit changes all direct equality checks on core errors into
errors.Is checks. The second commit adds contextual information: it wraps most
of the core errors with more information, and wraps them one more time in the
state processor to further provide the tx index and tx hash, if such a tx is encountered in
a block. The third commit uses the chainmaker to try to generate chains containing such
errors, thus triggering the errors and checking that the generated error strings meet
expectations.
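A condensed sketch of the pattern (the wrapping format is illustrative; the errors.Is matching is the point):

```go
package sketch

import (
	"errors"
	"fmt"

	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
)

// wrapTxError adds the tx index and hash to a core error, as the state
// processor now does, while keeping the original error matchable.
func wrapTxError(err error, i int, tx *types.Transaction) error {
	return fmt.Errorf("could not apply tx %d [%s]: %w", i, tx.Hash().Hex(), err)
}

// isNonceTooHigh matches both wrapped and unwrapped errors, which is why the
// direct equality checks were replaced with errors.Is.
func isNonceTooHigh(err error) bool {
	return errors.Is(err, core.ErrNonceTooHigh)
}
```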
* all: core: split vm.Config into BlockConfig and TxConfig
* core: core/vm: reset EVM between tx in block instead of creating new
* core/vm: added docs
This PR contains a minor optimization in derivesha, by exposing the RLP
int-encoding and making use of it to write integers directly to a
buffer (an RLP integer is known to never require more than 9 bytes
total). rlp.AppendUint64 might be useful in other places too.
The code assumes, just as before, that the hasher (a trie) will copy the
key internally, which it does when doing keybytesToHex(key).
Co-authored-by: Felix Lange <fjl@twurst.com>
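A hedged sketch of the buffer reuse around rlp.AppendUint64 (the surrounding loop is simplified and the helper name is made up):

```go
package sketch

import "github.com/ethereum/go-ethereum/rlp"

// insertWithIndexKeys feeds (index-key, value) pairs to a trie-like hasher.
// An RLP-encoded integer never needs more than 9 bytes, so a single small
// buffer can be reused for every index key.
func insertWithIndexKeys(values [][]byte, update func(key, value []byte)) {
	keybuf := make([]byte, 0, 9)
	for i, val := range values {
		keybuf = rlp.AppendUint64(keybuf[:0], uint64(i))
		// As before, the hasher (a trie) copies the key internally, so the
		// buffer can be reused between iterations.
		update(keybuf, val)
	}
}
```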
* core/state/snapshot: print warning if failed to resolve journal
* core/state/snapshot: fix snapshot recovery
When we encounter a snapshot journal consisting of:
- a disk layer generator in the new format
- a diff layer journal in the old format
the base layer should be returned without error.
The broken diff layer can be reconstructed later,
but we definitely don't want to reconstruct the
huge diff layer at startup.
* core: add tests
* core/state/snapshot: introduce snapshot journal version
* core: update the disk layer in an atomic way
* core: persist the disk layer generator periodically
* core/state/snapshot: improve logging
* core/state/snapshot: forcibly ensure the legacy snapshot is matched
* core/state/snapshot: add debug logs
* core, tests: fix tests and special recovery case
* core: polish
* core: add more blockchain tests for snapshot recovery
* core/state: fix comment
* core: add recovery flag for snapshot
* core: add restart after start-after-crash tests
* core/rawdb: fix imports
* core: fix tests
* core: remove log
* core/state/snapshot: fix snapshot
* core: avoid callbacks in SetHead
* core: fix setHead cornercase where the threshold root has state
* core: small docs for the test cases
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core/state/snapshot: add diskRoot function
* core/state/snapshot: disable iteration if the snapshot is generating
* core/state/snapshot: simplify the function
* core/state: panic for undefined layer
* core/types: tests for bloom
* core/types: refactored bloom filter for receipts, added tests
core/types: replaced old bloom implementation
core/types: change interface of bloom add+test
* core/types: refactor bloom
* core/types: minor tweak on LogsBloom
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
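For reference, the bit-setting arithmetic that the refactored bloom add/test interface above is built around: a sketch of the 2048-bit bloom construction from the yellow paper (the helper name is made up; the real code lives in core/types):

```go
package sketch

import "github.com/ethereum/go-ethereum/crypto"

// setBloomBits sets the three bits a value contributes to a 2048-bit bloom:
// each of the first three 2-byte pairs of keccak256(value) selects one of the
// 2048 bit positions (11 low bits), indexed from the low-order end.
func setBloomBits(bloom *[256]byte, value []byte) {
	h := crypto.Keccak256(value)
	for i := 0; i < 6; i += 2 {
		bit := (uint(h[i])<<8 | uint(h[i+1])) & 2047
		bloom[len(bloom)-1-int(bit/8)] |= 1 << (bit % 8)
	}
}
```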
* core/state/snapshot: exit Geth if generator hits missing trie nodes
* core/state/snapshot: error instead of hard die on generator fault
* core/state/snapshot: don't enable logging on the tests
core/types: use stacktrie for derivesha
trie: add stacktrie file
trie: fix linter
core/types: use stacktrie for derivesha
rebased: adapt stacktrie to the newer version of DeriveSha
Co-authored-by: Martin Holst Swende <martin@swende.se>
More linter fixes
review feedback: no key offset for nodes converted to hashes
trie: use EncodeRLP for full nodes
core/types: insert txs in order in derivesha
trie: tests for derivesha with stacktrie
trie: make stacktrie use pooled hashers
trie: make stacktrie reuse tmp slice space
trie: minor polishes on stacktrie
trie/stacktrie: less rlp dancing
core/types: explain the contortions in DeriveSha
ci: fix goimport errors
trie: clear mem on subtrie hashing
squashme: linter fix
stacktrie: use pooling, less allocs (#3)
trie: in-place hex prefix, reduce allocs and add rawNode.EncodeRLP
Reintroduce the `[]node` method, add the missing `EncodeRLP` implementation for `rawNode` and calculate the hex prefix in place.
Co-authored-by: Martin Holst Swende <martin@swende.se>
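For context, a usage-level sketch of the end result: DeriveSha fed by the stack trie instead of a full in-memory trie. The exact constructor arguments for trie.NewStackTrie are an assumption here.

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/trie"
)

// txRoot derives the transaction root with the append-only stack trie, which
// hashes and discards subtries as it goes rather than keeping the whole trie.
func txRoot(txs types.Transactions) common.Hash {
	return types.DeriveSha(txs, trie.NewStackTrie(nil))
}
```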
* database: added counters
* Improved stats for ancient db
* Small improvement
* Better message and added percentage while counting receipts
* Fast counting for receipts
* added info message
* Show both the receipt item count from the ancient db and the counted receipts
* Fixed default case
* Removed counter for receipts in ancient store
* Removed counting of receipts present in leveldb
* core/vm/testdata: add gascost expectations to testcases
* core/vm: verify expected gas in tests for precompiles
* core/vm: fix overflow flaw in gas/s calculation
* core: avoid modification of accountSet cache in tx_pool
During runReorg, we may copy the dirtyAccounts' accountSet cache into promoteAddrs,
the set of accounts whose transactions will be promoted. However, if a reset request
arrives at the same time, we may reuse promoteAddrs and modify the cached content,
which goes against the original intention of the accountSet cache. So we need to make
a new slice here to avoid modifying the accountSet cache (a minimal sketch follows below).
* core: fix flatten condition + comment
Co-authored-by: Felix Lange <fjl@twurst.com>
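The fix boils down to a defensive copy. A minimal generic sketch (not the exact tx_pool code):

```go
package sketch

import "github.com/ethereum/go-ethereum/common"

// flattenedCopy returns a fresh slice instead of handing out the accountSet's
// internal cache, so later mutation of the result cannot corrupt the cache.
func flattenedCopy(cached []common.Address) []common.Address {
	out := make([]common.Address, len(cached))
	copy(out, cached)
	return out
}
```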
This PR significantly changes the APIs for instantiating Ethereum nodes in
a Go program. The new APIs are not backwards-compatible, but we feel that
this is made up for by the much simpler way of registering services on
node.Node. You can find more information and rationale in the design
document: https://gist.github.com/renaynay/5bec2de19fde66f4d04c535fd24f0775.
There is also a new feature in Node's Go API: it is now possible to
register arbitrary handlers on the user-facing HTTP server. In geth, this
facility is used to enable GraphQL.
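A minimal sketch of registering a custom handler on the user-facing HTTP server (the method and config field names are assumed from the new node API; treat the details as illustrative):

```go
package main

import (
	"net/http"

	"github.com/ethereum/go-ethereum/node"
)

func main() {
	stack, err := node.New(&node.Config{HTTPHost: "127.0.0.1", HTTPPort: 8545})
	if err != nil {
		panic(err)
	}
	// Mount an arbitrary handler on the user-facing HTTP server; this is the
	// same facility geth uses to serve GraphQL.
	stack.RegisterHandler("hello", "/hello", http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello\n"))
		}))
	if err := stack.Start(); err != nil {
		panic(err)
	}
	defer stack.Close()
	stack.Wait()
}
```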
There is a single minor change relevant for geth users in this PR: The
GraphQL API is no longer available separately from the JSON-RPC HTTP
server. If you want GraphQL, you need to enable it using the
./geth --http --graphql flag combination.
The --graphql.port and --graphql.addr flags are no longer available.
This replaces the two-stage shutdown scheme with the one we
use almost everywhere else: a single quit channel signalling
termination.
Co-authored-by: Felix Lange <fjl@twurst.com>
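For reference, the single-quit-channel pattern being adopted (generic sketch):

```go
package sketch

// worker runs until the shared quit channel is closed; closing a channel is
// observed by every receiver, so a single close signals all goroutines to stop.
func worker(quit <-chan struct{}, work <-chan func()) {
	for {
		select {
		case job := <-work:
			job()
		case <-quit:
			return
		}
	}
}
```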
Solves issue #20582. Non-executable transactions should not be evicted on each tick if there are no transactions to promote, or if a reset empties the pending list. Tests and logging have been expanded to handle these cases in the future.
core/tx_pool: use a timestamp for each tx in the queue, but only update the heartbeat on promotion or when a pending tx is replaced
queuedTs proper naming
* eth/downloader: refactor downloader + queue
downloader, fetcher: throttle-metrics, fetcher filter improvements, standalone resultcache
downloader: more accurate deliverytime calculation, less mem overhead in state requests
downloader/queue: increase underlying buffer of results, new throttle mechanism
eth/downloader: updates to tests
eth/downloader: fix up some review concerns
eth/downloader/queue: minor fixes
eth/downloader: minor fixes after review call
eth/downloader: testcases for queue.go
eth/downloader: minor change, don't set progress unless progress...
eth/downloader: fix flaw which prevented useless peers from being dropped
eth/downloader: try to fix tests
eth/downloader: verify non-deliveries against advertised remote head
eth/downloader: fix flaw with checking closed-status causing hang
eth/downloader: hashing avoidance
eth/downloader: review concerns + simplify resultcache and queue
eth/downloader: add back some locks, address review concerns
downloader/queue: fix remaining lock flaw
* eth/downloader: nitpick fixes
* eth/downloader: remove the *2*3/4 throttling threshold dance
* eth/downloader: print correct throttle threshold in stats
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
* core: added local tx pool test case
* core, crypto: various allocation savings regarding tx handling
* core/txlist, txpool: save a reheap operation, avoid some bigint allocs
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
This change introduces garbage collection for the light client. Historical
chain data is deleted periodically. If you want to disable the GC, use
the --light.nopruning flag.
This change further improves the performance of RLP encoding by removing
allocations for big.Int and [...]byte types. I have added a new benchmark
that measures RLP encoding of types.Block to verify that performance is
improved.
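A sketch of what such a benchmark can look like (illustrative only, not the benchmark added by this change):

```go
package sketch

import (
	"math/big"
	"testing"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// BenchmarkEncodeBlock measures RLP encoding of a block, exercising the
// big.Int and fixed-size byte array paths in the header.
func BenchmarkEncodeBlock(b *testing.B) {
	header := &types.Header{
		Difficulty: big.NewInt(10_000_000),
		Number:     big.NewInt(12_345_678),
		GasLimit:   8_000_000,
		Time:       1600000000,
	}
	block := types.NewBlockWithHeader(header)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		if _, err := rlp.EncodeToBytes(block); err != nil {
			b.Fatal(err)
		}
	}
}
```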
* core: use uint64 for total tx costs instead of big.Int
* core: added local tx pool test case
* core, crypto: various allocation savings regarding tx handling
* Update core/tx_list.go
* core: added tx.GasPriceIntCmp for comparison without allocation
adds a method that avoids an unneeded allocation when comparing against tx.gasPrice
* core: handle pools full of locals better
* core/tests: benchmark for tx_list
* core/txlist, txpool: save a reheap operation, avoid some bigint allocs
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core, crypto: various allocation savings regarding tx handling
* core: reduce allocs for gas price comparison
This change reduces the allocations needed for comparing different transactions to each other.
A call to `tx.GasPrice()` copies the gas price, as it has to be safe against modifications and
also needs to be threadsafe. For comparing and ordering different transactions we don't need
these guarantees.
* core: added tx.GasPriceIntCmp for comparison without allocation
adds a method that avoids an unneeded allocation when comparing against tx.gasPrice
* core/types: pool legacykeccak256 objects in rlpHash
rlpHash is by far the most used function in core that allocates a legacyKeccak256 object on each call.
Since it is so widely used, it makes sense to add pooling here to relieve the GC.
On my machine these changes result in > 100 MILLION fewer allocations and > 30 GB less allocated memory.
A sketch of the pooled hasher follows this commit's list.
* reverted some changes
* trie: use crypto.KeccakState instead of replicating code
Co-authored-by: Martin Holst Swende <martin@swende.se>
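As mentioned above, a sketch of the pooled hasher (close to, but not guaranteed identical to, the merged code):

```go
package sketch

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/rlp"
)

// hasherPool holds reusable keccak256 states so rlpHash no longer allocates a
// fresh hasher on every call.
var hasherPool = sync.Pool{
	New: func() interface{} { return crypto.NewKeccakState() },
}

// rlpHash encodes x with RLP and hashes the stream, reading the digest
// directly into h to avoid an extra allocation.
func rlpHash(x interface{}) (h common.Hash) {
	sha := hasherPool.Get().(crypto.KeccakState)
	defer hasherPool.Put(sha)
	sha.Reset()
	rlp.Encode(sha, x)
	sha.Read(h[:])
	return h
}
```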
This PR implements the EVM state transition tool, which is intended
to be the replacement for our retesteth client implementation.
Documentation is present in the cmd/evm/README.md file.
Co-authored-by: Felix Lange <fjl@twurst.com>
* core/vm: fix incorrect computation of discount
During testing on Yolov1 we found that the way geth calculates the discount
is not in line with the specification. Basically, what we did was calculate
128 * Bls12381GXMulGas * discount / 1000 whenever we received more than 128 pairs
of values. The correct approach is to calculate k * Bls12381... for k > 128
(a sketch of the corrected arithmetic follows the list below).
* core/vm: better logic for discount calculation
* core/vm: better calculation logic, added worstcase benchmarks
* core/vm: better benchmarking logic
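As referenced above, a sketch of the corrected gas arithmetic (function and table names are placeholders; the discount table itself comes from EIP-2537):

```go
package sketch

// multiExpGas computes k * perPairGas * discount / 1000. The bug was using a
// fixed 128 instead of k once k exceeded the length of the discount table;
// only the discount is capped at the last table entry, not k itself.
func multiExpGas(k int, perPairGas uint64, discountTable []uint64) uint64 {
	if k == 0 {
		return 0
	}
	discount := discountTable[len(discountTable)-1] // k beyond the table: cap the discount only
	if k <= len(discountTable) {
		discount = discountTable[k-1]
	}
	return uint64(k) * perPairGas * discount / 1000
}
```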
The ancients variable in the freezer is the list of hashes to be frozen. The
slice is being allocated with a capacity of `limit`, which is the number of
the last block this batch will attempt to add to the freezer. That means we
are allocating memory for all of the blocks in the freezer, not just
the ones to be added.
If instead we allocate `limit - f.frozen`, we will only allocate
enough space for the blocks we're about to add to the freezer. On
mainnet this reduces usage by about 320 MB.
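The change in a nutshell, as a sketch (helper and parameter names are illustrative):

```go
package sketch

import "github.com/ethereum/go-ethereum/common"

// collectAncients allocates room only for the blocks about to be frozen
// (limit - frozen), not for every block already in the freezer (limit).
func collectAncients(frozen, limit uint64, hashFor func(uint64) common.Hash) []common.Hash {
	ancients := make([]common.Hash, 0, limit-frozen)
	for number := frozen; number < limit; number++ {
		ancients = append(ancients, hashFor(number))
	}
	return ancients
}
```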
* core/vm: use fixed uint256 library instead of big
* core/vm: remove intpools
* core/vm: upgrade uint256, fixes uint256.NewFromBig
* core/vm: use uint256.Int by value in Stack
* core/vm: upgrade uint256 to v1.0.0
* core/vm: don't preallocate space for 1024 stack items (only 16)
Co-authored-by: Martin Holst Swende <martin@swende.se>
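A sketch of the resulting value-typed stack built on github.com/holiman/uint256 (shapes only, not the exact core/vm code):

```go
package sketch

import "github.com/holiman/uint256"

// Stack keeps uint256.Int values inline (by value) instead of pointers to
// big.Int, and starts with room for 16 items rather than 1024.
type Stack struct {
	data []uint256.Int
}

func newStack() *Stack {
	return &Stack{data: make([]uint256.Int, 0, 16)}
}

func (st *Stack) push(d *uint256.Int) {
	st.data = append(st.data, *d)
}

func (st *Stack) pop() uint256.Int {
	ret := st.data[len(st.data)-1]
	st.data = st.data[:len(st.data)-1]
	return ret
}
```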