The loop variable changes before the defer executes, so we need to pass it
to the function as an argument to make sure the closure uses the value from
the current iteration rather than the final one.
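Roughly, the pattern being fixed looks like this (a generic illustration, not the actual code; Go 1.22 later changed loops so each iteration gets its own variable, but that is not the version in play here):

```go
package main

import "fmt"

func main() {
	for i := 0; i < 3; i++ {
		// Broken (pre-Go 1.22): the closure captures i itself, so every
		// deferred call would observe the value i has after the loop ends.
		// defer func() { fmt.Println(i) }()

		// Fixed: pass i as an argument so each deferred call gets its own copy.
		defer func(n int) { fmt.Println(n) }(i)
	}
	// Deferred calls run in LIFO order, so this prints 2, 1, 0.
}
```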
It turns out we were only notifying plugins of freezer commits and
cleaning up our record when the block being processed was greater
than MAX_UINT64.
So, uh, never.
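For illustration only (variable and function names made up): a uint64 block number can never exceed math.MaxUint64, so the guard was dead code.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	var blockNumber uint64 = 42 // a uint64 is at most math.MaxUint64
	if blockNumber > math.MaxUint64 {
		// Unreachable: the comparison above can never be true.
		fmt.Println("notify plugins / clean up record") // stand-in for the real work
	}
}
```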
While looking for the source of a memory leak, I decided to optimize
the metatracer to avoid encoding values when there weren't any tracers
configured to receive the encoded values.
I also avoided putting the EVM into debug mode when there weren't any
tracers configured.
None of these fixed the memory leak, but they still seem like good
practices. Memory leak fix is in the next commit.
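A rough sketch of the shape of the optimization (the type and method names here are illustrative, not the real metatracer API):

```go
package tracers

import "encoding/json"

// metaTracer fans trace data out to whatever sub-tracers are configured.
type metaTracer struct {
	tracers []func(encoded []byte) // illustrative subscriber type
}

// notify encodes the value and forwards it, but only if someone is listening.
func (mt *metaTracer) notify(value interface{}) {
	if len(mt.tracers) == 0 {
		return // no tracers configured: skip the encoding work entirely
	}
	encoded, err := json.Marshal(value) // stand-in for whatever encoding is really used
	if err != nil {
		return
	}
	for _, tracer := range mt.tracers {
		tracer(encoded)
	}
}

// The same guard decides whether the EVM is put into debug mode at all:
//   vmConfig.Debug = len(mt.tracers) > 0
```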
SyncProgress was modified in PR #23576 to add the fields reported for
snap sync. The PR also changed ethclient to use the SyncProgress struct
directly instead of wrapping it for hex-decoding. This broke the
SyncProgress method.
Fix it by putting back the custom wrapper. While here, also put back the
fast-sync-related fields, because SyncProgress is a stable API and thus
removing fields from it is not allowed.
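A rough sketch of what the hex-decoding wrapper looks like (field set abbreviated here; the real wrapper covers all SyncProgress fields, including the snap sync ones):

```go
package ethclient

import (
	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

// rpcProgress mirrors ethereum.SyncProgress with hexutil types, so the
// hex-encoded numbers returned by eth_syncing can be unmarshalled directly.
type rpcProgress struct {
	StartingBlock hexutil.Uint64
	CurrentBlock  hexutil.Uint64
	HighestBlock  hexutil.Uint64

	// Fast sync fields, kept because SyncProgress is stable API.
	PulledStates hexutil.Uint64
	KnownStates  hexutil.Uint64
}

// toSyncProgress converts the wrapper back into the public type.
func (p *rpcProgress) toSyncProgress() *ethereum.SyncProgress {
	if p == nil {
		return nil
	}
	return &ethereum.SyncProgress{
		StartingBlock: uint64(p.StartingBlock),
		CurrentBlock:  uint64(p.CurrentBlock),
		HighestBlock:  uint64(p.HighestBlock),
		PulledStates:  uint64(p.PulledStates),
		KnownStates:   uint64(p.KnownStates),
	}
}
```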
Fixes #24180
Fixes #24176
Fixes #24167
The new behaviour is that the endpoint returns results only for available
blocks, without returning an error when it doesn't find a block. Note that we
skip any block after the first non-existent one.
This adds a header fetch for every block in the range (even if the header
is not needed). Alternatively, we could do the check in every field's
resolver method to avoid this overhead.
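Roughly, the new resolution logic looks like this (names are illustrative, not the actual graphql package types):

```go
package graphql

import (
	"context"

	"github.com/ethereum/go-ethereum/core/types"
)

// headerSource is an illustrative stand-in for the GraphQL backend.
type headerSource interface {
	HeaderByNumber(ctx context.Context, number uint64) (*types.Header, error)
}

// Block is an illustrative stand-in for the block resolver type.
type Block struct {
	header *types.Header
}

// blocksInRange resolves only the blocks that actually exist, stopping at the
// first missing header instead of returning an error.
func blocksInRange(ctx context.Context, backend headerSource, from, to uint64) ([]*Block, error) {
	var ret []*Block
	for num := from; num <= to; num++ {
		header, err := backend.HeaderByNumber(ctx, num)
		if err != nil {
			return nil, err
		}
		if header == nil {
			break // block unavailable: skip it and everything after it
		}
		// The header fetch happens for every block even when no field needs
		// it; the alternative is an existence check in each field resolver.
		ret = append(ret, &Block{header: header})
	}
	return ret, nil
}
```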
* core/vm: reverse bit order in bytes of code bitmap
This bit order is more natural for bit-manipulation operations and lets us
eliminate a small number of CPU instructions (see the sketch below).
* core/vm: drop lookup table
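Illustration of the two orderings (a simplified sketch, not the actual bitvec code):

```go
package vm

// One bit per byte of code; a set bit marks a data byte (part of a PUSH
// argument) rather than an opcode.

// Old layout, most-significant bit first: the per-position mask comes from a
// small lookup table.
var bitMask = [8]byte{0x80, 0x40, 0x20, 0x10, 0x08, 0x04, 0x02, 0x01}

func isCodeMSBFirst(bits []byte, pos uint64) bool {
	return bits[pos/8]&bitMask[pos%8] == 0
}

// New layout, least-significant bit first: the mask is just 1 << (pos & 7),
// so the lookup table disappears and a couple of instructions are saved.
func isCodeLSBFirst(bits []byte, pos uint64) bool {
	return bits[pos/8]&(1<<(pos&7)) == 0
}
```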
* consensus: use the maxGasLimit constant to check header.GasLimit, to avoid repeatedly creating new variables
* consensus: check the header.GasLimit by the public constant MaxGasLimit
* consensus: check the header.GasLimit by the constant MaxGasLimit
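For reference, a minimal sketch of the check against the shared constant (helper name and error message are illustrative):

```go
package consensus

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/params"
)

// verifyGasLimit is an illustrative helper: the bound comes from the shared
// params.MaxGasLimit constant rather than a value re-declared in each verifier.
func verifyGasLimit(header *types.Header) error {
	if header.GasLimit > params.MaxGasLimit {
		return fmt.Errorf("invalid gasLimit: have %d, max %d", header.GasLimit, params.MaxGasLimit)
	}
	return nil
}
```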
Previously, Ctrl-C (SIGINT) was ignored during JS execution, so it was not
possible to get out of infinite loops in the console. With this change,
Ctrl-C now interrupts JS.
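A stripped-down sketch of the mechanism (the real console wiring is more involved): goja's Runtime.Interrupt can be called from another goroutine to abort a running script.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"os/signal"

	"github.com/dop251/goja"
)

func main() {
	vm := goja.New()

	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt) // Ctrl-C / SIGINT

	done := make(chan error, 1)
	go func() {
		// An infinite loop that would previously leave the console stuck.
		_, err := vm.RunString(`for (;;) {}`)
		done <- err
	}()

	select {
	case <-sig:
		// Interrupt can be called from another goroutine; RunString then
		// returns with an *InterruptedError.
		vm.Interrupt(errors.New("interrupted"))
		fmt.Println("script interrupted:", <-done)
	case err := <-done:
		fmt.Println("script finished:", err)
	}
}
```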
Fixes #23344
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR reduces the amount of work we do when answering header queries, e.g. when a peer
is syncing from us.
For some items, e.g. block bodies, when we read the RLP data from the database, we plug it
directly into the response package. We didn't do that for headers: instead we read the
header RLP, decode it to types.Header, and re-encode it to RLP. This PR changes that to keep
headers in RLP form as much as possible. When a node is syncing from us, it typically requests 192
contiguous headers. On master this has the following effect:
- For headers not in ancient: 2 db lookups. One for translating number->hash (even though
  the request is by number), and another for reading the header by hash (this latter one is
  sometimes cached).
- For headers in ancient: 1 file lookup/syscall for translating number->hash (even though
  the request is by number), and another for reading the header itself. After this, it
  also performs a hashing of the header, to ensure that the hash is what was expected.
In this PR, I instead move the logic for "give me a sequence of blocks" into the lower
layers, where the database can determine how and what to read from leveldb and/or
ancients.
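A rough sketch of that lower-layer shape (headerStore and ReadHeaderRLP are illustrative names, not the actual rawdb API):

```go
package rawdb

import "github.com/ethereum/go-ethereum/rlp"

// headerStore is an illustrative abstraction over leveldb and the ancients;
// it is not the actual rawdb API.
type headerStore interface {
	// ReadHeaderRLP returns the stored RLP encoding of the canonical header
	// at the given height, or nil if it is not available.
	ReadHeaderRLP(number uint64) rlp.RawValue
}

// readHeaderRange collects up to count contiguous canonical headers starting
// at first, keeping them as raw RLP so the caller can plug them straight into
// the response package without decoding to types.Header and re-encoding.
func readHeaderRange(db headerStore, first, count uint64) []rlp.RawValue {
	headers := make([]rlp.RawValue, 0, count)
	for n := first; n < first+count; n++ {
		data := db.ReadHeaderRLP(n)
		if len(data) == 0 {
			break // past the head, or a gap: stop here
		}
		headers = append(headers, data)
	}
	return headers
}
```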
There are basically four types of requests; three of them are improved this way. The
fourth, by hash going backwards, is trickier to optimize. However, since we know that
the gap is 0, we can look up by the parentHash, and still shave off all the number->hash
lookups.
The gapped collection can be optimized similarly, as a follow-up, at least in three out of
four cases.
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR fixes a special corner case in transaction indexing.
When the chain is rewound by SetHead to a historical point which is even lower than the transaction index tail, the system will keep reporting a "Failed to decode block body" error, because the relevant blocks have already been deleted.
In order to avoid this "non-critical-but-annoying" issue, we can cap the indexing target to head+1 ('to' is exclusive, so it means indexing transactions from 0 to head).
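Sketch of the capping, for illustration only (names made up):

```go
package core

// indexingTarget is illustrative only: cap the exclusive upper bound of the
// indexing range at head+1, so a deep SetHead can never leave us trying to
// index blocks whose bodies the rewind has already deleted.
func indexingTarget(head, requested uint64) uint64 {
	if requested > head+1 {
		return head + 1 // 'to' is exclusive, so this covers blocks 0..head
	}
	return requested
}
```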