Logs stored on disk have minimal information. Contextual information such as the block
number, the index of the log in the block, and the index of the transaction in the block is
filled in upon request. All of these fields can be filled in with only the block header and
the list of receipts, but determining the transaction hash of a log requires the block body.
The goal of this PR is to postpone that retrieval until we are sure we need the transaction
hash. It happens often that the header bloom filter signals there might be matches in a block,
but actually checking them reveals that the logs do not match. We want to avoid fetching
the body in this case.
Note that this changes the semantics of Backend.GetLogs. Downstream callers of
GetLogs must now assume the log context fields have not been derived, and call
DeriveFields on the logs if necessary.
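A rough sketch of how the split works, using illustrative helper names rather than the exact filter-system code: everything below can be derived from the header and receipts, while the transaction hash is left for a later, body-backed step.

```go
package logctx

import "github.com/ethereum/go-ethereum/core/types"

// deriveLogContext fills in the context fields that only need the header and
// the receipts; the block body (and therefore log.TxHash) is fetched later,
// and only for logs that actually match the filter.
func deriveLogContext(header *types.Header, receipts types.Receipts) []*types.Log {
	var (
		logs     []*types.Log
		logIndex uint
	)
	for txIndex, receipt := range receipts {
		for _, log := range receipt.Logs {
			log.BlockNumber = header.Number.Uint64()
			log.BlockHash = header.Hash()
			log.TxIndex = uint(txIndex)
			log.Index = logIndex
			logIndex++
			logs = append(logs, log)
		}
	}
	return logs
}
```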
This is a breaking change in the tracing hooks API as well as in the semantics of the callTracer:
- The CaptureEnter hook used to receive a nil value argument in the case of DELEGATECALL. To stay consistent with how delegate calls behave in the EVM, this hook now passes in the value of the parent call.
- callTracer will return the parent call's value for DELEGATECALL frames.
---------
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
* common, core, eth, les, trie: make prque generic
* les/vflux/server: fixed issues in priorityPool
* common, core, eth, les, trie: make priority also generic in prque
* les/flowcontrol: add test case for priority accumulator overflow
* les/flowcontrol: avoid priority value overflow
* common/prque: use int priority in some tests
No need to convert to int64 when we can just change the type used by the
queue; a usage sketch of the generic queue follows this list.
* common/prque: remove comment about int64 range
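A usage sketch of the reworked queue; the constructor signature and type-parameter order are assumptions about the new prque API, not a verbatim copy of it.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/prque"
)

func main() {
	// Both the priority and the value are now type parameters, so callers no
	// longer box values in interface{} or convert priorities to int64.
	q := prque.New[int, string](nil)
	q.Push("low", 1)
	q.Push("high", 42)
	for !q.Empty() {
		value, priority := q.Pop()
		fmt.Println(priority, value)
	}
}
```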
---------
Co-authored-by: Zsolt Felfoldi <zsfelfoldi@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This change ports some changes from the main PBSS PR:
- get rid of callback function in `trie.Database.Commit` which is not required anymore
- rework the `nodeResolver` in `trie.Iterator` to make it compatible with multiple state schemes
- some other shallow changes in tests and typo fixes
According to the spec, the payloadID needs to be random or dependent on all arguments to prevent two payloads from clashing. This change adds the withdrawals to the payload ID derivation.
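A sketch of the derivation this extends; the struct layout and helper name below are illustrative stand-ins for the engine API types, not the exact geth code.

```go
package payload

import (
	"crypto/sha256"
	"encoding/binary"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rlp"
)

// attributes mirrors the payload-building arguments; the real struct lives in
// the engine API types and may be named differently.
type attributes struct {
	Timestamp    uint64
	Random       common.Hash
	FeeRecipient common.Address
	Withdrawals  []*types.Withdrawal
}

// computePayloadID hashes every argument of the request, now including the
// withdrawals, so two distinct requests cannot yield the same payload ID.
func computePayloadID(parent common.Hash, args *attributes) [8]byte {
	h := sha256.New()
	h.Write(parent[:])
	binary.Write(h, binary.BigEndian, args.Timestamp)
	h.Write(args.Random[:])
	h.Write(args.FeeRecipient[:])
	if args.Withdrawals != nil {
		rlp.Encode(h, args.Withdrawals) // the new ingredient added by this change
	}
	var id [8]byte
	copy(id[:], h.Sum(nil))
	return id
}
```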
---------
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR moves core/beacon to beacon/engine so that beacon-chain related code has its own top-level package, which can also house the beacon light client code.
This PR moves some trie-related db accessor methods to a different file and removes the schema type; a string is used instead to distinguish between hash-based and path-based db accessors.
This also moves some code from trie package to rawdb package.
This PR is intended to be a no-functionality-change prep PR for #25963.
---------
Co-authored-by: Gary Rong <garyrong0905@gmail.com>
This change implements withdrawals as specified in EIP-4895.
Co-authored-by: lightclient@protonmail.com <lightclient@protonmail.com>
Co-authored-by: marioevz <marioevz@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR changes the API so that uint64 is used for fork timestamps.
It's a good choice because types.Header also uses uint64 for time.
Co-authored-by: Felix Lange <fjl@twurst.com>
This change introduces a breaking change to how miner.etherbase is configured.
Previously, users did not need to explicitly set the etherbase address via flag, since the first local account was used as the etherbase automatically. This change removes the "default first account" feature.
In the proof-of-stake world, the fee recipient address is provided by the CL and no longer configured in Geth, meaning that miner.etherbase is mostly for legacy networks (PoW, Clique networks etc).
In legacy (pre-merge) sync mode, headers were contiguously downloaded from the network, and when no more headers were available, we checked every few seconds whether there were 64 new blocks to move the pivot.
In beacon (post-merge) sync mode, we don't need to check for new skeleton headers non-stop, since those are delivered one by one via the engine API. The missing piece in the header fetcher was to actually look at the latest head and move the pivot if it was more than 2*64-8 blocks away. This PR adds the missing movement logic.
This makes non-JS tracers execute all block txs on a single goroutine.
In the previous implementation, we used to prepare every tx pre-state
on one goroutine, and then run the transactions again with tracing enabled.
Native tracers are usually faster, so it is faster overall to use their output as
the pre-state for tracing the next transaction.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR removes the notion of fakeStorage from the state objects, and instead, for any state modifications that are needed, it simply makes the changes.
This changes the StorageTrie method to return an error when the trie
is not available. It used to return an 'empty trie' in this case, but that's
not possible anymore under PBSS.
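A sketch of the new call shape for callers; the exact StorageTrie signature is an assumption based on this description.

```go
package example

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/state"
)

// dumpStorageRoot illustrates the error-aware call shape: a missing trie now
// surfaces as an error instead of silently coming back as an empty trie.
func dumpStorageRoot(db *state.StateDB, addr common.Address) error {
	tr, err := db.StorageTrie(addr)
	if err != nil {
		// Under PBSS the trie may simply not be openable any more.
		return fmt.Errorf("storage trie unavailable for %s: %w", addr.Hex(), err)
	}
	if tr == nil {
		return nil // the account has no storage
	}
	fmt.Println("storage root:", tr.Hash())
	return nil
}
```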
This PR builds on #26299, but also updates the tests to the most recent version, which includes tests regarding TheMerge.
This change adds checks to the beacon consensus engine, making it more strict in validating the pre- and post-headers, and not relying on the caller to have already correctly sanitized the headers/blocks.
This ensures that RPC method handlers will react to a timeout or
cancelled request soon after the event occurs.
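A minimal sketch of the handler-side pattern this enables; the handler itself is hypothetical.

```go
package example

import (
	"context"
	"time"
)

// slowHandler is a hypothetical RPC method handler. The point of this change
// is that the context it receives is now cancelled promptly on timeout or
// client disconnect, so the ctx.Done() branch actually fires early.
func slowHandler(ctx context.Context) error {
	for i := 0; i < 100; i++ {
		select {
		case <-ctx.Done():
			return ctx.Err() // the request timed out or was cancelled
		case <-time.After(10 * time.Millisecond):
			// one slice of the actual work
		}
	}
	return nil
}
```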
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Currently, calling `debug_traceTransaction` with a transaction hash that doesn't exist returns a confusing error: `genesis is not traceable`. This PR changes the behaviour to instead return an error message saying `transaction not found`.
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR makes it possible to modify the flush interval time via RPC. On one extreme, `0s`, the node would act like an archive node. If set to `1h`, it means that the trie would be flushed after one hour of effective block processing time. If one block takes 200ms, this means that a flush would occur every `5*3600=18000` blocks; however, if the memory size of the cached states grows too large, it will flush sooner.
Essentially, this makes it possible to configure the node to be more or less "archive-ish", without restarting the node to reconfigure it.
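For example, over RPC from Go; the method name `debug_setTrieFlushInterval` is an assumption based on this description rather than a confirmed API.

```go
package example

import (
	"context"

	"github.com/ethereum/go-ethereum/rpc"
)

// setFlushInterval nudges a running node towards (or away from) archive-like
// behaviour without a restart. interval is a Go duration string, e.g. "0s"
// (archive-like) or "1h".
func setFlushInterval(ctx context.Context, client *rpc.Client, interval string) error {
	return client.CallContext(ctx, nil, "debug_setTrieFlushInterval", interval)
}
```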
--syncTarget is a feature for development purposes in the post-merge world. Previously
it was added to eth.Config, but it turns out that was a stupid idea:
- syncTarget is a block object, which is hard to put in a config file (it is large)
- syncTarget is just a dev feature, so it doesn't make much sense to have it in the config file
So this removes it from the eth config object. It also fixes #26328.
This removes the 'time' field from logs, as well as from the tracer interface. This change makes the trace output deterministic. If a tracer needs the time, it can measure it itself; there is no need for the EVM to do this.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This PR introduces a node scheme abstraction. The interface is only implemented by `hashScheme` at the moment, but will be extended by `pathScheme` very soon.
Apart from that, a few other changes worth mentioning are also included:
- port the changes in the stacktrie, tracking the path prefix of nodes during commit
- use ethdb.Database for constructing trie.Database. This is not necessary right now, but it is required for the path-based scheme to open the reverse diff freezer
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interface values.
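A usage sketch; the package path, constructor name, and generics signature are assumptions, the point is just what a fully typed cache looks like to a caller.

```go
package example

import "github.com/ethereum/go-ethereum/common/lru"

// blockNumToHash shows the fully typed API: keys and values keep their real
// types, so Get needs no type assertion and Add avoids boxing allocations.
func blockNumToHash() {
	cache := lru.NewCache[uint64, string](16) // constructor name is an assumption
	cache.Add(1, "0xabc...")
	if v, ok := cache.Get(1); ok {
		_ = v // v is a string, not an interface{}
	}
}
```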
This PR changes the pending tx subscription to return RPCTransaction types instead of normal Transaction objects. This will fix the inconsistencies with other tx-returning API methods (e.g. getTransactionByHash), and also fill in the sender value for the tx.
co-authored by @s1na
This PR now also includes a fix to the problem of multiple routines building blocks on the same input. This PR works as before with regards to stopping the work, but it will not spin up a second routine if one is already building. So if the CL does N calls to FCU+buildblock, and N calls to GetPayload, only the first of each will do something; the other calls will be mostly no-ops.
This PR also adds printout of the payload id into the logs.
In some cases it is desirable to capture what is triggered by each call when using the `callTracer`. For example, a call to `USDT.transfer` will trigger a `Transfer(from, to, value)` event.
This PR adds the option to capture logs to the call tracer, by specifying `{"withLog": true}` in the tracerconfig.
Any logs belonging to failed/reverted call-scopes are removed from the output, to prevent interpretation mistakes.
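A hedged Go sketch of requesting it over RPC; the `withLog` flag is from this change, while the surrounding client plumbing is illustrative.

```go
package example

import (
	"context"
	"encoding/json"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

// traceWithLogs asks the callTracer to also capture logs per call frame.
func traceWithLogs(ctx context.Context, client *rpc.Client, tx common.Hash) (json.RawMessage, error) {
	var result json.RawMessage
	err := client.CallContext(ctx, &result, "debug_traceTransaction", tx, map[string]interface{}{
		"tracer":       "callTracer",
		"tracerConfig": map[string]bool{"withLog": true},
	})
	return result, err
}
```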
Signed-off-by: Delweng <delweng@gmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Inner call reverts will now return the reason, similar to the top-level call. Separately, if the top-level call is of type CREATE and it fails, its `to` field will now be cleared to `0x00...00` instead of being set to the created address.
This PR adds a startup parameter, --synctarget. The --synctarget flag is a developer flag that can be useful in some scenarios as a replacement for a CL node. It defines a fixed block sync target:
geth --syncmode=full --synctarget=./block_15816882.hex_rlp
The --synctarget flag is only available with --syncmode=full.
* eth/tracers: fix gasUsed in call tracer
* fix js tracers gasUsed
* fix legacy prestate tracer
* fix restGas in test
* drop intrinsicGas field from js tracers
The prestate tracer did not report accounts that existed at a given address prior to a contract being created at that address.
Signed-off-by: Delweng <delweng@gmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
In some cases, an inner contract creation may not be successful, so the inner contract is not created. This PR fixes a crash that could occur when tracing in such situations.
This PR adds a way to subscribe to the _full_ pending transactions, as opposed to just being notified about hashes.
In use cases where a client subscribes to newPendingTransactions and gets tx hashes, only to then request the actual transactions, the caller can now shortcut that flow and obtain the transactions directly.
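A sketch of the shortcut from Go; the extra boolean argument selecting full transactions is an assumption about how the subscription is parameterised.

```go
package example

import (
	"context"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/rpc"
)

// subscribeFullPending receives full pending transactions rather than hashes,
// so no follow-up eth_getTransactionByHash calls are needed.
func subscribeFullPending(ctx context.Context, client *rpc.Client) (*rpc.ClientSubscription, <-chan *types.Transaction, error) {
	txs := make(chan *types.Transaction, 64)
	sub, err := client.EthSubscribe(ctx, txs, "newPendingTransactions", true)
	return sub, txs, err
}
```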
Co-authored-by: Felix Lange <fjl@twurst.com>
Prior to this change, f.begin (and possibly f.end) stayed negative, leading to strange results later in the code. With this change, filters using the "safe" and "finalized" block tags produce results consistent with the overall behavior of this RPC method.
Co-authored-by: Martin Holst Swende <martin@swende.se>
* ethclient/gethclient: improve time-sensitive flaky test
* eth/catalyst: fix (?) flaky test
* core: stop blockchains in tests after use
* core: fix dangling blockchain instances
* core: rm whitespace
* eth/gasprice, eth/tracers, consensus/clique: stop dangling blockchains in tests
* all: address review concerns
* core: goimports
* eth/catalyst: fix another time-sensitive test
* consensus/clique: add snapshot test run function
* core: rename stop() to stopWithoutSaving()
Co-authored-by: Felix Lange <fjl@twurst.com>
This PR introduces a new mechanism in the chain tracer to prevent the creation of too many trace states.
The workflow of the chain tracer can be divided into several parts:
- the state creator generates trace states in one thread
- the state tracer retrieves the trace states and applies the tracing on top in another thread
- the state collector gathers all results from the state tracer and streams them to users
It's basically a producer-consumer model. If the state producer generates states too fast, lots of unused states will accumulate in memory. Even worse, in the path-based state scheme only the latest 128 states are kept in memory, and each newly generated state invalidates the oldest one by marking it as stale.
The solution is to limit the speed of state generation: if there are more than 128 un-consumed states in memory, state creation is paused until states have been consumed.
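The throttle is essentially a counting semaphore between the producer and the consumer; a minimal sketch of the idea (the 128 limit mirrors the description, the types and names are illustrative).

```go
package example

// traceState stands in for one prepared tracing state; the real type lives in
// the chain tracer and carries a release function.
type traceState struct {
	block   uint64
	release func()
}

// produceStates never lets more than maxInFlight un-consumed states exist at
// once: it acquires a semaphore slot before building a state, and the slot is
// only given back when the consumer releases that state.
func produceStates(blocks []uint64, out chan<- *traceState) {
	const maxInFlight = 128
	slots := make(chan struct{}, maxInFlight)

	defer close(out)
	for _, number := range blocks {
		slots <- struct{}{} // blocks while 128 states are still pending
		st := &traceState{block: number}
		st.release = func() { <-slots } // called by the consumer when done
		out <- st
	}
}

// consumeStates traces on top of each state and then releases it, which is
// what allows the producer to continue.
func consumeStates(in <-chan *traceState) {
	for st := range in {
		// ... apply the tracing on top of st ...
		st.release()
	}
}
```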
Backwards compatibility warning: The result will from now on omit empty fields instead
of including a zero value (e.g. no more `balance: '0x'`).
The prestateTracer will now take an option `diffMode: bool`. In this mode
the tracer will output the pre- and post-state data for the modified parts of the state.
Read-only accesses will be completely omitted. Creations (be it an account or a slot)
will be signified by omission from the `pre` list and inclusion in `post`, whereas
deletions (be it an account or a slot) will be signified by inclusion in `pre` and omission
from the `post` list.
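A hedged sketch of requesting the diff mode over RPC; only the `diffMode` flag comes from this change, the client code around it is illustrative.

```go
package example

import (
	"context"
	"encoding/json"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

// prestateDiff returns the {"pre": ..., "post": ...} result for a transaction.
func prestateDiff(ctx context.Context, client *rpc.Client, tx common.Hash) (json.RawMessage, error) {
	var result json.RawMessage
	err := client.CallContext(ctx, &result, "debug_traceTransaction", tx, map[string]interface{}{
		"tracer":       "prestateTracer",
		"tracerConfig": map[string]bool{"diffMode": true},
	})
	return result, err
}
```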
Signed-off-by: Delweng <delweng@gmail.com>
This PR makes it so that the snap server responds to trie heal requests when possible, even if the snapshot does not exist. The idea being that it might prolong the lifetime of a state root, so we don't have to pivot quite as often.
The call tracer and prestate tracer store data JSON-encoded in memory. In order to support alternative encodings (specifically RLP), it's better to keep the data in a native format during tracing. This PR does the marshalling at the end, using gencodec.
Note!
This PR changes the call tracer result slightly:
- The order of the type and value fields is changed (should not matter).
- Output fields are now completely omitted when they are empty (no more `output: "0x"`). Previously, the field was only _sometimes_ omitted (e.g. when the call ended in a non-revert error) and was otherwise "0x" when the output was actually empty.
* eth/tracers: pad memory slice on oob case
* eth/tracers/js: fix testfailure due to err msg capitalization
Co-authored-by: Martin Holst Swende <martin@swende.se>
Sometimes we get stuck on db compaction, and the CL re-issues the "same" command to us multiple times. Each request gets stuck in the same place, in the middle of the handler.
This change makes it so we do not reprocess the same payload, but instead detect it early.
This changes the CI / release builds to use the latest Go version. It also
upgrades golangci-lint to a newer version compatible with Go 1.19.
In Go 1.19, godoc has gained official support for links and lists. The
syntax for code blocks in doc comments has changed and now requires a
leading tab character. gofmt adapts comments to the new syntax
automatically, so there are a lot of comment re-formatting changes in this
PR. We need to apply the new format in order to pass the CI lint stage with
Go 1.19.
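For reference, a tiny example of the comment shape gofmt now produces; the function itself is a stand-in.

```go
package example

// Sum adds the given values.
//
// Usage:
//
//	total := Sum(1, 2, 3)
//
// The code line above starts with a tab, which the Go 1.19 doc comment format
// requires for code blocks; gofmt rewrites older comments into this shape.
func Sum(xs ...int) (n int) {
	for _, x := range xs {
		n += x
	}
	return n
}
```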
With the linter upgrade, I have decided to disable 'gosec' - it produces
too many false-positive warnings. The 'deadcode' and 'varcheck' linters
have also been removed because golangci-lint warns about them being
unmaintained. 'unused' provides similar coverage and we already have it
enabled, so we don't lose much with this change.
This PR simplifies the logic of the chain tracer and also adds unit tests.
The most important change made in this PR is the state management. Whenever a tracing state is acquired, a corresponding release function is returned as well. It must be called once the state is no longer needed, otherwise resources can leak.
The logic of state management has also been simplified a lot. Specifically, the state provider (eth backend, les backend) ensures the state is available and referenced. Consumers can use the state according to their own needs, or build other states on top of the given one, but once the release function is called there is no guarantee that the state remains available.
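The contract boils down to an acquire/use/release pattern; a hedged sketch in which the accessor and type names are placeholders for whatever the backend exposes.

```go
package example

import "github.com/ethereum/go-ethereum/core/state"

// StateReleaseFunc must be called when the caller is done with the state;
// after that there is no guarantee the state remains available.
type StateReleaseFunc func()

// traceWithState shows the intended usage: acquire, use, release. stateAtBlock
// is a placeholder for the provider's actual accessor.
func traceWithState(stateAtBlock func() (*state.StateDB, StateReleaseFunc, error)) error {
	statedb, release, err := stateAtBlock()
	if err != nil {
		return err
	}
	defer release() // forgetting this leaks the referenced state

	_ = statedb // ... run the tracer on top of statedb ...
	return nil
}
```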
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>