This fixes a bug where contract code would be overridden to empty code ("0x")
when the Code field of OverrideAccount was left nil. The change also cleans up
the encoding of overrides to only send necessary fields, and improves documentation.
Fixes #25615
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
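To illustrate the encoding side of the fix above, here is a minimal, hypothetical sketch (not the actual geth types): pointer fields combined with omitempty mean that overrides left nil are simply not encoded or applied, so a nil Code can no longer degrade into an empty-code ("0x") override.

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"
)

// OverrideAccount is a simplified stand-in for a state-override entry.
// Pointer fields distinguish "not set" (nil, leave the account alone)
// from "set to an empty value".
type OverrideAccount struct {
	Nonce   *uint64  `json:"nonce,omitempty"`
	Code    *string  `json:"code,omitempty"`
	Balance *big.Int `json:"balance,omitempty"`
}

func main() {
	// Only the balance is overridden; nonce and code stay untouched and
	// are not sent at all.
	out, _ := json.Marshal(OverrideAccount{Balance: big.NewInt(1e18)})
	fmt.Println(string(out)) // {"balance":1000000000000000000}
}
```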
The call tracer and prestate tracer store data JSON-encoded in memory. In order to support alternative encodings (specifically RLP), it's better to keep the data in a native format during tracing. This PR does the marshalling at the end, using gencodec.
Note: this PR changes the call tracer result slightly:
- The order of the type and value fields is changed (should not matter).
- Output fields are omitted entirely when they are empty (no more output: "0x"). Previously, the field was only _sometimes_ omitted (e.g. when the call ended in a non-revert error) and was otherwise "0x" when the output was actually empty.
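As a rough sketch of the new approach (simplified, hypothetical struct, not the real tracer type): the trace is accumulated in native Go values and only marshalled once at the end, which also makes the omit-when-empty behaviour of output explicit.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// callFrame is a simplified stand-in for the tracer's in-memory call
// representation. Data stays in Go types while tracing runs; JSON (or,
// potentially, RLP) encoding happens only once tracing has finished.
type callFrame struct {
	Type   string        `json:"type"`
	Input  hexutil.Bytes `json:"input"`
	Output hexutil.Bytes `json:"output,omitempty"` // omitted entirely when empty
	Calls  []callFrame   `json:"calls,omitempty"`
}

func main() {
	frame := callFrame{Type: "CALL", Input: hexutil.Bytes{0xde, 0xad}}
	out, _ := json.Marshal(frame)
	fmt.Println(string(out)) // {"type":"CALL","input":"0xdead"} -- no "output":"0x"
}
```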
Some tests define an 'expectException' value, but the test runner does not check the case where this value is set (an error is expected) yet no error is actually returned by the test runner.
An example of this scenario is GeneralStateTests/stTransactionTest/HighGasPrice.json, which expects a 'TR_NoFunds' error, but the test runner does not return any error.
Signed-off-by: meows <b5c6@protonmail.com>
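A hedged sketch of the missing check (hypothetical helper, not the actual test-runner code): when 'expectException' is set, the absence of an error must also fail the test.

```go
package main

import "fmt"

// checkExpectedError illustrates both directions of the check: fail on an
// unexpected error, and fail when an error was expected but none occurred.
func checkExpectedError(expectException string, err error) error {
	switch {
	case expectException == "" && err != nil:
		return fmt.Errorf("unexpected error: %v", err)
	case expectException != "" && err == nil:
		return fmt.Errorf("expected error %q, got none", expectException)
	default:
		return nil
	}
}

func main() {
	// e.g. HighGasPrice.json expects TR_NoFunds, but the runner returns nil.
	fmt.Println(checkExpectedError("TR_NoFunds", nil))
}
```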
`geth dumpgenesis` currently does not respect the content of the data directory. Instead, it outputs the genesis block created by command-line flags. This PR fixes it to read the genesis from the database, if the database already exists.
Co-authored-by: Martin Holst Swende <martin@swende.se>
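Conceptually (a hedged sketch with a hypothetical accessor, not the actual command code): if a chain database already exists in the data directory, the stored genesis is printed; only otherwise does the command fall back to the flag-derived genesis.

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
)

// genesisToDump outlines the new behaviour: prefer the genesis stored in the
// data directory's database over the one built from command-line flags.
func genesisToDump(db ethdb.Database, fromFlags *core.Genesis) (*core.Genesis, error) {
	if stored := rawdb.ReadCanonicalHash(db, 0); stored != (common.Hash{}) {
		// A chain exists on disk: reconstruct the genesis from the database.
		return readGenesisFromDB(db) // hypothetical helper
	}
	return fromFlags, nil
}

// readGenesisFromDB stands in for whatever accessor rebuilds a core.Genesis
// from the stored genesis block and chain config.
func readGenesisFromDB(db ethdb.Database) (*core.Genesis, error) {
	// Omitted: read the genesis header, alloc and chain config from db.
	return nil, nil
}
```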
* eth/tracers: pad memory slice on oob case
* eth/tracers/js: fix test failure due to err msg capitalization
Co-authored-by: Martin Holst Swende <martin@swende.se>
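A minimal sketch of the padding behaviour (hypothetical helper, not the exact tracer code): a request that runs past the end of EVM memory returns the available bytes zero-padded to the requested size instead of slicing out of bounds.

```go
package main

import "fmt"

// memoryPadded returns size bytes starting at offset, zero-padding whatever
// lies beyond the end of mem, so tracers asking for an out-of-bounds range
// still get a slice of the expected length.
func memoryPadded(mem []byte, offset, size int64) []byte {
	out := make([]byte, size) // zero-filled by default
	if offset < int64(len(mem)) {
		copy(out, mem[offset:])
	}
	return out
}

func main() {
	mem := []byte{1, 2, 3}
	fmt.Println(memoryPadded(mem, 2, 4)) // [3 0 0 0]
}
```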
This PR cleans up the configurations for pruner and snapshotter by passing a config struct.
It also disables snapshot background generation if the chain is opened in "read-only" mode. Read-only mode is necessary in some cases; for example, several commands, such as export-chain, open the ethereum node in "read-only" mode. In these cases, snapshot background generation is not expected and should be disabled explicitly.
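Roughly, the shape of the change (a hedged sketch with simplified fields, not necessarily the exact geth struct): the growing list of positional parameters becomes a single config value, which can also express the read-only case.

```go
package snapshot // illustrative placement only

// Config bundles the options that were previously passed as separate
// parameters to the snapshotter (field set simplified for illustration).
type Config struct {
	CacheSize  int  // megabytes of memory allowed for caching
	Recovery   bool // indicator that the snapshot is in recovery mode
	NoBuild    bool // do not generate snapshots in the background (read-only mode)
	AsyncBuild bool // generate the snapshot asynchronously at startup
}
```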
The abigen exclusion pattern, previously of the form "path:type", now supports wildcards. For example, "*:type" excludes a named type in all files, and "/path/to/foo.sol:*" excludes all types in foo.sol.
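Conceptually, the matching could look like this (a hedged sketch using path.Match, not the actual abigen implementation):

```go
package main

import (
	"fmt"
	"path"
)

// excluded reports whether a "file:type" identifier matches any exclusion
// pattern; patterns may contain wildcards such as "*:ERC20" or
// "/path/to/foo.sol:*". Hypothetical helper, simplified from the real code.
func excluded(patterns []string, file, typ string) bool {
	for _, p := range patterns {
		if ok, _ := path.Match(p, file+":"+typ); ok {
			return true
		}
	}
	return false
}

func main() {
	patterns := []string{"*:ERC20", "/path/to/foo.sol:*"}
	fmt.Println(excluded(patterns, "token.sol", "ERC20"))      // true
	fmt.Println(excluded(patterns, "/path/to/foo.sol", "Bar")) // true
	fmt.Println(excluded(patterns, "bar.sol", "Token"))        // false
}
```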
This changes the CI build to store the git commit and date into package
internal/version instead of package main. Doing this essentially merges our
two ways of tracking the go-ethereum version into a single place, achieving
two objectives:
- Bad block reports, which use version.Info(), will now have the git commit
information even when geth is built in an environment such as
launchpad.net where git access is unavailable.
- For geth builds created by `go build ./cmd/geth` (i.e. not using `go run
build/ci.go install`), git information stored by the go tool is now used
in the p2p node name as well as in `geth version` and `geth
version-check`.
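For builds made with plain `go build`, the git details come from the build metadata that the go tool embeds into the binary; roughly (a hedged sketch, not the actual internal/version code):

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// vcsInfo extracts the commit hash and commit time that `go build` records
// when the module is built inside a git checkout. The real internal/version
// package does more (dirty flag, fallback to values set by ci.go).
func vcsInfo() (commit, date string) {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		return "", ""
	}
	for _, s := range info.Settings {
		switch s.Key {
		case "vcs.revision":
			commit = s.Value
		case "vcs.time":
			date = s.Value
		}
	}
	return commit, date
}

func main() {
	commit, date := vcsInfo()
	fmt.Printf("commit=%s date=%s\n", commit, date)
}
```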
Sometimes we get stuck on db compaction, and the CL re-issues the "same" command to us multiple times. Each request gets stuck at the same place, in the middle of the handler.
This change makes it so we do not reprocess the same payload, but instead detect it early.
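One way to picture the early exit (a hedged, simplified sketch with hypothetical names, not the real engine API handler): remember which payload is currently being processed and return immediately if the same one arrives again.

```go
package sketch

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
)

// payloadGuard detects duplicate requests up front: if the CL re-sends the
// payload we are already working on (for example while the node is stuck in
// db compaction), the handler can bail out instead of re-entering the heavy path.
type payloadGuard struct {
	mu   sync.Mutex
	busy map[common.Hash]bool
}

func (g *payloadGuard) tryStart(block common.Hash) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.busy == nil {
		g.busy = make(map[common.Hash]bool)
	}
	if g.busy[block] {
		return false // same payload already in flight, skip reprocessing
	}
	g.busy[block] = true
	return true
}

func (g *payloadGuard) done(block common.Hash) {
	g.mu.Lock()
	defer g.mu.Unlock()
	delete(g.busy, block)
}
```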
core/blockchain: downgrade tx indexing and unindexing logs from info to debug
If a user has a finite tx lookup limit, they will see an "unindexing" info-level log each time a block is imported. This information might help a user understand that the index is being pruned with each block and that some txs may not be retrievable by hash, but overall it is more of a nuisance than a benefit. This change downgrades the log to a debug log.
This shortens the chain config summary in bad block reports,
and adds go-ethereum version information as well.
Co-authored-by: Felix Lange <fjl@twurst.com>
This change makes eth_getProof and eth_getStorageAt return an error when
the argument contains invalid hex in storage keys.
Co-authored-by: Felix Lange <fjl@twurst.com>
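Schematically (a hedged sketch with a hypothetical helper name; the real handler differs): each storage key is decoded strictly, and invalid hex aborts the request rather than being treated as a zero key.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
)

// decodeStorageKeys rejects invalid hex instead of quietly producing an
// all-zero key for a malformed argument.
func decodeStorageKeys(keys []string) ([]common.Hash, error) {
	out := make([]common.Hash, len(keys))
	for i, k := range keys {
		b, err := hexutil.Decode(k)
		if err != nil {
			return nil, fmt.Errorf("invalid storage key %q: %v", k, err)
		}
		out[i] = common.BytesToHash(b)
	}
	return out, nil
}

func main() {
	_, err := decodeStorageKeys([]string{"0xzz"})
	fmt.Println(err)
}
```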
* cmd/geth: add a verkle subcommand
* fix copyright year
* remove unused command parameters
* check that the output file was successfully written to
Co-authored-by: Martin Holst Swende <martin@swende.se>
* cmd/geth: goimports fix
Co-authored-by: Martin Holst Swende <martin@swende.se>
This changes the CI / release builds to use the latest Go version. It also
upgrades golangci-lint to a newer version compatible with Go 1.19.
In Go 1.19, godoc has gained official support for links and lists. The
syntax for code blocks in doc comments has changed and now requires a
leading tab character. gofmt adapts comments to the new syntax
automatically, so there are a lot of comment re-formatting changes in this
PR. We need to apply the new format in order to pass the CI lint stage with
Go 1.19.
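For reference, the reformatted doc comments follow the new conventions, roughly like this (illustrative snippet):

```go
// Package example demonstrates the Go 1.19 doc comment syntax.
//
// Code blocks now require a leading tab:
//
//	result := Do(x)
//
// Lists are supported as well:
//   - item one
//   - item two
//
// See https://go.dev/doc/comment for the full specification.
package example
```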
With the linter upgrade, I have decided to disable 'gosec' - it produces
too many false-positive warnings. The 'deadcode' and 'varcheck' linters
have also been removed because golangci-lint warns about them being
unmaintained. 'unused' provides similar coverage and we already have it
enabled, so we don't lose much with this change.
This PR makes the event-sending for deleted and new logs happen in batches, to prevent OOM situations due to large reorgs.
Co-authored-by: Felix Lange <fjl@twurst.com>
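The batching idea, as a hedged sketch (hypothetical names and sizes, not the actual core/blockchain code): instead of posting one huge slice of logs after a reorg, the logs are sent to subscribers in fixed-size chunks.

```go
package sketch

import (
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/event"
)

// sendLogsBatched emits removed or new logs in chunks so that a very large
// reorg does not force one enormous event payload into memory at once.
// batchSize is an arbitrary example value chosen by the caller.
func sendLogsBatched(feed *event.Feed, logs []*types.Log, batchSize int) {
	for start := 0; start < len(logs); start += batchSize {
		end := start + batchSize
		if end > len(logs) {
			end = len(logs)
		}
		feed.Send(logs[start:end])
	}
}
```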
This changes the error code returned by the RPC server in certain situations:
- handler panic: code -32603
- result marshaling error: code -32603
- attempt to subscribe via HTTP: code -32001
In all of the above cases, the server previously returned the default error
code -32000.
Co-authored-by: Nicholas Zhao <nicholas.zhao@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
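In the rpc package, a server-side error can carry its own code by satisfying the rpc.Error interface (Error() string plus ErrorCode() int); here is a hedged sketch with a hypothetical type, not the package's internal ones:

```go
package main

import "fmt"

// notificationsUnsupportedError is an illustrative error type that carries a
// specific JSON-RPC error code, in the style of the rpc.Error interface.
type notificationsUnsupportedError struct{}

func (notificationsUnsupportedError) Error() string {
	return "notifications not supported"
}

func (notificationsUnsupportedError) ErrorCode() int { return -32001 }

func main() {
	var err error = notificationsUnsupportedError{}
	fmt.Println(err) // notifications not supported
}
```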
The p2p msgrate tracker tries to estimate a mean round-trip time. However, it did so in a very curious way: if a node had 200 peers, it would sort their 200 respective rtt estimates and then pick item number 2 as the mean, effectively taking the third fastest and calling it the mean. This probably works "ok" when the number of peers is low (there are other factors too, such as ttlScaling, which takes some of the edge off this) -- however, when the number of peers is high, it becomes very skewed.
This PR instead bases the 'mean' on the square root of the length of the list. Still pretty harsh, but a bit more lenient.
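The selection change in sketch form (simplified, not the exact msgrate code): sort the per-peer round-trip estimates and take the element at index sqrt(n) instead of the fixed index 2.

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// medianRTT picks the estimate at index sqrt(n) of the sorted list, which is
// far less skewed toward the very fastest peers than a fixed index when the
// peer count is large (simplified sketch of the idea).
func medianRTT(rtts []time.Duration) time.Duration {
	if len(rtts) == 0 {
		return 0
	}
	sort.Slice(rtts, func(i, j int) bool { return rtts[i] < rtts[j] })
	idx := int(math.Sqrt(float64(len(rtts))))
	if idx >= len(rtts) {
		idx = len(rtts) - 1
	}
	return rtts[idx]
}

func main() {
	rtts := make([]time.Duration, 200)
	for i := range rtts {
		rtts[i] = time.Duration(i+1) * time.Millisecond
	}
	// With 200 peers, sqrt(200) ~ 14, so the 15th fastest estimate is used
	// instead of the 3rd fastest.
	fmt.Println(medianRTT(rtts))
}
```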