This change adds support for logging JSON records when the --log.json flag is
given. The --debug and --backtrace flags are deprecated and replaced by
--log.debug and --log.backtrace.
While making this change, we noticed that --memprofilerate and
--blockprofilerate were ineffective: they were always overridden, even
when --pprof.memprofilerate was not set. This is also fixed.
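The essence of the rate fix, as a minimal sketch (assuming geth's
urfave/cli-style flag handling; the flag variable below is illustrative):
the runtime default is only overridden when the flag was actually given.

package debug

import (
	"runtime"

	"gopkg.in/urfave/cli.v1"
)

var memProfileRateFlag = cli.IntFlag{
	Name:  "pprof.memprofilerate",
	Usage: "Turn on memory profiling with the given rate",
	Value: runtime.MemProfileRate,
}

// setupProfiling overrides the runtime default only when the flag was
// actually set; the bug was overriding it unconditionally.
func setupProfiling(ctx *cli.Context) {
	if ctx.GlobalIsSet(memProfileRateFlag.Name) {
		runtime.MemProfileRate = ctx.GlobalInt(memProfileRateFlag.Name)
	}
}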
Co-authored-by: Felix Lange <fjl@twurst.com>
This adds support for EIP-2930 access list transactions (typed
transactions per EIP-2718) in the GraphQL API.
Co-authored-by: Amit Shah <amitshah0t7@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This removes the duplicated definition of eth_chainId
in package eth and updates the definition in internal/ethapi
to treat the chain ID as a bigint.
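A minimal sketch of what treating the chain ID as a bigint looks like at
the API surface, assuming the hexutil helpers (the exact upstream
signature may differ):

package ethapi

import (
	"math/big"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// chainID returns the configured chain ID as a bigint-typed value, which
// the JSON-RPC layer marshals as a 0x-prefixed quantity (e.g. "0x1" for
// mainnet).
func chainID(configured *big.Int) *hexutil.Big {
	return (*hexutil.Big)(configured)
}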
Co-authored-by: Felix Lange <fjl@twurst.com>
This fixes a rare issue where the client subscription forwarding loop
would attempt to send on the subscription's channel after Unsubscribe had
returned, leading to a panic if the channel had already been closed by
the user. Example:
sub, _ := client.Subscribe(..., channel, ...)
sub.Unsubscribe()
close(channel)
The race occurred because Unsubscribe called quitWithServer to tell the
forwarding loop to stop sending on sub.channel, but did not wait for the
loop to actually exit. This is fixed by adding an additional channel
to track the shutdown, on which Unsubscribe now waits.
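A minimal sketch of the pattern, with illustrative names rather than the
actual rpc package internals:

package rpcsketch

type subscription struct {
	channel chan interface{} // user-supplied delivery channel
	quit    chan struct{}    // closed by Unsubscribe to stop the loop
	done    chan struct{}    // closed by the loop once it will send no more
}

// forward delivers values until asked to quit. Closing done last
// guarantees no send on s.channel can happen after Unsubscribe returns.
func (s *subscription) forward(values <-chan interface{}) {
	defer close(s.done)
	for {
		select {
		case v := <-values:
			select {
			case s.channel <- v:
			case <-s.quit:
				return
			}
		case <-s.quit:
			return
		}
	}
}

func (s *subscription) Unsubscribe() {
	close(s.quit)
	<-s.done // wait for the loop to stop; only now may the user close(s.channel)
}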
Fixes #22322
* core/state/snapshot, ethdb: track deletions more accurately
* core/state/snapshot: don't reset the iterator, leveldb's screwy
* ethdb: don't mess with the insert batches for now
This fixes an issue where the ethstats service could crash if geth was
started and then immediately stopped due to an internal error. The
cause of the crash was a nil subscription returned by the backend,
because the background goroutine that created the subscriptions was
scheduled only after the backend had already shut down.
Moving the creation of subscriptions into the Start method, which runs
synchronously during startup of the node, means the returned subscriptions
can never be 'nil'.
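A sketch of the shape of the fix, assuming illustrative types rather than
the real ethstats code:

package statssketch

type Subscription interface {
	Err() <-chan error
	Unsubscribe()
}

type backend interface {
	SubscribeChainHead(ch chan<- struct{}) Subscription
}

type Service struct {
	backend backend
	headCh  chan struct{}
	headSub Subscription
}

// Start is called synchronously by the node, so headSub is guaranteed
// to be non-nil before the loop goroutine runs.
func (s *Service) Start() error {
	s.headCh = make(chan struct{})
	s.headSub = s.backend.SubscribeChainHead(s.headCh)
	go s.loop()
	return nil
}

func (s *Service) loop() {
	defer s.headSub.Unsubscribe()
	for {
		select {
		case <-s.headCh:
			// report the new head to the stats server
		case <-s.headSub.Err():
			return
		}
	}
}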
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
The main idea behind this change: the range compaction is very expensive
and can take a few hours to finish. During this long procedure, many
kinds of interruptions can occur, e.g.
- Geth is killed manually
- Geth is killed because of a machine crash
- etc.
In order to minimize the effect of these interruptions, the compaction
is moved out of the pruning, so that even if the compaction is not
finished, the pruning is regarded as done.
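As a sketch (function names are illustrative), the new ordering looks
like this:

package prunesketch

type database interface {
	Compact(start, limit []byte) error
}

func pruneState(db database, deleteStale func() error, markDone func() error) error {
	if err := deleteStale(); err != nil {
		return err
	}
	// Persist the "pruning finished" marker first. If geth is killed
	// during the hours-long range compaction below, the pruning itself
	// still counts as done and will not be repeated.
	if err := markDone(); err != nil {
		return err
	}
	// Compaction is now best-effort cleanup outside the pruning proper.
	return db.Compact(nil, nil)
}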
This upgrades the cloudflare client dependency to v0.14.0. The new
version changes the API: all methods now require a context parameter.
This change also reduces the log level of the 'Skipping...' message to
debug, following a similar change in the AWS deployer.
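A sketch of the new calling convention, with placeholder credentials and
zone ID; consult the upstream package docs for the exact signatures:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/cloudflare/cloudflare-go"
)

func main() {
	api, err := cloudflare.New("api-key", "user@example.com") // placeholders
	if err != nil {
		log.Fatal(err)
	}
	// As of v0.14.0, API methods take a context as their first argument.
	ctx := context.Background()
	records, err := api.DNSRecords(ctx, "zone-id", cloudflare.DNSRecord{Type: "TXT"})
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range records {
		fmt.Println(r.Name, r.Content)
	}
}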
The PR implements the --miner.notify.full flag that enables full pending block
notifications. When this flag is used, the block notifications sent to mining
endpoints contain the complete block header JSON instead of a work package
array.
Co-authored-by: AlexSSD7 <alexandersadovskyi7@protonmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* cmd/devp2p: fix comparison of TXT record value
The AWS API returns quoted DNS strings, so we must encode the new value
before comparing it against the existing record content (see the sketch
after this list).
* cmd/devp2p: add test
* cmd/devp2p: fix typo and rename val -> newValue
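A simplified sketch of the comparison (the real record content may
additionally be split into multiple quoted 255-byte strings):

package main

import (
	"fmt"
	"strconv"
)

// isSameTXT reports whether an existing, quoted TXT record content
// already matches newValue. Quoting the candidate first mirrors how
// the AWS API returns the stored value.
func isSameTXT(existingContent, newValue string) bool {
	return existingContent == strconv.Quote(newValue)
}

func main() {
	fmt.Println(isSameTXT(`"hello"`, "hello")) // true: equal once quoted
	fmt.Println(isSameTXT("hello", "hello"))   // false: the unquoted comparison was the bug
}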
Fixes the CaptureStart API to include the EVM, making it possible to set the statedb early on. This PR also exposes the struct used internally in the interpreter to encapsulate the contract, memory, stack and return stack, so it is passed to the tracer as a single struct, and removes the error returns from the capture methods.
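A sketch of the reshaped hooks, with placeholder types standing in for
the vm package's; the exact upstream definitions may differ:

package tracersketch

import (
	"math/big"
	"time"
)

// Placeholder types standing in for the go-ethereum vm types.
type (
	Address     [20]byte
	OpCode      byte
	EVM         struct{}
	Contract    struct{}
	Memory      struct{}
	Stack       struct{}
	ReturnStack struct{}
)

// ScopeContext bundles what the interpreter previously passed piecemeal.
type ScopeContext struct {
	Contract *Contract
	Memory   *Memory
	Stack    *Stack
	Rstack   *ReturnStack
}

type Tracer interface {
	// The EVM is available at start, so a tracer can grab the statedb early.
	CaptureStart(env *EVM, from, to Address, create bool, input []byte, gas uint64, value *big.Int)
	// One scope struct instead of separate contract/mem/stack/rstack
	// arguments, and no error return.
	CaptureState(env *EVM, pc uint64, op OpCode, gas, cost uint64, scope *ScopeContext, depth int, err error)
	CaptureEnd(output []byte, gasUsed uint64, d time.Duration, err error)
}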
In Geth v1.10, we changed the structure of the "les" ENR entry. As a result, the DHT crawler that creates the DNS lists
no longer recognized "les" nodes. This commit fixes that; a sketch of the tail-tolerant decoding follows the list below.
* cmd/devp2p: skip ENR field tails properly in nodeset filter
* cmd/devp2p: fix tail decoder for snap as well
* les: fix tail decoding in "eth" ENR entry
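A sketch of tail-tolerant entry decoding in the style of the rlp
package's "tail" struct tag (forkID here is a stand-in for forkid.ID):

package enrsketch

import "github.com/ethereum/go-ethereum/rlp"

// forkID stands in for forkid.ID.
type forkID struct {
	Hash [4]byte
	Next uint64
}

// ethEntry decodes the "eth" ENR entry. The rlp "tail" tag collects any
// extra list elements, so nodes that add fields later are still
// recognized instead of being rejected by the filter.
type ethEntry struct {
	ForkID forkID
	Tail   []rlp.RawValue `rlp:"tail"`
}

func (ethEntry) ENRKey() string { return "eth" }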
This PR fixes a regression introduced in #22360, when we updated to v2 of the AWS SDK, which caused the crawler to fetch the same first 100 results over and over and get stuck in a loop.
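A sketch of what correct pagination looks like with the v2 SDK (field
shapes are approximate; see the aws-sdk-go-v2 route53 docs): the start
cursors must be advanced from each response.

package route53sketch

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/route53"
)

// collectRecordNames pages through all record sets in a zone. The
// regression was failing to advance the Start* cursors, so every call
// returned the same first page and the loop never terminated.
func collectRecordNames(ctx context.Context, client *route53.Client, zoneID string) ([]string, error) {
	var names []string
	input := &route53.ListResourceRecordSetsInput{HostedZoneId: &zoneID}
	for {
		resp, err := client.ListResourceRecordSets(ctx, input)
		if err != nil {
			return nil, err
		}
		for _, rrs := range resp.ResourceRecordSets {
			names = append(names, *rrs.Name)
		}
		if !resp.IsTruncated {
			return names, nil
		}
		// Advance the cursors; omitting this re-fetched page one forever.
		input.StartRecordName = resp.NextRecordName
		input.StartRecordType = resp.NextRecordType
		input.StartRecordIdentifier = resp.NextRecordIdentifier
	}
}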