--syncTarget is a feature for development purposes in the post-merge world. Previously
it was added to eth.Config, but that turned out to be a bad idea:
- syncTarget is a block object, which is too large to put in a config file
- syncTarget is just a dev feature; it doesn't make much sense to expose it in the config file
So this removes it from the eth config object. It also fixes #26328.
This improves readability of function 'push'.
sort.Search(N, ...) returns at most N when there is no match, so ix should be compared
with N. The previous version compared ix with N+1 in case an additional item
was appended. No bug resulted from this comparison, but it was not easy to understand
why.
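To illustrate the boundary behavior (this is a minimal sketch with hypothetical names, not the actual 'push' code): sort.Search returns N when no element satisfies the predicate, so comparing the result against N is enough to decide between inserting and appending.

```go
package main

import (
	"fmt"
	"sort"
)

// insertSorted inserts v into a sorted slice. sort.Search(n, ...) returns n
// when no element matches, so comparing ix with n (not n+1) distinguishes
// "insert in the middle" from "append at the end".
func insertSorted(items []int, v int) []int {
	n := len(items)
	ix := sort.Search(n, func(i int) bool { return items[i] >= v })
	if ix == n {
		// No existing element is >= v: append at the end.
		return append(items, v)
	}
	items = append(items, 0)       // grow by one
	copy(items[ix+1:], items[ix:]) // shift the tail right
	items[ix] = v
	return items
}

func main() {
	fmt.Println(insertSorted([]int{1, 3, 5}, 7)) // [1 3 5 7]
	fmt.Println(insertSorted([]int{1, 3, 5}, 2)) // [1 2 3 5]
}
```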
Co-authored-by: Felix Lange <fjl@twurst.com>
The gcproc field tracks the amount of time spent processing blocks and is used
to trigger a state flush to disk when a certain threshold is reached. After the
merge, single-block insertion by the CL is the most common source of block
processing time, but this time was not added to gcproc.
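Roughly, the mechanism looks like this (a minimal sketch with hypothetical names and threshold, not the actual core.BlockChain code):

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative sketch: accumulate per-block processing time and flush trie
// state to disk once the accumulated time crosses a threshold.
var (
	gcproc     time.Duration // accumulated block processing time
	flushLimit = 5 * time.Minute
)

func insertBlock(process func() error) error {
	start := time.Now()
	if err := process(); err != nil {
		return err
	}
	// This accumulation is what was missing for single-block inserts
	// driven by the consensus layer.
	gcproc += time.Since(start)
	if gcproc > flushLimit {
		fmt.Println("flushing state to disk") // stand-in for the real flush
		gcproc = 0
	}
	return nil
}

func main() {
	_ = insertBlock(func() error { return nil })
}
```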
Here we add special handling for sending an error response when the write timeout of the
HTTP server is just about to expire. This is surprisingly difficult to get right, since it
must be ensured that all output is fully flushed in time, which needs support from
multiple levels of the RPC handler stack:
The timeout response can't use chunked transfer-encoding because there is no way to write
the final terminating chunk. net/http writes it when the topmost handler returns, but the
timeout will already be over by the time that happens. We decided to disable chunked
encoding by setting content-length explicitly.
Gzip compression must also be disabled for timeout responses because we don't know the
true content-length before compressing all output, i.e. compression would reintroduce
chunked transfer-encoding.
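A simplified sketch of such a response writer (not the actual rpc package code): the Content-Length header is set explicitly so net/http does not fall back to chunked encoding, and the body is not gzipped so its length is known up front.

```go
package main

import (
	"net/http"
	"strconv"
)

// writeTimeoutError is an illustrative sketch, not the real handler: it sends
// an error body with an explicit Content-Length so net/http does not use
// chunked transfer-encoding, and without gzip so the length is known up front.
func writeTimeoutError(w http.ResponseWriter) {
	body := []byte(`{"jsonrpc":"2.0","id":null,"error":{"message":"request timed out"}}`)
	w.Header().Set("Content-Type", "application/json")
	// Setting Content-Length disables chunked encoding; the final chunk
	// could otherwise never be written before the write deadline.
	w.Header().Set("Content-Length", strconv.Itoa(len(body)))
	w.WriteHeader(http.StatusServiceUnavailable)
	w.Write(body)
	// Flush whatever buffering sits between us and the socket.
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		writeTimeoutError(w)
	})
	http.ListenAndServe("127.0.0.1:8545", nil)
}
```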
The new flag allows configuring an explicit endpoint which is to be
announced in the DHT. This feature was originally developed for the
discv5 wormhole experiment (#25798), but it's useful in other contexts
as well.
This removes the 'time' field from logs, as well as from the tracer interface. This change makes the trace output deterministic. If a tracer needs the time, it can measure it itself; there is no need for the EVM to do this.
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
This changes the Pop method to assign the zero value before
reducing slice size. Doing so ensures the backing array does not
reference removed item values.
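A minimal sketch of the pattern, using a hypothetical slice-backed type rather than the actual code:

```go
package main

import "fmt"

type item struct{ value int }

// itemHeap is a hypothetical slice-backed container used only to illustrate
// the pattern; the real change applies the same idea in Pop.
type itemHeap []*item

// Pop removes and returns the last element. Zeroing the slot before shrinking
// the slice ensures the backing array no longer references the removed item,
// so the garbage collector can reclaim it.
func (h *itemHeap) Pop() *item {
	old := *h
	n := len(old)
	it := old[n-1]
	old[n-1] = nil // drop the reference held by the backing array
	*h = old[:n-1]
	return it
}

func main() {
	h := itemHeap{{1}, {2}, {3}}
	fmt.Println(h.Pop().value) // 3
}
```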
This PR drops the legacy receipt types, the freezer-migrate command and the startup check. The previous attempt at this (#22852) failed because there were users who still had legacy receipts in their db, so it had to be reverted (#23247). Since then we added a command to migrate legacy dbs (#24028).
As of the last hardforks, all users must either have done the migration or used the --ignore-legacy-receipts flag, which will stop working now.
This PR introduces a node scheme abstraction. The interface is only implemented by `hashScheme` at the moment, but will be extended by `pathScheme` very soon.
Apart from that, a few changes are also included which are worth mentioning:
- port the changes in the stacktrie, tracking the path prefix of nodes during commit
- use ethdb.Database for constructing trie.Database. This is not necessary right now, but it is required later by the path-based scheme to open the reverse diff freezer (see the sketch below)
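Purely to illustrate the idea of the abstraction, here is a hypothetical sketch; this is not the interface added by the PR, just the general shape of "how are trie nodes keyed in the database":

```go
package main

import "fmt"

// nodeScheme is a hypothetical sketch of the abstraction: it decides how a
// trie node is keyed in the database.
type nodeScheme interface {
	// Key returns the database key for a node, given its path in the trie
	// and its hash.
	Key(path []byte, hash []byte) []byte
}

// hashScheme keys nodes by their hash: the historical behaviour.
type hashScheme struct{}

func (hashScheme) Key(path, hash []byte) []byte { return hash }

// pathScheme keys nodes by their path instead, so updating the same path
// overwrites the old node rather than leaving it behind.
type pathScheme struct{}

func (pathScheme) Key(path, hash []byte) []byte { return append([]byte("trie-"), path...) }

func main() {
	var s nodeScheme = hashScheme{}
	fmt.Printf("%x\n", s.Key([]byte{1, 2}, []byte{0xab, 0xcd}))
}
```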
This PR makes it easier to sign EIP-712 typed data via the accounts.Wallet API, by using the mimetype for typed data.
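Usage looks roughly like this; the SignData signature and the typed-data mimetype constant are assumptions about the API shape, not copied from the PR:

```go
package example

import "github.com/ethereum/go-ethereum/accounts"

// signTyped sketches signing EIP-712 typed data through the wallet API.
// The mimetype tells the wallet how to interpret the payload; the exact
// constant and method signature here are assumptions.
func signTyped(wallet accounts.Wallet, account accounts.Account, typedDataHash []byte) ([]byte, error) {
	return wallet.SignData(account, accounts.MimetypeTypedData, typedDataHash)
}
```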
Co-authored-by: nasdf <keenan.nemetz@gmail.com>
While investigating #22374, I noticed that the Sync operation of the
freezer does not take the table lock. It also doesn't call sync for all files
if there is an error with one of them. I doubt this will fix anything, but
didn't want to drop the fix on the floor either.
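The shape of the fix, sketched with a hypothetical stand-in type rather than the real freezer: hold the lock for the whole Sync, and try to sync every table even if one of them fails.

```go
package main

import (
	"errors"
	"os"
	"sync"
)

// freezerLike is a hypothetical stand-in, not the real freezer code.
type freezerLike struct {
	lock   sync.RWMutex
	tables map[string]*os.File
}

func (f *freezerLike) Sync() error {
	f.lock.RLock()
	defer f.lock.RUnlock()

	var errs []error
	for _, tab := range f.tables {
		if err := tab.Sync(); err != nil {
			errs = append(errs, err) // keep going, collect every failure
		}
	}
	return errors.Join(errs...) // nil when errs is empty
}

func main() {
	f := &freezerLike{tables: map[string]*os.File{}}
	_ = f.Sync()
}
```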
This removes an RPC test which takes > 90s to execute, and updates the
internal/guide tests to use lighter scrypt parameters.
Co-authored-by: Felix Lange <fjl@twurst.com>
This adds a way to specify HTTP headers per request.
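Usage is roughly as follows, assuming the headers are carried in the request context; the helper name is an assumption:

```go
package example

import (
	"context"
	"net/http"

	"github.com/ethereum/go-ethereum/rpc"
)

// callWithHeaders attaches extra HTTP headers to a single request by placing
// them in the context. The exact helper name is an assumption.
func callWithHeaders(client *rpc.Client) error {
	h := http.Header{}
	h.Set("X-Trace-Id", "abc123")
	ctx := rpc.NewContextWithHeaders(context.Background(), h)

	var version string
	return client.CallContext(ctx, &version, "web3_clientVersion")
}
```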
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Felix Lange <fjl@twurst.com>
rpc: fix connection tracking in Server
When upgrading to mapset/v2 with generics, the set element type used in
rpc.Server had to be changed to *ServerCodec because ServerCodec is not
'comparable'. While the distinction is technically correct, we know all
possible ServerCodec types, and all of them are comparable. So just use
a map instead.
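For illustration, the map-as-set approach looks like this (hypothetical names, not the rpc.Server code): interface values are valid map keys, with equality checked at runtime, even though the interface type did not satisfy the generic comparable constraint required by mapset/v2 here.

```go
package main

import "fmt"

// serverCodec stands in for the real interface; the point is that a plain map
// keyed by the interface type works as a set without pointer indirection.
type serverCodec interface{ remoteAddr() string }

type codecSet struct {
	codecs map[serverCodec]struct{}
}

func (s *codecSet) add(c serverCodec)    { s.codecs[c] = struct{}{} }
func (s *codecSet) remove(c serverCodec) { delete(s.codecs, c) }

type fakeCodec struct{ addr string }

func (f *fakeCodec) remoteAddr() string { return f.addr }

func main() {
	s := &codecSet{codecs: make(map[serverCodec]struct{})}
	c := &fakeCodec{addr: "127.0.0.1:1234"}
	s.add(c)
	fmt.Println(len(s.codecs)) // 1
	s.remove(c)
	fmt.Println(len(s.codecs)) // 0
}
```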
It seems there is no fully typed library implementation of an LRU cache.
So I wrote one. Method names are the same as github.com/hashicorp/golang-lru,
and the new type can be used as a drop-in replacement.
Two reasons to do this:
- It's much easier to understand what a cache is for when the types are right there.
- Performance: the new implementation is slightly faster and performs zero memory
allocations in Add when the cache is at capacity. Overall, memory usage of the cache
is much reduced because keys and values are no longer wrapped in interfaces (a usage sketch follows below).
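Typed usage looks roughly like this; the common/lru package path and the NewCache constructor name are assumptions, with method names following hashicorp's LRU as described above:

```go
package example

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/lru"
	"github.com/ethereum/go-ethereum/core/types"
)

// cacheHeaders sketches typed usage: key and value types are explicit, so Get
// needs no type assertion and values are not boxed in interfaces.
func cacheHeaders(h common.Hash, header *types.Header) {
	cache := lru.NewCache[common.Hash, *types.Header](512)
	cache.Add(h, header)
	if cached, ok := cache.Get(h); ok {
		_ = cached.Number // already *types.Header, no type assertion needed
	}
}
```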
This PR changes the pending tx subscription to return RPCTransaction types instead of normal Transaction objects. This fixes inconsistencies with other tx-returning API methods (e.g. getTransactionByHash), and also fills in the sender value for the tx.
co-authored by @s1na
This fixes a problem in the SizeConstrainedLRU. The SCLRU uses an underlying simple lru which is not thread-safe.
During the Get operation, the recentness of the accessed item is updated, so it is not a pure read operation. Therefore, the mutex we need is a full mutex, not RLock.
This PR changes the mutex to be a regular Mutex, instead of RWMutex, so a reviewer can at a glance see that all affected locations are fixed.
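Sketched with a hypothetical inner type (not the actual code), the point is that Get must take the full lock:

```go
package main

import (
	"fmt"
	"sync"
)

// innerLRU stands in for the non-thread-safe LRU used underneath; its Get
// reorders the recentness list, so it mutates state.
type innerLRU struct{ data map[string][]byte }

func (l *innerLRU) Get(key string) ([]byte, bool) {
	v, ok := l.data[key]
	// (the real LRU would also move 'key' to the front here)
	return v, ok
}

// sizeConstrainedCache is a hypothetical sketch of the fix: a full Mutex,
// not an RWMutex, because Get is not a pure read.
type sizeConstrainedCache struct {
	lock sync.Mutex
	lru  *innerLRU
}

func (c *sizeConstrainedCache) Get(key string) ([]byte, bool) {
	c.lock.Lock() // full lock even on the read path
	defer c.lock.Unlock()
	return c.lru.Get(key)
}

func main() {
	c := &sizeConstrainedCache{lru: &innerLRU{data: map[string][]byte{"k": []byte("v")}}}
	v, _ := c.Get("k")
	fmt.Println(string(v))
}
```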
This changes how we read performance metrics from the Go runtime. Instead
of using runtime.ReadMemStats, we now rely on the API provided by package
runtime/metrics.
runtime/metrics provides more accurate information. For example, the new
interface has better reporting of memory use. In my testing, the reported
value of held memory more accurately reflects the usage reported by the OS.
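For reference, reading from runtime/metrics looks roughly like this (a minimal sketch, not the metrics collection code in geth):

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	// metrics.All() lists every supported metric; here we sample two of them.
	samples := []metrics.Sample{
		{Name: "/memory/classes/total:bytes"},
		{Name: "/sched/latencies:seconds"},
	}
	metrics.Read(samples)

	for _, s := range samples {
		switch s.Value.Kind() {
		case metrics.KindUint64:
			fmt.Println(s.Name, s.Value.Uint64())
		case metrics.KindFloat64Histogram:
			h := s.Value.Float64Histogram()
			fmt.Println(s.Name, "buckets:", len(h.Counts))
		}
	}
}
```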
The semantics of metrics system/memory/allocs and system/memory/frees have
changed to report amounts in bytes. ReadMemStats only reported the count of
allocations in number-of-objects. This is imprecise: 'tiny objects' are not
counted because the runtime allocates them in batches; and certain
improvements in allocation behavior, such as struct size optimizations,
will be less visible when the number of allocs doesn't change.
Changing allocation reports to be in bytes makes it appear in graphs that
lots more is being allocated. I don't think that's a problem because this
metric is primarily interesting for geth developers.
The metric system/memory/pauses has been changed to report statistical
values from the histogram provided by the runtime. Its name in influxdb has
changed from geth.system/memory/pauses.meter to
geth.system/memory/pauses.histogram.
We also have a new histogram metric, system/cpu/schedlatency, reporting the
Go scheduler latency.