This PR adds server-side limits for JSON-RPC batch requests. Before this change, batches
were limited only by processing time. The server would pick calls from the batch and
answer them until the response timeout occurred, then stop processing the remaining batch
items.
Here, we are adding two additional limits which can be configured:
- the 'item limit': batches can have at most N items
- the 'response size limit': batches can contain at most X response bytes
These limits are optional in package rpc. In Geth, we set a default limit of 1000 items
and 25MB response size.
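For illustration, a minimal sketch of setting these limits on a standalone server in package rpc, assuming the `SetBatchLimits` helper this PR introduces (the values mirror Geth's defaults):

```go
package main

import "github.com/ethereum/go-ethereum/rpc"

func main() {
	srv := rpc.NewServer()
	defer srv.Stop()

	// At most 1000 items per batch, and at most 25MB of response data
	// across all calls in a batch.
	srv.SetBatchLimits(1000, 25*1000*1000)
}
```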
When a batch goes over the limit, an error response is returned to the client. However,
doing this correctly isn't always possible. In JSON-RPC, only method calls with a valid
`id` can be responded to. Since batches may also contain non-call messages or
notifications, the best we can do to report an error for the batch itself is to report
the limit violation as an error on the first method call in the batch. If a batch is
too large but contains only notifications and responses, the error is reported with
a null `id`.
The RPC client was also changed so it can deal with errors resulting from too-large
batches. An older client connected to the server code in this PR could get stuck
until the request timeout occurred if a batch was too large. **Upgrading to a version
of the RPC client containing this change is strongly recommended to avoid timeout issues.**
For some weird reason, when writing the original client implementation, @fjl worked off of
the assumption that responses could be distributed across batches arbitrarily. So for a
batch request containing requests `[A B C]`, the server could respond with `[A B C]` but
also with `[A B] [C]` or even `[A] [B] [C]` and it wouldn't make a difference to the
client.
So in the implementation of BatchCallContext, the client waited for all requests in the
batch individually. If the server didn't respond to some of the requests in the batch, the
client would eventually just time out (if a context was used).
With the addition of batch limits on the server, we anticipate that people will hit this
kind of error much more often. To handle it properly, the client now waits for a single
response batch and expects it to contain all responses to the requests.
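A hedged usage example with the package's `BatchCallContext` and `BatchElem` (endpoint and method names are illustrative): a failure affecting the whole batch is returned directly, and per-call errors land in `BatchElem.Error`, instead of the client hanging until timeout.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // endpoint is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	batch := []rpc.BatchElem{
		{Method: "eth_blockNumber", Result: new(string)},
		{Method: "eth_chainId", Result: new(string)},
	}
	// One request batch, one response batch. A batch-level problem (e.g.
	// the server's item limit was exceeded) surfaces as an error here or
	// on the affected elements.
	if err := client.BatchCallContext(context.Background(), batch); err != nil {
		log.Fatal(err)
	}
	for _, elem := range batch {
		if elem.Error != nil {
			fmt.Println(elem.Method, "failed:", elem.Error)
		}
	}
}
```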
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
* core/txpool: abstraction prep work for secondary pools (blob pool)
* core/txpool: leave subpool concepts to a followup pr
* les: fix tests using hard coded errors
* core/txpool: use bitmaps instead of maps for tx type filtering (see the sketch after this list)
* cmd/evm: make evm blocktest output logs if so instructed
* Apply suggestions from code review
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
---------
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
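A minimal sketch of the bitmap idea from the tx type filtering item above (names are illustrative, not geth's): set membership becomes one shift-and-mask on a `uint64` instead of a map lookup.

```go
package main

import "fmt"

// typeFilter packs the accepted transaction types into a single uint64:
// bit i is set iff type i is accepted. This covers type IDs 0..63; in Go,
// shifts by 64 or more yield 0, so accepts returns false for larger IDs.
type typeFilter uint64

func newTypeFilter(types ...uint8) typeFilter {
	var f typeFilter
	for _, t := range types {
		f |= 1 << t
	}
	return f
}

func (f typeFilter) accepts(t uint8) bool {
	return f&(1<<t) != 0
}

func main() {
	legacyAndBlob := newTypeFilter(0x00, 0x03)
	fmt.Println(legacyAndBlob.accepts(0x03)) // true
	fmt.Println(legacyAndBlob.accepts(0x02)) // false
}
```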
This changes the journal logic to mark the state object dirty immediately when it
is reset.
We're mostly adding this change to appease the fuzzer. Marking the object dirty immediately
makes no difference in practice because accounts will always be modified by the EVM
right after creation.
Continuing the series of PRs to make the Trie interface more generic, this PR moves
the RLP encoding of storage slots into the StateTrie and light.Trie implementations,
since other trie types don't use RLP.
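As an illustration of what moves, here's a hedged sketch of the encoding now done inside the StateTrie/light.Trie storage update path (the helper name is hypothetical; the trim-then-RLP-encode shape matches how storage values are stored in the state):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/ethereum/go-ethereum/rlp"
)

// encodeStorageValue is a hypothetical stand-in for the encoding that now
// lives inside the trie implementations: leading zero bytes are trimmed and
// the remainder is RLP-encoded before insertion, so callers of the generic
// Trie interface only ever deal in raw slot bytes.
func encodeStorageValue(value []byte) ([]byte, error) {
	return rlp.EncodeToBytes(bytes.TrimLeft(value, "\x00"))
}

func main() {
	enc, err := encodeStorageValue([]byte{0x00, 0x00, 0x12, 0x34})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", enc) // 821234
}
```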
* p2p/discover: remove ReadRandomNodes
Even though it's public, this method is not callable by code outside of
package p2p/discover because one can't get a valid instance of Table.
* p2p/discover: add Table.Nodes
* p2p/discover: make Table settings configurable
In unit tests and externally developed cmd/devp2p test runs, it can be
useful to tune the timer intervals used by Table.
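A sketch of the configurability pattern (field names are assumptions, not the exact exported API): the intervals move from package constants into a config struct, with zero values falling back to the previous defaults so existing callers are unaffected.

```go
package main

import (
	"fmt"
	"time"
)

// tableConfig is an illustrative stand-in for the now-tunable Table settings.
type tableConfig struct {
	PingInterval    time.Duration // how often liveness checks run
	RefreshInterval time.Duration // how often the table is refreshed
}

// withDefaults fills unset fields so zero-value configs keep the old behavior.
func (cfg tableConfig) withDefaults() tableConfig {
	if cfg.PingInterval == 0 {
		cfg.PingInterval = 10 * time.Second
	}
	if cfg.RefreshInterval == 0 {
		cfg.RefreshInterval = 30 * time.Minute
	}
	return cfg
}

func main() {
	// A test can shrink the intervals to make timer-driven behavior fast.
	cfg := tableConfig{PingInterval: 50 * time.Millisecond}.withDefaults()
	fmt.Println(cfg.PingInterval, cfg.RefreshInterval)
}
```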
Drop the notion of uncles, and disable activities while syncing:
- Disable activities (e.g. generating pending state) while the node is syncing,
- Disable empty block submission (but the empty block is still kept for payload building),
- Drop the uncle notion (since ethash is already deprecated)
Deserialize hex keys early to shortcut on invalid input, and re-use the account storageTrie for each proof in the account, preventing repeated deep-copying of the trie.
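A hedged sketch of the "deserialize early" half (the function name is hypothetical; `hexutil.Decode` is the real helper): all keys are decoded up front, so a bad key fails the whole request before any trie is opened or copied.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// decodeStorageKeys validates and decodes every hex key before any trie work.
func decodeStorageKeys(hexKeys []string) ([][]byte, error) {
	keys := make([][]byte, len(hexKeys))
	for i, hk := range hexKeys {
		b, err := hexutil.Decode(hk)
		if err != nil {
			return nil, fmt.Errorf("invalid storage key %q: %w", hk, err)
		}
		keys[i] = b
	}
	return keys, nil
}

func main() {
	keys, err := decodeStorageKeys([]string{"0x01", "0xnope"})
	fmt.Println(keys, err) // the second key fails fast, before any trie access
}
```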
Closes #27308
---------
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Marius van der Wijden <m.vanderwijden@live.de>
eth: make StorageRangeAt take a block hash or number
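A hedged sketch of the parameter change; `rpc.BlockNumberOrHash` and its constructors are real helpers in package rpc, while the hash value is just an example:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// StorageRangeAt now takes an rpc.BlockNumberOrHash, built either from
	// a block number or from a block hash.
	byNumber := rpc.BlockNumberOrHashWithNumber(rpc.LatestBlockNumber)
	byHash := rpc.BlockNumberOrHashWithHash(common.HexToHash("0x1234"), false)
	fmt.Printf("%+v\n%+v\n", byNumber, byHash)
}
```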
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
RPC methods `eth_getHeaderBy*` returned a size value that was meant for internal
processes. Please use the `size` field returned by `eth_getBlockBy*` instead if you're
interested in the RLP-encoded storage size of the block.
Signed-off-by: jsvisa <delweng@gmail.com>
This change implements async log retrieval by feeding logs into channels instead of returning slices. This is a first step towards implementing #15063.
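A minimal sketch of the slice-to-channel shift (types and names are illustrative, not the actual filter API): logs are streamed as they are found, and cancellation stops the producer early.

```go
package main

import (
	"context"
	"fmt"
)

// Log stands in for types.Log in this sketch.
type Log struct{ BlockNumber uint64 }

// streamLogs returns matches over a channel instead of a slice, so callers
// consume logs as they arrive and a cancelled context stops work early.
func streamLogs(ctx context.Context, logs []*Log) <-chan *Log {
	out := make(chan *Log)
	go func() {
		defer close(out)
		for _, l := range logs {
			select {
			case out <- l:
			case <-ctx.Done():
				return // caller gave up; stop producing
			}
		}
	}()
	return out
}

func main() {
	for l := range streamLogs(context.Background(), []*Log{{1}, {2}}) {
		fmt.Println(l.BlockNumber)
	}
}
```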
---------
Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Sina Mahmoodi <itz.s1na@gmail.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
Co-authored-by: Sina Mahmoodi <1591639+s1na@users.noreply.github.com>