* rpc: make subscription test faster
reduces time for TestClientSubscriptionChannelClose
from 25 sec to < 1 sec.
* trie: cache trie nodes for faster sanity check
This reduces the time spent on TestIncompleteSyncHash
from ~25s to ~16s.
* core/forkid: speed up validation test
This takes the validation test from > 5s to sub 1 sec
* core/state: improve snapshot test run
brings the time for TestSnapshotRandom from 13s down to 6s
* accounts/keystore: improve keyfile test
This removes some unnecessary waits and reduces the
runtime of TestUpdatedKeyfileContents from 5 to 3 seconds
* trie: remove resolver
* trie: only check ~5% of all trie nodes
This PR adds server-side limits for JSON-RPC batch requests. Before this change, batches
were limited only by processing time. The server would pick calls from the batch and
answer them until the response timeout occurred, then stop processing the remaining batch
items.
Here, we are adding two additional limits which can be configured:
- the 'item limit': batches can have at most N items
- the 'response size limit': batches can contain at most X response bytes
These limits are optional in package rpc. In Geth, we set a default limit of 1000 items
and 25MB response size.
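A minimal sketch of opting into these limits when embedding package rpc; the setter name below (`SetBatchLimits`) and the exact values mirror the defaults described above and should be read as an assumption, not a definitive API reference:

```go
package main

import "github.com/ethereum/go-ethereum/rpc"

func main() {
	srv := rpc.NewServer()
	defer srv.Stop()

	// Assumed setter: at most 1000 batch items and ~25MB of response bytes.
	// Both limits are optional; leaving them unset keeps the old behavior.
	srv.SetBatchLimits(1000, 25*1000*1000)
}
```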
When a batch goes over the limit, an error response is returned to the client. However,
doing this correctly isn't always possible. In JSON-RPC, only method calls with a valid
`id` can be responded to. Since batches may also contain non-call messages or
notifications, the best we can do to report an error for the batch itself is to report the
limit violation as an error on the first method call in the batch. If a batch is too
large but contains only notifications and responses, the error is reported with a
null `id`.
The RPC client was also changed so it can deal with errors resulting from too-large
batches. An older client connected to the server code in this PR could get stuck
until the request timeout occurred when a batch was too large. **Upgrading to a version
of the RPC client containing this change is strongly recommended to avoid timeout issues.**
For some weird reason, when writing the original client implementation, @fjl worked off of
the assumption that responses could be distributed across batches arbitrarily. So for a
batch request containing requests `[A B C]`, the server could respond with `[A B C]` but
also with `[A B] [C]` or even `[A] [B] [C]` and it wouldn't make a difference to the
client.
So in the implementation of BatchCallContext, the client waited for all requests in the
batch individually. If the server didn't respond to some of the requests in the batch, the
client would eventually just time out (if a context was used).
With the addition of batch limits into the server, we anticipate that people will hit this
kind of error way more often. To handle this properly, the client now waits for a single
response batch and expects it to contain all responses to the requests.
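For illustration, a hedged sketch of how callers observe such failures through BatchCallContext; the endpoint and methods are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // assumed local endpoint
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	batch := []rpc.BatchElem{
		{Method: "eth_blockNumber", Result: new(string)},
		{Method: "eth_chainId", Result: new(string)},
	}
	// If the server rejects the batch (e.g. over the item or size limit),
	// the error surfaces here or on the individual elements below.
	if err := client.BatchCallContext(ctx, batch); err != nil {
		fmt.Println("batch failed:", err)
		return
	}
	for _, elem := range batch {
		if elem.Error != nil {
			fmt.Println(elem.Method, "failed:", elem.Error)
		}
	}
}
```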
---------
Co-authored-by: Felix Lange <fjl@twurst.com>
Co-authored-by: Martin Holst Swende <martin@swende.se>
The change fixes unmarshaling of JSON null results into json.RawMessage.
---------
Co-authored-by: Jason Yuan <jason.yuan@curvegrid.com>
Co-authored-by: Jason Yuan <jason.yuan869@gmail.com>
This changes the error code returned by the RPC server in certain situations:
- handler panic: code -32603
- result marshaling error: code -32603
- attempt to subscribe via HTTP: code -32001
In all of the above cases, the server previously returned the default error
code -32000.
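A short sketch of how client code might branch on these codes, assuming the returned error implements the rpc.Error interface:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

// reportRPCError prints the JSON-RPC error code when one is available.
func reportRPCError(err error) {
	if err == nil {
		return
	}
	rpcErr, ok := err.(rpc.Error)
	if !ok {
		fmt.Println("transport error:", err)
		return
	}
	switch rpcErr.ErrorCode() {
	case -32603:
		fmt.Println("internal error (handler panic or marshaling failure):", rpcErr)
	case -32001:
		fmt.Println("subscriptions not supported on this transport:", rpcErr)
	default:
		fmt.Println("rpc error", rpcErr.ErrorCode(), ":", rpcErr)
	}
}

func main() {
	client, err := rpc.Dial("http://localhost:8545") // assumed local endpoint
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Subscribing over HTTP is rejected; with this change the code is -32001.
	var id string
	err = client.CallContext(context.Background(), &id, "eth_subscribe", "newHeads")
	reportRPCError(err)
}
```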
Co-authored-by: Nicholas Zhao <nicholas.zhao@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
This enables the following linters
- typecheck
- unused
- staticcheck
- bidichk
- durationcheck
- exportloopref
- gosec
With a few exceptions.
- We use a deprecated protobuf in trezor. I didn't want to mess with that, since I cannot meaningfully test any changes there.
- The deprecated TypeMux is used in a few places still, so the warning for it is silenced for now.
- Using a plain string as the key type in context.WithValue is apparently wrong; one should use a custom key type to prevent collisions between different places in the hierarchy of callers (see the sketch after this list). That should be fixed at some point, but may require some attention.
- The warnings about using a weak random generator are squashed, since we use randomness in many places without needing cryptographic guarantees.
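As an illustration of the context-key point above, a minimal sketch using an unexported key type (the names ctxKey and requestIDKey are made up):

```go
package main

import (
	"context"
	"fmt"
)

// ctxKey is an unexported type for context keys, so values stored here cannot
// collide with keys defined by other packages in the call hierarchy.
type ctxKey int

const requestIDKey ctxKey = 0

func main() {
	ctx := context.WithValue(context.Background(), requestIDKey, "req-42")

	// Retrieval uses the same typed key; a plain string "requestID" from
	// another package would not match.
	if id, ok := ctx.Value(requestIDKey).(string); ok {
		fmt.Println("request id:", id)
	}
}
```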
This fixes a rare issue where the client subscription forwarding loop
would attempt to send on the subscription's channel after Unsubscribe has
returned, leading to a panic if the subscription channel was already
closed by the user. Example:
```go
sub, _ := client.Subscribe(..., channel, ...)
sub.Unsubscribe()
close(channel)
```
The race occurred because Unsubscribe called quitWithServer to tell the
forwarding loop to stop sending on sub.channel, but did not wait for the
loop to actually exit. This is fixed by adding an additional channel
to track the shutdown, on which Unsubscribe now waits.
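The general shape of that fix, sketched with illustrative names (forwardLoop, quit and unsubDone are not the actual identifiers):

```go
package main

import "fmt"

// sub is a simplified stand-in for a client subscription; the fields are
// illustrative, not the real implementation.
type sub struct {
	channel   chan int      // user-supplied channel
	quit      chan struct{} // tells the forwarding loop to stop
	unsubDone chan struct{} // closed once the loop has fully exited
}

// forwardLoop delivers incoming values until told to quit.
func (s *sub) forwardLoop(in <-chan int) {
	defer close(s.unsubDone)
	for {
		select {
		case v := <-in:
			select {
			case s.channel <- v:
			case <-s.quit:
				return
			}
		case <-s.quit:
			return
		}
	}
}

// Unsubscribe signals the loop and, crucially, waits for it to exit, so the
// caller may close(s.channel) immediately afterwards without risking a send
// on a closed channel.
func (s *sub) Unsubscribe() {
	close(s.quit)
	<-s.unsubDone
}

func main() {
	in := make(chan int)
	s := &sub{channel: make(chan int), quit: make(chan struct{}), unsubDone: make(chan struct{})}
	go s.forwardLoop(in)
	s.Unsubscribe()
	close(s.channel) // safe: the forwarding loop is guaranteed to be gone
	fmt.Println("done")
}
```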
Fixes #22322
* internal/ethapi: return revert reason for eth_call
* internal/ethapi: moved revert reason logic to doCall
* accounts/abi/bind/backends: added revert reason logic to simulated backend
* internal/ethapi: fixed linting error
* internal/ethapi: check if require reason can be unpacked
* internal/ethapi: better error logic
* internal/ethapi: simplify logic
* internal/ethapi: return vmError()
* internal/ethapi: move handling of revert out of docall
* graphql: removed revert logic until spec change
* rpc: internal/ethapi: added custom error types
* graphql: use returndata instead of return
Return() checks if there is an error. If an error is found, we return nil.
For most use cases it can be beneficial to return the output even if there
was an error. This code should be changed anyway once the spec supports
error reasons in graphql responses
* accounts/abi/bind/backends: added tests for revert reason
* internal/ethapi: add errorCode to revert error
* internal/ethapi: add errorCode of 3 to revertError
* internal/ethapi: unified estimateGasErrors, simplified logic
* internal/ethapi: unified handling of errors in DoEstimateGas
* rpc: print error data field
* accounts/abi/bind/backends: unify simulatedBackend and RPC
* internal/ethapi: added binary data to revertError data
* internal/ethapi: refactored unpacking logic into newRevertError
* accounts/abi/bind/backends: fix EstimateGas
* accounts, console, internal, rpc: minor error interface cleanups
* Revert "accounts, console, internal, rpc: minor error interface cleanups"
This reverts commit 2d3ef53c5304e429a04983210a417c1f4e0dafb7.
* re-apply the good parts of 2d3ef53c53
* rpc: add test for returning server error data from client
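Taken together, a client can recover the revert reason from the error's data field. A hedged sketch follows; the hex-string shape of the data payload is an assumption here, and abi.UnpackRevert decodes the standard Error(string) encoding:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545") // assumed local endpoint
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// A call that is expected to revert; the call object here is illustrative.
	var result hexutil.Bytes
	err = client.CallContext(context.Background(), &result, "eth_call", map[string]interface{}{
		"to":   "0x0000000000000000000000000000000000000000",
		"data": "0x",
	}, "latest")
	if err == nil {
		return
	}

	// Revert errors carry the raw return data in the error's data field.
	if de, ok := err.(rpc.DataError); ok {
		if hexData, ok := de.ErrorData().(string); ok { // assumed hex-string encoding
			if raw, decErr := hexutil.Decode(hexData); decErr == nil {
				if reason, unpackErr := abi.UnpackRevert(raw); unpackErr == nil {
					fmt.Println("revert reason:", reason)
				}
			}
		}
	}
}
```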
Co-authored-by: rjl493456442 <garyrong0905@gmail.com>
Co-authored-by: Péter Szilágyi <peterke@gmail.com>
Co-authored-by: Felix Lange <fjl@twurst.com>
The leaks were mostly in unit tests, and could all be resolved by
adding suitably-sized channel buffers or by restructuring the test
to not send on a channel after an error has occurred.
There is an unavoidable goroutine leak in Console.Interactive: when
we receive a signal, the line reader cannot be unblocked and will get
stuck. This leak is now documented and I've tried to make it slightly
less bad by adding a one-element buffer to the output channels of
the line-reading loop. Should the reader eventually awake from its
blocked state (i.e. when stdin is closed), at least it won't get stuck
trying to send to the interpreter loop which has quit long ago.
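The mitigation is the usual one-element-buffer trick; a simplified sketch (not the actual console code) of why the buffer makes the leak less bad:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// One element of buffer: if the consumer below has already returned, the
	// reader's next send still completes instead of blocking forever.
	lines := make(chan string, 1)

	go func() {
		scanner := bufio.NewScanner(os.Stdin)
		for scanner.Scan() {
			lines <- scanner.Text()
		}
		close(lines)
	}()

	// Consumer; in Console.Interactive this loop can return early on a signal.
	for line := range lines {
		if line == "exit" {
			return // the reader's next send is absorbed by the buffer
		}
		fmt.Println("got:", line)
	}
}
```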
Co-authored-by: Felix Lange <fjl@twurst.com>
* rpc: remove 'exported or builtin' restriction for parameters
There is no technical reason for this restriction because package reflect
can create values of any type. Requiring parameters and return values to
be exported causes a lot of noise in package exports (see the sketch after this list).
* rpc: fix staticcheck warnings
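To illustrate the reflect point above, a small sketch (the args type is made up) showing that reflect.New can instantiate an unexported type, which is all the codec needs to decode parameters:

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// args is an unexported type; its fields are exported so encoding/json can
// still populate them.
type args struct {
	X, Y int
}

func main() {
	// Create a *args purely from its reflect.Type, as an RPC codec would
	// when decoding parameters for a handler.
	val := reflect.New(reflect.TypeOf(args{}))
	if err := json.Unmarshal([]byte(`{"X": 1, "Y": 2}`), val.Interface()); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", val.Elem().Interface()) // {X:1 Y:2}
}
```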
This PR updates a comment about the maximum client subscription buffer
to reflect changes made previously, and fixes a test that wouldn't fail
when wantError == true but execution did not return an error.
New APIs added:
```go
client.RegisterName(namespace, service) // makes service available to server
client.Notify(ctx, method, args...)     // sends a notification
ClientFromContext(ctx)                  // to get a client in handler method
```
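A hedged usage sketch of these APIs; the service, method names and endpoint are illustrative only:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

// PingService is a hypothetical service used to show the new APIs.
type PingService struct{}

// Ping answers a call and pushes a notification back to the calling peer.
func (s *PingService) Ping(ctx context.Context) (string, error) {
	// ClientFromContext returns a client connected to the peer that sent
	// this request, enabling server-to-client notifications.
	if client, ok := rpc.ClientFromContext(ctx); ok {
		_ = client.Notify(ctx, "ping_event", "pong sent")
	}
	return "pong", nil
}

func main() {
	client, err := rpc.Dial("ws://localhost:8546") // assumed websocket endpoint
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// RegisterName exposes the service to the remote side of the connection.
	if err := client.RegisterName("ping", new(PingService)); err != nil {
		panic(err)
	}

	// Notify sends a JSON-RPC notification (no "id", no response expected).
	if err := client.Notify(context.Background(), "ping_event", "hello"); err != nil {
		fmt.Println("notify failed:", err)
	}
}
```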
This is essentially a rewrite of the server-side code. JSON-RPC
processing code is now the same on both server and client side. Many
minor issues were fixed in the process and there is a new test suite for
JSON-RPC spec compliance (and non-compliance in some cases).
List of behavior changes:
- Method handlers are now called with a per-request context instead of a
per-connection context. The context is canceled right after the method
returns.
- Subscription error channels are always closed when the connection
ends. There is no need to also wait on the Notifier's Closed channel
to detect whether the subscription has ended.
- Client now omits "params" instead of sending "params": null when there
are no arguments to a call. The previous behavior was not compliant
with the spec. The server still accepts "params": null.
- Floating point numbers are allowed as "id". The spec doesn't allow
them, but we handle request "id" as json.RawMessage and guarantee that
the same number will be sent back.
- Logging is improved significantly. There is now a message at DEBUG
level for each RPC call served.
This commit introduces a network simulation framework which
can be used to run simulated networks of devp2p nodes. The
intention is to use this for testing protocols, performing
benchmarks and visualising emergent network behaviour.
Currently, HTTP CORS domains and websocket origins are a comma-separated string in the
config object. These are replaced with string arrays, which are more expressive when
using a config file.
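A brief sketch of what the config object looks like with arrays; the field names (HTTPCors, WSOrigins) reflect the current layout and are assumptions here:

```go
package main

import "github.com/ethereum/go-ethereum/node"

func main() {
	// String arrays instead of a comma-separated string.
	cfg := node.Config{
		HTTPCors:  []string{"https://example.com", "https://app.example.org"},
		WSOrigins: []string{"https://example.com"},
	}
	_ = cfg
}
```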
There is no need to depend on the old context package now that the
minimum Go version is 1.7. The move to "context" eliminates our weird
vendoring setup. Some vendored code still uses golang.org/x/net/context
and it is now vendored in the normal way.
This change triggered new vet checks around context.WithTimeout which
didn't fire with golang.org/x/net/context.
I initially made the client block if the 100-element buffer was
exceeded. It turns out that this is inconvenient for simple uses of the
client which subscribe and perform calls on the same goroutine, e.g.
```go
client, _ := rpc.Dial(...)
ch := make(chan int) // note: no buffer
sub, _ := client.EthSubscribe(ch, "something")
for event := range ch {
    client.Call(...)
}
```
This innocent-looking code will lock up if the server suddenly decides
to send 2000 notifications. In this case, the client's main loop won't
accept the call because it is trying to deliver a notification to ch.
The issue is kind of hard to explain in the docs and few people will
actually read them. Buffering is the simple option and works with close
to no overhead for subscribers that always listen.